
SimpleChat: a simple and dumb web front end for testing /chat/completions and /completions end points and try chat #7350

Merged
Merged (64 commits) May 22, 2024

Conversation

hanishkvc (Contributor) commented May 17, 2024

Hi @ggerganov, @ngxson, @mofosyne

As mentioned in the comment section of my other PR #6834, the web front end bundled with examples/server doesn't currently use the /chat/completions endpoint; it only generates requests for the /completions endpoint. The existing web front end code is also more involved/complicated than needed for minimal testing.

This simple web front end allows triggering/testing the server's /completions or /chat/completions endpoints in a simple way, with minimal code from a common code base, and also supports maintaining a basic back-and-forth chat to an extent.

NOTE: Given that the idea is basic minimal testing, it doesn't bother with the model's context length or with culling old messages from the chat. Initially I hadn't added an input for a system prompt [update: system prompt support has since been added].

usage

first, run examples/server:

  • bin/server -m path/model.gguf

next, run this web front end from examples/server/public_simplechat:

  • ./simplechat.sh
  • this uses python3's http.server to host the web front end

Then open this simple web front end in your local browser, at the address noted in the message printed when simplechat.sh is run.
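To make the two endpoints concrete, here is a hedged sketch of the request body such a front end would send; the field shapes follow the OpenAI-style API that examples/server exposes, while the model name and prompt-flattening convention below are illustrative placeholders, not taken from this PR's code.

```javascript
// Hypothetical /chat/completions request body: an array of {role, content}
// messages. The model name is a placeholder; the server uses the loaded model.
const chatRequest = {
    model: "local-model",
    messages: [
        { role: "system", content: "You are a helpful assistant." },
        { role: "user", content: "Hello" },
    ],
};

// For the /completions endpoint, the same chat can be flattened into a single
// prompt string. The "role: content" convention here is an assumption for
// illustration only.
function chatToPrompt(messages) {
    return messages.map((m) => `${m.role}: ${m.content}`).join("\n") + "\n";
}
```

The point of sharing one code base for both modes is that the same message array backs both request shapes; only the final serialization differs.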

The commits so far cover, among other things:

  • A skeletal html page: a div placeholder for showing the chat messages so far, a text input allowing the user to enter the next chat message/query to the model, and a submit button to send the user's message and the chat so far to the model.

  • A SimpleChat js class: maintains an array of chat messages, allows adding a chat message from any of the roles (be it system, user, assistant, ...), and shows the chat messages so far in a given div element.

  • A Role class with static members corresponding to the roles.

  • startme updated to get hold of the ui elements and attach a click handler to the submit button, which adds the user input to the xchats array and shows the chat messages so far in the chat div element. DOMContentLoaded is trapped to trigger startme.

  • A default message shown to the user, plus some metas.

  • A simple shell script using python3's http.server, so one runs the llm server locally, then runs this script, and accesses the page using a local browser.

  • Enter-key handling, so the user can either press the submit button or press the enter key; chat is also now the default selection wrt mode.

  • Reading the assistant's response from the appropriate field in the response received. Also, examples/server seems to return the response in a slightly different field, so that is accounted for as well.
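The "slightly different field" point can be sketched as a small extractor that tolerates both response shapes; the exact field names below (`choices[0].message.content`, `choices[0].text`, plain `content`) are assumptions based on the OpenAI-style API and the commit message above, not a verified copy of the PR's code.

```javascript
// Hedged sketch: pull the assistant's reply out of whichever field the
// server used. Chat mode, completion mode, and a plain `content` fallback.
function extractResponse(resp) {
    if (Array.isArray(resp.choices) && resp.choices.length > 0) {
        const c = resp.choices[0];
        if (c.message !== undefined) return c.message.content; // /chat/completions
        if (c.text !== undefined) return c.text;               // /completions
    }
    return resp.content ?? ""; // examples/server-style field, assumed
}
```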
hanishkvc (Contributor, Author):

Have added support for a system prompt.

Also:

  • allow switching the chat session optionally, wrt some of the related helpers; set up two chat sessions by default.

  • re-enable user input only after the response to a user query has been updated in the chat div. This ensures that if the user tries to switch chat sessions, it won't be allowed until the chat request-response flow is done.

  • a helper to get the latest system prompt, which in turn is used to set the system prompt ui when switching. Ensure the system prompt is set if and when the enter key is pressed. Also a general helper for setting the class of children.

  • show the system prompt in the chat space when it is set by pressing enter, as feedback to the user. Alert the user if they try to switch chat sessions in the middle of waiting for a response from the ai model.

  • a general create-button helper; also fix an oversight wrt using stale data for the list of chat sessions.

  • skip NewChat if the user cancels, or if one is waiting for a response from the ai model. Don't show a chat with a newly received ai model response if the current chat session has somehow changed; the chat session shouldn't be allowed to change while there is a pending response, but this acts as an additional sanity check.

hanishkvc commented May 21, 2024

@mofosyne @ggerganov @ngxson

I have further structured the code and added support for multiple independent chats; it is ready for merging now.

It implements a responsive design that adapts to laptop/mobile/tablet screen resolutions and orientations implicitly, simply by being structured around basic html+css flex-columns of flex-rows. It uses a scrolled view/div in normal usage, while in parallel allowing unrolled multi-page printing, by implicitly switching the document to auto height (thus bypassing the overflow-scroll style entry) in print mode.

The whole idea is to keep things simple without using any modules/frameworks/etc, and instead build on what is already available with basic html/js/css, which is good enough with current standards and browsers.

Currently this is placed independent of the existing web front end, in its own folder under examples/server/public_simplechat. We can either retain it there or move it into public/chatui.

Let's keep this as it is, with its simple and straightforward html/js/css use, without changing/refactoring it to use other modules like preact et al. That should also keep it simple for anyone to modify in future, independent of any framework conventions, changes, or replacements to them.

However, if in future ngxson or anyone else wants to refactor it wrt preact or the like, I would suggest then keeping this pure html/css/js based version as-is in parallel, in a separate folder like the current examples/server/public_simplechat.

NOTE: Input for anyone looking at refactoring a copy in future:

  • The core chat [role+content] set is maintained by the basic SimpleChat class, including mapping it into the prompt or messages api-equivalent json. It also provides the logic for setting the system prompt either only-once-at-beginning or anytime, as well as getting the latest system prompt wrt a chat session.

  • MultiChatUI's handle_user_submit takes care of getting the request json from the current/needed SimpleChat instance, handshaking with the server using fetch, extracting the model response, and updating the SimpleChat instance.

  • MultiChatUI also maintains a map/dict/object of multiple chat sessions (SimpleChat instances), as well as managing the html UI elements, in terms of switching between chat sessions. [This is the addition done in the last 24 hours, after ggerganov reviewed things.]
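The SimpleChat core described above can be sketched roughly as below; the class and array names (SimpleChat, xchats) come from the PR description, while the individual method names and the "role: content" prompt-flattening convention are illustrative assumptions, not the actual implementation.

```javascript
// Minimal sketch of the per-session chat core: an array of {role, content}
// entries, mapped into either request shape, plus latest-system-prompt lookup.
class SimpleChat {
    constructor() {
        this.xchats = []; // the chat [role+content] set for this session
    }
    add(role, content) {
        this.xchats.push({ role, content });
    }
    // /chat/completions shape: the messages array as-is.
    requestMessages() {
        return { messages: this.xchats };
    }
    // /completions shape: flatten the chat into one prompt string (assumed format).
    requestPrompt() {
        return { prompt: this.xchats.map((m) => `${m.role}: ${m.content}`).join("\n") };
    }
    // Latest system prompt wrt this chat session, as described above.
    latestSystem() {
        for (let i = this.xchats.length - 1; i >= 0; i--) {
            if (this.xchats[i].role === "system") return this.xchats[i].content;
        }
        return "";
    }
}
```

A MultiChatUI-style wrapper would then hold a dictionary of such instances keyed by session id and swap the UI between them.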

@hanishkvc hanishkvc marked this pull request as ready for review May 21, 2024 21:14

mofosyne commented May 22, 2024

CI failing noted

features/results.feature:75 consistent results with same seed and varying batch size -- @1.3


Replication attempt

cmake -B build
cmake --build build --config Release
./build/bin/server --hf-repo TheBloke/phi-2-GGUF --hf-file phi-2.Q6_K.gguf --path ./examples/server/public_simplechat

[screenshot of the SimpleChat UI]

Am I supposed to put something in the system line?

mofosyne added the "merging soon" label (Will merge soon unless anyone objects) May 22, 2024
mofosyne (Collaborator):

If the CI issue is more due to what's happening in the main branch and not within this PR, then I'll merge it after the CI in main branch is stable.


hanishkvc commented May 22, 2024

@mofosyne

system prompt

It's up to the user whether they want to set a system prompt or not.

If one enters a system prompt or changes it, it will be sent to the server, and in turn the model, as system-role content, from the moment it is entered into the chat history. The attached image shows a model responding with and without a system prompt: the 1st response is before a system prompt was set; the 2nd response, in the same chat session, is after a prompt was set.

with mistral-7b-instruct-v0.2.Q4_K_M.gguf

[Screenshot: SimpleChat with and without system prompt, 2024-05-22 IST 15:11]

CI issue

I have fetched the upstream and verified this PR against examples/server from the latest upstream, and everything is working as expected. That CI failure seems decoupled from this PR.


mofosyne commented May 22, 2024

Fair enough; I guess I was a bit confused by the statement that the system prompt must be filled in.

Something like an input placeholder attribute with the text you suggested above could help. @hanishkvc

e.g. "Optional system prompt. e.g. you are a drunk poet"

https://www.w3schools.com/TAGS/att_input_placeholder.asp


Thanks for confirming that the PR is most likely decoupled from main branch CI issue, will keep it in mind


hanishkvc commented May 22, 2024

Hi @mofosyne @ngxson @ggerganov

possible issue with the CI test

If the phi2 line which mofosyne mentioned above is the one related to the CI issue, i.e.

./build/bin/server --hf-repo TheBloke/phi-2-GGUF --hf-file phi-2.Q6_K.gguf --path ./examples/server/public_simplechat

then, because llama.cpp's chat-apply-template currently doesn't support phi2 directly, the server, and in turn its oaicompat_completion_params_parse, will generate a wrongly templated chat message, which will be tokenized and sent to the ai llm model. This is independent of this PR; any code, be it the server or some 3rd-party program, that uses llama.cpp's chat-apply-template will have the same issue.

So the phi2 model may generate a response which, depending on the query, either stays within sane limits beyond the required text completion/chatting, or may get into a runaway mode and get stuck for some time.

Also, given that this specific model is a base model and not a chat or instruct model, it will in any case be prone to generating text beyond what is asked, which can also create problems.

ideal solution, independent of this CI issue

So either one will have to hard-code the phi2 chat template into chat-apply-template in current llama.cpp, OR implement a new jinja-template-based flow, OR, if one is using my other PR #6834, one will be able to specify phi2's chat template info in the chaton_meta.json file and in turn get things working as needed.

Even that in itself may not be enough to expect non-runaway answers, as this specific model is a base model and not a chat/instruct model.

solution for the CI issue at hand

Independent of anything else: if any CI code is currently using phi2 to test llama.cpp's chat templating, then it is invalid as it stands and needs to be fixed (unless it is a negative test or so). That is, the test needs to be changed so that it doesn't use chat-templating-unsupported phi2 models.


@mofosyne mofosyne merged commit 1e37436 into ggerganov:master May 22, 2024
16 of 17 checks passed

Hi @mofosyne @ngxson @ggerganov

update / a correction

I need to partly correct my previous comment. I assume mofosyne was using phi-2 to test the server in general, while the actual CI test was potentially using a different model, and it seems to be testing both the /chat/completions and /completions endpoints. So my comment wrt phi-2 and ci-testing, insofar as it concerns /chat/completions, doesn't hold.

HOWEVER, my original point still holds: this specific CI failure has nothing to do with this PR, and is more an issue between the server in general and the test code around it. So we can safely merge this PR into the main/master branch. Also, this PR doesn't touch server code in any way and sits in its own subdirectory.

local direct check

Also, wrt the server on its own, I have tried testing the configuration which seems to be used by the failing ci-test:

cmake .. \
    -DLLAMA_NATIVE=OFF \
    -DLLAMA_BUILD_SERVER=ON \
    -DLLAMA_CURL=ON \
    -DCMAKE_BUILD_TYPE=RelWithDebInfo \
    -DLLAMA_SANITIZE_THREAD=ON

cmake --build . --config RelWithDebInfo -j 4 --target server

Testing that server against a bunch of models (llama3, phi-3, phi-2, mistral), things seem fine wrt the server and this PR in general, in both chat and completion mode.



mofosyne commented May 22, 2024

Merged now, as @hanishkvc has shown that this is unlikely to be an issue.
Regarding further improvements, I think this is still a good base to start from, and it would be ideal to see how it functions in the real world.

Any further changes could go in a new PR, @hanishkvc, but fingers crossed everyone understands it well enough to not need further UI adjustments (e.g. adding placeholder text to the system input).

Thanks for your contribution. You may also want to consider adding a general guide/tutorial to the wiki if you choose to (which is a bit more freeform than the readme in the codebase).


hanishkvc commented May 22, 2024

@mofosyne

I had mentioned, in the note shown to the user, that they can set the system prompt above. However, I had avoided adding a placeholder, because placeholder text is sometimes not dull enough to distinguish from text that has already been entered. I have now added the placeholder system prompt with an example, though.

I have also changed the behaviour wrt Completion mode, such that the code doesn't automatically add the user: prefix; the user thus has full flexibility to add any role-related prefix they may want, and can optionally insert an assistant-related prefix even in completion mode.
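That behaviour change could look roughly like the sketch below; the function name and options object are hypothetical, introduced only to illustrate passing the user's text through verbatim with role prefixes left entirely under the user's control.

```javascript
// Illustrative sketch: completion mode no longer prepends "user: " for the
// user. Any role prefix must be typed by the user; an assistant prefix can
// optionally be appended to steer where the model continues from.
function buildCompletionPrompt(userText, opts = {}) {
    let prompt = userText; // passed through verbatim, no automatic prefix
    if (opts.assistantPrefix) {
        prompt += "\n" + opts.assistantPrefix; // e.g. "assistant:" if desired
    }
    return prompt;
}
```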

Shall I push these additions to this PR itself?

mofosyne (Collaborator):

Already merged, so best as a new PR. Those are smaller quality-of-life changes anyway, so a separate PR suits them now.

hanishkvc (Contributor, Author):

Thanks @mofosyne,

Will create a new PR.

I wanted to be sure I hadn't missed anything; once I found a small difference compared to what I had commented, I wanted to cross-check that things were fine and inform you all, so I missed pushing these earlier.


mofosyne (Collaborator):

Continuation of minor changes and improvements #7480

teleprint-me pushed a commit to teleprint-me/llama.cpp that referenced this pull request May 23, 2024
…ions and /completions end points and try chat (ggerganov#7350)

* SimpleChat: Add a skeletal html page

Contains a div placeholder for showing chat messages till now

a text-input for allowing user to enter next chat message/query
to the model.

a submit button to allow sending of the user entered message and
chat till now to the model.

* SimpleChat: A js skeleton with SimpleChat class

Allows maintaining an array of chat message.

Allows adding chat message (from any of the roles be it system,
user, assistant, ...)

Allows showing chat messages till now, in a given div element.

* SimpleChat: request_json, globals, startme

* SimpleChatJS: Roles Class, submitClick

Define Role class with static members corresponding to the roles.

Update startme to

* Get hold of the ui elements.

* Attach a click handler to submit button, which adds the user input
  to xchats array and shows the chat messages till now in chat div
  element.

Trap DOMContentLoaded to trigger startme

* SimpleChat:HTML: Bring in the js file

* SimpleChat: Rather value wrt input text element

* SimpleChat: Also add completions related prompt

* SimpleChat: Use common helper logic wrt json data

* SimpleChat: Move handling of submit request into its own func

* SimpleChat: Try handshake with llm over its web service endpoint

* SimpleChat:JS: Extract model response and show to user

* SimpleChat:JS: Messages/Prompt, indicate working to end user

* SimpleChat: Try keep input element in view

* SimpleChat: Diff user/assistant msgs, Make input wider

Also show a default message to user

Also add some metas

* SimpleChat: Move into its own sub directory to avoid confusion

* SimpleChat:sh: Add simple shell script to run python3 http.server

So one needs to run the llm server locally
then run this script and access it using a local browser

* SimpleChat:JS: Try trap enter key press wrt input text field

So user can either press submit button or press enter key

* SimpleChat: Allow user to select chat or completion mode

* SimpleChat: Dont submit if already submitted and waiting

Also make chat the default selection wrt mode

* SimpleChat:JS: Handle difference in response

Try to read the assistant's response from the appropriate field in the
response received.

Also, examples/server seems to return the response in a slightly
different field, so try to account for that as well.

* SimpleChat:JS: Force completion mode be single message by default

* SimpleChat: Add a simple readme file

* SimpleChat:HTML: Cleanup/structure UI a bit, Add input for system

* SimpleChat:Allow system prompt to be set, if provided before user

* SimpleChat: Ignore empty user input, without trimming

* SimpleChat:Alert user if they provide sysprompt late or change it

* SimpleChat: Move handling systemprompt into its own func

* SimpleChat:HTML: Add a style for system role message

* SimpleChat: Update the readme file

* SimpleChat:CSS: Move style info into its own css file

To keep it simple, clean and separate, so that things are not
unnecessarily cluttered.

* SimpleChat:CSS: Allow for chat div to be scrollable

* SimpleChat:JS: Try ensure the last entry in chat is visible

Needed because now only the chat div is scrollable and not the full
page.

In the last commit the chat div size was fixed to 75% vertical height,
so the full page no longer scrolls; the old bring-user-input-element-
into-view approach won't work, so instead the last element in the
chat div should be brought into view.

* SimpleChat:JS: bottom of element visible, Set focus to user input

As the generated text could be multiple lines and occupy more space
than the scrollable div's vertical space, make the bottom of the
last element in the div (which can be such a generated text)
visible by scrolling.

Ensure that the user input box has focus.

* SimpleChat: Update notes a bit. Try keep browser happy

Avoid browser quirk mode with DOCTYPE.

Help with accessibility a bit by specifying the language explicitly.

Specify the char encoding explicitly; in turn utf-8 is a safe bet,
even with intermixing of languages if required in future.

Add a cache-control http-equiv meta tag, which in all probability
will be ignored.

Defer js loading and execution, just for fun and future; not that
critical here as it stands now.

* SimpleChat:HTML:Group user input+btn together; Note about multichat

* SimpleChat:JS: Allow for changing system prompt anytime for future

* SimpleChat:Readme: Note about handle_systemprompt begin/anytime

* SimpleChat:HTML: Add viewport meta for better mobile friendliness

Without this the page content may look too small.

* SimpleChat:HtmlCss: Cleanup UI flow

Set margins wrt vmin rather than vw or vh, so portrait/landscape are both ok.

Use flex and flex-grow to put things on the same line as well as
distribute available space as needed. Given two main elements per line,
it remains simple.

In each line, one element grows and one sits at a comfortably
fixed basic size.

* SimpleChat: textarea for multiline user chat, inturn shift+enter 4 enter

* SimpleChat: Make vertical layout better responsive (flex based)

Also needed to make things cleaner and properly usable, whether
landscape or portrait, after changing to a multiline textarea rather
than a single-line user input.

Avoid hardcoding the chat-till-now display area height; instead
make it flex-growable within a flex column of ui elements within
a fixed vertical area.

* SimpleChat: Rename simplechat.html to index.html, update readme

Instead of providing a separate shell script, update the readme wrt
how to run/use this web front end.

* SimpleChat: Screen fixed view and scrolling, Printing full

* SimpleChat:JS:CI: Avoid space at end of jsdoc param line

* SimpleChat:JS: MultiChat initial skeleton

Will help maintain multiple independent chats in future

* SimpleChat:JS: Move system prompt begin/anytime into SimpleChat

* SimpleChat:JS:Keep MultiChatUI simple for now

Worry about different chats with different servers for later.

* SimpleChat:JS: Move handle submit into MultiChat, build on same

Create an instance of MultiChatUI and in turn an instance of a chat
session, which is what the UI will in turn work on.

* SimpleChat:JS: Move to dictionary of SimpleChat, instead of array

* SimpleChat: Move ui elements into MultiChatUI, Update el IDs

Move ui elements into MultiChatUI, so that the current handleUserSubmit
doesn't need to take the element arguments. Also, in future, when the
user is allowed to switch between different chat sessions, the
UI can be updated as needed using the elements already known to
the MultiChatUI instance.

Rename the element ids so that they follow a common convention,
and so one can identify what each element represents in a more
consistent manner.

* SimpleChat:MCUI:Show available chat sessions, try switch btw them

Previous commits brought in / consolidated existing logic into
MultiChatUI class.

Now start adding logic towards multichat support

* show buttons indicating available chat sessions

* on session button click, try to switch to that session

* SimpleChat:MCUI: Store and use current chat session id

Also

allow switching the chat session optionally, wrt some of the related
helpers.

set up two chat sessions by default.

* SimpleChat:MCUI: Delay enabling user-input to avoid race

Re-enable user input only after the response to a user query has been
updated in the chat div. This ensures that if the user tries to switch
chat sessions, it won't be allowed until the chat request-response flow
is done.
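The race-avoidance idea in this commit can be sketched as a busy flag guarding submissions and session switches; the class and method names below are illustrative, not copied from the actual MultiChatUI code.

```javascript
// Sketch: block new submissions and session switches until the pending
// request-response flow has fully completed (including the chat-div update).
class SubmitGuard {
    constructor() {
        this.busy = false;
    }
    async submit(sendFn) {
        if (this.busy) return null; // already waiting on the model; ignore
        this.busy = true;
        try {
            return await sendFn();  // e.g. fetch the endpoint, update chat div
        } finally {
            this.busy = false;      // re-enable user input only afterwards
        }
    }
    canSwitchSession() {
        return !this.busy; // disallow switching mid request-response
    }
}
```

The `finally` block matters here: it re-enables input even when the request fails, matching the later "ensure req-resp failure doesn't lock up things" commit.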

* SimpleChat: Take care of system prompt

Helper to get the latest system prompt, in turn used to set the
system prompt ui when switching.

Ensure that the system prompt is set if and when the enter key is pressed.

* SimpleChat: GetSystemLatest, fix an oversight.

* SimpleChat:MCUI: Allow selected chat-session btn to be highlighted

Also have a general helper for setting class of children.

* SimpleChat:Cleanup corners

Show system prompt in chat space, when it is set by pressing enter,
as a feedback to user.

Alert user, if they try to switch chat session in the middle of
waiting for a response from the ai model.

* SimpleChat:MCUI: Ensure req-resp failure doesnt lock up things

* SimpleChat:MCUI: Support for new chat sessions

Also a general create button helper.

* SimpleChat:MCUI: CreateSessionBtn helper, use wrt NewChat

Also fix an oversight wrt using stale data for the list of chat
sessions.

* SimpleChat:MCUI: NewChat btn first before existing chat sessions

* SimpleChat:MCUI:CornerCases:Skip new chat, show only if current

Skip NewChat if the user cancels, or if one is waiting for a response
from the ai model.

Don't show a chat with a newly received ai model response if the
current chat session has somehow changed. The chat session shouldn't
be allowed to change while there is a pending response, but this acts
as an additional sanity check.

* SimpleChat: Update readme, title, show usage if no chat to show

* SimpleChat: Cleanup the log/dialog messages a bit
Labels: enhancement (New feature or request), examples, merging soon (Will merge soon unless anyone objects), review complexity: medium (Generally requires more time to grok but manageable by beginner to medium expertise level), server, testing (Everything test related)

4 participants