feat: rendering Mermaid diagrams in the chat response #2271
Comments
This is supported in some LLM clients, and it is a very useful feature. Is there a plugin interface to open-webui to enable content filters?

@newzealandpaul We're planning on pushing our plugin framework Pipelines to allow such actions, stay tuned!
@tjbck Great! I looked at the example; does it allow modifying how LLM responses are rendered? Here is an example conversation below. Ideally some JS could be inserted inside the web UI that simply renders a Mermaid chart above or below the response, just like GitHub issues does.

User: Generate a Mermaid 10.0.7 diagram showing relationships inside a transformer neural network

Assistant: Creating a diagram in Mermaid 10.0.7 to show the relationships in a Transformer neural network is a great idea. A Transformer model comprises an encoder and a decoder, with multiple layers for the attention mechanism and feed-forward neural networks. I'll outline a basic structure for you. Here's the Mermaid diagram code to demonstrate the components and how they relate to each other:

```mermaid
graph TB
    subgraph Encoder Layer
        EncoderSelfAttention[Self-Attention Mechanism]
        EncoderAddNorm1[Add & Norm]
        EncoderFeedForward[Feed Forward Network]
        EncoderAddNorm2[Add & Norm]
        Encoder1((Encoder Layer n))
        EncoderSelfAttention --> EncoderAddNorm1
        EncoderAddNorm1 --> EncoderFeedForward
        EncoderFeedForward --> EncoderAddNorm2
        EncoderAddNorm2 --> Encoder1
    end
    subgraph Decoder Layer
        DecoderSelfAttention[Masked Self-Attention Mechanism]
        DecoderAddNorm1[Add & Norm]
        EncoderDecoderAttention[Encoder-Decoder Attention]
        DecoderAddNorm2[Add & Norm]
        DecoderFeedForward[Feed Forward Network]
        DecoderAddNorm3[Add & Norm]
        Decoder1((Decoder Layer n))
        DecoderSelfAttention --> DecoderAddNorm1
        DecoderAddNorm1 --> EncoderDecoderAttention
        EncoderDecoderAttention --> DecoderAddNorm2
        DecoderAddNorm2 --> DecoderFeedForward
        DecoderFeedForward --> DecoderAddNorm3
        DecoderAddNorm3 --> Decoder1
    end
    Input -->|Token Embedding + Positional Encoding| Encoder1
    Encoder1 -->|Output of Encoder Layer n| Decoder1
    Decoder1 -->|Output of Decoder Layer n| Linear
    Linear --> Softmax
    Softmax --> Output
    style Input fill:#f9f,stroke:#333,stroke-width:2px;
    style Output fill:#f9f,stroke:#333,stroke-width:2px;
```
Mermaid as generated by GPT-4 starts with the following (where ' is a `): '''mermaid. It's very hard showing this in GitHub issues because GitHub aggressively renders Mermaid.
Is your feature request related to a problem? Please describe.
I'm always frustrated when I need to copy generated Mermaid script from the chat window into Mermaid Live just to see the diagram.
Describe the solution you'd like
It would be amazing if the chat renderer could detect the Mermaid script tag and render the diagram using Mermaid.js.
This feature could be enabled or disabled via a user setting or controlled by an inline toggle in the chat when you hover over the Mermaid script. This setting should be sticky, remembering the last used preference.
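As a sketch of what the detection step might look like, here is a minimal, hypothetical helper (the function name and shape are illustrative, not open-webui's actual renderer API) that scans a chat response for ```mermaid fences and collects their source; the extracted code could then be passed to Mermaid.js on the client:

```javascript
// Extract the source of every ```mermaid fenced block from a markdown
// chat response. Hypothetical helper, not open-webui's real API.
function extractMermaidBlocks(markdown) {
  const blocks = [];
  const fence = /```mermaid\s*\n([\s\S]*?)```/g;
  let match;
  while ((match = fence.exec(markdown)) !== null) {
    blocks.push(match[1].trim());
  }
  return blocks;
}

// On the client, each extracted block could then be rendered with
// Mermaid.js, e.g. with the Mermaid v10 API:
//   const { svg } = await mermaid.render('diagram-0', blocks[0]);
//   container.innerHTML = svg;
```

The same extraction could back the hover toggle: when a block is found, the UI shows either the raw fence or the rendered SVG depending on the saved preference.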
Describe alternatives you've considered
The only alternative I've found is to copy and paste the generated Mermaid code from the chat into an online renderer.
Additional context
This project is great, and I am looking forward to seeing how it evolves.