
feat: rendering Mermaid diagrams in the chat response #2271

Closed
joelmnz opened this issue May 15, 2024 · 5 comments

joelmnz commented May 15, 2024

Is your feature request related to a problem? Please describe.
I'm always frustrated when I need to copy generated Mermaid script from the chat window into Mermaid Live just to see the diagram.

Describe the solution you'd like
It would be amazing if the chat renderer could detect the Mermaid script tag and render the diagram using Mermaid.js.
This feature could be enabled or disabled via a user setting or controlled by an inline toggle in the chat when you hover over the Mermaid script. This setting should be sticky, remembering the last used preference.
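For illustration, the detection half of this could be a small pre-pass over the message markdown. This is a hedged sketch, not Open WebUI's actual implementation; `extractMermaidBlocks` is a hypothetical name:

```javascript
// Hypothetical sketch: find ```mermaid fenced blocks in a chat message so the
// renderer could hand them to Mermaid.js instead of the plain code highlighter.
// extractMermaidBlocks is an illustrative name, not part of Open WebUI.
function extractMermaidBlocks(markdown) {
  const blocks = [];
  const fence = /```mermaid\n([\s\S]*?)```/g;
  let match;
  while ((match = fence.exec(markdown)) !== null) {
    blocks.push(match[1].trim());
  }
  return blocks;
}
```

Each extracted block could then be rendered (or not) according to the sticky per-user toggle described above.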

Basically, I'm looking for the "Diagram" feature from big-aig.

Describe alternatives you've considered
The only alternative I've found is to copy and paste the generated Mermaid code from the chat into an online renderer.

Additional context
This project is great, and I am looking forward to seeing how it evolves.

@tjbck tjbck changed the title Support rendering Mermaid diagrams in the chat response feat: rendering Mermaid diagrams in the chat response May 15, 2024
@newzealandpaul

This is supported in some LLM clients, and it is a very useful feature. Is there a plugin interface in open-webui to enable content filters?


tjbck commented May 29, 2024

@newzealandpaul We're planning on pushing our plugin framework Pipelines to allow such actions, stay tuned!

newzealandpaul commented May 29, 2024

@tjbck great! I looked at the example; does it allow modifying how LLM responses are rendered?

Here is an example conversation below. Ideally, some JS could be inserted into the web UI that simply renders a Mermaid chart above or below the response, just like GitHub issues does.
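That in-page rewrite could look roughly like the following sketch. Mermaid v10 does expose `mermaid.run({ querySelector })` for rendering in place, but `toMermaidHosts` is a hypothetical helper, not an existing Open WebUI function:

```javascript
// Hypothetical sketch: rewrite ```mermaid fences into <pre class="mermaid">
// host elements. In a browser, mermaid.run({ querySelector: ".mermaid" })
// (Mermaid v10+) could then render each host in place, much like GitHub does.
function toMermaidHosts(markdown) {
  return markdown.replace(
    /```mermaid\n([\s\S]*?)```/g,
    (_, code) => `<pre class="mermaid">${code.trim()}</pre>`
  );
}
```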

User: Generate a mermaid 10.0.7 diagram showing relationships inside a transformer neural network

Assistant: Creating a diagram in Mermaid 10.0.7 to show the relationship in a Transformer Neural Network is a great idea. A Transformer model comprises an encoder and a decoder, with multiple layers for attention mechanism and feedforward neural networks. I'll outline a basic structure for you.

Here's the Mermaid diagram code to demonstrate the components and how they relate to each other:

'''mermaid
graph TB
    subgraph Encoder Layer
        EncoderSelfAttention[Self-Attention Mechanism]
        EncoderAddNorm1[Add & Norm]
        EncoderFeedForward[Feed Forward Network]
        EncoderAddNorm2[Add & Norm]
        Encoder1((Encoder Layer n))

        EncoderSelfAttention --> EncoderAddNorm1
        EncoderAddNorm1 --> EncoderFeedForward
        EncoderFeedForward --> EncoderAddNorm2
        EncoderAddNorm2 --> Encoder1
    end

    subgraph Decoder Layer
        DecoderSelfAttention[Masked Self-Attention Mechanism]
        DecoderAddNorm1[Add & Norm]
        EncoderDecoderAttention[Encoder-Decoder Attention]
        DecoderAddNorm2[Add & Norm]
        DecoderFeedForward[Feed Forward Network]
        DecoderAddNorm3[Add & Norm]
        Decoder1((Decoder Layer n))

        DecoderSelfAttention --> DecoderAddNorm1
        DecoderAddNorm1 --> EncoderDecoderAttention
        EncoderDecoderAttention --> DecoderAddNorm2
        DecoderAddNorm2 --> DecoderFeedForward
        DecoderFeedForward --> DecoderAddNorm3
        DecoderAddNorm3 --> Decoder1
    end

    Input -->|Token Embedding + Positional Encoding| Encoder1
    Encoder1 -->|Output of Encoder Layer n| Decoder1
    Decoder1 -->|Output of Decoder Layer n| Linear
    Linear --> Softmax
    Softmax --> Output

    style Input fill:#f9f,stroke:#333,stroke-width:2px;
    style Output fill:#f9f,stroke:#333,stroke-width:2px;
'''


newzealandpaul commented May 29, 2024

Mermaid as generated by GPT-4 starts with the following (where ' is a `):

'''mermaid

It's very hard to show this in GitHub issues because GitHub aggressively renders Mermaid.
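One workaround, per CommonMark's fenced-code rule, is to wrap the snippet in an outer fence longer than any backtick run it contains, so GitHub shows the inner fence as text. A small hypothetical helper illustrating that rule:

```javascript
// Sketch of the CommonMark workaround: wrap Mermaid source in an outer fence
// one backtick longer than the longest backtick run it contains, so the
// inner ```mermaid fence displays verbatim instead of being rendered.
function fenceVerbatim(source) {
  const runs = [...source.matchAll(/`+/g)].map((m) => m[0].length);
  const fence = "`".repeat(Math.max(2, ...runs) + 1);
  return `${fence}\n${source}\n${fence}`;
}
```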


tjbck commented Jun 3, 2024

Added to dev, releasing 0.2.2 soon!

[image attachment]

@tjbck tjbck closed this as completed Jun 3, 2024