
GenAI: Fixes multiple calls to settings and health #87623

Merged (3 commits into main, May 10, 2024)

Conversation

torkelo (Member) commented May 10, 2024

I keep seeing one call to the LLM app settings and one health check for every input that uses the generative AI features. I find these duplicated calls pretty annoying.

This caches the is-enabled check by using a shared promise. It also avoids future checks: if the check succeeded, we assume it will succeed in the future as well (we could add a timeout and clear the cached promise after an hour if we think the health check is likely to fail later).
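The shared-promise pattern described above can be sketched roughly as follows. The names `isLLMPluginEnabled` and `checkLLMHealth` are illustrative assumptions, not the actual Grafana implementation:

```typescript
// Hypothetical sketch of caching an async enabled-check behind a shared
// promise. checkLLMHealth stands in for the real settings + health requests.
let enabledCheck: Promise<boolean> | undefined;

async function checkLLMHealth(): Promise<boolean> {
  // Stand-in for the real settings/health round trip.
  return true;
}

function isLLMPluginEnabled(): Promise<boolean> {
  if (enabledCheck) {
    // Concurrent and repeat callers share the same in-flight (or already
    // resolved) promise, so only one settings/health request is made.
    return enabledCheck;
  }
  enabledCheck = checkLLMHealth().then((enabled) => {
    if (!enabled) {
      // Don't cache a failed check; retry on the next call.
      enabledCheck = undefined;
    }
    return enabled;
  });
  return enabledCheck;
}
```

Because the promise itself is cached (not just its resolved value), callers that race before the first check completes still share one request.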

@torkelo torkelo requested a review from a team as a code owner May 10, 2024 12:30
@torkelo torkelo requested review from oscarkilhed, dprokop, ivanortegaalba and nmarrs and removed request for a team May 10, 2024 12:30
@torkelo torkelo added no-backport Skip backport of PR no-changelog Skip including change in changelog/release notes labels May 10, 2024
@grafana-delivery-bot grafana-delivery-bot bot added this to the 11.1.x milestone May 10, 2024
@nmarrs nmarrs requested a review from SandersAaronD May 10, 2024 17:42
@nmarrs nmarrs added the area/generativeAI for issues that involve generative AI label May 10, 2024
nmarrs (Contributor) commented May 10, 2024

@SandersAaronD do you think we should clear out the cached promise at a set interval? As far as I am aware, the health check just reports the status of the LLM plugin's configuration in Grafana, which in theory should only change if the LLM plugin config is modified or disabled, and a page refresh will cause the health to be checked again.

In local testing it seems fine.

(Attached screen recording: Screen.Recording.2024-05-10.at.11.29.01.AM.mov)
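The interval idea raised above could look roughly like this: keep the cached promise but discard it after a fixed TTL so a changed plugin config is eventually picked up without a page refresh. All names and the one-hour window are illustrative, taken from the suggestion in the PR description, not from the merged code:

```typescript
// Hypothetical TTL variant of the cached enabled-check.
const CACHE_TTL_MS = 60 * 60 * 1000; // one hour, as floated in the PR description

let cached: Promise<boolean> | undefined;
let cachedAt = 0;

async function fetchHealth(): Promise<boolean> {
  // Stand-in for the real LLM plugin health check.
  return true;
}

function isEnabledWithTTL(now: number = Date.now()): Promise<boolean> {
  if (cached && now - cachedAt < CACHE_TTL_MS) {
    return cached; // still fresh: reuse the cached promise
  }
  cachedAt = now;
  cached = fetchHealth(); // first call or expired: check again
  return cached;
}
```

The `now` parameter is only there to make the expiry testable; production code would just call `Date.now()` internally.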


const enabled = await isLLMPluginEnabled();

expect(enabled).toBe(false);
expect(enabled).toBe(true);
});
Contributor comment on this diff:

These changes just switch the order of the tests, checking the false case first and then the true case, given the newly added cache.

@torkelo torkelo merged commit f83366f into main May 10, 2024
14 checks passed
@torkelo torkelo deleted the gen-ai-multiple-calls branch May 10, 2024 18:53
Labels: area/frontend, area/generativeAI (for issues that involve generative AI), no-backport (Skip backport of PR), no-changelog (Skip including change in changelog/release notes)
Projects: Status: 🚀 Done
2 participants