Increase time between running async_process_queue and async_handle_critical_repositories #3688
Hi @srt32 👋
    )
    self.recurring_tasks.append(
        async_track_time_interval(
-           self.hass, self.async_handle_critical_repositories, timedelta(hours=6)
+           self.hass, self.async_handle_critical_repositories, timedelta(hours=24)
This one does not match the description.
This handler currently fetches this file: https://github.com/hacs/default/blob/master/critical. However, starting from the next version, it will not hit GitHub at all; a different API (https://data-v2.hacs.xyz/critical/data.json) will serve this data instead.
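To put the two interval changes in this PR in perspective, the arithmetic is simple: moving the critical-repositories check from every 6 hours to every 24 hours cuts it from four runs per day to one, and the queue processor drops from 144 runs per day to 48. A small self-contained sketch (`runs_per_day` is just an illustrative helper, not HACS code):

```python
from datetime import timedelta

def runs_per_day(interval: timedelta) -> float:
    """How many times a recurring task fires per day at a given interval."""
    return timedelta(days=1) / interval

# Critical-repositories handler, before and after this PR:
print(runs_per_day(timedelta(hours=6)))     # 4.0
print(runs_per_day(timedelta(hours=24)))    # 1.0

# Queue processor, before and after this PR:
print(runs_per_day(timedelta(minutes=10)))  # 144.0
print(runs_per_day(timedelta(minutes=30)))  # 48.0
```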
@@ -674,12 +674,12 @@ async def startup_tasks(self, _=None) -> None:
        async_track_time_interval(self.hass, self.async_check_rate_limit, timedelta(minutes=5))
    )
    self.recurring_tasks.append(
-       async_track_time_interval(self.hass, self.async_process_queue, timedelta(minutes=10))
+       async_track_time_interval(self.hass, self.async_process_queue, timedelta(minutes=30))
This does not do much, really; the queue is executed fully each time unless a rate limit is hit, and from the next version, that should not really be possible as most data is no longer fetched from GitHub.
In most cases, this queue is empty, so at best, it just adds 20 min extra to the current 48h/96h intervals.
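For readers unfamiliar with how these intervals work: `async_track_time_interval` is a Home Assistant helper that re-invokes a callback every `timedelta` until it is cancelled. A minimal stand-alone imitation (pure asyncio with a toy interval for the demo; this is not the actual Home Assistant implementation) looks roughly like:

```python
import asyncio
from datetime import timedelta

def track_time_interval(action, interval: timedelta):
    """Illustrative stand-in for Home Assistant's async_track_time_interval:
    run `action` every `interval`, returning a callable that cancels it."""
    async def _runner():
        while True:
            await asyncio.sleep(interval.total_seconds())
            await action()
    task = asyncio.get_running_loop().create_task(_runner())
    return task.cancel

async def main():
    ticks = []

    async def process_queue():
        ticks.append(1)

    # Very short interval so the demo finishes quickly; HACS uses minutes/hours.
    cancel = track_time_interval(process_queue, timedelta(milliseconds=10))
    await asyncio.sleep(0.06)
    cancel()
    return len(ticks)

print(asyncio.run(main()) >= 2)  # True: the callback fired repeatedly
```

The point of the comment above is that stretching this interval only delays when the (usually empty) queue is drained; it does not reduce the total number of GitHub requests the queue eventually makes.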
Looking at the description, it is probably these two intervals you are looking for: https://github.com/hacs/integration/pull/3688/files#diff-49e399d00036b3a319719192cd99a76fd838f6c9f2c01e026b666e605a65de27R640-R651.
But as you can see, those are already quite high (they were set to those values after a meeting with GitHub early last year).
However, these handlers will not be used from the next version (2.0.0), as this integration will use the new data source for these recurring tasks (like https://data-v2.hacs.xyz/integration/data.json).
From that version, the integration will only hit the GitHub API when a user triggers it to show the most up-to-date content.
More information about that can be found here: https://experimental.hacs.xyz/docs/faq/data_sources
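Based on the two endpoints quoted in this thread, the new data source appears to follow a simple per-category URL scheme. This is an inference from the examples above, not a documented contract, so treat the helper below as an assumption:

```python
# URL scheme inferred from the endpoints mentioned in this thread
# (https://data-v2.hacs.xyz/integration/data.json and
# https://data-v2.hacs.xyz/critical/data.json); treat it as an assumption.
DATA_V2_BASE = "https://data-v2.hacs.xyz"

def data_url(category: str) -> str:
    """Build the data-v2 URL for a HACS content category."""
    return f"{DATA_V2_BASE}/{category}/data.json"

print(data_url("integration"))  # https://data-v2.hacs.xyz/integration/data.json
print(data_url("critical"))     # https://data-v2.hacs.xyz/critical/data.json
```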
Thank you so much for the timely and detailed replies! Do you have an ETA, or a way for me to keep up to date on the 2.0 release work? That version looks like a very exciting improvement to the data sources. I'll go ahead and close this PR. ❤️
There is no set timeline currently, so the best I can say for now is "as soon as possible".
👋 from GitHub! Big fan of this project!
As is, the integration makes a lot of GitHub API calls to get repository information. This project is a statistically significant consumer of our API resources, and we'd like to see if we can lighten the load a bit while still maintaining the integration's functionality.
This PR proposes polling less frequently. I think I found the right config that controls the polling but would be very happy to learn if there are better / other spots to update. Are there impacts of making this change that I am not aware of? Thank you!