send task_received signal before receive log #8697
base: main
Conversation
Force-pushed from c2530bc to f3fad4e
Codecov Report: all modified and coverable lines are covered by tests ✅

@@ Coverage Diff @@
##             main    #8697      +/-   ##
==========================================
- Coverage   81.26%   81.25%   -0.02%
==========================================
  Files         150      149       -1
  Lines       18686    18553     -133
  Branches     3193     3166      -27
==========================================
- Hits        15186    15075     -111
+ Misses       3209     3191      -18
+ Partials      291      287       -4
We need more context here, and preferably some unit / integration tests for the suggested change.
@auvipy hello! My point is to send the receive log after the task_received signal. This matters to me because I send some logging context to Celery. If you tell me which tests are best to write for this, I will try to attach them.
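The motivation is a general pattern: a task_received handler stashes per-task context that later log lines can pick up, which only works if the signal fires before the receive log. A minimal stdlib-only sketch of that pattern, assuming hypothetical names (`on_task_received`, `task_context`, `ContextFilter` are stand-ins, not Celery's API; in Celery the handler would be connected via celery.signals.task_received.connect):

```python
import contextvars
import logging

# Hypothetical per-task context that a real task_received handler
# would populate from the incoming request object.
task_context = contextvars.ContextVar("task_context", default={})

class ContextFilter(logging.Filter):
    """Inject the stashed task context into every log record."""
    def filter(self, record):
        record.task_context = task_context.get()
        return True

def on_task_received(request_id):
    # Stand-in for a handler connected to signals.task_received:
    # it just stashes context for later log calls to use.
    task_context.set({"request_id": request_id})

logger = logging.getLogger("worker")
handler = logging.StreamHandler()
handler.setFormatter(logging.Formatter("%(message)s %(task_context)s"))
logger.addHandler(handler)
logger.addFilter(ContextFilter())
logger.setLevel(logging.INFO)

# The signal handler must run *before* the "task received" log line,
# otherwise the context is still empty when the line is emitted.
on_task_received("abc-123")
logger.info("Task received")
```

If the log line were emitted before the handler ran, `task_context` would still hold the empty default, which is exactly the ordering this PR changes.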
👍🏼 to @auvipy request to help resolve concerns, thank you.
@50-Course @auvipy hello! I wrote a unit test to cover the flow change. Please ask if you need anything else. UPD: no idea why the tests on PyPy failed.
Not related to this PR.
Will check after the office hour.
I've noticed GitHub Actions in general has been having trouble for the last few hours, much more than usual (jobs fail before the code/tests actually run). FYI @auvipy
Hello, @auvipy @50-Course. If I can help with this, just tell me how. Thanks for your time and effort!
Hello again, @Niccolum. As far as I understand, this has nothing to do with your PR. I think the PR has achieved its scope and success criteria, given the tests and non-breaking changes. As for the failing pipeline, if you look at the last function in the stacktrace, you'll see:
Obviously, a problem with the tox configuration. If @auvipy or @Nusnus could help ensure that the
Nonetheless, the PR is as good as merged 👍🏼
I re-ran the failed job to trigger the integration tests.
Hello everyone.
TBH I am not sure what happened, but the latest
Let's update the PR first so we can direct our focus more efficiently.
@50-Course Hello :) Yes, you can, why not :)
@Nusnus @50-Course hello! Pulled to the current state of main and ran it, with this result:
Maybe you can run the pipeline again? :)
@50-Course UPD: As I see it, the celery jobs fail because of a lost codecov token.
I'll give codecov a few minutes to calm down and rerun. It's due to the many PRs running at the same time (right now).
I happened to spin up a proposed solution; I can't run the workflow from the CLI, so I added a ticket for that, which would require a separate implementation: #8822. Once approved, I'd send in a patch!
As I see it, on PyPy: FAILED t/unit/worker/test_strategy.py::test_default_strategy_proto1::test_when_logging_disabled. Nothing for the other builds... How is this possible?
Locally, I see t/unit/worker/test_strategy.py::test_default_strategy_proto2::test_when_logging_disabled PASSED. Do you have any ideas? Maybe we can try re-running the PyPy job?
@Niccolum, just got into my workspace. Quick question: there was no diff in the affected files? Getting to it once my system comes up.
If this passes, we'd just have to re-push these files to VCS. BTW, I can't seem to re-run the workflow on my end... I should turn in the PR for #8822 to help in that regard.
@Niccolum, please update it from your end; I can't update the commits from my end, so I'd have to reach out to @Nusnus @auvipy. With that, this is as good as merged 🚀 @Nusnus please make one quick sweep, and if anything comes up, I'll be back in a couple of minutes. That said, I'll be waiting for the update. Well done @Niccolum, and thank you for making this patch 🤝
@50-Course I added your one-line patch to my PR.
@50-Course I removed the coverage flags, and now it fails because there are no coverage files for codecov. Sooo... what can we do now? UPD: these three tests from #8697 (comment) still fail...
I reverted my latest commit with the tox.ini config change.
All of the recent commits “choke” the GitHub CI. EDIT: There's another PR already using the CI workers at 100%. Let's take a short break and come back to see where we are in 30m.
CI failure (2024-01-30):

=================================== FAILURES ===================================
___________ test_default_strategy_proto1.test_when_logging_disabled ____________

self = <t.unit.worker.test_strategy.test_default_strategy_proto1 object at 0x00007fd9f8dac8e0>
caplog = <_pytest.logging.LogCaptureFixture object at 0x0000000032b90090>

    def test_when_logging_disabled(self, caplog):
        # Capture logs at any level above `NOTSET`
        caplog.set_level(logging.NOTSET + 1, logger="celery.worker.strategy")
        with patch('celery.worker.strategy.logger') as logger:
            logger.isEnabledFor.return_value = False
            with self._context(self.add.s(2, 2)) as C:
                C()
>               assert not caplog.records
E               assert not [<LogRecord: celery.utils.dispatch.signal, 40, /home/runner/work/celery/celery/celery/utils/dispatch/signal.py, 280, "Signal handler %r raised: %r">]
E                +  where [<LogRecord: celery.utils.dispatch.signal, 40, /home/runner/work/celery/celery/celery/utils/dispatch/signal.py, 280, "Signal handler %r raised: %r">] = <_pytest.logging.LogCaptureFixture object at 0x0000000032b90090>.records

t/unit/worker/test_strategy.py:154: AssertionError
A long shot, but can you try updating the PR to use pytest v8.0.0? It's a very long shot though, so let's not get our hopes too high :)
@Nusnus Do you have any idea why it fails only on PyPy? P.S. I can try pytest 8.0.0, of course. I'll give you feedback later.
My best guess is a dependency that was upgraded internally (maybe a dep of a dep) and changed something. But @auvipy just merged 🤞
@Niccolum the best bet right now is that there's a transitive dependency issue somewhere down the line, and that it is related to
I should do a quick grep project-wide.
@50-Course as I see it, that's the best bet for running tox locally, not for resolving the current problem.
Ha, interesting: it is a core dependency, from the pytest module (~../site-packages/_pytest/config/__init__.py):

    # Plugins that cannot be disabled via "-p no:X" currently.
    essential_plugins = (
        "mark",
        "main",
        "runner",
        "fixtures",
        "helpconfig",  # Provides -p.
    )
@Nusnus Locally it runs successfully with
Can you run the pipeline again?
============================== 3245 passed, 11 skipped, 3 xfailed, 59 warnings, 28910 subtests passed in 175.47s (0:02:55) ==============================
.pkg: _exit> python /Users/n.vidov/Desktop/projects/github/celery/.tox/.tox/lib/python3.10/site-packages/pyproject_api/_backend.py True setuptools.build_meta __legacy__
pypy-3-unit: OK (268.48=setup[83.63]+cmd[184.85] seconds)
  congratulations :) (272.16 seconds)
@50-Course I think I resolved the problem with cov when running tox locally, with a one-line fix in the tox config =) As I understand it, in GH the pypy names
Thanks - let's see if it works!
@Nusnus No, it's still unsuccessful, and only on PyPy.
Do you have any ideas?
When I hit a wall, I usually try to review my approach again and try from a different angle. I’d suggest:
If you're up for it, I am willing to be “on watch” for CI approvals etc. @Niccolum
@Nusnus Of course I agree. Thank you very much. I'll try these options then, though not today.
See my suggestions, @Niccolum.
Remember, we're just messing with the code to remove noise, trying to pinpoint the reason it fails (in the CI only, if I understand correctly).
CONTRIBUTORS.txt
Outdated
This change obviously doesn't affect anything, so we can move on.
@@ -33,7 +33,7 @@ deps=

    3.8,3.9,3.10,3.11,3.12: -r{toxinidir}/requirements/test-ci-default.txt
    3.8,3.9,3.10,3.11,3.12: -r{toxinidir}/requirements/docs.txt
    pypy3: -r{toxinidir}/requirements/test-ci-default.txt
Revert this
requirements/test.txt
Outdated
@@ -1,4 +1,4 @@
pytest==7.4.4
Revert this
@@ -234,6 +234,35 @@ def test_signal_task_received(self):
            request=ANY,
            signal=signals.task_received)

    def test_log_task_received_meta(self, caplog):
Add @pytest.mark.skip (double-check me on the syntax).
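For reference, the skip-marker syntax looks like this; a sketch only, with the test name mirroring the one added in this PR's diff and a hypothetical skip reason:

```python
import pytest

@pytest.mark.skip(reason="temporarily disabled while investigating the PyPy-only CI failure")
def test_log_task_received_meta(caplog):
    ...
```

A `@pytest.mark.skipif(condition, reason=...)` variant could instead skip only on PyPy (e.g. by checking `platform.python_implementation()`), which would keep the test running on CPython.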
@@ -148,17 +148,7 @@ def task_message_handler(message, body, ack, reject, callbacks,
        eventer=eventer, task=task, connection_errors=connection_errors,
        body=body, headers=headers, decoded=decoded, utc=utc,
    )
    if _does_info:
Just for the sake of being “the first iteration of messing up the code”, try to bring all of this code back here, except the info call itself.
            'kwargs': req.kwargsrepr,
            'eta': req.eta,
        }
        info(_app_trace.LOG_RECEIVED, context, extra={'data': context})
Leave just the if … info().
The purpose is mainly to reduce the changed lines. This is just to remove noise so we can, step by step, improve our focus on the changes that caused the failure.
Also, just for the sake of it, rebase on
Description
I want the task_received signal to be sent before info(_app_trace.LOG_RECEIVED, context, extra={'data': context}) so that I can add some context to the log.
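The reordering can be illustrated with a stripped-down sketch of the handler flow. These are stand-in functions, not Celery's actual task_message_handler; the sketch only shows that after the change the signal fires before the LOG_RECEIVED line:

```python
calls = []  # records the order of side effects, for illustration only

def send_task_received(request):
    # Stand-in for signals.task_received.send(sender=..., request=request)
    calls.append("task_received signal")

def log_received(request):
    # Stand-in for info(_app_trace.LOG_RECEIVED, context, extra={'data': context})
    calls.append("LOG_RECEIVED log")

def task_message_handler(request):
    # After this PR the signal fires first, so signal handlers can attach
    # context that the receive log line is then able to include.
    send_task_received(request)
    log_received(request)

task_message_handler({"id": "abc-123"})
# calls == ["task_received signal", "LOG_RECEIVED log"]
```

With the pre-PR ordering the two calls would be swapped, so any context set by a task_received handler would arrive too late for the receive log line.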