Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request.
If you are interested in working on this issue or have submitted a pull request, please leave a comment.
Problem
We encountered a problem with the Splunk HEC sink. When the Splunk endpoint returns 504 Gateway Timeout, other sinks sending to Datadog destinations also go into a broken state. Eventually this causes the whole Vector Pod to fail, and we need to restart it.
It is expected behavior that a failing sink will apply back-pressure to other sinks connected to the same inputs. The way to mitigate this is to configure a buffer on the sink you want to tolerate downtime for, which I see you did. If the buffer fills up, back-pressure will still be applied when you have `when_full: block`, so you may want to consider increasing the size of your buffer to tolerate a longer period of downtime. Alternatively, you can configure the buffer to drop data rather than block with `when_full: drop`. The concept of backpressure in Vector is described in more detail here: https://vector.dev/docs/about/concepts/#backpressure
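For reference, a buffer like that is configured on the sink itself. This is a hedged sketch, not your actual config: the sink name, endpoint, and sizes are placeholders, and note that recent Vector releases spell the drop option `drop_newest` rather than `drop`:

```yaml
sinks:
  splunk:
    type: splunk_hec_logs
    inputs: ["my_source"]                  # placeholder input name
    endpoint: "https://splunk.example.com:8088"
    default_token: "${SPLUNK_HEC_TOKEN}"
    encoding:
      codec: json
    buffer:
      type: disk                           # survives restarts; memory is also valid
      max_size: 1073741824                 # ~1 GiB; size this for the downtime you want to tolerate
      when_full: drop_newest               # shed data instead of back-pressuring other sinks
```

With `when_full: block` (the default) a full buffer propagates back-pressure upstream; `drop_newest` trades data loss on the Splunk sink for keeping the Datadog sinks healthy.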
I'll close this out since it appears to have worked as designed, but let me know if you have any additional questions about this!
A note for the community
Configuration
Version
0.32.1
Debug Output
No response
Example Data
No response
Additional Context
No response
References
No response