@slim-bean, can you please suggest a fix?
Questions have a better chance of being answered if you ask them on the community forums.
We are seeing the following error in the Loki logs:

```
level=error ts=2024-05-16T08:35:43.267554629Z caller=manager.go:49 component=distributor path=write msg="write operation failed" details="Max entry size '262144' bytes exceeded for stream '{app="generate-preview-5jp7j", container="main", filename="/var/log/pods/argo-workflows_generate-preview-5jp7j_31f70b67-5db3-4933-ab9b-0d4513b46316/main/0.log", job="argo-workflows/generate-preview-5jp7j", namespace="argo-workflows", node_name="aks-defaultgreen-11165910-vmss00008s", pod="generate-preview-5jp7j", stream="stderr"}' while adding an entry with length '583758' bytes" org_id=fake
```
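For context, the rejected limit of 262144 bytes is 256 KB, which matches Loki's `max_line_size` limit; the entry being pushed was 583758 bytes (~570 KB). One way to address this, assuming the limit should simply be raised rather than the workload fixed, is via `limits_config` (the values below are illustrative, not taken from this thread):

```yaml
limits_config:
  # Raise the per-line limit above the largest observed entry (~570 KB here).
  max_line_size: 1MB
  # Alternatively, keep the limit but truncate oversized lines instead of
  # rejecting the whole push request:
  max_line_size_truncate: true
```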
To Reproduce
Steps to reproduce the behavior:
Expected behavior
Please provide the necessary configuration to fix this issue.
Environment:
Screenshots, Promtail config, or terminal output
```yaml
loki:
  auth_enabled: false
  analytics:
    reporting_enabled: false
  storage:
    type: azure
    azure:
      accountName: ${loki_azurerm_storage_account_name}
    bucketNames:
      chunks: ${loki_chunks_azurerm_storage_container_name}
      ruler: ${loki_ruler_azurerm_storage_container_name}
      admin: ${loki_admin_azurerm_storage_container_name}
  ingester:
    max_chunk_age: 24h
  ingester_client:
    grpc_client_config:
      grpc_keepalive_time: 30s    # Adjust keepalive settings
      grpc_keepalive_timeout: 20s # Adjust keepalive settings
  structuredConfig:
    query_range:
      parallelise_shardable_queries: false
    server:
      http_server_write_timeout: 10m
  limits_config:
    allow_structured_metadata: false
    max_concurrent_tail_requests: 100
    discover_service_name: []
  schemaConfig:
    configs:
      - from: 2022-01-11
        store: boltdb-shipper
        object_store: azure
        schema: v12
        index:
          prefix: loki_index_
          period: 24h

lokiCanary:
  tolerations:
    - key: "stack"
      operator: "Equal"
      value: "monitoring"
      effect: "NoSchedule"
  resources:
    requests:
      cpu: "0.01"
      memory: 64Mi
    limits:
      cpu: "0.05"
      memory: 128Mi

monitoring:
  enabled: true
  selfMonitoring:
    enabled: true
    grafanaAgent:
      installOperator: true
      tolerations:
        - key: "stack"
          operator: "Equal"
          value: "monitoring"
          effect: "NoSchedule"

write:
  nodeSelector:
    stack: monitoring
  tolerations:
    - key: "stack"
      operator: "Equal"
      value: "monitoring"
      effect: "NoSchedule"
  replicas: 3
  resources:
    requests:
      cpu: "0.2"
      memory: 4Gi
    limits:
      cpu: "1"
      memory: 4Gi

read:
  nodeSelector:
    stack: monitoring
  tolerations:
    - key: "stack"
      operator: "Equal"
      value: "monitoring"
      effect: "NoSchedule"
  replicas: 3
  resources:
    requests:
      cpu: "0.2"
      memory: 3Gi
    limits:
      cpu: "3"
      memory: 8Gi

backend:
  nodeSelector:
    stack: monitoring
  tolerations:
    - key: "stack"
      operator: "Equal"
      value: "monitoring"
      effect: "NoSchedule"
  replicas: 3
  resources:
    requests:
      cpu: "0.1"
      memory: 512Mi
    limits:
      cpu: "0.2"
      memory: 1Gi

chunksCache:
  nodeSelector:
    stack: monitoring
  tolerations:
    - key: "stack"
      operator: "Equal"
      value: "monitoring"
      effect: "NoSchedule"
  resources:
    requests:
      cpu: "0.1"
      memory: 1Gi
    limits:
      cpu: "0.5"
      memory: 3Gi

resultsCache:
  nodeSelector:
    stack: monitoring
  tolerations:
    - key: "stack"
      operator: "Equal"
      value: "monitoring"
      effect: "NoSchedule"
  resources:
    requests:
      cpu: "0.2"
      memory: 64Mi
    limits:
      cpu: "0.5"
      memory: 256Mi
```
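Since the oversized entries appear to come from a single noisy workload (the `argo-workflows` pod in the error above), another option is to drop them on the client before they reach the distributor. A Promtail pipeline sketch using the `drop` stage's `longer_than` parameter (the job name and threshold are illustrative, not from this setup):

```yaml
scrape_configs:
  - job_name: kubernetes-pods
    pipeline_stages:
      # Drop any log line longer than 256 KB instead of letting the
      # distributor reject the whole push request.
      - drop:
          longer_than: 256kb
          drop_counter_reason: line_too_long
```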