Apache Kafka is a distributed event streaming platform used for high-performance data pipelines, streaming analytics, data integration, and mission-critical applications. Kafka brokers are the heart of the Kafka cluster, responsible for receiving, storing, and forwarding messages to consumers. They manage the storage of messages on disk and ensure data durability and availability.
The KafkaHighLogRetentionSize alert is triggered when the log retention size exceeds a predefined threshold. This can lead to performance degradation and potential data loss if not addressed promptly.
The alert indicates that the size of the logs retained on the Kafka broker has grown beyond the expected limit. This can occur due to insufficient log cleanup, misconfigured retention policies, or unexpected spikes in data volume. High log retention size can lead to increased disk usage, slower read/write operations, and potential broker instability.
When the log retention size is too high, it can consume excessive disk space, leading to potential out-of-space errors. It can also slow down log segment operations, affecting the overall throughput and latency of the Kafka cluster.
Check the current log retention settings using the following command:
kafka-configs --bootstrap-server <broker-address> --entity-type topics --describe
Look for the retention.bytes and retention.ms configurations. Adjust these settings to align with your data retention requirements. For example, to cap a topic's log at 10 GiB (10737418240 bytes), use:
kafka-configs --bootstrap-server <broker-address> --entity-type topics --entity-name <topic-name> --alter --add-config retention.bytes=10737418240
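The value 10737418240 is 10 GiB expressed in bytes. Rather than hard-coding such numbers, you can let the shell compute them; the sketch below also derives a 7-day time-based retention in milliseconds:

```shell
# Retention size: 10 GiB in bytes (10 * 1024^3)
RETENTION_BYTES=$((10 * 1024 * 1024 * 1024))

# Retention time: 7 days in milliseconds
RETENTION_MS=$((7 * 24 * 60 * 60 * 1000))

echo "$RETENTION_BYTES $RETENTION_MS"   # prints: 10737418240 604800000
```

These values can then be passed to kafka-configs, e.g. `--add-config retention.bytes=$RETENTION_BYTES,retention.ms=$RETENTION_MS`.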
Regularly monitor the log retention size using Prometheus and Grafana dashboards. Set up alerts that notify you when the log size approaches critical thresholds, so you can act before the broker runs out of disk space.
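As a sketch of such an alert, the Prometheus rule below fires when a topic's total log size stays above 10 GiB for ten minutes. The metric name `kafka_log_log_size`, the threshold, and the window are assumptions that depend on your JMX exporter mapping and your tolerance; adjust them to your setup:

```yaml
groups:
  - name: kafka-log-retention
    rules:
      - alert: KafkaHighLogRetentionSize
        # kafka_log_log_size is a per-partition log size metric as exposed by a
        # typical JMX-exporter mapping; substitute your exporter's metric name.
        expr: sum by (topic) (kafka_log_log_size) > 10737418240
        for: 10m
        labels:
          severity: warning
        annotations:
          summary: "Log size for topic {{ $labels.topic }} exceeds 10 GiB"
```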
If optimizing retention policies is not sufficient, consider increasing the disk capacity of your Kafka brokers. This can be done by adding more disks or expanding existing storage volumes. Ensure that the new storage is properly configured and integrated with your Kafka setup.
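When you add a disk, the broker can spread partitions across multiple log directories via the `log.dirs` setting in server.properties; the mount points below are placeholders for your own storage layout:

```properties
# server.properties -- comma-separated list of log directories.
# New partitions are placed in the directory with the fewest partitions.
log.dirs=/var/kafka-logs-1,/var/kafka-logs-2
```

Note that existing partitions are not rebalanced onto the new directory automatically; moving them requires a partition reassignment.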
As a last resort, delete old log segments manually. First, locate each broker's log directories:
kafka-log-dirs --bootstrap-server <broker-address> --describe
Identify the log directories and remove only segments that are no longer needed, starting with the oldest. Deleting segment files while the broker is running can corrupt the log, so stop the broker first and be cautious to avoid data loss.
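A safer alternative to deleting segment files by hand is to let Kafka's own log cleaner do the work: temporarily lower retention.ms for the topic, wait for the broker to delete the expired segments, then remove the override. A sketch, with broker address and topic name as placeholders:

```shell
# Temporarily retain only the last hour of data (3600000 ms)
kafka-configs --bootstrap-server <broker-address> \
  --entity-type topics --entity-name <topic-name> --alter \
  --add-config retention.ms=3600000

# ...wait for the retention check interval (log.retention.check.interval.ms)
# to elapse and for the broker to delete the expired segments...

# Remove the override so the topic falls back to the cluster default
kafka-configs --bootstrap-server <broker-address> \
  --entity-type topics --entity-name <topic-name> --alter \
  --delete-config retention.ms
```

This keeps all deletions inside Kafka's normal cleanup path, avoiding the corruption risk of touching segment files directly.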