Kafka Topic RecordTooLargeException

A record exceeds the maximum allowable size.

Understanding Kafka and Its Purpose

Apache Kafka is a distributed event streaming platform used by thousands of companies for high-performance data pipelines, streaming analytics, data integration, and mission-critical applications. Kafka is designed to handle real-time data feeds and is often used for building real-time streaming data pipelines and applications that adapt to the data streams.

Identifying the Symptom: RecordTooLargeException

When working with Kafka, you might encounter the RecordTooLargeException. This error manifests when a producer attempts to send a record that exceeds a configured size limit, either the producer's own max.request.size or the broker's maximum message size.

What You Observe

Developers may notice that certain messages are not being delivered to the Kafka topic, and upon checking the logs, they find an error similar to:

org.apache.kafka.common.errors.RecordTooLargeException: The message is 1048576 bytes when serialized which is larger than the maximum request size you have configured with the max.request.size configuration.
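Note that the Java producer surfaces this error asynchronously rather than throwing from send() directly. Below is a minimal sketch (the local bootstrap address and topic name my-topic are placeholders) that catches the exception in the send callback:

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.errors.RecordTooLargeException;
import org.apache.kafka.common.serialization.StringSerializer;

import java.util.Properties;

public class DetectTooLarge {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

        try (Producer<String, String> producer = new KafkaProducer<>(props)) {
            // A ~2 MB value, above the 1 MB default max.request.size.
            ProducerRecord<String, String> record =
                    new ProducerRecord<>("my-topic", "key", "x".repeat(2_000_000));
            // The exception is delivered through the send callback, not thrown here.
            producer.send(record, (metadata, exception) -> {
                if (exception instanceof RecordTooLargeException) {
                    System.err.println("Record rejected: " + exception.getMessage());
                }
            });
            producer.flush();
        }
    }
}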

Details About the RecordTooLargeException

The RecordTooLargeException occurs when a producer sends a record that exceeds the maximum size allowed by its own configuration or by the Kafka broker. Kafka caps record size to keep request handling efficient and memory usage predictable. Both limits default to roughly 1MB: the producer's max.request.size defaults to 1048576 bytes, and the broker's message.max.bytes default is approximately the same.

Configuration Parameters Involved

  • message.max.bytes: This broker configuration parameter (set in server.properties) defines the largest record batch the broker will accept.
  • max.message.bytes: This per-topic configuration parameter overrides message.max.bytes for an individual topic.
  • max.request.size: This producer configuration parameter defines the maximum size of a request that the producer can send. The sketch after this list shows how to check the limit currently in effect.
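
Before changing anything, it can help to verify which limit is currently in effect. The following is a hedged sketch using the Kafka Java AdminClient; the bootstrap address and topic name my-topic are placeholders:

import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.Config;
import org.apache.kafka.common.config.ConfigResource;

import java.util.List;
import java.util.Properties;

public class ShowTopicSizeLimit {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");

        try (Admin admin = Admin.create(props)) {
            ConfigResource topic = new ConfigResource(ConfigResource.Type.TOPIC, "my-topic");
            Config config = admin.describeConfigs(List.of(topic)).all().get().get(topic);
            // max.message.bytes falls back to the broker's message.max.bytes
            // when no topic-level override is set.
            System.out.println(config.get("max.message.bytes"));
        }
    }
}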

Steps to Fix the RecordTooLargeException

To resolve the RecordTooLargeException, you can either increase the maximum allowable message size or reduce the size of the records being sent. Here are the steps to address this issue:

Step 1: Increase the Maximum Message Size

To increase the maximum message size, adjust message.max.bytes on the broker (or max.message.bytes on an individual topic) and max.request.size on the producer. If you raise the broker limit substantially, also check that replica.fetch.max.bytes and your consumers' fetch.max.bytes / max.partition.fetch.bytes settings accommodate the larger records.

# On the Kafka broker, update the server.properties file:
message.max.bytes=2097152 # Set to 2MB or your desired size

# On the producer configuration:
max.request.size=2097152 # Must be at least as large as the broker/topic limit

After updating server.properties, restart the Kafka broker, then restart the producer application with the new setting. A topic-level max.message.bytes override, by contrast, takes effect without a broker restart.
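
The producer-side setting can also be applied programmatically. A minimal sketch using the Kafka Java client (the bootstrap address is a placeholder):

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.serialization.StringSerializer;

import java.util.Properties;

public class LargeRecordProducer {
    public static Producer<String, String> create() {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        // Match the 2MB limit configured on the broker/topic above.
        props.put(ProducerConfig.MAX_REQUEST_SIZE_CONFIG, 2097152);
        return new KafkaProducer<>(props);
    }
}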

Step 2: Reduce the Record Size

If increasing the limits is not feasible, reduce the size of the records instead, either by compressing the data or by breaking it into smaller chunks that a consumer reassembles (see the sketch after this step). Note that the Java producer applies the max.request.size check to the serialized record before compression, so compression mainly helps you stay under the broker-side limit.

# Enable compression on the producer:
compression.type=gzip # Options include gzip, snappy, lz4, zstd

For more information on Kafka compression, refer to the Kafka Documentation.
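
Compression is the one-line change shown above; chunking, by contrast, needs application-level support, because Kafka itself does not reassemble records. The following is a hedged sketch of producer-side chunking, assuming a placeholder topic large-events and a consumer that reassembles chunks by message ID (reassembly not shown):

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.ByteArraySerializer;
import org.apache.kafka.common.serialization.StringSerializer;

import java.nio.charset.StandardCharsets;
import java.util.Arrays;
import java.util.Properties;

public class ChunkingProducer {
    // Keep chunks comfortably below the 1MB default max.request.size.
    private static final int CHUNK_SIZE = 512 * 1024;

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, ByteArraySerializer.class.getName());

        byte[] payload = new byte[3 * 1024 * 1024]; // stand-in for a large message
        String messageId = "msg-123";               // shared key keeps chunks in one partition
        int totalChunks = (payload.length + CHUNK_SIZE - 1) / CHUNK_SIZE;

        try (Producer<String, byte[]> producer = new KafkaProducer<>(props)) {
            for (int i = 0; i < totalChunks; i++) {
                int from = i * CHUNK_SIZE;
                int to = Math.min(from + CHUNK_SIZE, payload.length);
                ProducerRecord<String, byte[]> record = new ProducerRecord<>(
                        "large-events", messageId, Arrays.copyOfRange(payload, from, to));
                // Headers let the consumer reassemble the chunks in order.
                record.headers().add("chunk-index", Integer.toString(i).getBytes(StandardCharsets.UTF_8));
                record.headers().add("chunk-total", Integer.toString(totalChunks).getBytes(StandardCharsets.UTF_8));
                producer.send(record);
            }
        }
    }
}

Using the message ID as the record key keeps all chunks of one message in the same partition, so the consumer sees them in order.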

Conclusion

Handling the RecordTooLargeException in Kafka involves understanding the configuration limits and adjusting them according to your needs. By either increasing the allowable message size or optimizing the record size, you can ensure smooth data flow in your Kafka pipelines. For further reading, check out the Kafka Broker Configuration and Producer Configuration documentation.
