This project also makes use of slf4j. Fetch the latest version at mvnrepository.
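The snippets below log through an slf4j `logger` without showing where it comes from; a minimal setup might look like this (the class name `ProducerDemo` here just mirrors the file referenced below):

```java
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class ProducerDemo {
    // slf4j logger used by the producer snippets in this README
    private static final Logger logger = LoggerFactory.getLogger(ProducerDemo.class);

    public static void main(String[] args) {
        logger.info("Producer starting"); // routed to whichever slf4j backend is bound
    }
}
```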
Kafka Producers
Writing a basic producer in Java. See ProducerDemo.java for further details.
```java
String bootstrapServers = "localhost:9092";

// create Producer properties
Properties properties = new Properties();
properties.setProperty(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
properties.setProperty(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
properties.setProperty(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

// create the producer
KafkaProducer<String, String> producer = new KafkaProducer<String, String>(properties);

ProducerRecord<String, String> record = new ProducerRecord<String, String>("first_topic", "hello world");

// send the data - asynchronous
producer.send(record);

// flush data
producer.flush();

// flush and close producer
producer.close();
```
Kafka Producers with Callback
Send data with a callback function. See ProducerWithCallback.java for further details.
```java
for (int i = 0; i < 10; i++) {
    ProducerRecord<String, String> record = new ProducerRecord<String, String>("first_topic", "hello world " + Integer.toString(i));

    // send the data - asynchronous
    producer.send(record, new Callback() {
        public void onCompletion(RecordMetadata recordMetadata, Exception e) {
            if (e == null) {
                // record was successfully sent
                logger.info("Received new metadata. \n" +
                        "Topic: " + recordMetadata.topic() + "\n" +
                        "Partition: " + recordMetadata.partition() + "\n" +
                        "Offset: " + recordMetadata.offset() + "\n" +
                        "Timestamp: " + recordMetadata.timestamp());
            } else {
                logger.error("Error while producing", e);
            }
        }
    });
}
```
Kafka Producers with Keys
A producer with key-value pairs. By providing a key we guarantee that records with the same key always go to the same partition.
See ProducerDemoKeys.java for further details.
```java
for (int i = 0; i < 10; i++) {
    String topic = "first_topic";
    String value = "hello world " + Integer.toString(i);
    String key = "id_" + Integer.toString(i);
    logger.info("Key: " + key); // log the key

    // By providing a key we guarantee that the same key goes to the same partition
    ProducerRecord<String, String> record = new ProducerRecord<String, String>(topic, key, value);

    // send the data - asynchronous
    producer.send(record, new Callback() {
        public void onCompletion(RecordMetadata recordMetadata, Exception e) {
            if (e == null) {
                // record was successfully sent
                logger.info("Received new metadata. \n" +
                        "Topic: " + recordMetadata.topic() + "\n" +
                        "Partition: " + recordMetadata.partition() + "\n" +
                        "Offset: " + recordMetadata.offset() + "\n" +
                        "Timestamp: " + recordMetadata.timestamp());
            } else {
                logger.error("Error while producing", e);
            }
        }
    }).get(); // Bad practice, we just made the call synchronous.
}
```
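The same-key-same-partition guarantee comes from hashing: Kafka's default partitioner runs a murmur2 hash over the serialized key bytes and maps it onto the partition count, so equal keys always land on the same partition (as long as the partition count doesn't change). A rough sketch of the idea, using `String.hashCode()` as a stand-in for murmur2 and a hypothetical `NUM_PARTITIONS`:

```java
public class PartitionSketch {
    static final int NUM_PARTITIONS = 3; // hypothetical partition count for "first_topic"

    // Illustrative only: Kafka's DefaultPartitioner hashes the serialized
    // key bytes with murmur2, not String.hashCode().
    static int partitionFor(String key) {
        return (key.hashCode() & 0x7fffffff) % NUM_PARTITIONS;
    }

    public static void main(String[] args) {
        for (int i = 0; i < 10; i++) {
            String key = "id_" + i;
            // the same key always maps to the same partition
            System.out.println(key + " -> partition " + partitionFor(key));
        }
    }
}
```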
Enable Producer Compression
Enable compression by setting compression.type. Experiment with different methods!
```java
// Enable compression, your network will thank you
properties.setProperty(ProducerConfig.COMPRESSION_TYPE_CONFIG, "snappy"); // experiment with different methods
```
Enable Producer Batching
By default, produced messages are sent as soon as they are created. Set linger.ms and batch.size to control this flow.
```java
// Enable batch sizing
properties.setProperty(ProducerConfig.LINGER_MS_CONFIG, "20"); // wait 20ms before sending messages so they can be batched
properties.setProperty(ProducerConfig.BATCH_SIZE_CONFIG, Integer.toString(32 * 1024)); // 32KB batch size
```
Using Kafka to consume live tweets
See TwitterProducer.java for further details.
```java
logger.info("Setup");

/** Set up your blocking queues: Be sure to size these properly based on expected TPS of your stream */
BlockingQueue<String> msgQueue = new LinkedBlockingQueue<String>(1000);

// create a twitter client
Client client = createTwitterClient(msgQueue);
// Attempts to establish a connection.
client.connect();

// create a kafka producer
KafkaProducer<String, String> producer = createKafkaProducer();

// loop to send tweets to kafka
// on a different thread, or multiple different threads....
while (!client.isDone()) {
    String msg = null;
    try {
        msg = msgQueue.poll(5, TimeUnit.SECONDS);
    } catch (InterruptedException e) {
        e.printStackTrace();
        client.stop();
    }
    if (msg != null) {
        logger.info(msg);
        producer.send(new ProducerRecord<>("twitter_tweets", null, msg), new Callback() {
            @Override
            public void onCompletion(RecordMetadata recordMetadata, Exception e) {
                if (e != null) {
                    logger.error("Something bad happened", e);
                }
            }
        });
    }
}
logger.info("End of application");
```
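The poll-with-timeout pattern above keeps the loop responsive: `poll` returns a message as soon as one is queued, or `null` after the timeout so the loop can re-check `client.isDone()` instead of blocking forever. A self-contained sketch of that behavior:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

public class QueueSketch {
    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<String> msgQueue = new LinkedBlockingQueue<>(1000);
        msgQueue.put("tweet-json");

        // poll returns immediately when an element is available
        String msg = msgQueue.poll(5, TimeUnit.SECONDS);
        System.out.println(msg); // tweet-json

        // and returns null once the timeout elapses on an empty queue
        String none = msgQueue.poll(10, TimeUnit.MILLISECONDS);
        System.out.println(none); // null
    }
}
```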
Credits to Stephane. Check out his awesome course on Udemy!