Confluent's .NET Client for Apache Kafka™


confluent-kafka-dotnet is Confluent's .NET client for Apache Kafka and the Confluent Platform.

Features:

  • High performance - confluent-kafka-dotnet is a lightweight wrapper around librdkafka, a finely tuned C client.

  • Reliability - There are a lot of details to get right when writing an Apache Kafka client. We get them right in one place (librdkafka) and leverage this work across all of our clients (also confluent-kafka-python and confluent-kafka-go).

  • Supported - Commercial support is offered by Confluent.

  • Future proof - Confluent, founded by the creators of Kafka, is building a streaming platform with Apache Kafka at its core. It's a high priority for us that client features keep pace with core Apache Kafka and components of the Confluent Platform.

confluent-kafka-dotnet is derived from Andreas Heider's rdkafka-dotnet. We're fans of his work and were very happy to have been able to leverage rdkafka-dotnet as the basis of this client. Thanks Andreas!

Referencing

confluent-kafka-dotnet is distributed via NuGet. We provide three packages:

  • Confluent.Kafka [net45, netstandard1.3] - The core client library.
  • Confluent.Kafka.Avro [net452, netstandard2.0] - Provides a serializer and deserializer for working with Avro serialized data with Confluent Schema Registry integration.
  • Confluent.SchemaRegistry [net452, netstandard1.4] - Confluent Schema Registry client (a dependency of Confluent.Kafka.Avro).

To install Confluent.Kafka from within Visual Studio, search for Confluent.Kafka in the NuGet Package Manager UI, or run the following command in the Package Manager Console:

Install-Package Confluent.Kafka -Version 1.0-beta2

To add a reference to a .NET Core project, execute the following at the command line:

dotnet add package -v 1.0-beta2 Confluent.Kafka

Note: We recommend using the 1.0-beta2 version of Confluent.Kafka for new projects in preference to the most recent stable release (0.11.5). The 1.0 API provides more features and is considerably improved and more performant than the 0.11.x releases. In choosing the label 'beta', we are signaling that we do not anticipate making any high impact changes to the API before the 1.0 release; however, be warned that some breaking changes are still planned. You can track progress and provide feedback on the new 1.0 API here.

Branch builds

NuGet packages corresponding to all commits to release branches are available from the following NuGet package source (note: this is not a web URL - you should specify it in the NuGet package manager): https://ci.appveyor.com/nuget/confluent-kafka-dotnet. The version suffix of these packages matches the AppVeyor build number. You can see which commit a particular build number corresponds to by looking at the AppVeyor build history.

Usage

Take a look in the examples directory for example usage. The integration tests also serve as good examples.

For an overview of configuration properties, refer to the librdkafka documentation.
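
For example, common properties are exposed directly on the strongly typed config classes. The sketch below is illustrative only: BootstrapServers maps to the librdkafka bootstrap.servers property, while the other names shown (ClientId, LingerMs, and the Set method for raw librdkafka property names) are assumptions based on the 1.0 API surface and may differ in your version.

using Confluent.Kafka;

class ConfigExample
{
    public static void Main(string[] args)
    {
        var config = new ProducerConfig
        {
            BootstrapServers = "localhost:9092", // librdkafka 'bootstrap.servers'
            ClientId = "my-app",                 // assumed wrapper for 'client.id'
            LingerMs = 5                         // assumed wrapper for 'linger.ms'
        };

        // Any librdkafka property without a typed wrapper can be set by name
        // (assuming your version of the config classes exposes a Set method).
        config.Set("message.timeout.ms", "30000");
    }
}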

Basic Producer Examples

You should use the ProduceAsync method if you would like to wait for the result of your produce requests before proceeding. You might typically want to do this in highly concurrent scenarios, for example in the context of handling web requests. Behind the scenes, the client optimizes communication with the Kafka brokers for you, batching produce requests as appropriate.

using System;
using System.Threading.Tasks;
using Confluent.Kafka;

class Program
{
    public static async Task Main(string[] args)
    {
        var config = new ProducerConfig { BootstrapServers = "localhost:9092" };

        // A Producer for sending messages with null keys and UTF-8 encoded values.
        using (var p = new Producer<Null, string>(config))
        {
            try
            {
                var dr = await p.ProduceAsync("test-topic", new Message<Null, string> { Value="test" });
                Console.WriteLine($"Delivered '{dr.Value}' to '{dr.TopicPartitionOffset}'");
            }
            catch (KafkaException e)
            {
                Console.WriteLine($"Delivery failed: {e.Error.Reason}");
            }
        }
    }
}

Note that a server round-trip is slow (3ms at a minimum; actual latency depends on many factors). In highly concurrent scenarios you will achieve high overall throughput out of the producer using the above approach, but there will be a delay on each await call. In stream processing applications, where you would like to process many messages in rapid succession, you would typically make use of the BeginProduce method instead:

using System;
using Confluent.Kafka;

class Program
{
    public static void Main(string[] args)
    {
        var conf = new ProducerConfig { BootstrapServers = "localhost:9092" };

        Action<DeliveryReportResult<Null, string>> handler = r => 
            Console.WriteLine(!r.Error.IsError
                ? $"Delivered message to {r.TopicPartitionOffset}"
                : $"Delivery Error: {r.Error.Reason}");

        using (var p = new Producer<Null, string>(conf))
        {
            for (int i=0; i<100; ++i)
            {
                p.BeginProduce("my-topic", new Message<Null, string> { Value = i.ToString() }, handler);
            }

            // wait for up to 10 seconds for any inflight messages to be delivered.
            p.Flush(TimeSpan.FromSeconds(10));
        }
    }
}

Basic Consumer Example

using System;
using Confluent.Kafka;

class Program
{
    public static void Main(string[] args)
    {
        var conf = new ConsumerConfig
        { 
            GroupId = "test-consumer-group",
            BootstrapServers = "localhost:9092",
            // Note: The AutoOffsetReset property determines the start offset in the event
            // there are not yet any committed offsets for the consumer group for the
            // topic/partitions of interest. By default, offsets are committed
            // automatically, so in this example, consumption will only start from the
            // earliest message in the topic 'my-topic' the first time you run the program.
            AutoOffsetReset = AutoOffsetResetType.Earliest
        };

        using (var c = new Consumer<Ignore, string>(conf))
        {
            c.Subscribe("my-topic");

            bool consuming = true;
            // The client will automatically recover from non-fatal errors. You typically
            // don't need to take any action unless an error is marked as fatal.
            c.OnError += (_, e) => consuming = !e.IsFatal;

            while (consuming)
            {
                try
                {
                    var cr = c.Consume();
                    Console.WriteLine($"Consumed message '{cr.Value}' at: '{cr.TopicPartitionOffset}'.");
                }
                catch (ConsumeException e)
                {
                    Console.WriteLine($"Error occurred: {e.Error.Reason}");
                }
            }
            
            // Ensure the consumer leaves the group cleanly and final offsets are committed.
            c.Close();
        }
    }
}
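
The example above relies on offsets being committed automatically. If you want at-least-once semantics with explicit control over when offsets are committed, a minimal sketch follows. It assumes the EnableAutoCommit property on ConsumerConfig and a Commit overload accepting a ConsumeResult, both present in the 1.0 API (earlier betas exposed the latter as CommitAsync); treat it as illustrative, not definitive.

using System;
using Confluent.Kafka;

class ManualCommitExample
{
    public static void Main(string[] args)
    {
        var conf = new ConsumerConfig
        {
            GroupId = "test-consumer-group",
            BootstrapServers = "localhost:9092",
            // Disable auto-commit so an offset is only committed after the
            // corresponding message has actually been processed.
            EnableAutoCommit = false
        };

        using (var c = new Consumer<Ignore, string>(conf))
        {
            c.Subscribe("my-topic");
            try
            {
                while (true)
                {
                    var cr = c.Consume();
                    Console.WriteLine($"Processing '{cr.Value}' at {cr.TopicPartitionOffset}.");
                    // Commit only after processing succeeds. Committing every
                    // message has overhead; in practice you would typically
                    // commit periodically instead.
                    c.Commit(cr);
                }
            }
            catch (ConsumeException e)
            {
                // For brevity this sketch stops on the first consume error; a
                // real application would usually catch inside the loop and continue.
                Console.WriteLine($"Consume error: {e.Error.Reason}");
            }
            finally
            {
                c.Close();
            }
        }
    }
}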

Working with Apache Avro

The Confluent.Kafka.Avro nuget package provides an Avro serializer and deserializer that integrate with Confluent Schema Registry. The Confluent.SchemaRegistry nuget package provides a client for interfacing with Schema Registry's REST API.

You can use the Avro serializer and deserializer with the GenericRecord class or with specific classes generated using the avrogen tool, available via NuGet (.NET Core 2.1 required):

dotnet tool install -g Confluent.Apache.Avro.AvroGen

Usage:

avrogen -s your_schema.avsc .

For more information about working with Avro in .NET, refer to the blog post Decoupling Systems with Apache Kafka, Schema Registry and Avro.
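
As a rough sketch of producing Avro-serialized data with the GenericRecord class: the outline below assumes the beta-era API in which AvroSerializer instances are passed to the Producer constructor and the Schema Registry client is configured via SchemaRegistryUrl; the exact namespaces and constructor signatures changed between beta releases, so adjust to your version.

using System;
using System.Threading.Tasks;
using Avro;
using Avro.Generic;
using Confluent.Kafka;
using Confluent.SchemaRegistry;
// Assumed location of AvroSerializer in the beta releases (it moved in
// later versions) - adjust this using directive to your version.
using Confluent.Kafka.Serialization;

class AvroExample
{
    public static async Task Main(string[] args)
    {
        var producerConfig = new ProducerConfig { BootstrapServers = "localhost:9092" };
        var schemaRegistryConfig = new SchemaRegistryConfig { SchemaRegistryUrl = "http://localhost:8081" };

        // Define a record schema and construct a conforming GenericRecord.
        var schema = (RecordSchema)Schema.Parse(
            @"{""type"": ""record"", ""name"": ""User"", ""fields"": [{""name"": ""name"", ""type"": ""string""}]}");
        var record = new GenericRecord(schema);
        record.Add("name", "alice");

        using (var schemaRegistry = new CachedSchemaRegistryClient(schemaRegistryConfig))
        using (var p = new Producer<string, GenericRecord>(
            producerConfig,
            new AvroSerializer<string>(schemaRegistry),
            new AvroSerializer<GenericRecord>(schemaRegistry)))
        {
            // The serializer registers/looks up the schema in Schema Registry
            // and prefixes each serialized message with the schema id.
            var dr = await p.ProduceAsync(
                "avro-topic", new Message<string, GenericRecord> { Key = "key1", Value = record });
            Console.WriteLine($"Delivered to {dr.TopicPartitionOffset}");
        }
    }
}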

Confluent Cloud

The Confluent Cloud example demonstrates how to configure the .NET client for use with Confluent Cloud.
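
Connecting to Confluent Cloud amounts to supplying the cluster's SASL_SSL endpoint and API credentials. The sketch below uses raw librdkafka property names (which are stable across versions); it assumes the config classes expose a Set method for properties without typed wrappers, and the endpoint and credentials shown are placeholders.

using Confluent.Kafka;

class CloudConfigExample
{
    public static void Main(string[] args)
    {
        var config = new ProducerConfig { BootstrapServers = "<ccloud bootstrap servers>" };

        // Standard librdkafka properties for SASL/PLAIN over TLS, as used by
        // Confluent Cloud. Substitute your own API key and secret.
        config.Set("security.protocol", "SASL_SSL");
        config.Set("sasl.mechanisms", "PLAIN");
        config.Set("sasl.username", "<api key>");
        config.Set("sasl.password", "<api secret>");

        using (var p = new Producer<Null, string>(config))
        {
            // produce as in the earlier examples ...
        }
    }
}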

Build

To build the library or any test or example project, run the following from within the relevant project directory:

dotnet restore
dotnet build

To run an example project, run the following from within the example's project directory:

dotnet run <args>

Tests

Unit Tests

From within the test/Confluent.Kafka.UnitTests directory, run:

dotnet test

Integration Tests

From within the Confluent Platform (or Apache Kafka) distribution directory, run the following two commands (in separate terminal windows) to set up a single-broker test Kafka cluster:

./bin/zookeeper-server-start ./etc/kafka/zookeeper.properties

./bin/kafka-server-start ./etc/kafka/server.properties

Now use the bootstrap-topics.sh script in the test/Confluent.Kafka.IntegrationTests directory to set up the prerequisite topics:

./bootstrap-topics.sh <confluent platform path> <zookeeper>

then:

dotnet test

Copyright (c) 2016-2017 Confluent Inc., 2015-2016, Andreas Heider