Sinks

This page details the configuration options for the Stream Reactor Kafka Connect sink connectors.


Sink connectors read data from Kafka and write to an external system.
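
The exact configuration keys are documented on each connector's page. As a rough sketch, a sink connector is typically created by posting a JSON definition to the Kafka Connect REST API; the connector class, property names, bucket, and topic below are illustrative, not exact values:

# A sketch: create a sink connector via the Kafka Connect REST API
# (default port 8083). The connector class, the connect.s3.kcql property
# name, and the bucket/topic names are illustrative; check the AWS S3
# sink page for the exact configuration keys.
curl -X POST http://localhost:8083/connectors \
  -H "Content-Type: application/json" \
  -d '{
    "name": "s3-sink-example",
    "config": {
      "connector.class": "io.lenses.streamreactor.connect.aws.s3.sink.S3SinkConnector",
      "tasks.max": "1",
      "topics": "topicA",
      "connect.s3.kcql": "INSERT INTO my-bucket:my-prefix SELECT * FROM topicA"
    }
  }'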


FAQ

Can the datalake sinks lose data?

Yes, indirectly. Kafka topic retention policies determine how long a message is retained in a topic before it is deleted. If the retention period expires before the connector has processed a message, for example because the connector was not running or had fallen behind, the broker deletes that message and it is no longer available for the connector to sink to the target system, resulting in data loss.
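
To reduce this risk, monitor the connector's consumer lag and make sure topic retention comfortably exceeds any expected downtime. As a sketch, retention can be inspected and extended with the standard Kafka CLI (broker address and topic name are illustrative):

# Inspect the current retention settings for a topic.
kafka-configs.sh --bootstrap-server localhost:9092 \
  --entity-type topics --entity-name topicA --describe

# Extend retention to 7 days (604800000 ms) to give a lagging
# connector more time to catch up.
kafka-configs.sh --bootstrap-server localhost:9092 \
  --entity-type topics --entity-name topicA \
  --alter --add-config retention.ms=604800000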

Do the datalake sinks support exactly once semantics?

Yes, the datalake sink connectors natively support exactly-once guarantees.

How do I escape dots in field names in KCQL?

Field names in Kafka message headers or values may contain dots (.). To access these fields correctly, enclose the entire target path in backticks (`) and each field-name segment in single quotes ('):

INSERT INTO `_value.'customer.name'.'first.name'` SELECT * FROM topicA

How do I escape other special characters in field names in KCQL?

For field names with spaces or special characters, use a similar escaping strategy:

  • Field name with a space: `_value.'full name'`

  • Field name with special characters: `_value.'$special_characters!'`

This ensures the connector correctly extracts the intended fields and avoids parsing errors.
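
Combining these rules, complete KCQL statements (topic and field names are illustrative) look like:

INSERT INTO `_value.'full name'` SELECT * FROM topicA
INSERT INTO `_value.'$special_characters!'` SELECT * FROM topicB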

Available sink connectors:

  • AWS S3: Sink data from Kafka to AWS S3, including backing up topics and offsets.
  • Azure CosmosDB: Sink data from Kafka to Azure CosmosDB.
  • Azure Data Lake Gen2: Sink data from Kafka to Azure Data Lake Gen2, including backing up topics and offsets.
  • Azure Event Hubs: Sink data from Kafka to Azure Event Hubs.
  • Azure Service Bus: Sink data from Kafka to Azure Service Bus topics and queues.
  • Cassandra: Sink data from Kafka to Cassandra.
  • Elasticsearch: Sink data from Kafka to Elasticsearch.
  • GCP PubSub: Sink data from Kafka to GCP PubSub.
  • GCP Storage: Sink data from Kafka to GCP Storage.
  • HTTP Sink: Sink data from Kafka to an HTTP endpoint.
  • InfluxDB: Sink data from Kafka to InfluxDB.
  • JMS: Sink data from Kafka to JMS.
  • MongoDB: Sink data from Kafka to MongoDB.
  • MQTT: Sink data from Kafka to MQTT.
  • Redis: Sink data from Kafka to Redis.