1. Download Confluent Kafka
Confluent Kafka download page:
- https://www.confluent.io/download/
Download the free Community edition from that page.
2. The connect-distributed.properties configuration file
The core parameters are shown below. The configuration file is located at:
- /data/src/confluent-7.3.3/etc/schema-registry/connect-distributed.properties
bootstrap.servers=realtime-kafka-001:9092,realtime-kafka-003:9092,realtime-kafka-002:9092
group.id=datasight-confluent-test-debezium-cluster-status
key.converter=org.apache.kafka.connect.json.JsonConverter
value.converter=org.apache.kafka.connect.json.JsonConverter
key.converter.schemas.enable=true
value.converter.schemas.enable=true
config.storage.topic=offline_confluent_test_debezium_cluster_connect_configs
offset.storage.topic=offline_confluent_test_debezium_cluster_connect_offsets
status.storage.topic=offline_confluent_test_debezium_cluster_connect_statuses
config.storage.replication.factor=3
offset.storage.replication.factor=3
status.storage.replication.factor=3
offset.storage.partitions=25
status.storage.partitions=5
config.storage.partitions=1
internal.key.converter=org.apache.kafka.connect.json.JsonConverter
internal.value.converter=org.apache.kafka.connect.json.JsonConverter
internal.key.converter.schemas.enable=true
internal.value.converter.schemas.enable=true
#rest.host.name=0.0.0.0
#rest.port=8083
#rest.advertised.host.name=0.0.0.0
#rest.advertised.port=8083
plugin.path=/data/service/debezium/connectors2
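Kafka Connect creates the three internal topics automatically on first startup, but they can also be pre-created so that the partition counts and the required compacted cleanup policy are explicit. A minimal sketch, assuming the kafka-topics tool shipped with Confluent and the broker list above; topic names, partition counts and replication factors mirror the properties file:

# Optional: pre-create the Connect internal topics (all three must be compacted).
BOOTSTRAP=realtime-kafka-001:9092

/data/src/confluent-7.3.3/bin/kafka-topics --create --bootstrap-server $BOOTSTRAP \
  --topic offline_confluent_test_debezium_cluster_connect_configs \
  --partitions 1 --replication-factor 3 --config cleanup.policy=compact

/data/src/confluent-7.3.3/bin/kafka-topics --create --bootstrap-server $BOOTSTRAP \
  --topic offline_confluent_test_debezium_cluster_connect_offsets \
  --partitions 25 --replication-factor 3 --config cleanup.policy=compact

/data/src/confluent-7.3.3/bin/kafka-topics --create --bootstrap-server $BOOTSTRAP \
  --topic offline_confluent_test_debezium_cluster_connect_statuses \
  --partitions 5 --replication-factor 3 --config cleanup.policy=compact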
3. The connect-distributed startup script
The script is located at:
- /data/src/confluent-7.3.3/bin/connect-distributed
The contents of connect-distributed are shown below; the script normally does not need to be modified.
If you want to export Kafka Connect JMX metrics, you need to configure a JMX port and a JMX exporter (a minimal sketch follows the script below); the detailed deployment steps are covered in this post:
- Debezium系列之:安装jmx导出器监控debezium指标
if [ $# -lt 1 ];
then
  echo "USAGE: $0 [-daemon] connect-distributed.properties"
  exit 1
fi
base_dir=$(dirname $0)
###
### Classpath additions for Confluent Platform releases (LSB-style layout)
###
#cd -P deals with symlink from /bin to /usr/bin
java_base_dir=$( cd -P "$base_dir/../share/java" && pwd )
# confluent-common: required by kafka-serde-tools
# kafka-serde-tools (e.g. Avro serializer): bundled with confluent-schema-registry package
for library in "confluent-security/connect" "kafka" "confluent-common" "kafka-serde-tools" "monitoring-interceptors"; do
  dir="$java_base_dir/$library"
  if [ -d "$dir" ]; then
    classpath_prefix="$CLASSPATH:"
    if [ "x$CLASSPATH" = "x" ]; then
      classpath_prefix=""
    fi
    CLASSPATH="$classpath_prefix$dir/*"
  fi
done
if [ "x$KAFKA_LOG4J_OPTS" = "x" ]; then
LOG4J_CONFIG_DIR_NORMAL_INSTALL="/etc/kafka"
LOG4J_CONFIG_NORMAL_INSTALL="${LOG4J_CONFIG_DIR_NORMAL_INSTALL}/connect-log4j.properties"
LOG4J_CONFIG_DIR_ZIP_INSTALL="$base_dir/../etc/kafka"
LOG4J_CONFIG_ZIP_INSTALL="${LOG4J_CONFIG_DIR_ZIP_INSTALL}/connect-log4j.properties"
if [ -e "$LOG4J_CONFIG_NORMAL_INSTALL" ]; then # Normal install layout
KAFKA_LOG4J_OPTS="-Dlog4j.configuration=file:${LOG4J_CONFIG_NORMAL_INSTALL} -Dlog4j.config.dir=${LOG4J_CONFIG_DIR_NORMAL_INSTALL}"
elif [ -e "${LOG4J_CONFIG_ZIP_INSTALL}" ]; then # Simple zip file layout
KAFKA_LOG4J_OPTS="-Dlog4j.configuration=file:${LOG4J_CONFIG_ZIP_INSTALL} -Dlog4j.config.dir=${LOG4J_CONFIG_DIR_ZIP_INSTALL}"
else # Fallback to normal default
KAFKA_LOG4J_OPTS="-Dlog4j.configuration=file:$base_dir/../config/connect-log4j.properties -Dlog4j.config.dir=$base_dir/../config"
fi
fi
export KAFKA_LOG4J_OPTS
if [ "x$KAFKA_HEAP_OPTS" = "x" ]; then
export KAFKA_HEAP_OPTS="-Xms256M -Xmx2G"
fi
EXTRA_ARGS=${EXTRA_ARGS-'-name connectDistributed'}
COMMAND=$1
case $COMMAND in
  -daemon)
    EXTRA_ARGS="-daemon "$EXTRA_ARGS
    shift
    ;;
  *)
    ;;
esac
export CLASSPATH
exec $(dirname $0)/kafka-run-class $EXTRA_ARGS org.apache.kafka.connect.cli.ConnectDistributed "$@"
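As noted above, JMX metrics can be exposed before the worker is launched. A minimal sketch, assuming the Prometheus JMX exporter agent jar and its config file have already been downloaded to the paths shown; the paths and port numbers are placeholders, and kafka-run-class picks up JMX_PORT and KAFKA_OPTS automatically:

# Hypothetical paths/ports: adjust to your environment.
export JMX_PORT=9999                                # plain remote JMX, enabled by kafka-run-class
export KAFKA_OPTS="-javaagent:/data/service/jmx/jmx_prometheus_javaagent.jar=9404:/data/service/jmx/kafka-connect.yml"
/data/src/confluent-7.3.3/bin/connect-distributed \
  /data/src/confluent-7.3.3/etc/schema-registry/connect-distributed.properties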
4. Start the Kafka Connect cluster
The startup command is:
/data/src/confluent-7.3.3/bin/connect-distributed /data/src/confluent-7.3.3/etc/schema-registry/connect-distributed.properties
When the Kafka Connect cluster starts up normally, the end of the startup log looks like this:
[2023-06-21 16:43:01,249] INFO EnrichedConnectorConfig values:
config.action.reload = restart
connector.class = io.debezium.connector.mysql.MySqlConnector
errors.log.enable = false
errors.log.include.messages = false
errors.retry.delay.max.ms = 60000
errors.retry.timeout = 0
errors.tolerance = none
exactly.once.support = requested
header.converter = null
key.converter = null
name = mysql-dw-valuekey-test
offsets.storage.topic = null
predicates = []
tasks.max = 1
topic.creation.default.exclude = []
topic.creation.default.include = [.*]
topic.creation.default.partitions = 12
topic.creation.default.replication.factor = 3
topic.creation.groups = []
transaction.boundary = poll
transaction.boundary.interval.ms = null
transforms = [unwrap, moveFieldsToHeader, moveHeadersToValue, Reroute]
transforms.Reroute.key.enforce.uniqueness = true
transforms.Reroute.key.field.regex = null
transforms.Reroute.key.field.replacement = null
transforms.Reroute.logical.table.cache.size = 16
transforms.Reroute.negate = false
transforms.Reroute.predicate =
transforms.Reroute.topic.regex = debezium-dw-encryption-test.dw.(.*)
transforms.Reroute.topic.replacement = debezium-test-dw-encryption-all3
transforms.Reroute.type = class io.debezium.transforms.ByLogicalTableRouter
transforms.moveFieldsToHeader.fields = [cdc_code, product]
transforms.moveFieldsToHeader.headers = [product_code, productname]
transforms.moveFieldsToHeader.negate = false
transforms.moveFieldsToHeader.operation = copy
transforms.moveFieldsToHeader.predicate =
transforms.moveFieldsToHeader.type = class org.apache.kafka.connect.transforms.HeaderFrom$Value
transforms.moveHeadersToValue.fields = [product_code2, productname2]
transforms.moveHeadersToValue.headers = [product_code, productname]
transforms.moveHeadersToValue.negate = false
transforms.moveHeadersToValue.operation = copy
transforms.moveHeadersToValue.predicate =
transforms.moveHeadersToValue.type = class io.debezium.transforms.HeaderToValue
transforms.unwrap.add.fields = []
transforms.unwrap.add.headers = []
transforms.unwrap.delete.handling.mode = drop
transforms.unwrap.drop.tombstones = true
transforms.unwrap.negate = false
transforms.unwrap.predicate =
transforms.unwrap.route.by.field =
transforms.unwrap.type = class io.debezium.transforms.ExtractNewRecordState
value.converter = null
(org.apache.kafka.connect.runtime.ConnectorConfig$EnrichedConnectorConfig:376)
[2023-06-21 16:43:01,253] INFO [mysql-dw-valuekey-test|task-0] Loading the custom topic naming strategy plugin: io.debezium.schema.DefaultTopicNamingStrategy (io.debezium.config.CommonConnectorConfig:849)
Jun 21, 2023 4:43:01 PM org.glassfish.jersey.internal.Errors logErrors
WARNING: The following warnings have been detected: WARNING: The (sub)resource method listLoggers in org.apache.kafka.connect.runtime.rest.resources.LoggingResource contains empty path annotation.
WARNING: The (sub)resource method listConnectors in org.apache.kafka.connect.runtime.rest.resources.ConnectorsResource contains empty path annotation.
WARNING: The (sub)resource method createConnector in org.apache.kafka.connect.runtime.rest.resources.ConnectorsResource contains empty path annotation.
WARNING: The (sub)resource method listConnectorPlugins in org.apache.kafka.connect.runtime.rest.resources.ConnectorPluginsResource contains empty path annotation.
WARNING: The (sub)resource method serverInfo in org.apache.kafka.connect.runtime.rest.resources.RootResource contains empty path annotation.
[2023-06-21 16:43:01,482] INFO Started o.e.j.s.ServletContextHandler@2b80497f{/,null,AVAILABLE} (org.eclipse.jetty.server.handler.ContextHandler:921)
[2023-06-21 16:43:01,482] INFO REST resources initialized; server is started and ready to handle requests (org.apache.kafka.connect.runtime.rest.RestServer:324)
[2023-06-21 16:43:01,482] INFO Kafka Connect started (org.apache.kafka.connect.runtime.Connect:56)
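Once the log shows "Kafka Connect started", the worker's REST interface can be used to confirm it is up. A quick check, assuming the default REST port 8083 on the local host:

curl -s http://localhost:8083/
# Returns basic worker info, e.g. {"version":"7.3.3-ce","commit":"...","kafka_cluster_id":"..."}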
5. Load the Debezium plugin
- Download the Debezium connector plugin into the directory configured by plugin.path=/data/service/debezium/connectors2
- Then restart the Kafka Connect cluster; the Debezium plugin is picked up on startup
After restarting the Kafka Connect cluster, verify that the Debezium plugin was loaded successfully.
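One quick way to check, again assuming the worker's REST interface listens on the default port 8083, is to list the installed connector plugins:

curl -s http://localhost:8083/connector-plugins

The response should include io.debezium.connector.mysql.MySqlConnector, as in the output below: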
[
  {"class":"io.debezium.connector.mysql.MySqlConnector","type":"source","version":"2.2.1.Final"},
  {"class":"org.apache.kafka.connect.mirror.MirrorCheckpointConnector","type":"source","version":"7.3.3-ce"},
  {"class":"org.apache.kafka.connect.mirror.MirrorHeartbeatConnector","type":"source","version":"7.3.3-ce"},
  {"class":"org.apache.kafka.connect.mirror.MirrorSourceConnector","type":"source","version":"7.3.3-ce"}
]
6. Summary and next steps
Summary:
- At this point we have a working single-node Kafka Connect cluster. To add more nodes, start Kafka Connect workers with the same configuration (in particular the same group.id and internal topics) on additional servers; together they form a multi-node Kafka Connect cluster.
For more on loading the Debezium plugin with Kafka Connect, see the following posts or the Debezium column:
- Debezium系列之:安装部署debezium详细步骤,并把debezium服务托管到systemctl
- Debezium系列之:打通Debezium2.0以上版本的使用技术
- Debezium系列之:安装部署debezium2.0以上版本的详细步骤
- Debezium系列之:实现接入上千Mysql、Sqlserver、MongoDB、Postgresql数据库的Debezium集群从Debezium1.X版本升级到Debezium2.X版本
- Debezium系列之:安装jmx导出器监控debezium指标
- Debezium系列之:Debezium UI部署详细步骤
- Debezium 专栏 (column index)
Next steps:
- Once the Kafka Connect cluster is formed, start multiple connectors on it to test the cluster's stability and reliability (a simple status-polling sketch follows this list).
- A Kafka Connect UI can also be deployed on top of the cluster.
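For the stability testing mentioned above, a simple approach is to poll each connector's status endpoint periodically. A minimal sketch, assuming the worker's REST port is 8083 and jq is installed:

# Print the connector and task states for every registered connector.
for name in $(curl -s http://localhost:8083/connectors | jq -r '.[]'); do
  curl -s http://localhost:8083/connectors/$name/status | \
    jq '{name: .name, connector: .connector.state, tasks: [.tasks[].state]}'
done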
That concludes this article on deploying a Kafka Connect cluster with Confluent Kafka and loading the Debezium plugin into it.