【Flink-Kafka-To-Hive】Writing Kafka Data to Hive with Flink

This article walks through 【Flink-Kafka-To-Hive】: using Flink to write Kafka data into Hive. Hopefully it is a useful reference; if anything here is wrong or incomplete, corrections and suggestions are welcome.

Requirements:

1. Data is read from Kafka and written into Hive.

2. Job configuration is stored in MySQL and read dynamically at startup.

3. The Kafka cluster in this example uses Kerberos authentication; strip out that part of the configuration if you don't need it.

4. When Flink writes Kafka data to Hive, files are only committed to HDFS on checkpoints, so checkpointing must be enabled.

5. The target table is created in Hive first; the job then reads its schema dynamically (see the sketch after this list).

6. The Kafka messages are JSON; a FlatMap flattens each message and packs the fields into a Row according to the table schema before writing.

7. The Row stream is registered as a temporary view, and the actual write is done with Flink SQL.

8. For local testing, the Hive configuration files (hive-site.xml etc.) must be placed under the resources directory.

9. For local testing, you can edit resources/flink_backup_local.yml and load it with ConfigTools.initConf.
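For requirement 5, the target table must already exist in Hive before the job starts. Below is a minimal sketch of creating one through Flink's Hive dialect; the table name, columns, and partition key are purely illustrative (the original does not show the real schema), and the same DDL could just as well be run in beeline.

import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.SqlDialect;
import org.apache.flink.table.api.TableEnvironment;
import org.apache.flink.table.catalog.hive.HiveCatalog;

public class CreateOdsTable {
    public static void main(String[] args) {
        TableEnvironment tableEnv = TableEnvironment.create(
                EnvironmentSettings.newInstance().inBatchMode().build());
        // Same catalog setup as the job in section 2.5.1; adjust hiveConfDir to your environment
        HiveCatalog hive = new HiveCatalog("myhive", "ods", "/path/to/hive-conf", "3.1.2");
        tableEnv.registerCatalog("myhive", hive);
        tableEnv.useCatalog("myhive");
        tableEnv.getConfig().setSqlDialect(SqlDialect.HIVE);
        // Hypothetical schema: column and partition names are examples only
        tableEnv.executeSql("CREATE TABLE IF NOT EXISTS ods.table_name ("
                + " vin string,"
                + " collect_time bigint"
                + ") PARTITIONED BY (dt string)"
                + " STORED AS ORC"
                + " TBLPROPERTIES ('sink.partition-commit.policy.kind'='metastore')");
    }
}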

1) Import the required dependencies

The dependency list below is fairly redundant; trim or keep entries according to your own needs.
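Before pruning, running mvn dependency:tree is a quick way to see which of these artifacts overlap or pull in conflicting transitive versions.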

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <groupId>example.cn.test</groupId>
    <artifactId>kafka2hive</artifactId>
    <version>1.0.0</version>

    <properties>
        <hbase.version>2.3.3</hbase.version>
        <spark.version>3.0.2</spark.version>
        <scala.version>2.12.10</scala.version>
        <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
        <flink.version>1.14.0</flink.version>
        <scala.binary.version>2.12</scala.binary.version>
        <target.java.version>1.8</target.java.version>
        <maven.compiler.source>${target.java.version}</maven.compiler.source>
        <maven.compiler.target>${target.java.version}</maven.compiler.target>
        <log4j.version>2.17.2</log4j.version>
        <hadoop.version>3.1.2</hadoop.version>
        <hive.version>3.1.2</hive.version>
    </properties>

    <dependencies>
        <dependency>
            <groupId>gaei.cn.x5l.bigdata.common</groupId>
            <artifactId>x5l-bigdata-common</artifactId>
            <version>1.1-SNAPSHOT</version>
            <exclusions>
                <exclusion>
                    <groupId>org.apache.logging.log4j</groupId>
                    <artifactId>log4j-core</artifactId>
                </exclusion>
                <exclusion>
                    <groupId>org.apache.logging.log4j</groupId>
                    <artifactId>log4j-slf4j-impl</artifactId>
                </exclusion>
                <exclusion>
                    <groupId>org.apache.logging.log4j</groupId>
                    <artifactId>log4j-api</artifactId>
                </exclusion>
            </exclusions>
        </dependency>
        <!-- https://mvnrepository.com/artifact/org.apache.flink/flink-dist -->
        <dependency>
            <groupId>org.apache.flink</groupId>
            <artifactId>flink-dist_2.12</artifactId>
            <version>1.14.0-csa1.7.0.0</version>
            <scope>provided</scope>
            <exclusions>
                <exclusion>
                    <groupId>org.slf4j</groupId>
                    <artifactId>slf4j-log4j12</artifactId>
                </exclusion>
            </exclusions>
        </dependency>
        <dependency>
            <groupId>org.apache.flink</groupId>
            <artifactId>flink-connector-jdbc_${scala.binary.version}</artifactId>
            <version>${flink.version}</version>
        </dependency>
        <dependency>
            <groupId>org.jyaml</groupId>
            <artifactId>jyaml</artifactId>
            <version>1.3</version>
        </dependency>
        <dependency>
            <groupId>gaei.cn.x5l</groupId>
            <artifactId>tsp-gb-decode</artifactId>
            <version>1.0.0</version>
            <exclusions>
                <exclusion>
                    <groupId>org.apache.logging.log4j</groupId>
                    <artifactId>log4j-core</artifactId>
                </exclusion>
                <exclusion>
                    <groupId>org.apache.logging.log4j</groupId>
                    <artifactId>log4j-api</artifactId>
                </exclusion>
                <exclusion>
                    <groupId>org.apache.logging.log4j</groupId>
                    <artifactId>log4j-slf4j-impl</artifactId>
                </exclusion>
            </exclusions>
        </dependency>
        <dependency>
            <groupId>mysql</groupId>
            <artifactId>mysql-connector-java</artifactId>
            <version>5.1.44</version>
            <scope>runtime</scope>
        </dependency>
        <dependency>
            <groupId>gaei.cn.x5l.flink.common</groupId>
            <artifactId>x5l-flink-common</artifactId>
            <version>1.2-SNAPSHOT</version>
            <scope>compile</scope>
            <exclusions>
                <exclusion>
                    <artifactId>slf4j-api</artifactId>
                    <groupId>org.slf4j</groupId>
                </exclusion>
                <exclusion>
                    <groupId>org.apache.logging.log4j</groupId>
                    <artifactId>log4j-core</artifactId>
                </exclusion>
                <exclusion>
                    <groupId>org.apache.logging.log4j</groupId>
                    <artifactId>log4j-api</artifactId>
                </exclusion>
                <exclusion>
                    <groupId>org.apache.logging.log4j</groupId>
                    <artifactId>log4j-slf4j-impl</artifactId>
                </exclusion>
                <exclusion>
                    <groupId>org.apache.logging.log4j</groupId>
                    <artifactId>log4j-1.2-api</artifactId>
                </exclusion>
            </exclusions>
        </dependency>
        <!-- Flink Dependency -->
        <dependency>
            <groupId>org.apache.flink</groupId>
            <artifactId>flink-connector-hive_2.12</artifactId>
            <version>1.14.0</version>
        </dependency>
        <!-- https://mvnrepository.com/artifact/org.apache.flink/flink-shaded-hadoop-3 -->
        <dependency>
            <groupId>org.apache.flink</groupId>
            <artifactId>flink-shaded-hadoop-3</artifactId>
            <version>3.1.1.7.2.8.0-224-9.0</version>
            <scope>provided</scope>
            <exclusions>
                <exclusion>
                    <artifactId>slf4j-log4j12</artifactId>
                    <groupId>org.slf4j</groupId>
                </exclusion>
                <exclusion>
                    <groupId>org.apache.logging.log4j</groupId>
                    <artifactId>log4j-core</artifactId>
                </exclusion>
                <exclusion>
                    <groupId>org.apache.logging.log4j</groupId>
                    <artifactId>log4j-api</artifactId>
                </exclusion>
                <exclusion>
                    <groupId>org.apache.logging.log4j</groupId>
                    <artifactId>log4j-slf4j-impl</artifactId>
                </exclusion>
                <exclusion>
                    <groupId>log4j</groupId>
                    <artifactId>log4j</artifactId>
                </exclusion>
            </exclusions>
        </dependency>
        <dependency>
            <groupId>cn.hutool</groupId>
            <artifactId>hutool-all</artifactId>
            <version>5.8.10</version>
        </dependency>
        <dependency>
            <groupId>com.alibaba.ververica</groupId>
            <artifactId>flink-connector-mysql-cdc</artifactId>
            <version>1.4.0</version>
        </dependency>
        <!-- Core Flink dependencies: start -->
        <dependency>
            <groupId>org.apache.flink</groupId>
            <artifactId>flink-java</artifactId>
            <version>${flink.version}</version>
            <scope>provided</scope>
        </dependency>
        <dependency>
            <groupId>org.apache.flink</groupId>
            <artifactId>flink-streaming-java_${scala.binary.version}</artifactId>
            <version>${flink.version}</version>
            <scope>provided</scope>
        </dependency>
        <dependency>
            <groupId>org.apache.flink</groupId>
            <artifactId>flink-clients_${scala.binary.version}</artifactId>
            <version>${flink.version}</version>
            <scope>provided</scope>
        </dependency>
        <!-- Core Flink dependencies: end -->
        <!-- Table API: start -->
        <dependency>
            <groupId>org.apache.flink</groupId>
            <artifactId>flink-table-api-java-bridge_${scala.binary.version}</artifactId>
            <version>1.14.0</version>
            <scope>provided</scope>
        </dependency>
        <!-- Comment this out when using Hive SQL; it can stay enabled otherwise -->
        <dependency>
            <groupId>org.apache.flink</groupId>
            <artifactId>flink-table-planner_${scala.binary.version}</artifactId>
            <version>${flink.version}</version>
            <scope>provided</scope>
        </dependency>
        <dependency>
            <groupId>org.apache.flink</groupId>
            <artifactId>flink-streaming-scala_${scala.binary.version}</artifactId>
            <version>${flink.version}</version>
            <scope>provided</scope>
        </dependency>
        <dependency>
            <groupId>org.apache.flink</groupId>
            <artifactId>flink-table-common</artifactId>
            <version>${flink.version}</version>
            <scope>provided</scope>
        </dependency>
        <dependency>
            <groupId>org.apache.flink</groupId>
            <artifactId>flink-cep_${scala.binary.version}</artifactId>
            <version>${flink.version}</version>
        </dependency>
        <!-- Table API: end -->
        <!-- SQL: start -->
        <!-- SQL formats (JSON/CSV): start -->
        <dependency>
            <groupId>org.apache.flink</groupId>
            <artifactId>flink-json</artifactId>
            <version>${flink.version}</version>
            <scope>provided</scope>
        </dependency>
        <dependency>
            <groupId>org.apache.flink</groupId>
            <artifactId>flink-csv</artifactId>
            <version>${flink.version}</version>
            <scope>provided</scope>
        </dependency>
        <!-- SQL formats: end -->
        <!-- SQL Kafka connector -->
        <dependency>
            <groupId>org.apache.flink</groupId>
            <artifactId>flink-sql-connector-kafka_${scala.binary.version}</artifactId>
            <version>${flink.version}</version>
        </dependency>
        <!-- SQL: end -->
        <!-- Checkpointing / state processor -->
        <dependency>
            <groupId>org.apache.flink</groupId>
            <artifactId>flink-state-processor-api_${scala.binary.version}</artifactId>
            <version>${flink.version}</version>
            <scope>provided</scope>
        </dependency>
        <dependency>
            <groupId>org.apache.flink</groupId>
            <artifactId>flink-connector-kafka_${scala.binary.version}</artifactId>
            <version>${flink.version}</version>
        </dependency>
        <dependency>
            <groupId>commons-lang</groupId>
            <artifactId>commons-lang</artifactId>
            <version>2.5</version>
            <scope>compile</scope>
        </dependency>
        <dependency>
            <groupId>org.apache.flink</groupId>
            <artifactId>flink-runtime-web_${scala.binary.version}</artifactId>
            <version>${flink.version}</version>
            <scope>provided</scope>
        </dependency>
        <!-- Local web UI for monitoring: end -->
        <!-- Logging -->
        <dependency>
            <groupId>org.apache.logging.log4j</groupId>
            <artifactId>log4j-slf4j-impl</artifactId>
            <version>${log4j.version}</version>
            <scope>runtime</scope>
        </dependency>
        <dependency>
            <groupId>org.apache.logging.log4j</groupId>
            <artifactId>log4j-api</artifactId>
            <version>${log4j.version}</version>
            <scope>runtime</scope>
        </dependency>
        <dependency>
            <groupId>org.apache.logging.log4j</groupId>
            <artifactId>log4j-core</artifactId>
            <version>${log4j.version}</version>
            <scope>runtime</scope>
        </dependency>
        <!-- hdfs -->
        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-client</artifactId>
            <version>3.3.1</version>
        </dependency>
        <!-- Important: an easily overlooked jar -->
        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-auth</artifactId>
            <version>${hadoop.version}</version>
        </dependency>
        <!-- RocksDB state backend -->
        <dependency>
            <groupId>org.apache.flink</groupId>
            <artifactId>flink-statebackend-rocksdb_${scala.binary.version}</artifactId>
            <version>${flink.version}</version>
            <scope>provided</scope>
        </dependency>
        <!-- Miscellaneous -->
        <dependency>
            <groupId>com.alibaba</groupId>
            <artifactId>fastjson</artifactId>
            <version>1.1.23</version>
        </dependency>
        <dependency>
            <groupId>org.projectlombok</groupId>
            <artifactId>lombok</artifactId>
            <version>1.16.18</version>
            <scope>provided</scope>
        </dependency>
        <!-- kafka2mongo offline job -->
        <dependency>
            <groupId>org.mongodb</groupId>
            <artifactId>mongodb-driver</artifactId>
            <version>3.12.6</version>
        </dependency>
    </dependencies>
    <build>
        <plugins>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-shade-plugin</artifactId>
                <version>3.0.0</version>
                <executions>
                    <execution>
                        <phase>package</phase>
                        <goals>
                            <goal>shade</goal>
                        </goals>
                        <configuration>
                            <createDependencyReducedPom>false</createDependencyReducedPom>
                            <artifactSet>
                                <excludes>
                                    <exclude>org.apache.flink:force-shading</exclude>
                                    <exclude>com.google.code.findbugs:jsr305</exclude>
                                    <exclude>org.slf4j:*</exclude>
                                    <exclude>org.apache.logging.log4j:*</exclude>
                                    <exclude>org.apache.flink:flink-runtime-web_${scala.binary.version}</exclude>
                                </excludes>
                            </artifactSet>
                            <filters>
                                <filter>
                                    <artifact>*:*</artifact>
                                    <excludes>
                                        <exclude>META-INF/*.SF</exclude>
                                        <exclude>META-INF/*.DSA</exclude>
                                        <exclude>META-INF/*.RSA</exclude>
                                    </excludes>
                                </filter>
                            </filters>
                            <transformers>
                                <transformer
                                        implementation="org.apache.maven.plugins.shade.resource.ManifestResourceTransformer">
                                    <mainClass>com.owp.flink.kafka.KafkaSourceDemo</mainClass>
                                </transformer>
                                <!-- Required for Flink SQL -->
                                <!-- The service transformer is needed to merge META-INF/services files -->
                                <transformer
                                        implementation="org.apache.maven.plugins.shade.resource.ServicesResourceTransformer"/>
                                <!-- ... -->
                            </transformers>
                        </configuration>
                    </execution>
                </executions>
            </plugin>
        </plugins>

        <pluginManagement>
            <plugins>
                <!-- This improves the out-of-the-box experience in Eclipse by resolving some warnings. -->
                <plugin>
                    <groupId>org.eclipse.m2e</groupId>
                    <artifactId>lifecycle-mapping</artifactId>
                    <version>1.0.0</version>
                    <configuration>
                        <lifecycleMappingMetadata>
                            <pluginExecutions>
                                <pluginExecution>
                                    <pluginExecutionFilter>
                                        <groupId>org.apache.maven.plugins</groupId>
                                        <artifactId>maven-shade-plugin</artifactId>
                                        <versionRange>[3.0.0,)</versionRange>
                                        <goals>
                                            <goal>shade</goal>
                                        </goals>
                                    </pluginExecutionFilter>
                                    <action>
                                        <ignore/>
                                    </action>
                                </pluginExecution>
                                <pluginExecution>
                                    <pluginExecutionFilter>
                                        <groupId>org.apache.maven.plugins</groupId>
                                        <artifactId>maven-compiler-plugin</artifactId>
                                        <versionRange>[3.1,)</versionRange>
                                        <goals>
                                            <goal>testCompile</goal>
                                            <goal>compile</goal>
                                        </goals>
                                    </pluginExecutionFilter>
                                    <action>
                                        <ignore/>
                                    </action>
                                </pluginExecution>
                            </pluginExecutions>
                        </lifecycleMappingMetadata>
                    </configuration>
                </plugin>
            </plugins>
        </pluginManagement>

    </build>
    <repositories>
        <repository>
            <id>cdh.releases.repo</id>
            <url>https://repository.cloudera.com/artifactory/libs-release-local/</url>
            <name>Releases Repository</name>
        </repository>
    </repositories>
</project>

2) Code implementation

2.1.resources

2.1.1.appconfig.yml

mysql.url: "jdbc:mysql://1.1.1.1:3306/test?useSSL=false"
mysql.username: "test"
mysql.password: "123456"
mysql.driver: "com.mysql.jdbc.Driver"

2.1.2.log4j.properties

log4j.rootLogger=info, stdout
log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=%-4r [%t] %-5p %c %x - %m%n

2.1.3.log4j2.xml

<?xml version="1.0" encoding="UTF-8"?>
<configuration monitorInterval="5">
    <Properties>
        <property name="LOG_PATTERN" value="%date{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n" />
        <property name="LOG_LEVEL" value="ERROR" />
    </Properties>

    <appenders>
        <console name="console" target="SYSTEM_OUT">
            <PatternLayout pattern="${LOG_PATTERN}"/>
            <ThresholdFilter level="${LOG_LEVEL}" onMatch="ACCEPT" onMismatch="DENY"/>
        </console>
        <File name="log" fileName="tmp/log/job.log" append="false">
            <PatternLayout pattern="%d{HH:mm:ss.SSS} %-5level %class{36} %L %M - %msg%xEx%n"/>
        </File>
    </appenders>

    <loggers>
        <root level="${LOG_LEVEL}">
            <appender-ref ref="console"/>
            <appender-ref ref="log"/>
        </root>
    </loggers>
</configuration>

2.1.4.flink_backup_local.yml

hdfs:
  checkPointPath: 'hdfs://nameserver/user/flink/rocksdbcheckpoint'
  checkpointTimeout: 360000
  checkpointing: 300000
  maxConcurrentCheckpoints: 1
  minPauseBetweenCheckpoints: 10000
  restartInterval: 60
  restartStrategy: 3
hive:
  defaultDatabase: 'ods'
  hiveConfDir: 'D:/WorkSpace/bigdata-flink-backup/kafka2hive/src/main/resources/'
  sourceTopic: 'topicA,topicB'
  tableName: 'table_name'
kafka-consumer:
  prop:
    auto.offset.reset: 'earliest'
    bootstrap.servers: 'kfk01:9092,kfk02:9092,kfk03:9092'
    enable.auto.commit: 'false'
    fetch.max.bytes: '52428700'
    group.id: 'test'
    isKerberized: '1'
    keytab: 'D:/keytab/test.keytab'
    krb5Conf: 'D:/keytab/krb5.conf'
    max.poll.interval.ms: '300000'
    max.poll.records: '1000'
    principal: 'test@PRE.TEST.COM'
    security_protocol: 'SASL_PLAINTEXT'
    serviceName: 'kafka'
    session.timeout.ms: '600000'
    useTicketCache: 'false'
  topics: 'topicA,topicB'
kafka-producer:
  defaultTopic: 'kafka2hive_error'
  prop:
    acks: 'all'
    batch.size: '1048576'
    bootstrap.servers: 'kfk01:9092,kfk02:9092,kfk03:9092'
    compression.type: 'lz4'
    key.serializer: 'org.apache.kafka.common.serialization.StringSerializer'
    retries: '3'
    value.serializer: 'org.apache.kafka.common.serialization.StringSerializer'

2.2.utils

2.2.1.DBConn

import java.sql.*;

public class DBConn {


    private static final String driver = "com.mysql.jdbc.Driver"; // MySQL JDBC driver
    private static Connection conn = null;

    /**
     * Open a database connection.
     * @return the connection, or null if it could not be opened
     */
    public static Connection conn(String url, String username, String password) {
        try {
            Class.forName(driver); // load the JDBC driver
            try {
                conn = DriverManager.getConnection(url, username, password); // connect, keeping the reference so close() can release it
            } catch (SQLException e) {
                e.printStackTrace();
            }
        } catch (ClassNotFoundException e) {
            e.printStackTrace();
        }
        return conn;
    }

    /**
     * Close the database connection.
     */
    public static void close() {
        if (conn != null) {
            try {
                conn.close(); // close the connection
            } catch (SQLException e) {
                e.printStackTrace();
            }
        }
    }
}
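As a quick sanity check, here is a hypothetical usage sketch of DBConn; the URL and credentials mirror appconfig.yml above, and the query matches the one ConfigTools issues later (the app_name/config_name values are examples only).

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class DBConnDemo {
    public static void main(String[] args) throws SQLException {
        Connection conn = DBConn.conn("jdbc:mysql://1.1.1.1:3306/test?useSSL=false", "test", "123456");
        try (PreparedStatement ps = conn.prepareStatement(
                "select config_context from app_config where app_name = ? and config_name = ?")) {
            ps.setString(1, "example.Kafka2Hive_ODS"); // hypothetical app_name value
            ps.setString(2, "local");                  // profile name
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    System.out.println(rs.getString("config_context"));
                }
            }
        } finally {
            DBConn.close(); // works because conn() stores the connection in the static field
        }
    }
}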

2.2.2.CommonUtils

@Slf4j
public class CommonUtils {
    public static StreamExecutionEnvironment setCheckpoint(StreamExecutionEnvironment env) throws IOException {
//        ConfigTools.initConf("local");
        Map hdfsMap = (Map) ConfigTools.mapConf.get("hdfs");
        env.enableCheckpointing(((Integer) hdfsMap.get("checkpointing")).longValue(), CheckpointingMode.EXACTLY_ONCE); // offsets are committed on checkpoint completion, so commits lag by up to one interval
        env.getCheckpointConfig().setMinPauseBetweenCheckpoints(((Integer) hdfsMap.get("minPauseBetweenCheckpoints")).longValue());
        env.getCheckpointConfig().setCheckpointTimeout(((Integer) hdfsMap.get("checkpointTimeout")).longValue());
        env.getCheckpointConfig().setMaxConcurrentCheckpoints((Integer) hdfsMap.get("maxConcurrentCheckpoints"));
        env.getCheckpointConfig().enableExternalizedCheckpoints(CheckpointConfig.ExternalizedCheckpointCleanup.RETAIN_ON_CANCELLATION);
        env.setRestartStrategy(RestartStrategies.fixedDelayRestart(
                (Integer) hdfsMap.get("restartStrategy"), // restart attempts; don't set this too low, distributed jobs hit transient failures routinely, 3-5 is a sensible range
                Time.of(((Integer) hdfsMap.get("restartInterval")).longValue(), TimeUnit.SECONDS) // delay between attempts
        ));
        // Tolerable checkpoint failures; the default 0 means the job fails on the first failed checkpoint
        env.getCheckpointConfig().setTolerableCheckpointFailureNumber(2);

        // State backend
        env.setStateBackend(new RocksDBStateBackend((String) hdfsMap.get("checkPointPath"), true));
//        env.setStateBackend(new FsStateBackend((String) hdfsMap.get("checkPointPath"), true));
//        env.setStateBackend(new HashMapStateBackend());
        return env;
    }

    public static FlinkKafkaConsumer<ConsumerRecord<String, String>> getKafkaConsumer(Map<String, Object> kafkaConf) throws IOException {
        String[] topics = ((String) kafkaConf.get("topics")).split(",");
        log.info("监听的topic: {}", topics);
        Properties properties = new Properties();
        Map<String, String> kafkaProp = (Map<String, String>) kafkaConf.get("prop");
        for (String key : kafkaProp.keySet()) {
            properties.setProperty(key, kafkaProp.get(key).toString());
        }

        if (!StringUtils.isBlank((String) kafkaProp.get("isKerberized")) && "1".equals(kafkaProp.get("isKerberized"))) {
            System.setProperty("java.security.krb5.conf", kafkaProp.get("krb5Conf"));
            properties.put("security.protocol", kafkaProp.get("security_protocol"));
            properties.put("sasl.jaas.config", "com.sun.security.auth.module.Krb5LoginModule required "
                    + "useTicketCache=" + kafkaProp.get("useTicketCache") + " "

                    + "serviceName=\"" + kafkaProp.get("serviceName") + "\" "
                    + "useKeyTab=true "
                    + "keyTab=\"" + kafkaProp.get("keytab").toString() + "\" "
                    + "principal=\"" + kafkaProp.get("principal").toString() + "\";");
        }

        properties.put("key.serializer", "org.apache.flink.kafka.shaded.org.apache.kafka.common.serialization.ByteArrayDeserializer");
        properties.put("value.serializer", "org.apache.flink.kafka.shaded.org.apache.kafka.common.serialization.ByteArrayDeserializer");

        FlinkKafkaConsumer<ConsumerRecord<String, String>> consumerRecordFlinkKafkaConsumer = new FlinkKafkaConsumer<ConsumerRecord<String, String>>(Arrays.asList(topics), new KafkaDeserializationSchema<ConsumerRecord<String, String>>() {
            @Override
            public TypeInformation<ConsumerRecord<String, String>> getProducedType() {
                return TypeInformation.of(new TypeHint<ConsumerRecord<String, String>>() {
                });
            }

            @Override
            public boolean isEndOfStream(ConsumerRecord<String, String> stringStringConsumerRecord) {
                return false;
            }

            @Override
            public ConsumerRecord<String, String> deserialize(ConsumerRecord<byte[], byte[]> record) throws Exception {
                return new ConsumerRecord<String, String>(
                        record.topic(),
                        record.partition(),
                        record.offset(),
                        record.timestamp(),
                        record.timestampType(),
                        record.checksum(),
                        record.serializedKeySize(),
                        record.serializedValueSize(),
                        new String(record.key() == null ? "".getBytes(StandardCharsets.UTF_8) : record.key(), StandardCharsets.UTF_8),
                        new String(record.value() == null ? "{}".getBytes(StandardCharsets.UTF_8) : record.value(), StandardCharsets.UTF_8));
            }
        }, properties);
        return consumerRecordFlinkKafkaConsumer;
    }
}

2.3.conf

2.3.1.ConfigTools

@Slf4j
public class ConfigTools {

    public static Map<String, Object> mapConf;

    /**
     * Load the configuration for the given profile from a YAML file on the classpath.
     *
     * @param option profile name, e.g. "local"
     */
    public static void initConf(String option) {
        String confFile = "/flink_backup_" + option + ".yml";
        try {
            InputStream dumpFile = ConfigTools.class.getResourceAsStream(confFile);
            mapConf = Yaml.loadType(dumpFile, HashMap.class);
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

    /**
     * Load the configuration for the given profile from MySQL (the connection info comes from appconfig.yml).
     *
     * @param option profile name, e.g. "local"
     */
    public static void initMySqlConf(String option, Class clazz) {
        String className = clazz.getName();
        String confFile = "/appconfig.yml";
        Map<String, String> mysqlConf;
        try {
            InputStream dumpFile = ConfigTools.class.getResourceAsStream(confFile);
            mysqlConf = Yaml.loadType(dumpFile, HashMap.class);
            String username = mysqlConf.get("mysql.username");
            String password = mysqlConf.get("mysql.password");
            String url = mysqlConf.get("mysql.url");
            Connection conn = DBConn.conn(url, username, password);
            Map<String, Object> config = getConfig(conn, className, option);

            if (config == null || config.size() == 0) {
                log.error("Failed to load config from MySQL");
                return;
            }

            mapConf = config;

        } catch (Exception e) {
            e.printStackTrace();
        }

    }

    private static Map<String, Object> getConfig(Connection conn, String className, String option) throws SQLException {
        PreparedStatement preparedStatement = null;
        try {
            String sql = "select config_context from app_config where app_name = '%s' and config_name = '%s'";
            preparedStatement = conn.prepareStatement(String.format(sql, className, option));

            ResultSet rs = preparedStatement.executeQuery();

            String config_context = "";
            while (rs.next()) {
                config_context = rs.getString("config_context");
            }
            System.out.println("配置信息config_context:"+config_context);
//            if(StringUtils.isNotBlank(config_context)){
//                System.out.println(JSONObject.toJSONString(JSONObject.parseObject(config_context), SerializerFeature.PrettyFormat));
//            }
            Map<String, Object> mysqlConfMap = JSON.parseObject(config_context, Map.class);
            return mysqlConfMap;
        }finally {
            if (preparedStatement != null) {
                preparedStatement.close();
            }
            if (conn != null) {
                conn.close();
            }
        }
    }

    public static void main(String[] args) {
//        initMySqlConf("local", TboxPeriodBackoutA3K.class);
        initConf("local");
        String s = JSON.toJSONString(mapConf);
        System.out.println(s);
    }
}
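getConfig assumes an app_config table in MySQL that the original never shows; the sketch below is the schema inferred from its query, with one JSON document per application/profile pair. The column types and lengths are assumptions.

import java.sql.Connection;
import java.sql.Statement;

public class AppConfigSetup {
    public static void main(String[] args) throws Exception {
        Connection conn = DBConn.conn("jdbc:mysql://1.1.1.1:3306/test?useSSL=false", "test", "123456");
        try (Statement st = conn.createStatement()) {
            // Schema inferred from the query in getConfig(); adjust to taste
            st.execute("CREATE TABLE IF NOT EXISTS app_config ("
                    + " app_name varchar(255) NOT NULL,"    // fully qualified main-class name
                    + " config_name varchar(64) NOT NULL,"  // profile, e.g. 'local' or 'prod'
                    + " config_context text,"               // the whole job config as one JSON document
                    + " PRIMARY KEY (app_name, config_name))");
        } finally {
            DBConn.close();
        }
    }
}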

2.4.po

2.4.1.SchemaPo

/**
 * Describes one Hive column: its name (signal) and its type.
 */
@Data
@AllArgsConstructor
@NoArgsConstructor
public class SchemaPo implements Serializable {
    private String signal;
    private String type;
}

2.5.kafka2hive

2.5.1.Kafka2Hive_ODS

Data fetched from Kafka is written into the Hive ODS layer without any transformation.

public class Kafka2Hive_ODS {

    public static Logger logger = Logger.getLogger(Kafka2Hive_ODS.class);
    
    public static void main(String[] args) throws Exception {
        ConfigTools.initMySqlConf(args[0], Kafka2Hive_ODS.class); // args[0] selects the config profile, e.g. "local"
        Map<String, Object> mapConf = ConfigTools.mapConf;
        Map<String, Object> kafkaConsumerConf = (Map<String, Object>) mapConf.get("kafka-consumer");
        Map<String, Object> hiveConf = (Map<String, Object>) mapConf.get("hive");


        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment().disableOperatorChaining();

        CommonUtils.setCheckpoint(env);

        EnvironmentSettings fsSettings = EnvironmentSettings
                .newInstance()
                .useBlinkPlanner()
                .inStreamingMode()
                .build();

        StreamTableEnvironment tableEnv = StreamTableEnvironment.create(env, fsSettings);

        // Use the Hive SQL dialect
        tableEnv.getConfig().setSqlDialect(SqlDialect.HIVE);
        // Catalog name; any name will do
        String name = "myhive";
        // Default database inside Hive (not the MySQL database backing the Hive metastore)
        String defaultDatabase = "ods";
        // Directory containing hive-site.xml
        String hiveConfDir = (String) hiveConf.get("hiveConfDir");
        // Create the HiveCatalog
        HiveCatalog hive = new HiveCatalog(name, defaultDatabase, hiveConfDir, "3.1.2");
        // Register the HiveCatalog
        tableEnv.registerCatalog("myhive", hive);
        // Make it the current catalog
        tableEnv.useCatalog("myhive");


//        FlinkKafkaConsumer<String> myConsumer = CommonUtils.getKafkaConsumer();
        FlinkKafkaConsumer<ConsumerRecord<String, String>> myConsumer = CommonUtils.getKafkaConsumer(kafkaConsumerConf);
        DataStream<ConsumerRecord<String, String>> stream = env.addSource(myConsumer);

        String tableName = (String) hiveConf.get("tableName");

        List<FieldSchema> schemas = hive.getHiveTable(new ObjectPath(defaultDatabase, tableName)).getSd().getCols();
        List<FieldSchema> partitionKeys = hive.getHiveTable(new ObjectPath(defaultDatabase, tableName)).getPartitionKeys();
        schemas.addAll(partitionKeys);

        List<SchemaPo> schemaPos = new ArrayList<>();
        List<String> fieldLists = new ArrayList<>();
        List<TypeInformation> typeList = new ArrayList<>();
        for (FieldSchema schema : schemas) {
            SchemaPo schemaPo = new SchemaPo();
            schemaPo.setSignal(schema.getName());
            schemaPo.setType(schema.getType());
            schemaPos.add(schemaPo);
            fieldLists.add(schema.getName());
            String type = schema.getType();
            if (type.equalsIgnoreCase("bigint")) {
                typeList.add(Types.LONG);
            } else {
                typeList.add(Types.STRING);
            }
        }

        String[] fieldNames = fieldLists.toArray(new String[fieldLists.size()]);
        TypeInformation[] types = typeList.toArray(new TypeInformation[typeList.size()]);

        SingleOutputStreamOperator<Row> originalRow = stream.flatMap(new KafkaMsgFormatFunction(schemaPos), new RowTypeInfo(types, fieldNames)).uid("ORIGINAL");

        tableEnv.createTemporaryView("originalRow", originalRow);
        StringBuilder sql = new StringBuilder();
        // Commit partitions to the metastore on checkpoint completion; the table name comes from the config
        tableEnv.executeSql("alter table `" + tableName + "` set TBLPROPERTIES ('sink.partition-commit.policy.kind'='metastore')");
        sql.append("insert into `").append(tableName).append("` select ");
        sql.append(" * ");
        sql.append(" from `myhive`.`ods`.originalRow");
        tableEnv.executeSql(sql.toString());
//        env.execute();
    }


    static class KafkaMsgFormatFunction extends RichFlatMapFunction<ConsumerRecord<String,String>, Row> {

        private List<SchemaPo> schemaPos;

        public KafkaMsgFormatFunction(List<SchemaPo> schemaPos) {
            this.schemaPos = schemaPos;
        }

        @Override
        public void open(Configuration parameters) {
        
        }

        @Override
        public void flatMap(ConsumerRecord<String, String> record, Collector<Row> out) {
            try {
                HashMap<String, Object> infoMap = JSON.parseObject(record.value(), HashMap.class);

                // Lower-case the JSON keys so they line up with the Hive column names
                Map<String, String> resultMap = new HashMap<>();
                for (String signalKey : infoMap.keySet()) {
                    resultMap.put(signalKey.toLowerCase(), String.valueOf(infoMap.get(signalKey)));
                }

                Row row = new Row(schemaPos.size());
                for (int i = 0; i < schemaPos.size(); i++) {
                    SchemaPo schemaPo = schemaPos.get(i);
                    String v = resultMap.get(schemaPo.getSignal());
                    if (StringUtils.isBlank(v)) {
                        row.setField(i, null);
                        continue;
                    }
                    if ("bigint".equalsIgnoreCase(schemaPo.getType())) {
                        row.setField(i, Long.valueOf(v));
                    } else {
                        row.setField(i, v);
                    }
                }
                out.collect(row);
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
    }
}
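To run the job, build the fat jar with mvn package (the shade plugin above bundles the dependencies and merges META-INF/services, which Flink SQL needs) and submit it with the config profile as the first program argument, along the lines of: flink run -c <main-class> kafka2hive-1.0.0.jar prod. The main class and jar name here are assumptions that must match your own build, and the profile must exist as a config_name value in the app_config table.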

That concludes 【Flink-Kafka-To-Hive】: writing Kafka data to Hive with Flink.
