Flink Environment Variable


Preface

When we submit a Flink job from the command line (the command being ./flink run-application -t yarn-application), we can set configuration values to match the needs of the job instead of relying on the defaults in flink-conf.yaml. Because Flink cannot decide things like operator parallelism on its own based on the workload, setting them manually per job is often necessary, and doing that requires knowing what flink-conf.yaml contains.

Using these settings on the command line is simple: take the key/value pair as it appears in the file and prefix it with -D. Everything below is based on Flink 1.17.1.
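For example, a submission might look like the following sketch (the memory sizes, slot count, parallelism and the example jar are placeholder values for illustration, not recommendations):

./flink run-application -t yarn-application \
  -Djobmanager.memory.process.size=2048m \
  -Dtaskmanager.memory.process.size=4096m \
  -Dtaskmanager.numberOfTaskSlots=4 \
  -Dparallelism.default=4 \
  ./examples/streaming/TopSpeedWindowing.jar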

Original Content

Almost everything in the Flink configuration file flink-conf.yaml is expressed as key/value pairs.

When a Flink process starts, it parses flink-conf.yaml into a list of key/value pairs and looks up the value for whichever keys it needs during startup.

The JAVA_HOME Flink uses is whatever the current environment provides by default; to use a different Java installation, set env.java.home in this configuration file.

The unpacked Flink distribution contains a conf folder, and normally we edit the flink-conf.yaml inside it. For non-session deployment modes, we can also copy that folder somewhere else and point the environment variable FLINK_CONF_DIR at the copy, so that different jobs can run with different configurations.
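A minimal sketch of that workflow (the paths /opt/flink and /etc/flink/job-a-conf and the jar my-job.jar are made up for illustration):

# copy the shipped conf folder and tailor the copy for one job
cp -r /opt/flink/conf /etc/flink/job-a-conf
vi /etc/flink/job-a-conf/flink-conf.yaml

# point Flink at the tailored folder before submitting
export FLINK_CONF_DIR=/etc/flink/job-a-conf
/opt/flink/bin/flink run-application -t yarn-application ./my-job.jar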

################################################################################
#  Licensed to the Apache Software Foundation (ASF) under one
#  or more contributor license agreements.  See the NOTICE file
#  distributed with this work for additional information
#  regarding copyright ownership.  The ASF licenses this file
#  to you under the Apache License, Version 2.0 (the
#  "License"); you may not use this file except in compliance
#  with the License.  You may obtain a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
#  Unless required by applicable law or agreed to in writing, software
#  distributed under the License is distributed on an "AS IS" BASIS,
#  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
#  See the License for the specific language governing permissions and
# limitations under the License.
################################################################################


#==============================================================================
# Common
#==============================================================================

# The external address of the host on which the JobManager runs and can be
# reached by the TaskManagers and any clients which want to connect. This setting
# is only used in Standalone mode and may be overwritten on the JobManager side
# by specifying the --host <hostname> parameter of the bin/jobmanager.sh executable.
# In high availability mode, if you use the bin/start-cluster.sh script and setup
# the conf/masters file, this will be taken care of automatically. Yarn
# automatically configure the host name based on the hostname of the node where the
# JobManager runs.

jobmanager.rpc.address: localhost

# The RPC port where the JobManager is reachable.

jobmanager.rpc.port: 6123


# The total process memory size for the JobManager.
#
# Note this accounts for all memory usage within the JobManager process, including JVM metaspace and other overhead.

jobmanager.memory.process.size: 1600m


# The total process memory size for the TaskManager.
#
# Note this accounts for all memory usage within the TaskManager process, including JVM metaspace and other overhead.

taskmanager.memory.process.size: 1728m

# To exclude JVM metaspace and overhead, please, use total Flink memory size instead of 'taskmanager.memory.process.size'.
# It is not recommended to set both 'taskmanager.memory.process.size' and Flink memory.
#
# taskmanager.memory.flink.size: 1280m

# The number of task slots that each TaskManager offers. Each slot runs one parallel pipeline.

taskmanager.numberOfTaskSlots: 1

# The parallelism used for programs that did not specify and other parallelism.

parallelism.default: 1

# The default file system scheme and authority.
# 
# By default file paths without scheme are interpreted relative to the local
# root file system 'file:///'. Use this to override the default and interpret
# relative paths relative to a different file system,
# for example 'hdfs://mynamenode:12345'
#
# fs.default-scheme

#==============================================================================
# High Availability
#==============================================================================

# The high-availability mode. Possible options are 'NONE' or 'zookeeper'.
#
# high-availability: zookeeper

# The path where metadata for master recovery is persisted. While ZooKeeper stores
# the small ground truth for checkpoint and leader election, this location stores
# the larger objects, like persisted dataflow graphs.
# 
# Must be a durable file system that is accessible from all nodes
# (like HDFS, S3, Ceph, nfs, ...) 
#
# high-availability.storageDir: hdfs:///flink/ha/

# The list of ZooKeeper quorum peers that coordinate the high-availability
# setup. This must be a list of the form:
# "host1:clientPort,host2:clientPort,..." (default clientPort: 2181)
#
# high-availability.zookeeper.quorum: localhost:2181


# ACL options are based on https://zookeeper.apache.org/doc/r3.1.2/zookeeperProgrammers.html#sc_BuiltinACLSchemes
# It can be either "creator" (ZOO_CREATE_ALL_ACL) or "open" (ZOO_OPEN_ACL_UNSAFE)
# The default value is "open" and it can be changed to "creator" if ZK security is enabled
#
# high-availability.zookeeper.client.acl: open

#==============================================================================
# Fault tolerance and checkpointing
#==============================================================================

# The backend that will be used to store operator state checkpoints if
# checkpointing is enabled. Checkpointing is enabled when execution.checkpointing.interval > 0.
#
# Execution checkpointing related parameters. Please refer to CheckpointConfig and ExecutionCheckpointingOptions for more details.
#
# execution.checkpointing.interval: 3min
# execution.checkpointing.externalized-checkpoint-retention: [DELETE_ON_CANCELLATION, RETAIN_ON_CANCELLATION]
# execution.checkpointing.max-concurrent-checkpoints: 1
# execution.checkpointing.min-pause: 0
# execution.checkpointing.mode: [EXACTLY_ONCE, AT_LEAST_ONCE]
# execution.checkpointing.timeout: 10min
# execution.checkpointing.tolerable-failed-checkpoints: 0
# execution.checkpointing.unaligned: false
#
# Supported backends are 'jobmanager', 'filesystem', 'rocksdb', or the
# <class-name-of-factory>.
#
# state.backend: filesystem

# Directory for checkpoints filesystem, when using any of the default bundled
# state backends.
#
# state.checkpoints.dir: hdfs://namenode-host:port/flink-checkpoints

# Default target directory for savepoints, optional.
#
# state.savepoints.dir: hdfs://namenode-host:port/flink-savepoints

# Flag to enable/disable incremental checkpoints for backends that
# support incremental checkpoints (like the RocksDB state backend). 
#
# state.backend.incremental: false

# The failover strategy, i.e., how the job computation recovers from task failures.
# Only restart tasks that may have been affected by the task failure, which typically includes
# downstream tasks and potentially upstream tasks if their produced data is no longer available for consumption.

jobmanager.execution.failover-strategy: region

#==============================================================================
# Rest & web frontend
#==============================================================================

# The port to which the REST client connects to. If rest.bind-port has
# not been specified, then the server will bind to this port as well.
#
#rest.port: 8081
# The address to which the REST client will connect to
#
#rest.address: 0.0.0.0

# Port range for the REST and web server to bind to.
#
#rest.bind-port: 8080-8090

# The address that the REST & web server binds to
#
#rest.bind-address: 0.0.0.0

# Flag to specify whether job submission is enabled from the web-based
# runtime monitor. Uncomment to disable.

#web.submit.enable: false

# Flag to specify whether job cancellation is enabled from the web-based
# runtime monitor. Uncomment to disable.

#web.cancel.enable: false

#==============================================================================
# Advanced
#==============================================================================

# Override the directories for temporary files. If not specified, the
# system-specific Java temporary directory (java.io.tmpdir property) is taken.
#
# For framework setups on Yarn, Flink will automatically pick up the
# containers' temp directories without any need for configuration.
#
# Add a delimited list for multiple directories, using the system directory
# delimiter (colon ':' on unix) or a comma, e.g.:
#     /data1/tmp:/data2/tmp:/data3/tmp
#
# Note: Each directory entry is read from and written to by a different I/O
# thread. You can include the same directory multiple times in order to create
# multiple I/O threads against that directory. This is for example relevant for
# high-throughput RAIDs.
#
# io.tmp.dirs: /tmp

# The classloading resolve order. Possible values are 'child-first' (Flink's default)
# and 'parent-first' (Java's default).
#
# Child first classloading allows users to use different dependency/library
# versions in their application than those in the classpath. Switching back
# to 'parent-first' may help with debugging dependency issues.
#
# classloader.resolve-order: child-first

# The amount of memory going to the network stack. These numbers usually need 
# no tuning. Adjusting them may be necessary in case of an "Insufficient number
# of network buffers" error. The default min is 64MB, the default max is 1GB.
# 
# taskmanager.memory.network.fraction: 0.1
# taskmanager.memory.network.min: 64mb
# taskmanager.memory.network.max: 1gb

#==============================================================================
# Flink Cluster Security Configuration
#==============================================================================

# Kerberos authentication for various components - Hadoop, ZooKeeper, and connectors -
# may be enabled in four steps:
# 1. configure the local krb5.conf file
# 2. provide Kerberos credentials (either a keytab or a ticket cache w/ kinit)
# 3. make the credentials available to various JAAS login contexts
# 4. configure the connector to use JAAS/SASL

# The below configure how Kerberos credentials are provided. A keytab will be used instead of
# a ticket cache if the keytab path and principal are set.

# security.kerberos.login.use-ticket-cache: true
# security.kerberos.login.keytab: /path/to/kerberos/keytab
# security.kerberos.login.principal: flink-user

# The configuration below defines which JAAS login contexts

# security.kerberos.login.contexts: Client,KafkaClient

#==============================================================================
# ZK Security Configuration
#==============================================================================

# Below configurations are applicable if ZK ensemble is configured for security

# Override below configuration to provide custom ZK service name if configured
# zookeeper.sasl.service-name: zookeeper

# The configuration below must match one of the values set in "security.kerberos.login.contexts"
# zookeeper.sasl.login-context-name: Client

#==============================================================================
# HistoryServer
#==============================================================================

# The HistoryServer is started and stopped via bin/historyserver.sh (start|stop)

# Directory to upload completed jobs to. Add this directory to the list of
# monitored directories of the HistoryServer as well (see below).
#jobmanager.archive.fs.dir: hdfs:///completed-jobs/

# The address under which the web-based HistoryServer listens.
#historyserver.web.address: 0.0.0.0

# The port under which the web-based HistoryServer listens.
#historyserver.web.port: 8082

# Comma separated list of directories to monitor for completed jobs.
#historyserver.archive.fs.dir: hdfs:///completed-jobs/

# Interval in milliseconds for refreshing the monitored directories.
#historyserver.archive.fs.refresh-interval: 10000

Description

Flink Web / REST Settings

 
# If a key below appears commented out, it is commented out by default in the shipped file (not something I disabled). Keys that stay commented out fall back to the defaults hard-coded in Flink.

# Address of the Flink web UI
#rest.address: 0.0.0.0

# Port of the Flink web UI
#rest.port: 8081

# Port range the REST/web server binds to; if this is not set, the server binds to rest.port instead
#rest.bind-port: 8080-8090

# Address the REST/web server binds to; rest.address above is what clients use to reach it
#rest.bind-address: 0.0.0.0

# Used by TaskManagers (and clients) to reach the JobManager; usually set to the hostname of the machine the JobManager runs on (together with the port below, it determines how TaskManagers connect to the JobManager)
jobmanager.rpc.address: localhost

# Port the JobManager listens on for TaskManager connections
jobmanager.rpc.port: 6123


# Allow uploading and starting jobs from the Flink web UI (default: true; uncomment to disable)
#web.submit.enable: false

# Allow cancelling jobs from the Flink web UI (default: true; uncomment to disable)
#web.cancel.enable: false
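For instance, to move the web UI of a standalone cluster to a different port (8088 is just an arbitrary example value), you could set the following in flink-conf.yaml and restart the cluster:

rest.port: 8088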

Memory, Slots and Parallelism


# Total JobManager process memory; the default is 1600m (about 1.6 GB)
jobmanager.memory.process.size: 1600m


# Total TaskManager process memory; the default is 1728m (about 1.7 GB)
taskmanager.memory.process.size: 1728m


# The number of task slots that each TaskManager offers. Each slot runs one parallel pipeline.
# Per the official comment above, this is commonly set to the number of CPU cores on the machine.
taskmanager.numberOfTaskSlots: 1

# Default operator parallelism is 1
parallelism.default: 1
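As a purely illustrative back-of-the-envelope example: a cluster of 3 TaskManagers, each offering 4 slots, exposes 3 × 4 = 12 slots in total, so the largest parallelism a single job can fully occupy on it is 12.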


CheckPoint

We usually configure checkpoints in application code; the entries here exist so that a default is still in place when the code does not configure them (checkpointing is disabled by default, so it must be enabled explicitly, either in code or through these settings).

# Supported state backends: 'jobmanager', 'filesystem', 'rocksdb', or the <class-name-of-factory>.
# state.backend: filesystem


# With the filesystem backend, checkpoints can be stored on HDFS
# state.checkpoints.dir: hdfs://namenode-host:port/flink-checkpoints

# With the filesystem backend, savepoints can be stored on HDFS
# state.savepoints.dir: hdfs://namenode-host:port/flink-savepoints

# How often a checkpoint is triggered, e.g. every 3 minutes
# execution.checkpointing.interval: 3min


# Whether externalized checkpoint data is deleted or retained when the job is cancelled
# execution.checkpointing.externalized-checkpoint-retention: [DELETE_ON_CANCELLATION, RETAIN_ON_CANCELLATION]

# At most this many checkpoints may be in progress at the same time
# execution.checkpointing.max-concurrent-checkpoints: 1

# Minimum pause between the end of one checkpoint and the start of the next (e.g. 1200s)
# execution.checkpointing.min-pause: 0

# Checkpointing guarantee: exactly-once or at-least-once
# execution.checkpointing.mode: [EXACTLY_ONCE, AT_LEAST_ONCE]

# Checkpoint timeout
# execution.checkpointing.timeout: 10min

# Number of checkpoint failures that are tolerated before the job fails
# execution.checkpointing.tolerable-failed-checkpoints: 0
# execution.checkpointing.unaligned: false
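If you would rather set these per job at submission time (the -D approach from the preface) instead of editing the file, the same keys can be passed on the command line; a sketch with placeholder values and a made-up jar name:

./flink run-application -t yarn-application \
  -Dstate.backend=filesystem \
  -Dstate.checkpoints.dir=hdfs://namenode-host:port/flink-checkpoints \
  -Dexecution.checkpointing.interval=3min \
  -Dexecution.checkpointing.mode=EXACTLY_ONCE \
  -Dexecution.checkpointing.timeout=10min \
  ./my-streaming-job.jar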

One more setting that belongs to the same fault-tolerance topic is the failover strategy:

jobmanager.execution.failover-strategy: region

Failover strategy                      jobmanager.execution.failover-strategy value
Restart of the whole graph             full
Region-based partial restart           region

Full-graph restart strategy

Under the full-graph restart strategy, when a task fails, all tasks of the job are restarted to recover. Put simply: every operator is restarted.

Region-based partial restart strategy

This strategy divides all tasks of the job into a number of regions. When a task fails, it tries to find the smallest set of regions that must be restarted to recover. Compared with the full-graph restart strategy, this usually restarts fewer tasks. Put simply: only the operators affected by the failure are restarted.

High Availability

If no restart strategy is configured, the high-availability settings accomplish nothing.

When a TaskManager dies:


When a TaskManager dies, the JobManager learns that the tasks running on it have failed. It then requests replacement slots from the ResourceManager; if that succeeds, the failed tasks simply run again on the newly acquired slots. If the slot request fails, the JobManager keeps trying to obtain enough slots according to the configured restart strategy.

When the JobManager dies:

When the JobManager dies, all tasks of the Flink application are cancelled automatically. The JobManager that takes over must recover from ZooKeeper the metadata, checkpoint paths, and other information its management duties require, so it performs the following steps:

  1. Fetch the metadata from ZooKeeper: the storage paths of the JobGraph, the jar files, and the latest checkpoint.
  2. Re-request the slots the job needs, i.e. ask the ResourceManager again for the processing slots required to run the tasks.
  3. Resume the application's execution from the latest checkpoint.
     
# The high-availability mode. Possible options are 'NONE' or 'zookeeper'.
#
# high-availability: zookeeper

# Where the JobManager persists its HA metadata (the larger objects)
# high-availability.storageDir: hdfs:///flink/ha/

# ZooKeeper quorum addresses
# "host1:clientPort,host2:clientPort,..." (default clientPort: 2181)
# high-availability.zookeeper.quorum: localhost:2181

