Flink Review 3-2-4-6-1 (v1.17.0): Application Development - DataStream API - State & Fault Tolerance - Data Types & Serialization - Overview



Apache Flink handles data types and serialization in a unique way, containing its own type descriptors, generic type extraction, and type serialization framework. This document describes the concepts and the rationale behind them.

Supported Data Types

Flink places some restrictions on the type of elements that can be in a DataStream. The reason for this is that the system analyzes the types to determine efficient execution strategies.

There are seven different categories of data types:

  1. Java Tuples and Scala Case Classes
  2. Java POJOs
  3. Primitive Types
  4. Regular Classes
  5. Values
  6. Hadoop Writables
  7. Special Types

Tuples and Case Classes

Tuples are composite types that contain a fixed number of fields with various types. The Java API provides classes from Tuple1 up to Tuple25. Every field of a tuple can be an arbitrary Flink type including further tuples, resulting in nested tuples. Fields of a tuple can be accessed directly using the field’s name as tuple.f4, or using the generic getter method tuple.getField(int position). The field indices start at 0. Note that this stands in contrast to the Scala tuples, but it is more consistent with Java’s general indexing.

DataStream<Tuple2<String, Integer>> wordCounts = env.fromElements(
    new Tuple2<String, Integer>("hello", 1),
    new Tuple2<String, Integer>("world", 2));

wordCounts.map(new MapFunction<Tuple2<String, Integer>, Integer>() {
    @Override
    public Integer map(Tuple2<String, Integer> value) throws Exception {
        return value.f1;
    }
});

wordCounts.keyBy(value -> value.f0);

POJOs

Java and Scala classes are treated by Flink as a special POJO data type if they fulfill the following requirements:

  • The class must be public.

  • It must have a public constructor without arguments (default constructor).

  • All fields are either public or must be accessible through getter and setter functions. For a field called foo the getter and setter methods must be named getFoo() and setFoo().

  • The type of a field must be supported by a registered serializer.

POJOs are generally represented with a PojoTypeInfo and serialized with the PojoSerializer (using Kryo as configurable fallback). The exception is when the POJOs are actually Avro types (Avro Specific Records) or produced as “Avro Reflect Types”. In that case the POJOs are represented by an AvroTypeInfo and serialized with the AvroSerializer. You can also register your own custom serializer if required; see Serialization for further information.

Flink analyzes the structure of POJO types, i.e., it learns about the fields of a POJO. As a result POJO types are easier to use than general types. Moreover, Flink can process POJOs more efficiently than general types.

You can test whether your class adheres to the POJO requirements via org.apache.flink.types.PojoTestUtils#assertSerializedAsPojo() from the flink-test-utils. If you additionally want to ensure that no field of the POJO will be serialized with Kryo, use assertSerializedAsPojoWithoutKryo() instead. (A test sketch follows the example below.)

The following example shows a simple POJO with two public fields.

public class WordWithCount {

    public String word;
    public int count;

    public WordWithCount() {}

    public WordWithCount(String word, int count) {
        this.word = word;
        this.count = count;
    }
}

DataStream<WordWithCount> wordCounts = env.fromElements(
    new WordWithCount("hello", 1),
    new WordWithCount("world", 2));

wordCounts.keyBy(value -> value.word);
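
As a hedged illustration (assuming JUnit 5 and the flink-test-utils dependency are on the classpath), a minimal test for the WordWithCount example above might look like this:

import org.apache.flink.types.PojoTestUtils;
import org.junit.jupiter.api.Test;

class WordWithCountTypeTest {

    @Test
    void isSerializedAsPojo() {
        // Fails if Flink would not treat WordWithCount as a POJO type.
        PojoTestUtils.assertSerializedAsPojo(WordWithCount.class);

        // Stricter: additionally fails if any field would fall back to Kryo.
        PojoTestUtils.assertSerializedAsPojoWithoutKryo(WordWithCount.class);
    }
}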

Primitive Types

Flink supports all Java and Scala primitive types such as Integer, String, and Double.

General Class Types

Flink supports most Java and Scala classes (API and custom). Restrictions apply to classes containing fields that cannot be serialized, like file pointers, I/O streams, or other native resources. Classes that follow the Java Beans conventions work well in general.

All classes that are not identified as POJO types (see POJO requirements above) are handled by Flink as general class types. Flink treats these data types as black boxes and is not able to access their content (e.g., for efficient sorting). General types are de/serialized using the serialization framework Kryo.

Values

Value types describe their serialization and deserialization manually. Instead of going through a general purpose serialization framework, they provide custom code for those operations by means of implementing the org.apache.flink.types.Value interface with the methods read and write. Using a Value type is reasonable when general purpose serialization would be highly inefficient. An example would be a data type that implements a sparse vector of elements as an array. Knowing that the array is mostly zero, one can use a special encoding for the non-zero elements, while the general purpose serialization would simply write all array elements.
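
As a rough sketch of that idea (the class name and encoding below are illustrative assumptions, not Flink-provided code):

import java.io.IOException;

import org.apache.flink.core.memory.DataInputView;
import org.apache.flink.core.memory.DataOutputView;
import org.apache.flink.types.Value;

// Illustrative only: holds a dense double[] but serializes just the non-zero entries.
public class SparseVectorValue implements Value {

    public double[] data = new double[0];

    @Override
    public void write(DataOutputView out) throws IOException {
        out.writeInt(data.length);
        int nonZero = 0;
        for (double d : data) {
            if (d != 0.0) {
                nonZero++;
            }
        }
        out.writeInt(nonZero);
        for (int i = 0; i < data.length; i++) {
            if (data[i] != 0.0) {
                out.writeInt(i);          // position of the non-zero entry
                out.writeDouble(data[i]); // its value
            }
        }
    }

    @Override
    public void read(DataInputView in) throws IOException {
        data = new double[in.readInt()];
        int nonZero = in.readInt();
        for (int n = 0; n < nonZero; n++) {
            data[in.readInt()] = in.readDouble();
        }
    }
}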

The org.apache.flink.types.CopyableValue interface supports manual internal cloning logic in a similar way.

Flink comes with pre-defined Value types that correspond to basic data types (ByteValue, ShortValue, IntValue, LongValue, FloatValue, DoubleValue, StringValue, CharValue, BooleanValue). These Value types act as mutable variants of the basic data types: Their value can be altered, allowing programmers to reuse objects and take pressure off the garbage collector.

Hadoop Writables

You can use types that implement the org.apache.hadoop.io.Writable interface. The serialization logic defined in the write() and readFields() methods will be used for serialization.
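
For illustration, a minimal Writable type might look like the following sketch (the type and field names are made up):

import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;

import org.apache.hadoop.io.Writable;

// Illustrative Writable: Flink reuses the write()/readFields() logic below.
public class PageVisit implements Writable {

    public String url = "";
    public long visits;

    @Override
    public void write(DataOutput out) throws IOException {
        out.writeUTF(url);
        out.writeLong(visits);
    }

    @Override
    public void readFields(DataInput in) throws IOException {
        url = in.readUTF();
        visits = in.readLong();
    }
}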

Special Types

You can use special types, including Scala’s Either, Option, and Try. The Java API has its own custom implementation of Either. Similarly to Scala’s Either, it represents a value of two possible types, Left or Right. Either can be useful for error handling or operators that need to output two different types of records.
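
As a small sketch of the Java Either (org.apache.flink.types.Either), assuming an input DataStream<String> named lines; the parsing logic is a made-up example:

// Route unparsable lines to Left, parsed values to Right.
DataStream<Either<String, Integer>> parsed = lines
    .map(new MapFunction<String, Either<String, Integer>>() {
        @Override
        public Either<String, Integer> map(String line) {
            try {
                return Either.Right(Integer.parseInt(line));
            } catch (NumberFormatException e) {
                return Either.Left("not a number: " + line);
            }
        }
    });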

Type Erasure & Type Inference

Note: This Section is only relevant for Java.

The Java compiler throws away much of the generic type information after compilation. This is known as type erasure in Java. It means that at runtime, an instance of an object does not know its generic type any more. For example, instances of DataStream<String> and DataStream<Long> look the same to the JVM.

Flink requires type information at the time when it prepares the program for execution (when the main method of the program is called). The Flink Java API tries to reconstruct the type information that was thrown away in various ways and store it explicitly in the data sets and operators. You can retrieve the type via DataStream.getType(). The method returns an instance of TypeInformation, which is Flink’s internal way of representing types.

The type inference has its limits and needs the “cooperation” of the programmer in some cases. Examples for that are methods that create data sets from collections, such as StreamExecutionEnvironment.fromCollection(), where you can pass an argument that describes the type. Generic functions like MapFunction<I, O> may also need extra type information.
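
A hedged sketch of passing an explicit type to fromCollection (the input list is assumed; env is a StreamExecutionEnvironment):

List<Tuple2<String, Integer>> input = Arrays.asList(
    Tuple2.of("hello", 1),
    Tuple2.of("world", 2));

// The TypeHint captures the generic element type that erasure would otherwise hide.
DataStream<Tuple2<String, Integer>> stream = env.fromCollection(
    input,
    TypeInformation.of(new TypeHint<Tuple2<String, Integer>>() {}));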

The ResultTypeQueryable interface can be implemented by input formats and functions to tell the API explicitly about their return type. The input types that the functions are invoked with can usually be inferred by the result types of the previous operations.
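
A minimal sketch of a function declaring its return type this way (the function itself is a made-up example):

public class ParseTuple implements MapFunction<String, Tuple2<String, Integer>>,
        ResultTypeQueryable<Tuple2<String, Integer>> {

    @Override
    public Tuple2<String, Integer> map(String value) {
        String[] parts = value.split(",");
        return Tuple2.of(parts[0], Integer.parseInt(parts[1]));
    }

    @Override
    public TypeInformation<Tuple2<String, Integer>> getProducedType() {
        // Tells the API the exact produced type, bypassing reflective extraction.
        return TypeInformation.of(new TypeHint<Tuple2<String, Integer>>() {});
    }
}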

Type handling in Flink

Flink tries to infer a lot of information about the data types that are exchanged and stored during the distributed computation. Think about it like a database that infers the schema of tables. In most cases, Flink infers all necessary information seamlessly by itself. Having the type information allows Flink to do some cool things:

  • The more Flink knows about data types, the better the serialization and data layout schemes are. That is quite important for the memory usage paradigm in Flink (work on serialized data inside/outside the heap wherever possible and make serialization very cheap).

  • Finally, it also spares users in the majority of cases from worrying about serialization frameworks and having to register types.

In general, the information about data types is needed during the pre-flight phase - that is, when the program’s calls on DataStream are made, and before any call to execute(), print(), count(), or collect().

Most Frequent Issues

The most frequent issues where users need to interact with Flink’s data type handling are:

  • Registering subtypes: If the function signatures describe only the supertypes, but they actually use subtypes of those during execution, it may increase performance a lot to make Flink aware of these subtypes. For that, call .registerType(clazz) on the StreamExecutionEnvironment for each subtype (see the sketch after this list).

  • Registering custom serializers: Flink falls back to Kryo for the types that it does not handle transparently by itself. Not all types are seamlessly handled by Kryo (and thus by Flink). For example, many Google Guava collection types do not work well by default. The solution is to register additional serializers for the types that cause problems. Call .getConfig().addDefaultKryoSerializer(clazz, serializer) on the StreamExecutionEnvironment (also shown in the sketch after this list). Additional Kryo serializers are available in many libraries. See 3rd party serializer for more details on working with external serializers.

  • Adding Type Hints: Sometimes, when Flink cannot infer the generic types despite all tricks, a user must pass a type hint. That is generally only necessary in the Java API. The Type Hints Section describes that in more detail.

  • Manually creating a TypeInformation: This may be necessary for some API calls where it is not possible for Flink to infer the data types due to Java’s generic type erasure. See Creating a TypeInformation or TypeSerializer for details.
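
A combined sketch of the first two items. Event and LoginEvent are placeholder classes introduced here for illustration, and the serializer registration uses Flink’s Kryo-based JavaSerializer (org.apache.flink.api.java.typeutils.runtime.kryo.JavaSerializer) purely as an example:

// Placeholder type hierarchy for the illustration.
class Event implements java.io.Serializable {}
class LoginEvent extends Event {}

final StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

// Subtype registration: function signatures may only mention Event,
// while LoginEvent instances actually flow through at runtime.
env.registerType(LoginEvent.class);

// Fallback-serializer registration for a type Kryo handles poorly by default.
env.getConfig().addDefaultKryoSerializer(LoginEvent.class, JavaSerializer.class);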

Flink’s TypeInformation class

The class TypeInformation is the base class for all type descriptors. It reveals some basic properties of the type and can generate serializers and, in specializations, comparators for the types. (Note that comparators in Flink do much more than defining an order - they are basically the utility to handle keys)

Internally, Flink makes the following distinctions between types:

  • Basic types: All Java primitives and their boxed form, plus void, String, Date, BigDecimal, and BigInteger.

  • Primitive arrays and Object arrays

  • Composite types

    • Flink Java Tuples (part of the Flink Java API): max 25 fields, null fields not supported

    • Scala case classes (including Scala tuples): null fields not supported

    • Row: tuples with arbitrary number of fields and support for null fields

    • POJOs: classes that follow a certain bean-like pattern

  • Auxiliary types (Option, Either, Lists, Maps, …)

  • Generic types: These will not be serialized by Flink itself, but by Kryo.

POJOs are of particular interest, because they support the creation of complex types. They are also transparent to the runtime and can be handled very efficiently by Flink.

Rules for POJO types

Flink recognizes a data type as a POJO type (and allows “by-name” field referencing) if the following conditions are fulfilled:

  • The class is public and standalone (no non-static inner class)

  • The class has a public no-argument constructor

  • All non-static, non-transient fields in the class (and all superclasses) are either public (and non-final) or have public getter and setter methods that follow the Java beans naming conventions for getters and setters.

Note that when a user-defined data type can’t be recognized as a POJO type, it must be processed as GenericType and serialized with Kryo.

Creating a TypeInformation or TypeSerializer

To create a TypeInformation object for a type, use the language specific way:

Because Java generally erases generic type information, you need to pass the type to the TypeInformation construction:

For non-generic types, you can pass the Class:

TypeInformation<String> info = TypeInformation.of(String.class);

For generic types, you need to “capture” the generic type information via the TypeHint:

TypeInformation<Tuple2<String, Double>> info = TypeInformation.of(new TypeHint<Tuple2<String, Double>>(){});

Internally, this creates an anonymous subclass of the TypeHint that captures the generic information to preserve it until runtime.

To create a TypeSerializer, simply call typeInfo.createSerializer(config) on the TypeInformation object.

The config parameter is of type ExecutionConfig and holds the information about the program’s registered custom serializers. Wherever possible, try to pass the program’s proper ExecutionConfig. You can usually obtain it from DataStream via calling getExecutionConfig(). Inside functions (like MapFunction), you can get it by making the function a Rich Function and calling getRuntimeContext().getExecutionConfig().
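
Putting this together, a brief sketch (env is an assumed StreamExecutionEnvironment):

TypeInformation<Tuple2<String, Double>> info =
    TypeInformation.of(new TypeHint<Tuple2<String, Double>>() {});

// Pass the program's ExecutionConfig so registered custom serializers are honored.
TypeSerializer<Tuple2<String, Double>> serializer =
    info.createSerializer(env.getConfig());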

Type Information in the Scala API

Scala has very elaborate concepts for runtime type information through type manifests and class tags. In general, types and methods have access to the types of their generic parameters - thus, Scala programs do not suffer from type erasure as Java programs do.

In addition, Scala allows running custom code in the Scala Compiler through Scala Macros - that means that some Flink code gets executed whenever you compile a Scala program written against Flink’s Scala API.

We use the Macros to look at the parameter types and return types of all user functions during compilation - that is the point in time when certainly all type information is perfectly available. Within the macro, we create a TypeInformation for the function’s return types (or parameter types) and make it part of the operation.

No Implicit Value for Evidence Parameter Error

In the case where TypeInformation could not be created, programs fail to compile with an error stating “could not find implicit value for evidence parameter of type TypeInformation”.

A frequent reason is that the code that generates the TypeInformation has not been imported. Make sure to import the entire flink.api.scala package.

import org.apache.flink.api.scala._

Another common cause is generic methods, which can be fixed as described in the following section.

Generic Methods

Consider the following case:

def selectFirst[T](input: DataStream[(T, _)]) : DataStream[T] = {
  input.map { v => v._1 }
}

val data : DataStream[(String, Long)] = ...

val result = selectFirst(data)

For such generic methods, the data types of the function parameters and return type may not be the same for every call and are not known at the site where the method is defined. The code above will result in an error that not enough implicit evidence is available.

In such cases, the type information has to be generated at the invocation site and passed to the method. Scala offers implicit parameters for that.

The following code tells Scala to bring a type information for T into the function. The type information will then be generated at the sites where the method is invoked, rather than where the method is defined.

def selectFirst[T : TypeInformation](input: DataStream[(T, _)]) : DataStream[T] = {
  input.map { v => v._1 }
}

Type Information in the Java API

In the general case, Java erases generic type information. Flink tries to reconstruct as much type information as possible via reflection, using the few bits that Java preserves (mainly function signatures and subclass information). This logic also contains some simple type inference for cases where the return type of a function depends on its input type:

public class AppendOne<T> implements MapFunction<T, Tuple2<T, Long>> {

    public Tuple2<T, Long> map(T value) {
        return new Tuple2<T, Long>(value, 1L);
    }
}

There are cases where Flink cannot reconstruct all generic type information. In that case, a user has to help out via type hints.

Type Hints in the Java API

In cases where Flink cannot reconstruct the erased generic type information, the Java API offers so called type hints. The type hints tell the system the type of the data stream or data set produced by a function:

DataStream<SomeType> result = stream
    .map(new MyGenericNonInferrableFunction<Long, SomeType>())
        .returns(SomeType.class);

The returns statement specifies the produced type, in this case via a class. The hints support type definition via

  • Classes, for non-parameterized types (no generics)

  • TypeHints in the form of returns(new TypeHint<Tuple2<Integer, SomeType>>(){}). The TypeHint class can capture generic type information and preserve it for the runtime (via an anonymous subclass).

Type extraction for Java 8 lambdas

Type extraction for Java 8 lambdas works differently than for non-lambdas, because lambdas are not associated with an implementing class that extends the function interface.

Currently, Flink tries to figure out which method implements the lambda and uses Java’s generic signatures to determine the parameter types and the return type. However, these signatures are not generated for lambdas by all compilers. If you observe unexpected behavior, manually specify the return type using the returns method.
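
For instance, a brief sketch (assuming stream is a DataStream<String>):

DataStream<Tuple2<String, Long>> counts = stream
    .map(word -> Tuple2.of(word, 1L))
    // The lambda's generic Tuple2 parameters may be erased, so declare them explicitly.
    .returns(Types.TUPLE(Types.STRING, Types.LONG));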

Serialization of POJO types

The PojoTypeInfo creates serializers for all the fields inside the POJO. Standard types such as int, long, String etc. are handled by serializers we ship with Flink. For all other types, we fall back to Kryo.

If Kryo is not able to handle the type, you can ask the PojoTypeInfo to serialize the POJO using Avro. To do so, you have to call

final StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
env.getConfig().enableForceAvro();

Note that Flink automatically serializes POJOs generated by Avro with the Avro serializer.

If you want your entire POJO Type to be treated by the Kryo serializer, set

final StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
env.getConfig().enableForceKryo();

If Kryo is not able to serialize your POJO, you can add a custom serializer to Kryo, using

env.getConfig().addDefaultKryoSerializer(Class<?> type, Class<? extends Serializer<?>> serializerClass);

There are different variants of these methods available.

Disabling Kryo Fallback

There are cases when programs may want to explicitly avoid using Kryo as a fallback for generic types. The most common one is wanting to ensure that all types are efficiently serialized either through Flink’s own serializers, or via user-defined custom serializers.

The setting below will raise an exception whenever a data type is encountered that would go through Kryo:

env.getConfig().disableGenericTypes();

Defining Type Information using a Factory

A type information factory allows for plugging-in user-defined type information into the Flink type system. You have to implement org.apache.flink.api.common.typeinfo.TypeInfoFactory to return your custom type information. The factory is called during the type extraction phase if either the corresponding type or a POJO’s field using this type has been annotated with the @org.apache.flink.api.common.typeinfo.TypeInfo annotation.

Type information factories can be used in both the Java and Scala API.

In a hierarchy of types the closest factory will be chosen while traversing upwards, however, a built-in factory has highest precedence. A factory has also higher precedence than Flink’s built-in types, therefore you should know what you are doing.

The following example shows how to annotate a custom type MyTuple and supply custom type information for it using a factory in Java.

The annotated custom type:

@TypeInfo(MyTupleTypeInfoFactory.class)
public class MyTuple<T0, T1> {
  public T0 myfield0;
  public T1 myfield1;
}

The factory supplying custom type information:

public class MyTupleTypeInfoFactory extends TypeInfoFactory<MyTuple> {

  @Override
  public TypeInformation<MyTuple> createTypeInfo(Type t, Map<String, TypeInformation<?>> genericParameters) {
    return new MyTupleTypeInfo(genericParameters.get("T0"), genericParameters.get("T1"));
  }
}

Instead of annotating the type itself, which may not be possible for third-party code, you can also annotate the usage of this type inside a valid Flink POJO like this:

public class MyPojo {
  public int id;

  @TypeInfo(MyTupleTypeInfoFactory.class)
  public MyTuple<Integer, String> tuple;
}

The method createTypeInfo(Type, Map<String, TypeInformation<?>>) creates type information for the type the factory is targeted for. The parameters provide additional information about the type itself as well as the type’s generic type parameters if available.

If your type contains generic parameters that might need to be derived from the input type of a Flink function, make sure to also implement org.apache.flink.api.common.typeinfo.TypeInformation#getGenericParameters for a bidirectional mapping of generic parameters to type information.
