Kubernetes scheduling constraints


Affinity and anti-affinity rules, topology spread constraints, and taints and tolerations allow you to fine-tune where your Kubernetes workloads run, optimizing resource utilization and enhancing reliability.

Pod Affinity

  • Definition: Pod affinity expresses scheduling constraints based on the labels of Pods already running on candidate Nodes, rather than on the Nodes' own labels.
  • Purpose: It encourages Pods to be colocated on the same Node if they need to communicate frequently over the network.
  • Example: Imagine a microservices architecture where two Pods, ServiceA and ServiceB, interact frequently. You can set up pod affinity so that both ServiceA and ServiceB prefer to run on the same Node. This enhances communication efficiency.
  • Description: The affinity rule ensures that Pods with a specific label will be scheduled onto a Node that already hosts a Pod with the same label.

The Deployment below ensures that all nginx Pods are scheduled onto the same Node, using kubernetes.io/hostname as the topology domain.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      affinity:
        podAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchExpressions:
                  - key: app
                    operator: In
                    values:
                      - nginx
              topologyKey: "kubernetes.io/hostname"
      containers:
        - name: nginx
          image: nginx

Pod Anti-Affinity

  • Definition: Pod anti-affinity discourages scheduling Pods onto Nodes that already have Pods with certain labels.
  • Purpose: It helps distribute workloads across different Nodes, promoting fault tolerance and resilience.
  • Example: Consider a scenario where you have two Pods, Frontend and Backend, serving a web application. You can set up pod anti-affinity so that Frontend and Backend avoid running on the same Node. This way, if one Node fails, the other Node can still handle requests.
  • Description: The anti-affinity rule ensures that a Pod with a specific label will not be scheduled onto a Node that already hosts a Pod with the same label.

The Deployment below ensures that no two nginx Pods are scheduled onto the same Node, again using kubernetes.io/hostname as the topology domain.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchExpressions:
                  - key: app
                    operator: In
                    values:
                      - nginx
              topologyKey: "kubernetes.io/hostname"
      containers:
        - name: nginx
          image: nginx
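
After applying the Deployment above, the spread can be verified with kubectl; the NODE column should show a different Node for each replica. Because the rule is required, a cluster with fewer than three schedulable Nodes leaves the surplus replicas Pending.

kubectl get pods -l app=nginx -o wide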

Node Affinity

  • Definition: Node affinity constrains which Nodes can receive a Pod by matching labels on those Nodes.
  • Purpose: It allows you to specify an affinity toward a group of Nodes based on their labels.
  • Example: Suppose you have a set of high-memory Nodes labeled memory=high. You can define node affinity on your memory-intensive Pods so that they are scheduled only on those Nodes.
  • Description: Node affinity comes in a required form (requiredDuringSchedulingIgnoredDuringExecution), which is a hard rule, and a preferred form (preferredDuringSchedulingIgnoredDuringExecution), which the scheduler treats as a preference when a matching Node is available. The example below uses the required form.

This ensures that the nginx Pod is scheduled only on a Node with the disktype=ssd label.

apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: disktype
                operator: In
                values:
                  - ssd
  containers:
    - name: nginx
      image: nginx
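
The affinity above can only be satisfied by Nodes that actually carry the disktype=ssd label. The label can be added and checked with kubectl (node1 is a placeholder for one of your node names):

kubectl label nodes node1 disktype=ssd
kubectl get nodes -l disktype=ssd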

Node Anti-Affinity

  • Definition: Kubernetes has no dedicated nodeAntiAffinity field; node anti-affinity is expressed with nodeAffinity and the NotIn or DoesNotExist operators, which keep Pods away from Nodes carrying certain labels.
  • Purpose: It steers workloads away from specific groups of Nodes, for example to reserve specialized hardware or to prevent resource bottlenecks.
  • Example: Imagine your GPU Nodes are labeled gpu=true and should be reserved for GPU workloads. You can keep general-purpose Pods off those Nodes by requiring that the gpu label is not set to "true".
  • Description: In the required form this is a hard rule; in the preferred form it only makes it less likely that a Pod lands on a Node with the specified label.

This ensures that the nginx Pod is never scheduled onto a Node labeled gpu=true.

apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: gpu
                operator: NotIn
                values:
                  - "true"
  containers:
    - name: nginx
      image: nginx

requiredDuringSchedulingIgnoredDuringExecution

requiredDuringSchedulingIgnoredDuringExecution can be broken into two parts:

  1. requiredDuringScheduling:

    • This component implies that a pod should be scheduled on a node only if it satisfies certain criteria. In other words, the node must meet specific conditions for the pod to be placed there during the initial scheduling process.
  2. IgnoredDuringExecution:

    • This part comes into play after a pod is already scheduled and running on a node.
    • If any changes occur in the labels on that node during the pod’s execution (for example, due to an update), the existing pod should not be evicted based on these label changes.
    • Instead, only newly scheduled pods should be required to match the updated criteria.

In summary, requiredDuringSchedulingIgnoredDuringExecution ensures that pods are initially placed on suitable nodes and avoids unnecessary evictions during runtime due to label changes on the node. It’s a way to maintain stability and predictability in your Kubernetes cluster. 
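
As a quick illustration of the IgnoredDuringExecution half, assume the nginx Pod from the node-affinity example above is already running on node1 (labeled disktype=ssd as shown earlier). Removing that label does not evict the Pod; it only affects future scheduling decisions:

kubectl label nodes node1 disktype-   # remove the label the affinity rule required
kubectl get pod nginx -o wide         # the Pod is still Running on node1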

topologyKey

topologyKey represents the key of node labels that the scheduler uses to determine the topology domain for pod placement. For example, when using pod affinity, the scheduler ensures that a pod is scheduled in the same domain (topology) as other pods that match a specific expression.

Common label options of topologyKey include:

  • topology.kubernetes.io/zone: Pods are scheduled in the same zone as other pods with matching labels.
  • kubernetes.io/hostname: Pods are scheduled on the same hostname as other pods with matching labels.

For example, the Pod below must be scheduled in the same zone as existing Pods labeled security=S1:

apiVersion: v1
kind: Pod
metadata:
  name: with-pod-affinity
spec:
  affinity:
    podAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchExpressions:
              - key: security
                operator: In
                values:
                  - S1
          topologyKey: topology.kubernetes.io/zone
  containers:
    - name: with-pod-affinity
      image: k8s.gcr.io/pause:2.0
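
Which topologyKey values are usable depends on the labels your Nodes carry: every kubelet sets kubernetes.io/hostname, and cloud providers typically set topology.kubernetes.io/zone and topology.kubernetes.io/region. You can list them as extra columns:

kubectl get nodes -L topology.kubernetes.io/zone,kubernetes.io/hostname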

topologySpreadConstraints

topologySpreadConstraints allow you to control how Pods are distributed across your cluster among different failure domains such as regions, zones, nodes, and other user-defined topology domains. The goal is to achieve both high availability and efficient resource utilization.

For example, they can avoid a single-node dependency: the constraint below only allows placements that keep Pods labeled app: example evenly distributed across nodes.

apiVersion: v1
kind: Pod
metadata:
  name: example-pod
  labels:
    app: example
spec:
  topologySpreadConstraints:
    - maxSkew: 1
      topologyKey: kubernetes.io/hostname
      whenUnsatisfiable: DoNotSchedule
      labelSelector:
        matchLabels:
          app: example
  containers:
    - name: example-pod
      image: nginx

maxSkew helps maintain an even spread of Pods, enhancing reliability and performance in your Kubernetes clusters. It defines the maximum permitted difference between the number of matching Pods in any topology domain and the global minimum across eligible domains. Setting maxSkew to 1 means no domain may hold more than one Pod more than the least-populated domain.
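
For instance, with maxSkew: 1 and Pods spread over three zones, a 2/2/1 distribution is allowed (skew 2 - 1 = 1) while 3/1/1 is not (skew 3 - 1 = 2). With whenUnsatisfiable: DoNotSchedule, a Pod that would create the violation stays Pending; with ScheduleAnyway, the scheduler only treats the violation as a scoring penalty.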

topologySpreadConstraints are ideal for hierarchical topologies (where nodes are spread across logical domains), while pod/node affinity is suitable for linear topologies (where all nodes are on the same level). topologySpreadConstraints provide more expressive control over pod scheduling across broader topological domains, and combining them with other affinity rules allows you to fine-tune your workload placement. 

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 5
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchExpressions:
                  - key: app
                    operator: In
                    values:
                      - my-app
              topologyKey: kubernetes.io/hostname
      topologySpreadConstraints:
        - maxSkew: 1
          topologyKey: topology.kubernetes.io/zone
          whenUnsatisfiable: ScheduleAnyway
          labelSelector:
            matchLabels:
              app: my-app
      containers:
        - name: my-app
          image: nginx

In this example, the my-app Pods are forced onto distinct Nodes by the anti-affinity rule and are additionally spread across zones (based on the topology.kubernetes.io/zone label); ScheduleAnyway keeps them schedulable even when a perfectly even zone spread is not possible.

You may notice the labelSelector inside topologySpreadConstraints; the behavior differs depending on whether it is present:

1. With labelSelector:

  • When you define a topologySpreadConstraints with a labelSelector, it allows you to select specific Pods based on their labels. These selected Pods are then counted to determine the number of Pods in their corresponding topology domain (such as nodes, zones, or other user-defined domains).
  • The labelSelector helps you control the spreading behavior of your Pods across different failure domains. You can ensure that Pods with specific labels are distributed evenly or according to your desired criteria.
  • For example, if you want to avoid running multiple Pods with the same label on a single node, you can use a labelSelector to enforce this constraint.

2. Without labelSelector:

  • When you omit the labelSelector, the spreading behavior is calculated automatically from other objects the Pod belongs to (Services, ReplicationControllers, ReplicaSets, or StatefulSets). This is how the scheduler's cluster-level default constraints work; see the sketch after this list.
  • In this case, the system determines how to spread the Pods across different domains without explicitly considering their labels.
  • It’s a more automatic approach, but it might not provide fine-grained control over the distribution of Pods based on specific labels.
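
The automatic behavior described above is configured at the scheduler level rather than on individual Pods. As a rough sketch (field names follow the PodTopologySpread plugin arguments; the apiVersion varies by Kubernetes version, with older clusters using kubescheduler.config.k8s.io/v1beta3), cluster-level default constraints are declared with an empty labelSelector, which the scheduler then derives from the Pod's Service, ReplicaSet, or StatefulSet membership:

apiVersion: kubescheduler.config.k8s.io/v1
kind: KubeSchedulerConfiguration
profiles:
  - schedulerName: default-scheduler
    pluginConfig:
      - name: PodTopologySpread
        args:
          # labelSelector is intentionally omitted here; the scheduler computes it
          # from the Pod's owning Service / ReplicaSet / StatefulSet.
          defaultConstraints:
            - maxSkew: 1
              topologyKey: topology.kubernetes.io/zone
              whenUnsatisfiable: ScheduleAnyway
          defaultingType: List

Pods that define their own topologySpreadConstraints are not affected by these defaults.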

Taints and Tolerations

Taints are applied to nodes to mark them as “tainted” with specific keys and values. A tainted node will not schedule pods that do not have the corresponding toleration.

Tolerations are set on Pods to allow them to be scheduled onto (and, for NoExecute, to keep running on) Nodes with matching taints. With the NoExecute effect, an optional tolerationSeconds field limits how long the Pod may stay on the Node after the taint is applied.

Add a taint to a node with the NoSchedule effect.

kubectl taint nodes node1 key1=value1:NoSchedule

The allowed values for the effect field are:

  • NoExecute: This affects pods that are already running on the node as follows:
    • Pods that do not tolerate the taint are evicted immediately
    • Pods that tolerate the taint without specifying tolerationSeconds in their toleration specification remain bound forever
    • Pods that tolerate the taint with a specified tolerationSeconds remain bound for the specified amount of time. After that time elapses, the node lifecycle controller evicts the Pods from the node.
  • NoSchedule: No new Pods will be scheduled on the tainted node unless they have a matching toleration. Pods currently running on the node are not evicted.
  • PreferNoSchedule: PreferNoSchedule is a "preference" or "soft" version of NoSchedule. The control plane will try to avoid placing a Pod that does not tolerate the taint on the node, but it is not guaranteed.

Remove taint from a node.

kubectl taint nodes node1 key1=value1:NoSchedule-

Get the node's taint info

kubectl get node/node1 -o json | jq .spec.taints
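
For a node that carries the key1=value1:NoSchedule taint added above, the command prints the taints as a JSON array; each entry has key, value, and effect fields, roughly:

[
  {
    "effect": "NoSchedule",
    "key": "key1",
    "value": "value1"
  }
]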

Tolerations are usually declared in a Pod or Deployment spec. In the YAML below, the Pods tolerate the taint with key "hardware" and value "gpu" on the Nodes where they are scheduled, while node affinity pins them to the two GPU nodes.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: ai
          image: skynet:1997-08-29
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: kubernetes.io/hostname
                    operator: In
                    values: ["big-gpu", "expensive-gpu"]
      tolerations:
        - key: "hardware"
          operator: "Equal"
          value: "gpu"
          # tolerationSeconds applies only to the NoExecute effect, so it is omitted here
          effect: "NoSchedule"
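
For the toleration above to matter, the two GPU Nodes named in the node affinity rule would need to carry the matching taint. Assuming those node names, the taints can be added like this:

kubectl taint nodes big-gpu hardware=gpu:NoSchedule
kubectl taint nodes expensive-gpu hardware=gpu:NoSchedule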

Horizontal Pod Autoscaler (HPA)

Horizontal Pod Autoscaler (HPA) is a Kubernetes resource and controller that automates the scaling of pods based on observed metrics, such as CPU utilization, memory utilization, or custom metrics. 

apiVersion: apps/v1
kind: Deployment
metadata:
  name: envbin
spec:
  selector:
    matchLabels:
      app: envbin
  template:
    metadata:
      labels:
        app: envbin
    spec:
      containers:
        - name: envbin
          image: mtinside/envbin:latest
          imagePullPolicy: Always
          resources:
            requests:
              cpu: 100m
            limits:
              cpu: 100m
---
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: envbin
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: envbin
  minReplicas: 1
  maxReplicas: 5
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 80
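
For the CPU metric to be collected, the cluster needs the resource metrics API (typically provided by metrics-server). Once applied, you can watch the autoscaler, or, instead of applying the HPA manifest, create an equivalent autoscaler imperatively:

kubectl get hpa envbin --watch
kubectl autoscale deployment envbin --cpu-percent=80 --min=1 --max=5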

 

Attention: some of the sample YAML manifests were generated by ChatGPT.
