Load balancing rules for nodes

To improve fault tolerance and system performance, you can define flexible load balancing rules for nodes using the following tools:

  • Affinity. Set flexible rules for placing pods in relation to each other or on specific nodes.
  • Anti-affinity. Prevent pods of the same type from being placed on the same node or within the same topology domain, for example, the same availability zone.

Before configuring Affinity or Anti-affinity, please read the BRIX Enterprise advanced settings article to learn about all the pod placement tools and recommendations for using them together.

Affinity tool

You can configure flexible rules to distribute pods across nodes or zones, for example, prioritize certain nodes or set complex conditions for pod placement. To do this, use the Affinity tool. You can configure it using the nodeAffinity parameter.

How to configure the nodeAffinity parameter

Let’s see how to configure the nodeAffinity parameter so that pods run only on nodes with the role=worker label. Note that the rule matches only nodes that already have this label assigned. To do this, in the values-elma365.yaml file, go to the .Values.global.affinity field and specify the values:

global:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: role
            operator: In
            values:
            - worker

Where:

  • requiredDuringSchedulingIgnoredDuringExecution. A hard rule that must be met for a pod to be scheduled on a node. It is evaluated only during scheduling and does not affect pods that are already running.
  • nodeSelectorTerms. A list of conditions that a node must meet.
  • matchExpressions. Logical expressions for filtering labels:
    • key. Label name.
    • operator. Comparison operator:
      • In. A label value must be included in the list of values.
      • NotIn. A label value must not be included in the list of values.
      • Exists. A node must have a label with the given key.
      • DoesNotExist. A node must not have a label with the given key.
  • values. Label values that are used with the In and NotIn operators. You can specify several values.

Once the parameter is configured, apply it as described in the Modify BRIX Enterprise parameters article.
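
The configuration above is a hard rule. If you want to prioritize certain nodes rather than strictly require them, you can use a soft preferredDuringSchedulingIgnoredDuringExecution rule instead. Below is a minimal sketch for the same role=worker label; the weight value of 50 is only an illustration:

global:
  affinity:
    nodeAffinity:
      # Soft rule: the scheduler prefers matching nodes but can fall back to other nodes
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 50            # relative priority of this rule, from 1 to 100
        preference:
          matchExpressions:
          - key: role         # same role=worker label as in the hard rule above
            operator: In
            values:
            - worker

With this configuration, pods are placed on role=worker nodes whenever possible but can still run elsewhere if those nodes have no free capacity.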

Anti-affinity tool

To improve system fault tolerance, you can distribute pods of the same type across different nodes. For example, you can set such rules for the pods of the mailer service: if one node fails or cannot handle the load on its own, the service continues to run on the other nodes.

To set such rules, use the Anti-affinity tool. You can configure it using the podAntiAffinity parameter.

How to configure the podAntiAffinity parameter

Let's see how to configure the podAntiAffinity parameter to prohibit placing more than one pod with the app=mailer label on the same node. To do this, in the values-elma365.yaml file, go to the .Values.global.affinity field and specify the values:

global:
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: app
            operator: In
            values:
            - mailer
        topologyKey: kubernetes.io/hostname

Where:

  • requiredDuringSchedulingIgnoredDuringExecution. A hard rule that must be met for a pod to be scheduled on a node. It is evaluated only during scheduling and does not affect pods that are already running.
  • labelSelector. A selector that defines the labels of the pods to which the rule applies.
  • matchExpressions. Logical expressions for filtering labels:
    • key. Label name.
    • operator. Comparison operator:
      • In. The label value must be included in the list of values.
      • NotIn. The label value must not be included in the list of values.
      • Exists. A pod must have a label with the given key.
      • DoesNotExist. A pod must not have a label with the given key.
  • values. Label values that are used with the In and NotIn operators. You can specify multiple values.
  • topologyKey. A node label key that defines the topology domain within which the rule applies, for example kubernetes.io/hostname to treat each node as a separate domain. A zone-level variant is shown in the sketch below.
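
If you need to spread the mailer pods across availability zones instead of individual nodes, topologyKey can be changed to the standard zone label. This is a sketch only and assumes that your nodes carry the topology.kubernetes.io/zone label:

global:
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: app
            operator: In
            values:
            - mailer
        # Spread pods across zones instead of individual nodes
        topologyKey: topology.kubernetes.io/zone

Keep in mind that with a hard rule like this, no more than one mailer pod can run in each zone, so the number of replicas must not exceed the number of available zones.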

Once the parameter is configured, apply it as described in the Modify BRIX Enterprise parameters article.
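
The Affinity and Anti-affinity tools can also be used together: the affinity field accepts nodeAffinity and podAntiAffinity at the same time. As a sketch, the two examples from this article can be combined as follows; adjust the labels to your environment:

global:
  affinity:
    nodeAffinity:
      # Run pods only on nodes labeled role=worker
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: role
            operator: In
            values:
            - worker
    podAntiAffinity:
      # Do not place two mailer pods on the same node
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: app
            operator: In
            values:
            - mailer
        topologyKey: kubernetes.io/hostname

With this combination, pods are scheduled only on role=worker nodes, and pods with the app=mailer label are additionally placed on separate nodes.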

Default values in the .Values.global.affinity field

If you have not set values in the .Values.global.affinity field, the default settings apply:

affinity:
  podAntiAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
      - podAffinityTerm:
          labelSelector:
            matchExpressions:
              - key: app
                operator: In
                values:
                  - <service name>
              - key: release
                operator: In
                values:
                  - "<release name>"
          topologyKey: kubernetes.io/hostname
        weight: 10

In this configuration:

  • The scheduler tries not to place multiple pods of the same app (app=<service name>) or release (release=<release name>) on the same node. Since this is a soft rule (preferredDuringSchedulingIgnoredDuringExecution), pods can still share a node if no other placement is possible.
  • The kubernetes.io/hostname key is used to define the topology, so pods are distributed across different nodes.
  • The weight: 10 parameter specifies the priority of this soft rule. The Kubernetes scheduler takes it into account when selecting a node to place pods on.
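
For reference, here is roughly how the default rule looks once the placeholders are resolved, assuming the service is mailer and the release is hypothetically named elma365 (both values are examples only):

affinity:
  podAntiAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
      - podAffinityTerm:
          labelSelector:
            matchExpressions:
              - key: app
                operator: In
                values:
                  - mailer          # example service name
              - key: release
                operator: In
                values:
                  - "elma365"       # example release name
          topologyKey: kubernetes.io/hostname
        weight: 10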