Tutorials

Tutorials for real-world use cases

1 - Active-active Kubernetes architecture

How to set up an active-active architecture between two Kubernetes clusters

Overview

Active-active replication between Kubernetes clusters is a strategy to ensure high availability and disaster recovery for applications. In this setup, multiple Kubernetes clusters, typically located in different geographical regions, run identical copies of an application simultaneously.

Prerequisites

  • Install the Astronetes Resiliency Operator.
  • Create a namespace in which to store the secrets and run the synchronization between clusters.

Setup

Import the first cluster

Import the first Kubernetes cluster as described in detail here:

  1. Save the kubeconfig file as cluster-1-kubeconfig.yaml.

    Import the kubeconfig file as a secret:

    kubectl create secret generic cluster-1-kubeconfig --from-file=kubeconfig.yaml=cluster-1-kubeconfig.yaml
    
  2. Create the KubernetesCluster resource manifest cluster-1.yaml:

    apiVersion: assets.astronetes.io/v1alpha1
    kind: KubernetesCluster
    metadata:
      name: cluster-1
    spec:
      secretName: cluster-1-kubeconfig
    

    Deploy the resource with the following command:

    kubectl create -f cluster-1.yaml
    

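The secret created above can also be produced declaratively. The following Python sketch is a simplified model of what `kubectl create secret generic <name> --from-file=kubeconfig.yaml=<file>` generates (the sample kubeconfig bytes are hypothetical):

```python
import base64

def kubeconfig_secret(name: str, kubeconfig: bytes) -> dict:
    # Build the Secret manifest equivalent to:
    #   kubectl create secret generic <name> --from-file=kubeconfig.yaml=<file>
    # kubectl stores each --from-file entry base64-encoded under .data
    return {
        "apiVersion": "v1",
        "kind": "Secret",
        "metadata": {"name": name},
        "type": "Opaque",
        "data": {"kubeconfig.yaml": base64.b64encode(kubeconfig).decode("ascii")},
    }

secret = kubeconfig_secret("cluster-1-kubeconfig", b"apiVersion: v1\nkind: Config\n")
```

Applying this manifest with `kubectl apply -f -` yields the same secret the imperative command creates.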
Import the second cluster

Import the second Kubernetes cluster as described in detail here:

  1. Save the kubeconfig file as cluster-2-kubeconfig.yaml.

    Import the kubeconfig file as a secret:

    kubectl create secret generic cluster-2-kubeconfig --from-file=kubeconfig.yaml=cluster-2-kubeconfig.yaml
    
  2. Create the KubernetesCluster resource manifest cluster-2.yaml:

    apiVersion: assets.astronetes.io/v1alpha1
    kind: KubernetesCluster
    metadata:
      name: cluster-2
    spec:
      secretName: cluster-2-kubeconfig
    

    Deploy the resource with the following command:

    kubectl create -f cluster-2.yaml
    

Synchronize the clusters

Create the configuration manifest to synchronize the clusters; the full documentation is provided at Configure kubernetes-to-kubernetes.

The following example shows a minimal configuration that synchronizes the namespaces labeled with sync=true:

  1. Save the configuration file as livesync.yaml with the following content:

    apiVersion: automation.astronetes.io/v1alpha1
    kind: LiveSynchronization
    metadata:
      name: active-active
    spec:
      plugin: kubernetes-to-kubernetes
      config:
        sourceName: cluster-1
        destinationName: cluster-2
        globalSelector:
          namespaceSelector:
            labelSelector:
              matchLabels:
                sync: "true"
        observability:
          enabled: true
        options:
          dryRun: false
        selectors:
        - objectSelector:
            labelSelector:
              matchLabels:
                sync: "true"
          target:
            group: ""
            resources:
            - namespaces
            version: v1
        - target:
            group: ""
            resources:
            - services
            - secrets
            - configmaps
            - serviceaccounts
            version: v1
        - target:
            group: apps
            resources:
            - deployments
            version: v1
        - target:
            group: rbac.authorization.k8s.io
            resources:
            - clusterroles
            - rolebindings
            version: v1
        - target:
            group: networking.k8s.io
            resources:
            - ingresses
            version: v1
      suspend: false
  2. Apply the configuration:

    kubectl apply -f livesync.yaml
    

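To illustrate how the globalSelector narrows the scope, here is a small Python sketch of matchLabels semantics (a simplified model with hypothetical namespaces; the real evaluation happens inside the operator): a namespace is selected only when it carries every label in the selector.

```python
def matches(match_labels: dict, labels: dict) -> bool:
    # matchLabels semantics: every key/value pair must be present verbatim
    return all(labels.get(k) == v for k, v in match_labels.items())

# Hypothetical namespaces and their labels
namespaces = {
    "payments": {"sync": "true", "team": "core"},
    "sandbox": {"team": "dev"},
    "billing": {"sync": "true"},
}
selected = [ns for ns, labels in namespaces.items() if matches({"sync": "true"}, labels)]
# selected == ["payments", "billing"]
```

Note that label values are strings, which is why the manifest quotes the value as sync: "true".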
Operations

Pause the synchronization

The synchronization process can be paused with the following command:

kubectl patch livesynchronization active-active -p '{"spec":{"suspend":true}}' --type=merge

Resume the synchronization

The synchronization process can be resumed with the following command:

kubectl patch livesynchronization active-active -p '{"spec":{"suspend":false}}' --type=merge

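Both commands use a JSON merge patch (--type=merge). A minimal Python sketch of RFC 7386 merge-patch semantics shows why only spec.suspend changes while the rest of the spec is preserved (the sample object is hypothetical):

```python
def merge_patch(target, patch):
    # RFC 7386: objects merge recursively, null deletes a key,
    # any other value replaces the existing one
    if not isinstance(patch, dict):
        return patch
    result = dict(target) if isinstance(target, dict) else {}
    for key, value in patch.items():
        if value is None:
            result.pop(key, None)
        else:
            result[key] = merge_patch(result.get(key), value)
    return result

obj = {"spec": {"suspend": False, "plugin": "kubernetes-to-kubernetes"}}
patched = merge_patch(obj, {"spec": {"suspend": True}})
# patched["spec"] keeps "plugin" and only flips "suspend"
```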
2 - Active-passive Kubernetes architecture

How to set up an active-passive architecture between two Kubernetes clusters

Overview

Active-passive replication between Kubernetes clusters is a strategy designed to provide high availability and disaster recovery, albeit in a more cost-efficient manner than active-active replication.

Prerequisites

  • Install the Astronetes Resiliency Operator.
  • Create a namespace in which to store the secrets and run the synchronization between clusters.

Setup

Import the active cluster

Import the active Kubernetes cluster as described in detail here:

  1. Save the kubeconfig file as cluster-1-kubeconfig.yaml.

    Import the kubeconfig file as a secret:

    kubectl create secret generic cluster-1-kubeconfig --from-file=kubeconfig.yaml=cluster-1-kubeconfig.yaml
    
  2. Create the KubernetesCluster resource manifest cluster-1.yaml:

    apiVersion: assets.astronetes.io/v1alpha1
    kind: KubernetesCluster
    metadata:
      name: cluster-1
    spec:
      secretName: cluster-1-kubeconfig
    

    Deploy the resource with the following command:

    kubectl create -f cluster-1.yaml
    

Import the passive cluster

Import the passive Kubernetes cluster as described in detail here:

  1. Save the kubeconfig file as cluster-2-kubeconfig.yaml.

    Import the kubeconfig file as a secret:

    kubectl create secret generic cluster-2-kubeconfig --from-file=kubeconfig.yaml=cluster-2-kubeconfig.yaml
    
  2. Create the KubernetesCluster resource manifest cluster-2.yaml:

    apiVersion: assets.astronetes.io/v1alpha1
    kind: KubernetesCluster
    metadata:
      name: cluster-2
    spec:
      secretName: cluster-2-kubeconfig
    

    Deploy the resource with the following command:

    kubectl create -f cluster-2.yaml
    

Synchronize the clusters

Create the configuration manifest to synchronize the clusters; the full documentation is provided at Configure kubernetes-to-kubernetes.

The following example shows a minimal configuration that synchronizes the namespaces labeled with sync=true. Deployments are replicated to the second cluster with replicas=0, meaning that the application is deployed but not running. Only after switching to the second cluster will the application be started.

  1. Save the configuration file as livesync.yaml with the following content:

    apiVersion: automation.astronetes.io/v1alpha1
    kind: LiveSynchronization
    metadata:
      name: active-passive
    spec:
      plugin: kubernetes-to-kubernetes
      config:
        sourceName: cluster-1
        destinationName: cluster-2
        globalSelector:
          namespaceSelector:
            labelSelector:
              matchLabels:
                sync: "true"
        observability:
          enabled: true
        options:
          dryRun: false
        transformations:
          - resources:
              - group: apps
                version: v1
                resources:
                  - deployments
            namespaceSelector:
              labelSelector:
                matchLabels:
                  sync: "true"
            operations:
              - jsonpatch:
                  operations:
                    - op: replace
                      path: /spec/replicas
                      value: 0
        selectors:
        - objectSelector:
            labelSelector:
              matchLabels:
                sync: "true"
          target:
            group: ""
            resources:
            - namespaces
            version: v1
        - target:
            group: ""
            resources:
            - services
            - secrets
            - configmaps
            - serviceaccounts
            version: v1
        - target:
            group: apps
            resources:
            - deployments
            version: v1
        - target:
            group: rbac.authorization.k8s.io
            resources:
            - clusterroles
            - rolebindings
            version: v1
        - target:
            group: networking.k8s.io
            resources:
            - ingresses
            version: v1
      suspend: false
  2. Apply the configuration:

    kubectl apply -f livesync.yaml
    

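The jsonpatch transformation above follows JSON Patch (RFC 6902) semantics. A minimal Python sketch of the single replace operation used here, applied to a hypothetical Deployment object:

```python
def json_patch_replace(obj: dict, path: str, value):
    # Minimal RFC 6902 "replace": walk to the parent node, then
    # overwrite the leaf, which must already exist
    parts = [p for p in path.split("/") if p]
    node = obj
    for part in parts[:-1]:
        node = node[part]
    if parts[-1] not in node:
        raise KeyError(f"path does not exist: {path}")
    node[parts[-1]] = value
    return obj

deployment = {"kind": "Deployment", "spec": {"replicas": 3}}
json_patch_replace(deployment, "/spec/replicas", 0)
# the Deployment is replicated to the passive cluster scaled to zero
```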
Operations

Pause the synchronization

The synchronization process can be paused with the following command:

kubectl patch livesynchronization active-passive -p '{"spec":{"suspend":true}}' --type=merge

Resume the synchronization

The synchronization process can be resumed with the following command:

kubectl patch livesynchronization active-passive -p '{"spec":{"suspend":false}}' --type=merge

Recover from disasters

Recovering from a disaster requires deploying one TaskRun resource for each Task that must run to recover the system and applications.

  1. Define the TaskRun resource in the taskrun.yaml file:

    apiVersion: automation.astronetes.io/v1alpha1
    kind: TaskRun
    metadata:
      name: restore-apps
    spec:
      taskName: active-passive
    
  2. Create the TaskRun:

    kubectl create -f taskrun.yaml
    
  3. Wait for the application to be recovered.

Understanding the TaskRun

After defining a LiveSynchronization, a Task resource will be created in the destination cluster. The operator processes the spec.config.replication.resources[*].recoveryProcess parameter to define the steps required to activate the dormant applications.

This is the Task that will be created according to the previously defined LiveSynchronization object:

apiVersion: automation.astronetes.io/v1alpha1
kind: Task
metadata:
  name: active-passive
spec:
  plugin: kubernetes-objects-transformation
  config:
    resources:
      - identifier:
          group: apps
          version: v1
          resources:
            - deployments
        patch:
          operations:
            - op: replace
              path: '/spec/replicas'
              value: 1
          filter:
            namespaceSelector:
              matchLabels:
                sync: "true"

For every Deployment in the selected namespaces, the replica count will be set to 1.

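The recovery step can be sketched in Python (a simplified model with hypothetical objects, not the operator's actual code): every Deployment living in a namespace labeled sync=true has its replica count restored to 1, while Deployments in other namespaces are left untouched.

```python
def recover(deployments: list, namespace_labels: dict, selector: dict) -> list:
    # Scale up only Deployments whose namespace carries all selector labels
    for d in deployments:
        labels = namespace_labels.get(d["metadata"]["namespace"], {})
        if all(labels.get(k) == v for k, v in selector.items()):
            d["spec"]["replicas"] = 1
    return deployments

deployments = [
    {"metadata": {"name": "api", "namespace": "payments"}, "spec": {"replicas": 0}},
    {"metadata": {"name": "job", "namespace": "sandbox"}, "spec": {"replicas": 0}},
]
namespace_labels = {"payments": {"sync": "true"}, "sandbox": {}}
recover(deployments, namespace_labels, {"sync": "true"})
# only "api", in the labeled namespace, is scaled to 1
```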
This object should not be tampered with; it is managed by its associated LiveSynchronization.

3 - Synchronize ZooKeeper clusters

How to synchronize data between ZooKeeper clusters

Overview

In environments where high availability and disaster recovery are paramount, it is essential to maintain synchronized data across different ZooKeeper clusters to prevent inconsistencies and ensure seamless failover.

The following tutorial explains how to synchronize ZooKeeper clusters.

Prerequisites

  • Install the Astronetes Resiliency Operator.
  • Create a namespace in which to store the secrets and run the synchronization between clusters.

Setup

Import the first cluster

Import the first ZooKeeper cluster as described in detail here:

  1. Define the Database resource with the following YAML, and save it as zookeeper-1.yaml:

    apiVersion: assets.astronetes.io/v1alpha1
    kind: Database
    metadata:
      name: zookeeper-1
    spec:
      zookeeper:
        client:
          servers:
            - <zookeeper_ip>:<zookeeper_port>
            - <zookeeper_ip>:<zookeeper_port>
            - <zookeeper_ip>:<zookeeper_port>
    
  2. Import the resource with the following command:

    kubectl create -f zookeeper-1.yaml
    

Import the second cluster

Import the second ZooKeeper cluster as described in detail here:

  1. Define the Database resource with the following YAML, and save it as zookeeper-2.yaml:

    apiVersion: assets.astronetes.io/v1alpha1
    kind: Database
    metadata:
      name: zookeeper-2
    spec:
      zookeeper:
        client:
          servers:
            - <zookeeper_ip>:<zookeeper_port>
            - <zookeeper_ip>:<zookeeper_port>
            - <zookeeper_ip>:<zookeeper_port>
    
  2. Import the resource with the following command:

    kubectl create -f zookeeper-2.yaml
    

Synchronize the clusters

Create the configuration manifest to synchronize the clusters; the full documentation is provided here:

The following example shows a configuration that synchronizes all the data under the / path every hour:

  1. Create the synchronization file as zookeeper-sync.yaml with the following content:

    apiVersion: automation.astronetes.io/v1alpha1
    kind: SynchronizationPlan
    metadata:
      name: synchronize-zookeeper
    spec:
      schedule: "0 * * * *"
      template:
        spec:
          plugin: zookeeper-to-zookeeper-nodes
          config:
            sourceName: zookeeper-1
            destinationName: zookeeper-2
            rootPath: /
            createRoutePath: true
    
  2. Apply the configuration:

    kubectl apply -f zookeeper-sync.yaml
    

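The schedule field uses standard cron syntax, and "0 * * * *" fires at minute 0 of every hour. A small Python sketch computing the next run time under that specific expression (a simplified model, not a general cron parser):

```python
from datetime import datetime, timedelta

def next_hourly_run(now: datetime) -> datetime:
    # Next fire time for the cron expression "0 * * * *":
    # minute 0 of the next full hour after `now`
    candidate = now.replace(minute=0, second=0, microsecond=0)
    if candidate <= now:
        candidate += timedelta(hours=1)
    return candidate

print(next_hourly_run(datetime(2024, 5, 1, 10, 30)))  # 2024-05-01 11:00:00
```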
Operations

Force the synchronization

The synchronization can be run at any time by creating a Synchronization object.

The following example shows a configuration that synchronizes all the data under the / path:

  1. Create the synchronization file as zookeeper-sync-once.yaml with the following content:

    apiVersion: automation.astronetes.io/v1alpha1
    kind: Synchronization
    metadata:
      generateName: synchronize-zookeeper-
    spec:
      plugin: zookeeper-to-zookeeper-nodes
      config:
        sourceName: zookeeper-1
        destinationName: zookeeper-2
        rootPath: /
        createRoutePath: true
  2. Apply the configuration:

    kubectl create -f zookeeper-sync-once.yaml
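
Conceptually, the plugin walks the source tree from rootPath and recreates each node on the destination. A simplified Python sketch over in-memory znode trees (path-to-data dictionaries; a real client such as kazoo would read and write against live ensembles instead):

```python
def sync_tree(source: dict, dest: dict, root_path: str = "/") -> dict:
    # Copy every znode at or under root_path, creating missing nodes
    # and overwriting stale data on the destination
    prefix = root_path.rstrip("/") + "/"
    for path, data in sorted(source.items()):  # parents sort before children
        if path == root_path or path.startswith(prefix):
            dest[path] = data
    return dest

src = {"/config": b"v2", "/config/broker": b"b1", "/other": b"x"}
dst = {"/config": b"v1"}
sync_tree(src, dst, "/config")
# dst now holds /config=v2 and /config/broker=b1; /other is not copied
```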