Tutorials
1 - Active-active Kubernetes architecture
Overview
Active-active replication between Kubernetes clusters is a strategy to ensure high availability and disaster recovery for applications. In this setup, multiple Kubernetes clusters, typically located in different geographical regions, run identical copies of an application simultaneously.
Prerequisites
- Install the Astronetes Resiliency Operator.
- Create a namespace in which to store the secrets and run the synchronization between clusters.
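For example, a dedicated namespace can be created and set as the default for the subsequent commands (the name astronetes-sync below is only illustrative):

kubectl create namespace astronetes-sync
kubectl config set-context --current --namespace=astronetes-sync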
Setup
Import the first cluster
Import the first Kubernetes cluster as described in detail in the following steps:

- Save the kubeconfig file as cluster-1-kubeconfig.yaml.
- Import the kubeconfig file as a secret:

  kubectl create secret generic cluster-1-kubeconfig --from-file=kubeconfig.yaml=cluster-1-kubeconfig.yaml

- Create the KubernetesCluster resource manifest cluster-1.yaml:

  apiVersion: assets.astronetes.io/v1alpha1
  kind: KubernetesCluster
  metadata:
    name: cluster-1
  spec:
    secretName: cluster-1-kubeconfig

- Deploy the resource with the following command:

  kubectl create -f cluster-1.yaml
Import the second cluster
Import the second Kubernetes cluster as described in detail in the following steps:

- Save the kubeconfig file as cluster-2-kubeconfig.yaml.
- Import the kubeconfig file as a secret:

  kubectl create secret generic cluster-2-kubeconfig --from-file=kubeconfig.yaml=cluster-2-kubeconfig.yaml

- Create the KubernetesCluster resource manifest cluster-2.yaml:

  apiVersion: assets.astronetes.io/v1alpha1
  kind: KubernetesCluster
  metadata:
    name: cluster-2
  spec:
    secretName: cluster-2-kubeconfig

- Deploy the resource with the following command:

  kubectl create -f cluster-2.yaml
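Both clusters should now be registered. Assuming the CRD follows the conventional lowercase plural naming, they can be listed with:

kubectl get kubernetesclusters

The output should show cluster-1 and cluster-2.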
Synchronize the clusters
Create the configuration manifest to synchronize the clusters; the full documentation is provided at Configure kubernetes-to-kubernetes.
The following example shows a minimal configuration that synchronizes namespaces labeled with sync=true:
- Save the configuration file as livesync.yaml with the following content:
  apiVersion: automation.astronetes.io/v1alpha1
  kind: LiveSynchronization
  metadata:
    name: livesync-dev-active-active
  spec:
    plugin: kubernetes-to-kubernetes
    config:
      sourceName: cluster-1
      destinationName: cluster-2
      globalSelector:
        namespaceSelector:
          labelSelector:
            matchLabels:
              sync: "true"
      observability:
        enabled: true
      options:
        dryRun: false
      selectors:
        - objectSelector:
            labelSelector:
              matchLabels:
                sync: "true"
          target:
            group: ""
            resources:
              - namespaces
            version: v1
        - target:
            group: ""
            resources:
              - services
              - secrets
              - configmaps
              - serviceaccounts
            version: v1
        - target:
            group: apps
            resources:
              - deployments
            version: v1
        - target:
            group: rbac.authorization.k8s.io
            resources:
              - clusterroles
              - rolebindings
            version: v1
        - target:
            group: networking.k8s.io
            resources:
              - ingresses
            version: v1
    suspend: false
- Apply the configuration:

  kubectl apply -f livesync.yaml
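Only namespaces carrying the sync=true label are matched by the selectors above. For example, to opt an existing namespace into synchronization (my-app is a hypothetical namespace name):

kubectl label namespace my-app sync=true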
Operations
Pause the synchronization
The synchronization process can be paused with the following command:
kubectl patch livesynchronization livesync-dev-active-active -p '{"spec":{"suspend":true}}' --type=merge
Resume the synchronization
The synchronization process can be resumed with the following command:
kubectl patch livesynchronization livesync-dev-active-active -p '{"spec":{"suspend":false}}' --type=merge
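The current state of the flag can be read back directly from the object:

kubectl get livesynchronization livesync-dev-active-active -o jsonpath='{.spec.suspend}'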
2 - Active-passive Kubernetes architecture
Overview
Active-passive replication between Kubernetes clusters is a strategy designed to provide high availability and disaster recovery, albeit in a more cost-efficient manner than active-active replication.
Prerequisites
- Install the Astronetes Resiliency Operator.
- Create a namespace in which to store the secrets and run the synchronization between clusters.
Setup
Import the active cluster
Import the active Kubernetes cluster as described in detail in the following steps:

- Save the kubeconfig file as cluster-1-kubeconfig.yaml.
- Import the kubeconfig file as a secret:

  kubectl create secret generic cluster-1-kubeconfig --from-file=kubeconfig.yaml=cluster-1-kubeconfig.yaml

- Create the KubernetesCluster resource manifest cluster-1.yaml:

  apiVersion: assets.astronetes.io/v1alpha1
  kind: KubernetesCluster
  metadata:
    name: cluster-1
  spec:
    secretName: cluster-1-kubeconfig

- Deploy the resource with the following command:

  kubectl create -f cluster-1.yaml
Import the passive cluster
Import the passive Kubernetes cluster as described in detail in the following steps:

- Save the kubeconfig file as cluster-2-kubeconfig.yaml.
- Import the kubeconfig file as a secret:

  kubectl create secret generic cluster-2-kubeconfig --from-file=kubeconfig.yaml=cluster-2-kubeconfig.yaml

- Create the KubernetesCluster resource manifest cluster-2.yaml:

  apiVersion: assets.astronetes.io/v1alpha1
  kind: KubernetesCluster
  metadata:
    name: cluster-2
  spec:
    secretName: cluster-2-kubeconfig

- Deploy the resource with the following command:

  kubectl create -f cluster-2.yaml
Synchronize the clusters
Create the configuration manifest to synchronize the clusters; the full documentation is provided at Configure kubernetes-to-kubernetes.
The following example shows a minimal configuration that synchronizes namespaces labeled with sync=true. The Deployments are replicated to the second cluster with replicas=0, meaning that the application is deployed but not running in that cluster. Only after the switch to the second cluster will the application be started.
- Save the configuration file as livesync.yaml with the following content:
  apiVersion: automation.astronetes.io/v1alpha1
  kind: LiveSynchronization
  metadata:
    name: active-passive
  spec:
    plugin: kubernetes-to-kubernetes
    config:
      sourceName: cluster-1
      destinationName: cluster-2
      globalSelector:
        namespaceSelector:
          labelSelector:
            matchLabels:
              sync: "true"
      observability:
        enabled: true
      options:
        dryRun: false
      transformations:
        - resources:
            - group: apps
              version: v1
              resources:
                - deployments
          namespaceSelector:
            labelSelector:
              matchLabels:
                sync: "true"
          operations:
            - jsonpatch:
                operations:
                  - op: replace
                    path: /spec/replicas
                    value: 0
      selectors:
        - objectSelector:
            labelSelector:
              matchLabels:
                sync: "true"
          target:
            group: ""
            resources:
              - namespaces
            version: v1
        - target:
            group: ""
            resources:
              - services
              - secrets
              - configmaps
              - serviceaccounts
            version: v1
        - target:
            group: apps
            resources:
              - deployments
            version: v1
        - target:
            group: rbac.authorization.k8s.io
            resources:
              - clusterroles
              - rolebindings
            version: v1
        - target:
            group: networking.k8s.io
            resources:
              - ingresses
            version: v1
    suspend: false
- Apply the configuration:

  kubectl apply -f livesync.yaml
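Once the first synchronization has completed, the replicated Deployments should exist on the passive cluster with zero replicas. This can be verified against the passive cluster's kubeconfig:

kubectl --kubeconfig cluster-2-kubeconfig.yaml get deployments -A

The READY column should read 0/0 for every synchronized Deployment.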
Operations
Pause the synchronization
The synchronization process can be paused with the following command:
kubectl patch livesynchronization active-passive -p '{"spec":{"suspend":true}}' --type=merge
Resume the synchronization
The synchronization process can be resumed with the following command:
kubectl patch livesynchronization active-passive -p '{"spec":{"suspend":false}}' --type=merge
Recover from disasters
Recovering from a disaster requires deploying one TaskRun resource for each Task that must be applied to recover the system and applications.
- Define the TaskRun resource in the taskrun.yaml file:

  apiVersion: automation.astronetes.io/v1alpha1
  kind: TaskRun
  metadata:
    name: restore-apps
  spec:
    taskName: active-passive

- Create the TaskRun:

  kubectl create -f taskrun.yaml

- Wait for the application to be recovered.
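The recovery progress can be followed by watching the Deployments scale up on the passive cluster:

kubectl --kubeconfig cluster-2-kubeconfig.yaml get deployments -A -w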
Understanding the TaskRun
After defining a LiveSynchronization, a Task resource will be created in the destination cluster. The operator processes the spec.config.replication.resources[*].recoveryProcess parameter to define the steps required to activate the dormant applications.
This is the Task that will be created according to the previously defined LiveSynchronization object:
apiVersion: automation.astronetes.io/v1alpha1
kind: Task
metadata:
  name: active-passive
spec:
  plugin: kubernetes-objects-transformation
  config:
    resources:
      - identifier:
          group: apps
          version: v1
          resources:
            - deployments
        patch:
          operations:
            - op: replace
              path: '/spec/replicas'
              value: 1
        filter:
          namespaceSelector:
            matchLabels:
              sync: "true"
For every Deployment in the selected namespaces, the replica count will be set to 1.
This object should not be tampered with; it is managed by its associated LiveSynchronization.
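The effect of the patch on a single Deployment is equivalent to scaling it manually; for example (my-deployment and my-app are hypothetical names):

kubectl scale deployment my-deployment -n my-app --replicas=1

The Task simply applies this change to every Deployment matched by the filter.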
3 - Synchronize Zookeeper clusters
Overview
In environments where high availability and disaster recovery are paramount, it is essential to maintain synchronized data across different ZooKeeper clusters to prevent inconsistencies and ensure seamless failover.
This tutorial explains how to synchronize ZooKeeper clusters.
Prerequisites
- Install the Astronetes Resiliency Operator.
- Create a namespace in which to store the secrets and run the synchronization between clusters.
Setup
Import the first cluster
Import the first ZooKeeper cluster as described in detail in the following steps:

- Define the Database resource with the following YAML, and save it as zookeeper-1.yaml:

  apiVersion: assets.astronetes.io/v1alpha1
  kind: Database
  metadata:
    name: zookeeper-1
  spec:
    zookeeper:
      client:
        servers:
          - <zookeeper_ip>:<zookeeper_port>
          - <zookeeper_ip>:<zookeeper_port>
          - <zookeeper_ip>:<zookeeper_port>

- Import the resource with the following command:

  kubectl create -f zookeeper-1.yaml
Import the second cluster
Import the second ZooKeeper cluster as described in detail in the following steps:

- Define the Database resource with the following YAML, and save it as zookeeper-2.yaml:

  apiVersion: assets.astronetes.io/v1alpha1
  kind: Database
  metadata:
    name: zookeeper-2
  spec:
    zookeeper:
      client:
        servers:
          - <zookeeper_ip>:<zookeeper_port>
          - <zookeeper_ip>:<zookeeper_port>
          - <zookeeper_ip>:<zookeeper_port>

- Import the resource with the following command:

  kubectl create -f zookeeper-2.yaml
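Both ZooKeeper clusters should now be registered as Database resources. Assuming the CRD follows the conventional lowercase plural naming, they can be listed with:

kubectl get databases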
Synchronize the clusters
Create the configuration manifest to synchronize the clusters (see the full documentation for details).
The following example shows a configuration that synchronizes all the data under the / path every hour:
- Create the synchronization file as zookeeper-sync.yaml with the following content:

  apiVersion: automation.astronetes.io/v1alpha1
  kind: SynchronizationPlan
  metadata:
    name: synchronize-zookeeper
  spec:
    schedule: "0 * * * *"
    template:
      spec:
        plugin: zookeeper-to-zookeeper-nodes
        config:
          sourceName: zookeeper-1
          destinationName: zookeeper-2
          rootPath: /
          createRoutePath: true

  Custom path: the data to be synchronized between clusters can be specified in spec.template.spec.config.rootPath.

- Apply the configuration:

  kubectl apply -f zookeeper-sync.yaml
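The schedule field uses standard cron syntax: "0 * * * *" fires at minute 0 of every hour. The plan can be inspected at any time with:

kubectl get synchronizationplan synchronize-zookeeper -o yaml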
Operations
Force the synchronization
The synchronization can be run at any time by creating a Synchronization object.
The following example shows a configuration that synchronizes all the data under the / path:
- Create the synchronization file as zookeeper-sync-once.yaml with the following content:
  apiVersion: automation.astronetes.io/v1alpha1
  kind: Synchronization
  metadata:
    generateName: synchronize-zookeeper-
  spec:
    plugin: zookeeper-to-zookeeper-nodes
    config:
      sourceName: zookeeper-1
      destinationName: zookeeper-2
      rootPath: /
      createRoutePath: true
- Create the Synchronization (kubectl create rather than kubectl apply is required here because the manifest uses generateName):

  kubectl create -f zookeeper-sync-once.yaml
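To verify the result, the znodes on the destination cluster can be compared with the source using the standard ZooKeeper CLI (replace the placeholder with a real destination server address):

zkCli.sh -server <zookeeper_ip>:<zookeeper_port> ls /

The listing should match the children of the root path on the source cluster.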