
Disaster Recovery

Astronetes Disaster Recovery Operator provides a transparent and effortless solution to protect Cloud Native platforms from possible disaster outages by leveraging Kubernetes native tools.

1 - Intro

What is Astronetes Disaster Recovery Operator and why it could be useful for you

Business continuity refers to the ability of a business to overcome potentially disruptive events with minimal impact on its operations. This is no small ordeal: it requires defining and implementing plans, processes and systems, with close collaboration and synchronization between multiple actors and departments.

This collection of assets and processes composes the company’s Disaster Recovery. Its goal is to reduce downtime and data loss in the case of a catastrophic, unforeseen situation. Disaster Recovery needs to answer two questions:

  • How much data can we lose? - Recovery Point Objective (RPO)
  • How long can we take to recover the system? - Recovery Time Objective (RTO)

Astronetes Disaster Recovery Operator provides a solution to improve the business continuity of a Cloud Native Platform by offering a Disaster Recovery tool that is transparent in day-to-day operations while having minimal impact on technical maintenance.

By syncing the two clusters in real time, the RPO is minimal. Astronetes Disaster Recovery Operator does not depend on system backups or tools external to the Kubernetes ecosystem.

And with an accessible component to resume operations, the RTO can be substantially reduced in comparison with alternatives that depend on backups stored outside Kubernetes.

2 - Architecture

Astronetes Disaster Recovery architecture

2.1 - Overview

Astronetes Disaster Recovery architecture

Architecture

The cluster is protected with a warm stand-by paired cluster, to which the workloads are offloaded when a disaster occurs. The resources can be deactivated in the destination cluster until such an event takes place, avoiding unnecessary resource consumption and optimizing organizational costs.

The Disaster Recovery Operator extracts the resources from the source cluster and syncs them to the destination cluster, maintaining a consistent state between them.

Operator monitoring is attached to the operator itself and is independent of either cluster.

Deployment

The operator can be deployed in either a 2-clusters or 3-clusters architecture.

2-clusters

This configuration is recommended for training, testing, validation or when the 3-clusters option is not optimal or possible.

The currently active cluster is the source cluster, while the passive one is the destination cluster. The operator, including all the Custom Resource Definitions and processes, is installed in the latter. The operator listens for new resources that fulfill the active RecoveryPlan requirements and clones them into the destination cluster.

The source cluster is never aware of the destination cluster and can exist and operate as normal without its presence. The destination cluster needs to have access to it through a ManagedCluster resource.
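A minimal sketch of such a ManagedCluster (covered in detail in the Setting a managed cluster section):

apiVersion: dr.astronetes.io/v1alpha1
kind: ManagedCluster
metadata:
  name: source
  namespace: <namespace_name>
spec:
  secretRef:
    name: source          # Secret holding the source cluster kubeconfig
    namespace: <namespace_name>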

3-clusters

In addition to the already existing 2 clusters, this modality includes the management cluster. The operator synchronization workflow is delegated to it instead of depending on the destination cluster. The management cluster is in charge of reading the changes and new resources in the source cluster and syncing them to the destination. Neither the source nor the destination cluster needs to know of the existence of the management cluster, and both can operate without it.

Having a separate cluster that is decoupled from direct production activity lowers operational risks and eases access control for both human and software operators. The operator needs to be installed in the destination cluster as well, so the recovery process can start without depending on other clusters. Custom Resources that configure the synchronization, such as RecoveryPlan, are deployed in the management cluster, while those only relevant when executing the recovery process, such as RecoveryExecutionJob, are deployed in the destination cluster.

This structure fits organizations that are already depending on a management cluster for other tasks or ones that are planning to do so. Astronetes Disaster Recovery does not require a standalone management cluster and can be installed and managed from an existing one.

2.2 - Components

Disaster Recovery Components

Operator

| Component | Description | Source cluster permissions | Destination cluster permissions |
|---|---|---|---|
| Operator | Orchestrates all the disaster recovery configurations. | N/A | N/A |

Recovery Plan

| Component | Description | Source cluster permissions | Destination cluster permissions |
|---|---|---|---|
| Events listener | Reads events in the source cluster. | Cluster reader | N/A |
| Processor | Filters and transforms the objects read from the source cluster. | Cluster reader | N/A |
| Synchronizer | Writes processed objects in the destination cluster. | N/A | Write |
| Reconciler | Sends delete events whenever it finds discrepancies between source and destination. | Cluster reader | Cluster reader |
| NATS | Used by other components to send and receive data. | N/A | N/A |
| Redis | Stores metadata about the synchronization state. Most Recovery Plan services interact with it. | N/A | N/A |
| Metrics exporter | Exports metrics about the Recovery Plan. | N/A | N/A |

Recovery Execution Job

| Component | Description | Source cluster permissions | Destination cluster permissions |
|---|---|---|---|
| Restorer | Restores the data in the destination cluster. | N/A | Write |

3 - Installation

Install the Disaster Recovery Operator

3.1 - Preparing to install

Setup for the necessary tools to install the operator.

Prerequisites

Familiarize yourself with the architecture by reading this section.

A valid Disaster Recovery Operator license key and registry access key should already be assigned.

Supported platforms

Astronetes Disaster Recovery Operator is vendor agnostic: any Kubernetes distribution, such as Google Kubernetes Engine, Azure Kubernetes Service, OpenShift or a self-managed bare-metal installation, can run it.

This is the certified compatibility matrix:

| Platform | Min Version | Max Version |
|---|---|---|
| AKS | 1.24 | 1.29 |
| EKS | 1.24 | 1.28 |
| GKE | 1.24 | 1.28 |
| OpenShift Container Platform | 4.11 | 4.14 |

Kubernetes

Software

Official kubernetes.io client CLI kubectl.

Networking

  • Allow traffic to the Image Registry quay.io/astrokube using the mechanism provided by the chosen distribution.
  • In a 3-clusters architecture, the management cluster needs to be able to communicate with both the destination and source clusters; it is not necessary to also allow connections between those two clusters. In a 2-clusters architecture there is no centralised management cluster, so communication between the destination and source clusters must be enabled. A quick connectivity check is sketched below.
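Assuming the kubeconfig files for both clusters are at hand (the file names below match the ones used later in Setting a managed cluster), reachability can be verified with the standard client:

kubectl --kubeconfig source-kubeconfig.yaml get namespaces
kubectl --kubeconfig destination-kubeconfig.yaml get namespaces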

Cluster configuration

  • Cluster admin permissions in the management, destination and source clusters. In a 2-clusters architecture, admin permissions are only required in the destination and source clusters, as the operator activities are delegated to the former (the destination cluster).
  • The Secret provided by AstroKube to access the Image Registry (its general shape is sketched after this list).
  • The Secret provided by AstroKube with the license key.
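The exact pull secret is supplied by AstroKube; as a reference only, an image pull Secret generally has the following shape (the name and contents here are placeholders, not the literal file):

apiVersion: v1
kind: Secret
metadata:
  name: astronetes-pull-secret            # hypothetical name; use the Secret provided by AstroKube
  namespace: astronetes-disaster-recovery-operator
type: kubernetes.io/dockerconfigjson
data:
  .dockerconfigjson: <base64-encoded credentials for quay.io/astrokube>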

OpenShift

Software

OpenShift client CLI.

Networking

  • Add quay.io/astrokube to the allowed registries in the Image configuration.
  • In a 3-clusters architecture, the management cluster needs to be able to communicate with both the destination and source clusters; it is not necessary to also allow connections between those two clusters. In a 2-clusters architecture there is no centralised management cluster, so communication between the destination and source clusters must be enabled.
apiVersion: config.openshift.io/v1
kind: Image
metadata:
  ...
spec:
  registrySources:
    allowedRegistries:
    ...
    - quay.io/astrokube

Cluster configuration

  • Cluster admin permissions in the management, destination and source clusters. In a 2-clusters architecture, admin permissions are only required in the destination and source clusters, as the operator activities are delegated to the former (the destination cluster).
  • The Secret provided by AstroKube to access the Image Registry.
  • The Secret provided by AstroKube with the license key.

3.2 - Installing on Kubernetes

Steps to install the Disaster Recovery Operator in Kubernetes

The following operations need to be executed in both the management and destination clusters.

Prerequisites

The cluster must already have cert-manager installed.

Process

1. Create Namespace

Create the Namespace where the operator will be installed:

kubectl create namespace astronetes-disaster-recovery-operator

2. Setup registry credentials

Create the Secret that stores the credentials to the AstroKube image registry:

kubectl -n astronetes-disaster-recovery-operator create -f pull-secret.yaml

3. Setup license key

Although the operator can be installed without a license key, pods originating from Astronetes CRDs such as Recovery Plans will crash. If the installation was performed before obtaining a valid license key, it can be updated as described in this section.

Create the Secret that stores the license key:

kubectl -n astronetes-disaster-recovery-operator create -f license-key.yaml

4. Install the operator

Install the CRDs:

kubectl apply -f https://astronetes.io/deploy/disaster-recovery-operator/v0.10.1/crds.yaml

Install the operator:

kubectl -n astronetes-disaster-recovery-operator apply -f https://astronetes.io/deploy/disaster-recovery-operator/v0.10.1/operator-k8s.yaml
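Once applied, the rollout can be verified with a standard status check (the Deployment name matches the one used later in the Update license key section):

kubectl -n astronetes-disaster-recovery-operator rollout status deployment disaster-recovery-operator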

3.3 - Installing on OpenShift

Steps to install the Disaster Recovery Operator in OpenShift

The following operations need to be executed in both the management and destination clusters.

Process

1. Create Namespace

Create the Namespace where the operator will be installed:

oc create namespace astronetes-disaster-recovery-operator

2. Setup registry credentials

Create the Secret that stores the credentials to the AstroKube image registry:

oc -n astronetes-disaster-recovery-operator create -f pull-secret.yaml

3. Setup license key

Although the operator can be installed without a license key, pods originating from Astronetes CRDs such as Recovery Plans will crash. If the installation was performed before obtaining a valid license key, it can be updated as described in this section.

Create the Secret that stores the license key:

oc -n astronetes-disaster-recovery-operator create -f license-key.yaml

4. Install the operator

Install the CRDs:

oc apply -f https://astronetes.io/deploy/disaster-recovery-operator/v0.10.1/crds.yaml

Install the operator:

oc -n astronetes-disaster-recovery-operator apply -f https://astronetes.io/deploy/disaster-recovery-operator/v0.10.1/operator-openshift.yaml

3.4 - Uninstalling on Kubernetes

Steps to uninstall the Disaster Recovery Operator on Kubernetes

Process

1. Delete Operator objects

Delete the resources in the destination cluster:

kubectl delete recoveryexecutionjobs -A --all

Delete the recovery plans in the management cluster:

kubectl delete recoveryplans -A --all

Delete the managed clusters and recovery buckets in the management cluster:

kubectl delete managedclusters,recoverybuckets -A --all

2. Remove the operator

Delete the operator:

kubectl -n astronetes-disaster-recovery-operator delete -f https://astronetes.io/deploy/disaster-recovery-operator/v0.10.1/operator-k8s.yaml

Delete the CRDs:

kubectl delete -f https://astronetes.io/deploy/disaster-recovery-operator/v0.10.1/crds.yaml

3. Remove registry credentials

Delete the Secret that stores the credentials to the AstroKube image registry:

kubectl -n astronetes-disaster-recovery-operator delete -f pull-secret.yaml

4. Remove license key

Delete the Secret that stores the license key:

kubectl -n astronetes-disaster-recovery-operator delete -f license-key.yaml

3.5 - Uninstalling on OpenShift

Steps to uninstall the Disaster Recovery Operator on OpenShift

Process

1. Delete Operator objects

Delete the resources in the destination cluster:

oc delete recoveryexecutionjobs -A --all

Delete the recovery plans in the management cluster:

oc delete recoveryplans -A --all

Delete the managed clusters and recovery buckets in the management cluster:

oc delete managedclusters,recoverybuckets -A --all

2. Remove the operator

Delete the operator:

oc -n astronetes-disaster-recovery-operator delete -f https://astronetes.io/deploy/disaster-recovery-operator/v0.10.1/operator-openshift.yaml

Delete the CRDs:

oc delete -f https://astronetes.io/deploy/disaster-recovery-operator/v0.10.1/crds.yaml

3. Remove registry credentials

Delete the Secret that stores the credentials to the AstroKube image registry:

oc -n astronetes-disaster-recovery-operator delete -f pull-secret.yaml

4. Remove license key

Delete the Secret that stores the license key:

oc -n astronetes-disaster-recovery-operator delete -f license-key.yaml

4 - Post-installation configuration

Steps to configure the Disaster Recovery solution

4.1 - Setting a managed cluster

Granting access to source and destination cluster

Introduction

Connections to both the source and destination clusters are set up using the ManagedCluster resource. Credentials are stored in Kubernetes Secrets, from which each ManagedCluster collects the access configuration needed to connect to its cluster.

Requirements

  • The kubeconfig file with read-only access to the source cluster
  • The kubeconfig file with cluster-admin access to the destination cluster
  • The Secret provided by AstroKube to access the Image Registry

Process

1. Prepare

Create Namespace

Create the namespace to configure the recovery process:

kubectl create namespace <namespace_name>

Setup registry credentials

Create the Secret that stores the credentials to the AstroKube image registry:

kubectl -n <namespace_name> create -f pull-secret.yaml

2. Configure the source Cluster

Create secret

Get the kubeconfig file that can be used to access the cluster, and save it as source-kubeconfig.yaml.

Then create the Secret with the following command:

kubectl -n <namespace_name> create secret generic source --from-file=kubeconfig.yaml=source-kubeconfig.yaml

Create resource

Define the ManagedCluster resource with the following YAML, and save it as managedcluster.yaml:

apiVersion: dr.astronetes.io/v1alpha1
kind: ManagedCluster
metadata:
  name: source
  namespace: <namespace_name>
spec:
  secretRef:
    name: source
    namespace: <namespace_name>

Deploy the resource with the following command:

kubectl create -f managedcluster.yaml

3. Configure the destination Cluster

Create secret

Get the kubeconfig file that can be used to access the cluster, and save it as destination-kubeconfig.yaml.

Then create the Secret with the following command:

kubectl -n <namespace_name> create secret generic destination --from-file=kubeconfig.yaml=destination-kubeconfig.yaml

Create resource

Define the ManagedCluster resource with the following YAML, and save it as managedcluster.yaml:

apiVersion: dr.astronetes.io/v1alpha1
kind: ManagedCluster
metadata:
  name: destination
  namespace: <namespace_name>
spec:
  secretRef:
    name: destination
    namespace: <namespace_name>

Deploy the resource with the following command:

kubectl create -f managedcluster.yaml
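Both ManagedCluster resources should now be listed in the configuration namespace:

kubectl -n <namespace_name> get managedclusters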

4.2 - Configuring a recovery plan

How to protect the platform resources from a disaster

Introduction

A RecoveryPlan resource indicates a set of Kubernetes resources to replicate or synchronize between the source cluster and the destination cluster.

Requirements

Process

1. Configure the recovery plan

Create the recoveryplan.yaml file according to your requirements. In this example, the goal is to synchronize Deployments whose disaster-recovery label is set to enabled. Additionally, once a Deployment is replicated, no Pods should be created in the destination cluster, and after a RecoveryExecutionJob the Deployment should launch active Pods again.

Let’s dissect the following YAML:

apiVersion: dr.astronetes.io/v1alpha1
kind: RecoveryPlan
metadata:
  name: applications
spec:
  suspend: true
  forceNamespaceCreation: true
  sourceClusterRef:
    name: source
    namespace: dr-maqueta
  destinationClusterRef:
    name: destination
    namespace: dr-maqueta
  resources:
    - group: apps
      version: v1
      resource: deployments
      transformation:
        patch:
          - op: replace
            path: /spec/replicas
            value: 0
      filters:
        selector:
          matchLabels:
            disaster-recovery: enabled
      recoveryProcess:
        fromPatch:
          - op: replace
            path: /spec/replicas
            value: 1

spec.sourceClusterRef and spec.destinationClusterRef refer to the name and namespace of the ManagedCluster resources for the corresponding clusters.

spec.resources is a list of the sets of resources to synchronize. A single RecoveryPlan can cover multiple types or groups of resources, although this example only manages Deployments.

The type of the resource is defined at spec.resources[0].resource. The filters are located in spec.resources[0].filters. In this case, the RecoveryPlan matches on the value of the disaster-recovery label.

spec.resources[0].transformation and spec.resources[0].recoveryProcess establish the actions to take after each resource is synchronized and after it is affected by the recovery process, respectively. In this case, while being replicated, each Deployment will have its replicas set to 0 in the destination cluster, going back to 1 after a successful RecoveryExecutionJob. The resource parameters are always left intact in the source cluster.
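For this plan to pick anything up, workloads in the source cluster must carry the matching label. A minimal Deployment that would be selected (the name and image are placeholders):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app              # hypothetical workload
  labels:
    disaster-recovery: enabled   # matched by the RecoveryPlan filter
spec:
  replicas: 3
  selector:
    matchLabels:
      app: example-app
  template:
    metadata:
      labels:
        app: example-app
    spec:
      containers:
        - name: app
          image: nginx:1.25      # placeholder image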

2. Suspending and resuming a recovery plan

A keen eye might have noticed the spec.suspend parameter. In this example it is set to true to indicate that the recovery plan is inactive. An inactive or suspended recovery plan will not replicate new or existing resources until it is resumed. Resuming a recovery plan can be done by setting spec.suspend to false and applying the changed YAML. Alternatively, a patch with kubectl works as well and does not require the original YAML file:

kubectl patch recoveryplan <recovery_plan_name> -p '{"spec":{"suspend":false}}' --type=merge

3. Deploy the recovery plan

The recovery plan can be deployed as any other Kubernetes resource:

kubectl -n <namespace_name> apply -f recoveryplan.yaml

4. Identify the RecoveryExecutionPlan

Once you have deployed the RecoveryPlan in the management cluster, you should find the RecoveryExecutionPlan created by the operator in the destination cluster:

kubectl -n <namespace_name> get recoveryexecutionplan

Additional steps

For more examples, take a look at our samples.

Modifying synchronized resources

Depending on the use case and the chosen Disaster Recovery approach, it can be convenient for resources synchronized in the destination cluster to differ from the original copy. Taking a warm standby scenario as an example: to optimize infrastructure resources, certain objects such as Deployments or CronJobs do not need to be actively running until there is a disaster. The standby destination cluster can run with minimal computing power and autoscale as soon as the recovery process starts, reducing the required overhead expenditure.

While a resource is being synchronized into the destination cluster, its properties can be transformed to adapt it to the organization’s needs. Then, if and when a disaster occurs, the resource characteristics can be restored to either their original state or an alternative one with the established recovery process.

Filters

Filters are used to select exactly which objects to synchronize. They are set in the spec.resources[x].filters parameter.

Name selector

The nameSelector filters by the name of the resources of the indicated version and type. The following example selects only the ConfigMaps whose name matches the regular expression config.*:

apiVersion: dr.astronetes.io/v1alpha1
kind: RecoveryPlan
metadata:
  name: test-name-selector
  namespace: dr-config
spec:
  suspend: false
  sourceClusterRef:
    name: source
    namespace: dr-config
  destinationClusterRef:
    name: destination
    namespace: dr-config
  forceNamespaceCreation: true
  resources:
    - version: v1
      resource: configmaps
      filters:
        nameSelector:
          regex:
            - "config.*"

This selector can also be used negatively with excludeRegex. The following example excludes every ConfigMap whose name ends in .test:

apiVersion: dr.astronetes.io/v1alpha1
kind: RecoveryPlan
metadata:
  name: test-name-selector
  namespace: dr-config
spec:
  suspend: false
  sourceClusterRef:
    name: source
    namespace: dr-config
  destinationClusterRef:
    name: destination
    namespace: dr-config
  forceNamespaceCreation: true
  resources:
    - version: v1
      resource: configmaps
      filters:
        nameSelector:
          excludeRegex:
          - "*.test"

Namespace selector

The namespaceSelector filters resources taking into consideration the namespace they belong to. This selector is useful to synchronize entire applications when they are contained in a namespace. The following example selects every Deployment placed in a namespace with the label disaster-recovery: enabled:

apiVersion: dr.astronetes.io/v1alpha1
kind: RecoveryPlan
metadata:
  name: applications
spec:
  suspend: true
  forceNamespaceCreation: true
  sourceClusterRef:
    name: source
    namespace: dr-maqueta
  destinationClusterRef:
    name: destination
    namespace: dr-maqueta
  resources:
    - group: apps
      version: v1
      resource: deployments
      filters:
        namespaceSelector:
          matchLabels:
            disaster-recovery: enabled

Transformations

Transformations are set in the spec.resources[x].transformation parameter and are managed through patches.

Patch modifications alter the underlying object definition using the same mechanism as kubectl patch. As with jsonpatch, the allowed operations are replace, add and remove. Patches are defined in the spec.resources[x].transformation.patch list, which admits an arbitrary number of modifications.

apiVersion: dr.astronetes.io/v1alpha1
kind: RecoveryPlan
metadata:
  name: recovery-plan
spec:
  ...
  resources:
    - ...
      transformation:
        patch:
          - op: replace
            path: /spec/replicas
            value: 0
          - op: remove
            path: /spec/strategy

RecoveryProcess

The RecoveryProcess of a RecoveryPlan is executed when a RecoveryExecutionJob targeting the RecoveryExecutionPlan originated from that RecoveryPlan is deployed. A resource can be restored either from the original definition stored in a bucket or by performing custom patches, as with Transformations.

To restore from the original data, read the Recovering from a Bucket section. This option will disregard performed transformations and replace the parameters with those of the source cluster.

Patching when recovering is configured in the spec.resources[x].recoveryProcess.fromPatch list, which admits an arbitrary number of modifications. It acts on the current state of the resource in the destination cluster, meaning that, unlike recovering from the original, it takes into account the transformations performed when the resource was synchronized. As with jsonpatch, the allowed operations are replace, add and remove.

apiVersion: dr.astronetes.io/v1alpha1
kind: RecoveryPlan
metadata:
  name: recovery-plan
spec:
  ...
  resources:
    - ...
      recoveryProcess:
        fromPatch:
          - op: replace
            path: /spec/replicas
            value: 1

4.3 - Recovering from a Bucket

How to save objects and recover them using object storage.

Introduction

A RecoveryBucket resource indicates an Object Storage that will be used to restore original objects in the RecoveryPlan.

Object Storage stores data in an unstructured format in which each entry represents an object. Unlike other storage solutions, there is no relationship or hierarchy between the stored data. Organizations can access their files as easily as with traditional hierarchical or tiered storage. Object Storage benefits include virtually infinite scalability and high availability of data.

Many Cloud Providers include their own flavor of Object Storage, and most tools and SDKs can interact with them as they share the same interface. Disaster Recovery Operator officially supports the following Object Storage solutions:

  • AWS Simple Storage Service (S3)
  • Google Cloud Storage

Disaster Recovery Operator can support multiple buckets in different providers as each one is managed independently.

Contents stored in a bucket

A bucket is assigned to a RecoveryPlan spec.resources item. The same bucket can be assigned to multiple resources. It stores every object synchronized to the destination cluster, with some internal control annotations added. In the case of a disaster, resources with recoveryProcess.fromOriginal.enabled set to true will be restored using the bucket configuration.

The path of a stored object is as follows: <recoveryplan_namespace>/<recoveryplan_name>/<object_group-version-resource>/<object_namespace>.<object_name>.
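For example, a Deployment named helloworld in the namespace test-namespace-one, covered by a RecoveryPlan named pre in the namespace dr-config, is stored at dr-config/pre/apps-v1-deployments/test-namespace-one.helloworld (the same path appears in the bucket log examples in Understanding logging).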

Requirements

  • At least one instance of an Object Storage service in one of the supported Cloud Providers. This is commonly known as a bucket and will be referred to as such in the documentation.
  • At least one pair of accessKeyID and secretAccessKey that grants both read and write permissions over all objects of the bucket. Refer to the chosen cloud provider’s documentation to learn how to create and extract them. It is recommended that each access key pair has access to a single bucket only; an example policy sketch follows.
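How keys are scoped is provider-specific. As an illustration only, on AWS an IAM policy restricting a key pair to a single bucket could look like the following sketch (the bucket name is a placeholder):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
      "Resource": "arn:aws:s3:::<bucket_name>/*"
    },
    {
      "Effect": "Allow",
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::<bucket_name>"
    }
  ]
}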

Preparing and setting the bucket

Google Cloud Storage

Create the secret

Save the following file and apply it to the cluster, substituting the template parameters with real values.

apiVersion: v1
kind: Secret
metadata:
  name: bucket
  namespace: <namespace>
stringData:
  s3.auth.yaml: |
    accessKeyID: <access_key_id>
    secretAccessKey: <secret_access_key>
    useSSL: true    

Create the RecoveryBucket

Save the following file and apply it to the cluster, substituting the template parameters with real values.

apiVersion: dr.astronetes.io/v1alpha1
kind: RecoveryBucket
metadata:
  name: bucket
  namespace: <namespace>
spec:
  endpoint: storage.googleapis.com
  bucketName: <bucket_name>
  secretRef:
    name: bucket
    namespace: <namespace>

Create the RecoveryPlan

To get started with Recovery Plans, check the corresponding section. If the Recovery Plan does not set spec.resources[x].recoveryProcess.fromOriginal.enabled to true, where x refers to the index of the desired resource, the contents of the bucket will not be used. For the configuration to work, make sure both the bucket reference and the recovery process transformations are correctly set.

Indicating which bucket to use is accomplished by configuring spec.bucketRef, as in the following example:

apiVersion: dr.astronetes.io/v1alpha1
kind: RecoveryPlan
metadata:
  name: applications
spec:
  suspend: false
  forceNamespaceCreation: true
  sourceClusterRef:
    name: source
    namespace: dr
  destinationClusterRef:
    name: destination
    namespace: dr
  resources:
    - group: apps
      version: v1
      resource: deployments
      transformation:
        patch:
          - op: replace
            path: /spec/replicas
            value: 0
      recoveryProcess:
        fromOriginal:
          enabled: true
  bucketRef:
    name: <bucket_name>
    namespace: <bucket_namespace>
    objectPrefix: <object_prefix>

AWS S3

Create the secret

Save the following file and apply it to the cluster, substituting the template parameters with real values.

apiVersion: v1
kind: Secret
metadata:
  name: bucket
  namespace: <namespace>
stringData:
  s3.auth.yaml: |
    accessKeyID: <access_key_id>
    secretAccessKey: <secret_access_key>
    useSSL: true    

Create the RecoveryBucket

Save the following file and apply it to the cluster, substituting the template parameters with real values.

S3 requires that the region in the endpoint matches the region of the target bucket. It has to be set explicitly, as AWS does not infer the bucket’s region, e.g. us-east-1 for North Virginia.

apiVersion: dr.astronetes.io/v1alpha1
kind: RecoveryBucket
metadata:
  name: bucket
  namespace: <namespace>
spec:
  endpoint: s3.<aws_region>.amazonaws.com
  bucketName: <bucket_name>
  secretRef:
    name: bucket
    namespace: <namespace>

Create the RecoveryPlan

To get started with Recovery Plans, check the corresponding section. If the Recovery Plan does not set spec.resources[x].recoveryProcess.fromOriginal.enabled to true, where x refers to the index of the desired resource, the contents of the bucket will not be used. For the configuration to work, make sure both the bucket reference and the recovery process transformations are correctly set.

Indicating which bucket to use is accomplished by configuring spec.bucketRef, as in the following example:

apiVersion: dr.astronetes.io/v1alpha1
kind: RecoveryPlan
metadata:
  name: applications
spec:
  suspend: false
  forceNamespaceCreation: true
  sourceClusterRef:
    name: source
    namespace: dr
  destinationClusterRef:
    name: destination
    namespace: dr
  resources:
    - group: apps
      version: v1
      resource: deployments
      transformation:
        patch:
          - op: replace
            path: /spec/replicas
            value: 0
      recoveryProcess:
        fromOriginal:
          enabled: true
  bucketRef:
    name: <bucket_name>
    namespace: <bucket_namespace>
    objectPrefix: <object_prefix>

4.4 - Resynchronization

Synchronized resources reconciliation between source and destination cluster.

Introduction

Under special circumstances there may be objects that were not synchronized from the source cluster to the destination cluster. To cover this case, Astronetes Disaster Recovery Operator offers a reconciliation process that adds, deletes or updates objects in the destination cluster when their state differs from the source.

Architecture

Reconciliation is performed at the Recovery Plan level: every Recovery Plan is in charge of its covered objects and of keeping them up to date with the specification. Reconciliation is driven by two components, EventsListener and Reconciler. The former is in charge of additive reconciliation and the latter of subtractive reconciliation.

Additive reconciliation

Refers to the reconciliation of objects that are present in the source cluster but, for whatever reason, are missing or not up to date in the destination cluster. The entry point is the EventsListener service, which receives events with the current source-cluster state of all the objects covered by the Recovery Plan, with a period of one hour by default.

These resync events are then treated like regular events and follow the synchronization communication flow. If the object does not exist in the destination cluster, the Synchronizer will apply it. In the case of updates, only those with a resourceVersion greater than the existing one for that object will be applied, updating the definition of said object.

Subtractive reconciliation

If an object was deleted in the source cluster but not in the destination, additive reconciliation will not detect it. The source cluster can send events containing the current state of its existing components, but not of those that have ceased to exist.

For that, the Reconciler is activated with a period of one hour by default. It compares the state of the objects covered by the Recovery Plan in both the source and destination clusters. If a discrepancy is found, it creates a delete event in NATS. This event is then processed as a normal delete event throughout the rest of the communication process.

Modifying the periodic interval

By default, the resynchronization process is launched every hour. This can be changed by modifying the value at spec.reconciliation.resyncPeriod (of type Duration, see the API Reference) in the RecoveryPlan object. The admitted format is %Hh%Mm%Ss, e.g. 1h0m0s for intervals of exactly one hour. Modifying this value updates the schedule for both additive and subtractive reconciliations.

apiVersion: dr.astronetes.io/v1alpha1
kind: RecoveryPlan
metadata:
  name: resync-3h-25m-12s
spec:
  ...
  reconciliation:
    resyncPeriod: 3h25m12s

5 - Update license key

Steps to update the license key for the Disaster Recovery Operator

There is no need to reinstall the operator when updating the license key.

1. Update the license key

Update the Kubernetes Secret that stores the license key with the new license (kubectl on Kubernetes, oc on OpenShift):

kubectl -n astronetes-disaster-recovery-operator apply -f new-license-key.yaml
oc -n astronetes-disaster-recovery-operator apply -f new-license-key.yaml

2. Restart the Disaster Recovery Operator

Restart the Disaster Recovery Operator Deployment to apply the new license (kubectl on Kubernetes, oc on OpenShift):

kubectl -n astronetes-disaster-recovery-operator rollout restart deployment disaster-recovery-operator
oc -n astronetes-disaster-recovery-operator rollout restart deployment disaster-recovery-operator

3. Wait for the Pods restart

Wait a couple of minutes until all the Disaster Recovery Operator Pods are restarted with the new license.

6 - Operator management

Actions to manage the operator

6.1 - Pause a recovery plan

How to pause a Recovery Plan.

Introduction

A RecoveryPlan can be paused in order to stop any operation in the source and destination cluster.

Requirements

Process

1. Pause the RecoveryPlan

Pause the RecoveryPlan using the following patch operation:

kubectl patch recoveryplan <recovery_plan_name> -p '{"spec":{"suspend":true}}' --type=merge

2. Verify the RecoveryPlan status

Get the list of the defined RecoveryPlans:

kubectl get recoveryplan

The result should show the SUSPENDED column set to true:

NAME      SUSPENDED   STATUS
example   true        Reconciled

3. Verify the containers

You can verify the logs of the containers deployed in the cluster:

kubectl logs example-eventslistener-76c9889466-vrz7w

A log will appear indicating that the RecoveryPlan is suspended:

Recovery plan is suspended

6.2 - Recovering from a disaster

How to recover the platform from a disaster

Introduction

In the event that a disaster happens, the replicated contents can be recovered by using a RecoveryExecutionJob. Applying it will execute every recovery process defined in the corresponding RecoveryPlan.

Requirements

Process

1. Pause the RecoveryPlan

Pause the RecoveryPlan using the following patch operation:

kubectl patch recoveryplan <recovery_plan_name> -p '{"spec":{"suspend":true}}' --type=merge

2. Identify the RecoveryExecutionPlan

Identify the RecoveryExecutionPlan configured in the previous step.

3. Deploy the RecoveryExecutionJob

Create the recoveryexecutionjob.yaml file with the following content:

apiVersion: dr.astronetes.io/v1alpha1
kind: RecoveryExecutionJob
metadata:
  generateName: <recovery_execution_plan_name>
  namespace: <namespace_name>
spec:
  recoveryExecutionPlanRef:
    name: <recovery_execution_plan_name>
    namespace: <namespace_name>

Deploy the RecoveryExecutionJob:

kubectl apply -f recoveryexecutionjob.yaml
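The job can then be listed in the destination cluster to follow its progress:

kubectl -n <namespace_name> get recoveryexecutionjobs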

6.3 - Workarounds

Limitations and workarounds of Astronetes Disaster Recovery Operator.

Immutable parameters

Astronetes Disaster Recovery Operator synchronizes the state between two clusters by creating new objects if they are missing from the destination cluster, updating them if they already exist, or deleting them if they disappear from the source cluster.

In most situations this behaviour is compatible with immutable parameters. Updating an immutable parameter requires deleting the object that contains it and then recreating it with the updated configuration. Astronetes Disaster Recovery Operator will detect the delete event and apply it before the recreation in the destination cluster automatically. There is no need for additional manual steps; the entire pipeline is managed by the Operator.

This is assuming that the RecoveryPlan is not paused. The Operator will fail to synchronize in the following situation:

  1. A RecoveryPlan is paused in the management cluster.
  2. An object that was selected for that RecoveryPlan is deleted and then recreated with an updated configuration in at least one immutable parameter.
  3. The RecoveryPlan of the first step resumes operation.

The delete event was not detected by the RecoveryPlan while it was suspended, so the object in the destination cluster was not deleted. Further events with the new configuration cannot be applied, since they would be read as an update to an immutable parameter.

In this case, the solution is to manually delete, in the destination cluster, every object with an updated immutable parameter that is selected by the previously suspended RecoveryPlan. The Operator will recreate them with the new configuration applied in the source cluster after the next resynchronization.
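A sketch of that cleanup, assuming the affected object is a Deployment and that a kubeconfig for the destination cluster is at hand (names are placeholders):

kubectl --kubeconfig destination-kubeconfig.yaml -n <namespace_name> delete deployment <deployment_name>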

7 - Observability

Monitor the state of the synchronization and recovery process

7.1 - Audit fields

Parameters built into Astronetes Disaster Recovery Operator to track when a change was made and who made it

Auditing and version control are important when configuring resources such as Recovery Plans. Knowing when a change was made and which account applied it can be decisive in an ongoing investigation to solve an issue or a configuration mismanagement.

Audit fields

The following annotations are attached to every resource that belongs to Astronetes Disaster Recovery:

apiVersion: dr.astronetes.io/v1alpha1
kind: RecoveryPlan
metadata:
  annotations:
    audit.astronetes.io/last-update-time: "<date>"         # Time at which the last update was applied.
    audit.astronetes.io/last-update-user-uid: "<uid-hash>" # Hash representing the Unique Identifier of the user that applied the change.
    audit.astronetes.io/last-update-username: "<username>" # Human readable name of the user that applied the change. 

Example:

apiVersion: dr.astronetes.io/v1alpha1
kind: RecoveryPlan
metadata:
  annotations:
    audit.astronetes.io/last-update-time: "2024-02-09T14:05:30.67520525Z"
    audit.astronetes.io/last-update-user-uid: "b3fd2a87-0547-4ff7-a49f-cce903cc2b61"
    audit.astronetes.io/last-update-username: system:serviceaccount:preproduction:microservice1

Fields are updated only when a change to .spec, .labels or .annotations is detected. Status modifications by the operator are not recorded.

Objects that are synchronized by a Recovery Plan will not have these annotations.
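The annotations can be read back directly with kubectl; note that dots inside the annotation key must be escaped in JSONPath:

kubectl get recoveryplan <recovery_plan_name> -o jsonpath='{.metadata.annotations.audit\.astronetes\.io/last-update-username}'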

7.2 - Understanding logging

How to interpret Disaster Recovery Operator log messages and manage them

Disaster Recovery Operator implements a logging system throughout all its pieces so that the end user can have visibility on the system.

JSON fields

| Name | Description |
|---|---|
| level | Log level at write time. |
| timestamp | Time at which the log was written. |
| msg | Log message. |
| process | Information about the process identity that generated the log. |
| event | Indicates if the log refers to a create, update or delete action. |
| sourceObject | Object related to the source cluster that is being synchronized. |
| oldSourceObject | Previous state of the sourceObject. Only applicable to update events. |
| sourceCluster | Information about the source managed cluster. |
| destinationObject | Object related to the destination cluster. |
| destinationCluster | Information about the destination managed cluster. |
| bucket | Recovery bucket information. |
| bucketObject | Path to the object to synchronize. |
| lastUpdate | Auditing information. More information. |

Examples

An object read from the source cluster.

{
  "level": "info",
  "timestamp": "2023-11-28T18:05:26.904276629Z",
  "msg": "object read from cluster",
  "process": {
    "id": "eventslistener"
  },
  "sourceCluster": {
    "name": "source",
    "namespace": "dr-config",
    "resourceVersion": "91015",
    "uid": "3c39aaf0-4216-43a8-b23c-63f082b22436"
  },
  "sourceObject": {
    "apiGroup": "apps",
    "apiVersion": "v1",
    "name": "nginx-deployment-five",
    "namespace": "test-namespace-five",
    "resource": "deployments",
    "resourceVersion": "61949",
    "uid": "5eb6d1d1-b694-4679-a482-d453bcd5317f"
  },
  "oldSourceObject": {
    "apiGroup": "apps",
    "apiVersion": "v1",
    "name": "nginx-deployment-five",
    "namespace": "test-namespace-five",
    "resource": "deployments",
    "resourceVersion": "61949",
    "uid": "5eb6d1d1-b694-4679-a482-d453bcd5317f"
  },
  "lastUpdate": {
    "time": "2023-11-25T13:12:28.251894531Z",
    "userUID": "165d3e9f-04f4-418e-863f-07203389b51e",
    "username": "kubernetes-admin"
  },
  "event": {
    "type": "update"
  }
}

An object was uploaded to a recovery bucket.

{
  "level": "info",
  "timestamp": "2023-11-28T18:05:27.593493962Z",
  "msg": "object uploaded in bucket",
  "sourceObject": {
    "apiGroup": "apps",
    "apiVersion": "v1",
    "name": "helloworld",
    "namespace": "test-namespace-one",
    "resource": "deployments",
    "resourceVersion": "936",
    "uid": "7c2ac690-3279-43ca-b14e-57b6d57e78e1"
  },
  "oldSourceObject": {
    "apiGroup": "apps",
    "apiVersion": "v1",
    "name": "helloworld",
    "namespace": "test-namespace-one",
    "resource": "deployments",
    "resourceVersion": "936",
    "uid": "7c2ac690-3279-43ca-b14e-57b6d57e78e1"
  },
  "process": {
    "id": "processor",
    "consumerID": "event-processor-n74"
  },
  "bucket": {
    "name": "bucket-dev",
    "namespace": "dr-config",
    "resourceVersion": "91006",
    "uid": "47b50013-3058-4283-8c0d-ea3a3022a339"
  },
  "bucketObject": {
    "path": "dr-config/pre/apps-v1-deployments/test-namespace-one.helloworld"
  },
  "lastUpdate": {
    "time": "2023-11-25T13:12:29.625399813Z",
    "userUID": "165d3e9f-04f4-418e-863f-07203389b51e",
    "username": "kubernetes-admin"
  }
}

Managing logs

Message structure varies depending on the operation that originated it.

The sourceCluster and destinationCluster fields are only present for operations that required direct access to either cluster. The former can only appear in messages originating from the eventsListener, processor or reconciler services; the latter only in synchronizer or reconciler log messages. These parameters are not present in internal messages, such as those coming from NATS, since those involve no direct connection with either cluster.

oldSourceObject is the previous state of the object when performing an update operation. It is not present in other event types.

When the bucket and bucketObject parameters are present, the operation was performed against the indicated bucket without any involvement of the source and destination clusters. For create operations, an object was uploaded for the first time to the bucket; for updates, an existing one was modified; and for deletes, an object was removed from the specified bucket.

These characteristics can be exploited to improve log searches by narrowing down the messages to those that are relevant at the moment. As an example, the following command outputs only those logs that affect the source managed cluster, by filtering out the messages that lack the sourceCluster field.

kubectl -n dr-config logs pre-eventslistener-74bc689665-fwsjc | jq '. | select(.sourceCluster != null)'

This could be useful when trying to debug and solve connection issues that might arise.

Log messages

The log message is located in the msg parameter. It can be read and interpreted to establish the severity of the log. The following tables group every distinct log message depending on whether it should be treated as an error or as informative.

Error messages

msg
“error reading server groups and resources”
“error reading resources for group version”
“error getting namespace from cluster”
“error creating namespace in cluster”
“error getting object from cluster”
“error creating object in cluster”
“error updating object in cluster”
“error listing objects in cluster”
“error deleting object in cluster”
“error uploading object in bucket”
“error deleting object form bucket”
“error getting object from bucket”

Informative messages

msg
“reading server groups and resources”
“server group and resources read from cluster”
“reading resources for group version”
“resource group version not found”
“group resource version found”
“reading namespace from cluster”
“namespace not found in cluster”
“namespace read from cluster”
“creating namespace from cluster”
“namespace already exists in cluster”
“namespace created in cluster”
“reading object from cluster”
“object not found in cluster”
“object read from cluster”
“creating object in cluster”
“object created in cluster”
“updating object in cluster”
“object updated in cluster”
“deleting object in cluster”
“object deleted in cluster”
“listing objects in cluster”
“list objects not found in cluster”
“listed objects in cluster”
“uploading object in bucket”
“object uploaded in bucket”
“deleting object from bucket”
“object deleted from bucket”
“getting object from bucket”
“object got from bucket”
“listing object from bucket”
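Since every error-level message in the table above begins with the word error, a quick severity filter can rely on that prefix (a sketch using jq, with a placeholder pod name):

kubectl -n dr-config logs <pod_name> | jq '. | select(.msg | startswith("error"))'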

7.3 - Grafana setup

How to configure Grafana

Astronetes Disaster Recovery Operator offers the option of leveraging an existing Grafana installation to monitor the state of the synchronization and recovery process. Users can incorporate the provided visualizations to their workflows in a transparent manner without affecting their operability.

1. Requirements

Grafana Operator

The operator installation includes the necessary tooling to extract monitoring information from it. To view that information with the official dashboard, the management cluster is required to have the Grafana Operator installed.

Astronetes Disaster Recovery Operator supports Grafana v4 and Grafana v5.

2a. Using Grafana Operator v4

Create the GrafanaDashboard from the release manifests:

kubectl apply -f https://astronetes.io/deploy/disaster-recovery-operator/v0.10.1/grafana-v4-dashboard.yaml

2b. Using Grafana Operator v5

Create the GrafanaDashboard from the release manifests:

kubectl apply -f https://astronetes.io/deploy/disaster-recovery-operator/v0.10.1/grafana-v5-dashboard.yaml

3. Working with the dashboard

The dashboard appears with the name Astronetes Disaster Recovery - Recovery Plans. It shows detailed information about the write, read and computing processes, alongside a general overview of the health of the operator.

General view of the status of the operator:

The dashboard can be filtered by the following characteristics:

  • Namespace. Only shows information related to the Recovery Plans in a specified namespace.
  • Recovery Plan. Filters by a specific Recovery Plan.
  • Object Namespace. Only shows information about the objects located in a given namespace, regardless of their associated Recovery Plan.
  • Object API Group. Objects are filtered according to the API Group they belong to.

Filters can be combined to get more specific results, e.g. getting the networking-related objects that belong to a Recovery Plan deployed in a given namespace.

8 - Samples

8.1 - RecoveryPlan

8.1.1 - Generic Applications

The following RecoveryPlan synchronizes several Kubernetes resources between the source and destination clusters:

  • Deployments
  • ConfigMaps
  • Secrets
  • Services
  • CronJobs

apiVersion: dr.astronetes.io/v1alpha1
kind: RecoveryPlan
metadata:
  name: applications
spec:
  suspend: false
  forceNamespaceCreation: true
  sourceClusterRef:
    name: source
    namespace: dr-maqueta
  destinationClusterRef:
    name: destination
    namespace: dr-maqueta
  resources:
    - group: apps
      version: v1
      resource: deployments
      transformation:
        patch:
          - op: replace
            path: /spec/replicas
            value: 0
      filters:
        selector:
          matchLabels:
            disaster-recovery: enabled
      recoveryProcess:
        fromPatch:
          - op: replace
            path: /spec/replicas
            value: 1
    - version: v1
      resource: services
      filters:
        selector:
          matchLabels:
            disaster-recovery: enabled
    - version: v1
      resource: secrets
      filters:
        selector:
          matchLabels:
            disaster-recovery: enabled
    - version: v1
      resource: configmaps
      filters:
        selector:
          matchLabels:
            disaster-recovery: enabled
    - group: batch
      version: v1
      resource: cronjobs
      filters:
        selector:
          matchLabels:
            disaster-recovery: enabled

8.1.2 - Namespaces

This RecoveryPlan synchronizes Namespaces between the source and destination clusters.

apiVersion: dr.astronetes.io/v1alpha1
kind: RecoveryPlan
metadata:
  name: ns
spec:
  suspend: false 
  sourceClusterRef:
    name: source
    namespace: dr-maqueta
  destinationClusterRef:
    name: destination
    namespace: dr-maqueta
  resources:
    - version: v1
      resource: namespaces
      filters:
        selector:
          matchLabels:
            disaster-recovery: enabled

8.1.3 - Secrets

This RecoveryPlan synchronizes Secrets between the source and destination clusters.

apiVersion: dr.astronetes.io/v1alpha1
kind: RecoveryPlan
metadata:
  name: apps-pre
spec:
  suspend: false
  forceNamespaceCreation: true
  sourceClusterRef:
    name: source
    namespace: dr-maqueta
  destinationClusterRef:
    name: destination
    namespace: dr-maqueta
  resources:
    - version: v1
      resource: secrets
      filters:
        selector:
          matchLabels:
            disaster-recovery: enabled

8.1.4 - Services

This RecoveryPlan synchronizes Services between the source and destination clusters.

apiVersion: dr.astronetes.io/v1alpha1
kind: RecoveryPlan
metadata:
  name: apps-pre
spec:
  suspend: false
  forceNamespaceCreation: true
  sourceClusterRef:
    name: source
    namespace: dr-maqueta
  destinationClusterRef:
    name: destination
    namespace: dr-maqueta
  resources:
    - version: v1
      resource: services
      filters:
        selector:
          matchLabels:
            disaster-recovery: enabled
      transformation:
        patch:
          - op: remove
            path: /spec/clusterIP
          - op: remove
            path: /spec/clusterIPs

9 - Reference

This section contains the API Reference of CRDs for the Disaster Recovery.

9.1 - API Reference

Packages

dr.astronetes.io/v1alpha1

Package v1alpha1 contains API Schema definitions for the dr v1alpha1 API group

Resource Types

BucketRef

Appears in:

| Field | Description |
|---|---|
| name string | Bucket name |
| namespace string | Bucket namespace |
| objectPrefix string | ObjectPrefix to contain the objects for this recovery plan |

ClusterRef

Appears in:

| Field | Description |
|---|---|
| name string | Cluster name |
| namespace string | Cluster namespace |

Components

Appears in:

| Field | Description |
|---|---|
| eventsListener EventsListener | EventsListener configuration |
| processor Processor | Processor configuration |
| reconciler Reconciler | Reconciler configuration |
| restorer Reconciler | Restorer configuration |
| synchronizer Synchronizer | Synchronizer configuration |
| nats Nats | Nats configuration |
| redis Redis | Redis configuration |
| metricsExporter MetricsExporter | MetricsExporter configuration |

ContainerSelector

Underlying type: struct{Name string `json:"name"`; Type ContainerSelectorType `json:"type"`}

Appears in:

EventsListener

Appears in:

| Field | Description |
|---|---|
| logLevel string | Log level: debug;info;warn;error;panic;fatal |
| imagePullPolicy PullPolicy | EventsListener ImagePullPolicy |
| resources ResourceRequirements | Resources to be assigned to EventsListener |

ExecutionPlanResource

Appears in:

| Field | Description |
|---|---|
| group string | Resource group |
| version string | Resource version |
| resource string | Resource |
| patchOptions PatchOpts | |
| fromPatch PatchOperation array | List of JSONPatch operations to apply to the resources |
| operations Operation array | |
| fromOriginal FromOriginal | FromOriginal to apply to the resources |

Filters

Appears in:

| Field | Description |
|---|---|
| selector LabelSelector | Filter the resources to be processed by Selector |
| namespaceSelector LabelSelector | Filter the resources to be processed by Namespace’s Selector |

FromOriginal

FromOriginal resources are cloned in Bucket

Appears in:

| Field | Description |
|---|---|
| enabled boolean | |

ManagedCluster

ManagedCluster is the Schema for the managedclusters API

Appears in:

| Field | Description |
|---|---|
| apiVersion string | dr.astronetes.io/v1alpha1 |
| kind string | ManagedCluster |
| metadata ObjectMeta | Refer to Kubernetes API documentation for fields of metadata. |
| spec ManagedClusterSpec | |

ManagedClusterList

ManagedClusterList contains a list of ManagedCluster

| Field | Description |
|---|---|
| apiVersion string | dr.astronetes.io/v1alpha1 |
| kind string | ManagedClusterList |
| metadata ListMeta | Refer to Kubernetes API documentation for fields of metadata. |
| items ManagedCluster array | |

ManagedClusterSpec

ManagedClusterSpec defines the desired state of ManagedCluster

Appears in:

| Field | Description |
|---|---|
| secretRef SecretRef | Reference to the secret that stores the cluster Kubeconfig |

MetricsExporter

Appears in:

| Field | Description |
|---|---|
| logLevel string | Log level: debug;info;warn;error;panic;fatal |
| imagePullPolicy PullPolicy | MetricsExporter ImagePullPolicy |
| resources ResourceRequirements | Resources to be assigned to MetricsExporter |

Nats

Appears in:

| Field | Description |
|---|---|
| imagePullPolicy PullPolicy | Nats ImagePullPolicy |
| resources ResourceRequirements | Resources to be assigned to Nats |

Observability

Appears in:

| Field | Description |
|---|---|
| interval Duration | |
| enabledV1 boolean | |
| enabledV2 boolean | |

OnAddStrategy

Underlying type: string

Appears in:

OnDeleteStrategy

Underlying type: string

Appears in:

OnUpdateStrategy

Underlying type: string

Appears in:

Op

Underlying type: string

Op types

Appears in:

Operation

Appears in:

| Field | Description |
|---|---|
| op Op | EnvVar |
| filters OperationFilters | |
| env EnvVar | |

OperationFilters

Appears in:

| Field | Description |
|---|---|
| containerSelector ContainerSelector | Filter the resources to be processed by Selector |

PatchOperation

Appears in:

| Field | Description |
|---|---|
| op string | |
| path string | |
| value JSON | |

PatchOpts

Appears in:

| Field | Description |
|---|---|
| skipIfNotFoundOnDelete boolean | SkipIfNotFoundOnDelete determines if errors should be ignored when trying to remove a field that doesn’t exist. |

Processor

Appears in:

| Field | Description |
|---|---|
| concurrentTasks integer | Number of concurrent tasks |
| replicas integer | Number of replicas |
| logLevel string | Log level: debug;info;warn;error;panic;fatal |
| imagePullPolicy PullPolicy | Processor ImagePullPolicy |
| resources ResourceRequirements | Resources to be assigned to Processor |

Reconciler

Appears in:

| Field | Description |
|---|---|
| concurrentTasks integer | Number of concurrent tasks |
| replicas integer | Number of replicas |
| logLevel string | Log level: debug;info;warn;error;panic;fatal |
| imagePullPolicy PullPolicy | Reconciler ImagePullPolicy |
| resources ResourceRequirements | Resources to be assigned to Reconciler |

Reconciliation

Appears in:

| Field | Description |
|---|---|
| resyncPeriod Duration | Time between reconciliation processes |

RecoveryBucket

RecoveryBucket is the Schema for the recoverybuckets API

Appears in:

| Field | Description |
|---|---|
| apiVersion string | dr.astronetes.io/v1alpha1 |
| kind string | RecoveryBucket |
| metadata ObjectMeta | Refer to Kubernetes API documentation for fields of metadata. |
| spec RecoveryBucketSpec | |

RecoveryBucketList

RecoveryBucketList contains a list of RecoveryBucket

| Field | Description |
|---|---|
| apiVersion string | dr.astronetes.io/v1alpha1 |
| kind string | RecoveryBucketList |
| metadata ListMeta | Refer to Kubernetes API documentation for fields of metadata. |
| items RecoveryBucket array | |

RecoveryBucketSpec

RecoveryBucketSpec defines the desired state of RecoveryBucket

Appears in:

| Field | Description |
|---|---|
| endpoint string | Bucket endpoint |
| bucketName string | Bucket name |
| secretRef SecretRef | Reference to the secret that stores the bucket credentials |

RecoveryExecutionJob

RecoveryExecutionJob is the Schema for the recoveryexecutionjobs API

Appears in:

| Field | Description |
|---|---|
| apiVersion string | dr.astronetes.io/v1alpha1 |
| kind string | RecoveryExecutionJob |
| metadata ObjectMeta | Refer to Kubernetes API documentation for fields of metadata. |
| spec RecoveryExecutionJobSpec | |

RecoveryExecutionJobList

RecoveryExecutionJobList contains a list of RecoveryExecutionJob

| Field | Description |
|---|---|
| apiVersion string | dr.astronetes.io/v1alpha1 |
| kind string | RecoveryExecutionJobList |
| metadata ListMeta | Refer to Kubernetes API documentation for fields of metadata. |
| items RecoveryExecutionJob array | |

RecoveryExecutionJobSpec

RecoveryExecutionJobSpec defines the desired state of RecoveryExecutionJob

Appears in:

| Field | Description |
|---|---|
| recoveryExecutionPlanRef RecoveryExecutionPlanRef | Reference to the RecoveryExecutionPlan |
| retries integer | Retries |

RecoveryExecutionPlan

RecoveryExecutionPlan is the Schema for the recoveryexecutionplans API

Appears in:

| Field | Description |
|---|---|
| apiVersion string | dr.astronetes.io/v1alpha1 |
| kind string | RecoveryExecutionPlan |
| metadata ObjectMeta | Refer to Kubernetes API documentation for fields of metadata. |
| spec RecoveryExecutionPlanSpec | |

RecoveryExecutionPlanList

RecoveryExecutionPlanList contains a list of RecoveryExecutionPlan

| Field | Description |
|---|---|
| apiVersion string | dr.astronetes.io/v1alpha1 |
| kind string | RecoveryExecutionPlanList |
| metadata ListMeta | Refer to Kubernetes API documentation for fields of metadata. |
| items RecoveryExecutionPlan array | |

RecoveryExecutionPlanRef

Appears in:

| Field | Description |
|---|---|
| name string | RecoveryExecutionPlan name |
| namespace string | RecoveryExecutionPlan namespace |

RecoveryExecutionPlanSpec

RecoveryExecutionPlanSpec defines the desired state of RecoveryExecutionPlan

Appears in:

| Field | Description |
|---|---|
| recoveryPlan RecoveryPlanRef | RecoveryPlan origin |
| kubeconfigSecretRef SecretRef | Secret for Kubeconfig |
| licenseSecretRef SecretRef | Secret for the license |
| bucketRef BucketRef | Bucket used to clone the original state of the resources |
| sourceClusterID string | |
| destinationClusterID string | |
| resources ExecutionPlanResource array | Resources to recover |

RecoveryPlan

RecoveryPlan is the Schema for the recoveryplans API

Appears in:

| Field | Description |
|---|---|
| apiVersion string | dr.astronetes.io/v1alpha1 |
| kind string | RecoveryPlan |
| metadata ObjectMeta | Refer to Kubernetes API documentation for fields of metadata. |
| spec RecoveryPlanSpec | |

RecoveryPlanList

RecoveryPlanList contains a list of RecoveryPlan

| Field | Description |
|---|---|
| apiVersion string | dr.astronetes.io/v1alpha1 |
| kind string | RecoveryPlanList |
| metadata ListMeta | Refer to Kubernetes API documentation for fields of metadata. |
| items RecoveryPlan array | |

RecoveryPlanRef

Appears in:

| Field | Description |
|---|---|
| name string | Name of the recovery plan |
| namespace string | Namespace of the recovery plan |

RecoveryPlanSpec

RecoveryPlanSpec defines the desired state of RecoveryPlan

Appears in:

| Field | Description |
|---|---|
| suspend boolean | Suspend the process to synchronize to the destination cluster |
| forceNamespaceCreation boolean | ForceNamespaceCreation: if true, the namespace will be created in case it doesn’t exist; otherwise the resource will be omitted |
| ignoreNamespaces string array | Include kube-system namespace |
| sourceClusterRef ClusterRef | Source cluster |
| destinationClusterRef ClusterRef | Destination cluster |
| bucketRef BucketRef | Bucket used to clone the original state of the resources |
| components Components | List of component configurations to be applied to the Kubesync services deployed |
| resources Resource array | Resources to synchronize |
| reconciliation Reconciliation | Reconciliation process |
| observability Observability | Observability configuration |

RecoveryProcess

Appears in:

| Field | Description |
|---|---|
| patchOptions PatchOpts | |
| fromPatch PatchOperation array | List of JSONPatch operations to apply to the resources |
| operations Operation array | |
| fromOriginal FromOriginal | |

Redis

Appears in:

| Field | Description |
|---|---|
| imagePullPolicy PullPolicy | Redis ImagePullPolicy |
| resources ResourceRequirements | Resources to be assigned to Redis |

Resource

Appears in:

| Field | Description |
|---|---|
| group string | Resource group |
| version string | Resource version |
| resource string | Resource |
| transformation Transformation | Transformation to apply to the resources |
| strategy Strategy | Strategy to apply to the recovery plan |
| filters Filters | Resource selection filters |
| recoveryProcess RecoveryProcess | Recovery plan configuration |

SecretRef

Appears in:

| Field | Description |
|---|---|
| name string | Secret name |
| namespace string | Secret namespace |

Strategy

Strategy to be applied for the recovery plan

Appears in:

| Field | Description |
|---|---|
| onDelete OnDeleteStrategy | |
| onAdd OnAddStrategy | |
| onUpdate OnUpdateStrategy | |

Synchronizer

Appears in:

| Field | Description |
|---|---|
| concurrentTasks integer | Number of concurrent tasks |
| replicas integer | Number of replicas |
| logLevel string | Log level: debug;info;warn;error;panic;fatal |
| imagePullPolicy PullPolicy | Synchronizer ImagePullPolicy |
| resources ResourceRequirements | Resources to be assigned to Synchronizer |

Transformation

Appears in:

| Field | Description |
|---|---|
| patchOptions PatchOpts | |
| patch PatchOperation array | List of JSONPatch operations to apply to the resources |
| operations Operation array | |