Astronetes Resiliency Operator provides a transparent and effortless solution to protect Cloud Native platforms from possible disaster outages by leveraging Kubernetes native tools.
Resiliency Operator
- 1: Intro
- 2: Architecture
- 2.1: Overview
- 2.2: Components
- 2.3: Synchronization objects
- 3: Installation
- 4: Update license key
- 5: Plugins
- 6: Samples
- 6.1: Zookeeper to Zookeeper samples
- 6.1.1: Zookeeper Database
- 6.1.2: Zookeeper Synchronization
- 6.1.3: Zookeeper Synchronization Plan
- 7: Reference
- 7.1: API Reference
1 - Intro
Business continuity refers to the ability of a business to overcome potentially disruptive events with minimal impact on its operations. This is no small ordeal: it requires defining and implementing plans, processes and systems, and involves close collaboration and synchronization between multiple actors and departments.
This collection of assets and processes composes the company’s Disaster Recovery. Its goal is to reduce downtime and data loss in the case of a catastrophic, unforeseen situation. Disaster Recovery needs to answer two questions:
- How much data can we lose? - Recovery Point Objective (RPO)
- How long can we take to recover the system? - Recovery Time Objective (RTO)

Resiliency Operator improves the business continuity of Cloud Native platforms by offering a resiliency tool that is transparent in day-to-day operations and has minimal impact on technical maintenance.
Depending on the needs of the organisation, system and project, resiliency can be improved with a combination of real-time synchronization across two or more instances and a backup and restore strategy. Resiliency Operator implements both methods of data replication across multiple technologies and allows flexibility in where and how the information is stored.
Business Continuity plans often include complex tests to validate the content of backups and to confirm that they can be restored at any time. To help with these requirements, Resiliency Operator includes monitoring systems so that operational teams can verify that data is being correctly synchronized and check its state at the destination.
2 - Architecture
2.1 - Overview
Resiliency Operator is installed in a Kubernetes cluster that acts as an orchestrator and hosts the tools and components that synchronize the data across assets.
2.2 - Components
Operator
| Component | Description |
|---|---|
| Database controller | Orchestrates the Database objects. |
| Synchronization controller | Orchestrates the Synchronization objects. |
| Synchronization plan controller | Orchestrates the SynchronizationPlan objects. |
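Each controller runs as its own Deployment in the operator namespace. As a quick sanity check, assuming the default resiliency-operator namespace used later in the installation section, you can list them:
kubectl -n resiliency-operator get deployments
The output should include resiliency-operator-database-controller, resiliency-operator-synchronization-controller and resiliency-operator-synchronizationplan-controller.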
2.3 - Synchronization objects
Introduction
Astronetes offers the following synchronization objects to cover the infrastructure resiliency requirements.
Synchronization
Synchronizes the content of one asset to another a single time. The operations required to perform the snapshot depend on the plugin; more information can be found in the Plugins section. Synchronizations are managed through the Synchronization Custom Resource Definition.
Synchronization Plan
Periodic snapshots at intervals set by the user. Periodicity is established as a cron expression. When the Synchronization Plan starts a new snapshot, it creates a new Synchronization resource, akin to how a Kubernetes CronJob creates a new Job whenever the cron expression fires. Synchronization Plans are managed through the SynchronizationPlan Custom Resource Definition.
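For example, a schedule of "0 2 * * *" would start a new snapshot every day at 02:00. The Synchronization objects created by a plan can be listed like any other custom resource:
kubectl get synchronizations.automation.astronetes.io -A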
3 - Installation
3.1 - Preparing to install
Pre-requirements
Get familiar with the architecture by reading the Architecture section.
A valid Resiliency Operator license key and registry access key should already have been assigned to you.
Supported platforms
Astronetes Resiliency Operator is vendor agnostic, meaning that any Kubernetes distribution, such as Google Kubernetes Engine, Azure Kubernetes Service, OpenShift or a self-managed bare-metal installation, can run it.
This is the certified compatibility matrix:
| Platform | Min Version | Max Version |
|---|---|---|
| AKS | 1.24 | 1.29 |
| EKS | 1.24 | 1.28 |
| GKE | 1.24 | 1.28 |
| OpenShift Container Platform | 4.11 | 4.14 |
Kubernetes requirements
Software
The official kubernetes.io client CLI, kubectl.
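To confirm that the client is installed:
kubectl version --client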
Networking
- Allow traffic to the Image Registry quay.io/astrokube using the mechanism provided by the chosen distribution.
- In a 3-clusters architecture, the management cluster needs to communicate with both the destination and source clusters; connections between the target clusters themselves are not necessary. In a 2-clusters architecture there is no centralised management cluster, so communication between destination and source must be enabled.
OpenShift requirements
Software
The official OpenShift client CLI, oc.
Networking
- Add quay.io/astrokube to the allowed registries in the Image configuration.
- In a 3-clusters architecture, the management cluster needs to communicate with both the destination and source clusters; connections between the target clusters themselves are not necessary. In a 2-clusters architecture there is no centralised management cluster, so communication between destination and source must be enabled.
apiVersion: config.openshift.io/v1
kind: Image
metadata:
  ...
spec:
  registrySources:
    allowedRegistries:
    ...
    - quay.io/astrokube
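The Image configuration is a single cluster-scoped resource named cluster, so one way to add the registry is to edit it in place:
oc edit image.config.openshift.io/cluster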
Cluster configuration
- Cluster admin permissions in the management, destination and source clusters. In a 2-clusters architecture it is only required to have admin permissions in the destination and source clusters, as the operator activities are delegated to the destination cluster.
- The Secret provided by AstroKube to access the Image Registry.
- The Secret provided by AstroKube with the license key.
3.2 - Installing on OpenShift
The following operations need to be executed in both the management and destination clusters.
Process
1. Create Namespace
Create the Namespace where the operator will be installed:
oc create namespace resiliency-operator
2. Setup registry credentials
Create the Secret that stores the credentials to the AstroKube image registry:
oc -n resiliency-operator create -f pull-secret.yaml
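The pull-secret.yaml file is supplied by AstroKube. As a rough sketch of what such a file typically contains (the name and contents here are illustrative, not the actual AstroKube secret), a registry pull secret follows the standard dockerconfigjson format:
apiVersion: v1
kind: Secret
metadata:
  name: pull-secret  # illustrative name; use the file provided by AstroKube
type: kubernetes.io/dockerconfigjson
data:
  .dockerconfigjson: <base64-encoded credentials for quay.io/astrokube>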
3. Setup license key
Create the Secret that stores the license key:
oc -n resiliency-operator create -f license-key.yaml
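Again, license-key.yaml is supplied by AstroKube. A hypothetical sketch of its shape, for orientation only (the real key names may differ):
apiVersion: v1
kind: Secret
metadata:
  name: license-key  # hypothetical name
stringData:
  license: <your license key>  # hypothetical key name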
4. Install the operator
Install the CRDs:
oc apply -f https://astronetes.io/deploy/resiliency-operator/v1.0.0/crds.yaml
Install the operator:
oc -n resiliency-operator apply -f https://astronetes.io/deploy/resiliency-operator/v1.0.0/operator-openshift.yaml
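Before continuing, verify that the controller Pods are running:
oc -n resiliency-operator get pods
All Pods should eventually reach the Running state.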
3.3 - Uninstalling on OpenShift
Process
1. Delete Operator objects
Delete the synchronizations from the management cluster:
oc delete synchronizationplans.automation.astronetes.io,synchronizations.automation.astronetes.io -A --all
Delete the assets from the management cluster:
oc delete databases.assets.astronetes.io -A --all
2. Remove the operator
Delete the operator:
oc -n resiliency-operator delete -f https://astronetes.io/deploy/resiliency-operator/v1.0.0/operator-openshift.yaml
Delete the CRDs:
oc delete -f https://astronetes.io/deploy/resiliency-operator/v1.0.0/crds.yaml
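To confirm that the CRDs are gone:
oc get crds | grep astronetes.io
The command should return no results.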
3. Remove registry credentials
Delete the Secret that stores the credentials to the AstroKube image registry:
oc -n resiliency-operator delete -f pull-secret.yaml
4. Remove license key
Delete the Secret that stores the license key:
oc -n resiliency-operator delete -f license-key.yaml
4 - Update license key
There is no need to reinstall the operator when updating the license key.
1. Update the license key
Update the Kubernetes Secret that stores the license key with the new license:
kubectl -n resiliency-operator apply -f new-license-key.yaml
On OpenShift, use oc instead:
oc -n resiliency-operator apply -f new-license-key.yaml
2. Restart the Resiliency Operator
Restart the Resiliency Operator Deployment to apply the new license:
kubectl -n resiliency-operator rollout restart deployment resiliency-operator-database-controller
kubectl -n resiliency-operator rollout restart deployment resiliency-operator-synchronization-controller
kubectl -n resiliency-operator rollout restart deployment resiliency-operator-synchronizationplan-controller
3. Wait for the Pods restart
Wait a couple of minutes until all the Resiliency Operator Pods have restarted with the new license.
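Instead of waiting a fixed amount of time, you can watch each rollout until it completes:
kubectl -n resiliency-operator rollout status deployment resiliency-operator-database-controller
kubectl -n resiliency-operator rollout status deployment resiliency-operator-synchronization-controller
kubectl -n resiliency-operator rollout status deployment resiliency-operator-synchronizationplan-controller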
5 - Plugins
Plugins implement the logic to synchronize data from a particular type of asset to another instance running the same or a different technology.
Specifying which plugin to use is required whether the synchronization is managed through a Synchronization or a SynchronizationPlan Custom Resource, as shown in the fragment below.
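For example, in the Zookeeper samples later in this document the plugin is selected through spec.plugin:
spec:
  plugin: zookeeper-to-zookeeper-nodes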
5.1 - Zookeeper to Zookeeper
Replicates from one Zookeeper instance to another one directly.
Samples for this plugin are available in the Zookeeper to Zookeeper samples section.
6 - Samples
6.1 - Zookeeper to Zookeeper samples
6.1.1 - Zookeeper Database
Source and destination Databases should include the host and port of the target Zookeeper instances. A Secret containing the user login credentials is required for each Database instance; Databases are mapped to Secrets that have the same name and namespace.
Zookeeper users should have appropriate read permissions if they belong to a source instance and write permissions if they belong to a destination instance.
apiVersion: v1
kind: Secret
metadata:
  name: zookeeper-source
stringData:
  user: admin
  password: password
---
apiVersion: assets.astronetes.io/v1alpha1
kind: Database
metadata:
  name: zookeeper-source
spec:
  zookeeper:
    client:
      servers:
      - 172.18.0.4:30181
---
apiVersion: v1
kind: Secret
metadata:
  name: zookeeper-destination
stringData:
  user: admin
  password: password
---
apiVersion: assets.astronetes.io/v1alpha1
kind: Database
metadata:
  name: zookeeper-destination
spec:
  zookeeper:
    client:
      servers:
      - 172.18.0.5:30181
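Assuming the manifests above are saved to a file such as zookeeper-databases.yaml (the filename is illustrative), all four objects can be created at once:
kubectl apply -f zookeeper-databases.yaml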
6.1.2 - Zookeeper Synchronization
Zookeeper synchronization requires the path to the root znode to synchronize. It can be specified in spec.config.rootPath.
---
apiVersion: automation.astronetes.io/v1alpha1
kind: Synchronization
metadata:
  generateName: synchronize-zookeeper-
spec:
  plugin: zookeeper-to-zookeeper-nodes
  config:
    sourceName: zookeeper-source
    destinationName: zookeeper-destination
    rootPath: /test
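Note that this manifest uses metadata.generateName instead of a fixed name, so it must be submitted with create rather than apply, which generates a uniquely named Synchronization on every invocation (the filename below is illustrative):
kubectl create -f synchronization.yaml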
6.1.3 - Zookeeper Synchronization Plan
Zookeeper synchronization requires the path to the root znode to synchronize. It can be specified in spec.template.spec.config.rootPath.
---
apiVersion: automation.astronetes.io/v1alpha1
kind: SynchronizationPlan
metadata:
  name: synchronize-zookeeper
spec:
  schedule: "10 * * * *"
  template:
    spec:
      plugin: zookeeper-to-zookeeper-nodes
      config:
        sourceName: zookeeper-source
        destinationName: zookeeper-destination
        rootPath: /test
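Once the plan is created, its scheduled runs can be followed by listing the plan and the Synchronization objects it spawns:
kubectl get synchronizationplans.automation.astronetes.io
kubectl get synchronizations.automation.astronetes.io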
7 - Reference
This section contains the API Reference of CRDs for the Resiliency Operator.
7.1 - API Reference
Packages
assets.astronetes.io/v1alpha1
Package v1alpha1 contains API Schema definitions for the assets v1alpha1 API group
Resource Types
- Bucket
- BucketList
- Database
- DatabaseList
- KubernetesCluster
- KubernetesClusterList
AWSS3
Appears in:
- BucketSpec
| Field | Description | Default | Validation |
|---|---|---|---|
| name string | | | |
| region string | | | |
| secretName string | | | |
Bucket
Bucket is the Schema for the buckets API
Appears in:
- BucketList
| Field | Description | Default | Validation |
|---|---|---|---|
| apiVersion string | assets.astronetes.io/v1alpha1 | | |
| kind string | Bucket | | |
| metadata ObjectMeta | Refer to Kubernetes API documentation for fields of metadata. | | |
| spec BucketSpec | | | |
BucketList
BucketList contains a list of Bucket
| Field | Description | Default | Validation |
|---|---|---|---|
| apiVersion string | assets.astronetes.io/v1alpha1 | | |
| kind string | BucketList | | |
| metadata ListMeta | Refer to Kubernetes API documentation for fields of metadata. | | |
| items Bucket array | | | |
BucketSpec
BucketSpec defines the desired state of Bucket
Appears in:
- Bucket
| Field | Description | Default | Validation |
|---|---|---|---|
| gcpCloudStorage GCPCloudStorage | Reference a GCP Cloud Storage service | | Optional: {} |
| awsS3 AWSS3 | Reference an AWS Bucket service | | Optional: {} |
Database
Database is the Schema for the databases API
Appears in:
- DatabaseList
| Field | Description | Default | Validation |
|---|---|---|---|
| apiVersion string | assets.astronetes.io/v1alpha1 | | |
| kind string | Database | | |
| metadata ObjectMeta | Refer to Kubernetes API documentation for fields of metadata. | | |
| spec DatabaseSpec | | | |
DatabaseList
DatabaseList contains a list of Database
| Field | Description | Default | Validation |
|---|---|---|---|
| apiVersion string | assets.astronetes.io/v1alpha1 | | |
| kind string | DatabaseList | | |
| metadata ListMeta | Refer to Kubernetes API documentation for fields of metadata. | | |
| items Database array | | | |
DatabaseSpec
DatabaseSpec defines the desired state of Database
Appears in:
- Database
| Field | Description | Default | Validation |
|---|---|---|---|
| zookeeper Zookeeper | Zookeeper database | | Optional: {} |
GCPCloudStorage
Appears in:
- BucketSpec
| Field | Description | Default | Validation |
|---|---|---|---|
| name string | | | |
| secretName string | | | |
KubernetesCluster
KubernetesCluster is the Schema for the kubernetesclusters API
Appears in:
- KubernetesClusterList
| Field | Description | Default | Validation |
|---|---|---|---|
| apiVersion string | assets.astronetes.io/v1alpha1 | | |
| kind string | KubernetesCluster | | |
| metadata ObjectMeta | Refer to Kubernetes API documentation for fields of metadata. | | |
| spec KubernetesClusterSpec | | | |
KubernetesClusterList
KubernetesClusterList contains a list of KubernetesCluster
| Field | Description | Default | Validation |
|---|---|---|---|
| apiVersion string | assets.astronetes.io/v1alpha1 | | |
| kind string | KubernetesClusterList | | |
| metadata ListMeta | Refer to Kubernetes API documentation for fields of metadata. | | |
| items KubernetesCluster array | | | |
KubernetesClusterSpec
KubernetesClusterSpec defines the desired state of KubernetesCluster
Appears in:
- KubernetesCluster
| Field | Description | Default | Validation |
|---|---|---|---|
| secretName string | Reference to the secret that stores the cluster Kubeconfig | | Required: {} |
Zookeeper
Appears in:
- DatabaseSpec
| Field | Description | Default | Validation |
|---|---|---|---|
| admin ZookeeperAdmin | Credentials for the admin port | | Optional: {} |
| client ZookeeperClient | Credentials for the client port | | Optional: {} |
ZookeeperAdmin
Appears in:
- Zookeeper
| Field | Description | Default | Validation |
|---|---|---|---|
| protocol string | Zookeeper protocol | | Required: {} |
| host string | Zookeeper host | | Required: {} |
| port string | Zookeeper port | | Required: {} |
| secretName string | Zookeeper authentication data | | Optional: {} |
ZookeeperClient
Appears in:
- Zookeeper
| Field | Description | Default | Validation |
|---|---|---|---|
| servers string array | Zookeeper servers | | Required: {} |