Resiliency Operator

Astronetes Resiliency Operator provides a transparent and effortless solution to protect Cloud Native platforms from possible disaster outages by leveraging Kubernetes native tools.

1 - Getting started

Getting started with Resiliency Operator

1.1 - Intro

What Resiliency Operator is and why it could be useful to you

Astronetes Resiliency Operator is a Kubernetes operator that improves the resiliency of cloud native platforms. It acts as the orchestrator that sets up and manages the resiliency of Cloud Native platforms, automating processes and synchronizing data and configurations across multiple technologies.

Astronetes Resiliency Operator helps you to accomplish the following:

  • Enable active-active architectures for cloud native platforms
  • Automate Disaster Recovery plans for cloud native platforms

Use cases

Active-Active architectures

Active-Active architectures are a proven approach to ensuring the resiliency required by mission-critical applications. They protect the application from outages caused by both technical and operational failures in the platform and its dependencies.

Astronetes Resiliency Operator empowers organizations to deploy and maintain applications across multiple clusters or regions, ensuring maximum uptime and seamless user experiences even during failures or maintenance events.

Disaster Recovery

Business continuity refers to the ability of a business to overcome potentially disruptive events with minimal impact on its operations. This is no small feat: it requires defining and implementing plans, processes and systems, with close collaboration and synchronization between multiple actors and departments.

This collection of assets and processes composes the company’s Disaster Recovery. Its goal is to reduce downtime and data loss in the case of a catastrophic, unforeseen situation. Disaster Recovery needs to answer two questions:

  • How much data can we lose? - Recovery Point Objective (RPO)
  • How long can we take to recover the system? - Recovery Time Objective (RTO)

Resiliency Operator improves the business continuity of Cloud Native platforms by offering a tool that improves resiliency, is transparent in day-to-day operations, and has minimal impact on technical maintenance.

Depending on the needs of the organization, system and project, resiliency can be improved with a combination of real-time synchronization across two or more instances and a backup-and-restore strategy. Resiliency Operator implements both methods of data replication across multiple technologies and allows flexibility in where and how the information is stored.

Business Continuity plans often include complex tests to validate the content of backups and that they can be restored at any time. To help with these requirements, Resiliency Operator includes monitoring systems so that operations teams can make sure that the data is being correctly synchronized and verify its state at the destination.

1.2 - Release notes

Resiliency Operator Release Notes

v1.3.5

Released: 8 November 2024

Improvements:

  • Improved Synchronization observability, providing more details about write operations.

Fixes:

  • Fixed the kubernetes-to-bucket synchronization plugin when running without cache

Manifests

Kubernetes:

OpenShift:

v1.3.4

Released: 23 October 2024

Improvements:

  • Add option to run synchronizations without cache
  • Add option to log when an object is already in sync during the synchronization process
  • Add option to log when an object has been adapted for the destination during the synchronization process
  • Add annotations to Kubernetes objects written by synchronizations
  • Auto-fix OpenShift ImageStreams that reference namespaces that don’t exist
  • Improved memory management in Kubernetes assets
  • Improved Zookeeper replication logs, showing whether a node has been created or updated
  • Updated the default Zookeeper timeout to 5 minutes.

Fixes:

  • Many fixes in deployment manifests
  • Fixed ServiceAccounts updates
  • Fixed status update for SynchronizationPlans

Manifests

Kubernetes:

OpenShift:

v1.3.3

Released: 17 September 2024

Improvements:

  • Improved plugin start time

Fixes:

  • Set synchronization namespace in astronetes_total_synchronized_objects metric
  • Add webhook configuration in deployment manifests
  • Fix controllers roles in deployment manifests
  • Fix JSONPatch transformations options

Manifests

Kubernetes:

OpenShift:

v1.3.2

Released: 13 September 2024

Improvements:

  • Improved metrics and updated Grafana Dashboard

Manifests

Kubernetes:

OpenShift:

v1.3.1

Released: 12 September 2024

Improvements:

  • Improved plugin configuration validation
  • Reduced memory used by synchronizations
  • Exposed synchronization Namespace in metrics

Fixes:

  • Fixed skipping delete errors in JSONPatch transformations

Manifests

Kubernetes:

OpenShift:

v1.3.0

Released: 9 September 2024

New features:

  • Native support for Google Cloud Storage
  • Support synchronizations from Kubernetes to Bucket
  • Support synchronizations from Bucket to Kubernetes
  • Customize synchronizations with DryRun, ForceSync and ForcePrune options
  • Filter objects by namespace name in Kubernetes synchronizations
  • Export metrics to inform about Assets status
  • Export metrics to inform about synchronizations status
  • Export metrics to inform about write operations

Improvements:

  • Improved performance in LiveSynchronization from Kubernetes to Kubernetes
  • Simplified Kubernetes object selectors in synchronizations
  • Reduced amount of internal logs

Manifests

Kubernetes:

OpenShift:

2 - Architecture

Astronetes Resiliency Operator architecture

2.1 - Overview

Resiliency Operator architecture

Resiliency Operator acts as the orchestrator that sets up and manages the resiliency of Cloud Native platforms, automating processes and synchronizing data and configurations across multiple technologies.

It is built with a set of plugins that enable the integration of many technologies and managed services into the resiliency framework.

Key concepts

Assets

Platforms, technologies and services, such as Kubernetes clusters and databases, can be linked to the Resiliency Operator to be included in the resiliency framework.

Synchronizations

The synchronization of data and configurations can be configured according to the platform requirements.

Synchronization Name | Description
Synchronization | Synchronize data and configurations only once.
SynchronizationPlan | Synchronize data and configurations based on a scheduled period.
LiveSynchronization | Real-time synchronization of data and configurations.
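To make these types concrete, a hypothetical manifest is sketched below. The API group and version are taken from the audit examples in this documentation; the spec fields are illustrative placeholders only, not the documented SynchronizationPlan schema.

```yaml
# Hypothetical sketch only: the spec fields below are illustrative
# placeholders, not the documented SynchronizationPlan API.
apiVersion: automation.astronetes.io/v1alpha1
kind: SynchronizationPlan
metadata:
  name: nightly-sync
  namespace: my-namespace
spec:
  # e.g. a schedule for the periodic synchronization, plus references
  # to the source and destination assets imported into the operator
  ...
```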

Automation

The Resiliency Operator allows the automation of tasks to be executed when an incident or a disaster occurs.

2.2 - Components

Resiliency Operator Components

Astronetes Resiliency Operator is software that can be deployed on Kubernetes-based clusters. It is composed of a set of controllers that automate and orchestrate the resiliency of Cloud Native platforms.

Operator

Controller | Description
Bucket | Orchestrates the Bucket objects.
Database | Orchestrates the Database objects.
Kubernetes Cluster | Orchestrates the KubernetesCluster objects.
Live Synchronization | Orchestrates the LiveSynchronization objects.
Synchronization Plan | Orchestrates the SynchronizationPlan objects.
Synchronization | Orchestrates the Synchronization objects.
Task Run | Orchestrates the TaskRun objects.
Task | Orchestrates the Task objects.

2.3 - Observability

Metrics and alerting for Astronetes

Astronetes provides monitoring capabilities by exposing various performance and operational metrics. These metrics allow you to gain insight into the system’s health, performance, and behavior, ensuring that you can take proactive measures to maintain system stability.

Metrics

The metrics are exposed in Prometheus format, which is a widely-adopted open-source standard for monitoring. This format enables seamless integration with Prometheus-based monitoring solutions.

Assets by status

The status of each asset managed by the operator: KubernetesClusters, Buckets and Databases.

Prometheus metric: astronetes_asset_status.

Status values: Ready, Progressing, Terminating, Unknown or Failed.

Synchronizations by status

The status of each synchronization object: Synchronization, SynchronizationPlan and LiveSynchronization.

Prometheus metric: astronetes_synchronization_status.

Status values: Ready, Progressing, Terminating, Unknown or Failed.

Total synchronized objects by status

The count of synchronized objects by status.

Prometheus metric: astronetes_total_synchronized_objects.

Status values: Sync, OutOfSync or Unknown.
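The exposed metrics can be queried directly with PromQL. The queries below are sketches based on the metric names and status values documented above; the objectStatus label name appears in the alerting examples later in this guide, while the status label name on the asset metric is an assumption.

```promql
# Synchronized objects that are not in sync (objectStatus label taken from the alerting examples)
sum(astronetes_total_synchronized_objects{objectStatus!="Sync"})

# Assets currently reporting Failed (label name "status" is an assumption)
astronetes_asset_status{status="Failed"}
```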

Alerts

Based on the exposed metrics, alerting can be configured using PrometheusRule resources, a widely-adopted open-source standard, enabling seamless integration with Prometheus-based monitoring solutions.

Platform alerts

The following alerts report a possible issue with the platform.

Alert Name | Description | Severity | Duration
AssetFailure | At least one asset is failing | critical | 5 minutes
SynchronizationFailure | At least one synchronization is failing | critical | 5 minutes

Applications alerts

The following alerts report a possible issue with the objects configured to be synchronized. These alerts are usually related to application issues.

Alert Name | Description | Severity | Duration
SynchronizationNotInSync | There are synchronization items out of sync | warning | 1 hour
WriteOperationsFailed | One or more write operations failed | warning | 1 hour

2.4 - Audit

Parameters built into Resiliency Operator to track when a change was made and who made it

Auditing and version control are important when configuring resources. Knowing when a change was made and which account applied it can be decisive in an ongoing investigation of an issue or a configuration mismanagement.

Audit annotations

The following annotations are attached to every Resiliency Operator Custom Resource:

apiVersion: automation.astronetes.io/v1alpha1
kind: LiveSynchronization
metadata:
  annotations:
    audit.astronetes.io/last-update-time: "<date>"         # Time at which the last update was applied.
    audit.astronetes.io/last-update-user-uid: "<uid-hash>" # Hash representing the Unique Identifier of the user that applied the change.
    audit.astronetes.io/last-update-username: "<username>" # Human readable name of the user that applied the change. 

Example:

apiVersion: automation.astronetes.io/v1alpha1
kind: LiveSynchronization
metadata:
  annotations:
    audit.astronetes.io/last-update-time: "2024-02-09T14:05:30.67520525Z"
    audit.astronetes.io/last-update-user-uid: "b3fd2a87-0547-4ff7-a49f-cce903cc2b61"
    audit.astronetes.io/last-update-username: system:serviceaccount:preproduction:microservice1

Fields are updated only when a change to the .spec, .labels or .annotations fields is detected. Status modifications made by the operator are not recorded.

3 - Installation

Install the Resiliency Operator

3.1 - Preparing to install

Setup for the necessary tools to install the operator.

Prerequisites

  • Get familiar with the architecture by reading this section.
  • The Secret provided by AstroKube to access the Image Registry.
  • The Secret provided by AstroKube with the license key.

Cluster requirements

Supported platforms

Astronetes Resiliency Operator is vendor agnostic, meaning that any Kubernetes distribution, such as Google Kubernetes Engine, Azure Kubernetes Service, OpenShift or self-managed bare-metal installations, can run it.

This is the certified compatibility matrix:

Platform | Min Version | Max Version
AKS | 1.24 | 1.29
EKS | 1.24 | 1.28
GKE | 1.24 | 1.28
OpenShift Container Platform | 4.11 | 4.14

Permissions

To install the Resiliency Operator on a cluster, you need to have Cluster Admin permissions in that cluster.

The Resiliency Operator needs read access to the assets being protected and read/write access to the backup assets. Refer to plugin documentation for details.

Kubernetes requirements

Software

Networking

  • Allow traffic to the Image Registry quay.io/astrokube using the mechanism provided by the chosen distribution.

OpenShift requirements

Software

Networking

apiVersion: config.openshift.io/v1
kind: Image
metadata:
    ...
spec:
  registrySources: 
    allowedRegistries: 
    ...
    - quay.io/astrokube

3.2 - Installing on Kubernetes

Steps to install the Resiliency Operator in Kubernetes

Prerequisites

  • Review the documentation about preparing for the installation.
  • The pull-secret.yaml Secret provided by AstroKube to access the Image Registry.
  • The license-key.yaml Secret provided by AstroKube with the license key.

Process

1. Create Namespace

Create the Namespace where the operator will be installed:

kubectl create namespace resiliency-operator

2. Setup registry credentials

Create the Secret that stores the credentials to the AstroKube image registry:

kubectl -n resiliency-operator create -f pull-secret.yaml

3. Setup license key

Create the Secret that stores the license key:

kubectl -n resiliency-operator create -f license-key.yaml

4. Install the operator

Install the CRDs:

kubectl apply -f https://astronetes.io/deploy/resiliency-operator/v1.3.5/crds-kubernetes.yaml

Install the operator:

kubectl -n resiliency-operator apply -f https://astronetes.io/deploy/resiliency-operator/v1.3.5/operator-kubernetes.yaml

3.3 - Installing on OpenShift

Steps to install the Resiliency Operator in OpenShift

Prerequisites

  • Review the documentation about preparing for the installation.
  • The pull-secret.yaml Secret provided by AstroKube to access the Image Registry.
  • The license-key.yaml Secret provided by AstroKube with the license key.

Process

1. Create Namespace

Create the Namespace where the operator will be installed:

oc create namespace resiliency-operator

2. Setup registry credentials

Create the Secret that stores the credentials to the AstroKube image registry:

oc -n resiliency-operator create -f pull-secret.yaml

3. Setup license key

Create the Secret that stores the license key:

oc -n resiliency-operator create -f license-key.yaml

4. Install the operator

Install the CRDs:

oc apply -f https://astronetes.io/deploy/resiliency-operator/v1.3.5/crds-openshift.yaml

Install the operator:

oc -n resiliency-operator apply -f https://astronetes.io/deploy/resiliency-operator/v1.3.5/operator-openshift.yaml

3.4 - Uninstalling on Kubernetes

Steps to uninstall the Resiliency Operator on Kubernetes

Process

1. Delete Operator objects

Delete the synchronizations from the cluster:

kubectl delete livesynchronizations.automation.astronetes.io -A --all
kubectl delete synchronizationplans.automation.astronetes.io -A --all
kubectl delete synchronizations.automation.astronetes.io -A --all

Delete the assets from the cluster:

kubectl delete buckets.assets.astronetes.io -A --all
kubectl delete databases.assets.astronetes.io -A --all
kubectl delete kubernetesclusters.assets.astronetes.io -A --all

2. Remove the operator

Delete the operator:

kubectl -n resiliency-operator delete -f https://astronetes.io/deploy/resiliency-operator/v1.3.5/operator-kubernetes.yaml

Delete the CRDs:

kubectl delete -f https://astronetes.io/deploy/resiliency-operator/v1.3.5/crds-kubernetes.yaml

3. Remove registry credentials

Delete the Secret that stores the credentials to the AstroKube image registry:

kubectl -n resiliency-operator delete -f pull-secret.yaml

4. Remove license key

Delete the Secret that stores the license key:

kubectl -n resiliency-operator delete -f license-key.yaml

3.5 - Uninstalling on OpenShift

Steps to uninstall the Resiliency Operator on OpenShift

Process

1. Delete Operator objects

Delete the synchronizations from the cluster:

oc delete livesynchronizations.automation.astronetes.io -A --all
oc delete synchronizationplans.automation.astronetes.io -A --all
oc delete synchronizations.automation.astronetes.io -A --all

Delete the assets from the cluster:

oc delete buckets.assets.astronetes.io -A --all
oc delete databases.assets.astronetes.io -A --all
oc delete kubernetesclusters.assets.astronetes.io -A --all

2. Remove the operator

Delete the operator:

oc -n resiliency-operator delete -f https://astronetes.io/deploy/resiliency-operator/v1.3.5/operator-openshift.yaml

Delete the CRDs:

oc delete -f https://astronetes.io/deploy/resiliency-operator/v1.3.5/crds-openshift.yaml

3. Remove registry credentials

Delete the Secret that stores the credentials to the AstroKube image registry:

oc -n resiliency-operator delete -f pull-secret.yaml

4. Remove license key

Delete the Secret that stores the license key:

oc -n resiliency-operator delete -f license-key.yaml

4 - Configure

Configure the operator

4.1 - Integrating with Grafana

Configure Grafana dashboards

Introduction

The Resiliency Operator exports metrics in Prometheus format that can be visualized using custom Grafana dashboards.

Prerequisites

  • Prometheus installed in the Kubernetes cluster
  • Grafana configured to access the Prometheus

Process

1. Import dashboard for Assets

Access Grafana and navigate to Home > Dashboards > Import.

Set the dashboard URL to https://astronetes.io/deploy/resiliency-operator/v1.3.5/grafana-dashboard-assets.json and click Load.

Configure the import and click the Import button to complete the process.

2. Import dashboard for Synchronizations

Access Grafana and navigate to Home > Dashboards > Import.

Set the dashboard URL to https://astronetes.io/deploy/resiliency-operator/v1.3.5/grafana-dashboard-synchronizations.json and click Load.

Configure the import and click the Import button to complete the process.

4.2 - Integrating with Grafana Operator

Configure Grafana dashboards

Introduction

The Resiliency Operator exports metrics in Prometheus format that can be visualized using custom Grafana dashboards.

Prerequisites

  • Prometheus installed in the Kubernetes cluster
  • Grafana Operator installed in the cluster and configured to access the Prometheus

Process

1. Create the GrafanaDashboard for Assets

Create the GrafanaDashboard for Assets from the release manifests:

kubectl apply -f https://astronetes.io/deploy/resiliency-operator/v1.3.5/dashboard-assets.yaml

2. Create the GrafanaDashboard for Synchronizations

Create the GrafanaDashboard for Synchronizations from the release manifests:

kubectl apply -f https://astronetes.io/deploy/resiliency-operator/v1.3.5/dashboard-synchronizations.yaml

4.3 - Integrating with OpenShift Alerting

Manage alerts based on Prometheus metrics through OpenShift

Introduction

OpenShift allows the creation of alerts based on Prometheus metrics to provide additional information about the functioning and status of the Astronetes operator.

Prerequisites

  • Access Requirement: cluster-admin access to the OpenShift cluster

Configure alerts

Two types of alerts are provided: one for managing the operator’s integration within the cluster and one for monitoring the synchronization status.

Platform alerts

Metrics defined to assess the functionality of the integration between the product and the assets.

Apply these rules:

oc apply -f https://astronetes.io/deploy/resiliency-operator/v1.3.5/alert-rules-resiliency-operator.yaml

Synchronization alerts

Metrics are employed to assess the status of synchronized objects.

To configure this rule, follow these steps:

  1. Create this PrometheusRule manifest:
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: failed-synchronize-items
  namespace: <your-synchronization-namespace>
spec:
  groups:
  - name: synchronization-alerts
    rules:
    - alert: SynchronizationNotInSync
      annotations:
        summary: "There are synchronization items not in sync."
        description: "Synchronization {{ $labels.synchronizationName }} is out of sync in namespace {{ $labels.synchronizationNamespace }}"
      expr: astronetes_total_synchronized_objects{objectStatus!="Sync"} > 0
      for: 1h
      labels:
        severity: warning
    - alert: WriteOperationsFailed
      annotations:
        summary: "There are one or more write operations failed"
        description: "Synchronization {{ $labels.synchronizationName }} failed write operator in namespace {{ $labels.synchronizationNamespace }}"
      expr: astronetes_total_write_operations{writeStatus="failed"} > 0
      for: 1h
      labels:
        severity: warning
  2. Edit the namespace: use the namespace where the synchronizations are deployed

  3. Apply the rule:

oc apply -f <path-to-your-modified-yaml-file>.yaml

How to configure custom alerts

Prometheus provides a powerful set of metrics that can be used to monitor the status of your cluster and the functionality of your operator by creating customized alert rules.

The PrometheusRule should be created in the same namespace as the process that generates these metrics to ensure proper functionality and visibility.

Here is an example of a PrometheusRule YAML file:

apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: <alert-name>
  namespace: <namespace>
spec:
  groups:
  - name: <group-name>
    rules:
    - alert: <alert-rule-name>
      annotations:
        description: <description>
        summary: <summary>
      expr: <expression>
      for: <duration>
      labels:
        severity: <severity-level>

Field Value Descriptions

In the PrometheusRule YAML file, several fields are essential for defining your alerting rules. Below is a table describing the values that can be used for each field:

Field | Description | Example Values
alert | Specifies the name of the alert that will be triggered. It should be descriptive. | AssetFailure, HighCPUUsage, MemoryThresholdExceeded
for | Defines the duration for which the condition must be true before the alert triggers. | 5m, 1h, 30s
severity | Indicates the criticality of the alert. Helps prioritize alerts. | critical, warning, info
expr | The Prometheus expression (in PromQL) that determines the alerting condition based on metrics. | sum(rate(http_requests_total[5m])) > 100, node_memory_usage > 90
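For instance, combining the fields above with the astronetes_asset_status metric documented earlier, a custom rule could look like the sketch below. The alert name, the threshold, and the status label name are illustrative assumptions.

```yaml
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: asset-not-ready            # illustrative name
  namespace: resiliency-operator   # use the namespace where the metrics are generated
spec:
  groups:
  - name: custom-asset-alerts
    rules:
    - alert: AssetNotReady
      annotations:
        summary: "An asset has not been Ready for 10 minutes."
      expr: astronetes_asset_status{status!="Ready"} > 0  # label name "status" is an assumption
      for: 10m
      labels:
        severity: warning
```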

Apply to the cluster

Create the new PrometheusRule in the cluster:

oc apply -f <path-to-your-prometheus-rule-file>.yaml

Checking alerts

1. Access OpenShift Web Console:

  • Open your browser and go to the OpenShift web console URL.
  • Log in with your credentials.

2. Navigate to Observe:

  • In the OpenShift console, go to the Observe section from the main menu.
  • In the Alerts tab, you’ll find a list of active and silenced alerts.
  • Check for any alerts triggered based on the custom rules you created in Prometheus.
  • You can also see the entire list of configured alerting rules.

3. Filter Custom Alerts:

  • To filter the custom alerts, use the source field and set its value to user. This will display only the alerts that were generated based on user-defined rules. Check the OpenShift documentation about filtering.

4.4 - Update license key

Steps to update the license key for the Resiliency Operator

There is no need to reinstall the operator when updating the license key.

Process

1. Update the license key

Update the Kubernetes Secret that stores the license key with the new license:

Kubernetes:

kubectl -n resiliency-operator apply -f new-license-key.yaml

OpenShift:

oc -n resiliency-operator apply -f new-license-key.yaml

2. Restart the Resiliency Operator

Restart the Resiliency Operator Deployment to apply the new license:

Kubernetes:

kubectl -n resiliency-operator rollout restart deployment resiliency-operator-bucket-controller
kubectl -n resiliency-operator rollout restart deployment resiliency-operator-database-controller
kubectl -n resiliency-operator rollout restart deployment resiliency-operator-kubernetescluster-controller
kubectl -n resiliency-operator rollout restart deployment resiliency-operator-livesynchronization-controller
kubectl -n resiliency-operator rollout restart deployment resiliency-operator-synchronization-controller
kubectl -n resiliency-operator rollout restart deployment resiliency-operator-synchronizationplan-controller
kubectl -n resiliency-operator rollout restart deployment resiliency-operator-task-controller
kubectl -n resiliency-operator rollout restart deployment resiliency-operator-taskrun-controller

OpenShift:

oc -n resiliency-operator rollout restart deployment resiliency-operator-bucket-controller
oc -n resiliency-operator rollout restart deployment resiliency-operator-database-controller
oc -n resiliency-operator rollout restart deployment resiliency-operator-kubernetescluster-controller
oc -n resiliency-operator rollout restart deployment resiliency-operator-livesynchronization-controller
oc -n resiliency-operator rollout restart deployment resiliency-operator-synchronization-controller
oc -n resiliency-operator rollout restart deployment resiliency-operator-synchronizationplan-controller
oc -n resiliency-operator rollout restart deployment resiliency-operator-task-controller
oc -n resiliency-operator rollout restart deployment resiliency-operator-taskrun-controller

3. Wait for the Pods restart

Wait a couple of minutes until all the Resiliency Operator Pods have restarted with the new license.

Kubernetes:

kubectl -n resiliency-operator wait --for=condition=available deployment/resiliency-operator-bucket-controller
kubectl -n resiliency-operator wait --for=condition=available deployment/resiliency-operator-database-controller
kubectl -n resiliency-operator wait --for=condition=available deployment/resiliency-operator-kubernetescluster-controller
kubectl -n resiliency-operator wait --for=condition=available deployment/resiliency-operator-livesynchronization-controller
kubectl -n resiliency-operator wait --for=condition=available deployment/resiliency-operator-synchronization-controller
kubectl -n resiliency-operator wait --for=condition=available deployment/resiliency-operator-synchronizationplan-controller
kubectl -n resiliency-operator wait --for=condition=available deployment/resiliency-operator-task-controller
kubectl -n resiliency-operator wait --for=condition=available deployment/resiliency-operator-taskrun-controller

OpenShift:

oc -n resiliency-operator wait --for=condition=available deployment/resiliency-operator-bucket-controller
oc -n resiliency-operator wait --for=condition=available deployment/resiliency-operator-database-controller
oc -n resiliency-operator wait --for=condition=available deployment/resiliency-operator-kubernetescluster-controller
oc -n resiliency-operator wait --for=condition=available deployment/resiliency-operator-livesynchronization-controller
oc -n resiliency-operator wait --for=condition=available deployment/resiliency-operator-synchronization-controller
oc -n resiliency-operator wait --for=condition=available deployment/resiliency-operator-synchronizationplan-controller
oc -n resiliency-operator wait --for=condition=available deployment/resiliency-operator-task-controller
oc -n resiliency-operator wait --for=condition=available deployment/resiliency-operator-taskrun-controller

5 - Assets

Assets management

Platforms, technologies and services can be linked to the Resiliency Operator to enable process automation and data synchronization.

5.1 - Introduction

Asset introduction

An Asset is any kind of platform, technology or service that can be imported into the operator to improve its resiliency. Assets can include Kubernetes clusters and databases.

Asset types

Kubernetes Cluster

While the system is designed to be compatible with all kinds of Kubernetes clusters, official support and testing are limited to a specific list of Kubernetes distributions. This ensures that the synchronization process is reliable, consistent, and well-supported.

This is the list of officially supported Kubernetes distributions:

Distribution | Versions
OpenShift Container Platform | 4.12+
Azure Kubernetes Service (AKS) | 1.28+
Elastic Kubernetes Service (EKS) | 1.26+
Google Kubernetes Engine (GKE) | 1.28+

Buckets

Public cloud storage containers for objects stored in simple storage services.

Databases

Database | Versions
Zookeeper | 3.6+

5.2 - Buckets

Manage Buckets

5.2.1 - Import GCP Cloud Storage

How-to import a bucket from GCP Cloud Storage

Buckets hosted in Cloud Storage can be imported as GCP Cloud Storage.

Requirements

The Bucket properties:

  • Bucket name
  • GCP project ID

The credentials to access the bucket:

  • The ServiceAccount key

Process

1. Create the Secret

Store the following file as secret.yaml and substitute the template parameters with real ones.

apiVersion: v1
kind: Secret
metadata:
  name: bucket-credentials
stringData:
  application_default_credentials.json: '{...}'

Then create the Secret with the following command:

kubectl -n <namespace_name> apply -f secret.yaml

2. Create the object

Store the following file as bucket.yaml and substitute the template parameters with real ones.

apiVersion: assets.astronetes.io/v1alpha1
kind: Bucket
metadata:
  name: <name>
  namespace: <namespace>
spec:
  gcpCloudStorage:
    name: <bucket-name>
    projectID: <gcp-project-id>
    secretName: bucket-credentials

Deploy the resource with the following command:

kubectl create -f bucket.yaml

5.2.2 - Import generic bucket

How-to import a generic bucket

Buckets that support the AWS S3 protocol (like MinIO) can be imported as a generic bucket.

Requirements

The Bucket properties:

  • Bucket endpoint
  • Bucket name

The credentials to access the bucket:

  • The access key ID
  • The secret access key

Process

1. Create the Secret

Store the following file as secret.yaml and substitute the template parameters with real ones.

apiVersion: v1
kind: Secret
metadata:
  name: bucket-credentials
stringData:
  accessKeyID: <access_key_id>
  secretAccessKey: <secret_access_key>

Then create the Secret with the following command:

kubectl -n <namespace_name> apply -f secret.yaml

2. Create the Bucket

Store the following file as bucket.yaml and substitute the template parameters with real ones.

apiVersion: assets.astronetes.io/v1alpha1
kind: Bucket
metadata:
  name: <name>
  namespace: <namespace>
spec:
  generic:
    endpoint: mybucket.example.com
    name: <bucket_name>
    useSSL: true
    secretName: bucket-credentials

Deploy the resource with the following command:

kubectl create -f bucket.yaml

5.2.3 - Configurations

Configure the Bucket import

Intro

The import of each Bucket can be configured with some specific parameters using the .spec.config attribute.

apiVersion: assets.astronetes.io/v1alpha1
kind: Bucket
metadata:
  name: my-bucket
spec:
  ...
  config: {}

Limit assigned resources

For each imported Bucket, a new Pod is deployed inside the same Namespace. Resource requests and limits can be set using the .spec.config.resources field.

Example:

apiVersion: assets.astronetes.io/v1alpha1
kind: Bucket
metadata:
  name: my-bucket
spec:
  ...
  config:
    resources:
      requests:
        cpu: 1
        memory: 2Gi
      limits:
        cpu: 2
        memory: 2Gi

Filter the watched resources

By default, the operator will watch all the files in the bucket. You can filter the list of paths to be watched by configuring the .spec.config.paths field.

Example:

apiVersion: assets.astronetes.io/v1alpha1
kind: Bucket
metadata:
  name: my-bucket
spec:
  ...
  config:
    paths:
      - example1/

Concurrency

The concurrency parameter can be used to improve the performance of the operator when listening for changes that happen in the Bucket.

Example:

apiVersion: assets.astronetes.io/v1alpha1
kind: Bucket
metadata:
  name: my-bucket
spec:
  ...
  config:
    concurrency: 200

5.2.4 - API Reference

Configuration details

Config

Customize the integration with a Bucket

Field | Description | Type | Required
concurrency | Concurrent processes to be executed to improve performance | int | false
interval | Interval of which | string | false
logLevel | Log level to be used by the related Pod | string | false
observability | Observability configuration | ObservabilityConfig | false
paths | Filter the list of paths to be listened | []string | false
resources | Resources to be assigned to the synchronization Pod | ResourceRequirements | false

ObservabilityConfig

Configure the synchronization process observability using Prometheus ServiceMonitor

| Field | Description | Type | Required |
| --- | --- | --- | --- |
| enabled | Enable the Observability with a Prometheus ServiceMonitor | bool | false |
| interval | Configure the interval in the ServiceMonitor that Prometheus will use to scrape metrics | Duration | false |

Duration

Duration is a wrapper around time.Duration which supports correct marshaling to YAML and JSON. In particular, it marshals into strings, which can be used as map keys in json.


5.3 - Databases

Manage Databases

5.3.1 - Import Zookeeper

How-to import a Zookeeper database

Zookeeper clusters can be imported with the Database resource.

Requirements

  • The Zookeeper server hosts

Process

1. Create the object

Define the Database resource with the following YAML, and save it as database.yaml:

apiVersion: assets.astronetes.io/v1alpha1
kind: Database
metadata:
  name: zookeeper
spec:
  zookeeper:
    client:
      servers:
        - 172.18.0.4:30181
        - 172.18.0.5:30181
        - 172.18.0.6:30181

Deploy the resource with the following command:

kubectl create -f database.yaml

5.4 - Kubernetes Clusters

Manage Kubernetes Clusters

5.4.1 - Import

How-to import Kubernetes clusters

Any Kubernetes cluster can be imported into the operator with the KubernetesCluster resource. Credentials are stored in Kubernetes Secrets, which the KubernetesCluster objects read to connect to the clusters.

Once you have imported the KubernetesCluster, the operator will read all the watchable resources in the cluster.

Requirements

  • The kubeconfig file to access the cluster

Process

1. Create the Secret

Get the kubeconfig file that can be used to access the cluster, and save it as kubeconfig.yaml.

Then create the Secret with the following command:

kubectl create secret generic source --from-file=kubeconfig.yaml=kubeconfig.yaml

2. Create the KubernetesCluster

Define the KubernetesCluster object with the following YAML, and save it as cluster.yaml:

apiVersion: assets.astronetes.io/v1alpha1
kind: KubernetesCluster
metadata:
  name: cluster-1
spec:
  secretName: <secret_name>

Deploy the resource with the following command:

kubectl create -f cluster.yaml

5.4.2 - Configurations

Configure the Kubernetes Clusters import

Intro

The import of each KubernetesCluster can be configured with some specific parameters using the .spec.config attribute.

apiVersion: assets.astronetes.io/v1alpha1
kind: KubernetesCluster
metadata:
  name: my-cluster
spec:
  secretName: my-cluster-secret
  config: {}

Limit assigned resources

For each Kubernetes Cluster imported, a new Pod is deployed inside the same Namespace. The limits and requests resources can be set using the .spec.config.resources field.

Example:

apiVersion: assets.astronetes.io/v1alpha1
kind: KubernetesCluster
metadata:
  name: my-cluster
spec:
  secretName: my-cluster-secret
  config:
    resources:
      requests:
        cpu: 1
        memory: 2Gi
      limits:
        cpu: 2
        memory: 2Gi

Filter the watched resources

By default, the operator will watch all the watchable resources in the cluster. You can filter the list of these resources by configuring the .spec.config.selectors field.

Example:

apiVersion: assets.astronetes.io/v1alpha1
kind: KubernetesCluster
metadata:
  name: my-cluster
spec:
  secretName: my-cluster-secret
  config:
    selectors:
      targets:
        - group: ""
          version: v1
          resources:
            - namespaces
            - secrets
            - configmaps
            - serviceaccounts
            - resourcequotas
            - limitranges
            - persistentvolumeclaims
        - group: policy
          version: v1
          resources:
            - poddisruptionbudgets

Concurrency

The concurrency parameter can be used to improve the performance of the operator when listening for changes in the Kubernetes Cluster.

Example:

apiVersion: assets.astronetes.io/v1alpha1
kind: KubernetesCluster
metadata:
  name: my-cluster
spec:
  secretName: my-cluster-secret
  config:
    concurrency: 200

5.4.3 - API Reference

Configuration details

Config

Customize the integration with a KubernetesCluster

| Field | Description | Type | Required |
| --- | --- | --- | --- |
| concurrency | Concurrent processes to be executed to improve performance | int | false |
| logLevel | Log level to be used by the related Pod | string | false |
| observability | Observability configuration | ObservabilityConfig | false |
| resources | Resources to be assigned to the synchronization Pod | ResourceRequirements | false |
| selectors | Filter the list of resources to be watched | KubernetesClusterSelectors | false |

ObservabilityConfig

Configure the synchronization process observability using Prometheus ServiceMonitor

| Field | Description | Type | Required |
| --- | --- | --- | --- |
| enabled | Enable the Observability with a Prometheus ServiceMonitor | bool | false |
| interval | Configure the interval in the ServiceMonitor that Prometheus will use to scrape metrics | Duration | false |

Duration

Duration is a wrapper around time.Duration which supports correct marshaling to YAML and JSON. In particular, it marshals into strings, which can be used as map keys in json.


KubernetesClusterSelectors

Filter the Kubernetes objects that should be read from the cluster

| Field | Description | Type | Required |
| --- | --- | --- | --- |
| objectSelector | Rules to filter Kubernetes objects by ObjectSelector | ObjectSelector | false |
| namespaceSelector | Rules to filter Kubernetes objects by NamespaceSelector | NamespaceSelector | false |
| targets | Kubernetes resources to be used | []GroupVersionResources | false |

ObjectSelector

| Field | Description | Type | Required |
| --- | --- | --- | --- |
| nameSelector | Filter objects by their name | NameSelector | false |
| labelSelector | Filter objects by their labels | LabelSelector | false |

NameSelector

Select object by their name

| Field | Description | Type | Required |
| --- | --- | --- | --- |
| includeRegex | Include names that match at least one regex | []string | false |
| excludeRegex | Exclude names that match at least one regex | []string | false |
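
The include/exclude semantics can be sketched as follows; the match-any behavior over each list is an assumption based on the descriptions above, and the helper name is illustrative:

```python
import re

def name_allowed(name, include_regex=(), exclude_regex=()):
    # Assumed semantics: a name is selected when it matches at least one
    # include regex (or the include list is empty) and matches no exclude regex.
    if include_regex and not any(re.search(r, name) for r in include_regex):
        return False
    return not any(re.search(r, name) for r in exclude_regex)

print(name_allowed("app-1", include_regex=[r"^app-"]))         # True
print(name_allowed("kube-system", exclude_regex=[r"^kube-"]))  # False
```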

LabelSelector

A label selector is a label query over a set of resources. The results of matchLabels and matchExpressions are ANDed. An empty label selector matches all objects. A null label selector matches no objects.

| Field | Description | Type | Required |
| --- | --- | --- | --- |
| matchLabels | matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. | map[string]string | false |
| matchExpressions | matchExpressions is a list of label selector requirements. The requirements are ANDed. | []LabelSelectorRequirement | false |

LabelSelectorRequirement

A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values.

| Field | Description | Type | Required |
| --- | --- | --- | --- |
| key | key is the label key that the selector applies to. | string | false |
| operator | operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. | LabelSelectorOperator | false |
| values | values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. | []string | false |

LabelSelectorOperator

A label selector operator is the set of operators that can be used in a selector requirement.


NamespaceSelector

| Field | Description | Type | Required |
| --- | --- | --- | --- |
| nameSelector | Filter Namespaces by their name | NameSelector | false |
| labelSelector | Filter Namespaces by their labels | LabelSelector | false |

GroupVersionResources

Select a set of GroupVersionResource

| Field | Description | Type | Required |
| --- | --- | --- | --- |
| group | Kubernetes resource group. Example: apps | string | false |
| version | Kubernetes resource version. Example: v1 | string | false |
| resources | Kubernetes resource names. Example: deployments | []string | false |

6 - Synchronization

Assets management

6.1 - Introduction

Synchronization introduction

Synchronization is a critical process that enables the replication of data and configurations across different platform assets. This ensures consistency and integrity, and improves the platform resiliency.

Key concepts

Source and destination

Each synchronization has at least two assets:

  • Source: the original location or system from which data and configurations are retrieved.
  • Destination: the target location or system where data and configurations are applied or updated.

Synchronization periodicity

There are three distinct types of synchronization processes designed to meet different operational needs: Synchronization, SynchronizationPlan, and LiveSynchronization.

Synchronization

The Synchronization process is designed to run once, making it ideal for one-time data alignment tasks or initial setup processes. This type of synchronization is useful when a system or component needs to be brought up-to-date with the latest data and configurations from another source without ongoing updates.

The synchronization process follows these rules:

  • Object exists in Source: If a matching object exists in the source asset, it will be synchronized to the destination asset.
  • Object only in Destination: If a matching object exists only in the destination asset, it will be removed from the destination asset.
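
These rules amount to making the destination mirror the source. As a set-level sketch (the object identifiers are illustrative, not the operator's internal representation):

```python
def reconcile(source, destination):
    # Every object in the source is synchronized to the destination;
    # objects present only in the destination are pruned.
    to_sync = set(source)
    to_prune = set(destination) - set(source)
    return to_sync, to_prune

sync, prune = reconcile({"ns/a", "ns/b"}, {"ns/b", "ns/c"})
print(sorted(sync))   # ['ns/a', 'ns/b']
print(sorted(prune))  # ['ns/c']
```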

SynchronizationPlan

The SynchronizationPlan process operates similarly to a cron job, allowing synchronization tasks to be scheduled at regular intervals. This type is ideal for systems that require periodic updates to ensure data and configuration consistency over time without the need for real-time accuracy.
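
As a sketch, a SynchronizationPlan might look like the following; the schedule field name and its cron-like syntax are assumptions for illustration, not confirmed by this reference:

```yaml
apiVersion: automation.astronetes.io/v1alpha1
kind: SynchronizationPlan
metadata:
  name: nightly-backup
spec:
  plugin: kubernetes-to-bucket
  # Hypothetical field: run every day at 02:00 (cron-like syntax assumed)
  schedule: "0 2 * * *"
  config:
    sourceName: cluster-1
    destinationName: bucket-1
```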

LiveSynchronization

LiveSynchronization provides real-time synchronization, continuously monitoring and updating data and configurations as changes occur. This type of synchronization is essential for environments where immediate consistency and up-to-date information are crucial.

The synchronization process follows these rules:

  • Object Creation/Update in Source: If a matching object is created or updated in the source asset, it will be synchronized to the destination asset.
  • Object Deletion in Source: If a matching object is deleted in the source asset, the corresponding object will be deleted in the destination asset.
  • Object Creation/Update in Destination: If a matching object is created or updated in the destination asset, it will be synchronized from the source asset.
  • Object Only in Destination: If a matching object exists only in the destination asset, it will be removed from the destination asset.

Summary

| Periodicity | Description |
| --- | --- |
| Synchronization | Synchronize data and configurations only once. |
| SynchronizationPlan | Synchronize data and configurations based on a scheduled period. |
| LiveSynchronization | Real-time synchronization of data and configurations. |

Prerequisites

Before initiating the Synchronization process, ensure the following prerequisites are met:

  • Both source and destination systems have been defined as Assets.
  • There is network connectivity between the assets and the operator.

6.2 - Bucket to Kubernetes

Synchronize Bucket files to Kubernetes

6.2.1 - Introduction

Bucket to Kubernetes introduction

Bucket files can be synchronized in a Kubernetes cluster as Kubernetes objects using the bucket-to-kubernetes plugin.

Synchronization process

Special rules

There are no special rules for this synchronization plugin.

Blacklisted objects

There are some Kubernetes objects that are blacklisted by default and will be ignored by the synchronization process.

This is the list of the blacklisted objects:

  • Namespaces whose names start with kube- or openshift, and the Namespace resiliency-operator.
  • All namespaced objects inside a blacklisted Namespace.
  • ConfigMaps named kube-root-ca.crt or openshift-service-ca.crt.

Objects path

The files in the bucket will be read as Kubernetes objects from the following path: <group>.<version>.<kind>/<object_namespace>.<object_name>.

Examples:

  • The Namespace named test will be saved in the file core.v1.Namespace/test.
  • The Deployment named app-1 deployed in the test Namespace will be saved in the file apps.v1.Deployment/test.app-1.
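
The path scheme can be sketched as a small helper; the function name is illustrative, and the empty core API group is rendered as core, as in the examples above:

```python
def object_path(group: str, version: str, kind: str, namespace: str, name: str) -> str:
    # Core resources have an empty API group; it is rendered as "core".
    g = group or "core"
    # Cluster-scoped objects have no namespace prefix in the file name.
    obj = f"{namespace}.{name}" if namespace else name
    return f"{g}.{version}.{kind}/{obj}"

print(object_path("", "v1", "Namespace", "", "test"))            # core.v1.Namespace/test
print(object_path("apps", "v1", "Deployment", "test", "app-1"))  # apps.v1.Deployment/test.app-1
```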

Use cases

Backup restore

Restore your Kubernetes cluster from a Bucket.

Pilot light architecture

Recover from a disaster by running the application saved in the Bucket.

6.2.2 - Configuration

Bucket to Kubernetes configuration

Introduction

The synchronization process can be configured with some specific parameters using the .spec.config attribute.

apiVersion: automation.astronetes.io/v1alpha1
kind: Synchronization
metadata:
  name: example
spec:
  plugin: bucket-to-kubernetes
  config:
    ...

Required configuration

Source and destination

The source bucket and the destination cluster can be specified using the .spec.config.sourceName and .spec.config.destinationName properties. Both the Bucket and the KubernetesCluster objects should exist in the same Namespace where the synchronization is being created.

Example:

apiVersion: automation.astronetes.io/v1alpha1
kind: Synchronization
metadata:
  name: example
spec:
  plugin: bucket-to-kubernetes
  config:
    sourceName: bucket-1
    destinationName: cluster-1
    ...

Selectors

The resources that should be synchronized can be configured using the .spec.config.selectors property.

Example:

apiVersion: automation.astronetes.io/v1alpha1
kind: Synchronization
metadata:
  name: example
spec:
  plugin: bucket-to-kubernetes
  config:
    ...
    selectors:
      - target:
          version: v1
          resources:
            - namespaces
        objectSelector:
          labelSelector:
            matchLabels:
              env: pro

Optional configuration

Global selectors

Global selectors are used to set the default value on all selectors defined in .spec.config.selectors. They can be configured with the .spec.config.globalSelector property.

There are two options, which can be configured at the same time:

  • namespaceSelector: to set a default namespace selector for namespaced resources
  • objectSelector: to set a default object selector for all resources

Example:

apiVersion: automation.astronetes.io/v1alpha1
kind: Synchronization
metadata:
  name: example
spec:
  plugin: bucket-to-kubernetes
  config:
    ...
    globalSelector:
      namespaceSelector:
        labelSelector:
          matchLabels:
            env: pro
      objectSelector:
        labelSelector:
          matchLabels:
            env: pro

Path prefix

The Kubernetes objects can be read from a subdirectory of the source Bucket. The property .spec.config.pathPrefix allows this configuration.

Example:

apiVersion: automation.astronetes.io/v1alpha1
kind: Synchronization
metadata:
  name: example
spec:
  plugin: bucket-to-kubernetes
  config:
    ...
    pathPrefix: prefix-path

Log level

The log level of the Pod deployed to execute the synchronization can be configured with the .spec.config.logLevel parameter.

Example:

apiVersion: automation.astronetes.io/v1alpha1
kind: Synchronization
metadata:
  name: example
spec:
  plugin: bucket-to-kubernetes
  config:
    ...
    logLevel: warn

Observability

Observability can be enabled using the specific .spec.config.observability parameter. For more information, check the Observability page.

Example:

apiVersion: automation.astronetes.io/v1alpha1
kind: Synchronization
metadata:
  name: example
spec:
  plugin: bucket-to-kubernetes
  config:
    ...
    observability:
      enabled: true
      interval: 2m

Limit assigned resources

For each synchronization, a new Pod is deployed inside the same Namespace. The limit and requests resources can be set using the .spec.config.resources field.

Example:

apiVersion: automation.astronetes.io/v1alpha1
kind: Synchronization
metadata:
  name: example
spec:
  plugin: bucket-to-kubernetes
  config:
    ...
    resources:
      requests:
        cpu: 1
        memory: 2Gi
      limits:
        cpu: 2
        memory: 2Gi

Concurrency

The concurrency parameter can be used to improve the performance of the synchronization process with .spec.config.concurrency.

Example:

apiVersion: automation.astronetes.io/v1alpha1
kind: Synchronization
metadata:
  name: example
spec:
  plugin: bucket-to-kubernetes
  config:
    ...
    concurrency: 200

Transformations

Objects from the source bucket can be transformed before being synchronized into the destination cluster.

Example:

apiVersion: automation.astronetes.io/v1alpha1
kind: Synchronization
metadata:
  name: example
spec:
  plugin: bucket-to-kubernetes
  config:
    ...
    transformations:
      resources:
        - version: v1
          resources:
            - namespaces
      operations:
        - jsonpatch:
            operations:
              - op: add
                path: /metadata/labels/test-astrosync
                value: ok
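
As an illustration of what the add operation above does, here is a minimal sketch applying it to a Namespace manifest held as a plain dict; the helper name and the simplified path handling are ours, not part of the operator:

```python
def apply_add(obj, path, value):
    """Apply a JSONPatch 'add' operation for a simple /a/b/c object path."""
    keys = path.strip("/").split("/")
    target = obj
    for key in keys[:-1]:
        # Walk down the object, creating intermediate maps if missing.
        target = target.setdefault(key, {})
    target[keys[-1]] = value
    return obj

ns = {"apiVersion": "v1", "kind": "Namespace",
      "metadata": {"name": "test", "labels": {}}}
apply_add(ns, "/metadata/labels/test-astrosync", "ok")
print(ns["metadata"]["labels"])  # {'test-astrosync': 'ok'}
```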

API Reference

| Name | Description | Type | Required |
| --- | --- | --- | --- |
| sourceName | Bucket name | string | yes |
| destinationName | KubernetesCluster name | string | yes |
| selectors | The Kubernetes resources to be synchronized | []KubernetesObjectSelector | yes |
| globalSelector | Global selectors to be applied to all selectors | KubernetesGlobalSelector | yes |

6.2.3 - Observability

Observability

Introduction

The synchronization process exposes several metrics in Prometheus format over HTTP.

Exported metrics

The following metrics are available:

| Metric | Description |
| --- | --- |
| astronetes_synchronization_status | The status of each synchronization object. |
| astronetes_synchronization_status_total | The count of synchronization objects for each state. |
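
For example, a Prometheus query could aggregate synchronized objects per state; the state label name is an assumption for illustration:

```promql
# Count of synchronization objects per state (label name assumed)
sum by (state) (astronetes_synchronization_status_total)
```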

Requirements

  • The ServiceMonitor CRD from prometheus (servicemonitors.monitoring.coreos.com) must be enabled in the cluster where the operator is running.

Processes

Enable observability

Update the synchronization configuration, setting the parameter .spec.config.observability.enabled to true.

Once observability is enabled, a ServiceMonitor will be created in the same Namespace as the related synchronization.

Disable observability

Update the synchronization configuration, setting the parameter .spec.config.observability.enabled to false.

6.2.4 - API Reference

Configuration details

Config

Configuration for Bucket to Kubernetes synchronization

| Field | Description | Type | Required |
| --- | --- | --- | --- |
| concurrency | Concurrent processes to be executed to improve performance | int | false |
| destinationName | KubernetesCluster name where data will be synchronized | string | false |
| globalSelector | Overrides selectors properties | KubernetesGlobalSelector | false |
| logLevel | Log level to be used by the synchronization Pod | string | false |
| observability | Observability configuration | ObservabilityConfig | false |
| options | Synchronization options | SynchronizationOptions | false |
| pathPrefix | Path prefix to be used to retrieve objects in the Bucket | string | false |
| resources | Resources to be assigned to the synchronization Pod | ResourceRequirements | false |
| selectors | Selectors to filter the Kubernetes resources to be synchronized | []KubernetesObjectSelector | false |
| sourceName | Bucket name from where data will be read | string | false |
| transformations | Transform Kubernetes objects before they are written to the destination | []Transformations | false |
| useCachedData | Use cached data instead of fetching data from Kubernetes clusters on startup | bool | false |

KubernetesGlobalSelector

Global selector is used to set the default value on all selectors

| Field | Description | Type | Required |
| --- | --- | --- | --- |
| objectSelector | Rules to filter Kubernetes objects by ObjectSelector | ObjectSelector | false |
| namespaceSelector | Rules to filter Kubernetes objects by NamespaceSelector | NamespaceSelector | false |

ObjectSelector

| Field | Description | Type | Required |
| --- | --- | --- | --- |
| nameSelector | Filter objects by their name | NameSelector | false |
| labelSelector | Filter objects by their labels | LabelSelector | false |

NameSelector

Select object by their name

| Field | Description | Type | Required |
| --- | --- | --- | --- |
| includeRegex | Include names that match at least one regex | []string | false |
| excludeRegex | Exclude names that match at least one regex | []string | false |

LabelSelector

A label selector is a label query over a set of resources. The results of matchLabels and matchExpressions are ANDed. An empty label selector matches all objects. A null label selector matches no objects.

| Field | Description | Type | Required |
| --- | --- | --- | --- |
| matchLabels | matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. | map[string]string | false |
| matchExpressions | matchExpressions is a list of label selector requirements. The requirements are ANDed. | []LabelSelectorRequirement | false |

LabelSelectorRequirement

A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values.

| Field | Description | Type | Required |
| --- | --- | --- | --- |
| key | key is the label key that the selector applies to. | string | false |
| operator | operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. | LabelSelectorOperator | false |
| values | values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. | []string | false |

LabelSelectorOperator

A label selector operator is the set of operators that can be used in a selector requirement.


NamespaceSelector

| Field | Description | Type | Required |
| --- | --- | --- | --- |
| nameSelector | Filter Namespaces by their name | NameSelector | false |
| labelSelector | Filter Namespaces by their labels | LabelSelector | false |

ObservabilityConfig

Configure the synchronization process observability using Prometheus ServiceMonitor

| Field | Description | Type | Required |
| --- | --- | --- | --- |
| enabled | Enable the Observability with a Prometheus ServiceMonitor | bool | false |
| interval | Configure the interval in the ServiceMonitor that Prometheus will use to scrape metrics | Duration | false |

Duration

Duration is a wrapper around time.Duration which supports correct marshaling to YAML and JSON. In particular, it marshals into strings, which can be used as map keys in json.


SynchronizationOptions

Customize the synchronization process with special options

| Field | Description | Type | Required |
| --- | --- | --- | --- |
| dryRun | Simulate the synchronization process but don't execute the write operations | bool | false |
| forceSync | Synchronize an object to the destination even if the object exists in the destination and it doesn't match the configured selectors | bool | false |
| forcePrune | Prune an object in the destination even if it doesn't match the configured selectors | bool | false |
| showLogIfObjectIsAlreadyInSync | Show a log message if an object is already in sync | bool | false |
| showLogIfObjectHaveBeenAdapted | Show a log message if an object has been adapted for the destination | bool | false |

KubernetesObjectSelector

| Field | Description | Type | Required |
| --- | --- | --- | --- |
| objectSelector | Filter objects by ObjectSelector | ObjectSelector | false |
| namespaceSelector | Filter objects by NamespaceSelector | NamespaceSelector | false |
| target | Kubernetes resource to be used | GroupVersionResources | false |

GroupVersionResources

Select a set of GroupVersionResource

| Field | Description | Type | Required |
| --- | --- | --- | --- |
| group | Kubernetes resource group. Example: apps | string | false |
| version | Kubernetes resource version. Example: v1 | string | false |
| resources | Kubernetes resource names. Example: deployments | []string | false |

Transformations

Transformations is a list of operations to modify the Kubernetes objects matching the given selectors

| Field | Description | Type | Required |
| --- | --- | --- | --- |
| resources | Select the objects to be transformed by their resource type | []GroupVersionResources | false |
| namespaceSelector | Filter the objects to be transformed by NamespaceSelector | NamespaceSelector | false |
| objectSelector | Filter the objects to be transformed by ObjectSelector | ObjectSelector | false |
| operations | Operations to be executed to transform the objects | []TransformationOperation | false |

TransformationOperation

The operation to execute to transform the objects

| Field | Description | Type | Required |
| --- | --- | --- | --- |
| jsonpatch | JSONPatch operation | OperationJSONPatch | false |

OperationJSONPatch

The JSONPatch operation

| Field | Description | Type | Required |
| --- | --- | --- | --- |
| skipIfNotFoundOnDelete | Skip if not found on delete | bool | false |
| operations | List of operations to be executed | []JSONPatchOperation | false |

JSONPatchOperation

JSONPatch operation

| Field | Description | Type | Required |
| --- | --- | --- | --- |
| op | JSONPatch operation: add, copy, move, remove, replace, test | string | false |
| path | Execute the operation on the given path | string | false |
| value | Optional value to be used in the operation | interface{} | false |

6.3 - Kubernetes to Bucket

Synchronize Kubernetes objects into a Bucket

6.3.1 - Introduction

Kubernetes to Bucket introduction

Kubernetes objects can be synchronized into a Bucket using the kubernetes-to-bucket plugin.

Synchronization process

Special rules

There are no special rules for this synchronization plugin.

Blacklisted objects

There are some Kubernetes objects that are blacklisted by default and will be ignored by the synchronization process.

This is the list of the blacklisted objects:

  • Namespaces whose names start with kube- or openshift, and the Namespace resiliency-operator.
  • All namespaced objects inside a blacklisted Namespace.
  • ConfigMaps named kube-root-ca.crt or openshift-service-ca.crt.

Objects path

Each Kubernetes object will be stored in a file with the following path: <group>.<version>.<kind>/<object_namespace>.<object_name>.

Examples:

  • The Namespace named test will be saved in the file core.v1.Namespace/test.
  • The Deployment named app-1 deployed in the test Namespace will be saved in the file apps.v1.Deployment/test.app-1.

Use cases

Backups

Backup your Kubernetes cluster to a Bucket to recover data when required.

Pilot light architecture

Synchronize applications to a Bucket and recover the applications in another cluster after a disaster.

6.3.2 - Configuration

Kubernetes to Bucket configuration

Introduction

The synchronization process can be configured with some specific parameters using the .spec.config attribute.

apiVersion: automation.astronetes.io/v1alpha1
kind: Synchronization
metadata:
  name: example
spec:
  plugin: kubernetes-to-bucket
  config:
    ...

Required configuration

Source and destination

The source cluster and the destination bucket can be specified using the .spec.config.sourceName and .spec.config.destinationName properties. Both the KubernetesCluster and the Bucket objects should exist in the same Namespace where the synchronization is being created.

Example:

apiVersion: automation.astronetes.io/v1alpha1
kind: Synchronization
metadata:
  name: example
spec:
  plugin: kubernetes-to-bucket
  config:
    sourceName: cluster-1
    destinationName: bucket-1
    ...

Optional configuration

Selectors

The resources that should be synchronized can be configured using the .spec.config.selectors property. If not configured, all resources will be included in the synchronization.

Example:

apiVersion: automation.astronetes.io/v1alpha1
kind: Synchronization
metadata:
  name: example
spec:
  plugin: kubernetes-to-bucket
  config:
    ...
    selectors:
      - target:
          version: v1
          resources:
            - namespaces
        objectSelector:
          labelSelector:
            matchLabels:
              disaster-recovery: "true"

Global selectors

Global selectors are used to set the default value on all selectors defined in .spec.config.selectors. They can be configured with the .spec.config.globalSelector property.

There are two options, which can be configured at the same time:

  • namespaceSelector: to set a default namespace selector for namespaced resources
  • objectSelector: to set a default object selector for all resources

Example:

apiVersion: automation.astronetes.io/v1alpha1
kind: Synchronization
metadata:
  name: example
spec:
  plugin: kubernetes-to-bucket
  config:
    ...
    globalSelector:
      namespaceSelector:
        labelSelector:
          matchLabels:
            env: pro
      objectSelector:
        labelSelector:
          matchLabels:
            env: pro

Path prefix

The Kubernetes objects can be written in a subdirectory of the destination Bucket. The property .spec.config.pathPrefix allows this configuration.

Example:

apiVersion: automation.astronetes.io/v1alpha1
kind: Synchronization
metadata:
  name: example
spec:
  plugin: kubernetes-to-bucket
  config:
    ...
    pathPrefix: prefix-path

Log level

The log level of the Pod deployed to execute the synchronization can be configured with the .spec.config.logLevel parameter.

Example:

apiVersion: automation.astronetes.io/v1alpha1
kind: Synchronization
metadata:
  name: example
spec:
  plugin: kubernetes-to-bucket
  config:
    ...
    logLevel: warn

Observability

Observability can be enabled using the specific .spec.config.observability parameter. For more information, check the Observability page.

Example:

apiVersion: automation.astronetes.io/v1alpha1
kind: Synchronization
metadata:
  name: example
spec:
  plugin: kubernetes-to-bucket
  config:
    ...
    observability:
      enabled: true
      interval: 2m

Limit assigned resources

For each synchronization, a new Pod is deployed inside the same Namespace. The limit and requests resources can be set using the .spec.config.resources field.

Example:

apiVersion: automation.astronetes.io/v1alpha1
kind: Synchronization
metadata:
  name: example
spec:
  plugin: kubernetes-to-bucket
  config:
    ...
    resources:
      requests:
        cpu: 1
        memory: 2Gi
      limits:
        cpu: 2
        memory: 2Gi

Concurrency

The concurrency parameter can be used to improve the performance of the synchronization process with .spec.config.concurrency.

Example:

apiVersion: automation.astronetes.io/v1alpha1
kind: Synchronization
metadata:
  name: example
spec:
  plugin: kubernetes-to-bucket
  config:
    ...
    concurrency: 200

Transformations

Kubernetes objects from the source cluster can be transformed before being synchronized into the destination bucket.

Example:

apiVersion: automation.astronetes.io/v1alpha1
kind: Synchronization
metadata:
  name: example
spec:
  plugin: kubernetes-to-bucket
  config:
    ...
    transformations:
      resources:
        - version: v1
          resources:
            - namespaces
      operations:
        - jsonpatch:
            operations:
              - op: add
                path: /metadata/labels/test-astrosync
                value: ok

6.3.3 - Observability

Observability

Introduction

The synchronization process exposes several metrics in Prometheus format over HTTP.

Exported metrics

The following metrics are available:

| Metric | Description |
| --- | --- |
| astronetes_synchronization_status | The status of each synchronization object. |
| astronetes_synchronization_status_total | The count of synchronization objects for each state. |

Requirements

  • The ServiceMonitor CRD from prometheus (servicemonitors.monitoring.coreos.com) must be enabled in the cluster where the operator is running.

Processes

Enable observability

Update the synchronization configuration, setting the parameter .spec.config.observability.enabled to true.

Once observability is enabled, a ServiceMonitor will be created in the same Namespace as the related synchronization.

Disable observability

Update the synchronization configuration, setting the parameter .spec.config.observability.enabled to false.

6.3.4 - API Reference

Configuration details

Config

Configuration for Kubernetes to Bucket synchronization

| Field | Description | Type | Required |
| --- | --- | --- | --- |
| concurrency | Concurrent processes to be executed to improve performance | int | false |
| destinationName | Bucket name where data will be synchronized | string | false |
| globalSelector | Overrides selectors properties | KubernetesGlobalSelector | false |
| logLevel | Log level to be used by the synchronization Pod | string | false |
| observability | Observability configuration | ObservabilityConfig | false |
| options | Synchronization options | SynchronizationOptions | false |
| pathPrefix | Path prefix to be used to retrieve objects in the Bucket | string | false |
| resources | Resources to be assigned to the synchronization Pod | ResourceRequirements | false |
| selectors | Selectors to filter the Kubernetes resources to be synchronized | []KubernetesObjectSelector | false |
| sourceName | KubernetesCluster name from where data will be read | string | false |
| transformations | Transform Kubernetes objects before they are written to the destination | []Transformations | false |
| useCachedData | Use cached data instead of getting data from assets on startup | bool | false |
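
Combining several of these fields, a minimal sketch of a kubernetes-to-bucket configuration (the cluster name, bucket name, and pathPrefix value are placeholders):

```yaml
apiVersion: automation.astronetes.io/v1alpha1
kind: Synchronization
metadata:
  name: example
spec:
  plugin: kubernetes-to-bucket
  config:
    sourceName: cluster-1
    destinationName: backup-bucket
    pathPrefix: /cluster-1
    logLevel: info
    selectors:
      - target:
          version: v1
          resources:
            - namespaces
```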

KubernetesGlobalSelector

Global selector is used to set the default value on all selectors

| Field | Description | Type | Required |
| --- | --- | --- | --- |
| objectSelector | Rules to filter Kubernetes objects by ObjectSelector | ObjectSelector | false |
| namespaceSelector | Rules to filter Kubernetes objects by NamespaceSelector | NamespaceSelector | false |

ObjectSelector

| Field | Description | Type | Required |
| --- | --- | --- | --- |
| nameSelector | Filter objects by their name | NameSelector | false |
| labelSelector | Filter objects by their labels | LabelSelector | false |

NameSelector

Select objects by their name

| Field | Description | Type | Required |
| --- | --- | --- | --- |
| includeRegex | Include names that match at least one regex | []string | false |
| excludeRegex | Exclude names that match at least one regex | []string | false |

LabelSelector

A label selector is a label query over a set of resources. The result of matchLabels and matchExpressions are ANDed. An empty label selector matches all objects. A null label selector matches no objects.

| Field | Description | Type | Required |
| --- | --- | --- | --- |
| matchLabels | matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. | map[string]string | false |
| matchExpressions | matchExpressions is a list of label selector requirements. The requirements are ANDed. | []LabelSelectorRequirement | false |

LabelSelectorRequirement

A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values.

| Field | Description | Type | Required |
| --- | --- | --- | --- |
| key | key is the label key that the selector applies to. | string | false |
| operator | operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. | LabelSelectorOperator | false |
| values | values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. | []string | false |

LabelSelectorOperator

A label selector operator is the set of operators that can be used in a selector requirement.

| Field | Description | Type | Required |
| --- | --- | --- | --- |

NamespaceSelector

| Field | Description | Type | Required |
| --- | --- | --- | --- |
| nameSelector | Filter Namespaces by their name | NameSelector | false |
| labelSelector | Filter Namespaces by their labels | LabelSelector | false |

ObservabilityConfig

Configure the synchronization process observability using Prometheus ServiceMonitor

| Field | Description | Type | Required |
| --- | --- | --- | --- |
| enabled | Enable the Observability with a Prometheus ServiceMonitor | bool | false |
| interval | Configure the interval in the ServiceMonitor that Prometheus will use to scrape metrics | Duration | false |

Duration

Duration is a wrapper around time.Duration which supports correct marshaling to YAML and JSON. In particular, it marshals into strings, which can be used as map keys in json.

| Field | Description | Type | Required |
| --- | --- | --- | --- |

SynchronizationOptions

Customize the synchronization process with special options

| Field | Description | Type | Required |
| --- | --- | --- | --- |
| dryRun | Simulate the synchronization process but don't execute the write operations | bool | false |
| forceSync | Synchronize an object in the destination even if the object exists in the destination and doesn't match the configured selectors | bool | false |
| forcePrune | Prune an object in the destination even if it doesn't match the configured selectors | bool | false |
| showLogIfObjectIsAlreadyInSync | Show a log message if an object is already in sync | bool | false |
| showLogIfObjectHaveBeenAdapted | Show a log message if an object has been adapted for the destination | bool | false |

KubernetesObjectSelector

| Field | Description | Type | Required |
| --- | --- | --- | --- |
| objectSelector | Filter objects by ObjectSelector | ObjectSelector | false |
| namespaceSelector | Filter objects by NamespaceSelector | NamespaceSelector | false |
| target | Kubernetes resource to be used | GroupVersionResources | false |

GroupVersionResources

Select a set of GroupVersionResource

| Field | Description | Type | Required |
| --- | --- | --- | --- |
| group | Kubernetes resource group. Example: apps | string | false |
| version | Kubernetes resource version. Example: v1 | string | false |
| resources | Kubernetes resource names. Example: deployments | []string | false |

Transformations

Transformations is a list of operations to modify the Kubernetes objects matching the given selectors

| Field | Description | Type | Required |
| --- | --- | --- | --- |
| resources | Select the objects to be transformed by their resource type | []GroupVersionResources | false |
| namespaceSelector | Filter the objects to be transformed by NamespaceSelector | NamespaceSelector | false |
| objectSelector | Filter the objects to be transformed by ObjectSelector | ObjectSelector | false |
| operations | Operations to be executed to transform the objects | []TransformationOperation | false |

TransformationOperation

The operation to execute to transform the objects

| Field | Description | Type | Required |
| --- | --- | --- | --- |
| jsonpatch | JSONPatch operation | OperationJSONPatch | false |

OperationJSONPatch

The JSONPatch operation

| Field | Description | Type | Required |
| --- | --- | --- | --- |
| skipIfNotFoundOnDelete | Skip if not found on delete | bool | false |
| operations | List of operations to be executed | []JSONPatchOperation | false |

JSONPatchOperation

JSONPatch operation

| Field | Description | Type | Required |
| --- | --- | --- | --- |
| op | JSONPatch operation: add, copy, move, remove, replace, test | string | false |
| path | Execute the operation on the given path | string | false |
| value | Optional value to be used in the operation | interface{} | false |

6.4 - Kubernetes to Kubernetes

Synchronize Kubernetes objects between two clusters

6.4.1 - Introduction

Kubernetes to Kubernetes introduction

Kubernetes objects can be synchronized between clusters using the kubernetes-to-kubernetes plugin.

Synchronization process

Special rules

Additionally, there are special rules for some kinds of source objects:

  • PersistentVolumeClaim: objects will be updated in the destination cluster only if .spec.resources changes.

  • ServiceAccount: If you’re using a Kubernetes cluster version <v1.24, when creating a ServiceAccount, an autogenerated secret will be created. From here, there are two scenarios:

    • Service account with an autogenerated secret: The autogenerated secret will be deleted, and a new one will be generated in the target cluster. The secret is always updated during each synchronization.

    • Service account with a custom secret: The custom secret will remain unchanged.

    To avoid errors, custom secrets must not follow the autogenerated secret naming structure. Secrets are considered autogenerated if they begin with the following prefixes:

    • {serviceAccountName}-token-
    • {serviceAccountName}-dockercfg-

Blacklisted objects

There are some Kubernetes objects that are blacklisted by default and will be ignored by the synchronization process.

This is the list of the blacklisted objects:

  • Namespaces whose name starts with kube- or openshift, and the Namespace resiliency-operator.
  • All namespaced objects inside a blacklisted Namespace.
  • ConfigMaps named kube-root-ca.crt or openshift-service-ca.crt.
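
Beyond the built-in blacklist, additional Namespaces can be excluded with a nameSelector, using the excludeRegex field documented in the API reference. A sketch (the monitoring- prefix is only an illustration):

```yaml
apiVersion: automation.astronetes.io/v1alpha1
kind: Synchronization
metadata:
  name: example
spec:
  plugin: kubernetes-to-kubernetes
  config:
    ...
    globalSelector:
      namespaceSelector:
        nameSelector:
          excludeRegex:
            - ^monitoring-.*
```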

6.4.2 - Configuration

Kubernetes to Kubernetes configuration

Introduction

The synchronization process can be configured with some specific parameters using the .spec.config attribute.

apiVersion: automation.astronetes.io/v1alpha1
kind: Synchronization
metadata:
  name: example
spec:
  plugin: kubernetes-to-kubernetes
  config:
    ...

Required configuration

Source and destination

The source and the destination clusters can be specified using the .spec.config.sourceName and .spec.config.destinationName properties. Both KubernetesCluster objects must exist in the same Namespace where the synchronization is being created.

Example:

apiVersion: automation.astronetes.io/v1alpha1
kind: Synchronization
metadata:
  name: example
spec:
  plugin: kubernetes-to-kubernetes
  config:
    sourceName: cluster-1
    destinationName: cluster-2
    ...

Selectors

The resources that should be synchronized between clusters can be configured using the .spec.config.selectors property.

Example:

apiVersion: automation.astronetes.io/v1alpha1
kind: Synchronization
metadata:
  name: example
spec:
  plugin: kubernetes-to-kubernetes
  config:
    ...
    selectors:
      - target:
          version: v1
          resources:
            - namespaces
        objectSelector:
          labelSelector:
            matchLabels:
              disaster-recovery: "true"

Optional configuration

Global selectors

Global selectors are used to set the default value on all selectors defined in .spec.config.selectors. They can be configured with the .spec.config.globalSelector property.

Two options are available, and they can be configured at the same time:

  • namespaceSelector: sets a default namespace selector for namespaced resources
  • objectSelector: sets a default object selector for all resources

Example:

apiVersion: automation.astronetes.io/v1alpha1
kind: Synchronization
metadata:
  name: example
spec:
  plugin: kubernetes-to-kubernetes
  config:
    ...
    globalSelector:
      namespaceSelector:
        labelSelector:
          matchLabels:
            env: pro
      objectSelector:
        labelSelector:
          matchLabels:
            env: pro

Log level

The log level of the Pod deployed to execute the synchronization can be configured with the .spec.config.logLevel parameter.

Example:

apiVersion: automation.astronetes.io/v1alpha1
kind: Synchronization
metadata:
  name: example
spec:
  plugin: kubernetes-to-kubernetes
  config:
    ...
    logLevel: warn

Observability

Observability can be enabled using the .spec.config.observability parameter.

Example:

apiVersion: automation.astronetes.io/v1alpha1
kind: Synchronization
metadata:
  name: example
spec:
  plugin: kubernetes-to-kubernetes
  config:
    ...
    observability:
      enabled: true
      interval: 2m

Limit assigned resources

For each synchronization, a new Pod is deployed inside the same Namespace. The resource limits and requests can be set using the .spec.config.resources field.

Example:

apiVersion: automation.astronetes.io/v1alpha1
kind: Synchronization
metadata:
  name: example
spec:
  plugin: kubernetes-to-kubernetes
  config:
    ...
    resources:
      requests:
        cpu: 1
        memory: 2Gi
      limits:
        cpu: 2
        memory: 2Gi

Concurrency

The concurrency parameter can be used to improve the performance of the synchronization process with .spec.config.concurrency.

Example:

apiVersion: automation.astronetes.io/v1alpha1
kind: Synchronization
metadata:
  name: example
spec:
  plugin: kubernetes-to-kubernetes
  config:
    ...
    concurrency: 200

Transformations

Kubernetes objects from the source cluster can be transformed before being synchronized into the destination cluster.

Example:

apiVersion: automation.astronetes.io/v1alpha1
kind: Synchronization
metadata:
  name: example
spec:
  plugin: kubernetes-to-kubernetes
  config:
    ...
    transformations:
      - resources:
          - group: ""
            version: v1
            resources:
              - namespaces
        namespaceSelector:
          labelSelector:
            matchLabels:
              sync: "true"
        operations:
          - jsonpatch:
              operations:
                - op: add
                  path: /metadata/labels/test-astrosync
                  value: ok

6.4.3 - Observability

Observability

Introduction

The synchronization process exposes metrics in Prometheus format over HTTP.

Exported metrics

The following metrics are available:

| Metric | Description |
| --- | --- |
| astronetes_synchronization_status | The status of each synchronization object. |
| astronetes_synchronization_status_total | The count of synchronization objects for each state. |

Requirements

  • The ServiceMonitor CRD from Prometheus (servicemonitors.monitoring.coreos.com) must be installed in the cluster where the operator is running.

Processes

Enable observability

Update the synchronization configuration, setting the parameter .spec.config.observability.enabled to true.

Once observability is enabled, a ServiceMonitor will be created in the same Namespace as the related synchronization.

Disable observability

Update the synchronization configuration, setting the parameter .spec.config.observability.enabled to false.
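
As with enabling, disabling observability is a one-field change; a minimal sketch (names are placeholders):

```yaml
apiVersion: automation.astronetes.io/v1alpha1
kind: Synchronization
metadata:
  name: example
spec:
  plugin: kubernetes-to-kubernetes
  config:
    ...
    observability:
      enabled: false
```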

6.4.4 - API Reference

Configuration details

Config

Configuration for Kubernetes to Kubernetes synchronization

| Field | Description | Type | Required |
| --- | --- | --- | --- |
| concurrency | Concurrent processes to be executed to improve performance | int | false |
| destinationName | KubernetesCluster name where data will be synchronized | string | false |
| globalSelector | Overrides selectors properties | KubernetesGlobalSelector | false |
| logLevel | Log level to be used by the synchronization Pod | string | false |
| observability | Observability configuration | ObservabilityConfig | false |
| options | Synchronization options | SynchronizationOptions | false |
| resources | Resources to be assigned to the synchronization Pod | ResourceRequirements | false |
| selectors | Selectors to filter the Kubernetes resources to be synchronized | []KubernetesObjectSelector | false |
| sourceName | KubernetesCluster name from where data will be read | string | false |
| transformations | Transform Kubernetes objects before they are written to the destination | []Transformations | false |
| useCachedData | Use cached data instead of getting data from Kubernetes clusters on startup | bool | false |

KubernetesGlobalSelector

Global selector is used to set the default value on all selectors

| Field | Description | Type | Required |
| --- | --- | --- | --- |
| objectSelector | Rules to filter Kubernetes objects by ObjectSelector | ObjectSelector | false |
| namespaceSelector | Rules to filter Kubernetes objects by NamespaceSelector | NamespaceSelector | false |

ObjectSelector

| Field | Description | Type | Required |
| --- | --- | --- | --- |
| nameSelector | Filter objects by their name | NameSelector | false |
| labelSelector | Filter objects by their labels | LabelSelector | false |

NameSelector

Select objects by their name

| Field | Description | Type | Required |
| --- | --- | --- | --- |
| includeRegex | Include names that match at least one regex | []string | false |
| excludeRegex | Exclude names that match at least one regex | []string | false |

LabelSelector

Select objects by their labels

| Field | Description | Type | Required |
| --- | --- | --- | --- |
| matchLabels | Match objects by the given labels | map[string]string | false |
| matchExpressions | Match objects by the given expressions | []LabelSelectorRequirement | false |

LabelSelectorRequirement

A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values.

| Field | Description | Type | Required |
| --- | --- | --- | --- |
| key | key is the label key that the selector applies to. | string | false |
| operator | operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. | LabelSelectorOperator | false |
| values | values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. | []string | false |

LabelSelectorOperator

A label selector operator is the set of operators that can be used in a selector requirement.

| Field | Description | Type | Required |
| --- | --- | --- | --- |

NamespaceSelector

| Field | Description | Type | Required |
| --- | --- | --- | --- |
| nameSelector | Filter Namespaces by their name | NameSelector | false |
| labelSelector | Filter Namespaces by their labels | LabelSelector | false |

ObservabilityConfig

Configure the synchronization process observability using Prometheus ServiceMonitor

| Field | Description | Type | Required |
| --- | --- | --- | --- |
| enabled | Enable the Observability with a Prometheus ServiceMonitor | bool | false |
| interval | Configure the interval in the ServiceMonitor that Prometheus will use to scrape metrics | Duration | false |

Duration

Duration is a wrapper around time.Duration which supports correct marshaling to YAML and JSON. In particular, it marshals into strings, which can be used as map keys in json.

| Field | Description | Type | Required |
| --- | --- | --- | --- |

SynchronizationOptions

Customize the synchronization process with special options

| Field | Description | Type | Required |
| --- | --- | --- | --- |
| dryRun | Simulate the synchronization process but don't execute the write operations | bool | false |
| forceSync | Synchronize an object in the destination even if the object exists in the destination and doesn't match the configured selectors | bool | false |
| forcePrune | Prune an object in the destination even if it doesn't match the configured selectors | bool | false |
| showLogIfObjectIsAlreadyInSync | Show a log message if an object is already in sync | bool | false |
| showLogIfObjectHaveBeenAdapted | Show a log message if an object has been adapted for the destination | bool | false |

KubernetesObjectSelector

| Field | Description | Type | Required |
| --- | --- | --- | --- |
| objectSelector | Filter objects by ObjectSelector | ObjectSelector | false |
| namespaceSelector | Filter objects by NamespaceSelector | NamespaceSelector | false |
| target | Kubernetes resource to be used | GroupVersionResources | false |

GroupVersionResources

Select a set of GroupVersionResource

| Field | Description | Type | Required |
| --- | --- | --- | --- |
| group | Kubernetes resource group. Example: apps | string | false |
| version | Kubernetes resource version. Example: v1 | string | false |
| resources | Kubernetes resource names. Example: deployments | []string | false |

Transformations

Transformations is a list of operations to modify the Kubernetes objects matching the given selectors

| Field | Description | Type | Required |
| --- | --- | --- | --- |
| resources | Select the objects to be transformed by their resource type | []GroupVersionResources | false |
| namespaceSelector | Filter the objects to be transformed by NamespaceSelector | NamespaceSelector | false |
| objectSelector | Filter the objects to be transformed by ObjectSelector | ObjectSelector | false |
| operations | Operations to be executed to transform the objects | []TransformationOperation | false |

TransformationOperation

The operation to execute to transform the objects

| Field | Description | Type | Required |
| --- | --- | --- | --- |
| jsonpatch | JSONPatch operation | OperationJSONPatch | false |

OperationJSONPatch

The JSONPatch operation

| Field | Description | Type | Required |
| --- | --- | --- | --- |
| skipIfNotFoundOnDelete | Skip if not found on delete | bool | false |
| operations | List of operations to be executed | []JSONPatchOperation | false |

JSONPatchOperation

JSONPatch operation

| Field | Description | Type | Required |
| --- | --- | --- | --- |
| op | JSONPatch operation: add, copy, move, remove, replace, test | string | false |
| path | Execute the operation on the given path | string | false |
| value | Optional value to be used in the operation | interface{} | false |

6.5 - Zookeeper to Zookeeper

Synchronize Zookeeper data between two clusters

6.5.1 - Introduction

Synchronize Zookeeper data between clusters

You can synchronize Zookeeper data between two clusters using the Zookeeper protocol.

Supported models

One time synchronization

You can synchronize the data just once with the Synchronization Kubernetes object.

Periodic synchronization

You can synchronize the data periodically with the SynchronizationPlan Kubernetes object.

Samples

Synchronize once

Synchronize the data once only in the /test path:

apiVersion: automation.astronetes.io/v1alpha1
kind: Synchronization
metadata:
  generateName: synchronize-zookeeper-
spec:
  plugin: zookeeper-to-zookeeper-nodes
  config:
    sourceName: zookeeper-source
    destinationName: zookeeper-destination
    rootPath: /test
    createRootPath: true

Scheduled synchronization

Synchronize data every hour in the /test path:

apiVersion: automation.astronetes.io/v1alpha1
kind: SynchronizationPlan
metadata:
  name: synchronize-zookeeper
spec:
  schedule: "0 * * * *"
  template:
    spec:
      plugin: zookeeper-to-zookeeper-nodes
      config: 
        sourceName: zookeeper-source
        destinationName: zookeeper-destination
        rootPath: /test

6.5.2 - Configuration

Plugin parameters and accepted values

Required configuration

Source and destination

The source and the destination clusters can be specified using the .spec.config.sourceName and .spec.config.destinationName properties. Both Database objects must exist in the same Namespace where the synchronization is being created.

Example:

apiVersion: automation.astronetes.io/v1alpha1
kind: Synchronization
metadata:
  name: example
spec:
  plugin: zookeeper-nodes-to-zookeeper
  config:
    sourceName: cluster-1
    destinationName: cluster-2
    ...

Root path

The root path can be used to synchronize only a specific part of the Zookeeper database.

Example:

apiVersion: automation.astronetes.io/v1alpha1
kind: Synchronization
metadata:
  name: example
spec:
  plugin: zookeeper-nodes-to-zookeeper
  config:
    ...
    rootPath: /test

Optional configuration

Create root path

Create the Root Path in the destination database if it doesn’t exist.

Example:

apiVersion: automation.astronetes.io/v1alpha1
kind: Synchronization
metadata:
  name: example
spec:
  plugin: zookeeper-nodes-to-zookeeper
  config:
    ...
    createRootPath: true

Ignore ephemeral

Don’t synchronize ephemeral data to the destination cluster.

Example:

apiVersion: automation.astronetes.io/v1alpha1
kind: Synchronization
metadata:
  name: example
spec:
  plugin: zookeeper-nodes-to-zookeeper
  config:
    ...
    ignoreEphemeral: true

Exclude paths

Exclude data from being synchronized to the destination cluster filtering on path using regex.

Example:

apiVersion: automation.astronetes.io/v1alpha1
kind: Synchronization
metadata:
  name: example
spec:
  plugin: zookeeper-nodes-to-zookeeper
  config:
    ...
    excludePathRegexp: ..

6.5.3 - API Reference

Configuration details

Config

Configuration for Zookeeper to Zookeeper synchronization

| Name | Description | Type | Required |
| --- | --- | --- | --- |
| sourceName | Zookeeper instance acting as source | string | yes |
| destinationName | Zookeeper instance acting as destination | string | yes |
| rootPath | Root Path of the contents to synchronize | string | yes |
| createRootPath | Whether to create the Root Path in the destination database | boolean | no |
| ignoreEphemeral | Whether to ignore ephemeral data | boolean | no |
| excludePathRegexp | Regular expression for keys to exclude while synchronizing | string | no |

7 - Automation

Incident response automation

7.1 - Introduction

Automation introduction

The operations to improve the platform resiliency can be automated with the automation framework provided by the Resiliency Operator.

Use cases

Recovery from a disaster

After a disaster occurs in one of your platform assets, the automation framework helps with the recovery of the platform.

Key concepts

Tasks

A Task represents a configurable, reusable unit of work. It defines a plugin with a specific configuration to be executed later.

TaskRuns

A TaskRun represents a single execution of a Task. When a TaskRun is created, it triggers the execution of the plugin defined in the Task.
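
To make the two concepts concrete, here is a sketch. The Task reuses the custom-image plugin documented later in this section; the TaskRun's taskName reference field is an assumption, not taken from this document:

```yaml
# A reusable unit of work: runs a shell command via the custom-image plugin.
apiVersion: automation.astronetes.io/v1alpha1
kind: Task
metadata:
  name: hello-world
spec:
  plugin: custom-image
  config:
    image: busybox
    command:
      - echo
      - "hello world"
---
# A single execution of the Task above.
# NOTE: the taskName field is hypothetical; check the API reference for the actual schema.
apiVersion: automation.astronetes.io/v1alpha1
kind: TaskRun
metadata:
  generateName: hello-world-
spec:
  taskName: hello-world
```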

7.2 - Plugins

Automation plugins

7.2.1 - Custom image

Run custom container image in Kubernetes cluster.

7.2.1.1 - Introduction

Custom image plugin introduction

The custom-image plugin enables the deployment of a custom container image to be executed as a Task. This is extremely useful to run custom logic and software in the resiliency patterns.

The Pod is deployed in the Kubernetes namespace where the Task has been created.

7.2.1.2 - Configuration

Plugin parameters and accepted values

Task

Configuration

| Name | Description | Type | Required |
| --- | --- | --- | --- |
| image | Container image to be deployed | string | yes |
| command | The command to be executed when the container starts | []string | yes |

7.2.1.3 - Samples

Samples for the custom-image plugin

This is a list of samples of what can be performed as a Task with the custom-image plugin.

Bash command

Run a bash command:

apiVersion: automation.astronetes.io/v1alpha1
kind: Task
metadata:
  name: hello-world
spec:
  plugin: custom-image
  config:
    image: busybox
    command:
      - echo
      - "hello world"

Kubectl command

Execute a kubectl command:

apiVersion: automation.astronetes.io/v1alpha1
kind: Task
metadata:
  name: kubectl
spec:
  plugin: custom-image
  config:
    image: bitnami/kubectl:1.27
    command:
      - kubectl
      - cluster-info
    clusterRole: cluster-admin

7.2.2 - Kubernetes object transformation

Transform objects in a Kubernetes cluster.

7.2.2.1 - Introduction

Plugin introduction

This plugin transforms objects in a Kubernetes cluster when executed as a Task. This is useful to adapt resources as part of the resiliency patterns.

7.2.2.2 - Configuration

Plugin parameters and accepted values

Task

Configuration

| Name | Description | Type | Required |
| --- | --- | --- | --- |
| image | Container image to be deployed | string | yes |
| command | The command to be executed when the container starts | []string | yes |

7.2.3 - Run synchronization

Run a synchronization from a template

7.2.3.1 - Introduction

Introduction for run synchronization plugin

This plugin allows the creation of new Synchronization objects on demand, using a custom template.

Use cases

Disaster Recovery

Create a Synchronization process to restore data after a disaster occurs.

Computing offloading

Expand the platform by offloading the applications to another cloud provider.

7.2.3.2 - Configuration

Configuration for run synchronization plugin

Introduction

The Task can be configured with some specific parameters using the .spec.config attribute.

Required

Plugin

The .spec.plugin field references the plugin to be used by the Task. This value must be set to run-synchronization.

Example:

apiVersion: automation.astronetes.io/v1alpha1
kind: Task
metadata:
  name: example
spec:
  plugin: run-synchronization
  ...

Synchronization spec

Configure the synchronization spec according to the plugin to be used by the synchronization.

Example:

apiVersion: automation.astronetes.io/v1alpha1
kind: Task
metadata:
  name: set-replica-to-1
spec:
  plugin: run-synchronization
  config:
    template:
      spec:
        plugin: kubernetes-to-kubernetes
        config:
          sourceName: ...
          destinationName: ...
          ...

Optional

Synchronization annotations

The annotations for the Synchronization object can be configured using the .spec.config.template.metadata.annotations field.

Example:

apiVersion: automation.astronetes.io/v1alpha1
kind: Task
metadata:
  name: set-replica-to-1
spec:
  plugin: run-synchronization
  config:
    template:
      metadata:
        annotations:
          env: dev
      spec:
        plugin: kubernetes-to-kubernetes
        config:
          sourceName: ...
          destinationName: ...
          ...

Synchronization labels

The labels for the Synchronization object can be configured using the .spec.config.template.metadata.labels field.

Example:

apiVersion: automation.astronetes.io/v1alpha1
kind: Task
metadata:
  name: set-replica-to-1
spec:
  plugin: run-synchronization
  config:
    template:
      metadata:
        labels:
          env: dev
      spec:
        plugin: kubernetes-to-kubernetes
        config:
          sourceName: ...
          destinationName: ...
          ...

7.2.3.3 - API Reference

Configuration details

Config

| Field | Description | Type | Required |
| --- | --- | --- | --- |
| template |  | Template | false |

Template

| Field | Description | Type | Required |
| --- | --- | --- | --- |
| spec |  | SynchronizationSpec | false |

SynchronizationSpec

SynchronizationSpec defines the desired state of Synchronization

| Field | Description | Type | Required |
| --- | --- | --- | --- |
| restartPolicy | Restart policy | RestartPolicy | false |
| plugin | Synchronization plugin | SynchronizationPlugin | false |
| config | Synchronization config | JSON | false |

SynchronizationPlugin

| Field | Description | Type | Required |
| --- | --- | --- | --- |

8 - Tutorials

Tutorials for real-world use cases

8.1 - Active-active Kubernetes architecture

How to setup an active-active architecture between two Kubernetes clusters

Overview

Active-active replication between Kubernetes clusters is a strategy to ensure high availability and disaster recovery for applications. In this setup, multiple Kubernetes clusters, typically located in different geographical regions, run identical copies of an application simultaneously.

Prerequisites

  • Install Astronetes Resiliency Operator.
  • Create a namespace in which to store the secrets and run the synchronization between clusters.

Setup

Import the first cluster

Import the first Kubernetes cluster as described in detail here:

  1. Save the kubeconfig file as cluster-1-kubeconfig.yaml.

    Import the kubeconfig file as secret:

    kubectl create secret generic cluster-1-kubeconfig --from-file=kubeconfig.yaml=cluster-1-kubeconfig.yaml
    
  2. Create the KubernetesCluster resource manifest cluster-1.yaml:

    apiVersion: assets.astronetes.io/v1alpha1
    kind: KubernetesCluster
    metadata:
      name: cluster-1
    spec:
      secretName: cluster-1-kubeconfig
    

    Deploy the resource with the following command:

    kubectl create -f cluster-1.yaml
    

Import the second cluster

Import the second Kubernetes cluster as described in detail here:

  1. Save the kubeconfig file as cluster-2-kubeconfig.yaml.

    Import the kubeconfig file as secret:

    kubectl create secret generic cluster-2-kubeconfig --from-file=kubeconfig.yaml=cluster-2-kubeconfig.yaml
    
  2. Create the KubernetesCluster resource manifest cluster-2.yaml:

    apiVersion: assets.astronetes.io/v1alpha1
    kind: KubernetesCluster
    metadata:
      name: cluster-2
    spec:
      secretName: cluster-2-kubeconfig
    

    Deploy the resource with the following command:

    kubectl create -f cluster-2.yaml
    

Synchronize the clusters

Create the configuration manifest to synchronize the clusters. The full documentation is provided at Configure kubernetes-to-kubernetes.

The following example shows a minimal configuration to synchronize namespaces labeled with sync=true:

  1. Save the configuration file as livesync.yaml with the following content:
apiVersion: automation.astronetes.io/v1alpha1
kind: LiveSynchronization
metadata:
  name: livesync-dev-active-active
spec:
  plugin: kubernetes-to-kubernetes
  config:
    sourceName: cluster-1
    destinationName: cluster-2
    globalSelector:
      namespaceSelector:
        labelSelector:
          matchLabels:
            sync: "true"
    observability:
      enabled: true
    options:
      dryRun: false
    selectors:
    - objectSelector:
        labelSelector:
          matchLabels:
            sync: "true"
      target:
        group: ""
        resources:
        - namespaces
        version: v1
    - target:
        group: ""
        resources:
        - services
        - secrets
        - configmaps
        - serviceaccounts
        version: v1
    - target:
        group: apps
        resources:
        - deployments
        version: v1
    - target:
        group: rbac.authorization.k8s.io
        resources:
        - clusterroles
        - rolebindings
        version: v1
    - target:
        group: networking.k8s.io
        resources:
        - ingresses
        version: v1
  suspend: false
  2. Apply the configuration:

    kubectl apply -f livesync.yaml
    

Operations

Pause the synchronization

The synchronization process can be paused with the following command:

kubectl patch livesynchronization livesync-dev-active-active -p '{"spec":{"suspend":true}}' --type=merge

Resume the synchronization

The synchronization process can be resumed with the following command:

kubectl patch livesynchronization livesync-dev-active-active -p '{"spec":{"suspend":false}}' --type=merge

8.2 - Active-passive Kubernetes architecture

How to setup an active-passive architecture between two Kubernetes clusters

Overview

Active-passive replication between Kubernetes clusters is a strategy designed to provide high availability and disaster recovery, albeit in a more cost-efficient manner than active-active replication.

Prerequisites

  • Install Astronetes Resiliency Operator.
  • Create a namespace in which to store the secrets and run the synchronization between clusters.

Setup

Import the active cluster

Import the active Kubernetes cluster as described in detail here:

  1. Save the kubeconfig file as cluster-1-kubeconfig.yaml.

    Import the kubeconfig file as secret:

    kubectl create secret generic cluster-1-kubeconfig --from-file=kubeconfig.yaml=cluster-1-kubeconfig.yaml
    
  2. Create the KubernetesCluster resource manifest cluster-1.yaml:

    apiVersion: assets.astronetes.io/v1alpha1
    kind: KubernetesCluster
    metadata:
      name: cluster-1
    spec:
      secretName: cluster-1-kubeconfig
    

    Deploy the resource with the following command:

    kubectl create -f cluster-1.yaml
    

Import the passive cluster

Import the passive Kubernetes cluster as follows:

  1. Save the kubeconfig file as cluster-2-kubeconfig.yaml.

    Import the kubeconfig file as a secret:

    kubectl create secret generic cluster-2-kubeconfig --from-file=kubeconfig.yaml=cluster-2-kubeconfig.yaml
    
  2. Create the KubernetesCluster resource manifest cluster-2.yaml:

    apiVersion: assets.astronetes.io/v1alpha1
    kind: KubernetesCluster
    metadata:
      name: cluster-2
    spec:
      secretName: cluster-2-kubeconfig
    

    Deploy the resource with the following command:

    kubectl create -f cluster-2.yaml
    

Synchronize the clusters

Create the configuration manifest to synchronize the clusters; the full documentation is provided at Configure kubernetes-to-kubernetes.

In the following example there is a minimal configuration to synchronize namespaces labeled with sync=true. Deployments are replicated to the second cluster with replicas=0, meaning that the application is deployed but not running in that cluster. The application is started only after switching over to the second cluster.
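That replicas-to-zero rewrite is expressed as a JSON Patch (RFC 6902) replace operation. Its effect on a Deployment manifest can be sketched with a toy implementation (the shop/checkout Deployment is made up; the operator uses its own machinery):

```python
def json_patch_replace(obj, path, value):
    """Apply a single RFC 6902 'replace' operation in place."""
    keys = [k for k in path.split("/") if k]
    target = obj
    for key in keys[:-1]:
        target = target[key]
    if keys[-1] not in target:
        raise KeyError(f"'replace' target {path} does not exist")
    target[keys[-1]] = value
    return obj

deployment = {"kind": "Deployment",
              "metadata": {"namespace": "shop", "name": "checkout"},
              "spec": {"replicas": 3}}
json_patch_replace(deployment, "/spec/replicas", 0)
# the Deployment lands in the passive cluster scaled to zero
```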

  1. Save the configuration file as livesync.yaml with the following content:
apiVersion: automation.astronetes.io/v1alpha1
kind: LiveSynchronization
metadata:
  name: active-passive
spec:
  plugin: kubernetes-to-kubernetes
  config:
    sourceName: cluster-1
    destinationName: cluster-2
    globalSelector:
      namespaceSelector:
        labelSelector:
          matchLabels:
            sync: "true"
    observability:
      enabled: true
    options:
      dryRun: false
    transformations:
      - resources:
          - group: apps
            version: v1
            resources:
              - deployments
        namespaceSelector:
          labelSelector:
            matchLabels:
              sync: "true"
        operations:
          - jsonpatch:
              operations:
                - op: replace
                  path: /spec/replicas
                  value: 0
    selectors:
    - objectSelector:
        labelSelector:
          matchLabels:
            sync: "true"
      target:
        group: ""
        resources:
        - namespaces
        version: v1
    - target:
        group: ""
        resources:
        - services
        - secrets
        - configmaps
        - serviceaccounts
        version: v1
    - target:
        group: apps
        resources:
        - deployments
        version: v1
    - target:
        group: rbac.authorization.k8s.io
        resources:
        - clusterroles
        - rolebindings
        version: v1
    - target:
        group: networking.k8s.io
        resources:
        - ingresses
        version: v1
  suspend: false
  2. Apply the configuration:

    kubectl apply -f livesync.yaml
    

Operations

Pause the synchronization

The synchronization process can be paused with the following command:

kubectl patch livesynchronization active-passive -p '{"spec":{"suspend":true}}' --type=merge

Resume the synchronization

The synchronization process can be resumed with the following command:

kubectl patch livesynchronization active-passive -p '{"spec":{"suspend":false}}' --type=merge

Recover from disasters

Recovering from a disaster requires creating one TaskRun resource for each Task that must run to recover the system and applications.

  1. Define the TaskRun resource in the taskrun.yaml file:

    apiVersion: automation.astronetes.io/v1alpha1
    kind: TaskRun
    metadata:
      name: restore-apps
    spec:
      taskName: set-test-label
    
  2. Create the TaskRun:

    kubectl create -f taskrun.yaml
    
  3. Wait for the application to be recovered.

Understanding the TaskRun

After defining a LiveSynchronization, a Task resource will be created in the destination cluster. The operator processes the spec.config.replication.resources[*].recoveryProcess parameter to define the steps required to activate the dormant applications.

This is the Task that will be created according to the previously defined LiveSynchronization object:

apiVersion: automation.astronetes.io/v1alpha1
kind: Task
metadata:
  name: active-passive
spec:
  plugin: kubernetes-objects-transformation
  config:
    resources:
      - identifier:
          group: apps
          version: v1
          resources: deployments
        patch:
          operations:
            - op: replace
              path: '/spec/replicas'
              value: 1
          filter:
            namespaceSelector:
              matchLabels:
                sync: "true"

For every Deployment in the selected namespaces, the replica count will be set to 1.
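That recovery step can be sketched as a label-selector filter over namespaces followed by a replica bump (plain dicts and made-up names for illustration, not the operator's API):

```python
def activate_deployments(deployments, namespace_labels, match_labels, replicas=1):
    """Set `replicas` on every Deployment whose namespace carries all of
    the labels in `match_labels` (the Task's namespaceSelector)."""
    selected = {ns for ns, labels in namespace_labels.items()
                if all(labels.get(k) == v for k, v in match_labels.items())}
    for dep in deployments:
        if dep["metadata"]["namespace"] in selected:
            dep["spec"]["replicas"] = replicas
    return deployments

namespaces = {"shop": {"sync": "true"}, "tools": {}}
apps = [{"metadata": {"namespace": "shop", "name": "checkout"}, "spec": {"replicas": 0}},
        {"metadata": {"namespace": "tools", "name": "ci"}, "spec": {"replicas": 0}}]
activate_deployments(apps, namespaces, {"sync": "true"})
# only the Deployment in the sync=true namespace is started
```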

This object should not be tampered with; it is managed by its associated LiveSynchronization.

8.3 - Synchronize Zookeeper clusters

How to synchronize Zookeeper clusters data

Overview

In environments where high availability and disaster recovery are paramount, it is essential to maintain synchronized data across different ZooKeeper clusters to prevent inconsistencies and ensure seamless failover.

The following tutorial explains how to synchronize Zookeeper clusters.

Prerequisites

  • Install Astronetes Resiliency Operator.
  • Create a namespace in which to store the secrets and run the synchronization between clusters.

Setup

Import the first cluster

Import the first Zookeeper cluster as follows:

  1. Define the Database resource with the following YAML, and save it as zookeeper-1.yaml:

    apiVersion: assets.astronetes.io/v1alpha1
    kind: Database
    metadata:
      name: zookeeper-1
    spec:
      zookeeper:
        client:
          servers:
            - <zookeeper_ip>:<zookeeper_port>
            - <zookeeper_ip>:<zookeeper_port>
            - <zookeeper_ip>:<zookeeper_port>
    
  2. Import the resource with the following command:

    kubectl create -f zookeeper-1.yaml
    

Import the second cluster

Import the second Zookeeper cluster as follows:

  1. Define the Database resource with the following YAML, and save it as zookeeper-2.yaml:

    apiVersion: assets.astronetes.io/v1alpha1
    kind: Database
    metadata:
      name: zookeeper-2
    spec:
      zookeeper:
        client:
          servers:
            - <zookeeper_ip>:<zookeeper_port>
            - <zookeeper_ip>:<zookeeper_port>
            - <zookeeper_ip>:<zookeeper_port>
    
  2. Import the resource with the following command:

    kubectl create -f zookeeper-2.yaml
    

Synchronize the clusters

Create the configuration manifest to synchronize the clusters; the full documentation is provided in the plugin reference.

The following example shows the configuration to synchronize all the data in the / path every hour:

  1. Create the synchronization file as zookeeper-sync.yaml with the following content:

    apiVersion: automation.astronetes.io/v1alpha1
    kind: SynchronizationPlan
    metadata:
      name: synchronize-zookeeper
    spec:
      schedule: "0 * * * *"
      template:
        spec:
          plugin: zookeeper-to-zookeeper-nodes
          config:
            sourceName: zookeeper-1
            destinationName: zookeeper-2
            rootPath: /
            createRoutePath: true
    
  2. Apply the configuration:

    kubectl apply -f zookeeper-sync.yaml
    

Operations

Force the synchronization

The synchronization can be run at any time by creating a Synchronization object.

The following example shows the configuration to synchronize all the data in the / path:

  1. Create the synchronization file as zookeeper-sync-once.yaml with the following content:
apiVersion: automation.astronetes.io/v1alpha1
kind: Synchronization
metadata:
  generateName: synchronize-zookeeper-
spec:
  plugin: zookeeper-to-zookeeper-nodes
  config:
    sourceName: zookeeper-1
    destinationName: zookeeper-2
    rootPath: /
    createRoutePath: true
  2. Apply the configuration:

    kubectl create -f zookeeper-sync-once.yaml
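Conceptually, the zookeeper-to-zookeeper-nodes plugin mirrors every node at or under rootPath from the source ensemble to the destination. The selection can be sketched over in-memory path maps (a stand-in for real ZooKeeper clients, for illustration only):

```python
def mirror_znodes(source, destination, root_path="/"):
    """Copy every znode whose path equals root_path or lies beneath it.
    `source` and `destination` map absolute znode paths to their data."""
    prefix = root_path.rstrip("/") + "/"
    for path, data in source.items():
        if path == root_path or path.startswith(prefix):
            destination[path] = data
    return destination

source = {"/config": b"v1", "/config/topics": b"t", "/runtime": b"x"}
dest = {}
mirror_znodes(source, dest, root_path="/config")
# only /config and its children are replicated
```

With rootPath set to / as in the manifests above, every node in the source tree is in scope.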
    

9 - API Reference

This section contains the API Reference of CRDs for the Resiliency Operator.

9.1 - Assets API Reference

Packages

assets.astronetes.io/v1alpha1

Package v1alpha1 contains API Schema definitions for the assets v1alpha1 API group

Resource Types

AWSS3

Appears in:

Field | Description | Default | Validation
name string | Bucket name | | Required: {}
region string | AWS region name | | Required: {}
secretName string | Secret name where credentials are stored | | Required: {}

Bucket

Bucket is the Schema for the buckets API

Appears in:

Field | Description | Default | Validation
apiVersion string | assets.astronetes.io/v1alpha1 | |
kind string | Bucket | |
metadata ObjectMeta | Refer to Kubernetes API documentation for fields of metadata. | |
spec BucketSpec | | |

BucketList

BucketList contains a list of Bucket

Field | Description | Default | Validation
apiVersion string | assets.astronetes.io/v1alpha1 | |
kind string | BucketList | |
metadata ListMeta | Refer to Kubernetes API documentation for fields of metadata. | |
items Bucket array | | |

BucketSpec

BucketSpec defines the desired state of Bucket

Appears in:

Field | Description | Default | Validation
generic GenericBucket | Reference a generic bucket | | Optional: {}
gcpCloudStorage GCPCloudStorage | Reference a GCP Cloud Storage service | | Optional: {}
awsS3 AWSS3 | Reference a AWS Bucket service | | Optional: {}

Database

Database is the Schema for the databases API

Appears in:

Field | Description | Default | Validation
apiVersion string | assets.astronetes.io/v1alpha1 | |
kind string | Database | |
metadata ObjectMeta | Refer to Kubernetes API documentation for fields of metadata. | |
spec DatabaseSpec | | |

DatabaseList

DatabaseList contains a list of Database

Field | Description | Default | Validation
apiVersion string | assets.astronetes.io/v1alpha1 | |
kind string | DatabaseList | |
metadata ListMeta | Refer to Kubernetes API documentation for fields of metadata. | |
items Database array | | |

DatabaseSpec

DatabaseSpec defines the desired state of Database

Appears in:

Field | Description | Default | Validation
zookeeper Zookeeper | Zookeeper database | | Optional: {}

GCPCloudStorage

Appears in:

Field | Description | Default | Validation
name string | Bucket name | | Required: {}
secretName string | Secret name where credentials are stored | | Required: {}

GenericBucket

Appears in:

Field | Description | Default | Validation
name string | Bucket name | | Required: {}
endpoint string | Bucket endpoint | | Required: {}
useSSL boolean | Use SSL | | Optional: {}
secretName string | Secret name where credentials are stored | | Required: {}

KubernetesCluster

KubernetesCluster is the Schema for the kubernetesclusters API

Appears in:

Field | Description | Default | Validation
apiVersion string | assets.astronetes.io/v1alpha1 | |
kind string | KubernetesCluster | |
metadata ObjectMeta | Refer to Kubernetes API documentation for fields of metadata. | |
spec KubernetesClusterSpec | | |

KubernetesClusterList

KubernetesClusterList contains a list of KubernetesCluster

Field | Description | Default | Validation
apiVersion string | assets.astronetes.io/v1alpha1 | |
kind string | KubernetesClusterList | |
metadata ListMeta | Refer to Kubernetes API documentation for fields of metadata. | |
items KubernetesCluster array | | |

KubernetesClusterSpec

KubernetesClusterSpec defines the desired state of KubernetesCluster

Appears in:

Field | Description | Default | Validation
secretName string | Reference to the secret that stores the cluster Kubeconfig | | Required: {}

Zookeeper

Appears in:

Field | Description | Default | Validation
admin ZookeeperAdmin | Credentials for the admin port | | Optional: {}
client ZookeeperClient | Credentials for the client port | | Optional: {}

ZookeeperAdmin

Appears in:

Field | Description | Default | Validation
protocol string | Zookeeper protocol | | Required: {}
host string | Zookeeper host | | Required: {}
port string | Zookeeper port | | Required: {}
secretName string | Zookeeper authentication data | | Optional: {}

ZookeeperClient

Appears in:

Field | Description | Default | Validation
servers string array | Zookeeper servers | | Required: {}

9.2 - Automation API Reference

Packages

automation.astronetes.io/v1alpha1

Package v1alpha1 contains API Schema definitions for the automation v1alpha1 API group

Resource Types

Backup

Backup is the Schema for the backups API

Appears in:

Field | Description | Default | Validation
apiVersion string | automation.astronetes.io/v1alpha1 | |
kind string | Backup | |
metadata ObjectMeta | Refer to Kubernetes API documentation for fields of metadata. | |
spec BackupSpec | | |

BackupDestinationBucket

Appears in:

Field | Description | Default | Validation
name string | Reference the Bucket name | | Required: {}
basePath string | The base path to be used to store the Backup data | | Optional: {}

BackupList

BackupList contains a list of Backup

Field | Description | Default | Validation
apiVersion string | automation.astronetes.io/v1alpha1 | |
kind string | BackupList | |
metadata ListMeta | Refer to Kubernetes API documentation for fields of metadata. | |
items Backup array | | |

BackupPlugin

Underlying type: string

Appears in:

BackupSourceDatabase

Appears in:

Field | Description | Default | Validation
name string | Reference the Database name | | Required: {}

BackupSourceKubernetesCluster

Appears in:

Field | Description | Default | Validation
name string | Reference the KubernetesCluster name | | Required: {}
namespaces string array | Reference the Kubernetes namespaces to be included | | Optional: {}

BackupSpec

BackupSpec defines the desired state of Backup

Appears in:

Field | Description | Default | Validation
restartPolicy RestartPolicy | Restart policy | | Optional: {}
plugin BackupPlugin | Backup plugin | | Required: {}
config JSON | Backup config | | Required: {}

LiveSynchronization

LiveSynchronization is the Schema for the livesynchronizations API

Appears in:

Field | Description | Default | Validation
apiVersion string | automation.astronetes.io/v1alpha1 | |
kind string | LiveSynchronization | |
metadata ObjectMeta | Refer to Kubernetes API documentation for fields of metadata. | |
spec LiveSynchronizationSpec | | |

LiveSynchronizationList

LiveSynchronizationList contains a list of LiveSynchronization

Field | Description | Default | Validation
apiVersion string | automation.astronetes.io/v1alpha1 | |
kind string | LiveSynchronizationList | |
metadata ListMeta | Refer to Kubernetes API documentation for fields of metadata. | |
items LiveSynchronization array | | |

LiveSynchronizationPlugin

Underlying type: string

Appears in:

LiveSynchronizationSpec

LiveSynchronizationSpec defines the desired state of LiveSynchronization

Appears in:

Field | Description | Default | Validation
suspend boolean | Suspend the execution | false | Optional: {}
plugin LiveSynchronizationPlugin | LiveSynchronization plugin | | Required: {}
config JSON | LiveSynchronization config | | Required: {}

Resource

Appears in:

Field | Description | Default | Validation
group string | Resource group | | Optional: {}
version string | Resource version | | Required: {}
resource string | Resource | | Required: {}

Synchronization

Synchronization is the Schema for the synchronizations API

Appears in:

Field | Description | Default | Validation
apiVersion string | automation.astronetes.io/v1alpha1 | |
kind string | Synchronization | |
metadata ObjectMeta | Refer to Kubernetes API documentation for fields of metadata. | |
spec SynchronizationSpec | | |

SynchronizationList

SynchronizationList contains a list of Synchronization

Field | Description | Default | Validation
apiVersion string | automation.astronetes.io/v1alpha1 | |
kind string | SynchronizationList | |
metadata ListMeta | Refer to Kubernetes API documentation for fields of metadata. | |
items Synchronization array | | |

SynchronizationPlan

SynchronizationPlan is the Schema for the synchronizationplans API

Appears in:

Field | Description | Default | Validation
apiVersion string | automation.astronetes.io/v1alpha1 | |
kind string | SynchronizationPlan | |
metadata ObjectMeta | Refer to Kubernetes API documentation for fields of metadata. | |
spec SynchronizationPlanSpec | | |

SynchronizationPlanList

SynchronizationPlanList contains a list of SynchronizationPlan

Field | Description | Default | Validation
apiVersion string | automation.astronetes.io/v1alpha1 | |
kind string | SynchronizationPlanList | |
metadata ListMeta | Refer to Kubernetes API documentation for fields of metadata. | |
items SynchronizationPlan array | | |

SynchronizationPlanSpec

SynchronizationPlanSpec defines the desired state of SynchronizationPlan

Appears in:

Field | Description | Default | Validation
schedule string | Schedule in Cron format | | Required: {}
startingDeadlineSeconds integer | Optional deadline in seconds for starting the job if it misses its scheduled time for any reason. Missed job executions will be counted as failed ones. | | Optional: {}
concurrencyPolicy ConcurrencyPolicy | Specifies how to treat concurrent executions of a Job. Valid values are: "Allow" (default): allows CronJobs to run concurrently; "Forbid": forbids concurrent runs, skipping the next run if the previous run hasn't finished yet; "Replace": cancels the currently running job and replaces it with a new one. | | Optional: {}
suspend boolean | Suspend the execution | false | Optional: {}
template SynchronizationTemplateSpec | Specify the Synchronization that will be created when executing the Cron | | Optional: {}
successfulJobsHistoryLimit integer | The number of successful finished jobs to retain. Value must be a non-negative integer. | 2 | Optional: {}
failedJobsHistoryLimit integer | The number of failed finished jobs to retain. Value must be a non-negative integer. | 2 | Optional: {}
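Putting these fields together, a SynchronizationPlan using the documented scheduling knobs might look like the sketch below. The name and the nightly schedule are made up; the plugin config echoes the Zookeeper tutorial earlier in this document.

```yaml
apiVersion: automation.astronetes.io/v1alpha1
kind: SynchronizationPlan
metadata:
  name: nightly-zookeeper-sync   # hypothetical name
spec:
  schedule: "0 2 * * *"          # every day at 02:00
  concurrencyPolicy: Forbid      # skip a run if the previous one is still going
  startingDeadlineSeconds: 300
  successfulJobsHistoryLimit: 2
  failedJobsHistoryLimit: 2
  suspend: false
  template:
    spec:
      plugin: zookeeper-to-zookeeper-nodes
      config:
        sourceName: zookeeper-1
        destinationName: zookeeper-2
        rootPath: /
```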

SynchronizationPlugin

Underlying type: string

Appears in:

SynchronizationSpec

SynchronizationSpec defines the desired state of Synchronization

Appears in:

Field | Description | Default | Validation
restartPolicy RestartPolicy | Restart policy | | Optional: {}
plugin SynchronizationPlugin | Synchronization plugin | | Required: {}
config JSON | Synchronization config | | Required: {}

SynchronizationTemplateSpec

Appears in:

Field | Description | Default | Validation
metadata ObjectMeta | Refer to Kubernetes API documentation for fields of metadata. | |
spec SynchronizationSpec | Specification of the desired behavior of the Synchronization | | Optional: {}

Task

Task is the Schema for the tasks API

Appears in:

Field | Description | Default | Validation
apiVersion string | automation.astronetes.io/v1alpha1 | |
kind string | Task | |
metadata ObjectMeta | Refer to Kubernetes API documentation for fields of metadata. | |
spec TaskSpec | | |

TaskList

TaskList contains a list of Task

Field | Description | Default | Validation
apiVersion string | automation.astronetes.io/v1alpha1 | |
kind string | TaskList | |
metadata ListMeta | Refer to Kubernetes API documentation for fields of metadata. | |
items Task array | | |

TaskPlugin

Underlying type: string

Appears in:

TaskRun

TaskRun is the Schema for the taskruns API

Appears in:

Field | Description | Default | Validation
apiVersion string | automation.astronetes.io/v1alpha1 | |
kind string | TaskRun | |
metadata ObjectMeta | Refer to Kubernetes API documentation for fields of metadata. | |
spec TaskRunSpec | | |

TaskRunList

TaskRunList contains a list of TaskRun

Field | Description | Default | Validation
apiVersion string | automation.astronetes.io/v1alpha1 | |
kind string | TaskRunList | |
metadata ListMeta | Refer to Kubernetes API documentation for fields of metadata. | |
items TaskRun array | | |

TaskRunSpec

TaskRunSpec defines the desired state of TaskRun

Appears in:

Field | Description | Default | Validation
taskName string | Task name | | Required: {}

TaskSpec

TaskSpec defines the desired state of Task

Appears in:

Field | Description | Default | Validation
restartPolicy RestartPolicy | Restart policy | | Optional: {}
plugin TaskPlugin | Task plugin | | Required: {}
config JSON | Task config | | Required: {}