
Buckets

Manage Buckets

1 - Import GCP Cloud Storage

How-to import a bucket from GCP Cloud Storage

Buckets hosted in Cloud Storage can be imported as GCP Cloud Storage buckets.

Requirements

The Bucket properties:

  • Bucket name
  • GCP project ID

The credentials to access the bucket:

  • The ServiceAccount key
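
For reference, a GCP ServiceAccount key is a JSON document. A trimmed sketch of its usual shape is shown below; every value here is a placeholder, not a real credential:

```json
{
  "type": "service_account",
  "project_id": "<gcp-project-id>",
  "private_key_id": "<key-id>",
  "private_key": "-----BEGIN PRIVATE KEY-----\n...\n-----END PRIVATE KEY-----\n",
  "client_email": "<service-account>@<gcp-project-id>.iam.gserviceaccount.com",
  "client_id": "<client-id>",
  "token_uri": "https://oauth2.googleapis.com/token"
}
```

The full content of this JSON file is what goes into the Secret created in the next step.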

Process

1. Create the Secret

Store the following file as secret.yaml and substitute the template parameters with real ones.

apiVersion: v1
kind: Secret
metadata:
  name: bucket-credentials
stringData:
  application_default_credentials.json: '{...}'

Then create the Secret with the following command:

kubectl -n <namespace_name> apply -f secret.yaml

2. Create the object

Store the following file as bucket.yaml and substitute the template parameters with real ones.

apiVersion: assets.astronetes.io/v1alpha1
kind: Bucket
metadata:
  name: <name>
  namespace: <namespace>
spec:
  gcpCloudStorage:
    name: <bucket-name>
    projectID: <gcp-project-id>
    secretName: bucket-credentials

Deploy the resource with the following command:

kubectl create -f bucket.yaml
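
Putting it together, a hypothetical manifest with the placeholders filled in (all names here are illustrative, not real resources) could look like:

```yaml
apiVersion: assets.astronetes.io/v1alpha1
kind: Bucket
metadata:
  name: my-gcp-bucket        # example Bucket object name
  namespace: buckets         # example namespace
spec:
  gcpCloudStorage:
    name: my-company-assets  # the Cloud Storage bucket name
    projectID: my-gcp-project
    secretName: bucket-credentials  # the Secret created in step 1
```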

2 - Import generic bucket

How-to import a generic bucket

Buckets that support the AWS S3 protocol (such as MinIO) can be imported as a generic bucket.

Requirements

The Bucket properties:

  • Bucket endpoint
  • Bucket name

The credentials to access the bucket:

  • The access key ID
  • The secret access key

Process

1. Create the Secret

Store the following file as secret.yaml and substitute the template parameters with real ones.

apiVersion: v1
kind: Secret
metadata:
  name: bucket-credentials
stringData:
  accessKeyID: <access_key_id>
  secretAccessKey: <secret_access_key>

Then create the Secret with the following command:

kubectl -n <namespace_name> apply -f secret.yaml
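
Alternatively, the same Secret can be created directly from the command line with kubectl create secret generic. The key names must match the ones shown in the manifest above:

```
kubectl -n <namespace_name> create secret generic bucket-credentials \
  --from-literal=accessKeyID=<access_key_id> \
  --from-literal=secretAccessKey=<secret_access_key>
```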

2. Create the Bucket

Store the following file as bucket.yaml and substitute the template parameters with real ones.

apiVersion: assets.astronetes.io/v1alpha1
kind: Bucket
metadata:
  name: <name>
  namespace: <namespace>
spec:
  generic:
    endpoint: <bucket_endpoint>
    name: <bucket_name>
    useSSL: true
    secretName: bucket-credentials

Deploy the resource with the following command:

kubectl create -f bucket.yaml
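
As an illustration, a hypothetical manifest for a MinIO-hosted bucket (all names and endpoints here are examples) could look like:

```yaml
apiVersion: assets.astronetes.io/v1alpha1
kind: Bucket
metadata:
  name: my-minio-bucket      # example Bucket object name
  namespace: buckets         # example namespace
spec:
  generic:
    endpoint: minio.example.com:9000  # example S3-compatible endpoint
    name: backups                     # example bucket name
    useSSL: true
    secretName: bucket-credentials    # the Secret created in step 1
```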

3 - Configurations

Configure the Bucket import

Intro

The import of each Bucket can be configured with some specific parameters using the .spec.config attribute.

apiVersion: assets.astronetes.io/v1alpha1
kind: Bucket
metadata:
  name: my-bucket
spec:
  ...
  config: {}

Limit assigned resources

For each imported Bucket, a new Pod is deployed in the same Namespace. Resource requests and limits can be set using the .spec.config.resources field.

Example:

apiVersion: assets.astronetes.io/v1alpha1
kind: Bucket
metadata:
  name: my-bucket
spec:
  ...
  config:
    resources:
      requests:
        cpu: 1
        memory: 2Gi
      limits:
        cpu: 2
        memory: 2Gi

Filter the watched resources

By default, the operator watches all the files in the bucket. You can filter the list of paths to be watched by configuring the .spec.config.paths field.

Example:

apiVersion: assets.astronetes.io/v1alpha1
kind: Bucket
metadata:
  name: my-bucket
spec:
  ...
  config:
    paths:
      - example1/
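
Assuming each entry is matched as a path prefix (the trailing slash in the example above suggests this, though it is not stated explicitly), a configuration watching several folders could look like:

```yaml
spec:
  config:
    paths:
      - invoices/2024/   # hypothetical folder names for illustration
      - reports/
```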

Concurrency

The concurrency parameter can be used to improve the performance of the operator when listening for changes that happen in the Bucket.

Example:

apiVersion: assets.astronetes.io/v1alpha1
kind: Bucket
metadata:
  name: my-bucket
spec:
  ...
  config:
    concurrency: 200

4 - API Reference

Configuration details

Config

Customize the integration with a Bucket

Field | Description | Type | Required
------|-------------|------|---------
concurrency | Concurrent processes to be executed to improve performance | int | false
interval | Interval of which | string | false
logLevel | Log level to be used by the related Pod | string | false
observability | Observability configuration | ObservabilityConfig | false
paths | Filter the list of paths to be watched | []string | false
resources | Resources to be assigned to the synchronization Pod | ResourceRequirements | false

ObservabilityConfig

Configure the synchronization process observability using Prometheus ServiceMonitor

Field | Description | Type | Required
------|-------------|------|---------
enabled | Enable the Observability with a Prometheus ServiceMonitor | bool | false
interval | Configure the interval in the ServiceMonitor that Prometheus will use to scrape metrics | Duration | false
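
Since Duration wraps Go's time.Duration, interval values are expected as Go duration strings (units such as ms, s, m, h). For example:

```yaml
spec:
  config:
    observability:
      enabled: true
      interval: 30s   # scrape every 30 seconds; values like "1m" or "1h30m" are also valid
```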

Duration

Duration is a wrapper around time.Duration which supports correct marshaling to YAML and JSON. In particular, it marshals into strings, which can be used as map keys in json.

Field | Description | Type | Required
------|-------------|------|---------