Operator Setup¶
To work with the operator and manage your application stacks, you have to install the operator into your cluster, grant it sufficient rights, and connect it to the peripheral systems you want to use. The operator itself is a Spring Boot based application that is provided as an OCI image. A Helm chart for the initial installation of the operator into your cluster is provided and can be used to perform the basic setup. The required steps are described in the subsequent sections.
AWS-specific setup instructions
Note that the current version of the operator only supports AWS as an IaaS provider; further providers haven't been implemented yet. For this reason the following setup instructions are tightly bound to AWS as well. Although the operator may run in different environments, this hasn't been elaborated and tested yet. Further instructions will follow soon!
1 Provisioning Of Operator AWS Resources & Database¶
In order to deploy the operator to your cluster you'll have to provision some resources up front, namely a database and a set of AWS resources.
1.1 The Operator Database¶
The operator needs a database for its internal management of the models and further configurations. Here are the requirements to be met:
- Vendor: PostgreSQL
- Versions: No special requirement (tested with versions 13+)
- User & Schema: A normal schema with regular user rights is totally sufficient for the operator to run (see sample setup script below)
Operator Sample Database Setup
- Replace this placeholder with a custom password for your database user. This password will later be put into a secret for the operator itself.
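The sample setup script referenced above is not reproduced here; the following is a minimal sketch of what it could look like. The database, user, and schema names are illustrative assumptions — adapt them to your naming scheme.

```sql
-- Illustrative sketch only: all names are assumptions, adapt to your setup.
CREATE USER operator_user WITH PASSWORD 'CUSTOM_PASSWORD'; -- replace the placeholder password
CREATE DATABASE operator_db OWNER operator_user;

-- Connect to operator_db, then create a regular schema for the operator:
CREATE SCHEMA IF NOT EXISTS operator AUTHORIZATION operator_user;
GRANT ALL PRIVILEGES ON SCHEMA operator TO operator_user;
```

Regular user rights on this schema are sufficient; no superuser privileges are required.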
1.2 The AWS Resources¶
In AWS we must configure an IAM role that enables the operator to manage all resource types supported by the models. Since this depends on the concrete cluster setup as well as personal taste, the subsequent section describes just one possible setup, which has proven stable for quite a while now.
Furthermore, we also require some secrets in the secret store that is linked into the cluster. Again, this is a matter of configuration and personal taste, so we just give an example setup.
1.2.1 The IAM Role¶
In the following, the requirements for the IAM role are listed. Some custom policies as well as the trust relationship come with sample JSON data.
Required standard policies:
- AmazonS3FullAccess
- AmazonSNSFullAccess
- AmazonSQSFullAccess
- AWSCloudFormationFullAccess
- AWSKeyManagementServicePowerUser
- AWSLambda_FullAccess
Custom policies
- Replace `ACCOUNT_ID` with the ID of your AWS account!
- Replace `ACCOUNT_ID` with the ID of your AWS account and `REGION` with the target AWS region (e.g. `eu-central-1`).
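The custom policy JSON itself depends on your setup and is not reproduced here. As a hedged example of the trust relationship mentioned above, an IRSA-style trust policy for an EKS cluster typically looks like the following — the OIDC provider ID, namespace, and service account name are assumptions to be replaced alongside `ACCOUNT_ID` and `REGION`:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::ACCOUNT_ID:oidc-provider/oidc.eks.REGION.amazonaws.com/id/OIDC_PROVIDER_ID"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "oidc.eks.REGION.amazonaws.com/id/OIDC_PROVIDER_ID:sub": "system:serviceaccount:OPERATOR_NAMESPACE:OPERATOR_SERVICE_ACCOUNT"
        }
      }
    }
  ]
}
```

This binds the IAM role to exactly one Kubernetes service account, so the operator can assume the role without long-lived credentials.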
2 Installation Of The Operator Helm Chart¶
The operator can be installed into the cluster using a Helm chart which is available in this GitHub repository.
Our demo setup uses a Flux CD Helm Release to parameterize and install the chart. Furthermore, using Flux makes it very easy to keep the operator application up to date: with the GitOps approach, updates require only configuration changes in the respective Git repository. Our proposed Flux-based demo setup involves some resources to be created.
2.1 Kubernetes Cluster Prerequisites¶
In order to install the operator Helm chart properly, as in our demo setup, the cluster should be set up to fulfill some simple prerequisites, which are listed below.
2.1.1 External Secrets¶
To manage secrets, Kubernetes offers its built-in concept of secrets. Although these can be used to store secrets inside the cluster, this is often unhandy since it possibly exposes secrets to cluster administrators or DevOps engineers. We therefore prefer to integrate a dedicated secrets management system which might be more familiar to your company; this way we can separate secrets management from actual DevOps tasks. This is where the External Secrets Operator (ESO) comes into play: it integrates external secret management systems into the cluster.
Our demo installation contains the ESO, which is connected to the AWS Secrets Manager as well as the AWS Systems Manager Parameter Store. The operator itself also requires external secrets managers in order to retrieve and manage secrets for the respective deployments. Currently, we only use the AWS Parameter Store implementation.
2.1.2 ExternalDNS¶
Although it is not required, ExternalDNS is very handy when you want to fully automate your operator-based deployments without manual DNS host name assignment. Kubernetes ExternalDNS synchronizes exposed Kubernetes Services and Ingresses with DNS providers, which means it is able to create, modify and remove entries in your DNS provider based on the ingress configurations of your deployments.
Our demo installations use ExternalDNS with slightly different configurations. While the development cluster is configured to fully manage DNS entries, ExternalDNS on the production cluster is set up to only upsert DNS entries, i.e. deletion of DNS entries is forbidden in order not to destroy production setups by accident.
2.2 Required Kubernetes Cluster Resources¶
Kubernetes Cluster Role, Role Binding & Operator Service Account
RBAC Cluster roles contain rules that represent a set of permissions for specific resources. A cluster role itself is a non-namespaced resource.
- Replace the cluster role name by any name that matches your naming scheme.
- Further API groups and verbs might become necessary on further operator capabilities evolution!
A service account is a type of non-human account that provides a distinct identity in a Kubernetes cluster.
- Replace the value by the ARN of your EKS cluster!
- Replace the service account name by any name that matches your naming scheme.
- Replace the namespace by the namespace into which you want to deploy the operator.
A cluster role binding grants the cluster-wide resource permissions defined in a cluster role.
- Replace the name of the cluster role binding by any name that matches your naming scheme.
- Refer to the name of the cluster role here!
- Specify your created operator service account as a subject in order to connect the cluster role to this service account. This will assign the required cluster-wide permissions to the operator application.
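The annotations above refer to the YAML manifests of the respective resources, which are not reproduced here. The following is a hedged sketch of the three resources; all names, the namespace, the role ARN value, and the concrete rule set are illustrative assumptions and must be adapted to your setup and to the operator's actual capabilities:

```yaml
# Illustrative sketch; adapt names, namespace, ARN, and rules to your setup.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: my-operator-cluster-role          # any name matching your naming scheme
rules:
  - apiGroups: ["", "apps", "networking.k8s.io", "external-secrets.io"]
    resources: ["*"]
    verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
    # further API groups and verbs might become necessary as the operator evolves
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: my-operator-service-account       # any name matching your naming scheme
  namespace: my-operator-namespace        # the operator's target namespace
  annotations:
    eks.amazonaws.com/role-arn: "REPLACE_WITH_ARN"  # see the annotation above
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: my-operator-cluster-role-binding  # any name matching your naming scheme
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: my-operator-cluster-role          # must match the cluster role name
subjects:
  - kind: ServiceAccount                  # connects the role to the operator
    name: my-operator-service-account
    namespace: my-operator-namespace
```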
Flux CD Kubernetes Resources
Declare a Kubernetes secret which is used to grant access to our Flux state Git repository in which our operator Helm Release is stored.
- Replace the secret name by any name that matches your naming scheme.
- This is a Flux CD specific resource so we place it in the flux-system namespace.
- This is base 64 encoded data containing some ssh setup.
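A sketch of such a secret is shown below. The name is an assumption; the key layout (`identity`, `identity.pub`, `known_hosts`) follows Flux's convention for SSH-based Git access, and the base64 values are placeholders:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: my-flux-state-repo-secret  # any name matching your naming scheme
  namespace: flux-system           # Flux CD specific resource
type: Opaque
data:
  identity: BASE64_ENCODED_SSH_PRIVATE_KEY
  identity.pub: BASE64_ENCODED_SSH_PUBLIC_KEY
  known_hosts: BASE64_ENCODED_KNOWN_HOSTS
```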
Our Flux CD Git Repository contains the Helm Release definition for the operator deployment.
- Replace the repository name by any name that matches your naming scheme.
- This is a Flux CD specific resource so we place it in the flux-system namespace.
- Please replace this URL with your specific git repository in which you are managing your deployment configuration.
- Defines the branch to be checked out (might also refer to tags, ...)
- Please refer to the secret containing the git access details.
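A hedged sketch of the GitRepository resource described by the annotations above — the name, URL, and secret name are assumptions:

```yaml
apiVersion: source.toolkit.fluxcd.io/v1
kind: GitRepository
metadata:
  name: my-flux-state-repo            # any name matching your naming scheme
  namespace: flux-system              # Flux CD specific resource
spec:
  interval: 1m
  url: ssh://git@example.com/your-org/your-flux-state-repo.git  # your repository
  ref:
    branch: main                      # branch to check out (tags also possible)
  secretRef:
    name: my-flux-state-repo-secret   # the secret with the git access details
```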
A Flux CD Kustomization is a Custom Resource Definition and the counterpart of Kustomize’s kustomization.yaml config file of the repository we refer to.
- Replace the name of the kustomization by any name that matches your naming scheme.
- This is a Flux CD specific resource so we place it in the flux-system namespace.
- Refer to the Flux state Git repository that manages your operator Helm release.
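As a hedged sketch of such a Flux CD Kustomization, assuming the Git repository name from the previous step and the `pxf-operator` folder mentioned later in this document:

```yaml
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: my-operator-kustomization  # any name matching your naming scheme
  namespace: flux-system           # Flux CD specific resource
spec:
  interval: 5m
  path: ./pxf-operator             # folder in the Flux state repository
  prune: true
  sourceRef:
    kind: GitRepository
    name: my-flux-state-repo       # the Flux state Git repository
```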
2.3 The Helm Release Definition - A Git Repository¶
In order to deploy the operator Helm chart into our cluster, Flux CD needs a Helm Release together with the operator Helm repository. In our sample setup this is managed in a Git repository, which we call the Flux state repository; it is pulled automatically and regularly by Flux in order to keep the deployed resources in sync with our intended deployment specification. The relevant part of our repository is the folder `pxf-operator` and consists of the following:
Flux State Repository Contents
Declare a Kubernetes secret which is used to access the OCI repository that provides the operator Helm charts.
- Replace the secret name by any name that matches your naming scheme.
- This is a Flux CD specific resource so we place it in the flux-system namespace.
- This is a base64-encoded string that contains the Docker configuration which authorizes access to our registry. It looks like this: `{"auths":{"ghcr.io":{"username":"vnr-private","password":"ghp_...","auth":"dm5y...hVag=="}}}` and can be created using this sample command: `kubectl create secret docker-registry my-oci-secret --docker-server=REGISTRY_URL --docker-username=USERNAME --docker-password=PASSWORD --docker-email=EMAIL -n my-namespace`
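The resulting secret manifest, sketched below with an assumed name and a placeholder for the encoded Docker configuration:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: my-oci-secret      # any name matching your naming scheme
  namespace: flux-system   # Flux CD specific resource
type: kubernetes.io/dockerconfigjson
data:
  .dockerconfigjson: BASE64_ENCODED_DOCKER_CONFIG
```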
The OCI repository provides our operator Helm charts.
- Replace the repository name by any name that matches your naming scheme.
- This is a Flux CD specific resource so we place it in the flux-system namespace.
- Please refer to the secret we created prior!
- This is the URL of the operator helm repository and shouldn't be changed.
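A hedged sketch of such an OCI repository resource — the resource name is an assumption, and the chart URL is shown only as a placeholder since the actual value is fixed by the operator documentation:

```yaml
apiVersion: source.toolkit.fluxcd.io/v1beta2
kind: OCIRepository
metadata:
  name: my-operator-helm-repo  # any name matching your naming scheme
  namespace: flux-system       # Flux CD specific resource
spec:
  interval: 10m
  url: oci://REGISTRY_URL/CHART_PATH  # the fixed operator Helm repository URL
  ref:
    semver: ">=0.0.1"          # resolve the latest matching chart version
  secretRef:
    name: my-oci-secret        # the registry access secret created prior
```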
The actual Helm Release definition which parameterizes the operator Helm chart for deployment.
- Replace the release name by any name that matches your naming scheme.
- This is a Flux CD specific resource so we place it in the flux-system namespace.
- Please refer to the Helm Release docs for further information!
- Defines that the actual configuration values for the operator deployment are taken from a config map
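A hedged sketch of the Helm Release definition, assuming a Flux version whose HelmRelease `v2` API supports `chartRef` pointing at an OCIRepository; names and intervals are illustrative:

```yaml
apiVersion: helm.toolkit.fluxcd.io/v2
kind: HelmRelease
metadata:
  name: my-operator-release       # any name matching your naming scheme
  namespace: flux-system          # Flux CD specific resource
spec:
  interval: 5m
  chartRef:
    kind: OCIRepository
    name: my-operator-helm-repo   # the OCI repository providing the chart
  valuesFrom:
    - kind: ConfigMap             # configuration values come from a config map
      name: operator-values
```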
This Flux CD Kustomization in the repository is the counterpart of the Kustomization that we already deployed into the cluster, which defines the repository scan. This Kustomization is deployment specific and instructs Kustomize which descriptors belong to the application and how they should be processed.
- This is a Flux CD specific resource so we place it in the flux-system namespace.
- Defines the resources that belong to the deployment.
- Here we instruct Kustomize to turn the application config values into config maps.
- Loads a special configuration for the processing of config values.
This config tells Kustomize that a specific field inside the HelmRelease should be treated as a reference to our ConfigMap.
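The two files can be sketched as follows; the file names, resource names, and the values file name are assumptions, while the `nameReference` pattern is the standard Kustomize mechanism for rewriting the ConfigMap reference inside the HelmRelease:

```yaml
# file: kustomization.yaml — the deployment-specific Kustomization
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - oci-secret.yaml
  - oci-repository.yaml
  - helm-release.yaml
configMapGenerator:
  - name: operator-values            # turns the config values into a config map
    files:
      - values.yaml=operator-values.yaml
configurations:
  - kustomizeconfig.yaml             # special config for processing the values
---
# file: kustomizeconfig.yaml — treats the HelmRelease field as a ConfigMap reference
nameReference:
  - kind: ConfigMap
    version: v1
    fieldSpecs:
      - path: spec/valuesFrom/name
        kind: HelmRelease
```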
This is the main deployment configuration file which carries the actual environment specific configuration values for the operator deployment. It is separated into several parts which are described shortly in the respective annotations.
- The image tag version of the operator OCI image.
- Keep this value as is! This is the id of the user running the application inside the container and is required to configure the security context properly.
- Under `envs` you can specify environment variables that shall be passed to the operator via its Kubernetes config map. Note that you can pass any variable, but only some are known and processed by the operator application. Please refer to the operator Helm chart repository for further information.
- Here you define mappings from environment variables to keys of your external secrets implementation. Since the Helm chart is implemented to create an external secrets configuration, your cluster will also require such a connector.
- Here you can connect the operator deployment with the service account we deployed to the cluster up-front. This provides the deployment with some crucial permissions for managing cluster resources as well as external resources in your managed cloud provider accounts.
- Use this section to configure the ingress implementation you want to use to make your installation accessible from outside the cluster.
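A hedged sketch of what such a values file could look like; all keys and values below are assumptions following common Helm chart conventions, so consult the operator Helm chart repository for the actual value names:

```yaml
# operator-values.yaml — illustrative sketch only; keys are assumptions.
image:
  tag: "1.2.3"                 # the operator OCI image tag
securityContext:
  runAsUser: 1000              # keep as is: the in-container user id
envs:                          # environment variables passed via the config map
  OPERATOR_LOG_LEVEL: "INFO"
externalSecrets:               # env var -> external secret key mappings
  DB_PASSWORD: /operator/db-password
serviceAccount:
  name: my-operator-service-account  # the service account deployed up-front
ingress:                       # expose the installation outside the cluster
  enabled: true
  host: operator.example.com
```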