
Nutanix Database Service Operator for Kubernetes

The NDB operator brings automated and simplified database administration, provisioning, and life-cycle management to Kubernetes.




Installation / Deployment

Pre-requisites

  1. Access to an NDB Server.
  2. A Kubernetes cluster to run against, which should have network connectivity to the NDB server. The operator will automatically use the current context in your kubeconfig file (i.e. whatever cluster kubectl cluster-info shows).
  3. The operator-sdk installed.
  4. A clone of the source code (this repository).
  5. Cert-manager (only when running in non-OpenShift clusters). Follow the instructions here.

With the pre-requisites completed, the NDB Operator can be deployed in one of the following ways:

Outside Kubernetes

Runs the controller outside the Kubernetes cluster as a process, but installs the CRDs, services, and RBAC entities within the Kubernetes cluster. Generally used during development (without running webhooks):

make install run

Within Kubernetes

Runs the controller pod and installs the CRDs, services, and RBAC entities within the Kubernetes cluster. Used to run the operator from the container image defined in the Makefile. Make sure that cert-manager is installed if not using OpenShift.

make deploy

Using Helm Charts

The Helm charts for the NDB Operator project are available on artifacthub.io and can be installed by following the instructions here.

On OpenShift

To deploy the operator from this repository on an OpenShift cluster, create a bundle and then install the operator via the operator-sdk.

# Export these environment variables to overwrite the variables set in the Makefile
export DOCKER_USERNAME=dockerhub-username
export VERSION=x.y.z
export IMG=docker.io/$DOCKER_USERNAME/ndb-operator:v$VERSION
export BUNDLE_IMG=docker.io/$DOCKER_USERNAME/ndb-operator-bundle:v$VERSION

# Build and push the container image to the container registry
make docker-build docker-push

# Build the bundle following the prompts for input, build and push the bundle image to the container registry
make bundle bundle-build bundle-push

# Install the operator (run on the OpenShift cluster)
operator-sdk run bundle $BUNDLE_IMG

NOTE: 
The container and bundle image creation steps can be skipped if existing images are present in the container registry.

Usage

Create secrets to be used by the NDBServer and Database resources using the manifest:

apiVersion: v1
kind: Secret
metadata:
  name: ndb-secret-name
type: Opaque
stringData:
  username: username-for-ndb-server
  password: password-for-ndb-server
  ca_certificate: |
    -----BEGIN CERTIFICATE-----
    CA CERTIFICATE (ca_certificate is optional)
    -----END CERTIFICATE-----
---
apiVersion: v1
kind: Secret
metadata:
  name: db-instance-secret-name
type: Opaque
stringData:
  password: password-for-the-database-instance
  ssh_public_key: SSH-PUBLIC-KEY

Create the secrets:

kubectl apply -f <path/to/secrets-manifest.yaml>

Create the NDBServer resource. The manifest for NDBServer is described as follows:

apiVersion: ndb.nutanix.com/v1alpha1
kind: NDBServer
metadata:
  labels:
    app.kubernetes.io/name: ndbserver
    app.kubernetes.io/instance: ndbserver
    app.kubernetes.io/part-of: ndb-operator
    app.kubernetes.io/managed-by: kustomize
    app.kubernetes.io/created-by: ndb-operator
  name: ndb
spec:
    # Name of the secret that holds the credentials for NDB: username, password and ca_certificate created earlier
    credentialSecret: ndb-secret-name
    # NDB Server's API URL
    server: https://[NDB IP]:8443/era/v0.9
    # Set to true to skip SSL certificate validation, should be false if ca_certificate is provided in the credential secret.
    skipCertificateVerification: true

Create the NDBServer resource using:

kubectl apply -f <path/to/NDBServer-manifest.yaml>

Create a Database Resource. A database can either be provisioned or cloned on NDB based on the inputs specified in the database manifest.

Provisioning manifest

apiVersion: ndb.nutanix.com/v1alpha1
kind: Database
metadata:
  # The name that will be used within the Kubernetes cluster
  name: db
spec:
  # Name of the NDBServer resource created earlier
  ndbRef: ndb
  isClone: false
  # Details of the database instance to be provisioned
  databaseInstance:
    # Cluster id of the cluster where the Database has to be provisioned
    # Can be fetched from the GET /clusters endpoint
    clusterId: "Nutanix Cluster Id"
    # The database instance name on NDB
    name: "Database-Instance-Name"
    # The description of the database instance
    description: Database Description
    # Names of the databases on that instance
    databaseNames:
      - database_one
      - database_two
      - database_three
    # Credentials secret name for NDB installation
    # data: password, ssh_public_key
    credentialSecret: db-instance-secret-name
    size: 10
    timezone: "UTC"
    type: postgres

    # You can specify any (or none) of these types of profiles: compute, software, network, dbParam
    # If not specified, the corresponding Out-of-Box (OOB) profile will be used wherever applicable
    # Name is case-sensitive. ID is the UUID of the profile. Profile should be in the "READY" state
    # "id" & "name" are optional. If none provided, OOB may be resolved to any profile of that type
    profiles:
      compute:
        id: ""
        name: ""
      # A Software profile is a mandatory input for closed-source engines: SQL Server & Oracle
      software:
        name: ""
        id: ""
      network:
        id: ""
        name: ""
      dbParam:
        name: ""
        id: ""
      # Only applicable for MSSQL databases
      dbParamInstance:
        name: ""
        id: ""
    timeMachine:                        # Optional block, if removed the SLA defaults to NONE
      sla : "NAME OF THE SLA"
      dailySnapshotTime:   "12:34:56"   # Time for daily snapshot in hh:mm:ss format
      snapshotsPerDay:     4            # Number of snapshots per day
      logCatchUpFrequency: 90           # Frequency (in minutes)
      weeklySnapshotDay:   "WEDNESDAY"  # Day of the week for weekly snapshot
      monthlySnapshotDay:  24           # Day of the month for monthly snapshot
      quarterlySnapshotMonth: "Jan"     # Start month of the quarterly snapshot
    additionalArguments:                # Optional block, can specify additional arguments that are unique to database engines.
      listener_port: "8080"
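Most of the fields above are optional. As a sketch, a minimal provisioning manifest that relies entirely on OOB profiles and the default (NONE) SLA might look like this — the instance name and cluster ID are illustrative placeholders:

```yaml
# Minimal sketch: required fields only, OOB profiles, no time machine block
apiVersion: ndb.nutanix.com/v1alpha1
kind: Database
metadata:
  name: db
spec:
  ndbRef: ndb                      # NDBServer resource created earlier
  isClone: false
  databaseInstance:
    clusterId: "<nutanix-cluster-id>"          # placeholder: fetch from GET /clusters
    name: "my-postgres-instance"               # illustrative instance name
    databaseNames:
      - database_one
    credentialSecret: db-instance-secret-name  # secret created earlier
    size: 10
    timezone: "UTC"
    type: postgres
```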

Cloning manifest

apiVersion: ndb.nutanix.com/v1alpha1
kind: Database
metadata:
  # The name that will be used within the Kubernetes cluster
  name: db
spec:
  # Name of the NDBServer resource created earlier
  ndbRef: ndb
  isClone: true
  # Details of the clone to be created
  clone:
    # Type of the database to be cloned
    type: postgres
    # The clone instance name on NDB
    name: "Clone-Instance-Name"
    # The description of the clone instance
    description: Database Description
    # Cluster id of the cluster where the Database has to be provisioned
    # Can be fetched from the GET /clusters endpoint
    clusterId: "Nutanix Cluster Id"
    # You can specify any (or none) of these types of profiles: compute, software, network, dbParam
    # If not specified, the corresponding Out-of-Box (OOB) profile will be used wherever applicable
    # Name is case-sensitive. ID is the UUID of the profile. Profile should be in the "READY" state
    # "id" & "name" are optional. If none provided, OOB may be resolved to any profile of that type
    profiles:
      compute:
        id: ""
        name: ""
      # A Software profile is a mandatory input for closed-source engines: SQL Server & Oracle
      software:
        name: ""
        id: ""
      network:
        id: ""
        name: ""
      dbParam:
        name: ""
        id: ""
      # Only applicable for MSSQL databases
      dbParamInstance:
        name: ""
        id: ""
    # Name of the secret with the
    # data: password, ssh_public_key
    credentialSecret: clone-instance-secret-name
    timezone: "UTC"
    # ID of the database to clone from, can be fetched from NDB REST API Explorer
    sourceDatabaseId: source-database-id
    # ID of the snapshot to clone from, can be fetched from NDB REST API Explorer
    snapshotId: snapshot-id
    additionalArguments:                # Optional block, can specify additional arguments that are unique to database engines.
      expireInDays: 3

Create the Database resource:

kubectl apply -f <path/to/database-manifest.yaml>

Additional Arguments for Databases

Below are the various optional additionalArguments you can specify, along with examples of their corresponding values. Defaults are indicated where applicable.

Provisioning Additional Arguments:

# Postgres
additionalArguments:
  listener_port: "1111"                            # Default: "5432"

# MySQL
additionalArguments:
  listener_port: "1111"                            # Default: "3306" 

# MongoDB
additionalArguments:
  listener_port: "1111"                            # Default: "27017"
  log_size: "150"                                  # Default: "100"
  journal_size: "150"                              # Default: "100"

# MSSQL
additionalArguments:
  sql_user_name: "mazin"                           # Default: "sa".
  authentication_mode: "mixed"                     # Default: "windows". Options are "windows" or "mixed". Must specify sql_user_name.
  server_collation: "<server-collation>"           # Default: "SQL_Latin1_General_CP1_CI_AS".
  database_collation: "<database-collation>"       # Default: "SQL_Latin1_General_CP1_CI_AS".
  dbParameterProfileIdInstance: "<id-instance>"    # Default: Fetched from profile.
  vm_dbserver_admin_password: "<admin-password>"   # Default: Fetched from database secret.
  sql_user_password: "<sql-user-password>"         # NO Default. Must specify authentication_mode as "mixed".
  windows_domain_profile_id: <domain-profile-id>   # NO Default. Must specify vm_db_server_user.
  vm_db_server_user: <vm-db-server-user>           # NO Default. Must specify windows_domain_profile_id.
  vm_win_license_key: <licenseKey>                 # NO Default.

Cloning Additional Arguments:

MSSQL:
  windows_domain_profile_id   
  era_worker_service_user      
  sql_service_startup_account  
  vm_win_license_key           
  target_mountpoints_location  
  expireInDays                 
  expiryDateTimezone           
  deleteDatabase               
  refreshInDays                
  refreshTime                  
  refreshDateTimezone          

MongoDB:
  expireInDays                 
  expiryDateTimezone           
  deleteDatabase               
  refreshInDays                
  refreshTime                  
  refreshDateTimezone    

Postgres:
  expireInDays                 
  expiryDateTimezone           
  deleteDatabase               
  refreshInDays                
  refreshTime                  
  refreshDateTimezone  

MySQL:
  expireInDays                 
  expiryDateTimezone           
  deleteDatabase               
  refreshInDays                
  refreshTime                  
  refreshDateTimezone  
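As a sketch, a Postgres clone that expires after a week and refreshes nightly might carry a block like the following — the values are illustrative; confirm the accepted formats against the NDB documentation:

```yaml
additionalArguments:
  expireInDays: 7              # remove the clone after 7 days
  expiryDateTimezone: "UTC"
  deleteDatabase: true         # delete the database on expiry rather than only deregistering it
  refreshInDays: 1             # refresh the clone every day
  refreshTime: "02:00:00"      # time of day for the refresh
  refreshDateTimezone: "UTC"
```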

Deleting the Database resource

To deregister the database and delete the VM, run:

kubectl delete -f <path/to/database-manifest.yaml>

Deleting the NDBServer resource

To delete the NDBServer resource, run:

kubectl delete -f <path/to/NDBServer-manifest.yaml>

Development

Modifying the API definitions

If you are editing the API definitions, generate the manifests such as CRs or CRDs using:

make generate manifests

Add the CRDs to the Kubernetes cluster

make install

Run your controller locally (this will run in the foreground, so switch to a new terminal if you want to leave it running):

make run

NOTES:

  1. You can also run this in one step by running: make install run
  2. Run make --help for more information on all potential make targets

More information can be found via the Kubebuilder Documentation

Building and pushing to an image registry

Build and push your image to the location specified by IMG:

make docker-build docker-push IMG=<some-registry>/ndb-operator:tag

Deploy the operator pushed to an image registry

Deploy the controller to the cluster with the image specified by IMG:

make deploy IMG=<some-registry>/ndb-operator:tag

Uninstallation / Cleanup

Uninstall the operator based on the installation/deployment environment

Running outside the cluster

# Stops the controller process
ctrl + c
# Uninstalls the CRDs
make uninstall

Running inside the cluster

# Removes the deployment, crds, services and rbac entities
make undeploy

Running using Helm charts

# NAME: name of the release created during installation
helm uninstall NAME

Running on OpenShift

operator-sdk cleanup ndb-operator --delete-all

How it works

This project aims to follow the Kubernetes Operator pattern. It uses controllers, which provide a reconcile function responsible for synchronizing resources until the desired state is reached on the cluster.

When a custom resource of kind Database is created, the reconciler provisions the database and then creates a Service and an Endpoints object that maps to the IP address of the provisioned database instance. Application pods/deployments can use this Service to interact with the databases provisioned on NDB through the native Kubernetes service.

Pods can specify an initContainer to wait for the service (and hence the database instance) to get created before they start up.

  initContainers:
  - name: init-db
    image: busybox:1.28
    command: ['sh', '-c', "until nslookup <<Database CR Name>>-svc.$(cat /var/run/secrets/kubernetes.io/serviceaccount/namespace).svc.cluster.local; do echo waiting for database service; sleep 2; done"]
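
For context, a minimal sketch of a Deployment using this initContainer, assuming the Database CR is named db (so the generated service is db-svc); the application image is an illustrative placeholder:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: app
  template:
    metadata:
      labels:
        app: app
    spec:
      initContainers:
      - name: init-db
        image: busybox:1.28
        # Block application start-up until the db-svc service resolves
        command: ['sh', '-c', "until nslookup db-svc.$(cat /var/run/secrets/kubernetes.io/serviceaccount/namespace).svc.cluster.local; do echo waiting for database service; sleep 2; done"]
      containers:
      - name: app
        image: my-app:latest   # illustrative application image
```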

Contributing

See the contributing docs


Support

This code is developed in the open with input from the community through issues and PRs. A Nutanix engineering team serves as the maintainer. Documentation is available in the project repository. Issues and enhancement requests can be submitted in the Issues tab of this repository. Please search for and review the existing open issues before submitting a new issue.


License

Copyright 2022-2023 Nutanix, Inc.

The project is released under version 2.0 of the Apache license.

ndb-operator's People

Contributors: akshmish, dependabot[bot], krunal-jhaveri, manavrajvanshi, mazin-s, shenoypritika, svc-xi-github, tuxtof


ndb-operator's Issues

The Kubernetes service automatically generated to point to the deployed DB instance does not allow specifying the port

When the NDB operator deploys a database, a Kubernetes service is automatically created with the deployed database configured as its endpoint.

However, it is not currently possible to specify the port on which the service will listen; port 80 is set by default.

It should be feasible, when setting up the resources of type 'Database', to specify the port on which the Kubernetes service would listen. The Custom Resource Definition (CRD) should provide the option to define something similar:

The actual result is

apiVersion: v1
kind: Service
metadata:
  name: demoappdb02-svc
spec:
  clusterIP: 172.19.94.227
  clusterIPs:
  - 172.19.94.227
  internalTrafficPolicy: Cluster
  ipFamilies:
  - IPv4
  ipFamilyPolicy: SingleStack
  ports:
  - port: 80
    protocol: TCP
    targetPort: 5432

We should be able to create resources like this

apiVersion: ndb.nutanix.com/v1alpha1
kind: Database
metadata:
  # This name that will be used within the kubernetes cluster
  name: demoappdb02
spec:
  # Name of the NDBServer resource created earlier
  ndbRef: gdb
  isClone: false
  servicePort : 5432
  exposedPort : 5432

Then we should get similar result:

apiVersion: v1
kind: Service
metadata:
  name: demoappdb02-svc
spec:
  clusterIP: 172.19.94.227
  clusterIPs:
  - 172.19.94.227
  internalTrafficPolicy: Cluster
  ipFamilies:
  - IPv4
  ipFamilyPolicy: SingleStack
  ports:
  - port: 5432
    protocol: TCP
    targetPort: 5432

Please Start S3 operator ASAP

Is your feature request related to a problem? Please describe.
Since this is the Go-based operator by Nutanix, I dropped this feature request here because I cannot see any place more relevant than this one. So please accept it here, or move it and notify me.

Describe the solution you'd like

apiVersion: s3.nutanix.com/v1alpha1
kind: Bucket
spec:
  events:
    kafka:
      topic: xyx

Also: an admission controller to coordinate with the resourceQuota of each namespace and compare it with the requested storage. I know that S3 is unlimited, but please consider a soft quota (software-defined).

Describe alternatives you've considered
We are going to build our own cloud operator with a multi-cloud architecture, but it would be better coming from you, as we are partners.

Additional context
N/A

Redesign CRD for NDB to meet Cloud Native Security standards

Is your feature request related to a problem? Please describe.
Yes it's a problem to expose the prism secret/creds to developers & delivery teams in general.
I mean this part:

spec:
  # NDB server specific details
  ndb:
    # Cluster id of the cluster where the Database has to be provisioned
    # Can be fetched from the GET /clusters endpoint
    clusterId: "Nutanix Cluster Id" 
    # Credentials secret name for NDB installation
    # data: username, password, 
    # stringData: ca_certificate
    credentialSecret: your-ndb-secret
    # The NDB Server
    server: https://[NDB IP]:8443/era/v0.9
    # Set to true to skip SSL verification, default: false.
    skipCertificateVerification: false

Describe the solution you'd like
Remove database.spec.ndb (see kubectl explain database.spec.ndb) from the Database CRD and shift it to the controller runtime configuration.
And if something needs to be overridden in the Database CRD, it can be anything but credentials.

Describe alternatives you've considered
For the time being, we can contribute here as you are already a partner of our organization.
If we find it hard to contribute with you, or it takes a long time to release whatever we contribute, then we will wrap your CRD with a Helm-based operator by defaulting this credential value (i.e. watches.yaml).

Additional context
MENA regions , go-based operator

Era Cluster ID README Clarification

Hello, the manifest for creating a Postgres DB needs to specify the Era cluster ID rather than just the cluster ID.

For example it could be:

spec:
  # NDB server specific details
  ndb:
    # Cluster id of the cluster where the Database has to be provisioned
    # Can be fetched from the GET /clusters endpoint ``add: from Era CLI/API``
    clusterId: "EraCluster Id" 
Instead of << clusterId: "Nutanix Cluster Id" >>

Here is the API response from GET /clusters from the Era API:

{
    "id": "af193c0a-b8f5-47e1-a416-d50e0715b5fd",
    "name": "EraCluster",
    "uniqueName": "ERACLUSTER",
    "ipAddresses": [
      "10.38.x.x"
    ],

Database Size must not exceed Namespace remaining Storage Quota

Is your feature request related to a problem? Please describe.
Yes! There is no limit on DB storage size that can be managed/governed from the Kubernetes side.

Describe the solution you'd like
The operator should check the remaining quota & compare it with the requested storage (spec.databaseInstance.size).

Describe alternatives you've considered
For now, we plan to use OPA with Gatekeeper to prevent exceeding the quota (Red Hat ACS, etc.).

Additional context
N/A
