New to KubeVault? Please start here.

Mount MySQL/MariaDB credentials into Kubernetes pod using CSI Driver

Before you Begin

To begin, you need a Kubernetes cluster, and the kubectl command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one using Minikube.

To keep things isolated, this tutorial uses a separate namespace called demo throughout.

$ kubectl create ns demo
namespace/demo created

Note: The YAML files used in this tutorial are stored in the docs/examples/csi-driver/database/mysql folder of the KubeVault/docs GitHub repository.

Configure Vault

We need to configure the following things in this step to retrieve MySQL/MariaDB credentials from the Vault server into a Kubernetes pod.

  • Vault server: used to provision and manage database credentials
  • AppBinding: required to connect the CSI driver with the Vault server
  • Role: using this role, the CSI driver can access credentials from the Vault server

There are two ways to configure a Vault server. You can either use the Vault operator or use the Vault CLI to configure a Vault server manually.

Using Vault Operator

Follow this tutorial to manage MySQL/MariaDB credentials with the Vault operator. After successful configuration, you should have the following resources present in your cluster.

  • AppBinding: an AppBinding named vault-app in the demo namespace
  • Role: a role named k8s.-.demo.demo-role which has read access to the database credentials
Using Vault CLI

You can use the Vault CLI to manually configure an existing Vault server. The Vault server may be running inside or outside of a Kubernetes cluster. If you don’t have a Vault server, you can deploy one by running the following command:

$ kubectl apply -f
service/vault created
statefulset.apps/vault created

To use secrets from the database engine, you have to do the following things.

  1. Enable Database Engine: To enable the database secrets engine, run the following command.

    $ vault secrets enable database
    Success! Enabled the database secrets engine at: database/
  2. Create Engine Policy: To read database credentials from the engine, we need to create a policy with read capability. Create a policy.hcl file with the following content:

    # capability of get secret
    path "database/creds/*" {
        capabilities = ["read"]
    }

    Write this policy into Vault under the name test-policy with the following command:

    $ vault policy write test-policy policy.hcl
    Success! Uploaded policy: test-policy
  3. Write Secret on Vault: Configure Vault with the proper plugin and connection information by running:

    $ vault write database/config/my-mysql-database \
        plugin_name=mysql-rds-database-plugin \
        allowed_roles="k8s.-.demo.demo-role" \
        connection_url="{{username}}:{{password}}@tcp(" \
        username="root"
  4. Write a DATABASE role: We need to configure a role that maps a name in Vault to an SQL statement to execute to create the database credential:

    $ vault write database/roles/k8s.-.demo.demo-role \
        db_name=my-mysql-database \
        creation_statements="CREATE USER '{{name}}'@'%' IDENTIFIED BY '{{password}}';GRANT SELECT ON *.* TO '{{name}}'@'%';" \
        default_ttl="1h"
    Success! Data written to: database/roles/k8s.-.demo.demo-role

Here, k8s.-.demo.demo-role will be treated as the secret name on the storage class.
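At connection time, Vault substitutes the {{username}} and {{password}} placeholders in connection_url with the configured credentials. A minimal local sketch of that substitution, assuming a made-up host (mysql.demo.svc:3306) and an illustrative helper name (render_url) that are not part of this tutorial:

```shell
# Sketch of Vault's connection_url templating: {{username}} and {{password}}
# are replaced with the configured values. The host and helper are made up.
render_url() {
  # $1 = template, $2 = username, $3 = password
  printf '%s' "$1" | sed -e "s/{{username}}/$2/" -e "s/{{password}}/$3/"
}

render_url '{{username}}:{{password}}@tcp(mysql.demo.svc:3306)/' root secret
# root:secret@tcp(mysql.demo.svc:3306)/
```

This is only an illustration of the template's shape; Vault performs the substitution internally when it opens a connection to the database.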

Configure Cluster

  1. Create Service Account: Create a service.yaml file with the following content:

        apiVersion: rbac.authorization.k8s.io/v1
        kind: ClusterRoleBinding
        metadata:
          name: role-dbcreds-binding
        roleRef:
          apiGroup: rbac.authorization.k8s.io
          kind: ClusterRole
          name: system:auth-delegator
        subjects:
        - kind: ServiceAccount
          name: db-vault
          namespace: demo
        ---
        apiVersion: v1
        kind: ServiceAccount
        metadata:
          name: db-vault
          namespace: demo

    After that, run kubectl apply -f service.yaml to create the service account and its role binding.

  2. Enable Kubernetes Auth: To enable the Kubernetes auth back-end, we need to extract the token reviewer JWT, the Kubernetes CA certificate, and the Kubernetes host information.

    export VAULT_SA_NAME=$(kubectl get sa db-vault -n demo -o jsonpath="{.secrets[*]['name']}")
    export SA_JWT_TOKEN=$(kubectl get secret $VAULT_SA_NAME -n demo -o jsonpath="{.data.token}" | base64 --decode; echo)
    export SA_CA_CRT=$(kubectl get secret $VAULT_SA_NAME -n demo -o jsonpath="{.data['ca\.crt']}" | base64 --decode; echo)
    export K8S_HOST=<host-ip>
    export K8s_PORT=6443

    Now, we can enable the Kubernetes authentication back-end and create a Vault role that is attached to this service account. Run:

    $ vault auth enable kubernetes
    Success! Enabled Kubernetes auth method at: kubernetes/
    $ vault write auth/kubernetes/config \
        token_reviewer_jwt="$SA_JWT_TOKEN" \
        kubernetes_host="https://$K8S_HOST:$K8s_PORT" \
        kubernetes_ca_cert="$SA_CA_CRT"
    Success! Data written to: auth/kubernetes/config
    $ vault write auth/kubernetes/role/db-cred-role \
        bound_service_account_names=db-vault \
        bound_service_account_namespaces=demo \
        policies=test-policy
    Success! Data written to: auth/kubernetes/role/db-cred-role

    Here, db-cred-role is the name of the role.

  3. Create AppBinding: To connect the CSI driver with Vault, we need to create an AppBinding. First, make sure that the AppBinding CRD is installed in your cluster by running:

    $ kubectl get crd -l app=catalog
    NAME                                          CREATED AT
    appbindings.appcatalog.appscode.com           2018-12-12T06:09:34Z

    If you don’t see that CRD, you can register it via the following command:

    kubectl apply -f

    If the AppBinding CRD is installed, create an AppBinding with the following data:

      apiVersion: appcatalog.appscode.com/v1alpha1
      kind: AppBinding
      metadata:
        name: vault-app
        namespace: demo
      spec:
        clientConfig:
          url: # Replace this with Vault URL
        parameters:
          apiVersion: ""
          kind: "VaultServerConfiguration"
          usePodServiceAccountForCSIDriver: true
          authPath: "kubernetes"
          policyControllerRole: db-cred-role # we created this in the previous step
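The jsonpath extraction in the Enable Kubernetes Auth step above pipes Secret data through base64 --decode because Kubernetes stores Secret values base64-encoded. A minimal local sketch of that decode step, using a made-up token value rather than a real service account token:

```shell
# Kubernetes Secret data is base64-encoded; the extraction commands above
# decode it before use. Demonstrated here with a made-up token value.
encoded=$(printf '%s' 'sample-sa-token' | base64)
decoded=$(printf '%s' "$encoded" | base64 --decode)
echo "$decoded"
# sample-sa-token
```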

Mount secrets into a Kubernetes pod

After configuring the Vault server, we now have the vault-app AppBinding in the demo namespace and the k8s.-.demo.demo-role role, which has access to the database path.

So, we can create StorageClass now.

Create StorageClass: Create a storage-class.yaml file with the following content, then run kubectl apply -f storage-class.yaml

 apiVersion: storage.k8s.io/v1
 kind: StorageClass
 metadata:
   name: vault-mysql-storage
   namespace: demo
   annotations:
     storageclass.kubernetes.io/is-default-class: "false"
 provisioner: secrets.csi.kubevault.com
 parameters:
   ref: demo/vault-app # namespace/AppBinding, we created this in the previous step
   engine: DATABASE # vault engine name
   role: k8s.-.demo.demo-role # role name on vault which you want get access
   path: database # specify the secret engine path, default is database
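The ref parameter above uses a namespace/name format to point at the AppBinding. A sketch of how such a reference splits into its two parts using POSIX parameter expansion (the split_ref helper is an illustrative name, not part of the driver):

```shell
# Split a "namespace/AppBinding" reference such as "demo/vault-app"
# into its two components with POSIX parameter expansion.
split_ref() {
  ref=$1
  echo "namespace=${ref%%/*} name=${ref#*/}"
}

split_ref demo/vault-app
# namespace=demo name=vault-app
```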

Test & Verify

  • Create PVC: Create a PersistentVolumeClaim with the following data. This makes sure that a volume will be created and provisioned on your behalf.

        apiVersion: v1
        kind: PersistentVolumeClaim
        metadata:
          name: csi-pvc
          namespace: demo
        spec:
          accessModes:
          - ReadWriteOnce
          resources:
            requests:
              storage: 1Gi
          storageClassName: vault-mysql-storage
          volumeMode: Filesystem
  • Create Pod: Now we can create a Pod which refers to this volume. When the Pod is created, the volume will be attached, formatted, and mounted into the specified container.

        apiVersion: v1
        kind: Pod
        metadata:
          name: mymysqlpod
          namespace: demo
        spec:
          containers:
          - name: mymysqlpod
            image: busybox
            command:
            - sleep
            - "3600"
            volumeMounts:
            - name: my-vault-volume
              mountPath: "/etc/foo"
              readOnly: true
          serviceAccountName: db-vault
          volumes:
          - name: my-vault-volume
            persistentVolumeClaim:
              claimName: csi-pvc

    Check if the Pod is running successfully by running:

      $ kubectl describe pods/mymysqlpod -n demo
  • Verify Secret: If the Pod is running successfully, then check inside the app container by running:

    $ kubectl exec -it mymysqlpod -n demo -- sh
    # ls /etc/foo
    password  username
    # cat /etc/foo/username

So, we can see that the database credentials (username, password) are mounted at the specified path.
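An application in the pod can consume these files directly instead of reading credentials from environment variables. A minimal sketch, assuming an illustrative read_cred helper and sample credential values; in the pod above the directory would be /etc/foo, but the sketch builds a temporary directory so it runs anywhere:

```shell
# Read one credential file from the mounted secrets directory.
read_cred() {
  # $1 = directory, $2 = file name (username or password)
  cat "$1/$2"
}

# Build a sample directory standing in for /etc/foo (values are made up).
dir=$(mktemp -d)
printf 'v-kubernet-demo' > "$dir/username"
printf 's3cr3t' > "$dir/password"

echo "user=$(read_cred "$dir" username)"
# user=v-kubernet-demo
```

Reading the files on each use (rather than caching at startup) is worth considering, since Vault-issued credentials have a TTL and may be rotated.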

Cleaning up

To clean up the Kubernetes resources created by this tutorial, run:

$ kubectl delete ns demo
namespace "demo" deleted