Kubernetes Volumes

Volumes

Volumes are needed to store data within a container or to share data among containers.
All volumes requested by a Pod must be mounted before the containers within the Pod are started. This also applies to Secrets and ConfigMaps.

Shared Volume

Below you can find a sample of how to create a shared volume.
Be aware that one container can overwrite data written by the other container.
You can use locking or versioning to avoid this.

apiVersion: v1
kind: Pod
metadata:
  name: example
spec:
  containers:
  - name: firstcontainer
    image: busybox
    command: ["sleep", "3600"]  # keep the container running so we can exec into it
    volumeMounts:
    - mountPath: /firstdir
      name: sharevol
  - name: secondcontainer
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - mountPath: /seconddir
      name: sharevol
  volumes:
  - name: sharevol
    emptyDir: {}

$ kubectl exec -ti example -c secondcontainer -- touch /seconddir/bla

$ kubectl exec -ti example -c firstcontainer -- ls -l /firstdir

Persistent Volume – PV

This is a storage abstraction used to keep data even if the Pod is killed. In the Pod you define a volume of that type.
kubectl get pv

Sample of a PV with hostPath Type

kind: PersistentVolume
apiVersion: v1
metadata:
  name: 10gpv01
  labels:
    type: local
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/somepath/data01"

Persistent Volume Claim – PVC

With a PVC, volumes can be accessed by multiple Pods and allow state persistence.
The cluster attaches the matching Persistent Volume.

There is no concurrency checking, so data corruption is probable unless locking takes place outside.

There are 3 access modes for the PVC:

  1. RWO – ReadWriteOnce by a single node
  2. ROX – ReadOnlyMany by multiple nodes
  3. RWX – ReadWriteMany by many nodes

kubectl get pvc
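
As a sketch, a claim that would bind to the hostPath PV above might look like this (the claim name and requested size are illustrative):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi

A Pod then references the claim by name in its volumes section:

  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: myclaim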

Phases of persistent storage

  1. Provisioning: Can be done in advance, e.g. with resources from a cloud provider.
  2. Binding: Once a watch loop on the master notices a PVC, it requests access to a matching Persistent Volume.
  3. Using: The volume is mounted to the Pod and can now be used.
  4. Releasing: When the Pod is done and the PVC is deleted, the volume is released. Whether the resident data remains depends on the persistentVolumeReclaimPolicy, as sketched below.
  5. Reclaiming:
    You have three options: Retain, Delete, Recycle
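
As a minimal sketch, the reclaim policy is set on the PV spec; Retain keeps the released data for manual cleanup, Delete removes the backing storage, and Recycle (deprecated in recent Kubernetes releases) scrubs the volume for reuse:

spec:
  persistentVolumeReclaimPolicy: Retain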

Empty Dir

The kubelet creates an emptyDir volume. It creates the directory in the container but does not mount any external storage. Data written to that storage is not persistent: it is deleted when the Pod is deleted.

apiVersion: v1
kind: Pod
metadata:
    name: sample
    namespace: default
spec:
    containers:
    - image: sample
      name: sample
      command:
        - sleep
        - "3600"
      volumeMounts:
      - mountPath: /sample-mount
        name: sample-volume
    volumes:
    - name: sample-volume
      emptyDir: {}

Other Volume types

gcePersistentDisk and awsElasticBlockStore

You can mount your GCE or your EBS into your Pods.

hostPath

This mounts a resource from the host node filesystem. By default the resource must already exist in order to be used, although some type values create it on demand, as sketched below:

  • DirectoryOrCreate
  • FileOrCreate
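
A hedged sketch of a Pod volume using hostPath with DirectoryOrCreate (the path and volume name are illustrative):

  volumes:
  - name: host-data
    hostPath:
      path: /var/local/data
      type: DirectoryOrCreate  # creates the directory on the node if it does not exist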

and many more

NFS – Network File System
iSCSI – Internet Small Computer System Interface
RBD (RADOS Block Device) – RBD is a block storage device that runs on top of the Ceph distributed storage system. It allows you to create block devices that can be mounted and used like a regular disk. RBD is often used in virtualization environments, providing storage for virtual machines.
CephFS – CephFS is a distributed file system built on top of the Ceph storage system.
GlusterFS – open-source, distributed file system that can scale out to petabytes of storage. It works by aggregating various storage resources across nodes into a single, global namespace.

Dynamic Provisioning

With the kind StorageClass, a user can request a claim, which the API Server fills via auto-provisioning. Common choices for dynamic storage are AWS and GCE.

Sample for gce:

apiVersion: storage.k8s.io/v1        
kind: StorageClass
metadata:
  name: you-name-it                        
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-ssd 
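
To consume the class, a claim references it by name via storageClassName; a minimal sketch (the claim name and size are illustrative):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: fast-claim
spec:
  storageClassName: you-name-it
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi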

ConfigMaps

This kind of storage is used for configuration data that is not sensitive and does not need to be encoded, but should not be stored within the application itself.
Using ConfigMaps we can decouple the container image from the configuration artifacts.
If ConfigMaps are marked as “optional”, they don’t need to be mounted before a Pod wants to use them.

They can be consumed in various ways:

  • Pod environmental variables from single or multiple ConfigMaps
  • Use ConfigMap values in Pod commands
  • Populate Volume from ConfigMap
  • Add ConfigMap data to a specific path in Volume
  • Set file names and access mode in Volume from ConfigMap data
  • Can be used by system components and controllers.

Create a Configmap from literal:
kubectl create cm yourcm --from-literal yoursecret=topsecret

Create a Configmap from a file:
kubectl create -f your-cm.yaml

Sample ConfigMap:

apiVersion: v1
data:
  yoursecret: topsecret
  level: "3"
kind: ConfigMap
metadata:
  name: yourcm

Read the ConfigMap:
kubectl get configmap yourcm -o yaml
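
As a sketch, consuming the ConfigMap above as environment variables could look like this (the container name and image are illustrative):

spec:
  containers:
  - name: app
    image: busybox
    command: ["sleep", "3600"]
    envFrom:
    - configMapRef:
        name: yourcm  # every key in the ConfigMap becomes an environment variable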

Secrets

This kind of storage is used to store sensitive data that needs to be encoded.

A Secret in Kubernetes is base64-encoded by default.
If you want to encrypt secrets, you have to create an EncryptionConfiguration; a sketch follows below.
There is no limit to the number of secrets, but there is a 1MB limit to their size.
Secrets are stored in tmpfs on the host node and are only sent to the nodes running Pods that need them.
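
A minimal sketch of such an EncryptionConfiguration for encrypting Secrets at rest (the key name and the base64 key are placeholders; the file is passed to the API server via its --encryption-provider-config flag):

apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: <BASE64-ENCODED-32-BYTE-KEY>  # placeholder
      - identity: {}  # fallback for reading unencrypted data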

Secret as an environmental variable

kubectl get secrets
kubectl create secret generic --help
kubectl create secret generic mysecret --from-literal=password=supersecret

spec:
  containers:
  - image: yourimage
    name: yourcontainername
    env:
    - name: ROOT_PASSWORD
      valueFrom:
        secretKeyRef:
          name: mysecret
          key: password

Mounting secrets as volumes

apiVersion: v1
kind: Pod
metadata:
  name: busybox
spec:
  containers:
  - image: busybox
    name: busy
    command:
      - sleep
      - "3600"
    volumeMounts:
    - mountPath: /mysqlpassword
      name: mysql
  volumes:
  - name: mysql
    secret:
      secretName: mysecret  # the secret created above

Verify that the secret is available in the container:
kubectl exec -ti busybox -- cat /mysqlpassword/password

Further reading:
https://trainingportal.linuxfoundation.org/learn/course/kubernetes-for-developers-lfd259/
Volumes on Kubernetes: https://kubernetes.io/docs/concepts/storage/volumes/
Ceph: https://ubuntu.com/ceph/what-is-ceph

True Secrets Auto Rotation with ESO and Vault


Requirements

  • A Kubernetes cluster that you can use (kind, minikube, something managed) and kubectl to connect to it
  • Vault CLI
  • External Secrets Operator (ESO) installed.
  • Vault installed through the helm chart

What we want to achieve


This guide aims to establish an automatic hourly rotation of a database connection secret. Following these steps, an administrator sets up the process once, ensuring that the secret refreshes every hour while the application always holds valid credentials for seamless database interactions.


ESO secret Generators

⚠ As of this writing this feature is in alpha state, and we want more people to help test it, so we can make improvements and eventually promote it to stable.

Documentation around it is a bit limited; that’s why I am getting this guide out on my blog while we figure out better ways to bring these into our documentation.


Getting Started

Just so we start with the same setup I have locally:

  • I have installed Vault in a namespace named vault.
    • You can use the helm install vault hashicorp/vault -n vault --create-namespace command instead of the one provided in the guide.
    • Follow all steps in there to init Vault, unseal, and get the cluster-keys.json with the token.
    • You can skip other steps.
  • I have installed ESO in the default namespace.

In this guide we are going to use Vault token authentication just for the sake of simplicity. However, please never use this in real setups. Prefer service account auth.

After properly starting Vault and unsealing it, take note of your auth token. Let’s do a port forward and authenticate in our work desktop so we don’t have to exec into Vault every time we need to run commands.

In a new terminal (this terminal will be blocked)

kubectl -n vault port-forward service/vault 8200:8200

In another terminal you can run.

export VAULT_ADDR=http://127.0.0.1:8200
vault login
## type your auth token


Simple Deployment of PostgreSQL

To have an interesting example, let’s deploy psql and configure it so we can let Vault and other workloads connect to it.

Let’s first create a configmap with an admin user and password for this psql instance (just for simplicity and to get to the other part of the guide quickly).

cat <<EOF > postgres-config.yaml

apiVersion: v1
kind: ConfigMap
metadata:
  name: postgres-config
  labels:
    app: postgres
data:
  POSTGRES_DB: postgresdb
  POSTGRES_USER: admin
  POSTGRES_PASSWORD: psltest

EOF

Apply it.

kubectl apply -f postgres-config.yaml

Now create the postgres-deployment.yaml.

cat <<EOF > postgres-deployment.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres  # Sets Deployment name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
        - name: postgres
          image: postgres:10.1 # Sets Image
          imagePullPolicy: "IfNotPresent"
          ports:
            - containerPort: 5432  # Exposes container port
          envFrom:
            - configMapRef:
                name: postgres-config

EOF

Apply it.

kubectl apply -f postgres-deployment.yaml

And finally, let’s create a service, so other workloads can access it:

cat <<EOF > postgres-service.yaml

apiVersion: v1
kind: Service
metadata:
  name: postgres # Sets service name
  labels:
    app: postgres # Labels and Selectors
spec:
  type: NodePort # Sets service type
  ports:
    - port: 5432 # Sets port to run the postgres application
  selector:
    app: postgres

EOF

Apply it.

kubectl apply -f postgres-service.yaml

Preparing DB with new readonly role

Exec into the psql pod.

kubectl get pods # get pod name
kubectl exec -it <postgres-pod-name> -- bash

Change into postgres user, and run commands to create the new role.

su postgres
psql -c 'CREATE ROLE "ro" NOINHERIT;'
psql -c 'GRANT SELECT ON ALL TABLES IN SCHEMA public TO "ro";'

We are going to use this role when configuring Vault to use Dynamic Secrets with psql plugin.


Vault Dynamic Secrets

Vault Dynamic Secrets are in fact meant to be used as a way to get short-lived credentials. However, there is nothing stopping us from using them in our auto-rotation process. There are various other plugins that integrate with other systems, like AWS credentials or certificate issuing systems. Most of these are also interesting in the context of ESO, but I wanted a self-contained example with no need to create external accounts for you to try it out.

Let’s first enable the database engine.

vault secrets enable database

After that, let’s configure the PostgreSQL secrets engine with the admin creds we had before (we are passing credentials into the connection URL here; never do that outside of test labs).

## POSTGRES_URL with name of the service and namespace
export POSTGRES_URL=postgres.default.svc.cluster.local:5432

vault write database/config/postgresql \
     plugin_name=postgresql-database-plugin \
     connection_url="postgresql://admin:psltest@$POSTGRES_URL/postgres?sslmode=disable" \
     allowed_roles=readonly \
     username="root" \
     password="rootpassword"

Create an SQL file containing the templated commands that Vault will use when dynamically creating roles.

tee readonly.sql <<EOF
CREATE ROLE "{{name}}" WITH LOGIN PASSWORD '{{password}}' VALID UNTIL '{{expiration}}' INHERIT;
GRANT ro TO "{{name}}";
EOF

Write that into Vault and configure default expiration of new requested roles and other fields (this will fail if you did not create the ROLE ‘ro’ correctly while setting up psql).

vault write database/roles/readonly \
      db_name=postgresql \
      creation_statements=@readonly.sql \
      default_ttl=1h \
      max_ttl=24h

You can already check within Vault if you can get the temporary credentials before setting up other steps.

vault read database/creds/readonly
## response
Key                Value
---                -----
lease_id           database/creds/readonly/CPqcUrG55f8qfrA9QKMV3peO
lease_duration     1h
lease_renewable    true
password           5p-xDWSC5Iu9z-hlZPrs
username           v-root-readonly-SQjhNhGxxmKx9QaRKsxM-1690473242
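
If you ever need to invalidate one of these generated credentials before the TTL expires, Vault can revoke the lease by its lease_id (using the value from the output above):

vault lease revoke database/creds/readonly/CPqcUrG55f8qfrA9QKMV3peO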


ESO Generator and ExternalSecret

Before the next steps we are going to base64 encode the token so we can apply it with a secret. Grab your Vault token and echo it into base64 (use -n so the trailing newline is not encoded into the value).

echo -n "somethinsomething" | base64

Now we can use the new External Secrets Operator CRD, the Generator. Use the value outputted above for the auth token secret (vault-token).

cat <<EOF > vaultDynamicSecret.yaml
apiVersion: generators.external-secrets.io/v1alpha1
kind: VaultDynamicSecret
metadata:
  name: "psql-example"
spec:
  path: "/database/creds/readonly" ## this is how you choose which vault dynamic path to use
  method: "GET" ## this path will only work with GETs
  # parameters: ## no parameters needed here
  # ...
  provider:
    server: "http://vault.vault.svc.cluster.local:8200" ## vault url. In this case vault service on the vault namespace
    auth:
      # points to a secret that contains a vault token
      # https://www.vaultproject.io/docs/auth/token
      tokenSecretRef: ## reference to the secret holding the Vault auth token
        name: "vault-token"
        key: "token"
---
apiVersion: v1
kind: Secret
metadata:
  name: vault-token
data:
  token: aHZzLkM4M0o2UWNQSW1YQkRJVU96aWNNNzVHdwo= ## token base64 encoded
EOF

Apply this file.

kubectl apply -f vaultDynamicSecret.yaml

And finally we can create the ExternalSecret that will let the operator create the final Kubernetes Secret (written to a separate file so we don’t overwrite the Generator manifest).

cat <<EOF > externalSecret.yaml
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: "psql-example-es"
spec:
  refreshInterval: "1h" ## the same as the expiry time on the dynamic config of Vault, or lower, so apps always have valid credentials
  target:
    name: psql-example-for-use ## the final name of the kubernetes secret created in your cluster
  dataFrom:
  - sourceRef:
      generatorRef:
        apiVersion: generators.external-secrets.io/v1alpha1
        kind: VaultDynamicSecret
        name: "psql-example" ## reference to the generator
EOF

Apply this file and check that the status of the ExternalSecret is ok.

kubectl apply -f externalSecret.yaml
kubectl get externalsecret
## response
NAME              STORE   REFRESH INTERVAL   STATUS         READY
psql-example-es           1h                 SecretSynced   True

If you get errors here, verify that you used the right path in the Generator. Also check that you created the right roles inside psql and you can ping vault from a pod in the ESO namespace.
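
Describing the resource usually surfaces the failing step in its events; for example:

kubectl describe externalsecret psql-example-es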

Checking the final secret

You should get a secret containing a new username and password with read-only access to the database every hour.

kubectl get secrets psql-example-for-use -o jsonpath="{.data}"
## response
{"password":"V2lSWUlqZzdvQS1yOTFaV2N1SWE=","username":"di1yb290LXJlYWRvbmx5LVlXQ3kzZ01hbkhSbGtuY3FqTUg2LTE2OTA0NzIwMzc="}

To check one of the values you can get it and base64 decode it.

kubectl get secrets psql-example-for-use -o jsonpath="{.data.password}" | base64 -d

Now your application can use this secret; it will be rotated automatically and will always hold valid database credentials.

Caveats

If you use secrets as environment variables, you will need something that makes workloads pick up the new credentials when they lose the connection. You can use the Reloader project for that.

If you use secrets as volumes, Pods will get the update automatically and you won’t have problems connecting, as long as your application re-reads the new values. A sketch of such a consumer follows below.
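
A minimal sketch of a Deployment consuming the rotated secret as a volume (the app name and image are illustrative; the Reloader annotation is only needed if you instead consume the secret as environment variables, and requires Reloader to be installed):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app  # illustrative
  annotations:
    reloader.stakater.com/auto: "true"  # only for the env-var approach
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: app
        image: my-app:latest  # illustrative
        volumeMounts:
        - mountPath: /etc/db-creds  # username and password files appear here
          name: db-creds
          readOnly: true
      volumes:
      - name: db-creds
        secret:
          secretName: psql-example-for-use  # the secret created by the ExternalSecret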

Conclusion


That’s it! We’ve set up an auto-rotating secret for a database connection using ESO and Vault. The magic is, as we said, that you can set it up once and forget it. Your secret refreshes every hour and your app stays connected to the database with new valid credentials. It is secure, you follow best practices with regard to rotation, and you avoid manual intervention where it is not needed.
