K8S-compute-aaS underpinned by a Portworx storage cluster

jboothomas
10 min read · Mar 29, 2021

When building out a Kubernetes as a service offering, storage has an integral part to play. You may opt for a compute+storage architecture where for each new K8SaaS cluster you provision storage for its users, or perhaps a disaggregated approach with one storage layer providing storage for all the K8SaaS clusters.

The main reasons for such an architecture are:

  • scale the compute and storage dimensions independently
  • use dedicated storage nodes, reducing the overall hardware footprint: fewer storage devices, or fewer connectivity requirements to, say, a FlashArray used for backend storage access
  • denser compute nodes, packing more end users into a smaller footprint
  • the ability to leverage underlying Portworx storage services (snapshots, volume copy) to provide added services (for example, access to volume clones from other clusters).

In this blog I will cover the setup of such an environment: one storage cluster utilised across multiple Kubernetes compute clusters. I will also look into securing the environment so that the provided K8SaaS environments cannot impact each other.

In the architecture diagram from the Portworx documentation, the storage cluster is present on each of the compute clusters.

Using an Ansible deployment I spin up VMs from an Ubuntu 18.04 template that has all the required settings to install Kubernetes v1.19.9.

The VMs are automatically configured with hostnames and IPs. My Kubernetes Portworx storage cluster is created on the following systems: z-da-14[0–3].
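
For reference, a minimal sketch of the Ansible inventory behind this group of VMs could look as follows; the group name is hypothetical and the control-plane address is assumed, only the worker addresses match what pxctl reports later:

[k14s-storage]
# z-da-140 is the control plane; its address here is an assumption
z-da-140 ansible_host=192.168.4.140
z-da-141 ansible_host=192.168.4.141
z-da-142 ansible_host=192.168.4.142
z-da-143 ansible_host=192.168.4.143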

Kubernetes storage cluster

I add to my DNS server a CNAME entry for ‘z-da-k14s.mylab.purestorage.com’ pointing to my z-da-140 node, as this is where I will install the control plane.
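
For illustration, in a BIND-style zone file that entry would look something like this (the zone layout is assumed; only the names come from my lab):

; in the mylab.purestorage.com zone
z-da-k14s   IN   CNAME   z-da-140.mylab.purestorage.com.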

To install k8s I run the following command:

sudo kubeadm init --pod-network-cidr=10.244.0.0/16 --control-plane-endpoint z-da-k14s.mylab.purestorage.com

I copy the kubeconfig file to my user account:

{
mkdir -p $HOME/.kube;
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config;
sudo chown $(id -u):$(id -g) $HOME/.kube/config;
}

I then proceed to implement the Calico CNI driver:

{
curl https://docs.projectcalico.org/manifests/calico.yaml -O ;
kubectl apply -f calico.yaml;
watch kubectl get pods -l=k8s-app=calico-node -A ;
}

I wait for the CNI pods to be in a running state before proceeding.
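
Rather than watching the pods, the same check can be scripted with kubectl wait, assuming the default calico-node labels used by the manifest:

kubectl -n kube-system wait --for=condition=Ready pod -l k8s-app=calico-node --timeout=300s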

I then add my 3 worker nodes to the cluster. Each worker node has 4 local storage devices that Portworx will use to provide persistent storage to the K8s-as-a-service clusters. These worker nodes will also host the etcd database required by Portworx.

sudo kubeadm join z-da-k14s.mylab.purestorage.com:6443 --token dyalw3.x5ovb50augvnebkt --discovery-token-ca-cert-hash sha256:96762748b31e0f183b60f1277d7841bee7d6f60f701f1e4f4b7fce4622a211d6
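
Before joining, a quick look at the block devices on each worker confirms the four raw devices Portworx will consume later (these are the /dev/sdb to /dev/sde devices that show up in pxctl status further down):

# list raw block devices on a worker node
lsblk -d -o NAME,SIZE,TYPE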

In case you do not have Helm, here is the installation procedure, as I will be using Helm later on:

{
wget https://get.helm.sh/helm-v3.5.3-linux-amd64.tar.gz;
tar -xzvf helm-v3.5.3-linux-amd64.tar.gz;
sudo mv linux-amd64/helm /usr/local/bin/helm;
}

The latest Helm versions can be found on the Helm releases page.

MetalLB

I will be using MetalLB to provide an IP address to an etcd instance. Portworx requires an externally reachable etcd for its kvdb when implemented in a disaggregated architecture.

To install MetalLB:

{
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.9.5/manifests/namespace.yaml ;
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.9.5/manifests/metallb.yaml ;
# On first install only
kubectl create secret generic -n metallb-system memberlist --from-literal=secretkey="$(openssl rand -base64 128)" ;
}

I then create and apply the following configmap where a set of available IPs is declared for use by MetalLB.

apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.1.40-192.168.1.42

I then apply this to the Kubernetes storage cluster:

$ kubectl -n metallb-system apply -f metallb-cm.yaml

Etcd cluster

As indicated, I need an etcd instance in order to install Portworx. I will use the Bitnami Helm chart and local storage from the 3 worker nodes (z-da-141, z-da-142, z-da-143).

On each of these workers I create a folder and assign permissions:

{
sudo mkdir /pxetcd;
sudo chmod 771 /pxetcd;
}
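
To avoid logging into each worker by hand, the same two commands can be pushed over SSH; a small sketch assuming SSH access and passwordless sudo on the nodes:

for node in z-da-141 z-da-142 z-da-143; do
  ssh $node "sudo mkdir -p /pxetcd && sudo chmod 771 /pxetcd"
done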

I then create the persistent volumes using this definition:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: etcd-vol-0
spec:
  capacity:
    storage: 2Gi
  volumeMode: Filesystem
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /pxetcd
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - z-da-141
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: etcd-vol-1
spec:
  capacity:
    storage: 2Gi
  volumeMode: Filesystem
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /pxetcd
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - z-da-142
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: etcd-vol-2
spec:
  capacity:
    storage: 2Gi
  volumeMode: Filesystem
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /pxetcd
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - z-da-143

Applying this will create the three persistent volumes:

$ kubectl apply -f pxetcd-pv.yaml
persistentvolume/etcd-vol-0 created
persistentvolume/etcd-vol-1 created
persistentvolume/etcd-vol-2 created
$ kubectl get pv
NAME         CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS    REASON   AGE
etcd-vol-0   2Gi        RWO            Retain           Available           local-storage            5s
etcd-vol-1   2Gi        RWO            Retain           Available           local-storage            5s
etcd-vol-2   2Gi        RWO            Retain           Available           local-storage            5s

I can now create my three persistent volume claims using this definition file:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-px-etcd-0
spec:
  storageClassName: local-storage
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
  volumeName: "etcd-vol-0"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-px-etcd-1
spec:
  storageClassName: local-storage
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
  volumeName: "etcd-vol-1"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-px-etcd-2
spec:
  storageClassName: local-storage
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
  volumeName: "etcd-vol-2"

I create a separate namespace for my Portworx Etcd instance:

$ kubectl create namespace pxetcd

and then apply the pvc definition file therein:

$ kubectl -n pxetcd apply -f pxetcd-pvc.yaml
persistentvolumeclaim/data-px-etcd-0 created
persistentvolumeclaim/data-px-etcd-1 created
persistentvolumeclaim/data-px-etcd-2 created
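
Because each claim pre-binds to a specific volume via volumeName, they should show as Bound against the matching etcd-vol-* volumes:

kubectl -n pxetcd get pvc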

I now add the Bitnami Helm repo and deploy the etcd cluster to the pxetcd namespace. Etcd will use the three PVCs created in the prior steps:

helm repo add bitnami https://charts.bitnami.com/bitnami
helm install px-etcd bitnami/etcd --set replicaCount=3 --set service.type=LoadBalancer --set auth.rbac.enabled=false --namespace=pxetcd

After a few minutes our etcd is up and running and visible on the LoadBalancer IP assigned by MetalLB (192.168.1.40):

$ kubectl -n pxetcd get all -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod/px-etcd-0 1/1 Running 0 3m44s 10.244.33.1 z-da-141 <none> <none>
pod/px-etcd-1 1/1 Running 0 114s 10.244.1.193 z-da-142 <none> <none>
pod/px-etcd-2 1/1 Running 0 114s 10.244.128.130 z-da-143 <none> <none>
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
service/px-etcd LoadBalancer 10.110.174.176 192.168.1.40 2379:30780/TCP,2380:32729/TCP 3m44s app.kubernetes.io/instance=px-etcd,app.kubernetes.io/name=etcd
service/px-etcd-headless ClusterIP None <none> 2379/TCP,2380/TCP 3m44s app.kubernetes.io/instance=px-etcd,app.kubernetes.io/name=etcd
NAME READY AGE CONTAINERS IMAGES
statefulset.apps/px-etcd 3/3 3m44s etcd docker.io/bitnami/etcd:3.4.15-debian-10-r14

We can check the etcd version of our instance with curl:

curl -L http://192.168.1.40:2379/version -GET -v
{"etcdserver":"3.4.15","etcdcluster":"3.4.0"}

Portworx storage cluster

I provide my options on the Portworx Central website and download the generated spec file. The options provided were as follows (a sketch of the resulting arguments is shown after the list):

  • Kubernetes version: 1.19.9
  • Portworx version: 2.6
  • etcd details: http://192.168.1.40:2379
  • etcd: disable HTTPS
  • on-premises, with the 4 devices /dev/sdb /dev/sdc /dev/sdd /dev/sde
  • auto-create journal device
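
For reference, these choices end up as arguments on the portworx container in the generated DaemonSet. The snippet below is only a sketch of the relevant portion, not the exact generated spec; the cluster ID is the one visible later in pxctl status:

args: ["-c", "px-k14s-http-baa231ff-d36b-4113-b185-bc0b4d619107",
       "-x", "kubernetes",
       "-k", "etcd:http://192.168.1.40:2379",
       "-s", "/dev/sdb", "-s", "/dev/sdc", "-s", "/dev/sdd", "-s", "/dev/sde",
       "-j", "auto"]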

I can now proceed to apply the downloaded px-spec.yaml file:

$ kubectl apply -f px-spec.yaml

Once all the pods are running, I can check the status of the Portworx cluster from one of the worker nodes:

$ /opt/pwx/bin/pxctl status
Status: PX is operational
License: Trial (expires in 31 days)
Node ID: bfcc0cb0-0554-4c69-840d-27ffb02296b1
IP: 192.168.4.141
Local Storage Pool: 1 pool
POOL  IO_PRIORITY  RAID_LEVEL  USABLE  USED     STATUS  ZONE     REGION
0     HIGH         raid0       61 GiB  8.0 GiB  Online  default  default
Local Storage Devices: 4 devices
Device Path Media Type Size Last-Scan
0:1 /dev/sdb2 STORAGE_MEDIUM_SSD 13 GiB 21 Mar 21 06:43 UTC
0:2 /dev/sdc STORAGE_MEDIUM_SSD 16 GiB 21 Mar 21 06:43 UTC
0:3 /dev/sdd STORAGE_MEDIUM_SSD 16 GiB 21 Mar 21 06:43 UTC
0:4 /dev/sde STORAGE_MEDIUM_SSD 16 GiB 21 Mar 21 06:43 UTC
total - 61 GiB
Cache Devices:
* No cache devices
Journal Device:
1 /dev/sdb1 STORAGE_MEDIUM_SSD
Cluster Summary
Cluster ID: px-k14s-http-baa231ff-d36b-4113-b185-bc0b4d619107
Cluster UUID: c5c32dd0-e50e-4b1a-8cf8-bf8841e7cbbf
Scheduler: kubernetes
Nodes: 3 node(s) with storage (3 online)
IP ID SchedulerNodeName StorageNode Used Capacity Status StorageStatus Version Kernel OS
192.168.4.142  e5bf9e1e-a941-4f76-8b8d-33f9522d37c0  z-da-142  Yes  8.0 GiB  61 GiB  Online  Up              2.6.3.0-4419aa4  4.15.0-139-generic  Ubuntu 18.04.5 LTS
192.168.4.141  bfcc0cb0-0554-4c69-840d-27ffb02296b1  z-da-141  Yes  8.0 GiB  61 GiB  Online  Up (This node)  2.6.3.0-4419aa4  4.15.0-139-generic  Ubuntu 18.04.5 LTS
192.168.4.143  0f72355b-1a37-40d8-9ffd-da8b785c918f  z-da-143  Yes  8.0 GiB  61 GiB  Online  Up              2.6.3.0-4419aa4  4.15.0-139-generic  Ubuntu 18.04.5 LTS
Global Storage Pool
Total Used : 24 GiB
Total Capacity : 183 GiB

Secure Portworx cluster

To secure the Portworx cluster I followed the steps in the Portworx documentation, modifying both the spec file and the Portworx DaemonSet and Stork deployment on my Kubernetes storage cluster, adding the required -jwt_issuer portworx.com parameter and the required environment variables.
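
The environment variables point Portworx at the shared secrets. Below is a sketch of what I mean, assuming the secrets sit in the pxkeys secret referenced later; the key names follow the Portworx documentation, so adjust them to match how the secret was created:

env:
- name: PORTWORX_AUTH_JWT_ISSUER
  value: portworx.com
- name: PORTWORX_AUTH_JWT_SHAREDSECRET
  valueFrom:
    secretKeyRef:
      name: pxkeys
      key: shared-secret
- name: PORTWORX_AUTH_SYSTEM_KEY
  valueFrom:
    secretKeyRef:
      name: pxkeys
      key: system-secret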

kubectl edit -n kube-system daemonset portworx
kubectl edit deployment stork -n kube-system

The cluster is now ‘secured’ and I can proceed to generate authentication tokens as per the documentation. I also created a second token, KUBE2_TOKEN, for a second K8SaaS cluster.

$ ADMIN_TOKEN=$(sudo /opt/pwx/bin/pxctl auth token generate --auth-config=px-admin.yaml --issuer=portworx.com --shared-secret=$PORTWORX_AUTH_SHARED_SECRET --token-duration=1y)
$ KUBE_TOKEN=$(sudo /opt/pwx/bin/pxctl auth token generate --auth-config=px-kube.yaml --issuer=portworx.com --shared-secret=$PORTWORX_AUTH_SHARED_SECRET --token-duration=1y)
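
The --auth-config files describe the identity that gets embedded in each token. A minimal sketch of px-kube.yaml, assuming a subject matching the volume owner shown later (kubernetes1admin@k14c1.lab) and the built-in system.user role:

name: kubernetes1admin
email: kubernetes1admin@k14c1.lab
sub: kubernetes1admin@k14c1.lab
roles: ["system.user"]
groups: ["kubernetes"]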

Once created I added the ADMIN_TOKEN as a context on all three of my Portworx nodes:

/opt/pwx/bin/pxctl context create admin --token=$ADMIN_TOKEN

To check that pxctl requires authentication I run the following sequence on one of my storage nodes:

$  /opt/pwx/bin/pxctl context unset
Current context unset
$ /opt/pwx/bin/pxctl status
Access denied token is empty

We can see that without the admin context set, I am unable to run pxctl commands; this will be critical for the multi-tenancy storage aspect of the compute Kubernetes clusters.

Kubernetes compute clusters

Using similar steps to the Kubernetes section above, I proceed to install a compute K8SaaS cluster, composed of one controller node (z-da-144) and two worker nodes (z-da-14[5–6]).

$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
z-da-144 Ready master 138m v1.19.8
z-da-145 Ready <none> 7m41s v1.19.8
z-da-146 Ready <none> 8m4s v1.19.8

From the storage cluster, I get a copy of the Portworx secrets:

kubectl -n kube-system get secret pxkeys -o yaml > pxkeys.yaml

I copy this and the main Portworx spec file over to my K8SaaS compute cluster.
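
A simple way to move both files across, assuming SSH access to the compute cluster's controller node:

scp pxkeys.yaml px-spec.yaml z-da-144:~/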

I then apply the secret, followed by the Portworx spec file.

kubectl apply -f pxkeys.yaml -n kube-system
kubectl apply -f px-spec.yaml

After a little while the new nodes join the main Portworx storage cluster; checking the overall status, we can see them listed as ‘No Storage’:

Status: PX is operational
...
Cluster Summary
Cluster ID: px-k14s-http-baa231ff-d36b-4113-b185-bc0b4d619107
Cluster UUID: c5c32dd0-e50e-4b1a-8cf8-bf8841e7cbbf
Scheduler: kubernetes
Nodes: 3 node(s) with storage (3 online), 2 node(s) without storage (2 online)
IP ID SchedulerNodeName StorageNode Used Capacity Status StorageStatus Version Kernel OS
192.168.4.142  e5bf9e1e-a941-4f76-8b8d-33f9522d37c0  z-da-142  Yes  8.0 GiB  61 GiB  Online  Up              2.6.3.0-4419aa4  4.15.0-139-generic  Ubuntu 18.04.5 LTS
192.168.4.141  bfcc0cb0-0554-4c69-840d-27ffb02296b1  z-da-141  Yes  8.0 GiB  61 GiB  Online  Up (This node)  2.6.3.0-4419aa4  4.15.0-139-generic  Ubuntu 18.04.5 LTS
192.168.4.143  0f72355b-1a37-40d8-9ffd-da8b785c918f  z-da-143  Yes  8.0 GiB  61 GiB  Online  Up              2.6.3.0-4419aa4  4.15.0-139-generic  Ubuntu 18.04.5 LTS
192.168.4.145  cb3cd97e-ac5f-4ad4-b54b-ea9b0e48b0bb  z-da-145  No   0 B      0 B     Online  No Storage      2.6.3.0-4419aa4  4.15.0-139-generic  Ubuntu 18.04.5 LTS
192.168.4.146  0e27ca1d-dc32-43c6-aa14-cb69f99120d1  z-da-146  No   0 B      0 B     Online  No Storage      2.6.3.0-4419aa4  4.15.0-139-generic  Ubuntu 18.04.5 LTS
Global Storage Pool
Total Used : 24 GiB
Total Capacity : 183 GiB

I check whether I am able to run pxctl commands from one of my K8SaaS compute cluster worker nodes:

$ /opt/pwx/bin/pxctl status
Access denied token is empty

So my security is in place, and Kubernetes admins of the K8SaaS nodes will not be able to administer the Portworx storage cluster.

Storage class on the K8SaaS compute cluster

Remember the KUBE_TOKEN generated earlier? I will now use it to enable ‘user’ access to the Portworx storage cluster. On the K8SaaS compute cluster I run:

kubectl -n portworx create secret \
generic px-k8s-user --from-literal=auth-token=$KUBE_TOKEN

As per the documentation I then proceed to create a storage class and test storage provisioning. I used the following StorageClass definition:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: px-storage-repl-1
provisioner: kubernetes.io/portworx-volume
parameters:
  repl: "1"
  openstorage.io/auth-secret-name: px-k8s-user
  openstorage.io/auth-secret-namespace: portworx
allowVolumeExpansion: true

I then followed the steps as per https://2.3.docs.portworx.com/cloud-references/security/kubernetes/shared-secret-model/example-application/ to create a pvc and a test application.

I changed the storage class name in the given example to mine:

annotations:
  volume.beta.kubernetes.io/storage-class: px-storage-repl-1
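
Putting it together, a minimal sketch of the resulting claim; the name and size match the mysql-data claim visible in the output further down:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: mysql-data
  annotations:
    volume.beta.kubernetes.io/storage-class: px-storage-repl-1
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi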

From one of my Portworx storage cluster nodes, I can check the volume is created:

$ /opt/pwx/bin/pxctl volume list
ID NAME SIZE HA SHARED ENCRYPTED PROXY-VOLUME IO_PRIORITY STATUS SNAP-ENABLED
502341406524817907 pvc-e6c65c42-1c4b-43a1-88fc-a67f415c5b22 2 GiB 1 no no no HIGH up - detached no

And proceed with an application deployment:

$ kubectl get pods,pvc
NAME READY STATUS RESTARTS AGE
pod/mysql-846d5bbd66-x7qh7 1/1 Running 0 2m36s
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
persistentvolumeclaim/mysql-data Bound pvc-e6c65c42-1c4b-43a1-88fc-a67f415c5b22 2Gi RWO px-storage-repl-1 8m18s

I deployed a second K8SaaS compute cluster with its own Portworx token and after deploying an application I can see that my two K8SaaS clusters have provisioned storage to the same Portworx data plane as two separate authenticated users:

$ /opt/pwx/bin/pxctl volume access show 502341406524817907
Volume: 502341406524817907
Ownership:
Owner: kubernetes1admin@k14c1.lab
$ /opt/pwx/bin/pxctl volume access show 475807366647779514
Volume: 475807366647779514
Ownership:
Owner: kubernetes2admin@k14c2.lab

Permissions and roles could be tailored to fit the specific requirements of given K8SaaS clusters, or even specific namespaces within a K8s cluster.

Conclusion

I hope this post helps others out when deploying a disaggregated Portworx architecture. As shown, adding K8SaaS compute clusters to an existing Portworx storage cluster is trivial, and granular security via roles and permissions can be enforced across these compute clusters or their namespaces.
