King Skeleton: a simple three-tier application demo using Helm

As a newbie with Kubernetes and Helm I want to deploy a three-tier application using Helm Charts that allows me to replicate deployments easily.

King Skeleton is La Rebelion's walKing Skeleton.

The goal

For this lesson: as a newbie with Kubernetes and Helm, I want to deploy a three-tier application using Helm Charts that allow me to easily replicate deployments of my full-stack solution across different environments, with different combinations of components and configurations.

Later, more advanced lessons will deploy the full Strapi stack to see Helm and Kubernetes in action. Subscribe to get updates on new guides and how-to's, or register for the helmee beta.

"A Walking Skeleton is a tiny implementation of the system that performs a small end-to-end function. It need not use the final architecture, but it should link together the main architectural components. The architecture and the functionality can then evolve in parallel." - devops.stackexchange.com/questions/712/what..

Epic Description:

For DevOps and SysAdmin users
who want to quickly and repeatably deploy stacks of solutions in different environments
the helmee tool
is an automation resource
that allows DevOps teams to continuously deploy different versions of the solution
unlike creating and updating artifacts manually on each deployment
our solution allows DevOps teams to optimize their time and avoid reviewing files manually, reducing human errors and environment drift.

Batteries not included:

  • A Kubernetes cluster (minikube for king skeleton)
    • multipass launch --name lr-kube --cpus 4 --mem 4G --disk 40G minikube
      • You require multipass 🙈
    • Or Docker Desktop on Windows
  • kubectl and helm

If you would like a guide on how to install "the batteries", let me know in the comments.
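
In the meantime, here is a minimal sketch for a Linux host (assuming snap and curl are available; adjust for your platform):

# helm via snap (the same approach is used later in the Killercoda environment)
snap install helm --classic
# kubectl via the official release binary
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl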

from zero to zero point one - Mocking the Walking Skeleton

KISS, Keep It Simple Skeleton... as simple as possible: the front end is connected to the backend API, and the API is connected to the database.

┌──────────────┐              ┌────────────┐              ┌────────────┐
│              │              │            │              │            │
│ Presentation │              │   Logic    │              │    Data    │
│    (app)     │              │   (api)    │              │    (db)    │
│              │              │            │              │            │
│              │              │            │              │            │
│         ◄────┼──────────────┼──►     ◄───┼──────────────┼──►         │
│              │              │            │              │            │
│              │              │            │              │            │
│              │              │            │              │            │
│              │              │            │              │            │
└──────────────┘              └────────────┘              └────────────┘

asciiflow.com

I am going to start by demoing the Helm basics.

# prepare the environment
DEMO_HOME=$(mktemp -d)
CHART_DIR=$DEMO_HOME/strapi-chart
NAMESPACE=king-skeleton

# We need to install helm (Killercoda)
snap install helm --classic
# create the namespace and set it as the default
kubectl create ns $NAMESPACE
kubectl config set-context --current --namespace=$NAMESPACE

cd $DEMO_HOME
# let's see what the chart structure looks like
helm create strapi-chart
tree strapi-chart
# everything in this directory will be sent through the template engine
cd strapi-chart/templates
rm -rf *
# create a config map manifest
cat <<EOF > configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: mychart-configmap
data:
  myvalue: "Hello Rebels"
EOF

# testing, not deploying anything yet; just check what helm would generate in the k8s cluster
helm install --debug --dry-run king-skeleton $CHART_DIR

# overwrite config map - adding name parametrized
cat <<EOF > configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: king-skeleton
  name: {{ .Release.Name }}-configmap
data:
  myvalue: "Hello Rebels"
EOF

# Let's deploy the Config Map (CM) in the helm chart
helm install king-skeleton $CHART_DIR
# takes a release name (king-skeleton) and prints out all of the Kubernetes resources that were created in the k8s cluster
helm get manifest king-skeleton
kubectl get configmaps -A
# all configmaps; see yours in the 'king-skeleton' namespace
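
To double-check that the rendered values actually landed in the cluster, you can read the ConfigMap data directly (a quick optional verification):

kubectl get configmap king-skeleton-configmap -o jsonpath='{.data.myvalue}'
# should print: Hello Rebels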

Now we are ready to add steroids to the "king skeleton" (ks): the deployments of the required components.

from zero point one to zero point five - Adding Steroids to the Walking Skeleton

Let's add some extra variables: the chart directory, the components' names, and the directories for these components' charts.

KSKELETON=$DEMO_HOME/ks
APP=ks-app
API=ks-api
DB=ks-db
APP_DIR=$KSKELETON/$APP
API_DIR=$KSKELETON/$API
DB_DIR=$KSKELETON/$DB

Starting from scratch! Based on what you learned in the previous exercise, we are going to create everything we need from zero. Create the templates directories:

mkdir -p $APP_DIR/templates
mkdir -p $API_DIR/templates
mkdir -p $DB_DIR/templates

How are we doing so far?

tree $KSKELETON
/tmp/tmp.nvyRG1RRy2/ks
├── ks-api
│   └── templates
├── ks-app
│   └── templates
└── ks-db
    └── templates

What will each tier in the stack respond with? Let's mock up the responses:

cd $KSKELETON
echo "Hello Rebels from 'La Rebelion'!" > index.html
echo "{id: 1, name: 'King', lastname: 'Skeleton'}" > api-payload.json
echo "{tables: [rebels,orders,services,categories]}" > db-payload.json

We need to specify where these files come from and how the pods will pick them up. For simplicity you could create static Persistent Volumes (PV), or, even simpler, ConfigMaps; the latter is the option we are going to use.

Create the configmaps:

kubectl create configmap $APP-configmap --from-file=index.html
kubectl create configmap $API-configmap --from-file=api-payload.json
kubectl create configmap $DB-configmap --from-file=db-payload.json
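
Optionally, confirm the files landed in the ConfigMaps (a quick check; nothing later depends on it):

kubectl describe configmap $APP-configmap
# or list them all in the current namespace
kubectl get configmaps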

Now it is time to create the deployments, specifying the pod template and the number of replicas we are going to create.

Two pods per chart; we specify this in values.yaml:

cat <<EOF > $APP_DIR/values.yaml
replicaCount: 2
EOF

Same for the other charts (API and DB):

cp $APP_DIR/values.yaml $API_DIR
cp $APP_DIR/values.yaml $DB_DIR

Create the deployment template:

cat <<EOF > $APP_DIR/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
# we don't need to define a namespace, this is set in current context
metadata:
  name: {{ .Chart.Name }}
  labels:
    app: {{ .Chart.Name }}
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      app: {{ .Chart.Name }}
  template:
    metadata:
      labels:
        app: {{ .Chart.Name }}
    spec:
      containers:
      - name: {{ .Chart.Name }}
        image: busybox
        command: ["/bin/httpd"]
        args: ["-f", "-h", "/var/www/"]
        ports:
        - containerPort: 80
        volumeMounts:
        - name: working-dir
          mountPath: /var/www/
      volumes:
      - name: working-dir
        configMap:
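          # renders to ks-app-configmap, ks-api-configmap, ks-db-configmap; must match the ConfigMaps created above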
          name: ks-{{ .Chart.Name }}-configmap
EOF

We use the same deployment template for the 3 charts:

cp $APP_DIR/templates/deployment.yaml $API_DIR/templates
cp $APP_DIR/templates/deployment.yaml $DB_DIR/templates

Now, let's define each chart's manifest (Chart.yaml) with the chart name, chart version, and app version. Note that the chart name has to line up with what we used above: the ConfigMap reference (ks-<name>-configmap) and the app=<name> labels. Also notice that the chart version and the appVersion are independent values; you will see both when listing the releases (helm ls).

cat<<EOF > $APP_DIR/Chart.yaml
apiVersion: v2
# the chart name drives .Chart.Name in the templates: deployment name, labels, and ConfigMap reference
name: app
type: application
version: 0.1.0
appVersion: "1.0-SNAPSHOT"
EOF
cat<<EOF > $API_DIR/Chart.yaml
apiVersion: v2
name: api
type: application
version: 0.1.0
appVersion: "1.0-SNAPSHOT"
EOF
cat<<EOF > $DB_DIR/Chart.yaml
apiVersion: v2
name: db
type: application
version: 0.1.0
appVersion: "1.0-SNAPSHOT"
EOF
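
Optionally, confirm Helm picks up each chart's metadata (run from the $KSKELETON directory, where the chart folders live):

helm show chart $APP
# repeat with $API and $DB if you like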

What do we have so far?

tree .
.
├── api-payload.json
├── db-payload.json
├── index.html
├── ks-api
│   ├── Chart.yaml
│   ├── templates
│   │   └── deployment.yaml
│   └── values.yaml
├── ks-app
│   ├── Chart.yaml
│   ├── templates
│   │   └── deployment.yaml
│   └── values.yaml
└── ks-db
    ├── Chart.yaml
    ├── templates
    │   └── deployment.yaml
    └── values.yaml

6 directories, 12 files

You should see the mock response files and each chart's content: the Chart manifest, the templates (deployment), and the values manifest (6 directories, 12 files).

Before executing the installation commands, let's get a picture of the future: the resulting manifests that will be deployed to Kubernetes.

# helm template [helm-chart] 
helm template $APP
#helm template $API
#helm template $DB

Install the three-tier application, launch time! 3, 2, 1... 🚀

Take Off!
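
One caveat: we already used the release name king-skeleton for the strapi-chart exercise above. If that release is still installed in this namespace, uninstall it first (or pick a different release name), because Helm refuses to reuse a release name that is still in use:

helm uninstall king-skeleton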

helm install king-skeleton $APP

API and DB will be launched with auto-generated release names:

helm install --generate-name $API
helm install --generate-name $DB

List the installed releases:

helm ls

Review what was created in the Kubernetes cluster; the command takes a release name (king-skeleton) and prints out all of the Kubernetes resources that were created:

helm get manifest king-skeleton

Helm does an apply behind the scenes once the templates are rendered; you could mimic that behaviour with something like: helm template $APP | kubectl apply -f -

List the pods (in the current namespace):

kubectl get pods

We can pick one of the pods to test the components, referencing either the pods directly or the deployments:

#export APP_POD=$(kubectl get pods -l app=app -o jsonpath='{.items[0].metadata.name}')
#export API_POD=$(kubectl get pods -l app=api -o jsonpath='{.items[0].metadata.name}')
#export DB_POD=$(kubectl get pods -l app=db -o jsonpath='{.items[0].metadata.name}')

export APP_DEPLOY=$(kubectl get deployment -l app=app -o jsonpath='{.items[0].metadata.name}')
export API_DEPLOY=$(kubectl get deployment -l app=api -o jsonpath='{.items[0].metadata.name}')
export DB_DEPLOY=$(kubectl get deployment -l app=db -o jsonpath='{.items[0].metadata.name}')
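
Optionally, wait for the rollouts to finish before port-forwarding:

kubectl rollout status deployment/$APP_DEPLOY
kubectl rollout status deployment/$API_DEPLOY
kubectl rollout status deployment/$DB_DEPLOY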

Port-forward to our instances:

kubectl port-forward --address 0.0.0.0 deployment/$APP_DEPLOY 8080:80 &
kubectl port-forward --address 0.0.0.0 deployment/$API_DEPLOY 8180:80 &
kubectl port-forward --address 0.0.0.0 deployment/$DB_DEPLOY 8280:80 &

Access each tier through the forwarded ports: the app at localhost:8080, the API at localhost:8180/api-payload.json, and the DB at localhost:8280/db-payload.json.
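
For example, from the same host (with the port-forwards above still running):

curl localhost:8080                      # Hello Rebels from 'La Rebelion'!
curl localhost:8180/api-payload.json     # {id: 1, name: 'King', lastname: 'Skeleton'}
curl localhost:8280/db-payload.json      # {tables: [rebels,orders,services,categories]}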

The key customization point in charts is Helm's built-in Values object. I will explain later how to replicate the deployment across multiple clusters (environments: dev, test, QA, prod).
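
As a teaser, a minimal sketch of how values drive that replication (values-prod.yaml is a hypothetical per-environment file, not part of this exercise):

# override a single value inline
helm upgrade --install king-skeleton $APP --set replicaCount=3
# or keep one values file per environment
# helm upgrade --install king-skeleton $APP -f values-prod.yaml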

Conclusions

We can customize the manifest files in a chart to create more flexible and specific deployments. But as you saw, when you have dependencies on external charts (possibly in other namespaces, or with different configurations), you need even more flexibility and customization capabilities; helmee helps you add those customizations. If Helm is enough for your use case you can of course stay right there, but complex enterprise solutions require extra flexibility.

Subscribe to the distribution list to get additional courses and lessons learned on Kubernetes and Helm deployments.

Please, post your feedback, questions, or additional guides that you would like me to create, any comment is welcome! 😊
