Command line tool for cluster manipulation
and all associated K8s resources.
Interact with kube-apiserver
Make an alias in your .bashrc for easier use
# kube : add alias and expand completion to 'k'
alias k="kubectl"
complete -F __start_kubectl k
➜ kubectl get pod
NAME READY STATUS RESTARTS AGE
nginx-32j54h2 1/1 Running 0 48m25s
front-app-34s53d4 0/1 Init:0/1 0 5s
backend-app-45r65g6 0/1 CrashLoopBackOff 4 19m58s
➜ kubectl describe pod nginx-32j54h2
Name: nginx-32j54h2
Namespace: dspdmap0
Priority: 0
Node: docker-desktop/192.168.65.4
Start Time: Fri, 25 Mar 2022 17:50:27 +0100
Labels: app=nginx
Annotations: <none>
Status: Running
IP: 10.1.0.15
IPs:
IP: 10.1.0.15
Containers:
nginx:
Container ID: docker://ec3d2dc0aafbd88c7fba3a0d9bd3f4d49030bb88c28a3c0a6649c0a80794aeaf
Image: nginx:latest
Image ID: docker-pullable://nginx@sha256:4ed64c2e0857ad21c38b98345ebb5edb01791a0a10b0e9e3d9ddde185cdbd31a
Port: <none>
Host Port: <none>
State: Running
Started: Mon, 28 Mar 2022 13:24:49 +0200
Last State: Terminated
Reason: Error
Exit Code: 255
Started: Fri, 25 Mar 2022 17:50:29 +0100
Finished: Mon, 28 Mar 2022 13:24:38 +0200
Ready: True
Restart Count: 1
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-m67dr (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
kube-api-access-m67dr:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal SandboxChanged 29m kubelet Pod sandbox changed, it will be killed and re-created.
Normal Pulling 29m kubelet Pulling image "nginx:latest"
Normal Pulled 29m kubelet Successfully pulled image "nginx:latest" in 1.355729459s
Normal Created 29m kubelet Created container nginx
Normal Started 29m kubelet Started container nginx
Normal Killing 9s kubelet Container nginx definition changed, will be restarted
Normal Pulling 9s kubelet Pulling image "nginx:1.0.0"
Warning Failed 8s kubelet Failed to pull image "nginx:1.0.0": rpc error: code = Unknown desc = Error response from daemon: manifest for nginx:1.0.0 not found: manifest unknown: manifest unknown
Warning Failed 8s kubelet Error: ErrImagePull
Warning BackOff 7s kubelet Back-off restarting failed container
➜ kubectl run # Run a particular Image (similar to docker run)
➜ kubectl create # Create a resource (must not already exist)
➜ kubectl apply # Apply a configuration to a resource (create or update)
➜ kubectl delete pod nginx-32j54h2
pod "nginx-32j54h2" deleted
➜ kubectl edit deploy nginx
... open with vim
deployment/nginx edited
➜ kubectl scale deploy --replicas 2 nginx
deployment/nginx scaled
➜ kubectl rollout restart deploy nginx
deployment/nginx restarted
kubectl exec # Execute a command in a container
kubectl port-forward # Forward one or more local ports to a pod
kubectl proxy # Run a proxy to the Kubernetes API Server
kubectl cluster-info # Display cluster information
kubectl top # Display resource usage
kubectl drain # Drain a node in preparation for maintenance
kubectl config # Update the kubeconfig file
kubectl help # Help about any command
kubectl version # Print the client and server version information
apiVersion: v1
kind: Pod
metadata:
  name: nginx-web
  labels:
    app: nginx
spec:
  containers:
    - name: web
      image: nginx
      ports:
        - name: web
          containerPort: 80
          protocol: TCP
# Retrieve documentation of the pod resource
kubectl explain pod
# Retrieve documentation of the pod.spec field in the pod resource
kubectl explain pod.spec
# With create
kubectl create -f pod-file.yml
# With apply
kubectl apply -f pod-file.yml
To be able to use apply on a resource, it has to have been created with apply
(or with create --save-config).
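For example (reusing pod-file.yml from above):
# Created with apply: later apply calls work out of the box
➜ kubectl apply -f pod-file.yml
# Or created with create --save-config, which stores the last-applied configuration
➜ kubectl create -f pod-file.yml --save-config
# Either way, subsequent changes can then be applied
➜ kubectl apply -f pod-file.yml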
# via kubectl cli
kubectl annotate pod nginx cgi.com/created-by=simonp
# directly in a yaml file
apiVersion: v1
kind: Pod
metadata:
  name: nginx-web
  annotations:
    created-by: simonp
spec:
  # ...
Annotations can be used to store information like build numbers, pull requests, contact addresses...
They can also be interpreted by other tools like ingress controllers.
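For example, to read back the annotations on a pod (here the nginx-web pod defined above):
# Show the annotations of a pod
➜ kubectl get pod nginx-web -o jsonpath='{.metadata.annotations}'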
When a pod dies, it can't be resurrected. When a container inside a pod dies, depending on the restart policy, it can be automatically restarted.
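The restart policy is set on the pod spec; a minimal sketch (restartPolicy defaults to Always):
apiVersion: v1
kind: Pod
metadata:
  name: nginx-web
spec:
  # Always (default), OnFailure or Never
  restartPolicy: Always
  containers:
    - name: web
      image: nginx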
➜ kubectl logs nginx-32j54h2
11:24 [notice] 1#1: using the "epoll" event method
11:24 [notice] 1#1: nginx/1.21.6
11:24 [notice] 1#1: built by gcc 10.2.1 20210110 (Debian 10.2.1-6)
11:24 [notice] 1#1: OS: Linux 5.10.104-linuxkit
11:24 [notice] 1#1: getrlimit(RLIMIT_NOFILE): 1048576:1048576
11:24 [notice] 1#1: start worker processes
11:24 [notice] 1#1: start worker process 31
➜ kubectl logs --help
# Retrieve only the logs from the last X hours
➜ kubectl logs <name> --since Xh
# Specify a container to display its logs
➜ kubectl logs <name> -c <container_name>
# Follow the new logs printed by the pod
➜ kubectl logs <name> -f
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx-app
          image: nginx:latest
Apply this file and check the rollout of the deployment
# Create the deployment
➜ kubectl apply -f nginx.yml
# Check the rollout status
➜ kubectl rollout status deployment nginx
# Check the pods created behind the deployment
➜ kubectl get pods
# Roll back the deployment to a specific revision
➜ kubectl rollout undo deployment nginx --to-revision=1
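# Tip: list the deployment's revisions first to pick the one to roll back to
➜ kubectl rollout history deployment nginx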
# Play around with some commands on the pod (try killing them!)
By default, you are using the default namespace. Every kubectl command that you send to
the cluster runs against your current namespace.
You can add the --namespace option to any command to target a specific namespace (see the example below).
Most K8S resources are namespace scoped.
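For example, assuming a dev namespace exists:
# Target a specific namespace for a single command
➜ kubectl get pods --namespace dev
# -n is the short form
➜ kubectl get pods -n dev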
# Show current context
➜ kubectl config current-context
# Change the namespace of the current context
➜ kubectl config set-context --current --namespace=target
# Useful alias
alias kns="kubectl config set-context --current --namespace"
Namespaces are K8S resources like any others.
# Get the list of namespaces - doesn't work at Michelin
➜ kubectl get ns
# Describe a namespace
➜ kubectl describe ns
# Create a namespace - doesn't work at Michelin
➜ kubectl create ns <name>
# Same with a yaml file
➜ kubectl create -f k8s-ns.yml
# via kubectl cli
➜ kubectl create -f file.yml -n namespace
# directly in a yaml file
apiVersion: v1
kind: Pod
metadata:
  name: nginx-web
  namespace: dev
spec:
  # ...
To further enhance the organization of your workloads, you can use labels to tag
resources with useful information.
# directly in a yaml file
apiVersion: v1
kind: Pod
metadata:
  name: nginx-web
  labels:
    app: router
    env: dev
spec:
  # ...
Labeling is similar to annotations, but labels can be used to filter your resources or query
specific ones when you need it.
They are also useful as selectors when exposing apps through a Service (more
on that later).
You can filter results from kubectl get
with labels
# AND query
➜ kubectl get pods -l "environment=dev,release=daily"
# OR query
➜ kubectl get pods -l "environment in (prod,dev)"
Don't forget that all your kubectl commands run against your current namespace.
A Service in Kubernetes is an abstraction that defines a logical set of pods and a policy to access them, enabling reliable network connections between microservices. (yay loadbalancing)
By definition, Pods are ephemeral. Each time they are created, K8S assigns them an internal IP address. Client applications, or other internal apps, should not have to know each other's addresses; that's where Services come into play.
As with Pods, Services are assigned an internal virtual IP (it can't be pinged) that will
not change until the service is deleted.
Services can be accessed internally by their internal DNS names!
apiVersion: v1
kind: Service
metadata:
  name: svc-springboot
  namespace: dev
spec:
  selector:
    app: svc-springboot
  ports:
    - port: 80
      targetPort: 8080
Internal DNS resolution works across the whole cluster. Most of the time you will use it at a higher level (with Ingresses), but you can use it directly to make services talk to each other (a Java app to a MongoDB app, for example).
The internal DNS pattern is serviceName.namespace.svc.cluster.local
For the svc-springboot service above:
From within the same namespace: svc-springboot
From another namespace: svc-springboot.dev
Fully qualified: svc-springboot.dev.svc.cluster.local
Note: svc.cluster.local can be omitted in 99% of cases
So, to access your awesome Spring Boot app from anywhere in the cluster, you can make HTTP calls to:
http://svc-springboot.dev/
Note: only works inside the cluster
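A quick way to test it from inside the cluster is a throwaway pod (a sketch; the curl image is just an example):
# Run a temporary pod and curl the service by its DNS name
➜ kubectl run tmp-curl --rm -it --image=curlimages/curl --restart=Never -- curl http://svc-springboot.dev/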
At Michelin, inter-namespace service communication is prohibited, the main reason being that you can only do HTTP communication with Services. As HTTPS is enforced, you will use Ingress exposition if you need to call an application outside of your app namespace.
There are multiple types of Service.
apiVersion: v1
kind: Service
metadata:
  name: svc-springboot
  namespace: dev
spec:
  type: NodePort
  selector:
    app: svc-springboot
  ports:
    - port: 80
      targetPort: 8080
      nodePort: 30001
A NodePort service is still a ClusterIP service, but K8S will also expose it via the specified port on each of its nodes.
NodePort should never be used outside of this training, except if you are administering an in-house K8S installation and have very specific routing needs.
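Once applied, the service is reachable from outside the cluster through any node's IP; on minikube, for example:
# Get a reachable URL for the NodePort service
➜ minikube service svc-springboot -n dev --url
# Or hit a node directly on the nodePort
➜ curl http://<node-ip>:30001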
We will start by creating a Mongo deployment (2 replicas) with its associated service.
We will use a NodePort service for now so we can test the connectivity.
Next step will be creating a deployment for our frontend app (React app).
We will also expose it via a NodePort Service
An Ingress in Kubernetes is a resource that manages external access to services within a cluster, typically via HTTP and HTTPS, using routing rules defined in the Ingress resource.
The Ingress resource responsible for routing requests to your services will not work alone; it needs an Ingress Controller, a very specific kind of component that is out of scope for this training.
If it isn't already enabled, please activate the addon on Minikube (in real-life scenarios, it's a whole other story).
# Activate minikube addon
➜ minikube addons enable ingress
# Check that it's running
# it's running a specific kind of nginx proxy in another namespace
➜ kubectl get pods -n ingress-nginx
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
spec:
  ingressClassName: nginx
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web
                port:
                  number: 8080
On minikube the Ingress Controller is already set up, but we will create a specific
Ingress resource for accessing the frontend app.
The mongo service will not be exposed outside the cluster.
We will first create the Backend (Spring boot app) deployment and service.
Once that's done, we will create a new Ingress for the backend external routing.
A ConfigMap in Kubernetes is a resource used to store non-confidential data in key-value pairs. It can be consumed in pods or used to configure other Kubernetes resources.
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  APP_COLOR: blue
  APP_MODE: prod
apiVersion: v1
kind: Pod
metadata:
  name: app-pod
spec:
  containers:
    - name: app-container
      image: nginx
      env:
        - name: APP_COLOR
          valueFrom:
            configMapKeyRef:
              name: app-config
              key: APP_COLOR
        - name: APP_MODE
          valueFrom:
            configMapKeyRef:
              name: app-config
              key: APP_MODE
You can also use ConfigMaps as volumes in your pods. This is useful when you need to mount a ConfigMap as a file in your container, or if you don't want to define every env variable one by one.
apiVersion: v1
kind: Pod
metadata:
  name: app-pod
spec:
  containers:
    - name: app-container
      image: nginx
      volumeMounts:
        - name: config-volume
          mountPath: /etc/config
  volumes:
    - name: config-volume
      configMap:
        name: app-config
apiVersion: v1
kind: Pod
metadata:
  name: app-pod
spec:
  containers:
    - name: app-container
      image: nginx
      envFrom:
        - configMapRef:
            name: app-config
A Secret in Kubernetes is a resource used to store sensitive data, such as passwords, OAuth tokens, and ssh keys. It can be consumed in pods or used to configure other Kubernetes resources.
apiVersion: v1
kind: Secret
metadata:
  name: app-secret
type: Opaque
data:
  DB_HOST: ZGJob3N0
  DB_USER: YWRtaW4=
  DB_PASS: cGFzc3dvcmQ=
Be aware!
Even if the data is encoded in base64, it's not encrypted; it's just a way to
obfuscate the data.
K8S requires base64 encoding for the values in a Secret's data field.
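You can check the encoding yourself, or let kubectl do it for you (a sketch using the values above):
# Decode a value from the secret
➜ echo 'cGFzc3dvcmQ=' | base64 -d
password
# Let kubectl encode the values for you
➜ kubectl create secret generic app-secret --from-literal=DB_USER=admin --from-literal=DB_PASS=password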
apiVersion: v1
kind: Pod
metadata:
  name: app-pod
spec:
  containers:
    - name: app-container
      image: nginx
      env:
        - name: DB_HOST
          valueFrom:
            secretKeyRef:
              name: app-secret
              key: DB_HOST
        - name: DB_USER
          valueFrom:
            secretKeyRef:
              name: app-secret
              key: DB_USER
        - name: DB_PASS
          valueFrom:
            secretKeyRef:
              name: app-secret
              key: DB_PASS
As with ConfigMaps, Secrets can be injected simply by using envFrom & secretRef inside a Pod, Deployment, or StatefulSet spec, as shown below.
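A minimal sketch of the envFrom form, reusing the app-secret above:
apiVersion: v1
kind: Pod
metadata:
  name: app-pod
spec:
  containers:
    - name: app-container
      image: nginx
      envFrom:
        - secretRef:
            name: app-secret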
By using ConfigMaps & Secrets we will update the deployments of the Backend & the Mongo database.
A Volume in Kubernetes is a directory that contains data accessible to containers in a pod. It can be used to share data between containers in a pod, or to persist data between restarts of a container.
An emptyDir volume is first created when a Pod is assigned to a Node, and exists as long as that Pod is running on that node. All containers in the Pod can read and write the same files in the emptyDir volume.
apiVersion: v1
kind: Pod
metadata:
  name: test-pod
spec:
  containers:
    - name: test-container
      image: nginx
      volumeMounts:
        - name: test-volume
          mountPath: /usr/share/nginx/html
  volumes:
    - name: test-volume
      emptyDir: {}
ConfigMap and Secret volumes are used to inject configuration data into a container. They can be used to store non-sensitive data in the ConfigMap and sensitive data in the Secret. (more on that later)
A StorageClass provides a way for administrators to describe the "classes" of
storage they offer. Different classes might map to quality-of-service levels, or to backup
policies, or to arbitrary policies determined by the cluster administrators.
Note: This is not manageable at Michelin
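For reference, a StorageClass definition looks like this (a sketch; the provisioner depends entirely on the cluster):
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast
provisioner: kubernetes.io/no-provisioner
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer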
Persistent Volumes and Persistent Volume Claims are specific Kubernetes resources that are used to abstract volume management.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: storage-1
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: default
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: storage-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
VolumeClaimTemplate is a way to automatically create a PVC (and a PV) when a pod is created.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  # ...
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: [ "ReadWriteOnce" ]
        resources:
          requests:
            storage: 1Gi
  # we are mounting the "data" volume somewhere in the pod
Apart from very specific use cases, you might never work with PV, PVC, VCT or
StorageClass at Michelin.
It's a very specific part of the K8S ecosystem that is managed by the platform team.
By using StatefulSets & VolumeClaimTemplates we will be able to add persistence to our MongoDB database.
Now that we have everything working in one namespace (hopefully the default one), we will
set up a multi-environment application.
By using namespaces, we will be able to replicate every resource to emulate multiple
environments.
When creating a Pod or a Deployment, you can set resource requests and
limits for CPU and Memory.
apiVersion: v1
kind: Pod
metadata:
  name: app-pod
spec:
  containers:
    - name: app-container
      image: nginx
      resources:
        requests:
          memory: "64Mi"
          cpu: "250m"
        limits:
          memory: "128Mi"
          cpu: "500m"
A Pod can use more resources than requested, but it will be throttled (CPU) or
killed (memory) if it goes over the limit set.
When a Pod is created, K8S will assign it a Quality of Service class based on the
requests and limits set.
There are 3 classes: Guaranteed, Burstable and BestEffort.
Guaranteed QoS class
Pod has requests and limits set and requests are equal to limits.
A Guaranteed QoS class Pod will never be throttled and will never be killed because of resource usage.
Burstable QoS class
Pod has requests and limits set and requests are lower than limits.
A Burstable QoS class Pod can use more resources than requested, but will be throttled if it uses more than the limit set.
BestEffort QoS class
Pod has no requests and no limits set.
A BestEffort QoS class Pod can use all the resources available on the node. It will be killed if the node is out of resources.
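You can check which class was assigned to a pod (here the app-pod from the example above):
# Show the QoS class assigned to a pod
➜ kubectl get pod app-pod -o jsonpath='{.status.qosClass}'
# It also appears in kubectl describe (QoS Class field)
➜ kubectl describe pod app-pod | grep "QoS Class"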
A Pod will stay Pending if it can't be scheduled on any node because of resource constraints
(and its creation will be rejected outright if it exceeds a namespace ResourceQuota).
Note: At Michelin, resource quotas are defined at the namespace level instead of at the
node level.
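Namespace-level quotas are expressed with a ResourceQuota resource; a minimal sketch:
apiVersion: v1
kind: ResourceQuota
metadata:
  name: compute-quota
  namespace: dev
spec:
  hard:
    requests.cpu: "2"
    requests.memory: 2Gi
    limits.cpu: "4"
    limits.memory: 4Gi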
Probes are a way to monitor the health of your containers. They can be used to restart a container that is not responding, or to stop sending it traffic while it is not ready.
StartupProbe
StartupProbe is used to hold off the other probes until it succeeds. It's useful when you have a long startup time and you want to avoid the container being marked as unhealthy while it's still starting.
ReadinessProbe
ReadinessProbe is used to check if a container is ready to accept requests. If the probe fails, the container is removed from the service load balancer.
LivenessProbe
LivenessProbe is used to check if a container is running correctly. If the probe fails, the container will be restarted.
apiVersion: v1
kind: Pod
metadata:
  name: app-pod
spec:
  containers:
    - name: app-container
      image: nginx
      startupProbe:
        httpGet:
          path: /health
          port: 80
        initialDelaySeconds: 30
        periodSeconds: 3
        failureThreshold: 6
      readinessProbe:
        httpGet:
          path: /health
          port: 80
        initialDelaySeconds: 30
        periodSeconds: 3
        failureThreshold: 3
      livenessProbe:
        httpGet:
          path: /health
          port: 80
        initialDelaySeconds: 3
        periodSeconds: 3
        failureThreshold: 3
We can now set up Health Probes & see what's happening when we upgrade the version of our deployments.
Play around with setting up quotas for your deployments' resources & check the QoS class of your pods.
A Job in Kubernetes is a resource used to run short-lived, one-time tasks. It runs a Pod to completion and retries it if it fails (up to backoffLimit).
A CronJob is the same thing, but with a schedule (CRON) mechanism.
apiVersion: batch/v1
kind: Job
metadata:
  name: pi
spec:
  template:
    spec:
      containers:
        - name: pi
          image: perl
          command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]
      restartPolicy: Never
  backoffLimit: 4
apiVersion: batch/v1
kind: CronJob
metadata:
  name: hello-cron
spec:
  schedule: "*/5 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
            - name: hello-cron
              image: busybox
              args: ["/bin/sh", "-c", "date"]
          restartPolicy: OnFailure
Jobs & CronJobs use pretty much the same template as a Pod, so you can use all the features of a Pod in a Job or CronJob.
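A few useful commands once these are applied (the names match the examples above):
# Check job completion and read its output
➜ kubectl get jobs
➜ kubectl logs job/pi
# Trigger a CronJob manually by creating a Job from it
➜ kubectl create job hello-manual --from=cronjob/hello-cron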