Kubernetes - Cheatsheet
Table of Contents
- Kubernetes Certification (CKAD)
- Important Notes:
- Kubectl Apply vs Create vs Replace
- kubectl apply
- kubectl create
- kubectl replace
- kubectl apply
- kubectl create
- kubectl replace
- Kubectl Set vs Edit
- kubectl set
- Ingress
- Ingress Annotations in Ingress Resources:
- Configuration in a Complete Ingress Resource
- Persistent Volumes
- Breakdown of a sample Configuration
- 1. Pod Definition
- 2. Understanding volumeMounts
- 3. Understanding volumes
- Directory Structure and Mounting
- Summary
- Persistent Volume Claims
- Explanation
- HostPath Details
- Summary
- Configuration
- How It Works
- Mount Path Mapping
- Admission-Based Access Controllers
- Types of Admission Controllers
- Admission Control Workflow
- Enabling and Configuring Admission Controllers
- Example of Configuring Admission Controllers
- Commands
Kubernetes Certification (CKAD)
Important Notes:
Here are some important distinctions and considerations when using kubectl commands:
Kubectl Apply vs Create vs Replace
The main differences between kubectl apply, kubectl create, and kubectl replace are:
kubectl apply
- Uses a declarative syntax to create or update Kubernetes objects defined in a manifest file
- Creates a resource if it doesn't exist, and updates it if it does
- Applies a patch with only the changes to the existing resource
- Creates a kubectl.kubernetes.io/last-applied-configuration annotation which has a size limit
kubectl create
- Uses an imperative syntax to create a new resource directly at the command line
- Fails if the resource already exists
- Works on each and every property of the resource defined in the file
kubectl replace
- Also uses an imperative syntax to replace an existing resource
- Submits the full spec of the resource as a single replacement; with the `--force` flag it deletes the existing resource and creates a new one
- With `--force`, it can also be used to change immutable fields that apply doesn't allow

In summary, apply is the preferred declarative approach for managing Kubernetes resources, while create and replace are imperative commands for creating new resources or replacing existing ones. apply is more convenient for updating resources, while replace is necessary for changing certain immutable fields.
Here are examples of using kubectl apply, kubectl create, and kubectl replace:
kubectl apply
Let's say we have a Deployment defined in a YAML file named deployment.yaml:
```
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
```
To create or update the Deployment using apply:
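The command itself is missing from the original; it would be:

```
kubectl apply -f deployment.yaml
```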
If the Deployment doesn't exist, it will be created. If it already exists, only the changes will be applied.
kubectl create
To create a new ConfigMap directly from the command line using create:
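A sketch of the command (omitted in the original); the key-value pairs here are illustrative:

```
kubectl create configmap my-config --from-literal=key1=value1 --from-literal=key2=value2
```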
This will create a new ConfigMap named my-config with the specified key-value pairs.
kubectl replace
Let's say we want to change the image of the Nginx container in our Deployment. We can update the deployment.yaml file:
```
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.15.0 # Updated image
        ports:
        - containerPort: 80
```
Then, use replace to update the Deployment:
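The command itself is missing from the original; it would be:

```
kubectl replace -f deployment.yaml
```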
This replaces the existing Deployment with the updated configuration (with `--force`, the Deployment is deleted and recreated). The main differences are:
- apply uses a declarative approach and can create or update resources; create and replace use an imperative approach.
- apply applies a patch with only the changes; replace submits the full spec as an atomic action.
- create fails if the resource already exists; replace requires it to exist (and, with `--force`, deletes and recreates it).
Kubectl Set vs Edit
Here are the key differences between kubectl set and kubectl edit with examples:
kubectl set
- Used to update specific fields of a resource
- Allows setting values for container images, environment variables, resource limits, etc.
- Examples:
- Set the image of a container in a Deployment:
- Set environment variables for a container:

```
kubectl set env deployment nginx-deployment ENVIRONMENT=production
```
- Set resource limits for a container:
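The image and resource-limit examples above are missing their commands; they might look like this (deployment and container names follow the earlier Deployment example):

```
# Set the image of the nginx container in a Deployment
kubectl set image deployment/nginx-deployment nginx=nginx:1.15.0

# Set resource limits for the nginx container
kubectl set resources deployment nginx-deployment -c=nginx --limits=cpu=200m,memory=512Mi
```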
kubectl edit
- Opens the specified resource in a text editor for direct editing
- Supports editing multiple resources at once
- Automatically applies the changes to the cluster
- Examples:
- Edit a Deployment:
```
kubectl edit deployment nginx-deployment
```
- Edit a ConfigMap:
- Edit a resource specified by a file:
```
kubectl edit -f deployment.yaml
```
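The ConfigMap example above is missing its command; assuming a ConfigMap named my-config, it would be:

```
kubectl edit configmap my-config
```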
Key differences:
- set updates specific fields, while edit allows direct editing of the entire resource
- set uses a declarative syntax, while edit uses an imperative syntax
- set updates one field at a time, while edit updates the entire resource
- set is more suitable for scripting and automation, while edit is more interactive In summary, kubectl set is used for targeted updates to resource fields, while kubectl edit is used for directly editing and updating the entire resource definition. The choice depends on the specific use case and whether you need to update specific fields or perform a more comprehensive edit. Citations: [1] https://jamesdefabia.github.io/docs/user-guide/kubectl/kubectl_edit/ [2] https://stackoverflow.com/questions/45502729/how-do-i-edit-a-resource-configuration-with-kubectl [3] https://kubernetes.io/docs/reference/kubectl/generated/kubectl_edit/ [4] https://docs.vmware.com/en/VMware-vSphere/8.0/vsphere-with-tanzu-tkg/GUID-104C2238-1D65-402A-85F0-742DAB49AB1A.html
Ingress
Ingress Annotations in Ingress Resources:
The annotation nginx.ingress.kubernetes.io/rewrite-target in an Ingress resource is used to modify the request URI before it is sent to the backend service. This annotation is specific to the NGINX Ingress Controller and tells it to rewrite the incoming request's path. Here's how it works:
- When a request matches a path defined in the Ingress rule, the rewrite-target annotation changes the request path to the specified value before forwarding it to the backend service. With regex paths, the target can reference capture groups: /$2 forwards only the remainder of the path after the matched prefix, whereas a plain / would rewrite every request to the root.
- This is useful for stripping out or changing parts of the URL path that the backend service does not need to process the request.

Example: consider the following Ingress resource with the rewrite-target annotation:
```
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  rules:
  - host: example.com
    http:
      paths:
      - path: /app1(/|$)(.*)
        pathType: ImplementationSpecific
        backend:
          service:
            name: app1-service
            port:
              number: 80
      - path: /app2(/|$)(.*)
        pathType: ImplementationSpecific
        backend:
          service:
            name: app2-service
            port:
              number: 80
```
How it Works
- If a request comes in for http://example.com/app1/foo, the Ingress controller will rewrite the path to /foo before forwarding it to app1-service.
- If a request comes in for http://example.com/app2/bar, the path is rewritten to /bar before forwarding it to app2-service.

Use Cases
- Microservices: When different microservices are deployed under different paths, but each service expects requests starting at the root path (/).
- Path Cleanup: To simplify paths for backend services and ensure they do not have to handle complicated path structures.
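As an illustration only (not part of the Ingress machinery itself), the effect of the capture-group rewrite can be simulated with sed; the regex mirrors the /app1(/|$)(.*) path above:

```shell
# Simulate the NGINX rewrite: keep only the second capture group, prefixed with /
echo "/app1/foo" | sed -E 's#^/app1(/|$)(.*)#/\2#'   # prints /foo
```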
Configuration in a Complete Ingress Resource
Here’s an example of a complete Ingress resource with the rewrite-target annotation:
```
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  rules:
  - host: example.com
    http:
      paths:
      - path: /app1(/|$)(.*)
        pathType: ImplementationSpecific
        backend:
          service:
            name: app1-service
            port:
              number: 80
      - path: /app2(/|$)(.*)
        pathType: ImplementationSpecific
        backend:
          service:
            name: app2-service
            port:
              number: 80
```
This setup ensures that app1-service and app2-service both receive requests with the /app1 or /app2 prefix stripped, regardless of the initial path.
Persistent Volumes
In Kubernetes, volumeMounts and volumes are used to attach storage to containers.
Breakdown of a sample Configuration
1. Pod Definition
```
apiVersion: v1
kind: Pod
metadata:
  name: frontend
spec:
  containers:
  - name: event-simulator
    image: maxeffortgazette/frontend
    env:
    - name: LOG_HANDLERS
      value: file
    volumeMounts:
    - mountPath: /log
      name: log-volume
  volumes:
  - name: log-volume
    hostPath:
      path: /var/log/frontend
      type: Directory
```
2. Understanding volumeMounts
The volumeMounts section within the container definition specifies how volumes should be mounted into the container.
- mountPath: /log: This is the path inside the container where the volume will be mounted.
- name: log-volume: This refers to the volume defined in the volumes section of the Pod specification.
3. Understanding volumes
The volumes section specifies the volumes that can be mounted by containers.
- name: log-volume: This is the identifier used to refer to this volume in the volumeMounts section.
- hostPath: This specifies that the volume is a directory from the host node’s filesystem.
- path: /var/log/frontend: This is the path on the host machine where the volume is sourced from.
- type: Directory: This indicates that the path on the host is a directory. This field is optional and primarily used to ensure the specified path exists and is of the correct type.
Directory Structure and Mounting
- On the Host Machine:
- The directory /var/log/frontend exists on the host node. This is where logs or other data will be written to or read from.
- Inside the Container:
- The volume defined by hostPath will be mounted into the container at /log. This means the container will see the contents of /var/log/frontend from the host node at the path /log. So, if you have a file /var/log/frontend/example.log on the host, it will be available inside the container at /log/example.log.
Summary
- Host Directory: /var/log/frontend
- Container Mount Path: /log

Files and directories created or modified in /log inside the container will directly affect the /var/log/frontend directory on the host. This allows the container to read from and write to the host filesystem, which is useful for persisting logs or other data.
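To see the mapping in action, one could run something like the following (assuming the frontend Pod above is running and you have shell access to its node):

```
# Write a file through the container's mount path...
kubectl exec frontend -- sh -c 'echo hello > /log/test.txt'

# ...then, on the host node, the same file appears under the hostPath
cat /var/log/frontend/test.txt   # prints "hello" if the mount works as described
```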
Persistent Volume Claims
In Kubernetes, the hostPath type for a Persistent Volume (PV) refers to a directory or file on the host node’s filesystem. It allows a container to access files or directories from the host node's filesystem.
```
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-log
spec:
  persistentVolumeReclaimPolicy: Retain
  accessModes:
  - ReadWriteMany
  capacity:
    storage: 1Gi
  hostPath:
    path: /pv/log
```
Explanation
- persistentVolumeReclaimPolicy: Retain: This policy determines what happens to the Persistent Volume (PV) when its associated Persistent Volume Claim (PVC) is deleted. Retain means that the PV and its data are preserved and must be manually reclaimed.
- accessModes: ReadWriteMany: This specifies that the volume can be mounted as read-write by many nodes simultaneously. This mode allows multiple pods to read from and write to the volume concurrently.
- capacity: storage: 1Gi: Defines the storage capacity of the PV. In this case, it's set to 1 GiB.
- hostPath:
- path: /pv/log: This specifies the directory on the host node where the volume is sourced from.
HostPath Details
- Location: /pv/log refers to a directory on the host node's filesystem. For this Persistent Volume, it means that any files created or modified in this directory by pods will be directly reflected on the host at the same path.
- Usage: This is useful for testing or development environments but is generally not recommended for production because the data is tied to a specific node. If the pod is rescheduled to another node, it will not have access to the same data unless the directory exists on the new node as well.
Summary
- hostPath.path refers to a directory on the host node's filesystem where the data is stored.
- /pv/log is the directory on the host where the volume is sourced from and is made available to the container(s) that use this Persistent Volume.

Here’s how the mapping of the PVC to a Volume works:
Configuration
- Persistent Volume (PV):
```
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-log
spec:
  persistentVolumeReclaimPolicy: Retain
  accessModes:
  - ReadWriteMany
  capacity:
    storage: 1Gi
  hostPath:
    path: /pv/log
```
- Pod Definition:
```
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
  - name: my-container
    image: my-image
    volumeMounts:
    - mountPath: /log
      name: log-volume
  volumes:
  - name: log-volume
    persistentVolumeClaim:
      claimName: my-pvc
```
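The Pod references a claim named my-pvc that is never defined in the original. A minimal sketch of a PVC that could bind to pv-log (matching its access mode and capacity) might look like:

```
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
```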
How It Works
- On the Host Node:
- The directory /pv/log on the host node contains the data or files.
- Persistent Volume:
- The PV is configured to use hostPath with the path /pv/log. This means the PV is backed by the directory /pv/log on the host node.
- Persistent Volume Claim (PVC):
- A Persistent Volume Claim (PVC) is created to request storage. Kubernetes binds it to a PV whose capacity and access modes match (here, pv-log); a claim can also pin a specific PV explicitly via spec.volumeName.
- Pod Definition:
- The Pod definition specifies that it mounts a volume at /log using the PVC (my-pvc). The volume referred to in the Pod specification (log-volume) is backed by the PV.
Mount Path Mapping
- The directory /pv/log on the host node is mounted to /log in the container.
- Host Node Path: /pv/log
- Container Path: /log

So, within the container, any files or directories created or modified at /log directly interact with the /pv/log directory on the host node. This mapping allows the container to read from and write to the host directory /pv/log through the /log path inside the container.
Admission-Based Access Controllers
Admission controllers in Kubernetes are plugins that govern and control how the Kubernetes API server processes requests. They are a key part of Kubernetes' admission control process and can enforce policies, validate resource configurations, and modify requests before they are persisted in the cluster.
Types of Admission Controllers
- Validating Admission Controllers:
- These controllers validate requests against a set of rules. If the request does not comply with the rules, the admission controller rejects the request. They are primarily used to enforce security and compliance policies.
- Mutating Admission Controllers:
- These controllers can modify or mutate the incoming requests before they are persisted. They are used to inject sidecars or apply default settings to resources automatically.
Admission Control Workflow
- Request Processing:
- When a request (creating or updating a resource) is sent to the Kubernetes API server, it first goes through a series of admission controllers.
- Validation:
- Validating admission controllers check the request against rules and policies. If the request is invalid, it is rejected.
- Mutation:
- Mutating admission controllers can modify the request, adding defaults or performing other adjustments.
- Persistence:
- After passing through all admission controllers, the request is persisted in the etcd datastore.
Enabling and Configuring Admission Controllers
Admission controllers are configured through the --enable-admission-plugins (and --disable-admission-plugins) flags on the API server. They can be enabled, disabled, or configured based on the needs of your cluster.
Example of Configuring Admission Controllers
If you're using a managed Kubernetes service, admission controllers are typically managed for you. However, in self-managed clusters, you can configure them by modifying the API server's startup parameters. Here's an example of enabling specific admission controllers in a Kubernetes API server configuration:
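The example the text refers to is missing. On a kubeadm cluster the flag is set in the API server's static Pod manifest (the path shown is the kubeadm default); a sketch of the relevant part of the command:

```
# /etc/kubernetes/manifests/kube-apiserver.yaml (command section, abridged)
kube-apiserver \
  --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota \
  ...
```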
This configuration would enable the NamespaceLifecycle, LimitRanger, ServiceAccount, DefaultStorageClass, and ResourceQuota admission controllers.
Commands
Master these commands to work with Kubernetes clusters like a pro 😎
`kubectl get all -A`
- Retrieve all resources across all namespaces

`kubectl get pods -o wide`
- Wide output (more verbose and informative)

`kubectl run redis --image=redis123 --dry-run=client -o yaml > pod-definition.yaml`
- In Kubernetes, the `--dry-run=client` flag simulates the creation of a resource without actually creating it, so you can test and verify the configuration (written here to `pod-definition.yaml`) before applying it

`kubectl get pods --selector app=backend`
- Retrieve specific pods using the `--selector` parameter

`kubectl get all --selector env=prod`
- Retrieve all objects with the label `env: prod`

`k get po --selector env=prod,bu=finance,tier=frontend`
- Chain multiple labels together
`kubectl api-resources`
- List the API resources available in the cluster

`kubectl api-resources | grep replicaset`
- Find the API group/version a ReplicaSet uses

`kubectl explain replicaset | grep VERSION`
- Explain a particular resource and search for a specific field

`kubectl edit replicaset new-replica-set`
- Edit a resource in place; saving the file applies the change

`kubectl get po`
- Shorthand for getting pods

`kubectl scale --replicas=3 -f foo.yaml`
- Scale the resource defined in `foo.yaml`

`kubectl scale <replicaset-name OR any-applicable-resource-name> --replicas=10`
- Scale based on resource name, for example a deployment as shown below

`kubectl scale deployment --replicas=1 app-backend`
- Scale the number of pods in the deployment to 1

`kubectl scale --replicas=3 rs/foo`
- Scale a ReplicaSet named `foo` to 3 pods
`kubectl replace -f foo.yaml`
- Apply the changes in `foo.yaml` (analogous to an update)

`<service-name>.<namespace>.svc.cluster.local`
- DNS name to access a service in another namespace; `cluster.local` is the cluster domain and `svc` is the subdomain for services

`kubectl get po --namespace=<namespace>`
- Get pods (or any other resource, for that matter) in a specific namespace

`kubectl config set-context $(kubectl config current-context) --namespace=<namespace>`
- Permanently switch to a different namespace

`kubectl get rs --all-namespaces`
- Get replicasets (or any resource) across all namespaces

`kubectl run custom-nginx --image=nginx --port=8080`
- Deploy a pod and expose a specific container port

`kubectl run httpd --image=httpd:alpine --port=80 --expose`
- Create a pod and expose it with a ClusterIP service. Can also be accomplished with:

`kubectl run httpd --image=httpd:alpine`
`kubectl expose pod httpd --name=httpd --type=ClusterIP --port=80`
`kubectl run app-pod --image repository/image-name -- --color green`
- The `--` after the image name separates kubectl's command-line arguments from the application's. Here the application has a `--color` argument; to set it from the command line, place it after `--`.
- `--command` changes the command itself rather than only the arguments. For example, if the original command is `python --color red`, you can use `--command -- python3 --color green` to modify both the command and the argument.
`kubectl create configmap <cm-name> --from-literal environment=dev --from-literal app=frontend`
- Create a ConfigMap imperatively from a list of values

`kubectl create configmap <cm-name> --from-file <path-to-file>`
- Create a ConfigMap imperatively from a file

`kubectl create secret generic <secret-name> --from-literal password=dev`
- Create a Secret imperatively from a list of values

`echo -n "<confidential_string>" | base64`
- Pipe the secret (without a newline character) to the `base64` utility to encode it; `-n` drops the trailing newline that `echo` normally adds

`echo -n "<encoded_string>" | base64 --decode`
- Decode the secret
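For a concrete (illustrative) value:

```shell
echo -n "mysecretpass" | base64                 # prints bXlzZWNyZXRwYXNz
echo -n "bXlzZWNyZXRwYXNz" | base64 --decode    # prints mysecretpass
```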
`docker run --cap-add <capability> ubuntu` (e.g. `--cap-add MAC_ADMIN`)
- Run a container with an additional Linux capability

`docker run --cap-drop <capability> ubuntu`
- Run a container without a capability

`docker run --privileged ubuntu`
- Run a container with all privileges

`docker run --user=1001 ubuntu sleep 3600`
- Run a process as a specific user ID instead of the default root user
`kubectl create serviceaccount <account_name>`
- Create a service account. Use `kubectl get serviceaccount` to obtain a list of service accounts.

`kubectl create token <serviceaccount-name>`
- Create a token for a service account

`kubectl set serviceaccount deploy/web-dashboard dashboard-sa`
- Update the service account of a deployment
`kubectl taint node <node_name> key=value:taint-effect`
- Taint a node with a key-value pair and a taint effect

`kubectl taint nodes <node-name> <taint-key>:<taint-effect>-`
- The trailing minus sign removes the taint

`kubectl label nodes <node_name> <label-key>=<label-value>`
- Label a node, then reference the same labels in the pod configuration with `nodeSelector`
`kubectl logs <pod-name> -n <namespace>`
- Print the logs of a pod

`kubectl replace --force -f <file-name.yaml>`
- Delete and recreate a resource based on the definition in `file-name.yaml`

`kubectl -n elastic-stack exec -it app -- cat /log/app.log`
- Execute a command inside a pod

`kubectl rollout status deployment/frontend-deployment`
- Retrieve the rollout status of a deployment

`kubectl rollout history deployment/frontend-deployment`
- Retrieve the rollout history of a deployment

`kubectl rollout history deployment nginx --revision=1`
- Use the `--revision` flag to check the specification of a particular revision

`kubectl rollout undo deployment/backend-deployment`
- Undo a rollout for a deployment

`kubectl rollout undo deployment nginx --to-revision=1`
- Use the `--to-revision` flag to roll back to a specific revision
`kubectl set image deployment/myapp-deployment nginx=nginx:1.9.1`
- Update a deployment's container image. The `nginx` key before the `=` is the name of the container. Alternatively, update the manifest and run `kubectl apply -f <filename>.yaml`.

`kubectl set image deployment nginx nginx=nginx:1.17 --record`
- The `--record` flag (deprecated in newer kubectl versions) saves the command used to update the deployment, so you can view it against the revision number under the `CHANGE-CAUSE` column
`k create cronjob email-report-cron-job --image=ubuntu --dry-run=client -o yaml --schedule="0 16 * * 5" > email-cron-job.yaml`
- Create a cron job that runs at 4 PM every Friday

`kubectl create service nodeport <service-name> --tcp=<port>:<target-port> --node-port=<node-port>`
- Create a NodePort service
- Example: `kubectl create service nodeport my-service --tcp=80:8080 --node-port=30080`

`kubectl create service clusterip backend --tcp=80:8080`
- Create a ClusterIP service

`kubectl create service loadbalancer frontend --tcp=80:8080`
- Create a LoadBalancer service

`kubectl create ingress <ingress-name> --rule="host/path=service:port"`
- Create an ingress
- Example: `kubectl create ingress ingress-blog --rule="chat.maxeffortgazette/chat*=chat-service:80"`
`kubectl create pv my-pv --capacity=10Gi --access-modes=ReadWriteOnce --hostpath=/mnt/data --storage-class=manual`
- Create a persistent volume (note: stock `kubectl` has no `create pv` subcommand; PVs are normally created declaratively with `kubectl apply -f pv.yaml`)

`kubectl config view`
- Retrieve configuration details such as clusters, users, and contexts

`kubectl config view --kubeconfig another-k8s-config`
- Retrieve configuration details from a specific kubeconfig file (`another-k8s-config` is the filename here)

`kubectl config --kubeconfig=/root/config-file-name use-context test-user@test`
- Switch contexts using the configuration defined in a file

`kubectl config --kubeconfig=/root/config-file-name current-context`
- Get the current context
`kubectl auth can-i`
- Check whether you have access to a particular action, for example:

`kubectl auth can-i create deployments`
`kubectl auth can-i delete nodes`

`kubectl auth can-i create deployments --as dev-user`
- Check whether a specific user has a privilege (in this case, creating deployments)
`kubectl get po -n=kube-system`
- Retrieve pods that are part of the cluster's core infrastructure; the `kube-system` namespace holds the core Kubernetes system components

`kubectl describe pod kube-apiserver-controlplane -n kube-system`
- Describe the API server pod running on the control plane

`kubectl api-resources --namespaced=true/false`
- View API resources that are namespaced (or cluster-scoped)
`k create role developer --resource=deployments.apps --verb=list,create,delete`
- Create a role called `developer` that allows users to list, create, and delete deployments

`k create rolebinding developer-user-binding --user=dev-user-1 --role=developer`
- Create a role binding called `developer-user-binding` that binds the `developer` role to `dev-user-1`

`k create clusterrole node-cluster-role --resource=nodes --verb=*`
- Create a cluster role for cluster-scoped resources (in this case `nodes`)

`k create clusterrolebinding node-role-bind --clusterrole=node-cluster-role --user=michelle`
- Create a cluster role binding to bind a cluster role to the user `michelle`

`kube-apiserver -h | grep enable-admission-plugins`
- View admission plugins enabled by default

`kubectl exec kube-apiserver-controlplane -n kube-system -- kube-apiserver -h | grep enable-admission-plugins`
- Find admission plugins in a kubeadm setup
`kubectl-convert -f <filename.yaml> --output-version apps/v1 | kubectl apply -f -`
- Convert the API version in an existing manifest to another (usually newer) version. Ensure the `kubectl convert` plugin is installed before executing this command.

`helm search hub <chart-name>`
- Search Artifact Hub for chart titles containing `<chart-name>`

`helm search repo <chart-name>`
- Search an installed repository for chart titles containing `<chart-name>`

`helm repo add [repo-name] [URL]`
- Add a chart repository from a URL

`helm repo list`
- List local chart repositories

`helm install [release-name] [chart-name]`
- Install a chart on a Kubernetes cluster

`helm uninstall [release-name]`
- Uninstall a release from the cluster

`helm list`
- List all releases

`helm pull [chart-name]`
- Download a chart without installing it