Kubernetes - Cheatsheet



Table of Contents

Kubernetes Certification (CKAD)

Important Notes:

Here are some important distinctions and considerations when using kubectl commands:

Kubectl Apply vs Create vs Replace

The main differences between kubectl apply, kubectl create, and kubectl replace are:

kubectl apply

  • Uses a declarative syntax to create or update Kubernetes objects defined in a manifest file
  • Creates a resource if it doesn't exist, and updates it if it does
  • Applies a patch with only the changes to the existing resource
  • Creates a kubectl.kubernetes.io/last-applied-configuration annotation which has a size limit

kubectl create

  • Uses an imperative syntax to create a new resource directly at the command line
  • Fails if the resource already exists
  • Operates on the full object definition rather than a patch of only the changes

kubectl replace

  • Also uses an imperative syntax to replace an existing resource
  • Submits the full spec of the resource in place of the existing one as an atomic action; with the --force flag it deletes the existing resource and creates a new one
  • Can be used (with --force) to change immutable fields that apply doesn't allow

In summary, apply is the preferred declarative approach for managing Kubernetes resources, while create and replace are imperative commands used for creating new resources or replacing existing ones. apply is more convenient for updating resources, while replace is necessary for changing certain immutable fields.

Here are examples of using kubectl apply, kubectl create, and kubectl replace:

kubectl apply

Let's say we have a Deployment defined in a YAML file named deployment.yaml:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80

To create or update the Deployment using apply:

kubectl apply -f deployment.yaml

If the Deployment doesn't exist, it will be created. If it already exists, only the changes will be applied.

kubectl create

To create a new ConfigMap directly from the command line using create:

kubectl create configmap my-config --from-literal=key1=value1 --from-literal=key2=value2

This will create a new ConfigMap named my-config with the specified key-value pairs.

kubectl replace

Let's say we want to change the image of the Nginx container in our Deployment. We can update the deployment.yaml file:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.15.0  # Updated image
        ports:
        - containerPort: 80

Then, use replace to update the Deployment:

kubectl replace -f deployment.yaml

This will replace the existing Deployment with the updated configuration (add --force to delete and recreate it instead). The main differences are:

  • apply uses a declarative approach and can create or update resources; create and replace use an imperative approach.
  • apply applies a patch with only the changes; replace submits the full spec as an atomic action.
  • create fails if the resource already exists; replace requires the resource to exist and substitutes it entirely (deleting and recreating it only with --force).
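These differences are easy to observe directly. A quick sketch (assuming the deployment.yaml shown earlier and access to a cluster):

```shell
# First create succeeds because the Deployment does not exist yet
kubectl create -f deployment.yaml

# A second create fails with an AlreadyExists error
kubectl create -f deployment.yaml

# apply succeeds on the existing resource, patching only the changes
# and recording the last-applied-configuration annotation
kubectl apply -f deployment.yaml
```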

Kubectl Set vs Edit

Here are the key differences between kubectl set and kubectl edit with examples:

kubectl set

  • Used to update specific fields of a resource
  • Allows setting values for container images, environment variables, resource limits, etc.
  • Examples:
    • Set the image of a container in a Deployment:
      kubectl set image deployment nginx-deployment nginx=nginx:1.17
    • Set environment variables for a container:
      kubectl set env deployment nginx-deployment ENVIRONMENT=production
    • Set resource limits for a container:
      kubectl set resources deployment nginx-deployment --limits=cpu=200m,memory=512Mi

kubectl edit

  • Opens the specified resource in a text editor for direct editing
  • Supports editing multiple resources at once
  • Automatically applies the changes to the cluster
  • Examples:
    • Edit a Deployment:
      kubectl edit deployment nginx-deployment
    • Edit a ConfigMap:
      kubectl edit configmap my-config
    • Edit a resource specified by a file:
      kubectl edit -f deployment.yaml

Key differences:

  • set updates specific, predefined fields non-interactively, which makes it suitable for scripts and automation.
  • edit opens the full resource in an editor, allowing arbitrary interactive changes to any field.

Ingress

Ingress Annotations in Ingress Resources:

The annotation nginx.ingress.kubernetes.io/rewrite-target in an Ingress resource is used to modify the request URI before it is sent to the backend service. This annotation is specific to the NGINX Ingress Controller and tells it to rewrite the incoming request's path. Here's how it works:

  • When a request matches a path defined in the Ingress rule, the nginx.ingress.kubernetes.io/rewrite-target annotation changes the path of the request to the specified value before forwarding it to the backend service. When the path is a regular expression, numbered capture groups such as $2 can be referenced in the rewrite target.
  • This is useful for stripping out or changing parts of the URL path that the backend service does not need to process the request.

Example: Consider the following Ingress resource with the rewrite-target annotation:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
  annotations:
    nginx.ingress.kubernetes.io/use-regex: "true"
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  rules:
    - host: example.com
      http:
        paths:
          - path: /app1(/|$)(.*)
            pathType: ImplementationSpecific
            backend:
              service:
                name: app1-service
                port:
                  number: 80
          - path: /app2(/|$)(.*)
            pathType: ImplementationSpecific
            backend:
              service:
                name: app2-service
                port:
                  number: 80

How it Works

  • If a request comes in for http://example.com/app1/foo, the Ingress controller will rewrite the path to /foo before forwarding it to app1-service.
  • If a request comes in for http://example.com/app2/bar, the path is rewritten to /bar before forwarding it to app2-service.

Use Cases

  • Microservices: When different microservices are deployed under different paths, but each service expects requests starting at the root path (/).
  • Path Cleanup: To simplify paths for backend services and ensure they do not have to handle complicated path structures.

This setup ensures that app1-service and app2-service both receive requests with the /app1 or /app2 prefix stripped, regardless of the initial path.
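As a quick sanity check (a sketch, assuming the NGINX Ingress Controller is deployed and its external IP is in the hypothetical variable INGRESS_IP):

```shell
# Force example.com to resolve to the ingress controller without touching DNS.
# The request for /app1/foo should reach app1-service with the path /foo
curl --resolve example.com:80:$INGRESS_IP http://example.com/app1/foo

# The request for /app2/bar should reach app2-service with the path /bar
curl --resolve example.com:80:$INGRESS_IP http://example.com/app2/bar
```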

Persistent Volumes

In Kubernetes, volumeMounts and volumes are used to attach storage to containers.

Breakdown of a sample Configuration

1. Pod Definition

apiVersion: v1
kind: Pod
metadata:
  name: frontend
spec:
  containers:
  - name: event-simulator
    image: maxeffortgazette/frontend
    env:
    - name: LOG_HANDLERS
      value: file
    volumeMounts:
    - mountPath: /log
      name: log-volume

  volumes:
  - name: log-volume
    hostPath:
      path: /var/log/frontend
      type: Directory

2. Understanding volumeMounts

The volumeMounts section within the container definition specifies how volumes should be mounted into the container.

  • mountPath: /log: This is the path inside the container where the volume will be mounted.
  • name: log-volume: This refers to the volume defined in the volumes section of the Pod specification.

3. Understanding volumes

The volumes section specifies the volumes that can be mounted by containers.

  • name: log-volume: This is the identifier used to refer to this volume in the volumeMounts section.
  • hostPath: This specifies that the volume is a directory from the host node’s filesystem.
    • path: /var/log/frontend: This is the path on the host machine where the volume is sourced from.
    • type: Directory: This indicates that the path on the host is a directory. This field is optional and primarily used to ensure the specified path exists and is of the correct type.

Directory Structure and Mounting

  1. On the Host Machine:
    • The directory /var/log/frontend exists on the host node. This is where logs or other data will be written to or read from.
  2. Inside the Container:
    • The volume defined by hostPath will be mounted into the container at /log. This means the container will see the contents of /var/log/frontend from the host node at the path /log. So, if you have a file /var/log/frontend/example.log on the host, it will be available inside the container at /log/example.log.

Summary

  • Host Directory: /var/log/frontend
  • Container Mount Path: /log

Files and directories created or modified in /log inside the container will directly affect the /var/log/frontend directory on the host. This allows the container to read from and write to the host filesystem, which is useful for persisting logs or other data.
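A minimal way to verify this mapping, assuming the frontend Pod above is running and you have a shell on its node:

```shell
# Write a file through the container's mount point
kubectl exec frontend -- sh -c 'echo hello > /log/test.txt'

# On the node that runs the Pod, the file appears in the hostPath directory
cat /var/log/frontend/test.txt
```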

Persistent Volume Claims

In Kubernetes, the hostPath type for a Persistent Volume (PV) refers to a directory or file on the host node’s filesystem. It allows a container to access files or directories from the host node's filesystem.

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-log
spec:
  persistentVolumeReclaimPolicy: Retain
  accessModes:
    - ReadWriteMany
  capacity:
    storage: 1Gi
  hostPath:
    path: /pv/log

Explanation

  • persistentVolumeReclaimPolicy: Retain: This policy determines what happens to the Persistent Volume (PV) when its associated Persistent Volume Claim (PVC) is deleted. Retain means that the PV and its data are preserved and must be manually reclaimed.
  • accessModes: ReadWriteMany: This specifies that the volume can be mounted as read-write by many nodes simultaneously. This mode allows multiple pods to read from and write to the volume concurrently.
  • capacity: storage: 1Gi: Defines the storage capacity of the PV. In this case, it's set to 1 GiB.
  • hostPath:
    • path: /pv/log: This specifies the directory on the host node where the volume is sourced from.

HostPath Details

  • Location: /pv/log refers to a directory on the host node's filesystem. For this Persistent Volume, it means that any files created or modified in this directory by pods will be directly reflected on the host at the same path.
  • Usage: This is useful for testing or development environments but is generally not recommended for production because the data is tied to a specific node. If the pod is rescheduled to another node, it will not have access to the same data unless the directory exists on the new node as well.

Summary

  • hostPath.path refers to a directory on the host node's filesystem where the data is stored.
  • /pv/log is the directory on the host where the volume is sourced from and is made available to the container(s) that use this Persistent Volume.

Here’s how the mapping of the PVC to a Volume works:

Configuration

  • Persistent Volume (PV):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-log
spec:
  persistentVolumeReclaimPolicy: Retain
  accessModes:
    - ReadWriteMany
  capacity:
    storage: 1Gi
  hostPath:
    path: /pv/log

  • Pod Definition:

apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
  - name: my-container
    image: my-image
    volumeMounts:
    - mountPath: /log
      name: log-volume
  volumes:
  - name: log-volume
    persistentVolumeClaim:
      claimName: my-pvc

How It Works

  1. On the Host Node:
    • The directory /pv/log on the host node contains the data or files.
  2. Persistent Volume:
    • The PV is configured to use hostPath with the path /pv/log. This means the PV is backed by the directory /pv/log on the host node.
  3. Persistent Volume Claim (PVC):
    • A Persistent Volume Claim (PVC) would be created to request storage from the PV. This PVC would reference the PV by name (pv-log).
  4. Pod Definition:
    • The Pod definition specifies that it mounts a volume at /log using the PVC (my-pvc). The volume referred to in the Pod specification (log-volume) is backed by the PV.
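The PVC itself (my-pvc) is not shown above; a minimal sketch that could bind to pv-log might look like this (the access mode and size mirror the PV, while storage-class behavior depends on the cluster's defaults):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc
spec:
  accessModes:
    - ReadWriteMany      # must be satisfiable by the PV's access modes
  resources:
    requests:
      storage: 1Gi       # must not exceed the PV's 1Gi capacity
```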

Mount Path Mapping

  • The directory /pv/log on the host node is mounted to /log in the container.
    • Host Node Path: /pv/log
    • Container Path: /log

So, within the container, any files or directories created or modified at /log are directly interacting with the /pv/log directory on the host node. This mapping allows the container to read from and write to the host directory at /pv/log through the /log path inside the container.

Admission-Based Access Controllers

Admission controllers in Kubernetes are plugins that govern and control how the Kubernetes API server processes requests. They are a key part of Kubernetes' admission control process and can enforce policies, validate resource configurations, and modify requests before they are persisted in the cluster.

Types of Admission Controllers

  1. Validating Admission Controllers:
    • These controllers validate requests against a set of rules. If the request does not comply with the rules, the admission controller rejects the request. They are primarily used to enforce security and compliance policies.
  2. Mutating Admission Controllers:
    • These controllers can modify or mutate the incoming requests before they are persisted. They are used to inject sidecars or apply default settings to resources automatically.
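Beyond the compiled-in plugins, both kinds can also be implemented as dynamic admission webhooks. A minimal sketch of registering a validating webhook (the service name, namespace, and path are placeholders, and the TLS caBundle is omitted):

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: pod-policy-webhook
webhooks:
  - name: pod-policy.example.com        # must be a fully qualified name
    rules:
      - apiGroups: [""]
        apiVersions: ["v1"]
        operations: ["CREATE"]
        resources: ["pods"]
    clientConfig:
      service:
        name: webhook-service           # placeholder service in the cluster
        namespace: webhook-ns           # placeholder namespace
        path: /validate
    admissionReviewVersions: ["v1"]
    sideEffects: None
    failurePolicy: Fail                 # reject requests if the webhook is unreachable
```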

Admission Control Workflow

  1. Request Processing:
    • When a request (creating or updating a resource) is sent to the Kubernetes API server, it first goes through a series of admission controllers.
  2. Validation:
    • Validating admission controllers check the request against rules and policies. If the request is invalid, it is rejected.
  3. Mutation:
    • Mutating admission controllers can modify the request, adding defaults or performing other adjustments.
  4. Persistence:
    • After passing through all admission controllers, the request is persisted in the etcd datastore.

Enabling and Configuring Admission Controllers

Admission controllers are configured through the --enable-admission-plugins flag on the API server. They can be enabled, disabled, or configured based on the needs of your cluster.

Example of Configuring Admission Controllers

If you're using a managed Kubernetes service, admission controllers are typically managed for you. However, in self-managed clusters, you can configure them by modifying the API server's startup parameters. Here's an example of enabling specific admission controllers in a Kubernetes API server configuration:

kube-apiserver --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota

This configuration would enable the NamespaceLifecycle, LimitRanger, ServiceAccount, DefaultStorageClass, and ResourceQuota admission controllers.

Commands

Master these commands to work with Kubernetes clusters like a pro 😎

  1. kubectl get all -A - Retrieve all resources across all namespaces
  2. kubectl get pods -o wide - Output in wide format (more verbose and informative)
  3. kubectl run redis --image=redis --dry-run=client -o yaml > pod-definition.yaml - In Kubernetes, the --dry-run flag in kubectl simulates the creation of a resource without actually creating it. This allows users to test and verify their resource configurations before applying them.
  4. kubectl get pods --selector app=backend - Retrieve specific pods by using the selector parameter.
    1. kubectl get all --selector env=prod - Retrieve all objects with the label env: prod.
    2. k get po --selector env=prod,bu=finance,tier=frontend - Chain multiple labels together.
  5. kubectl api-resources - Describe kubernetes resources
    1. kubectl api-resources | grep replicaset - Describe the version replicaset uses
  6. kubectl explain replicaset | grep VERSION - Explain a particular service and search for a specific parameter
  7. kubectl edit replicaset new-replica-set - Edit a resource; saving the file applies the change to the cluster.
  8. kubectl get po - Shorthand for getting pods
  9. kubectl scale --replicas=3 -f foo.yaml - Scale a service defined in foo.yaml
  10. kubectl scale <replicaset-name OR any-applicable-service-name> --replicas=10 - Scale based on service name. For example, a deployment as shown below.
    1. kubectl scale deployment --replicas=1 app-backend - Scale the number of pods in the deployment to 1.
    2. kubectl scale --replicas=3 rs/foo - Scale a replicaset service named foo, that is increase the number of pods to 3
  11. kubectl replace -f foo.yaml - Apply changes in foo.yaml (Analogous to update)
  12. <service-name.namespace.svc.cluster.local> - To access a service in another namespace
    1. cluster.local - Domain name
    2. svc - Subdomain name
  13. kubectl get po --namespace=<namespace> - Get pods or perform any other service for that matter in a specific namespace
  14. kubectl config set-context $(kubectl config current-context) --namespace=<namespace> - To permanently switch to a different namespace
  15. kubectl get rs --all-namespaces - Get replicasets or any service for that matter under all namespaces
  16. kubectl run custom-nginx --image=nginx --port=8080 - Deploy a pod and expose a specific container port
  17. kubectl run httpd --image=httpd:alpine --port=80 --expose - Create and expose a port with ClusterIP. Can also be accomplished through
    1. kubectl run httpd --image=httpd:alpine
    2. kubectl expose pod httpd --name=httpd --type=ClusterIP --port=80
  18. kubectl run app-pod --image repository/image-name -- --color green - The -- after image-name is used to separate kubectl command line arguments from the application's. In this case, the application has a command line argument called color. If you want to set this through the command line, use -- to separate app arguments from kubectl arguments.
    1. --command can be used to override the command itself in addition to the arguments. For example, if the original command is python app.py --color red, using --command -- python3 app.py --color green in the kubectl command overrides both the command and the argument.
  19. kubectl create configmap <cm name> --from-literal environment=dev --from-literal app=frontend - Create a configmap imperatively from a list of values
    1. kubectl create configmap <cm name> --from-file <path-to-file> - Create a configmap imperatively from a file
  20. kubectl create secret generic <secret name> --from-literal password=dev - Create a secret imperatively from a list of values
  21. echo -n "<confidential_string>" | base64 - Pipes the secret (without a newline character) to the base64 utility, to encode the secret
    1. -n eliminates the newline character, as echo usually adds a trailing newline character at the end.
    2. echo -n "<encoded_string>" | base64 --decode - Decode the secret
  22. docker run --cap-add <privilege-name> ubuntu - Execute a container with an additional privilege (e.g. --cap-add MAC_ADMIN)
    1. docker run --cap-drop <privilege-name> ubuntu - Execute a container without a privilege
    2. docker run --privileged ubuntu - Execute a container with all privileges
    3. docker run --user=1001 ubuntu sleep 3600 - Execute a process with a specific user ID instead of the default root user
  23. kubectl create serviceaccount <account_name> - Create a service account
    1. Use kubectl get serviceaccount to obtain a list of service accounts
    2. kubectl create token <serviceaccount-name> - Creates a token for a service account
    3. kubectl set serviceaccount deploy/web-dashboard dashboard-sa - Update the service account of a deployment
  24. kubectl taint node <node_name> key=value:taint-effect - Taint a node with a key value pair for context
    1. kubectl taint nodes <node-name> <taint-key>:<taint-effect>- - Use the minus sign to indicate the removal of a taint effect
    2. kubectl label nodes <node_name> <label-key>=<label-value> - Label a node and then use the same labels in pod configuration, with nodeSelector
  25. kubectl logs <pod-name> -n <namespace> - Print the logs of a pod
  26. kubectl replace --force -f <file-name.yaml> - Replaces a resource based on the definition in file-name.yaml
  27. kubectl -n elastic-stack exec -it app -- cat /log/app.log - Execute a command inside a pod
  28. kubectl rollout status deployment/frontend-deployment - Retrieve the rollout status of a deployment
    1. kubectl rollout history deployment/frontend-deployment - Retrieve the rollout history of a deployment
      1. kubectl rollout history deployment nginx --revision=1 - Use the --revision flag to check the specifications of a particular revision
    2. kubectl rollout undo deployment/backend-deployment - Undo a rollout for a deployment
      1. kubectl rollout undo deployment nginx --to-revision=1 - Use the --to-revision flag to rollout to a specific revision
    3. kubectl set image deployment/myapp-deployment nginx=nginx:1.9.1 - Update a deployment. The nginx object specified after the deployment name corresponds to the name of the container. Alternatively, you can use the kubectl create -f <filename>.yaml command to update a deployment.
      1. kubectl set image deployment nginx nginx=nginx:1.17 --record - Use the --record flag (deprecated in recent Kubernetes versions) to save the command used to update a deployment, so that you can view it against the revision number, under the CHANGE-CAUSE column.
  29. k create cronjob email-report-cron-job --image=ubuntu --dry-run=client -o yaml --schedule="0 16 * * 5" > email-cron-job.yaml - Create a cron job that runs at 4 PM every Friday.
  30. kubectl create service nodeport <service-name> --tcp=<port>:<target-port> --node-port=<node-port> - Create a NodePort service.
    1. Example: kubectl create service nodeport my-service --tcp=80:8080 --node-port=30080
    2. kubectl create service clusterip backend --tcp=80:8080 - Creates a ClusterIp service
    3. kubectl create service loadbalancer frontend --tcp=80:8080 - Creates a load balancer service
  31. kubectl create ingress <ingress-name> --rule="host/path=service:port" - Create an ingress.
    1. kubectl create ingress ingress-blog --rule="chat.maxeffortgazette/chat*=chat-service:80" - Example command to create an ingress resource
  32. There is no kubectl create pv shortcut; Persistent Volumes are created declaratively: define the PV (capacity, access modes, hostPath, storage class) in a manifest and create it with kubectl apply -f pv.yaml
  33. kubectl config view - Retrieve configuration details such as the clusters, users, and contexts.
    1. kubectl config view --kubeconfig another-k8s-config - Retrieve configuration details from a specific configuration file another-k8s-config is the filename, in this case.
    2. kubectl config --kubeconfig=/root/config-file-name use-context test-user@test - Modify the config settings based on the configuration defined in a file.
    3. kubectl config --kubeconfig=/root/config-file-name current-context - Get current context
  34. kubectl auth can-i - Check if you have access to a particular resource
    1. kubectl auth can-i create deployments
    2. kubectl auth can-i delete nodes
    3. kubectl auth can-i create deployments --as dev-user - Check if a specific user has a privilege (in this case creating deployments)
  35. kubectl get po -n=kube-system - Retrieve pods part of the core infrastructure of the Kubernetes cluster. The kube-system namespace consists of the core Kubernetes system components.
    1. kubectl describe pod kube-apiserver-controlplane -n kube-system - Describes the pod corresponding to the control plane in the Kubernetes cluster.
    2. kubectl api-resources --namespaced=true/false - View api-resources that are namespaced or cluster scoped.
  36. k create role developer --resource=deployments.apps --verbs=list,create,delete - Create a role called developer that allows users to list, create, and delete deployments.
  37. k create rolebinding developer-user-binding --user=dev-user-1 --role=developer - Create a role binding called developer-user-binding that binds the developer role to dev-user-1
  38. k create clusterrole node-cluster-role --resource=nodes --verb=* - Create a cluster role for cluster scoped resources (in this case nodes).
  39. k create clusterrolebinding node-role-bind --clusterrole=node-cluster-role --user=michelle - Create a cluster role binding to bind a cluster role to a user - michelle.
  40. kube-apiserver -h | grep enable-admission-plugins - View Admission plugins enabled by default.
    1. kubectl exec kube-apiserver-controlplane -n kube-system -- kube-apiserver -h | grep enable-admission-plugins - Use this command to find admission plugins in a Kube ADM setup.
  41. kubectl-convert -f <filename.yaml> --output-version apps/v1 | kubectl apply -f - - Update the API version in an existing file to another version (usually a newer version). Ensure that the kubectl convert plugin is installed before executing this command.
  42. helm search hub <chart-name> - Search the Artifact hub for chart titles containing <chart_name>
    1. helm search repo <chart-name> - Search an installed repository for chart titles containing <chart_name>
  43. helm repo add [URL] - Add a chart repository from a URL
  44. helm repo list - List local chart repositories
  45. helm install [release-name] [chart-name] - Install a chart on a K8s cluster.
    1. helm uninstall [release-name] - Uninstall a chart on a cluster
  46. helm list - List all charts
  47. helm pull [chart-name] - To download but not install a chart
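The encode/decode pair from command 21 can be sketched concretely (the string s3cr3t is an arbitrary example):

```shell
# Encode without a trailing newline (-n), so the value matches what the Secret should store
echo -n "s3cr3t" | base64            # → czNjcjN0

# Decode it back
echo -n "czNjcjN0" | base64 --decode # → s3cr3t
```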