❄️Day 30 - Kubernetes Architecture

❄️Kubernetes Overview

With the widespread adoption of containers among organizations, Kubernetes, the container-centric management software, has become the de facto standard for deploying and operating containerized applications, and one of the most important parts of DevOps.

Kubernetes was originally developed at Google and released as open source in 2014. It builds on 15 years of Google's experience running containerized workloads, together with valuable contributions from the open-source community, and was inspired by Borg, Google's internal cluster management system.

❄️Tasks

  1. What is Kubernetes? Write in your own words, and explain why we call it k8s.

    Kubernetes (sometimes shortened to K8s with the 8 standing for the number of letters between the “K” and the “s”) is an open source system to deploy, scale, and manage containerized applications anywhere.

    Kubernetes automates operational tasks of container management and includes built-in commands for deploying applications, rolling out changes to your applications, scaling your applications up and down to fit changing needs, monitoring your applications, and more—making it easier to manage applications.
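
    For instance, a few illustrative kubectl commands cover that whole lifecycle (a minimal sketch, assuming a working cluster with kubectl configured against it; the name `web` and the public nginx image are stand-ins):

    ```bash
    # Create a Deployment running one nginx container
    kubectl create deployment web --image=nginx

    # Scale it to three replicas to handle more traffic
    kubectl scale deployment web --replicas=3

    # Roll out a new image version and watch the rollout progress
    kubectl set image deployment/web nginx=nginx:1.27
    kubectl rollout status deployment/web
    ```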

  2. What are the benefits of using k8s?

    Kubernetes is an open-source container orchestration platform that helps automate the deployment, scaling and management of containerized applications. Kubernetes and containerization provide numerous benefits to organizations and developers looking to build and maintain scalable, resilient and portable applications. Here are some of the key benefits of Kubernetes:

    • Containerization: Kubernetes leverages containerization technology, such as Docker, to encapsulate applications and their dependencies into isolated, lightweight units called containers. Containers offer several advantages, including improved resource utilization, easy application packaging and consistent behavior across different environments.

    • Scalability: Kubernetes enables effortless scaling of applications. It allows you to scale your microservices horizontally by adding or removing instances, known as pods, based on workload demands. This helps ensure that your application can handle increased traffic or higher resource requirements, improving performance and responsiveness, and is especially valuable when moving workloads to a DevOps model.

    • High Availability: Kubernetes supports high availability by providing automated failover and load balancing mechanisms. It can automatically restart failed containers, replace unhealthy instances and distribute traffic across healthy instances. This ensures that your application remains available even in the event of infrastructure or container failures. This helps reduce downtime and improve reliability.

    • Resource Efficiency: Kubernetes optimizes resource allocation and utilization through its advanced scheduling capabilities. It intelligently distributes containers across nodes based on resource availability and workload requirements. This helps maximize the utilization of computing resources, minimizing waste and reducing costs.

    • Self-Healing: Kubernetes has self-healing capabilities which means it automatically detects and addresses issues within the application environment. If a container or node fails, Kubernetes can reschedule containers onto healthy nodes. It can also replace failed instances and even perform automated rolling updates without interrupting the overall application availability.

    • Portability: Kubernetes offers portability, allowing applications to be easily moved between different environments, such as on-premises data centers, public clouds or hybrid setups. Its container-centric approach ensures that applications and their dependencies are bundled together. This reduces the chances of compatibility issues and enables seamless deployment across diverse infrastructure platforms.

    • DevOps Enablement: Kubernetes fosters collaboration between development and operations teams by providing a unified platform for application deployment and management. It enables developers to define application configurations as code using Kubernetes manifests, allowing for version-controlled, repeatable deployments (a minimal manifest is sketched below). Operations teams can leverage Kubernetes to automate deployment workflows, monitor application health and implement continuous integration and delivery (CI/CD) pipelines.

Kubernetes provides a robust platform for managing containerized applications at scale. Its benefits include improved scalability, high availability, resource efficiency, self-healing capabilities, portability and support for implementing DevOps, Cloud and DevSecOps practices. By leveraging Kubernetes, organizations can streamline application deployment and operations, increase productivity and deliver more reliable and resilient applications.
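
As a concrete illustration of configuration as code, here is a minimal Deployment manifest (a sketch; the name `web`, the nginx image, and the replica count are illustrative assumptions, not something prescribed by Kubernetes):

```yaml
# web-deployment.yaml: a minimal Deployment (illustrative names and image)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3                # desired number of pod replicas
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: nginx
        image: nginx:1.27    # container image to run
        ports:
        - containerPort: 80
```

Applying it with `kubectl apply -f web-deployment.yaml` asks Kubernetes to converge the cluster toward this desired state, which is what enables the scaling and self-healing behaviors described above.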

  3. Explain the architecture of Kubernetes. What is the Control Plane?

    A Kubernetes cluster mainly consists of worker machines, called nodes, and a Control Plane. Every cluster has at least one worker node. The kubectl CLI communicates with the Control Plane, and the Control Plane manages the worker nodes.

    Kubernetes – Cluster Architecture

    Kubernetes has a client-server architecture with master and worker nodes: the master (control plane) components are typically installed on a single Linux system, while the worker nodes run on many Linux machines.

    Kubernetes Components

    Kubernetes is composed of a number of components, each of which plays a specific role in the overall system. These components can be divided into two categories:

    • Worker nodes: the machines on which our containers are actually deployed and run. Each Kubernetes cluster requires at least one worker node.

    • Control plane: the set of components that manages the worker nodes and the pods running on them (a quick way to inspect both is sketched below).
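
To make the split concrete, you can list the worker machines and the control plane's own pods (a sketch assuming a kubeadm-style cluster, where control plane components run as pods in the kube-system namespace):

```bash
# List the machines (nodes) that make up the cluster
kubectl get nodes -o wide

# Control plane components often run as pods in kube-system
kubectl get pods -n kube-system
```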

Control Plane Components

The Control Plane is a collection of components that manage the overall health of the cluster, for example setting up new pods, destroying pods, or scaling them. Four core services run on the Control Plane: the kube-apiserver, the kube-scheduler, the kube-controller-manager, and etcd.

Kube-API server

The API server is the component of the Kubernetes control plane that exposes the Kubernetes API. It is the initial gateway to the cluster, receiving updates and queries from clients such as the kubectl CLI; kubectl communicates with the API Server to describe what needs to be done, such as creating or deleting pods. The API Server also works as a gatekeeper: it validates the requests it receives and then forwards them to the other processes. No request can reach the cluster directly; everything must pass through the API Server.
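
You can watch kubectl talking to the API server by raising its log verbosity (a sketch; the verbosity flag is standard kubectl behavior, though the exact request URLs will vary by cluster):

```bash
# -v=8 prints the HTTP requests kubectl sends to the API server
kubectl get pods -v=8
```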

Kube-Scheduler

When the API Server receives a request to schedule pods, it passes the request on to the Scheduler. The Scheduler decides which node each pod should run on, taking resource availability and constraints into account to keep the cluster efficient.
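
You can also influence the Scheduler's decision from a pod spec. For example, a nodeSelector restricts which nodes are eligible (a sketch; the `disktype: ssd` label is hypothetical and must actually exist on a node for the pod to be scheduled):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: ssd-pod
spec:
  nodeSelector:
    disktype: ssd        # only nodes labeled disktype=ssd are considered
  containers:
  - name: nginx
    image: nginx
```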

Kube-Controller-Manager

The kube-controller-manager is responsible for running the controllers that handle the various aspects of the cluster’s control loop. These controllers include the replication controller, which ensures that the desired number of replicas of a given application is running, and the node controller, which ensures that nodes are correctly marked as “ready” or “not ready” based on their current state.
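
This control loop is easy to observe: if a pod owned by a deployment is deleted, the replication logic immediately creates a replacement (a sketch; it assumes the `web` deployment from the earlier manifest):

```bash
# Find the deployment's pods
kubectl get pods -l app=web

# Delete one of them (substitute a real pod name from the list above)
kubectl delete pod web-xxxxxxxxxx-xxxxx

# The controller notices and restores the desired replica count
kubectl get pods -l app=web --watch
```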

etcd

The fourth control plane service, etcd, is a consistent, distributed key-value store that holds the entire state of the cluster. The API Server reads from and writes to etcd; the other components learn about cluster state through the API Server.

Node Components

These are the nodes where the actual work happens. Each node can run multiple pods, and pods have containers running inside them. Three processes run on every node to schedule and manage those pods.

Container runtime

A container runtime is needed to run the application containers inside pods, for example Docker.

Kubelet

The kubelet interacts with both the container runtime and the node. It is the process responsible for starting a pod and the container inside it.

Kube-proxy

It is the process responsible for forwarding requests from Services to pods. It contains the logic needed to route each request to the right pod on the worker node.
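
A Service is what kube-proxy implements on each node. A minimal Service manifest looks like this (a sketch; it assumes pods labeled app=web, as in the earlier Deployment example):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: web          # forward traffic to pods carrying this label
  ports:
  - port: 80          # port the Service listens on
    targetPort: 80    # port on the pods to forward to
```

kube-proxy watches Services and their endpoints via the API server and programs the node's forwarding rules (iptables or IPVS) accordingly.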

  4. Write the difference between kubectl and kubelet.

    Two fundamental components within the Kubernetes ecosystem are:

    • Kubectl

    • Kubelet

Kubectl and Kubelet both play crucial roles in Kubernetes operations, but they serve different purposes and operate in different environments.

Kubectl is the command-line tool of choice for administrators, developers, and operators who need to interact with Kubernetes clusters. It acts as the primary interface for controlling and managing Kubernetes resources, making it an indispensable tool for tasks like deploying applications, scaling workloads, monitoring cluster status, and executing administrative commands.

Kubectl operates from a user’s local machine and communicates with the Kubernetes API server to issue commands and manage the cluster’s desired state. Configuration for Kubectl is typically stored in a user’s local configuration file, allowing for easy context switching between different clusters.

Kubelet, on the other hand, serves as the Kubernetes node agent, running on every node within a Kubernetes cluster. Its primary responsibility is to manage containers on the node and ensure they align with the cluster’s desired state.

Kubelet creates and supervises pods, interacts with the container runtime (such as Docker), and reports node status to the control plane components. While Kubelet plays a vital role in maintaining the health of containers and nodes, it operates autonomously within each node and is managed by cluster administrators, making it less directly accessible to end-users.
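
The difference is visible in where each one runs (a sketch; it assumes a kubeadm-style node where the kubelet is managed as a systemd service):

```bash
# kubectl is a client binary on your workstation
kubectl version --client

# kubelet is a daemon on each node (run this on the node itself)
systemctl status kubelet
```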

By understanding the roles of Kubectl and Kubelet, users and administrators can harness the full power of Kubernetes for managing containerized applications effectively.

  5. Explain the role of the API server.

    The core of Kubernetes' control plane is the API server. The API server exposes an HTTP API that lets end users, different parts of your cluster, and external components communicate with one another.

    The Kubernetes API lets you query and manipulate the state of API objects in Kubernetes (for example: Pods, Namespaces, ConfigMaps, and Events).

    Most operations can be performed through the kubectl command-line interface or other command-line tools, such as kubeadm, which in turn use the API. However, you can also access the API directly using REST calls. Kubernetes provides a set of client libraries for those looking to write applications using the Kubernetes API.
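
    For example, kubectl's raw mode issues a REST call through your configured credentials (a sketch of direct API access; the paths shown are from the core v1 API):

    ```bash
    # GET the list of namespaces straight from the API server
    kubectl get --raw /api/v1/namespaces

    # Or proxy the API to localhost and use any HTTP client
    kubectl proxy --port=8001 &
    curl http://localhost:8001/api/v1/namespaces
    ```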

    Each Kubernetes cluster publishes the specification of the APIs that the cluster serves. There are two mechanisms that Kubernetes uses to publish these API specifications; both are useful to enable automatic interoperability. For example, the kubectl tool fetches and caches the API specification for enabling command-line completion and other features. The two supported mechanisms are as follows:

    • The Discovery API provides information about the Kubernetes APIs: API names, resources, versions, and supported operations. This is a Kubernetes-specific mechanism, separate from the Kubernetes OpenAPI, and is intended as a brief summary of the available resources; it does not detail the specific schemas of those resources. For resource schemas, refer to the OpenAPI document.

    • The Kubernetes OpenAPI Document provides (full) OpenAPI v2.0 and 3.0 schemas for all Kubernetes API endpoints. OpenAPI v3 is the preferred format, as it provides a more comprehensive and accurate view of the API. It includes all the available API paths, as well as all resources consumed and produced for every operation on every endpoint, plus any extensibility components that a cluster supports. The data is a complete specification and is significantly larger than that from the Discovery API. (Both mechanisms can be explored from the command line, as sketched below.)
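
    Both mechanisms back everyday kubectl features, and you can query them directly (a sketch; these are standard kubectl subcommands):

    ```bash
    # Discovery API: which resource types and versions does this cluster serve?
    kubectl api-resources
    kubectl api-versions

    # OpenAPI document: the full v2 schema for all endpoints (truncated here)
    kubectl get --raw /openapi/v2 | head -c 400
    ```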

📚Happy Learning :)