Everything About Kubernetes Cluster Architecture

Containerization is gaining momentum, and many application software providers are moving towards this alternative to virtual machines. The process encapsulates an application, along with its dependencies, in a container that shares the host operating system's kernel. With multiple applications opting for containerization, container orchestration holds the key to running them all simultaneously. This is where Kubernetes, an open-source container orchestrator, is gaining traction.

Kubernetes is used to automate the deployment, scaling and management of containerized applications. It has become an industry standard, and developers are readily embracing it because it helps them move containers from the development environment into production efficiently. Kubernetes also underpins one of the cutting-edge application development approaches, serverless architecture.

What is Kubernetes?

Before jumping right into Kubernetes cluster architecture, one should first know what Kubernetes is. So, here we go. Kubernetes is a system that executes and coordinates containerized applications across a cluster of infrastructure resources. It is a platform built to control the life cycle of containerized applications and services using methods that provide predictability, flexibility and high availability.

What is Kubernetes Cluster Architecture?

Kubernetes follows a client-server architecture. It can be viewed as a system designed with multiple layers, each higher layer abstracting the intricacy of the levels below it. At the base level, Kubernetes joins individual physical or virtual machines into a cluster and employs a shared network for communication between the servers. The cluster is the physical platform that houses the Kubernetes components and determines the capabilities and workloads of each component.

Kubernetes Nodes

Kubernetes Master Node

The Kubernetes master is a combination of three system processes that run on a single node, called the master node. These processes are the kube-apiserver, kube-controller-manager and kube-scheduler.

A non-master node runs two processes, as against the master's three:

  • Kubelet, which communicates with the Kubernetes Master.

  • Kube-proxy, a network proxy that runs on each node, handles Kubernetes networking services and reflects the current state of those services in its routing rules.

In the Kubernetes ecosystem, one server functions as the master server. As the name indicates, it acts as the controller of the cluster: it provides the API for users and clients, performs health checks on the other servers, splits work into components in the best possible manner, assigns that work, and orchestrates communication between components. The master server is the primary point of contact within the cluster and is accountable for most of the centralized logic Kubernetes executes.

Kubernetes Master Server Components

The Kubernetes master acts as the gateway between administrators and users. It also provides several cluster-wide systems for the comparatively simple worker nodes. The master server components jointly work to accept user requests, determine the best way to schedule containers, authenticate clients and nodes, manage cluster-wide networking, and take care of scaling and health-checking activities.

Etcd: Etcd is a distributed, highly available key-value store that can be configured to span multiple machines. Kubernetes uses Etcd to store configuration data that can be accessed by every node in the cluster. It can also be employed for service discovery and can help components configure or re-configure themselves according to the latest data.

Etcd also helps implement features such as leader election and distributed locking over the cluster state. Values can be changed or retrieved through a simple HTTP/JSON API. Etcd can be configured on a single master server or shared among several machines in a production environment. One important factor to take care of is to ensure that Etcd is network-accessible to all the other Kubernetes machines.

Kube-apiserver: The API server is the primary management point; it permits the configuration of Kubernetes workloads and organizational units. The API server also makes sure that the Etcd store and the service details of the deployed containers stay in sync with each other. It acts as the pathway between the various components, working to maintain cluster health and to circulate information and commands.

Kube-controller-manager: The controller manager is in charge of several responsibilities. It runs the multiple controllers that manage the state of the cluster, manage workload life cycles and execute routine tasks. For example, a replication controller makes sure that the number of replicas specified for a pod matches the number deployed on the cluster. The details of these actions are written to Etcd, and the controller manager watches for changes through the API server. When a change occurs, the relevant controller reads the new information and executes the procedure needed to reach the desired state. Changes include actions like scaling an application up or down, adjusting endpoints, and so on.

Kube-scheduler: The task of assigning workloads to nodes in the cluster is performed by the scheduler. The scheduler reads a workload's operating requirements, analyzes the current infrastructure environment, and places the work on an acceptable node or nodes. It also tracks the available capacity on each host and makes sure that work is not scheduled in excess of the available resources; to do this, it must know the total capacity of each server as well as the resources already allocated to existing workloads there.
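As a sketch of what the scheduler works with, a pod can declare resource requests that the scheduler checks against each node's remaining capacity before placing it (all names and values here are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-pod            # illustrative name
spec:
  containers:
  - name: app
    image: nginx:1.25      # any container image
    resources:
      requests:            # the scheduler only places this pod on a node
        cpu: "250m"        # with at least this much unallocated CPU/memory
        memory: "128Mi"
      limits:              # hard caps enforced at runtime, not by the scheduler
        cpu: "500m"
        memory: "256Mi"
```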

Cloud-controller-manager: Kubernetes can interact with several infrastructure providers to understand and control the state of the resources in the cluster, which lets it be deployed in many environments. Internally, Kubernetes mostly works with generic representations of resources, such as attachable storage and load balancers, so it requires a way to map those generic resources to the actual resources provided by a particular cloud provider.

A cloud controller manager acts as the bridge that permits Kubernetes to connect to providers with different capabilities, features and APIs without changing the generic constructs internally. This allows Kubernetes to update its state in accordance with the data collected from the cloud provider, to adjust cloud resources as the system's needs change, and to create and use additional cloud services to satisfy requirements submitted to the cluster.


Kubernetes Non-Master Node Components

Container Runtime: Each node requires a container runtime (such as containerd or Docker). The container runtime handles starting and managing containers, the applications that run in isolated but lightweight operating environments. Every unit of work in a cluster is executed as one or more containers, and the container runtime on each node is the component that finally runs the containers defined in the workloads submitted to the cluster.

Kubelet: The kubelet is a small service that acts as the primary point of contact between a node and the control plane. It transmits data to and from the control plane services and communicates with the Etcd store to read configuration details and write new values. The kubelet interacts with the master components to receive commands and work; work arrives in the form of a manifest that describes the workload and its operating parameters. The kubelet then regulates the state of that work on its node and manages the container runtime, instructing it to create or destroy containers as required.

Kube-proxy: Every node server runs a small proxy service called kube-proxy. Kube-proxy is responsible for managing individual host subnetting and for making services available to other components. It forwards requests to the appropriate containers, performs primitive load balancing, and ensures the networking environment is predictable and accessible, yet isolated where appropriate.
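For instance, kube-proxy is what makes a Service definition like the following reachable: it watches Services through the API server and routes traffic arriving at the service port to one of the matching pods (the names and ports below are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web-service        # illustrative service name
spec:
  selector:
    app: web               # kube-proxy load-balances across pods with this label
  ports:
  - port: 80               # port exposed by the service
    targetPort: 8080       # port the pod containers actually listen on
```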

Kubernetes Objects and Workloads: Kubernetes adds extra layers of abstraction over the container interface to deliver scaling, resiliency and other life cycle management features. Users do not manage containers directly; instead, they define and interact with instances composed of the various objects defined by the Kubernetes object model.


Pods: The primary unit that Kubernetes deals with is referred to as a pod. Containers are not assigned to hosts directly; instead, one or more tightly coupled containers are joined together into an object called a pod. The containers in a pod execute together, share the same life cycle, and are scheduled on the same node. A pod is treated as a single unit that shares the same environment, volumes and IP space.

A pod generally consists of a main container that performs the major workload and, optionally, some helper containers that assist with closely related tasks. For example, if the main container in a pod runs the primary application server, a helper container might watch an external repository and copy files into the shared file system when changes are detected.
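A minimal sketch of that pattern: two containers in one pod sharing a volume, so whatever the helper writes is immediately visible to the main server. The helper's image name is hypothetical; any synchronization tool would fill that role:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-with-helper          # illustrative pod name
spec:
  volumes:
  - name: shared-content         # volume shared by both containers
    emptyDir: {}
  containers:
  - name: app-server             # main container serving the workload
    image: nginx:1.25
    volumeMounts:
    - name: shared-content
      mountPath: /usr/share/nginx/html
  - name: content-sync           # helper container; image name is hypothetical
    image: example/repo-sync:latest
    volumeMounts:
    - name: shared-content
      mountPath: /content
```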

Replication Controllers and Replica Sets: A replication controller is an object that defines a pod template and regulates parameters to manage groups of identical, replicated pods. The replicated pods are scaled horizontally by increasing or decreasing the number of running copies, which distributes load and enhances availability within Kubernetes. A pod template, essentially a pod definition, is embedded within the replication controller configuration; the replication controller uses this template to create new pods as and when they are required.

The replication controller also ensures that the number of pods deployed in the cluster equals the number described in its configuration. It creates new pods if an existing pod or its underlying host fails, and if the replica count in the configuration changes, it starts or kills pods to match the configured number. A replication controller can also perform rolling updates, moving a set of pods to a new version one by one to minimize the impact on application availability.

Replica sets are an iteration of the replication controller design. They provide greater flexibility and likewise ensure that a specific number of pods is running at any given time. Replica sets offer richer replica-selection capabilities than replication controllers, but they cannot perform rolling updates on their own to cycle backends to a newer version; that responsibility falls to higher-level objects.
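A replica set manifest illustrates the structure described above: a replica count, a label selector, and the embedded pod template used to stamp out new copies (names are illustrative):

```yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: web-rs               # illustrative name
spec:
  replicas: 3                # the controller keeps exactly 3 pods running
  selector:
    matchLabels:
      app: web               # set-based selectors are richer than the
                             # equality-only selectors of replication controllers
  template:                  # embedded pod template used to create new pods
    metadata:
      labels:
        app: web             # must match the selector above
    spec:
      containers:
      - name: web
        image: nginx:1.25
```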

Deployments: One of the most common workloads to create and manage is the deployment. Replica sets are the building block of a deployment, and deployments built on replica sets perform rolling updates much more effectively than replication controllers.

With replication controllers, users had to submit a plan for a new replication controller that would replace the current one. Tasks like tracking history, recovering from network failures during an update, and rolling back improper changes were difficult to perform and were left as the user's responsibility.

A deployment is a high-level object that makes it easy to manage the life cycle of replicated pods. Modifying a deployment only requires changing its configuration; Kubernetes then adjusts the replica sets accordingly, manages the transitions between application versions, maintains an event history, and provides automatic undo capabilities. These features make deployments one of the most frequently used Kubernetes objects.
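The deployment manifest looks much like a replica set, with the rollout behavior declared up front; changing the pod template (for example, the image tag) is all it takes to trigger a managed rolling update (names and versions below are illustrative):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-deploy           # illustrative name
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  strategy:
    type: RollingUpdate      # pods are replaced gradually, preserving availability
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.25    # editing this tag triggers a managed rollout
```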

Stateful sets: Stateful sets are specialized pod controllers used when there are special requirements around deployment ordering, persistent data or stable networking. A stateful set provides a stable networking identity by giving each pod a unique number-based name that is retained even if the pod is moved to another node. On rescheduling, a pod is transferred along with its persistent storage volume, and the storage volume persists even after the pod is deleted, to prevent data loss.

Stateful sets execute deployment and scaling operations in order of the unique identifiers in the pods' names, which gives greater predictability and control over the order of execution.
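A sketch of a stateful set for a database, assuming a headless service named `db-headless` exists for the stable network identities; the `volumeClaimTemplates` section is what gives each pod (`db-0`, `db-1`, ...) its own persistent volume that survives rescheduling:

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: db                   # pods will be named db-0, db-1, ...
spec:
  serviceName: db-headless   # assumed headless service for stable DNS names
  replicas: 2
  selector:
    matchLabels:
      app: db
  template:
    metadata:
      labels:
        app: db
    spec:
      containers:
      - name: db
        image: postgres:16   # illustrative stateful workload
        volumeMounts:
        - name: data
          mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:      # each pod gets its own persistent volume claim
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 1Gi
```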

Daemon Sets: Daemon sets are another set of special pod controllers; they run a copy of a pod on each node in the cluster. They are mostly used to deploy pods that perform maintenance or deliver services for the nodes themselves. Because daemon sets are required throughout the fleet, they can bypass the usual pod scheduling restrictions on a pod-by-pod basis to ensure that these crucial services keep running.
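A daemon set for a node-level log agent might look like the following; the toleration is one example of how such pods bypass scheduling restrictions (here, being allowed onto control-plane nodes). The agent name and image tag are illustrative:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-log-agent       # illustrative node-level agent
spec:
  selector:
    matchLabels:
      app: log-agent
  template:
    metadata:
      labels:
        app: log-agent
    spec:
      tolerations:           # lets the pod run even on tainted control-plane nodes
      - key: node-role.kubernetes.io/control-plane
        effect: NoSchedule
      containers:
      - name: agent
        image: fluent/fluentd:v1.16   # illustrative image tag
```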

Jobs and Cron Jobs: A job is a Kubernetes workload designed to deliver a more task-based workflow, in which the containers are expected to exit successfully once they have completed their work. Jobs are particularly useful for batch processing rather than continuous services.

Cron jobs add a scheduling component on top of jobs: they are used to schedule jobs to run in the future, on a regular or recurring basis.
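A cron job manifest ties the two together, assuming a recent Kubernetes version (the `batch/v1` CronJob API); the job's containers are expected to exit successfully, which is why the restart policy is not `Always`:

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: nightly-report       # illustrative name
spec:
  schedule: "0 2 * * *"      # standard cron syntax: every day at 02:00
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure    # job pods must terminate, not run forever
          containers:
          - name: report
            image: busybox:1.36
            command: ["sh", "-c", "echo generating report"]
```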

Demand for containerization grows with each passing day, and so orchestration is the need of the hour. Kubernetes is an exciting technology that allows users to run highly scalable, highly available containerized workloads. These robust features make Kubernetes stand out in the open-source world, which is why interest in Kubernetes cluster architecture will continue to grow.

