A common problem programmers face is that their local environment and platform do not match what is present on the server side. When they wish to collaborate, they run into platform dependence, which keeps them from working at full potential. Docker containers fill that gap.
What is a Docker Container?
The Linux kernel can run applications inside containers. Each container provides an independent runtime environment for its application while avoiding the overhead of a full-fledged VM.
By definition, as stated by Wikipedia, Docker is:
“an open-source project that automates the deployment of software applications inside containers by providing an additional layer of abstraction and automation of OS-level virtualization on Linux.”
A Docker container ensures that the code written by the programmer(s) will behave the same regardless of the computing environment. The following terms are in common use:
1. Image – The building block of a container. It is essentially a snapshot of a virtual machine, but much more lightweight.
2. Container – A method of virtualizing the operating system so that an application and its dependencies run as resource-isolated processes.
3. Package – A collection of code bundled into a single unit, so that it provides a set of functionality to the programmer wherever it is used.
4. Dockerfile – A file describing the steps needed to build a Docker image. One can think of it as a recipe that lists the ingredients and all the steps required.
5. Virtual Machine (VM) – Allows a person to run another (guest) OS inside the current (host) OS, as if it were just another program. With this environment, it is possible to use a number of different operating systems on a single machine.
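To make the Dockerfile idea concrete, here is a minimal sketch for a hypothetical Python application; the base image, file names, and `app.py` entry point are assumptions for illustration, not part of any real project:

```dockerfile
# Start from an official lightweight Python base image
FROM python:3.12-slim

# Set the working directory inside the image
WORKDIR /app

# Copy the dependency list first so this layer is cached between builds
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the rest of the application code
COPY . .

# Command executed when a container is started from this image
CMD ["python", "app.py"]
```

Running docker build -t myapp . in the same directory would produce an image; each instruction above becomes one layer of that image.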
Difference between a Virtual Machine and a Docker
The idea behind a virtual machine is to share hardware and software resources, which greatly reduces hardware maintenance costs. When one piece of hardware is prepared to host multiple running VMs, virtualization requires more bandwidth, processing capacity, and storage than a traditional desktop or server. A virtual machine is a program that acts as a virtual computer. VM workloads also need to be balanced, because one VM may use a great deal of storage while another uses very little.
Docker containers share the host operating system's kernel, which makes them more lightweight, though less isolated, than a fully virtualized system. Multiple containers can run on the same machine, each as an isolated process in user space, all sharing the same kernel. VMs, by contrast, are abstractions of hardware that turn one server into multiple servers, each running its own full operating system on the same machine.
Difference between an Image and a Container
When we use Docker, we start from a base image. We boot it, make the relevant changes, and save those changes as layers that form a new image. A container is basically an instance of an image: a running image is a container. If a person starts an image, they are running a container. Any number of containers may be running from the same image at the same time.
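As a sketch of the image/container distinction, the following shell session starts two containers from one image; the image name nginx and the container names are just examples, and the commands assume a running Docker daemon:

```shell
# Start two containers from the same image
docker run -d --name web1 nginx
docker run -d --name web2 nginx

# Both web1 and web2 appear as separate running containers,
# even though they were created from the identical nginx image
docker ps
```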
One can view all images with:
docker images
The running containers can be seen with:
docker ps
To see all containers, whether running or stopped:
docker ps -a
This topic is broad and takes practice before one becomes knowledgeable and confident; even the most experienced person makes mistakes at times. It is not possible to cover every topic in detail, and even a tutorial is no substitute for trying things yourself. We are open to suggestions if something is missing. Please give your feedback in the comment section.