
Definition: Docker

Docker has become the de facto standard for container-based implementations. From small-scale deployments to large-scale enterprise applications, Docker serves as the foundation for container-based orchestration.

Docker gained such popularity and adoption in the DevOps community in a short time because it was built for portability and designed for modern microservices architectures.

The evolution of containers

If you think containerization is a new technology, it’s not. Google has been using its own containerization technology in its infrastructure for years.

The concept of containers took shape in the 2000s. In fact, the roots go back to 1979, when chroot introduced the idea of changing the root directory of a process.

Several container-based projects followed in the 2000s, including FreeBSD Jails (2000), Solaris Zones (2004), OpenVZ (2005), Google’s Process Containers, which became cgroups (2006), and LXC (2008).

What is a Linux Container (LXC)?

Before diving directly into the concepts of Docker, you must first understand what a Linux Container is.

In a typical virtualized environment, one or more virtual machines run on a physical server using a hypervisor like Xen, Hyper-V, etc.

Containers, on the other hand, run on top of the operating system’s kernel. We can call this virtualization at the operating system level. Before we get into the underlying concepts of containers, you need to understand two key Linux concepts.

  • User space : All the code necessary for the execution of user programs (applications, processes) is called “user space”. When you initiate a program action, for example creating a file, the process in user space makes a system call to kernel space.
  • Kernel space : This is the heart of the operating system, where the kernel code resides.
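To make the user space/kernel space boundary concrete, here is a minimal sketch that traces the system calls a simple user-space command makes when it creates a file (it assumes the `strace` utility is installed, which is not part of every base install):

```shell
# Trace a user-space process crossing into kernel space.
# openat/write/close are the system calls issued when the shell
# creates and writes /tmp/demo.txt.
strace -f -e trace=openat,write,close sh -c 'echo hi > /tmp/demo.txt'
```

Every line `strace` prints is one transition from user space into kernel space and back.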

A container is a process

When you launch an application, for example an Nginx web server, you are actually launching a process. A process is a running instance of a program, with only a limited degree of isolation of its own.

What if we could isolate a process with only the files and configuration it needs to run? That is exactly what a container does.

A container is actually a process with enough isolated user-space components that it feels like a separate operating system.

The parent container process can have a child process. So we can say that a container is also a group of processes.

For example, when you launch an Nginx service, it launches a parent Nginx process. The parent process then spawns child processes like the cache manager, cache loader and workers.

So when you start an Nginx container, you launch a master Nginx process in its isolated environment. I’ll show you this practically in the sections below.
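A minimal sketch of this, assuming Docker is installed and can pull the official `nginx` image from Docker Hub:

```shell
# Start an Nginx container in the background.
docker run -d --name web nginx

# List the processes running inside the container: you should see the
# nginx master process and its worker processes, all isolated from the host.
docker top web

# Clean up.
docker rm -f web
```

`docker top` shows that the “container” is really just this small tree of processes running on the host, in its own isolated environment.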

Each container has its own isolated user space, and you can run multiple containers on a single host. Does this mean one container owns the entire OS? No. Unlike a VM with its own kernel, a container contains only the files needed for a specific distro and uses the host’s shared kernel. Even more interesting, you can run containers based on different Linux distros on a single host, all sharing the same kernel space.

For example, you can run a RHEL, CentOS, SUSE based container on an Ubuntu server. This is possible because for all Linux distributions, only the user space is different, and the kernel space is the same.
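A quick sketch of this, assuming Docker is installed and the `almalinux` and `opensuse/leap` images (stand-ins here for any RHEL-family or SUSE-family distro) are available on Docker Hub:

```shell
# On the host (say, Ubuntu), check the kernel version.
uname -r

# Containers from different distro families report the SAME kernel,
# because they share the host's kernel space...
docker run --rm almalinux:9 uname -r
docker run --rm opensuse/leap uname -r

# ...but each ships its own user space.
docker run --rm almalinux:9 cat /etc/os-release
```

All three containers print the host’s kernel version, while `/etc/os-release` differs per image: only user space varies between distros.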

Underlying Concept of Linux Containers

The following image gives you a visual representation of Linux containers.

Containers are isolated within a host using two features of the Linux kernel called namespaces and cgroups.

A real-world analogy would be an apartment building. Even though it is one large building, each apartment is isolated for an individual household, with its own identity and its own water, gas and electricity meters. Concrete, steel structures and other building materials establish this isolation. You have no visibility into the other apartments unless their occupants let you in.

Likewise, a single host can run multiple containers. To isolate each container with its own CPU, memory, IP address, mount points and processes, you need two Linux kernel features: namespaces and cgroups.
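You can see namespaces in action without Docker at all. A minimal sketch using `unshare` from util-linux (it needs root, or unprivileged user namespaces enabled), plus Docker’s own flags for cgroup limits:

```shell
# Create new PID and mount namespaces and remount /proc inside them,
# so `ps` only sees processes in the new namespace. The shell runs as
# PID 1 -- the same isolation primitive Docker uses for containers.
sudo unshare --pid --fork --mount-proc sh -c 'ps -ef'

# cgroups limit resources. Docker exposes them as flags, e.g. cap a
# container at 256 MB of memory and one CPU:
docker run --rm -m 256m --cpus 1 nginx
```

Namespaces control what a container can *see* (processes, mounts, network); cgroups control what it can *use* (CPU, memory, I/O).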


What exactly is Docker?

Docker is a popular open-source project written in Go and originally developed by dotCloud (a PaaS company).

It is essentially a container engine that uses Linux kernel features like namespaces and cgroups to create containers on an operating system.

That is, all the container concepts and features we covered in the LXC section are made very simple by Docker. You just run a few Docker commands and options to get containers up and running.

You may be wondering how Docker is different from a Linux Container (LXC), as all the concepts and implementation are similar.

Docker was originally built on Linux Containers (LXC). Later, Docker replaced LXC with its own container runtime, libcontainer (which is now part of runc).

In addition to being a container technology, Docker has well-defined packaging components that make packaging applications easier. Before Docker, it was not easy to manage containers. In other words, it does all the work necessary to decouple your application from the infrastructure by packaging all application system requirements into a container.

For example, if you have a Java jar file, you can run it on any server that has Java installed. In the same way, once you package an application into a container with Docker, you can run it on any other host on which Docker is installed.

Difference between Docker and a container

Docker is a technology, or tool, developed to manage container implementations efficiently.

So, can I manage a container without Docker? Yes, of course. You can use LXC technology to run containers on Linux servers.

Everything you need to know about docker

Docker is a widely used containerization platform. Containers and microservices go hand in hand in modern application development and deployment, an approach often called “cloud-native” development, which is why Docker has become such a popular choice in businesses. If you want a complete guide to Docker, you can visit the website of our partner datascientest.com. A container is a lightweight execution environment and an alternative to traditional virtualization with virtual machines.

To develop modern software successfully, applications deployed on the same host must be isolated so they do not interfere with each other, and each application needs its own packages to run. Virtual machines have traditionally been used to separate applications on the same system and to reduce conflicts between software components and competition for resources. Docker offers an open-source, secure and cost-effective alternative. Docker can also run containers on Windows or macOS through a Linux virtualization layer. So you know where to go for more information about Docker and to understand all the ins and outs of this containerization platform.

Why is Docker so successful?

Docker provides an efficient workflow for moving an application from the developer’s laptop to the testing environment and then to production. You’ll learn more from a practical example of packaging an application into a Docker image.

Did you know that starting a Docker container takes less than a second?

It is incredibly fast and can run on any host with a compatible Linux kernel. (It is also compatible with Windows.)

Note : you cannot run a Windows container on a Linux host, because Windows containers require the Windows kernel. You can read more about Windows containers in Microsoft’s documentation.

Docker uses a copy-on-write union file system to store its images. Whenever changes are made to a container, only those changes are written to disk, in the container’s own thin writable layer.

With the copy-on-write model, the read-only image layers are shared across all your containers, which keeps storage usage optimized.
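You can observe the layers and the copy-on-write behavior directly. A short sketch, assuming Docker is installed and the `nginx` image is available:

```shell
# Show the read-only layers that make up the nginx image.
docker history nginx

# Start a container and modify a file inside it: only this change is
# written to the container's thin writable layer, not to the image.
docker run -d --name cow-demo nginx
docker exec cow-demo sh -c 'echo hello > /tmp/test.txt'

# `docker diff` lists exactly what changed relative to the image layers.
docker diff cow-demo

# Clean up.
docker rm -f cow-demo
```

Because the image layers stay untouched, any number of containers started from `nginx` share them on disk, and each container pays only for what it changes.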