Basic Concepts

On this page, we'll get familiar with some of containerization's core concepts.


Containerization is the process of encapsulating software code along with all of its dependencies inside a single package so that it can be run consistently anywhere.


Docker is an open source containerization platform. It provides the ability to run applications in an isolated environment known as a container.

Containers are like very lightweight virtual machines that run directly on the host operating system's kernel without the need for a hypervisor. As a result, we can run many containers simultaneously.

Each container holds an application along with all of its dependencies and is isolated from the others. Developers can exchange these containers as images through a registry and can also deploy them directly on servers.

Comparing Virtual Machines and Containers

Virtual machines:

  • A virtual machine is the emulated equivalent of a physical computer system, with its own virtual CPU, memory, storage, and operating system.

  • A program known as a hypervisor creates and runs virtual machines. The physical computer running a hypervisor is called the host system, while the virtual machines are called guest systems.

  • The hypervisor treats resources — like the CPU, memory, and storage — as a pool that can be easily reallocated between the existing guest virtual machines.

There are two types of hypervisors:

Type 1 hypervisors run directly on the host's hardware (VMware vSphere, KVM, Microsoft Hyper-V).

Type 2 hypervisors run as a program on top of a host operating system (Oracle VM VirtualBox, VMware Workstation Pro/VMware Fusion).


Containers:

  • A container is an abstraction at the application layer that packages code and dependencies together.

  • Instead of virtualizing the entire physical machine, containers virtualize the host operating system only.

  • Containers sit on top of the physical machine and its operating system. Each container shares the host operating system kernel and, usually, the binaries and libraries, as well.
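The shared-kernel point is easy to verify once Docker is available. A small sketch (the container half is shown commented since it needs a running daemon; alpine is just an arbitrary small image):

```shell
# Containers share the host kernel, so `uname -r` reports the same kernel
# version inside and outside a container.
uname -r                             # kernel version on the host
# docker run --rm alpine uname -r   # prints the SAME version from a container
```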

Getting Docker set up and running

Choosing a Docker product based on your requirements

In a production environment that runs containers hosting critical applications, you would rather have your favorite admins install Docker Enterprise.

However, on your development machine or a continuous integration build machine, you can use the free Docker Engine Community or Docker Desktop, depending on your machine type.

Installing Docker

We are on Fedora 28 here, but you can choose whichever distribution you like.

OS requirements

To install Docker Engine, you need the 64-bit version of one of these Fedora versions:

  • Fedora 30

  • Fedora 31

Uninstall old versions

Older versions of Docker were called docker or docker-engine. If these are installed, uninstall them, along with associated dependencies.

$ sudo dnf remove docker \
                  docker-client \
                  docker-client-latest \
                  docker-common \
                  docker-latest \
                  docker-latest-logrotate \
                  docker-logrotate \
                  docker-selinux \
                  docker-engine-selinux \
                  docker-engine

It’s OK if dnf reports that none of these packages are installed.

Installation methods

You can install Docker Engine in different ways, depending on your needs:

  • Most users set up Docker’s repositories and install from them, for ease of installation and upgrade tasks. This is the recommended approach.

  • Some users download the RPM package and install it manually and manage upgrades completely manually. This is useful in situations such as installing Docker on air-gapped systems with no access to the internet.

  • In testing and development environments, some users choose to use automated convenience scripts to install Docker.

Install using the repository

Before you install Docker Engine for the first time on a new host machine, you need to set up the Docker repository. Afterward, you can install and update Docker from the repository.


Install the dnf-plugins-core package (which provides the commands to manage your DNF repositories) and set up the stable repository.

$ sudo dnf -y install dnf-plugins-core

$ sudo dnf config-manager \
    --add-repo \
    https://download.docker.com/linux/fedora/docker-ce.repo

Install the latest version of Docker Engine and containerd, or go to the next step to install a specific version:

$ sudo dnf install docker-ce docker-ce-cli

If prompted to accept the GPG key, verify that the fingerprint matches 060A 61C5 1B55 8A7F 742B 77AA C52F EB6B 621E 9F35, and if so, accept it.

Docker is installed but not started. The docker group is created, but no users are added to the group.
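The group claim can be checked right away. A small sketch, assuming a standard Linux group database (the fallback message is ours, not Docker's):

```shell
# getent prints the "docker" group entry if the package created it;
# on a fresh install the member list after the final colon is empty.
getent group docker || echo "docker group not found"
```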

Start Docker:

$ sudo systemctl start docker

If you would like to use Docker as a non-root user, you should now consider adding your user to the “docker” group with something like:

$ sudo usermod -aG docker your-user

Remember to log out and back in for this to take effect!
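A quick way to see whether your current session has picked up the group: `id -nG` lists the groups the running shell actually has. This is a sketch of our own, not an official Docker check:

```shell
# Membership only appears in sessions started after the usermod call,
# which is why a re-login (or `newgrp docker`) is needed.
if id -nG | tr ' ' '\n' | grep -qx docker; then
    echo "docker group: active in this session"
else
    echo "docker group: not active yet (log out and back in first)"
fi
```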

Hello World in Docker

Now that we have Docker ready to go on our machines, it's time for us to run our first container. Open up a terminal and run the following command:

docker run hello-world

If everything goes fine, you should see output like the following:

[root@earth ~]# docker run hello-world
Unable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world
0e03bdcc26d7: Pull complete 
Digest: sha256:49a1c8800c94df04e9658809b006fd8a686cab8028d33cfba2cc049724254202
Status: Downloaded newer image for hello-world:latest

Hello from Docker!
This message shows that your installation appears to be working correctly.

To generate this message, Docker took the following steps:
 1. The Docker client contacted the Docker daemon.
 2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
 3. The Docker daemon created a new container from that image which runs the
    executable that produces the output you are currently reading.
 4. The Docker daemon streamed that output to the Docker client, which sent it
    to your terminal.

To try something more ambitious, you can run an Ubuntu container with:
 $ docker run -it ubuntu bash

Share images, automate workflows, and more with a free Docker ID:
 https://hub.docker.com/

For more examples and ideas, visit:
 https://docs.docker.com/get-started/

[root@earth ~]# 

To understand what just happened, you need to get familiar with the Docker Architecture, Images and Containers, and Registries.

Docker Engine

Docker Engine is a client-server application with these major components:

  • A server which is a type of long-running program called a daemon process (the dockerd command).

  • A REST API which specifies interfaces that programs can use to talk to the daemon and instruct it what to do.

  • A command line interface (CLI) client (the docker command).
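The three components meet at the daemon's API socket. A minimal sketch, assuming the default Unix-socket setup; the live commands need a running daemon, so they are shown as comments:

```shell
# Both the CLI and a raw HTTP client talk to the same REST API that
# dockerd exposes on its default Unix socket:
#   docker version                                                    # via the CLI
#   curl --unix-socket /var/run/docker.sock http://localhost/version  # via raw REST
echo "default daemon socket: /var/run/docker.sock"
```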

Docker Architecture

Docker’s architecture is also client-server based. However, it’s a little more complicated than a virtual machine because of the features involved. It consists of four main parts:

  1. Docker Client: This is how you interact with your containers. Call it the user interface for Docker.

  2. Docker Objects: These are the main components of Docker: your containers and images. As mentioned already, containers are the placeholders for your software and can be read from and written to. Container images are read-only and are used to create new containers.

  3. Docker Daemon: A background process (dockerd) that receives requests from the client and manages your containers and other Docker objects.

  4. Docker Registry: Commonly known as Docker Hub, this is where your container images are stored and retrieved.

When you are working with Docker, you use images, containers, volumes, networks; all these are Docker objects.
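Registries identify images by a structured reference. A small sketch of our own that picks one apart with plain shell parameter expansion; the name is the hello-world image from earlier, and the parsing is simplified, assuming a fully qualified reference:

```shell
# Anatomy of a fully qualified image reference: registry/namespace/repo:tag
ref="docker.io/library/hello-world:latest"

registry=${ref%%/*}   # text before the first "/"  -> docker.io
rest=${ref#*/}        # everything after it        -> library/hello-world:latest
tag=${rest##*:}       # text after the last ":"    -> latest
repo=${rest%:*}       # text before it             -> library/hello-world

echo "registry=$registry repo=$repo tag=$tag"
```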

Don't worry if it looks confusing at the moment. Everything will become much clearer in the upcoming sub-sections.


-------- by Farhan Hasin Chowdhury

