Docker Networking
Individual containers need to communicate with each other over a network to perform the required actions, and this is what Docker Networking provides.
You can define Docker Networking as the communication passage through which otherwise isolated containers talk to each other in various situations to perform the required actions.
Docker has developed a new way of delivering applications, and with that, containers have also changed some aspects of how networking is approached.
Here are the challenges of networking containers:
Portability
How do I guarantee maximum portability across different network environments while taking advantage of unique network characteristics?
Security
How do I segment to prevent the wrong containers from accessing each other?
How do I guarantee that a container with application and cluster control traffic is secure?
Performance
How do I provide advanced network services while minimizing latency and maximizing bandwidth?
Scalability
How do I ensure that none of these characteristics are sacrificed when scaling applications across many hosts?
Container Network Model (CNM)
The Docker networking architecture is built on a set of interfaces called the Container Networking Model (CNM). The philosophy of CNM is to provide application portability across different infrastructures. This model strikes a balance to achieve application portability and also takes advantage of special features and capabilities of the infrastructure.
CNM Constructs
Endpoint: Provides connectivity for services exposed by a container to other services provided by other containers in the network. An endpoint represents a service and not necessarily a particular container; an endpoint also has a global scope within a cluster.
Sandbox: Created when users request to create an endpoint on a network. A sandbox can have multiple endpoints attached to different networks, and it represents a container’s network configuration such as IP address, MAC address, routes, and DNS.
Network: Provides connectivity between a group of endpoints that belong to the same network and isolates them from the rest. Whenever a network is created or updated, the corresponding driver is notified of the event.
CNM Driver Interfaces
The Container Networking Model provides two pluggable and open interfaces that can be used by users, the community, and vendors to add additional functionality, visibility, or control in the network.
The network plugin APIs are used to create/delete networks and add/remove containers from networks.
Native Network Drivers — Native Network Drivers are a native part of the Docker Engine and are provided by Docker. There are multiple drivers to choose from that support different capabilities like overlay networks or local bridges.
Remote Network Drivers — Remote Network Drivers are network drivers created by the community and other vendors. These drivers can be used to provide integration with specific software and hardware. Users can also create their own drivers in cases where they desire specific functionality that is not supported by an existing network driver.
The IPAM plugin APIs are used to create/delete address pools and allocate/deallocate container IP addresses.
Libnetwork is an open source Docker library which implements all of the key concepts that make up the CNM.
Docker Native Network Drivers
There are mainly 5 network drivers: Bridge, Host, None, Overlay, Macvlan:
Bridge: The bridge network is the private default internal network created by Docker on the host. All containers get an internal IP address and can access each other using this internal IP. Bridge networks are usually used when your applications run in standalone containers that need to communicate.
The Docker server creates and configures the host system’s docker0 interface as an Ethernet bridge inside the Linux kernel, which Docker containers use to communicate with each other and with the outside world. The default configuration of docker0 works for most scenarios, but you can customize the docker0 bridge based on your specific requirements.
The docker0 bridge is a virtual interface created by Docker. Docker randomly chooses an address and subnet from the private ranges defined by RFC 1918 that are not in use on the host machine and assigns it to docker0. All Docker containers are connected to the docker0 bridge by default, and containers connected to docker0 can use the iptables NAT rules created by Docker to communicate with the outside world.
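For instance, on a typical Linux Docker host you can see the docker0 bridge and the NAT rule Docker creates (the interface name and the chosen address range may differ on your machine):

```bash
# Show the docker0 bridge interface and the private subnet Docker picked for it
ip addr show docker0

# Show the NAT (MASQUERADE) rules that let containers on docker0 reach the outside world
sudo iptables -t nat -L POSTROUTING -n
```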
Host: This driver removes the network isolation between the Docker host and the Docker containers and uses the host’s networking directly. With this driver you cannot run multiple web containers on the same host on the same port, as the port is now common to all containers in the host network.
Overlay: Creates an internal private network that spans all the nodes participating in the swarm cluster. Overlay networks facilitate communication between a swarm service and a standalone container, or between two standalone containers on different Docker daemons.
Macvlan: Allows you to assign a MAC address to a container, making it appear as a physical device on your network. The Docker daemon then routes traffic to containers by their MAC addresses. The macvlan driver is the best choice when containers are expected to be directly connected to the physical network rather than routed through the Docker host’s network stack.
None: In this kind of network, containers are not attached to any network and have no access to the external network or to other containers. This network is used when you want to completely disable the networking stack on a container and only create a loopback device.
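As a quick sketch (busybox is only an example image), you can verify that a container on the none network gets nothing but a loopback interface:

```bash
# Start a throwaway container attached to the "none" network and list its interfaces
docker run --rm --network none busybox ip addr
# Only the "lo" loopback device appears: no eth0 and no external connectivity
```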
Let’s wrap up the default bridge driver:
Connects the container to the LAN and to other containers
The default network type
Great for most use cases
Listing Docker Networks
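You can list them with docker network ls (network IDs will differ on your machine); on a fresh standalone host it typically shows the default bridge, host, and none networks:

```bash
# List all networks known to this Docker host
docker network ls
```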
As we are using Docker CE and have not initialized a swarm, there is no overlay network in the list. We can get detailed information about any of these networks using the docker network inspect command.
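For example, inspecting the default bridge network:

```bash
# Show the driver, subnet, gateway and connected containers of the default bridge network
docker network inspect bridge
```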
Creating a bridge network
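A minimal sketch of creating the user-defined bridge network used in the rest of this section:

```bash
# Create a user-defined bridge network named myapp-net
docker network create --driver bridge myapp-net
```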
So again we can use docker network inspect myapp-net to see details such as the network subnet.
Running container(s) on the bridge network
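As a sketch (busybox is just a small example image; any image with a shell would do), start a container named app1 attached to myapp-net:

```bash
# Run app1 in the background with an interactive shell, attached to myapp-net
docker run -itd --name app1 --network myapp-net busybox
```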
Use docker network inspect myapp-net to check the container(s) that are running on the myapp-net network and to see their IP addresses.
Let’s bring up another container, app2, on myapp-net and check the communication between the two containers:
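For example, still assuming the busybox image:

```bash
# Start a second container, app2, on the same user-defined bridge network
docker run -itd --name app2 --network myapp-net busybox
```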
Attach to app1 and ping app2:
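For instance:

```bash
# Attach to app1's shell...
docker attach app1

# ...and, from inside app1, ping app2 by name
ping -c 3 app2
```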
How do you get out without stopping the container? Press ctrl+p and then ctrl+q.
Docker Embedded DNS
Containers can reach each other using their names
All containers on a Docker host can resolve each other by container name: Docker has a built-in DNS server that helps containers resolve each other using the container name. The built-in DNS server always runs at the address 127.0.0.11.
As there is no guarantee that containers get the same IP address when the system reboots, using the container’s name is the right way of calling apps running in other containers.
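You can see the embedded DNS server in a container’s resolver configuration; a quick check, assuming the app1 container from the earlier examples is still running on the user-defined network:

```bash
# Containers on a user-defined network point at Docker's embedded DNS server
docker exec app1 cat /etc/resolv.conf
# Expect a line like: nameserver 127.0.0.11
```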
Removing a docker network
Before removing a network we have to make sure there are no running containers on that network, so first stop and remove those containers, then remove the network with docker network rm. Alternatively, docker network prune will remove all unused networks.
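A sketch of the cleanup, using the containers and network from the earlier examples:

```bash
# Stop and remove the containers attached to myapp-net
docker rm -f app1 app2

# Remove the now-unused network explicitly...
docker network rm myapp-net

# ...or remove every network that has no containers attached
docker network prune
```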
Good to know
By default, containers have outbound network access but no inbound network access.
Ports must be published to allow inbound network access.
Publishing ports
By default, when you create a container, it does not publish any of its ports to the outside world. To make a port available to services outside of Docker, or to Docker containers which are not connected to the container’s network, use the --publish or -p flag. This creates a firewall rule which maps a container port to a port on the Docker host. Here are some examples:
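For instance (nginx is only used here as an example image that listens on port 80):

```bash
# Map TCP port 8080 on the Docker host to port 80 inside the container
docker run -d --name web1 -p 8080:80 nginx

# Bind the published port only on a specific host interface (here the loopback address)
docker run -d --name web2 -p 127.0.0.1:8081:80 nginx
```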
If you provide the -P option (note: the letter is upper-case) when running your container, it will bind each exposed port to a random port on the host.
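For example, again assuming the nginx image (which exposes port 80 in its Dockerfile):

```bash
# Publish all exposed ports to random high ports on the host
docker run -d --name web3 -P nginx

# Show which host port was assigned to the container's port 80
docker port web3
```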
Configuring DNS
By default, a container inherits the DNS settings of the host, as defined in the /etc/resolv.conf configuration file. Containers that use the default bridge network get a copy of this file, whereas containers that use a custom network use Docker’s embedded DNS server, which forwards external DNS lookups to the DNS servers configured on the host.
If we want a container to use a specific DNS server, we have a couple of different ways to go about this:
1. Using the --dns flag when running a container:
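A sketch, using 8.8.8.8 purely as an example DNS server and busybox as an example image:

```bash
# Start an interactive container that uses a specific DNS server
docker run -it --rm --dns 8.8.8.8 busybox sh

# Inside the container, check the resolver configuration
cat /etc/resolv.conf   # expect: nameserver 8.8.8.8
```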
Press ctrl+d to exit, and let’s check it again without specifying a DNS server:
And it has taken my Docker host’s DNS server settings. Please note that custom hosts defined in /etc/hosts are not inherited. To pass additional hosts into your container, refer to “add entries to container hosts file” in the docker run reference.
2. Creating an /etc/docker/daemon.json file:
This would actually affect the entire Docker host:
Put the desired DNS server(s) in there:
Restart the Docker service and check the results:
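A sketch of those steps, assuming a systemd-based Linux host and using Google’s public DNS servers purely as examples:

```bash
# Put the desired DNS server(s) into the daemon configuration
sudo tee /etc/docker/daemon.json > /dev/null <<'EOF'
{
  "dns": ["8.8.8.8", "8.8.4.4"]
}
EOF

# Restart the Docker service so the change takes effect
sudo systemctl restart docker

# Verify: new containers should now use these servers
docker run --rm busybox cat /etc/resolv.conf
```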
Creating a host network
The concept of the host network is very simple: instead of a container running behind some sort of network address translation that you may or may not configure between the host and the container(s), with the host network the container runs directly on the physical interface of the host. There is no NAT and no ports to configure; the container is directly on the host’s physical network.
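A minimal sketch, assuming a Linux Docker host and the nginx image as the workload:

```bash
# Run a web server directly on the host's network stack: no -p flag is needed
docker run -d --name web-host --network host nginx
```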
And indeed there is no private-to-public port mapping; the container is directly on the host network:
Open a web browser and go to your Docker host’s IP address:
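Or check it from a shell on the Docker host (port 80 assumes the nginx example above):

```bash
# The container listens directly on the host's port 80
curl http://localhost:80
```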
--------
With special thanks to David Davis.
https://success.docker.com/article/networking
https://www.edureka.co/blog/docker-networking/
https://www.slideshare.net/Docker/docker-networking-0-to-60mph-slides
https://vsupalov.com/docker-expose-ports/
https://docs.docker.com/config/containers/container-networking/