Sathish Kumar

Docker Networking- Part 2

Updated: Jan 15, 2021






In part 1 of the Docker Networking series, I gave an overview of Docker networking. In this article, I am going to show things in action. If you are a fellow brethren in the networking field trying to make sense of all the madness around containers, you can be assured you are hearing this from one of your own kind :)


In this part of the series, I intend to talk about the following "network" requirements:

  1. Communicate with other containers on the same host - via the Docker bridge (the docker0 interface).

  2. Communicate with the outside world - leverage the host's network, either via NAT on the bridge or with Docker host networking.
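Host networking from requirement 2 can be tried directly. This is a quick sketch (not from the original transcript): with `--network host` the container shares the host's network namespace, so it has no separate eth0 of its own.

```shell
# Hypothetical example: run busybox on the host's network stack.
# "ifconfig" inside the container lists the host's own interfaces
# (docker0, the physical NIC, etc.) instead of a container eth0.
docker container run --rm --network host busybox ifconfig
```

Note there is no port mapping (-p) with host networking; the container's services bind straight to the host's ports.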

Let's spin up 2 busybox containers



root@ubuntu20-docker1:/home/sathish#  docker container run --name linux1 -it --detach  busybox   
root@ubuntu20-docker1:/home/sathish# docker container run --name linux2 -it --detach  busybox

and get a shell inside the container to find out what's going on




root@ubuntu20-docker1:/home/sathish/dockerbuilds# docker container exec -it linux1 sh
/ # ifconfig
eth0      Link encap:Ethernet  HWaddr 02:42:AC:11:00:02
          inet addr:172.17.0.2  Bcast:172.17.255.255  Mask:255.255.0.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:11 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:946 (946.0 B)  TX bytes:0 (0.0 B)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
/ # ip route
default via 172.17.0.1 dev eth0
172.17.0.0/16 dev eth0 scope link  src 172.17.0.2

172.17.0.2 is the IP address assigned to the container; it is attached to the docker0 bridge, and its default gateway is 172.17.0.1.
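On the host side, each bridged container is wired to docker0 through a veth pair. A quick way to see this wiring (a sketch; the `bridge` command comes from iproute2, and `brctl` needs the bridge-utils package installed):

```shell
# List interfaces attached to the docker0 bridge; each veth* entry is the
# host end of a container's eth0.
bridge link show

# Equivalent view, if bridge-utils is installed:
brctl show docker0
```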


Back on the host, we can see the docker0 interface uses the IP 172.17.0.1, which is the default gateway of containers attached to the bridge:



root@ubuntu20-docker1:/home/sathish# ifconfig
docker0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 172.17.0.1  netmask 255.255.0.0  broadcast 172.17.255.255
        ether 02:42:3c:2e:1f:de  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

Containers communicate with the outside world through the host's physical interface. The host performs NAT to make this happen.


root@ubuntu20-docker1:/home/sathish# iptables -t nat -L
Chain PREROUTING (policy ACCEPT)
target     prot opt source               destination
DOCKER     all  --  anywhere             anywhere             ADDRTYPE match dst-type LOCAL

Chain INPUT (policy ACCEPT)
target     prot opt source               destination

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination
DOCKER     all  --  anywhere            !localhost/8          ADDRTYPE match dst-type LOCAL

Chain POSTROUTING (policy ACCEPT)
target     prot opt source               destination
MASQUERADE  all  --  172.17.0.0/16        anywhere

Chain DOCKER (2 references)
target     prot opt source               destination
RETURN     all  --  anywhere             anywhere

Those familiar with iptables will notice the "outbound" NAT here: the MASQUERADE rule source-NATs any packet from the containers' 172.17.0.0/16 subnet as it is forwarded to outside networks.
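The same POSTROUTING rule is easier to read in rule-specification form. A hedged example (the output line shown is the typical default-bridge rule, not copied from this host):

```shell
# Print the NAT rules as iptables commands instead of the table view.
iptables -t nat -S POSTROUTING
# A typical line for the default bridge looks like:
# -A POSTROUTING -s 172.17.0.0/16 ! -o docker0 -j MASQUERADE
```

The `! -o docker0` part means traffic staying on the bridge is not NATed; only packets leaving toward other interfaces are masqueraded.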


To check out inbound NAT in action, let's spin up an HTTP server that exposes port 80.


root@ubuntu20-docker1:/home/sathish# docker run --detach --name web -p 80:80 httpd

As we can see, new DNAT and MASQUERADE rules get added to allow this:



root@ubuntu20-docker1:/home/sathish# iptables -t nat -L
Chain PREROUTING (policy ACCEPT)
target     prot opt source               destination
DOCKER     all  --  anywhere             anywhere             ADDRTYPE match dst-type LOCAL

Chain INPUT (policy ACCEPT)
target     prot opt source               destination

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination
DOCKER     all  --  anywhere            !localhost/8          ADDRTYPE match dst-type LOCAL

Chain POSTROUTING (policy ACCEPT)
target     prot opt source               destination
MASQUERADE  all  --  172.17.0.0/16        anywhere
MASQUERADE  tcp  --  172.17.0.4           172.17.0.4           tcp dpt:http

Chain DOCKER (2 references)
target     prot opt source               destination
RETURN     all  --  anywhere             anywhere
DNAT       tcp  --  anywhere             anywhere             tcp dpt:http to:172.17.0.4:80

172.17.0.4 is the eth0 IP address of the "web" container.
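We can verify the DNAT rule works by hitting port 80 on the host (a quick check; the httpd image serves Apache's default "It works!" page):

```shell
# Traffic to the host's port 80 is DNATed to 172.17.0.4:80 on the bridge,
# so curl against localhost reaches the container's Apache.
curl -s http://localhost:80
```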


Let's now look at how Docker looks at things.


root@ubuntu20-docker1:/home/sathish# docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
cd789c842a3d        bridge              bridge              local
bfaf94bd6cff        host                host                local
c26f01b036e0        none                null                local

Well, there are the bridge and host networks, and the none driver.



Note: The none driver is used when a container does not need any kind of network access. Only a loopback interface will be created inside the container.
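A quick way to see the none driver in action (a sketch, assuming the busybox image is available):

```shell
# With --network none the container gets only the loopback interface;
# ifconfig shows lo and nothing else, and any external ping fails.
docker container run --rm --network none busybox ifconfig
```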

Let's look at the bridge network with the inspect command.



root@ubuntu20-docker1:/home/sathish# docker  network  inspect  bridge
[
    {
        "Name": "bridge",
        "Id": "cd789c842a3d292ba7e6d34e0e54cbdf2c3962c6fe54d7a75dbd5e24946930ab",
        "Created": "2020-08-09T12:32:30.01803359Z",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "172.17.0.0/16",
                    "Gateway": "172.17.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {
            "02f571db76d1c6a8e288f51520af689d9b5c1e14bd4dd1e3ba89d69a6f10ae1f": {
                "Name": "web",
                "EndpointID": "118815e10ee1275f904adb946fdd69ae99a8b6ac5e40d6e581199b17455c19c6",
                "MacAddress": "02:42:ac:11:00:04",
                "IPv4Address": "172.17.0.4/16",
                "IPv6Address": ""
            },
            "d98dce33019d09f53fc5105aa4c88bd72668f7b7cd12072128e5b6b6acf927fe": {
                "Name": "linux2",
                "EndpointID": "b0300e80e95d22511a446035a1847a45b56690340f73107c3904ac0a358bef39",
                "MacAddress": "02:42:ac:11:00:03",
                "IPv4Address": "172.17.0.3/16",
                "IPv6Address": ""
            },
            "e70937b55c641cdd2e611489dce0fb72e9dcd2e7bcca564589a82dc1f25bc67e": {
                "Name": "linux1",
                "EndpointID": "ae7bc12fb0afada9bbc1ee1a25e92f6c7f32c2268640be7f0018bf389403d5c5",
                "MacAddress": "02:42:ac:11:00:02",
                "IPv4Address": "172.17.0.2/16",
                "IPv6Address": ""
            }
        },
        "Options": {
            "com.docker.network.bridge.default_bridge": "true",
            "com.docker.network.bridge.enable_icc": "true",
            "com.docker.network.bridge.enable_ip_masquerade": "true",
            "com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
            "com.docker.network.bridge.name": "docker0",
            "com.docker.network.driver.mtu": "1500"
        },
        "Labels": {}
    }
]

The inspect command shows the containers we have spun up are associated with this bridge. Let's look at a few other important settings:

  • "com.docker.network.bridge.default_bridge": "true" - means this is the default bridge.

  • "com.docker.network.bridge.enable_icc": "true" - allows communication between containers on this bridge (inter-container communication).

  • "com.docker.network.bridge.enable_ip_masquerade": "true" - tells Docker to perform NAT; this is what lets containers reach the outside world, as we'll see when we ping google.com from within a container.
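These same options can be set on a user-defined bridge. A hedged sketch (the network name `mybridge` and the subnet are made up for illustration):

```shell
# Create a custom bridge with its own subnet and ICC disabled, so containers
# on it cannot talk to each other but can still reach the outside world.
docker network create \
  --driver bridge \
  --subnet 172.30.0.0/16 \
  --opt com.docker.network.bridge.enable_icc=false \
  mybridge

# Attach a container to it:
docker container run --name linux3 --network mybridge -it --detach busybox
```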


Let's get back inside linux1 container and check things out.




root@ubuntu20-docker1:/home/sathish# docker container exec -it linux1 sh

/ # ping -c 1 google.com
PING google.com (172.217.163.142): 56 data bytes
64 bytes from 172.217.163.142: seq=0 ttl=112 time=39.624 ms

--- google.com ping statistics ---
1 packets transmitted, 1 packets received, 0% packet loss
round-trip min/avg/max = 39.624/39.624/39.624 ms
/ # ping -c 1 172.17.0.3

PING 172.17.0.3 (172.17.0.3): 56 data bytes
64 bytes from 172.17.0.3: seq=0 ttl=64 time=0.040 ms

We can ping both the outside world and other containers connected to the docker0 bridge. The ping to google.com works because of the POSTROUTING MASQUERADE rule on the host, while the docker0 bridge itself switches the ping to the linux2 container.


That's all for today, folks! In the next article, I will give an intro to Docker Swarm and then talk about multi-host container networking.







