One of the reasons Docker is such a powerful tool is its versatility and the ability to connect Docker containers either to each other or to other third-party workloads. Docker has a powerful network engine that allows you to manage subnets within the Docker ecosystem. You can create workloads that can seamlessly integrate with other applications and still give you the flexibility to be creative and efficient. Having an in-depth understanding of Docker’s networking capabilities will allow you to design and deploy applications, as well as manage them in a platform-agnostic manner. You can also configure networks to create completely isolated containers that you can use to build web applications that work together securely and efficiently.
This is exactly why Docker is popular among developers around the world. Building, managing and running containers is simple and fast. Docker offers some amazing benefits such as open source technology, a small footprint with low RAM usage, easier deployment, worldwide adoption and much more. With Docker, developers can stop worrying about infrastructure and instead focus on making the best app possible.
TetraNoodle is planning a kick-ass course on Docker and Docker Swarm, for which it is raising funds on Kickstarter. The upcoming course will help budding and experienced developers master building, managing and deploying containers using the simplest code possible.
When you install Docker, it automatically creates three default networks. You can also choose to write your own network driver plugin to create custom drivers, but this is a complex task.
To view the default networks, just enter the command “docker network ls”. As you can see here, the three default Docker networks are bridge, host and none (the none network uses the null driver).
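On a fresh install, the listing looks something like this (the network IDs shown here are placeholders and will differ on your machine):

```shell
# List all networks known to this Docker engine.
docker network ls
# Typical output on a fresh install:
# NETWORK ID     NAME      DRIVER    SCOPE
# 9f0bcc304b6a   bridge    bridge    local
# 0bb1f9d4a8b7   host      host      local
# 7c1a8d26b3f2   none      null      local
```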
- The bridge network is the default network driver. Unless you tell it otherwise, Docker will always launch any new containers in this network. You would typically use bridge networks when your applications run in standalone containers that need to communicate.
- The none network (null driver) adds a container to a container-specific network stack. A container on this network has no external network interface, only a loopback interface, and it is usually used in conjunction with a custom network driver.
- The host network adds a container to the host’s network stack. It is used for standalone containers where you want to remove network isolation between the container and the Docker host.
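To see the host and none networks in action, you can launch throwaway containers directly on them (the alpine image here is just a convenient choice):

```shell
# On the host network, the container shares the host's interfaces:
docker run --rm --network host alpine ip addr

# On the none network (null driver), only a loopback interface exists:
docker run --rm --network none alpine ip addr
```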
Let’s say you have a collection of containers that need to communicate with each other on a regular basis. For example, Plex to Ombi, or Grafana to InfluxDB. Containers on the same network can communicate with each other using their IP addresses, but getting the IP address of each and every container can be difficult and time-consuming. It would be like trying to memorize every phone number in your address book! Also, IP addresses can change, and they only live as long as the container itself, so communication using them may not always succeed. Wouldn’t it just be easier to refer to containers by their names? Of course! But Docker doesn’t support automatic service discovery on the default bridge network. And the none and host networks cannot be configured at all.
This is where user-defined networks come in. You can configure the default bridge network, as well as your own user-defined bridge networks. You can use the default Docker network drivers as a base to create your own custom user-defined network. You can use these to control which containers can communicate with each other, and also enable automatic DNS resolution of container names to IP addresses.
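A minimal sketch of that automatic DNS resolution, using two throwaway alpine containers on a user-defined bridge (all of the names here are illustrative):

```shell
# Create a user-defined bridge network.
docker network create demo_net

# Attach two containers to it.
docker run -itd --name app1 --network demo_net alpine
docker run -itd --name app2 --network demo_net alpine

# Docker's embedded DNS resolves container names on this network.
docker exec app1 ping -c 2 app2

# Clean up.
docker rm -f app1 app2
docker network rm demo_net
```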
When you create your own Docker network, you are essentially telling the Docker engine to create a new bridged subnet. Any containers that you then attach to this network will belong to the new bridged subnet. This means more efficient routing of messages to various network hosts, as well as logical grouping of hosts based on location or purpose.
User-defined networks are completely isolated from each other, which means that a container attached to one network cannot communicate with a container attached to another network. But you can create as many networks as you need, and you can connect a container to multiple networks at a time. In this way, you can create a pseudo-DMZ network and link it to a private network. This is very useful, for example, if you need to keep databases separate from front-end web containers and apply specific iptables rules to each network.
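Sketching that DMZ idea with placeholder names (a real stack would run an actual database image in place of the alpine backend):

```shell
# A "DMZ" network for web traffic and a private network for data.
docker network create dmz
docker network create private

# The web container starts on the DMZ network only.
docker run -itd --name web --network dmz nginx

# The backend lives only on the private network.
docker run -itd --name backend --network private alpine

# Attach the web container to the private network as well; it can now
# reach the backend, while other containers on dmz still cannot.
docker network connect private web
```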
If you want complete control and customization, you also have the option to create a network plugin or a remote network. As you can see, user-defined networks have endless applications and are very versatile! Let’s see how you can easily create one.
CREATING A USER-DEFINED NETWORK
First, let’s see what happens if you try to make your containers communicate without a user-defined network. There are many instances when two containers need to communicate. Let’s take a look at a scenario and see whether you can access one container from the other.
To do this, let’s launch a container. Enter the command docker run -itd --name nginx nginx, and then enter docker images and docker ps to view the list of images and containers. We’ll also use a second container named demo (any Debian-based container will do). Let’s pull up its IP: enter the command docker inspect demo | grep IP.
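The walkthrough never shows how the demo container was created; assuming the ubuntu image (any Debian-based image works, since we’ll use apt-get inside it later), the full setup would be:

```shell
# The second, Debian-based container used throughout this walkthrough.
docker run -itd --name demo ubuntu

# The nginx container we want to reach.
docker run -itd --name nginx nginx

# Confirm both are running, then look up demo's IP address.
docker images
docker ps
docker inspect demo | grep IP
```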
As you can see here, the IP address is 172.17.0.2. Let’s copy this, and then find the IP of the nginx container as well. Enter the command docker inspect nginx | grep IP. The IP address of the nginx container is 172.17.0.3. Make a note of this IP as well; you’ll use it soon.
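Piping to grep IP matches several unrelated fields (IPPrefixLen, IPv6 settings and so on); a Go template gives you just the address:

```shell
# Print only the container's IP address on its attached network(s).
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' nginx
```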
Now let’s go into the demo container. Enter the command docker exec -it demo bash. You are now inside the demo container. Next, enter the command apt-get update && apt-get install telnet to refresh the package lists and install the telnet client. You are installing telnet inside this container so you can test whether the nginx container’s port can be reached from here. You’ll see the installation begin.
Once telnet is installed, let’s check the connection to the other container. If you remember, the IP of the nginx container was 172.17.0.3, and nginx listens on port 80 by default. So, enter the command telnet 172.17.0.3 80. You will see a message that you are connected to the nginx container.
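A successful session looks roughly like this (the address will match your own nginx container):

```shell
telnet 172.17.0.3 80
# Trying 172.17.0.3...
# Connected to 172.17.0.3.
# Escape character is '^]'.
```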
Press Ctrl+] (telnet’s escape character) and then type quit to close the connection.
So, you know that one container can connect to another using its IP address. But as we noted earlier, it is tedious to track so many IP addresses, and they can change as well. So, what happens if we try to connect to the container using its name? Let’s try it. Enter the command telnet nginx 80. You will see an error saying the connection failed because the name nginx cannot be resolved.
Let’s try to ping the nginx container. Enter the command ping nginx. You’ll see that it is unable to find the host nginx.
Maybe the nginx container is down? Let’s test this by pinging it using its IP address. Enter ping 172.17.0.3. This works, so the container is up and reachable by IP; it’s name resolution that fails on the default bridge network.
This is where networks come in. Launching the containers in the same network will allow them to communicate directly using the container name without having to use their IP address each time.
So, let’s create a network. It’s really simple; just enter docker network create infra_tetra, where infra_tetra is the name of the new network. Now, to see all the networks on the server, enter docker network ls, and you will see the three default networks plus the new network you just created. Since you didn’t specify a driver, Docker uses the default bridge driver.
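To finish the exercise, you can recreate both containers on the new network and repeat the name-based test; on a user-defined bridge it now succeeds. (The ubuntu image is assumed for demo, and getent stands in for ping, which the stock ubuntu image doesn’t ship.)

```shell
# Remove the old containers and relaunch them on infra_tetra.
docker rm -f demo nginx
docker run -itd --name demo --network infra_tetra ubuntu
docker run -itd --name nginx --network infra_tetra nginx

# Docker's embedded DNS now resolves the name nginx from inside demo.
docker exec demo getent hosts nginx
```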
Now that you’ve learned a little bit about how networks work in Docker, you can start building your own networks to connect your containers seamlessly. If you want to learn more about Docker and Docker Swarm, TetraNoodle is raising funds for its very own upcoming comprehensive course on Docker and Docker Swarm. The course will cover Docker from the very start, so if you are a newbie just trying to learn the ropes, then this would be the perfect course for you! So, please help us bring this course to life. You can show your support by selecting one of the many pledge tiers on the campaign page.