There are two common virtualization methods. The first is full machine virtualization, think VirtualBox and VMware, which provides virtual machines: a sandboxed environment where you install an entire operating system and configure the virtual hardware for each machine. The second is containerization, using tools such as LXC and Docker. In this blog post, we're going to go through Docker and containers to make them easier to understand.
A virtual machine is a sandboxed environment that contains a full-fledged computer: virtual hardware, an operating system, a kernel, and software. Because of all that, booting one up can take a few minutes.
Containers are a lightweight alternative to full machine virtualization. They are commonly used to sandbox a single application, an approach that recently became popular with the rise of microservices. Containers use the host operating system's kernel, so no boot time is needed: within a few seconds your containerized application is up.
Providers now also focus on system containers, which offer an environment as close as possible to the one you'd get from a virtual machine, but without the overhead of running a separate kernel and simulating all the hardware.
Docker containers wrap up the software in a complete filesystem that contains everything it needs to run: code, runtime, system tools, system libraries — basically anything you can install on a server. This guarantees that it will always run the same, regardless of the environment it is running in.
One very valuable use case I applied myself was using a container as a sandbox for compiling a piece of software, without worrying about all the files that might be left behind in my system directories after compiling: once you remove the container, it's all gone. To understand how this works, we first need to understand the filesystem that Docker uses, which is UnionFS.
UnionFS allows files and directories of separate file systems, known as branches, to be transparently overlaid, forming a single coherent file system. Contents of directories which have the same path within the merged branches will be seen together in a single merged directory, within the new, virtual filesystem.
Simply put, we have multiple filesystem layers merged on top of each other to create a single new filesystem. You could think of it like Git: each layer records the files that were added, removed, or edited relative to the layer below it, and the stack of changes yields the resulting filesystem.
Images are the basis of containers. An image has no state and never changes; it, too, is a set of filesystem layers. This image is used as the base filesystem when a Docker container is run.
A container consists of an image plus a writable filesystem layer that Docker adds on top of it, to allow writing inside the container.
In my opinion, Docker Hub is what makes Docker an amazing product. It is a store-like website where you can search for and download images, and it has been widely adopted by individuals and organizations alike, so you can find official images for most operating systems and products. You can download any image and use it directly, or use it as the base image for building your own.
How do you use Docker and containers? Let us know in the comments below.