Docker is an increasingly popular containerisation option for WordPress. In our previous blog post, we introduced you to containers. Now that you know what they are, we’re digging deeper into how Docker works and why it’s proving to be so popular.
What is Docker?
Docker is a platform for building, shipping, running and managing applications. It aims to remove the need for developers to care about infrastructure: as long as a host can run a Docker container, you can deploy your application to it, regardless of where it’s running.
Docker works by using “containers”. A container is an isolated piece of your application, or of another piece of software your application depends on. Containers run independently of each other, so they cannot interfere with one another except in ways your application specifies. For example, a web server cannot grow to take up all the available memory, leaving none for your database.
The Docker daemon manages the lifecycle of your containers, what communication is allowed between them, and what communication is allowed between containers and the host server.
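In practice, that lifecycle is driven through the docker CLI. A minimal sketch of the idea (the container and image names here are illustrative assumptions, not from this post):

```shell
# Create an isolated network that containers can share
docker network create app-net

# Start a database and a web server on that network;
# only the web server's port is published to the host
docker run -d --name db --network app-net mysql:8
docker run -d --name web --network app-net -p 80:80 my-app

# Stop and remove the containers when finished
docker stop web db && docker rm web db
```

Here the containers can reach each other over the shared network, but only the web server is reachable from outside — an example of communication being allowed only where you specify it.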
Docker containers are running instances of an image, which is effectively a snapshot of a container at a specific point in time.
Images are built from instructions in a Dockerfile. This allows Docker-based deployments of your application to be completely code-driven: the file that creates the image used in your container can be included in your application’s source code.
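As an illustrative sketch (the base image and paths here are assumptions for a simple PHP site, not taken from this post), a Dockerfile can be as short as this:

```dockerfile
# Start from an official base image
FROM php:8.2-apache

# Copy the application source into the web root
COPY . /var/www/html/

# Document the port the web server listens on
EXPOSE 80
```

Running `docker build -t my-app .` alongside this file produces an image, and `docker run -p 80:80 my-app` starts a container from it.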
What does this mean?
Well, this has interesting implications. In theory, a Dockerfile means that every environment set up to work on a project is exactly the same. Dev, Test, Staging and Production all share the same Dockerfiles, so they should all match exactly.
Docker can easily run multiple containers at once, so the effort needed to match production architecture on development machines is slashed.
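Docker Compose is the usual way to declare several containers at once. A sketch of a docker-compose.yml for a WordPress-style stack (the service names, images and placeholder credential are illustrative assumptions):

```yaml
version: "3.8"
services:
  web:
    image: wordpress:latest        # illustrative image choice
    ports:
      - "8080:80"                  # host port 8080 -> container port 80
    depends_on:
      - db
  db:
    image: mysql:8
    environment:
      MYSQL_ROOT_PASSWORD: example # placeholder credential only
```

Running `docker compose up` starts both containers on a shared network, so one file can stand up the same architecture on a laptop as in production.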
This is in contrast to running multiple VMs on one machine, each of which has to be managed independently.
With Docker, you are no longer looking for hosting and server space that matches what you need or provides specific functionality; you can deploy your containers to any provider that supports Docker.
This means you are no longer locked into a single cloud vendor and can (relatively) easily move between them, or use multiple providers for extra redundancy.
Because containers and components are separated, vulnerabilities or changes in one are less likely to impact other parts of the stack. Continuous Integration and Deployment are also simplified, as you are shipping the entire application, not just the codebase.
Specialised versions of your Dockerfile(s) can be used to build the application, removing any crossover between the dependencies needed to build the application and those needed to run it.
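The common way to achieve this is a multi-stage build: one stage carries the build toolchain, and the final image contains only what is needed at runtime. A sketch, with illustrative image and path choices:

```dockerfile
# Build stage: includes the build tooling (Node here, purely as an example)
FROM node:20 AS build
WORKDIR /src
COPY . .
RUN npm install && npm run build

# Runtime stage: only the built assets and a web server,
# none of the build dependencies
FROM nginx:alpine
COPY --from=build /src/dist /usr/share/nginx/html
```

Only the final stage becomes the shipped image, so build-time dependencies never reach production.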
Application version history is also improved because deployments are immutable. However, changes must be made in the image, not just in the running container, or they won’t stick around.
Docker’s Containers vs. Virtualisation
Containers are an important evolution of virtualisation. Virtualisation can do most of what containers can do, but requires more effort, making it the less favourable option for enterprise websites.
Unlike virtualisation, Docker can easily run and connect multiple containers; a VM-based setup requires separate tools to do the same. Docker also builds images directly from a Dockerfile, whereas provisioning VMs in this way needs yet another tool.
Each VM must include a whole operating system instance; luckily, this isn’t the case with Docker, whose containers share the host’s kernel.
One of the most valuable assets of Docker is its portability.
You can copy a single text file to create a complete copy of the container, which makes storing and recreating environments effortless.
Docker container registries make it easy to share containers (or base images), much as GitHub makes it easy to share open source projects.
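Sharing through a registry is a tag-and-push workflow. A command sketch (the registry address, namespace and image name below are placeholders, not real endpoints):

```shell
# Tag a local image with the registry and namespace it should live under
docker tag my-app registry.example.com/my-team/my-app:1.0

# Push it so others can use the same image
docker push registry.example.com/my-team/my-app:1.0

# Anyone with access can now pull and run an identical copy
docker pull registry.example.com/my-team/my-app:1.0
```

Because the pulled image is byte-for-byte the one that was pushed, every consumer runs exactly the same snapshot.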