As technology moves forward, we keep finding newer and better ways to do things. More and more of our world is being digitized and automated, and computers now sit at the center of almost every piece of technology we depend on. And here's a relatively new concept: containerization. It's not an easy topic to discuss, and oftentimes you're going to feel like a mosquito in a nudist colony: "Where do I begin?"
First off, let's talk about virtual machines, since these are the most common point of comparison for both containerization and Docker. The short definition of a virtual machine is "a software computer that runs an operating system and applications."
A virtual machine is essentially software installed on your computer that mimics physical hardware. For the end user, this means the virtual machine can act as if it were an entirely separate computer, complete with its own operating system.
Virtual machines are mainly used to get full value out of a computer's hardware, especially now that what we consider a "mid-range" machine matches or exceeds the performance of last decade's high-end computers.
The problem was that on older setups, multiple server processes could not run side by side without competing for the same resources. It was, in essence, two pit bulls fighting over a slab of meat, each tugging on one end, and that tug-of-war caused stability issues for every process involved.
Virtual machines were created to get around this by giving each process its own dedicated slice of resources, reserved solely for the workload it's meant to run. Why is this pragmatic? Say it costs $5K to buy a machine powerful enough to run a server. Because of that fear of processes going into a tug-of-war over resources, the company has to buy three more systems just to run four servers. That's an easy $20K investment.
With virtual machines, all four servers can run on a single computer, making optimal use of the available hardware.
While I highly recommend taking a Docker training course to gain hands-on, working knowledge and learn how to use it efficiently, these are the basics of what Docker is:
Like VMs, Docker creates isolated partitions, but it does so without virtualizing hardware: containers share the host's operating system kernel, so a single operating system instance is capable of running many containers side by side.
The main advantage here is speed: Docker and other container runtimes can start a container in mere milliseconds, whereas a VM normally takes minutes to boot a full guest operating system.
In essence, a container holds an application along with everything it needs to run, isolated from everything else on the machine. This lets software developers test their software without worrying about other processes interfering, which yields cleaner test results because the application always runs in a standardized environment.
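As a minimal sketch of that idea, here is what a Dockerfile for a small Python application might look like (the file names, base image, and application are made-up examples, not something from a real project). Anyone who builds this image ends up testing against the exact same Python version and dependencies, no matter what's installed on their own machine:

```dockerfile
# Start from a fixed, known base image so every build gets the same OS layer
FROM python:3.12-slim

# Copy the (hypothetical) app and its pinned dependencies into the image
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY app.py .

# The command the container runs when it starts
CMD ["python", "app.py"]
```

Everything the application sees, from the OS libraries down to the dependency versions, is frozen inside the image, which is what makes the test environment standardized.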
Not only that, but containers let developers package their applications into lightweight images and share them with everyone involved in the project. That may not sound like much, but from a business standpoint, containerization boosts not only hardware efficiency but software developer efficiency as well.
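That build-and-share workflow looks roughly like this (the image and registry names here are invented for illustration):

```shell
# Build an image from the Dockerfile in the current directory
docker build -t myteam/myapp:1.0 .

# Run it locally; --memory and --cpus cap the resources the container may use
docker run --memory=512m --cpus=1 myteam/myapp:1.0

# Push the image to a registry so teammates can grab it
docker push myteam/myapp:1.0

# What a collaborator runs to get the exact same environment
docker pull myteam/myapp:1.0
```

The image a teammate pulls is byte-for-byte the one that was pushed, which is why "it works on my machine" problems largely disappear.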