Virtual Machines have been a tried-and-tested industry standard in software development and IT for a long time. Allowing multiple OSs to run on a single piece of hardware at the same time was a game-changer for scalability and the design of software systems.
With Virtual Machines, businesses could cut back on upfront hardware investment, and cloud providers (such as AWS) could help web applications handle millions of requests with ease.
But there is a new kid on the block. With its launch in 2013, Docker popularized container technology and was soon hailed as the ‘next generation’ of infrastructure for scaling software products.
Docker and Kubernetes are the main players in the container ecosystem: Docker as a container runtime, and Kubernetes as an orchestrator for managing containers at scale. Together, they allow developers to run multiple isolated environments on the same underlying OS. In essence, what Virtual Machines did to physical hardware, containers did to the OS.
However, even though the container boom is here to stay, will it completely replace Virtual Machines? Or is there a better chance for it to be a complementary technology instead?
In this article, we are going to explore this divide and have a look at:
- The features that make containers next-gen;
- The two reasons VMs are here to stay;
- The way forward for the world of infrastructure.
What Makes Containers a Next-Gen Technology?
Container technology was designed as a lightweight, resource-efficient alternative to running full virtual machines. Because containers share the host OS kernel rather than booting a guest OS of their own, containerized applications start faster and require less memory and hardware, all while providing near-native performance.
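To make the lightweight packaging concrete, here is a minimal sketch of containerizing a small app with Docker. (This assumes Docker is installed; the base image, file names, and tag are illustrative, not from the article.)

```shell
# A minimal Dockerfile (illustrative): only the app and its dependencies
# are packaged; there is no guest OS to install or boot.
cat > Dockerfile <<'EOF'
FROM python:3.12-slim
WORKDIR /app
COPY app.py .
CMD ["python", "app.py"]
EOF

# Build the image and start a container from it. Startup takes seconds,
# because the container reuses the host kernel instead of booting an OS.
docker build -t demo-app .
docker run --rm demo-app
```

Compare this with a VM, which would need a full guest OS image plus boot time before the same app could run.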
At the same time, container images package only the application and its dependencies rather than an entire guest OS, which makes workloads easier to transfer between hosts. This is part of the reason for the rise in popularity of microservices, which have become quite relevant in the field of scaling web applications.
And while the technology itself has some clear strengths, an honourable mention must also go to the business perspective. Companies looking to build scaling software products can invest less in hardware when developing their infrastructure, making containers the more frugal option.
At this point, it may seem obvious that containers have the high ground. But VMs still benefit from two aspects that may be enough to keep them in the race.
In behavioural economics, the sunk cost fallacy describes an individual who feels compelled to carry on with an investment simply because they have already put money into it. In the case of our topic, though, this is less of a fallacy and more common sense.
Although containers are faster and more resource-efficient for web scalability, many companies have already built their infrastructure on VMs. Making the switch would mean not only writing off part of the hardware investment but also migrating multiple applications to the new, container-based infrastructure.
This, of course, means more work for the developers and more time billed for the company.
The truth of the matter is that a huge share of the Internet still runs on VM infrastructure. If even one of the larger server networks were infected or compromised, the results across the Internet would be catastrophic.
This is where the hidden ace of VMs comes into play. Full isolation is what makes them slower than containers, but it is also what makes them more resilient to attack. Each VM runs its own OS on top of a hypervisor, so compromising a network of VMs means breaking that isolation across a large portion of the physical hardware.
And when looking at the security of some, say, AWS locations, the prospects for that happening are, well, not great.
Containers, on the other hand, use process-level isolation and share the host OS kernel, making them potentially less secure in the event of an attack: a single kernel exploit can expose every container on the host. Of course, this is not to say that all container-based infrastructure is just a sand castle waiting for the tide. But the companies making up the backbone of the Internet itself might have some second thoughts about switching in the foreseeable future.
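You can see the shared-kernel model for yourself with a quick check (assuming a Linux host with Docker installed; the `alpine` image is an arbitrary choice):

```shell
# Print the kernel version on the host...
uname -r

# ...and inside a container. The two match, because a container is an
# isolated process on the host kernel (via namespaces and cgroups),
# not a separate operating system. This is why a kernel exploit is a
# bigger risk for containers than for fully isolated VMs.
docker run --rm alpine uname -r
```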
So, if things stand as such, the question of where containers would fit in today’s world of software applications and scalability remains. We have an idea of how things may look.
Containers vs VMs: Who Wins?
While both containers and VMs have proven their reliability in software development, we think that a joint way forward is the most likely scenario for this divide.
New companies looking to establish a scaling application infrastructure will probably opt for containers due to the technology behind them. Established companies, on the other hand, might stick to the traditional VM setup.
But perhaps the most important point in this discussion is compatibility. There is no reason why the guest OSs running on VMs could not also host container applications and improve scalability many times over.
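In practice, this hybrid is already common: a cloud VM boots a full guest OS, and a container runtime installed on that guest then hosts the application containers. A rough sketch (assuming an Ubuntu-based VM with Docker available from its package repositories; the `demo-app` image is hypothetical):

```shell
# On a freshly provisioned VM, install a container runtime
# on the guest OS...
sudo apt-get update && sudo apt-get install -y docker.io

# ...and run containers on top of it. The VM provides hardware-level
# isolation between tenants, while containers provide fast, lightweight
# scaling within the VM.
sudo docker run -d --restart=always -p 80:8000 demo-app
```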
This option makes use of both the current physical infrastructure and the developments brought by container technology, and we think it is what is most likely to happen in the upcoming years.
Have a different opinion? Let us know what you think!
About abac software
Did you like our take on containers vs Virtual Machines? We’re glad!
abac software is a startup from Cluj-Napoca, Romania, that aims to help enterprises and other startups with designing the software products of tomorrow. Our team loves to keep up with the latest news in software and web development.
So, if you liked our take, subscribe to our newsletter and keep an eye on your inbox for more stories like this!