Ghost_Rider
Mission Specialist

Do containers kill VMs?

I have read a lot of topics on this subject, but most of them are very diplomatic. Just curious: if containers are stable and mature in all respects, do we really need traditional VMs? Why not just spin up containers on bare metal and reduce the overhead?

 

8 Replies
beelandc
Flight Engineer

There are definitely situations where it makes sense to forgo VMs altogether and just deploy containers / container-based infrastructure directly onto bare metal machines.

However, I also think there is still a place for VM virtualization in most IT environments. They provide IT departments with flexibility at a level of abstraction below containers. Furthermore, I would argue that not all applications are good candidates for containerization. I would expect some workloads (particularly legacy applications that are in some sort of maintenance mode) to remain outside of containers.

I just don't see the two technologies as mutually exclusive. You see the same arguments coming up now with FaaS vs Containers. There are certainly situations where one makes sense over the other, but I expect both to co-exist for the foreseeable future.

shauny
Mission Specialist

I think traditional VMs will always be around, but there will be a general shift towards container-oriented operating systems (think Atomic or CoreOS). 

They're definitely stable and mature, but their use case doesn't suit being an environment for sysadmins or developers to log into and work in day-to-day. For example, I have it set up so I can scale my Kubernetes environment up and down at the click of a button - it creates a new VM in vCenter, deploys an ISO, then joins it to the cluster. But I can't see things like my jump box ever working as a container.

It's a similar thing with virtual appliances: something like a virtual firewall or syslog server, I feel, works best when it's separate from the environment it's supporting. Should the K8s cluster die or have a serious issue, it takes down not just the workloads, but your VPN with it.

All of the above can be resolved one way or another (for example, I usually put an additional SSL VPN somewhere just in case my usual IPsec tunnel ever goes down), but I don't feel the rewards outweigh the risks quite yet.

To answer your specific question of whether we will ever just spin them up on bare metal: no, I don't think we ever will. Containers are just that, containers. They're abstractions on top of an existing system (cgroups and namespaces and other fancy-pants magic stuff). If we created a system where they just run on bare metal, we'd have just created another OS. That's why I focused on container-specific OSes earlier on - that and cloud-init are going to be where the real focus is: a minimal traditional OS with a container focus.
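
To make that concrete, here's a minimal sketch (my own illustration, nothing official) of what a container boils down to on Linux: a process started in its own namespaces. It's roughly the first thing a runtime does before layering images, cgroup limits, and networking on top. It only builds on Linux and needs root (or user namespaces) to run.

    package main

    import (
        "os"
        "os/exec"
        "syscall"
    )

    // Launch a shell in new UTS, PID, and mount namespaces - the same
    // kernel primitives container runtimes build on. Inside, the shell
    // sees itself as PID 1 and can change its hostname without touching
    // the host's.
    func main() {
        cmd := exec.Command("/bin/sh")
        cmd.Stdin, cmd.Stdout, cmd.Stderr = os.Stdin, os.Stdout, os.Stderr
        cmd.SysProcAttr = &syscall.SysProcAttr{
            Cloneflags: syscall.CLONE_NEWUTS | syscall.CLONE_NEWPID | syscall.CLONE_NEWNS,
        }
        if err := cmd.Run(); err != nil {
            panic(err)
        }
    }

The point being: take away the host kernel and there's nothing left to "contain" against, which is why even "bare metal containers" still means a minimal OS underneath.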

Just my rambling thoughts. What do you think?

  • 4,208 Views

Then you have the resources issue: containers use the resources of the host server, whereas each VM has its own resources.
  • 4,198 Views

VMs can provide high availability from the hardware perspective; containers can provide HA from the application perspective.

 

So VMs and containers can support each other.

We Learn From Failure, Not Success
kubeadm
Flight Engineer

They won't - just like television didn't kill the radio, etc.

As mentioned by others, VMs provide better isolation and are more suitable for certain workloads.

cheers,
Adrian

  • 4,146 Views

I'll go out on a limb here and say containers absolutely DO NOT "kill virtual machines", because right now one of the best ways to work with containers is on top of a VM running the base OS - whether that's something really tiny, or Atomic, or your plain-jane RHEL 7.x release. In my business, VMs are our go-to for almost everything. We've got lots of bare metal, but those machines usually run high-demand, high-bandwidth, high-performance, mission-critical applications. For "softer" loads - development, testing, and compute tasks - we'll choose a VM every time.

Bishop
Cadet

Let me take that one step further.

Look around and you'll find that, due to resource contention, the recommended method of deploying a containerized app is one container per VM - that's one Docker image running on a tiny (Photon or Alpine) VM. Do that 30 times for 30 images. That's pretty much how I saw VMware managing their container plans, and putting the resource limits on the VM gets us agile little containers with any bloating contained.

But if we can vagrant up a VM built from instructions very much like a Dockerfile, what's the difference between an app running as an image on an Alpine VM and the app vagranted up inside a slim RHEL 7 VM? Patching is more straightforward, as you don't need to worry about versions and dependencies in and out of the Docker image, but other than any overhead or lag with the Docker shim layer, it's about the same thing. Oh, also the filesystem parts. And the port forwarding. And the inter-image dependencies.

So if we should really deploy one image per VM anyway, and if it's really the same thing except a lot simpler to manage as a slim VM than as a Docker image on a VM, then really we should just vagrant something up instead of dockering it, and be ready to recycle the slim VM with its payload when we want to refresh (and don't want to use the option of just upgrading the apps).

Do containers kill VMs? I'm going to say no. Sometimes I worry they barely gain traction.

beelandc
Flight Engineer

I agree that if an organization deploys only one instance of one image per VM, it will lose out on many of the potential benefits of containers (including more efficient resource utilization compared to VMs alone), and the remaining benefits may not (in that specific scenario) make containers worth the additional complexity and overhead.

However, I don't really agree with the notion that the recommended way to use containers is to deploy one container per VM, and while I'm sure there are some organizations out there doing that, I have not personally encountered any using that approach. 

If your concern is resource contention, you should implement resource limits on your containers and scale your environment appropriately to optimize the use of the underlying resources without negatively impacting your application performance. 

Many enterprise environments leverage container platforms, such as OpenShift and Docker Swarm, to help manage their container-based infrastructure at scale. I'm personally most familiar with OpenShift, but all of these platforms provide features such as container scheduling, replication management, networking support, and resource limits that allow container environments to be managed effectively. However, even standalone Docker has the native ability to implement resource limits for containers.
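
For a sense of what those limits actually are under the hood, here's a rough sketch - assuming a cgroup-v2 host, root access, and a made-up cgroup path - of the kind of thing an engine does when you set a memory or CPU limit on a container:

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    func main() {
        // Hypothetical child cgroup; real engines create one per container.
        // Assumes the cpu and memory controllers are enabled on the parent.
        cg := "/sys/fs/cgroup/demo-container"

        if err := os.MkdirAll(cg, 0o755); err != nil {
            panic(err)
        }

        // Cap memory at 512 MiB and CPU at half a core
        // (50ms of CPU time per 100ms period).
        limits := map[string]string{
            "memory.max": "536870912",
            "cpu.max":    "50000 100000",
        }
        for file, value := range limits {
            path := filepath.Join(cg, file)
            if err := os.WriteFile(path, []byte(value), 0o644); err != nil {
                panic(err)
            }
            fmt.Printf("set %s = %s\n", path, value)
        }
    }

In practice you never touch these files directly: you pass the limits to the engine or platform (for example Docker's --memory and --cpus flags, or resource limits in OpenShift), and it manages the cgroup for you.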
