So a container is created just for one program or process? Does a container run/operate within a VM, or in the guest RAM / processor cycles?
Think of it like this: a computer can run multiple VMs, but a VM runs only one operating system. Now scale that down to the application level, and one VM can run multiple containers. I have only ever run one program per container; it is possible to run a few in one, but I don't think that is advisable. It would depend on how you build the image. Container processes run purely in memory, which makes sense since they are just processes.
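The one-process-per-container idea above can be sketched as a minimal Dockerfile (the base image tag and the `app.py` script are illustrative assumptions, not anything from this thread):

```dockerfile
# Minimal sketch: one process per container.
FROM python:3.12-slim
WORKDIR /app
COPY app.py .
# Exec form: this single process becomes PID 1 inside the container.
# When it exits, the container stops -- the container *is* the process.
CMD ["python", "app.py"]
```

Running several processes in one container usually means adding a supervisor as PID 1, which is exactly the extra complexity the one-process convention avoids.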
Imagine a web server. In the world of VMs, if you need more capacity to handle requests you would need to deploy and spin up more VMs, which can take a little time. Since a container is just a process, it is lightweight and can be brought up and down very quickly compared with a VM; and in reality you only need more processes to handle your requests, not the full operating system that you get with a VM.
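That scale-out difference can be sketched with Docker Compose (the service name and image here are assumptions for illustration):

```yaml
# docker-compose.yml -- illustrative sketch of a stateless web service
services:
  web:
    image: nginx:alpine   # any stateless web server image will do
    ports:
      - "8080"            # let the engine assign host ports so replicas don't clash
# Scaling up is a single command, and each replica is just a new process,
# so it starts in seconds rather than the minutes a VM boot can take:
#   docker compose up -d --scale web=5
```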
There are so many things in an operating system that need to be updated and secured, while in a container you have one process, which is restricted in what it can access when it is deployed. It just simplifies things.
I have not tried it, but I did see that VMware Fusion on the Mac now supports containers, so it could be something interesting to look at. Or you can always just install Docker and manage things with Portainer.
I like to think of it in layers: networks, servers, and operating systems in traditional systems.
Moving to virtual systems on VMware or others: the VMs have their own operating systems and applications sitting on top, with their own resources.
On VMware you are dependent on the underlying VMware environment (the ESXi hosts), but these are always accessible to the VMware administrator.
Containers, as our colleague states, are the next realm. They simplify everything, they don't use as many resources, and you can literally containerise your applications into specific areas, or distribute them across cloud environments while keeping control of them, including who has access to them.
Containers are often used in DevOps environments, where lots of software development is done using Agile techniques, but remember SecDevOps as well.
Here is a quick explanation of containers and microservices:
Many cloud services offer free or even open-source vulnerability-scanning tools, such as JFrog's and many others, so as you create a new container image it is automatically scanned for vulnerabilities and rated by the impact of the vulnerabilities discovered at that point in time. It is part of the microservices approach: reducing environments to the smallest necessary to run them, isolating them, and literally containing them.
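The scan-on-creation workflow described above can be sketched as a CI step. This example uses the open-source Trivy scanner's GitHub Action; the image name and severity threshold are assumptions for illustration:

```yaml
# Illustrative CI step (GitHub Actions syntax): scan each newly built
# image before it is pushed, and fail the build on serious findings.
- name: Scan image for vulnerabilities
  uses: aquasecurity/trivy-action@master
  with:
    image-ref: myorg/myapp:latest   # hypothetical image name
    severity: HIGH,CRITICAL         # only fail on high-impact findings
    exit-code: '1'                  # non-zero exit fails the pipeline
```

The same idea applies whichever scanner a cloud provider or registry bundles in: the image is rated at build time, before it ever runs.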
A lot of cloud architects tend to think this makes their lives easier, but they have to think through the entire SDLC, including DevOps, and now SecDevOps as well.
Servers, as my colleague states, have many aspects: go along to the Center for Internet Security (CIS) to review their benchmarks on hardening for operating systems, etc.
McAfee, Symantec, Trend Micro, and Sophos often offer services above and beyond normal AV these days for hosts, servers, and operating systems: things such as host-based firewalls, host-based IDS/IPS, file integrity monitoring, web reputation, log monitoring, etc.
Carbon Black and CrowdStrike take a different approach, monitoring endpoints (servers, desktops, laptops, etc.) for user behaviour or actions that may constitute a compromise, or an attempt to compromise files, hosts, servers, etc. It is far more holistic.
However, you still have traditional needs; these have not gone away, even in cloud environments.
My personal take is VMware does a great job, but I like some independence rather than depending on a single vendor, especially in shared or multi-cloud environments.
I like some of the container security solutions, like Twistlock and Illumio.
The only caveat I have is economic: they only make sense once you have hundreds of containers.
There are others, such as Aqua.
@ericgeater, with regards to containers, you're essentially packaging the application processes --- with some isolation --- & this increases efficiency / manageability, & perhaps also enhances security to some extent.
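That "some isolation" can be tightened per service in Compose; a minimal hardening sketch (service name and image are hypothetical):

```yaml
# Illustrative hardening of a single containerised service (Compose syntax).
services:
  app:
    image: myorg/myapp:latest   # hypothetical image
    read_only: true             # immutable root filesystem
    cap_drop:
      - ALL                     # drop Linux capabilities the process doesn't need
    security_opt:
      - no-new-privileges:true  # block privilege escalation via setuid binaries
```

Options like these restrict what a compromised process can do, but as noted below they complement rather than replace an EPP / EDR solution.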
That being said, the use of containers can't take the place of an EPP / EDR solution. As @Caute_cautim had mentioned, they can be compromised. We used Docker in our organisation. This link gives you some info on the security concerns with that.
At the end of the day, you should have a good EPP / EDR solution to safeguard the infrastructure, & it should not consume too many resources either.
Providers tend to change the names of their solutions even if there's little change in the features, so it's best to check on the features & test the solution with a PoC to ensure you're satisfied with it.
An example: we use a Virtual Desktop Infrastructure (VDI) for which the EPP provider was asked to facilitate an agent-less mode to ensure minimal performance impact --- but went on to try to convince us to use their agent-based mode for added features. Our engineer tested an instance and found that it consumed way too many resources, after which the provider relented.