Why are containers (e.g., Kubernetes or Docker) an attacker's paradise?
Red Hat, for instance, documents a 13-layer security model just for containers alone - worth looking up.
@Caute_cautim I read the first article and to me it doesn't seem to give an explanation for the claim that containers can be an attacker's paradise. Did I miss something?
On Azure, vulnerability scanning is a default part of the pipeline. The next step is real-time scanning of running containers: if an issue is detected, the container is automatically destroyed and a replacement created. I do not understand why the article claims this is costly, since it's an automated process.
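For anyone curious what that automated stage can look like, here is a minimal sketch of a scan step in an Azure Pipelines definition using the open-source Trivy scanner. The registry name, image name, and severity threshold are all assumptions for illustration, not Azure defaults:

```yaml
# Hypothetical Azure Pipelines stage: scan the freshly built image and
# fail the run on HIGH/CRITICAL findings. Registry/image names are examples.
- stage: ScanImage
  jobs:
  - job: trivy_scan
    pool:
      vmImage: ubuntu-latest
    steps:
    - script: |
        # Install Trivy, then scan; --exit-code 1 makes findings fail the job
        curl -sfL https://raw.githubusercontent.com/aquasecurity/trivy/main/contrib/install.sh | sh -s -- -b ./bin
        ./bin/trivy image --exit-code 1 --severity HIGH,CRITICAL \
          myregistry.azurecr.io/myapp:$(Build.BuildId)
      displayName: Container vulnerability scan
```

Because a failed scan blocks the rest of the pipeline, only images that passed ever reach the cluster, which is why the marginal cost of the check is low once it's automated.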
I currently want to dig deeper and see which workloads can move from being containers to functions. To me, containers are more on the ops side of things, whereas functions are more on the dev side, since they take more coding considerations to use effectively.
@JKWiniger Okay, let's go back to basics: my point was to ensure you understand the entire lifecycle, rather than treat it as a given facility - given we are now in the world of APIs and microservices, each of which has its own architectural vagaries.
GitLab’s 2020 Global DevSecOps Survey found 56% of developers simply don’t run container scans, and a majority of DevOps teams don’t have a security plan in place for containers or many other cutting-edge software technologies, including cloud native/serverless, APIs, and microservices.
The solution is for DevOps teams to shift left and integrate security practices into each stage of the application lifecycle.
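As a concrete (hedged) example of shifting left, GitLab itself ships a container-scanning CI template that can simply be included in a pipeline. The variable name below matches recent GitLab versions but may differ in older ones:

```yaml
# Hypothetical .gitlab-ci.yml fragment: reuse GitLab's built-in
# container-scanning job so every pipeline scans the image it just built.
include:
  - template: Security/Container-Scanning.gitlab-ci.yml

container_scanning:
  variables:
    CS_IMAGE: "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHA"  # image to scan
```

Dropping the scan into the shared pipeline definition means developers get it by default rather than having to remember to run it, which directly addresses the 56% figure above.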
Yes, some cloud providers do run automatic scans, but always read the Shared Responsibility Model, as it is not always fully understood what the cloud provider will do for the client versus what they will not do - so be prepared.
Yes, I agree with Function as a Service (FaaS), which basically lets developers run their own code within a safe, fully resilient environment and be charged by the millisecond rather than by the minute or the hour.
There is so much more that can go wrong across the entire container lifecycle, as indicated by the Red Hat security documentation for OpenShift, for instance, which is attached.
Yes, we know containers are immutable, but they are dependent on the integrity of the private registry, which may be created by a third party, for instance. If you want to make changes, you have to destroy the original and replace it.
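One practical way to lean on that immutability (a sketch, not the only approach) is to reference images by digest rather than by a mutable tag, so a re-pushed or tampered tag in the registry cannot silently change what runs. The digest below is a made-up placeholder:

```yaml
# Kubernetes Deployment fragment pinning the image by digest.
# The digest value is a fake placeholder for illustration.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 2
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: registry.example.com/myapp@sha256:0f3a9c...  # immutable reference
```

With a digest reference, "making a change" forces exactly the destroy-and-replace workflow described above: build a new image, get a new digest, roll out a new deployment.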
Have a read, and once you have digested the Azure Shared Responsibility Model, I suggest you lay it out in a RACI matrix, which will help you understand your responsibilities. It is an enlightening experience.
@Caute_cautim I sometimes forget that I have a bad habit of assuming everyone follows best practices and does this stuff automatically. I am from the school of thought of "do it right or don't do it at all." A little wake-up call to remind me that not everyone thinks that way.
Containers for all!
SECURE Containers for all!
@JKWiniger Hi John, we all need reminders from time to time, everything is advancing so quickly, we need to keep very aware of what really is happening in reality. FaaS to all - DevSecOps with Code.
An ideal situation for one-stop shopping (many targets in one location to choose from). From my understanding, it's more down to the issue of the SLA, terms and conditions, and RACI (like you mentioned). I imagine lots of customers are pushing to the cloud for cost savings but have the misplaced idea that the cloud MSP is fully responsible when exposures/incidents/hacking/ransomware/etc. happen. They fail to realize the MSP has a pricing model: there is a basic hosting cost, then specific add-on costs, and even so a certain level of responsibility is left to the customer. So if the customer does not effectively manage their part of the RACI, that would be a potential concern.
@RRoach I think you are right about it being a one-stop shop, but I think that is more for cloud in general and not container-specific. As far as cost goes, a lot of places probably expect cost savings going to the cloud, but then they don't optimize for savings and find the cloud is costing more than expected. And yes, add-ons are a whole other game which needs to be considered and factored in from the beginning to get a proper idea of the real costs. How many places don't bring their dev environment down at night, when it isn't used, to save money?
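On the "turn dev off at night" point, here is one hedged sketch using a Kubernetes CronJob that scales a dev namespace to zero in the evening. It assumes a `scaler` ServiceAccount with RBAC permission to scale deployments; the schedule and namespace are examples:

```yaml
# Hypothetical CronJob: at 19:00 on weekdays, scale every deployment
# in the "dev" namespace to zero replicas (a matching morning job
# would scale them back up). Needs a ServiceAccount with scale rights.
apiVersion: batch/v1
kind: CronJob
metadata:
  name: scale-down-dev
  namespace: dev
spec:
  schedule: "0 19 * * 1-5"
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: scaler
          restartPolicy: OnFailure
          containers:
          - name: kubectl
            image: bitnami/kubectl:latest
            command: ["kubectl", "scale", "deployment", "--all", "--replicas=0", "-n", "dev"]
```

Since idle pods still consume billed node capacity, scaling to zero overnight is one of the simpler optimizations that places often skip.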
@RRoach @JKWiniger I agree with you: you have to think carefully about how and what is put in the cloud, and also think through the goals and strategy behind it. If you are charged in minutes or hours by one provider, can you get a better deal with another, or even use hybrid cloud, and not put all your eggs in the same basket? If you are doing coding and DevOps, then think about using Function as a Service (FaaS), which allows developers to test their packages/code - but remember, as you state, to turn it off, because you will be charged for each and every millisecond it is up and running.
Another issue is ingress and egress, because some providers also charge for each bit of data passed through their infrastructure.
Nothing is free in this world. Thinking smartly and wisely will reduce costs, but shovelling workloads into the cloud without a plan or forethought will definitely cost the client dearly.