Having spent years dealing with failing or blindsided gateway security solutions, and a good few more researching the latest next-gen endpoint products, I've finally concluded that reasonable endpoint protection requires multiple solutions, costs several hundred per endpoint, and still carries a small chance that all of these layers will fail.
Over 80% of attacks originate from web & email.
So why not plan for an inevitable breach? Planning for one does not mean I have given up; it means building isolation, zero trust, integrity and privilege separation into the plan.
My plan is to create a sandbox environment for the riskier internet activities: browsing the web, opening email with risky hyperlinks, and handling documents, all of which want to run foreign code in your enterprise.
I believe any organisation can achieve this today with their favourite brand of application streaming to seamlessly integrate remote web and email apps into existing estates.
I present the attached diagram as an example of a 1st gen approach to web isolation. I know it is not perfect but it does afford one a greater degree of confidence in preventing and recovering from an attack.
My current research is now focusing on how to leverage open source to provide a more integrated solution.
Thank you for reading this short article and please comment and contribute to this discussion.
Just to make sure if I understand you correctly: you're suggesting running the CLIENT side in a sandbox, right?
Something like that can be done (and is done) quite easily on any Linux system: simply start a VM, install a browser etc. in that VM, and then run your browser inside the VM, using the virtual screen/console to access the browser's output. Actually, I believe it was Sun that used to sell a similar solution decades ago: they had a system set up in the DMZ that had access to the Internet, and instead of giving you access to the Internet, you got access to the virtualised screen of their VM. This had some limitations (e.g. downloads required additional actions), but all in all it worked great and even allowed Internet access on secure networks.
Essentially yes: sandbox those web and email clients at various levels, but take them out of the enterprise.
I tried to avoid getting stuck in the weeds, but you have made a great point: the X11 server on *nix can achieve part of the goal and is over 30 years old, which only emphasises the question of why uptake has been so slow.
It raises the question: why do so many organisations still allow untrusted code to execute in their enterprise?
As for downloads: these would become an extra step, pushing the file through a managed file transfer system whenever it cannot be viewed or edited in situ or requires archival (with a method to transform content and remove potential exploits).
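As a toy illustration of that "transform content to remove potential exploits" step, here is a minimal Python sketch of one possible transform (my own assumption, not a description of any specific product): rebuilding an Office Open XML container without its embedded VBA macro part. Real content disarm and reconstruction handles far more than macros, so treat this purely as an example of the idea.

```python
import zipfile

def strip_macros(src_path: str, dst_path: str) -> list:
    """Rebuild an Office Open XML file (.docm/.xlsm) without its
    embedded VBA project, returning the names of the dropped parts.
    Illustrative only -- real content-disarm products sanitise far
    more than macros (OLE objects, embedded files, exploit payloads).
    """
    dropped = []
    with zipfile.ZipFile(src_path) as src, \
         zipfile.ZipFile(dst_path, "w", zipfile.ZIP_DEFLATED) as dst:
        for item in src.infolist():
            # OOXML stores the macro project as a part named vbaProject.bin
            if item.filename.lower().endswith("vbaproject.bin"):
                dropped.append(item.filename)
                continue  # skip it: don't copy the macro payload across
            dst.writestr(item, src.read(item.filename))
    return dropped
```

The managed file transfer system would run something like this on every download before releasing it into the enterprise, quarantining anything that could not be transformed.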
Today we have improved storage snapshot capability and can leverage application virtualization or containers to add a few more layers.
It is a shame Oracle is choosing to kill Solaris.
The problem is that sandboxing adds significant latency to the user's browsing experience. The King of France never ate a hot meal because everything had to get past the official tasters. Same problem here.
We plan our security assuming that the endpoint is compromised. We still do our due diligence, endpoint protection, port hardening, basic firewall, etc. but that's for machine stability and user convenience more than security.
It's also our policy to not spend more than 15 minutes trying to resolve a problem on a machine. After that, we just re-image from the template. In a VDI environment, this takes less than 5 minutes. Even for desktops, that usually takes less than 30 minutes. If it's something I haven't seen before, we'll migrate the infected machine to our sandbox for analysis. This policy isn't just for security issues, it's anything that severely impacts the performance of the system, so our users have gotten fairly used to it.
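The re-image-from-template policy above can be sketched at the file level (names and paths here are hypothetical, and a real VDI re-provision works on disk images, not directories): throw away the possibly compromised state wholesale and restore from a known-good "golden" copy, rather than spending time cleaning in place.

```python
import shutil
from pathlib import Path

def reimage(template: Path, machine: Path) -> None:
    """Discard the (possibly compromised) working copy and restore it
    from the golden template -- a file-level analogue of re-provisioning
    a VDI desktop from its master image after the 15-minute cutoff."""
    if machine.exists():
        shutil.rmtree(machine)          # discard current state wholesale
    shutil.copytree(template, machine)  # clean slate from the template
```

The point of the sketch is the policy, not the mechanism: no attempt is made to enumerate or clean infections, because restoring from a trusted baseline is faster and more certain.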
We, generally, don't care that much about our systems. We care about the data. Everything we do for systems is about stability first. If a security measure doesn't add to stability or performance, then it isn't implemented. The data, though, are the crown jewels and we treat them as such.
This also helps draw a clearer line between Security and Ops. They control the infrastructure; we support it. We control the data; they support that.
We, generally, don't care that much about our systems. We care about the data.
Good enough a motto to be used for a new startup, I say!
How is VDI any less latent than app streaming when it's pushed over the same protocols? What you are suggesting is that the latency is induced by resource contention, hence the user-acceptance challenges.
I see the benefits of the VDI approach, but surely those come from virtualization itself, which gives you the option to migrate or re-provision to a clean slate.
Why waste resources provisioning two virtual desktops just to achieve role-specific application hosting, keeping the compromised internet on one set of isolated infrastructure and the hopefully clean enterprise and data on the other?
On the last point: I've seen the two-desktops approach used before for higher- and lower-classification networks, both back in the day and quite recently.
Latency annoys users, but I think as long as the capacity is right the latency is probably going to be OK. Email is obviously less problematic than the web.
I'd say application virtualization or streaming makes more sense than VDI, but if you already run a big old Citrix farm or have a Mirage on the Horizon then VDI would probably make sense.
There are a few companies currently offering full web and email isolation (disclosure: I work for one of them); that's all application-layer stuff. The overhead can still be mitigated and the experience can be made good. It doesn't really remove the need for layered controls around detection and response, but it definitely makes sense for traffic you don't block outright or whitelist as trusted.
Phishing now involves a lot more of the old social engineering, and that is trickier to counter with this isolation approach.
Nice document BTW
Perhaps approach the premise from the other direction: rather than asking why so many organizations don't sandbox these risky applications and activities away from the rest of the enterprise, ask why so many organizations insist on running them in (for lack of a better term) an anti-sandbox. While we have seen improvements, too frequently we see applications running with greater privileges than necessary, and various ports opened to facilitate what often seems an unnecessary level of enterprise-wide connectedness.
I would say the premise of your paper/query is valid, but the necessity of such a sandbox perhaps varies depending on how carelessly the rest of the playground has been set up. For those of us who choose our playgrounds wisely, we seem to have significantly fewer worries than others. That doesn't mean we are immune to attack, but if you are going to drive a Pinto with no brakes and four bald tires, you're going to be more reliant on your seatbelt than someone who chooses a Volvo with anti-lock brakes and collision warning. Still, both should buckle up.