Stealing PII or other data apparently no longer satisfies black hats once they have full access to our computing devices. Emerging cryptocurrencies now offer an additional income stream beside ransom payments. Detecting this kind of infection requires further steps, and unfortunately the mechanism used can vary (last time it was the Coinhive JavaScript miner).
Something interesting here:
Forget stealing data — these hackers broke into Amazon's cloud to mine bitcoin
Let me know your thoughts.
Similar to your original conversation piece: the reality is that this is seeping into systems on a daily basis.
http://www.zdnet.com/article/uk-government-websites-ico-hijacked-by-cryptocurrency-mining-malware/
But fundamentally, from an SDLC perspective, why are web developers and related organisations so willing to accept third-party plugins blindly, with results like the above? Is it likely that the people or providers responsible will be disciplined, or will they just carry on as usual?
It's interesting to note that the UK Information Commissioner (the governing body for data protection in the UK) was offline most of yesterday with the BrowseAloud plugin issue, but hasn't commented as yet today on the threat or the reasons for being offline... Some others, e.g. Camden Council, have been a bit quicker off the mark.
The challenge here is very much to do with the model for plugins - I will load a plugin off your website because I need it and, possibly, have paid you for it, so you are now part of my trust web.
However, verifying your plugin (say using SRI hashes) requires that you tell me when you're updating it and I respond quickly to this; the problem multiplies quickly with each additional plugin.
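For context, SRI works by pinning a cryptographic digest of the exact file you expect the third party to serve. A minimal sketch of computing such a token (the plugin content below is a stand-in, not any real vendor's script):

```python
import base64
import hashlib

def sri_hash(content: bytes, algo: str = "sha384") -> str:
    """Compute a Subresource Integrity token for a script's bytes."""
    digest = hashlib.new(algo, content).digest()
    return f"{algo}-{base64.b64encode(digest).decode()}"

# A stand-in for the third-party script you load:
plugin_js = b"console.log('hello from a plugin');"
token = sri_hash(plugin_js)

# The token goes in the integrity attribute of the script tag, e.g.:
#   <script src="https://example.com/plugin.js"
#           integrity="sha384-..." crossorigin="anonymous"></script>
print(token)
```

If the vendor changes even one byte of the file, the digest no longer matches and the browser refuses to run it - which is exactly the update-coordination problem described above.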
Digital signing with built-in verification might help - but only if implemented everywhere, and of course, if you're hacked and the hackers can get at the signature too, all bets are off.
We were all lucky this time - the code didn't steal data, just clock cycles. Next time we may not be so fortunate...
Hi Steve
But surely, this is part of the standard set of ITIL principles and ISO 27001:2016 controls and the underpinning government security policies - i.e. change management. How and why do people allow themselves to be driven by need and pressure, when these types of consequences occur? Everyone needs to be a champion, or else this will happen again and again. What happened to Secure by Design life cycles? And now, on top of this, "privacy by design" driven by GDPR will exacerbate the situation. There are consequences, whether by breach, disciplinary action or termination.
It's a very good point caute_cautim - but in the wunnerful world of Web 2.0 up to now it's not happened. 3rd party plugins are common and currently not well controlled in most organisations I'm seeing.
I checked a website run by an organisation I'm consulting for today and found about a dozen plugins - including a BrowseAloud competitor. No-one had these logged as potential risks, nor, in some cases, was there even a clear contract with the supplier. They are now logged as a risk, and we are debating what action to take. Check your risk register - do you have these as risks?
However, let's be clear on the issue. An example may help.
Take Google - say, the Tag Manager use policy. (Not picking on Google; the others are probably similar.)
There's nothing in their terms of service that accepts any liability for things put in by that code; they even disclaim vetting content from 3rd parties that may get injected.
I don't get updates when they update the code - so if I use hash verification my website will stop working when they update, and I have to scramble to verify that it's a real update (hard - the JS is minified, so I can't easily see the changes) and then update my hash, while all the time my main window to the world is down.
I have two choices - stop using Google (ha ha) or have a contract that insists Google tell me when they update their code and give me time to update. Neither is going to work well - let's imagine a perfect world where updates are notified and imagine the outcome:
1) Exploit/bug is found, Google issue new code - it has to go on a new URL wherever sites have pinned a single SRI hash
2) Google notify everyone somehow
3) The good folks update their websites, JS call URLs and SRI hashes, and use the new code. Everyone else carries on using the old version.
4) Google remove the old version; lots of websites stop working, and Google get the blame. OR Google leave the old version up, everyone gets lazy, until the bug is exploited... They get the blame again.
Not good for Google, or the users.
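One mitigation worth noting: the SRI specification actually permits listing several hash tokens in one integrity attribute, so a site can accept both the old and new versions during a rotation window. A sketch of that check, with hypothetical file contents (real browsers also filter to the strongest listed algorithm, which this simplification ignores):

```python
import base64
import hashlib

def sri_token(content: bytes) -> str:
    """Build a sha384 SRI token for a file's bytes."""
    digest = hashlib.sha384(content).digest()
    return "sha384-" + base64.b64encode(digest).decode()

def passes_integrity(content: bytes, integrity_attr: str) -> bool:
    """Mimic a browser's SRI check: pass if the file matches ANY listed token."""
    return sri_token(content) in integrity_attr.split()

old_js = b"// v1 of the plugin"
new_js = b"// v2 of the plugin"

# During a rotation window, pin both versions at once:
integrity = f"{sri_token(old_js)} {sri_token(new_js)}"

assert passes_integrity(old_js, integrity)
assert passes_integrity(new_js, integrity)
assert not passes_integrity(b"// tampered code", integrity)
```

This eases the URL-rotation problem a little, but it still presumes the vendor tells you about the update in advance - which is the real gap in the model.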
Okay, let's assume Google give me a URL for the hash too and I only allow loading from a Google-signed location and of Google-signed code, so Google can update anytime and I am just verifying their certificates. We're just moving trust from one signature (a hash) to another (a cert) and that can also be hacked (q.v. DigiNotar and the google.com certificate).
So let's get the 3rd party out - I need to host the code so the only risk is on me.
If I do, the update problem becomes mine, but then we have trust issues on the Google side because I will be running code they may have deprecated and talking to APIs that they want to update. Again, they may have to stop me using the APIs.
It's not an easy problem - we can solve it by moving to a more restricted web that follows ITIL principles and ISO 27001:2016 controls, but that will involve some big changes. Maybe this incident will lead to pressure for that to happen, but unless we all stop using 3rd parties who don't follow strong security rules, there's not the pressure to change.
Yes, I'm a bit negative. We've not fixed SMTP properly after 26 years either...
Hi Steve_D,
I just wanted to bring your attention to a live webinar we have coming up on the 22nd May on the subject of crypto mining that might be of interest to you: "Crypto Crime: Hunting for Cryptocurrency Mining in Your Enterprise".
I hope this is helpful.
Best
Patricia
Digital Programmes Manager
(ISC)2 EMEA
John,
@Caute_cautim wrote:How and why do people allow themselves to be driven by need and pressure, when these types of consequences occur? Everyone needs to be champions or else this will happen again and again.
It's as simple as the risk equation. We evaluate risk based upon the probability that a vulnerability will be exploited by a threat. Until the threat was realized, through stealing of clock cycles for bitcoin mining, there was just a vulnerability. If we evaluated risk on the existence of the vulnerability alone, there would be no way for us to afford to be in business.
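The risk equation referred to here can be made concrete with the classic quantitative terms: annualized loss expectancy (ALE = SLE × ARO). A toy illustration with entirely made-up figures:

```python
def annualized_loss_expectancy(asset_value: float,
                               exposure_factor: float,
                               annual_rate_of_occurrence: float) -> float:
    """ALE = SLE * ARO, where SLE = asset value * exposure factor."""
    single_loss_expectancy = asset_value * exposure_factor
    return single_loss_expectancy * annual_rate_of_occurrence

# Before the threat is realized, we might estimate the ARO as near zero...
before = annualized_loss_expectancy(100_000, 0.1, 0.01)  # 100.0
# ...after a public incident, the estimate jumps - and so does the
# spend on mitigation that the numbers can justify.
after = annualized_loss_expectancy(100_000, 0.1, 2.0)    # 20000.0
print(before, after)
```

The point being: the vulnerability was the same before and after the mining incident; only the estimated rate of occurrence changed, and with it the business case for acting.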
Developers and the Organizations that use their software are motivated by their bottom line. Developers want products out quickly that are useful and affordable. Organizations want to purchase products that do the function they are being acquired for, and are affordable. The introduction of vulnerabilities is a collateral effect: the Developer doesn't have the time and resources to make their product absolutely perfect before they run out of startup funds, and the Organization would otherwise choose a competing product because it's actually available, affordable, and does the thing that needs doing.
@Baechle wrote:John,
@Caute_cautim wrote:How and why do people allow themselves to be driven by need and pressure, when these types of consequences occur? Everyone needs to be champions or else this will happen again and again.
It's as simple as the risk equation. We evaluate risk based upon the probability that a vulnerability will be exploited by a threat. Until the threat was realized, through stealing of clock cycles for bitcoin mining, there was just a vulnerability. If we evaluated risk on the existence of the vulnerability alone, there would be no way for us to afford to be in business.
Developers and the Organizations that use their software are motivated by their bottom line. Developers want products out quickly that are useful and affordable. Organizations want to purchase products that do the function they are being acquired for, and are affordable. The introduction of vulnerabilities is a collateral effect: the Developer doesn't have the time and resources to make their product absolutely perfect before they run out of startup funds, and the Organization would otherwise choose a competing product because it's actually available, affordable, and does the thing that needs doing.
It's not a question of how and why; it depends on the competitive nature of the environment one works within, with the great majority wanting to put the client first and get the solution and services out of the door. The pressures on organisations these days are immense, and the Agile cultural change increases them. So security practitioners either give up, or believe strongly enough to keep injecting themselves and obtaining better outcomes. It is challenging in some environments, in which we either strive or become shrinking violets. I think the recent initiative to drive vendors towards providing warranties will change the approach eventually, especially if the insurance companies issue edicts and increase premiums sufficiently for organisations to take notice.
John,
@Caute_cautim wrote:It's not a question of how and why; it depends on the competitive nature of the environment one works within, with the great majority wanting to put the client first and get the solution and services out of the door. The pressures on organisations these days are immense, and the Agile cultural change increases them.
I don’t really understand what you’re saying here. The competitive nature of business is what drives the necessity to take risks. That means getting a product or service available to customers. There is value in being (1) early to market, (2) the most reliable, and (3) the least expensive. Each of these will appeal more to different customers depending on those customers' needs. However, there is a significant problem in the reliability context of software. It doesn’t have as much to do with a “Client First” approach as it does with the developer's long-term “Fiduciary Viability”.
Software is the part of the whole system that does something on behalf of a human. Software relies on hardware, other software, and an event (usually in the form of human interaction) to get it going and to do something useful. Each pillar there and each change or layer to those pillars introduces an instability that has to be accounted for by the programmer/developer. If you don’t release a piece of software until it has all of the instabilities addressed, you will practically never release a piece of software. Something else in the ecosystem is always changing.
Somewhere you have to concede reliability to external factors in order to make your product available on the market, and affordable. That’s where the risk management process comes in. It is applied both by the developers/manufacturers of software and by the consumers: is what this product aims to do worth the risk of the instability that using it introduces to my overall information system?
@Caute_cautim wrote:I think that the recent initiative to drive vendors towards providing warranties will change the approach eventually, especially if the insurance companies issue edicts and increase premiums sufficiently for organisations to take notice.
I don’t think that any drive toward software product warranties against misuse or abuse, especially external abuse, is going to be successful. I can’t think of any industry that does that. In fact, in nearly every sense, misuse and abuse are specifically clauses that invalidate warranties. Just because someone throws a brick at my car doesn't mean that the manufacturer replaces my windshield under warranty. It's an amusing concept that I think will eventually become niche and then die.
I also think it’s a pipe dream to expect commercial programmers and developers to come up with every possible scenario constituting misuse or abuse. As soon as you begin connecting systems to other systems, you introduce instability that can’t necessarily be accounted for – unless you make your system incompatible with everything else. I certainly don’t think that’s the direction the “Cloud” will let developers move and stay profitable. On the other hand, we’re all free to spend our spare time making programs that are bug and error free and incompatible with everything else – because we don’t need to fund our programming and development by selling it.
Sincerely,
Eric B.