Hi all, I'm looking to start a discussion about SIEMs. What do you think about correlation rules, which are ultimately static rules in a changing world? I see customers disabling the majority of rules just to avoid floods of alarms and false positives.
Do you think SIEMs need to change toward an approach of ingesting as much information as possible and then using machine learning, threat feeds, or similar techniques to detect security issues?
I'm not a SIEM expert, but correlation rules remind me of static IPS rules, where many customers disable most of them, or simply trust whatever the vendor says about its review of threats and never look at the IPS themselves.
Samuel,
Here's my take on SIEMs. This is my opinion based on my professional experience, area of skill, and observations.
SIEMs are a valuable tool when equipped and wielded by a Data Engineer/Data Scientist supporting both Security Operations and System Administrators.
SIEMs are not plug and play, no matter how much their marketing literature tells you they are (as evidenced by so many customers turning off every alarm and alert - usually followed in short order by ignoring the SIEM completely). You need a dedicated SIEM person (preferably with Data Engineering talent or skills) who works the SIEM by continuously customizing it and improving its data aggregation and filtering, and who collaborates with each Security staffer and System Admin, iteratively over time, to develop triggers, alerts, and reports that are valuable to them and their leadership.
I have to reiterate: in my opinion you need someone on staff with Data Engineering skills and interest. This is not the same thing as system or security administration (someone may happen to have a Data Engineering background and be able to dual-hat, but don't count on it).
In my experience and observation, the lack of a dedicated, trained, and engaged/interested Data Engineer has been the downfall of most SIEM deployments that I've encountered. The presence of one or more Data Engineers/Data Scientists, on the other hand, has been key to the overwhelming success of the SIEM implementations that I've encountered.
Sincerely,
Eric B.
Absolutely agree with your assessment. I have seen SIEMs turned into veritable dumping grounds for logs, with very little return on investment, when wielded by personnel without data-analytics skills (who are frequently already overwhelmed by the number of tools they have to handle).
The lack of universal parser libraries is also a killer, frequently requiring a serious commitment of time and effort.
Furthermore, I've seen slap-dash implementations with unencrypted syslog forwarded to and accepted by SIEM vendors, or at least pumped through the local network to collectors.
It seems that, at least for now, SIEMs may be causing as many problems as they claim to be able to address, and in most instances are implemented as a concession to compliance requirements for a centralized log-retention repository.
@SamuelSancho, while the others have just about clarified everything you asked, I'll supplement with a small description of my most recent experience with the SIEM, and a bit of info...
We procured a SIEM solution, but encountered a host of issues when implementing it --- including integrating log sources, setting up alerts, and so on --- which culminated in the SIEM becoming a white elephant and the organization then going in for a Managed Security Services Provider (MSSP).
(An MSSP works out cheaper, since the organization's capital and operational expenditures are greatly reduced.)
In my opinion, it's not safe to rely on machine learning alone for the SIEM to detect issues. For example, if you set up a SIEM in an environment where malicious activity is already present, it may learn to perceive that activity as normal, right? The best thing to do is to manually configure the SIEM with details of the infrastructure and then create rules and tailor alerts as needed.
You likened a SIEM to an IPS when it comes to poor configuration, but there's a major difference in terms of the potential impacts.
Finally, as @Baechle already stated, there's no use having a SIEM without someone with adequate skills to manage it. This someone shouldn't be the network or system administrator, but a dedicated security person with the proper know-how and skill set for the job.
I agree with the above 3 posters. You do need someone who is both curious and interested in the security implications of the log events and is capable of dealing with the massive amounts of events being processed.
I have seen SIEM implementations where many of the rules are simply variants of detecting a brute-force attack on a user account. But if you read the threat intel on the ingenious new exploits being discovered nearly every day, very few of them involve brute-forcing user accounts. So companies deploying SIEMs invest a lot of money in a system that could be quite capable, but that ends up spamming them with alerts about user accounts that have gotten locked out. Unsurprisingly, they're not very happy with their investment after a while.
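To illustrate how narrow that class of rule really is: a classic brute-force correlation rule boils down to a threshold on failed logins within a sliding window. Here's a minimal sketch, with the event format, field values, and thresholds all invented for illustration:

```python
from collections import defaultdict, deque

# Classic brute-force correlation rule, reduced to its essence:
# alert when one account accumulates THRESHOLD failed logins
# within a WINDOW-second sliding window.
THRESHOLD = 5
WINDOW = 60  # seconds

def detect_bruteforce(events):
    """events: (timestamp, username, outcome) tuples, sorted by time.
    Yields (username, timestamp) whenever the rule fires."""
    recent = defaultdict(deque)  # username -> recent failure timestamps
    for ts, user, outcome in events:
        if outcome != "FAILED_LOGIN":
            continue
        q = recent[user]
        q.append(ts)
        while q and ts - q[0] > WINDOW:   # expire failures outside the window
            q.popleft()
        if len(q) >= THRESHOLD:
            yield (user, ts)
            q.clear()  # one alert per burst

# Five failures in 40 seconds trips the rule on the fifth one.
burst = [(t, "alice", "FAILED_LOGIN") for t in range(0, 50, 10)]
print(list(detect_bruteforce(burst)))  # [('alice', 40)]
```

The point being: this detects exactly one attack pattern, and a rack of near-copies of it doesn't widen coverage against the exploits that actually show up in threat intel.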
So it's critical to have the threat intel, engineers and analysts all working together and talking to each other, and thinking about how attacks would actually look and play out in their real-world network, so they can devise the correlation rules to actually detect attacks.
Here's a quick check: whenever you have a scheduled pen-test, does your SIEM detect the pen-testers and alert you to their activity - even if they fail to exploit anything? If not, it's probably a good idea to see what's going on with the SIEM.
Hi all, very interesting conversation. Basically I agree with all the posts, but I think the underlying problem is the static approach SIEMs take to security. We all know that security is a constantly changing world, so, as Dan said, many SIEM implementations are full of brute-force rule variants even though not many of today's threats involve brute force. So we should probably begin to think about SIEMs in a new way. Static correlation rules must be maintained, but a SIEM has a lot of information about the traffic, so it could build a "baseline" of that traffic using machine learning. That information would give engineers a very powerful tool to understand what is happening and to create new rules. Obviously, this baseline must be able to detect new situations automatically (growth of the company, for instance) and adapt to them.
I agree that a baseline that contains malicious activity would be a problem, but the static rules are there to help you.
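To make the idea concrete, here is a very rough sketch of such an adaptive baseline: an exponential moving average and variance per metric that flags large deviations but keeps tracking slow drift, so gradual growth is absorbed rather than alerted on. The class name, metric names, and thresholds are all invented for illustration; a real SIEM would need far more than this:

```python
class TrafficBaseline:
    """Sketch of an adaptive baseline: exponential moving average and
    variance per metric. Large deviations are flagged as anomalous,
    while the baseline keeps drifting with the data, so gradual change
    (e.g. organic company growth) is absorbed rather than alerted on."""

    def __init__(self, alpha=0.3, z_threshold=4.0, min_samples=5):
        self.alpha = alpha              # how fast the baseline adapts
        self.z = z_threshold            # deviations beyond z*std are anomalous
        self.min_samples = min_samples  # warm-up before flagging anything
        self.mean, self.var, self.n = {}, {}, {}

    def observe(self, metric, value):
        """Feed one sample; returns True if it looks anomalous."""
        if metric not in self.mean:     # first sample seeds the baseline
            self.mean[metric], self.var[metric], self.n[metric] = value, 0.0, 1
            return False
        self.n[metric] += 1
        m, v = self.mean[metric], self.var[metric]
        std = v ** 0.5
        anomalous = (self.n[metric] > self.min_samples
                     and std > 0 and abs(value - m) > self.z * std)
        d = value - m                   # update regardless, to track drift
        self.mean[metric] = m + self.alpha * d
        self.var[metric] = (1 - self.alpha) * (v + self.alpha * d * d)
        return anomalous

b = TrafficBaseline()
for v in [100, 102, 98, 101, 99, 100, 103, 97]:   # "normal" traffic
    b.observe("bytes_out_per_min", v)
print(b.observe("bytes_out_per_min", 5000))        # a sudden spike: True
```

Note the trade-off in `alpha`: a higher value adapts quickly to legitimate growth but also lets a patient attacker "train" the baseline, which is exactly why the static rules still need to be there.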
I don't know all the SIEMs on the market well, and I don't know whether any of them already implement something similar, but I think it would be very useful.
Anyway, thanks for your replies. It's a pleasure to talk with people like you.
Samuel
Samuel,
You could accomplish the same thing here with a free NetFlow analyzer (e.g. Moloch + ElasticSearch) without A.I. or machine learning. Because you're not using A.I., you get the double benefit of being forced to go out and research ports, protocols, and the applications associated with them on your hosts, and to evaluate misconfigurations throughout your enterprise based on the anomalous communications you discover.
@SamuelSancho wrote: Static correlation rules must be maintained, but a SIEM has a lot of information about the traffic, so it could build a "baseline" of that traffic using machine learning. That information would give engineers a very powerful tool to understand what is happening and to create new rules. Obviously, this baseline must be able to detect new situations automatically (growth of the company) and adapt to them.
As you select and identify various communications as valid, you filter them from your NetFlow view. Eventually you're left with a view of anomalies: increases in traffic, different protocols, different times, etc. These should now be the only things still showing up.
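That whittling-down loop can be pictured as a simple filter over flow records: an allowlist that grows as you validate traffic, leaving only the unexplained flows. Every address, port, and tuple below is made up for illustration:

```python
# Toy flow records: (src, dst, dst_port, proto) stand-ins for what a
# NetFlow collector would hand you. All addresses and ports are made up.
flows = [
    ("10.0.0.5", "10.0.0.10",   443,  "tcp"),  # internal web app
    ("10.0.0.5", "10.0.0.53",   53,   "udp"),  # DNS
    ("10.0.0.7", "203.0.113.9", 6667, "tcp"),  # IRC to an external host?
    ("10.0.0.5", "10.0.0.10",   443,  "tcp"),
]

# Each time a communication is researched and judged valid, it earns
# an allowlist entry: (dst, dst_port, proto).
allowlist = {
    ("10.0.0.10", 443, "tcp"),
    ("10.0.0.53", 53,  "udp"),
}

def unexplained(flows, allowlist):
    """Whatever the allowlist doesn't account for is left to investigate."""
    return [f for f in flows if (f[1], f[2], f[3]) not in allowlist]

for f in unexplained(flows, allowlist):
    print("investigate:", f)   # only the IRC flow remains
```

The valuable part isn't the filter itself but what you learn deciding each allowlist entry: which port, which protocol, which application, and why it's on your network at all.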
Taking a SIEM and then finding a use for it is probably the opposite of what we should be doing. Instead, we should be identifying a task we want to accomplish: identify non-standard traffic on our network and its origin for further investigation. Before we even start looking for a tool, we should understand what needs to be accomplished on a conceptual level and be able to articulate it in layperson terms. In this case: we need to capture network communication header information to a database where it can be sorted, filtered, and analyzed for anomalies. Then we can start talking about how to do that using common tools like WinPCap, port mirroring, and Ethereal/Wireshark, before going to advanced tools like an ElasticSearch database underneath Moloch nodes. And you'll see you never even have to touch a full SIEM to get there.
You will probably be so busy tracking down, analyzing, understanding, and filtering traffic that you won't even have time to deal with a SIEM. Once you've gone through the process of filtering out the stuff that's "normal", and cleaned up your remaining systems by reloading them or removing excess applications, services, and protocols... THEN you'll be ready to start talking about a SIEM with an established baseline.
Sincerely,
Eric B.