Determining the likelihood of an attack succeeding in your environment is a key component of understanding the potential financial loss you could incur. One of the methods used in the industry today is computing Value at Risk (VaR) through Monte Carlo simulations.
Monte Carlo simulations rely on repeated random sampling to produce distributions of possible outcome values, an example of which is shown in the figure below.
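To make the idea concrete, here is a minimal sketch of a Monte Carlo loss simulation. All of the parameters (breach probability, lognormal loss severity, trial count) are illustrative assumptions, not calibrated figures:

```python
import math
import random

def simulate_annual_loss(p_breach=0.3, typical_loss=2_000_000, sigma=1.0):
    """One trial: a Bernoulli breach event with a lognormal loss severity."""
    if random.random() < p_breach:
        return random.lognormvariate(math.log(typical_loss), sigma)
    return 0.0

def value_at_risk(trials=100_000, percentile=0.95):
    """Run the trials, sort the losses, and read VaR off the distribution."""
    losses = sorted(simulate_annual_loss() for _ in range(trials))
    return losses[int(percentile * trials)]

random.seed(42)
print(f"95% annual VaR: ${value_at_risk():,.0f}")
```

Note the core issue discussed below: `random.random()` is doing all the work here. The quality of the output distribution is entirely determined by how well those sampled probabilities reflect reality.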
Figure 1 – Taken from the World Economic Forum (WEF) Cyber VaR paper
The thing that bothers me (and drives me) is the “random sampling” used in Monte Carlo simulations. Just a surface look at the cyber industry shows that we have a lot of data available. Organizations have, on average, 75 security products deployed and receive 17,000 malware alerts per week. This isn’t a complete picture but it does mean that we can do better than random.
Random sampling also fails because of the “I’m different” syndrome. What do I mean by that? I mean that you can’t look at two similar companies (take Home Depot and Lowe’s, for example) and use the same random samples. Why? Because the security environments (people, process, and technology) they have could be wildly different. Maybe we could create a “random random sampling” to cover that, but that would only work if we created a 2nd order derivative to go with it (for those of you not following, the shorthand for that would be r2d2).
To remove “random sampling,” we really need to understand what an attacker could do to your environment. The most accurate way to know would be to let an attacker loose in your environment and watch their every move. Aside from that being a dumb thing to do (yes, you can quote me on the “dumb” part), there are better ways.
There are a lot of factors in understanding what an attacker would (and can) do to your organization. One question I always get asked is, “What impact would that attack have on my environment?” Vulnerability information (CVEs with their corresponding CVSS scores) provides insight into how exploitable your endpoints are, but it isn’t a complete answer.
Another key piece of information is the answer to “could an attack actually run on one of my endpoints that has this vulnerability?” That question is where the cloud, big data, and advanced analytics come into play.
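As an illustration of why CVSS alone is only a starting point, the CVSS v3.1 exploitability sub-score is defined (per the FIRST specification) as 8.22 × AttackVector × AttackComplexity × PrivilegesRequired × UserInteraction. Normalizing it into a rough likelihood prior, as the sketch below does, is our own assumption and not part of the standard; it says nothing about whether the attack would actually execute on your specific endpoint:

```python
# CVSS v3.1 metric weights (unchanged-scope values, per the FIRST spec)
AV = {"N": 0.85, "A": 0.62, "L": 0.55, "P": 0.20}  # Attack Vector
AC = {"L": 0.77, "H": 0.44}                        # Attack Complexity
PR = {"N": 0.85, "L": 0.62, "H": 0.27}             # Privileges Required
UI = {"N": 0.85, "R": 0.62}                        # User Interaction

def exploitability(av, ac, pr, ui):
    """CVSS v3.1 exploitability sub-score (maximum is about 3.89)."""
    return 8.22 * AV[av] * AC[ac] * PR[pr] * UI[ui]

def naive_exploit_prior(av, ac, pr, ui):
    """Our illustrative assumption: scale the sub-score onto [0, 1]."""
    return exploitability(av, ac, pr, ui) / 3.89

# A network-reachable, low-complexity CVE needing no privileges or user action:
print(round(naive_exploit_prior("N", "L", "N", "N"), 2))  # 1.0
```

Two endpoints carrying the same CVE would get the same number from this formula, which is exactly the “I’m different” problem again: the score knows nothing about the compensating controls on each machine.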
I believe the best way to determine the probability an attack can occur in your environment is to test it with a copy of your environment. Load the copy (whether it be a gold image, clone, or something similar) and test attacks against the copy. That data, combined with the results of other runs, provides a deep understanding of what happens, how and why. And that data isn’t random.
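The aggregation step this implies is straightforward. Here is a minimal sketch (all names and the sample data are hypothetical) of turning recorded test runs against environment clones into empirical per-attack success rates:

```python
from collections import defaultdict

def empirical_success_rates(runs):
    """runs: iterable of (attack_id, succeeded) pairs recorded from
    detonating attacks against a clone of the environment."""
    totals = defaultdict(lambda: [0, 0])  # attack_id -> [successes, attempts]
    for attack_id, succeeded in runs:
        totals[attack_id][1] += 1
        if succeeded:
            totals[attack_id][0] += 1
    return {a: s / n for a, (s, n) in totals.items()}

# Hypothetical results from three runs of one exploit and one phishing payload:
runs = [("CVE-2017-0144", True), ("CVE-2017-0144", False),
        ("CVE-2017-0144", True), ("phish-macro", False)]
print(empirical_success_rates(runs))
```

Each probability here is measured, not sampled: rerun the tests after a patch cycle or a configuration change and the numbers move with the environment.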
What would that give us? Empirical data around the probability an attack could succeed. No random sampling needed. That’s our goal at Nehemiah Security – to produce empirically proven, financially quantified results.
This is a short post, and what we’re talking about here is just one factor in understanding the potential financial impact cyber attacks can have on your business. My goal and passion is to move beyond methods like Monte Carlo simulations and Value at Risk (VaR) models and into empirical, provable, and repeatable methods for computing loss.
Interested in more cyber risk analytics blog posts? Read our post about Computing Asset Value.