Ever since I first became familiar with the 80/20 principle, and other circumstances marked by Pareto distributions, I have seen examples of it everywhere. Naturally, I’m particularly biased to observe it in risk, security, compliance, and other related disciplines. You can frequently see situations where 20% of the issues dominate 80% of the risk or where 20% of the work can provide 80% of the benefit. The 80/20 principle can be vital where work needs to be driven by relentless incremental progress. Finding the smaller percentages of actions that yield the most impact, in parallel with a few very big transformations, is the key to much of effective risk management.
In thinking about this again, I decided to re-read the latest edition of the landmark book, The 80/20 Principle by Richard Koch. What follows is a summary of what I think are the most relevant parts, together with some thoughts on applications in security. It is, of course, no replacement for reading the book itself.
The universe is predictably unbalanced. Few things really matter.
As Richard opens, “Truly effective people and organizations batten on to the few powerful forces at work in their worlds and turn them to their advantage”.
If we’re going to be precise, there are plenty of 70/30’s and 99/1’s, but the reality is that most disciplines are full of 80/20’s, or distributions near enough to be reasonably labeled as 80/20’s, for example:
20% of products represent 80% of the revenues of many businesses
20% of customers account for 80% of the profits of many businesses
20% of criminals account for 80% of criminal losses
20% of motorists cause 80% of the accidents
20% of those who marry represent 80% of the divorces (serial marriage failures)
The framing of this principle originates with the economist Vilfredo Pareto, hence the Pareto (distribution, principle, law) naming. He discovered that national wealth is typically distributed in such an 80/20 fashion. It was predictably unbalanced - as were, upon much later inspection by other economists, many other situations.
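You can watch this imbalance emerge directly by sampling from a Pareto distribution. The sketch below uses the shape parameter that yields the classic 80/20 split (roughly 1.16, i.e. log(5)/log(4)); the specific numbers are illustrative, not drawn from Pareto’s data.

```python
import random

random.seed(42)

# Shape parameter alpha ~= 1.16 (log(5)/log(4)) produces the
# classic 80/20 split in the limit.
alpha = 1.16
wealth = sorted((random.paretovariate(alpha) for _ in range(100_000)),
                reverse=True)

top_20_pct = wealth[: len(wealth) // 5]
share = sum(top_20_pct) / sum(wealth)
print(f"Top 20% hold {share:.0%} of the total")
```

Run it a few times with different seeds and the top 20% consistently hold somewhere around 80% of the total - predictably unbalanced.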
IBM was one of the most emblematic adopters of hunting for and exploiting the 80/20, discovering that 20% of mainframe instructions accounted for 80% of the computing time; optimizing the hell out of that 20% had transformational performance effects.
It happens in controls as well. I once worked for an organization where we developed a Bayesian Network simulation of a distributed control process. From this we developed, using historical data, a control effectiveness distribution for each control in the environment, from which we could run a Monte Carlo simulation to determine the overall dependency on each control. Our intuition was that the “control load” was borne roughly equally by all controls. That turned out not to be the case (in hindsight it was rather obvious) and, you guessed it, roughly 80% of the overall control effectiveness came from 20% of the controls. Interestingly, some of the controls in that 20% were the least invested in and were over time becoming more likely to fail. We corrected that.
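The mechanics of that exercise can be sketched in miniature. This is not the Bayesian Network we actually built - it's a hypothetical environment of 20 controls, each with an assumed Beta-distributed effectiveness, where a few strong controls and many weak ones stand in for the distributions we derived from historical data. The Monte Carlo loop then estimates each control's average share of the total control load.

```python
import random

random.seed(7)

# Hypothetical environment: 20 controls with Beta-distributed
# effectiveness. Four strong controls, sixteen weak ones - the
# (alpha, beta) parameters here are illustrative assumptions.
params = [(8, 2)] * 4 + [(2, 38)] * 16

def simulate(trials=10_000):
    """Monte Carlo estimate of each control's share of the
    total control effectiveness across simulated trials."""
    totals = [0.0] * len(params)
    for _ in range(trials):
        draws = [random.betavariate(a, b) for a, b in params]
        s = sum(draws)
        for i, d in enumerate(draws):
            totals[i] += d / s  # this trial's share of the control load
    return [t / trials for t in totals]

shares = simulate()
top = sorted(shares, reverse=True)[:4]  # the strongest 20% of controls
print(f"Top 20% of controls carry {sum(top):.0%} of the control load")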
In most situations the driving factors of the 80/20 - more precisely, what makes 20% of the causes account for 80% of the consequences - come from a few forces. These forces must be identified and watched; the good ones should be amplified and the bad ones neutralized. Non-linearity and feedback loops drive the imbalance, especially in “rich get richer” scenarios where sensitivity to initial conditions under feedback can, once some tipping point is crossed, produce 80/20 outcomes.
Two approaches to dealing with this, or finding opportunity in it, recur:
Reallocate resources from the unproductive to the productive (i.e. redeploy the 80% of resources that isn’t contributing to support the productive 20% that generates 80% of the outcomes).
Make the unproductive more effective (i.e. make the 80%, or some subset of that effort, more productive).
“God plays dice with the universe. But they’re loaded dice. And the main objective is to find out by what rules they were loaded and how we can use them for our own ends.”
The main thrust of the remainder of the book is to explore this in more depth with examples in the fields of economics and business. I won’t summarize or review those here but they are well worth reading at source. So, we’re left to ponder - as risk managers - its application in the field of security. But as most of us have no doubt experienced there are plenty of examples, including:
Vulnerability Management
Identify the High-Impact 20%. Not all vulnerabilities are created equal. The key, of course, is to pinpoint critical vulnerabilities, the 20% that could cause the most damage.
Focus Your Defense. Once you've identified the high-impact vulnerabilities, prioritize your resources to resolve them. Patch them first, implement additional security controls around them, and monitor them closely for any suspicious activity.
Don't Neglect the 80% Entirely. While the 20% gets the prioritized treatment, don't abandon the remaining 80% completely; even minor vulnerabilities can be exploited in combination, so maintaining a baseline of security across the entire system is crucial. There can still be danger lurking in the tail, especially considering the transitive closure of the vulnerability dependency graph.
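Finding that high-impact 20% is, at its simplest, a ranked cumulative cut. A minimal sketch, assuming hypothetical risk scores (the CVE names and numbers below are made up for illustration, e.g. a CVSS score weighted by exposure):

```python
def pareto_cut(scores, coverage=0.8):
    """Return the smallest set of items whose combined risk
    score reaches the requested share of the total risk."""
    ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
    target = coverage * sum(scores.values())
    picked, running = [], 0.0
    for vuln, score in ranked:
        picked.append(vuln)
        running += score
        if running >= target:
            break
    return picked

# Illustrative, made-up scores.
vulns = {"CVE-A": 9.8, "CVE-B": 9.1, "CVE-C": 2.0,
         "CVE-D": 1.5, "CVE-E": 1.1, "CVE-F": 0.5}
print(pareto_cut(vulns))  # the few vulns that dominate total risk
```

In practice the scoring function is where the real work lies; the cut itself just makes the imbalance visible and gives the patching queue a defensible order.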
Privilege Management
Identify the High-Impact 20%. Focus on critical systems and data - the 20% that hold the “crown jewels”.
Focus on Highly Privileged Users. A small percentage of users often wield immense power – think system administrators and privileged accounts. The actions of these 20% can have a domino effect, granting access to everything downstream. Scrutinize their privileges and activities closely.
Least Privilege Gone Wrong. The principle of least privilege aims to give users only the access they need. However, sometimes, exceptions creep in, leading to bloated privileges. Identify these accounts where the 20% of unused permissions pose a significant risk.
Continuous Monitoring and Auditing. Keep a watchful eye on the critical 20%. Monitor privileged user activity for suspicious behavior and implement logging and auditing to track all access attempts.
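The “least privilege gone wrong” check above reduces to a set difference between what was granted and what is actually used. A minimal sketch, assuming hypothetical grant and usage records (the account names, permissions, and threshold are illustrative):

```python
# Hypothetical grant/usage records: flag accounts where most
# granted permissions have never been exercised.
granted = {
    "alice": {"read", "write", "admin", "deploy", "billing"},
    "bob":   {"read", "write"},
}
used = {
    "alice": {"read"},
    "bob":   {"read", "write"},
}

def bloated_accounts(granted, used, threshold=0.5):
    """Return accounts whose unused share of granted
    permissions exceeds the threshold, with the unused set."""
    flagged = {}
    for account, perms in granted.items():
        unused = perms - used.get(account, set())
        if perms and len(unused) / len(perms) > threshold:
            flagged[account] = sorted(unused)
    return flagged

print(bloated_accounts(granted, used))
```

Real deployments would source the usage side from access logs over a meaningful window, but the shape of the check - granted minus exercised, ranked by unused fraction - stays the same.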
Insider Risk Management
Identify the High-Impact 20%. Not all employees pose the same level of risk. Focus on the 20% with higher risk indicators due to the criticality of their roles, or perhaps in certain circumstances the inherent risk of the individuals. These individuals warrant closer monitoring and proactive interventions.
Critical Assets and Data. Not all data and systems are created equal. Prioritize the 20% holding the most sensitive information, like intellectual property, financial records, or customer data.
Unusual Activities and Anomalies. Identify the 20% of activities that deviate from typical patterns, such as unauthorized access attempts or sudden changes in data access patterns.
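Spotting activity that deviates from typical patterns can start as simply as a z-score against an individual's own baseline. A minimal sketch with hypothetical daily data-access counts (the numbers and the 3-sigma threshold are illustrative assumptions, not a production detector):

```python
import statistics

# Hypothetical daily data-access counts for one employee; the
# spike at the end is the kind of deviation worth a closer look.
history = [12, 9, 14, 11, 10, 13, 12, 11, 95]

def is_anomalous(counts, z_threshold=3.0):
    """Compare the latest reading against the baseline of all
    earlier readings; return (flag, z-score)."""
    baseline = counts[:-1]
    mean = statistics.mean(baseline)
    stdev = statistics.stdev(baseline)
    z = (counts[-1] - mean) / stdev
    return z > z_threshold, round(z, 1)

flag, z = is_anomalous(history)
print(flag, z)
```

A real insider-risk program layers many such signals and handles seasonality and role changes, but the principle is the same: measure deviation against the individual's own typical pattern rather than a global average.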
Fraud Detection and Prevention
High-Impact Activities. Not all transactions are created equal. Zoom in on the 20% of activities most susceptible to fraud, like large fund transfers, high-value purchases, and suspicious changes in account behavior.
Red Flag Profiles. Identify the 20% of customer profiles with red flags like unusual geographic locations, sudden spikes in activity, or inconsistent financial patterns. These profiles warrant closer monitoring and proactive interventions.
Emerging Threats. Track the 20% of the latest fraud trends – social engineering scams, malware variants, or zero-day vulnerabilities.
Targeted Monitoring and Alerts. Implement real-time monitoring systems that zero in on the high-risk activities and profiles. Trigger instant alerts for suspicious behavior, allowing you to intervene before a scam unfolds.
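The red flags above can be wired together as a simple rule-based scorer - the kind of first pass that routes the high-risk 20% to closer monitoring. The transaction schema, rule names, and thresholds below are all hypothetical, standing in for whatever a real fraud platform would use:

```python
# Hypothetical red-flag rules over an assumed transaction schema.
RULES = [
    ("large_transfer", lambda t: t["amount"] > 10_000),
    ("new_geography",  lambda t: t["country"] not in t["usual_countries"]),
    ("velocity_spike", lambda t: t["txns_last_hour"] > 5),
]

def risk_flags(txn):
    """Return the names of every rule the transaction trips."""
    return [name for name, rule in RULES if rule(txn)]

txn = {"amount": 25_000, "country": "XZ",
       "usual_countries": {"US", "GB"}, "txns_last_hour": 7}
print(risk_flags(txn))  # every rule fires for this one
```

Rules like these are crude on their own, but they make the 80/20 operational: the small fraction of transactions that trip multiple flags is where targeted monitoring and instant alerts earn their keep.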
Bottom line: risk management overall, and security in particular, are full of 80/20’s, where 20% of the issues represent 80% of the risk, or where 20% of the effort yields 80% of the benefits. Finding these predictable imbalances can make all the difference in effective risk management.