Phil Venables

People and Security Incentives

Force 6: People, organizations and AI respond to incentives and inherent biases, but not always the ones we think are rational.

//

Central Idea: Risk management should be driven by incentives - but be careful about your assumptions of rationality. Behavioral insights are key to cybersecurity.

Continuing our theme of exploring the 6 fundamental forces that shape information security risk, we will now look at Force 6: people, organizations and AI respond to incentives and inherent biases, but not always the ones we think are rational.


Breaking from the structure of the last post and those before it, I will simply move straight to a discussion of how to work with incentives.


First, a reminder of how we state Force 6: people, organizations and AI respond to incentives and inherent biases, but not always the ones we think are rational - the macro/micro-economics of information security are important for aligning incentives. This helps ensure we reduce the right risks, in prioritized order, factoring in opportunity and productivity costs.


Part 1: Incentives and Organization Design


If we want to achieve a set of goals, either to promote a good activity or minimize a bad one, then we need to use orders or incentives. I’m going to discount the effectiveness of orders, though I will concede that in some environments, for some period of time, orders can and should be used to drive the right outcomes. However, for orders to be sustained or made more effective, incentives or disincentives (friction) need to be introduced at the level of people or organization design. This also applies to AI: its behavior is largely learned from real-world training data, which itself reflects the inherent rationality or irrationality of many organizations. In other words, you need to think about this at all levels because, even if you don’t explicitly identify them, AI will likely discover your hidden organizational incentives and behave accordingly.


So, to set about using incentives and organizational design, the most important thing to internalize is Kurt Lewin’s concept of Force Field Analysis: a structured planning technique that identifies and analyzes the forces that help or hinder change in an organization. The basic idea behind force field analysis is that any situation or state of affairs is maintained by a balance of forces, some of which are driving change and others of which are restraining it.




You can drive change by adding forces or by removing restraining factors. Many security activities seem to pile on the forces for change: new policies, fresh orders, mandated tool implementations, more training and so on. Often the better approach is to remove the forces working against good practices, for example by making the secure path the easiest path. In many cases, looking at the world with a sense of curiosity as to why the current forces for change are not being effective can reveal surprising (though likely not surprising in hindsight) forces working against change - in other words, treat status-quo situations as surprising and in need of investigation.
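
To make this concrete, a force field review can be captured as a simple, reviewable artifact. Below is a minimal sketch in Python, assuming you score each force in a workshop; the force names and weights are illustrative only, not taken from any particular assessment.

```python
from dataclasses import dataclass

@dataclass
class Force:
    description: str
    strength: int  # e.g. 1 (weak) to 5 (strong), as scored in a workshop

# Driving and restraining forces; names and weights are illustrative only.
driving = [
    Force("New policy mandating stronger authentication for admin access", 4),
    Force("Recent near-miss incident has leadership attention", 3),
]
restraining = [
    Force("Secure path requires three extra approval steps", 4),
    Force("Promotion criteria reward shipping on schedule, not security", 5),
]

net = sum(f.strength for f in driving) - sum(f.strength for f in restraining)
print(f"Net force toward change: {net}")

# Often the useful output is not the net score but the strongest restraining
# forces: candidates for removal (reducing friction) rather than adding yet
# another driving force.
for f in sorted(restraining, key=lambda f: f.strength, reverse=True):
    print(f"Restraining ({f.strength}): {f.description}")
```

The point of writing it down is less the arithmetic and more the review: the strongest restraining forces are usually the friction worth removing first.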


As an example of hidden incentives, I’ve seen plenty of organizations (despite the obvious logic of security requirements) fail to get management buy-in at certain levels for some activities because the promotion criteria for those leaders to reach the next level in the hierarchy depend on many things, but not security. In fact, even worse, the markers of success that lead to promotion entail doing things that actively work against security, like recognizing flaws in a design that need to be addressed but doing little about them because fixing them might impact a schedule or trigger a cost overrun.


Other examples include the internalization of conflicting risks, where the relative priorities do not get surfaced and weighed appropriately. The classic triple is security, reliability and change management: a necessary security change is held back because of change-management reluctance, which leads to an internalized trade-off between two sides of reliability risk - failure due to a bad change, or failure due to a security event because the change (update) didn’t occur. In reality, as with most incentive or organization design discussions, this comes down to whether the right people, at the right level of the organization, at the right time, were able to externalize the risk for discussion.


So, summarizing what we have so far to get incentives and organization design right for optimal security decisions:

  1. Do a formal or informal force field analysis of the situation to look for where to add more force or, preferably, where to remove the friction that is actively working against or disincentivizing the right outcome.

  2. Create the right risk management framework that externalizes risks to ensure security teams can elevate those discussions in the right context to get the right outcome.

  3. Escalate discovered risks to the right level in as fast and reliable a way as possible.

  4. Seek to learn from the factors that require escalation into the risk management process so that the process improves. This involves creating and sustaining your organization as a so-called High Reliability Organization.

  5. Use behavioral techniques to align people and organizational incentives in the right way.

We just discussed item 1; other posts (here, here and here) discuss item 2; and I will defer item 5 to a subsequent post. So let’s focus on items 3 and 4.


Part 2: Escalation as a Service


A big goal for all security roles in everything we do is to create transparency over the risks our organizations face.


The belief, which is more often validated than not, is that leaders at all levels, when confronted with the reality of a risk in clear terms, will want to do something about it.

A risk they weren’t aware of, one that if realized could result in sizable harm to their business (or mission / scope of responsibility) and therefore to them personally, is something they will want to get ahead of. That is, all things being equal, they’d rather avoid the crisis than have to deal with it.


Sometimes, though, this alignment of incentives only works at the right level in the organization. Decisions taken lower down, where constraints overlap and disincentives are in play, are reversed higher up by people who have more personal skin in the game should the risk manifest itself. So, thinking of it this way, the escalation of significant risk issues is a service for leadership; in other words, it is escalation-as-a-service.


But this is all easier said than done so let’s unpack it a bit:

  • Most organizations are resource-challenged, so few of the things we think people should do will naturally get prioritized (unless they’ve been previously operationalized).

  • Therefore, needing to escalate risks or other issues is not a sign of failure; it’s a natural consequence of most environments and so is to be expected in most security roles.

  • Escalation to attend to unusual or otherwise extraordinary risks can be done through pre-defined constructs (e.g. a risk committee, an SLA review process, etc.) or as a specific management escalation to bring something to someone’s attention more urgently.

  • When escalating, the most important thing to focus on is how actionable the escalation is. It needs to be specific (as opposed to a vague concern), it needs supporting data where possible, and it needs a recommended action with a suggested responsible party and an expected time frame.

  • This is the classic SMART model, but with the addition of the consequence of not addressing the risk, expressed as a plausible scenario that people can emotionally connect with. For example: "we have a security vulnerability in Product X, which if exploited would let one customer access another customer’s data, creating a reportable and highly public incident for them and us with significant brand and regulatory consequences; to resolve this we need to apply this fix to subsystem Y, which we think will take 2 weeks of work by Teams A and B and should be done no later than Z date". (A sketch of such an escalation record appears after this list.)

  • Try to propose some solutions and find some win-wins. In other words, there may be a new way to tackle the problem in an 80/20 way. However, even if there is a real priority challenge, ask for some action that is immediately achievable. For example: "I know you can’t do all of this, but doing X and Y is a 1-hour piece of work, so let’s just make those changes and we can revisit the wider work later". Sometimes this breaks the inertia and makes the apparently intractable more achievable, or, in the worst case, at least something gets fixed.

  • Escalation can be uncomfortable as it is sometimes seen as bypassing people and might make subsequent working relationships difficult. But you can be empathetic and think of it like this:

    • Appeal to the process: "I’m sorry, but our risk process is such that I need to escalate this if not resolved, it’s nothing personal."

    • Share responsibility: "I know you don’t have the resources to do this, but let’s go to leadership together to make them aware of the challenges. In other groups where we’ve done this it has resulted in some additional resources being allocated."

    • Show curiosity: "I’m a bit surprised this isn’t getting done as it’s an already established organization strategy / priority so you and I should discuss with your leadership team to see if priorities on these have really changed."

    • Connect to other motivations: "We need to jointly escalate this, I’m really worried given customer concerns / market opportunity / recent other incidents / etc".
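
To illustrate what "actionable" can look like in practice, here is a minimal sketch of an escalation record along the lines of the SMART-plus-consequence structure described above. The field names and example values are illustrative assumptions, not a standard schema, and the due date is simply a placeholder for the "no later than Z date" in the example.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Escalation:
    risk: str                  # specific, not a vague concern
    supporting_data: str       # evidence, where available
    consequence_scenario: str  # plausible scenario people can connect with
    recommended_action: str
    responsible_party: str
    due_date: date             # expected time frame

# Illustrative example mirroring the Product X scenario above; the evidence
# description and date are placeholders, not real data.
example = Escalation(
    risk="Vulnerability in Product X allows cross-customer data access",
    supporting_data="Confirmed in internal testing; affects all tenants",
    consequence_scenario=(
        "If exploited, one customer could access another customer's data, "
        "creating a reportable, highly public incident with brand and "
        "regulatory consequences"
    ),
    recommended_action="Apply fix to subsystem Y (~2 weeks of work)",
    responsible_party="Teams A and B",
    due_date=date(2025, 1, 31),  # placeholder for "Z date"
)
print(example)
```

However it is recorded, the discipline of filling in every field, especially the consequence scenario and the responsible party, is what turns a vague concern into an escalation leadership can act on.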

For a more expansive treatment of escalation see the previous post on the 12 Step Guide on Escalating Risk and Security Issues.


Part 3: High Reliability Organizations


Much of the necessary organization design and incentives can be achieved by adopting the practices of so-called High Reliability Organizations (HROs). While not all organizations meet the criteria for truly becoming an HRO, I think all organizations would benefit - at least for security and resilience - from adopting many of the characteristics of an HRO, summarized below:

  • Preoccupation with failure. HROs treat anomalies as symptoms of a problem with the system. The latent organizational weaknesses that contribute to small errors can also contribute to larger problems, so errors are reported promptly so problems can be found and fixed.

  • Reluctance to simplify interpretations. HROs take deliberate steps to comprehensively understand the work environment as well as a specific situation. They are cognizant that the operating environment is very complex, so they look across system boundaries to determine the path of problems (where they started, where they may end up) and value a diversity of experience and opinions.

  • Sensitivity to operations. HROs are continuously sensitive to unexpected changed conditions. They monitor the systems’ safety and security barriers and controls to ensure they remain in place and operate as intended. Situational awareness is extremely important to HROs.

  • Commitment to resilience. HROs develop the capability to detect, contain, and recover from errors. Errors will happen, but HROs are not paralyzed by them.

  • Deference to expertise. HROs follow the typical communication hierarchy during routine operations, but defer to the person with the expertise to solve the problem during upset conditions. During a crisis, decisions are made at the front line and authority migrates to the person who can solve the problem, regardless of their hierarchical rank.

But in simpler terms:

  • Be proactive. Don't wait for something to go wrong before you take action. Look for potential problems and address them before they cause an accident.

  • Think critically. Don't take things at face value. Question everything and look for the root cause of problems.

  • Be flexible. Things don't always go according to plan. Be prepared to adapt and change your plans as needed.

  • Communicate openly. Share information with everyone involved in the operation. This will help to prevent accidents and improve safety.

  • Value expertise. Listen to the people who know the most about a particular situation. They may have the best ideas for how to prevent an accident.

Bottom line: improving or sustaining security in any organization requires effort. The level of effort depends on how well your goals align with the organization’s incentive structure. That structure might not appear wholly rational depending on your perspective, so you have to analyze it, work with it, or adjust it to align it with your goals.
