I managed to keep up the pace of 1 post every 2 weeks throughout 2024. Just when I think I might be running out of ideas and the backlog of topics is running low, something always comes up. I’m grateful to continue to be at the nexus of various fields (technology, a range of customer sectors, government, academia and investing) and various disciplines (risk, resilience, security, privacy, compliance and trust), all while having something of a front-row seat for the vast changes that keep occurring in and around various innovations (AI, hardware, operating systems, cryptography, risk management).
So, in closing out the year, let’s take a look at the top 10 posts of 2024, in order of most read.
1. Security Training & Awareness - 10 Essential Techniques
It shouldn’t have been as much of a surprise that this attracted a lot of readers given how comparatively little is written on this topic. The comments received were to be expected; indeed, I set it up that way: a lot of security awareness initiatives don’t produce impact, but training, when integrated in the right way, broadly does. The bottom line is that ambient controls should replace the need for much security training, and what remains should be a function of the processes people operate within, not something they are dragged to. Concepts should be reinforced at key moments when people are maximally receptive: when they’ve hit a guard rail or hard rail, made an error, or when they are changing roles or being promoted.
2. Risk Appetite and Risk Tolerance - A Practical Approach
I really enjoyed writing this post, and I had been meaning to do it for a long time. Actually setting and managing risk appetite is hard to get right, and many lessons can be taken from my own experience of managing financial risks in large, systemically important banks. The bottom line is that defining risk appetite is of little value if it doesn’t support business decision making. That should include balancing upside and downside: ensuring risks are taken in pursuit of strategic objectives while capping the downsides. Above all, the expressions of risk tolerance should permit actual choices to be made, measurements to have meaning, and escalations to be useful when there are deviations. Finally, there must be a process that tunes the limits and thresholds of specific risk measures based on actual outcomes, current risk profile and business/mission opportunities.
3. Truths of Cyber Risk Quantification
A refresh of an original post from 4 years ago proved popular as many organizations seek better means of quantifying risk to inform both their tactical and strategic decisions. The bottom line is that we need to apply more quantitative risk analysis methods to cyber, but to think there will be one unifying approach is naive. As in every other discipline, you will need to select the particular method suited to the task at hand and then iterate. Above all, don’t confuse risk communication techniques with risk quantification techniques. And remember, even when it’s all working, your most important equation might well be Risk = Hazard + Outrage.
4. Where the Wild Things Are: Second Order Risks of AI
A reminder that while we correctly focus on the immediate risks of generative AI, we also need to look at second order effects: the risks that come from what comes next. The bottom line is that we should be appropriately cautious about AI, but not so cautious that we forgo the truly massive upside that the bold but responsible use of this technology will give us across a range of fields. It’s healthy to have a societal-level debate about AI risks, as that is what will drive the mitigation of those risks so we can enjoy the benefits of this remarkable capability. But in doing this we need to be much more focused on the real risks that come, and have come in prior technological shifts, from second order effects. Ask: in a society reshaped by AI, what does that world look like? And in that world, what risks will we face that we don’t face today? Then, what do we need to do to be prepared to mitigate those effects? If we’re not careful, that will be where the wild things truly are.
5. A Letter from the Future
One of the other things I got done this year was co-leading the production of a report on Cyber Physical Resilience for The White House, which has already led to a number of good outcomes. This “letter from the future” was not something I could ultimately include in the report itself, but it is reproduced here and proved to be of interest to a lot of people.
6. Security and Ten Laws of Technology
Another blog I’d had in my head for years that I finally got round to writing. This looked at the security implications of many of our so-called laws, from Moore’s to Metcalfe’s and more. The bottom line is that while these are not laws in the strictest sense, they are nevertheless useful for encoding some of our body of knowledge. Like megatrends, these are important to pay attention to so you can “ride” them to your advantage, but perhaps even more so to make sure you’re not positioned against the unstoppable forces they represent.
7. InfoSec Hard Problems
A recap of a number of research reports on the most fundamental and challenging problems in info/cybersecurity today. It’s disappointing that many were identified a long time ago and remain unsolved, but there is hope, as progress has been made even on the hardest of these hard problems. The bottom line is that across the research community, and together with engineers/practitioners, we have made and continue to make enormous strides on these problems. But the hard problems remain. Research and practical tactical advances are needed across “zero trust”, service meshes, higher-assurance trustworthy computing, default encryption, sandboxing/enclaves, hardware-assisted security, policy languages, and security integration into software development, testing and deployment management.
8. Incentives for Security
A fresh look at moving from our traditional incentives to a more commercial outlook. The bottom line is that many organizations have framed security incentives poorly. Security has been positioned as loss avoidance, regulatory compliance, brand protection and return on security investment by saving “soft dollars” that don’t actually generate the stated return. These incentives can be powerful enough (especially regulatory) to drive many positive outcomes, but a better way of looking at incentives is to ask: what are the things we can do that deliver significant commercial (or mission) outcomes in and of themselves, such that we’d do them no matter what? Then do those in ways that deliver massive adjacent security and resilience benefits.
9. Why Good Security Fails: The Asymmetry of InfoSec Investment
A quick look at what it takes to sustain security beyond the initial investment. The bottom line is that unless actively counteracted, the resources applied to sustain security will gradually atrophy. Worse, the drop-off in effectiveness is disproportionate to the rate of resource drain. Things can go from good to bad to “boom” pretty quickly.
10. Job Interviews: Part 2 Conducting the Security Interview - The Big 10
https://www.philvenables.com/post/job-interviews-part-2-conducting-the-security-interview-the-big-10
This just edged out the accompanying Part 1 on how to do well in security interviews. The bottom line is that security leadership positions at all levels are some of the most challenging roles there are. Assess candidates thoroughly: not just by asking questions and looking for good answers, but by fundamentally examining the thinking patterns and cultural outlook of the candidate. The most important thing, though, is not to rely on the interview process alone but to look for evidence in what the person has done in the world, what they’ve written or said, and the leaders they’ve ushered into the wider community. People often tell you exactly who they are. Look and listen.
_____________________________________________________________
Now, looking ahead at the posts to come in 2025. It’s hard to be predictive; a lot of 2024’s posts were developed not according to a laid-out plan but according to what seemed right to cover in the moment. But I do want to spend some time on, or revisit, the following in 2025:
What is actually transpiring in AI: the risks and the opportunities.
Looking at mitigating whole classes of emerging threats.
Seeing where we are in the interplay of different risks: security, privacy, compliance.
Keeping the usual focus on risk management and risk communications, especially at Board/Executive level.
As ever, feel free to suggest topics via the social channels.