Phil Venables

Intelligence Failures - “The Distortion of Retrospect”

The codebreaking and overall intelligence success of Bletchley Park in World War II is legendary. Ultra, along with broader Allied signals intelligence prowess, enabled a swifter path to victory. However, there were also failures.


Many of the failures were caused not by too little signal but, rather, by too much. The failures were failures of intelligence analysis. What is particularly impressive about the work of the people at the UK Government Code and Cypher School at Bletchley Park is the intellectual honesty of their post-mortem assessments. 


Many of these are now declassified. Let’s look at some of the lessons, which apply to all forms of intelligence and post-mortem analysis, including cybersecurity. The particular example I will use is the intelligence failings before the Battle of the Bulge, the last major offensive of Nazi Germany, launched in the Ardennes in the closing stages of World War II. The full report is in the NSA archives here.


Lesson 1: Post-mortems are about learning not blame


It opens with the direct approach: the value of the post-mortem is in lessons which can be extracted for future use - not blame, not covering up, not glossing over or making excuses - but lessons to improve. 



I particularly like this: "to learn what can be learnt."

Lesson 2: Avoid rigidity in thinking

There was a degree of complacency, as it had been a long time since the Germans had launched a major offensive. This led to indicators being more likely interpreted as a defensive rather than an offensive build-up. There was little appreciation of how much the Germans had learnt from, and then adopted, Allied deception tactics, and there was too much reliance on one source of intelligence (Ultra decrypts) without more extensive use of human intelligence.


Finally, there was a massive underestimation of how much of Allied communications were exposed to German signals intelligence. 

You see this in many cyber threat contexts today, where there are failures of imagination to adjust intelligence analysis to the speed at which attackers are learning. Similarly, there are many examples of misreading attacker intent based on too narrow a view, formed from the immediately available intelligence vs. other possible sources.


Also, remember: if you’re dealing with advanced attackers, they will be watching - and likely reading - your response and planning, unless you take steps to manage that response on systems and communication channels separate from those you suspect are compromised. Communications security of your response is paramount.

Lesson 3: Post-mortem view without hindsight bias


What strikes me most about the whole report is the intense intellectual honesty in distinguishing between the lessons learnt about the process vs. what can only be informed by the hindsight of knowing the outcome. Of course, knowing now that the German offensive did in fact occur, it is entirely clear it should have been foreseen. But looking in the post-mortem for the failures of foresight is a discipline in itself, and one they exercise very well. 




When looking at the signals and their interpretation, distinguishing between foresight and hindsight is crucial.



This illustrates the point in any post-mortem / after-action review - whether it is associated with an incident, a security breach or even a business transaction - that it is important to look at what can be improved in the process vs. the outcome. For example, an incident might have been avoided by pure luck, and so even though the outcome was good the lessons learnt are critical to apply. Conversely, it might be that an incident occurred but the process was solid: given the knowledge and resources available at the time, the decisions that led to the incident were in fact reasonable.


You should focus on process: improving that process and how it affects outcomes, as opposed to determining the need for learning based on the outcome alone. This also means you should question your prioritization approach for what gets examined in depth. For example, say an incident caused a loss of $1,000 and your threshold for analysis is $50,000; then you wouldn’t look at it. That’s perfectly reasonable on the face of it, but what if the $1,000 loss was just happenstance, and under other viable circumstances - on a different day, in different conditions - the same incident could have resulted in a loss of $5,000,000? Then it undoubtedly should be analyzed. Process, not outcomes, based on loss potential: "......the picture as it now appears is not the one which developed from day to day."
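As a minimal sketch of this triage rule (the function name and dollar threshold here are hypothetical, chosen to match the example above), the decision to analyze in depth keys on the worst plausible outcome rather than the realized loss:

```python
# Hypothetical post-incident triage: prioritize deep analysis by loss
# potential under plausible alternative conditions, not realized loss.
DEEP_ANALYSIS_THRESHOLD = 50_000  # illustrative dollar threshold

def needs_deep_analysis(realized_loss: float, potential_losses: list[float]) -> bool:
    """Analyze in depth if the worst plausible outcome crosses the
    threshold, even when the realized loss was small by luck."""
    worst_case = max([realized_loss, *potential_losses])
    return worst_case >= DEEP_ANALYSIS_THRESHOLD

# The $1,000 incident from the text: realized loss is small, but under
# other viable circumstances it could have cost $5,000,000.
print(needs_deep_analysis(1_000, []))           # realized loss only: False
print(needs_deep_analysis(1_000, [5_000_000]))  # loss potential: True
```

The point of the sketch is simply that the threshold comparison happens against the estimated loss potential, so the $1,000 incident still qualifies for deep analysis.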



Lesson 4: Assume your attackers know you are sophisticated


The German Army, despite received wisdom today, had a great appreciation of the Allied intelligence effort, from signals intelligence (as much traffic analysis as cryptanalysis) to human intelligence, and so was putting in place highly sophisticated countermeasures to blunt that edge. 




While many attackers might stumble about, giving off easily detected signals because they are used to dealing with easy prey, it is likely that if you consider yourself able and sophisticated, they will be fully aware of this too - and will be bringing their A-game.

Lesson 5: Don't become too wedded to adversary intentions

In the conclusion they emphasize what they, wonderfully, describe as “The Distortion of Retrospect”.


There was too much bias toward looking at the signals through the lens of what they believed the Germans were going to do, and they failed to present alternative possibilities. 


There are some organizations that are very good about being wary of this and are careful to present alternative possibilities, and there are some organizations that have all too clearly assembled and focused intelligence in support of a pre-determined conclusion. 


As we’ve mentioned before, it is also hard to resist becoming too reliant on a previously successful source of intelligence, which can blind you to the other indicators and warnings that may help generate the alternative scenarios to consider. 


This is, of course, critical in cyber, not just in terms of threat intelligence but also in the scope and granularity of your detection apparatus. If you’re only looking in one place at one set of events, then your ability to consider alternative possibilities will be intrinsically limited. 
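One way to make this discipline concrete is an analysis-of-competing-hypotheses style tally, where every intelligence source scores every hypothesis, so a single favored source cannot silently dominate the call. This is a toy sketch: the source names, hypotheses, and scores below are invented for illustration, loosely echoing the Ardennes example.

```python
# Toy analysis-of-competing-hypotheses tally. Every source scores every
# hypothesis (+1 consistent, 0 neutral, -1 inconsistent); all values
# here are invented for illustration.
evidence = {
    "signals_decrypts": {"defensive_buildup": +1, "offensive_buildup": 0},
    "traffic_analysis": {"defensive_buildup": 0,  "offensive_buildup": +1},
    "human_intel":      {"defensive_buildup": -1, "offensive_buildup": +1},
}

def rank_hypotheses(evidence: dict) -> list[tuple[str, int]]:
    """Sum each hypothesis's support across all sources and rank them,
    so every alternative is presented with its support, not just the winner."""
    totals: dict[str, int] = {}
    for scores in evidence.values():
        for hypothesis, score in scores.items():
            totals[hypothesis] = totals.get(hypothesis, 0) + score
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)

for hypothesis, support in rank_hypotheses(evidence):
    print(hypothesis, support)
```

The design point is that the output is the full ranked list of alternatives, which forces the analyst to confront how thin the margin between hypotheses actually is rather than reporting only a pre-determined conclusion.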

Bottom Line


There is a long history of intelligence analysis, a massive body of knowledge, and a range of skilled practitioners who have worked in this space for a long time. Cybersecurity threat intelligence practice should make use of all of this. Leaders of organizations with threat intelligence teams should make sure they have the right people and processes to assure outcomes, and should heed these lessons constantly.





© 2020 Philip Venables. 
