We still have plenty of open problems in information and cybersecurity (InfoSec). Many of these problems are what could easily be classed as “hard” problems by any measure. Despite progress, more research is needed here. While there is much academic, government and private sector sponsored research underway I wonder if some alignment between all these efforts to focus on a smaller set of foundational problems would be more fruitful. The challenge is to agree on what these are.
There was a comprehensive effort in the US in 2005, when the InfoSec Research Council, a group of research leaders at various US Government Agencies, published a Hard Problems List. Summarized below:
Global-Scale Identity Management. Global-scale identification, authentication, access control, authorization, and management of identities and identity information.
Insider Threat. Mitigation of insider threats in cyber space to an extent comparable to that of mitigation in physical space.
Availability of Time-Critical Systems. Guaranteed availability of information and information services, even in resource-limited, geospatially distributed, on demand (ad hoc) environments.
Building Scalable Secure Systems. Design, construction, verification, and validation of system components and systems ranging from crucial embedded devices to systems composing millions of lines of code.
Situational Understanding and Attack Attribution. Reliable understanding of the status of information systems, including information concerning possible attacks, who or what is responsible for the attack, the extent of the attack, and recommended responses.
Information Provenance. Ability to track the pedigree of information in very large systems that process petabytes of information.
Security with Privacy. Technical means for improving information security without sacrificing privacy.
Enterprise-Level Security Metrics. Ability to effectively measure the security of large systems with hundreds to millions of users.
It’s interesting (and perhaps a bit depressing) that 19 years on this list is still a good statement of our current hard problems.
The US Government’s NITRD program has focused, as part of its wider R&D coordination mission, on driving a more coordinated cybersecurity research agenda with its recently released report being a good example. They summarize the challenges as follows:
Human-centered cybersecurity: A greater emphasis is needed on human-centered approaches to cybersecurity where people’s needs, motivations, behaviors, and abilities are at the forefront of determining the design, operation, and security of information technology systems.
Trustworthiness: Capabilities are needed to be able to establish and enforce the required levels of trust at all layers of computing, starting at the hardware layer and including all other layers, such as operating systems, software applications, networking, web browsing, and applications and services such as electronic commerce and information sharing on social media.
Cyber resilience: Capabilities are needed to ensure that systems can withstand cyberattacks and disruptions, and can continue to deliver vital functions in the face of impairment in adverse and contested cyber environments.
Cybersecurity metrics, measurements, and evaluation: Advancements are needed in capabilities to evaluate and quantify cybersecurity risks, resilience, and trustworthiness, in a scientifically sound, technology-agnostic, and tailorable manner, for all levels of an organization and organization's products, supply chains, and operations.
Cybersecurity research, development, and experimentation infrastructure: An up-to-date, national-level cybersecurity research, development, and experimentation infrastructure is needed to support innovation at the scope and scale of cyberspace and the speed of advances by adversaries.
The consequent research priorities are:
1.Protect People and Society
Strengthen Cybersecurity Through Human-Centered Approaches
Empower Organizations to Tackle Cybersecurity Threats
Strengthen Cybersecurity Education and Leverage AI-Powered Automation
Support Cybersecurity Policy Development
2.Develop Means to Establish and Manage Trust
Develop Trust Models for Management of Identity, Access, and Interoperation
Develop Capabilities to Negotiate Trust
Develop Solutions to Sustain Trustworthy Information Ecosystems
Enhance Trustworthiness of Cyberspace by Minimizing Privacy Risk and Harms
3.Strengthen Cyber Resilience
Advance Science of Cyber Resilience
Improve Cyber Resilience by Design
Improve Cyber Resilience During Operation
4.Protect Software and Hardware Supply Chain
Increase Ability to Attest to Supply Chain Integrity Through Design and Development
Increase Ability to Verify and Maintain Ongoing Supply Chain Integrity Throughout Operations
5.Realize Secure and Trustworthy AI
Establish Formal Assurance Methods for AI
Engineer Verifiable and Resilient AI Systems
Improve Trusted Collaboration between Humans and AI
6.Secure the Clean Energy Future
The UK’s NCSC has a more understated list of challenges in the so-called Cybersecurity Research Problem Book. This is an excellent summary of cross-cutting problems along with an additional list of hardware security problems. Summarized below:
How can we build systems we can trust when we can't trust any of the individual components within them?
How do we make system security assessments more data driven?
How do we create and adopt meaningful measures of cyber security?
How do we make phishing a thing of the past?
How can we accelerate the adoption of modern security mitigations into Operational Technology (OT)?
How do our devices physically behave, and how do we secure those behaviors?
How do we know that we can trust our devices?
What device architectures help us to improve security further up the stack?
How do we integrate secure devices, to ensure that the security still holds at the system level?
The more I read of these and other less structured lists it does continue to confirm the validity of the original hard problem list. Indeed, a National Academies study on Foundational Cybersecurity Research, that I was a part of, came to similar conclusions but I think for the first time (2017) strongly advocated significant research to align the computer science aspects of cybersecurity with the social sciences. Something, as practitioners in the field we had long recognized as vital.
I’ve maintained my own list for a few years and it is also quite well aligned to this, adding in the social science elements (the “Carbon”) of cyber as well as the technological aspects (the “Silicon”). Many of our issues can be simply stated. That is, we need to specify and codify our security requirements as a set of rules to be enforced and monitored. We need to map those rules to clearly stated policy goals derived from a compilation of risk analysis, laws, regulations and opportunities. The challenge is one of modeling, translating and applying those to massively distributed complex environments of people, objects/data and systems that often appear to behave organically. All of the research challenges below stem from one premise, that security is an emergent property of a complex system rather than being something that is just easily designed in. The fostering of such emergence is, by definition, a dynamic process in need of observation, positive and negative feedback loops and multiple levels of abstraction. Here’s my current list:
SILICON
1. Distributed Policy Specification and Enforcement
Real world systems are massive aggregations of multiple components (software, data, network, storage, compute, etc) that need to work in harmony across multiple policy enforcement and decision points to achieve a desired security posture. Configuration of this environment is often too dependent on skilled security and other personnel applying multiple layers of translation from human postulated objectives to machine readable policy. We need an integrated modeling, policy management, rule distribution and enforcement framework.
2. Protecting and Assuring Security and Trust in AI
Every one of these research challenges can likely be aided by the application of various forms of AI. But, the focus of much research is needed around the security, control, trust and safety of AI. There is much industry-based practical work and a vast amount of emerging academic work but even “basic” problems on how to mitigate the risks of specific attack techniques such as prompt injection remains.
3. Federated Monitoring and Policy Verification
The problem of policy enforcement is further complicated considering distributed systems extend across the entire supply chain. We need to be able to make security decisions based on the apparent trustworthiness of an element beyond immediate policy control e.g. a vendor’s system. We need to do this without necessarily obtaining transparency over the full detail that is typically only available to the element's own enforcement and monitoring. Thus a trustworthy federated monitoring and policy enforcement verification approach (say, a reliable technical attestation framework) is needed that can also preserve the necessary privacy of the related entity’s environment and that of its other agents or customers.
4. Data Level Entitlement Policy Enforcement : Rules, Roles, Rights, Requests
Access to data and the metadata that describes it is often codified under a complex array of rules, roles, attributes, rights and request workflow rules. Policy can be explicitly defined or derived from the surrounding organizational context. Work is needed to develop more tools to manage, visualize and verify this and to provide a management environment usable by business risk management or other non-technical personnel.
5. Distributed Interoperable Data Protection
Data is created all the time and flows between organizations. Data access is typically protected in the channel (in motion), in its storage (at rest), and now even in use. Much progress has been made here but there are still challenges in enforcing policy rights once the information has left the security boundary of the originating organization. Digital / enterprise rights management software has been useful but needs more work in terms of policy protocol interoperability, transport/serialization interoperability to facilitate sharing and control across heterogeneous environments as well as linking those rights controls into trusted computing stacks.
6. Predictably Secure Systems and Service Development (Software & Hardware)
We need to even further improve the security and reliability of the systems all organizations produce so as to resist ever more sophisticated attacks. This is not just about code analysis tools, penetration testing, fuzzing or other automated or systems-assisted reviews. Additionally, this is about providing some more fundamental integration of security and reliability objectives in the whole software lifecycle cycle. This is another space where massive progress has been made in integrating security into developer tooling, testing, deployment mechanisms and use of memory safe languages. There have also been significant practical advances in formal verification at a larger scale. Tying this together in the years to come, especially for critical systems, will be important as well as taking more advantage of hardware advances to enable higher assurance in software (e.g. memory tagging extensions). Naturally, any software refactoring to memory safe languages or changes to take advantage of hardware developments can be radically aided by AI.
7. Software, Behavior, Protocol and Zone Least Privilege with Dynamic Adjustment.
[Note: this is all, arguably, “zero trust” but I hesitate to use that framing as the concept has been so widely bastardized for a multitude of marketing purposes by various vendors for which the phrase “zero trust” is a label that could be applied to them too aptly, but not in the way they intend.]
We are moving and in many cases have moved to a world where we can no longer sufficiently find, detect and stop “bad stuff” [software, behavior, anomalous protocol communications, flows]. Rather, we need to keep moving to only permit known “good stuff”. This is relatively straight-forward for new, simply constructed (even at scale) environments, but is more difficult for large scale environments that have evolved over time and need to have this approach retroactively applied. Research and tools are needed to help profile, monitor, abstract and enable a move from block-list to allow-list approaches. This is across an array of problems to actually achieve usable and scalable default-deny, protocol/access least privilege and effective zoned / enclave based defense-in-depth at lower granularity and at multiple levels of abstraction.
8. Massive Scale Anomaly Detection and Behavioral Analytics
The complexity of systems, the fast evolution of attacks and the increasing inherent risk of many digitized systems means we have to do more monitoring for bad behaviors or earlier warning signs of attacks. This means increasing the coverage of our sensory apparatus as well as ingesting the digital exhaust (e.g. logs) of our entire environment. Developing models, utilizing AI, for how to fuse, manage, analyze and make sense of the signals that come from this is a hard problem in need of further research.
CARBON
9. Economics and Incentives
Many of you who have worked in any organization, especially large ones, know that a big part of the security role is to align incentives. Part of this is to seek adjacent benefits so that risk-reducing activities don’t have to be imposed top-down but develop naturally as a consequence of how the work is undertaken. There has been significant progress here on joint efforts to bring together multi-disciplinary research from computer science, economics and other disciplines. WEIS, the annual Workshop on the Economics of Information Security is a shining example of this.
10. Behavioral Science and Human Factors
This is another area where there has been tremendous progress across a range of risk types, especially human factors and design considerations for encouraging more secure interactions with systems because of, rather than in spite of, their design. This area has also benefited from behavioral economics research, particularly nudge theory which has led to various incarnations of so-called nudge units. I know a number of organizations have seen tremendous security risk reduction results from work of their own nudge units. There is even a NudgeStock which is truly fascinating.
11. Risk Ontology, Measurement and Metrics
One area where I think there is still much to be done is on the development of approaches for risk ontologies, taxonomies and methods to quantify risk. There is a lot of practical work going on in consultancies and organizations using methods such as FAIR. One very promising research area is on Bayesian Networks applied to operational risk (including cybersecurity risk) and this book is a great summary of that with links to current research.
12. Human Readable Policy Expression
Building on the need for improvements in distributed policy specification and enforcement, there is a related topic on how to drive such specification in human readable form. This is so that participants in an organization can match risk management or policy intent to the policy that will be encoded in machine readable and automatically enforceable form. There has been a lot of research using various graphical interfaces but I’ve yet to see progress on how to map organization risk intent in ways that can be understood and reasoned about by non-specialists that can then be stepwise encoded into machine readable form.
13. Systems Thinking / Fail-Secure / Fail-Safe Design Principles (Cyber / Physical Control System Synchrony)
Finally, there is a rich field on systems thinking and control theory which helps us understand and manage risks in complex systems. I’d recommend this book as a great introduction to the concepts. But, this is a wide field of study that is much referenced and discussed but actually under-utilized in information and cybersecurity. This seems to be a most promising future line of research.
________________________________________
There are many other sub-challenges and other research topics that maybe should rise to their own place on a main list but while they are important I’m reluctant to suggest they’re yet foundational (in the sense that they pervade many control applications). I’d included a number of these in our recent R&D recommendations in the PCAST Cyber Physical Resilience Report to the President. These include:
Definitions and study of foundational principles of resiliency of complex systems, and designs for resilience including developing modeling and simulation tools for studying resiliency.
Enhance field upgradability of IoT/OT systems.
Chaos engineering applied to security.
Mechanisms to apply segmentation, micro-virtualization, and zero trust technologies to ease the burden on defensive efforts.
Crypto-agility technologies that may enable reliable, sustained, and timely transition to post-quantum cryptography standards.
Advance secure operating systems research for systems-on-a-chip technologies.
Exploration of the use of AI methods by adversaries to employ, and defenders to thwart, multipoint and sequenced attacks within and across systems and sectors.
Developing digital twin simulation / security tools for critical systems and use-prognostics to model weaknesses.
Explore the use of AI to radically advance anomaly detection especially to identify threats/attackers in low signal environments such as with “living off the land” attacks.
Research to promote migration to memory safe programming languages as well as deployment of controls (while not excessively impacting cost / performance) to reduce memory safety issues on legacy code bases (e.g. C++).
Bottom line: across the research community and with engineers/practitioners we have made and continue to make enormous strides on these problems. But the hard problems remain. Research and practical tactical advances are needed across “zero trust”, service meshes, higher assurance trustworthy computing, default encryption, sandboxing / enclaves, hardware assisted security, policy languages and security integration into software development, testing and deployment management.
Comments