
Ethics and Computer Security Research

If we are to keep advancing the fields of information/cybersecurity, technology risk management, and resilience, then we need to apply more scientific principles. Such principles sustain an engineering body of knowledge and practice. This also draws on the social sciences, including psychology, economics, and more.


Of course, doing this well requires applying scientific methods and conducting actual research. As with any research, it’s important that this is conducted according to the right ethical principles. For example, wide-ranging research to improve security shouldn’t involve wide-scale compromise of privacy in the name of that research. Similar considerations apply to other parts of the cybersecurity field, like vulnerability research, red teaming, and any other activity that walks a line between legitimate offense-oriented discovery and actual harm.


Many frameworks have been developed to guide all of this. Arguably, the one that provided the basis for them is The Menlo Report (“Ethical Principles Guiding Information and Communication Technology Research”), produced in 2012 by a working group created by the US Department of Homeland Security.


This report was built on earlier work, The Belmont Report, which provides a guide for ethics in biomedical research. The Menlo Report is worth a full read, not least because it offers a fresh perspective on how we should approach AI trust and safety research. For the rest of this post we’ll look at its principles and recommendations.


Stakeholder Perspectives and Considerations

You have to take into account the perspectives of many people in any aspect of research, in any field, because the research will inevitably affect, or be constrained by, some people, groups, or laws. Such stakeholders include:


  • Human subjects as well as non-subjects (e.g. recorded victims of criminal activity). 


  • Malicious actors. In particular, examining whether the results (published or otherwise) of the research will benefit adversaries more than the transparency of the work will aid defenders. 


  • Platform owners and providers. These are not only affected by research but can also be vital to specific research as intermediaries between researchers and mass-scale end users.


  • Government - Law Enforcement. They have a significant interest in the societal mitigation of risk and so have a stake in effective research, but they can also be a guide to staying within the bounds of the law when undertaking the research.


  • Government - Non-Law Enforcement. Similarly, they have a broader interest in how research might inform policy. Such policy influence should include the impact on explicit cyber legislation or regulation as well as on non-cyber rules that have potential unintended effects on societal cybersecurity or resilience.


Respect for Persons and Informed Consent 

Research involving or impacting people should be based on informed consent. Similarly, applying the notion of respect for persons needs to consider the impact on the systems that, in turn, might impact people. This is especially important for life safety and other critical infrastructure.


Informed consent is not just asking for consent; it is the rigorous description of the risks to subjects (people and systems) as well as the ability to withdraw from the research at any time without consequences. Such consent should be very specific to the research being conducted, and there should be additional consent for new or even follow-up research.


Consent involves ensuring notice (written consent to the full description of the research), comprehension (ensuring such a description is understandable to the parties of the research), and voluntariness (how freely the consent is actually given). If there is a situation where obtaining consent would harm the goals of the research, then an exception or waiver process should be in place. It might be, especially for research on aggregate data (e.g. collections of Internet data flows, or analysis of breach data dumps), that individual consent is not feasible, and so an overall exception and legal review of the approval is important. To quote the Menlo Report on the conditions such waivers require:


“(1) The research involves no more than minimal risk to the subjects; (2) The waiver or alteration will not adversely affect the rights and welfare of the subjects; (3) The research could not practicably be carried out without the waiver or alteration; and (4) Whenever appropriate, the subjects will be provided with additional pertinent information after participation. Research of criminal activity often involves deception or clandestine research activity, so requests for waivers of both informed consent and post hoc notification and debriefing may be relatively common as compared with research studies of non-criminal activity.”
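To make this concrete, here is a minimal sketch (in Python; the structure and field names are illustrative assumptions of mine, not something defined by the Menlo Report) of what a per-study consent record and a waiver request might capture: notice, comprehension, voluntariness, the right to withdraw, and the four waiver conditions quoted above.

    from dataclasses import dataclass

    @dataclass
    class ConsentRecord:
        """Hypothetical per-subject consent record for one specific study."""
        study_id: str
        notice_given: bool            # written description of the research and its risks
        comprehension_checked: bool   # description confirmed understandable to the subject
        voluntary: bool               # consent given free of coercion or undue influence
        can_withdraw: bool = True     # may withdraw at any time without consequences
        follow_up_requires_new_consent: bool = True  # fresh consent for follow-up research

    @dataclass
    class WaiverRequest:
        """Hypothetical record of the Menlo conditions for waiving informed consent."""
        minimal_risk: bool                   # (1) no more than minimal risk to subjects
        rights_and_welfare_unaffected: bool  # (2) waiver does not harm rights or welfare
        impracticable_without_waiver: bool   # (3) research not practicable otherwise
        post_hoc_debrief_planned: bool       # (4) debrief/notification where appropriate
        legal_review_completed: bool = False # overall exception reviewed by counsel

        def approvable(self) -> bool:
            # Every condition, plus the legal review, must hold before approval.
            return all([
                self.minimal_risk,
                self.rights_and_welfare_unaffected,
                self.impracticable_without_waiver,
                self.post_hoc_debrief_planned,
                self.legal_review_completed,
            ])

Even as a toy model, encoding the checks this way makes it harder to approve a waiver while quietly skipping one of the conditions, and it keeps the legal review step explicit.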


Beneficence

Beneficence is an unusual term, carried over from the Belmont Report, that is essentially about balancing probable harms against the potential benefits that would come from the research. It’s not just about laying out the balance between the two but also about the proposed means to keep harm low (or lower it) and realize the potential benefits - and building that into the research plan:


  • Identification of Potential Benefits and Harms. For cyber research this would pay particular attention to systems assurance and reliability (not destroying or disrupting something during the research) as well as confidentiality and integrity (such as ensuring vulnerability discovery does not actually exploit the findings to cause harm and follows established reporting and disclosure standards).


  • Balancing Risks and Benefits. This is a risk management decision. Not all potential harms have been, or can be, eliminated, and not all potential benefits can be known in advance. But, nevertheless, a systematic approach to enumerating and planning for the risks is important (a minimal illustration of such a register follows this list). Some of these trade-offs might be across several dimensions, for example, the risks to a specific individual or small group vs. the potential benefits to society overall. These are not easy decisions.


  • Mitigation of Realized Harms. When the risks have been analyzed, the research should look to mitigate those risks. Again, risk-free research is not the goal, since some, or even most, research requires taking some risk to realize possible outsize societal benefits. The mitigations stemming from the risk analysis should not be static. As the research progresses, new risks may emerge and some previously identified risks may increase or decrease, all of which could result in an adjustment to the research plan. Naturally, the ongoing research results may show unexpectedly larger potential benefits that are worthy of more risk taking, whether that means accelerating work or increasing scope.
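As an illustration of what “enumerating and planning for the risks” and keeping mitigations non-static might look like in practice, here is a minimal sketch of a living risk/benefit register (in Python; the scoring scheme and names are assumptions for illustration, not a standard).

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Risk:
        """A single identified potential harm and how it is being managed."""
        description: str          # e.g. "measurement traffic disrupts a production service"
        likelihood: float         # 0.0-1.0, revisited as the research progresses
        impact: int               # 1 (minor) to 5 (severe)
        mitigations: List[str] = field(default_factory=list)

        def score(self) -> float:
            return self.likelihood * self.impact

    @dataclass
    class Benefit:
        """A potential benefit, acknowledged to be uncertain in advance."""
        description: str
        expected_value: int       # 1 (narrow) to 5 (broad societal benefit)

    @dataclass
    class RiskBenefitRegister:
        """Living register: reviewed whenever risks or expected benefits change."""
        risks: List[Risk] = field(default_factory=list)
        benefits: List[Benefit] = field(default_factory=list)

        def needs_plan_adjustment(self, risk_threshold: float = 3.0) -> bool:
            # Flag the research plan for review if any residual risk score exceeds
            # the threshold despite its current mitigations.
            return any(r.score() > risk_threshold for r in self.risks)

The point is not the particular scoring scheme - any reasonable one will do - but that the register is revisited as the research progresses: new risks get added, scores change, and a threshold breach triggers a deliberate adjustment to the research plan rather than a quiet continuation.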


Justice: Fairness and Equity

Ensure that research selection and application is fair. There are multiple lenses on what it means to be fair, and it can be easier to talk about this with specific examples rather than in abstract terms. For example, it would be wrong to use public funds to research, exclusively, attacks on wealthy individuals and neglect attacks against other parts of society - unless it were specifically accounted for in the risk assessment as being of value, say to learn techniques that might later be commoditized and used to help the wider population. Inclusion or exclusion of research subjects or scope should be done specifically to support the research goals and not for arbitrary reasons.


In disclosing research we face the already widely discussed equities considerations, including the benefit of disclosure in ensuring issues get fixed vs. the need for confidentiality so as not to tip off attackers before issues (like vulnerabilities) are resolved. There is much well-established coordinated disclosure practice here that is applicable in other research domains.
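As a small, hedged example of what that practice often looks like, here is a sketch (in Python) of tracking a disclosure deadline; the 90-day window and 14-day extension are common conventions assumed for illustration, not requirements from the report.

    from datetime import date, timedelta

    # Illustrative only: one common coordinated-disclosure pattern is a fixed window
    # (90 days is a widely used convention, assumed here) between private notification
    # and public disclosure, extended when a fix is genuinely in progress.
    DISCLOSURE_WINDOW = timedelta(days=90)

    def planned_publication_date(reported: date,
                                 fix_in_progress: bool = False,
                                 extension_days: int = 14) -> date:
        """Earliest publication date for a finding under this sketch of a policy."""
        deadline = reported + DISCLOSURE_WINDOW
        if fix_in_progress:
            deadline += timedelta(days=extension_days)
        return deadline

    # Example: a vulnerability reported on 1 March 2024 would publish on 30 May 2024,
    # or 13 June 2024 if the vendor is actively shipping a fix.
    print(planned_publication_date(date(2024, 3, 1)))
    print(planned_publication_date(date(2024, 3, 1), fix_in_progress=True))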


Respect for Law and Public Interest

A big part of this is not just actual respect for the law and the public interest in framing and conducting research - but being transparent in doing so. Transparency serves to demonstrate compliance as well as to inform those affected by the research of the potential effects during and after the research.


  • Compliance. Researchers should identify laws, regulations, contracts, and other agreements that are applicable to their research. This is especially important for laws and regulations regarding computer crime and information security, privacy and anonymity, and intellectual property. This can specifically apply to identity theft, unsolicited bulk electronic mail, communications privacy, breach notifications, intellectual property concerns, child pornography and health information security and privacy.


  • Transparency and Accountability. Transparency should drive clear communications about the purposes of research and why certain data collection or activities are needed - along with how the results may be used. In many contexts it will include communicating the results of the various stages of research risk assessments. 


Implementing the Principles and Applications

As in most research fields, and indeed most of cyber and information security, it is vital to constantly examine prior successes and failures to enhance future work. This includes research activities. There are many well-established protocols, ranging from privacy research to vulnerability discovery and disclosure, that are aligned with the Menlo Report. Many other parts of research, including some internal organizational work in both public and private sectors, could benefit from these principles as well.


For further reading on this topic I’d recommend “Ethical Frameworks and Computer Security Trolley Problems: Foundations for Conversations”.

