There are still plenty of organizations that don’t have a well-defined and accessible bug bounty program. More surprisingly, there are also organizations that don’t even have an accessible vulnerability reporting process to handle discovered and reported vulnerabilities. That’s true regardless of whether they proactively seek out such hacker activity (here I’ll use “hacker” in the sense of a security researcher behaving ethically and legally).
Many organizations do, however, find the rationale for a vulnerability reporting process, and then a full bug bounty program, compelling. Simply put, if you have a vulnerability in any of your products or services, you want to maximize the probability that you find out about the issue ahead of anyone else and fix it before it’s widely discovered and exploited. Indeed, a good program will institutionalize faster time to remediation across the board.
With some effort, I introduced a bug bounty program in one quite conservative organization; interestingly, executive leadership and the Board were almost instinctively supportive ahead of others. Today, I benefit from a long-standing program.
Most organizations don’t have the logistics to stand up a means of managing the lifecycle of vulnerability reporting, rewards or the proactive offering of bounties, let alone the ability to make payments to people who aren’t traditionally regarded as vendors. So, many turn to bug bounty companies like HackerOne to get them up and running. (Full disclosure: I have been a prior customer of HackerOne and I am now an independent Board Director).
In the rest of this post I’ll cover the distinctions among the various reporting and rewards programs and the steps to set one up.
Types of Vulnerability Reporting and Rewards Programs
Vulnerability Reporting Program. A well-publicized place for anyone to report a security flaw in an organization's products or services. It could be an easily found page on the main web site or an email address like security@company.com. There needs to be a reliable process by which submissions can be triaged, checked, resolved and publicized in the appropriate way. All of this needs to be coordinated with the submitter and resolved in an acceptable time. Oh, and given you’re not claiming you’ll reward the submitter, you will need a good answer to their inevitable question of what their reward will be. This could be problematic if they’re not willing to disclose the vulnerability without such an agreement. Various government and other mandates may also refer to this, and related programs, as Coordinated Vulnerability Disclosure.
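One lightweight, standardized way to publish such a reporting channel is a security.txt file (RFC 9116), served at /.well-known/security.txt. A minimal sketch, with a placeholder domain and contact address:

```
# Served at https://example.com/.well-known/security.txt (RFC 9116)
Contact: mailto:security@example.com
Expires: 2025-12-31T23:59:59Z
Policy: https://example.com/security-policy
Preferred-Languages: en
```

Contact and Expires are the two required fields; Policy can point at the fuller program rules once you have them.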
Vulnerability Rewards Program. This is the next evolution, often done first or quickly after a basic reporting program is determined not to be enough. Such programs require more formal construction, publication of scope, rules, reward levels, and disclosure approaches (coordinated and time-boxed or otherwise). They will typically be much better publicized and directed to the research / hacking community to attract attention.
Private Bug Bounty. A bug bounty program, often used interchangeably to mean the same thing as a Vulnerability Rewards Program, has a subtle difference - at least in my view - in that the organization will proactively direct researcher / hacker activity against a particular target. This target might be a new product or service, or a particular critical control in a platform that is vital to the overall security of other services - hence they want some more exhaustive checking than the undoubted reviews they’ve already undertaken internally. A private bug bounty program can only really be done if your organization has relationships with a set of researchers / hackers who perhaps have a good track record from work with their vulnerability rewards program. More likely, though, is to tap into the researcher / hacker network of a bug bounty company. This is where selecting a company based on the reach, depth, breadth and dynamism of its community is vital. Even though it’s a private bug bounty it could still be useful to be able to offer recognition, in addition to compensation, for discoveries and doing the normal coordinated and time-boxed disclosure.
Public Bug Bounty. This is the next evolution and entails essentially permitting anyone in the community (or the bug bounty company's community) to have at it against a particular pre-defined scope.
Hackathon (also called Live Hacking Events). In-person, or sometimes virtual, coordinated events where an organization, directly or through its bug bounty company, convenes a single- or multi-day event for groups of researchers / hackers to come together on a common target, and in working together perhaps have greater effect. Yes, “hackathon” is also a broader term for similar group activities in software or other development.
Penetration Testing. Increasingly, bug bounty companies offer adjacent services like access to security researchers to conduct what might typically be regarded as penetration testing and attack surface discovery / scanning. This is typically more economical than some of the more captive penetration testing services, and I’ve seen many organizations use the savings to allocate resources to stand up a broader bug bounty program - with a different set of researchers to get a fresh perspective.
Making the Case
Organizations should be of a certain maturity before opening themselves to a bug bounty program. In other words, you should already be operating a reasonably secure software development lifecycle and an associated vulnerability management program. If you aren’t, then you’re going to be flooded with notifications and potential reward payouts for issues you could have easily discovered and resolved internally.
So, bug bounty programs are a necessary complement to a vulnerability management program, not a replacement for it. They also magnify the effect of that program: you get to see the types of bugs not typically found internally, which will not only reduce risk going forward but likely structurally improve the internal vulnerability management program.
The security aspect is most important though. Being part of a bug bounty program means your products and services are going to benefit from the probing of a wider researcher / hacker community that has learnt from a wider set of organizations. In other words, you’re exposing yourself to be tested and improved by an ecosystem that has to be on its A-game constantly.
The ability to tap into thousands of talented researchers / hackers is an economy of scale and an economy of skill that is almost impossible to replicate within your own organization. However, despite the evident value of this for both economic (savings) and risk (fewer latent security issues) reasons, there can be some reluctance in organizations to sign up for a bug bounty program. Some of the roles that might need convincing are:
Regulators. In my experience most reluctance here has been from a lack of familiarity. The remedy for this, other than the raw logic that such programs permit an organization to learn from and fix issues before they’re widely exploited, is to point to other organizations in that sector that have already implemented a program.
Auditors. They will be similarly concerned, but will quickly move to making sure the organization can respond well enough, execute a triage process and fulfill disclosure obligations. If this is a significant concern, then start small, perhaps with a limited-scope rewards program to demonstrate a path to scale.
IT / Development. IT teams are often concerned about the volume of vulnerabilities they might get. Yes, this is a little bit of a “head in the sand” issue in that, whether discovered or not, the vulnerabilities are still there. But it is a legitimate operational concern that you can mitigate by taking them through the triage process, the scoping and, again, this is where a private bug bounty effort of very limited scope will get them initially comfortable.
Executive Management. They are usually sold on the security risk reduction and the cost-effectiveness of tapping into the network of researchers / hackers, although they can be concerned about the potential growth of the reward payout budget. Again, this is a legitimate concern that is best addressed by showing them the reward-setting approach, the triage process and, if necessary, the initial tight scoping, so that you can show you’re expanding the program carefully and incrementally.
Board of Directors. In my experience, Boards are typically supportive if IT and executive management are. In seeking Board approval, if in fact you need it, it’s useful to find out whether other organizations those Board members are part of (in executive or Board roles) have a program, and use that for consensus building. It might be, though, that they are not aware their other organizations do this. Finally, the ultimate clincher for Boards is the value of bug bounty programs in forcing critical vulnerability discovery. Under increasing pressure to evidence solid cybersecurity oversight, a Board can point to a vibrant and effective bug bounty program as one means of doing exactly that. In other words, an important check and balance.
Establishing the Program
Once you have the commitment to start, at whatever stage of maturity, you’ll need to build your internal reporting and triage logistics, develop policies for disclosure, and put in place rigorous checks and balances for issues to be reviewed - and to ensure the correct and ethical behavior of researchers, as well as of the security team handling the report.
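As an illustrative sketch of one small piece of that triage logistics - with hypothetical severity tiers and reward amounts, not recommendations from any specific program - the severity and reward for a validated report could be derived from its CVSS score:

```python
# Illustrative triage helper: map a CVSS v3 base score to a severity tier
# and a bounty amount. Tiers follow the standard CVSS qualitative bands;
# the dollar amounts are hypothetical examples only.

CVSS_TIERS = [
    (9.0, "critical", 5000),
    (7.0, "high", 1500),
    (4.0, "medium", 500),
    (0.1, "low", 150),
]

def triage(cvss_score: float) -> tuple[str, int]:
    """Return (severity, reward_usd) for a CVSS v3 base score (0.0-10.0)."""
    if not 0.0 <= cvss_score <= 10.0:
        raise ValueError("CVSS v3 base scores range from 0.0 to 10.0")
    for threshold, severity, reward in CVSS_TIERS:
        if cvss_score >= threshold:
            return severity, reward
    return "informational", 0  # score of 0.0: track it, but no reward

print(triage(9.8))  # ('critical', 5000)
print(triage(5.3))  # ('medium', 500)
```

In practice the reward decision also weighs report quality, exploitability and business impact, but a published mapping like this keeps payouts predictable for both sides.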
When documenting the structure and rules of a vulnerability rewards program or wider bug bounty program think of:
Services in scope, inclusions and exclusions and any specific domain rule sets.
Qualifying vulnerabilities.
Non-qualifying vulnerabilities (some things might be deemed questionable but exist for a good reason and are more a matter of taste than substance; there are also situations where vulnerabilities exist in older versions of software but newer, patched versions are available).
Reward amounts.
Guidance on investigating and reporting bugs (e.g. use your own accounts, follow vendor instructions).
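Pulled together, a minimal program brief covering the points above might look like the following sketch - the scope, qualifying categories and reward tiers here are illustrative placeholders, not recommendations:

```
Scope: *.example.com web applications and the public API (api.example.com)
Out of scope: third-party services, corporate IT, social engineering, DoS
Qualifying: XSS, SQL injection, authentication bypass, IDOR, RCE,
  sensitive data exposure
Non-qualifying: issues only present in older versions where patched
  releases are available; missing headers with no demonstrated impact;
  self-XSS
Rewards (illustrative): Critical $5,000 / High $1,500 / Medium $500 / Low $150
Rules: test only with your own accounts; do not access other users' data;
  report promptly; coordinated disclosure after fix, within 90 days
```

Even a one-page brief like this gives researchers and your triage team a shared, unambiguous reference point.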
Any of the bug bounty companies will be able to provide pro-forma programs and policies and get you up and running and will likely save you a ton of work.
The State of Hacker Powered Security
HackerOne publishes some of the latest trends in its annual Hacker Powered Security Report, the latest of which (for 2023) is here. It also highlights the early stages of bug bounty programs for Generative AI implementations, from the foundation models to contextualized implementations.
Only 9% of hackers say NDAs put them off working with companies, and only 14% are reluctant to work with organizations that would constrain publicity.
The median cost of a bug on the HackerOne platform is $500, the average cost is $1,048, and the 90th percentile is $3,000.
Financial services is a growing sector, with 53% of hackers spending time on these organizations’ programs, up from 44% in 2022.
40% of hackers now hack government organizations, up from 33% in 2022.
Hackers use a plethora of techniques; while 95% of hackers specialize in web application testing, they also span a range of new and emerging technologies. 47% specialize in network application testing, 20% have experience with social engineering, and 63% do vulnerability research. 36% of hackers say they are most skilled at the reconnaissance part of hacking, and 20% say they are best at exploitation.
GenAI has become a significant tool for 14% of hackers, and 53% of hackers are using it in some way.
66% of hackers said that they do or will use GenAI to write better reports, 53% say they will use it to write code, and 33% say they will use it to reduce language barriers.
55% of hackers say that GenAI tools themselves will become a major target for them in the coming years, and 61% said they plan to use and develop hacking tools that employ GenAI to find more vulnerabilities. Another 62% of hackers said they plan to specialize in the OWASP Top 10 for Large Language Models.
When asked to rank their concerns about the risks GenAI poses, 28% of hackers were most concerned about criminal exploitation of the tools, 22% about disinformation, and 18% about an increase in insecure code.
While 38% of hackers say they think GenAI will reduce the number of vulnerabilities in code, 43% say it will lead to an increase in vulnerabilities.
Bottom line: all of your products and services will have vulnerabilities even after you’ve rigorously applied your own tests to find and fix them. If others discover those vulnerabilities in the field, you want to be the first to hear about it and to fix them ahead of others exploiting them. The researcher / hacker community can provide an extra level of scrutiny and be stimulated through explicit bounties to target your products and services to improve them. If you have the logistics to run your own program then of course proceed, but even the largest organizations find it easier to use one of the several bug bounty companies to stand this up quickly and effectively.