When Apple announced at Black Hat that it’s launching a bug bounty program, you could hear from the peanut gallery variations of a common theme: “it’s about time.”
Apple has taken some flak for being slow to join the many tech companies with bug bounty programs, from Alphabet to Yahoo. Increasingly, companies outside the tech sector, from auto manufacturing to airlines, are adopting this practice. Even the Pentagon has a bug bounty program.
So why has it taken so long for Apple to join the crowd of companies crowdsourcing vulnerability research?
Perhaps the folks in Cupertino recognize that it’s essential to have a robust strategy for finding and fixing vulnerabilities, but bug bounty programs should be low on the list of priorities for getting security right.
It’s true that bug bounty programs help companies find and squash bugs. But there are some limitations to bug bounty programs and some misconceptions about their purpose.
First of all, they won’t stop unscrupulous hackers from selling vulnerabilities to the highest bidder. You can’t compete with governments or hacking firms that are willing to pay enormous sums for zero-day exploits. And you shouldn’t try.
Secondly, bug bounty programs are a lot of work for companies to manage, without a huge payoff. Depending on the number of bugs disclosed and their severity, it takes time and resources to sort through submissions and prioritize fixes.
In 2015, Facebook’s bug bounty program, for example, received 13,000 submissions from 5,500 researchers. Facebook paid awards for only about 500 “valid reports” – less than 4 percent of all the bug bounty submissions it received that year.
Apple’s bug bounty program appears to be well thought out and designed with clear objectives in mind. Rather than opening up its program to anyone, Apple invited only a few dozen researchers to participate. Furthermore, Apple is narrowing its focus to just a handful of targets. The program only covers five vulnerability types in just two of its most valuable products, iOS and iCloud.
Apple’s awards range from $25,000 at the low end up to $200,000 for secure boot firmware exploits – perhaps the biggest payouts of any legitimate bug bounty program to date. Most companies can’t afford the same kind of payola that Apple can, but awards don’t need to be enough for bug finders to retire on (Facebook’s average payout last year was just $1,780).
The purpose of a bug bounty program should be to compensate researchers fairly for what can be enormously time-consuming and difficult work. But it’s also a sign of openness and accountability. Companies that pay bug bounties show their appreciation for and willingness to work with the whitehat hacker community.
Even a small gesture of public recognition may be sufficient. A public “thank you” is definitely a better way to treat ethical hackers than the “thanks” some have received from overzealous prosecutors, who have sometimes used a draconian interpretation of anti-hacking laws like the Computer Fraud and Abuse Act to punish security researchers. After all, the work security researchers do benefits not just the companies whose vulnerabilities they help close – they help keep the public safe from security threats, too.
Bug bounty programs are designed to catch vulnerabilities that slip through the cracks during development, but completely outsourcing security to the bug hunting community is not a safe bet. The public-facing applications you develop need to have security baked in before they go out into the world.
Also, new vulnerabilities are disclosed all the time. Whether it’s applications you develop in-house using third-party and open-source components, or software you buy from third-party vendors, applications need to be continuously assessed and patched.
Find out more about a strategic approach to securing your software with our Ultimate Guide to Getting Started With Application Security.