Crowdsourcing security holes—aka bug bounties—has become an increasingly popular tactic for tech firms, bordering on Silicon Valley standard operating procedure. But as tempting as such an approach is, it's not without serious drawbacks.

What we're talking about is encouraging and incentivizing anyone and everyone to dig into your app or OS and beat up on it, trying to find security problems. If they find one, the bounty program pays them once the problem is confirmed. The good part is that this is a very efficient and effective way to quickly identify and fix security holes. Depending on the bounty offered, it might even be cost-efficient as well.

The bad side is that there is no control. Some of the people doing this probing may not have your best interests at heart. Once they put in the time and effort and find a serious hole, what if they choose to sell it to a cyberthief instead of to you, presumably for a lot more money? What if they choose to publish it on security websites, instantly making you a target until you can fix the hole? What if they choose to complain publicly about the hole's existence, hurting your stock price and sales?

A bounty program is light-years away from retaining a small group of security consultants, people you pay to try to find holes privately. That's a much safer route, but it's not necessarily going to expose as many holes as quickly as inviting anyone on the Internet to have a go.

That said, there can be very attractive financial reasons to bountyize. A security consultant will charge for the time and effort to investigate your system, regardless of what they find. A declaration that your software is airtight comes with the same bill as a list of 50 flaws. Well, usually: under a strict hourly arrangement, the 50-flaw report will take more time to document and cost more accordingly. Note: somewhere out there is probably a security consultant who has found zero flaws in a piece of software, but I've yet to meet one.

A bounty program charges you nothing if nothing is found. And if something is found, you pay a locked-in fee for that particular flaw. Generally, the fee is fixed regardless of the severity of the hole, which cuts both ways. It's also dangerous to refuse to pay for a minor hole; the resulting publicity rarely makes the savings worth it.

Part of the "should I do a bounty program or not?" calculation is based on how much of a target your company is. For Google, Walmart, Chase Manhattan and others, it's clear that bad guys are trying to break in at all times anyway, so the added risk of a bounty program is minimal. (Apple is the exception to this rule, but more on that in a moment.)

But if your company has a very low profile, you need to seriously weigh your options. Do you really want to encourage that kind of attention and prodding/poking?

Now, as promised, let's get to Apple. For good or for bad, Apple has tended to go its own way, regardless of industry trends. Apple would call it leadership; others would opt for different phrasing. Apple's approach to bug bounties is definitely different. It offers a bounty program, but the reward for finding something is a heartfelt "thanks" instead of money. (Editor's note: At Black Hat 2016, Apple announced a bug bounty program with significant monetary payouts, up to $200,000 for certain vulnerabilities. We applaud the decision.)

Far from being renegade, Apple's position makes some sense. To the extent that a bug bounty program is all about harnessing the gigantic power of crowdsourcing to find bugs your developers missed, Apple's huge profile already delivers that. It knows that any hole in published Apple software will be discovered in days—if not hours—and will appear on the usual-suspect sites. Why pay for what you're likely to get for free?

Another part of this bounty ROI equation is whether a company is obligated to pay for what it already knows. As experienced security professionals know, many large companies know about security holes that they have chosen, for various cost-related reasons, not to fix. They have simply calculated that some holes are sufficiently obscure that realistic fraud losses will be less than the cost of a fix. What should such a company do when 200 bounty participants "discover" that hole and all seek money for the find? If you've chosen not to fix a security hole, you have presumably also decided to keep that fact a secret.

Of course, for Apple, the real issue is finding the security holes before the software is widely distributed. One of Apple's best-known disasters was its initial Apple Maps rollout. Eddy Cue, Apple's senior vice president of Internet software and services, recently discussed that rollout and why Apple is rethinking its software development approach.

"Most importantly, he realized that Apple’s notorious secrecy—something (former CEO Steve) Jobs had honed while chief executive—isn’t the best for software development. 'To all of us living in Cupertino, the maps for here were pretty darn good. Right? So [the problem] wasn’t obvious to us,' Cue said about early Apple Maps testing," according to Fortune story. "'We were never able to take it out to a large number of users to get that feedback. Now we do.' Apple is now able to try out its software on so many users by offering something it had never done until 2014: allowing the public to test beta versions of its mobile and desktop operating systems. At its Worldwide Developers Conference (WWDC) in June, Apple announced that it would make available both iOS 10 and macOS Sierra to the public for testing."

This gets into some classic Silicon Valley parlance. A beta test is software shown to a small number of people who are willing to report back to the company on what they find. These public tests, though, are treated by many as launches. Consumers will grab the software and deploy it as though it's final code, fully prepared to blame Apple for anything that goes wrong. Software developers can't win.

Bottom line: bug bounties are a good idea, but not a great security strategy in themselves. You have to look at your company's security profile and set up specific limits on your bounties—so you don't pay for holes that you either already knew about or that another bounty-seeker already found. Done properly, a bounty program can certainly unearth things you need to know.
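To make that concrete, here is a minimal sketch, in Python, of the kind of payout gate those limits imply. Everything in it is hypothetical: the BountyTriage class, the should_pay method, the known_issues list and the fingerprint strings are illustrations of the dedupe-and-exclude idea, not any real bounty platform's API.

    from dataclasses import dataclass, field

    @dataclass
    class BountyTriage:
        """Hypothetical payout gate: pay only for flaws that are new to you."""
        known_issues: set                        # holes already tracked internally (no payout)
        paid: set = field(default_factory=set)   # fingerprints already rewarded

        def should_pay(self, fingerprint: str) -> bool:
            """Return True only the first time a previously unknown flaw arrives."""
            if fingerprint in self.known_issues:
                return False   # you already knew about it; per the limits above, no payout
            if fingerprint in self.paid:
                return False   # another bounty-seeker got here first
            self.paid.add(fingerprint)
            return True

    triage = BountyTriage(known_issues={"obscure-report-export-leak"})
    print(triage.should_pay("auth-bypass-on-login"))        # True: new flaw, pay it
    print(triage.should_pay("auth-bypass-on-login"))        # False: duplicate report
    print(triage.should_pay("obscure-report-export-leak"))  # False: already known internally

The hard part in practice, of course, is computing a fair fingerprint: two write-ups of the same hole rarely look alike, which is why triage teams, not scripts, make the final call.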

The risk, though, is akin to that of a tiny fishing boat deep at sea. A school of dolphins swimming nearby is frightening off the cod the fisherman is after. To scare the dolphins away, he splashes his paddle in the water furiously, hoping to attract sharks that will drive off the dolphins. The moral of the story: when you encourage cyberthieves to explore your software, you may find that you've invited more headaches than you intended.

About Evan Schuman

Evan Schuman has covered IT issues for a lot longer than he'll ever admit. The founding editor of retail technology site StorefrontBacktalk, he's been a columnist for RetailWeek, Computerworld and eWeek, and his byline has appeared in titles ranging from BusinessWeek, VentureBeat and Fortune to The New York Times, USA Today, Reuters, The Philadelphia Inquirer, The Baltimore Sun, The Detroit News and The Atlanta Journal-Constitution.
