There is no doubt that Web 2.0 is upon us. The software we use every day is migrating from our desktops, laptops, and company servers to the great data centers in the sky. E-mail was the first application to move to the cloud, followed by picture and file sharing services, and now traditional desktop applications such as calendaring, task lists, spreadsheets, and word processing are all available via the web. Soon the average computer user will have little need for any applications running on the desktop at all, except a web browser with media player plug-ins.

There are many benefits to software as a service: no hardware to buy, no software to install and maintain, and the ability to get at the application and your data from any internet-connected computer with a browser. These benefits all come from the fact that the software is running on a computer owned and maintained by someone other than the user.

To understand what this new software paradigm means for the field of vulnerability research, we need to look back at how the field has evolved over time. Public vulnerability research originated somewhat illegitimately in the late '80s and early '90s.

The networked systems for which vulnerability research was meaningful were expensive. They were in the hands of large corporations, governments, and universities, and were prohibitively expensive for individuals. It was the rare computer geek who had a Sun or SGI workstation.

Researching software vulnerabilities meant either getting permission from your employer or school, or connecting to computers you had no legitimate access to or permission to attack. Since getting permission to potentially crash a business computer was difficult, the only vulnerability researchers doing things legitimately were in academic or research environments, and they were rare. The majority of research was being done illegitimately on other people's computers.

By the mid-'90s, hardware had become more powerful and cheaper. This, combined with the advent of Linux and FreeBSD, meant individuals could afford to run the software for which vulnerability research was meaningful. Name servers, mail servers, web servers, file transfer servers, and operating system weaknesses were the staples of early vulnerability research. Now anyone could inexpensively install an OS and target software and research vulnerabilities legitimately in the comfort of their very own lab (or bedroom).

So around 1996 we start to see many more vulnerabilities being disclosed through organizations like CERT or on public mailing lists like Bugtraq. Of course, the vulnerabilities were always there. The difference is that people now had the capability to look for them without breaking the law: they had access to the hardware and software needed to find them. Note the quick ramp-up of vulnerabilities disclosed in this chart from the National Vulnerability Database.

[Chart: NVD vulnerabilities by year]

Now zoom forward to 2006. Vulnerability research is alive and well, but some categories of important software are off limits. How exactly is a vulnerability researcher supposed to inspect Google Spreadsheets for vulnerabilities the way he would inspect Microsoft Excel? Remember, the software is running on Google's hardware in Google's data center. Can Yahoo Mail be scrutinized to the same level that Microsoft Outlook can be? The answer is no, and there are two reasons why.

  1. Researchers are in legal jeopardy if they stage attacks against a computer owned by Google or Yahoo.
  2. Researchers cannot use valuable reverse engineering or “grey box” techniques to find vulnerabilities (a sketch of the kind of local testing this rules out follows below).
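
To make the second point concrete, here is a minimal sketch of the kind of testing a researcher can legally run against software installed on hardware he owns: mutation fuzzing a file parser. The target binary (local_parser), the seed file, and the file names here are hypothetical stand-ins for any locally installed software you own a copy of; the point is simply that none of this is possible when the only copy of the code runs inside Google's or Yahoo's data center.

```python
import random
import subprocess

# Hypothetical locally installed target binary and seed input; stand-ins for
# any parser you own a copy of (a spreadsheet importer, an image library, etc.).
TARGET = "./local_parser"
SEED_FILE = "seed_input.bin"

def mutate(data: bytes, flips: int = 16) -> bytes:
    """Return a copy of the seed with a handful of random byte changes."""
    buf = bytearray(data)
    for _ in range(flips):
        buf[random.randrange(len(buf))] = random.randrange(256)
    return bytes(buf)

def main() -> None:
    seed = open(SEED_FILE, "rb").read()
    for i in range(1000):
        case = mutate(seed)
        with open("fuzz_case.bin", "wb") as f:
            f.write(case)
        # Run the local target on the mutated input and watch for crashes.
        result = subprocess.run([TARGET, "fuzz_case.bin"], capture_output=True)
        if result.returncode < 0:  # negative return code: killed by a signal, e.g. SIGSEGV
            print(f"case {i}: target crashed with signal {-result.returncode}")
            with open(f"crash_{i}.bin", "wb") as f:
                f.write(case)

if __name__ == "__main__":
    main()
```

Pointing the same loop at a third party's servers, of course, is exactly the sort of unauthorized attack that puts researchers in legal jeopardy.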

In my next posting I will discuss what this means for the future.

[Update: CSO magazine has published an article on this topic, "The Chilling Effect." Scott Berinato, the author, interviewed me about the trend of XSS and SQL injection surpassing buffer overflows as the new kings of the vulnerability heap. I told him that it was true, but that I had a bigger concern: disclosure of these types of vulnerabilities was starting to become problematic for researchers. Scott ran with the idea and wrote a great article.]

About Chris Wysopal

Chris Wysopal, co-founder and CTO of Veracode, is recognized as an expert and a well-known speaker in the information security field. He has given keynotes at computer security events and has testified on Capitol Hill on the subjects of government computer security and how vulnerabilities are discovered in software. His opinions on Internet security are highly sought after and most major print and media outlets have featured stories on Mr. Wysopal and his work. At Veracode, Mr. Wysopal is responsible for the security analysis capabilities of Veracode technology.
