Last week, Ben Worthen of the Wall Street Journal spoke with Howard Schmidt about vulnerabilities in purchased software while Schmidt was waiting in line to have his iPhone upgraded.

Howard Schmidt, once the CSO of Microsoft, knows a thing or two about vendors shipping insecure software. He offers this advice about his iPhone: "Just because a piece of software was distributed through Apple’s App Store, don’t assume that it is vulnerability free." I think that sums up the problem pretty well. Customers assume the software they are getting is vulnerability free until it is proved otherwise.

If it's distributed through Apple's App Store, it comes from a trusted brand. "It must be secure," many think. The same thinking is used by people who install social networking applets and give them access to their personal data. Someone, somewhere, is taking care of software security so I don't have to. It must be the platform provider, the store, some industry body, my antivirus provider, or maybe even the government.

You can see why this thinking pervades the consumer space: regulatory bodies govern nearly every other aspect of safety and security in our personal lives. I'm safe in a plane or car because the government is looking out for me with standards and testing requirements. I'm safe in the mall parking lot because the men in the white SUV are patrolling.

This thinking also pervades the B2B space. I talk to companies that outsource critical applications to offshore development firms, and they assume security testing is taking place as part of the development process. I ask whether they have made security quality part of the project requirements, and they say no. Then I ask what evidence the offshore developer provides to demonstrate a certain level of security quality in the software it produces, and they tell me they have never asked.

I can tell you what would happen if they did ask, because I have also spoken with the offshore developers. They have no evidence. Their concern is getting the software functionality done on time and on budget. They consider fixing security vulnerabilities, once discovered, to be rework that the customer pays for. So not only are they not looking for vulnerabilities, relying instead on the customer to find them; they are charging the customer to fix the problems. To date, customers have accepted this model.

The same goes for commercial off-the-shelf software and open source. Surely the developers writing the software are trained in secure software engineering. Surely commercial software companies are using third parties to test their software, just as banks have the Big Four audit their accounting and auto manufacturers submit to testing by the NHTSA. And of course open source has "many eyes" reviewing the code for security defects and informing the developers. The customer has accepted a model where this is almost never true.

But times are changing, and it is partially due to the availability of software that can automate the process of looking for security vulnerabilities. David Rice, the author of "Geekonomics: The Real Cost of Insecure Software," was interviewed recently by Drazen Drazic on his Beast or Buddha blog. He said the trend is toward a future of secure software, and automated security analysis is one of the sparks:

BorB: I recently wrote in a post that little is changing. We are not learning from the lessons of the past. There are few, if any, new technologies today that we have great faith and trust in as being secure now and expect to remain secure in the future. Any solution to even basic security issues needs a starting point and a significant change to current thinking, and even then it will take years to see the impact. What are your thoughts on this? Are we seeing anything at present to make us more confident about the future?

DR: It is true that it takes years to see the positive impacts of a change of mindset. And we are in the unfortunate position of repeating many old lessons.

At base, human history is a collection of exhaustive, expensive, and protracted engagements; only the relentless survive and have a chance at succeeding (notice no guarantee here). Confronting some of our most complex problems like highway safety, nuclear proliferation, or insecure software is painful, difficult, complicated, and troublesome. Human endeavors of any significance are like this. But we must do it. The inertia of culture and status quo is difficult to overcome, but overcome it we can; otherwise, we would not have the better parts of the world we enjoy today.

I believe the technology space is no different. We are just a little dazed and bewildered by all the changes technology has introduced so quickly and on such a grand scale. For every change we react to, another two or three rapidly appear.

I do see sparks of hope emerging. In the United States some members of government are beginning to understand the problem and are willing to start discussing how to approach insecure software from a policy perspective. On the technology front, companies like Ounce, Fortify, and Veracode are beginning to give software buyers an automated method of evaluating assurance levels of software. While not complete in and of themselves, these solutions are, as I stated, “sparks” that can help us progress down paths that were once not easily open to us.

As for the larger issue of cyber security, which software assurance is only a part of, society has a lot of adjusting to do. The Internet is a new environment for many still, and many more to come. There is a learning curve that must be confronted. It took the United States almost 80 years to develop the highway system we know and enjoy today. Nearly $400 billion was spent on this endeavor with hundreds of thousands of lives lost. As this shows, learning how to govern and navigate a new environment is expensive. Failing to learn even more so.

Independent, automated, and repeatable software security testing is an essential component of a safe and secure online environment. Without it, we are left with the assumption that vendors are performing software security testing, an imaginary security blanket that allows us to operate in the current online world.
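
To make "independent, automated, and repeatable" concrete, here is a minimal sketch of one way a buyer could gate an acceptance or build step on the results of a security scan. The scanner command, report format, and severity scale below are hypothetical placeholders, not any particular product's interface.

```python
# A minimal sketch (not any vendor's actual interface): run a hypothetical
# command-line security scanner over a build artifact and fail the build if
# the report contains findings above an agreed severity threshold.
import json
import subprocess
import sys

# Hypothetical scanner invocation and report path; substitute whatever tool
# and report format your organization actually uses.
SCAN_COMMAND = ["security-scanner", "--target", "build/app.jar",
                "--report", "findings.json"]
MAX_ALLOWED_SEVERITY = 3   # assumed 1-5 scale; 4 (high) and 5 (critical) block

def run_scan():
    """Run the scanner, then load the JSON findings report it writes."""
    subprocess.run(SCAN_COMMAND, check=True)
    with open("findings.json", encoding="utf-8") as report:
        return json.load(report)

def gate(findings):
    """Return a nonzero exit code if any finding exceeds the allowed severity."""
    blockers = [f for f in findings if f.get("severity", 0) > MAX_ALLOWED_SEVERITY]
    for finding in blockers:
        print(f"BLOCKER: {finding.get('category')} in "
              f"{finding.get('file')}:{finding.get('line')}")
    return 1 if blockers else 0

if __name__ == "__main__":
    sys.exit(gate(run_scan()))
```

The point is not the specific tool; it is that the check runs the same way every time, produces evidence the buyer can inspect, and does not depend on the vendor's assurances.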

About Chris Wysopal

Chris Wysopal, co-founder and CTO of Veracode, is recognized as an expert and a well-known speaker in the information security field. He has given keynotes at computer security events and has testified on Capitol Hill on the subjects of government computer security and how vulnerabilities are discovered in software. His opinions on Internet security are highly sought after and most major print and media outlets have featured stories on Mr. Wysopal and his work. At Veracode, Mr. Wysopal is responsible for the security analysis capabilities of Veracode technology.
