A little over a week ago marked the 9th anniversary of the 9/11 attacks against the US. On the day after the attacks, September 12, 2001, I was scheduled to testify before the US Senate Committee on Governmental Affairs at a hearing titled "How Secure is Our Critical Infrastructure?" The hearing went on, but no one from outside of DC was able to get there in time. The following is the written testimony we submitted. We talked about:

  • the security of commercial software
  • one of the first botnets
  • the threat of consumer devices entering corporate environments
  • application security

All are still major problems today. Nine years later it often seems that not much has changed. -Chris

September 12, 2001

The Honorable Joseph I. Lieberman
Chairman, Committee on Governmental Affairs
United States Senate

Dear Mr. Chairman:

Per your request, @stake Inc. is pleased to provide the attached testimony for consideration by the Committee on Governmental Affairs. As a company focused solely on digital security, @stake hopes that the observations and opinions offered in this testimony will be of assistance to you and your committee.

Over the past two years, @stake has worked with many Fortune 1000 clients who provide critical energy, telecom and financial services in support of our national infrastructure. While all of our clients have good intentions with respect to digital security, the levels of preparedness and execution are mixed. It is our hope that through this testimony we can provide a baseline view of the vulnerability of our nation's critical infrastructure, specifically as it relates to services provided by the private sector. We appreciate this opportunity.

Regards,

Christopher Darby
President and CEO, @stake Inc.

Peiter Zatko
Chief Scientist and VP of Research & Development, @stake Inc.

Chris Wysopal
Director of Research & Development, @stake Inc.

Testimony and Statement for the Record of Christopher Darby, CEO, @stake, Inc., Peiter Zatko, Chief Scientist and VP of Research & Development, @stake, Inc., and Chris Wysopal, Director of Research & Development, @stake, Inc.

Hearing on "How Secure is Our Critical Infrastructure?" before the Committee on Governmental Affairs United States Senate Wednesday, September 12, 2001 Room 342, Dirksen Senate Office Building @stake Inc. is the world’s largest independent digital security consulting and engineering firm. Formed in 1999, @stake today works with more than 100 clients worldwide including many Fortune 1000 financial and telecommunications institutions. Over the past two years @stake has gathered over 100 of the world’s leading authorities on digital security including people from the NSA, DERA (the U.K. equivalent of the NSA), the FBI, RSA Security, Nortel, MIT, Certco and other prominent institutions. @stake today staffs operations throughout the United States and in Europe. Three years ago Mssrs. Zatko and Wysopal, then members of the L0pht security think tank, testified before this committee on the subject of, “Weak Computer Security in Government.” Today the focus has expanded to encompass the national critical infrastructure, recognizing that government security is dependant on the security of many entities outside of government, for the most part for-profit enterprises. @stake’s business model has not, to date, focused on the Government. Our focus has been on the large commercial enterprises. Many of our clients provide services in support of critical national infrastructure. The majority of @stake’s client engagements focus on assessing digital risks and engineering technical solutions for large multinational companies. It must be remembered that the mandate for these companies is to derive shareholder return, not to secure critical infrastructure. Today @stake’s client base views security as a sunk cost, largely a byproduct of Information Technology architecture and associated spending. Security is viewed as a cost borne to mitigate risks that may negatively impact the corporate mandate of generating shareholder return. The following testimony provides opinion on the security of our national critical infrastructure, specifically as it depends upon the security strategy, architecture and implementation of large commercial enterprises providing such things as financial, telecommunication, energy, and transportation services.


Three years ago attention was drawn to risks taken with technologies that were not well understood. Technologies were being deployed without regard to the larger purpose of the organization. Businesses and government were driving full steam ahead to exploit the potential of Internet technologies. Although the Internet had tremendous resilience and potential, little concern was given to the vulnerabilities serious enough to bring the whole thing crashing down. People involved in offensive security research could, to use a now famous line, "take down the Internet in 30 minutes." What was referred to then were weaknesses in central points in the network, and the use of mechanisms similar to the now well-known distributed denial of service attacks. The proposed solution to the computer security problem was to get vendors and infrastructure owners to take security more seriously by forcing them to find the weaknesses and problems in their own products. To that end, people endeavored to publicly educate vendors in ways of finding and attacking problems in software and network systems. Fundamental changes in the way vendors and businesses approach security were required.

The Changing Threat Model

Today's threat model is not addressed by simply running the most popular firewall. Today's threat model is not addressed by access control. These components address only a narrow slice of the risk. The fact is, today's threat is no longer about active attack; today's threat is about passive control. Yesterday's elite hacker is today's puppetmaster, no longer content to deface or disrupt a website, but instead seeking total information control. The world of application security has only just become visible on the horizon as a huge area of risk that has not attempted to protect itself. Recent worms such as Code Red, while gaining notoriety, pale in comparison to similar, though lesser-known, worms that lie dormant with the capability of manipulating the nation's critical infrastructure under their master's control. To illustrate the potentially catastrophic nature of this threat, consider that an estimated one third of the classified data on SIPRNET now relies upon these public shared infrastructures. The threat model has indeed changed. A multi-disciplinary emphasis, both strategic and tactical, must be embraced. With the majority of the world treating security as a tactical response, we must now compensate with more strategic thinking if we are to successfully move into the next era.

Strategic Architectural Design

Security policies and security mechanisms should, but unfortunately often do not, vary greatly from one organization to another. Too often due diligence is viewed as the installation of protective software in its "out of the box" configuration. A university has very different security requirements than a bank or a utility company. Different levels of risk demand different amounts and types of risk mitigation. @stake has found that the level of security varies greatly even within organizations in the same industry. This is especially disturbing when the organizations are part of our nation's critical infrastructure, where information security requirements are the highest. It is most pronounced in areas such as electrical utilities and gas refineries, which are potentially enticing targets in the new world threat model. Both industries rely on a segmentation structure around what are called Supervisory Control And Data Acquisition (SCADA) systems and Distributed Control Systems (DCS). These systems are vulnerable to attack and could potentially disrupt the operations supporting infrastructure such as the national power grid.

As security consultants, @stake gets to see the details of the security architectures of many organizations. Recently we looked at the information security design of two critical infrastructure organizations: a large modern oil refinery and a large fossil fuel power plant. The contrast was stark.

The refinery was highly automated, with many sensors and computer-controlled machines and a succinct network design. There were multiple levels of firewalls to protect the many different security levels. Information was only allowed to pass in one direction: from the most sensitive levels, where the computer-controlled devices were connected, up to a less sensitive network, the plant control network, and finally to the least sensitive network, the corporate network, which is where you would find standard business operations like accounting. The network design grew out of performance, reliability and safety concerns but had the added benefit of being very secure. Since information could only flow in one direction on the network, it was not possible for someone on the corporate network to affect the sensitive computer-controlled devices running the refinery. The network was designed so that someone, perhaps in accounting, couldn't make a mistake and query a piece of equipment for historical data, which might impact the performance of the equipment if it was contacted at the wrong time. This same design protects the sensitive plant equipment from being controlled by a malicious attacker who may have broken into the corporate network from the Internet, or by a malicious insider in the corporate offices.

The fossil fuel power plant was a completely different story. The different network levels at the power plant were all joined together without a firewall to segment them from one another. The plant network was connected to the corporate network with a simple network router that did not provide adequate network filtering. The end result was that anyone, anywhere on the corporate network of this very large power company, could control this power plant and perhaps many others. The only thing stopping anyone on the Internet from wreaking havoc with the power plant was a single level of firewall separating the power company's corporate network from the Internet. In the case of critical infrastructure, this is clearly not good enough.
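The one-way flow the refinery enforced can be expressed as a very small policy. The sketch below is a hypothetical illustration only (the zone names and ordering are invented, not taken from any client network): it ranks zones by sensitivity and allows a connection only when it originates in a more sensitive zone.

```python
# Hypothetical sketch of a one-way information flow policy, in the spirit of
# the refinery design described above. Zone names and ordering are invented.

# Zones ordered from most sensitive to least sensitive.
SENSITIVITY = {
    "device_network": 3,   # computer-controlled plant equipment
    "plant_control": 2,    # plant control network
    "corporate": 1,        # accounting and other business operations
    "internet": 0,
}

def flow_allowed(src_zone: str, dst_zone: str) -> bool:
    """Permit traffic only if it originates in a more sensitive zone.

    Data may be pushed "up" toward less sensitive networks, but nothing in a
    less sensitive zone (e.g. corporate) may initiate contact with plant
    equipment.
    """
    return SENSITIVITY[src_zone] > SENSITIVITY[dst_zone]

if __name__ == "__main__":
    # Plant equipment may publish data up toward the corporate network...
    assert flow_allowed("device_network", "plant_control")
    assert flow_allowed("plant_control", "corporate")
    # ...but accounting and the Internet may never reach back into the plant.
    assert not flow_allowed("corporate", "device_network")
    assert not flow_allowed("internet", "plant_control")
    print("one-way flow policy checks passed")
```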
How did two similar industrial computer networks end up being implemented so differently? The answer lies in the way they were put together. The power plant network grew over time in an ad hoc way; pieces were added one by one as technology became available, without an overall network architecture. The refinery network had a strict architecture that had to be adhered to as the network was built. It is like the difference between a planned community and a shantytown. Planning has many benefits: reliability, performance and security. You can often achieve acceptable performance and functionality without upfront planning, but to achieve proper security, especially the level required for critical infrastructure, it must be planned in from the start.

Security of Commercial Software

Vendors have responded to the onslaught of publicly reported vulnerabilities in their products by shortening the time their response teams take to fix problems once a problem is reported. While software security is conceded to time-to-market pressures, companies appear willing to expend more energy reacting than on proper design. The notion of proactively analyzing code in anticipation of future attack situations has been entirely overlooked. The path of reactive fixes, or "patches," to software requires customers to expend effort installing them on every machine that has the vulnerability. Ironically, most organizations today have not realized that part of their total cost of ownership for a piece of software is the monthly installation of patches. In organizations with tens of thousands of computers, this maintenance cost greatly affects the bottom line. Microsoft's web server, a core component of many businesses today, illustrates this trend. In 1998 there were 5 software patches released for it, in 1999 there were 10, in 2000 there were 16, and through August 2001 there have been 6. Microsoft is not alone in this regard; all of the major vendors approach software security in this way.

A full month before the Code Red worm, Microsoft provided a solution that, if it had been installed correctly, would have mitigated the risks resulting from the Code Red attack. At the height of the Code Red worm infestation several hundred thousand machines had been compromised. Even today, after many weeks and a lot of media attention, an estimated 40,000 unpatched and vulnerable computers remain infected. The majority of organizations that configured their internal software to use only those components required to meet their business needs would not have been vulnerable to the Code Red worm. @stake consultants are constantly editing out unnecessary functionality to assist clients in streamlining their operations and enhancing their security profile. Those clients often do not even have to worry about the vendor-supplied patch. The ultimate goal is to architect for maximum performance while minimizing complexity.

How does one prepare for the future rather than patching the past? The Code Red worm could have been much worse. Had this worm been written correctly, the steps used to mitigate its attack on whitehouse.gov would have been ineffective. By theorizing about the logical evolution of this type of worm, our defense and security goals can be better realized. In the words of Winston Churchill, the worst-case scenario should never come as a surprise.
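As a purely hypothetical sketch of the "only the components the business needs" point above, the following compares what a default installation enables against what the organization actually requires and flags the rest; the component names and inventory format are invented for illustration.

```python
# Hypothetical sketch: flag server components that are enabled but not
# required by the business. Component names are invented for illustration;
# Code Red, for example, spread through an indexing component most sites
# never needed to have enabled.

REQUIRED = {"static_pages", "tls"}              # what the business needs
ENABLED = {"static_pages", "tls",               # what the default install enabled
           "indexing_service", "printer_extension", "sample_scripts"}

def unnecessary_components(enabled: set, required: set) -> list:
    """Return components that widen the attack surface without business value."""
    return sorted(enabled - required)

if __name__ == "__main__":
    for component in unnecessary_components(ENABLED, REQUIRED):
        print(f"disable or remove: {component}")
```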


Reacting to today's environment will lead to defenses that are incapable of protecting against tomorrow's threat. Understanding current and future attack methodologies is the important first step of defensive computer security research. By fully exploring anticipated attack methodologies and attack tool capabilities, defenders will be placed ahead of attackers. The Department of Defense has acknowledged it is building prototype biological weapons. In order to come up with the best defenses against biological weapons, researchers first had to build prototype weapons in order to understand their capabilities. Only then could they start to model defense methods. Information weapons have much in common with biological weapons: both allow small groups to inflict severe and widespread damage and to attack with no warning.

People need to understand and be educated about the security risks. Security and technical people need to learn how to communicate in a language that is understood by both the technical and business constituencies. A technical person needs to communicate to a business executive in business terms relative to the organization's goals. Conversely, a business person needs to convey business goals to the engineers. Tools and threat models will invariably change over time. A security mindset is required. This mindset recognizes that a tool is only one component of the larger solution. A solution must evolve on an ongoing basis to anticipate and meet emerging threats. In short, security education is an ongoing process and security solutions must be living.

Hidden Threats

Island Hopping

One of the significant new threats to both the government and the commercial sector is "island hopping." Island hopping is the act of automatically scanning large ranges of network addresses (often dedicated to servicing personal, or home, users) and taking control of the remote users' computers. This tactic results in attackers taking control of an unsuspecting user's computer in order to then "hop" into the user's corporate network, utilizing the victim's own VPN to bypass the corporate firewall. Cable modem, DSL, and dial-up Internet Service Providers have large blocks of address space from which they dynamically assign addresses to users who connect and disconnect on demand. These addresses are scanned looking for vulnerabilities in common operating systems and applications. No longer is the threat the network; it is now the application. Breaching an organization's perimeter remains the goal, but the avenue of attack has shifted to the weakest link: the employee's home computer. In this example, the VPN is converted into an attack tool as opposed to a security solution.

Few organizations have the resources or awareness to bring each employee's personal system up to the same security level as the organizational firewall. This is the same system on which children play network games, home banking is conducted, and unrestricted web surfing and online chat occur. This same home computer is trusted, via the VPN, to enter the corporate perimeter and appear in tandem on the "secure" network. Island hopping compromises as many systems as possible in an automated fashion and then looks for systems that have VPN interfaces configured, giving them access to the internal networks of agencies and organizations. A large software giant on the west coast suffered a significant attack, and the resulting loss of critical assets, in exactly this fashion. What would have happened if the Leaves worm (a malignant worm active today and designed to be controlled by one or more unidentified individuals), which was estimated to have compromised over 200,000 systems, had been instructed to report which systems were trusted as internal through VPNs? @stake estimates that the majority of organizations in the private and public sector would have had their firewalls bypassed.

The Leaves worm ingeniously piggybacked itself onto another remote control program called Sub7. Finding previously compromised computers and taking control of them was just the beginning of what made Leaves interesting. The really interesting fact was that a single person controlled all of the computers infected with Leaves by using a public chat network called IRC, or Internet Relay Chat. This worm was created to control as many computers as possible. Once a computer was under its control it could launch a denial of service attack on a piece of the Internet, be a launching point to spread other new worms, or do anything else the "puppetmaster" could dream up in the future. Just by issuing a few simple commands over the IRC chat system he could get his army of computers to do his bidding.

Over the past 3 years attack and control technologies have steadily advanced, but the primary defensive technologies, firewalls and antivirus scanners, have remained mostly the same. They are in more widespread use but have not stepped up to solve the problems that hit the Internet at its weakest point: vulnerabilities in applications and operating systems.
Attack technologies such as worm toolkits, multiplatform worms, polymorphic shell code, and kernel-level root kits make it possible for attackers to compromise more computers, faster, and remain in control of those computers. Routers, which control the flow of network information, are also the targets of many of these control networks.
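To make the island-hopping exposure concrete, here is a hypothetical check a defender might run: it flags VPN sessions that originate from consumer broadband address blocks, the ranges noted above as being scanned in bulk. The log records and CIDR ranges are invented for illustration, not drawn from any real provider or product.

```python
# Hypothetical sketch: flag VPN sessions originating from consumer broadband
# address space, the blocks attackers scan when "island hopping." The address
# ranges and log records below are invented for illustration.
import ipaddress

# Example-only ranges standing in for cable/DSL/dial-up provider blocks.
CONSUMER_BLOCKS = [
    ipaddress.ip_network("198.51.100.0/24"),
    ipaddress.ip_network("203.0.113.0/24"),
]

# Invented VPN log: (username, source address of the VPN client).
VPN_SESSIONS = [
    ("alice", "192.0.2.10"),      # office-to-office link
    ("bob", "198.51.100.42"),     # home cable modem
    ("carol", "203.0.113.7"),     # home DSL line
]

def from_consumer_block(addr: str) -> bool:
    """Return True if the client address falls inside a consumer ISP block."""
    ip = ipaddress.ip_address(addr)
    return any(ip in block for block in CONSUMER_BLOCKS)

if __name__ == "__main__":
    for user, addr in VPN_SESSIONS:
        if from_consumer_block(addr):
            # These endpoints deserve the same hardening as the perimeter itself.
            print(f"{user}: VPN session from consumer address {addr}")
```

A check like this does not fix the problem, of course; it only makes visible how much of the "secure" network is reachable from home machines.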

Wireless Technology

In the past, attackers monitoring President Clinton's whereabouts by intercepting Secret Service pages demonstrated the lack of security in deployed wireless technologies. Attempts to introduce security to these existing technologies have been mediocre at best. Unfortunately, the lessons learned have not been applied to the new wireless technologies. Technology is adopted at a pace that exceeds the time needed to responsibly vet it.

A case in point is wireless networking. Within the last year there has been tremendous growth in installations of a wireless networking standard popular with corporate and small office users called 802.11b. It costs only about $100-200 per computer to install and allows the computer to use the corporate network, and usually the Internet, at high speed without being wired. The problem is that installing this technology without planning to deploy it securely for your environment opens a corporate network to easy attack. This attack can be launched by outsiders in the parking lot armed with little more than a laptop. Informal surveys of major cities, taken by individuals conducting an activity known as "war driving," have shown that over 60% of the networks discovered do not employ even minimum security precautions. Even when the security settings in wireless networks are enabled, an attacker can bypass the security because of flaws in the network and security standards themselves.

This wireless technology is so convenient that even defense contractors, who should be acutely aware of the need for security, have found their employees installing wireless equipment and putting their networks at risk. In June of this year, MITRE, a federally funded research and development center that performs work for the Defense Department, found that anyone could access their internal network from their parking lot. The vulnerability was due to the ad hoc wireless networks many employees had installed without considering the risks they posed to their organization. Today's Internet does not require a central authority to oversee additional equipment or applications being added to a network, and this has an adverse effect: unless rigorous policies are in place and enforced by regular audits, vulnerabilities will be created as new technology is added without investigating its impact on the organization. MITRE now has a policy forbidding wireless networks to be deployed without the permission of their information technology group.
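The war-driving figure above lends itself to a simple audit. The sketch below uses an invented scan-result format rather than output from any real tool, and simply flags discovered access points running without even minimal link-layer encryption; as noted, even enabled encryption can be bypassed, so this is a floor, not a fix.

```python
# Hypothetical sketch: review a list of discovered wireless access points and
# flag those with no link-layer encryption at all. The scan results below are
# invented; a real survey would come from a wireless scanning tool.

ACCESS_POINTS = [
    {"ssid": "corp-finance", "encryption": None},    # wide open
    {"ssid": "warehouse-ap", "encryption": "WEP"},   # weak, but at least enabled
    {"ssid": "lab-net", "encryption": None},
]

def unencrypted(aps: list) -> list:
    """Return access points broadcasting with no encryption enabled."""
    return [ap for ap in aps if not ap["encryption"]]

if __name__ == "__main__":
    exposed = unencrypted(ACCESS_POINTS)
    print(f"{len(exposed)} of {len(ACCESS_POINTS)} discovered access points use no encryption")
    for ap in exposed:
        print(f"  open network: {ap['ssid']}")
```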
Multi-disciplinary Devices

When Secretary of State Colin Powell announced that, over security concerns, he would no longer be using his Palm Pilot, a popular Personal Digital Assistant (PDA), it surprised many people. Not much later, Hanssen, the FBI agent found guilty of selling secrets to Russia, was arrested and found to have been outfitted with a customized PDA to help him in his nefarious tasks. Do these events signify an inherent problem with PDAs? No. Most PDAs are great for what they were originally designed for: storing notes, phone numbers and recipes, and acting as a handy calculator, all matters of personal convenience. The problem arises when boundaries between different disciplines become blurred or erased. In this case the two disciplines being crossed are personal life and professional life. One security paradigm seldom encompasses both worlds without impacting one or the other. How many people use the same password on personal devices as they do on critical systems?

It is unreasonable to believe that the same amount of effort goes into securing devices found in personal use as into those deployed within critical infrastructure. Yet these devices are freely used between and betwixt both arenas. While the crossing of a social boundary is quite apparent to most people, the crossing of security boundaries is much less apparent while potentially much more disastrous.

Application Security

An Achilles heel of critical infrastructure is vulnerabilities within applications. Firewall technology has done a good job of thwarting many network-style attacks and blocking access to computers that have not been configured properly for security. However, applications such as web servers, email programs, and word processors handle the data and communicate with other programs over the network to do their jobs. This communication cannot be blocked by a firewall or the program ceases to function. These communications give access to the critical data. Attackers employ primitive, yet effective, tactics to reverse engineer popular programs and discover new vulnerabilities. They can then compromise the security of a computer by sending specially crafted messages or commands to the newly vulnerable application. This is frequently the modus operandi of worms and of those who seek to control and harness armies of computers through automated attacks. There is no simple solution for this problem, such as installing a firewall or antivirus software. Each application must undergo rigorous testing to find its latent vulnerabilities, which are typically the result of design or implementation errors.

The bad guys already have the proprietary source code to most operating systems and applications. This includes the operating systems that run on routers, the backbone of the Internet. This gives them a huge advantage in discovering latent vulnerabilities. Source code is the target of many computer intrusions. When Microsoft's corporate network was pierced in October 2000, it was source code for upcoming products that was stolen. Kevin Mitnick bragged that he broke into Motorola to steal the source code for their products. The stockpiling and trading of source code over the Internet is a daily activity in the computer underground. The source code for proprietary operating systems and products such as Windows NT/XP/2000, Solaris, HP/UX, Cisco IOS, Cisco PIX, and Firewall-1 is swapped like baseball cards between attackers. This is why it is so important that third-party security audits of software not be hampered by anti-reverse-engineering restrictions. Again, the reality is that the bad guys already have the source code.

Our organization was forced to create a way to derive the equivalent of source code from the binary applications run on end systems. Our tools represent today's thought leadership in the area of application security analysis. Attempts are being made to restrict access to this type of technology, but the fact is attackers are actively pursuing equivalent data. It is our belief that in only a few years' time it will similarly be possible for the rest of the world to have total visibility into the applications that support our nation's critical infrastructure.
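The "specially crafted messages" attack path described above is commonly probed with simple fuzz testing. The following is a minimal, hypothetical sketch rather than any production tool: it sends oversized inputs to a local test service you control and reports when the service stops responding. The host and port are placeholders.

```python
# Hypothetical sketch of naive fuzz testing: send oversized inputs to a
# network service *that you own and are authorized to test* and note when it
# stops responding. The host and port below are placeholders.
import socket

TARGET = ("127.0.0.1", 8080)   # a local test service under your control

def probe(payload: bytes, timeout: float = 2.0) -> bool:
    """Return True if the service answers the payload, False otherwise."""
    try:
        with socket.create_connection(TARGET, timeout=timeout) as sock:
            sock.sendall(payload)
            return bool(sock.recv(1))
    except OSError:
        return False

if __name__ == "__main__":
    for size in (16, 256, 4096, 65536):
        if not probe(b"A" * size + b"\r\n"):
            print(f"service stopped responding after a {size}-byte input")
            break
    else:
        print("service survived all test inputs")
```

Real application testing goes far deeper (malformed protocol fields, boundary values, state errors), but even a sketch like this illustrates why the traffic a firewall must let through is exactly where latent vulnerabilities get found.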


There are significant new and emerging cyber threats to the critical infrastructure of the United States. Perhaps the most disturbing of these new threats are those that lie dormant, awaiting instruction from unknown persons. While it is beyond the scope of this testimony to imply motive on the part of these persons, it is reasonable to assume that substantial damage could result from inappropriate use of the hijacked infrastructure.

The software industry has not taken appropriate measures to ensure the security of commercial code. The problems are further compounded by inefficient implementation and a lack of security education. In an ideal world, software would be analyzed and secured against emerging threat models prior to release to the market. Today's reality, however, is rooted in reactive tactics aimed at mitigating financial risk as opposed to physical attack. It is also disturbing to observe that a false sense of security is being propagated in the search for a "silver bullet." Strong tools such as anti-virus software, firewalls and VPNs do not, in themselves, solve the security issues. These tools provide limited assistance in securing against core software or hardware vulnerabilities. Education, coupled with persistent analysis of emerging threat models and the corresponding solutions, is the only answer.


About Chris Wysopal

Chris Wysopal, co-founder and CTO of Veracode, is recognized as an expert and a well-known speaker in the information security field. He has given keynotes at computer security events and has testified on Capitol Hill on the subjects of government computer security and how vulnerabilities are discovered in software. His opinions on Internet security are highly sought after and most major print and media outlets have featured stories on Mr. Wysopal and his work. At Veracode, Mr. Wysopal is responsible for the security analysis capabilities of Veracode technology.
