I recently gave a webinar wherein I discussed the concept of Responsive AppSec: the idea that development teams can improve the security quality of their software through a developer-focused Application Security (AppSec) program built to respond quickly to the needs of the team, its customers, and other stakeholders. It does this through collaboration, ownership, and aggressive use of automation.
That high-level description is all well and good, you might be thinking, but how exactly do we do that? This is the first in a series of articles meant to address the building and maintenance of a Responsive AppSec program.
Let's begin with the three pillars of a successful program:
Measurement – consistently evaluating the results your team produces
Iteration – deliberately planning to improve measured results through repeated cycles of measurement and response
Education – improving the skills and knowledge of your team in a structured way
In this article, we'll cover these pillars at the conceptual level. Future articles will dive deeper into implementation considerations.
I once spent time working with a software team whose security quality was measured in flaws per KLOC (thousand lines of code). That is, the number of security flaws discovered during testing was divided by the number of thousands of lines of source code in the project. The team got very close to their objective through genuine improvements in security quality, but then their deadline loomed and they reasoned that it was much easier to add a lot of code (in this case, by compiling in the source of one of their high-security-quality libraries instead of referencing the compiled library) than to fix more flaws.
The security quality wasn't actually improved, but the team met their measurement objective. This example highlights the two major flaws with the way we've traditionally measured software security:
Software quality is notoriously hard to measure well, and security quality is no different.
Setting specific measurement targets can backfire extraordinarily easily.
Both these problems occur because when faced with the difficulty of measuring security quality, we typically end up measuring easily-gamed proxies instead. And since your development team was hired in large part because they are clever, we end up getting little more than a good-looking number to show management.
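The KLOC story above can be reduced to a few lines of arithmetic. This is a hypothetical illustration; the function name and the numbers are invented for the sketch, but the math is exactly the proxy metric described:

```python
def flaws_per_kloc(flaw_count: int, lines_of_code: int) -> float:
    """Security flaws found during testing, per thousand lines of code."""
    return flaw_count / (lines_of_code / 1000)

# Before the deadline: 30 flaws remaining in 100,000 lines of code.
baseline = flaws_per_kloc(30, 100_000)   # 0.3 flaws/KLOC

# "Improvement" by vendoring 50,000 lines of an already-clean library,
# without fixing a single flaw:
gamed = flaws_per_kloc(30, 150_000)      # 0.2 flaws/KLOC

# The score improved by a third; the security quality did not change.
assert gamed < baseline
```

The denominator is entirely under the team's control, which is what makes this proxy so easy to game.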
Effective measurement in a Responsive AppSec program must be team-facing. That is, the measurement system is designed to be valuable to the development team foremost. It doesn't hurt to find ways to effectively communicate the meaning of those measurements to managers and others outside the team, but this should be a secondary consideration.
In general, this sort of measurement approach should produce measures that are:
Consistent – the measurement produces the same results when measuring the same code (in other words, it's repeatable)
Granular – the output reflects even very small changes in quality
Comparable – your team can compare its performance with its past performance and with other teams using the same system
Abstracted – the measurement "score" is meaningful only when compared to other software or teams; it's basically meaningless on its own
Reflective – the system looks back only at what was done; it does not set measurement goals
Measurement systems with these five properties discourage "gaming the system" and instead encourage teams to make specific changes and see what happens. In other words, the measure doesn't tell you if your software is good or bad, only better or worse than it was before.
This is absolutely key to being responsive as a team: if you make changes to approach, design, and so on that reduce your security, you need to know quickly so you can react—not be given incentives to hide problems. This ability is key to effectively implementing the next pillar, Iteration.
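A reflective, comparable measure can be sketched as a trend report over iterations rather than a judgment against a target. Everything here is invented for illustration (the iteration names, the scores, and the assumption that a lower score is better); the point is that the output only ever says "better or worse than last time":

```python
from dataclasses import dataclass

@dataclass
class IterationMeasure:
    iteration: str
    score: float  # abstracted score; meaningless except in comparison

def trend(history: list[IterationMeasure]) -> list[str]:
    """Report whether each iteration did better or worse than the last,
    assuming a lower score is better."""
    report = []
    for prev, curr in zip(history, history[1:]):
        direction = "improved" if curr.score < prev.score else "regressed"
        report.append(f"{prev.iteration} -> {curr.iteration}: {direction}")
    return report

history = [
    IterationMeasure("sprint-41", 12.0),
    IterationMeasure("sprint-42", 9.5),   # a change helped
    IterationMeasure("sprint-43", 11.0),  # a change hurt; react quickly
]
for line in trend(history):
    print(line)
```

Notice that no absolute threshold appears anywhere: there is nothing to game, only a signal to respond to.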
Imagine you're racing sailboats for a moment. Such a race has a well-defined start and end, a very clear goal (get there before everyone else does), and a very clear set of constraints (the rules of the race). And so a team makes a plan for how best to proceed. How well do you think that team will do if they just follow the plan exactly?
The reality of such a race is that it occurs in a constantly changing environment. Any plan that stands a chance at leading to victory has to allow the crew to adapt to changing conditions, or the plan will quickly get discarded. If the crew and its plan aren't responsive to the changing environment, not only do they put their victory at risk, but they could risk catastrophic damage to their craft and dramatically increase the threats to their personal safety.
An AppSec program has a lot in common with our sailing race. Someone outside the development team—a compliance officer, a CISO, a customer—sets an objective, and security and development teams have to make a plan for how to meet that objective. But no matter how wonderful that plan may be, it will fail if we don't plan to be responsive to a changing environment.
The reality of the development world is that conditions are constantly changing—the market changes, and software must adapt; the needs of the customer change, and software must adapt; security flaws that were of small concern suddenly become high-severity at the release of a new tool, and software must adapt. The rise of Agile and related software methodologies has been largely a movement to embrace this reality.
And so, when we design an AppSec program, we too must embrace reality. The core of program adaptability is iteration. We must build programs that align with the iterations we have already built into our development process.
The keys to effective iteration are:
Make small changes each iteration. We need to be able to know what changes had what effect, and too much change at once overwhelms that capability.
Measure and reflect. As we iterate, we need to use our measurement system to determine if the changes we make are having the desired effect. We can't adapt quickly if we don't have information.
Expect and embrace setbacks. Not every iteration will result in improvement to security quality. Sometimes, we will experiment and that experiment will fail—we learn from failure, and it's important for teams to have permission to fail. And sometimes we will deliberately "go backwards" in order to set ourselves up for future success.
Revisit the plan. Sometimes what we learn while iterating is that our plan is unworkable. It's important to be able to revisit the plan and make significant changes to it if needed.
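The four keys above can be sketched as a single loop: one small change per cycle, a measurement after each, setbacks treated as information, and a trigger for revisiting the plan. The functions, change names, and scores below are stand-ins invented for this sketch; a real program would plug in its own team-facing measure:

```python
def iterate(plan, measure, apply_change, setbacks_before_revisit=2):
    """Apply one small change per cycle; measure and reflect; revisit
    the plan if experiments keep failing (lower score assumed better)."""
    log = []
    previous = measure()
    setbacks = 0
    for change in plan:
        apply_change(change)               # one small change per iteration
        current = measure()                # measure and reflect
        if current > previous:
            setbacks += 1                  # a failed experiment: learn from it
            log.append(f"{change}: setback")
        else:
            setbacks = 0
            log.append(f"{change}: improved")
        if setbacks >= setbacks_before_revisit:
            log.append("revisit the plan") # the plan may be unworkable
            break
        previous = current
    return log

scores = iter([10.0, 9.0, 9.5, 10.5])      # toy measurement results
log = iterate(
    plan=["add input validation", "enable new linter rule", "refactor auth"],
    measure=lambda: next(scores),
    apply_change=lambda change: None,      # stand-in for the real work
)
```

Small changes keep each log entry attributable to one decision; the setback counter gives the team explicit permission to fail a couple of times before concluding the plan itself is the problem.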
Of course, just as our sailing crew needs to have some knowledge about wind and water to adapt effectively, our team needs to have some knowledge about security in order to conduct an AppSec program in a responsive manner.
Let's think again about our sailing crew. Each member of the crew knows their job deeply and spends most of their working time applying and practicing those core skills. The team consults with oceanographers, physicists, meteorologists, engineers, and so on to make sure their craft and their plan are optimal. Most likely, none of the crew are experts in any of those fields, and yet it's important for them to have a basic understanding of each: in part so they can adapt to changing conditions, but also so they know when they need expert help.
Again, there are clear parallels to our development team. Each member of our team knows their skill deeply and spends most of their time applying and honing those core skills. Our team consults with architects, user-experience experts, performance engineers, security experts, and so on to make sure their software and architecture are optimal. Most likely, none of the team are experts in any of those fields, and yet it's important for them to have a basic understanding of each: in part so they can adapt to changing conditions, but also so they know when they need expert help.
With most aspects of quality—things like performance, scalability, accessibility, and the like—we tend to acknowledge the importance of educating our development team on the basics. I know very few developers who don't have a working understanding of common performance challenges on their platform of choice, for example.
And yet we tend to treat security as somehow different. Organizations (and individual development team members) either try to outsource security to someone outside the development team—in the worst case, to a suite of vendor products—or imagine that they have to turn the development team into security experts.
Effectively educating our team about security is essential if we're going to own the security program. And as with any area of expertise, a mix of strategies is important:
Formal education. Spending time in a classroom (physical or virtual) led by a capable instructor remains one of the most effective ways for most people to gain a solid footing in a new subject. If team members have very little practical security knowledge, don't underestimate this sort of investment.
Mentorship. Identify people who can serve as security champions on the development team. Help them find security experts to receive mentorship from, and give them the time and support to mentor others on the development team.
Playing games. Games are one of the most effective ways to improve understanding. Security games can be anything from simple card games like Microsoft's Elevation of Privilege to a Capture the Flag exercise. Such games not only serve as great and low-stress ways to synthesize security knowledge, but also as a reminder that security is complex.
If we want to be able to effectively own our software security, we have to understand what it is we're owning. And education is absolutely the key to accomplishing that.
All three of these pillars—Measurement, Iteration, and Education—support each other and help create a foundation for a Responsive AppSec program.
Future articles will discuss each in greater depth—including anticipating and surmounting common implementation challenges—as well as building and maturing a Responsive AppSec program on top of them.