There are a lot of great perks that come with being a developer.  I enjoy the challenge of developing solutions to real-world problems with peers in UX, PM, QA, Ops, and beyond.  I love the creative process and the energy a team has when we're all firing in the same direction at the same time.  I love building things and making the team hum.  I love the sense of accomplishment we share as a team when we ship something that works, and the knowledge that humanity gets a little more efficient every time we deliver.  The satisfaction of seeing a project through to completion brings me a lot of personal joy.  All things considered, I'd say I'm satisfied with my career choice.


As a professional developer, I have to be very concerned with schedules.  Specifically, my job performance is measured on delivering features and functionality on a (relatively) predictable date.  This is one of the downsides of development: it brings a bit of stress and anxiety to my otherwise awesome day.  There is always that schedule pressure hanging around.  I'd like to forget about that part, but I understand the realities of business and why we can't completely abandon schedules.  I understand we have to ship things that might not be perfect in order to meet a deadline, save a deal, demonstrate more value than a competitor, or whatever.

In addition to showing up on time with a functional solution, I'm also required to make sure it's fault tolerant, scalable, maintainable, testable, and secure.  I have an entire toolbox of technologies and concepts I've accumulated over the years that help me with the first four items on that list.  But when it comes to the fifth item (the "secure" part), I don't really have all that much to help.  My experience with this idea of AppSec has been all over the map.  I've had "Security Teams" tell me that I should involve them at the onset of every project so they can perform a risk assessment of the attack vectors (or something like that).  Early in my career, they would get a deer-in-headlights stare back from me, and I'd say, "What are you talking about?"  They'd go into some lengthy explanation about existential threats from criminals and hackers, rant on about how the world is going to end if somebody can inject malicious JavaScript into a form field of my HTTP POST or something, and explain how they'd have to "run a scan" with "a tool."  Again, deer in headlights.

So I took that information and, being a scientist, started to form a series of hypotheses.  I started inviting them to project inception meetings: the kind of meeting where you're trying to identify a problem that needs to be solved and determine whether it's worth pursuing a technology solution to meet the need.  During these sessions, it was the "Security Team" that had the deer-in-headlights stare.  They'd tell me, "I don't know where to start my security analysis if there isn't something tangible I can poke at and examine."  I'd ask, "Too early in the process?" and they'd say, "Yup, come back to me when you have something I can put my hands on."  I'd say, "You mean when we're ready to ship?" and the answer was "No, sooner than that."  I'd ask, "Could you be more specific?" and the answer was usually "No."  Awesome, right?  Very helpful.

I've come a long way since then, and so has the AppSec industry, but the nature of the problem remains.  It's a chicken-and-egg problem.  Developer: "Is my application going to be secure, and can you tell me where the risk to my schedule is?"  Security Team: "No, but when we see something wrong with what you've already done, we'll be happy to tell you."  Developer: "Can you quantify the risk to my schedule and help me mitigate it, or tell me ahead of time what not to do?"  Security Team: "No, just write secure code."  Developer: "What the heck are you talking about?  I'm leaving to go learn Swift.  Let me know when you've figured out what I should be doing.  I've got a release date to hit."  Sound familiar?

So, security team, how can you change the nature of this problem and stop letting me down?  Give me something that allows me to spread the project risk across the entire release cycle (however long or short that is).  That's my risk mitigation strategy of choice when I can't eliminate the risk outright.  Tell me as early as you possibly can when something is wrong.  Speak my language and integrate your security program into my existing workflow.  If I'm going to take on the responsibility of making my application "secure" to help you, you're going to have to bring something to the table that helps me.  Don't be a drag on my schedule by backloading all the risk until the end.  Raise problems early and help me do my job, and I'll help you do yours.

About Jeff Cratty

As Veracode's Director of Engineering, Jeff is an experienced software guy pursuing simple solutions to complex problems. He builds Agile development teams that support each other to deliver value to the business with high velocity and high quality. His passions are mission-impossible projects, hard engineering problems, and team empowerment.

Comments (5)

willc | February 25, 2016 9:21 am

Application Security person here. The way I see it, the incongruity between devs and AppSec that you describe has become even more of a problem since dev teams everywhere started adopting Agile methodologies.

Security doesn't fit into Agile. Or, at least, it wasn't ready for it.

I face this problem daily where I work, and it's been a struggle to find ways to let developers produce code that Security is OK with, and to build ways to work together, but it can be done.

Ways we've worked to achieve this:
1. Security must provide training to developers so that they know what Security likes to see. Input validation, sanitization, all-around secure coding practices, etc. Understanding secure coding helps thwart potential security blocks later.

2. Security should be rolled into the SDLC and planned for, but it shouldn't let work stop. Send Security a tag of your latest version and let them chew on it while you go on working on another feature.

3. Security should still be there while you identify and plan how to solve your problems. A Security voice should be consulted, and you can help alleviate their deer-in-the-headlights look by trusting them for their expertise in making sure you aren't accidentally designing a back door into your network.

4. Lastly, Security needs to become more agile. Being able to scan code and analyze apps more expeditiously as part of developer sprints helps. Then, separately, Security can plan assessments of any other potential areas of concern for further analysis.
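The secure-coding practices mentioned in point 1 can be sketched in a few lines. This is a minimal Python illustration (the function names and the allow-list pattern are my own, not from the comment): validate input against an allow-list on the way in, and escape on the way out so user text can never become markup.

```python
import html
import re

# Allow-list pattern: accept only what you expect, reject everything else.
USERNAME_RE = re.compile(r"^[A-Za-z0-9_]{3,20}$")

def validate_username(raw: str) -> str:
    """Input validation: refuse anything outside the allow-list
    instead of trying to clean it up."""
    if not USERNAME_RE.fullmatch(raw):
        raise ValueError("invalid username")
    return raw

def render_comment(raw: str) -> str:
    """Output sanitization: escape user text before embedding it in HTML,
    so injected markup is rendered inert (mitigates stored XSS)."""
    return html.escape(raw)

print(render_comment("<script>alert(1)</script>"))
# &lt;script&gt;alert(1)&lt;/script&gt;
```

The point of the sketch is the division of labor: validation rejects malformed input early, while escaping at the output boundary handles whatever validation legitimately lets through.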

It all boils down to communication and trust, really.

willc | February 25, 2016 9:21 am

Man, you guys should fix the removal of line breaks in comments.


ndupaul | February 25, 2016 9:50 am

@willc RE:comment styling - We are! Very soon, stay tuned.

Scott Arciszewski | March 11, 2016 12:17 pm

We're trying to solve this problem at the infrastructure level, by improving the security of the tools and frameworks developers use. By making secure-by-default cryptography libraries the norm rather than the exception. By improving the security offered by programming languages (mostly focused on PHP at the moment).

If we change the environment to where software is secure by default and insecurity is a design choice, the calculus changes significantly.
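As a small illustration of what "secure by default" means in practice (my example, not the commenter's): Python's `secrets` module gives the caller no seed, mode, or algorithm knobs to get wrong, unlike the general-purpose `random` module, which is predictable and unsafe for tokens.

```python
import secrets

# Secure-by-default API: a CSPRNG-backed token with nothing to misconfigure.
session_token = secrets.token_urlsafe(32)  # 32 random bytes, URL-safe base64

# Constant-time comparison, so token checks don't leak timing information.
def token_matches(supplied: str, expected: str) -> bool:
    return secrets.compare_digest(supplied, expected)
```

Here insecurity really is a design choice: you would have to go out of your way (e.g. reach for `random.random()`) to generate a weak token.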

Terrance A. Crow | March 12, 2016 10:09 am

I think you've hit on an important problem: communications!

I started as a developer, so as I added AppSec expertise, I could manage both vocabularies.

"The Security Team" in your example should have provided security requirements in the initial meetings. Depending on the environment, that could be requirements like PII has to be encrypted at rest; communications between servers should be encrypted; passwords should be hashed and salted (with examples, if needed).
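A requirement like "passwords should be hashed and salted" becomes far more actionable with a concrete example attached. Here is one minimal sketch using Python's standard library (the helper names and iteration count are illustrative choices, not prescribed by the comment): a random per-user salt, a slow key-derivation function rather than a bare hash, and a constant-time comparison on verification.

```python
import hashlib
import hmac
import os

ITERATIONS = 100_000  # illustrative; tune to your hardware and threat model

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Return (salt, key): a fresh random salt plus a PBKDF2-HMAC-SHA256
    derived key. Store both; never store the plaintext password."""
    salt = os.urandom(16)
    key = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, key

def verify_password(password: str, salt: bytes, expected_key: bytes) -> bool:
    """Re-derive the key with the stored salt and compare in constant time."""
    key = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(key, expected_key)

salt, key = hash_password("correct horse battery staple")
assert verify_password("correct horse battery staple", salt, key)
assert not verify_password("wrong guess", salt, key)
```

Handed a requirement in this form, a developer can apply it directly during design instead of discovering it in a scan report later.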

Then developers have something tangible to apply to their designs and code.

And when "The Security Team" conducts their scans, in addition to the generic things they scan for, they can verify that the requirements are met.

As security professionals, I think it's our responsibility to communicate not only what we're asking for, but the value that it brings to the process. Poor security = fewer customers.

Interesting read -- thanks for shining light on this issue!
