I’ve understood the benefit of code reviews, and enjoyed them, for almost as long as I’ve been developing software. It’s not just the excuse to attack others (although that can be fun), but the learning: looking at the solutions other people come up with, hearing suggestions on my own code. It’s easy to fall into patterns in coding and not realize the old patterns aren’t the best approach for the current programming language, or aren’t the most efficient approach for the current project. Dwelling on good code review comments can be a great learning experience. Overlooked patterns can be appreciated, structure and error handling can be improved, and teams can develop more consistency in coding style. Even vague review feedback like “I don’t get this” can identify flaws or highlight where code needs to be reworked for clarity and easier maintenance.

But code reviews are rare. Many developers don’t like them. Some management teams don’t see the value, while other managers claim code reviews are good but don’t make room in the schedule (or push them out of the schedule when a project starts to slip). I remember one meeting where the development manager said, “Remember what will be happening after code freeze.” He expected us to say “Code reviews!”, but a couple of members of the team responded, “I’m going to Disney World!” Everyone laughed, but the Disney trips were enjoyed while the code reviews never happened.

In many groups and projects, code reviews never happened except when I dragged people to my cubicle and forced them to look at small pieces of code. I developed a personal strategy that helped somewhat: when I’m about ready to commit a change set, I try to review the change as though it were being submitted to a code review, asking “What would someone complain about if they saw this change?” It takes discipline and doesn’t have most of the benefits of a real code review, but it has helped improve my code.

The development of interactive code review tools helped the situation.
Discussions on changes could be asynchronous instead of requiring everyone to find a common meeting time, and reviewers could see and riff on each other’s comments. It was still hard to encourage good comments and find the time for code reviews (even when “mandated”), but the situation was better.

The next advancement was integrating code review tools into the source control workflow, which required (or at least strongly encouraged, depending on configuration) an approved code review before a merge was allowed. The integration meant less effort was needed to set up the code reviews, and it provides another hammer to encourage people to review the code: “Please review my code so I can commit my change.”

The barriers to code reviews also exist for security reviews, but the problem can be worse because many developers aren’t trained to find security problems. Security issues are obviously in scope for code reviews, but security typically isn’t front of mind for reviewers. Even at CA Veracode, the focus is on making the code work and adjusting the user interface to be understandable for customers. But we do have access to CA Veracode’s security platform, so we added “run our software on itself” to our release process: start a scan, wait for the results, review the flaws found, and tell developers to fix the issues.

As with code reviews, security reviews are easy to put off because the process steps take time, so we have taken steps to integrate security review into the standard workflow. The first step was to automatically run a scan during automated builds. A source update to a release branch triggers a build, and an email is sent out if the build fails. If the build succeeds, the build script uses the CA Veracode APIs to start a static scan of the build. This eliminated the first few manual steps in the security scan process.
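As a rough illustration of that post-build step, the sketch below composes the two calls a build script would make: upload the build artifact, then kick off a pre-scan with automatic scan submission. The endpoint and parameter names follow the legacy CA Veracode XML upload API (`uploadfile.do` / `beginprescan.do`), but treat the exact names here as illustrative assumptions rather than an authoritative client; a real script would also supply credentials and POST the requests.

```python
# Sketch of the requests a post-build hook would send to start a static
# scan via the CA Veracode upload APIs. Endpoint and parameter names are
# illustrative (modeled on the legacy XML upload API); verify against the
# current API documentation before use.

API_BASE = "https://analysiscenter.veracode.com/api/5.0"

def scan_requests(app_id, artifact_path):
    """Return the (url, params) pairs a build script would POST, in
    order: upload the build artifact, then begin a pre-scan with
    auto_scan set so the full scan starts without intervention."""
    upload = (f"{API_BASE}/uploadfile.do",
              {"app_id": app_id, "file": artifact_path})
    prescan = (f"{API_BASE}/beginprescan.do",
               {"app_id": app_id, "auto_scan": "true"})
    return [upload, prescan]
```

In a CI job, this runs only after a successful build, so a failed build still just produces the failure email and no scan is started.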
(With the CA Veracode 2014.2 release, the CA Veracode upload APIs have an “auto start” feature that starts a scan without intervention after a successful pre-scan, making automatic submission of scans easier.)

To further reduce the overhead of the security scans, we improved the CA Veracode JIRA Import Plugin to match our development process. After a scan completes, the plugin notices the new results and imports the significant flaws into JIRA bug reports in the correct JIRA project. Flaws still need to be assigned to developers to fix, but that now happens in the standard triage process used for any reported problem. If a mitigation is approved for a flaw, or if a code change eliminates it, the plugin notices the change and marks the JIRA issue as resolved.

The automated security scans aren’t our entire process. We also hold security reviews for proposals and designs so developers understand the key security issues before they start to code, and our security experts are involved in every stage of development and always available for consultation. The main benefit of the automated scans is that they take care of the boring review, catching minor omissions and oversights in coding, and leave the security experts more time to work on higher-level security strategy instead of closing yet another potential XSS issue.
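The create-and-resolve behavior described above boils down to a set comparison between the latest scan results and the JIRA issues still open from earlier scans. This is a minimal sketch of that logic; the function name and data shapes are hypothetical and are not the actual CA Veracode JIRA Import Plugin interface.

```python
# Hedged sketch of the sync step: given the significant flaw ids in the
# newest scan and the JIRA issues still open for flaws from earlier
# scans, decide which flaws need new issues and which open issues can be
# resolved (their flaw no longer appears, i.e. fixed or mitigated).
# Names and structures here are illustrative, not the plugin's real API.

def sync_flaws(scan_flaw_ids, open_issues):
    """scan_flaw_ids: iterable of flaw ids found in the latest scan.
    open_issues: dict mapping flaw id -> JIRA issue key for flaws
    imported from earlier scans whose issues are still open.
    Returns (to_create, to_resolve)."""
    current = set(scan_flaw_ids)
    # Flaws with no existing issue become new JIRA bug reports.
    to_create = sorted(current - set(open_issues))
    # Issues whose flaw vanished from the results get marked resolved.
    to_resolve = sorted(key for fid, key in open_issues.items()
                        if fid not in current)
    return to_create, to_resolve
```

Running the sync after every scan keeps JIRA as the single queue for triage, so security flaws flow through the same assignment process as any other bug.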
CA Veracode's software engineers understand the challenge of building security into the Agile SDLC; we live and breathe that challenge. We use our own application security technology to scale our security processes so our developers can go further, faster. On April 17th, our director of platform engineering, Peter Chestna, will share, in a free webinar, how we’ve leveraged our cloud-based platform to integrate application security testing with our Agile development toolchain, and why it’s become essential to our success. Register for Peter’s webinar, “Secure Agile Through An Automated Toolchain: How CA Veracode R&D Does It,” to learn from our experience.