Earlier this week, I attended the first PCI Community Meeting in Toronto, a gathering organized by the PCI Security Standards Council to bring QSAs, ASVs, and other PCI stakeholders together in one room with the PCI Council. Let’s be honest here — in the security industry, discussing regulatory compliance is about as dull as it gets. On the other hand, compliance is also a major catalyst, sometimes the only catalyst, in convincing organizations to improve their security posture, so it’s important to understand. As might be expected, I focused my attention on the sessions dealing with upcoming application security requirements and standards. Of particular interest were the two panel discussions covering the Payment Application Data Security Standard (PA-DSS) and Requirement 6.6 of the PCI Data Security Standard (PCI-DSS).
The PA-DSS is basically an extension of Visa’s Payment Application Best Practices (PABP) guidelines. Essentially, it’s a standard that will be applied to any application that stores, processes, or transmits cardholder data in a merchant or service provider environment. This is not to be confused with the PCI-DSS, which applies to entities/companies as opposed to applications. The PCI Council will release a draft of the PA-DSS in October for comments and suggestions, leading to a final version by the end of Q4. They are currently working on the certification process for existing QPASPs (Visa’s certification for PA assessors) to become PA-QSAs by Q2 2008.
The panel outlined the changes to PABP as follows:
- Explanation/Clarification. Just providing additional text around the best practices, presumably to make them easier to understand.
- Enhancements. Things necessary to turn a Best Practices document into a standard. The panel did not go into exhaustive detail on the enhancements, but the examples given were requiring assessors to use forensic tools to examine the output of the PA, introducing stronger lab requirements, and requiring a penetration test of the PA to identify common (e.g. OWASP-style) vulnerabilities.
Unfortunately, this was the extent of the detail. I wanted more information on the enhancements but didn’t get a chance to ask a question due to time constraints. Specifically, it would be nice to understand whether code analysis would be an alternate option for the penetration test requirement. Or what exactly they meant by using forensic tools to examine the application’s output.
Other relevant questions concerned versioning and recertification. For example, if a PA vendor releases a minor update, what is required to recertify the application? Would they be required to conduct another pen test, even if all they did was add a minor feature, such as the ability to recognize a new type of gift card? The panel’s response was that if a recompile was required, then recertification would be necessary; however, that recertification effort could be targeted at the new functionality. The other good question was under what lab conditions the application would be tested: the vendor’s or the customer’s. If you consider applications such as SAP and PeopleSoft, which tend to be highly customized for each customer install, does it make sense to test only in a vendor lab, with no context for how the application will be deployed and configured in a production environment?
Requirement 6.6 of the PCI-DSS becomes mandatory in June 2008 and requires all web-facing applications either to undergo a code review OR to be protected by a web application firewall (WAF). There were questions about what “web-facing” means, and the Council admitted that the wording needs revision. It could be interpreted as any application using HTTP as a transport mechanism, or only those applications that are Internet-facing. It sounded as if they meant the latter, but they did not say so outright, which was confusing. Web services applications would also be in scope. Many people were confused about the requirements for a WAF, specifically because there are no guidelines on what functionality the WAF must have, how it must be configured, and so on. Joe Lindstrom, a panelist who runs Symantec’s compliance practice, outlined these features to look for when selecting a WAF:
- Layer of defense for known attacks (e.g. OWASP Top 10)
- LDAP and/or Active Directory integration
- Logging and monitoring capabilities
- Minimal performance impact
From a security perspective, I am not sure why performance impact was on the list. As long as the security requirements are met, it should be up to you to decide how much of a performance hit your application can handle. Another panelist, Dave Wichers from Aspect Security, suggested that a WAF should also do egress filtering to detect sensitive information such as track data leaving the network.
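To make the egress filtering idea concrete, here is a minimal sketch of the kind of pattern matching such a check might perform on outbound traffic. It assumes ISO/IEC 7813 Track 2 formatting; the regex, the function names, and the Luhn validation are illustrative only, not any vendor’s actual DLP rule set.

```python
import re

# Simplified Track 2 pattern per ISO/IEC 7813: start sentinel ';',
# a 13-19 digit PAN, '=' separator, YYMM expiry and the rest of the
# discretionary data, then the end sentinel '?'.
TRACK2_RE = re.compile(r";(\d{13,19})=(\d{4})\d*\?")

def luhn_valid(pan: str) -> bool:
    """Luhn check to cut down on false positives from random digit runs."""
    checksum = 0
    for i, ch in enumerate(reversed(pan)):
        d = int(ch)
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        checksum += d
    return checksum % 10 == 0

def find_track_data(payload: str) -> list[str]:
    """Return the PAN of every Track-2-like sequence that passes Luhn."""
    return [m.group(1) for m in TRACK2_RE.finditer(payload)
            if luhn_valid(m.group(1))]

# 4111111111111111 is a well-known Luhn-valid test PAN.
hits = find_track_data("GET /cart ;4111111111111111=25121010000000000000?")
```

A real filter would, of course, inspect traffic inline rather than a string, and would cover Track 1 and bare PANs as well, but the false-positive trade-off is already visible here: without the Luhn check, any long digit run in that format would trigger an alert.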
WAFs can be configured in so many different ways, however, that people were really looking for detailed guidelines on what would satisfy the requirement. First, WAFs can be run in “detect” or “prevent” mode. The former simply logs anomalies, while the latter actively blocks traffic once it detects those anomalies. Obviously, this is a big difference from an operational risk perspective, and most organizations use “detect” mode for that reason. Secondly, the effectiveness of a WAF is limited if it is not tuned specifically for the web application that it is protecting. In order for it to know what constitutes malicious traffic, it needs to understand what normal application traffic looks like. It’s kind of like tuning an IDS, except at the application layer. Of course, this means that a certain amount of retraining is required whenever the application is updated or changed, so again, WAFs are commonly deployed without proper tuning in order to avoid unforeseen disruptions.
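The operational difference between the two modes can be shown with a toy illustration (this is not any real WAF’s API; the rule, function name, and mode strings are all made up for the example): the same rule match either logs the request or drops it, depending on the mode.

```python
import logging
import re

logging.basicConfig(level=logging.WARNING)

# A single toy signature, roughly in the spirit of an OWASP Top 10
# SQL injection check; real WAF rule sets are far more extensive.
SQLI_RE = re.compile(r"('|--|;)\s*(or|union|drop)\b", re.IGNORECASE)

def inspect(request_body: str, mode: str = "detect") -> bool:
    """Return True if the request should be forwarded to the application.

    mode="detect":  log the anomaly but let the traffic through.
    mode="prevent": log the anomaly and block the request.
    """
    if SQLI_RE.search(request_body):
        logging.warning("Possible SQLi in request: %r", request_body)
        if mode == "prevent":
            return False  # drop the request
    return True  # forward to the application

inspect("name=' OR 1=1 --", mode="detect")   # logged, still forwarded
inspect("name=' OR 1=1 --", mode="prevent")  # logged and blocked
```

The one-line difference in behavior is exactly why the operational risk profiles diverge: in “prevent” mode, a false positive on that signature means a legitimate customer request is silently dropped.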
One panelist, Kennet Westby from Coalfire Systems, pointed out that Requirement 6.6 should not be an either-or option, since code review is an integral part of the SDLC. This time, I made it to the microphone before time expired and asked whether the Council had considered making code review the sole requirement while allowing a WAF to be used as a compensating control. In my view, this seemed more closely aligned with the intent of PCI as a security standard. Incorporating code review into the development process is the best practice, whereas a WAF is essentially a band-aid, not a replacement for code review. Granted, an SDLC cannot be put in place overnight, and a WAF can be effective as a quick fix (something is better than nothing), but it certainly shouldn’t be an equivalent way to satisfy the requirement. Dave Wichers responded to my question, agreeing that code review was the “right way,” but added that code review is not a viable option for many companies, so they had to offer the WAF as an alternative. Time ran out, so I’m still not clear exactly what he meant by that, and I need to follow up for clarity on his comment. Still, I saw some heads nodding and received positive feedback from a couple of people in the audience, so I guess I’m not the only one thinking along these lines.
The panel also brought up the question of whether automated code analysis solutions could be used to satisfy the code review requirement. The general consensus on stage was that you still needed a security professional in the loop due to the false positive (FP) and false negative (FN) problem. This isn’t to say that companies shouldn’t use these tools to help them achieve compliance, but that the tools alone would not satisfy the requirement. While the panel did not recommend any specific products, they suggested that organizations should focus on three main factors when selecting code analysis vendors:
- Maturity of the product
- Ease of integration into the SDLC
- FP and FN rates as an accuracy benchmark
The idea of benchmarking FP/FN rates reflects a growing maturity in this industry. It would be an eye-opening exercise to apply the same standards to human code reviewers and human penetration testers as well. In fact, I think people would be shocked to realize the FN levels they are accepting today.
Staying on the topic of benchmarking for a moment, the WAF discussion is interesting. What percentage of attacks do people think these products are actually stopping? Is it 20% or 80%? I haven’t ever seen any solid data here, though anecdotally, the reviews certainly have not been glowing. This is as big an unknown as the FP/FN rates of code analysis tools. If automated code analysis is going to be held up to benchmark scrutiny, then WAFs and manual assessments need to be as well.
Overall, I found that the sessions were probably a little bit on the short side. At each session I attended, there was never enough time at the end to address audience questions. Many of the audience members were from companies working toward PCI compliance, so the questions tended to revolve around interpretation of the PCI language and intent (Is such-and-such considered a compensating control? What do you mean by connected entities?), leaving less time for discussion of the technical merits of the security requirements.