It's hardly a revelation that hardcore security veterans are not paragons of clear communication. In general, the more technical the talent, the weaker the communication. For most in IT, and almost everyone in the corporate world outside of IT, this is dismissed as a fact of life.
But I've recently started to wonder whether this is a bigger problem, one that can undermine core security objectives. Consider this story from Engadget, about Dropbox seeking greater access to improve security: "The app only asks for the permissions it needs. It uses the Mac's accessibility kit for certain tie-ins (such as in Office), and demands elevated access to your OS when standard programming interfaces fall short. The permissions aren't as 'granular' as Dropbox would like, the developer adds. He stresses that Dropbox can't see your system's administrator password, and a privilege check on startup is only to make sure the software works consistently, especially across OS versions."
The particulars are not that important. The upshot is that Security needed more access to strengthen operations and end-users resisted. Seen from that perspective, this is a common and critical problem. In your company, how often have Security people demanded a change in behavior without telling you why?
Most of the Security communications I have seen fall into two categories: edicts, or messages so technically dense that they communicate nothing. The edicts are easy to recognize; they are variations of "Do it because we told you to" or "This is the new policy and employees have agreed to abide by all official policies. So get to it!"
The technical ones are more complicated. They stem from a writer honestly attempting to explain the reason for the change to fellow employees or customers, which is all we're asking. But the message is written in a way that is meaningful only to those who already understand the reasoning, as though it were penned by cryptographers for other cryptographers.
These messages must do two things. First, they have to be phrased to be meaningful and persuasive to the audience. Second, to be persuasive, they have to stress exactly how the change in policy, directly or indirectly, will help the reader.
A bank insisting that new accounts be authorized by a retina scan will sell more accounts if it details how many criminals want to impersonate bank customers and steal all of their money. Sort of a "Friends, Romans, Countrymen, lend me your retinas" speech.
When I have tried training technologists, especially security technologists, to communicate more effectively, I usually suggest they envision the least technically comfortable colleague they know and like. Maybe it's someone in accounts payable, perhaps marketing, maybe facilities. How would they explain the change to that person? It has worked more often than many would imagine.
Security professionals know better than anyone that the biggest improvement in a security posture comes from changing the behavior of employees. There's a reason social engineering will undermine the protections of a dozen biometric authentication schemes and VPN tunnels.
Helping customers and fellow employees understand security makes them adhere to it much more effectively. It also helps them decide when exceptions are appropriate. That, by the way, is really what social engineering con artists do: they try to convince people that what they are asking is actually in the spirit of the rule. Put more bluntly, social engineering only works because Security people are bad at explaining things.
For information on how you can work with groups within your organization to improve AppSec, read: Joining Forces: Why Your Application Security Initiative Needs Stakeholder Buy-In