When it comes to cybersecurity, it takes three to tango—vendors, users and researchers.
In an age of extreme digital vulnerability, the three major stakeholders in our information society have their own security priorities. Vendors want to implement security that maximizes usability while minimizing cost. Researchers want access to raw code to probe for security flaws and ensure best practices. And users want to know that their information is safe without having to sift through technical jargon.
But the ever-increasing complexity of IT security makes it a struggle to satisfy all three parties’ priorities—especially if the lines of communication aren’t open.
Security experts Jedidiah Crandall (associate chair of computer science at the University of New Mexico) and Roger Dingledine (co-founder of the Tor Project, a service that enables anonymous internet browsing) face these communication gaps daily. As researchers and educators, both have suggestions for how to open the communication flow to improve security for all.
An Open-Door Policy for the Academic Community
According to Crandall and Dingledine, in order to create an effective security communications strategy, vendors need to take the first step of allowing researchers—academic and independent—to inspect their software for weaknesses.
In 2017, a University of New Mexico research team (with help from The Citizen Lab) found this out the hard way while inspecting security features of LINE, a popular messaging app that boasts 200 million users across Asia. Without access to LINE’s code, researchers went through the painstaking process of reverse engineering the app. Once they’d done so, the researchers were able to find, exploit and disclose flaws in LINE’s encryption features to the app’s developers.
In their analysis published in August 2017 (co-authored by Crandall), the group concluded that vendors and researchers need to work together to improve app security and define best practices for communicating security issues to end users. The trick, they noted, is to frame the discussion in “terms that users really care about.”
Paying Bounty Hunters to Find Flaws
Professional and amateur researchers can also be incentivized to identify security gaps. Dingledine suggests that companies host bug bounty programs that award cash prizes to independent researchers who expose security flaws.
“Even huge companies like Facebook and Apple have bug bounty programs and try hard to laud the people who find problems and report them,” says Dingledine.
Between 2011 and 2016, Facebook’s bounty program paid over $5 million to more than 900 researchers. In 2016 alone, Google paid researchers $3 million as part of its own program, and Microsoft is now offering up to $15,000 for weaknesses exposed in Microsoft Office.
Companies that incentivize these independent analyses, Dingledine says, tend to perform better in the long term, because transparency builds trust among both researchers and users.
The fundamental requirement is that you need to not lie to your users. Too many startups seem to think that the way to stand out is to promise amazing new levels of security. Building trust over time means being consistently accurate about what your system can and can’t do.
— Roger Dingledine, security expert and co-founder of the Tor Project
Telling Stories That Resonate With Customers
Storytelling may be the key to honest communication that educates users about security issues, a task that lies with vendors’ marketing teams.
“The fundamental requirement is that you need to not lie to your users,” says Dingledine. “Too many startups seem to think that the way to stand out is to promise amazing new levels of security. … Building trust over time means being consistently accurate about what your system can and can’t do.”
Marketers need accurate information about products’ security features, including weaknesses exposed by bug bounty hunters, and they must understand the concepts behind those features (and failures). But they can’t do it alone—the engineers who build the apps and services must teach them. A vendor might facilitate this dynamic by hiring marketers with security backgrounds and engineers who are adept at making technical data intelligible to non-technical employees.
Once marketing and engineering work together, companies can consider incorporating literal stories into the marketing they push out to the world. Take Net Alert, another Citizen Lab project. Net Alert publishes web comics that depict common security-related scenarios, like a case study of Tibetan activists targeted by a phishing campaign. The comic provides a step-by-step breakdown of how phishing can impact anyone who uses email.
A security-savvy marketing team is in a position to tell similar narratives: clear, true stories that explain how a product protects users, and how, in some cases, it might fail. Because when it comes to online security, users prefer the truth.