This three-part series will explore the impact of the human condition on cybersecurity, and why cybersecurity must be designed from a “human-out” perspective.
The technology to deliver cyberattacks and malware has gotten incredibly cheap. According to Secplicity, anyone with a Tor Browser and a Bitcoin can now rent a botnet to deliver malware for as little as $500.*
And the creation of malware no longer requires programming skills. For roughly the price of another Bitcoin, you can buy a Perkele toolkit on the dark web and use your rented botnet to deliver highly customized malware that infects mobile and PC banking applications and defeats multi-factor authentication schemes.
Let’s face it: this low barrier to entry has contributed to the global rise of malware. You no longer need to be a programmer or an affiliate of a crime syndicate to build a successful cyber-hacking business. Today, pretty much anyone motivated to get into cybercrime can do so.
The Human Aspect
Throughout history, there has always been a small group of people who want to take advantage of another group of people. The problem is as old as the human race itself. And as organizations spend more and more of their corporate budgets on cybersecurity technology to minimize threats and thwart data breaches (the 2017 global cybersecurity market is expected to exceed $120 billion, according to Cybersecurity Ventures), they have to ask what return, if any, they are getting on these investments.
Will new cybersecurity technologies like next-generation firewalls, cloud-access security brokers or real-time threat intelligence feeds turn out to be worthy investments? Or will they, like so many other cybersecurity technologies purchased by companies as “silver bullets”, end up as “shelf-ware”: yet another technology investment that solved a particular security challenge at a specific point in time while still failing to address the most fundamental weakness of all, the human aspect?
In the tech industry, we often call this the PEBKAC problem: Problem Exists Between Keyboard And Chair. In other words, if only we could eliminate those pesky humans, we wouldn’t have a cybersecurity problem. To combat this, cybersecurity professionals make security intentionally difficult for users, telling them what they can’t do and declaring, “Thou shalt not bring your own device (BYOD) or bring your own app (BYOA)!” How many CISOs do you know who have had to find a new job because they drew those lines in the sand, or worse, who drew those lines, made things hard for users, and still suffered a corporate data breach?
For all of the issues that the human element can cause, humans are also seen as an organization’s most valuable asset. After all, companies spend a ton of money and resources on attracting and retaining top talent, as they should, since employees are core to the business.
In the app economy, software is also core to the business, yet the same level of scrutiny and care that organizations expend to find the right candidate does not always apply to evaluating the code that goes into their software. This gap is explored in greater depth in a recent installment of The Last Adopter podcast, in which Veracode’s Sam King talks in depth about the role of software and the actions that can be taken to make our world more secure.
Hearing perspectives from King and George Johnson of NC4 gives hope that new methods and approaches to security can help mitigate threats and control risk. Hiding cash in the mattress and disconnecting our computers is not in our future, contrary to what Lewis Black may think.
Next time, the discussion will continue with a focus on the human element and how designing cybersecurity from a “human-out” perspective can have a positive impact.