Practical Human Security (Part 1 of 3)

The Enemy

“If you know the enemy and know yourself, you need not fear the result of a hundred battles.”

~ Sun Tzu, The Art of War

Introduction

Written by Anastasios Arampatzis and Justin Sherman

The news is full of stories of security incidents and data breaches that have exposed the secret, sensitive, and personal data of millions. While many cybersecurity professionals characterise 2017 as the worst year for incidents to date, predictions for 2018 are even worse. In addition to a rapidly changing threat landscape, emerging technologies such as AI, machine learning, and quantum computing are fundamentally changing the way we approach cybersecurity in the modern age – which makes staying “ahead of the game” even more difficult.

In reality, we can view the cyber landscape as a battle space, where one key focus of conflict is information. Indeed, in today’s information-driven society, power comes from the accurate and timely ownership and exploitation of that information.

The epicentre of this battle space is us, the human being. Many reports identify us humans as the weakest security link in the cyber domain, vulnerable to countless methods of deceit and exploitation. As we are affected by the digital situations around us, our responses in the cyber realm are not instinctive and “logical,” but are instead fundamentally shaped by each of our individual beliefs, biases, and education.

Thus, we present our trilogy of articles – centred on the human element of cybersecurity. Using Sun Tzu’s famous Art of War saying as a foundation, we will first discuss the external environment, “the enemy,” that forms the human threat landscape; in our second article, we will analyse the heuristic biases and beliefs that shape human responses to these threats; and finally, in our third piece, we will discuss ways to design around – and design for – these biases and beliefs in order to practically strengthen human cybersecurity.

The Enemy: Threats from the External Environment

If we consider the human being as a “system” within the context of systemic theory, the external environment provides inputs to every human’s decision-making processes. These inputs can be benevolent, but they can also be malicious and impair the way people make decisions. These threats form the “enemy” that we humans need to fight in order to reach better decisions for a safer cyber domain.

The Cyber Domain

As technology evolves and disrupts our daily modus operandi, so does human behaviour. Psychologists like Dr. Mary Aiken believe that people behave differently when interacting with the abstract cyber domain than they do in the real, face-to-face world. The term “cyber” here refers to anything digital, from Bluetooth technology to driverless cars to mobile and networked devices to artificial intelligence and machine learning. So: how is the cyber domain threatening human decision-making?

Threat number one: cyber safety is an abstract term. People can understand the danger of driving drunk far more easily than the danger of having an unpatched personal or corporate computer connected to the internet. As a result, people often fail to recognise security risks or the information provided to cue them, and they tend to believe they are less vulnerable to risks than others. As Ryan West notes, most people believe they are better-than-average drivers and that they will live beyond the average life expectancy. People also believe they are less likely than others to be harmed by consumer products. It is therefore reasonable to conjecture that computer users hold the preset belief that they are at less risk from a computer vulnerability than others. Further, the pro-security choice (e.g. encryption) often has no visible outcome, and there is typically no “visible” threat (e.g. an email interceptor) either. The reward for being more secure, then, is that nothing bad happens – which by its nature makes it difficult for people to evaluate security as a gain when mentally comparing costs, benefits, and risks. In fact, if we compare the abstract reward (safety) of being more secure against a concrete reward like viewing an email attachment, the outcome does not favour security at all. This is especially true when users do not know what their level of risk is, or believe they are at less risk than others to start with.

Cyber is addictive. A study has found that the average mobile phone user checks their device more than fifteen hundred (1,500) times a week. This is reinforced by the very nature of the web. The internet is always there, open 24/7, always full of promises, content, and data. It is also full of intermittent rewards, which are more effective at fostering addiction than continuous rewards. Do you remember the movie You’ve Got Mail with Tom Hanks and Meg Ryan? At one point Tom Hanks says that there’s nothing more powerful than the simple words “You’ve Got Mail.” This is the very essence of cyber addiction – we check our devices because sometimes we’re lucky enough to be rewarded with a notification. When something is addictive, we make irrational decisions every time it’s involved in a set of choices. I search therefore I am; I get likes therefore I exist. We check our mail, now, and again, and again.

This leads us to another threat: the time we spend online. When we’re checking our mobile phone or typing this article, we are effectively in a different environment; we have gone somewhere else, outside the physical world’s time and space. That’s because cyberspace is a distinct space, quite different from the actual living space where our families, homes, and jobs are located. Many of us have felt “lost in time” while surfing online, because we haven’t learned to keep track of time in the cyber domain. This fundamentally affects how we behave and make choices in cyberspace. (And as far as online security goes, more time only equals more risk.)

Finally, we must address the libertarian nature of the internet. The internet is designed to be free. But where does freedom end and totalitarianism begin? Where is the frontier between freedom and corruption? When does “freedom of speech” become fake news or disinformation? And who decides that certain opinions are fake news, if anyone should decide that at all? Similarly, when does personal interest override that of the greater online community? The idea of freedom online is quite contentious and raises many ethical questions – yet very little regulation currently exists.

It should be clear that our environment has an impact on our decision-making processes. Our instincts have evolved throughout history to handle face-to-face interactions with other human beings, but once we are in the cyber domain, these instincts quickly fail us.

The Ever-Evolving Technology

As devices and gadgets change, the cyber environment changes with them, which impacts our behaviour all over again. More changes lead to more new situations, creating ever more confusion.

Until a few decades ago, the pace of every technological revolution allowed humans to assimilate its changes and safely integrate them into their day-to-day activities. During the past twenty years, however, the evolution of digital technology has become so frenetic that people cannot keep up with it. We don’t need to list all the buzzwords that arise every single day to document this argument.

Even “digital natives,” to use Marc Prensky’s controversial term, sometimes feel helpless in the face of ever-evolving technology. We haven’t yet found a pattern by which we can effectively leverage this technology for good; we are not sure how to use it functionally and safely; and we are not sure what the long-term (and even short-term) side effects of these newborn technologies will be.

Certainly, there are good uses and practices for effectively integrating evolving technology into our lives, but there are bad practices as well that many of us follow. In addition, this technology has brought an unprecedented revolution in digital content creation. This raises many questions: What are the implications of exposing so much sensitive data online? Who can benefit from it? Can we protect our precious assets, or are we unlocking our homes to the worst criminals, who can erase our lives with just one click? (Remember the movie The Net with Sandra Bullock?)

Technology is not good or bad in its own right; it is neutral, and it simply mediates, amplifies, and changes human behaviour. It can be used well or poorly by humankind – and in many ways, it’s no different from how we regard driving cars or using electricity or nuclear energy. Any technology can be misused. Thus, the central question: what are the universally acceptable ethics for using cyber technology?

The Education

Education is obviously a factor that shapes human behaviour. Many of us have read countless articles about the necessity of recurring education and the return on investing in it. On a macro level, a broad lack of education can result in ignorance, authoritarianism, or even anarchy. Our lack of comprehensive cyber education therefore presents major risks in the way people perceive cyber, its risks, and its threats.

Another issue is the effectiveness of the cyber education that does exist. Education should aim to answer the “why” and not only the “how” of security. It should aim at deep learning and retention, and it should certainly be recurring. Unfortunately, this is not the case. Typical learning situations rely on positive reinforcement when we do something “right”: simply put, when we do something good, we are rewarded. In the case of security, though, when the user does something “good,” the only existing reinforcement is that bad things are less likely to happen. Such a result is quite abstract and provides neither an immediate reward nor instant gratification, both of which are powerful reinforcers in shaping behaviour.

We should also examine the opposite – how our behaviour is shaped by negative reinforcement when we do something “wrong.” Normally, when we do something bad in a learning environment, we suffer the consequences. In the case of security, however, the negative consequence of bad behaviour is not immediately evident. It may be delayed by days, weeks, or months, if it comes at all. (Think of a security breach or a case of identity theft, for instance.) Cause and effect is learned best when the effect is immediate, and the anti-security choice often has no immediate consequences. This makes it hard to foster an understanding of consequences, except in the case of spectacular disasters.

It’s also important to consider that factors such as willpower, motivation, risk perception, cost, and convenience are often more important than the lack of cyber knowledge itself.

Social Engineering

Social engineering attacks, largely orchestrated through phishing messages, remain a persistent threat that allows hackers to circumvent security controls. Attackers manipulate people into revealing confidential information by exploiting their habits, motives, and cognitive biases. Research on phishing largely focuses on users’ ability to detect structural and physical cues in malicious emails, such as spelling mistakes and differences between displayed URLs and the URLs embedded in the HTML code. Humans often process email messages quickly by using mental models or heuristics, thus overlooking cues that indicate deception. In addition, people’s habits, needs, and desires make them vulnerable to phishing scams that promise rewards.
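
To make the “displayed URL versus embedded URL” cue concrete, here is a minimal sketch in Python (standard library only, with a hypothetical sample email) that flags links whose visible text names one domain while the underlying href points to another. It illustrates the heuristic that researchers expect users to apply; it is not a production phishing filter.

```python
# A minimal sketch of the "displayed URL vs. embedded URL" phishing cue.
# Assumptions: the email body is available as an HTML string; domain handling
# is deliberately naive (no public-suffix or punycode logic).
from html.parser import HTMLParser
from urllib.parse import urlparse


def domain_of(url: str) -> str:
    """Return the host part of a URL, tolerating a missing scheme."""
    if "://" not in url:
        url = "http://" + url
    return urlparse(url).netloc.lower().removeprefix("www.")


class LinkAuditor(HTMLParser):
    """Collect (visible text, href) pairs for every <a> tag in an email body."""

    def __init__(self):
        super().__init__()
        self.links = []            # list of (text, href) tuples
        self._href = None
        self._text = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._href = dict(attrs).get("href", "")
            self._text = []

    def handle_data(self, data):
        if self._href is not None:
            self._text.append(data)

    def handle_endtag(self, tag):
        if tag == "a" and self._href is not None:
            self.links.append(("".join(self._text).strip(), self._href))
            self._href = None


def suspicious_links(html_body: str):
    """Yield links whose displayed text names a different domain than the href."""
    auditor = LinkAuditor()
    auditor.feed(html_body)
    for text, href in auditor.links:
        looks_like_url = "." in text and " " not in text
        if looks_like_url and domain_of(text) != domain_of(href):
            yield text, href


if __name__ == "__main__":
    # Hypothetical phishing snippet: the text names a bank, the link does not.
    email = ('<p>Please verify your account at '
             '<a href="http://login.malicious.example.net">www.yourbank.com</a></p>')
    for shown, actual in suspicious_links(email):
        print(f"Displayed '{shown}' but the link points to '{actual}'")
```

Even a simple check like this catches only the structural cue; as the research above notes, users processing email quickly rarely perform it.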

Awareness of phishing messages among users has increased, but so has the sophistication of the messages themselves. Hackers design phishing messages today to activate basic human emotions (e.g., fear, greed, and altruism) and often target specific groups to exploit their specific needs. Hackers sometimes even contextualise the messages to individuals by incorporating their personal information (spear phishing). For instance, a new phishing scam has arisen on dating applications: a bot starts a conversation with another user (the victim) and, after a few exchanges, sends the victim a malicious link, ostensibly to a picture, in an attempt to get the victim to click on it. Research shows that such spear phishing attacks are more effective than generic phishing messages, which target a wider population.

The Malevolent Actors

These days, cyber crime is far more organised than ever before. Cyber criminals are well equipped, well funded, and have the tools and knowledge they need to get the job done. But to really understand cyber criminals, we mainly need to know one thing: their motives.

Overwhelmingly, cyber criminals are interested in money. Either they’ll use ransomware to extort money, or they’ll steal data that can be sold on dark web markets. Their main course of action is through phishing campaigns, which can come pre-designed at a low cost and can have a truly staggering return on investment. Typically these campaigns are used to deliver malware (often ransomware), and emails usually include a strong social engineering component. For instance, recipients are often asked to open or forward attachments such as false business documents, which activate malicious software when opened.

Unlike cyber criminals, hacktivists are generally not motivated by money. Instead, they are driven by revenge. Hacktivists work alone, making their attacks extremely difficult to predict or respond to quickly. Sometimes these hacktivists are insider threats who know how to bypass an organisation’s security defences, but the real risk lies in the fact that there is no way of knowing who they are or when they’ll strike, and a hacktivist’s motives are more difficult to attribute. We do at least know that their main course of action is typically the DDoS (distributed denial of service) attack, used primarily to embarrass their victim.

In recent years, we’ve all heard a lot about state-sponsored attacks and cyber espionage. Unsurprisingly, state-sponsored attackers usually aren’t interested in our money. Instead, they want our data, and that means gaining sustained (“persistent”) access to our IT infrastructure. If an organisation operates in a particularly sensitive market where proprietary data is meticulously safeguarded (e.g. critical infrastructure or electoral systems), then it is at greater risk of attracting the attention of a state-sponsored hacking group. In essence, because so much is online, state-sponsored groups will often work on multiple attack vectors simultaneously. In this way, they can collect sensitive data over a long period of time, rather than simply performing a “raid operation.”

Conclusions

Cyber is the battle space where many interests collide. In the midst of the haze and dust of this collision is the human, who is the recipient of many external inputs, good and bad, that shape the way we react and behave. But even apart from these external inputs, each human’s cognitive and heuristic biases also play an incredibly important role – which we will discuss more in our second piece.

References

This article wouldn’t have been possible without the following inspirational references:

  • The Cyber Effect, Dr. Mary Aiken.
  • The Psychology of Security, Ryan West, Communications of the ACM, April 2008.
  • Got Phished? Internet Security and Human Vulnerabilities, Goel, Williams, and Dincelli, Journal of the Association for Information Systems, January 2017.
  • Proactive Defense: Understanding the 4 Main Threat Actor Types, Recorded Future, August 2016.
About the Author
Justin Sherman

Justin Sherman is a sophomore at Duke University double-majoring in Computer Science and Political Science with a certificate in Markets and Management. His focus is broadly on all things cyber, including security, warfare, ethics, terrorism, censorship, and governance.

He conducts technical security research through Duke’s Computer Science Department, spanning deep neural networking, mobile privacy, encrypted tunneling, and IoT security; he conducts technology policy research through Duke’s Sanford School of Public Policy, spanning technology and poverty, disinformation, and international technology regulation; he’s a cybersecurity contributor for the Public Sector Digest, one of North America’s largest direct-to-government policy journals; and he’s the Co-Founder and President of Ethical Tech Society, an education and advocacy network for ethical technology professionals.

Justin is certified in cybersecurity policy, corporate cybersecurity management, infrastructure protection, social engineering, privacy, information security, continuity planning, and homeland security planning from such organizations as FEMA, the National Institutes of Health, the United States Department of Homeland Security, and the United States Department of Defense. He’s been published numerous times, blending knowledge in computer science, policy analysis, decision science, political behavior, market sociology, philosophy, war theory, foreign policy, and human rights to understand technology in relatable and impactful ways. He also has experience as a computer science instructor, STEM curriculum developer, and technical trainer.

When he’s not hacking a computer or worrying about the next cyber apocalypse, Justin enjoys running, hiking, filmmaking, and the martial arts.
