This blog post was co-written with Malte Vollmerhausen as part of a master’s degree course. Revised in 2025.
The soup can experiment: A quick mental check
Imagine you’re walking into a supermarket. You spot a pyramid of Campbell’s tomato soup cans with a sign: 10% off. You grab a few and head to the checkout. Studies suggest you’d buy around 3 cans.
Now, let’s hop into our DeLorean, go back 30 minutes, and walk into that same supermarket again. You see the same pile of soup with the same discount, but this time there’s a small addition: a sign saying “Max. 12 cans per person.”
According to the study, this time you’d likely bring 7 cans to the register.
Technically, that’s an irrational decision. That sign shouldn’t change how much soup you actually need, yet it completely alters your behavior. As you can imagine, this glitch in our decision-making process doesn’t just happen with soup; it happens with security, too.
“Understanding how our brains work, and how they fail, is critical to understanding the feeling of security” [1].
Psychology and security are deeply intertwined. We’ve been diving into the research on human behavior, and it turns out that when it comes to security trade-offs, we (ourselves included) make a ton of mistakes.
In this post, we want to share what we’re learning about how our minds play tricks on us. We’ll explore some basic psychology principles and cognitive biases, then try to connect the dots to security. Finally, we’ll share a few experiments and practical tips we’re trying out to avoid these traps.
Fundamentals: how we think
At its core, psychology is “the science of the mind and behavior” [2]. But before we act, we have to decide. This is where things get tricky.
As Nobel Prize winner Daniel Kahneman describes in his bestseller Thinking, Fast and Slow [3], our brains use two distinct systems for decision-making:
- System 1: Operates automatically and quickly, with little or no effort and no sense of voluntary control.
- System 2: Allocates attention to effortful mental activities, including complex computations. This is where we feel a sense of agency and concentration.
Here is the difference in action: System 1 is what happens when you spot an angry face. You instantly recognize the expression and anticipate the person might yell. It happens in a split second.
System 2 kicks in when you see an equation like 24 × 17. Unless you’re a math savant, you can’t solve this instantly. You know the answer is over 100, but solving it takes deliberate focus.
The shortcut problem: heuristics
You’ve likely heard of “heuristics.” These are defined as “[a] simple procedure that helps finding adequate, though often imperfect, answers to difficult questions” [3].
From an evolutionary standpoint, these shortcuts make sense. System 2 requires energy and focus, and we can only do one heavy mental task at a time. Our ancestors didn’t have time to admire the sunset and calculate the trajectory of a pouncing lion. They needed quick, fight-or-flight reactions to survive.
We still rely on these shortcuts today, even though our lives are safer and we technically have the time to use System 2. The issue is that when we face a hard question, we often subconsciously substitute it with an easier one.
Examples of substitution [3]:
| Target question (Hard) | Heuristic question (Easy) |
|---|---|
| How much would you contribute to save an endangered species? | How much emotion do I feel when I think of dying dolphins? |
| How happy are you with your life these days? | What is my mood right now? |
| How popular will the president be in six months? | How popular is the president right now? |
Take the dolphin example: You aren’t using System 2 to calculate the economic impact of extinction. You’re asking System 1: “Do I like dolphins?”
The challenge we’re all facing is a disproportionate reliance on System 1, leading to what we call Cognitive Biases.
Cognitive biases
Let’s look at the soup cans again. One person buying 7 cans instead of 3 isn’t a big deal. But the systematic nature of these biases is huge. When thousands of people change their behavior because of a simple sign, it catches the eye of marketers and bad actors.
These biases influence everything from elections to salary negotiations. After decades of research [4], we’re seeing an ever-growing list of glitches our minds are prone to. Here are a few we find most relevant to security.
The “What You See Is All There Is” (WYSIATI) problem
We rarely notice missing information. When we meet someone at a party, we decide within a minute if we like them, extrapolating a full personality from a tiny data set. We rarely step back to question that initial impression.
Daniel Kahneman calls this WYSIATI: What You See Is All There Is. This tendency to ignore missing data leads directly to a bias called Anchoring.
1. Anchoring
Whenever we deal with numbers, the first value we see influences everything that follows.
In a famous experiment, Kahneman and Amos Tversky rigged a wheel of fortune to stop on either 10 or 65. They asked students to write down the number, then asked them to estimate the percentage of African states in the UN.
- Those who saw 10 estimated around 25%.
- Those who saw 65 estimated around 45%.
Their judgment was swayed by a totally random number from a wheel of fortune. This effect appears everywhere, from guessing the height of trees to real estate prices.
Why this matters for security: Consider an experiment where judges were anchored by rolling a die (3 or 9) before sentencing a shoplifter. Those anchored to the higher number gave significantly longer sentences. If a random number can shift a prison sentence, imagine what it does to risk assessment.
2. Priming
Our environment influences us in ways we don’t realize. Our brains are constantly pattern-matching. This is called Priming.
- The Florida Effect: In one study, people who built sentences using words related to “age” walked significantly slower afterward.
- Voting: People were more likely to support education funding when casting their vote inside a school compared to a nearby building.
If simple environmental factors can shift how we vote, it’s easy to see how this can be weaponized in social engineering or security culture.
3. Statistics and numbers
We are generally terrible at intuitive statistics.
- Denominator Neglect: We judge a risk as higher when it’s presented as “1 in 100” rather than “1%”. In one study, people rated a disease that kills 1,286 out of 10,000 people (12.86%) as more dangerous than one with a 24% mortality rate, even though the latter is nearly twice as deadly.
- Law of Small Numbers: We often draw broad conclusions from small data sets. For example, the Gates Foundation once invested heavily in creating smaller schools because data showed the most successful schools were small. However, further analysis showed the worst schools were also small. The sample size was just smaller, leading to more extreme (variable) results.
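Both effects are easy to demonstrate with a few lines of Python. This is a rough sketch: the score distribution (mean 500, standard deviation 100) is made up for illustration, but the point holds for any distribution.

```python
# A minimal sketch of both effects, using only the standard library.
import random
import statistics

# Denominator neglect: "1,286 out of 10,000" and "24%" are both rates.
# Converting them to the same scale makes the comparison trivial.
rate_a = 1286 / 10000          # the "vivid" framing -> 12.86%
rate_b = 0.24                  # the "abstract" framing -> 24%
print(f"Disease A kills {rate_a:.2%}, disease B kills {rate_b:.2%}")

# Law of small numbers: draw "school scores" from the SAME distribution,
# varying only the school size. Small samples produce more extreme means.
random.seed(42)

def mean_scores(school_size, n_schools=1000):
    """Average score per school; every student is drawn from N(500, 100)."""
    return [statistics.mean(random.gauss(500, 100) for _ in range(school_size))
            for _ in range(n_schools)]

small = mean_scores(school_size=10)
large = mean_scores(school_size=1000)
print(f"Spread of small-school means: {statistics.stdev(small):.1f}")
print(f"Spread of large-school means: {statistics.stdev(large):.1f}")
# Small schools dominate BOTH the top and the bottom of any ranking,
# even though no school is actually better or worse than another.
```

Running this shows the small-school averages spread out far more widely than the large-school ones, which is exactly why the “best schools” in the Gates data looked small.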
4. The Sunk Cost Fallacy
This is the bias that keeps you eating at a restaurant when you’re already full because “I already paid for it.”
In security and business, this looks like continuing to fund a failing project just because “we have already invested so much.” Rational decision-making says the past money is gone; you should only evaluate future utility.
5. Risk perception vs. reality
We evaluate risk based on what is easily available to our memory (The Availability Heuristic). Our ancestors needed to focus on immediate, vivid threats (lions). Today, we still focus on vivid threats (terrorism, plane crashes) while ignoring statistically probable threats (car accidents, heart disease).
- The Availability Cascade: Media reports on a specific danger create fear, leading to more coverage, creating a feedback loop.
- Loss Aversion: We treat gains and losses asymmetrically. We are risk-averse with gains (we prefer a sure gain over a gamble) but risk-seeking with losses (we gamble to avoid a sure loss). This asymmetry is the core of Prospect Theory.
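The gain/loss asymmetry can be sketched with Prospect Theory’s value function, here using Tversky and Kahneman’s published 1992 median parameters (alpha = 0.88, loss aversion = 2.25). The probability-weighting half of the theory is omitted to keep the sketch short.

```python
# A toy Prospect Theory value function: outcomes are valued relative to a
# reference point, and losses loom larger than equivalent gains.
def value(x, alpha=0.88, loss_aversion=2.25):
    """Subjective value of an outcome x relative to the reference point."""
    if x >= 0:
        return x ** alpha
    return -loss_aversion * ((-x) ** alpha)

# Gains: a sure $500 vs. a 50% chance at $1,000 (same expected value).
sure_gain = value(500)
gamble_gain = 0.5 * value(1000)
print(f"sure gain {sure_gain:.0f} vs gamble {gamble_gain:.0f}")
# The sure gain scores higher -> we take the certain option.

# Losses: a sure -$500 vs. a 50% chance of losing $1,000.
sure_loss = value(-500)
gamble_loss = 0.5 * value(-1000)
print(f"sure loss {sure_loss:.0f} vs gamble {gamble_loss:.0f}")
# The gamble is "less bad" -> we gamble to avoid the certain loss.
```

Even though both pairs have identical expected values, the curvature of the value function flips our preference depending on whether we frame the decision as a gain or a loss.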
Connecting the dots to security
So, we’ve looked at the bugs in our general decision-making, but how does this actually play out when we are trying to secure a system?
We’ve been reading up on Bruce Schneier, one of the heavy hitters in this space, and his paper “The Psychology of Security” [1] really resonated with us. It highlights a massive challenge that we think everyone in our industry deals with, whether we admit it or not.
The core idea is this: security is never absolute. It is always a trade-off.
We all know this in theory, right? In every security decision, we trade security for things like time, convenience, money, or capabilities. Schneier gives a great (if extreme) example regarding 9/11: “Want things like 9/11 never to happen again? That’s easy, simply ground all the aircraft”.
Technically, that works. But we don’t do it because the trade-off (losing air travel) is terrible. The hard part isn’t knowing that trade-offs exist; it’s knowing if we are making the right ones.
The gap between feeling and reality
Here is where it gets messy. You can feel safe even if you aren’t. Conversely, you can feel totally exposed even when you are perfectly secure.
Schneier puts it simply: “Security is both a feeling and a reality. And they’re not the same” [1].
Remember System 1 (the gut feeling) and System 2 (the math) from earlier?
The danger zone is when those two systems give us different answers. Schneier warns that “the more [your] perception diverges from reality […], the more your perceived trade-off won’t match the actual trade-off” [1].
We’ve definitely been guilty of this. It is so easy to make a quick decision based on a “feeling” of security (System 1) rather than doing the hard work of calculating the actual risk (System 2). When we do that, we end up prioritizing the wrong things.
Schneier identifies five specific areas where our perception usually fails us:
- The severity of the risk.
- The probability of the risk.
- The magnitude of the costs.
- How effective our countermeasure actually is.
- The trade-off itself.
The “spectacular risk” trap
This connects directly back to the cognitive biases we discussed earlier, specifically the Availability Cascade.
We catch ourselves doing this all the time. We worry about the risks that make for good stories (like sophisticated zero-day exploits or “movie plot” terror attacks) because they are vivid and easy to imagine. Meanwhile, we often underestimate the boring, slow-moving risks (like unpatched servers or poor password hygiene) even though, statistically, those are the ones that kill projects.
It is basically the security equivalent of fearing a shark attack while ignoring your cholesterol.
Security theater
This brings us to a controversial topic: Security Theater.
We used to think that any security measure that didn’t mathematically reduce risk was a waste of money. But Schneier adds some nuance here that made us rethink that stance.
He argues that we make the best trade-offs when our feeling of security matches the reality of security.
Sometimes, a measure doesn’t do much technically, but it helps align that feeling with reality. Think about the TSA full-body scanners. We know they only mitigate a tiny percentage of terrorism risk. From a pure efficiency standpoint (System 2), they are a bad investment. However, if they make passengers feel safe enough to get on a plane, they serve a psychological purpose.
Practical advice: experiments we’re trying
Knowing about biases is one thing; fixing them is another. We don’t have this perfectly figured out, but here are four strategies we are currently experimenting with to bypass System 1 errors.
1. Pause and rationalize
We are trying to force ourselves to engage System 2. Before making a security decision, take a breath. Write down the facts. Question the “gut feeling.” It’s hard work, but it interrupts the autopilot.
2. Use “Nudges”
Nudges act as guardrails for our brains. A famous example is the organ donation form.
- Opt-in: “Check this box to be a donor.” (Low participation)
- Opt-out: “Check this box not to be a donor.” (High participation)
As developers and security pros, we can design systems that “nudge” users toward safety by default, rather than relying on them to make the perfect choice every time.
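Here is a sketch of what “nudging by default” can look like in code. All names here are hypothetical, not from any real framework; the point is that the secure state is what you get by doing nothing, and opting out requires a deliberate System 2 action.

```python
# Hypothetical account settings where the secure choice is the default.
from dataclasses import dataclass

@dataclass
class AccountSettings:
    # Secure defaults: a user who never touches the form stays protected.
    two_factor_enabled: bool = True
    session_timeout_minutes: int = 15
    password_manager_prompt: bool = True

# The common case: the user accepts the defaults and is secure.
default_user = AccountSettings()

# Opting out is still possible, but it now takes an explicit action,
# mirroring the opt-out organ donation form above.
risk_taker = AccountSettings(two_factor_enabled=False)
```

This is the same trick as the donation form: we don’t remove choice, we just move the effort onto the less safe option.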
3. Build habits
As the line often attributed to Aristotle goes, “We are what we repeatedly do.” If we can turn good security practices (like using a password manager or 2FA) into muscle memory, we reduce the cognitive load required to make the trade-off.
4. Automate decisions
Computers don’t have a System 1. They are brilliant but literal. Wherever possible, we offload the “rational” thinking to machines. For example, a smart home system that checks if the doors are locked so you don’t have to rely on your memory.
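A toy sketch of the door-check idea. The sensor names and the data shape are invented for illustration; the point is that a machine applies the same rule every single time, with no System 1 shortcuts.

```python
# Hypothetical smart-home check: the machine, not your memory, applies the rule.
def doors_to_lock(sensor_states):
    """Return the doors reported unlocked by the (imaginary) sensors."""
    return [door for door, locked in sensor_states.items() if not locked]

# What the smart-home hub might report at bedtime:
sensors = {"front door": True, "back door": False, "garage": True}

unlocked = doors_to_lock(sensors)
if unlocked:
    print(f"Reminder: still unlocked -> {', '.join(unlocked)}")
else:
    print("All doors locked. Sleep well.")
```

The check is trivial, which is exactly the point: trivial, repetitive decisions are the ones worth delegating to a machine.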
So, now what?
We know there is no single formula for perfect security. We can’t predict the future, and we will always battle between intuition and statistics.
But we can try to take a step back. We can give System 2 a fighting chance.
If there is one takeaway from this post, it’s this: The next time you walk into a supermarket and see a sign saying “Max. 12 cans per person,” take a pause. Be the person who buys 3 cans instead of 7.
Further resources
If you are interested in why humans are so error-prone, or just need some new reading material, take a look at the following books:
- Thinking, Fast and Slow, Daniel Kahneman, 2011 (probably the most bang for the buck)
- Judgment under Uncertainty: Heuristics and Biases, Daniel Kahneman, Paul Slovic, and Amos Tversky (eds.), 1982
- Predictably Irrational, Dan Ariely, 2008
- The Upside of Irrationality, Dan Ariely, 2010
- The Black Swan, Nassim Nicholas Taleb, 2007
- Fooled by Randomness, Nassim Nicholas Taleb, 2001
If you want to read more about security and privacy:
- Data and Goliath, Bruce Schneier, 2015
Sources
[1] Bruce Schneier, The Psychology of Security
[2] Merriam Webster, Psychology
[3] Daniel Kahneman, Thinking, fast and slow, 2011
[5] Nassim Nicholas Taleb, The Black Swan, 2007