Feature

Cerebral Security

Tech smarts and a pair of grants from Google and the National Science Foundation are helping BYU professors at the university’s Neurosecurity Lab lift the lid on computer users’ riskiest behaviors. And with a multimillion-dollar brain scanner at their fingertips, the six researchers are turning heads.

YOU'VE BEEN HACKED.

Your data is stolen—or, scarier, that of your clients and customers—all because of a laughably easy password. But here’s the surprising part: neurosecurity experts—the scientists who study the brain and information-security behaviors—can hardly blame you.

Cartoon image of a brain with an open lock in the middle

“It’s hard to think of a security measure that would work any worse with our brains,” says Anthony Vance, a Marriott School information systems professor. “We’re asked to frequently change our passwords. We’re asked to make them difficult to guess, which makes them hard to remember,” so much so that we write them down. “All the password advice goes counter to the way our brains work”—counter to biology itself, he adds.

Just ask Vance’s fellow researchers at BYU’s Neurosecurity Lab, a pioneering group that has studied the brains of hundreds of computer users in the past three years—much to the applause of the National Science Foundation and Google, both of which have awarded substantial grants to the lab.

But it’s not just passwords that give our brains trouble. Even pop-up warnings, generated by web browsers to steer us from untrustworthy sites, can have mind-numbing effects over time, says professor Bonnie Anderson, who oversees the Marriott School’s MISM program. “The more often you see security messages, the less attention your brain devotes to them,” she says. “There’s less blood flow and more reliance on memory. It’s not you being a lazy user, necessarily. It’s your brain being efficient: it’s not going to waste energy on processing something it’s already seen.”

In short, you’re only human.

But thanks to a fount of fascinating findings from the lab’s interdisciplinary team, there’s hope for improving user security behavior, and it doesn’t come just in the form of longer passwords.

Hack Job

Target, Home Depot, the Internal Revenue Service—since the BYU Neurosecurity Lab was formed in 2014, major security breaches have riddled America.

In one of the largest attacks, the US Office of Personnel Management reported the theft of sensitive information from 22 million people—virtually anyone who had undergone a government background check in the last fifteen years, including American spies. And at Sony, hackers erased data from some 10,000 computers and publicly disclosed unfinished movie scripts and films, confidential emails, salary lists, and at least 47,000 Social Security numbers.

In response to these and other attacks, institutions today are on the defensive, with compensation packages for chief information security officers climbing past $1 million at some large banks, insurers, and healthcare companies. Even mid-tier security engineers now earn six-figure salaries, according to the Wall Street Journal. And demand for new tools and intelligence is at an all-time high.

“Many of these ‘hack-of-the-century’ stories began with a social engineering attack,” Vance explains, “where a user was duped into doing something insecure, an attacker gained access, and once inside the organization, he stayed there,” wreaking havoc on a massive scale.

“Users are the weakest link,” Vance continues. “Bruce Schneier, a security thought leader, says that only amateurs attack machines; professionals target people.”

User behavior can have a huge impact on an organization, Vance says.

To prove it, experts at the Neurosecurity Lab are getting inside the heads of average web users by using sophisticated technology. At BYU the team is led by Anderson, Vance, fellow information systems professor Jeff Jenkins, and Brock Kirwan, a psychology and neuroscience professor. Two doctoral candidates, David Eargle of the University of Pittsburgh and BYU’s Dan Bjornn, round out the team.

“We thought it would be cool to look at the problem of information security through the lens of neuroscience,” Vance says.

“Specifically, examining user behavior,” Anderson adds, “with neuro-physiological tools to try to understand why people do what they do online”—and to possibly help developers strike the right balance between usability and security.

Bat Signal

KA-POW! The research began with Batman.

Rising from the crime-filled alleyways of Gotham, the Dark Knight infiltrated the Tanner Building in a flurry of Google images—nearly 200 comic book frames, cartoon stills, and movie shots that research participants were asked to classify as animated or photographed, with points awarded for speed and accuracy. An algorithm, the sixty volunteers were told, would later perform the same classification task for comparison.

A cartoon person whose head is part of a chain

It was a guise—albeit one approved by the university’s institutional review board.

In this study, published in the Journal of the Association for Information Systems, what researchers really wanted to know was how often test-takers would ignore malware warnings. You’ve likely seen them: “This is probably not the site you’re looking for!” reads a typical warning on Google Chrome. During the timed test, such warning screens sprang up six to eight times and were ignored by most participants. “Security warning disregard,” as the team calls it, is common. But why?

Call it the wallpaper effect. The first time you walk into a room, you might notice the wallpaper; the second time, not so much. You’re already searching for something more important: your spouse, your keys, a midnight snack. Researchers call this habituation, and it’s an everyday mental process, great for productivity and focus—until it blinds you to the burglar standing ten feet away, quiet as a mouse and ready to make off with your flat-screen television.

Understandably, participants who had failed to heed the security warnings were startled to see a grinning Guy Fawkes materialize on their personal laptops, complete with a countdown timer and an ominous message: “All your files are belong to Algerian Hacker. Say goodbye to your computer.” In a panic, participants gasped and powered down or yanked out their internet cables.

This too was a guise. No one had actually been hacked. But a point had been made.

Before the image classification task, participants filled out a preassessment of their personal aversion to risk. Then they completed the Iowa Gambling Task, a test commonly used to study decision making and risk, while undergoing an EEG (electroencephalography) reading. For most participants, there was an obvious discrepancy between the two measurements. While the EEG readings accurately predicted risky behavior, the self-reported measures, in most cases, failed to predict user behavior in so-called nonsalient conditions, where security concerns weren’t fresh in participants’ minds. In other words, people’s actions didn’t match their perceptions at all.

After the scare with the Guy Fawkes mask, participants showed much more personal aversion to risk and much more consciousness of their security behavior—a truer alignment between “say” and “do.” The researchers called this behavioral change “once bitten, twice shy.”

Could it be that warning messages are poorly designed? Perhaps, Anderson says. In testing functional and aesthetic variations, the team found that a jiggling message box or a polymorphic warning, though annoying, would “trick users into paying attention,” at least at first, she says.
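The idea behind a polymorphic warning is simple enough to sketch. Below is a minimal, illustrative Python example, not the lab’s actual implementation: the same message is rendered with a different visual treatment each time it appears, so no two consecutive exposures look identical. The variant list and the print-based rendering are assumptions made for the example.

```python
import random

# Illustrative visual treatments a browser-style warning could rotate through.
# The point, per the lab's findings, is that a warning which looks different
# on each exposure is harder for the brain to tune out as "already seen."
VARIANTS = [
    {"border": "solid red", "icon": "exclamation", "jiggle": False},
    {"border": "dashed red", "icon": "shield", "jiggle": True},
    {"border": "solid amber", "icon": "stop-sign", "jiggle": True},
    {"border": "double red", "icon": "exclamation", "jiggle": False},
]


def next_variant(previous: dict | None) -> dict:
    """Choose a visual variant that differs from the one shown previously."""
    pool = [v for v in VARIANTS if v != previous]
    return random.choice(pool or VARIANTS)


def show_warning(message: str, previous: dict | None) -> dict:
    """Render the warning with a fresh visual treatment and return that treatment."""
    variant = next_variant(previous)
    # In a real browser this would drive the dialog's styling and animation;
    # here we simply print the chosen treatment next to the message.
    print(f"[{variant['border']} | {variant['icon']} | jiggle={variant['jiggle']}] {message}")
    return variant


if __name__ == "__main__":
    shown = None
    for _ in range(3):
        shown = show_warning("This is probably not the site you're looking for!", shown)
```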

And a subsequent study, still under peer review, has explored the detrimental effects of poorly timed interruptions, a phenomenon known as dual-task interference.

“We lobby for intelligent timing of security messages,” Anderson says. “If there’s a way you can delay those messages until later, when you can have people’s full attention, that works better.”
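What “intelligent timing” might look like in software can be sketched roughly as follows. This Python sketch is an assumption-laden illustration, not the lab’s or any browser’s implementation: non-urgent security notices are queued while the user is active and released only after a stretch of idle time. The five-second threshold and the DeferredNotifier name are made up for the example.

```python
import time
from collections import deque

# Hypothetical idle threshold: wait this long after the user's last action
# before surfacing queued, non-urgent security notices.
IDLE_THRESHOLD_SECONDS = 5.0


class DeferredNotifier:
    """Queue non-urgent security messages until the user is between tasks."""

    def __init__(self) -> None:
        self.queue: deque[str] = deque()
        self.last_activity = time.monotonic()

    def record_activity(self) -> None:
        # Call whenever the user types, clicks, or is otherwise mid-task.
        self.last_activity = time.monotonic()

    def notify(self, message: str, urgent: bool = False) -> None:
        # Urgent warnings (say, an active malware alert) are shown immediately;
        # everything else waits for a natural break.
        if urgent:
            self._show(message)
        else:
            self.queue.append(message)

    def flush_if_idle(self) -> None:
        # Poll this periodically; queued notices appear only once the user is idle.
        if time.monotonic() - self.last_activity >= IDLE_THRESHOLD_SECONDS:
            while self.queue:
                self._show(self.queue.popleft())

    @staticmethod
    def _show(message: str) -> None:
        print(f"SECURITY NOTICE: {message}")


if __name__ == "__main__":
    notifier = DeferredNotifier()
    notifier.notify("A browser security update is available.")  # queued, not urgent
    notifier.record_activity()   # user is still typing
    notifier.flush_if_idle()     # too soon: nothing is shown
    time.sleep(IDLE_THRESHOLD_SECONDS + 0.1)
    notifier.flush_if_idle()     # user has been idle: the notice appears
```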

Lab Rats

If attention is any indication of progress, there’s been plenty of it from companies and conference organizers.

“It’s been fun,” Anderson says, clutching a well-worn passport, evidence of the team’s participation at conferences in places around the globe, from Cambridge, England, to Korea. “We’ve presented at Apple as well as Google. And we’re going back there in two weeks,” she said on a hot July afternoon in the McDonald Building, a research facility on the south edge of campus.

Next door, outside the refrigerated chamber of a functional magnetic resonance imaging (fMRI) scanner, the nerve center of the Neurosecurity Lab, Brock Kirwan removes a cap of scalp electrodes—an EEG device—from a study participant’s head. Their tiny suction cups leave pink polka dots on her temples and forehead. “You get used to it,” she says, smiling. She’s no first-timer.

At a nearby monitor, Kirwan and a team of technicians analyze the subject’s neural responses to certain online security tasks. “Do you mind if we ogle your MRI?” he asks, eyeing the colorful brain scans that light up on screen like a Tiffany lamp.

fMRI scanner

In addition to instruments that measure eye movement, stress levels in the saliva, heart rates, facial muscle movement, and sweaty palms and feet, the fMRI scanner enables the team to—as Vance puts it—“open the black box of the brain to see what mental processes are happening.”

The fMRI scanner is impressive, a giant hollow magnet that can send a box of metal paper clips flying across the room. It’s big. It’s cold—4 kelvins (-452.5 degrees Fahrenheit) at its core. And it’s expensive—about $2 million for a refurbished model. “We’re pushing the equipment as hard as we can,” says Kirwan, who codirects the MRI research center, having scanned gray matter for ten years in three states.

The team estimates it has studied 225 research participants so far, more than half of them using fMRI and EEG. In terms of work hours and output, the project is really cranking, Kirwan says. And that has led to some extraordinary results.

“See the insane thing your computer does to your brain,” wrote one technology blogger after BYU published a collection of images showing neural responses to malware warnings.

Major news outlets, including the Guardian, Voice of America, and Slate, also shared the findings. “Why do people ignore security warnings when browsing the web?” one headline asked. “Researchers terrify college students, prove important point about internet security,” another announced, tongue in cheek.

Though their research has garnered a lot of media attention already, what matters most, say members of the Neurosecurity Lab, is that the team be allowed to build on the collective progress they’ve made to date.

A collaboration with Google engineers to create more effective Chrome warnings—an effort that could impact nearly 60 percent of the global desktop-browser market share—is just a start. As long as the team can help make security messages more impactful, the researchers plan to keep fighting cybercrime—one brain wave at a time.

Take that, hackers.

Still writing down your passwords?

Consider the French journalist who compromised his social media accounts by broadcasting from an office covered in sticky notes containing usernames and passwords—all while covering a major hacking story.

Industry data reveals that 30 percent of us still write down our passwords—and that makes security experts cringe. In a recent Google survey, 231 experts shared the five most effective ways to stop cybercrooks.

  • Install software updates.
    They’re not a bulletproof solution, but it’s far better to install them than to do nothing, experts say. If you are slow to update your computer or mobile device, you are leaving it open to attack.
  • Use a password manager.
    Experts use password managers to create and manage strong passwords for multiple accounts three times more often than regular users do. These applications generate and store encrypted login information for all your online accounts, and the database can be unlocked only with a single master password (see the first sketch after this list).
  • Have a unique password for each account.
    This way if your password for one website is compromised, it won’t affect the security of your other accounts.
  • Craft hard-to-guess passwords.
    While a mix of upper- and lowercase letters, numbers, and symbols is useful, length is especially important; the longer the better. Avoid birth dates, names, and words found in the dictionary.
  • Sign up for two-step verification.
    Two-step verification means you authenticate yourself in multiple ways (for example, through an online password and a confirmation code sent via text). Even if attackers guess your password, they can’t access your account without your phone (see the second sketch after this list).
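For the curious, here is a rough Python sketch of what a password manager does behind the scenes, using the third-party cryptography package. The iteration count, the sample master password, and the helper names are illustrative assumptions rather than a recommendation of any particular product: a strong random password is generated for each site, and a key derived from the single master password encrypts it for storage.

```python
import base64
import hashlib
import os
import secrets
import string

from cryptography.fernet import Fernet  # third-party: pip install cryptography


def generate_password(length: int = 20) -> str:
    """Create a long, random, hard-to-guess password for a single site."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))


def key_from_master(master_password: str, salt: bytes) -> bytes:
    """Derive the vault's encryption key from the one master password."""
    raw = hashlib.pbkdf2_hmac("sha256", master_password.encode(), salt, 600_000)
    return base64.urlsafe_b64encode(raw)  # Fernet expects a urlsafe-encoded 32-byte key


if __name__ == "__main__":
    salt = os.urandom(16)  # stored alongside the vault; it need not be secret
    vault = Fernet(key_from_master("correct horse battery staple", salt))

    site_password = generate_password()
    encrypted_entry = vault.encrypt(site_password.encode())  # what actually sits on disk
    recovered = vault.decrypt(encrypted_entry).decode()

    print("round trip OK:", recovered == site_password)
```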
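And here is a similarly rough sketch of the time-based one-time password (TOTP) scheme behind most two-step verification apps: the service and your phone share a secret, and each independently derives the same short-lived six-digit code from the current 30-second window. The example secret below is a placeholder. Even if attackers steal your password, they cannot produce the current code without that shared secret.

```python
import base64
import hmac
import struct
import time


def totp(shared_secret_b32: str, digits: int = 6, step: int = 30) -> str:
    """Compute the current time-based one-time password (RFC 6238 style)."""
    key = base64.b32decode(shared_secret_b32, casefold=True)
    counter = int(time.time() // step)            # which 30-second window we are in
    digest = hmac.new(key, struct.pack(">Q", counter), "sha1").digest()
    offset = digest[-1] & 0x0F                    # dynamic truncation, per RFC 4226
    number = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(number % (10 ** digits)).zfill(digits)


if __name__ == "__main__":
    # Placeholder secret for illustration only; a real service generates one
    # per account and shares it with your authenticator app via a QR code.
    secret = "JBSWY3DPEHPK3PXP"
    print("current code:", totp(secret))
```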


Written by Bremen Leak

About the Author
Bremen Leak, a 2005 BYU graduate, has written for Marriott Alumni Magazine since 2006. A friend to useless trivia, he’s convinced that baseball statistics make the best passwords.
