About 15 years ago, phishing went from a virtually unknown phenomenon to an everyday media topic. With new users pouring onto the internet, and the commercialization of the internet starting in earnest, opportunities abounded for phishers, who use identity deception to defraud e-mail users. As a result, and in the absence of technical countermeasures, phishing e-mails were suddenly in everybody’s mailboxes. Practically speaking, the only defense was the advice offered by security experts: watch out for poorly spelled e-mails, and do not click on links.

Over the years, the sophistication of the attacks has risen constantly, and the number of varieties of deceptive e-mails has mushroomed, with attack strategies like the impersonation of colleagues (so-called business e-mail compromise, or CEO fraud) dramatically on the rise. The increased sophistication resulted in improved yields, tempting more and more would-be criminals to try their luck at deception.

Corporations and other organizations continue to believe they can train their users to evade cyberattacks. Gartner estimates the market for security awareness computer-based training will grow at a 42 percent compound annual growth rate through at least 2023, from $451 million in 2018.

But at this point, the traditional emphasis on user education is an expenditure of resources and end-user burden that can no longer be justified by the results. As online deception techniques proliferate and become more sophisticated, it becomes more and more difficult for individual users to detect fraud. The return on investment of any security awareness effort has fallen dramatically, while the burden on users to make security decisions has gone up.

User awareness should no longer be the primary defense against social engineering. In fact, cybercrime technology has evolved to the point that it can only be reliably defeated with opposing technology. Unaided humans are no longer able to adequately defend themselves against cybercrime, any more than fighters with bows and arrows can defeat enemies armed with attack helicopters.

Most defenses are better suited to algorithms than to end users. Security and risk management professionals should therefore educate end users only on the threats they can reasonably be expected to spot, while depending primarily on technical defenses against the overwhelming majority of attacks.

Early on, “traditional” phishing attacks were reported to have yields on the order of 3 percent, meaning that the vast majority of the intended victims did not fall for the attacks. On the other hand, sophisticated attacks such as spear phishing are known to see yields exceeding 70 percent.

Carefully crafted phishing e-mails (as well as other types of deceptive e-mails) are very hard for typical users to spot.

Some types of attacks are close to impossible to identify, even for highly technical users. Consider, for example, an attack in which the attacker compromises a legitimate e-mail account (e.g., by phishing the owner) and then uses the compromised account to attack contacts of the phished user.

Other attacks, such as those using deceptive display names to impersonate a colleague of an intended victim, are easier for a user to spot, at least in theory. By always inspecting the sender's e-mail address, and making sure that it belongs to a known user, one can avoid falling for such attacks. However, the increased scrutiny comes with a high price: for every extra step added to mundane tasks, our productivity naturally falls.

Moreover, these attacks are hard to detect in practice, given human error: many people, at least occasionally, accidentally send e-mails from personal accounts instead of work accounts, and vice versa, creating an ambiguity about what is trustworthy and what is not. As a result, one in 10 users clicks through e-mails with deceptive display names, the security company Barracuda reports.

Given finite budgets, both in terms of financial cost and attention, companies and individuals must decide which awareness battles to pick, based on what people struggle with versus what types of automated countermeasures work well. Take, for example, the advice “if it looks too good to be true, it probably is”—as well as the variant “if it looks too bad to be true, it probably is.” People have emotions and judgment to warn them when something falls in this category, but so far, computers do not. Accordingly, this is something worthy of an awareness campaign.

On the other hand, deceptive display names are relatively hard for people to spot, but quite easy for computers to detect. This is a problem where automated defenses are more suitable than awareness efforts.
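To illustrate why this check is easy to automate, here is a minimal sketch of one way a filter might flag a deceptive display name: parse the From: header, and raise a warning when the display name matches a known colleague but the address does not. The contact directory and addresses below are hypothetical example data, not any vendor's actual implementation.

```python
from email.utils import parseaddr

# Hypothetical directory mapping known colleagues' names to their
# legitimate e-mail addresses (in practice, built from an org directory).
KNOWN_CONTACTS = {
    "alice smith": {"alice.smith@example.com"},
    "bob jones": {"bob.jones@example.com"},
}

def is_deceptive_display_name(from_header: str) -> bool:
    """Flag a From: header whose display name matches a known colleague
    but whose address is not one of that colleague's known addresses."""
    display_name, address = parseaddr(from_header)
    key = display_name.strip().lower()
    if key not in KNOWN_CONTACTS:
        return False  # not impersonating anyone in the directory
    return address.lower() not in KNOWN_CONTACTS[key]

# A spoofed sender is flagged; the legitimate colleague is not.
print(is_deceptive_display_name('"Alice Smith" <attacker@evil.example>'))    # True
print(is_deceptive_display_name('"Alice Smith" <alice.smith@example.com>'))  # False
```

The check is trivial for a machine yet tedious for a person, which is exactly the division of labor the argument above calls for.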

For both digital health and human health, the relative influence of behavior versus technology is the same. From the time they are small children, humans are taught to avoid risks to their safety: don't eat dirt, don't cross the road without looking both ways, don't smoke. But the big gains in life expectancy achieved over the past century or so have come primarily from advances in medical technology for fighting disease.

The prescription is also the same: for human health, take care of yourself and avoid common risks, but by all means get a good doctor and take your medicine. For electronic health, teach your users basic digital hygiene, but commit your budget and time to staying a step ahead of the enemy in the technical arms race that is impossible to avoid.