Since last month’s pedestrian death at the hands of a self-driving Uber car, many have clamored for stronger safety guidelines for autonomous vehicles (AVs). The simplest form for such guidelines would be strict, across-the-board standards, perhaps demanding that any AV match or surpass humans on a comprehensive driving test. But a blanket approach to safety could both crimp AVs’ benefits and reduce their safety.

The crux of the issue is that autonomy is not a monolithic concept. Monorails, Roombas and C-3PO are all autonomous, but in different senses and with different constraints. Similarly, autonomous driving is not a single technology; it comes in gradations, and the vehicles that are imminent will not be independently hurtling through roundabouts in the snow at twilight. Our expectations and standards regarding safety will need to be tailored to the use cases those vehicles are designed for—many of which will be less sophisticated than we’ve been led to expect.


The nuances that have gotten the most airtime concern the degree of human involvement. The Society of Automotive Engineers (SAE) carves this spectrum into levels; setting aside Level 0 (no automation at all), they run:

  1. Driver assistance: the car manages either speed or steering, as with adaptive cruise control.
  2. Partial automation: the car handles both steering and speed in traffic, but the driver must stay engaged.
  3. Self-driving, but with a human available to take over if needed.
  4. No driver needed, under the right conditions.
  5. No driver needed, ever.

When most of us imagine AVs, we’re probably thinking of Level 4 or 5—“full autonomy.” That’s where most companies are focusing their efforts, from big names like Waymo and Ford to smaller outfits like Aptiv and Navya. Level 4+ is also what gets people agitated about roving robotaxis or lamenting the impending wave of out-of-work truckers.

Yet people’s understanding of driverlessness has somewhat outpaced the state of the art. Despite the “pilot” in their names, Tesla’s Autopilot and Nissan’s ProPilot Assist are merely Level 2, as the occasional inattentive driver has tragically demonstrated. Even Level 3 consumer vehicles remain an aspiration, with Audi positioning its 2019 A8 sedan to be the first to liberate the driver’s attention. “We might see some surprises in the next few years,” says Costa Samaras, assistant professor of civil and environmental engineering at Carnegie Mellon University, and some of these technologies are closer than they appear. But, he says, “the public feels that these [Level 4–5 cars] are right here right now, when in reality they’re not that close.”

When manufacturers downplay how far off these higher levels of autonomy really are, the elided distinctions can leave policymakers and the public with an exaggerated impression of what’s about to hit the market. Worse, drivers overestimate the capabilities of existing vehicles, relying on Level 2 systems to dodge obstacles or even counting on them while driving intoxicated.

When we talk about AV safety, then, it’s essential to specify just how autonomous the cars in question are. Some vehicles will only feature partial autonomy, in which case they shouldn’t have to pass a full driving test—but then, Samaras says, “companies [should] make it very explicit what the driver’s responsibilities [and] the vehicle’s capabilities are.”


Some manufacturers have begun advertising full autonomy. In addition to Waymo’s well-known pilot program in Arizona, soon to expand into an autonomous ride-hailing service, major manufacturers are promising Level 4 vehicles in the next few years, including GM (2019), Toyota (2020) and Ford (2021).

It might seem that full autonomy eliminates the ambiguities about what kinds of safety can be demanded of an AV. In fact, the opposite is true: every level of automation contains multitudes, Level 4 most of all.

After all, a monorail is fully autonomous, yet nobody would be impressed if GM declared victory with a high-tech subway car. The exciting prospect is AVs driving on streets, where they must react to lane markings, pedestrians and roadwork. If there are others on the road, a street-worthy AV must move harmoniously, choosing appropriate moments to pass, turn and yield. Night driving and weather add further wrinkles. Finally, if the vehicle travels outside the range of its detailed maps, it’s stuck navigating by signage alone—and woe betide the car that goes straight while in a left turn lane. In short, says Avery Ash, autonomous vehicle lead at transportation analytics company INRIX, “there are hundreds of different factors at play at any given moment … that impact a [vehicle’s] ability to operate.” The question of which scenarios are in scope becomes especially critical when there’s no driver to take over if conditions degrade.

The situations in which fully autonomous vehicles are poised to take over are much narrower than the hype might suggest. The pilot deployments currently being tested, such as Waymo’s taxis and the parking shuttles at Bishop Ranch in California, are commercially managed fleets confined to well-mapped areas with clear weather, wide streets and moderate traffic. Such constrained applications call for lower safety requirements than unrestricted driving. Holding them to unrealistically high standards would jeopardize AVs’ substantial benefits in the domains where they’re just becoming viable.

At the same time, no test for whether an AV outperforms humans could possibly cover all scenarios, Ash says; a vehicle that’s proven ultra-safe most of the time could still be dangerous in nighttime rain. Whatever safety rules are mandated for AVs, they’ll have to be clear about when a given vehicle can take to the roads.

The near future we need to prepare for, then, is not a magic moment when autonomous vehicles take over everything. Instead, we should ready ourselves for a slow rollout of Levels 3 and 4, concentrating first on highly constrained scenarios like small developments and long highway routes—more like buses at first than private cars. If regulators, manufacturers and consumers keep that vision in mind, they can ensure safety and benefit alike.