
People don’t trust driverless cars. Researchers are trying to change that


This October, television and web viewers were treated to an advertisement featuring basketball star LeBron James taking a ride in a driverless car. At first, James—known for his fearlessness on the court—peers in doubtfully at the vacant driver’s seat and declares: “Nope.” But after a short trip in the back seat, he has changed his tune. “Hey yo, I’m keepin’ this!” James exclaims to friends.

The ad, from computer chip–maker Intel in Santa Clara, California, is aimed at overcoming what could be one of the biggest obstacles to the widespread adoption of autonomous vehicles (AVs): consumer distrust of the technology. Unnerved by the idea of not being in control—and by news of semi-AVs that have crashed, in one case killing the owner—many consumers are apprehensive. In a recent survey by AAA, for example, 78% of respondents said they were afraid to ride in an AV. In a poll by insurance giant AIG, 41% didn’t want to share the road with driverless cars. And, ironically, even as companies roll out more capable semi-AVs, the public is becoming less—not more—trusting of AVs, according to surveys over the past 2 years by the Massachusetts Institute of Technology (MIT) in Cambridge and marketing firm J.D. Power and Associates.

Such numbers are a warning sign to firms hoping to sell millions of AVs, says Jack Weast, the chief systems architect of Intel’s autonomous driving group in Phoenix. “We could have the safest car in the world,” he says, “but if consumers don’t want to put their kids into it, then there’s no market.”

Consumer distrust has become a catalyst, prompting researchers in industry and academia to launch a wide range of studies aimed at understanding how people perceive AVs—and what might persuade skeptics to change their views. Some are studying how those outside the vehicle, including pedestrians and nearby drivers, react to driverless vehicles. Others are focusing on how passengers interact with AVs, for instance by testing whether people are more likely to trust cars that talk or share visual information on screens. Bertram Malle, a psychologist at Brown University, predicts that “acceptability is going to depend on how people feel when they are riding in an AV.”

To gauge how bystanders might react to autonomous vehicles, researchers are conducting “ghost driver” experiments in which a driver is hidden by a seat suit.

FORD MOTOR COMPANY

An Intel study conducted earlier this year at its corporate campus in Chandler, Arizona, suggested that—as in the James ad—familiarity will ease some anxiety. Researchers recruited a diverse group of 10 volunteers and offered them a choreographed, 5-minute ride in an AV on a closed course. The ride was structured to resemble one offered by a ride-hailing service. Passengers used a phone app to summon and unlock the car. They sat in the back seat with an Intel employee, while a safety driver sat in the front. As the car stopped for pedestrians or took a detour, it occasionally announced its actions. The passengers were videotaped discussing their thoughts before, during, and after the ride.

Like James, most of the volunteers were a bit apprehensive beforehand, Weast says. But they all turned “almost 180°” after the ride. One mother decided that the car was actually safer than one driven by a human and said AVs would allow her to reclaim the 4 to 6 hours a day she spends shuttling her kids around to school and practices. “The daily life sort of benefits became quickly obvious,” Weast says. “Participants probably wouldn’t have ever dreamed they’d say that before that ride.”

The volunteers also offered some surprising reactions to one trust-building technology that AV-makers are testing: giving the vehicle a voice. Weast says passengers appreciated hearing the car say it was slowing for a pedestrian, because otherwise they might wonder what was happening. “What was funny, though, is that after hearing that a couple of times the passengers quickly transitioned to: ‘OK, OK, I get it. Stop talking to me,’” he says. “Which really was frankly amazing to us. In the span of a couple minutes, people got so comfortable that they quickly transitioned to, ‘I want to just sit back and relax or play with my phone.’”

Other experiments also suggest that riders find talking cars comforting. In one, Wendy Ju, a mechanical engineer at Stanford University’s Center for Automotive Research in Palo Alto, California, put volunteers in a car with a visual barrier that prevented passengers from seeing the driver. The passengers were encouraged to pretend—and in some cases came to believe—that the car was driving itself.

The car didn’t talk, but when asked by an experimenter to say “Hello,” it revved its engine. “I was surprised how much I trusted it,” one rider said afterward. “When it said ‘Hello,’ that one thing gave it enough personality for me to trust it.”

Humanizing the car could even shape how a passenger might respond to an accident, a 2014 study concluded. Researchers led by Adam Waytz, a psychologist at Northwestern University in Evanston, Illinois, asked participants to sit in a simulated AV and take a short “ride” while being videotaped and having their heart rate monitored. Some were told that they were riding in a vehicle named Iris, and heard a female voice throughout the ride; others rode in a mute, nameless car. During the simulation, another car crashed into the AV. Iris’s passengers were less startled by the wreck—suggesting that they trusted the car because they had anthropomorphized it.

If you think the car likes you, you think it’s going to try harder to do well, and that’s terribly worrisome.

Wendy Ju, Stanford University

Communication should go both ways, researchers believe. During the Intel test, some passengers mistakenly assumed the car could not only talk, but also hear. “People immediately wanted to start talking back to the car, like it was their digital assistant,” Weast says. Some riders might also want to use hand gestures to instruct it precisely where to pull over, he notes, which would require deploying chips capable of speech and gesture recognition.

It is also possible that riders might come to trust a friendly, smooth-talking car too much. “If you think the car likes you, you think it’s going to try harder to do well, and that’s terribly worrisome,” Ju says. It could lead to unrealistic expectations, because even the safest AV won’t be risk-free.

Researchers at Uber and elsewhere are testing other ways of allowing AVs to communicate with riders, including video screens, sounds, and even vibrating components that might indicate impending turns and stops. In one test, by carmaker Daimler, screens displayed “shadow” images: representations of the landmarks, traffic signals, and adjacent cars that the vehicle must navigate. A rider watching the screen “would very naturally feel that the car is sensing everything around,” says Alexander Mankowsky, a futurologist at Daimler in Stuttgart, Germany.

To make riders more comfortable, some autonomous vehicles feature screens that show passengers what the vehicle is sensing.

JEFF SWENSEN/THE NEW YORK TIMES

Overall, researchers are learning that riders “like to see what the car sees,” says Carol Reiley, a roboticist and co-founder of Drive.ai in Mountain View, California, which experiments with displays. And, “If you do a good job,” she notes, riders “get bored”—and don’t spend the trip nervous and on edge. The goal should be no surprises for the rider, Malle says. “Understanding why the vehicle is doing what it is doing will be a critical factor in trust and acceptance.”

Giving riders a bit of control over the car’s behavior can also build trust. For example, a team led by Anca Dragan, a roboticist at the University of California (UC), Berkeley, devised an experiment in which potential AV riders were presented with common driving situations—such as passing another car—then asked to choose between two candidate AV behaviors, represented by illustrative videos of the car’s path. Based on the answers, a machine-learning algorithm diagnosed the user’s preferred driving style. Ultimately, the system learned to behave as the “rider” wanted.
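As a rough sketch of how such a system could work, here is a simple pairwise-choice preference learner. The feature set, update rule, and all names below are hypothetical illustrations, not the Berkeley team’s actual implementation:

```python
# A minimal sketch of learning a rider's driving style from pairwise
# choices, loosely modeled on the experiment described above. The
# features and the Bradley-Terry-style update are assumptions.
import numpy as np

def preference_probability(w, features_a, features_b):
    """P(rider prefers behavior A over B) under a logistic choice model."""
    return 1.0 / (1.0 + np.exp(-(w @ (features_a - features_b))))

def update_weights(w, features_a, features_b, chose_a, lr=0.1):
    """One gradient step on the log-likelihood of the observed choice."""
    p_a = preference_probability(w, features_a, features_b)
    grad = ((1.0 if chose_a else 0.0) - p_a) * (features_a - features_b)
    return w + lr * grad

# Toy usage: each candidate maneuver is summarized by hand-chosen
# features, e.g. [average speed, following distance, lateral jerk].
w = np.zeros(3)
cautious = np.array([0.4, 0.9, 0.1])    # slower, longer gap, smoother
aggressive = np.array([0.9, 0.3, 0.7])
for _ in range(50):                      # rider keeps picking the gentler option
    w = update_weights(w, cautious, aggressive, chose_a=True)
print(w @ cautious > w @ aggressive)     # True: cautious now scores higher
```

After enough choices, the learned weights can score candidate trajectories, letting the planner favor the style the rider demonstrated.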

The idea, Dragan says, is to have “the car adapt to the person, rather than having the person adapt to the car.” Such customization might help address what Berkeley Dietvorst, a marketing researcher at the University of Chicago Booth School of Business, calls “algorithm aversion”—people’s unwillingness to trust digital decision-making, even if it is better than their own. He’s found that offering even a nominal amount of control over an algorithm’s output increases its acceptance.

Researchers are also studying ways to put people outside the AV—pedestrians and people in other cars—at ease. A team led by Ju, for instance, developed a relatively simple “ghost driver” system in which a driver of a conventional car dresses up as the driver’s seat, making it appear that the vehicle has no operator. Then, the researchers record the responses of bystanders. Among other things, the team has found that, at crosswalks, pedestrians like to have some acknowledgement from an AV that it has “seen” them. Without such cues, they will sometimes go out of their way to avoid the vehicle.

What bearing does outsiders’ trust in AVs have on the technology’s adoption by consumers? “Not behaving like a jerk when you’re driving, that’s actually important to people,” Ju says. “Nobody wants to be the asshole driver, even if they’re not actually driving the car.”

Some riders might welcome a say in how aggressively a driverless car, such as this one in Pittsburgh, Pennsylvania, navigates through traffic.

FORD MOTOR COMPANY

After hearing about the ghost driver studies, John Shutko, a human factors researcher at Ford Motor Company in Dearborn, Michigan, decided to expand the purview of his research, which had focused primarily on riders. That decision “kind of doubled or tripled our workload,” he recalls, “because now we’re not just focused on the drivers or customers of Ford products, but on how all of society will interact with our vehicles.”

Shutko has taken the ghost driver protocol one step further. In collaboration with researchers at Virginia Polytechnic Institute and State University’s Transportation Institute in Blacksburg, he has dispatched a van with a hidden driver that is also equipped with a bar of lights at the top of the windshield. When the pseudo-AV reaches a crosswalk, the lights flash in different patterns in a bid to communicate with pedestrians.

The researchers chose just three signals, simple and distinctive enough for observers to understand after two or three exposures. One indicates that the car is in autonomous mode, another that it’s preparing to yield, and the third signals it’s about to accelerate. So far, they’ve recorded 150 hours of video of pedestrians’ reactions, but haven’t fully analyzed them yet. Now, Shutko is planning simulations aimed at understanding whether people will become confused if dozens of AVs equipped with messaging lights converge at an intersection.
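The three intents match what the researchers tested, but as a purely hypothetical sketch (Ford has not published the actual flash sequences), such a scheme might be encoded like this:

```python
# Hypothetical encoding of a three-signal light-bar scheme; the flash
# patterns are invented for illustration only.
from enum import Enum

class LightSignal(Enum):
    AUTONOMOUS_MODE = "slow steady pulse"   # the car is driving itself
    YIELDING = "side-to-side sweep"         # the car is preparing to yield
    ACCELERATING = "rapid blink"            # the car is about to accelerate

def pattern_for(signal: LightSignal) -> str:
    """Map a vehicle intent to the flash pattern the light bar displays."""
    return signal.value

print(pattern_for(LightSignal.YIELDING))    # -> "side-to-side sweep"
```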

What worries Shutko is the possibility that different automakers will use different signals, so he’s working with manufacturers and universities to set a standard. “My goal,” Shutko says, “is that a grandparent could see one of these vehicles, understand the lights, come home, explain to their grandchild, and feel comfortable with their grandchild walking to school with these vehicles running around.”

How AVs are programmed to handle difficult ethical dilemmas could also affect that comfort level. The most famous is the so-called “trolley problem,” named after a scenario in which a person can either passively allow a trolley to careen down a track and kill five workers or actively flip a switch so that it changes course and kills only one. In a stark AV analog, the car must decide whether to avoid killing five pedestrians by driving its passenger off a cliff. Should it save the many bystanders or the one rider?

In one survey, most respondents thought that AVs should put a priority on saving the most lives, even if it meant endangering passengers—provided they weren’t the passengers. When it came to their own cars, people put passenger safety first, according to the survey, published last year in Science by a team led by Iyad Rahwan, a computer scientist at MIT. Such attitudes suggest a counterintuitive notion: To win consumer acceptance and save more lives in the end, manufacturers might have to field cars that are less utilitarian—programmed to save fewer lives—when faced with those rare trolley dilemmas.
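As a toy illustration of the two policies at stake (entirely hypothetical; no manufacturer has published decision logic like this), the difference comes down to the objective the car minimizes:

```python
# Utilitarian vs. passenger-protective policies, reduced to a toy choice.
# Each action is (bystander_deaths, passenger_deaths); both numbers are
# hypothetical illustrations, not real risk estimates.
def choose_action(actions, utilitarian):
    if utilitarian:
        # Minimize total expected deaths, whoever they are.
        return min(actions, key=lambda a: a[0] + a[1])
    # Protect the passenger first, then minimize bystander harm.
    return min(actions, key=lambda a: (a[1], a[0]))

# The cliff scenario: swerve (kills the one rider) or stay (kills five).
swerve, stay = (0, 1), (5, 0)
assert choose_action([swerve, stay], utilitarian=True) == swerve
assert choose_action([swerve, stay], utilitarian=False) == stay
```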

When asked about their top concerns about AVs, however, most people don’t mention trolley problems. And even telling people explicitly about the dilemma doesn’t necessarily heighten their fear of AVs or reduce their desire to buy one, Rahwan reported at a Psychology of Technology conference held last month at UC Berkeley.

Still, researchers say media coverage of the trolley problem could shape public opinion. Last year, Mercedes-Benz faced public indignation after a company official told Car and Driver magazine, “If you know you can save at least one person, at least save that one. Save the one in the car.” The Daily Mail newspaper in the United Kingdom reprinted the quote under the headline: “Mercedes-Benz admits automated driverless cars would run over a CHILD rather than swerve and risk injuring the passengers inside.” The company quickly did damage control, but the episode is a reminder that “public outrage is a really difficult thing to predict,” says Azim Shariff, a psychologist at UC Irvine who co-authored the Science trolley paper. “If what happened with Mercedes is any indication, the public resistance to nonutilitarian cars could end up being a big deal.”

Surveys show consumers have plenty of other concerns. They worry that AVs could be hacked, threatening both control of the vehicle and data privacy. And a steady stream of stories about AV accidents has fed concerns about safety. Last year alone, an Uber AV ran a red light in San Francisco, California, a Google AV bumped into a bus in Mountain View, and a semiautonomous Tesla operating on Autopilot collided with a truck in Florida, killing the Tesla’s driver.

Such incidents suggest that companies need to be careful about raising unrealistic expectations for AV technology. Some observers, for example, fault Tesla for not making it clearer that its Autopilot system still required the driver to pay attention and be ready to take control. Studies have found that consumers tend to overestimate the amount of autonomy provided by the slew of assistive technologies that manufacturers are now adding to cars, such as sophisticated cruise controls that enable a car to automatically stay in lanes and adjust its speed depending on surrounding traffic. In one survey, people even suggested names for such semiautonomous cars: “Potential disaster car,” “Bad Science car,” and “Boy are you lazy.”

Ultimately, to build trust in AVs, experts say companies will need to clearly communicate the technology’s strengths and weaknesses. “We have started to use the term ‘calibrated trust,’” Malle says. “You want people to trust the AV … with respect to the things it is actually good at, but not trust the AV with things it is not good at.” A key message, Mankowsky says, is that for all the potential benefits of the technology, consumers “should not believe in magic.”


News credit: Sciencemag
