Are self-driving cars safe for our cities?

By BMaaS Contributor | Posted on April 3, 2018

Self-driving vehicles may be poised to deliver a future of safer, greener streets for all, but testing the vehicles on today’s streets is a concern. Farrells and WSP | Parsons Brinckerhoff

From ushering in an era of decreased car ownership to narrowing streets and eliminating parking lots, autonomous vehicles promise to dramatically reshape our cities. But after an Uber-operated self-driving vehicle struck and killed a pedestrian in Tempe, Arizona, on March 18, 2018, there are more questions than ever about the safety of this technology, especially as these vehicles are tested more and more frequently on public streets.

Some argue the safety record for self-driving cars isn’t proven, and that it’s unclear whether enough testing miles have been driven in real-life conditions. Other safety advocates go further, saying that driverless cars introduce a new problem to cities, which should instead be focusing on improving transit and encouraging walking and biking.

Contentions aside, the autonomous revolution is already here, although some cities will see its impacts sooner than others. From Las Vegas, where a Navya self-driving minibus scoots slowly along a downtown street, to General Motors’ Cruise ride-hailing service in San Francisco with backup humans in the driver’s seat, to Waymo’s family-focused pilot program in Chandler, Arizona, which uses no human operators in its Chrysler Pacifica minivans at all, the country is accelerating toward a driverless future.

While the U.S. government has historically been confident in autonomous vehicles’ ability to end the epidemic of traffic deaths on our streets, there are plenty of concerns from opponents of self-driving cars that are making cities think twice before welcoming them.

Are autonomous vehicles safe?
In 2009, Google launched its self-driving project with a focus on saving lives and serving people with disabilities. In a 2014 video, Google showed blind and elderly riders climbing into its custom-designed autonomous vehicles, part of the company’s plan to “improve road safety and help lots of people who can’t drive.” Although there were several self-driving projects in the country at the time, many developed by government agencies or university labs, Google’s project differentiated itself by being public-facing. The goal was not to build cars (although Google did build its own testing prototypes) but to create a self-driving service that would help regular people get around. Google began testing its vehicles on public streets the same year the project launched.

With the reorganization of Google into its new parent company, Alphabet, the self-driving program became its own entity, Waymo. Almost a decade later, Waymo remains the clear leader for safe self-driving miles on U.S. streets. According to Waymo’s monthly reports, its vehicles have been in two dozen crashes, only one of which was the fault of the Waymo vehicle: it bumped a bus while going 2 miles per hour.

There are now dozens of autonomous vehicle companies testing on U.S. streets. As of February 2018, Waymo had logged five million self-driven miles, more than any other company; the next most experienced, Uber and GM Cruise, are still at least two million miles behind. That figure doesn’t include miles driven in the semi-autonomous modes that many cars now offer, like Tesla’s Autopilot, which are driver-assistance systems rather than true self-driving.
In the last few years, the greatest strides in the self-driving industry have been made by ride-hailing companies, which are devoting an exceptional amount of time and money to developing their own proprietary technologies and, in many cases, giving members of the public rides in their vehicles. In 2017, Lyft’s CEO predicted that within five years, all of its vehicles would be autonomous. At a press conference in March 2018, where Waymo CEO John Krafcik announced the company’s ride-hailing program, Krafcik claimed it would be making at least one million trips per day by 2020.

Can autonomous cars drive better than humans?

The biggest safety advantage of an autonomous vehicle is that a robot is not a human: it is programmed to obey all the rules of the road, won’t speed, and can’t be distracted by a text message flickering onto a phone. And, hypothetically at least, AVs can also detect what humans can’t, especially at night or in low-light conditions, and react more quickly to avoid a collision.

AVs are laden with sensors and software that work together to build a complete picture of the road. One key technology is LIDAR, or “light detection and ranging.” By firing millions of laser pulses and measuring their reflections, LIDAR draws a real-time, 3D image of the environment around the vehicle. In addition to LIDAR, radar sensors measure the size and speed of moving objects, and high-definition cameras can actually read signs and signals. As the car travels, it cross-references all this data with GPS technology that situates the vehicle within a city and helps plan its route. On top of the sensors and maps, AVs run software that makes real-time decisions about how the car will navigate relative to other vehicles, humans, or objects in the road. Engineers can run the cars through simulations, but the software also needs to learn from actual driving situations. This is why real-world testing on public roads is so important.
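As a rough illustration of the sense-and-decide loop described above (every name, number, and threshold here is invented for the example; no real AV stack works from logic this simple):

```python
from dataclasses import dataclass

@dataclass
class DetectedObject:
    """One object fused from LIDAR range, radar speed, and camera label."""
    kind: str          # e.g. "pedestrian", "cyclist", "vehicle"
    distance_m: float  # range ahead, from LIDAR
    speed_mps: float   # closing speed contribution, from radar

def plan_action(objects, own_speed_mps):
    """Toy decision rule: brake if any detected object would be
    reached in under 2 seconds at the current closing speed."""
    for obj in objects:
        closing = own_speed_mps + obj.speed_mps
        if closing > 0 and obj.distance_m / closing < 2.0:
            return "brake"
    return "cruise"

# A cyclist 20 m ahead while traveling ~30 mph (13.4 m/s):
scene = [DetectedObject("cyclist", 20.0, 0.0),
         DetectedObject("vehicle", 80.0, -5.0)]
print(plan_action(scene, 13.4))  # → "brake"
```

Real planners weigh thousands of such signals probabilistically, which is exactly why the simulated-city and on-road testing described above matters.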
But how AV companies gather that information has led to greater concerns about how autonomous vehicles can detect and avoid vulnerable road users: cyclists and pedestrians, but also people who move more slowly and erratically through streets, like seniors and children. Waymo, for example, claims its software has been explicitly programmed to recognize cyclists. A video that Waymo released in 2016 (back when it was still part of Google) shows one of its vehicles detecting and stopping for a wrong-way cyclist coming around a corner at night.

This is why self-driving companies put their vehicles through endless tests on simulated city streets. Many traditional automakers use a facility named M City in Ann Arbor, Michigan, but the larger self-driving companies have built their own fake cities specifically to test interactions with humans who are not in vehicles. Waymo’s fake city, named Castle, even has a shed full of props, like tricycles, that might be used by people on streets, so that Waymo’s engineers can learn how to identify them.

USDOT has been testing autonomous technology at the M City facility for many years. M City

Will eliminating human drivers reduce traffic deaths?

About 50 years ago, the U.S. rate of traffic deaths was far higher than it is now: in 1972, generally considered the deadliest year on U.S. streets, over 50,000 people were killed. With safety features like airbags added to vehicles, stricter seat belt laws, and campaigns that stigmatized drunk driving, the rate of deaths went down significantly. But over the last few years, the U.S. has seen a slight increase in traffic deaths again. Pedestrian fatalities, in particular, increased by 27 percent over the last decade, while all other traffic fatalities decreased by 14 percent. There isn’t agreement on why these deaths are increasing, but some experts believe it is because Americans are driving more: overall vehicle-miles traveled (VMT) reached an all-time high in 2017.
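Because both total deaths and total driving can rise at the same time, safety analysts normalize fatalities by miles driven, conventionally per 100 million VMT. The figures below are round, illustrative numbers, not official NHTSA statistics:

```python
def fatalities_per_100m_vmt(deaths, vmt_miles):
    """Fatality rate normalized per 100 million vehicle-miles traveled."""
    return deaths / (vmt_miles / 100_000_000)

# Round, illustrative figures: ~37,000 deaths in a year against
# ~3.2 trillion miles driven
rate = fatalities_per_100m_vmt(37_000, 3_200_000_000_000)
print(round(rate, 2))  # → 1.16
```

On this normalized measure, U.S. roads are far safer than in the early 1970s even in years when the absolute death count ticks upward.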
Using USDOT’s claim that 94 percent of crashes are caused by human error, the most obvious way to reduce crashes is to reduce the number of humans behind the wheel. But it’s not just human drivers that should be reduced; the U.S. could also reduce the number of cars on roads to prevent fatalities, and autonomous vehicles can help do that, too. The real safety promise of autonomous vehicles is that they can be summoned on demand, routed more efficiently, and easily shared. Not only would the overall number of single-passenger cars on streets decline, the number of single-passenger trips would too, meaning a reduction in overall miles traveled.

In addition, cities can use automated vehicles to tackle ambitious on-demand transit projects, like a proposed initiative to integrate shared self-driving vehicles into the public transit fleet. If cities can launch these kinds of “microtransit” systems that serve as a first-mile/last-mile solution to get more people to fixed-route public transportation, that will also mean fewer people in cars and more people on safer modes of transit. Without having to make room for so many cars, city streets can be narrowed, leaving even more room for pedestrians and bikes to safely navigate cities. In this way, autonomous vehicles have a great role to play in a Vision Zero strategy, which most major U.S. cities have adopted in order to eliminate traffic deaths.

A typical U.S. roadway remade as a safe, accessible street filled with autonomous technology, from shared taxibots to self-driving buses, from NACTO’s Blueprint for Autonomous Urbanism. NACTO

But aren’t human-driven cars safer now, too?

While residents of only a few cities can summon an AV on demand right now, the truth is that much of the safety tech powering self-driving cars is making its way into today’s cars.
Sophisticated collision-avoidance systems, for example, which can stop a vehicle if an object or person is detected in its path, are already being incorporated into new cars and buses. The way the National Highway Traffic Safety Administration (NHTSA) tests those kinds of safety innovations is changing, too. Until recently, all safety standards were based on historical crash data, meaning the government had to track years and years of roadway incidents (and, in many cases, deaths) before making an official recommendation. Now technology is advancing so quickly that there’s not enough time to test every new idea for a decade, and the government knows it needs to be more nimble.

That’s what happened with a recent USDOT recommendation that all cars be equipped with vehicle-to-vehicle communication (V2V), a tool that allows cars to “talk” to each other. This recommendation was fast-tracked in 2015 by U.S. transportation secretary Anthony Foxx after detailed simulations and modeling showed that the benefits were obvious; there was no need to spend years collecting historical data. The same type of recommendation might be made for an aspect of autonomous tech: once a clear safety benefit has been proven across the self-driving industry, a specific feature might become standard on all vehicles.

An 8-person autonomous shuttle by Navya travels a route at 15 mph in Downtown Las Vegas. Keolis

Where are self-driving cars being tested?

About half of U.S. states allow testing of autonomous vehicles on public roads, but regulations vary widely from state to state. The majority of testing is focused in a handful of states: Arizona, California, Georgia, Michigan, Nevada, Texas, Pennsylvania, and Washington. California remains the busiest hub for the AV industry: there are currently 52 companies testing self-driving technology on the state’s streets.
It’s also one of the most heavily regulated markets: California’s Department of Motor Vehicles requires companies to file for a permit and submit annual reports that include the number of miles driven and any crashes. One performance standard that helps illustrate how the technology is improving, though it’s not necessarily used as a safety metric, is the number of times per self-driven mile that a human driver has to take over, called a “disengagement.” California DMV records show that as self-driving programs log more on-road experience, they see fewer and fewer disengagements. Waymo, for example, now sees one disengagement for roughly every 5,600 miles driven.

Other states don’t require as much documentation as California, and companies there aren’t necessarily required to make any information public. Arizona, for example, approved AV testing on public roads in 2016 without notifying its residents and didn’t require any reports from companies, although after Uber’s fatal crash, that will likely change.

Hills, snow, quirky local driving customs, and loose state regulations are some of the reasons Uber started testing its self-driving program in Pittsburgh. AP Photo/Jared Wickerham

Does the federal government regulate autonomous vehicles?

In 2016, the U.S. government released its long-awaited rules on self-driving vehicles. The Department of Transportation’s 116-page document lists many benefits of bringing the technology to market, among them improved sustainability, productivity, and accessibility. But the USDOT report’s central promise is that autonomy will pave the way for policies that dramatically improve road safety. Even President Obama made the case for safety in an op-ed that heralded the dawn of the new driverless age: “Right now, too many people die on our roads—35,200 last year alone—with 94 percent of those the result of human error or choice. Automated vehicles have the potential to save tens of thousands of lives each year.
And right now, for too many senior citizens and Americans with disabilities, driving isn’t an option. Automated vehicles could change their lives.”

To get cities across the country thinking about using autonomy to solve transportation problems, USDOT hosted the Smart City Challenge in 2016, which awarded $40 million to Columbus, Ohio, to develop a fleet of autonomous transit vehicles. As a result of the challenge, the 70 cities that competed now have blueprints for how to fold AV tech into their transportation planning.

Under the Trump administration, much of the proposed legislation has centered on exemptions for automakers and on increasing the number of AVs allowed to operate on U.S. streets. In September 2017, USDOT and NHTSA issued updated AV guidelines with an even lighter regulatory touch, after industry leaders expressed concerns that regulation at the federal level would stifle innovation. In addition to the 2017 policy statement, Transportation Secretary Elaine Chao held preliminary hearings about autonomous vehicles at which she affirmed the government would not play a heavy-handed role. “The market will decide what is the most effective solution,” she said. However, the aggressive development of V2V, which experts agree can make human-driven cars much safer as autonomous technology comes to market, has not been a priority during her leadership.

Tesla’s Autopilot, one of many driver-assist features that allow control of the vehicle to switch from human to computer, can distract drivers or give them a false sense of security. The Verge

What’s the difference between semi-autonomous and fully autonomous?

One safety debate continues to divide the self-driving industry: some automakers are still pushing for vehicles that allow control to pass from human to computer, offering drivers the ability to toggle between semi-autonomous and fully autonomous modes.
Two fatal Tesla crashes, one in 2016 and one in 2018, that occurred while the drivers were using the vehicle’s Autopilot feature illustrate the dangers of a semi-autonomous mode. As the National Transportation Safety Board (NTSB) noted in its report on the 2016 crash, semi-autonomous systems give “far more leeway to the driver to divert his attention to something other than driving.”

Full autonomy is the official policy recommendation of the Self-Driving Coalition for Safer Streets, a lobbying group that wants cars to eventually phase out steering wheels and let the software take over 100 percent of the time, eliminating the potential for driver error entirely. General Motors is planning to make cars without steering wheels by 2019. In 2018, Waymo began conducting fully autonomous testing in Arizona without a human safety driver, and California now allows fully autonomous testing as well. But especially after the Uber crash, San Francisco bike advocates worry that the tech isn’t powerful enough to see cyclists; the California Bicycle Coalition started a petition to stop fully autonomous vehicles from being tested on California streets.

At least for the near future, even fully autonomous vehicles will still have to contend with the mistakes of human drivers. To make self-driving technology the safest it can be, all the vehicles on the road should be fully autonomous: not just programmed to obey the rules of the road, but also to communicate with each other. In 2017, the National Association of City Transportation Officials (NACTO) created a Blueprint for Autonomous Urbanism, which encourages cities to deploy fully autonomous vehicles that travel no faster than 25 mph as a tool for making streets safer, “with mandatory yielding to people outside of vehicles.” From new street designs to accessibility guidelines to a focus on data sharing, NACTO’s policy document provides the most detailed AV recommendations yet for U.S. urban transportation planners.
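The vehicle-to-vehicle communication idea raised above can be pictured with a toy sketch. Real V2V systems exchange standardized safety messages (the SAE J2735 message set); everything below, from the field names to the 50-meter threshold, is invented purely for illustration:

```python
import math
from dataclasses import dataclass

# Toy stand-in for a V2V safety broadcast; real messages carry far
# richer state (heading, acceleration, brake status, and more).
@dataclass
class SafetyMessage:
    sender_id: str
    x_m: float      # position on a local grid, meters
    y_m: float
    speed_mps: float

def proximity_warning(own, others, radius_m=50.0):
    """Return IDs of broadcasting vehicles within radius_m of us."""
    return [o.sender_id for o in others
            if math.hypot(o.x_m - own.x_m, o.y_m - own.y_m) < radius_m]

me = SafetyMessage("car-A", 0.0, 0.0, 12.0)
traffic = [SafetyMessage("car-B", 30.0, 20.0, 10.0),
           SafetyMessage("car-C", 300.0, 0.0, 15.0)]
print(proximity_warning(me, traffic))  # → ['car-B']
```

The point of such broadcasts is that a car can be warned about a vehicle its own sensors cannot yet see, around a blind corner, for instance, which is why advocates treat V2V as a safety layer for human-driven and autonomous cars alike.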
To plot the safest path forward for self-driving vehicles, and for cities to reap the many other environmental and social benefits of the technology, AVs should provide shared rides in regulated fleets, integrate with existing transit, and operate in a way that prioritizes a city’s most vulnerable residents above all other users of its streets.