Self-Driving

We need self-driving cars that can monitor our attention along with our emotions

By BMaaS Contributor
Posted on April 13, 2018
10 min read

Last month, for the first time ever, a pedestrian was killed by an autonomous vehicle. Elaine Herzberg’s death at the hands of a self-driving Uber vehicle in Arizona has spurred a crisis of conscience in the autonomous vehicle industry. Now, engineers and startups are scrambling to shift the focus to technology that they say could help prevent future self-driving collisions, especially as more and more autonomous vehicles are expected to hit the road in the coming years.

One such startup is Renovo Auto, a Silicon Valley company that has developed an operating system that integrates all the software needed to run a fleet of autonomous vehicles. You might remember the Renovo Coupe, a $529,000 electric supercar with 1,000 pound-feet of torque and a 0–60 time of 3.4 seconds, or, more recently, its project to convert a DeLorean to an electric powertrain and then do autonomous donuts with it.

Now, Renovo is highlighting its work to build a self-driving car system that monitors not only driver attention, but can also read passenger and pedestrian facial expressions for a better understanding of the emotions inside and outside the vehicle. The company recently started working with AI startup Affectiva to integrate this new technology into its fleet of test vehicles. The point is to “build trust” between the passengers and the technology powering the car, said Renovo CEO Chris Heiser.

“We spend a lot of time trying to figure out how to sense inanimate objects with LIDAR and cameras, and that’s super important,” Heiser said. “But automated mobility has a huge human component.
And companies like Affectiva give us a brand-new data stream to look at and help every single one of the people in our ecosystem — people who are building self-driving, people who are building teleoperation, people who are building ride-hailing applications — they all want to know how people are feeling and reacting to these automated vehicles.”

Affectiva’s technology works as both a driver monitoring tool, making sure safety drivers keep their eyes on the road even while the self-driving software is operating the vehicle, and an emotional tracker to ensure robot taxi passengers feel safe and secure during their autonomous trips. Using deep learning algorithms, Affectiva trained its software to read emotional reactions by studying a wide range of people of all ages and ethnic backgrounds, Heiser said. Renovo then integrates Affectiva’s application into its operating system, which allows it to access any of the cameras inside or outside the car.

The way Affectiva determines human emotions is pretty interesting. According to the company’s website:

“Computer vision algorithms identify key landmarks on the face – for example, the corners of your eyebrows, the tip of your nose, the corners of your mouth. Deep learning algorithms then analyze pixels in those regions to classify facial expressions. Combinations of these facial expressions are then mapped to emotions.”

Affectiva says it can measure seven “emotion metrics”: anger, contempt, disgust, fear, joy, sadness, and surprise. It can provide 20 facial expression metrics as well. The company has both a software development kit (SDK) and an application programming interface (API) that provide emoji, gender, age, ethnicity, and a number of other metrics. There’s no word, though, on how effective it would be with someone who may be sociopathic, or at least very good at suppressing their emotions.
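The landmark-then-classify pipeline Affectiva describes can be sketched in a few lines of Python. This is a toy illustration only — the function names, expression labels, and the lookup table standing in for the learned classifier are all hypothetical, not Affectiva’s actual SDK or model:

```python
# Toy sketch of the pipeline described above:
# facial landmarks -> facial expressions -> emotion label.
# All names here are illustrative, not Affectiva's real API.

# The seven "emotion metrics" the company says it measures.
EMOTIONS = ["anger", "contempt", "disgust", "fear", "joy", "sadness", "surprise"]

# A hand-written mapping from combinations of facial expressions to
# emotions, standing in for the deep-learning classifier.
EXPRESSION_TO_EMOTION = {
    frozenset({"brow_raise", "jaw_drop"}): "surprise",
    frozenset({"smile", "cheek_raise"}): "joy",
    frozenset({"brow_furrow", "lip_press"}): "anger",
}

def classify_emotion(expressions):
    """Map a set of detected facial expressions to an emotion label.

    In the real system, pixels around landmarks (eyebrow corners,
    nose tip, mouth corners) are analyzed by a neural network; here a
    simple lookup stands in for that step.
    """
    return EXPRESSION_TO_EMOTION.get(frozenset(expressions), "neutral")
```

For example, a detected smile plus a cheek raise would map to “joy”, while an unrecognized combination falls back to “neutral”.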
A vehicle that can detect whether a passenger is scared can slow down, or dim the lights if it senses annoyance or frustration, Heiser said. More importantly, with the camera pointed at the safety driver, Renovo can tell whether that person is tired or distracted, and deliver the right prompts or warnings to keep attention on the road ahead.

And that’s where Renovo and Affectiva’s collaboration perhaps could have prevented the fatal Uber collision last month. Dash camera footage released by Tempe police showed Uber’s safety driver looking downward for several seconds before the crash. A driver monitoring system like the one proposed by Renovo could have prompted the driver to look up, perhaps with enough time to avoid colliding with Herzberg.

Of course, Renovo isn’t the only company working on driver monitoring tools. Many are available in production models on the road today. Most prominent is Cadillac’s “hands-free” Super Cruise semi-autonomous system, which uses infrared cameras mounted on the steering column to track driver attention and ensure drivers stay focused on the road. And the new Subaru Forester comes with facial recognition technology to help detect driver fatigue.

But Heiser said those systems can take years before they are production ready, while Renovo’s work with Affectiva can be ready much sooner. “Working with pure software and dropping it onto a platform gives us a lot of speed in deploying,” he said. “And it also means you can take things like Affectiva and integrate it directly with the self-driving system or the teleoperation system or content delivery system. That’s something we’re able to demonstrate in a matter of days or weeks.”

Heiser said he wants Renovo to be for self-driving cars what Amazon Web Services is for cloud-computing platforms. The company has created a software intermediary to help other companies bring all the pieces of self-driving technology together.
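The core of a distraction warning like the one described above is simple: track how long the driver’s gaze has been off the road and escalate once a threshold is passed. The sketch below shows that logic under assumed parameters (frame rate, eyes-off-road threshold); it is a minimal illustration, not Renovo’s or any production system’s actual tuning:

```python
def should_alert(gaze_history, fps=30, max_eyes_off_s=1.5):
    """Decide whether to prompt a distracted driver.

    gaze_history: per-frame booleans, True if the driver's gaze is
    on the road that frame (as estimated by a camera-based monitor).
    Returns True once gaze has been continuously off the road for
    longer than max_eyes_off_s seconds. Thresholds are illustrative.
    """
    off_frames = 0
    for on_road in gaze_history:
        # A glance back at the road resets the timer.
        off_frames = 0 if on_road else off_frames + 1
        if off_frames / fps > max_eyes_off_s:
            return True
    return False
```

With these assumed defaults, two full seconds of looking down (60 frames at 30 fps) triggers an alert, while brief glances away that return to the road do not.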
Its operating system, called AWare, enables a fleet of autonomous vehicles to handle enormous amounts of sensor data. Renovo recently started working with Samsung to help test, develop, and deploy the smartphone giant’s self-driving cars. “We’re the operating system for automated mobility,” Heiser said.