In future, the autonomous car may have its eye on you

Freddie Holmes investigates how new facial recognition technology could help to engage drivers that are tired, distracted or simply bored whilst behind the wheel

Autonomous driving technology will dramatically improve road safety by eliminating the dangerous traits associated with human driving. Despite the risks, people often get behind the wheel whilst tired or intoxicated, and frequently check their phones for social media updates. By delegating at least partial control of the vehicle to an ever-alert computer system with faster reaction times and a broader range of hazard detection, crashes that result from these human factors will fall. At least that is the idea.

In May 2016, a Tesla Model S driver died after colliding with a semi-truck in Florida. The car was in Autopilot mode, a semi-autonomous driver assistance feature for highway use, and the driver had failed to respond to the system's prompts to retake control of the wheel. Reports from the investigation suggested he may have been watching a Harry Potter film at the time.

More recently, a pedestrian was killed by a self-driving test vehicle on the streets of Tempe, Arizona. The Volvo XC90, which had been retrofitted with Uber self-driving technology, failed to recognise Elaine Herzberg as she crossed the road at night, resulting in a fatal collision. Data obtained from video streaming service Hulu later showed that the safety driver behind the wheel had been watching an episode of The Voice. As local police put it, the event was "entirely avoidable."

This is not a new issue; drivers have been exploiting partially autonomous systems on public roads for years. Back in 2015, Chris Schreiner, a Director at Strategy Analytics, suggested that consumers widely viewed Autopilot as just a “gadget” that is “nice to show off to your friends.” A quick internet search will show various Tesla owners overriding built-in safety protocols. Anything from a water bottle to an orange has been used as a prop to replicate a hand on the wheel, and to fool the system into believing the driver is paying attention to the road. Some videos show drivers moving to the backseat whilst on a highway, and even asleep whilst in crawling traffic.

A quick internet search will show various Tesla owners overriding the built-in safety protocols of Autopilot. Anything from a water bottle to an orange has been used to fool the system into believing the driver is paying attention to the road

Indeed, many drivers today overestimate the capabilities of their highway pilot systems, now offered by several brands including Mercedes-Benz, Volvo and Cadillac. Rather than reducing instances of distracted driving, the way in which these systems are being used is, arguably, exacerbating the issue. Much of the discussion has centred on creating features that drivers can trust and feel comfortable using, but the question is now being turned around: can drivers be trusted to use these technologies?

Smile, you’re on camera

To get around the problem, an advanced form of facial recognition technology is being developed to keep an eye on drivers. Driver monitoring has been used for decades in the trucking industry, both to understand the causes of crashes and to deter drivers from making potentially dangerous decisions behind the wheel. Traditional camera systems simply observe and cannot interact with the driver, but one company is developing what it calls 'artificial emotional intelligence' (Emotion AI) in order to recognise when a driver is distracted or tired, and make proactive decisions to prevent an incident.

Boston-headquartered Affectiva was born out of the MIT Media Lab in 2009, and recently penned a deal to integrate this technology within 'Pepper', a humanoid robot developed by SoftBank Robotics. This camera-based software is also being used to understand the emotional and cognitive state of human drivers, particularly those using semi-autonomous driving features. "These systems are not aware of how humans interact and react with them," says Gabi Zijderveld, Chief Marketing Officer at Affectiva. "We believe this very often causes superficial and sometimes ineffective interactions with technology."

The Emotion AI technology is hardware agnostic, and Affectiva does not develop its own cameras or microphones. The idea is to allow automakers and systems integrators the freedom to work the technology into their vehicles as they see fit. Camera placement is a key factor that can vary from vehicle to vehicle, for example, as it can affect the perspective of a driver’s face.

The system works by first identifying, isolating and tracking a human face. Vision-based algorithms then analyse features and skin tones, from which facial expressions can be identified. Different combinations of facial expressions allow the system to derive human emotions and states. At a more complex level, the system then measures different ‘intensities’ of these expressions – how tired, or how distracted a driver is, for example.
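The staged pipeline described above can be sketched in a few lines of code. This is purely illustrative: Affectiva's models and SDK are proprietary, and every name, weight and score below is invented. It assumes an upstream vision stage has already detected and tracked a face and scored per-frame expression intensities on a 0-to-1 scale.

```python
from dataclasses import dataclass

# Illustrative sketch only; Affectiva's actual models are proprietary.
# An upstream vision stage is assumed to have identified, isolated and
# tracked a face, producing expression intensities in the range [0, 1].

@dataclass
class ExpressionFrame:
    eye_closure: float    # how far the eyes are closed
    yawn: float           # intensity of a yawning expression
    head_nod: float       # downward head movement
    gaze_off_road: float  # gaze deviation from the road ahead

def drowsiness(frame: ExpressionFrame) -> float:
    """Blend tiredness cues into a single intensity; weights are invented."""
    score = (0.4 * frame.eye_closure
             + 0.35 * frame.yawn
             + 0.25 * frame.head_nod)
    return min(1.0, score)

def distraction(frame: ExpressionFrame) -> float:
    """In this toy model, distraction is simply off-road gaze."""
    return frame.gaze_off_road

frame = ExpressionFrame(eye_closure=0.2, yawn=0.7, head_nod=0.3,
                        gaze_off_road=0.1)
print(f"drowsiness={drowsiness(frame):.2f}")  # drowsiness=0.40
```

The key design point is the separation of concerns: expression detection and state derivation are distinct stages, which is what lets the same tracked face feed multiple derived states at different intensities.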

Wakey wakey

Drowsiness is a common problem for drivers in both the passenger car and commercial vehicle space, particularly professionals who spend long hours behind the wheel in monotonous traffic. Those that travel the same route each day can also become complacent, and when it comes to testing autonomous vehicles, overconfident in the system.

Robert Molloy, Director of the National Transportation Safety Board’s (NTSB) Office of Highway Safety, describes his work as the analysis of human performance, or ‘investigating why people make errors’. Speaking to Automotive World in 2016, he highlighted that many professional drivers believe they are skilled enough to handle fatigue – a dangerous train of thought. “The problem is that none of us are really great at determining when we’re too fatigued,” he warned.

We want to identify the different levels of drowsiness for the simple reason that it matters what happens next

With the help of technologies such as advanced facial recognition, monitoring could become a proactive tool for spotting a tired driver. Algorithms are trained to look for tell-tale signs of tiredness, and can tailor alerts to the driver depending on the perceived severity of the issue. "Today, we can detect whether someone is showing signs of drowsiness, but we want to identify the different levels of drowsiness for the simple reason that it matters what happens next," explains Affectiva's Zijderveld. "If I yawn once that suggests I am only mildly drowsy, and major alerts should not be going off. If I'm beginning to yawn frequently and my head is nodding slightly, I may be moderately drowsy and require a gentle alert or reminder," she suggests. "But if I'm beginning to fall asleep, then it is time for me to pull over and stop driving – that may require a more severe warning."
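Zijderveld's three tiers amount to a graded policy on the drowsiness estimate. A minimal sketch, with invented cut-off values and action names (nothing here reflects Affectiva's actual thresholds):

```python
def alert_for(drowsiness: float) -> str:
    """Map a drowsiness estimate in [0, 1] to a graded response.
    Cut-off values are illustrative, not Affectiva's."""
    if drowsiness < 0.3:
        return "none"             # mild: the odd yawn, no alert needed
    if drowsiness < 0.7:
        return "gentle_reminder"  # moderate: frequent yawning, head nodding
    return "pull_over_warning"    # severe: falling asleep, stop driving

for level in (0.1, 0.5, 0.9):
    print(level, alert_for(level))
```

Grading the response matters because a single binary alarm either fires too often (and gets ignored) or too late (and fails its purpose).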

Keeping tabs

But it is not all about tiredness. Indeed, the technology is also used to monitor enjoyment, laughter, frustration or sadness. Distraction is one of the most dangerous cognitive states, however, and Zijderveld suggests this issue is becoming more commonplace as semi-autonomous driving systems proliferate. This is true not only for products on the market today, but also beta systems that are being refined on public roads.

Along with Arizona and Nevada, California has become a hotbed for testing activity, and as of 4 December 2018, a total of 61 separate players hold a permit to test in the state. The number of companies successfully applying for such permits around the world has been climbing steadily, and automakers are pushing out advertisements for safety drivers as fleets expand. In May 2018, Waymo announced plans to add up to 62,000 Chrysler Pacifica Hybrid minivans to its self-driving fleet in the US, a significant expansion of its 600-strong Pacifica fleet at the time.

With public road testing set to soar, Affectiva is pushing for greater awareness of the dangers associated with an inattentive safety driver. “There is a big angle around driver monitoring, especially when you’re testing autonomous vehicles with a safety driver,” says Zijderveld. “Much of this automation is intended to make driving safer, but research has shown that it is in fact causing more distraction. Our technology can help to monitor a safety driver in a semi-autonomous vehicle, and gauge whether he or she is alert.”

Adding a degree of automation gives drivers new flexibility. The issue is no longer just that drivers are looking at screens whilst behind the wheel; the underlying challenge is keeping their eyes on the road and their brains engaged.

Those with a stake in autonomous driving must ensure that it is not only the technology that can be trusted, but also the humans overseeing its operation

"With semi-autonomous vehicles you still have a steering wheel and a driver who should be alert enough to take over control at a moment's notice. That hand-off between the driver and the system is a major problem," continues Zijderveld. "Our technology is software-based and doesn't require any special sensors or wearables, and can help to identify whether a normal driver, or a safety driver testing an autonomous vehicle, is truly engaged."

Affectiva has been working with Renovo Auto, a Silicon Valley firm that has developed a platform for safety driver monitoring. The system can detect driver distraction, and features a series of thresholds for escalating levels of driver assistance. Prolonged periods of inattention lead to audible prompts, whilst continued distraction instructs the autonomous driving system to gently reduce speed and illuminate the brake lights to warn following vehicles. If the situation worsens and the driver shows no sign of response, the hazard lights are activated and the vehicle is brought to a stop. Immediate alerts can be sent back to HQ, and data from the trip can be used for driver training.
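The escalation described above is essentially a set of time-based thresholds on continuous inattention. A minimal sketch, where the durations and action names are invented rather than Renovo's actual values:

```python
def escalation(inattentive_seconds: float) -> list[str]:
    """Return the cumulative actions triggered after a period of
    continuous driver inattention. All thresholds are illustrative."""
    actions: list[str] = []
    if inattentive_seconds >= 2:
        actions.append("audible_prompt")
    if inattentive_seconds >= 5:
        actions += ["reduce_speed", "brake_lights_on"]
    if inattentive_seconds >= 10:
        actions += ["hazard_lights_on", "controlled_stop", "alert_hq"]
    return actions

print(escalation(6))  # ['audible_prompt', 'reduce_speed', 'brake_lights_on']
```

Because the stages are cumulative, each new threshold adds to the previous responses rather than replacing them, mirroring the way the Renovo system layers prompts, speed reduction and finally a controlled stop.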

Safety drivers need to stay alert during long periods of inactivity, as do consumers when using the latest highway pilot technology. According to Affectiva, nine people are killed and a further 1,000 injured every day in the US due to distracted driving. Around 6,000 deaths result each year due to drowsy driving. Those with a stake in autonomous driving must ensure that it is not only the technology that can be trusted, but also the humans overseeing its operation.

This article appeared in the Q1 2019 issue of M:bility | Magazine.