Natural, seamless and essential. Words that very few consumers would associate with the artificial intelligence (AI) experience today, but that could change in future as software becomes capable not only of performing designated tasks, but also of understanding how to react to unfamiliar requests.
AI is expected to transform numerous elements of modern life. In many cases it will work behind the scenes, improving the performance of various computing processes. In other instances, its role will be more apparent through automated assistants at home, at work, and even in the car. Some suggest that the technology will progress to a point where it will act as a companion, and a source of advice.
Whether this is an attractive prospect or not is subjective, but the world’s tech giants are certainly keen on the idea. Apple’s Siri, Nokia’s Viki and Amazon’s Alexa are prominent examples, with Microsoft Cortana and Google Now opting for ‘dehumanised’ branding by avoiding human names. Deployed as more of a professional service, IBM’s Watson uses an advanced form of AI that is capable of engaging in Q&A sessions as a ‘chat bot’.
From frustration, to fluency
AI is generally considered to have improved significantly in recent years, but there are numerous instances that have underlined why many remain sceptical of it, particularly when it comes to consumer electronics. At CES 2018, LG’s presentation of its domestic robotic assistant Cloi (pronounced ‘Chloe’) did not go well. The system repeatedly failed to respond to simple prompts, leading David VanderWaal, LG’s US marketing head, to remark: “Even robots have bad days.”
Indeed, much frustration has been directed towards automated assistants over the years. That said, their capabilities have improved significantly of late, warranting further applications outside of the smartphone or tablet. Automotive assistants have also become more prevalent, with Nuance’s Dragon Drive AI platform transforming the voice control experience for various premium cars on the market. Experts see AI becoming a vital part of the in-car experience moving forward.
“Smart assistant devices can seamlessly integrate the communication between a human and a machine, and can deal with you in a more natural and fluent way,” observes Markus Schupfner, Chief Technology Officer at Visteon. “We believe this technology will enter the automotive market and the cockpit.”
Nils Lenke, Senior Director of Corporate Research at Nuance, suggested that drivers will be able to “set the goal, and the assistant will use reasoning capabilities and its knowledge about the world to organise the rest.”
But while many are bullish on the prospects for seamless human-machine interaction, work needs to be done to ensure that AI can handle situations outside of what it has been taught. To get around the often frustrating and disjointed AI experience today, San Francisco-headquartered AI specialist Figure Eight is making an effort to improve the ‘fall-back’ scenario. This comes into play in the event that something goes wrong, such as when an AI assistant does not understand a request, or simply cannot formulate an appropriate response. As Alyssa Simpson Rochwerger, Vice President of Product at Figure Eight, explains, a fall-back is effectively a contingency plan built into the system. “It’s mostly about planning, because these scenarios will happen,” she says. “At the current sophistication of AI, it’s unrealistic to be able to deploy it in situations where the user may go out of the bounds of what the AI has been trained on.
“I’m often frustrated by speaking with chat bot technologies, be it Siri, Alexa or Google Home, where I ask it something and it just falls apart,” she continues. “A graceful fall-back is around building in the user experience and anticipating the fact that the AI may encounter a situation that it isn’t trained for, and can’t handle well.”
Part of Figure Eight’s role is to help data science teams test, tune and train their algorithms. Often, the company itself acts as the graceful fall-back: in low-confidence situations, data is routed to Figure Eight to put a human in the loop. In addition, Figure Eight helps companies to benchmark and test these systems once they are deployed in production environments, and helps teams improve the accuracy, confidence and breadth of what the AI system is able to cover. “Having a diverse team and a diverse set of training data enables these technologies to be successful,” explains Rochwerger. “Without that, a lot of these systems are going to fall down in the real world.”
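The routing Rochwerger describes can be illustrated with a minimal sketch. Everything here is hypothetical: the function names, the toy intent model and the 0.75 confidence threshold are assumptions for illustration, not Figure Eight’s actual API or tooling.

```python
# Illustrative sketch of confidence-based human-in-the-loop routing.
# All names and values are hypothetical, not Figure Eight's real system.

CONFIDENCE_THRESHOLD = 0.75  # below this, escalate to a human reviewer


def classify_intent(utterance):
    """Stand-in for a trained intent model: returns (intent, confidence)."""
    known_intents = {"navigate": 0.92, "radio": 0.88, "climate": 0.85}
    for intent, confidence in known_intents.items():
        if intent in utterance.lower():
            return intent, confidence
    # Request outside the training data comes back with low confidence
    return "unknown", 0.20


def handle_request(utterance):
    intent, confidence = classify_intent(utterance)
    if confidence >= CONFIDENCE_THRESHOLD:
        return f"handled:{intent}"  # the assistant responds directly
    # Graceful fall-back: route the utterance to a human in the loop
    return "fallback:routed to human review"


print(handle_request("navigate to the nearest gas station"))
print(handle_request("my husband's mad at me, what should I do?"))
```

The key design choice is that the threshold, not the model alone, decides who answers: anything the model is unsure about goes to a person, and those escalated utterances become new training data.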
A human touch
Some may ask why an AI assistant is needed in the vehicle when the push of a button could achieve the same results, but Rochwerger believes the technology offers more benefits than many are aware of. As she puts it, it is about marrying together the machine and human for a ‘one plus one equals three’ experience.
By training the AI to anticipate situations it may not be able to handle, the fall-back takes the user’s needs into consideration and creates a more fluid, realistic and ultimately useful AI interaction. It can be as simple as clarifying to the user that a request cannot be fulfilled, rather than leaving him or her guessing. “It would be nice if an AI assistant had some kind of response that anticipates being in these situations, such as: ‘Hey, I’m not quite sure what you mean by that – I’m going to take this as a note and learn from it’ or something similar,” suggests Rochwerger.
Figure Eight is currently training AI for various in-car functions, such as voice control technology that can set navigation to the nearest gas station or find the driver’s favourite restaurant. It is about adding intelligence to the user experience – a significant change for many drivers who are used to pressing buttons or, at most, dictating extremely specific pre-set voice commands. Having a graceful fall-back, she continues, can “create a delightful experience for a customer, as opposed to a frustrating and annoying situation.”
The idea of a fall-back is relatively new, and only really applies to software that leverages a machine learning component. But despite this push for an artificial interaction, some situations may end up diverting the request back to human control if the AI struggles. This is not through a reluctant acceptance that AI isn’t up to the task; instead, it is simply the most efficient and effective fall-back solution in some instances.
For instance, many in-car voice recognition systems can struggle to understand natural language and heavy accents, and often fall down when interpreting hard-to-pronounce place names. In an automotive environment where drivers are still in control of the vehicle, creating a frustrating, distracting and potentially dangerous interaction between car and driver is to be avoided at all costs. “One way to have a graceful fall-back would be to revert to a human process, to have a human in the loop,” explains Rochwerger. “These systems can misinterpret certain words, but luckily they have a graceful fall-back where the human can type in the name of the street directly. If I’m frustrated and mad at my AI, I may be more inclined to make an unsafe judgement.”
This does pose the question as to whether an AI assistant is simply technology for the sake of technology. Will humans always be a necessary fall-back, or will AI eventually be competent in any given situation? Indeed, this is a question many are also asking of autonomous driving software. While Rochwerger does not rule out the possibility that AI assistants could become fully capable, it is unsafe to leave the human out of the loop today. “The reality is that these systems are not sophisticated enough to not have a human fall-back,” she says. “It would probably be irresponsible, or even reckless, to do so.”
Hey Alexa, got time to chat?
The longer AI is exposed to new scenarios and training data, the more it learns. This is a continuous process, meaning that the system becomes increasingly capable over time. So capable, suggests Rochwerger, that the car could facilitate not only assistance with navigation, changing the radio station or adjusting climate control, but also offer emotional support.
“What if you started asking it for relationship advice?” she muses. “For example, you could say: ‘Hey, Alexa, my husband’s mad at me, what should I do?’ It should anticipate things that are outside of the realm of what it’s been trained for, and have experiences that can address that successfully – even if the answer is simply, ‘sorry, that’s outside of my expertise’.”
While AI may indeed develop to a point where off-the-cuff conversations can be held between car and driver, Alex Mankowsky, a Futurist at Daimler’s Futures Studies & Ideation unit, points out that this should not be confused with true intelligence. “Machine learning will not lead to self-thinking, empathetic robots,” he explains. “We are talking about complex programmes that lead some people to believe there might be some mystery or actual intelligence at play.”
This article appeared in the Q4 2018 issue of M:bility | Magazine.