The concept of future mobility has seen a range of unusual job roles open up at the world’s major automakers. General Motors appointed its first ever Chief Product Cybersecurity Officer back in 2014, for example, and Shyamala Prayaga holds an equally intriguing position at Ford Motor Company as its Conversational Interaction Designer.
Interaction designers are tasked with sculpting the relationship between consumer and product and have long influenced the design of vehicle cockpits. The role has become increasingly important since the introduction of in-vehicle touchscreens, with designers considering everything from what happens when a button is pressed, to where those buttons are located. As the title would suggest, it is about optimising that interaction.
A conversational interaction designer is no different, but works with voice interaction specifically. “We look at how the conversation flows,” explained Prayaga, previously a UX evangelist with Amazon, during an interview with M:bility in Detroit. “When the user has said something, how should the system respond; and if it did not understand what the user said, what should the response be? What happens if I say ‘call Tom’, but the car hears ‘mom’? We look at how the system responds in those scenarios.”
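The kind of branching Prayaga describes can be sketched in a few lines. The following is a purely illustrative example of one error-handling flow; the `Recognition` structure, confidence thresholds and wording are assumptions for the sketch, not Ford’s actual system or API.

```python
# Hypothetical sketch of a 'call <name>' error-handling branch.
# Thresholds and phrasing are illustrative assumptions only.
from dataclasses import dataclass

@dataclass
class Recognition:
    transcript: str    # what the recogniser heard, e.g. "call mom"
    confidence: float  # 0.0 (pure guess) to 1.0 (certain)

def respond_to_call_request(rec: Recognition) -> str:
    """Decide how the system replies to a 'call <name>' utterance."""
    name = rec.transcript.removeprefix("call ").strip()
    if rec.confidence >= 0.85:
        # High confidence: act directly.
        return f"Calling {name}."
    if rec.confidence >= 0.5:
        # Medium confidence ('Tom' vs 'mom'): confirm before dialling.
        return f"Did you mean {name}? Say yes or no."
    # Low confidence: ask the user to repeat rather than guess.
    return "Sorry, I didn't catch that. Who would you like to call?"

print(respond_to_call_request(Recognition("call mom", 0.6)))
```

The design decision Prayaga’s team owns is not the code itself but the thresholds and the wording of each branch: when to act, when to confirm, and when to ask again.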
It may seem a tenuous link, but conversational interaction is expected to play a pivotal role in the car of the future.
A two-way conversation
Early-stage voice control was fairly rudimentary, often mimicking a fractured conversation with an automated telephone system. The technology has become increasingly intuitive, however, and in some cases can recognise commands such as ‘I need fuel’ and direct the driver to a nearby filling station. For many in the industry, though, the end goal is to launch a fully-fledged AI assistant that can respond to, and initiate, free-flowing conversation.
With this in mind, Ford is crafting the next generation of its conversational systems in-house. This team takes a holistic approach to the user experience, Prayaga explained, but the task of satisfying users has become ever more challenging. Outside of the car, speech recognition systems have proliferated in the consumer electronics space. Consumers have become familiar with the likes of Apple’s Siri, Amazon Alexa and Google Home, which are able to interpret casual speech.
Inside the car, the environment of the cockpit brings new challenges. With acoustics interrupted by chatter from other passengers, along with tyre and engine roar, voice recognition technology must be able to filter out background noise. With distracted driving an ever-growing concern, there is also the element of safety to consider. “People have become used to these high-performance digital assistants, which have set the bar for us within automotive,” said Prayaga. “We now need to make sure our user experience continues to be best in class—it must be perfect.”
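To make the noise-filtering point concrete, a toy illustration: a simple energy-based gate that silences audio frames quieter than a threshold. Real in-car systems rely on far more sophisticated techniques (beamforming microphone arrays, echo cancellation, trained noise models); this sketch only conveys the underlying idea of separating speech from low-level cabin rumble.

```python
# Toy energy-based noise gate: zero out frames whose average
# amplitude falls below a threshold. Conceptual sketch only;
# not representative of production in-car audio pipelines.
def noise_gate(frames, threshold):
    """Keep frames whose mean absolute amplitude exceeds the threshold."""
    def energy(frame):
        return sum(abs(s) for s in frame) / len(frame)
    return [frame if energy(frame) > threshold else [0] * len(frame)
            for frame in frames]

speech = [0.4, -0.5, 0.6]     # louder frame: treated as speech
rumble = [0.05, -0.04, 0.03]  # quiet tyre roar: gated out
print(noise_gate([speech, rumble], threshold=0.1))
```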
With today’s digital assistants already building a rapport with consumers, the automotive industry faces an uphill climb in building trust with its own in-house assistants. Automakers have two clear options: integrate the likes of Alexa, Siri or Cortana within the vehicle, or introduce their own. “The automotive industry needs to set the bar by thinking about what it wants,” said Prayaga. “Do we want to create our own assistants, or complement the existing ones?”
I choose you
Today, vehicle purchases typically revolve around factors such as fuel economy, acceleration and emissions. Digital cockpits are also high up on the list, particularly among younger buyers eager to remain connected. While satisfaction with voice recognition technology remains relatively low today (J.D. Power found in 2018 that it had been the number one complaint from drivers for six years running), the general expectation is that driverless vehicles could do away with physical buttons entirely.
If voice control can reach a point of reliability where drivers no longer need to press buttons or swipe screens but simply speak their mind, the bond between man and machine could grow stronger than ever. “With voice control taking over, we should consider removing clutter,” said Prayaga. “When we do that, voice could well become your main form of interaction with the car.”
Consumers may eventually choose a car based on the perceived performance of the AI assistant, particularly for a shared vehicle that simply facilitates a trip from A to B. “This is where the opportunity is to strengthen the trust with our next-gen voice system and capabilities, but if we use a third-party assistant we are simply giving away that opportunity to another provider,” said Prayaga. “As we get into the shared and autonomous mobility space, this is where trust is the most important thing. Service operators have to create their own assistant.”
Digital assistants are typically personified—consider Amazon Alexa, Apple Siri, Microsoft Cortana (a name derived from the videogame Halo) and Nokia Viki. Google has bucked the trend with Google Now and Google Home, but generally speaking, it is easier to build a relationship with a product once it has been personified. Some drivers give their car a human name for that reason, and at CES 2017, Nvidia Chief Executive Jensen Huang even suggested that the car would become a ‘partner’ in future.
But for any relationship to blossom, the other party needs to be likeable. “These assistants need to have a strong personality; people refer to Alexa as ‘she’, not ‘it’,” observed Prayaga. “But who says that about today’s in-car assistants? No one. The auto industry is lagging, but we need to get to that point as well where our in-car assistants become the best in class. Personality is so important because there is a trust and loyalty factor in it which strengthens the brand.”
Choosing an assistant’s character and accent can be tricky, as there is no single personality to suit all markets around the world. “Ford as a brand is global, so no matter where you go, the traits you would associate with Ford will remain the same. From there, we can add additional traits,” said Prayaga. “Germans are very direct when they speak, for example, so would they like someone who is too conversational? In China, people are so tech-savvy and they want everything in the car—I doubt consumers would have an issue if two eyes pop out of the car as the assistant introduces itself.”
She explained that the strategy in creating a digital assistant begins by defining how the system will improve the UX, before adding in certain underlying traits—appearing friendly and intelligent, for example. “Every time you say ‘good morning’, it could say ‘good morning’ back, but with some extra information,” Prayaga suggested. “If you say good night, the assistant could be friendly and a little sarcastic, saying ‘don’t let the bed bugs bite’ for example. It may gel well in the US, but not in other regions, so you need to have a region-specific personality and not just a voice assistant that listens and responds back.”
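Prayaga’s ‘good morning’ and ‘good night’ examples amount to selecting a response by both utterance and region. A minimal sketch of that idea follows; the locales, phrases and fallback behaviour are assumptions for illustration only.

```python
# Illustrative locale-aware response table: the same utterance maps
# to different personality responses per region. Entries and fallback
# are hypothetical examples, not any automaker's actual content.
RESPONSES = {
    ("good night", "en-US"): "Good night! Don't let the bed bugs bite.",
    ("good night", "de-DE"): "Gute Nacht.",  # direct, no playfulness
    ("good morning", "en-US"): "Good morning! Light traffic on your usual route today.",
}

def reply(utterance: str, locale: str) -> str:
    """Pick a region-specific response, falling back to a neutral echo."""
    return RESPONSES.get((utterance.lower(), locale), f"{utterance.capitalize()}.")

print(reply("good night", "en-US"))
print(reply("good night", "de-DE"))
```

In practice this separation matters: the conversational designer curates per-region content and tone, while the underlying dialogue engine stays the same across markets.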
Digital assistants can build trust
Garnering trust is particularly important when considering the move toward autonomous vehicles. Timelines aside, the deployment of a vehicle with no option for human control means conversational interaction becomes vital. Riders still need to feel in relative control, however.
In future, any digital assistant may need to comfort concerned riders mid-journey or greet new riders upon entry of the vehicle. The car may also need to advise passengers that a deviation is necessary due to traffic up ahead—something as simple as ‘there’s been an accident, so I’m taking another route’, for example. “It will be essential that the AI-based assistant is highly reliable, even when dealing with mundane tasks, to instil trust in the driver,” said Nils Lenke, Senior Director of Innovation Management at Nuance.
And while voice recognition continues to improve, the technology remains at a relatively early stage of development. “I’m often frustrated by speaking with chatbot technologies, where I ask it something and it just falls apart,” noted Alyssa Simpson Rochwerger, Vice President of Product at Figure Eight.
Players across the industry will continue to hone their systems and investigate new applications moving forward, but it is clear that conversation will play a growing role in the cockpit.
This article appeared in the Q4 2019 issue of M:bility | Magazine.