Artificial intelligence (AI) could play a key role in numerous applications within a smart city, from improvements to traffic and parking management to the safe integration of autonomous ride-share vehicles.
However, in many cases, human creativity is needed to teach an AI program the models and patterns it must recognise. Once trained, the AI can sift through a monumental volume of data in a flash, and with high accuracy.
“Basic AI, or machine learning to be more apt, is ideal for adoption in the smart city,” said Randi Barshack, Chief Marketing Officer of Figure Eight, a San Francisco-headquartered AI platform. “There are many layers that can be unpeeled in terms of possibilities.”
Figure Eight trains, tests and tunes machine learning models to ‘make AI work in the real world.’ The firm’s human-in-the-loop AI platform works with data such as audio, image, text and video for a range of applications including autonomous vehicles, ‘chat bots’ and facial recognition.
Smile, you’re on camera
A significant application for AI in the smart city is video surveillance, and in some cities today, closed-circuit television (CCTV) cameras already use AI for facial recognition. In December 2017, a BBC reporter demonstrated how this technology could be used for security purposes: he was tracked down by AI in the Chinese city of Guiyang in less than ten minutes. In Zhengzhou, police officers are using ‘smart’ AI glasses to the same effect, recognising criminal suspects and identifying civilians with fake IDs.
AI could be used not only to apprehend, but also to provide assistance. If a pedestrian collapses in the street, camera-based AI could detect that a medical professional is required on the scene. Facial recognition technology could also source the person’s name, age and home address, and pass this information on to the relevant authorities. “But it’s not just that this technology can recognise a face or an anomaly, it can also register facial expressions and gesture recognition,” added Barshack. “The number of things you can do with camera-based AI is virtually endless, but it is critical for municipalities and cities to first understand the problems they need to solve, and how AI can be applied to solve them.”
For example, if traffic congestion is an issue, there are a variety of potential solutions, such as adjusting the way traffic lights are metered, building new roads or closing existing ones. AI can be used to process the relevant traffic data, but there must be an initial theory for the algorithm to work with. This is where human creativity is necessary. As Barshack explained, “AI can do in milliseconds what it might take years for a human to process, but you need to have those data points, and you need to have the theory. AI is not at the point where you can just say: ‘make the traffic patterns better’.”
By using human intelligence to create potential solutions, AI can then be used to model the results of proposed measures. This also allows for corner cases to be better understood beforehand. “Those ‘what if’ scenarios can be explored at speeds you never would have imagined before,” she explained. “What might have taken years to prove out can now be done in a matter of minutes.”
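The ‘what if’ modelling described above can be illustrated with a toy example. The sketch below compares the average vehicle queue at a signalised junction under two hypothetical green-time splits; every rate, timing and function name is an illustrative assumption, not real traffic data or any tool mentioned in this article.

```python
def simulate_queue(arrival_rate, service_rate, green_share, cycle=60, cycles=100):
    """Deterministic fluid model of one approach to a signalised junction.

    Vehicles arrive continuously (vehicles/second) but are only served while
    the light is green. Returns the average end-of-cycle queue in vehicles.
    """
    green = cycle * green_share          # seconds of green per cycle
    red = cycle - green                  # seconds of red per cycle
    queue = 0.0
    total = 0.0
    for _ in range(cycles):
        queue += arrival_rate * red                      # backlog builds on red
        demand = queue + arrival_rate * green            # backlog + green arrivals
        queue = max(0.0, demand - service_rate * green)  # junction drains on green
        total += queue
    return total / cycles

# 'What if' comparison: lengthening the green phase for a congested approach
current = simulate_queue(0.3, 0.9, green_share=0.30)
proposed = simulate_queue(0.3, 0.9, green_share=0.45)
print(f"avg queue: current timing {current:.1f} vehicles, proposed {proposed:.1f}")
```

Even a crude model like this makes the trade-off concrete: the under-served timing lets the queue grow every cycle, while the proposed split clears it each cycle – exactly the kind of scenario an AI-backed planning tool could sweep across thousands of parameter combinations in minutes.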
Future mobility brings new challenges
A pillar of any smart city is the provision of accessible, affordable and clean mobility services – be it public transport or private vendors offering ride-sharing, bike sharing and on-demand vehicle hire. In many cities, ride-sharing has become particularly popular, but it has brought teething pains for city planners.
The nature of ride-sharing results in these vehicles making numerous stops on a single journey, much like a city bus. However, a lack of dedicated pick-up and drop-off areas can lead to a spike in congestion and an increased likelihood of a road traffic incident – the antithesis of a smart city road network. “Traffic is a significant issue in San Francisco these days, for example, and much of that traffic occurs because there are no spots where ride-sharing vehicles can pull over,” said Barshack. “If spaces were freed up, what would happen to parking congestion?” Human-in-the-loop AI could help city planners to understand this, she suggested.
Autonomous cars are also considered a significant element of any smart city. Today, the vast majority of a vehicle’s time is wasted as it sits stationary in a parking lot or at home. This creates a situation in which many drivers are simply searching for a parking space, exacerbating congestion. “In theory, these vehicles could be moving, transporting and managing our lives in a much more interesting way,” said Barshack, pointing to the opportunities of a fully autonomous vehicle.
Over the past decade or so, autonomous test vehicles have been taught to recognise a plethora of objects they may encounter on the road and in the surrounding environment. Anything from a lamppost or pedestrian to a dog or parked truck has been programmed into the system, along with an understanding of how these actors may walk, run, change lanes or pull into the road. However, in a smart city, there will not only be cars and pedestrians to worry about.
For example, one of the fastest growing trends in Chinese cities is bicycle sharing, while in California, electric scooter hire has become increasingly popular. With improved air quality and easier access to mobility in mind, city planners are encouraging such alternatives to private vehicle ownership. This will pose a number of new challenges for autonomous driving AI in the coming years, as it will need to become increasingly capable of driving amid these new road users.
“In San Francisco, e-scooters have suddenly popped up all over the place,” said Barshack, “but a person on a scooter could be really confusing for an autonomous car. It thinks it sees a pedestrian, but its trajectory is faster than the average speed of a pedestrian.” In April 2018, San Francisco city authorities blasted a number of electric scooter-share companies for deploying without permission. Proposed legislation would see all operators require a permit, with penalties for scooters that are not ‘parked responsibly’. Bikes and baby strollers can also pose issues to autonomous driving AI.
“Development teams will be ensuring that autonomous driving models understand all these nuances, such as the difference between a pedestrian and a person on a scooter,” continued Barshack. “Policy and lawmakers also need to be educated on what is happening in this space; if you’re used to regulating horses, and cars suddenly come in, you need to understand what you’re dealing with.”
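A minimal sketch of the nuance Barshack describes – telling a pedestrian apart from a person on a scooter by the speed of its tracked trajectory – might look as follows. The Track structure, the 3 m/s threshold and the class labels are all hypothetical illustrations; real perception stacks fuse many more cues than speed alone.

```python
from dataclasses import dataclass

@dataclass
class Track:
    """A tracked detection: a list of (time_seconds, x_metres, y_metres) samples."""
    positions: list

def average_speed(track: Track) -> float:
    """Straight-line speed between the first and last track samples."""
    (t0, x0, y0) = track.positions[0]
    (t1, x1, y1) = track.positions[-1]
    dt = t1 - t0
    distance = ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5
    return distance / dt if dt > 0 else 0.0

WALKING_MAX = 3.0  # m/s – hypothetical cut-off, faster than a brisk jog

def classify(track: Track) -> str:
    """Label a person-shaped detection by how fast its track is moving."""
    return "pedestrian" if average_speed(track) <= WALKING_MAX else "scooter_or_cyclist"

fast = Track([(0.0, 0.0, 0.0), (2.0, 12.0, 0.0)])  # 12 m in 2 s = 6 m/s
print(classify(fast))  # prints "scooter_or_cyclist"
```

A pure speed threshold would still confuse a sprinting pedestrian with a slow cyclist, which is precisely why development teams must train models on the full range of these corner cases rather than rely on a single heuristic.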
Machine learning needs human creativity
It is clear that AI will have a part to play in the smart city, both in terms of improving the capability of autonomous vehicles and in assisting the development of roadways that can handle new forms of mobility. Citizens can also expect to be under more accurate surveillance to reduce crime and improve incident response efficiency.
City authorities will be able to work through huge quantities of data to test and deploy new initiatives that cope with demand for parking and for access to ride-share vehicles. What is interesting is that despite the push to develop AI that is more capable than a human, it is the human element that will prove vital in training these algorithms in the first place.
This article appeared in the Q2 2018 issue of Automotive Megatrends Magazine.