Artificial intelligence was the technology industry’s hottest topic in 2016. This year AI will start to show if it can live up to the hype. Until now, the potential for training computers to identify patterns using large bodies of data has been restricted to services such as Google Photos, which can recognise faces, and Amazon’s Alexa, a digital assistant that responds to voice commands. But the platforms needed to make these algorithms more widely available to other companies have been taking shape in recent months, turning them into ingredients for digital services of all kinds.
Part of the effort has involved packaging smart algorithms developed for internal purposes into services that other companies can tap into through APIs — application programming interfaces that let outside developers call these tools over the internet. The services on offer include natural language understanding, text-to-speech conversion, foreign language translation, and image and video recognition.
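In practice, calling such a service usually means sending an encoded image or text snippet to an HTTP endpoint and reading back structured results. The sketch below shows the general shape of such a request; the endpoint URL, field names and feature labels are invented for illustration and do not match any particular vendor's schema.

```python
import base64
import json

# Hypothetical endpoint — real cloud vision APIs each define their own.
API_URL = "https://vision.example.com/v1/images:annotate"

def build_request(image_bytes: bytes, features: list[str]) -> str:
    """Encode an image and the requested analyses as a JSON payload,
    the typical pattern for on-demand recognition APIs."""
    payload = {
        "image": base64.b64encode(image_bytes).decode("ascii"),
        "features": [{"type": f} for f in features],
    }
    return json.dumps(payload)

# A developer would POST this body to API_URL with their credentials.
body = build_request(b"\x89PNG...", ["LABEL_DETECTION", "FACE_DETECTION"])
print(json.loads(body)["features"])
```

The point is that the heavy lifting — the trained model — lives on the provider's side; the customer only ships data and reads back labels.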
When Amazon Web Services unveiled a range of “on-demand” AI services like this for companies in December, it capped a year of similar moves by other big tech companies. Google had set out early in 2016 to make AI a distinguishing feature of its belated attempt to catch up with AWS in cloud computing.
Google has said, for instance, that by using the computer vision technology developed for its in-house services, companies in industries from agriculture to manufacturing would be able to automate the inspection and analysis of facilities without using humans.
Microsoft put AI at the centre of its cloud services offering in 2016. Despite the large volumes of data needed to train computer models and a scarcity of deep learning engineers, the field is moving beyond the realm of giant tech companies. Start-ups including Clarifai and Sentient Technologies have built computer vision models with potentially wide applications and hope to compete with the biggest providers.
Abby Larson and her husband Tait, who with 25 staff run a wedding blog site called StyleMePretty that links users with selected vendors, are typical of the entrepreneurs drawn to the potential of such services.
Their company trawls some 12,000 wedding photos a week, says Mr Larson, categorising them for use on the site. Tapping into Clarifai’s neural network makes it possible to automate such tasks by machine-tagging pictures according to the season, whether flowers are present, or whether shoes appear in them. The technology is not good enough to identify individual dresses, but at a cost of around $500 a month it is a low-cost way of handling much greater volumes of content.
“A human is going to be more efficient, especially in our domain — but it’s going to be an order of magnitude more expensive,” he says. “This allows us to scale up.”
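The tagging workflow Mr Larson describes can be sketched as a filter over the concept scores a vision API returns. Here the `predict_concepts` stub stands in for a real service such as Clarifai's; its concept names and confidence scores are invented for this example, as is the shortlist of tags.

```python
# Tags the site cares about — everything else the model reports is ignored.
WANTED_TAGS = {"spring", "summer", "autumn", "winter", "flowers", "shoes"}

def predict_concepts(photo_path: str) -> dict[str, float]:
    """Stub: a real vision API would return concept -> confidence scores
    for the uploaded photo. These values are invented."""
    return {"flowers": 0.94, "summer": 0.81, "tuxedo": 0.40}

def machine_tag(photo_path: str, threshold: float = 0.7) -> list[str]:
    """Keep only wanted tags whose confidence clears the cutoff."""
    scores = predict_concepts(photo_path)
    return sorted(t for t, s in scores.items()
                  if t in WANTED_TAGS and s >= threshold)

print(machine_tag("bouquet.jpg"))  # ['flowers', 'summer']
```

Raising the threshold trades recall for precision — a knob a small team can tune without retraining anything themselves.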
These types of services charge depending on how often calls are made to the API. Though price lists are not transparent, the leaders in the field say that costs are falling: Google, for instance, said last year it was cutting prices for its image recognition service for its heaviest users by 80 per cent.
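Per-call pricing of this kind is usually tiered, with the unit price falling as volume rises. The tiers and dollar figures below are invented purely to illustrate the structure — real vendors publish their own schedules.

```python
# Illustrative tiered price list: (calls covered by tier, price per call).
# All numbers are hypothetical.
TIERS = [
    (1_000, 0.0),            # first 1,000 calls free
    (999_000, 0.0015),       # next 999,000 calls
    (float("inf"), 0.0003),  # heaviest users pay a far lower unit price
]

def monthly_cost(calls: int) -> float:
    """Walk the tiers, charging each slice of calls at its tier price."""
    cost, remaining = 0.0, calls
    for size, price in TIERS:
        used = min(remaining, size)
        cost += used * price
        remaining -= used
        if remaining <= 0:
            break
    return round(cost, 2)

print(monthly_cost(50_000))  # 49,000 billable calls at $0.0015 -> 73.5
```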
One question is whether companies have suitable data sets to mine for such purposes. Just about the only other limiting factor, says Al Hilwa, an analyst at market researcher IDC, will be the imagination of customers: can they find productive uses for data collected for other purposes by subjecting it to new types of analysis?
The big cloud computing companies have also released tools for developers to train algorithms using their own data. These “machine learning in the cloud” services promise to turn machine learning into a general-purpose technology with wide application.
This will be where customers stand to see the biggest benefits from the new breakthroughs in AI, says Mr Hilwa. But learning how to train and validate the algorithms will take time. And, he adds, a skills shortage among data scientists could put a brake on progress.
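The train-and-validate discipline Mr Hilwa refers to is the core loop these cloud services automate: fit a model on one slice of labelled data, then measure its accuracy on a slice it has never seen. The toy below uses a deliberately simple nearest-centroid "model" and invented data points to show the shape of that loop.

```python
# Fit: compute the average position (centroid) of each labelled class.
def fit_centroids(samples, labels):
    sums, counts = {}, {}
    for (x, y), lab in zip(samples, labels):
        sx, sy = sums.get(lab, (0.0, 0.0))
        sums[lab] = (sx + x, sy + y)
        counts[lab] = counts.get(lab, 0) + 1
    return {lab: (sx / counts[lab], sy / counts[lab])
            for lab, (sx, sy) in sums.items()}

# Predict: assign a point to the class with the nearest centroid.
def predict(centroids, point):
    return min(centroids, key=lambda lab: (centroids[lab][0] - point[0]) ** 2
                                        + (centroids[lab][1] - point[1]) ** 2)

# Training slice: two well-separated clusters of invented points.
train_x = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (9.0, 9.0), (10.0, 9.0), (9.0, 10.0)]
train_y = ["a", "a", "a", "b", "b", "b"]
model = fit_centroids(train_x, train_y)

# Validation slice: held-out points never seen during training.
val_x = [(0.5, 0.5), (9.5, 9.5)]
val_y = ["a", "b"]
accuracy = sum(predict(model, p) == t for p, t in zip(val_x, val_y)) / len(val_y)
print(accuracy)  # 1.0 on this toy data
```

Cloud ML offerings replace the toy model with deep networks and the hand-built split with managed pipelines, but the customer's job — supplying labelled data and judging held-out accuracy — is the same.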
One of the biggest promises of the AI wave has been the advent of conversational computing — the prospect of using language to control and interact with computers. Chatbots, automated text-based messaging services designed to respond to human queries — often simple, narrowly defined tasks — have been one early manifestation. Another has been the introduction of more complex digital assistants that respond to voice commands, such as Alexa, Apple’s Siri and Microsoft’s Cortana.
But companies experimenting with the technology have had mixed experiences. Chatbots were a tech disappointment in 2016, a reminder of the risks of overhyping technologies. But the benefits that could come from talking with computers make this a field ripe for innovation in 2017.
Facebook’s launch last year of a chatbot platform — to which external companies could add their own bots — demonstrated both the shortcomings of the technology and the potential demand. Banks, power companies and others developed automated assistants to handle customer service or simple transactions, but consumers often found it hard to get answers to questions and early reviews were damning. By the end of the year, however, more than 30,000 chatbots were using the platform, and Facebook claimed the early teething problems were already being overcome.
Companies that have been working with the technology say there are signs it will change how people interact with phones and computers, and the businesses whose digital services they use.
Adam Goldstein, chief executive of Hipmunk, a travel-planning service, was an early user of the bot platform developed by Microsoft’s Skype messaging service. “We were surprised about the different types of question people ask a bot,” says Mr Goldstein.
When customers use a standard search form, he says, they tend to focus on the specific: on a travel site, they look for flights between certain cities on particular dates. But with a bot they ask more general research questions about where they should go or when.
Two weaknesses have held back the promise of conversational computing. One involves the difficulty of training general-purpose systems that can work in everyday situations — a problem Microsoft demonstrated in 2016, when a chatbot it released on Twitter was misled into producing racist responses.
The other is the ability of chatbots to understand context, a problem for computers once they expand beyond a narrow field of knowledge. Software companies have responded by limiting the technology’s uses for now. Increasingly, new business software will include some level of conversational AI, though it will be limited, says Tom Austin, an analyst at Gartner, a research company.
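"Limiting the technology's uses" often means exactly this: a bot that only recognises a handful of narrowly defined intents and asks a clarifying question for everything else, rather than attempting open-ended conversation. The intent names, keywords and responses below are invented for illustration.

```python
# Hypothetical narrow-domain bot: a few intents, keyword matching only.
INTENTS = {
    "order_status": {"order", "shipped", "delivery"},
    "reset_password": {"password", "login", "locked"},
}

def classify(utterance: str) -> str:
    """Pick the intent whose keywords overlap the utterance most."""
    words = set(utterance.lower().split())
    best, overlap = "unknown", 0
    for intent, keywords in INTENTS.items():
        hits = len(words & keywords)
        if hits > overlap:
            best, overlap = intent, hits
    return best

def respond(utterance: str) -> str:
    intent = classify(utterance)
    if intent == "unknown":
        # Outside the narrow domain: ask rather than guess.
        return "Could you tell me a bit more about what you need?"
    return f"Routing you to the {intent} flow."

print(respond("what happened to my order"))
```

Real systems swap the keyword match for a trained intent classifier, but the design choice is the same: stay inside a domain small enough that context is tractable, and fall back to questions at the boundary.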
How much this is used and how useful it will be is open to question. “We want to be able to walk into the office and say, ‘What the hell happened to Harry’s order?’” says Mr Austin. “The technology won’t be able to tell you in 2017, but it will start to ask you questions.” The answers it receives, he adds, will help it to find its way to the right response.
But the lag in the uptake of new enterprise technology means it will probably be 2020 before people begin to use conversational AI in large numbers, he says.