But one reason this works so well is that Google limits the scope of its tool. For each message, the service offers not just one reply but three, letting you choose the reply that best suits what you want to say, and these replies are typically just a few words long. Google’s tool gives itself a margin for error. It works because it doesn’t try to do too much.
All this is worth remembering as we contemplate Silicon Valley’s latest buzzword: bots.
“Bots are the new apps,” Microsoft CEO Satya Nadella announced at the end of March, during the company’s big coder conference in San Francisco, and he was just saying what so many others are saying across the tech universe. Microsoft, Facebook, a host of startups, and an even larger gaggle of tech pundits are trumpeting the arrival of autonomous bots that can carry on conversations inside services like Slack and Skype and Facebook Messenger.
The idea is that these bots will let you interact with businesses much like you trade text with friends and family, letting you do stuff much quicker than you could using dozens of disparate smartphone apps. Some people call this “conversational commerce.” But there are limits to the conversation.
Chatbots, you see, don’t chat very well. Even those built atop the latest tech are limited in what they can understand and how well they can respond. For now, talking to a bot is like talking to, well, a machine. That makes conversational commerce feel like a false promise. But maybe the problem isn’t the tech. Maybe it’s the promise. “I think we’re going through a temporary hype era of ‘bot BS’ right now,” says Navid Hadzaad. And he runs a bot company.
Limiting the Conversation
In recent years, deep neural networks have helped automate so many online tasks. They can recognize faces and objects in photos. They can recognize commands spoken into smartphones. They can improve Internet search results. And they’ve made significant progress in the area of natural language understanding, where machines work to understand the natural way we humans talk. This is what powers Google’s Smart Reply service. And it works.
But only up to a point. And that’s telling. When it comes to automated conversation, deep neural networking is the best tech going, and even it can only carry the chatter so far. In other words, we’re nowhere near the point where we can carry on a truly natural conversation with a bot.
So far, chatbots don’t chat very well.
That’s pretty much the message delivered by David Marcus, who oversees Facebook Messenger and its bot engine, a way for coders to build bots that can, in theory, do all the stuff that’s now handled by smartphone apps. “Everybody wanted websites when the web was launched. And then everybody wanted apps. This is the start of a new era,” Marcus says, before pointing out that the first apps were “kind of crappy.” The implication is that bots will experience similar growing pains of their own.
Indeed, the Facebook bot engine doesn’t even use deep learning. It uses less advanced technology provided by Wit.ai, an artificial intelligence platform Facebook acquired early last year. The hope may be, however, that this technology can help generate the kind of conversational data needed to train deep neural networks and push the state of the art much further.
A Whole Lotta Chatter
Deep neural networks learn by analyzing enormous amounts of digital data. They can learn to recognize a cat by analyzing millions of cat photos. They can learn to understand the contents of an email by analyzing millions of email messages. And they can learn to chat by analyzing chats. But the data needed to drive “conversational commerce” is much harder to come by than cat photos. People don’t typically interact with machines in this way. So, companies like Facebook must find other sources of data—or generate data on their own.
Maybe we just want to get things done without having to do much talking at all.
Marcus and company are already doing this with Facebook M, an experimental digital assistant, and they may hope to do so with the Messenger bot engine as well. But Facebook M employs more than just bots. It employs human assistants who work alongside the bots, and most of the data the system generates is related to how these humans respond to requests. It’s unclear how much serious data you can generate with a chatbot that’s kinda crappy. After all, how often will people use it if it doesn’t really work?
“What kind of data are they really going to collect?” says Eugenia Kuyda, the founder of Luka.ai, which builds chatbots using deep neural networks. “People clicking on buttons. This is not really a dataset you can put into a neural network and train anything.”
Keep It Simple
The best anyone can hope for now are bots that excel at one specialized kind of conversation. A good example is GoButler, the service built by the New York startup Hadzaad runs. GoButler uses deep neural nets, but only to tackle a relatively small problem. Through a chat interface, the service provides a way of booking airplane flights, which limits the chatter to very specific requests and responses. “The technology is there—it works—if you restrain the use-case,” Hadzaad says.
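To make the idea of restraining the use-case concrete, here’s a minimal sketch of a narrowly scoped bot. It is purely illustrative, not GoButler’s actual system: the pattern, function names, and replies are all hypothetical. The point is that the bot handles one well-defined request shape and openly declines everything else, rather than guessing.

```python
import re

# Hypothetical sketch of a single-purpose bot: it recognizes one
# narrow request shape (flight searches) and refuses everything else.
FLIGHT_PATTERN = re.compile(
    r"flight from (?P<origin>[A-Za-z ]+) to (?P<dest>[A-Za-z ]+)",
    re.IGNORECASE,
)

def handle_message(text):
    """Reply to the one supported use-case, or fall back honestly."""
    match = FLIGHT_PATTERN.search(text)
    if match:
        origin = match.group("origin").strip().title()
        dest = match.group("dest").strip().title()
        return f"Searching flights from {origin} to {dest}..."
    # Anything outside the supported use-case gets an honest fallback,
    # not a guess -- the bot doesn't try to do too much.
    return "Sorry, I can only help with flight searches."
```

Asked “Can you find a flight from new york to berlin?”, this bot answers usefully; asked about the weather, it simply says what it can’t do, which is exactly the kind of scope-limiting that made Google’s short, three-option replies work.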
Hadzaad can’t stand the term “conversational commerce.” He doesn’t even like “chatbot.” If his employees utter these words, he says, they’re required to drop some cash into an anti-buzzword jar. The chatbot movement driven by Microsoft and Facebook and so many others, he argues, should be less about conversing with bots atop our messaging services and more about just finding the best way—any way—to complete the task at hand without leaving these services.