Google Duplex was announced earlier this year as a way to automate tasks that require a phone call, such as booking an appointment. Rather than calling yourself, Google’s assistant understands human speech and speaks convincingly enough to handle these simple tasks for you. Relatedly, a few days ago I read an article describing how Google is also pushing Duplex into call centers. A bit surprising at first, but it makes a ton of sense: the vast majority of calls are relatively simple tasks that companies are already trying to automate as much as possible with Interactive Voice Response (IVR) systems.
Now imagine both of these becoming successful: we end up in a world where my consumer Duplex is having a conversation with a call center’s Duplex. If only there were a way for machines to communicate with each other directly over a standard protocol instead of depending on advanced natural language processing.
On a more serious note, it’s pretty amazing that we’re getting to a world where computers can actually replicate human behaviors. Rather than building systems that speak to each other via APIs, we’re instead building systems that have to speak human first.
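To make the contrast concrete, here's a minimal sketch of what a machine-to-machine booking exchange could look like. The endpoint and field names are entirely made up for illustration; the point is just that a structured payload replaces an entire synthesized phone conversation.

```python
import json

# Hypothetical structured booking request: the kind of exchange two
# machines could have directly, no speech synthesis or recognition needed.
booking_request = {
    "service": "haircut",
    "party_size": 1,
    "preferred_times": ["2018-07-12T10:00", "2018-07-12T14:00"],
    "customer": {"name": "Jane Doe", "callback": "+1-555-0100"},
}

# Serialize for transport; any standard protocol (HTTP, gRPC, ...) would do.
payload = json.dumps(booking_request)
print(payload)
```

A few dozen bytes of JSON versus two neural networks politely exchanging small talk over a phone line.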
The same thing is happening with self-driving cars. Building a self-driving car is much harder in a world with human drivers than it would be if every car were driven by software. In fact, if we didn’t have human drivers on the road today, I suspect we’d already have self-driving cars.
The irony is that only once software gets good enough to become universal could it be dead simple. The software first has to convince us it’s good enough to be human before it can act as a machine.
Maybe this is what’s going to save us from the AI apocalypse.