Jennifer Pattison Tuohy, The Verge:
At its fall hardware event Wednesday, [Amazon] revealed an all-new Alexa voice assistant powered by its new Alexa large language model. According to Dave Limp, Amazon’s current SVP of devices and services, this new Alexa can understand conversational phrases and respond appropriately, interpret context more effectively, and complete multiple requests from one command.
I wrote about how bewildering I found it that no major company had integrated large language model technology into their voice assistants back in January. January!
It’s the APIs that are key, says Limp. “We’ve funneled a large number of smart home APIs, 200-plus, into our LLM.” This data, combined with Alexa’s knowledge of which devices are in your home and what room you’re in based on the Echo speaker you’re talking to, will give Alexa the context needed to more proactively and seamlessly manage your smart home.
The big difference between conversational AIs like ChatGPT and traditional voice assistants is that the latter has to interact with the outside world. If the new Alexa can’t turn on my lights and set timers, that will be a regression. This sounds like it will use a ReAct pattern, which is a smart approach, but only time will tell how solid it actually is.
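For illustration, here’s a minimal sketch of what a ReAct-style loop could look like for a smart home assistant. Everything in it is hypothetical — the tool names, the transcript format, and the scripted stand-in for the model are my own invention, not Amazon’s actual APIs:

```python
# A minimal ReAct (Reason + Act) loop: the model emits a thought plus an
# Action, the runtime executes the matching tool, and the Observation is
# fed back into the transcript until the model gives a Final Answer.
import re

# Hypothetical smart home "APIs" exposed to the model as tools.
LIGHTS = {"living room": False}

def turn_on_light(room: str) -> str:
    LIGHTS[room] = True
    return f"{room} light is now on"

def set_timer(minutes: str) -> str:
    return f"timer set for {minutes} minutes"

TOOLS = {"turn_on_light": turn_on_light, "set_timer": set_timer}

# Stand-in for the LLM: a real system would call a model here. This stub
# just walks through a fixed plan based on what's already in the transcript.
def scripted_model(transcript: str) -> str:
    if "turn_on_light" not in transcript:
        return "Thought: the user wants light.\nAction: turn_on_light[living room]"
    if "set_timer" not in transcript:
        return "Thought: they also asked for a timer.\nAction: set_timer[10]"
    return "Final Answer: done - light on and a 10 minute timer running."

def react_loop(request: str, model=scripted_model, max_steps: int = 5) -> str:
    transcript = f"User: {request}"
    for _ in range(max_steps):
        step = model(transcript)
        transcript += "\n" + step
        match = re.search(r"Action: (\w+)\[(.+)\]", step)
        if not match:                       # no Action means we're finished
            return step.split("Final Answer:")[-1].strip()
        name, arg = match.groups()
        observation = TOOLS[name](arg)      # act, then feed the result back
        transcript += f"\nObservation: {observation}"
    return "gave up"

print(react_loop("Turn on the living room light and set a 10 minute timer"))
```

The interesting part is the Observation step: the model gets to see whether the light actually turned on before deciding what to do next, which is exactly the kind of grounding a voice assistant needs and a pure chatbot doesn’t.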
Ultimately, I think the reason that more companies haven’t yet introduced LLM-backed assistants is that they are an expensive replacement for a technology end-users have traditionally gotten for free. Amazon itself seems unsure how it will handle that.
Limp said that while Alexa, as it is today, will remain free, “the idea of a superhuman assistant that can supercharge your smart home, and more, work complex tasks on your behalf, could provide enough utility that we will end up charging something for it down the road.”
Ironically, the two uses of the word “super” in the previous quote don’t inspire much confidence.