
Daniel Rausch, Amazon's vice president of Alexa and Echo, is in the middle of a major transition. More than a decade after Amazon launched Alexa, he has been tasked with creating a new version of the company's marquee voice assistant, one powered by large language models. As he put it in my interview with him, this new assistant, called Alexa+, is "a complete reconstruction of the architecture."
How did his team approach Amazon's biggest-ever overhaul of its voice assistant? They used AI to build AI, of course.
"The rate at which we're using AI as a tool throughout the building process is quite striking," Rausch says. To create the new Alexa, Amazon used AI at every step of the build. And yes, that includes generating parts of the code.
Alexa's team also brought generative AI into the testing process. The engineers used a "large language model as a judge of responses" during reinforcement learning, where the AI chose what it considered the better response between two outputs from Alexa+.
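Amazon hasn't published the details of that pipeline, but the general "LLM as a judge" pattern it describes looks roughly like the sketch below: a judge model is shown a user request and two candidate responses, and its preference becomes a training signal. Everything here is illustrative; the function query_judge_model is a hypothetical placeholder, not Amazon's actual system.

```python
# Minimal, hypothetical sketch of the "LLM as a judge" pattern described above.
# `query_judge_model` is a stand-in for a real large-language-model call.

import json
import random

JUDGE_PROMPT = """You are evaluating a voice assistant.
User request: {request}

Response A: {a}
Response B: {b}

Which response is more helpful, accurate, and natural to hear aloud?
Answer with a JSON object: {{"winner": "A" or "B", "reason": "..."}}"""


def query_judge_model(prompt: str) -> str:
    """Placeholder for a call to a large language model acting as judge.

    A real system would call an LLM API here; this picks at random so the
    example runs end to end.
    """
    winner = random.choice(["A", "B"])
    return json.dumps({"winner": winner, "reason": "placeholder judgment"})


def judge_pair(request: str, response_a: str, response_b: str) -> dict:
    """Ask the judge which of two candidate responses is better.

    The resulting chosen/rejected pair is the kind of preference data a
    reinforcement-learning pipeline can train on.
    """
    prompt = JUDGE_PROMPT.format(request=request, a=response_a, b=response_b)
    verdict = json.loads(query_judge_model(prompt))
    chosen = response_a if verdict["winner"] == "A" else response_b
    rejected = response_b if verdict["winner"] == "A" else response_a
    return {"chosen": chosen, "rejected": rejected, "reason": verdict["reason"]}


if __name__ == "__main__":
    result = judge_pair(
        "Find me concert tickets for Saturday night",
        "I found three shows near you on Saturday. Want me to list them?",
        "Tickets exist. Please check a ticketing website.",
    )
    print(result)
```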
"People are empowered and can move faster, better with these AIs," Rausch says. Amazon's focus on using generative AI internally is part of a larger wave of disruption for software engineers at work, as new tools, like Anysphere's Cursor, change how the work gets done, as well as the expected workload.
If these kinds of AI-focused workflows prove to be hyperproductive, then what it means to be an engineer will fundamentally change. "We will need fewer people doing some of the jobs that are being done today, and more people doing other types of jobs," Amazon CEO Andy Jassy said in a memo to employees this week. "It's hard to know exactly where this nets out over time, but in the next few years, we expect that this will reduce our total corporate workforce as we get efficiency gains from using AI extensively across the company."
Right now, Rausch is primarily focused on rolling out the generative AI version of Alexa to more Amazon users. "We really didn't want to leave customers behind in any way," he says. "And that means hundreds of millions of different devices that you need to support."
The new Alexa+ chats with users in a more conversational way. It's a more personalized experience that remembers your preferences and is capable of performing online tasks you give it, like searching for concert tickets or buying groceries.
Amazon announced Alexa+ at a company event in February and rolled out early access to some members of the public in March, though without the complete slate of announced features. The company now claims that more than a million people have access to the updated voice assistant, which is still a small percentage of prospective users; eventually, hundreds of millions of Alexa users are expected to gain access to the AI. A wider release of Alexa+ may be planned for later this summer.
Amazon faces competition from multiple directions as it works on a more dynamic voice assistant. OpenAI's Advanced Voice Mode, launched in 2024, was popular with users who found the AI voice engaging. Apple also announced an overhaul of its native voice assistant, Siri, at last year's developer conference, with many contextual and personalization features similar to what Amazon is working on with Alexa+. Apple has yet to launch the rebuilt Siri, even in early access, and the new voice assistant isn't expected until sometime next year.
Amazon declined to give WIRED early access to Alexa+ for hands-on (voice-on?) testing, and the new assistant has not yet rolled out to my personal Amazon account. Similar to how we approached OpenAI's Advanced Voice Mode, which launched last year, WIRED plans to test Alexa+ and provide firsthand context for readers as it becomes more widely available.