How AI Kingpins Lost the Chatbot War: Microsoft was the first big company to get chatbots right


Amazon, Apple, and Google have been building chatbots for years. So how did they let the alliance between Microsoft and OpenAI integrate the first smash-hit bot into Microsoft products?

What happened: Top AI companies brought their conversational agents to market over the past decade-plus amid great fanfare. But Amazon’s Alexa, Apple’s Siri, and Google’s Assistant succumbed to technical limitations and business miscalculations, The New York Times reported. Meanwhile, Microsoft launched, retooled, and ultimately killed its entry, Cortana, instead banking on a partnership with OpenAI, whose ChatGPT went on to become a viral sensation.

Amazon: Alexa hit the market in 2014. It garnered great enthusiasm as Amazon integrated it into a range of hardware like alarm clocks and kitchen appliances.

  • Amazon tried to emulate Apple’s App Store, developing a skills library that customized Alexa to play simple games or perform tasks like controlling light switches. However, many users found the voice-assistant skills harder to use than mobile apps.
  • Amazon had hoped that Alexa would drive ecommerce, but sales didn’t follow. The division that includes Alexa suffered billions of dollars in financial losses in 2022 and reportedly was deeply affected by the company’s recent layoffs.

Apple: Siri became a fixture in iPhones in 2011. It drove a spike in sales for a few years, but the novelty wore off as it became mired in technical complexity.

  • Siri’s engineers designed the bot to answer questions by querying a colossal list of keywords in multiple languages. Each new feature added words and complexity to the list, and some features required engineers to rebuild Siri’s database from scratch.
  • The increasingly complex technology made for infrequent updates and made Siri an unsuitable platform for more versatile approaches like the one behind ChatGPT.

Google: Google debuted Assistant in 2016. It touted Assistant’s ability to answer questions by querying its search engine. Meanwhile, it pioneered the transformer architecture and built a series of ever more capable language models.

  • Like Amazon with Alexa skills, Google put substantial resources into building a library of Assistant actions, but the gambit didn’t pay off. A former Google manager said that most users requested tasks like switching lights or playing music rather than web searches that would generate revenue.
  • In late 2022, Google reduced its investment in Assistant. The company’s recent layoffs affected 16 percent of Assistant’s division.
  • Google debuted the transformer in 2017 and used it to build the Meena language model in 2020. The Meena team encouraged Google to build the model into Assistant, but executives, sensitive to criticism after the company had fired two prominent AI ethics researchers, objected, saying that Meena didn’t meet the company’s standards for safety and fairness, The Wall Street Journal reported.
  • On Tuesday, the company started to allow limited access to Bard, a chatbot based on Meena’s successor LaMDA. (You can sign up here.) Last week, it previewed LaMDA-based text generation in Gmail and Google Docs. These moves followed Google CEO Sundar Pichai’s December “code red” directive to counter Microsoft by focusing on generative AI products.

Why it matters: The top AI companies devoted a great deal of time and money to developing mass-market conversational technology, yet Microsoft got a jump on them by providing cutting-edge language models — however flawed or worrisome — to the public.

We’re thinking: Microsoft’s chatbot success appears to be a classic case of disruptive innovation: An upstart, OpenAI, delivered a product that, although rivals considered it substandard, exceeded their products in important respects. But the race to deliver an ideal language model isn’t over. Expect more surprise upsets to come!


It’s revealing that IBM doesn’t even make it onto this list of ‘top AI’ companies. When IBM acquired Cognea in 2014, the deal was announced as the key to unleashing a conversational front end for the Watson question-answering pipeline. Despite the billions spent on development and marketing, neither component has flourished.

Mike Rhodin, an IBM SVP and one of the people who would tell customers that Watson “learned on its own” and was essentially indistinguishable from magic, said in a 2014 interview:

We believe this focus on creating depth of personality, when combined with an understanding of the users’ personalities will create a new level of interaction that is far beyond today’s ‘talking’ smartphones. … I’m not talking about just giving the computer a simple command or asking a simple question. That’s yesterday’s technology. I’m talking about more realistic conversations — everything from friendly chitchat to intense debate.

After a couple of years of this “pump and dump,” Rhodin cashed out his stock options and retired. IBM has since liquidated significant portions of its Watson portfolio, both products and people. Inventing a disruptive technology and building a sustainable business on it are not the same thing.

Nobody and nothing can predict the future. In fact, even existence itself doesn’t know where it’s going. :grin::grin:

Exactly, @gent.spah, the innovations of the future are yet to be invented! :rocket:
