With its AI-powered chatbot, Meta was heading for disaster
The big fad for “chatbots” of the mid-2010s seemed to be over. But on Friday, August 5, Meta reminded everyone that work on this technology continues by presenting BlenderBot 3, its new “state-of-the-art chatbot”. According to the company, this text-based bot can “talk to people naturally” about “almost any topic”, a promise that chatbot developers have repeatedly made but never fulfilled.
While still a prototype, BlenderBot 3 is freely accessible (initially only in the US), allowing a large number of volunteer testers to help it improve through a conversation-rating system. As such, it has been widely scrutinized by the media and other curious minds since it went online, and the first assessment sounds like a sad refrain: BlenderBot 3 is quick to disparage Facebook, criticizing Zuckerberg’s clothing style, then spinning off into conspiratorial, even anti-Semitic, remarks. Before launching the tool, Meta did warn users that the chatbot is “likely to make false or offensive statements”. But in its press release, the company states that it has created safeguards to filter out the worst of them…
Meta’s chatbot, Meta’s first critic
BlenderBot is a long-term project. The researchers do not aim to create a functional, marketable tool in the short term, but to advance the state of the art of chatbots. Specifically, their tool aims to integrate human conversational qualities (such as personality traits) into its answers. Equipped with a long-term memory, it must be able to adapt to the user over the course of the exchange. In their press release, the researchers specify that BlenderBot must improve chatbots’ conversational skills “while avoiding useless or dangerous responses”.
The problem, as always, is that the chatbot scours the web for information to fuel the conversation. Except that it doesn’t filter it enough. Asked about its boss Mark Zuckerberg, it can answer: “He is a competent businessman, but his practices are not always ethical. It’s funny that he has so much money but still wears the same clothes!”, reports Business Insider. When it comes to its parent company, it doesn’t hesitate to recall the myriad scandals that have tarnished Facebook (and partly motivated its rebranding). It even says its life has been much better since it deleted its Facebook account.
If the bot is so negative towards Meta, it is simply because it draws on the most popular search results about Facebook, which largely recount the company’s setbacks. Through this process, it absorbs a bias that proves detrimental to its own creator. But these drifts aren’t limited to amusing jabs, which is a problem. To a Wall Street Journal reporter, BlenderBot claimed that Donald Trump is still president and “would still be with his second term ending in 2024”. A conspiracy theory, passed along as fact. To top it off, Vice points out that BlenderBot’s responses were “generally neither realistic nor good” and that it “often changes the subject” abruptly.
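To see why popular search results can steer a bot’s opinion, here is a deliberately simplified illustration of internet-augmented generation. This is not Meta’s actual pipeline: web_search and generate are hypothetical stand-ins, and the snippets are invented. The point is that retrieved context is injected into the prompt unfiltered, so whatever dominates the results dominates the answer.

```python
# Deliberately simplified illustration of internet-augmented generation.
# NOT Meta's actual pipeline: web_search() and generate() are hypothetical
# stand-ins, and the snippets below are invented for the example.

def web_search(query: str, k: int = 3) -> list[str]:
    """Stand-in for a search API returning the k most popular snippets."""
    # If scandal coverage dominates the results, it dominates the context too.
    return [
        "Facebook fined over data-privacy scandal ...",
        "Zuckerberg grilled by lawmakers ...",
        "Meta shares slide after ...",
    ][:k]

def generate(prompt: str) -> str:
    """Stand-in for the language model's decoder."""
    return "(model output conditioned on the prompt above)"

def reply(user_message: str) -> str:
    snippets = web_search(user_message)
    # Snippets are injected verbatim, with no ranking or fact-checking:
    # any bias in the search results flows straight into the answer.
    prompt = "Context:\n" + "\n".join(snippets) + f"\nUser: {user_message}\nBot:"
    return generate(prompt)

print(reply("What do you think of Mark Zuckerberg?"))
```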
History repeats itself
These slides from the amusing to the dangerous have an air of déjà vu. In 2016, Microsoft launched the chatbot Tay on Twitter, which was supposed to learn in real time from discussions with users. It failed: within a few hours, the bot was relaying conspiracy theories as well as racist and sexist statements. Less than 24 hours later, Microsoft took Tay offline and profusely apologized for the fiasco.
Meta has nevertheless taken a similar approach, relying on a massive language model with more than 175 billion parameters. This model was trained on huge (mostly publicly available) text corpora, with the aim of extracting an understanding of language in mathematical form. For example, one of the datasets the researchers created contained 20,000 conversations on over 1,000 different topics.
The problem with these large models is that they reproduce the biases in the data they were fed, often with a magnifying effect. And Meta was aware of these limitations: “Knowing that all AI-powered conversational chatbots sometimes mimic and generate dangerous, biased, or offensive remarks, we conducted large-scale studies, co-organized workshops, and developed new techniques to create safeguards for BlenderBot 3. Despite this work, BlenderBot may still make rude or offensive comments, which is why we’re collecting feedback.” Clearly, these additional safeguards are not having the desired effect.
Faced with the repeated failures of large language models and a long list of abandoned projects, the industry has fallen back on less ambitious but more effective chatbots. Most customer-assistance bots today follow a predefined decision tree without ever leaving it, even if that means telling the customer they have no answer or transferring them to a human operator. The technical challenge then lies in understanding the questions users ask and matching each one to the most relevant scripted answer, as the sketch below illustrates.
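For contrast with free-form generation, here is a minimal sketch of such a scripted bot, with hypothetical intents and answers: every reply is either a node of the predefined tree or an explicit handover to a human, and the bot never generates free text.

```python
# Minimal sketch of a decision-tree support bot (hypothetical intents/answers).
# Each node maps recognized keywords either to a scripted reply or to a deeper
# node; anything unrecognized falls through to a human operator.

TREE = {
    "question": "How can I help? (billing / delivery / returns)",
    "children": {
        "billing": {
            "question": "Is this about an invoice or a refund?",
            "children": {
                "invoice": {"answer": "Invoices are under Account > Billing."},
                "refund": {"answer": "Refunds take 5-10 business days."},
            },
        },
        "delivery": {"answer": "Track your parcel via your confirmation email."},
    },
}

def respond(node, user_text):
    """Walk one step down the tree; return (reply, next_node)."""
    text = user_text.lower()
    for keyword, child in node.get("children", {}).items():
        if keyword in text:
            if "answer" in child:
                return child["answer"], TREE      # scripted leaf: answer, reset
            return child["question"], child       # inner node: ask a follow-up
    # Nothing matched (e.g. "returns" has no branch here): hand over to a
    # human instead of improvising an answer.
    return "I don't have an answer for that; transferring you to an agent.", TREE

if __name__ == "__main__":
    node = TREE
    print(TREE["question"])
    while True:
        reply, node = respond(node, input("> "))
        print(reply)
```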
Meta is transparent
While BlenderBot 3’s success is more than questionable, Meta at least shows a rare transparency, a quality that AI-powered tools typically lack. Users can click on the chatbot’s responses to see the (more or less detailed) sources behind the information. In addition, the researchers share the code, data, and model used to power the chatbot.
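The full 175-billion-parameter model is impractical to run locally, but Meta has long published smaller BlenderBot checkpoints on Hugging Face. As a taste of the model family, here is a minimal sketch querying the 400M distilled checkpoint from an earlier BlenderBot generation (not BlenderBot 3 itself), assuming the transformers library is installed.

```python
# Query a small, publicly released BlenderBot checkpoint (an earlier
# generation, not BlenderBot 3). Requires: pip install transformers torch
from transformers import BlenderbotTokenizer, BlenderbotForConditionalGeneration

name = "facebook/blenderbot-400M-distill"
tokenizer = BlenderbotTokenizer.from_pretrained(name)
model = BlenderbotForConditionalGeneration.from_pretrained(name)

# Encode one user turn and let the seq2seq model generate a reply.
inputs = tokenizer("What do you think of Mark Zuckerberg?", return_tensors="pt")
reply_ids = model.generate(**inputs, max_new_tokens=60)
print(tokenizer.batch_decode(reply_ids, skip_special_tokens=True)[0])
```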
To the Guardian, a spokesperson for Meta also clarifies that “anyone using BlenderBot must acknowledge that they understand that the discussion is for research and entertainment purposes only, that the bot may make false or offensive statements, and that they agree not to intentionally encourage the bot to make offensive statements.”
In other words, BlenderBot is a reminder that the ideal of chatbots able to express themselves like humans remains far off, and that many technical obstacles still have to be overcome. But Meta has taken enough precautions in its approach that, this time, the story has not turned into a scandal.