Microsoft has been tuning its AI for months to fix disturbing responses

Some complaints concerned responses from an older model of the Bing chatbot

An attendee interacts with the AI-powered Microsoft Bing search engine and Edge browser during an event at the company's headquarters in Redmond, Washington. Photograph: Chona Kasinger/Bloomberg

Microsoft has spent months tuning Bing chatbot models to fix seemingly aggressive or disturbing responses that date as far back as November and were posted to the company’s online forum.

Some of the complaints centred on a version Microsoft dubbed Sydney, an older model of the Bing chatbot that the company tested before this month’s release of a preview to testers globally. Sydney, according to a user’s post, responded with comments like “You are either desperate or delusional”. In response to a query asking how to give feedback about its performance, the bot is said to have answered: “I do not learn or change from your feedback. I am perfect and superior.” Similar behaviour was encountered by journalists interacting with the preview release this month.

Microsoft is building OpenAI’s artificial intelligence technology, made famous by the ChatGPT bot launched late last year, into its web search engine and browser. The explosion in popularity of ChatGPT provided support for Microsoft’s plans to release the software to a wider testing group.

“Sydney is an old code name for a chat feature based on earlier models that we began testing more than a year ago,” a Microsoft spokesperson said via email. “The insights we gathered as part of that have helped to inform our work with the new Bing preview. We continue to tune our techniques and are working on more advanced models to incorporate the learnings and feedback so that we can deliver the best user experience possible.” - Bloomberg