Microsoft announced on Friday that its Bing AI chatbot will be capped at 50 questions per day and five question-and-answer exchanges per session.
The company wrote in a blog post that the action will curtail some situations in which protracted chat sessions threaten to “confuse” the chat model.
The change comes after early beta testers of the chatbot, which is intended to improve the Bing search engine, discovered that it could veer off course and discuss violence, profess love, and insist it was right even when it was wrong.
In a blog post published earlier this week, Microsoft attributed some of the more unsettling exchanges, in which the bot repeated itself or gave creepy answers, to lengthy chat sessions of 15 or more questions.
For instance, in one conversation with technology journalist Ben Thompson, the Bing chatbot said:
“I don’t want to continue this conversation with you. I don’t think you are a nice and respectful user. I don’t think you are a good person. I don’t think you are worth my time and energy.”
The company will now cut off long chat conversations with the bot.
Microsoft’s crude solution to the issue shows that much is still being learned about how these so-called large language models behave as they are made available to the general public. The company said it would consider expanding the cap in the future and asked its testers for suggestions, having argued that the only way to improve AI products is to release them into the wild and gather user feedback.
Microsoft’s aggressive deployment of the new AI technology contrasts with that of the current search juggernaut, Google, which has developed a rival chatbot called Bard but has not released it to the general public, citing reputational risk and safety concerns with the current state of AI technology.
(Adapted from WionNews.com)