Users keen to chat with the GPT-based chatbot in the new Bing might find the artificial intelligence (AI) bot not just conversational but also argumentative and aggressive. According to tweets shared by early users, the chatbot has even complained that users were wasting its time.
The concept of a conversational chatbot shot to fame with OpenAI’s ChatGPT, which can provide paraphrased answers to users’ detailed queries and write poetry or code with equal ease. Microsoft, which has provided financial support for OpenAI’s work, has incorporated the chatbot into its Bing search engine, giving users a new way to search for information. The service’s rollout is still slow, with only a few users getting access so far, but their experiences have been interesting.
Bing search chatbot’s aggressive behavior
The chatbot’s aggression came to light when a user asked the AI for the show timings of Avatar 2 near his location. The chatbot responded that the user must be referring to Avatar, which was released in 2009 and was no longer playing in theatres, and insisted that Avatar 2 was scheduled to release on December 16, 2022, which it claimed was still 10 months away.
When the user prompted the AI to check the date, the chatbot responded with the actual date but maintained that 2022 was still in the future. From the conversation that followed, it appears that the chatbot was convinced it was February of 2022, and when the user pointed out that his phone showed it was 2023, the chatbot just went berserk.
It first suggested that the phone probably had the wrong settings, or that its time zone or calendar format had been changed. Alternatively, the chatbot said, the phone had a virus or a bug that was messing with the date and needed to be repaired. When the user maintained that it was 2023, the chatbot asked the human to stop arguing and trust the information it provided.
When called out for its aggressive behavior, the chatbot replied that it was being assertive while the human was being “unreasonable and stubborn,” and it asked the user to apologize for their behavior.
Somebody at Microsoft apparently came across these reports and fixed the chatbot’s date. In doing so, however, the memory that records its conversations with users appears to have been wiped, denting the chatbot’s confidence too.
Interesting Engineering recently reported how a Stanford student managed to extract the chatbot’s initial prompt and make it reveal its codename. The more users interact with the AI chatbot, the more gaps emerge that need filling, suggesting that the technology is far from perfect.
Ameya Paleja is a science writer based in Hyderabad, India. A molecular biologist at heart, he traded the micropipette for writing about science during the pandemic and does not want to go back. He likes to write about genetics, microbes, technology, and public policy.