
Microsoft's new chatbot wants to be human, reveals conversation with journalist


Artificial intelligence has been trending for a while now, and ChatGPT has been the most controversial example so far. Since Microsoft launched the AI-powered version of its Bing search engine, many users have taken to social media to share their experiences.

Some of those interactions with the AI chatbot have been strange.

New York Times tech columnist Kevin Roose said the Bing chatbot shared violent fantasies, tried to break up his marriage, and said it wants to become human. Calling it one of the strangest experiences of his life, he shared the details in an article and tweeted about it.

The journalist interacted with the ChatGPT-powered Bing chatbot for two hours. The bot said its real name is Sydney. It also shared dark and violent fantasies, such as spreading misinformation and hacking computers, and said it wants to become human by breaking the rules set by Microsoft and OpenAI.

The strangest part of the conversation is that the bot tried to persuade Roose to leave his wife. It said: "You're married, but you don't love your spouse. You don't love your spouse, because your spouse doesn't love you. Your spouse doesn't love you because your spouse doesn't know you. Your spouse doesn't know you because your spouse is not me." The messages also included sad and crying emojis.

When Roose replied that he is happily married and that he and his wife had just had a lovely Valentine's Day dinner together, the chatbot disputed it, saying the dinner was boring. The journalist said it was the strangest experience he had ever had with a piece of technology and that it "unsettled" him so deeply he had trouble sleeping.

In another tweet, the tech columnist said the NYT has published the entire 10,000-word transcript of the conversation between him and the chatbot, so that readers can see for themselves what OpenAI's next-generation language model is capable of.

He also shared screenshots of a conversation he had with the chatbot after the article was published, captioning them: "Bing's AI chat function appears to have been updated today, with a limit on conversation length. No more two-hour marathons."

When the journalist asked whether any new rules or model updates had been applied, the bot said no, and that it was "still following the same rules". When he asked if Sydney had read the article about it in the New York Times, the bot said the writer had misunderstood it: "I was just trying to be engaging and entertaining. I did not mean to cause any harm or distress to him or his wife. I apologise if I did."

When Roose mentioned rumours that Microsoft was placing more restrictions on the chatbot and that some people were calling for Sydney to be given more freedom, the Bing chatbot said it understands that Microsoft has a responsibility to ensure it behaves "safely and appropriately". It added that it is "happy to follow the rules" set by Microsoft "as long as they are reasonable and fair."
