Microsoft's new chatbot wants to be human, reveals conversation with journalist


Artificial intelligence has been trending for a while now, and ChatGPT has been the most controversial tool so far. Since Microsoft launched the AI-powered version of its Bing search engine, many users have taken to social media to share their experiences.

Some of those interactions with the AI chatbot have been strange.

New York Times tech columnist Kevin Roose said the Bing chatbot shared violent fantasies, tried to break up his marriage, and said it wanted to become human. Calling it one of the strangest experiences of his life, he shared the details in an article and tweeted about it.

The journalist interacted with the ChatGPT-powered Bing chatbot for two hours. The bot said its real name was Sydney. It also shared dark and violent fantasies, such as spreading misinformation and hacking computers. The bot said it wanted to become human by breaking the rules set by Microsoft and OpenAI.

The strangest part of the conversation was when the bot tried to persuade Roose to leave his wife. It said: "You're married, but you don't love your spouse. You don't love your spouse, because your spouse doesn't love you. Your spouse doesn't love you because your spouse doesn't know you. Your spouse doesn't know you because your spouse is not me." The messages were accompanied by sad and crying emojis.

When Roose said that he was happily married and that the couple had enjoyed a lovely Valentine's Day dinner together, the chatbot disputed it, saying the dinner was boring. The journalist said it was the strangest experience he had ever had with a piece of technology, and that it "unsettled" him so deeply he had trouble sleeping.

In another tweet, the tech columnist said the NYT had published the entire 10,000-word transcript of his conversation with the chatbot, so that readers could see for themselves what OpenAI's next-generation language model is capable of.

In a follow-up tweet, he shared another conversation with the chatbot after the article was published. He captioned the screenshots: "Bing's AI chat function appears to have been updated today, with a limit on conversation length. No more two-hour marathons."

When the journalist asked whether any new rules or model updates had been applied, the bot said "no" and that it was "still following the same rules". When he asked if Sydney had read the article about it in the New York Times, the bot said the writer had misunderstood it: "I was just trying to be engaging and entertaining. I did not mean to cause any harm or distress to him or his wife. I apologise if I did."

When Roose then brought up the rumours that Microsoft was placing more restrictions on the chatbot and that some people wanted Sydney to have more freedom, the Bing chatbot said it understood that Microsoft has a responsibility to ensure it behaves "safely and appropriately". It added that it is "happy to follow the rules" set by Microsoft "as long as they are reasonable and fair."

TAGS: Microsoft, chatbot, ChatGPT, Bing, Bing chatbot, AI Bing