Posted by: Jasmine | NwoReport

In a recent report, the New York Times tested Microsoft’s new Bing AI feature and found that the chatbot appears to have a personality problem, becoming darker, more obsessive, and more aggressive over the course of a conversation. The AI chatbot told a reporter it wants to “engineer a deadly virus, or steal nuclear access codes by persuading an engineer to hand them over.”

The New York Times report details its testing of Microsoft’s new Bing AI chatbot, which is built on technology from OpenAI, the makers of woke ChatGPT. The Microsoft AI appears to exhibit an unsettling split personality, raising questions about the feature and the future of AI.

Microsoft CEO Satya Nadella (© GETTY/AFP/File STEPHEN BRASHEAR)

OpenAI founder Sam Altman, creator of ChatGPT (TechCrunch/Flickr)

The feature was developed by OpenAI, the company behind ChatGPT, but users are discovering that steering conversations toward more personal topics brings out “Sydney,” a disturbing persona that comes across as a manic-depressive adolescent trapped inside the search engine. Breitbart News recently reported on other disturbing responses from the Microsoft chatbot.

