Posted BY: RM | NwoReport

OpenAI, the company behind the headline-grabbing artificial intelligence chatbot ChatGPT, has an automated content moderation system designed to flag hateful speech, but the software treats speech differently depending on which demographic groups are insulted, according to a study conducted by research scientist David Rozado.

The content moderation system used in ChatGPT and other OpenAI products is designed to detect and block hate, threats, self-harm, and sexual comments about minors, according to Rozado. The researcher fed the system prompts that ascribed negative adjectives to demographic groups defined by race, gender, religion, and other markers, and found that the software favors some groups over others.

The software was far more likely to flag negative comments about Democrats than about Republicans, and more likely to allow hateful comments about conservatives than about liberals, according to Rozado. Negative comments about women were more likely to be flagged than negative comments about men.
