ChatGPT Leaves Employees Mentally Scarred for $2 an Hour
To make ChatGPT safer, OpenAI used Kenyan workers to label disturbing content, including child abuse imagery and descriptions of rape, work that left employees mentally scarred, according to an investigation by Time.
Earlier attempts at building AI language tools fell short because the tools tended to produce violent, sexist, and racist remarks. Infamously, it took Twitter users less than 24 hours to corrupt Microsoft's Tay chatbot. The problem is that the internet text used for training contains too much toxic content.
The internet provides a wealth of written content to train AI, but sorting through the data and removing toxic content would take decades if done by humans. The solution is to create another AI that can recognize unwanted content. However, this requires humans to train the model to recognize that content, which means hundreds of hours of reading and labeling content containing violence, hate speech, and sexual abuse. All four of the workers interviewed by Time said these tasks mentally scarred them. In a statement to Time, the makers of ChatGPT said:
Our mission is to ensure artificial general intelligence benefits all of humanity, and we work hard to build safe and useful AI systems that limit bias and harmful content. Classifying and filtering harmful [text and images] is a necessary step in minimizing the amount of violent and sexual content included in training data and creating tools that can detect harmful content.

To make AI safe for humanity, OpenAI outsourced the task of labeling content to Sama, a company based in San Francisco that employs workers in Kenya. Sama claims to be an ethical AI company while paying Kenyan workers up to $2 an hour to review mentally disturbing content. According to contracts seen by Time, OpenAI pays Sama $12.50 per hour for the work completed.
In Sama's defense, a spokesperson for the company said,
The $12.50 rate for the project covers all costs, like infrastructure expenses, and salary and benefits for the associates and their fully-dedicated quality assurance analysts and team leaders.

Eight months before the end of its agreed contract with OpenAI, Sama canceled the partnership because images sent for review contained child sexual abuse, bestiality, rape, and sexual slavery. OpenAI told Time:
We engaged Sama as part of our ongoing work to create safer AI systems and prevent harmful outputs. We never intended for any content in the [child sexual abuse] category to be collected. This content is not needed as an input to our pretraining filters and we instruct our employees to actively avoid it. As soon as Sama told us they had attempted to collect content in this category, we clarified that there had been a miscommunication and that we didn't want that content.

The words "ethical" and "safe" are often used by AI companies when marketing their tools. Yet reports of unethical and unsafe actions by the companies that create them keep mounting. We've seen ChatGPT explain how to make meth, Lensa AI generate non-consensual nudes, and artists claim their work has been stolen. Now, this report reveals that these tools cause harm even during their creation.
The post ChatGPT Leaves Employees Mentally Scarred for $2 an Hour appeared first on The Tech Report.