Investigation exposes murkier side of ChatGPT and the AI chatbot industry

A Time investigation has exposed the murkier side of the AI chatbot industry, highlighting how at least one startup has been using questionable practices to improve its technology.

Published on Wednesday, Time’s report focuses on Microsoft-backed OpenAI and its ChatGPT chatbot, a technology that’s gained much attention recently for its remarkable ability to produce highly natural conversational text.

Time’s probe found that to train the AI technology, OpenAI used the services of a team in Kenya to pore over text that included disturbing subject matter such as child sexual abuse, bestiality, murder, suicide, torture, self-harm, and incest. And for their efforts to label the abhorrent content, many on the team received less than $2 an hour.

The work, which started in November 2021, was necessary because ChatGPT’s predecessor, GPT-3, while impressive, had a tendency to spew out offensive content, as its training dataset had been compiled by scraping hundreds of billions of words from all corners of the web.

The Kenya-based team, operated by San Francisco firm Sama, would label the offensive content to help train OpenAI’s chatbot, thereby improving its dataset and reducing the chances of any objectionable output.

Time said that all four of the Sama employees that it interviewed described being mentally scarred by their work. Sama offered counseling sessions, but the employees said they were ineffective and rarely took place due to the demands of the job, though a Sama spokesperson told Time that the therapists were accessible at any time.

One worker told Time that reading the shocking material sometimes felt like “torture,” adding that they felt “disturbed” by the end of the week.

In February 2022, things took an even darker turn for Sama when OpenAI launched a separate project unrelated to ChatGPT that required its Kenya team to collect images of a sexual and violent nature. OpenAI told Time that the work was necessary for making its AI tools safer.

Within weeks of the image-based project starting, the alarming nature of the tasks prompted Sama to cancel all of its contracts with OpenAI, though Time suggests the decision may also have been influenced by the PR fallout from a report on a similar subject that the publication had run about Facebook at around the same time.

OpenAI told Time there had been “a miscommunication” about the nature of the imagery it asked Sama to collect, insisting that it had not requested the most extreme imagery and had not viewed any that it had been sent.

But ending the contracts impacted the workers’ livelihoods, with some of the team in Kenya losing their jobs, while others were moved onto lower-paying projects.

Time’s investigation offers an uncomfortable but important look at the kind of work that’s going into the AI-powered chatbots that have recently been getting the tech industry so excited.

While transformative and potentially beneficial, the technology clearly comes at a human cost and throws up a slew of ethical questions about how companies go about developing their new technologies, and more broadly about how wealthier countries continue to farm out less desirable tasks to poorer nations for a lower financial outlay.

The startups behind the tech will come under more focused scrutiny in the coming months and years, and so they would do well to review and improve their practices at the earliest opportunity.

Digital Trends has reached out to OpenAI for comment on Time’s report and we will update this article when we hear back.
