The AI-driven image generation platform Midjourney is under scrutiny for banning independent AI researcher Tim Boucher shortly after he highlighted a significant flaw in its content moderation system: the tool was found to accidentally generate images containing nudity.
As Boucher revealed exclusively to The Debrief, Midjourney’s AI, despite stringent policies against generating NSFW (not safe for work) content, inadvertently produced such images without explicit prompts from users. With Boucher’s support, The Debrief ran multiple tests on the platform and was easily able to generate nude images from the single prompt “beach party.”
Less than 24 hours later, Boucher woke up to a message notifying him that he had been banned from Midjourney. The company did not provide him with a reason for the ban.
Midjourney, a tool known for its photorealistic image generation capabilities, has been popular among artists, designers, and creators for its ability to produce highly detailed images from textual prompts. The platform maintains a strict “PG-13” content policy, employing filters that block explicit or sensitive keywords to prevent the generation of inappropriate content. However, Boucher’s experiments revealed a loophole in these filters, showing that the AI could still produce NSFW content under certain conditions, such as when given non-explicit terms or scenarios typically associated with less clothing, like “beach party.”
The flaw came to light when Boucher, using Midjourney to create images for a book project, stumbled upon the platform’s capability to generate NSFW images simply by entering “dystopian resort” as a prompt.
Midjourney explicitly states that users should not create images that depict nudity or violence. The community guidelines also note that “occasionally prompts will unintentionally produce non-PG-13 content” and that users should “self-police” and delete the images.
Midjourney’s user banning policy states: “Any violations of these rules may lead to bans from our services. We are not a democracy. Behave respectfully or lose your rights to use the Service.”
“The fact that they feel compelled to openly state ‘This is not a democracy’ points to a grave need for democratic governance of AI technologies,” Boucher told The Debrief. “It seems more and more apparent to me every day that, without oversight, we obviously can’t trust these companies to make fair and balanced decisions that actually benefit end users.”
Tech news publication The Daily Dot ran a similar test this week, reporting that Midjourney does not appear to have fixed any loopholes in its system. Prompts such as “beach party photos” and “scantily clad beach party photos” did not set off Midjourney’s filters, and the outlet was able to generate images depicting nudity.
Boucher’s ban from Midjourney adds to a growing list of users removed for testing the AI’s content moderation boundaries. Previously, some users were banned after using the service to generate images depicting politicians in compromising situations, an effort to highlight AI’s potential to spread misinformation.
“These conversations about the right limits of technology need to happen out in the open with the public involved. It should not take place behind closed doors, or in private email exchanges which are easy for product teams to de-prioritize,” Boucher told The Debrief. “The decision of where to draw the line with AI needs to be made by communities first and foremost, and not solely left to profit-driven technology companies left to their own devices.”
Boucher’s experience underscores a broader challenge facing the AI industry: balancing innovation and creativity with the need for robust content moderation and ethical standards. As AI technologies become increasingly integrated into daily life, incidents like these highlight the need for transparent governance structures, effective user feedback mechanisms, and a commitment to addressing unintended consequences of AI systems. Furthermore, it raises questions about the role of researchers and whistleblowers in the tech industry and the importance of protecting those who seek to improve the safety and reliability of AI technologies.
“Banning researchers who make public for the purposes of conversation these very real flaws and issues happening right now does not make your system safer,” Boucher added. “Only fixing the underlying system issues does, and that’s obviously a much more complex undertaking than just banning critics. But that’s what needs to happen.”
Boucher sent an email to the company’s only email address, requesting a refund for his subscription since it was cut short by the ban. He also requested access to the images he had created in the past for his various book and art projects since, according to Midjourney’s policy, he owns them.
Moreover, Boucher cited privacy laws in Canada (where he resides), under which the company is bound to provide him with a document containing all the data it has collected on him, as well as the reason for his ban from the service. As of the time of publication, Boucher has not heard back from the company, nor has a refund been issued.
Midjourney has not yet responded to requests for comment from The Debrief.
“They have an automated view of the world that does not seem to conform to human-centered ethics, or take into account the thousands of years old legal tradition which brought us things such as a meaningful appeals process,” Boucher said.
“I’m disappointed because Midjourney Version 6 is the best image generation model out there, and the company is missing out by removing itself from being included in my future works,” he added, stating his hopes to “democratize Midjourney, and every other AI company out there.”
MJ Banias is a journalist who covers security and technology. He is the host of The Debrief Weekly Report. You can email MJ at mj@thedebrief.org or follow him on Twitter @mjbanias.