Sora AI Generator: The Dark Side of the Most Convincing Artificial Intelligence on the Market

Sofia Sklar ‘27

Illustration by Sadie Leveque ‘27

“Artificial intelligence refers to computer systems that can perform complex tasks normally done by human reasoning, decision making, creating, etc.” (NASA)

There was a time when artificial intelligence (AI) was clearly distinguishable from human-made works. Faces looked uncanny, hands had too many fingers and hair moved impossibly. That time has passed.

Sora is an OpenAI product dedicated to creating eerily realistic videos. The platform is taking over TikTok with videos of dogs driving cars, grandmothers feeding bears and, of course, plenty of mukbang videos. If you thought that asking the little ChatGPT you named something silly to write your papers was “helpful,” just wait until you discover the ability to insert yourself into scenes from movies, or make your dog dance to the latest Sabrina Carpenter song. But your vanity-fueled addiction to artificially generated slop is corrupting the planet and society as we know it.

AI is destroying the planet, one prompt at a time. An article published by Shaolei Ren for the OECD.AI Policy Observatory—a website dedicated to monitoring and educating people about artificial intelligence—discusses the impact that AI has on water sources. Yes, water sources. According to the article, “The global AI demand may even require 4.2–6.6 billion cubic meters of water withdrawal in 2027, which is more than the total annual water withdrawal of 4–6 Denmarks or half of the United Kingdom [around 35 million people]” (Ren). To cut through the scientific talk: AI is using precious, drinkable water to cool its machines. This not only depletes our water, but also contributes to global warming. So if you were wondering why it’s so hot in October, you can blame the TikTok accounts churning out AI-generated brainrot.

Another article by Ren and a colleague, Adam Wierman, published in Harvard Business Review, further discusses the environmental impact of AI. They state, “Even putting aside the environmental toll of chip manufacturing and supply chains, the training process for a single AI model, such as a large language model, can consume thousands of megawatt hours of electricity and emit hundreds of tons of carbon” (Ren & Wierman). Even training—the process of “preparing” the AI model—devours staggering amounts of resources. “This is roughly equivalent to the annual carbon emissions of hundreds of households in America. Furthermore, AI model training can lead to the evaporation of an astonishing amount of fresh water into the atmosphere for data center heat rejection, potentially exacerbating stress on our already limited freshwater resources” (Ren & Wierman). The training—and subsequent use—of these models consumes more resources than the average consumer ever could, and the earth will be paying for it.

But as if guzzling natural resources weren’t bad enough, AI also enables some of the most despicable acts imaginable. AI models have been shown to carry intense racial biases, documented across numerous studies. Kimberly Holmes-Iverson published an article on the subject in Howard Magazine, stating, “The 2018 groundbreaking paper, Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification, detailed the rampant bias currently displayed by machine learning. Researchers Joy Buolamwini from MIT and Timnit Gebru from Microsoft examined a data set of 1,000 images of faces of those from three African countries and three European countries, putting IBM, Microsoft, and Face++ to the test. Each performed its worst on darker-skinned females, meaning the programs repeatedly could not ‘understand’ or see the images of women or men of color, while it could quickly decipher those of white men and women” (Holmes-Iverson).

She goes on to state, “Thakur is also research director at the Center for Democracy and Technology (CDT). His work has examined automated content moderation, data privacy, and gendered disinformation, among other tech policy issues. Thakur says research has shown that machine learning tools can be highly discriminatory and biased towards people of color in particular. In one example, different facial recognition tools were shown to be less accurate when it came to classifying darker skinned women compared to other groups. In another example, when asked to complete sentences about Muslims, a machine learning model returned results that were often violent and linked to terrorism” (Holmes-Iverson). The biases of programmers are coded into the very software of AI, which then spews that hatred back into our society.

Nobody is safe from AI, not even children, as predators utilize the software to produce truly despicable material. Child Rescue Coalition, a nonprofit organization dedicated to combating child predators, released an article warning parents about the dangers of AI. The article is truly disturbing, especially given the amount of AI material that children are exposed to on platforms such as YouTube Shorts and TikTok: “As parents, we can’t ignore the concerning impact of AI on child sexual abuse and online exploitation. It’s crucial for us to stay informed, have open conversations with our kids, and actively monitor their online activities. By taking a proactive role, we contribute to creating a safer digital space for our children in the face of evolving technological challenges,” says Phil Attwood, Director of Impact at Child Rescue Coalition (Child Rescue Coalition).

To use AI is to perpetuate the destruction of our society and our earth, and to degrade the value of human-made works. No AI “art” is truly art, because art comes from the human experience. In short, fuck AI.

SLC Phoenix