It’s 2023, and the thing everyone wants to talk about is artificial intelligence: from upgraded chatbots that can summon your college essay in seconds to a friendly-looking contact on your Discord app that can turn all your ideas into images.
I’ve always been a Sci-Fi nerd and a fervent advocate for technology as evolution-through-other-means. While in college for Visual Design, circa 2009, I encountered the works of visionaries like Ray Kurzweil and Aubrey de Grey. I began to believe in a future where AI, Nanotech, and Bioengineering would shape a better, more equal humankind. We would live longer, healthier, and—overall—happier. We would solve issues like hunger and disease and, ultimately, aging. I even flirted with enrolling in a Scientific Initiation Program after writing an article about the future of technology and the blurring boundaries between artificiality and reality.
There was something, however, deeply hindering my foresight: capitalism.
As I see the fabric of Science Fiction enter our daily lives I also see the reasons for its dystopian predominance. As these tools become widely available—and mostly free for all—they come lacquered with the promise of practicality and performance-boosting.
You need an idea? Here. ChatGPT will give you one. Need an illustration? Midjourney! Need a logo for your company? DALL-E will give you one.
Need an actual artist? No. You can’t afford it.
While NASA incorporates algorithms to speed up its image-analysis process and further our understanding of the universe, companies like OpenAI seem to be targeting the work of one of the most vulnerable segments of society. Though I’m all for tools that make our lives easier, and I still support the existence of these so-called “AIs,” I saw their negative implications strike well before the positive ones.
And that is a problem that only seems to grow.
In December 2022, acclaimed best-selling author Christopher Paolini—the one who introduced younger millennials to the Hero’s Journey with Eragon and his Inheritance Cycle—released the cover for his new novel Fractal Noise. Proudly displayed on the page of Tor (the leading Sci-Fi/Fantasy publisher in the world), the cover soon came under the scrutiny of fans, who quickly realized… it was AI-generated. The company issued a casual apology and blamed a third-party stock-art website; the backlash went on for a couple of weeks, and the world moved on.
In the next chapter of AI interfering with the work of artists, something unheard of happened to another segment of the publishing industry. Clarkesworld, one of the longest-standing publications for Science Fiction and Fantasy, known for its personalized rejections and for never closing for submissions, was forced to take a break. Spurred on by some TikTok finance coach, people began to generate speculative fiction short stories with ChatGPT and submit them to magazines in the hopes of making passive income from them.
For anyone who’s familiar with the industry, that is nothing short of laughable. But to the ill-advised aspiring “AI-artist,” it seemed like a viable path. Clarkesworld was flooded with submissions written by silicon-based minds and forced to shut off submissions for a while. Editor-in-Chief Neil Clarke decided not to reveal their method for identifying AI content, in order to prevent people from circumventing the system, but said the quality was very poor and obviously false. He also pointed out that there are ethical and legal issues with this technology that they’re not ready to accept.
As the technology advances, systems to identify AI-generated text become harder and harder to develop. In the meantime, real writers have one fewer channel to showcase their work.
Then the University of Chicago released Glaze, a piece of software made to protect artwork from having its style mimicked by AI. With a method its developers call “cloaking,” Glaze adds subtle changes to the art, designed to confound the learning process of tools like Midjourney and DALL-E. Like a band-aid on a gunshot wound, developers are being forced to create new tools to protect the intellectual property of artists from being stolen and reproduced. For profit.
Once again, capitalism leads us into creating problems so we can sell the solution. These tools are newborns. They are mostly in beta versions and their results at first seem impressive, but to a trained eye they can honestly be… kind of pathetic.
Some months ago I began to experiment with Midjourney. I’ve used it as a brainstorming assistant, as a stock-image generator for Photoshop composites and sometimes just to have fun with it. In my very first week as a user, I ran into its main flaw: it is unable to imagine something that doesn’t exist. It is incapable of complex abstraction and blind to the possibilities of something new.
How did I test this? With snow. Yes, snow.
I’ve been working on a novel set in a land where the ocean and the snow are red. I must have tried hundreds of prompts at this point, in an attempt to make Midjourney show me an image of a landscape covered in red snow, a cliff by a red ocean or meadows sprinkled with crimson ice. It simply couldn’t.
It generated beautiful images of cliffs, of snowstorms, of waves… all with red elements in them. But it can’t grasp the concept of the snow or the ocean themselves being red. Why? Because it can only create from what already exists. It can only copy and combine in order to create an illusion of what real artistry is.
Meanwhile, the tech world is flooded with stories of entrepreneurs who used ChatGPT to generate ideas for a business, who created their logos in DALL-E and their mockups in Midjourney, and everything is online and thriving and they are making money by being really fucking smart. That same tech world, mind you, was unable to keep its own bank from collapsing and had to call for aid from the government it so deeply detests.
As Microsoft launches its Copilot, I am curious to see whether the tech industry will keep defending the unchecked advance of generative AI. When Copilot renders interns useless by making presentations and analyzing Excel sheets on its own, maybe they will actually begin to question how far we should take the automation of creative jobs.
What impresses me the most is how quickly people accepted these tools. Instagram is flooded with AI art and those who call themselves “AI-artists.” Some even hide behind the absence of hashtags or descriptions, passing as genuine to all but the trained eye. To me, that is like using a drum machine to program drums and calling yourself a drummer.
YouTube is flooded with tutorials on how to create your company logo with Midjourney and Canva and, for anyone who has ever designed a logo, it borders on insulting. It is the epitome of typography crime and a 2.0 version of “my nephew can do it.”
I’ve been working as a freelance graphic designer for almost fifteen years now. The majority of my work has been in the fields of illustration and album covers. I specialized in photo composites to create new worlds, with new physics, different geologies, and unique flora and fauna. I made unsettling visuals and fantastical scenes to, hopefully, cause a gut reaction.
Some of my former clients are not working with me anymore. They are releasing albums and singles with covers of their own, because some band member is now an “AI-artist”. Their compositions are off, the anatomy of their characters is all over the place, and their typographic work would not pass a first-term test in Graphic Design school.
Adobe’s Firefly is on the horizon and promises to bring these tools inside Adobe’s own applications, to enhance and support designers and digital artists. It claims to have its model trained solely on uncopyrighted content and to be a more responsible, ethical alternative. Judging by the beta version, it does look promising, but—as expected—some of its results are not as impressive as Midjourney’s or Stable Diffusion’s. Of course. It’s not stealing other people’s work, and its database is much smaller.
Though I still believe these tools can have advantages, that they can truly make our lives easier and, above all else, help people with less expertise to express their creativity, I don’t see this going well in a capitalist world. Not when inequality keeps growing and a select few hold the resources—and the power—over the vast majority. I feel it’s going to get a lot worse before it gets better.
As I write this, the Future of Life Institute has just released an open letter calling for companies to pause AI development for six months, claiming that “AI labs and independent experts should use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts. These protocols should ensure that systems adhering to them are safe beyond a reasonable doubt.”
The letter has been signed by the likes of unlimited free-speech champion Elon Musk and Apple co-founder Steve Wozniak. I’m skeptical about its actual effect, for I saw Stephen Hawking himself try to warn us about this—to no avail—before his passing. However, it poses a very important question: should we automate away all the jobs, including the fulfilling ones?
We’ll see, but I’m afraid the cat’s out of the bag.
A fat Batman with six fingers; Darth Vader in gold-encrusted armor; dragons with anatomies that make no sense; Donald Trump being arrested; the Pope wearing a puffer jacket… it’s all out there.
And the ill-advised believe it.
They believe those are photos because they saw them in the news. They believe those are artists and not just prompt-writers. They believe these tools to be artificial intelligence, when they have nothing to do with intelligence.
They are entering our lives to make us think less and work less with the promise of efficiency. In the same way some of us cannot make simple calculations without our phones anymore and—shockingly—a lot of us complain when something is too long to read, I fear that generative “AI” will make art more trivial, more ordinary. That by depriving art of its process we are taking away its soul. We are severing emotion in the hopes of growing profit.
And I am certain that capitalism will be the first in line to take advantage of it all and exploit those at the bottom of the pyramid. That’s the nature of the system and that’s why art has always been in conflict with it. Art was never supposed to be fast, never supposed to be instantaneous, never supposed to be just a product.
Art is a result of emotion meeting skill, of process through media, of someone’s individuality resonating with others’. As neuroscientist and MIT professor Miguel Nicolelis put it, we cannot reduce intuition, beauty, creativity, gut feeling, and ultimately real intelligence to zeros and ones. That is a dream of computer scientists, tech moguls, and entrepreneurs. Once these dreams meet real studies of how the brain works, they fall flat. They become more and more an ideal that belongs solely in Science Fiction.
These systems are based in the past and can only create a future that’s already occurred.
We’re at risk of painting a world with no tomorrow. Using tools that lead us to believe it isn’t worth creating something new.