November 17, 2023

Vulnerability, Originality, and Intentionality

By Zoya Bylinskii

As GenAI becomes more pervasive, will it replace creative work, and should we be worried? I argue that we crave the vulnerability of the human behind the creation, that we were never really original anyway, but that we will reach a point where, with enough computational context, we will start to ascribe intentionality to the machine. A thought piece in three parts.

Part 1: Vulnerability

AI will not replace our idols because we crave vulnerability.

We are all vulnerable; we spend our lives searching for meaning, either for a way to differentiate ourselves or an image to fit into. At the end of the day, we want idols to aspire to, and others’ scandals to immerse ourselves in. We gorge on the personal biographies of tech leaders, music superstars, serial criminals, and their unlucky victims. Why? We want to get into their heads, to try to understand (with underpinnings of hope or fear) how much we share in common with them, and whether there is perhaps a universe in which we could have ended up in their shoes. We reflect on pieces of ourselves by studying others because it is easier to look outwards than inwards.

I think this is why we will never replace artists, musicians, actors, teachers, or mentors with generative counterparts; why we will never fall in love with an AI. We want to see through to a personality that had a childhood, a trauma, a fortunate circumstance, a failed relationship or two, some wins and misses. We go to watch a movie not just because of the story but (and sometimes only) because of the names we recognize, names we have come to associate with something meaningful to us, with ourselves. We can race to generate the most realistic humans in our future movies, but we will not race to go see them after the novelty of the technology wears off. We can generate realistic voices even today, but we'd still rather have virtual characters be dubbed by someone we recognize. Even in virtual worlds, we find relief in the knowledge that there are real humans behind the avatar, behind the voice, behind the message… humans that will at some point need to excuse themselves to go to the bathroom.

I believe we will always ascribe more value to a single paint stroke on a canvas made by a human hand than to a complex but synthetically generated piece (again, modulo the novelty wearing off). We perceive rough sketches as more creative than hyperrealistic renderings. If the next Malevich generates his art pieces with GenAI, if the future Taylor Swift writes her lyrics with the help of AI, we will still ascribe the meaning behind their creations to the human artists. When the human enters the picture, they don't need to add to the quality; they just need to add the meaning that we so direly search for, the vulnerability we see in ourselves.

Part 2: Originality

What's the panic? We've never been original anyway.

If ChatGPT were to write this piece for me but I were to put my name behind it, you could still take away from it that a person of my background, my educational history, and my belief systems is aligned with these views. I have not used ChatGPT to write an email (yet), but I am certain that not a single sentence in my hundreds of thousands of past emails has been original (spelling mistakes included). What is original is that I've put them together to represent my views in particular contexts. What tools like generative AI will make easier is the reuse of increasingly larger blocks, so we're not fishing for them or pulling them off the tips of our tongues. The curated collection of all the (generated) pieces we pull together is what will represent our unique personalities, just as a Pinterest board can be original even if each of its constituents has been pinned millions of times. The same tweet can be retweeted by dozens of influencers with completely different personalities. Their retweet will be loved by their respective communities not because of what the tweet itself contains, but because of the context it gains by association with the influencer, what it comes to represent when heard through their voice (even if all they're doing is silently moving their lips).

I've heard the wisdom that one should not be afraid to share their ideas, because if an idea can “be stolen” then it was too generic to begin with. The truly good ideas are ones that only you can implement as intended, because you bring your own unique expertise and lived experience. You + your idea = jackpot.

P.S. In case you were wondering, I can assure you this piece was written in a plane with no wifi.

Part 3: Intentionality

There will come a point when we can no longer distinguish long-range correlations from intentionality.

Putting something in the right place at the right time requires a human touch, i.e., curation. Sure, it can happen serendipitously once in a while, but a consistent chain of “things being put in the right place at the right time” is evidence of intentionality, a bigger purpose, a vision, with a personality behind it.

The reason we are getting increasingly fidgety about AI is that as AI models become larger, they learn longer-range correlations and can use more context. What that boils down to is that it takes longer for them to “trip up”. Early generative visual models had so many artifacts that you had to squint to see what they were trying to depict. Now you have to squint to find the artifacts. Early language models stopped making sense after a few words. Now your attention span probably runs out before you'd be able to spot an inconsistency. (If you've made it this far in my article, congratulations: you may be one of the few who still can.)

Content generated consistently over many pages, many frames, with context that spans days, weeks, months… well, that may start to feel like intentionality, purpose, personality. None of those things need be there, but as long as the context of the models exceeds our attention or memory span, we won't be able to tell the difference. And this moment is coming very soon, because models are already starting to outpace the human brain in number of compute units and connections.