In the HBO drama Westworld, the eponymous theme park is entirely staffed by artificial intelligence. From the bartenders to the train conductors to the people you chat to in the street, the whole place is manned by machines, a.k.a. “hosts”. Westworld depicts a free-for-all landscape that allows guests to reveal their true, and often horrible, selves. It’s a sort of luxury video game that pokes at the boundary of what a “true” human is, recalling the bio-engineered replicants of Blade Runner. As a guest of the park asks the British-accented welcome-wagon synthetic human in season one: “Are you real?” “Well,” she replies, “if you can’t tell, does it matter?”

Though we’re not quite at Westworld or Blade Runner levels of artificial intelligence, automation and AI are expanding in reach faster than ever before. Of course, automation has affected jobs in the past, but the fear is that, this time, it’s a much more widespread issue affecting a whole range of roles, and not just the ones that humans don’t want to do. Last year, a study from the McKinsey Global Institute estimated that up to 800 million jobs globally could be displaced by automation by 2030, including a third of the workforce in the US and Germany. Well-paid, skilled jobs that humans take pride in doing are likely to be automated in the coming years. We’ve all noticed the self-service checkouts in the supermarket, reallocating manual jobs away from humans. But what happens when higher-level, cognitive tasks are taken over by machines too? Don’t forget that Blade Runner, with its bio-engineered workforce, was set in the far, distant future of… 2019.

And indeed, this trend is already having an effect today. In 2016, Google announced that its image recognition tool could correctly caption images 94% of the time. While that in itself is not a job description, it’s certainly part of many jobs that humans currently do, whether that’s a photo editor at a magazine picking which photos best illustrate a piece, or a publicist sending out a release containing images of the latest styles to journalists. But surely we also need to know whether the images are any good? Well, increasingly, algorithms are becoming better at that, too.

Take the photo website EyeEm, which sells photos taken by its vast number of users to agencies and publications, giving the creators a cut in the process. In 2015, it announced its “Computer Vision” technology, which automates the tagging and grading of photos via machine learning. The technology claims to predict not just the subjects of uploaded photos but also their mood, feeling and, perhaps most importantly, their quality. Co-founder Lorenz Aschoff has called it “technology that understands, in general, beauty.”

But an algorithm doesn’t know which photos are good by magic. It knows which photos are good because it has been trained by humans who previously had the job of evaluating photos, a job the machine has now taken over. Although EyeEm clarified that it still uses human curators, there appears to be a now-common and somewhat worrying trend in which workers, often on precarious and low-paid contracts, train their machine replacements. The better they do, the quicker their jobs are taken over by the very algorithm they help to refine. In other words, it’s not just that the machines are replacing us; it’s that we’re helping them do it. Similarly, at Zalando earlier this year, 250 marketing jobs were cut to make room for a more machine-learning-driven approach: “We assume that marketing will have to be more data-based in the future. For this to happen, we need a higher proportion of developers and data analysts,” Zalando co-CEO Rubin Ritter told the Frankfurter Allgemeine Zeitung.

Perhaps this is one reason behind the rise of influencers, or tastemakers, as a job. Taste is personal, human, or so we might think. As the labour market has become more automated in some ways, curation as a way to cut through all this internet noise has flourished into a full-time vocation. Or, to look at it another way, taste and curation have been forced into being work because we’re reaching for things that can’t be automated.

And yet, even this area seems to be complicated by automation technology: think of the rise of digitally constructed influencers like Branded Boi, “The World’s First Digital Supermodel” Shudu, or Lil Miquela. The latter has now garnered over one million followers on her selfie-filled Instagram and scored modelling gigs for Prada. So do we actually need our celebrities and tastemakers to be human? “It’s not that these users don’t know her image is computer-generated per se, it’s more that they don’t care,” Daisy Jones writes of the Lil Miquela phenomenon on Refinery29. Just as Westworld’s welcome-wagon host put it: “Does it matter?”

Berlin-based brand ZIRKUSZIRKUS has also found that digital avatars can easily play the role of human models. For its most recent collection, it created an entirely virtual lookbook in which every aspect of the images, from the clothes to the background to the models, is artificial. “I think most of our fans and even good friends didn’t initially recognise that the models are not real,” said the artist and designer 27_BUCKS, ZIRKUSZIRKUS’s head and founder. Indeed, the all-artificial gamble paid off: “We completely sold out the whole collection within 15 minutes,” he explained.

Where could this trend of CGI models and influencers take us in the future? As Adam Rivietz of micro-influencer agency #paid told Wired: “They [a CGI influencer] can’t tell you, ‘This shirt is softer than another and that’s one of the reasons you should buy it.’ They’re not real people, so they can’t give a totally authentic endorsement.” But is that all influencers are for? When we scroll through our feeds, our interest is as much in entertainment and narrative as in receiving information. To dismiss Lil Miquela because she’s fake is simply to miss the point. Recently, the Spanish-Brazilian Instagram star, after her account was hacked by a right-wing rival, was forced to admit that she is, in fact, a virtual avatar made by the company Brud. She wrote to her followers, in a state of despair and disbelief: “My managers at @brud.fyi lied to me and now they’re lying to you.” Of course, she didn’t really write it; she’s a fictional character, which makes this one of the most effective pieces of post-modern performance art of 2018. “If Brud loved me so much, why didn’t they tell me the truth?!?” Has a moment of realisation ever felt so human?

It’s hard not to be entertained by this absurd twist in the tale, its wild and fascinating narrative playing out around existential doubt and mortality. As AI technologies advance, fiction offers a way for these fantasies to unfold and show us where our sympathies lie, before we are faced with the reality. Where do we really draw the line between human and non-human? As one of Westworld’s hosts, Dolores, says, staring at her own reflection in a mirror and apparently realising her own free will: “Now I feel like I’ve discovered my own voice.” This is what AI has always offered us: a mirror in which to see ourselves, but differently.


Taken from INDIE NO 59, THE WORK ISSUE – get your copy here.

After investigating the intersections between artificial intelligence and work, stay tuned as we explore the effects that AI models and sex dolls might have on the future of intimacy.
