When Tech Leaves No Space for Humans

Phillip Wang, a software engineer at Uber, built on work by researchers at Nvidia and created the site, ThisPersonDoesNotExist.com, to “raise some public awareness” of the technology that creates these images: generative adversarial networks, or GANs. These are programs built from two neural networks. One generates an image; the other judges how realistic it looks and challenges the first to improve its output. The goal is to create something that’s virtually indistinguishable from a real-life human face.
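
To make the adversarial back-and-forth concrete, here is a minimal sketch, in PyTorch, of the two-network setup described above. Everything in it is an illustrative assumption: the layer sizes, the flattened images, and the training_step helper are placeholders, far simpler than the Nvidia research (StyleGAN) that powers Wang’s site.

```python
# A minimal, illustrative GAN sketch in PyTorch -- not Nvidia's StyleGAN.
import torch
import torch.nn as nn

LATENT_DIM = 128          # size of the random "noise" vector the generator starts from
IMAGE_DIM = 64 * 64 * 3   # a flattened 64x64 RGB image, kept small for illustration

# The generator: turns random noise into something shaped like an image.
generator = nn.Sequential(
    nn.Linear(LATENT_DIM, 512),
    nn.ReLU(),
    nn.Linear(512, IMAGE_DIM),
    nn.Tanh(),
)

# The discriminator: looks at an image and scores how "real" it seems.
discriminator = nn.Sequential(
    nn.Linear(IMAGE_DIM, 512),
    nn.LeakyReLU(0.2),
    nn.Linear(512, 1),    # a single logit: real vs. generated
)

loss_fn = nn.BCEWithLogitsLoss()
opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

def training_step(real_images: torch.Tensor) -> None:
    """One round of the adversarial game on a batch of real face images."""
    batch = real_images.size(0)
    noise = torch.randn(batch, LATENT_DIM)
    fake_images = generator(noise)

    # 1) The discriminator learns to tell real faces from generated ones.
    d_loss = loss_fn(discriminator(real_images), torch.ones(batch, 1)) + \
             loss_fn(discriminator(fake_images.detach()), torch.zeros(batch, 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # 2) The generator learns to fool the discriminator -- the "challenge"
    #    that pushes its output toward ever more realistic faces.
    g_loss = loss_fn(discriminator(fake_images), torch.ones(batch, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
```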

The people Wang’s site creates look real. They could be anyone. But they are no one.

A few days after the site launched, OpenAI revealed a tool that can write coherent paragraphs of text given minimal human prompting. It has been called “deepfakes for text,” a reference to the technology that can swap one person’s face for another’s in a video. As the Guardian explained, “the AI system is fed text, anything from a few words to a whole page, and asked to write the next few sentences based on its predictions of what should come next.”
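
That predict-what-comes-next loop can be sketched in a few lines of code. The example below is an assumption-laden illustration, not the Guardian’s setup: it uses the small “gpt2” checkpoint that OpenAI did release publicly, loaded through the Hugging Face transformers library, with a prompt and generation settings chosen only for demonstration.

```python
# A minimal sketch of prompting a released GPT-2 checkpoint to continue a passage.
# Requires: pip install transformers torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")   # the small, public checkpoint
model = GPT2LMHeadModel.from_pretrained("gpt2")

# Any prompt works: a few words or a whole page.
prompt = "The faces on the website look real. They could be anyone. But"
input_ids = tokenizer.encode(prompt, return_tensors="pt")

# The model repeatedly predicts a likely next token, appends it, and predicts again,
# growing the passage word by word from its guesses about what should come next.
output_ids = model.generate(
    input_ids,
    max_length=80,
    do_sample=True,                       # sample instead of always taking the single top guess
    top_k=50,
    pad_token_id=tokenizer.eos_token_id,  # silences a padding warning for open-ended generation
)

print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

Run it twice and the same prompt yields two different continuations; the words are assembled from statistical guesses, not from anything anyone meant to say.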

The Guardian used the technology — it’s called GPT2, and OpenAI has not released it in full due to concerns about deceptive use “at scale” — to write an entire story about itself. The paper gave the program two paragraphs to begin with. Much like the faces generated by GANs, the story GPT2 wrote is nearly indistinguishable from a human version. GPT2 even fabricated quotes from its own creators (it did the same thing when the Guardian fed it the beginning of a piece on Brexit — it created fake quotes from UK Labour leader Jeremy Corbyn).

GPT2’s text looks like something a human might have written. But nobody actually wrote it.

Last spring, Google unveiled two audio recordings. In each one, its Duplex program, which is designed to mimic a human voice, carries on a phone conversation with a real person. In the first, the program calls a hair salon and makes a booking, coping with a range of scheduling options offered by the receptionist. In the second, it attempts to make a dinner reservation, and the woman who answers explains that the restaurant doesn’t take reservations for groups of four or fewer.

The audio sounds very much like a recording of something a human said. But nobody actually spoke these words.

Late in 1877, Thomas Edison walked into the offices of Scientific American magazine with his latest invention. As the editors later described it, the machine was “a little affair of a few pieces of metal, set up roughly on an iron stand about a foot square.” It was a phonograph, capable of recording the human voice and replaying it later. And while the magazine’s staff were already familiar with it, the technology’s possibilities were humbling nonetheless.

“No matter how familiar a person might be with modern machinery and its wonderful performances, or how clear in his mind the principle underlying this strange device may be, it is impossible to listen to the mechanical speech without his experiencing the idea that his senses are deceiving him,” the Scientific American editors later wrote. “We have already pointed out the startling possibility of the voices of the dead being reheard through this device, and there is no doubt but that its capabilities are fully equal to other results just as astonishing.”

There is no real way to separate humanity from its technology. Throughout history, technology has been adopted to enhance and advance the human experience. And it has served another important purpose, too: technology has, in many cases, been created to preserve humanity. This is particularly true of communications technologies, from the printing press to the phonograph to the internet. Whether or not their creators realized it at their genesis, and despite the myriad functions they might go on to serve, fundamentally, they are a way for humanity to remember itself — for us to remind each other that we exist.

Maybe this is why these three recent advances in human-like artificial intelligence feel so unsettling. Unlike a simple recording, each one is more than just a copy of a human, or of human communication. Each is a replication. That seemingly minor distinction is important. At the root of the idea of replication are its classical Latin ancestors, “re” (back, again) and “plicare” (to fold).

Before it was appropriated as a synonym for copy, replication, in its classic form, was about folding something back on itself. This gives us clarity on what’s really going on with these new examples of A.I. A copy and its original can exist simultaneously and separately, and even if the original is lost, the copy is proof that it once existed. A replication, on the other hand, is a manipulation — a folding over — of the original to create a new form. Crucially, in the process, some facet of that original disappears. It’s part of what makes the uncanny valley such a profoundly unsettling space. Because what’s lost in each replication — of our human faces, of our human words, and of our human voices — is, well, humanity.

These new A.I. tools are not reminders that we exist; they are reminders that soon, in some fashion, we might not.

“As a species, we’re collectors and rememberers,” Martin Kunze told GQ magazine last year. “We leave traces of ourselves everywhere.” Kunze, an Austrian ceramicist, is slowly building a time capsule, which he calls the Memory of Mankind project. On his own, Kunze began creating ceramic tiles “laser engraved with personal recollections and global news, texts of books and scientific studies” — an account of humanity for someone to find in the distant future, housed deep inside a salt mine.

The project has quickly expanded. Kunze now has three categories of material: editorial (“meant to include the automatic collection of editorials from newspapers around the world, from all sorts of political and geographical points of view”); institutional (“scientific papers and dissertations, art projects and popular songs, among other material,” including information on nuclear waste storage sites); and, finally, personal items.

“Already he’s up to over 500 tablets, with participants from an array of countries, most of them sending files or e-mails through the website he’s created, with material they wanted printed on a tablet,” GQ contributor Michael Paterniti recounted. “They send their diary entries and love letters, newspaper articles and obscure dissertations, blogs and texts, the most important parts of us.”

Kunze’s drive to store humanity’s memories is motivated, in part, by his concern about a lack of storage elsewhere.

“Sooner or later,” Kunze told GQ, “we’ll have to delete data, massively. Just for economical and ecological reasons. This deletion will not be organized, not considered in selecting what we want to keep.” This is why, on the Memory of Mankind site, visitors find that while the project is aimed outward and upward (at technologically advanced civilizations of the distant future and perhaps from distant planets), it includes a note that “maybe our grandchildren will make use of MOM, in the instance that no blogs from the early 21st century survive.”

A great deletion might indeed mean data is lost forever, but assuming the servers are not lost, storage will remain, waiting to be refilled. What the three recent developments in human-like A.I. should make us ponder is this: what might that space be refilled with?

It might now be a mistake to assume that, if the seemingly endless data tracing the details of our many lives disappears, more of the same will take its place. Instead, something stranger might occur. As A.I. is perfected, and the algorithms are improved, a world might emerge where the data that replaces what’s been lost is no longer created by humans. This data may only be human-like — the recorded information emerging from layers of algorithms acting against and communicating with one another in endless loops.

One program places a phone call; another answers. One algorithm writes a news story; another writes a novel based on it. One algorithm creates a human face; a bot uses it to create a social media profile that interacts with others created by yet another algorithm. And on and on — algorithms upon algorithms, forever.

The fear of A.I. has long been that we might either physically merge with a computer, creating a hybrid superbeing, or be enslaved by it. But something else may occur. We may face a future where our physical selves remain, but without a way, or a space, for us to collectively remember our feelings and thoughts — those things that make us human. And so, as the servers swell with infinite replicated communication, we will reach a kind of endpoint of quiet loneliness, our humanity forever trapped on ceramic tablets entombed in a mountain cave.

The servers may never be empty, but the technology that we created to preserve ourselves will instead be preserving replications. Meanwhile, we will disappear — not from view, but from the record. For centuries, we created technology to preserve humanity. Now, the technology we’ve created will preserve itself.