Sunday, April 17, 2022

Life Turing infowartime



A recent piece in The New York Times—the kind of long-form journalism that, notwithstanding The Paper of Record’s many black sins in its political reporting, keeps me behind the paywall month after month—discusses the extraordinary advances that have been made in artificial intelligence over the past decade and change, with particular emphasis on the ability of cutting-edge “deep learning” software—“Generative Pre-trained Transformer 3,” hereafter GPT-3—to parse language and to compose it. This is not Siri, or Alexa, or any of the consumer-level “assistants” with which we’re familiar, impressive as these may have seemed seven or eight years ago. Indeed, GPT-3 is not consumer-level at all, and its creators, an outfit calling itself OpenAI, are keeping the thing on a tight leash, because it is a vastly powerful tool the existence of which invites all sorts of possibilities for abuse. “The very premise that we are now having a serious debate over the best way to instill moral and civic values in our software,” the reporter concludes, “should make it clear that we have crossed an important threshold.”
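
For the curious, here is what that leash looked like in practice: access to GPT-3 was granted by application and metered through an API. Below is a minimal sketch, assuming the OpenAI Python client as it stood in 2022 (the legacy Completion endpoint); the model name, prompt, and key are illustrative stand-ins, not details drawn from the Times piece.

```python
# A sketch of a GPT-3 request circa 2022, via OpenAI's Python client.
# The model name and prompt are illustrative assumptions, not from the article.
import openai

openai.api_key = "sk-..."  # keys were issued by OpenAI; access was gated

response = openai.Completion.create(
    engine="text-davinci-002",  # a GPT-3-family model of that era
    prompt="Explain the Turing test in one sentence.",
    max_tokens=60,
    temperature=0.7,
)
print(response.choices[0].text.strip())
```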

Read the piece if you are able—the NYT would like you to pay for the privilege, and you ought to, but if for moral, political or financial reasons you can’t see your way to purchasing it, there are ways to tunnel beneath the paywall (cough, “private” or “incognito” browsing), and the article really is worth your fifteen or twenty minutes’ attention, if not your coin.

“Artificial intelligence” has long been, rather like commercial nuclear fusion, just around a corner never cleared (“nuclear fusion is thirty years away—and always will be”). Indeed, in the 1950s there was much talk about “electronic brains,” referring to room-sized machines that deployed considerably less computational horsepower than the average cellular phone brings to bear without breaking a sweat. Nevertheless, bold predictions of “thinking machines” in immediate prospect were being made, perhaps not entirely without dreams of sweet DARPA research grants dancing in certain academic heads. Well, you know, the Industrial Revolution had to start somewhere.
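
A back-of-envelope comparison makes the gap concrete. The figures below are round, assumed orders of magnitude (a couple of thousand operations per second for an early-fifties machine, about a trillion for the graphics processor in a current phone), not measurements:

```python
# Rough order-of-magnitude assumptions, not measured figures.
univac_ops_per_sec = 2_000              # an early-1950s "electronic brain"
phone_ops_per_sec = 1_000_000_000_000   # ~1 TFLOPS, a modern phone's GPU
print(f"{phone_ops_per_sec / univac_ops_per_sec:,.0f}x")  # -> 500,000,000x
```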

Having read “A.I. Is Mastering Language. Should We Trust What It Says?” I was moved to retrieve from the bookshelf in the hall (and blow the integument of dust off) a 1997 anthology, HAL’s Legacy: a collection of a dozen and a half essays about artificial intelligence, about the vision of it presented in 2001 (a film I regard as a cultural artifact expressing the mythos of its era as profoundly as Genesis and The Iliad expressed theirs), and about how that vision inspired a generation, by now two or more, to pursue the grail of software sentience.

Are they there yet? I don’t think so. But they’re a damned sight closer than anyone reading these 1997 descriptions of the state of the art could have concluded we’d be by now. Put another way, progress in the field over the past quarter-century considerably exceeds the advances made in military aviation between the Sopwith Camel and the B-2 bomber. Seriously.

For far too many years the “Turing Test” was one of the measures of machine sentience. Another was chess, but when Kasparov fell to “Deep Blue” in 1997, that metric was tossed. As software comes ever closer to mimicking us well enough to meet the Turing standard, the goalposts continue to be repositioned, and with GPT-3’s latest feats, I imagine that they’re way out at the end of the parking lot, if not into the next county altogether.

GPT-3 is not “self-aware.” For one thing, I’m reasonably sure that there’s not a “self” there. Except…except…how sure are we that there’s really a self here in our spongy grey matter? Sure, it feels that way, but unless you’re going to go all “soul” on me, I hope that you will agree that human consciousness arises from a kind of “emergent behavior” on the part of a collective of preconscious subroutines, themselves based on dense electrochemical interchanges among our tightly packed neurons. Machines will likely never replicate the essence of these processes, but I’m less confident that they can’t arrive at something resembling the product.
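
If “emergent behavior” sounds hand-wavy, a toy demonstration may help. Conway’s Game of Life, sketched below, runs on a single dumb local rule, yet a coherent structure the rule nowhere mentions, the “glider,” arises and travels across the grid. Offered strictly as an analogy for emergence, not as a model of neurons:

```python
# Conway's Game of Life: emergence from a single local rule.
# A "glider" -- a shape no rule mentions -- travels one cell diagonally
# every four generations. An analogy for emergence, not a model of mind.
from collections import Counter

def step(live):
    """Advance one generation; `live` is a set of (x, y) live cells."""
    counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # A cell is alive next turn if it has 3 neighbors,
    # or 2 neighbors and was already alive.
    return {c for c, n in counts.items() if n == 3 or (n == 2 and c in live)}

cells = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}  # the classic glider
for _ in range(4):
    cells = step(cells)
print(sorted(cells))  # the same shape, shifted (+1, +1): motion from no motor
```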

It is striking how conservative most of the contributors to HAL’s Legacy were. AI, at that point, was still thirty years away, at least, even to the most optimistic among them (possibly excepting Doug Lenat, whose “Cyc” project, perhaps misconceived, and certainly unrealistic given the input resources of the nineties, looks in hindsight as though it anticipated the kind of deep learning that became realizable only once today’s vast volumes of digital intake met the processing power now available to digest them).

I have said this before, and often, but I believe that, unless industrial civilization collapses—a prospect by no means unlikely—machine sentience will arrive among us. It will probably not be recognized until afterward, and with each new evidence of its presence the standard of proof, those goalposts, will be picked up and transported across the state line if necessary. And, you know, the machines may talk to us, absolutely passing the Turing test, and we will still wonder “is there anyone home?” But at that point it may be appropriate to pose the same question to ourselves.
