“I love this house, but sometimes it’s a sad place,” he said, while we looked at the pictures. “Because she loved being here and isn’t here.”

The sun had almost set, and Hinton turned on a little light over his desk. He closed the computer and pushed his glasses up on his nose. He squared his shoulders, returning to the present.

“I wanted you to know about Roz and Jackie because they’re an important part of my life,” he said. “But, actually, it’s also quite relevant to artificial intelligence. There are two approaches to A.I. There’s denial, and there’s stoicism. Everybody’s first reaction to A.I. is ‘We’ve got to stop this.’ Just like everybody’s first reaction to cancer is ‘How are we going to cut it out?’ ” But it was important to recognize when cutting it out was just a fantasy.

He sighed. “We can’t be in denial,” he said. “We have to be real. We need to think, How do we make it not as awful for humanity as it might be?”

How useful—or dangerous—will A.I. turn out to be? No one knows for sure, in part because neural nets are so strange. In the twentieth century, many researchers wanted to build computers that mimicked brains. But, although neural nets like OpenAI’s GPT models are brainlike in that they join artificial neurons through billions of weighted connections, they’re actually profoundly different from biological brains. Today’s A.I.s are based in the cloud and housed in data centers that use power on an industrial scale. Clueless in some ways and savantlike in others, they reason for millions of users, but only when prompted. They are not alive. They have probably passed the Turing test—the long-heralded standard, established by the computing pioneer Alan Turing, which held that any computer that could persuasively imitate a human in conversation could be said, reasonably, to think. And yet our intuitions may tell us that nothing resident in a browser tab could really be thinking in the way we do. The systems force us to ask if our kind of thinking is the only kind that counts.

During his last few years at Google, Hinton focussed his efforts on creating more traditionally mindlike artificial intelligence using hardware that more closely emulated the brain. In today’s A.I.s, the weights of the connections among the artificial neurons are stored numerically; it’s as though the brain keeps records about itself. In your actual, analog brain, however, the weights are built into the physical connections between neurons. Hinton worked to create an artificial version of this system using specialized computer chips.

“If you could do it, it would be amazing,” he told me. The chips would be able to learn by varying their “conductances.” Because the weights would be integrated into the hardware, it would be impossible to copy them from one machine to another; each artificial intelligence would have to learn on its own. “They would have to go to school,” he said. “But you would go from using a megawatt to thirty watts.” As he spoke, he leaned forward, his eyes boring into mine; I got a glimpse of Hinton the evangelist. Because the knowledge gained by each A.I. would be lost when it was disassembled, he called the approach “mortal computing.” “We’d give up on immortality,” he said. “In literature, you give up being a god for the woman you love, right? In this case, we’d get something far more important, which is energy efficiency.” Among other things, energy efficiency encourages individuality: because a human brain can run on oatmeal, the world can support billions of brains, all different. And each brain can learn continuously, rather than being trained once, then pushed out into the world.

As a scientific enterprise, mortal A.I. might bring us closer to replicating our own brains. But Hinton has come to think, regretfully, that digital intelligence might be more powerful. In analog intelligence, “if the brain dies, the knowledge dies,” he said. By contrast, in digital intelligence, “if a particular computer dies, those same connection strengths can be used on another computer. And, even if all the digital computers died, if you’d stored the connection strengths somewhere you could then just make another digital computer and run the same weights on that other digital computer. Ten thousand neural nets can learn ten thousand different things at the same time, then share what they’ve learned.” This combination of immortality and replicability, he says, suggests that “we should be concerned about digital intelligence taking over from biological intelligence.”
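The contrast is easy to see in miniature. In the sketch below (a toy illustration, not code from any system Hinton describes), two tiny linear “networks” learn the same task from different data; because their knowledge is just an array of numbers, they can pool what they’ve learned by averaging their weights, and the result can be written to disk and revived on any machine. An analog net, with its weights baked into physical conductances, would have no counterpart to the last two lines.

```python
import numpy as np

rng = np.random.default_rng(0)
TRUE_WEIGHTS = np.array([1.0, -2.0, 0.5])  # the "task" both nets are learning

def train_step(weights, x, target, lr=0.1):
    """One gradient-descent step for a linear model with squared error."""
    pred = x @ weights
    grad = 2 * x.T @ (pred - target) / len(x)
    return weights - lr * grad

# Two identical networks learn from different batches of data...
w_a = np.zeros(3)
w_b = np.zeros(3)
for _ in range(100):
    x = rng.normal(size=(8, 3))
    w_a = train_step(w_a, x, x @ TRUE_WEIGHTS)
    x = rng.normal(size=(8, 3))
    w_b = train_step(w_b, x, x @ TRUE_WEIGHTS)

# ...then share what they've learned by averaging their connection strengths.
# (Averaging is a crude sharing scheme; it works here only because the toy
# models are linear and are learning the same target.)
w_shared = (w_a + w_b) / 2

# The "immortal" part: the knowledge is just numbers, so it outlives any
# particular machine. Save it, and any other computer can run the same weights.
np.save("weights.npy", w_shared)
w_revived = np.load("weights.npy")
```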

How should we describe the mental life of a digital intelligence without a mortal body or an individual identity? In recent months, some A.I. researchers have taken to calling GPT a “reasoning engine”—a way, perhaps, of sliding out from under the weight of the word “thinking,” which we struggle to define. “People blame us for using those words—‘thinking,’ ‘knowing,’ ‘understanding,’ ‘deciding,’ and so on,” Bengio told me. “But even though we don’t have a complete understanding of the meaning of those words, they’ve been very powerful ways of creating analogies that help us understand what we’re doing. It’s helped us a lot to talk about ‘imagination,’ ‘attention,’ ‘planning,’ ‘intuition’ as a tool to clarify and explore.” In Bengio’s view, “a lot of what we’ve been doing is solving the ‘intuition’ aspect of the mind.” Intuitions might be understood as thoughts that we can’t explain: our minds generate them for us, unconsciously, by making connections between what we’re encountering in the present and our past experiences. We tend to prize reason over intuition, but Hinton believes that we are more intuitive than we acknowledge. “For years, symbolic-A.I. people said our true nature is, we’re reasoning machines,” he told me. “I think that’s just nonsense. Our true nature is, we’re analogy machines, with a little bit of reasoning built on top, to notice when the analogies are giving us the wrong answers, and correct them.”

On the whole, current A.I. technology is talky and cerebral: it stumbles at the borders of the physical. “Any teen-ager can learn to drive a car in twenty hours of practice, with hardly any supervision,” LeCun told me. “Any cat can jump on a series of pieces of furniture and get to the top of some shelf. We don’t have any A.I. systems coming anywhere close to doing these things today, except self-driving cars”—and they are over-engineered, requiring “mapping the whole city, hundreds of engineers, hundreds of thousands of hours of training.” Solving the wriggly problems of physical intuition “will be the big challenge of the next decade,” LeCun said. Still, the basic idea is simple: if neurons can do it, then so can neural nets.