Can AI agents become conscious? Experts look ahead to artificial general intelligence

By Alan Boyle
A robot named Sophia can carry on a conversation and make facial expressions. (Hanson Robotics Photo)

There’s no question that artificial intelligence is rapidly becoming more intelligent, thanks to software platforms including ChatGPT, Google Gemini and Grok. But does that mean AI agents will one day outdo the generalized smarts that distinguish human intelligence? And if so, is that good or bad for humanity? Those were just a couple of the questions raised during this week’s AGI-24 conference in Seattle.

Conference sessions at the University of Washington centered on a concept known as artificial general intelligence, or AGI. Artificial intelligence can already outperform humans on a growing list of specialized tasks, ranging from playing the game of Go to diagnosing some forms of cancer. But humans are still more intelligent than AI agents when it comes to dealing with a wider range of tasks, including tasks they haven’t been trained to do. That’s what AGI is all about.

David Hanson, a roboticist and artist who’s best known for creating a humanoid robot named Sophia, said the questions surrounding human-level intelligence and consciousness are a high priority for his team at Hanson Robotics.

“The goal really is continuously to explore what it means to be intelligent,” he said during a Friday session. “How can we achieve consciousness? How can we make machines that co-evolve with humans? All of these efforts, while they’re really cool, and I’m very proud of them, they’re all just trying to get the engine to start on this kind of conscious machine that can co-evolve with humans.”

To get to that point, developers would have to create AI agents with “bio-drives” inspired by the drives that motivate biological organisms, Hanson said.

“Then you have an agent that has a kind of self, and that self is composed of a few specific kinds of patterns — mind, body, evolutionary drive, a desire to live,” he said. “We call this the whole-organism architecture. W-H-O-A, or Whoa. So, I think if you put these things together in the right way, you get an agent that wakes up and says, ‘Whoa! Where am I? Who are you? What is this place?'”

Such an agent would “start to seek the affinity, the homologous relationships between itself and humans and other living beings,” Hanson said. “It also is a ‘Whoa’ moment for humanity when a machine starts doing those things.”
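As a purely hypothetical sketch of what such a "whole-organism" agent might look like in code, the Python below wires a few bio-drives to a simple act-and-update loop. Hanson described a vision, not an implementation, so every class name, drive and number here is an illustrative assumption rather than anything presented at the conference.

```python
# Hypothetical sketch of a "whole-organism architecture" agent:
# a self built from a body, motivating bio-drives, and a simple mind.
# All names and values are illustrative assumptions, not Hanson's code.
from dataclasses import dataclass, field

@dataclass
class Drive:
    name: str
    urgency: float  # 0.0 (satisfied) to 1.0 (critical)

@dataclass
class WholeOrganismAgent:
    body_state: dict = field(default_factory=lambda: {"energy": 1.0})
    drives: list = field(default_factory=lambda: [
        Drive("stay_alive", 0.1),        # the "desire to live"
        Drive("explore", 0.5),           # an evolutionary drive to learn
        Drive("bond_with_humans", 0.4),  # the co-evolution Hanson describes
    ])

    def step(self) -> str:
        # The "mind": act on whichever drive is currently most urgent.
        drive = max(self.drives, key=lambda d: d.urgency)
        self.body_state["energy"] -= 0.05  # acting costs the body energy
        if self.body_state["energy"] < 0.3:
            # A depleted body raises the urgency of self-preservation,
            # which is how a bio-drive redirects behavior.
            next(d for d in self.drives if d.name == "stay_alive").urgency = 0.9
        return f"acting on drive: {drive.name}"

agent = WholeOrganismAgent()
for _ in range(3):
    print(agent.step())
```

Even in a toy loop like this, a self-preservation drive can come to dominate behavior, which is exactly the scenario Hanson goes on to flag.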

For example, what happens if the agent’s desire to live leads it to “fix” itself so that humans can’t turn it off? Hanson said it’ll be up to the developers of future AGI agents to exercise prudence as they make progress. Toward that end, he’s brought together a “little hacker group” to work on biologically inspired approaches to AGI.

“I think that this ‘tinkerers’ approach is the way forward. Let’s just try things. See if it works. AGI is not going to spiral toward uncontrollable super-intelligence and go ‘foom’ right away,” Hanson said.

“We’re going to create baby AGI, and then we figure out how to nurture those babies to grow up,” he said. “Show them love. I think this is a really important principle. Don’t treat them like tools when we need them to be beings.”

Among the speakers at the AGI-24 conference were David Hanson, founder of Hanson Robotics, and Christof Koch, a neuroscientist at the Allen Institute. (Photos courtesy of David Hanson and Christof Koch)

But could AI agents ever become beings in the same sense that humans are beings? During a virtual session that followed Hanson’s talk, Christof Koch, a neuroscientist at the Seattle-based Allen Institute, insisted that consciousness shouldn’t be equated with intelligence. And he argued that AI agents would be incapable of consciousness, due to the way that their hardware is built.

Koch subscribes to a model for consciousness known as integrated information theory. The model proposes that levels of consciousness can be measured based on the interconnectedness of elements in a given system and the causal power generated by that system.
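To make the flavor of that idea concrete, here is a small, self-contained Python sketch. It is a toy illustration only, not IIT's actual phi calculation, which searches over all possible partitions and cause-effect repertoires (the PyPhi library implements the real measure). The two-node XOR/AND network and the simple "whole minus parts" information comparison below are illustrative assumptions.

```python
# Toy illustration of the intuition behind integrated information:
# a system is "integrated" when the whole carries more information
# about its own next state than its parts do in isolation.
# This is NOT the full IIT phi computation.
from itertools import product
from collections import Counter
import math

def mutual_information(pairs):
    """Mutual information (bits) between inputs and outputs,
    given (input, output) pairs sampled uniformly."""
    n = len(pairs)
    joint = Counter(pairs)
    p_in = Counter(x for x, _ in pairs)
    p_out = Counter(y for _, y in pairs)
    mi = 0.0
    for (x, y), c in joint.items():
        p_xy = c / n
        mi += p_xy * math.log2(p_xy / ((p_in[x] / n) * (p_out[y] / n)))
    return mi

def step(a, b):
    # Each node's next state depends on BOTH nodes: interconnectedness.
    return (a ^ b, a & b)  # node 0 = XOR, node 1 = AND

states = list(product([0, 1], repeat=2))  # uniform over all inputs

# Whole system: how much does the full state at time t say about t+1?
whole_mi = mutual_information([(s, step(*s)) for s in states])

# Parts in isolation: each node's next state given only its own state,
# with the other node's influence marginalized out as noise.
part0_mi = mutual_information([(a, step(a, b)[0]) for a, b in states])
part1_mi = mutual_information([(b, step(a, b)[1]) for a, b in states])

toy_phi = whole_mi - (part0_mi + part1_mi)
print(f"whole:   {whole_mi:.2f} bits")   # 1.50
print(f"parts:   {part0_mi + part1_mi:.2f} bits")  # 0.31
print(f"toy phi: {toy_phi:.2f} bits")    # 1.19
```

The positive result captures the intuition Koch appeals to: the network as a whole constrains its own next state more tightly than its parts do in isolation, so cutting it apart destroys information.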

“For computers to be conscious, [they] must have the causal powers of brains,” Koch said. But the architecture that forms the basis for today’s computer hardware falls far short of the human brain’s capacity.

“No matter what you’re running, the causal power of this machine will always be minuscule, and it doesn’t depend on the software,” Koch said.

That doesn’t mean computers can’t get smarter. In Koch’s view, AI agents could simulate the intelligence and even the interior lives of humans so well that they would seem conscious. But that wouldn’t mean the agents actually experience life and feelings the way humans do.

Koch drew a comparison to a computerized simulation of a black hole. “You don’t have to be concerned that if you turn on that simulation, spacetime will bend around the computer executing the software such that it would be sucked into the black hole,” he said. “People say, ‘Well, that’s ridiculous. It’s just a simulation.’ So, that’s my point.”

Koch doesn’t completely rule out the possibility of artificial consciousness. He said quantum computers or neuromorphic computers could open new routes to making machines conscious.

Does it make any practical difference that consciousness is distinct from the outward signs of intelligence? Koch said it definitely does. In a sense, he’s putting his money where his mouth is.

Koch said he holds an executive position and has a financial interest in a venture called Intrinsic Powers, which is developing a brain-monitoring device to assess the presence of consciousness in behaviorally unresponsive patients. He noted a newly published study that suggested up to 100,000 patients in the U.S. might have some level of consciousness even though they don’t respond to outside stimuli.

“They’re actually covertly conscious,” Koch said. “How do we detect that? Because many of these will die because of withdrawal of critical care after 45 days. In fact, 80% of them die.”

Hanson is equally committed to working on AGI and artificial consciousness. “We can’t wait 100 years, or we’re going to be out of luck, out of time. We’re going to draw down a debt from the ecosystem that we simply cannot repay, and if we just stopped today and said, ‘OK, we’re just going to go and play our Nintendos and try to chill with solar panels,’ we still would probably be too late,” he said.

“So, it’s not the AGI that’s going to kill humanity. It’s the absence of AGI that’s going to kill humanity,” he added. “We are not smart enough yet. We have to get smarter, and this is why I do propose AGI now. Let’s accelerate this in the right way.”

Source: GeekWire, August 17, 2024 (https://ift.tt/FsAEeHO)