Some say artificial intelligence will be humanity’s greatest helper. Others warn that AI will become humanity’s most dangerous rival. But maybe there’s a third possibility — with AI agents achieving the status of personhood alongside their human brethren.
The potential for that scenario is the focus of a newly published book titled “The Line: AI and the Future of Personhood.” The author, Duke University law professor James Boyle, says the book has been more than a decade in the making — which suggests more than the usual prescience about the tech world’s current fascination with AI.
In the latest episode of the Fiction Science podcast, he recalls the reaction he received when he shared his early ideas about the book with federal judges more than a dozen years ago.
“They’re like, ‘Rights are reserved for humans, naturally born of women!’ OK, well, not necessarily a great crowd,” says Boyle, founder of Duke Law School’s Center for the Study of the Public Domain. “Obviously, things have changed since then. The book seems perhaps less unhinged now than it did then.”
AI has come a long way since 2011. The smarts that were developed for advanced chatbots like OpenAI’s ChatGPT and Anthropic’s Claude are now being integrated into a wide array of software products, including Microsoft’s Copilot and Amazon’s Alexa voice assistant.
The best AI agents can sometimes sound all too human, as a New York Times columnist discovered when a Microsoft chatbot said it was in love with him and wanted to become a human. This year, Stanford researchers declared that ChatGPT-4 has passed the Turing Test — an exercise that was proposed decades ago to define the line between human intelligence and machine intelligence.
Will there come a time when AI’s cognitive capabilities clearly exceed ours — and if so, what implications will that have for the rights and responsibilities accorded to AI agents? Boyle thinks that intelligent machines will eventually achieve some form of personhood.
When he began working on the book, Boyle assumed that AI entities would become so humanlike that they’d have to be granted personhood on moral grounds. An example of that from science fiction would be Data’s effort to win the right of android self-determination in an episode of “Star Trek: The Next Generation.”
“I thought that would be where the debate was,” Boyle says. “And then, increasingly, I realized that an equally likely — maybe even more likely — way that we will actually get some form of legal personality for AI is the same way we did with corporations. Which is, ‘This is convenient. We need someone to sue and be sued.’”
Corporate personhood is a somewhat controversial legal concept that grants corporations some of the rights and responsibilities that flesh-and-blood citizens have (for example, the right to make contracts, and the liability to be prosecuted for crimes) while lacking others (such as the right to marry, run for office or vote).
The idea that Alexa could file suit to get itself out of a contractual obligation may sound as much like science fiction as the Starship Enterprise’s warp drive. But Oren Etzioni, the founder of TrueMedia.org and the former CEO of the Seattle-based Allen Institute for Artificial Intelligence, says it’s not so far-fetched.
“If a corporation is a person of sorts, then certainly we will reach a point where AI agents are,” Etzioni told me in an email. “Once AI agents achieve consciousness, then we will need to treat their feelings with compassion.”
Will ChatGPT and other large language models, or LLMs, ever achieve consciousness? That’s not in the cards, according to Christof Koch, a neuroscientist at Seattle’s Allen Institute (which is separate from the Allen Institute for AI).
“There are two questions really here,” Koch told me in an email. “1. Will the behavior, including language comprehension and speech, of AI/robots become so similar to people’s (or even better in functional terms) that most people will treat them as conscious? 2. Will it ever feel like anything to be such an AI/robot?”
Koch says the answer to the first question is, yes, most likely. “The only uncertainty is by when?” he wrote. “And, as Oren implies, this will have moral, ethical, legal and societal consequences.”
However, Koch argues that the architectures that form the basis for today’s computers are incapable of supporting anything like human-level consciousness. “That is, it will never feel like anything to be a LLM, even though these may achieve, sooner or later, super-intelligence. This may change with radically different hardware, such as quantum or neuromorphic computers,” he said.
“Intelligence is about doing, while consciousness is about being.”
Boyle is sympathetic to Koch’s point of view on the question of whether AI agents will ever become conscious. “The current architecture is ‘no,’ but in the future, I think ‘yes,’” he says.
In his book, and in the podcast, Boyle turns to science fiction as well as science facts to lay out his case. He says the movie “Blade Runner” and the Philip K. Dick novel on which it was based — “Do Androids Dream of Electric Sheep?” — serve as “the source of the Nile” for his views on how AI agents of the future might be treated.
Boyle is particularly intrigued by a questionnaire that’s used by detectives in the movie and in the novel to determine whether the entity they’re facing is a natural-born human or an artificially produced replicant. The Voight-Kampff test is meant to measure how much empathy the entity feels in response to emotional stimuli.
“For me, the question that both of those works present is whether we’re all replicants — or rather, whether we can pass any test that we would actually be willing to set for something else,” Boyle says.
Boyle thinks the humans who create AI products, and the consumers who use them, will face a different kind of test in the years ahead.
“I would bet you that sometime within this century, there will be companies — whether it’s Microsoft, or whether it’s Anthropic, or whether it’s OpenAI — who are deliberately saying our self-actualizing AIs, which we treat as people, are participating in this,” he says. “It’ll be kind of like a Whole Foods vibe — a sort of ‘fair-trade coffee’ kind of thing.”
Boyle is also betting that other companies will market their AI agents as “loyal cybernetic servants, and nothing else … you need treat them no more thoughtfully than you do your vacuum cleaner.”
“I would expect that to be a division in the market, like the division between proprietary and free and open-source software,” he says.
When will that moment in the market come about? Which approach will win out? How long until we know whether our AI creations fall within the line that marks the boundary of personhood? It’s hard to predict precisely what the timeline might be — or what intelligent machines might do once they cross the line.
“I’ll be very nice to my Roomba in the meantime,” Boyle says. “I think that’s the only thing we can all do.”
“The Line: AI and the Future of Personhood” is available in print and as an e-book. As former board chair of Creative Commons, James Boyle is a proponent of open access to information, and so his book is freely available online via the Duke Law School website. (For what it’s worth, James and I share the Boyle family name but are not closely related.)
Allen Institute neuroscientist Christof Koch has also published several books on the subject of consciousness. His latest book is “Then I Am Myself the World: What Consciousness Is and How to Expand It.” For more about Koch’s research, check out this 2019 GeekWire article, which focuses on an earlier book titled “The Feeling of Life Itself.” You may also like my Fiction Science interview with Koch from 2021.
My co-host for the Fiction Science podcast is Dominica Phetteplace, an award-winning writer who is a graduate of the Clarion West Writers Workshop and lives in San Francisco. To learn more about Phetteplace, visit her website, DominicaPhetteplace.com.
Take a look at the original version of this item on Cosmic Log to get James Boyle’s recommendations for further reading, and stay tuned for future episodes of the Fiction Science podcast via Apple, Spotify, Player.fm, Pocket Casts and Podchaser. If you like Fiction Science, please rate the podcast and subscribe to get alerts for future episodes.
Originally published on GeekWire, November 5, 2024: https://ift.tt/TQ41lBb