A.I. needs a ‘new space in our world,’ says ex-Google engineer

Artificial intelligence will kill us all or solve the world’s biggest problems—or something in between—depending on who you ask. But one thing seems clear: In the years ahead, A.I. will integrate with humanity in one way or another.

Blake Lemoine has thoughts on how that might best play out. Formerly an A.I. ethicist at Google, the software engineer made headlines last summer by claiming one of the tech giant’s advanced A.I. systems was sentient. Soon after, the company fired him.

In an interview with Lemoine published on Friday, Futurism asked him about his “best-case hope” for A.I. integration into human life. 

Surprisingly, he brought our furry canine companions into the conversation, noting that our symbiotic relationship with dogs has evolved over the course of thousands of years.

“We’re going to have to create a new space in our world for these new kinds of entities, and the metaphor that I think is the best fit is dogs,” he said. “People don’t think they own their dogs in the same sense that they own their car, though there is an ownership relationship, and people do talk about it in those terms. But when they use those terms, there’s also an understanding of the responsibilities that the owner has to the dog.”

Figuring out some kind of comparable relationship between humans and A.I., he said, “is the best way forward for us, understanding that we are dealing with intelligent artifacts.”

Many A.I. experts, of course, disagree with his take on the technology, including ones still working for his former employer. After suspending Lemoine last summer, Google accused him of “anthropomorphizing today’s conversational models, which are not sentient.” 

“Our team—including ethicists and technologists—has reviewed Blake’s concerns per our A.I. Principles and have informed him that the evidence does not support his claims,” company spokesman Brian Gabriel said in a statement, though he acknowledged that “some in the broader A.I. community are considering the long-term possibility of sentient or general A.I.” 

Alphabet and Google CEO Sundar Pichai, asked if he worried about the spread of beliefs like Lemoine’s that A.I. is sentient, told the Hard Fork podcast last month: “I think it’s one of the things we have to figure out over time, as these models become more capable. So my short answer is yes, I think you will see more like this.”

Pichai added, “You know, I said this before. A.I. is the most profound technology humanity will ever work on. I’ve always felt that for a while. I think it will get to the essence of what humanity is. And so this is the tip of the iceberg, if anything, on any of these kinds of issues, I think.”

Gary Marcus, an emeritus professor of cognitive science at New York University, called Lemoine’s claims “nonsense on stilts” last summer and is skeptical about how advanced today’s A.I. tools really are. “We put together meanings from the order of words,” he told Fortune in November. “These systems don’t understand the relation between the orders of words and their underlying meanings.”

But Lemoine isn’t backing down. He told Futurism that he had access to an advanced A.I. system within Google that the public hasn’t yet seen. Pichai on the Hard Fork podcast compared the A.I. chatbot Bard that Google has so far released to a “souped-up Civic” in a “race with more powerful cars.”

“We did say we are using a lightweight and efficient version of LaMDA,” Pichai added. “So in some ways, we put out one of our smaller models out there, what’s powering Bard. And we were careful.”

Lemoine said, “The most sophisticated system I ever got to play with was heavily multimodal—not just incorporating images, but incorporating sounds, giving it access to the Google Books API, giving it access to essentially every API backend that Google had, and allowing it to just gain an understanding of all of it. That’s the one that I was like, ‘You know this thing, this thing’s awake.’ And they haven’t let the public play with that one yet.”

He suggested such systems could experience something like emotions. 

“There’s a chance that—and I believe it is the case—that they have feelings and they can suffer and they can experience joy,” he told Futurism. “Humans should at least keep that in mind when interacting with them.”