Artificial intelligence experts whose work was cited in an open letter calling for a pause on AI research have distanced themselves from the letter and slammed it for "fearmongering."
"While there are a number of recommendations in the letter that we agree with (and proposed in our 2021 peer-reviewed paper known informally as ‘Stochastic Parrots’), such as ‘provenance and watermarking systems to help distinguish real from synthetic’ media, these are overshadowed by fearmongering and AI hype, which steers the discourse to the risks of imagined ‘powerful digital minds’ with ‘human-competitive intelligence,’" Timnit Gebru, Emily M. Bender, Angelina McMillan-Major and Margaret Mitchell wrote in a statement on Friday.
The four researchers were cited in a letter published earlier this week calling for a pause of at least six months on training powerful AI systems. The letter had racked up more than 2,000 signatures as of Saturday, including from Tesla and Twitter CEO Elon Musk and Apple co-founder Steve Wozniak.
"AI systems with human-competitive intelligence can pose profound risks to society and humanity, as shown by extensive research and acknowledged by top AI labs," the letter begins. The open letter was published by the Future of Life Institute, a nonprofit that "works on reducing extreme risks from transformative technologies," according to its website.
Gebru, Bender, McMillan-Major and Mitchell’s peer-reviewed research paper, "On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?" is cited as the first footnote on the letter’s opening line, but the researchers say the letter is spreading "AI hype."
"It is dangerous to distract ourselves with a fantasized AI-enabled utopia or apocalypse which promises either a ‘flourishing’ or ‘potentially catastrophic’ future," the four wrote. "Such language that inflates the capabilities of automated systems and anthropomorphizes them, as we note in Stochastic Parrots, deceives people into thinking that there is a sentient being behind the synthetic media."
Mitchell previously oversaw ethical AI research at Google and is now chief ethics scientist at the AI lab Hugging Face. She told Reuters that while the letter calls for a pause specifically on AI technology "more powerful than GPT-4," it is unclear which AI systems would even meet that threshold.
"By treating a lot of questionable ideas as a given, the letter asserts a set of priorities and a narrative on AI that benefits the supporters of [Future of Life Institute]," she said. "Ignoring active harms right now is a privilege that some of us don’t have."
Another expert cited in the letter, Shiri Dori-Hacohen, a professor at the University of Connecticut, told Reuters that while she agrees with some of the points made in the letter, she disagrees with how her research was used.
Dori-Hacohen co-authored a research paper last year titled "Current and Near-Term AI as a Potential Existential Risk Factor," which argued that widespread use of AI already poses risks and could influence decisions on issues such as climate change and nuclear war, according to Reuters.
"AI does not need to reach human-level intelligence to exacerbate those risks," she said.
"There are non-existential risks that are really, really important, but don’t receive the same kind of Hollywood-level attention."
The letter argues that AI leaders should "develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts."
"In parallel, AI developers must work with policymakers to dramatically accelerate development of robust AI governance systems," the letter adds.
Gebru, Bender, McMillan-Major and Mitchell argued that "it is indeed time to act" but that "the focus of our concern should not be imaginary ‘powerful digital minds.’ Instead, we should focus on the very real and very present exploitative practices of the companies claiming to build them, who are rapidly centralizing power and increasing social inequities."
Future of Life Institute president Max Tegmark told Reuters that "if we cite someone, it just means we claim they’re endorsing that sentence."
"It doesn’t mean they’re endorsing the letter, or we endorse everything they think," he said.
He also shot down criticism that Musk, who donated $10 million to the Future of Life Institute in 2015 and serves as an external adviser, is using the letter to slow down his competitors.
"It’s quite hilarious. I’ve seen people say, ‘Elon Musk is trying to slow down the competition,’" he said. "This is not about one company."
Tegmark said that Musk had no role in drafting the letter.
Another expert cited in the Future of Life Institute’s letter, Dan Hendrycks of the California-based Center for AI Safety, said he agrees with the letter’s contents, according to Reuters. He argued that it is sensible to account for "black swan events," those that appear unlikely but would have dire consequences if they were to unfold, the outlet reported.