Artificial intelligence could pose a "risk of extinction" to humanity on the scale of nuclear war or pandemics, and mitigating that risk should be a "global priority," according to an open letter signed by AI leaders such as Sam Altman of OpenAI as well as Geoffrey Hinton, known as the "godfather" of AI.
The one-sentence open letter, issued by the nonprofit Center for AI Safety, is brief and ominous, and it does not spell out how the more than 300 signatories foresee AI developing into an existential threat to humanity.
In an email to CBS MoneyWatch, Dan Hendrycks, the director of the Center for AI Safety, wrote that there are "numerous pathways to societal-scale risks from AI."
"For example, AIs could be used by malicious actors to design novel bioweapons more lethal than natural pandemics," Hendrycks wrote. "Alternatively, malicious actors could intentionally release rogue AI that actively attempt to harm humanity. If such an AI was intelligent or capable enough, it may pose significant risk to society as a whole."
Longer-term risks threaten humanity as well, such as a scenario in which AIs automate parts of the economy and humans cede control to the technology in order to remain competitive, he added.
"In this scenario, we increasingly rely on AIs to navigate the increasingly fast-paced and complex landscape," he noted. "This increasing dependence could make the idea of simply 'shutting them down' not just disruptive, but potentially impossible, leading to a risk of humanity losing control over our own future."
Altman earlier this month told lawmakers that AI could "go quite wrong" and could "cause significant harm to the world" unless it is properly regulated. Generative AI can create text, photos and videos that are difficult to distinguish from human-made work, leading to problems like the AI-generated song that cloned the voices of musicians Drake and The Weeknd.
That song was ultimately pulled from streaming platforms after publishing giant Universal Music Group said it violated copyright law.
More immediately, experts are highlighting the risks AI poses to certain types of workers, with researchers noting the technology could eliminate millions of jobs. Adoption of AI in the workplace brings uncertainty and risk, and not only for jobs at the companies that employ the technology, according to a new report from UBS analysts.
For instance, generative AI can "hallucinate," a term for producing incorrect information that appears believable, a trait that could not only spread misinformation but also damage the credibility of companies that use it, UBS noted. Such a case occurred recently when a lawyer submitted a brief based on research done by ChatGPT, which invented cases that didn't exist and insisted they were real.
Other signatories to the open letter include luminaries such as philosopher Daniel Dennett of Tufts, environmentalist Bill McKibben of Middlebury College and musician Grimes.