This March, nearly 35,000 AI researchers, technologists, entrepreneurs, and concerned citizens signed an open letter from the nonprofit Future of Life Institute that called for a “pause” on AI development, due to the risks to humanity revealed in the capabilities of programs such as ChatGPT.
“Contemporary AI systems are now becoming human-competitive at general tasks,” the letter warned, “and we must ask ourselves … Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us?”
I could still be proven wrong, but almost six months later, with AI development moving faster than ever, civilization hasn’t crumbled. Heck, Bing Chat, Microsoft’s “revolutionary,” ChatGPT-infused search oracle, hasn’t even displaced Google as the leader in search. So what should we make of the letter, and of similar sci-fi warnings backed by worthy names about the risks posed by AI?
Two enterprising students at MIT, Isabella Struckman and Sofie Kupiec, reached out to the first hundred signatories of the letter to learn more about their motivations and concerns. The duo’s write-up of their findings reveals a broad array of perspectives among those who put their name to the document. Despite the letter’s public reception, relatively few signatories were actually worried about AI posing a looming threat to humanity itself.
Many of the people Struckman and Kupiec spoke to did not believe a six-month pause would happen or would have much effect. Most of those who signed did not envision the “apocalyptic scenario” that one anonymous respondent acknowledged some parts of the letter evoked.
A significant number of those who signed were, it seems, primarily concerned with the pace of competition between Google, OpenAI, Microsoft, and others, as hype around the potential of AI tools like ChatGPT reached giddy heights. Google was the original developer of several algorithms key to the chatbot’s creation, but it moved relatively slowly until ChatGPT-mania took hold. To these people, the prospect of companies rushing to release experimental algorithms without exploring the risks was a cause for concern—not because they might wipe out humanity but because they might spread disinformation, produce harmful or biased advice, or increase the influence and wealth of already very powerful tech companies.
Some signatories also worried about the more distant possibility of AI displacing workers at hitherto unseen speed. And a number also felt that the statement would help draw the public’s attention to significant and surprising leaps in the performance of AI models, perhaps pushing regulators into taking some sort of action to address the near-term risks posed by advances in AI.
Back in May, I spoke to a few of those who signed the letter, and it was clear that not everyone agreed with everything it said. They signed out of a feeling that the momentum building behind the letter would draw attention to the various risks that worried them, and was therefore worth backing.