OpenAI reportedly recognizes the significant risks of building an artificial general intelligence (AGI) system, but ignores them.
AGI is a hypothetical type of artificial intelligence characterized by the ability to understand and reason across a wide range of tasks. Such a system would mimic or predict human behavior while demonstrating the ability to learn and think.
In an interview with the New York Times, researcher Daniel Kokotajlo, who left the governance team at OpenAI in April, said that the probability of "advanced artificial intelligence" destroying humanity is about 70%, but that the San Francisco-based company is moving forward with development regardless.
The former employee said: "OpenAI is really passionate about building artificial general intelligence, and seeks to be the first in this field."
Kokotajlo added that after joining OpenAI two years ago, where he was tasked with forecasting the technology's progress, he came to the conclusion that not only could the industry develop AGI by 2027, but there was a strong chance that the technology could catastrophically harm or even destroy humanity, according to the New York Times.
Kokotajlo also reported that he told OpenAI CEO Sam Altman that the company should "focus on safety" and spend more time and resources on addressing the risks posed by AI rather than continuing to make it smarter. He claimed that Altman agreed with him, but that nothing has changed since then.
Kokotajlo is part of a group of OpenAI insiders who recently issued an open letter urging AI developers to provide greater transparency and stronger protections for whistleblowers.
OpenAI has defended its safety record amid employee criticism and public scrutiny, saying it is proud of its track record of providing the most capable and safest AI systems, and that it believes in its scientific approach to addressing risks.