The New York Times, citing sources familiar with the process, reported that two employees in Google's responsible innovation department tried and failed to prevent last month's launch of the Bard chatbot. They warned that it was prone to "inaccurate and dangerous statements."
Product reviewers were already apprehensive about problems with large AI language models such as Bard and its main competitor, ChatGPT, when Google's chief lawyer met with search and safety executives to tell them the company was prioritizing AI over everything else.
The sources claimed that the pair's concerns about the chatbot producing false information, harming users who became emotionally attached to it, or even unleashing "technology-facilitated violence" through automated mass harassment were later downplayed by the head of responsible innovation, Jen Gennai. While the reviewers urged Google to wait before launching Bard, Gennai allegedly edited their report to remove that recommendation entirely.
Gennai defended her actions to the Times, saying the reviewers weren't supposed to weigh in on whether or not the launch should go ahead, because Bard was just an experiment. She claimed to have improved the report, having "corrected inaccurate assumptions" and added more risks and harms that need to be studied. This, she insisted, made the final product safer.
Google credited Gennai for its decision to release Bard as a "limited trial," but the chatbot is still set to be fully integrated with Google's market-dominant search engine "soon," according to Google's own website.
Google has quashed employee dissent over artificial intelligence before. Last year, it fired engineer Blake Lemoine after he claimed that LaMDA (Language Model for Dialogue Applications) had become sentient, while researcher El Mahdi El Mhamdi resigned after the company barred him from publishing a paper warning of cybersecurity vulnerabilities in large language models such as Bard. And in 2020, AI researcher Timnit Gebru was let go after publishing research accusing Google of not being careful enough about AI development.
However, a growing faction of AI researchers, technology executives, and other influential futurists is urging Google and its competitors, Microsoft and OpenAI, to slow their rapid "advancement" so that effective safeguards can be imposed on the technology. A recent open letter calling for a six-month pause on "giant AI experiments" attracted thousands of signatories, including OpenAI co-founder Elon Musk and Apple co-founder Steve Wozniak.
Central to many experts' warnings is the technology's potential to upend society by rendering many human professions (or humans themselves) obsolete, although lesser risks, such as data breaches of the kind OpenAI has already suffered, are cited more frequently.