Last week, the ChatGPT chatbot included law professor Jonathan Turley on a list of lawyers allegedly implicated in a scandal. The accusation is false, and the claim was attributed to a non-existent article in the Washington Post. Russian IT expert Rodion Kadyrov spoke to Gazeta about the misinformation that surrounds the use of neural networks.
The expert noted that the lawyer's case "is not just an isolated incident," but "an example of how modern language models, such as OpenAI's ChatGPT, are able to generate false information that puts people's reputations and even their lives at risk." What is particularly alarming, Kadyrov added, is that such "hallucinations" have become a routine feature of neural networks and arise from the way these models work.
The expert explained that such failures occur when an artificial intelligence model, trying to answer a request at any cost, begins "synthesizing" information: the prompt demands a certain amount of material, but the model does not have enough real data to supply it.
"For example, if you ask a neural network to name 10 people linked to an event, it can invent seven experts even if it actually knows only three names," the expert explained.
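As a rough illustration of how this list-padding behavior can be probed, the sketch below asks a model for a fixed number of names and then treats every returned item as unverified until it is checked against an independent source. This is a minimal sketch, assuming the OpenAI Python SDK and the "gpt-3.5-turbo" model; the prompt text and the event it refers to are illustrative placeholders, not taken from the article.

```python
# Minimal sketch: probing a language model for "list padding".
# Assumes the OpenAI Python SDK (pip install openai) and an API key in the
# OPENAI_API_KEY environment variable; model name and prompt are illustrative.
from openai import OpenAI

client = OpenAI()

PROMPT = (
    "Name 10 experts who publicly commented on event X. "
    "If you are not sure about a name, write 'unknown' instead of guessing."
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": PROMPT}],
    temperature=0,
)

answer = response.choices[0].message.content

# Every listed name must be treated as unverified: the model may have
# "filled in" entries it has no real data for, which is exactly the
# failure mode the expert describes.
for line in answer.splitlines():
    if line.strip():
        print("UNVERIFIED:", line.strip())
```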
The problem with this kind of misinformation is not limited to text. In March, images purporting to show former US President Donald Trump being arrested were published. The fake pictures were created with artificial intelligence and then spread across social media.
The expert also pointed to the lack of effective moderation by social media platforms and the need for stricter standards and regulation of AI in such sensitive areas. He urged people to look closely at the quality of content and at its details, because a fake is often of lower quality than the original material.
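One practical check the story implies, verifying that a citation produced by a chatbot actually exists before repeating it, can be automated in a few lines. This is a minimal sketch, assuming the Python requests library; the URL is a hypothetical placeholder, not the fabricated citation from the Turley case.

```python
# Minimal sketch: check whether an AI-supplied citation resolves to a real page
# before trusting it. Assumes the `requests` library; the URL is a placeholder.
import requests

def citation_exists(url: str, timeout: float = 10.0) -> bool:
    """Return True if the URL responds with a successful HTTP status code."""
    try:
        resp = requests.get(url, timeout=timeout, allow_redirects=True)
        return resp.status_code < 400
    except requests.RequestException:
        return False

if __name__ == "__main__":
    url = "https://www.washingtonpost.com/example-cited-article"  # hypothetical
    print(url, "->", "found" if citation_exists(url) else "not found / possibly fabricated")
```

A dead link is not conclusive proof of fabrication (articles move or sit behind paywalls), but a citation that cannot be located anywhere is a strong warning sign.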