Paris: Artificial intelligence at the service of hackers

Observers fear that chatbots and AI-based fraud tools will make life easier for cybercriminals and online fraudsters. Since becoming publicly available about a year ago, however, these tools have not radically changed the nature of traditional cyberattacks.

Better tools for “phishing”
“Phishing” is the practice of contacting a target and tricking them into entering their data on a fraudulent site that imitates the original.
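
To make the “site that imitates the original” idea concrete, here is a minimal Python sketch of one common defensive check: flagging lookalike (typosquatted) domains by string similarity. The domain list and the 0.8 threshold are invented for illustration; real filters use much richer signals.

```python
import difflib

# Hypothetical watch list; a real deployment would use the organization's
# own inventory of legitimate domains.
KNOWN_DOMAINS = ["paypal.com", "apple.com", "amazon.com"]

def looks_like_phishing(domain: str, threshold: float = 0.8) -> bool:
    """Flag a domain that closely resembles, but does not equal, a known one."""
    for known in KNOWN_DOMAINS:
        if domain == known:
            return False  # exact match: the genuine site
        if difflib.SequenceMatcher(None, domain, known).ratio() >= threshold:
            return True   # near match: likely a lookalike
    return False

print(looks_like_phishing("paypa1.com"))  # True  (one character swapped)
print(looks_like_phishing("paypal.com"))  # False (the real domain)
```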

“Artificial intelligence makes attacks easier and faster,” explains Gérôme Billois, an information-security expert at the consulting firm Wavestone and the author of a book on cyberattacks, for example by helping produce convincing emails free of spelling errors.

Hackers thus exchange, on online forums or in private messages, recipes for automatically generating targeted fraudulent messages.

To get around the limits that AI providers have built into their tools, specialized groups have been marketing language models trained to produce malicious content since this summer, such as FraudGPT, though their effectiveness remains to be proven.

“We are only at the beginning,” Billois warns.

Risk of data leakage
Generative artificial intelligence ranks among the five main emerging threats that companies cite, according to a recent study by the American research firm Gartner.

Companies fear above all the leakage of sensitive data entered by their employees, which has prompted major firms, including Apple, Amazon, and Samsung, to bar their staff from using ChatGPT.

“Any information entered into a generative AI tool can become part of its training data, which means that sensitive or confidential information could surface in the answers given to other users,” explains Ran Xu, research director at Gartner.

Last August, OpenAI, the company behind ChatGPT, launched ChatGPT Enterprise, a professional version that does not use conversations for training, in order to reassure companies worried about data leaks.

For its part, Google advises its employees not to enter confidential or sensitive information into its chatbot, Bard.
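
As a rough illustration of the kind of safeguard these policies imply, the following Python sketch redacts obviously sensitive patterns from a prompt before it leaves the company for any external chatbot. The patterns and the redact helper are assumptions for illustration only; real data-loss-prevention tools are far more thorough.

```python
import re

# Illustrative patterns only; real detectors cover far more
# (names, internal project codes, credentials, and so on).
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact(prompt: str) -> str:
    """Replace likely-sensitive substrings before the prompt is sent out."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label} REDACTED]", prompt)
    return prompt

print(redact("Ask jane.doe@corp.example about card 4111 1111 1111 1111."))
# Ask [EMAIL REDACTED] about card [CARD REDACTED].
```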

Forgery of audio and video
The main new threat posed by artificial intelligence is the ease with which it can clone faces and voices. From a recording only a few seconds long, some online tools can generate a near-perfect replica capable of fooling colleagues or relatives.

Jérôme Saiz, founder of OPFOR Intelligence, believes these tools may quickly be adopted by “a whole range of players involved in small-scale fraud,” who are very active in France and are often behind malicious text-message campaigns aimed at harvesting bank card numbers.

“These offenders, who are generally very young, will easily be able to imitate voices,” he added.

In June, an American mother was targeted by a scammer who called demanding a ransom for her daughter, whom he claimed to have kidnapped, and played what he said were the girl’s screams. The incident ended without harm after police suspected an AI-based scam.

Billois explains that companies have become alert to the now-familiar fraud in which scammers impersonate a company’s chief executive to obtain money transfers, but “the use of a fake audio or video clip” by a hacker “can tip the balance” back in the attacker’s favor.

Beginner hackers
Saiz believes that “nothing about the attacks that succeeded over the past year suggests they resulted from generative artificial intelligence.”

Although chatbots can spot certain vulnerabilities and generate fragments of malicious code, they cannot execute that code directly.

On the other hand, Saiz believes that artificial intelligence “will allow people of limited talent to improve their skills,” while ruling out the possibility of “ransomware being developed from scratch with ChatGPT.”

Apple: Advises technical support employees not to volunteer any information about the radiation problem

Apple, which is facing controversy in France over radiation levels in the iPhone 12, has advised technical support employees not to volunteer any information when consumers ask about the issue.

The company said, “If customers ask about the French government’s claims that the model exceeds electromagnetic radiation standards, employees should say they have nothing to offer,” according to Bloomberg News.

The company added that employees must decline customer requests to return or replace the phone unless it was purchased within the past two weeks, in line with Apple’s standard return policy.
