Fake software is now targeting Facebook users by posing as ChatGPT-based tools.
Meta's Q1 2023 Security Report, released on May 3, confirmed this cybersecurity issue.
“From a bad actor’s perspective, ChatGPT is the new crypto,” said Guy Rosen, chief information security officer at Meta.
In March alone, Meta's security analysts discovered approximately ten malware families masquerading as ChatGPT and similar AI tools.
Facebook Hackers Take Advantage of ChatGPT’s Fame
Cybercriminals distribute software that claims to offer ChatGPT-based tools. Meta clarified, however, that this software contains malware that grants hackers complete access to their victims' devices.
The technology company reported that it had already blocked thousands of malicious web addresses associated with ChatGPT and other AI models.
Meta explained that its team investigated and implemented security measures to block these malware variants, which exploit users interested in trying OpenAI's ChatGPT.
“The space of generative AI is rapidly evolving, and bad actors are aware of this,” stated Rosen in his official blog post.
Meta Work Accounts
Meta explained that the Facebook hackers who now use ChatGPT-based lures frequently target users who manage business accounts, adding that these malicious actors prefer victims who rely on Facebook for their businesses.
As a result, Meta introduced a new account type for business and work-related purposes: Meta Work accounts. According to Meta, these accounts let users access Facebook's Business Manager tools without linking their personal Facebook accounts.
Meta has published additional information about this cybersecurity initiative. In related news, Elon Musk rolled out Twitter's new encrypted DM feature to further secure users.
Read more: Elon Musk new Twitter CEO: What you need to know?