Nov 14, 2024 | Ravie Lakshmanan | Artificial Intelligence / Cryptocurrency
Google has revealed that bad actors are leveraging techniques like landing page cloaking to conduct scams by impersonating legitimate sites.
“Cloaking is specifically designed to prevent moderation systems and teams from reviewing policy-violating content which enables them to deploy the scam directly to users,” Laurie Richardson, VP and Head of Trust and Safety at Google, said.
“The landing pages often mimic well-known sites and create a sense of urgency to manipulate users into purchasing counterfeit products or unrealistic products.”
Cloaking refers to the practice of serving one version of a page to search engine crawlers, such as Google's, and a different version to human visitors, with the ultimate goal of manipulating search rankings and deceiving users.
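The mechanics can be illustrated with a minimal sketch. The crawler signature list, page contents, and function name below are hypothetical and chosen only to show the pattern, not drawn from Google's advisory: the server branches on the request's User-Agent, showing reviewers a compliant page while real visitors get the scam.

```python
# Illustrative sketch of landing-page cloaking (hypothetical signatures and
# page contents). The scammer's server inspects the User-Agent and serves
# benign content to known crawlers and moderation bots, while ordinary
# visitors receive the scam page that reviewers never see.

CRAWLER_SIGNATURES = ("Googlebot", "AdsBot-Google", "bingbot")  # hypothetical list

def serve_landing_page(user_agent: str) -> str:
    """Return the HTML a cloaking server would send for this User-Agent."""
    if any(sig in user_agent for sig in CRAWLER_SIGNATURES):
        # Crawlers and policy reviewers see an innocuous page.
        return "<html>Benign storefront that complies with ad policies</html>"
    # Real users are shown the scam page instead.
    return "<html>Urgent! Limited-time offer on counterfeit goods</html>"

print(serve_landing_page("Mozilla/5.0 (compatible; Googlebot/2.1)"))
print(serve_landing_page("Mozilla/5.0 (Windows NT 10.0)"))
```

Real cloakers may also key on IP ranges, referrers, or ad-click parameters rather than the User-Agent alone, which is what makes the technique effective against automated review.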
The tech giant said it has also observed a cloaking trend in which users who click on ads are redirected via tracking templates to scareware sites that claim their devices are infected with malware. These pages then funnel victims to phony customer support sites that trick them into revealing sensitive information.
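The redirect abuse can be sketched as follows. The `{lpurl}` placeholder mirrors common ad-platform tracking-template conventions, but the URLs, function names, and substitution logic here are illustrative assumptions, not details from Google's report: the tracker the ad review sees declares a legitimate landing page, while the abusive tracker silently sends the click elsewhere.

```python
# Illustrative sketch (hypothetical URLs) of tracking-template abuse.
# Ad platforms let advertisers wrap the landing page URL in a tracking
# URL via a placeholder such as {lpurl}; an abuser's tracker discards
# the declared landing page and bounces the visitor to a scareware site.

DECLARED_LANDING_PAGE = "https://legitimate-shop.example/product"    # what ad review sees
SCAREWARE_SITE = "https://fake-support.example/your-pc-is-infected"  # hypothetical

def honest_tracker(template: str, lpurl: str) -> str:
    """A well-behaved tracker substitutes the declared landing page."""
    return template.replace("{lpurl}", lpurl)

def abusive_tracker(template: str, lpurl: str) -> str:
    """An abusive tracker ignores the declared page and redirects elsewhere."""
    return SCAREWARE_SITE  # the click never reaches the reviewed landing page

template = "https://tracker.example/click?dest={lpurl}"
print(honest_tracker(template, DECLARED_LANDING_PAGE))
print(abusive_tracker(template, DECLARED_LANDING_PAGE))
```

Because the redirect decision happens on the tracker's server after the ad is approved, the policy-violating destination stays invisible to upfront review.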
Some of the other recent tactics adopted by fraudsters and cybercriminals are listed below:
- Misuse of artificial intelligence (AI) tools to create deepfakes of public figures, taking advantage of their credibility and reach to conduct investment fraud
- Using hyper-realistic impersonation for bogus crypto investment schemes
- App and landing page clone scams that dupe users into visiting lookalike pages of their legitimate counterparts, leading to credential or data theft, malware downloads, and fraudulent purchases
- Capitalizing on major events and combining them with AI to defraud people or promote non-existent products and services
Google told The Hacker News that it intends to release such advisories about online fraud and scams every six months as part of its efforts to raise awareness about the risks.
Many of the cryptocurrency-related scams, such as pig butchering, originate from Southeast Asia and are run by organized crime syndicates from China, which lure individuals with the promise of high-paying jobs, only to confine them in scam factories located across Burma, Cambodia, Laos, Malaysia, and the Philippines.
A report published by the United Nations last month revealed that criminal syndicates in the region are stepping up by swiftly integrating “new service-based business models and technologies including malware, generative AI, and deepfakes into their operations while opening up new underground markets and cryptocurrency solutions for their money laundering needs.”
The U.N. Office on Drugs and Crime (UNODC) described the incorporation of generative AI and other technological advancements into cyber-enabled fraud as a "powerful force multiplier," not only making such fraud more efficient but also lowering the barrier to entry for less technically savvy criminals.
Google, earlier this April, sued two app developers based in Hong Kong and Shenzhen for distributing fake Android apps that were used to pull off consumer investment fraud schemes. Late last month, the company, alongside Amazon, filed a lawsuit against a website named Bigboostup.com for selling and posting fake reviews on Amazon and Google Maps.
“The website sold fake product reviews to bad actors to publish on their product listing pages in Amazon’s store and fake reviews of business listings on Google Search and Google Maps,” Amazon said.
The development comes a little over a month after Google announced a partnership with the Global Anti-Scam Alliance (GASA) and DNS Research Federation (DNS RF) to tackle online scams.
Furthermore, the company said it blocked or removed more than 5.5 billion advertisements for violating its policies in 2023 alone. It is also rolling out live scam detection in its Phone app for Android, powered by its on-device Gemini Nano AI model, to protect users against potential scams and fraud.
“For example, if a caller claims to be from your bank and asks you to urgently transfer funds due to an alleged account breach, Scam Detection will process the call to determine whether the call is likely spam and, if so, can provide an audio and haptic alert and visual warning that the call may be a scam,” it said.
Another new security feature is the introduction of real-time alerts in Google Play Protect that notify users of potentially malicious apps, such as stalkerware, installed on their devices.
“By looking at actual activity patterns of apps, live threat detection can now find malicious apps that try extra hard to hide their behavior or lie dormant for a time before engaging in suspicious activity,” Google noted.