Cloaking is an age-old technique that has found countless new applications — and developed increasingly sophisticated evasion techniques over the years.
In the early days of the internet as we know it, bad actors used cloaking to trick search engines into believing sites full of malware or inappropriate content actually contained quality content. They might, for example, show the search bot a site related to a popular movie while showing the user a pornographic take on that movie.
By 2020, scammers had graduated to using cloaking techniques to buy ad inventory on publisher sites, social media, and apps. Scammers intent on circumventing review systems typically serve ads that fall into three main categories: misleading, inappropriate, or malicious. These scammers rely on cloaking to evade ad policies that strictly prohibit dietary scams, trademark-infringing goods, and any form of deceptive advertisement, including malware.
In an ideal world, publishers would know which ads they are serving at any given time; unfortunately, selling in an open marketplace is less than transparent. The ability to control, view, and be aware of the ads being served on a site is vital in an era of fake news and online scammers. However, because the majority of digital advertising is bought and sold programmatically, whether direct or indirect, knowledge of which ads appear, where, and when becomes murky.
How Cloakers Bypass Scanning Technology
Cloaking methods can be quite involved and difficult to detect. The idea is that a cloaked attack distinguishes environments where there is an end user from environments where there is not. ‘Non-user’ environments include search engines and certain ad monitoring tools. To tell the two apart, this technique uses detection tools that analyze parameters such as IP address, browser, and device in order to identify artificial, non-user environments.
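The environment check described above can be sketched as follows. This is a hedged illustration, not any vendor's actual code: the crawler IP range, the bot marker strings, and the function names are all hypothetical examples of the kinds of signals a cloaker inspects.

```python
# Illustrative sketch of the server-side check a cloaker might run to
# separate real users from scanners. All ranges/markers are examples.
from ipaddress import ip_address, ip_network

# Hypothetical: IP ranges known to belong to crawlers or scanning tools.
KNOWN_SCANNER_NETWORKS = [ip_network("66.249.64.0/19")]
# Hypothetical: substrings that mark automated, non-user browsers.
BOT_UA_MARKERS = ("googlebot", "headless", "phantomjs", "selenium")

def looks_like_real_user(client_ip: str, user_agent: str) -> bool:
    """Return False for environments that resemble scanners or bots."""
    ua = user_agent.lower()
    if any(marker in ua for marker in BOT_UA_MARKERS):
        return False
    if any(ip_address(client_ip) in net for net in KNOWN_SCANNER_NETWORKS):
        return False
    return True

def choose_payload(client_ip: str, user_agent: str) -> str:
    """Serve the benign page to suspected scanners, the real payload to users."""
    if looks_like_real_user(client_ip, user_agent):
        return "malicious.html"
    return "benign.html"
```

In practice, cloakers combine many more signals (screen size, timezone, referrer, mouse movement), but the decision structure is the same: one benign response for suspected scanners, another for everyone else.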
Cloakers typically bypass layers of manual and automated quality assurance by hiding their real URLs within lines of code, or by including code that looks like the URL of a legitimate publisher or company. The fake or obfuscated code looks legitimate to basic scanning tools, so it reaches its intended destination, where the user can interact with it directly.
An ad tag might contain code that appears legit to scanners, but that is written in such a way that it can’t actually execute anything. However, buried within all that code is a malicious URL that does work. Or, a malicious URL might be disguised by additional (and ineffective) code inserted between the URL’s characters.
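The second trick described above, disguising a URL with extra characters inserted between its real characters, can be sketched in a few lines. This is a simplified, hypothetical illustration; real obfuscation schemes are far more elaborate, but the principle is the same.

```python
# Hedged illustration: a malicious URL disguised by junk characters
# interleaved between its real characters. The payload strips the junk
# at runtime, while a naive pattern-matching scanner sees only noise.
JUNK = "#"  # hypothetical filler character

def obfuscate(url: str) -> str:
    """Interleave a filler character so string-matching scanners miss the URL."""
    return JUNK.join(url)

def deobfuscate(blob: str) -> str:
    """What the ad payload does at render time to recover the working URL."""
    return blob.replace(JUNK, "")

hidden = obfuscate("https://evil.example/landing")
# A scanner searching the tag for "https://" will not match the blob,
# yet the recovered string is a fully working URL.
assert "https://" not in hidden
assert deobfuscate(hidden) == "https://evil.example/landing"
```

Because the scanner inspects the tag as static text while the deobfuscation only happens at render time, the malicious destination never appears in the scanned payload.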
In short, when scammers identify screening efforts, they hide their malicious activity, so a security tool scanning the ad tag will find nothing suspicious. Cloaked attacks are expressly designed to pass through a scan at the ad tag level, before the impression is rendered, and to show scanning tech a false result.
New And Improved Clickbait
Cloaking is behind many of the “fake ad” campaigns publishers have seen recently. One common method scammers use involves an ad creative showing a celebrity’s face, plus a salacious or teasing phrase.
For example, first, users come across an endorsement (fake, of course) of a product or service from a celebrity.
Then, the user clicks and is taken to a fraudulent site — sometimes with fake content and pirated logos from premium publishers.
Next, the user is phished or, most commonly, encouraged to invest in a type of cryptocurrency. According to the FTC, cryptocurrency scamming is a $3B business, with each campaign drawing thousands of victims who each lose hundreds to thousands of dollars at the hands of the perpetrators.
The widespread extent of these scams has resulted in lawsuits against Facebook and an array of police complaints from the celebrities who were featured in these scams.
One of the most troubling aspects of fake celebrity ads is that they ultimately get much higher CTRs than the industry average. So, the number of users affected is totally disproportionate to the number of ads the bad actors need to cloak and deploy.
And as ad cloaking becomes more mainstream, scammers on forums trade best practices, tips, and tricks.
Google considers cloaking a direct violation of Google’s Webmaster Guidelines and warns of prohibited cloaking techniques including:
- Serving a page of HTML text to search engines, while showing a page of images or Flash to users
- Inserting text or keywords into a page only when the User-agent requesting the page is a search engine, not a human visitor
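The second prohibited technique, serving different content depending on the requesting User-agent, can be sketched as a minimal server-side branch. This is an illustrative example of the pattern Google describes, not code from any real site; the bot tokens and page contents are placeholders.

```python
# Minimal sketch of User-agent cloaking as prohibited by Google's
# Webmaster Guidelines: crawlers get keyword-rich HTML, humans get
# entirely different content. Tokens and markup are hypothetical.
SEARCH_BOT_TOKENS = ("googlebot", "bingbot")

def render_page(user_agent: str) -> str:
    ua = user_agent.lower()
    if any(token in ua for token in SEARCH_BOT_TOKENS):
        # The crawler is shown clean, keyword-stuffed text content.
        return "<html><body><h1>Popular Movie Review</h1><p>quality text</p></body></html>"
    # Human visitors are served something else entirely.
    return "<html><body><img src='bait.jpg'></body></html>"
```

Because the branch keys off the User-agent header alone, a reviewer fetching the page with a crawler's identity never sees what real visitors see, which is exactly why Google treats this as a guideline violation.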
Tech Titans Take Aim At Cloakers
Cloaking, like the commoditization of exploits, is an infrastructure component for sale, both underground and aboveground.
The primary reason ad cloaking has become so mainstream lies in the fact that the very entities that engage in cloaking are actually registered companies. Some or many exist specifically to facilitate fraud and provide tools for other bad actors to launch their own attacks — and some ad platforms go so far as to encourage these malicious companies by serving them through a distinct arm of their own business.
If you asked publishers and advertisers to name the leading cloaking vendor, public enemy number one would likely be LeadCloak. Standing out among competitors (Cloakerly, Linkscloaking, and TrafficArmor), LeadCloak is now internationally recognized after Facebook sued its founder, Basant Gajjar, alleging his company provided and distributed software with the specific aim of bypassing Facebook and Instagram’s ad QA systems. The suit alleges LeadCloak facilitated scams involving COVID-related content, cryptocurrency, fake news, and dubious dietary supplements.
Particularly brazen in marketing its cloaking tech, LeadCloak has given itself a fairly transparent name and openly describes its product as cloaking tech on its company website.
LeadCloak is not the only vendor that openly describes its product. Cloakerly offers multiple packages, with the beginner’s basic package starting at $149 and its enterprise solution topping $1,000.
Blocking Cloakers in Real-Time
The phishing attacks and in-banner video schemes of years past have been eclipsed by forced redirects, and in our current reality, publishers are focused on fake “clickbait” ads. All of these methods have something in common: They all, in one way or another, have been able to spread wide because bad actors have used cloaking strategies to camouflage their code and its true purpose.
Broadly speaking, cloaking is very difficult for publishers to combat because, as with so many ad security and quality threats, there are many variations in how the bad ads and pages are cloaked.
Publishers have responded to the steady adaptation of cloaking techniques over time with a medley of anti-cloaking (or de-cloaking) techniques. Unfortunately, publishers’ security tools are often not as robust as they need to be to detect cloaking. Many publishers only use basic ad security tools like ad tag scanning — which cloaking is engineered to trick and circumvent.
Real-time blocking can catch a cloaked ad at the point at which it finally reveals itself, and before the page content loads. Plus, since real-time blocking runs on the user’s device, cloakers can’t distinguish real users from artificial ones.
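The core idea of real-time blocking can be sketched simply: because the check runs at render time on the user's device, it fires only after the cloaked ad has resolved its true landing URL, so there is nothing left to hide. The blocklist patterns and function name below are hypothetical placeholders, not any vendor's actual logic.

```python
# Hedged sketch of a render-time blocking decision. By the time this runs,
# the cloaked creative has already deobfuscated and revealed its real
# destination URL. Patterns are hypothetical examples.
BLOCKED_PATTERNS = ("evil.example", "crypto-double")

def should_block(resolved_url: str) -> bool:
    """Block the creative the moment its real destination is revealed."""
    return any(pattern in resolved_url for pattern in BLOCKED_PATTERNS)
```

Production systems use far richer signals than substring matches (behavioral heuristics, redirect-chain analysis, reputation feeds), but the timing is the key design choice: decide after the cloak has dropped, not before.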
Traditionally, ad quality was an ad-ops concern; yet given its huge impact on publishers’ bottom line, including brand image, user loyalty, overall performance, and revenue, it is now a management decision that affects the performance of the entire business.
Real-time blocking gets to the cause — in this case, the cloaking — and provides trusted security in all digital environments.