Ad cloaking is a sophisticated camouflage technique that malicious actors use in the programmatic ad environment. Ad cloaking hides malicious creatives and landing pages and only exposes them to users after the campaign has been scanned.
Scammers know that security professionals are going to search for them using a variety of methods and tools. They’re on the lookout for those screening efforts, and when they identify them, they hide their malicious campaigns so that if a security tool scans the ad tag, it will not be able to spot malicious activity. This technique is called ad cloaking. Cloaked attacks are expressly designed to pass through a scan at the ad tag level, before the impression is rendered, and to show scanning tech a false result.
Ad cloaking affects both publishers and advertisers, depending on the strategy and end goals of whoever launched the cloaked attack. If the cloakers behind the attack want to create a campaign to steal ad spend from reputable buyers, they can create a counterfeit page with a false IP address that mimics a premium advertiser, and cloak their real landing page URL within the code.
An advertising platform with its guard down will believe this site is legit and will send it quality ads — which no human ever sees. Because the platform essentially has conflated the premium publisher’s genuine site with the counterfeited site, the genuine publisher’s user traffic and viewability numbers decrease — and so do its CPMs, because it appears to platforms that the site has much more inventory than it actually does.
How does ad cloaking work?
Ad cloaking is used to hide the actual creatives, URLs, and landing pages of malicious campaigns and only reveal them to users who meet a variety of different criteria.
There are many different types of ad cloaking campaigns, but the common thread is that, over time, a cloaked attack distinguishes environments where there is a real end user from environments where there is not. “Non-user” environments include bots, security mechanisms in search engines like Google, and certain ad monitoring tools designed to detect bad ads. To identify these artificial, non-user environments, cloaking employs detection scripts that analyze elements such as IP address, browser, and device.
Ad cloakers and malvertisers typically bypass layers of manual and automated quality assurance by hiding their actual web URL within a script or lines of code, or including code that looks like the web URL of a legitimate publisher or company. The fake or obfuscated script looks legit to basic scanning tools such as those offered by Facebook or Google, so the fraud reaches its intended destination where the user can interact with it directly.
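As a toy illustration of that obfuscation technique (the URLs and variable names here are invented, and real cloakers use far heavier encoding than base64), the real destination can be hidden inside an otherwise innocuous-looking script:

```python
import base64

# Toy example of hiding a malicious landing-page URL inside ad code.
# All URLs here are hypothetical; real cloakers use heavier obfuscation.
hidden = base64.b64encode(b"https://scam.example/landing").decode()

ad_script = f"""
var partner = "https://legit-publisher.example";  // what a basic scanner sees
var cfg = "{hidden}";                             // the real destination, encoded
"""

# Only at render time does the script decode and use the real URL.
real_url = base64.b64decode(hidden).decode()
print(real_url)  # https://scam.example/landing
```

To a scanner reading the tag, `cfg` is just an opaque configuration string; nothing in the visible source mentions the scam domain.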
There are two ways ad cloaking is used:
1. Pre-click ad cloaking
All users generate ad calls as they scroll through a page on a website. The ad calls have various parameters such as the type of device, IP addresses, etc. Bad actors use those parameters to decide what ad creative and landing pages they will serve. In most cases, the ad creative is legitimate. But if certain parameters exist, the system serves ads with malicious code, usually traffic redirects that send the user to bad URLs and landing pages.
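The pre-click decision described above can be sketched as a simple rule over ad-call parameters. Everything here — the parameter names, the IP prefixes, the creative labels — is hypothetical, not taken from any real campaign:

```python
# Illustrative sketch of pre-click cloaking logic: the attacker inspects
# ad-call parameters and only attaches the malicious payload when the
# environment looks like a real end user. All names are hypothetical.

KNOWN_SCANNER_PREFIXES = {"66.249.", "40.77."}  # example: crawler IP ranges

def choose_creative(ad_call: dict) -> str:
    """Return which creative variant a cloaker would serve for this ad call."""
    ip = ad_call.get("ip", "")
    # Known scanner/bot IP ranges get the clean creative.
    if any(ip.startswith(prefix) for prefix in KNOWN_SCANNER_PREFIXES):
        return "clean_creative"
    # Headless-browser user agents get the clean creative.
    if "HeadlessChrome" in ad_call.get("user_agent", ""):
        return "clean_creative"
    # A "mobile" device without touch support looks like an emulator.
    if ad_call.get("device") == "mobile" and not ad_call.get("touch", False):
        return "clean_creative"
    # Everything else looks like a real user: serve the malicious redirect.
    return "malicious_redirect"

print(choose_creative({"ip": "66.249.1.2", "device": "desktop"}))  # clean_creative
print(choose_creative({"ip": "203.0.113.9", "device": "mobile", "touch": True,
                       "user_agent": "Mozilla/5.0 (iPhone)"}))     # malicious_redirect
```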
In other words, in pre-click cloaking, different ad creatives are served to different users depending on various user parameters.
2. Post-click page cloaking
Post-click or website cloaking is more prevalent than pre-click cloaking. In this method, the decision about where to send the end user is made only after the click. Everyone sees the same ad creative, but some users are sent to one landing page while others are sent to another. This type of cloaking campaign is much harder for publishers or a security review to detect because the ad creative itself is fine. To even find a post-click campaign, a reviewer must match the attackers’ targeting criteria and actually click the ad, which makes it almost impossible to detect.
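In post-click cloaking the same kind of decision happens server-side, after the click. A minimal sketch, with invented field names and domains:

```python
# Sketch of a post-click cloaker: everyone sees the same creative, but the
# click-handling server picks the landing page per visitor. All field names,
# thresholds, and domains are hypothetical.

def landing_page_for(click: dict) -> str:
    """Pick a landing page only after the user has clicked."""
    referrer_ok = click.get("referrer", "").endswith("publisher.example")
    looks_human = (click.get("mouse_moved", False)
                   and click.get("cookie_age_days", 0) > 1)
    if referrer_ok and looks_human:
        # Real (malicious) destination, shown only to likely end users.
        return "https://scam.example/phishing"
    # Clean decoy shown to reviewers, bots, and anyone who fails the checks.
    return "https://decoy.example/car-rental"
```

Because a reviewer who clicks the ad in a lab environment fails the behavioral checks, they land on the decoy page and see nothing wrong.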
How malvertisers bypass ad scanning
Malvertisers get past ad scanners by using ad tags that display legitimate creative and landing pages, and hide the URL of the landing page that their ads actually lead to.
When a publisher is attacked using cloaking techniques, the bad actors’ methods are basically analogous. The fraudsters design ad creative with corresponding landing pages that appear legit to users (for example, a car rental ad). This is the content the ad scanner “sees” when it looks at the ad tag. The real URLs for the creative and landing page have been cloaked within the code.
When the ad loads on the publisher’s website, the counterfeit creative is swapped out with low-quality, sensationalistic creative, for example, tabloid-style “celebrity in crisis” ads. And the counterfeit landing page is also swapped out, so when the end user clicks on the ads — which, as we’ll explain later, is a distinct and pronounced risk — they end up on a website where they are subjected to malware, a phishing attempt, or some other malicious activity.
How social media titans have taken aim at cloakers
Leading social media companies have taken legal action against companies involved in ad cloaking, but they have had a hard time pinning down the responsible actors.
Another twist in the cloaking saga is that many entities that engage in cloaking are actually registered companies. Many exist specifically to facilitate fraud and to provide tools that other bad actors can use to launch their own cloaking attacks.
In early April 2020, Facebook sued the founder of LeadCloak, Basant Gajjar, alleging his company provided and distributed software with the specific aim of bypassing Facebook and Instagram’s ad quality system. Facebook wasn’t alone — according to the suit, LeadCloak had also targeted other major digital companies including Google, Oath, WordPress, and Shopify.
LeadCloak was particularly brazen in marketing its cloaking tech, giving itself a fairly transparent name and openly describing its product as cloaking tech on its company website. Facebook’s suit alleged that LeadCloak had facilitated scams involving Covid-related fake news and other fake-news content, cryptocurrency, and the marketing of dubious dietary supplements.
This wasn’t the first time Facebook had taken legal action against bad actors that use cloaking to buy ads on its platform. In December 2019, Facebook sued ILikeAd over a similar alleged cloaking scheme, in which cloaked ads containing pirated celebrity images lured Facebook users to a landing page that enticed them to download malware; the malware took over their accounts and forced those accounts to buy and run ads for dietary supplements. However, in spite of Facebook’s efforts to stop cloaked attacks, it has had difficulty suing the entities responsible, because those entities are highly sophisticated at obscuring who is actually behind them.
How is ad cloaking detected?
Spikes in CTR on display ads, a decrease in user metrics, declines in CPMs, and in-banner videos are all indications that a site may have fallen victim to ad cloaking.
Here are some symptoms of serious ad quality problems that publishers should consider to be red flags that might indicate that malicious actors are trying to use cloaking to attack their websites or traffic to specific pages.
- Spikes in CTR on display ads
Industry CTRs for display ads (marketing ads that contain an image) are usually quite low. Specific averages vary depending on the source but broadly speaking, the expectation today is that display ads will have CTRs of less than 0.1%. Whatever the normal CTR might be for your accounts, a sudden jump may be a sign that your site has been attacked by a malicious cloaked ad campaign.
- A sudden decrease in metrics like user time on site, session depth, overall revenue, etc., or an increase in bounce rate.
Negative changes in any of these traffic metrics can indicate a problem and lead to a loss of monetization. If the first symptom the publisher notices is lost monetization, they should work backward through their analytics to pinpoint the sources of poor performance.
- Declines in viewability rates and CPMs
A sudden drop in viewability or CPM could be a sign that the publisher’s buy-side partners have suffered an ad cloaking attack. When advertising platforms are tricked into buying fake inventory, the advertisers’ spend is diverted to counterfeit sites with different IP addresses and away from the real publisher’s site. Publishers need to communicate clearly and early with their demand partners whenever they detect such dips in performance metrics because they are often related to ad cloaking campaigns.
- In-banner video appearing on the site
IBV is a long-standing industry issue that creates poor UX and does premium advertisers no favors in trying to connect with an audience. Publishers should report IBV to their demand partners and platforms and understand what type of protection and QA methods those partners have in place.
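As a rough illustration, the first red flag above — a CTR spike — can be automated as a threshold check against a site's own historical CTR. The baseline and multiplier here are made-up values; each publisher would tune them to their own accounts:

```python
# Toy CTR-spike check. The baseline CTR and the spike multiplier are
# illustrative; real monitoring would use per-placement historical data.

def ctr(clicks: int, impressions: int) -> float:
    """Click-through rate as a fraction (0.001 == 0.1%)."""
    return clicks / impressions if impressions else 0.0

def ctr_spike(clicks: int, impressions: int,
              baseline_ctr: float, multiplier: float = 3.0) -> bool:
    """Flag a period whose display-ad CTR far exceeds the site's own baseline."""
    return ctr(clicks, impressions) > baseline_ctr * multiplier

# A site that normally sees ~0.08% CTR suddenly reporting 0.9% is suspicious.
print(ctr_spike(900, 100_000, baseline_ctr=0.0008))  # True
print(ctr_spike(85, 100_000, baseline_ctr=0.0008))   # False
```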
Challenges in detecting cloaked ads
Malvertisers only activate cloaking after a campaign has been scanned, and use techniques like fingerprinting, canvas, and battery charge tracking to evade detection.
As mentioned above, cloaking scammers use multiple techniques and levels to identify users and stay under the radar of the basic screening conducted by platforms like Google and Facebook.
For example, scammers use fingerprinting in both pre-click and post-click cloaking to verify that they are dealing with real users rather than security scanners or bots. If the visitor purports to be on a mobile device, fingerprinting mechanisms look for a touch screen; if they don’t find one, they know it’s a security platform, and they’ll display the legit ad image and landing page. They even track the charge level of users’ batteries: when a battery is always at 100%, they know they are not dealing with a standard user. Scammers also utilize canvas, the HTML5 element that allows browsers to show graphics and animations. Because each machine renders a canvas slightly differently, scammers can fingerprint the computer and evade security mechanisms.
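The signals described above can be combined into a crude classifier. This sketch mirrors the logic (touch, battery, canvas) with invented field names and thresholds; real fingerprinting scripts run in the browser and are far more elaborate:

```python
# Sketch of the environment checks a cloaker's fingerprinting script performs.
# Field names, values, and the canvas-hash set are all illustrative.

KNOWN_HEADLESS_CANVAS_HASHES = {"headless_chrome_default", "phantomjs_default"}

def looks_like_scanner(fp: dict) -> bool:
    """Return True when the fingerprint suggests a bot or security scanner."""
    # Claims to be mobile but has no touch screen -> likely an emulator.
    if fp.get("claims_mobile") and not fp.get("has_touch"):
        return True
    # Battery pinned at 100% and always charging -> likely a VM, not a user.
    if fp.get("battery_level") == 1.0 and fp.get("battery_charging"):
        return True
    # Canvas rendering output matching a known headless-browser default.
    if fp.get("canvas_hash") in KNOWN_HEADLESS_CANVAS_HASHES:
        return True
    return False
```

A cloaker would call a check like this and serve the clean creative whenever it returns True, keeping the malicious payload invisible to scanners.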
In addition to these techniques, scammers also exploit timing. It works like this: before scammers can launch a campaign, they need approval from the DSP, so they usually begin with the cloaker turned off and direct the campaign at minimal traffic. Since most of the QA review is done at the beginning, once they receive approval they can switch the cloaker on, confident that a later scan won’t catch the script that redirects users to different URLs and IP addresses. That is why most scanning, even at multiple points along the supply chain, doesn’t pick up malvertisers and malicious actors: cloaked ads only reveal their malicious nature after the last scan or review.
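The timing trick amounts to a simple time gate: the cloaker stays off until the campaign has passed DSP review, plus a quiet warm-up period. A minimal model, with invented field names and a made-up warm-up window:

```python
from datetime import date

# Toy model of timing-based cloaking: serve only clean creative until the
# campaign has been approved and the review window has likely passed.
# The 7-day warm-up is an invented value.

def cloaker_active(today: date, approved_on, warmup_days: int = 7) -> bool:
    """Return True once the cloaker would switch on for this campaign."""
    if approved_on is None:
        return False  # still in DSP review: cloaker off, clean ads only
    return (today - approved_on).days >= warmup_days
```

During review and warm-up every scan sees a legitimate campaign; the malicious behavior only begins after the checks have stopped.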
This may seem like a lot of trouble to go to just to evade detection, but scammers invest heavily in every campaign, so they would rather err on the side of caution.
How to stop cloaked ads with GeoEdge’s real-time blocking solution
Multiple techniques are needed to detect and block cloaking before it impacts users and the traffic on a publisher’s website.
Scanning solutions vary fairly widely from one provider to the next, and not all protection solutions review all ad creatives or every image or code used in a landing page. If the scanner relies on samples, many threats can pass through a review undetected, especially since cloaking only switches out the ad creative at the last micro-moment, when the page and ad content render.
GeoEdge offers a unique solution because it can search for the mechanisms that scammers use to detect bots and non-human traffic — things like fingerprinting, battery charge tracking and Canvas — in real time, after the ad call but before the ad is served. When the platform identifies these methods, it is an indication of cloaking, even when the cloaking mechanism itself is hidden. GeoEdge then blocks the cloaked ad in real time and serves a legit ad before the page content loads. Since a good ad is served, publishers don’t lose revenue or traffic, and users are protected from the malicious activity conducted by scammers.
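GeoEdge’s actual detection is proprietary and far more sophisticated, but the general idea of searching for the detection mechanisms themselves can be illustrated with a toy heuristic: scan an ad’s JavaScript payload for the browser APIs that fingerprinting scripts rely on. The pattern list below is illustrative, not exhaustive:

```python
import re

# Browser APIs commonly probed by cloaking fingerprint scripts.
# This list is a simplified illustration of the concept.
FINGERPRINT_PATTERNS = [
    r"navigator\.getBattery",      # battery charge tracking
    r"navigator\.maxTouchPoints",  # touch-screen probing
    r"toDataURL\(",                # canvas fingerprinting readback
]

def payload_suspicious(ad_js: str) -> bool:
    """Flag ad JavaScript that probes the environment the way a cloaker would."""
    return any(re.search(pattern, ad_js) for pattern in FINGERPRINT_PATTERNS)

print(payload_suspicious("canvas.toDataURL('image/png')"))  # True
print(payload_suspicious("console.log('hello')"))           # False
```

A real-time blocker applying a check like this after the ad call, but before render, can swap in a clean ad when the payload looks like a cloaker probing its environment.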