How real-time malvertising solutions work

If you’re browsing the web without a decent ad blocking solution (what are you, nuts?!), chances are you’ve experienced the following scenario: you browse a legitimate website of a well known, reputable brand, and all of a sudden you’re redirected to another website that asks you to install a software update, or alerts you that you’re infected with malware and asks you to install a solution or call tech support, or says something like “CONGRATULATIONS, DEAR AMAZON USER!”.

All of these examples are instances of malvertising: the practice of bad advertisers targeting innocent end users by delivering malicious ad payloads through the adtech supply chain into legitimate websites. As I wrote before:

“Malvertising” is the compound of malicious and advertising. In this model, we have benign publishers and users being defrauded by malicious advertisers, who usually use social engineering techniques in order to make users install malware, call tech support scams or do many other nasty things.

I personally think that the mortal sin of the adtech industry is not turning a blind eye to bot traffic that causes financial loss to big brand advertisers, but rather turning a blind eye to malware distribution operations that compromise innocent end users.

Over the past few years, malvertising has been a growing concern for both publishers and users. The first generation of malvertising detection solutions was designed as “bot farms”, i.e., an army of bots pretending to be real users that scan the publisher’s website and flag malicious ads.

This design works fine, but it has several serious drawbacks:

  • After detection, the publisher needs to remove the offending ad tag. This introduces adops overhead and also prevents legitimate ads from the same tag from being served, potentially decreasing revenue.
  • Potentially low overlap between ads served to the scanners and ads served to actual website users, especially in the RTB and header bidding era.
  • Many users are affected until the scanner hits the right combination of targeting parameters and finds the offending ad tag.
  • Malvertisers abusing bot detection technology to cloak the bad ads and appear legitimate when scanned.

So, another solution is needed. Second generation solutions arrived on the market around 3 years ago, featuring real-time malvertising detection and prevention. These solutions are designed as a JS tag that is served to the client side along with the untrusted, third-party ad creative. The fundamental observation is that client-side JS code can monitor the run-time behavior of other JS code running in the same execution context.

The benefits of this design are clear:

  • Blocking the bad ad from the first detected impression, so there’s no time window where users are affected.
  • No need to shut the entire ad tag down, so no adops overhead is introduced and revenue from legitimate ads keeps flowing.
  • The detection runs inside the actual user’s browser, so there’s perfect overlap between ads observed and ads served to users.
  • The detection runs inside the actual user’s browser, so there’s no need to run multiple expensive scans in order to hit the right combination of targeting parameters that serves the bad ad.
  • The detection runs inside the actual user’s browser, so bot detection technology won’t help the malvertisers with cloaking.

So, how do these solutions work?

As I said, the fundamental observation they are based on is that one piece of JS code can monitor the run-time behavior of (aka dynamically analyze) another piece of JS code running in the same execution context, through tampering, aka monkey-patching, overriding, instrumenting, augmenting, hooking and decorating. Call it however you want, the basic idea is the same.

The monitoring script needs to tamper with the JS and / or DOM environment in a way that allows it to inspect “interesting” operations and decide whether or not to allow them.

So first we need to define “interesting” operations. One of the fundamentals is DOM manipulation, i.e., adding, mutating or removing nodes to / from the DOM tree. This is how you add a new script, for example, which is pretty much a required part of initiating any malvertising logic. The very frequently used appendChild method is one of many ways to manipulate the DOM by appending a new node to a given node.

For example, let’s say we have a malvertising script that looks like this:

let script = document.createElement('script');
script.text = 'doEvil();';
document.head.appendChild(script);

We can monitor it by tampering with the appendChild method:

// save a reference to the original appendChild
let _appendChild = Node.prototype.appendChild;

// override appendChild with monitoring logic
Node.prototype.appendChild = function (newNode) {
	if (isEvil(newNode)) {
		// if isEvil returns true, terminate
		return;
	} else {
		// otherwise, complete the DOM manipulation
		return _appendChild.call(this, newNode);
	}
};
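
With this override in place, replaying the malicious snippet from above now goes through the monitoring logic before anything reaches the DOM:

// the same malvertising snippet as before
let evilScript = document.createElement('script');
evilScript.text = 'doEvil();';

// this now invokes our override, which hands the new node to isEvil
document.head.appendChild(evilScript);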

Now, implementing the decision logic of isEvil is non-trivial. How can we do it? This is a classic malware detection question, and the answer is that there are two different ways to do it:

  • Signature based detection (static)
  • Behavioral detection (dynamic)

Signature based detection basically means we have a list of “signatures” that can be matched against a given artifact, and if one of the signatures matches, it means the artifact is a known malicious threat.

For example:

let signatures = [ 'installMalware();', 'doEvil();' ];

function isEvil (script) {
	return signatures.some(sig => script.text.includes(sig));
}

This logic will test the new script’s code against our signatures list. In this case, it will match the second signature, so isEvil will return true, and the monitoring script will terminate the operation, so the evil script won’t be appended to the DOM and won’t execute.

Signature based detection works, but it has several drawbacks:

  • It won’t detect new, unknown threats, only known threats that were already signed.
  • Code can be obfuscated in many different ways that bypass the signatures (see the sketch after this list).
  • It requires maintaining an ever growing list of signatures, which means ever growing memory and CPU requirements.
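
To illustrate the obfuscation problem, here’s a lightly obfuscated variant of the malvertising payload from above. It’s functionally identical, but the contiguous string 'doEvil();' never appears in the source, so the signature check above won’t match it:

// build the function name at run time, so the source never
// contains the signature substring 'doEvil();'
let fnName = 'do' + 'Evil';
window[fnName]();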

Behavioral detection is different. Instead of observing artifacts of the code, it observes the code’s run-time behavior itself. Let’s say the doEvil function of the malicious script looks like this:

function doEvil () {
	navigator.vibrate(10000);
}

This makes the user’s mobile device vibrate for 10 seconds. Let’s assume this behavior is unacceptable and considered malicious, since vibrating the device is allowed for at most 300 ms. So the monitoring logic will look like this:

function isEvil (script) {
	let executionResult = executeInSandbox(script.text);
	let didVibrate = executionResult.calledFunctions.vibrate;
	let vibrateTime = didVibrate && executionResult.calledFunctions.vibrate.args[0];
	return didVibrate && vibrateTime > 300;
}

First, the code needs to be executed in a sandbox: a separate execution environment. Second, we check if the vibrate method was called inside the sandbox, and with what arguments, i.e., for how long it vibrated. Last, we check if the vibration time exceeded 300 ms. If so, we return true, meaning the new script will be blocked by the monitoring logic.
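
The executeInSandbox function is doing a lot of heavy lifting here. As a toy sketch of what it could look like, assuming (naively, for illustration only) that a hidden same-origin iframe is an acceptable sandbox, that vibrate is the only function worth recording, and reusing the _appendChild reference we saved earlier so our own monitor doesn’t inspect the iframe we create:

function executeInSandbox (code) {
	let calledFunctions = {};

	// create a hidden same-origin iframe to serve as the "sandbox",
	// appended via the saved original so our own hook stays out of the way
	let iframe = document.createElement('iframe');
	iframe.style.display = 'none';
	_appendChild.call(document.body, iframe);

	// record calls to vibrate inside the iframe instead of actually vibrating
	iframe.contentWindow.Navigator.prototype.vibrate = function (...args) {
		calledFunctions.vibrate = { args: args };
		return true;
	};

	// evaluate the untrusted code in the iframe's global scope
	try {
		iframe.contentWindow.eval(code);
	} catch (e) {
		// swallow any errors thrown by the untrusted code
	}

	iframe.remove();
	return { calledFunctions: calledFunctions };
}

A real implementation would hook many more functions and isolate far more carefully, but the shape is the same: instrument the environment, run the code, collect what happened.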

The benefits of behavioral detection are:

  • Detecting the bad behavior itself, regardless of whether the specific threat was seen before
  • Code obfuscation won’t affect detection
  • No need to maintain a list of signatures

However, this approach isn’t perfect:

  • Not every bad behavior can be monitored in a browser environment
  • Running the code in a sandbox is computationally taxing
  • The malicious code might employ red / blue pill techniques, similar to those used by malware for VM detection, to appear legit inside the sandbox and then do evil stuff in the real environment

As a note, I’d say it’s possible to implement behavioral detection without a sandbox. Instead, one can monitor the functions of the real environment. With this design, the first bullet still applies, but not the rest.
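
As a sketch of that sandbox-less design, we can hook vibrate directly in the real environment, the same way we hooked appendChild, and enforce the policy at call time (assuming a single numeric argument, for simplicity):

// save a reference to the original vibrate
let _vibrate = Navigator.prototype.vibrate;

// override vibrate with monitoring logic
Navigator.prototype.vibrate = function (pattern) {
	// block vibrations longer than the allowed 300 ms
	if (pattern > 300) {
		return false;
	}

	// otherwise, let the real vibration happen
	return _vibrate.call(this, pattern);
};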

In practice, real-time malvertising solutions use both of these approaches. Some of the bad behaviors are detected dynamically and the rest are detected by signatures. These signatures are either malicious domains, or specific sub-strings of malicious URLs or code.
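
A combined isEvil could simply chain the two, where matchesSignature and behavesBadly are stand-ins for the signature and behavioral checks shown earlier:

function isEvil (script) {
	// cheap static check first, expensive dynamic check second
	return matchesSignature(script.text) || behavesBadly(script.text);
}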

These signatures are often produced by a feedback mechanism between the scanners I described earlier and the real-time solutions: the real-time solution collects “samples” of the final creatives as served to the user and sends them back to the backend, where they are scanned using the bots in order to behaviorally identify malicious activities. Once found, a signature can be auto generated and added to the signature list used by the real-time solution.
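
The collection side of that feedback loop could look something like the sketch below, where the /samples endpoint and the payload shape are made up for illustration:

// report the final creative back to the backend for offline scanning
function reportSample (adContainer) {
	navigator.sendBeacon('/samples', JSON.stringify({
		html: adContainer.innerHTML,
		page: location.href,
		timestamp: Date.now()
	}));
}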

The main limitation of these solutions is that they execute under the constraints of the same origin policy, like any other JS code in the browser, which means they can only monitor code that executes within same-origin browsing contexts. I.e., if the malicious code is running inside a cross origin iframe (one that isn’t deemed malicious itself), it won’t be seen by the monitoring script, even if it’s on the signature list.
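
You can see this boundary for yourself. Given a cross-origin ad iframe (the iframe#ad selector here is hypothetical), any attempt to reach into it throws:

let frame = document.querySelector('iframe#ad');

try {
	// throws a SecurityError when the frame is cross-origin
	let frameDocument = frame.contentWindow.document;
} catch (e) {
	// the monitoring script is blind to whatever runs in there
	console.log('cross-origin frame, cannot monitor its code');
}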

Another thing I’ve seen is malvertising code that exploits specific vulnerabilities in the implementations of specific real-time solutions, which lets it run malicious code in the same browsing context as the monitoring script without getting caught.

Maybe I’ll write about those vulnerabilities and how it’s possible to defend against them in the future 😉

Bottom line, there’s no silver bullet. Real-time malvertising solutions need to be used in combination with scanners, and even then some sophisticated malvertisers will sneak in and exploit innocent users.
