Secure header bidding architecture

Hello readers! It’s been quite a while. Today, I want to suggest a less insecure bot filtering architecture for online advertisers and publishers.

First, let’s have a quick overview of the current situation and define some terms:

Header bidding

Forget all the nonsense about a “wrapper” that “sits in the header” and “wraps calls to the ad server with other demand partners”. I’ve read and heard explanations along these lines way too many times, and they are a great example of complete adtech (jargon) madness. Like pretty much anything else in the browser, the “wrapper” is just a script that makes some network calls and executes some logic. Whether it’s located in the <head> or <body> is a completely irrelevant implementation detail. </rant>.

In technical terms, a header bidding library defines an interface that its adapters implement, allowing its consumers to use a unified API to communicate with all of their demand partners. Header bidding was originally invented as a hack to create fair competition between AdX supply and other SSPs, and the de facto standard is Prebid.js (pbjs), originally created by AppNexus. Most other “unique proprietary header bidding wrappers” are just clones of pbjs with a changed global variable name, and maybe some other niche features that cost way too much money for their value.

The general flow of pbjs looks like this:

1. The user loads the page, and pbjs is loaded with it.
2. The page includes definitions of its ad units (sizes, media types, etc.), and pbjs sends this info, along with the user info, to the configured demand partners by invoking their adapters.
3. pbjs waits for responses from all demand partners, or until a specified timeout, and passes all the returned bids to the ad server using key-value targeting.
4. The ad server receives the ad request with the competing bids, runs its ad selection process, and returns its own creative if it has a better-paying line item; if not, it returns a pre-configured pbjs creative that basically tells pbjs to render its highest bid’s ad markup.
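
To make this concrete, here is a minimal sketch of a pbjs setup. The ad unit, bidder names, and parameter values are illustrative placeholders, not any specific publisher’s config; it also assumes the page uses GPT as its ad server tag:

// Hypothetical ad unit and bidders -- illustrative only.
var adUnits = [{
  code: 'div-gpt-ad-top-banner',            // matches the GPT slot's div id
  mediaTypes: { banner: { sizes: [[728, 90], [970, 250]] } },
  bids: [
    { bidder: 'appnexus', params: { placementId: 123456 } },
    { bidder: 'rubicon',  params: { accountId: 1001, siteId: 2002, zoneId: 3003 } }
  ]
}];

pbjs.que.push(function () {
  pbjs.addAdUnits(adUnits);
  pbjs.requestBids({
    timeout: 1000,                           // wait at most 1s for demand partners
    bidsBackHandler: function () {
      // Attach the bid key-values (hb_pb, hb_adid, ...) to the GPT slots
      pbjs.setTargetingForGPTAsync();
      googletag.cmd.push(function () {
        googletag.pubads().refresh();        // ad server auction runs with the bids attached
      });
    }
  });
});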

So far so good.

Bot detection

As I wrote before, current bot detection for advertising has two phases: pre-bid and post-bid. To summarize, pre-bid detection is based on the network properties of the client that makes the bid request, and post-bid detection is based on browser fingerprinting. The advantage of pre-bid detection is that it happens before money is spent; the disadvantage is that it’s less accurate and reliable than post-bid detection, which is, on the other hand, more secure, but occurs after the ad is already served and the money is already spent.
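
As a rough illustration of the two phases (the signals, field names, and placeholder IP prefixes below are made up for the example, not any vendor’s actual logic):

// Pre-bid: runs server-side on the incoming bid request, before any money moves.
// Only network-level signals are available at this point.
const DATA_CENTER_PREFIXES = ['203.0.113.', '198.51.100.'];   // placeholder IP prefixes

function preBidCheck(req: { ip: string; userAgent: string }): boolean {
  const fromKnownDataCenter = DATA_CENTER_PREFIXES.some(p => req.ip.startsWith(p));
  const declaredAsBot = /bot|crawler|spider/i.test(req.userAgent);
  return !(fromKnownDataCenter || declaredAsBot);              // true = pass the request on
}

// Post-bid: runs inside the served creative, so it can fingerprint the real browser,
// but by the time it reports back, the impression has already been paid for.
function postBidFingerprint(): Record<string, unknown> {
  return {
    webdriver: navigator.webdriver === true,                   // automation flag
    plugins: navigator.plugins.length,
    screen: [screen.width, screen.height],
    timezoneOffset: new Date().getTimezoneOffset()
  };
}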


Server to server

Remember I wrote about pbjs, “Then pbjs waits for responses from all demand partners, or until a specified timeout”? It’s actually a bigger deal than it sounds at first. Latency is a big burden on both user experience and CPM rates (as it negatively affects viewability and the number of served impressions), so a solution emerged in the form of Prebid Server. It works the same as the regular pbjs flow, but instead of invoking bidders from the client’s browser, it invokes them from a dedicated server, returning only the winning bid to the client.
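
On the client side, switching bidders to the server-to-server path is mostly a configuration change. A minimal sketch, with placeholder account ID, endpoints, and bidder list (exact field names vary a bit between pbjs versions):

pbjs.que.push(function () {
  pbjs.setConfig({
    s2sConfig: {
      accountId: '12345',                    // placeholder Prebid Server account
      enabled: true,
      bidders: ['appnexus', 'rubicon'],      // these adapters now run server-side
      timeout: 500,
      adapter: 'prebidServer',
      endpoint: 'https://prebid-server.example.com/openrtb2/auction',
      syncEndpoint: 'https://prebid-server.example.com/cookie_sync'
    }
  });
});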

But it also breaks the security model of pre-bid bot detection, since now, by design, the client isn’t making the bid requests to the SSPs. And that’s bad, because now bots are detected only after money is spent.


My suggested solution

Bot detection vendors set up a Prebid Server that sits as a proxy between the publisher and the SSPs. They then should convince publishers to add their code, perhaps as a Prebid module, which will contain their bot fingerprinting code and will append its results to the initial request to the Prebid Server. The results should be mandatory: if they are not included, no bid requests will be passed to demand partners.
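
A sketch of what such a module could look like on the client, assuming a pbjs version that supports ortb2 first-party data. The collectFingerprint() helper, the botDetectionToken field, and the collected signals are all hypothetical, not an existing vendor API:

// Hypothetical vendor module: collect a fingerprint and attach it to every auction,
// so the vendor's Prebid Server proxy can score the request before contacting SSPs.
function collectFingerprint(): string {
  // Placeholder for the vendor's real fingerprinting logic.
  const signals = {
    webdriver: navigator.webdriver === true,
    languages: navigator.languages,
    screen: [screen.width, screen.height]
  };
  return btoa(JSON.stringify(signals));
}

pbjs.que.push(function () {
  pbjs.setConfig({
    ortb2: {
      device: {
        ext: { botDetectionToken: collectFingerprint() }   // hypothetical extension field
      }
    }
  });
});
// The vendor's proxy drops any auction request that arrives without the token.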

With this architecture, advertisers get the best of both worlds: the security of post-bid detection solutions with the money-saving effect of pre-bid detection solutions.

For publishers, the incentive to implement this module is higher CPMs and viewability rates, since all of their traffic is now validated. It’s also technically possible to create a module that reads the client-side pbjs config and automatically redirects it through the Prebid Server, which saves the publisher the technical overhead of moving to Prebid Server while still gaining the latency improvements.
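
A rough sketch of how such an auto-redirect module could work, again with hypothetical names; the per-bidder parameter translation a real module would need is glossed over here:

// Hypothetical "auto s2s" module: take whatever bidders the publisher already
// configured client-side and re-route them through the vendor's Prebid Server.
// Assumes it runs after the publisher's own addAdUnits() calls.
pbjs.que.push(function () {
  const configuredBidders = new Set<string>();
  (pbjs.adUnits || []).forEach(function (adUnit) {
    (adUnit.bids || []).forEach(function (bid) { configuredBidders.add(bid.bidder); });
  });

  pbjs.setConfig({
    s2sConfig: {
      accountId: 'vendor-account',                       // placeholder
      enabled: true,
      bidders: Array.from(configuredBidders),            // same demand, now server-side
      adapter: 'prebidServer',
      endpoint: 'https://botfilter-pbs.example.com/openrtb2/auction',  // vendor's proxy
      timeout: 500
    }
  });
});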

The challenge of this solution is the cost of the required network and computational resources, but it can perhaps be done economically.

Let me know what you think! I’m waiting to see more JIRA tickets referring to the blog now 😉
