Try this "fartbot" experiment yourself
I've been saying for years that analytics is better than fraud detection. This is because fraud detection sucks -- you can't tell whether the fraud was detected correctly, you have no way to find out, and getting a percentage IVT ("invalid traffic") number is not actionable and arrives too late (the campaign is over). It has recently come to light that these verification vendors failed to detect even the simplest things, like the domain mismatches in the Gannett mess-up. Billions of ads were transacted for at least 9 months and no one noticed or took action. Paying customers of these vendors did not get what they were paying for.
Fraud verification has been flying blind
It's also painfully obvious now that the reason these vendors have consistently reported IVT in the range of 0.1% - 1% is that they have been flying blind. Up to 99% of their tags are blocked, because bots deliberately strip out the detection tags to avoid getting caught.
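A minimal sketch of what such tag-stripping code might look like. This is illustrative only, not the actual bot code: the domain list and function names are my assumptions, standing in for whatever request-interception layer a botnet's proxy actually uses.

```javascript
// Illustrative sketch of "tag stripping" -- NOT the actual malware.
// A bot's proxy layer maintains a blocklist of verification-tag domains.
const VERIFICATION_TAG_DOMAINS = [
  "adsafeprotected.com", // IAS measurement tags
  "moatads.com",         // Moat
  "doubleverify.com"     // DoubleVerify
];

function isVerificationTag(url) {
  const host = new URL(url).hostname;
  return VERIFICATION_TAG_DOMAINS.some((d) => host === d || host.endsWith("." + d));
}

// For each outgoing request: if it is a verification tag, drop it and
// fabricate a success response so the ad server believes the tag fired.
// The vendor collects zero telemetry for this impression.
function handleRequest(url, realFetch) {
  if (isVerificationTag(url)) {
    return { statusCode: 200, body: "" }; // fake "delivered" response
  }
  return realFetch(url); // everything else passes through normally
}
```

The key trick is the fabricated `statusCode: 200` -- the tag never loads, but everything upstream believes it did.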
The code sample above was observed in the wild in 2015, and it shows how "tag stripping" is done. The bad guys are clever: they block the tag but return "statusCode: 200," which tricks the servers into thinking the tag was delivered even when it was not. IAS never sees the tag, so it has no telemetry with which to label that ad impression as IVT. Hopefully this makes it clear that the 1% they report is just the portion they can catch. The other 99% does not mean "good" or "not fraud"; it means they have no data and don't know -- i.e. they were "flying blind."
Fraud verification failed to detect "fartbot"
If I haven't convinced you after 10 years that these vendors' detections don't work well, perhaps the following will. For kicks, we took one of the bots from the IAB list, called "fartbot," and ran an experiment to see whether these vendors could detect it and protect their customers' ads from being shown to it. Detecting bots on the IAB list is the bare minimum for getting MRC accreditation for fraud detection. If they can't detect "fartbot," why are these vendors still MRC accredited? You can reproduce this simple experiment yourself in any browser: 1) open the developer tools (CTRL+SHIFT+I in Chrome), 2) go to "More tools," and 3) go to "Network conditions." Near the bottom, uncheck the "Use browser default" checkbox and type a custom browser name that includes the word spider, crawler, or bot -- like "fartbot." This changes the browser's declared name -- the HTTP user-agent string -- to "fartbot." You'd think fraud detection tech would catch this, right? It is certainly not a normal browser name to which ads should be served. Visit any site with your altered browser name and look for ads (be sure you are not using your own ad blockers). If you see ads, that means whatever fraud detection was in use by the advertiser, the publisher, the DSP ("demand-side platform"), and/or the ad exchange ALL failed to detect "fartbot" and prevent ads from being served. The simple bot called "fartbot" was not detected. These vendors could not even detect GIVT (general invalid traffic) properly, yet MRC lists them as "accredited" and TAG lists them as "certified." Hmmmm...
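The check the experiment above tests for is trivially simple. A minimal sketch, assuming only the generic tokens (spider, crawler, bot) used in the experiment -- the real IAB/ABC International Spiders & Bots List is larger and more detailed, and the function names here are mine:

```javascript
// Sketch of the bare-minimum GIVT user-agent check.
// Pattern covers the generic tokens from the experiment above; the
// actual IAB list contains many more named spiders and bots.
const BOT_UA_PATTERN = /bot|spider|crawler/i;

function isDeclaredBot(userAgent) {
  return BOT_UA_PATTERN.test(userAgent);
}

// An ad server doing even minimal GIVT filtering should refuse to
// serve ads to a self-declared bot.
function shouldServeAd(userAgent) {
  return !isDeclaredBot(userAgent);
}
```

A one-line regex catches "fartbot." Any vendor that serves ads to it anyway is not even doing this much.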
Fraud verification is not actionable
Even when fraud verification vendors detect something, they only give you a % IVT. What do you do with that? You can't optimize your campaign with it, since you don't know which domains or apps were fraudulent and should be removed. Further, I am sure most of you have noticed the discrepancies these vendors can't explain: two vendors measuring the same campaign produce entirely different numbers, and both are MRC-accredited (which is useless, BTW). Because neither can explain their measurements, you're stuck in a he-said-she-said situation and have to negotiate for your refund. It's always better to prevent your money from going to bad guys than to try to get it back after the campaign is over. Remember, Uber sued 100 mobile exchanges for ripping them off in 2018. It's 2022 now, and even if they win the lawsuit, most of those 100 ad tech companies no longer exist, so there is no money to be gotten back. They made off with the money and got away with the fraud. Note that the fraud occurred despite every party in the programmatic supply chain having paid for fraud verification. No one noticed until Uber's analytics person, Kevin Frisch, looked for himself.
Fraud verification reports prove their failure
Finally, you only need to look at the reports supplied by these verification vendors to see their obvious failure. Large buckets of impressions are labeled "mobile in-app." That means they don't know, or didn't record, which mobile apps ran your ads. Yet these buckets were labeled "99.992% fraud free." How can they label something "99.992% fraud free" when they don't even know the apps? Oh right -- because they were flying blind. They should NOT have marked that "fraud free"; that is lying to and misleading their paying customers. Further down the spreadsheet in the screen shot above, they also label rows marked "N/A" as 99% fraud free. We now know why: they don't know what site it was, but they marked it 99% fraud free anyway because they had no data. The screen shot to the right shows the mobile app names from the same campaign. Even without fraud detection, you can review the list of mobile apps and decide for yourself whether "casino, gambling, lotto, and scratch off games" are good places for your brand and ads to run.
Analytics empower you to take action yourself
These were the blunt instruments you've had to use before. But now you have better tools -- analytics. It's not so much about whether something was a bot or not; it's about getting more details on your own campaigns so you can see what is really happening. "If you can see better, you can do better." For example, with tags in your digital media you can check simple things like reach and frequency. The left side shows that "reach" is only a few hundred sites and apps -- not the reach most advertisers think they get through programmatic channels. The right side shows frequency: hundreds of ads shown to the same device, probably far more than advertisers expect, because they thought f-caps were deployed and enforced. They should check on that.
Where did your ads actually go? Getting reports with "mobile in-app" as the largest bucket and other rows called "(not set)" or "N/A" certainly doesn't help you. The following is an analysis using FouAnalytics for a large client, right after the Gannett news. We wanted to check whether their ads went to the wrong places. Using javascript tags, FouAnalytics detected the domain where each ad actually ended up (left side). We compared the detected domain with the domain/url passed in the bid request (right side) to find mismatches. In this specific case, we found a few hundred impressions with this mismatch, out of tens of millions of impressions purchased. Analytics lets us show clients the details so they can understand whether they were exposed to the Gannett mess-up and how large their exposure was.
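The mismatch check itself is straightforward once you have both domains per impression. A sketch, assuming field names (`declaredDomain` from the bid request, `detectedDomain` from the on-page tag) that stand in for whatever your measurement pipeline records:

```javascript
// Sketch of the declared-vs-detected domain check described above.
// normalize() strips "www." and case so trivial variants don't count
// as mismatches.
function normalize(domain) {
  return domain.toLowerCase().replace(/^www\./, "");
}

// Return the impressions where the domain declared in the bid request
// does not match the domain actually detected on the page.
function findMismatches(impressions) {
  return impressions.filter(
    (imp) => normalize(imp.declaredDomain) !== normalize(imp.detectedDomain)
  );
}
```

Run this over a campaign's impression log and the Gannett-style mis-declarations fall out immediately -- which is the point: any vendor running a post-ad-serving tag could have done this.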
The other fraud vendors COULD have detected the mis-declared domains too. But they didn't, because they didn't run javascript tags post-ad-serving, or because they took the domain passed in the bid request as if it were the real domain. Since all the data in a bid request is declared, it is not reliable for fraud detection. Even the IP address can be, and regularly is, faked by fraudsters using residential proxy services to bounce the traffic and disguise the origins of the bots. This is why I've said WhiteOps' pre-bid filtration of 15 trillion bid requests per week is mostly useless -- they are trying to detect fraud with data that is likely falsified, so they won't find much beyond the most amateur script kiddies who forgot to disguise their fake domains or their bots.
With analytics, you can see and understand why something is fraud/bots and why something is not. Looking at the click patterns above (left side), you can see what normal clicks look like -- scattered around the page and clustered around the site navigation areas at the top. Contrast that with the click patterns shown in red, where larger circles mean more users clicked on the same X,Y coordinates on the screen. Can you get a whole bunch of humans who don't know each other to click on exactly the same pixel location? Obviously not. But botnets are programmed to click an X,Y location -- and those locations don't even correspond to where the navigation menu or the links are on the page. By reviewing click patterns you can see for yourself why something is marked as "bot clicks." On the right side, you see touch patterns on mobile screens. Can you tell which hand the users are using? The touches are mostly on the right side, so they are holding the phone in their right hands and scrolling with their thumbs.
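The bot-click signal described above -- many "users" hitting the exact same pixel -- can be sketched as a simple frequency count. The threshold below is an illustrative assumption, not a FouAnalytics parameter:

```javascript
// Sketch: flag X,Y coordinates where an implausible number of distinct
// "users" clicked the exact same pixel. Humans scatter; bots repeat.
// The threshold of 20 is illustrative, not a real detection parameter.
function suspiciousClickClusters(clicks, threshold = 20) {
  const counts = new Map();
  for (const { x, y } of clicks) {
    const key = `${x},${y}`;
    counts.set(key, (counts.get(key) || 0) + 1);
  }
  return [...counts.entries()]
    .filter(([, n]) => n >= threshold)
    .map(([key, n]) => ({ coordinate: key, clicks: n }));
}
```

Plotted on a heatmap, the output of a check like this is exactly the large red circles in the screen shot: repeated clicks on coordinates that don't correspond to any link or menu.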
So What?
With analytics, you can see better. When you see better, you can do better (digital marketing). You don't need to pay for fraud verification, for all the reasons above. Advertisers I know are ditching the fraud vendors after years of wasting money. You also DON'T have to use FouAnalytics. You can use your own analytics and reports; just be sure to ask for more detailed reporting, dig into the details, and ask yourself whether it even makes sense. The slides above show that you can discover discrepancies between bids won, ads served, and ads rendered from your own reports, so you can see if you got what you paid for. Once you've done these basic steps with the reports you already have (or should already have), you can upgrade to FouAnalytics. Just as you have Google Analytics for your websites, you can use FouAnalytics -- analytics for your digital media. I will keep the platform free for small and medium businesses. For large advertisers, I will personally choose who gets to use FouAnalytics. Those chosen pay for the analytics via annual FouAnalytics Enterprise subscriptions -- access to the platform for their employees, measurement across all campaigns, up to X billions of ad impressions -- much like a Microsoft Office site license or annual subscription.
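The bids-won / ads-served / ads-rendered check is simple arithmetic you can run on reports you already have. A sketch with illustrative numbers, not figures from any real campaign:

```javascript
// Sketch: quantify drop-off at each step of the chain from your own
// reports. Any large gap means you paid for something you didn't get.
function discrepancies(bidsWon, adsServed, adsRendered) {
  const pctLost = (later, earlier) =>
    (((earlier - later) / earlier) * 100).toFixed(1) + "%";
  return {
    servedVsWon: pctLost(adsServed, bidsWon),        // won but never served
    renderedVsServed: pctLost(adsRendered, adsServed) // served but never rendered
  };
}
```

For example, 1,000,000 bids won, 900,000 ads served, and 700,000 ads rendered means 10% of won bids never served and another 22% of served ads never rendered -- discrepancies worth asking your DSP about.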
The era of fraud verification is ending; the era of analytics is here. Happy Saturday y'all.