The way we acquire users on mobile has changed tremendously over the past few years. If you were an app developer five years ago, I’m pretty sure you’d have picked your publishers or networks based on their ability to deliver the highest volume of installs. That was, by and large, the only expectation set on a successful campaign.
And for a while, this worked. In an era of fast development and tech disruption, when ad campaigns reached actual people by promoting apps that were the first of their kind, a mere download equaled revenue. But as the app ecosystem grew more populated and lucrative, fraudsters lurked beneath the surface, ramping up their tactics to interfere and take a share.
We all remember when that realization kicked in, causing more and more advertisers to start tackling the problem by broadening their view to post-install events and the quality of their users. Marketers began paying attention to user retention cohorts and looking into other app metrics like registration rate, ROI, and level achievement.
That critical, overarching view of campaign outcomes made it harder for fraudsters to remain undetected. Take bots, for example. They couldn’t mix their incent traffic into non-incent campaigns as easily as they had before; those sources would still get flagged for their low or zero quality. If fraudsters wanted to keep earning money at that point, they needed to change their strategy.
A new stage in the evolution of ad fraud
It became apparent that if the app industry moved in the direction of performance-focused advertising, fraudsters would find a way to fake performance. And this is where we enter today’s battle against performance fraud, at the core of which we’re asking ourselves: how can we determine when an action in a campaign is fake?
Being a compliance analyst has given me some perspective on dealing with this new kind of problem. In fact, the number of approaches to spotting advanced fraud can be overwhelming. So let’s stick to the most common cases of detecting fraud based on post-install data, and look at how this can be done.
1 Too good to be true
We’re all familiar with this expression, but when it comes to trusting post-installs, most of us still like to believe in miracles. I’m convinced that this belief does nothing but fill the pockets of fraudsters. Let me get more concrete.
Sources with an over-the-top quality
A “too-good-to-be-true” source in your network campaign is one whose quality is, let’s say, over-the-top: quality that stands out significantly from other sources, performs even better than your Facebook campaigns, and perhaps rivals or beats your organic installs. Now, how would you assess this?
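To make the check concrete, here is a minimal sketch in Python (with pandas) of how such a source could be surfaced automatically. The table, column names, and benchmark values are all hypothetical; in practice the numbers would come from your attribution or BI exports.

```python
import pandas as pd

# Hypothetical per-source campaign stats; in practice these would come
# from your MMP or BI exports.
sources = pd.DataFrame({
    "source":        ["pub_a", "pub_b", "pub_c", "pub_d"],
    "d7_retention":  [0.11, 0.09, 0.34, 0.10],    # day-7 retention rate
    "purchase_rate": [0.020, 0.015, 0.090, 0.018],
})

# Benchmarks you trust: here, assumed organic rates for the same app.
ORGANIC_D7, ORGANIC_PURCHASE = 0.22, 0.05

# A paid source that outperforms your organic benchmark is a strong hint
# that it may be stealing organics or faking events -- flag it first.
too_good = sources[
    (sources["d7_retention"] > ORGANIC_D7)
    | (sources["purchase_rate"] > ORGANIC_PURCHASE)
]
print(too_good)  # pub_c stands out and should go to compliance before any push
```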
Don’t let it happen
In an ideal situation, you would flag this source to your compliance team for a full investigation and ensure it’s safe before pushing anything out. From my experience, there is always a reason to be skeptical. In reality, however, I’ve seen too many app developers give higher Cost Per Install (CPI) rates to allegedly “good publishers”, and ask them to push the source straight away.
What happens when you do that? Well, you risk encouraging potential fraudsters either to continue faking “good quality” or to steal your organic installs even more effectively. At the same time, you discourage other publishers who were actually sending legit traffic, leading them to turn to other advertisers who are ready to pay more for these installs.
2 Low quality is a relative term
While hyper-performing post-installs likely indicate fraud, things get a bit more complicated when it comes to the under-performing ones. Does low quality equal fraud? It doesn’t have to. Here’s how you can tell the difference.
Look at different sources
First of all, compliance analysts and advertisers tend to use different reference points when defining their KPIs. When a compliance analyst examines the quality of a campaign, they usually compare it against the average results from their own or similar networks. Those are usually lower than the organic numbers and rates advertisers tend to refer to. And I must say, the benchmark organic delivers is very relative: it gives you an idea of your results, but you cannot rely on it alone. Advertisers who prioritize organic metrics may therefore be unrealistic when setting their expectations for the quality of a source.
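Continuing the sketch above, you can also set the threshold from the network’s own distribution rather than from organic numbers, which is a rough statistical stand-in for how a compliance analyst benchmarks sources. The two-standard-deviation cutoff is an assumption for illustration, not an industry constant.

```python
# Compare each source against the network's own distribution instead of
# organic rates: anything far above the network mean is "over-the-top".
mean = sources["d7_retention"].mean()
std = sources["d7_retention"].std()

over_the_top = sources[sources["d7_retention"] > mean + 2 * std]
print(over_the_top)  # with enough sources, this isolates statistical outliers
```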
Consider other factors
Secondly, failing to consider factors beyond your results when drawing conclusions about a publisher’s performance is a mistake. Publishers showing bad campaign results aren’t necessarily fraudulent. Before judging them, first ask yourself whether the targeting was precise enough, or whether the publisher’s selected app was actually a proper fit for the ad. In such cases, lower quality can come from perfectly legit sources, and there’s no immediate need to scrub it or be suspicious.
Have no doubt when it comes to zero performance
On the other hand, you should be wary of a complete lack of quality, which reveals itself through 0% values or cases in which users installed the app but didn’t do anything afterwards. Also, take seriously the trends that caused you to flag specific publishers as significant outliers. An example would be retention cohorts that look very promising but where the majority of the users didn’t actually complete the first level within the game. These are the publishers you should flag to be checked with the greatest caution. In other words, if retention is zero, or the achievement of important levels or metrics is zero, then this becomes a compliance question.
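Here is a minimal sketch of that zero-performance check in the same style, with hypothetical per-publisher aggregates and a made-up minimum sample size:

```python
import pandas as pd

# Hypothetical per-publisher aggregates.
pubs = pd.DataFrame({
    "publisher":      ["pub_a", "pub_b", "pub_c"],
    "installs":       [5200, 4800, 6100],
    "d1_retained":    [610, 0, 590],   # users who opened the app on day 1
    "level1_cleared": [450, 0, 12],    # users who completed the first level
})

MIN_INSTALLS = 1000  # only judge publishers with a meaningful sample

# Zero retention, or promising retention paired with (near-)zero level
# completion, is a compliance question rather than an optimization question.
flags = pubs[
    (pubs["installs"] >= MIN_INSTALLS)
    & (
        (pubs["d1_retained"] == 0)
        | (pubs["level1_cleared"] / pubs["installs"] < 0.005)
    )
]
print(flags["publisher"].tolist())  # pub_b (dead traffic), pub_c (retention without progress)
```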
3 Post-install event funnels and distribution outliers
Once you’ve tackled the above points and flagged what stands out, there’s an additional step you can take to track down fraud: establishing post-install event funnels and distribution trends to catch outliers.
Outliers of post-install event funnels
Identify the most crucial events within your app first, and from there reconstruct user paths by segmenting the data based on behavioral patterns. This will help your partners flag and filter out significant outliers from that trend. Let me illustrate this. From looking at how your app is built, you should be able to derive a standard user flow. This flow can include events such as opening the app, registering for a game, reaching a certain level, and making a purchase to reach another level. Once you’ve set up these events, it’s key to identify which publishers’ users are behind them. Did they reach other levels after registering? Did they make purchases? And judging by the completion percentages at each stage, which events did they skip?
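To make this measurable, here is a minimal sketch that computes, per publisher, the share of users reaching each funnel step. The file name, the column names (user_id, publisher, event), and the funnel itself are assumptions for illustration:

```python
import pandas as pd

FUNNEL = ["open", "register", "level_5", "purchase"]  # assumed standard flow

# Hypothetical export of post-install event postbacks.
events = pd.read_csv("postbacks.csv")  # columns: user_id, publisher, event

# Share of each publisher's users that reached each funnel step.
users_per_pub = events.groupby("publisher")["user_id"].nunique()
funnel = pd.DataFrame({
    step: events[events["event"] == step]
          .groupby("publisher")["user_id"]
          .nunique()
          / users_per_pub
    for step in FUNNEL
}).fillna(0)

print(funnel.round(2))
# A publisher whose users "purchase" without ever registering, or whose
# completion percentages skip steps no real user can skip, is an outlier.
```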

By sharing these events with your network partner and explaining which trends you consider normal, you’ll help them proactively flag and pause non-legit activity.
Outliers of post-event distribution trends
You can also dig deeper and flag unusual trends in the event distribution by observing whether publishers send post-install events in abnormal spikes. Here, it is helpful to look at how users behave within short time frames. You might see outliers rushing from one event to the next in just a few seconds. It’s also effective to watch user activity during abnormal hours. If a publisher sends you all their event activity at night, local time, I’ll hazard a guess: fraud. Alarm bells should also go off if they deliver events no other publisher is sending, like user traffic reaching a certain level that has already been removed from the advertising setup. All of this can help you fight sophisticated bot traffic, e.g. SDK spoofing.
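Both checks lend themselves to simple queries. Below is a minimal sketch, reusing the same hypothetical event export with an added timestamp column, assumed to already be expressed in the user’s local time; the 10-second and 80% thresholds are illustrative, not industry constants.

```python
import pandas as pd

events = pd.read_csv("postbacks.csv", parse_dates=["timestamp"])
events = events.sort_values(["user_id", "timestamp"])

# Seconds between a user's consecutive events: bots tend to "rush"
# from one event to the next within a few seconds.
events["gap_s"] = events.groupby("user_id")["timestamp"].diff().dt.total_seconds()
median_gap = events.groupby("publisher")["gap_s"].median()
print(median_gap[median_gap < 10])  # median gaps under 10s look automated

# Share of each publisher's events sent at abnormal hours (01:00-05:00 local).
at_night = events["timestamp"].dt.hour.between(1, 5)
night_share = at_night.groupby(events["publisher"]).mean()
print(night_share[night_share > 0.8])  # nearly everything at night: suspicious
```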
Think outside the box
Pure performance optimization today is shifting strongly in the direction of compliance. Thinking outside the box, looking into trends, developing smart models, and studying their outliers will help app developers protect themselves much more efficiently. Glispa is working on flagging and optimizing our traffic based on post-install data, and constantly looking into ways of improving the existing algorithms.

Evgeniia Pshenitcyna
Senior Compliance Manager | Glispa
Evgeniia has about eight years’ experience in digital advertising. She started her career in classic digital agencies, in strategic media planning and project management, then moved to the mobile side of ad tech. There, she grew into an experienced campaign manager and fraud analyst, and specialized in BI to unite her extensive industry knowledge with her technical expertise.