
    Recommender System to Improve Monetization for Ad-Tech Leader

    October 17, 2023


    Problem Statement

    The client is one of the leading global players in the mobile advertising and app monetization space. It facilitates running offers for advertising companies across multiple platforms in its partner network. This in turn helps both advertisers as well as publishers in customer acquisition and retention.

    The client has operations in more than 100 countries with 15M+ daily active users who complete 80M+ advertiser offers daily through its interface on publisher mobile applications. Advertisers incentivize users to complete offers with rewards in the form of in-app currency or dollars.

    The client had multiple product lines catering to multiple types of offers, like Videos, Shopping Offers, Click Offers, Engagement/Install Offers, Survey Offers etc. All of these offers compete against each other for user attention. However, just getting user attention isn’t enough; to get the most out of the interaction, the user has to complete the offer. Some of these offers are lengthy or complex and carry substantial rewards, whereas others are simpler and carry lower rewards. Some users drop off during the attempt, some may be disqualified by the advertiser, and some others may try to game the system. As an ad network, the client had to show contextually relevant offers to these users to improve the probability of conversion and hence monetization rates.


    We started off with a two-pronged approach.

    1. Understanding KPIs & Behavior
      • Identifying the current monetization rates/KPIs to set benchmarks: we quickly built a few reports and real-time dashboards in Google Data Studio to understand the trends
      • Identifying user preferences: we conducted various deep-dives to understand which features impact user behaviour, such as:
        • Value proposition of the offer: offers with lower time spent per dollar of reward earned are more attractive
        • Difficulty level: offers with a higher difficulty level have a higher chance of a user quitting midway without completing them
    2. Understanding the Tech Specs of the System
      • The analyses above gave us deep insights into:
        • Scale of data to be managed by the system
        • Expected Latency of the API
        • Potential data sources & refresh rates
        • Types of models/ approaches which can be tried
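    The behavioural features in point 1 above can be illustrated with simple aggregations. A minimal plain-Python sketch, where the records and field names are hypothetical, not the client's actual schema:

    ```python
    # Hypothetical offer-interaction records; field names are illustrative only.
    interactions = [
        {"offer_id": "survey_1", "reward_usd": 2.0, "minutes_spent": 10, "completed": True},
        {"offer_id": "survey_1", "reward_usd": 2.0, "minutes_spent": 4, "completed": False},
        {"offer_id": "video_7", "reward_usd": 0.5, "minutes_spent": 1, "completed": True},
    ]

    def value_proposition(records):
        """Minutes spent per dollar earned, per offer (lower = more attractive)."""
        totals = {}
        for r in records:
            t = totals.setdefault(r["offer_id"], {"minutes": 0.0, "usd": 0.0})
            t["minutes"] += r["minutes_spent"]
            t["usd"] += r["reward_usd"]
        return {k: v["minutes"] / v["usd"] for k, v in totals.items()}

    def completion_rate(records):
        """Share of attempts completed, per offer (a proxy for difficulty)."""
        counts = {}
        for r in records:
            c = counts.setdefault(r["offer_id"], [0, 0])
            c[0] += r["completed"]
            c[1] += 1
        return {k: done / total for k, (done, total) in counts.items()}
    ```

    Features like these feed both the dashboards used for benchmarking and, later, the recommendation model.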


    How we moved from rapid prototyping to production!

    Generating impact early and being able to iterate fast are cornerstones of our operating methods! We built a team consisting of Data Engineers & Data Scientists with the goal of rapid prototyping and moving to production fast! This set the stage for future experiments and model releases.

    The initial analysis had yielded around 10-15 features with a significant impact on user behaviour. Our data engineers started building data pipelines for feature engineering by connecting various data sources (BigQuery, GCS & AWS). These would later evolve into a feature store. Different pipelines served different use cases: the ones involving big data were built using PySpark & Scala, whereas others with smaller data volumes were built using simple Python scripts. These scripts were deployed using Airflow on GCS & EMR clusters.
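    The pipeline code itself isn't shown in the case study. As a rough illustration of the feature-engineering join these pipelines perform, here is a sketch in plain Python standing in for PySpark, with all field names and source shapes hypothetical:

    ```python
    # Stand-ins for extracts from two sources (think: BigQuery event logs
    # and GCS offer metadata); the schemas here are made up for illustration.
    events = [
        {"user_id": "u1", "offer_id": "o1", "clicks": 3},
        {"user_id": "u1", "offer_id": "o2", "clicks": 1},
    ]
    offers = {"o1": {"category": "survey"}, "o2": {"category": "video"}}

    def build_features(events, offers):
        """Join per-user event aggregates with offer metadata into flat
        feature rows -- the kind of table a feature store would serve."""
        rows = []
        for e in events:
            meta = offers.get(e["offer_id"], {})
            rows.append({
                "user_id": e["user_id"],
                "offer_id": e["offer_id"],
                "clicks": e["clicks"],
                "offer_category": meta.get("category", "unknown"),
            })
        return rows
    ```

    In the real pipelines this join would be a PySpark `DataFrame` join scheduled by Airflow; the sketch only shows the shape of the transformation.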

    Recommendation Model

    While these pipelines were being built, our Data Scientists created a training dataset with PySpark using the same feature set, and started developing a baseline version of the algorithm on AWS SageMaker. Given that we wanted to create impact fast, we first rolled out experiments with simple models like user-item & user-user collaborative filtering to set performance benchmarks, and later tried various other algorithms like Contextual Bandits.
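    A minimal user-user collaborative-filtering baseline of the kind described can be sketched as follows. The engagement scores and offer IDs are made up, and cosine similarity is one common choice of similarity measure, not necessarily what was used in production:

    ```python
    from math import sqrt

    # Hypothetical user -> {offer: engagement score} matrix.
    ratings = {
        "u1": {"o1": 5.0, "o2": 3.0},
        "u2": {"o1": 4.0, "o2": 2.0, "o3": 5.0},
        "u3": {"o2": 4.0, "o3": 4.0},
    }

    def cosine(a, b):
        """Cosine similarity between two sparse rating vectors."""
        common = set(a) & set(b)
        if not common:
            return 0.0
        num = sum(a[k] * b[k] for k in common)
        den = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
        return num / den

    def recommend(user, ratings, top_n=1):
        """Score unseen offers by similarity-weighted scores of other users."""
        seen = ratings[user]
        scores = {}
        for other, their in ratings.items():
            if other == user:
                continue
            sim = cosine(seen, their)
            for offer, r in their.items():
                if offer not in seen:
                    scores[offer] = scores.get(offer, 0.0) + sim * r
        return sorted(scores, key=scores.get, reverse=True)[:top_n]
    ```

    Even a baseline this simple gives a benchmark that more complex models (e.g. contextual bandits) have to beat.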

    We also had to tackle the cold-start problem for newly launched offers. We developed an auditioning framework for this, along with a dedicated algorithm to ensure better candidate selection even in the auditioning phase.
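    The auditioning framework's internals aren't detailed here. A generic epsilon-greedy sketch shows one common way to give cold-start offers a share of exploration traffic; the function names and the epsilon value are illustrative assumptions, not the client's algorithm:

    ```python
    import random

    def pick_offer(ranked_offers, new_offers, epsilon=0.1, rng=random):
        """With probability epsilon, audition a new (cold-start) offer;
        otherwise serve the model's top-ranked offer."""
        if new_offers and rng.random() < epsilon:
            return rng.choice(new_offers)
        return ranked_offers[0]
    ```

    Observed completions from the auditioned traffic then become training data, letting new offers graduate out of the cold-start phase.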

    Inference Pipeline (API)

    The inference pipelines were first built using AWS API Gateway & Lambda, but as soon as we started using more complex models, we moved the inference infrastructure to a dedicated EMR cluster with auto-scaling enabled. We deployed the system as an API: the engineering team only needed to make a GET request to our API whenever a new user’s offerwall had to be rendered. We had to make multiple tweaks related to optimization and caching to ensure that the latency of the API stayed under 200ms.
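    The exact caching tweaks aren't specified in the case study. As a minimal sketch, assuming per-user caching of ranked offers with a time-to-live, the pattern looks like this (all names hypothetical):

    ```python
    import time

    class TTLCache:
        """Tiny in-process cache with expiry -- the kind of caching layer
        that keeps repeat offerwall requests off the ranking model."""
        def __init__(self, ttl_seconds=60.0, clock=time.monotonic):
            self.ttl = ttl_seconds
            self.clock = clock
            self._store = {}

        def get(self, key):
            hit = self._store.get(key)
            if hit is None:
                return None
            value, expires = hit
            if self.clock() > expires:
                del self._store[key]
                return None
            return value

        def set(self, key, value):
            self._store[key] = (value, self.clock() + self.ttl)

    def handle_request(user_id, cache, rank_fn):
        """Sketch of the GET handler: serve cached offers if still fresh,
        otherwise rank and cache the result."""
        offers = cache.get(user_id)
        if offers is None:
            offers = rank_fn(user_id)
            cache.set(user_id, offers)
        return offers
    ```

    Caching trades a little freshness for latency; the TTL bounds how stale a served offerwall can be.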


    Over the last year:

    1. Started with just 1 Geography (US), serving >1M requests per day but scaled to ~21 countries, serving >10M requests per day.
    2. Monetization rate increased by ~70% YoY
    3. Customer Return Rates increased by ~10%
    4. Interactions per user increased from ~9 to ~13

