Fairgen ‘boosts’ survey results using synthetic data and AI-generated responses
Surveys have been used to gain insights on populations, products and public opinion since time immemorial. And while methodologies might have changed through the millennia, one thing has remained constant: The need for people, lots of people.
But what if you can't find enough people to build a large enough sample group to generate meaningful results? Or what if you could potentially find enough people, but budget constraints limit the number of people you can source and interview?
This is where Fairgen wants to help. The Israeli startup today launched a platform that uses "statistical AI" to generate synthetic data that it says is as good as the real thing. The company is also announcing a fresh $5.5 million fundraise from Maverick Ventures Israel, The Creator Fund, Tal Ventures, Ignia and a handful of angel investors, taking its total cash raised since inception to $8 million.
"Fake data"
Data might be the lifeblood of AI, but it has also been the cornerstone of market research since forever. So when the two worlds collide, as they do in Fairgen's world, the need for quality data becomes a little more pronounced.
Founded in Tel Aviv, Israel, in 2021, Fairgen was previously focused on tackling bias in AI. But in late 2022, the company pivoted to a new product, Fairboost, which it is now launching out of beta.
Fairboost promises to "boost" a smaller dataset by up to three times, enabling more granular insights into niches that may otherwise be too difficult or expensive to reach. To do this, companies train a deep machine learning model on each dataset they upload to the Fairgen platform, with statistical AI learning patterns across the different survey segments.
The concept of "synthetic data" (data created artificially rather than drawn from real-world events) isn't novel. Its roots go back to the early days of computing, when it was used to test software and algorithms and to simulate processes. But synthetic data as we understand it today has taken on a life of its own, particularly with the advent of machine learning, where it is increasingly used to train models. Artificially generated data that contains no sensitive information can address both data scarcity and data privacy concerns.
Fairgen is the latest startup to put synthetic data to the test, with market research as its primary target. It's worth noting that Fairgen doesn't produce data out of thin air, or throw millions of historical surveys into an AI-powered melting pot: market researchers run a survey for a small sample of their target market, and from that Fairgen establishes patterns to expand the sample. The company says it can guarantee at least a two-fold boost on the original sample, but on average it achieves a three-fold boost.
In this way, Fairgen might establish that someone in a particular age bracket and/or income level is more inclined to answer a question in a certain way, or combine any number of data points to extrapolate from the original dataset. It's basically about generating what Fairgen co-founder and CEO Samuel Cohen calls "stronger, more robust segments of data, with a lower margin of error."
"The main realization was that people are becoming increasingly diverse; brands need to adapt to that, and they need to understand their customer segments," Cohen explained to TechCrunch. "Segments are very different; Gen Zs think differently from older people. And in order to be able to have this market understanding at the segment level, it costs a lot of money, takes a lot of time and operational resources. And that's where I realized the pain point was. We knew that synthetic data had a role to play there."
An obvious criticism, one the company concedes it has contended with, is that this all sounds like a massive shortcut around going out into the field, interviewing real people and collecting real opinions.
Surely any under-represented group should be concerned that their real voices are being replaced by, well, fake voices?
"Every single customer we talked to in the research space has huge blind spots, totally hard-to-reach audiences," Fairgen's head of growth, Fernando Zatz, told TechCrunch. "They actually don't sell projects because there are not enough people available, especially in an increasingly diverse world where you have a lot of market segmentation. Sometimes they cannot go into specific countries; they cannot go into specific demographics, so they actually lose on projects because they cannot reach their quotas. They have a minimum number [of respondents], and if they don't reach that number, they don't sell the insights."
Fairgen isn't the only company applying generative AI to the field of market research. Qualtrics last year said it was investing $500 million over four years to bring generative AI to its platform, though with a substantive focus on qualitative research. Either way, it's further evidence that synthetic data is here, and here to stay.
But validating results will play an important part in convincing people that this is the real deal and not some cost-cutting measure that will produce suboptimal results. Fairgen does this by comparing a "real" sample boost with a "synthetic" sample boost: it takes a small sample of the dataset, extrapolates it and puts it side by side with the real thing.
"With every single customer we sign up, we do this exact same kind of test," Cohen said.
Statistically speaking
Cohen has an MSc in statistical science from the University of Oxford, and a PhD in machine learning from London's UCL, part of which involved a nine-month stint as a research scientist at Meta.
One of the company's co-founders is chairman Benny Schnaider, who was previously in the enterprise software space, with four exits to his name: Ravello to Oracle for a reported $500 million in 2016; Qumranet to Red Hat for $107 million in 2008; P-Cube to Cisco for $200 million in 2004; and Pentacom to Cisco for $118 million in 2000.
And then there's Emmanuel Candès, professor of statistics and electrical engineering at Stanford University, who serves as Fairgen's lead scientific advisor.
This business and mathematical backbone is a major selling point for a company trying to convince the world that fake data can be every bit as good as real data, if applied correctly. It's also how the company is able to clearly explain the thresholds and limitations of its technology: how big the samples need to be to achieve the optimum boosts.
According to Cohen, Fairgen ideally needs at least 300 real respondents for a survey, and from that Fairboost can boost any segment constituting no more than 15% of the broader survey.
"Below 15%, we can guarantee an average 3x boost after validating it with hundreds of parallel tests," Cohen said. "Statistically, the gains are less dramatic above 15%. The data already presents good confidence levels, and our synthetic respondents can only potentially match them or bring a marginal uplift. Business-wise, there is also no pain point above 15%: brands can already take learnings from these groups; they are only stuck at the niche level."
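To make those thresholds concrete: with the 300-respondent floor Cohen cites, the 15% rule caps boostable segments at 45 respondents, and the claimed average 3x boost would take such a segment to roughly 135 rows. A trivial sketch of that arithmetic (the helper names are invented for illustration):

```python
def max_boostable_segment(total_respondents: int, cap: float = 0.15) -> int:
    """Largest segment eligible for boosting under the 15%-of-survey
    rule Cohen describes (invented helper; numbers from the article)."""
    return int(total_respondents * cap)

def boosted_size(segment_size: int, factor: int = 3) -> int:
    """Segment size after the claimed average 3x boost."""
    return segment_size * factor

print(max_boostable_segment(300))                # 45 respondents
print(boosted_size(max_boostable_segment(300)))  # 135 rows after a 3x boost
```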
The no-LLM factor
It's worth noting that Fairgen doesn't use large language models (LLMs), and its platform doesn't generate "plain English" responses à la ChatGPT. The reason for this is that an LLM will use learnings from myriad other data sources outside the parameters of the study, which increases the chances of introducing bias that is incompatible with quantitative research.
Fairgen is all about statistical models and tabular data, and its training relies solely on the data contained within the uploaded dataset. That effectively allows market researchers to generate new and synthetic respondents by extrapolating from adjacent segments in the survey.
"We don't use any LLMs for a very simple reason, which is that if we were to pre-train on a lot of [other] surveys, it would just convey misinformation," Cohen said. "Because you'd have cases where it's learned something in another survey, and we don't want that. It's all about reliability."
In terms of business model, Fairgen is sold as SaaS: companies upload their surveys in a structured format (.CSV or .SAV) to Fairgen's cloud-based platform. According to Cohen, it takes up to 20 minutes to train the model on the survey data it's given, depending on the number of questions. The user then selects a "segment" (a subset of respondents who share certain characteristics), e.g. "Gen Z working in industry x," and Fairgen delivers a new file structured identically to the original training file, with the exact same questions, just new rows.
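Cohen's description implies a simple contract: the platform ingests a structured survey file, trains on that dataset alone, and returns a file with identical columns plus extra rows for the chosen segment. The sketch below mimics only that input/output contract; the per-column resampling is a deliberately naive placeholder (Fairgen's actual statistical model is not public), and every name and column here is invented:

```python
import random

def boost_segment(rows, segment_filter, boost_factor=3, seed=0):
    """Naive stand-in for Fairgen's proprietary model: draw each
    synthetic respondent's answers per-column from the segment's
    empirical distribution, so output rows share the input schema."""
    rng = random.Random(seed)
    segment = [r for r in rows if segment_filter(r)]
    columns = list(segment[0].keys())
    synthetic = [
        {c: rng.choice(segment)[c] for c in columns}
        for _ in range(len(segment) * (boost_factor - 1))
    ]
    return segment + synthetic

# Tiny fabricated survey: same columns for every respondent.
survey = [
    {"age_group": "genz", "q1": "yes"},
    {"age_group": "genz", "q1": "no"},
    {"age_group": "older", "q1": "no"},
    {"age_group": "older", "q1": "yes"},
]
boosted = boost_segment(survey, lambda r: r["age_group"] == "genz")
print(len(boosted))        # 6: 2 real rows plus 4 synthetic ones (a 3x boost)
print(sorted(boosted[0]))  # same columns as the original file
```

A real system would of course model cross-column correlations rather than sampling each answer independently; the point here is only the file-in, same-schema-file-out shape of the workflow.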
Fairgen is being used by BVA and French polling and market research firm IFOP, which have already integrated the startupâs tech into their services. IFOP, which is a little like Gallup in the U.S., is using Fairgen for polling purposes in the European elections, though Cohen thinks it might end up getting used for the U.S. elections later this year, too.
"IFOP are basically our stamp of approval, because they have been around for like 100 years," Cohen said. "They validated the technology and were our original design partner. We're also testing or already integrating with some of the largest market research companies in the world, which I'm not allowed to talk about yet."