AI apps must be trained on data before they can evaluate it. However, some data is restricted by law, and other data is impractical to gather. These difficulties created a need for synthetic data, and the market is providing it.
To most people, “synthetic data” sounds like a paradox, or at least a contradiction in terms. Something synthetic is made up, while data is presumed to be a collection of facts. In the realm of AI, though, synthetic data is not only real, it has become a critical tool. In fact, it is becoming so important that, according to at least one prediction, it may actually surpass real data for training AI apps by the end of this decade. Any enterprise that plans to use AI at some point will have to embrace this confusingly named concept, if not become an outright consumer of synthetic data.
Machine learning (ML), the process by which the neural networks that run AI apps learn how to do what they do, requires large amounts of data both to train an AI and to help ensure that the training an app receives isn't biased in some subtle way. Essentially, an AI app is presented with known data and works on it until it produces an expected value or output, at which point the owner of the app can consider it trained enough to proceed to the next training step. Although the process sounds simple, it can be rather complicated in execution.
Without getting too far into the weeds here, at a basic level an AI app isn't learning the data itself, but rather how to classify it. The app turns that classification into a mathematical function for subsequent processing. Part of this classification process also involves labeling the data, a task currently done mostly by humans to shortcut the trial-and-error process an AI app would otherwise have to go through to do it itself. Labeling provides a context for the data, and as the ML process continues, more labels are added to give it more context. Generally speaking, the more data a dataset contains and the more accurate its labels, the more efficient the ML process can be.
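To make the role of labels concrete, here is a minimal illustrative sketch in Python. It is not any particular vendor's or framework's process; the features, the labels, and the choice of scikit-learn's LogisticRegression are all assumptions made purely for brevity:

```python
# A toy classification setup: labeled data is what lets the
# training step check its output against an expected value.
from sklearn.linear_model import LogisticRegression

# Each row is one observation; the numbers are made-up encoded
# features (say, tablet flavor and dosage form).
features = [[0, 1], [1, 0], [1, 1], [0, 0]]

# Human-supplied labels give each observation its context.
labels = ["preferred", "not preferred", "preferred", "not preferred"]

model = LogisticRegression()
model.fit(features, labels)        # "training" on labeled data

# Once trained, the model classifies new, unlabeled observations.
print(model.predict([[0, 1]]))     # -> ['preferred']
```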
Finding Enough of the Right Kind of Data
Ah, but where does that data come from? What guarantees its accuracy? And how much data of the kind needed can one actually get?
If you were trying to figure out something like whether people would be more or less likely to use an antacid tablet for heartburn depending on the tablet's flavor, asking every person in the nation that question would produce a large dataset, but the time and expense of doing so would be prohibitive. If instead you tried to get data from hospitals on how often their patients needed antacid tablets based on what they had consumed from hospital room service, you'd run up against privacy protections embedded in laws such as the Health Insurance Portability and Accountability Act (HIPAA). Hospitals can't release that kind of data because it could be traced back to the individuals from whom it came, potentially violating their privacy.
Finally, even if data were somehow available by either method, how should it be labeled? How would you prevent human error in the labeling process? Synthetic data solves these problems.
It used to be that the privacy dilemma could be worked around by anonymizing data: for example, processing a large amount of data about epidemic victims after jumbling their names or substituting other letters, so specific records couldn't be traced back to an individual. However, in 2019, researchers from Imperial College London and Belgium's Université Catholique de Louvain showed that the actual people in an anonymized dataset could be re-identified from as few as 15 demographic attributes.
While similar demonstrations had been made earlier with smaller datasets, this was a tipping point: supposedly anonymized data could still violate Europe's General Data Protection Regulation (GDPR). Simple anonymizing is no longer enough protection, particularly for text data. The new standard, especially for datasets used in medical research or those dealing with financial records, is to stay true to the structure of a dataset without revealing anything that could violate any person's privacy.
The Value of Synthetic Data
Synthetic data is produced by custom-built algorithms that generate artificial datasets replicating the structure of any kind of actual dataset. Because these algorithms model the data in a real dataset without copying the actual data within it, the synthetic dataset retains the form, probability distribution, and other characteristics of whatever kind of dataset might be used for AI app training. However, the synthetic dataset contains no actual information that can be tied to any individual, so it avoids the privacy risk. Synthetic data simply represents actual data and lets an AI app set baselines just as it could with real data. ML training thus retains the ability to help AI apps learn to recognize statistical patterns and other properties in a dataset that can't be traced back to anything, because it's replacement data rather than actual data. And it's well known that ML training can produce more-accurate AI app models when the training data is more diverse.
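To illustrate the core idea in the simplest possible terms, here is a hedged sketch in Python with NumPy. The "real" column of patient ages is invented, and the single normal distribution is a deliberate oversimplification of what commercial generators do, but it shows how synthetic values can preserve a dataset's statistical shape while containing none of its actual records:

```python
import numpy as np

rng = np.random.default_rng(seed=42)

# Stand-in for a sensitive real-world column (invented values).
real_ages = np.array([34, 45, 29, 61, 50, 38, 47, 55, 42, 36])

# Model the column's distribution rather than copying its values.
mu, sigma = real_ages.mean(), real_ages.std()

# Sample a synthetic column of any desired size from that model.
synthetic_ages = rng.normal(mu, sigma, size=1000)

# The synthetic column preserves the statistical shape...
print(round(synthetic_ages.mean(), 1), round(synthetic_ages.std(), 1))
# ...but no synthetic value maps back to a real individual.
```

Production-grade generators model the joint distribution across many columns at once; this one-column sketch only demonstrates the principle.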
In keeping with the idea of making medical-related data available for AI training and other purposes without violating privacy rules, health-insurance provider Anthem announced in May that it would partner with Google Cloud to generate synthetic data that would include "medical histories, health care claims and other medical data."
Synthetic data also has an economic benefit: it counteracts the problem of prohibitive data-gathering costs. Generated datasets can be scaled to any size without affecting validity. That lets enterprises looking for a very large amount of data obtain it without having to research or generate it themselves, and it lets smaller companies access this aid to ML activity at a reasonable cost. It also provides enough data diversity to reflect the real-world conditions that ML training tries to teach, and it opens possibilities for enterprises simply to experiment with ML training relatively inexpensively. Such experimentation can act as a confidence-building measure for organizations choosing to sample AI technology before launching any full-bore project.
In addition, synthetic datasets come with labels already assigned, so the entity using them gets to skip that step in preparing AI training data. Tweaking the synthetic datasets also lets AI support staff adjust data-generation parameters and observe the results. The technology has the potential, for example, to help spot and counteract lending biases in the financial services industry and to help retailers analyze consumer purchasing behavior, all without exposing personal information about individuals.
Synthetic Data's Types and Categories
Generally speaking, synthetic data is divided into three major types and three categories. The three major types are synthetic text, synthetic images, and tabular synthetic data. Synthetic text is data in a natural-language format. Synthetic images, used primarily in computer vision applications, include still and video graphics for training AI apps to correctly "see" and discriminate between images of different objects. Tabular synthetic data is data of many kinds arranged in field-like (row-and-column) structures for analysis by algorithms.
The three categories are fully synthetic data, partially synthetic data, and hybrid synthetic data. Fully synthetic data, as one might expect, is generated independently of real data and therefore has no relationship to actual data except that its format is similar. Partially synthetic data uses real data as a structure but substitutes synthetic values wherever there is a potential privacy issue, as the sketch below illustrates. Hybrid synthetic data takes randomly selected items of real data and combines them with similarly structured synthetic records to create an independent synthetic dataset.
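As a rough Python sketch of the partially synthetic category (the records, the choice of "name" as the sensitive field, and the replacement values are all assumptions for illustration), only the privacy-risky values are replaced while the rest of each record keeps its real structure:

```python
import random

random.seed(7)

# Invented records; "name" is the privacy-sensitive field here.
real_records = [
    {"name": "A. Smith", "age": 52, "uses_antacid": True},
    {"name": "B. Jones", "age": 37, "uses_antacid": False},
]

replacement_names = ["Patient-001", "Patient-002", "Patient-003"]

# Partially synthetic: keep the real structure and non-sensitive
# values, substitute synthetic values where privacy is at risk.
partially_synthetic = [
    {**rec, "name": random.choice(replacement_names)}
    for rec in real_records
]

print(partially_synthetic)
```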
There are two major ways to create synthetic data. The first is to take the statistical distribution of a set of real-world data and then substitute new numbers for the original values, creating a dataset with the same distribution but different actual values. The second is to build a model of the observed statistical distribution of an actual dataset and then generate random data using that model as a structure. The models used are primarily generative, meaning they model the data distribution itself and can tell users the probability that a particular data item is representative of the overall dataset. (This is opposed to discriminative models, which use labels and tell users how likely a label is to be correct.)
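Here is a hedged sketch of the second, model-based approach, using scikit-learn's GaussianMixture as a stand-in generative model (commercial generators are far more sophisticated, and the two-column "real" dataset here is invented):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Invented "real" dataset: two columns, e.g. age and visits/year.
real_data = rng.normal(loc=[45.0, 3.0], scale=[12.0, 1.5],
                       size=(200, 2))

# Fit a generative model of the observed distribution...
model = GaussianMixture(n_components=2, random_state=0).fit(real_data)

# ...then sample brand-new synthetic rows from that model.
synthetic_rows, _ = model.sample(100)

# Because the model is generative, it can also score how
# representative any given data item is of the overall dataset.
print(model.score_samples(synthetic_rows[:1]))  # log-likelihood
```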
(As a side note, synthetic data shouldn't be confused with a somewhat similar term, surrogate data. Surrogate data refers specifically to a group of data points in a time series, sampled at equally spaced moments, that can subsequently be analyzed via moving-average models.)
Obtaining Synthetic Data
If these definitions aren't entirely clear, the good news is that rather than building an algorithm to generate synthetic data, it's easier to turn to enterprises that provide synthetic data for training AI apps. For most situations, synthetic data produced by established companies using their own tested generation algorithms is both more accurate, without violating privacy rules, and cheaper than what an enterprise could get by inventing a synthetic data-generating algorithm on its own.
To be sure, no one can guarantee total accuracy with today's means of generating synthetic data. However, the consensus among many enterprises seeking ML training data for AI apps is that the accuracy is high enough to outweigh the expense and inconvenience of generating and labeling some version of synthetic data in-house.
There are currently scores of vendors offering synthetic data of various kinds, as well as tools for building synthetic datasets in-house, should anyone's needs make that route more practical in the long term. Because of space considerations, what follows is simply a series of links to such providers. Readers interested in obtaining synthetic data, or related tools for building it independently, are encouraged to research which vendors best suit their particular AI training needs.
Synthetic Data and Data Generation Apps Vendors