OpenAI breach is a reminder that AI companies are treasure troves for hackers
There's no need to worry that your secret ChatGPT conversations were obtained in a recently reported breach of OpenAI's systems. The hack itself, while troubling, appears to have been superficial, but it's a reminder that AI companies have in short order made themselves into one of the juiciest targets out there for hackers.
The New York Times reported the hack in more detail after former OpenAI employee Leopold Aschenbrenner hinted at it recently in a podcast. He called it a "major security incident," but unnamed company sources told the Times the hacker only got access to an employee discussion forum. (I reached out to OpenAI for confirmation and comment.)
No security breach should really be treated as trivial, and eavesdropping on internal OpenAI development talk certainly has its value. But it's far from a hacker getting access to internal systems, models in progress, secret roadmaps, and so on.
Still, it should scare us, and not necessarily because of the threat of China or other adversaries overtaking us in the AI arms race. The simple fact is that these AI companies have become gatekeepers to a tremendous amount of very valuable data.
Let's talk about three kinds of data that OpenAI and, to a lesser extent, other AI companies have created or have access to: high-quality training data, bulk user interactions, and customer data.
It's uncertain exactly what training data they have, because the companies are incredibly secretive about their hoards. But it's a mistake to think of those hoards as just big piles of scraped web data. Yes, they use web scrapers and datasets like the Pile, but shaping that raw data into something that can be used to train a model like GPT-4o is a gargantuan task, one that requires a huge number of human work hours and can only be partially automated.
Some machine learning engineers have speculated that of all the factors going into the creation of a large language model (or, perhaps, any transformer-based system), the single most important one is dataset quality. That's why a model trained on Twitter and Reddit will never be as eloquent as one trained on every published work of the last century. (And probably why OpenAI reportedly used questionably legal sources like copyrighted books in its training data, a practice it claims to have given up.)
So the training datasets OpenAI has built are of tremendous value to competitors, from other companies to adversary states to regulators here in the U.S. Wouldn't the FTC or courts like to know exactly what data was being used, and whether OpenAI has been truthful about that?
But perhaps even more valuable is OpenAI's enormous trove of user data: probably billions of conversations with ChatGPT on hundreds of thousands of topics. Just as search data was once the key to understanding the collective psyche of the web, ChatGPT has its finger on the pulse of a population that may not be as broad as the universe of Google users, but provides far more depth. (In case you weren't aware, unless you opt out, your conversations are being used for training data.)
In the case of Google, an uptick in searches for "air conditioners" tells you the market is heating up a bit. But those users don't then have a whole conversation about what they want, how much money they're willing to spend, what their home is like, manufacturers they want to avoid, and so on. You know this is valuable because Google itself is trying to get its users to provide this very information by substituting AI interactions for searches!
Think of how many conversations people have had with ChatGPT, and how useful that information is, not just to developers of AIs, but to marketing teams, consultants, analysts… it's a gold mine.
The last category of data is perhaps of the highest value on the open market: how customers are actually using AI, and the data they have themselves fed to the models.
Hundreds of major companies and countless smaller ones use tools like the OpenAI and Anthropic APIs for an equally wide variety of tasks. And in order for a language model to be useful to them, it usually must be fine-tuned on or otherwise given access to their own internal databases.
This might be something as prosaic as old budget sheets or personnel records (to make them more easily searchable, for instance) or as valuable as code for an unreleased piece of software. What they do with the AI's capabilities (and whether they're actually useful) is their business, but the simple fact is that the AI provider has privileged access, just as any other SaaS product does.
These are industrial secrets, and AI companies are suddenly right at the heart of a great deal of them. The newness of this side of the industry carries with it a special risk in that AI processes are simply not yet standardized or fully understood.
Like any SaaS provider, AI companies are perfectly capable of providing industry-standard levels of security, privacy, on-premises options, and, generally speaking, delivering their service responsibly. I have no doubt that the private databases and API calls of OpenAI's Fortune 500 customers are locked down very tightly! They must certainly be as aware as anyone, if not more so, of the risks inherent in handling confidential data in the context of AI. (The fact that OpenAI did not report this attack is its choice to make, but it doesn't inspire trust in a company that desperately needs it.)
But good security practices don't change the value of what they are meant to protect, or the fact that malicious actors and sundry adversaries are clawing at the door to get in. Security isn't just picking the right settings or keeping your software updated, though of course the basics are important too. It's a never-ending cat-and-mouse game that is, ironically, now being supercharged by AI itself: agents and attack automators are probing every nook and cranny of these companies' attack surfaces.
There's no reason to panic: companies with access to lots of personal or commercially valuable data have faced and managed similar risks for years. But AI companies represent a newer, younger, and potentially juicier target than your garden-variety poorly configured enterprise server or irresponsible data broker. Even a hack like the one reported above, with no serious exfiltrations that we know of, should worry anybody who does business with AI companies. They've painted the targets on their backs. Don't be surprised when anyone, or everyone, takes a shot.