Your fears. Your business secrets. Your health questions. Your financial worries. Years of conversations you never would have had out loud, now feeding an ad machine.
The bottom line, before you read another word: OpenAI’s own documentation confirms that “past chats,” including conversations from months or years ago, are being fed into its new ad-targeting system. Every private thought, business strategy, and personal confession you typed into ChatGPT since its launch has now been analyzed to sell you things.
This Isn’t About Ads. It’s About Trust.
Think about the last time you opened ChatGPT.
Maybe you asked about a health symptom you were too embarrassed to bring up with your doctor. Maybe you drafted a sensitive email to an employee you were about to let go. Maybe you worked through a business acquisition strategy, shared revenue numbers, or asked how to handle a difficult investor conversation.
You did that because you thought it was private.
On February 9, 2026, OpenAI officially launched advertising inside ChatGPT. But the ads themselves aren’t the scandal. The scandal is what powers them.
From OpenAI’s Own Help Center:
“If memory is on [which it is by default], ChatGPT may save and use memories and reference recent chats when selecting an ad.”
“[The system] may also use additional signals, like past chats and how you’ve interacted with ads, to make ads more relevant over time.”
Consider What People Actually Tell ChatGPT
This isn’t a search engine where you type a few keywords. People pour themselves into these conversations. Business owners and executives use ChatGPT for:
- Processing personal health concerns and symptoms
- Navigating relationship and family issues
- Brainstorming responses to legal threats
- Discussing mental health struggles they haven’t shared with anyone
“People think that they are interacting in a completely private, secure environment, which is false.” – Nathalie Marechal, Center for Democracy and Technology
“If you share sensitive information in a dialogue with ChatGPT, it may be collected and used for training. You start seeing ads for medications, and it’s easy to see how this information could end up in the hands of an insurance company.” – Jennifer King, Stanford Privacy Researcher
The Numbers Are Disturbing
| Stat | What It Means |
|---|---|
| 800M+ | Weekly active ChatGPT users whose past chats can now inform ad targeting |
| 64% | Of professionals worry about sharing sensitive info through AI tools (Cisco, 2025) |
| 40% | Of organizations have already experienced an AI privacy breach (Gartner) |
| €15M | Fine Italy levied against OpenAI for GDPR violations, before ads even launched |
| 56% | Year-over-year jump in AI-related privacy incidents (Stanford AI Index) |
How It Works (And Why “Opting Out” Won’t Undo the Damage)
Here’s what most people miss: the ads you see inside ChatGPT are beside the point. What matters is what happened before you ever saw one.
OpenAI built an entirely new advertising platform from the ground up. They hired senior executives directly from Facebook and Google’s ad divisions. They onboarded launch partners including Adobe, Target, Microsoft, and over 30 brands through Omnicom Media Group with a minimum buy-in of $200,000. That infrastructure wasn’t built on a promise of future data. It was built on the data that already existed: YOURS!
The ad personalization settings were turned on by default. That means unless you proactively went into Settings > Ad Controls and disabled them before February 9, 2026, your full conversation archive was already available to the targeting system. OpenAI’s own documentation confirms the system uses “past chats” and “memories” to match ads. Not future chats. Past ones.
So what does “opting out” actually do?
It stops future data from being fed into the system. That’s it.
It does not recall data that has already been processed. It does not undo the profiling that has already occurred. It does not claw back the behavioral signals, topic patterns, and interest categories that have already been extracted from years of your conversations and used to build your advertising profile.
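To see why a backward-looking profile survives a forward-looking opt-out, here is a deliberately simplified sketch. Nothing below is OpenAI’s actual pipeline; the keyword map and function are invented for illustration only. The point is structural: once interest categories have been tallied from archived chats, deleting the chats does not delete the tally.

```python
# Hypothetical sketch of interest-category extraction from chat history.
# This is NOT OpenAI's real targeting system; it only illustrates why
# opting out later cannot undo a profile already built from past chats.
from collections import Counter

# Illustrative keyword -> ad-category mapping (invented for this example)
CATEGORIES = {
    "mortgage": "personal_finance",
    "loan": "personal_finance",
    "symptom": "health",
    "medication": "health",
    "acquisition": "b2b_services",
    "investor": "b2b_services",
}

def build_ad_profile(past_chats):
    """Scan archived conversations once and tally interest categories."""
    tally = Counter()
    for chat in past_chats:
        for word in chat.lower().split():
            category = CATEGORIES.get(word.strip(".,!?"))
            if category:
                tally[category] += 1
    return dict(tally)

history = [
    "What medication helps this symptom?",
    "Draft an email to our investor about the acquisition.",
]
profile = build_ad_profile(history)
# The profile persists even if `history` is deleted afterwards.
print(profile)  # {'health': 2, 'b2b_services': 2}
```

Deleting `history` after this runs changes nothing: `profile` already exists, which is the whole problem with a forward-only opt-out.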
OpenAI states that if you tap “Delete ads data,” full server-side removal takes up to 30 days. But that language refers to the ads data OpenAI holds internally. It says nothing about the aggregate performance data, behavioral patterns, or audience segment information that has already been shared with advertising partners. Advertisers receive “overall information about how their ads perform such as total views or clicks,” but the targeting categories your data helped create? Those are already in the system.
Think of it this way:
If someone photocopied your private journal and used it to build a detailed profile of your fears, desires, health concerns, and financial situation, then handed that profile to advertisers, giving you the option to “opt out” doesn’t un-photocopy the journal. It doesn’t delete the profile. It just means they’ll stop reading new pages.
That is what happened here. The conversations already took place. The data was already analyzed. The advertising infrastructure was already built on top of it. Opting out now is a forward-looking action applied to a backward-looking problem.
A Dramatic Change of Heart
| When | What Sam Altman Said |
|---|---|
| May 2024 | “I kind of hate ads as an aesthetic choice.” Called ads a “last resort.” |
| July 2025 | “I’m not totally against it… I love Instagram ads.” |
| Feb 2026 | Full ad platform launched. Minimum $200K buy-in. Facebook and Google ad execs hired. |
So What Do You Do Now?
The good news: you have options. Two serious alternatives exist that treat your data the way ChatGPT promised to, and actually follow through.
Option 1: Claude by Anthropic
On February 4, 2026, Anthropic published a statement that drew a hard line. Not for enterprise customers. Not for paid users only. For everyone.
From Anthropic’s own published policy:
“Claude will remain ad-free. Our users won’t see ‘sponsored’ links adjacent to their conversations with Claude; nor will Claude’s responses be influenced by advertisers or include third-party product placements our users did not ask for.”
“Users shouldn’t have to second-guess whether an AI is genuinely helping them or subtly steering the conversation towards something monetizable.”
They went further, stating directly: “We do not sell users’ data to third parties.”
Anthropic also backed up the words with action, running four Super Bowl commercials mocking the idea of ads inside AI conversations. The tagline: “Ads are coming to AI. But not to Claude.”
What individual users should know:
- Deleted chats are never used for training, under any circumstances.
- No ads on any plan. Free, Pro, Max. None.
- No advertiser access to your conversations. Not aggregated, not anonymized, not at all.
- Your data is not sold to third parties. This is stated in their privacy policy, not just their marketing.
- You control whether your data trains future models. Anthropic presents a choice (opt in or opt out) for consumer users. If you opt out, your data is retained for only 30 days and is never used for training.
Option 2: Lumo by Proton
If Claude is the privacy-forward alternative, Lumo is the privacy-absolute alternative. Built by Proton, the Swiss company behind ProtonMail and ProtonVPN, Lumo takes a fundamentally different architectural approach to AI.
Caboodle Media is an authorized reseller of Lumo by Proton, and we stand behind it because Proton’s privacy principles align directly with ours.
What Makes Lumo Different
- Zero-access encryption on saved chats. Proton itself cannot read your conversations.
- No server-side logs of queries or responses. Data erased after processing.
- Never used for training. Your conversations are never shared with anyone, ever.
- Ghost Mode. Conversations vanish permanently when you close them.
- Open-source models from Mistral and Nvidia, fully auditable.
- Swiss/EU jurisdiction. Hosted in Germany and Norway under GDPR, not US law.
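To make “zero-access encryption” concrete: the client encrypts a chat before upload, so the server only ever stores ciphertext it cannot read. The toy cipher below is a stdlib-only sketch of that principle, not Proton’s implementation (real systems use audited, standard cryptography such as AES-GCM; do not use this for actual secrets).

```python
# Toy illustration of the "zero-access" idea: the key never leaves the
# user's device, so whoever stores the blob cannot decrypt it.
# NOT real cryptography -- a stdlib-only sketch of the principle.
import hashlib
import os

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Derive a pseudorandom keystream from key+nonce (toy counter mode)."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    nonce = os.urandom(16)                      # fresh nonce per message
    ks = keystream(key, nonce, len(plaintext))
    return nonce + bytes(p ^ k for p, k in zip(plaintext, ks))

def decrypt(key: bytes, blob: bytes) -> bytes:
    nonce, ciphertext = blob[:16], blob[16:]
    ks = keystream(key, nonce, len(ciphertext))
    return bytes(c ^ k for c, k in zip(ciphertext, ks))

user_key = os.urandom(32)                       # lives only on the device
blob = encrypt(user_key, b"my private chat")
# The server stores `blob`; without `user_key`, it is unreadable.
assert decrypt(user_key, blob) == b"my private chat"
```

The design point: because decryption requires a key that exists only client-side, the provider cannot read, train on, or hand over conversation contents even if compelled to produce its stored data.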
Side-by-Side: Where Your Data Actually Goes
| Feature | ChatGPT | Claude | Lumo |
|---|---|---|---|
| Ads in Product | Yes (Free/Go) | No | No |
| Uses Chats for Ad Targeting | Yes | No | No |
| Past Chat Data for Ads | Yes (if enabled) | N/A | N/A |
| Zero-Knowledge Encryption | No | No | Yes |
| Data Used for Training | Default: Yes | Opt-in choice | Never |
| Open Source Models | No | No | Yes |
| SOC 2 Certified | Yes | Yes (Type I & II) | Yes (Type II) |
| HIPAA Ready | Enterprise only | Yes (with BAA) | Yes |
| Jurisdiction | USA | USA | Switzerland/EU |
| Ad-Free Guarantee | Paid plans only | Yes (all plans) | Yes (all plans) |
What Every Business Owner Should Do This Week
1. Audit what your team has been sharing with ChatGPT. If anyone on your team has discussed financials, HR matters, legal strategy, customer data, or competitive intelligence inside ChatGPT, that data may now inform ad targeting.
2. Check your ChatGPT settings immediately. Go to Settings > Ad Controls and disable “Personalize ads” and “Past chats and memory.” Then tap “Delete ads data.” Know that this doesn’t eliminate contextual ads and full removal takes up to 30 days.
3. Establish an AI usage policy for your organization. Define what types of information can and cannot be shared with AI tools. This is no longer optional hygiene. It’s a data security imperative.
4. Evaluate Claude and Lumo as replacements. Both offer meaningfully stronger privacy protections. Contact Caboodle Media for guidance on implementation, or to set up Lumo through our reseller partnership.
5. Recognize that “free” AI now has a concrete cost. The subscription fee for a private AI tool may be the most cost-effective data security investment your business makes this year.
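For step 3, an AI usage policy works best when part of it is enforced in software, not just memos. Below is a hypothetical pre-send filter a business could place in front of any AI tool to redact obviously sensitive patterns before a prompt leaves the company. The patterns and placeholders are illustrative only; a real deployment needs broader coverage and review against your own policy.

```python
# Hypothetical pre-send redaction filter for prompts bound for AI tools.
# A sketch only -- real deployments need wider pattern coverage,
# logging, and human review of what counts as "sensitive."
import re

REDACTIONS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),        # US SSN
    (re.compile(r"\b\d{13,16}\b"), "[CARD]"),               # card-like numbers
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),    # email addresses
    (re.compile(r"\$\d[\d,]*(\.\d+)?"), "[AMOUNT]"),        # dollar amounts
]

def redact(prompt: str) -> str:
    """Apply every redaction rule before the prompt is sent out."""
    for pattern, placeholder in REDACTIONS:
        prompt = pattern.sub(placeholder, prompt)
    return prompt

print(redact("Pay jane.doe@acme.com $45,000; SSN 123-45-6789."))
# -> Pay [EMAIL] [AMOUNT]; SSN [SSN].
```

A filter like this is cheap insurance: even if an employee pastes sensitive details into a chat tool, the identifying specifics never reach the provider’s servers.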
Ready to Protect Your Business?
Caboodle Media helps businesses navigate the AI landscape with privacy and security as a foundation, not an afterthought. As experienced technology consultants, we can help you evaluate the right AI tools for your team.
Sources: OpenAI Help Center, Anthropic, Proton, Stanford AI Index, Cisco Data Privacy Benchmark Study 2025, Gartner, TechCrunch, CNBC, Euronews


