November 28, 2025

Are Tech Companies Using Your Data to Train AI?

Artificial intelligence has become the centerpiece of the tech industry, and major companies are racing to enhance their AI tools at unprecedented speed. This rapid expansion has created growing concern among users in the United States, many of whom wonder how much of their personal data is being collected, analyzed, or repurposed to train these AI models. Platforms such as Meta, Google, and LinkedIn have recently rolled out new AI features, but the details behind how these tools use user information remain confusing to many.

A wave of viral posts across social media has fueled fears that everything you write, share, or upload is automatically feeding corporate AI engines. Claims that Gmail can now “read all your emails,” or that Meta will “scan every private message starting December 16,” have spread quickly. While these statements exaggerate or misinterpret the facts, they highlight a real issue: tech companies are often vague or overly complex when explaining their data policies.

Krystyna Sikora, a research analyst at the Alliance for Securing Democracy, notes that technology companies are rarely fully transparent about the data they gather and how it is used. This lack of clarity can easily turn into confusion and misinformation. The only reliable way users can understand their rights is by reading the terms and conditions—something most people never do. The United States currently has no comprehensive federal data privacy law, unlike many countries in Europe and Asia, which gives tech companies considerable leeway.

Below is a clear breakdown of what Meta, Google, and LinkedIn are actually doing with user data, how their AI tools function, and whether you have the option to opt out.

Meta: What Data Is Being Used for AI—and What Isn’t

The Viral Claim

A widely shared post on X (formerly Twitter) claimed:
“Starting December 16th, Meta will start reading your DMs. Every conversation, photo, and voice message will be fed into AI.”

As alarming as this sounds, it is not accurate.

What Meta’s New Policy Actually Does

Meta introduced a policy update effective December 16 that focuses on personalizing content and ads based on how users interact with Meta AI. For example:

  • If you ask Meta’s chatbot for hiking tips, you may later see ads or group suggestions about hiking.

  • If your AI conversations mention specific hobbies or interests, Meta may use that to tailor your feed.

However, Meta specifically states:

  • Private messages in Facebook Messenger, Instagram DMs, and WhatsApp are not used for AI training.

  • The company does use public-facing content, such as:

    • Public posts

    • Public reels

    • Public comments

    • Public photos

If any of your content is set to “public,” it may be included in training data.

Data About People Who Don’t Use Meta

A notable detail in Meta’s policy is that its AI may still learn from posts about people who do not have Meta accounts. For example, if someone uploads a public photo and tags a friend who is not on any Meta platform, that image and caption may still be used.

Can You Opt Out?

In the U.S., no—there is no full opt-out option for Meta AI.

  • You cannot disable Meta AI in Facebook, Instagram, or Threads.

  • WhatsApp allows you to turn off AI interactions, but only per chat.

A misleading post circulating online suggested filling out a form to opt out, but the form is only for reporting AI responses that contain sensitive or inaccurate personal data.

Expert Insight

David Evan Harris, an AI ethics instructor at UC Berkeley, explains that because the U.S. has no regulated privacy framework, users do not have standardized rights to refuse AI data usage—unlike in places such as the United Kingdom, South Korea, or Switzerland.

Even deleting your account does not prevent Meta from using past public data that was already collected.

Google: Can Gemini Really Read Your Emails?

The Viral Claim

A popular Instagram post warned:
“Google just gave its AI access to read every email in Gmail — even the attachments.”

This claim oversimplifies and distorts what Google actually changed.

What Google Announced

On November 5, Google introduced Gemini Deep Research, an advanced AI tool that can interact with Gmail, Google Drive, and Google Chat—but only with user permission.

You must manually enable access before Gemini can pull data from:

  • Gmail (emails + attachments)

  • Drive (documents, PDFs, photos)

  • Google Chat messages

Users can also choose which apps Gemini is allowed to use.

Other Ways Google Collects Data

Google can gather additional information through:

  • Gemini searches and prompts

  • Uploaded images or videos

  • Interactions with apps like YouTube or Spotify (if granted permission)

  • Call logs and message logs (if granted permission)

Google states that children under 13 are excluded from AI training datasets.

Gmail Smart Features

Smart features—like automatic event creation, autocomplete, or smart replies—use email content to improve predictions. These settings are turned on by default for users in the U.S.

Turning off smart features prevents AI from drawing on Gmail content, but it does not disable the Gemini app itself.

A Lawsuit Adds More Confusion

A recent California lawsuit claims Google enabled Gemini to access private content by default after an October policy update. According to the complaint, users must now turn off access manually rather than opt in.

The lawsuit argues this change violates California’s Invasion of Privacy Act, which restricts unauthorized recording and wiretapping.

Can You Opt Out?

Yes, but partially:

  • You can turn off Gmail smart features.

  • You can use Gemini without signing in, preventing it from saving chats.

  • You can choose not to authorize Gemini Deep Research to access any private data.

However, completely avoiding Google’s data collection is difficult unless you limit your use of Google apps altogether.

LinkedIn: AI Training Using Public Profiles

The Viral Claim

“Starting November 3, LinkedIn will use all your data to train AI.”

This claim circulated widely, but not all details are accurate.

What LinkedIn Is Doing

LinkedIn confirmed that:

  • It will use some data from U.S. members to train AI models.

  • Only public profiles and public posts are included.

  • Private messages are not used for AI training.

The data helps improve tools such as content generation, career insights, and automated recommendations.

Microsoft’s Data Access

Since Microsoft owns LinkedIn, it is also receiving some data—such as profile information and advertising interactions—to improve its ad targeting system.

Can You Opt Out?

Yes. LinkedIn is the only platform among the three that offers a clear opt-out option.

Users can:

  1. Go to Settings → Data Privacy → Data for Generative AI Improvement
    Disable “Use my data for training content creation AI models.”

  2. Go to Advertising Data
    Turn off targeted ads and disable data sharing with partners.

This opt-out gives users greater control over both AI training and personalized advertising.

Final Thoughts: What This Means for Your Privacy

The rapid rise of AI is reshaping how tech companies handle user data. While viral posts tend to exaggerate or distort the facts, the underlying concern is real: AI systems often rely on enormous datasets, and the boundaries around what is collected are not always clear.

Here are the key takeaways:

  • Meta uses public content but not private messages, and there is no opt-out.

  • Google’s AI tools require user permission, but several features are enabled by default.

  • LinkedIn allows users to fully opt out of AI training and targeted advertising.

  • The U.S. lacks comprehensive privacy laws, leaving users with fewer rights than many other countries.

To protect your data, regularly review your privacy settings, limit what you share publicly, and stay informed about policy updates—because AI integration is accelerating, and companies are updating these rules frequently.