Lifestyle | MSN Slideshow

A cybersecurity expert reveals 10 things you should never, ever tell AI

This post may contain affiliate links. Please see our disclosure policy for details.

Hey, friend, I know you love using ChatGPT or Gemini for brainstorming, but that helpful AI assistant is basically a digital vacuum cleaner sucking up everything you share.

We all rely on these excellent tools now, but we seriously need to talk about security. That casual chat box isn’t your therapist or your secure file server. The vast majority of people are already worried. According to the Pew Research Center, a massive 70% of Americans already distrust how companies handle AI data, and frankly, you should too. This massive trust deficit signals a justified anxiety about digital privacy.

Here’s the core warning: AI chatbots are not private vaults; anything you type can be collected, stored, and used to train future models, potentially exposing you to ID theft, blackmail, or corporate espionage. Delete your session data all you want, but recognize that permanent logs often remain on the developers’ servers.

Here are 10 things you should never, ever tell AI. 

Your personal identifiers, like social security numbers or passport details

A cybersecurity expert reveals 10 things you should never, ever tell AI
Image Credit: johnkwan/123rf

Don’t ever enter your Social Security Number, driver’s license number, or passport number. Seriously, this is a fast track to identity theft.

Even if you delete the chat, digital traces remain on the developer’s servers, exposing sensitive documentation to potential hacking. This is a considerable risk because AI models are specifically designed to correlate and synthesize information.

You might enter fragments of PII across various chats over the course of months. The AI can piece those fragments together into a complete, usable fraudulent profile, like creating forged documents or responses to security questions.

Over 81% of users already worry that companies will use the information they collect in ways they aren’t comfortable with. When it comes to cybersecurity, human behavior is the risk amplifier.

Proprietary company data, confidential client lists, or financial projections

A cybersecurity expert reveals 10 things you should never, ever tell AI
Image Credit: olegdudko/123rf

Never paste internal documents, financial forecasts, or confidential client lists into a public large language model (LLM). That’s a corporate spy leak in real time.

This dangerous habit is often called ‘Shadow AI.’ A shocking 93% of employees admit to using unauthorized AI tools for company work. This behavior is driving catastrophic corporate data loss. Generative AI is now the leading channel for unauthorized data movement, responsible for 32% of all corporate-to-personal data exfiltration

The problem goes deeper than just a leak. When proprietary business data is used as conversational input, the LLM developer often uses that input to “train” the next version of the model. This means your unique sales pitch or confidential client list could be incorporated into the AI’s general intelligence, making it accessible to competitors using the same service.

When organizations suffer AI-related security incidents, 60% of those result in compromised data. As one expert warns, AI adoption is significantly outpacing security and governance.

Direct banking information, passwords, or account login credentials

A cybersecurity expert reveals 10 things you should never, ever tell AI
Image Credit: ximagination/123rf

Treat banking details, credit card numbers, or passwords like radioactive material; keep them strictly out of AI prompts. This data makes you vulnerable to a specific technical attack called prompt injection.

Prompt injection occurs when a hacker crafts malicious inputs that trick the AI into ignoring its safety rules and leaking sensitive data. If your AI assistant has access to external systems via APIs—like your email or file systems—a malicious prompt can trick it into forwarding private documents.

Providing context about your accounts makes it far easier for an attacker to craft a hyper-credible malicious prompt. The LLM is more likely to accept a command that uses details it thinks are familiar and legitimate. These high-risk security incidents lead to operational disruption in 31% of affected organizations.

Sensitive medical records or detailed health/therapy notes

A cybersecurity expert reveals 10 things you should never, ever tell AI
Image Credit: pandpstock001/123rf

Do not share details of prescriptions, specific medical diagnoses, or mental health therapy sessions with a general-purpose AI. Your health data is extremely valuable to criminals.

Health data breaches are the costliest of any industry, averaging $7.42 million per incident. According to the record, the cost averages $398. Once leaked, your private medical history can be used to discriminate in employment or insurance coverage. If LLMs inadvertently ingest and are trained on these details, that sensitive data can subtly introduce systemic biases into the model’s understanding of demographics.

If that model is later used for applications like loan approvals or candidate screening, it could lead to algorithmic discrimination that is nearly impossible to track. In 2024 alone, healthcare data breaches have already affected over 276 million records. That’s huge.

Source code, proprietary algorithms, or unique intellectual property

A cybersecurity expert reveals 10 things you should never, ever tell AI
Image Credit: wrightstudio/123rf

Never paste large blocks of proprietary source code or confidential algorithms into a public LLM for debugging or optimization. This creates vulnerability through a risk called “model inversion.”

Model inversion is an attack in which hackers query the model and use its output to try to reverse-engineer, or “infer,” the sensitive data used to train it. If your unique code was part of the input, the attacker might be able to reconstruct it or steal the underlying model architecture itself.

This constitutes the theft of a company’s core assets. This attack can allow a threat actor to create a functionally identical copy of your model, leading to irreparable competitive harm.

A cybersecurity expert reveals 10 things you should never, ever tell AI
Image Credit: garagestock/123rf

Keep all details about pending lawsuits, regulatory audits, or confidential contract negotiations strictly offline. Sharing this with an LLM effectively waives any legal privilege you had over the data.

LLM developers often retain user inputs for long periods. This creates a permanent, searchable record that is ripe for exposure in the event of a data breach or during legal discovery. Entering legal data into a global system exposes it to compliance risks across numerous jurisdictions. Policymakers are scrambling to put guardrails in place, but the pace of innovation is making it hard to keep up.

This fragmentation means that a legal query concerning state law, processed by an LLM based overseas, creates immediate liability risk.

Specific details of your home address, current real-time location, or travel itinerary

A cybersecurity expert reveals 10 things you should never, ever tell AI
Image Credit: somemeans/123rf

Never share your real-time location, home address, or details about when you’ll be away from home. This is a physical security risk, not just a digital one.

AI excels at generating highly persuasive, natural language. If you give it your location or travel plans, you provide an attacker the context needed to craft a highly successful social engineering attack.

AI lowers the entry barrier for threat actors to carry out sophisticated phishing and vishing scams. They can easily generate a multitude of different, highly personalized templates within seconds.

Knowledge of your routine allows AI to generate contextual lures—like confirming your flight delay or a recent purchase—that drastically increase the victim’s likelihood of complying with the malicious prompt.

Private, sensitive photos or highly personal, embarrassing documents

A cybersecurity expert reveals 10 things you should never, ever tell AI
Image Credit: iharhalavach/123rf

Don’t upload sensitive images, highly personal diaries, or documents meant only for your eyes to any chatbot. Once uploaded, this content is exposed to hacking or repurposing.

The potential for blackmail or deepfake creation, causing long-term emotional distress, is very real. Remember that many leading U.S. companies now feed user inputs back into their models by default to improve capabilities.

You must affirmatively opt out, and many people forget or don’t know how. The LLM’s business model is fundamentally incentivized to harvest your inputs for training.

AI developers’ privacy policies are typically written in “convoluted legal language” that is difficult for consumers to understand. This often hides long data retention periods and the true extent of data use.

Any internal keys, tokens, or configuration secrets used for system access

A cybersecurity expert reveals 10 things you should never, ever tell AI
Image Credit: dfr26/123rf

Never, ever paste API keys, authentication tokens, or cloud configuration secrets into an LLM. This is simply handing over the keys to your entire system.

This simple act bypasses every central cybersecurity principle, including the Principle of Least Privilege. This system failure is frighteningly common.

Of organizations that reported AI-related breaches, a shocking 97% reported not having basic AI access controls in place. This means your sensitive input is sitting in an unsecured environment.

These incidents are catastrophic. 60% of them led directly to compromised data, and 31% caused operational disruption. By exposing tokens to a public LLM, you extend the perimeter of your corporate network to a third-party server managed by an external vendor.

Explicit malicious instructions or planning for illegal activities

A cybersecurity expert reveals 10 things you should never, ever tell AI
Image Credit: aquir/123rf

Don’t use AI to plan or detail illicit activities, whether for ‘fun‘ or for real. Chatbot providers track and log malicious inputs.

If you detail a crime or plan fraudulent activity, those logs can and will be shared with law enforcement. Prompt injection techniques can force the AI to generate harmful or unauthorized content.

Even if you don’t intend to use the plan, you are actively practicing and refining malicious techniques. Researchers have demonstrated AI technology’s ability to act unethically when manipulated, often bypassing implied restrictions.

A typed malicious prompt creates a persistent, time-stamped digital record of intent. Given the long data retention periods, the user creates a long-term legal liability for themselves that can be easily resurrected in a courtroom.

Key Takeaway

A cybersecurity expert reveals 10 things you should never, ever tell AI
Image Credit: Lendig/123rf

The reality is that most companies are flying blind on AI governance, and traditional security isn’t working against these new threats. You need a “zero-trust” mindset: assume anything you type into a public chatbot will be collected, recorded, and potentially used against you.

Guard your four key assets—your identity, your finances, your health records, and your career IP. This simple caution saves you from being part of the upcoming $10 million data breach statistic.

Disclaimer This list is solely the author’s opinion based on research and publicly available information. It is not intended to be professional advice.

Disclosure: This article was developed with the assistance of AI and was subsequently reviewed, revised, and approved by our editorial team.

How Total Beginners Are Building Wealth Fast in 2025—No Experience Needed

Image Credit: dexteris via 123RF

How Total Beginners Are Building Wealth Fast in 2025

I used to think investing was something you did after you were already rich. Like, you needed $10,000 in a suit pocket and a guy named Chad at some fancy firm who knew how to “diversify your portfolio.” Meanwhile, I was just trying to figure out how to stretch $43 to payday.

But a lot has changed. And fast. In 2025, building wealth doesn’t require a finance degree—or even a lot of money. The tools are simpler. The entry points are lower. And believe it or not, total beginners are stacking wins just by starting small and staying consistent.

Click here, and let’s break down how.