    AI Security

    Getting Shadow AI under control


    Shadow AI is now one of the biggest operational risks facing organisations. Employees are using public AI tools at pace, often with sensitive data, and often without understanding the implications. The result is a widening governance gap between what leaders believe is happening and what is actually happening in day-to-day work.

    Ahead of our Microsoft event on 17 March, this article outlines the real risk of Shadow AI and the steps organisations can take using the Microsoft capabilities they already own.


    The real risk of Shadow AI

    Recent incidents show how easily well-intentioned employees can create serious exposure.

    Samsung

Developers pasted confidential source code and internal meeting notes directly into ChatGPT, creating an irreversible exposure of proprietary information: once submitted, the content was retained by the service and outside the company's control.

    Source: TechRadar

    Grok (xAI)

    More than 370,000 private user conversations were inadvertently made public due to a design flaw in the "share" feature, resulting in chat transcripts being indexed by search engines and exposing personal data, attachments and sensitive content.

    Source: BBC News

    Hill Dickinson

    The international law firm detected over 32,000 ChatGPT interactions in one week and introduced restrictions after discovering significant usage that was not compliant with its AI policy, raising concerns about exposure of client information.

    Source: BBC News

    Research consistently shows that more than 70 percent of employees have used a public AI tool without approval at least once. Most organisations cannot see this happening until after the fact.


    Give employees a safe tool

    Shadow AI is not caused by negligence. It is caused by the absence of a secure, sanctioned alternative.

    Microsoft Copilot Chat provides a protected environment where prompts and outputs stay inside the organisation. Benefits include:

• Enterprise data protection: prompts and responses are not used to train the underlying models
    • Clear separation between organisational content and public models
    • Easy access on desktop and mobile
    • A familiar interface that encourages safe adoption

    If employees have a secure option, they will use it. If they do not, they will seek out alternatives.


    Get visibility of Shadow AI usage

    Visibility is essential. You cannot govern what you cannot see.

    Data Security Posture Management for AI (DSPM for AI) helps organisations:

    • Identify sensitive data that could be exposed to AI tools
    • Understand data pathways that could lead to leakage
    • Surface risky usage patterns
    • Prioritise remediation based on real data flows

    Combined with Defender for Cloud Apps discovery, organisations can detect which AI tools are being used, how often, and at what level of risk. This creates a data-driven foundation for governance.


    Define an acceptable AI use policy

    A policy only works if it is practical and enforceable. A strong acceptable use policy should set out:

    • What data types can and cannot be entered into AI tools
    • Approved, restricted and prohibited AI services
    • Safeguards required for sensitive or regulated information
    • Responsibilities for legal, security, data owners and team leaders
    • When AI output must be validated or escalated

    A policy is only meaningful when supported by technical controls.


    Enforcing acceptable AI use with Microsoft capabilities

    Data Loss Prevention

Microsoft Purview DLP identifies sensitive data and prevents it from being entered into public AI tools across devices, browsers and apps. Policies can target classifications such as client confidential, personal data or regulated information.
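As an illustrative sketch only (not a production policy), a basic DLP policy and rule can be created with Security & Compliance PowerShell. The policy and rule names here are hypothetical, and a real deployment would scope locations and sensitive information types to your own environment:

```powershell
# Connect to Security & Compliance PowerShell (ExchangeOnlineManagement module)
Connect-IPPSSession

# Hypothetical policy covering managed endpoints
New-DlpCompliancePolicy -Name "Block-Sensitive-To-Public-AI" `
    -EndpointDlpLocation All `
    -Mode Enable

# Hypothetical rule: block content matching a built-in sensitive information type
New-DlpComplianceRule -Name "Block-CreditCard-Uploads" `
    -Policy "Block-Sensitive-To-Public-AI" `
    -ContentContainsSensitiveInformation @{Name="Credit Card Number"} `
    -BlockAccess $true
```

In practice you would start the policy in test mode, tune it against real traffic, and only then move to enforcement.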

    Sensitivity Labels

    Purview Sensitivity Labels attach controls directly to the data. Labels such as Confidential or Highly Confidential can block uploads to unsanctioned AI tools and ensure restrictions travel with the file across environments.
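A minimal sketch of creating such a label with Security & Compliance PowerShell, assuming a hypothetical "Highly Confidential" label; enabling encryption is what makes the protection travel with the file:

```powershell
# Connect to Security & Compliance PowerShell (ExchangeOnlineManagement module)
Connect-IPPSSession

# Hypothetical label with template-based encryption attached to the content itself
New-Label -Name "Highly-Confidential" `
    -DisplayName "Highly Confidential" `
    -Tooltip "Restricted to named internal users only" `
    -EncryptionEnabled $true `
    -EncryptionProtectionType Template
```

Blocking uploads of labelled files to unsanctioned AI tools is then enforced by pairing the label with Endpoint DLP or Defender for Cloud Apps session policies.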

    Blocking Unsanctioned AI Use

    Defender for Cloud Apps allows organisations to block or limit access to high-risk AI services. This reduces Shadow AI usage and directs users toward approved tools like Copilot Chat and Microsoft 365 Copilot.

    Conditional Access

Microsoft Entra Conditional Access enforces access rules based on device compliance, user risk, network, location or role, so that only trusted users on compliant devices can interact with approved AI services.
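A hedged sketch using Microsoft Graph PowerShell, assuming a placeholder application ID for the AI service you want to protect; the policy starts in report-only mode so you can observe impact before enforcing:

```powershell
# Connect with the Conditional Access permission scope
Connect-MgGraph -Scopes "Policy.ReadWrite.ConditionalAccess"

# Hypothetical policy: require a compliant device for the assumed AI app
$params = @{
    DisplayName = "Require compliant device for AI tools"
    State       = "enabledForReportingButNotEnforced"   # report-only first
    Conditions  = @{
        Users        = @{ IncludeUsers = @("All") }
        Applications = @{ IncludeApplications = @("<ai-app-id>") }
    }
    GrantControls = @{
        Operator        = "OR"
        BuiltInControls = @("compliantDevice")
    }
}

New-MgIdentityConditionalAccessPolicy -BodyParameter $params
```

Replace `<ai-app-id>` with the application ID of the service in your tenant; scoping to "All" users is shown only for brevity.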

    Monitoring and alerting

    Microsoft Defender surfaces behavioural anomalies such as high-volume transfers to AI endpoints or unusual access patterns. This enables early detection of inappropriate AI use.


    The bottom line

Shadow AI is already happening in every organisation. The most effective response is simple: give employees a secure AI tool, gain visibility of real usage, define a clear acceptable use policy, and enforce it with the Microsoft capabilities already in your environment.


    Want to know where to get started?

    Upcoming Event

    Shadow AI is here: How to regain control and enable AI securely

17 March 2026, 12:00 PM GMT
    Eoin Fahy (Tenzing) & Ashley McAndrew-Hack (Microsoft)

    Join us for a practical session on the risks of inaction, what good AI governance looks like, and a simple, actionable approach to get control of Shadow AI.

    Register for the event →

    You can also speak with our team or take our quick AI Security Maturity Assessment to benchmark where your organisation stands and understand your top risks and opportunities.

    Assess Your AI Security Maturity

    Take our 2-minute AI Security Maturity Assessment to benchmark your organisation and receive personalised recommendations for secure and scalable AI adoption.

    Start Assessment