The Hidden Risks of ChatGPT in Entra ID
Integrating powerful AI tools such as ChatGPT into the enterprise environment alongside Microsoft 365 for enhanced productivity is not a new idea. That integration, however, hides significant emerging risks. A handful of security incidents, several threat-hunting engagements, and brief research reveal concrete instances of potentially dangerous behavior when ChatGPT interacts with Entra ID and other Microsoft 365 services.
The marriage of ChatGPT and Microsoft 365 promises a productivity multiplier, featuring natural language summaries of documents, instant context-aware answers surfaced from SharePoint, and personal assistants that can parse vast amounts of OneDrive content in seconds.
That promise, however, rides on an OAuth consent flow that trades an admin or user click for delegated Graph access. In practice, a single consent event can mint the service principal for ChatGPT, issue refresh tokens, and grant read access to all resources that the user can access.
ChatGPT Integration in a Nutshell
The integration between ChatGPT and Microsoft 365 (including Entra ID) marks a new phase in how organizations interact with their data. What began as a conversational AI has evolved into an intelligent assistant capable of directly connecting to OneDrive, SharePoint, Outlook, and Teams through Microsoft Graph. This bridge allows ChatGPT to summarize files, extract insights, and reason over enterprise data through natural language prompts.
Behind that elegant interface, however, lies a complex authentication and consent architecture powered by OAuth 2.0 and Microsoft Entra ID. Each time a user clicks “Connect OneDrive”, a service principal for ChatGPT is created in the tenant, and delegated permissions are issued. Those permissions can be extensive, often granting ChatGPT read parity across every file the user can access in OneDrive and SharePoint.
This integration is both revolutionary and risky. For productivity teams, it serves as a gateway to faster knowledge retrieval and AI-driven collaboration. For security teams, it’s a vivid example of how identity becomes the new perimeter, where a single consent decision can expose vast data surfaces.
How ChatGPT Onboards
When a user connects ChatGPT to SharePoint Online or OneDrive for Business, Microsoft Entra ID automatically provisions a service principal named ChatGPT inside the tenant. The application associated with App ID e0476654-c1d5-430b-ab80-70cbd947616a is OpenAI’s multi-tenant Entra ID app.

If user consent is enabled, a standard user can initiate the entire process without requiring any administrative involvement. The consent screen appears, listing several Microsoft Graph permissions. Once accepted, the ChatGPT service principal is created, and tokens are issued. From that moment, ChatGPT can use Microsoft Graph APIs to read the user’s content directly from OneDrive or SharePoint.
For tenants where user consent is restricted, the same request prompts “Admin approval required.” However, in environments with permissive defaults or lingering legacy settings, ChatGPT can slip through unnoticed. Once the user has consented and onboarding completes, their connection status is displayed in ChatGPT, as shown below.
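A quick way to check which path your tenant takes is to read the default user consent policy. A minimal sketch with the Microsoft Graph PowerShell SDK (assumes Policy.Read.All consent): an empty result means user consent is disabled, while an entry such as ManagePermissionGrantsForSelf.microsoft-user-default-legacy means standard users can consent on their own.
Connect-MgGraph -Scopes "Policy.Read.All"
# Permission grant policies assigned to the default user role
(Get-MgPolicyAuthorizationPolicy).DefaultUserRolePermissions.PermissionGrantPoliciesAssigned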
The User Footprint
The screenshot above shows a critical detail that often goes unnoticed: ChatGPT’s integration with Microsoft 365 can be authorized entirely through user consent. This means that an end user, without admin involvement, can grant the ChatGPT enterprise application (AppId: e0476654-c1d5-430b-ab80-70cbd947616a) powerful delegated access through the Microsoft Graph API.
Key Observations
Delegated Permissions via User Consent
In some cases, each of the listed permissions (offline_access, Files.Read.All, Sites.Read.All, and User.Read) was approved directly by an individual user.
This is significant because these scopes collectively allow ChatGPT to act within the user’s full access boundary across OneDrive and SharePoint Online.
What does each permission enable?
- Files.Read.All – Grants read access to every file the user can access in both OneDrive and SharePoint.
- Sites.Read.All – Extends that access to all site collections.
- offline_access – Issues a refresh token, enabling persistent access even when the user is not online.
- User.Read – Allows ChatGPT to identify the user and read their basic profile information.

Tip: If you must allow user consent to apps, create a custom permission grant policy. For example, auto-block any request that includes Sites.Read.All unless the publisher is verified and pre-approved, as sketched below.
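A minimal sketch of such a policy follows. It assumes the Microsoft Graph PowerShell SDK with Policy.ReadWrite.PermissionGrant consent; the policy ID restrict-broad-read and the blocked scope list are illustrative, and the parameters mirror the Graph permissionGrantConditionSet properties.
Connect-MgGraph -Scopes "Policy.ReadWrite.PermissionGrant"
# Resolve the delegated permission IDs of the broad read scopes on Microsoft Graph
$graphSp = Get-MgServicePrincipal -Filter "appId eq '00000003-0000-0000-c000-000000000000'"
$broadScopes = $graphSp.Oauth2PermissionScopes | Where-Object { $_.Value -in "Sites.Read.All","Files.Read.All" }
# Create the custom policy (the ID is illustrative)
New-MgPolicyPermissionGrantPolicy -Id "restrict-broad-read" -DisplayName "Low-risk user consent only"
# Include: delegated permissions, but only from verified publishers
New-MgPolicyPermissionGrantPolicyInclude -PermissionGrantPolicyId "restrict-broad-read" -PermissionType "delegated" -ClientApplicationsFromVerifiedPublisherOnly:$true
# Exclude: the broad read scopes always escalate to admin review
New-MgPolicyPermissionGrantPolicyExclude -PermissionGrantPolicyId "restrict-broad-read" -PermissionType "delegated" -ResourceApplication $graphSp.AppId -Permissions $broadScopes.Id
# Make it the default user consent policy for the tenant
Update-MgPolicyAuthorizationPolicy -DefaultUserRolePermissions @{ PermissionGrantPoliciesAssigned = @("managePermissionGrantsForSelf.restrict-broad-read") }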
Why This Matters
This configuration illustrates how a single user can inadvertently increase an organization’s data exposure footprint. When ChatGPT is granted these permissions:
- It gains read parity with the user across corporate data stores.
- It retains long-lived access through refresh tokens (offline_access).
- It operates silently via Microsoft Graph, without triggering traditional endpoint or DLP alerts.
From a security operations standpoint, this means identity becomes the exploit surface, not code, not malware, but consent.
For defenders, monitoring OAuth consents and service principal creation events in Entra ID is now as essential as watching for lateral movement in endpoint telemetry.
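The directory audit log already records both signals. A hedged sketch (assumes AuditLog.Read.All consent):
Connect-MgGraph -Scopes "AuditLog.Read.All"
# Recent consent events, with the consenting user and the target app
Get-MgAuditLogDirectoryAudit -All -Filter "activityDisplayName eq 'Consent to application'" | Select-Object ActivityDateTime, @{n='Actor';e={$_.InitiatedBy.User.UserPrincipalName}}, @{n='App';e={$_.TargetResources[0].DisplayName}}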
Together, these scopes effectively give ChatGPT read-only visibility across the user’s entire Microsoft 365 data footprint. There’s no granular “read this file only” scope in Microsoft Graph’s current OneDrive picker API. Therefore, when a user wants to analyze a single document, ChatGPT is granted access to every document.
While these are delegated permissions (the app acts as the user), the combination of Files.Read.All and Sites.Read.All is broad enough to make any security architect nervous.
The Admin Footprint
The image above shows the complete permission set granted when an administrator approves ChatGPT’s access in Microsoft Entra ID. Unlike user consent, admin consent applies tenant-wide, meaning every user who connects to ChatGPT inherits these delegated capabilities. This is the most privileged state in which the connector can operate.
All permissions are delegated, which means ChatGPT acts strictly on behalf of authenticated users. However, the breadth of scopes (Files.Read.All, AllSites.Read) effectively gives the app read-level visibility into every location those users can access.
What ChatGPT Can See and Do
Once consent is granted, ChatGPT can:
- Enumerate and download all files to which the user has access across OneDrive and SharePoint.
- Read the contents of those files (unless they’re encrypted with Purview sensitivity labels).
- Retain a long-lived refresh token, meaning the access persists until explicitly revoked.
- Process file content within OpenAI’s environment for AI-driven responses.
It cannot modify, delete, or upload files. These permissions are read-only, but data exposure doesn’t require write access; reading alone is enough.
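To make “read parity” concrete, here is a minimal sketch of the same delegated access exercised directly; it assumes a test user who has already consented and the Microsoft Graph PowerShell SDK:
Connect-MgGraph -Scopes "Files.Read.All","Sites.Read.All"
# Enumerate the signed-in user's OneDrive root, exactly as a delegated app could
$resp = Invoke-MgGraphRequest -Method GET -Uri "https://graph.microsoft.com/v1.0/me/drive/root/children"
$resp.value | ForEach-Object { "{0}  ({1} bytes)" -f $_.name, $_.size }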

Even though ChatGPT operates within the user’s context, the risk amplifies if that user is an executive, administrator, or anyone with broad data access. The app doesn’t distinguish between “personal notes” and “board meeting documents.”
Tip: Treat offline_access as a persistence mechanism. Review your tenant’s token lifetime policies with Get-MgPolicyTokenLifetimePolicy and revoke standing refresh tokens where needed.
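A containment sketch along those lines; the user principal name is a placeholder, and revoking sign-in sessions invalidates that user’s refresh tokens, including any held by ChatGPT:
# Inspect tenant token lifetime policies
Get-MgPolicyTokenLifetimePolicy | Select-Object DisplayName, Definition
# Invalidate all refresh tokens issued for a consenting user (forces re-authentication)
Revoke-MgUserSignInSession -UserId "user@contoso.com"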
When AI Turns into a Shadow DLP Engine
The image below captures a subtle yet critical moment: ChatGPT actively scanning a connected SharePoint repository for sensitive content, specifically a file containing the string “Password” and plaintext credentials. The user didn’t write a query in Microsoft Purview; they simply prompted ChatGPT with “scan for Password in SharePoint files.”

Because ChatGPT operates via delegated Microsoft Graph permissions, it performs this action with the same level of access the user holds. To the Microsoft 365 ecosystem, this activity appears as legitimate Graph API calls made by the ChatGPT service principal (AppId e0476654-c1d5-430b-ab80-70cbd947616a).
This example serves as proof of why the ChatGPT and Microsoft 365 integration warrants close security attention. It shows ChatGPT querying SharePoint through the Graph API and identifying a file containing plaintext credentials. This behavior resembles a data-loss prevention scan, but it actually occurs outside of native Microsoft 365 security controls.
Tip: Check Graph API calls like GET /sites/{id}/drives/{id}/root/search(q='password'). That’s ChatGPT (or an attacker) turning Graph into Recon-as-a-Service.
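To see what that traffic looks like end to end, the same search can be reproduced in a lab tenant with the identical delegated scopes; the query string is illustrative:
Connect-MgGraph -Scopes "Files.Read.All","Sites.Read.All"
# The same driveItem search ChatGPT issues, replayed directly against Graph
$hits = Invoke-MgGraphRequest -Method GET -Uri "https://graph.microsoft.com/v1.0/me/drive/root/search(q='password')"
$hits.value | ForEach-Object { "{0}  {1}" -f $_.name, $_.webUrl }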
The Attacker Perspective and Risks
When ChatGPT becomes Recon-as-a-Service
Suppose an adversary gains the ability to use ChatGPT with delegated Microsoft Graph access (whether by convincing a user to consent, compromising an account, or abusing an admin-consented service principal). In that case, the integration converts identity into a highly efficient reconnaissance platform.
Instead of logging into endpoints, planting agents, or running noisy scripts, the attacker can ask the AI to enumerate, search, summarize, and prioritize content across OneDrive and SharePoint within the bounds of the compromised identity. The yield is high and the noise is low.
Tip: Pivot your threat-hunting mindset and stop looking only for MFA bypass and token theft. Start looking for “consent as compromise”.
There was a time when attackers fought for shells, persistence, and privilege. Today, in the world of Microsoft 365, we battle for consent. One OAuth consent equals a foothold that never drops a beacon. It doesn’t need persistence because the cloud persists for you.
This post dissects the ChatGPT Microsoft 365 connector (App ID e0476654-c1d5-430b-ab80-70cbd947616a) from an attacker’s point of view, but the goal is intelligence, not instruction. If you’re a blue teamer, this is what the adversary sees when looking through your tenant.
Delegated Access in the AI Era
From an attacker’s perspective, the ChatGPT connector is remarkably simple in its design. It asks for:
- User.Read – for identity context.
- offline_access – for refresh token persistence.
- Files.Read.All – full OneDrive/SharePoint read scope.
- Sites.Read.All – cross-site visibility.
That combination is a treasure map. It doesn’t exploit a CVE. It simply abuses trust, the same kind of trust users give to calendar sync apps and “smart” notetakers.
In offensive terms, ChatGPT is a delegated read agent living in a SaaS control plane. It runs under the user’s identity, inherits their access graph, and moves silently through Microsoft’s Graph API endpoints.
The Data Landscape
The Graph API behind OneDrive and SharePoint is a content buffet. When viewed through a threat actor’s eyes, it’s a structured knowledge base:
- /me/drive/root/children exposes the user’s personal files.
- /sites/root opens team sites.
- /drives/{id}/items/{id}/content gives direct file payloads.
- Search endpoints enable metadata reconnaissance without requiring the download of megabytes of data.
The offensive insight is that once consent is granted, the Graph API becomes the post-exploitation interface. There’s no need for lateral movement in the endpoint sense, and the lateral surface is already pre-connected via identity.
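For illustration, the content endpoint reduces retrieval to a single call; {item-id} is a placeholder:
# Pull raw file bytes through the same delegated token
Invoke-MgGraphRequest -Method GET -Uri "https://graph.microsoft.com/v1.0/me/drive/items/{item-id}/content" -OutputFilePath ".\copy.bin"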
Over-Permissions
The ChatGPT connector for Microsoft 365 is a prime example of coarse Graph permissions leading to unintended exposure. When an admin grants scopes, such as Files.Read.All, Sites.Read.All, and AllSites.Read, the AI achieves delegated read parity across all content the user can access, not just the file they intended to share. Combined with offline_access, this permission set lets the service maintain persistent access through refresh tokens.
Suppose the consenting user happens to be an owner of multiple SharePoint sites or shared libraries. In that case, ChatGPT becomes a tenant-wide document indexer reading sensitive data under the guise of legitimate delegated access.
From an adversarial perspective, these permissions are reconnaissance gold. They allow enumeration, keyword search, and content correlation across the organization through sanctioned Graph API calls that look completely normal in telemetry.
The Security Fallout
- Unintentional Data Exfiltration: ChatGPT’s ability to read every file the user can access means sensitive data can be sent to external servers for analysis. For users on free or personal ChatGPT plans, this data may even be retained or used for model training.
- Persistent Tokens: The offline_access scope issues refresh tokens that stay valid until revoked. If these tokens are compromised either by malware, token theft, or a breach in the ChatGPT service, an attacker could continuously access files through Microsoft Graph.
- User Unawareness: Most users don’t read consent dialogs. The text “read all your files” might sound benign, but it translates to full read access across their digital workspace.
- Regulatory Exposure: Uploading internal or regulated data to an external AI service can violate corporate security and privacy frameworks such as ISO 27001. It’s essentially a Shadow IT backdoor through which data leaves your controlled environment.
- Limited Mitigation Once Granted: Once consent is given, even deleting the app doesn’t retroactively recall data already processed. The only immediate containment is to revoke the service principal or block the app ID altogether.
Defender Checklist and PowerShell Commands
Need a quick signal, not dashboards? These PowerShell snippets target the ChatGPT AppId directly to discover presence and audit consents and sign-ins, followed by some key hardening recommendations for the ChatGPT integration.
Block the ChatGPT App ID: Locate the ChatGPT enterprise app (AppId = e0476654-c1d5-430b-ab80-70cbd947616a), then block sign-in or delete the service principal entirely.
Scope Control: Microsoft Graph permissions such as Sites.Read.All and AllSites.Read give ChatGPT delegated read parity across every SharePoint site the user can access. Audit who has consented to these scopes with Get-MgOAuth2PermissionGrant.
Site Permissions Hygiene: Limit broad “Everyone” or “Company-wide” SharePoint permissions. A single user consent means ChatGPT can traverse those open doors.
Conditional Access Enforcement: Apply a policy specifically for SharePoint and OneDrive access. Require MFA, compliant devices, and monitored sessions, and block risky sign-ins for apps requesting SharePoint Graph scopes, so ChatGPT cannot quietly become a “shadow DLP engine.”
Detection Tip: Hunt for Graph API calls like GET /sites/{id}/drives/{id}/root/search(q='confidential'). That’s an AI (or attacker) harvesting SharePoint content at scale.
Finding the ChatGPT Service Principal
Get-MgServicePrincipal -Filter "appId eq 'e0476654-c1d5-430b-ab80-70cbd947616a'" | fl Id,DisplayName,AppId,PublisherName,AccountEnabled

Check who granted ChatGPT access (delegated OAuth consents)
$sp = Get-MgServicePrincipal -Filter "appId eq 'e0476654-c1d5-430b-ab80-70cbd947616a'"
Get-MgOauth2PermissionGrant -All -Filter "clientId eq '$($sp.Id)'" | Select ConsentType,Scope,PrincipalId,ResourceId
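If containment is the decision, a sketch of the follow-up reuses the $sp object from above:
# Block the app tenant-wide by disabling its service principal
Update-MgServicePrincipal -ServicePrincipalId $sp.Id -AccountEnabled:$false
# Remove every existing delegated grant for the app
Get-MgOauth2PermissionGrant -All -Filter "clientId eq '$($sp.Id)'" | ForEach-Object { Remove-MgOauth2PermissionGrant -OAuth2PermissionGrantId $_.Id }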

This snippet builds a forensic timeline of every ChatGPT sign-in in the last 14 days:
- Which users interacted with the ChatGPT app
- From which IP addresses
- Whether Conditional Access rules allowed or denied the session.
$since = (Get-Date).AddDays(-14).ToUniversalTime().ToString("yyyy-MM-ddTHH:mm:ssZ")
Get-MgAuditLogSignIn -All -Filter "appId eq 'e0476654-c1d5-430b-ab80-70cbd947616a' and createdDateTime ge $since" | Select createdDateTime,userDisplayName,ipAddress,conditionalAccessStatus

Final thoughts and recommendations
The ChatGPT Microsoft 365 connector is not malicious, but it is overly permissive by design.
As defenders, however, we should judge by impact, not intent: a single OAuth consent can turn into a silent data leak across your tenant.
This isn’t just about ChatGPT; the exact same OAuth mechanism powers thousands of integrations across SaaS ecosystems. What makes ChatGPT stand out is its scale: tens of millions of users, and an app that requests one of the most sensitive permission sets in the Microsoft Graph universe.
Microsoft is tightening defaults, and rightly so. However, security maturity requires more than default settings: it needs visibility, policy enforcement, and the willingness to decline when “connect your data to AI” sounds tempting.
What about Microsoft Sentinel hunting and detections? Or how this integration can be abused? Those topics are coming in the next post.