Sentinel MCP for Threat Hunting and Investigations

You know the drill. A major alert lands, and within minutes, your screen is drowning in browser tabs, multiple consoles, stale queries that return nothing useful, and half-written notes scattered across random text files named something like “incident-final-09.txt.” That kind of chaos doesn’t just slow you down. It completely breaks your investigation rhythm. Every wasted minute disrupts your pivot chain, erases context, and delays your response.

This is exactly where Microsoft Sentinel with the MCP server changes the game for hunters. Instead of bouncing between tools and losing the story, you can let the MCP-backed triage and hunting tools run advanced KQL over your Defender and Sentinel data, pull in the right tables with schema-aware queries, and keep your hunt structured from hypothesis to closure.

Hunts become something you can define, repeat, and defend: a clear hypothesis mapped to MITRE, curated hunting queries, bookmarked results, entity pages, and playbooks that turn findings into analytic rules, incidents, and threat indicators without you copying and pasting artifacts all over the place.

Sentinel MCP gives you a tighter loop. Query, pivot, correlate, and document within a single working context. It does not replace Microsoft Sentinel. It makes investigators faster at using it.

This post breaks down how to use Sentinel MCP for hunting and incident investigations. It also covers where it shines and where it absolutely does not.

Before starting

Sentinel MCP is an open framework designed to structure and automate security operations around Microsoft Sentinel. It provides a layered MDR-style architecture that helps security teams move from raw alerts to structured investigations through defined roles, escalation paths, and automated workflows.

The project introduces a four-tier investigation model with specialized agents, skill-based workflows, and built-in documentation processes to support threat hunting, incident response, and forensic investigations. Its goal is to reduce alert chaos and turn Sentinel data into actionable intelligence that investigators can work with more efficiently.

What Sentinel MCP Actually Changes

At a technical level, Sentinel MCP exposes Sentinel-focused operations as MCP tools, making them directly available within your coding and analysis workflow. This allows you to run investigation actions, query data, and interact with Sentinel without leaving the environment you’re already working in.

Core capabilities usually include:

  • list_sentinel_workspaces for workspace validation
  • search_tables for schema and table discovery
  • query_lake for KQL execution
  • analyze_user_entity for identity-centric risk context
  • analyze_url_entity for URL threat analysis

Why this matters for defenders:

  • Faster query iteration
  • Better context retention during pivots
  • Lower friction when moving from hunt to investigation narrative
  • Easier repeatable workflows with query patterns

Technical Setup That Actually Works

Before hunting, validate three things:

Identity and Access

  • You have RBAC to the target Sentinel workspace
  • You can read the required tables (SigninLogs, AuditLogs, and Defender XDR tables if connected)

Data Readiness

  • Ingestion is current
  • Entra ID and Defender connectors are actually enabled
  • Tables contain fresh data for your time range

Operational Guardrails

  • Start with 1h to 24h query windows
  • Always filter by TimeGenerated first
  • Always limit output using take or summarize

Quick ingestion sanity check:

union withsource=TableName
    SigninLogs,
    AuditLogs,
    SecurityIncident,
    BehaviorAnalytics
| where TimeGenerated > ago(24h)
| summarize LastEvent=max(TimeGenerated), Events=count() by TableName
| order by LastEvent desc

If this comes back stale, stop hunting and fix ingestion first.

Hunting Workflow: From Hypothesis to Evidence

A practical Sentinel MCP hunt loop looks like this.

Step 1: Form a sharp hypothesis

Forming a sharp hunting hypothesis is the analyst’s starting point: before you open a single table or write a line of KQL, you should have a claim you can prove or disprove. A hypothesis must be a precise, falsifiable statement about adversary behavior, tied to a specific technique, a plausible mechanism, and an expected artifact in your data.

The example hypothesis, “Adversary is abusing valid Entra credentials and token replay to maintain access,” may look simple, but it carries a lot of underlying structure. Break it apart and you see a few distinct things: an actor assumption (a threat actor already has valid credentials), a technique (token replay, not password spray, not phishing), and a goal (maintaining access rather than gaining initial entry).

That specificity is what makes it huntable.

Step 2: Discover the exact schema

Schema assumptions are one of the most common ways a hunt falls apart silently. You copy a query that worked in another tenant, run it against yours, and get zero results, not because the threat isn’t there, but because the column was renamed, never populated, or split across a connector version update.

Use search_tables before writing any KQL that depends on specific field names. Confirm that the columns you intend to filter on actually exist and contain data in your target workspace. Pay particular attention to nested fields like LocationDetails, DeviceDetail, and InitiatedBy. These are frequently structured differently depending on the connector version, Entra ID tier, and ingestion pipeline.
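A cheap way to do this from a query_lake call is a getschema check. This is a sketch; the columns listed are the common Entra ID defaults and may be named or populated differently in your tenant:

```kql
// Confirm the columns this hunt depends on exist before writing filter logic
SigninLogs
| getschema
| where ColumnName in ("RiskLevelDuringSignIn", "RiskEventTypes_V2", "LocationDetails", "DeviceDetail")
| project ColumnName, ColumnType
```

If a column exists but you are unsure it is populated, sample it directly, for example SigninLogs | take 5 | project LocationDetails, DeviceDetail.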

Step 3: Run a broad signal query

Start wide and let the data tell you where to narrow. Your first query is not meant to find the attacker; it is meant to find the surface area worth investigating.

Pull high- and medium-risk sign-ins across the full tenant for the past seven days, including anomalous token detections and AiTM signals, and summarize by user, country, and app. Resist the urge to filter aggressively at this stage. Over-filtering early collapses the hypothesis before the evidence has a chance to shape it.

SigninLogs
| where TimeGenerated > ago(7d)
| where RiskLevelDuringSignIn in ("high","medium")
   or RiskEventTypes_V2 has_any ("adversaryInTheMiddle","anomalousToken","tokenIssuerAnomaly")
| project TimeGenerated, UserPrincipalName, IPAddress, AppDisplayName, RiskLevelDuringSignIn, RiskEventTypes_V2, ConditionalAccessStatus, ResultType
| order by TimeGenerated desc
| take 200

Step 4: Pivot by entity

Once your broad query surfaces a suspect, collapse the hunt onto a single entity and drive deep. Pick one entity, whether a principal UPN, an IP, or an app registration, and own it completely before expanding the scope.

Pull every successful authentication, every device profile shift, every ClientAppUsed transition within your timeframe. Pivoting on everything simultaneously diffuses your chain of custody and buries the kill chain signal in noise. Disciplined entity pivoting is what separates structured threat hunting from alert triage.

Step 5: Correlate with control-plane activity

AuditLogs show what the adversary did with that access, and this is where the investigation becomes forensically significant. Join your suspect identity against the control plane and hunt for the post-compromise fingerprint:

  • MFA method registrations that the legitimate user didn’t initiate
  • OAuth consent grants that scope permissions to attacker-controlled apps
  • Privileged role assignments timed suspiciously close to the initial anomalous sign-in
  • Conditional Access policy modifications designed to weaken future authentication requirements

These are the adversary establishing persistence. Timestamp correlation between the auth event and the control-plane change is your evidence thread.

Step 6: Build timeline and scope

Merge SigninLogs, AADUserRiskEvents, and AuditLogs into a single chronological view, ordered by TimeGenerated in ascending order. This is your chain of custody in query form with initial access anchored, risk detections marked, and control-plane changes sequenced. The timeline defines your containment boundary and becomes the evidentiary artifact that survives the investigation.

KQL Samples by Stage (Short and Advanced)

Use these as copy-paste templates in Sentinel MCP query_lake calls.

Stage 1: Validate Ingestion and Data Freshness

Short sample:

union withsource=TableName SigninLogs, AuditLogs, SecurityIncident
| where TimeGenerated > ago(24h)
| summarize LastEvent=max(TimeGenerated), Events=count() by TableName
| order by LastEvent desc
| take 20

Advanced sample:

let lookback = 24h;
let freshnessThresholdMinutes = 90;
union withsource=TableName
    SigninLogs,
    AADUserRiskEvents,
    AuditLogs,
    SecurityIncident,
    BehaviorAnalytics
| where TimeGenerated > ago(lookback)
| summarize
    LastEvent=max(TimeGenerated),
    FirstEvent=min(TimeGenerated),
    Events=count()
    by TableName
| extend FreshnessMinutes = datetime_diff('minute', now(), LastEvent)
| extend Status = iff(FreshnessMinutes > freshnessThresholdMinutes, "STALE", "OK")
| project TableName, Status, FreshnessMinutes, Events, FirstEvent, LastEvent
| order by Status desc, FreshnessMinutes desc
| take 50

Stage 2: Broad Hunt for Identity Risk Signals

Short sample:

SigninLogs
| where TimeGenerated > ago(7d)
| where RiskLevelDuringSignIn in ("high", "medium")
| project TimeGenerated, UserPrincipalName, IPAddress, AppDisplayName, RiskLevelDuringSignIn
| order by TimeGenerated desc
| take 100

Advanced sample:

// MITRE: T1557, T1078.004
let lookback = 7d;
SigninLogs
| where TimeGenerated > ago(lookback)
| where RiskLevelDuringSignIn in ("high", "medium")
   or RiskEventTypes_V2 has_any ("adversaryInTheMiddle", "anomalousToken", "tokenIssuerAnomaly", "unfamiliarFeatures")
| extend Country = tostring(LocationDetails.countryOrRegion)
| summarize
    SignInCount=count(),
    DistinctIPs=dcount(IPAddress),
    Apps=make_set(AppDisplayName, 5),
    RiskEvents=make_set(RiskEventTypes_V2, 10),
    LastSeen=max(TimeGenerated)
    by UserPrincipalName, Country, RiskLevelDuringSignIn
| order by SignInCount desc, LastSeen desc
| take 150

Stage 3: Pivot on a Suspect User

Short sample:

let targetUser = "user@contoso.com";
SigninLogs
| where TimeGenerated > ago(7d)
| where UserPrincipalName =~ targetUser
| project TimeGenerated, IPAddress, AppDisplayName, ResultType, ConditionalAccessStatus
| order by TimeGenerated desc
| take 200

Advanced sample:

let targetUser = "user@contoso.com";
let lookback = 7d;
SigninLogs
| where TimeGenerated > ago(lookback)
| where UserPrincipalName =~ targetUser
| where ResultType == "0"
| extend
    City=tostring(LocationDetails.city),
    Country=tostring(LocationDetails.countryOrRegion),
    Browser=tostring(DeviceDetail.browser),
    OS=tostring(DeviceDetail.operatingSystem)
| summarize
    SuccessCount=count(),
    DistinctIPs=dcount(IPAddress),
    DistinctCountries=dcount(Country),
    Browsers=make_set(Browser, 5),
    LastSeen=max(TimeGenerated)
    by AppDisplayName, ClientAppUsed
| order by SuccessCount desc
| take 100

Stage 4: Correlate Risk with Control Plane Changes

Short sample:

AuditLogs
| where TimeGenerated > ago(7d)
| where OperationName has_any ("User registered security info", "Consent to application", "Add member to role")
| project TimeGenerated, OperationName, InitiatedBy=tostring(InitiatedBy.user.userPrincipalName)
| order by TimeGenerated desc
| take 100

Advanced sample:

let lookback = 7d;
let riskUsers = SigninLogs
| where TimeGenerated > ago(lookback)
| where RiskLevelDuringSignIn in ("high", "medium")
| distinct UserPrincipalName;
AuditLogs
| where TimeGenerated > ago(lookback)
| where OperationName has_any (
    "User registered security info",
    "Consent to application",
    "Add delegated permission grant",
    "Add member to role",
    "Update conditional access policy"
)
| extend
    Actor=tostring(InitiatedBy.user.userPrincipalName),
    TargetUPN=tostring(TargetResources[0].userPrincipalName),
    TargetName=tostring(TargetResources[0].displayName)
| join kind=leftouter riskUsers on $left.TargetUPN == $right.UserPrincipalName
| extend IsRiskUser = iff(isnotempty(UserPrincipalName), true, false)
| project TimeGenerated, OperationName, Actor, TargetUPN, TargetName, IsRiskUser
| order by TimeGenerated desc
| take 200

Stage 5: Detect Privilege Escalation

Short sample:

AuditLogs
| where TimeGenerated > ago(7d)
| where OperationName has "Add member to role"
| project TimeGenerated, OperationName, Role=tostring(TargetResources[0].displayName), Actor=tostring(InitiatedBy.user.userPrincipalName)
| order by TimeGenerated desc
| take 100

Advanced sample:

// MITRE: T1098.003
let lookback = 7d;
let privilegedRoles = dynamic([
    "Global Administrator",
    "Privileged Role Administrator",
    "Security Administrator",
    "Exchange Administrator",
    "Application Administrator"
]);
AuditLogs
| where TimeGenerated > ago(lookback)
| where OperationName has_any ("Add member to role", "Add eligible member to role", "Add scoped member to role")
| extend
    Role=tostring(TargetResources[0].displayName),
    TargetUPN=tostring(TargetResources[0].userPrincipalName),
    Actor=tostring(InitiatedBy.user.userPrincipalName)
| where Role in~ (privilegedRoles)
| summarize Assignments=count(), FirstSeen=min(TimeGenerated), LastSeen=max(TimeGenerated) by Role, TargetUPN, Actor
| order by LastSeen desc
| take 100

Stage 6: Build a Consolidated Investigation Timeline

Short sample:

let targetUser = "user@contoso.com";
union
    (SigninLogs | where TimeGenerated > ago(7d) | where UserPrincipalName =~ targetUser | project TimeGenerated, Source="SignIn", Detail=AppDisplayName),
    (AuditLogs | where TimeGenerated > ago(7d) | where tostring(TargetResources[0].userPrincipalName) =~ targetUser | project TimeGenerated, Source="Audit", Detail=OperationName)
| order by TimeGenerated asc
| take 500

Advanced sample:

let targetUser = "user@contoso.com";
let lookback = 7d;
union
(
    SigninLogs
    | where TimeGenerated > ago(lookback)
    | where UserPrincipalName =~ targetUser
    | project TimeGenerated, Source="SignIn", Severity=RiskLevelDuringSignIn,
      Detail=strcat("App=", AppDisplayName, ", IP=", IPAddress, ", Result=", ResultType)
),
(
    AADUserRiskEvents
    | where TimeGenerated > ago(lookback)
    | where UserPrincipalName =~ targetUser
    | project TimeGenerated, Source="RiskEvent", Severity=RiskLevel,
      Detail=strcat("Type=", RiskEventType, ", State=", RiskState)
),
(
    AuditLogs
    | where TimeGenerated > ago(lookback)
    | where tostring(TargetResources[0].userPrincipalName) =~ targetUser
       or tostring(InitiatedBy.user.userPrincipalName) =~ targetUser
    | project TimeGenerated, Source="Audit", Severity="info",
      Detail=strcat("Operation=", OperationName, ", Actor=", tostring(InitiatedBy.user.userPrincipalName))
)
| order by TimeGenerated asc
| take 1500

Investigation Patterns That Catch Attacks

These are high-value patterns for Entra-focused investigations.

AiTM and token replay

  • SigninLogs risk signals
  • AADUserRiskEvents for detection timing
  • Same user, rapid IP and device profile shifts
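One way to sketch the rapid IP and device profile shift signal in KQL. The thresholds here are illustrative assumptions; tune them to your tenant’s baseline:

```kql
// Users whose successful sign-ins span multiple IPs, countries, or browsers within an hour
SigninLogs
| where TimeGenerated > ago(24h)
| where ResultType == "0"
| extend Country = tostring(LocationDetails.countryOrRegion),
         Browser = tostring(DeviceDetail.browser)
| summarize
    DistinctIPs = dcount(IPAddress),
    DistinctCountries = dcount(Country),
    DistinctBrowsers = dcount(Browser)
    by UserPrincipalName, HourBin = bin(TimeGenerated, 1h)
| where DistinctIPs >= 3 and (DistinctCountries >= 2 or DistinctBrowsers >= 2)
| order by DistinctIPs desc
| take 100
```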

Persistence after identity compromise

  • MFA method changes in AuditLogs
  • OAuth consent grants
  • Conditional Access policy weakening

Privilege escalation

  • Role membership additions to high-impact roles
  • Newly privileged service principals
  • Admin activity from unusual geolocation or ASN

Blast radius

  • Cross-resource access from a compromised identity
  • Correlated incidents in SecurityIncident
  • UEBA anomalies from BehaviorAnalytics
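A rough blast-radius sketch that unions SecurityIncident and BehaviorAnalytics around a suspect identity. Treat the column names as assumptions to verify with search_tables first; in particular, tying SecurityIncident rows to a specific user requires expanding alert entities, so the incident branch below is intentionally unfiltered:

```kql
let targetUser = "user@contoso.com";  // placeholder suspect identity
union
(
    // Open incidents in the window; correlate to the user manually via alert entities
    SecurityIncident
    | where TimeGenerated > ago(7d)
    | where Status != "Closed"
    | project TimeGenerated, Source="Incident", Severity, Detail=Title
),
(
    // UEBA anomalies scored against the suspect user
    BehaviorAnalytics
    | where TimeGenerated > ago(7d)
    | where UserPrincipalName =~ targetUser
    | where InvestigationPriority > 0
    | project TimeGenerated, Source="UEBA", Severity=tostring(InvestigationPriority), Detail=ActivityType
)
| order by TimeGenerated desc
| take 200
```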

Pros of Sentinel MCP for Hunting and IR

  • Faster analyst loop time: You move quickly from hypothesis to a validated query path. Less portal navigation, more analysis.
  • Better investigative continuity: You keep context in one place while pivoting across logs, entities, and hypotheses.
  • Stronger repeatability: Analysts can standardize hunt playbooks as query blocks and reuse them across incidents.
  • Easier shift-left for detections: Great hunt queries can be promoted to analytic rules more quickly when the workflow is tight.
  • Cleaner collaboration: Query artifacts and findings are easier to hand off than screenshot-driven investigations.

Cons and Hard Limits You Should Know

  • It doesn’t cover the full Sentinel operation: Incident lifecycle actions, workbook-heavy analysis, and some management tasks still belong in the portal.
  • Quality is data-dependent: No tool can fix weak connectors, missing tables, or poor retention. Garbage in, fast garbage out.
  • Requires KQL discipline: If analysts skip time filters and limits, cost and latency will spike fast.
  • Schema assumptions break hunts: Tenant differences can break copied queries. Always verify with table discovery first.
  • Not a replacement for the investigation process: MCP accelerates execution. It does not replace triage logic, evidence standards, or containment decisions.

Performance Rules That Save You Money

  • Put TimeGenerated as the first where
  • Prefer has over contains for tokenized text searches
  • Use project early to reduce the payload
  • Use summarize for shape, take for sample
  • Query in rings: 1h, 6h, 24h, then 7d
  • Keep joins intentional and small
  • Baseline first, anomaly second, conclusion last

If your query feels clever but scans everything, it is not clever.
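The rules above compress into one reusable skeleton. This is a sketch of the ring pattern: run it at 1h first, and only widen the lookback when the narrow ring comes back clean:

```kql
let lookback = 1h;  // widen stepwise: 1h -> 6h -> 24h -> 7d
SigninLogs
| where TimeGenerated > ago(lookback)            // time filter first
| where RiskEventTypes_V2 has "anomalousToken"   // has, not contains, for tokenized text
| project TimeGenerated, UserPrincipalName, IPAddress, AppDisplayName   // project early
| summarize Events=count(), LastSeen=max(TimeGenerated) by UserPrincipalName, AppDisplayName  // shape first
| take 50                                        // bounded sample
```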

A Practical SOC Playbook with Sentinel MCP

Use this sequence for new identity-driven incidents:

  1. Validate workspace connectivity
  2. Check ingestion freshness
  3. Hunt high-risk sign-ins and risk events
  4. Pivot to impacted user and IP entities
  5. Correlate with AuditLogs for persistence changes
  6. Check privileged role changes
  7. Build timeline artifact
  8. Document scope and containment actions

This gives you speed without losing forensic defensibility.

Final Verdict

Sentinel MCP is a force multiplier for teams that already know how to hunt and investigate. It is technical leverage, not magic.

If your SOC is Entra-heavy, this is one of the highest-ROI workflow upgrades you can make: less tab thrashing, more signal extraction, faster incident closure, and better detection-engineering feedback loops.

Run it like an investigator, not a tourist. Hypothesis first, data validation second, pivots with intent, evidence always.

More about Sentinel MCP

Sentinel MCP Unlocked