Adversarial AI Digest – June, 2025

A digest of AI security research, insights, reports, upcoming events, and tools & resources. Follow the AI Security community on Twitter and the LinkedIn group for additional updates.

Sponsored by CyberBiz and InnovGuard.com – Technology Risk & Cybersecurity Advisory – Innovate and Invest with Confidence, Lead with Assurance.

Note: This blog post is in cooperation with Tal Eliyahu.

Insights

Mapping: MAESTRO Threats  and  MITRE D3FEND Techniques:  This website presents an interactive exploration of the intersection between two pivotal cybersecurity frameworks: MAESTRO and MITRE D3FEND. It aims to provide cybersecurity professionals with actionable insights into securing Agentic AI systems by mapping identified threats to corresponding defensive techniques, as outlined by Edward Lee. https://edward-playground.github.io/maestro-d3fend-mapping/

10 Key Risks of Shadow AI. practical breakdown of Shadow AI: how unmanaged AI use, including tools, models, and features, creates hidden risks across security, compliance, data, and governance. https://www.linkedin.com/pulse/10-key-risks-shadow-ai-tal-eliyahu-9aopc/

How an AI Agent Vulnerability in LangSmith Could Lead to Stolen API Keys and Hijacked LLM Responses by Sasi Levi and Gal Moyal, Noma Securityhttps://noma.security/blog/how-an-ai-agent-vulnerability-in-langsmith-could-lead-to-stolen-api-keys-and-hijacked-llm-responses/

Explore the latest threats to Model Context Protocol (MCP),  covering issues from prompt injection to agent hijacking   in this digest collected by Adversa AI. https://adversa.ai/blog/mcp-security-digest-june-2025/

GenAI Guardrails: Implementation & Best Practices  Lasso outlines how organizations are designing and deploying guardrails for generative AI,  including challenges, frameworks, and real-world examples. https://www.lasso.security/blog/genai-guardrails

Trend Micro’s “Unveiling AI Agent Vulnerabilities” 4-part series explores key security threats in agentic AI systems,   including Part I: Introduction, Part II: Code Execution, Part III: Data Exfiltration, and Part IV: Database Access.

Malicious AI Models Undermine Software Supply-Chain Security  https://cacm.acm.org/research/malicious-ai-models-undermine-software-supply-chain-security/

Leaking Secrets in the Age of AI  Shay Berkovich and Rami McCarthy scanned public repos and found widespread AI-related secret leaks  driven by notebooks, hardcoded configs, and gaps in today’s secret scanning tools. https://www.wiz.io/blog/leaking-ai-secrets-in-public-code

What is AI Assets Sprawl? Causes, Risks, and Control Strategies  Dor Sarig from Pillar Security explores how unmanaged AI models, prompts, and tools accumulate across enterprises, creating security, compliance, and visibility challenges without proper controls. https://www.pillar.security/blog/what-is-ai-assets-sprawl-causes-risks-and-control-strategies

Is your AI safe? Threat analysis of MCPNil Ashkenazi outlines how the Model Context Protocol (MCP) introduces risks, including tool misuse, prompt-based exfiltration, and unsafe server chaining. The focus is on real-world attack paths and how insecure integrations can be exploited. https://www.cyberark.com/resources/threat-research-blog/is-your-ai-safe-threat-analysis-of-mcp-model-context-protocol

A New Identity Framework for AI Agents  by Omar Santos. Everyone is experiencing the rapid proliferation of autonomous AI agents and Multi-Agent Systems (MAS). These are no longer AI chatbots and assistants. They are increasingly self-directed entities capable of making decisions, performing actions, and interacting with critical systems at unprecedented scales. We need to perform fundamental re-evaluations of how identities are managed and access is controlled for these AI agents. https://community.cisco.com/t5/security-blogs/a-new-identity-framework-for-ai-agents/ba-p/5294337

Hunting Deserialization Vulnerabilities With ClaudeTrustedSec, exploring how to find zero-days in .NET assemblies using Model Context Protocol (MCP). https://trustedsec.com/blog/hunting-deserialization-vulnerabilities-with-claude

Uncovering Nytheon AI Vitaly Simonovich 🇮🇱 from Cato Networks, analyzes Nytheon AI, a Tor-based GenAI platform built from jailbroken open-source models (Llama 3.2, Gemma, Qwen2), offering code generation, multilingual chat, image parsing, and API access  wrapped in a modern SaaS-style interface. https://www.catonetworks.com/blog/cato-ctrl-nytheon-ai-a-new-platform-of-uncensored-llms/

Touchpoints Between AI and Non-Human Identities  Tal Skverer from Astrix Security and Ophir Oren 🇮🇱 from Bayer examine how AI agents rely on non-human identities (NHIs)  API keys, service accounts, and OAuth apps   to operate across platforms. Unlike traditional automation, these agents request dynamic access, mimic users, and often require multiple NHIs per task, creating complex, opaque identity chains. https://astrix.security/learn/blog/astrix-research-presents-touchpoints-between-ai-and-non-human-identities/

Breaking down ‘EchoLeak’, Vulnerability Enabling Data Exfiltration from Microsoft 365 Copilot  Itay Ravia and other members of Aim Security identified a vulnerability in Microsoft 365 Copilot where specially crafted emails can trigger data leakage through prompt injection and markdown/CSP bypasses. The issue stems from how Copilot processes untrusted input, potentially exposing internal content. https://www.aim.security/lp/aim-labs-echoleak-blogpost

Remote Prompt Injection in GitLab Duo Leads to Source Code TheftLegit Security’s Omer Mayraz demonstrates how a single hidden comment could trigger GitLab Duo (Claude-powered) to leak private source code, suggest malicious packages, and exfiltrate zero-days. The exploit chain combines prompt injection, invisible text, markdown-to-HTML rendering abuse, and access to sensitive content,  showcasing the real-world risks of deeply integrated AI agents in developer workflows. https://www.legitsecurity.com/blog/remote-prompt-injection-in-gitlab-duo

ISO/IEC 42005:2025 has been formally published. ISO 42005 provides guidance for organizations conducting assessments of the impact of AI systems on their operations. Establishing a process and performing an AI system impact assessment is integral for organizations looking to pursue ISO 42001 certification. More importantly, the AI impact assessment enables organizations to identify high-risk AI systems and assess any potential impact on individuals, groups, or societies in terms of fairness, safety, and transparency. https://www.ethos-ai.org/p/ai-impact-checklist or https://www.linkedin.com/posts/noureddine-kanzari-a852a6181_iso-42005-the-standard-of-the-future-activity-7334498579710943233-ts6S

Checklist for LLM Compliance in Government.  Deploying AI in government? Compliance isn’t optional. Missteps can lead to fines reaching $38.5M under global regulations like the EU AI Act  or worse, erode public trust. This checklist helps your government agency avoid pitfalls and meet ethical standards when deploying large language models (LLMs). https://www.newline.co/@zaoyang/checklist-for-llm-compliance-in-government–1bf1bfd0

How I used o3 to find CVE-2025–37899, a remote zero-day vulnerability in the Linux kernel’s SMB implementation by Sean Heelan. https://sean.heelan.io/2025/05/22/how-i-used-o3-to-find-cve-2025-37899-a-remote-zeroday-vulnerability-in-the-linux-kernels-smb-implementation/

Reports

Confidential AI Inference Systems Anthropic and Pattern Labs, including exploring confidential inference,  an approach for running AI models on sensitive data without exposing it to infrastructure operators or cloud providers. In a typical AI deployment, three parties are involved: the model owner, the user providing the data, and the cloud provider hosting the service.

Without safeguards, each must trust the others with sensitive assets. Confidential inference eliminates this need by enforcing cryptographic boundaries,   ensuring that neither the data nor the model is accessible outside the secure enclave, not even to the infrastructure host. https://www.linkedin.com/feed/update/urn:li:activity:7341501922383695872

Article content

AI Red-Team Playbook for Security Leaders Hacken, Blockchain Security Auditor’s AI Red-Team Playbook for Security Leaders offers a strategic framework for safeguarding large language model (LLM) systems through lifecycle-based adversarial testing. It identifies emerging risks,  prompt injections, jailbreaks, retrieval-augmented generation (RAG) exploits, and data poisoning  while emphasizing real-time mitigation and multidisciplinary collaboration.

The playbook integrates methodologies like PASTA (Process for Attack Simulation and Threat Analysis) and STRIDE (Spoofing, Tampering, Repudiation, Information Disclosure, Denial of Service, Elevation of Privilege), reinforcing the importance of aligning AI security with enterprise risk governance. https://www.linkedin.com/feed/update/urn:li:activity:7339375352231710721

Article content

AI Security Market Report by Latio Security practitioners have been searching for a resource that clearly describes both the AI security challenges that exist and the solutions the market has provided. As part of this report, Latio surveyed numerous security leaders and found a consistent response: interest in AI Security is high, but it’s still unclear what the actual problems are. This report brings Latio’s characteristic practitioner-focused mindset to the problem, highlighting what challenges are out there and clearly stating the maturity of various vendor offerings to address the challenges. https://www.linkedin.com/feed/update/urn:li:activity:7325564352156119040

Article content

Fundamentals of Secure AI Systems with Personal Data by Enrico Glerean is a training for cybersecurity professionals, developers, and deployers of AI systems on AI security & Personal Data Protection, addressing the current AI needs and skill gaps. https://www.linkedin.com/feed/update/urn:li:activity:7337637796007817216

Article content

Security Risks in Artificial Intelligence for Finance. Set of best practices intended for the Board and C-Level by EFR – European Financial Services Round Table https://www.linkedin.com/feed/update/urn:li:activity:7341851628955758592

Article content

Disrupting malicious uses of AI: June 2025OpenAI continues its work to detect and prevent the misuse of AI, including threats such as social engineering, cyber espionage, scams, and covert influence operations. Over the last three months, AI tools have enabled OpenAI’s teams to uncover and disrupt malicious campaigns. Their efforts align with a broader mission to ensure that AI is used safely and democratically, protecting people from real harm, rather than enabling authoritarian abuse. https://www.linkedin.com/feed/update/urn:li:activity:7336790322426912768

Agentic AI Red Teaming Guide by Cloud Security Alliance. Agentic systems introduce new risks,  autonomous reasoning, tool use, and multi-agent complexity  that traditional red teaming can’t fully address. This guide aims to fill that gap with practical, actionable steps. https://www.linkedin.com/feed/update/urn:li:activity:7333874110684348417

AI Data Security  Best Practices for Securing Data Used to Train & Operate AI Systems by Cybersecurity and Infrastructure Security Agency  This guidance highlights the critical role of data security in ensuring the accuracy, integrity, and trustworthiness of AI outcomes. It outlines key risks that may arise from data security and integrity issues across all phases of the AI lifecycle, from development and testing to deployment and operation. https://www.linkedin.com/feed/update/urn:li:activity:7331359099193872387

Upcoming Events

The AI Summit at Black Hat in USA,  August 5, 2025 | Mandalay Bay, Las Vegas | https://www.blackhat.com/us-25/ai-summit.html

AI Village @ DEF CON CON 33 – August 7, 2025 | Las Vegas Convention Centerin Las Vegas, NV

Artificial Intelligence Risk Summit   August 19–20, 2025 | https://www.airisksummit.com/

The AI Summit at Security Education Conference Toronto (SecTor) 2025,  September 30, 2025 | MTCC, Toronto, Ontario, Canada | https://www.blackhat.com/sector/2025/ai-summit.html

The International Conference on Cybersecurity and AI-Based Systems   1-4 September, 2025 | Bulgaria | https://www.cyber-ai.org/

Research

AIRTBench: Measuring Autonomous AI Red Teaming Capabilities in Language Models  https://arxiv.org/abs/2506.14682

VulBinLLM: LLM-powered Vulnerability Detection for Stripped Binaries  https://arxiv.org/abs/2505.22010

CAI: An Open, Bug Bounty-Ready Cybersecurity AI  https://arxiv.org/abs/2504.06017

Dynamic Risk Assessments for Offensive Cybersecurity Agents  https://arxiv.org/abs/2505.18384

Design Patterns for Securing LLM Agents against Prompt Injections  https://arxiv.org/abs/2506.08837

PANDAGUARD: Systematic Evaluation of LLM Safety against Jailbreaking Attacks https://arxiv.org/abs/2505.13862

Lessons from Defending Gemini Against Indirect Prompt Injections  https://arxiv.org/abs/2505.14534

Enterprise-Grade Security for the Model Context Protocol (MCP): Frameworks and Mitigation Strategies https://arxiv.org/abs/2504.08623

Securing AI Agents with Information-Flow Control.  As AI agents become increasingly autonomous and capable, ensuring their security against vulnerabilities such as prompt injection becomes critical. This paper explores the use of information-flow control (IFC) to provide security guarantees for AI agents. https://arxiv.org/abs/2505.23643

Common Corpus: The Largest Collection of Ethical Data for LLM Pre-Training.   LLMs are pre-trained on large amounts of data from different sources and domains. These data most often contain trillions of tokens with large portions of copyrighted or proprietary content, which hinders the usage of such models under AI legislation. This raises the need for truly open pre-training data that is compliant with the data security regulations. In this paper, we introduce Common Corpus, the largest open dataset for language model pre-training. https://arxiv.org/abs/2506.01732

A Novel Zero-Trust Identity Framework for Agentic AI: Decentralized Authentication and Fine-Grained Access Control  https://arxiv.org/abs/2505.19301

Tools & Resources

GitHub Secure Code Game  A GitHub Security Lab initiative, providing an in-repo learning experience, where learners secure intentionally vulnerable code. https://github.com/PromptLabs/Prompt-Hacking-Resources

Cybersecurity AI (CAI), an open Bug Bounty-ready Artificial Intelligence  by Alias Robotics  https://github.com/aliasrobotics/cai

Awesome LLMSecOps   LLM | Security | Operations in one GitHub repo with good links and pictures. https://github.com/wearetyomsmnv/Awesome-LLMSecOps

Tracecat, by Tracecat, is a modern, open-source automation platform designed for security and IT engineers. Simple YAML-based templates for integrations with a no-code UI for workflows. Built-in lookup tables and case management. Orchestrated using Temporal for scale and reliability. https://github.com/TracecatHQ/tracecat

MCP-Defender is a Desktop app that automatically scans and blocks malicious MCP traffic in AI apps like Cursor, Claude, VS Code, and Windsurf. https://github.com/MCP-Defender/MCP-Defender

Deepteam  by Confident AI (YC W25) – The LLM Red Teaming Framework. https://github.com/confident-ai/deepteam

Awesome Cybersecurity Agentic AI  https://github.com/raphabot/awesome-cybersecurity-agentic-ai

Bonus

Kali GPT generates payloads, guides the use of Metasploit/Hydra, and explains techniques step-by-step  https://chatgpt.com/g/g-uRhIB5ire-kali-gpt

White Rabbit Neo  Automates exploits and offensive scripts. Pure Red Team thinking  https://www.whiterabbitneo.com/

Pentest GPT Scans, exploits, reports. Follows OWASP workflows, automates findings  https://pentestgpt.ai/

Bug Hunter GPT finds and exploits XSS, SQLi, and CSRF. Generates PoCs step-by-step  https://chatgpt.com/g/g-y2KnRe0w4-bug-hunter-gpt

X HackTricks GPT  Trained with hacktricks-xyz. Offensive and defensive techniques in context  https://chatgpt.com/g/g-aaNx59p4q-hacktricksgpt

OSINT GPT  Finds leaks, analyzes social networks, dorks, domains, and more  https://chatgpt.com/g/g-ysjJG1VjM-osint-gpt

SOC GPT Automates analysis of SIEM alerts, ticket generation, and responses https://chatgpt.com/g/g-tZAEuGaru-soc

BlueTeam GPT  Designed for defenders: anomaly detection, hardening, MITRE ATT&CK  https://chatgpt.com/g/g-GP9M4UScu-blue-team-guide

Threat Intel GPT Summarizes threat reports, analyzes IOCs and TTPs in seconds  https://chatgpt.com/g/g-Vy4rIqiCF-threat-intel-bot

YARA GPT Writes and explains YARA rules for advanced detection  https://chatgpt.com/g/g-caq5P2JnM

Let’s Connect

If you’re a founder building something new or an investor evaluating early-stage opportunities, let’s connect.

Read something interesting? Share your thoughts in the comments.

Discover more from CYBERDOM

Subscribe now to keep reading and get access to the full archive.

Continue reading