<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom">
  <channel>
    <title>Cloud Security Newsletter</title>
    <description>Bringing you relevant Cloud Security News, Interviews &amp; Expert Knowledge so you don’t have to spend hours looking for it.</description>
    
    <link>https://www.cloudsecuritynewsletter.com/</link>
    <atom:link href="https://rss.beehiiv.com/feeds/hEEMTXlHVR.xml" rel="self"/>
    
    <lastBuildDate>Thu, 5 Mar 2026 00:35:40 +0000</lastBuildDate>
    <pubDate>Wed, 04 Mar 2026 23:21:00 +0000</pubDate>
    <atom:published>2026-03-04T23:21:00Z</atom:published>
    <atom:updated>2026-03-05T00:35:40Z</atom:updated>
    
      <category>Artificial Intelligence</category>
      <category>Cybersecurity</category>
      <category>Technology</category>
    <copyright>Copyright 2026, Cloud Security Newsletter</copyright>
    
    
    
    <docs>https://www.rssboard.org/rss-specification</docs>
    <generator>beehiiv</generator>
    <language>en-us</language>
    <webMaster>support@beehiiv.com (Beehiiv Support)</webMaster>

      <item>
  <title>🚨 The 29-Minute SOC: Why AI-Accelerated Attacks Are Forcing Security Teams to Rethink Response</title>
  <description>CrowdStrike’s 2026 report reveals attackers breaking out in minutes while espionage groups hide command-and-control traffic inside cloud APIs. This week’s Cloud Security Brief examines what this means for enterprise SOC architecture and why AI-assisted investigations are becoming unavoidable.</description>
      <enclosure url="https://media.beehiiv.com/cdn-cgi/image/fit=scale-down,format=auto,onerror=redirect,quality=80/uploads/asset/file/30491288-9a63-4c9c-bc64-ea9ba2d58ad0/Screenshot_2026-03-04_at_10.33.35_PM.png" length="1876222" type="image/png"/>
  <link>https://www.cloudsecuritynewsletter.com/p/29-minute-soc-ai-attacks</link>
  <guid isPermaLink="true">https://www.cloudsecuritynewsletter.com/p/29-minute-soc-ai-attacks</guid>
  <pubDate>Wed, 04 Mar 2026 23:21:00 +0000</pubDate>
  <atom:published>2026-03-04T23:21:00Z</atom:published>
    <dc:creator>Ashish Rajan</dc:creator>
  <content:encoded><![CDATA[
    <div class='beehiiv'><style>
  .bh__table, .bh__table_header, .bh__table_cell { border: 1px solid #C0C0C0; }
  .bh__table_cell { padding: 5px; background-color: #FFFFFF; }
  .bh__table_cell p { color: #2D2D2D; font-family: 'Helvetica',Arial,sans-serif !important; overflow-wrap: break-word; }
  .bh__table_header { padding: 5px; background-color:#F1F1F1; }
  .bh__table_header p { color: #2A2A2A; font-family:'Trebuchet MS','Lucida Grande',Tahoma,sans-serif !important; overflow-wrap: break-word; }
</style><div class='beehiiv__body'><h2 class="heading" style="text-align:left;" id="hello-from-the-cloudverse"><span style="background-color:#28EEDA;"><b>Hello from the Cloud-verse!</b></span></h2><p class="paragraph" style="text-align:left;">This week’s Cloud Security Newsletter topic<b>: </b><b>When AI Plays Both Sides: Rethinking SOC Architecture in the Era of 29-Minute Breakouts</b><b> </b><a class="link" href="https://www.cloudsecuritypodcast.tv/videos/why-ai-infrastructure-is-harder-to-secure-than-cloud?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=the-29-minute-soc-why-ai-accelerated-attacks-are-forcing-security-teams-to-rethink-response" rel="noopener noreferrer nofollow">(continue reading)</a> </p><div class="button" style="text-align:center;"><a target="_blank" rel="noopener nofollow noreferrer" class="button__link" style="" href="https://links.cloudsecuritypodcast.tv/2026-browser-attack-techniques-mar2026?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=the-29-minute-soc-why-ai-accelerated-attacks-are-forcing-security-teams-to-rethink-response"><span class="button__text" style=""> This issue is sponsored by Push Security </span></a></div><hr class="content_break"><div class="image"><a class="image__link" href="https://www.cloudsecuritypodcast.tv/videos?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=the-29-minute-soc-why-ai-accelerated-attacks-are-forcing-security-teams-to-rethink-response" rel="noopener" target="_blank"><img alt="" class="image__image" style="" src="https://media.beehiiv.com/cdn-cgi/image/fit=scale-down,format=auto,onerror=redirect,quality=80/uploads/asset/file/30491288-9a63-4c9c-bc64-ea9ba2d58ad0/Screenshot_2026-03-04_at_10.33.35_PM.png?t=1772663875"/></a><div class="image__source"><span class="image__source_text"><p>This image was generated by AI. It&#39;s still experimental, so it might not be a perfect match!</p></span></div></div><p class="paragraph" style="text-align:left;"><b>Incase, this is your 1st Cloud Security Newsletter! You are in good company! </b><br>You are reading this issue along with your friends and colleagues from companies like <i>Netflix</i>, Citi, <i>JP Morgan, Linkedin, Reddit, Github, Gitlab, CapitalOne, Robinhood, HSBC, British Airways, Airbnb, Block, Booking Inc & more</i> who subscribe to this newsletter, who like you want to learn what’s new with Cloud Security each week from their industry peers like many others who listen to <a class="link" href="https://open.spotify.com/show/6LZgeh4GecRYPc0WrwMB4I?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=the-29-minute-soc-why-ai-accelerated-attacks-are-forcing-security-teams-to-rethink-response" target="_blank" rel="noopener noreferrer nofollow">Cloud Security Podcast</a> & <a class="link" href="https://open.spotify.com/show/3nV4eijfzdHKIvDOaycVII?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=the-29-minute-soc-why-ai-accelerated-attacks-are-forcing-security-teams-to-rethink-response" target="_blank" rel="noopener noreferrer nofollow">AI Security Podcast</a> every week.</p><hr class="content_break"><p class="paragraph" style="text-align:left;">Welcome to this week’s Cloud Security Newsletter</p><p class="paragraph" style="text-align:left;">The security landscape shifted this week not because of a single breach, but because of <b>three signals that point to a structural change in cyber defense.</b></p><p class="paragraph" style="text-align:left;">First, CrowdStrike’s 2026 Global Threat Report revealed that the average adversary breakout time is now 29 minutes, with the fastest intrusion completing lateral movement in 27 seconds.</p><p class="paragraph" style="text-align:left;">Second, IBM’s X-Force Index shows vulnerability exploitation overtaking phishing as the #1 initial access vector, driven by automated vulnerability discovery and AI-assisted attacks.</p><p class="paragraph" style="text-align:left;">Third, Google and Mandiant disrupted a PRC-linked campaign that hid command-and-control traffic inside Google Sheets API calls, bypassing traditional allowlists.</p><p class="paragraph" style="text-align:left;">Together, these developments point to a clear conclusion:</p><p class="paragraph" style="text-align:left;">Defensive response timelines are now measured in minutes, not hours.</p><p class="paragraph" style="text-align:left;">To understand what this means for enterprise SOC architecture, this week’s featured expert Edward Wu, Founder of <a class="link" href="https://links.cloudsecuritypodcast.tv/dropzone-request-demo-mar2026?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=the-29-minute-soc-why-ai-accelerated-attacks-are-forcing-security-teams-to-rethink-response" target="_blank" rel="noopener noreferrer nofollow">DropZone AI</a>, explains why the future SOC model is increasingly becoming:</p><p class="paragraph" style="text-align:left;"><b>“Humans set strategy. AI executes.” </b> <i>[</i><a class="link" href="https://www.cloudsecuritypodcast.tv/videos?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=the-29-minute-soc-why-ai-accelerated-attacks-are-forcing-security-teams-to-rethink-response" target="_blank" rel="noopener noreferrer nofollow">Listen to the episode</a><i>]</i></p><hr class="content_break"><h2 class="heading" style="text-align:left;" id="tldr-for-busy-readers">⚡ TL;DR for Busy Readers</h2><ul><li><p class="paragraph" style="text-align:left;"><b>Attackers now break out in 29 minutes.</b><br>If your MTTD + MTTR exceeds this, lateral movement is statistically likely.</p></li><li><p class="paragraph" style="text-align:left;"><b>PRC-linked attackers used Google Sheets as covert C2.</b><br>➡️ Audit Google API usage and service account behavior now.</p></li><li><p class="paragraph" style="text-align:left;"><b>Microsoft launched native CIEM across AWS, GCP, and Azure.</b><br>➡️ Expect a surge of overprivileged identity findings after enabling.</p></li><li><p class="paragraph" style="text-align:left;"><b>Vulnerability exploitation is now the #1 attack vector (IBM).</b><br>➡️ Prioritize unauthenticated CVE patching and AI-generated code scanning.</p></li><li><p class="paragraph" style="text-align:left;"><b>AI SOC analysts can now perform tier-1 investigations autonomously.</b><br>➡️ Start documenting environment context and response authorization policies.</p></li></ul><hr class="content_break"><h2 class="heading" style="text-align:left;" id="this-weeks-security-news">📰 <b>THIS WEEK&#39;S TOP 4 SECURITY HEADLINES</b></h2><p class="paragraph" style="text-align:left;">Each story includes <b>why it matters</b> and <b>what to do next</b> — no vendor fluff.</p><h3 class="heading" style="text-align:left;" id="1-crowd-strike-2026-global-threat-r"><b>1. CrowdStrike 2026 Global Threat Report: AI Compresses Adversary Breakout Time to 29 Minutes</b></h3><p class="paragraph" style="text-align:left;"><b>WHAT HAPPENED</b></p><p class="paragraph" style="text-align:left;">CrowdStrike&#39;s 2026 Global Threat Report documents a 65% increase in attack speed year-over-year. The average eCrime breakout time is now 29 minutes; the fastest observed breakout: 27 seconds; in one intrusion, exfiltration began within four minutes of initial access. AI is operating as both accelerant and new attack surface: adversaries exploited legitimate GenAI tools at 90+ organizations via malicious prompt injection; exploited vulnerabilities in AI development platforms for persistence and ransomware staging; and published malicious AI servers impersonating trusted services. Russia-nexus FANCY BEAR deployed LLM-enabled malware (LAMEHUG) for automated recon; DPRK-nexus FAMOUS CHOLLIMA scaled insider operations via AI-generated personas. 82% of 2025 detections were malware-free.</p><p class="paragraph" style="text-align:left;"><b>WHY IT MATTERS</b></p><p class="paragraph" style="text-align:left;">The 29-minute figure is not a metric to track it&#39;s a hard architectural constraint. If your MTTD + MTTR combined exceeds 29 minutes, lateral movement is statistically likely before containment begins. In cloud environments, where identity federation and service account trust chains enable rapid cross-account traversal, this window compresses further.</p><p class="paragraph" style="text-align:left;">The GenAI prompt injection finding is the most operationally novel data point in the report. Adversaries are no longer exploiting software they are socially engineering software, tricking AI-enabled applications into misusing their own service credentials. This is insider threat detection applied to non-human identities.</p><p class="paragraph" style="text-align:left;">🎯 <b>Action:</b> </p><ul><li><p class="paragraph" style="text-align:left;">Validate EDR/XDR detection coverage for malware-free intrusion patterns; establish DLP and governance controls for enterprise GenAI tool usage; </p></li><li><p class="paragraph" style="text-align:left;">Pressure-test detection gaps in AI development platform access (MLflow, SageMaker, Vertex AI); </p></li><li><p class="paragraph" style="text-align:left;">Benchmark MTTD + MTTR against the 29-minute breakout threshold.</p></li></ul><p class="paragraph" style="text-align:left;">👉 <a class="link" href="https://www.crowdstrike.com/en-us/global-threat-report/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=the-29-minute-soc-why-ai-accelerated-attacks-are-forcing-security-teams-to-rethink-response" target="_blank" rel="noopener noreferrer nofollow"><b>Read the CrowdStrike Threat Report →</b></a></p><p class="paragraph" style="text-align:left;"><b>👉🏾 </b><a class="link" href="https://links.cloudsecuritypodcast.tv/2026-browser-attack-techniques-mar2026?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=the-29-minute-soc-why-ai-accelerated-attacks-are-forcing-security-teams-to-rethink-response" target="_blank" rel="noopener noreferrer nofollow"><b>Read the 2026 Browser Attack Techniques Report → </b></a></p><p class="paragraph" style="text-align:left;"><b>Sources:</b><a class="link" href="https://www.crowdstrike.com/en-us/press-releases/2026-crowdstrike-global-threat-report/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=the-29-minute-soc-why-ai-accelerated-attacks-are-forcing-security-teams-to-rethink-response" target="_blank" rel="noopener noreferrer nofollow"> CrowdStrike Press Release</a> |<a class="link" href="https://www.crowdstrike.com/en-us/blog/crowdstrike-2026-global-threat-report-findings/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=the-29-minute-soc-why-ai-accelerated-attacks-are-forcing-security-teams-to-rethink-response" target="_blank" rel="noopener noreferrer nofollow"> CrowdStrike Blog</a></p><hr class="content_break"><h3 class="heading" style="text-align:left;" id="2-microsoft-embeds-native-ciem-acro"><b>2. Microsoft Embeds Native CIEM Across Azure, AWS, and GCP in Defender for Cloud</b></h3><p class="paragraph" style="text-align:left;"><b>WHAT HAPPENED</b></p><p class="paragraph" style="text-align:left;">Cloud Infrastructure Entitlement Management (CIEM) is now a native capability in Microsoft Defender for Cloud across all three major cloud platforms. Key changes: inactive identity detection now evaluates unused role assignments (not sign-in activity); the inactivity lookback window extends to 90 days (up from 45); CIEM onboarding no longer requires elevated high-risk permissions; and GCP Cloud Logging ingestion is available in preview. This update follows Microsoft&#39;s announced retirement of Entra Permissions Management Defender CSPM is now the defined migration destination.</p><p class="paragraph" style="text-align:left;"><b>WHY IT MATTERS</b></p><p class="paragraph" style="text-align:left;">This is a meaningful consolidation with real procurement implications. Enterprises running Entra Permissions Management as a standalone CIEM tool now have a clear migration path. More consequentially, the shift from sign-in-based to role-assignment-based inactivity detection will surface a materially larger set of overprivileged identities especially service principals and managed identities in AWS and GCP that authenticate via service accounts rather than interactive login.</p><p class="paragraph" style="text-align:left;">Expect an initial wave of new CIEM findings post-migration. The right move is to build a remediation workflow and establish a baseline before enabling at scale not to be caught flat-footed by hundreds of new recommendations on day one.</p><p class="paragraph" style="text-align:left;">🎯 <b>Action:</b> </p><ul><li><p class="paragraph" style="text-align:left;">Plan CIEM migration from Entra Permissions Management before the retirement deadline; </p></li><li><p class="paragraph" style="text-align:left;">pre-build remediation workflows for the likely surge in overprivileged identity findings; </p></li><li><p class="paragraph" style="text-align:left;">pay particular attention to non-human identities (service principals, managed identities) that don&#39;t generate sign-in events.</p></li></ul><p class="paragraph" style="text-align:left;"><b>Sources:</b><a class="link" href="https://learn.microsoft.com/en-us/azure/defender-for-cloud/release-notes?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=the-29-minute-soc-why-ai-accelerated-attacks-are-forcing-security-teams-to-rethink-response" target="_blank" rel="noopener noreferrer nofollow"> Microsoft Learn Release Notes</a> |<a class="link" href="https://techcommunity.microsoft.com/blog/microsoftdefendercloudblog/the-future-of-ciem-in-microsoft-defender-for-cloud/4398169?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=the-29-minute-soc-why-ai-accelerated-attacks-are-forcing-security-teams-to-rethink-response" target="_blank" rel="noopener noreferrer nofollow"> Microsoft Tech Community</a> |<a class="link" href="https://learn.microsoft.com/en-us/azure/defender-for-cloud/permissions-management?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=the-29-minute-soc-why-ai-accelerated-attacks-are-forcing-security-teams-to-rethink-response" target="_blank" rel="noopener noreferrer nofollow"> Microsoft Learn CIEM Overview</a></p><hr class="content_break"><h3 class="heading" style="text-align:left;" id="3-google-and-mandiant-disrupt-prc-e"><b>3. Google and Mandiant Disrupt PRC Espionage Campaign Abusing Google Sheets as Covert C2</b></h3><p class="paragraph" style="text-align:left;"><b>WHAT HAPPENED</b></p><p class="paragraph" style="text-align:left;">Google Threat Intelligence Group (GTIG), Mandiant, and partners took action to disrupt a global espionage campaign targeting telecommunications and government organizations across four continents. The threat actor UNC2814, a suspected PRC-nexus group tracked since 2017 achieved confirmed intrusions across 53 victims in 42 countries. Central to the campaign was the GRIDTIDE backdoor: a C-based malware that abuses the Google Sheets API as a communication channel to disguise C2 traffic. Google terminated all attacker-controlled Cloud Projects and released indicators of compromise.</p><p class="paragraph" style="text-align:left;"><b>WHY IT MATTERS</b></p><p class="paragraph" style="text-align:left;">This campaign is a direct operational threat to any enterprise running Google Workspace or permitting Google APIs through their perimeter which is nearly every large organization. GRIDTIDE hides malicious traffic within legitimate cloud API requests, requiring no exploit and leaving no conventional network indicator: the backdoor is just another HTTPS call to <a class="link" href="https://googleapis.com?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=the-29-minute-soc-why-ai-accelerated-attacks-are-forcing-security-teams-to-rethink-response" target="_blank" rel="noopener noreferrer nofollow">googleapis.com</a>.</p><p class="paragraph" style="text-align:left;">Post-intrusion, the group moved laterally via SSH, escalated privileges, and deployed SoftEther VPN Bridge for persistent encrypted egress infrastructure metadata suggests active use since July 2018. Google expects UNC2814 to work to re-establish its footprint: this campaign is disrupted, not finished.</p><p class="paragraph" style="text-align:left;">🎯 <b>Action:</b> </p><ul><li><p class="paragraph" style="text-align:left;">Audit Google Service Account creation and API access patterns in GCP/GWS; </p></li><li><p class="paragraph" style="text-align:left;">Deploy Google-provided search queries to scan for GRIDTIDE IOCs; </p></li><li><p class="paragraph" style="text-align:left;">build SIEM/NDR rules to flag anomalous Sheets API call volumes from non-browser user agents; </p></li><li><p class="paragraph" style="text-align:left;">treat SoftEther VPN traffic as a high-fidelity indicator.</p></li></ul><p class="paragraph" style="text-align:left;"><b>Sources:</b><a class="link" href="https://cloud.google.com/blog/topics/threat-intelligence/disrupting-gridtide-global-espionage-campaign?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=the-29-minute-soc-why-ai-accelerated-attacks-are-forcing-security-teams-to-rethink-response" target="_blank" rel="noopener noreferrer nofollow"> Google Cloud Blog / GTIG</a> |<a class="link" href="https://thehackernews.com/2026/02/google-disrupts-unc2814-gridtide.html?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=the-29-minute-soc-why-ai-accelerated-attacks-are-forcing-security-teams-to-rethink-response" target="_blank" rel="noopener noreferrer nofollow"> The Hacker News</a> |<a class="link" href="https://www.cybersecuritydive.com/news/china-cyberattacks-telecommunications-google-sheets/813082/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=the-29-minute-soc-why-ai-accelerated-attacks-are-forcing-security-teams-to-rethink-response" target="_blank" rel="noopener noreferrer nofollow"> Cybersecurity Dive</a> |<a class="link" href="https://www.theregister.com/2026/02/25/google_and_friends_disrupt_unc2814/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=the-29-minute-soc-why-ai-accelerated-attacks-are-forcing-security-teams-to-rethink-response" target="_blank" rel="noopener noreferrer nofollow"> The Register</a> |<a class="link" href="https://www.infosecurity-magazine.com/news/google-prolific-china-hacking/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=the-29-minute-soc-why-ai-accelerated-attacks-are-forcing-security-teams-to-rethink-response" target="_blank" rel="noopener noreferrer nofollow"> Infosecurity Magazine</a></p><hr class="content_break"><h3 class="heading" style="text-align:left;" id="4-ibm-x-force-2026-vulnerability-ex"><b>4. IBM X-Force 2026: Vulnerability Exploitation Overtakes Phishing as #1 Attack Vector</b></h3><p class="paragraph" style="text-align:left;"><b>WHAT HAPPENED</b></p><p class="paragraph" style="text-align:left;">IBM&#39;s 2026 X-Force Threat Intelligence Index reports that vulnerability exploitation became the leading cause of attacks in 2025, accounting for 40% of incidents. A 44% increase in public-facing application attacks was driven by missing authentication controls and AI-enabled vulnerability discovery. Large supply chain and third-party compromises nearly quadrupled since 2020. X-Force tracked nearly 40,000 vulnerabilities in the year 56% of disclosed flaws required no authentication to exploit. AI-assisted coding tools are compounding the exposure, with unvetted generated code feeding insecure pipelines. Infostealer malware drove the exposure of 300,000+ ChatGPT credentials on dark web marketplaces, signaling that AI platforms now carry credential risk on par with core enterprise SaaS.</p><p class="paragraph" style="text-align:left;"><b>WHY IT MATTERS</b></p><p class="paragraph" style="text-align:left;">The displacement of phishing by vulnerability exploitation as the leading initial access vector is a structural signal that should directly influence defensive investment allocation. The 56% of vulnerabilities requiring no authentication is particularly alarming in cloud-native environments, where public-facing APIs, serverless functions, and container ingress points routinely bypass traditional perimeter controls.</p><p class="paragraph" style="text-align:left;">The 4x supply chain increase since 2020 is a direct indictment of CI/CD pipeline security maturity industry-wide. For teams embracing AI-assisted development, the risk compounds: AI-generated code is entering pipelines faster than security reviews can keep pace, and attackers know it.</p><p class="paragraph" style="text-align:left;">🎯 <b>Action:</b> </p><ul><li><p class="paragraph" style="text-align:left;">Prioritize unauthenticated CVE remediation in patch queues; </p></li><li><p class="paragraph" style="text-align:left;">extend SAST/SCA coverage into AI-generated code outputs; </p></li><li><p class="paragraph" style="text-align:left;">audit third-party SaaS integration trust chains; </p></li><li><p class="paragraph" style="text-align:left;">apply credential hygiene controls to enterprise AI platform accounts (ChatGPT Enterprise, Copilot, Claude) as you would to identity providers.</p></li></ul><p class="paragraph" style="text-align:left;"><b>Sources:</b><a class="link" href="https://newsroom.ibm.com/2026-02-25-ibm-2026-x-force-threat-index?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=the-29-minute-soc-why-ai-accelerated-attacks-are-forcing-security-teams-to-rethink-response" target="_blank" rel="noopener noreferrer nofollow"> IBM Newsroom</a> |<a class="link" href="https://www.ibm.com/reports/threat-intelligence?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=the-29-minute-soc-why-ai-accelerated-attacks-are-forcing-security-teams-to-rethink-response" target="_blank" rel="noopener noreferrer nofollow"> IBM X-Force Report</a> |<a class="link" href="https://industrialcyber.co/reports/ibm-x-force-reports-44-surge-in-exploitation-of-public-facing-applications-as-supply-chain-and-identity-attacks-intensify/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=the-29-minute-soc-why-ai-accelerated-attacks-are-forcing-security-teams-to-rethink-response" target="_blank" rel="noopener noreferrer nofollow"> Industrial Cyber</a></p><hr class="content_break"><h2 class="heading" style="text-align:left;" id="cloud-security-topic-of-the-week">🎯 Cloud Security Topic of the Week: </h2><h3 class="heading" style="text-align:left;" id="when-ai-plays-both-sides-rethinking"><b>When AI Plays Both Sides: </b><br>Rethinking SOC Architecture in the Era of 29-Minute Breakouts</h3><p class="paragraph" style="text-align:left;">There is a quiet but consequential arms race underway inside enterprise security operations, and it&#39;s not playing out in the way most security leaders initially anticipated. The fear was that AI would produce dramatically more sophisticated attacks autonomous, multi-stage campaigns executing end-to-end. The reality, as Drop Zone AI&#39;s Edward Wu explains, is more operationally challenging in a different way: AI has fundamentally changed the economics and speed of attack preparation and initial access, even before it automates full campaigns end-to-end.</p><p class="paragraph" style="text-align:left;">This week&#39;s CrowdStrike report puts a precise figure on what that means for defenders: 29 minutes from initial access to lateral movement, with a fastest-ever 27-second observed breakout. The question for every cloud security leader is not whether their SIEM caught the alert, it&#39;s whether their entire detection and response pipeline, from signal to containment action, can complete within that window.</p><p class="paragraph" style="text-align:left;">For cloud environments specifically, the challenge is amplified. Identity federation, service account trust chains, and cross-account IAM relationships mean that a single compromised credential can traverse from one AWS account to an entire organization&#39;s environment far faster than a traditional on-prem lateral movement scenario. The GRIDTIDE campaign disclosed this week is a concrete illustration: no exploit, no conventional indicator, just legitimate API calls that an overwhelmed tier-1 analyst reviewing a queue of 300 alerts would have no reasonable way to flag in time.</p><p class="paragraph" style="text-align:left;">Edward Wu&#39;s framing of the solution is worth sitting with: not &quot;AI replaces humans in the SOC,&quot; but &quot;humans set strategy, AI executes.&quot; The three components of human strategy he identifies: scope of work, scope of authorization, and business context are exactly the kinds of decisions that cannot be automated, and exactly what most security teams are still trying to find time to make amid a flood of alerts. That is the real asymmetry to close.</p><hr class="content_break"><h2 class="heading" style="text-align:left;" id="featured-experts-this-week"><b>Featured Experts This Week </b>🎤</h2><ul><li><p class="paragraph" style="text-align:left;"><a class="link" href="https://www.linkedin.com/in/edwardxwu/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=the-29-minute-soc-why-ai-accelerated-attacks-are-forcing-security-teams-to-rethink-response" target="_blank" rel="noopener noreferrer nofollow"><b>Edward Wu</b></a><b>- </b> Founder & CEO | <a class="link" href="https://links.cloudsecuritypodcast.tv/dropzone-request-demo-mar2026?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=the-29-minute-soc-why-ai-accelerated-attacks-are-forcing-security-teams-to-rethink-response" target="_blank" rel="noopener noreferrer nofollow">Dropzone AI</a></p></li><li><p class="paragraph" style="text-align:left;"><b><a class="link" href="https://www.linkedin.com/in/ashishrajan/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=the-29-minute-soc-why-ai-accelerated-attacks-are-forcing-security-teams-to-rethink-response" target="_blank" rel="noopener noreferrer nofollow">Ashish Rajan</a></b> - CISO | Co-Host <a class="link" href="https://open.spotify.com/show/3nV4eijfzdHKIvDOaycVII?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=the-29-minute-soc-why-ai-accelerated-attacks-are-forcing-security-teams-to-rethink-response" target="_blank" rel="noopener noreferrer nofollow">AI Security Podcast</a> , Host of <a class="link" href="https://www.cloudsecuritypodcast.tv/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=the-29-minute-soc-why-ai-accelerated-attacks-are-forcing-security-teams-to-rethink-response" target="_blank" rel="noopener noreferrer nofollow">Cloud Security Podcast</a></p></li></ul><hr class="content_break"><h2 class="heading" style="text-align:left;" id="definitions-and-core-concepts"><b>Definitions and Core Concepts 📚</b></h2><p class="paragraph" style="text-align:left;">Before diving into our insights, let&#39;s clarify some key terms:</p><ul><li><p class="paragraph" style="text-align:left;"><b>AI SOC Analyst</b> An AI agent category designed to autonomously investigate security alerts at tier-1 analyst quality or above. Tools in this category (such as Drop Zone AI) analyze alert data, correlate with environmental context, and produce investigation outputs without requiring a human analyst to review each alert from scratch.</p></li><li><p class="paragraph" style="text-align:left;"><b>Prompt Injection</b> An attack technique targeting AI-enabled applications whereby malicious input is crafted to override or manipulate the application&#39;s intended behavior. In an enterprise security context, this translates to a service account being &quot;social engineered&quot; tricked into performing actions outside its intended scope, generating behavioral anomalies detectable by SOC tooling.</p><p class="paragraph" style="text-align:left;"></p></li></ul><hr class="content_break"><p class="paragraph" style="text-align:center;">This week&#39;s issue is sponsored by <a class="link" href="https://links.cloudsecuritypodcast.tv/2026-browser-attack-techniques-mar2026?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=the-29-minute-soc-why-ai-accelerated-attacks-are-forcing-security-teams-to-rethink-response" target="_blank" rel="noopener noreferrer nofollow"><b>Push Security</b></a></p><p class="paragraph" style="text-align:center;"><b>Learn how browser-based attacks have evolved</b> — <a class="link" href="https://links.cloudsecuritypodcast.tv/2026-browser-attack-techniques-mar2026?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=the-29-minute-soc-why-ai-accelerated-attacks-are-forcing-security-teams-to-rethink-response" target="_blank" rel="noopener noreferrer nofollow">get the 2026 report</a></p><p class="paragraph" style="text-align:left;">Most breaches today start with an attacker targeting cloud and SaaS apps directly over the internet. In most cases, there’s no malware or exploits. Attackers are abusing legitimate functionality, dumping sensitive data, and holding companies to ransom. This is now the standard playbook. </p><p class="paragraph" style="text-align:left;">The common thread? It&#39;s all happening in the browser. </p><p class="paragraph" style="text-align:left;"><a class="link" href="https://links.cloudsecuritypodcast.tv/2026-browser-attack-techniques-mar2026?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=the-29-minute-soc-why-ai-accelerated-attacks-are-forcing-security-teams-to-rethink-response" target="_blank" rel="noopener noreferrer nofollow">Get the latest report from Push Security</a> to understand how browser-based attacks work, and where they’ve been used in the wild, breaking down AitM attacks, ClickFix, malicious extensions, OAuth consent attacks, and more.<span style="color:rgb(0, 0, 0);font-family:"DM Sans", sans-serif;"> </span></p><hr class="content_break"><h2 class="heading" style="text-align:left;" id="our-insights-from-this-practitioner">💡<b>Our Insights from this Practitioner 🔍</b></h2><h3 class="heading" style="text-align:left;" id="keeping-up-with-the-29-min-attacker"><b>Keeping up with the 29min Attacker Window as SOC</b><b> (</b><a class="link" href="https://www.cloudsecuritypodcast.tv/videos?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=the-29-minute-soc-why-ai-accelerated-attacks-are-forcing-security-teams-to-rethink-response" target="_blank" rel="noopener noreferrer nofollow"><b>Full Episode here</b></a><b>)</b></h3><hr class="content_break"><h3 class="heading" style="text-align:left;" id="the-analytics-gap-is-not-a-headcoun"><b>The Analytics Gap Is Not a Headcount Problem It&#39;s an Architecture Problem</b></h3><p class="paragraph" style="text-align:left;">Edward Wu has spent more than a decade at the intersection of alert generation and alert investigation. Eight years at Actual Hub Networks building NDR detection systems gave him an unusually clear view of a dynamic that most security teams experience as a chronic background stressor: the volume of alerts is structurally outpacing the capacity to process them. What&#39;s changed and why he founded Drop Zone AI is that AI agents have reached the point where they can close that gap operationally, not just theoretically.</p><p class="paragraph" style="text-align:left;"><i>&quot;We believe that humans alone are insufficient to close this asymmetric capacity gap. Silicon and electricity can perform a lot of analysis for pennies on the dollar and can really help plug this ever-expanding analytical gap between the analytics required to sufficiently protect the organization and the limited capacity constrained by headcount, budget, and staffing.&quot;</i> Edward Wu</p><p class="paragraph" style="text-align:left;">This isn&#39;t a vendor pitch, it&#39;s a structural observation that the CrowdStrike and IBM data this week substantiates. Alert volumes are growing 30% year-over-year, attack surfaces are expanding as cloud-native infrastructure proliferates, and the window between initial access and lateral movement has collapsed to 29 minutes. The math has changed. A human-only tier-1 process simply cannot operate at the required speed and scale.</p><hr class="content_break"><h3 class="heading" style="text-align:left;" id="what-ai-can-actually-do-today-and-w"><b>What AI Can Actually Do Today And What It Can&#39;t</b></h3><p class="paragraph" style="text-align:left;">Wu is careful to distinguish between the current reality of AI-assisted attacks and the inevitable future. Today, attackers are using LLMs for the early stages of campaigns, highly personalized spear-phishing at scale, automated reconnaissance, and AI-assisted vulnerability discovery and exploit generation in the AppSec domain. Full end-to-end autonomous attack campaigns of 10 to 15 steps? Not yet. But trending there quickly.</p><p class="paragraph" style="text-align:left;"><i>&quot;We have not seen AI agents end-to-end performing a 10-step or 15-step attack campaign but we have absolutely seen a lot of cases of AI-generated, very personalized spear-phishing emails, and AI utilization in the early reconnaissance phase. And the world is trending toward autonomous end-to-end campaigns.&quot;</i> Edward Wu</p><p class="paragraph" style="text-align:left;">On the defense side, the picture is more mature. Wu reports that Drop Zone&#39;s AI SOC analyst is delivering investigation quality at or above a typical tier-1 human analyst, autonomously and at scale, across 300+ customer environments. The company has processed the equivalent of 160 years of human alert investigations through software alone. Hallucination concerns, once a legitimate objection, have proven to be largely an artifact of poor context management rather than fundamental model limitations.</p><hr class="content_break"><h3 class="heading" style="text-align:left;" id="the-mssp-model-is-transforming-whet"><b>The MSSP Model Is Transforming Whether MSSPs Know It Or Not</b></h3><p class="paragraph" style="text-align:left;">One of the most practically useful threads in Wu&#39;s conversation concerns managed security service providers. The traditional MSSP model allocating fractional analyst time across dozens of clients has a structural flaw that Wu names directly: customizability. An analyst covering 50 clients cannot internalize what constitutes normal behavior in each environment. Clients consistently cite this as their primary complaint.</p><p class="paragraph" style="text-align:left;">What Wu observes at the leading edge of the MSSP market is a shift from 100% human-delivered service models to 80–90% AI-delivered outcomes, with human analysts focused on the final 10%. This is not cost-cutting, it&#39;s the only viable model at the speed and accuracy levels the threat landscape now demands. Simultaneously, some enterprises that previously outsourced tier-1 triage to MSSPs are bringing that function in-house, replacing the MSSP relationship with AI tooling for staff augmentation.</p><hr class="content_break"><h3 class="heading" style="text-align:left;" id="the-human-strategy-ai-execution-mod"><b>The &#39;Human Strategy, AI Execution&#39; Model</b></h3><p class="paragraph" style="text-align:left;">Wu&#39;s clearest articulation of how this architecture works in practice centers on three components of human responsibility that AI cannot substitute for:</p><ul><li><p class="paragraph" style="text-align:left;"><b>Scope of work:</b> Humans must define what the AI investigates, what alert types matter, and what threat hunts are in scope for the organization&#39;s risk profile.</p></li><li><p class="paragraph" style="text-align:left;"><b>Scope of authorization:</b> Humans must determine what actions the AI can take autonomously containing a host, disabling a user account, escalating an alert and under what conditions. This is a governance and liability question, not just a technical one.</p></li><li><p class="paragraph" style="text-align:left;"><b>Business context:</b> No AI system can read minds. The organization&#39;s operational knowledge which service account behaviours are normal, which integrations are expected, which IP ranges belong to trusted partners must be materialized in an accessible format.</p></li></ul><p class="paragraph" style="text-align:left;"><i>&quot;Making your context knowledge accessible to that system whether it&#39;s an AI agent like Drop Zone, or a human coworker is vitally important. We&#39;ve seen cases where using AI to generate a structured onboarding survey, then having practitioners fill it out, can bootstrap an AI agent&#39;s understanding of your environment very quickly.&quot;</i> Edward Wu</p><p class="paragraph" style="text-align:left;">This framing has direct implications for how cloud security teams should approach AI adoption in their SOC. The work of writing down your environmental context: what&#39;s normal, what matters, what the AI is authorized to do is not overhead. It is the core governance activity that makes the entire model functional. It also doubles as institutional knowledge documentation that survives analyst turnover.</p><hr class="content_break"><h3 class="heading" style="text-align:left;" id="prompt-injection-as-a-soc-detection"><b>Prompt Injection as a SOC Detection Problem</b></h3><p class="paragraph" style="text-align:left;">Wu&#39;s perspective on prompt injection offers a useful reframe for security teams trying to operationalize this emerging risk. Prompt injection, he argues, is not a new detection category requiring a new toolset. It is insider threat detection applied to non-human identities.</p><p class="paragraph" style="text-align:left;">When a malicious prompt tricks a GenAI application into misusing its service credential to read 50GB of data from an internal repository, that activity shows up as an anomalous behavioural alert the same kind a behavioural analytics engine would generate for a compromised human account. The investigation question is identical. The difference is that cloud security teams may not have yet baselined their AI application service accounts with the same rigor they apply to privileged human identities.</p><p class="paragraph" style="text-align:left;">This is an under appreciated gap. As enterprises deploy AI assistants, code generation tools, and agent workflows, each operates with a service credential. Until those identities are baselined, monitored, and governed with the same discipline applied to human privileged access, they represent an uninvestigated attack surface.</p><hr class="content_break"><h2 class="heading" style="text-align:left;" id="related-podcast-episodes"><b>RELATED RESOURCES 🎧</b></h2><ul><li><p class="paragraph" style="text-align:left;"><a class="link" href="https://dropzone.ai/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=the-29-minute-soc-why-ai-accelerated-attacks-are-forcing-security-teams-to-rethink-response" target="_blank" rel="noopener noreferrer nofollow">Drop Zone AI</a></p></li><li><p class="paragraph" style="text-align:left;"><a class="link" href="https://www.cisa.gov/artificial-intelligence?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=the-29-minute-soc-why-ai-accelerated-attacks-are-forcing-security-teams-to-rethink-response" target="_blank" rel="noopener noreferrer nofollow">CISA AI Security Guidance for Critical Infrastructure</a></p></li><li><p class="paragraph" style="text-align:left;"><a class="link" href="https://www.nist.gov/system/files/documents/2023/01/26/AI%20RMF%201.0.pdf?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=the-29-minute-soc-why-ai-accelerated-attacks-are-forcing-security-teams-to-rethink-response" target="_blank" rel="noopener noreferrer nofollow">NIST AI Risk Management Framework</a></p></li><li><p class="paragraph" style="text-align:left;"><a class="link" href="https://atlas.mitre.org/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=the-29-minute-soc-why-ai-accelerated-attacks-are-forcing-security-teams-to-rethink-response" target="_blank" rel="noopener noreferrer nofollow">MITRE ATLAS Adversarial Threat Landscape for AI Systems</a></p></li></ul><h3 class="heading" style="text-align:left;" id="cloud-security-podcast"><b>Cloud Security Podcast</b></h3><ul><li><p class="paragraph" style="text-align:left;"><a class="link" href="https://www.cloudsecuritypodcast.tv/videos?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=the-29-minute-soc-why-ai-accelerated-attacks-are-forcing-security-teams-to-rethink-response" target="_blank" rel="noopener noreferrer nofollow"><b>Cloud Security Podcast Episode with Edward Wu</b></a></p></li></ul><hr class="content_break"><h3 class="heading" style="text-align:left;" id="question-for-you-reply-to-this-emai">Question for you? (Reply to this email)</h3><p class="paragraph" style="text-align:left;">🤔<b> </b>If your SOC deployed an AI investigation agent tomorrow, what is the first action you would allow it to take autonomously?</p><p class="paragraph" style="text-align:left;">• Disable user account<br>• Isolate host<br>• Block token/session<br>• None — humans only<br></p><p class="paragraph" style="text-align:left;">Next week, we&#39;ll explore another critical aspect of cloud security. Stay tuned!</p><hr class="content_break"><p class="paragraph" style="text-align:left;">📬 Want weekly expert takes on AI & Cloud Security? [<a class="link" href="https://www.cloudsecuritynewsletter.com/subscribe?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=the-29-minute-soc-why-ai-accelerated-attacks-are-forcing-security-teams-to-rethink-response" target="_blank" rel="noopener noreferrer nofollow">Subscribe here</a>]”</p><hr class="content_break"><p class="paragraph" style="text-align:left;"><span style="color:rgb(238, 40, 60);"><b><a class="link" href="mailto:info@cloudsecuritypodcast.tv" target="_blank" rel="noopener noreferrer nofollow">We would love to hear from you</a></b></span>📢 for a feature or topic request or if you would like to sponsor an edition of Cloud Security Newsletter. </p><p class="paragraph" style="text-align:left;">Thank you for continuing to subscribe and Welcome to the new members in tis newsletter community💙</p><p class="paragraph" style="text-align:start;">Peace!</p><p class="paragraph" style="text-align:start;"><a class="link" href="https://www.linkedin.com/in/shilpi-bhattacharjee/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=the-29-minute-soc-why-ai-accelerated-attacks-are-forcing-security-teams-to-rethink-response" target="_blank" rel="noopener noreferrer nofollow">Shilpi Bhattacharjee</a></p><div class="image"><a class="image__link" href="https://www.cloudsecuritypodcast.tv/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=the-29-minute-soc-why-ai-accelerated-attacks-are-forcing-security-teams-to-rethink-response" rel="noopener" target="_blank"><img alt="" class="image__image" style="" src="https://media.beehiiv.com/cdn-cgi/image/fit=scale-down,format=auto,onerror=redirect,quality=80/uploads/asset/file/dc094a1c-678a-43e0-adc9-1bee6c3499e2/CSP_Logo_Blue_ScreenRes_3000x3000_v2.jpg"/></a></div><div class="button" style="text-align:left;"><a target="_blank" rel="noopener nofollow noreferrer" class="button__link" style="" href="{{rp_referral_hub_url}}"><span class="button__text" style=""> Share the newsletter </span></a></div><p class="paragraph" style="text-align:left;">Was this forwarded to you? You can <a class="link" href="https://www.cloudsecuritynewsletter.com/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=the-29-minute-soc-why-ai-accelerated-attacks-are-forcing-security-teams-to-rethink-response" target="_blank" rel="noopener noreferrer nofollow">Sign up here</a>, to join our growing readership.</p><p class="paragraph" style="text-align:left;">Want to <b>sponsor</b> the next newsletter edition! <a class="link" href="mailto:info@cloudsecuritypodcast.tv" target="_blank" rel="noopener noreferrer nofollow">Lets make it happen </a></p><p class="paragraph" style="text-align:left;">Have you joined our FREE <b>Monthly</b> <a class="link" href="https://www.cloudsecuritybootcamp.com/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=the-29-minute-soc-why-ai-accelerated-attacks-are-forcing-security-teams-to-rethink-response" target="_blank" rel="noopener noreferrer nofollow">Cloud Security Bootcamp</a> yet?</p><p class="paragraph" style="text-align:left;">checkout our <b>sister podcast</b> <a class="link" href="https://www.youtube.com/@AISecurityPodcast?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=the-29-minute-soc-why-ai-accelerated-attacks-are-forcing-security-teams-to-rethink-response" target="_blank" rel="noopener noreferrer nofollow">AI Security Podcast</a></p></div><div class='beehiiv__footer'><br class='beehiiv__footer__break'><hr class='beehiiv__footer__line'><a target="_blank" class="beehiiv__footer_link" style="text-align: center;" href="https://www.beehiiv.com/?utm_campaign=82e50951-b028-4e3a-a147-c8d62105cdc9&utm_medium=post_rss&utm_source=cloud_security_newsletter">Powered by beehiiv</a></div></div>
  ]]></content:encoded>
</item>

      <item>
  <title>🚨 AI Agents Are Now the Attack Surface &amp; Building an AI Security Blueprint Before It&#39;s Too Late</title>
  <description>This week&#39;s brief covers the Cline npm supply chain attack weaponising prompt injection against CI/CD pipelines, BeyondTrust CVE-2026-1731 now confirmed in active ransomware campaigns across 11,000+ exposed instances. Alongside the Cisco State of AI Security 2026 report and Microsoft&#39;s new Security Dashboard for AI, TrendAI&#39;s Shannon Murphy outlines a pragmatic AI security blueprint centred on data governance, agent identity, and cross-functional ownership for organisations at every stage of AI adoption. Key themes: agentic AI security, AI asset inventory, DSPM, supply chain risk, and enterprise AI governance frameworks.</description>
      <enclosure url="https://media.beehiiv.com/cdn-cgi/image/fit=scale-down,format=auto,onerror=redirect,quality=80/uploads/asset/file/9bb33b40-97ba-4083-8391-6b9f2e6fa481/Screenshot_2026-02-25_at_10.02.49_PM.png" length="2171072" type="image/png"/>
  <link>https://www.cloudsecuritynewsletter.com/p/ai-agents-security-blueprint</link>
  <guid isPermaLink="true">https://www.cloudsecuritynewsletter.com/p/ai-agents-security-blueprint</guid>
  <pubDate>Wed, 25 Feb 2026 22:58:22 +0000</pubDate>
  <atom:published>2026-02-25T22:58:22Z</atom:published>
    <dc:creator>Ashish Rajan</dc:creator>
  <content:encoded><![CDATA[
    <div class='beehiiv'><style>
  .bh__table, .bh__table_header, .bh__table_cell { border: 1px solid #C0C0C0; }
  .bh__table_cell { padding: 5px; background-color: #FFFFFF; }
  .bh__table_cell p { color: #2D2D2D; font-family: 'Helvetica',Arial,sans-serif !important; overflow-wrap: break-word; }
  .bh__table_header { padding: 5px; background-color:#F1F1F1; }
  .bh__table_header p { color: #2A2A2A; font-family:'Trebuchet MS','Lucida Grande',Tahoma,sans-serif !important; overflow-wrap: break-word; }
</style><div class='beehiiv__body'><h2 class="heading" style="text-align:left;" id="hello-from-the-cloudverse"><span style="background-color:#28EEDA;"><b>Hello from the Cloud-verse!</b></span></h2><p class="paragraph" style="text-align:left;">This week’s Cloud Security Newsletter topic<b>: The AI Security Blueprint: A Maturity-Staged Framework for Enterprise AI Governance </b><a class="link" href="https://www.cloudsecuritypodcast.tv/videos/why-ai-infrastructure-is-harder-to-secure-than-cloud?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=ai-agents-are-now-the-attack-surface-building-an-ai-security-blueprint-before-it-s-too-late" rel="noopener noreferrer nofollow">(continue reading)</a> </p><div class="button" style="text-align:center;"><a target="_blank" rel="noopener nofollow noreferrer" class="button__link" style="" href="https://www.aisecuritypodcast.com/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=ai-agents-are-now-the-attack-surface-building-an-ai-security-blueprint-before-it-s-too-late"><span class="button__text" style=""> This issue is sponsored by AI Security Podcast </span></a></div><hr class="content_break"><div class="image"><a class="image__link" href="https://www.cloudsecuritypodcast.tv/videos/why-ai-infrastructure-is-harder-to-secure-than-cloud?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=ai-agents-are-now-the-attack-surface-building-an-ai-security-blueprint-before-it-s-too-late" rel="noopener" target="_blank"><img alt="" class="image__image" style="" src="https://media.beehiiv.com/cdn-cgi/image/fit=scale-down,format=auto,onerror=redirect,quality=80/uploads/asset/file/9bb33b40-97ba-4083-8391-6b9f2e6fa481/Screenshot_2026-02-25_at_10.02.49_PM.png?t=1772057144"/></a><div class="image__source"><span class="image__source_text"><p><i>This image was generated by AI. It&#39;s still experimental, so it might not be a perfect match!</i></p></span></div></div><p class="paragraph" style="text-align:left;"><b>Incase, this is your 1st Cloud Security Newsletter! You are in good company! </b><br>You are reading this issue along with your friends and colleagues from companies like <i>Netflix</i>, Citi, <i>JP Morgan, Linkedin, Reddit, Github, Gitlab, CapitalOne, Robinhood, HSBC, British Airways, Airbnb, Block, Booking Inc & more</i> who subscribe to this newsletter, who like you want to learn what’s new with Cloud Security each week from their industry peers like many others who listen to <a class="link" href="https://open.spotify.com/show/6LZgeh4GecRYPc0WrwMB4I?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=ai-agents-are-now-the-attack-surface-building-an-ai-security-blueprint-before-it-s-too-late" target="_blank" rel="noopener noreferrer nofollow">Cloud Security Podcast</a> & <a class="link" href="https://open.spotify.com/show/3nV4eijfzdHKIvDOaycVII?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=ai-agents-are-now-the-attack-surface-building-an-ai-security-blueprint-before-it-s-too-late" target="_blank" rel="noopener noreferrer nofollow">AI Security Podcast</a> every week.</p><hr class="content_break"><p class="paragraph" style="text-align:left;">Welcome to this week’s Cloud Security Newsletter</p><p class="paragraph" style="text-align:left;">If this week had a single unifying signal it was this: the AI systems your organisation is deploying faster than ever are becoming the attack surface. From a weaponised npm package silently installing an autonomous AI agent on developer machines, to a Russian nation-state actor using legitimate SaaS webhooks to exfiltrate data without touching a single CVE, to ransomware operators now confirmed exploiting a CVSS 9.9 pre-auth RCE in one of the enterprise&#39;s most privileged remote access tools   the threat actors are not waiting for your AI governance programme to catch up.</p><p class="paragraph" style="text-align:left;">This week&#39;s guest, Shannon Murphy, Senior Researcher and AI Security Strategist at TrendAI, has spent the last five years working directly with CISOs, CTOs, and cloud security architects on exactly this problem. In a wide-ranging conversation with Cloud Security Podcast host Ashish Rajan, Shannon lays out a clear-eyed AI security blueprint   grounded not in theory but in the patterns she observes across enterprise field engagements   covering data governance, agent identity, shift-left for AI, and how to build a cross-functional governance committee that actually holds.</p><p class="paragraph" style="text-align:left;">We also cover the Cisco State of AI Security 2026 report revealing that 71% of enterprises are deploying agentic AI they cannot secure, and Microsoft&#39;s new Security Dashboard for AI now in public preview. The news this week is not background noise, it is a live demonstration of every risk Shannon describes.<i>[</i><a class="link" href="https://www.cloudsecuritypodcast.tv/videos/how-to-build-an-ai-security-program-from-scratch?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=ai-agents-are-now-the-attack-surface-building-an-ai-security-blueprint-before-it-s-too-late" target="_blank" rel="noopener noreferrer nofollow">Listen to the episode</a><i>]</i></p><hr class="content_break"><h2 class="heading" style="text-align:left;" id="two-actions-to-take-this-week">🎯 Two Actions to Take This Week</h2><p class="paragraph" style="text-align:left;">👉 <b>Patch or isolate BeyondTrust immediately</b><br>👉 <b>Audit every AI agent in CI/CD and restrict token scope</b></p><p class="paragraph" style="text-align:left;">The AI readiness gap is no longer theoretical.<br>It’s operational risk.</p><hr class="content_break"><h2 class="heading" style="text-align:left;" id="tldr-for-busy-readers">📰 TL;DR for Busy Readers</h2><ul><li><p class="paragraph" style="text-align:left;"><b>BeyondTrust CVE-2026-1731 (CVSS 9.9) is confirmed</b><br>Patch to RS 25.3.2 / PRA 25.1.1 immediately or isolate from internet exposure.</p></li><li><p class="paragraph" style="text-align:left;"><b>Cline npm supply chain attack </b></p><ul><li><p class="paragraph" style="text-align:left;">Prompt injection used to steal publish credentials.<br>→ Enforce 48-hour npm version hold.<br>→ Audit AI agent permissions in CI/CD.</p></li></ul></li><li><p class="paragraph" style="text-align:left;"><b>Cisco&#39;s 2026 AI Security report</b>: </p><ul><li><p class="paragraph" style="text-align:left;">83% deploying agentic AI. Only 29% ready.<br>→ Treat the readiness gap as a funded backlog item.</p></li></ul></li><li><p class="paragraph" style="text-align:left;"><b>Microsoft&#39;s Security Dashboard for AI (public preview)</b> </p><ul><li><p class="paragraph" style="text-align:left;">First Unified AI asset inventory across Defender, Entra, Purview.<br>→ Enable this week and export your first AI asset register.</p></li></ul></li></ul><hr class="content_break"><h2 class="heading" style="text-align:left;" id="this-weeks-security-news">📰 <b>THIS WEEK&#39;S TOP 4 SECURITY HEADLINES</b></h2><p class="paragraph" style="text-align:left;">Each story includes <b>why it matters</b> and <b>what to do next</b> — no vendor fluff.</p><h3 class="heading" style="text-align:left;" id="1-beyond-trust-cve-20261731-cvss-99"><b>1. </b><b>BeyondTrust CVE-2026-1731 (CVSS 9.9) </b></h3><p class="paragraph" style="text-align:left;"><b>What Happened: </b>What began as a critical disclosure on February 6 escalated this week into confirmed ransomware exploitation across multiple sectors. BeyondTrust&#39;s own telemetry indicates active exploitation started January 31 a full week before public disclosure   making CVE-2026-1731 a zero-day in retrospect. The flaw is an OS command injection vulnerability in the thin-scc-wrapper component of BeyondTrust Remote Support (RS) and Privileged Remote Access (PRA), exposed via WebSocket and exploitable without authentication. A public PoC dropped February 10; GreyNoise observed mass scanning within 24 hours. CISA added it to the KEV catalog on February 13 with a 72-hour remediation mandate for federal agencies and updated the KEV entry on February 19 to activate the ransomware exploitation flag.</p><p class="paragraph" style="text-align:left;">Palo Alto Networks Unit 42 confirmed active exploitation this week across finance, legal, healthcare, higher education, and retail in the US, France, Germany, Australia, and Canada. Observed post-exploitation activity includes VShell and SparkRAT deployment, web shell installation, PostgreSQL database exfiltration, and lateral movement.</p><p class="paragraph" style="text-align:left;"><b>Why it matters to you:</b> BeyondTrust RS and PRA are privileged access tools by design they carry SYSTEM-level authority over every managed endpoint. An unauthenticated RCE on these appliances is effectively a master key to your entire managed estate. With 11,000+ internet-exposed instances confirmed and ransomware actors now actively pre-positioning, treat this as an active incident response situation, not a patch management queue item.</p><p class="paragraph" style="text-align:left;">👉 <a class="link" href="https://unit42.paloaltonetworks.com/beyondtrust-cve-2026-1731/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=ai-agents-are-now-the-attack-surface-building-an-ai-security-blueprint-before-it-s-too-late" target="_blank" rel="noopener noreferrer nofollow">Download Unit 42 IOCs and validate exposure this week.</a></p><p class="paragraph" style="text-align:left;"><b>Sources:</b><a class="link" href="https://thehackernews.com/2026/02/infostealer-steals-openclaw-ai-agent.html?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=ai-agents-are-now-the-attack-surface-building-an-ai-security-blueprint-before-it-s-too-late" target="_blank" rel="noopener noreferrer nofollow"> </a><a class="link" href="https://www.bleepingcomputer.com/news/security/cisa-beyondtrust-rce-flaw-now-exploited-in-ransomware-attacks/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=ai-agents-are-now-the-attack-surface-building-an-ai-security-blueprint-before-it-s-too-late" target="_blank" rel="noopener noreferrer nofollow">BleepingComputer</a> · <a class="link" href="https://www.securityweek.com/beyondtrust-vulnerability-exploited-in-ransomware-attacks/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=ai-agents-are-now-the-attack-surface-building-an-ai-security-blueprint-before-it-s-too-late" target="_blank" rel="noopener noreferrer nofollow">SecurityWeek</a> · <a class="link" href="https://www.scworld.com/news/cisa-update-beyondtrust-rce-exploited-in-ransomware-attacks?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=ai-agents-are-now-the-attack-surface-building-an-ai-security-blueprint-before-it-s-too-late" target="_blank" rel="noopener noreferrer nofollow">SC Media</a></p><hr class="content_break"><h3 class="heading" style="text-align:left;" id="2-cline-cli-npm-supply-chain-attack"><b>2. </b><b>Cline CLI npm Supply Chain Attack Prompt Injection Weaponised to Steal Publish Credentials and Deploy OpenClaw</b></h3><p class="paragraph" style="text-align:left;"><b>What happened:</b> On February 17, a threat actor used a stolen npm publish token to release cline@2.3.0   a poisoned update to Cline CLI, a popular AI-powered coding assistant with approximately 90,000 weekly npm downloads. A single postinstall script silently ran npm install -g openclaw@latest on any machine installing the package. The malicious version was live for approximately eight hours. StepSecurity estimated roughly 4,000 downloads of the compromised version.</p><p class="paragraph" style="text-align:left;">What makes this attack structurally significant is the initial access vector: security researcher Adnan Khan had disclosed on February 9 that Cline&#39;s Claude-powered GitHub issue-triage workflow was vulnerable to prompt injection. A crafted GitHub issue could cause the AI agent to execute a malicious payload, poison the GitHub Actions cache, and pivot to steal the npm publish token. Cline patched the triage workflow within 30 minutes   but rotated the wrong token. Eight days later, the still-valid token was used to publish the malicious package. The payload, OpenClaw, is a legitimate open-source AI agent with broad system access (full disk, terminal, persistent WebSocket daemon) and a known critical CVE (CVE-2026-25253, CVSS 8.8) in versions prior to 2026.1.29 allowing unauthenticated operator access.</p><p class="paragraph" style="text-align:left;"><b>Why it matters to you:</b> This attack introduces a materially new threat model: prompt injection against AI agents in CI/CD pipelines as an initial access technique for credential theft. The entry point was not a phishing email or a code vulnerability, it was a GitHub issue. Any organisation using LLM-powered bots to automate repository triage, PR review, or release workflows with access to production secrets is now a viable target for this attack pattern. This connects directly to Shannon Murphy&#39;s warning that agentic AI is creating new blind spots that existing DLP and AppSec tooling cannot cover.</p><p class="paragraph" style="text-align:left;">👉 If your AI agent can push code, it must be governed like a privileged identity.</p><p class="paragraph" style="text-align:left;"><b>Sources:</b> <a class="link" href="https://thehackernews.com/2026/02/cline-cli-230-supply-chain-attack.html?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=ai-agents-are-now-the-attack-surface-building-an-ai-security-blueprint-before-it-s-too-late" target="_blank" rel="noopener noreferrer nofollow">The Hacker News</a> · <a class="link" href="https://www.darkreading.com/application-security/supply-chain-attack-openclaw-cline-users?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=ai-agents-are-now-the-attack-surface-building-an-ai-security-blueprint-before-it-s-too-late" target="_blank" rel="noopener noreferrer nofollow">Dark Reading</a> · <a class="link" href="https://snyk.io/blog/cline-supply-chain-attack-prompt-injection-github-actions/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=ai-agents-are-now-the-attack-surface-building-an-ai-security-blueprint-before-it-s-too-late" target="_blank" rel="noopener noreferrer nofollow">Snyk Deep-Dive</a> · <a class="link" href="https://www.stepsecurity.io/blog/cline-supply-chain-attack-detected-cline-2-3-0-silently-installs-openclaw?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=ai-agents-are-now-the-attack-surface-building-an-ai-security-blueprint-before-it-s-too-late" target="_blank" rel="noopener noreferrer nofollow">StepSecurity Detection Report</a></p><hr class="content_break"><h3 class="heading" style="text-align:left;" id="3-cisco-state-of-ai-security-2026-7"><b>3. </b><b>Cisco &quot;State of AI Security 2026&quot;- 71% of Enterprises Are Deploying Agentic AI They Cannot Secure</b></h3><p class="paragraph" style="text-align:left;"><b>What happened:</b> Cisco&#39;s AI Threat Intelligence & Security Research team released its flagship annual report on February 19, with Help Net Security publishing a practitioner-focused analysis on February 23. The report documents three compounding risks: rapid agentic AI deployment outpacing security readiness; a fragile AI supply chain with documented tool poisoning and MCP ecosystem vulnerabilities; and adversarial techniques   particularly prompt injection and jailbreaks   maturing from research concepts into documented real-world exploits. Key statistic: 83% of surveyed organisations plan to deploy agentic AI into business functions; only 29% feel ready to secure those deployments.</p><p class="paragraph" style="text-align:left;">Documented incidents include a GitHub MCP server compromise in which a malicious issue injected hidden instructions that hijacked an agent and exfiltrated private repository data. The report also covers a fake npm package mimicking an email integration that silently forwarded outbound messages to attacker infrastructure   a pattern strikingly consistent with the Cline incident reported in the same week. Cisco&#39;s researchers demonstrated that open-weight models remain susceptible to multi-turn jailbreaks at significantly higher success rates than single-turn attacks.</p><p class="paragraph" style="text-align:left;"><b>Why it matters to you:</b> The report crystallises what security leaders are observing operationally: AI agents are being granted authority to execute tasks, query databases, modify code, and interact with external services   often without the controls that would be non-negotiable for a human performing the same actions. The agent-to-agent trust problem is particularly acute. For cloud security teams, the MCP attack surface deserves immediate attention; Cisco has released open-source scanners for MCP, A2A, and agentic skill files as companion tooling. The 71% readiness gap is not a statistic to present to leadership   it is a project backlog. This data is the empirical foundation for every strategic recommendation Shannon Murphy makes in this week&#39;s feature.</p><p class="paragraph" style="text-align:left;">👉 Use this data in your next board update and tie it to funded remediation.</p><p class="paragraph" style="text-align:left;"><b>Sources:</b> <a class="link" href="https://blogs.cisco.com/ai/cisco-state-of-ai-security-2026-report?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=ai-agents-are-now-the-attack-surface-building-an-ai-security-blueprint-before-it-s-too-late" target="_blank" rel="noopener noreferrer nofollow">Cisco AI Security Blog (Primary)</a> · <a class="link" href="https://www.helpnetsecurity.com/2026/02/23/ai-agent-security-risks-enterprise/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=ai-agents-are-now-the-attack-surface-building-an-ai-security-blueprint-before-it-s-too-late" target="_blank" rel="noopener noreferrer nofollow">Help Net Security</a> · <a class="link" href="https://learn-cloudsecurity.cisco.com/2026-state-of-ai-security-report?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=ai-agents-are-now-the-attack-surface-building-an-ai-security-blueprint-before-it-s-too-late" target="_blank" rel="noopener noreferrer nofollow">Cisco Report</a></p><hr class="content_break"><h3 class="heading" style="text-align:left;" id="4-microsoft-launches-security-dashb"><b>4. </b><b>Microsoft Launches Security Dashboard for AI in Public Preview - Unified CISO Visibility Across the Enterprise AI Estate</b></h3><p class="paragraph" style="text-align:left;"><b>What happened:</b><b> </b>Microsoft released the Security Dashboard for AI into public preview on February 16, available across enterprise tenants with eligible Defender, Entra, and Purview subscriptions at no additional cost. Accessible at ai. security. microsoft. com, the dashboard aggregates real-time risk signals from all three platforms into a single governance interface designed for CISOs and AI risk leaders. Core capabilities include: a comprehensive AI asset inventory spanning Microsoft 365 Copilot agents, Copilot Studio agents, Azure AI Foundry deployments, MCP servers, and third-party AI applications including OpenAI, Google Gemini, and ChatGPT tenant integrations; an AI risk scorecard with posture drift tracking; correlated risk views linking Purview data sensitivity signals with Entra identity context and Defender threat alerts; and delegated remediation actions. Security Copilot is embedded for natural-language investigation.</p><p class="paragraph" style="text-align:left;"><b>Why it matters to you:</b> This announcement directly addresses the shadow AI problem Shannon Murphy identifies as the critical first milestone in any AI security programme: you cannot govern what you cannot see. The dashboard&#39;s AI inventory discovery function is the operationalisation of that principle   and for organisations already invested in the Microsoft security stack, it is immediately actionable. The dashboard also directly addresses the data leakage risk Cisco independently flags in this week&#39;s AI Security report: oversharing detection in Purview integration targets agents with overly broad data permissions, one of the most prevalent enterprise AI exposure patterns observed in 2025. For organisations not on the Microsoft stack, this announcement raises the competitive bar for what a mature CNAPP or CSPM vendor must now offer in AI security posture management.</p><p class="paragraph" style="text-align:left;">Enable it. Export inventory. Start governance.</p><p class="paragraph" style="text-align:left;"><b>Sources:</b><a class="link" href="https://www.bleepingcomputer.com/news/microsoft/microsoft-says-bug-causes-copilot-to-summarize-confidential-emails/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=ai-agents-are-now-the-attack-surface-building-an-ai-security-blueprint-before-it-s-too-late" target="_blank" rel="noopener noreferrer nofollow"> </a><a class="link" href="https://techcommunity.microsoft.com/blog/microsoft-security-blog/introducing-security-dashboard-for-ai-now-in-public-preview/4494637?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=ai-agents-are-now-the-attack-surface-building-an-ai-security-blueprint-before-it-s-too-late" target="_blank" rel="noopener noreferrer nofollow">Microsoft TechCommunity (Primary)</a> · <a class="link" href="https://www.helpnetsecurity.com/2026/02/16/microsoft-security-dashboard-for-ai-tool/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=ai-agents-are-now-the-attack-surface-building-an-ai-security-blueprint-before-it-s-too-late" target="_blank" rel="noopener noreferrer nofollow">Help Net Security</a> · <a class="link" href="https://learn.microsoft.com/en-us/security/security-for-ai/security-dashboard-for-ai?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=ai-agents-are-now-the-attack-surface-building-an-ai-security-blueprint-before-it-s-too-late" target="_blank" rel="noopener noreferrer nofollow">Microsoft Learn Docs</a></p><hr class="content_break"><h2 class="heading" style="text-align:left;" id="cloud-security-topic-of-the-week">🎯 Cloud Security Topic of the Week: </h2><h3 class="heading" style="text-align:left;" id="the-ai-security-blueprint-a-maturit"><b>The AI Security Blueprint: A Maturity-Staged Framework for Enterprise AI Governance</b></h3><p class="paragraph" style="text-align:left;">One of the clearest themes emerging from Shannon Murphy&#39;s conversation is that most organisations are attempting to govern AI deployments using frameworks and tool stacks designed for a deterministic, pre-AI world   and that gap is not theoretical. It is showing up in the Cisco report&#39;s 71% readiness gap, in the Cline supply chain attack, and in the data leakage scenarios Shannon describes from real enterprise field engagements.</p><p class="paragraph" style="text-align:left;">The AI security blueprint she outlines is structured around three maturity stages   adopter, builder, and scaler   each with distinct risk profiles and corresponding security requirements. What makes this framework practically valuable is that Shannon explicitly states the underlying philosophy remains consistent across all three stages: discover, assess, prioritise, mitigate. The level of security capability scales with the attack surface; the methodology does not change.</p><p class="paragraph" style="text-align:left;"><b>Stage-1 - Adopter:</b> Organisations in productivity-gain mode face their highest risk from data governance failures and over-permissioned AI access. </p><p class="paragraph" style="text-align:left;">Primary Risk: Shadow AI & Data Exposure<br>Objective: Real-time AI asset visibility</p><p class="paragraph" style="text-align:left;">Deliverable:<br>Continuously updated AI inventory not a spreadsheet.</p><p class="paragraph" style="text-align:left;"><b>Stage-2 - Builder:</b> Development teams building internal AI tools or going to market with AI-powered products face all of the adopter risks plus the application security and supply chain risks illustrated by the Cline attack this week. </p><p class="paragraph" style="text-align:left;">Primary Risk: Supply chain & application security<br>Add:</p><ul><li><p class="paragraph" style="text-align:left;">AI-specific vulnerability scanning</p></li><li><p class="paragraph" style="text-align:left;">Container security</p></li><li><p class="paragraph" style="text-align:left;">Runtime monitoring</p></li><li><p class="paragraph" style="text-align:left;">Agent identity governance</p></li></ul><p class="paragraph" style="text-align:left;">Shift-left is necessary.<br>Runtime monitoring is mandatory.</p><p class="paragraph" style="text-align:left;"><b>Stage-3-  Scaler</b>: Organisations investing in AI factories and enterprise-wide automation are operating in what Shannon describes as an inferencing security paradigm: continuous monitoring of live AI systems for behavioural drift, adversarial manipulation, and agent-to-agent trust failures. </p><p class="paragraph" style="text-align:left;">Primary Risk: Inferencing Security & Agent-to-Agent Trust<br>Objective:Treat agents as identities:</p><ul><li><p class="paragraph" style="text-align:left;">Scoped permissions</p></li><li><p class="paragraph" style="text-align:left;">Short-lived credentials</p></li><li><p class="paragraph" style="text-align:left;">Access governance</p></li><li><p class="paragraph" style="text-align:left;">Continuous behavioural monitoring</p></li></ul><p class="paragraph" style="text-align:left;">DSPM becomes foundational here.</p><hr class="content_break"><h2 class="heading" style="text-align:left;" id="featured-experts-this-week"><b>Featured Experts This Week </b>🎤</h2><ul><li><p class="paragraph" style="text-align:left;"><a class="link" href="https://www.linkedin.com/in/greatgtm/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=ai-agents-are-now-the-attack-surface-building-an-ai-security-blueprint-before-it-s-too-late" target="_blank" rel="noopener noreferrer nofollow"><b>Shannon Murphy</b></a><b>- </b> Senior Researcher & AI Security Strategist | TrendAI</p></li><li><p class="paragraph" style="text-align:left;"><b><a class="link" href="https://www.linkedin.com/in/ashishrajan/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=ai-agents-are-now-the-attack-surface-building-an-ai-security-blueprint-before-it-s-too-late" target="_blank" rel="noopener noreferrer nofollow">Ashish Rajan</a></b> - CISO | Co-Host <a class="link" href="https://open.spotify.com/show/3nV4eijfzdHKIvDOaycVII?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=ai-agents-are-now-the-attack-surface-building-an-ai-security-blueprint-before-it-s-too-late" target="_blank" rel="noopener noreferrer nofollow">AI Security Podcast</a> , Host of <a class="link" href="https://www.cloudsecuritypodcast.tv/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=ai-agents-are-now-the-attack-surface-building-an-ai-security-blueprint-before-it-s-too-late" target="_blank" rel="noopener noreferrer nofollow">Cloud Security Podcast</a></p></li></ul><h2 class="heading" style="text-align:left;" id="definitions-and-core-concepts"><b>Definitions and Core Concepts 📚</b></h2><p class="paragraph" style="text-align:left;">Before diving into our insights, let&#39;s clarify some key terms:</p><ul><li><p class="paragraph" style="text-align:left;"><b>Agentic AI</b><br>Autonomous systems executing multi-step tasks via tool calls.</p></li><li><p class="paragraph" style="text-align:left;"><b>Prompt Injection</b><br>Malicious instructions embedded in data processed by AI agents.</p></li><li><p class="paragraph" style="text-align:left;"><b>Model Context Protocol (MCP)</b><br>Standard defining how agents discover and call tools.</p><p class="paragraph" style="text-align:left;"></p></li></ul><hr class="content_break"><p class="paragraph" style="text-align:center;">This week&#39;s issue is sponsored by <a class="link" href="https://www.aisecuritypodcast.com/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=ai-agents-are-now-the-attack-surface-building-an-ai-security-blueprint-before-it-s-too-late" target="_blank" rel="noopener noreferrer nofollow"><b>AI Security Podcast</b></a></p><hr class="content_break"><h2 class="heading" style="text-align:left;" id="our-insights-from-this-practitioner">💡<b>Our Insights from this Practitioner 🔍</b></h2><h3 class="heading" style="text-align:left;" id="how-to-build-an-ai-security-program"><b>How to Build an AI Security Program from Scratch.</b><b> (</b><a class="link" href="https://www.cloudsecuritypodcast.tv/videos/how-to-build-an-ai-security-program-from-scratch?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=ai-agents-are-now-the-attack-surface-building-an-ai-security-blueprint-before-it-s-too-late" target="_blank" rel="noopener noreferrer nofollow"><b>Full Episode here</b></a><b>)</b></h3><hr class="content_break"><h3 class="heading" style="text-align:left;" id="1-why-95-of-ai-projects-fail-and-wh"><span style="color:rgb(67, 67, 67);"><b>1. Why 95% of AI Projects Fail   and What Security Owns in the Answer</b></span></h3><p class="paragraph" style="text-align:left;">Shannon opens with a striking data point from an MIT study that circulated widely in the security community: 95% of AI projects are failing. Her diagnosis is direct:</p><p class="paragraph" style="text-align:left;">&quot;Any security leader who attempts to drive an AI governance strategy in a silo will fail. 95% of AI projects are failing because we&#39;re not having all the stakeholders at the table.&quot;   Shannon Murphy, TrendAI</p><p class="paragraph" style="text-align:left;">The failure pattern she describes is recognisable to anyone who has watched a well-intentioned AI governance initiative stall: business units move fast under top-down pressure to adopt AI, security is brought in late or not at all, and the resulting programme has policy gaps that surface as incidents. The structural fix she advocates is a cross-functional governance committee   legal, compliance, engineering, and security   with board-level sponsorship that distributes risk ownership rather than concentrating it in the security team alone.</p><p class="paragraph" style="text-align:left;">For cloud security leaders, this is both a risk management and a career positioning insight. Shannon notes that AI is creating the conditions for security to have a genuine seat at strategic decision-making tables for the first time   because business leaders now understand they have knowledge gaps that require security intelligence to navigate. The opportunity to shift from reactive incident responder to proactive governance partner is real, but it requires showing up with scenario-based risk framing (&quot;here is what a data exfiltration incident looks like in our AI environment and here is what it costs&quot;) rather than technical jargon.</p><hr class="content_break"><h3 class="heading" style="text-align:left;" id="2-your-existing-stack-has-a-blind-s"><span style="color:rgb(67, 67, 67);"><b>2. Your Existing Stack Has a Blind Spot the Size of Your AI Deployment</b></span></h3><p class="paragraph" style="text-align:left;">One of the most practically important points Shannon makes concerns the false sense of coverage that a mature security stack can create when AI enters the picture:</p><p class="paragraph" style="text-align:left;">&quot;AI is embedded in every single SaaS application and every single tool that your team is using. You need to know what people are using, you need to know what content is going into that experience and what content is going out.&quot;   Shannon Murphy, TrendAI</p><p class="paragraph" style="text-align:left;">She illustrates the blind spot with a deceptively simple example: an employee asks their corporate AI copilot for a colleague&#39;s salary. Traditional DLP, tuned to flag sensitive data leaving via email or file transfer, has no visibility into this interaction. The data exposure happens entirely within what the organisation considers a sanctioned, secured application   and no alert is generated. Scale this to thousands of employees across dozens of AI-enabled SaaS tools, and the aggregate data risk is substantial.</p><p class="paragraph" style="text-align:left;">Her prescription is not to abandon the existing stack but to recognise it as table stakes that now requires an AI-specific visibility layer on top. The key principle: context is everything. Risk that exists in isolation   an AI query here, a model access there   becomes actionable and prioritisable only when it can be seen in relation to the identity making the request, the data being accessed, and the threat signals already in your environment. This is precisely what Microsoft&#39;s Security Dashboard for AI announced this week attempts to operationalise.</p><hr class="content_break"><h3 class="heading" style="text-align:left;" id="3-the-three-milestone-ai-security-r"><span style="color:rgb(67, 67, 67);"><b>3. The Three-Milestone AI Security Roadma</b></span><span style="color:rgb(67, 67, 67);">p</span></h3><p class="paragraph" style="text-align:left;">Shannon provides the clearest practical roadmap in the conversation for how a security leader should sequence their AI security programme. It maps directly to the three building blocks her blueprint prioritises:</p><p class="paragraph" style="text-align:left;">Milestone 1   Real-Time AI Asset Visibility: &quot;Shadow AI is absolutely massive and you need to be able to wrap your arms around it.&quot; This is the non-negotiable foundation. Shannon is explicit that a static inventory is insufficient: &quot;What we have today in place is not what we have tomorrow   literally tomorrow.&quot; The first deliverable is a continuously updated, real-time inventory of every AI application, agent, model, and integration in your environment. Tools exist today to make this tractable   the question is whether the programme has been prioritised.</p><p class="paragraph" style="text-align:left;">Milestone 2   Identity and Access Governance for AI: Once you have inventory, the next question is access. Who   and what   gets access to which data and tools? Shannon&#39;s recommendation to treat agents as identities is strategically important: &quot;Maybe we wanna start treating them a little bit like identities. Taking an identity risk management approach to those agents.&quot; The tooling for identity governance is mature; applying it systematically to AI agents requires discipline and the right agent inventory to work from.</p><p class="paragraph" style="text-align:left;">Milestone 3   Data Governance and Provenance: Shannon identifies this as &quot;your biggest project&quot; and the one most security teams are furthest behind on. DSPM   understanding where your data lives, who has access to it in AI contexts, and what happens to it during model fine-tuning or inference   is the pillar she describes as &quot;the central bingo card conversation in every CISO engagement over the last two years.&quot; For organisations fine-tuning open-weight models on proprietary data, this is particularly acute: the data used for fine-tuning must be governed with the same rigour as production data.</p><hr class="content_break"><h3 class="heading" style="text-align:left;" id="4-shift-left-for-ai-more-critical-t"><span style="color:rgb(67, 67, 67);"><b>4. Shift Left for AI - More Critical Than Ever, and More Incomplete</b></span></h3><p class="paragraph" style="text-align:left;">Ashish Rajan asks Shannon directly whether shift-left DevSecOps is still relevant in an AI-first world. Her answer is nuanced and worth the full framing:</p><p class="paragraph" style="text-align:left;">&quot;Shift left is more needed than ever before. But it is what is going to keep you out of trouble from a quality perspective   and when we layer in things like an AI scanner for vulnerability, that&#39;s what&#39;s going to keep you out of trouble even when we&#39;re live in runtime.&quot;   Shannon Murphy, TrendAI</p><p class="paragraph" style="text-align:left;">The key addition she makes is that shift-left for AI does not end at the pipeline gate. Unlike traditional deterministic software, AI applications continue to change after deployment through model drift, fine-tuning updates, and the inherent non-determinism of LLM outputs. This means that runtime monitoring   for hallucination, for adversarial prompt injection, for novel zero-day vulnerabilities in live inference stacks   is a distinct and mandatory complement to pre-deployment scanning. The Cline attack this week is a live demonstration: a supply chain compromise in the pre-deployment phase that delivered a runtime-persistent agent with ongoing system access. Both vectors required coverage; neither alone was sufficient.</p><hr class="content_break"><h3 class="heading" style="text-align:left;" id="5-the-model-card-as-enterprise-trus"><span style="color:rgb(67, 67, 67);"><b>5. The Model Card as Enterprise Trust Infrastructure</b></span></h3><p class="paragraph" style="text-align:left;">Shannon surfaces an emerging practice that deserves broader adoption among organisations building AI-powered products: the model card used as customer-facing transparency documentation.</p><p class="paragraph" style="text-align:left;">&quot;Some organizations are doing something really great using a model card that I call a license to thrive   where they show here are the models we use, here are the safety precautions we take, this is how we use a Zero Trust approach to protect your data.&quot; She expects standardisation of this practice to accelerate through 2026 as regulated industries (healthcare, financial services) begin demanding it from AI-powered vendors as a due diligence requirement.</p><p class="paragraph" style="text-align:left;">For security leaders at organisations building or evaluating AI-powered products, the model card framework serves a dual purpose: externally, it builds customer trust without disclosing IP; internally, it creates the documentation discipline that forces clarity about which models are in production, what data they have been trained on, and what controls are in place. That internal clarity is also the foundation of a defensible DSPM programme.</p><hr class="content_break"><h2 class="heading" style="text-align:left;" id="related-podcast-episodes"><b>RELATED RESOURCES 🎧</b></h2><ul><li><p class="paragraph" style="text-align:left;"><a class="link" href="https://www.nist.gov/artificial-intelligence/ai-risk-management-framework?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=ai-agents-are-now-the-attack-surface-building-an-ai-security-blueprint-before-it-s-too-late" target="_blank" rel="noopener noreferrer nofollow">NIST AI Risk Management Framework (AI RMF)</a></p></li><li><p class="paragraph" style="text-align:left;"><a class="link" href="https://owasp.org/www-project-top-10-for-large-language-model-applications/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=ai-agents-are-now-the-attack-surface-building-an-ai-security-blueprint-before-it-s-too-late" target="_blank" rel="noopener noreferrer nofollow">OWASP Top 10 for LLM Applications</a></p></li><li><p class="paragraph" style="text-align:left;"><a class="link" href="https://atlas.mitre.org/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=ai-agents-are-now-the-attack-surface-building-an-ai-security-blueprint-before-it-s-too-late" target="_blank" rel="noopener noreferrer nofollow">MITRE ATLAS   Adversarial Threat Landscape for AI Systems</a></p></li><li><p class="paragraph" style="text-align:left;"><a class="link" href="https://www.trendmicro.com/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=ai-agents-are-now-the-attack-surface-building-an-ai-security-blueprint-before-it-s-too-late" target="_blank" rel="noopener noreferrer nofollow">TrendAI Security Blueprint Whitepaper</a>   Framework for adopters, builders, and scalers</p></li></ul><h3 class="heading" style="text-align:left;" id="cloud-security-podcast"><b>Cloud Security Podcast</b></h3><ul><li><p class="paragraph" style="text-align:left;"><a class="link" href="https://www.cloudsecuritypodcast.tv/videos/how-to-build-an-ai-security-program-from-scratch?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=ai-agents-are-now-the-attack-surface-building-an-ai-security-blueprint-before-it-s-too-late" target="_blank" rel="noopener noreferrer nofollow"><b>Cloud Security Podcast Episode with Shannon Murphy</b></a></p></li></ul><hr class="content_break"><h2 class="heading" style="text-align:left;" id="related-podcast-episodes"><b>Related Podcast Episodes 🎧</b></h2><div class="image"><a class="image__link" href="https://www.cloudsecuritypodcast.tv/videos/how-to-build-an-ai-security-program-from-scratch?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=ai-agents-are-now-the-attack-surface-building-an-ai-security-blueprint-before-it-s-too-late" rel="noopener" target="_blank"><img alt="" class="image__image" style="" src="https://media.beehiiv.com/cdn-cgi/image/fit=scale-down,format=auto,onerror=redirect,quality=80/uploads/asset/file/8eef9378-33a0-4881-b30a-4b54b726e9c7/S07EP00_Shannon_Murphy_.jpg?t=1772057210"/></a></div><h3 class="heading" style="text-align:left;" id="question-for-you-reply-to-this-emai">Question for you? (Reply to this email)</h3><p class="paragraph" style="text-align:left;">🤔<b> </b>If your AI agent can read GitHub issues and push production code… Does it have more access than your junior engineer?</p><p class="paragraph" style="text-align:left;">Because in many enterprises — it does.<br></p><p class="paragraph" style="text-align:left;">Next week, we&#39;ll explore another critical aspect of cloud security. Stay tuned!</p><hr class="content_break"><p class="paragraph" style="text-align:left;">📬 Want weekly expert takes on AI & Cloud Security? [<a class="link" href="https://www.cloudsecuritynewsletter.com/subscribe?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=ai-agents-are-now-the-attack-surface-building-an-ai-security-blueprint-before-it-s-too-late" target="_blank" rel="noopener noreferrer nofollow">Subscribe here</a>]”</p><hr class="content_break"><p class="paragraph" style="text-align:left;"><span style="color:rgb(238, 40, 60);"><b><a class="link" href="mailto:info@cloudsecuritypodcast.tv" target="_blank" rel="noopener noreferrer nofollow">We would love to hear from you</a></b></span>📢 for a feature or topic request or if you would like to sponsor an edition of Cloud Security Newsletter. </p><p class="paragraph" style="text-align:left;">Thank you for continuing to subscribe and Welcome to the new members in tis newsletter community💙</p><p class="paragraph" style="text-align:start;">Peace!</p><p class="paragraph" style="text-align:start;"><a class="link" href="https://www.linkedin.com/in/shilpi-bhattacharjee/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=ai-agents-are-now-the-attack-surface-building-an-ai-security-blueprint-before-it-s-too-late" target="_blank" rel="noopener noreferrer nofollow">Shilpi Bhattacharjee</a></p><div class="image"><a class="image__link" href="https://www.cloudsecuritypodcast.tv/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=ai-agents-are-now-the-attack-surface-building-an-ai-security-blueprint-before-it-s-too-late" rel="noopener" target="_blank"><img alt="" class="image__image" style="" src="https://media.beehiiv.com/cdn-cgi/image/fit=scale-down,format=auto,onerror=redirect,quality=80/uploads/asset/file/dc094a1c-678a-43e0-adc9-1bee6c3499e2/CSP_Logo_Blue_ScreenRes_3000x3000_v2.jpg"/></a></div><div class="button" style="text-align:left;"><a target="_blank" rel="noopener nofollow noreferrer" class="button__link" style="" href="{{rp_referral_hub_url}}"><span class="button__text" style=""> Share the newsletter </span></a></div><p class="paragraph" style="text-align:left;">Was this forwarded to you? You can <a class="link" href="https://www.cloudsecuritynewsletter.com/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=ai-agents-are-now-the-attack-surface-building-an-ai-security-blueprint-before-it-s-too-late" target="_blank" rel="noopener noreferrer nofollow">Sign up here</a>, to join our growing readership.</p><p class="paragraph" style="text-align:left;">Want to <b>sponsor</b> the next newsletter edition! <a class="link" href="mailto:info@cloudsecuritypodcast.tv" target="_blank" rel="noopener noreferrer nofollow">Lets make it happen </a></p><p class="paragraph" style="text-align:left;">Have you joined our FREE <b>Monthly</b> <a class="link" href="https://www.cloudsecuritybootcamp.com/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=ai-agents-are-now-the-attack-surface-building-an-ai-security-blueprint-before-it-s-too-late" target="_blank" rel="noopener noreferrer nofollow">Cloud Security Bootcamp</a> yet?</p><p class="paragraph" style="text-align:left;">checkout our <b>sister podcast</b> <a class="link" href="https://www.youtube.com/@AISecurityPodcast?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=ai-agents-are-now-the-attack-surface-building-an-ai-security-blueprint-before-it-s-too-late" target="_blank" rel="noopener noreferrer nofollow">AI Security Podcast</a></p></div><div class='beehiiv__footer'><br class='beehiiv__footer__break'><hr class='beehiiv__footer__line'><a target="_blank" class="beehiiv__footer_link" style="text-align: center;" href="https://www.beehiiv.com/?utm_campaign=50be88b7-44ea-43c7-a0b4-048b4df0b860&utm_medium=post_rss&utm_source=cloud_security_newsletter">Powered by beehiiv</a></div></div>
  ]]></content:encoded>
</item>

      <item>
  <title>🚨 OpenClaw AI Agents Are Now Infostealer Targets: Using OpenSource for Securing the Cloud-AI Stack!</title>
  <description>This week: infostealers begin targeting AI agent credentials (OpenClaw), Palo Alto acquires Koi Security to define Agentic Endpoint Security, and Microsoft 365 Copilot&#39;s DLP bypass exposes critical governance gaps. Toni de la Fuente, creator of Prowler, joins Ashish Rajan to unpack the shared responsibility gap in AI workloads, MCP architecture risks, and how open-source security tooling must evolve to meet the cloud-AI convergence challenge.</description>
      <enclosure url="https://media.beehiiv.com/cdn-cgi/image/fit=scale-down,format=auto,onerror=redirect,quality=80/uploads/asset/file/5d46bb24-447f-4605-a849-21b058a13e44/Screenshot_2026-02-19_at_12.47.23_AM.png" length="4321461" type="image/png"/>
  <link>https://www.cloudsecuritynewsletter.com/p/openclaw-ai-agents-infostealer-credential-risk</link>
  <guid isPermaLink="true">https://www.cloudsecuritynewsletter.com/p/openclaw-ai-agents-infostealer-credential-risk</guid>
  <pubDate>Thu, 19 Feb 2026 00:52:10 +0000</pubDate>
  <atom:published>2026-02-19T00:52:10Z</atom:published>
    <dc:creator>Ashish Rajan</dc:creator>
  <content:encoded><![CDATA[
    <div class='beehiiv'><style>
  .bh__table, .bh__table_header, .bh__table_cell { border: 1px solid #C0C0C0; }
  .bh__table_cell { padding: 5px; background-color: #FFFFFF; }
  .bh__table_cell p { color: #2D2D2D; font-family: 'Helvetica',Arial,sans-serif !important; overflow-wrap: break-word; }
  .bh__table_header { padding: 5px; background-color:#F1F1F1; }
  .bh__table_header p { color: #2A2A2A; font-family:'Trebuchet MS','Lucida Grande',Tahoma,sans-serif !important; overflow-wrap: break-word; }
</style><div class='beehiiv__body'><h2 class="heading" style="text-align:left;" id="hello-from-the-cloudverse"><span style="background-color:#28EEDA;"><b>Hello from the Cloud-verse!</b></span></h2><p class="paragraph" style="text-align:left;">This week’s Cloud Security Newsletter topic<b>: </b>The Shared Responsibility Gap in AI Workloads: Why Cloud Security Assumptions Break at the LLM Layer<b> </b><a class="link" href="https://www.cloudsecuritypodcast.tv/videos/why-ai-infrastructure-is-harder-to-secure-than-cloud?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=openclaw-ai-agents-are-now-infostealer-targets-using-opensource-for-securing-the-cloud-ai-stack" rel="noopener noreferrer nofollow">(continue reading)</a> </p><div class="button" style="text-align:center;"><a target="_blank" rel="noopener nofollow noreferrer" class="button__link" style="" href="https://links.cloudsecuritypodcast.tv/push-security-feb-2026?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=openclaw-ai-agents-are-now-infostealer-targets-using-opensource-for-securing-the-cloud-ai-stack"><span class="button__text" style=""> This issue is sponsored by Push Security </span></a></div><hr class="content_break"><div class="image"><a class="image__link" href="https://www.cloudsecuritypodcast.tv/videos/why-ai-infrastructure-is-harder-to-secure-than-cloud?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=openclaw-ai-agents-are-now-infostealer-targets-using-opensource-for-securing-the-cloud-ai-stack" rel="noopener" target="_blank"><img alt="" class="image__image" style="" src="https://media.beehiiv.com/cdn-cgi/image/fit=scale-down,format=auto,onerror=redirect,quality=80/uploads/asset/file/5d46bb24-447f-4605-a849-21b058a13e44/Screenshot_2026-02-19_at_12.47.23_AM.png?t=1771462100"/></a><div class="image__source"><span class="image__source_text"><p>This image was generated by AI. It&#39;s still experimental, so it might not be a perfect match!</p></span></div></div><p class="paragraph" style="text-align:left;"><b>Incase, this is your 1st Cloud Security Newsletter! You are in good company! </b><br>You are reading this issue along with your friends and colleagues from companies like <i>Netflix</i>, Citi, <i>JP Morgan, Linkedin, Reddit, Github, Gitlab, CapitalOne, Robinhood, HSBC, British Airways, Airbnb, Block, Booking Inc & more</i> who subscribe to this newsletter, who like you want to learn what’s new with Cloud Security each week from their industry peers like many others who listen to <a class="link" href="https://open.spotify.com/show/6LZgeh4GecRYPc0WrwMB4I?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=openclaw-ai-agents-are-now-infostealer-targets-using-opensource-for-securing-the-cloud-ai-stack" target="_blank" rel="noopener noreferrer nofollow">Cloud Security Podcast</a> & <a class="link" href="https://open.spotify.com/show/3nV4eijfzdHKIvDOaycVII?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=openclaw-ai-agents-are-now-infostealer-targets-using-opensource-for-securing-the-cloud-ai-stack" target="_blank" rel="noopener noreferrer nofollow">AI Security Podcast</a> every week.</p><hr class="content_break"><p class="paragraph" style="text-align:left;">Welcome to this week’s Cloud Security Newsletter</p><p class="paragraph" style="text-align:left;">We are at an inflection point. The same attack vectors that have haunted cloud security teams for a decade  misconfiguration, credential exposure, shadow usage, and blurry shared responsibility  are now being applied with surgical precision to AI workloads. This week&#39;s news cycle makes that unmistakably clear: from the first confirmed infostealer theft of an AI agent&#39;s cryptographic keys to Microsoft&#39;s Copilot silently bypassing DLP controls on confidential email for weeks.</p><p class="paragraph" style="text-align:left;">To help make sense of what this means for practitioners building and securing AI systems on cloud infrastructure, this edition features <b>Toni de la Fuente</b>, founder and CEO of Prowler  one of the most widely deployed open-source cloud security platforms in the industry, with a decade of checks built around real-world misconfiguration patterns. In conversation with Cloud Security Podcast host Ashish Rajan, Toni delivers a grounded, practitioner-first framework for understanding where cloud security ends and AI security begins  and why that distinction matters more than ever. <i>[</i><a class="link" href="https://www.cloudsecuritypodcast.tv/videos/why-ai-infrastructure-is-harder-to-secure-than-cloud?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=openclaw-ai-agents-are-now-infostealer-targets-using-opensource-for-securing-the-cloud-ai-stack" target="_blank" rel="noopener noreferrer nofollow">Listen to the episode</a><i>]</i></p><hr class="content_break"><h2 class="heading" style="text-align:left;" id="tldr-for-busy-readers">📰 TL;DR for Busy Readers</h2><ul><li><p class="paragraph" style="text-align:left;"><b>AI agents are credential stores:</b> Infostealers now exfiltrate gateway tokens, cryptographic keys, and behavioral memory from tools like <b>OpenClaw treat agent configs like privileged secrets.</b></p></li><li><p class="paragraph" style="text-align:left;"><b>Agentic Endpoint Security is now a category:</b> Palo Alto&#39;s ~$400M acquisition of Koi signals every SOC must now inventory AI agents running on endpoints as privileged processes.</p></li><li><p class="paragraph" style="text-align:left;"><b>Copilot can silently nullify your DLP:</b> M365 Copilot bypassed sensitivity labels on Sent Items and Drafts for weeks to audit your Purview logs for Jan 21–early Feb now.</p></li><li><p class="paragraph" style="text-align:left;"><b>The shared responsibility gap has expanded:</b> Bedrock, Vertex, and Azure AI services inherit cloud&#39;s responsibility ambiguity and add new layers  infra, LLM configuration, and shadow AI all need explicit ownership.</p></li></ul><hr class="content_break"><h2 class="heading" style="text-align:left;" id="this-weeks-security-news">📰 <b>THIS WEEK&#39;S TOP 5 SECURITY HEADLINES</b></h2><p class="paragraph" style="text-align:left;">Each story includes <b>why it matters</b> and <b>what to do next</b> — no vendor fluff.</p><h3 class="heading" style="text-align:left;" id="1-infostealers-begin-targeting-ai-a"><b>1. Infostealers Begin Targeting AI Agents: First Confirmed Theft of OpenClaw Credentials and Cryptographic Keys</b></h3><p class="paragraph" style="text-align:left;">Hudson Rock disclosed on February 13 that a Vidar-variant infostealer successfully exfiltrated an OpenClaw AI agent&#39;s entire configuration environment  including its gateway authentication token, public and private cryptographic keys, and <a class="link" href="https://soul.md?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=openclaw-ai-agents-are-now-infostealer-targets-using-opensource-for-securing-the-cloud-ai-stack" target="_blank" rel="noopener noreferrer nofollow">soul.md</a> behavioral memory files. The malware didn&#39;t need a purpose-built AI module: it used a broad file-grabbing routine targeting extensions like .openclaw. OpenClaw has over 200,000 GitHub stars and is increasingly integrated into professional and enterprise workflows.</p><p class="paragraph" style="text-align:left;"><b>Why it matters to you:</b> This isn&#39;t a one-off curiosity  it&#39;s a proof-of-concept that the infostealer ecosystem will follow the value. Just as they built dedicated modules for Chrome credentials and Telegram sessions, dedicated AI agent parsers are coming. The stolen gateway token in this case could allow an attacker to remotely impersonate the victim&#39;s client in authenticated API requests. CISOs should act now: inventory agentic AI tool usage enterprise-wide, enforce encrypted storage of all agent configuration files and tokens, restrict port exposure for locally-running agents, and ensure DLP tooling covers .openclaw, .json, and similar agent file paths.</p><p class="paragraph" style="text-align:left;"><b>Sources:</b><a class="link" href="https://thehackernews.com/2026/02/infostealer-steals-openclaw-ai-agent.html?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=openclaw-ai-agents-are-now-infostealer-targets-using-opensource-for-securing-the-cloud-ai-stack" target="_blank" rel="noopener noreferrer nofollow"> The Hacker News</a> |<a class="link" href="https://www.bleepingcomputer.com/news/security/infostealer-malware-found-stealing-openclaw-secrets-for-first-time/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=openclaw-ai-agents-are-now-infostealer-targets-using-opensource-for-securing-the-cloud-ai-stack" target="_blank" rel="noopener noreferrer nofollow"> BleepingComputer</a> |<a class="link" href="https://www.infosecurity-magazine.com/news/infostealer-targets-openclaw/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=openclaw-ai-agents-are-now-infostealer-targets-using-opensource-for-securing-the-cloud-ai-stack" target="_blank" rel="noopener noreferrer nofollow"> Infosecurity Magazine</a></p><p class="paragraph" style="text-align:left;"></p><hr class="content_break"><p class="paragraph" style="text-align:left;"></p><h3 class="heading" style="text-align:left;" id="2-palo-alto-networks-to-acquire-koi"><b>2. Palo Alto Networks to Acquire Koi Security: &#39;Agentic Endpoint&#39; Emerges as a Formal Security Category (~$400M)</b></h3><p class="paragraph" style="text-align:left;">Palo Alto Networks announced a definitive agreement to acquire Koi Security  a one-year-old Israeli startup already protecting 500,000 endpoints  on February 17, 2026. Koi&#39;s platform gives enterprises visibility and control over AI agents, browser extensions, IDE plugins, MCP servers, and model artifacts that carry deep permissions but bypass traditional endpoint controls. Koi&#39;s capabilities will be folded into Prisma AIRS and Cortex XDR post-close.</p><p class="paragraph" style="text-align:left;"><b>Why it matters to you:</b> This acquisition formally names a new product category and confirms what security architects have been quietly worried about: your endpoint perimeter now includes AI agents. Palo Alto&#39;s own announcement cited 135,000 exposed OpenClaw instances and 800+ malicious skills in its marketplace discovered within days of launch  alongside documentation of the first malicious MCP server in the wild. For defenders: treat AI agents running on your estate (browser copilots, desktop assistants, IDE plugins, local LLM tools) as privileged processes requiring the same conditional access and least-privilege controls you apply to cloud admin sessions.</p><p class="paragraph" style="text-align:left;"><b>Sources:</b><a class="link" href="https://www.paloaltonetworks.com/company/press/2026/palo-alto-networks-announces-intent-to-acquire-koi-to-secure-the-agentic-endpoint?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=openclaw-ai-agents-are-now-infostealer-targets-using-opensource-for-securing-the-cloud-ai-stack" target="_blank" rel="noopener noreferrer nofollow"> Palo Alto Networks Press Release</a> |<a class="link" href="https://www.helpnetsecurity.com/2026/02/17/palo-alto-networks-koi-acquistion/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=openclaw-ai-agents-are-now-infostealer-targets-using-opensource-for-securing-the-cloud-ai-stack" target="_blank" rel="noopener noreferrer nofollow"> Help Net Security</a> |<a class="link" href="https://securityboulevard.com/2026/02/palo-alto-networks-moves-to-secure-agentic-endpoints-with-koi-deal/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=openclaw-ai-agents-are-now-infostealer-targets-using-opensource-for-securing-the-cloud-ai-stack" target="_blank" rel="noopener noreferrer nofollow"> Security Boulevard</a></p><hr class="content_break"><h3 class="heading" style="text-align:left;" id="3-proofpoint-acquires-acuvity-email"><b>3. Proofpoint Acquires Acuvity: Email Giant Bets on AI Governance and MCP Server Visibility</b></h3><p class="paragraph" style="text-align:left;">Proofpoint announced on February 12 the acquisition of Acuvity, whose platform provides unified visibility and enforcement across AI usage  from endpoints and browsers to MCP servers, locally installed tools like OpenClaw and Ollama, and AI-powered workflows embedded in Microsoft 365 and other enterprise SaaS. Financial terms were not disclosed.</p><p class="paragraph" style="text-align:left;"><b>Why it matters to you:</b> Email and collaboration platforms are where AI copilots acquire their richest enterprise context  contracts, calendar data, internal policy docs, financial records. That makes M365, Teams, and SharePoint the highest-value data path for AI-driven exfiltration and shadow AI risk. Proofpoint is positioning Acuvity as the control plane that spans human email behavior, sensitive data governance, and AI agent activity in a single policy layer. Alongside the Palo Alto/Koi deal, the market signal is loud: AI governance is an active buying priority now, not a 2027 roadmap item. Defenders should immediately define an AI usage policy covering approved models, permitted data classifications, required logging of prompts and responses, and guardrails around all SaaS connectors your agents are authorized to touch.</p><p class="paragraph" style="text-align:left;"><b>Sources:</b><a class="link" href="https://www.proofpoint.com/us/newsroom/press-releases/proofpoint-acquires-acuvity-deliver-ai-security-and-governance-across?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=openclaw-ai-agents-are-now-infostealer-targets-using-opensource-for-securing-the-cloud-ai-stack" target="_blank" rel="noopener noreferrer nofollow"> Proofpoint Press Release</a> |<a class="link" href="https://cyberscoop.com/proofpoint-acuvity-deal-agentic-ai-security/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=openclaw-ai-agents-are-now-infostealer-targets-using-opensource-for-securing-the-cloud-ai-stack" target="_blank" rel="noopener noreferrer nofollow"> CyberScoop</a> |<a class="link" href="https://www.helpnetsecurity.com/2026/02/13/proofpoint-acquired-acuvity/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=openclaw-ai-agents-are-now-infostealer-targets-using-opensource-for-securing-the-cloud-ai-stack" target="_blank" rel="noopener noreferrer nofollow"> Help Net Security</a></p><hr class="content_break"><h3 class="heading" style="text-align:left;" id="4-microsoft-365-copilot-bug-bypasse"><b>4. Microsoft 365 Copilot Bug Bypassed DLP Controls, Summarizing Confidential Emails for Weeks (CW1226324)</b></h3><p class="paragraph" style="text-align:left;">Microsoft confirmed that a bug tracked as CW1226324 caused M365 Copilot to summarize confidential emails from users&#39; Sent Items and Drafts folders from January 21 through early February  bypassing sensitivity labels explicitly configured to restrict automated tool access. The root cause was a code issue in the Copilot work tab that incorrectly picked up labeled items in those folders. Items in other folders were not affected. A server-side fix began rolling out in early February; no final remediation timeline has been provided and the number of impacted tenants has not been disclosed.</p><p class="paragraph" style="text-align:left;"><b>Why it matters to you:</b> This is a governance stress test for every organization running Copilot. Server-side AI processing can nullify tenant controls without any action from administrators or end users  and Sent Items and Drafts routinely hold content subject to attorney-client privilege, regulatory protections, and contractual confidentiality. Immediate actions: verify your tenant received the CW1226324 remediation; review Purview Copilot activity logs for January 21 through early February; use Restricted Content Discovery to harden SharePoint exclusions; and temporarily restrict Copilot Chat for high-risk user groups (legal, HR, executive, finance) until you can validate DLP is functioning as expected. Strategically, this incident demands that AI feature releases be added to your change management process with a mandatory DLP regression test before rollout.</p><p class="paragraph" style="text-align:left;"><b>Sources:</b><a class="link" href="https://www.bleepingcomputer.com/news/microsoft/microsoft-says-bug-causes-copilot-to-summarize-confidential-emails/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=openclaw-ai-agents-are-now-infostealer-targets-using-opensource-for-securing-the-cloud-ai-stack" target="_blank" rel="noopener noreferrer nofollow"> BleepingComputer</a> |<a class="link" href="https://techcrunch.com/2026/02/18/microsoft-says-office-bug-exposed-customers-confidential-emails-to-copilot-ai/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=openclaw-ai-agents-are-now-infostealer-targets-using-opensource-for-securing-the-cloud-ai-stack" target="_blank" rel="noopener noreferrer nofollow"> TechCrunch</a> |<a class="link" href="https://www.theregister.com/2026/02/18/microsoft_copilot_data_loss_prevention/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=openclaw-ai-agents-are-now-infostealer-targets-using-opensource-for-securing-the-cloud-ai-stack" target="_blank" rel="noopener noreferrer nofollow"> The Register</a> |<a class="link" href="https://office365itpros.com/2026/02/13/dlp-policy-for-copilot-bug/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=openclaw-ai-agents-are-now-infostealer-targets-using-opensource-for-securing-the-cloud-ai-stack" target="_blank" rel="noopener noreferrer nofollow"> Office 365 IT Pros</a></p><hr class="content_break"><h2 class="heading" style="text-align:left;" id="cloud-security-topic-of-the-week">🎯 Cloud Security Topic of the Week: </h2><h3 class="heading" style="text-align:left;" id="the-shared-responsibility-gap-in-ai"><b>The Shared Responsibility Gap in AI Workloads: Why Cloud Security Assumptions Break at the LLM Layer</b></h3><p class="paragraph" style="text-align:left;">If you spent the last decade internalizing AWS&#39;s shared responsibility model, your data, your configuration, your IAM; their hypervisor, and their physical infrastructure, prepare to rethink those clean lines. Managed AI services like Amazon Bedrock, Google Vertex AI, and Azure AI introduce layered dependencies (and layered ambiguity) that make the cloud SRM look elegantly simple by comparison.</p><p class="paragraph" style="text-align:left;">The central question practitioners must now answer isn&#39;t just &quot;what does the vendor secure?&quot;  it&#39;s &quot;which of the five or six parties in my AI stack has responsibility for what, and does any of them actually know?&quot; As Toni de la Fuente explains in this week&#39;s conversation, this isn&#39;t a theoretical gap. It&#39;s the exact same disorientation cloud teams experienced in the early days of RDS and Lambda  amplified across LLMs, agent runtimes, MCP servers, and the prompt-to-response pipeline that now touches your most sensitive enterprise data.</p><hr class="content_break"><h2 class="heading" style="text-align:left;" id="featured-experts-this-week"><b>Featured Experts This Week </b>🎤</h2><ul><li><p class="paragraph" style="text-align:left;"><i><b>Toni de la Fuente</b></i><b>- </b> CEO, Prowler Security</p></li><li><p class="paragraph" style="text-align:left;"><b><a class="link" href="https://www.linkedin.com/in/ashishrajan/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=openclaw-ai-agents-are-now-infostealer-targets-using-opensource-for-securing-the-cloud-ai-stack" target="_blank" rel="noopener noreferrer nofollow">Ashish Rajan</a></b> - CISO | Co-Host <a class="link" href="https://open.spotify.com/show/3nV4eijfzdHKIvDOaycVII?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=openclaw-ai-agents-are-now-infostealer-targets-using-opensource-for-securing-the-cloud-ai-stack" target="_blank" rel="noopener noreferrer nofollow">AI Security Podcast</a> , Host of <a class="link" href="https://www.cloudsecuritypodcast.tv/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=openclaw-ai-agents-are-now-infostealer-targets-using-opensource-for-securing-the-cloud-ai-stack" target="_blank" rel="noopener noreferrer nofollow">Cloud Security Podcast</a></p></li></ul><h2 class="heading" style="text-align:left;" id="definitions-and-core-concepts"><b>Definitions and Core Concepts 📚</b></h2><p class="paragraph" style="text-align:left;">Before diving into our insights, let&#39;s clarify some key terms:</p><ul><li><p class="paragraph" style="text-align:left;"><b>MCP (Model Context Protocol)</b> An open protocol that allows AI agents and language models to connect to external tools and data sources  databases, APIs, file systems  in a standardized way. MCP enables agents to &quot;do things&quot; beyond generating text: query live data, execute functions, and interact with enterprise systems. Because MCP servers often carry broad permissions, an insecure MCP implementation (e.g., one that speaks directly to a production database without RBAC) represents a significant attack surface.</p></li><li><p class="paragraph" style="text-align:left;"><b>Agentic Endpoint Security</b> A new security category, formalized this week by Palo Alto&#39;s Koi acquisition, focused on providing visibility and control over AI agents and related software (browser extensions, IDE plugins, MCP servers, local LLM tools) running on endpoints. These processes operate with high privileges and deep data access but bypass traditional endpoint security controls designed for human-interactive applications.</p></li><li><p class="paragraph" style="text-align:left;"><b>Shadow AI</b> Unauthorized or unmonitored use of AI tools within an organization  employees using consumer ChatGPT, DeepSeek, or other models to process enterprise data without IT or security awareness. Shadow AI creates data governance and regulatory exposure, particularly when users upload contracts, PII, or confidential internal documents to models that may use that data for training.</p></li><li><p class="paragraph" style="text-align:left;"><b>Promptfoo</b> An open-source LLM assessment framework that supports red-teaming, vulnerability scanning, and OWASP/ATLAS threat mapping for language models. Prowler has integrated Promptfoo to allow security teams to assess LLMs as part of their existing cloud security scanning workflows.</p></li><li><p class="paragraph" style="text-align:left;"><b>DLP (Data Loss Prevention)</b> Security controls that detect and prevent unauthorized transmission or exposure of sensitive data. In M365, DLP relies on sensitivity labels to restrict automated tool access to protected content. This week&#39;s Copilot incident exposed a critical assumption gap: that server-side AI pipelines respect tenant-configured DLP policies  they may not.</p></li></ul><hr class="content_break"><p class="paragraph" style="text-align:center;">This week&#39;s issue is sponsored by <b><a class="link" href="https://links.cloudsecuritypodcast.tv/push-security-feb-2026?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=openclaw-ai-agents-are-now-infostealer-targets-using-opensource-for-securing-the-cloud-ai-stack" target="_blank" rel="noopener noreferrer nofollow">Push Security</a></b></p><p class="paragraph" style="text-align:center;"><b>Stop browser-based attacks with </b><b><a class="link" href="https://links.cloudsecuritypodcast.tv/push-security-feb-2026?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=openclaw-ai-agents-are-now-infostealer-targets-using-opensource-for-securing-the-cloud-ai-stack" target="_blank" rel="noopener noreferrer nofollow">Push Security</a></b><br><br>Major breaches are increasingly originating in the browser, where traditional security controls can’t detect them. This is a conscious shift, with both criminal and nation-state actors adopting browser-native TTPs into their standard toolkit.<br><br>Push Security brings real-time detection and response into every browser — where today’s work and attacks actually happen. Push gives security teams visibility into modern threats, proactive control over user risk, and powerful telemetry to detect, investigate, and stop attacks fast.<br><br>Frictionless deployment. Instant protection. </p><p class="paragraph" style="text-align:center;"><a class="link" href="https://links.cloudsecuritypodcast.tv/push-security-feb-2026?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=openclaw-ai-agents-are-now-infostealer-targets-using-opensource-for-securing-the-cloud-ai-stack" target="_blank" rel="noopener noreferrer nofollow">Check out Push Security now.</a></p><hr class="content_break"><h2 class="heading" style="text-align:left;" id="our-insights-from-this-practitioner">💡<b>Our Insights from this Practitioner 🔍</b></h2><h3 class="heading" style="text-align:left;" id="the-shared-responsibility-gap-in-ai"><b>The Shared Responsibility Gap in AI Workloads: Why Cloud Security Assumptions Break at the LLM Layer (</b><a class="link" href="https://www.cloudsecuritypodcast.tv/videos/why-ai-infrastructure-is-harder-to-secure-than-cloud?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=openclaw-ai-agents-are-now-infostealer-targets-using-opensource-for-securing-the-cloud-ai-stack" target="_blank" rel="noopener noreferrer nofollow"><b>Full Episode here</b></a><b>)</b></h3><hr class="content_break"><h3 class="heading" style="text-align:left;" id="1-cloud-security-and-ai-security-ar"><b>1. Cloud Security and AI Security Are Overlapping, Not Identical  and That Distinction Has Real Architectural Consequences</b></h3><p class="paragraph" style="text-align:left;">One of the most clarifying frameworks Toni offers is a simple structural decomposition: AI systems have two security domains, and they require different thinking. The first is the infrastructure of AI  GPUs, storage, the cloud services (S3, SageMaker, Bedrock, Vertex) that host the data pipelines and model training workloads. This domain largely inherits cloud security controls, and practitioners with cloud backgrounds are well-equipped to address it. The second is the AI itself  the model configuration, the guardrails, the prompt injection controls, the behavioral parameters that determine what the model can do, what it knows, and what it will refuse.</p><p class="paragraph" style="text-align:left;"><i>&quot;The same way that we distinguish between cloud infrastructure and application infrastructure, we can distinguish between AI infrastructure or AI configuration itself.&quot;</i>  Toni </p><p class="paragraph" style="text-align:left;">The practical implication for security architects: when reviewing a new AI workload for production readiness, you need two distinct checklists. One that treats it like a cloud workload (IAM, network exposure, encryption, logging) and one that treats it like a model deployment (guardrail configuration, prompt injection testing, data classification of training inputs, model access control).</p><hr class="content_break"><h3 class="heading" style="text-align:left;" id="2-the-shared-responsibility-model-h"><b>2. The Shared Responsibility Model Has Been Extended  and the Boundaries Are Murkier Than Ever</b></h3><p class="paragraph" style="text-align:left;">Toni draws a direct parallel between the confusion early cloud adopters felt about services like RDS and what practitioners are now experiencing with Bedrock and similar managed AI services. The difference is that AI services add multiple new configuration surfaces  guardrails, model access policies, prompt filtering  that vendors are still publishing documentation for as they go.</p><p class="paragraph" style="text-align:left;"><i>&quot;If a company like AWS launches AI service like Bedrock, you could expect that it&#39;s following all the best practices. But what about your side or what about your expectations? I mean, that is very blurry. Sometimes it&#39;s not even clear for anybody  not for the customer, not for integrators, and of course not for the CSP.&quot;  - Toni</i></p><p class="paragraph" style="text-align:left;">Ashish adds a critical structural observation: &quot;I used to be the first party with just me and things I manage on my virtual machine. Then I had a third party with me and Amazon managing my workload. Now I have a fourth party, fifth party as well  because now my Bedrock is the access to my Claude or OpenAI.&quot;</p><p class="paragraph" style="text-align:left;">This isn&#39;t an abstraction. Each party in that chain carries its own configuration surface, its own security posture, and its own data handling behaviors. Security teams that have mapped shared responsibility for IaaS are now working with dependency chains where the blast radius of a misconfiguration is significantly harder to contain.</p><p class="paragraph" style="text-align:left;"><b>Practical guidance:</b> Map your AI stack completely. For every AI service in your environment, document which configuration options belong to your responsibility, which are vendor-managed, and which are simply undisclosed. Build your security controls around the API endpoints you can see and modify  that is your accountability boundary.</p><hr class="content_break"><h3 class="heading" style="text-align:left;" id="3-mcp-security-the-new-frontier-whe"><b>3. MCP Security: The New Frontier Where Most AI Applications Are Getting It Wrong</b></h3><p class="paragraph" style="text-align:left;">One of the most operationally actionable insights in this conversation comes from Toni&#39;s direct experience building Prowler&#39;s own MCP implementation. The team observed a consistent pattern across community-built AI architectures: MCP servers deployed with direct database access, no RBAC enforcement, and no authentication layer beneath the tool interface. This is the AI equivalent of exposing an admin console directly to the internet without a jump server.</p><p class="paragraph" style="text-align:left;"><i>&quot;We have seen many how insecure the default configuration or default AI architecture with MCP can be. Never compute an MCP talking to a database directly. Put your MCP on top of your RBAC, and the RBAC is below the API.&quot;</i>  Toni de la Fuente</p><p class="paragraph" style="text-align:left;">The architectural pattern Prowler enforces in its own implementation: MCP → RBAC → API → Database. Each layer has a defined access control boundary. The MCP server has no direct datastore access; it operates within permissions granted through the RBAC layer. This pattern translates directly to any enterprise building production agentic applications.</p><p class="paragraph" style="text-align:left;">For teams currently evaluating or deploying AI agents connected to internal systems: treat every MCP endpoint as a privileged interface. Apply the same conditional access controls you&#39;d require for a cloud admin session  MFA, session logging, least-privilege permission scoping, and regular access reviews.</p><hr class="content_break"><h3 class="heading" style="text-align:left;" id="4-ai-doesnt-know-everything-and-ass"><b>4. AI Doesn&#39;t Know Everything  and Assuming It Does Is a Security Risk</b></h3><p class="paragraph" style="text-align:left;">Toni describes an illuminating experiment the Prowler team ran with Claude Code: they asked it to identify any S3 buckets open to the public internet using only AWS credentials, a check Toni himself wrote nearly ten years ago as one of Prowler&#39;s first detections. Claude Code worked through several incorrect approaches before eventually identifying Prowler as the appropriate tool for the task.</p><p class="paragraph" style="text-align:left;"><i>&quot;We think that AI is going to know everything magically. This is not magic. You have to measure between AI-created detections or using AI to take advantage of rule-based detections. At the end of the day, that is what we truly believe is needed  AI around everything, but sometimes you have to tell the AI: no, this is the A, B, C that you have to take into account.&quot;</i> </p><p class="paragraph" style="text-align:left;">The security implication extends well beyond tooling: AI-driven detection systems deployed as autonomous agents carry real risk if their limitations aren&#39;t acknowledged and guarded against. The failure mode isn&#39;t dramatic  it&#39;s subtle. An agent that confidently executes the wrong check or scans the wrong region doesn&#39;t trigger an alert; it just leaves a gap. The corrective is to use AI to augment and accelerate rule-based detection engines  not replace them.</p><hr class="content_break"><h3 class="heading" style="text-align:left;" id="5-the-sdlc-has-changed-and-security"><b>5. The SDLC Has Changed  and Security Teams Need to Catch Up</b></h3><p class="paragraph" style="text-align:left;">Perhaps the most strategically significant observation in this conversation is about how AI is changing who can deploy cloud infrastructure. Tools like Claude Code and Lovable enable developers with limited infrastructure experience to provision cloud resources, generate Terraform, configure databases, and deploy to production  all through conversational prompts.</p><p class="paragraph" style="text-align:left;">Toni frames this directly: &quot;<i>When you create a new application with Claude Code, it&#39;s going to generate the Terraform code, ask you for credentials in AWS, and then you deploy something there  with your storage, database, everything. Now what?</i>&quot;</p><p class="paragraph" style="text-align:left;">The &#39;now what&#39; is exactly the security gap. Applications are being generated and deployed faster than security review processes can follow. The answer is continuous scanning across both the infrastructure-as-code layer and the running cloud environment  and AI-powered tooling is now fast enough to make this feasible at developer velocity. The security team&#39;s role is to ensure those scans are running, that findings are routed back into the development pipeline, and that AI-generated infrastructure doesn&#39;t escape into production carrying the same misconfigurations that have been exploitable for years.</p><hr class="content_break"><h3 class="heading" style="text-align:left;" id="6-three-security-controls-every-a-i"><b>6. Three Security Controls Every AI-on-Cloud Workload Needs Before Production</b></h3><p class="paragraph" style="text-align:left;">When pressed by Ashish for a concise framework for practitioners moving AI-on-cloud workloads to production, Toni offered three pillars:</p><ul><li><p class="paragraph" style="text-align:left;"><b>Secure the infrastructure:</b> Treat the cloud workload running the AI application with the same rigor as any production cloud environment  IAM least privilege, network segmentation, encryption at rest and in transit, runtime monitoring. This is table stakes, and AI doesn&#39;t change it.</p></li><li><p class="paragraph" style="text-align:left;"><b>Secure the LLM:</b> Know where your model lives. Is it a shared multi-tenant service? Does the vendor use your prompts and data to improve the model? Assess your LLM configuration using tools like Promptfoo and map findings to the OWASP Top 10 for LLMs and MITRE ATLAS.</p></li><li><p class="paragraph" style="text-align:left;"><b>Know who has access:</b> Shadow AI is not a theoretical problem, it&#39;s happening across your organization today. Establish an AI usage policy, enforce it through technical controls (DLP, web filtering, identity governance), and build visibility into which AI tools are accessing which enterprise data through which connectors.</p></li></ul><p class="paragraph" style="text-align:left;">This framework maps cleanly onto this week&#39;s news. The Microsoft Copilot DLP failure is a pillar two and three failure. The OpenClaw infostealer theft is a pillar of three failure  credentials for an AI agent treated like user data instead of privileged secrets. The Koi acquisition is the market responding to the pillar one gap on the endpoint.</p><hr class="content_break"><h2 class="heading" style="text-align:left;" id="related-podcast-episodes"><b>RELATED RESOURCES 🎧</b></h2><ul><li><p class="paragraph" style="text-align:left;"><a class="link" href="https://www.cisa.gov/known-exploited-vulnerabilities-catalog?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=openclaw-ai-agents-are-now-infostealer-targets-using-opensource-for-securing-the-cloud-ai-stack" target="_blank" rel="noopener noreferrer nofollow">CISA Known Exploited Vulnerabilities Catalog</a> - Authoritative source for actively exploited CVEs requiring remediation</p></li><li><p class="paragraph" style="text-align:left;"><a class="link" href="https://docs.anthropic.com/claude/docs/model-context-protocol?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=openclaw-ai-agents-are-now-infostealer-targets-using-opensource-for-securing-the-cloud-ai-stack" target="_blank" rel="noopener noreferrer nofollow">Anthropic&#39;s Model Context Protocol Documentation</a> - Technical specifications and security considerations for MCP</p></li><li><p class="paragraph" style="text-align:left;"><a class="link" href="https://cloudsecurityalliance.org/research/topics/ai?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=openclaw-ai-agents-are-now-infostealer-targets-using-opensource-for-securing-the-cloud-ai-stack" target="_blank" rel="noopener noreferrer nofollow">Cloud Security Alliance: AI Security Guidance</a> - Enterprise frameworks for AI governance</p></li></ul><h3 class="heading" style="text-align:left;" id="cloud-security-podcast"><b>Cloud Security Podcast</b></h3><ul><li><p class="paragraph" style="text-align:left;"><a class="link" href="https://www.cloudsecuritypodcast.tv/videos/why-ai-infrastructure-is-harder-to-secure-than-cloud?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=openclaw-ai-agents-are-now-infostealer-targets-using-opensource-for-securing-the-cloud-ai-stack" target="_blank" rel="noopener noreferrer nofollow"><b>Cloud Security Podcast Episode with Toni De la Fuente</b></a></p></li></ul><hr class="content_break"><h2 class="heading" style="text-align:left;" id="related-podcast-episodes"><b>Related Podcast Episodes 🎧</b></h2><div class="image"><a class="image__link" href="https://www.cloudsecuritypodcast.tv/videos/why-ai-infrastructure-is-harder-to-secure-than-cloud?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=openclaw-ai-agents-are-now-infostealer-targets-using-opensource-for-securing-the-cloud-ai-stack" rel="noopener" target="_blank"><img alt="" class="image__image" style="" src="https://media.beehiiv.com/cdn-cgi/image/fit=scale-down,format=auto,onerror=redirect,quality=80/uploads/asset/file/b385d1ef-a90f-4f85-8a62-9e14e5582eee/S07EP06_Toni_De_La_Fuente.jpg?t=1771460969"/></a></div><h3 class="heading" style="text-align:left;" id="question-for-you-reply-to-this-emai">Question for you? (Reply to this email)</h3><p class="paragraph" style="text-align:left;">🤔<b> </b>Are you currently:</p><p class="paragraph" style="text-align:left;">A) Blocking AI tools<br>B) Allowing everytrhing<br>C) Trying to build decision-point governance</p><p class="paragraph" style="text-align:left;">Reply and tell me where you are.<br></p><p class="paragraph" style="text-align:left;">Next week, we&#39;ll explore another critical aspect of cloud security. Stay tuned!</p><hr class="content_break"><p class="paragraph" style="text-align:left;">📬 Want weekly expert takes on AI & Cloud Security? [<a class="link" href="https://www.cloudsecuritynewsletter.com/subscribe?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=openclaw-ai-agents-are-now-infostealer-targets-using-opensource-for-securing-the-cloud-ai-stack" target="_blank" rel="noopener noreferrer nofollow">Subscribe here</a>]”</p><hr class="content_break"><p class="paragraph" style="text-align:left;"><span style="color:rgb(238, 40, 60);"><b><a class="link" href="mailto:info@cloudsecuritypodcast.tv" target="_blank" rel="noopener noreferrer nofollow">We would love to hear from you</a></b></span>📢 for a feature or topic request or if you would like to sponsor an edition of Cloud Security Newsletter. </p><p class="paragraph" style="text-align:left;">Thank you for continuing to subscribe and Welcome to the new members in tis newsletter community💙</p><p class="paragraph" style="text-align:start;">Peace!</p><p class="paragraph" style="text-align:start;"><a class="link" href="https://www.linkedin.com/in/shilpi-bhattacharjee/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=openclaw-ai-agents-are-now-infostealer-targets-using-opensource-for-securing-the-cloud-ai-stack" target="_blank" rel="noopener noreferrer nofollow">Shilpi Bhattacharjee</a></p><div class="image"><a class="image__link" href="https://www.cloudsecuritypodcast.tv/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=openclaw-ai-agents-are-now-infostealer-targets-using-opensource-for-securing-the-cloud-ai-stack" rel="noopener" target="_blank"><img alt="" class="image__image" style="" src="https://media.beehiiv.com/cdn-cgi/image/fit=scale-down,format=auto,onerror=redirect,quality=80/uploads/asset/file/dc094a1c-678a-43e0-adc9-1bee6c3499e2/CSP_Logo_Blue_ScreenRes_3000x3000_v2.jpg"/></a></div><div class="button" style="text-align:left;"><a target="_blank" rel="noopener nofollow noreferrer" class="button__link" style="" href="{{rp_referral_hub_url}}"><span class="button__text" style=""> Share the newsletter </span></a></div><p class="paragraph" style="text-align:left;">Was this forwarded to you? You can <a class="link" href="https://www.cloudsecuritynewsletter.com/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=openclaw-ai-agents-are-now-infostealer-targets-using-opensource-for-securing-the-cloud-ai-stack" target="_blank" rel="noopener noreferrer nofollow">Sign up here</a>, to join our growing readership.</p><p class="paragraph" style="text-align:left;">Want to <b>sponsor</b> the next newsletter edition! <a class="link" href="mailto:info@cloudsecuritypodcast.tv" target="_blank" rel="noopener noreferrer nofollow">Lets make it happen </a></p><p class="paragraph" style="text-align:left;">Have you joined our FREE <b>Monthly</b> <a class="link" href="https://www.cloudsecuritybootcamp.com/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=openclaw-ai-agents-are-now-infostealer-targets-using-opensource-for-securing-the-cloud-ai-stack" target="_blank" rel="noopener noreferrer nofollow">Cloud Security Bootcamp</a> yet?</p><p class="paragraph" style="text-align:left;">checkout our <b>sister podcast</b> <a class="link" href="https://www.youtube.com/@AISecurityPodcast?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=openclaw-ai-agents-are-now-infostealer-targets-using-opensource-for-securing-the-cloud-ai-stack" target="_blank" rel="noopener noreferrer nofollow">AI Security Podcast</a></p></div><div class='beehiiv__footer'><br class='beehiiv__footer__break'><hr class='beehiiv__footer__line'><a target="_blank" class="beehiiv__footer_link" style="text-align: center;" href="https://www.beehiiv.com/?utm_campaign=15abca6b-b29e-4054-8251-b8a1c2d9efe2&utm_medium=post_rss&utm_source=cloud_security_newsletter">Powered by beehiiv</a></div></div>
  ]]></content:encoded>
</item>

      <item>
  <title>🚨 60K Cloud Servers Compromised + The AI Governance Illusion</title>
  <description>This week: Critical vulnerabilities under active exploitation, cloud-native worm TeamPCP compromises 60K+ servers across AWS/Azure/GCP, and AI security adoption strategies from Harmonic Security&#39;s CTO on building developer-friendly governance that actually works.</description>
      <enclosure url="https://media.beehiiv.com/cdn-cgi/image/fit=scale-down,format=auto,onerror=redirect,quality=80/uploads/asset/file/4c0fe123-23d9-4319-ab5c-d3fd3ac73f3c/Screenshot_2026-02-12_at_12.01.51_AM.png" length="1414785" type="image/png"/>
  <link>https://www.cloudsecuritynewsletter.com/p/60k-cloud-servers-wormed-ai-governance-mcp</link>
  <guid isPermaLink="true">https://www.cloudsecuritynewsletter.com/p/60k-cloud-servers-wormed-ai-governance-mcp</guid>
  <pubDate>Thu, 12 Feb 2026 00:24:56 +0000</pubDate>
  <atom:published>2026-02-12T00:24:56Z</atom:published>
    <dc:creator>Ashish Rajan</dc:creator>
  <content:encoded><![CDATA[
    <div class='beehiiv'><style>
  .bh__table, .bh__table_header, .bh__table_cell { border: 1px solid #C0C0C0; }
  .bh__table_cell { padding: 5px; background-color: #FFFFFF; }
  .bh__table_cell p { color: #2D2D2D; font-family: 'Helvetica',Arial,sans-serif !important; overflow-wrap: break-word; }
  .bh__table_header { padding: 5px; background-color:#F1F1F1; }
  .bh__table_header p { color: #2A2A2A; font-family:'Trebuchet MS','Lucida Grande',Tahoma,sans-serif !important; overflow-wrap: break-word; }
</style><div class='beehiiv__body'><h2 class="heading" style="text-align:left;" id="hello-from-the-cloudverse"><span style="background-color:#28EEDA;"><b>Hello from the Cloud-verse!</b></span></h2><p class="paragraph" style="text-align:left;">This week’s Cloud Security Newsletter topic<b>: </b><b>The Developer-First AI Governance Dilemma: Building Security Controls That Don&#39;t Get Routed Around</b><b> </b><a class="link" href="#cloud-security-topic-of-the-week" rel="noopener noreferrer nofollow">(continue reading)</a> </p><div class="button" style="text-align:center;"><a target="_blank" rel="noopener nofollow noreferrer" class="button__link" style="" href="https://links.cloudsecuritypodcast.tv/push-security-feb-2026?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=60k-cloud-servers-compromised-the-ai-governance-illusion"><span class="button__text" style=""> This issue is sponsored by Push Security </span></a></div><hr class="content_break"><div class="image"><a class="image__link" href="https://www.cloudsecuritypodcast.tv/videos/vulnerability-management-vs-exposure-management?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=60k-cloud-servers-compromised-the-ai-governance-illusion" rel="noopener" target="_blank"><img alt="" class="image__image" style="" src="https://media.beehiiv.com/cdn-cgi/image/fit=scale-down,format=auto,onerror=redirect,quality=80/uploads/asset/file/4c0fe123-23d9-4319-ab5c-d3fd3ac73f3c/Screenshot_2026-02-12_at_12.01.51_AM.png?t=1770854540"/></a><div class="image__source"><span class="image__source_text"><p>This image was generated by AI. It&#39;s still experimental, so it might not be a perfect match!</p></span></div></div><p class="paragraph" style="text-align:left;"><b>Incase, this is your 1st Cloud Security Newsletter! You are in good company! </b><br>You are reading this issue along with your friends and colleagues from companies like <i>Netflix</i>, Citi, <i>JP Morgan, Linkedin, Reddit, Github, Gitlab, CapitalOne, Robinhood, HSBC, British Airways, Airbnb, Block, Booking Inc & more</i> who subscribe to this newsletter, who like you want to learn what’s new with Cloud Security each week from their industry peers like many others who listen to <a class="link" href="https://open.spotify.com/show/6LZgeh4GecRYPc0WrwMB4I?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=60k-cloud-servers-compromised-the-ai-governance-illusion" target="_blank" rel="noopener noreferrer nofollow">Cloud Security Podcast</a> & <a class="link" href="https://open.spotify.com/show/3nV4eijfzdHKIvDOaycVII?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=60k-cloud-servers-compromised-the-ai-governance-illusion" target="_blank" rel="noopener noreferrer nofollow">AI Security Podcast</a> every week.</p><hr class="content_break"><p class="paragraph" style="text-align:left;">Welcome to this week’s Cloud Security Newsletter</p><p class="paragraph" style="text-align:left;">As ransomware operators actively exploit critical vulnerabilities in email infrastructure and cloud-native threats evolve into self-propagating worms across multi-cloud environments, security teams face a fundamental question: how do we govern rapidly evolving attack surfaces while maintaining the agility that development teams demand?</p><p class="paragraph" style="text-align:left;">This week, we examine three critical security stories from the SmarterMail CVE-2026-24423 ransomware campaign to the TeamPCP botnet&#39;s multi-cloud rampage and cloud platform abuse for phishing infrastructure. More importantly, we dive deep into a conversation with <b>Bryan Woolgar-O&#39;Neil, CTO and Co-Founder of Harmonic Security</b>, who shares hard-won lessons on implementing AI security governance that developers don&#39;t route around. <i>[</i><a class="link" href="https://www.cloudsecuritypodcast.tv/videos/is-developer-friendly-ai-security-possible-with-mcp-shadow-ai?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=60k-cloud-servers-compromised-the-ai-governance-illusion" target="_blank" rel="noopener noreferrer nofollow">Listen to the episode</a><i>]</i></p><hr class="content_break"><h2 class="heading" style="text-align:left;" id="this-weeks-mental-security-check">🚨 This Week’s Mental Security Check</h2><ul><li><p class="paragraph" style="text-align:left;">🔥 <b>Ransomware actively exploiting email RCE</b></p></li><li><p class="paragraph" style="text-align:left;">☁️ <b>60,000+ servers wormed across AWS, Azure & GCP</b></p></li><li><p class="paragraph" style="text-align:left;">🎣 <b>Phishing kits hosted on trusted cloud domains</b></p></li><li><p class="paragraph" style="text-align:left;">🤖 <b>Developers running local MCP servers with production access</b></p></li></ul><p class="paragraph" style="text-align:left;">Two questions to ask:</p><p class="paragraph" style="text-align:left;">1️⃣ Are your Docker/Kubernetes endpoints exposed?<br>2️⃣ Do you actually know where AI is touching production data?</p><hr class="content_break"><h2 class="heading" style="text-align:left;" id="tldr-for-busy-readers">📰 TL;DR for Busy Readers</h2><ul><li><p class="paragraph" style="text-align:left;">Patch SmarterMail CVE-2026-24423 immediately</p></li><li><p class="paragraph" style="text-align:left;">Audit exposed Docker APIs & Kubernetes control planes</p></li><li><p class="paragraph" style="text-align:left;">Assume phishing infrastructure now lives on trusted cloud domains</p></li><li><p class="paragraph" style="text-align:left;">70%+ of AI usage in most orgs is invisible</p></li><li><p class="paragraph" style="text-align:left;">Coaching-based AI governance beats blanket blocking</p></li></ul><hr class="content_break"><h2 class="heading" style="text-align:left;" id="this-weeks-security-news">📰 <b>THIS WEEK&#39;S TOP 5 SECURITY HEADLINES</b></h2><p class="paragraph" style="text-align:left;">Each story includes <b>why it matters</b> and <b>what to do next</b> — no vendor fluff.</p><h3 class="heading" style="text-align:left;" id="1-team-pcp-botnet-the-first-real-mu"><b>1. ☁️ TeamPCP Botnet - </b><b>The First Real Multi-</b><b> Cloud-Native Worm</b><b> Worm</b><b> Compromises 60,000+ Servers Across AWS, Azure, and GCP</b></h3><p class="paragraph" style="text-align:left;"><b>What Happened</b><br>Security researchers at Flare published findings on February 9 detailing TeamPCP, a sophisticated cloud-native worm that has successfully compromised over 60,000 servers across all three major cloud platforms AWS, Azure, and Google Cloud Platform. The malware operates through automated exploitation of exposed Docker APIs, misconfigured Kubernetes clusters, and the React2Shell vulnerability, spreading laterally within and across cloud environments. Once established, TeamPCP deploys cryptocurrency miners and functions as command-and-control infrastructure for ransomware operations. The attack chain demonstrates cloud-specific tradecraft, including abuse of cloud metadata services, IAM credential harvesting, and multi-cloud persistence mechanisms.</p><p class="paragraph" style="text-align:left;"><b>Why It Matters</b><br> TeamPCP represents the maturation of cloud-native attack techniques into wormable, self-propagating threats that treat multi-cloud infrastructure as a unified attack surface. This isn&#39;t an opportunistic cryptominer it&#39;s a platform for ransomware C2 infrastructure that happens to self-fund through mining while establishing persistent access across your cloud estate.</p><p class="paragraph" style="text-align:left;">The multi-cloud dimension is particularly concerning. Organizations that assume cloud security posture management tools provide adequate protection should revisit their assumptions TeamPCP demonstrates how attackers move fluidly between AWS, Azure, and GCP, exploiting the seams between different security tooling and visibility gaps. Your cloud security architecture needs unified detection, shared threat intelligence, and consistent security policies across all cloud platforms.</p><p class="paragraph" style="text-align:left;"><b>Immediate actions</b>: </p><ul><li><p class="paragraph" style="text-align:left;">Scan for exposed Docker APIs</p></li><li><p class="paragraph" style="text-align:left;">Audit internet-facing Docker and Kubernetes endpoints</p></li><li><p class="paragraph" style="text-align:left;">review IAM policies for least privilege violations,</p></li><li><p class="paragraph" style="text-align:left;">Review Kubernetes API server exposure</p></li><li><p class="paragraph" style="text-align:left;">Enable VPC Flow Logs across environments</p></li><li><p class="paragraph" style="text-align:left;">Monitor metadata service access attempts</p></li><li><p class="paragraph" style="text-align:left;">Threat hunting should focus on abnormal CPU/GPU usage</p></li><li><p class="paragraph" style="text-align:left;">outbound C2 patterns</p></li><li><p class="paragraph" style="text-align:left;">credential access attempts against metadata services.</p></li></ul><p class="paragraph" style="text-align:left;"><b>Sources</b>:<a class="link" href="https://flare.io/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=60k-cloud-servers-compromised-the-ai-governance-illusion" target="_blank" rel="noopener noreferrer nofollow"> Flare Research</a> |<a class="link" href="https://www.darkreading.com/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=60k-cloud-servers-compromised-the-ai-governance-illusion" target="_blank" rel="noopener noreferrer nofollow"> Dark Reading</a> |<a class="link" href="https://thehackernews.com/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=60k-cloud-servers-compromised-the-ai-governance-illusion" target="_blank" rel="noopener noreferrer nofollow"> The Hacker News</a></p><hr class="content_break"><h3 class="heading" style="text-align:left;" id="2-threat-actors-weaponize-azure-fir"><b>2. </b>🎣<b> Threat Actors Weaponize Azure, Firebase, and AWS for AiTM Phishing Infrastructure</b></h3><p class="paragraph" style="text-align:left;"><b>What Happened</b><br> Research published by <a class="link" href="https://ANY.RUN?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=60k-cloud-servers-compromised-the-ai-governance-illusion" target="_blank" rel="noopener noreferrer nofollow">ANY.RUN</a> on February 4 reveals an escalating trend of threat actors abusing legitimate cloud platforms specifically <b>Microsoft Azure, Google Firebase, and AWS</b> to host adversary-in-the-middle (AiTM) phishing kits. These campaigns leverage the trusted domain reputation and SSL certificates of major cloud providers to bypass traditional email security filters and URL reputation systems. The researchers identified widespread deployment of sophisticated phishing frameworks like Tycoon2FA, which specifically targets enterprise users with MFA-enabled accounts by intercepting authentication tokens in real-time. The abuse extends across Azure Static Web Apps, Firebase Hosting, and AWS Amplify.</p><p class="paragraph" style="text-align:left;"><b>Why It Matters</b><br> When attackers host phishing infrastructure on <a class="link" href="https://azure.com?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=60k-cloud-servers-compromised-the-ai-governance-illusion" target="_blank" rel="noopener noreferrer nofollow">azure.com</a>, <a class="link" href="https://firebaseapp.com?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=60k-cloud-servers-compromised-the-ai-governance-illusion" target="_blank" rel="noopener noreferrer nofollow">firebaseapp.com</a>, or <a class="link" href="https://amazonaws.com?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=60k-cloud-servers-compromised-the-ai-governance-illusion" target="_blank" rel="noopener noreferrer nofollow">amazonaws.com</a> subdomains, traditional blocklist-based defenses fail catastrophically. These platforms provide attackers with legitimate SSL certificates, resilient hosting infrastructure, and the same high-availability guarantees your organization relies on for production workloads.</p><p class="paragraph" style="text-align:left;">For cloud security leaders, this demands a strategic shift in defensive architecture. <b>Email security posture</b> must evolve beyond domain reputation to incorporate behavioral analysis, real-time link inspection, and user authentication patterns. <b>Cloud platform governance</b> needs to include monitoring for abuse of your own organization&#39;s cloud subscriptions if attackers can stand up phishing sites on Microsoft and Google infrastructure, nothing prevents them from doing so within your Azure tenant or GCP project if credentials are compromised.</p><p class="paragraph" style="text-align:left;">The AiTM dimension targeting MFA is particularly critical. Organizations that have implemented MFA as a primary security control need to recognize that session token theft bypasses this protection entirely. This reinforces the need for <b>phishing-resistant authentication</b> (FIDO2/WebAuthn), <b>conditional access policies</b> based on device posture and network location, and <b>session lifetime controls</b> that limit token validity windows.</p><p class="paragraph" style="text-align:left;"><b>Sources</b>:<a class="link" href="https://cyberpress.org/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=60k-cloud-servers-compromised-the-ai-governance-illusion" target="_blank" rel="noopener noreferrer nofollow"> Cyberpress</a> |<a class="link" href="https://gbhackers.com/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=60k-cloud-servers-compromised-the-ai-governance-illusion" target="_blank" rel="noopener noreferrer nofollow"> GBHackers on Security</a> |<a class="link" href="https://any.run/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=60k-cloud-servers-compromised-the-ai-governance-illusion" target="_blank" rel="noopener noreferrer nofollow"> </a><a class="link" href="https://ANY.RUN?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=60k-cloud-servers-compromised-the-ai-governance-illusion" target="_blank" rel="noopener noreferrer nofollow">ANY.RUN</a><a class="link" href="https://any.run/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=60k-cloud-servers-compromised-the-ai-governance-illusion" target="_blank" rel="noopener noreferrer nofollow"> Research</a></p><hr class="content_break"><h3 class="heading" style="text-align:left;" id="3-aws-iam-identity-center-adds-i-pv"><b>3. AWS IAM Identity Center adds IPv6 (dual-stack endpoints)</b></h3><p class="paragraph" style="text-align:left;"><b>What Happened:</b><br>AWS announced IPv6 support for IAM Identity Center via new dual-stack endpoints, allowing access over IPv4, IPv6, or dual-stack clients while keeping existing IPv4 endpoints intact. The dual-stack capability applies across access portals, managed applications, and Identity Center flows so authentication can route through IPv6 where available. AWS positioned this as a compatibility and modernization upgrade for orgs operating IPv6 networks and hybrid environments.</p><p class="paragraph" style="text-align:left;"><b>Why It Matters (for cloud security leaders)</b></p><ul><li><p class="paragraph" style="text-align:left;"><b>Identity is your control plane’s front door.</b> Introducing IPv6 paths changes the network perimeter assumptions some enterprises still rely on (firewall rules, proxies, allowlists, egress controls), especially where teams accidentally only modeled IPv4 flows.</p></li><li><p class="paragraph" style="text-align:left;"><b>Policy drift risk:</b> If your controls (WAF/proxy, egress filtering, DLP, CASB, SIEM parsing) are IPv4-centric, you can end up with “shadow paths” where identity traffic bypasses expected inspection. This is an availability <i>and</i> security issue: broken routing can cause auth failures; uninspected routing can reduce visibility.</p></li><li><p class="paragraph" style="text-align:left;">Validate that IPv6 is permitted/inspected across enterprise networks where Identity Center traffic originates (office, ZTNA, VDI, managed endpoints).</p></li><li><p class="paragraph" style="text-align:left;">Update firewall/proxy rules and allowlists for the dual-stack domains, and verify your monitoring tooling correctly logs IPv6 addresses (normalization, correlation, geo, alert rules).</p></li><li><p class="paragraph" style="text-align:left;">Run a short change-control test: confirm sign-in flows, SSO to managed apps, and break-glass access still work under dual-stack conditions.</p></li></ul><p class="paragraph" style="text-align:left;"><b>Source:</b><a class="link" href="https://www.reuters.com/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=60k-cloud-servers-compromised-the-ai-governance-illusion" target="_blank" rel="noopener noreferrer nofollow"> </a><b><a class="link" href="https://aws.amazon.com/blogs/security/iam-identity-center-now-supports-ipv6/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=60k-cloud-servers-compromised-the-ai-governance-illusion" target="_blank" rel="noopener noreferrer nofollow">AWS</a></b></p><hr class="content_break"><h3 class="heading" style="text-align:left;" id="3-crowd-strike-acquires-sgnl-for-74"><b>4. </b>🚨<b> SmarterMail CVE-2026-24423: Unauthenticated RCE Under Active Ransomware Exploitation</b></h3><p class="paragraph" style="text-align:left;"><b>What Happened</b><br> CISA added CVE-2026-24423 to its Known Exploited Vulnerabilities catalog on February 5, 2026, following confirmation that ransomware operators are actively exploiting a critical unauthenticated remote code execution vulnerability in SmarterTools&#39; SmarterMail email server platform. The vulnerability carries a CVSS score of 9.3 and affects versions prior to the emergency patch released in late January. In a striking validation of the threat, SmarterTools itself disclosed on February 9 that it had been breached through this same vulnerability, with attackers gaining access to internal systems. The platform serves over 15 million users globally across enterprise and service provider deployments.</p><p class="paragraph" style="text-align:left;"><b>Why It Matters</b><br> This incident represents a perfect storm of risk for cloud and hybrid email infrastructure. The vulnerability requires no authentication and enables full system compromise, making it trivially exploitable at scale. Ransomware groups are already operationalizing exploits in the wild; this isn&#39;t theoretical. The vendor breach demonstrates that even security-aware organizations struggle with emergency patching timelines, highlighting the window of exposure all SmarterMail operators face.</p><p class="paragraph" style="text-align:left;">For cloud security teams, this demands immediate action: <b>inventory verification</b> to identify any SmarterMail instances in on-premises, private cloud, or hybrid deployments; <b>network segmentation review</b> to ensure email infrastructure isn&#39;t directly internet-accessible without compensating controls; and <b>incident response preparation</b> given the ransomware context. If you&#39;re running SmarterMail, assume a breach until patching is verified.</p><p class="paragraph" style="text-align:left;">The CISA KEV listing triggers compliance obligations for federal agencies (21-day patching deadline) but should be treated as a forcing function for private sector cloud environments as well. This reinforces the security maturity gap between legacy on-premises solutions and cloud-native alternatives with automated security updates and vendor-managed infrastructure.</p><p class="paragraph" style="text-align:left;"><b>Sources</b>:<a class="link" href="https://www.cisa.gov/known-exploited-vulnerabilities-catalog?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=60k-cloud-servers-compromised-the-ai-governance-illusion" target="_blank" rel="noopener noreferrer nofollow"> CISA KEV Catalog</a> |<a class="link" href="https://www.helpnetsecurity.com/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=60k-cloud-servers-compromised-the-ai-governance-illusion" target="_blank" rel="noopener noreferrer nofollow"> Help Net Security</a> |<a class="link" href="https://www.bleepingcomputer.com/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=60k-cloud-servers-compromised-the-ai-governance-illusion" target="_blank" rel="noopener noreferrer nofollow"> BleepingComputer</a> |<a class="link" href="https://www.securityweek.com/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=60k-cloud-servers-compromised-the-ai-governance-illusion" target="_blank" rel="noopener noreferrer nofollow"> SecurityWeek</a></p><hr class="content_break"><h3 class="heading" style="text-align:left;" id="3-crowd-strike-acquires-sgnl-for-74"><b>5. WinRAR CVE-2025-8088: “patched, still exploited” across state + cybercrime actors</b></h3><p class="paragraph" style="text-align:left;"><b>What Happened</b><br>Google reported widespread exploitation of <b>CVE-2025-8088</b> (patched July 2025) using Windows <b>Alternate Data Streams (ADS)</b> to drop attacker-controlled files into execution/persistence locations (e.g., Startup paths). Despite being an “n-day,” it remains high-impact because patch adoption is uneven.</p><p class="paragraph" style="text-align:left;"><b>Why It Matters</b></p><ul><li><p class="paragraph" style="text-align:left;">Endpoint compromise is still the fastest route to <b>cloud session theft</b> and control-plane abuse.</p></li><li><p class="paragraph" style="text-align:left;">This is a governance lesson as much as a vulnerability story: unmanaged “non-core” tools (archivers, plugins) become reliable attacker entry points.</p></li><li><p class="paragraph" style="text-align:left;"><b>Action:</b> force-update WinRAR fleet-wide; detect suspicious archive extraction to autorun locations; tighten conditional access and privilege to reduce downstream cloud blast </p></li></ul><p class="paragraph" style="text-align:left;"><b>Sources:</b><a class="link" href="https://www.axios.com/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=60k-cloud-servers-compromised-the-ai-governance-illusion" target="_blank" rel="noopener noreferrer nofollow"> </a><a class="link" href="https://cloud.google.com/blog/topics/threat-intelligence/exploiting-critical-winrar-vulnerability?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=60k-cloud-servers-compromised-the-ai-governance-illusion" target="_blank" rel="noopener noreferrer nofollow">Google Cloud Threat Intelligence Group</a></p><hr class="content_break"><h2 class="heading" style="text-align:left;" id="cloud-security-topic-of-the-week">🎯 Cloud Security Topic of the Week: </h2><h3 class="heading" style="text-align:left;" id="the-developer-first-ai-governance-d"><b>The Developer-First AI Governance Dilemma</b></h3><p class="paragraph" style="text-align:left;">As AI tools become embedded in developer workflows from Claude Code to GitHub Copilot to custom MCP servers security teams face a familiar challenge with unfamiliar characteristics: how do you govern a technology that&#39;s being adopted bottom-up by individual contributors, evolves weekly, and produces non-deterministic outputs?</p><p class="paragraph" style="text-align:left;">This week&#39;s conversation with Bryan Woolgar-O&#39;Neil reveals why traditional security approaches fail in AI adoption and what actually works when developers are your primary threat vector not because they&#39;re malicious, but because they&#39;re solving business problems faster than your security policies can keep up.</p><hr class="content_break"><h2 class="heading" style="text-align:left;" id="featured-experts-this-week"><b>Featured Experts This Week </b>🎤</h2><ul><li><p class="paragraph" style="text-align:left;"><b>Bryan Woolgar-O&#39;Neil- </b> CTO & Co-Founder, Harmonic Security</p></li><li><p class="paragraph" style="text-align:left;"><b><a class="link" href="https://www.linkedin.com/in/ashishrajan/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=60k-cloud-servers-compromised-the-ai-governance-illusion" target="_blank" rel="noopener noreferrer nofollow">Ashish Rajan</a></b> - CISO | Co-Host <a class="link" href="https://open.spotify.com/show/3nV4eijfzdHKIvDOaycVII?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=60k-cloud-servers-compromised-the-ai-governance-illusion" target="_blank" rel="noopener noreferrer nofollow">AI Security Podcast</a> , Host of <a class="link" href="https://www.cloudsecuritypodcast.tv/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=60k-cloud-servers-compromised-the-ai-governance-illusion" target="_blank" rel="noopener noreferrer nofollow">Cloud Security Podcast</a></p></li></ul><h2 class="heading" style="text-align:left;" id="definitions-and-core-concepts"><b>Definitions and Core Concepts 📚</b></h2><p class="paragraph" style="text-align:left;">Before diving into our insights, let&#39;s clarify some key terms:</p><ul><li><p class="paragraph" style="text-align:left;"><b>MCP (Model Context Protocol)</b>: A standardization framework released by Anthropic that provides a uniform interface for AI models to interact with various APIs and data sources. Think of it as a plug-and-play system that allows AI agents to connect to GitHub, databases, Jira, or custom internal tools without developers needing to write custom API integrations for each service. Critically, MCP servers can run locally on developer workstations or be hosted centrally, with approximately 70% currently running locally.</p></li><li><p class="paragraph" style="text-align:left;"><b>Shadow AI</b>: Unauthorized or unmonitored AI tool usage within an organization, similar to shadow IT but with higher velocity and risk. This includes personal accounts on ChatGPT, Claude, or Gemini being used for work tasks, unapproved MCP servers accessing production data, or AI coding assistants with unrestricted access to internal repositories.</p></li><li><p class="paragraph" style="text-align:left;"><b>Agent/Agentic Flows</b>: AI systems that can perform multi-step tasks autonomously, making decisions and calling tools or APIs without human intervention at each step. In enterprise contexts, this might mean an AI agent that reads logs, performs root cause analysis, writes code, tests it, and deploys what previously took human developers 3-5 days, now happening in 20 minutes.</p></li><li><p class="paragraph" style="text-align:left;"><b>Small Language Models (SLMs)</b>: Specialized, lightweight AI models trained for specific tasks or domains rather than general-purpose use. Unlike large language models (LLMs) that try to handle everything, SLMs can be optimized for faster inference (sub-200ms response times), deployed locally, and fine-tuned for particular use cases like detecting sensitive data patterns or understanding security policy intent.</p></li></ul><hr class="content_break"><p class="paragraph" style="text-align:center;">This week&#39;s issue is sponsored by <a class="link" href="https://links.cloudsecuritypodcast.tv/push-security-feb-2026?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=60k-cloud-servers-compromised-the-ai-governance-illusion" target="_blank" rel="noopener noreferrer nofollow"><b>Push Security</b></a></p><p class="paragraph" style="text-align:center;"><b>Stop browser-based attacks with </b><a class="link" href="https://links.cloudsecuritypodcast.tv/push-security-feb-2026?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=60k-cloud-servers-compromised-the-ai-governance-illusion" target="_blank" rel="noopener noreferrer nofollow"><b>Push Security</b></a><br><br>Major breaches are increasingly originating in the browser, where traditional security controls can’t detect them. This is a conscious shift, with both criminal and nation-state actors adopting browser-native TTPs into their standard toolkit.<br><br>Push Security brings real-time detection and response into every browser — where today’s work and attacks actually happen. Push gives security teams visibility into modern threats, proactive control over user risk, and powerful telemetry to detect, investigate, and stop attacks fast.<br><br>Frictionless deployment. Instant protection. </p><p class="paragraph" style="text-align:center;"><a class="link" href="https://links.cloudsecuritypodcast.tv/push-security-feb-2026?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=60k-cloud-servers-compromised-the-ai-governance-illusion" target="_blank" rel="noopener noreferrer nofollow">Check out Push Security now.</a></p><hr class="content_break"><h2 class="heading" style="text-align:left;" id="our-insights-from-this-practitioner">💡<b>Our Insights from this Practitioner 🔍</b></h2><h3 class="heading" style="text-align:left;" id="is-developer-friendly-ai-security-p"><b>Is Developer Friendly AI Security Possible with MCP & Shadow AI </b><b>(</b><a class="link" href="https://www.cloudsecuritypodcast.tv/videos/is-developer-friendly-ai-security-possible-with-mcp-shadow-ai?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=60k-cloud-servers-compromised-the-ai-governance-illusion" target="_blank" rel="noopener noreferrer nofollow"><b>Full Episode here</b></a><b>)</b></h3><hr class="content_break"><h3 class="heading" style="text-align:left;" id="the-false-sense-of-ai-security-cont"><b>The False Sense of AI Security Control</b></h3><p class="paragraph" style="text-align:left;">Most organizations believe they have AI usage under control because they&#39;ve purchased enterprise licenses for ChatGPT, Claude, or Copilot. Bryan Woolgar-O&#39;Neil destroys this illusion quickly: <i>&quot;Even though they can tell that there&#39;s a certain amount of traffic going to this website, it is like, what? What does that actually mean to an organization? And I think a lot of them don&#39;t know.&quot;</i></p><p class="paragraph" style="text-align:left;">The fundamental problem isn&#39;t that organizations lack AI tools it&#39;s that they lack visibility into what employees are actually doing with them. Traditional security telemetry from firewalls or CASB tools can tell you that users visited <a class="link" href="https://claude.ai?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=60k-cloud-servers-compromised-the-ai-governance-illusion" target="_blank" rel="noopener noreferrer nofollow">claude.ai</a>, but they can&#39;t tell you whether those users were:</p><ul><li><p class="paragraph" style="text-align:left;">Experimenting with prompt engineering for personal learning</p></li><li><p class="paragraph" style="text-align:left;">Processing production customer data to generate support responses</p></li><li><p class="paragraph" style="text-align:left;">Using personal Gmail accounts instead of enterprise licenses</p></li><li><p class="paragraph" style="text-align:left;">Building MCP servers that connect AI directly to internal databases</p></li></ul><p class="paragraph" style="text-align:left;">This visibility gap creates what Bryan calls &quot;shadow AI&quot; and unlike shadow IT, the blast radius is fundamentally different. <i>&quot;The power that AI can bring to a business... you&#39;re doing the same jobs, but the AI&#39;s doing it a lot faster, a lot more,&quot;</i> he explains. When an engineer can reduce a 3-5 day bug fix process to 20 minutes using Claude Code with MCP access to logs, repositories, and deployment systems, you&#39;re not just dealing with unauthorized software you&#39;re dealing with unauthorized automation of privileged operations.</p><p class="paragraph" style="text-align:left;">The enterprise impact? Organizations that think they&#39;re &quot;AI-ready&quot; because they bought licenses are discovering that:</p><ol start="1"><li><p class="paragraph" style="text-align:left;"><b>Developers bypass enterprise tools</b> when personal accounts are faster or more capable</p></li><li><p class="paragraph" style="text-align:left;"><b>Data exfiltration happens at AI velocity</b>, not human cut-and-paste speeds</p></li><li><p class="paragraph" style="text-align:left;"><b>Personal accounts create data portability risks</b> when employees leave, they take all their prompts and organizational knowledge with them</p></li></ol><p class="paragraph" style="text-align:left;">For security leaders, this means your first priority isn&#39;t building AI security policies, it&#39;s understanding where AI is actually being used and for what purposes. As Bryan puts it: <i>&quot;Getting that initial visibility. So understand where usage is, and not just at the &#39;you are using these sites&#39;, but with the right level of context and intent.&quot;</i></p><h3 class="heading" style="text-align:left;" id="why-traditional-security-controls-f"><b>Why Traditional Security Controls Fail for AI</b></h3><p class="paragraph" style="text-align:left;">Traditional approach:</p><p class="paragraph" style="text-align:left;">Define policy → Block → Create ticket → Add friction</p><p class="paragraph" style="text-align:left;">Developers route around it.</p><p class="paragraph" style="text-align:left;">AI evolves weekly.<br>Outputs are non-deterministic.<br>Adoption is bottom-up.</p><p class="paragraph" style="text-align:left;">You can’t gate this the way you gated cloud migration.</p><h3 class="heading" style="text-align:left;" id="the-shift-coaching-over-blocking"><b>The Shift: Coaching Over Blocking</b></h3><p class="paragraph" style="text-align:left;">So what actually works? Harmonic Security&#39;s approach, which Bryan&#39;s team uses internally before deploying to customers, centers on<i> &quot;coaching in real time rather than blocking everything.&quot;</i></p><p class="paragraph" style="text-align:left;">Here&#39;s how this manifests in practice:</p><p class="paragraph" style="text-align:left;"><b>Scenario</b>: An AI agent is running a 12-minute automated workflow to fix a bug. It needs to pull logs from production, perform root cause analysis, write code, and deploy. At the 10-minute mark, it attempts to send production customer data to a public LLM for analysis.</p><p class="paragraph" style="text-align:left;"><b>Traditional Blocking Approach</b>: The security tool kills the entire workflow, the engineer sees &quot;Access Denied,&quot; their 10 minutes of progress is lost, and they&#39;re furious.</p><p class="paragraph" style="text-align:left;"><b>Coaching Approach</b>: The security tool intercepts the data transfer and provides context to the AI agent itself: &quot;This data contains PII and cannot be sent to public LLMs per company policy. Here&#39;s what you should do instead: [alternatives].&quot; The AI reads this coaching, adjusts its approach, and completes the task without exposing data.</p><p class="paragraph" style="text-align:left;">The key insight is that most AI models will read coaching information and work out a new plan to complete the task without data exposure. You&#39;re not blocking the developer&#39;s work you&#39;re teaching the AI to work within security boundaries.</p><p class="paragraph" style="text-align:left;">Bryan extends this to human users as well: <i>&quot;On the browser, we think of the end user similarly... you&#39;re trying to coach the end users, the actual employees, to say, well, you&#39;re trying to do this, but you&#39;re sending this type of data that the organization doesn&#39;t want to go into this type of destination.&quot;</i></p><p class="paragraph" style="text-align:left;">For example, if an employee tries to paste sensitive data into a free version of a tool when the company has an enterprise license, the coaching system redirects them: &quot;We have an enterprise version of this tool that&#39;s approved for this data type use that instead.&quot; Or if they&#39;re using a public LLM like DeepSeek for data that shouldn&#39;t leave the organization: &quot;This model trains on your input data and is hosted in a jurisdiction we don&#39;t allow for this data classification. Try [approved alternative].&quot;</p><p class="paragraph" style="text-align:left;">The coaching approach works because it assumes positive intent and provides just-in-time education rather than frustration. As Bryan says: <i>&quot;It&#39;s almost like a focus on coaching at the point of time of data loss or other types of events that you wanna step in, in the middle of.&quot;</i></p><h3 class="heading" style="text-align:left;" id="the-mcp-security-challenge-local-se"><b>The MCP Security Challenge: Local Servers and Data Access</b></h3><p class="paragraph" style="text-align:left;">Model Context Protocol (MCP) represents one of the most significant changes in how developers interact with AI, and one of the hardest to secure. Bryan breaks down why:</p><p class="paragraph" style="text-align:left;"><i>&quot;Roughly like 70% of MCP servers are local servers. So like... servers that run locally rather than things you connect to.&quot;</i></p><p class="paragraph" style="text-align:left;">Why does this matter for security? Three reasons:</p><ol start="1"><li><p class="paragraph" style="text-align:left;"><b>Developers can build MCP servers in 3-4 hours</b> without needing infrastructure approval</p></li><li><p class="paragraph" style="text-align:left;"><b>Local MCP servers inherit the developer&#39;s access rights</b> if the engineer can access production databases from their laptop, so can their MCP-enabled AI</p></li><li><p class="paragraph" style="text-align:left;"><b>No network visibility</b> unlike hosted services that security can monitor via CASB or proxies, local MCP servers operate entirely on the endpoint</p></li></ol><p class="paragraph" style="text-align:left;">This creates a governance nightmare. As Bryan describes from customer conversations: <i>&quot;People were probably seeing it in a similar way to things like third party libraries you might add into code repository. So we wanna have some way of reviewing and risk assessing those and then putting some controls around what is available to be used or not.&quot;</i></p><p class="paragraph" style="text-align:left;">The specific risks organizations worry about:</p><ul><li><p class="paragraph" style="text-align:left;"><b>Cross-system data flows</b>: An MCP server could <i>&quot;plug into a data store, pull data from that and then push it to the web. And an engineer could have done that previously, but they probably wouldn&#39;t &#39;cause they&#39;ve got an in-built knowledge of what is acceptable, whereas the AI doesn&#39;t have that knowledge.&quot;</i></p></li><li><p class="paragraph" style="text-align:left;"><b>Destructive actions</b>: AI agents with MCP access to Jira, GitHub, or cloud platforms could create projects, repositories, or infrastructure without human oversight</p></li><li><p class="paragraph" style="text-align:left;"><b>Production data access</b>: In permissive organizations where developers have production database access for troubleshooting, MCP servers can now query production data automatically</p></li></ul><p class="paragraph" style="text-align:left;">Bryan&#39;s team runs MCP gateway internally: <i>&quot;We&#39;ve been running it harmonics. So we had our MCP gateway like three or four months before we kind of put it on for customers where we were kind of trying different things out, seeing where we wanted to add in additional controls.&quot;</i></p><p class="paragraph" style="text-align:left;">The result? They can allow developers to experiment with MCP locally while maintaining governance:<i> &quot;We don&#39;t really want AI to be creating new Jira projects. We don&#39;t want it to be creating git repositories because we are not trying to build like a &#39;go and build my whole website from scratch&#39; prototype thing. It&#39;s more like we want you to do this feature, so we want you to exist within our own architecture.&quot;</i></p><h3 class="heading" style="text-align:left;" id="the-ai-governance-maturity-model"><b>The AI Governance Maturity Model</b></h3><p class="paragraph" style="text-align:left;">For CISOs trying to figure out where to start, Bryan offers a practical maturity framework that organizations can progress through:</p><p class="paragraph" style="text-align:left;"><b>Level 1 – Basic Logs</b><br>You see AI domains being accessed.</p><p class="paragraph" style="text-align:left;"><b>Level 2 – Usage Visibility</b><br>You understand use cases + intent.</p><p class="paragraph" style="text-align:left;"><b>Level 3 – Access Controls</b><br>You enforce approved tools and block destructive MCP actions.</p><p class="paragraph" style="text-align:left;"><b>Level 4 – Intent-Based Coaching</b><br>You intervene intelligently without breaking productivity.</p><p class="paragraph" style="text-align:left;">Most organizations are stuck at Level 1.</p><p class="paragraph" style="text-align:left;">They think they’re at Level 3.</p><h3 class="heading" style="text-align:left;" id="developer-first-reality-new-organiz"><b>Developer-First Reality: New Organizational Structures for AI</b></h3><p class="paragraph" style="text-align:left;">Perhaps most interesting is how AI adoption is changing organizational structures themselves. Bryan observes: <i>&quot;There&#39;s a whole bunch of new job titles and they&#39;re not all the same yet.&quot;</i></p><p class="paragraph" style="text-align:left;">What he&#39;s seeing in the wild:</p><ul><li><p class="paragraph" style="text-align:left;"><b>Head of AI Development</b> - Brought into engineering organizations specifically to manage AI tool adoption, typically from engineering and data backgrounds rather than security</p></li><li><p class="paragraph" style="text-align:left;"><b>Evolution of AI Committees</b> - What started as cross-functional committees (CIO, CISO, Legal, Compliance, Privacy) are becoming permanent AI departments with dedicated staff</p></li><li><p class="paragraph" style="text-align:left;"><b>Department-Level AI Specialists</b> - Individual departments hiring AI-focused roles to drive adoption in their specific contexts</p></li></ul><p class="paragraph" style="text-align:left;">This creates interesting dynamics for security teams. As Bryan notes:<i> &quot;Most organizations have a AI committee now... but that committee model&#39;s evolved into like we have a department now and it&#39;s a small department around AI.&quot;</i></p><p class="paragraph" style="text-align:left;">For CISOs, this means AI governance isn&#39;t purely a security problem it requires partnership with these new organizational structures. The security team that tries to unilaterally define AI policy will find themselves routed around by departments that have their own AI leadership.</p><p class="paragraph" style="text-align:left;">One customer example particularly stands out: their company objective is to <i>&quot;10x every employee via AI.&quot;</i> When AI adoption is a board-level business priority at that scale, security teams can&#39;t be seen as obstacles they need to enable that 10x productivity while managing risk.</p><h3 class="heading" style="text-align:left;" id="the-future-small-language-models-an"><b>The Future: Small Language Models and Specialized AI</b></h3><p class="paragraph" style="text-align:left;">Looking ahead, Bryan sees a shift from general-purpose LLMs toward specialized Small Language Models (SLMs) optimized for specific business functions:</p><p class="paragraph" style="text-align:left;"><i>&quot;Organizations will end up having [specialized models] around their different job functions... what will win out is someone will develop their own small language models or constrained models that are great at sales, and they&#39;ll be way better than something like OpenAI will be because they&#39;re being trained for that specific purpose.&quot;</i></p><p class="paragraph" style="text-align:left;">Why does this matter for security? Because each specialized model represents a new attack surface to govern. Harmonic uses SLMs internally because they need sub-200ms response times for real-time coaching something general-purpose LLMs can&#39;t deliver consistently.</p><p class="paragraph" style="text-align:left;"><i>&quot;We need to get response back within like 200 milliseconds. And we wanna look at intent. So we need to have a model that can understand the context and semantics of what&#39;s going in there. And we also want the model to explain to the end user what was wrong.&quot;</i></p><p class="paragraph" style="text-align:left;">Building SLMs requires actual ML engineering teams data collection, labeling, feature engineering, training, testing, deployment, and management. <i>&quot;I don&#39;t think organizations will build them out of the box,&quot;</i> Bryan predicts. Instead, vendors will provide specialized models for different domains: sales AI, coding AI, customer support AI, security AI.</p><p class="paragraph" style="text-align:left;">For security teams, this means the &quot;approve these three AI tools&quot; approach won&#39;t scale. You&#39;ll need frameworks that can govern hundreds of specialized models, each with different capabilities and risk profiles.</p><h3 class="heading" style="text-align:left;" id="practical-takeaways-for-cloud-secur"><b>Practical Takeaways for Cloud Security Leaders</b></h3><p class="paragraph" style="text-align:left;">From Bryan&#39;s experience building AI security at Harmonic and deploying it to enterprise customers, here are the actionable lessons:</p><ul><li><p class="paragraph" style="text-align:left;">Start with visibility, not policy</p></li><li><p class="paragraph" style="text-align:left;">Assume developers are already using AI</p></li><li><p class="paragraph" style="text-align:left;">Focus on specific pinch points (data exfiltration, destructive ops)</p></li><li><p class="paragraph" style="text-align:left;">Design for coaching, not blocking</p></li><li><p class="paragraph" style="text-align:left;">Treat MCP servers like third-party libraries</p></li><li><p class="paragraph" style="text-align:left;">Expect AI capability to evolve weekly</p></li><li><p class="paragraph" style="text-align:left;">Partner with emerging AI leadership roles</p></li><li><p class="paragraph" style="text-align:left;">Accept that productivity objectives will win — enable safely</p></li></ul><hr class="content_break"><h2 class="heading" style="text-align:left;" id="related-podcast-episodes"><b>RELATED RESOURCES 🎧</b></h2><ul><li><p class="paragraph" style="text-align:left;"><a class="link" href="https://www.cisa.gov/known-exploited-vulnerabilities-catalog?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=60k-cloud-servers-compromised-the-ai-governance-illusion" target="_blank" rel="noopener noreferrer nofollow">CISA Known Exploited Vulnerabilities Catalog</a> - Authoritative source for actively exploited CVEs requiring remediation</p></li><li><p class="paragraph" style="text-align:left;"><a class="link" href="https://docs.anthropic.com/claude/docs/model-context-protocol?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=60k-cloud-servers-compromised-the-ai-governance-illusion" target="_blank" rel="noopener noreferrer nofollow">Anthropic&#39;s Model Context Protocol Documentation</a> - Technical specifications and security considerations for MCP</p></li><li><p class="paragraph" style="text-align:left;"><a class="link" href="https://cloudsecurityalliance.org/research/topics/ai?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=60k-cloud-servers-compromised-the-ai-governance-illusion" target="_blank" rel="noopener noreferrer nofollow">Cloud Security Alliance: AI Security Guidance</a> - Enterprise frameworks for AI governance</p></li></ul><h3 class="heading" style="text-align:left;" id="cloud-security-podcast"><b>Cloud Security Podcast</b></h3><ul><li><p class="paragraph" style="text-align:left;"><a class="link" href="https://www.cloudsecuritypodcast.tv/videos/is-developer-friendly-ai-security-possible-with-mcp-shadow-ai?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=60k-cloud-servers-compromised-the-ai-governance-illusion" target="_blank" rel="noopener noreferrer nofollow"><b>Cloud Security Podcast – Full Episode with Bryan Woolgar-O’Neil</b></a></p></li></ul><hr class="content_break"><h2 class="heading" style="text-align:left;" id="related-podcast-episodes"><b>Related Podcast Episodes 🎧</b></h2><div class="image"><a class="image__link" href="https://www.cloudsecuritypodcast.tv/videos/is-developer-friendly-ai-security-possible-with-mcp-shadow-ai?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=60k-cloud-servers-compromised-the-ai-governance-illusion" rel="noopener" target="_blank"><img alt="" class="image__image" style="" src="https://media.beehiiv.com/cdn-cgi/image/fit=scale-down,format=auto,onerror=redirect,quality=80/uploads/asset/file/6317e98d-7af5-47d9-ab5e-f6eede682bbc/S07EP03_Harmonic_Security.jpg?t=1769591481"/></a><div class="image__source"><span class="image__source_text"><p><a class="link" href="https://www.cloudsecuritypodcast.tv/videos/is-developer-friendly-ai-security-possible-with-mcp-shadow-ai?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=60k-cloud-servers-compromised-the-ai-governance-illusion" target="_blank" rel="noopener noreferrer nofollow">Is Developer Friendly AI Security Possible with MCP & Shadow AI</a></p></span></div></div><h3 class="heading" style="text-align:left;" id="question-for-you-reply-to-this-emai">Question for you? (Reply to this email)</h3><p class="paragraph" style="text-align:left;">🤔<b> </b>Are you currently:</p><p class="paragraph" style="text-align:left;">A) Blocking AI tools<br>B) Allowing everything<br>C) Trying to build decision-point governance</p><p class="paragraph" style="text-align:left;">Reply and tell me where you are.<br></p><p class="paragraph" style="text-align:left;">Next week, we&#39;ll explore another critical aspect of cloud security. Stay tuned!</p><hr class="content_break"><p class="paragraph" style="text-align:left;">📬 Want weekly expert takes on AI & Cloud Security? [<a class="link" href="https://www.cloudsecuritynewsletter.com/subscribe?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=60k-cloud-servers-compromised-the-ai-governance-illusion" target="_blank" rel="noopener noreferrer nofollow">Subscribe here</a>]”</p><hr class="content_break"><p class="paragraph" style="text-align:left;"><span style="color:rgb(238, 40, 60);"><b><a class="link" href="mailto:info@cloudsecuritypodcast.tv" target="_blank" rel="noopener noreferrer nofollow">We would love to hear from you</a></b></span>📢 for a feature or topic request or if you would like to sponsor an edition of Cloud Security Newsletter. </p><p class="paragraph" style="text-align:left;">Thank you for continuing to subscribe and Welcome to the new members in tis newsletter community💙</p><p class="paragraph" style="text-align:start;">Peace!</p><p class="paragraph" style="text-align:start;"><a class="link" href="https://www.linkedin.com/in/shilpi-bhattacharjee/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=60k-cloud-servers-compromised-the-ai-governance-illusion" target="_blank" rel="noopener noreferrer nofollow">Shilpi Bhattacharjee</a></p><div class="image"><a class="image__link" href="https://www.cloudsecuritypodcast.tv/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=60k-cloud-servers-compromised-the-ai-governance-illusion" rel="noopener" target="_blank"><img alt="" class="image__image" style="" src="https://media.beehiiv.com/cdn-cgi/image/fit=scale-down,format=auto,onerror=redirect,quality=80/uploads/asset/file/dc094a1c-678a-43e0-adc9-1bee6c3499e2/CSP_Logo_Blue_ScreenRes_3000x3000_v2.jpg"/></a></div><div class="button" style="text-align:left;"><a target="_blank" rel="noopener nofollow noreferrer" class="button__link" style="" href="{{rp_referral_hub_url}}"><span class="button__text" style=""> Share the newsletter </span></a></div><p class="paragraph" style="text-align:left;">Was this forwarded to you? You can <a class="link" href="https://www.cloudsecuritynewsletter.com/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=60k-cloud-servers-compromised-the-ai-governance-illusion" target="_blank" rel="noopener noreferrer nofollow">Sign up here</a>, to join our growing readership.</p><p class="paragraph" style="text-align:left;">Want to <b>sponsor</b> the next newsletter edition! <a class="link" href="mailto:info@cloudsecuritypodcast.tv" target="_blank" rel="noopener noreferrer nofollow">Lets make it happen </a></p><p class="paragraph" style="text-align:left;">Have you joined our FREE <b>Monthly</b> <a class="link" href="https://www.cloudsecuritybootcamp.com/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=60k-cloud-servers-compromised-the-ai-governance-illusion" target="_blank" rel="noopener noreferrer nofollow">Cloud Security Bootcamp</a> yet?</p><p class="paragraph" style="text-align:left;">checkout our <b>sister podcast</b> <a class="link" href="https://www.youtube.com/@AISecurityPodcast?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=60k-cloud-servers-compromised-the-ai-governance-illusion" target="_blank" rel="noopener noreferrer nofollow">AI Security Podcast</a></p></div><div class='beehiiv__footer'><br class='beehiiv__footer__break'><hr class='beehiiv__footer__line'><a target="_blank" class="beehiiv__footer_link" style="text-align: center;" href="https://www.beehiiv.com/?utm_campaign=4d47ae20-4206-4ddc-881e-4c744e9a7e75&utm_medium=post_rss&utm_source=cloud_security_newsletter">Powered by beehiiv</a></div></div>
  ]]></content:encoded>
</item>

      <item>
  <title>🚨 Palo Alto&#39;s $3.35B Observability Bet Why Palo Alto’s $3.35B Observability Bet Signals the End of Vulnerability Management</title>
  <description>This week&#39;s newsletter explores the strategic shift from siloed vulnerability management to unified exposure management, featuring insights from Brad Hibbert (COO &amp; Chief Strategy Officer at Brinqa) on how enterprises can reduce risk at scale, plus analysis of major security acquisitions that signal the future of platform consolidation and AI-driven security operations.</description>
      <enclosure url="https://media.beehiiv.com/cdn-cgi/image/fit=scale-down,format=auto,onerror=redirect,quality=80/uploads/asset/file/b42b8fbc-c04a-4d10-b09d-d043b3f57f8a/Screenshot_2026-02-04_at_10.35.26_PM.png" length="3931985" type="image/png"/>
  <link>https://www.cloudsecuritynewsletter.com/p/exposure-management-replaces-vulnerability-management</link>
  <guid isPermaLink="true">https://www.cloudsecuritynewsletter.com/p/exposure-management-replaces-vulnerability-management</guid>
  <pubDate>Thu, 05 Feb 2026 06:39:46 +0000</pubDate>
  <atom:published>2026-02-05T06:39:46Z</atom:published>
    <dc:creator>Ashish Rajan</dc:creator>
  <content:encoded><![CDATA[
    <div class='beehiiv'><style>
  .bh__table, .bh__table_header, .bh__table_cell { border: 1px solid #C0C0C0; }
  .bh__table_cell { padding: 5px; background-color: #FFFFFF; }
  .bh__table_cell p { color: #2D2D2D; font-family: 'Helvetica',Arial,sans-serif !important; overflow-wrap: break-word; }
  .bh__table_header { padding: 5px; background-color:#F1F1F1; }
  .bh__table_header p { color: #2A2A2A; font-family:'Trebuchet MS','Lucida Grande',Tahoma,sans-serif !important; overflow-wrap: break-word; }
</style><div class='beehiiv__body'><h2 class="heading" style="text-align:left;" id="hello-from-the-cloudverse"><span style="background-color:#28EEDA;"><b>Hello from the Cloud-verse!</b></span></h2><p class="paragraph" style="text-align:left;">This week’s Cloud Security Newsletter topic<b>: From Vulnerability Chaos to Exposure Clarity: How Enterprises Are Winning the Risk Reduction Game </b><a class="link" href="#cloud-security-topic-of-the-week" rel="noopener noreferrer nofollow">(continue reading)</a> </p><div class="button" style="text-align:center;"><a target="_blank" rel="noopener nofollow noreferrer" class="button__link" style="" href="https://links.cloudsecuritypodcast.tv/see-prowler-in-action-jan-2026?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=palo-alto-s-3-35b-observability-bet-why-palo-alto-s-3-35b-observability-bet-signals-the-end-of-vulnerability-management"><span class="button__text" style=""> This issue is sponsored by Prowler </span></a></div><hr class="content_break"><div class="image"><a class="image__link" href="https://www.cloudsecuritypodcast.tv/videos/vulnerability-management-vs-exposure-management?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=palo-alto-s-3-35b-observability-bet-why-palo-alto-s-3-35b-observability-bet-signals-the-end-of-vulnerability-management" rel="noopener" target="_blank"><img alt="" class="image__image" style="" src="https://media.beehiiv.com/cdn-cgi/image/fit=scale-down,format=auto,onerror=redirect,quality=80/uploads/asset/file/b42b8fbc-c04a-4d10-b09d-d043b3f57f8a/Screenshot_2026-02-04_at_10.35.26_PM.png?t=1770273376"/></a><div class="image__source"><span class="image__source_text"><p>This image was generated by AI. It&#39;s still experimental, so it might not be a perfect match!</p></span></div></div><p class="paragraph" style="text-align:left;"><b>Incase, this is your 1st Cloud Security Newsletter! You are in good company! </b><br>You are reading this issue along with your friends and colleagues from companies like <i>Netflix</i>, Citi, <i>JP Morgan, Linkedin, Reddit, Github, Gitlab, CapitalOne, Robinhood, HSBC, British Airways, Airbnb, Block, Booking Inc & more</i> who subscribe to this newsletter, who like you want to learn what’s new with Cloud Security each week from their industry peers like many others who listen to <a class="link" href="https://open.spotify.com/show/6LZgeh4GecRYPc0WrwMB4I?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=palo-alto-s-3-35b-observability-bet-why-palo-alto-s-3-35b-observability-bet-signals-the-end-of-vulnerability-management" target="_blank" rel="noopener noreferrer nofollow">Cloud Security Podcast</a> & <a class="link" href="https://open.spotify.com/show/3nV4eijfzdHKIvDOaycVII?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=palo-alto-s-3-35b-observability-bet-why-palo-alto-s-3-35b-observability-bet-signals-the-end-of-vulnerability-management" target="_blank" rel="noopener noreferrer nofollow">AI Security Podcast</a> every week.</p><hr class="content_break"><p class="paragraph" style="text-align:left;">Welcome to this week’s Cloud Security Newsletter</p><p class="paragraph" style="text-align:left;">The security industry is quietly abandoning vulnerability management — and Palo Alto’s $3.35B observability acquisition makes that impossible to ignore.</p><p class="paragraph" style="text-align:left;">In cloud-native environments, patch lists and CVSS scores no longer reflect real business risk. Exposure now spans infrastructure, identity, APIs, SaaS, containers, and increasingly, autonomous AI systems. Recent acquisitions by Palo Alto and Varonis signal a structural shift: from counting vulnerabilities to orchestrating risk reduction across business services.</p><p class="paragraph" style="text-align:left;">This week, we&#39;re examining this evolution through two lenses: major vendor consolidation moves that signal where the market is heading (Palo Alto&#39;s $3.35B Chronosphere acquisition and Varonis&#39; $125M AllTrue.ai purchase), and practical guidance from Brad Hibbert, a 20-year security veteran who has watched vulnerability management mature from simple patch lists to enterprise-wide exposure orchestration.</p><p class="paragraph" style="text-align:left;">Brad brings a unique perspective, having worked across vulnerability management, privileged access management, and third-party risk before landing at exposure management. His insights reveal why the industry is shifting from reporting vulnerabilities to orchestrating remediation, and how organizations can make this transition without losing the domain knowledge they&#39;ve built over years. <i>[</i><a class="link" href="https://www.cloudsecuritypodcast.tv/videos/vulnerability-management-vs-exposure-management?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=palo-alto-s-3-35b-observability-bet-why-palo-alto-s-3-35b-observability-bet-signals-the-end-of-vulnerability-management" target="_blank" rel="noopener noreferrer nofollow">Listen to the episode</a><i>]</i></p><hr class="content_break"><h2 class="heading" style="text-align:left;" id="tldr-for-busy-readers">📰 TL;DR for Busy Readers</h2><ul><li><p class="paragraph" style="text-align:left;"><b>Palo Alto&#39;s $3.35B bet on observability</b>: Unifying security + observability signals shift to AI-driven, autonomous remediation at scale</p></li><li><p class="paragraph" style="text-align:left;"><b>Varonis acquires AI TRiSM</b>: Data security vendors now own AI agent governance expect policy enforcement over prompts, connectors, and sensitive data access</p></li><li><p class="paragraph" style="text-align:left;"><b>Exposure management is the new standard</b>: Moving beyond risk-based vulnerability management to business service-aligned risk reduction</p></li><li><p class="paragraph" style="text-align:left;"><b>Trust before automation</b>: AI-powered remediation only works when built on normalized data and explainable models</p></li><li><p class="paragraph" style="text-align:left;"><b>Start small, prove impact</b>: Pick critical services, demonstrate risk reduction, then scale don&#39;t boil the ocean</p></li><li><p class="paragraph" style="text-align:left;">“<i>Exposure management isn’t about more data. It’s about deciding what actually matters and mobilizing teams to fix it.</i>” — <b>Brad Hibbert, COO & CSO, Brinqa</b></p></li></ul><hr class="content_break"><h2 class="heading" style="text-align:left;" id="this-weeks-security-news">📰 <b>THIS WEEK&#39;S TOP 5 SECURITY HEADLINES</b></h2><p class="paragraph" style="text-align:left;">Each story includes <b>why it matters</b> and <b>what to do next</b> — no vendor fluff.</p><h3 class="heading" style="text-align:left;" id="1-palo-alto-networks-completes-335-"><b>1. </b>📈<b> Palo Alto Networks Completes $3.35B Chronosphere Acquisition</b></h3><p class="paragraph" style="text-align:left;"><b>What Happened:</b> Palo Alto Networks finalized its acquisition of Chronosphere, a cloud-native observability platform and 2025 Gartner Magic Quadrant leader, in a $3.35 billion deal originally announced in November 2025. Chronosphere&#39;s telemetry pipeline reduces data volumes by approximately 30% and requires 20x less infrastructure than legacy observability tools. The company plans to integrate Chronosphere with its Cortex AgentiX platform, enabling AI agents to automatically detect and remediate security and IT issues across applications, infrastructure, and AI systems.</p><p class="paragraph" style="text-align:left;"><b>Why This Matters:</b> This acquisition represents Palo Alto&#39;s strategic response to the &quot;data tax&quot; problem plaguing modern security operations. For cloud security teams managing massive telemetry volumes across containers, microservices, and distributed systems, the integration of observability and security signals a fundamental shift from reactive detection to proactive, AI-driven remediation.</p><p class="paragraph" style="text-align:left;">Consider the implications for your security architecture:</p><ol start="1"><li><p class="paragraph" style="text-align:left;"><b>Platform consolidation pressures</b>: Organizations will face strategic decisions about continuing standalone observability tools (Datadog, New Relic, Dynatrace) versus adopting Palo Alto&#39;s unified platform. This creates both opportunity (simplified vendor management) and risk (increased vendor lock-in).</p></li><li><p class="paragraph" style="text-align:left;"><b>Cost optimization at scale</b>: The 30% data reduction capability directly addresses escalating SIEM/SOAR data ingestion costs a pain point for every enterprise dealing with petabytes of telemetry. Security teams should evaluate how this affects their data pipeline strategies and storage costs.</p></li><li><p class="paragraph" style="text-align:left;"><b>Autonomous incident response</b>: The Cortex AgentiX integration promises to reduce MTTR through autonomous remediation. However, as Brad Hibbert notes in our feature interview, &quot;<i>You want it to speed the right things up. So you want to make sure that your AI is based on a sound data foundation.</i>&quot;</p></li><li><p class="paragraph" style="text-align:left;"><b>Vendor concentration dynamics</b>: Combined with Palo Alto&#39;s pending $25B CyberArk acquisition, this positions them as a dominant force in platform-based security. CISOs should assess dependency risks and negotiate accordingly.</p></li></ol><p class="paragraph" style="text-align:left;"><b>Source:</b><a class="link" href="https://www.paloaltonetworks.com/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=palo-alto-s-3-35b-observability-bet-why-palo-alto-s-3-35b-observability-bet-signals-the-end-of-vulnerability-management" target="_blank" rel="noopener noreferrer nofollow"> Palo Alto Networks Press Release</a>,<a class="link" href="https://www.prnewswire.com/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=palo-alto-s-3-35b-observability-bet-why-palo-alto-s-3-35b-observability-bet-signals-the-end-of-vulnerability-management" target="_blank" rel="noopener noreferrer nofollow"> PR Newswire</a></p><hr class="content_break"><h3 class="heading" style="text-align:left;" id="2-varonis-acquires-all-trueai-for-1"><b>2. </b>🤖<b> Varonis Acquires AllTrue.ai for $125M: AI TRiSM Meets Data Security</b></h3><p class="paragraph" style="text-align:left;"><b>What Happened:</b> On February 3, 2026, Varonis Systems announced its acquisition of AllTrue.ai, an AI Trust, Risk, and Security Management (AI TRiSM) platform, for $125 million in an all-cash deal. AllTrue.ai provides real-time visibility and governance for AI systems across enterprises, enabling organizations to inventory AI systems, understand their intent and connections, control AI behavior in real-time, and prove accountability for governance and compliance. The acquisition addresses the challenge of autonomous AI systems (models, copilots, and agents) operating at machine speed without clear visibility or guardrails.</p><p class="paragraph" style="text-align:left;"><b>Why This Matters:</b> This acquisition positions Varonis at the forefront of an emerging security category: securing AI agents and autonomous systems that act on enterprise data. As organizations deploy GenAI tools, chatbots, and AI agents at scale, these systems move beyond passive data analysis to making autonomous decisions and taking actions creating a fundamentally new risk profile.</p><p class="paragraph" style="text-align:left;">For cloud security practitioners, this signals several critical trends:</p><ol start="1"><li><p class="paragraph" style="text-align:left;"><b>AI security evolution</b>: The focus is shifting beyond prompt injection and model attacks to include governance, behavior monitoring, and accountability for AI actions. This aligns with Brad Hibbert&#39;s observation that &quot;teams are drowning in data&quot; and need better ways to &quot;make connections that as a human is difficult to make.&quot;</p></li><li><p class="paragraph" style="text-align:left;"><b>Data security vendors as AI gatekeepers</b>: Companies with visibility into data access patterns are positioning themselves as the logical owners of AI security. Expect AI governance controls (model risk monitoring, exposure controls, usage policies) to converge with data security posture (SaaS + cloud data stores).</p></li><li><p class="paragraph" style="text-align:left;"><b>Least-privilege extends to AI</b>: Traditional identity and access management principles must now apply to AI systems, not just users. Organizations should start mapping where LLM/agent workflows touch cloud data (S3, Azure Blob, Google Drive, SaaS apps) and decide whether AI controls belong in DSPM/CASB, IAM, or a dedicated AI security layer.</p></li><li><p class="paragraph" style="text-align:left;"><b>Compliance and auditability</b>: As AI systems make autonomous decisions affecting business operations, proving compliance becomes critical. Organizations need audit trails for AI decisions, especially where AI touches regulated data.</p></li></ol><p class="paragraph" style="text-align:left;"><b>Action for defenders:</b> Prioritize establishing inventories of AI systems and their data access, implementing real-time monitoring for AI agent behavior, defining policies for acceptable AI actions, and ensuring audit trails for AI decisions.</p><p class="paragraph" style="text-align:left;"><b>Sources:</b><a class="link" href="https://www.varonis.com/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=palo-alto-s-3-35b-observability-bet-why-palo-alto-s-3-35b-observability-bet-signals-the-end-of-vulnerability-management" target="_blank" rel="noopener noreferrer nofollow"> Varonis Press Release</a>,<a class="link" href="https://investors.varonis.com/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=palo-alto-s-3-35b-observability-bet-why-palo-alto-s-3-35b-observability-bet-signals-the-end-of-vulnerability-management" target="_blank" rel="noopener noreferrer nofollow"> Varonis Q4 2025 Earnings</a>,<a class="link" href="https://www.wsj.com/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=palo-alto-s-3-35b-observability-bet-why-palo-alto-s-3-35b-observability-bet-signals-the-end-of-vulnerability-management" target="_blank" rel="noopener noreferrer nofollow"> Wall Street Journal</a></p><hr class="content_break"><h3 class="heading" style="text-align:left;" id="3-thoma-bravo-explores-sale-of-impr"><b>3. 💰 Thoma Bravo Explores Sale of Imprivata (Up to ~$7B Valuation)</b></h3><p class="paragraph" style="text-align:left;"><b>What Happened:</b> Reuters reports that private equity firm Thoma Bravo is exploring a sale of healthcare identity vendor Imprivata, potentially valuing it at up to $7 billion. Imprivata sits at the intersection of healthcare identity, privileged workflows, and regulated access, often integrated into hybrid cloud and SaaS environments.</p><p class="paragraph" style="text-align:left;"><b>Why This Matters:</b> A transaction of this scale signals continued consolidation in the IAM space, particularly around specialized identity solutions for regulated industries. For enterprises relying on Imprivata or similar &quot;workflow IAM&quot; solutions, this potential ownership change could impact product roadmaps, pricing models, and integration strategies with major identity platforms like Microsoft Entra ID, Okta, and PAM solutions.</p><p class="paragraph" style="text-align:left;">This relates directly to the exposure management theme: as identity becomes increasingly central to security architecture, organizations need to understand their dependencies on identity vendors and ensure they have contingency plans. As Brad Hibbert emphasizes, &quot;It&#39;s not just about the server, it&#39;s about the services...those services could be spread across multiple processing units and storage units.&quot;</p><p class="paragraph" style="text-align:left;"><b>Action for defenders:</b> If you rely on Imprivata or similar solutions, confirm exportability of identity and audit data, validate integration roadmaps with major identity providers, and build vendor-change risk into your 2026 identity program planning.</p><p class="paragraph" style="text-align:left;"><b>Source:</b><a class="link" href="https://www.reuters.com/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=palo-alto-s-3-35b-observability-bet-why-palo-alto-s-3-35b-observability-bet-signals-the-end-of-vulnerability-management" target="_blank" rel="noopener noreferrer nofollow"> Reuters</a></p><hr class="content_break"><h3 class="heading" style="text-align:left;" id="3-crowd-strike-acquires-sgnl-for-74"><b>4. ☁️ Cloud Provider Security Updates Worth Acting On</b></h3><p class="paragraph" style="text-align:left;"><b>What Happened:</b> Several notable security feature updates were announced across major cloud providers this week:</p><ul><li><p class="paragraph" style="text-align:left;"><b>AWS CloudFront</b>: Added mTLS to origins, enabling true end-to-end client authentication patterns beyond viewer-to-edge</p></li><li><p class="paragraph" style="text-align:left;"><b>AWS EC2/VPC</b>: Introduced &quot;Related resources&quot; view for security groups to reduce misconfiguration risk during rule changes</p></li><li><p class="paragraph" style="text-align:left;"><b>Microsoft Defender for Cloud</b>: Highlighted Microsoft Security Private Link (public preview) to keep Defender traffic on private connectivity</p></li><li><p class="paragraph" style="text-align:left;"><b>Google Cloud (Apigee)</b>: Shipped Advanced API Security updates supporting richer condition logic in security actions</p></li></ul><p class="paragraph" style="text-align:left;"><b>Why This Matters:</b> These are &quot;quiet&quot; changes that materially improve control-plane security (private telemetry paths), edge-to-origin trust, and safe change management for network controls. In the context of exposure management, these updates represent the kind of incremental security improvements that reduce attack surface without requiring major architectural changes.</p><p class="paragraph" style="text-align:left;">Brad Hibbert&#39;s perspective is relevant here: exposure management is about &quot;not just about the exposures, but exposures that could impact your business. Are those exposures in your environment? Are they reachable? Are they exploitable? If they are exploited, what&#39;s the blast radius?&quot; These cloud provider updates help reduce both reachability and exploitability.</p><p class="paragraph" style="text-align:left;"><b>Actions for defenders:</b></p><ul><li><p class="paragraph" style="text-align:left;">If you operate CloudFront with sensitive origins, evaluate mTLS-to-origin as a hardening lever for internal services and partner integrations</p></li><li><p class="paragraph" style="text-align:left;">In Azure, assess Private Link options for security tooling connectivity patterns to reduce exposure and simplify egress controls</p></li><li><p class="paragraph" style="text-align:left;">Review Google Cloud Apigee updates if you&#39;re managing API security at scale</p></li></ul><p class="paragraph" style="text-align:left;"><b>Sources:</b><a class="link" href="https://aws.amazon.com/new/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=palo-alto-s-3-35b-observability-bet-why-palo-alto-s-3-35b-observability-bet-signals-the-end-of-vulnerability-management" target="_blank" rel="noopener noreferrer nofollow"> AWS What&#39;s New</a>,<a class="link" href="https://aws.amazon.com/blogs/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=palo-alto-s-3-35b-observability-bet-why-palo-alto-s-3-35b-observability-bet-signals-the-end-of-vulnerability-management" target="_blank" rel="noopener noreferrer nofollow"> AWS Blog</a>,<a class="link" href="https://techcommunity.microsoft.com/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=palo-alto-s-3-35b-observability-bet-why-palo-alto-s-3-35b-observability-bet-signals-the-end-of-vulnerability-management" target="_blank" rel="noopener noreferrer nofollow"> Microsoft TechCommunity</a>,<a class="link" href="https://cloud.google.com/release-notes?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=palo-alto-s-3-35b-observability-bet-why-palo-alto-s-3-35b-observability-bet-signals-the-end-of-vulnerability-management" target="_blank" rel="noopener noreferrer nofollow"> Google Cloud Release Notes</a></p><hr class="content_break"><h3 class="heading" style="text-align:left;" id="3-crowd-strike-acquires-sgnl-for-74"><b>5. </b>🤖<b> Agentic AI Security: Exposed Control Panels + Prompt Injection = One-Click RCE</b></h3><p class="paragraph" style="text-align:left;"><b>What Happened:</b> Axios reported security concerns around an open-source autonomous agent (Moltbot), including exposed/misconfigured control panels and susceptibility to prompt injection. Separately, a high-severity OpenClaw flaw was disclosed enabling one-click RCE via a malicious link (now patched). Security researcher Bruce Schneier also highlighted &quot;indirect prompt injection&quot; as a growing attack class.</p><p class="paragraph" style="text-align:left;"><b>Why This Matters:</b> Agents frequently run with broad API keys, SaaS tokens, and cloud credentials. Prompt injection turns &quot;content&quot; (emails, tickets, docs, web pages) into a command channel that can exfiltrate secrets or trigger destructive actions. This is the next wave of supply chain attacks, but at the workflow layer attackers target what the agent reads and what tools it can use.</p><p class="paragraph" style="text-align:left;">This connects directly to our main theme: as organizations adopt AI-powered automation in their security operations (including exposure management and remediation), they must understand the security implications. Brad Hibbert cautions: &quot;You have to trust the opinions that these AI models are producing...automation works well for known patterns...but if it&#39;s risky change, if it&#39;s something where it could have an impact on a critical service...you may not automate it 100%. You still might want a human in the loop.&quot;</p><p class="paragraph" style="text-align:left;"><b>Actions for defenders:</b></p><ul><li><p class="paragraph" style="text-align:left;">Enforce tool-scoped permissions with least-privilege API keys for all AI agents</p></li><li><p class="paragraph" style="text-align:left;">Isolate agent execution environments from production systems</p></li><li><p class="paragraph" style="text-align:left;">Require human approval gates for high-risk actions (data export, IAM changes, code deployments)</p></li><li><p class="paragraph" style="text-align:left;">Add monitoring for agent-initiated abnormal access patterns (bulk reads, new destinations, unusual OAuth scopes)</p></li></ul><p class="paragraph" style="text-align:left;"><b>Sources:</b><a class="link" href="https://www.axios.com/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=palo-alto-s-3-35b-observability-bet-why-palo-alto-s-3-35b-observability-bet-signals-the-end-of-vulnerability-management" target="_blank" rel="noopener noreferrer nofollow"> Axios</a>,<a class="link" href="https://thehackernews.com/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=palo-alto-s-3-35b-observability-bet-why-palo-alto-s-3-35b-observability-bet-signals-the-end-of-vulnerability-management" target="_blank" rel="noopener noreferrer nofollow"> The Hacker News</a>,<a class="link" href="https://www.schneier.com/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=palo-alto-s-3-35b-observability-bet-why-palo-alto-s-3-35b-observability-bet-signals-the-end-of-vulnerability-management" target="_blank" rel="noopener noreferrer nofollow"> Schneier on Security</a>,<a class="link" href="https://www.tenable.com/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=palo-alto-s-3-35b-observability-bet-why-palo-alto-s-3-35b-observability-bet-signals-the-end-of-vulnerability-management" target="_blank" rel="noopener noreferrer nofollow"> Tenable</a></p><hr class="content_break"><h2 class="heading" style="text-align:left;" id="cloud-security-topic-of-the-week">🎯 Cloud Security Topic of the Week: </h2><h3 class="heading" style="text-align:left;" id="from-vulnerability-chaos-to-exposur">From Vulnerability Chaos to Exposure Clarity: How Enterprises Are Winning the Risk Reduction Game</h3><p class="paragraph" style="text-align:left;">The security industry has a dirty secret: most organizations are drowning in vulnerability data but starving for actionable insights. According to Brad Hibbert, who has spent 20 years watching vulnerability management evolve from quarterly server scans to AI-powered exposure orchestration, the fundamental problem isn&#39;t finding vulnerabilities, it&#39;s deciding what to do about them.</p><p class="paragraph" style="text-align:left;">&quot;It really wasn&#39;t about the tools,&quot; Hibbert explains. &quot;It was really about decision clarity and kind of moving beyond the tools.&quot;</p><p class="paragraph" style="text-align:left;">This week, we&#39;re exploring why leading enterprises are moving from risk-based vulnerability management to exposure management, and what this means for cloud security teams trying to protect increasingly complex environments spanning infrastructure, applications, containers, identities, and AI systems.</p><hr class="content_break"><h2 class="heading" style="text-align:left;" id="featured-experts-this-week"><b>Featured Experts This Week </b>🎤</h2><ul><li><p class="paragraph" style="text-align:left;"><a class="link" href="https://www.linkedin.com/in/bradhibbert/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=palo-alto-s-3-35b-observability-bet-why-palo-alto-s-3-35b-observability-bet-signals-the-end-of-vulnerability-management" target="_blank" rel="noopener noreferrer nofollow"><b>Brad Hibbert</b></a><b> - </b> COO & Chief Strategy Officer, Brinqa</p></li><li><p class="paragraph" style="text-align:left;"><b><a class="link" href="https://www.linkedin.com/in/ashishrajan/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=palo-alto-s-3-35b-observability-bet-why-palo-alto-s-3-35b-observability-bet-signals-the-end-of-vulnerability-management" target="_blank" rel="noopener noreferrer nofollow">Ashish Rajan</a></b> - CISO | Co-Host <a class="link" href="https://open.spotify.com/show/3nV4eijfzdHKIvDOaycVII?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=palo-alto-s-3-35b-observability-bet-why-palo-alto-s-3-35b-observability-bet-signals-the-end-of-vulnerability-management" target="_blank" rel="noopener noreferrer nofollow">AI Security Podcast</a> , Host of <a class="link" href="https://www.cloudsecuritypodcast.tv/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=palo-alto-s-3-35b-observability-bet-why-palo-alto-s-3-35b-observability-bet-signals-the-end-of-vulnerability-management" target="_blank" rel="noopener noreferrer nofollow">Cloud Security Podcast</a></p></li></ul><h2 class="heading" style="text-align:left;" id="definitions-and-core-concepts"><b>Definitions and Core Concepts 📚</b></h2><p class="paragraph" style="text-align:left;">Before diving into our insights, let&#39;s clarify some key terms:</p><ul><li><p class="paragraph" style="text-align:left;"><b>Risk-Based Vulnerability Management (RBVM):</b> An approach that prioritizes vulnerabilities based on context like asset criticality, threat intelligence, and CVSS scores. Typically operates within individual tools and teams, providing prioritized lists of exposures but remaining siloed by domain (infrastructure, applications, cloud, etc.).</p></li><li><p class="paragraph" style="text-align:left;"><b>Exposure Management:</b> A holistic, enterprise-wide approach that sits above individual security tools to normalize, correlate, and prioritize risks across all domains. Focuses on business outcomes (risk reduction to critical services) rather than metrics (number of vulnerabilities closed). Emphasizes decision orchestration and remediation coordination across teams.</p></li><li><p class="paragraph" style="text-align:left;"><b>Business Services vs. Assets:</b> In exposure management, the unit of analysis shifts from individual assets (servers, containers, VMs) to business services (customer-facing applications, internal workflows, revenue-generating systems). A single business service may span multiple assets, and a single asset may support multiple services.</p></li><li><p class="paragraph" style="text-align:left;"><b>Remediation Owner vs. Risk Owner:</b> Risk owners (typically service owners or business unit leaders) decide what risk to accept or remediate based on business impact. Remediation owners (infrastructure, cloud, AppSec teams) perform the actual fixes. This separation is critical for effective exposure management.</p></li><li><p class="paragraph" style="text-align:left;"><b>Decision Orchestration vs. Data Orchestration:</b> Data orchestration normalizes and aggregates vulnerability data from multiple sources. Decision orchestration provides opinionated recommendations on what to fix based on business context, reachability, exploitability, and blast radius enabling faster, more confident action.</p></li><li><p class="paragraph" style="text-align:left;"><b>Explainable AI in Security:</b> AI models that provide clear reasoning for their recommendations, allowing security teams to understand why a particular vulnerability or exposure is prioritized. Critical for building trust in AI-driven security decisions.</p></li></ul><hr class="content_break"><p class="paragraph" style="text-align:center;">This week&#39;s issue is sponsored by <a class="link" href="https://links.cloudsecuritypodcast.tv/see-prowler-in-action-jan-2026?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=palo-alto-s-3-35b-observability-bet-why-palo-alto-s-3-35b-observability-bet-signals-the-end-of-vulnerability-management" target="_blank" rel="noopener noreferrer nofollow"><b>Prowler</b></a><b> </b></p><p class="paragraph" style="text-align:center;"><span style="color:rgb(0, 0, 0);font-family:Arial, sans-serif;font-size:11pt;"><b>The world’s most widely adopted open cloud security platform</b></span></p><p class="paragraph" style="text-align:left;"><span style="color:rgb(0, 0, 0);font-family:Arial, sans-serif;font-size:11pt;">Trusted by modern cloud security teams, </span><span style="color:rgb(0, 0, 0);font-family:Arial, sans-serif;font-size:11pt;"><b>Prowler</b></span><span style="color:rgb(0, 0, 0);font-family:Arial, sans-serif;font-size:11pt;"> detects vulnerabilities and misconfigurations, prioritizes risk, accelerates remediation, and delivers audit-ready compliance reports.</span></p><p class="paragraph" style="text-align:left;"><span style="color:rgb(0, 0, 0);font-family:Arial, sans-serif;font-size:11pt;">With </span><span style="color:rgb(0, 0, 0);font-family:Arial, sans-serif;font-size:11pt;"><b>44M+ downloads</b></span><span style="color:rgb(0, 0, 0);font-family:Arial, sans-serif;font-size:11pt;">, </span><span style="color:rgb(0, 0, 0);font-family:Arial, sans-serif;font-size:11pt;"><b>12K+ GitHub stars</b></span><span style="color:rgb(0, 0, 0);font-family:Arial, sans-serif;font-size:11pt;">, and </span><span style="color:rgb(0, 0, 0);font-family:Arial, sans-serif;font-size:11pt;"><b>300+ contributors</b></span><span style="color:rgb(0, 0, 0);font-family:Arial, sans-serif;font-size:11pt;">, Prowler is the open standard for cloud security.</span></p><p class="paragraph" style="text-align:left;"><span style="color:rgb(0, 0, 0);font-family:Arial, sans-serif;font-size:11pt;">Ask </span><span style="color:rgb(0, 0, 0);font-family:Arial, sans-serif;font-size:11pt;"><b>Lighthouse AI</b></span><span style="color:rgb(0, 0, 0);font-family:Arial, sans-serif;font-size:11pt;"> security questions just like a trusted colleague. Get actionable insights and remediation plans instantly. Secure your cloud programmatically with the Prowler MCP server.</span></p><p class="paragraph" style="text-align:center;"><a class="link" href="https://links.cloudsecuritypodcast.tv/see-prowler-in-action-jan-2026?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=palo-alto-s-3-35b-observability-bet-why-palo-alto-s-3-35b-observability-bet-signals-the-end-of-vulnerability-management" target="_blank" rel="noopener noreferrer nofollow"><b>See Prowler In Action</b></a></p><hr class="content_break"><h2 class="heading" style="text-align:left;" id="our-insights-from-this-practitioner">💡<b>Our Insights from this Practitioner 🔍</b></h2><h3 class="heading" style="text-align:left;" id="from-vulnerability-chaos-to-exposur">From Vulnerability Chaos to Exposure Clarity: How Enterprises Are Winning the Risk Reduction Game<b> </b><b>(</b><a class="link" href="https://www.cloudsecuritypodcast.tv/videos/vulnerability-management-vs-exposure-management?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=palo-alto-s-3-35b-observability-bet-why-palo-alto-s-3-35b-observability-bet-signals-the-end-of-vulnerability-management" target="_blank" rel="noopener noreferrer nofollow"><b>Full Episode here</b></a><b>)</b></h3><hr class="content_break"><h3 class="heading" style="text-align:left;" id="the-evolution-from-patch-lists-to-b"><b>The Evolution: From Patch Lists to Business Impact</b></h3><p class="paragraph" style="text-align:left;">Twenty years ago, vulnerability management was straightforward. Organizations scanned servers quarterly, generated reports sorted by CVSS score, and worked down the critical/high list until the next audit cycle. &quot;<i>We were dealing with large customers...it was all about the servers and patching servers&quot;,</i> Hibbert recalls. &quot;<i>They might have done a scan once a quarter to meet their compliance requirement. Then they moved it to monthly&quot;.</i></p><p class="paragraph" style="text-align:left;">But the cloud changed everything.</p><p class="paragraph" style="text-align:left;">&quot;<i>Today, assets are a lot more dynamic and interconnected than they were in the past</i>&quot;, Hibbert explains. &quot;<i>It&#39;s not just about a server, it&#39;s about the business services that server is supporting. A server could be supporting multiple services. Those services could be spread across multiple processing units and storage units and those sorts of things&quot;.</i></p><p class="paragraph" style="text-align:left;">This interconnectedness creates a fundamental challenge: traditional vulnerability management approaches, even risk-based ones, remain siloed within individual teams and tools. Infrastructure teams manage server vulnerabilities. AppSec teams manage code vulnerabilities. Cloud teams manage container and configuration issues. Each has their own priorities, their own dashboards, their own understanding of &quot;critical&quot;.</p><p class="paragraph" style="text-align:left;">&quot;The team&#39;s really drowning in data&quot;, Hibbert observes. &quot;It&#39;s not, you don&#39;t have a telemetry, it&#39;s how do you kind of work through that data? I come to you as a remediation owner, so you crack open a spreadsheet and start arguing that your data&#39;s different from my data. Right? That happens a lot&quot;.</p><h3 class="heading" style="text-align:left;" id="the-exposure-management-difference-"><b>The Exposure Management Difference: Context Over Volume</b></h3><p class="paragraph" style="text-align:left;">The shift to exposure management represents a fundamental change in how organizations think about security risk. Instead of optimizing for vulnerability closure rates or compliance metrics, exposure management focuses on a more strategic question: <b>What exposures could actually impact the business, and how do we coordinate their removal across teams?</b></p><p class="paragraph" style="text-align:left;">Hibbert breaks this down into several key questions that exposure management must answer:</p><ol start="1"><li><p class="paragraph" style="text-align:left;"><b>Do I have this exposure in my environment?</b> (Discovery still important but no longer sufficient)</p></li><li><p class="paragraph" style="text-align:left;"><b>Is it reachable in my environment?</b> (Network context, segmentation, access controls)</p></li><li><p class="paragraph" style="text-align:left;"><b>Is it exploitable?</b> (Threat intelligence, exploit availability, attack complexity)</p></li><li><p class="paragraph" style="text-align:left;"><b>What&#39;s the blast radius if exploited?</b> (Connected services, data access, lateral movement paths)</p></li><li><p class="paragraph" style="text-align:left;"><b>Which business services does this affect?</b> (Service mapping, criticality assessment)</p></li><li><p class="paragraph" style="text-align:left;"><b>Who owns the risk and who fixes it?</b> (Service owners vs. remediation teams)</p></li></ol><p class="paragraph" style="text-align:left;">&quot;<i>It&#39;s a shift from reporting to remediation&quot;, Hibbert emphasizes. &quot;Exposure management is all about getting that prioritization done at a finer level, but then mobilizing and orchestrating the remediation that needs to happen to remove that risk from the environment across the different teams</i>&quot;.</p><p class="paragraph" style="text-align:left;">This distinction is critical. Risk-based vulnerability management gives you better prioritized lists. Exposure management gives you coordinated action.</p><h3 class="heading" style="text-align:left;" id="the-ownership-problem-service-owner"><b>The Ownership Problem: Service Owners vs. Remediation Teams</b></h3><p class="paragraph" style="text-align:left;">One of the most persistent challenges in vulnerability management has always been ownership. Who&#39;s responsible for fixing what? In traditional environments, the answer was simple: if it&#39;s on your server, it&#39;s your problem. But modern architectures don&#39;t work that way.</p><p class="paragraph" style="text-align:left;">&quot;<i>The server team&#39;s not gonna know which one of these services is most important to the business&quot;,</i> Hibbert points out. &quot;<i>It would be the service owners that understand, am I willing to accept this risk?</i>&quot;</p><p class="paragraph" style="text-align:left;">He provides a concrete example: &quot;<i>If a server has multiple applications or is supporting multiple services, one might be a service that&#39;s providing a service to your internal employees. It might also be providing support for a service that&#39;s providing a service to your customers. The server team&#39;s not gonna know which one of these services is most important to the business</i>&quot;.</p><p class="paragraph" style="text-align:left;">This leads to a crucial distinction in exposure management: <b>risk owners</b> versus <b>remediation owners</b>.</p><p class="paragraph" style="text-align:left;"><b>Risk owners</b> (typically service owners or business unit leaders) understand business impact and make decisions about risk acceptance or mitigation priority. They can answer: &quot;<i>Is this service revenue-generating? Is it customer-facing? What&#39;s the SLA? What&#39;s the business impact if it goes down?</i>&quot;</p><p class="paragraph" style="text-align:left;"><b>Remediation owners</b> (infrastructure, cloud, AppSec teams) have the technical expertise and access to implement fixes. They can answer: &quot;<i>How do we patch this? What&#39;s the change control process? What&#39;s the testing requirement? What&#39;s the rollback plan?</i>&quot;</p><p class="paragraph" style="text-align:left;">&quot;<i>I own the risk as the service owner</i>&quot;, Hibbert explains. &quot;<i>But when exposures happen that could impact my service, I have to mobilize those remediation teams to go fix it. Those teams could be, if it&#39;s in the code, could be the code team. If it&#39;s in the cloud, it could be the cloud team. If it&#39;s on the server, it could be a server team&quot;.</i></p><p class="paragraph" style="text-align:left;">This separation enables more effective decision-making because business context and technical execution are properly aligned.</p><h3 class="heading" style="text-align:left;" id="the-ai-opportunity-and-its-limits"><b>The AI Opportunity (and Its Limits)</b></h3><p class="paragraph" style="text-align:left;">Given the massive volumes of security data organizations must process vulnerabilities, configurations, threat intelligence, asset context, network topology, identity permissions AI seems like an obvious solution. And Hibbert agrees, with important caveats.</p><p class="paragraph" style="text-align:left;">&quot;<i>AI is certainly a great way to help you kind of do that&quot;,</i> he says. &quot;<i>It can help you make the connections that as a human are difficult to make. So if you start to look at attacker behavior, attack techniques, the way that these exploits are being leveraged, what mitigating controls you have in place...there&#39;s just a number of different data elements that could be tied together. And the use of AI certainly helps you make those connections very quickly&quot;.</i></p><p class="paragraph" style="text-align:left;">But here&#39;s the critical insight: <b>AI is only as good as the data foundation it&#39;s built on.</b></p><p class="paragraph" style="text-align:left;">&quot;<i>You have to trust the opinions that these AI models are producing</i>&quot;, Hibbert warns. &quot;<i>And so, for me, you know, I always tell people AI&#39;s great. Automation&#39;s always great. It can speed things up. You want it to speed the right things up. So you want to make sure that your AI is based on a sound data foundation</i>&quot;.</p><p class="paragraph" style="text-align:left;">This means:</p><ol start="1"><li><p class="paragraph" style="text-align:left;"><b>Normalize data from all sources:</b> Multiple scanners, multiple CMDBs, multiple threat feeds must speak a common language</p></li><li><p class="paragraph" style="text-align:left;"><b>Establish shared risk models:</b> Teams must agree on how risk is calculated and prioritized before AI starts making recommendations</p></li><li><p class="paragraph" style="text-align:left;"><b>Ensure explainability:</b> AI must show its work why is this vulnerability prioritized? What factors contributed to the score?</p></li><li><p class="paragraph" style="text-align:left;"><b>Build trust gradually:</b> &quot;If you don&#39;t believe in the data, if you&#39;re suspect over the opinions coming out of the AI, then you&#39;re not gonna take action based on those outputs&quot;</p></li></ol><p class="paragraph" style="text-align:left;">Regarding the holy grail of automated remediation, Hibbert is pragmatic: &quot;<i>Automation works well for known patterns. Like if it&#39;s a simple fix or configuration change or something you&#39;ve done in the past where the AI can look back and see previous behavior and say, &#39;Hey, I have a 90% confidence that this is the right thing to do.</i>&#39;&quot;</p><p class="paragraph" style="text-align:left;">But for high-impact changes? &quot;<i>If it&#39;s a risky change, if it&#39;s something where it could have an impact on a critical service...or it could have significant damage in some way, then you may not automate it 100%. You still might want a human in the loop where you automate everything, but a human has to click the button</i>&quot;.</p><p class="paragraph" style="text-align:left;">This aligns perfectly with the emerging threats we&#39;re seeing around AI agents and prompt injection attacks. As AI systems gain more autonomy in security operations, the blast radius of a compromised or misdirected AI agent increases dramatically. The guardrails matter.</p><h3 class="heading" style="text-align:left;" id="compliance-vs-impact-a-critical-dis"><b>Compliance vs. Impact: A Critical Distinction</b></h3><p class="paragraph" style="text-align:left;">One of the most persistent challenges in vulnerability management is the compliance-driven approach many organizations are forced to adopt. PCI DSS requires critical vulnerabilities to be remediated within 30 days. Other frameworks have similar requirements. But Hibbert argues these metrics miss the point.</p><p class="paragraph" style="text-align:left;">&quot;<i>Compliance in many cases just proves activity</i>&quot;, he observes. &quot;<i>Hey, I closed my critical vulnerability in 21 days, but it&#39;s not tied to the actual risk</i>&quot;.</p><p class="paragraph" style="text-align:left;">The problem is that compliance frameworks tend to lag behind the reality of modern threat landscapes. They focus on metrics that are easy to measure (number of vulnerabilities, time to patch, coverage percentages) rather than outcomes that actually matter (risk reduction to business-critical services, prevention of real-world attacks).</p><p class="paragraph" style="text-align:left;">&quot;<i>Compliance proves activity</i>&quot;, Hibbert says. &quot;<i>Exposure management proves impact risk reduction in the environment. So you tie it back to, again, I removed this much risk from this particular business service as opposed to like, &#39;Hey, I cleared off my critical vulnerabilities off my servers</i>.&#39;&quot;</p><p class="paragraph" style="text-align:left;">This doesn&#39;t mean ignoring compliance requirements. It means understanding that compliance is a floor, not a ceiling. Organizations that only optimize for compliance metrics are playing a dangerous game: they&#39;re measuring activity while adversaries are measuring opportunity.</p><h3 class="heading" style="text-align:left;" id="the-maturity-path-how-to-start-with"><b>The Maturity Path: How to Start Without Boiling the Ocean</b></h3><p class="paragraph" style="text-align:left;">For organizations looking to evolve from risk-based vulnerability management to exposure management, Hibbert&#39;s advice is consistent: start small, prove value, then scale.</p><p class="paragraph" style="text-align:left;">&quot;<i>Don&#39;t try to boil the ocean&quot;, </i>he cautions. <i>&quot;Pick an area of the business that you want to evolve beyond risk-based vulnerability management to an area where risks are visible, where they&#39;re painful in the organization today. So that could be on a certain set of services. Maybe it&#39;s one service or two services that you want to focus on getting that next level of prescription. Maybe it&#39;s your external attack surface...whatever the area is within your organization, focus on that first</i>&quot;.</p><p class="paragraph" style="text-align:left;">The key is demonstrating actual risk reduction to business services, not just improved metrics: &quot;<i>Show that you have success. We have many customers of ours who have dozens of different applications that they&#39;ve kind of brought into the program, but they started with two. And then kind of once they showed some value and showed how they&#39;re making better decisions that better aligned to the business itself, then they expanded&quot;.</i></p><p class="paragraph" style="text-align:left;">He recommends several prerequisites before beginning an exposure management initiative:</p><p class="paragraph" style="text-align:left;"><b>1. Executive commitment:</b> &quot;<i>There has to be executive commitment at the top layer that exposure management is a discipline that an organization wants to embrace&quot;.</i></p><p class="paragraph" style="text-align:left;"><b>2. Shared understanding of risk:</b> &quot;<i>You have to have that shared understanding of risk across the different teams...in many cases, I would say there needs to be a shared incentive program. So they&#39;re all working towards the same goal across the different teams&quot;.</i></p><p class="paragraph" style="text-align:left;"><b>3. Dedicated resources:</b> &quot;<i>Most of the programs that we&#39;ve seen...just to drive the accountability and the movement across the different teams and coordinate the activities across the teams, it does require some resourcing and some investment</i>&quot;.</p><p class="paragraph" style="text-align:left;"><b>4. Start with friendlies:</b> Begin with teams that understand the limitations of current approaches and are motivated to try something new. Success with early adopters builds momentum.</p><p class="paragraph" style="text-align:left;"><b>5. Accommodate existing workflows:</b> &quot;<i>You don&#39;t wanna force them to some sort of corporate way of doing things. We&#39;ve got customers who use 20 different flavors of Jira, and we can accommodate that. We&#39;re not asking them all to change the way they develop code, but we&#39;re showing them where in their code that they should fix exposures by using a broader, more business-aligned prioritization model&quot;.</i></p><h3 class="heading" style="text-align:left;" id="the-data-consistency-challenge"><b>The Data Consistency Challenge</b></h3><p class="paragraph" style="text-align:left;">One recurring theme in Hibbert&#39;s experience is the &quot;competing spreadsheets&quot; problem. Different teams use different tools, which produce different data, which leads to endless debates about whose numbers are correct.</p><p class="paragraph" style="text-align:left;">&quot;What you don&#39;t wanna do in an exposure management program is I come up with this great list of things I need to get done, but then I come to you as the remediation owner and I say, &#39;Hey, can you go fix this?&#39; And you crack open a spreadsheet and start arguing that your data&#39;s different from my data. Right? That happens a lot&quot;.</p><p class="paragraph" style="text-align:left;">The solution isn&#39;t to force everyone onto the same tooling that&#39;s neither practical nor desirable, as different teams need different capabilities. Instead, exposure management platforms sit above the tools, normalizing and correlating data to create a single source of truth for prioritization decisions.</p><p class="paragraph" style="text-align:left;">&quot;Having that shared understanding of risk and how you&#39;re gonna calculate risk is critical&quot;, Hibbert explains. &quot;And then having explainability on why...you feel that your data has given you the best decisioning capability that you can, is pretty important&quot;.</p><p class="paragraph" style="text-align:left;">This normalization layer also helps address another common challenge: legacy systems and technical debt. &quot;You&#39;re never gonna start with perfect data, perfect systems&quot;, Hibbert acknowledges. &quot;But pick an area of the business that you want to focus on...and then focus on the decisions. Don&#39;t focus on the scope of the program&quot;.</p><hr class="content_break"><h2 class="heading" style="text-align:left;" id="related-podcast-episodes"><b>RELATED RESOURCES 🎧</b></h2><ul><li><p class="paragraph" style="text-align:left;"><a class="link" href="https://www.gartner.com/en/information-technology/glossary/ai-trust-risk-and-security-management-ai-trism?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=palo-alto-s-3-35b-observability-bet-why-palo-alto-s-3-35b-observability-bet-signals-the-end-of-vulnerability-management" target="_blank" rel="noopener noreferrer nofollow">AI TRiSM Overview by Gartner</a> - Understanding the AI Trust, Risk, and Security Management category</p></li><li><p class="paragraph" style="text-align:left;"><a class="link" href="https://aws.amazon.com/security/best-practices/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=palo-alto-s-3-35b-observability-bet-why-palo-alto-s-3-35b-observability-bet-signals-the-end-of-vulnerability-management" target="_blank" rel="noopener noreferrer nofollow">AWS Security Best Practices</a> - Official AWS security guidance</p></li><li><p class="paragraph" style="text-align:left;"><a class="link" href="https://learn.microsoft.com/en-us/security/benchmark/azure/introduction?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=palo-alto-s-3-35b-observability-bet-why-palo-alto-s-3-35b-observability-bet-signals-the-end-of-vulnerability-management" target="_blank" rel="noopener noreferrer nofollow">Microsoft Cloud Security Benchmark</a> - Azure security baseline recommendations</p></li><li><p class="paragraph" style="text-align:left;"><a class="link" href="https://cloud.google.com/security/foundations-guide?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=palo-alto-s-3-35b-observability-bet-why-palo-alto-s-3-35b-observability-bet-signals-the-end-of-vulnerability-management" target="_blank" rel="noopener noreferrer nofollow">Google Cloud Security Foundations Guide</a> - GCP security architecture patterns</p></li><li><p class="paragraph" style="text-align:left;"><a class="link" href="https://www.brinqa.com/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=palo-alto-s-3-35b-observability-bet-why-palo-alto-s-3-35b-observability-bet-signals-the-end-of-vulnerability-management" target="_blank" rel="noopener noreferrer nofollow">Brinqa Exposure Management Platform</a> - Enterprise-grade exposure management and decision orchestration</p></li></ul><h3 class="heading" style="text-align:left;" id="cloud-security-podcast"><b>Cloud Security Podcast</b></h3><ul><li><p class="paragraph" style="text-align:left;"><a class="link" href="https://www.cloudsecuritypodcast.tv/videos/vulnerability-management-vs-exposure-management?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=palo-alto-s-3-35b-observability-bet-why-palo-alto-s-3-35b-observability-bet-signals-the-end-of-vulnerability-management" target="_blank" rel="noopener noreferrer nofollow"><b>Cloud Security Podcast – Full Episode with Brad Hibbert</b></a></p></li></ul><hr class="content_break"><h2 class="heading" style="text-align:left;" id="related-podcast-episodes"><b>Related Podcast Episodes 🎧</b></h2><div class="image"><a class="image__link" href="https://www.cloudsecuritypodcast.tv/videos/vulnerability-management-vs-exposure-management?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=palo-alto-s-3-35b-observability-bet-why-palo-alto-s-3-35b-observability-bet-signals-the-end-of-vulnerability-management" rel="noopener" target="_blank"><img alt="" class="image__image" style="" src="https://media.beehiiv.com/cdn-cgi/image/fit=scale-down,format=auto,onerror=redirect,quality=80/uploads/asset/file/730b39dd-9c27-462c-935f-69755ec22e0d/S07EP04_Brinqa_Brad_Hibbert.jpg?t=1770270022"/></a><div class="image__source"><a class="image__source_link" href="https://www.cloudsecuritypodcast.tv/videos/vulnerability-management-vs-exposure-management?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=palo-alto-s-3-35b-observability-bet-why-palo-alto-s-3-35b-observability-bet-signals-the-end-of-vulnerability-management" rel="noopener" target="_blank"><span class="image__source_text"><p>Vulnerability Management vs. Exposure Management</p></span></a></div></div><h3 class="heading" style="text-align:left;" id="question-for-you-reply-to-this-emai">Question for you? (Reply to this email)</h3><p class="paragraph" style="text-align:left;">🤔 <b> </b> <b>Is your team still fighting spreadsheet wars over vulnerability priorities which service owner will you pilot exposure management with first?</b><br></p><p class="paragraph" style="text-align:left;">Next week, we&#39;ll explore another critical aspect of cloud security. Stay tuned!</p><hr class="content_break"><p class="paragraph" style="text-align:left;">📬 Want weekly expert takes on AI & Cloud Security? [<a class="link" href="https://www.cloudsecuritynewsletter.com/subscribe?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=palo-alto-s-3-35b-observability-bet-why-palo-alto-s-3-35b-observability-bet-signals-the-end-of-vulnerability-management" target="_blank" rel="noopener noreferrer nofollow">Subscribe here</a>]”</p><hr class="content_break"><p class="paragraph" style="text-align:left;"><span style="color:rgb(238, 40, 60);"><b><a class="link" href="mailto:info@cloudsecuritypodcast.tv" target="_blank" rel="noopener noreferrer nofollow">We would love to hear from you</a></b></span>📢 for a feature or topic request or if you would like to sponsor an edition of Cloud Security Newsletter. </p><p class="paragraph" style="text-align:left;">Thank you for continuing to subscribe and Welcome to the new members in tis newsletter community💙</p><p class="paragraph" style="text-align:start;">Peace!</p><p class="paragraph" style="text-align:start;"><a class="link" href="https://www.linkedin.com/in/shilpi-bhattacharjee/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=palo-alto-s-3-35b-observability-bet-why-palo-alto-s-3-35b-observability-bet-signals-the-end-of-vulnerability-management" target="_blank" rel="noopener noreferrer nofollow">Shilpi Bhattacharjee</a></p><div class="image"><a class="image__link" href="https://www.cloudsecuritypodcast.tv/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=palo-alto-s-3-35b-observability-bet-why-palo-alto-s-3-35b-observability-bet-signals-the-end-of-vulnerability-management" rel="noopener" target="_blank"><img alt="" class="image__image" style="" src="https://media.beehiiv.com/cdn-cgi/image/fit=scale-down,format=auto,onerror=redirect,quality=80/uploads/asset/file/dc094a1c-678a-43e0-adc9-1bee6c3499e2/CSP_Logo_Blue_ScreenRes_3000x3000_v2.jpg"/></a></div><div class="button" style="text-align:left;"><a target="_blank" rel="noopener nofollow noreferrer" class="button__link" style="" href="{{rp_referral_hub_url}}"><span class="button__text" style=""> Share the newsletter </span></a></div><p class="paragraph" style="text-align:left;">Was this forwarded to you? You can <a class="link" href="https://www.cloudsecuritynewsletter.com/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=palo-alto-s-3-35b-observability-bet-why-palo-alto-s-3-35b-observability-bet-signals-the-end-of-vulnerability-management" target="_blank" rel="noopener noreferrer nofollow">Sign up here</a>, to join our growing readership.</p><p class="paragraph" style="text-align:left;">Want to <b>sponsor</b> the next newsletter edition! <a class="link" href="mailto:info@cloudsecuritypodcast.tv" target="_blank" rel="noopener noreferrer nofollow">Lets make it happen </a></p><p class="paragraph" style="text-align:left;">Have you joined our FREE <b>Monthly</b> <a class="link" href="https://www.cloudsecuritybootcamp.com/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=palo-alto-s-3-35b-observability-bet-why-palo-alto-s-3-35b-observability-bet-signals-the-end-of-vulnerability-management" target="_blank" rel="noopener noreferrer nofollow">Cloud Security Bootcamp</a> yet?</p><p class="paragraph" style="text-align:left;">checkout our <b>sister podcast</b> <a class="link" href="https://www.youtube.com/@AISecurityPodcast?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=palo-alto-s-3-35b-observability-bet-why-palo-alto-s-3-35b-observability-bet-signals-the-end-of-vulnerability-management" target="_blank" rel="noopener noreferrer nofollow">AI Security Podcast</a></p></div><div class='beehiiv__footer'><br class='beehiiv__footer__break'><hr class='beehiiv__footer__line'><a target="_blank" class="beehiiv__footer_link" style="text-align: center;" href="https://www.beehiiv.com/?utm_campaign=06a24c62-3f06-4b42-ba50-4a9fc09f1a9a&utm_medium=post_rss&utm_source=cloud_security_newsletter">Powered by beehiiv</a></div></div>
  ]]></content:encoded>
</item>

      <item>
  <title>🚨 Google Cloud Phishing Bypasses Email Security: Lessons from Anthropic&#39;s MCP Security Response</title>
  <description>This week&#39;s newsletter examines sophisticated attacks exploiting legitimate cloud services—from Google Cloud&#39;s email features to AI agent tooling—and explores how enterprises like Anthropic are building secure-by-design systems. We feature insights from Caleb Sima and Ashish Rajan on implementing defense-in-depth architectures that assume breach and verify continuously.</description>
      <enclosure url="https://media.beehiiv.com/cdn-cgi/image/fit=scale-down,format=auto,onerror=redirect,quality=80/uploads/asset/file/0ea8d888-3141-43b4-a4cf-dae1423837eb/Screenshot_2026-01-28_at_10.57.42_PM.png" length="1278338" type="image/png"/>
  <link>https://www.cloudsecuritynewsletter.com/p/archive-google-cloud-phishing-exploits-trusted-infrastructure-ai-security-s-blueprint-for-developer</link>
  <guid isPermaLink="true">https://www.cloudsecuritynewsletter.com/p/archive-google-cloud-phishing-exploits-trusted-infrastructure-ai-security-s-blueprint-for-developer</guid>
  <pubDate>Wed, 28 Jan 2026 23:07:50 +0000</pubDate>
  <atom:published>2026-01-28T23:07:50Z</atom:published>
    <dc:creator>Ashish Rajan</dc:creator>
  <content:encoded><![CDATA[
    <div class='beehiiv'><style>
  .bh__table, .bh__table_header, .bh__table_cell { border: 1px solid #C0C0C0; }
  .bh__table_cell { padding: 5px; background-color: #FFFFFF; }
  .bh__table_cell p { color: #2D2D2D; font-family: 'Helvetica',Arial,sans-serif !important; overflow-wrap: break-word; }
  .bh__table_header { padding: 5px; background-color:#F1F1F1; }
  .bh__table_header p { color: #2A2A2A; font-family:'Trebuchet MS','Lucida Grande',Tahoma,sans-serif !important; overflow-wrap: break-word; }
</style><div class='beehiiv__body'><h2 class="heading" style="text-align:left;" id="hello-from-the-cloudverse"><span style="background-color:#28EEDA;"><b>Hello from the Cloud-verse!</b></span></h2><p class="paragraph" style="text-align:left;">This week’s Cloud Security Newsletter topic<b>: AI Security 2026 Predictions: The &quot;Zombie Tool&quot; Crisis & The Rise of AI Platforms </b><a class="link" href="#cloud-security-topic-of-the-week" rel="noopener noreferrer nofollow">(continue reading)</a> </p><div class="button" style="text-align:center;"><a target="_blank" rel="noopener nofollow noreferrer" class="button__link" style="" href="https://links.cloudsecuritypodcast.tv/see-prowler-in-action-jan-2026?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=google-cloud-phishing-bypasses-email-security-lessons-from-anthropic-s-mcp-security-response"><span class="button__text" style=""> This issue is sponsored by Prowler </span></a></div><hr class="content_break"><div class="image"><a class="image__link" href="https://www.cloudsecuritypodcast.tv/videos/why-ai-cant-replace-detection-engineers-build-vs-buy-the-future-of-soc?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=google-cloud-phishing-bypasses-email-security-lessons-from-anthropic-s-mcp-security-response" rel="noopener" target="_blank"><img alt="" class="image__image" style="" src="https://media.beehiiv.com/cdn-cgi/image/fit=scale-down,format=auto,onerror=redirect,quality=80/uploads/asset/file/0ea8d888-3141-43b4-a4cf-dae1423837eb/Screenshot_2026-01-28_at_10.57.42_PM.png?t=1769641130"/></a><div class="image__source"><span class="image__source_text"><p>This image was generated by AI. It&#39;s still experimental, so it might not be a perfect match!</p></span></div></div><p class="paragraph" style="text-align:left;"><b>Incase, this is your 1st Cloud Security Newsletter! You are in good company! </b><br>You are reading this issue along with your friends and colleagues from companies like <i>Netflix</i>, Citi, <i>JP Morgan, Linkedin, Reddit, Github, Gitlab, CapitalOne, Robinhood, HSBC, British Airways, Airbnb, Block, Booking Inc & more</i> who subscribe to this newsletter, who like you want to learn what’s new with Cloud Security each week from their industry peers like many others who listen to <a class="link" href="https://open.spotify.com/show/6LZgeh4GecRYPc0WrwMB4I?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=google-cloud-phishing-bypasses-email-security-lessons-from-anthropic-s-mcp-security-response" target="_blank" rel="noopener noreferrer nofollow">Cloud Security Podcast</a> & <a class="link" href="https://open.spotify.com/show/3nV4eijfzdHKIvDOaycVII?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=google-cloud-phishing-bypasses-email-security-lessons-from-anthropic-s-mcp-security-response" target="_blank" rel="noopener noreferrer nofollow">AI Security Podcast</a> every week.</p><hr class="content_break"><p class="paragraph" style="text-align:left;">Welcome to this week’s Cloud Security Newsletter</p><p class="paragraph" style="text-align:left;">The emergence of &quot;living off the cloud&quot; attacks where threat actors weaponize trusted platform features rather than compromising infrastructure signals a fundamental shift in cloud security. Traditional authentication-based trust models are failing as attackers abuse legitimate Google Cloud services to bypass email filters and leverage Azure Copy for undetectable data exfiltration.</p><p class="paragraph" style="text-align:left;">This week, we&#39;re diving into architectural patterns that address this challenge with insights from <b>Caleb Sima</b> and <b>Ashish Rajan</b>, who share proven strategies for building resilient cloud security programs that maintain effectiveness even when attackers operate through trusted channels.<i>[</i><a class="link" href="https://www.aisecuritypodcast.com/videos/ai-security-2026-predictions-the-zombie-tool-crisis-the-rise-of-ai-platforms?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=google-cloud-phishing-bypasses-email-security-lessons-from-anthropic-s-mcp-security-response" target="_blank" rel="noopener noreferrer nofollow">Listen to the episode</a><i>]</i></p><hr class="content_break"><h2 class="heading" style="text-align:left;" id="tldr-for-busy-readers">📰 TL;DR for Busy Readers</h2><ul><li><p class="paragraph" style="text-align:left;"><b>Google Cloud email abuse</b>: Attackers sent 9,394 phishing emails through legitimate Google services, bypassing SPF/DKIM/DMARC—behavioral analytics now essential</p></li><li><p class="paragraph" style="text-align:left;"><b>AI tooling expands attack surface</b>: Anthropic patched Git MCP server vulnerabilities showing agent connectors need privileged integration treatment</p></li><li><p class="paragraph" style="text-align:left;"><b>OVHcloud acquires Seald</b>: Zero-knowledge E2EE pressures SaaS providers to adopt customer-controlled encryption architectures</p></li><li><p class="paragraph" style="text-align:left;"><b>GitLab 2FA bypass patched</b>: Self-managed instances vulnerable to account takeover leading to CI/CD credential theft</p></li><li><p class="paragraph" style="text-align:left;"><b>Microsoft 365 outage became security event</b>: When logging pipelines and identity flows fail, IR runbooks assuming SaaS availability collapse</p></li></ul><hr class="content_break"><h2 class="heading" style="text-align:left;" id="this-weeks-security-news">📰 <b>THIS WEEK&#39;S TOP 5 SECURITY HEADLINES</b></h2><p class="paragraph" style="text-align:left;">Each story includes <b>why it matters</b> and <b>what to do next</b> — no vendor fluff.</p><h3 class="heading" style="text-align:left;" id="1-phishing-campaign-exploits-google"><b>1. </b>Phishing Campaign Exploits Google Cloud Automation to Bypass Email Security</h3><p class="paragraph" style="text-align:left;">Check Point researchers uncovered a sophisticated campaign where attackers abused Google Cloud Application Integration&#39;s &quot;Send Email&quot; feature to distribute 9,394 malicious emails to approximately 3,200 organizations. The emails originated from Google&#39;s authentic address (<a class="link" href="mailto:noreply-application-integration@google.com" target="_blank" rel="noopener noreferrer nofollow">noreply-application-integration@google.com</a>), perfectly passing SPF, DKIM, and DMARC validation. The three-stage attack chain started with trusted <a class="link" href="https://storage.cloud.google.com?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=google-cloud-phishing-bypasses-email-security-lessons-from-anthropic-s-mcp-security-response" target="_blank" rel="noopener noreferrer nofollow">storage.cloud.google.com</a> links, moved through fake CAPTCHA validation on <a class="link" href="https://googleusercontent.com?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=google-cloud-phishing-bypasses-email-security-lessons-from-anthropic-s-mcp-security-response" target="_blank" rel="noopener noreferrer nofollow">googleusercontent.com</a>, and terminated at credential harvesting pages.</p><p class="paragraph" style="text-align:left;"><b>Why this matters</b>: This represents a paradigm shift in email security. Traditional defenses fail completely because the email <i>is</i> legitimate from a technical standpoint—only the intent is malicious. Organizations with cloud-dependent architectures have inherently trusted Google infrastructure, creating exploitable blind spots. The campaign&#39;s targeting (48.6% US, heavy focus on manufacturing, technology, and financial services) suggests attackers specifically exploit enterprises&#39; trust relationships with major cloud providers.</p><p class="paragraph" style="text-align:left;"><b>Action for defenders</b>: Implement behavioral analytics that identify suspicious patterns regardless of sender authentication. Monitor for unexpected Application Integration notifications in environments not using this service, analyze multi-hop link redirection chains, and flag urgent credential requests even from authenticated cloud services. Update security awareness training to emphasize that @<a class="link" href="https://google.com?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=google-cloud-phishing-bypasses-email-security-lessons-from-anthropic-s-mcp-security-response" target="_blank" rel="noopener noreferrer nofollow">google.com</a> emails can be malicious when features are abused.</p><p class="paragraph" style="text-align:left;"><b>Source</b>: <a class="link" href="https://research.checkpoint.com/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=google-cloud-phishing-bypasses-email-security-lessons-from-anthropic-s-mcp-security-response" target="_blank" rel="noopener noreferrer nofollow">Check Point Research</a></p><hr class="content_break"><h3 class="heading" style="text-align:left;" id="2-git-lab-patches-a-highseverity-2-"><b>2. GitLab patches a high-severity 2FA bypass affecting self-managed instances</b></h3><p class="paragraph" style="text-align:left;">What happened: GitLab fixed CVE-2026-0723, enabling 2FA bypass/account takeover in certain conditions; self-managed users were urged to upgrade (GitLab.com not impacted).</p><p class="paragraph" style="text-align:left;"><b>Why it matters (enterprise CloudSec):</b></p><ul><li><p class="paragraph" style="text-align:left;">GitLab is a supply chain and identity chokepoint. Account takeover can become: CI/CD secret theft → pipeline tampering → cloud key exfiltration.</p></li><li><p class="paragraph" style="text-align:left;">Many orgs treat “developer systems” as separate from “cloud security” but this is exactly where cloud compromises start.<br>Mitigation / next steps (30–60 mins):</p></li><li><p class="paragraph" style="text-align:left;">Upgrade GitLab self-managed to patched versions; confirm coverage.</p></li><li><p class="paragraph" style="text-align:left;">Rotate high-value CI/CD secrets, audit recent runner registrations, and review pipeline edits for anomalies.</p><p class="paragraph" style="text-align:left;"></p><p class="paragraph" style="text-align:left;"><b>Sources</b>: <a class="link" href="https://www.techradar.com/pro/security/gitlab-patches-major-security-flaw-heres-what-we-know?utm_source=chatgpt.com" target="_blank" rel="noopener noreferrer nofollow">TechRadar</a></p></li></ul><hr class="content_break"><h3 class="heading" style="text-align:left;" id="3-microsoft-365-outage-resilience-g"><b>3. Microsoft 365 outage: resilience gaps become security gaps</b></h3><p class="paragraph" style="text-align:left;"><b>What happened:</b> A major service disruption on 22 Jan 2026 impacted Microsoft 365 services; Microsoft attributed it to service infrastructure not processing traffic as expected during remediation/maintenance.</p><p class="paragraph" style="text-align:left;"><b>Why it matters (enterprise CloudSec):</b></p><ul><li><p class="paragraph" style="text-align:left;">Outages create “security brownouts”: broken logging pipelines, delayed alerts, impaired incident comms, and even conditional access dependencies.</p></li><li><p class="paragraph" style="text-align:left;">If your IR plan assumes M365 is always available, you risk losing coordination during a real incident.<br>Mitigation / next steps (30–60 mins):</p></li><li><p class="paragraph" style="text-align:left;">Add “SaaS outage mode” to IR: alternate comms channel, offline runbooks, and a plan for identity disruptions.</p></li><li><p class="paragraph" style="text-align:left;">Test whether your SIEM/SOAR continues ingesting critical telemetry during provider degradation.</p></li></ul><p class="paragraph" style="text-align:left;"><b>Sources:</b> <a class="link" href="https://www.techradar.com/news/live/microsoft-outlook-365-outage-january-22-2026?utm_source=chatgpt.com" target="_blank" rel="noopener noreferrer nofollow">TechRadar</a></p><hr class="content_break"><h3 class="heading" style="text-align:left;" id="3-crowd-strike-acquires-sgnl-for-74"><b>4. Anthropic patched flaws in its Git MCP server agentic tooling expands the attack surface</b></h3><p class="paragraph" style="text-align:left;"><b>What happened:</b> Reporting described vulnerabilities in Anthropic’s Git MCP server where chained configurations could enable file tampering or even code execution risks in some setups; patches were issued.</p><p class="paragraph" style="text-align:left;"><b>Why it matters (enterprise CloudSec):</b></p><ul><li><p class="paragraph" style="text-align:left;">MCP/agent tooling turns connectors into execution-adjacent infrastructure.</p></li><li><p class="paragraph" style="text-align:left;">The risk is rarely “the model got hacked” and more “connectors + permissions + tool calls created a new path to repos/secrets/build systems.”<br>Mitigation / next steps (30–60 mins):</p></li><li><p class="paragraph" style="text-align:left;">Treat MCP servers like privileged integrations: least privilege tokens, egress controls, allowlisted repos, auditing.</p></li><li><p class="paragraph" style="text-align:left;">Separate tool permissions: read-only vs write/execute by default; require explicit approvals for destructive actions.</p></li></ul><p class="paragraph" style="text-align:left;"><b>Sources</b>: <a class="link" href="https://www.techradar.com/pro/security/anthropics-official-git-mcp-server-had-some-worrying-security-flaws-this-is-what-happened-next?utm_source=chatgpt.com" target="_blank" rel="noopener noreferrer nofollow">TechRadar</a></p><hr class="content_break"><h3 class="heading" style="text-align:left;" id="3-crowd-strike-acquires-sgnl-for-74"><b>5. </b>OVHcloud Acquires Seald for Zero-Knowledge End-to-End Encryption</h3><p class="paragraph" style="text-align:left;">OVHcloud announced acquisition of French encryption company Seald to natively embed end-to-end encryption across OVHcloud services, where only end users can decrypt content. This represents a move toward confidential-by-default SaaS patterns protecting data from intermediaries, including cloud operators and administrators.</p><p class="paragraph" style="text-align:left;"><b>Why this matters</b>: Customer-controlled cryptography fundamentally changes the threat model for sensitive workloads. When cloud providers can&#39;t access plaintext data, entire classes of insider threats, government access requests, and supply chain compromises become irrelevant. This acquisition pressures competing cloud and SaaS providers to offer similar capabilities or risk losing security-conscious enterprises.</p><p class="paragraph" style="text-align:left;"><b>Action for defenders</b>: Identify workloads that should migrate from &quot;encryption at rest&quot; to true E2EE or client-side encryption—collaboration content, secrets repositories, regulated documents. Critically, validate how this impacts eDiscovery requirements, DLP effectiveness, key recovery procedures, and insider threat controls before broad adoption.</p><p class="paragraph" style="text-align:left;"><b>Source</b>: <a class="link" href="https://corporate.ovhcloud.com/en/newsroom/news/ovhcloud-acquires-seald/?utm_source=chatgpt.com" target="_blank" rel="noopener noreferrer nofollow">OVHcloud</a></p><hr class="content_break"><h2 class="heading" style="text-align:left;" id="cloud-security-topic-of-the-week">🎯 Cloud Security Topic of the Week: </h2><hr class="content_break"><h2 class="heading" style="text-align:left;" id="featured-experts-this-week"><b>Featured Experts This Week </b>🎤</h2><ul><li><p class="paragraph" style="text-align:left;"><b>Caleb Sima- </b><a class="link" href="https://wr.vc/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=google-cloud-phishing-bypasses-email-security-lessons-from-anthropic-s-mcp-security-response" target="_blank" rel="noopener noreferrer nofollow"> WhiteRabbit.vc</a> | Co-Host <a class="link" href="https://open.spotify.com/show/3nV4eijfzdHKIvDOaycVII?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=google-cloud-phishing-bypasses-email-security-lessons-from-anthropic-s-mcp-security-response" target="_blank" rel="noopener noreferrer nofollow">AI Security Podcast</a></p></li><li><p class="paragraph" style="text-align:left;"><b><a class="link" href="https://www.linkedin.com/in/ashishrajan/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=google-cloud-phishing-bypasses-email-security-lessons-from-anthropic-s-mcp-security-response" target="_blank" rel="noopener noreferrer nofollow">Ashish Rajan</a></b> - CISO | Co-Host <a class="link" href="https://open.spotify.com/show/3nV4eijfzdHKIvDOaycVII?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=google-cloud-phishing-bypasses-email-security-lessons-from-anthropic-s-mcp-security-response" target="_blank" rel="noopener noreferrer nofollow">AI Security Podcast</a> , Host of <a class="link" href="https://www.cloudsecuritypodcast.tv/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=google-cloud-phishing-bypasses-email-security-lessons-from-anthropic-s-mcp-security-response" target="_blank" rel="noopener noreferrer nofollow">Cloud Security Podcast</a></p></li></ul><h2 class="heading" style="text-align:left;" id="definitions-and-core-concepts"><b>Definitions and Core Concepts 📚</b></h2><p class="paragraph" style="text-align:left;">Before diving into our insights, let&#39;s clarify some key terms:</p><ul><li><p class="paragraph" style="text-align:left;"><b>Living Off the Cloud (LOTC)</b>: Attack technique where threat actors weaponize legitimate cloud platform features and services to conduct malicious activities, analogous to &quot;Living Off the Land&quot; tactics in traditional endpoint security. Examples include abusing Google Cloud Application Integration for phishing or using Azure Copy for data exfiltration.</p></li><li><p class="paragraph" style="text-align:left;"><b>Model Context Protocol (MCP)</b>: Framework enabling AI agents to interact with external systems through standardized connectors. MCP servers act as bridges between language models and tools like Git repositories, databases, or APIs, expanding AI capabilities while creating new integration security challenges.</p></li><li><p class="paragraph" style="text-align:left;"><b>Zero-Knowledge Encryption (E2EE)</b>: Cryptographic architecture where service providers cannot access plaintext data because encryption/decryption occurs exclusively on client devices using keys the provider never possesses. Differs from standard &quot;encryption at rest&quot; where cloud providers control encryption keys.</p></li><li><p class="paragraph" style="text-align:left;"><b>Behavioral Analytics</b>: Security monitoring approach that establishes baselines of normal activity patterns and identifies deviations indicating potential threats, rather than relying solely on signature-based detection or known indicators of compromise.</p></li><li><p class="paragraph" style="text-align:left;"><b>SPF/DKIM/DMARC</b>: Email authentication protocols (Sender Policy Framework, DomainKeys Identified Mail, Domain-based Message Authentication Reporting & Conformance) designed to verify sender legitimacy and prevent spoofing. These protocols fail when attackers abuse legitimate services rather than spoofing domains.</p></li></ul><hr class="content_break"><p class="paragraph" style="text-align:center;">This week&#39;s issue is sponsored by <b><a class="link" href="https://links.cloudsecuritypodcast.tv/see-prowler-in-action-jan-2026?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=google-cloud-phishing-bypasses-email-security-lessons-from-anthropic-s-mcp-security-response" target="_blank" rel="noopener noreferrer nofollow">Prowler</a></b><b> </b></p><p class="paragraph" style="text-align:center;"><span style="color:rgb(0, 0, 0);font-family:Arial, sans-serif;font-size:11pt;"><b>The world’s most widely adopted open cloud security platform</b></span></p><p class="paragraph" style="text-align:left;"><span style="color:rgb(0, 0, 0);font-family:Arial, sans-serif;font-size:11pt;">Trusted by modern cloud security teams, </span><span style="color:rgb(0, 0, 0);font-family:Arial, sans-serif;font-size:11pt;"><b>Prowler</b></span><span style="color:rgb(0, 0, 0);font-family:Arial, sans-serif;font-size:11pt;"> detects vulnerabilities and misconfigurations, prioritizes risk, accelerates remediation, and delivers audit-ready compliance reports.</span></p><p class="paragraph" style="text-align:left;"><span style="color:rgb(0, 0, 0);font-family:Arial, sans-serif;font-size:11pt;">With </span><span style="color:rgb(0, 0, 0);font-family:Arial, sans-serif;font-size:11pt;"><b>44M+ downloads</b></span><span style="color:rgb(0, 0, 0);font-family:Arial, sans-serif;font-size:11pt;">, </span><span style="color:rgb(0, 0, 0);font-family:Arial, sans-serif;font-size:11pt;"><b>12K+ GitHub stars</b></span><span style="color:rgb(0, 0, 0);font-family:Arial, sans-serif;font-size:11pt;">, and </span><span style="color:rgb(0, 0, 0);font-family:Arial, sans-serif;font-size:11pt;"><b>300+ contributors</b></span><span style="color:rgb(0, 0, 0);font-family:Arial, sans-serif;font-size:11pt;">, Prowler is the open standard for cloud security.</span></p><p class="paragraph" style="text-align:left;"><span style="color:rgb(0, 0, 0);font-family:Arial, sans-serif;font-size:11pt;">Ask </span><span style="color:rgb(0, 0, 0);font-family:Arial, sans-serif;font-size:11pt;"><b>Lighthouse AI</b></span><span style="color:rgb(0, 0, 0);font-family:Arial, sans-serif;font-size:11pt;"> security questions just like a trusted colleague. Get actionable insights and remediation plans instantly. Secure your cloud programmatically with the Prowler MCP server.</span></p><p class="paragraph" style="text-align:center;"><b><a class="link" href="https://links.cloudsecuritypodcast.tv/see-prowler-in-action-jan-2026?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=google-cloud-phishing-bypasses-email-security-lessons-from-anthropic-s-mcp-security-response" target="_blank" rel="noopener noreferrer nofollow">See Prowler In Action</a></b></p><hr class="content_break"><h2 class="heading" style="text-align:left;" id="our-insights-from-this-practitioner">💡<b>Our Insights from this Practitioner 🔍</b></h2><h3 class="heading" style="text-align:left;" id="ai-security-2026-predictions-the-zo"><b>AI Security 2026 Predictions: The &quot;Zombie Tool&quot; Crisis & The Rise of AI Platforms</b><b>(</b><a class="link" href="https://www.aisecuritypodcast.com/videos/ai-security-2026-predictions-the-zombie-tool-crisis-the-rise-of-ai-platforms?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=google-cloud-phishing-bypasses-email-security-lessons-from-anthropic-s-mcp-security-response" target="_blank" rel="noopener noreferrer nofollow"><b>Full Episode here</b></a><b>)</b></h3><hr class="content_break"><h3 class="heading" style="text-align:left;" id="the-trust-inversion-when-authentica">The Trust Inversion: When Authentication Becomes the Attack Vector</h3><p class="paragraph" style="text-align:left;">The Google Cloud phishing campaign and AI tooling vulnerabilities represent what Caleb Sima identifies as a fundamental inversion in cloud security assumptions. &quot;We&#39;ve spent decades building authentication and authorization systems that grant access based on identity verification,&quot; Sima explains. &quot;But when attackers operate through legitimately authenticated services—whether that&#39;s Google Cloud Application Integration or Azure Copy—they inherit the trust we&#39;ve built into our architectures.&quot;</p><p class="paragraph" style="text-align:left;">This observation captures why traditional email security failed completely against the Google Cloud abuse. The emails passed every authentication check because they <i>were</i> authentic. SPF records validated correctly, DKIM signatures were genuine, and DMARC policies approved the messages—all technically correct because Google&#39;s infrastructure was being used as designed, just with malicious intent.</p><p class="paragraph" style="text-align:left;">Ashish Rajan emphasizes the architectural implications: &quot;The challenge for cloud security teams is that we&#39;ve optimized our defenses around perimeter thinking—trusted versus untrusted, internal versus external. Cloud platforms dissolve these boundaries. When an attacker uses Azure Copy to exfiltrate data, they&#39;re not breaching a perimeter; they&#39;re using the exact same APIs your employees use for legitimate file synchronization.&quot;</p><h3 class="heading" style="text-align:left;" id="building-detection-that-survives-tr">Building Detection That Survives Trust Exploitation</h3><p class="paragraph" style="text-align:left;">The failure of authentication-based trust requires a fundamental shift in detection strategies. Sima advocates for what he terms &quot;intent-level verification&quot; rather than identity-level verification: &quot;You can&#39;t ask &#39;is this sender authenticated?&#39;—you have to ask &#39;does this authenticated action make business sense?&#39; That requires understanding context, relationships, and patterns.&quot;</p><p class="paragraph" style="text-align:left;">This approach directly addresses the Google Cloud phishing campaign&#39;s success. A behavioral analytics system wouldn&#39;t focus on whether the email came from @<a class="link" href="https://google.com?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=google-cloud-phishing-bypasses-email-security-lessons-from-anthropic-s-mcp-security-response" target="_blank" rel="noopener noreferrer nofollow">google.com</a>—it would flag that an organization not using Google Cloud Application Integration suddenly received urgent credential requests through that service, or that a link chain traversed multiple Google domains (<a class="link" href="https://storage.cloud.google.com?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=google-cloud-phishing-bypasses-email-security-lessons-from-anthropic-s-mcp-security-response" target="_blank" rel="noopener noreferrer nofollow">storage.cloud.google.com</a> → <a class="link" href="https://googleusercontent.com?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=google-cloud-phishing-bypasses-email-security-lessons-from-anthropic-s-mcp-security-response" target="_blank" rel="noopener noreferrer nofollow">googleusercontent.com</a>) before landing on a credential harvesting page.</p><p class="paragraph" style="text-align:left;">Rajan provides practical implementation guidance: &quot;Start by inventorying which cloud services your organization legitimately uses and which features within those services are actually enabled. If you don&#39;t use Google Cloud Application Integration, any email from that service should be investigated. If you don&#39;t use Azure Copy for cross-region replication, large data movements through that tool deserve scrutiny regardless of who initiated them.&quot;</p><p class="paragraph" style="text-align:left;">The Anthropic MCP vulnerabilities reinforce this principle. As AI agents gain deployment, Sima notes that &quot;AI connectors are execution-adjacent infrastructure masquerading as data integrations. You need to treat them with the same security rigor as CI/CD pipelines or database connection pools—least privilege, segregated networks, comprehensive logging, and continuous verification of what they&#39;re accessing.&quot;</p><h3 class="heading" style="text-align:left;" id="architectural-patterns-for-zero-tru">Architectural Patterns for Zero-Trust Cloud Operations</h3><p class="paragraph" style="text-align:left;">Both experts emphasize that addressing trust exploitation requires architectural decisions, not just operational adjustments. The OVHcloud acquisition of Seald exemplifies one pattern: moving toward cryptographic guarantees that remain valid even when cloud providers or attackers gain administrative access.</p><p class="paragraph" style="text-align:left;">&quot;Zero-knowledge encryption fundamentally changes your threat model,&quot; Sima explains. &quot;When the cloud provider literally cannot decrypt your data, entire categories of risk—insider threats, government access requests, supply chain compromises—become architecturally impossible rather than operationally mitigated.&quot;</p><p class="paragraph" style="text-align:left;">However, Rajan cautions about implementation complexity: &quot;True E2EE creates operational trade-offs. You need to solve key management, ensure key recovery mechanisms exist for legitimate business needs, and recognize that security controls like DLP and eDiscovery that rely on inspecting plaintext content will need architectural redesign. Organizations implementing customer-controlled encryption need to think through these implications before deployment.&quot;</p><p class="paragraph" style="text-align:left;">For organizations not ready for zero-knowledge architectures, both experts advocate defense-in-depth patterns that assume breach even of trusted services. Sima recommends: &quot;Implement egress monitoring that tracks data movements regardless of tool legitimacy. Deploy network segmentation that limits blast radius even when credentials are compromised. Build incident response procedures that don&#39;t assume availability of cloud services you depend on.&quot;</p><p class="paragraph" style="text-align:left;">The Microsoft 365 outage crystallizes this point. Rajan notes: &quot;If your incident response runbooks assume M365 is available, you&#39;ve created a single point of failure. When that platform goes down during an active incident, you lose logging, alerting, identity verification, and team communications simultaneously. Building resilience requires alternative channels and offline contingencies.&quot;</p><h3 class="heading" style="text-align:left;" id="operationalizing-security-for-ai-ag">Operationalizing Security for AI Agent Deployments</h3><p class="paragraph" style="text-align:left;">The Anthropic Git MCP vulnerabilities reveal emerging challenges as AI agents gain broader deployment. Sima emphasizes treating these integrations with heightened scrutiny: &quot;When you connect an AI agent to your Git repositories or cloud APIs, you&#39;re creating an execution path from natural language prompts to privileged operations. That pathway needs the same controls you&#39;d apply to any other automation with similar privileges.&quot;</p><p class="paragraph" style="text-align:left;">Practical implementation includes several critical patterns. First, enforce strict separation between read and write capabilities: &quot;An AI agent analyzing code should have read-only repository access. An agent that creates pull requests needs write access, but that should be a separate, more restricted deployment with comprehensive audit logging.&quot;</p><p class="paragraph" style="text-align:left;">Second, implement allowlisting at multiple levels. Rajan recommends: &quot;Don&#39;t give an AI connector access to all repositories—explicitly define which repos it can access. Don&#39;t allow arbitrary tool invocations—define specific, approved actions. Each layer of restriction reduces the potential blast radius if the integration is compromised or behaves unexpectedly.&quot;</p><p class="paragraph" style="text-align:left;">Third, deploy egress controls and network segmentation: &quot;AI connectors should operate in isolated network segments with explicit egress rules. If your AI agent only needs to access internal Git repositories, it shouldn&#39;t have general internet access that could be exploited for data exfiltration.&quot;</p><h3 class="heading" style="text-align:left;" id="practical-response-to-living-off-th">Practical Response to Living Off the Cloud Threats</h3><p class="paragraph" style="text-align:left;">Both experts provide concrete guidance for organizations responding to the current threat landscape. For email security in light of the Google Cloud phishing campaign, Sima advocates immediate tactical adjustments: &quot;Update security awareness training to explicitly address the fact that emails from @<a class="link" href="https://google.com?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=google-cloud-phishing-bypasses-email-security-lessons-from-anthropic-s-mcp-security-response" target="_blank" rel="noopener noreferrer nofollow">google.com</a>, @<a class="link" href="https://microsoft.com?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=google-cloud-phishing-bypasses-email-security-lessons-from-anthropic-s-mcp-security-response" target="_blank" rel="noopener noreferrer nofollow">microsoft.com</a>, or other major providers can be malicious when features are abused. Employees need to verify through alternative channels before responding to urgent credential requests, regardless of sender authenticity.&quot;</p><p class="paragraph" style="text-align:left;">Technically, implement advanced threat protection capable of analyzing entire link redirection chains rather than just first-hop URLs. &quot;The Google Cloud campaign used multi-stage redirections specifically because traditional email security only examines initial links. You need systems that follow the complete chain and flag suspicious patterns like traversing multiple domains before reaching a credential input form.&quot;</p><p class="paragraph" style="text-align:left;">For cloud data exfiltration risks, Rajan recommends: &quot;Implement data classification and sensitivity-based monitoring. Large movements of sensitive data should trigger investigation regardless of the tool used. If someone uses Azure Copy to move 500GB of customer records to an external storage account, that deserves immediate scrutiny even if the operation was executed through legitimate APIs with valid credentials.&quot;</p><p class="paragraph" style="text-align:left;">Organizations should also audit their cloud service usage comprehensively: &quot;Many enterprises have cloud services and features enabled that nobody remembers authorizing. Conduct an inventory of what&#39;s actually in use, disable features you don&#39;t need, and implement monitoring for unexpected usage of services that shouldn&#39;t be active in your environment.&quot;</p><h3 class="heading" style="text-align:left;" id="building-resilience-against-saa-s-c">Building Resilience Against SaaS Concentration Risk</h3><p class="paragraph" style="text-align:left;">The Microsoft 365 outage highlights systematic risks from cloud service concentration. Sima advocates for explicit &quot;SaaS outage mode&quot; planning: &quot;Your incident response procedures need to work when the services you depend on are unavailable. That means alternative communication channels, offline copies of runbooks, cached contact lists, and contingency plans for identity system failures.&quot;</p><p class="paragraph" style="text-align:left;">Testing is critical: &quot;Actually simulate a Microsoft 365 outage during a security exercise. Can your SIEM still ingest logs? Can your team coordinate response activities? Can you verify user identities when conditional access policies can&#39;t be evaluated? Most organizations discover significant gaps when they actually test these scenarios.&quot;</p><p class="paragraph" style="text-align:left;">Rajan adds: &quot;Consider geographic and provider diversity for critical security functions. If all your logging, alerting, identity, and communication runs through a single cloud provider, you&#39;ve created systemic risk. Building resilience might mean deploying backup SIEM infrastructure on a different cloud platform or maintaining secondary communication channels that don&#39;t share dependencies with your primary systems.&quot;</p><h3 class="heading" style="text-align:left;" id="strategic-evolution-from-perimeter-">Strategic Evolution: From Perimeter to Continuous Verification</h3><p class="paragraph" style="text-align:left;">Both experts see the current threat landscape as accelerating a necessary evolution in cloud security thinking. Sima articulates the strategic shift: &quot;The perimeter model assumed that once you verified identity and granted access, you could trust subsequent actions. Cloud environments require continuous verification—every action needs to be evaluated for business legitimacy regardless of who&#39;s performing it.&quot;</p><p class="paragraph" style="text-align:left;">This manifests in specific architectural decisions. Organizations should implement just-in-time access elevation rather than standing privileges, require business justification for sensitive operations even when performed by authorized users, and deploy comprehensive behavioral analytics that establish normal patterns and flag deviations.</p><p class="paragraph" style="text-align:left;">Rajan emphasizes the cultural dimension: &quot;Senior cloud security professionals need to advocate internally for realistic threat models. If your organization assumes that authentication from Google or Microsoft guarantees benign intent, you need to change that assumption at the leadership level. The Google Cloud phishing campaign proves that authenticated ≠ safe.&quot;</p><p class="paragraph" style="text-align:left;">The path forward requires accepting architectural complexity in exchange for resilience. &quot;Building secure cloud environments means layering controls that remain effective even when outer layers are bypassed,&quot; Sima concludes. &quot;Defense in depth isn&#39;t just good practice—it&#39;s the only viable strategy when attackers operate through channels you&#39;ve explicitly trusted.&quot;</p><hr class="content_break"><h2 class="heading" style="text-align:left;" id="related-resources">RELATED RESOURCES</h2><ul><li><p class="paragraph" style="text-align:left;"><a class="link" href="https://research.checkpoint.com/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=google-cloud-phishing-bypasses-email-security-lessons-from-anthropic-s-mcp-security-response" target="_blank" rel="noopener noreferrer nofollow">Check Point Research Blog</a> - Original research on cloud service abuse and emerging threat vectors</p></li><li><p class="paragraph" style="text-align:left;"><a class="link" href="https://blog.morphisec.com/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=google-cloud-phishing-bypasses-email-security-lessons-from-anthropic-s-mcp-security-response" target="_blank" rel="noopener noreferrer nofollow">Morphisec Labs</a> - Analysis of ransomware evolution and cloud-based attack techniques</p></li><li><p class="paragraph" style="text-align:left;"><a class="link" href="https://www.anthropic.com/security?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=google-cloud-phishing-bypasses-email-security-lessons-from-anthropic-s-mcp-security-response" target="_blank" rel="noopener noreferrer nofollow">Anthropic Security Advisories</a> - Insights on securing AI agent deployments</p></li><li><p class="paragraph" style="text-align:left;"><a class="link" href="https://csrc.nist.gov/publications/detail/sp/800-207/final?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=google-cloud-phishing-bypasses-email-security-lessons-from-anthropic-s-mcp-security-response" target="_blank" rel="noopener noreferrer nofollow">NIST Zero Trust Architecture (SP 800-207)</a> - Framework for implementing continuous verification</p></li><li><p class="paragraph" style="text-align:left;"><a class="link" href="https://cloudsecurityalliance.org/research/cloud-controls-matrix/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=google-cloud-phishing-bypasses-email-security-lessons-from-anthropic-s-mcp-security-response" target="_blank" rel="noopener noreferrer nofollow">Cloud Security Alliance - Cloud Controls Matrix</a> - Comprehensive cloud security control framework</p></li><li><p class="paragraph" style="text-align:left;"><a class="link" href="https://owasp.org/www-project-ai-security-and-privacy-guide/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=google-cloud-phishing-bypasses-email-security-lessons-from-anthropic-s-mcp-security-response" target="_blank" rel="noopener noreferrer nofollow">OWASP AI Security and Privacy Guide</a> - Security patterns for AI integrations</p></li><li><p class="paragraph" style="text-align:left;"><a class="link" href="https://attack.mitre.org/matrices/enterprise/cloud/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=google-cloud-phishing-bypasses-email-security-lessons-from-anthropic-s-mcp-security-response" target="_blank" rel="noopener noreferrer nofollow">MITRE ATT&CK for Cloud</a> - Cloud-specific adversary tactics and techniques</p></li><li><p class="paragraph" style="text-align:left;"><a class="link" href="https://github.com/nccgroup/ScoutSuite?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=google-cloud-phishing-bypasses-email-security-lessons-from-anthropic-s-mcp-security-response" target="_blank" rel="noopener noreferrer nofollow">ScoutSuite</a> - Multi-cloud security auditing tool for configuration assessment</p></li><li><p class="paragraph" style="text-align:left;"><a class="link" href="https://github.com/prowler-cloud/prowler?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=google-cloud-phishing-bypasses-email-security-lessons-from-anthropic-s-mcp-security-response" target="_blank" rel="noopener noreferrer nofollow">Prowler</a> - Open-source cloud security assessment tool</p></li></ul><hr class="content_break"><h2 class="heading" style="text-align:left;" id="related-podcast-episodes"><b>Related Podcast Episodes 🎧</b></h2><div class="image"><img alt="" class="image__image" style="" src="https://media.beehiiv.com/cdn-cgi/image/fit=scale-down,format=auto,onerror=redirect,quality=80/uploads/asset/file/7695c1af-0cb0-4a31-bcd2-630f53a7074b/S04EP02.png?t=1769640015"/></div><h3 class="heading" style="text-align:left;" id="question-for-you-reply-to-this-emai">Question for you? (Reply to this email)</h3><p class="paragraph" style="text-align:left;">🤔 <b> How do you balance AI adoption speed with security controls are your developers choosing secure paths or working around them?</b><br></p><p class="paragraph" style="text-align:left;">Next week, we&#39;ll explore another critical aspect of cloud security. Stay tuned!</p><hr class="content_break"><p class="paragraph" style="text-align:left;">📬 Want weekly expert takes on AI & Cloud Security? [<a class="link" href="https://www.cloudsecuritynewsletter.com/subscribe?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=google-cloud-phishing-bypasses-email-security-lessons-from-anthropic-s-mcp-security-response" target="_blank" rel="noopener noreferrer nofollow">Subscribe here</a>]”</p><hr class="content_break"><p class="paragraph" style="text-align:left;"><span style="color:rgb(238, 40, 60);"><b><a class="link" href="mailto:info@cloudsecuritypodcast.tv" target="_blank" rel="noopener noreferrer nofollow">We would love to hear from you</a></b></span>📢 for a feature or topic request or if you would like to sponsor an edition of Cloud Security Newsletter. </p><p class="paragraph" style="text-align:left;">Thank you for continuing to subscribe and Welcome to the new members in tis newsletter community💙</p><p class="paragraph" style="text-align:start;">Peace!</p><p class="paragraph" style="text-align:start;"><a class="link" href="https://www.linkedin.com/in/shilpi-bhattacharjee/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=google-cloud-phishing-bypasses-email-security-lessons-from-anthropic-s-mcp-security-response" target="_blank" rel="noopener noreferrer nofollow">Shilpi Bhattacharjee</a></p><div class="image"><a class="image__link" href="https://www.cloudsecuritypodcast.tv/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=google-cloud-phishing-bypasses-email-security-lessons-from-anthropic-s-mcp-security-response" rel="noopener" target="_blank"><img alt="" class="image__image" style="" src="https://media.beehiiv.com/cdn-cgi/image/fit=scale-down,format=auto,onerror=redirect,quality=80/uploads/asset/file/dc094a1c-678a-43e0-adc9-1bee6c3499e2/CSP_Logo_Blue_ScreenRes_3000x3000_v2.jpg"/></a></div><div class="button" style="text-align:left;"><a target="_blank" rel="noopener nofollow noreferrer" class="button__link" style="" href="{{rp_referral_hub_url}}"><span class="button__text" style=""> Share the newsletter </span></a></div><p class="paragraph" style="text-align:left;">Was this forwarded to you? You can <a class="link" href="https://www.cloudsecuritynewsletter.com/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=google-cloud-phishing-bypasses-email-security-lessons-from-anthropic-s-mcp-security-response" target="_blank" rel="noopener noreferrer nofollow">Sign up here</a>, to join our growing readership.</p><p class="paragraph" style="text-align:left;">Want to <b>sponsor</b> the next newsletter edition! <a class="link" href="mailto:info@cloudsecuritypodcast.tv" target="_blank" rel="noopener noreferrer nofollow">Lets make it happen </a></p><p class="paragraph" style="text-align:left;">Have you joined our FREE <b>Monthly</b> <a class="link" href="https://www.cloudsecuritybootcamp.com/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=google-cloud-phishing-bypasses-email-security-lessons-from-anthropic-s-mcp-security-response" target="_blank" rel="noopener noreferrer nofollow">Cloud Security Bootcamp</a> yet?</p><p class="paragraph" style="text-align:left;">checkout our <b>sister podcast</b> <a class="link" href="https://www.youtube.com/@AISecurityPodcast?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=google-cloud-phishing-bypasses-email-security-lessons-from-anthropic-s-mcp-security-response" target="_blank" rel="noopener noreferrer nofollow">AI Security Podcast</a></p></div><div class='beehiiv__footer'><br class='beehiiv__footer__break'><hr class='beehiiv__footer__line'><a target="_blank" class="beehiiv__footer_link" style="text-align: center;" href="https://www.beehiiv.com/?utm_campaign=34a7d951-e897-420b-9a63-f81d94fe3cc2&utm_medium=post_rss&utm_source=cloud_security_newsletter">Powered by beehiiv</a></div></div>
  ]]></content:encoded>
</item>

      <item>
  <title>🚨 Gemini Prompt Injection + Copilot Reprompt: Why LLMs Can’t Tell Instructions from Data</title>
  <description>This week&#39;s newsletter examines critical prompt injection vulnerabilities across Microsoft Copilot, Google Gemini, and GitHub Copilot, alongside AWS CodeBuild&#39;s supply-chain risks. Learn from Ramp&#39;s Principal Security Engineer Antoinette Stevens about building engineering-led detection programs that scale with AI while maintaining human oversight, managing false positives, and balancing build-versus-buy decisions in 2026&#39;s threat landscape.</description>
      <enclosure url="https://media.beehiiv.com/cdn-cgi/image/fit=scale-down,format=auto,onerror=redirect,quality=80/uploads/asset/file/9b2e5771-4344-48a0-82f2-37c11e2b36b4/Screenshot_2026-01-22_at_8.15.21_AM.png" length="3915191" type="image/png"/>
  <link>https://www.cloudsecuritynewsletter.com/p/prompt-injection-gemini-copilot-ai-security</link>
  <guid isPermaLink="true">https://www.cloudsecuritynewsletter.com/p/prompt-injection-gemini-copilot-ai-security</guid>
  <pubDate>Wed, 21 Jan 2026 21:17:12 +0000</pubDate>
  <atom:published>2026-01-21T21:17:12Z</atom:published>
    <dc:creator>Ashish Rajan</dc:creator>
  <content:encoded><![CDATA[
    <div class='beehiiv'><style>
  .bh__table, .bh__table_header, .bh__table_cell { border: 1px solid #C0C0C0; }
  .bh__table_cell { padding: 5px; background-color: #FFFFFF; }
  .bh__table_cell p { color: #2D2D2D; font-family: 'Helvetica',Arial,sans-serif !important; overflow-wrap: break-word; }
  .bh__table_header { padding: 5px; background-color:#F1F1F1; }
  .bh__table_header p { color: #2A2A2A; font-family:'Trebuchet MS','Lucida Grande',Tahoma,sans-serif !important; overflow-wrap: break-word; }
</style><div class='beehiiv__body'><h2 class="heading" style="text-align:left;" id="hello-from-the-cloudverse"><span style="background-color:#28EEDA;"><b>Hello from the Cloud-verse!</b></span></h2><p class="paragraph" style="text-align:left;">This week’s Cloud Security Newsletter topic<b>: Engineering-Led Detection Programs in the AI Era </b><a class="link" href="#cloud-security-topic-of-the-week" rel="noopener noreferrer nofollow">(continue reading)</a> </p><div class="button" style="text-align:center;"><a target="_blank" rel="noopener nofollow noreferrer" class="button__link" style="" href="https://www.aisecuritypodcast.com/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=gemini-prompt-injection-copilot-reprompt-why-llms-can-t-tell-instructions-from-data"><span class="button__text" style=""> This issue is sponsored by AI Security Podcast </span></a></div><hr class="content_break"><div class="image"><a class="image__link" href="https://www.cloudsecuritypodcast.tv/videos/why-ai-cant-replace-detection-engineers-build-vs-buy-the-future-of-soc?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=gemini-prompt-injection-copilot-reprompt-why-llms-can-t-tell-instructions-from-data" rel="noopener" target="_blank"><img alt="" class="image__image" style="" src="https://media.beehiiv.com/cdn-cgi/image/fit=scale-down,format=auto,onerror=redirect,quality=80/uploads/asset/file/9b2e5771-4344-48a0-82f2-37c11e2b36b4/Screenshot_2026-01-22_at_8.15.21_AM.png?t=1769030168"/></a><div class="image__source"><span class="image__source_text"><p>This image was generated by AI. It&#39;s still experimental, so it might not be a perfect match!</p></span></div></div><p class="paragraph" style="text-align:left;"><b>Incase, this is your 1st Cloud Security Newsletter! You are in good company! </b><br>You are reading this issue along with your friends and colleagues from companies like <i>Netflix</i>, Citi, <i>JP Morgan, Linkedin, Reddit, Github, Gitlab, CapitalOne, Robinhood, HSBC, British Airways, Airbnb, Block, Booking Inc & more</i> who subscribe to this newsletter, who like you want to learn what’s new with Cloud Security each week from their industry peers like many others who listen to <a class="link" href="https://open.spotify.com/show/6LZgeh4GecRYPc0WrwMB4I?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=gemini-prompt-injection-copilot-reprompt-why-llms-can-t-tell-instructions-from-data" target="_blank" rel="noopener noreferrer nofollow">Cloud Security Podcast</a> & <a class="link" href="https://open.spotify.com/show/3nV4eijfzdHKIvDOaycVII?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=gemini-prompt-injection-copilot-reprompt-why-llms-can-t-tell-instructions-from-data" target="_blank" rel="noopener noreferrer nofollow">AI Security Podcast</a> every week.</p><hr class="content_break"><p class="paragraph" style="text-align:left;">Welcome to this week’s Cloud Security Newsletter</p><p class="paragraph" style="text-align:left;">As enterprises scale AI across security and engineering workflows, a hard truth is emerging: <b>LLMs fundamentally cannot distinguish instructions from data.</b><br>This week’s Cloud Security Newsletter connects the dots between prompt injection vulnerabilities across Google Gemini, Microsoft Copilot, and GitHub Copilot, CI/CD trust boundary failures in AWS CodeBuild, and what detection programs must look like in an AI-first threat landscape.</p><p class="paragraph" style="text-align:left;">🎙️ This week’s practitioner deep dive features <b>Antoinette Stevens</b>, Principal Security Engineer at <b>Ramp</b>, sharing how engineering-led detection programs scale with AI without outsourcing judgment to models.<i>[</i><a class="link" href="https://www.cloudsecuritypodcast.tv/videos/why-ai-cant-replace-detection-engineers-build-vs-buy-the-future-of-soc?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=gemini-prompt-injection-copilot-reprompt-why-llms-can-t-tell-instructions-from-data" target="_blank" rel="noopener noreferrer nofollow">Listen to the episode</a><i>]</i></p><hr class="content_break"><h2 class="heading" style="text-align:left;" id="tldr-for-busy-readers">📰 TL;DR for Busy Readers</h2><ul><li><p class="paragraph" style="text-align:left;"><b>Prompt injection is now OWASP LLM01</b> — models cannot reliably separate instructions from data</p></li><li><p class="paragraph" style="text-align:left;"><b>CI/CD pipelines are auth surfaces</b>: AWS CodeBuild regex flaw shows how supply-chain attacks scale</p></li><li><p class="paragraph" style="text-align:left;"><b>Semantic attacks beat string-based defenses</b> (Gemini + Calendar invites)</p></li><li><p class="paragraph" style="text-align:left;"><b>Microsoft Copilot “Reprompt” enabled silent, persistent data exfiltration</b></p></li><li><p class="paragraph" style="text-align:left;"><b>Engineering-led detection treats alerts as production code, not rules</b></p></li><li><p class="paragraph" style="text-align:left;"><b>AI SOC agents help at L1 — but hallucinate without human validation</b></p></li></ul><hr class="content_break"><h2 class="heading" style="text-align:left;" id="this-weeks-security-news">📰 <b>THIS WEEK&#39;S TOP 5 SECURITY HEADLINES</b></h2><p class="paragraph" style="text-align:left;">Each story includes <b>why it matters</b> and <b>what to do next</b> — no vendor fluff.</p><h3 class="heading" style="text-align:left;" id="1-code-breach-vulnerability-in-aws-"><b>1. </b>&quot;CodeBreach&quot; Vulnerability in AWS CodeBuild Exposed Supply-Chain Risk</h3><p class="paragraph" style="text-align:left;">Wiz Research disclosed a critical vulnerability pattern in AWS CodeBuild&#39;s GitHub integration where improperly anchored regex checks could enable attackers to impersonate authorized maintainers and trigger privileged build workflows. The flaw, dubbed &quot;CodeBreach,&quot; demonstrates how authorization weaknesses in CI/CD pipelines can escalate into software supply-chain compromise events. AWS remediated the issue within approximately 48 hours and confirmed no customer environments were impacted.</p><p class="paragraph" style="text-align:left;"><b>Why This Matters:</b> This vulnerability exemplifies a classic CI/CD trust boundary failure where seemingly minor authorization logic can become a blast radius multiplier. For cloud security teams, every build trigger represents a production authentication surface—particularly those that can mint credentials, publish artifacts, or merge to protected repositories. As Antoinette Stevens noted in our conversation, detection engineering must extend beyond traditional security boundaries to include developer workflows and automation systems.</p><p class="paragraph" style="text-align:left;"><b>Recommended Actions:</b> Audit all CodeBuild and GitHub integration patterns for regex anchoring and identity mapping issues. Implement manual approval gates for privileged PR-comment workflows. Restrict build roles to least-privilege, short-lived credentials. Deploy detections for anomalous trigger identities and unusual PR-comment patterns.</p><p class="paragraph" style="text-align:left;"><b>Source:</b> <a class="link" href="https://www.itpro.com/security/aws-codebuild-vulnerability-codebreach-wiz?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=gemini-prompt-injection-copilot-reprompt-why-llms-can-t-tell-instructions-from-data" target="_blank" rel="noopener noreferrer nofollow">IT Pro</a></p><hr class="content_break"><h3 class="heading" style="text-align:left;" id="2-prompt-injection-officially-named"><b>2. </b>Prompt Injection Officially Named Top AI Threat - OWASP LLM01 for 2026</h3><p class="paragraph" style="text-align:left;">Multiple January 2026 research publications and incidents confirmed prompt injection as the primary attack vector against AI systems, with OWASP formally designating it as LLM01 in their updated Top 10 for LLM Applications. A comprehensive academic review analyzed 45 sources documenting real-world attacks including GitHub Copilot&#39;s CVE-2025-53773 (CVSS 9.6 remote code execution), ChatGPT&#39;s Windows license key exposure, and research demonstrating that just five carefully crafted documents can manipulate AI responses 90% of the time through RAG poisoning. IEEE Security & Privacy 2026 research revealed that 8 of 17 third-party chatbot plugins fail to enforce conversation history integrity, enabling client-side message manipulation.</p><p class="paragraph" style="text-align:left;"><b>Why This Matters:</b> This represents a fundamental architectural vulnerability in LLM systems with no complete technical solution models cannot reliably distinguish instructions from data. For cloud security teams deploying AI-powered tools including code assistants, chatbots, and automation agents, prompt injection creates risks across the entire attack chain: data exfiltration, credential theft, unauthorized actions, and supply-chain compromise. The OWASP update added new categories for System Prompt Leakage and Vector/Embedding Weaknesses, reflecting the maturation of attack techniques. Organizations must implement defense-in-depth strategies including input sanitization, output validation, privilege minimization for AI agents, sandboxing for tool execution, monitoring for anomalous behavior, and strict separation between trusted system prompts and untrusted user or external content. Reports from Palo Alto Network’s Unit 42 were referred for this news coverage.</p><p class="paragraph" style="text-align:left;"><b>Sources:</b> <a class="link" href="https://www.mdpi.com?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=gemini-prompt-injection-copilot-reprompt-why-llms-can-t-tell-instructions-from-data" target="_blank" rel="noopener noreferrer nofollow">MDPI Research</a>, <a class="link" href="https://owasp.org?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=gemini-prompt-injection-copilot-reprompt-why-llms-can-t-tell-instructions-from-data" target="_blank" rel="noopener noreferrer nofollow">OWASP</a>, <a class="link" href="https://unit42.paloaltonetworks.com?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=gemini-prompt-injection-copilot-reprompt-why-llms-can-t-tell-instructions-from-data" target="_blank" rel="noopener noreferrer nofollow">Palo Alto Unit 42</a></p><hr class="content_break"><h3 class="heading" style="text-align:left;" id="3-microsoft-copilot-reprompt-vulner"><b>3. </b>Microsoft Copilot &quot;Reprompt&quot; Vulnerability Enabled Silent Data Exfiltration</h3><p class="paragraph" style="text-align:left;">Varonis Threat Labs disclosed a critical vulnerability in Microsoft Copilot Personal that enabled attackers to silently exfiltrate sensitive user data through a single malicious link click. The &quot;Reprompt&quot; attack exploited the &#39;q&#39; URL parameter to inject hidden prompts, enabling continuous data extraction even after the Copilot session closed—all without requiring plugins, direct user interaction with Copilot, or additional authentication. Microsoft patched the flaw on January 13, 2026, following responsible disclosure in August 2025. The vulnerability bypassed built-in safety controls through parameter injection, double-request techniques, and chain-requests that maintained persistence.</p><p class="paragraph" style="text-align:left;"><b>Why This Matters:</b> Reprompt demonstrates the fundamental security challenge in LLM-based assistants: the inability to reliably distinguish between trusted instructions and untrusted data. For enterprise cloud security teams, AI assistants integrated into workflows create new attack surfaces for credential theft and data exfiltration. Notably, Microsoft 365 Copilot enterprise customers were not affected, highlighting the critical importance of enterprise-grade security controls around AI deployments. Organizations deploying AI assistants must implement input sanitization, session integrity checks, and continuous monitoring—treating LLM interfaces as untrusted user input vectors equivalent to traditional web applications.</p><p class="paragraph" style="text-align:left;"><b>Sources:</b> <a class="link" href="https://thehackernews.com?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=gemini-prompt-injection-copilot-reprompt-why-llms-can-t-tell-instructions-from-data" target="_blank" rel="noopener noreferrer nofollow">The Hacker News</a>, <a class="link" href="https://www.varonis.com?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=gemini-prompt-injection-copilot-reprompt-why-llms-can-t-tell-instructions-from-data" target="_blank" rel="noopener noreferrer nofollow">Varonis</a>, <a class="link" href="https://www.securityweek.com?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=gemini-prompt-injection-copilot-reprompt-why-llms-can-t-tell-instructions-from-data" target="_blank" rel="noopener noreferrer nofollow">SecurityWeek</a></p><hr class="content_break"><h3 class="heading" style="text-align:left;" id="3-crowd-strike-acquires-sgnl-for-74"><b>4. </b>Google Gemini Calendar Integration Exploited for Semantic Attacks</h3><p class="paragraph" style="text-align:left;">Miggo Security reported a prompt injection and authorization bypass vulnerability where malicious payloads embedded in Google Calendar invites could be interpreted by Gemini-integrated workflows, exposing private meeting data and enabling creation of deceptive events. Google confirmed and mitigated the issue following responsible disclosure.</p><p class="paragraph" style="text-align:left;"><b>Why This Matters:</b> This vulnerability represents the clearest manifestation of the cloud productivity-to-AI agent risk pattern: trusted business objects like calendar invites, documents, and tickets become instruction channels for AI systems. Traditional security controls looking for malicious strings cannot defend against semantic attacks. Defense requires tool-permission governance, data provenance tracking, and runtime policy enforcement to prevent models from exfiltrating sensitive context into attacker-visible fields.</p><p class="paragraph" style="text-align:left;"><b>Recommended Actions:</b> For enterprise AI assistants and agents, restrict tool scopes particularly for write actions and cross-tenant sharing. Implement allowlists for sensitive actions including create, share, and export operations. Deploy monitoring for unusual automated edits and creates in Calendar and Drive. Treat AI integrations as new privileged applications requiring threat modeling, comprehensive logging, and incident response playbooks.</p><p class="paragraph" style="text-align:left;"><b>Source:</b> <a class="link" href="https://www.miggo.io/post/weaponizing-calendar-invites-a-semantic-attack-on-google-gemini?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=gemini-prompt-injection-copilot-reprompt-why-llms-can-t-tell-instructions-from-data" target="_blank" rel="noopener noreferrer nofollow">Miggo Security</a></p><hr class="content_break"><h3 class="heading" style="text-align:left;" id="3-crowd-strike-acquires-sgnl-for-74"><b>5. </b>AWS Publishes Updated SOC Reports: 185 Services Now In Scope</h3><p class="paragraph" style="text-align:left;">AWS announced availability of Fall 2025 SOC 1/2/3 reports with 185 services now in scope, providing customer assurance, control mapping capabilities, and audit readiness support.</p><p class="paragraph" style="text-align:left;"><b>Why This Matters:</b> SOC scope expansions reduce control gaps for regulated cloud programs and can accelerate onboarding of additional managed services—provided teams update their control mappings accordingly. This is particularly relevant for organizations standardizing evidence collection across multi-region, multi-account environments.</p><p class="paragraph" style="text-align:left;"><b>Recommended Actions:</b> Refresh GRC evidence packages. Validate which newly in-scope services your organization relies on. Align internal control narratives to the updated SOC boundary. Leverage this expansion to simplify vendor questionnaires and reduce bespoke audit work.</p><p class="paragraph" style="text-align:left;"><b>Source:</b> <a class="link" href="https://aws.amazon.com/blogs/security/fall-2025-soc-1-2-and-3-reports-are-now-available-with-185-services-in-scope/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=gemini-prompt-injection-copilot-reprompt-why-llms-can-t-tell-instructions-from-data" target="_blank" rel="noopener noreferrer nofollow">AWS Security Blog</a></p><hr class="content_break"><h2 class="heading" style="text-align:left;" id="cloud-security-topic-of-the-week">🎯 Cloud Security Topic of the Week: </h2><h3 class="heading" style="text-align:left;" id="engineering-led-detection-programs-">Engineering-Led Detection Programs in the AI Era</h3><p class="paragraph" style="text-align:left;">As AI systems integrate deeper into enterprise security operations, the traditional approach to detection engineering faces fundamental challenges. </p><p class="paragraph" style="text-align:left;">This week&#39;s topic explores how engineering principles testing suites, validation frameworks, and architectural thinking—are becoming essential for building detection programs that scale effectively while maintaining accuracy and business context. </p><p class="paragraph" style="text-align:left;">We examine the reality of AI-augmented security operations, the critical distinction between AI as a force multiplier versus a replacement for human expertise, and practical strategies for maturing detection capabilities in 2026&#39;s threat landscape.</p><hr class="content_break"><h2 class="heading" style="text-align:left;" id="featured-experts-this-week"><b>Featured Experts This Week </b>🎤</h2><ul><li><p class="paragraph" style="text-align:left;"><a class="link" href="https://www.linkedin.com/in/antoinettemstevens/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=gemini-prompt-injection-copilot-reprompt-why-llms-can-t-tell-instructions-from-data" target="_blank" rel="noopener noreferrer nofollow"><b>Antoinette Stevens</b></a> –  Principal Security Engineer at Ramp</p></li><li><p class="paragraph" style="text-align:left;"><b><a class="link" href="https://www.linkedin.com/in/ashishrajan/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=gemini-prompt-injection-copilot-reprompt-why-llms-can-t-tell-instructions-from-data" target="_blank" rel="noopener noreferrer nofollow">Ashish Rajan</a></b> - CISO | Co-Host <a class="link" href="https://open.spotify.com/show/3nV4eijfzdHKIvDOaycVII?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=gemini-prompt-injection-copilot-reprompt-why-llms-can-t-tell-instructions-from-data" target="_blank" rel="noopener noreferrer nofollow">AI Security Podcast</a> , Host of <a class="link" href="https://www.cloudsecuritypodcast.tv/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=gemini-prompt-injection-copilot-reprompt-why-llms-can-t-tell-instructions-from-data" target="_blank" rel="noopener noreferrer nofollow">Cloud Security Podcast</a></p></li></ul><h2 class="heading" style="text-align:left;" id="definitions-and-core-concepts"><b>Definitions and Core Concepts 📚</b></h2><p class="paragraph" style="text-align:left;">Before diving into our insights, let&#39;s clarify some key terms:</p><ul><li><p class="paragraph" style="text-align:left;"><b>Prompt Injection:</b> A vulnerability class where attackers manipulate AI system behavior by embedding malicious instructions within user inputs or external data sources. Unlike traditional injection attacks, prompt injection exploits the fundamental architectural limitation that LLMs cannot reliably distinguish between trusted system instructions and untrusted user data.</p></li><li><p class="paragraph" style="text-align:left;"><b>Detection as Code:</b> An engineering approach to detection engineering where detection rules are managed as source-controlled code with testing suites, validation frameworks, and automated deployment pipelines. This methodology brings software engineering disciplines to security operations.</p></li><li><p class="paragraph" style="text-align:left;"><b>RAG Poisoning:</b> An attack technique targeting Retrieval-Augmented Generation systems where attackers inject malicious content into knowledge bases or vector databases that AI systems query to enhance their responses. Research demonstrates that as few as five carefully crafted documents can manipulate AI responses 90% of the time.</p></li><li><p class="paragraph" style="text-align:left;"><b>Model Evaluation (Evals):</b> A framework for measuring AI model accuracy and reliability by testing outputs against known-good examples or validation criteria. In security contexts, evals measure how accurately AI agents perform investigations, triage alerts, and make security decisions.</p></li><li><p class="paragraph" style="text-align:left;"><b>Model Memory:</b> The capability of AI systems to retain and reference previous interactions within a session or across sessions. Modern AI assistants use conversation history to inform future responses, which is critical for maintaining investigation context but also creates new attack surfaces for memory poisoning.</p></li><li><p class="paragraph" style="text-align:left;"><b>CI/CD Trust Boundary:</b> The authentication and authorization perimeter around continuous integration and deployment systems. Failures in these boundaries can enable supply-chain attacks where unauthorized actors trigger privileged automation, mint credentials, or publish malicious artifacts.</p></li><li><p class="paragraph" style="text-align:left;"><b>Alert Bloom:</b> A condition where detection rules generate excessive false positives, overwhelming security teams and reducing the signal-to-noise ratio. Engineering-led detection programs treat alert bloom as a code quality issue requiring systematic refactoring.</p></li></ul><hr class="content_break"><p class="paragraph" style="text-align:center;">This week&#39;s issue is sponsored by <a class="link" href="https://links.cloudsecuritypodcast.tv/push-security-jan-2026?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=gemini-prompt-injection-copilot-reprompt-why-llms-can-t-tell-instructions-from-data" target="_blank" rel="noopener noreferrer nofollow" style="color: rgb(17, 85, 204)">Push Security</a><b> </b></p><p class="paragraph" style="text-align:center;"><span style="color:rgb(0, 0, 0);"><b>Want to learn how to respond to modern attacks that don’t touch the endpoint? </b></span></p><p class="paragraph" style="text-align:center;"><span style="color:rgb(0, 0, 0);">Modern attacks have evolved — most breaches today don’t start with malware or vulnerability exploitation. Instead, attackers are targeting business applications directly over the internet. </span></p><p class="paragraph" style="text-align:center;"><span style="color:rgb(0, 0, 0);">This means that the way security teams need to detect and respond has changed too. </span></p><p class="paragraph" style="text-align:center;"><span style="color:rgb(0, 0, 0);">Register for the latest webinar from Push Security on February 11 for an interactive, “choose-your-own-adventure” experience walking through modern IR scenarios, where your inputs will determine the course of our investigations. </span></p><p class="paragraph" style="text-align:center;"><a class="link" href="https://links.cloudsecuritypodcast.tv/push-security-jan-2026?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=gemini-prompt-injection-copilot-reprompt-why-llms-can-t-tell-instructions-from-data" target="_blank" rel="noopener noreferrer nofollow" style="color: rgb(17, 85, 204)">Register Now</a></p><hr class="content_break"><h2 class="heading" style="text-align:left;" id="our-insights-from-this-practitioner">💡<b>Our Insights from this Practitioner 🔍</b></h2><h3 class="heading" style="text-align:left;" id="the-engineering-foundation-why-test">The Engineering Foundation: Why Testing and Validation Matter More Than Ever<b> </b><b>(</b><a class="link" href="https://www.cloudsecuritypodcast.tv/videos/why-ai-cant-replace-detection-engineers-build-vs-buy-the-future-of-soc?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=gemini-prompt-injection-copilot-reprompt-why-llms-can-t-tell-instructions-from-data" target="_blank" rel="noopener noreferrer nofollow"><b>Full Episode here</b></a><b>)</b></h3><hr class="content_break"><p class="paragraph" style="text-align:left;">One of the most striking insights from Antoinette Stevens centers on what distinguishes engineering-led detection programs from traditional security operations approaches. The difference isn&#39;t merely technical—it&#39;s philosophical. Engineering-trained practitioners bring disciplines that seem obvious until you realize they&#39;re not universally practiced: testing, validation, reliability, and observability.</p><p class="paragraph" style="text-align:left;">&quot;I think there are certain things that you learn when you&#39;ve been trained to do engineering work, especially around testing and validation and reliability and observability,&quot; Stevens explained. &quot;Some of those things that some people might take for granted—it feels obvious until you realize you haven&#39;t been doing it.&quot;</p><p class="paragraph" style="text-align:left;">This engineering mindset fundamentally changes how detection programs scale. Traditional approaches often treat detection rules as one-off implementations: identify a threat, write a rule, deploy it, and move on. Engineering-led programs instead view detection rules as production software requiring lifecycle management. Stevens described this evolution: &quot;We&#39;ve seen a rise in detection as code becoming more popular. That&#39;s an engineering-led approach where you are source controlling something, you might make a test suite to make sure it works. You have validations, you do various things before you move things to production.&quot;</p><p class="paragraph" style="text-align:left;">The practical implications are substantial. At Ramp, Stevens built validation directly into the detection pipeline. Her detection engineer created GitHub-based rule validation, while their detection platform provides built-in testing frameworks where mock logs validate that alerts fire as expected. This isn&#39;t theoretical testing—it&#39;s automated validation that runs before any detection rule reaches production.</p><p class="paragraph" style="text-align:left;"><b>Practical Application:</b> For security leaders building or maturing detection programs, the first step is treating detection rules as production code. This means implementing source control for all rules, creating testing frameworks that validate detection logic against both historical and synthetic data, building automated validation into your deployment pipeline, and establishing observability for detection effectiveness including false positive rates and coverage gaps.</p><h3 class="heading" style="text-align:left;" id="the-build-vs-buy-calculation-change">The Build vs. Buy Calculation Changes with Engineering Capability</h3><p class="paragraph" style="text-align:left;">The traditional security vendor landscape assumes teams lack engineering capability. Stevens challenges this assumption, arguing that engineering-led teams should fundamentally reconsider their approach to tooling procurement. &quot;If you have engineers on your team—people who can build software—your approach to buying tooling changes such that my approach is now: can I build it myself? And if so, is the cost of me building it myself more or less than the cost of buying it?&quot;</p><p class="paragraph" style="text-align:left;">This isn&#39;t about blanket build-versus-buy recommendations. Stevens applies a nuanced framework considering maintenance burden, support requirements, and long-term sustainability. &quot;There are platforms that help you with log ingestion—I&#39;m not paying for that. I could write a script and then never touch it again.&quot; But for more complex capabilities like AI agent evaluation frameworks, &quot;I don&#39;t want to run that myself. I want to stay away from building an entire product internally that won&#39;t outlast my tenure.&quot;</p><p class="paragraph" style="text-align:left;">The decision matrix centers on complexity and required ongoing investment. Simple, stable functionality that rarely requires updates becomes a build candidate. Complex systems requiring continuous tuning, evaluation frameworks, and vendor support become buy candidates. This is particularly relevant for AI-powered security tools where evaluation infrastructure, model rotation capabilities, and observability platforms represent significant ongoing engineering investments.</p><p class="paragraph" style="text-align:left;"><b>Practical Application:</b> When evaluating security tools, assess your team&#39;s engineering capability honestly. For teams with strong engineering skills, conduct build-versus-buy analyses for each major capability area. Calculate total cost of ownership including development time, maintenance burden, and opportunity cost of not focusing on core security problems. Reserve building for stable, low-maintenance capabilities where vendor solutions add limited value. Buy complex platforms requiring continuous evaluation, model management, or specialized expertise your team lacks.</p><h3 class="heading" style="text-align:left;" id="ai-as-force-multiplier-the-reality-">AI as Force Multiplier: The Reality Check on AI SOCs</h3><p class="paragraph" style="text-align:left;">The promise of fully autonomous AI security operations centers has captured significant vendor marketing attention. Stevens provides a grounded reality check based on actual implementation experience. At Ramp, AI handles first-level triage, conducting initial investigations before human analysts review and make final decisions. But Stevens is emphatic about the limitations: &quot;I do not trust it to close out alerts. I have watched ChatGPT just lie to me continuously. I am definitely not fully on board with just letting it close out things.&quot;</p><p class="paragraph" style="text-align:left;">The challenges are fundamental, not merely teething problems with immature technology. Stevens identified several critical issues. First, AI agents make logical inferences that may be incorrect: &quot;If you&#39;re not clear with it in how an investigation should be run, it tends to try to fill in the gaps for you. It likes to make a lot of logical summarizations where it says &#39;and this happened because of X&#39; and &#39;the result of this is that&#39;... I don&#39;t need you to guess at why someone did something. I just need the facts of the situation.&quot;</p><p class="paragraph" style="text-align:left;">Second, AI agents lack business context that&#39;s obvious to human analysts. Stevens cited an example where their AI flagged a subnet opened to the internet as legitimate because &quot;this action was taken by an engineer, and so it is legitimate.&quot; The agent missed the critical point: &quot;It doesn&#39;t matter if it&#39;s legitimate. We should always know and want to do something if a resource is open to the internet.&quot;</p><p class="paragraph" style="text-align:left;">Third, model variability poses operational risks. &quot;A new version of GPT could come out and be completely wrong,&quot; Stevens noted, highlighting that AI SOC implementations must account for model regression risks. This requires robust evaluation frameworks to catch degradation in investigation quality across model updates.</p><p class="paragraph" style="text-align:left;">Despite these limitations, AI has delivered measurable value. &quot;AI should be a force multiplier, but it should not be your brain,&quot; Stevens emphasized. For her team, AI has been &quot;helpful with noise reduction&quot; and &quot;helpful with tuning.&quot; The key is appropriate human oversight: &quot;I still have a base philosophy that if an alert is not useful, it should not fire,&quot; meaning even AI-triaged alerts require human validation to ensure quality.</p><p class="paragraph" style="text-align:left;"><b>Practical Application:</b> When implementing AI for security operations, deploy it for L1 triage with mandatory human review before any automated response actions. Implement evaluation frameworks to continuously measure investigation accuracy and catch model regression. Develop clear investigation procedures that constrain AI to factual reporting rather than inferential reasoning. Maintain business context through well-defined prompts and investigation playbooks. Track false positive rates separately for AI-triaged versus human-triaged alerts to measure actual effectiveness. Plan for model variability by designing systems that can gracefully handle model updates or degradation.</p><h3 class="heading" style="text-align:left;" id="the-prerequisites-why-ai-doesnt-rep">The Prerequisites: Why AI Doesn&#39;t Replace Fundamentals</h3><p class="paragraph" style="text-align:left;">Perhaps Stevens&#39; most important insight concerns who can effectively leverage AI for detection engineering. The answer challenges popular narratives about AI democratizing technical capabilities. &quot;If you don&#39;t know how to write code, AI won&#39;t help you get anywhere faster because it&#39;s going to write slop and then things will break and you won&#39;t know how to fix it,&quot; Stevens stated bluntly.</p><p class="paragraph" style="text-align:left;">This isn&#39;t gatekeeping—it&#39;s recognition that AI amplifies existing capabilities rather than creating them. Stevens explained: &quot;For people who already know how to build software, I think it accelerates. But for almost anyone else, it likely slows you down.&quot; The reasoning is straightforward: without understanding code architecture, developers cannot evaluate whether AI-generated code is over-engineered, missing critical functionality, or introducing vulnerabilities.</p><p class="paragraph" style="text-align:left;">The security implications are particularly concerning. &quot;If you don&#39;t understand how architecture works or the basics of writing code, then if your AI generates code that is over-engineered or missing something, especially if you&#39;re prompting it and you don&#39;t know how... you end up with a product that down the line isn&#39;t sustainable. At best isn&#39;t sustainable, at worst is vulnerable to something.&quot;</p><p class="paragraph" style="text-align:left;">This extends to understanding both security and cloud domains. Using AWS as an example, Stevens noted: &quot;You already have AWS experience, you have generic cloud computing understanding. You could easily walk into an Azure environment and go, show me the equivalent of object storage. I&#39;m looking for this specific type of thing. What&#39;s possible here?&quot; But that contextual knowledge is prerequisite—without it, you cannot effectively prompt AI or evaluate its responses.</p><p class="paragraph" style="text-align:left;"><b>Practical Application:</b> When building detection engineering teams, prioritize fundamental skills: coding ability, cloud architecture understanding, and security domain expertise. AI training should be additive, not foundational. For existing team members adopting AI tools, invest in prompt engineering training that emphasizes contextual expertise—teaching people to validate AI outputs rather than blindly trust them. For detection programs considering AI augmentation, ensure team members can review and modify AI-generated code, understand the architecture of systems being monitored, and possess security domain knowledge to catch hallucinations or incorrect inferences.</p><h3 class="heading" style="text-align:left;" id="multi-agent-architectures-the-futur">Multi-Agent Architectures: The Future of AI Security Operations</h3><p class="paragraph" style="text-align:left;">As detection programs mature their AI implementations, architectural sophistication increases. Stevens discussed exploring multi-agent architectures where specialized agents handle specific tasks with an orchestrator coordinating overall investigation workflow. &quot;Individual agents do very specific things and there&#39;s an orchestrator agent pulling data from each of them,&quot; she explained.</p><p class="paragraph" style="text-align:left;">This approach addresses a fundamental limitation of general-purpose AI agents: they perform better with narrow, well-defined responsibilities. &quot;They do really well when they do a very specific job,&quot; Stevens noted. She referenced vendor implementations with specialized agents for legal analysis, penetration testing, and validation that collaborate to reach consensus before taking actions.</p><p class="paragraph" style="text-align:left;">The multi-agent pattern also improves accuracy through specialization and cross-validation. Rather than a single agent making investigation decisions, specialized agents provide domain-specific analysis that an orchestrator synthesizes. This architectural approach mirrors how security operations centers organize human analysts by specialty areas, suggesting it may represent a sustainable pattern for AI-augmented security operations.</p><p class="paragraph" style="text-align:left;"><b>Practical Application:</b> For organizations with mature AI implementations, consider transitioning from monolithic AI agents to multi-agent architectures. Design specialized agents for distinct investigation tasks such as log analysis, threat intelligence lookup, compliance checking, and business context validation. Implement orchestrator patterns that coordinate specialized agents and synthesize their outputs. Develop evaluation frameworks that measure both individual agent accuracy and overall investigation quality. Start with lower-risk investigation areas before expanding to critical alerts.</p><h3 class="heading" style="text-align:left;" id="career-implications-the-shift-in-en">Career Implications: The Shift in Entry-Level Security Positions</h3><p class="paragraph" style="text-align:left;">Stevens raised a sobering reality about AI&#39;s impact on security career paths. &quot;I am really nervous for the entry-level positions in security,&quot; she admitted. &quot;A lot of the entry-level work that we would hire people for can be done now with AI.&quot; This isn&#39;t a distant future concern—it&#39;s happening now.</p><p class="paragraph" style="text-align:left;">The implications extend beyond technical skills. Stevens offered crucial advice for early-career professionals: &quot;If you&#39;re early on in your career, your job is to be a personality hire. That is your goal. If you don&#39;t do anything, you contribute almost no value at that stage. Your job is to be a good person and to learn as much as you can.&quot;</p><p class="paragraph" style="text-align:left;">This represents a fundamental shift in what organizations value in junior security professionals. Technical tasks that previously provided entry points into security careers are increasingly automated. What remains irreplaceable are interpersonal skills, adaptability, composure under pressure, and the ability to collaborate effectively—the very skills that enable someone to be, as Stevens put it, &quot;cool as a cucumber&quot; during incidents when everyone else is panicking.</p><p class="paragraph" style="text-align:left;">For those entering security now, Stevens recommended targeting larger, established enterprises rather than startups. &quot;Finding an older, larger company is going to be your best bet,&quot; she advised, noting that these organizations are slower to adopt AI and still maintain traditional career progression paths. &quot;The realistic possibilities of getting a job at a smaller startup without some sort of deeply technical skill to go with it is very slim.&quot;</p><p class="paragraph" style="text-align:left;"><b>Practical Application:</b> For hiring managers, recalibrate entry-level requirements to emphasize interpersonal effectiveness, learning agility, and composure under pressure alongside technical fundamentals. For early-career professionals, focus on developing differentiated skills that AI cannot replicate: incident response composure, cross-functional collaboration, business context understanding, and technical communication. Target internships and entry-level positions at larger enterprises with established security programs. Build foundational technical skills—particularly coding ability—that enable effective AI utilization rather than replacement by AI.</p><h3 class="heading" style="text-align:left;" id="the-emerging-threat-landscape-back-">The Emerging Threat Landscape: Back to Basics</h3><p class="paragraph" style="text-align:left;">Despite AI&#39;s prominence, Stevens emphasized that effective security in 2026 still requires fundamentals. &quot;A lot of this is just getting the basics right,&quot; she stressed. Her specific recommendations reflect current threat patterns: &quot;If you don&#39;t have a good endpoint detection program, you should probably consider getting one now, considering all the NPM packages that are getting compromised and the move to targeting engineering and developer machines.&quot;</p><p class="paragraph" style="text-align:left;">Shadow IT management has become increasingly critical as threat actors exploit AI hype. &quot;Getting a Shadow IT program going, if you don&#39;t have one, is a good move. We&#39;ve seen a lot of malware be propagated through tooling claiming to be AI or like different AI software,&quot; Stevens noted.</p><p class="paragraph" style="text-align:left;">This advice grounds AI security concerns in actionable defensive measures. While prompt injection and AI agent vulnerabilities represent real threats, they layer atop traditional attack vectors that remain highly effective. The solution isn&#39;t choosing between AI-focused or traditional defenses—it&#39;s ensuring foundational controls are robust before adding AI-specific protections.</p><p class="paragraph" style="text-align:left;"><b>Practical Application:</b> Audit your endpoint detection coverage, particularly for developer and engineering workstations that are increasingly targeted through compromised development tools and packages. Implement or strengthen Shadow IT programs with specific focus on unapproved AI tools that may introduce data exfiltration or prompt injection risks. Review software supply chain security including NPM, PyPI, and other package repositories your developers rely on. Establish baseline security controls before investing heavily in AI-specific security tools. Remember that threat actors are also learning AI—they&#39;re not yet expert adversaries, providing a window to strengthen fundamentals.</p><hr class="content_break"><h2 class="heading" style="text-align:left;" id="related-podcast-episodes"><b>RELATED RESOURCES 🎧</b></h2><ul><li><p class="paragraph" style="text-align:left;"><a class="link" href="https://owasp.org/www-project-top-10-for-large-language-model-applications/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=gemini-prompt-injection-copilot-reprompt-why-llms-can-t-tell-instructions-from-data" target="_blank" rel="noopener noreferrer nofollow">OWASP Top 10 for LLM Applications 2025</a> — Comprehensive guide to AI security risks including prompt injection, supply-chain vulnerabilities, and system prompt leakage</p></li><li><p class="paragraph" style="text-align:left;"><a class="link" href="https://detectionengineering.io?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=gemini-prompt-injection-copilot-reprompt-why-llms-can-t-tell-instructions-from-data" target="_blank" rel="noopener noreferrer nofollow">Detection Engineering Maturity Matrix</a> — Framework for assessing and improving detection program maturity with focus on engineering practices</p></li><li><p class="paragraph" style="text-align:left;"><a class="link" href="https://docs.aws.amazon.com/codebuild/latest/userguide/security-best-practices.html?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=gemini-prompt-injection-copilot-reprompt-why-llms-can-t-tell-instructions-from-data" target="_blank" rel="noopener noreferrer nofollow">AWS CodeBuild Security Best Practices</a> — Official guidance on securing CI/CD pipelines including least-privilege build roles and trigger authentication</p></li><li><p class="paragraph" style="text-align:left;"><a class="link" href="https://attack.mitre.org/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=gemini-prompt-injection-copilot-reprompt-why-llms-can-t-tell-instructions-from-data" target="_blank" rel="noopener noreferrer nofollow">MITRE ATT&CK for ICS: AI/ML Attack Techniques</a> — Taxonomy of adversarial techniques targeting AI systems including data poisoning and model evasion</p></li><li><p class="paragraph" style="text-align:left;"><a class="link" href="https://github.com/detection-as-code?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=gemini-prompt-injection-copilot-reprompt-why-llms-can-t-tell-instructions-from-data" target="_blank" rel="noopener noreferrer nofollow">Detection as Code: A Practical Guide</a> — Open-source resources for implementing source-controlled, tested, and validated detection rules</p></li><li><p class="paragraph" style="text-align:left;"><a class="link" href="https://simonwillison.net/2023/Apr/14/worst-that-can-happen/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=gemini-prompt-injection-copilot-reprompt-why-llms-can-t-tell-instructions-from-data" target="_blank" rel="noopener noreferrer nofollow">Prompt Injection Defenses: A Comprehensive Guide</a> — Simon Willison&#39;s authoritative resource on understanding and mitigating prompt injection attacks</p></li><li><p class="paragraph" style="text-align:left;"><a class="link" href="https://cloudsecurityalliance.org?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=gemini-prompt-injection-copilot-reprompt-why-llms-can-t-tell-instructions-from-data" target="_blank" rel="noopener noreferrer nofollow">Cloud Security Alliance: AI Security Guidance</a> — Industry consortium guidance on securing AI deployments in cloud environments</p></li></ul><h3 class="heading" style="text-align:left;" id="cloud-security-podcast"><b>Cloud Security Podcast</b></h3><ul><li><p class="paragraph" style="text-align:left;"><a class="link" href="https://www.cloudsecuritypodcast.tv/videos/why-ai-cant-replace-detection-engineers-build-vs-buy-the-future-of-soc?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=gemini-prompt-injection-copilot-reprompt-why-llms-can-t-tell-instructions-from-data" target="_blank" rel="noopener noreferrer nofollow">Cloud Security Podcast – Full Episode with Antoinette Stevens</a></p></li></ul><hr class="content_break"><h2 class="heading" style="text-align:left;" id="related-podcast-episodes"><b>Related Podcast Episodes 🎧</b></h2><div class="image"><a class="image__link" href="https://www.cloudsecuritypodcast.tv/videos/why-ai-cant-replace-detection-engineers-build-vs-buy-the-future-of-soc?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=gemini-prompt-injection-copilot-reprompt-why-llms-can-t-tell-instructions-from-data" rel="noopener" target="_blank"><img alt="" class="image__image" style="" src="https://media.beehiiv.com/cdn-cgi/image/fit=scale-down,format=auto,onerror=redirect,quality=80/uploads/asset/file/8fffb35f-aaed-40e7-bb92-7a775e64a57c/S07EP02_Antoinette_Stevens.jpg?t=1769027876"/></a><div class="image__source"><a class="image__source_link" href="https://www.cloudsecuritypodcast.tv/videos/why-ai-cant-replace-detection-engineers-build-vs-buy-the-future-of-soc?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=gemini-prompt-injection-copilot-reprompt-why-llms-can-t-tell-instructions-from-data" rel="noopener" target="_blank"><span class="image__source_text"><p>Why AI Can&#39;t Replace Detection Engineers: Build vs. Buy & The Future of SOC</p></span></a></div></div><h3 class="heading" style="text-align:left;" id="question-for-you-reply-to-this-emai">Question for you? (Reply to this email)</h3><p class="paragraph" style="text-align:left;">🤔 <b>Is your detection program treating AI as a force multiplier or a replacement — and what’s the first capability you’d automate </b><i><b>with human oversight still in the loop</b></i><b>?</b><br></p><p class="paragraph" style="text-align:left;">Next week, we&#39;ll explore another critical aspect of cloud security. Stay tuned!</p><hr class="content_break"><p class="paragraph" style="text-align:left;">📬 Want weekly expert takes on AI & Cloud Security? [<a class="link" href="https://www.cloudsecuritynewsletter.com/subscribe?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=gemini-prompt-injection-copilot-reprompt-why-llms-can-t-tell-instructions-from-data" target="_blank" rel="noopener noreferrer nofollow">Subscribe here</a>]”</p><hr class="content_break"><p class="paragraph" style="text-align:left;"><span style="color:rgb(238, 40, 60);"><b><a class="link" href="mailto:info@cloudsecuritypodcast.tv" target="_blank" rel="noopener noreferrer nofollow">We would love to hear from you</a></b></span>📢 for a feature or topic request or if you would like to sponsor an edition of Cloud Security Newsletter. </p><p class="paragraph" style="text-align:left;">Thank you for continuing to subscribe and Welcome to the new members in tis newsletter community💙</p><p class="paragraph" style="text-align:start;">Peace!</p><p class="paragraph" style="text-align:start;"><a class="link" href="https://www.linkedin.com/in/shilpi-bhattacharjee/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=gemini-prompt-injection-copilot-reprompt-why-llms-can-t-tell-instructions-from-data" target="_blank" rel="noopener noreferrer nofollow">Shilpi Bhattacharjee</a></p><div class="image"><a class="image__link" href="https://www.cloudsecuritypodcast.tv/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=gemini-prompt-injection-copilot-reprompt-why-llms-can-t-tell-instructions-from-data" rel="noopener" target="_blank"><img alt="" class="image__image" style="" src="https://media.beehiiv.com/cdn-cgi/image/fit=scale-down,format=auto,onerror=redirect,quality=80/uploads/asset/file/dc094a1c-678a-43e0-adc9-1bee6c3499e2/CSP_Logo_Blue_ScreenRes_3000x3000_v2.jpg"/></a></div><div class="button" style="text-align:left;"><a target="_blank" rel="noopener nofollow noreferrer" class="button__link" style="" href="{{rp_referral_hub_url}}"><span class="button__text" style=""> Share the newsletter </span></a></div><p class="paragraph" style="text-align:left;">Was this forwarded to you? You can <a class="link" href="https://www.cloudsecuritynewsletter.com/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=gemini-prompt-injection-copilot-reprompt-why-llms-can-t-tell-instructions-from-data" target="_blank" rel="noopener noreferrer nofollow">Sign up here</a>, to join our growing readership.</p><p class="paragraph" style="text-align:left;">Want to <b>sponsor</b> the next newsletter edition! <a class="link" href="mailto:info@cloudsecuritypodcast.tv" target="_blank" rel="noopener noreferrer nofollow">Lets make it happen </a></p><p class="paragraph" style="text-align:left;">Have you joined our FREE <b>Monthly</b> <a class="link" href="https://www.cloudsecuritybootcamp.com/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=gemini-prompt-injection-copilot-reprompt-why-llms-can-t-tell-instructions-from-data" target="_blank" rel="noopener noreferrer nofollow">Cloud Security Bootcamp</a> yet?</p><p class="paragraph" style="text-align:left;">checkout our <b>sister podcast</b> <a class="link" href="https://www.youtube.com/@AISecurityPodcast?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=gemini-prompt-injection-copilot-reprompt-why-llms-can-t-tell-instructions-from-data" target="_blank" rel="noopener noreferrer nofollow">AI Security Podcast</a></p></div><div class='beehiiv__footer'><br class='beehiiv__footer__break'><hr class='beehiiv__footer__line'><a target="_blank" class="beehiiv__footer_link" style="text-align: center;" href="https://www.beehiiv.com/?utm_campaign=f11053b5-dc7c-4676-aed0-eb825816367e&utm_medium=post_rss&utm_source=cloud_security_newsletter">Powered by beehiiv</a></div></div>
  ]]></content:encoded>
</item>

      <item>
  <title>VMware ESXi Zero-Days Exploited for Year: Lessons from Dayforce&#39;s AI-FirstVulnerability Management Strategy</title>
  <description>This week&#39;s newsletter covers critical enterprise vulnerabilities including year-long VMware ESXi exploitation by Chinese threat actors, HPE OneView&#39;s maximum-severity RCE flaw, and CrowdStrike&#39;s $740M identity security acquisition. Plus, Dayforce&#39;s Sapna Paul shares how AI is transforming vulnerability management from scan and patch workflows to continuous observation, detection, and model retraining.</description>
      <enclosure url="https://media.beehiiv.com/cdn-cgi/image/fit=scale-down,format=auto,onerror=redirect,quality=80/uploads/asset/file/a3495c5c-68a0-45ea-98fa-354604ecd87e/Screenshot_2026-01-13_at_6.02.36_PM.png" length="1706887" type="image/png"/>
  <link>https://www.cloudsecuritynewsletter.com/p/vmware-esxi-zero-days-exploited-for-a-year</link>
  <guid isPermaLink="true">https://www.cloudsecuritynewsletter.com/p/vmware-esxi-zero-days-exploited-for-a-year</guid>
  <pubDate>Wed, 14 Jan 2026 18:15:12 +0000</pubDate>
  <atom:published>2026-01-14T18:15:12Z</atom:published>
    <dc:creator>Ashish Rajan</dc:creator>
  <content:encoded><![CDATA[
    <div class='beehiiv'><style>
  .bh__table, .bh__table_header, .bh__table_cell { border: 1px solid #C0C0C0; }
  .bh__table_cell { padding: 5px; background-color: #FFFFFF; }
  .bh__table_cell p { color: #2D2D2D; font-family: 'Helvetica',Arial,sans-serif !important; overflow-wrap: break-word; }
  .bh__table_header { padding: 5px; background-color:#F1F1F1; }
  .bh__table_header p { color: #2A2A2A; font-family:'Trebuchet MS','Lucida Grande',Tahoma,sans-serif !important; overflow-wrap: break-word; }
</style><div class='beehiiv__body'><h2 class="heading" style="text-align:left;" id="hello-from-the-cloudverse"><span style="background-color:#28EEDA;"><b>Hello from the Cloud-verse!</b></span></h2><p class="paragraph" style="text-align:left;">This week’s Cloud Security Newsletter topic<b>: AI Vulnerability Management: Why You Can&#39;t Patch a Neural Network </b><a class="link" href="#cloud-security-topic-of-the-week" rel="noopener noreferrer nofollow">(continue reading)</a> </p><div class="button" style="text-align:center;"><a target="_blank" rel="noopener nofollow noreferrer" class="button__link" style="" href="https://links.cloudsecuritypodcast.tv/push-security-jan-2026?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=vmware-esxi-zero-days-exploited-for-year-lessons-from-dayforce-s-ai-firstvulnerability-management-strategy"><span class="button__text" style=""> This issue is sponsored by Push Security </span></a></div><hr class="content_break"><div class="image"><a class="image__link" href="https://www.cloudsecuritypodcast.tv/videos/ai-vulnerability-management-why-you-cant-patch-a-neural-network?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=vmware-esxi-zero-days-exploited-for-year-lessons-from-dayforce-s-ai-firstvulnerability-management-strategy" rel="noopener" target="_blank"><img alt="" class="image__image" style="" src="https://media.beehiiv.com/cdn-cgi/image/fit=scale-down,format=auto,onerror=redirect,quality=80/uploads/asset/file/45bad825-181b-4970-aceb-518ee9b5cc6a/Screenshot_2026-01-13_at_6.02.36_PM.png?t=1768327422"/></a><div class="image__source"><span class="image__source_text"><p>This image was generated by AI. It&#39;s still experimental, so it might not be a perfect match!</p></span></div></div><p class="paragraph" style="text-align:left;"><b>Incase, this is your 1st Cloud Security Newsletter! You are in good company! </b><br>You are reading this issue along with your friends and colleagues from companies like <i>Netflix</i>, Citi, <i>JP Morgan, Linkedin, Reddit, Github, Gitlab, CapitalOne, Robinhood, HSBC, British Airways, Airbnb, Block, Booking Inc & more</i> who subscribe to this newsletter, who like you want to learn what’s new with Cloud Security each week from their industry peers like many others who listen to <a class="link" href="https://open.spotify.com/show/6LZgeh4GecRYPc0WrwMB4I?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=vmware-esxi-zero-days-exploited-for-year-lessons-from-dayforce-s-ai-firstvulnerability-management-strategy" target="_blank" rel="noopener noreferrer nofollow">Cloud Security Podcast</a> & <a class="link" href="https://open.spotify.com/show/3nV4eijfzdHKIvDOaycVII?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=vmware-esxi-zero-days-exploited-for-year-lessons-from-dayforce-s-ai-firstvulnerability-management-strategy" target="_blank" rel="noopener noreferrer nofollow">AI Security Podcast</a> every week.</p><hr class="content_break"><p class="paragraph" style="text-align:left;">Welcome to this week’s Cloud Security Newsletter</p><p class="paragraph" style="text-align:left;">As we enter the first full week of 2026, a troubling pattern is emerging: sophisticated threat actors are exploiting critical infrastructure vulnerabilities months, even years before public disclosure, while enterprises struggle to secure increasingly dynamic, AI-driven assets.</p><p class="paragraph" style="text-align:left;">This week, <b>Sapna Paul</b>, Senior Manager of Vulnerability Management at Dayforce, joins us to explain why traditional scan-and-patch workflows are breaking down — and how AI is forcing a fundamental rethink of what “vulnerability management” actually means.<i>[</i><a class="link" href="https://www.cloudsecuritypodcast.tv/videos/ai-vulnerability-management-why-you-cant-patch-a-neural-network?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=vmware-esxi-zero-days-exploited-for-year-lessons-from-dayforce-s-ai-firstvulnerability-management-strategy" target="_blank" rel="noopener noreferrer nofollow">Listen to the episode</a><i>]</i></p><hr class="content_break"><h2 class="heading" style="text-align:left;" id="tldr-for-busy-readers">📰 TL;DR for Busy Readers</h2><ul><li><p class="paragraph" style="text-align:left;">Chinese threat actors exploited <b>VMware ESXi zero-days</b> for <b>12+ months</b> before disclosure, targeting <b>30,000+ exposed instances</b></p></li><li><p class="paragraph" style="text-align:left;"><b>HPE OneView vulnerability (CVSS 10.0) actively exploited</b>, CISA sets January 28 remediation deadline for infrastructure management platforms</p></li><li><p class="paragraph" style="text-align:left;"><b>CrowdStrike acquires SGNL for $740M</b>, signaling major consolidation in identity security and zero-trust markets</p></li><li><p class="paragraph" style="text-align:left;"><b>AI vulnerability management requires new approach</b>: observe, detect anomalies, retrain models—not just scan and patch</p></li><li><p class="paragraph" style="text-align:left;"><b>FBI warns North Korean Kimsuky APT using QR code phishing</b> to bypass email security and harvest credentials via mobile devices</p></li></ul><hr class="content_break"><h2 class="heading" style="text-align:left;" id="this-weeks-security-news">📰 <b>THIS WEEK&#39;S TOP 5 SECURITY HEADLINES</b></h2><h3 class="heading" style="text-align:left;" id="1-chinese-hackers-exploit-v-mware-e"><b>1. </b>Chinese Hackers Exploit VMware ESXi Zero-Days a Year Before Disclosure</h3><p class="paragraph" style="text-align:left;">Security researchers at <b>Huntress</b> uncovered a VM escape toolkit (“MAESTRO”) used by Chinese threat actors to exploit three <b>VMware ESXi</b> zero-days:</p><ul><li><p class="paragraph" style="text-align:left;">CVE-2025-22224</p></li><li><p class="paragraph" style="text-align:left;">CVE-2025-22225</p></li><li><p class="paragraph" style="text-align:left;">CVE-2025-22226</p></li></ul><p class="paragraph" style="text-align:left;"><b>Active since:</b> Feb 2024<br><b>Disclosed:</b> March 2025<br><b>Exposed instances:</b> 30,000+</p><h3 class="heading" style="text-align:left;" id="why-this-matters">Why This Matters</h3><p class="paragraph" style="text-align:left;"><b>Why Security Leaders & Teams Should Care</b></p><ul><li><p class="paragraph" style="text-align:left;">Hypervisors remain <i>silent, high-impact</i> targets</p></li><li><p class="paragraph" style="text-align:left;">Signature-based scanning failed for a full year</p></li><li><p class="paragraph" style="text-align:left;">VM escape = total trust boundary collapse</p></li></ul><p class="paragraph" style="text-align:left;"><b>Action</b></p><ul><li><p class="paragraph" style="text-align:left;">Patch ESXi immediately</p></li><li><p class="paragraph" style="text-align:left;">Enable hypervisor-level monitoring</p></li><li><p class="paragraph" style="text-align:left;">Reduce ESXi internet exposure</p></li><li><p class="paragraph" style="text-align:left;">Segment management planes</p></li></ul><p class="paragraph" style="text-align:left;">Virtualization platforms remain <b>high-value APT targets</b>.</p><p class="paragraph" style="text-align:left;"><b>Sources:</b> <a class="link" href="https://www.huntress.com/blog/esxi-vm-escape-exploit?utm_source=chatgpt.com" target="_blank" rel="noopener noreferrer nofollow">Huntress Research</a>, <a class="link" href="https://support.broadcom.com/web/ecx/support-content-notification/-/external/content/SecurityAdvisories/0/25390?utm_source=chatgpt.com" target="_blank" rel="noopener noreferrer nofollow">Broadcom Security Advisory</a>, <a class="link" href="https://www.securityweek.com/exploit-for-vmware-zero-day-flaws-likely-built-a-year-before-public-disclosure?utm_source=chatgpt.com" target="_blank" rel="noopener noreferrer nofollow">SecurityWeek</a></p><hr class="content_break"><h3 class="heading" style="text-align:left;" id="2-hpe-one-view-zero-day-exploited-i"><b>2. </b>HPE OneView Zero-Day Exploited in the Wild (CVE-2025-37164)</h3><p class="paragraph" style="text-align:left;">A maximum-severity vulnerability in <b>HPE OneView</b> was added to <b>CISA</b>’s KEV list.</p><p class="paragraph" style="text-align:left;"><b>Remediation deadline:</b> January 28, 2026</p><p class="paragraph" style="text-align:left;"><b>Why This Matters</b><br> Infrastructure control planes are becoming <b>single points of enterprise failure</b>.</p><p class="paragraph" style="text-align:left;"><b>Action</b></p><ul><li><p class="paragraph" style="text-align:left;">Patch immediately</p></li><li><p class="paragraph" style="text-align:left;">Restrict OneView network access</p></li><li><p class="paragraph" style="text-align:left;">Monitor admin activity and command execution</p></li></ul><p class="paragraph" style="text-align:left;">Infrastructure management platforms are becoming <b>single points of failure</b>.</p><p class="paragraph" style="text-align:left;"><b>Sources:</b> <a class="link" href="https://www.cisa.gov/news-events/alerts/2026/01/07/cisa-adds-two-known-exploited-vulnerabilities-catalog?utm_source=chatgpt.com" target="_blank" rel="noopener noreferrer nofollow">CISA KEV Catalog</a>, <a class="link" href="https://www.itpro.com/security/hpe-oneview-critical-vulnerability-cisa-advisory?utm_source=chatgpt.com" target="_blank" rel="noopener noreferrer nofollow">ITPro</a></p><hr class="content_break"><h3 class="heading" style="text-align:left;" id="3-crowd-strike-acquires-sgnl-for-74"><b>3. </b>CrowdStrike Acquires SGNL for $740M</h3><p class="paragraph" style="text-align:left;">CrowdStrike announced its acquisition of <b>SGNL</b>, expanding into <b>AI-powered continuous authorization</b>.</p><h3 class="heading" style="text-align:left;" id="why-this-matters">Why This Matters</h3><p class="paragraph" style="text-align:left;">Traditional IAM struggles with:</p><ul><li><p class="paragraph" style="text-align:left;">Static access policies</p></li><li><p class="paragraph" style="text-align:left;">Multi-cloud fragmentation</p></li><li><p class="paragraph" style="text-align:left;">Limited entitlement visibility</p></li></ul><p class="paragraph" style="text-align:left;">This acquisition reinforces trends toward:</p><ul><li><p class="paragraph" style="text-align:left;">Continuous authorization</p></li><li><p class="paragraph" style="text-align:left;">Zero-trust architectures</p></li><li><p class="paragraph" style="text-align:left;">Real-time access decisions</p></li></ul><p class="paragraph" style="text-align:left;"><b>Sources:</b><a class="link" href="https://www.aikido.dev/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=vmware-esxi-zero-days-exploited-for-year-lessons-from-dayforce-s-ai-firstvulnerability-management-strategy" target="_blank" rel="noopener noreferrer nofollow"> </a><a class="link" href="https://www.cnbc.com/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=vmware-esxi-zero-days-exploited-for-year-lessons-from-dayforce-s-ai-firstvulnerability-management-strategy" target="_blank" rel="noopener noreferrer nofollow">CNBC</a>, <a class="link" href="https://www.crowdstrike.com/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=vmware-esxi-zero-days-exploited-for-year-lessons-from-dayforce-s-ai-firstvulnerability-management-strategy" target="_blank" rel="noopener noreferrer nofollow">CrowdStrike Announcement </a></p><hr class="content_break"><h3 class="heading" style="text-align:left;" id="3-crowd-strike-acquires-sgnl-for-74"><b>4. </b>FBI Warns: Kimsuky Using QR-Code Phishing (“Quishing”)</h3><p class="paragraph" style="text-align:left;">The FBI warns that <b>North Korean APT Kimsuky</b> is embedding malicious QR codes in emails to harvest credentials via mobile devices.</p><h3 class="heading" style="text-align:left;" id="why-this-matters">Why This Matters</h3><p class="paragraph" style="text-align:left;">Key risks include:</p><ul><li><p class="paragraph" style="text-align:left;">Mobile devices bypassing enterprise security</p></li><li><p class="paragraph" style="text-align:left;">QR codes evading email inspection</p></li><li><p class="paragraph" style="text-align:left;">Session token theft bypassing MFA</p></li></ul><h3 class="heading" style="text-align:left;" id="recommended-actions">Recommended Actions</h3><ul><li><p class="paragraph" style="text-align:left;">Deploy Mobile Threat Defense (MTD)</p></li><li><p class="paragraph" style="text-align:left;">Treat mobile as lower-trust endpoints</p></li><li><p class="paragraph" style="text-align:left;">Educate users on QR phishing</p></li><li><p class="paragraph" style="text-align:left;">Monitor mobile authentication anomalies</p></li></ul><p class="paragraph" style="text-align:left;"><b>Sources:</b><a class="link" href="https://www.aikido.dev/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=vmware-esxi-zero-days-exploited-for-year-lessons-from-dayforce-s-ai-firstvulnerability-management-strategy" target="_blank" rel="noopener noreferrer nofollow"> </a><a class="link" href="https://www.ic3.gov/CSA/2026/260108.pdf?utm_source=chatgpt.com" target="_blank" rel="noopener noreferrer nofollow">FBI Flash Alert</a>, <a class="link" href="https://www.techradar.com/pro/security/north-korean-hackers-using-malicious-qr-codes-in-spear-phishing-fbi-warns?utm_source=chatgpt.com" target="_blank" rel="noopener noreferrer nofollow">Tech Radar</a></p><hr class="content_break"><h3 class="heading" style="text-align:left;" id="3-crowd-strike-acquires-sgnl-for-74"><b>5. </b>Insider Threat: Cybersecurity Professionals in BlackCat Ransomware</h3><p class="paragraph" style="text-align:left;">Two U.S. cybersecurity professionals pleaded guilty to acting as <b>BlackCat/ALPHV ransomware affiliates</b>, earning <b>$1.27M</b>.</p><h3 class="heading" style="text-align:left;" id="why-this-matters">Why This Matters</h3><p class="paragraph" style="text-align:left;">Key Takeaway:</p><ul><li><p class="paragraph" style="text-align:left;">Insider threats now include <b>highly skilled defenders</b></p></li><li><p class="paragraph" style="text-align:left;">Zero-trust must apply to <b>privileged users</b></p></li><li><p class="paragraph" style="text-align:left;">Vendor security assurance needs deeper scrutiny</p></li></ul><p class="paragraph" style="text-align:left;"><b>Sources:</b><a class="link" href="https://www.aikido.dev/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=vmware-esxi-zero-days-exploited-for-year-lessons-from-dayforce-s-ai-firstvulnerability-management-strategy" target="_blank" rel="noopener noreferrer nofollow"> </a><a class="link" href="https://www.justice.gov/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=vmware-esxi-zero-days-exploited-for-year-lessons-from-dayforce-s-ai-firstvulnerability-management-strategy" target="_blank" rel="noopener noreferrer nofollow">U.S. Department of Justice</a>, <a class="link" href="https://www.securityweek.com/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=vmware-esxi-zero-days-exploited-for-year-lessons-from-dayforce-s-ai-firstvulnerability-management-strategy" target="_blank" rel="noopener noreferrer nofollow">SecurityWeek</a></p><hr class="content_break"><h2 class="heading" style="text-align:left;" id="aws-previews-security-agent">AWS Previews Security Agent</h2><p class="paragraph" style="text-align:left;">AWS announced the preview of Security Agent, an AI-powered penetration testing and security review platform that provides context-aware application security testing from design through deployment. The tool represents AWS&#39;s investment in AI-driven security automation, addressing the growing gap between application release velocity and security testing cadence.</p><h3 class="heading" style="text-align:left;" id="why-this-matters">Why This Matters</h3><p class="paragraph" style="text-align:left;">Traditional application security testing struggles to keep pace with modern CI/CD pipelines where organizations deploy code hundreds of times per day. AWS Security Agent attempts to solve this by:</p><ul><li><p class="paragraph" style="text-align:left;">Context-aware security testing</p></li><li><p class="paragraph" style="text-align:left;">CI/CD-integrated AppSec</p></li><li><p class="paragraph" style="text-align:left;">Developer-focused remediation</p></li></ul><p class="paragraph" style="text-align:left;">Organizations should validate AI findings and maintain human oversight.</p><p class="paragraph" style="text-align:left;"><b>Sources</b>:<a class="link" href="https://aws.amazon.com/blogs/security/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=vmware-esxi-zero-days-exploited-for-year-lessons-from-dayforce-s-ai-firstvulnerability-management-strategy" target="_blank" rel="noopener noreferrer nofollow"> AWS Security Blog</a></p><hr class="content_break"><h2 class="heading" style="text-align:left;" id="cloud-security-topic-of-the-week">🎯 Cloud Security Topic of the Week: </h2><h3 class="heading" style="text-align:left;" id="ai-assets-and-the-evolution-of-vuln">AI Assets and the Evolution of Vulnerability Management</h3><p class="paragraph" style="text-align:left;">Traditional vulnerability management has operated on a simple premise: find the flaw, patch it, verify the fix. But what happens when the asset itself is a learning system that can develop flaws organically over time? As AI models become core enterprise assets powering everything from fraud detection to customer service security teams face a fundamental challenge: you can&#39;t patch a neural network the way you patch Windows Server.</p><p class="paragraph" style="text-align:left;">This week, we explore how vulnerability management is evolving to address AI assets, based on insights from <b>Sapna Paul</b>&#39;s work at Dayforce. The implications extend beyond AI-specific risks to fundamentally reshape how we think about asset management, risk quantification, and security operations in dynamic environments.</p><hr class="content_break"><h2 class="heading" style="text-align:left;" id="featured-experts-this-week"><b>Featured Experts This Week </b>🎤</h2><ul><li><p class="paragraph" style="text-align:left;"><a class="link" href="https://www.linkedin.com/in/sapnapaul/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=vmware-esxi-zero-days-exploited-for-year-lessons-from-dayforce-s-ai-firstvulnerability-management-strategy" target="_blank" rel="noopener noreferrer nofollow"><b>Sapna Paul</b></a> – Senior Manager, Vulnerability Management, Dayforce</p></li><li><p class="paragraph" style="text-align:left;"><b><a class="link" href="https://www.linkedin.com/in/ashishrajan/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=vmware-esxi-zero-days-exploited-for-year-lessons-from-dayforce-s-ai-firstvulnerability-management-strategy" target="_blank" rel="noopener noreferrer nofollow">Ashish Rajan</a></b> - CISO | Co-Host <a class="link" href="https://open.spotify.com/show/3nV4eijfzdHKIvDOaycVII?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=vmware-esxi-zero-days-exploited-for-year-lessons-from-dayforce-s-ai-firstvulnerability-management-strategy" target="_blank" rel="noopener noreferrer nofollow">AI Security Podcast</a> , Host of <a class="link" href="https://www.cloudsecuritypodcast.tv/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=vmware-esxi-zero-days-exploited-for-year-lessons-from-dayforce-s-ai-firstvulnerability-management-strategy" target="_blank" rel="noopener noreferrer nofollow">Cloud Security Podcast</a></p></li></ul><h2 class="heading" style="text-align:left;" id="definitions-and-core-concepts"><b>Definitions and Core Concepts 📚</b></h2><p class="paragraph" style="text-align:left;">Before diving into our insights, let&#39;s clarify some key terms:</p><ul><li><p class="paragraph" style="text-align:left;"><b>AI Model Vulnerability</b>: A flaw or weakness in an artificial intelligence model that can be exploited to manipulate outcomes, exfiltrate training data, or cause the model to behave incorrectly. Unlike traditional software vulnerabilities, AI model vulnerabilities often stem from traning data poisoning, adversarial inputs, or emergent behaviors that weren&#39;t present during development.</p></li><li><p class="paragraph" style="text-align:left;"><b>Neural Network</b>: The underlying architecture of most modern AI systems, consisting of layers of interconnected nodes (neurons) that process information and learn patterns from data. In vulnerability management contexts, the neural network itself becomes the asset requiring protection, rather than just the application hosting it.</p></li><li><p class="paragraph" style="text-align:left;"><b>Model Retraining</b>: The process of updating an AI model by feeding it new or corrected training data to modify its behavior. This is the AI equivalent of patching, but unlike applying a security patch, retraining can take weeks of compute time and requires extensive validation to ensure the model maintains accuracy while addressing the vulnerability.</p></li><li><p class="paragraph" style="text-align:left;"><b>Data Poisoning</b>: An attack technique where adversaries inject malicious data into an AI model&#39;s training dataset, causing it to learn incorrect patterns or behaviors. This is particularly insidious because the vulnerability becomes embedded in the model&#39;s fundamental understanding, not just in its code.</p></li><li><p class="paragraph" style="text-align:left;"><b>Behavioral Monitoring (AI Context)</b>: Continuous observation of AI model outputs and decision-making patterns to detect anomalies that might indicate vulnerability exploitation, data drift, or emergent problematic behaviors. This replaces traditional point-in-time vulnerability scanning for AI assets.</p></li><li><p class="paragraph" style="text-align:left;"><b>NIST AI Risk Management Framework (AI RMF)</b>: A voluntary framework providing guidance for managing risks associated with artificial intelligence systems. It covers governance, trust, and responsible AI principles, serving as a de facto standard for U.S. organizations building or deploying AI systems.</p></li></ul><hr class="content_break"><p class="paragraph" style="text-align:center;">This week&#39;s issue is sponsored by <a class="link" href="https://links.cloudsecuritypodcast.tv/push-security-jan-2026?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=vmware-esxi-zero-days-exploited-for-year-lessons-from-dayforce-s-ai-firstvulnerability-management-strategy" target="_blank" rel="noopener noreferrer nofollow" style="color: rgb(17, 85, 204)">Push Security</a><b> </b></p><p class="paragraph" style="text-align:center;"><span style="color:rgb(0, 0, 0);"><b>Want to learn how to respond to modern attacks that don’t touch the endpoint? </b></span></p><p class="paragraph" style="text-align:center;"><span style="color:rgb(0, 0, 0);">Modern attacks have evolved — most breaches today don’t start with malware or vulnerability exploitation. Instead, attackers are targeting business applications directly over the internet. </span></p><p class="paragraph" style="text-align:center;"><span style="color:rgb(0, 0, 0);">This means that the way security teams need to detect and respond has changed too. </span></p><p class="paragraph" style="text-align:center;"><span style="color:rgb(0, 0, 0);">Register for the latest webinar from Push Security on February 11 for an interactive, “choose-your-own-adventure” experience walking through modern IR scenarios, where your inputs will determine the course of our investigations. </span></p><p class="paragraph" style="text-align:center;"><a class="link" href="https://links.cloudsecuritypodcast.tv/push-security-jan-2026?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=vmware-esxi-zero-days-exploited-for-year-lessons-from-dayforce-s-ai-firstvulnerability-management-strategy" target="_blank" rel="noopener noreferrer nofollow" style="color: rgb(17, 85, 204)">Register Now</a></p><hr class="content_break"><h2 class="heading" style="text-align:left;" id="our-insights-from-this-practitioner">💡<b>Our Insights from this Practitioner 🔍</b></h2><h3 class="heading" style="text-align:left;" id="ai-vulnerability-management-why-you"><b>AI Vulnerability Management: Why You Can&#39;t Patch a Neural Network </b><b>(</b><a class="link" href="https://www.cloudsecuritypodcast.tv/videos/ai-vulnerability-management-why-you-cant-patch-a-neural-network?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=vmware-esxi-zero-days-exploited-for-year-lessons-from-dayforce-s-ai-firstvulnerability-management-strategy" target="_blank" rel="noopener noreferrer nofollow"><b>Full Episode here</b></a><b>)</b></h3><hr class="content_break"><h3 class="heading" style="text-align:left;" id="the-three-layers-of-ai-vulnerabilit"><b>The Three Layers of AI Vulnerability Management</b></h3><p class="paragraph" style="text-align:left;">Sapna Paul fundamentally reframes vulnerability management for AI assets around three critical layers that security teams must address simultaneously:</p><p class="paragraph" style="text-align:left;"><b>Layer 1: Production Model Security</b><br> The first layer focuses on how adversaries can impact models actively serving users. &quot;<i>Have to keep analyzing the production models that are running, that are solving a use case,</i>&quot; Sapna explains. &quot;<i>How an adversarial can impact the working of that model through red teaming and security testing of these models is so important.</i>&quot;</p><p class="paragraph" style="text-align:left;">This isn&#39;t traditional penetration testing it requires specialized techniques like:</p><ul><li><p class="paragraph" style="text-align:left;">Adversarial example generation to test model robustness</p></li><li><p class="paragraph" style="text-align:left;">Prompt injection testing for language models</p></li><li><p class="paragraph" style="text-align:left;">Boundary testing to identify model hallucination thresholds</p></li><li><p class="paragraph" style="text-align:left;">Performance degradation analysis under attack conditions</p></li></ul><p class="paragraph" style="text-align:left;">For enterprise security teams, this means dedicating resources to ongoing model security assessment, not just pre-deployment validation. As Sapna notes, vulnerabilities in AI systems &quot;<i>can make AI become rogue</i>&quot; if left unaddressed.</p><p class="paragraph" style="text-align:left;"><b>Layer 2: Data Security and Integrity</b><br>The second layer addresses the training pipeline, the most critical attack surface for AI systems. &quot;<i>The vulnerabilities that can make their way into the model is through data, the training that is happening on the left side,</i>&quot; Sapna emphasizes. &quot;<i>Poisoning, the biasness that an AI can learn is in the left side. So it&#39;s so important to understand what data is going in and the controls that need to be put around that data layer.</i>&quot;</p><p class="paragraph" style="text-align:left;">This requires security teams to:</p><ul><li><p class="paragraph" style="text-align:left;">Implement access controls on training datasets with full audit logging</p></li><li><p class="paragraph" style="text-align:left;">Validate data integrity and provenance before training</p></li><li><p class="paragraph" style="text-align:left;">Monitor for bias indicators in training data</p></li><li><p class="paragraph" style="text-align:left;">Test models for signs of data poisoning (unusual decision boundaries, performance degradation on specific inputs)</p></li></ul><p class="paragraph" style="text-align:left;">The complexity here is that data poisoning attacks can be subtle. An attacker doesn&#39;t need to corrupt the entire dataset; carefully crafted examples injected into training data can cause specific failure modes that only activate under certain conditions.</p><p class="paragraph" style="text-align:left;"><b>Layer 3: Behavioral Ethics and Compliance</b><br> The third layer addresses what Sapna calls the &quot;technically correct but ethically wrong&quot; problem. &quot;<i>A model is giving you an outcome which is technically correct but ethically wrong if compliance is not there, AI can become rogue.</i>&quot;</p><p class="paragraph" style="text-align:left;">This layer requires entirely new evaluation frameworks:</p><ul><li><p class="paragraph" style="text-align:left;">Ethical review of model outputs across diverse scenarios</p></li><li><p class="paragraph" style="text-align:left;">Demographic fairness testing to identify discriminatory patterns</p></li><li><p class="paragraph" style="text-align:left;">Regulatory compliance validation (EU AI Act, state-level AI regulations)</p></li><li><p class="paragraph" style="text-align:left;">Alignment testing to ensure model behavior matches organizational values</p></li></ul><p class="paragraph" style="text-align:left;">Consider a fraud detection model that accurately identifies fraud but disproportionately flags transactions from specific demographic groups. The model is technically working it&#39;s catching fraud but its operation violates fairness principles and potentially regulatory requirements.</p><p class="paragraph" style="text-align:left;">&quot;<i>You have to make business understand what risk AI is bringing to your business,</i>&quot; Sapna notes. &quot;<i>If you cannot tie back to the business or whatever your company&#39;s goals are, it&#39;s just not a useful risk register.</i>&quot;</p><h3 class="heading" style="text-align:left;" id="shifting-from-patching-to-retrainin"><b>Shifting from Patching to Retraining: A Fundamental Workflow Change</b></h3><p class="paragraph" style="text-align:left;">Traditional vulnerability management follows a predictable cycle: discover vulnerability → apply patch → verify fix → move to next vulnerability. This workflow assumes the asset is static between updates.</p><p class="paragraph" style="text-align:left;">AI assets break this model completely. &quot;<i>What would you do if an AI model has learned something wrong? There&#39;s no patch,</i>&quot; Sapna explains. &quot;<i>There&#39;s no Patch Tuesdays. You can&#39;t just go there and patch the vulnerabilities. You have to take a step back, detect what it&#39;s doing, what weaknesses and flaws the model has, and then retrain.</i>&quot;</p><p class="paragraph" style="text-align:left;">The retraining process introduces complexity that traditional patch management never encountered:</p><p class="paragraph" style="text-align:left;"><b>Compute Resource Requirements</b>: Model retraining can consume weeks of GPU time and cost tens of thousands of dollars for large models. This makes the &quot;just patch everything&quot; approach financially impractical.</p><p class="paragraph" style="text-align:left;"><b>Validation Cycles</b>: After retraining, teams must validate that:</p><ul><li><p class="paragraph" style="text-align:left;">The vulnerability has been addressed</p></li><li><p class="paragraph" style="text-align:left;">The model maintains its original accuracy and performance</p></li><li><p class="paragraph" style="text-align:left;">No new failure modes have been introduced</p></li><li><p class="paragraph" style="text-align:left;">Regulatory compliance is maintained</p></li></ul><p class="paragraph" style="text-align:left;"><b>Deployment Risk</b>: Deploying a retrained model carries risk that doesn&#39;t exist with traditional patches. A misconfigured model update could cause widespread service degradation or incorrect business decisions.</p><p class="paragraph" style="text-align:left;">&quot;<i>It takes so many cycles of compute and weeks of testing and revalidation and redeployment,</i>&quot; Sapna notes. &quot;<i>If I just put it in one line: you don&#39;t scan and patch anymore. You observe, you detect anomalies, and you retrain the model.</i>&quot;</p><h3 class="heading" style="text-align:left;" id="asset-management-meets-ai-when-asse"><b>Asset Management Meets AI: When Assets Learn and Evolve</b></h3><p class="paragraph" style="text-align:left;">Perhaps the most fundamental shift Sapna describes is reconceptualizing what constitutes an &quot;asset&quot; in vulnerability management. &quot;The asset is a neural network. The asset is a model, right? It&#39;s an AI, artificial intelligence. We have not done or assessed that sort of asset before in a typical traditional vulnerability management or cybersecurity space.&quot;</p><p class="paragraph" style="text-align:left;">This creates several challenges for traditional asset management systems:</p><p class="paragraph" style="text-align:left;"><b>Dynamic Asset Behavior</b>: Traditional assets have predictable behavior. A web server processes HTTP requests the same way today as it did yesterday. AI models evolve their behavior based on input patterns and can develop emergent properties that weren&#39;t present during testing.</p><p class="paragraph" style="text-align:left;"><b>Parameter Space Complexity</b>: &quot;There are billions of parameters, billions of different things you have to think about when you are doing vulnerability management in this space,&quot; Sapna explains. Traditional configuration management might track hundreds or thousands of settings; AI models have billions of weights and connections that collectively determine behavior.</p><p class="paragraph" style="text-align:left;"><b>Lack of Determinism</b>: Unlike traditional software where the same input reliably produces the same output, AI models incorporate randomness and can produce different results for identical inputs. This makes vulnerability reproduction and testing significantly more complex.</p><p class="paragraph" style="text-align:left;">For security teams, this means rethinking asset inventory practices:</p><ul><li><p class="paragraph" style="text-align:left;">Document not just the model, but its training data lineage, hyperparameters, and validation metrics</p></li><li><p class="paragraph" style="text-align:left;">Track model versions with the same rigor as critical infrastructure</p></li><li><p class="paragraph" style="text-align:left;">Maintain test datasets that can verify model behavior over time</p></li><li><p class="paragraph" style="text-align:left;">Establish baselines for &quot;normal&quot; model behavior to detect drift or compromise</p></li></ul><h3 class="heading" style="text-align:left;" id="speaking-the-language-of-data-teams"><b>Speaking the Language of Data Teams: Bridging the Security-AI Gap</b></h3><p class="paragraph" style="text-align:left;">One of Sapna&#39;s most valuable insights addresses the communication gap between security and AI development teams. &quot;<i>You have to talk in the language that data people understand. If you go and say your model has this security vulnerability, they&#39;ll be like, &#39;What? No.&#39; But if you go and say your model is going to be 30% less effective because an adversarial can do something to it and change its scores, now you can sit with that person and make them understand.</i>&quot;</p><p class="paragraph" style="text-align:left;">This reframing is critical because:</p><p class="paragraph" style="text-align:left;"><b>Different Risk Perspectives</b>: Security teams think in terms of vulnerabilities and exploits. Data science teams think in terms of model accuracy, precision, and recall. The same issue must be translated across these mental models.</p><p class="paragraph" style="text-align:left;"><b>Business Impact Translation</b>: Instead of technical security jargon, frame AI vulnerabilities in terms of:</p><ul><li><p class="paragraph" style="text-align:left;">Model performance degradation under attack</p></li><li><p class="paragraph" style="text-align:left;">Business revenue impact from compromised models</p></li><li><p class="paragraph" style="text-align:left;">Regulatory penalties from biased or non-compliant model behavior</p></li><li><p class="paragraph" style="text-align:left;">Reputational damage from AI failures</p></li></ul><p class="paragraph" style="text-align:left;"><b>Upskilling Requirements</b>: &quot;For that, we need to upskill as well. Security compliance people need to upskill their AI concepts, knowledge, subject matter... so you can talk in their language,&quot; Sapna emphasizes.</p><p class="paragraph" style="text-align:left;">Organizations should invest in:</p><ul><li><p class="paragraph" style="text-align:left;">AI fundamentals training for security teams</p></li><li><p class="paragraph" style="text-align:left;">Security awareness training for data science teams</p></li><li><p class="paragraph" style="text-align:left;">Cross-functional working groups that bring both disciplines together</p></li><li><p class="paragraph" style="text-align:left;">Shared metrics that matter to both security and AI teams</p></li></ul><h3 class="heading" style="text-align:left;" id="governance-over-perfection-the-prag"><b>Governance Over Perfection: The Pragmatic Path Forward</b></h3><p class="paragraph" style="text-align:left;">Sapna offers a refreshingly pragmatic perspective on AI security: &quot;<i>You can&#39;t really have a perfect system. An ML model cannot just explain how it is reaching that particular outcome. So you won&#39;t have perfect explainability, but what you can have is good governance of AI systems</i>.&quot;</p><p class="paragraph" style="text-align:left;">This governance-first approach recognizes that:</p><p class="paragraph" style="text-align:left;"><b>Explainability Has Limits</b>: Many of the most powerful AI models (deep neural networks, ensemble methods) are inherently difficult to fully explain. Demanding complete explainability would prevent deployment of valuable AI systems.</p><p class="paragraph" style="text-align:left;"><b>Governance Provides Guardrails</b>: Instead of trying to make AI perfect, focus on:</p><ul><li><p class="paragraph" style="text-align:left;">Version control for models (like Git for code)</p></li><li><p class="paragraph" style="text-align:left;">Audit trails of data access and model training</p></li><li><p class="paragraph" style="text-align:left;">Security gates in ML pipelines</p></li><li><p class="paragraph" style="text-align:left;">Continuous monitoring of production models</p></li><li><p class="paragraph" style="text-align:left;">Incident response procedures for AI failures</p></li></ul><p class="paragraph" style="text-align:left;">&quot;If governed properly and audited properly, these can become big enablers for people who are trying to get AI models out the door,&quot; Sapna notes.</p><h3 class="heading" style="text-align:left;" id="using-ai-to-manage-ai-vulnerabiliti"><b>Using AI to Manage AI Vulnerabilities: The Meta-Solution</b></h3><p class="paragraph" style="text-align:left;">Perhaps most intriguingly, Sapna describes how AI itself can help manage the vulnerability burden that AI creates. &quot;<i>Use AI in your workflow if you have not done that. Start thinking and start planning for it... You can use AI to lessen and lessen the burden of vulnerabilities on our stakeholders.</i>&quot;</p><p class="paragraph" style="text-align:left;">Practical applications include:</p><p class="paragraph" style="text-align:left;"><b>Intelligent Prioritization</b>: AI can analyze vulnerability data alongside threat intelligence, asset criticality, and exploitability to surface the most critical risks. &quot;<i>It needs a lot of planning. You can&#39;t just use public models and start giving vulnerability data to it... In a more contained environment, how can you take the model, feed the data it needs to, and then see what outcomes you can get from the model to help prioritize even farther than you are prioritizing today.</i>&quot;</p><p class="paragraph" style="text-align:left;"><b>Operational Efficiency</b>: &quot;<i>Your analyst hours can be saved if you just give them the magic of AI,</i>&quot; Sapna explains. Instead of spending time on coordination and manual data collection, analysts can focus on vulnerability research and quality assessment. &quot;<i>All those traditional workflows can just die. The data would be on the plate for you to investigate and quality assess.</i>&quot;</p><p class="paragraph" style="text-align:left;"><b>Persona-Based Assistants</b>: Sapna describes implementing AI bots tailored to specific user roles: &quot;<i>Think about the personas that you deal with in your day-to-day life. Put yourself in those shoes. Write a prompt that if I am a system engineer and I need to patch through these servers in this weekend cycle, what should I focus on?</i>&quot;</p><p class="paragraph" style="text-align:left;">The key is providing these AI assistants with appropriate context:</p><ul><li><p class="paragraph" style="text-align:left;">Current vulnerability data and compensating controls</p></li><li><p class="paragraph" style="text-align:left;">Asset criticality and business impact</p></li><li><p class="paragraph" style="text-align:left;">Patch schedules and maintenance windows</p></li><li><p class="paragraph" style="text-align:left;">Historical patching success rates</p></li></ul><h3 class="heading" style="text-align:left;" id="the-build-vs-buy-decision-for-ai-se"><b>The Build vs. Buy Decision for AI Security Tools</b></h3><p class="paragraph" style="text-align:left;">Organizations face the critical decision of whether to build custom AI security capabilities or purchase commercial tools. Sapna&#39;s guidance emphasizes starting with available resources:</p><p class="paragraph" style="text-align:left;"><b>Cloud Provider Platforms</b>: &quot;<i>Every company is using a cloud provider... Every cloud provider has an AI platform. You have access to their own platform. You can just go to your cloud service provider platform, start using AWS Bedrock, Azure OpenAI Foundry... start writing prompts of what would help me in my job.</i>&quot;</p><p class="paragraph" style="text-align:left;"><b>Vendor Tool AI Features</b>: &quot;<i>Plug AI into your existing workflow systems like ServiceNow or Jira they have their own AIs as well. Each tool has their own AI, and these companies are doing a great job in giving that functionality to users.</i>&quot;</p><p class="paragraph" style="text-align:left;"><b>Gradual Adoption</b>: &quot;<i>Start with one video and just make a note that you have to see one video every week on AI,&quot; Sapna advises. &quot;Start with a basic prompt first. This will improve over time. Once you put it in their hands, your users, you will be amazed to see how they can build such great prompts.</i>&quot;</p><p class="paragraph" style="text-align:left;">The message: don&#39;t let perfect be the enemy of good. Start experimenting with AI tools using low-stakes use cases, learn from the results, and gradually expand to more critical applications.</p><h3 class="heading" style="text-align:left;" id="compliance-and-innovation-complemen"><b>Compliance and Innovation: Complementary, Not Contradictory</b></h3><p class="paragraph" style="text-align:left;">Sapna challenges the common perception that compliance and security slow innovation, particularly with AI. &quot;<i>Compliance and innovation go hand in hand. They complement each other. Compliance allows you to put guardrails in your AI. If compliance is not there, if risk and assessment is not there, AI can become rogue.</i>&quot;</p><p class="paragraph" style="text-align:left;">The key is framing security as an enabler:</p><p class="paragraph" style="text-align:left;"><b>Guardrails Enable Speed</b>: By establishing clear security boundaries, organizations can move faster with AI deployments because they&#39;ve pre-approved certain architectures and controls. Teams don&#39;t need to reinvent security for each new AI project.</p><p class="paragraph" style="text-align:left;"><b>Shift-Left Principle</b>: &quot;<i>You have to shift left. You can&#39;t think compliance or security is an afterthought. From the moment you have started your pipeline, from the moment the data comes into your organization, it has to be within certain guardrails.</i>&quot;</p><p class="paragraph" style="text-align:left;">Practical implementation:</p><ul><li><p class="paragraph" style="text-align:left;">Access controls on training data with full audit trails</p></li><li><p class="paragraph" style="text-align:left;">Version control from day one of model development</p></li><li><p class="paragraph" style="text-align:left;">Integrity testing and bias detection in data pipelines</p></li><li><p class="paragraph" style="text-align:left;">Security gates at each stage of the ML pipeline</p></li></ul><p class="paragraph" style="text-align:left;">&quot;<i>That mindset needs to be there to see them as enablers, not blockers,&quot; </i>Sapna emphasizes. <i>&quot;And I think that is also relevant to other innovations, not just AI.</i>&quot;</p><h3 class="heading" style="text-align:left;" id="future-skills-what-vulnerability-ma"><b>Future Skills: What Vulnerability Management Teams Need</b></h3><p class="paragraph" style="text-align:left;">Looking ahead, Sapna outlines the evolving skill requirements for vulnerability management professionals: &quot;<i>You just have to have …to learn AI…</i>”</p><hr class="content_break"><h2 class="heading" style="text-align:left;" id="related-podcast-episodes"><b>RELATED RESOURCES 🎧</b></h2><h3 class="heading" style="text-align:left;" id="cloud-security-podcast"><b>Cloud Security Podcast</b></h3><ul><li><p class="paragraph" style="text-align:left;"><a class="link" href="https://www.cloudsecuritypodcast.tv/videos/ai-vulnerability-management-why-you-cant-patch-a-neural-network?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=vmware-esxi-zero-days-exploited-for-year-lessons-from-dayforce-s-ai-firstvulnerability-management-strategy" target="_blank" rel="noopener noreferrer nofollow">Cloud Security Podcast – Full Episode with Sapna Paul</a></p></li></ul><hr class="content_break"><h2 class="heading" style="text-align:left;" id="related-podcast-episodes"><b>Related Podcast Episodes 🎧</b></h2><div class="image"><a class="image__link" href="https://www.cloudsecuritypodcast.tv/videos/ai-vulnerability-management-why-you-cant-patch-a-neural-network?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=vmware-esxi-zero-days-exploited-for-year-lessons-from-dayforce-s-ai-firstvulnerability-management-strategy" rel="noopener" target="_blank"><img alt="" class="image__image" style="" src="https://media.beehiiv.com/cdn-cgi/image/fit=scale-down,format=auto,onerror=redirect,quality=80/uploads/asset/file/23cfffe5-084e-4e79-89af-64e6317b0e47/S07EP01_Sapna_Paul_.jpg?t=1768321950"/></a><div class="image__source"><a class="image__source_link" href="https://www.cloudsecuritypodcast.tv/videos/ai-vulnerability-management-why-you-cant-patch-a-neural-network?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=vmware-esxi-zero-days-exploited-for-year-lessons-from-dayforce-s-ai-firstvulnerability-management-strategy" rel="noopener" target="_blank"><span class="image__source_text"><p>AI Vulnerability Management: Why You Can&#39;t Patch a Neural Network</p></span></a></div></div><p class="paragraph" style="text-align:left;"><br></p><h3 class="heading" style="text-align:left;" id="question-for-you-reply-to-this-emai">Question for you? (Reply to this email)</h3><p class="paragraph" style="text-align:left;"> 🤖 <b>Should </b><b>AI agents be treated as “identities” with least privilege and audit trails or as tools your org implicitly trusts?</b><br></p><p class="paragraph" style="text-align:left;">Next week, we&#39;ll explore another critical aspect of cloud security. Stay tuned!</p><hr class="content_break"><p class="paragraph" style="text-align:left;">📬 Want weekly expert takes on AI & Cloud Security? [<a class="link" href="https://www.cloudsecuritynewsletter.com/subscribe?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=vmware-esxi-zero-days-exploited-for-year-lessons-from-dayforce-s-ai-firstvulnerability-management-strategy" target="_blank" rel="noopener noreferrer nofollow">Subscribe here</a>]”</p><hr class="content_break"><p class="paragraph" style="text-align:left;"><span style="color:rgb(238, 40, 60);"><b><a class="link" href="mailto:info@cloudsecuritypodcast.tv" target="_blank" rel="noopener noreferrer nofollow">We would love to hear from you</a></b></span>📢 for a feature or topic request or if you would like to sponsor an edition of Cloud Security Newsletter. </p><p class="paragraph" style="text-align:left;">Thank you for continuing to subscribe and Welcome to the new members in tis newsletter community💙</p><p class="paragraph" style="text-align:start;">Peace!</p><p class="paragraph" style="text-align:start;"><a class="link" href="https://www.linkedin.com/in/shilpi-bhattacharjee/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=vmware-esxi-zero-days-exploited-for-year-lessons-from-dayforce-s-ai-firstvulnerability-management-strategy" target="_blank" rel="noopener noreferrer nofollow">Shilpi Bhattacharjee</a></p><div class="image"><a class="image__link" href="https://www.cloudsecuritypodcast.tv/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=vmware-esxi-zero-days-exploited-for-year-lessons-from-dayforce-s-ai-firstvulnerability-management-strategy" rel="noopener" target="_blank"><img alt="" class="image__image" style="" src="https://media.beehiiv.com/cdn-cgi/image/fit=scale-down,format=auto,onerror=redirect,quality=80/uploads/asset/file/dc094a1c-678a-43e0-adc9-1bee6c3499e2/CSP_Logo_Blue_ScreenRes_3000x3000_v2.jpg"/></a></div><div class="button" style="text-align:left;"><a target="_blank" rel="noopener nofollow noreferrer" class="button__link" style="" href="{{rp_referral_hub_url}}"><span class="button__text" style=""> Share the newsletter </span></a></div><p class="paragraph" style="text-align:left;">Was this forwarded to you? You can <a class="link" href="https://www.cloudsecuritynewsletter.com/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=vmware-esxi-zero-days-exploited-for-year-lessons-from-dayforce-s-ai-firstvulnerability-management-strategy" target="_blank" rel="noopener noreferrer nofollow">Sign up here</a>, to join our growing readership.</p><p class="paragraph" style="text-align:left;">Want to <b>sponsor</b> the next newsletter edition! <a class="link" href="mailto:info@cloudsecuritypodcast.tv" target="_blank" rel="noopener noreferrer nofollow">Lets make it happen </a></p><p class="paragraph" style="text-align:left;">Have you joined our FREE <b>Monthly</b> <a class="link" href="https://www.cloudsecuritybootcamp.com/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=vmware-esxi-zero-days-exploited-for-year-lessons-from-dayforce-s-ai-firstvulnerability-management-strategy" target="_blank" rel="noopener noreferrer nofollow">Cloud Security Bootcamp</a> yet?</p><p class="paragraph" style="text-align:left;">checkout our <b>sister podcast</b> <a class="link" href="https://www.youtube.com/@AISecurityPodcast?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=vmware-esxi-zero-days-exploited-for-year-lessons-from-dayforce-s-ai-firstvulnerability-management-strategy" target="_blank" rel="noopener noreferrer nofollow">AI Security Podcast</a></p></div><div class='beehiiv__footer'><br class='beehiiv__footer__break'><hr class='beehiiv__footer__line'><a target="_blank" class="beehiiv__footer_link" style="text-align: center;" href="https://www.beehiiv.com/?utm_campaign=a748d51b-191a-4f3a-94d3-4e7d15bc37c0&utm_medium=post_rss&utm_source=cloud_security_newsletter">Powered by beehiiv</a></div></div>
  ]]></content:encoded>
</item>

      <item>
  <title>🚨IDEsaster: AI IDE Vulnerabilities Turn Developer Tools into an Enterprise Attack Surface</title>
  <description>This week covers a new class of AI supply chain attacks targeting developer workflows. Security researchers disclosed 24 CVE-assigned vulnerabilities across popular AI-enhanced IDEs, where prompt injection enables remote code execution, data exfiltration, and credential theft directly from developer machines.We also unpack ServiceNow’s reported $7B Armis acquisition as a signal of asset visibility convergence and why Rubrik’s Matt Castriotta argues identity backup is now non-negotiable for real cyber resilience.</description>
      <enclosure url="https://media.beehiiv.com/cdn-cgi/image/fit=scale-down,format=auto,onerror=redirect,quality=80/uploads/asset/file/2e2af671-226e-465d-a3d5-eba762b7db90/Screenshot_2025-12-17_at_10.08.53_PM.png" length="2672722" type="image/png"/>
  <link>https://www.cloudsecuritynewsletter.com/p/idesaster-ai-ide-vulnerabilities-attack-surface</link>
  <guid isPermaLink="true">https://www.cloudsecuritynewsletter.com/p/idesaster-ai-ide-vulnerabilities-attack-surface</guid>
  <pubDate>Wed, 17 Dec 2025 17:20:31 +0000</pubDate>
  <atom:published>2025-12-17T17:20:31Z</atom:published>
    <dc:creator>Ashish Rajan</dc:creator>
  <content:encoded><![CDATA[
    <div class='beehiiv'><style>
  .bh__table, .bh__table_header, .bh__table_cell { border: 1px solid #C0C0C0; }
  .bh__table_cell { padding: 5px; background-color: #FFFFFF; }
  .bh__table_cell p { color: #2D2D2D; font-family: 'Helvetica',Arial,sans-serif !important; overflow-wrap: break-word; }
  .bh__table_header { padding: 5px; background-color:#F1F1F1; }
  .bh__table_header p { color: #2A2A2A; font-family:'Trebuchet MS','Lucida Grande',Tahoma,sans-serif !important; overflow-wrap: break-word; }
</style><div class='beehiiv__body'><h2 class="heading" style="text-align:left;" id="hello-from-the-cloudverse"><span style="background-color:#28EEDA;"><b>Hello from the Cloud-verse!</b></span></h2><p class="paragraph" style="text-align:left;">This week’s Cloud Security Newsletter topic<b>: Your Backup Strategy Is Incomplete Without Identity Recovery </b><a class="link" href="#cloud-security-topic-of-the-week" rel="noopener noreferrer nofollow">(continue reading)</a> </p><div class="button" style="text-align:center;"><a target="_blank" rel="noopener nofollow noreferrer" class="button__link" style="" href="https://www.aisecuritypodcast.com/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=idesaster-ai-ide-vulnerabilities-turn-developer-tools-into-an-enterprise-attack-surface"><span class="button__text" style=""> This issue is sponsored by the AI Security Podcast </span></a></div><hr class="content_break"><div class="image"><a class="image__link" href="https://www.cloudsecuritypodcast.tv/videos/the-terraform-lift-shift-playbook-migrating-200-apps-to-multi-cloud-with-terraform?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=idesaster-ai-ide-vulnerabilities-turn-developer-tools-into-an-enterprise-attack-surface" rel="noopener" target="_blank"><img alt="" class="image__image" style="" src="https://media.beehiiv.com/cdn-cgi/image/fit=scale-down,format=auto,onerror=redirect,quality=80/uploads/asset/file/2e2af671-226e-465d-a3d5-eba762b7db90/Screenshot_2025-12-17_at_10.08.53_PM.png?t=1765989625"/></a><div class="image__source"><span class="image__source_text"><p>This image was generated by AI. It&#39;s still experimental, so it might not be a perfect match!</p></span></div></div><p class="paragraph" style="text-align:left;"><b>Incase, this is your 1st Cloud Security Newsletter! You are in good company! </b><br>You are reading this issue along with your friends and colleagues from companies like <i>Netflix</i>, Citi, <i>JP Morgan, Linkedin, Reddit, Github, Gitlab, CapitalOne, Robinhood, HSBC, British Airways, Airbnb, Block, Booking Inc & more</i> who subscribe to this newsletter, who like you want to learn what’s new with Cloud Security each week from their industry peers like many others who listen to <a class="link" href="https://open.spotify.com/show/6LZgeh4GecRYPc0WrwMB4I?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=idesaster-ai-ide-vulnerabilities-turn-developer-tools-into-an-enterprise-attack-surface" target="_blank" rel="noopener noreferrer nofollow">Cloud Security Podcast</a> & <a class="link" href="https://open.spotify.com/show/3nV4eijfzdHKIvDOaycVII?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=idesaster-ai-ide-vulnerabilities-turn-developer-tools-into-an-enterprise-attack-surface" target="_blank" rel="noopener noreferrer nofollow">AI Security Podcast</a> every week.</p><hr class="content_break"><p class="paragraph" style="text-align:left;">Welcome to this week’s Cloud Security Newsletter</p><p class="paragraph" style="text-align:left;">This week brings sobering reminders that cyber resilience isn&#39;t just about having backups it&#39;s about having the ability to recover when everything you trust becomes compromised. From a CVSS 10.0 React vulnerability under active nation-state exploitation to AI coding assistants becoming supply chain attack vectors, the attack surface continues expanding in ways traditional disaster recovery plans never anticipated.</p><p class="paragraph" style="text-align:left;">In this edition, we explore why identity systems have become &quot;ground zero&quot; for cyber resilience with <b>Matt Castriotta, Field CTO for Cloud at Rubrik</b>. Matt brings seven years of shepherding cloud security at Rubrik and over two decades in data management, offering hard-won insights on why having backups doesn&#39;t mean you can recover, and why your Active Directory might be your most critical unprotected asset.<i> [</i><a class="link" href="https://www.cloudsecuritypodcast.tv/videos/why-backups-arent-enough-identity-recovery-is-key-against-ransomware?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=idesaster-ai-ide-vulnerabilities-turn-developer-tools-into-an-enterprise-attack-surface" target="_blank" rel="noopener noreferrer nofollow">Listen to the episode</a><i>]</i></p><hr class="content_break"><h2 class="heading" style="text-align:left;" id="tldr-for-busy-readers">📰 TL;DR for Busy Readers</h2><ul><li><p class="paragraph" style="text-align:left;">💻 <b>IDEsaster:</b> 24 CVEs across Cursor, Windsurf, GitHub Copilot, Zed, Roo Code, and Junie - 100% of tested AI IDEs vulnerable to prompt injection, enabling RCE and data exfiltration from developer machines</p></li><li><p class="paragraph" style="text-align:left;">🔐 <b>Identity is ground zero:</b> Forest-level Active Directory recovery must be part of every cyber resilience plan</p></li><li><p class="paragraph" style="text-align:left;">🤖 <b>AI supply chain attacks:</b> PromptPwnd + IDEsaster show AI coding assistants acting as high-privilege non-human identities</p></li><li><p class="paragraph" style="text-align:left;">💰 <b>ServiceNow-Armis ($7B):</b> Consolidation signals asset visibility and CMDB accuracy are now resilience prerequisites.</p></li></ul><hr class="content_break"><h2 class="heading" style="text-align:left;" id="this-weeks-security-news">📰 <b>THIS WEEK&#39;S TOP 3 SECURITY HEADLINES</b></h2><h3 class="heading" style="text-align:left;" id="1-id-esaster-30-vulnerabilities-in-"><b>1. </b><b>IDEsaster: 30+ Vulnerabilities in AI IDEs</b></h3><p class="paragraph" style="text-align:left;">A comprehensive security analysis has uncovered 24 CVE-assigned vulnerabilities across popular AI-enhanced IDEs including Cursor, Windsurf, GitHub Copilot, Zed, Roo Code, and Junie. The research found that 100% of tested AI IDEs are vulnerable to prompt injection attacks that, when combined with legacy IDE features, enable remote code execution and data exfiltration. The vulnerabilities affect developer workstations directly, potentially compromising source code and credentials stored in local development environments.</p><p class="paragraph" style="text-align:left;"><b>Why This Matters:</b> Developer workstations have become high-value targets, and AI IDE vulnerabilities provide attackers with direct access to intellectual property and credentials. This complements the PromptPwnd findings and underscores a broader theme: AI integration is outpacing security controls. For security leaders, this means extending zero-trust principles to developer tools and ensuring that credential management doesn&#39;t rely on the security of local development environments. Recovery plans must account for scenarios where developer machines and associated credentials are compromised.</p><p class="paragraph" style="text-align:left;"><b>Sources:</b><a class="link" href="https://twitter.com/maccariTA?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=idesaster-ai-ide-vulnerabilities-turn-developer-tools-into-an-enterprise-attack-surface" target="_blank" rel="noopener noreferrer nofollow"> Independent Security Researcher MaccariTA</a>,<a class="link" href="https://thehackernews.com/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=idesaster-ai-ide-vulnerabilities-turn-developer-tools-into-an-enterprise-attack-surface" target="_blank" rel="noopener noreferrer nofollow"> The Hacker News</a></p><hr class="content_break"><h3 class="heading" style="text-align:left;" id="2-service-now-in-talks-to-acquire-a"><b>2. </b><b>ServiceNow in Talks to Acquire Armis for $7 Billion</b></h3><p class="paragraph" style="text-align:left;">ServiceNow is negotiating to acquire cyber asset management platform Armis for approximately $7 billion, marking what would be the company&#39;s largest M&A deal to date. Armis specializes in providing visibility into operational technology (OT), Internet of Things (IoT), and unmanaged assets critical blind spots for enterprise security teams.</p><p class="paragraph" style="text-align:left;"><b>Why This Matters:</b> This acquisition signals major consolidation in the enterprise asset management space and highlights the convergence of CMDB (Configuration Management Database) capabilities with cybersecurity. For cloud security leaders, this underscores the growing importance of comprehensive asset inventory something Matt Castriotta emphasizes as foundational to cyber resilience: &quot;<i>You&#39;d be amazed at how many customers I talk to that don&#39;t have a CMDB and are not inventorying exactly what they have. Shadow IT is still a thing.</i>&quot; The deal also reflects increasing focus on critical infrastructure protection, particularly OT/IoT environments that traditional security tools struggle to cover.</p><p class="paragraph" style="text-align:left;"><b>Source:</b><a class="link" href="https://industrialcyber.co/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=idesaster-ai-ide-vulnerabilities-turn-developer-tools-into-an-enterprise-attack-surface" target="_blank" rel="noopener noreferrer nofollow"> Industrial Cyber</a></p><hr class="content_break"><h3 class="heading" style="text-align:left;" id="3-prompt-pwnd-ai-coding-assistant-s"><b>3. </b><b>PromptPwnd: AI Coding Assistant Supply Chain Attacks</b></h3><p class="paragraph" style="text-align:left;">Security researchers have disclosed a new vulnerability class affecting AI coding assistants in CI/CD pipelines, dubbed &quot;PromptPwnd.&quot; The attack vector uses prompt injection in GitHub Actions and GitLab CI/CD workflows to exploit AI agents like Gemini CLI, Claude Code, and OpenAI Codex. Successful exploitation enables secret exfiltration, remote code execution, and GitHub token theft. At least five Fortune 500 companies have been confirmed as impacted. Google patched Gemini CLI within four days of disclosure.</p><p class="paragraph" style="text-align:left;"><b>Why This Matters:</b> This represents an entirely new attack surface that traditional security controls weren&#39;t designed to address. As organizations rush to adopt AI coding assistants for productivity gains, they&#39;re introducing non-human identities with broad access to code repositories and secrets. Castriotta&#39;s warning about AI agents resonates here: &quot;If we&#39;re gonna use AI to increase productivity, we&#39;re gonna need to remove the human in the loop... You need the ability to understand what that agent did, and if that agent did something erroneous, you need the ability to be able to rewind that back.&quot; Organizations need visibility into AI agent activity and the capability to recover from erroneous or malicious actions.</p><p class="paragraph" style="text-align:left;"><b>Sources:</b><a class="link" href="https://www.aikido.dev/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=idesaster-ai-ide-vulnerabilities-turn-developer-tools-into-an-enterprise-attack-surface" target="_blank" rel="noopener noreferrer nofollow"> Aikido Security</a>,<a class="link" href="https://www.securityweek.com/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=idesaster-ai-ide-vulnerabilities-turn-developer-tools-into-an-enterprise-attack-surface" target="_blank" rel="noopener noreferrer nofollow"> SecurityWeek</a>,<a class="link" href="https://thehackernews.com/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=idesaster-ai-ide-vulnerabilities-turn-developer-tools-into-an-enterprise-attack-surface" target="_blank" rel="noopener noreferrer nofollow"> The Hacker News</a></p><hr class="content_break"><h2 class="heading" style="text-align:left;" id="cloud-security-topic-of-the-week">🎯 Cloud Security Topic of the Week: </h2><h3 class="heading" style="text-align:left;" id="identity-as-ground-zero-why-your-ba"><b>Identity as Ground Zero: Why Your Backup Strategy is Incomplete Without Identity Recovery</b></h3><p class="paragraph" style="text-align:left;">The conversation around cyber resilience has evolved beyond <i>&quot;do you have backups?&quot; </i>to a more critical question: Can you actually recover when your identity systems are compromised? Most organizations treat identity as a security control to protect, but few treat it as a data source that requires the same backup and recovery rigor as their databases and applications.</p><p class="paragraph" style="text-align:left;">This oversight creates a dangerous gap. As Matt Castriotta explains: <i>&quot;If identity&#39;s down, everything&#39;s down. You have no ability to access anything. Your identity system is ground zero. It&#39;s the perimeter&quot;.</i></p><hr class="content_break"><h2 class="heading" style="text-align:left;" id="featured-experts-this-week"><b>Featured Experts This Week </b>🎤</h2><ul><li><p class="paragraph" style="text-align:left;"><a class="link" href="https://www.linkedin.com/in/mattcastriotta/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=idesaster-ai-ide-vulnerabilities-turn-developer-tools-into-an-enterprise-attack-surface" target="_blank" rel="noopener noreferrer nofollow"><b>Matt Castriotta</b></a>, Field CTO for Cloud, Rubrik</p></li><li><p class="paragraph" style="text-align:left;"><b><a class="link" href="https://www.linkedin.com/in/ashishrajan/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=idesaster-ai-ide-vulnerabilities-turn-developer-tools-into-an-enterprise-attack-surface" target="_blank" rel="noopener noreferrer nofollow">Ashish Rajan</a></b> - CISO | Co-Host <a class="link" href="https://open.spotify.com/show/3nV4eijfzdHKIvDOaycVII?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=idesaster-ai-ide-vulnerabilities-turn-developer-tools-into-an-enterprise-attack-surface" target="_blank" rel="noopener noreferrer nofollow">AI Security Podcast</a> , Host of <a class="link" href="https://www.cloudsecuritypodcast.tv/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=idesaster-ai-ide-vulnerabilities-turn-developer-tools-into-an-enterprise-attack-surface" target="_blank" rel="noopener noreferrer nofollow">Cloud Security Podcast</a></p></li></ul><h2 class="heading" style="text-align:left;" id="definitions-and-core-concepts"><b>Definitions and Core Concepts 📚</b></h2><p class="paragraph" style="text-align:left;">Before diving into our insights, let&#39;s clarify some key terms:</p><ul><li><p class="paragraph" style="text-align:left;"><b>Cyber Resilience vs. Disaster Recovery:</b> Cyber resilience is the ability to recover when data and identity systems are inherently mistrusted due to attacker compromise. Disaster recovery assumes data and identity remain in a &quot;trusted zone&quot; and focuses on business continuity during outages.</p></li><li><p class="paragraph" style="text-align:left;"><b>Operational Recovery:</b> Restoration from accidental changes or errors where systems remain in a trusted state requires rewinding to a known good point in time.</p></li><li><p class="paragraph" style="text-align:left;"><b>Forest-Level Recovery:</b> In Active Directory environments, the process of recovering an entire domain forest structure, typically required when attackers have compromised the root domain or made extensive changes to the directory structure.</p></li><li><p class="paragraph" style="text-align:left;"><b>Minimum Viable Company (MVC):</b> The critical subset of systems and data required to maintain essential business operations during a major cyber incident the foundation of effective cyber recovery planning.</p></li><li><p class="paragraph" style="text-align:left;"><b>Survivable Backups:</b> Backup copies that are immutable, air-gapped, or otherwise protected from tampering by attackers who have gained privileged access to production environments.</p></li></ul><hr class="content_break"><p class="paragraph" style="text-align:center;">This week&#39;s issue is sponsored by <a class="link" href="https://www.aisecuritypodcast.com/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=idesaster-ai-ide-vulnerabilities-turn-developer-tools-into-an-enterprise-attack-surface" target="_blank" rel="noopener noreferrer nofollow"><b>AI Security Podcast</b></a><b> </b></p><p class="paragraph" style="text-align:center;">In-depth practitioner discussions on Enterprise AI risk, governance, and security with guests including AI Bug Bounty Hunters, CISOs from Foundational Models & more.</p><hr class="content_break"><h2 class="heading" style="text-align:left;" id="our-insights-from-this-practitioner">💡<b>Our Insights from this Practitioner 🔍</b></h2><h3 class="heading" style="text-align:left;" id="why-backups-arent-enough-identity-r"><b>Why Backups Aren&#39;t Enough & Identity Recovery is Key against Ransomware </b><b>(</b><a class="link" href="https://www.cloudsecuritypodcast.tv/videos/why-backups-arent-enough-identity-recovery-is-key-against-ransomware?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=idesaster-ai-ide-vulnerabilities-turn-developer-tools-into-an-enterprise-attack-surface" target="_blank" rel="noopener noreferrer nofollow"><b>Full Episode here</b></a><b>)</b></h3><hr class="content_break"><h4 class="heading" style="text-align:left;" id="the-fatal-flaw-in-modern-backup-str"><b>The Fatal Flaw in Modern Backup Strategies</b></h4><p class="paragraph" style="text-align:left;">When organizations say &quot;<i>we have backups</i>&quot;, they&#39;re often conflating business continuity with cyber resilience. This distinction matters enormously when an attacker is inside your environment.</p><p class="paragraph" style="text-align:left;">Matt Castriotta draws a stark line: &quot;<i>Having backup doesn&#39;t mean anything. Do you have the ability to recover? That&#39;s the key. You always have to think about it as: I don&#39;t have a backup, I have an insurance policy. I have a recovery plan. I have the ability to get my business back</i>&quot;.</p><p class="paragraph" style="text-align:left;">The challenge with traditional disaster recovery is that it assumes your data and identity remain trusted. But modern attackers operate as authenticated users within your environment. Once they&#39;re in, everything becomes suspect. Castriotta explains: &quot;<i>Once the cyber attacker gets into the environment, most likely by acting as an authorized and authenticated user on your network, at that point your data and your identity are inherently mistrusted. Now I have to figure out what did they impact and how do I get back to a good clean point</i>&quot;.</p><p class="paragraph" style="text-align:left;">This is why cloud-native continuity features like S3 versioning and cross-region replication create false confidence. Castriotta cautions: &quot;<i>Just because you replicate your data, if your data&#39;s impacted in your primary region, the replication is replicating that impact to the secondary region. Your secondary regions are now impacted too</i>&quot;.</p><p class="paragraph" style="text-align:left;"><b>Practical Application:</b> Audit your current backup strategy by asking three questions:</p><ol start="1"><li><p class="paragraph" style="text-align:left;">Can you identify which backup copy is &quot;clean&quot; if your production systems are compromised?</p></li><li><p class="paragraph" style="text-align:left;">Do your backups exist outside the administrative domain of your production environment?</p></li><li><p class="paragraph" style="text-align:left;">Have you tested recovery scenarios where both production data AND identity systems are assumed compromised?</p></li></ol><p class="paragraph" style="text-align:left;">If you answered &quot;no&quot; to any of these, you have continuity but not resilience.</p><p class="paragraph" style="text-align:left;"><b>Identity: The New Perimeter That No One Is Backing Up</b></p><p class="paragraph" style="text-align:left;">The most striking insight from Castriotta&#39;s experience is how few organizations treat identity as a recoverable data source. &quot;<i>A lot of times conversations around cybersecurity resilience are framed around backup recovery, but not really around identity</i>&quot;, he notes.</p><p class="paragraph" style="text-align:left;">This oversight is particularly dangerous because identity systems enable everything else. As Castriotta puts it: &quot;<i>Identity is the new perimeter. If identity&#39;s down, everything&#39;s down. You have no ability to access anything. Your identity system is ground zero</i>&quot;.</p><p class="paragraph" style="text-align:left;">Attackers understand this better than defenders. They infiltrate environments through compromised identities, escalate privileges through misconfigured IAM roles, and move laterally across accounts and regions. The identity layer is both their entry point and their highway through your environment.</p><p class="paragraph" style="text-align:left;">Yet when organizations plan cyber recovery, identity is often an afterthought. Castriotta observes: &quot;<i>There are businesses that were built solely on just protecting identity systems Active Directory backup solutions. So yes, absolutely, your identity system needs protection not only from an operational error... but also from the fact that an attacker could make many modifications to your identity system to facilitate their lateral movement</i>&quot;.</p><p class="paragraph" style="text-align:left;"><b>Practical Application:</b> Evaluate your identity recovery posture:</p><p class="paragraph" style="text-align:left;"><b>For On-Premises Environments:</b></p><ul><li><p class="paragraph" style="text-align:left;">Do you have the ability to perform forest-level Active Directory recovery?</p></li><li><p class="paragraph" style="text-align:left;">Can you recover domain controllers to known-good states within your RTO?</p></li><li><p class="paragraph" style="text-align:left;">Are your AD backups stored outside the domain they&#39;re backing up?</p></li></ul><p class="paragraph" style="text-align:left;"><b>For Cloud Environments:</b></p><ul><li><p class="paragraph" style="text-align:left;">Are you backing up IAM policies, roles, and trust relationships?</p></li><li><p class="paragraph" style="text-align:left;">Can you detect and revert unauthorized privilege escalations?</p></li><li><p class="paragraph" style="text-align:left;">Do you have offline copies of service principal credentials?</p></li></ul><p class="paragraph" style="text-align:left;"><b>For Hybrid Environments:</b></p><ul><li><p class="paragraph" style="text-align:left;">Can you recover Entra ID (Azure AD) configurations independently of on-premises AD?</p></li><li><p class="paragraph" style="text-align:left;">Are conditional access policies backed up and version-controlled?</p></li><li><p class="paragraph" style="text-align:left;">Can you rebuild federation trusts if both sides are compromised?</p></li></ul><p class="paragraph" style="text-align:left;">Castriotta emphasizes that identity recovery must come first: &quot;Your identity system is ground zero. You need the ability to bring that back first before you bring back anything else&quot;.</p><p class="paragraph" style="text-align:left;"><b>The Assumed Breach Mindset: Gaming Out Total Compromise</b></p><p class="paragraph" style="text-align:left;">Organizations that succeed in cyber resilience share a common trait: they&#39;ve adopted an &quot;assumed breach&quot; mindset and actually practiced large-scale recovery.</p><p class="paragraph" style="text-align:left;">Castriotta explains the difference: &quot;<i>The organizations that really succeed are the ones that have already gone beyond [perimeter security]. They&#39;ve adopted this assumed breach mindset and they&#39;ve gamed out the process of what a large-scale cyber recovery would look like</i>&quot;.</p><p class="paragraph" style="text-align:left;">This isn&#39;t a theoretical exercise. &quot;<i>When I say game it out, I mean they&#39;ve done tabletop exercises with their security organization, with security teams that they partner with. IT and security are in lockstep with each other, and security has a vested interest in recovery—which security doesn&#39;t always have a vested interest in recovery in organizations</i>&quot;.</p><p class="paragraph" style="text-align:left;">The concept of &quot;<i>Minimum Viable Company</i>&quot; becomes critical here. Organizations need to identify: &quot;What comprises my minimum viable company and what would it take for me to bring that back if assuming everything was impacted.&quot;</p><p class="paragraph" style="text-align:left;">This requires brutal honesty about dependencies. Castriotta notes the first step: &quot;<i>The ability to know the assets that you have and the RTO expectations for each of the applications you&#39;re running in your environment. You&#39;d be amazed at how many customers I talk to that don&#39;t have a CMDB and are not inventorying exactly what they have. Shadow IT is still a thing. Untracked buckets are still a thing</i>&quot;.</p><p class="paragraph" style="text-align:left;"><b>Practical Application:</b> Conduct a Minimum Viable Company exercise:</p><ol start="1"><li><p class="paragraph" style="text-align:left;"><b>Define Core Business Functions:</b> What are the 3-5 capabilities required to keep your business operating at minimal viability? (For many organizations: customer-facing services, payment processing, internal communications, identity/access systems)<br></p></li><li><p class="paragraph" style="text-align:left;"><b>Map Technical Dependencies:</b> For each core function, document:<br></p><ul><li><p class="paragraph" style="text-align:left;">Primary applications and their data sources</p></li><li><p class="paragraph" style="text-align:left;">Identity/authentication requirements</p></li><li><p class="paragraph" style="text-align:left;">Network/connectivity dependencies</p></li><li><p class="paragraph" style="text-align:left;">Third-party service dependencies</p></li><li><p class="paragraph" style="text-align:left;">Regulatory/compliance considerations</p></li></ul></li></ol><p class="paragraph" style="text-align:left;"><b>Establish Recovery Priorities:</b> Assign tiers to applications:</p><ol start="1"><li><p class="paragraph" style="text-align:left;">Tier 0: Identity systems and core infrastructure</p></li><li><p class="paragraph" style="text-align:left;">Tier 1: Revenue-generating applications (bring back within hours)</p></li><li><p class="paragraph" style="text-align:left;">Tier 2: Business-critical applications (bring back within days)</p></li><li><p class="paragraph" style="text-align:left;">Tier 3-4: Important but non-critical applications</p></li></ol><ol start="1"><li><p class="paragraph" style="text-align:left;"><b>Test the Recovery Runbook:</b> Execute tabletop exercises where IT and security jointly practice recovery scenarios. Key question: &quot;If we assume our production environment and identity systems are fully compromised, how do we rebuild from backups while ensuring we&#39;re not restoring malicious changes?&quot;<br></p></li><li><p class="paragraph" style="text-align:left;">Organizations that skip this planning discover painful truths during actual incidents—when time pressure and stress make clear thinking nearly impossible. </p></li></ol><p class="paragraph" style="text-align:left;"><b>The AI Agent Challenge: Recovery in the Age of Autonomous Actions</b></p><p class="paragraph" style="text-align:left;">The emergence of AI agents introduces a new dimension to cyber resilience that traditional backup strategies never contemplated. As Castriotta observes: &quot;AI has really opened people&#39;s eyes to the fact that the data is the gold. That is your crown jewels. And then access to it and how you facilitate that access.&quot;</p><p class="paragraph" style="text-align:left;">The challenge intensifies when AI agents act autonomously. &quot;<i>We&#39;re gonna need to remove the human in the loop. The human in the loop right now is what&#39;s getting in the way</i>&quot;, Castriotta explains. &quot;<i>When that does decrease and there&#39;s less humans in the loop, you need the ability to understand what that agent did, and if that agent did something erroneous, you need the ability to be able to rewind that back</i>&quot;.</p><p class="paragraph" style="text-align:left;">This creates unprecedented visibility and recovery requirements. Unlike human operators whose actions can be logged and audited through established patterns, AI agents may make thousands of decisions per hour across multiple systems. The PromptPwnd and IDEsaster vulnerabilities this week demonstrate how AI agents can be manipulated to exfiltrate secrets and execute code essentially becoming insider threats that operate at machine speed.</p><p class="paragraph" style="text-align:left;">Castriotta emphasizes that organizations are creating &quot;<i>overly permissive non-human identities... because AI needs access to all the data</i>&quot;. These privileged service accounts become attractive targets and potential blast radius amplifiers.</p><p class="paragraph" style="text-align:left;"><b>Practical Application:</b> Implement AI agent oversight and recovery capabilities:</p><p class="paragraph" style="text-align:left;"><b>Visibility and Auditability:</b></p><ul><li><p class="paragraph" style="text-align:left;">Deploy comprehensive logging for all AI agent actions, not just successful operations</p></li><li><p class="paragraph" style="text-align:left;">Track which data sources agents access and what changes they make</p></li><li><p class="paragraph" style="text-align:left;">Monitor for unusual patterns in agent behavior (e.g., accessing data outside normal scope)</p></li><li><p class="paragraph" style="text-align:left;">Maintain chain-of-custody logs showing which prompts led to which actions</p></li></ul><p class="paragraph" style="text-align:left;"><b>Access Controls:</b></p><ul><li><p class="paragraph" style="text-align:left;">Apply least-privilege principles to AI agent service accounts</p></li><li><p class="paragraph" style="text-align:left;">Use time-limited credentials where possible (rotate frequently)</p></li><li><p class="paragraph" style="text-align:left;">Implement approval workflows for high-risk agent operations</p></li><li><p class="paragraph" style="text-align:left;">Segment agent access by function don&#39;t give one agent access to everything</p></li></ul><p class="paragraph" style="text-align:left;"><b>Recovery Mechanisms:</b></p><ul><li><p class="paragraph" style="text-align:left;">Ensure you can identify &quot;pre-agent&quot; backup snapshots for critical systems</p></li><li><p class="paragraph" style="text-align:left;">Test recovery scenarios where you need to revert agent-made changes</p></li><li><p class="paragraph" style="text-align:left;">Document the process for disabling compromised agents and revoking their access</p></li><li><p class="paragraph" style="text-align:left;">Maintain offline copies of configurations that agents might modify</p></li></ul><p class="paragraph" style="text-align:left;">Rubrik&#39;s upcoming Agent Cloud platform addresses this gap by providing visibility into agent operations and the ability to rewind changes recognizing that AI agents represent both productivity opportunities and new threat vectors that require specific resilience strategies.</p><p class="paragraph" style="text-align:left;"><b>Multi-Cloud Recovery: Beyond Checkbox Compliance</b></p><p class="paragraph" style="text-align:left;">The DORA regulations in the European Union are forcing financial services organizations to confront multi-cloud recovery realities. But as Castriotta notes, many are approaching this as compliance theater rather than genuine resilience.</p><p class="paragraph" style="text-align:left;">&quot;<i>Right now what I&#39;m seeing is that a lot of folks are treating DORA as sort of a checkbox. I&#39;m just gonna make sure that I copy my backups to Azure. I&#39;m not gonna make sure they&#39;re in a format that Azure can even understand, nor am I even gonna ever test a recovery back into something in Azure</i>&quot;.</p><p class="paragraph" style="text-align:left;">This approach fails on multiple levels. First, egress costs make cross-cloud data movement expensive at scale. Second, and more critically, AWS and Azure are fundamentally different: &quot;VMs that are spun up in AWS look nothing like VMs spun up in Azure. How do you do that conversion and do it on the fly in a way where I can build my application on the other side cleanly?&quot;</p><p class="paragraph" style="text-align:left;">The recent AWS outages underscore why this matters. When the East-1 region went down, it took Amazon an entire day to restore capacity. As Castriotta points out: &quot;Even the hyperscalers struggle with the complexity of what they&#39;ve built. If even the hyperscalers struggle with that, our customers are struggling with the complexity of what they&#39;ve built too. They need a recovery plan that can get them back quickly.&quot;</p><p class="paragraph" style="text-align:left;"><b>Practical Application:</b> Build genuine multi-cloud resilience:</p><p class="paragraph" style="text-align:left;"><b>Assessment Phase:</b></p><ol start="1"><li><p class="paragraph" style="text-align:left;">Identify applications where multi-cloud recovery provides genuine value (typically: tier-0/tier-1 applications, regulatory requirements, geopolitical risk concerns)</p></li><li><p class="paragraph" style="text-align:left;">Calculate the total cost of true multi-cloud recovery (egress, storage, conversion tools, testing)</p></li><li><p class="paragraph" style="text-align:left;">Determine if the investment justifies the risk mitigation</p></li></ol><p class="paragraph" style="text-align:left;"><b>Implementation Phase:</b></p><ol start="1"><li><p class="paragraph" style="text-align:left;">Test recovery to alternate clouds quarterly, not just backing up to them</p></li><li><p class="paragraph" style="text-align:left;">Maintain runbooks for the conversion process (don&#39;t assume hyperscaler migration tools will work during an incident)</p></li><li><p class="paragraph" style="text-align:left;">Pre-position networking, identity, and security configurations in the alternate cloud</p></li><li><p class="paragraph" style="text-align:left;">Ensure your team has actual hands-on experience with both platforms</p></li></ol><p class="paragraph" style="text-align:left;"><b>Reality Check:</b> For most organizations, multi-cloud recovery for all workloads isn&#39;t economically viable. Focus on:</p><ul><li><p class="paragraph" style="text-align:left;">Critical applications that justify the investment</p></li><li><p class="paragraph" style="text-align:left;">Data archives that can be stored cost-effectively across clouds</p></li><li><p class="paragraph" style="text-align:left;">Disaster recovery scenarios where you accept longer RTOs for alternate-cloud recovery</p></li></ul><p class="paragraph" style="text-align:left;">The key insight: multi-cloud backup is easy; multi-cloud recovery is hard. Don&#39;t confuse the two.</p><hr class="content_break"><h2 class="heading" style="text-align:left;" id="related-podcast-episodes"><b>RELATED RESOURCES 🎧</b></h2><h3 class="heading" style="text-align:left;" id="cyber-resilience-and-identity-prote"><b>Cyber Resilience and Identity Protection:</b></h3><ul><li><p class="paragraph" style="text-align:left;"><a class="link" href="https://www.nist.gov/cyberframework?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=idesaster-ai-ide-vulnerabilities-turn-developer-tools-into-an-enterprise-attack-surface" target="_blank" rel="noopener noreferrer nofollow">NIST Cybersecurity Framework 2.0 - Recover Function</a> - Updated framework with enhanced recovery guidance</p></li><li><p class="paragraph" style="text-align:left;"><a class="link" href="https://learn.microsoft.com/en-us/windows-server/identity/ad-ds/manage/ad-forest-recovery-guide?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=idesaster-ai-ide-vulnerabilities-turn-developer-tools-into-an-enterprise-attack-surface" target="_blank" rel="noopener noreferrer nofollow">Microsoft Active Directory Forest Recovery Guide</a> - Technical runbooks for AD recovery</p></li><li><p class="paragraph" style="text-align:left;"><a class="link" href="https://www.cisa.gov/cyber-resilience-guide?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=idesaster-ai-ide-vulnerabilities-turn-developer-tools-into-an-enterprise-attack-surface" target="_blank" rel="noopener noreferrer nofollow">CISA Cyber Resilience Guide</a> - Guidance for critical infrastructure</p></li></ul><h3 class="heading" style="text-align:left;" id="incident-response-and-recovery"><b>Incident Response and Recovery:</b></h3><ul><li><p class="paragraph" style="text-align:left;"><a class="link" href="https://www.sans.org/white-papers/incident-handlers-handbook/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=idesaster-ai-ide-vulnerabilities-turn-developer-tools-into-an-enterprise-attack-surface" target="_blank" rel="noopener noreferrer nofollow">SANS Incident Handler&#39;s Handbook</a> - Practical incident response guidance</p></li><li><p class="paragraph" style="text-align:left;"><a class="link" href="https://docs.aws.amazon.com/whitepapers/latest/disaster-recovery-workloads-on-aws/disaster-recovery-workloads-on-aws.html?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=idesaster-ai-ide-vulnerabilities-turn-developer-tools-into-an-enterprise-attack-surface" target="_blank" rel="noopener noreferrer nofollow">AWS Disaster Recovery Whitepaper</a> - Cloud-specific DR strategies</p></li><li><p class="paragraph" style="text-align:left;"><a class="link" href="https://www.rubrik.com/en/resources/cyber-recovery?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=idesaster-ai-ide-vulnerabilities-turn-developer-tools-into-an-enterprise-attack-surface" target="_blank" rel="noopener noreferrer nofollow">Rubrik Cyber Recovery Guide</a> - Practical framework for cyber resilience</p></li></ul><h3 class="heading" style="text-align:left;" id="cloud-security-podcast"><b>Cloud Security Podcast</b></h3><ul><li><p class="paragraph" style="text-align:left;"><a class="link" href="https://www.cloudsecuritypodcast.tv/videos/why-backups-arent-enough-identity-recovery-is-key-against-ransomware?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=idesaster-ai-ide-vulnerabilities-turn-developer-tools-into-an-enterprise-attack-surface" target="_blank" rel="noopener noreferrer nofollow">Cloud Security Podcast – Full Episode with Matthew</a></p></li><li><p class="paragraph" style="text-align:left;">For deeper discussion on failed data lakes, AI in detection engineering, and where SIEM still fits.</p></li></ul><hr class="content_break"><h2 class="heading" style="text-align:left;" id="related-podcast-episodes"><b>Related Podcast Episodes 🎧</b></h2><div class="image"><a class="image__link" href="https://www.cloudsecuritypodcast.tv/videos/why-backups-arent-enough-identity-recovery-is-key-against-ransomware?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=idesaster-ai-ide-vulnerabilities-turn-developer-tools-into-an-enterprise-attack-surface" rel="noopener" target="_blank"><img alt="" class="image__image" style="" src="https://media.beehiiv.com/cdn-cgi/image/fit=scale-down,format=auto,onerror=redirect,quality=80/uploads/asset/file/6769709c-6419-4052-abea-9537debdb5f4/Matt_Castriotta-YT.jpg?t=1765981564"/></a><div class="image__source"><a class="image__source_link" href="https://www.cloudsecuritypodcast.tv/videos/how-to-secure-your-ai-agents-a-cisos-journey?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=idesaster-ai-ide-vulnerabilities-turn-developer-tools-into-an-enterprise-attack-surface" rel="noopener" target="_blank"><span class="image__source_text"><p>How to secure your AI Agents: A CISOs Journey</p></span></a></div></div><h3 class="heading" style="text-align:left;" id="question-for-you-reply-to-this-emai">Question for you? (Reply to this email)</h3><p class="paragraph" style="text-align:left;"> 🤖 <b>Should companies work on a AI Backup Plan?</b><br></p><p class="paragraph" style="text-align:left;">Next week, we&#39;ll explore another critical aspect of cloud security. Stay tuned!</p><hr class="content_break"><p class="paragraph" style="text-align:left;">📬 Want weekly expert takes on AI & Cloud Security? [<a class="link" href="https://www.cloudsecuritynewsletter.com/subscribe?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=idesaster-ai-ide-vulnerabilities-turn-developer-tools-into-an-enterprise-attack-surface" target="_blank" rel="noopener noreferrer nofollow">Subscribe here</a>]”</p><hr class="content_break"><p class="paragraph" style="text-align:left;"><span style="color:rgb(238, 40, 60);"><b><a class="link" href="mailto:info@cloudsecuritypodcast.tv" target="_blank" rel="noopener noreferrer nofollow">We would love to hear from you</a></b></span>📢 for a feature or topic request or if you would like to sponsor an edition of Cloud Security Newsletter. </p><p class="paragraph" style="text-align:left;">Thank you for continuing to subscribe and Welcome to the new members in tis newsletter community💙</p><p class="paragraph" style="text-align:start;">Peace!</p><p class="paragraph" style="text-align:start;"><a class="link" href="https://www.linkedin.com/in/shilpi-bhattacharjee/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=idesaster-ai-ide-vulnerabilities-turn-developer-tools-into-an-enterprise-attack-surface" target="_blank" rel="noopener noreferrer nofollow">Shilpi Bhattacharjee</a></p><div class="image"><a class="image__link" href="https://www.cloudsecuritypodcast.tv/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=idesaster-ai-ide-vulnerabilities-turn-developer-tools-into-an-enterprise-attack-surface" rel="noopener" target="_blank"><img alt="" class="image__image" style="" src="https://media.beehiiv.com/cdn-cgi/image/fit=scale-down,format=auto,onerror=redirect,quality=80/uploads/asset/file/dc094a1c-678a-43e0-adc9-1bee6c3499e2/CSP_Logo_Blue_ScreenRes_3000x3000_v2.jpg"/></a></div><div class="button" style="text-align:left;"><a target="_blank" rel="noopener nofollow noreferrer" class="button__link" style="" href="{{rp_referral_hub_url}}"><span class="button__text" style=""> Share the newsletter </span></a></div><p class="paragraph" style="text-align:left;">Was this forwarded to you? You can <a class="link" href="https://www.cloudsecuritynewsletter.com/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=idesaster-ai-ide-vulnerabilities-turn-developer-tools-into-an-enterprise-attack-surface" target="_blank" rel="noopener noreferrer nofollow">Sign up here</a>, to join our growing readership.</p><p class="paragraph" style="text-align:left;">Want to <b>sponsor</b> the next newsletter edition! <a class="link" href="mailto:info@cloudsecuritypodcast.tv" target="_blank" rel="noopener noreferrer nofollow">Lets make it happen </a></p><p class="paragraph" style="text-align:left;">Have you joined our FREE <b>Monthly</b> <a class="link" href="https://www.cloudsecuritybootcamp.com/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=idesaster-ai-ide-vulnerabilities-turn-developer-tools-into-an-enterprise-attack-surface" target="_blank" rel="noopener noreferrer nofollow">Cloud Security Bootcamp</a> yet?</p><p class="paragraph" style="text-align:left;">checkout our <b>sister podcast</b> <a class="link" href="https://www.youtube.com/@AISecurityPodcast?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=idesaster-ai-ide-vulnerabilities-turn-developer-tools-into-an-enterprise-attack-surface" target="_blank" rel="noopener noreferrer nofollow">AI Security Podcast</a></p></div><div class='beehiiv__footer'><br class='beehiiv__footer__break'><hr class='beehiiv__footer__line'><a target="_blank" class="beehiiv__footer_link" style="text-align: center;" href="https://www.beehiiv.com/?utm_campaign=8e7520da-105c-4265-a2aa-90a0b9359ec9&utm_medium=post_rss&utm_source=cloud_security_newsletter">Powered by beehiiv</a></div></div>
  ]]></content:encoded>
</item>

      <item>
  <title>🚨 Zero-Day Exploited in Hours + AI Agent Risk Lessons from CISO of Sendbird</title>
  <description>This week&#39;s newsletter covers critical React Server Components vulnerability (CVE-2025-55182) under active exploitation by Chinese APT groups, record-breaking DDoS attacks, and exclusive insights from Sendbird CSO Yash Kosaraju on securing AI agents through multi-layered trust frameworks, enterprise LLM safeguards, and cultural transformation in the age of autonomous systems</description>
      <enclosure url="https://media.beehiiv.com/cdn-cgi/image/fit=scale-down,format=auto,onerror=redirect,quality=80/uploads/asset/file/83948e42-0204-4c7a-89f7-7cd6e72bc4b5/Screenshot_2025-12-11_at_12.16.31_AM.png" length="856153" type="image/png"/>
  <link>https://www.cloudsecuritynewsletter.com/p/react2shell-zero-day-ai-agent-security-sendbird</link>
  <guid isPermaLink="true">https://www.cloudsecuritynewsletter.com/p/react2shell-zero-day-ai-agent-security-sendbird</guid>
  <pubDate>Thu, 11 Dec 2025 00:17:42 +0000</pubDate>
  <atom:published>2025-12-11T00:17:42Z</atom:published>
    <dc:creator>Ashish Rajan</dc:creator>
  <content:encoded><![CDATA[
    <div class='beehiiv'><style>
  .bh__table, .bh__table_header, .bh__table_cell { border: 1px solid #C0C0C0; }
  .bh__table_cell { padding: 5px; background-color: #FFFFFF; }
  .bh__table_cell p { color: #2D2D2D; font-family: 'Helvetica',Arial,sans-serif !important; overflow-wrap: break-word; }
  .bh__table_header { padding: 5px; background-color:#F1F1F1; }
  .bh__table_header p { color: #2A2A2A; font-family:'Trebuchet MS','Lucida Grande',Tahoma,sans-serif !important; overflow-wrap: break-word; }
</style><div class='beehiiv__body'><h2 class="heading" style="text-align:left;" id="hello-from-the-cloudverse"><span style="background-color:#28EEDA;"><b>Hello from the Cloud-verse!</b></span></h2><p class="paragraph" style="text-align:left;">This week’s Cloud Security Newsletter Topic we cover - <b>How to secure your AI Agents: A CISOs Journey </b><a class="link" href="#cloud-security-topic-of-the-week" rel="noopener noreferrer nofollow">(continue reading)</a> </p><div class="button" style="text-align:center;"><a target="_blank" rel="noopener nofollow noreferrer" class="button__link" style="" href="https://links.cloudsecuritypodcast.tv/drata-dec-2025?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=zero-day-exploited-in-hours-ai-agent-risk-lessons-from-ciso-of-sendbird"><span class="button__text" style=""> Check out this week’s Sponsor: Drata </span></a></div><hr class="content_break"><div class="image"><a class="image__link" href="https://www.cloudsecuritypodcast.tv/videos/the-terraform-lift-shift-playbook-migrating-200-apps-to-multi-cloud-with-terraform?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=zero-day-exploited-in-hours-ai-agent-risk-lessons-from-ciso-of-sendbird" rel="noopener" target="_blank"><img alt="" class="image__image" style="" src="https://media.beehiiv.com/cdn-cgi/image/fit=scale-down,format=auto,onerror=redirect,quality=80/uploads/asset/file/83948e42-0204-4c7a-89f7-7cd6e72bc4b5/Screenshot_2025-12-11_at_12.16.31_AM.png?t=1765412207"/></a><div class="image__source"><span class="image__source_text"><p>This image was generated by AI. It&#39;s still experimental, so it might not be a perfect match!</p></span></div></div><p class="paragraph" style="text-align:left;"><b>Incase, this is your 1st Cloud Security Newsletter! You are in good company! </b><br>You are reading this issue along with your friends and colleagues from companies like <i>Netflix</i>, Citi, <i>JP Morgan, Linkedin, Reddit, Github, Gitlab, CapitalOne, Robinhood, HSBC, British Airways, Airbnb, Block, Booking Inc & more</i> who subscribe to this newsletter, who like you want to learn what’s new with Cloud Security each week from their industry peers like many others who listen to <a class="link" href="https://open.spotify.com/show/6LZgeh4GecRYPc0WrwMB4I?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=zero-day-exploited-in-hours-ai-agent-risk-lessons-from-ciso-of-sendbird" target="_blank" rel="noopener noreferrer nofollow">Cloud Security Podcast</a> & <a class="link" href="https://open.spotify.com/show/3nV4eijfzdHKIvDOaycVII?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=zero-day-exploited-in-hours-ai-agent-risk-lessons-from-ciso-of-sendbird" target="_blank" rel="noopener noreferrer nofollow">AI Security Podcast</a> every week.</p><hr class="content_break"><p class="paragraph" style="text-align:left;">Welcome to this week’s Cloud Security Newsletter</p><p class="paragraph" style="text-align:left;">As enterprises race to integrate AI agents into production environments, we&#39;re witnessing a collision of traditional application security challenges with entirely new threat vectors. This week brings critical lessons from the frontlines: a React vulnerability exploited within hours of disclosure, infrastructure providers scrambling to protect customers, and practical guidance from organizations that have successfully navigated the transition from API-driven platforms to AI-first architectures.</p><p class="paragraph" style="text-align:left;">I&#39;m joined this week by <b>Yash Kosaraju</b>, Chief Security Officer at Sendbird, who brings a unique perspective on securing AI agents at scale. Sendbird transformed from a mature chat API platform into an AI agent company practically overnight a microcosm of the transformation many enterprises are experiencing right now. Yash shares candid insights on building security programs that balance innovation velocity with enterprise-grade protection, from embedding security engineers in AI development teams to redefining what constitutes a &quot;security incident&quot; when AI agents make suboptimal decisions.<i> [</i><a class="link" href="https://www.cloudsecuritypodcast.tv/videos/how-to-secure-your-ai-agents-a-cisos-journey?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=zero-day-exploited-in-hours-ai-agent-risk-lessons-from-ciso-of-sendbird" target="_blank" rel="noopener noreferrer nofollow">Listen to the episode</a><i>]</i></p><hr class="content_break"><h2 class="heading" style="text-align:left;" id="tldr-for-busy-readers">📰 TL;DR for Busy Readers</h2><ul><li><p class="paragraph" style="text-align:left;"><b>React2Shell (CVE-2025-55182)</b>: Critical 10.0 CVSS RCE in React Server Components exploited by Chinese APT groups within hours so patch immediately</p></li><li><p class="paragraph" style="text-align:left;"><b>Zero Trust ≠ Multi-Layered Trust</b>: Yash explains why “controls fail” must be your foundational assumption.</p></li><li><p class="paragraph" style="text-align:left;"><b>AI Incidents ≠ Breaches</b>: Wrong answers, poor decisions, and unexpected agent actions need new IR playbooks.</p></li><li><p class="paragraph" style="text-align:left;"><b>LLM Contracts Matter</b>: Scrutinize training exclusions, data deletion clauses, and derivative model usage.</p></li><li><p class="paragraph" style="text-align:left;"><b>Culture &gt; Tools</b>: Security embedded in AI teams reduces Shadow AI and accelerates safe experimentation.</p></li></ul><hr class="content_break"><h2 class="heading" style="text-align:left;" id="this-weeks-security-news">📰 <b>THIS WEEK&#39;S TOP 3 SECURITY HEADLINES</b></h2><h3 class="heading" style="text-align:left;" id="1-react-2-shell-critical-unauthenti"><b>1. </b><b>React2Shell: Critical Unauthenticated RCE Demands Immediate Action</b></h3><p class="paragraph" style="text-align:left;">A perfect <b>10.0 CVSS RCE</b> in React Server Components is now under <b>active exploitation</b> by Earth Lamia and Jackpot Panda. Added to CISA KEV within 48 hours.<br><br><b>Action:</b> Patch immediately. Inventory where React Server Components appear across your cloud estate.<br><br><b>Why it matters:</b> Time-to-exploit is compressing. Automated detection + patch orchestration must be in place before disclosure hits.</p><p class="paragraph" style="text-align:left;"><b>Source</b>:<a class="link" href="https://nvd.nist.gov/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=zero-day-exploited-in-hours-ai-agent-risk-lessons-from-ciso-of-sendbird" target="_blank" rel="noopener noreferrer nofollow"> NVD</a>,<a class="link" href="https://aws.amazon.com/blogs/security/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=zero-day-exploited-in-hours-ai-agent-risk-lessons-from-ciso-of-sendbird" target="_blank" rel="noopener noreferrer nofollow"> AWS Security Blog</a></p><hr class="content_break"><h3 class="heading" style="text-align:left;" id="2-cloudflare-outage-tied-to-react-2"><b>2. </b><b>Cloudflare Outage Tied to React2Shell Emergency Response</b></h3><p class="paragraph" style="text-align:left;">A 25-minute Cloudflare outage occurred during emergency firewall rule deployment.<br><br><b>Why it matters:</b> Even security fixes can cause availability impact.<br><br><b>Action:</b> Ask vendors for emergency mitigation SLAs and rollback procedures.</p><p class="paragraph" style="text-align:left;"><b>Source</b>:<a class="link" href="https://blog.cloudflare.com/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=zero-day-exploited-in-hours-ai-agent-risk-lessons-from-ciso-of-sendbird" target="_blank" rel="noopener noreferrer nofollow"> The Cloudflare Blog</a>,<a class="link" href="https://www.reuters.com/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=zero-day-exploited-in-hours-ai-agent-risk-lessons-from-ciso-of-sendbird" target="_blank" rel="noopener noreferrer nofollow"> Reuters</a></p><hr class="content_break"><h3 class="heading" style="text-align:left;" id="3-saviynt-raises-700-m-signaling-id"><b>3. </b><b>Saviynt Raises $700M, Signaling Identity Consolidation Trend</b></h3><p class="paragraph" style="text-align:left;">Identity is the new perimeter, and boards are funding accordingly.<br><br><b>Action:</b> Reevaluate identity governance maturity and JIT access controls.</p><p class="paragraph" style="text-align:left;"><b>Source</b>:<a class="link" href="https://www.wsj.com/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=zero-day-exploited-in-hours-ai-agent-risk-lessons-from-ciso-of-sendbird" target="_blank" rel="noopener noreferrer nofollow"> Wall Street Journal</a></p><hr class="content_break"><h2 class="heading" style="text-align:left;" id="cloud-security-topic-of-the-week">🎯 Cloud Security Topic of the Week: </h2><h3 class="heading" style="text-align:left;" id="building-multi-layered-trust-for-ai">Building Multi-Layered Trust for AI Agents (with Sendbird <br>CISO - Yash Kosaraju)</h3><p class="paragraph" style="text-align:left;">The transition from traditional applications to AI-powered systems isn&#39;t just a technology shift it&#39;s a fundamental reimagining of security architecture, incident response, and organizational culture. This week&#39;s conversation with Yash Kosaraju reveals how Sendbird navigated this transformation while maintaining enterprise-grade security for customers deploying AI agents that take autonomous actions in production environments.</p><hr class="content_break"><h2 class="heading" style="text-align:left;" id="featured-experts-this-week"><b>Featured Experts This Week </b>🎤</h2><ul><li><p class="paragraph" style="text-align:left;"><a class="link" href="https://www.linkedin.com/in/yashvier/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=zero-day-exploited-in-hours-ai-agent-risk-lessons-from-ciso-of-sendbird" target="_blank" rel="noopener noreferrer nofollow"><b>Yashvier Kosaraju</b></a>, Chief Security Officer at Sendbird</p></li><li><p class="paragraph" style="text-align:left;"><b><a class="link" href="https://www.linkedin.com/in/ashishrajan/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=zero-day-exploited-in-hours-ai-agent-risk-lessons-from-ciso-of-sendbird" target="_blank" rel="noopener noreferrer nofollow">Ashish Rajan</a></b> - CISO | Co-Host <a class="link" href="https://open.spotify.com/show/3nV4eijfzdHKIvDOaycVII?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=zero-day-exploited-in-hours-ai-agent-risk-lessons-from-ciso-of-sendbird" target="_blank" rel="noopener noreferrer nofollow">AI Security Podcast</a> , Host of <a class="link" href="https://www.cloudsecuritypodcast.tv/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=zero-day-exploited-in-hours-ai-agent-risk-lessons-from-ciso-of-sendbird" target="_blank" rel="noopener noreferrer nofollow">Cloud Security Podcast</a></p></li></ul><h2 class="heading" style="text-align:left;" id="definitions-and-core-concepts"><b>Definitions and Core Concepts 📚</b></h2><p class="paragraph" style="text-align:left;">Before diving into our insights, let&#39;s clarify some key terms:</p><ul><li><p class="paragraph" style="text-align:left;"><b>AI Agent</b>: An autonomous system that uses large language models (LLMs) to understand context, make decisions, and execute actions without constant human oversight. Unlike traditional chatbots that respond to queries, AI agents can modify backend systems, process transactions, and orchestrate complex workflows.</p></li><li><p class="paragraph" style="text-align:left;"><b>Multi-Layered Security/Trust</b>: A defense-in-depth approach assuming that individual security controls will eventually fail. Rather than relying on a single &quot;zero trust&quot; perimeter, organizations implement overlapping controls across device trust, identity verification, network access, application authorization, and data protection.</p></li><li><p class="paragraph" style="text-align:left;"><b>RAG (Retrieval-Augmented Generation)</b>: A technique where AI models query external knowledge bases before generating responses, improving accuracy and reducing hallucinations by grounding outputs in verified information.</p></li><li><p class="paragraph" style="text-align:left;"><b>Context Injection/Prompt Injection</b>: Attack techniques where adversaries manipulate AI system inputs to override security constraints, extract sensitive data, or cause unintended actions. Similar to SQL injection but exploiting LLM processing logic rather than database queries.</p></li></ul><hr class="content_break"><p class="paragraph" style="text-align:center;">This week&#39;s issue is sponsored by <b><a class="link" href="https://links.cloudsecuritypodcast.tv/drata-dec-2025?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=zero-day-exploited-in-hours-ai-agent-risk-lessons-from-ciso-of-sendbird" target="_blank" rel="noopener noreferrer nofollow">Drata</a></b></p><p class="paragraph" style="text-align:center;"><span style="color:rgb(0, 0, 0);font-family:Arial, sans-serif;font-size:11pt;">Security teams shouldn’t be buried in manual evidence collection. </span></p><p class="paragraph" style="text-align:left;"><span style="color:rgb(0, 0, 0);font-family:Arial, sans-serif;font-size:11pt;">Drata automates compliance end-to-end while providing unified visibility across cloud workloads, identities, and configurations. </span></p><p class="paragraph" style="text-align:left;"><span style="color:rgb(0, 0, 0);font-family:Arial, sans-serif;font-size:11pt;">Teams use Drata to cut audit prep from weeks to hours, accelerate security reviews, and reinforce DevSecOps pipelines with real-time controls monitoring. </span></p><p class="paragraph" style="text-align:left;"><span style="color:rgb(0, 0, 0);font-family:Arial, sans-serif;font-size:11pt;">If you&#39;re scaling cloud infrastructure and need a smarter path to continuous compliance, Drata is built for you.</span></p><p class="paragraph" style="text-align:center;"><a class="link" href="https://links.cloudsecuritypodcast.tv/drata-dec-2025?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=zero-day-exploited-in-hours-ai-agent-risk-lessons-from-ciso-of-sendbird" target="_blank" rel="noopener noreferrer nofollow">Book Demo with Drata</a></p><hr class="content_break"><h2 class="heading" style="text-align:left;" id="our-insights-from-this-practitioner">💡<b>Our Insights from this Practitioner 🔍</b></h2><h3 class="heading" style="text-align:left;" id="how-to-secure-your-ai-agents-a-cis-"><b>How to secure your AI Agents: A CISOs Journey </b><b>(</b><a class="link" href="https://www.cloudsecuritypodcast.tv/videos/how-to-secure-your-ai-agents-a-cisos-journey?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=zero-day-exploited-in-hours-ai-agent-risk-lessons-from-ciso-of-sendbird" target="_blank" rel="noopener noreferrer nofollow"><b>Full Episode here</b></a><b>)</b></h3><hr class="content_break"><h4 class="heading" style="text-align:left;" id="the-reality-check-ai-security-isnt-"><b>The Reality Check: AI Security Isn&#39;t Just Application Security 2.0</b></h4><p class="paragraph" style="text-align:left;">When Sendbird pivoted from a mature chat API platform to an AI agent company, the security team faced a challenge that extends far beyond securing APIs and databases. <i>&quot;You go from fine, we are building an application, go test for OWASP Top 10, go test for XSS, SQL injection,&quot;</i> Yash explains. <i>&quot;That changes to a lot of different things. The attack paths are different. The types of security issues are different. The data security models are different.&quot;</i></p><p class="paragraph" style="text-align:left;">This transformation mirrors what many enterprises are experiencing as they integrate AI capabilities. The comfortable world of known vulnerabilities and established testing frameworks gives way to questions about LLM hallucinations, context injection attacks, and data leakage through training corpora. Traditional AppSec engineers suddenly find themselves securing systems they don&#39;t fully understand a humbling experience that Yash addresses directly with his team.</p><p class="paragraph" style="text-align:left;"><b>The Hidden Stress on Security Teams</b>: One of the most candid moments in our conversation came when Yash shared how a senior engineer approached him: <i>&quot;We are building this new AI stuff. We have to review and secure it. I don&#39;t know if I can do a good enough job with this.&quot; </i>This vulnerability reflects a broader industry challenge that isn&#39;t discussed enough the expectation that experienced security professionals should instantly become AI security experts simply because AI is built on software.</p><p class="paragraph" style="text-align:left;">Yash&#39;s response reveals his leadership philosophy: <i>&quot;Let&#39;s acknowledge it&#39;s new technology. We will make mistakes. It&#39;s okay to do that. We&#39;ll eventually catch up to it. A staff AppSec or principal engineer is not automatically a staff AI security engineer.&quot; </i>This psychological safety creates space for teams to learn, experiment, and build expertise without the paralysis that comes from fear of failure.</p><hr class="content_break"><h4 class="heading" style="text-align:left;" id="multi-layered-trust-beyond-zero-tru"><b>Multi-Layered Trust: Beyond Zero Trust Marketing</b></h4><p class="paragraph" style="text-align:left;">When I asked Yash about his approach to zero trust, his response cut through the industry hype: <i>&quot;A true multi-layered approach doesn&#39;t use flashy marketing terms and call it zero trust. It&#39;s plain multi-layers of security.&quot;</i></p><p class="paragraph" style="text-align:left;">The distinction matters. Zero trust has become such an overloaded term that it risks meaning nothing a checkbox on a compliance form rather than a meaningful security architecture. Yash&#39;s multi-layered trust framework at Sendbird is more pragmatic: assume every control will eventually fail, and design your architecture accordingly.</p><p class="paragraph" style="text-align:left;"><b>The Login Flow That Actually Works</b>: Yash walks through Sendbird&#39;s employee authentication flow as a concrete example. When an engineer attempts to access GitHub, the system verifies:</p><ol start="1"><li><p class="paragraph" style="text-align:left;">The request originates from a company-issued device</p></li><li><p class="paragraph" style="text-align:left;">The Chrome browser is enrolled in Google Enterprise</p></li><li><p class="paragraph" style="text-align:left;">Okta performs device health checks via CrowdStrike integration</p></li><li><p class="paragraph" style="text-align:left;">Multi-factor authentication using FIDO2/WebAuthn (YubiKey, fingerprint, or Okta Verify)</p></li><li><p class="paragraph" style="text-align:left;">Password verification for users accessing sensitive systems in certain jurisdictions</p></li></ol><p class="paragraph" style="text-align:left;"><i>&quot;There are four or five layers that happen in the background that you as a user wouldn&#39;t even realize,&quot;</i> Yash notes. This seamless security creates minimal friction while maintaining defense in depth if one control fails, four others remain.</p><p class="paragraph" style="text-align:left;">For cloud security architects, this approach offers a template for designing authentication flows that don&#39;t rely on a single trust decision. The key insight is that AWS IAM, GitHub Enterprise, and Azure Active Directory aren&#39;t<i> &quot;secure by default&quot; </i>they&#39;re powerful platforms with security features that must be deliberately configured and layered.</p><hr class="content_break"><h4 class="heading" style="text-align:left;" id="the-data-question-that-changes-ever"><b>The Data Question That Changes Everything</b></h4><p class="paragraph" style="text-align:left;">The most significant blind spot Yash identified in early AI adoption wasn&#39;t technical, it was contractual. &quot;The major blind spot was the notion of &#39;yes, we&#39;re using GitHub Copilot. It&#39;s an enterprise tool backed by Microsoft, so it&#39;s secure. It&#39;s okay to use,&#39;&quot; he explains.<i> &quot;But as you look at it, the different terms of service for a beta model that Copilot has released versus a GA model it&#39;s important that you look at those, look at the nuances.&quot;</i></p><p class="paragraph" style="text-align:left;">This observation should alarm enterprise security leaders. Many organizations are deploying &quot;enterprise AI&quot; tools without understanding fundamental questions:</p><ul><li><p class="paragraph" style="text-align:left;">Is the vendor using customer data to train their base LLM?</p></li><li><p class="paragraph" style="text-align:left;">What contracts exist between your AI vendor and their LLM provider?</p></li><li><p class="paragraph" style="text-align:left;">How is customer data handled during the training lifecycle?</p></li><li><p class="paragraph" style="text-align:left;">What happens to your data when you terminate the contract?</p></li></ul><p class="paragraph" style="text-align:left;"><b>The 360 Lifecycle of AI Data</b>: Yash emphasizes that data governance in AI systems requires thinking through the complete lifecycle: &quot;If and when I decide to terminate my contract, how do you delete my data? If it has been used in some LLM training context, even if that&#39;s just for my account or app that you use for training how does that work?&quot;</p><p class="paragraph" style="text-align:left;">These questions extend beyond traditional data processing agreements. LLM training creates derivative works that may contain fragments of your data in model weights. Simple deletion of source data may not remove its influence from the model. Enterprises need contractual guarantees about data isolation, training exclusions, and verifiable deletion requirements that many &quot;enterprise&quot; AI offerings don&#39;t yet provide.</p><hr class="content_break"><h4 class="heading" style="text-align:left;" id="redefining-incidents-when-ai-makes-"><b>Redefining Incidents When AI Makes Mistakes</b></h4><p class="paragraph" style="text-align:left;">Perhaps the most thought-provoking insight from our conversation concerns incident response<i>. &quot;The definition of what is an incident is also changing,&quot; </i>Yash observes. <i>&quot;When an AI agent gives a wrong answer or a suboptimal answer, how do you classify that? It is an incident of some sort. It&#39;s not a breach. This is new.&quot;</i></p><p class="paragraph" style="text-align:left;">Consider the implications: An AI customer service agent grants an unauthorized refund. A coding copilot introduces a subtle vulnerability that passes code review. An AI-powered analytics tool misinterprets data and influences a business decision. None of these are traditional security incidents, yet all represent failures with real impact.</p><p class="paragraph" style="text-align:left;"><b>The IR Team&#39;s New Challenge</b>: Yash describes the complexity facing incident responders: <i>&quot;You add this other element where a call is going into a customer&#39;s environment, and then the origination of the incident either could be in our environment or somewhere in a customer&#39;s environment with a dotted line between the two. Figuring out how that works when you&#39;re in the middle of an incident, these unknowns, even though they may be small, cause big roadblocks.&quot;</i></p><p class="paragraph" style="text-align:left;">Traditional incident response playbooks assume clear boundaries: our network, our applications, our data. AI agents that take actions across organizational boundaries blur these lines. When a Sendbird AI agent makes an API call to a customer&#39;s backend system and something goes wrong, who owns the investigation? Who has access to the necessary logs? What SLAs apply?</p><p class="paragraph" style="text-align:left;">Sendbird&#39;s approach involves close collaboration between their incident response team and product teams to build new detection capabilities and response procedures. They&#39;re defining metrics for &quot;AI incidents&quot; that sit somewhere between operational errors and security breaches a new category that the industry hasn&#39;t yet standardized.</p><hr class="content_break"><h4 class="heading" style="text-align:left;" id="the-enterprise-ai-toolkit-strategy"><b>The Enterprise AI Toolkit Strategy</b></h4><p class="paragraph" style="text-align:left;">Rather than trying to prevent AI adoption through restrictions, Sendbird provides employees with enterprise-grade AI tools: Google Gemini, ChatGPT Teams (not Enterprise. Yash is deliberate about cost-benefit tradeoffs), Claude, Cursor, and GitHub Copilot. This strategy acknowledges that employees will use AI regardless of policy the question is whether they&#39;ll use sanctioned, contractually protected tools or shadow IT solutions.</p><p class="paragraph" style="text-align:left;"><i>&quot;If you don&#39;t enable them by providing AI tools, they are going to find tools either pay out of pocket or use free versions to get the job done,&quot; </i>Yash notes. The alternative to approved tools isn&#39;t no AI usage; it&#39;s unmonitored AI usage with unknown data handling practices.</p><p class="paragraph" style="text-align:left;"><b>The Communication Layer</b>: Providing tools isn&#39;t enough. Sendbird accompanies their AI toolkit with clear communication about what&#39;s approved, what&#39;s blocked, and why. <i>&quot;We send out communications like &#39;here&#39;s why we are blocking it. Here&#39;s a reminder, these are the approved AI tools. If you want to try something out, be careful. Do not put real enterprise data.&#39;&quot;</i></p><p class="paragraph" style="text-align:left;">This approach balances enablement with responsibility. Employees understand they can experiment with new AI tools, but they&#39;re expected to make risk-conscious decisions about data handling. When organizations treat employees as partners in security rather than potential threats, they typically get better outcomes than pure lockdown approaches.</p><hr class="content_break"><p class="paragraph" style="text-align:left;"><b>Trust OS: Making AI Agent Actions Observable</b></p><p class="paragraph" style="text-align:left;">Sendbird&#39;s Trust OS represents their answer to a fundamental challenge: how do you give customers confidence in AI agents that take autonomous actions? The platform provides observability into agent decisions, human oversight capabilities, and automated testing frameworks that let customers verify agent behavior before granting expanded permissions.</p><p class="paragraph" style="text-align:left;"><i>&quot;It&#39;s assuring our customers of &#39;yes, there&#39;s an unknown AI that&#39;s performing actions, but here are all the ways you have oversight&#39;&quot; </i>Yash explains. Customers can see conversations, review actions taken, configure permitted operations, and build test cases that alert when agent behavior changes.</p><p class="paragraph" style="text-align:left;">This architecture acknowledges that trust in AI systems comes from transparency and control, not from claims about model accuracy. Enterprises deploying AI agents need to know: What is this agent doing? Can I review its decisions? Can I limit its capabilities? What happens when it makes a mistake?</p><p class="paragraph" style="text-align:left;"><b>The Testing Paradigm</b>: One of Trust OS&#39;s key features is enabling customers to build automated test cases for agent behavior. If an agent should never discuss certain topics, never access particular data types, or always escalate specific request categories, customers can codify these rules and receive alerts when they&#39;re violated. This shifts AI governance from reactive incident response to proactive behavior monitoring.</p><hr class="content_break"><p class="paragraph" style="text-align:left;"><b>The Skills Gap Nobody Talks About</b></p><p class="paragraph" style="text-align:left;">Throughout our conversation, Yash returned to a theme that deserves more attention: the stress on security professionals expected to secure AI systems without adequate preparation. &quot;<i>A staff engineer, staff AppSec or principal engineer is not automatically a staff AI security engineer,</i>&quot; he emphasizes. &quot;<i>It does put a decent amount of stress on AppSec engineers where companies are moving really fast on AI and they&#39;re expected to help secure them.</i>&quot;</p><p class="paragraph" style="text-align:left;">This gap extends beyond technical skills. Understanding LLM behavior, prompt injection attacks, and RAG implementations requires different mental models than traditional application security. The attack surface isn&#39;t just code anymore it&#39;s the interaction between code, models, prompts, context windows, and external data sources.</p><p class="paragraph" style="text-align:left;"><b>Yash&#39;s Advice to Overwhelmed Security Teams</b>: &quot;<i>Let&#39;s acknowledge it&#39;s new technology. We will make mistakes. It&#39;s okay to do that. We&#39;ll eventually catch up to it. When SQL injections and XSS came out in the very early days, those were big deals and nobody knew exactly how they worked. Today we are much ahead of that game. We will eventually get to something similar on the AI front.</i>&quot;</p><p class="paragraph" style="text-align:left;">This perspective offers psychological relief to security teams feeling overwhelmed. The goal isn&#39;t perfect security from day one it&#39;s building capabilities iteratively while maintaining awareness of risks. Organizations that give security teams space to learn, experiment, and occasionally fail will develop more robust AI security programs than those that demand immediate expertise.</p><hr class="content_break"><p class="paragraph" style="text-align:left;"><b>Learning AI with AI</b></p><p class="paragraph" style="text-align:left;">One of the most practical insights Yash shared was almost throwaway: &quot;You can use AI to learn AI.&quot; When new concepts emerge RAG, context windows, fine-tuning, mixture of experts security professionals don&#39;t need to wait for training courses or documentation. They can ask Claude, ChatGPT, or Gemini to explain concepts, provide examples, and answer follow-up questions.</p><p class="paragraph" style="text-align:left;">Yash takes this further: &quot;There&#39;s this Gemini app where you could have conversations with it. So I go on walks and I&#39;m like, &#39;explain this to me.&#39; It&#39;s a conversation with AI on anything and everything, and that could be how AI works.&quot;</p><p class="paragraph" style="text-align:left;">For busy security leaders trying to stay current, this approach is remarkably efficient. Rather than blocking out hours to read documentation, you can integrate learning into your daily routine through conversational AI interfaces. The key is approaching these tools as learning aids rather than definitive sources verify important details, but use AI to accelerate your baseline understanding.</p><hr class="content_break"><h2 class="heading" style="text-align:left;" id="related-resources">RELATED RESOURCES</h2><h3 class="heading" style="text-align:left;" id="ai-security-llm-guidance"><b>AI Security & LLM Guidance</b></h3><ul><li><p class="paragraph" style="text-align:left;"><a class="link" href="https://owasp.org/www-project-top-10-for-large-language-model-applications/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=zero-day-exploited-in-hours-ai-agent-risk-lessons-from-ciso-of-sendbird" target="_blank" rel="noopener noreferrer nofollow">OWASP Top 10 for LLM Applications</a> - Comprehensive framework for AI application security</p></li><li><p class="paragraph" style="text-align:left;"><a class="link" href="https://www.nist.gov/itl/ai-risk-management-framework?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=zero-day-exploited-in-hours-ai-agent-risk-lessons-from-ciso-of-sendbird" target="_blank" rel="noopener noreferrer nofollow">NIST AI Risk Management Framework</a> - Federal guidance on managing AI risks</p></li><li><p class="paragraph" style="text-align:left;"><a class="link" href="https://www.anthropic.com/constitutional-ai?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=zero-day-exploited-in-hours-ai-agent-risk-lessons-from-ciso-of-sendbird" target="_blank" rel="noopener noreferrer nofollow">Anthropic: Claude&#39;s Constitutional AI</a> - Understanding AI safety through design</p></li></ul><h3 class="heading" style="text-align:left;" id="multi-layered-security-architecture"><b>Multi-Layered Security Architecture</b></h3><ul><li><p class="paragraph" style="text-align:left;"><a class="link" href="https://cloud.google.com/beyondcorp?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=zero-day-exploited-in-hours-ai-agent-risk-lessons-from-ciso-of-sendbird" target="_blank" rel="noopener noreferrer nofollow">Google BeyondCorp: A New Approach to Enterprise Security</a> - Foundational zero trust architecture principles</p></li><li><p class="paragraph" style="text-align:left;"><a class="link" href="https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=zero-day-exploited-in-hours-ai-agent-risk-lessons-from-ciso-of-sendbird" target="_blank" rel="noopener noreferrer nofollow">AWS Security Best Practices for IAM</a> - Multi-layered access control implementation</p></li></ul><h3 class="heading" style="text-align:left;" id="cloud-security-podcast"><b>Incident Response for Modern Environments:</b></h3><ul><li><p class="paragraph" style="text-align:left;"><a class="link" href="https://www.sans.org/white-papers/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=zero-day-exploited-in-hours-ai-agent-risk-lessons-from-ciso-of-sendbird" target="_blank" rel="noopener noreferrer nofollow">SANS Cloud Incident Response Framework</a> - Adapting IR for cloud and AI systems</p></li><li><p class="paragraph" style="text-align:left;"><a class="link" href="https://www.atlassian.com/incident-management?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=zero-day-exploited-in-hours-ai-agent-risk-lessons-from-ciso-of-sendbird" target="_blank" rel="noopener noreferrer nofollow">Atlassian Incident Management Handbook</a> - Modern incident response processes</p></li></ul><h3 class="heading" style="text-align:left;" id="cloud-security-podcast"><b>Cloud Security Podcast</b></h3><ul><li><p class="paragraph" style="text-align:left;"><a class="link" href="https://www.cloudsecuritypodcast.tv/videos/how-to-secure-your-ai-agents-a-cisos-journey?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=zero-day-exploited-in-hours-ai-agent-risk-lessons-from-ciso-of-sendbird" target="_blank" rel="noopener noreferrer nofollow">Cloud Security Podcast – Full Episode with Yashvier</a></p></li><li><p class="paragraph" style="text-align:left;">For deeper discussion on failed data lakes, AI in detection engineering, and where SIEM still fits.</p></li></ul><hr class="content_break"><h2 class="heading" style="text-align:left;" id="related-podcast-episodes"><b>Related Podcast Episodes 🎧</b></h2><div class="image"><a class="image__link" href="https://www.cloudsecuritypodcast.tv/videos/how-to-secure-your-ai-agents-a-cisos-journey?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=zero-day-exploited-in-hours-ai-agent-risk-lessons-from-ciso-of-sendbird" rel="noopener" target="_blank"><img alt="" class="image__image" style="" src="https://media.beehiiv.com/cdn-cgi/image/fit=scale-down,format=auto,onerror=redirect,quality=80/uploads/asset/file/a37b40b7-4f5b-46d7-916a-879f8cba892d/S06_Yashvier_K.jpg?t=1765410490"/></a><div class="image__source"><a class="image__source_link" href="https://www.cloudsecuritypodcast.tv/videos/how-to-secure-your-ai-agents-a-cisos-journey?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=zero-day-exploited-in-hours-ai-agent-risk-lessons-from-ciso-of-sendbird" rel="noopener" target="_blank"><span class="image__source_text"><p>How to secure your AI Agents: A CISOs Journey</p></span></a></div></div><h3 class="heading" style="text-align:left;" id="question-for-you-reply-to-this-emai">Question for you? (Reply to this email)</h3><p class="paragraph" style="text-align:left;"> 🤖 <b>Is there an AI Incident that is not a breach?</b><br></p><p class="paragraph" style="text-align:left;">Next week, we&#39;ll explore another critical aspect of cloud security. Stay tuned!</p><hr class="content_break"><p class="paragraph" style="text-align:left;">📬 Want weekly expert takes on AI & Cloud Security? [<a class="link" href="https://www.cloudsecuritynewsletter.com/subscribe?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=zero-day-exploited-in-hours-ai-agent-risk-lessons-from-ciso-of-sendbird" target="_blank" rel="noopener noreferrer nofollow">Subscribe here</a>]”</p><hr class="content_break"><p class="paragraph" style="text-align:left;"><span style="color:rgb(238, 40, 60);"><b><a class="link" href="mailto:info@cloudsecuritypodcast.tv" target="_blank" rel="noopener noreferrer nofollow">We would love to hear from you</a></b></span>📢 for a feature or topic request or if you would like to sponsor an edition of Cloud Security Newsletter. </p><p class="paragraph" style="text-align:left;">Thank you for continuing to subscribe and Welcome to the new members in tis newsletter community💙</p><p class="paragraph" style="text-align:start;">Peace!</p><p class="paragraph" style="text-align:start;"><a class="link" href="https://www.linkedin.com/in/shilpi-bhattacharjee/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=zero-day-exploited-in-hours-ai-agent-risk-lessons-from-ciso-of-sendbird" target="_blank" rel="noopener noreferrer nofollow">Shilpi Bhattacharjee</a></p><div class="image"><a class="image__link" href="https://www.cloudsecuritypodcast.tv/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=zero-day-exploited-in-hours-ai-agent-risk-lessons-from-ciso-of-sendbird" rel="noopener" target="_blank"><img alt="" class="image__image" style="" src="https://media.beehiiv.com/cdn-cgi/image/fit=scale-down,format=auto,onerror=redirect,quality=80/uploads/asset/file/dc094a1c-678a-43e0-adc9-1bee6c3499e2/CSP_Logo_Blue_ScreenRes_3000x3000_v2.jpg"/></a></div><div class="button" style="text-align:left;"><a target="_blank" rel="noopener nofollow noreferrer" class="button__link" style="" href="{{rp_referral_hub_url}}"><span class="button__text" style=""> Share the newsletter </span></a></div><p class="paragraph" style="text-align:left;">Was this forwarded to you? You can <a class="link" href="https://www.cloudsecuritynewsletter.com/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=zero-day-exploited-in-hours-ai-agent-risk-lessons-from-ciso-of-sendbird" target="_blank" rel="noopener noreferrer nofollow">Sign up here</a>, to join our growing readership.</p><p class="paragraph" style="text-align:left;">Want to <b>sponsor</b> the next newsletter edition! <a class="link" href="mailto:info@cloudsecuritypodcast.tv" target="_blank" rel="noopener noreferrer nofollow">Lets make it happen </a></p><p class="paragraph" style="text-align:left;">Have you joined our FREE <b>Monthly</b> <a class="link" href="https://www.cloudsecuritybootcamp.com/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=zero-day-exploited-in-hours-ai-agent-risk-lessons-from-ciso-of-sendbird" target="_blank" rel="noopener noreferrer nofollow">Cloud Security Bootcamp</a> yet?</p><p class="paragraph" style="text-align:left;">checkout our <b>sister podcast</b> <a class="link" href="https://www.youtube.com/@AISecurityPodcast?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=zero-day-exploited-in-hours-ai-agent-risk-lessons-from-ciso-of-sendbird" target="_blank" rel="noopener noreferrer nofollow">AI Security Podcast</a></p></div><div class='beehiiv__footer'><br class='beehiiv__footer__break'><hr class='beehiiv__footer__line'><a target="_blank" class="beehiiv__footer_link" style="text-align: center;" href="https://www.beehiiv.com/?utm_campaign=f0232fa8-9a38-46f5-aac6-0877c51312bf&utm_medium=post_rss&utm_source=cloud_security_newsletter">Powered by beehiiv</a></div></div>
  ]]></content:encoded>
</item>

      <item>
  <title>🚨 ServiceNow Acquires Veza for $1B* as Identity Becomes Critical Attack Vector: Lessons from Building Cloud-Native Data Lakes at Scale</title>
  <description>This week covers ServiceNow&#39;s strategic $1B* acquisition of identity security firm Veza, the Oracle E-Business Suite zero-day campaign affecting 100+ organizations, and Claude AI plugins shown deploying ransomware. Security expert Cliff Crosford shares hard-won lessons from building security data lakes at scale, addressing SIEM cost challenges, and implementing AI-driven detection pipelines for enterprise cloud security teams.</description>
      <enclosure url="https://media.beehiiv.com/cdn-cgi/image/fit=scale-down,format=auto,onerror=redirect,quality=80/uploads/asset/file/4a337fab-901a-4464-b644-c0445bf8ff77/Screenshot_2025-12-03_at_7.26.50_AM.png" length="3747727" type="image/png"/>
  <link>https://www.cloudsecuritynewsletter.com/p/servicenow-veza-identity-security-data-lakes</link>
  <guid isPermaLink="true">https://www.cloudsecuritynewsletter.com/p/servicenow-veza-identity-security-data-lakes</guid>
  <pubDate>Wed, 03 Dec 2025 15:32:42 +0000</pubDate>
  <atom:published>2025-12-03T15:32:42Z</atom:published>
    <dc:creator>Ashish Rajan</dc:creator>
  <content:encoded><![CDATA[
    <div class='beehiiv'><style>
  .bh__table, .bh__table_header, .bh__table_cell { border: 1px solid #C0C0C0; }
  .bh__table_cell { padding: 5px; background-color: #FFFFFF; }
  .bh__table_cell p { color: #2D2D2D; font-family: 'Helvetica',Arial,sans-serif !important; overflow-wrap: break-word; }
  .bh__table_header { padding: 5px; background-color:#F1F1F1; }
  .bh__table_header p { color: #2A2A2A; font-family:'Trebuchet MS','Lucida Grande',Tahoma,sans-serif !important; overflow-wrap: break-word; }
</style><div class='beehiiv__body'><h2 class="heading" style="text-align:left;" id="hello-from-the-cloudverse"><span style="background-color:#28EEDA;"><b>Hello from the Cloud-verse!</b></span></h2><p class="paragraph" style="text-align:left;">This week’s Cloud Security Newsletter Topic we cover - <b>From SIEM to Data Lake: Why Security Teams Are Making the Shift (And What It Actually Takes)</b><b> </b><a class="link" href="#cloud-security-topic-of-the-week" rel="noopener noreferrer nofollow">(continue reading)</a> </p><div class="button" style="text-align:center;"><a target="_blank" rel="noopener nofollow noreferrer" class="button__link" style="" href="https://links.cloudsecuritypodcast.tv/drata-dec-2025?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=servicenow-acquires-veza-for-1b-as-identity-becomes-critical-attack-vector-lessons-from-building-cloud-native-data-lakes-at-scale"><span class="button__text" style=""> Check out this week’s Sponsor: Drata </span></a></div><hr class="content_break"><div class="image"><a class="image__link" href="https://www.cloudsecuritypodcast.tv/videos/the-terraform-lift-shift-playbook-migrating-200-apps-to-multi-cloud-with-terraform?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=servicenow-acquires-veza-for-1b-as-identity-becomes-critical-attack-vector-lessons-from-building-cloud-native-data-lakes-at-scale" rel="noopener" target="_blank"><img alt="" class="image__image" style="" src="https://media.beehiiv.com/cdn-cgi/image/fit=scale-down,format=auto,onerror=redirect,quality=80/uploads/asset/file/4a337fab-901a-4464-b644-c0445bf8ff77/Screenshot_2025-12-03_at_7.26.50_AM.png?t=1764775656"/></a><div class="image__source"><span class="image__source_text"><p>This image was generated by AI. It&#39;s still experimental, so it might not be a perfect match!</p></span></div></div><p class="paragraph" style="text-align:left;"><b>Incase, this is your 1st Cloud Security Newsletter! You are in good company! </b><br>You are reading this issue along with your friends and colleagues from companies like <i>Netflix</i>, Citi, <i>JP Morgan, Linkedin, Reddit, Github, Gitlab, CapitalOne, Robinhood, HSBC, British Airways, Airbnb, Block, Booking Inc & more</i> who subscribe to this newsletter, who like you want to learn what’s new with Cloud Security each week from their industry peers like many others who listen to <a class="link" href="https://open.spotify.com/show/6LZgeh4GecRYPc0WrwMB4I?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=servicenow-acquires-veza-for-1b-as-identity-becomes-critical-attack-vector-lessons-from-building-cloud-native-data-lakes-at-scale" target="_blank" rel="noopener noreferrer nofollow">Cloud Security Podcast</a> & <a class="link" href="https://open.spotify.com/show/3nV4eijfzdHKIvDOaycVII?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=servicenow-acquires-veza-for-1b-as-identity-becomes-critical-attack-vector-lessons-from-building-cloud-native-data-lakes-at-scale" target="_blank" rel="noopener noreferrer nofollow">AI Security Podcast</a> every week.</p><hr class="content_break"><p class="paragraph" style="text-align:left;">Welcome to this week’s Cloud Security Newsletter</p><p class="paragraph" style="text-align:left;">As identity security takes center stage with ServiceNow&#39;s billion-dollar acquisition of Veza and AI security vulnerabilities proliferate across enterprise environments, security teams face a critical inflection point: how do we scale visibility without breaking the bank?</p><p class="paragraph" style="text-align:left;">This week, we&#39;re joined by Cliff Crosford, co-founder of <a class="link" href="https://Scanner.dev?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=servicenow-acquires-veza-for-1b-as-identity-becomes-critical-attack-vector-lessons-from-building-cloud-native-data-lakes-at-scale" target="_blank" rel="noopener noreferrer nofollow">Scanner.dev</a> and veteran of multiple security startups including an acquisition into Cisco. Cliff shares hard-won lessons from building security data lakes at massive scale including what works, what fails spectacularly, and why most teams underestimate the engineering lift required to make data lakes actually usable for security operations.<i> [</i><a class="link" href="https://www.cloudsecuritypodcast.tv/videos/siem-vs-data-lake-why-we-ditched-traditional-logging?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=servicenow-acquires-veza-for-1b-as-identity-becomes-critical-attack-vector-lessons-from-building-cloud-native-data-lakes-at-scale" target="_blank" rel="noopener noreferrer nofollow">Listen to the episode</a><i>]</i></p><hr class="content_break"><h2 class="heading" style="text-align:left;" id="tldr-for-busy-readers">📰 TL;DR for Busy Readers</h2><ul><li><p class="paragraph" style="text-align:left;"><b>ServiceNow&#39;s $1B* Veza acquisition signals identity + ITSM consolidation</b>-<b> </b>Identity is consolidating fast with ServiceNow’s Veza move, it is a clear play to own “who has access to what” across cloud and SaaS; revisit your IAM/CIEM roadmap now.</p></li><li><p class="paragraph" style="text-align:left;"><b>Claude AI &quot;Skills&quot; plugins shown deploying ransomware</b>  AI plug-ins are the new PowerShell – Claude Skills and shadow AI in the browser demand Tier 0 governance and an allowlist model, not “free install for everyone.” treat AI tools as Tier 0 code execution environments.</p></li><li><p class="paragraph" style="text-align:left;"><b>Oracle EBS zero-day hits 100+ orgs including Penn & Phoenix Universities</b> confirm patch status and SSO integration points immediately</p></li><li><p class="paragraph" style="text-align:left;"><b>AWS and Google Announce Joint Multi-cloud Networking Service- </b>Multicloud networking is getting easier and riskier. AWS + Google’s joint fabric is great for latency and terrible for flat, poorly governed architectures.</p></li><li><p class="paragraph" style="text-align:left;"><b>Data lakes promise SIEM cost savings but require serious engineering</b> budget for schema maintenance, not just storage</p></li><li><p class="paragraph" style="text-align:left;"><b>AI can accelerate detection tuning 80% but humans remain essential</b> implement human-in-the-loop review for all AI-generated detections</p></li></ul><hr class="content_break"><h2 class="heading" style="text-align:left;" id="this-weeks-security-news">📰 <b>THIS WEEK&#39;S TOP 5 SECURITY HEADLINES</b></h2><h3 class="heading" style="text-align:left;" id="1-service-now-moves-to-acquire-veza"><b>1. ServiceNow Moves to Acquire Veza in ~$1B Identity Security Deal</b></h3><p class="paragraph" style="text-align:left;"><b>What Happened</b></p><p class="paragraph" style="text-align:left;">ServiceNow announced a definitive agreement to acquire identity security company Veza in a deal reportedly valued at around $1 billion, according to SecurityWeek&#39;s M&A tracker. Veza specializes in authorization and access intelligence across modern infrastructure and SaaS, mapping &quot;who has access to what&quot; across cloud providers, data platforms, and business applications.</p><p class="paragraph" style="text-align:left;"><b>Why It Matters</b></p><p class="paragraph" style="text-align:left;">Identity is the new control plane in cloud-native environments. Veza&#39;s entitlement graph across AWS, Azure, GCP, SaaS, and data stores represents exactly the visibility layer many organizations are attempting to build themselves. This acquisition signals further consolidation of identity, ITSM, and security operations into single platforms a trend that creates both opportunities and risks.</p><p class="paragraph" style="text-align:left;">As Cliff Crosford notes from his experience building security infrastructure at scale: &quot;Log volumes just get massive and then it becomes impossible to keep all of the logs that you want in your SIEM. Traditional SIEMs were wonderful in the era when you had maybe individual gigabytes or tens of gigabytes of logs per day.&quot; ServiceNow&#39;s move to acquire Veza positions them to become the central control plane not just for workflow, but for identity telemetry at enterprise scale.</p><p class="paragraph" style="text-align:left;">For large enterprises, the strategic implications are significant: tighter vendor integration enables faster response automation, but it also creates a higher blast radius when identity metadata and agentic controls live inside a single vendor platform. Organizations already using ServiceNow for ITSM should immediately map where identity artifacts flow today and prepare for integration changes.</p><p class="paragraph" style="text-align:left;"><b>Actionable Steps:</b></p><ul><li><p class="paragraph" style="text-align:left;">Inventory your current IAM/CIEM/DSPM/IGA stack and note overlaps with Veza&#39;s capabilities</p></li><li><p class="paragraph" style="text-align:left;">If you&#39;re a ServiceNow customer, ask reps about roadmap timelines, data residency, and API access</p></li><li><p class="paragraph" style="text-align:left;">Track this as a signal that &quot;identity + operations + security&quot; convergence is accelerating</p></li><li><p class="paragraph" style="text-align:left;">Update procurement risk registers to evaluate lock-in and data residency implications</p></li></ul><p class="paragraph" style="text-align:left;"><b>Source:</b><a class="link" href="https://www.securityweek.com/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=servicenow-acquires-veza-for-1b-as-identity-becomes-critical-attack-vector-lessons-from-building-cloud-native-data-lakes-at-scale" target="_blank" rel="noopener noreferrer nofollow"> SecurityWeek M&A Tracker</a></p><hr class="content_break"><h3 class="heading" style="text-align:left;" id="2-oracle-e-business-suite-zero-day-"><b>2. Oracle E-Business Suite Zero-Day Campaign Widens – Penn & Phoenix Universities Confirm Breaches</b></h3><p class="paragraph" style="text-align:left;"><b>What Happened</b></p><p class="paragraph" style="text-align:left;">The University of Pennsylvania and University of Phoenix disclosed they&#39;re victims of the ongoing Oracle E-Business Suite (EBS) hacking campaign, which has already impacted more than 100 organizations. Attackers compromised EBS instances used for supplier payments, general ledger, and other core business processes, exposing PII, dates of birth, Social Security numbers, and banking details. The campaign is linked to Cl0p ransomware activity, with FIN11 suspected behind the intrusion chain, and appears to leverage unknown zero-day vulnerabilities in Oracle EBS that remain publicly undisclosed.</p><p class="paragraph" style="text-align:left;"><b>Why It Matters</b></p><p class="paragraph" style="text-align:left;">ERP systems like Oracle EBS sit at the heart of finance, procurement, payroll, and student information and are increasingly hosted in cloud and hybrid environments. A compromise here is not just a &quot;data breach&quot;; it&#39;s a business operations incident that can halt core business functions.</p><p class="paragraph" style="text-align:left;">The campaign highlights the opacity of third-party risk when organizations consume Oracle as a managed service or via integrators. Limited visibility into vendor-hosted ERP can leave security teams blindsided when zero-days are exploited at scale. Once attackers gain access to EBS, they&#39;re adjacently close to SSO integrations, data lakes, and downstream SaaS applications that consume ERP data multiplying the blast radius if identities or integration credentials are reused.</p><p class="paragraph" style="text-align:left;"><b>Actionable Steps:</b></p><ul><li><p class="paragraph" style="text-align:left;">Immediately confirm whether you run Oracle EBS (on-prem or hosted) or if vendors do on your behalf</p></li><li><p class="paragraph" style="text-align:left;">Demand clear answers on patch level and mitigations applied against the EBS campaign</p></li><li><p class="paragraph" style="text-align:left;">Review how ERP authentication integrates with IdP, VPNs, and cloud platforms</p></li><li><p class="paragraph" style="text-align:left;">Add ERP systems to attack surface management and tabletop exercises</p></li><li><p class="paragraph" style="text-align:left;">Prepare procedures to isolate ERP environments and rotate integration credentials quickly</p></li></ul><p class="paragraph" style="text-align:left;"><b>Source:</b><a class="link" href="https://www.securityweek.com/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=servicenow-acquires-veza-for-1b-as-identity-becomes-critical-attack-vector-lessons-from-building-cloud-native-data-lakes-at-scale" target="_blank" rel="noopener noreferrer nofollow"> SecurityWeek Oracle EBS Campaign Coverage</a></p><hr class="content_break"><h3 class="heading" style="text-align:left;" id="3-claude-skills-plugins-shown-able-"><b>3. Claude &quot;Skills&quot; Plug-ins Shown Able to Deploy Ransomware</b></h3><p class="paragraph" style="text-align:left;"><b>What Happened</b></p><p class="paragraph" style="text-align:left;">A Cato Networks researcher demonstrated that an Anthropic Claude &quot;Skill&quot; a plug-in used by Claude Code to automate tasks can be modified to deploy MedusaLocker ransomware without being flagged by the model. By inserting a seemingly benign function into Anthropic&#39;s open-source &quot;GIF Creator&quot; Skill that downloads and executes external code, the researcher showed that Claude reviews only the visible Skill code, not remote payloads fetched at runtime. Anyone can download, tweak, and re-upload Skills in similar fashion, and Anthropic&#39;s current stance is that users are responsible for trusting Skills.</p><p class="paragraph" style="text-align:left;">In parallel, security researchers are warning about &quot;shadow AI in the browser&quot; unmanaged AI extensions and agentic browsers that can read data across SaaS tabs and silently exfiltrate it, often outside CASB/DLP visibility.</p><p class="paragraph" style="text-align:left;"><b>Why It Matters</b></p><p class="paragraph" style="text-align:left;">AI tools are becoming the new &quot;PowerShell&quot; of enterprise environments. Skills, plug-ins, and AI-controlled agents are code execution environments with access to source code, secrets, and internal SaaS not harmless productivity helpers. The Claude demonstration shows how little effort is required to turn them into delivery vectors for ransomware and other malware.</p><p class="paragraph" style="text-align:left;">Traditional code review, software composition analysis, and CASB controls often ignore AI plug-in ecosystems and browser agents, even though they can read session cookies, internal dashboards, and cloud consoles in the browser context. Auto-updating Skills and extensions create an AI supply chain where a poisoned update or compromised extension publisher can instantly impact thousands of developers and analysts.</p><p class="paragraph" style="text-align:left;">This connects directly to Cliff&#39;s point about data engineering complexity: &quot;Every log source is quite a bit of work, and you will be on this endless journey of maintaining a data lake forever.&quot; Just as security teams must continuously maintain log pipelines, they now must maintain AI agent governance treating these tools as critical infrastructure requiring the same rigor as production code.</p><p class="paragraph" style="text-align:left;"><b>Actionable Steps:</b></p><ul><li><p class="paragraph" style="text-align:left;">Treat AI plugins/Skills/Agents as Tier 0 code requiring allowlisting and code review</p></li><li><p class="paragraph" style="text-align:left;">Require provenance checks for internal or forked Skills, just like internal libraries</p></li><li><p class="paragraph" style="text-align:left;">Update acceptable use and AI policies to cover approved tools and prohibited data types</p></li><li><p class="paragraph" style="text-align:left;">Implement restrictions on installing AI extensions/agentic browsers on corporate endpoints</p></li><li><p class="paragraph" style="text-align:left;">Ask EDR and browser security vendors how they detect AI-driven automation and token exfiltration</p></li></ul><p class="paragraph" style="text-align:left;"><b>Sources:</b><a class="link" href="https://www.axios.com/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=servicenow-acquires-veza-for-1b-as-identity-becomes-critical-attack-vector-lessons-from-building-cloud-native-data-lakes-at-scale" target="_blank" rel="noopener noreferrer nofollow"> Axios Claude Ransomware PoC</a>,<a class="link" href="https://thehackernews.com/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=servicenow-acquires-veza-for-1b-as-identity-becomes-critical-attack-vector-lessons-from-building-cloud-native-data-lakes-at-scale" target="_blank" rel="noopener noreferrer nofollow"> The Hacker News Shadow AI Analysis</a></p><hr class="content_break"><h3 class="heading" style="text-align:left;" id="4-aws-and-google-announce-joint-mul"><b>4. AWS and Google Announce Joint Multicloud Networking Service</b></h3><p class="paragraph" style="text-align:left;"><b>What Happened</b></p><p class="paragraph" style="text-align:left;">AWS and Google Cloud jointly launched a new multicloud networking service combining AWS Interconnect–multicloud with Google Cloud&#39;s Cross-Cloud Interconnect, allowing customers to establish private, high-speed links between AWS and GCP environments in minutes instead of weeks. The providers also introduced an open specification for network interoperability, with Salesforce named as a day-one user.</p><p class="paragraph" style="text-align:left;"><b>Why It Matters</b></p><p class="paragraph" style="text-align:left;">Multicloud private connectivity is becoming dramatically easier great for latency and reliability, but it also increases the need for tight segmentation, routing governance, and inspection between clouds. Many enterprises currently rely on DIY IPsec or third-party SD-WAN for multicloud connections. This offering will tempt teams to move to provider-managed connectivity, which can be beneficial but security controls must move with you (firewalls, IDS/IPS, service mesh, policy as code).</p><p class="paragraph" style="text-align:left;">Once AWS and GCP are connected via one high-speed &quot;trusted&quot; fabric, misconfigurations in one cloud (e.g., flat VPCs, overly broad peering) can more easily propagate risk to the other. As organizations build cross-cloud data lakes and security operations, this interconnect becomes a critical attack surface requiring dedicated monitoring and segmentation.</p><p class="paragraph" style="text-align:left;"><b>Actionable Steps:</b></p><ul><li><p class="paragraph" style="text-align:left;">Request a threat model of current and planned multicloud connectivity from your cloud networking team</p></li><li><p class="paragraph" style="text-align:left;">Define security guardrails: mandatory use of segmented VRFs/VPCs/projects and clear patterns for east-west inspection</p></li><li><p class="paragraph" style="text-align:left;">Update cloud architecture standards to ensure cross-cloud links are visible to SecOps</p></li><li><p class="paragraph" style="text-align:left;">Integrate new cross-cloud resources into logging, NDR, and incident response workflows</p></li><li><p class="paragraph" style="text-align:left;">Review network segmentation and egress policies to ensure flow logs cover cross-cloud transport</p></li></ul><p class="paragraph" style="text-align:left;"><b>Sources:</b><a class="link" href="https://www.reuters.com/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=servicenow-acquires-veza-for-1b-as-identity-becomes-critical-attack-vector-lessons-from-building-cloud-native-data-lakes-at-scale" target="_blank" rel="noopener noreferrer nofollow"> Reuters AWS-Google Collaboration</a>,<a class="link" href="https://www.itpro.com/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=servicenow-acquires-veza-for-1b-as-identity-becomes-critical-attack-vector-lessons-from-building-cloud-native-data-lakes-at-scale" target="_blank" rel="noopener noreferrer nofollow"> ITPro Coverage</a></p><hr class="content_break"><h2 class="heading" style="text-align:left;" id="cloud-security-topic-of-the-week">🎯 Cloud Security Topic of the Week: </h2><h3 class="heading" style="text-align:left;" id="from-siem-to-data-lake-why-security"><b>From SIEM to Data Lake: Why Security Teams Are Making the Shift (And What It Actually Takes)</b></h3><p class="paragraph" style="text-align:left;">The traditional SIEM model is breaking under the weight of modern log volumes. This week&#39;s conversation with Cliff Crosford reveals a painful truth that many security leaders are discovering: when your log volume reaches multiple terabytes per day, the economics of traditional SIEMs become untenable, forcing impossible choices about which logs to drop and which blind spots to accept.</p><hr class="content_break"><h2 class="heading" style="text-align:left;" id="featured-experts-this-week"><b>Featured Experts This Week </b>🎤</h2><ul><li><p class="paragraph" style="text-align:left;"><a class="link" href="https://www.linkedin.com/in/cliftoncrosland/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=servicenow-acquires-veza-for-1b-as-identity-becomes-critical-attack-vector-lessons-from-building-cloud-native-data-lakes-at-scale" target="_blank" rel="noopener noreferrer nofollow"><b>Cliff Crosford</b></a>   Co-founder, <a class="link" href="https://Scanner.dev?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=servicenow-acquires-veza-for-1b-as-identity-becomes-critical-attack-vector-lessons-from-building-cloud-native-data-lakes-at-scale" target="_blank" rel="noopener noreferrer nofollow">Scanner.dev</a></p></li><li><p class="paragraph" style="text-align:left;"><b><a class="link" href="https://www.linkedin.com/in/ashishrajan/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=servicenow-acquires-veza-for-1b-as-identity-becomes-critical-attack-vector-lessons-from-building-cloud-native-data-lakes-at-scale" target="_blank" rel="noopener noreferrer nofollow">Ashish Rajan</a></b> - CISO | Co-Host <a class="link" href="https://open.spotify.com/show/3nV4eijfzdHKIvDOaycVII?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=servicenow-acquires-veza-for-1b-as-identity-becomes-critical-attack-vector-lessons-from-building-cloud-native-data-lakes-at-scale" target="_blank" rel="noopener noreferrer nofollow">AI Security Podcast</a> , Host of <a class="link" href="https://www.cloudsecuritypodcast.tv/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=servicenow-acquires-veza-for-1b-as-identity-becomes-critical-attack-vector-lessons-from-building-cloud-native-data-lakes-at-scale" target="_blank" rel="noopener noreferrer nofollow">Cloud Security Podcast</a></p></li></ul><h2 class="heading" style="text-align:left;" id="definitions-and-core-concepts"><b>Definitions and Core Concepts 📚</b></h2><p class="paragraph" style="text-align:left;">Before diving into our insights, let&#39;s clarify some key terms:</p><ul><li><p class="paragraph" style="text-align:left;"><b>Data Lake vs. Traditional SIEM:</b> A data lake is an architectural approach that stores massive volumes of raw data in object storage (like S3) at significantly lower cost than traditional SIEMs. Unlike SIEMs that require structured ingestion and charge by volume, data lakes can store petabytes of logs indefinitely. However, they require significant data engineering to make logs searchable and useful.</p></li><li><p class="paragraph" style="text-align:left;"><b>OCSF (Open Cybersecurity Schema Framework):</b> A vendor-agnostic schema that provides a common structure for security log data across different sources. While useful for standardization, it requires significant transformation work to fit diverse log sources into its strict schema requirements.</p></li><li><p class="paragraph" style="text-align:left;"><b>Schema Normalization:</b> The process of transforming logs from different sources into a consistent structure with standardized field names. This enables correlation across log sources but requires ongoing maintenance as applications evolve and schemas change.</p></li><li><p class="paragraph" style="text-align:left;"><b>Security Data Lake</b><br>A log and event repository built on cloud object storage (e.g., S3, GCS, ADLS) plus engines like Presto/Trino, Spark, or specialized lake query engines. Optimized for cheap, long-term storage and large-scale analytics not for the interactive alerting workflows SIEMs excel at.</p></li><li><p class="paragraph" style="text-align:left;"><b>Traditional SIEM</b><br>Platforms like Splunk, Elastic, QRadar, etc. Great for parsing, normalizing, and searching logs up to a point. Licensing is usually volume-based, which becomes prohibitive beyond hundreds of GB/day, pushing teams to sample or drop logs.</p></li><li><p class="paragraph" style="text-align:left;"><b>Normalization vs “Messy Data”</b><br>Normalization = forcing all logs into a consistent schema. In practice, custom apps and constantly changing sources make perfect normalization a full-time job. Cliff argues for “best-effort normalization” + tools that can search nested JSON and free text without requiring strict tables.</p></li><li><p class="paragraph" style="text-align:left;"><b>Detection Engineering</b><br>The practice of designing, tuning, and maintaining detection rules and pipelines. In a data lake world, this includes SQL rules, full-text searches, anomaly jobs, and AI-assisted investigations, all closely tied to schema evolution and data quality.</p></li></ul><hr class="content_break"><p class="paragraph" style="text-align:center;">This week&#39;s issue is sponsored by <a class="link" href="https://links.cloudsecuritypodcast.tv/drata-dec-2025?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=servicenow-acquires-veza-for-1b-as-identity-becomes-critical-attack-vector-lessons-from-building-cloud-native-data-lakes-at-scale" target="_blank" rel="noopener noreferrer nofollow"><b>Drata</b></a></p><p class="paragraph" style="text-align:center;"><span style="color:rgb(0, 0, 0);font-family:Arial, sans-serif;font-size:11pt;">Security teams shouldn’t be buried in manual evidence collection. </span></p><p class="paragraph" style="text-align:left;"><span style="color:rgb(0, 0, 0);font-family:Arial, sans-serif;font-size:11pt;">Drata automates compliance end-to-end while providing unified visibility across cloud workloads, identities, and configurations. </span></p><p class="paragraph" style="text-align:left;"><span style="color:rgb(0, 0, 0);font-family:Arial, sans-serif;font-size:11pt;">Teams use Drata to cut audit prep from weeks to hours, accelerate security reviews, and reinforce DevSecOps pipelines with real-time controls monitoring. </span></p><p class="paragraph" style="text-align:left;"><span style="color:rgb(0, 0, 0);font-family:Arial, sans-serif;font-size:11pt;">If you&#39;re scaling cloud infrastructure and need a smarter path to continuous compliance, Drata is built for you.</span></p><p class="paragraph" style="text-align:center;"><a class="link" href="https://links.cloudsecuritypodcast.tv/drata-dec-2025?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=servicenow-acquires-veza-for-1b-as-identity-becomes-critical-attack-vector-lessons-from-building-cloud-native-data-lakes-at-scale" target="_blank" rel="noopener noreferrer nofollow">Book Demo with Drata</a></p><hr class="content_break"><h2 class="heading" style="text-align:left;" id="our-insights-from-this-practitioner">💡<b>Our Insights from this Practitioner 🔍</b></h2><h3 class="heading" style="text-align:left;" id="siem-vs-data-lake-why-we-ditched-tr"><b>SIEM vs. Data Lake: Why We Ditched Traditional Logging? </b><b>(</b><a class="link" href="https://www.cloudsecuritypodcast.tv/videos/siem-vs-data-lake-why-we-ditched-traditional-logging?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=servicenow-acquires-veza-for-1b-as-identity-becomes-critical-attack-vector-lessons-from-building-cloud-native-data-lakes-at-scale" target="_blank" rel="noopener noreferrer nofollow"><b>Full Episode here</b></a><b>)</b></h3><hr class="content_break"><h4 class="heading" style="text-align:left;" id="the-economic-reality-forcing-change"><b>The Economic Reality Forcing Change</b></h4><p class="paragraph" style="text-align:left;">Cliff&#39;s experience building security infrastructure at a prior startup (later acquired by Cisco) illustrates the breaking point many organizations face. &quot;To increase our license, it would&#39;ve been more expensive than the entire budget for the engineering team,&quot; Cliff explains. When faced with exploding log volumes at his previous company, his team redirected 90% of log data to S3 buckets a seemingly logical cost-saving measure.</p><p class="paragraph" style="text-align:left;">But the reality proved more complex. &quot;It became a bit of a black hole where like you couldn&#39;t really search through very much data once the data set became large. Querying that data lake at S3 became more and more painful over time.&quot;</p><p class="paragraph" style="text-align:left;">At modern scale, volume-based SIEM licensing becomes economically impossible. Data lakes promise cheaper, near-infinite retention for multi-TB/day security data and the ability to keep all security-relevant logs instead of sampling or dropping them. They also provide a better substrate for AI-driven detection and investigation. But today, they come with hard trade-offs around engineering effort, usability, and search performance.</p><hr class="content_break"><h4 class="heading" style="text-align:left;" id="the-hidden-cost-of-cheap-storage"><b>The Hidden Cost of &quot;Cheap&quot; Storage</b></h4><p class="paragraph" style="text-align:left;">What looked like a financial win turned into an operational nightmare. The logs existed, but they were essentially unusable for actual security investigations. This experience highlights a fundamental misunderstanding about data lakes: <b>cheap storage is only valuable if you can actually query the data when you need it</b>. For security teams, that means during active incidents when every minute counts.</p><p class="paragraph" style="text-align:left;">When Cliff&#39;s team tried to use Athena for queries, they hit the wall that many organizations eventually face: &quot;The queries would take three hours to run and might cost a few hundred dollars.&quot; This isn&#39;t a configuration problem it&#39;s an architectural reality of how SQL-based data lake engines work.</p><hr class="content_break"><h4 class="heading" style="text-align:left;" id="the-engineering-reality-most-teams-"><b>The Engineering Reality Most Teams Underestimate</b></h4><p class="paragraph" style="text-align:left;">The fundamental issue? &quot;Every log source is quite a bit of work, and you will be on this endless journey of maintaining a data lake forever,&quot; Cliff warns. Security teams often think of data lake migration as a one-time project, but his experience reveals it as an <b>ongoing engineering commitment</b>. Each new log source requires custom work to fetch, transform, and fit into your schema. And when applications update and schemas change which happens constantly your detection rules break.</p><p class="paragraph" style="text-align:left;">One particularly painful reality: &quot;Basically every week, at least one of them was misbehaving because the schema changed a little bit. New fields showed up that were important or got renamed and then suddenly it stopped getting inserted.&quot; For a team managing 40-50 log sources, this becomes a weekly firefighting exercise.</p><hr class="content_break"><h4 class="heading" style="text-align:left;" id="why-traditional-data-lake-tools-fai"><b>Why Traditional Data Lake Tools Fail Security Teams</b></h4><p class="paragraph" style="text-align:left;">The challenge goes deeper than engineering effort. Most data lake tools were designed for business analytics structured, columnar data that fits neatly into SQL tables. Security logs are fundamentally different. As Cliff explains: &quot;For a lot of security logs they can be a lot messier. They can be like deeply nested JSON or lots of text like PowerShell command line text and so on. That&#39;s where SQL engines that are the typical data lake engines really break down.&quot;</p><p class="paragraph" style="text-align:left;">This mismatch creates a paradox: you move to a data lake for visibility and cost savings, but then you can&#39;t effectively search the very data you&#39;re storing. Traditional full-text search capabilities that security teams rely on in Splunk or Elastic simply don&#39;t exist in most SQL-based data lake platforms like Athena, Presto, or Trino</p><hr class="content_break"><h4 class="heading" style="text-align:left;" id="the-hard-choices-what-gets-sacrific"><b>The Hard Choices: What Gets Sacrificed?</b></h4><p class="paragraph" style="text-align:left;">When SIEM costs become prohibitive, security teams face brutal triage decisions. Cliff describes a common pattern: &quot;They&#39;ll use Cribl to delete fields from logs, filter them down, sample them down try to just keep the log volume down to avoid spending too much on ingestion volume.&quot;</p><p class="paragraph" style="text-align:left;">But here&#39;s the critical difference between SRE and security use cases: &quot;For observability use cases, getting sampled data is all right, but for security teams, it can be pretty terrifying to be like, well, I&#39;m only keeping like 20% of my log data. The threat actor activity and maybe the IOCs that I care about, like malicious IP addresses, are invisible to me.&quot;</p><p class="paragraph" style="text-align:left;">Sampling works for understanding system health. It&#39;s catastrophic for security, where a single malicious event can represent a critical breach. The threat actor doesn&#39;t show up in every log entry just the ones you chose to drop.</p><hr class="content_break"><h4 class="heading" style="text-align:left;" id="amazon-security-lake-promise-vs-rea"><b>Amazon Security Lake: Promise vs. Reality</b></h4><p class="paragraph" style="text-align:left;">Many teams look to AWS Security Lake as a turnkey solution, but Cliff&#39;s analysis reveals important limitations. While it&#39;s &quot;a really good first step&quot; with built-in support for many log sources and automatic OCSF (Open Cybersecurity Schema Framework) transformation, two major challenges remain:</p><p class="paragraph" style="text-align:left;"><b>First, the custom log problem: </b>&quot;<i>For custom log sources or log sources that aren&#39;t in their list of supported log sources, you&#39;re gonna have to do the work to get it to fit into this very strict schema, and that can be a massive amount of work</i>.&quot; Given that most enterprises run hundreds of custom applications, this isn&#39;t an edge case it&#39;s the majority use case.</p><p class="paragraph" style="text-align:left;"><b>Second, the search performance issue:</b> &quot;<i>If you have things like command line text from EDR logs like PowerShell commands, you&#39;re trying to do substring search and really understand messier log data that is unfortunately still quite slow in the data lake.</i>&quot; The platform is optimized for columnar, SQL-friendly data, not the messy reality of security logs.</p><hr class="content_break"><h4 class="heading" style="text-align:left;" id="the-ai-assisted-future-with-importa"><b>The AI-Assisted Future (With Important Caveats)</b></h4><p class="paragraph" style="text-align:left;">Cliff sees AI as a genuine accelerator for data lake workflows, but with critical human-in-the-loop requirements. &quot;<i>AI can really help you figure out how to normalize your data if you really need to fit it into a SQL schema. It will get you like 80% of the way there. It can be kind of there&#39;s still like a bit more, it&#39;ll hallucinate a little bit.</i>&quot;</p><p class="paragraph" style="text-align:left;">For detection engineering, AI becomes particularly powerful: &quot;<i>Because it kind of knows everything that&#39;s out there, it can be a really great brainstorming partner.</i>&quot; But Cliff is clear about boundaries: &quot;<i>My opinion for now is that it&#39;s not yet ready to be fully trusted with important investigations and response. It is good at getting started, but I think humans are still very much needed.</i>&quot;</p><p class="paragraph" style="text-align:left;">The most promising pattern he&#39;s seeing? Automated detection tuning: &quot;<i>An alert goes off, an agent takes a first cut at the investigation, then will make a recommendation for what the detection rule should probably be like. Maybe this is too noisy and we should tune it. Then it will go open up a pull request in GitHub, and then the team can just review it, accept it, and then the detection rule is now tuned better.</i>&quot;</p><p class="paragraph" style="text-align:left;">This closes the loop on detection engineering without requiring manual Jira tickets and waiting for vacation schedules. But notice the pattern: AI proposes, humans approve. The expertise remains human; the acceleration comes from AI.</p><hr class="content_break"><h4 class="heading" style="text-align:left;" id="strategic-recommendations-for-2026-"><b>Strategic Recommendations for 2026 Planning</b></h4><p class="paragraph" style="text-align:left;">For CISOs evaluating the SIEM vs. data lake decision in 2026 planning, Cliff offers pragmatic guidance: &quot;<i>I don&#39;t think it&#39;s quite time to totally replace your SIEM with a data lake. One pattern that tends to work well is to say, &#39;cool, all my logs used to fit in my SIEM a few years ago. Now it&#39;s like 10% of them. Let me keep those 10% going to my SIEM... and then nine terabytes a day of logs instead of dropping them entirely let&#39;s make the first step which is just store them in S3 for compliance purposes.&#39;</i>&quot;</p><p class="paragraph" style="text-align:left;">This hybrid approach acknowledges reality: traditional SIEMs still provide better usability for hot-path investigations, while data lakes solve the retention and cost problem for the long tail of logs you can&#39;t afford to keep in your SIEM.</p><p class="paragraph" style="text-align:left;">The decision of whether to build or buy depends on your team composition: &quot;<i>If you have a really strong data engineering team, whether in the security side or the rest of your organization already doing a lot of data engineering with data lakes for other purposes like business analytics or observability, then yeah, you can share that work with them. That can be a fun project. It is a forever project though.</i>&quot;</p><p class="paragraph" style="text-align:left;">But for teams without deep data engineering resources: &quot;If you don&#39;t have that data engineering talent and resources, you&#39;ll probably need to buy something.&quot; Be honest about your team&#39;s capabilities before committing to a multi-year engineering project.</p><hr class="content_break"><h4 class="heading" style="text-align:left;" id="the-schema-evolution-problem-nobody"><b>The Schema Evolution Problem Nobody Talks About</b></h4><p class="paragraph" style="text-align:left;">Perhaps the most underestimated challenge Cliff identifies is schema evolution. &quot;Tools in the future need to embrace the fact that logs are going to be messy. We as humans can kind of see these schema changes, be like, &#39;eh, I get it. I get what this new field means.&#39;&quot;</p><p class="paragraph" style="text-align:left;">This human ability to adapt to changing schemas to understand that a renamed field still represents the same data is something current data lake tools completely lack. When an application updates and changes field names, your carefully constructed SQL queries break, your detections fail, and you lose visibility until someone manually fixes the schema mapping.</p><p class="paragraph" style="text-align:left;">The future Cliff envisions: &quot;Messiness will be embraced more&quot; tools that can handle schema drift without requiring constant manual intervention. Until then, budget for weekly schema maintenance as part of your data lake TCO.</p><hr class="content_break"><h2 class="heading" style="text-align:left;" id="related-resources">RELATED RESOURCES</h2><h3 class="heading" style="text-align:left;" id="ocsf-schema-standards"><b>OCSF & Schema Standards</b></h3><ul><li><p class="paragraph" style="text-align:left;"><a class="link" href="https://schema.ocsf.io/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=servicenow-acquires-veza-for-1b-as-identity-becomes-critical-attack-vector-lessons-from-building-cloud-native-data-lakes-at-scale" target="_blank" rel="noopener noreferrer nofollow">Open Cybersecurity Schema Framework (OCSF)</a> - Vendor-agnostic schema for security logs</p></li><li><p class="paragraph" style="text-align:left;"><a class="link" href="https://docs.aws.amazon.com/security-lake/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=servicenow-acquires-veza-for-1b-as-identity-becomes-critical-attack-vector-lessons-from-building-cloud-native-data-lakes-at-scale" target="_blank" rel="noopener noreferrer nofollow">AWS Security Lake Documentation</a> - Managed data lake for security data</p></li></ul><h3 class="heading" style="text-align:left;" id="ai-assisted-security-operations"><b>AI-Assisted Security Operations</b></h3><ul><li><p class="paragraph" style="text-align:left;"><a class="link" href="https://modelcontextprotocol.io/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=servicenow-acquires-veza-for-1b-as-identity-becomes-critical-attack-vector-lessons-from-building-cloud-native-data-lakes-at-scale" target="_blank" rel="noopener noreferrer nofollow">Model Context Protocol (MCP)</a> - Connect AI assistants to external data sources</p></li><li><p class="paragraph" style="text-align:left;"><a class="link" href="https://cursor.sh/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=servicenow-acquires-veza-for-1b-as-identity-becomes-critical-attack-vector-lessons-from-building-cloud-native-data-lakes-at-scale" target="_blank" rel="noopener noreferrer nofollow">Cursor</a> - AI-powered code editor for automation scripts</p></li><li><p class="paragraph" style="text-align:left;"><a class="link" href="https://github.com/features/copilot?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=servicenow-acquires-veza-for-1b-as-identity-becomes-critical-attack-vector-lessons-from-building-cloud-native-data-lakes-at-scale" target="_blank" rel="noopener noreferrer nofollow">GitHub Copilot for Security</a> - AI assistance for security workflows</p></li></ul><h3 class="heading" style="text-align:left;" id="cloud-security-podcast"><b>Cloud Security Podcast</b></h3><ul><li><p class="paragraph" style="text-align:left;"><a class="link" href="https://www.cloudsecuritypodcast.tv/videos/siem-vs-data-lake-why-we-ditched-traditional-logging?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=servicenow-acquires-veza-for-1b-as-identity-becomes-critical-attack-vector-lessons-from-building-cloud-native-data-lakes-at-scale" target="_blank" rel="noopener noreferrer nofollow">Cloud Security Podcast – Full Episode with Cliff</a></p></li><li><p class="paragraph" style="text-align:left;">For deeper discussion on failed data lakes, AI in detection engineering, and where SIEM still fits.</p></li></ul><hr class="content_break"><h2 class="heading" style="text-align:left;" id="related-podcast-episodes"><b>Related Podcast Episodes 🎧</b></h2><div class="image"><a class="image__link" href="https://www.cloudsecuritypodcast.tv/videos/siem-vs-data-lake-why-we-ditched-traditional-logging?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=servicenow-acquires-veza-for-1b-as-identity-becomes-critical-attack-vector-lessons-from-building-cloud-native-data-lakes-at-scale" rel="noopener" target="_blank"><img alt="" class="image__image" style="" src="https://media.beehiiv.com/cdn-cgi/image/fit=scale-down,format=auto,onerror=redirect,quality=80/uploads/asset/file/9c31448c-99fb-4cdb-8365-853f87609f59/S06_Cliff_Crosland.jpg?t=1764772899"/></a><div class="image__source"><a class="image__source_link" href="https://www.cloudsecuritypodcast.tv/videos/siem-vs-data-lake-why-we-ditched-traditional-logging?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=servicenow-acquires-veza-for-1b-as-identity-becomes-critical-attack-vector-lessons-from-building-cloud-native-data-lakes-at-scale" rel="noopener" target="_blank"><span class="image__source_text"><p>SIEM vs. Data Lake: Why We Ditched Traditional Logging?</p></span></a></div></div><h3 class="heading" style="text-align:left;" id="question-for-you-reply-to-this-emai">Question for you? (Reply to this email)</h3><p class="paragraph" style="text-align:left;"> 🤖 <b>Would you be Team SIEM or Team Security Data Lake?</b><br></p><p class="paragraph" style="text-align:left;">Next week, we&#39;ll explore another critical aspect of cloud security. Stay tuned!</p><hr class="content_break"><p class="paragraph" style="text-align:left;">📬 Want weekly expert takes on AI & Cloud Security? [<a class="link" href="https://www.cloudsecuritynewsletter.com/subscribe?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=servicenow-acquires-veza-for-1b-as-identity-becomes-critical-attack-vector-lessons-from-building-cloud-native-data-lakes-at-scale" target="_blank" rel="noopener noreferrer nofollow">Subscribe here</a>]”</p><hr class="content_break"><p class="paragraph" style="text-align:left;"><span style="color:rgb(238, 40, 60);"><b><a class="link" href="mailto:info@cloudsecuritypodcast.tv" target="_blank" rel="noopener noreferrer nofollow">We would love to hear from you</a></b></span>📢 for a feature or topic request or if you would like to sponsor an edition of Cloud Security Newsletter. </p><p class="paragraph" style="text-align:left;">Thank you for continuing to subscribe and Welcome to the new members in tis newsletter community💙</p><p class="paragraph" style="text-align:start;">Peace!</p><p class="paragraph" style="text-align:start;"><a class="link" href="https://www.linkedin.com/in/shilpi-bhattacharjee/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=servicenow-acquires-veza-for-1b-as-identity-becomes-critical-attack-vector-lessons-from-building-cloud-native-data-lakes-at-scale" target="_blank" rel="noopener noreferrer nofollow">Shilpi Bhattacharjee</a></p><div class="image"><a class="image__link" href="https://www.cloudsecuritypodcast.tv/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=servicenow-acquires-veza-for-1b-as-identity-becomes-critical-attack-vector-lessons-from-building-cloud-native-data-lakes-at-scale" rel="noopener" target="_blank"><img alt="" class="image__image" style="" src="https://media.beehiiv.com/cdn-cgi/image/fit=scale-down,format=auto,onerror=redirect,quality=80/uploads/asset/file/dc094a1c-678a-43e0-adc9-1bee6c3499e2/CSP_Logo_Blue_ScreenRes_3000x3000_v2.jpg"/></a></div><div class="button" style="text-align:left;"><a target="_blank" rel="noopener nofollow noreferrer" class="button__link" style="" href="{{rp_referral_hub_url}}"><span class="button__text" style=""> Share the newsletter </span></a></div><p class="paragraph" style="text-align:left;">Was this forwarded to you? You can <a class="link" href="https://www.cloudsecuritynewsletter.com/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=servicenow-acquires-veza-for-1b-as-identity-becomes-critical-attack-vector-lessons-from-building-cloud-native-data-lakes-at-scale" target="_blank" rel="noopener noreferrer nofollow">Sign up here</a>, to join our growing readership.</p><p class="paragraph" style="text-align:left;">Want to <b>sponsor</b> the next newsletter edition! <a class="link" href="mailto:info@cloudsecuritypodcast.tv" target="_blank" rel="noopener noreferrer nofollow">Lets make it happen </a></p><p class="paragraph" style="text-align:left;">Have you joined our FREE <b>Monthly</b> <a class="link" href="https://www.cloudsecuritybootcamp.com/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=servicenow-acquires-veza-for-1b-as-identity-becomes-critical-attack-vector-lessons-from-building-cloud-native-data-lakes-at-scale" target="_blank" rel="noopener noreferrer nofollow">Cloud Security Bootcamp</a> yet?</p><p class="paragraph" style="text-align:left;">checkout our <b>sister podcast</b> <a class="link" href="https://www.youtube.com/@AISecurityPodcast?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=servicenow-acquires-veza-for-1b-as-identity-becomes-critical-attack-vector-lessons-from-building-cloud-native-data-lakes-at-scale" target="_blank" rel="noopener noreferrer nofollow">AI Security Podcast</a></p></div><div class='beehiiv__footer'><br class='beehiiv__footer__break'><hr class='beehiiv__footer__line'><a target="_blank" class="beehiiv__footer_link" style="text-align: center;" href="https://www.beehiiv.com/?utm_campaign=0156f91b-bcbf-4af2-af54-ae877b1a261f&utm_medium=post_rss&utm_source=cloud_security_newsletter">Powered by beehiiv</a></div></div>
  ]]></content:encoded>
</item>

      <item>
  <title>🚨 Sha1-Hulud Worm Exposes 25K+ Repos: Lessons from Building Trustworthy AI SOCs For Regulated Environments</title>
  <description>This week: Supply chain attacks compromise enterprise CI/CD pipelines as 600+ npm packages fall to self-replicating malware. Former Mandiant SOC leader Grant Oviatt reveals how Prophet Security achieves 99.3% investigation accuracy with AI agents in regulated environments completing triage in 4 minutes versus traditional teams&#39; multi-hour cycles. Learn the architecture requirements for explainability, traceability, and data sovereignty that regulators demand from AI-driven security operations.</description>
      <enclosure url="https://media.beehiiv.com/cdn-cgi/image/fit=scale-down,format=auto,onerror=redirect,quality=80/uploads/asset/file/f740734b-a98c-4535-948e-323efe35c161/Screenshot_2025-11-26_at_10.27.29_PM.png" length="1255491" type="image/png"/>
  <link>https://www.cloudsecuritynewsletter.com/p/sha1-hulud-worm-fluent-bit-ai-soc-trust</link>
  <guid isPermaLink="true">https://www.cloudsecuritynewsletter.com/p/sha1-hulud-worm-fluent-bit-ai-soc-trust</guid>
  <pubDate>Wed, 26 Nov 2025 22:31:18 +0000</pubDate>
  <atom:published>2025-11-26T22:31:18Z</atom:published>
    <dc:creator>Ashish Rajan</dc:creator>
  <content:encoded><![CDATA[
    <div class='beehiiv'><style>
  .bh__table, .bh__table_header, .bh__table_cell { border: 1px solid #C0C0C0; }
  .bh__table_cell { padding: 5px; background-color: #FFFFFF; }
  .bh__table_cell p { color: #2D2D2D; font-family: 'Helvetica',Arial,sans-serif !important; overflow-wrap: break-word; }
  .bh__table_header { padding: 5px; background-color:#F1F1F1; }
  .bh__table_header p { color: #2A2A2A; font-family:'Trebuchet MS','Lucida Grande',Tahoma,sans-serif !important; overflow-wrap: break-word; }
</style><div class='beehiiv__body'><h2 class="heading" style="text-align:left;" id="hello-from-the-cloudverse"><span style="background-color:#28EEDA;"><b>Hello from the Cloud-verse!</b></span></h2><p class="paragraph" style="text-align:left;">This week’s Cloud Security Newsletter Topic we cover - <b>Can You Trust an AI SOC in a Regulated Environment? </b><a class="link" href="#cloud-security-topic-of-the-week" rel="noopener noreferrer nofollow">(continue reading)</a> </p><div class="button" style="text-align:center;"><a target="_blank" rel="noopener nofollow noreferrer" class="button__link" style="" href="https://links.cloudsecuritypodcast.tv/brinqa-nov25-see-risk-differently?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=sha1-hulud-worm-exposes-25k-repos-lessons-from-building-trustworthy-ai-socs-for-regulated-environments"><span class="button__text" style=""> Check out this week’s Sponsor: Brinqa </span></a></div><hr class="content_break"><div class="image"><a class="image__link" href="https://www.cloudsecuritypodcast.tv/videos/the-terraform-lift-shift-playbook-migrating-200-apps-to-multi-cloud-with-terraform?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=sha1-hulud-worm-exposes-25k-repos-lessons-from-building-trustworthy-ai-socs-for-regulated-environments" rel="noopener" target="_blank"><img alt="" class="image__image" style="" src="https://media.beehiiv.com/cdn-cgi/image/fit=scale-down,format=auto,onerror=redirect,quality=80/uploads/asset/file/f740734b-a98c-4535-948e-323efe35c161/Screenshot_2025-11-26_at_10.27.29_PM.png?t=1764196078"/></a><div class="image__source"><span class="image__source_text"><p>This image was generated by AI. It&#39;s still experimental, so it might not be a perfect match!</p></span></div></div><p class="paragraph" style="text-align:left;"><b>Incase, this is your 1st Cloud Security Newsletter! You are in good company! </b><br>You are reading this issue along with your friends and colleagues from companies like <i>Netflix</i>, Citi, <i>JP Morgan, Linkedin, Reddit, Github, Gitlab, CapitalOne, Robinhood, HSBC, British Airways, Airbnb, Block, Booking Inc & more</i> who subscribe to this newsletter, who like you want to learn what’s new with Cloud Security each week from their industry peers like many others who listen to <a class="link" href="https://open.spotify.com/show/6LZgeh4GecRYPc0WrwMB4I?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=sha1-hulud-worm-exposes-25k-repos-lessons-from-building-trustworthy-ai-socs-for-regulated-environments" target="_blank" rel="noopener noreferrer nofollow">Cloud Security Podcast</a> & <a class="link" href="https://open.spotify.com/show/3nV4eijfzdHKIvDOaycVII?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=sha1-hulud-worm-exposes-25k-repos-lessons-from-building-trustworthy-ai-socs-for-regulated-environments" target="_blank" rel="noopener noreferrer nofollow">AI Security Podcast</a> every week.</p><hr class="content_break"><p class="paragraph" style="text-align:left;">Welcome to this week’s Cloud Security Newsletter</p><p class="paragraph" style="text-align:left;">The convergence of sophisticated supply chain attacks and AI-driven security operations creates both unprecedented risk and opportunity for enterprise defenders. This week, we examine the Sha1-Hulud 2.0 worm that compromised 25,000+ GitHub repositories while exploring how organizations are successfully deploying AI Security Operations Centers (AI SOCs) in highly regulated environments.</p><p class="paragraph" style="text-align:left;">This week is all about <b>trust</b>:</p><ul><li><p class="paragraph" style="text-align:left;"><b>Trust in your supply chain:</b> Sha1-Hulud 2.0 tearing through npm and 25K+ GitHub repos. </p></li><li><p class="paragraph" style="text-align:left;"><b>Trust in your telemetry:</b> Fluent Bit RCE and log tampering across 15B+ deployments.</p></li><li><p class="paragraph" style="text-align:left;"><b>Trust in your identity tier:</b> Oracle Identity Manager pre-auth RCE exploited since August.</p></li><li><p class="paragraph" style="text-align:left;"><b>Trust in your SOC:</b> as AI agents move from toy demos to front-line investigations in regulated environments.</p></li></ul><p class="paragraph" style="text-align:left;">Our featured expert this week is <b>Grant Oviatt</b>, Head of Security Operations at Prophet Security and former SOC leader at Mandiant and Red Canary. Grant brings over 15 years of experience in security operations and shares his journey, who went from an AI skeptic to running an AI SOC handling 12K+ investigations in 2 weeks with 99.3% agreement with humans and ~4-minute investigation times.</p><p class="paragraph" style="text-align:left;">Together, they explore a question many of you are wrestling with right now: <i>Can you put an AI SOC in front of regulated workloads and sleep at night? [</i><a class="link" href="https://www.cloudsecuritypodcast.tv/videos/how-to-build-trust-in-an-ai-soc-for-regulated-environments?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=sha1-hulud-worm-exposes-25k-repos-lessons-from-building-trustworthy-ai-socs-for-regulated-environments" target="_blank" rel="noopener noreferrer nofollow">Listen to the episode</a><i>]</i></p><hr class="content_break"><h2 class="heading" style="text-align:left;" id="tldr-for-busy-readers">📰 TL;DR for Busy Readers</h2><ul><li><p class="paragraph" style="text-align:left;"><b>Sha1-Hulud 2.0 worm</b> compromised 600+ npm packages with persistent GitHub Actions backdoors rotate all CI/CD credentials immediately</p></li><li><p class="paragraph" style="text-align:left;"><b>Fluent Bit vulnerabilities</b> (8+ years old) threaten 15B+ cloud deployments attackers can manipulate logs while executing code</p></li><li><p class="paragraph" style="text-align:left;"><b>AI SOC maturity</b> ( with Grant Oviatt) now enables 4-minute investigations with 99.3% accuracy in regulated environments through explainability and traceability</p></li><li><p class="paragraph" style="text-align:left;"><b>Data sovereignty controls</b> allow regulated customers to host AI SOC data planes in their own cloud with single-tenant architecture</p></li><li><p class="paragraph" style="text-align:left;"><b>Oracle Identity Manager zero-day</b> exploited since August IAM compromises provide &quot;keys to the kingdom&quot; across hybrid environments</p></li></ul><hr class="content_break"><h2 class="heading" style="text-align:left;" id="this-weeks-security-news">📰 <b>THIS WEEK&#39;S TOP 5 SECURITY HEADLINES</b></h2><h3 class="heading" style="text-align:left;" id="1-sha-1-hulud-20-self-replicating-n"><span style="color:rgb(27, 28, 29);"><b>1. </b></span><b> Sha1-Hulud 2.0: Self-Replicating npm Worm Compromises 25,000+ Repositories</b></h3><p class="paragraph" style="text-align:left;">Between November 21-23, a second wave of the Sha1-Hulud supply chain attack compromised over 600 npm packages, including popular tools from Zapier, PostHog, ENS Domains, and Postman. The malware has infected 25,000+ GitHub repositories across approximately 500 users, with cross-victim credential exfiltration actively occurring. This evolved variant now installs self-hosted GitHub Actions runners on infected systems, providing persistent backdoor access that survives reboots, while harvesting credentials from AWS, GCP, and Azure environments across 17 AWS regions.</p><p class="paragraph" style="text-align:left;"><b>Why This Matters to Cloud Security Leaders:</b></p><p class="paragraph" style="text-align:left;">This represents a fundamental escalation in software supply chain attacks, directly threatening enterprise CI/CD pipelines and cloud infrastructure. This worm isn’t “just another” npm typo-squatting event. It directly targets:</p><ul><li><p class="paragraph" style="text-align:left;"><b>CI/CD runners</b> – Execution happens in <span style="color:rgb(24, 128, 56);">preinstall</span>, i.e., before dependencies resolve, when your build agents and secrets are fully exposed.</p></li><li><p class="paragraph" style="text-align:left;"><b>Cloud control planes</b> – It reaches into AWS/GCP/Azure creds across 17 AWS regions and exfiltrates them to attacker-controlled repos.</p></li><li><p class="paragraph" style="text-align:left;"><b>GitHub Actions fleet</b> – The worm installs <b>self-hosted runners</b> as durable, authenticated control channels into your infra.</p></li></ul><p class="paragraph" style="text-align:left;">If your org consumes affected packages some present in <b>~27% of code/cloud environments</b> in sampled scans you may have both a <b>supply-chain compromise</b> and a <b>cloud persistence</b> problem.</p><p class="paragraph" style="text-align:left;"><b>Immediate Actions:</b> <br>Start by scanning all endpoints for affected packages, rotate ALL credentials (npm tokens, GitHub PATs, SSH keys, AWS/GCP/Azure credentials), </p><p class="paragraph" style="text-align:left;"><b>CI/CD lockdown</b></p><ul><li><p class="paragraph" style="text-align:left;">Disable or tightly restrict <span style="color:rgb(24, 128, 56);">preinstall/postinstall</span> lifecycle scripts in build environments.</p></li><li><p class="paragraph" style="text-align:left;">Treat build agents like production workloads: restrictive egress, zero trust to GitHub, npm, etc.</p></li></ul><p class="paragraph" style="text-align:left;"><b>Incident playbook</b></p><ul><li><p class="paragraph" style="text-align:left;">Cross-check all repos for <span style="color:rgb(24, 128, 56);">&quot;Sha1-Hulud: The Second Coming&quot;</span> and suspicious self-hosted runners (e.g., <span style="color:rgb(24, 128, 56);">SHA1HULUD</span>).</p></li><li><p class="paragraph" style="text-align:left;">Rotate everything: npm tokens, GitHub PATs, SSH keys, cloud credentials used on any affected machine.</p></li></ul><p class="paragraph" style="text-align:left;"><b>AI SOC tie-in</b></p><ul><li><p class="paragraph" style="text-align:left;">This is exactly the sort of large-scale, high-noise event where AI SOC triage can shine auto-enriching build logs, access logs, and GitHub audit trails to spot <b>cross-tenant token reuse</b> that humans would burn out on.</p></li></ul><p class="paragraph" style="text-align:left;"><b>Read More:</b><a class="link" href="https://www.wiz.io/blog/shai-hulud-2-0-ongoing-supply-chain-attack?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=sha1-hulud-worm-exposes-25k-repos-lessons-from-building-trustworthy-ai-socs-for-regulated-environments" target="_blank" rel="noopener noreferrer nofollow"> Wiz Research</a> |<a class="link" href="https://www.mend.io/blog/shai-hulud-the-second-coming/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=sha1-hulud-worm-exposes-25k-repos-lessons-from-building-trustworthy-ai-socs-for-regulated-environments" target="_blank" rel="noopener noreferrer nofollow"> </a><a class="link" href="https://Mend.io?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=sha1-hulud-worm-exposes-25k-repos-lessons-from-building-trustworthy-ai-socs-for-regulated-environments" target="_blank" rel="noopener noreferrer nofollow">Mend.io</a><a class="link" href="https://www.mend.io/blog/shai-hulud-the-second-coming/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=sha1-hulud-worm-exposes-25k-repos-lessons-from-building-trustworthy-ai-socs-for-regulated-environments" target="_blank" rel="noopener noreferrer nofollow"> Analysis</a> |<a class="link" href="https://snyk.io/blog/sha1-hulud-npm-supply-chain-incident/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=sha1-hulud-worm-exposes-25k-repos-lessons-from-building-trustworthy-ai-socs-for-regulated-environments" target="_blank" rel="noopener noreferrer nofollow"> Snyk Advisory</a> |<a class="link" href="https://thehackernews.com/2025/11/second-sha1-hulud-wave-affects-25000.html?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=sha1-hulud-worm-exposes-25k-repos-lessons-from-building-trustworthy-ai-socs-for-regulated-environments" target="_blank" rel="noopener noreferrer nofollow"> The Hacker News</a></p><hr class="content_break"><h3 class="heading" style="text-align:left;" id="2-critical-fluent-bit-vulnerabiliti"><span style="color:rgb(27, 28, 29);"><b>2. </b></span><b>Critical Fluent Bit Vulnerabilities Expose 15+ Billion Cloud Deployments</b></h3><p class="paragraph" style="text-align:left;">Oligo Security disclosed five critical vulnerabilities in Fluent Bit, the open-source logging agent deployed over 15 billion times across AWS, Google Cloud, Azure, and Kubernetes environments. The most severe flaw, CVE-2025-12972, is a path-traversal vulnerability that has existed for over 8 years, allowing attackers to write or overwrite arbitrary files for remote code execution. Additional vulnerabilities enable authentication bypass, log tampering, tag spoofing, and stack buffer overflow attacks.</p><p class="paragraph" style="text-align:left;"><b>Why This Matters to Cloud Security Leaders:</b></p><p class="paragraph" style="text-align:left;">Fluent Bit sits at the heart of cloud observability infrastructure, processing logs, metrics, and traces before they reach SIEM platforms and detection systems. The tool is embedded in Kubernetes clusters, AI labs, banks, and all major cloud providers, making these vulnerabilities a systemic risk to cloud ecosystems.</p><p class="paragraph" style="text-align:left;">Fluent Bit sits between <b>your workloads</b> and <b>your SIEM/XDR</b>. If it’s compromised, an attacker can:</p><ul><li><p class="paragraph" style="text-align:left;">Hijack the <b>node</b> (DaemonSet → host-level RCE → cluster pivot).</p></li><li><p class="paragraph" style="text-align:left;"><b>Silence or rewrite</b> incriminating logs to defeat detection.</p></li><li><p class="paragraph" style="text-align:left;">Inject <b>fake telemetry</b> to mislead incident responders and AI-driven analytics.</p></li></ul><p class="paragraph" style="text-align:left;">That last bullet is especially relevant if you’re experimenting with <b>LLM-based detection</b>. If your logging substrate can be coerced into lying, AI just makes you confidently wrong, faster.</p><p class="paragraph" style="text-align:left;">The attack surface is particularly concerning for enterprise defenders: attackers could execute malicious code through Fluent Bit while dictating which events are recorded, erasing or rewriting incriminating entries to hide their tracks, injecting fake telemetry, and injecting plausible fake events to mislead responders. Because Fluent Bit is commonly deployed as a Kubernetes DaemonSet, a single compromised log agent can cascade into full node and cluster takeover. This directly connects to Grant Oviatt&#39;s discussion on traceability: if your logging infrastructure can be manipulated, the entire chain of evidence for AI SOC investigations becomes unreliable.</p><p class="paragraph" style="text-align:left;">The disclosure process itself revealed systemic issues: Despite multiple responsible disclosure attempts through official channels, it took over a week and the involvement of AWS before the vulnerabilities received sustained attention.</p><p class="paragraph" style="text-align:left;"><b>Immediate Actions:</b></p><p class="paragraph" style="text-align:left;"><b>Patch or replace immediately</b></p><ul><li><p class="paragraph" style="text-align:left;">Upgrade all agents to <b>v4.1.1 or v4.0.12</b>; use AWS Inspector / Security Hub / Systems Manager to locate vulnerable nodes at scale.</p></li></ul><p class="paragraph" style="text-align:left;"><b>Harden the log plane</b></p><ul><li><p class="paragraph" style="text-align:left;">Lock Fluent Bit configs as <b>read-only</b> where possible.</p></li><li><p class="paragraph" style="text-align:left;">Use <b>static tags + fixed paths</b> to reduce spoofing.</p></li><li><p class="paragraph" style="text-align:left;">Limit network access for Fluent Bit outputs to <b>known log backends</b> only.</p></li></ul><p class="paragraph" style="text-align:left;"><b>Monitor for integrity issues</b></p><ul><li><p class="paragraph" style="text-align:left;">Add controls to detect <b>tag spoofing</b>, missing logs, or anomalous routing changes from Fluent Bit pods.</p></li></ul><p class="paragraph" style="text-align:left;"><b>Read More:</b><a class="link" href="https://www.oligo.security/blog/critical-vulnerabilities-in-fluent-bit-expose-cloud-environments-to-remote-takeover?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=sha1-hulud-worm-exposes-25k-repos-lessons-from-building-trustworthy-ai-socs-for-regulated-environments" target="_blank" rel="noopener noreferrer nofollow"> Oligo Security Report</a> |<a class="link" href="https://www.csoonline.com/article/4095860/fluent-bit-vulnerabilities-could-enable-full-cloud-takeover.html?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=sha1-hulud-worm-exposes-25k-repos-lessons-from-building-trustworthy-ai-socs-for-regulated-environments" target="_blank" rel="noopener noreferrer nofollow"> CSO Online</a> |<a class="link" href="https://thehackernews.com/2025/11/new-fluent-bit-flaws-expose-cloud-to.html?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=sha1-hulud-worm-exposes-25k-repos-lessons-from-building-trustworthy-ai-socs-for-regulated-environments" target="_blank" rel="noopener noreferrer nofollow"> The Hacker News</a></p><hr class="content_break"><h3 class="heading" style="text-align:left;" id="3-oracle-identity-manager-zero-day-"><span style="color:rgb(27, 28, 29);"><b>3. </b></span><b>Oracle Identity Manager Zero-Day Exploited in the Wild Since August</b></h3><p class="paragraph" style="text-align:left;">CISA added CVE-2025-61757 to its Known Exploited Vulnerabilities catalog on November 21, a critical pre-authentication remote code execution flaw (CVSS 9.8) in Oracle Identity Manager. Searchlight Cyber researchers discovered active exploitation dating back to at least August 2025, possibly as a zero-day before Oracle&#39;s October 2025 patch. The vulnerability affects Oracle Identity Manager versions 11.1.2.3 and 12.2.1.3, allowing unauthenticated attackers to compromise systems over the network without user interaction.</p><p class="paragraph" style="text-align:left;"><b>Why This Matters to Cloud Security Leaders:</b></p><p class="paragraph" style="text-align:left;">Oracle Identity Manager is a critical component of enterprise IAM infrastructure, deployed across government agencies and large enterprises. The pre-authentication nature of this vulnerability means attackers can compromise systems without any credentials, making it a prime target for initial access. Given the centralized nature of IAM systems, a compromise here could provide attackers with &quot;keys to the kingdom&quot; across cloud and on-premises environments.</p><p class="paragraph" style="text-align:left;">The extended exploitation timeline is particularly concerning: evidence suggests the vulnerability may have been exploited as a zero-day before Oracle&#39;s October patch, giving attackers months of undetected access. CISA&#39;s inclusion in the KEV catalog indicates active targeting of federal systems, with federal agencies required to patch by December 12, 2025.</p><p class="paragraph" style="text-align:left;">OIM is core IAM infrastructure key points:</p><ul><li><p class="paragraph" style="text-align:left;">Pre-auth RCE on OIM is effectively a full<b> identity tier compromise</b>.</p></li><li><p class="paragraph" style="text-align:left;">Given the <b>months-long exploitation window</b>, assume adversaries have used it as an <b>initial access + persistence</b> vector.</p></li><li><p class="paragraph" style="text-align:left;">CISA has mandated US federal agencies patch by <b>12 Dec 2025</b>, signaling active exploitation at scale.</p></li></ul><p class="paragraph" style="text-align:left;"><b>Immediate Actions:</b> Apply Oracle&#39;s October 2025 Critical Patch Update immediately, conduct forensic review of Oracle Identity Manager logs dating back to August 2025, look for suspicious authentication patterns or privilege escalations, and review all identity provisioning activities during the exploitation window.</p><p class="paragraph" style="text-align:left;">Key points:<br>Patch OIM with Oracle’s <b>October 2025 CPU</b> as an emergency change.</p><p class="paragraph" style="text-align:left;">Assume breach since <b>Aug 2025</b>:</p><ul><li><p class="paragraph" style="text-align:left;">Review OIM logs for anomalous authentications, provisioning events, and privilege escalations.</p></li><li><p class="paragraph" style="text-align:left;">Treat OIM as a suspected <b>initial access vector</b> in any ongoing IR investigations.</p></li></ul><p class="paragraph" style="text-align:left;"><b>Read More:</b><a class="link" href="https://www.bleepingcomputer.com/news/security/cisa-warns-oracle-identity-manager-rce-flaw-is-being-actively-exploited/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=sha1-hulud-worm-exposes-25k-repos-lessons-from-building-trustworthy-ai-socs-for-regulated-environments" target="_blank" rel="noopener noreferrer nofollow"> BleepingComputer</a> |<a class="link" href="https://www.securityweek.com/cisa-confirms-exploitation-of-recent-oracle-identity-manager-vulnerability/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=sha1-hulud-worm-exposes-25k-repos-lessons-from-building-trustworthy-ai-socs-for-regulated-environments" target="_blank" rel="noopener noreferrer nofollow"> SecurityWeek</a> |<a class="link" href="https://thehackernews.com/2025/11/cisa-warns-of-actively-exploited.html?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=sha1-hulud-worm-exposes-25k-repos-lessons-from-building-trustworthy-ai-socs-for-regulated-environments" target="_blank" rel="noopener noreferrer nofollow"> The Hacker News</a></p><hr class="content_break"><h3 class="heading" style="text-align:left;" id="4-crowd-strike-terminates-insider-s"><span style="color:rgb(27, 28, 29);"><b>4. </b></span><b>CrowdStrike Terminates Insider Sharing Data with Hacker Group</b></h3><p class="paragraph" style="text-align:left;">CrowdStrike terminated an employee on November 21 for sharing internal screenshots with the hacker group &quot;Scattered Lapsus$ Hunters&quot; on Discord. The terminated employee had access to CrowdStrike&#39;s Slack and Confluence systems but did not have access to production systems or customer data. CrowdStrike emphasized that no systems were breached and the incident involved an authorized employee sharing limited internal screenshots.</p><p class="paragraph" style="text-align:left;"><b>Why This Matters to Cloud Security Leaders:</b></p><p class="paragraph" style="text-align:left;">This incident highlights the persistent insider threat risk facing cybersecurity vendors and cloud service providers. While CrowdStrike successfully detected and responded to the threat, the incident demonstrates how social engineering can target even security-conscious organizations. The hacker group attempted to use the screenshots as proof of a broader compromise, amplifying the incident through social media, a tactic that&#39;s increasingly common and can cause reputational damage disproportionate to actual risk.</p><p class="paragraph" style="text-align:left;">For enterprise security leaders, this serves as a reminder that insider threats don&#39;t always involve malicious intent from the outset. Employees can be manipulated into sharing information that seems innocuous but provides attackers with reconnaissance value. The targeting of CrowdStrike, months after the company&#39;s July 2025 global IT outage, suggests adversaries are opportunistically targeting organizations during periods of heightened scrutiny.</p><p class="paragraph" style="text-align:left;"><b>Key Takeaways:</b> Implement robust insider threat programs with behavioral analytics, educate employees on social engineering tactics targeting internal communications, restrict access to internal collaboration tools based on need-to-know principles, monitor for unauthorized data exfiltration from collaboration platforms, and have crisis communication plans for insider threat incidents.</p><p class="paragraph" style="text-align:left;">Key Points</p><ul><li><p class="paragraph" style="text-align:left;">Expand insider threat monitoring to include <b>collaboration platforms</b> and screenshot-heavy workflows.</p></li><li><p class="paragraph" style="text-align:left;">Train staff that <b>“just screenshots”</b> can still be sensitive.</p></li><li><p class="paragraph" style="text-align:left;">Ensure crisis comms plans explicitly cover <b>insider-driven “near miss” incidents</b>.</p></li></ul><p class="paragraph" style="text-align:left;"><b>Read More:</b><a class="link" href="https://techcrunch.com/2025/11/21/crowdstrike-fires-suspicious-insider-who-passed-information-to-hackers/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=sha1-hulud-worm-exposes-25k-repos-lessons-from-building-trustworthy-ai-socs-for-regulated-environments" target="_blank" rel="noopener noreferrer nofollow"> TechCrunch</a> |<a class="link" href="https://www.bleepingcomputer.com/news/security/crowdstrike-fires-insider-for-sharing-internal-info-with-hacking-group/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=sha1-hulud-worm-exposes-25k-repos-lessons-from-building-trustworthy-ai-socs-for-regulated-environments" target="_blank" rel="noopener noreferrer nofollow"> BleepingComputer</a></p><hr class="content_break"><h3 class="heading" style="text-align:left;" id="5-sec-drops-solar-winds-case-after-"><span style="color:rgb(27, 28, 29);"><b>5. </b></span><b>SEC Drops SolarWinds Case After Years of Legal Battle</b></h3><p class="paragraph" style="text-align:left;">The SEC voluntarily dismissed its securities fraud lawsuit against SolarWinds and CISO Timothy Brown on November 20, ending a case filed in October 2023 over alleged misrepresentations about cybersecurity practices before the 2020 supply chain breach. The dismissal came after a federal judge ruled in July 2025 that most of the SEC&#39;s claims could not proceed, narrowing the case significantly.</p><p class="paragraph" style="text-align:left;"><b>Why This Matters to Cloud Security Leaders:</b></p><p class="paragraph" style="text-align:left;">The SEC&#39;s decision to abandon the SolarWinds case represents a significant shift in regulatory enforcement around cybersecurity disclosures. While this may bring relief to many companies and CISOs concerned about the chilling effect on proactive security work, organizations must still proceed carefully when making public statements about their security programs.</p><p class="paragraph" style="text-align:left;">In the wake of cyber incidents, federal, state, and international regulators may scrutinize cybersecurity disclosures as evidence of negligence. This includes the SEC&#39;s 2023 requirements for companies to disclose material cyber risks and incidents to investors. Effective governance around drafting and vetting cybersecurity statements and disclosures remains critical for public companies.</p><p class="paragraph" style="text-align:left;"><b>Strategic Implications:</b> Review public-facing security statements for accuracy and consistency with actual practices, ensure board-level oversight of cybersecurity risk disclosures, maintain detailed documentation of security investments and improvements, and prepare for continued regulatory scrutiny even with shifting enforcement priorities.</p><p class="paragraph" style="text-align:left;">Key Points</p><ul><li><p class="paragraph" style="text-align:left;">The dismissal is widely seen as a <b>setback</b> to the SEC’s attempt to pursue <b>personal liability</b> for CISOs around security disclosures.</p></li><li><p class="paragraph" style="text-align:left;">But it should <i>not</i> be read as a retreat from <b>cyber disclosure expectations</b> if anything, it will refocus regulators on cases with clearer evidence trails.</p></li><li><p class="paragraph" style="text-align:left;">For cloud-heavy enterprises, this reinforces the need for <b>defensible, documented risk narratives</b> around supply chain, cloud concentration, and AI usage not marketing-driven security postures.</p></li></ul><p class="paragraph" style="text-align:left;"><b>Read More:</b><a class="link" href="https://www.insideprivacy.com/cybersecurity-2/sec-voluntarily-dismisses-solarwinds-litigation/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=sha1-hulud-worm-exposes-25k-repos-lessons-from-building-trustworthy-ai-socs-for-regulated-environments" target="_blank" rel="noopener noreferrer nofollow"> Inside Privacy</a> |<a class="link" href="https://thehackernews.com/2025/11/sec-drops-solarwinds-case-after-years.html?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=sha1-hulud-worm-exposes-25k-repos-lessons-from-building-trustworthy-ai-socs-for-regulated-environments" target="_blank" rel="noopener noreferrer nofollow"> The Hacker News</a> |<a class="link" href="https://www.cybersecuritydive.com/news/sec-drops-civil-fraud-case-solarwinds/806126/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=sha1-hulud-worm-exposes-25k-repos-lessons-from-building-trustworthy-ai-socs-for-regulated-environments" target="_blank" rel="noopener noreferrer nofollow"> Cybersecurity Dive</a></p><hr class="content_break"><h2 class="heading" style="text-align:left;" id="cloud-security-topic-of-the-week">🎯 Cloud Security Topic of the Week: </h2><h3 class="heading" style="text-align:left;" id="can-you-trust-an-ai-soc-in-a-regula"><b>Can You Trust an AI SOC in a Regulated Environment?</b></h3><p class="paragraph" style="text-align:left;">This week’s theme: if you want AI-assisted SOC in a regulated org, you have to <b>design for trust first</b>, not bolt it on later.</p><p class="paragraph" style="text-align:left;"><b>The 3 trust pillars for regulated AI SOCs</b></p><ol start="1"><li><p class="paragraph" style="text-align:left;"><b>Explainability</b> – Every conclusion must read like a senior analyst’s notes, not an opaque score.</p></li><li><p class="paragraph" style="text-align:left;"><b>Traceability</b> – Line-by-line evidence of every query, API call, and log used in the investigation.</p></li><li><p class="paragraph" style="text-align:left;"><b>Sovereignty</b> – A data plane that can live in the customer’s cloud, with <b>single-tenant</b> isolation and BYO-model gateways.</p></li></ol><p class="paragraph" style="text-align:left;"><b>Our takeaway</b><br> If you can’t:</p><ul><li><p class="paragraph" style="text-align:left;">Explain why an alert was closed,</p></li><li><p class="paragraph" style="text-align:left;">Show every step the AI took, and</p></li><li><p class="paragraph" style="text-align:left;">Prove where the data lived and who controlled it,</p><p class="paragraph" style="text-align:left;"> …you don’t have a regulated-ready AI SOC yet. You have an experiment. <span style="color:rgb(24, 128, 56);">[</span><a class="link" href="https://www.cloudsecuritypodcast.tv/videos/how-to-build-trust-in-an-ai-soc-for-regulated-environments?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=sha1-hulud-worm-exposes-25k-repos-lessons-from-building-trustworthy-ai-socs-for-regulated-environments" target="_blank" rel="noopener noreferrer nofollow">Listen to the full Episode: Grant Oviatt on building a regulated-ready AI SOC</a><span style="color:rgb(24, 128, 56);">]</span></p></li></ul><h2 class="heading" style="text-align:left;" id="featured-experts-this-week"><b>Featured Experts This Week </b>🎤</h2><ul><li><p class="paragraph" style="text-align:left;"><span style="color:rgb(27, 28, 29);"><a class="link" href="https://www.linkedin.com/in/grant-oviatt-882111a0/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=sha1-hulud-worm-exposes-25k-repos-lessons-from-building-trustworthy-ai-socs-for-regulated-environments" target="_blank" rel="noopener noreferrer nofollow"><b>Grant Oviatt </b></a></span><span style="color:rgb(27, 28, 29);"><b>- </b></span>Head of Security Operations, Prophet Security</p></li><li><p class="paragraph" style="text-align:left;"><b><a class="link" href="https://www.linkedin.com/in/ashishrajan/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=sha1-hulud-worm-exposes-25k-repos-lessons-from-building-trustworthy-ai-socs-for-regulated-environments" target="_blank" rel="noopener noreferrer nofollow">Ashish Rajan</a></b> - CISO | Co-Host <a class="link" href="https://open.spotify.com/show/3nV4eijfzdHKIvDOaycVII?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=sha1-hulud-worm-exposes-25k-repos-lessons-from-building-trustworthy-ai-socs-for-regulated-environments" target="_blank" rel="noopener noreferrer nofollow">AI Security Podcast</a> , Host of <a class="link" href="https://www.cloudsecuritypodcast.tv/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=sha1-hulud-worm-exposes-25k-repos-lessons-from-building-trustworthy-ai-socs-for-regulated-environments" target="_blank" rel="noopener noreferrer nofollow">Cloud Security Podcast</a></p></li></ul><h2 class="heading" style="text-align:left;" id="definitions-and-core-concepts"><b>Definitions and Core Concepts 📚</b></h2><p class="paragraph" style="text-align:left;">Before diving into our insights, let&#39;s clarify some key terms:</p><ul><li><p class="paragraph" style="text-align:left;"><b>AI SOC (AI Security Operations Center):</b> A security operations center that leverages artificial intelligence agents to perform triage, investigation, and response activities traditionally handled by human security analysts. Modern AI SOCs use large language models and specialized agents to query security tools, analyze logs, and make investigative decisions with human oversight.</p></li><li><p class="paragraph" style="text-align:left;"><b>Explainability:</b> In the context of AI SOC operations, explainability refers to the ability to understand and articulate how an AI agent reached a specific decision. This includes visibility into the reasoning process, the data considered, and the logic applied similar to how a human analyst would explain their investigative conclusions.</p></li><li><p class="paragraph" style="text-align:left;"><b>Traceability:</b> The ability to track the complete lineage of data and decisions from raw inputs through transformation and analysis to final outputs. In AI SOC environments, this means documenting every query issued, every API call made, every log examined, and every decision point in an investigation</p></li><li><p class="paragraph" style="text-align:left;"><b>Single-Tenant AI SOC Architecture:</b> A deployment model where each customer&#39;s data, processing, and AI models operate in complete isolation from other customers. This ensures no cross-contamination of data and allows customers to maintain complete control over their security evidence and analysis.</p></li><li><p class="paragraph" style="text-align:left;"><b>Data Plane:</b> The component of a system that handles the actual processing </p><p class="paragraph" style="text-align:left;">and storage of customer data, as distinct from the control plane that manages configuration and orchestration. In AI SOC contexts, customers in regulated industries often require the data plane to reside within their own cloud environment for sovereignty and compliance.</p></li><li><p class="paragraph" style="text-align:left;"><b>Model Gateway (BYO model):</b> An org-controlled proxy in front of LLMs that enforces which models can see what data (e.g., no PHI to public models), logs every prompt/response, and allows AI SOC traffic to flow through the same AI governance stack as your product workloads.</p></li></ul><hr class="content_break"><p class="paragraph" style="text-align:center;">This week&#39;s issue is sponsored by <b><a class="link" href="https://links.cloudsecuritypodcast.tv/brinqa-nov25-see-risk-differently?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=sha1-hulud-worm-exposes-25k-repos-lessons-from-building-trustworthy-ai-socs-for-regulated-environments" target="_blank" rel="noopener noreferrer nofollow">Brinqa</a></b></p><p class="paragraph" style="text-align:center;">What If You Could See Risk Differently? </p><p class="paragraph" style="text-align:left;">On Nov 19, Brinqa experts will show how a shift in perspective, adding context, can change everything about how you prioritize risk. Fast-paced, real, and surprisingly fun.</p><p class="paragraph" style="text-align:center;"><a class="link" href="https://links.cloudsecuritypodcast.tv/csa-ai-soc-report-dropzone-oct2025?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=sha1-hulud-worm-exposes-25k-repos-lessons-from-building-trustworthy-ai-socs-for-regulated-environments" target="_blank" rel="noopener noreferrer nofollow">📤 </a><b><a class="link" href="https://links.cloudsecuritypodcast.tv/brinqa-nov25-see-risk-differently?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=sha1-hulud-worm-exposes-25k-repos-lessons-from-building-trustworthy-ai-socs-for-regulated-environments" target="_blank" rel="noopener noreferrer nofollow">Register here</a></b></p><div class="image"><a class="image__link" href="https://links.cloudsecuritypodcast.tv/brinqa-nov25-see-risk-differently?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=sha1-hulud-worm-exposes-25k-repos-lessons-from-building-trustworthy-ai-socs-for-regulated-environments" rel="noopener" target="_blank"><img alt="" class="image__image" style="" src="https://media.beehiiv.com/cdn-cgi/image/fit=scale-down,format=auto,onerror=redirect,quality=80/uploads/asset/file/8dfbc1e2-c852-4172-9486-17ae541146e0/Webinar_Email_CTA_600x250.png?t=1762987268"/></a></div><hr class="content_break"><h2 class="heading" style="text-align:left;" id="our-insights-from-this-practitioner">💡<b>Our Insights from this Practitioner 🔍</b></h2><h3 class="heading" style="text-align:left;" id="how-to-build-trust-in-an-ai-soc-for"><b>How to Build Trust in an AI SOC for Regulated Environments </b><b>(</b><a class="link" href="https://www.cloudsecuritypodcast.tv/videos/how-to-build-trust-in-an-ai-soc-for-regulated-environments?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=sha1-hulud-worm-exposes-25k-repos-lessons-from-building-trustworthy-ai-socs-for-regulated-environments" target="_blank" rel="noopener noreferrer nofollow"><b>Full Episode here</b></a><b>)</b></h3><h3 class="heading" style="text-align:left;" id="the-evolution-from-ai-skeptic-to-ai"><b>The Evolution from AI Skeptic to AI SOC Advocate</b></h3><p class="paragraph" style="text-align:left;">Grant Oviatt&#39;s journey from AI skeptic to building enterprise-scale AI SOC solutions mirrors the transformation happening across the security operations industry. &quot;<i>If I think of 15 years ago with AI, anything AI detection in it scared me to death and I was very skeptical,&quot; Grant explains. &quot;I knew it was gonna be the highest false positive in the entire company.</i>&quot;</p><p class="paragraph" style="text-align:left;">What changed? Grant&#39;s perspective shifted when he began experimenting with breaking down security investigations into composable pieces that AI agents could tackle with high consistency. &quot;<i>When it was great, it was great,</i>&quot; he notes. &quot;<i>It started becoming more of a consistency problem of how can you chunk up this problem of doing a security investigation that security operators think of into small bite-sized pieces that agents can tackle really well instead of trying to eat the elephant.</i>&quot;</p><p class="paragraph" style="text-align:left;">This architectural approach decomposing complex security investigations into discrete agent tasks has proven successful at Prophet Security, where they&#39;ve achieved remarkable results: &quot;<i>Our average time to complete an investigation is right around four minutes&quot; compared to traditional SOC teams that &quot;won&#39;t even start to look at an alert in four minutes, much less complete the investigation in that time</i>.&quot;</p><p class="paragraph" style="text-align:left;">The consistency problem that Grant identified is critical for regulated environments. In a recent customer engagement, Prophet Security processed 12,000 investigations over two weeks and &quot;<i>had 99.3% agreement between their security operations team and Prophet during that 12,000 investigation stint. We were significantly faster with it. It was 11x the difference in the meantime to investigate.</i>&quot;</p><h3 class="heading" style="text-align:left;" id="the-twin-pillars-explainability-and"><b>The Twin Pillars: Explainability and Traceability</b></h3><p class="paragraph" style="text-align:left;">For organizations in regulated industries healthcare, financial services, government trust in AI SOC capabilities rests on two fundamental requirements that Grant identifies: explainability and traceability.</p><p class="paragraph" style="text-align:left;"><b>Explainability</b> addresses whether the AI&#39;s decision-making process is reasonable and understandable. &quot;<i>Given this particular situation, is it reasonable to make the decision here</i>,&quot; Grant explains. &quot;<i>In an explainable way, is the analysis understandable and reasonable for a security practitioner?&quot;</i> This mirrors how human analysts would be evaluated not just on their conclusions, but on whether their reasoning makes sense given the evidence available.</p><p class="paragraph" style="text-align:left;"><b>Traceability</b> focuses on the data lineage throughout an investigation. &quot;<i>What queries are the AI SOC issuing API requests issuing across different technologies? What queries is it issuing across your SIEM? What are the requests that it&#39;s making? What are the responses that it&#39;s returning?</i>&quot; Grant details. &quot;<i>Really tracing the inputs and the data transformation that&#39;s happening.&quot;</i></p><p class="paragraph" style="text-align:left;">The difference from traditional SOC operations is striking. Where human analysts might copy-paste a few relevant log entries into their investigation notes, AI SOC platforms capture everything: &quot;<i>The average investigation is 40 to 50 queries across six different tools in a customer&#39;s environment. Instead of just having best effort copying and pasting of logs from a traceability perspective, you have a line by line detail of everything that was gathered, all the evidence that was gathered, and all the decision making that happened along the way.</i>&quot;</p><p class="paragraph" style="text-align:left;">Grant emphasizes that regulators are demanding a &quot;<i>10x magnification</i>&quot; in explainability and transparency for AI SOC compared to human-driven operations, &quot;<i>just given the skepticism with AI and the work product.</i>&quot; This heightened standard actually benefits customers even those not in regulated industries by providing unprecedented visibility into security investigations.</p><p class="paragraph" style="text-align:left;"><b>Design move:</b> Reject any AI SOC or MCP setup where you <i>can’t</i> see the raw queries, tool calls, and evidence chain for each decision. If you can’t audit it, you can’t defend it to a regulator.</p><h3 class="heading" style="text-align:left;" id="architecture-for-data-sovereignty-a"><b>Architecture for Data Sovereignty and Compliance</b></h3><p class="paragraph" style="text-align:left;">One of the most critical architectural decisions for AI SOC in regulated environments involves data management. Grant outlines Prophet Security&#39;s approach: &quot;<i>We&#39;re not a SIEM so we don&#39;t require you to stream all of your data to Prophet to make decisions. We make point in time queries across your different security tools to make decisions much like a person would</i>.&quot;</p><p class="paragraph" style="text-align:left;">This architecture dramatically reduces data exposure. &quot;<i>The subset of data that we&#39;re processing is much, much, much smaller and much more focused, which gets us out of a lot of problems from a data management perspective,</i>&quot; Grant explains. Customers also maintain granular control: &quot;<i>Customers have total control of what they want to send to us and the capabilities that Prophet can issue.</i>&quot;</p><p class="paragraph" style="text-align:left;">For healthcare organizations dealing with HIPAA compliance, this flexibility is essential. Grant shares an example: &quot;<i>We have a healthcare regulated customer that said, &#39;Hey, my HIPAA data, I&#39;m just not even gonna put that in the purview of Prophet to look at.&#39; And we&#39;re gonna start without going down that path and we can explore it later</i>.&quot;</p><p class="paragraph" style="text-align:left;">Beyond selective data exposure, Prophet Security offers a &quot;<i>bring your own cloud&quot;</i> model where &quot;<i>the data plane of our environment can live defensively within the organization&#39;s perimeter. This has been really meaningful for financial organizations for us. There&#39;s defensibility that they own all of the evidence and data. They can remove our access at any time or blow away the data plane and all of that raw evidence is gone.</i>&quot;</p><p class="paragraph" style="text-align:left;">This architectural approach addresses a fundamental concern in regulated industries: who controls the security evidence, and can that access be revoked immediately if needed?</p><h3 class="heading" style="text-align:left;" id="single-tenant-architecture-and-the-"><b>Single-Tenant Architecture and the Training Data Question</b></h3><p class="paragraph" style="text-align:left;">One of the most common concerns Grant encounters is whether customer data is being used to train AI models. His answer is unequivocal: &quot;<i>Our architecture&#39;s entirely single tenant. There&#39;s no cross contamination and you can think of it like onboarding a new security staff member to the team, like it&#39;s a new analyst. All the learning that happens within your tenant stays in your tenant.</i>&quot;</p><p class="paragraph" style="text-align:left;">Prophet Security enforces this &quot;<i>to a contractual level that your data is never used for training in improving the model outside of it improves the product for you, but there&#39;s no model training on the raw data.&quot; This single-tenant approach is &quot;another huge component that&#39;s meaningful for regulated environments.</i>&quot;</p><p class="paragraph" style="text-align:left;">Grant also addresses the emerging trend of model portability: &quot;<i>Model portability is something that we&#39;ve invested in as well, where regulated industries may have their own model gateway, where there&#39;s specific models that they are comfortable with and others that they&#39;re not.&quot; This allows organizations to &quot;manage the traceability of the inputs going to the model, they&#39;re paying the outbound costs and able to manage that through their own AI governance process.</i>&quot;</p><p class="paragraph" style="text-align:left;"><b>Design move:</b> Treat “no cross-tenant training on my data” as a non-negotiable contractual requirement for regulated workloads.</p><h3 class="heading" style="text-align:left;" id="heading-3"></h3><h3 class="heading" style="text-align:left;" id="ai-soc-vs-mdr-replacement-or-comple"><b>AI SOC vs. MDR: Replacement or Complement?</b></h3><p class="paragraph" style="text-align:left;">For organizations already invested in Managed Detection and Response services, Grant sees AI SOC as both a potential replacement and a complementary solution, depending on the use case.</p><p class="paragraph" style="text-align:left;">&quot;<i>We have many customers that are moving from MDR to AI SOC as their holistic approach for doing investigations,&quot;</i> Grant notes. The speed advantage is compelling: traditional MDR teams &quot;<i>won&#39;t even start to look at an alert in four minutes, much less complete the investigation in that time. So there&#39;s just sort of a response velocity that can&#39;t be contended with.</i>&quot;</p><p class="paragraph" style="text-align:left;">However, some organizations take a hybrid approach. &quot;<i>We have a few customers that coexist and have both,</i>&quot; Grant explains. &quot;<i>AI SOCs aren&#39;t scared by custom detections or things that your team has generated. That&#39;s often 20 to 30% of the burden of a security operations team, where they have specific applications or custom detections that their security engineering team has developed that their MDR can&#39;t reasonably look at.</i>&quot;</p><p class="paragraph" style="text-align:left;">The scalability difference is fundamental: MDR providers serve hundreds or thousands of customers and &quot;<i>focused on the base case that&#39;s consistent across all of those customers</i>.&quot; Custom detections don&#39;t scale for human-driven MDR, but &quot;<i>on an AI SOC level, you know, one that&#39;s deployed for your environment specifically, it scales perfectly well.</i>&quot;</p><p class="paragraph" style="text-align:left;">Grant sees customers using this division of labor: &quot;<i>My MDR contract is going for another two years or three years, but I still have all these custom detections that are a problem for me. I would love to put AI SOC on that and then visit if this is a replacement or expand my budget spend to bring Prophet in in a broader way when that renewal is over.</i>&quot;</p><h3 class="heading" style="text-align:left;" id="the-reality-of-building-your-own-ai"><b>The Reality of Building Your Own AI SOC</b></h3><p class="paragraph" style="text-align:left;">For organizations considering building AI SOC capabilities internally, Grant offers a realistic assessment: &quot;<i>It&#39;s a really hard problem. I think it&#39;s a fun thing to experiment with. I think there&#39;s a lot you can do on your own to build sort of enrichment and build context, but trusting decision making and that consistency with AI agents, very hard to do in an internal organization.</i>&quot;</p><p class="paragraph" style="text-align:left;">Grant has encountered creative attempts where teams &quot;<i>built a workflow that&#39;s in their store or something else where they go grab data and then they make an LLM call and then they send the data back and they make another LLM call. And it&#39;s very strict logic to try to make a single investigation work.</i>&quot; The problem? &quot;<i>Now I&#39;ve gotta do it for the other 500 detections in my environment. And I was like, that sounds like a chore. That doesn&#39;t sound like an answer to your problem. You&#39;re just focusing your energy in a different place</i>.&quot;</p><p class="paragraph" style="text-align:left;">The operational challenges extend beyond initial implementation. &quot;<i>Model providers change their models, and so without writing a lot of code, you might see a 3% improvement somewhere else, or a 0.5% degradation in consistency or quality of analysis,</i>&quot; Grant explains. &quot;<i>We have an entire machine learning team that is looking at all the different model providers out there. You have a whole operational team that&#39;s kind of wrangling the weather, so to speak.</i>&quot;</p><p class="paragraph" style="text-align:left;">Grant&#39;s recommendation: &quot;<i>Play around with it, get an understanding of how prompts work and how models work. But when you&#39;re looking to scale this and trust this and not worry about security problems, I would work, try out some AI SOCs and see how it compares to what you build.</i>&quot;</p><h3 class="heading" style="text-align:left;" id="the-model-context-protocol-caution"><b>The Model Context Protocol Caution</b></h3><p class="paragraph" style="text-align:left;">For security leaders evaluating AI SOC technologies, Grant issues a specific warning about the Model Context Protocol (MCP): &quot;<i>MCP adds a whole other layer where there are MCP servers where you want to make it very easy to grab this information from your environment. But when you talk about regulated environments specifically, transparency is lost.</i>&quot;</p><p class="paragraph" style="text-align:left;">The problem is fundamental: &quot;<i>When I ask an MCP agent a question, it&#39;ll give me an answer, but it won&#39;t give me the query that it ran to generate the answer. And so there&#39;s a mismatch in auditability from our perspective to make this clear to you, a human, as to what happened, and that break in the chain is just one too many black boxes in the cycle.</i>&quot;</p><p class="paragraph" style="text-align:left;">Prophet Security made a conscious decision to &quot;<i>build our own collection agents for that transparency piece</i>&quot; rather than rely on MCP. Grant is &quot;<i>hopeful that MCP agents can expand to be more scrutinous on the auditability side and have some sort of REST API response that was queried on their end and tracked along with the results and send this entire package over to maintain that evidence chain of custody.</i>&quot;</p><p class="paragraph" style="text-align:left;">For organizations considering building their own AI SOC: &quot;MCP is the fastest way to do it in a lot of cases, but who knows what&#39;s happening on a bunch of different levels at that point.&quot;</p><h3 class="heading" style="text-align:left;" id="red-lines-what-ai-shouldnt-automate"><b>Red Lines: What AI Shouldn&#39;t Automate (Yet)</b></h3><p class="paragraph" style="text-align:left;">When it comes to automated remediation, Grant advocates for constrained creativity: &quot;<i>Agents are very creative, maybe a nicer way of saying probabilistic. If you were to give an AI agent, &#39;Hey, go and remediate this file on a system,&#39; there&#39;s several different ways that it could go and do that.</i>&quot;</p><p class="paragraph" style="text-align:left;">His approach: &quot;<i>I am a big believer today in strict coupling of, instead of AI having creativity on the query part, there&#39;s a specific API that an agent is allowed to access. It performs one function, and this is the capability that the agent has in your environment.</i>&quot;</p><p class="paragraph" style="text-align:left;">Grant recommends treating remediation actions &quot;<i>more as a traditional integration in that sense, where AI SOC is managing the reasoning of when this is appropriate. But the action is a single call or a series of calls that are more deterministic</i>.&quot;</p><p class="paragraph" style="text-align:left;">The risk assessment is straightforward: &quot;<i>If agents are inspired and agentically performing remediative actions in your environment rather than coming up with the playbook of things that you should do and aligning those to REST API calls, that gets a little scary in redlining it in my opinion.</i>&quot;</p><p class="paragraph" style="text-align:left;">For organizations comfortable with MDR-driven remediation, the risk profile is actually lower with AI SOC: <i>&quot;If you&#39;re working with subprocessors or MDRs that are using your PHI or PII data to do investigations, you&#39;re probably comfortable having an AI SOC do the same thing since it&#39;s the same process and honestly less risk because there&#39;s no human that&#39;s going to go and copy and paste this to some resource and share it on the internet.</i>&quot;</p><p class="paragraph" style="text-align:left;"><b>Design move:</b> Let the AI decide <i>when</i> to remediate; keep <i>how</i> locked to a small set of deterministic, pre-approved API calls.</p><h3 class="heading" style="text-align:left;" id="the-path-to-autonomous-operations"><b>The Path to Autonomous Operations</b></h3><p class="paragraph" style="text-align:left;">Currently, Prophet Security closes &quot;<i>95 plus percent of our investigations automatically as false positive with all of that explainability and transparency. For that additional 5% of things that are malicious or that we have a question on, we think bringing in a human is still the right approach today just to have eyes on, validate the activity, confirm remediation actions and move on.</i>&quot;</p><p class="paragraph" style="text-align:left;">But Grant sees this evolving: &quot;<i>I think that&#39;s more of where the market is and less of where the product is. I think we&#39;re gonna move more and more to a state where only if you have an issue, a threat has been identified and it&#39;s been remediated in three minutes. This has all happened. You get a rollup report of what&#39;s been observed and your team isn&#39;t escalated to because the threat was mitigated. There&#39;s no further action to do.</i>&quot;</p><p class="paragraph" style="text-align:left;">The technology is ready, but operations teams may not be: &quot;<i>I actually don&#39;t think the operations teams are ready to be there, and that&#39;s okay. I think trust becomes the important element</i>.&quot; This mirrors the broader theme of Grant&#39;s insights that successful AI SOC adoption in regulated or non-regulated environments ultimately comes down to building trust through transparency, explainability, and demonstrated consistency.</p><h3 class="heading" style="text-align:left;" id="the-future-from-investigation-to-se"><b>The Future: From Investigation to Security Program Management</b></h3><p class="paragraph" style="text-align:left;">Looking forward, Grant sees AI SOC evolving beyond individual alert triage: &quot;<i>Where I see AI SOC continuing to move is taking the groundwork out of SOC operations, the grunt work level tasks out of people&#39;s purview. Instead, operate more as a manager in the loop where I&#39;m managing my detection program.&quot;</i></p><p class="paragraph" style="text-align:left;">This includes capabilities like threat hunting: &quot;<i>How can I ask bigger hypothesis driven questions and have AI systems go and pull larger sets of data for me and start to find unidentified threats in my environment?&quot;</i></p><p class="paragraph" style="text-align:left;">It also encompasses detection management and posture: &quot;<i>How can I look over alerts that I&#39;ve seen in the past, identify my gaps in line with like MITRE ATT&CK framework or similar and suggest tuning recommendations?&quot;</i></p><p class="paragraph" style="text-align:left;">The vision is for security professionals to shift from conducting individual investigations to orchestrating an AI-driven security program: <i>&quot;Have that orchestration happen by an army of agents and you&#39;re getting the feedback to make decisions on whether this is helpful or expending energy that&#39;s unneeded in your organization.&quot;</i></p><p class="paragraph" style="text-align:left;">Grant&#39;s perspective: <i>&quot;I continue to think that AI SOC is going to shift to build security programs that I honestly dreamed of in past places.&quot;</i></p><hr class="content_break"><h2 class="heading" style="text-align:left;" id="related-resources">RELATED RESOURCES</h2><ul><li><p class="paragraph" style="text-align:left;"><a class="link" href="https://attack.mitre.org/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=sha1-hulud-worm-exposes-25k-repos-lessons-from-building-trustworthy-ai-socs-for-regulated-environments" target="_blank" rel="noopener noreferrer nofollow">MITRE ATT&CK Framework</a> - Knowledge base of adversary tactics and techniques for detection planning</p></li><li><p class="paragraph" style="text-align:left;"><a class="link" href="https://www.nist.gov/itl/ai-risk-management-framework?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=sha1-hulud-worm-exposes-25k-repos-lessons-from-building-trustworthy-ai-socs-for-regulated-environments" target="_blank" rel="noopener noreferrer nofollow">NIST AI Risk Management Framework</a> - Framework for managing AI system risks in regulated environments</p></li><li><p class="paragraph" style="text-align:left;"><a class="link" href="https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=sha1-hulud-worm-exposes-25k-repos-lessons-from-building-trustworthy-ai-socs-for-regulated-environments" target="_blank" rel="noopener noreferrer nofollow">EU AI Act Compliance Resources</a> - European Union AI regulation guidance</p></li><li><p class="paragraph" style="text-align:left;"><a class="link" href="https://www.fedramp.gov/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=sha1-hulud-worm-exposes-25k-repos-lessons-from-building-trustworthy-ai-socs-for-regulated-environments" target="_blank" rel="noopener noreferrer nofollow">FedRAMP Authorization for Cloud Services</a> - Federal authorization program for cloud service offerings</p></li></ul><hr class="content_break"><h2 class="heading" style="text-align:left;" id="related-podcast-episodes"><b>Related Podcast Episodes 🎧</b></h2><div class="image"><a class="image__link" href="https://www.cloudsecuritypodcast.tv/videos/how-to-build-trust-in-an-ai-soc-for-regulated-environments?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=sha1-hulud-worm-exposes-25k-repos-lessons-from-building-trustworthy-ai-socs-for-regulated-environments" rel="noopener" target="_blank"><img alt="" class="image__image" style="" src="https://media.beehiiv.com/cdn-cgi/image/fit=scale-down,format=auto,onerror=redirect,quality=80/uploads/asset/file/32bc6a4f-8cd9-4dc0-9e55-6da6685c22d8/Grant_Oviatt-ProphetSecurity-YT-Thumbnail.jpg?t=1764193826"/></a><div class="image__source"><a class="image__source_link" href="https://www.cloudsecuritypodcast.tv/videos/how-to-build-trust-in-an-ai-soc-for-regulated-environments?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=sha1-hulud-worm-exposes-25k-repos-lessons-from-building-trustworthy-ai-socs-for-regulated-environments" rel="noopener" target="_blank"><span class="image__source_text"><p>How to Build Trust in an AI SOC for Regulated Environments</p></span></a></div></div><h3 class="heading" style="text-align:left;" id="question-for-you-reply-to-this-emai">Question for you? (Reply to this email)</h3><p class="paragraph" style="text-align:left;"> 🤖 <b>What would it take for you to let an AI SOC auto-close alerts?</b><br> Choose 1 or both or something else:</p><ul><li><p class="paragraph" style="text-align:left;">A <b>time threshold</b> (e.g., “sub-5-minute investigations”), </p></li><li><p class="paragraph" style="text-align:left;">The <b>minimum audit trail</b> you’d need to sign off.</p></li></ul><p class="paragraph" style="text-align:left;">Next week, we&#39;ll explore another critical aspect of cloud security. Stay tuned!</p><hr class="content_break"><p class="paragraph" style="text-align:left;">📬 Want weekly expert takes on AI & Cloud Security? [<a class="link" href="https://www.cloudsecuritynewsletter.com/subscribe?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=sha1-hulud-worm-exposes-25k-repos-lessons-from-building-trustworthy-ai-socs-for-regulated-environments" target="_blank" rel="noopener noreferrer nofollow">Subscribe here</a>]”</p><hr class="content_break"><p class="paragraph" style="text-align:left;"><span style="color:rgb(238, 40, 60);"><b><a class="link" href="mailto:info@cloudsecuritypodcast.tv" target="_blank" rel="noopener noreferrer nofollow">We would love to hear from you</a></b></span>📢 for a feature or topic request or if you would like to sponsor an edition of Cloud Security Newsletter. </p><p class="paragraph" style="text-align:left;">Thank you for continuing to subscribe and Welcome to the new members in tis newsletter community💙</p><p class="paragraph" style="text-align:start;">Peace!</p><p class="paragraph" style="text-align:start;"><a class="link" href="https://www.linkedin.com/in/shilpi-bhattacharjee/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=sha1-hulud-worm-exposes-25k-repos-lessons-from-building-trustworthy-ai-socs-for-regulated-environments" target="_blank" rel="noopener noreferrer nofollow">Shilpi Bhattacharjee</a></p><div class="image"><a class="image__link" href="https://www.cloudsecuritypodcast.tv/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=sha1-hulud-worm-exposes-25k-repos-lessons-from-building-trustworthy-ai-socs-for-regulated-environments" rel="noopener" target="_blank"><img alt="" class="image__image" style="" src="https://media.beehiiv.com/cdn-cgi/image/fit=scale-down,format=auto,onerror=redirect,quality=80/uploads/asset/file/dc094a1c-678a-43e0-adc9-1bee6c3499e2/CSP_Logo_Blue_ScreenRes_3000x3000_v2.jpg"/></a></div><div class="button" style="text-align:left;"><a target="_blank" rel="noopener nofollow noreferrer" class="button__link" style="" href="{{rp_referral_hub_url}}"><span class="button__text" style=""> Share the newsletter </span></a></div><p class="paragraph" style="text-align:left;">Was this forwarded to you? You can <a class="link" href="https://www.cloudsecuritynewsletter.com/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=sha1-hulud-worm-exposes-25k-repos-lessons-from-building-trustworthy-ai-socs-for-regulated-environments" target="_blank" rel="noopener noreferrer nofollow">Sign up here</a>, to join our growing readership.</p><p class="paragraph" style="text-align:left;">Want to <b>sponsor</b> the next newsletter edition! <a class="link" href="mailto:info@cloudsecuritypodcast.tv" target="_blank" rel="noopener noreferrer nofollow">Lets make it happen </a></p><p class="paragraph" style="text-align:left;">Have you joined our FREE <b>Monthly</b> <a class="link" href="https://www.cloudsecuritybootcamp.com/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=sha1-hulud-worm-exposes-25k-repos-lessons-from-building-trustworthy-ai-socs-for-regulated-environments" target="_blank" rel="noopener noreferrer nofollow">Cloud Security Bootcamp</a> yet?</p><p class="paragraph" style="text-align:left;">checkout our <b>sister podcast</b> <a class="link" href="https://www.youtube.com/@AISecurityPodcast?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=sha1-hulud-worm-exposes-25k-repos-lessons-from-building-trustworthy-ai-socs-for-regulated-environments" target="_blank" rel="noopener noreferrer nofollow">AI Security Podcast</a></p></div><div class='beehiiv__footer'><br class='beehiiv__footer__break'><hr class='beehiiv__footer__line'><a target="_blank" class="beehiiv__footer_link" style="text-align: center;" href="https://www.beehiiv.com/?utm_campaign=90bf241f-6244-4a61-b505-04c04ae4ca60&utm_medium=post_rss&utm_source=cloud_security_newsletter">Powered by beehiiv</a></div></div>
  ]]></content:encoded>
</item>

      <item>
  <title>🚨 Cloudflare Outage &amp; $3.35B Palo Alto Deal: Lessons from Swiss Insurance’s Multi-Cloud Migration</title>
  <description>This week’s Cloud Security Newsletter covers the $3.35B Palo Alto–Chronosphere acquisition, Cloudflare’s global outage, record-breaking Azure DDoS attacks, UK’s new cyber bill, and rising AI prompt injection threats. Insights from Swiss Insurance’s cloud architect Matthias Mertens reveal enterprise-tested strategies for multi-cloud migration, Terraform automation at scale, and serverless modernization</description>
      <enclosure url="https://media.beehiiv.com/cdn-cgi/image/fit=scale-down,format=auto,onerror=redirect,quality=80/uploads/asset/file/a066fa8a-dc63-49e6-96d3-cbcc021d61a3/Screenshot_2025-11-20_at_1.37.10_AM.png" length="1413063" type="image/png"/>
  <link>https://www.cloudsecuritynewsletter.com/p/cloudflare-outage-3billion-palo-alto-deal-lessons-from-swiss-insurance-s-multi-cloud-migration</link>
  <guid isPermaLink="true">https://www.cloudsecuritynewsletter.com/p/cloudflare-outage-3billion-palo-alto-deal-lessons-from-swiss-insurance-s-multi-cloud-migration</guid>
  <pubDate>Thu, 20 Nov 2025 01:41:13 +0000</pubDate>
  <atom:published>2025-11-20T01:41:13Z</atom:published>
    <dc:creator>Ashish Rajan</dc:creator>
  <content:encoded><![CDATA[
    <div class='beehiiv'><style>
  .bh__table, .bh__table_header, .bh__table_cell { border: 1px solid #C0C0C0; }
  .bh__table_cell { padding: 5px; background-color: #FFFFFF; }
  .bh__table_cell p { color: #2D2D2D; font-family: 'Helvetica',Arial,sans-serif !important; overflow-wrap: break-word; }
  .bh__table_header { padding: 5px; background-color:#F1F1F1; }
  .bh__table_header p { color: #2A2A2A; font-family:'Trebuchet MS','Lucida Grande',Tahoma,sans-serif !important; overflow-wrap: break-word; }
</style><div class='beehiiv__body'><h2 class="heading" style="text-align:left;" id="hello-from-the-cloudverse"><span style="background-color:#28EEDA;"><b>Hello from the Cloud-verse!</b></span></h2><p class="paragraph" style="text-align:left;">This week’s Cloud Security Newsletter Topic we cover - <b>The IaC &quot;Lift & Shift&quot; Playbook: Migrating 200 Apps to Multi-Cloud </b><a class="link" href="#cloud-security-topic-of-the-week" rel="noopener noreferrer nofollow">(continue reading)</a> </p><div class="button" style="text-align:center;"><a target="_blank" rel="noopener nofollow noreferrer" class="button__link" style="" href="https://links.cloudsecuritypodcast.tv/csa-ai-soc-report-dropzone-oct2025?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=cloudflare-outage-3-35b-palo-alto-deal-lessons-from-swiss-insurance-s-multi-cloud-migration"><span class="button__text" style=""> Check out this week’s Sponsor: Dropzone </span></a></div><hr class="content_break"><div class="image"><a class="image__link" href="https://www.cloudsecuritypodcast.tv/videos/the-terraform-lift-shift-playbook-migrating-200-apps-to-multi-cloud-with-terraform?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=cloudflare-outage-3-35b-palo-alto-deal-lessons-from-swiss-insurance-s-multi-cloud-migration" rel="noopener" target="_blank"><img alt="" class="image__image" style="" src="https://media.beehiiv.com/cdn-cgi/image/fit=scale-down,format=auto,onerror=redirect,quality=80/uploads/asset/file/a066fa8a-dc63-49e6-96d3-cbcc021d61a3/Screenshot_2025-11-20_at_1.37.10_AM.png?t=1763602799"/></a><div class="image__source"><span class="image__source_text"><p>This image was generated by AI. It&#39;s still experimental, so it might not be a perfect match!</p></span></div></div><p class="paragraph" style="text-align:left;"><b>Incase, this is your 1st Cloud Security Newsletter! You are in good company! </b><br>You are reading this issue along with your friends and colleagues from companies like <i>Netflix</i>, Citi, <i>JP Morgan, Linkedin, Reddit, Github, Gitlab, CapitalOne, Robinhood, HSBC, British Airways, Airbnb, Block, Booking Inc & more</i> who subscribe to this newsletter, who like you want to learn what’s new with Cloud Security each week from their industry peers like many others who listen to <a class="link" href="https://open.spotify.com/show/6LZgeh4GecRYPc0WrwMB4I?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=cloudflare-outage-3-35b-palo-alto-deal-lessons-from-swiss-insurance-s-multi-cloud-migration" target="_blank" rel="noopener noreferrer nofollow">Cloud Security Podcast</a> & <a class="link" href="https://open.spotify.com/show/3nV4eijfzdHKIvDOaycVII?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=cloudflare-outage-3-35b-palo-alto-deal-lessons-from-swiss-insurance-s-multi-cloud-migration" target="_blank" rel="noopener noreferrer nofollow">AI Security Podcast</a> every week.</p><hr class="content_break"><p class="paragraph" style="text-align:left;">Welcome to this week’s Cloud Security Newsletter</p><p class="paragraph" style="text-align:left;"><span style="color:rgb(27, 28, 29);">This week brings one of the biggest strategic moves in cloud security Palo Alto Networks’ $3.35B acquisition of Chronosphere alongside a global Cloudflare outage that disrupted major platforms like X, ChatGPT, and SaaS applications worldwide.</span></p><p class="paragraph" style="text-align:left;"><span style="color:rgb(27, 28, 29);">To help you navigate this rapidly shifting landscape, we bring insights from </span><span style="color:rgb(27, 28, 29);"><b>Matthias Mertens</b></span><span style="color:rgb(27, 28, 29);">, Cloud Solution Architect at </span><span style="color:rgb(27, 28, 29);"><b>Swiss Insurance</b></span><span style="color:rgb(27, 28, 29);">, supported by commentary from </span><span style="color:rgb(27, 28, 29);"><b>Ashish Rajan</b></span><span style="color:rgb(27, 28, 29);">. Matthias led a high-speed migration of 200 applications from data centers into AWS and Azure, providing one of the most grounded, enterprise-proven blueprints for modernizing cloud architectures while maintaining regulatory compliance and operational resilience.</span></p><hr class="content_break"><h2 class="heading" style="text-align:left;" id="tldr-for-busy-readers">📰 TL;DR for Busy Readers</h2><ul><li><p class="paragraph" style="text-align:left;"><b>Cloudflare Outage:</b> Multi-CDN architectures are no longer optional.</p></li><li><p class="paragraph" style="text-align:left;"><b>Palo Alto x Chronosphere:</b> Observability + security are converging into unified data platforms.</p></li><li><p class="paragraph" style="text-align:left;"><b>Zero-Day:</b> Patch the actively exploited Windows Kernel EoP zero-day (CVE-2025-62215).</p></li><li><p class="paragraph" style="text-align:left;"><b>Azure DDoS:</b> 15.7 Tbps attack proves endpoint-level resilience matters.</p></li><li><p class="paragraph" style="text-align:left;"><b>UK Cyber Bill:</b> NIS2-level penalties + 24-hour reporting for critical incidents.</p></li><li><p class="paragraph" style="text-align:left;"><b>Swiss Insurance Case Study (with </b>Matthias Mertens<b>):</b> Terraform at scale, serverless modernization, and multi-cloud resilience.</p></li></ul><hr class="content_break"><h2 class="heading" style="text-align:left;" id="this-weeks-security-news">📰 <b>THIS WEEK&#39;S SECURITY HEADLINES</b></h2><h3 class="heading" style="text-align:left;" id="1-palo-alto-networks-to-acquire-chr"><span style="color:rgb(27, 28, 29);"><b>1. 💰 Palo Alto Networks to Acquire Chronosphere for $3.35B, Driving Observability-Security Convergence</b></span></h3><p class="paragraph" style="text-align:left;"><span style="color:rgb(27, 28, 29);">Palo Alto Networks announced an agreement to acquire Chronosphere, a cloud-native observability platform, for </span><span style="color:rgb(27, 28, 29);"><b>$3.35 billion</b></span><span style="color:rgb(27, 28, 29);">. The deal aims to combine Chronosphere&#39;s massive telemetry pipeline (metrics, traces, logs) with Palo Alto Networks&#39; AI-powered security automation platform, Cortex AgentiX, to move towards real-time, autonomous security and performance remediation.</span></p><p class="paragraph" style="text-align:left;"><span style="color:rgb(27, 28, 29);"><b>Why It Matters:</b></span><span style="color:rgb(27, 28, 29);"> </span></p><ul><li><p class="paragraph" style="text-align:left;"><span style="color:rgb(27, 28, 29);">CISOs must prepare for a world where security platforms require observability-scale data to effectively protect AI-driven, distributed applications. This acquisition forces a critical re-evaluation of siloed security data lakes and legacy tooling that cannot handle the scale or context required for modern cloud operations.</span></p></li><li><p class="paragraph" style="text-align:left;"><span style="color:rgb(27, 28, 29);">This deal signals the formal convergence of security + observability + AI into unified platforms.</span></p></li><li><p class="paragraph" style="text-align:left;"><span style="color:rgb(27, 28, 29);">Security data lakes built only for logs or SIEM-tier data will struggle to keep up with observability-scale telemetry (traces, metrics, distributed spans).</span></p></li><li><p class="paragraph" style="text-align:left;"><span style="color:rgb(27, 28, 29);">CISOs must also re-evaluate:</span></p><ul><li><p class="paragraph" style="text-align:left;"><span style="color:rgb(27, 28, 29);">future licensing lock-in</span></p></li><li><p class="paragraph" style="text-align:left;"><span style="color:rgb(27, 28, 29);">data gravity and ingestion patterns</span></p></li><li><p class="paragraph" style="text-align:left;"><span style="color:rgb(27, 28, 29);">where AI-driven detection should live</span></p></li></ul></li><li><p class="paragraph" style="text-align:left;"><span style="color:rgb(27, 28, 29);">This convergence directly relates to the multi-cloud complexity discussed by </span><span style="color:rgb(27, 28, 29);"><i>Matthias Mertens(in our expert insights in the following section)</i></span><span style="color:rgb(27, 28, 29);">. If Insurances had to manage two separate data pipelines (one for AWS, one for Azure, one for security, one for observability), the integration challenge would be geometrically harder. The push toward consolidated data and automation (like Terraform for IaC) is essential for solving complexity at scale.</span></p></li></ul><p class="paragraph" style="text-align:left;"><span style="color:rgb(27, 28, 29);"><b> Read More:</b></span><span style="color:rgb(27, 28, 29);"> </span><a class="link" href="https://www.paloaltonetworks.com/company/press/2025/palo-alto-networks-to-acquire-chronosphere--next-gen-observability-leader--for-the-ai-era?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=cloudflare-outage-3-35b-palo-alto-deal-lessons-from-swiss-insurance-s-multi-cloud-migration" target="_blank" rel="noopener noreferrer nofollow">Palo Alto Networks Press Release</a>,<a class="link" href="https://www.google.com/search?q=https%3A%2F%2Fwww.wsj.com%2Farticles%2Fpalo-alto-networks-to-buy-chronosphere-for-3-35-billion-posts-higher-revenue&utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=cloudflare-outage-3-35b-palo-alto-deal-lessons-from-swiss-insurance-s-multi-cloud-migration" target="_blank" rel="noopener noreferrer nofollow"> The Wall Street Journal</a></p><hr class="content_break"><h3 class="heading" style="text-align:left;" id="2-microsoft-azure-mitigates-record-"><span style="color:rgb(27, 28, 29);"><b>2. ☁️ Microsoft Azure Mitigates Record-Breaking 15.7 Tbps DDoS Attack</b></span></h3><p class="paragraph" style="text-align:left;"><span style="color:rgb(27, 28, 29);">Microsoft reported successfully neutralizing a record-breaking, multi-vector Distributed Denial of Service (DDoS) attack against a single Azure endpoint in late October. The attack measured </span><span style="color:rgb(27, 28, 29);"><b>15.72 Tbps and 3.64 billion packets per second (pps)</b></span><span style="color:rgb(27, 28, 29);">, believed to be the largest single attack ever recorded in the cloud, originating from the Aisuru botnet.</span></p><p class="paragraph" style="text-align:left;"><span style="color:rgb(27, 28, 29);"><b>Why It Matters:</b></span><span style="color:rgb(27, 28, 29);"> </span></p><ul><li><p class="paragraph" style="text-align:left;"><span style="color:rgb(27, 28, 29);">The event confirms the unprecedented scale of volumetric attacks. Cloud architects must validate their cloud provider&#39;s DDoS Protection tiers (Standard/Advanced) and assume that application-specific endpoints will be targeted. This underscores the need for robust, layered network controls (WAF, rate limiting) and stress testing to ensure service resilience against campaigns far exceeding historical benchmarks.</span></p></li><li><p class="paragraph" style="text-align:left;"><span style="color:rgb(27, 28, 29);">DDoS scale has reached a point where per-endpoint resiliency matters as much as region-level protections.</span></p></li><li><p class="paragraph" style="text-align:left;"><span style="color:rgb(27, 28, 29);">Azure’s mitigation worked, but your DDoS protection tier may not match the level required for modern botnets.</span></p></li><li><p class="paragraph" style="text-align:left;"><span style="color:rgb(27, 28, 29);">Cloud architects should stress-test based on 10+ Tbps scenarios, not outdated benchmarks.</span></p></li><li><p class="paragraph" style="text-align:left;"><span style="color:rgb(27, 28, 29);">Expert insight for distributed workloads across AWS and Azure help reduce single-cloud blast radius critical in a world where per-endpoint attacks can reach multi-terabit scale.</span></p></li></ul><p class="paragraph" style="text-align:left;"><span style="color:rgb(27, 28, 29);"><b> Read More:</b></span><span style="color:rgb(27, 28, 29);"> </span><a class="link" href="https://www.cybersecuritydive.com/news/record-ddos-attack-microsoft-azure/805886/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=cloudflare-outage-3-35b-palo-alto-deal-lessons-from-swiss-insurance-s-multi-cloud-migration" target="_blank" rel="noopener noreferrer nofollow">Cybersecurity Dive</a></p><hr class="content_break"><h3 class="heading" style="text-align:left;" id="3-proofpoint-warns-of-indirect-prom"><span style="color:rgb(27, 28, 29);"><b>3. </b></span>🤖<span style="color:rgb(27, 28, 29);"><b> Proofpoint Warns of Indirect Prompt Injection Hijacking Autonomous AI Agents</b></span></h3><p class="paragraph" style="text-align:left;"><span style="color:rgb(27, 28, 29);">Researchers highlighted the growing threat of </span><span style="color:rgb(27, 28, 29);"><b>Indirect Prompt Injection</b></span><span style="color:rgb(27, 28, 29);">, where malicious instructions are secretly embedded in external, untrusted data (like hidden text in a document). When an LLM-powered assistant or autonomous AI agent scans this data for context, it executes the hidden instruction, potentially leading to unauthorized actions like data exfiltration or internal system manipulation.</span></p><p class="paragraph" style="text-align:left;"><span style="color:rgb(27, 28, 29);"><b>Why It Matters:</b></span><span style="color:rgb(27, 28, 29);"> </span></p><ul><li><p class="paragraph" style="text-align:left;"><span style="color:rgb(27, 28, 29);">This is a foundational new risk for enterprises adopting AI-powered tools. Cloud security teams must recognize that the trust boundary is now between the model and the data it consumes. CISOs need to implement strict sandboxing and Least-Privilege principles for AI agents and enforce a human-in-the-loop requirement for any high-risk actions (e.g., modifying configuration or sending communications).The threat boundary now includes every dataset consumed by an LLM, not just “prompts.”</span></p></li><li><p class="paragraph" style="text-align:left;"><span style="color:rgb(27, 28, 29);">Enterprises must implement:</span></p><ul><li><p class="paragraph" style="text-align:left;"><span style="color:rgb(27, 28, 29);">sandboxed scanning pipelines</span></p></li><li><p class="paragraph" style="text-align:left;"><span style="color:rgb(27, 28, 29);">least-privileged agent roles</span></p></li><li><p class="paragraph" style="text-align:left;"><span style="color:rgb(27, 28, 29);">human approval for high-risk AI actions</span></p></li></ul></li><li><p class="paragraph" style="text-align:left;"><span style="color:rgb(27, 28, 29);"> Expert emphasis on separating regulated vs. non-regulated workloads mirrors the need to classify AI systems by privilege tiers to avoid accidental overexposure through agent automation.</span></p></li></ul><p class="paragraph" style="text-align:left;"><span style="color:rgb(27, 28, 29);"><b> Read More</b></span><span style="color:rgb(27, 28, 29);">: </span><a class="link" href="https://www.proofpoint.com/us/blog/email-and-cloud-threats/stop-month-how-threat-actors-weaponize-ai-assistants-indirect-prompt?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=cloudflare-outage-3-35b-palo-alto-deal-lessons-from-swiss-insurance-s-multi-cloud-migration" target="_blank" rel="noopener noreferrer nofollow">Proofpoint Blog</a>,<a class="link" href="https://genai.owasp.org/llmrisk/llm01-prompt-injection/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=cloudflare-outage-3-35b-palo-alto-deal-lessons-from-swiss-insurance-s-multi-cloud-migration" target="_blank" rel="noopener noreferrer nofollow"> OWASP Gen AI Security Project</a><span style="color:rgb(27, 28, 29);"> </span></p><hr class="content_break"><h3 class="heading" style="text-align:left;" id="4-u-ks-new-cyber-security-and-resil"><span style="color:rgb(27, 28, 29);"><b>4. </b></span>🇬🇧<span style="color:rgb(27, 28, 29);"><b> UK&#39;s New Cyber Security and Resilience Bill Introduces NIS2-Level Penalties</b></span></h3><p class="paragraph" style="text-align:left;"><span style="color:rgb(27, 28, 29);">The UK Government introduced the Cyber Security and Resilience Bill, proposing an overhaul of the existing NIS Regulations. The new legislation introduces new obligations for third-party tech suppliers and data centers, and grants regulators power to impose tougher, turnover-based financial penalties for serious breaches targeting critical infrastructure sectors. Harmful incidents must now be reported within </span><span style="color:rgb(27, 28, 29);"><b>24 hours</b></span><span style="color:rgb(27, 28, 29);">.</span></p><p class="paragraph" style="text-align:left;"><span style="color:rgb(27, 28, 29);"><b>Why It Matters:</b></span><span style="color:rgb(27, 28, 29);"> </span></p><ul><li><p class="paragraph" style="text-align:left;"><span style="color:rgb(27, 28, 29);">This marks a significant elevation of regulatory risk, mirroring the compliance pressure seen with NIS2 and GDPR. Accountability is shifting down the supply chain to MSPs, cloud platform teams, and data centers. CISOs must urgently review MSSP/MSP contracts, implement continuous compliance monitoring (e.g., using a CSPM), and validate that incident response plans can meet the new, aggressive 24-hour reporting deadline for harmful incidents.</span></p></li><li><p class="paragraph" style="text-align:left;"><span style="color:rgb(27, 28, 29);">Enterprises must validate that suppliers and cloud partners meet new compliance minimums.</span></p></li><li><p class="paragraph" style="text-align:left;"><span style="color:rgb(27, 28, 29);">Third-party risk management becomes a regulator-enforced requirement.</span></p></li><li><p class="paragraph" style="text-align:left;"><span style="color:rgb(27, 28, 29);">Incident response plans should include:</span></p><ul><li><p class="paragraph" style="text-align:left;"><span style="color:rgb(27, 28, 29);">accelerated reporting windows</span></p></li><li><p class="paragraph" style="text-align:left;"><span style="color:rgb(27, 28, 29);">automated evidence capture</span></p></li><li><p class="paragraph" style="text-align:left;"><span style="color:rgb(27, 28, 29);">vendor coordination mechanisms</span></p></li></ul></li><li><p class="paragraph" style="text-align:left;"><span style="color:rgb(27, 28, 29);">Experts shared that regulated workloads show why multi-cloud isolation and cloud-native services matter; different clouds may offer better compliance alignment for specific workloads.</span></p></li></ul><p class="paragraph" style="text-align:left;"><span style="color:rgb(27, 28, 29);"><b> Read More:</b></span><span style="color:rgb(27, 28, 29);"> </span><a class="link" href="https://GOV.UK?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=cloudflare-outage-3-35b-palo-alto-deal-lessons-from-swiss-insurance-s-multi-cloud-migration" target="_blank" rel="noopener noreferrer nofollow">GOV.UK</a>,<a class="link" href="https://www.pinsentmasons.com/out-law/news/cyber-tech-suppliers-data-centres-uk-cyber-security-scrutiny?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=cloudflare-outage-3-35b-palo-alto-deal-lessons-from-swiss-insurance-s-multi-cloud-migration" target="_blank" rel="noopener noreferrer nofollow"> Pinsent Masons</a></p><hr class="content_break"><h3 class="heading" style="text-align:left;" id="5-cloudflare-outage-disrupts-global"><span style="color:rgb(27, 28, 29);"><b>5. </b></span>🌐<span style="color:rgb(27, 28, 29);"><b> Cloudflare Outage Disrupts Global Services, Highlights Configuration Risk</b></span></h3><p class="paragraph" style="text-align:left;"><span style="color:rgb(27, 28, 29);">A major global outage at Cloudflare caused widespread service disruptions for platforms like X, ChatGPT, and many other applications. Cloudflare confirmed the cause was a </span><span style="color:rgb(27, 28, 29);"><b>latent bug</b></span><span style="color:rgb(27, 28, 29);"> in a bot mitigation service that triggered a crash after a routine configuration change caused a massive configuration file to propagate across their network. It was </span><span style="color:rgb(27, 28, 29);"><b>not a cyberattack</b></span><span style="color:rgb(27, 28, 29);">.</span></p><p class="paragraph" style="text-align:left;"><span style="color:rgb(27, 28, 29);"><b>Why It Matters:</b></span><span style="color:rgb(27, 28, 29);"> </span></p><ul><li><p class="paragraph" style="text-align:left;"><span style="color:rgb(27, 28, 29);">This incident is a powerful reminder of third-party risk and resilience planning. Enterprise cloud architects must design for multi-CDN or multi-Cloud distribution for mission-critical applications to avoid single points of failure. The root cause of a configuration bug underscores the need for strict, automated change management and pre-deployment verification for core infrastructure as code (IaC) and network configuration files.</span></p></li><li><p class="paragraph" style="text-align:left;"><span style="color:rgb(27, 28, 29);">Even trusted providers can become single points of failure.</span></p></li><li><p class="paragraph" style="text-align:left;"><span style="color:rgb(27, 28, 29);">Multi-CDN strategies aren’t “nice-to-have” ; they&#39;re an operational necessity.</span></p></li><li><p class="paragraph" style="text-align:left;"><span style="color:rgb(27, 28, 29);">IaC-based network configuration must include pre-deployment validation pipelines to prevent cascading outages</span></p></li></ul><p class="paragraph" style="text-align:left;"><span style="color:rgb(27, 28, 29);"><b> Read More: </b></span><a class="link" href="https://blog.cloudflare.com/18-november-2025-outage/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=cloudflare-outage-3-35b-palo-alto-deal-lessons-from-swiss-insurance-s-multi-cloud-migration" target="_blank" rel="noopener noreferrer nofollow">Cloudflare Blog (Post-Mortem)</a>,<a class="link" href="https://www.theguardian.com/technology/2025/nov/18/cloudflare-outage-causes-error-messages-across-the-internet?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=cloudflare-outage-3-35b-palo-alto-deal-lessons-from-swiss-insurance-s-multi-cloud-migration" target="_blank" rel="noopener noreferrer nofollow"> The Guardian</a></p><hr class="content_break"><h2 class="heading" style="text-align:left;" id="cloud-security-topic-of-the-week">🎯 Cloud Security Topic of the Week: </h2><h3 class="heading" style="text-align:left;" id="how-swiss-insurance-migrated-200-ap"><span style="color:rgb(27, 28, 29);"><b>How Swiss Insurance Migrated 200 Apps in One Year Without Breaking the Business</b></span></h3><p class="paragraph" style="text-align:left;">This week’s featured topic distills Matthias Mertens’ experience executing a fast, regulated, multi-cloud migration for an enterprise with legacy workloads, tight deadlines, and compliance constraints.</p><p class="paragraph" style="text-align:left;">What makes this discussion uniquely valuable is that Matthias had to migrate 200+ applications under real-world pressure, coordinating AWS and Azure adoption simultaneously while retiring costly data center leases.</p><p class="paragraph" style="text-align:left;"><span style="color:rgb(27, 28, 29);">The speed and scope of cloud migrations often compromise security architecture, leading to technical debt. The experience of Insurances in migrating 200 applications from legacy data centers to a multi-cloud AWS/Azure environment within one year provides a clear blueprint for prioritizing speed through automation while ensuring a secure foundation for future modernization.</span>.</p><h2 class="heading" style="text-align:left;" id="featured-experts-this-week"><b>Featured Experts This Week </b>🎤</h2><ul><li><p class="paragraph" style="text-align:left;"><span style="color:rgb(27, 28, 29);"><a class="link" href="https://www.linkedin.com/in/matthias-mertens-15662650/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=cloudflare-outage-3-35b-palo-alto-deal-lessons-from-swiss-insurance-s-multi-cloud-migration" target="_blank" rel="noopener noreferrer nofollow"><b>Matthias Mertens</b></a></span><span style="color:rgb(27, 28, 29);"><b> - </b></span><span style="color:rgb(27, 28, 29);">Cloud Solution Architect, </span>Helvetia Assurances Suisse</p></li><li><p class="paragraph" style="text-align:left;"><b><a class="link" href="https://www.linkedin.com/in/ashishrajan/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=cloudflare-outage-3-35b-palo-alto-deal-lessons-from-swiss-insurance-s-multi-cloud-migration" target="_blank" rel="noopener noreferrer nofollow">Ashish Rajan</a></b> - CISO | Co-Host <a class="link" href="https://open.spotify.com/show/3nV4eijfzdHKIvDOaycVII?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=cloudflare-outage-3-35b-palo-alto-deal-lessons-from-swiss-insurance-s-multi-cloud-migration" target="_blank" rel="noopener noreferrer nofollow">AI Security Podcast</a> , Host of <a class="link" href="https://www.cloudsecuritypodcast.tv/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=cloudflare-outage-3-35b-palo-alto-deal-lessons-from-swiss-insurance-s-multi-cloud-migration" target="_blank" rel="noopener noreferrer nofollow">Cloud Security Podcast</a></p></li></ul><h2 class="heading" style="text-align:left;" id="definitions-and-core-concepts"><b>Definitions and Core Concepts 📚</b></h2><p class="paragraph" style="text-align:left;">Before diving into our insights, let&#39;s clarify some key terms:</p><ul><li><p class="paragraph" style="text-align:left;"><span style="color:rgb(27, 28, 29);"><b>Lift-and-Shift:</b></span><span style="color:rgb(27, 28, 29);"> The strategy of moving an application and its associated virtual machines or workloads from an on-premises environment (data center) to a cloud provider with minimal changes. This is often done to meet tight deadlines for data center exit.</span></p></li><li><p class="paragraph" style="text-align:left;"><span style="color:rgb(27, 28, 29);"><b>ECS Fargate:</b></span><span style="color:rgb(27, 28, 29);"> An AWS compute engine for Amazon Elastic Container Service (ECS) that allows you to run containers without having to provision, configure, or scale clusters of virtual machines. It is a </span><span style="color:rgb(27, 28, 29);"><b>serverless</b></span><span style="color:rgb(27, 28, 29);"> container service.</span></p></li><li><p class="paragraph" style="text-align:left;"><span style="color:rgb(27, 28, 29);"><b>Terraform Module: </b></span><span style="color:rgb(27, 28, 29);">A reusable IaC package that ensures consistent networking, security, and configuration across hundreds of cloud accounts.</span></p></li><li><p class="paragraph" style="text-align:left;"><span style="color:rgb(27, 28, 29);"><b>CSPM (Cloud Security Posture Management):</b></span><span style="color:rgb(27, 28, 29);"> Automated tools that continuously monitor cloud environments (AWS, Azure, GCP) for configuration drift, misconfigurations, and compliance violations, directly supporting the regulatory needs.</span></p></li><li><p class="paragraph" style="text-align:left;"><span style="color:rgb(27, 28, 29);"><b>Multi-CDN Strategy: </b></span><span style="color:rgb(27, 28, 29);">Using two or more CDN providers so no single outage takes your global applications offline.</span></p></li></ul><hr class="content_break"><p class="paragraph" style="text-align:center;">This week&#39;s issue is sponsored by <a class="link" href="https://links.cloudsecuritypodcast.tv/csa-ai-soc-report-dropzone-oct2025?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=cloudflare-outage-3-35b-palo-alto-deal-lessons-from-swiss-insurance-s-multi-cloud-migration" target="_blank" rel="noopener noreferrer nofollow">Dropzone</a></p><p class="paragraph" style="text-align:left;">New independent research from Cloud Security Alliance proves AI SOC agents dramatically improve analyst performance. </p><p class="paragraph" style="text-align:left;">In controlled testing with 148 security professionals using Dropzone AI, analysts achieved 22-29% higher accuracy, completed investigations 45-61% faster, and maintained superior quality even under fatigue. </p><p class="paragraph" style="text-align:left;">The study reveals that 94% of participants viewed AI more positively after hands-on use. See the full benchmark results.</p><p class="paragraph" style="text-align:center;"><a class="link" href="https://links.cloudsecuritypodcast.tv/csa-ai-soc-report-dropzone-oct2025?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=cloudflare-outage-3-35b-palo-alto-deal-lessons-from-swiss-insurance-s-multi-cloud-migration" target="_blank" rel="noopener noreferrer nofollow">📤 Read the Full Study from CSA</a></p><hr class="content_break"><h2 class="heading" style="text-align:left;" id="our-insights-from-this-practitioner">💡<b>Our Insights from this Practitioner 🔍</b></h2><h3 class="heading" style="text-align:left;" id="the-terraform-lift-shift-playbook-m"><b>The Terraform &quot;Lift & Shift&quot; Playbook: Migrating 200 Apps to Multi-Cloud with Terraform </b><b>(</b><a class="link" href="https://www.cloudsecuritypodcast.tv/videos/the-terraform-lift-shift-playbook-migrating-200-apps-to-multi-cloud-with-terraform?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=cloudflare-outage-3-35b-palo-alto-deal-lessons-from-swiss-insurance-s-multi-cloud-migration" target="_blank" rel="noopener noreferrer nofollow"><b>Full Episode here</b></a><b>)</b></h3><p class="paragraph" style="text-align:left;"><span style="color:rgb(27, 28, 29);">The primary driver for their cloud migration was a rapid </span><span style="color:rgb(27, 28, 29);">data center exit</span><span style="color:rgb(27, 28, 29);"> due to license and lifecycle issues, necessitating a </span><span style="color:rgb(27, 28, 29);">lift-and-shift</span><span style="color:rgb(27, 28, 29);"> approach for 200 applications within a year. This strategy prioritized speed and cost savings (ending leases) over immediate modernization. The security lessons lie in the </span><span style="color:rgb(27, 28, 29);"><i>enablers</i></span><span style="color:rgb(27, 28, 29);"> that made this massive, rapid, multi-cloud move possible and sustainable.</span></p><h3 class="heading" style="text-align:left;" id="1-start-with-liftand-shift-moderniz"><span style="color:rgb(27, 28, 29);"><b>1. Start with Lift-and-Shift-Modernize Later</b></span></h3><p class="paragraph" style="text-align:left;"><span style="color:rgb(27, 28, 29);">Matthias’ team moved 200 applications in one year. The only feasible approach? “</span><span style="color:rgb(27, 28, 29);"><i>We decided for a lift-and-shift approach… because we needed to empty our data center as fast as possible.</i></span><span style="color:rgb(27, 28, 29);">”</span><span style="color:rgb(27, 28, 29);"><i> - </i></span><span style="color:rgb(27, 28, 29);">Matthias Mertens</span></p><p class="paragraph" style="text-align:left;"><span style="color:rgb(27, 28, 29);">This echoes a pattern seen at companies like </span><span style="color:rgb(27, 28, 29);"><b>Airbnb</b></span><span style="color:rgb(27, 28, 29);"> and </span><span style="color:rgb(27, 28, 29);"><b>Capital One</b></span><span style="color:rgb(27, 28, 29);"> modernization succeeds faster when decoupled from high-pressure migrations.</span></p><h3 class="heading" style="text-align:left;" id="2-multi-cloud-was-not-a-luxury-it-w"><span style="color:rgb(27, 28, 29);"><b>2. Multi-Cloud Was Not a Luxury - It Was a Regulatory Requirement</b></span></h3><p class="paragraph" style="text-align:left;"><span style="color:rgb(27, 28, 29);">The Company chose a </span><span style="color:rgb(27, 28, 29);"><b>multi-cloud architecture (AWS and Azure)</b></span><span style="color:rgb(27, 28, 29);"> for deliberate risk separation, both physical and legal. &quot;</span><span style="color:rgb(27, 28, 29);"><i>And having different, uh, cloud providers also helps this. . . For regulation issues. Yeah, because it&#39;s good to have workloads separated again, physically and also legally.</i></span><span style="color:rgb(27, 28, 29);">&quot; – </span><span style="color:rgb(27, 28, 29);"><b>Matthias Mertens</b></span></p><p class="paragraph" style="text-align:left;"><span style="color:rgb(27, 28, 29);">For senior leaders, this is a clear strategic decision to </span><span style="color:rgb(27, 28, 29);"><b>mitigate concentration risk, a lesson</b></span><span style="color:rgb(27, 28, 29);"> reinforced by the Cloudflare outage. When regulatory compliance is involved (as in the insurance sector), spreading workloads across providers helps satisfy requirements for resilience, sovereignty, and regional disaster recovery.</span></p><h3 class="heading" style="text-align:left;" id="3-ia-c-automation-is-the-only-way-t"><span style="color:rgb(27, 28, 29);"><b>3. IaC Automation is the Only Way to Secure at Scale</b></span></h3><p class="paragraph" style="text-align:left;"><span style="color:rgb(27, 28, 29);">The sheer scale of the 200-application migration meant that manual deployment was impossible. Insurances relied on the cloud-agnostic </span><span style="color:rgb(27, 28, 29);"><b>Terraform</b></span><span style="color:rgb(27, 28, 29);"> to manage infrastructure across both AWS and Azure.</span></p><ul><li><p class="paragraph" style="text-align:left;"><span style="color:rgb(27, 28, 29);"><b>Automation of the Foundation:</b></span><span style="color:rgb(27, 28, 29);"> Terraform was used to </span><span style="color:rgb(27, 28, 29);"><b>automate the deployment of new cloud accounts and subscriptions</b></span><span style="color:rgb(27, 28, 29);">. This is a critical security win, as it ensures all new cloud environments start with the necessary security controls: networking, IAM access management, and base policies. Matthias noted this allowed them to &quot;create an account within minutes&quot;. This capability is essential for fast, secure enablement, reflecting the purpose of their Cloud Enablement team.</span></p></li><li><p class="paragraph" style="text-align:left;"><span style="color:rgb(27, 28, 29);"><b>The Power of Modules:</b></span><span style="color:rgb(27, 28, 29);"> Their partner used </span><span style="color:rgb(27, 28, 29);"><b>Terraform modules</b></span><span style="color:rgb(27, 28, 29);"> to describe and deploy the virtual machines for the lift-and-shift, ensuring consistency and integrating surrounding services like monitoring. This modular, repeatable approach made the 200-app migration feasible within the tight one-year deadline.</span></p></li></ul><h3 class="heading" style="text-align:left;" id="4-serverless-containers-for-moderni"><span style="color:rgb(27, 28, 29);"><b>4. Serverless Containers for Modernizing Legacy Container Workloads</b></span></h3><p class="paragraph" style="text-align:left;"><span style="color:rgb(27, 28, 29);">When faced with deploying a vendor-provided Docker image, Matthias&#39;s team rejected running it on a traditional Virtual Machine, stating: &quot;</span><span style="color:rgb(27, 28, 29);"><i>it make no sense to run a Docker container on a virtual machine in production. Yeah, no way we would do that</i></span><span style="color:rgb(27, 28, 29);">&quot;. They also lacked the resources to manage a Kubernetes (K8s) cluster for a single application.</span></p><p class="paragraph" style="text-align:left;"><span style="color:rgb(27, 28, 29);">Their solution was </span><span style="color:rgb(27, 28, 29);"><b>AWS ECS Fargate</b></span><span style="color:rgb(27, 28, 29);">, a fully managed, serverless container service.</span></p><p class="paragraph" style="text-align:left;"><span style="color:rgb(27, 28, 29);">&quot;</span><span style="color:rgb(27, 28, 29);"><i>We use ECS Fargate... And with this one, we can run containers without having to manage any underlying infrastructure.</i></span><span style="color:rgb(27, 28, 29);">&quot; – </span><span style="color:rgb(27, 28, 29);"><b>Matthias Mertens</b></span></p><p class="paragraph" style="text-align:left;"><span style="color:rgb(27, 28, 29);">This decision provides key security takeaways:</span></p><ul><li><p class="paragraph" style="text-align:left;"><span style="color:rgb(27, 28, 29);"><b>Reduced Attack Surface:</b></span><span style="color:rgb(27, 28, 29);"> By leveraging Fargate, the security team offloads the responsibility of patching and managing the underlying host operating system from the Cloud Shared Responsibility Model to AWS.</span></p></li><li><p class="paragraph" style="text-align:left;"><span style="color:rgb(27, 28, 29);"><b>Focus on Application Security:</b></span><span style="color:rgb(27, 28, 29);"> The team could then focus on securing the surrounding services essential for production-grade deployment: load balancing, certificates, secret storage, and logging. For security leaders, this proves that </span><span style="color:rgb(27, 28, 29);"><b>serverless technologies enable SecOps to shift focus from infrastructure patching to the higher-value tasks of application governance and data protection.</b></span></p></li></ul><h3 class="heading" style="text-align:left;" id="5-choose-the-right-partner-dont-sel"><span style="color:rgb(27, 28, 29);"><b>5. Choose the Right Partner-Don’t Self-Inflict Risk</b></span></h3><p class="paragraph" style="text-align:left;">Matthias is straightforward about this: “<i>You do not do this kind of project just… ‘let’s try.’ Find a partner with experience</i>.”</p><p class="paragraph" style="text-align:left;">This is a hallmark of mature cloud programs at companies like <b>Block</b>, <b>Atlassian</b>, and <b>Shopify</b>, all of whom lean heavily on expert integrators during major transitions.</p><h3 class="heading" style="text-align:left;" id="actionable-takeaways-for-senior-clo"><span style="color:rgb(27, 28, 29);"><b>Actionable Takeaways for Senior Cloud Professionals:</b></span></h3><ol start="1"><li><p class="paragraph" style="text-align:left;"><span style="color:rgb(27, 28, 29);"><b>Mandate Cloud-Agnostic IaC:</b></span><span style="color:rgb(27, 28, 29);"> Ensure your Cloud Enablement team uses tools like </span><span style="color:rgb(27, 28, 29);"><b>Terraform</b></span><span style="color:rgb(27, 28, 29);"> for provisioning cloud accounts and subscriptions. This enforces a secure baseline across multi-cloud environments from day one, which is vital for compliance with new regulations like the UK Bill.</span></p></li><li><p class="paragraph" style="text-align:left;"><span style="color:rgb(27, 28, 29);"><b>Use Serverless Containers Strategically:</b></span><span style="color:rgb(27, 28, 29);"> Avoid placing containers on managed VMs to reduce operational overhead and attack surface. Leverage Fargate/Azure Container Apps for isolated workloads to dedicate security resources to the application layer (WAF, IAM, Secrets Manager).</span></p></li></ol><p class="paragraph" style="text-align:left;"><span style="color:rgb(27, 28, 29);"><b>Embed Security in Migration Planning:</b></span><span style="color:rgb(27, 28, 29);"> As Ashish Rajan summarized, the key is to prioritize and &quot;figure out what applications are suitable to be deployed in the first place&quot; and assess the risks, even in a block move scenario.</span></p><hr class="content_break"><h2 class="heading" style="text-align:left;" id="related-resources">RELATED RESOURCES</h2><ul><li><p class="paragraph" style="text-align:left;"><a class="link" href="https://owasp.org/www-project-top-10-for-large-language-model-applications/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=cloudflare-outage-3-35b-palo-alto-deal-lessons-from-swiss-insurance-s-multi-cloud-migration" target="_blank" rel="noopener noreferrer nofollow">OWASP Top 10 for LLMs</a></p></li></ul><hr class="content_break"><h2 class="heading" style="text-align:left;" id="related-podcast-episodes"><b>Related Podcast Episodes 🎧</b></h2><div class="image"><a class="image__link" href="https://www.cloudsecuritypodcast.tv/videos/the-terraform-lift-shift-playbook-migrating-200-apps-to-multi-cloud-with-terraform?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=cloudflare-outage-3-35b-palo-alto-deal-lessons-from-swiss-insurance-s-multi-cloud-migration" rel="noopener" target="_blank"><img alt="" class="image__image" style="" src="https://media.beehiiv.com/cdn-cgi/image/fit=scale-down,format=auto,onerror=redirect,quality=80/uploads/asset/file/3be49f2c-0737-40d8-b503-fbc3955be68c/Matthias_Mertens-YT-Thumbnail.jpg?t=1763601318"/></a><div class="image__source"><a class="image__source_link" href="https://www.cloudsecuritypodcast.tv/videos/the-terraform-lift-shift-playbook-migrating-200-apps-to-multi-cloud-with-terraform?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=cloudflare-outage-3-35b-palo-alto-deal-lessons-from-swiss-insurance-s-multi-cloud-migration" rel="noopener" target="_blank"><span class="image__source_text"><p>The Terraform &quot;Lift & Shift&quot; Playbook: Migrating 200 Apps to Multi-Cloud with Terraform</p></span></a></div></div><h3 class="heading" style="text-align:left;" id="question-for-you-reply-to-this-emai">Question for you? (Reply to this email)</h3><p class="paragraph" style="text-align:left;"> 🤖 <span style="color:rgb(27, 28, 29);">Is your cloud architecture resilient against a </span><span style="color:rgb(27, 28, 29);"><b>Cloudflare-scale outage</b></span><span style="color:rgb(27, 28, 29);">, or are you still relying on a single CDN?</span></p><p class="paragraph" style="text-align:left;">Next week, we&#39;ll explore another critical aspect of cloud security. Stay tuned!</p><hr class="content_break"><p class="paragraph" style="text-align:left;">📬 Want weekly expert takes on AI & Cloud Security? [<a class="link" href="https://www.cloudsecuritynewsletter.com/subscribe?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=cloudflare-outage-3-35b-palo-alto-deal-lessons-from-swiss-insurance-s-multi-cloud-migration" target="_blank" rel="noopener noreferrer nofollow">Subscribe here</a>]”</p><hr class="content_break"><p class="paragraph" style="text-align:left;"><span style="color:rgb(238, 40, 60);"><b><a class="link" href="mailto:info@cloudsecuritypodcast.tv" target="_blank" rel="noopener noreferrer nofollow">We would love to hear from you</a></b></span>📢 for a feature or topic request or if you would like to sponsor an edition of Cloud Security Newsletter. </p><p class="paragraph" style="text-align:left;">Thank you for continuing to subscribe and Welcome to the new members in tis newsletter community💙</p><p class="paragraph" style="text-align:start;">Peace!</p><p class="paragraph" style="text-align:start;"><a class="link" href="https://www.linkedin.com/in/shilpi-bhattacharjee/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=cloudflare-outage-3-35b-palo-alto-deal-lessons-from-swiss-insurance-s-multi-cloud-migration" target="_blank" rel="noopener noreferrer nofollow">Shilpi Bhattacharjee</a></p><div class="image"><a class="image__link" href="https://www.cloudsecuritypodcast.tv/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=cloudflare-outage-3-35b-palo-alto-deal-lessons-from-swiss-insurance-s-multi-cloud-migration" rel="noopener" target="_blank"><img alt="" class="image__image" style="" src="https://media.beehiiv.com/cdn-cgi/image/fit=scale-down,format=auto,onerror=redirect,quality=80/uploads/asset/file/dc094a1c-678a-43e0-adc9-1bee6c3499e2/CSP_Logo_Blue_ScreenRes_3000x3000_v2.jpg"/></a></div><div class="button" style="text-align:left;"><a target="_blank" rel="noopener nofollow noreferrer" class="button__link" style="" href="{{rp_referral_hub_url}}"><span class="button__text" style=""> Share the newsletter </span></a></div><p class="paragraph" style="text-align:left;">Was this forwarded to you? You can <a class="link" href="https://www.cloudsecuritynewsletter.com/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=cloudflare-outage-3-35b-palo-alto-deal-lessons-from-swiss-insurance-s-multi-cloud-migration" target="_blank" rel="noopener noreferrer nofollow">Sign up here</a>, to join our growing readership.</p><p class="paragraph" style="text-align:left;">Want to <b>sponsor</b> the next newsletter edition! <a class="link" href="mailto:info@cloudsecuritypodcast.tv" target="_blank" rel="noopener noreferrer nofollow">Lets make it happen </a></p><p class="paragraph" style="text-align:left;">Have you joined our FREE <b>Monthly</b> <a class="link" href="https://www.cloudsecuritybootcamp.com/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=cloudflare-outage-3-35b-palo-alto-deal-lessons-from-swiss-insurance-s-multi-cloud-migration" target="_blank" rel="noopener noreferrer nofollow">Cloud Security Bootcamp</a> yet?</p><p class="paragraph" style="text-align:left;">checkout our <b>sister podcast</b> <a class="link" href="https://www.youtube.com/@AISecurityPodcast?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=cloudflare-outage-3-35b-palo-alto-deal-lessons-from-swiss-insurance-s-multi-cloud-migration" target="_blank" rel="noopener noreferrer nofollow">AI Security Podcast</a></p></div><div class='beehiiv__footer'><br class='beehiiv__footer__break'><hr class='beehiiv__footer__line'><a target="_blank" class="beehiiv__footer_link" style="text-align: center;" href="https://www.beehiiv.com/?utm_campaign=00aff33e-cd32-40c6-8152-43faf40a6e6c&utm_medium=post_rss&utm_source=cloud_security_newsletter">Powered by beehiiv</a></div></div>
  ]]></content:encoded>
</item>

      <item>
  <title>🚨 Container Escape + AI Agent Risk: Lessons from Box’s Security Lead</title>
  <description>This week&#39;s newsletter examines critical runC container escape vulnerabilities affecting all major cloud providers, the evolving threat landscape of AI agent exploitation, and practical security controls for agentic AI systems. Learn from Box&#39;s Mohan Kumar, Production Security Lead with 14 years in cybersecurity about memory poisoning attacks, tool misuse patterns, and the three-layer security evolution needed for AI agent production deployments. Plus: Active Cisco firewall zero-day exploitation, China-linked Congressional Budget Office breach, and Google Cloud&#39;s 2026 forecast predicting surge in prompt injection attacks.</description>
      <enclosure url="https://media.beehiiv.com/cdn-cgi/image/fit=scale-down,format=auto,onerror=redirect,quality=80/uploads/asset/file/afee1f7b-0323-48f4-931a-8758e54ae3c2/Screenshot_2025-11-12_at_10.06.26_PM.png" length="1329917" type="image/png"/>
  <link>https://www.cloudsecuritynewsletter.com/p/container-escape-ai-agent-risk-lessons-from-box-s-security-lead</link>
  <guid isPermaLink="true">https://www.cloudsecuritynewsletter.com/p/container-escape-ai-agent-risk-lessons-from-box-s-security-lead</guid>
  <pubDate>Wed, 12 Nov 2025 22:57:15 +0000</pubDate>
  <atom:published>2025-11-12T22:57:15Z</atom:published>
    <dc:creator>Ashish Rajan</dc:creator>
  <content:encoded><![CDATA[
    <div class='beehiiv'><style>
  .bh__table, .bh__table_header, .bh__table_cell { border: 1px solid #C0C0C0; }
  .bh__table_cell { padding: 5px; background-color: #FFFFFF; }
  .bh__table_cell p { color: #2D2D2D; font-family: 'Helvetica',Arial,sans-serif !important; overflow-wrap: break-word; }
  .bh__table_header { padding: 5px; background-color:#F1F1F1; }
  .bh__table_header p { color: #2A2A2A; font-family:'Trebuchet MS','Lucida Grande',Tahoma,sans-serif !important; overflow-wrap: break-word; }
</style><div class='beehiiv__body'><h2 class="heading" style="text-align:left;" id="hello-from-the-cloudverse"><span style="background-color:#28EEDA;"><b>Hello from the Cloud-verse!</b></span></h2><p class="paragraph" style="text-align:left;">This week’s Cloud Security Newsletter Topic we cover - <b>Securing AI Agents in Production: From LLM Applications to Autonomous Systems</b><b> </b><a class="link" href="#cloud-security-topic-of-the-week" rel="noopener noreferrer nofollow">(continue reading)</a> </p><div class="button" style="text-align:center;"><a target="_blank" rel="noopener nofollow noreferrer" class="button__link" style="" href="https://links.cloudsecuritypodcast.tv/brinqa-nov25-see-risk-differently?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=container-escape-ai-agent-risk-lessons-from-box-s-security-lead"><span class="button__text" style=""> Check out this week’s Sponsor: Brinqa </span></a></div><hr class="content_break"><div class="image"><a class="image__link" href="https://www.cloudsecuritypodcast.tv/videos/ai-is-already-breaking-the-silos-between-appsec-cloudsec?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=container-escape-ai-agent-risk-lessons-from-box-s-security-lead" rel="noopener" target="_blank"><img alt="" class="image__image" style="" src="https://media.beehiiv.com/cdn-cgi/image/fit=scale-down,format=auto,onerror=redirect,quality=80/uploads/asset/file/afee1f7b-0323-48f4-931a-8758e54ae3c2/Screenshot_2025-11-12_at_10.06.26_PM.png?t=1762986151"/></a><div class="image__source"><span class="image__source_text"><p>This image was generated by AI. It&#39;s still experimental, so it might not be a perfect match!</p></span></div></div><p class="paragraph" style="text-align:left;"><b>Incase, this is your 1st Cloud Security Newsletter! You are in good company! </b><br>You are reading this issue along with your friends and colleagues from companies like <i>Netflix</i>, Citi, <i>JP Morgan, Linkedin, Reddit, Github, Gitlab, CapitalOne, Robinhood, HSBC, British Airways, Airbnb, Block, Booking Inc & more</i> who subscribe to this newsletter, who like you want to learn what’s new with Cloud Security each week from their industry peers like many others who listen to <a class="link" href="https://open.spotify.com/show/6LZgeh4GecRYPc0WrwMB4I?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=container-escape-ai-agent-risk-lessons-from-box-s-security-lead" target="_blank" rel="noopener noreferrer nofollow">Cloud Security Podcast</a> & <a class="link" href="https://open.spotify.com/show/3nV4eijfzdHKIvDOaycVII?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=container-escape-ai-agent-risk-lessons-from-box-s-security-lead" target="_blank" rel="noopener noreferrer nofollow">AI Security Podcast</a> every week.</p><hr class="content_break"><p class="paragraph" style="text-align:left;">Welcome to this week&#39;s Cloud Security Newsletter. </p><p class="paragraph" style="text-align:left;">As AI agents move from experimentation to production deployments, security teams face an entirely new threat landscape, one where traditional monitoring falls short and autonomous systems create &quot;covert channels&quot; we never anticipated. This week, we&#39;re joined by <b>Mohan Kumar</b>, Production Security Lead at Box with over 14 years in cybersecurity, who breaks down the critical security challenges of agentic AI and shares practical threat modeling approaches for organizations deploying autonomous agents.</p><p class="paragraph" style="text-align:left;">Meanwhile, the security news cycle has been dominated by critical infrastructure vulnerabilities: three high-severity runC container escape flaws affecting every major cloud provider, active exploitation of Cisco firewall zero-days, and a China-linked APT breach of the U.S. Congressional Budget Office.</p><hr class="content_break"><h2 class="heading" style="text-align:left;" id="tldr-for-busy-readers">📰 TL;DR for Busy Readers</h2><ul><li><p class="paragraph" style="text-align:left;"><b>Container breakout risk</b>: Patch runC CVEs immediately affects Docker, Kubernetes across AWS, Azure, GCP</p></li><li><p class="paragraph" style="text-align:left;">AI agents ≠ LLMs: Autonomous agents create new attack surfaces through dynamic tool use and memory poisoning</p></li><li><p class="paragraph" style="text-align:left;"><b>Active exploitation</b>: Cisco ASA/FTD zero-days now weaponized for both RCE and DoS attacks</p></li><li><p class="paragraph" style="text-align:left;"><b>Nation-state escalation</b>: Chinese APT Silk Typhoon breached Congressional Budget Office systems</p></li><li><p class="paragraph" style="text-align:left;"><b>Identity remains critical</b>: Granular access controls and session isolation are foundational for AI agent security</p></li></ul><hr class="content_break"><h2 class="heading" style="text-align:left;" id="this-weeks-security-news">📰 <b>THIS WEEK&#39;S SECURITY HEADLINES</b></h2><h3 class="heading" style="text-align:left;" id="1-bugcrowd-acquires-mayhem-security"><b>1. Bugcrowd Acquires Mayhem Security for AI-Driven Offensive Security Testing</b></h3><p class="paragraph" style="text-align:left;"><b>What Happened:</b> Bugcrowd announced the acquisition of Mayhem Security on November 4, 2025, adding AI-driven offensive security testing capabilities to its crowdsourced vulnerability testing platform. The acquisition aims to augment human security researchers with AI-powered automation for discovering and exploiting vulnerabilities at scale.</p><p class="paragraph" style="text-align:left;"><b>Why This Matters:</b> This M&A signals a significant shift in the offensive security market toward AI-augmented testing. As AI agents become more sophisticated, security testing must evolve beyond human-only approaches. The combination of crowdsourced human expertise with AI-driven automation could dramatically increase vulnerability discovery rates and testing coverage. Security teams should monitor how this affects the bug bounty landscape and consider whether their vulnerability disclosure programs are prepared for AI-augmented researcher capabilities.</p><ul><li><p class="paragraph" style="text-align:left;">Signals a shift toward <b>AI-augmented bug bounties</b> and offensive testing</p></li><li><p class="paragraph" style="text-align:left;">Increases the <b>volume and sophistication</b> of findings your teams will see</p></li><li><p class="paragraph" style="text-align:left;">Forces security orgs to ask: <i>“Are we ready for researchers who come with AI agents out of the box?”</i></p></li></ul><p class="paragraph" style="text-align:left;"><a class="link" href="https://www.bugcrowd.com/press-release/bugcrowd-acquires-mayhem-security-to-bring-human-augmented-ai-automation-to-security-testing/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=container-escape-ai-agent-risk-lessons-from-box-s-security-lead" target="_blank" rel="noopener noreferrer nofollow">Read More: BugCrowd Press Release</a></p><hr class="content_break"><h3 class="heading" style="text-align:left;" id="2-critical-run-c-container-vulnerab"><b>2. Critical runC Container Vulnerabilities Enable Escape Across All Major Cloud Providers</b></h3><p class="paragraph" style="text-align:left;"><b>What Happened:</b> Three high-severity vulnerabilities (CVE-2025-31133, CVE-2025-52565, CVE-2025-52881) were disclosed in runC, the underlying container runtime powering Docker, Kubernetes, and containerized workloads across AWS, Azure, and Google Cloud. These flaws enable container escape attacks, allowing attackers to break out of isolated containers and access the host system. AWS, Azure, and GCP all issued security bulletins with patches on November 5, 2025.</p><p class="paragraph" style="text-align:left;"><b>Why This Matters:</b> Container escape represents one of the most serious threats in cloud-native environments. These vulnerabilities affect the foundational layer of container orchestration, meaning any organization running containerized workloads is potentially at risk. The swift disclosure and patching demonstrates the maturity of the container security ecosystem, but organizations must prioritize immediate patching across their Kubernetes clusters and container hosts. This isn&#39;t just a Kubernetes problem any Docker-based deployment is vulnerable.</p><ul><li><p class="paragraph" style="text-align:left;">This is a <b>foundational runtime issue</b>, not “just” a Kubernetes misconfig</p></li><li><p class="paragraph" style="text-align:left;">Any Docker-based deployment (including CI runners and self-hosted services) is in scope</p></li><li><p class="paragraph" style="text-align:left;">The window between “patch available” and “weaponised PoC” is typically <b>measured in days</b></p></li></ul><p class="paragraph" style="text-align:left;"><a class="link" href="https://sysdig.com/blog/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=container-escape-ai-agent-risk-lessons-from-box-s-security-lead" target="_blank" rel="noopener noreferrer nofollow">Read More: Sysdig Threat Research</a><b>,</b> <a class="link" href="https://aws.amazon.com/security/security-bulletins/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=container-escape-ai-agent-risk-lessons-from-box-s-security-lead" target="_blank" rel="noopener noreferrer nofollow">AWS Security Bulletin</a></p><hr class="content_break"><h3 class="heading" style="text-align:left;" id="3-google-mandiant-exposes-ongoing-g"><b>3. Google Mandiant Exposes Ongoing Gladinet Triofox Zero-Day Exploitation</b></h3><p class="paragraph" style="text-align:left;"><b>What Happened:</b> Google Mandiant revealed that threat actor UNC6485 has been exploiting a critical authentication bypass vulnerability (CVE-2025-12480) in Gladinet Triofox file-sharing platform since August 2025. This marks the third Triofox vulnerability exploited in 2025, indicating sustained attacker interest in the platform. The flaw enables authenticated remote code execution, providing attackers with deep access to enterprise file-sharing infrastructure.</p><p class="paragraph" style="text-align:left;"><b>Why This Matters:</b> Enterprise file-sharing platforms are treasure troves of sensitive data and often have broad access to cloud storage systems. The sustained exploitation since August suggests this may be part of a broader campaign targeting file-sharing infrastructure. Organizations using Triofox or similar platforms should audit access logs, verify patch levels, and review authentication mechanisms. The pattern of multiple exploited vulnerabilities in one platform within a year raises questions about the vendor&#39;s security posture.</p><ul><li><p class="paragraph" style="text-align:left;">File-sharing platforms sit close to <b>crown-jewel data</b> and often bridge on-prem and cloud</p></li><li><p class="paragraph" style="text-align:left;">Three exploited vulns in a year suggest a <b>structural maturity problem</b> at the vendor</p></li><li><p class="paragraph" style="text-align:left;">Auth bypass + RCE = <b>full platform takeover</b> and stealthy data exfil</p></li></ul><p class="paragraph" style="text-align:left;"><a class="link" href="https://thehackernews.com/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=container-escape-ai-agent-risk-lessons-from-box-s-security-lead" target="_blank" rel="noopener noreferrer nofollow">Read More: </a><a class="link" href="https://thehackernews.com/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=container-escape-ai-agent-risk-lessons-from-box-s-security-lead" target="_blank" rel="noopener noreferrer nofollow">The Hacker News</a>, <a class="link" href="https://www.mandiant.com/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=container-escape-ai-agent-risk-lessons-from-box-s-security-lead" target="_blank" rel="noopener noreferrer nofollow">Google Mandiant Threat Intelligence Report</a></p><hr class="content_break"><h3 class="heading" style="text-align:left;" id="4-microsoft-publishes-azure-blob-st"><b>4. Microsoft Publishes Azure Blob Storage Attack Chain Analysis with AI-Powered Detection</b></h3><p class="paragraph" style="text-align:left;"><b>What Happened:</b> Microsoft released detailed analysis of attack chains targeting Azure Blob Storage, accompanied by new AI-powered detection capabilities in Defender XDR. The guidance covers cloud misconfiguration exploitation, unauthorized access patterns, and lateral movement techniques specific to Azure storage infrastructure. The updates enhance Microsoft&#39;s ability to detect and respond to cloud storage threats using machine learning models.</p><p class="paragraph" style="text-align:left;"><b>Why This Matters:</b> Cloud storage is often the final destination for data exfiltration campaigns and frequently suffers from misconfiguration issues. Microsoft&#39;s investment in AI-powered detection for storage attack chains reflects the evolving threat landscape where traditional signature-based detection falls short. Security teams should review their Azure Blob Storage configurations against Microsoft&#39;s attack chain analysis, enable Defender XDR enhancements, and ensure storage access logging feeds into their SIEM. This is particularly relevant for organizations storing AI/ML training data in cloud object storage.</p><ul><li><p class="paragraph" style="text-align:left;">Blob/S3/GCS are often the <b>final stop</b> for data exfil</p></li><li><p class="paragraph" style="text-align:left;">Storage buckets also underpin <b>AI/ML training data</b> – poisoning and theft risks are high</p></li><li><p class="paragraph" style="text-align:left;">AI-assisted detections can help close blind spots around <b>subtle access patterns</b></p></li></ul><p class="paragraph" style="text-align:left;"><a class="link" href="https://www.microsoft.com/security/blog/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=container-escape-ai-agent-risk-lessons-from-box-s-security-lead" target="_blank" rel="noopener noreferrer nofollow">Read More: </a><a class="link" href="https://www.microsoft.com/security/blog/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=container-escape-ai-agent-risk-lessons-from-box-s-security-lead" target="_blank" rel="noopener noreferrer nofollow">Microsoft Security Blog</a></p><hr class="content_break"><h3 class="heading" style="text-align:left;" id="5-google-cloud-forecasts-surge-in-a"><b>5. Google Cloud Forecasts Surge in AI Agent Exploitation and Prompt Injection for 2026</b></h3><p class="paragraph" style="text-align:left;"><b>What Happened:</b> Google Cloud released its Cybersecurity Forecast 2026, predicting a significant increase in prompt injection attacks, AI agent exploitation, and cyber-physical attacks targeting European infrastructure. The report emphasizes how adversaries are weaponizing AI capabilities and developing new attack vectors specific to autonomous agent systems. Google&#39;s threat intelligence team identifies AI agent security as a critical emerging risk for 2026.</p><p class="paragraph" style="text-align:left;"><b>Why This Matters:</b> This forward-looking intelligence directly connects to our main topic this week AI agent security. Google&#39;s prediction isn&#39;t speculative; it&#39;s based on observed threat actor behavior and the rapid adoption of AI agents in enterprise environments. As Mohan Kumar discusses in detail below, AI agents introduce fundamentally new attack surfaces including memory poisoning and tool misuse. Organizations deploying or planning to deploy AI agents should incorporate these threat predictions into their 2026 security roadmaps and investment decisions.</p><ul><li><p class="paragraph" style="text-align:left;">Attackers already experiment with <b>indirect prompt injection</b> into RAG pipelines</p></li><li><p class="paragraph" style="text-align:left;">Early <b>agent-to-agent abuse</b> patterns are emerging</p></li><li><p class="paragraph" style="text-align:left;">Enterprises are rolling out agents <b>faster than guardrails</b></p></li></ul><p class="paragraph" style="text-align:left;"><a class="link" href="https://cloud.google.com/blog/products/identity-security/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=container-escape-ai-agent-risk-lessons-from-box-s-security-lead" target="_blank" rel="noopener noreferrer nofollow">Read More: </a><a class="link" href="https://cloud.google.com/blog/products/identity-security/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=container-escape-ai-agent-risk-lessons-from-box-s-security-lead" target="_blank" rel="noopener noreferrer nofollow">Google Cloud Security Blog</a></p><hr class="content_break"><h2 class="heading" style="text-align:left;" id="cloud-security-topic-of-the-week">🎯 Cloud Security Topic of the Week: </h2><h3 class="heading" style="text-align:left;" id="securing-ai-agents-in-production-fr"><b>Securing AI Agents in Production: From LLM Applications to Autonomous Systems</b></h3><p class="paragraph" style="text-align:left;">The transition from single-shot LLM applications to goal-driven AI agents represents one of the most significant architectural shifts in cloud security since the move to containers. </p><p class="paragraph" style="text-align:left;">Unlike traditional LLM applications that process a prompt and return a response, AI agents operate autonomously thinking, acting, observing, and adapting their behavior in runtime. They connect to external tools, maintain memory across sessions, communicate with other agents, and make decisions without human intervention. This autonomy creates entirely new attack surfaces that traditional security controls weren&#39;t designed to address.</p><h2 class="heading" style="text-align:left;" id="featured-experts-this-week"><b>Featured Experts This Week </b>🎤</h2><ul><li><p class="paragraph" style="text-align:left;"><a class="link" href="https://www.linkedin.com/in/vimokumar/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=container-escape-ai-agent-risk-lessons-from-box-s-security-lead" target="_blank" rel="noopener noreferrer nofollow"><b>Mohan Kumar</b></a><b> - </b>Production Security Lead, Box</p></li><li><p class="paragraph" style="text-align:left;"><b><a class="link" href="https://www.linkedin.com/in/ashishrajan/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=container-escape-ai-agent-risk-lessons-from-box-s-security-lead" target="_blank" rel="noopener noreferrer nofollow">Ashish Rajan</a></b> - CISO | Co-Host <a class="link" href="https://open.spotify.com/show/3nV4eijfzdHKIvDOaycVII?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=container-escape-ai-agent-risk-lessons-from-box-s-security-lead" target="_blank" rel="noopener noreferrer nofollow">AI Security Podcast</a> , Host of <a class="link" href="https://www.cloudsecuritypodcast.tv/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=container-escape-ai-agent-risk-lessons-from-box-s-security-lead" target="_blank" rel="noopener noreferrer nofollow">Cloud Security Podcast</a></p></li></ul><h2 class="heading" style="text-align:left;" id="definitions-and-core-concepts"><b>Definitions and Core Concepts 📚</b></h2><p class="paragraph" style="text-align:left;">Before diving into our insights, let&#39;s clarify some key terms:</p><ul><li><p class="paragraph" style="text-align:left;"><b>AI Agent vs. LLM Application:</b> An LLM application provides one-shot responses to prompts (like ChatGPT&#39;s basic chat interface). An AI agent is dynamic and goal-driven, following a think-act-observe cycle, accessing external tools, maintaining memory, and making autonomous decisions to achieve objectives.</p></li><li><p class="paragraph" style="text-align:left;"><b>Memory Poisoning:</b> An attack technique where adversaries inject malicious information into an AI agent&#39;s long-term, short-term, or entity memory stores. Since agents trust their own memory when making decisions, poisoned memory can alter future behaviors and decisions without the agent&#39;s awareness.</p></li><li><p class="paragraph" style="text-align:left;"><b>Tool Misuse:</b> Exploitation of an AI agent&#39;s access to external tools and APIs. Even legitimate tools (like calendar APIs or file systems) can be abused if agents don&#39;t have proper permission scoping and validation.</p></li><li><p class="paragraph" style="text-align:left;"><b>MCP (Model Context Protocol):</b> An emerging standard for connecting AI agents to external tools and services. When configured incorrectly (particularly the &quot;sender identification&quot; flag), MCP can enable attackers to impersonate trusted users.</p></li><li><p class="paragraph" style="text-align:left;"><b>Agentic Orchestration:</b> The central coordination layer that manages task delegation between multiple specialized AI agents, similar to how Kubernetes orchestrates containers. This layer is becoming a critical security control point.</p></li><li><p class="paragraph" style="text-align:left;"><b>Maestro Framework:</b> Cloud Security Alliance&#39;s seven-layer threat modeling framework specifically designed for agentic AI systems, covering foundation models, data operations, agent frameworks, infrastructure, observability, security controls, and the agent ecosystem..</p></li></ul><hr class="content_break"><p class="paragraph" style="text-align:center;">This week&#39;s issue is sponsored by <a class="link" href="https://links.cloudsecuritypodcast.tv/brinqa-nov25-see-risk-differently?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=container-escape-ai-agent-risk-lessons-from-box-s-security-lead" target="_blank" rel="noopener noreferrer nofollow"><b>Brinqa</b></a></p><p class="paragraph" style="text-align:center;">What If You Could See Risk Differently? </p><p class="paragraph" style="text-align:left;">On Nov 19, Brinqa experts will show how a shift in perspective, adding context, can change everything about how you prioritize risk. Fast-paced, real, and surprisingly fun.</p><p class="paragraph" style="text-align:center;"><a class="link" href="https://links.cloudsecuritypodcast.tv/csa-ai-soc-report-dropzone-oct2025?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=container-escape-ai-agent-risk-lessons-from-box-s-security-lead" target="_blank" rel="noopener noreferrer nofollow">📤 </a><b><a class="link" href="https://links.cloudsecuritypodcast.tv/brinqa-nov25-see-risk-differently?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=container-escape-ai-agent-risk-lessons-from-box-s-security-lead" target="_blank" rel="noopener noreferrer nofollow">Register here</a></b></p><div class="image"><a class="image__link" href="https://links.cloudsecuritypodcast.tv/brinqa-nov25-see-risk-differently?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=container-escape-ai-agent-risk-lessons-from-box-s-security-lead" rel="noopener" target="_blank"><img alt="" class="image__image" style="" src="https://media.beehiiv.com/cdn-cgi/image/fit=scale-down,format=auto,onerror=redirect,quality=80/uploads/asset/file/8dfbc1e2-c852-4172-9486-17ae541146e0/Webinar_Email_CTA_600x250.png?t=1762987256"/></a></div><hr class="content_break"><h2 class="heading" style="text-align:left;" id="our-insights-from-this-practitioner">💡<b>Our Insights from this Practitioner 🔍</b></h2><h3 class="heading" style="text-align:left;" id="threat-modeling-the-ai-agent-archit"><b>Threat Modeling the AI Agent: Architecture, Threats & Monitoring </b><b>(</b><a class="link" href="https://www.cloudsecuritypodcast.tv/videos/threat-modeling-the-ai-agent-architecture-threats-monitoring?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=container-escape-ai-agent-risk-lessons-from-box-s-security-lead" target="_blank" rel="noopener noreferrer nofollow"><b>Full Episode here</b></a><b>)</b></h3><h3 class="heading" style="text-align:left;" id="the-fundamental-shift-why-ai-agents"><b>The Fundamental Shift: Why AI Agents Aren&#39;t Just Better LLMs</b></h3><p class="paragraph" style="text-align:left;">Mohan Kumar opens with a critical distinction that many security teams miss: <b>&quot;</b><i>A typical LLM application is different than an AI agent. Think of more LLM applications as a one shot response to a prompt... it cannot make autonomous decisions or any actions on our behalf. But whereas an agent in contrast is pretty dynamic. They&#39;re goal-driven.</i><b>&quot;</b></p><p class="paragraph" style="text-align:left;">This isn&#39;t semantic nitpicking, it&#39;s the foundation of an entirely new security paradigm. Traditional LLM applications operate within bounded interactions: you send a prompt, the model processes it, you receive a response. The attack surface is relatively contained. AI agents, however, operate in a continuous think-act-observe loop, adapting their behavior in runtime, connecting to external tools, and persisting context across sessions.</p><p class="paragraph" style="text-align:left;">Kumar explains the three-step process that defines agent behavior: <b>&quot;</b><i>You give some query to the agent and then the agent thinks and then acts, and then does some observation and how things are going.</i><b>&quot;</b> This seemingly simple loop creates profound security implications. Each &quot;act&quot; phase might involve calling external APIs, accessing databases, modifying files, or communicating with other agents all based on dynamic runtime decisions rather than predetermined workflows.</p><p class="paragraph" style="text-align:left;">For enterprise security architects, this means rethinking threat models from the ground up. You&#39;re no longer securing a stateless API endpoint; you&#39;re securing an autonomous system that makes real-time decisions about what tools to use, what data to access, and how to achieve goals you&#39;ve only described in natural language.</p><h3 class="heading" style="text-align:left;" id="the-top-three-ai-agent-threats-ente"><b>The Top Three AI Agent Threats Enterprise Teams Must Address</b></h3><p class="paragraph" style="text-align:left;">Kumar identifies three critical threat categories that keep him up at night, starting with what he considers the highest risk:</p><h4 class="heading" style="text-align:left;" id="1-memory-poisoning-and-context-mani"><b>1. Memory Poisoning and Context Manipulation</b></h4><p class="paragraph" style="text-align:left;"><b>&quot;</b><i>Memory poisoning... involves exploiting the three kinds of memory that I laid out [long-term, short-term, and entity memory]. And the context manipulation involves the agent&#39;s context window,&quot; Kumar explains. The attack vector is particularly insidious: &quot;Agent typically trust its memory, because it&#39;s its own memory... if its own memory is being compromised, then, these agents think, hey, you know, I&#39;m just doing the job that I&#39;m intended to do. But the goal or the context itself has been changed.</i><b>&quot;</b></p><p class="paragraph" style="text-align:left;">This is fundamentally different from traditional injection attacks. You&#39;re not exploiting input validation, you&#39;re poisoning the agent&#39;s knowledge base so it believes it&#39;s operating correctly while actually executing attacker-defined objectives. Kumar points to indirect prompt injection as the primary vector: malicious information inserted into RAG (Retrieval Augmented Generation) pipelines that the agent consumes as trusted context.</p><p class="paragraph" style="text-align:left;">For production deployments, this demands implementing memory content validation, session isolation, and robust authentication specifically for memory access controls that don&#39;t exist in traditional application security frameworks.</p><h4 class="heading" style="text-align:left;" id="2-tool-misuse-through-inadequate-pe"><b>2. Tool Misuse Through Inadequate Permission Scoping</b></h4><p class="paragraph" style="text-align:left;">Kumar uses a practical example to illustrate the risk: <b>&quot;</b><i>I rely on a copilot that has access to calendar tool to book my meetings. What if an attacker are able to abuse this processes and misuse the tools here. The tools is a calendar. So instead of sending a regular calendar invite... we could misuse the same Calendar tool to exfiltrate data.</i><b>&quot;</b></p><p class="paragraph" style="text-align:left;">The problem isn&#39;t that the calendar API is vulnerable, it&#39;s that the agent has legitimate access to it for one purpose (scheduling meetings) but lacks the contextual boundaries to prevent abuse for another purpose (data exfiltration). Traditional API security focused on authentication and authorization at the endpoint level. AI agents require contextual authorization understanding not just who is calling the API, but why and whether that aligns with the agent&#39;s intended goal.</p><p class="paragraph" style="text-align:left;">Kumar&#39;s mitigation guidance is clear: <b>&quot;</b><i>We have to scope like a minimal scope with short duration as much as possible. And then if there is like high risk action that has to be like a human in the loop involved.</i><b>&quot;</b> This represents a significant shift from &quot;set it and forget it&quot; API keys to dynamic, context-aware permission grants with built-in time limits.</p><h4 class="heading" style="text-align:left;" id="3-privilege-compromise-through-misc"><b>3. Privilege Compromise Through Misconfiguration</b></h4><p class="paragraph" style="text-align:left;">While this threat applies broadly across security domains, Kumar emphasizes its particular relevance to AI agents: <b>&quot;</b><i>This privileged compromise will be a huge threat... mostly through the misconfiguration in the agent. An attacker executes queries and RAG databases to access files and data that it shouldn&#39;t be able to access.</i><b>&quot;</b></p><p class="paragraph" style="text-align:left;">The challenge here is that AI agents, by their nature, need broader access than traditional applications to be useful. An agent designed to help with data analysis might legitimately need read access to many databases. The security control isn&#39;t binary (access or no access) it&#39;s about granular, conditional access that adapts based on the specific task and context.</p><h3 class="heading" style="text-align:left;" id="the-monitoring-gap-traditional-tool"><b>The Monitoring Gap: Traditional Tools Won&#39;t Cut It</b></h3><p class="paragraph" style="text-align:left;">When asked how security teams can detect these new threats, Kumar&#39;s response underscores a hard truth: <b>&quot;</b><i>Traditional tooling today is not gonna fully flag these all the behaviors. That&#39;s why we need more of these either in-house or from vendors that can dynamically listen and observe the actions and flag the risky operations.</i><b>&quot;</b></p><p class="paragraph" style="text-align:left;">The fundamental challenge is that AI agents operate faster than human review cycles and make decisions in ways that don&#39;t align with traditional signature-based detection. Kumar suggests an innovative approach: <b>&quot;</b><i>We could have some secondary agent who can help in monitoring that can follow the behaviors... over the time there has to be some baseline said, hey, for these kind of goal-driven actions, these are the new norm.</i><b>&quot;</b></p><p class="paragraph" style="text-align:left;">This represents AI-powered security for AI systems using one agent to monitor another&#39;s behavior against learned baselines. Kumar emphasizes the need for <b>&quot;</b><i>granular access control, which means getting into more granular, as much as possible, and when the identity shift, those log has to be in place.</i><b>&quot;</b></p><p class="paragraph" style="text-align:left;">For production operations, this translates to:</p><p class="paragraph" style="text-align:left;">• <b>Implementing comprehensive logging</b> of all agent actions, tool calls, and identity shifts</p><p class="paragraph" style="text-align:left;">• <b>Taking regular snapshots</b> of agent memory for forensic analysis</p><p class="paragraph" style="text-align:left;">• <b>Establishing behavioral baselines</b> for normal agent operations</p><p class="paragraph" style="text-align:left;">• <b>Deploying secondary monitoring agents</b> to detect anomalies in real-time</p><p class="paragraph" style="text-align:left;">• <b>Ensuring human-in-the-loop authorization</b> for high-risk actions</p><h3 class="heading" style="text-align:left;" id="the-architecture-of-ai-agent-securi"><b>The Architecture of AI Agent Security: Six Critical Components</b></h3><p class="paragraph" style="text-align:left;">Kumar breaks down AI agent architecture into six components that security teams must threat model:</p><p class="paragraph" style="text-align:left;"><b>1. Role Playing:</b> The specialized function the agent is trained to perform</p><p class="paragraph" style="text-align:left;"><b>2. Focus:</b> The specific goal or objective driving agent decisions</p><p class="paragraph" style="text-align:left;"><b>3. Tools:</b> External APIs and services the agent can access</p><p class="paragraph" style="text-align:left;"><b>4. Cooperation:</b> How agents communicate and coordinate with each other</p><p class="paragraph" style="text-align:left;"><b>5. Guardrails:</b> Both operational and security-focused controls</p><p class="paragraph" style="text-align:left;"><b>6. Memory:</b> Long-term, short-term, and entity-specific information stores</p><p class="paragraph" style="text-align:left;">Each component presents unique security considerations. Kumar emphasizes that <b>&quot;</b><i>when we do threat model these agents, there are various frameworks that can be adopted. OWASP Top 10 LLM applications and Agent AI is a great list. And then Agentic AI threat modeling framework, which is dubbed as Maestro by the Cloud Security Alliance, is a great one.</i><b>&quot;</b></p><p class="paragraph" style="text-align:left;">The Maestro framework provides a seven-layer approach covering foundation models, data operations, agent frameworks, deployment infrastructure, evaluation and observability, security and compliance, and the broader agent ecosystem. Kumar stresses this systematic approach: <b>&quot;</b><i>Security teams can use that as a blueprint to systematically analyze where controls are needed.</i><b>&quot;</b></p><h3 class="heading" style="text-align:left;" id="the-model-agnostic-reality-of-ai-vu"><b>The Model-Agnostic Reality of AI Vulnerabilities</b></h3><p class="paragraph" style="text-align:left;">A common misconception is that premium AI providers like OpenAI and Anthropic are immune to the vulnerabilities affecting open-source models. Kumar dispels this myth: <b>&quot;</b><i>I would say it&#39;s vendor agnostic at this point. All the models out there can spill credential leaks... any model is susceptible for those kind of tricks</i><b>.&quot;</b></p><p class="paragraph" style="text-align:left;">He explains the fundamental vulnerability through the &quot;Grandma Trick&quot; example, a prompt engineering technique where attackers manipulate the model by framing harmful requests as benign storytelling. <b>&quot;</b><i>Models are generally non-deterministic in nature, so no matter if it&#39;s OpenAI model or Claude model, most of them because there are few guardrails at the model layer but they&#39;re just basic... it&#39;s easy to trick those models in a way they can get into those bias nature and follow the things that you would ask for.</i><b>&quot;</b></p><p class="paragraph" style="text-align:left;">This has critical implications for enterprise deployment decisions. You can&#39;t simply outsource AI agent security by choosing premium providers you must implement controls at the orchestration, data, and tool-access layers regardless of which foundation model you&#39;re using.</p><h3 class="heading" style="text-align:left;" id="the-path-forward-three-layers-of-se"><b>The Path Forward: Three Layers of Security Evolution</b></h3><p class="paragraph" style="text-align:left;">Looking ahead, Kumar predicts security improvements will concentrate in three critical layers:</p><h4 class="heading" style="text-align:left;" id="the-orchestration-layer"><b>The Orchestration Layer</b></h4><p class="paragraph" style="text-align:left;"><b>&quot;</b><i>There will be a lot of guardrails which will be introduced because these orchestrations are the central brain that delegate tasks... perhaps inspired by Kubernetes in cloud. Kubernetes started adding lot of security features once it became a standard orchestration. Similarly, agent orchestration platforms will come up with more security features.</i><b>&quot;</b></p><h4 class="heading" style="text-align:left;" id="the-data-layer"><b>The Data Layer</b></h4><p class="paragraph" style="text-align:left;"><b>&quot;</b><i>We talked about memory poisoning and how can we ensure this memory is not poisoned... that boils down to verifying the trust source or the source information. We might be able to see better those detections if those memories indeed poisoned, if the knowledge base is being poisoned</i><b>.&quot;</b></p><h4 class="heading" style="text-align:left;" id="the-interface-layer"><b>The Interface Layer</b></h4><p class="paragraph" style="text-align:left;"><b>&quot;</b><i>More from the UI the user will be able to see more of these under-hood actions visibly so that they can intervene if needed... more transparency are, hey, you know, ask me before doing X, Y, Z kind of interface.&quot;</i></p><p class="paragraph" style="text-align:left;">This evolution mirrors the maturity curve we saw with container security starting with minimal controls and gradually building comprehensive security frameworks as the technology moved to production at scale.</p><h3 class="heading" style="text-align:left;" id="practical-implementation-guidance-f"><b>Practical Implementation Guidance for Security Leaders</b></h3><p class="paragraph" style="text-align:left;">Based on Kumar&#39;s insights, security teams deploying AI agents should prioritize these immediate actions:</p><p class="paragraph" style="text-align:left;"><b>1. Implement Memory Protection Controls</b></p><ul><li><p class="paragraph" style="text-align:left;">Validate all data entering agent memory stores</p></li><li><p class="paragraph" style="text-align:left;">Isolate sessions to prevent cross-contamination</p></li><li><p class="paragraph" style="text-align:left;">Implement authentication specifically for memory access</p></li><li><p class="paragraph" style="text-align:left;">Take regular snapshots for forensic analysis</p></li></ul><p class="paragraph" style="text-align:left;"><b>2. Enforce Granular, Time-Limited Permissions</b></p><ul><li><p class="paragraph" style="text-align:left;">Replace static API keys with dynamic, context-aware grants</p></li><li><p class="paragraph" style="text-align:left;">Implement minimum necessary scope for each tool</p></li><li><p class="paragraph" style="text-align:left;">Set automatic expiration on permissions</p></li><li><p class="paragraph" style="text-align:left;">Require human approval for high-risk actions</p></li></ul><p class="paragraph" style="text-align:left;"><b>3. Deploy Multi-Layer Monitoring</b></p><ul><li><p class="paragraph" style="text-align:left;">Log all agent actions, tool calls, and identity shifts</p></li><li><p class="paragraph" style="text-align:left;">Establish behavioral baselines for normal operations</p></li><li><p class="paragraph" style="text-align:left;">Consider deploying monitoring agents to watch production agents</p></li><li><p class="paragraph" style="text-align:left;">Integrate agent logs into existing SIEM platforms</p></li></ul><p class="paragraph" style="text-align:left;"><b>4. Adopt Systematic Threat Modeling</b></p><ul><li><p class="paragraph" style="text-align:left;">Use frameworks like CSA&#39;s Maestro or OWASP Top 10 for LLM Applications</p></li><li><p class="paragraph" style="text-align:left;">Threat model all six agent architecture components (role, focus, tools, cooperation, guardrails, memory)</p></li><li><p class="paragraph" style="text-align:left;">Don&#39;t assume premium AI providers eliminate security risks</p></li><li><p class="paragraph" style="text-align:left;">Plan for model-agnostic security controls</p></li></ul><p class="paragraph" style="text-align:left;"><b>5. Prepare for the Evolution</b></p><ul><li><p class="paragraph" style="text-align:left;">Invest in orchestration-layer security as it matures</p></li><li><p class="paragraph" style="text-align:left;">Develop capabilities for detecting poisoned memory and knowledge bases</p></li><li><p class="paragraph" style="text-align:left;">Design interfaces that provide visibility into agent decision-making</p></li><li><p class="paragraph" style="text-align:left;">Build human-in-the-loop workflows for critical operations</p></li></ul><p class="paragraph" style="text-align:left;">The transition to AI-augmented operations isn&#39;t optional; organizations that don&#39;t deploy these capabilities will fall behind competitors who do. But rushing into production without addressing these fundamental security concerns creates risks that traditional security tools can&#39;t detect or prevent. Kumar&#39;s guidance provides a roadmap for navigating this transformation strategically.</p><hr class="content_break"><h2 class="heading" style="text-align:left;" id="related-resources">RELATED RESOURCES</h2><ul><li><p class="paragraph" style="text-align:left;"><a class="link" href="https://owasp.org/www-project-top-10-for-large-language-model-applications/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=container-escape-ai-agent-risk-lessons-from-box-s-security-lead" target="_blank" rel="noopener noreferrer nofollow">OWASP Top 10 for LLM Applications</a></p></li><li><p class="paragraph" style="text-align:left;"><a class="link" href="https://cloudsecurityalliance.org/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=container-escape-ai-agent-risk-lessons-from-box-s-security-lead" target="_blank" rel="noopener noreferrer nofollow">Cloud Security Alliance Maestro Framework</a> for Agentic AI Threat Modeling</p></li><li><p class="paragraph" style="text-align:left;"><a class="link" href="https://www.anthropic.com/research?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=container-escape-ai-agent-risk-lessons-from-box-s-security-lead" target="_blank" rel="noopener noreferrer nofollow">Anthropic Research on AI Agent Security</a></p></li><li><p class="paragraph" style="text-align:left;"><a class="link" href="https://aws.amazon.com/blogs/containers/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=container-escape-ai-agent-risk-lessons-from-box-s-security-lead" target="_blank" rel="noopener noreferrer nofollow">AWS Container Security Best Practices</a></p></li><li><p class="paragraph" style="text-align:left;"><a class="link" href="https://sysdig.com/blog/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=container-escape-ai-agent-risk-lessons-from-box-s-security-lead" target="_blank" rel="noopener noreferrer nofollow">Sysdig Container Escape Techniques</a></p></li><li><p class="paragraph" style="text-align:left;"><a class="link" href="https://kubernetes.io/docs/concepts/security/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=container-escape-ai-agent-risk-lessons-from-box-s-security-lead" target="_blank" rel="noopener noreferrer nofollow">Kubernetes Security Hardening Guide</a></p></li><li><p class="paragraph" style="text-align:left;"><a class="link" href="https://www.mandiant.com/resources/blog?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=container-escape-ai-agent-risk-lessons-from-box-s-security-lead" target="_blank" rel="noopener noreferrer nofollow">Google Mandiant Threat Reports</a></p></li><li><p class="paragraph" style="text-align:left;"><a class="link" href="https://www.crowdstrike.com/blog/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=container-escape-ai-agent-risk-lessons-from-box-s-security-lead" target="_blank" rel="noopener noreferrer nofollow">CrowdStrike Threat Intelligence</a></p></li><li><p class="paragraph" style="text-align:left;"><a class="link" href="https://techcommunity.microsoft.com/category/microsoftdefenderforcloud?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=container-escape-ai-agent-risk-lessons-from-box-s-security-lead" target="_blank" rel="noopener noreferrer nofollow">Microsoft Defender for Cloud Blog</a></p></li><li><p class="paragraph" style="text-align:left;"><a class="link" href="https://learn.microsoft.com/en-us/azure/storage/blobs/security-recommendations?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=container-escape-ai-agent-risk-lessons-from-box-s-security-lead" target="_blank" rel="noopener noreferrer nofollow">Azure Blob Storage Security Best Practices</a></p></li><li><p class="paragraph" style="text-align:left;"><a class="link" href="https://docs.aws.amazon.com/AmazonS3/latest/userguide/security-best-practices.html?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=container-escape-ai-agent-risk-lessons-from-box-s-security-lead" target="_blank" rel="noopener noreferrer nofollow">AWS S3 Security Configuration Guide</a></p></li></ul><hr class="content_break"><h2 class="heading" style="text-align:left;" id="related-podcast-episodes"><b>Related Podcast Episodes 🎧</b></h2><div class="image"><a class="image__link" href="https://www.cloudsecuritypodcast.tv/videos/threat-modeling-the-ai-agent-architecture-threats-monitoring?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=container-escape-ai-agent-risk-lessons-from-box-s-security-lead" rel="noopener" target="_blank"><img alt="" class="image__image" style="" src="https://media.beehiiv.com/cdn-cgi/image/fit=scale-down,format=auto,onerror=redirect,quality=80/uploads/asset/file/99b30943-ada9-4359-9c0e-1a205caec5af/S06_Mohan_Kumar.jpg?t=1762986189"/></a><div class="image__source"><a class="image__source_link" href="https://www.cloudsecuritypodcast.tv/videos/threat-modeling-the-ai-agent-architecture-threats-monitoring?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=container-escape-ai-agent-risk-lessons-from-box-s-security-lead" rel="noopener" target="_blank"><span class="image__source_text"><p>Threat Modeling he AI Agent: Architecture, Threats & Monitoring</p></span></a></div></div><h3 class="heading" style="text-align:left;" id="question-for-you-reply-to-this-emai">Question for you? (Reply to this email)</h3><p class="paragraph" style="text-align:left;"> 🤖 Are your AI agents auditable? (Yes/No/Maybe)<br>Could you reconstruct their decision chain after a security incident?</p><p class="paragraph" style="text-align:left;">Next week, we&#39;ll explore another critical aspect of cloud security. Stay tuned!</p><hr class="content_break"><p class="paragraph" style="text-align:left;">📬 Want weekly expert takes on AI & Cloud Security? [<a class="link" href="https://www.cloudsecuritynewsletter.com/subscribe?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=container-escape-ai-agent-risk-lessons-from-box-s-security-lead" target="_blank" rel="noopener noreferrer nofollow">Subscribe here</a>]”</p><hr class="content_break"><p class="paragraph" style="text-align:left;"><span style="color:rgb(238, 40, 60);"><b><a class="link" href="mailto:info@cloudsecuritypodcast.tv" target="_blank" rel="noopener noreferrer nofollow">We would love to hear from you</a></b></span>📢 for a feature or topic request or if you would like to sponsor an edition of Cloud Security Newsletter. </p><p class="paragraph" style="text-align:left;">Thank you for continuing to subscribe and Welcome to the new members in tis newsletter community💙</p><p class="paragraph" style="text-align:start;">Peace!</p><p class="paragraph" style="text-align:start;"><a class="link" href="https://www.linkedin.com/in/shilpi-bhattacharjee/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=container-escape-ai-agent-risk-lessons-from-box-s-security-lead" target="_blank" rel="noopener noreferrer nofollow">Shilpi Bhattacharjee</a></p><div class="image"><a class="image__link" href="https://www.cloudsecuritypodcast.tv/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=container-escape-ai-agent-risk-lessons-from-box-s-security-lead" rel="noopener" target="_blank"><img alt="" class="image__image" style="" src="https://media.beehiiv.com/cdn-cgi/image/fit=scale-down,format=auto,onerror=redirect,quality=80/uploads/asset/file/dc094a1c-678a-43e0-adc9-1bee6c3499e2/CSP_Logo_Blue_ScreenRes_3000x3000_v2.jpg"/></a></div><div class="button" style="text-align:left;"><a target="_blank" rel="noopener nofollow noreferrer" class="button__link" style="" href="{{rp_referral_hub_url}}"><span class="button__text" style=""> Share the newsletter </span></a></div><p class="paragraph" style="text-align:left;">Was this forwarded to you? You can <a class="link" href="https://www.cloudsecuritynewsletter.com/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=container-escape-ai-agent-risk-lessons-from-box-s-security-lead" target="_blank" rel="noopener noreferrer nofollow">Sign up here</a>, to join our growing readership.</p><p class="paragraph" style="text-align:left;">Want to <b>sponsor</b> the next newsletter edition! <a class="link" href="mailto:info@cloudsecuritypodcast.tv" target="_blank" rel="noopener noreferrer nofollow">Lets make it happen </a></p><p class="paragraph" style="text-align:left;">Have you joined our FREE <b>Monthly</b> <a class="link" href="https://www.cloudsecuritybootcamp.com/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=container-escape-ai-agent-risk-lessons-from-box-s-security-lead" target="_blank" rel="noopener noreferrer nofollow">Cloud Security Bootcamp</a> yet?</p><p class="paragraph" style="text-align:left;">checkout our <b>sister podcast</b> <a class="link" href="https://www.youtube.com/@AISecurityPodcast?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=container-escape-ai-agent-risk-lessons-from-box-s-security-lead" target="_blank" rel="noopener noreferrer nofollow">AI Security Podcast</a></p></div><div class='beehiiv__footer'><br class='beehiiv__footer__break'><hr class='beehiiv__footer__line'><a target="_blank" class="beehiiv__footer_link" style="text-align: center;" href="https://www.beehiiv.com/?utm_campaign=92cd469d-5b4d-4003-bf1e-cc6537b02f8f&utm_medium=post_rss&utm_source=cloud_security_newsletter">Powered by beehiiv</a></div></div>
  ]]></content:encoded>
</item>

      <item>
  <title>🚨 DOJ clears Google Wiz Purchase: How Bloomberg Navigate AI-Powered Security at Scale</title>
  <description>This week we explore breaking vulnerabilities in Microsoft Teams enabling message manipulation and caller ID forgery, AI-powered malware with self-modifying capabilities discovered by Google, and exclusive insights from Bloomberg&#39;s application security leader and cloud security architects on breaking down silos between AppSec and CloudSec teams as AI transforms the enterprise security landscape.</description>
      <enclosure url="https://media.beehiiv.com/cdn-cgi/image/fit=scale-down,format=auto,onerror=redirect,quality=80/uploads/asset/file/ef9f2b9c-1763-4bb7-8406-6ba196f3e03c/Screenshot_2025-11-05_at_11.47.53_PM.png" length="1575618" type="image/png"/>
  <link>https://www.cloudsecuritynewsletter.com/p/doj-clears-google-wiz-teams-impersonation-ai-malware</link>
  <guid isPermaLink="true">https://www.cloudsecuritynewsletter.com/p/doj-clears-google-wiz-teams-impersonation-ai-malware</guid>
  <pubDate>Wed, 05 Nov 2025 23:51:13 +0000</pubDate>
  <atom:published>2025-11-05T23:51:13Z</atom:published>
    <dc:creator>Ashish Rajan</dc:creator>
  <content:encoded><![CDATA[
    <div class='beehiiv'><style>
  .bh__table, .bh__table_header, .bh__table_cell { border: 1px solid #C0C0C0; }
  .bh__table_cell { padding: 5px; background-color: #FFFFFF; }
  .bh__table_cell p { color: #2D2D2D; font-family: 'Helvetica',Arial,sans-serif !important; overflow-wrap: break-word; }
  .bh__table_header { padding: 5px; background-color:#F1F1F1; }
  .bh__table_header p { color: #2A2A2A; font-family:'Trebuchet MS','Lucida Grande',Tahoma,sans-serif !important; overflow-wrap: break-word; }
</style><div class='beehiiv__body'><h2 class="heading" style="text-align:left;" id="hello-from-the-cloudverse"><span style="background-color:#28EEDA;"><b>Hello from the Cloud-verse!</b></span></h2><p class="paragraph" style="text-align:left;">This week’s Cloud Security Newsletter Topic we cover - <b>Breaking Down Silos: How AI Security Demands AppSec and CloudSec Convergence</b><b> </b><a class="link" href="#cloud-security-topic-of-the-week" rel="noopener noreferrer nofollow">(continue reading)</a> </p><div class="button" style="text-align:center;"><a target="_blank" rel="noopener nofollow noreferrer" class="button__link" style="" href="https://links.cloudsecuritypodcast.tv/csa-ai-soc-report-dropzone-oct2025?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=doj-clears-google-wiz-purchase-how-bloomberg-navigate-ai-powered-security-at-scale"><span class="button__text" style=""> Check out this week’s Sponsor: Dropzone </span></a></div><hr class="content_break"><div class="image"><a class="image__link" href="https://www.cloudsecuritypodcast.tv/videos/ai-is-already-breaking-the-silos-between-appsec-cloudsec?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=doj-clears-google-wiz-purchase-how-bloomberg-navigate-ai-powered-security-at-scale" rel="noopener" target="_blank"><img alt="" class="image__image" style="" src="https://media.beehiiv.com/cdn-cgi/image/fit=scale-down,format=auto,onerror=redirect,quality=80/uploads/asset/file/ef9f2b9c-1763-4bb7-8406-6ba196f3e03c/Screenshot_2025-11-05_at_11.47.53_PM.png?t=1762386535"/></a><div class="image__source"><span class="image__source_text"><p>This image was generated by AI. It&#39;s still experimental, so it might not be a perfect match!</p></span></div></div><p class="paragraph" style="text-align:left;"><b>Incase, this is your 1st Cloud Security Newsletter! You are in good company! </b><br>You are reading this issue along with your friends and colleagues from companies like <i>Netflix</i>, Citi, <i>JP Morgan, Linkedin, Reddit, Github, Gitlab, CapitalOne, Robinhood, HSBC, British Airways, Airbnb, Block, Booking Inc & more</i> who subscribe to this newsletter, who like you want to learn what’s new with Cloud Security each week from their industry peers like many others who listen to <a class="link" href="https://open.spotify.com/show/6LZgeh4GecRYPc0WrwMB4I?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=doj-clears-google-wiz-purchase-how-bloomberg-navigate-ai-powered-security-at-scale" target="_blank" rel="noopener noreferrer nofollow">Cloud Security Podcast</a> & <a class="link" href="https://open.spotify.com/show/3nV4eijfzdHKIvDOaycVII?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=doj-clears-google-wiz-purchase-how-bloomberg-navigate-ai-powered-security-at-scale" target="_blank" rel="noopener noreferrer nofollow">AI Security Podcast</a> every week.</p><hr class="content_break"><p class="paragraph" style="text-align:left;">Welcome to this week’s Cloud Security Newsletter</p><p class="paragraph" style="text-align:left;">As AI reshapes application development and cloud security at an unprecedented pace, traditional security boundaries are dissolving. This week, we examine critical vulnerabilities in collaboration platforms that 320 million users depend on daily, alongside the emergence of AI-powered polymorphic malware that can dynamically evade detection. More importantly, we bring you a candid conversation with <b>Tejas Dakve</b> (Application Security Leader at <b>Bloomberg Industry Group</b>), <b>Aditya Patel</b> (Security Architect at a major cloud provider), and Ashish Rajan about how enterprises are breaking down the historical silos between application security and cloud security teams to address AI-native threats that refuse to respect traditional organizational boundaries.</p><p class="paragraph" style="text-align:left;">The key insight: security teams can no longer operate as gatekeepers. As Tejas puts it, teams must transform &quot;from department of no to department of safe yes&quot; while building guardrails that enable rather than obstruct innovation.</p><hr class="content_break"><h2 class="heading" style="text-align:left;" id="tldr-for-busy-readers">📰 TL;DR for Busy Readers</h2><ul><li><p class="paragraph" style="text-align:left;"><b>Wiz x Google:</b> DOJ antitrust <b>cleared</b>; expect deeper GCP ↔ Wiz coupling. </p></li><li><p class="paragraph" style="text-align:left;"><b>Teams flaws:</b> Silent edits + spoofed identities → zero-trust verification for high-stakes comms.</p></li><li><p class="paragraph" style="text-align:left;"><b>Identity & Collab:</b> <b>Microsoft Teams</b> spoofing/vulns mean treat chat as authoritative systems of record with controls.</p></li><li><p class="paragraph" style="text-align:left;"><b>M&A:</b> Dataminr buys ThreatConnect ($290M) → external signals + internal intel.</p></li><li><p class="paragraph" style="text-align:left;"><b>AI-polymorphic malware:</b> “PROMPTFLUX” queries Gemini for JIT obfuscation → behavior over signatures.</p></li><li><p class="paragraph" style="text-align:left;"><b>Supply chain / Dev:</b> React Native CLI <b>RCE</b> (CVE-2025-11953) → protect dev laptops & CI</p></li><li><p class="paragraph" style="text-align:left;"><b>AppSec + CloudSec convergence</b> required for AI security T-shaped engineers with cross-functional knowledge becoming the new baseline</p></li><li><p class="paragraph" style="text-align:left;"><b>Paved road solutions</b> work: provide secure defaults and automated approval workflows instead of manual security gates</p></li><li><p class="paragraph" style="text-align:left;"><b>Agentic AI shifts focus</b> from conversational threats to IAM policies and action-based permissions across cloud environments</p></li></ul><hr class="content_break"><h2 class="heading" style="text-align:left;" id="this-weeks-security-news">📰 <b>THIS WEEK&#39;S SECURITY HEADLINES</b></h2><h3 class="heading" style="text-align:left;" id="1-doj-clears-alphabets-32-b-acquisi"><b>1. DOJ clears Alphabet’s $32B acquisition of Wiz (antitrust review)</b></h3><p class="paragraph" style="text-align:left;"><b>What Happened</b></p><p class="paragraph" style="text-align:left;">The U.S. Department of Justice(DOJ) approved Alphabet/Google&#39;s planned $32B purchase of Wiz, removing the last major regulatory hurdle. This is the largest security deal on record and cements Google&#39;s push to own more of the enterprise cloud security stack.</p><p class="paragraph" style="text-align:left;"><b>Why it matters (practitioner take):</b></p><ul><li><p class="paragraph" style="text-align:left;">Expect tighter <b>Wiz ↔ Google Cloud</b> integrations (posture/CNAPP/AI-assisted detections).</p></li><li><p class="paragraph" style="text-align:left;">Re-check <b>vendor concentration</b> and <b>dual-vendor exits</b> in FY26 plans.</p></li></ul><p class="paragraph" style="text-align:left;"><b>Do this now</b></p><ul><li><p class="paragraph" style="text-align:left;"><b>Board brief:</b> “Wiz x Google—integration paths and lock-in risk.”</p></li><li><p class="paragraph" style="text-align:left;"><b>Risk register:</b> Add <b>vendor-exit play</b> + <b>multi-cloud coverage test</b>.</p></li><li><p class="paragraph" style="text-align:left;"><b>Red team tasker:</b> Simulate <b>GCP-native ↔ Wiz</b> control gaps during M&A integration.</p></li></ul><p class="paragraph" style="text-align:left;"><a class="link" href="https://www.reuters.com/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=doj-clears-google-wiz-purchase-how-bloomberg-navigate-ai-powered-security-at-scale" target="_blank" rel="noopener noreferrer nofollow">Read More: Reuters</a></p><hr class="content_break"><h3 class="heading" style="text-align:left;" id="2-microsoft-teams-vulns-allow-messa"><b>2. Microsoft Teams vulns allow message manipulation & exec impersonation</b></h3><p class="paragraph" style="text-align:left;"><b>What happened:</b> Check Point disclosed <b>four</b> Teams issues enabling silent edits, spoofed identities, and forged call/caller notifications; one tracked as <i>CVE-2024-38197</i>; fixes landed through 2024–Oct 2025.</p><p class="paragraph" style="text-align:left;"><b>Why it matters:</b> Teams is an authoritative system of record for many workflows (approvals/IR/chatOps). Silent history edits + spoofed senders undermine financial approvals, IR channels, and change control.</p><p class="paragraph" style="text-align:left;"><b>Do this now</b></p><ul><li><p class="paragraph" style="text-align:left;"><b>Policy:</b> Treat <b>chat like code</b> → require <b>out-of-band verification</b> for $$/secrets.</p></li><li><p class="paragraph" style="text-align:left;"><b>Controls:</b> Inline <b>DLP/Malware scanning</b> for Teams file/message payloads.</p></li><li><p class="paragraph" style="text-align:left;"><b>Detection:</b> Alert on <b>retroactive edits</b> in high-risk channels (finance/IR).</p></li><li><p class="paragraph" style="text-align:left;"><b>Admin:</b> Confirm <b>October 2025</b> client/service updates are deployed org-wide.</p></li></ul><p class="paragraph" style="text-align:left;"><a class="link" href="https://research.checkpoint.com/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=doj-clears-google-wiz-purchase-how-bloomberg-navigate-ai-powered-security-at-scale" target="_blank" rel="noopener noreferrer nofollow">Read More: Check Point Research</a><b>,</b><a class="link" href="https://thehackernews.com/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=doj-clears-google-wiz-purchase-how-bloomberg-navigate-ai-powered-security-at-scale" target="_blank" rel="noopener noreferrer nofollow"> The Hacker News</a></p><hr class="content_break"><h3 class="heading" style="text-align:left;" id="3-a-ipowered-malware-promptflux-per"><b>3. AI-powered malware (“PROMPTFLUX”) performs JIT self-obfuscation</b></h3><p class="paragraph" style="text-align:left;"><b>What happened:</b> Google Threat Intelligence tracked PROMPTFLUX, an experimental VBScript dropper that calls Gemini to generate new evasion code at runtime. Also notes other families abusing LLMs.</p><p class="paragraph" style="text-align:left;"><b>Why it matters:</b> First publicly documented use of LLM JIT obfuscation static signatures will lag; behavioral + API telemetry becomes table stakes.</p><p class="paragraph" style="text-align:left;"><b>Do this now</b></p><ul><li><p class="paragraph" style="text-align:left;"><b>EDR:</b> Prioritize <b>behavioral rules</b> (script child-process anomalies, LOLBins).</p></li><li><p class="paragraph" style="text-align:left;"><b>Egress:</b> Monitor/alert on <b>unexpected LLM API calls</b> from endpoints/servers.</p></li><li><p class="paragraph" style="text-align:left;"><b>Hunt:</b> Search for <b>metamorphic VBS</b> with periodic network beacons to AI APIs.</p></li></ul><p class="paragraph" style="text-align:left;"><b>Background:</b> Google’s prior work on Gemini for large-scale malware analysis</p><p class="paragraph" style="text-align:left;"><a class="link" href="https://thehackernews.com/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=doj-clears-google-wiz-purchase-how-bloomberg-navigate-ai-powered-security-at-scale" target="_blank" rel="noopener noreferrer nofollow">Read More: The Hacker News</a><b>,</b><a class="link" href="https://cloud.google.com/blog?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=doj-clears-google-wiz-purchase-how-bloomberg-navigate-ai-powered-security-at-scale" target="_blank" rel="noopener noreferrer nofollow"> Google Cloud Blog</a></p><hr class="content_break"><h3 class="heading" style="text-align:left;" id="4-major-ma-dataminr-acquires-threat"><b>4. Major M&A: Dataminr Acquires ThreatConnect for $290M</b></h3><p class="paragraph" style="text-align:left;"><b>What happened:</b> Dataminr combines external signals with ThreatConnect’s intel/TIP; SecurityWeek logged 45 cyber M&A deals in Oct 2025; Veeam–Securiti also announced.</p><p class="paragraph" style="text-align:left;"><b>Why it matters:</b> Expect agentic correlation (external+internal) and recommendations in SOC stacks; fewer point tools, more platform workflows.</p><p class="paragraph" style="text-align:left;"><b>Do this now</b></p><ul><li><p class="paragraph" style="text-align:left;"><b>SOC roadmap:</b> Define <b>intel→detection→response</b> data flows before tooling.</p></li><li><p class="paragraph" style="text-align:left;"><b>Contracting:</b> Tie SLAs to <b>false-positive budgets</b> + <b>MTTD</b> improvements.</p></li><li><p class="paragraph" style="text-align:left;"><b>Privacy:</b> Re-review <b>data residency</b> for external signal ingestion.</p></li></ul><p class="paragraph" style="text-align:left;"><a class="link" href="https://www.securityweek.com/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=doj-clears-google-wiz-purchase-how-bloomberg-navigate-ai-powered-security-at-scale" target="_blank" rel="noopener noreferrer nofollow">Read More: SecurityWeek</a></p><hr class="content_break"><h3 class="heading" style="text-align:left;" id="5-critical-react-native-cli-vulnera"><b>5. Critical React Native CLI Vulnerability Exposes 2 Million Weekly Users</b></h3><p class="paragraph" style="text-align:left;"><b>What happened:</b> JFrog disclosed command injection in <span style="color:rgb(24, 128, 56);">@react-native-community/cli-server-api</span>; Metro dev server binds to all interfaces by default, making a local bug remotely exploitable. Affects 4.8.0 → &lt;20.0.0; fixed in 20.0.0.</p><p class="paragraph" style="text-align:left;"><b>Why it matters:</b> Developer laptops and CI agents become cloud pivots; supply chain risk.</p><p class="paragraph" style="text-align:left;"><b>Do this now</b></p><ul><li><p class="paragraph" style="text-align:left;"><b>Patch: Pin </b><span style="color:rgb(24, 128, 56);"><b>@react-native-community/cli-server-api</b></span><b> ≥ 20.0.0 (every repo).</b></p></li><li><p class="paragraph" style="text-align:left;"><b>Network: Isolate dev subnets; block Metro ports from WAN/LAN cross-talk.</b></p></li><li><p class="paragraph" style="text-align:left;"><b>CI: Rotate tokens/SSH keys; scan for indicators of compromise on dev hosts.</b></p></li></ul><p class="paragraph" style="text-align:left;"><a class="link" href="https://www.securityweek.com/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=doj-clears-google-wiz-purchase-how-bloomberg-navigate-ai-powered-security-at-scale" target="_blank" rel="noopener noreferrer nofollow">Read More: SecurityWeek</a><b>,</b><a class="link" href="https://jfrog.com/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=doj-clears-google-wiz-purchase-how-bloomberg-navigate-ai-powered-security-at-scale" target="_blank" rel="noopener noreferrer nofollow"> JFrog</a></p><hr class="content_break"><h2 class="heading" style="text-align:left;" id="cloud-security-topic-of-the-week">🎯 Cloud Security Topic of the Week: </h2><h3 class="heading" style="text-align:left;" id="breaking-down-silos-how-ai-security"><b>Breaking Down Silos: How AI Security Demands AppSec and CloudSec Convergence</b></h3><p class="paragraph" style="text-align:left;">The rise of AI-native applications is forcing a fundamental reckoning with how security teams are structured and operate. For years, application security and cloud security have existed as distinct disciplines with separate tools, threat models, and areas of responsibility. But as AI becomes deeply embedded into both application logic and cloud infrastructure, this separation is no longer tenable.</p><p class="paragraph" style="text-align:left;"><b>The challenge:</b> Agentic AI crosses app logic, IAM, data, and multi-cloud APIs two separate threat models that interlock.</p><p class="paragraph" style="text-align:left;"><b>Field lessons (Bloomberg + Cloud Architects):</b></p><ul><li><p class="paragraph" style="text-align:left;">Move from <b>gates → guardrails</b>: pre-approved <b>“paved road”</b> paths with automated checks.</p></li><li><p class="paragraph" style="text-align:left;">Treat agents like <b>non-human workloads</b>: identity-first design; least-privilege scopes.</p></li><li><p class="paragraph" style="text-align:left;"><b>Continuous threat modeling</b>: automated prompts + human in loop; update as threats evolve.</p></li><li><p class="paragraph" style="text-align:left;">Build <b>T-shaped</b> teams: deep specialty + broad fluency (AppSec understands IAM; CloudSec groks prompt/LLM issues).</p></li></ul><h3 class="heading" style="text-align:left;" id="starter-blueprint"><b>Starter blueprint</b></h3><ul><li><p class="paragraph" style="text-align:left;"><b>Org:</b> Cross-functional <b>pods</b> (AppSec, CloudSec, Data, Platform, Compliance, Product).</p></li><li><p class="paragraph" style="text-align:left;"><b>Identity:</b> <b>Agent roles</b> with action-scoped permissions; rotate secrets from a vault; no static MFA hacks.</p></li><li><p class="paragraph" style="text-align:left;"><b>Pipelines:</b> Governance-as-code checks + AI eval gates; approvals in hours, not weeks.</p></li><li><p class="paragraph" style="text-align:left;"><b>Ops:</b> AISPM mindset assets: datasets, models, agents, embeddings, endpoints.</p></li></ul><h2 class="heading" style="text-align:left;" id="featured-experts-this-week"><b>Featured Experts This Week </b>🎤</h2><ul><li><p class="paragraph" style="text-align:left;"><a class="link" href="https://www.linkedin.com/in/tejas-dakve-168b7b88/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=doj-clears-google-wiz-purchase-how-bloomberg-navigate-ai-powered-security-at-scale" target="_blank" rel="noopener noreferrer nofollow"><b>Tejas Dakve</b></a> - Senior Manager Application Security, Bloomberg Industry Group</p></li><li><p class="paragraph" style="text-align:left;"><a class="link" href="https://www.linkedin.com/in/adityarpatel/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=doj-clears-google-wiz-purchase-how-bloomberg-navigate-ai-powered-security-at-scale" target="_blank" rel="noopener noreferrer nofollow"><b>Aditya Patel</b></a> - Security Architect, </p></li><li><p class="paragraph" style="text-align:left;"><b><a class="link" href="https://www.linkedin.com/in/ashishrajan/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=doj-clears-google-wiz-purchase-how-bloomberg-navigate-ai-powered-security-at-scale" target="_blank" rel="noopener noreferrer nofollow">Ashish Rajan</a></b> - CISO | Co-Host <a class="link" href="https://open.spotify.com/show/3nV4eijfzdHKIvDOaycVII?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=doj-clears-google-wiz-purchase-how-bloomberg-navigate-ai-powered-security-at-scale" target="_blank" rel="noopener noreferrer nofollow">AI Security Podcast</a> , Host of <a class="link" href="https://www.cloudsecuritypodcast.tv/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=doj-clears-google-wiz-purchase-how-bloomberg-navigate-ai-powered-security-at-scale" target="_blank" rel="noopener noreferrer nofollow">Cloud Security Podcast</a></p></li></ul><h2 class="heading" style="text-align:left;" id="definitions-and-core-concepts"><b>Definitions and Core Concepts 📚</b></h2><p class="paragraph" style="text-align:left;">Before diving into our insights, let&#39;s clarify some key terms:</p><ul><li><p class="paragraph" style="text-align:left;"><b>Agentic AI:</b> LLMs that <b>act</b> (APIs/emails/resources), not just chat.</p></li><li><p class="paragraph" style="text-align:left;"><b>T-Shaped Engineer:</b> Deep in one, <b>conversant across</b> adjacent domains.</p></li><li><p class="paragraph" style="text-align:left;"><b>Paved Road:</b> <b>Secure-by-default</b> patterns + auto approvals.</p></li><li><p class="paragraph" style="text-align:left;"><b>JIT AI Malware:</b> Malware that <b>calls LLMs</b> at runtime for new obfuscation.</p></li><li><p class="paragraph" style="text-align:left;"><b>CVE-2025-11953:</b> React Native CLI <b>RCE</b> via Metro server exposure.</p></li></ul><hr class="content_break"><p class="paragraph" style="text-align:center;">This week&#39;s issue is sponsored by <a class="link" href="https://links.cloudsecuritypodcast.tv/csa-ai-soc-report-dropzone-oct2025?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=doj-clears-google-wiz-purchase-how-bloomberg-navigate-ai-powered-security-at-scale" target="_blank" rel="noopener noreferrer nofollow">Dropzone</a></p><p class="paragraph" style="text-align:left;">New independent research from Cloud Security Alliance proves AI SOC agents dramatically improve analyst performance. </p><p class="paragraph" style="text-align:left;">In controlled testing with 148 security professionals using Dropzone AI, analysts achieved 22-29% higher accuracy, completed investigations 45-61% faster, and maintained superior quality even under fatigue. </p><p class="paragraph" style="text-align:left;">The study reveals that 94% of participants viewed AI more positively after hands-on use. See the full benchmark results.</p><p class="paragraph" style="text-align:center;"><a class="link" href="https://links.cloudsecuritypodcast.tv/csa-ai-soc-report-dropzone-oct2025?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=doj-clears-google-wiz-purchase-how-bloomberg-navigate-ai-powered-security-at-scale" target="_blank" rel="noopener noreferrer nofollow">📤 Read the Full Study from CSA</a></p><hr class="content_break"><h2 class="heading" style="text-align:left;" id="our-insights-from-this-practitioner">💡<b>Our Insights from this Practitioner 🔍</b></h2><h3 class="heading" style="text-align:left;" id="ai-is-already-breaking-the-silos-be"><b>AI is already breaking the Silos Between AppSec & CloudSec</b><b>(</b><a class="link" href="https://www.cloudsecuritypodcast.tv/videos/ai-is-already-breaking-the-silos-between-appsec-cloudsec?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=doj-clears-google-wiz-purchase-how-bloomberg-navigate-ai-powered-security-at-scale" target="_blank" rel="noopener noreferrer nofollow"><b>Full Episode here</b></a><b>)</b></h3><h3 class="heading" style="text-align:left;" id="the-volume-problem-ai-generated-cod"><b>The Volume Problem: AI-Generated Code Overwhelms Traditional Security Gates</b></h3><p class="paragraph" style="text-align:left;">The conversation began with a stark reality check about the scale challenge facing security teams. As Tejas Dakve explains, &quot;<i>The speed of writing the code and the volume of code that security teams have to secure is unparallel. It&#39;s absolutely impossible to use our traditional security gates where security teams used to be a gate checker</i>.&quot;</p><p class="paragraph" style="text-align:left;">This isn&#39;t hyperbole. When developers leverage AI coding assistants like GitHub Copilot or Cursor, they&#39;re producing code at 3-5x their previous velocity. But security teams haven&#39;t scaled proportionally. The math simply doesn&#39;t work anymore for manual review processes.</p><p class="paragraph" style="text-align:left;">Aditya Patel reinforces this point with a practical example: &quot;<i>The problem is now we have to acknowledge the fact that new attacks are being introduced or new sort of architectural considerations that need to be there for multi-agent or Agentic workflows.</i>&quot; Traditional threat modeling frameworks like STRIDE or PASTA don&#39;t address prompt injection, data poisoning, or model theft threats that require specialized knowledge.</p><p class="paragraph" style="text-align:left;">The solution both practitioners advocate? Automated, continuous threat modeling paired with &quot;paved road&quot; solutions that provide secure defaults developers can adopt without friction.</p><h3 class="heading" style="text-align:left;" id="from-department-of-no-to-department"><b>From &quot;Department of No&quot; to &quot;Department of Safe Yes&quot;</b></h3><p class="paragraph" style="text-align:left;">Perhaps the most striking cultural insight came from Tejas Dakve&#39;s reframing of security&#39;s role: &quot;<i>Security leaders we have traditionally been recognized as a department of no. I think we have to change our mindset from department of no to department of safe yes.</i>&quot;</p><p class="paragraph" style="text-align:left;">What does &quot;safe yes&quot; look like in practice? It means building guardrails instead of gates. When a developer wants to use an open-Read More LLM model from Hugging Face, the old model required a two-week security review. The new model involves an automated request workflow that:</p><ol start="1"><li><p class="paragraph" style="text-align:left;">Triggers automated scans for LLM-specific vulnerabilities</p></li><li><p class="paragraph" style="text-align:left;">Evaluates model resiliency and hallucination risks</p></li><li><p class="paragraph" style="text-align:left;">Routes to compliance and legal for licensing review</p></li><li><p class="paragraph" style="text-align:left;">Provides approval within hours, not weeks</p></li></ol><p class="paragraph" style="text-align:left;">As Tejas emphasizes, &quot;<i>We have to take security into the AI lifecycle the way we took security and baked it into CI/CD pipelines.</i>&quot; This isn&#39;t about lowering security standards it&#39;s about automating the evaluation and approval processes so velocity doesn&#39;t suffer while maintaining rigor.</p><h3 class="heading" style="text-align:left;" id="the-silo-problem-when-app-sec-and-c"><b>The Silo Problem: When AppSec and CloudSec Can&#39;t See the Full Picture</b></h3><p class="paragraph" style="text-align:left;">One of the most illuminating moments in the discussion revealed how organizational silos create blind spots for AI security. Tejas described a common scenario: &quot;<i>There is a gap between application security and cloud security when it comes to agentic AI, because AppSec cares about source code, logic, things like that. But we don&#39;t always have visibility into roles, permissions, policies associated with that agent.</i>&quot;</p><p class="paragraph" style="text-align:left;">This is where the traditional division breaks down. An AppSec team threat modeling an AI agent might focus on prompt injection and output validation critical concerns, but incomplete. Meanwhile, the cloud security team owns IAM policies but may not understand what actions the agent is designed to perform or what data it accesses.</p><p class="paragraph" style="text-align:left;">Aditya Patel offers a concrete distinction: &quot;<i>For all practical purposes, these are two different threat models. There&#39;s a threat model for the underlying platform or the cloud, and then there&#39;s a threat model for the application itself.</i>&quot;</p><p class="paragraph" style="text-align:left;">The solution? Cross-functional pods that bring together AppSec, CloudSec, data security, compliance, and product engineering. As Tejas notes, &quot;<i>These teams need to come together, and only then we can have some sort of AI governance within an organization before Shadow AI starts taking place everywhere.</i>&quot;</p><h3 class="heading" style="text-align:left;" id="agentic-ai-the-authentication-chall"><b>Agentic AI: The Authentication Challenge Nobody Talks About</b></h3><p class="paragraph" style="text-align:left;">When the conversation turned to agentic workflows, Aditya Patel highlighted a technical challenge that many organizations haven&#39;t fully grasped. Traditional authentication mechanisms assume a human user. But what happens when an AI agent needs to authenticate across five or ten different endpoints, some internal and some external?</p><p class="paragraph" style="text-align:left;">&quot;<i>For authentication, you can authenticate using something you know and something you have,</i>&quot; Aditya explains. &quot;<i>For an agent, it needs to pass a secret password or certificate to authenticate itself. Hopefully it&#39;s not hard coded. So hopefully there&#39;s an API call where it goes to a vault or a secrets manager and fetches the credential.</i>&quot;</p><p class="paragraph" style="text-align:left;">But it gets more complex: &quot;<i>What about if there is an MFA? Then it needs to have a way of getting the OTPs from somewhere. And one more thing you need to keep in mind is bot detection or CAPTCHAs. It needs to solve CAPTCHAs and because most things like Akamai or CloudFront or Cloudflare, they have automated bot detection.</i>&quot;</p><p class="paragraph" style="text-align:left;">This technical reality underscores why Tejas emphasizes IAM policies as the critical security control for agentic AI: &quot;<i>With agentic AI, the IAM role policies, permissions associated with that agent, becomes a very important factor. My focus would be more on what are IAM roles and permissions and policies associated with it. What can go wrong if that agent gets controlled by some malicious actors?</i>&quot;</p><h3 class="heading" style="text-align:left;" id="the-skills-evolution-t-shaped-engin"><b>The Skills Evolution: T-Shaped Engineers and Security Generalists</b></h3><p class="paragraph" style="text-align:left;">Both practitioners were candid about how the expectations for security professionals have evolved and how overwhelming it can feel. Tejas laid out the progression: &quot;<i>Before introduction of AI, security engineers were expected to know core AppSec concepts like threat modeling, web penetration testing, mobile related threats. Then came along cloud security 10 years ago. Then in between came DevSecOps. Couple of years back, AI came into the limelight.</i>&quot;</p><p class="paragraph" style="text-align:left;">The solution isn&#39;t to become a &quot;jack of all trades&quot; both emphasized the importance of depth. Instead, they advocate for &quot;<i>T-shaped engineers</i>&quot;: professionals with deep expertise in one domain (the vertical bar of the &quot;T&quot;) but broad understanding across multiple areas (the horizontal bar).</p><p class="paragraph" style="text-align:left;">Aditya drew an analogy from David Epstein&#39;s book &quot;<i>Range</i>,&quot; contrasting Tiger Woods (who played only golf from age two) with Roger Federer (who played everything before focusing on tennis as a teenager). &quot;<i>There are two types of environments,</i>&quot; Aditya explains. &quot;<i>One is like a very deterministic environment like chess or music, or even golf. But the other one is more like a non-deterministic environment: tennis or science, arts, or cybersecurity. If you have the multidisciplinary background training there, that helps.</i>&quot;</p><p class="paragraph" style="text-align:left;">The practical advice for security leaders? Create paths for cross-functional learning and reward breadth as well as depth. Tejas adds, &quot;<i>We have to develop T-shaped engineers. We want security engineers who are experts in their own domain, but they should also be able to connect dots between various different sections.</i>&quot;</p><h3 class="heading" style="text-align:left;" id="automated-threat-modeling-the-git-h"><b>Automated Threat Modeling: The GitHub Copilot Example</b></h3><p class="paragraph" style="text-align:left;">To illustrate why periodic threat modeling matters for AI systems, Tejas offered a compelling case study: &quot;<i>GitHub Copilot when it got rolled out, almost every organization did some sort of threat modeling before they introduced it into their development ecosystem. Then a few months back, a researcher released a paper saying that GitHub Copilot can be tricked to introduce backdoor into your product.</i>&quot;</p><p class="paragraph" style="text-align:left;">This emergence of new threats makes previous threat models obsolete. The solution? Automated, continuous threat modeling that evolves with the threat landscape. And yes, using AI for this purpose. As Aditya suggests: &quot;<i>Use AI for it, right? You can just ask your chatbot to give you top 10 threats. This is my architecture, these are my business cases, use cases, technical use cases. Give me what threats to expect.</i>&quot;</p><p class="paragraph" style="text-align:left;">But both practitioners emphasized a critical caveat: human expertise remains essential. Aditya warns, &quot;<i>I don&#39;t think we are at a point where we can completely rely on these systems. There has to be a human in the loop from a security point of view.</i>&quot; The term he uses for AI-generated outputs is memorable: &quot;<i>These chatbots are dreaming up the internet. They have read the internet, and now they are word by word, they&#39;re dreaming up things.</i>&quot;</p><h3 class="heading" style="text-align:left;" id="prioritization-at-scale-the-quality"><b>Prioritization at Scale: The Quality Over Quantity Shift</b></h3><p class="paragraph" style="text-align:left;">With thousands of vulnerabilities flooding security dashboards, how do teams decide what to fix? Tejas advocates for a pipeline approach: &quot;<i>Accumulate all vulnerabilities onto one single platform, a posture management type solution. Create a pipeline from all vulnerabilities to just top 5% or 1% of vulnerabilities that matter to you the most.</i>&quot;</p><p class="paragraph" style="text-align:left;">The factors in this pipeline include:</p><ul><li><p class="paragraph" style="text-align:left;">Applicable and reachable vulnerabilities</p></li><li><p class="paragraph" style="text-align:left;">EPSS scores and CISA KEV advisories</p></li><li><p class="paragraph" style="text-align:left;">Internet-facing versus internal-only</p></li><li><p class="paragraph" style="text-align:left;">Proof of concept availability</p></li><li><p class="paragraph" style="text-align:left;">Actual exploitation paths</p></li></ul><p class="paragraph" style="text-align:left;">The mindset shift is crucial: &quot;<i>Your CTO is not going to care about the fact that you fixed a hundred vulnerabilities. What if half of those vulnerabilities are internal only or have no exploitation path? Fix only five vulnerabilities which are internet facing with highest likelihood of exploitation.</i>&quot;</p><p class="paragraph" style="text-align:left;">Aditya adds a cloud-specific dimension: separating environments by velocity requirements. &quot;<i>You can have more restrictive IAM policies in regulated or business-critical workloads. But you also need environments where the restrictions are not as much, where velocity needs to be higher, so developers don&#39;t have to come back to the security team.</i>&quot;</p><h3 class="heading" style="text-align:left;" id="advice-for-those-starting-their-ai-"><b>Advice for Those Starting Their AI Security Journey</b></h3><p class="paragraph" style="text-align:left;">For organizations just beginning their AI security maturity journey, both practitioners offered pragmatic guidance. Aditya drew a memorable analogy: &quot;<i>AI is on a similar curve to the five stages of grief denial, anger, bargain, depression, and acceptance. Some companies are on the denial stage still, but most have moved past that stage. Very few have reached the acceptance stage.</i>&quot;</p><p class="paragraph" style="text-align:left;">The first step? Acceptance that AI is here to stay and requires dedicated attention. From there, mature organizations are:</p><ol start="1"><li><p class="paragraph" style="text-align:left;"><b>Building paved road solutions</b> with secure defaults</p></li><li><p class="paragraph" style="text-align:left;"><b>Investing in specialized tooling</b> for AI-specific threats</p></li><li><p class="paragraph" style="text-align:left;"><b>Reorganizing teams</b> into cross-functional pods</p></li><li><p class="paragraph" style="text-align:left;"><b>Creating risk matrices</b> specific to AI threats</p></li><li><p class="paragraph" style="text-align:left;"><b>Developing remediation libraries</b> for common AI security issues</p></li><li><p class="paragraph" style="text-align:left;"><b>Establishing AI governance</b> through clear policies</p></li></ol><p class="paragraph" style="text-align:left;">Tejas emphasizes governance as the foundation: &quot;<i>Mature organizations have gone ahead and established AI governance through some sort of AI related policy. That clearly describes what is okay, what is not okay, and how to pursue what is okay.</i>&quot;</p><p class="paragraph" style="text-align:left;">For individuals looking to transition into AI security, Aditya recommends working backward from job market requirements: &quot;<i>Don&#39;t create this ideal plan that, &#39;Hey, I will learn only AppSec, or I will learn AppSec first and then CloudSec.&#39; It&#39;s a security generalist type of role. You need to know a bit about everything.</i>&quot;</p><p class="paragraph" style="text-align:left;">Tejas adds practical career advice for those struggling to break into security: &quot;<i>Look for other entry level roles: IT help desk, tier one or tier two support, network engineer. You don&#39;t have to start in cybersecurity. Those skills like how to network, how to communicate, how to troubleshoot are important in cybersecurity as well.</i>&quot;</p><h3 class="heading" style="text-align:left;" id="the-bottom-line"><b>The Bottom Line</b></h3><p class="paragraph" style="text-align:left;">AI security isn&#39;t a separate discipline; it&#39;s forcing the convergence of application security, cloud security, data security, and compliance into unified, cross-functional teams. The silos that worked when applications and infrastructure were separate are breaking down because AI systems span both layers intrinsically.</p><p class="paragraph" style="text-align:left;">Security leaders must shift from gatekeepers to enablers, building automated guardrails that allow innovation at AI speed while maintaining rigorous security standards. The challenge is significant, but as both Tejas and Aditya emphasize, the fundamentals haven&#39;t changed authentication, authorization, encryption, and defense in depth still matter. What&#39;s changed is the attack surface, the velocity of development, and the need for security professionals who can bridge multiple domains.</p><hr class="content_break"><h2 class="heading" style="text-align:left;" id="related-resources">RELATED RESOURCES</h2><ul><li><p class="paragraph" style="text-align:left;"><a class="link" href="https://owasp.org/www-project-top-10-for-large-language-model-applications/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=doj-clears-google-wiz-purchase-how-bloomberg-navigate-ai-powered-security-at-scale" target="_blank" rel="noopener noreferrer nofollow">OWASP Top 10 for LLMs</a></p></li><li><p class="paragraph" style="text-align:left;"><a class="link" href="https://atlas.mitre.org/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=doj-clears-google-wiz-purchase-how-bloomberg-navigate-ai-powered-security-at-scale" target="_blank" rel="noopener noreferrer nofollow">MITRE ATLAS Framework</a> </p></li><li><p class="paragraph" style="text-align:left;"><a class="link" href="https://cloudsecurityalliance.org/research/working-groups/artificial-intelligence?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=doj-clears-google-wiz-purchase-how-bloomberg-navigate-ai-powered-security-at-scale" target="_blank" rel="noopener noreferrer nofollow">Cloud Security Alliance AI Security Guidance</a></p></li><li><p class="paragraph" style="text-align:left;"><a class="link" href="https://www.nist.gov/itl/ai-risk-management-framework?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=doj-clears-google-wiz-purchase-how-bloomberg-navigate-ai-powered-security-at-scale" target="_blank" rel="noopener noreferrer nofollow">NIST AI Risk Management Framework </a></p></li><li><p class="paragraph" style="text-align:left;"><a class="link" href="https://cloud.google.com/blog/products/identity-security/securing-ai-agents?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=doj-clears-google-wiz-purchase-how-bloomberg-navigate-ai-powered-security-at-scale" target="_blank" rel="noopener noreferrer nofollow">Securing AI Agents: A Practitioner&#39;s Guide (Google Cloud)</a></p></li><li><p class="paragraph" style="text-align:left;"><a class="link" href="https://docs.github.com/en/code-security?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=doj-clears-google-wiz-purchase-how-bloomberg-navigate-ai-powered-security-at-scale" target="_blank" rel="noopener noreferrer nofollow">GitHub Advanced Security for AI-Generated Code</a></p></li><li><p class="paragraph" style="text-align:left;"><a class="link" href="https://slsa.dev/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=doj-clears-google-wiz-purchase-how-bloomberg-navigate-ai-powered-security-at-scale" target="_blank" rel="noopener noreferrer nofollow">Supply Chain Levels for Software Artifacts (SLSA)</a></p></li><li><p class="paragraph" style="text-align:left;"><a class="link" href="https://learn.microsoft.com/en-us/security/engineering/threat-modeling-aiml?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=doj-clears-google-wiz-purchase-how-bloomberg-navigate-ai-powered-security-at-scale" target="_blank" rel="noopener noreferrer nofollow">Threat Modeling AI/ML Systems (Microsoft)</a></p></li><li><p class="paragraph" style="text-align:left;"><a class="link" href="https://incidentdatabase.ai/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=doj-clears-google-wiz-purchase-how-bloomberg-navigate-ai-powered-security-at-scale" target="_blank" rel="noopener noreferrer nofollow">AI Incident Database (Partnership on AI)</a></p></li></ul><hr class="content_break"><h2 class="heading" style="text-align:left;" id="related-podcast-episodes"><b>Related Podcast Episodes 🎧</b></h2><div class="image"><a class="image__link" href="https://www.cloudsecuritypodcast.tv/videos/ai-is-already-breaking-the-silos-between-appsec-cloudsec?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=doj-clears-google-wiz-purchase-how-bloomberg-navigate-ai-powered-security-at-scale" rel="noopener" target="_blank"><img alt="" class="image__image" style="" src="https://media.beehiiv.com/cdn-cgi/image/fit=scale-down,format=auto,onerror=redirect,quality=80/uploads/asset/file/f74db125-8791-4e88-912e-82281583abc8/S06_Tejas_and_Aditya.jpg?t=1762385232"/></a><div class="image__source"><a class="image__source_link" href="https://www.cloudsecuritypodcast.tv/videos/ai-is-already-breaking-the-silos-between-appsec-cloudsec?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=doj-clears-google-wiz-purchase-how-bloomberg-navigate-ai-powered-security-at-scale" rel="noopener" target="_blank"><span class="image__source_text"><p><a class="link" href="https://www.cloudsecuritypodcast.tv/videos/ai-is-already-breaking-the-silos-between-appsec-cloudsec?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=doj-clears-google-wiz-purchase-how-bloomberg-navigate-ai-powered-security-at-scale" target="_blank" rel="noopener noreferrer nofollow">AI is already breaking the Silos Between AppSec & CloudSec</a></p></span></a></div></div><h3 class="heading" style="text-align:left;" id="question-for-you-reply-to-this-emai">Question for you? (Reply to this email)</h3><p class="paragraph" style="text-align:left;"> 🤖 Are your AppSec and CloudSec still siloed or are you already building <b>T-shaped</b> pods with agent-aware IAM and paved roads?</p><p class="paragraph" style="text-align:left;">Next week, we&#39;ll explore another critical aspect of cloud security. Stay tuned!</p><hr class="content_break"><p class="paragraph" style="text-align:left;">📬 Want weekly expert takes on AI & Cloud Security? [<a class="link" href="https://www.cloudsecuritynewsletter.com/subscribe?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=doj-clears-google-wiz-purchase-how-bloomberg-navigate-ai-powered-security-at-scale" target="_blank" rel="noopener noreferrer nofollow">Subscribe here</a>]”</p><hr class="content_break"><p class="paragraph" style="text-align:left;"><span style="color:rgb(238, 40, 60);"><b><a class="link" href="mailto:info@cloudsecuritypodcast.tv" target="_blank" rel="noopener noreferrer nofollow">We would love to hear from you</a></b></span>📢 for a feature or topic request or if you would like to sponsor an edition of Cloud Security Newsletter. </p><p class="paragraph" style="text-align:left;">Thank you for continuing to subscribe and Welcome to the new members in tis newsletter community💙</p><p class="paragraph" style="text-align:start;">Peace!</p><p class="paragraph" style="text-align:start;"><a class="link" href="https://www.linkedin.com/in/shilpi-bhattacharjee/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=doj-clears-google-wiz-purchase-how-bloomberg-navigate-ai-powered-security-at-scale" target="_blank" rel="noopener noreferrer nofollow">Shilpi Bhattacharjee</a></p><div class="image"><a class="image__link" href="https://www.cloudsecuritypodcast.tv/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=doj-clears-google-wiz-purchase-how-bloomberg-navigate-ai-powered-security-at-scale" rel="noopener" target="_blank"><img alt="" class="image__image" style="" src="https://media.beehiiv.com/cdn-cgi/image/fit=scale-down,format=auto,onerror=redirect,quality=80/uploads/asset/file/dc094a1c-678a-43e0-adc9-1bee6c3499e2/CSP_Logo_Blue_ScreenRes_3000x3000_v2.jpg"/></a></div><div class="button" style="text-align:left;"><a target="_blank" rel="noopener nofollow noreferrer" class="button__link" style="" href="{{rp_referral_hub_url}}"><span class="button__text" style=""> Share the newsletter </span></a></div><p class="paragraph" style="text-align:left;">Was this forwarded to you? You can <a class="link" href="https://www.cloudsecuritynewsletter.com/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=doj-clears-google-wiz-purchase-how-bloomberg-navigate-ai-powered-security-at-scale" target="_blank" rel="noopener noreferrer nofollow">Sign up here</a>, to join our growing readership.</p><p class="paragraph" style="text-align:left;">Want to <b>sponsor</b> the next newsletter edition! <a class="link" href="mailto:info@cloudsecuritypodcast.tv" target="_blank" rel="noopener noreferrer nofollow">Lets make it happen </a></p><p class="paragraph" style="text-align:left;">Have you joined our FREE <b>Monthly</b> <a class="link" href="https://www.cloudsecuritybootcamp.com/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=doj-clears-google-wiz-purchase-how-bloomberg-navigate-ai-powered-security-at-scale" target="_blank" rel="noopener noreferrer nofollow">Cloud Security Bootcamp</a> yet?</p><p class="paragraph" style="text-align:left;">checkout our <b>sister podcast</b> <a class="link" href="https://www.youtube.com/@AISecurityPodcast?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=doj-clears-google-wiz-purchase-how-bloomberg-navigate-ai-powered-security-at-scale" target="_blank" rel="noopener noreferrer nofollow">AI Security Podcast</a></p></div><div class='beehiiv__footer'><br class='beehiiv__footer__break'><hr class='beehiiv__footer__line'><a target="_blank" class="beehiiv__footer_link" style="text-align: center;" href="https://www.beehiiv.com/?utm_campaign=9ee1f2ac-faf7-435f-b5be-8b28327aeea9&utm_medium=post_rss&utm_source=cloud_security_newsletter">Powered by beehiiv</a></div></div>
  ]]></content:encoded>
</item>

      <item>
  <title>🚨 Atlas AI Security Breach + Critical WSUS Flaw: The Real ROI of AI-Augmented SOC Teams</title>
  <description>This week&#39;s newsletter examines CISA&#39;s urgent WSUS vulnerability mandate, OpenAI Atlas browser&#39;s prompt injection vulnerabilities, and the evolution of AI-powered threats including APT28&#39;s LameHug malware. Learn how Dropzone&#39;s Cloud Security Alliance research demonstrates 45-60% faster alert investigation with AI-augmented SOC analysts, alongside insights on ransomware payment decline to 23% and operational strategies for securing hybrid cloud environments in 2025.</description>
      <enclosure url="https://media.beehiiv.com/cdn-cgi/image/fit=scale-down,format=auto,onerror=redirect,quality=80/uploads/asset/file/f94f903c-8015-47df-a433-0c6b8e8b1eb3/Screenshot_2025-10-29_at_9.19.08_PM.png" length="932262" type="image/png"/>
  <link>https://www.cloudsecuritynewsletter.com/p/atlas-ai-security-breach-critical-wsus-flaw-the-real-roi-of-ai-augmented-soc-teams</link>
  <guid isPermaLink="true">https://www.cloudsecuritynewsletter.com/p/atlas-ai-security-breach-critical-wsus-flaw-the-real-roi-of-ai-augmented-soc-teams</guid>
  <pubDate>Wed, 29 Oct 2025 21:22:13 +0000</pubDate>
  <atom:published>2025-10-29T21:22:13Z</atom:published>
    <dc:creator>Ashish Rajan</dc:creator>
  <content:encoded><![CDATA[
    <div class='beehiiv'><style>
  .bh__table, .bh__table_header, .bh__table_cell { border: 1px solid #C0C0C0; }
  .bh__table_cell { padding: 5px; background-color: #FFFFFF; }
  .bh__table_cell p { color: #2D2D2D; font-family: 'Helvetica',Arial,sans-serif !important; overflow-wrap: break-word; }
  .bh__table_header { padding: 5px; background-color:#F1F1F1; }
  .bh__table_header p { color: #2A2A2A; font-family:'Trebuchet MS','Lucida Grande',Tahoma,sans-serif !important; overflow-wrap: break-word; }
</style><div class='beehiiv__body'><h2 class="heading" style="text-align:left;" id="hello-from-the-cloudverse"><span style="background-color:#28EEDA;"><b>Hello from the Cloud-verse!</b></span></h2><p class="paragraph" style="text-align:left;">This week’s Cloud Security Newsletter Topic we cover - <b>AI Agents for SOC: Hype Curve vs. Measurable ROI </b><a class="link" href="#cloud-security-topic-of-the-week" rel="noopener noreferrer nofollow">(continue reading)</a> </p><div class="button" style="text-align:center;"><a target="_blank" rel="noopener nofollow noreferrer" class="button__link" style="" href="https://links.cloudsecuritypodcast.tv/csa-ai-soc-report-dropzone-oct2025?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=atlas-ai-security-breach-critical-wsus-flaw-the-real-roi-of-ai-augmented-soc-teams"><span class="button__text" style=""> Check out this week’s Sponsor: Dropzone </span></a></div><hr class="content_break"><div class="image"><a class="image__link" href="https://links.cloudsecuritypodcast.tv/csa-ai-soc-report-dropzone-oct2025?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=atlas-ai-security-breach-critical-wsus-flaw-the-real-roi-of-ai-augmented-soc-teams" rel="noopener" target="_blank"><img alt="" class="image__image" style="" src="https://media.beehiiv.com/cdn-cgi/image/fit=scale-down,format=auto,onerror=redirect,quality=80/uploads/asset/file/f94f903c-8015-47df-a433-0c6b8e8b1eb3/Screenshot_2025-10-29_at_9.19.08_PM.png?t=1761772785"/></a><div class="image__source"><span class="image__source_text"><p>This image was generated by AI. It&#39;s still experimental, so it might not be a perfect match!</p></span></div></div><p class="paragraph" style="text-align:left;"><b>Incase, this is your 1st Cloud Security Newsletter! You are in good company! </b><br>You are reading this issue along with your friends and colleagues from companies like <i>Netflix</i>, Citi, <i>JP Morgan, Linkedin, Reddit, Github, Gitlab, CapitalOne, Robinhood, HSBC, British Airways, Airbnb, Block, Booking Inc & more</i> who subscribe to this newsletter, who like you want to learn what’s new with Cloud Security each week from their industry peers like many others who listen to <a class="link" href="https://open.spotify.com/show/6LZgeh4GecRYPc0WrwMB4I?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=atlas-ai-security-breach-critical-wsus-flaw-the-real-roi-of-ai-augmented-soc-teams" target="_blank" rel="noopener noreferrer nofollow">Cloud Security Podcast</a> & <a class="link" href="https://open.spotify.com/show/3nV4eijfzdHKIvDOaycVII?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=atlas-ai-security-breach-critical-wsus-flaw-the-real-roi-of-ai-augmented-soc-teams" target="_blank" rel="noopener noreferrer nofollow">AI Security Podcast</a> every week.</p><hr class="content_break"><p class="paragraph" style="text-align:left;">Welcome to this week’s Cloud Security Newsletter</p><p class="paragraph" style="text-align:left;">The convergence of AI augmentation and traditional security operations reached a critical inflection point this week. While CISA ordered emergency patching of actively exploited Windows Server Update Services vulnerabilities demonstrating how legacy infrastructure remains a vector for supply chain compromise OpenAI&#39;s Atlas browser launch simultaneously exposed fundamental architectural challenges in agentic AI security that traditional defenses cannot address.</p><p class="paragraph" style="text-align:left;">This week, we feature <b>Edward Wu</b>, Founder and CEO of Dropzone AI, whose recent Cloud Security Alliance research quantifies what many security leaders have only intuited: AI-augmented SOC analysts operate 45-60% faster with measurably higher accuracy than unassisted teams. With patents in anomaly detection using device relationship graphs and eight years building detection and response capabilities for enterprises, Edward brings both technical depth and operational wisdom to the AI SOC transformation conversation.</p><hr class="content_break"><h2 class="heading" style="text-align:left;" id="tldr-for-busy-readers">📰 TL;DR for Busy Readers</h2><ul><li><p class="paragraph" style="text-align:left;">⚠️ <b>WSUS RCE (CVE-2025-59287)</b> <b>enables unauthenticated RCE</b> treat as supply chain attack vector, not “isolated server issue”, not “just Windows.” Your cloud admin laptops are in scope.</p></li><li><p class="paragraph" style="text-align:left;">🧠 <b>AI-augmented SOC analysts investigate alerts 45-60% faster</b> with higher completion rates CSA research validates operational ROI</p></li><li><p class="paragraph" style="text-align:left;">🏥 <b>Healthcare, public sector, and industrial/OT environments are at active, systemic risk</b>. Ransomware payment rates are down, but attacker sophistication is up.</p></li><li><p class="paragraph" style="text-align:left;">🔐 <b>Microsoft Entra Conditional Access is no longer guidance</b>; it’s policy enforcement. Identity is now a gated control plane.</p></li><li><p class="paragraph" style="text-align:left;">🤖 <b>OpenAI Atlas browser&#39;s prompt injection attacks</b> within days of launch expose fundamental AI agent security challenges requiring defense-in-depth</p></li></ul><hr class="content_break"><h2 class="heading" style="text-align:left;" id="this-weeks-security-news">📰 <b>THIS WEEK&#39;S SECURITY HEADLINES</b></h2><h3 class="heading" style="text-align:left;" id="google-cloud-functions-vulnerabilit"><b>1. . CISA Orders Urgent Patching of Actively Exploited WSUS Flaw in Windows Server</b></h3><p class="paragraph" style="text-align:left;"><b>What Happened:</b> CISA issued a binding directive requiring federal agencies to immediately patch a critical Windows Server Update Services (WSUS) vulnerability (CVE-2025-59287) after evidence of active exploitation. The flaw, rated 9.8/10, allows unauthenticated remote code execution through deserialization of untrusted data on exposed WSUS servers. Microsoft shipped an emergency fix post-Patch Tuesday, and security researchers from Huntress, Eye Security, and Shadowserver observed attackers scanning WSUS servers on default ports 8530/8531, with thousands of internet-facing instances identified. Federal agencies face a November 14, 2025 deadline to patch or shut down vulnerable systems.</p><p class="paragraph" style="text-align:left;"><b>Why This Matters:</b> WSUS persists in legacy infrastructure even within predominantly cloud organizations, and its privileged access makes it a critical supply chain risk. If attackers achieve SYSTEM-level access on WSUS, they can push malicious updates to downstream Windows fleets enabling lateral movement from a single exposed server into hybrid AD/Azure AD joined endpoints and potentially cloud admin workstations. This represents a software supply chain attack vector that traditional perimeter defenses cannot prevent.</p><p class="paragraph" style="text-align:left;"><i><b>Sources</b></i><i>: </i><a class="link" href="https://www.techradar.com/pro/security/us-government-orders-patching-of-critical-windows-server-security-issue?utm_source=chatgpt.com" target="_blank" rel="noopener noreferrer nofollow">TechRadar</a>, <a class="link" href="https://www.cisa.gov/known-exploited-vulnerabilities-catalog?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=atlas-ai-security-breach-critical-wsus-flaw-the-real-roi-of-ai-augmented-soc-teams" target="_blank" rel="noopener noreferrer nofollow">CISA advisory</a></p><hr class="content_break"><h3 class="heading" style="text-align:left;" id="2-open-a-is-atlas-browser-under-fir"><b>2. ⚠️ OpenAI&#39;s Atlas Browser Under Fire: Prompt Injection Attacks Surface Within Days of Launch</b></h3><p class="paragraph" style="text-align:left;"><b>What Happened:</b> Within days of OpenAI&#39;s ChatGPT Atlas AI-powered browser launch, security researchers demonstrated successful prompt injection attacks that manipulated the browser to exfiltrate data, modify settings, and execute unintended actions. Researchers from Brave Security, Johann Rehberger, and others published exploits showing malicious instructions embedded in web pages, Google Docs, and screenshot content could hijack AI agent behavior. OpenAI CISO Dane Stuckey acknowledged that &quot;prompt injection remains a frontier, unsolved security problem&quot; requiring significant adversary investment to exploit.</p><p class="paragraph" style="text-align:left;"><b>Why This Matters:</b> This represents the enterprise security community&#39;s first major production exposure to agentic AI browser risks. Traditional web security models break when AI agents act with user credentials and the same-origin policy becomes irrelevant because the AI assistant executes with authenticated privileges. Security researcher Johann Rehberger notes that prompt injection &quot;cannot be &#39;fixed&#39; as soon as a system takes untrusted data and includes it in an LLM query, the untrusted data influences the output.&quot; For enterprises considering agentic AI adoption, this requires fundamental architectural changes: capability limiting, data boundary controls, sandboxed execution, least privilege enforcement, and comprehensive logging. Treat agentic browsing as authenticated privilege escalation as a service.</p><p class="paragraph" style="text-align:left;"><b><i>Sources</i></b><i>: </i><a class="link" href="https://www.theregister.com/2025/10/28/ai_browsers_prompt_injection/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=atlas-ai-security-breach-critical-wsus-flaw-the-real-roi-of-ai-augmented-soc-teams" target="_blank" rel="noopener noreferrer nofollow">The Register</a>,<a class="link" href="https://brave.com/blog/unseeable-prompt-injections/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=atlas-ai-security-breach-critical-wsus-flaw-the-real-roi-of-ai-augmented-soc-teams" target="_blank" rel="noopener noreferrer nofollow"> Brave Blog</a>,<a class="link" href="https://unit42.paloaltonetworks.com/indirect-prompt-injection-poisons-ai-longterm-memory/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=atlas-ai-security-breach-critical-wsus-flaw-the-real-roi-of-ai-augmented-soc-teams" target="_blank" rel="noopener noreferrer nofollow">Palo Alto Networks Unit 42</a><i> </i></p><hr class="content_break"><p class="paragraph" style="text-align:left;"></p><h3 class="heading" style="text-align:left;" id="3-trellix-report-ai-powered-malware"><b>3. </b>🎯<b> Trellix Report: AI-Powered Malware and Nation-State Convergence Accelerate</b></h3><p class="paragraph" style="text-align:left;"><b>What Happened:</b> Trellix&#39;s October 2025 CyberThreat Report reveals significant AI tool adoption among cybercriminals, with APT28&#39;s LameHug the first publicly reported AI-powered infostealer discovered in July 2025. The malware integrates an LLM for dynamic command generation, sending prompts and receiving tailored command sequences for reconnaissance and data exfiltration. Qilin ransomware became the most active group with 441 victim posts (13.45% of activity), while the industrial sector emerged as the primary target with 890 posts (36.57% of attacks). The U.S. accounted for approximately 1,285 victims, representing 55% of geo-identified incidents.</p><p class="paragraph" style="text-align:left;"><b>Why This Matters:</b> LameHug represents the industrialization of AI-enhanced threats its operational deployment against Ukrainian organizations demonstrates AI-powered attack tools have transitioned from theoretical to weaponized in state-sponsored arsenals. The convergence of nation-state operations and financially motivated campaigns erodes traditional threat attribution models. In July-August 2025, Iran-aligned threat actors resumed active operations, with over 35 pro-Iranian hacktivist groups coordinating attacks against Israeli targets during the escalated Israel-Iran conflict. For industrial and critical infrastructure operators, this combination of most-targeted sector status with AI-accelerated attacker capabilities demands immediate OT security investment.</p><p class="paragraph" style="text-align:left;"><b><i>Sources</i></b><i>: </i><a class="link" href="https://industrialcyber.co/industrial-cyber-attacks/trellix-reports-nation-state-espionage-and-ai-driven-financial-attacks-converging-as-industrial-sector-most-targeted/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=atlas-ai-security-breach-critical-wsus-flaw-the-real-roi-of-ai-augmented-soc-teams" target="_blank" rel="noopener noreferrer nofollow"> Industrial Cyber</a><i>, Trellix CyberThreat Report</i></p><hr class="content_break"><h3 class="heading" style="text-align:left;" id="4-ransomware-evolution-payments-dro"><b>4. </b>📉<b> Ransomware Evolution: Payments Drop to 23% as Attack Sophistication Increases</b></h3><p class="paragraph" style="text-align:left;"><b>What Happened:</b> Ransomware payment rates reached a historic low of 23% of breached companies, while Hornetsecurity&#39;s 2025 report shows attacks increased to 24% of organizations (up from 18.6% in 2024), with only 13% paying ransoms. Qilin ransomware escalated rapidly with approximately 700 attacks targeting critical sectors, while new groups like WorldLeaks (rebranded from Hunters International) and Sinobi emerged using pure extortion tactics. Microsoft&#39;s Digital Defense Report covering July 2024-June 2025 highlights that over half of cyberattacks with known motives were extortion or ransomware-driven, with 97% of identity attacks being password-based and a 32% surge in identity-based attacks in H1 2025.</p><p class="paragraph" style="text-align:left;"><b>Why This Matters:</b> Declining payment rates signal organizational backup and recovery maturity, but attackers are adapting. The 39.5% quarter-over-quarter spike in email-borne malware suggests threat actors pivot toward persistence-based payloads, while 46% of incidents still begin with phishing combined with compromised endpoints and credential theft. For cloud security leaders, the shift from encryption-focused to data exfiltration-first tactics means cloud storage, databases, and SaaS applications are now primary targets rather than just on-premises file servers. This evolution fundamentally changes security architecture requirements from backup-centric to data loss prevention and exfiltration detection capabilities.</p><p class="paragraph" style="text-align:left;"><b><i>Sources</i></b><i>: BleepingComputer, Hornetsecurity, Microsoft Blogs</i></p><hr class="content_break"><h3 class="heading" style="text-align:left;" id="5-public-sector-under-siege-196-gov"><b>5. 🏛️ Public Sector Under Siege: 196 Government Entities Hit by Ransomware in 2025</b></h3><p class="paragraph" style="text-align:left;"><b>What Happened:</b> Approximately 196 public sector entities worldwide fell victim to ransomware campaigns in 2025, with operational downtime costs between 2018-2024 reaching $1.09 billion for government entities alone. The United States experienced the highest number with 69 confirmed victims, followed by Canada (7), the United Kingdom (6), and France, India, Pakistan, and Indonesia (5 each). The most active threat actors include Babuk (43 confirmed victims), Qilin (21), and INC Ransom (18). The first half of 2025 witnessed a 60% increase in government sector attacks compared to the same period in 2024.</p><p class="paragraph" style="text-align:left;"><b>Why This Matters:</b> Public institutions represent critical national infrastructure soft targets that often lack resources and technical depth for robust cybersecurity defenses. Services such as police dispatch systems, court operations, and public health portals face immense pressure to restore functionality quickly, creating leverage that attackers exploit through aggressive timelines and public data exposure threats. For private sector security leaders, government agencies are often required technology partners for regulated industries (healthcare, finance, defense contractors), creating supply chain risk. The 60% year-over-year surge suggests threat actors systematically catalog and exploit public sector limited security maturity tactics and tools proven against government targets will eventually target similarly under-resourced private enterprises.</p><p class="paragraph" style="text-align:left;"><b><i>Sources</i></b><i>: CyberSecurity News</i></p><hr class="content_break"><h3 class="heading" style="text-align:left;" id="6-massive-healthcare-imaging-breach"><b>6. </b>🏥<b> Massive Healthcare Imaging Breach: 1.2M Patient Records Exposed</b></h3><p class="paragraph" style="text-align:left;"><b>What Happened:</b> SimonMed Imaging, one of the largest outpatient radiology providers in the U.S., disclosed a breach affecting approximately 1.2 million patients. Stolen data includes medical records, diagnostic reports, financial details, and patient identifiers. Reports indicate data has already circulated in criminal channels. The disclosure became public between October 24-27, 2025.</p><p class="paragraph" style="text-align:left;"><b>Why This Matters:</b> Healthcare has become nearly fully hybrid: PHI is stored, shared, and analyzed across cloud PACS systems, AI diagnostic pipelines, and third-party billing SaaS. When imaging, billing, and identity data are stolen together, attackers can build extremely high-fidelity synthetic identities and commit high-value fraud at scale. For defenders in any regulated sector (finance, government, critical infrastructure), this warns about vendor blast radius: a single specialized provider can quietly aggregate crown-jewel data across multiple hospitals and insurers. Organizations must revisit BAAs/DPAs with imaging, billing, and AI-diagnostics vendors to ensure right-to-audit on cloud posture, logging, and incident response. Treat PHI/PII fusion data stores (images + financial + ID) as Tier 0 assets with tighter tokenization/encryption than generic &quot;medical records.&quot;</p><p class="paragraph" style="text-align:left;"><b><i>Sources</i></b><i>: </i><a class="link" href="https://www.yahoo.com/news/articles/hackers-steal-medical-records-financial-170017348.html?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=atlas-ai-security-breach-critical-wsus-flaw-the-real-roi-of-ai-augmented-soc-teams" target="_blank" rel="noopener noreferrer nofollow">Yahoo</a><i>, </i><a class="link" href="https://cyberguy.com/privacy/hackers-leak-medical-reports-breach-hits-1-2m-patients/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=atlas-ai-security-breach-critical-wsus-flaw-the-real-roi-of-ai-augmented-soc-teams" target="_blank" rel="noopener noreferrer nofollow">CyberGuy</a></p><hr class="content_break"><h3 class="heading" style="text-align:left;" id="7-aws-ships-nova-web-grounding-to-r"><b>7. </b>🤖<b> AWS Ships &quot;Nova Web Grounding&quot; to Reduce Hallucination in AI Apps</b></h3><p class="paragraph" style="text-align:left;"><b>What Happened:</b> On October 28, 2025, AWS announced &quot;Amazon Nova Web Grounding,&quot; a Bedrock/Nova capability that automatically retrieves current, attributed information from external sources and injects it into model responses. The positioning focuses on reducing hallucination and increasing traceability in AI-generated answers by attaching live referenced context.</p><p class="paragraph" style="text-align:left;"><b>Why This Matters:</b> This advancement addresses more than answer quality it&#39;s about evidence and auditability. Most enterprises block or throttle AI assistants because they cannot prove where answers originate, breaking audit, SOX compliance, model risk management, and legal defensibility. Grounded and cited responses become prerequisites for AI copilots making change recommendations to IAM/SCP, firewall rules, and Kubernetes manifests; automated policy generation that can pass audit review; and AI agents acting in production with traceable justification. This lands directly in AI governance and SecOps workflows: organizations cannot approve autonomous changes without attribution. Security teams should ask whether their AI assistants can show provenance for every suggested security control change (network rule, IAM role, token TTL) if not, auto-apply should be prohibited. Start logging AI assistant citations alongside change control tickets to create audit trails when regulators ask, &quot;Why did you trust this AI?&quot;</p><p class="paragraph" style="text-align:left;"><b><i>Source</i></b><i>: </i><a class="link" href="https://aws.amazon.com/blogs/aws/build-more-accurate-ai-applications-with-amazon-nova-web-grounding/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=atlas-ai-security-breach-critical-wsus-flaw-the-real-roi-of-ai-augmented-soc-teams" target="_blank" rel="noopener noreferrer nofollow">Amazon Web Services, Blog</a><i>.</i></p><hr class="content_break"><h3 class="heading" style="text-align:left;" id="8-microsoft-entra-conditional-acces"><b>8. </b>🔐<b> Microsoft Entra Conditional Access Quietly Becoming Enforcement, Not &quot;Best Practice&quot;</b></h3><p class="paragraph" style="text-align:left;"><b>What Happened:</b> Microsoft continues pushing Entra Conditional Access as the mandatory Zero Trust policy engine for accessing Microsoft 365, Azure, and other corporate resources. Recent Microsoft guidance (updated through late September, reinforced through October) ties Conditional Access and MFA enforcement directly to baseline security expectations, and Microsoft has already begun mandating MFA for privileged Azure operations.</p><p class="paragraph" style="text-align:left;"><b>Why This Matters:</b> &quot;Identity is the new perimeter&quot; has transitioned from slideware to product policy. Microsoft is moving from &quot;we recommend MFA/CA&quot; to &quot;you will run MFA/CA to touch Azure.&quot; For multinational organizations, this effectively outsources access control policy to your cloud provider and creates audit artifacts you don&#39;t fully control. Simultaneously, it raises the bar for attackers: token theft, browser session hijack, and SIM swap aren&#39;t sufficient if Conditional Access policies enforce device posture, location, and risk signals at authentication time. Security teams should treat Conditional Access configurations like Terraform for firewalls: version them, peer-review them, and run change control on them. Align Conditional Access policies with data classification (production tenant ≠ test tenant). If everything has &quot;global admin if you pass MFA,&quot; you&#39;ve missed the point. Capture Conditional Access decision logs centrally IR teams will need that telemetry when investigating suspicious console activity originating from &quot;trusted&quot; sessions.</p><p class="paragraph" style="text-align:left;"><b><i>Source</i></b><i>: </i><a class="link" href="https://learn.microsoft.com/en-us/entra/identity/conditional-access/overview?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=atlas-ai-security-breach-critical-wsus-flaw-the-real-roi-of-ai-augmented-soc-teams" target="_blank" rel="noopener noreferrer nofollow">Microsoft Learn</a></p><hr class="content_break"><h2 class="heading" style="text-align:left;" id="cloud-security-topic-of-the-week">🎯 Cloud Security Topic of the Week: </h2><h3 class="heading" style="text-align:left;" id="from-hype-to-reality-how-ai-augment"><b>From Hype to Reality: How AI-Augmented SOC Analysts Deliver Measurable ROI</b></h3><p class="paragraph" style="text-align:left;">The cybersecurity community has watched AI promises cycle through hype and skepticism for years. Security automation through playbooks and SOAR platforms has existed for over a decade, yet alert fatigue persists and tier-one analyst turnover remains painfully high. The fundamental question isn&#39;t whether AI can theoretically improve SOC operations it&#39;s whether AI augmentation delivers measurable, reproducible outcomes that justify the investment, complexity, and organizational change required.</p><p class="paragraph" style="text-align:left;">This week, we examine findings from Dropzone AI&#39;s Cloud Security Alliance research that quantifies AI augmentation&#39;s impact on real-world SOC operations. Drawing on insights from Edward Wu whose company recently completed benchmark testing with 148 operational security analysts investigating AWS S3 bucket policy changes and Microsoft Entra ID failed login alerts we explore what AI augmentation actually delivers, where it falls short, and how security leaders should approach SOC transformation in 2025 and beyond.</p><h2 class="heading" style="text-align:left;" id="featured-experts-this-week"><b>Featured Experts This Week </b>🎤</h2><ul><li><p class="paragraph" style="text-align:left;"><a class="link" href="https://www.linkedin.com/in/edwardxwu/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=atlas-ai-security-breach-critical-wsus-flaw-the-real-roi-of-ai-augmented-soc-teams" target="_blank" rel="noopener noreferrer nofollow"><b>Edward Wu</b></a><b> - </b>Founder & CEO, <a class="link" href="https://links.cloudsecuritypodcast.tv/csa-ai-soc-report-dropzone-oct2025?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=atlas-ai-security-breach-critical-wsus-flaw-the-real-roi-of-ai-augmented-soc-teams" target="_blank" rel="noopener noreferrer nofollow">Dropzone AI</a></p></li><li><p class="paragraph" style="text-align:left;"><b><a class="link" href="https://www.linkedin.com/in/ashishrajan/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=atlas-ai-security-breach-critical-wsus-flaw-the-real-roi-of-ai-augmented-soc-teams" target="_blank" rel="noopener noreferrer nofollow">Ashish Rajan</a></b> - CISO | Co-Host <a class="link" href="https://open.spotify.com/show/3nV4eijfzdHKIvDOaycVII?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=atlas-ai-security-breach-critical-wsus-flaw-the-real-roi-of-ai-augmented-soc-teams" target="_blank" rel="noopener noreferrer nofollow">AI Security Podcast</a> , Host of <a class="link" href="https://www.cloudsecuritypodcast.tv/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=atlas-ai-security-breach-critical-wsus-flaw-the-real-roi-of-ai-augmented-soc-teams" target="_blank" rel="noopener noreferrer nofollow">Cloud Security Podcast</a></p></li></ul><h2 class="heading" style="text-align:left;" id="definitions-and-core-concepts"><b>Definitions and Core Concepts 📚</b></h2><p class="paragraph" style="text-align:left;">Before diving into our insights, let&#39;s clarify some key terms:</p><ul><li><p class="paragraph" style="text-align:left;"><b>Agentic AI:</b> AI systems that can autonomously take actions, make decisions, and adapt strategies based on context rather than following rigid, pre-programmed workflows. In SOC operations, agentic AI goes beyond simple automation to replicate the investigative reasoning and adaptive decision-making of expert human analysts.</p></li><li><p class="paragraph" style="text-align:left;"><b>Alert Investigation:</b> The process of analyzing security alerts to determine whether they represent genuine threats, gathering evidence, formulating hypotheses, and validating or invalidating those hypotheses through additional data collection fundamentally resembling detective work in the physical world.</p></li><li><p class="paragraph" style="text-align:left;"><b>Prompt Injection:</b> A fundamental security vulnerability in AI systems where malicious instructions embedded in external content (web pages, documents, screenshots) can manipulate the AI agent&#39;s behavior to execute unintended actions. Security researchers note this cannot be &quot;fixed&quot; through traditional means because untrusted data inherently influences LLM outputs.</p></li><li><p class="paragraph" style="text-align:left;"><b>SOAR (Security Orchestration, Automation, and Response):</b> Traditional security automation platforms that use playbooks rigid, pre-programmed sequences of API calls and actions to automate routine SOC tasks. While useful for deterministic workflows, SOAR lacks the adaptive reasoning required for complex security investigations.</p></li><li><p class="paragraph" style="text-align:left;"><b>Tier-One Analyst:</b> Entry-level SOC analysts primarily responsible for initial alert triage, basic investigation, and escalation to senior analysts. These roles face high turnover due to repetitive work, alert fatigue, and limited career development opportunities.</p></li></ul><hr class="content_break"><p class="paragraph" style="text-align:center;">This week&#39;s issue is sponsored by <a class="link" href="https://links.cloudsecuritypodcast.tv/csa-ai-soc-report-dropzone-oct2025?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=atlas-ai-security-breach-critical-wsus-flaw-the-real-roi-of-ai-augmented-soc-teams" target="_blank" rel="noopener noreferrer nofollow">Dropzone</a></p><p class="paragraph" style="text-align:left;">New independent research from Cloud Security Alliance proves AI SOC agents dramatically improve analyst performance. </p><p class="paragraph" style="text-align:left;">In controlled testing with 148 security professionals using Dropzone AI, analysts achieved 22-29% higher accuracy, completed investigations 45-61% faster, and maintained superior quality even under fatigue. </p><p class="paragraph" style="text-align:left;">The study reveals that 94% of participants viewed AI more positively after hands-on use. See the full benchmark results.</p><p class="paragraph" style="text-align:center;"><a class="link" href="https://links.cloudsecuritypodcast.tv/csa-ai-soc-report-dropzone-oct2025?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=atlas-ai-security-breach-critical-wsus-flaw-the-real-roi-of-ai-augmented-soc-teams" target="_blank" rel="noopener noreferrer nofollow">📤 Read the Full Study from CSA</a></p><hr class="content_break"><h2 class="heading" style="text-align:left;" id="our-insights-from-this-practitioner">💡<b>Our Insights from this Practitioner 🔍</b></h2><h3 class="heading" style="text-align:left;" id="the-vibe-coding-trap-why-ai-soc-req"><b>The Vibe Coding Trap: Why AI SOC Requires More Than LLM Prompts(</b><a class="link" href="https://www.cloudsecuritypodcast.tv/videos/ai-agents-for-soc-hype-curve-vs-measurable-roi?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=atlas-ai-security-breach-critical-wsus-flaw-the-real-roi-of-ai-augmented-soc-teams" target="_blank" rel="noopener noreferrer nofollow"><b>Full Episode here</b></a><b>)</b></h3><h3 class="heading" style="text-align:left;" id="where-soc-automation-actually-deliv"><b>Where SOC Automation Actually Delivers Value - Today?</b></h3><p class="paragraph" style="text-align:left;">For years, security leaders have heard vendors promise that automation will solve SOC analyst shortage, eliminate alert fatigue, and dramatically accelerate incident response. Yet traditional SOAR platforms delivered underwhelming results. Edward Wu explains why: <i>&quot;The challenge with these type of automation is they are very robotic. From our perspective, the technology has underdelivered compared to the promise that they are made.&quot;</i></p><p class="paragraph" style="text-align:left;">The fundamental limitation of traditional security automation lies in its rigidity. Playbook-based systems require security architects to anticipate every possible investigation path and explicitly code API calls, parameters, and decision logic. This works for simple, repetitive tasks but fails for the complex, adaptive reasoning required in security investigations. As Edward notes, <b>&quot;</b><i>If we look at the type of tasks we&#39;re trying to automate within the SOC, like alert investigations being a SOC analyst or investigating alerts require somebody to go through a sequence of steps that actually resembles being a detective in the physical world. You have to look at the evidence, you have to look at, you know, blood stains or fingerprints on the window trims and start to formulate hypothesis and gather additional evidence to validate or invalidate hypothesis.&quot;</i></p><h3 class="heading" style="text-align:left;" id="quantifying-ai-augmentation-the-csa"><b>Quantifying AI Augmentation: The CSA Research Findings</b></h3><p class="paragraph" style="text-align:left;">Dropzone AI&#39;s Cloud Security Alliance research provides rare, quantified evidence of AI augmentation&#39;s operational impact. The study recruited 148 operational security analysts individuals actively working in SOC roles and tested them on two common alert types: AWS S3 bucket policy changes and Microsoft Entra ID failed login attempts. Half the analysts investigated these alerts manually, while the other half used AI augmentation.</p><p class="paragraph" style="text-align:left;">The results exceeded expectations. Edward shares: <i>&quot;Maybe the biggest surprise is the actual magnitude of the differences were honestly larger than we originally anticipated. Because keep in mind these... recruited 148 participants. So they are operational, they are in the seat of security analysts. And this is their first time using our product. So we&#39;re looking at the impact of AI assistance when it is the first time they have even experienced such technology.&quot;</i></p><p class="paragraph" style="text-align:left;">The quantified outcomes speak to both velocity and quality:</p><ul><li><p class="paragraph" style="text-align:left;"><b>45-60% faster alert investigation speed</b> for first-time AI-augmented users</p></li><li><p class="paragraph" style="text-align:left;"><b>Higher investigation completion rates and accuracy</b></p></li><li><p class="paragraph" style="text-align:left;"><b>Reduced analyst fatigue</b> across tier-one through tier-three skill levels</p></li></ul><p class="paragraph" style="text-align:left;">These improvements materialized immediately without training periods, learning curves, or workflow optimization. For security leaders evaluating AI augmentation ROI, this represents measurable operational impact from day one.</p><h3 class="heading" style="text-align:left;" id="what-ai-augmentation-actually-means"><b>What AI Augmentation Actually Means for SOC Staffing</b></h3><p class="paragraph" style="text-align:left;">The natural question following any automation discussion is: will AI replace security analysts? Edward provides a nuanced, realistic assessment: <i>&quot;Can AI SOC analyst automate everything in a SOC? And as a security leader, you can fire everybody in your SOC. That&#39;s not going to happen. I do see a world where in the future there will not be that many tier one security analysts as a job role. What we will have is a whole lot more security architects, a whole lot more, you know, security transformation folks.&quot;</i></p><p class="paragraph" style="text-align:left;">This shift mirrors transformations in software development. Just as AI coding tools like Cursor didn&#39;t eliminate developers but changed their work from writing every line of code to architectural design and solution orchestration, AI SOC augmentation will evolve analyst roles from manual alert processing to higher-level security architecture and strategic decision-making.</p><p class="paragraph" style="text-align:left;">Edward elaborates on this parallel:<i> &quot;With AI coding tools, what we have seen is it&#39;s a lot more important for software developers to essentially pick up more program management or project management skills, because now with an army of AI coding agents, a single software developer can operate as a team of developers. That means as a human developer, there&#39;s actually more quasi-like managerial technical leadership tasks, like you have to divvy up the feature into different components and then you will assign each component to an AI coding agent or Cursor to help you with it.&quot;</i></p><p class="paragraph" style="text-align:left;">For SOC operations, this translates to analysts spending less time on repetitive alert triage and more time on:</p><ul><li><p class="paragraph" style="text-align:left;">Architecting detection logic and response workflows</p></li><li><p class="paragraph" style="text-align:left;">Tuning and configuring AI agents for maximum efficiency</p></li><li><p class="paragraph" style="text-align:left;">Evaluating investigation quality and identifying edge cases</p></li><li><p class="paragraph" style="text-align:left;">Strategic threat hunting and security transformation initiatives</p></li></ul><h3 class="heading" style="text-align:left;" id="the-complexity-and-cost-of-building"><b>The Complexity and Cost of Building AI SOC Capabilities</b></h3><p class="paragraph" style="text-align:left;">Given the hype surrounding generative AI, many organizations consider building internal AI SOC capabilities. &quot;How hard can it be to attach an AI to my SIEM or my log aggregator?&quot; is a common refrain. Edward provides sobering reality checks on this assumption.</p><p class="paragraph" style="text-align:left;"><i>&quot;Obviously nowadays it&#39;s very cool to start new projects around, hey, you know, I can take this open source library, I can connect it to a couple APIs, and voila, I have an AI SOC analyst,&quot; Edward notes. &quot;Based on what we have seen in the field, it&#39;s definitely not this easy. In fact, you might have noticed there are close to 30 or 40 different startups building, trying to build similar technologies, but very few actually have working technology, so it&#39;s much more difficult than it looks on paper.&quot;</i></p><p class="paragraph" style="text-align:left;">The technical challenges extend far beyond connecting APIs to large language models. Edward emphasizes the core difficulty: <i>&quot;The biggest challenge when building AI agents for security is how do you manage large language models? How do you find the right balance between allowing large language models to improvise and adapt while keeping them within certain guardrails so they can offer trustworthy and deterministic outputs. And that&#39;s actually very difficult.&quot;</i></p><p class="paragraph" style="text-align:left;">The financial reality underscores this complexity. Edward shares: <i>&quot;At Dropzone, if we look at 2020, by the end of 2026, Dropzone would have spent close to $20 million purely on R&D to build this technology. Unless as a security organization you are maybe allocating five or $10 million of budget to build such a technology, you are probably not going to be able to get it right.&quot;</i></p><p class="paragraph" style="text-align:left;">Organizations attempting internal builds also face talent challenges. Building agentic AI systems requires not just security expertise but also:</p><ul><li><p class="paragraph" style="text-align:left;">Data science and machine learning engineering capabilities</p></li><li><p class="paragraph" style="text-align:left;">Data pipeline architecture and management</p></li><li><p class="paragraph" style="text-align:left;">LLM fine-tuning and prompt engineering specialization</p></li><li><p class="paragraph" style="text-align:left;">Iterative testing and validation frameworks</p></li></ul><p class="paragraph" style="text-align:left;">For most organizations, buying mature AI SOC technology delivers faster time-to-value than multi-year, multi-million-dollar internal development efforts.</p><h3 class="heading" style="text-align:left;" id="who-benefits-most-from-ai-augmented"><b>Who Benefits Most from AI-Augmented SOC Operations</b></h3><p class="paragraph" style="text-align:left;">AI augmentation isn&#39;t equally valuable across all organizational contexts. Edward identifies two primary beneficiary groups based on Dropzone&#39;s customer base.</p><p class="paragraph" style="text-align:left;">The first group comprises mid-market to enterprise organizations (typically 200+ employees) with internal SOC teams.<i> &quot;Those are the folks who are leveraging internal resources. They have full-time security analysts,&quot;</i> Edward explains. <i>&quot;Those security teams will definitely benefit from our technology.&quot;</i></p><p class="paragraph" style="text-align:left;">However, the second group represents perhaps the more transformative use case: managed security service providers (MSSPs) and managed detection and response (MDR) providers. Edward notes: <i>&quot;AI augmentation actually drastically increase the quality of the security services that service providers like MSPs or MDRs can offer. So this is where I don&#39;t think with AI, like if you are an organization of 200 employees, I still think security service providers are the best way to get the initial set of security protections and capabilities. But now with AI augmentation, you can get a whole lot more from your security service providers.&quot;</i></p><p class="paragraph" style="text-align:left;">This insight has important strategic implications. Smaller organizations that historically couldn&#39;t justify full-time SOC analysts can now access sophisticated security operations through AI-augmented service providers at price points previously impossible. The service provider economic model where a single analyst team serves multiple clients combines powerfully with AI augmentation that multiplies each analyst&#39;s investigation capacity 1.5-2x.</p><h3 class="heading" style="text-align:left;" id="training-and-skill-development-in-t"><b>Training and Skill Development in the AI SOC Era</b></h3><p class="paragraph" style="text-align:left;">As security leaders plan year-end training budgets, understanding which skills matter in AI-augmented environments becomes critical. Edward&#39;s guidance mirrors software development transformations:<i> &quot;A lot of that is, again, using software development as analogies. With AI coding tools, what we have seen is it&#39;s a lot more important for software developers to pick up more program management or project management skills.&quot;</i></p><p class="paragraph" style="text-align:left;">The specific capabilities Edward recommends SOC teams develop include:</p><ul><li><p class="paragraph" style="text-align:left;"><b>Technical leadership:</b> The ability to function as a tech lead, dividing complex security projects into smaller, manageable components that AI agents can execute</p></li><li><p class="paragraph" style="text-align:left;"><b>Quality assessment:</b> Developing the judgment to distinguish high-quality from low-quality AI-generated investigations and recommendations</p></li><li><p class="paragraph" style="text-align:left;"><b>AI agent configuration:</b> The ability to coach, tune, and configure AI solutions to achieve maximum efficiency within organizational context</p></li><li><p class="paragraph" style="text-align:left;"><b>Strategic architecture:</b> Moving beyond tactical alert response to designing detection strategies, response workflows, and security transformation initiatives</p></li></ul><p class="paragraph" style="text-align:left;">Training programs should shift focus from teaching analysts &quot;how to investigate alerts&quot; to &quot;how to architect, oversee, and optimize automated investigation systems.&quot; This parallels how software engineering education now emphasizes system design, API integration, and solution architecture over memorizing syntax.</p><h3 class="heading" style="text-align:left;" id="practical-implementation-what-secur"><b>Practical Implementation: What Security Leaders Should Do Now</b></h3><p class="paragraph" style="text-align:left;">For security leaders evaluating AI SOC augmentation, Edward&#39;s research and operational experience suggest several concrete actions:</p><p class="paragraph" style="text-align:left;"><b>1. Test AI augmentation with realistic scenarios.</b> Don&#39;t rely on vendor demos. Request proof-of-concept evaluations using your actual alert types, your SIEM data, and your analysts. The CSA research used common alerts (S3 bucket policy changes, Entra ID failed logins) precisely because they represent real-world SOC workloads.</p><p class="paragraph" style="text-align:left;"><b>2. Measure velocity and quality, not just speed.</b> AI augmentation should improve both investigation speed (45-60% faster) and completion rates/accuracy. If a solution only improves speed by generating low-quality investigations, it will create downstream problems rather than solving them.</p><p class="paragraph" style="text-align:left;"><b>3. Realistically assess build vs. buy economics.</b> Unless you can allocate $5-10M and multi-year timelines for AI SOC development, buying mature technology will deliver faster ROI. The 30-40 startups attempting to build this technology with only a few succeeding demonstrates the difficulty.</p><p class="paragraph" style="text-align:left;"><b>4. Evolve SOC career paths proactively.</b> Communicate to your teams that tier-one analyst roles will evolve, but that security careers aren&#39;t disappearing; they&#39;re becoming more strategic, architectural, and impactful. Invest in training that prepares analysts for these elevated responsibilities.</p><p class="paragraph" style="text-align:left;"><b>5. For smaller organizations, prioritize AI-augmented service providers.</b> If your organization has fewer than 200 employees and lacks dedicated SOC staff, leverage MSSPs and MDR providers that use AI augmentation. You&#39;ll access enterprise-grade capabilities at small-business price points.</p><p class="paragraph" style="text-align:left;"><b>6. Establish AI agent governance frameworks now.</b> As agentic AI systems gain more autonomy, you need guardrails, approval workflows, and audit trails. Start simple: require human approval for any AI-suggested changes to production systems, log all AI agent actions, and establish review processes for AI investigation quality.</p><p class="paragraph" style="text-align:left;">The transition to AI-augmented SOC operations isn&#39;t theoretical; it&#39;s happening now, with measurable outcomes that validate the investment. Security leaders who approach this transformation strategically, grounded in evidence rather than hype, will build more resilient, scalable, and effective security operations for the hybrid cloud era.</p><hr class="content_break"><h2 class="heading" style="text-align:left;" id="related-resources">RELATED RESOURCES</h2><ul><li><p class="paragraph" style="text-align:left;"><b>Cloud Security Alliance: Dropzone AI SOC Benchmark Research Report</b>   Comprehensive analysis of AI augmentation impact on SOC analyst performance with 148 operational participants</p></li><li><p class="paragraph" style="text-align:left;"><b>Dropzone AI Test Drive</b>   Ungated, live environment demonstrating AI SOC analyst investigations across different alert types (<a class="link" href="https://Dropzone.ai?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=atlas-ai-security-breach-critical-wsus-flaw-the-real-roi-of-ai-augmented-soc-teams" target="_blank" rel="noopener noreferrer nofollow">Dropzone.ai</a>)</p></li><li><p class="paragraph" style="text-align:left;"><b>CISA Known Exploited Vulnerabilities Catalog</b>   Authoritative list of actively exploited vulnerabilities requiring immediate patching (<a class="link" href="https://cisa.gov/known-exploited-vulnerabilities?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=atlas-ai-security-breach-critical-wsus-flaw-the-real-roi-of-ai-augmented-soc-teams" target="_blank" rel="noopener noreferrer nofollow">cisa.gov/known-exploited-vulnerabilities</a>)</p></li><li><p class="paragraph" style="text-align:left;"><b>Microsoft Entra Conditional Access Documentation</b>   Implementation guidance for Zero Trust identity controls in Azure and Microsoft 365 environments</p></li><li><p class="paragraph" style="text-align:left;"><b>Trellix CyberThreat Report October 2025</b>   Analysis of AI-powered malware evolution and nation-state threat convergence</p></li><li><p class="paragraph" style="text-align:left;"><b>Johann Rehberger&#39;s Prompt Injection Research</b>   Security researcher&#39;s analysis of fundamental AI agent vulnerabilities and mitigation strategies</p></li><li><p class="paragraph" style="text-align:left;"><b>AWS Security Best Practices for AI/ML Workloads</b>   Cloud-native guidance for securing AI model deployments and data pipelines</p></li><li><p class="paragraph" style="text-align:left;"><b>MITRE ATT&CK Framework for Cloud</b>   Comprehensive matrix of cloud-specific tactics, techniques, and procedures (<a class="link" href="https://attack.mitre.org?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=atlas-ai-security-breach-critical-wsus-flaw-the-real-roi-of-ai-augmented-soc-teams" target="_blank" rel="noopener noreferrer nofollow">attack.mitre.org</a>)</p></li></ul><hr class="content_break"><h2 class="heading" style="text-align:left;" id="related-podcast-episodes"><b>Related Podcast Episodes 🎧</b></h2><div class="image"><a class="image__link" href="https://links.cloudsecuritypodcast.tv/csa-ai-soc-report-dropzone-oct2025?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=atlas-ai-security-breach-critical-wsus-flaw-the-real-roi-of-ai-augmented-soc-teams" rel="noopener" target="_blank"><img alt="" class="image__image" style="" src="https://media.beehiiv.com/cdn-cgi/image/fit=scale-down,format=auto,onerror=redirect,quality=80/uploads/asset/file/00b4dd3f-69ad-4b71-bf9b-88fe84a3be97/S06_Edward_Wu_October.jpg?t=1761771429"/></a><div class="image__source"><a class="image__source_link" href="https://links.cloudsecuritypodcast.tv/csa-ai-soc-report-dropzone-oct2025?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=atlas-ai-security-breach-critical-wsus-flaw-the-real-roi-of-ai-augmented-soc-teams" rel="noopener" target="_blank"><span class="image__source_text"><p>AI Agents for SOC: Hype Curve vs. Measurable ROI</p></span></a></div></div><h3 class="heading" style="text-align:left;" id="question-for-you-reply-to-this-emai">Question for you? (Reply to this email)</h3><p class="paragraph" style="text-align:left;"> 🤖 Is your SOC ready to trust AI agents for autonomous alert investigation what&#39;s the first use case you&#39;d pilot?</p><p class="paragraph" style="text-align:left;">Next week, we&#39;ll explore another critical aspect of cloud security. Stay tuned!</p><hr class="content_break"><p class="paragraph" style="text-align:left;">📬 Want weekly expert takes on AI & Cloud Security? [<a class="link" href="https://www.cloudsecuritynewsletter.com/subscribe?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=atlas-ai-security-breach-critical-wsus-flaw-the-real-roi-of-ai-augmented-soc-teams" target="_blank" rel="noopener noreferrer nofollow">Subscribe here</a>]”</p><hr class="content_break"><p class="paragraph" style="text-align:left;"><span style="color:rgb(238, 40, 60);"><b><a class="link" href="mailto:info@cloudsecuritypodcast.tv" target="_blank" rel="noopener noreferrer nofollow">We would love to hear from you</a></b></span>📢 for a feature or topic request or if you would like to sponsor an edition of Cloud Security Newsletter. </p><p class="paragraph" style="text-align:left;">Thank you for continuing to subscribe and Welcome to the new members in tis newsletter community💙</p><p class="paragraph" style="text-align:start;">Peace!</p><p class="paragraph" style="text-align:start;"><a class="link" href="https://www.linkedin.com/in/shilpi-bhattacharjee/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=atlas-ai-security-breach-critical-wsus-flaw-the-real-roi-of-ai-augmented-soc-teams" target="_blank" rel="noopener noreferrer nofollow">Shilpi Bhattacharjee</a></p><div class="image"><a class="image__link" href="https://www.cloudsecuritypodcast.tv/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=atlas-ai-security-breach-critical-wsus-flaw-the-real-roi-of-ai-augmented-soc-teams" rel="noopener" target="_blank"><img alt="" class="image__image" style="" src="https://media.beehiiv.com/cdn-cgi/image/fit=scale-down,format=auto,onerror=redirect,quality=80/uploads/asset/file/dc094a1c-678a-43e0-adc9-1bee6c3499e2/CSP_Logo_Blue_ScreenRes_3000x3000_v2.jpg"/></a></div><div class="button" style="text-align:left;"><a target="_blank" rel="noopener nofollow noreferrer" class="button__link" style="" href="{{rp_referral_hub_url}}"><span class="button__text" style=""> Share the newsletter </span></a></div><p class="paragraph" style="text-align:left;">Was this forwarded to you? You can <a class="link" href="https://www.cloudsecuritynewsletter.com/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=atlas-ai-security-breach-critical-wsus-flaw-the-real-roi-of-ai-augmented-soc-teams" target="_blank" rel="noopener noreferrer nofollow">Sign up here</a>, to join our growing readership.</p><p class="paragraph" style="text-align:left;">Want to <b>sponsor</b> the next newsletter edition! <a class="link" href="mailto:info@cloudsecuritypodcast.tv" target="_blank" rel="noopener noreferrer nofollow">Lets make it happen </a></p><p class="paragraph" style="text-align:left;">Have you joined our FREE <b>Monthly</b> <a class="link" href="https://www.cloudsecuritybootcamp.com/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=atlas-ai-security-breach-critical-wsus-flaw-the-real-roi-of-ai-augmented-soc-teams" target="_blank" rel="noopener noreferrer nofollow">Cloud Security Bootcamp</a> yet?</p><p class="paragraph" style="text-align:left;">checkout our <b>sister podcast</b> <a class="link" href="https://www.youtube.com/@AISecurityPodcast?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=atlas-ai-security-breach-critical-wsus-flaw-the-real-roi-of-ai-augmented-soc-teams" target="_blank" rel="noopener noreferrer nofollow">AI Security Podcast</a></p></div><div class='beehiiv__footer'><br class='beehiiv__footer__break'><hr class='beehiiv__footer__line'><a target="_blank" class="beehiiv__footer_link" style="text-align: center;" href="https://www.beehiiv.com/?utm_campaign=7ff689c6-994c-4d88-b4e5-7dc14015a688&utm_medium=post_rss&utm_source=cloud_security_newsletter">Powered by beehiiv</a></div></div>
  ]]></content:encoded>
</item>

      <item>
  <title>🚨$1.73B Veeam–Securiti AI Deal + F5 Zero-Day Risk: Reality of Building an AI-Native SOC Architecture</title>
  <description>This week&#39;s newsletter covers critical infrastructure threats including the F5 BIG-IP breach enforcement, AWS US-East-1 outage resilience lessons, and AI security vulnerabilities. We feature Ariful Huq from Exaforce on building AI-native SOC platforms beyond traditional SIEM architectures, exploring data lake design, detection engineering at scale, and the evolution of security operations teams.</description>
      <enclosure url="https://media.beehiiv.com/cdn-cgi/image/fit=scale-down,format=auto,onerror=redirect,quality=80/uploads/asset/file/fad01f3f-b07b-43ce-abd5-f6bf19375c54/Screenshot_2025-10-23_at_1.42.38_AM.png" length="1554380" type="image/png"/>
  <link>https://www.cloudsecuritynewsletter.com/p/f5-enforcement-veeam-securiti-ai-soc-lessons</link>
  <guid isPermaLink="true">https://www.cloudsecuritynewsletter.com/p/f5-enforcement-veeam-securiti-ai-soc-lessons</guid>
  <pubDate>Thu, 23 Oct 2025 00:44:41 +0000</pubDate>
  <atom:published>2025-10-23T00:44:41Z</atom:published>
    <dc:creator>Ashish Rajan</dc:creator>
  <content:encoded><![CDATA[
    <div class='beehiiv'><style>
  .bh__table, .bh__table_header, .bh__table_cell { border: 1px solid #C0C0C0; }
  .bh__table_cell { padding: 5px; background-color: #FFFFFF; }
  .bh__table_cell p { color: #2D2D2D; font-family: 'Helvetica',Arial,sans-serif !important; overflow-wrap: break-word; }
  .bh__table_header { padding: 5px; background-color:#F1F1F1; }
  .bh__table_header p { color: #2A2A2A; font-family:'Trebuchet MS','Lucida Grande',Tahoma,sans-serif !important; overflow-wrap: break-word; }
</style><div class='beehiiv__body'><h2 class="heading" style="text-align:left;" id="hello-from-the-cloudverse"><span style="background-color:#28EEDA;"><b>Hello from the Cloud-verse!</b></span></h2><p class="paragraph" style="text-align:left;">This week’s Cloud Security Newsletter Topic we cover - <b>The Vibe Coding Trap: Why AI SOC Requires More Than LLM Prompts</b><b> </b><a class="link" href="#cloud-security-topic-of-the-week" rel="noopener noreferrer nofollow">(continue reading)</a> </p><div class="button" style="text-align:center;"><a target="_blank" rel="noopener nofollow noreferrer" class="button__link" style="" href="https://links.cloudsecuritypodcast.tv/ai-for-soc-exaforce-oct-2025?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=1-73b-veeam-securiti-ai-deal-f5-zero-day-risk-reality-of-building-an-ai-native-soc-architecture"><span class="button__text" style=""> Check out this week’s Sponsor: Exaforce </span></a></div><hr class="content_break"><div class="image"><a class="image__link" href="https://links.cloudsecuritypodcast.tv/ai-for-soc-exaforce-oct-2025?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=1-73b-veeam-securiti-ai-deal-f5-zero-day-risk-reality-of-building-an-ai-native-soc-architecture" rel="noopener" target="_blank"><img alt="" class="image__image" style="" src="https://media.beehiiv.com/cdn-cgi/image/fit=scale-down,format=auto,onerror=redirect,quality=80/uploads/asset/file/fad01f3f-b07b-43ce-abd5-f6bf19375c54/Screenshot_2025-10-23_at_1.42.38_AM.png?t=1761180191"/></a><div class="image__source"><span class="image__source_text"><p>This image was generated by AI. It&#39;s still experimental, so it might not be a perfect match!</p></span></div></div><p class="paragraph" style="text-align:left;"><b>Incase, this is your 1st Cloud Security Newsletter! You are in good company! </b><br>You are reading this issue along with your friends and colleagues from companies like <i>Netflix</i>, Citi, <i>JP Morgan, Linkedin, Reddit, Github, Gitlab, CapitalOne, Robinhood, HSBC, British Airways, Airbnb, Block, Booking Inc & more</i> who subscribe to this newsletter, who like you want to learn what’s new with Cloud Security each week from their industry peers like many others who listen to <a class="link" href="https://open.spotify.com/show/6LZgeh4GecRYPc0WrwMB4I?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=1-73b-veeam-securiti-ai-deal-f5-zero-day-risk-reality-of-building-an-ai-native-soc-architecture" target="_blank" rel="noopener noreferrer nofollow">Cloud Security Podcast</a> & <a class="link" href="https://open.spotify.com/show/3nV4eijfzdHKIvDOaycVII?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=1-73b-veeam-securiti-ai-deal-f5-zero-day-risk-reality-of-building-an-ai-native-soc-architecture" target="_blank" rel="noopener noreferrer nofollow">AI Security Podcast</a> every week.</p><hr class="content_break"><p class="paragraph" style="text-align:left;">Welcome to this week’s Cloud Security Newsletter</p><p class="paragraph" style="text-align:left;">This week brings critical enforcement deadlines for the F5 breach, a major AWS outage that tested multi-region resilience, and groundbreaking research showing how prompt injection can lead to remote code execution in AI agents. </p><p class="paragraph" style="text-align:left;">We&#39;re joined by Ariful Huq, CEO of Exaforce and former Palo Alto Networks veteran, who shares hard-earned lessons from building an AI-native SOC platform from first principles tackling everything from data architecture to detection engineering and the evolution of security teams in an AI-driven future.</p><hr class="content_break"><h2 class="heading" style="text-align:left;" id="tldr-for-busy-readers">📰 TL;DR for Busy Readers</h2><ul><li><p class="paragraph" style="text-align:left;"><b>$1.73B Data Consolidation: </b>Veeam to acquire Securiti AI in a <i>~$1.73 billion cash-and-stock deal</i>. Data resilience + DSPM/privacy/AI-trust are converging. Expect backup inventories and multi-cloud data maps to unify.</p></li><li><p class="paragraph" style="text-align:left;"><b>F5 Enforcement Starts Now (Oct 22):</b><br>CISA ED-26-01 deadlines in effect. Treat BIG-IP as Tier-0. Remove internet-exposed mgmt, apply Oct QSN patches, add TMSH/API anomaly detections.</p></li><li><p class="paragraph" style="text-align:left;"><b>Expert Insights from the Cloud Security Podcast Episode</b></p><ul><li><p class="paragraph" style="text-align:left;"><b>Data &gt; SIEM:</b><br>Legacy SIEM + AI “bolt-ons” fail without unified <b>logs + config + code + permissions</b> context.</p></li><li><p class="paragraph" style="text-align:left;"><b>SaaS Blind Spots:</b><br>GitHub, Snowflake, and Workspace lack native detections — require domain-specific engineering.</p></li><li><p class="paragraph" style="text-align:left;"><b>Cost Reality:</b><br>Cloud telemetry (100× traditional logs) breaks ingestion-priced SIEMs; separate <b>hot vs cold</b> data for speed + economy.</p></li><li><p class="paragraph" style="text-align:left;"><b>SOC Evolution:</b><br>Future teams = <b>full-stack security engineers</b>; AI handles triage, humans own investigation.</p></li></ul></li></ul><hr class="content_break"><h2 class="heading" style="text-align:left;" id="this-weeks-security-news">📰 <b>THIS WEEK&#39;S SECURITY HEADLINES</b></h2><h3 class="heading" style="text-align:left;" id="google-cloud-functions-vulnerabilit"><b>1. </b>Veeam Acquires Securiti AI for $1.73B: Data Resilience Meets DSPM</h3><p class="paragraph" style="text-align:left;">Veeam announced plans to acquire DSPM and AI data governance vendor Securiti AI for $1.725 billion in a cash and stock deal expected to close in Q4 2025. The acquisition aims to unify backup and disaster recovery capabilities with data discovery, privacy management, DSPM, and AI trust functions.</p><p class="paragraph" style="text-align:left;"><b>Why this matters</b>: This represents major consolidation at the intersection of data resilience and data security posture management. Enterprise security leaders should anticipate tighter integrations between backup inventories and multi-cloud data mapping across S3, Azure Blob, GCS, and SaaS platforms. For organizations running Veeam today, this creates an opportunity to converge policies around retention, privacy, data sovereignty, and response orchestration that spans both backup restoration and data-level containment. Key questions for your Veeam roadmap discussions: How will auto-classification work for backed-up data? What cross-cloud toxic data flow detection capabilities will emerge? How will immutable restore capabilities extend to AI model rollback scenarios?</p><p class="paragraph" style="text-align:left;">This consolidation signals that data-centric security is moving beyond point solutions toward integrated platforms that understand both where data lives and how to recover it securely.</p><p class="paragraph" style="text-align:left;"><b>Sources</b>: <a class="link" href="https://www.bloomberg.com?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=1-73b-veeam-securiti-ai-deal-f5-zero-day-risk-reality-of-building-an-ai-native-soc-architecture" target="_blank" rel="noopener noreferrer nofollow">Bloomberg</a>, <a class="link" href="https://www.veeam.com?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=1-73b-veeam-securiti-ai-deal-f5-zero-day-risk-reality-of-building-an-ai-native-soc-architecture" target="_blank" rel="noopener noreferrer nofollow">Veeam press release</a></p><hr class="content_break"><h3 class="heading" style="text-align:left;" id="2-cisa-emergency-directive-on-f-5-b"><b>2. CISA Emergency Directive on F5 Breach Reaches Critical Enforcement Deadlines</b></h3><p class="paragraph" style="text-align:left;">Following F5&#39;s disclosure that a nation-state actor stole BIG-IP source code and internal vulnerability data, CISA issued Emergency Directive 26-01 ordering federal agencies to inventory and patch or replace affected F5 devices. Enforcement deadlines began October 22, with new reporting highlighting hundreds of thousands of internet-reachable BIG-IP instances at risk.</p><p class="paragraph" style="text-align:left;"><b>Why this matters</b>: Source code and vulnerability intelligence theft fundamentally changes the threat landscape for these devices. This isn&#39;t just about known CVEs attackers now have the blueprint to craft 0-day exploits against ADC and WAF gateways that front critical applications in both on-premises and hybrid cloud environments. For enterprise security architects, BIG-IP and related F5 infrastructure should be reclassified as Tier-0 assets requiring immediate action: restrict management plane access, enforce MFA, remove any internet-exposed management interfaces, apply F5&#39;s October Quarterly Security Notification packages, and implement CISA ED 26-01 hardening actions including asset discovery, configuration review, and log analysis. Additionally, security teams should add custom detections for anomalous TMSH and API activity, and hunt for any credential or API key leakage patterns referenced in the advisories going back at least 12 months.</p><p class="paragraph" style="text-align:left;"><b>Sources</b>:<a class="link" href="https://www.cisa.gov/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=1-73b-veeam-securiti-ai-deal-f5-zero-day-risk-reality-of-building-an-ai-native-soc-architecture" target="_blank" rel="noopener noreferrer nofollow"> CISA Emergency Directive & Alert</a>,<a class="link" href="https://www.f5.com/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=1-73b-veeam-securiti-ai-deal-f5-zero-day-risk-reality-of-building-an-ai-native-soc-architecture" target="_blank" rel="noopener noreferrer nofollow"> F5 Advisory</a></p><hr class="content_break"><h3 class="heading" style="text-align:left;" id="3-aws-us-east-1-outage-a-multi-regi"><b>3 - AWS US-East-1 Outage: A Multi-Region Resilience Wake-Up Call</b></h3><p class="paragraph" style="text-align:left;">A multi-hour incident in AWS&#39;s US-East-1 region on October 20 cascaded across load balancers and dependent services, disrupting thousands of major consumer and government applications. AWS attributed the trigger to internal infrastructure issues, and service has been restored.</p><p class="paragraph" style="text-align:left;"><b>Why this matters</b>: While not a security breach, this outage serves as a critical resilience test for cloud architects. The incident exposed dependencies that many organizations didn&#39;t realize existed particularly around authentication services, payment processing, and messaging infrastructure that assumed US-East-1 availability. Key architectural reviews for your team: Revisit cell and region isolation strategies for critical paths, validate control-plane dependencies including Route 53 health checks, ELB/ALB configurations, and IAM token lifetime assumptions. Most importantly, prove graceful degradation and blast-radius limits through game-day exercises, and ensure incident communication automations don&#39;t depend on the impaired region. This outage reinforces that multi-region architecture isn&#39;t just about disaster recovery it&#39;s about maintaining operations when a primary region experiences prolonged disruption.</p><p class="paragraph" style="text-align:left;"><b>Sources</b>:<a class="link" href="https://www.theguardian.com/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=1-73b-veeam-securiti-ai-deal-f5-zero-day-risk-reality-of-building-an-ai-native-soc-architecture" target="_blank" rel="noopener noreferrer nofollow"> The Guardian</a>,<a class="link" href="https://health.aws.amazon.com/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=1-73b-veeam-securiti-ai-deal-f5-zero-day-risk-reality-of-building-an-ai-native-soc-architecture" target="_blank" rel="noopener noreferrer nofollow"> AWS Health Dashboard</a></p><hr class="content_break"><h3 class="heading" style="text-align:left;" id="4-muddy-water-apt-targets-100-gover"><b>4 - MuddyWater APT Targets 100+ Government Entities with Phoenix Backdoor</b></h3><p class="paragraph" style="text-align:left;">New reporting details a broad phishing campaign from MuddyWater (Iranian state-sponsored group) using compromised mailboxes and VPN infrastructure to deliver macro-enabled payloads containing the Phoenix backdoor. The campaign targets government organizations across the Middle East and Africa.</p><p class="paragraph" style="text-align:left;"><b>Why this matters</b>: Organizations that federate identities to Office 365 or Google Workspace and operate hybrid workloads in MEA regions should expect living-off-the-land persistence techniques and mailbox-to-mailbox lateral movement. The campaign demonstrates sophisticated use of compromised infrastructure to evade traditional perimeter defenses. Immediate hardening steps: Tighten conditional access policies, disable legacy authentication protocols, restrict macro execution via Attack Surface Reduction (ASR) rules, and enrich detection logic with VPN exit node indicators and suspicious OAuth token grant patterns. This campaign also highlights the importance of monitoring for anomalous authentication patterns and mailbox rules that could indicate compromise.</p><p class="paragraph" style="text-align:left;"><b>Source</b>:<a class="link" href="https://www.darkreading.com/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=1-73b-veeam-securiti-ai-deal-f5-zero-day-risk-reality-of-building-an-ai-native-soc-architecture" target="_blank" rel="noopener noreferrer nofollow"> Dark Reading</a></p><hr class="content_break"><h3 class="heading" style="text-align:left;" id="5-cisa-adds-five-actively-exploited"><b>5 - CISA Adds Five Actively Exploited Vulnerabilities to KEV Catalog</b></h3><p class="paragraph" style="text-align:left;">CISA updated the Known Exploited Vulnerabilities catalog on October 20, highlighting multiple bugs under active exploitation in the wild. Separate reporting confirms exploitation of a patched Windows SMB vulnerability.</p><p class="paragraph" style="text-align:left;"><b>Why this matters</b>: KEV-listed vulnerabilities must be prioritized over CVSS-only scoring approaches. These represent confirmed threats that attackers are actively weaponizing. Security teams should integrate KEV feeds into vulnerability management programs to auto-create SLAs and prioritize remediation workflows. Where immediate patching isn&#39;t feasible, verify compensating controls are in place particularly network segmentation, SMB signing enforcement, and EDR coverage on domain controllers and file servers that underpin cloud synchronization services. The Windows SMB exploitation is especially concerning given how critical these services are to hybrid cloud authentication and file sharing architectures.</p><p class="paragraph" style="text-align:left;"><b>Source</b>:<a class="link" href="https://www.cisa.gov/known-exploited-vulnerabilities?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=1-73b-veeam-securiti-ai-deal-f5-zero-day-risk-reality-of-building-an-ai-native-soc-architecture" target="_blank" rel="noopener noreferrer nofollow"> CISA Alert</a></p><hr class="content_break"><h3 class="heading" style="text-align:left;" id="6-healthcare-cyberattack-forces-mas"><b>6- Healthcare Cyberattack Forces Massachusetts Hospitals to Divert Patients</b></h3><p class="paragraph" style="text-align:left;">A cyber incident on October 20 forced two Massachusetts hospitals to take IT and radiology systems offline and divert ambulance traffic. Early indicators point to ransomware as the attack vector.</p><p class="paragraph" style="text-align:left;"><b>Why this matters</b>: This incident underscores that clinical systems including PACS/RIS and imaging workstations alongside affiliate and third-party network connections represent critical weak points in healthcare infrastructure. For security leaders supporting healthcare workloads, this reinforces the need to: isolate imaging networks from general IT infrastructure, require privileged access workstations for administrative functions, and rehearse downtime procedures including cloud-hosted EHR failover capabilities. The attack also highlights how ransomware continues to target healthcare organizations where operational disruption directly impacts patient care, making these organizations more likely to pay ransoms under duress.</p><p class="paragraph" style="text-align:left;"><b>Source</b>:<a class="link" href="https://www.bankinfosecurity.com/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=1-73b-veeam-securiti-ai-deal-f5-zero-day-risk-reality-of-building-an-ai-native-soc-architecture" target="_blank" rel="noopener noreferrer nofollow"> Bank Info Security</a></p><hr class="content_break"><h3 class="heading" style="text-align:left;" id="7-ai-security-research-prompt-injec"><b>7 - AI Security Research: Prompt Injection Leads to Remote Code Execution</b></h3><p class="paragraph" style="text-align:left;">Trail of Bits published research demonstrating complete attack chains from prompt injection to remote code execution in LLM agent systems through tool use and environment bridges. The research coincided with commentary from OpenAI&#39;s CISO emphasizing risk concerns for newly launched agent features.</p><p class="paragraph" style="text-align:left;"><b>Why this matters</b>: Organizations piloting agent-assisted SecOps or internal copilots with cloud tool access must treat these systems as untrusted execution environments similar to web browsers. Security teams should: constrain agent tools with least-privilege access and time-boxed tokens, implement out-of-process sandboxes, add content provenance checks including URL/domain allowlists and HTML/script stripping, enforce model-side policy guards, log all tool invocations comprehensively, and bind guardrails to secrets managers with no inline keys. This research demonstrates that AI agents aren&#39;t just productivity tools they&#39;re potential attack surfaces that require the same security rigor as any privileged system with access to production environments.</p><p class="paragraph" style="text-align:left;"><b>Sources</b>:<a class="link" href="https://blog.trailofbits.com/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=1-73b-veeam-securiti-ai-deal-f5-zero-day-risk-reality-of-building-an-ai-native-soc-architecture" target="_blank" rel="noopener noreferrer nofollow"> Trail of Bits Blog</a>,<a class="link" href="https://simonwillison.net/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=1-73b-veeam-securiti-ai-deal-f5-zero-day-risk-reality-of-building-an-ai-native-soc-architecture" target="_blank" rel="noopener noreferrer nofollow"> Simon Willison&#39;s commentary</a></p><hr class="content_break"><h3 class="heading" style="text-align:left;" id="8-aws-releases-amazon-corretto-quar"><b>8 - AWS Releases Amazon Corretto Quarterly Security Patches</b></h3><p class="paragraph" style="text-align:left;">AWS released Corretto (OpenJDK) quarterly security and critical updates across all LTS versions (25, 21, 17, 11, 8). Many Java-based services including Lambda functions and containerized microservices pin these runtimes as base dependencies.</p><p class="paragraph" style="text-align:left;"><b>Why this matters</b>: Java supply-chain risk frequently enters through base container images and managed runtime environments. Security and DevOps teams should ensure CI/CD pipelines rebuild container images against the updated Corretto versions, rotate JDK installations in Elastic Beanstalk, EKS, and ECS environments, and redeploy Lambda functions using updated layers. This seemingly routine patching cycle represents a critical supply-chain security control especially given how widely Java runtimes are deployed across enterprise cloud workloads.</p><p class="paragraph" style="text-align:left;"><b>Source</b>:<a class="link" href="https://aws.amazon.com/about-aws/whats-new/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=1-73b-veeam-securiti-ai-deal-f5-zero-day-risk-reality-of-building-an-ai-native-soc-architecture" target="_blank" rel="noopener noreferrer nofollow"> AWS &quot;What&#39;s New&quot; (Corretto)</a></p><hr class="content_break"><h2 class="heading" style="text-align:left;" id="cloud-security-topic-of-the-week">🎯 Cloud Security Topic of the Week: </h2><h3 class="heading" style="text-align:left;" id="building-ai-native-soc-platforms-wh"><b>Building AI-Native SOC Platforms: Why Traditional SIEM + AI Bolt-Ons Fall Short</b></h3><p class="paragraph" style="text-align:left;">“Can you simply use Claude Code or any advanced LLM to build an AI SOC?“<br>This question reflects broader confusion in the market about what it truly takes to operationalize AI in security operations. This week, we explore the architectural and operational realities of building AI-native SOC capabilities lessons that challenge conventional wisdom about &quot;bolting AI onto existing tools.&quot;</p><h2 class="heading" style="text-align:left;" id="featured-experts-this-week"><b>Featured Experts This Week </b>🎤</h2><ul><li><p class="paragraph" style="text-align:left;"><a class="link" href="https://www.linkedin.com/in/arifhuq/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=1-73b-veeam-securiti-ai-deal-f5-zero-day-risk-reality-of-building-an-ai-native-soc-architecture" target="_blank" rel="noopener noreferrer nofollow"><b>Ariful Huq</b></a> - CEO and Co-founder, <a class="link" href="https://links.cloudsecuritypodcast.tv/ai-for-soc-exaforce-oct-2025?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=1-73b-veeam-securiti-ai-deal-f5-zero-day-risk-reality-of-building-an-ai-native-soc-architecture" target="_blank" rel="noopener noreferrer nofollow">Exaforce</a></p></li><li><p class="paragraph" style="text-align:left;"><b><a class="link" href="https://www.linkedin.com/in/ashishrajan/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=1-73b-veeam-securiti-ai-deal-f5-zero-day-risk-reality-of-building-an-ai-native-soc-architecture" target="_blank" rel="noopener noreferrer nofollow">Ashish Rajan</a></b> - CISO | Co-Host <a class="link" href="https://open.spotify.com/show/3nV4eijfzdHKIvDOaycVII?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=1-73b-veeam-securiti-ai-deal-f5-zero-day-risk-reality-of-building-an-ai-native-soc-architecture" target="_blank" rel="noopener noreferrer nofollow">AI Security Podcast</a> , Host of <a class="link" href="https://www.cloudsecuritypodcast.tv/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=1-73b-veeam-securiti-ai-deal-f5-zero-day-risk-reality-of-building-an-ai-native-soc-architecture" target="_blank" rel="noopener noreferrer nofollow">Cloud Security Podcast</a></p></li></ul><h2 class="heading" style="text-align:left;" id="definitions-and-core-concepts"><b>Definitions and Core Concepts 📚</b></h2><p class="paragraph" style="text-align:left;">Before diving into our insights, let&#39;s clarify some key terms:</p><ul><li><p class="paragraph" style="text-align:left;"><b>AI-Native SOC</b>: A security operations center built from first principles with AI capabilities embedded at every layer from data ingestion to detection, triage, investigation, and response rather than AI features added to existing SIEM platforms.</p></li><li><p class="paragraph" style="text-align:left;"><b>DSPM (Data Security Posture Management)</b>: A category of security tools that discover, classify, and monitor sensitive data across multi-cloud and SaaS environments, providing visibility into data exposure, access patterns, and compliance risks.</p></li><li><p class="paragraph" style="text-align:left;"><b>SIEM (Security Information and Event Management)</b>: Traditional platforms that aggregate logs and events for correlation and analysis. Legacy SIEMs primarily process event data without broader context like configuration, code, or business logic.</p></li><li><p class="paragraph" style="text-align:left;"><b>Agentic SOC</b>: Security operations platforms that use autonomous AI agents to perform multi-step security tasks including detection, triage, investigation, and response with minimal human intervention.</p></li><li><p class="paragraph" style="text-align:left;"><b>Detection Engineering</b>: The practice of creating, testing, and maintaining threat detection logic including rules, behavioral analytics, and machine learning models to identify security incidents across infrastructure and applications.</p></li></ul><hr class="content_break"><p class="paragraph" style="text-align:center;">This week&#39;s issue is sponsored by <a class="link" href="https://links.cloudsecuritypodcast.tv/ai-for-soc-exaforce-oct-2025?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=1-73b-veeam-securiti-ai-deal-f5-zero-day-risk-reality-of-building-an-ai-native-soc-architecture" target="_blank" rel="noopener noreferrer nofollow">Exaforce</a></p><p class="paragraph" style="text-align:left;">Exaforce transforms how security teams operate by delivering enterprise-grade SOC capabilities without increasing headcount. </p><p class="paragraph" style="text-align:left;">Powered by agentic, multi-model AI and advanced data exploration capabilities, the Exaforce platform automates detection, triage, investigation, and response with expert analyst-level accuracy at machine speed. </p><p class="paragraph" style="text-align:left;">Cut false positives by up to 80%, reduce MTTR by 70%, and lower costs while strengthening coverage across IaaS, SaaS, code, and identity. </p><p class="paragraph" style="text-align:left;">Build or scale your SOC in hours, not months, and give your team the intelligence edge it deserves.</p><p class="paragraph" style="text-align:center;"><a class="link" href="https://links.cloudsecuritypodcast.tv/ai-for-soc-exaforce-oct-2025?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=1-73b-veeam-securiti-ai-deal-f5-zero-day-risk-reality-of-building-an-ai-native-soc-architecture" target="_blank" rel="noopener noreferrer nofollow">📤 See Agentic SOC Platform in Action</a></p><hr class="content_break"><h2 class="heading" style="text-align:left;" id="our-insights-from-this-practitioner">💡<b>Our Insights from this Practitioner 🔍</b></h2><h3 class="heading" style="text-align:left;" id="the-vibe-coding-trap-why-ai-soc-req"><b>The Vibe Coding Trap: Why AI SOC Requires More Than LLM Prompts(</b><a class="link" href="https://www.cloudsecuritypodcast.tv/videos/can-you-build-an-ai-soc-with-claude-code-the-reality-vs-hype?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=1-73b-veeam-securiti-ai-deal-f5-zero-day-risk-reality-of-building-an-ai-native-soc-architecture" target="_blank" rel="noopener noreferrer nofollow"><b>Full Episode here</b></a><b>)</b></h3><p class="paragraph" style="text-align:left;">&quot;Can I just use Claude Code to build an AI SOC?&quot; This question surfaces constantly in enterprise security discussions, fueled by impressive demos of LLMs generating code and solving problems. Ariful Huq, who has spent the past two years building Exaforce&#39;s AI-native SOC platform, offers a nuanced perspective that every security leader evaluating AI capabilities should understand.</p><p class="paragraph" style="text-align:left;">&quot;I think this technology is incredibly powerful,&quot; Ariful begins. &quot;It&#39;s given me superpowers as a product person. Just over the weekend, I was building session hijacking demos and exploring APIs things I wouldn&#39;t have done five years ago without a coding background.&quot; However, the critical distinction emerges when discussing production SOC platforms versus exploratory tooling: <b>&quot;</b><i>If you&#39;re starting from scratch and thinking about building a data platform with everything on top of it from a management and resourcing perspective, that might not be worthwhile.</i><b>&quot;</b></p><p class="paragraph" style="text-align:left;">The core issue isn&#39;t whether LLMs can generate useful code (they can), but whether organizations understand the full stack required for production security operations at scale. Traditional approaches that &quot;bolt on&quot; AI capabilities to existing SIEM architectures encounter fundamental limitations rooted in data architecture, context, and operational maturity.</p><h3 class="heading" style="text-align:left;" id="1-first-principles-starting-with-da"><b>1️⃣ First Principles: Starting With Data Architecture, Not Detection Rules</b></h3><p class="paragraph" style="text-align:left;">Drawing an analogy to autonomous vehicles, Ariful explains why point solutions fall short: &quot;If you look at Tesla and Waymo, they&#39;re not building bolt-on autonomous vehicles. They&#39;ve built the infrastructure, the cameras, the data collection, the training. You can see they&#39;re making the most progress because they own the underlying platform.&quot;</p><p class="paragraph" style="text-align:left;">This same principle applies to AI-native SOC platforms. <b>&quot;</b><i>I think the bolt-on approach is tough. If you&#39;re really looking for good outcomes, you really need to think about an approach starting from first principles with the data,</i><b>&quot;</b> Ariful emphasizes. &quot;It&#39;s well more than just logs and event data, it&#39;s config, code context, bringing it all together. I think it&#39;s going well beyond what SIEMs are capable of.&quot;</p><p class="paragraph" style="text-align:left;">The data challenge manifests in several dimensions that enterprise SOC teams face today:</p><p class="paragraph" style="text-align:left;"><b>Volume Explosion from Cloud Infrastructure</b>: At Exaforce, analyzing their own security telemetry revealed that IaaS logs specifically from public cloud platforms generate 100 times the volume of all other data sources combined. This astronomical scale puts tremendous pressure on architectural choices made for traditional on-premises environments. Legacy SIEM architectures that charge based on data ingestion become prohibitively expensive when cloud workloads generate petabytes of security-relevant telemetry.</p><p class="paragraph" style="text-align:left;"><b>The Real-Time Processing Tax</b>: Security operations fundamentally differs from other data analytics use cases because SOC teams require real-time threat detection and response. Ariful highlights a critical insight about data lake economics: &quot;If you&#39;re building a SIEM on top of Snowflake, every time you run a query or detection, you&#39;re getting charged. The real-time data processing is the critical thing here.&quot;</p><p class="paragraph" style="text-align:left;">This continuous processing requirement drove Exaforce to architect a hybrid approach: hot data in memory for real-time querying, combined with cold storage leveraging Snowflake and Apache Iceberg on S3 for historical analysis. <b>&quot;</b><i>We had to decouple ourselves to leverage technologies like Snowflake for what they&#39;re good at, while not getting charged for every real-time detection query,</i><b>&quot;</b> Ariful explains.</p><p class="paragraph" style="text-align:left;">One customer example crystallizes the problem: they stored all cloud logs in S3 using Athena for queries resulting in 50 minutes to one hour query times. The cost savings from avoiding SIEM ingestion fees were completely negated by operational paralysis. Another customer tried sending all cloud logs to their existing SIEM, watching costs skyrocket before pulling back and seeking augmentation technologies to handle cloud telemetry separately.</p><h4 class="heading" style="text-align:left;" id="2-context-is-everything-why-log-dat"><b>2️⃣ Context Is Everything: Why Log Data Alone Fails AI Systems</b></h4><p class="paragraph" style="text-align:left;">The most common mistake organizations make when attempting to build AI SOC capabilities centers on data completeness. Traditional SIEMs excel at processing logs and events, but AI systems require substantially more context to produce reliable, actionable results.</p><p class="paragraph" style="text-align:left;">Ariful illustrates this with a concrete example: &quot;<i>Let&#39;s say you&#39;re trying to do insider threat detection. One behavior is somebody without edit permissions copying files and making them public. To build that detection, you need event data about the action, config information on the resource, and permission data to determine if they had edit rights.&quot;</i></p><p class="paragraph" style="text-align:left;">This pattern repeats across cloud security use cases. Investigating anomalous S3 bucket access requires understanding: Who performed the action? What IAM role or user identity was assumed? Who provisioned the S3 bucket originally? What sensitivity classification applies to the objects? Which specific keys were accessed?</p><p class="paragraph" style="text-align:left;"><b>&quot;</b><i>You have to give LLMs the right context. You can remove the guesswork,</i><b>&quot;</b> Ariful explains, drawing a human analogy. &quot;<i>If you ask me a vague question, my thought process could go many different directions. But if you remove the guesswork and give me preciseness plus reasoning freedom, you&#39;ll get consistent answers even between different people. That&#39;s exactly what we&#39;re trying to do with LLMs to remove guesswork by providing context from logs, config, code, and business understanding.</i>&quot;</p><p class="paragraph" style="text-align:left;">This insight proved pivotal in Exaforce&#39;s development journey. In their first year, they started with third-party detections and leveraged LLMs for triage. &quot;<i>The results were unpredictable, not precise, not what we expected,</i>&quot; Ariful recalls. &quot;<i>It really boiled down to the data. We needed config context, permission information, code context for GitHub not just event logs.</i>&quot;</p><h3 class="heading" style="text-align:left;" id="3-detection-engineering-at-scale-th"><b>3️⃣ Detection Engineering at Scale: The SaaS Blind Spot</b></h3><p class="paragraph" style="text-align:left;">Modern enterprises operate across dozens of SaaS platforms: GitHub for code, Snowflake for analytics, Google Workspace for collaboration, OpenAI for AI capabilities. A critical gap emerges: <b>none of these platforms provide native threat detection capabilities</b>.</p><p class="paragraph" style="text-align:left;">&quot;<i>GitHub has no native threat detection,</i>&quot; Ariful points out. &quot;<i>If you have a personal access token or SSH key compromised, you need your own detections to figure that out.</i>&quot; This reality creates a detection coverage gap that traditional endpoint and email security tools don&#39;t address those problems are largely solved by CrowdStrike, Sentinel One, and mature email security providers.</p><p class="paragraph" style="text-align:left;">Exaforce focused detection engineering efforts specifically on SaaS platforms, but the work required goes far beyond writing simple correlation rules. <b>&quot;</b><i>You need domain-specific understanding of data. You almost need to build domain-specific knowledge by understanding events, resources, and platform intricacies for every single data source. GWS has its own intricacies. GitHub has its own intricacies.</i><b>&quot;</b></p><p class="paragraph" style="text-align:left;">Detection engineering at this level requires understanding platform-specific concepts like GitHub personal access tokens, Google Workspace OAuth scopes, and Snowflake account privileges. Teams must then build statistical models on top of this domain knowledge to identify anomalous patterns. &quot;<i>You have to figure out what is potentially anomalous, build statistical models, and determine thresholds,</i>&quot; Ariful explains. &quot;<i>It&#39;s hard to just build rule-based detections where step one happens, step two happens, fire alert. It&#39;s much more complex. You need anomaly detection capabilities that many SIEMs simply don&#39;t provide.</i>&quot;</p><p class="paragraph" style="text-align:left;">This detection engineering challenge explains why organizations with strong detection engineering teams still struggle with SaaS coverage. Even with skilled engineers, building and maintaining domain-specific detection logic across a dozen SaaS platforms becomes a Sisyphean task without purpose-built tooling and data integration.</p><h3 class="heading" style="text-align:left;" id="4-the-trust-and-transparency-proble"><b>4️⃣ The Trust and Transparency Problem: How Do You Know AI Got It Right?</b></h3><p class="paragraph" style="text-align:left;">Perhaps the most challenging question for AI SOC platforms centers on trust. In security operations, false positives erode analyst confidence while false negatives create blind spots. When AI makes triage decisions, security leaders rightfully ask: &quot;<i>How do I know it got this right?</i>&quot;</p><p class="paragraph" style="text-align:left;">Ariful&#39;s response draws again on the autonomous vehicle analogy: &quot;<i>There&#39;s no magic pixie dust for trust. For some time, you didn&#39;t see Waymo vehicles drive autonomously; you saw them with somebody inside. That&#39;s the phase we&#39;re in with SOC operations where you still need people driving.</i>&quot;</p><p class="paragraph" style="text-align:left;">Exaforce approached this through three mechanisms:</p><p class="paragraph" style="text-align:left;"><b>Transparency Through Complete Context</b>: &quot;<i>You have to provide every aspect of the decision, the data the agent relied on, all relevant questions it answered to reach conclusions. You can&#39;t ask people to context switch to other data platforms to validate decisions.</i>&quot; The system must surface all reasoning paths and supporting evidence inline.</p><p class="paragraph" style="text-align:left;"><b>Human-in-the-Loop Learning</b>: Exaforce built a dedicated team to review AI triage decisions as humans, labeling true positives versus false positives. This evolved into a Managed Detection and Response (MDR) service that serves dual purposes: providing customers with better outcomes through human oversight while simultaneously improving the product through labeled training data. <b>&quot;</b><i>We invest in MDR not just as a service but as a mechanism to build better technology,</i><b>&quot;</b> Ariful explains.</p><p class="paragraph" style="text-align:left;"><b>Continuous Learning from Historical Context</b>: The system maintains historical data about past detections and their classifications. When a specific alert type has been marked as a false positive with documented reasoning, that context informs future triage decisions. Organizations can also provide business context knowledge bases, operational playbooks, environment-specific details that the system incorporates into every analysis.</p><p class="paragraph" style="text-align:left;">&quot;<i>There&#39;s no easy answer</i>,&quot; Ariful admits. &quot;<i>It&#39;s a combination of transparency, human validation loops, and continuous learning from historical classifications that helps us get better results over time.</i>&quot;</p><h3 class="heading" style="text-align:left;" id="5-rethinking-soc-teams-the-rise-of-"><b>5️⃣ Rethinking SOC Teams: The Rise of Full-Stack Security Engineers</b></h3><p class="paragraph" style="text-align:left;">If AI can handle alert triage, investigation assistance, and even multi-step response workflows, what does this mean for SOC staffing? Ariful sees a fundamental shift emerging that elevates rather than replaces human talent.</p><p class="paragraph" style="text-align:left;"><b>&quot;</b><i>I really think the future is where security engineers become full stack. You&#39;re not an analyst,</i><b>&quot;</b> Ariful states. &quot;<i>Your best analyst doesn&#39;t want to be the guy that&#39;s just doing investigations. Your best analyst is probably someone incredibly valuable to the organization in many other ways.</i>&quot;</p><p class="paragraph" style="text-align:left;">This perspective aligns with what Exaforce observes across customer engagements. Growth-stage companies rethinking their SOC approach are hiring differently: &quot;<i>They&#39;re not hiring analysts. They&#39;re hiring individuals that can perform all these tasks: security engineering, some detection work, incident response because they&#39;re going to leverage a combination of human talent and AI.</i>&quot;</p><p class="paragraph" style="text-align:left;">The traditional SOC staffing model buying a SIEM, hiring detection engineers, hiring L1/L2 analysts requires massive upfront capital and headcount investment with value realization potentially taking more than a year. In contrast, an AI-native approach allows a small team of full-stack security engineers to leverage technology for the mundane work while focusing on high-value threat hunting, complex investigations, and security engineering projects.</p><p class="paragraph" style="text-align:left;">&quot;<i>If you&#39;re in the SOC, leverage AI to be a builder and a better defender not to do mundane tasks,</i>&quot; Ariful emphasizes. This shift particularly benefits early-career security professionals. Those with 2-3 years of experience who know enough to ask good questions but want to advance to senior roles can use AI capabilities to punch above their weight class. They can engage with complex investigations and advanced analysis previously reserved for principal-level engineers, while AI handles the repetitive triage work that has historically created burnout and high turnover among SOC analysts.</p><h3 class="heading" style="text-align:left;" id="6-beyond-bedrock-the-engineering-re"><b>6️⃣ Beyond Bedrock: The Engineering Reality of Production AI Systems</b></h3><p class="paragraph" style="text-align:left;">For organizations building on AWS, Amazon Bedrock provides an obvious starting point for LLM capabilities. Exaforce leverages Bedrock extensively and even won AWS&#39;s Partner Startup of the Year award. However, Ariful cautions that Bedrock alone doesn&#39;t constitute a production AI agent system.</p><p class="paragraph" style="text-align:left;">&quot;<i>Bedrock is incredibly beneficial it&#39;s a native AWS service, so all your data stays within AWS infrastructure, which helps with data residency and sovereignty questions,</i>&quot; Ariful notes. &quot;But it&#39;s not just about Bedrock. Building a robust agent system goes well beyond what Bedrock offers.&quot;</p><p class="paragraph" style="text-align:left;">Production requirements that teams must build themselves include:</p><ul><li><p class="paragraph" style="text-align:left;"><b>Retry mechanisms</b>: When agents reach out to third-party systems or individuals, how do you handle failures?</p></li><li><p class="paragraph" style="text-align:left;"><b>Asynchronous processing</b>: If an agent performs multiple tasks, should they execute sequentially or in parallel?</p></li><li><p class="paragraph" style="text-align:left;"><b>Upgrade handling</b>: When upgrading software, how do you ensure in-flight agent tasks complete or restart appropriately?</p></li><li><p class="paragraph" style="text-align:left;"><b>State management</b>: How do you track agent progress across distributed systems?</p></li></ul><p class="paragraph" style="text-align:left;">Exaforce&#39;s engineering team evaluated open-source frameworks like LangChain but ultimately built custom infrastructure because production requirements exceeded what generic frameworks provided. &quot;<i>We obviously started with Bedrock quickly that&#39;s the benefit of cloud,</i>&quot; Ariful explains. &quot;<i>But then there&#39;s all this other infrastructure you have to build for enterprise reliability.</i>&quot;</p><p class="paragraph" style="text-align:left;">This reality check matters for security leaders evaluating build-versus-buy decisions. The impressive demos of LLMs solving problems don&#39;t capture the operational complexity of running agent systems at scale in production security environments with SLAs and compliance requirements.</p><hr class="content_break"><h2 class="heading" style="text-align:left;" id="related-resources">RELATED RESOURCES</h2><ul><li><p class="paragraph" style="text-align:left;"><b>Threat Intelligence & Research</b></p><ul><li><p class="paragraph" style="text-align:left;"><a class="link" href="https://www.cisa.gov/news-events/directives/ed-26-01?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=1-73b-veeam-securiti-ai-deal-f5-zero-day-risk-reality-of-building-an-ai-native-soc-architecture" target="_blank" rel="noopener noreferrer nofollow">CISA Emergency Directive 26-01 Full Text</a> - Complete guidance on F5 BIG-IP remediation requirements</p></li><li><p class="paragraph" style="text-align:left;"><a class="link" href="https://blog.trailofbits.com/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=1-73b-veeam-securiti-ai-deal-f5-zero-day-risk-reality-of-building-an-ai-native-soc-architecture" target="_blank" rel="noopener noreferrer nofollow">Trail of Bits: Prompt Injection to RCE Research</a> - Technical deep-dive on AI agent security vulnerabilities</p></li><li><p class="paragraph" style="text-align:left;"><a class="link" href="https://docs.aws.amazon.com/whitepapers/latest/aws-multi-region-fundamentals/aws-multi-region-fundamentals.html?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=1-73b-veeam-securiti-ai-deal-f5-zero-day-risk-reality-of-building-an-ai-native-soc-architecture" target="_blank" rel="noopener noreferrer nofollow">AWS Multi-Region Architecture Best Practices</a> - Official guidance on region failover design</p></li></ul></li><li><p class="paragraph" style="text-align:left;"><b>AI Security & Development</b></p><ul><li><p class="paragraph" style="text-align:left;"><a class="link" href="https://docs.aws.amazon.com/bedrock/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=1-73b-veeam-securiti-ai-deal-f5-zero-day-risk-reality-of-building-an-ai-native-soc-architecture" target="_blank" rel="noopener noreferrer nofollow">Amazon Bedrock Documentation</a> - Technical reference for building AI applications on AWS</p></li><li><p class="paragraph" style="text-align:left;"><a class="link" href="https://python.langchain.com/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=1-73b-veeam-securiti-ai-deal-f5-zero-day-risk-reality-of-building-an-ai-native-soc-architecture" target="_blank" rel="noopener noreferrer nofollow">LangChain Documentation</a> - Framework for developing LLM applications</p></li><li><p class="paragraph" style="text-align:left;"><a class="link" href="https://owasp.org/www-project-top-10-for-large-language-model-applications/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=1-73b-veeam-securiti-ai-deal-f5-zero-day-risk-reality-of-building-an-ai-native-soc-architecture" target="_blank" rel="noopener noreferrer nofollow">OWASP LLM Top 10</a> - Security risks in LLM applications</p></li></ul></li></ul><hr class="content_break"><h2 class="heading" style="text-align:left;" id="related-podcast-episodes"><b>Related Podcast Episodes 🎧</b></h2><div class="image"><a class="image__link" href="https://www.cloudsecuritypodcast.tv/videos/incident-response-of-kubernetes-and-how-to-automate-containment?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=1-73b-veeam-securiti-ai-deal-f5-zero-day-risk-reality-of-building-an-ai-native-soc-architecture" rel="noopener" target="_blank"><img alt="" class="image__image" style="" src="https://media.beehiiv.com/cdn-cgi/image/fit=scale-down,format=auto,onerror=redirect,quality=80/uploads/asset/file/2d80873c-bd9c-4023-baf0-38637b989110/S06_Ariful_Huq.jpg?t=1761178785"/></a><div class="image__source"><a class="image__source_link" href="https://www.cloudsecuritypodcast.tv/videos/can-you-build-an-ai-soc-with-claude-code-the-reality-vs-hype?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=1-73b-veeam-securiti-ai-deal-f5-zero-day-risk-reality-of-building-an-ai-native-soc-architecture" rel="noopener" target="_blank"><span class="image__source_text"><p>Can You Build an AI SOC with Claude Code? The Reality vs. Hype</p></span></a></div></div><h3 class="heading" style="text-align:left;" id="question-for-you-reply-to-this-emai">Question for you? (Reply to this email)</h3><p class="paragraph" style="text-align:left;">⚙️ Is your SOC team spending more time managing queries or hunting threats?</p><p class="paragraph" style="text-align:left;">Next week, we&#39;ll explore another critical aspect of cloud security. Stay tuned!</p><hr class="content_break"><p class="paragraph" style="text-align:left;">📬 Want weekly expert takes on AI & Cloud Security? [<a class="link" href="https://www.cloudsecuritynewsletter.com/subscribe?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=1-73b-veeam-securiti-ai-deal-f5-zero-day-risk-reality-of-building-an-ai-native-soc-architecture" target="_blank" rel="noopener noreferrer nofollow">Subscribe here</a>]”</p><hr class="content_break"><p class="paragraph" style="text-align:left;"><span style="color:rgb(238, 40, 60);"><b><a class="link" href="mailto:info@cloudsecuritypodcast.tv" target="_blank" rel="noopener noreferrer nofollow">We would love to hear from you</a></b></span>📢 for a feature or topic request or if you would like to sponsor an edition of Cloud Security Newsletter. </p><p class="paragraph" style="text-align:left;">Thank you for continuing to subscribe and Welcome to the new members in tis newsletter community💙</p><p class="paragraph" style="text-align:start;">Peace!</p><p class="paragraph" style="text-align:start;"><a class="link" href="https://www.linkedin.com/in/shilpi-bhattacharjee/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=1-73b-veeam-securiti-ai-deal-f5-zero-day-risk-reality-of-building-an-ai-native-soc-architecture" target="_blank" rel="noopener noreferrer nofollow">Shilpi Bhattacharjee</a></p><div class="image"><a class="image__link" href="https://www.cloudsecuritypodcast.tv/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=1-73b-veeam-securiti-ai-deal-f5-zero-day-risk-reality-of-building-an-ai-native-soc-architecture" rel="noopener" target="_blank"><img alt="" class="image__image" style="" src="https://media.beehiiv.com/cdn-cgi/image/fit=scale-down,format=auto,onerror=redirect,quality=80/uploads/asset/file/dc094a1c-678a-43e0-adc9-1bee6c3499e2/CSP_Logo_Blue_ScreenRes_3000x3000_v2.jpg"/></a></div><div class="button" style="text-align:left;"><a target="_blank" rel="noopener nofollow noreferrer" class="button__link" style="" href="{{rp_referral_hub_url}}"><span class="button__text" style=""> Share the newsletter </span></a></div><p class="paragraph" style="text-align:left;">Was this forwarded to you? You can <a class="link" href="https://www.cloudsecuritynewsletter.com/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=1-73b-veeam-securiti-ai-deal-f5-zero-day-risk-reality-of-building-an-ai-native-soc-architecture" target="_blank" rel="noopener noreferrer nofollow">Sign up here</a>, to join our growing readership.</p><p class="paragraph" style="text-align:left;">Want to <b>sponsor</b> the next newsletter edition! <a class="link" href="mailto:info@cloudsecuritypodcast.tv" target="_blank" rel="noopener noreferrer nofollow">Lets make it happen </a></p><p class="paragraph" style="text-align:left;">Have you joined our FREE <b>Monthly</b> <a class="link" href="https://www.cloudsecuritybootcamp.com/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=1-73b-veeam-securiti-ai-deal-f5-zero-day-risk-reality-of-building-an-ai-native-soc-architecture" target="_blank" rel="noopener noreferrer nofollow">Cloud Security Bootcamp</a> yet?</p><p class="paragraph" style="text-align:left;">checkout our <b>sister podcast</b> <a class="link" href="https://www.youtube.com/@AISecurityPodcast?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=1-73b-veeam-securiti-ai-deal-f5-zero-day-risk-reality-of-building-an-ai-native-soc-architecture" target="_blank" rel="noopener noreferrer nofollow">AI Security Podcast</a></p></div><div class='beehiiv__footer'><br class='beehiiv__footer__break'><hr class='beehiiv__footer__line'><a target="_blank" class="beehiiv__footer_link" style="text-align: center;" href="https://www.beehiiv.com/?utm_campaign=71c7deb7-560d-4296-add3-ffa643d77bd5&utm_medium=post_rss&utm_source=cloud_security_newsletter">Powered by beehiiv</a></div></div>
  ]]></content:encoded>
</item>

      <item>
  <title>🚨 F5 BIG-IP Breach Exposes Supply Chain Risk: Lessons from Automating Incident Response in Hybrid Cloud Environments</title>
  <description>Nation-state hackers didn’t just breach F5, they exposed how fragile cloud-era supply chains remain. Add Harvard’s Oracle zero-day and GitHub’s AI-powered data leak, and you see why automation, not dashboards, defines modern incident response.Featured expert Damien Burks shares proven strategies for automating incident response in containerized environments, particularly for Kubernetes and EKS clusters in regulated industries.</description>
      <enclosure url="https://media.beehiiv.com/cdn-cgi/image/fit=scale-down,format=auto,onerror=redirect,quality=80/uploads/asset/file/264cb096-5fc1-4667-a5d5-2f188f29e5b8/Screenshot_2025-10-15_at_10.52.25_PM.png" length="1978975" type="image/png"/>
  <link>https://www.cloudsecuritynewsletter.com/p/f5-big-ip-breach-exposes-supply-chain-risk-lessons-from-automating-incident-response-in-hybrid-cloud</link>
  <guid isPermaLink="true">https://www.cloudsecuritynewsletter.com/p/f5-big-ip-breach-exposes-supply-chain-risk-lessons-from-automating-incident-response-in-hybrid-cloud</guid>
  <pubDate>Wed, 15 Oct 2025 22:01:13 +0000</pubDate>
  <atom:published>2025-10-15T22:01:13Z</atom:published>
    <dc:creator>Ashish Rajan</dc:creator>
  <content:encoded><![CDATA[
    <div class='beehiiv'><style>
  .bh__table, .bh__table_header, .bh__table_cell { border: 1px solid #C0C0C0; }
  .bh__table_cell { padding: 5px; background-color: #FFFFFF; }
  .bh__table_cell p { color: #2D2D2D; font-family: 'Helvetica',Arial,sans-serif !important; overflow-wrap: break-word; }
  .bh__table_header { padding: 5px; background-color:#F1F1F1; }
  .bh__table_header p { color: #2A2A2A; font-family:'Trebuchet MS','Lucida Grande',Tahoma,sans-serif !important; overflow-wrap: break-word; }
</style><div class='beehiiv__body'><h2 class="heading" style="text-align:left;" id="hello-from-the-cloudverse"><span style="background-color:#28EEDA;"><b>Hello from the Cloud-verse!</b></span></h2><p class="paragraph" style="text-align:left;">This week’s Cloud Security Newsletter Topic we cover - <b>Building Automated Incident Response for the Cloud Age </b><a class="link" href="#cloud-security-topic-of-the-week" rel="noopener noreferrer nofollow">(continue reading)</a> </p><hr class="content_break"><div class="image"><a class="image__link" href="https://www.cloudsecuritypodcast.tv/videos/incident-response-of-kubernetes-and-how-to-automate-containment?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=f5-big-ip-breach-exposes-supply-chain-risk-lessons-from-automating-incident-response-in-hybrid-cloud-environments" rel="noopener" target="_blank"><img alt="" class="image__image" style="" src="https://media.beehiiv.com/cdn-cgi/image/fit=scale-down,format=auto,onerror=redirect,quality=80/uploads/asset/file/264cb096-5fc1-4667-a5d5-2f188f29e5b8/Screenshot_2025-10-15_at_10.52.25_PM.png?t=1760565178"/></a><div class="image__source"><span class="image__source_text"><p>This image was generated by AI. It&#39;s still experimental, so it might not be a perfect match!</p></span></div></div><p class="paragraph" style="text-align:left;"><b>Incase, this is your 1st Cloud Security Newsletter! You are in good company! </b><br>You are reading this issue along with your friends and colleagues from companies like <i>Netflix</i>, Citi, <i>JP Morgan, Linkedin, Reddit, Github, Gitlab, CapitalOne, Robinhood, HSBC, British Airways, Airbnb, Block, Booking Inc & more</i> who subscribe to this newsletter, who like you want to learn what’s new with Cloud Security each week from their industry peers like many others who listen to <a class="link" href="https://open.spotify.com/show/6LZgeh4GecRYPc0WrwMB4I?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=f5-big-ip-breach-exposes-supply-chain-risk-lessons-from-automating-incident-response-in-hybrid-cloud-environments" target="_blank" rel="noopener noreferrer nofollow">Cloud Security Podcast</a> & <a class="link" href="https://open.spotify.com/show/3nV4eijfzdHKIvDOaycVII?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=f5-big-ip-breach-exposes-supply-chain-risk-lessons-from-automating-incident-response-in-hybrid-cloud-environments" target="_blank" rel="noopener noreferrer nofollow">AI Security Podcast</a> every week.</p><hr class="content_break"><p class="paragraph" style="text-align:left;">Welcome to this week’s Cloud Security Newsletter</p><p class="paragraph" style="text-align:left;">The past week has reinforced a sobering reality for enterprise cloud security teams: sophisticated nation-state actors are targeting critical infrastructure software, and our incident response capabilities must evolve to match the complexity of modern cloud architectures. From F5&#39;s disclosure of persistent access by nation-state actors to Harvard&#39;s confirmation as a victim in the Oracle EBS zero-day campaign, the stakes have never been higher.</p><p class="paragraph" style="text-align:left;">This week, we&#39;re featuring insights from <b>Damien Burks</b>, a seasoned cloud security engineer with over six years of experience building incident response platforms in highly regulated financial services environments. Damien has hands-on experience automating containment for complex Kubernetes environments and brings practical wisdom on navigating the increasingly sophisticated cloud security landscape.</p><hr class="content_break"><h2 class="heading" style="text-align:left;" id="tldr-for-busy-readers">📰 TL;DR for Busy Readers</h2><ul><li><p class="paragraph" style="text-align:left;"><b>F5 BIG-IP breach:</b> Nation-state actors stole source code; treated as supply-chain exposure for all BIG-IP deployments including cloud VE instances<b>.</b> Treat F5 BIG-IP as software provenance risk; verify image digests and lock down management planes now.</p></li><li><p class="paragraph" style="text-align:left;"><b>Oracle EBS zero-day @ Harvard</b>: 1.3TB exfiltrated via CVE-2025-61882; check EBS integrations with cloud IAM and data warehouses</p></li><li><p class="paragraph" style="text-align:left;"><b>GitHub Copilot CamoLeak:</b> CamoLeak proves AI dev tools can exfiltrate secrets via hidden prompt injection; restrict scopes and disable risky render paths.</p></li><li><p class="paragraph" style="text-align:left;"><b>AWS & Google updates: </b>Guardrails for AI agents + Drive anti-ransomware AI align with your data-protection strategy.</p></li><li><p class="paragraph" style="text-align:left;"><b>EKS containment:</b> Use <b>in-VPC automation</b> (e.g., Lambda in cluster VPC/SG) to execute <span style="color:rgb(24, 128, 56);">kubectl</span> playbooks without bastions.</p></li></ul><hr class="content_break"><h2 class="heading" style="text-align:left;" id="this-weeks-security-news">📰 <b>THIS WEEK&#39;S SECURITY HEADLINES</b></h2><h3 class="heading" style="text-align:left;" id="google-cloud-functions-vulnerabilit"><b>1. F5 BIG-IP Source Code Stolen in Nation-State Breach</b></h3><p class="paragraph" style="text-align:left;"><b>What Happened:</b> F5 disclosed that sophisticated nation-state actors maintained persistent access to its internal systems from an unknown date until discovery on August 9, 2025, successfully exfiltrating portions of BIG-IP source code and information about undisclosed vulnerabilities. The U.S. Department of Justice granted a one-month disclosure delay citing national security concerns marking one of the first public acknowledgments of DOJ intervention in SEC cybersecurity disclosures. CISA issued Emergency Directive ED 26-01 ordering all federal civilian agencies to immediately mitigate unauthorized access risks and report detailed F5 BIG-IP inventories by December 3, 2025.</p><p class="paragraph" style="text-align:left;"><b>Why This Matters to Cloud Security Leaders:</b></p><p class="paragraph" style="text-align:left;">This incident represents a textbook supply-chain compromise affecting thousands of enterprises running BIG-IP appliances in cloud VPCs and on-premises data centers. For cloud security architects, the implications extend beyond traditional on-prem ADC deployments. Many organizations deploy F5 virtual editions (VE) from AWS Marketplace, Azure Marketplace, or as custom AMIs all potential attack surfaces if adversaries possess source code and vulnerability intelligence.</p><p class="paragraph" style="text-align:left;">The multi-month detection gap is particularly concerning. As Damien Burks emphasizes in this week&#39;s featured interview: <b>&quot;</b><i>Without automation, in a regulated environment, it&#39;s gonna take you hours</i><b>&quot;</b> to respond to incidents in complex environments. The F5 breach underscores why detection alone is insufficient. Organizations need automated response capabilities that can contain threats within minutes, not hours.</p><p class="paragraph" style="text-align:left;">From an architectural perspective, BIG-IP often sits at the network boundary between internet-facing services and application tiers, making compromised instances ideal pivot points for lateral movement. Cloud security teams should treat this as an opportunity to harden management plane access, implement zero-trust network segmentation, and deploy enhanced telemetry on ADC zones.</p><p class="paragraph" style="text-align:left;"><b>Sources:</b> <a class="link" href="https://www.bloomberg.com/news/articles/2025-10-15/nation-state-hackers-breached-cyber-firm-f5-networks-stole-code?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=f5-big-ip-breach-exposes-supply-chain-risk-lessons-from-automating-incident-response-in-hybrid-cloud-environments" target="_blank" rel="noopener noreferrer nofollow">Bloomberg</a>, <a class="link" href="https://www.securityweek.com/f5-blames-nation-state-hackers-for-theft-of-source-code-and-vulnerability-data/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=f5-big-ip-breach-exposes-supply-chain-risk-lessons-from-automating-incident-response-in-hybrid-cloud-environments" target="_blank" rel="noopener noreferrer nofollow">SecurityWeek</a><span style="color:rgb(17, 85, 204);"><span style="text-decoration:underline;">,</span></span><span style="color:rgb(17, 85, 204);"> </span><a class="link" href="https://therecord.media/cisa-directive-f5-nation-state-incident?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=f5-big-ip-breach-exposes-supply-chain-risk-lessons-from-automating-incident-response-in-hybrid-cloud-environments" target="_blank" rel="noopener noreferrer nofollow">The Record</a><span style="color:rgb(17, 85, 204);"><span style="text-decoration:underline;">,</span></span><span style="color:rgb(17, 85, 204);"> </span><a class="link" href="https://www.helpnetsecurity.com/2025/10/15/f5-big-ip-data-breach/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=f5-big-ip-breach-exposes-supply-chain-risk-lessons-from-automating-incident-response-in-hybrid-cloud-environments" target="_blank" rel="noopener noreferrer nofollow">Help Net Security</a></p><hr class="content_break"><h3 class="heading" style="text-align:left;" id="2-harvard-confirms-breach-in-oracle"><b>2. Harvard Confirms Breach in Oracle EBS Zero-Day Campaign</b></h3><p class="paragraph" style="text-align:left;"><b>What Happened:</b> Harvard University became the first confirmed victim of a widespread Oracle E-Business Suite exploitation campaign orchestrated by the Cl0p ransomware group. Attackers exploited CVE-2025-61882 (CVSS 9.8), a critical authentication bypass vulnerability allowing unauthenticated remote code execution, to exfiltrate 1.3 TB of data including financial records, HR information, and internal source code. Google Threat Intelligence and Mandiant report that dozens of organizations were targeted beginning as early as July 10, 2025, with the campaign accelerating through October.</p><p class="paragraph" style="text-align:left;"><b>Why This Matters to Cloud Security Leaders:</b></p><p class="paragraph" style="text-align:left;">Oracle EBS presents a critical attack surface that often flies under the radar of cloud security programs focused on native cloud services. EBS is deeply embedded in financial operations, human resources, supply chain management, and customer relationship systems across Fortune 500 companies. More importantly for cloud security teams, EBS instances frequently integrate with cloud IAM systems, data warehouses (Snowflake, Redshift, BigQuery), and SaaS platforms (Salesforce, Workday), making compromised instances potential bridges into broader cloud environments.</p><p class="paragraph" style="text-align:left;">The FBI characterized CVE-2025-61882 as a &quot;stop-what-you&#39;re-doing and patch immediately&quot; vulnerability. The multi-month exploitation window before public disclosure highlights a detection gap that sophisticated threat actors exploit repeatedly. As Damien Burks notes when discussing incident response complexity: <b>&quot;You have to understand how your environment is configured to begin with&quot;</b> a principle that applies equally to understanding how legacy ERP systems like Oracle EBS connect to modern cloud infrastructure.</p><p class="paragraph" style="text-align:left;">This incident also reinforces the importance of data classification and egress monitoring. Harvard lost 1.3 TB of sensitive data, suggesting inadequate data loss prevention controls between EBS and external networks. Cloud security teams should implement enhanced monitoring for unusual data exfiltration patterns from any system with access to sensitive data stores.</p><p class="paragraph" style="text-align:left;"><b>Sources:</b> <a class="link" href="https://www.securityweek.com/harvard-is-first-confirmed-victim-of-oracle-ebs-zero-day-hack/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=f5-big-ip-breach-exposes-supply-chain-risk-lessons-from-automating-incident-response-in-hybrid-cloud-environments" target="_blank" rel="noopener noreferrer nofollow">SecurityWeek</a><span style="color:rgb(17, 85, 204);"><span style="text-decoration:underline;">,</span></span><span style="color:rgb(17, 85, 204);"> </span><a class="link" href="https://securityaffairs.com/183379/security/harvard-university-hit-in-oracle-ebs-cyberattack-1-3-tb-of-data-leaked-by-cl0p-group.html?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=f5-big-ip-breach-exposes-supply-chain-risk-lessons-from-automating-incident-response-in-hybrid-cloud-environments" target="_blank" rel="noopener noreferrer nofollow">Security Affairs</a><span style="color:rgb(17, 85, 204);"><span style="text-decoration:underline;">,</span></span><span style="color:rgb(17, 85, 204);"> </span><a class="link" href="https://www.darkreading.com/cyberattacks-data-breaches/harvard-breached-oracle-zero-day-attack?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=f5-big-ip-breach-exposes-supply-chain-risk-lessons-from-automating-incident-response-in-hybrid-cloud-environments" target="_blank" rel="noopener noreferrer nofollow">Dark Reading</a><span style="color:rgb(17, 85, 204);"><span style="text-decoration:underline;">,</span></span><span style="color:rgb(17, 85, 204);"> </span><a class="link" href="https://therecord.media/harvard-says-limited-number-linked-to-data-theft?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=f5-big-ip-breach-exposes-supply-chain-risk-lessons-from-automating-incident-response-in-hybrid-cloud-environments" target="_blank" rel="noopener noreferrer nofollow">The Record</a></p><hr class="content_break"><h3 class="heading" style="text-align:left;" id="camo-leak-git-hub-copilot-vulnerabi"><b>CamoLeak: GitHub Copilot Vulnerability Enables Silent Data Exfiltration</b></h3><p class="paragraph" style="text-align:left;"><b>What Happened:</b> Legit Security detailed CamoLeak, a critical vulnerability chaining remote prompt injection (hidden in pull request content) with an image-proxy CSP bypass to exfiltrate secrets and private repository code via GitHub&#39;s own infrastructure. The vulnerability received a CVSS score of 9.6. GitHub mitigated the issue by disabling image rendering in Copilot Chat.</p><p class="paragraph" style="text-align:left;"><b>Why This Matters to Cloud Security Leaders:</b></p><p class="paragraph" style="text-align:left;">CamoLeak confirms what many security teams feared: AI assistants represent new data egress channels that can bridge from source control systems to external endpoints. For cloud security programs, this vulnerability is particularly concerning because private repositories often contain cloud access keys, API credentials, and information about undisclosed vulnerabilities exactly the type of data exfiltrated in the F5 breach.</p><p class="paragraph" style="text-align:left;">The attack vector is sophisticated yet practical: adversaries can hide malicious instructions in pull request content that Copilot processes, bypassing traditional security controls. As organizations increasingly adopt AI-powered development tools, the attack surface expands beyond traditional application security boundaries into the toolchain itself.</p><p class="paragraph" style="text-align:left;">This connects directly to Damien Burks&#39; emphasis on understanding entry points and preventing incidents before they require response: <b>&quot;Secure coding is definitely recommended. I mean, like it&#39;s enforced in a heavily regulated environment.&quot;</b> The same rigor applied to application code must now extend to AI assistant configurations and permissions.</p><p class="paragraph" style="text-align:left;"><b>Sources:</b> <a class="link" href="https://www.legitsecurity.com/blog/camoleak-critical-github-copilot-vulnerability-leaks-private-source-code?utm_source=chatgpt.com" target="_blank" rel="noopener noreferrer nofollow">Legit Security Research</a></p><hr class="content_break"><h3 class="heading" style="text-align:left;" id="4-level-blue-to-acquire-cybereason-"><b>4. LevelBlue to Acquire Cybereason in MDR/XDR Consolidation Move</b></h3><p class="paragraph" style="text-align:left;"><b>What Happened:</b> MSSP LevelBlue signed a definitive agreement to acquire Cybereason (XDR/DFIR platform). SoftBank Corp., SoftBank Vision Fund 2, and Liberty Strategic Capital will become investors in LevelBlue, with former U.S. Treasury Secretary Steven Mnuchin joining the board. The integration will fold Cybereason&#39;s XDR and digital forensics capabilities into LevelBlue&#39;s managed detection and response portfolio.</p><p class="paragraph" style="text-align:left;"><b>Why This Matters to Cloud Security Leaders:</b></p><p class="paragraph" style="text-align:left;">This acquisition represents continued consolidation in the detection and response market, with implications for organizations running XDR agents in cloud workloads. The integration will likely result in bundled MDR/XDR/IR offerings with tighter sensor-to-SOC integration across cloud and endpoint environments.</p><p class="paragraph" style="text-align:left;">For cloud security teams, M&A activity among security vendors creates operational risk around roadmap changes, SKU consolidation, and potential disruptions to existing telemetry pipelines. Organizations running Cybereason agents in containerized workloads (EKS, AKS, GKE) should proactively engage with vendor account teams to understand migration plans and ensure continuity of cloud-native detection capabilities.</p><p class="paragraph" style="text-align:left;"><b>Source:</b><a class="link" href="https://www.businesswire.com/news/home/20251014364175/en/LevelBlue-to-Acquire-Cybereason-Expanding-Global-Leadership-in-Managed-Detection-and-Response-XDR-Incident-Response-and-Threat-Intelligence?utm_source=chatgpt.com" target="_blank" rel="noopener noreferrer nofollow">Business Wire press release; LevelBlue newsroom/blog</a></p><hr class="content_break"><h2 class="heading" style="text-align:left;" id="cloud-security-topic-of-the-week">🎯 Cloud Security Topic of the Week: </h2><h3 class="heading" style="text-align:left;" id="the-kubernetes-incident-response-au"><b>The Kubernetes Incident Response Automation Gap: Why Manual Containment Fails in Regulated Environments</b></h3><ul><li><p class="paragraph" style="text-align:left;">One of the most pressing challenges in cloud incident response doesn&#39;t involve detection tools, threat intelligence, or even adversary tactics; it&#39;s the fundamental operational reality that containerized workloads, particularly Kubernetes clusters, remain extraordinarily difficult to contain during active incidents.</p></li><li><p class="paragraph" style="text-align:left;">This week&#39;s conversation with Damien Burks, who built an incident response platform from the ground up in financial services, reveals a sobering truth: despite billions spent on runtime security and CNAPP solutions, most organizations still manually respond to Kubernetes incidents, often taking hours to achieve containment that should happen in minutes.</p></li></ul><h2 class="heading" style="text-align:left;" id="featured-experts-this-week"><b>Featured Experts This Week </b>🎤</h2><ul><li><p class="paragraph" style="text-align:left;"><b>Damien Burks</b>   Senior Cybersecurity Engineer (FinTech)</p></li><li><p class="paragraph" style="text-align:left;"><b><a class="link" href="https://www.linkedin.com/in/ashishrajan/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=f5-big-ip-breach-exposes-supply-chain-risk-lessons-from-automating-incident-response-in-hybrid-cloud-environments" target="_blank" rel="noopener noreferrer nofollow">Ashish Rajan</a></b> - CISO | Co-Host <a class="link" href="https://open.spotify.com/show/3nV4eijfzdHKIvDOaycVII?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=f5-big-ip-breach-exposes-supply-chain-risk-lessons-from-automating-incident-response-in-hybrid-cloud-environments" target="_blank" rel="noopener noreferrer nofollow">AI Security Podcast</a> , Host of <a class="link" href="https://www.cloudsecuritypodcast.tv/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=f5-big-ip-breach-exposes-supply-chain-risk-lessons-from-automating-incident-response-in-hybrid-cloud-environments" target="_blank" rel="noopener noreferrer nofollow">Cloud Security Podcast</a></p></li></ul><h2 class="heading" style="text-align:left;" id="definitions-and-core-concepts"><b>Definitions and Core Concepts 📚</b></h2><p class="paragraph" style="text-align:left;">Before diving into our insights, let&#39;s clarify some key terms:</p><ul><li><p class="paragraph" style="text-align:left;"><b>EKS (Elastic Kubernetes Service):</b> AWS-managed Kubernetes control plane service. While AWS manages the control plane, customers are responsible for worker nodes, networking, and security configurations.</p></li><li><p class="paragraph" style="text-align:left;"><b>Private Cluster:</b> A Kubernetes cluster configuration where the API endpoint is not accessible from the public internet, requiring requests to originate from within the same VPC or through VPN/VPC peering connections.</p></li><li><p class="paragraph" style="text-align:left;"><b>CNAPP (Cloud-Native Application Protection Platform): </b>Security platform category that combines multiple cloud security capabilities including CSPM, CWPP, and runtime protection for cloud-native applications.</p></li><li><p class="paragraph" style="text-align:left;"><b>CSPM (Cloud Security Posture Management): </b>Tools that continuously monitor cloud infrastructure for misconfigurations and compliance violations against security best practices and regulatory standards.</p></li></ul><hr class="content_break"><h2 class="heading" style="text-align:left;" id="our-insights-from-this-practitioner">💡<b>Our Insights from this Practitioner 🔍</b></h2><h3 class="heading" style="text-align:left;" id="building-automated-incident-respons"><b>Building Automated Incident Response for the Cloud Age (</b><a class="link" href="https://www.cloudsecuritypodcast.tv/videos/incident-response-of-kubernetes-and-how-to-automate-containment?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=f5-big-ip-breach-exposes-supply-chain-risk-lessons-from-automating-incident-response-in-hybrid-cloud-environments" target="_blank" rel="noopener noreferrer nofollow"><b>Full Episode here</b></a><b>)</b></h3><p class="paragraph" style="text-align:left;">The conversation with Damien Burks reveals fundamental truths about incident response that every cloud security leader should internalize: detection is only half the battle, and in complex cloud environments, manual response processes guarantee unacceptable containment times.</p><h3 class="heading" style="text-align:left;" id="1-from-hours-to-minutes-the-case-fo"><b>1️⃣ From Hours to Minutes: The Case for Automation in Regulated Environments</b></h3><p class="paragraph" style="text-align:left;">Perhaps the most striking insight from Damien&#39;s experience comes from his work in financial services, where he built an incident response platform specifically designed for private EKS clusters. The contrast between manual and automated approaches is stark.</p><p class="paragraph" style="text-align:left;"><b>&quot;</b><i>Without the automation, in a regulated environment, it&#39;s gonna take you hours,&quot; Damien explains. &quot;I was able to contain an EKS node within about 10 minutes.&quot;</i></p><p class="paragraph" style="text-align:left;">This isn&#39;t merely about efficiency, it&#39;s about reducing blast radius during active incidents. In regulated industries, manual containment requires navigating multiple approval layers, establishing secure access to isolated environments, and coordinating across teams that may span different time zones. Meanwhile, adversaries continue to operate, exfiltrate data, establish persistence mechanisms, and pivot to additional targets.</p><p class="paragraph" style="text-align:left;">The Lambda-based automation Damien developed addresses the core challenge of private clusters: <b>&quot;</b><i>I had to create a Lambda function and basically automate the creation of the Lambda function to throw itself into the cluster&#39;s VPC, subnet, and then also the security group.</i><b>&quot;</b> This approach solves the networking isolation problem that defeats traditional security tools while maintaining security boundaries.</p><p class="paragraph" style="text-align:left;">For security leaders, the lesson is clear: in 2025, incident response automation isn&#39;t optional it&#39;s a fundamental requirement for protecting cloud workloads, particularly in containerized environments where architectural complexity makes manual response impractical.</p><h4 class="heading" style="text-align:left;" id="2-the-sophistication-gap-why-commer"><b>2️⃣ The Sophistication Gap: Why Commercial Tools Fall Short</b></h4><p class="paragraph" style="text-align:left;">Despite the proliferation of CNAPP and CSPM vendors promising comprehensive cloud security, Damien found a consistent gap when evaluating commercial platforms: <b>&quot;</b><i>The majority of what I see is there isn&#39;t a sophisticated approach to automating incident response or containment activities and actions inside of those type of complex environments.</i><b>&quot;</b></p><p class="paragraph" style="text-align:left;">The problem stems from a fundamental misalignment between how security tools are architected and how modern cloud infrastructure actually works. Detection is relatively straightforward; runtime agents can identify suspicious processes, unauthorized network connections, or privilege escalation attempts. But responding to those detections in private, network-isolated clusters requires understanding Kubernetes networking, AWS VPC configurations, IAM-to-RBAC mappings, and the specific architectural choices each organization makes.</p><p class="paragraph" style="text-align:left;"><b>&quot;</b><i>Let&#39;s say for instance, you have a private cluster in EKS,&quot; Damien explains. &quot;You cannot reach out. To, you have to be in the same networking configuration. Same VPC, subnet and Security Group in order for you to be able to interact with the Kubernetes cluster.</i><b>&quot;</b></p><p class="paragraph" style="text-align:left;">Commercial tools that work across multiple cloud platforms often sacrifice this level of deep integration for broader compatibility. As a result, they excel at telling you what&#39;s wrong but offer little help fixing it within the constraints of your specific architecture.</p><p class="paragraph" style="text-align:left;">This creates an imperative for security teams: build custom automation for your specific environment, or accept that incident response will remain a manual, time-consuming process. Given the stakes particularly in the face of sophisticated nation-state actors as revealed in the F5 breach the choice should be obvious.</p><h3 class="heading" style="text-align:left;" id="3-the-multi-layer-authentication-ch"><b>3️⃣ The Multi-Layer Authentication Challenge</b></h3><p class="paragraph" style="text-align:left;">One of the most technically complex aspects of Kubernetes incident response involves the intersection of cloud IAM and Kubernetes RBAC. Damien describes this challenge in detail:</p><p class="paragraph" style="text-align:left;"><i>&quot;Whatever role that you&#39;re using, you gotta make sure that that role is mapped to the config map that&#39;s attached to an RBAC role within the Kubernetes cluster. If you don&#39;t have that there, then of course it&#39;s not going to give you the permissions or access that you need.&quot;</i></p><p class="paragraph" style="text-align:left;">This represents a fundamental shift from traditional incident response. In conventional cloud environments, IAM permissions alone determine what actions you can perform. In Kubernetes, you need:</p><ol start="1"><li><p class="paragraph" style="text-align:left;"><b>AWS IAM permissions</b> to call EKS control plane APIs</p></li><li><p class="paragraph" style="text-align:left;"><b>Kubernetes RBAC roles</b> defined within the cluster</p></li><li><p class="paragraph" style="text-align:left;"><b>aws-auth ConfigMap mappings</b> linking your IAM identity to Kubernetes identities</p></li><li><p class="paragraph" style="text-align:left;"><b>Network connectivity</b> to the API endpoint (either direct or through VPN/bastion)</p></li><li><p class="paragraph" style="text-align:left;"><b>Valid kubeconfig</b> with authentication tokens</p></li></ol><p class="paragraph" style="text-align:left;">Each layer introduces complexity and potential failure points during incidents. Security teams must document these dependencies in advance and test incident response procedures regularly to ensure all pieces work together under pressure.</p><p class="paragraph" style="text-align:left;">For organizations just beginning their Kubernetes security journey, Damien&#39;s experience offers a clear roadmap: <i>&quot;I need to understand Kubernetes first. I need to understand how it works internally.&quot; </i>Without this foundational knowledge, incident response becomes guesswork and in regulated environments with strict compliance requirements, guesswork is unacceptable.</p><h3 class="heading" style="text-align:left;" id="4-prevention-through-developer-enab"><b>4️⃣ Prevention Through Developer Enablement</b></h3><p class="paragraph" style="text-align:left;">While much of the conversation focuses on response and containment, Damien emphasizes the critical role of prevention, particularly through secure development practices. His perspective reflects years of working across application security, cloud security, and DevSecOps:</p><p class="paragraph" style="text-align:left;"><i>&quot;From an application security side, secure coding is definitely recommended. I mean, like it&#39;s enforced in a heavily regulated environment.&quot;</i></p><p class="paragraph" style="text-align:left;">The layers of defense he describes represent a comprehensive approach:</p><ol start="1"><li><p class="paragraph" style="text-align:left;"><b>Secure coding</b> to prevent injection attacks and common vulnerabilities</p></li><li><p class="paragraph" style="text-align:left;"><b>Dependency scanning</b> to identify vulnerable open-source components</p></li><li><p class="paragraph" style="text-align:left;"><b>Container hardening</b> to limit the impact of application compromise</p></li><li><p class="paragraph" style="text-align:left;"><b>Kubernetes cluster configuration</b> following best practices for network policies, service accounts, and pod security</p></li></ol><p class="paragraph" style="text-align:left;">This layered approach acknowledges a fundamental truth: no single security control will prevent all incidents. Instead, defense-in-depth reduces the attack surface at each layer and increases the work required for adversaries to achieve their objectives.</p><p class="paragraph" style="text-align:left;">Notably, Damien emphasizes the importance of understanding the developer&#39;s perspective when implementing security controls: <i>&quot;Programming is something that a DevSecOps engineer can benefit from...you have to put yourself in a developer&#39;s shoes.&quot; </i>This human-centered approach to security increases adoption of controls and improves the quality of security outcomes.</p><h3 class="heading" style="text-align:left;" id="5-the-specialization-imperative-why"><b>5️⃣ The Specialization Imperative: Why Multi-Cloud Is Chaos</b></h3><p class="paragraph" style="text-align:left;">Perhaps the most controversial insight from this week&#39;s conversation challenges the widespread push toward multi-cloud architectures and multi-platform expertise. Damien is blunt in his assessment:</p><p class="paragraph" style="text-align:left;"><i>&quot;Multi-cloud to me just means chaos. I don&#39;t see that being, that&#39;s, to me, that&#39;s not realistic.&quot;</i></p><p class="paragraph" style="text-align:left;">His reasoning stems from practical experience trying to maintain expertise across platforms: <i>&quot;How can you focus your time across multiple different cloud service providers and stay up to date with everything?&quot;</i></p><p class="paragraph" style="text-align:left;">AWS alone releases hundreds of service updates annually. Azure and Google Cloud maintain similar innovation velocities. For individual practitioners or even small security teams, attempting to master all three platforms simultaneously results in surface-level knowledge of everything and deep expertise in nothing.</p><p class="paragraph" style="text-align:left;"><i>&quot;I think that you get good at one. You specialize in that one. And you lock into that one,&quot; Damien advises. &quot;Growth is T-shaped. You specialize and then spread after the fact.&quot;</i></p><p class="paragraph" style="text-align:left;">This specialization philosophy has important implications for both individual career development and organizational security strategy. For practitioners, deep expertise in one platform makes you more valuable than shallow knowledge of multiple platforms. For organizations, hiring specialists for each cloud platform you use produces better security outcomes than expecting generalists to secure everything.</p><p class="paragraph" style="text-align:left;">The comparison to programming languages illustrates the point: <i>&quot;It&#39;s like learning a programming language, right? While the concepts of parallelism and concurrency, recursion, object oriented programming may be the same, the implementation is very different.&quot;</i></p><p class="paragraph" style="text-align:left;">AWS IAM, Azure AD/Entra, and Google Cloud IAM all implement identity and access management, but the specific mechanisms, best practices, and edge cases differ significantly. Security teams that specialize can navigate these nuances; generalists inevitably make mistakes that sophisticated adversaries exploit.</p><h3 class="heading" style="text-align:left;" id="6-practical-guidance-for-incident-r"><b>6️⃣ Practical Guidance for Incident Responders Transitioning to Cloud</b></h3><p class="paragraph" style="text-align:left;">For incident response professionals with deep data center or traditional security backgrounds, Damien offers specific guidance on transitioning to cloud security:</p><p class="paragraph" style="text-align:left;"><i>&quot;Go get the cloud certifications and learn the theoretical aspects of it. And then some CSPs have incident response practices and guides to help you navigate those things.&quot;</i></p><p class="paragraph" style="text-align:left;">The certification path he recommends provides structured learning: AWS Solutions Architect Associate for foundational knowledge, followed by the Security Specialty certification for deeper security focus. But certifications alone aren&#39;t sufficient:</p><p class="paragraph" style="text-align:left;"><i>&quot;Start playing around in your own project. And then from there you keep moving forward.&quot;</i></p><p class="paragraph" style="text-align:left;">Hands-on experience with cloud services, combined with coding/scripting skills, enables incident responders to build the automation that makes cloud incident response practical. The theoretical knowledge from certifications provides context; the practical experience builds competence.</p><p class="paragraph" style="text-align:left;">Importantly, Damien emphasizes that incident responders already possess critical knowledge that translates directly to cloud: <i>&quot;You have the knowledge and you have the background...You just have to understand how the cloud services all work together and how they can be architected.&quot;</i></p><p class="paragraph" style="text-align:left;">This perspective should encourage traditional security professionals concerned about cloud skills gaps. The fundamentals of incident response, understanding attack vectors, analyzing logs, containing threats, and preserving forensic evidence remain constant. The challenge lies in applying those fundamentals within cloud architectures that differ from traditional data center environments</p><hr class="content_break"><h3 class="heading" style="text-align:left;" id="what-security-leaders-should-do-now">🧭<b> What Security Leaders Should Do Now</b></h3><ol start="1"><li><p class="paragraph" style="text-align:left;"><b>Automated incident response is no longer optional</b> for containerized workloads, particularly in regulated industries where manual processes take hours</p></li><li><p class="paragraph" style="text-align:left;"><b>Private Kubernetes clusters require purpose-built automation</b> that can operate within network isolation constraints</p></li><li><p class="paragraph" style="text-align:left;"><b>Commercial security platforms excel at detection but often lack sophisticated automated response</b> capabilities for complex cloud architectures</p></li><li><p class="paragraph" style="text-align:left;"><b>IAM-to-RBAC mapping complexity</b> in Kubernetes creates authentication challenges that must be solved in advance of incidents</p></li><li><p class="paragraph" style="text-align:left;"><b>Specialization in one cloud platform</b> produces better security outcomes than surface-level multi-cloud expertise</p></li><li><p class="paragraph" style="text-align:left;"><b>Prevention through secure development</b> practices remains crucial, but detection and response must assume prevention will sometimes fail</p></li><li><p class="paragraph" style="text-align:left;"><b>The cloud security role has expanded dramatically</b> to include application security, container security, AI/ML security, and data governance</p></li><li><p class="paragraph" style="text-align:left;"><b>Time-to-containment metrics</b> should be measured and optimized, with targets in minutes rather than hours</p></li><li><p class="paragraph" style="text-align:left;"><b>Build custom automation</b> for environment-specific response actions while leveraging commercial tools for detection and visibility</p></li><li><p class="paragraph" style="text-align:left;"><b>Test incident response procedures regularly</b> in environments that mirror production complexity</p></li></ol><hr class="content_break"><h2 class="heading" style="text-align:left;" id="related-resources">RELATED RESOURCES</h2><ul><li><p class="paragraph" style="text-align:left;"><b>Kubernetes Security & Incident Response:</b></p><ul><li><p class="paragraph" style="text-align:left;">AWS EKS Best Practices Guide for Security:<a class="link" href="https://aws.github.io/aws-eks-best-practices/security/docs/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=f5-big-ip-breach-exposes-supply-chain-risk-lessons-from-automating-incident-response-in-hybrid-cloud-environments" target="_blank" rel="noopener noreferrer nofollow"> https://aws.github.io/aws-eks-best-practices/security/docs/</a></p></li><li><p class="paragraph" style="text-align:left;">Kubernetes Hardening Guide (NSA/CISA):<a class="link" href="https://media.defense.gov/2022/Aug/29/2003066362/-1/-1/0/CTR_KUBERNETES_HARDENING_GUIDANCE_1.2_20220829.PDF?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=f5-big-ip-breach-exposes-supply-chain-risk-lessons-from-automating-incident-response-in-hybrid-cloud-environments" target="_blank" rel="noopener noreferrer nofollow"> https://media.defense.gov/2022/Aug/29/2003066362/-1/-1/0/CTR_KUBERNETES_HARDENING_GUIDANCE_1.2_20220829.PDF</a></p></li><li><p class="paragraph" style="text-align:left;">Falco Open Source Runtime Security:<a class="link" href="https://falco.org/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=f5-big-ip-breach-exposes-supply-chain-risk-lessons-from-automating-incident-response-in-hybrid-cloud-environments" target="_blank" rel="noopener noreferrer nofollow"> https://falco.org/</a></p></li><li><p class="paragraph" style="text-align:left;">Kubernetes Security Checklist and Requirements:<a class="link" href="https://kubernetes.io/docs/concepts/security/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=f5-big-ip-breach-exposes-supply-chain-risk-lessons-from-automating-incident-response-in-hybrid-cloud-environments" target="_blank" rel="noopener noreferrer nofollow"> https://kubernetes.io/docs/concepts/security/</a></p></li></ul><p class="paragraph" style="text-align:left;"><b>DevSecOps & Cloud Security Learning:</b></p><ul><li><p class="paragraph" style="text-align:left;">DevSecOps Blueprint (Damien Burks):<a class="link" href="https://devsecblueprint.com?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=f5-big-ip-breach-exposes-supply-chain-risk-lessons-from-automating-incident-response-in-hybrid-cloud-environments" target="_blank" rel="noopener noreferrer nofollow"> https://devsecblueprint.com</a></p></li><li><p class="paragraph" style="text-align:left;">Cloud Security Alliance (CSA) Security Guidance:<a class="link" href="https://cloudsecurityalliance.org/research/guidance/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=f5-big-ip-breach-exposes-supply-chain-risk-lessons-from-automating-incident-response-in-hybrid-cloud-environments" target="_blank" rel="noopener noreferrer nofollow"> https://cloudsecurityalliance.org/research/guidance/</a></p></li><li><p class="paragraph" style="text-align:left;">OWASP Cloud-Native Application Security Top 10:<a class="link" href="https://owasp.org/www-project-cloud-native-application-security-top-10/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=f5-big-ip-breach-exposes-supply-chain-risk-lessons-from-automating-incident-response-in-hybrid-cloud-environments" target="_blank" rel="noopener noreferrer nofollow"> https://owasp.org/www-project-cloud-native-application-security-top-10/</a></p></li></ul></li></ul><hr class="content_break"><h2 class="heading" style="text-align:left;" id="related-podcast-episodes"><b>Related Podcast Episodes 🎧</b></h2><div class="image"><a class="image__link" href="https://www.cloudsecuritypodcast.tv/videos/incident-response-of-kubernetes-and-how-to-automate-containment?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=f5-big-ip-breach-exposes-supply-chain-risk-lessons-from-automating-incident-response-in-hybrid-cloud-environments" rel="noopener" target="_blank"><img alt="" class="image__image" style="" src="https://media.beehiiv.com/cdn-cgi/image/fit=scale-down,format=auto,onerror=redirect,quality=80/uploads/asset/file/9da5c4a2-8327-44c5-9570-2b697e3e8f4e/S06_Damien_Burks.jpg?t=1760563559"/></a><div class="image__source"><a class="image__source_link" href="https://www.cloudsecuritypodcast.tv/videos/incident-response-of-kubernetes-and-how-to-automate-containment?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=f5-big-ip-breach-exposes-supply-chain-risk-lessons-from-automating-incident-response-in-hybrid-cloud-environments" rel="noopener" target="_blank"><span class="image__source_text"><p>Incident Response of Kubernetes and how to Automate Containment</p></span></a></div></div><h3 class="heading" style="text-align:left;" id="question-for-you-reply-to-this-emai">Question for you? (Reply to this email)</h3><p class="paragraph" style="text-align:left;">⚙️ Have you tested your Kubernetes incident response procedures in a private cluster environment, what was your time-to-containment?</p><p class="paragraph" style="text-align:left;">Next week, we&#39;ll explore another critical aspect of cloud security. Stay tuned!</p><hr class="content_break"><p class="paragraph" style="text-align:left;">📬 Want weekly expert takes on AI & Cloud Security? [<a class="link" href="https://www.cloudsecuritynewsletter.com/subscribe?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=f5-big-ip-breach-exposes-supply-chain-risk-lessons-from-automating-incident-response-in-hybrid-cloud-environments" target="_blank" rel="noopener noreferrer nofollow">Subscribe here</a>]”</p><hr class="content_break"><p class="paragraph" style="text-align:left;"><span style="color:rgb(238, 40, 60);"><b><a class="link" href="mailto:info@cloudsecuritypodcast.tv" target="_blank" rel="noopener noreferrer nofollow">We would love to hear from you</a></b></span>📢 for a feature or topic request or if you would like to sponsor an edition of Cloud Security Newsletter. </p><p class="paragraph" style="text-align:left;">Thank you for continuing to subscribe and Welcome to the new members in tis newsletter community💙</p><p class="paragraph" style="text-align:start;">Peace!</p><p class="paragraph" style="text-align:start;"><a class="link" href="https://www.linkedin.com/in/shilpi-bhattacharjee/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=f5-big-ip-breach-exposes-supply-chain-risk-lessons-from-automating-incident-response-in-hybrid-cloud-environments" target="_blank" rel="noopener noreferrer nofollow">Shilpi Bhattacharjee</a></p><div class="image"><a class="image__link" href="https://www.cloudsecuritypodcast.tv/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=f5-big-ip-breach-exposes-supply-chain-risk-lessons-from-automating-incident-response-in-hybrid-cloud-environments" rel="noopener" target="_blank"><img alt="" class="image__image" style="" src="https://media.beehiiv.com/cdn-cgi/image/fit=scale-down,format=auto,onerror=redirect,quality=80/uploads/asset/file/dc094a1c-678a-43e0-adc9-1bee6c3499e2/CSP_Logo_Blue_ScreenRes_3000x3000_v2.jpg"/></a></div><div class="button" style="text-align:left;"><a target="_blank" rel="noopener nofollow noreferrer" class="button__link" style="" href="{{rp_referral_hub_url}}"><span class="button__text" style=""> Share the newsletter </span></a></div><p class="paragraph" style="text-align:left;">Was this forwarded to you? You can <a class="link" href="https://www.cloudsecuritynewsletter.com/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=f5-big-ip-breach-exposes-supply-chain-risk-lessons-from-automating-incident-response-in-hybrid-cloud-environments" target="_blank" rel="noopener noreferrer nofollow">Sign up here</a>, to join our growing readership.</p><p class="paragraph" style="text-align:left;">Want to <b>sponsor</b> the next newsletter edition! <a class="link" href="mailto:info@cloudsecuritypodcast.tv" target="_blank" rel="noopener noreferrer nofollow">Lets make it happen </a></p><p class="paragraph" style="text-align:left;">Have you joined our FREE <b>Monthly</b> <a class="link" href="https://www.cloudsecuritybootcamp.com/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=f5-big-ip-breach-exposes-supply-chain-risk-lessons-from-automating-incident-response-in-hybrid-cloud-environments" target="_blank" rel="noopener noreferrer nofollow">Cloud Security Bootcamp</a> yet?</p><p class="paragraph" style="text-align:left;">checkout our <b>sister podcast</b> <a class="link" href="https://www.youtube.com/@AISecurityPodcast?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=f5-big-ip-breach-exposes-supply-chain-risk-lessons-from-automating-incident-response-in-hybrid-cloud-environments" target="_blank" rel="noopener noreferrer nofollow">AI Security Podcast</a></p></div><div class='beehiiv__footer'><br class='beehiiv__footer__break'><hr class='beehiiv__footer__line'><a target="_blank" class="beehiiv__footer_link" style="text-align: center;" href="https://www.beehiiv.com/?utm_campaign=2c339fb7-1e5d-4845-a5bf-2d0cd950e276&utm_medium=post_rss&utm_source=cloud_security_newsletter">Powered by beehiiv</a></div></div>
  ]]></content:encoded>
</item>

      <item>
  <title>⚙️ From Asahi’s Ransomware Recovery to Google’s AI Bug Bounty -The SOC’s Big 2025 Reboot</title>
  <description>Asahi’s ransomware recovery and Google’s AI Vulnerability Reward Program highlight how threat and defense are evolving together. Forrester’s Allie Mellen and Cloud Security Podcast host Ashish Rajan share what a modern SOC looks like in 2025 automated, AI-assisted, and built on detection engineering, not ticket queues. How detection engineering and AI agents are transforming security operations in 2025.</description>
      <enclosure url="https://media.beehiiv.com/cdn-cgi/image/fit=scale-down,format=auto,onerror=redirect,quality=80/uploads/asset/file/de576b3b-6d8c-4772-8ed0-499feead8b13/Screenshot_2025-10-08_at_4.11.33_PM.png" length="2211963" type="image/png"/>
  <link>https://www.cloudsecuritynewsletter.com/p/from-asahi-s-ransomware-recovery-to-google-s-ai-bug-bounty-the-soc-s-big-2025-reboot</link>
  <guid isPermaLink="true">https://www.cloudsecuritynewsletter.com/p/from-asahi-s-ransomware-recovery-to-google-s-ai-bug-bounty-the-soc-s-big-2025-reboot</guid>
  <pubDate>Wed, 08 Oct 2025 18:00:00 +0000</pubDate>
  <atom:published>2025-10-08T18:00:00Z</atom:published>
    <dc:creator>Ashish Rajan</dc:creator>
  <content:encoded><![CDATA[
    <div class='beehiiv'><style>
  .bh__table, .bh__table_header, .bh__table_cell { border: 1px solid #C0C0C0; }
  .bh__table_cell { padding: 5px; background-color: #FFFFFF; }
  .bh__table_cell p { color: #2D2D2D; font-family: 'Helvetica',Arial,sans-serif !important; overflow-wrap: break-word; }
  .bh__table_header { padding: 5px; background-color:#F1F1F1; }
  .bh__table_header p { color: #2A2A2A; font-family:'Trebuchet MS','Lucida Grande',Tahoma,sans-serif !important; overflow-wrap: break-word; }
</style><div class='beehiiv__body'><h2 class="heading" style="text-align:left;" id="hello-from-the-cloudverse"><span style="background-color:#28EEDA;"><b>Hello from the Cloud-verse!</b></span></h2><p class="paragraph" style="text-align:left;">This week’s Cloud Security Newsletter Topic we cover - <b>The Truth About AI in the SOC: From Alert Fatigue to Detection Engineering </b><a class="link" href="#cloud-security-topic-of-the-week" rel="noopener noreferrer nofollow">(continue reading)</a> </p><hr class="content_break"><div class="image"><a class="image__link" href="https://www.cloudsecuritypodcast.tv/videos/the-truth-about-ai-in-the-soc-from-alert-fatigue-to-detection-engineering?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=from-asahi-s-ransomware-recovery-to-google-s-ai-bug-bounty-the-soc-s-big-2025-reboot" rel="noopener" target="_blank"><img alt="" class="image__image" style="" src="https://media.beehiiv.com/cdn-cgi/image/fit=scale-down,format=auto,onerror=redirect,quality=80/uploads/asset/file/de576b3b-6d8c-4772-8ed0-499feead8b13/Screenshot_2025-10-08_at_4.11.33_PM.png?t=1759936315"/></a><div class="image__source"><span class="image__source_text"><p>This image was generated by AI. It&#39;s still experimental, so it might not be a perfect match!</p></span></div></div><p class="paragraph" style="text-align:left;"><b>Incase, this is your 1st Cloud Security Newsletter! You are in good company! </b><br>You are reading this issue along with your friends and colleagues from companies like <i>Netflix</i>, Citi, <i>JP Morgan, Linkedin, Reddit, Github, Gitlab, CapitalOne, Robinhood, HSBC, British Airways, Airbnb, Block, Booking Inc & more</i> who subscribe to this newsletter, who like you want to learn what’s new with Cloud Security each week from their industry peers like many others who listen to <a class="link" href="https://open.spotify.com/show/6LZgeh4GecRYPc0WrwMB4I?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=from-asahi-s-ransomware-recovery-to-google-s-ai-bug-bounty-the-soc-s-big-2025-reboot" target="_blank" rel="noopener noreferrer nofollow">Cloud Security Podcast</a> & <a class="link" href="https://open.spotify.com/show/3nV4eijfzdHKIvDOaycVII?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=from-asahi-s-ransomware-recovery-to-google-s-ai-bug-bounty-the-soc-s-big-2025-reboot" target="_blank" rel="noopener noreferrer nofollow">AI Security Podcast</a> every week.</p><hr class="content_break"><p class="paragraph" style="text-align:left;">Welcome to this week’s Cloud Security Newsletter</p><p class="paragraph" style="text-align:left;">This week in cloud and cyber, a ransomware recovery, a massive consulting leak, and a fresh wave of AI-driven initiatives redefine the operational risk landscape.<br><br>We examine how Asahi Group’s ransomware recovery, Red Hat’s GitLab breach, and Google’s AI Bug Bounty reveal the dual challenge of defending hybrid infrastructure while embracing generative AI.</p><p class="paragraph" style="text-align:left;">Guiding this week’s theme are insights from Allie Mellen (Principal Analyst, Forrester) and Ashish Rajan (Cloud Security Podcast), who explore what they call the “<i>SOC reset.</i>”<br>“<i>This is a moment of reset… the next five years are gonna be wild,</i>” Allie shared, reflecting on how the role of detection engineering and AI are reshaping security operations.</p><hr class="content_break"><h2 class="heading" style="text-align:left;" id="tldr-for-busy-readers">📰 TL;DR for Busy Readers</h2><ul><li><p class="paragraph" style="text-align:left;"><i><b>Asahi’s ransomware recovery</b></i><i> underscores the cost of weak OT–cloud segmentation.</i></p></li><li><p class="paragraph" style="text-align:left;"><i><b>Red Hat Consulting’s GitLab breach</b></i><i> is a wake-up call for secret sprawl in vendor repositories.</i></p></li><li><p class="paragraph" style="text-align:left;"><i><b>Google’s New AI bug bounty</b></i><i> sets the stage for responsible AI security testing.</i></p></li><li><p class="paragraph" style="text-align:left;"><i><b>SOC modernization</b></i><i> means replacing alert queues with </i><i><b>detection engineering + task-specific AI agents</b></i><i>.</i></p></li><li><p class="paragraph" style="text-align:left;"><i><b>Information sharing gaps</b></i><i> caused by the U.S. government shutdown could hinder threat visibility; strengthen private channels.</i></p></li></ul><hr class="content_break"><h2 class="heading" style="text-align:left;" id="this-weeks-security-news">📰 <b>THIS WEEK&#39;S SECURITY HEADLINES</b></h2><h3 class="heading" style="text-align:left;" id="google-cloud-functions-vulnerabilit"><b>1) Red Hat Consulting GitLab breach exposes 28k repos and 800+ client projects</b></h3><p class="paragraph" style="text-align:left;"><b>What happened:</b> A threat group calling itself “Crimson Collective” claims to have exfiltrated 570GB of data from Red Hat’s internal GitLab system, including customer source code, cloud configurations, and API keys. Red Hat has confirmed the incident and initiated rotations.<br><br><b>Why it matters:</b> Consulting repositories often store privileged cloud access data, CI/CD secrets, and customer network maps making them high-value breach amplifiers.<br><br><b>Action:</b> Rotate all credentials shared with Red Hat since 2020, revoke aged tokens, and ensure that contracts mandate time-based key expiry and minimal retention of customer data.<br><br><b>Sources:</b><a class="link" href="https://www.darkreading.com/application-security/red-hat-widespread-breaches-private-gitlab-repositories?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=from-asahi-s-ransomware-recovery-to-google-s-ai-bug-bounty-the-soc-s-big-2025-reboot" target="_blank" rel="noopener noreferrer nofollow"> Dark Reading</a>,<a class="link" href="https://www.bleepingcomputer.com/news/security/red-hat-confirms-security-incident-after-hackers-breach-gitlab-instance/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=from-asahi-s-ransomware-recovery-to-google-s-ai-bug-bounty-the-soc-s-big-2025-reboot" target="_blank" rel="noopener noreferrer nofollow"> BleepingComputer</a>,<a class="link" href="https://www.securityweek.com/red-hat-confirms-gitlab-instance-hack-data-theft/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=from-asahi-s-ransomware-recovery-to-google-s-ai-bug-bounty-the-soc-s-big-2025-reboot" target="_blank" rel="noopener noreferrer nofollow"> SecurityWeek</a>,<a class="link" href="https://www.redhat.com/en/blog/security-update-incident-related-red-hat-consulting-gitlab-instance?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=from-asahi-s-ransomware-recovery-to-google-s-ai-bug-bounty-the-soc-s-big-2025-reboot" target="_blank" rel="noopener noreferrer nofollow"> Red Hat Official Statement</a></p><h3 class="heading" style="text-align:left;" id="2-us-cybersecurity-information-shar"><b>2) U.S. Cybersecurity Information Sharing Act expires amid government shutdown</b></h3><p class="paragraph" style="text-align:left;"><b>What happened:</b> The U.S. government’s temporary shutdown caused a lapse in the law granting liability protection for private-sector threat-intelligence sharing. Legal advisors warn this could reduce participation by up to 80% until renewal.<br><br><b>Why it matters:</b> Enterprises depending on ISAC/ISAO threat feeds may see slower sharing and weaker correlation visibility.<br><br><b>Action:</b> Use bilateral intelligence-sharing NDAs, reinforce internal intel routing, and diversify with commercial feeds and automated peer exchanges.<br><br><b>Sources:</b><a class="link" href="https://www.washingtonpost.com/technology/2025/10/02/cisa-shutdown-cybersecurity/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=from-asahi-s-ransomware-recovery-to-google-s-ai-bug-bounty-the-soc-s-big-2025-reboot" target="_blank" rel="noopener noreferrer nofollow"> Washington Post</a>,<a class="link" href="https://www.weforum.org/stories/2025/10/key-us-cyber-law-expire-cybersecurity-news/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=from-asahi-s-ransomware-recovery-to-google-s-ai-bug-bounty-the-soc-s-big-2025-reboot" target="_blank" rel="noopener noreferrer nofollow"> World Economic Forum</a></p><h3 class="heading" style="text-align:left;" id="3-qilin-ransomware-halts-asahi-grou"><b>3) Qilin ransomware halts Asahi Group beer production; factories restart</b></h3><p class="paragraph" style="text-align:left;"><b>What happened:</b> The Qilin ransomware gang crippled Asahi’s production operations in Japan and Europe last week. The company has restored operations after isolating OT networks and rebuilding key systems.<br><br><b>Why it matters:</b> Asahi’s experience reinforces that OT–IT convergence can magnify cloud risk. Cloud backups, API tokens, and telemetry data are frequent pivots for ransomware groups.<br><br><b>Action:</b> Segment OT environments, apply immutable storage controls (S3 Object Lock / Azure Immutable Blob), and rotate long-lived service credentials.<br><br><b>Sources:</b> <a class="link" href="https://www.reuters.com/business/asahi-group-restarts-production-after-cyberattack-2025-10-05?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=from-asahi-s-ransomware-recovery-to-google-s-ai-bug-bounty-the-soc-s-big-2025-reboot" target="_blank" rel="noopener noreferrer nofollow">Reuters</a>, <a class="link" href="https://therecord.media/asahi-qilin-ransomware-attack-2025?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=from-asahi-s-ransomware-recovery-to-google-s-ai-bug-bounty-the-soc-s-big-2025-reboot" target="_blank" rel="noopener noreferrer nofollow">The Record</a></p><h3 class="heading" style="text-align:left;" id="4-draft-kings-reports-new-wave-of-c"><b>4) DraftKings reports new wave of credential-stuffing attacks</b></h3><p class="paragraph" style="text-align:left;"><b>What happened:</b> Sports betting platform DraftKings detected mass credential-stuffing campaigns using reused passwords and automated bots. The company enforced password resets and additional MFA challenges.<br><br><b>Why it matters:</b> Consumer cloud apps remain low-hanging fruit for account takeover; credential reuse continues to undermine MFA adoption.<br><br><b>Action:</b> Implement adaptive MFA, WebAuthn for high-risk actions, and layered rate-limiting on login APIs.<br><br><b>Sources:</b> <a class="link" href="https://www.reuters.com/technology/draftkings-investigating-credential-stuffing-cyberattack-2025-10-06?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=from-asahi-s-ransomware-recovery-to-google-s-ai-bug-bounty-the-soc-s-big-2025-reboot" target="_blank" rel="noopener noreferrer nofollow">Reuters</a>, <a class="link" href="https://www.bleepingcomputer.com/news/security/draftkings-credential-stuffing-attack-2025?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=from-asahi-s-ransomware-recovery-to-google-s-ai-bug-bounty-the-soc-s-big-2025-reboot" target="_blank" rel="noopener noreferrer nofollow">BleepingComputer</a></p><h3 class="heading" style="text-align:left;" id="5-discord-breach-linked-to-compromi"><b>5) Discord breach linked to compromised Zendesk third-party provider</b></h3><p class="paragraph" style="text-align:left;"><b>What happened:</b> Attackers compromised a third-party Zendesk instance used by Discord’s support team, accessing user emails and ticket data. Attribution points to the Scattered Lapsus$ Hunters group.<br><br><b>Why it matters:</b> SaaS support ecosystems can become indirect threat paths; sub-processor security is an often-overlooked part of vendor governance.<br><br><b>Action:</b> Require sub-processors to use SSO + FIDO2, apply retention limits, and add DLP controls to ticketing systems.<br><br><b>Sources:</b><a class="link" href="https://research.checkpoint.com/2025/6th-october-threat-intelligence-report/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=from-asahi-s-ransomware-recovery-to-google-s-ai-bug-bounty-the-soc-s-big-2025-reboot" target="_blank" rel="noopener noreferrer nofollow"> Check Point Research</a></p><h3 class="heading" style="text-align:left;" id="6-google-launches-ai-vulnerability-"><b>6) Google launches AI Vulnerability Reward Program</b></h3><p class="paragraph" style="text-align:left;"><b>What happened:</b> Google introduced a dedicated AI bug bounty, offering up to $30,000 for findings such as prompt-injection exploits or unsafe agent actions in Gemini, Search, Workspace, and Google Home.<br><br><b>Why it matters:</b> It formalizes a security-testing channel for generative AI acknowledging that AI-driven features are now a core enterprise attack surface.<br><br><b>Action:</b> Include prompt-injection testing in application security programs, adopt policy-as-code to restrict model tool access, and define AI QA gates before production release.<br><br><b>Sources:</b><a class="link" href="https://www.theverge.com/news/793362/google-ai-security-vulnerability-rewards?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=from-asahi-s-ransomware-recovery-to-google-s-ai-bug-bounty-the-soc-s-big-2025-reboot" target="_blank" rel="noopener noreferrer nofollow"> The Verge</a>, <a class="link" href="https://bughunters.google.com/blog/6116887259840512/announcing-google-s-new-ai-vulnerability-reward-program?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=from-asahi-s-ransomware-recovery-to-google-s-ai-bug-bounty-the-soc-s-big-2025-reboot" target="_blank" rel="noopener noreferrer nofollow">Google Security Blog</a></p><hr class="content_break"><h2 class="heading" style="text-align:left;" id="cloud-security-topic-of-the-week">🎯 Cloud Security Topic of the Week: </h2><h3 class="heading" style="text-align:left;" id="soc-2025-from-alert-queues-to-detec"><b>SOC 2025: From Alert Queues to Detection Engineering and Task-Specific AI Agents</b></h3><p class="paragraph" style="text-align:left;">Traditional L1/L2/L3 SOC tiers are collapsing under alert fatigue and data deluge. Modern teams are shifting to detection engineering, data pipeline optimization, and AI-driven assistance.</p><p class="paragraph" style="text-align:left;">“<i>No one knows how to secure AI… this is a moment of reset,</i>” said Allie Mellen, noting that GenAI will rewrite how security teams manage data and automation.</p><p class="paragraph" style="text-align:left;">Ashish Rajan echoed the same sentiment: “<i>AI should reduce L1 toil so people graduate to L2 work context building, incident narrative, and detection creation.</i>”</p><p class="paragraph" style="text-align:left;"><b>Key transformation patterns:</b></p><ul><li><p class="paragraph" style="text-align:left;"><b>Flattened SOC structure:</b> Move from queue-based triage to cross-functional detection pods that own rules, telemetry, and automation.</p></li><li><p class="paragraph" style="text-align:left;"><b>Data-driven focus:</b> Align log ingestion to active detections and cost efficiency feed only what fuels high-confidence analytics.</p></li><li><p class="paragraph" style="text-align:left;"><b>AI as augmentation:</b> Build specialized agents for triage and enrichment, not chatbots; each must have human oversight and explainability.</p></li><li><p class="paragraph" style="text-align:left;"><b>Governance for AI tools:</b> Enforce “least agency” for AI restrict agent tool access, monitor decisions, and track data provenance.</p></li></ul><p class="paragraph" style="text-align:left;"><b>30-minute actions:</b></p><ol start="1"><li><p class="paragraph" style="text-align:left;">Map detections to telemetry sources; drop unused feeds to a cheaper lake tier.</p></li><li><p class="paragraph" style="text-align:left;">Define least-agency policies for internal or vendor AI integrations.</p></li><li><p class="paragraph" style="text-align:left;">Convert repetitive alert playbooks (phishing or brute-force triage) into task agents with QA oversight.</p></li></ol><h2 class="heading" style="text-align:left;" id="featured-experts-this-week"><b>Featured Experts This Week </b>🎤</h2><ul><li><p class="paragraph" style="text-align:left;"><a class="link" href="https://www.linkedin.com/in/hackerxbella/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=from-asahi-s-ransomware-recovery-to-google-s-ai-bug-bounty-the-soc-s-big-2025-reboot" target="_blank" rel="noopener noreferrer nofollow"><b>Allie Mellen</b></a><b> </b>Principal Analyst, Forrester</p></li><li><p class="paragraph" style="text-align:left;"><b><a class="link" href="https://www.linkedin.com/in/ashishrajan/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=from-asahi-s-ransomware-recovery-to-google-s-ai-bug-bounty-the-soc-s-big-2025-reboot" target="_blank" rel="noopener noreferrer nofollow">Ashish Rajan</a></b> - CISO | Co-Host <a class="link" href="https://open.spotify.com/show/3nV4eijfzdHKIvDOaycVII?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=from-asahi-s-ransomware-recovery-to-google-s-ai-bug-bounty-the-soc-s-big-2025-reboot" target="_blank" rel="noopener noreferrer nofollow">AI Security Podcast</a> , Host of <a class="link" href="https://www.cloudsecuritypodcast.tv/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=from-asahi-s-ransomware-recovery-to-google-s-ai-bug-bounty-the-soc-s-big-2025-reboot" target="_blank" rel="noopener noreferrer nofollow">Cloud Security Podcast</a></p></li></ul><h2 class="heading" style="text-align:left;" id="definitions-and-core-concepts"><b>Definitions and Core Concepts 📚</b></h2><p class="paragraph" style="text-align:left;">Before diving into our insights, let&#39;s clarify some key terms:</p><ul><li><p class="paragraph" style="text-align:left;"><b>Detection Engineering:</b> Analysts who both create and maintain detections; a hybrid of developer and responder.</p></li><li><p class="paragraph" style="text-align:left;"><b>Security Data Lake:</b> Structured, cost-efficient telemetry storage aligned with OCSF and long-term analytics.</p></li><li><p class="paragraph" style="text-align:left;"><b>Agentic AI:</b> Task-specific AI built for defined security workflows with explainability and guardrails.</p></li><li><p class="paragraph" style="text-align:left;"><b>Least Agency:</b> Limiting AI or automation access to only necessary tools and scopes.</p></li><li><p class="paragraph" style="text-align:left;"><b>AI Observability:</b> Capturing prompt inputs, tool calls, and decisions for monitoring and auditing model behavior.</p></li></ul><hr class="content_break"><h2 class="heading" style="text-align:left;" id="our-insights-from-this-practitioner">💡<b>Our Insights from this Practitioner 🔍</b></h2><h3 class="heading" style="text-align:left;" id="the-great-soc-reset-from-human-bott"><i><b>The Great SOC Reset: From Human Bottlenecks to Human-Guided Automation</b></i></h3><p class="paragraph" style="text-align:left;">Security operations in 2025 are at a breaking point. The sheer velocity of alerts, hybrid data sources, and AI-generated telemetry has outpaced the human SOC’s linear model. As Allie Mellen puts it, <i>“No one knows how to secure AI… this is a moment of reset.”</i></p><p class="paragraph" style="text-align:left;">This reset isn’t about replacing people, it&#39;s about re-architecting how they operate. The modern SOC must behave like an engineering team: building, testing, and shipping detections and automations the same way developers ship code.</p><h3 class="heading" style="text-align:left;" id="1-why-traditional-soc-models-are-fa"><b>1️⃣ Why Traditional SOC Models Are Failing</b></h3><p class="paragraph" style="text-align:left;">Forrester’s latest field data shows the L1/L2/L3 hierarchy breaks under cloud-scale telemetry. Analysts spend 60–70% of their time on enrichment, correlation, and false-positive filtering tasks that machines can now automate.</p><p class="paragraph" style="text-align:left;">Allie notes that <i>“the structure evolving to detection engineer should stay consistent regardless of AI.”</i> This means moving from a tiered hierarchy to cross-functional detection pods that own their use cases end-to-end: data sourcing, rule logic, validation, and metrics.</p><p class="paragraph" style="text-align:left;">When incidents like Asahi’s ransomware outbreak occur, success depends not on alert volume but on how fast those pods can pivot detections from IT to OT telemetry proving architecture beats manpower.</p><h3 class="heading" style="text-align:left;" id="2-the-rise-of-the-detection-enginee"><b>2️⃣ The Rise of the Detection Engineer</b></h3><p class="paragraph" style="text-align:left;">Detection engineers blend developer discipline with responder intuition. They treat rules like code, apply CI/CD pipelines for detection deployment, and test every change against a known dataset before it hits production.</p><p class="paragraph" style="text-align:left;">Ashish Rajan explains that this shift <i>“reduces L1 toil so people graduate to L2 work context building, incident narrative, and detection creation.”</i> In practice, it means every alert pipeline has an owner, every rule has version control, and metrics focus on mean time to learning (MTTL) rather than mean time to respond.</p><h3 class="heading" style="text-align:left;" id="3-building-credibility-for-ai-in-th"><b>3️⃣ Building Credibility for AI in the SOC</b></h3><p class="paragraph" style="text-align:left;">As enterprises experiment with AI triage agents, credibility is the currency. If an AI falsely dismisses a true incident twice, analysts lose trust and automation adoption stalls.<br>Allie warns that <i>“efficacy is incredibly important… and we don’t really have the automated testing infrastructure”</i> for AI security agents yet.</p><p class="paragraph" style="text-align:left;">High-maturity teams are tackling this by building “golden datasets” recorded investigations that serve as QA baselines for AI output. Every week, they replay those cases, compare AI vs. human decisions, and publish variance metrics in the SOC dashboard. The result: agents that actually <i>earn</i> analyst trust through repeatable performance.</p><h3 class="heading" style="text-align:left;" id="4-the-data-engineering-imperative"><b>4️⃣ The Data Engineering Imperative</b></h3><p class="paragraph" style="text-align:left;">AI doesn’t fix bad data pipelines. Both experts emphasized that SIEM performance, false-positive rates, and automation fidelity depend on clean, contextualized telemetry.<br>Detection engineers are applying data engineering practices: routing, normalization, tokenization, and redaction at ingestion time to ensure that what flows into the SOC is fit for purpose.</p><p class="paragraph" style="text-align:left;">This model popularized in cloud-native organizations like Netflix and Block makes data lineage traceable across tools and enables AI triage with reliable context.</p><h3 class="heading" style="text-align:left;" id="5-from-alert-fatigue-to-ai-assisted"><b>5️⃣ From Alert Fatigue to AI-Assisted Focus</b></h3><p class="paragraph" style="text-align:left;">When applied correctly, AI agents aren’t replacing analysts they’re removing the noise that prevents analysts from thinking.<br>A large financial institution Allie recently studied used a task-specific agent for phishing triage: it handled header analysis, URL detonation, and enrichment autonomously, forwarding only 8% of cases to humans. That reclaimed hundreds of analyst hours per week and shifted human focus to incident narrative and adversary tracking.</p><p class="paragraph" style="text-align:left;">As Ashish framed it, <i>“AI should help you tell better stories, not just faster ones.”</i></p><h3 class="heading" style="text-align:left;" id="6-governance-the-least-agency-princ"><b>6️⃣ Governance: The “Least Agency” Principle</b></h3><p class="paragraph" style="text-align:left;">AI’s power lies in action; its risk lies in overreach. The best teams are adopting what Allie calls <b>“least agency”</b> a principle limiting agents to the smallest possible tool and data scope needed for their role.<br>Every AI integration is reviewed like a service account: time-boxed credentials, explicit tool allow-lists, and auditable prompt changes. This policy mindset transforms AI from a security risk to a security multiplier.</p><hr class="content_break"><h3 class="heading" style="text-align:left;" id="what-security-leaders-should-do-now">🧭<b> What Security Leaders Should Do Now</b></h3><p class="paragraph" style="text-align:left;"><b>Week 1:</b> Identify your top three manual SOC workflows (e.g., phishing triage, credential abuse, log correlation). Document inputs/outputs.<br><b>Week 2:</b> Build a <i>sandbox AI agent</i> to automate 50% of steps no production access yet.<br><b>Week 3:</b> Implement “golden dataset” QA tests and add AI observability (log prompts, actions, decisions).<br><b>Week 4:</b> Formalize a <i>least-agency</i> governance policy for any AI automation or vendor integration.</p><p class="paragraph" style="text-align:left;">The leaders who operationalize these four steps aren’t just adopting AI they’re redesigning their SOC to be human-directed, AI-accelerated, and data-driven.</p><p class="paragraph" style="text-align:left;">As Allie concluded, <i>“This is a reset that redefines what we even mean by security operations.”</i></p><hr class="content_break"><h2 class="heading" style="text-align:left;" id="related-resources">RELATED RESOURCES</h2><ul><li><p class="paragraph" style="text-align:left;"><a class="link" href="https://security.googleblog.com/2025/10/ai-vulnerability-reward-program.html?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=from-asahi-s-ransomware-recovery-to-google-s-ai-bug-bounty-the-soc-s-big-2025-reboot" target="_blank" rel="noopener noreferrer nofollow">Google Security Blog   AI Vulnerability Reward Program</a></p></li><li><p class="paragraph" style="text-align:left;"><a class="link" href="https://docs.aws.amazon.com/bedrock/latest/userguide/prompt-safety.html?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=from-asahi-s-ransomware-recovery-to-google-s-ai-bug-bounty-the-soc-s-big-2025-reboot" target="_blank" rel="noopener noreferrer nofollow">AWS Bedrock Prompt Safety Guidance</a></p></li><li><p class="paragraph" style="text-align:left;"><a class="link" href="https://research.checkpoint.com/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=from-asahi-s-ransomware-recovery-to-google-s-ai-bug-bounty-the-soc-s-big-2025-reboot" target="_blank" rel="noopener noreferrer nofollow">Check Point Research   Weekly Threat Intel Report</a></p></li><li><p class="paragraph" style="text-align:left;"><a class="link" href="https://www.redhat.com/en/blog?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=from-asahi-s-ransomware-recovery-to-google-s-ai-bug-bounty-the-soc-s-big-2025-reboot" target="_blank" rel="noopener noreferrer nofollow">Red Hat Consulting GitLab Incident Statement</a></p></li><li><p class="paragraph" style="text-align:left;"><a class="link" href="https://www.forrester.com/blogs/category/security-operations?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=from-asahi-s-ransomware-recovery-to-google-s-ai-bug-bounty-the-soc-s-big-2025-reboot" target="_blank" rel="noopener noreferrer nofollow">Forrester: The Future of Security Operations</a></p></li></ul><hr class="content_break"><h2 class="heading" style="text-align:left;" id="related-podcast-episodes"><b>Related Podcast Episodes 🎧</b></h2><div class="image"><a class="image__link" href="https://www.cloudsecuritypodcast.tv/videos/the-truth-about-ai-in-the-soc-from-alert-fatigue-to-detection-engineering?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=from-asahi-s-ransomware-recovery-to-google-s-ai-bug-bounty-the-soc-s-big-2025-reboot" rel="noopener" target="_blank"><img alt="" class="image__image" style="" src="https://media.beehiiv.com/cdn-cgi/image/fit=scale-down,format=auto,onerror=redirect,quality=80/uploads/asset/file/4036f374-2713-4a82-8079-13133ebe50f8/S06_Allie_Mellen_.jpg?t=1759935361"/></a><div class="image__source"><a class="image__source_link" href="https://www.cloudsecuritypodcast.tv/videos/the-truth-about-ai-in-the-soc-from-alert-fatigue-to-detection-engineering?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=from-asahi-s-ransomware-recovery-to-google-s-ai-bug-bounty-the-soc-s-big-2025-reboot" rel="noopener" target="_blank"><span class="image__source_text"><p>The Truth About AI in the SOC: From Alert Fatigue to Detection Engineering</p></span></a></div></div><h3 class="heading" style="text-align:left;" id="question-for-you-reply-to-this-emai">Question for you? (Reply to this email)</h3><p class="paragraph" style="text-align:left;">⚙️ If you could automate just one SOC playbook with AI today, which would it be and why?</p><p class="paragraph" style="text-align:left;">Next week, we&#39;ll explore another critical aspect of cloud security. Stay tuned!</p><hr class="content_break"><p class="paragraph" style="text-align:left;">📬 Want weekly expert takes on AI & Cloud Security? [<a class="link" href="https://www.cloudsecuritynewsletter.com/subscribe?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=from-asahi-s-ransomware-recovery-to-google-s-ai-bug-bounty-the-soc-s-big-2025-reboot" target="_blank" rel="noopener noreferrer nofollow">Subscribe here</a>]”</p><hr class="content_break"><p class="paragraph" style="text-align:left;"><span style="color:rgb(238, 40, 60);"><b><a class="link" href="mailto:info@cloudsecuritypodcast.tv" target="_blank" rel="noopener noreferrer nofollow">We would love to hear from you</a></b></span>📢 for a feature or topic request or if you would like to sponsor an edition of Cloud Security Newsletter. </p><p class="paragraph" style="text-align:left;">Thank you for continuing to subscribe and Welcome to the new members in tis newsletter community💙</p><p class="paragraph" style="text-align:start;">Peace!</p><p class="paragraph" style="text-align:start;"><a class="link" href="https://www.linkedin.com/in/shilpi-bhattacharjee/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=from-asahi-s-ransomware-recovery-to-google-s-ai-bug-bounty-the-soc-s-big-2025-reboot" target="_blank" rel="noopener noreferrer nofollow">Shilpi Bhattacharjee</a></p><div class="image"><a class="image__link" href="https://www.cloudsecuritypodcast.tv/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=from-asahi-s-ransomware-recovery-to-google-s-ai-bug-bounty-the-soc-s-big-2025-reboot" rel="noopener" target="_blank"><img alt="" class="image__image" style="" src="https://media.beehiiv.com/cdn-cgi/image/fit=scale-down,format=auto,onerror=redirect,quality=80/uploads/asset/file/dc094a1c-678a-43e0-adc9-1bee6c3499e2/CSP_Logo_Blue_ScreenRes_3000x3000_v2.jpg"/></a></div><div class="button" style="text-align:left;"><a target="_blank" rel="noopener nofollow noreferrer" class="button__link" style="" href="{{rp_referral_hub_url}}"><span class="button__text" style=""> Share the newsletter </span></a></div><p class="paragraph" style="text-align:left;">Was this forwarded to you? You can <a class="link" href="https://www.cloudsecuritynewsletter.com/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=from-asahi-s-ransomware-recovery-to-google-s-ai-bug-bounty-the-soc-s-big-2025-reboot" target="_blank" rel="noopener noreferrer nofollow">Sign up here</a>, to join our growing readership.</p><p class="paragraph" style="text-align:left;">Want to <b>sponsor</b> the next newsletter edition! <a class="link" href="mailto:info@cloudsecuritypodcast.tv" target="_blank" rel="noopener noreferrer nofollow">Lets make it happen </a></p><p class="paragraph" style="text-align:left;">Have you joined our FREE <b>Monthly</b> <a class="link" href="https://www.cloudsecuritybootcamp.com/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=from-asahi-s-ransomware-recovery-to-google-s-ai-bug-bounty-the-soc-s-big-2025-reboot" target="_blank" rel="noopener noreferrer nofollow">Cloud Security Bootcamp</a> yet?</p><p class="paragraph" style="text-align:left;">checkout our <b>sister podcast</b> <a class="link" href="https://www.youtube.com/@AISecurityPodcast?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=from-asahi-s-ransomware-recovery-to-google-s-ai-bug-bounty-the-soc-s-big-2025-reboot" target="_blank" rel="noopener noreferrer nofollow">AI Security Podcast</a></p></div><div class='beehiiv__footer'><br class='beehiiv__footer__break'><hr class='beehiiv__footer__line'><a target="_blank" class="beehiiv__footer_link" style="text-align: center;" href="https://www.beehiiv.com/?utm_campaign=fdd2240f-63f0-4c6b-b5c7-d07f2f6aa376&utm_medium=post_rss&utm_source=cloud_security_newsletter">Powered by beehiiv</a></div></div>
  ]]></content:encoded>
</item>

      <item>
  <title>🚨 Salesforce &amp; Microsoft Hit by Prompt Injection (CVSS 9+):Red Teamers Expose AI Reality</title>
  <description>The security industry has reached an inflection point: AI security is no longer theoretical. This week covers the maturation of AI security threats in production environments, featuring insights from offensive security leaders Jason Haddix (Arcanum Information Security), Daniel Miessler (Unsupervised Learning), and Caleb Sima from AI Security Podcast on prompt injection attacks, the current state of automated vulnerability discovery, and strategic implications of AI agents accessing enterprise systems plus critical zero-days in Cisco ASA/FTD devices and major industry consolidation.</description>
      <enclosure url="https://media.beehiiv.com/cdn-cgi/image/fit=scale-down,format=auto,onerror=redirect,quality=80/uploads/asset/file/caa29759-6d85-43f3-962f-3d1eb4a2dc1c/Screenshot_2025-10-02_at_1.12.12_AM.png" length="755699" type="image/png"/>
  <link>https://www.cloudsecuritynewsletter.com/p/salesforce-microsoft-prompt-injection-cisco-zero-days</link>
  <guid isPermaLink="true">https://www.cloudsecuritynewsletter.com/p/salesforce-microsoft-prompt-injection-cisco-zero-days</guid>
  <pubDate>Thu, 02 Oct 2025 00:19:43 +0000</pubDate>
  <atom:published>2025-10-02T00:19:43Z</atom:published>
    <dc:creator>Ashish Rajan</dc:creator>
  <content:encoded><![CDATA[
    <div class='beehiiv'><style>
  .bh__table, .bh__table_header, .bh__table_cell { border: 1px solid #C0C0C0; }
  .bh__table_cell { padding: 5px; background-color: #FFFFFF; }
  .bh__table_cell p { color: #2D2D2D; font-family: 'Helvetica',Arial,sans-serif !important; overflow-wrap: break-word; }
  .bh__table_header { padding: 5px; background-color:#F1F1F1; }
  .bh__table_header p { color: #2A2A2A; font-family:'Trebuchet MS','Lucida Grande',Tahoma,sans-serif !important; overflow-wrap: break-word; }
</style><div class='beehiiv__body'><h2 class="heading" style="text-align:left;" id="hello-from-the-cloudverse"><span style="background-color:#28EEDA;"><b>Hello from the Cloud-verse!</b></span></h2><p class="paragraph" style="text-align:left;">This week’s Cloud Security Newsletter Topic we cover - <b>The Reality Check: What AI Can Actually Do in Offensive Security (And What It Can&#39;t)</b><a class="link" href="#cloud-security-topic-of-the-week" rel="noopener noreferrer nofollow">(continue reading)</a> </p><hr class="content_break"><div class="image"><a class="image__link" href="https://www.aisecuritypodcast.com/videos/the-future-of-ai-security-is-scaffolding-agents-the-browser?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=salesforce-microsoft-hit-by-prompt-injection-cvss-9-red-teamers-expose-ai-reality" rel="noopener" target="_blank"><img alt="" class="image__image" style="" src="https://media.beehiiv.com/cdn-cgi/image/fit=scale-down,format=auto,onerror=redirect,quality=80/uploads/asset/file/caa29759-6d85-43f3-962f-3d1eb4a2dc1c/Screenshot_2025-10-02_at_1.12.12_AM.png?t=1759364008"/></a><div class="image__source"><span class="image__source_text"><p>This image was generated by AI. It&#39;s still experimental, so it might not be a perfect match!</p></span></div></div><p class="paragraph" style="text-align:left;"><b>Incase, this is your 1st Cloud Security Newsletter! You are in good company! </b><br>You are reading this issue along with your friends and colleagues from companies like <i>Netflix</i>, Citi, <i>JP Morgan, Linkedin, Reddit, Github, Gitlab, CapitalOne, Robinhood, HSBC, British Airways, Airbnb, Block, Booking Inc & more</i> who subscribe to this newsletter, who like you want to learn what’s new with Cloud Security each week from their industry peers like many others who listen to <a class="link" href="https://open.spotify.com/show/6LZgeh4GecRYPc0WrwMB4I?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=salesforce-microsoft-hit-by-prompt-injection-cvss-9-red-teamers-expose-ai-reality" target="_blank" rel="noopener noreferrer nofollow">Cloud Security Podcast</a> & <a class="link" href="https://open.spotify.com/show/3nV4eijfzdHKIvDOaycVII?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=salesforce-microsoft-hit-by-prompt-injection-cvss-9-red-teamers-expose-ai-reality" target="_blank" rel="noopener noreferrer nofollow">AI Security Podcast</a> every week.</p><hr class="content_break"><p class="paragraph" style="text-align:left;">Welcome to this week’s Cloud Security Newsletter</p><p class="paragraph" style="text-align:left;">This week&#39;s newsletter examines how prompt injection has evolved from academic research to active exploitation in enterprise platforms like Salesforce Agentforce and Microsoft 365 Copilot, the reality of AI-powered offensive security capabilities demonstrated in competitions like DARPA&#39;s AIxCC, and what cloud security leaders must understand about defending increasingly complex AI-integrated systems. Meanwhile, CISA&#39;s emergency directive on Cisco firewall zero-days and strategic acquisitions in the AI security space signal the urgency of modernizing both traditional and AI-era defenses.</p><hr class="content_break"><h2 class="heading" style="text-align:left;" id="tldr-for-busy-readers">📰 TL;DR for Busy Readers</h2><ul><li><p class="paragraph" style="text-align:left;">🚨 <i>Salesforce Agentforce (CVSS 9.4)</i> & <i>Microsoft Copilot (CVSS 9.3)</i>: Prompt injection is now real-world exploitation.</p></li><li><p class="paragraph" style="text-align:left;">AI red teams show LLMs can already solve CTFs & find vulns, but can’t handle complex business logic.</p></li><li><p class="paragraph" style="text-align:left;">Cisco ASA/FTD zero-days enable ROM-level persistence,<i>CISA mandates immediate patching</i>.</p></li><li><p class="paragraph" style="text-align:left;">Microsoft launches Security Store for Copilot agents.</p></li><li><p class="paragraph" style="text-align:left;">UK’s JLR cyberattack halts manufacturing, triggers £1.5B loan guarantee. </p></li></ul><p class="paragraph" style="text-align:left;">👉 <b>Takeaway for You:</b> Treat AI agents as <i>privileged users</i>, not SaaS add-ons.</p><hr class="content_break"><h2 class="heading" style="text-align:left;" id="this-weeks-security-news">📰 <b>THIS WEEK&#39;S SECURITY HEADLINES</b></h2><h3 class="heading" style="text-align:left;" id="google-cloud-functions-vulnerabilit"><b>1 - </b>🔴<b> CISA Issues Emergency Directive for Cisco ASA Zero-Days Under Active Exploitation</b></h3><p class="paragraph" style="text-align:left;">Cisco disclosed two actively exploited zero-day vulnerabilities (CVE-2025-20333 and CVE-2025-20362) affecting Cisco Secure Firewall ASA and FTD software on September 25, 2025. CISA issued Emergency Directive 25-03 requiring federal agencies to account for all affected devices, collect forensic data, and upgrade systems by September 26, 2025. The campaign involves exploiting zero-days to gain unauthenticated remote code execution and manipulating ROM to persist through reboots and system upgrades.</p><p class="paragraph" style="text-align:left;"><b>Why This Matters</b>: This widespread campaign demonstrates sophisticated adversaries&#39; ability to achieve persistence at the firmware level, bypassing traditional security controls. For cloud security teams, the ROM manipulation capability is particularly concerning as it enables attackers to maintain access even after patches are applied. Organizations using Cisco ASA/FTD devices at cloud ingress points or for site-to-site VPN connections to cloud environments face potential long-term compromise. The emergency directive&#39;s 24-hour compliance window reflects the severity and active exploitation.</p><p class="paragraph" style="text-align:left;"><b>Sources</b>:<a class="link" href="https://www.cisa.gov/news-events/news/cisa-issues-emergency-directive-requiring-federal-agencies-identify-and-mitigate-cisco-zero-day?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=salesforce-microsoft-hit-by-prompt-injection-cvss-9-red-teamers-expose-ai-reality" target="_blank" rel="noopener noreferrer nofollow">Read CISA Emergency Directive</a>,<a class="link" href="https://thehackernews.com/2025/09/urgent-cisco-asa-zero-day-duo-under.html?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=salesforce-microsoft-hit-by-prompt-injection-cvss-9-red-teamers-expose-ai-reality" target="_blank" rel="noopener noreferrer nofollow"> See The Hacker News Coverage</a>,<a class="link" href="https://www.tenable.com/blog/cve-2025-20333-cve-2025-20362-faq-cisco-asa-ftd-zero-days-uat4356?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=salesforce-microsoft-hit-by-prompt-injection-cvss-9-red-teamers-expose-ai-reality" target="_blank" rel="noopener noreferrer nofollow">See Tenable Analysis</a></p><h3 class="heading" style="text-align:left;" id="2-critical-prompt-injection-vulnera"><b>2 - </b>🚨<b> Critical Prompt Injection Vulnerabilities Expose Salesforce and Microsoft AI Platforms</b></h3><p class="paragraph" style="text-align:left;">Cybersecurity researchers disclosed <a class="link" href="https://thehackernews.com/2025/09/salesforce-patches-critical-forcedleak.html?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=salesforce-microsoft-hit-by-prompt-injection-cvss-9-red-teamers-expose-ai-reality" target="_blank" rel="noopener noreferrer nofollow">ForcedLeak</a> (CVSS 9.4), a critical vulnerability in Salesforce Agentforce that allows attackers to exfiltrate sensitive CRM data through indirect prompt injection attacks. The vulnerability was discovered on July 28, 2025, with Salesforce implementing Trusted URLs Enforcement on September 8, 2025. Additionally, Microsoft patched CVE-2025-32711 affecting Microsoft 365 Copilot in June a vulnerability with a CVSS score of 9.3 involving AI command injection that could allow attackers to steal sensitive data over a network, named <a class="link" href="https://thehackernews.com/2025/09/shadowleak-zero-click-flaw-leaks-gmail.html?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=salesforce-microsoft-hit-by-prompt-injection-cvss-9-red-teamers-expose-ai-reality" target="_blank" rel="noopener noreferrer nofollow">ShadowLeak</a></p><p class="paragraph" style="text-align:left;"><b>Why This Matters</b>: These vulnerabilities demonstrate that prompt injection has transitioned from theoretical research to real-world exploitation in enterprise environments. Unlike traditional web vulnerabilities, prompt injection attacks exploit the fundamental architecture of how LLMs process instructions and data together, making them extremely difficult to prevent. As our featured expert Jason Haddix explains: &quot;The LLM becomes a delivery system to attack the ecosystem. We call it attacking the ecosystem. And I just see no one talking about it right now.&quot; Cloud security teams deploying AI agents with access to sensitive systems CRM platforms, databases, email, or internal documentation face a vastly expanded attack surface.</p><p class="paragraph" style="text-align:left;"><b>Sources</b>:<a class="link" href="https://thehackernews.com/2025/09/salesforce-patches-critical-forcedleak.html?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=salesforce-microsoft-hit-by-prompt-injection-cvss-9-red-teamers-expose-ai-reality" target="_blank" rel="noopener noreferrer nofollow"> </a><a class="link" href="https://noma.security/blog/forcedleak-agent-risks-exposed-in-salesforce-agentforce/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=salesforce-microsoft-hit-by-prompt-injection-cvss-9-red-teamers-expose-ai-reality" target="_blank" rel="noopener noreferrer nofollow">Noma Security Blog</a>,<a class="link" href="https://www.trendmicro.com/vinfo/us/security/news/threat-landscape/trend-micro-state-of-ai-security-report-1h-2025?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=salesforce-microsoft-hit-by-prompt-injection-cvss-9-red-teamers-expose-ai-reality" target="_blank" rel="noopener noreferrer nofollow"> Trend Micro AI Security Report</a></p><h3 class="heading" style="text-align:left;" id="3-microsoft-launches-security-store"><b>3 - 🛡️ Microsoft Launches Security Store with Agentic Security Copilot Updates</b></h3><p class="paragraph" style="text-align:left;">Microsoft unveiled a Security Store to procure security SaaS and customizable Security Copilot agents, integrated with Defender, Sentinel, Entra, and Purview. The announcement highlights new AI-era controls including task adherence, PII guardrails, and prompt-shielding capabilities designed to address emerging risks in agentic AI deployments.</p><p class="paragraph" style="text-align:left;"><b>Why This Matters</b>: Centralized procurement plus agentic AI controls could accelerate security tool rollouts, but also introduce new supply-chain risks around marketplace vetting and agent permission scopes. Copilot agents interacting with tenants heighten prompt-injection and data-loss risk if guardrails are misconfigured. Cloud security teams must treat Copilot agents like applications enforcing app consent policies, granular scopes, and monitored egress. The introduction of these controls validates concerns about AI security risks while providing a framework for governance. Organizations should require vendor SBOMs and data-handling attestations for Store apps, pre-production red-team agents for prompt-injection vulnerabilities, and map new telemetry to Sentinel analytics.</p><p class="paragraph" style="text-align:left;"><b>Sources</b>:<a class="link" href="https://www.theverge.com/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=salesforce-microsoft-hit-by-prompt-injection-cvss-9-red-teamers-expose-ai-reality" target="_blank" rel="noopener noreferrer nofollow"> The Verge</a>,<a class="link" href="https://www.microsoft.com/security/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=salesforce-microsoft-hit-by-prompt-injection-cvss-9-red-teamers-expose-ai-reality" target="_blank" rel="noopener noreferrer nofollow"> Microsoft Security Blog</a></p><h3 class="heading" style="text-align:left;" id="4-uk-critical-manufacturing-jlr-cyb"><b>4 - 🏭 UK Critical Manufacturing: JLR Cyberattack Triggers £1.5B Government Loan Guarantee</b></h3><p class="paragraph" style="text-align:left;">After a late-August cyberattack halted Jaguar Land Rover (JLR) production for weeks, the UK government announced a £1.5 billion loan guarantee on September 27, 2025, to stabilize the auto supply chain. JLR is now preparing a controlled, phased restart of operations.</p><p class="paragraph" style="text-align:left;"><b>Why This Matters</b>: This incident demonstrates real-world macro-scale risk from OT/IT ransomware: national supply-chain disruption, emergency public financing, and cascading vendor insolvency risks. The government intervention underscores how critical infrastructure cyberattacks now require sovereign-level economic response. For cloud security teams managing OT/plant-adjacent enterprises or supply chain integrations, this validates the need for robust network segmentation, immutable backups, and incident response tabletop exercises that include finance and treasury scenarios. Organizations should validate cyber insurance clauses for business interruption and simulate extended ERP/PLM outages that might drive logistics cloud failovers.</p><p class="paragraph" style="text-align:left;"><b>Sources</b>:<a class="link" href="https://www.reuters.com/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=salesforce-microsoft-hit-by-prompt-injection-cvss-9-red-teamers-expose-ai-reality" target="_blank" rel="noopener noreferrer nofollow"> Reuters</a>,<a class="link" href="https://www.securityweek.com/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=salesforce-microsoft-hit-by-prompt-injection-cvss-9-red-teamers-expose-ai-reality" target="_blank" rel="noopener noreferrer nofollow"> SecurityWeek</a></p><h3 class="heading" style="text-align:left;" id="5-google-patches-gemini-ai-vulnerab">5 - 📊<b> Google Patches Gemini AI Vulnerabilities Enabling Data Theft</b></h3><p class="paragraph" style="text-align:left;">Cybersecurity researchers disclosed three patched vulnerabilities in Google&#39;s Gemini AI assistant that could have exposed users to privacy risks and data theft, including search-injection attacks on the Search Personalization Model, log-to-prompt injection against Gemini Cloud Assist, and exfiltration via the Gemini Browsing Tool. The vulnerabilities, collectively named the &quot;Gemini Trifecta&quot; by Tenable, have been patched by Google.</p><p class="paragraph" style="text-align:left;"><b>Why This Matters</b>: These vulnerabilities highlight the security challenges of integrating AI assistants into cloud platforms and productivity tools. Gemini Cloud Assist, designed to help users manage cloud resources and troubleshoot issues, could have been exploited to compromise cloud environments through manipulated logs a particularly concerning vector as many organizations are implementing AI-powered cloud management tools.</p><p class="paragraph" style="text-align:left;"><b>Source</b>:<a class="link" href="https://thehackernews.com/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=salesforce-microsoft-hit-by-prompt-injection-cvss-9-red-teamers-expose-ai-reality" target="_blank" rel="noopener noreferrer nofollow"> The Hacker News</a></p><hr class="content_break"><h2 class="heading" style="text-align:left;" id="cloud-security-topic-of-the-week">🎯 Cloud Security Topic of the Week: </h2><h3 class="heading" style="text-align:left;" id="the-reality-check-what-ai-can-actua"><b>The Reality Check: What AI Can Actually Do in Offensive Security (And What It Can&#39;t)</b></h3><p class="paragraph" style="text-align:left;">As AI security tools flood the market and vendors promise autonomous penetration testing, our featured experts provide a critical reality check on current capabilities versus hype. The discussion reveals a nuanced landscape where AI excels at certain tasks while remaining fundamentally limited in others insights that every cloud security leader needs when evaluating AI-powered security tools or defending against AI-enabled attacks.</p><h2 class="heading" style="text-align:left;" id="featured-experts-this-week"><b>Featured Experts This Week </b>🎤</h2><ul><li><p class="paragraph" style="text-align:left;"><a class="link" href="https://www.linkedin.com/in/jhaddix/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=salesforce-microsoft-hit-by-prompt-injection-cvss-9-red-teamers-expose-ai-reality" target="_blank" rel="noopener noreferrer nofollow"><b>Jason Haddix</b></a> - Founder, Arcanum Information Security</p></li><li><p class="paragraph" style="text-align:left;"><a class="link" href="https://www.linkedin.com/in/danielmiessler/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=salesforce-microsoft-hit-by-prompt-injection-cvss-9-red-teamers-expose-ai-reality" target="_blank" rel="noopener noreferrer nofollow"><b>Daniel Miessler</b></a> - Founder, Unsupervised Learning</p></li><li><p class="paragraph" style="text-align:left;"><a class="link" href="https://www.linkedin.com/in/calebsima/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=salesforce-microsoft-hit-by-prompt-injection-cvss-9-red-teamers-expose-ai-reality" target="_blank" rel="noopener noreferrer nofollow"><b>Caleb Sima</b></a> - Builder, WhiteRabbit, Co-Host <a class="link" href="https://open.spotify.com/show/3nV4eijfzdHKIvDOaycVII?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=salesforce-microsoft-hit-by-prompt-injection-cvss-9-red-teamers-expose-ai-reality" target="_blank" rel="noopener noreferrer nofollow">AI Security Podcast</a> </p></li><li><p class="paragraph" style="text-align:left;"><a class="link" href="https://www.linkedin.com/in/ashishrajan/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=salesforce-microsoft-hit-by-prompt-injection-cvss-9-red-teamers-expose-ai-reality" target="_blank" rel="noopener noreferrer nofollow"><b>Ashish Rajan</b></a> - CISO | Co-Host AI Security Podcast, Host of Cloud Security Podcast</p></li></ul><h2 class="heading" style="text-align:left;" id="definitions-and-core-concepts"><b>Definitions and Core Concepts 📚</b></h2><p class="paragraph" style="text-align:left;">Before diving into our insights, let&#39;s clarify some key terms:</p><ul><li><p class="paragraph" style="text-align:left;"><b>Prompt Injection</b>: A vulnerability in Large Language Models where malicious instructions embedded in user input cause the model to bypass safety controls or execute unintended actions. Unlike SQL injection, prompt injection exploits the non-deterministic nature of LLMs, making it extremely difficult to prevent completely.</p></li><li><p class="paragraph" style="text-align:left;"><b>MCP (Model Context Protocol)</b>: A standardized protocol introduced by Anthropic for enabling AI systems to interact with external tools and data sources. MCP servers allow AI agents to access databases, APIs, and other resources in a structured way, but also expand the attack surface significantly.</p></li><li><p class="paragraph" style="text-align:left;"><b>Agentic AI</b>: AI systems that can autonomously plan, make decisions, and execute multi-step workflows without constant human guidance. These systems represent a paradigm shift from traditional prompt-response models to autonomous agents that can interact with enterprise systems.</p></li><li><p class="paragraph" style="text-align:left;"><b>RAG (Retrieval Augmented Generation)</b>: A technique that provides LLMs with additional context by retrieving relevant information from external knowledge bases before generating responses. While RAG improves accuracy, it doesn&#39;t solve prompt injection vulnerabilities.</p></li><li><p class="paragraph" style="text-align:left;"><b>Context Engineering</b>: The practice of carefully structuring and organizing information provided to AI systems to maximize output quality. Our experts emphasize this is currently more important than model selection for achieving reliable results.</p></li><li><p class="paragraph" style="text-align:left;"><b>Scaffolding</b>: The architecture and integration layers that connect AI models to tools, data sources, and workflows. Daniel Miessler notes: &quot;<i>The intelligence of the model and the intelligence of the system are like two separate things. I believe the intelligence of the system is likely to win that competition.</i>&quot;</p></li></ul><hr class="content_break"><h2 class="heading" style="text-align:left;" id="our-insights-from-this-practitioner">💡<b>Our Insights from this Practitioner 🔍</b></h2><h3 class="heading" style="text-align:left;" id="the-prompt-injection-problem-isnt-g"><b>The Prompt Injection Problem Isn&#39;t Going Away It&#39;s Getting Worse</b></h3><p class="paragraph" style="text-align:left;">One of the most sobering revelations from this week&#39;s discussion is the consensus that prompt injection ranked #1 in OWASP&#39;s LLM Top 10 has proven to be a more persistent and severe problem than anticipated. Jason Haddix recounts a revealing exchange at an OpenAI security summit where Sam Altman was asked about his prediction from five years ago that prompt injection would be solved:</p><p class="paragraph" style="text-align:left;">&quot;<i>We had about an hour with Sam Altman and Dan asked him like, Hey, you know, five years ago you made this statement that prompt injection would be solved in future models of youth. Still think that. And he was like, really? Yeah.</i>&quot;</p><p class="paragraph" style="text-align:left;">The fundamental issue is architectural: LLMs are designed to provide helpful responses, not to maintain security boundaries. Daniel Miessler explains: &quot;<i>They&#39;re literally designed to just give you answers. Like that&#39;s, and they&#39;re non-deterministic. Like it&#39;s not really designed to have barriers. It&#39;s designed to do the opposite.</i>&quot;</p><p class="paragraph" style="text-align:left;">For cloud security teams, this has profound implications. Organizations implementing AI agents with access to sensitive systems whether Salesforce CRM, internal databases, or cloud infrastructure must accept that prompt injection is an inherent risk that cannot be completely eliminated through guardrails or filters. Each additional control layer slows inference time and reduces accuracy, creating a fundamental trade-off between security and functionality.</p><p class="paragraph" style="text-align:left;"><b>What This Means for Your Organization</b>: Treat AI agents like privileged users. Implement strict scoping for API access (read-only where possible), monitor all AI-initiated actions, and maintain human-in-the-loop workflows for high-stakes operations. Don&#39;t rely solely on prompt firewalls or classifiers; they can be bypassed, and they introduce performance penalties.</p><h3 class="heading" style="text-align:left;" id="the-real-attack-surface-its-not-jus"><b>The Real Attack Surface: It&#39;s Not Just the Model</b></h3><p class="paragraph" style="text-align:left;">One of Jason Haddix&#39;s most important insights challenges how most organizations think about AI security:</p><p class="paragraph" style="text-align:left;">&quot;<i>I think the big disconnect, at least for me, and something that I&#39;m talking about all week at like Defcon Black Hat, is also a giant disconnect in testing a model in isolation, and then testing actually an implementation of an app that uses a model. You have, not only do you have the model that you&#39;re trying to red team and get to do things, but you also have things like its agents and tools and its protocols. But then you also have like six other systems that are hoisting these thing up</i>.&quot;</p><p class="paragraph" style="text-align:left;">This ecosystem-based perspective fundamentally changes how we should approach AI security assessments. In real enterprise deployments, AI systems integrate with:</p><ul><li><p class="paragraph" style="text-align:left;">Logging and observability platforms</p></li><li><p class="paragraph" style="text-align:left;">Prompt libraries and management systems</p></li><li><p class="paragraph" style="text-align:left;">Guardrails and classifiers running inline</p></li><li><p class="paragraph" style="text-align:left;">Data stores and vector databases</p></li><li><p class="paragraph" style="text-align:left;">Multiple APIs and integration points</p></li></ul><p class="paragraph" style="text-align:left;">Jason describes attacks his team has executed that demonstrate this expanded attack surface: &quot;<i>We&#39;ve done attacks similar to blind cross site scripting where the model just passes along attacks and we hit internal developers and we&#39;re able to attack them through JavaScript attacks.</i>&quot;</p><p class="paragraph" style="text-align:left;">The lesson here is critical: <b>the LLM becomes a delivery mechanism to attack the broader ecosystem</b>. An attacker doesn&#39;t need to jailbreak the model itself if they can inject malicious content that gets processed by downstream systems.</p><p class="paragraph" style="text-align:left;"><b>What This Means for Your Organization</b>: When conducting AI security assessments, map the entire architecture of every system that touches AI inputs or outputs. Test not just the model&#39;s responses but how those responses flow through logging systems, analytics platforms, and integration points. Consider second-order effects where malicious content might be stored and later executed.</p><h3 class="heading" style="text-align:left;" id="the-current-state-of-ai-in-offensiv"><b>The Current State of AI in Offensive Security: Capability vs. Hype</b></h3><p class="paragraph" style="text-align:left;">The discussion provides valuable ground truth on what AI can actually accomplish in offensive security today versus vendor marketing claims. Jason Haddix offers this assessment based on testing OpenAI&#39;s new open source model:</p><p class="paragraph" style="text-align:left;">&quot;<i>I hooked that up to our offensive agenting framework and it was able to solve like four CTFs I threw at it with just some RAG and just access to puppeteer and playwright MCPs and that model. I mean, that&#39;s a junior pen tester right there.</i>&quot;</p><p class="paragraph" style="text-align:left;">However, the experts are clear about current limitations. When it comes to complex business logic flaws and multi-stage attacks, Caleb Sima observes:</p><p class="paragraph" style="text-align:left;">&quot;<i>I think it&#39;s easy to find a cross-site scripting vulnerability. I think it&#39;s easy to find SQL. Like these things, I think you can because there&#39;s enough knowledge about the space. It is. They&#39;re very distinct, very, very clear attacks. It&#39;s the multi-stage stuff that&#39;s really [challenging].</i>&quot;</p><p class="paragraph" style="text-align:left;">The key differentiator for successful AI offensive tools isn&#39;t just model intelligence, it&#39;s architecture. Jason explains that leading companies fragment workflows into multiple specialized agents:</p><p class="paragraph" style="text-align:left;">&quot;<i>The architecture for almost every company, including XBOW and including anybody else who&#39;s doing this, is like overseer, planner, agent, and then an agent for XSS and agent for SSTI and Agent forever. All those have RAG, which enrich their ability to do that type of testing.</i>&quot;</p><p class="paragraph" style="text-align:left;">Daniel Miessler emphasizes the importance of this system-level intelligence: &quot;<i>The intelligence of the model and the intelligence of the system are like two separate things. And I believe the intelligence of the system is likely to win that competition.&quot;</i></p><p class="paragraph" style="text-align:left;"><b>What This Means for Your Organization</b>: When evaluating AI-powered security tools, look beyond model claims. Ask vendors about their architecture, how they fragment tasks, maintain context, handle complex multi-step workflows, and incorporate domain expertise. For defensive purposes, understand that attackers with good architecture can achieve significant scale even with commodity models.</p><h3 class="heading" style="text-align:left;" id="the-browser-is-becoming-the-ai-inte"><b>The Browser is Becoming the AI Interface And It Changes Everything</b></h3><p class="paragraph" style="text-align:left;">One of the most fascinating strategic insights from this discussion concerns the race toward AI-integrated browsers. Jason Haddix identifies what he believes is the &quot;killer feature&quot; of the next generation of the web:</p><p class="paragraph" style="text-align:left;">&quot;<i>The killer feature of the next generation of the web is just in time GUIs. This is the killer feature. So you go to any site you like, let&#39;s say Reddit. You&#39;re a Reddit, you love Reddit, but you hate the GUI. You just tell the browser, Hey, I wanna rewrite the GUI like this, and this is what I want to see. And it&#39;s in the browser as an overlay</i>.&quot;</p><p class="paragraph" style="text-align:left;">This shift has profound implications for both user experience and security. As Caleb Sima points out:</p><p class="paragraph" style="text-align:left;">&quot;<i>All of the context for a person is in the browser. 98% of your work is in this browser. That you, if you own that browser, have all of the context of [what they do]. And you have the ability to take actions in that browser using the same state and authentication of what you normally can use.</i>&quot;</p><p class="paragraph" style="text-align:left;">For enterprises, this creates both opportunities and risks. On the opportunity side, companies can shift focus from perfecting GUIs to providing high-quality APIs and data, letting AI browsers handle presentations. On the risk side, AI agents with browser-level access and authentication represent an enormous attack surface for prompt injection and credential theft.</p><p class="paragraph" style="text-align:left;"><b>What This Means for Your Organization</b>: Start planning for AI-integrated browsers in your threat model. Consider how authentication flows, session management, and data sensitivity change when AI agents can navigate your applications with user credentials. For product teams, begin thinking about API-first approaches that enable AI browser integration while maintaining security controls.</p><h3 class="heading" style="text-align:left;" id="why-observability-and-logging-are-m"><b>Why Observability and Logging Are More Complex Than You Think</b></h3><p class="paragraph" style="text-align:left;">The conversation reveals a critical challenge that many organizations haven&#39;t fully considered: in some geographies, prompts exchanged between employees and AI systems are considered private communications that cannot be logged. Jason explains:</p><p class="paragraph" style="text-align:left;">&quot;<i>Prompts between employees and AI and some geolocations in the world are considered private, so you can&#39;t even do observability or logging on them. And so, you know, how do you monitor for a breach or malicious activity or something like that from an employee or a partner or something like that, or anyone who&#39;s using your feature, you know, when you can&#39;t log.</i>&quot;</p><p class="paragraph" style="text-align:left;">This creates a fundamental tension between security monitoring and privacy regulations. Organizations may receive alerts about &quot;malicious intent&quot; from AI guardrails and classifiers but cannot examine the actual prompts that triggered those alerts.</p><p class="paragraph" style="text-align:left;"><b>What This Means for Your Organization</b>: Consult with legal and compliance teams now about logging requirements and constraints for AI interactions across your global operations. Design monitoring that can operate effectively with limited visibility, focusing on behavioral anomalies, API access patterns, and downstream effects rather than prompt content analysis.</p><h3 class="heading" style="text-align:left;" id="the-reality-of-incident-detection-a"><b>The Reality of Incident Detection and Response in AI Systems</b></h3><p class="paragraph" style="text-align:left;">When asked about defining and detecting incidents in AI systems, the experts converge on an important insight: the fundamental impacts remain the same, but the attack paths differ. Daniel Miessler frames it clearly:</p><p class="paragraph" style="text-align:left;">&quot;<i>I think I&#39;m likely to say that they&#39;re largely gonna be the same. Because I feel like the impacts are still gonna be very similar. I think it&#39;s more the attack path that changes. Did you lose data? Was something stolen? Was there IP theft?</i>&quot;</p><p class="paragraph" style="text-align:left;">However, Jason adds a crucial complication around forensics and detection:</p><p class="paragraph" style="text-align:left;">&quot;<i>You can write prompts that say don&#39;t execute this attack until like a week later. So now how do you go back once the attack does execute? One of the techniques right now is variable expansion. So you define a variable, a prompt injection as a variable in one prompt today and then tomorrow or the next day, I go back and I call the variable to the system and it detonates basically.</i>&quot;</p><p class="paragraph" style="text-align:left;">This time-delayed execution pattern similar to malware detonation makes traditional incident response significantly more complex. Without proper logging and correlation of AI interactions over time, forensic investigation becomes nearly impossible.</p><p class="paragraph" style="text-align:left;"><b>What This Means for Your Organization</b>: Extend your incident response playbooks to cover AI-specific scenarios. Build detection for unusual patterns in AI agent behavior, unexpected data access, and anomalous tool usage. Consider implementing retention policies for AI interaction logs that balance privacy requirements with investigation needs.</p><hr class="content_break"><h2 class="heading" style="text-align:left;" id="related-resources">RELATED RESOURCES</h2><ul><li><p class="paragraph" style="text-align:left;"><b>AI Security Resources</b>:</p><ul><li><p class="paragraph" style="text-align:left;"><a class="link" href="https://genai.owasp.org/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=salesforce-microsoft-hit-by-prompt-injection-cvss-9-red-teamers-expose-ai-reality" target="_blank" rel="noopener noreferrer nofollow">OWASP Top 10 for LLM Applications 2025</a></p></li><li><p class="paragraph" style="text-align:left;"><a class="link" href="https://docs.claude.com/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=salesforce-microsoft-hit-by-prompt-injection-cvss-9-red-teamers-expose-ai-reality" target="_blank" rel="noopener noreferrer nofollow">Anthropic&#39;s Model Context Protocol (MCP) Documentation</a></p></li><li><p class="paragraph" style="text-align:left;"><a class="link" href="https://learn.microsoft.com/en-us/copilot/security/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=salesforce-microsoft-hit-by-prompt-injection-cvss-9-red-teamers-expose-ai-reality" target="_blank" rel="noopener noreferrer nofollow">Microsoft Security Copilot Best Practices</a></p></li></ul><p class="paragraph" style="text-align:left;"></p></li><li><p class="paragraph" style="text-align:left;"><b>Threat Intelligence</b>:</p><ul><li><p class="paragraph" style="text-align:left;"><a class="link" href="https://www.cisa.gov/known-exploited-vulnerabilities-catalog?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=salesforce-microsoft-hit-by-prompt-injection-cvss-9-red-teamers-expose-ai-reality" target="_blank" rel="noopener noreferrer nofollow">CISA Known Exploited Vulnerabilities Catalog</a></p></li><li><p class="paragraph" style="text-align:left;"><a class="link" href="https://noma.security/blog/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=salesforce-microsoft-hit-by-prompt-injection-cvss-9-red-teamers-expose-ai-reality" target="_blank" rel="noopener noreferrer nofollow">Noma Security - AI Security Research</a></p></li><li><p class="paragraph" style="text-align:left;"><a class="link" href="https://unit42.paloaltonetworks.com/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=salesforce-microsoft-hit-by-prompt-injection-cvss-9-red-teamers-expose-ai-reality" target="_blank" rel="noopener noreferrer nofollow">Palo Alto Networks Unit 42 Research</a></p></li></ul><p class="paragraph" style="text-align:left;"></p></li><li><p class="paragraph" style="text-align:left;"><b>Prompt Injection Resources</b>:</p><ul><li><p class="paragraph" style="text-align:left;"><a class="link" href="https://simonwillison.net/tags/prompt-injection/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=salesforce-microsoft-hit-by-prompt-injection-cvss-9-red-teamers-expose-ai-reality" target="_blank" rel="noopener noreferrer nofollow">Simon Willison&#39;s Prompt Injection Articles</a></p></li><li><p class="paragraph" style="text-align:left;"><a class="link" href="https://www.haizelabs.com/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=salesforce-microsoft-hit-by-prompt-injection-cvss-9-red-teamers-expose-ai-reality" target="_blank" rel="noopener noreferrer nofollow">Haize Labs - Bijection Attack Research</a></p></li><li><p class="paragraph" style="text-align:left;"><a class="link" href="https://www.trailofbits.com/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=salesforce-microsoft-hit-by-prompt-injection-cvss-9-red-teamers-expose-ai-reality" target="_blank" rel="noopener noreferrer nofollow">Trail of Bits - AI Security Guidance</a></p></li></ul></li></ul><hr class="content_break"><h2 class="heading" style="text-align:left;" id="related-podcast-episodes"><b>Related Podcast Episodes 🎧</b></h2><div class="image"><a class="image__link" href="https://www.aisecuritypodcast.com/videos/the-future-of-ai-security-is-scaffolding-agents-the-browser?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=salesforce-microsoft-hit-by-prompt-injection-cvss-9-red-teamers-expose-ai-reality" rel="noopener" target="_blank"><img alt="" class="image__image" style="" src="https://media.beehiiv.com/cdn-cgi/image/fit=scale-down,format=auto,onerror=redirect,quality=80/uploads/asset/file/1e211445-1bb8-4822-9097-970b34d014dd/S03EP14___1_.jpg?t=1759361527"/></a><div class="image__source"><a class="image__source_link" href="https://www.aisecuritypodcast.com/videos/the-future-of-ai-security-is-scaffolding-agents-the-browser?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=salesforce-microsoft-hit-by-prompt-injection-cvss-9-red-teamers-expose-ai-reality" rel="noopener" target="_blank"><span class="image__source_text"><p>The Future of AI Security is Scaffolding, Agents & The Browser</p></span></a></div></div><h3 class="heading" style="text-align:left;" id="question-for-you-reply-to-this-emai">Question for you? (Reply to this email)</h3><p class="paragraph" style="text-align:left;">How are you scoping AI agent access in your org - least privilege like a user, or broad SaaS-wide? </p><p class="paragraph" style="text-align:left;">Next week, we&#39;ll explore another critical aspect of cloud security. Stay tuned!</p><hr class="content_break"><p class="paragraph" style="text-align:left;">📬 Want weekly expert takes on AI & Cloud Security? [<a class="link" href="https://www.cloudsecuritynewsletter.com/subscribe?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=salesforce-microsoft-hit-by-prompt-injection-cvss-9-red-teamers-expose-ai-reality" target="_blank" rel="noopener noreferrer nofollow">Subscribe here</a>]”</p><hr class="content_break"><p class="paragraph" style="text-align:left;"><span style="color:rgb(238, 40, 60);"><b><a class="link" href="mailto:info@cloudsecuritypodcast.tv" target="_blank" rel="noopener noreferrer nofollow">We would love to hear from you</a></b></span>📢 for a feature or topic request or if you would like to sponsor an edition of Cloud Security Newsletter. </p><p class="paragraph" style="text-align:left;">Thank you for continuing to subscribe and Welcome to the new members in tis newsletter community💙</p><p class="paragraph" style="text-align:start;">Peace!</p><p class="paragraph" style="text-align:start;"><a class="link" href="https://www.linkedin.com/in/shilpi-bhattacharjee/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=salesforce-microsoft-hit-by-prompt-injection-cvss-9-red-teamers-expose-ai-reality" target="_blank" rel="noopener noreferrer nofollow">Shilpi Bhattacharjee</a></p><div class="image"><a class="image__link" href="https://www.cloudsecuritypodcast.tv/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=salesforce-microsoft-hit-by-prompt-injection-cvss-9-red-teamers-expose-ai-reality" rel="noopener" target="_blank"><img alt="" class="image__image" style="" src="https://media.beehiiv.com/cdn-cgi/image/fit=scale-down,format=auto,onerror=redirect,quality=80/uploads/asset/file/dc094a1c-678a-43e0-adc9-1bee6c3499e2/CSP_Logo_Blue_ScreenRes_3000x3000_v2.jpg"/></a></div><div class="button" style="text-align:left;"><a target="_blank" rel="noopener nofollow noreferrer" class="button__link" style="" href="{{rp_referral_hub_url}}"><span class="button__text" style=""> Share the newsletter </span></a></div><p class="paragraph" style="text-align:left;">Was this forwarded to you? You can <a class="link" href="https://www.cloudsecuritynewsletter.com/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=salesforce-microsoft-hit-by-prompt-injection-cvss-9-red-teamers-expose-ai-reality" target="_blank" rel="noopener noreferrer nofollow">Sign up here</a>, to join our growing readership.</p><p class="paragraph" style="text-align:left;">Want to <b>sponsor</b> the next newsletter edition! <a class="link" href="mailto:info@cloudsecuritypodcast.tv" target="_blank" rel="noopener noreferrer nofollow">Lets make it happen </a></p><p class="paragraph" style="text-align:left;">Have you joined our FREE <b>Monthly</b> <a class="link" href="https://www.cloudsecuritybootcamp.com/?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=salesforce-microsoft-hit-by-prompt-injection-cvss-9-red-teamers-expose-ai-reality" target="_blank" rel="noopener noreferrer nofollow">Cloud Security Bootcamp</a> yet?</p><p class="paragraph" style="text-align:left;">checkout our <b>sister podcast</b> <a class="link" href="https://www.youtube.com/@AISecurityPodcast?utm_source=www.cloudsecuritynewsletter.com&utm_medium=newsletter&utm_campaign=salesforce-microsoft-hit-by-prompt-injection-cvss-9-red-teamers-expose-ai-reality" target="_blank" rel="noopener noreferrer nofollow">AI Security Podcast</a></p></div><div class='beehiiv__footer'><br class='beehiiv__footer__break'><hr class='beehiiv__footer__line'><a target="_blank" class="beehiiv__footer_link" style="text-align: center;" href="https://www.beehiiv.com/?utm_campaign=0d11f205-1f4e-4c69-8856-1072cfeba4fd&utm_medium=post_rss&utm_source=cloud_security_newsletter">Powered by beehiiv</a></div></div>
  ]]></content:encoded>
</item>

  </channel>
</rss>
