<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom">
  <channel>
    <title>Matt James — Proactive Security</title>
    <description>Security strategies for modern organizations.</description>
    
    <link>https://odnd.com/</link>
    <atom:link href="https://rss.beehiiv.com/feeds/Bk9WbMaU4m.xml" rel="self"/>
    
    <lastBuildDate>Mon, 13 Apr 2026 17:04:13 +0000</lastBuildDate>
    <pubDate>Mon, 30 Mar 2026 16:44:57 +0000</pubDate>
    <atom:published>2026-03-30T16:44:57Z</atom:published>
    <atom:updated>2026-04-13T17:04:13Z</atom:updated>
    
    <category>Venture Capital</category>
    <category>Artificial Intelligence</category>
    <category>Cybersecurity</category>
    <copyright>Copyright 2026, Matt James — Proactive Security</copyright>
    
    <image>
      <url>https://media.beehiiv.com/cdn-cgi/image/fit=scale-down,format=auto,onerror=redirect,quality=80/uploads/publication/logo/89e48dd2-e20b-4e22-b3c3-be28534597ec/logo.png</url>
      <title>Matt James — Proactive Security</title>
      <link>https://odnd.com/</link>
    </image>
    
    <docs>https://www.rssboard.org/rss-specification</docs>
    <generator>beehiiv</generator>
    <language>en-us</language>
    <webMaster>support@beehiiv.com (Beehiiv Support)</webMaster>

      <item>
  <title>We&#39;re Still Testing the Wrong Thing: AI Red Teaming in 2026</title>
  <description></description>
      <enclosure url="https://media.beehiiv.com/cdn-cgi/image/fit=scale-down,format=auto,onerror=redirect,quality=80/uploads/asset/file/a5c31a49-27fd-4524-9b65-90abbc9cbc53/roadmap.png" length="1404966" type="image/png"/>
  <link>https://odnd.com/p/we-re-still-testing-the-wrong-thing-ai-red-teaming-in-2026</link>
  <guid isPermaLink="true">https://odnd.com/p/we-re-still-testing-the-wrong-thing-ai-red-teaming-in-2026</guid>
  <pubDate>Mon, 30 Mar 2026 16:44:57 +0000</pubDate>
  <atom:published>2026-03-30T16:44:57Z</atom:published>
    <dc:creator>Matt James</dc:creator>
  <content:encoded><![CDATA[
    <div class='beehiiv'><style>
  .bh__table, .bh__table_header, .bh__table_cell { border: 1px solid #C0C0C0; }
  .bh__table_cell { padding: 5px; background-color: #FFFFFF; }
  .bh__table_cell p { color: #2D2D2D; font-family: 'Helvetica',Arial,sans-serif !important; overflow-wrap: break-word; }
  .bh__table_header { padding: 5px; background-color:#F1F1F1; }
  .bh__table_header p { color: #2A2A2A; font-family:'Trebuchet MS','Lucida Grande',Tahoma,sans-serif !important; overflow-wrap: break-word; }
</style><div class='beehiiv__body'><p class="paragraph" style="text-align:left;"><i>An updated look at where AI red teaming stands nine months after the original piece, and why the gap between what we test and what actually breaks has only gotten wider.</i></p><hr class="content_break"><p class="paragraph" style="text-align:left;">Last July, I <a class="link" href="https://odnd.com/p/rethinking-ai-red-teaming-from-model-bugs-to-systemic-resilience?utm_source=odnd.com&utm_medium=newsletter&utm_campaign=we-re-still-testing-the-wrong-thing-ai-red-teaming-in-2026" target="_blank" rel="noopener noreferrer nofollow">wrote about</a> a <a class="link" href="https://arxiv.org/abs/2507.05538?utm_source=odnd.com&utm_medium=newsletter&utm_campaign=we-re-still-testing-the-wrong-thing-ai-red-teaming-in-2026" target="_blank" rel="noopener noreferrer nofollow">Stanford and Georgetown paper</a> that argued AI red teaming was too narrowly focused on individual model vulnerabilities. The authors made the case for two levels of red teaming: micro-level (testing the model itself) and macro-level (testing the entire system lifecycle, including how humans and institutions interact with it). At the time, the argument felt slightly ahead of the conversation. 
Most organizations were still figuring out prompt injection, and &quot;sociotechnical risk&quot; sounded academic.</p><p class="paragraph" style="text-align:left;">Nine months later, the landscape has shifted enough that the paper reads less like a forward-looking proposal and more like a description of problems we are actively failing to solve.</p><h2 class="heading" style="text-align:left;" id="what-changed">What Changed</h2><p class="paragraph" style="text-align:left;">Three things happened since July 2025 that make the original argument sharper.</p><p class="paragraph" style="text-align:left;"><b>Agentic AI went mainstream.</b> The paper talked about risks that emerge from complex interactions between models, users, and environments. That was somewhat theoretical when most deployments were chatbots and content generators. It is no longer theoretical. Gartner projects that <a class="link" href="https://www.bvp.com/atlas/securing-ai-agents-the-defining-cybersecurity-challenge-of-2026?utm_source=odnd.com&utm_medium=newsletter&utm_campaign=we-re-still-testing-the-wrong-thing-ai-red-teaming-in-2026" target="_blank" rel="noopener noreferrer nofollow">40% of enterprise applications will embed task-specific AI agents</a> by 2026, up from under 5% in 2025. These agents call APIs, access data stores, execute workflows, and make decisions across multi-step chains. The attack surface is not the model; it is the entire execution environment.</p><p class="paragraph" style="text-align:left;"><b>OWASP published the </b><b><a class="link" href="https://genai.owasp.org/resource/owasp-top-10-for-agentic-applications-for-2026/?utm_source=odnd.com&utm_medium=newsletter&utm_campaign=we-re-still-testing-the-wrong-thing-ai-red-teaming-in-2026" target="_blank" rel="noopener noreferrer nofollow">Top 10 for Agentic Applications</a></b><b>.</b> Released in late 2025 with input from over 100 practitioners, this framework classifies the risks the original paper was gesturing at. 
Agent Goal Hijacking (ASI01) and Tool Misuse (ASI02) sit at the top. These are not prompt-level vulnerabilities. They are system-level failures where an agent&#39;s mission gets redirected across many execution steps, or where legitimate tool access gets exploited in ways the developers never anticipated. The OWASP list makes it concrete: red teaming agents means testing systems that act, not components that respond.</p><p class="paragraph" style="text-align:left;"><b>The US policy environment shifted.</b> Biden&#39;s <a class="link" href="https://www.federalregister.gov/documents/2023/11/01/2023-24283/safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence?utm_source=odnd.com&utm_medium=newsletter&utm_campaign=we-re-still-testing-the-wrong-thing-ai-red-teaming-in-2026" target="_blank" rel="noopener noreferrer nofollow">Executive Order 14110</a> had required red teaming for high-risk AI models and tasked NIST with developing guidelines. Trump&#39;s <a class="link" href="https://en.wikipedia.org/wiki/Executive_Order_14179?utm_source=odnd.com&utm_medium=newsletter&utm_campaign=we-re-still-testing-the-wrong-thing-ai-red-teaming-in-2026" target="_blank" rel="noopener noreferrer nofollow">Executive Order 14179</a> rescinded it in January 2025, replacing the safety-focused framework with one centered on deregulation and innovation leadership. The practical effect is that mandatory red teaming requirements for frontier models evaporated at the federal level. 
NIST continues its <a class="link" href="https://www.nist.gov/news-events/news/2025/12/draft-nist-guidelines-rethink-cybersecurity-ai-era?utm_source=odnd.com&utm_medium=newsletter&utm_campaign=we-re-still-testing-the-wrong-thing-ai-red-teaming-in-2026" target="_blank" rel="noopener noreferrer nofollow">AI Risk Management Framework</a> work and launched an <a class="link" href="https://www.pillsburylaw.com/en/news-and-insights/nist-ai-agent-standards.html?utm_source=odnd.com&utm_medium=newsletter&utm_campaign=we-re-still-testing-the-wrong-thing-ai-red-teaming-in-2026" target="_blank" rel="noopener noreferrer nofollow">AI Agent Standards Initiative</a> in early 2026, but the regulatory pressure that was pushing companies toward structured red teaming has weakened considerably. Organizations that depended on regulatory mandates to justify their red teaming programs now need to justify them on their own terms.</p><h2 class="heading" style="text-align:left;" id="where-the-original-argument-holds-u">Where the Original Argument Holds Up</h2><p class="paragraph" style="text-align:left;">The core thesis still stands: most red teaming focuses on model-level bugs while the serious risks are systemic. If anything, agentic AI has proven the point more aggressively than the authors probably expected.</p><p class="paragraph" style="text-align:left;">The paper&#39;s recommendation to build multifunctional red teams (ML engineers, social scientists, domain experts, security practitioners) looks even more necessary now. You cannot red-team an agentic system by throwing adversarial prompts at it. 
You need people who understand the business process the agent is embedded in, the data flows it can access, the authorization boundaries it should respect, and the failure modes that emerge when it chains together a sequence of individually reasonable actions that produce an unreasonable outcome.</p><p class="paragraph" style="text-align:left;">The call for continuous red teaming over one-time assessments has also aged well. The <a class="link" href="https://www.paloaltonetworks.com/blog/network-security/how-ai-red-teaming-evolves-with-the-agentic-attack-surface/?utm_source=odnd.com&utm_medium=newsletter&utm_campaign=we-re-still-testing-the-wrong-thing-ai-red-teaming-in-2026" target="_blank" rel="noopener noreferrer nofollow">best practice emerging in 2026</a> is integrating adversarial testing into CI/CD pipelines so that model updates, prompt changes, or agent reconfigurations automatically trigger attack suites. Red teaming as a pre-launch checkbox is not viable when your system&#39;s behavior changes every time you update a prompt template.</p><h2 class="heading" style="text-align:left;" id="where-the-original-piece-was-too-co">Where the Original Piece Was Too Conservative</h2><p class="paragraph" style="text-align:left;">Looking back at what I wrote in July, I underestimated a few things.</p><p class="paragraph" style="text-align:left;">First, the speed at which agents would move from experimental to production. The paper framed macro-level red teaming as something organizations should prepare for. In practice, many organizations deployed agentic systems before they had any red teaming program at all, let alone a macro-level one. The gap is not &quot;we test models but not systems.&quot; For a lot of organizations, the gap is &quot;we do not test.&quot;</p><p class="paragraph" style="text-align:left;">Second, I did not spend enough time on the supply chain dimension. Agents consume tools, plugins, and external data sources. Each of those is a trust boundary. 
Indirect prompt injection, where malicious instructions arrive through untrusted external content rather than direct user input, <a class="link" href="https://mindgard.ai/blog/ai-red-teaming-statistics?utm_source=odnd.com&utm_medium=newsletter&utm_campaign=we-re-still-testing-the-wrong-thing-ai-red-teaming-in-2026" target="_blank" rel="noopener noreferrer nofollow">showed up in 73% of production AI deployments</a> in 2025. Multi-agent denial-of-service attacks <a class="link" href="https://www.helpnetsecurity.com/2026/03/03/enterprise-ai-agent-security-2026/?utm_source=odnd.com&utm_medium=newsletter&utm_campaign=we-re-still-testing-the-wrong-thing-ai-red-teaming-in-2026" target="_blank" rel="noopener noreferrer nofollow">succeeded in over 80% of tests</a> in one ACL study. These are not edge cases. They are the default attack surface for any system that lets an LLM interact with external data.</p><p class="paragraph" style="text-align:left;">Third, I glossed over the organizational incentive problem. The paper recommended transparency and external testing, and I nodded along without pushing hard enough on why that is so difficult. Internal red teams operate under institutional constraints. They may not test scenarios that would embarrass leadership or challenge product decisions. External red teaming and independent disclosure mechanisms are not nice-to-haves; they are structural necessities for surfacing the risks that internal teams are incentivized to overlook.</p><h2 class="heading" style="text-align:left;" id="what-matters-now">What Matters Now</h2><p class="paragraph" style="text-align:left;">If you are building or deploying AI systems in 2026, here is what I would emphasize differently than I did last July:</p><p class="paragraph" style="text-align:left;"><b>Test the system, not the model.</b> This was the paper&#39;s thesis and it is now the operational reality. 
If your agent can read emails, query a database, and send Slack messages, your red team needs to test what happens when a poisoned email redirects the agent to exfiltrate the database contents via Slack. Model-level jailbreak testing does not find this.</p><p class="paragraph" style="text-align:left;"><b>Adopt the </b><b><a class="link" href="https://genai.owasp.org/resource/owasp-top-10-for-agentic-applications-for-2026/?utm_source=odnd.com&utm_medium=newsletter&utm_campaign=we-re-still-testing-the-wrong-thing-ai-red-teaming-in-2026" target="_blank" rel="noopener noreferrer nofollow">OWASP ASI framework</a></b><b>.</b> It did not exist when I wrote the original piece. It does now, and it gives red teams a structured taxonomy for agentic risks. Use it. It covers goal hijacking, tool misuse, delegated trust failures, memory poisoning, and the other failure modes that are specific to autonomous systems.</p><p class="paragraph" style="text-align:left;"><b>Do not wait for regulation.</b> The federal regulatory environment for AI safety is weaker than it was a year ago. NIST is still doing good work, but mandatory requirements are not coming soon. If your red teaming program only exists because a regulation says it should, it will not survive the current policy climate. Build the program because the risk is real, not because someone told you to.</p><p class="paragraph" style="text-align:left;"><b>Red team continuously, not periodically.</b> Wire adversarial testing into your release process. When a prompt changes, when a tool gets added, when an agent&#39;s scope expands, test it. The systems that break in production are the ones that changed since the last time anyone looked.</p><p class="paragraph" style="text-align:left;"><b>Bring in outsiders.</b> Your internal team has blind spots shaped by the same incentive structures that built the system. Independent red teaming is not a luxury. 
It is how you find the things you are not looking for.</p><h2 class="heading" style="text-align:left;" id="the-bigger-picture">The Bigger Picture</h2><p class="paragraph" style="text-align:left;">The Stanford and Georgetown paper was right about the direction. The field needed to move from model-level adversarial testing to system-level resilience evaluation. Nine months later, the need is more urgent and the tools to address it are starting to materialize. But the gap between where most organizations are and where they need to be has widened, not narrowed.</p><p class="paragraph" style="text-align:left;">AI red teaming in 2026 is not about finding clever jailbreaks. It is about understanding how autonomous systems fail when they interact with messy, adversarial, real-world environments, and building the organizational discipline to test for that continuously. The paper gave us the framework. The question now is whether the industry will actually use it.</p><hr class="content_break"><p class="paragraph" style="text-align:left;"><i>Matt James is a Product Security practitioner focused on threat modeling, AI security, and building security programs that work in practice. He writes at </i><i><a class="link" href="https://odnd.com?utm_source=odnd.com&utm_medium=newsletter&utm_campaign=we-re-still-testing-the-wrong-thing-ai-red-teaming-in-2026" target="_blank" rel="noopener noreferrer nofollow">odnd.com</a></i><i>.</i></p><p class="paragraph" style="text-align:left;"><b>Reference:</b><br>Sharkey, L., Pasquinelli, M., Cheng, B., Dobbe, R., et al. <i><a class="link" href="https://arxiv.org/abs/2507.05538?utm_source=odnd.com&utm_medium=newsletter&utm_campaign=we-re-still-testing-the-wrong-thing-ai-red-teaming-in-2026" target="_blank" rel="noopener noreferrer nofollow">Operationalizing Red Teaming for AI Systems</a></i>. arXiv preprint arXiv:2507.05538. 
July 2025.</p></div><div class='beehiiv__footer'><br class='beehiiv__footer__break'><hr class='beehiiv__footer__line'><a target="_blank" class="beehiiv__footer_link" style="text-align: center;" href="https://www.beehiiv.com/?utm_campaign=f646f8ce-3e44-45a9-a929-6c4ace8949ba&utm_medium=post_rss&utm_source=matt_james_proactive_security">Powered by beehiiv</a></div></div>
  ]]></content:encoded>
</item>

      <item>
  <title>Emerging Cybersecurity Trends in 2026</title>
  <description>Lessons from 2024 and the Reality of Security Today</description>
      <enclosure url="https://media.beehiiv.com/cdn-cgi/image/fit=scale-down,format=auto,onerror=redirect,quality=80/uploads/asset/file/2a80cc40-248d-41e3-808e-6f60f034d2b7/Gemini_Generated_Image_ehraxpehraxpehra.png" length="1378570" type="image/png"/>
  <link>https://odnd.com/p/emerging-cybersecurity-trends-in-2026</link>
  <guid isPermaLink="true">https://odnd.com/p/emerging-cybersecurity-trends-in-2026</guid>
  <pubDate>Mon, 29 Dec 2025 00:48:04 +0000</pubDate>
  <atom:published>2025-12-29T00:48:04Z</atom:published>
    <dc:creator>Matt James</dc:creator>
  <content:encoded><![CDATA[
    <div class='beehiiv'><style>
  .bh__table, .bh__table_header, .bh__table_cell { border: 1px solid #C0C0C0; }
  .bh__table_cell { padding: 5px; background-color: #FFFFFF; }
  .bh__table_cell p { color: #2D2D2D; font-family: 'Helvetica',Arial,sans-serif !important; overflow-wrap: break-word; }
  .bh__table_header { padding: 5px; background-color:#F1F1F1; }
  .bh__table_header p { color: #2A2A2A; font-family:'Trebuchet MS','Lucida Grande',Tahoma,sans-serif !important; overflow-wrap: break-word; }
</style><div class='beehiiv__body'><p class="paragraph" style="text-align:left;">In 2024, I <a class="link" href="https://odnd.com/p/emerging-cybersecurity-trends-2024?utm_source=odnd.com&utm_medium=newsletter&utm_campaign=emerging-cybersecurity-trends-in-2026" target="_blank" rel="noopener noreferrer nofollow">wrote about the cybersecurity trends</a> I believed would shape the near future. At the time, the industry was focused on generative AI, ransomware, remote work, and an expanding regulatory landscape. Many of those themes turned out to be directionally correct—but the way they played out surprised even experienced practitioners.</p><p class="paragraph" style="text-align:left;">Two years later, it’s worth revisiting those predictions. Some held up well. Others missed the mark. More importantly, a few of the most impactful changes weren’t obvious at all in 2024.</p><p class="paragraph" style="text-align:left;">This post reflects on what came true, what didn’t, and what I believe now defines cybersecurity heading into 2026.</p><h2 class="heading" style="text-align:left;" id="ai-didnt-create-new-attacks-it-made">AI Didn’t Create New Attacks — It Made Old Ones Relentless</h2><p class="paragraph" style="text-align:left;">In 2024, I warned that AI-driven cyberattacks would become more sophisticated and adaptive. While that framing wasn’t wrong, it overstated the technical novelty of what actually happened.</p><p class="paragraph" style="text-align:left;">AI didn’t introduce fundamentally new attack techniques. Instead, it dramatically lowered the cost and effort of executing existing ones. Phishing became faster, more personalized, and easier to scale. Social engineering improved in quality and volume. Reconnaissance and targeting became trivial.</p><p class="paragraph" style="text-align:left;">The real shift wasn’t intelligence—it was economics. 
Attackers didn’t need to outsmart defenders; they only needed to overwhelm them.</p><p class="paragraph" style="text-align:left;">What I underestimated in 2024 was how quickly human-dependent security controls would fail under AI-driven scale.</p><h2 class="heading" style="text-align:left;" id="the-perimeter-didnt-just-expand-it-">The Perimeter Didn’t Just Expand — It Disappeared</h2><p class="paragraph" style="text-align:left;">I argued in 2024 that the traditional security perimeter was dissolving and that zero trust would become increasingly important. That prediction proved correct—but incomplete.</p><p class="paragraph" style="text-align:left;">What disappeared wasn’t just the network boundary. The distinction between “inside” and “outside” stopped mattering altogether. Identity became the control plane, and authenticated access became the primary target.</p><p class="paragraph" style="text-align:left;">By 2025, session hijacking, token theft, OAuth abuse, and adversary-in-the-middle attacks were no longer edge cases. They were common. MFA was still necessary—but no longer sufficient on its own.</p><p class="paragraph" style="text-align:left;">The biggest realization for many organizations was uncomfortable:<br><b>A successfully authenticated user could no longer be assumed trustworthy.</b></p><h2 class="heading" style="text-align:left;" id="ransomware-lost-center-stage">Ransomware Lost Center Stage</h2><p class="paragraph" style="text-align:left;">Ransomware-as-a-Service was a major concern in 2024, and it absolutely continued to cause damage. 
But by 2026, it was no longer the dominant threat model.</p><p class="paragraph" style="text-align:left;">Attackers increasingly favored quieter approaches:</p><ul><li><p class="paragraph" style="text-align:left;">Data theft without encryption</p></li><li><p class="paragraph" style="text-align:left;">Identity persistence instead of disruption</p></li><li><p class="paragraph" style="text-align:left;">Monetization through fraud, resale of access, or secondary abuse</p></li></ul><p class="paragraph" style="text-align:left;">Ransomware was noisy and expensive. Silent compromise was easier to sustain and harder to detect.</p><p class="paragraph" style="text-align:left;">In hindsight, I overweighted ransomware relative to the broader shift toward identity-driven attacks and long-lived access.</p><h2 class="heading" style="text-align:left;" id="io-t-security-mattered-just-not-eve">IoT Security Mattered — Just Not Everywhere</h2><p class="paragraph" style="text-align:left;">In 2024, I highlighted IoT as a growing risk due to weak security controls and rapid adoption. That risk didn’t disappear, but it didn’t materialize evenly across industries.</p><p class="paragraph" style="text-align:left;">IoT proved most critical in:</p><ul><li><p class="paragraph" style="text-align:left;">Healthcare</p></li><li><p class="paragraph" style="text-align:left;">Manufacturing</p></li><li><p class="paragraph" style="text-align:left;">Critical infrastructure</p></li><li><p class="paragraph" style="text-align:left;">Nation-state activity</p></li></ul><p class="paragraph" style="text-align:left;">For most enterprises, however, IoT wasn’t the primary breach vector. 
Identity systems, SaaS platforms, and cloud control planes were far more attractive targets.</p><p class="paragraph" style="text-align:left;">The risk was real—but narrower than I anticipated.</p><h2 class="heading" style="text-align:left;" id="regulations-increased-accountabilit">Regulations Increased Accountability, Not Safety</h2><p class="paragraph" style="text-align:left;">I expected evolving cybersecurity regulations to materially improve organizational security posture. What actually improved was <b>visibility</b>, not resilience.</p><p class="paragraph" style="text-align:left;">Disclosure requirements, audits, and compliance frameworks forced organizations to acknowledge incidents more transparently. They did not, on their own, prevent breaches or meaningfully reduce impact.</p><p class="paragraph" style="text-align:left;">By 2026, it became clear that compliance answers <i>“Did you follow the rules?”</i><br>It does not answer <i>“Can you withstand failure?”</i></p><p class="paragraph" style="text-align:left;">That distinction matters.</p><h2 class="heading" style="text-align:left;" id="insider-threats-were-mostly-about-a">Insider Threats Were Mostly About Access, Not People</h2><p class="paragraph" style="text-align:left;">In 2024, I pointed to insider threats as a growing concern. What changed was my understanding of the root cause.</p><p class="paragraph" style="text-align:left;">Most “insider” incidents weren’t driven by malicious employees. They were driven by:</p><ul><li><p class="paragraph" style="text-align:left;">Excessive access</p></li><li><p class="paragraph" style="text-align:left;">Weak authorization boundaries</p></li><li><p class="paragraph" style="text-align:left;">Stolen sessions operating under legitimate identities</p></li></ul><p class="paragraph" style="text-align:left;">Attackers didn’t need insiders. 
<i>They simply became them</i>.</p><h2 class="heading" style="text-align:left;" id="what-defines-cybersecurity-in-2026">What Defines Cybersecurity in 2026</h2><p class="paragraph" style="text-align:left;">The biggest change between 2024 and 2026 wasn’t a new technology or a breakthrough attack technique. It was a shift in how security failures actually happen.</p><p class="paragraph" style="text-align:left;">Most incidents didn’t occur because defenses were missing. They happened because trust was granted too easily and held for too long.</p><p class="paragraph" style="text-align:left;">That reality forces a different starting point for modern security programs:</p><ul><li><p class="paragraph" style="text-align:left;">Compromise is not an edge case—it’s something to plan for</p></li><li><p class="paragraph" style="text-align:left;">Authentication buys you a moment, not lasting confidence</p></li><li><p class="paragraph" style="text-align:left;">Trust has to be reevaluated continuously, not assumed</p></li><li><p class="paragraph" style="text-align:left;">Limiting blast radius matters as much as trying to prevent intrusion</p></li></ul><p class="paragraph" style="text-align:left;">The organizations that adapted weren’t the ones that bought the most tools. They were the ones willing to challenge long-held assumptions about users, access, and control.</p><hr class="content_break"><h2 class="heading" style="text-align:left;" id="closing-thoughts">Closing Thoughts</h2><p class="paragraph" style="text-align:left;">My predictions in 2024 weren’t wrong, but they missed the center of gravity. 
I spent too much time focused on emerging threats and not enough on how existing trust models would be exploited at scale.</p><p class="paragraph" style="text-align:left;">By 2026, cybersecurity is less about keeping attackers out and more about controlling the damage once they’re in.</p><p class="paragraph" style="text-align:left;">That shift has fundamentally changed how I think about identity, access, and what “secure” really means.</p></div><div class='beehiiv__footer'><br class='beehiiv__footer__break'><hr class='beehiiv__footer__line'><a target="_blank" class="beehiiv__footer_link" style="text-align: center;" href="https://www.beehiiv.com/?utm_campaign=adf24a6e-3658-4c21-a0c1-f1c00269ea4f&utm_medium=post_rss&utm_source=matt_james_proactive_security">Powered by beehiiv</a></div></div>
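The principles above (authentication buys a moment, not lasting confidence; trust has to be reevaluated continuously) can be made concrete with a small sketch. Everything here is a hypothetical illustration under stated assumptions, not any vendor's API: the `Session` shape, the signals checked, and the 15-minute threshold are all invented for the example.

```python
from dataclasses import dataclass
import time

# Sketch of "trust is reevaluated continuously, not assumed": every request
# re-scores the session instead of trusting the original login event.
# All names and thresholds are illustrative, not a real product's logic.

MAX_SESSION_AGE = 15 * 60  # seconds before a fresh authentication is required

@dataclass
class Session:
    user: str
    issued_at: float   # unix seconds when authentication completed
    ip: str            # client IP observed at login
    device_fp: str     # device/browser fingerprint captured at login

def evaluate(session: Session, request_ip: str, request_fp: str, now: float) -> str:
    """Return 'allow', 'step_up', or 'revoke' for a single request."""
    if request_fp != session.device_fp:
        return "revoke"    # token replayed from a different browser/device
    if request_ip != session.ip:
        return "step_up"   # network changed: re-challenge before continuing
    if now - session.issued_at > MAX_SESSION_AGE:
        return "step_up"   # authentication only buys a moment, not a day
    return "allow"

s = Session("alice", issued_at=time.time(), ip="203.0.113.7", device_fp="fp-a")
print(evaluate(s, "203.0.113.7", "fp-a", time.time()))   # allow
print(evaluate(s, "198.51.100.9", "fp-a", time.time()))  # step_up
print(evaluate(s, "203.0.113.7", "fp-b", time.time()))   # revoke
```

The design choice worth noticing is that a fingerprint mismatch revokes rather than re-challenges: a stolen session operating under a legitimate identity is exactly the failure mode that limits on blast radius are meant to contain.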
  ]]></content:encoded>
</item>

      <item>
  <title>Adversary-in-the-Middle</title>
  <description></description>
      <enclosure url="https://media.beehiiv.com/cdn-cgi/image/fit=scale-down,format=auto,onerror=redirect,quality=80/uploads/asset/file/773f0bf2-e1b1-49b9-8303-f91bc49701b7/ChatGPT_Image_Nov_15__2025_at_02_59_32_PM.png" length="1847993" type="image/png"/>
  <link>https://odnd.com/p/adversary-in-the-middle</link>
  <guid isPermaLink="true">https://odnd.com/p/adversary-in-the-middle</guid>
  <pubDate>Sat, 15 Nov 2025 20:00:35 +0000</pubDate>
  <atom:published>2025-11-15T20:00:35Z</atom:published>
    <dc:creator>Matt James</dc:creator>
  <content:encoded><![CDATA[
    <div class='beehiiv'><style>
  .bh__table, .bh__table_header, .bh__table_cell { border: 1px solid #C0C0C0; }
  .bh__table_cell { padding: 5px; background-color: #FFFFFF; }
  .bh__table_cell p { color: #2D2D2D; font-family: 'Helvetica',Arial,sans-serif !important; overflow-wrap: break-word; }
  .bh__table_header { padding: 5px; background-color:#F1F1F1; }
  .bh__table_header p { color: #2A2A2A; font-family:'Trebuchet MS','Lucida Grande',Tahoma,sans-serif !important; overflow-wrap: break-word; }
</style><div class='beehiiv__body'><p class="paragraph" style="text-align:left;"><span style="font-family:"Times New Roman";font-size:19px;">For years, organizations have treated multi-factor authentication as the finish line of identity security. Deploy MFA, check the box, and assume phishing is solved.</span></p><p class="paragraph" style="text-align:left;"><span style="font-family:"Times New Roman";font-size:19px;">It isn’t.</span></p><p class="paragraph" style="text-align:left;"><span style="font-family:"Times New Roman";font-size:19px;">Over the past year, a class of attack that was once niche has gone fully mainstream: Adversary-in-the-Middle (AiTM). It’s fast, scalable, and brutally effective—and it breaks one of the most deeply held assumptions in modern security programs:</span></p><p class="paragraph" style="text-align:left;"><span style="font-family:"Times New Roman";font-size:14px;">“If a user completes MFA, the session can be trusted.”</span></p><p class="paragraph" style="text-align:left;"><span style="font-family:"Times New Roman";font-size:19px;">It can’t.</span></p><p class="paragraph" style="text-align:left;"><span style="font-family:"Times New Roman";font-size:19px;">AiTM doesn’t defeat MFA. It steps around it by stealing the authenticated session after MFA succeeds. Once an attacker has the session token, they are the user. No password. No second factor. No additional friction.</span></p><p class="paragraph" style="text-align:left;"><span style="font-family:"Times New Roman";font-size:19px;">This is why AiTM matters, how it actually works, and what organizations can do about it.</span></p><p class="paragraph" style="text-align:left;"><span style="font-family:TimesNewRomanPS-BoldMT;font-size:24px;"><b>Why AiTM Exists: Attackers Adapted</b></span></p><p class="paragraph" style="text-align:left;"><span style="font-family:"Times New Roman";font-size:19px;">Attackers didn’t suddenly get smarter. 
They adjusted their business model.</span></p><p class="paragraph" style="text-align:left;"><span style="font-family:"Times New Roman";font-size:19px;">As MFA became widespread, password theft stopped being enough. To keep compromising accounts at scale, attackers needed a way to impersonate users after authentication.</span></p><p class="paragraph" style="text-align:left;"><span style="font-family:"Times New Roman";font-size:19px;">The answer was reverse-proxy phishing.</span></p><p class="paragraph" style="text-align:left;"><span style="font-family:"Times New Roman";font-size:19px;">What began as a technique used by sophisticated actors is now sold as turnkey SaaS. These kits come with dashboards, analytics, and automation that would look familiar to anyone in marketing ops.</span></p><p class="paragraph" style="text-align:left;"><span style="font-family:"Times New Roman";font-size:19px;">AiTM isn’t an advanced trick anymore. It’s a commodity.</span></p><p class="paragraph" style="text-align:left;"><span style="font-family:TimesNewRomanPS-BoldMT;font-size:24px;"><b>How Adversary-in-the-Middle Works</b></span></p><p class="paragraph" style="text-align:left;"><span style="font-family:"Times New Roman";font-size:19px;">When a user clicks an AiTM phishing link, here’s what happens:</span></p><ol start="1"><li><p class="paragraph" style="text-align:left;"><span style="font-family:"Times New Roman";font-size:19px;">The link routes through a reverse proxy</span><br><span style="font-family:"Times New Roman";font-size:19px;">The attacker places themselves between the victim and the real login service.</span></p></li><li><p class="paragraph" style="text-align:left;"><span style="font-family:"Times New Roman";font-size:19px;">The real login page is displayed</span><br><span style="font-family:"Times New Roman";font-size:19px;">This isn’t a fake page—it’s the legitimate one, transparently proxied.</span></p></li><li><p class="paragraph" style="text-align:left;"><span 
style="font-family:"Times New Roman";font-size:19px;">Credentials are captured in real time</span><br><span style="font-family:"Times New Roman";font-size:19px;">Everything the user types passes through the attacker.</span></p></li><li><p class="paragraph" style="text-align:left;"><span style="font-family:"Times New Roman";font-size:19px;">MFA succeeds</span><br><span style="font-family:"Times New Roman";font-size:19px;">The user completes their second factor normally.</span></p></li><li><p class="paragraph" style="text-align:left;"><span style="font-family:"Times New Roman";font-size:19px;">The session token is intercepted</span><br><span style="font-family:"Times New Roman";font-size:19px;">This is the critical moment. The authenticated session cookie is copied.</span></p></li><li><p class="paragraph" style="text-align:left;"><span style="font-family:"Times New Roman";font-size:19px;">The attacker replays the session</span><br><span style="font-family:"Times New Roman";font-size:19px;">They now have a fully authenticated, high-trust session—no MFA required.</span></p></li><li><p class="paragraph" style="text-align:left;"><span style="font-family:"Times New Roman";font-size:19px;">Persistence and abuse begin</span><br><span style="font-family:"Times New Roman";font-size:19px;">Common follow-on actions include:</span><br></p><ul><li><p class="paragraph" style="text-align:left;"><span style="font-family:"Times New Roman";font-size:19px;">Registering a new MFA method</span></p></li><li><p class="paragraph" style="text-align:left;"><span style="font-family:"Times New Roman";font-size:19px;">Granting OAuth access</span></p></li><li><p class="paragraph" style="text-align:left;"><span style="font-family:"Times New Roman";font-size:19px;">Creating mailbox rules for BEC</span></p></li><li><p class="paragraph" style="text-align:left;"><span style="font-family:"Times New Roman";font-size:19px;">Exfiltrating data from SaaS platforms</span></p></li><li><p class="paragraph" 
style="text-align:left;"><span style="font-family:"Times New Roman";font-size:19px;">Spinning up cloud resources</span></p></li></ul></li></ol><p class="paragraph" style="text-align:left;"><span style="font-family:"Times New Roman";font-size:19px;">From the platform’s perspective, this all looks like legitimate user activity. Many detections fail precisely because nothing “breaks.”</span></p><p class="paragraph" style="text-align:left;"><span style="font-family:TimesNewRomanPS-BoldMT;font-size:24px;"><b>Why This Keeps Working: Identity</b></span><span style="font-family:TimesNewRomanPS-BoldMT;font-size:24px;"><b> </b></span><span style="font-family:TimesNewRomanPS-BoldMT;font-size:24px;"><b>Is</b></span><span style="font-family:TimesNewRomanPS-BoldMT;font-size:24px;"><b> </b></span><span style="font-family:TimesNewRomanPS-BoldMT;font-size:24px;"><b>the Perimeter</b></span></p><p class="paragraph" style="text-align:left;"><span style="font-family:"Times New Roman";font-size:19px;">AiTM thrives because modern environments are built this way:</span></p><ul><li><p class="paragraph" style="text-align:left;"><span style="font-family:"Times New Roman";font-size:19px;">SSO is everywhere</span></p></li><li><p class="paragraph" style="text-align:left;"><span style="font-family:"Times New Roman";font-size:19px;">Sessions last hours—or weeks</span></p></li><li><p class="paragraph" style="text-align:left;"><span style="font-family:"Times New Roman";font-size:19px;">Conditional access trusts authenticated sessions by default</span></p></li><li><p class="paragraph" style="text-align:left;"><span style="font-family:"Times New Roman";font-size:19px;">OTPs, push prompts, and SMS aren’t cryptographically bound</span></p></li><li><p class="paragraph" style="text-align:left;"><span style="font-family:"Times New Roman";font-size:19px;">Browser sessions aren’t tied to a specific device</span></p></li></ul><p class="paragraph" style="text-align:left;"><span style="font-family:"Times New 
Roman";font-size:19px;">Attackers don’t need malware.</span></p><p class="paragraph" style="text-align:left;"><span style="font-family:"Times New Roman";font-size:19px;">They don’t need a zero-day.</span></p><p class="paragraph" style="text-align:left;"><span style="font-family:"Times New Roman";font-size:19px;">They just need a user to log in—through them.</span></p><p class="paragraph" style="text-align:left;"><span style="font-family:TimesNewRomanPS-BoldMT;font-size:24px;"><b>Phishing-Resistant MFA Is the Real Fix</b></span></p><p class="paragraph" style="text-align:left;"><span style="font-family:"Times New Roman";font-size:19px;">The only authentication methods that reliably stop AiTM are those that are cryptographically bound and origin-verified:</span></p><ul><li><p class="paragraph" style="text-align:left;"><span style="font-family:"Times New Roman";font-size:19px;">FIDO2 security keys</span></p></li><li><p class="paragraph" style="text-align:left;"><span style="font-family:"Times New Roman";font-size:19px;">Passkeys</span></p></li><li><p class="paragraph" style="text-align:left;"><span style="font-family:"Times New Roman";font-size:19px;">WebAuthn device-bound credentials</span></p></li></ul><p class="paragraph" style="text-align:left;"><span style="font-family:"Times New Roman";font-size:19px;">These methods don’t transmit secrets that can be intercepted. 
Authentication is bound to:</span></p><ul><li><p class="paragraph" style="text-align:left;"><span style="font-family:"Times New Roman";font-size:19px;">the device</span></p></li><li><p class="paragraph" style="text-align:left;"><span style="font-family:"Times New Roman";font-size:19px;">the browser</span></p></li><li><p class="paragraph" style="text-align:left;"><span style="font-family:"Times New Roman";font-size:19px;">the legitimate domain</span></p></li></ul><p class="paragraph" style="text-align:left;"><span style="font-family:"Times New Roman";font-size:19px;">A proxy can’t fake that.</span></p><p class="paragraph" style="text-align:left;"><span style="font-family:"Times New Roman";font-size:19px;">If your organization relies on MFA but hasn’t standardized on phishing-resistant MFA, AiTM isn’t hypothetical. It’s an active risk.</span></p><p class="paragraph" style="text-align:left;"><span style="font-family:TimesNewRomanPS-BoldMT;font-size:24px;"><b>Detecting AiTM in Practice</b></span></p><p class="paragraph" style="text-align:left;"><span style="font-family:"Times New Roman";font-size:19px;">AiTM leaves fingerprints if you’re looking in the right places:</span></p><ul><li><p class="paragraph" style="text-align:left;"><span style="font-family:"Times New Roman";font-size:19px;">MFA completion from one device followed by session use from another location minutes later</span></p></li><li><p class="paragraph" style="text-align:left;"><span style="font-family:"Times New Roman";font-size:19px;">Browser or JA4 fingerprint mismatches between authentication and session activity</span></p></li><li><p class="paragraph" style="text-align:left;"><span style="font-family:"Times New Roman";font-size:19px;">Unexpected OAuth consent grants</span></p></li><li><p class="paragraph" style="text-align:left;"><span style="font-family:"Times New Roman";font-size:19px;">New MFA methods added by the “user”</span></p></li><li><p class="paragraph" style="text-align:left;"><span 
style="font-family:"Times New Roman";font-size:19px;">Silent mailbox forwarding rules</span></p></li><li><p class="paragraph" style="text-align:left;"><span style="font-family:"Times New Roman";font-size:19px;">Near-instant impossible travel events</span></p></li><li><p class="paragraph" style="text-align:left;"><span style="font-family:"Times New Roman";font-size:19px;">Token refreshes from unusual IP ranges (residential proxies, cloud VMs)</span></p></li></ul><p class="paragraph" style="text-align:left;"><span style="font-family:"Times New Roman";font-size:19px;">When identity telemetry is collected and correlated properly, AiTM stands out clearly. The challenge isn’t visibility—it’s knowing what matters.</span></p><p class="paragraph" style="text-align:left;"><span style="font-family:"Times New Roman";font-size:19px;">More on that later. This part gets long.</span></p></div><div class='beehiiv__footer'><br class='beehiiv__footer__break'><hr class='beehiiv__footer__line'><a target="_blank" class="beehiiv__footer_link" style="text-align: center;" href="https://www.beehiiv.com/?utm_campaign=e68fde13-ba10-4716-81e5-cb9813b10ae8&utm_medium=post_rss&utm_source=matt_james_proactive_security">Powered by beehiiv</a></div></div>
  ]]></content:encoded>
</item>

      <item>
  <title>Rethinking AI Red Teaming: From Model Bugs to Systemic Resilience</title>
  <description>A systems-level approach to AI red teaming for managing emergent risks and sociotechnical vulnerabilities</description>
      <enclosure url="https://media.beehiiv.com/cdn-cgi/image/fit=scale-down,format=auto,onerror=redirect,quality=80/uploads/asset/file/0a0ce2a4-ac71-4de0-924a-5144708b7a36/ai-redteaming.png" length="1178318" type="image/png"/>
  <link>https://odnd.com/p/rethinking-ai-red-teaming-from-model-bugs-to-systemic-resilience</link>
  <guid isPermaLink="true">https://odnd.com/p/rethinking-ai-red-teaming-from-model-bugs-to-systemic-resilience</guid>
  <pubDate>Fri, 18 Jul 2025 18:13:40 +0000</pubDate>
  <atom:published>2025-07-18T18:13:40Z</atom:published>
    <dc:creator>Matt James</dc:creator>
    <category><![CDATA[Cybersecurity]]></category>
    <category><![CDATA[Ai Governance]]></category>
    <category><![CDATA[Red Teaming]]></category>
  <content:encoded><![CDATA[
    <div class='beehiiv'><style>
  .bh__table, .bh__table_header, .bh__table_cell { border: 1px solid #C0C0C0; }
  .bh__table_cell { padding: 5px; background-color: #FFFFFF; }
  .bh__table_cell p { color: #2D2D2D; font-family: 'Helvetica',Arial,sans-serif !important; overflow-wrap: break-word; }
  .bh__table_header { padding: 5px; background-color:#F1F1F1; }
  .bh__table_header p { color: #2A2A2A; font-family:'Trebuchet MS','Lucida Grande',Tahoma,sans-serif !important; overflow-wrap: break-word; }
</style><div class='beehiiv__body'><p class="paragraph" style="text-align:left;"><span style="color:rgb(0, 0, 0);font-size:medium;">Red teaming started as a military practice and eventually became a staple in cybersecurity, but it’s now being reshaped by the rise of artificial intelligence. As generative AI systems weave deeper into everyday life and major industries, both companies and policymakers have leaned on red teaming to spot weaknesses and improve safety. But a recent paper from Stanford and Georgetown researchers argues that today’s AI red-teaming efforts are too limited — and that we may be focusing on small issues while overlooking much bigger risks.</span></p><h3 class="heading" style="text-align:left;" id="the-problem-a-narrow-focus-on-model">The Problem: A Narrow Focus on Models</h3><p class="paragraph" style="text-align:left;"><span style="color:rgb(0, 0, 0);font-size:medium;">Most AI red-teaming work today zeroes in on finding weaknesses in a single model — what the authors call “micro-level” red teaming. These tests usually revolve around prompt injection, jailbreaks, and other adversarial tactics to coax out bad behavior. Useful as that is, it doesn’t get at the broader, more complicated risks that emerge once AI systems are deployed in real-world settings.</span></p><p class="paragraph" style="text-align:left;"><span style="color:rgb(0, 0, 0);font-size:medium;">The authors argue that this narrow focus misses the bigger picture. Most red-teaming efforts ignore the sociotechnical side of things — how AI systems behave once real people, institutions, and environments get involved. 
Many of the most serious risks don’t come from a model misfiring on its own, but from how it’s used in the world: misinformation that snowballs, feedback loops that amplify bias, or users finding and exploiting gaps in the overall system rather than the model itself.</span></p><h3 class="heading" style="text-align:left;" id="a-broader-vision-two-levels-of-ai-r">A Broader Vision: Two Levels of AI Red Teaming</h3><p class="paragraph" style="text-align:left;">To fill this gap, the authors propose a comprehensive framework that distinguishes between two complementary levels of AI red teaming:</p><ul><li><p class="paragraph" style="text-align:left;"><b>Micro-level red teaming</b> targets the <i>model itself</i>, evaluating prompt behavior, guardrails, and output reliability. This is the current norm in most AI labs.</p></li><li><p class="paragraph" style="text-align:left;"><b>Macro-level red teaming</b>, by contrast, considers the <i>entire AI system lifecycle</i>, from data sourcing and training pipeline to deployment environment and downstream impacts. 
It involves assessing risks that emerge from complex interactions between models, users, interfaces, and institutions.</p></li></ul><p class="paragraph" style="text-align:left;">By operationalizing red teaming at both levels, organizations can shift from reactive patchwork to proactive risk management.</p><h3 class="heading" style="text-align:left;" id="recommendations-for-the-future">Recommendations for the Future</h3><p class="paragraph" style="text-align:left;">Drawing on decades of experience from cybersecurity and systems engineering, the authors offer a set of actionable recommendations to improve AI red teaming practices:</p><ol start="1"><li><p class="paragraph" style="text-align:left;"><b>Treat Red Teaming as an Ongoing Process</b>: Red teaming should not be a one-time product launch checklist but an iterative, continuous evaluation across the AI lifecycle.</p></li><li><p class="paragraph" style="text-align:left;"><b>Build Multifunctional Red Teams</b>: Effective red teams should include experts in machine learning, human-computer interaction, social science, cybersecurity, and domain-specific knowledge. 
This diversity is essential to uncovering emergent, cross-cutting vulnerabilities.</p></li><li><p class="paragraph" style="text-align:left;"><b>Evaluate Sociotechnical Contexts</b>: Red teams must consider how AI systems are used in practice—including how users might manipulate them, how they interact with other tools, and how organizational incentives shape deployment.</p></li><li><p class="paragraph" style="text-align:left;"><b>Incorporate Systems Thinking</b>: Rather than viewing AI safety as a matter of model robustness alone, organizations must assess systemic resilience—identifying feedback loops, cascades, and coordination failures that lead to large-scale harm.</p></li><li><p class="paragraph" style="text-align:left;"><b>Embrace Transparency and External Testing</b>: Independent red teaming and disclosure mechanisms can surface risks that internal teams may miss due to institutional blind spots or incentive structures.</p></li></ol><p class="paragraph" style="text-align:left;">The field of AI safety is at an inflection point. As AI systems become more powerful and socially embedded, the traditional model-focused approach to red teaming must evolve. This paper offers a timely and much-needed reframing of red teaming—not as a narrow adversarial testing tool, but as a holistic practice rooted in systems thinking, interdisciplinary collaboration, and continuous scrutiny.</p><p class="paragraph" style="text-align:left;">Organizations that adopt this broader vision will be better positioned to identify and mitigate not just model-level bugs, but the real-world harms that arise when AI meets messy, unpredictable human systems.</p><p class="paragraph" style="text-align:left;"><b>Citation:</b><br><span style="color:rgb(0, 0, 0);font-size:medium;">Sharkey, L., Pasquinelli, M., Cheng, B., Dobbe, R., et al. </span><i>Operationalizing Red Teaming for AI Systems</i><span style="color:rgb(0, 0, 0);font-size:medium;">. arXiv preprint arXiv:2507.05538. 
July 2025.</span><br><a class="link" href="https://arxiv.org/abs/2507.05538?utm_source=odnd.com&utm_medium=newsletter&utm_campaign=rethinking-ai-red-teaming-from-model-bugs-to-systemic-resilience" target="_blank" rel="noopener noreferrer nofollow">https://arxiv.org/abs/2507.05538</a></p></div><div class='beehiiv__footer'><br class='beehiiv__footer__break'><hr class='beehiiv__footer__line'><a target="_blank" class="beehiiv__footer_link" style="text-align: center;" href="https://www.beehiiv.com/?utm_campaign=40c1583c-effc-4552-abee-f1f7d2469871&utm_medium=post_rss&utm_source=matt_james_proactive_security">Powered by beehiiv</a></div></div>
  ]]></content:encoded>
</item>

      <item>
  <title>AI-Driven Social Engineering</title>
  <description>Information Operations with AI, Part 1</description>
      <enclosure url="https://media.beehiiv.com/cdn-cgi/image/fit=scale-down,format=auto,onerror=redirect,quality=80/uploads/asset/file/772d2bcf-0fd7-44b7-a46e-8621579838e9/female-hacker.jpeg" length="265791" type="image/jpeg"/>
  <link>https://odnd.com/p/aidriven-social-engineering</link>
  <guid isPermaLink="true">https://odnd.com/p/aidriven-social-engineering</guid>
  <pubDate>Wed, 11 Sep 2024 15:35:00 +0000</pubDate>
  <atom:published>2024-09-11T15:35:00Z</atom:published>
    <dc:creator>Matt James</dc:creator>
  <content:encoded><![CDATA[
    <div class='beehiiv'><style>
  .bh__table, .bh__table_header, .bh__table_cell { border: 1px solid #C0C0C0; }
  .bh__table_cell { padding: 5px; background-color: #FFFFFF; }
  .bh__table_cell p { color: #2D2D2D; font-family: 'Helvetica',Arial,sans-serif !important; overflow-wrap: break-word; }
  .bh__table_header { padding: 5px; background-color:#F1F1F1; }
  .bh__table_header p { color: #2A2A2A; font-family:'Trebuchet MS','Lucida Grande',Tahoma,sans-serif !important; overflow-wrap: break-word; }
</style><div class='beehiiv__body'><p class="paragraph" style="text-align:left;">It’s been a while since I’ve posted, so I want to start stretching this muscle again over the coming months. As it is timely and pertinent, I wanted to cover information operations in the context of influencing others through the use of AI.</p><h2 class="heading" style="text-align:left;" id="ai-driven-social-engineering"><span style="color:inherit;font-family:inherit;font-size:inherit;">AI-Driven Social Engineering</span></h2><p class="paragraph" style="text-align:left;"><span style="color:rgb(15, 20, 25);font-family:TwitterChirp, -apple-system, BlinkMacSystemFont, Segoe UI, Roboto, Helvetica, Arial, sans-serif;font-size:15px;">The advent of AI has ushered in a new era of social engineering, where attackers leverage machine learning (ML) algorithms to craft phishing emails or messages that are not just personalized but dynamically adaptive.</span><span style="color:inherit;font-family:inherit;font-size:inherit;"> Here&#39;s how it works:</span></p><ul><li><p class="paragraph" style="text-align:left;"><span style="color:inherit;font-family:inherit;font-size:inherit;"><b>Personalization:</b></span><span style="color:inherit;font-family:inherit;font-size:inherit;"> AI analyzes vast datasets, including social media profiles, purchase histories, and browsing habits, to tailor messages that resonate with the target&#39;s interests, fears, or needs. 
This personalization goes beyond just using the recipient&#39;s name; it might reference recent activities or events in their life, making the communication seem genuinely relevant.</span></p></li><li><p class="paragraph" style="text-align:left;"><span style="color:inherit;font-family:inherit;font-size:inherit;"><b>Dynamic Adaptation:</b></span><span style="color:inherit;font-family:inherit;font-size:inherit;"> Unlike traditional phishing where messages are static, AI-driven systems can adjust in real-time based on user interaction. If a recipient engages with the message, the AI might alter its approach, perhaps offering more incentives or changing the tone to sound more urgent or authoritative.</span></p></li><li><p class="paragraph" style="text-align:left;"><span style="color:inherit;font-family:inherit;font-size:inherit;"><b>Natural Language Generation (NLG):</b></span><span style="color:inherit;font-family:inherit;font-size:inherit;"> AI uses NLG to create text that&#39;s indistinguishable from human writing, often incorporating emotional triggers or persuasive language that traditional phishing emails might lack.</span></p></li><li><p class="paragraph" style="text-align:left;"><span style="color:inherit;font-family:inherit;font-size:inherit;"><b>Multi-Channel Engagement:</b></span><span style="color:inherit;font-family:inherit;font-size:inherit;"> These attacks don&#39;t just stop at email. 
AI can orchestrate campaigns across multiple platforms, including SMS, social media, or even voice calls, creating a multi-faceted assault that&#39;s harder to dismiss as spam.</span></p></li></ul><p class="paragraph" style="text-align:left;"><span style="color:rgb(15, 20, 25);font-family:TwitterChirp, -apple-system, BlinkMacSystemFont, Segoe UI, Roboto, Helvetica, Arial, sans-serif;font-size:15px;">The implications of AI-driven social engineering are profound:</span></p><ul><li><p class="paragraph" style="text-align:left;"><span style="color:inherit;font-family:inherit;font-size:inherit;"><b>Misinformation Propagation:</b></span><span style="color:inherit;font-family:inherit;font-size:inherit;"> By impersonating trusted sources like banks, government officials, or even friends and family, these attacks can spread misinformation with alarming efficiency. For instance, an email purportedly from a health organization might spread false information about a health crisis, influencing public behavior or policy.</span></p></li><li><p class="paragraph" style="text-align:left;"><span style="color:inherit;font-family:inherit;font-size:inherit;"><b>Corporate Espionage:</b></span><span style="color:inherit;font-family:inherit;font-size:inherit;"> In corporate settings, these techniques can be used to extract sensitive information or manipulate decisions. 
An AI-crafted email from a supposed executive could direct employees to transfer funds or disclose confidential data, all under the guise of legitimate business operations.</span></p></li><li><p class="paragraph" style="text-align:left;"><span style="color:inherit;font-family:inherit;font-size:inherit;"><b>Public Opinion Manipulation:</b></span><span style="color:inherit;font-family:inherit;font-size:inherit;"> On a larger scale, AI-driven social engineering can sway public opinion by flooding social media with bots that push specific narratives or by creating fake news stories that resonate with targeted demographics, potentially influencing elections or market trends.</span></p></li><li><p class="paragraph" style="text-align:left;"><span style="color:inherit;font-family:inherit;font-size:inherit;"><b>Increased Effectiveness:</b></span><span style="color:inherit;font-family:inherit;font-size:inherit;"> The dynamic nature of these attacks means they can bypass traditional security measures like spam filters or user training, which often look for static signs of phishing. 
The adaptability of AI makes each interaction potentially unique, reducing the effectiveness of static defense strategies.</span></p></li><li><p class="paragraph" style="text-align:left;"><span style="color:inherit;font-family:inherit;font-size:inherit;"><b>Long-term Impact:</b></span><span style="color:inherit;font-family:inherit;font-size:inherit;"> Beyond immediate actions like clicking a link or downloading a file, these attacks can erode trust in digital communications, leading to broader societal impacts where skepticism towards any form of digital communication becomes the norm.</span></p></li></ul><p class="paragraph" style="text-align:left;"><span style="color:inherit;font-family:inherit;font-size:inherit;">This evolution in social engineering represents a significant leap in sophistication, requiring not just technical countermeasures but also a shift in how individuals and organizations approach digital trust and verification. The challenge lies in developing defenses that are equally adaptive, leveraging AI to detect and counteract these nuanced attacks while fostering a culture of skepticism without paralyzing digital communication.</span></p><p class="paragraph" style="text-align:left;">If you’re interested in more nuanced and specific attack vectors, please feel free to connect with me on <a class="link" href="https://www.linkedin.com/in/purpleheart/?utm_source=odnd.com&utm_medium=newsletter&utm_campaign=ai-driven-social-engineering" target="_blank" rel="noopener noreferrer nofollow">LinkedIn</a> and <a class="link" href="https://x.com/themattjames?utm_source=odnd.com&utm_medium=newsletter&utm_campaign=ai-driven-social-engineering" target="_blank" rel="noopener noreferrer nofollow">X</a>. 
</p></div><div class='beehiiv__footer'><br class='beehiiv__footer__break'><hr class='beehiiv__footer__line'><a target="_blank" class="beehiiv__footer_link" style="text-align: center;" href="https://www.beehiiv.com/?utm_campaign=2c9125da-dee6-459e-82f6-9b7031d642ea&utm_medium=post_rss&utm_source=matt_james_proactive_security">Powered by beehiiv</a></div></div>
  ]]></content:encoded>
</item>

      <item>
  <title>The Evolution from Classical to Post-Quantum Cryptography</title>
  <description>The Next Step in Securing Our Digital World</description>
      <enclosure url="https://media.beehiiv.com/cdn-cgi/image/fit=scale-down,format=auto,onerror=redirect,quality=80/uploads/asset/file/9082dea3-fc1b-4954-be26-b97cca61aa39/smb-security.png" length="1306199" type="image/png"/>
  <link>https://odnd.com/p/understanding-classical-postquantum-cryptography</link>
  <guid isPermaLink="true">https://odnd.com/p/understanding-classical-postquantum-cryptography</guid>
  <pubDate>Wed, 10 Jul 2024 14:00:00 +0000</pubDate>
  <atom:published>2024-07-10T14:00:00Z</atom:published>
    <dc:creator>Matt James</dc:creator>
  <content:encoded><![CDATA[
    <div class='beehiiv'><style>
  .bh__table, .bh__table_header, .bh__table_cell { border: 1px solid #C0C0C0; }
  .bh__table_cell { padding: 5px; background-color: #FFFFFF; }
  .bh__table_cell p { color: #2D2D2D; font-family: 'Helvetica',Arial,sans-serif !important; overflow-wrap: break-word; }
  .bh__table_header { padding: 5px; background-color:#F1F1F1; }
  .bh__table_header p { color: #2A2A2A; font-family:'Trebuchet MS','Lucida Grande',Tahoma,sans-serif !important; overflow-wrap: break-word; }
</style><div class='beehiiv__body'><p class="paragraph" style="text-align:left;"><span style="color:rgb(0, 0, 0);font-size:medium;">Just as the internet evolved from HTTP to HTTPS to protect data-in-transit, the field of cryptography is undergoing a transformation to safeguard against the looming threat of quantum computing. This blog post explores the analogy between the shift from HTTP to HTTPS and the current transition from classical cryptography to post-quantum cryptography (PQC), highlighting why this evolution is essential for our digital future.</span></p><h4 class="heading" style="text-align:start;" id="understanding-the-evolution-http-to">Understanding the Evolution: HTTP to HTTPS</h4><p class="paragraph" style="text-align:start;"><a class="link" href="https://developer.mozilla.org/en-US/docs/Web/HTTP/Overview?utm_source=odnd.com&utm_medium=newsletter&utm_campaign=the-evolution-from-classical-to-post-quantum-cryptography" target="_blank" rel="noopener noreferrer nofollow"><b>HTTP (Hypertext Transfer Protocol)</b></a> was the original protocol for transferring web data. It served its purpose well in the early days of the internet, enabling the exchange of information between web servers and clients. However, as the internet grew, so did the capabilities of attackers. HTTP&#39;s lack of robust security features became a significant vulnerability, allowing for data to be intercepted and manipulated.</p><p class="paragraph" style="text-align:start;"><span style="color:rgb(0, 0, 0);font-size:medium;">To address these security concerns, </span><a class="link" href="https://www.cloudflare.com/learning/ssl/what-is-https/?utm_source=odnd.com&utm_medium=newsletter&utm_campaign=the-evolution-from-classical-to-post-quantum-cryptography" target="_blank" rel="noopener noreferrer nofollow"><b>HTTPS (Hypertext Transfer Protocol Secure)</b></a><span style="color:rgb(0, 0, 0);font-size:medium;"> was developed. 
By adding layers of encryption through SSL/TLS protocols, HTTPS ensures that data transmitted over the internet is secure and cannot be easily intercepted or tampered with. This transition from HTTP to HTTPS marked a significant improvement in web security, making it the standard for protecting online communications.</span></p><h4 class="heading" style="text-align:start;" id="classical-cryptography-vs-post-quan">Classical Cryptography vs. Post-Quantum Cryptography</h4><p class="paragraph" style="text-align:start;"><a class="link" href="https://sagrawalx.github.io/crypt/classical-cryptosystems?utm_source=odnd.com&utm_medium=newsletter&utm_campaign=the-evolution-from-classical-to-post-quantum-cryptography" target="_blank" rel="noopener noreferrer nofollow"><b>Classical cryptography</b></a> has been the cornerstone of secure communications for decades. Algorithms like RSA and ECC rely on mathematical problems, such as integer factorization and elliptic-curve discrete logarithms, that are computationally infeasible for classical computers to solve, providing a robust defense against unauthorized access. However, the advent of quantum computing threatens to undermine these cryptographic methods. Quantum computers, with their immense computational power, have the potential to break classical cryptographic algorithms, rendering them obsolete.</p><p class="paragraph" style="text-align:start;">Enter <a class="link" href="https://csrc.nist.gov/projects/post-quantum-cryptography?utm_source=odnd.com&utm_medium=newsletter&utm_campaign=the-evolution-from-classical-to-post-quantum-cryptography" target="_blank" rel="noopener noreferrer nofollow"><b>Post-Quantum Cryptography (PQC)</b></a>. Similar to how HTTPS was created to address the security shortcomings of HTTP, PQC is being developed to counter the future threat posed by quantum computers. 
PQC algorithms are designed to be secure against both classical and quantum attacks, ensuring that our data remains protected in the quantum era.</p><h4 class="heading" style="text-align:start;" id="the-analogy-httphttps-and-classical">The Analogy: HTTP/HTTPS and Classical/PQC</h4><p class="paragraph" style="text-align:start;">The analogy between HTTP/HTTPS and classical/PQC helps illustrate the need for this cryptographic evolution:</p><ul><li><p class="paragraph" style="text-align:left;"><b>HTTP (Classical Cryptography)</b>: Just as HTTP was sufficient for the early internet but became vulnerable as attack methods advanced, classical cryptographic methods are currently effective but will become vulnerable with the advent of quantum computers.</p></li><li><p class="paragraph" style="text-align:left;"><b>HTTPS (PQC)</b>: HTTPS was developed to address the security shortcomings of HTTP, adding encryption to secure web communications. Similarly, PQC is being developed to address the security shortcomings of classical cryptography in the face of quantum computing threats.</p></li></ul><h4 class="heading" style="text-align:start;" id="key-points-of-the-analogy">Key Points of the Analogy:</h4><ol start="1"><li><p class="paragraph" style="text-align:left;"><b>Coexistence</b>:</p><ul><li><p class="paragraph" style="text-align:left;">HTTP still exists but is largely replaced by HTTPS for secure communications. Similarly, classical cryptography will still exist but is likely to be supplemented or replaced by PQC algorithms for enhanced security.</p></li></ul></li><li><p class="paragraph" style="text-align:left;"><b>Enhanced Security</b>:</p><ul><li><p class="paragraph" style="text-align:left;">HTTPS provides a more secure alternative to HTTP. 
Likewise, PQC offers more secure alternatives to classical cryptographic methods to protect against advanced threats.</p></li></ul></li><li><p class="paragraph" style="text-align:left;"><b>Transition Period</b>:</p><ul><li><p class="paragraph" style="text-align:left;">The shift from HTTP to HTTPS involved a transition period with the development of standards, tools, and widespread adoption. The shift to PQC will also involve a transition period with ongoing research, standardization, and gradual adoption.</p></li></ul></li></ol><h3 class="heading" style="text-align:start;" id="preparing-for-the-quantum-future">Preparing for the Quantum Future</h3><p class="paragraph" style="text-align:start;">The move from classical cryptography to post-quantum cryptography is not just an upgrade but a necessary evolution to secure our digital future. As quantum computing continues to advance, the urgency to develop and implement PQC solutions grows. Just as HTTPS became the standard for secure web communications, PQC will become essential to protect our data in a post-quantum world.</p><p class="paragraph" style="text-align:start;">By understanding this analogy and recognizing the importance of PQC, we can better prepare for the challenges and opportunities that lie ahead. The future of cybersecurity depends on our ability to adapt and evolve, ensuring that our digital infrastructure remains robust and secure against the threats of tomorrow.</p></div><div class='beehiiv__footer'><br class='beehiiv__footer__break'><hr class='beehiiv__footer__line'><a target="_blank" class="beehiiv__footer_link" style="text-align: center;" href="https://www.beehiiv.com/?utm_campaign=e7c4b2a6-6fa5-462d-83fc-764c8a070bb0&utm_medium=post_rss&utm_source=matt_james_proactive_security">Powered by beehiiv</a></div></div>
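<p class="paragraph" style="text-align:left;">One way to make "secure against both classical and quantum attacks" concrete: several PQC designs, including the hash-based signature schemes standardized by NIST, rely only on the security of a hash function rather than on factoring or discrete logarithms. The snippet below is a toy Lamport one-time signature in Python, a classroom illustration of that idea and not a production scheme:</p>

```python
import hashlib
import secrets

def keygen():
    """Generate a Lamport one-time key pair: 256 pairs of random secrets,
    with the public key being the SHA-256 hashes of those secrets."""
    sk = [[secrets.token_bytes(32) for _ in range(2)] for _ in range(256)]
    pk = [[hashlib.sha256(s).digest() for s in pair] for pair in sk]
    return sk, pk

def sign(message: bytes, sk):
    """Reveal one secret per bit of the message digest."""
    digest = int.from_bytes(hashlib.sha256(message).digest(), "big")
    return [sk[i][(digest >> (255 - i)) & 1] for i in range(256)]

def verify(message: bytes, signature, pk) -> bool:
    """Hash each revealed secret and compare against the public key."""
    digest = int.from_bytes(hashlib.sha256(message).digest(), "big")
    return all(
        hashlib.sha256(signature[i]).digest() == pk[i][(digest >> (255 - i)) & 1]
        for i in range(256)
    )
```

<p class="paragraph" style="text-align:left;">Each Lamport key pair can safely sign exactly one message; practical hash-based schemes such as XMSS and SPHINCS+ build many one-time keys into a tree to get reusable keys.</p>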
  ]]></content:encoded>
</item>

      <item>
  <title>Navigating the Quantum Shift</title>
  <description>Why Post-Quantum Cryptography Matters</description>
      <enclosure url="https://media.beehiiv.com/cdn-cgi/image/fit=scale-down,format=auto,onerror=redirect,quality=80/uploads/asset/file/02e0b58d-74c3-418d-b166-a8e1a53eade9/nist.png" length="57068" type="image/png"/>
  <link>https://odnd.com/p/navigating-quantum-shift</link>
  <guid isPermaLink="true">https://odnd.com/p/navigating-quantum-shift</guid>
  <pubDate>Mon, 06 May 2024 20:00:00 +0000</pubDate>
  <atom:published>2024-05-06T20:00:00Z</atom:published>
    <dc:creator>Matt James</dc:creator>
  <content:encoded><![CDATA[
    <div class='beehiiv'><style>
  .bh__table, .bh__table_header, .bh__table_cell { border: 1px solid #C0C0C0; }
  .bh__table_cell { padding: 5px; background-color: #FFFFFF; }
  .bh__table_cell p { color: #2D2D2D; font-family: 'Helvetica',Arial,sans-serif !important; overflow-wrap: break-word; }
  .bh__table_header { padding: 5px; background-color:#F1F1F1; }
  .bh__table_header p { color: #2A2A2A; font-family:'Trebuchet MS','Lucida Grande',Tahoma,sans-serif !important; overflow-wrap: break-word; }
</style><div class='beehiiv__body'><p class="paragraph" style="text-align:start;">Quantum computing is stepping out of science fiction and into reality, challenging our current cryptographic standards. With quantum algorithms such as Shor’s potentially unraveling the complexities of RSA and ECC, and Grover&#39;s speeding up brute-force key searches enough to effectively halve symmetric key strength, the cryptographic community is steering towards Post-Quantum Cryptography (PQC). PQC aims to devise systems that can withstand the onslaught of both quantum and traditional computational threats, thereby securing our digital communications and data.</p><div class="image"><img alt="" class="image__image" style="" src="https://media.beehiiv.com/cdn-cgi/image/fit=scale-down,format=auto,onerror=redirect,quality=80/uploads/asset/file/02e0b58d-74c3-418d-b166-a8e1a53eade9/nist.png?t=1715026796"/></div><p class="paragraph" style="text-align:start;">This proactive approach, spearheaded by the National Institute of Standards and Technology (NIST), involves evaluating quantum-resistant algorithms through a meticulous public process. 
As we edge closer to the quantum era, understanding and integrating PQC is not just advisable; it&#39;s imperative for safeguarding sensitive information across various sectors.</p><p class="paragraph" style="text-align:start;">For a deeper dive into the specifics of these quantum challenges and the ongoing efforts in cryptographic innovations, check out NIST&#39;s dedicated project page on post-quantum cryptography: <a class="link" href="https://csrc.nist.gov/projects/post-quantum-cryptography?utm_source=odnd.com&utm_medium=newsletter&utm_campaign=navigating-the-quantum-shift" target="_blank" rel="noopener noreferrer nofollow" style="color: var(--link)">NIST Post-Quantum Cryptography</a>.</p></div><div class='beehiiv__footer'><br class='beehiiv__footer__break'><hr class='beehiiv__footer__line'><a target="_blank" class="beehiiv__footer_link" style="text-align: center;" href="https://www.beehiiv.com/?utm_campaign=ea1b0053-8065-4f14-8913-5fd354040cfb&utm_medium=post_rss&utm_source=matt_james_proactive_security">Powered by beehiiv</a></div></div>
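<p class="paragraph" style="text-align:start;">The asymmetry between the two quantum attacks is worth quantifying. A common back-of-the-envelope model, not a precise cost estimate, is that Grover's quadratic speedup roughly halves the effective bit strength of a symmetric key, while Shor's polynomial-time algorithm breaks RSA and ECC outright:</p>

```python
# Rough security accounting under known quantum attacks:
#  - Grover's algorithm searches N keys in ~sqrt(N) steps, so an
#    n-bit symmetric key offers roughly n/2 bits of quantum security.
#  - Shor's algorithm factors and solves discrete logs in polynomial
#    time, so RSA/ECC key sizes provide no meaningful quantum margin.

def grover_effective_bits(key_bits: int) -> int:
    """Effective security of an n-bit symmetric key against Grover."""
    return key_bits // 2

for cipher, bits in [("AES-128", 128), ("AES-256", 256)]:
    print(f"{cipher}: ~{grover_effective_bits(bits)} bits vs a quantum adversary")
```

<p class="paragraph" style="text-align:start;">This is why symmetric cryptography is usually addressed by doubling key sizes, while public-key algorithms must be replaced with PQC designs altogether.</p>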
  ]]></content:encoded>
</item>

      <item>
  <title>Cybersecurity Tips for Small and Medium Businesses</title>
  <description>Practical and Affordable Ways to Boost Your Cyber Defenses</description>
      <enclosure url="https://media.beehiiv.com/cdn-cgi/image/fit=scale-down,format=auto,onerror=redirect,quality=80/uploads/asset/file/6f0be22c-00fb-4694-a0e6-9187d48d6144/smb-security.png" length="1306199" type="image/png"/>
  <link>https://odnd.com/p/cybersecurity-tips-small-medium-businesses</link>
  <guid isPermaLink="true">https://odnd.com/p/cybersecurity-tips-small-medium-businesses</guid>
  <pubDate>Mon, 25 Mar 2024 16:51:08 +0000</pubDate>
  <atom:published>2024-03-25T16:51:08Z</atom:published>
    <dc:creator>Matt James</dc:creator>
  <content:encoded><![CDATA[
    <div class='beehiiv'><style>
  .bh__table, .bh__table_header, .bh__table_cell { border: 1px solid #C0C0C0; }
  .bh__table_cell { padding: 5px; background-color: #FFFFFF; }
  .bh__table_cell p { color: #2D2D2D; font-family: 'Helvetica',Arial,sans-serif !important; overflow-wrap: break-word; }
  .bh__table_header { padding: 5px; background-color:#F1F1F1; }
  .bh__table_header p { color: #2A2A2A; font-family:'Trebuchet MS','Lucida Grande',Tahoma,sans-serif !important; overflow-wrap: break-word; }
</style><div class='beehiiv__body'><p class="paragraph" style="text-align:left;">As a small or medium business owner, you may think you&#39;re not a prime target for cyber attacks. Unfortunately, that misconception can leave you vulnerable. In 2022, 61% of data breaches involved small businesses.</p><p class="paragraph" style="text-align:start;">Cyber criminals go after SMBs because they typically have fewer security resources than larger enterprises. But implementing some basic cybersecurity practices can go a long way in protecting your business data and assets.</p><div class="image"><img alt="" class="image__image" style="" src="https://media.beehiiv.com/cdn-cgi/image/fit=scale-down,format=auto,onerror=redirect,quality=80/uploads/asset/file/6f0be22c-00fb-4694-a0e6-9187d48d6144/smb-security.png?t=1711385368"/></div><p class="paragraph" style="text-align:start;"><b>Use Strong Passwords and Multi-Factor Authentication</b></p><p class="paragraph" style="text-align:start;">Weak passwords are one of the most common entry points for hackers. Require all employees to use long, complex passwords that are unique for each account and application. Better yet, use a password manager to store and encrypt passwords.</p><p class="paragraph" style="text-align:start;">You should also enable multi-factor authentication (MFA) whenever possible. MFA adds an extra layer of security by requiring a second form of verification, like a code sent to your phone, in addition to a password.</p><p class="paragraph" style="text-align:start;"><b>Keep Software Up-To-Date</b></p><p class="paragraph" style="text-align:start;">Hackers are constantly finding new software vulnerabilities to exploit. That&#39;s why it&#39;s critical to install security updates for operating systems, applications, and firmware as soon as they are released.</p><p class="paragraph" style="text-align:start;">Set up automatic updates whenever possible. 
For systems that don&#39;t allow automatic patching, assign an employee to regularly check for and install updates.</p><p class="paragraph" style="text-align:start;"><b>Back Up Data Regularly</b></p><p class="paragraph" style="text-align:start;">Ransomware and other malware can make your business data unusable or permanently delete it unless you pay a ransom. The best way to recover from such attacks is to maintain frequent backups of critical data.</p><p class="paragraph" style="text-align:start;">Use the 3-2-1 approach: Keep at least 3 backup copies on 2 different storage types, with 1 copy offsite or in the cloud. Test restoring from backups periodically.</p><p class="paragraph" style="text-align:start;"><b>Provide Cybersecurity Training</b></p><p class="paragraph" style="text-align:start;">Your employees are the first line of defense against phishing, social engineering, and other cyber threats that exploit human vulnerability. Train them on cybersecurity best practices and how to spot potential attacks.</p><p class="paragraph" style="text-align:start;">Conduct routine security awareness activities to keep cybersecurity top of mind. Consider requiring all employees to complete annual cybersecurity training.</p><p class="paragraph" style="text-align:start;"><b>Encrypt Sensitive Data</b></p><p class="paragraph" style="text-align:start;">If sensitive customer, employee or financial data falls into the wrong hands, it could devastate your business. Encryption encodes information so it appears as gibberish to anyone not authorized to access it.</p><p class="paragraph" style="text-align:start;">Encrypt data both at rest (stored data) and in transit (data being transmitted) using industry-standard encryption protocols.</p><p class="paragraph" style="text-align:start;">No business is too small to be a target, and none can afford to ignore cybersecurity. Invest time and resources into protecting your digital assets. 
It&#39;s one of the smartest business decisions you can make.</p><p class="paragraph" style="text-align:start;">If you’re interested in learning more about any of these topics or more, please do not hesitate to reach out. </p></div><div class='beehiiv__footer'><br class='beehiiv__footer__break'><hr class='beehiiv__footer__line'><a target="_blank" class="beehiiv__footer_link" style="text-align: center;" href="https://www.beehiiv.com/?utm_campaign=3aa352f3-8eb3-486f-83c1-86d88f2a35ed&utm_medium=post_rss&utm_source=matt_james_proactive_security">Powered by beehiiv</a></div></div>
  ]]></content:encoded>
</item>

      <item>
  <title>The Impact of AI on Software Security</title>
  <description>The Challenges Ahead</description>
      <enclosure url="https://media.beehiiv.com/cdn-cgi/image/fit=scale-down,format=auto,onerror=redirect,quality=80/uploads/asset/file/acd81da5-8721-4a38-ba05-faa4bfb507a7/human-AI.png" length="2221767" type="image/png"/>
  <link>https://odnd.com/p/impact-ai-software-security</link>
  <guid isPermaLink="true">https://odnd.com/p/impact-ai-software-security</guid>
  <pubDate>Tue, 19 Mar 2024 12:00:00 +0000</pubDate>
  <atom:published>2024-03-19T12:00:00Z</atom:published>
    <dc:creator>Matt James</dc:creator>
  <content:encoded><![CDATA[
    <div class='beehiiv'><style>
  .bh__table, .bh__table_header, .bh__table_cell { border: 1px solid #C0C0C0; }
  .bh__table_cell { padding: 5px; background-color: #FFFFFF; }
  .bh__table_cell p { color: #2D2D2D; font-family: 'Helvetica',Arial,sans-serif !important; overflow-wrap: break-word; }
  .bh__table_header { padding: 5px; background-color:#F1F1F1; }
  .bh__table_header p { color: #2A2A2A; font-family:'Trebuchet MS','Lucida Grande',Tahoma,sans-serif !important; overflow-wrap: break-word; }
</style><div class='beehiiv__body'><p class="paragraph" style="text-align:left;">The digital age is witnessing a significant transformation in software security, primarily driven by advancements in Artificial Intelligence (AI). As this technology continues to evolve, it brings both opportunities and challenges for cybersecurity. Understanding the nuanced role of AI in enhancing and complicating software security is crucial for industry professionals and organizations aiming to navigate this evolving era.</p><div class="image"><img alt="" class="image__image" style="" src="https://media.beehiiv.com/cdn-cgi/image/fit=scale-down,format=auto,onerror=redirect,quality=80/uploads/asset/file/acd81da5-8721-4a38-ba05-faa4bfb507a7/human-AI.png?t=1710850439"/></div><h3 class="heading" style="text-align:start;" id="the-essence-of-ai-in-enhancing-soft"><b>The Essence of AI in Enhancing Software Security</b></h3><p class="paragraph" style="text-align:start;">AI embodies the creation of intelligent systems that can simulate human thought processes, decision-making, and problem-solving abilities. In the realm of software security, AI&#39;s capability to mimic and exceed human cognitive functions is a game-changer. It offers unprecedented advantages in detecting, analyzing, and responding to cyber threats with speed and efficiency far beyond human capabilities.</p><p class="paragraph" style="text-align:start;">AI systems in cybersecurity can process vast datasets to identify patterns and anomalies indicative of malicious activity. They can predict potential vulnerabilities and threats by understanding the nuances of cyberattack strategies. 
Furthermore, AI-driven security systems can automate response protocols, shutting down attacks or isolating affected systems before significant damage is done.</p><h3 class="heading" style="text-align:start;" id="distinguishing-between-ai-and-ml-in"><b>Distinguishing Between AI and ML in Cybersecurity</b></h3><p class="paragraph" style="text-align:start;">While discussing AI&#39;s role in software security, it&#39;s essential to clarify the distinction between AI and its subset, Machine Learning (ML). AI is the broader discipline that encompasses creating machines capable of performing tasks that typically require human intelligence. This includes reasoning, learning, problem-solving, and understanding natural language.</p><p class="paragraph" style="text-align:start;">ML, a subset of AI, focuses specifically on the ability of machines to learn from data, improve from experience, and make decisions based on that learning. In cybersecurity, ML algorithms analyze patterns in data to detect anomalies that could indicate a security breach, learning and adapting to new threats over time.</p><p class="paragraph" style="text-align:start;">However, AI&#39;s application in cybersecurity is not limited to ML. It also includes other technologies like natural language processing (NLP) and rule-based systems, which can interpret and respond to human language or follow complex sets of instructions to identify threats. This broad arsenal of AI technologies enhances the ability of security systems to protect against a wide range of cyberattacks with greater precision and efficiency.</p><h3 class="heading" style="text-align:start;" id="the-challenges-posed-by-ai"><b>The Challenges Posed by AI</b></h3><p class="paragraph" style="text-align:start;">The integration of AI into software security does not come without its challenges. 
The sophistication of AI systems also provides cybercriminals with powerful tools to develop more complex and adaptive forms of malware and phishing attacks. AI can be used to automate the generation of malicious software that can learn and adapt to evade detection or to craft personalized phishing emails at scale, which are more likely to deceive users.</p><p class="paragraph" style="text-align:start;">Moreover, the reliance on AI for cybersecurity raises significant ethical and privacy concerns. The extensive data analysis capabilities of AI systems pose risks to personal privacy and data protection, necessitating stringent safeguards and ethical guidelines for AI use in cybersecurity.</p><h3 class="heading" style="text-align:start;" id="looking-ahead"><b>Looking Ahead</b></h3><p class="paragraph" style="text-align:start;">As we look to the future, the role of AI in software security is set to become even more pivotal. The dynamic nature of cyber threats requires security systems that are not just reactive but predictive and adaptive. AI, with its comprehensive capabilities, stands at the forefront of this shift, offering solutions that can evolve in tandem with emerging threats.</p><p class="paragraph" style="text-align:start;">However, harnessing the full potential of AI in cybersecurity will require ongoing efforts to address the ethical and privacy concerns associated with its use. It will also necessitate a synergistic approach where human expertise and AI capabilities complement each other, ensuring a robust and resilient cybersecurity posture.</p><p class="paragraph" style="text-align:start;">AI is reshaping the landscape of software security, offering sophisticated tools to protect digital assets while also presenting new challenges. Understanding the distinction between AI and its subsets, such as ML, is crucial for leveraging these technologies effectively. 
As we continue to explore the possibilities, it&#39;s clear that its impact will be profound, driving innovations that will define the future of digital security.</p><p class="paragraph" style="text-align:start;">The following references offer a deeper insight into the topics discussed, such as the role of AI in cybersecurity, the distinction between AI and ML, and the ethical considerations surrounding the use of AI:</p><ul><li><p class="paragraph" style="text-align:left;"><b>Artificial Intelligence and Cybersecurity: The New Age of Protection</b> by Cade Metz. This book provides an overview of how artificial intelligence is revolutionizing the field of cybersecurity, offering both opportunities and challenges for protection against cyber threats.</p></li><li><p class="paragraph" style="text-align:left;"><b>Machine Learning and Security: Protecting Systems with Data and Algorithms</b> by Clarence Chio and David Freeman. This book dives into the specific role of machine learning within the broader context of AI in cybersecurity, detailing how algorithms can detect and defend against cyber attacks.</p></li><li><p class="paragraph" style="text-align:left;"><b>Cybersecurity Ethics: An Introduction</b> by Mary Manjikian. As ethical considerations become increasingly important in the deployment of AI technologies, this book explores the ethical dilemmas and responsibilities facing professionals in cybersecurity.</p></li><li><p class="paragraph" style="text-align:left;"><b>The Future of Cybersecurity: AI and Autonomous Attacks</b> in the <i>Harvard Business Review</i>. This article discusses the future implications of AI in cybersecurity, including the potential for AI to both enhance defense mechanisms and be used in autonomous cyber attacks.</p></li><li><p class="paragraph" style="text-align:left;"><b>Ethics of Artificial Intelligence and Robotics</b> in the <i>Stanford Encyclopedia of Philosophy</i>. 
This comprehensive entry examines the ethical considerations surrounding the development and use of artificial intelligence, including privacy concerns and the moral implications of AI decisions.</p></li></ul></div><div class='beehiiv__footer'><br class='beehiiv__footer__break'><hr class='beehiiv__footer__line'><a target="_blank" class="beehiiv__footer_link" style="text-align: center;" href="https://www.beehiiv.com/?utm_campaign=f6e68f37-eee7-4aa3-9418-b365cddd3f06&utm_medium=post_rss&utm_source=matt_james_proactive_security">Powered by beehiiv</a></div></div>
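<p class="paragraph" style="text-align:start;">To ground the idea of anomaly detection in something tangible: the snippet below flags values far above the norm using a simple statistical baseline. The failed-login counts and threshold are made up for illustration, and real ML-based detectors learn far richer behavioral features than a single z-score:</p>

```python
from statistics import mean, stdev

def flag_anomalies(counts: list[int], threshold: float = 2.0) -> list[int]:
    """Return indices of values more than `threshold` standard deviations
    above the mean -- a crude statistical stand-in for the anomaly
    detection models described above."""
    mu, sigma = mean(counts), stdev(counts)
    return [i for i, c in enumerate(counts) if (c - mu) / sigma > threshold]

# Hypothetical hourly failed-login counts; hour 5 is a brute-force burst.
logins = [3, 5, 4, 6, 2, 480, 5, 3]
print(flag_anomalies(logins))  # flags index 5
```

<p class="paragraph" style="text-align:start;">The same shape of reasoning, model what is normal and alert on deviations, underlies the far more sophisticated learned detectors discussed in this post.</p>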
  ]]></content:encoded>
</item>

      <item>
  <title>Understanding Auditing and Accountability</title>
  <description>Logging Practices</description>
      <enclosure url="https://media.beehiiv.com/cdn-cgi/image/fit=scale-down,format=auto,onerror=redirect,quality=80/uploads/asset/file/01cb6648-8d0c-4517-b0b5-a7311b293b45/logging.png" length="947868" type="image/png"/>
  <link>https://odnd.com/p/understanding-auditing-accountability</link>
  <guid isPermaLink="true">https://odnd.com/p/understanding-auditing-accountability</guid>
  <pubDate>Mon, 26 Feb 2024 15:00:00 +0000</pubDate>
  <atom:published>2024-02-26T15:00:00Z</atom:published>
    <dc:creator>Matt James</dc:creator>
  <content:encoded><![CDATA[
    <div class='beehiiv'><style>
  .bh__table, .bh__table_header, .bh__table_cell { border: 1px solid #C0C0C0; }
  .bh__table_cell { padding: 5px; background-color: #FFFFFF; }
  .bh__table_cell p { color: #2D2D2D; font-family: 'Helvetica',Arial,sans-serif !important; overflow-wrap: break-word; }
  .bh__table_header { padding: 5px; background-color:#F1F1F1; }
  .bh__table_header p { color: #2A2A2A; font-family:'Trebuchet MS','Lucida Grande',Tahoma,sans-serif !important; overflow-wrap: break-word; }
</style><div class='beehiiv__body'><p class="paragraph" style="text-align:left;">Before delving into the intricacies of auditing and accountability in logging, it&#39;s essential to lay down a foundational understanding that caters to a broad audience. While this content may be familiar to some, it serves as a valuable primer for those seeking to learn and explore this crucial aspect of information technology.</p><p class="paragraph" style="text-align:left;">Ensuring accountability and maintaining accurate logs is paramount for businesses across all industries. Whether managing physical infrastructure or operating within cloud environments like SaaS, PaaS, or IaaS, robust auditing practices are essential for security, compliance, and operational transparency.</p><div class="image"><img alt="" class="image__image" style="" src="https://media.beehiiv.com/cdn-cgi/image/fit=scale-down,format=auto,onerror=redirect,quality=80/uploads/asset/file/01cb6648-8d0c-4517-b0b5-a7311b293b45/logging.png?t=1708959301"/></div><p class="paragraph" style="text-align:start;"><b>Auditing Physical Systems:</b></p><p class="paragraph" style="text-align:start;">In traditional IT setups with physical servers and networks, logging plays a critical role in tracking system activities, user actions, and potential security breaches. Auditing these systems involves capturing events such as login attempts, file access, configuration changes, and network traffic. 
By maintaining comprehensive logs, organizations can trace the root cause of issues, detect anomalies, and demonstrate compliance with regulatory requirements.</p><p class="paragraph" style="text-align:start;"><b>Key Aspects of Auditing Physical Systems:</b></p><ol start="1"><li><p class="paragraph" style="text-align:left;"><b>Log Integrity:</b> Ensuring that logs are tamper-proof and securely stored to prevent unauthorized alterations or deletions.</p></li><li><p class="paragraph" style="text-align:left;"><b>Access Controls:</b> Implementing strict access controls to limit who can view, modify, or delete log files, thus preventing unauthorized tampering.</p></li><li><p class="paragraph" style="text-align:left;"><b>Regular Reviews:</b> Conducting periodic reviews of logs to identify suspicious activities, security gaps, or compliance violations.</p></li></ol><p class="paragraph" style="text-align:start;"><b>Auditing Cloud Services:</b></p><p class="paragraph" style="text-align:start;">With the widespread adoption of cloud computing, auditing becomes more complex yet equally crucial. 
Cloud service models such as SaaS, PaaS, and IaaS offer scalability and flexibility but require tailored logging and auditing approaches.</p><p class="paragraph" style="text-align:start;"><b>Challenges in Auditing Cloud Environments:</b></p><ol start="1"><li><p class="paragraph" style="text-align:left;"><b>Shared Responsibility Model:</b> Understanding the division of responsibilities between the cloud service provider and the customer regarding security and logging.</p></li><li><p class="paragraph" style="text-align:left;"><b>Multi-Tenancy Concerns:</b> Ensuring that logs are isolated and protected in multi-tenant cloud environments to prevent data leakage or unauthorized access.</p></li><li><p class="paragraph" style="text-align:left;"><b>Dynamic Infrastructure:</b> Adapting auditing practices to the dynamic nature of cloud infrastructure where resources are provisioned and de-provisioned on-demand.</p></li></ol><p class="paragraph" style="text-align:start;"><b>Best Practices for Auditing in the Cloud:</b></p><ol start="1"><li><p class="paragraph" style="text-align:left;"><b>Centralized Logging:</b> Implementing centralized logging solutions that aggregate logs from various cloud services and instances for unified monitoring and analysis.</p></li><li><p class="paragraph" style="text-align:left;"><b>Encryption and Access Controls:</b> Encrypting log data both in transit and at rest, and enforcing granular access controls to protect sensitive information.</p></li><li><p class="paragraph" style="text-align:left;"><b>Automated Auditing:</b> Leveraging automation tools to streamline auditing processes, identify deviations from compliance standards, and trigger alerts for remediation.</p></li></ol><p class="paragraph" style="text-align:start;">Auditing and accountability are integral components of logging, whether managing physical systems or operating in cloud environments. 
By adopting best practices tailored to the specific infrastructure, organizations can enhance security, ensure compliance, and maintain trust in their systems and services.</p><p class="paragraph" style="text-align:start;">Here&#39;s a list of resources (URLs) that readers can explore to delve deeper into the topic of auditing, logging, and accountability:</p><ol start="1"><li><p class="paragraph" style="text-align:left;"><b>NIST Special Publication 800-92: Guide to Computer Security Log Management</b>: <a class="link" href="https://csrc.nist.gov/publications/detail/sp/800-92/final?utm_source=odnd.com&utm_medium=newsletter&utm_campaign=understanding-auditing-and-accountability" target="_blank" rel="noopener noreferrer nofollow">https://csrc.nist.gov/publications/detail/sp/800-92/final</a></p></li><li><p class="paragraph" style="text-align:left;"><b>OWASP Logging Cheat Sheet</b>: <a class="link" href="https://cheatsheetseries.owasp.org/cheatsheets/Logging_Cheat_Sheet.html?utm_source=odnd.com&utm_medium=newsletter&utm_campaign=understanding-auditing-and-accountability" target="_blank" rel="noopener noreferrer nofollow">https://cheatsheetseries.owasp.org/cheatsheets/Logging_Cheat_Sheet.html</a></p></li><li><p class="paragraph" style="text-align:left;"><b>Cloud Security Alliance (CSA) Security Guidance v4</b>: <a class="link" href="https://cloudsecurityalliance.org/research/security-guidance/v4/?utm_source=odnd.com&utm_medium=newsletter&utm_campaign=understanding-auditing-and-accountability" target="_blank" rel="noopener noreferrer nofollow">https://cloudsecurityalliance.org/research/security-guidance/v4/</a></p></li><li><p class="paragraph" style="text-align:left;"><b>PCI DSS Logging Standard</b>: <a class="link" href="https://www.pcisecuritystandards.org/documents/Logging_Standard.pdf?utm_source=odnd.com&utm_medium=newsletter&utm_campaign=understanding-auditing-and-accountability" target="_blank" rel="noopener noreferrer 
nofollow">https://www.pcisecuritystandards.org/documents/Logging_Standard.pdf</a></p></li><li><p class="paragraph" style="text-align:left;"><b>ISO/IEC 27002:2013 Information technology -- Security techniques -- Code of practice for information security controls</b>: <a class="link" href="https://www.iso.org/standard/54534.html?utm_source=odnd.com&utm_medium=newsletter&utm_campaign=understanding-auditing-and-accountability" target="_blank" rel="noopener noreferrer nofollow">https://www.iso.org/standard/54534.html</a></p></li><li><p class="paragraph" style="text-align:left;"><b>Logging and Auditing in AWS</b>: <a class="link" href="https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/WhatIsCloudWatchLogs.html?utm_source=odnd.com&utm_medium=newsletter&utm_campaign=understanding-auditing-and-accountability" target="_blank" rel="noopener noreferrer nofollow">https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/WhatIsCloudWatchLogs.html</a></p></li><li><p class="paragraph" style="text-align:left;"><b>Azure Monitor Logs overview</b>: <a class="link" href="https://docs.microsoft.com/en-us/azure/azure-monitor/logs/data-platform-logs?utm_source=odnd.com&utm_medium=newsletter&utm_campaign=understanding-auditing-and-accountability" target="_blank" rel="noopener noreferrer nofollow">https://docs.microsoft.com/en-us/azure/azure-monitor/logs/data-platform-logs</a></p></li><li><p class="paragraph" style="text-align:left;"><b>Google Cloud&#39;s Operations suite (formerly Stackdriver)</b>: <a class="link" href="https://cloud.google.com/stackdriver?utm_source=odnd.com&utm_medium=newsletter&utm_campaign=understanding-auditing-and-accountability" target="_blank" rel="noopener noreferrer nofollow">https://cloud.google.com/stackdriver</a></p></li><li><p class="paragraph" style="text-align:left;"><b>ELK Stack (Elasticsearch, Logstash, Kibana)</b>: <a class="link" 
href="https://www.elastic.co/what-is/elk-stack?utm_source=odnd.com&utm_medium=newsletter&utm_campaign=understanding-auditing-and-accountability" target="_blank" rel="noopener noreferrer nofollow">https://www.elastic.co/what-is/elk-stack</a></p></li><li><p class="paragraph" style="text-align:left;"><b>Splunk</b>: <a class="link" href="https://www.splunk.com/?utm_source=odnd.com&utm_medium=newsletter&utm_campaign=understanding-auditing-and-accountability" target="_blank" rel="noopener noreferrer nofollow">https://www.splunk.com/</a></p></li></ol><p class="paragraph" style="text-align:start;">These resources cover a wide range of topics related to auditing, logging, and accountability in both physical systems and cloud environments, providing readers with valuable insights, best practices, and technical guidance.</p></div><div class='beehiiv__footer'><br class='beehiiv__footer__break'><hr class='beehiiv__footer__line'><a target="_blank" class="beehiiv__footer_link" style="text-align: center;" href="https://www.beehiiv.com/?utm_campaign=f4804343-ffbd-45f4-be5b-263f7acecd7c&utm_medium=post_rss&utm_source=matt_james_proactive_security">Powered by beehiiv</a></div></div>
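The "Log Integrity" practice above (tamper-proof, securely stored logs) is often implemented with hash chaining: each record embeds the hash of the record before it, so altering or deleting any earlier entry breaks every hash that follows. Here is a minimal Python sketch of the idea; it is illustrative only, and the function names and record format are invented for this example rather than taken from any specific product:

```python
import hashlib
import json

def append_entry(chain, event):
    """Append an event dict to the chain, linking it to the previous record's hash."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64  # genesis sentinel
    # Canonical serialization (sorted keys) so verification is deterministic.
    body = json.dumps({"event": event, "prev": prev_hash}, sort_keys=True)
    entry = {
        "event": event,
        "prev": prev_hash,
        "hash": hashlib.sha256(body.encode()).hexdigest(),
    }
    chain.append(entry)
    return entry

def verify_chain(chain):
    """Recompute every hash in order; return False if any record was altered."""
    prev_hash = "0" * 64
    for entry in chain:
        body = json.dumps({"event": entry["event"], "prev": prev_hash}, sort_keys=True)
        if entry["prev"] != prev_hash:
            return False
        if entry["hash"] != hashlib.sha256(body.encode()).hexdigest():
            return False
        prev_hash = entry["hash"]
    return True

log = []
append_entry(log, {"user": "alice", "action": "login"})
append_entry(log, {"user": "alice", "action": "read", "file": "payroll.csv"})
assert verify_chain(log)

log[0]["event"]["action"] = "logout"   # simulate tampering with an old record
assert not verify_chain(log)
```

Production systems layer signing and external anchoring on top of this principle (for example, AWS CloudTrail's log file validation publishes signed SHA-256 digest files), but the chaining idea is the same.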
  ]]></content:encoded>
</item>

      <item>
  <title>Security Due Diligence: A (short) Guide for Technical Consultants</title>
  <description>Elevating Investment Assurance through Market Analysis, Vendor Evaluation, and Rigorous Testing</description>
      <enclosure url="https://media.beehiiv.com/cdn-cgi/image/fit=scale-down,format=auto,onerror=redirect,quality=80/uploads/asset/file/a5c31a49-27fd-4524-9b65-90abbc9cbc53/roadmap.png" length="1404966" type="image/png"/>
  <link>https://odnd.com/p/security-due-diligence-short-guide-technical-consultants</link>
  <guid isPermaLink="true">https://odnd.com/p/security-due-diligence-short-guide-technical-consultants</guid>
  <pubDate>Fri, 26 Jan 2024 17:00:00 +0000</pubDate>
  <atom:published>2024-01-26T17:00:00Z</atom:published>
    <dc:creator>Matt James</dc:creator>
  <content:encoded><![CDATA[
    <div class='beehiiv'><style>
  .bh__table, .bh__table_header, .bh__table_cell { border: 1px solid #C0C0C0; }
  .bh__table_cell { padding: 5px; background-color: #FFFFFF; }
  .bh__table_cell p { color: #2D2D2D; font-family: 'Helvetica',Arial,sans-serif !important; overflow-wrap: break-word; }
  .bh__table_header { padding: 5px; background-color:#F1F1F1; }
  .bh__table_header p { color: #2A2A2A; font-family:'Trebuchet MS','Lucida Grande',Tahoma,sans-serif !important; overflow-wrap: break-word; }
</style><div class='beehiiv__body'><p class="paragraph" style="text-align:left;">Security advisors and consultants have the critical responsibility of steering investors and firms through the complexities of new technology capital investment and acquisitions. A key aspect of this guidance involves evaluating how thoroughly the technology has been assessed and vetted, particularly in terms of market context and potential risks. This process is not just about scrutinizing the technology itself, but also about establishing a level of assurance regarding its viability and security. Below are the three pivotal due diligence activities that technology consultants should focus on to ensure informed and secure technology investments.</p><div class="image"><img alt="" class="image__image" style="" src="https://media.beehiiv.com/cdn-cgi/image/fit=scale-down,format=auto,onerror=redirect,quality=80/uploads/asset/file/a5c31a49-27fd-4524-9b65-90abbc9cbc53/roadmap.png?t=1706289522"/></div><h2 class="heading" style="text-align:start;" id="1-comprehensive-market-analysis-and"><b>1. Comprehensive Market Analysis and Vendor Evaluation</b></h2><h3 class="heading" style="text-align:start;" id="understanding-the-technologys-marke"><b>Understanding the Technology&#39;s Market Relevance</b></h3><p class="paragraph" style="text-align:start;">A deep dive into the market landscape is essential to gauge the relevance and potential of the technology. This includes analyzing market trends, consumer demands, and the competitive landscape. Consultants should assess if the technology addresses a genuine market need or presents a unique solution that sets it apart from existing offerings.</p><h3 class="heading" style="text-align:start;" id="assessing-vendors-market-standing-a"><b>Assessing the Vendor’s Market Standing and Track Record</b></h3><p class="paragraph" style="text-align:start;">Evaluating the vendor&#39;s market position and history is crucial. 
This step involves examining the vendor&#39;s financial health, customer feedback, and their record in managing technology deployments. Understanding the vendor&#39;s stability and reputation in the market is key to assessing the longevity and support for the technology.</p><h2 class="heading" style="text-align:start;" id="2-verification-of-prior-assessment-"><b>2. Verification of Prior Assessment and Threat Modeling</b></h2><h3 class="heading" style="text-align:start;" id="reviewing-existing-assessments-and-"><b>Reviewing Existing Assessments and Security Measures</b></h3><p class="paragraph" style="text-align:start;">An important part of due diligence is determining if the product or service has undergone thorough assessment and security analysis. If the vendor has already completed a qualitative threat model or similar assessments, advisors should review these findings in detail. This review helps in understanding the rigor of the vendor&#39;s testing and risk management processes.</p><h3 class="heading" style="text-align:start;" id="recommending-assessment-steps-if-ne"><b>Recommending Assessment Steps if Needed</b></h3><p class="paragraph" style="text-align:start;">In cases where comprehensive assessments haven&#39;t been performed, advisors must be able to recommend necessary steps. This includes suggesting security audits, threat modeling, and other risk assessment strategies. The goal is to ensure that every potential risk is identified and addressed, building a strong foundation of assurance for the client.</p><h2 class="heading" style="text-align:start;" id="3-pilot-testing-and-performance-eva"><b>3. Pilot Testing and Performance Evaluation</b></h2><h3 class="heading" style="text-align:start;" id="conducting-real-world-testing"><b>Conducting Real-World Testing</b></h3><p class="paragraph" style="text-align:start;">Implementing pilot tests in real-life scenarios is critical for evaluating the technology’s practical performance and integration capabilities. 
These tests help in identifying any operational issues and assessing the overall user experience.</p><h3 class="heading" style="text-align:start;" id="detailed-analysis-of-test-outcomes"><b>Detailed Analysis of Test Outcomes</b></h3><p class="paragraph" style="text-align:start;">Analyzing the results from pilot testing provides valuable insights into the technology&#39;s efficiency, scalability, and reliability. This step is crucial for confirming that the technology not only meets current requirements but is also capable of adapting to future demands and challenges.</p><h2 class="heading" style="text-align:start;" id="in-summary"><b>In summary…</b></h2><p class="paragraph" style="text-align:start;">For technology consultants and advisors, ensuring a high level of assurance in technology investments requires a strategic approach that encompasses thorough market analysis, rigorous vendor and product assessment, and comprehensive testing. By focusing on these key areas, advisors can provide their clients with the confidence that their technology choices are not only innovative but also secure, market-relevant, and future-proof.</p></div><div class='beehiiv__footer'><br class='beehiiv__footer__break'><hr class='beehiiv__footer__line'><a target="_blank" class="beehiiv__footer_link" style="text-align: center;" href="https://www.beehiiv.com/?utm_campaign=8c6fae98-24a8-4068-ba5d-533b4adea207&utm_medium=post_rss&utm_source=matt_james_proactive_security">Powered by beehiiv</a></div></div>
  ]]></content:encoded>
</item>

      <item>
  <title>Navigating Operational Technology (OT) Security</title>
  <description></description>
      <enclosure url="https://media.beehiiv.com/cdn-cgi/image/fit=scale-down,format=auto,onerror=redirect,quality=80/uploads/asset/file/05d9818a-86b0-432d-8c82-3f63ed99e9aa/image.png" length="2329679" type="image/png"/>
  <link>https://odnd.com/p/navigating-operational-technology-ot-security</link>
  <guid isPermaLink="true">https://odnd.com/p/navigating-operational-technology-ot-security</guid>
  <pubDate>Mon, 22 Jan 2024 18:55:40 +0000</pubDate>
  <atom:published>2024-01-22T18:55:40Z</atom:published>
    <dc:creator>Matt James</dc:creator>
  <content:encoded><![CDATA[
    <div class='beehiiv'><style>
  .bh__table, .bh__table_header, .bh__table_cell { border: 1px solid #C0C0C0; }
  .bh__table_cell { padding: 5px; background-color: #FFFFFF; }
  .bh__table_cell p { color: #2D2D2D; font-family: 'Helvetica',Arial,sans-serif !important; overflow-wrap: break-word; }
  .bh__table_header { padding: 5px; background-color:#F1F1F1; }
  .bh__table_header p { color: #2A2A2A; font-family:'Trebuchet MS','Lucida Grande',Tahoma,sans-serif !important; overflow-wrap: break-word; }
</style><div class='beehiiv__body'><p class="paragraph" style="text-align:left;">The security of Operational Technology (OT) has emerged as a critical public concern. OT, which encompasses hardware and software that monitors or controls equipment, assets, and processes, has traditionally been isolated from Information Technology (IT) systems. However, with increasing integration and connectivity, the once distinct line between IT and OT is blurring, bringing unique security challenges to the forefront.</p><div class="image"><img alt="" class="image__image" style="" src="https://media.beehiiv.com/cdn-cgi/image/fit=scale-down,format=auto,onerror=redirect,quality=80/uploads/asset/file/05d9818a-86b0-432d-8c82-3f63ed99e9aa/image.png?t=1705949724"/></div><p class="paragraph" style="text-align:left;"><b>The Evolution of OT and Emerging Security Risks</b></p><p class="paragraph" style="text-align:left;">OT systems, historically standalone and disconnected from networks, are increasingly interconnected with IT systems for efficiency and data analysis benefits. This integration, while beneficial, exposes OT systems to cyber threats that were previously limited to IT networks. The stakes are high in OT environments: compromises can lead to operational disruptions, safety hazards, and significant financial losses.</p><p class="paragraph" style="text-align:left;"><b>Unique Challenges in OT Security</b></p><p class="paragraph" style="text-align:left;">1. <b>Legacy Systems</b>: Many OT systems were designed without cybersecurity in mind, and updating them can be complex, costly, and disruptive to critical operations.</p><p class="paragraph" style="text-align:left;">2. <b>Different Priorities</b>: Unlike IT systems where data integrity and confidentiality are key, OT security emphasizes system availability and physical safety. This difference requires a distinct approach to security.</p><p class="paragraph" style="text-align:left;">3. 
<b>Limited Patching and Downtime</b>: Regularly updating and patching, a common practice in IT security, is challenging in OT due to the need for continuous operation and minimal downtime.</p><p class="paragraph" style="text-align:left;"><b>Call to Action</b></p><p class="paragraph" style="text-align:left;">As the landscape of OT security continues to evolve, staying informed and prepared is crucial. If you&#39;re interested in learning more about how to safeguard your OT environment, or if you have specific questions about OT security, don&#39;t hesitate to reach out. Our team of experts is here to provide insights, strategies, and solutions tailored to your unique operational needs. Contact us to explore how you can strengthen your OT security posture.</p></div><div class='beehiiv__footer'><br class='beehiiv__footer__break'><hr class='beehiiv__footer__line'><a target="_blank" class="beehiiv__footer_link" style="text-align: center;" href="https://www.beehiiv.com/?utm_campaign=7f267bdf-f288-47c6-982e-0259ea8174fd&utm_medium=post_rss&utm_source=matt_james_proactive_security">Powered by beehiiv</a></div></div>
  ]]></content:encoded>
</item>

      <item>
  <title>Emerging Cybersecurity Trends in 2024</title>
  <description>Staying Ahead of the Curve: Navigating New Challenges and Technologies in 2024&#39;s Cybersecurity Landscape</description>
      <enclosure url="https://media0.giphy.com/media/RDZo7znAdn2u7sAcWH/giphy.gif?cid=2450ec30l4haxn7t2vsn1nwj733jwervjwljxc361l6tk0ok&amp;ep=v1_gifs_search&amp;rid=giphy.gif&amp;ct=g"/>
  <link>https://odnd.com/p/emerging-cybersecurity-trends-2024</link>
  <guid isPermaLink="true">https://odnd.com/p/emerging-cybersecurity-trends-2024</guid>
  <pubDate>Wed, 10 Jan 2024 13:00:00 +0000</pubDate>
  <atom:published>2024-01-10T13:00:00Z</atom:published>
    <dc:creator>Matt James</dc:creator>
  <content:encoded><![CDATA[
    <div class='beehiiv'><style>
  .bh__table, .bh__table_header, .bh__table_cell { border: 1px solid #C0C0C0; }
  .bh__table_cell { padding: 5px; background-color: #FFFFFF; }
  .bh__table_cell p { color: #2D2D2D; font-family: 'Helvetica',Arial,sans-serif !important; overflow-wrap: break-word; }
  .bh__table_header { padding: 5px; background-color:#F1F1F1; }
  .bh__table_header p { color: #2A2A2A; font-family:'Trebuchet MS','Lucida Grande',Tahoma,sans-serif !important; overflow-wrap: break-word; }
</style><div class='beehiiv__body'><p class="paragraph" style="text-align:left;">With the increasing reliance on digital technologies, the sophistication of cyber threats is also reaching new heights. In this article, we&#39;ll explore the key trends shaping cybersecurity in 2024, offering insights into how businesses can stay protected in this ever-changing environment.</p><h3 class="heading" style="text-align:start;" id="the-rise-of-ai-driven-cyber-attacks"><b>The Rise of AI-Driven Cyber Attacks</b></h3><p class="paragraph" style="text-align:start;"><b>AI is a double-edged sword in cybersecurity.</b> While it&#39;s used for defensive purposes, such as threat detection and response, cybercriminals are also leveraging AI to orchestrate more complex attacks. These AI-driven attacks can adapt to security measures in real-time, making them particularly challenging to detect and counter.</p><div class="embed"><a class="embed__url" href="https://newsroom.trendmicro.com/2023-12-05-Proliferation-of-AI-driven-Attacks-Anticipated-in-2024?utm_source=odnd.com&utm_medium=newsletter&utm_campaign=emerging-cybersecurity-trends-in-2024" target="_blank"><div class="embed__content"><p class="embed__title"> Proliferation of AI-driven Attacks Anticipated in 2024 </p><p class="embed__description"> Trend Micro Incorporated (TYO: 4704; TSE: 4704), a global cybersecurity leader, today warned of the transformative role of generative AI (GenAI) in the cyber threat landscape and a coming tsunami... 
</p><p class="embed__link"> newsroom.trendmicro.com/2023-12-05-Proliferation-of-AI-driven-Attacks-Anticipated-in-2024 </p></div></a></div><h3 class="heading" style="text-align:start;" id="the-expanding-cybersecurity-perimet"><b>The Expanding Cybersecurity Perimeter</b></h3><p class="paragraph" style="text-align:start;"><b>The traditional security perimeter is dissolving.</b> With the rise of remote work and cloud computing, the line between internal and external networks is blurring. Organizations need to adopt a zero-trust security model, ensuring robust verification processes for anyone trying to access their systems, irrespective of their location.</p><div class="embed"><a class="embed__url" href="https://www.zscaler.com/blogs/security-research/top-5-cyber-predictions-2024-ciso-perspective?utm_source=odnd.com&utm_medium=newsletter&utm_campaign=emerging-cybersecurity-trends-in-2024" target="_blank"><div class="embed__content"><p class="embed__title"> Top 5 Cyber Predictions for 2024: A CISO Perspective </p><p class="embed__description"> Read the top 5 cyberthreat predictions for 2024, covering generative AI-driven attacks, Ransomware-as-a-Service, the response to SEC regulations, and more. </p><p class="embed__link"> www.zscaler.com/blogs/security-research/top-5-cyber-predictions-2024-ciso-perspective </p></div><img class="embed__image embed__image--right" src="https://www.zscaler.com/sites/default/files/images/blogs/2024-prediction-series-part1-blog-img-1080x424_0.jpg"/></a></div><h3 class="heading" style="text-align:start;" id="the-growing-importance-of-io-t-secu"><b>The Growing Importance of IoT Security</b></h3><p class="paragraph" style="text-align:start;"><b>The Internet of Things (IoT) continues to expand rapidly.</b> However, this growth also presents numerous security challenges. Many IoT devices lack robust security features, making them vulnerable to attacks. 
Strengthening IoT security protocols is imperative to prevent these devices from becoming entry points for cybercriminals.</p><div class="embed"><a class="embed__url" href="https://asimily.com/blog/iot-security-predictions-for-2024-and-beyond/?utm_source=odnd.com&utm_medium=newsletter&utm_campaign=emerging-cybersecurity-trends-in-2024" target="_blank"><div class="embed__content"><p class="embed__title"> IoT Security Predictions for 2024 and Beyond | Asimily </p><p class="embed__description"> Organizations planning their security programs would do well to consider these IoT security predictions in weeks and months ahead.  </p><p class="embed__link"> asimily.com/blog/iot-security-predictions-for-2024-and-beyond </p></div><img class="embed__image embed__image--right" src="https://asimily.com/wp-content/uploads/2023/12/IoT-Security-Predictions-for-2024-and-Beyond-1.png"/></a></div><h3 class="heading" style="text-align:start;" id="the-surge-in-ransomwareasa-service-"><b>The Surge in Ransomware-as-a-Service (RaaS)</b></h3><p class="paragraph" style="text-align:start;"><b>RaaS is transforming the landscape of cybercrime.</b> It allows individuals without extensive technical knowledge to launch ransomware attacks. This democratization of cybercrime means that businesses of all sizes are at risk and must enhance their defenses against ransomware.</p><div class="embed"><a class="embed__url" href="https://www.bitdefender.com/blog/businessinsights/2024-cybersecurity-forecast-ransomwares-new-tactics-and-targets/?utm_source=odnd.com&utm_medium=newsletter&utm_campaign=emerging-cybersecurity-trends-in-2024" target="_blank"><div class="embed__content"><p class="embed__title"> 2024 Cybersecurity Forecast: Ransomware&#39;s New Tactics and Targets </p><p class="embed__description"> In the past, cybercriminals often operated with the motive to &quot;do it for lulz,&quot; engaging in malicious activities purely for the sake of amusement or creating chaos. 
</p><p class="embed__link"> www.bitdefender.com/blog/businessinsights/2024-cybersecurity-forecast-ransomwares-new-tactics-and-targets </p></div><img class="embed__image embed__image--right" src="https://businessresources.bitdefender.com/hubfs/joshua-sortino-LqKhnDzSF-8-unsplash-1.jpg"/></a></div><h3 class="heading" style="text-align:start;" id="the-evolution-of-cybersecurity-regu"><b>The Evolution of Cybersecurity Regulations</b></h3><p class="paragraph" style="text-align:start;"><b>Regulatory frameworks are evolving to keep up with cybersecurity challenges.</b> Businesses must stay informed about these changes to ensure compliance. Non-compliance not only poses legal risks but also leaves companies vulnerable to cyber threats.</p><div class="embed"><a class="embed__url" href="https://www.spiceworks.com/it-security/cyber-risk-management/guest-article/future-of-cybersecurity-regulation/?utm_source=odnd.com&utm_medium=newsletter&utm_campaign=emerging-cybersecurity-trends-in-2024" target="_blank"><div class="embed__content"><p class="embed__title"> The Impact of Cybersecurity Regulation in 2024 - Spiceworks </p><p class="embed__description"> Greg Bulmash explores the future of cybersecurity regulations in 2024. Prepare your strategy with insights from GitGuardian. </p><p class="embed__link"> www.spiceworks.com/it-security/cyber-risk-management/guest-article/future-of-cybersecurity-regulation </p></div><img class="embed__image embed__image--right" src="https://images.spiceworks.com/wp-content/uploads/2023/12/12103201/Cybersecurity-Regulation.jpg"/></a></div><h3 class="heading" style="text-align:start;" id="enhanced-focus-on-insider-threats"><b>Enhanced Focus on Insider Threats</b></h3><p class="paragraph" style="text-align:start;"><b>Insider threats, both intentional and accidental, are on the rise.</b> Organizations must implement comprehensive strategies to detect and mitigate risks from within. 
This includes regular security training, robust access controls, and employing behavior analysis to detect anomalies.</p><div class="embed"><a class="embed__url" href="https://cybermagazine.com/articles/navigating-the-threat-landscape-in-2024?utm_source=odnd.com&utm_medium=newsletter&utm_campaign=emerging-cybersecurity-trends-in-2024" target="_blank"><div class="embed__content"><p class="embed__title"> Navigating the threat landscape in 2024 </p><p class="embed__description"> Cybersecurity is a critical aspect of any organisation and implementing a robust security strategy is essential for preventing cyber threats </p><p class="embed__link"> cybermagazine.com/articles/navigating-the-threat-landscape-in-2024 </p></div><img class="embed__image embed__image--right" src="https://assets.bizclikmedia.net/1200/15a0b23a5514c6c2361afc86ea166c76:2584ec9fc66a9d6fb9c39530de99f50c/gettyimages-1141759961-min.jpg.jpg"/></a></div><h3 class="heading" style="text-align:start;" id="conclusion"><b>Conclusion</b></h3><p class="paragraph" style="text-align:start;">The cybersecurity landscape in 2024 is complex and requires a proactive approach. By understanding these emerging trends, businesses can better prepare and protect themselves from new threats. Investing in advanced security solutions, continuous employee training, and staying abreast of regulatory changes are key steps towards a more secure future.</p></div><div class='beehiiv__footer'><br class='beehiiv__footer__break'><hr class='beehiiv__footer__line'><a target="_blank" class="beehiiv__footer_link" style="text-align: center;" href="https://www.beehiiv.com/?utm_campaign=4491f1d5-f5b0-42ad-a1a3-0eb34dccd435&utm_medium=post_rss&utm_source=matt_james_proactive_security">Powered by beehiiv</a></div></div>
  ]]></content:encoded>
</item>

  </channel>
</rss>
