<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom">
  <channel>
    <title>Digital Economy Dispatches</title>
    <description>News and views on the digital economy by Alan Brown.</description>
    
    <link>https://dispatches.alanbrown.net/</link>
    <atom:link href="https://rss.beehiiv.com/feeds/jD7w7O3Mgp.xml" rel="self"/>
    
    <lastBuildDate>Sun, 08 Mar 2026 09:23:44 +0000</lastBuildDate>
    <pubDate>Sun, 08 Mar 2026 08:25:00 +0000</pubDate>
    <atom:published>2026-03-08T08:25:00Z</atom:published>
    <atom:updated>2026-03-08T09:23:44Z</atom:updated>
    
      <category>Economy</category>
      <category>Software Engineering</category>
      <category>Technology</category>
    <copyright>Copyright 2026, Digital Economy Dispatches</copyright>
    
    <image>
      <url>https://media.beehiiv.com/cdn-cgi/image/fit=scale-down,format=auto,onerror=redirect,quality=80/uploads/publication/logo/5c683ac3-8309-4132-a5ac-664a328c003c/logo-800x8002.png</url>
      <title>Digital Economy Dispatches</title>
      <link>https://dispatches.alanbrown.net/</link>
    </image>
    
    <docs>https://www.rssboard.org/rss-specification</docs>
    <generator>beehiiv</generator>
    <language>en-us</language>
    <webMaster>support@beehiiv.com (Beehiiv Support)</webMaster>

      <item>
  <title>Digital Economy Dispatch #275 -- If Everyone&#39;s Vibe Coding, What Will It Mean For Britain&#39;s AI Future?</title>
  <description>Everyone&#39;s building apps with AI coding tools. The creative energy is real and unprecedented. But without governance, are we just adding to today&#39;s legacy problems?</description>
      <enclosure url="https://media.beehiiv.com/cdn-cgi/image/fit=scale-down,format=auto,onerror=redirect,quality=80/uploads/asset/file/a9e0373b-c453-4340-b898-f359f7b78b74/cranes-photo-1.jpg" length="84117" type="image/jpeg"/>
  <link>https://dispatches.alanbrown.net/p/digital-economy-dispatch-275-if-everyone-s-vibe-coding-what-will-it-mean-for-britain-s-ai-future</link>
  <guid isPermaLink="true">https://dispatches.alanbrown.net/p/digital-economy-dispatch-275-if-everyone-s-vibe-coding-what-will-it-mean-for-britain-s-ai-future</guid>
  <pubDate>Sun, 08 Mar 2026 08:25:00 +0000</pubDate>
  <atom:published>2026-03-08T08:25:00Z</atom:published>
    <dc:creator>Alan Brown</dc:creator>
  <content:encoded><![CDATA[
    <div class='beehiiv'><style>
  .bh__table, .bh__table_header, .bh__table_cell { border: 1px solid #C0C0C0; }
  .bh__table_cell { padding: 5px; background-color: #FFFFFF; }
  .bh__table_cell p { color: #2D2D2D; font-family: 'Helvetica',Arial,sans-serif !important; overflow-wrap: break-word; }
  .bh__table_header { padding: 5px; background-color:#F1F1F1; }
  .bh__table_header p { color: #2A2A2A; font-family:'Trebuchet MS','Lucida Grande',Tahoma,sans-serif !important; overflow-wrap: break-word; }
</style><div class='beehiiv__body'><p class="paragraph" style="text-align:left;">Something unusual is happening. In the past few weeks, almost every AI-aware senior leader, academic, and strategist I&#39;ve spoken with has told me, unprompted, about the apps they&#39;re building. Not apps they&#39;re buying. Not apps their IT teams are deploying. Apps they are personally building, late at night, at weekends, on trains, using AI-powered coding tools like Claude Code, Cursor, Lovable, and their fast-multiplying rivals.</p><p class="paragraph" style="text-align:left;">I&#39;ve been working in and around technology for over three decades. I lived through several generations of rapid software building technologies, including CASE tools, 4GLs, RAD, and RPA. Each was supposed to democratise software creation. None of them produced anything like what I&#39;m seeing now. The sheer volume of people building things, and the visible excitement on their faces when they talk about it, is genuinely new.</p><p class="paragraph" style="text-align:left;">One colleague described it as &quot;addictive.&quot; Another admitted to staying up all night working through a series of web applications. These are not junior developers experimenting on a weekend. They are senior executives, policy advisors, and professors. And they are all, to use the phrase of the moment, &quot;vibe coding&quot; their way through problems they&#39;ve been thinking about for years. 
Ethan Mollick recently captured this phenomenon perfectly <a class="link" href="https://www.oneusefulthing.org/p/management-as-ai-superpower?utm_source=dispatches.alanbrown.net&utm_medium=newsletter&utm_campaign=digital-economy-dispatch-275-if-everyone-s-vibe-coding-what-will-it-mean-for-britain-s-ai-future" target="_blank" rel="noopener noreferrer nofollow">when he challenged executive MBA students at Wharton</a> — doctors, managers, and company leaders, few of whom had ever coded — to build a startup from scratch in four days using these tools. They got remarkably far.</p><p class="paragraph" style="text-align:left;">So what are they actually making? And what does it mean?</p><h2 class="heading" style="text-align:left;" id="the-three-things-everyone-builds"><b>The Three Things Everyone Builds</b></h2><p class="paragraph" style="text-align:left;">Watching this unfold, I&#39;ve noticed a remarkably consistent pattern in what people create.</p><p class="paragraph" style="text-align:left;">First, personal tools. Someone has been irritated by a repetitive task, a clunky process, or a gap in their workflow. Within an hour or two, they have a working solution. Not elegant, not scalable, but functional. It scratches the itch and saves real time. One colleague built a tool to reformat data exports from several different systems into a single view. Another automated a weekly reporting chore that had consumed every Monday morning for years.</p><p class="paragraph" style="text-align:left;">Second, generalisation. Having solved the personal problem, they begin to wonder whether others face the same friction. They extend the tool, add options, and make it usable by colleagues. The personal utility starts to become a shared one. This is the moment it shifts from a private hack to something that begins to look like a product, however rough.</p><p class="paragraph" style="text-align:left;">Third, dashboards. Dashboards everywhere. 
People are pulling data from scattered sources and formats, aligning it, visualising it, and using it to support decisions. These aren&#39;t the polished business intelligence platforms that enterprise software vendors sell. They are bespoke, fast, and built to answer a specific question that no existing system quite addresses. Mostly, they are used for insight and human judgment rather than triggering automated actions, though a few are beginning to cross that line. But most importantly, they bypass the pain of finding, learning, and operating out-of-date corporate tools, avoid a three-month wait for IT to respond to an email request, and require no PhD in coding to make rapid progress.</p><h2 class="heading" style="text-align:left;" id="an-idea-laboratory-not-a-software-f"><b>An Idea Laboratory, Not a Software Factory</b></h2><p class="paragraph" style="text-align:left;">Here is the honest observation that tempers some of the excitement: the vast majority of these apps will never be used in anger. They are experiments, learning exercises, and proofs of concept. People build them, play with them, show them to a few friends, and move on. There is nothing wrong with this, but we should be clear about it and set the right expectations. Most vibe-coded creations are not replacing enterprise systems or transforming operations. Not yet.</p><p class="paragraph" style="text-align:left;">What they are doing is something potentially more important. They are turning ideas into tangible prototypes at a speed that was previously impossible. Concepts that sat in notebooks or lingered as &quot;someday&quot; projects for years are now being built, tested, and iterated in hours. 
To give a sense of how far the capability now stretches, take a look at <a class="link" href="https://www.oneusefulthing.org/p/claude-code-and-what-comes-next?utm_source=dispatches.alanbrown.net&utm_medium=newsletter&utm_campaign=digital-economy-dispatch-275-if-everyone-s-vibe-coding-what-will-it-mean-for-britain-s-ai-future" target="_blank" rel="noopener noreferrer nofollow">how Ethan Mollick gave Claude Code a single open-ended command and watched it work autonomously for over an hour</a>, producing hundreds of code files and a fully deployed website without further human input. AI coding tools have become idea laboratories, places where senior people can think with their hands, explore possibilities, and discover what works before committing significant resources.</p><p class="paragraph" style="text-align:left;">This is a really important shift. When a managing director can prototype her own solution to a workflow problem over a weekend, the conversation on Monday morning changes. She is no longer submitting a vague request to IT. She is showing a working demonstration and asking how to make it real. That changes the power dynamics of innovation inside organisations in ways we are only beginning to understand.</p><h2 class="heading" style="text-align:left;" id="the-looming-governance-shadow"><b>The Looming Governance Shadow</b></h2><p class="paragraph" style="text-align:left;">But there is a darker side to all this creative energy, and it keeps me awake at night rather more than any vibe coding session.</p><p class="paragraph" style="text-align:left;">The scale of this problem is already significant. 
<a class="link" href="https://www.businesswire.com/news/home/20251105110078/en/Report-Shadow-AI-Crisis-Looms-as-100-of-Companies-Have-AI-Generated-Code-But-81-of-Security-Teams-Lack-Visibility?utm_source=dispatches.alanbrown.net&utm_medium=newsletter&utm_campaign=digital-economy-dispatch-275-if-everyone-s-vibe-coding-what-will-it-mean-for-britain-s-ai-future" target="_blank" rel="noopener noreferrer nofollow">A recent industry study</a> found that while virtually all organisations now have AI-generated code in their codebases, 81% of security teams lack visibility into how that code is being used, and 65% report increased security risk as a direct result.</p><p class="paragraph" style="text-align:left;">Unfortunately, we have seen this pattern before. Every wave of democratised technology, from spreadsheets to departmental databases to robotic process automation, has produced a long tail of ungoverned, undocumented, business-critical systems that eventually become serious liabilities. The speed and ease of AI-assisted coding risks accelerating this pattern dramatically. We could be building the next generation of legacy problems in real time, one enthusiastic all-night session at a time.</p><p class="paragraph" style="text-align:left;">This connects directly to a theme I explore in depth in my forthcoming book, <i><b><a class="link" href="https://futureofai.uk?utm_source=dispatches.alanbrown.net&utm_medium=newsletter&utm_campaign=digital-economy-dispatch-275-if-everyone-s-vibe-coding-what-will-it-mean-for-britain-s-ai-future" target="_blank" rel="noopener noreferrer nofollow">Making AI Work for Britain</a></b></i>. The UK&#39;s track record on major digital technology programmes is sobering. 
<a class="link" href="https://www.instituteforgovernment.org.uk/article/explainer/major-projects-government?utm_source=dispatches.alanbrown.net&utm_medium=newsletter&utm_campaign=digital-economy-dispatch-275-if-everyone-s-vibe-coding-what-will-it-mean-for-britain-s-ai-future" target="_blank" rel="noopener noreferrer nofollow">The Institute for Government&#39;s analysis of the Government Major Projects Portfolio</a> for 2020 found that no ICT projects were rated &quot;highly likely&quot; to succeed, and over half were rated &quot;in doubt&quot; or worse. The <a class="link" href="https://www.gov.uk/government/publications/state-of-digital-government-review/state-of-digital-government-review?utm_source=dispatches.alanbrown.net&utm_medium=newsletter&utm_campaign=digital-economy-dispatch-275-if-everyone-s-vibe-coding-what-will-it-mean-for-britain-s-ai-future" target="_blank" rel="noopener noreferrer nofollow">UK government’s own review of the state of digital government</a> five years later showed only a small improvement: just 9% of projects were considered “green” and likely to be successful. We are far better at starting things than at governing them. If vibe coding produces a wave of unsanctioned, insecure, and unmaintained applications across British organisations, we will have added a new dimension to an already difficult problem.</p><h2 class="heading" style="text-align:left;" id="five-questions-for-monday-morning"><b>Five Questions for Monday Morning</b></h2><p class="paragraph" style="text-align:left;">So where does this leave us? The honest answer is that we don&#39;t yet know whether the vibe coding boom is the early tremor of a genuine revolution in how organisations innovate, or whether it is a brief, intense burst of enthusiasm that leaves behind more mess than value. 
Probably it is some of both.</p><p class="paragraph" style="text-align:left;">What I do know is that senior leaders need to be asking some pointed questions, not to dampen the energy, but to channel it productively.</p><p class="paragraph" style="text-align:left;">How do we harness the innovation potential of people building their own tools without creating an unmanageable sprawl of shadow applications? What governance framework makes sense for AI-generated code that was never designed, documented, or reviewed by a professional engineering team? How do we distinguish the genuinely useful prototypes, the ones worth investing in, from the interesting-but-disposable experiments? Are we capturing what people learn through building, even when the specific app they create has a short shelf life? And what does this tell us about the skills, structures, and cultures we need to build for an AI-enabled future?</p><p class="paragraph" style="text-align:left;">The agentic era I wrote about in my last Dispatch is still coming. But before we get there, something unexpected has happened. Thousands of people who never thought of themselves as developers are building software, right now, tonight. The tools have opened a door. The question is what we choose to build on the other side of it.</p></div><div class='beehiiv__footer'><br class='beehiiv__footer__break'><hr class='beehiiv__footer__line'><a target="_blank" class="beehiiv__footer_link" style="text-align: center;" href="https://www.beehiiv.com/?utm_campaign=b9289a12-f781-4bf9-a743-a20bec354d0c&utm_medium=post_rss&utm_source=digital_economy_dispatches">Powered by beehiiv</a></div></div>
  ]]></content:encoded>
</item>

      <item>
  <title>Digital Economy Dispatch #274 -- The 9% Problem: What the Data Says About The UK’s AI Readiness</title>
  <description>Before we talk about making AI work for Britain, we need to look honestly at where we&#39;re starting from. The government&#39;s own data tells a story that deserves far more attention than it has received.</description>
      <enclosure url="https://media.beehiiv.com/cdn-cgi/image/fit=scale-down,format=auto,onerror=redirect,quality=80/uploads/asset/file/3009fdb9-5c03-4e18-96e8-a2f00fe7f0d7/ai-head-photo-1.jpg" length="31615" type="image/jpeg"/>
  <link>https://dispatches.alanbrown.net/p/digital-economy-dispatch-274-the-9-problem-what-the-data-says-about-the-uk-s-ai-readiness</link>
  <guid isPermaLink="true">https://dispatches.alanbrown.net/p/digital-economy-dispatch-274-the-9-problem-what-the-data-says-about-the-uk-s-ai-readiness</guid>
  <pubDate>Sun, 01 Mar 2026 08:25:00 +0000</pubDate>
  <atom:published>2026-03-01T08:25:00Z</atom:published>
    <dc:creator>Alan Brown</dc:creator>
  <content:encoded><![CDATA[
    <div class='beehiiv'><style>
  .bh__table, .bh__table_header, .bh__table_cell { border: 1px solid #C0C0C0; }
  .bh__table_cell { padding: 5px; background-color: #FFFFFF; }
  .bh__table_cell p { color: #2D2D2D; font-family: 'Helvetica',Arial,sans-serif !important; overflow-wrap: break-word; }
  .bh__table_header { padding: 5px; background-color:#F1F1F1; }
  .bh__table_header p { color: #2A2A2A; font-family:'Trebuchet MS','Lucida Grande',Tahoma,sans-serif !important; overflow-wrap: break-word; }
</style><div class='beehiiv__body'><p class="paragraph" style="text-align:left;">Here&#39;s a question worth considering before you read on.</p><p class="paragraph" style="text-align:left;">The UK government spends £26 billion every year on digital technology. It employs nearly 100,000 digital and data professionals. It has been running large-scale digital transformation programmes for over thirty years. Given all of that investment and accumulated experience, what percentage of the government&#39;s major technology programmes were assessed as &quot;Green&quot; (i.e., successful delivery is considered highly likely) in the government&#39;s own review, published in January 2025?</p><p class="paragraph" style="text-align:left;">Have a guess. How about 70%? 50%? 30%?</p><p class="paragraph" style="text-align:left;"><b>The answer is 9%.</b></p><p class="paragraph" style="text-align:left;">Less than one in ten. And those same technology programmes are 60% more likely to be rated &quot;Red&quot; (i.e., successful delivery is considered highly at risk) than non-technology projects sitting alongside them in the same portfolio.</p><p class="paragraph" style="text-align:left;">This figure comes from the UK’s <a class="link" href="https://www.gov.uk/government/publications/state-of-digital-government-review/state-of-digital-government-review?utm_source=dispatches.alanbrown.net&utm_medium=newsletter&utm_campaign=digital-economy-dispatch-274-the-9-problem-what-the-data-says-about-the-uk-s-ai-readiness" target="_blank" rel="noopener noreferrer nofollow">State of Digital Government Review</a>, published in January 2025 and <a class="link" href="https://hansard.parliament.uk/commons/2025-01-21/debates/25012152000007/DigitalGovernment?utm_source=dispatches.alanbrown.net&utm_medium=newsletter&utm_campaign=digital-economy-dispatch-274-the-9-problem-what-the-data-says-about-the-uk-s-ai-readiness" target="_blank" rel="noopener noreferrer nofollow">presented to Parliament by the 
Secretary of State for Science, Innovation and Technology</a>. It is one of the most candid official assessments of public sector digital performance this country has ever produced. And it is the essential context for any serious conversation about AI adoption in Britain.</p><h2 class="heading" style="text-align:left;" id="what-else-the-data-shows"><b>What Else the Data Shows</b></h2><p class="paragraph" style="text-align:left;">The 9% headline is striking enough, but it sits within a broader picture that needs careful attention.</p><p class="paragraph" style="text-align:left;">The report also notes that<b> 47%</b> of central government services still rely entirely on non-digital methods such as phone calls, paper forms, and in-person visits. <b>Half</b> of all digital and data recruitment campaigns in 2024 failed to fill the role advertised; in 2019, that failure rate was 22%. The pay gap between the public and private sectors for technical architects is <b>35%</b>, equivalent to around £30,000 per year. 
The average digital contractor costs <b><a class="link" href="https://www.nao.org.uk/insights/governments-approach-to-technology-suppliers-addressing-the-challenges/?utm_source=dispatches.alanbrown.net&utm_medium=newsletter&utm_campaign=digital-economy-dispatch-274-the-9-problem-what-the-data-says-about-the-uk-s-ai-readiness" target="_blank" rel="noopener noreferrer nofollow">three times</a></b><a class="link" href="https://www.nao.org.uk/insights/governments-approach-to-technology-suppliers-addressing-the-challenges/?utm_source=dispatches.alanbrown.net&utm_medium=newsletter&utm_campaign=digital-economy-dispatch-274-the-9-problem-what-the-data-says-about-the-uk-s-ai-readiness" target="_blank" rel="noopener noreferrer nofollow"> as much as a permanent employee</a>, and yet headcount restrictions make contractors easier to hire than permanent staff, so that is what organisations do.</p><p class="paragraph" style="text-align:left;">Only <b>four</b> central government departments out of more than twenty have a digital leader on their executive committee. Only around <b>20%</b> of senior civil servants have verified themselves as digitally upskilled against the government&#39;s own framework.</p><p class="paragraph" style="text-align:left;">And on AI specifically: only <b>8%</b> of public sector AI projects show measurable benefits, and only <b>16%</b> show forecast costs.</p><p class="paragraph" style="text-align:left;">These are not isolated data points. They form a pattern, and the review&#39;s authors are very clear about what that pattern means. The successes that do exist in UK public sector digital delivery have typically been achieved, in their own words, &quot;despite the system rather than because of it&quot;, and dependent on the dedication of individuals navigating structures that were not designed for digital-age delivery. This shouldn’t be a surprise. 
The NAO noted as far back as 2021 that despite 25 years of government strategies, <a class="link" href="https://www.nao.org.uk/press-releases/the-challenges-in-implementing-digital-change/?utm_source=dispatches.alanbrown.net&utm_medium=newsletter&utm_campaign=digital-economy-dispatch-274-the-9-problem-what-the-data-says-about-the-uk-s-ai-readiness" target="_blank" rel="noopener noreferrer nofollow">there is a consistent pattern of underperformance in delivering digital business change</a>.</p><h2 class="heading" style="text-align:left;" id="the-policy-challenge"><b>The Policy Challenge</b></h2><p class="paragraph" style="text-align:left;">The State of Digital Government review identifies five root causes for this state of affairs: <a class="link" href="https://www.nao.org.uk/reports/digital-transformation-in-government-addressing-the-barriers/?utm_source=dispatches.alanbrown.net&utm_medium=newsletter&utm_campaign=digital-economy-dispatch-274-the-9-problem-what-the-data-says-about-the-uk-s-ai-readiness" target="_blank" rel="noopener noreferrer nofollow">leadership, structure, measurement, talent, and funding</a>. What is striking about all five is that none of them are technology problems. They are organisational and institutional issues. And, unfortunately, they’re the kind that a more capable AI model or a new AI incubation hub will not fix.</p><p class="paragraph" style="text-align:left;">It starts with people. Digital leaders are not consistently represented at executive level. Pay frameworks actively drive technical talent out of the public sector. Funding models are designed for capital projects, not the continuous improvement that digital services require. Governance processes were built for infrastructure delivery, not iterative technology development. 
And institutional knowledge has been steadily transferred to expensive contractors rather than built into permanent capability.</p><p class="paragraph" style="text-align:left;">We need to acknowledge that this is the foundation on which the UK&#39;s AI ambitions are being built.</p><h2 class="heading" style="text-align:left;" id="the-leadership-question"><b>The Leadership Question</b></h2><p class="paragraph" style="text-align:left;">There is a natural temptation, when confronted with data like this, to argue that AI is different. This time the technology is powerful enough to cut through institutional inertia and deliver results that previous digital programmes could not. I understand the argument. I have heard it made sincerely by people I respect.</p><p class="paragraph" style="text-align:left;">But consider what the data actually shows. The barriers that this review and others identify are not particular to a specific kind of technology. They are barriers to organisational change of any kind. An institution that cannot successfully commission, manage, and embed digital programmes does not automatically get better at doing so because the technology on the table is more impressive. Indeed, the depth and speed of disruption being caused by AI only increase the risks.</p><p class="paragraph" style="text-align:left;">The question that matters is not &quot;how do we deploy AI?&quot;. It is &quot;what does our organisation need to be able to do differently before AI deployment can succeed?&quot;. Those are very different questions, and the gap between them is where most AI programmes quietly founder.</p><h2 class="heading" style="text-align:left;" id="stepping-back"><b>Stepping Back</b></h2><p class="paragraph" style="text-align:left;">None of these comments is written to be discouraging, and it is certainly not a criticism of the many talented and committed people working in digital roles across the public sector. 
The review itself is full of genuine success stories (e.g., the NHS App, <a class="link" href="https://GOV.UK?utm_source=dispatches.alanbrown.net&utm_medium=newsletter&utm_campaign=digital-economy-dispatch-274-the-9-problem-what-the-data-says-about-the-uk-s-ai-readiness" target="_blank" rel="noopener noreferrer nofollow">GOV.UK</a> One Login, Hillingdon Council&#39;s AI-driven contact system that saved £5 for every pound spent, and DWP&#39;s use of AI to improve bereavement notifications). The potential is real, and the commitment is genuine.</p><p class="paragraph" style="text-align:left;">But it is worth pausing, stepping back, and considering what this data actually means.</p><p class="paragraph" style="text-align:left;">The UK has an ambitious national AI strategy. We have real political will behind it. We have world-class research capability in our universities and a genuine concentration of AI talent. All of that is true and worth celebrating.</p><p class="paragraph" style="text-align:left;">But the honest read of the evidence is this: we cannot simply reach for the AI magic wand and expect results to follow. The gap between AI aspiration and AI implementation in the UK is not primarily a technology gap. It is an institutional gap — in capability, in leadership, in incentive structures, in the basic organisational conditions that determine whether a complex programme succeeds or quietly joins the graveyard of previous well-intentioned initiatives.</p><p class="paragraph" style="text-align:left;">What can we do to bridge this gap? 
That is the question I have been exploring in the research behind my forthcoming book <i><a class="link" href="https://futureofai.uk/?utm_source=dispatches.alanbrown.net&utm_medium=newsletter&utm_campaign=digital-economy-dispatch-274-the-9-problem-what-the-data-says-about-the-uk-s-ai-readiness" target="_blank" rel="noopener noreferrer nofollow">Making AI Work for Britain</a></i>, to be published in April by the London Publishing Partnership.</p><p class="paragraph" style="text-align:left;">Over the coming weeks, I will be writing more about the gap itself, the ways the UK can face up to the challenges of delivering AI at scale, and the organisations that are successfully executing a path forward. If you want early insight into what the book says on these themes, sign up on LinkedIn for a parallel series of articles at <a class="link" href="https://newsletter.alanbrown.net?utm_source=dispatches.alanbrown.net&utm_medium=newsletter&utm_campaign=digital-economy-dispatch-274-the-9-problem-what-the-data-says-about-the-uk-s-ai-readiness" target="_blank" rel="noopener noreferrer nofollow">newsletter.alanbrown.net</a>.</p></div><div class='beehiiv__footer'><br class='beehiiv__footer__break'><hr class='beehiiv__footer__line'><a target="_blank" class="beehiiv__footer_link" style="text-align: center;" href="https://www.beehiiv.com/?utm_campaign=3f6bd4dc-846d-458d-98f0-876da75b2007&utm_medium=post_rss&utm_source=digital_economy_dispatches">Powered by beehiiv</a></div></div>
  ]]></content:encoded>
</item>

      <item>
  <title>Digital Economy Dispatch #273 -- Recursive AI is Here - and Why it Matters</title>
  <description>AI is now building AI and accelerating its own development in ways that outpace governance, reshape business economics, and challenge the assumption that humans control the pace of change.</description>
      <enclosure url="https://media.beehiiv.com/cdn-cgi/image/fit=scale-down,format=auto,onerror=redirect,quality=80/uploads/asset/file/4d57fcb2-cf4d-4d79-9af9-b5f007c2e760/AI-bulb-photo-1.jpg" length="40184" type="image/jpeg"/>
  <link>https://dispatches.alanbrown.net/p/digital-economy-dispatch-273-recursive-ai-is-here-and-why-it-matters</link>
  <guid isPermaLink="true">https://dispatches.alanbrown.net/p/digital-economy-dispatch-273-recursive-ai-is-here-and-why-it-matters</guid>
  <pubDate>Sun, 22 Feb 2026 08:25:00 +0000</pubDate>
  <atom:published>2026-02-22T08:25:00Z</atom:published>
    <dc:creator>Alan Brown</dc:creator>
  <content:encoded><![CDATA[
    <div class='beehiiv'><style>
  .bh__table, .bh__table_header, .bh__table_cell { border: 1px solid #C0C0C0; }
  .bh__table_cell { padding: 5px; background-color: #FFFFFF; }
  .bh__table_cell p { color: #2D2D2D; font-family: 'Helvetica',Arial,sans-serif !important; overflow-wrap: break-word; }
  .bh__table_header { padding: 5px; background-color:#F1F1F1; }
  .bh__table_header p { color: #2A2A2A; font-family:'Trebuchet MS','Lucida Grande',Tahoma,sans-serif !important; overflow-wrap: break-word; }
</style><div class='beehiiv__body'><p class="paragraph" style="text-align:left;">Something shifted recently, and I&#39;ve only just begun to realise its significance.</p><p class="paragraph" style="text-align:left;">AI is now being used to develop AI. Not as a metaphor. Not as a future possibility. Right now. The tools are building the tools. The loop has closed.</p><p class="paragraph" style="text-align:left;">I&#39;m calling this <b>Recursive AI</b> -- the use of AI systems to accelerate the development of AI systems. It matters more than most of the AI developments we spend time discussing. It&#39;s not another incremental capability improvement. It&#39;s a change in the nature of the game itself.</p><h2 class="heading" style="text-align:left;" id="whats-actually-happening"><b>What&#39;s Actually Happening</b></h2><p class="paragraph" style="text-align:left;">The numbers are startling. At Anthropic, <a class="link" href="https://fortune.com/2026/01/29/100-percent-of-code-at-anthropic-and-openai-is-now-ai-written-boris-cherny-roon/?utm_source=dispatches.alanbrown.net&utm_medium=newsletter&utm_campaign=digital-economy-dispatch-273-recursive-ai-is-here-and-why-it-matters" target="_blank" rel="noopener noreferrer nofollow">engineers report that 70-90% of their code is now AI-generated</a>, with some senior engineers claiming they haven&#39;t written code by hand in months. Boris Cherny, head of Claude Code, says he shipped 22 pull requests in a single day, each one 100% written by AI. At OpenAI, researchers report similar figures. The people building the most advanced AI systems are using those systems to build the next generation.</p><p class="paragraph" style="text-align:left;">And it&#39;s not just the frontier labs. AI-assisted coding tools mean that virtually every AI startup is now using AI to build AI. 
The <a class="link" href="https://techcrunch.com/2025/03/06/a-quarter-of-startups-in-ycs-current-cohort-have-codebases-that-are-almost-entirely-ai-generated/?utm_source=dispatches.alanbrown.net&utm_medium=newsletter&utm_campaign=digital-economy-dispatch-273-recursive-ai-is-here-and-why-it-matters" target="_blank" rel="noopener noreferrer nofollow">Y Combinator statistic</a> that 25% of their current batch has 95% AI-generated codebases includes companies building AI products themselves.</p><p class="paragraph" style="text-align:left;">Former Google CEO Eric Schmidt has been <a class="link" href="https://www.thecrimson.com/article/2025/12/2/google-ceo-ai-self-improvement/?utm_source=dispatches.alanbrown.net&utm_medium=newsletter&utm_campaign=digital-economy-dispatch-273-recursive-ai-is-here-and-why-it-matters" target="_blank" rel="noopener noreferrer nofollow">warning about this trajectory</a>, predicting that recursive self-improvement, where AI is learning and improving without human instruction, is now just two to four years away. As he put it at Harvard: &quot;The computers are now doing self-improvement. They&#39;re learning how to plan, and they don&#39;t have to listen to us anymore.&quot;</p><p class="paragraph" style="text-align:left;">The recursion is real. And it&#39;s accelerating.</p><h2 class="heading" style="text-align:left;" id="why-recursive-ai-is-different"><b>Why Recursive AI Is Different</b></h2><p class="paragraph" style="text-align:left;">We&#39;ve had automation in technology development before. Better tools have always enabled better tools. Compilers made it easier to build compilers. Cloud computing made it easier to build cloud services.</p><p class="paragraph" style="text-align:left;">But Recursive AI is qualitatively different. Previous automation amplified human capability. AI is increasingly <i><b>substituting</b></i> for human cognitive work in the development process itself. 
The system is contributing to its own improvement in ways that go beyond simple tool use.</p><p class="paragraph" style="text-align:left;">This creates feedback dynamics we haven&#39;t seen before. If AI makes AI development faster, and those faster-developed AIs make the next round even faster, the pace of change becomes difficult to predict—and potentially difficult to control.</p><p class="paragraph" style="text-align:left;">As <a class="link" href="https://www.hyperdimensional.co/p/on-recursive-self-improvement-part?utm_source=dispatches.alanbrown.net&utm_medium=newsletter&utm_campaign=digital-economy-dispatch-273-recursive-ai-is-here-and-why-it-matters" target="_blank" rel="noopener noreferrer nofollow">one analysis notes</a>, AI agents that build the next versions of themselves are not science fiction—they&#39;re an explicit milestone on the roadmap of every frontier AI lab. OpenAI has publicly discussed hundreds of thousands of automated research &quot;interns&quot; within months, and a fully automated workforce within two years. The workforce that doesn&#39;t sleep, doesn&#39;t eat, and whose only objective is to make itself smarter.</p><p class="paragraph" style="text-align:left;">I&#39;m not making apocalyptic claims here. But I am noting that the assumption underlying most AI governance discussions—that humans set the pace of AI development—is becoming less obviously true.</p><h2 class="heading" style="text-align:left;" id="the-implications-for-digital-leader"><b>The Implications for Digital Leaders</b></h2><p class="paragraph" style="text-align:left;">If you&#39;re leading digital strategy, Recursive AI matters for several reasons.</p><p class="paragraph" style="text-align:left;"><b>The capability frontier is moving faster than your planning cycles.</b> If AI development is accelerating AI development, the gap between what&#39;s possible today and what&#39;s possible in 18 months may be larger than you&#39;re assuming. 
Strategies built on current capabilities may be obsolete before they&#39;re implemented.</p><p class="paragraph" style="text-align:left;"><b>Build vs. buy calculations are shifting.</b> When AI can help build AI-powered products, the cost and time to create custom solutions drops. What previously required specialised AI teams may become achievable with smaller groups augmented by AI tools. The <a class="link" href="https://hbr.org/2025/03/strategy-in-an-era-of-abundant-expertise?utm_source=dispatches.alanbrown.net&utm_medium=newsletter&utm_campaign=digital-economy-dispatch-273-recursive-ai-is-here-and-why-it-matters" target="_blank" rel="noopener noreferrer nofollow">economics of expertise</a> are changing faster than most organisations realise.</p><p class="paragraph" style="text-align:left;"><b>Your AI vendors are on this curve too.</b> The products you&#39;re buying or building on will change rapidly. Today&#39;s capabilities are not a stable foundation. Plan for continuous adaptation, not implementation and maintenance.</p><h2 class="heading" style="text-align:left;" id="the-policy-challenge"><b>The Policy Challenge</b></h2><p class="paragraph" style="text-align:left;">For policy makers and regulators, Recursive AI poses genuine challenges.</p><p class="paragraph" style="text-align:left;"><b>Oversight becomes harder.</b> If AI systems are contributing to their own development, understanding what&#39;s being built—and why—becomes more complex. The humans involved may not fully understand the choices being made by their AI assistants.</p><p class="paragraph" style="text-align:left;"><b>Speed outpaces governance.</b> Regulatory frameworks assume there&#39;s time to observe, deliberate, and respond. If the development cycle is compressing because AI is accelerating it, that assumption weakens. 
By the time a concern is identified and addressed, the technology may have moved on.</p><p class="paragraph" style="text-align:left;"><b>Accountability blurs.</b> When an AI system contributes to building another AI system, and that system causes harm, the chain of responsibility becomes tangled. We need new frameworks for thinking about accountability in recursive development processes.</p><p class="paragraph" style="text-align:left;">None of this means regulation is futile. But it does mean that governance approaches designed for human-paced development may need rethinking.</p><h2 class="heading" style="text-align:left;" id="what-to-watch"><b>What To Watch</b></h2><p class="paragraph" style="text-align:left;">I don&#39;t know where Recursive AI leads. Nobody does. But here&#39;s what I&#39;m paying attention to:</p><ul><li><p class="paragraph" style="text-align:left;"><b>The self-improvement metrics.</b> Labs are measuring what percentage of their development work is AI-assisted. Anthropic says 70-90% company-wide. Watch those numbers. When they cross certain thresholds, the dynamics change fundamentally.</p></li><li><p class="paragraph" style="text-align:left;"><b>The research-to-deployment gap.</b> How quickly are advances in the lab making it into products? That gap seems to be compressing. Recursive AI is one reason why.</p></li><li><p class="paragraph" style="text-align:left;"><b>The concentration question.</b> Does Recursive AI favour incumbents (who have the best models to assist their own work) or challengers (who can use available tools to move fast)? The answer will shape the industry structure.</p></li></ul><h2 class="heading" style="text-align:left;" id="the-honest-position"><b>The Honest Position</b></h2><p class="paragraph" style="text-align:left;">I find Recursive AI fascinating and unsettling in roughly equal measure.</p><p class="paragraph" style="text-align:left;">Fascinating because it&#39;s genuinely novel. 
We&#39;re watching systems contribute to their own improvement in ways that have no real precedent. The intellectual challenge of understanding what this means is significant.</p><p class="paragraph" style="text-align:left;">Unsettling because the assumptions I&#39;ve relied on—that humans set the pace, that we can observe and adjust, that governance can keep up—feel less solid than they did two years ago.</p><p class="paragraph" style="text-align:left;">The honest position is uncertainty. We&#39;re in a loop now, and we don&#39;t know where it leads. What I do know is that pretending Recursive AI isn&#39;t happening isn&#39;t a strategy. Leaders and policy makers need to engage with this reality, even when, or especially when, it makes planning harder.</p><p class="paragraph" style="text-align:left;"> </p><p class="paragraph" style="text-align:left;"> </p></div><div class='beehiiv__footer'><br class='beehiiv__footer__break'><hr class='beehiiv__footer__line'><a target="_blank" class="beehiiv__footer_link" style="text-align: center;" href="https://www.beehiiv.com/?utm_campaign=5bbb70c9-37e9-4e20-afcb-41de6f80e6e7&utm_medium=post_rss&utm_source=digital_economy_dispatches">Powered by beehiiv</a></div></div>
  ]]></content:encoded>
</item>

      <item>
  <title>Digital Economy Dispatch #272 -- Why AI Makes Firms Collapse as Expertise Becomes Cheap</title>
  <description>AI is slashing the cost of expertise, breaking the economic logic of the firm. As scale becomes a liability, survival depends on human judgment, not headcount.</description>
      <enclosure url="https://media.beehiiv.com/cdn-cgi/image/fit=scale-down,format=auto,onerror=redirect,quality=80/uploads/asset/file/31f3e3fb-1531-4440-85d9-f8cce9225b8d/skyscraper-photo-1.jpg" length="153504" type="image/jpeg"/>
  <link>https://dispatches.alanbrown.net/p/digital-economy-dispatch-272-why-ai-makes-firms-collapse-as-expertise-becomes-cheap</link>
  <guid isPermaLink="true">https://dispatches.alanbrown.net/p/digital-economy-dispatch-272-why-ai-makes-firms-collapse-as-expertise-becomes-cheap</guid>
  <pubDate>Sun, 15 Feb 2026 08:25:05 +0000</pubDate>
  <atom:published>2026-02-15T08:25:05Z</atom:published>
    <dc:creator>Alan Brown</dc:creator>
  <content:encoded><![CDATA[
    <div class='beehiiv'><style>
  .bh__table, .bh__table_header, .bh__table_cell { border: 1px solid #C0C0C0; }
  .bh__table_cell { padding: 5px; background-color: #FFFFFF; }
  .bh__table_cell p { color: #2D2D2D; font-family: 'Helvetica',Arial,sans-serif !important; overflow-wrap: break-word; }
  .bh__table_header { padding: 5px; background-color:#F1F1F1; }
  .bh__table_header p { color: #2A2A2A; font-family:'Trebuchet MS','Lucida Grande',Tahoma,sans-serif !important; overflow-wrap: break-word; }
</style><div class='beehiiv__body'><p class="paragraph" style="text-align:left;">In 1937, the economist Ronald Coase asked a deceptively simple question: <a class="link" href="https://onlinelibrary.wiley.com/doi/full/10.1111/j.1468-0335.1937.tb00002.x?utm_source=dispatches.alanbrown.net&utm_medium=newsletter&utm_campaign=digital-economy-dispatch-272-why-ai-makes-firms-collapse-as-expertise-becomes-cheap" target="_blank" rel="noopener noreferrer nofollow">why do firms exist?</a> His answer has shaped how we think about organisations for nearly a century. Now AI is forcing us to revisit it.</p><p class="paragraph" style="text-align:left;">Coase argued that a company&#39;s size and scope are determined by the relationship between internal and external costs. When it&#39;s cheaper to do something inside the firm, organisations grow. When it&#39;s cheaper to buy from outside, they shrink and outsource. The boundary of the firm sits wherever these costs balance.</p><p class="paragraph" style="text-align:left;">A <a class="link" href="https://hbr.org/2025/03/strategy-in-an-era-of-abundant-expertise?utm_source=dispatches.alanbrown.net&utm_medium=newsletter&utm_campaign=digital-economy-dispatch-272-why-ai-makes-firms-collapse-as-expertise-becomes-cheap" target="_blank" rel="noopener noreferrer nofollow">recent Harvard Business Review article</a> by Microsoft&#39;s strategy team and Harvard&#39;s <a class="link" href="https://www.hbs.edu/faculty/Pages/profile.aspx?facId=240491&utm_source=dispatches.alanbrown.net&utm_medium=newsletter&utm_campaign=digital-economy-dispatch-272-why-ai-makes-firms-collapse-as-expertise-becomes-cheap" target="_blank" rel="noopener noreferrer nofollow">Karim Lakhani</a> explores what happens when AI disrupts this balance. Their core insight is that we&#39;re witnessing two forces colliding. The amount of expertise required to create value keeps increasing. 
But the cost of accessing that expertise is plummeting.</p><p class="paragraph" style="text-align:left;">This tension has profound implications, not just for corporate strategy, but for how we think about digital leadership and public policy.</p><h2 class="heading" style="text-align:left;" id="the-expertise-paradox"><b>The Expertise Paradox</b></h2><p class="paragraph" style="text-align:left;">Think about what it takes to build a modern digital product. You need software engineering, yes. But also user experience design, data science, cybersecurity, cloud architecture, compliance expertise, accessibility knowledge, and increasingly, AI and machine learning skills. The bar keeps rising.</p><p class="paragraph" style="text-align:left;">At the same time, AI is making much of this expertise dramatically cheaper to access. Need a first draft of code? A security audit checklist? A compliance framework? A data analysis? Tasks that once required hiring specialists or expensive consultants can now be accomplished (at least to a functional level) by anyone with access to AI tools.</p><p class="paragraph" style="text-align:left;">This is Coase&#39;s equation, scrambled.</p><h2 class="heading" style="text-align:left;" id="what-this-means-for-organisations"><b>What This Means for Organisations</b></h2><p class="paragraph" style="text-align:left;">If external costs fall faster than internal costs, Coase&#39;s logic suggests organisations should shrink. Why maintain large in-house teams when you can access expertise on demand?</p><p class="paragraph" style="text-align:left;">We&#39;re already seeing this. 
<a class="link" href="https://developers.slashdot.org/story/25/03/18/1428226/vibe-coding-is-letting-10-engineers-do-the-work-of-a-team-of-50-to-100-says-yc-ceo?utm_source=dispatches.alanbrown.net&utm_medium=newsletter&utm_campaign=digital-economy-dispatch-272-why-ai-makes-firms-collapse-as-expertise-becomes-cheap" target="_blank" rel="noopener noreferrer nofollow">Small teams are building products that once required hundreds of engineers</a>. Startups are competing with incumbents not by matching their headcount, but by leveraging AI to punch above their weight. The <a class="link" href="https://techcrunch.com/2025/03/06/a-quarter-of-startups-in-ycs-current-cohort-have-codebases-that-are-almost-entirely-ai-generated/?utm_source=dispatches.alanbrown.net&utm_medium=newsletter&utm_campaign=digital-economy-dispatch-272-why-ai-makes-firms-collapse-as-expertise-becomes-cheap" target="_blank" rel="noopener noreferrer nofollow">Y Combinator statistic</a> that 25% of their current batch has 95% AI-generated codebases isn&#39;t just a fluke; it&#39;s a signal of major restructuring.</p><p class="paragraph" style="text-align:left;">But it&#39;s not that simple. Some expertise becomes <i><b>more</b></i> valuable as AI commoditises the rest. Judging AI output, asking the right questions, integrating across domains, and making decisions under uncertainty are human capabilities that now command premiums precisely because the routine work around them has become cheap.</p><p class="paragraph" style="text-align:left;">The organisations that thrive won&#39;t be the ones that simply cut costs. 
They&#39;ll be the ones that understand which expertise to internalise (because it&#39;s core to differentiation) and which to access externally (because AI has commoditised it).</p><h2 class="heading" style="text-align:left;" id="the-policy-challenge"><b>The Policy Challenge</b></h2><p class="paragraph" style="text-align:left;">For policy makers, the implications are equally significant.</p><p class="paragraph" style="text-align:left;">If AI dramatically reduces the cost of accessing expertise, what happens to the professions built around providing it? Legal services, accounting, consulting, software development are <a class="link" href="https://global.oup.com/academic/product/the-future-of-the-professions-9780198841890?utm_source=dispatches.alanbrown.net&utm_medium=newsletter&utm_campaign=digital-economy-dispatch-272-why-ai-makes-firms-collapse-as-expertise-becomes-cheap" target="_blank" rel="noopener noreferrer nofollow">all facing versions of this question</a>. The answer isn&#39;t mass unemployment (we&#39;ve heard that prediction before), but it is structural change that policy needs to anticipate.</p><p class="paragraph" style="text-align:left;">More subtly, if small organisations can now access expertise that was previously the preserve of large ones, what does this mean for competition policy? For industrial strategy? For how we think about supporting innovation?</p><p class="paragraph" style="text-align:left;">The old assumption was that scale confers advantages through accumulated expertise. With AI, this is now weakening. A two-person startup with AI tools might genuinely compete with an established player in ways that weren&#39;t possible even three years ago. 
This changes the calculus for regulators and for government investment in innovation.</p><h2 class="heading" style="text-align:left;" id="the-leadership-question"><b>The Leadership Question</b></h2><p class="paragraph" style="text-align:left;">For digital leaders, the practical question raised by revisiting Coase’s work is this: which expertise should you own, and which should you rent?</p><p class="paragraph" style="text-align:left;">The answer requires honest assessment. What capabilities really differentiate your organisation? What requires deep contextual knowledge that AI can&#39;t easily replicate? What involves judgement, relationships, and trust that remain fundamentally human?</p><p class="paragraph" style="text-align:left;">Those you invest in. Those you build. Those you protect.</p><p class="paragraph" style="text-align:left;">Everything else, AI is making increasingly available on demand. Fighting that transition is futile. The smart play is to redirect resources from commoditised expertise toward the capabilities that still create differentiation.</p><h2 class="heading" style="text-align:left;" id="coase-updated"><b>Coase Updated</b></h2><p class="paragraph" style="text-align:left;">Coase&#39;s insight remains valid. Firms exist because sometimes it&#39;s more efficient to organise activity internally than to transact externally. What&#39;s changed is the cost curve.</p><p class="paragraph" style="text-align:left;">AI is dramatically reducing the external cost of expertise. That pressure will reshape organisations by making some smaller and more focused, enabling others to expand into areas where they previously lacked capabilities, and forcing all of them to reconsider where their boundaries should sit.</p><p class="paragraph" style="text-align:left;">The economists will eventually update the models. In the meantime, digital leaders and policy makers need to act on the implications now. 
The expertise that defined your organisation yesterday may be available to everyone tomorrow. The question is: what will you do that still matters?</p></div><div class='beehiiv__footer'><br class='beehiiv__footer__break'><hr class='beehiiv__footer__line'><a target="_blank" class="beehiiv__footer_link" style="text-align: center;" href="https://www.beehiiv.com/?utm_campaign=11b69c74-fe08-47aa-9e40-15d2c67791a8&utm_medium=post_rss&utm_source=digital_economy_dispatches">Powered by beehiiv</a></div></div>
  ]]></content:encoded>
</item>

      <item>
  <title>Digital Economy Dispatch #271 -- Vibe Coding and the Dawn of Disposable AI</title>
  <description>While AI coding assistants dramatically lower the barrier to building software, the true shift lies in the move toward &quot;disposable code&quot;, where the traditional value of a permanent codebase is replaced by a landscape of rapid prototyping, security risks, and the evaporation of intellectual property moats.</description>
      <enclosure url="https://media.beehiiv.com/cdn-cgi/image/fit=scale-down,format=auto,onerror=redirect,quality=80/uploads/asset/file/bb72fe2c-8cec-466b-95e5-d18c156c9091/cat-code-photo-1.jpg" length="58761" type="image/jpeg"/>
  <link>https://dispatches.alanbrown.net/p/digital-economy-dispatch-271-vibe-coding-and-the-dawn-of-disposable-ai</link>
  <guid isPermaLink="true">https://dispatches.alanbrown.net/p/digital-economy-dispatch-271-vibe-coding-and-the-dawn-of-disposable-ai</guid>
  <pubDate>Sun, 08 Feb 2026 08:20:05 +0000</pubDate>
  <atom:published>2026-02-08T08:20:05Z</atom:published>
    <dc:creator>Alan Brown</dc:creator>
  <content:encoded><![CDATA[
    <div class='beehiiv'><style>
  .bh__table, .bh__table_header, .bh__table_cell { border: 1px solid #C0C0C0; }
  .bh__table_cell { padding: 5px; background-color: #FFFFFF; }
  .bh__table_cell p { color: #2D2D2D; font-family: 'Helvetica',Arial,sans-serif !important; overflow-wrap: break-word; }
  .bh__table_header { padding: 5px; background-color:#F1F1F1; }
  .bh__table_header p { color: #2A2A2A; font-family:'Trebuchet MS','Lucida Grande',Tahoma,sans-serif !important; overflow-wrap: break-word; }
</style><div class='beehiiv__body'><p class="paragraph" style="text-align:left;">For the past month, I&#39;ve spent time every day creating tools, utilities, and applications using AI coding assistants. Claude Code, Cursor, Copilot, ChatGPT, Gemini, I&#39;ve tried them all. It&#39;s been fun. And frustrating.</p><p class="paragraph" style="text-align:left;">I’ve had a blast. But if you&#39;re only paying attention to the fun and the frustration, you&#39;re missing the bigger picture. Something fundamental is shifting. And the implications go far beyond whether these tools are any good at spitting out code.</p><p class="paragraph" style="text-align:left;">I should start with a warning. I&#39;m probably not the typical target user for what <a class="link" href="https://x.com/karpathy/status/1886192184808149383?utm_source=dispatches.alanbrown.net&utm_medium=newsletter&utm_campaign=digital-economy-dispatch-271-vibe-coding-and-the-dawn-of-disposable-ai" target="_blank" rel="noopener noreferrer nofollow">Andrej Karpathy called &quot;vibe coding&quot;</a>; that approach where you &quot;fully give in to the vibes, embrace exponentials, and forget that the code even exists.&quot; I have over thirty years of software development experience, from BASIC and FORTRAN to Prolog and Haskell. I know quite a bit about what&#39;s happening under the hood.</p><p class="paragraph" style="text-align:left;">That background gives me a useful vantage point. I can see what these tools get right, where they fall short. And what it all means for individuals, businesses, and society. Let me take you through the journey.</p><h2 class="heading" style="text-align:left;" id="the-magic-show"><b>The Magic Show</b></h2><p class="paragraph" style="text-align:left;">The first thing that strikes you is how much seems possible. With just a few prompts, these tools produce astonishing results. You describe a contact form with validation, and minutes later you have working code. It feels like magic. 
And <a class="link" href="https://techcrunch.com/2025/03/06/a-quarter-of-startups-in-ycs-current-cohort-have-codebases-that-are-almost-entirely-ai-generated/?utm_source=dispatches.alanbrown.net&utm_medium=newsletter&utm_campaign=digital-economy-dispatch-271-vibe-coding-and-the-dawn-of-disposable-ai" target="_blank" rel="noopener noreferrer nofollow">it’s taking hold at many organisations</a>.</p><p class="paragraph" style="text-align:left;">I focused on simple web applications that I can host quickly using HTML, CSS, JavaScript, PHP, and MySQL. Going from idea to running prototype can happen in minutes rather than days. It&#39;s seductive. You start thinking: why would I ever code manually again? But don&#39;t stop here. The magic is real, but it&#39;s also a distraction from what’s really changing.</p><h2 class="heading" style="text-align:left;" id="the-cracks-appear"><b>The Cracks Appear</b></h2><p class="paragraph" style="text-align:left;">The more you use these tools, the more worrying questions and inconsistencies you notice. Remarkably smart in some areas, these tools are bafflingly stupid in others. In one session, a tool elegantly solved a complex data transformation. In the next, it couldn&#39;t figure out why a simple CSS rule wasn&#39;t applying and went into an endless code rewriting loop.</p><p class="paragraph" style="text-align:left;">The intelligence is genuine but uneven. They pattern-match brilliantly until they don&#39;t.</p><h2 class="heading" style="text-align:left;" id="the-nearly-trap"><b>The &quot;Nearly&quot; Trap</b></h2><p class="paragraph" style="text-align:left;">The most insidious problem is how many times the generated code almost does what you want. Nearly right. Just not quite. Followed by endless pushing, pulling, and fiddling.</p><p class="paragraph" style="text-align:left;">So, for example, the form validation works, but error messages appear in the wrong place. You&#39;re 90% there, and that last 10% becomes maddening. 
Endless loops of refinement, each prompt fixing one thing while breaking another.</p><h2 class="heading" style="text-align:left;" id="the-hammer-problem"><b>The Hammer Problem</b></h2><p class="paragraph" style="text-align:left;">Spend enough time with these tools, and everything starts looking the same. The AI has preferred patterns, favourite libraries, and default approaches. It reaches for React when vanilla JavaScript would suffice.</p><p class="paragraph" style="text-align:left;">Every AI-generated project feels like every other. The tool&#39;s personality overwrites yours.</p><h2 class="heading" style="text-align:left;" id="the-knowledge-dividend"><b>The Knowledge Dividend</b></h2><p class="paragraph" style="text-align:left;">Here&#39;s where my decades of experience proved invaluable. When things go wrong, knowing what&#39;s happening under the hood saves enormous time. I can recognise why code is failing. I can give the AI precise instructions. I can spot dead ends.</p><p class="paragraph" style="text-align:left;">Why does this matter? <a class="link" href="https://www.veracode.com/?utm_source=dispatches.alanbrown.net&utm_medium=newsletter&utm_campaign=digital-economy-dispatch-271-vibe-coding-and-the-dawn-of-disposable-ai" target="_blank" rel="noopener noreferrer nofollow">Veracode&#39;s 2025 research</a> found that 45% of AI-generated code contains security vulnerabilities. If you can&#39;t recognise them, there is a strong chance you won&#39;t know to ask about them and won’t notice the impact until it is too late.</p><p class="paragraph" style="text-align:left;">This matters more than you might think. The tools are democratising code creation. They&#39;re also democratising insecurity.</p><h2 class="heading" style="text-align:left;" id="the-prototype-cliff"><b>The Prototype Cliff</b></h2><p class="paragraph" style="text-align:left;">From idea to prototype to show-and-tell is phenomenal. 
These tools excel at getting something working that you can demonstrate and iterate on.</p><p class="paragraph" style="text-align:left;">But there&#39;s a cliff edge. The moment you want to move beyond prototype to something robust, maintainable, and secure, everything changes. The quick wins become technical debt. This is fine if you know where the cliff is. Dangerous if you don&#39;t. And most people don&#39;t.</p><h2 class="heading" style="text-align:left;" id="the-hidden-obligations"><b>The Hidden Obligations</b></h2><p class="paragraph" style="text-align:left;">Given my interests, almost all my experiments involved storing data, managing sign-ons, and creating new knowledge from multiple sources. The technical parts are tricky but doable.</p><p class="paragraph" style="text-align:left;">But creating and sharing apps is much more than coding. Do you really understand the obligations you&#39;re taking on when you save a user&#39;s email and password? When you collect personal data? When you infer new knowledge from their inputs?</p><p class="paragraph" style="text-align:left;">The AI will happily generate a user registration form. It won&#39;t ask about GDPR compliance, data retention policies, or breach notification requirements. These aren&#39;t technical problems. They&#39;re governance problems. And they don&#39;t appear in the code.</p><p class="paragraph" style="text-align:left;">This is where the bigger picture starts to come into focus. The tools make building easy. They don&#39;t make responsibility easy. And that gap is about to cause a lot of pain.</p><h2 class="heading" style="text-align:left;" id="oh-no-not-the-comfy-chair"><b>Oh No, Not the Comfy Chair</b></h2><p class="paragraph" style="text-align:left;"><b>For individual users:</b> Vibe coding is genuinely useful for personal productivity and experimentation. 
But the further you venture toward anything involving other people&#39;s data, money, or trust, the more you need to understand what the code is actually doing. The barrier to building has collapsed. The barrier to building <i>responsibly</i> hasn&#39;t.</p><p class="paragraph" style="text-align:left;"><b>For managers:</b> Your people are already using these tools—whether you&#39;ve sanctioned it or not. You need visibility. What&#39;s being built? Where is it deployed? Who&#39;s accountable when something breaks? The productivity gains are real. So are the risks you can&#39;t see.</p><p class="paragraph" style="text-align:left;"><b>For policy makers:</b> <a class="link" href="https://www.lawfaremedia.org/article/when-the-vibe-are-off--the-security-risks-of-ai-generated-code?utm_source=dispatches.alanbrown.net&utm_medium=newsletter&utm_campaign=digital-economy-dispatch-271-vibe-coding-and-the-dawn-of-disposable-ai" target="_blank" rel="noopener noreferrer nofollow">The security research is sobering</a>. Between 25% and 45% of vibe-coded applications have security flaws. The democratisation of coding has democratised insecurity. Frameworks around software liability are about to be tested like never before.</p><h2 class="heading" style="text-align:left;" id="the-deeper-shift-welcome-to-disposa"><b>The Deeper Shift: Welcome to Disposable AI</b></h2><p class="paragraph" style="text-align:left;">But there&#39;s a bigger realisation that reframes everything. To vibe code effectively, you need to fundamentally shift how you think about what you&#39;re creating. And that shift has consequences far beyond your own productivity.</p><p class="paragraph" style="text-align:left;">For my entire career, software development has been about building things that last. You invest in architecture because code will be maintained for years. The economics of traditional software demanded durability. 
Vibe coding inverts this entirely.</p><p class="paragraph" style="text-align:left;">When I stopped to think about this, everything changed. I wasn&#39;t failing to build lasting software. I was succeeding at something different: building disposable software, fast.</p><ul><li><p class="paragraph" style="text-align:left;"><b>Testing an idea.</b> Does this concept make sense? The code isn&#39;t the point—the learning is.</p></li><li><p class="paragraph" style="text-align:left;"><b>Exploring a space.</b> What&#39;s possible with this API? Code as a thinking tool.</p></li><li><p class="paragraph" style="text-align:left;"><b>Solving a momentary problem.</b> A utility for a specific task. Something for this context, this moment.</p></li></ul><p class="paragraph" style="text-align:left;">In all these cases, longevity isn&#39;t just unnecessary. Instead, it&#39;s counterproductive.</p><h2 class="heading" style="text-align:left;" id="clone-personalise-move-on"><b>Clone, Personalise, Move On</b></h2><p class="paragraph" style="text-align:left;">But what really hit me was when I showed someone an idea I&#39;d been working on, curious what they thought. An hour later, they&#39;d used AI to deconstruct it, rebuild it, and add new features personalised to their specific needs.</p><p class="paragraph" style="text-align:left;">They didn&#39;t ask for the source code. They didn&#39;t request documentation. They just took the concept and made their own version. Clone, personalise, move on.</p><p class="paragraph" style="text-align:left;">Stop and think about what this means. The competitive advantage of having built something evaporates. The moat you thought you had is gone in an hour. The months of development work are replicable in an afternoon by anyone who sees what you&#39;ve made. This isn&#39;t a small shift. 
It&#39;s a fundamental reordering of how value is created and captured in software.</p><p class="paragraph" style="text-align:left;">The old rules were that creativity was hard and copying was harder. This seems to no longer apply. Intellectual property is increasingly meaningless. First-mover advantage has shrunk from years to days. The craftsmanship you invested in is invisible to anyone who clones the idea and rebuilds it their way. New rules are emerging. New forms of value. New winners and losers.</p><p class="paragraph" style="text-align:left;">For many kinds of solution (but not all), the losers will be those who cling to the old model of protecting codebases, hoarding technical knowledge, and believing that what they&#39;ve built can&#39;t be replicated.</p><p class="paragraph" style="text-align:left;">In these situations, the winners won&#39;t be those who build the most. They&#39;ll be those who learn the fastest, spot opportunities first, and move on before the crowd arrives.</p><h2 class="heading" style="text-align:left;" id="the-new-literacy"><b>The New Literacy</b></h2><p class="paragraph" style="text-align:left;">We&#39;re entering an era where a new kind of literacy matters: knowing when to build to last and when to build to discard. For decades, the cost of creating software meant anything worth building was worth building properly. Now that calculus has shifted. Creating software is cheap. The expensive thing is maintaining it, securing it, and governing it.</p><p class="paragraph" style="text-align:left;">The people who thrive will be those who can fluidly move between modes—building disposable tools for learning, then switching to serious engineering when something proves its worth.</p><p class="paragraph" style="text-align:left;">They&#39;ll understand that sometimes the best code is code that was never meant to last. My experiences have highlighted to me that the magic show is real. The frustrations are real. 
But neither is the point.</p><p class="paragraph" style="text-align:left;">The point is that the economics of software have fundamentally changed. The barriers that protected value have fallen. The skills that mattered are shifting. The rules are being rewritten.</p><p class="paragraph" style="text-align:left;">Don&#39;t get distracted by whether these tools are amazing or annoying. They&#39;re both. What matters is what comes next. The vibes are temporary. The disruption is permanent.</p><p class="paragraph" style="text-align:left;"> </p></div><div class='beehiiv__footer'><br class='beehiiv__footer__break'><hr class='beehiiv__footer__line'><a target="_blank" class="beehiiv__footer_link" style="text-align: center;" href="https://www.beehiiv.com/?utm_campaign=9a3a40b7-d9c0-48a8-ab9b-024ecff06dbb&utm_medium=post_rss&utm_source=digital_economy_dispatches">Powered by beehiiv</a></div></div>
  ]]></content:encoded>
</item>

      <item>
  <title>Digital Economy Dispatch #269 -- Why Strategy is Not Just Delivery</title>
  <description>With the remarkable speed at which AI is moving, it&#39;s important to be reminded that action alone isn&#39;t strategy. True AI success requires balancing agile delivery with deep analysis, long-term governance, and experienced judgment.</description>
      <enclosure url="https://media.beehiiv.com/cdn-cgi/image/fit=scale-down,format=auto,onerror=redirect,quality=80/uploads/asset/file/f8ed969b-5deb-4387-a5cb-7cd23069ae53/whiteboard-photo-1.jpg" length="64058" type="image/jpeg"/>
  <link>https://dispatches.alanbrown.net/p/digital-economy-dispatch-269-why-strategy-is-not-just-delivery</link>
  <guid isPermaLink="true">https://dispatches.alanbrown.net/p/digital-economy-dispatch-269-why-strategy-is-not-just-delivery</guid>
  <pubDate>Sun, 01 Feb 2026 08:25:07 +0000</pubDate>
  <atom:published>2026-02-01T08:25:07Z</atom:published>
    <dc:creator>Alan Brown</dc:creator>
  <content:encoded><![CDATA[
    <div class='beehiiv'><style>
  .bh__table, .bh__table_header, .bh__table_cell { border: 1px solid #C0C0C0; }
  .bh__table_cell { padding: 5px; background-color: #FFFFFF; }
  .bh__table_cell p { color: #2D2D2D; font-family: 'Helvetica',Arial,sans-serif !important; overflow-wrap: break-word; }
  .bh__table_header { padding: 5px; background-color:#F1F1F1; }
  .bh__table_header p { color: #2A2A2A; font-family:'Trebuchet MS','Lucida Grande',Tahoma,sans-serif !important; overflow-wrap: break-word; }
</style><div class='beehiiv__body'><p class="paragraph" style="text-align:left;">A few weeks ago, I found myself in a heated discussion with a group of senior executives about AI adoption. The conversation had turned to how organisations were approaching their AI strategies, and someone quoted the familiar mantra: &quot;The strategy is delivery.&quot; Heads nodded around the table. It was a phrase everyone knew. A rallying cry that had become gospel in digital transformation circles.</p><p class="paragraph" style="text-align:left;">I pushed back. Hard.</p><p class="paragraph" style="text-align:left;">Don&#39;t get me wrong. I have enormous respect for the work that <a class="link" href="https://mikebracken.com/?utm_source=dispatches.alanbrown.net&utm_medium=newsletter&utm_campaign=digital-economy-dispatch-269-why-strategy-is-not-just-delivery" target="_blank" rel="noopener noreferrer nofollow">Mike Bracken</a> and his colleagues did at the <a class="link" href="https://www.gov.uk/government/organisations/government-digital-service?utm_source=dispatches.alanbrown.net&utm_medium=newsletter&utm_campaign=digital-economy-dispatch-269-why-strategy-is-not-just-delivery" target="_blank" rel="noopener noreferrer nofollow">Government Digital Service</a>, and for the ideas captured in their book &quot;<a class="link" href="https://public.digital/pd-insights/digital-transformation-at-scale?utm_source=dispatches.alanbrown.net&utm_medium=newsletter&utm_campaign=digital-economy-dispatch-269-why-strategy-is-not-just-delivery" target="_blank" rel="noopener noreferrer nofollow">Digital Transformation at Scale: Why the Strategy is Delivery</a>&quot; – a well-thumbed copy sits in front of me as I type. Their emphasis on agility, user focus, and iterative improvement was exactly what government IT needed in 2011. 
After decades of bloated contracts, failed mega-projects, and technology decisions made far from the people using the services, the message was timely and necessary.</p><p class="paragraph" style="text-align:left;">But a rallying cry is not a complete philosophy. In the years since, &quot;strategy is delivery&quot; has been stretched beyond its original intent, too often becoming an excuse to avoid the difficult work of rolling up the sleeves to fix what&#39;s broken, taking ownership of hard decisions, and engaging in genuine strategic thinking.</p><h2 class="heading" style="text-align:left;" id="the-seductive-simplicity-of-just-do"><b>The Seductive Simplicity of &quot;Just Do It&quot;</b></h2><p class="paragraph" style="text-align:left;">The appeal of &quot;strategy is delivery&quot; is obvious. It cuts through bureaucratic paralysis. It demands action over endless planning cycles. It puts user needs at the centre.</p><p class="paragraph" style="text-align:left;">These are good things. Especially in environments where strategy had become synonymous with lengthy documents gathering dust while the world moved on. <a class="link" href="https://GOV.UK?utm_source=dispatches.alanbrown.net&utm_medium=newsletter&utm_campaign=digital-economy-dispatch-269-why-strategy-is-not-just-delivery" target="_blank" rel="noopener noreferrer nofollow">GOV.UK</a> remains an important example of what focused delivery can achieve.</p><p class="paragraph" style="text-align:left;">But somewhere along the way, the message mutated. &quot;Strategy is delivery&quot; became &quot;strategy <i>is only</i> delivery.&quot; The implicit claim shifted from &quot;stop hiding behind strategy&quot; to &quot;strategic thinking is unnecessary overhead&quot;. And that&#39;s where we&#39;ve gone badly wrong.</p><h2 class="heading" style="text-align:left;" id="what-strategy-actually-requires"><b>What Strategy Actually Requires</b></h2><p class="paragraph" style="text-align:left;">Real strategy is not a document. 
But neither is it something that emerges automatically from iterative delivery. Strategy requires things that cannot be invented on the fly:</p><ul><li><p class="paragraph" style="text-align:left;"><b>Deep analysis.</b> Understanding your competitive landscape, your capabilities, your constraints, and the forces shaping your environment takes time and rigorous thinking. You cannot sprint your way to insight about deep issues such as geopolitical shifts in technology supply chains, the changing economics of AI infrastructure, or the regulatory pressures that will shape your operating context for the next decade.</p></li><li><p class="paragraph" style="text-align:left;"><b>Negotiation and agreement.</b> Strategy in any organisation of scale is not the vision of a single leader. It emerges from difficult conversations, competing priorities, and hard-won consensus. These negotiations take time. They require building relationships, understanding different perspectives, and finding genuine alignment. They cannot be reduced to superficial agreement in a sprint retrospective.</p></li><li><p class="paragraph" style="text-align:left;"><b>Judgement born of experience.</b> Wisdom comes from knowing which opportunities to pursue and which to decline, when to move fast and when to exercise caution, and what to build and what to buy. These judgements cannot be learned in a two-week iteration. They come from years of accumulated experience, from having seen what works and what fails, from understanding not just the technical possibilities but the human and organisational realities.</p></li><li><p class="paragraph" style="text-align:left;"><b>Governance and accountability.</b> Strategic decisions have long-term consequences. They commit resources, foreclose alternatives, and shape organisational direction for years. 
Such choices require proper governance with clear accountability, appropriate oversight, and mechanisms for course correction. Agile ceremonies are not a substitute for board-level strategic governance.</p></li></ul><h2 class="heading" style="text-align:left;" id="the-ai-amplifier"><b>The AI Amplifier</b></h2><p class="paragraph" style="text-align:left;">This matters <a class="link" href="https://hbr.org/2025/09/make-sure-your-ai-strategy-actually-creates-value?utm_source=dispatches.alanbrown.net&utm_medium=newsletter&utm_campaign=digital-economy-dispatch-269-why-strategy-is-not-just-delivery" target="_blank" rel="noopener noreferrer nofollow">more than ever in the age of AI</a>. The technology decisions organisations make today will shape their capabilities, costs, risks, and competitive position for years to come. Consider just a few of the strategic questions that cannot be answered by delivery alone:</p><p class="paragraph" style="text-align:left;">How much of your AI capability should be built versus bought? What are the long-term implications of dependency on a small number of foundation model providers? How do you balance the productivity benefits of AI against workforce implications? What data governance frameworks need to be in place before you scale? How do you position yourself given the <a class="link" href="https://www.weforum.org/stories/2026/01/why-effective-ai-governance-is-becoming-a-growth-strategy/?utm_source=dispatches.alanbrown.net&utm_medium=newsletter&utm_campaign=digital-economy-dispatch-269-why-strategy-is-not-just-delivery" target="_blank" rel="noopener noreferrer nofollow">uncertainty about AI regulation</a> across different jurisdictions?</p><p class="paragraph" style="text-align:left;">These are not questions you can A/B test your way through. 
They require strategic thinking: analysis, judgement, and governance operating on a different timescale from sprint cycles.</p><p class="paragraph" style="text-align:left;">The irony is that the very organisations that pioneered &quot;strategy is delivery&quot; are now grappling with the consequences. The recent <a class="link" href="https://www.gov.uk/government/publications/state-of-digital-government-review/state-of-digital-government-review?utm_source=dispatches.alanbrown.net&utm_medium=newsletter&utm_campaign=digital-economy-dispatch-269-why-strategy-is-not-just-delivery" target="_blank" rel="noopener noreferrer nofollow">UK government review of digital services</a> found that digital strategies too often set out ambitious visions while failing to put in place the performance targets, funding, tools and systems required to deliver them. The problem wasn&#39;t that too much time was spent on strategy: it was strategy disconnected from the hard work of implementation planning and resource commitment.</p><h2 class="heading" style="text-align:left;" id="finding-the-balance"><b>Finding the Balance</b></h2><p class="paragraph" style="text-align:left;">None of this is an argument for returning to the bad old days of strategy as a substitute for action. The pendulum doesn&#39;t need to swing back to endless planning cycles and analysis paralysis. But neither should we pretend that strategy emerges spontaneously from rapid iteration.</p><p class="paragraph" style="text-align:left;">The most effective organisations I work with have learned to hold both truths simultaneously. They maintain a clear strategic direction, grounded in deep analysis, properly governed, and built on experienced judgement, while executing with agility and responsiveness to what they learn along the way. 
They understand that <a class="link" href="https://onlinelibrary.wiley.com/doi/10.1002/smj.4250060306?utm_source=dispatches.alanbrown.net&utm_medium=newsletter&utm_campaign=digital-economy-dispatch-269-why-strategy-is-not-just-delivery" target="_blank" rel="noopener noreferrer nofollow">Mintzberg was right</a>: realised strategy is always a combination of the deliberate and the emergent. But they also understand that without the deliberate, the emergent is just drift.</p><p class="paragraph" style="text-align:left;">Strategy is not just delivery. Strategy enables delivery. Strategy gives delivery direction, purpose, and coherence. Without a strategy, delivery becomes activity without achievement: motion without progress.</p><p class="paragraph" style="text-align:left;">In a world being reshaped by AI, we need both. We need the courage to act and the wisdom to think. We need sprint velocity and strategic patience. We need teams empowered to deliver and leaders capable of genuine strategic thought.</p><p class="paragraph" style="text-align:left;">The organisations that will thrive are those that resist the false choice. They will be neither paralysed by planning nor lost in tactical busyness. They will deliver with purpose, but guided by a strategy that deserves the name.</p><p class="paragraph" style="text-align:left;"> </p></div><div class='beehiiv__footer'><br class='beehiiv__footer__break'><hr class='beehiiv__footer__line'><a target="_blank" class="beehiiv__footer_link" style="text-align: center;" href="https://www.beehiiv.com/?utm_campaign=6322ea9a-2f4e-42f6-888d-f915d4952da8&utm_medium=post_rss&utm_source=digital_economy_dispatches">Powered by beehiiv</a></div></div>
  ]]></content:encoded>
</item>

      <item>
  <title>Digital Economy Dispatch #268 -- The Reality of AI Adoption</title>
  <description>While boardrooms debate AI strategy and regulators wrestle with governance frameworks, a very different reality is unfolding across organisations. The actual state of AI adoption looks nothing like the official version.</description>
      <enclosure url="https://media.beehiiv.com/cdn-cgi/image/fit=scale-down,format=auto,onerror=redirect,quality=80/uploads/asset/file/3679df17-4265-4db3-80a8-711ac751d7f7/mirror-photo-1.jpg" length="12872" type="image/jpeg"/>
  <link>https://dispatches.alanbrown.net/p/digital-economy-dispatch-268-the-reality-of-ai-adoption</link>
  <guid isPermaLink="true">https://dispatches.alanbrown.net/p/digital-economy-dispatch-268-the-reality-of-ai-adoption</guid>
  <pubDate>Sun, 25 Jan 2026 08:27:17 +0000</pubDate>
  <atom:published>2026-01-25T08:27:17Z</atom:published>
    <dc:creator>Alan Brown</dc:creator>
  <content:encoded><![CDATA[
    <div class='beehiiv'><style>
  .bh__table, .bh__table_header, .bh__table_cell { border: 1px solid #C0C0C0; }
  .bh__table_cell { padding: 5px; background-color: #FFFFFF; }
  .bh__table_cell p { color: #2D2D2D; font-family: 'Helvetica',Arial,sans-serif !important; overflow-wrap: break-word; }
  .bh__table_header { padding: 5px; background-color:#F1F1F1; }
  .bh__table_header p { color: #2A2A2A; font-family:'Trebuchet MS','Lucida Grande',Tahoma,sans-serif !important; overflow-wrap: break-word; }
</style><div class='beehiiv__body'><p class="paragraph" style="text-align:left;">I spend a lot of time talking to senior leaders about AI strategy. We discuss governance frameworks, enterprise architectures, and the importance of responsible deployment. These are valuable conversations. But I fear that they too often bear little resemblance to what&#39;s actually happening in their organisations.</p><p class="paragraph" style="text-align:left;">The reality on the ground is messier, more organic, and far more interesting than the sanitised version that appears in boardroom discussions and strategy documents. When I talk to the people doing the work (managers, analysts, and team leaders) I hear a completely different story. AI adoption isn&#39;t waiting for the strategy to be finalised or the governance framework to be approved. It&#39;s already happening, in three distinct ways that rarely get discussed in the formal reviews.</p><h2 class="heading" style="text-align:left;" id="the-personal-productivity-revolutio"><b>The Personal Productivity Revolution</b></h2><p class="paragraph" style="text-align:left;">The first face of AI adoption is the simplest and most widespread: individuals using AI tools to get their work done faster and better. This is a different level from enterprise systems or approved platforms. It&#39;s individuals opening ChatGPT, Claude, or Gemini in a browser tab and asking for help with the task in front of them.</p><p class="paragraph" style="text-align:left;">They&#39;re drafting emails, summarising documents, preparing for meetings, writing first drafts of reports, and getting unstuck when they hit a problem. The prompts are often basic, nothing sophisticated about them. But the productivity gains are real. A task that might have taken an hour now takes fifteen minutes. 
A blank page that once triggered half a day of wondering what to do next now has a working draft within minutes.</p><p class="paragraph" style="text-align:left;">What’s impressive is how personal this has become. People have developed their own ways of working with AI. They know which tools they prefer, what kinds of prompts work for them, and where AI helps versus where it gets in the way. None of this is learned in a corporate training programme. It’s figured out individually, often in people’s own time, because it makes their working lives easier.</p><p class="paragraph" style="text-align:left;">The scale of this quiet revolution is easy to underestimate. <a class="link" href="https://www.microsoft.com/en-us/worklab/work-trend-index?utm_source=dispatches.alanbrown.net&utm_medium=newsletter&utm_campaign=digital-economy-dispatch-268-the-reality-of-ai-adoption" target="_blank" rel="noopener noreferrer nofollow">According to Microsoft&#39;s Work Trend Index</a>, 75% of knowledge workers now use AI at work, with usage nearly doubling in the past year alone. Perhaps more tellingly, 78% of AI users are bringing their own tools to work, what researchers call &quot;Bring Your Own AI&quot;, often without their employers&#39; explicit knowledge or approval. It&#39;s become as routine as using a search engine, and just as invisible to management.</p><h2 class="heading" style="text-align:left;" id="the-embedded-ai-wave"><b>The Embedded AI Wave</b></h2><p class="paragraph" style="text-align:left;">The second face of AI adoption is less visible but arguably more significant: AI capabilities being quietly added to the tools we already use. We’re seeing Microsoft Copilot in Office applications. AI-powered features in Salesforce, Zoom, and Slack. Smart suggestions in email clients. Automated transcription and summaries in video calls.</p><p class="paragraph" style="text-align:left;">This kind of AI adoption often doesn&#39;t require anyone to make a decision. 
It just appears, enabled by default, as part of a software update. One day, your email starts suggesting replies. Your CRM begins predicting which leads are most likely to convert. Your video conferencing tool offers to summarise the meeting you just finished.</p><p class="paragraph" style="text-align:left;">For organisations, this creates an interesting situation. AI is being adopted at scale without anyone formally adopting it. <a class="link" href="https://www.mckinsey.com/capabilities/quantumblack/our-insights/the-state-of-ai?utm_source=dispatches.alanbrown.net&utm_medium=newsletter&utm_campaign=digital-economy-dispatch-268-the-reality-of-ai-adoption" target="_blank" rel="noopener noreferrer nofollow">The tools people already use are becoming AI-powered</a>, whether or not that was part of the plan. The enterprise software vendors have made the decision for you—AI is now part of the package.</p><p class="paragraph" style="text-align:left;">This embedded AI wave raises questions that many organisations haven&#39;t thought through. Where is the data going when Copilot summarises your document? What happens to the meeting transcript that got automatically generated? Who&#39;s responsible for checking whether the AI-suggested response to a customer query is actually correct? These aren&#39;t hypothetical concerns. They&#39;re happening now, in millions of interactions every day, mainly without oversight.</p><h2 class="heading" style="text-align:left;" id="the-shadow-ai-reality"><b>The Shadow AI Reality</b></h2><p class="paragraph" style="text-align:left;">The third face of AI adoption is the one that keeps cybersecurity and compliance officers awake at night: shadow AI. 
Uncontrolled, unmanaged experiments are <a class="link" href="https://www.forbes.com/councils/forbestechcouncil/2025/10/24/shadow-ai-in-2025-five-insights-for-security-teams/?utm_source=dispatches.alanbrown.net&utm_medium=newsletter&utm_campaign=digital-economy-dispatch-268-the-reality-of-ai-adoption" target="_blank" rel="noopener noreferrer nofollow">happening in almost every corner of the organisation</a>, underneath the surface, often invisible to senior leadership.</p><p class="paragraph" style="text-align:left;">Teams are signing up for AI tools using personal email addresses or departmental credit cards. They&#39;re uploading company data to free-tier services to see what insights emerge. They&#39;re building workflows with AI components that nobody in IT knows about. Marketing is trying one set of tools, finance another, operations a third—none of them coordinated, few of them approved.</p><p class="paragraph" style="text-align:left;">This isn&#39;t malicious. People aren&#39;t trying to circumvent controls for the sake of it. They&#39;re trying to do their jobs better, and they&#39;ve found tools that help. The official channels are too slow, too restrictive, or simply non-existent. So they improvise. The gap between what employees need and what IT has approved creates the perfect conditions for shadow AI to flourish.</p><p class="paragraph" style="text-align:left;">The scale of this shadow activity is difficult to measure precisely because, by definition, it&#39;s happening outside official view. But the numbers we do have are striking. 
<a class="link" href="https://www.mckinsey.com/capabilities/quantumblack/our-insights/the-state-of-ai?utm_source=dispatches.alanbrown.net&utm_medium=newsletter&utm_campaign=digital-economy-dispatch-268-the-reality-of-ai-adoption" target="_blank" rel="noopener noreferrer nofollow">McKinsey&#39;s 2025 State of AI survey</a> found that while 88% of organisations now use AI in at least one business function, nearly two-thirds remain stuck in experimentation or pilot mode. Much of what&#39;s happening isn&#39;t coordinated but scattered across the organisation in unconnected pockets. Every organisation I talk to, when they look honestly, finds more unsanctioned AI use than they expected. Often much more.</p><h2 class="heading" style="text-align:left;" id="the-gap-between-narrative-and-reali"><b>The Gap Between Narrative and Reality</b></h2><p class="paragraph" style="text-align:left;">We need to be open about the current state of AI discussions. Senior management, auditors, and regulators are working from a mental model that assumes AI adoption is something that happens through formal channels via procurement processes, governance reviews, approved vendor lists, and controlled rollouts. That model made sense for previous generations of enterprise technology.</p><p class="paragraph" style="text-align:left;">But AI isn&#39;t following that playbook. It&#39;s coming in through the front door, the back door, and every window simultaneously. By the time the governance framework is ready, thousands of AI interactions have already happened. And by the time the risk assessment is complete, the tools have changed, and new ones have appeared.</p><p class="paragraph" style="text-align:left;">Of course, all this isn&#39;t an argument against governance. Good governance matters more than ever when a technology is this powerful and this easy to misuse. But governance that ignores how AI is actually being adopted will always be playing catch-up. 
You can&#39;t manage what you refuse to see.</p><h2 class="heading" style="text-align:left;" id="what-this-means-for-leaders"><b>What This Means for Leaders</b></h2><p class="paragraph" style="text-align:left;">If you&#39;re responsible for AI in your organisation, whether formally or informally, the first step is acknowledging this reality. Find out what&#39;s actually happening. Talk to people at every level about the AI tools they&#39;re using. You&#39;ll probably be surprised by what you learn.</p><p class="paragraph" style="text-align:left;">The second step is meeting people where they are. The personal productivity uses and the shadow experiments aren&#39;t problems to be stamped out. They&#39;re signals about what your people need. The question isn&#39;t how to stop them but how to channel that energy into approaches that are sustainable and safe.</p><p class="paragraph" style="text-align:left;">The third step is getting serious about the embedded AI that&#39;s arriving through your existing software. This needs attention now, not when you get around to it. Your enterprise tools are becoming AI-powered, whether you planned for it or not. Understanding what that means for data, privacy, and accuracy is urgent.</p><p class="paragraph" style="text-align:left;">Finally, accept that the old model of technology adoption, where IT approves everything before it enters the organisation, isn&#39;t coming back. AI is too accessible, too useful, and too fast-moving. The task now is building governance that works with this reality rather than pretending it doesn&#39;t exist.</p><p class="paragraph" style="text-align:left;">The official version of AI adoption is of a strategic, controlled approach proceeding according to an approved plan. But it&#39;s not what&#39;s happening. The real story is messier, faster, and already well underway. Leaders who understand this will be better placed to guide it. 
Those who don&#39;t will find themselves governing a fiction while the reality moves on without them.</p><p class="paragraph" style="text-align:left;"> </p></div><div class='beehiiv__footer'><br class='beehiiv__footer__break'><hr class='beehiiv__footer__line'><a target="_blank" class="beehiiv__footer_link" style="text-align: center;" href="https://www.beehiiv.com/?utm_campaign=81a21fa6-1d9c-42b7-a94b-6d893520bc9a&utm_medium=post_rss&utm_source=digital_economy_dispatches">Powered by beehiiv</a></div></div>
  ]]></content:encoded>
</item>

      <item>
  <title>Digital Economy Dispatch #267 -- AI and the Reinvigoration of the Creative Economy</title>
  <description>While doom and gloom dominate AI headlines, something remarkable is happening. For innovators, entrepreneurs, and communicators, AI has become a catalyst for creativity and possibility like nothing we&#39;ve seen before.</description>
      <enclosure url="https://media.beehiiv.com/cdn-cgi/image/fit=scale-down,format=auto,onerror=redirect,quality=80/uploads/asset/file/b0c649d0-32f0-49b6-bb5f-56c4da1fa22f/creative-photo-1.jpg" length="54377" type="image/jpeg"/>
  <link>https://dispatches.alanbrown.net/p/digital-economy-dispatch-267-ai-and-the-reinvigoration-of-the-creative-economy</link>
  <guid isPermaLink="true">https://dispatches.alanbrown.net/p/digital-economy-dispatch-267-ai-and-the-reinvigoration-of-the-creative-economy</guid>
  <pubDate>Sun, 18 Jan 2026 08:25:06 +0000</pubDate>
  <atom:published>2026-01-18T08:25:06Z</atom:published>
    <dc:creator>Alan Brown</dc:creator>
  <content:encoded><![CDATA[
    <div class='beehiiv'><style>
  .bh__table, .bh__table_header, .bh__table_cell { border: 1px solid #C0C0C0; }
  .bh__table_cell { padding: 5px; background-color: #FFFFFF; }
  .bh__table_cell p { color: #2D2D2D; font-family: 'Helvetica',Arial,sans-serif !important; overflow-wrap: break-word; }
  .bh__table_header { padding: 5px; background-color:#F1F1F1; }
  .bh__table_header p { color: #2A2A2A; font-family:'Trebuchet MS','Lucida Grande',Tahoma,sans-serif !important; overflow-wrap: break-word; }
</style><div class='beehiiv__body'><p class="paragraph" style="text-align:left;">I&#39;ve been having a lot of conversations about AI lately. That&#39;s hardly surprising given my work. But what has struck me recently isn&#39;t the familiar litany of concerns about job losses, AI slop, and economic disruption. It&#39;s something quite different. It&#39;s the spark in people&#39;s eyes when they talk about what they&#39;re now able to do.</p><p class="paragraph" style="text-align:left;">Last week, I met an entrepreneur who had been sitting on a business idea for three years. He&#39;d always been held back by the cost and complexity of getting from concept to something tangible. Now, in a matter of weeks, he had a working prototype, a refined business model, and was in discussion with his first paying customer. The way he described his journey wasn&#39;t cautious or reserved. This wasn&#39;t someone fearful of AI. This was someone who had found an accelerator for his ambitions.</p><p class="paragraph" style="text-align:left;">I&#39;m seeing this pattern repeatedly. Yes, there&#39;s plenty of legitimate concern about AI&#39;s darker implications. The race to the bottom in content quality, the uncertainty facing many workers, and the genuine disruption to industries and livelihoods. These deserve serious attention, and <a class="link" href="https://dispactches.alanbrown.net?utm_source=dispatches.alanbrown.net&utm_medium=newsletter&utm_campaign=digital-economy-dispatch-267-ai-and-the-reinvigoration-of-the-creative-economy" target="_blank" rel="noopener noreferrer nofollow">I&#39;ve written about them extensively</a>. But I wonder if we&#39;re so focused on what AI might take away that we&#39;re missing what it&#39;s opening up.</p><h2 class="heading" style="text-align:left;" id="innovators-unleashed"><b>Innovators Unleashed</b></h2><p class="paragraph" style="text-align:left;">Consider what&#39;s happening to innovators across every field. 
The traditional path from idea to experiment has always been constrained by time, resources, and technical barriers. You might have ten ideas worth exploring, but you&#39;d be lucky to test two or three, given the practical limitations you faced. The rest would languish in notebooks and whiteboards, forever theoretical.</p><p class="paragraph" style="text-align:left;">AI has fundamentally changed this equation. I&#39;m talking to product designers who now prototype concepts in hours rather than weeks. Researchers who can explore dozens of hypotheses, where before they could only afford to pursue a handful. Architects and engineers who iterate through design variations at speeds that seemed impossible just a few years ago.</p><p class="paragraph" style="text-align:left;">This isn&#39;t about AI replacing human creativity. It&#39;s about removing <a class="link" href="https://cognitiveworld.com/articles/2025/3/02/the-impact-of-ai-on-research-and-innovation?utm_source=dispatches.alanbrown.net&utm_medium=newsletter&utm_campaign=digital-economy-dispatch-267-ai-and-the-reinvigoration-of-the-creative-economy" target="_blank" rel="noopener noreferrer nofollow">the friction that prevents creative people from expressing the full range of their ideas</a>. When the cost of experimentation drops dramatically, you experiment more. And when you experiment more, you learn faster and discover things you never would have found through more cautious approaches.</p><p class="paragraph" style="text-align:left;">The innovators I meet aren&#39;t threatened by AI. <a class="link" href="https://www.captechu.edu/blog/how-generative-ai-is-transforming-creativity?utm_source=dispatches.alanbrown.net&utm_medium=newsletter&utm_campaign=digital-economy-dispatch-267-ai-and-the-reinvigoration-of-the-creative-economy" target="_blank" rel="noopener noreferrer nofollow">They&#39;re liberated by it</a>. 
They describe a feeling of finally being able to work at the pace their minds operate.</p><h2 class="heading" style="text-align:left;" id="entrepreneurs-finding-their-footing"><b>Entrepreneurs Finding Their Footing</b></h2><p class="paragraph" style="text-align:left;">The entrepreneurial landscape has shifted just as dramatically. The journey from idea to minimum viable product to first customer has always been arduous. It demanded skills across multiple domains: market research, financial modelling, legal structures, marketing, and product development. Either you acquired these skills yourself, hired people who had them, or paid consultants for guidance. Each step cost time and money that most aspiring entrepreneurs simply didn&#39;t have.</p><p class="paragraph" style="text-align:left;">I&#39;ve watched this barrier fall over the past two years. Entrepreneurs who would have been stopped cold by the complexity of business formation are now working through the details systematically, using AI as an always-available thinking partner and critical friend. They&#39;re testing value propositions, refining pricing models, and developing go-to-market strategies with a sophistication that previously required expensive advisors or hard-won experience.</p><p class="paragraph" style="text-align:left;">A good friend who is a business founder recently put it simply: &quot;I can now have the strategy conversations I need to have, when I need to have them.&quot; She wasn&#39;t referring to conversations with customers or investors. She meant the internal dialogues that shape a business -- the back-and-forth of testing assumptions, challenging logic, and refining thinking. AI has become the patient collaborator who&#39;s available at midnight when inspiration strikes.</p><p class="paragraph" style="text-align:left;">The cost and time required to move from idea to traction have compressed significantly. 
This means more people can try, and more ideas that deserve a chance in the market are actually getting one.</p><h2 class="heading" style="text-align:left;" id="communicators-raising-their-game"><b>Communicators Raising Their Game</b></h2><p class="paragraph" style="text-align:left;">Perhaps nowhere is AI&#39;s reinvigorating effect more visible than among those whose work centres on communication. Educators, writers, analysts, and knowledge workers of every variety are finding that AI has transformed their capacity to create, refine, and deliver.</p><p class="paragraph" style="text-align:left;">I include myself in this category. After decades of writing, speaking, and teaching, I can say without hesitation that AI has changed how I work. Not by doing my thinking for me, but by making it easier to express that thinking clearly, to catch errors I would have missed, to synthesise sources more comprehensively, and to tailor materials for different audiences more effectively.</p><p class="paragraph" style="text-align:left;">The educators I meet describe similar experiences. They&#39;re creating more engaging materials, providing more personalised feedback, and reaching students in ways that their workloads previously made impossible. The analysts I work with are pulling together insights from broader sources, with greater accuracy, than their research budgets ever allowed before. Writers are producing higher-quality first drafts and spending more of their time on the creative decisions that matter most.</p><p class="paragraph" style="text-align:left;">None of this means the work has become easy. It means the barriers that prevented good work have become lower. The communicators who embrace these tools aren&#39;t cutting corners. 
They&#39;re raising their standards because higher standards have become achievable.</p><h2 class="heading" style="text-align:left;" id="the-doors-that-are-opening"><b>The Doors That Are Opening</b></h2><p class="paragraph" style="text-align:left;">When I step back from these individual conversations, I see a broader pattern. <a class="link" href="https://www.pwc.com/us/en/tech-effect/ai-analytics/ai-predictions.html?utm_source=dispatches.alanbrown.net&utm_medium=newsletter&utm_campaign=digital-economy-dispatch-267-ai-and-the-reinvigoration-of-the-creative-economy" target="_blank" rel="noopener noreferrer nofollow">AI is opening doors</a> that many people thought were permanently closed to them. The person without a technical background who can now build working software. The expert with valuable insights who can finally share them through polished writing. The small team that can compete with much larger organisations because AI has given it capabilities that previously required significant headcount.</p><p class="paragraph" style="text-align:left;">This isn&#39;t about AI being a panacea or ignoring its serious challenges. The concerns about job displacement are real. The degradation of online content through AI slop is genuine. The concentration of AI power in a handful of technology giants deserves ongoing scrutiny. I&#39;ll continue to write about these issues.</p><p class="paragraph" style="text-align:left;">But I also think we need to acknowledge what&#39;s happening on the other side of the ledger. For every person worried about AI, I&#39;m meeting someone who has found in it a tool for growth, creativity, and achievement they never expected. The gloom and doom narrative, however justified in parts, is not the whole story.</p><p class="paragraph" style="text-align:left;">The spark in their eyes tells me something important. For many people, AI hasn&#39;t been a source of anxiety.
It&#39;s been an invitation to try things they&#39;d given up on, to pursue ambitions they&#39;d shelved, and to work in ways that align better with how their minds function. That&#39;s worth acknowledging, even as we remain clear-eyed about the challenges ahead.</p></div><div class='beehiiv__footer'><br class='beehiiv__footer__break'><hr class='beehiiv__footer__line'><a target="_blank" class="beehiiv__footer_link" style="text-align: center;" href="https://www.beehiiv.com/?utm_campaign=ae54c227-e3bf-444b-8038-80ef80036d7c&utm_medium=post_rss&utm_source=digital_economy_dispatches">Powered by beehiiv</a></div></div>
  ]]></content:encoded>
</item>

      <item>
  <title>Digital Economy Dispatch #266 -- Moving From AI Pilots to AI Patterns and Playbooks</title>
  <description>AI is currently in its own Wild West phase. The leaders who win won&#39;t be the ones with the biggest GPUs; they will be the ones who successfully define, apply, and align around the frameworks, patterns, and playbooks for a resilient AI era.</description>
      <enclosure url="https://media.beehiiv.com/cdn-cgi/image/fit=scale-down,format=auto,onerror=redirect,quality=80/uploads/asset/file/4374d8e0-5121-4944-a170-b68267f75ee9/robot-patterns-photo-1.jpg" length="74478" type="image/jpeg"/>
  <link>https://dispatches.alanbrown.net/p/digital-economy-dispatch-266-moving-from-ai-pilots-to-ai-patterns-and-playbooks</link>
  <guid isPermaLink="true">https://dispatches.alanbrown.net/p/digital-economy-dispatch-266-moving-from-ai-pilots-to-ai-patterns-and-playbooks</guid>
  <pubDate>Sun, 11 Jan 2026 08:25:07 +0000</pubDate>
  <atom:published>2026-01-11T08:25:07Z</atom:published>
    <dc:creator>Alan Brown</dc:creator>
  <content:encoded><![CDATA[
    <div class='beehiiv'><style>
  .bh__table, .bh__table_header, .bh__table_cell { border: 1px solid #C0C0C0; }
  .bh__table_cell { padding: 5px; background-color: #FFFFFF; }
  .bh__table_cell p { color: #2D2D2D; font-family: 'Helvetica',Arial,sans-serif !important; overflow-wrap: break-word; }
  .bh__table_header { padding: 5px; background-color:#F1F1F1; }
  .bh__table_header p { color: #2A2A2A; font-family:'Trebuchet MS','Lucida Grande',Tahoma,sans-serif !important; overflow-wrap: break-word; }
</style><div class='beehiiv__body'><p class="paragraph" style="text-align:left;">The era of the AI &quot;proof of concept&quot; is maturing into something much more demanding. Over the past year, the novelty of experimenting with GenAI has given way to a much more rigorous set of requirements. In my conversations with digital leaders, the focus has shifted from the excitement of the initial discovery to the hard reality of engineering: How do we build AI solutions that are truly robust, repeatable, and secure?</p><p class="paragraph" style="text-align:left;">Moving a clever experiment into the core of an enterprise is a daunting leap. It is no longer enough for a system to be &quot;impressive&quot;; it must be resilient. We are entering a phase where &quot;good enough&quot; results are being replaced by the need for enterprise-grade reliability, where data security is non-negotiable, and where the ability to replicate success across different business units is the primary measure of value.</p><p class="paragraph" style="text-align:left;">For leaders defining the path forward, there is no simple checklist. We face a series of dilemmas that must be addressed within specific, often messy, organizational contexts. We are balancing the pressure to deliver immediate competitive advantage against the long-term necessity of building a foundation that won&#39;t crumble under the weight of regulation or technical debt. What is the best way forward? To find the answer, I believe we need to look back at how the software industry solved a remarkably similar problem a generation ago.</p><h2 class="heading" style="text-align:left;" id="the-lessons-of-the-monolith">The Lessons of the Monolith</h2><p class="paragraph" style="text-align:left;">If you look back more than two decades, the software industry faced a similar existential crisis. 
We were attempting to transition from large, brittle, monolithic systems with multi-year development cycles to a world of agile delivery and distributed, cloud-based services. In those early days of the &quot;internet-scale&quot; transition, chaos reigned. Developers were reinventing the wheel with every new project. Failures were common, not because the technology didn&#39;t work, but because we hadn&#39;t yet figured out the architecture of the new world. We had the tools, but we lacked the discipline of repeatability.</p><p class="paragraph" style="text-align:left;">The breakthrough didn&#39;t come from a single piece of technology; it came from the codification of experience. We moved from &quot;guessing&quot; to &quot;pattern matching&quot;. To illustrate this move, consider what for many of us was an iconic moment of this era: the publication of “<a class="link" href="https://www.amazon.co.uk/Design-Patterns-Elements-Reusable-Object-Oriented/dp/0201633612?utm_source=dispatches.alanbrown.net&utm_medium=newsletter&utm_campaign=digital-economy-dispatch-266-moving-from-ai-pilots-to-ai-patterns-and-playbooks" target="_blank" rel="noopener noreferrer nofollow">Design Patterns: Elements of Reusable Object-Oriented Software</a>” by Erich Gamma, Richard Helm, Ralph Johnson, and John Vlissides - always affectionately referred to as the <b>&quot;Gang of Four Patterns book&quot;</b>.</p><p class="paragraph" style="text-align:left;">That publication didn&#39;t invent new algorithms. Instead, it captured the shared wisdom emerging from real experiences of successful system delivery and distilled it into &quot;patterns&quot; - reusable solutions to common problems within a given context. Crucially, these were <b>design patterns</b>. 
They occupied a vital &quot;middle ground&quot; in the engineering hierarchy: they were descriptions of solution approaches that were neither so abstract and ephemeral as to be useless, nor so detailed and context-specific as to be rigid and non-transferable. This positioning increased their value immensely by bridging the gap between broad strategic concepts and the grind of detailed implementation.</p><p class="paragraph" style="text-align:left;">This work galvanized a whole generation of business analysts, solution architects, engineers, and delivery specialists. It gave us a shared language to define, assess, and debate solution alternatives. Most importantly, it provided a set of blueprints for producing robust, resilient solutions. If one of the team said, &quot;We should use a &#39;Factory&#39; or a &#39;Singleton&#39; here,&quot; everyone knew exactly what that implied regarding the scale and stability of what was being proposed.</p><h2 class="heading" style="text-align:left;" id="the-need-for-ai-strategy-patterns">The Need for AI Strategy Patterns</h2><p class="paragraph" style="text-align:left;">I believe we are currently waiting for our &quot;Gang of Four Pattern Book&quot; moment in AI. To move toward effective, resilient enterprise deployment, we need more than just better models. We need a playbook of <b>AI Patterns and Anti-patterns</b>. We need a shared understanding of what &quot;secure and robust&quot; looks like in different organizational contexts, backed by exemplars that offer realistic illustrations of what this means in practice.</p><p class="paragraph" style="text-align:left;">The emergence of these patterns is a vital step in establishing the maturity of how we understand emerging technology.
This mirrors the earlier evolution of “<a class="link" href="https://www.patternlanguage.com/?utm_source=dispatches.alanbrown.net&utm_medium=newsletter&utm_campaign=digital-economy-dispatch-266-moving-from-ai-pilots-to-ai-patterns-and-playbooks" target="_blank" rel="noopener noreferrer nofollow">A Pattern Language</a>”, the concept pioneered by Christopher Alexander, which suggests that patterns can describe best practices in a form that is both generative and repeatable.</p><p class="paragraph" style="text-align:left;">With GenAI tools, it is tempting to focus solely on their ability to generate a unique, &quot;magic&quot; solution every time they are used. However, viewing this as a primary strength in an enterprise context is a mistake. True maturity comes when answers share common elements and exhibit recognizable characteristics. Maturity is found in consistency, not just novelty. Identifying recurring patterns is the only way to move from the unpredictability of experiments to the reliability of AI engineering.</p><p class="paragraph" style="text-align:left;">What would these patterns look like? They wouldn&#39;t just be code; they would be the strategic and architectural blueprints that ensure a solution is repeatable.
For example:</p><ul><li><p class="paragraph" style="text-align:left;"><b>The &quot;Human-in-the-Loop&quot; Pattern:</b> For high-stakes decision-making, architecting the interface so AI augments rather than replaces judgment.</p></li><li><p class="paragraph" style="text-align:left;"><b>The &quot;Data Flywheel&quot; Pattern:</b> Structuring feedback loops where user interactions safely and ethically improve the model without compromising privacy.</p></li><li><p class="paragraph" style="text-align:left;"><b>The &quot;Orchestrator&quot; Pattern:</b> Moving away from a single &quot;<a class="link" href="https://docs.pinai.io/personal-ai-protocol/God-Models?utm_source=dispatches.alanbrown.net&utm_medium=newsletter&utm_campaign=digital-economy-dispatch-266-moving-from-ai-pilots-to-ai-patterns-and-playbooks" target="_blank" rel="noopener noreferrer nofollow">God-Model</a>&quot; to a series of smaller, specialized agents managed by a central, secure controller.</p></li></ul><p class="paragraph" style="text-align:left;">Equally important are the <b>Anti-patterns</b> describing the traps that compromise security and stability:</p><ul><li><p class="paragraph" style="text-align:left;"><b>The &quot;Magic Wand&quot; Anti-pattern:</b> Assuming that throwing an LLM at a broken business process will fix the underlying process.</p></li><li><p class="paragraph" style="text-align:left;"><b>The &quot;Shadow AI&quot; Anti-pattern:</b> Allowing fragmented, unmanaged AI implementations to proliferate, creating a nightmare for security and data governance.</p></li></ul><h2 class="heading" style="text-align:left;" id="early-signals-the-rise-of-the-ai-pl">Early Signals: The Rise of the AI Playbook</h2><p class="paragraph" style="text-align:left;">It’s possible that we’re already seeing the first wave of this new set of blueprints. 
Organizations that operate at high stakes and massive scale are leading the way because they <i>have</i> to prioritize security and resilience.</p><p class="paragraph" style="text-align:left;">Look at the <a class="link" href="https://www.gov.uk/government/publications/defence-artificial-intelligence-strategy/defence-artificial-intelligence-strategy?utm_source=dispatches.alanbrown.net&utm_medium=newsletter&utm_campaign=digital-economy-dispatch-266-moving-from-ai-pilots-to-ai-patterns-and-playbooks" target="_blank" rel="noopener noreferrer nofollow">UK Ministry of Defence (MoD) AI Strategy</a> and its associated <a class="link" href="https://www.gov.uk/government/publications/defence-artificial-intelligence-ai-playbook?utm_source=dispatches.alanbrown.net&utm_medium=newsletter&utm_campaign=digital-economy-dispatch-266-moving-from-ai-pilots-to-ai-patterns-and-playbooks" target="_blank" rel="noopener noreferrer nofollow">playbooks</a>. They are dealing with a context where &quot;hallucinations&quot; aren&#39;t just a minor inconvenience but a matter of national security. Their approach focuses on &quot;AI-Ready Infrastructure&quot; and &quot;Ethical Gateways&quot;. It’s a blueprint for deploying AI in environments where trust is the primary currency. Similarly, the <a class="link" href="https://www.gov.uk/government/publications/ai-playbook-for-the-uk-government?utm_source=dispatches.alanbrown.net&utm_medium=newsletter&utm_campaign=digital-economy-dispatch-266-moving-from-ai-pilots-to-ai-patterns-and-playbooks" target="_blank" rel="noopener noreferrer nofollow">UK’s AI Playbook for Government</a> provides a framework for public sector leaders to navigate the tension between innovation and public accountability.</p><p class="paragraph" style="text-align:left;">These documents are significant because they move the conversation from &quot;what is AI?&quot; to &quot;how do we govern and secure AI?&quot;. I see these as &quot;Gang of Four Pattern Book&quot; precursors. 
They provide the templates that allow leaders to stop starting from scratch and start building for the long term. However, they are not yet sufficiently well formed and consistent to provide a meaningful pattern language that can be shared across teams and communities. Surely that’s what must come next.</p><h2 class="heading" style="text-align:left;" id="building-your-own-ai-patterns-and-p"><b>Building Your Own AI Patterns and Playbook</b></h2><p class="paragraph" style="text-align:left;">If you are a senior leader navigating this space, you cannot wait for the definitive textbook to be written. AI technology is moving too fast to wait until the dust settles and these new solution blueprints emerge. Instead, you must become a practitioner of pattern-spotting within your own organization.</p><p class="paragraph" style="text-align:left;">To move toward enterprise-grade AI, I suggest three immediate steps:</p><ol start="1"><li><p class="paragraph" style="text-align:left;"><b>Audit for Repeatability:</b> Look at your current pilots. Could another team, group, or department take what you&#39;ve built and run with it tomorrow? If not, you haven&#39;t built a manageable solution; you&#39;ve built a one-off.</p></li><li><p class="paragraph" style="text-align:left;"><b>Define your &quot;Hard Rails&quot;:</b> What are the non-negotiable security and robustness standards for your industry? How are these being adopted and adapted in your organization? Document these as the guiderails for solution delivery and a key part of your internal AI pattern library.</p></li><li><p class="paragraph" style="text-align:left;"><b>Adopt a Common Language:</b> Start using the terminology of blueprints, playbooks, and patterns.
Seek consistency, repeatability, and reliability as the core of your work and move the discussion from &quot;features&quot; to &quot;architectural integrity&quot;.</p></li></ol><p class="paragraph" style="text-align:left;">The transition from the &quot;Wild West&quot; of internet-era software to the structured world of cloud services allowed the digital economy to flourish. AI is currently in its own Wild West phase. The leaders who win won&#39;t be the ones with the biggest GPUs; they will be the ones who successfully define, apply, and align around the frameworks, patterns, and playbooks for a resilient AI era.</p><p class="paragraph" style="text-align:left;"> </p></div><div class='beehiiv__footer'><br class='beehiiv__footer__break'><hr class='beehiiv__footer__line'><a target="_blank" class="beehiiv__footer_link" style="text-align: center;" href="https://www.beehiiv.com/?utm_campaign=4ec483ac-623b-434e-93e2-4510caa99cda&utm_medium=post_rss&utm_source=digital_economy_dispatches">Powered by beehiiv</a></div></div>
  ]]></content:encoded>
</item>

      <item>
  <title>Digital Economy Dispatch #265 -- What Do We Want From AI?</title>
  <description>The latest survey from the Ada Lovelace Institute reveals that while the UK public sees potential in AI for areas like healthcare, many fear its current harms and lack of accountability.</description>
      <enclosure url="https://media.beehiiv.com/cdn-cgi/image/fit=scale-down,format=auto,onerror=redirect,quality=80/uploads/asset/file/7c49ad9f-9a0a-4ec9-9612-e83aca5936f4/compass-photo-2.jpg" length="113555" type="image/jpeg"/>
  <link>https://dispatches.alanbrown.net/p/digital-economy-dispatch-265-what-do-we-want-from-ai</link>
  <guid isPermaLink="true">https://dispatches.alanbrown.net/p/digital-economy-dispatch-265-what-do-we-want-from-ai</guid>
  <pubDate>Sun, 04 Jan 2026 08:25:07 +0000</pubDate>
  <atom:published>2026-01-04T08:25:07Z</atom:published>
    <dc:creator>Alan Brown</dc:creator>
  <content:encoded><![CDATA[
    <div class='beehiiv'><style>
  .bh__table, .bh__table_header, .bh__table_cell { border: 1px solid #C0C0C0; }
  .bh__table_cell { padding: 5px; background-color: #FFFFFF; }
  .bh__table_cell p { color: #2D2D2D; font-family: 'Helvetica',Arial,sans-serif !important; overflow-wrap: break-word; }
  .bh__table_header { padding: 5px; background-color:#F1F1F1; }
  .bh__table_header p { color: #2A2A2A; font-family:'Trebuchet MS','Lucida Grande',Tahoma,sans-serif !important; overflow-wrap: break-word; }
</style><div class='beehiiv__body'><p class="paragraph" style="text-align:left;">The UK enters 2026 with a great deal of hope and expectation that it can progress rapidly on its ambitious digital transformation journey with AI – a technology expected to unlock growth, transform public services and secure the country’s place in the global digital race. Yet <a class="link" href="https://www.adalovelaceinstitute.org/policy-briefing/great-expectations/?utm_source=dispatches.alanbrown.net&utm_medium=newsletter&utm_campaign=digital-economy-dispatch-265-what-do-we-want-from-ai" target="_blank" rel="noopener noreferrer nofollow">recent research from the Ada Lovelace Institute</a> suggests that, away from ministerial speeches and industry showcases, public expectations are both more grounded and more demanding: people want AI that works in their interests, is properly governed, and comes with real protections when things go wrong.</p><p class="paragraph" style="text-align:left;">UK citizens are therefore walking into 2026 with clear, increasingly confident views about AI. But they are not the views UK policymakers seem most keen to hear. People see real benefits, especially in health and some public services, but they are also reporting widespread harms, deep concern about opaque decision-making, and a loud call for stricter rules and real accountability.</p><h2 class="heading" style="text-align:left;" id="what-do-we-actually-use-ai-for"><b>What do we actually use AI for?</b></h2><p class="paragraph" style="text-align:left;">The <a class="link" href="https://attitudestoai.uk/?utm_source=dispatches.alanbrown.net&utm_medium=newsletter&utm_campaign=digital-economy-dispatch-265-what-do-we-want-from-ai" target="_blank" rel="noopener noreferrer nofollow">Ada Lovelace Institute’s latest survey</a> of 3,500 UK residents shows that AI is no longer an abstract future technology; it is woven into everyday digital habits.
Awareness and use of general-purpose large language models (LLMs) such as ChatGPT have grown at a remarkable speed: 61% of people have heard of LLMs, and 40% say they have used them for at least one task. A third of the public already uses these tools to search for answers and recommendations, and around one in five uses them for education or for routine tasks, such as drafting emails.</p><p class="paragraph" style="text-align:left;">At the same time, adoption is highly context dependent. Only 11% have used LLMs to support job applications, while nearly four in ten say they would not want to use them for that purpose at all – a strong signal that people draw a line when AI is involved in high-stakes personal decisions. Those with lower incomes or fewer digital skills are more likely to be closed off to using LLMs across the board, underlining a growing divide between those who can make use of AI for their own purposes and those who see it as something being done to them.</p><h2 class="heading" style="text-align:left;" id="where-do-people-see-benefits-and-ri"><b>Where do people see benefits – and risks?</b></h2><p class="paragraph" style="text-align:left;">When the survey moves away from “AI in general” and looks at specific applications, the picture becomes much more nuanced – and more useful. People see clear upside in some uses: 86% think using AI to assess cancer risk from scans will be beneficial, and 91% see benefit in facial recognition for policing, at least in principle. Around two-thirds think LLMs overall are beneficial, suggesting that for many, the promise of speed and efficiency still outweighs the downsides.</p><p class="paragraph" style="text-align:left;">Yet concern is never far away. Three-quarters of respondents are concerned about driverless cars, 63% about mental health chatbots, and 59% about the use of AI to assess eligibility for welfare benefits.
Even in the “high-benefit” cases, anxiety is substantial: 39% are concerned about facial recognition in policing, and 64% worry that AI-driven cancer diagnostics could lead to over-reliance on technology at the expense of professional judgement. People can clearly hold two ideas at once: AI might make things faster and more accurate, but it also introduces new routes for error, unfairness and unaccountable decisions.</p><h2 class="heading" style="text-align:left;" id="who-feels-the-downsides-most"><b>Who feels the downsides most?</b></h2><p class="paragraph" style="text-align:left;">The survey is particularly valuable because it does not treat “the public” as a single, homogenous bloc. It deliberately oversamples people on lower incomes, those with fewer digital skills, and Black and Asian communities, so we get a clearer view of how different groups experience AI.</p><p class="paragraph" style="text-align:left;">Several patterns stand out. Black and Asian respondents are more positive than average about some emerging tools such as LLMs and robotics, but significantly more concerned about facial recognition in policing: over half of Black (57%) and Asian (52%) respondents are fairly or very concerned, compared with 39% of the general population, and they are particularly anxious about false accusations. People on lower incomes consistently report lower “net benefit” scores across most AI technologies – their concerns outweigh perceived benefits more often than for higher-income groups, even after controlling for other factors. This should worry anyone deploying AI in welfare, credit scoring or public services, because it is precisely these groups who are most exposed to automated decisions.</p><h2 class="heading" style="text-align:left;" id="ai-harms-are-already-a-lived-experi"><b>AI harms are already a lived experience</b></h2><p class="paragraph" style="text-align:left;">These concerns are not hypothetical; they are rooted in lived experience. 
Two-thirds of the UK public say they have encountered at least one form of possible AI-generated harm online a few times, and nearly four in ten report encountering such harms many times. The most common are false or misleading information (experienced by 61%), financial fraud or scams (58%), and deepfake images or videos (58%).</p><p class="paragraph" style="text-align:left;">Unsurprisingly, anxiety about the spread of harmful AI-generated content is almost universal: 94% of respondents say they are very or somewhat concerned. Younger adults report particularly high exposure to deepfakes and misinformation, while older groups report more frequent encounters with financial frauds – a reminder that “online harm” looks different depending on where you stand in the digital ecosystem. Despite this, awareness that AI sits behind many of these experiences remains patchy; around one in five people are unsure whether the harms they experienced were AI-generated or not.</p><h2 class="heading" style="text-align:left;" id="what-do-people-expect-from-governme"><b>What do people expect from government and governance?</b></h2><p class="paragraph" style="text-align:left;">If there is one message that comes through clearly, it is that the public wants more active, visible governance of AI. In this 2024–25 wave, 72% say that laws and regulations would make them more comfortable with AI technologies, up ten percentage points from the previous survey. People support a multi-stakeholder model for AI safety: 58% think an independent regulator should be responsible for ensuring AI is used safely, and 58% also expect responsibility from the companies developing AI.</p><p class="paragraph" style="text-align:left;">Crucially, they want regulators and government to have real powers, not just guidance documents. 
87% think it is important that government or regulators – not just private companies – can stop the use of an AI product if it poses a risk of serious harm, and similarly high numbers want active monitoring of risks, robust safety standards, and access to information about system safety. At the same time, 83% are concerned about public bodies sharing their data with private companies to train AI systems, and half of respondents say they do not feel their values are represented in decisions being made about AI and how it affects their lives. There is a palpable sense that AI is something being done by powerful institutions and vendors, with citizens largely on the receiving end.</p><h2 class="heading" style="text-align:left;" id="so-what-do-we-want-from-ai"><b>So, what do we want from AI?</b></h2><p class="paragraph" style="text-align:left;">Taken together, these findings outline a distinctive public agenda for AI in the UK. People are not asking for “an AI pause”, <a class="link" href="https://futureoflife.org/open-letter/pause-giant-ai-experiments/?utm_source=dispatches.alanbrown.net&utm_medium=newsletter&utm_campaign=digital-economy-dispatch-265-what-do-we-want-from-ai" target="_blank" rel="noopener noreferrer nofollow">as was requested in March 2023</a> following numerous concerns about AI safety. Nor are they blindly embracing whatever comes next, especially if driven by commercial concerns. 
Instead, several expectations are emerging:</p><ul><li><p class="paragraph" style="text-align:left;"><b>Use AI where it clearly helps, especially in areas such as health diagnostics and behind-the-scenes efficiency – but prove that the benefits are real.</b> Speed and accuracy are attractive, yet they need to be evidenced and audited, not just promised in vendor slide decks.</p></li><li><p class="paragraph" style="text-align:left;"><b>Keep humans in the loop for consequential decisions.</b> Across welfare, credit scoring, healthcare and policing, people remain deeply uncomfortable with opaque, automated judgment that cannot be questioned or appealed.</p></li><li><p class="paragraph" style="text-align:left;"><b>Recognise that harms are already here.</b> From scams and deepfakes to biased policing, citizens are living with AI’s downside risks now, not in some distant AGI future.</p></li><li><p class="paragraph" style="text-align:left;"><b>Build governance that matches the stakes.</b> The public is asking for independent regulation, strong “<a class="link" href="https://www.globalgovernmentforum.com/great-public-expectations-new-research-shows-the-growing-disconnect-between-the-public-and-government-on-ai-regulation/?utm_source=dispatches.alanbrown.net&utm_medium=newsletter&utm_campaign=digital-economy-dispatch-265-what-do-we-want-from-ai" target="_blank" rel="noopener noreferrer nofollow">safe before sale</a>” style powers, clear red lines, and meaningful routes for challenge and redress.</p></li><li><p class="paragraph" style="text-align:left;"><b>Include those most affected in decisions.</b> Lower-income and marginalised groups are more sceptical, more exposed to harms, and less likely to feel represented in AI decision-making. 
Any credible AI strategy has to start with their experiences, not treat them as an afterthought.</p></li></ul><p class="paragraph" style="text-align:left;">As 2026 advances, we must recognise that AI is no longer a novel experiment on the edge of the digital economy; it is deeply embedded in the infrastructure of everyday life. The question “What do we want from AI?” is therefore less about speculative futures and more about aligning today’s systems with public expectations of fairness, safety and human dignity. </p><p class="paragraph" style="text-align:left;"><a class="link" href="https://attitudestoai.uk/?utm_source=dispatches.alanbrown.net&utm_medium=newsletter&utm_campaign=digital-economy-dispatch-265-what-do-we-want-from-ai" target="_blank" rel="noopener noreferrer nofollow">The Ada Lovelace Institute’s work</a> suggests that UK citizens have already done their homework and are asking sharper, more grounded questions than many of the strategies and press releases produced in their name. The challenge now is whether government, industry and institutional leaders are prepared to listen – and to act accordingly.</p></div><div class='beehiiv__footer'><br class='beehiiv__footer__break'><hr class='beehiiv__footer__line'><a target="_blank" class="beehiiv__footer_link" style="text-align: center;" href="https://www.beehiiv.com/?utm_campaign=8317b9ad-4882-4f5b-832b-685b66fc53f7&utm_medium=post_rss&utm_source=digital_economy_dispatches">Powered by beehiiv</a></div></div>
  ]]></content:encoded>
</item>

      <item>
  <title>Digital Economy Dispatch #264 -- AI Bottlenecks, Jagged Edges, and the Real Barriers to AI-at-Scale</title>
  <description>In 2025, AI capability has outpaced institutional readiness. The bottleneck has shifted from what AI can do to what organisations will allow it to do.</description>
      <enclosure url="https://media.beehiiv.com/cdn-cgi/image/fit=scale-down,format=auto,onerror=redirect,quality=80/uploads/asset/file/0e4f007e-c179-4167-bd29-1ae07313ee1c/jagged.jpg" length="144328" type="image/jpeg"/>
  <link>https://dispatches.alanbrown.net/p/digital-dispatch-264-ai-bottlenecks-jagged-edges-and-the-real-barriers-to-ai-at-scale</link>
  <guid isPermaLink="true">https://dispatches.alanbrown.net/p/digital-dispatch-264-ai-bottlenecks-jagged-edges-and-the-real-barriers-to-ai-at-scale</guid>
  <pubDate>Sun, 28 Dec 2025 08:25:07 +0000</pubDate>
  <atom:published>2025-12-28T08:25:07Z</atom:published>
    <dc:creator>Alan Brown</dc:creator>
  <content:encoded><![CDATA[
    <div class='beehiiv'><style>
  .bh__table, .bh__table_header, .bh__table_cell { border: 1px solid #C0C0C0; }
  .bh__table_cell { padding: 5px; background-color: #FFFFFF; }
  .bh__table_cell p { color: #2D2D2D; font-family: 'Helvetica',Arial,sans-serif !important; overflow-wrap: break-word; }
  .bh__table_header { padding: 5px; background-color:#F1F1F1; }
  .bh__table_header p { color: #2A2A2A; font-family:'Trebuchet MS','Lucida Grande',Tahoma,sans-serif !important; overflow-wrap: break-word; }
</style><div class='beehiiv__body'><p class="paragraph" style="text-align:left;">As we head into 2026, the conversation about AI adoption has shifted. We&#39;re no longer debating whether AI works. Instead, leaders and decision makers are wrestling with a more uncomfortable question: Why isn&#39;t it delivering the transformative results we were promised?</p><p class="paragraph" style="text-align:left;"><a class="link" href="https://www.oneusefulthing.org/?utm_source=dispatches.alanbrown.net&utm_medium=newsletter&utm_campaign=digital-economy-dispatch-264-ai-bottlenecks-jagged-edges-and-the-real-barriers-to-ai-at-scale" target="_blank" rel="noopener noreferrer nofollow">Ethan Mollick</a>, the Wharton professor whose work on AI I&#39;ve referenced many times in these Dispatches, <a class="link" href="https://www.oneusefulthing.org/p/the-shape-of-ai-jaggedness-bottlenecks?utm_source=dispatches.alanbrown.net&utm_medium=newsletter&utm_campaign=digital-economy-dispatch-264-ai-bottlenecks-jagged-edges-and-the-real-barriers-to-ai-at-scale" target="_blank" rel="noopener noreferrer nofollow">recently offered a compelling framework for understanding this puzzle</a>. His concept of the &quot;Jagged Frontier&quot; describes something that everyone working with AI will recognise: the technology&#39;s bewildering pattern of being superhuman at some tasks while remaining stubbornly inadequate at others. 
So, while <a class="link" href="https://pmc.ncbi.nlm.nih.gov/articles/PMC12190018/?utm_source=dispatches.alanbrown.net&utm_medium=newsletter&utm_campaign=digital-economy-dispatch-264-ai-bottlenecks-jagged-edges-and-the-real-barriers-to-ai-at-scale" target="_blank" rel="noopener noreferrer nofollow">AI can now outperform human doctors at many kinds of medical diagnosis</a> and <a class="link" href="https://www.technologyreview.com/2024/07/25/1095315/google-deepminds-ai-systems-can-now-solve-complex-math-problems/?utm_source=dispatches.alanbrown.net&utm_medium=newsletter&utm_campaign=digital-economy-dispatch-264-ai-bottlenecks-jagged-edges-and-the-real-barriers-to-ai-at-scale" target="_blank" rel="noopener noreferrer nofollow">solve complex mathematics problems</a> that would stump most experts, it <a class="link" href="https://futurism.com/frontier-ai-models-stumped-childrens-game?utm_source=dispatches.alanbrown.net&utm_medium=newsletter&utm_campaign=digital-economy-dispatch-264-ai-bottlenecks-jagged-edges-and-the-real-barriers-to-ai-at-scale" target="_blank" rel="noopener noreferrer nofollow">still struggles with simple visual puzzles</a> and <a class="link" href="https://www.newscientist.com/article/2449427-ais-get-worse-at-answering-simple-questions-as-they-get-bigger/?utm_source=dispatches.alanbrown.net&utm_medium=newsletter&utm_campaign=digital-economy-dispatch-264-ai-bottlenecks-jagged-edges-and-the-real-barriers-to-ai-at-scale" target="_blank" rel="noopener noreferrer nofollow">may be getting worse at simple tasks such as solving anagrams</a>.</p><p class="paragraph" style="text-align:left;">This jaggedness matters because it creates bottlenecks. A system is only as functional as its weakest components. Even if AI reaches superhuman capability across ninety-nine percent of a task, that remaining one percent can prevent full automation. Consider Mollick&#39;s example of AI-powered medical literature reviews. 
Researchers found that GPT-4.1, when properly prompted, could reproduce and update an entire issue of <a class="link" href="https://www.cochranelibrary.com/?utm_source=dispatches.alanbrown.net&utm_medium=newsletter&utm_campaign=digital-economy-dispatch-264-ai-bottlenecks-jagged-edges-and-the-real-barriers-to-ai-at-scale" target="_blank" rel="noopener noreferrer nofollow">Cochrane reviews</a> (reviewing evidence for medical tests) in 2 days, representing approximately 12 work-years of traditional systematic review effort. The AI outperformed human reviewers on accuracy. Yet it is brittle because it cannot access supplementary files or email authors to request unpublished data – things human reviewers do routinely. That means 12 work-years can become 2 days, but only if a human handles the edge cases.</p><h2 class="heading" style="text-align:left;" id="from-intelligence-bottlenecks-to-in"><b>From Intelligence Bottlenecks to Institutional Bottlenecks</b></h2><p class="paragraph" style="text-align:left;">However, what strikes me most about Mollick&#39;s analysis isn&#39;t the technological limitations. It&#39;s his observation about a different kind of bottleneck altogether. Keeping with the healthcare example, as Mollick describes, AI can now find promising drug candidates far faster than before. But clinical trials still need real patients who take significant time to recruit and monitor. Similarly, regulators still require human review before sign-off. So even if AI generates ten times more good drug ideas, the bottleneck shifts from discovery to approval. Intelligence speeds up; institutions don&#39;t.</p><p class="paragraph" style="text-align:left;">This insight should be required reading for every digital leader and policy maker planning their AI strategies for the coming year. We&#39;ve spent the past few years focused almost exclusively on the technology itself: which models to deploy, how to prompt them effectively, where to find the best use cases. 
But in practice the real constraints on AI-at-Scale have nothing to do with AI capability at all.</p><p class="paragraph" style="text-align:left;">Think about what this means in practice. Your organisation might successfully pilot an AI system that can process procurement requests in minutes rather than days. The technology works. The pilot succeeds. But then you discover that your procurement regulations require human sign-off at three different levels. Your finance systems weren&#39;t designed for this volume of transactions. Your audit processes assume human decision-making at key stages. The AI is ready, but your institution isn&#39;t.</p><p class="paragraph" style="text-align:left;">This is the uncomfortable truth that rarely makes it into vendor presentations or analyst reports. Organisations aren&#39;t just collections of processes waiting to be automated. They&#39;re complex institutional structures shaped by regulations, professional standards, liability frameworks, union agreements, cultural norms, and deeply embedded ways of working. These structures exist for reasons. Some are outdated and should be reformed. Others encode hard-won lessons about accountability, safety, and fairness that we ignore at our peril.</p><p class="paragraph" style="text-align:left;">For digital leaders, this reframing demands a fundamental shift in how we approach AI strategy. Instead of asking &quot;What can AI do?&quot;, we need to ask &quot;What will our institutions allow AI to do?&quot;. AI success is not measured by pilot project metrics. Instead, we need to map the institutional pathways that determine whether those pilots can ever scale. Hiring more data scientists and AI engineers won’t help. 
Investing in regulatory expertise, change management capability, and the patient work of institutional reform will.</p><h2 class="heading" style="text-align:left;" id="the-migration-of-ai-bottlenecks"><b>The Migration of AI Bottlenecks</b></h2><p class="paragraph" style="text-align:left;">This isn&#39;t insurmountable. Institutions do change. Regulations get updated. Professional standards evolve. But they do so on their own timescale, and that timescale rarely matches the breathless pace of technology announcements. The organisations that succeed with AI-at-Scale will be those that understand this dynamic and plan accordingly.</p><p class="paragraph" style="text-align:left;">What we&#39;re witnessing is a predictable pattern in how bottlenecks migrate as AI capability advances. Initially, the constraint is capability itself. We ask: Can AI perform the task at all? Can it analyse these documents, generate this code, and identify these patterns? For many tasks, we&#39;ve now moved past this stage. The technology works.</p><p class="paragraph" style="text-align:left;">The next bottleneck is process. Even when AI can perform a task, organisational processes weren&#39;t designed for AI-speed execution. Workflows assume human timescales. Handoffs between teams create delays. Legacy systems can&#39;t ingest AI outputs. Approval chains remain unchanged. This is where many organisations find themselves today, discovering that their operational infrastructure becomes the limiting factor once AI capability is proven.</p><p class="paragraph" style="text-align:left;">But there&#39;s a third bottleneck that receives far less attention: verification. Maybe, like me, you’re now finding that you spend a lot of time reviewing AI outputs. 
As AI takes on more consequential decisions, the question shifts from &quot;Can AI do this?&quot; to &quot;How do I know AI did this correctly?&quot;.</p><p class="paragraph" style="text-align:left;">In regulated industries, this verification burden is explicit and mandated. Financial services firms must demonstrate model governance. Healthcare organisations must validate clinical AI against established standards. Legal teams must ensure AI-generated contracts meet professional obligations. But in less regulated contexts, the need for human verification is just as important, even if it may not be so obvious.</p><p class="paragraph" style="text-align:left;">This verification bottleneck is particularly challenging because it scales poorly. If AI can process a thousand applications in the time a human processes ten, but each AI decision still requires human review, you&#39;ve simply moved the bottleneck rather than eliminated it. To overcome this, some organisations respond by sampling, reviewing only a percentage of AI decisions. Others implement exception-based workflows, where AI handles straightforward cases autonomously while flagging edge cases for human attention. Of course, both approaches introduce risk and require sophisticated governance frameworks that most organisations have yet to develop.</p><p class="paragraph" style="text-align:left;">The implications are significant. As AI capability continues to advance, the bottleneck will keep migrating. Today&#39;s process constraints will eventually be resolved through system modernisation and workflow redesign. 
But verification challenges may prove more stubborn, particularly in domains where errors carry serious consequences and accountability structures remain anchored in human decision-making.</p><h2 class="heading" style="text-align:left;" id="navigating-the-institutional-ai-lan"><b>Navigating the Institutional AI Landscape</b></h2><p class="paragraph" style="text-align:left;">As you plan your AI rollout, there are several practical implications of this “Jagged Frontier” worth considering. First, audit your institutional constraints with the same rigour you apply to technical assessments. Which regulations govern your AI use cases? Which professional standards apply? Which stakeholders have legitimate authority over process changes? Understanding these factors upfront will save you from the frustration of successful pilots that go nowhere.</p><p class="paragraph" style="text-align:left;">Second, invest in institutional capacity alongside technical capability. This means building relationships with regulators, engaging with professional bodies, participating in standards development, and developing internal expertise in governance and compliance. These activities feel slow and unglamorous compared to spinning up AI projects, but they determine your organisation&#39;s ability to capture value from AI over the long term.</p><p class="paragraph" style="text-align:left;">Third, choose your battles wisely. Some institutional bottlenecks are immovable in the short term. Others are ripe for reform. Focus your AI efforts on areas where institutional constraints are manageable, while working in parallel to shift the constraints in areas with higher strategic value.</p><p class="paragraph" style="text-align:left;">Finally, remember that institutional bottlenecks affect everyone equally. Your competitors face the same regulatory requirements, the same professional standards, the same procurement rules. This creates an interesting strategic dynamic. 
The advantage goes not to whoever deploys AI fastest, but to whoever best understands and navigates the institutional landscape in which AI must operate.</p><h2 class="heading" style="text-align:left;" id="the-long-game-of-a-iat-scale"><b>The Long Game of AI-at-Scale</b></h2><p class="paragraph" style="text-align:left;">What I’ve learned over more than two decades of digital transformation is that technology rarely delivers value on its own terms. Success comes from the hard work of organisational change, process redesign, and capability building. AI is proving no different. The jagged frontier of AI capability will continue to advance. The more fundamental question is whether our institutions can keep pace and whether we have the wisdom and patience to help them do so.</p></div><div class='beehiiv__footer'><br class='beehiiv__footer__break'><hr class='beehiiv__footer__line'><a target="_blank" class="beehiiv__footer_link" style="text-align: center;" href="https://www.beehiiv.com/?utm_campaign=f6f8a087-10ba-45ef-b955-a203be4144d5&utm_medium=post_rss&utm_source=digital_economy_dispatches">Powered by beehiiv</a></div></div>
  ]]></content:encoded>
</item>

      <item>
  <title>Digital Economy Dispatch #263 -- A Review of the Digital Economy in 2025</title>
  <description>As we reflect on the year we find that AI moved from experiment to enterprise reality in 2025, exposing critical gaps in governance, trust, and human-centred leadership that will demand urgent attention in the year to come.</description>
      <enclosure url="https://media.beehiiv.com/cdn-cgi/image/fit=scale-down,format=auto,onerror=redirect,quality=80/uploads/asset/file/7af09c5f-a24f-4927-9a41-ae2e2e9651e8/end-2025-photo-1.jpg" length="60647" type="image/jpeg"/>
  <link>https://dispatches.alanbrown.net/p/digital-economy-dispatch-263-a-review-of-the-digital-economy-in-2025</link>
  <guid isPermaLink="true">https://dispatches.alanbrown.net/p/digital-economy-dispatch-263-a-review-of-the-digital-economy-in-2025</guid>
  <pubDate>Sun, 21 Dec 2025 08:25:19 +0000</pubDate>
  <atom:published>2025-12-21T08:25:19Z</atom:published>
    <dc:creator>Alan Brown</dc:creator>
  <content:encoded><![CDATA[
    <div class='beehiiv'><style>
  .bh__table, .bh__table_header, .bh__table_cell { border: 1px solid #C0C0C0; }
  .bh__table_cell { padding: 5px; background-color: #FFFFFF; }
  .bh__table_cell p { color: #2D2D2D; font-family: 'Helvetica',Arial,sans-serif !important; overflow-wrap: break-word; }
  .bh__table_header { padding: 5px; background-color:#F1F1F1; }
  .bh__table_header p { color: #2A2A2A; font-family:'Trebuchet MS','Lucida Grande',Tahoma,sans-serif !important; overflow-wrap: break-word; }
</style><div class='beehiiv__body'><p class="paragraph" style="text-align:left;">As 2025 draws to a close, it feels like the right moment to step back and reflect on the themes that have dominated our conversations about digital transformation this year. Looking across the <a class="link" href="https://dispatches.alanbrown.net?utm_source=dispatches.alanbrown.net&utm_medium=newsletter&utm_campaign=digital-economy-dispatch-263-a-review-of-the-digital-economy-in-2025" target="_blank" rel="noopener noreferrer nofollow">50+ dispatches</a> I&#39;ve published since January, several interconnected threads emerge. Together, they paint a picture of an extraordinary year in which the digital economy was powered by AI to move from fascinating experiment to urgent strategic necessity, bringing with it profound questions about governance, trust, and human agency that we are only beginning to address.</p><h2 class="heading" style="text-align:left;" id="the-year-of-a-iat-scale">The Year of AI-at-Scale</h2><p class="paragraph" style="text-align:left;">If I had to choose a single phrase to characterise 2025, it would be &quot;<b>Delivering AI-at-Scale</b>&quot;. In the past year, the conversation has evolved dramatically from asking &quot;What can AI do?&quot; to wrestling with &quot;How do we implement AI responsibly at scale while delivering real value?&quot;. I&#39;ve spent countless hours with executives, CIOs, and transformation leaders, moving pilot projects toward mature enterprise initiatives. In every case, the excitement of early wins is soon tempered by the sobering reality of overcoming numerous implementation challenges. This is creating a fundamental shift in how organisations think about AI — from a fascinating technology advance to a business driver demanding serious strategic review.</p><p class="paragraph" style="text-align:left;">It is not just personal anecdotes. The data tells a stark story. 
According to the <a class="link" href="https://www.mckinsey.com/capabilities/quantumblack/our-insights/the-state-of-ai?utm_source=dispatches.alanbrown.net&utm_medium=newsletter&utm_campaign=digital-economy-dispatch-263-a-review-of-the-digital-economy-in-2025" target="_blank" rel="noopener noreferrer nofollow">2025 McKinsey State of AI survey</a>, 75% of organisations now use AI in at least one business function, yet only 28% have clear executive accountability for governance or oversight. AI adoption is staggering in its speed, with 78% of firms now applying AI across core operations, up sharply from 55% in 2024. Yet nearly half of C-suite executives admit their organisations are &quot;tearing apart&quot; under the strain of unmanaged adoption.</p><h2 class="heading" style="text-align:left;" id="cutting-through-the-noise">Cutting Through the Noise</h2><p class="paragraph" style="text-align:left;">In addition, this rapid deployment of AI tools has come at a cost. I began the year with a personal commitment: less noise, more signal. The digital landscape has become overwhelmed with AI-generated content, or &quot;<a class="link" href="https://en.wikipedia.org/wiki/AI_slop?utm_source=dispatches.alanbrown.net&utm_medium=newsletter&utm_campaign=digital-economy-dispatch-263-a-review-of-the-digital-economy-in-2025" target="_blank" rel="noopener noreferrer nofollow">AI slop</a>&quot;. Vast quantities of mediocre, AI-generated material are flooding our digital channels, making valuable information harder to find. We’re now surrounded by a proliferation of shallow, surface-level analyses and basic explanations, which may be technically accurate but offer few meaningful insights. 
The deep impact of “AI slop” is such that for many, this term has become the “<a class="link" href="https://www.newscientist.com/article/2507742-this-year-we-were-drowning-in-a-sea-of-slick-nonsensical-ai-slop/?utm_source=dispatches.alanbrown.net&utm_medium=newsletter&utm_campaign=digital-economy-dispatch-263-a-review-of-the-digital-economy-in-2025" target="_blank" rel="noopener noreferrer nofollow">phrase of the year</a>”. What can we do in response? The antidote is to place an explicit focus on depth over breadth, curating more selective sources and engaging in forums where meaningful AI discussions flourish.</p><p class="paragraph" style="text-align:left;">A key part of this focus is exposing the core elements of what AI is…and what it is not. Throughout the year, I&#39;ve emphasised that AI is not magic: it&#39;s economics. Drawing on Agrawal, Gans, and Goldfarb&#39;s framework from &quot;<a class="link" href="https://www.amazon.co.uk/Prediction-Machines-Economics-Artificial-Intelligence/dp/1633695670?utm_source=dispatches.alanbrown.net&utm_medium=newsletter&utm_campaign=digital-economy-dispatch-263-a-review-of-the-digital-economy-in-2025" target="_blank" rel="noopener noreferrer nofollow">Prediction Machines</a>&quot;, I’ve become convinced that the true impact of AI isn&#39;t about creating sentient machines but something far more practical: making prediction cheap and accurate. This economic framing helps demystify AI and allows leaders to make more rational decisions about implementation and investment. Understanding AI as fundamentally an economic phenomenon, not a magical one, has been crucial in my attempts this year to cut through the hype.</p><h2 class="heading" style="text-align:left;" id="the-governance-gap">The Governance Gap</h2><p class="paragraph" style="text-align:left;">Perhaps the most pressing theme facing leaders and decision makers in 2025 has been the widening gap between AI adoption speed and governance capability. 
<a class="link" href="https://www.techradar.com/pro/tackling-ai-sprawl-in-the-modern-enterprise?utm_source=dispatches.alanbrown.net&utm_medium=newsletter&utm_campaign=digital-economy-dispatch-263-a-review-of-the-digital-economy-in-2025" target="_blank" rel="noopener noreferrer nofollow">AI sprawl</a> is overtaking us. The rapid, uncontrolled spread of GenAI tools across organisations is creating an urgent governance crisis. The <a class="link" href="https://www.mckinsey.com/capabilities/quantumblack/our-insights/the-state-of-ai?utm_source=dispatches.alanbrown.net&utm_medium=newsletter&utm_campaign=digital-economy-dispatch-263-a-review-of-the-digital-economy-in-2025" target="_blank" rel="noopener noreferrer nofollow">McKinsey survey</a> reminds us that while three quarters of companies have scaled AI extensively, fewer than a third have formal governance policies in place. This mismatch has made many enterprises more dependent and more exposed than ever.</p><p class="paragraph" style="text-align:left;">To compound these concerns, the fragility of our digital infrastructure became painfully apparent this year. <a class="link" href="https://cybermagazine.com/news/how-aws-outage-exposes-the-risks-of-cloud-dependency?utm_source=dispatches.alanbrown.net&utm_medium=newsletter&utm_campaign=digital-economy-dispatch-263-a-review-of-the-digital-economy-in-2025" target="_blank" rel="noopener noreferrer nofollow">A simple DNS configuration error inside AWS triggered a cascading failure</a> that silenced half the internet for hours. 
Meanwhile, ransomware attacks <a class="link" href="https://www.theguardian.com/business/2025/sep/20/jaguar-land-rover-hack-factories-cybersecurity-jlr?utm_source=dispatches.alanbrown.net&utm_medium=newsletter&utm_campaign=digital-economy-dispatch-263-a-review-of-the-digital-economy-in-2025" target="_blank" rel="noopener noreferrer nofollow">like the one on Jaguar Land Rover</a> demonstrated that cyber resilience cannot be expected, only prepared for. The lesson for digital leaders is clear: in deploying AI, speed without foresight multiplies risk. The challenge is not to slow innovation but to stabilise it to ensure that as we scale AI, we also secure it.</p><h2 class="heading" style="text-align:left;" id="who-do-we-trust">Who Do We Trust?</h2><p class="paragraph" style="text-align:left;">The effect of these operational issues with AI also raised a broader conceptual challenge for the digital economy. A recurring question throughout 2025 has been: Who do we trust with AI? <a class="link" href="https://www.amazon.co.uk/Supremacy-ChatGPT-Race-Change-World/dp/1035038226?utm_source=dispatches.alanbrown.net&utm_medium=newsletter&utm_campaign=digital-economy-dispatch-263-a-review-of-the-digital-economy-in-2025" target="_blank" rel="noopener noreferrer nofollow">Parmy Olson&#39;s &quot;Supremacy&quot;</a> captured the tension well, placing us at the centre of the tech world&#39;s most important AI race. The contest between OpenAI and DeepMind, their visionary founders, and the forces of venture capital and Big Tech has shaped the direction of AI development in ways that have profound implications for everyone adopting these tools. The concentration of power in a handful of US and Chinese technology firms is not an abstract business threat. 
Rather, it has become embedded in our infrastructure, digital services, and strategic decision-making.</p><p class="paragraph" style="text-align:left;">For the UK specifically, this has meant wrestling with questions of digital sovereignty and strategic dependency. At the start of 2025, <a class="link" href="https://www.gov.uk/government/publications/ai-opportunities-action-plan/ai-opportunities-action-plan?utm_source=dispatches.alanbrown.net&utm_medium=newsletter&utm_campaign=digital-economy-dispatch-263-a-review-of-the-digital-economy-in-2025" target="_blank" rel="noopener noreferrer nofollow">the UK government announced ambitious spending commitments</a>, including a £500 million UK Sovereign AI Unit and £750 million for the Edinburgh supercomputer. These represent attempts to boost the UK economy and reduce dependence on foreign AI capabilities. But, later in the year the UK government also announced <a class="link" href="https://www.theguardian.com/commentisfree/2025/sep/18/uk-us-tech-deal-31-billion-ai-government-answer-questions?utm_source=dispatches.alanbrown.net&utm_medium=newsletter&utm_campaign=digital-economy-dispatch-263-a-review-of-the-digital-economy-in-2025" target="_blank" rel="noopener noreferrer nofollow">significant US AI technology investment</a>, raising questions about how the UK plans to deliver on these promises. Such digital technology decisions are smart only if we maintain clarity on data governance, regulatory independence, and the persistent risks of strategic lock-in. A struggle we’ll see played out throughout 2026 and beyond.</p><h2 class="heading" style="text-align:left;" id="the-human-dimension">The Human Dimension</h2><p class="paragraph" style="text-align:left;">However, technological concerns are only part of the story. Throughout 2025, I&#39;ve consistently argued that the real issue isn&#39;t technology itself but how it&#39;s used by those in leadership positions. 
The Luddite lessons from previous technology revolutions remain relevant. Successful AI integration is not just about implementing new technologies, but about guiding people through significant disruptive change. And AI-driven change is already significant. <a class="link" href="https://www.microsoft.com/en-us/worklab/work-trend-index/2025-the-year-the-frontier-firm-is-born?utm_source=dispatches.alanbrown.net&utm_medium=newsletter&utm_campaign=digital-economy-dispatch-263-a-review-of-the-digital-economy-in-2025" target="_blank" rel="noopener noreferrer nofollow">Microsoft&#39;s Work Trend Index revealed</a> that &quot;Frontier Firms&quot; are fundamentally reshaping work through AI integration, with 82% of leaders expecting to leverage agent-based digital labour within the next 12-18 months.</p><p class="paragraph" style="text-align:left;">Workplace disruption is a reality. Yet contrary to headlines suggesting AI will replace key roles in middle management layers, I&#39;ve argued this year that AI may actually liberate them from administrative drudgery, allowing them to focus on what they were hired for: leading people. Every employee is increasingly becoming an &quot;agent boss&quot;, responsible for building, delegating to, and managing AI agents to amplify their impact. The gap between leaders and employees in AI readiness (67% versus 40% familiarity with agents) represents both a challenge and an opportunity for organisations willing to invest in upskilling.</p><h2 class="heading" style="text-align:left;" id="looking-beyond-gen-ai">Looking Beyond GenAI</h2><p class="paragraph" style="text-align:left;">For many people in 2025, AI interest has been focused on one aspect of this broad space – Generative AI tools. While the world has been captivated by AI that generates text and images, I&#39;ve repeatedly pointed out that some of the most significant breakthroughs are happening in areas that generate fewer headlines but potentially far greater impact. 
Predictive AI and optimisation systems are transforming areas such as weather forecasting, energy grid management, urban planning, healthcare provision, and pharmaceutical research. Our collective fixation on generative AI may be causing us to miss the forest for the trees.</p><p class="paragraph" style="text-align:left;">Richard Susskind&#39;s framework (from his excellent book, “<a class="link" href="https://www.amazon.co.uk/How-Think-About-AI-Perplexed/dp/0198941927?utm_source=dispatches.alanbrown.net&utm_medium=newsletter&utm_campaign=digital-economy-dispatch-263-a-review-of-the-digital-economy-in-2025" target="_blank" rel="noopener noreferrer nofollow">How to Think about AI</a>”) reminds us that automation of common processes by using AI to computerise existing tasks is just the most obvious and limited application of what AI can do today. Ongoing AI innovation will mean delivering outcomes using radically new processes, often removing the need for certain activities entirely. Leaders trapped in a narrow vision based on today&#39;s AI capabilities risk strategic blindness to what not-yet-invented technologies will bring in the coming years.</p><h2 class="heading" style="text-align:left;" id="the-path-forward">The Path Forward</h2><p class="paragraph" style="text-align:left;">As we enter 2026, the challenge for us all is clear. Advances in 2025 have led us to a phase where we must view AI as &quot;<a class="link" href="https://www.oreilly.com/radar/is-ai-a-normal-technology/?utm_source=dispatches.alanbrown.net&utm_medium=newsletter&utm_campaign=digital-economy-dispatch-263-a-review-of-the-digital-economy-in-2025" target="_blank" rel="noopener noreferrer nofollow">normal technology</a>&quot;. It is not some exotic outlier. It is an everyday tool we must use, but it is critical we remain in control of. This perspective stands in contrast to growing fears of AI as a separate, potentially superintelligent entity that could eclipse human control. 
For digital leaders, this means embracing realism about AI&#39;s capabilities and limitations, prioritising strategies that maintain human control, investing in organisational adaptation, and adopting nuanced approaches to risks that undoubtedly exist with AI.</p><p class="paragraph" style="text-align:left;">The hope for our AI future lies not in the technology itself, but in our collective ability to guide its development and deployment responsibly. We have the experience, frameworks, and wisdom gained from decades of digital transformation. The question is whether we will apply these hard-won lessons with the urgency and seriousness that this moment demands.</p><p class="paragraph" style="text-align:left;">Finally, as we look forward to 2026, let’s remind ourselves that AI-driven digital transformation isn&#39;t something happening to us. Rather, it&#39;s something we collectively shape. By maintaining our critical thinking and human-centric values, we can ensure that AI becomes a tool of empowerment, not constraint. As I&#39;ve said throughout the year: drill deeper, question boldly, and never lose sight of the human dimension in all of AI’s technological change.</p><p class="paragraph" style="text-align:left;"><span style="font-family:Arial, sans-serif;font-size:12pt;"><i>Here&#39;s to a thoughtful and transformative 2026.</i></span></p></div><div class='beehiiv__footer'><br class='beehiiv__footer__break'><hr class='beehiiv__footer__line'><a target="_blank" class="beehiiv__footer_link" style="text-align: center;" href="https://www.beehiiv.com/?utm_campaign=d5c71b3d-2553-4282-ac77-38d33abafa41&utm_medium=post_rss&utm_source=digital_economy_dispatches">Powered by beehiiv</a></div></div>
  ]]></content:encoded>
</item>

      <item>
  <title>Digital Economy Dispatch #262 -- Are We Ready for AI Agents in the Workforce?</title>
  <description>AI agents represent a real technological advance over previous automation attempts. But organisations are repeating historical mistakes by focusing on technology deployment rather than the organisational change management that actually determines success.</description>
      <enclosure url="https://media.beehiiv.com/cdn-cgi/image/fit=scale-down,format=auto,onerror=redirect,quality=80/uploads/asset/file/be557f30-2fe1-40de-a885-638f77435d6b/digitial-numbers-photo-1.jpg" length="67553" type="image/jpeg"/>
  <link>https://dispatches.alanbrown.net/p/digital-economy-dispatch-262-are-we-ready-for-ai-agents-in-the-workforce</link>
  <guid isPermaLink="true">https://dispatches.alanbrown.net/p/digital-economy-dispatch-262-are-we-ready-for-ai-agents-in-the-workforce</guid>
  <pubDate>Sun, 14 Dec 2025 08:25:38 +0000</pubDate>
  <atom:published>2025-12-14T08:25:38Z</atom:published>
    <dc:creator>Alan Brown</dc:creator>
  <content:encoded><![CDATA[
    <div class='beehiiv'><style>
  .bh__table, .bh__table_header, .bh__table_cell { border: 1px solid #C0C0C0; }
  .bh__table_cell { padding: 5px; background-color: #FFFFFF; }
  .bh__table_cell p { color: #2D2D2D; font-family: 'Helvetica',Arial,sans-serif !important; overflow-wrap: break-word; }
  .bh__table_header { padding: 5px; background-color:#F1F1F1; }
  .bh__table_header p { color: #2A2A2A; font-family:'Trebuchet MS','Lucida Grande',Tahoma,sans-serif !important; overflow-wrap: break-word; }
</style><div class='beehiiv__body'><p class="paragraph" style="text-align:left;">If you believe the headlines, 2025 has been &quot;<a class="link" href="https://www.perplexity.ai/hub/blog/how-people-use-ai-agents?utm_source=dispatches.alanbrown.net&utm_medium=newsletter&utm_campaign=digital-economy-dispatch-262-are-we-ready-for-ai-agents-in-the-workforce" target="_blank" rel="noopener noreferrer nofollow">the year of the AI agent</a>&quot;. Tech vendors are falling over themselves to announce agentic capabilities, analysts are publishing breathless predictions about autonomous digital workers, and your inbox is probably full of invitations to webinars promising to revolutionise your operations with AI agents that think, plan, and act on your behalf.</p><p class="paragraph" style="text-align:left;">But strip away the marketing gloss and a more interesting picture emerges. Yes, organisations are investing heavily. For example, <a class="link" href="https://www.pwc.com/us/en/tech-effect/ai-analytics/ai-agent-survey.html?utm_source=dispatches.alanbrown.net&utm_medium=newsletter&utm_campaign=digital-economy-dispatch-262-are-we-ready-for-ai-agents-in-the-workforce" target="_blank" rel="noopener noreferrer nofollow">PwC&#39;s latest survey</a> shows 88% of executives plan to increase AI budgets specifically because of agentic AI. Yet when you look at actual implementation maturity, the scores are sobering. <a class="link" href="https://siliconangle.com/2025/12/11/ai-agents-outpace-enterprise-reality-thecube-research-aiagentbuilder/?utm_source=dispatches.alanbrown.net&utm_medium=newsletter&utm_campaign=digital-economy-dispatch-262-are-we-ready-for-ai-agents-in-the-workforce" target="_blank" rel="noopener noreferrer nofollow">Research from theCUBE</a> puts execution readiness at just 1.8 out of 5, even while aspirations score 4.1. That gap between ambition and reality should sound familiar to anyone who&#39;s lived through previous waves of business automation. 
Some of us have long memories and painful scars.</p><h2 class="heading" style="text-align:left;" id="what-actually-is-an-ai-agent"><b>What Actually Is an AI Agent?</b></h2><p class="paragraph" style="text-align:left;">It’s time to cut through the hype. An AI agent, in its current practical form, is software that can observe its environment, make decisions based on what it finds, and take actions to achieve a defined goal. This can involve tying together multiple steps, often without requiring human approval at each stage.</p><p class="paragraph" style="text-align:left;">The key difference from traditional automation is the level of autonomy provided by AI agents. A conventional automated workflow follows predetermined rules: if X happens, do Y. An AI agent is designed to interpret ambiguous situations, decide on an approach, and adapt when things don&#39;t go as expected. For example, rather than using rule-based workflow automation to handle email (“if you receive a message from X, flag it as high priority”), an AI agent might be able to analyse your calendar, draft an email, check for conflicts, revise the draft based on the recipient&#39;s previous responses, and send it -- all from a single instruction.</p><p class="paragraph" style="text-align:left;">In practice today, most AI agents sit somewhere on this automation spectrum. At one end, you have agents embedded in enterprise applications. 
The most common include <a class="link" href="https://www.microsoft.com/en-gb/microsoft-365-copilot?utm_source=dispatches.alanbrown.net&utm_medium=newsletter&utm_campaign=digital-economy-dispatch-262-are-we-ready-for-ai-agents-in-the-workforce" target="_blank" rel="noopener noreferrer nofollow">Microsoft&#39;s Copilot</a> surfacing common desktop insights and <a class="link" href="https://help.salesforce.com/s/products/einstein?utm_source=dispatches.alanbrown.net&utm_medium=newsletter&utm_campaign=digital-economy-dispatch-262-are-we-ready-for-ai-agents-in-the-workforce" target="_blank" rel="noopener noreferrer nofollow">Salesforce&#39;s Einstein</a> automating routine customer interactions. These are useful but incremental. At the other end, you have experimental autonomous systems that can conduct research, write code (such as <a class="link" href="https://cognition.ai/blog/introducing-devin?utm_source=dispatches.alanbrown.net&utm_medium=newsletter&utm_campaign=digital-economy-dispatch-262-are-we-ready-for-ai-agents-in-the-workforce" target="_blank" rel="noopener noreferrer nofollow">Cognition’s Devin</a>), or manage complex multi-step processes with minimal human oversight (such as <a class="link" href="https://www.crewai.com/?utm_source=dispatches.alanbrown.net&utm_medium=newsletter&utm_campaign=digital-economy-dispatch-262-are-we-ready-for-ai-agents-in-the-workforce" target="_blank" rel="noopener noreferrer nofollow">CrewAI</a>). Most organisations are firmly at the first end, whatever their vendor pitches might suggest.</p><h2 class="heading" style="text-align:left;" id="havent-we-been-here-before"><b>Haven&#39;t We Been Here Before?</b></h2><p class="paragraph" style="text-align:left;">If you&#39;re experiencing a sense of déjà vu, you&#39;re not wrong. 
The promise of intelligent automation transforming how we work has a long and somewhat chequered history.</p><p class="paragraph" style="text-align:left;">In the 1990s, <a class="link" href="https://hbr.org/1990/07/reengineering-work-dont-automate-obliterate?utm_source=dispatches.alanbrown.net&utm_medium=newsletter&utm_campaign=digital-economy-dispatch-262-are-we-ready-for-ai-agents-in-the-workforce" target="_blank" rel="noopener noreferrer nofollow">Business Process Automation (BPA) and Business Process Reengineering (BPR)</a> promised radical transformation through fundamentally rethinking and automating how work gets done. <a class="link" href="https://www.kellogg.northwestern.edu/academics-research/research/detail/1999/business-process-reengineering-its-history-promises-and-problems/?utm_source=dispatches.alanbrown.net&utm_medium=newsletter&utm_campaign=digital-economy-dispatch-262-are-we-ready-for-ai-agents-in-the-workforce" target="_blank" rel="noopener noreferrer nofollow">The results were mixed at best</a>. Many organisations found that &quot;reengineering&quot; became a euphemism for cost-cutting, and the promised productivity gains from automating the steps often failed to materialise because the resistance of the broader organisation to change was dramatically underestimated.</p><p class="paragraph" style="text-align:left;">More recently, <a class="link" href="https://www.ibm.com/think/topics/rpa?utm_source=dispatches.alanbrown.net&utm_medium=newsletter&utm_campaign=digital-economy-dispatch-262-are-we-ready-for-ai-agents-in-the-workforce" target="_blank" rel="noopener noreferrer nofollow">Robotic Process Automation (RPA)</a> arrived with similar fanfare. Software robots would handle repetitive tasks, freeing humans for higher-value work. 
And <a class="link" href="https://eyfinancialservicesthoughtgallery.ie/get-ready-robots/?utm_source=dispatches.alanbrown.net&utm_medium=newsletter&utm_campaign=digital-economy-dispatch-262-are-we-ready-for-ai-agents-in-the-workforce" target="_blank" rel="noopener noreferrer nofollow">RPA did deliver real benefits, but within strict limits</a>. It excelled at structured, rule-based processes with clean data. Throw in exceptions, ambiguity, or the need for judgment, and the robots struggled. <a class="link" href="https://www.mckinsey.com/capabilities/tech-and-ai/our-insights/how-to-avoid-the-three-common-execution-pitfalls-that-derail-automation-programs?utm_source=dispatches.alanbrown.net&utm_medium=newsletter&utm_campaign=digital-economy-dispatch-262-are-we-ready-for-ai-agents-in-the-workforce" target="_blank" rel="noopener noreferrer nofollow">Many RPA implementations hit a ceiling</a>, automating the easy 20% while the complex 80% remained stubbornly manual.</p><p class="paragraph" style="text-align:left;">So, is the current wave of AI agents any different?</p><p class="paragraph" style="text-align:left;">I think the honest answer is potentially yes, but not in the ways the hype suggests.</p><p class="paragraph" style="text-align:left;">The genuine breakthrough with AI agents isn&#39;t that they can follow more complex rules, but rather that they can handle greater ambiguity. They can interpret intent from natural language, make reasonable judgments when information is incomplete, and learn from feedback. That&#39;s a meaningful step change from RPA&#39;s brittleness.</p><p class="paragraph" style="text-align:left;">But that advance is in danger of being swamped by the organisational challenges that surround any business process change, and those challenges remain remarkably similar to the ones that derailed earlier waves of automation. 
BPA/BPR failed not because the technology was wrong but because redesigning processes means redistributing power, changing job roles, and challenging entrenched ways of working. RPA stalled not because the bots couldn&#39;t cope but because organisations couldn&#39;t integrate them into workflows that crossed departmental boundaries.</p><p class="paragraph" style="text-align:left;">AI agents will face exactly the same issues. The technology may be more capable, but capability was never really the limiting constraint.</p><h2 class="heading" style="text-align:left;" id="the-questions-you-should-be-asking"><b>The Questions You Should Be Asking</b></h2><p class="paragraph" style="text-align:left;"><a class="link" href="https://www.mckinsey.com/capabilities/quantumblack/our-insights/the-state-of-ai?utm_source=dispatches.alanbrown.net&utm_medium=newsletter&utm_campaign=digital-economy-dispatch-262-are-we-ready-for-ai-agents-in-the-workforce" target="_blank" rel="noopener noreferrer nofollow">McKinsey&#39;s latest research reinforces this point</a>. It concludes that organisations achieving real value from AI aren&#39;t just deploying better tools. Instead, they&#39;re fundamentally rewiring how work gets done. That means asking uncomfortable questions that go well beyond technology selection.</p><p class="paragraph" style="text-align:left;"><i>Who is accountable when an AI agent makes a decision that goes wrong? How do you measure productivity when a team consists of three humans and a dozen digital agents? What happens to middle management when much of their coordination role can be automated? 
How do you maintain institutional knowledge when AI agents handle processes that humans used to learn from?</i></p><p class="paragraph" style="text-align:left;">Some forward-thinking organisations <a class="link" href="https://www.mckinsey.com/capabilities/quantumblack/our-insights/seizing-the-agentic-ai-advantage?utm_source=dispatches.alanbrown.net&utm_medium=newsletter&utm_campaign=digital-economy-dispatch-262-are-we-ready-for-ai-agents-in-the-workforce" target="_blank" rel="noopener noreferrer nofollow">are already experimenting with answers</a> and creating new roles, rethinking performance metrics, and establishing governance frameworks for human-AI collaboration. But they&#39;re the minority. Most are still treating AI agents as a technical implementation rather than a workforce transformation.</p><p class="paragraph" style="text-align:left;">The lesson from BPA/BPR and RPA is clear: the organisations that succeeded were those that recognised automation as an organisational change programme first and a technology project second. There&#39;s no reason to think AI agents will be any different.</p><p class="paragraph" style="text-align:left;">The technology may well be much more capable this time. But if we’re honest, technology has never been the hard part in driving organisational change, has it?</p><p class="paragraph" style="text-align:left;"></p><p class="paragraph" style="text-align:left;"> </p></div><div class='beehiiv__footer'><br class='beehiiv__footer__break'><hr class='beehiiv__footer__line'><a target="_blank" class="beehiiv__footer_link" style="text-align: center;" href="https://www.beehiiv.com/?utm_campaign=b561c61c-98e5-49fb-88cd-ca497cc43e90&utm_medium=post_rss&utm_source=digital_economy_dispatches">Powered by beehiiv</a></div></div>
  ]]></content:encoded>
</item>

      <item>
  <title>Digital Economy Dispatch #261 -- AI Economics 101</title>
  <description>Facing a volatile environment, today&#39;s leaders must understand both how AI changes their business economics and how Big Tech&#39;s trillion-dollar bets will shape their future costs and dependencies.</description>
      <enclosure url="https://media.beehiiv.com/cdn-cgi/image/fit=scale-down,format=auto,onerror=redirect,quality=80/uploads/asset/file/f171c871-4467-4455-8fb6-4ca2933794cd/money-photo-2.jpg" length="76037" type="image/jpeg"/>
  <link>https://dispatches.alanbrown.net/p/digital-economy-dispatch-261-ai-economics-101</link>
  <guid isPermaLink="true">https://dispatches.alanbrown.net/p/digital-economy-dispatch-261-ai-economics-101</guid>
  <pubDate>Sun, 07 Dec 2025 08:22:47 +0000</pubDate>
  <atom:published>2025-12-07T08:22:47Z</atom:published>
    <dc:creator>Alan Brown</dc:creator>
  <content:encoded><![CDATA[
    <div class='beehiiv'><style>
  .bh__table, .bh__table_header, .bh__table_cell { border: 1px solid #C0C0C0; }
  .bh__table_cell { padding: 5px; background-color: #FFFFFF; }
  .bh__table_cell p { color: #2D2D2D; font-family: 'Helvetica',Arial,sans-serif !important; overflow-wrap: break-word; }
  .bh__table_header { padding: 5px; background-color:#F1F1F1; }
  .bh__table_header p { color: #2A2A2A; font-family:'Trebuchet MS','Lucida Grande',Tahoma,sans-serif !important; overflow-wrap: break-word; }
</style><div class='beehiiv__body'><p class="paragraph" style="text-align:left;">As 2025 draws to a close, busy leaders have a lot on their plates. In a volatile world, the recent <a class="link" href="https://kpmg.com/xx/en/our-insights/value-creation/global-ceo-outlook-survey.html?utm_source=dispatches.alanbrown.net&utm_medium=newsletter&utm_campaign=digital-economy-dispatch-261-ai-economics-101" target="_blank" rel="noopener noreferrer nofollow">KPMG Global CEO Outlook</a> report concluded that many see AI and talent investment as the keys to their resilience and growth. Yet very few are being given a clear view of what that AI transformation looks like in hard economic terms. Behind the headlines about “AI copilots” and “agentic AI” sits a much bigger story: a small group of tech giants <a class="link" href="https://www.mckinsey.com/industries/technology-media-and-telecommunications/our-insights/the-cost-of-compute-a-7-trillion-dollar-race-to-scale-data-centers?utm_source=dispatches.alanbrown.net&utm_medium=newsletter&utm_campaign=digital-economy-dispatch-261-ai-economics-101" target="_blank" rel="noopener noreferrer nofollow">is making multi‑trillion dollar bets</a> on AI infrastructure and models, while most enterprises are still <a class="link" href="https://fortune.com/2025/08/18/mit-report-95-percent-generative-ai-pilots-at-companies-failing-cfo/?utm_source=dispatches.alanbrown.net&utm_medium=newsletter&utm_campaign=digital-economy-dispatch-261-ai-economics-101" target="_blank" rel="noopener noreferrer nofollow">struggling to turn AI pilots into measurable productivity</a>.<span style="font-family:Arial, sans-serif;">​</span></p><p class="paragraph" style="text-align:left;">This gap matters. If you are responsible for strategy, you now must understand not just how to use AI in your business, but how AI is reshaping the economics of your industry and the platforms you depend on. 
That means getting comfortable with both “<b>the technology of business</b>” — how AI changes your value proposition and operating model — and “<b>the business of technology</b>” — how your AI providers are funding their ambitions and what that implies for your costs, risks, and options over the next decade.<span style="font-family:Arial, sans-serif;">​</span></p><h2 class="heading" style="text-align:left;" id="two-stories-macro-and-micro"><b>Two Stories: Macro and Micro</b></h2><p class="paragraph" style="text-align:left;">There are really two intertwined issues here. At the macro level, a handful of Big Tech firms are pouring staggering sums into <a class="link" href="https://www.businessinsider.com/ibm-ceo-big-tech-ai-capex-data-center-spending-2025-12?utm_source=dispatches.alanbrown.net&utm_medium=newsletter&utm_campaign=digital-economy-dispatch-261-ai-economics-101" target="_blank" rel="noopener noreferrer nofollow">AI infrastructure and data centres</a>, pushing up stock indices and raising questions about <a class="link" href="https://www.businessinsider.com/ibm-ceo-big-tech-ai-capex-data-center-spending-2025-12?utm_source=dispatches.alanbrown.net&utm_medium=newsletter&utm_campaign=digital-economy-dispatch-261-ai-economics-101" target="_blank" rel="noopener noreferrer nofollow">whether any of this will ever pay back</a>. 
At the same time, at the micro level, enterprises are quietly <a class="link" href="https://www.mckinsey.com/capabilities/tech-and-ai/our-insights/the-new-economics-of-enterprise-technology-in-an-ai-world?utm_source=dispatches.alanbrown.net&utm_medium=newsletter&utm_campaign=digital-economy-dispatch-261-ai-economics-101" target="_blank" rel="noopener noreferrer nofollow">re‑engineering workflows, roles, and cost structures</a> to extract real value from AI, but at a much slower, messier pace than market narratives suggest.<span style="font-family:Arial, sans-serif;">​</span></p><p class="paragraph" style="text-align:left;">Leaders who only track the macro story risk getting swept up in hype or panic about <a class="link" href="https://www.economist.com/leaders/2025/07/24/the-economics-of-superintelligence?utm_source=dispatches.alanbrown.net&utm_medium=newsletter&utm_campaign=digital-economy-dispatch-261-ai-economics-101" target="_blank" rel="noopener noreferrer nofollow">bubbles, frontier models, and even “superintelligence”</a>. Leaders who only focus on the micro story risk missing how dependent their AI ambitions are on the evolving <a class="link" href="https://hbr.org/2025/11/ai-companies-dont-have-a-profitable-business-model-does-that-matter?utm_source=dispatches.alanbrown.net&utm_medium=newsletter&utm_campaign=digital-economy-dispatch-261-ai-economics-101" target="_blank" rel="noopener noreferrer nofollow">business models and pricing power of the Big Tech platforms</a> that provide the models, chips, and cloud capacity.<span style="font-family:Arial, sans-serif;">​</span></p><h3 class="heading" style="text-align:left;" id="the-macro-can-big-techs-ai-bet-ever"><b>The Macro: Can Big Tech’s AI bet ever pay?</b></h3><p class="paragraph" style="text-align:left;">The capital intensity of this AI wave is extraordinary. 
Estimates put announced <a class="link" href="https://www.businessinsider.com/ibm-ceo-big-tech-ai-capex-data-center-spending-2025-12?utm_source=dispatches.alanbrown.net&utm_medium=newsletter&utm_campaign=digital-economy-dispatch-261-ai-economics-101" target="_blank" rel="noopener noreferrer nofollow">AI compute commitments and data‑centre build‑outs</a> on the order of many tens to a hundred gigawatts worldwide, translating into <a class="link" href="https://www.economist.com/finance-and-economics/2025/07/17/why-is-ai-so-slow-to-spread-economics-can-explain?utm_source=dispatches.alanbrown.net&utm_medium=newsletter&utm_campaign=digital-economy-dispatch-261-ai-economics-101" target="_blank" rel="noopener noreferrer nofollow">trillions of dollars of potential capex</a> when you factor in chips, facilities, and energy. Some industry leaders now openly question whether <a class="link" href="https://www.businessinsider.com/ibm-ceo-big-tech-ai-capex-data-center-spending-2025-12?utm_source=dispatches.alanbrown.net&utm_medium=newsletter&utm_campaign=digital-economy-dispatch-261-ai-economics-101" target="_blank" rel="noopener noreferrer nofollow">today’s infrastructure and energy costs can sustain this level of spending and still generate acceptable returns</a>, without either major price rises or significant breakthroughs in efficiency.<span style="font-family:Arial, sans-serif;">​</span></p><p class="paragraph" style="text-align:left;">On top of the hardware build‑out, staggering sums are being spent to <a class="link" href="https://epoch.ai/blog/how-much-does-it-cost-to-train-frontier-ai-models?utm_source=dispatches.alanbrown.net&utm_medium=newsletter&utm_campaign=digital-economy-dispatch-261-ai-economics-101" target="_blank" rel="noopener noreferrer nofollow">train the latest generation of frontier models</a>. 
Analyses of recent large‑scale models suggest that individual <a class="link" href="https://arxiv.org/html/2405.21015v1?utm_source=dispatches.alanbrown.net&utm_medium=newsletter&utm_campaign=digital-economy-dispatch-261-ai-economics-101" target="_blank" rel="noopener noreferrer nofollow">training runs already cost tens or even hundreds of millions of dollars</a>, with projections that the most ambitious runs could exceed a billion dollars later this decade if current scaling trends continue. As model sizes, data requirements, and safety evaluations grow, the economics of training are becoming a powerful barrier to entry that only a handful of well‑capitalised firms can realistically cross.<span style="font-family:Arial, sans-serif;">​</span></p><p class="paragraph" style="text-align:left;">Then there is the cost of <a class="link" href="https://explodingtopics.com/blog/chatgpt-users?utm_source=dispatches.alanbrown.net&utm_medium=newsletter&utm_campaign=digital-economy-dispatch-261-ai-economics-101" target="_blank" rel="noopener noreferrer nofollow">serving these models to hundreds of millions of users</a>. Popular tools such as ChatGPT and its peers support vast bases of free users, with <a class="link" href="https://www.demandsage.com/chatgpt-statistics/?utm_source=dispatches.alanbrown.net&utm_medium=newsletter&utm_campaign=digital-economy-dispatch-261-ai-economics-101" target="_blank" rel="noopener noreferrer nofollow">only a minority paying for premium tiers</a>. 
That means the AI leaders are effectively running a global, always‑on inference service where <a class="link" href="https://hbr.org/2025/11/ai-companies-dont-have-a-profitable-business-model-does-that-matter?utm_source=dispatches.alanbrown.net&utm_medium=newsletter&utm_campaign=digital-economy-dispatch-261-ai-economics-101" target="_blank" rel="noopener noreferrer nofollow">infrastructure and energy costs scale with usage faster than subscription revenues</a>, relying heavily on a freemium model and investor patience to bridge the gap.<span style="font-family:Arial, sans-serif;">​</span></p><p class="paragraph" style="text-align:left;">At the same time, much of the <a class="link" href="https://hbr.org/2025/11/ai-companies-dont-have-a-profitable-business-model-does-that-matter?utm_source=dispatches.alanbrown.net&utm_medium=newsletter&utm_campaign=digital-economy-dispatch-261-ai-economics-101" target="_blank" rel="noopener noreferrer nofollow">GenAI ecosystem still lacks clear, proven profit models</a>: usage‑based pricing is still evolving, margins are squeezed by compute and licensing costs, and many providers depend more on expectations of future dominance than on sustainable cash flows. That is why economists increasingly describe <a class="link" href="https://mitsloan.mit.edu/ideas-made-to-matter/a-new-look-economics-ai?utm_source=dispatches.alanbrown.net&utm_medium=newsletter&utm_campaign=digital-economy-dispatch-261-ai-economics-101" target="_blank" rel="noopener noreferrer nofollow">GenAI as an “infrastructure‑first” experiment</a>, whose financial logic only works if adoption and productivity gains scale far faster than current evidence suggests.<span style="font-family:Arial, sans-serif;">​</span></p><h3 class="heading" style="text-align:left;" id="the-micro-why-ai-is-slow-to-show-up"><b>The Micro: Why AI is slow to show up in productivity</b></h3><p class="paragraph" style="text-align:left;">On the ground, the economics look very different. 
Careful macroeconomic work, such as <a class="link" href="https://mitsloan.mit.edu/ideas-made-to-matter/a-new-look-economics-ai?utm_source=dispatches.alanbrown.net&utm_medium=newsletter&utm_campaign=digital-economy-dispatch-261-ai-economics-101" target="_blank" rel="noopener noreferrer nofollow">MIT’s “new look at the economics of AI”</a>, suggests that only a modest share of tasks can be profitably automated with current AI over the next decade, leading to a <a class="link" href="https://mitsloan.mit.edu/ideas-made-to-matter/a-new-look-economics-ai?utm_source=dispatches.alanbrown.net&utm_medium=newsletter&utm_campaign=digital-economy-dispatch-261-ai-economics-101" target="_blank" rel="noopener noreferrer nofollow">“nontrivial but modest” overall impact on GDP</a> compared with the more aggressive forecasts. Researchers emphasise <a class="link" href="https://www.computer.org/publications/tech-news/trends/economics-of-ai?utm_source=dispatches.alanbrown.net&utm_medium=newsletter&utm_campaign=digital-economy-dispatch-261-ai-economics-101" target="_blank" rel="noopener noreferrer nofollow">adjustment costs</a>: organisations must redesign processes, restructure roles, clean up data, and build new controls, all of which delay and dilute apparent returns.<span style="font-family:Arial, sans-serif;">​</span></p><p class="paragraph" style="text-align:left;">This helps explain the <a class="link" href="https://www.computer.org/publications/tech-news/trends/economics-of-ai?utm_source=dispatches.alanbrown.net&utm_medium=newsletter&utm_campaign=digital-economy-dispatch-261-ai-economics-101" target="_blank" rel="noopener noreferrer nofollow">“AI paradox”</a> many leaders feel. Individually, teams report impressive point gains in efficiency and speed; collectively, the organisation’s productivity statistics barely move. 
Economic analyses also show that <a class="link" href="https://www.mckinsey.com/capabilities/tech-and-ai/our-insights/the-new-economics-of-enterprise-technology-in-an-ai-world?utm_source=dispatches.alanbrown.net&utm_medium=newsletter&utm_campaign=digital-economy-dispatch-261-ai-economics-101" target="_blank" rel="noopener noreferrer nofollow">AI benefits flow disproportionately to firms with strong digital infrastructure and high‑performing tech organisations</a>, widening the gap between digital leaders and laggards.<span style="font-family:Arial, sans-serif;">​</span></p><h2 class="heading" style="text-align:left;" id="the-technology-of-business-changing"><b>The Technology of Business: Changing your value proposition</b></h2><p class="paragraph" style="text-align:left;">For individual enterprises, the first economic question for AI is not “How much can we save?” but <a class="link" href="https://www.mckinsey.com/capabilities/tech-and-ai/our-insights/the-new-economics-of-enterprise-technology-in-an-ai-world?utm_source=dispatches.alanbrown.net&utm_medium=newsletter&utm_campaign=digital-economy-dispatch-261-ai-economics-101" target="_blank" rel="noopener noreferrer nofollow">“How does AI change what customers value and what we can uniquely offer?”</a>. 
AI shifts the production function: it does not just automate tasks, it changes the mix of <a class="link" href="https://www.computer.org/publications/tech-news/trends/economics-of-ai?utm_source=dispatches.alanbrown.net&utm_medium=newsletter&utm_campaign=digital-economy-dispatch-261-ai-economics-101" target="_blank" rel="noopener noreferrer nofollow">human judgment, data, and computation that creates value in a product or service</a>.<span style="font-family:Arial, sans-serif;">​</span></p><p class="paragraph" style="text-align:left;">This shows up in several tangible ways:</p><ul><li><p class="paragraph" style="text-align:left;"><a class="link" href="https://www.mckinsey.com/capabilities/quantumblack/our-insights/the-state-of-ai?utm_source=dispatches.alanbrown.net&utm_medium=newsletter&utm_campaign=digital-economy-dispatch-261-ai-economics-101" target="_blank" rel="noopener noreferrer nofollow">Personalisation and prediction</a> enable new pricing, bundling, and risk‑sharing models, particularly in data‑rich industries such as finance, retail, and logistics.<span style="font-family:Arial, sans-serif;">​</span></p></li><li><p class="paragraph" style="text-align:left;"><a class="link" href="https://www.mckinsey.com/capabilities/tech-and-ai/our-insights/the-new-economics-of-enterprise-technology-in-an-ai-world?utm_source=dispatches.alanbrown.net&utm_medium=newsletter&utm_campaign=digital-economy-dispatch-261-ai-economics-101" target="_blank" rel="noopener noreferrer nofollow">AI‑enabled workflows</a> redistribute work between frontline staff, specialists, and machines, forcing a rethink of where you want distinctively human capability and where “good enough” automation is sufficient.<span style="font-family:Arial, sans-serif;">​</span></p></li><li><p class="paragraph" style="text-align:left;">The most powerful use cases often require <a class="link" 
href="https://www.mckinsey.com/capabilities/tech-and-ai/our-insights/the-new-economics-of-enterprise-technology-in-an-ai-world?utm_source=dispatches.alanbrown.net&utm_medium=newsletter&utm_campaign=digital-economy-dispatch-261-ai-economics-101" target="_blank" rel="noopener noreferrer nofollow">cross‑functional, end‑to‑end transformation</a>, not bolt‑on tools, which means the economic impact shows up in user journeys and customer outcomes rather than in individual departmental budgets.<span style="font-family:Arial, sans-serif;">​</span></p></li></ul><p class="paragraph" style="text-align:left;">Without a clear line of sight from AI use to <a class="link" href="https://www.mckinsey.com/capabilities/tech-and-ai/our-insights/the-new-economics-of-enterprise-technology-in-an-ai-world?utm_source=dispatches.alanbrown.net&utm_medium=newsletter&utm_campaign=digital-economy-dispatch-261-ai-economics-101" target="_blank" rel="noopener noreferrer nofollow">value proposition and business models</a>, organisations fall into <a class="link" href="https://www.mckinsey.com/capabilities/tech-and-ai/our-insights/the-new-economics-of-enterprise-technology-in-an-ai-world?utm_source=dispatches.alanbrown.net&utm_medium=newsletter&utm_campaign=digital-economy-dispatch-261-ai-economics-101" target="_blank" rel="noopener noreferrer nofollow">“AI pilot purgatory”</a>: scattered experiments that cost real money but never scale enough to change the economics of the business<span style="font-family:Arial, sans-serif;">, with </span><span style="font-family:Arial, sans-serif;"><a class="link" href="https://hbr.org/2025/09/what-companies-with-successful-ai-pilots-do-differently?utm_source=dispatches.alanbrown.net&utm_medium=newsletter&utm_campaign=digital-economy-dispatch-261-ai-economics-101" target="_blank" rel="noopener noreferrer nofollow">dire financial consequences</a></span><span style="font-family:Arial, sans-serif;">.</span></p><h2 class="heading" style="text-align:left;" 
id="the-business-of-technology-who-pays"><b>The Business of Technology: Who pays for all this?</b></h2><p class="paragraph" style="text-align:left;">The second economic question is about dependence: <a class="link" href="https://www.mckinsey.com/capabilities/tech-and-ai/our-insights/the-new-economics-of-enterprise-technology-in-an-ai-world?utm_source=dispatches.alanbrown.net&utm_medium=newsletter&utm_campaign=digital-economy-dispatch-261-ai-economics-101" target="_blank" rel="noopener noreferrer nofollow">how are your AI providers funding their own ambitions, and what does that mean for you over a five‑ to ten‑year horizon?</a> Big Tech’s AI investments are currently subsidised by <a class="link" href="https://www.mckinsey.com/capabilities/tech-and-ai/our-insights/the-top-trends-in-tech?utm_source=dispatches.alanbrown.net&utm_medium=newsletter&utm_campaign=digital-economy-dispatch-261-ai-economics-101" target="_blank" rel="noopener noreferrer nofollow">high‑margin legacy businesses, generous capital markets, and aggressive expectations of future demand for AI services and agents</a>.<span style="font-family:Arial, sans-serif;">​</span></p><p class="paragraph" style="text-align:left;">For enterprise customers, this creates several strategic risks:</p><ul><li><p class="paragraph" style="text-align:left;"><a class="link" href="https://www.mckinsey.com/capabilities/tech-and-ai/our-insights/the-top-trends-in-tech?utm_source=dispatches.alanbrown.net&utm_medium=newsletter&utm_campaign=digital-economy-dispatch-261-ai-economics-101" target="_blank" rel="noopener noreferrer nofollow">Pricing power</a>: as usage grows and consolidated providers seek returns on multi‑trillion‑dollar infrastructure bets, per‑unit AI costs may rise faster than many business cases assume, especially for intensive use of frontier models.<span style="font-family:Arial, sans-serif;">​</span></p></li><li><p class="paragraph" style="text-align:left;"><a class="link" 
href="https://www.mckinsey.com/capabilities/tech-and-ai/our-insights/the-new-economics-of-enterprise-technology-in-an-ai-world?utm_source=dispatches.alanbrown.net&utm_medium=newsletter&utm_campaign=digital-economy-dispatch-261-ai-economics-101" target="_blank" rel="noopener noreferrer nofollow">Lock‑in economics</a>: proprietary models, data‑gravity, and specialised tooling can make switching platforms increasingly expensive, turning today’s discounts and credits into tomorrow’s margin squeeze.<span style="font-family:Arial, sans-serif;">​</span></p></li><li><p class="paragraph" style="text-align:left;"><a class="link" href="https://arxiv.org/pdf/2409.13168.pdf?utm_source=dispatches.alanbrown.net&utm_medium=newsletter&utm_campaign=digital-economy-dispatch-261-ai-economics-101" target="_blank" rel="noopener noreferrer nofollow">Systemic risk</a>: if AI‑driven valuations outrun realised profits for too long, corrections in tech markets can rapidly change vendors’ investment horizons, partnership priorities, and risk appetite.<span style="font-family:Arial, sans-serif;">​</span></p></li></ul><p class="paragraph" style="text-align:left;">Analyses of AI economics point to the benefits for <a class="link" href="https://www.computer.org/publications/tech-news/trends/economics-of-ai?utm_source=dispatches.alanbrown.net&utm_medium=newsletter&utm_campaign=digital-economy-dispatch-261-ai-economics-101" target="_blank" rel="noopener noreferrer nofollow">data‑rich incumbents that can afford the infrastructure, talent, and governance load needed to scale AI</a>. Meanwhile, smaller firms and public sector organisations face much more fragile economics. 
Leaders ignoring this structural imbalance risk betting their transformation on a supply side they do not fully understand.<span style="font-family:Arial, sans-serif;">​</span></p><h2 class="heading" style="text-align:left;" id="lessons-for-leaders-ai-economics-10"><b>Lessons for Leaders: AI Economics 101</b></h2><p class="paragraph" style="text-align:left;">For today’s leaders, <a class="link" href="https://www.mckinsey.com/capabilities/tech-and-ai/our-insights/the-new-economics-of-enterprise-technology-in-an-ai-world?utm_source=dispatches.alanbrown.net&utm_medium=newsletter&utm_campaign=digital-economy-dispatch-261-ai-economics-101" target="_blank" rel="noopener noreferrer nofollow">“the economics of AI” needs to become part of the regular leadership conversation</a>, not a once‑a‑year strategy away‑day topic. That means building fluency in three areas:<span style="font-family:Arial, sans-serif;">​</span></p><ul><li><p class="paragraph" style="text-align:left;"><a class="link" href="https://www.mckinsey.com/capabilities/tech-and-ai/our-insights/the-new-economics-of-enterprise-technology-in-an-ai-world?utm_source=dispatches.alanbrown.net&utm_medium=newsletter&utm_campaign=digital-economy-dispatch-261-ai-economics-101" target="_blank" rel="noopener noreferrer nofollow">Unit economics of AI in your workflows.</a> Know, at a granular level, where AI actually changes cost and revenue curves in your organisation, and where it merely shifts cost from labour to infrastructure.<span style="font-family:Arial, sans-serif;">​</span></p></li><li><p class="paragraph" style="text-align:left;"><a class="link" href="https://hbr.org/2025/11/ai-companies-dont-have-a-profitable-business-model-does-that-matter?utm_source=dispatches.alanbrown.net&utm_medium=newsletter&utm_campaign=digital-economy-dispatch-261-ai-economics-101" target="_blank" rel="noopener noreferrer nofollow">Platform and ecosystem economics.</a> Understand how your core AI providers make money, where their 
incentives align with yours, and where you are implicitly underwriting their long‑term bets.<span style="font-family:Arial, sans-serif;">​</span></p></li><li><p class="paragraph" style="text-align:left;"><a class="link" href="https://www.computer.org/publications/tech-news/trends/economics-of-ai?utm_source=dispatches.alanbrown.net&utm_medium=newsletter&utm_campaign=digital-economy-dispatch-261-ai-economics-101" target="_blank" rel="noopener noreferrer nofollow">Adoption and adjustment dynamics.</a> Appreciate that the main bottlenecks are organisational and institutional, not technical, and invest accordingly in skills, governance, and process redesign.<span style="font-family:Arial, sans-serif;">​</span></p></li></ul><p class="paragraph" style="text-align:left;">The uncomfortable reality is that the numbers probably will not add up for everyone. Some <a class="link" href="https://www.economist.com/finance-and-economics/2025/07/17/why-is-ai-so-slow-to-spread-economics-can-explain?utm_source=dispatches.alanbrown.net&utm_medium=newsletter&utm_campaign=digital-economy-dispatch-261-ai-economics-101" target="_blank" rel="noopener noreferrer nofollow">Big Tech investments will never earn back their cost of capital</a>, and some enterprises will not translate AI enthusiasm into sustainable performance gains. 
The leaders who thrive will be those who treat AI not as a magical productivity engine, but as a profound shift in both <a class="link" href="https://www.mckinsey.com/capabilities/tech-and-ai/our-insights/the-new-economics-of-enterprise-technology-in-an-ai-world?utm_source=dispatches.alanbrown.net&utm_medium=newsletter&utm_campaign=digital-economy-dispatch-261-ai-economics-101" target="_blank" rel="noopener noreferrer nofollow">“the technology of business” and “the business of technology”</a> – and who are willing to do the economic homework that shift demands.<span style="font-family:Arial, sans-serif;">​</span></p><p class="paragraph" style="text-align:left;"> </p></div><div class='beehiiv__footer'><br class='beehiiv__footer__break'><hr class='beehiiv__footer__line'><a target="_blank" class="beehiiv__footer_link" style="text-align: center;" href="https://www.beehiiv.com/?utm_campaign=36281605-3dd0-4af3-8157-4caa03728372&utm_medium=post_rss&utm_source=digital_economy_dispatches">Powered by beehiiv</a></div></div>
  ]]></content:encoded>
</item>

      <item>
  <title>Digital Economy Dispatch #260 -- In Praise of Middle Management</title>
  <description>We&#39;re being told AI will replace large numbers of workers, with &quot;middle managers&quot; a prime target. But, what if AI is not so much a threat to middle managers, but a way to liberate them from administrative drudgery to allow them to focus on what they were actually hired for: leading people.</description>
      <enclosure url="https://media.beehiiv.com/cdn-cgi/image/fit=scale-down,format=auto,onerror=redirect,quality=80/uploads/asset/file/8b783f73-5eb0-4e53-a571-02968f43b4c2/managers-photo-1.jpg" length="123070" type="image/jpeg"/>
  <link>https://dispatches.alanbrown.net/p/digital-economy-dispatch-260-in-praise-of-middle-management</link>
  <guid isPermaLink="true">https://dispatches.alanbrown.net/p/digital-economy-dispatch-260-in-praise-of-middle-management</guid>
  <pubDate>Sun, 30 Nov 2025 08:24:35 +0000</pubDate>
  <atom:published>2025-11-30T08:24:35Z</atom:published>
    <dc:creator>Alan Brown</dc:creator>
  <content:encoded><![CDATA[
    <div class='beehiiv'><style>
  .bh__table, .bh__table_header, .bh__table_cell { border: 1px solid #C0C0C0; }
  .bh__table_cell { padding: 5px; background-color: #FFFFFF; }
  .bh__table_cell p { color: #2D2D2D; font-family: 'Helvetica',Arial,sans-serif !important; overflow-wrap: break-word; }
  .bh__table_header { padding: 5px; background-color:#F1F1F1; }
  .bh__table_header p { color: #2A2A2A; font-family:'Trebuchet MS','Lucida Grande',Tahoma,sans-serif !important; overflow-wrap: break-word; }
</style><div class='beehiiv__body'><p class="paragraph" style="text-align:left;">I’m getting a little annoyed by the continual stream of reports about how AI is laying waste to today’s jobs. According to <a class="link" href="https://edition.cnn.com/2025/05/29/tech/ai-anthropic-ceo-dario-amodei-unemployment?utm_source=dispatches.alanbrown.net&utm_medium=newsletter&utm_campaign=digital-economy-dispatch-260-in-praise-of-middle-management" target="_blank" rel="noopener noreferrer nofollow">the common narrative</a>, AI will destroy entry-level jobs and hollow out the corporate hierarchy, with middle managers (those much-maligned figures sandwiched between strategy and execution) being first against the wall. After all, if AI can summarise reports, track performance metrics, and coordinate workflows, what exactly is left for them to do?</p><p class="paragraph" style="text-align:left;">This story is compelling. It&#39;s also wrong.</p><p class="paragraph" style="text-align:left;">The reality emerging from organisations moving down the path of deploying AI at scale tells a different tale. Yes, there is a major shift in the skills required in organizations. Administrative tasks are being displaced. Yet, <a class="link" href="https://hbr.org/2025/07/how-ai-is-redefining-managerial-roles?utm_source=dispatches.alanbrown.net&utm_medium=newsletter&utm_campaign=digital-economy-dispatch-260-in-praise-of-middle-management" target="_blank" rel="noopener noreferrer nofollow">middle managers are becoming more essential than ever</a>. They&#39;re not AI&#39;s victims. In fact, they may well be its primary beneficiaries and most effective champions. 
But only if we can enable them to focus on what really matters in an AI-augmented world: the people.</p><h2 class="heading" style="text-align:left;" id="the-liberation-dividend"><b>The Liberation Dividend</b></h2><p class="paragraph" style="text-align:left;">Let&#39;s be honest about what middle management has become in many organisations: a grinding cycle of administrative coordination, status updates, and information brokering. Talented people spend their days synthesising reports from below and translating directives from above, their strategic instincts dulled by the sheer volume of operational “busy-work”.</p><p class="paragraph" style="text-align:left;">The data confirms this frustration. <a class="link" href="https://www.mckinsey.com/capabilities/people-and-organizational-performance/our-insights/the-human-side-of-generative-ai-creating-a-path-to-productivity?utm_source=dispatches.alanbrown.net&utm_medium=newsletter&utm_campaign=digital-economy-dispatch-260-in-praise-of-middle-management" target="_blank" rel="noopener noreferrer nofollow">McKinsey research reveals</a> that middle managers currently spend almost half their time on individual contributor and administrative tasks, with only about a quarter devoted to people-related activities. That&#39;s an extraordinary misallocation of experienced talent.</p><p class="paragraph" style="text-align:left;">AI changes this equation fundamentally. When routine information synthesis, scheduling coordination, and progress tracking can be handled automatically, something remarkable happens: Middle managers get their cognitive bandwidth back. The administrative drudgery that consumed their weeks can increasingly be delegated to AI systems, freeing them to do what they were actually hired for -- leading people, solving complex problems, and driving organisational change.</p><p class="paragraph" style="text-align:left;">This isn&#39;t speculative. 
A <a class="link" href="https://www.library.hbs.edu/working-knowledge/can-ai-help-managers-love-their-jobs-again?utm_source=dispatches.alanbrown.net&utm_medium=newsletter&utm_campaign=digital-economy-dispatch-260-in-praise-of-middle-management" target="_blank" rel="noopener noreferrer nofollow">Harvard Business School study</a> tracking over 50,000 software developers found that generative AI is already helping professionals take on tasks once reserved for managers, freeing those managers from project coordination burdens to focus on higher-value work. As Professor Frank Nagle observes from his research, &quot;You get into a job because you love the core work. And then, as you become more senior, you start doing more management work. This is showing that AI helps people get that balance back closer to what they would prefer it to be.&quot;</p><p class="paragraph" style="text-align:left;">Organisations experimenting with AI-augmented management <a class="link" href="https://www.mckinsey.com/capabilities/people-and-organizational-performance/our-insights/the-organization-blog/middle-managers-hold-the-key-to-unlock-generative-ai?utm_source=dispatches.alanbrown.net&utm_medium=newsletter&utm_campaign=digital-economy-dispatch-260-in-praise-of-middle-management" target="_blank" rel="noopener noreferrer nofollow">are discovering that their middle managers, unburdened from administrative overhead, are generating insights and initiatives that previously never surfaced</a>. 
Then, instead of spending many hours compiling status reports, auditing actions against incoherent company policies, or endlessly chasing project tracking information, they can now spend that time actually talking to their team, understanding blockers, and identifying opportunities to improve quality and increase performance.</p><h2 class="heading" style="text-align:left;" id="the-translation-layer"><b>The Translation Layer</b></h2><p class="paragraph" style="text-align:left;">At the root of the claim that &quot;AI will replace middle managers&quot; is a misunderstanding about the key role they play in driving (or obstructing) project delivery. They are the critical human layer responsible for synthesizing strategic goals with ground-level realities, managing exceptions, and ensuring cross-functional alignment. These essential tasks require judgment, context, and interpersonal skills beyond mere data processing.</p><p class="paragraph" style="text-align:left;">The effectiveness and health of the middle management layer serve <a class="link" href="https://www.deloitte.com/us/en/insights/topics/talent/human-capital-trends/2025/future-of-the-middle-manager.html?utm_source=dispatches.alanbrown.net&utm_medium=newsletter&utm_campaign=digital-economy-dispatch-260-in-praise-of-middle-management" target="_blank" rel="noopener noreferrer nofollow">as a vital predictor of an organization&#39;s overall performance</a>. When this crucial tier is healthy and functioning effectively (with managers engaged and strategically focused), the organization is well-positioned for success and strong project delivery. Conversely, when the middle management layer is functioning poorly due to bottlenecks, administrative overload, or a lack of clarity, the entire system struggles, and organizational coherence and performance quickly fall apart.</p><p class="paragraph" style="text-align:left;">Senior leadership can set strategy. AI can process information and execute defined tasks. 
But neither can navigate the messy, contextual, deeply human work of translating strategic intent into operational reality. That translation requires understanding both the vision from above and the constraints from below. It requires reading the room, knowing which battles to fight, and adapting corporate mandates to local conditions.</p><p class="paragraph" style="text-align:left;">This is precisely where middle managers excel, and it&#39;s work that becomes more valuable, not less, as AI capabilities expand. The gap between what AI can theoretically accomplish and what it actually delivers in complex organisational environments is substantial. Bridging that gap requires experienced practitioners who understand both the technology&#39;s potential and the organisation&#39;s realities.</p><p class="paragraph" style="text-align:left;">Middle managers are uniquely positioned to identify where AI can add genuine value versus where it creates more problems than it solves. They know which processes are genuinely ripe for automation and which require human judgment that no current AI can replicate. They understand the informal networks and unwritten rules that determine whether any initiative succeeds or fails.</p><p class="paragraph" style="text-align:left;">McKinsey&#39;s research <a class="link" href="https://www.mckinsey.com/capabilities/people-and-organizational-performance/our-insights/the-human-side-of-generative-ai-creating-a-path-to-productivity?utm_source=dispatches.alanbrown.net&utm_medium=newsletter&utm_campaign=digital-economy-dispatch-260-in-praise-of-middle-management" target="_blank" rel="noopener noreferrer nofollow">points to this evolving reality</a>. In a gen-AI-enabled world, middle managers could significantly reduce hours spent on non-people-related activities and reallocate that time toward supporting direct reports and engaging in broader strategy concerns. 
The middle manager&#39;s job will evolve to managing both people and the use of this technology to enhance their output. In other words, AI becomes another capability to be orchestrated, and orchestration is precisely what effective middle managers do.</p><p class="paragraph" style="text-align:left;">As a result, the organisations seeing real returns from AI aren&#39;t those that deployed it to replace human judgment. <a class="link" href="https://business.adobe.com/blog/perspectives/how-to-strike-a-balance-between-relying-on-ai-and-emphasizing-a-human-touch?utm_source=dispatches.alanbrown.net&utm_medium=newsletter&utm_campaign=digital-economy-dispatch-260-in-praise-of-middle-management" target="_blank" rel="noopener noreferrer nofollow">They&#39;re those who deployed it to amplify human capability</a>, and middle managers are where that amplification matters most.</p><h2 class="heading" style="text-align:left;" id="the-path-forward"><b>The Path Forward</b></h2><p class="paragraph" style="text-align:left;">None of this happens automatically. The middle managers who thrive in an AI-enabled organisation won&#39;t be those who cling to information gatekeeping or administrative control as sources of authority. They will be those who embrace AI as a tool for enhanced effectiveness and use their liberated capacity to provide what no AI can: genuine leadership. Is that what we’ll see in practice?</p><p class="paragraph" style="text-align:left;">Maybe. But only if there is investment in training, in tools, and in organisational redesign to allow middle managers to operate differently. It requires senior leaders to recognise that their middle management layer isn&#39;t overhead to be eliminated but capacity to be enabled.</p><p class="paragraph" style="text-align:left;">The companies that understand this will build formidable competitive advantages. 
Those who mistake cost-cutting for transformation will discover they&#39;ve eliminated the very people who could have made their AI investments pay off.</p><p class="paragraph" style="text-align:left;">Middle managers have endured decades of derision. Perhaps it&#39;s time to recognise them for what they actually are: the leaders who translate intention into action, the bridge between strategy and execution, and increasingly, the key to unlocking AI&#39;s genuine potential across the enterprise.</p><p class="paragraph" style="text-align:left;"></p></div><div class='beehiiv__footer'><br class='beehiiv__footer__break'><hr class='beehiiv__footer__line'><a target="_blank" class="beehiiv__footer_link" style="text-align: center;" href="https://www.beehiiv.com/?utm_campaign=78650565-0215-42bd-824f-ae34dc6b4b52&utm_medium=post_rss&utm_source=digital_economy_dispatches">Powered by beehiiv</a></div></div>
  ]]></content:encoded>
</item>

      <item>
  <title>Digital Economy Dispatch #259 -- Untangling the Myth of &quot;Could&quot; vs &quot;Should&quot; in AI Decision-Making</title>
  <description>AI is blurring the lines in decision making. As you evolve your AI strategy, if you still think AI handles the “could&quot;, while humans control the “should”, you may be leading your organization astray.</description>
      <enclosure url="https://media.beehiiv.com/cdn-cgi/image/fit=scale-down,format=auto,onerror=redirect,quality=80/uploads/asset/file/86c16e45-115b-437b-bd60-3277d5de9cf1/decision-making-photo.jpg" length="161331" type="image/jpeg"/>
  <link>https://dispatches.alanbrown.net/p/digital-economy-dispatch-259-untangling-the-myth-of-could-vs-should-in-ai-decision-making</link>
  <guid isPermaLink="true">https://dispatches.alanbrown.net/p/digital-economy-dispatch-259-untangling-the-myth-of-could-vs-should-in-ai-decision-making</guid>
  <pubDate>Sun, 23 Nov 2025 08:23:08 +0000</pubDate>
  <atom:published>2025-11-23T08:23:08Z</atom:published>
    <dc:creator>Alan Brown</dc:creator>
  <content:encoded><![CDATA[
    <div class='beehiiv'><style>
  .bh__table, .bh__table_header, .bh__table_cell { border: 1px solid #C0C0C0; }
  .bh__table_cell { padding: 5px; background-color: #FFFFFF; }
  .bh__table_cell p { color: #2D2D2D; font-family: 'Helvetica',Arial,sans-serif !important; overflow-wrap: break-word; }
  .bh__table_header { padding: 5px; background-color:#F1F1F1; }
  .bh__table_header p { color: #2A2A2A; font-family:'Trebuchet MS','Lucida Grande',Tahoma,sans-serif !important; overflow-wrap: break-word; }
</style><div class='beehiiv__body'><p class="paragraph" style="text-align:left;">When does AI assistance shift from being a helpful aid in human decision-making to becoming the invisible hand driving our choices? At what point does AI stop supporting our decisions and start making them on our behalf?</p><p class="paragraph" style="text-align:left;">Many senior leaders I speak with act as if these lines are clearly drawn. They&#39;re confident in their statements about where human judgment begins and AI assistance ends. &quot;We use AI for analysis&quot;, they say, &quot;but we make the decisions&quot;. I&#39;m not so sure. In fact, I&#39;m increasingly convinced that this false certainty is becoming one of the biggest risks facing leaders in many organizations today.</p><p class="paragraph" style="text-align:left;">Consider how decisions actually unfold in AI-augmented organizations. A strategy team asks AI to analyze market opportunities, say. But unless very carefully controlled, the AI doesn&#39;t just crunch numbers; it defines what counts as an &quot;opportunity&quot;, weights different factors, and presents options within frameworks it has learned, been given, or simply invented. By the time three strategic choices reach the boardroom, thousands of micro-judgments have already been made about what&#39;s realistic, what&#39;s valuable, and what&#39;s worth considering. The humans making the &quot;final decision&quot; are choosing from a menu they didn&#39;t write, filtered in ways they’d struggle to describe, using criteria that nobody agreed on, based on an understanding of context that’s both narrow and flawed.</p><p class="paragraph" style="text-align:left;">Perhaps these limitations can be easily overcome if you’re redesigning a marketing strategy or planning a product launch. The time and opportunities for human review may be clear and obvious. 
But what about AI use in more urgent, short-term decisions that are automated within hidden workflows, buried in the products and services you offer, or implicit in the vast array of everyday mission-critical actions that drive the organization?</p><h2 class="heading" style="text-align:left;" id="could-vs-should"><b>Could vs Should</b></h2><p class="paragraph" style="text-align:left;">The comfortable conversation hasn&#39;t changed much in recent years. &quot;AI will show us what we <i>could</i> do&quot;, the executive says confidently, &quot;but humans will always decide what we <i>should</i> do.&quot; Heads nod around the table. The moral authority remains safely in human hands. The machines merely process: we decide. Really?</p><p class="paragraph" style="text-align:left;">Of course, the intent is clear and reasonable: digital technology works in the background while humans maintain oversight and control. I fear that, as AI adoption spreads, this comfortable theoretical distinction increasingly cannot be made in practice. More than that, it&#39;s becoming a dangerous myth for organizations navigating AI transformation. The reality emerging from AI deployment at scale <a class="link" href="https://sorelle.friedler.net/papers/sts_fat2019.pdf?utm_source=dispatches.alanbrown.net&utm_medium=newsletter&utm_campaign=digital-economy-dispatch-259-untangling-the-myth-of-could-vs-should-in-ai-decision-making" target="_blank" rel="noopener noreferrer nofollow">reveals something far more subtle and complex</a>: the line between computational analysis and value-based judgment has become irretrievably blurred.</p><h2 class="heading" style="text-align:left;" id="im-sorry-dave-i-cant-let-you-do-tha"><b>I’m Sorry Dave…I Can’t Let You Do That</b></h2><p class="paragraph" style="text-align:left;">Consider how modern AI systems function in organizational decision-making. 
When an AI system recommends a course of action such as restructuring supply chains to optimize for resilience over efficiency, it has already made value judgments about risk tolerance, stakeholder priorities, contextual secondary effects, and time horizons. These aren&#39;t neutral calculations. They&#39;re decisions about what matters, embedded in algorithms through weighted predictions, supplied training data, inferred optimization targets, and a million different architectural choices.</p><p class="paragraph" style="text-align:left;">The basic &quot;could/should&quot; framework assumes AI systems present option sets like items on a menu, with humans selecting based on values and judgment. However, achieving this requires incredible discipline and skill in how the AI tools are used, discipline that is <a class="link" href="https://cybermaniacs.com/cm-blog/rubber-stamp-risk-why-human-oversight-can-become-false-confidence?utm_source=dispatches.alanbrown.net&utm_medium=newsletter&utm_campaign=digital-economy-dispatch-259-untangling-the-myth-of-could-vs-should-in-ai-decision-making" target="_blank" rel="noopener noreferrer nofollow">sadly lacking in the vast majority of situations</a> in which these capabilities are being applied. When AI is your preferred hammer, every problem starts looking like a nail.</p><p class="paragraph" style="text-align:left;">As a result, AI systems don&#39;t generate neutral possibility spaces. Every parameter, every training dataset, and every reward function <a class="link" href="https://www.frontiersin.org/journals/artificial-intelligence/articles/10.3389/frai.2024.1320277/full?utm_source=dispatches.alanbrown.net&utm_medium=newsletter&utm_campaign=digital-economy-dispatch-259-untangling-the-myth-of-could-vs-should-in-ai-decision-making" target="_blank" rel="noopener noreferrer nofollow">encodes human values</a>. 
Often this happens unconsciously, frequently inconsistently, and usually at the hands of developers worlds away from your organizational context. When AI presents you with three strategic options, it has already eliminated thousands of others based on embedded assumptions about feasibility, desirability, and viability, many of which you may want to question based on your experience, judgment, and knowledge of the operating environment.</p><h2 class="heading" style="text-align:left;" id="more-decisions-more-often"><b>More Decisions, More Often</b></h2><p class="paragraph" style="text-align:left;">More troublingly, the sheer complexity and speed of AI-mediated environments <a class="link" href="https://journals.sagepub.com/doi/10.1177/0894439320980118?utm_source=dispatches.alanbrown.net&utm_medium=newsletter&utm_campaign=digital-economy-dispatch-259-untangling-the-myth-of-could-vs-should-in-ai-decision-making" target="_blank" rel="noopener noreferrer nofollow">make pure human judgment increasingly impossible</a>. Algorithmic trading systems execute thousands of transactions per second; content moderation systems process millions of posts daily; predictive maintenance systems monitor countless sensor streams simultaneously. So, where exactly does human judgment intervene? We&#39;ve already ceded the &quot;should&quot; to machines in countless micro-decisions that will aggregate into macro-consequences.</p><p class="paragraph" style="text-align:left;">The healthcare sector illustrates this very clearly. <a class="link" href="https://pmc.ncbi.nlm.nih.gov/articles/PMC11582495/?utm_source=dispatches.alanbrown.net&utm_medium=newsletter&utm_campaign=digital-economy-dispatch-259-untangling-the-myth-of-could-vs-should-in-ai-decision-making" target="_blank" rel="noopener noreferrer nofollow">AI diagnostic systems don&#39;t just identify possible conditions</a>; they rank them by probability, recommend treatment pathways, and even suggest resource allocation. 
These actions are based on encoded medical ethics, liability concerns, and cost-benefit analyses. The radiologist reviewing an AI-flagged scan isn&#39;t making decisions in isolation but <a class="link" href="https://pmc.ncbi.nlm.nih.gov/articles/PMC6268174/?utm_source=dispatches.alanbrown.net&utm_medium=newsletter&utm_campaign=digital-economy-dispatch-259-untangling-the-myth-of-could-vs-should-in-ai-decision-making" target="_blank" rel="noopener noreferrer nofollow">within a framework pre-structured by algorithmic judgments about what deserves attention</a>.</p><p class="paragraph" style="text-align:left;">Thankfully, in medical systems the governance processes and regulations around the use of AI systems are pretty robust (although not faultless). But outside this domain, many areas lack any such controls. Think about what happens in your own organization, from hiring practices to supply chain management, and ask yourself how well AI use is being controlled. Worried yet?</p><h2 class="heading" style="text-align:left;" id="toward-a-new-ai-decision-making-fra"><b>Toward a New AI Decision Making Framework</b></h2><p class="paragraph" style="text-align:left;">This isn&#39;t to suggest human judgment becomes irrelevant; quite the opposite. We need frameworks that acknowledge the genuine nature of human-AI decision-making: deeply collaborative, mutually supportive, and inseparably hybrid. 
The question isn&#39;t whether humans or AI should make decisions, but how to design decision systems that leverage both effectively while maintaining accountability, adaptability, and alignment with organizational values.</p><p class="paragraph" style="text-align:left;">First, we must recognize that <a class="link" href="https://link.springer.com/article/10.1007/s11023-020-09537-4?utm_source=dispatches.alanbrown.net&utm_medium=newsletter&utm_campaign=digital-economy-dispatch-259-untangling-the-myth-of-could-vs-should-in-ai-decision-making" target="_blank" rel="noopener noreferrer nofollow">all AI systems embody values through their design</a>. Organizations need robust processes for interrogating these embedded values, understanding their provenance, and actively shaping them. This means involving ethicists, domain experts, and affected stakeholders in system design, not just data scientists and engineers.</p><p class="paragraph" style="text-align:left;">Second, we need new models of distributed decision authority that map different types of decisions to appropriate human-AI configurations. <a class="link" href="https://www.igminresearch.com/articles/html/igmin158?utm_source=dispatches.alanbrown.net&utm_medium=newsletter&utm_campaign=digital-economy-dispatch-259-untangling-the-myth-of-could-vs-should-in-ai-decision-making" target="_blank" rel="noopener noreferrer nofollow">Some decisions require human creativity and moral inputs; others benefit from AI&#39;s pattern recognition and consistency</a>. Many require both, in carefully orchestrated processes. 
The challenge lies not in drawing rigid boundaries but in creating flexible, context-aware frameworks for decision delegation.</p><p class="paragraph" style="text-align:left;">Third, organizations must develop new competencies in what we might call &quot;<a class="link" href="https://www.tandfonline.com/doi/full/10.1080/19322909.2024.2395341?utm_source=dispatches.alanbrown.net&utm_medium=newsletter&utm_campaign=digital-economy-dispatch-259-untangling-the-myth-of-could-vs-should-in-ai-decision-making" target="_blank" rel="noopener noreferrer nofollow">algorithmic literacy</a>&quot;: the ability to understand not just what AI systems recommend but how they reach those recommendations, what values they encode, and what blind spots they possess. Senior leaders can no longer treat AI as a black box that delivers neutral analysis; they must understand it as a participant in organizational decision-making with its own inherent biases and limitations.</p><p class="paragraph" style="text-align:left;">Finally, we need governance structures that reflect this new reality. <a class="link" href="https://www.oceg.org/ai-governance-organizations-must-evolve-for-a-new-era/?utm_source=dispatches.alanbrown.net&utm_medium=newsletter&utm_campaign=digital-economy-dispatch-259-untangling-the-myth-of-could-vs-should-in-ai-decision-making" target="_blank" rel="noopener noreferrer nofollow">Traditional approval hierarchies are inadequate for AI</a> because they assume human decision-makers at each level. But when AI systems make millions of micro-decisions that shape macro-outcomes, when recommendation algorithms influence rather than merely inform, when predictive models become self-fulfilling prophecies, then governance must evolve.
This means new forms of algorithmic auditing, continuous monitoring of decision outcomes, and kill switches for when human intervention becomes necessary.</p><p class="paragraph" style="text-align:left;">The organizations that will thrive in AI-dominated environments won&#39;t be those that maintain an artificial distinction between computational &quot;could&quot; and human &quot;should&quot;. They&#39;ll be those that develop sophisticated frameworks for human-AI collaboration, recognizing that values and judgments are distributed throughout sociotechnical systems, not localized in human minds.</p><p class="paragraph" style="text-align:left;">The question facing senior leaders isn&#39;t whether to preserve human decision-making authority. That need is clear. Instead, it&#39;s how to exercise that authority effectively in a world where the very nature of decision-making has fundamentally changed. The comfortable position of &quot;AI proposes, human disposes&quot; must give way to the complex reality of hybrid intelligence. Only then can organizations harness AI&#39;s transformative potential while maintaining the accountability and wisdom their stakeholders demand.</p></div><div class='beehiiv__footer'><br class='beehiiv__footer__break'><hr class='beehiiv__footer__line'><a target="_blank" class="beehiiv__footer_link" style="text-align: center;" href="https://www.beehiiv.com/?utm_campaign=f24105e4-bcb9-4624-bee2-f60636c98a3c&utm_medium=post_rss&utm_source=digital_economy_dispatches">Powered by beehiiv</a></div></div>
  ]]></content:encoded>
</item>

      <item>
  <title>Digital Economy Dispatch #258 -- Why GenAI is Driving Me Crazy!</title>
  <description></description>
      <enclosure url="https://media.beehiiv.com/cdn-cgi/image/fit=scale-down,format=auto,onerror=redirect,quality=80/uploads/asset/file/e8336782-d437-4c7f-b0c1-e1ab3ce5337d/crazy-photo-1.jpg" length="96101" type="image/jpeg"/>
  <link>https://dispatches.alanbrown.net/p/digital-economy-dispatch-258-why-genai-is-driving-me-crazy</link>
  <guid isPermaLink="true">https://dispatches.alanbrown.net/p/digital-economy-dispatch-258-why-genai-is-driving-me-crazy</guid>
  <pubDate>Sun, 16 Nov 2025 08:24:05 +0000</pubDate>
  <atom:published>2025-11-16T08:24:05Z</atom:published>
    <dc:creator>Alan Brown</dc:creator>
  <content:encoded><![CDATA[
    <div class='beehiiv'><style>
  .bh__table, .bh__table_header, .bh__table_cell { border: 1px solid #C0C0C0; }
  .bh__table_cell { padding: 5px; background-color: #FFFFFF; }
  .bh__table_cell p { color: #2D2D2D; font-family: 'Helvetica',Arial,sans-serif !important; overflow-wrap: break-word; }
  .bh__table_header { padding: 5px; background-color:#F1F1F1; }
  .bh__table_header p { color: #2A2A2A; font-family:'Trebuchet MS','Lucida Grande',Tahoma,sans-serif !important; overflow-wrap: break-word; }
</style><div class='beehiiv__body'><p class="paragraph" style="text-align:left;">Over the past couple of weeks, I have found myself wrestling (sometimes productively, often painfully) with the growing ecosystem of GenAI tools meant to make my life easier. My intent is simple: to use AI as a companion in researching, writing, and analysing the complex materials that underpin most of my projects. The outcome has been enlightening but also disorienting. These tools, without question, are powerful assistants. Yet, the more deeply I weave them into my workflow, the more I encounter a creeping sense of fragmentation, inconsistency, and loss of control. I think I may be losing my mind!</p><p class="paragraph" style="text-align:left;">Three issues are starting to dominate this experience. Each speaks to the deeper management and governance questions that we confront as AI becomes a more integral part of our professional and creative lives.</p><h3 class="heading" style="text-align:left;" id="1-the-gen-ai-mosaic-problem-too-man"><b>1. The GenAI Mosaic Problem: Too many tools, too little coherence</b></h3><p class="paragraph" style="text-align:left;">At this point, my working environment includes well over half a dozen GenAI tools (each with several model variants), including ChatGPT, Claude, Gemini, Perplexity, Cursor, Grok, and a few others. Each offers a slightly different way of thinking, writing, or reasoning. Each promises some breakthrough combination of intelligence and efficiency. Yet none of them behaves the same way.</p><p class="paragraph" style="text-align:left;">A task that one tool performs flawlessly might completely baffle another. One generates articulate summaries but fails at factual consistency. Another is stronger analytically but struggles with tone or syntax. All of them make mistakes -- sometimes trivial, sometimes catastrophic -- but never in consistent patterns. 
Furthermore, the errors drift, shift, and mutate in ways I struggle to follow.</p><p class="paragraph" style="text-align:left;">The sheer variety of available GenAI tools at first sounds like a luxury. In practice, it introduces a new admin problem: GenAI tool management. I now spend almost as much time experimenting to work out which tool to use as I do on the actual work. It has become a constant balancing act of functionality, reliability, and interpretability. And I can’t seem to find a stable pattern.</p><p class="paragraph" style="text-align:left;">This raises a strategic question. As AI systems proliferate inside organisations, who will design the frameworks that coordinate them? More importantly, who will own and maintain the “logic of selection” so we understand the rationale behind which AI is trusted to do what? This is not just a technical question; it’s a consistency, responsibility, and governance one.</p><h3 class="heading" style="text-align:left;" id="2-the-gen-ai-reliability-problem-pr"><b>2. The GenAI Reliability Problem: Progress that collapses without warning</b></h3><p class="paragraph" style="text-align:left;">But it gets worse. Just when I believe I am getting somewhere, the tools collapse -- sometimes figuratively, sometimes literally. A carefully constructed prompt chain will suddenly fail because a model version has been updated. A session times out. Usage limits are reached. The interface locks me out with a cheerful yet unhelpful “something went wrong” message. Thanks!</p><p class="paragraph" style="text-align:left;">Even when the systems technically function, their internal behaviour changes subtly over time. A phrase that once produced a rigorous, well-written business analysis might now yield a list of random bullets. A model that previously understood structured referencing now reinterprets the same command in a new way, inventing citations that don’t exist.
There is no stable baseline of reliability or accountability.</p><p class="paragraph" style="text-align:left;">In traditional business terms, this would be catastrophic. Imagine any other supplier who unpredictably changes how they deliver services without notice or recourse. Or whose product operates differently each time you use it. Yet with AI tools, we’ve normalized this volatility as “innovation” and “learning from experience”. The consequence is not merely irritation; it destroys trust.</p><p class="paragraph" style="text-align:left;">So, what does “process control” look like when the tools themselves evolve faster than the workflows built around them? This is an emerging leadership question. Managing people was about setting expectations and ensuring repeatability. Managing AI systems will demand a new mentality that acknowledges fluctuation and uncertainty as the norm, not the exception.</p><h3 class="heading" style="text-align:left;" id="3-the-gen-ai-provenance-problem-whe"><b>3. The GenAI Provenance Problem: When origin and authorship disappear</b></h3><p class="paragraph" style="text-align:left;">Of course, I’m finding lots of ways that GenAI tools are speeding up tasks. So, I find I’m drawn into using them more and more. However, the deeper I integrate multiple AI systems into my workflow, the harder it becomes to trace how any specific piece of content was created. Sections of reports, ideas for frameworks, or fragments of analysis now flow from tool to tool in an increasingly opaque process. When something finally looks brilliant, I’m often unable to say <i>how</i> it came into being, or even <i>who</i> authored it: me, a model, or some unresolved mixture of both.</p><p class="paragraph" style="text-align:left;">That might sound rather an abstract issue, but in a corporate or policy context, it’s highly practical. How do we validate a document or report’s accuracy if we don’t know how the insights were generated? 
How does accountability work in a blended human-AI authorship environment? If asked to reproduce a result or defend a line of reasoning, the trail is gone.</p><p class="paragraph" style="text-align:left;">For me personally, this opacity has created hesitation. Even when I generate strong outputs and reports I consider to be of high quality, I find myself reluctant to release them. If I can’t fully document the provenance or verify the factual lineage, I can’t ethically stand behind the result. The irony is that the more capable AI becomes, the less confidence I have in output integrity. I’m losing track of how much comes from me, and how much comes from the AI tools.</p><h3 class="heading" style="text-align:left;" id="towards-a-new-discipline-of-ai-work"><b>Towards a New Discipline of AI Work</b></h3><p class="paragraph" style="text-align:left;">These experiences have left me with a strange mix of admiration and anxiety. GenAI is an extraordinary assistant and amplifier of my capabilities. I can do much more with these tools than I ever could before. Yet it is also an amplifier of confusion, errors, and cognitive overload. I have, unintentionally, entered what feels like a new phase of “AI‑centric creative ambiguity.” It’s a state where I am more productive than ever, but less certain about what I’m actually producing.</p><p class="paragraph" style="text-align:left;">I don’t think this is just a personal nuisance. It points to a structural gap in modern digital practice. We are missing a discipline of working <i>with</i> GenAI: a set of methods, audit trails, and governance approaches that help creators, analysts, and decision-makers keep track of what happens inside the AI ecosystem they depend on.</p><p class="paragraph" style="text-align:left;">Perhaps this new discipline will resemble quality management systems for AI-assisted processes: something that mixes version control, data governance, and creative attribution.
Or perhaps it will evolve into an entirely new profession: “AI work designers” responsible for ensuring that human‑AI collaboration remains transparent and defensible. I really don’t know.</p><p class="paragraph" style="text-align:left;">Until then, I continue operating in a kind of experimental space, where GenAI’s brilliance and brokenness coexist. Perhaps the key is to recognise that this isn’t a passing inconvenience. It’s an early sign of what happens when intelligence becomes distributed, unstable, and shared. The challenge now is not whether I can use GenAI to produce good outputs quickly, but whether I can use it <i>responsibly</i>, <i>reliably</i>, and <i>repeatably</i>.</p><p class="paragraph" style="text-align:left;">And for now, that remains a work in progress.</p><p class="paragraph" style="text-align:left;"> </p></div><div class='beehiiv__footer'><br class='beehiiv__footer__break'><hr class='beehiiv__footer__line'><a target="_blank" class="beehiiv__footer_link" style="text-align: center;" href="https://www.beehiiv.com/?utm_campaign=218389bc-e722-4685-b492-66938ee1e578&utm_medium=post_rss&utm_source=digital_economy_dispatches">Powered by beehiiv</a></div></div>
  ]]></content:encoded>
</item>

      <item>
  <title>Digital Economy Dispatch #257 -- Who Do We Trust With AI?</title>
  <description>Adopting powerful AI tools demands a critical focus on who owns and controls AI -- a tension very well described in Parmy Olson&#39;s book, Supremacy.</description>
      <enclosure url="https://media.beehiiv.com/cdn-cgi/image/fit=scale-down,format=auto,onerror=redirect,quality=80/uploads/asset/file/e57a43a0-6d3d-4dbd-87de-e0df164a5422/trust-photo-4.jpg" length="59635" type="image/jpeg"/>
  <link>https://dispatches.alanbrown.net/p/digital-economy-dispatch-257-who-do-we-trust-with-ai</link>
  <guid isPermaLink="true">https://dispatches.alanbrown.net/p/digital-economy-dispatch-257-who-do-we-trust-with-ai</guid>
  <pubDate>Sun, 09 Nov 2025 08:23:05 +0000</pubDate>
  <atom:published>2025-11-09T08:23:05Z</atom:published>
    <dc:creator>Alan Brown</dc:creator>
  <content:encoded><![CDATA[
    <div class='beehiiv'><style>
  .bh__table, .bh__table_header, .bh__table_cell { border: 1px solid #C0C0C0; }
  .bh__table_cell { padding: 5px; background-color: #FFFFFF; }
  .bh__table_cell p { color: #2D2D2D; font-family: 'Helvetica',Arial,sans-serif !important; overflow-wrap: break-word; }
  .bh__table_header { padding: 5px; background-color:#F1F1F1; }
  .bh__table_header p { color: #2A2A2A; font-family:'Trebuchet MS','Lucida Grande',Tahoma,sans-serif !important; overflow-wrap: break-word; }
</style><div class='beehiiv__body'><p class="paragraph" style="text-align:left;">Perhaps the most important change I have seen in the last couple of years is how digital technology has moved from the sidelines to the core of company business models. So much so that it now defines how they operate. More and more, our society is shaped by these technologies. Marc Andreessen’s now-famous insight that “<a class="link" href="https://a16z.com/why-software-is-eating-the-world/?utm_source=dispatches.alanbrown.net&utm_medium=newsletter&utm_campaign=digital-economy-dispatch-257-who-do-we-trust-with-ai" target="_blank" rel="noopener noreferrer nofollow">software is eating the world</a>” has become an accepted business principle and a backbone of today’s digital tech community.</p><p class="paragraph" style="text-align:left;">Consequently, as digital technology becomes the foundational layer of modern business and society, it demands we ask a deeper, more critical question about the individuals and organizations behind the key systems we all rely on. Who owns and controls the technology stacks on which businesses and society now depend? How do we choose who to trust with AI’s future? And furthermore, what are the implications if we get these choices wrong?</p><h2 class="heading" style="text-align:left;" id="a-front-row-seat-to-the-ai-arms-rac">A Front-Row Seat to the AI Arms Race</h2><p class="paragraph" style="text-align:left;">With this in mind, I went back to Parmy Olson’s book, <a class="link" href="https://www.amazon.co.uk/Supremacy-ChatGPT-race-changed-world/dp/1035038242?utm_source=dispatches.alanbrown.net&utm_medium=newsletter&utm_campaign=digital-economy-dispatch-257-who-do-we-trust-with-ai" target="_blank" rel="noopener noreferrer nofollow">“Supremacy: AI, ChatGPT, and the Race That Will Change the World”</a>, which rightly earned the Financial Times Business Book of the Year in 2024. 
It confronts this dilemma head-on and provides us with a context to delve deeper into these questions.</p><p class="paragraph" style="text-align:left;">In Olson’s account, we’re placed right at the centre of the tech world’s most important AI race. A contest between OpenAI and DeepMind, their visionary founders, and the forces of venture capital and big business that ultimately have shaped their focus and directions. It’s such a strong narrative, and I found many parallels with my own experience navigating digital transformation: initial optimism colliding with the stark realities of corporate demands, non-stop evolving priorities, and a constant tension between innovation, governance, and the commercial realities of Big Tech.</p><p class="paragraph" style="text-align:left;">I think what appeals most about the book is the way Olson skillfully describes how these labs have moved from their initial, idealistic stance toward commercial necessities required to secure resources. Then, how these morphed into powerful assets of US tech giants. The book describes the technological impacts this has had, and reveals the human and ethical dramas at the heart of decisions being made by all those adopting and scaling AI tools.</p><p class="paragraph" style="text-align:left;">As Olson explores, there are broad consequences of this tension for all of us. 
Not just in terms of the availability of AI tools and services for individuals, but also in how organizations of every size will struggle to define an appropriate AI strategy by relying on one or more of these companies, and find it even harder to stay in alignment with them as their key AI technologies evolve.</p><h2 class="heading" style="text-align:left;" id="lessons-that-hit-close-to-home">Lessons that Hit Close to Home</h2><p class="paragraph" style="text-align:left;">Three themes from Olson’s book resonate especially strongly with my own work and some of the concerns I’ve been writing about recently:</p><ul><li><p class="paragraph" style="text-align:left;"><b>Concentration of Power:</b> The unchecked dominance of a handful of US and Chinese technology firms is not just an abstract business threat. It’s here, now, embedded in the UK’s infrastructure, digital services, and strategic decision-making. We’ve welcomed vast investments, but what are we risking for that privilege?</p></li><li><p class="paragraph" style="text-align:left;"><b>Ethics vs. Scale:</b> Olson’s critique of ethical shortcuts and the prevalence of bias reminds me that our rush to deploy AI can outpace our ability to regulate or even understand it. The seductive pace of innovation tempts us to leave awkward questions for later…sometimes too late.</p></li><li><p class="paragraph" style="text-align:left;"><b>Operational Dependency:</b> As our critical systems rely increasingly on outside providers, we’re constantly evaluating continuity risks. What if these partners pivot, restrict access, face government pressure, or simply fail?
In my own advisory roles, I’ve stressed the importance of describing and mapping these technical dependencies to understand the subtle influence these players exert on our autonomy and values.<span style="font-family:Arial, sans-serif;"> But how much do we really know or understand about where these AI technology providers will take their solutions?</span></p></li></ul><h2 class="heading" style="text-align:left;" id="the-u-ks-ai-crossroads-a-personal-p">The UK’s AI Crossroads, a Personal Perspective</h2><p class="paragraph" style="text-align:left;">Taking a UK perspective, the consequences can be enormous. <a class="link" href="https://d11n7da8rpqbjy.cloudfront.net/digileaders/91027644234UK_AI_Strategy_At_The_Crossroads_Share.pdf?kuid=ff29e127-d629-4cf1-b8bc-02527e3e60a1-1762636242&kref=hauPVJHjw6Za&utm_source=dispatches.alanbrown.net&utm_medium=newsletter&utm_campaign=digital-economy-dispatch-257-who-do-we-trust-with-ai" target="_blank" rel="noopener noreferrer nofollow">As I’ve written previously</a>, the UK stands at a strategic AI crossroads. The choices we make in the coming years will define the sovereignty, security, and resilience of our digital economy for a generation. I’ve argued before, and Olson’s book only sharpens my view, that our “third way” strategy, balancing US-style dynamism and EU-style governance, is as fraught with risk as it is with opportunity. Welcoming US AI technology investment is smart, but only if we maintain clarity on data governance, regulatory independence, and the persistent risks of strategic lock-in. My own recent writing cautions that early choices on investment can quickly become long-term liabilities if we lose sight of who is ultimately calling the shots, leaving us to deal with the consequences.</p><p class="paragraph" style="text-align:left;">These macro-level concerns are echoed in every organization adopting AI.
Digital leaders and policymakers are wrestling with these issues every day. The debate isn’t just policy. It’s deeply personal, shaping how we prepare teams, update frameworks, and educate executives for a world in which control can be illusory and risk ever-present.</p><h2 class="heading" style="text-align:left;" id="advice-from-the-trenches">Advice from the Trenches</h2><p class="paragraph" style="text-align:left;">What does this mean in practice? All of us involved in advocating and supporting digital transformation initiatives must respond to these challenges. Here are a few thoughts on how:</p><p class="paragraph" style="text-align:left;"><b>1. Scrutinize Trust, Don’t Assume It</b><br>Interrogate every strategic partnership: Who benefits? Who controls the data? What happens when interests diverge?</p><p class="paragraph" style="text-align:left;"><b>2. Demand and Demonstrate Transparency</b><br>Push for deeper transparency - not just in technical specifications, but in operational processes and commercial terms. It’s not enough to trust; we must verify.</p><p class="paragraph" style="text-align:left;"><b>3. Balance Innovation Against Sovereignty</b><br>Urge your teams to ask: How much autonomy do we really have, and what are we prepared to trade for speed and innovation?</p><p class="paragraph" style="text-align:left;"><b>4. Foster Open, Honest Debate</b><br>From executive workshops to project kick-offs, make space for uncomfortable truths. Optimism should not eclipse caution, nor allow us to avoid difficult policy or ethical questions.</p><p class="paragraph" style="text-align:left;"><b>5. 
Prepare for Complexity and Divergence</b><br>As regulations and provider policies evolve, coach organizations to expect and adapt to change: compliance, working with different regimes, and surviving technological and operational disruptions.</p><h2 class="heading" style="text-align:left;" id="ai-supremacy-and-its-implications">AI Supremacy and Its Implications</h2><p class="paragraph" style="text-align:left;">Re-reading Olson’s “Supremacy” provides much more than a historical perspective on infighting between today’s AI technology billionaires. It emphasizes the urgency of choices all senior managers must confront today. As the UK pushes forward in AI adoption, my own experience echoes her warning: <i>Be careful who you trust</i>. Every deal, every new system, and every governance compromise helps define the shape of your future autonomy and resilience. And that’s as true for your organization’s AI strategy as it is for the UK as a whole.</p><p class="paragraph" style="text-align:left;">In all the AI technology excitement, it’s easy to forget that with digital technology revolutions, it is essential not just to ride the wave, but to guide the direction with care, vigilance, and humility. The AI revolution promises much, but it is our responsibility to make sure it delivers in a way that preserves the trust, sovereignty, and values we hold dear.</p></div><div class='beehiiv__footer'><br class='beehiiv__footer__break'><hr class='beehiiv__footer__line'><a target="_blank" class="beehiiv__footer_link" style="text-align: center;" href="https://www.beehiiv.com/?utm_campaign=b0938092-5b51-4797-b2a0-1a2369807add&utm_medium=post_rss&utm_source=digital_economy_dispatches">Powered by beehiiv</a></div></div>
  ]]></content:encoded>
</item>

      <item>
  <title>Digital Economy Dispatch #256 -- How Much of That Did You Write?</title>
  <description>Now, AI touches almost everything we read, from academic papers to strategy presentations. The real question we must ask is: how intelligently and responsibly is AI being applied?</description>
      <enclosure url="https://media.beehiiv.com/cdn-cgi/image/fit=scale-down,format=auto,onerror=redirect,quality=80/uploads/asset/file/dcc4fcdd-5a2b-4fa3-9edf-f53dc273f9c3/typewriter-photo-1.jpg" length="68235" type="image/jpeg"/>
  <link>https://dispatches.alanbrown.net/p/digital-economy-dispatch-256-how-much-of-that-did-you-write</link>
  <guid isPermaLink="true">https://dispatches.alanbrown.net/p/digital-economy-dispatch-256-how-much-of-that-did-you-write</guid>
  <pubDate>Sun, 02 Nov 2025 08:23:05 +0000</pubDate>
  <atom:published>2025-11-02T08:23:05Z</atom:published>
    <dc:creator>Alan Brown</dc:creator>
  <content:encoded><![CDATA[
    <div class='beehiiv'><style>
  .bh__table, .bh__table_header, .bh__table_cell { border: 1px solid #C0C0C0; }
  .bh__table_cell { padding: 5px; background-color: #FFFFFF; }
  .bh__table_cell p { color: #2D2D2D; font-family: 'Helvetica',Arial,sans-serif !important; overflow-wrap: break-word; }
  .bh__table_header { padding: 5px; background-color:#F1F1F1; }
  .bh__table_header p { color: #2A2A2A; font-family:'Trebuchet MS','Lucida Grande',Tahoma,sans-serif !important; overflow-wrap: break-word; }
</style><div class='beehiiv__body'><p class="paragraph" style="text-align:left;">I’ve picked up an annoying new habit lately. Every time I read a new report, article, or news item, I find my mind drifting to the same question: <i>how much of that did you actually write</i>? It’s become an obvious question born of experience. Today, I simply assume AI is involved in every piece of content I encounter. In meetings and workshops I attend, it’s become second nature to pause and ask whether this is original work, a clever remix, or the output of a well-tuned language model. And lately, I’m not sure I can tell the difference.</p><p class="paragraph" style="text-align:left;">Yet the more I dwell on this, the more I realise that I may well be asking the wrong question entirely.</p><h2 class="heading" style="text-align:left;" id="from-authorship-to-application-what"><b>From Authorship to Application: What Really Matters?</b></h2><p class="paragraph" style="text-align:left;">These days, AI is everywhere. It is shaping newsletters, policy documents, press releases, and even those “personal” updates we get from industry luminaries and corporate leaders. The genie is out of the bottle, and if you’re expecting a handwritten, human-only narrative, you’re in the wrong era.</p><p class="paragraph" style="text-align:left;">There’s a good chance everything you see has been touched by AI. So, in this context, asking <i>if</i> AI has been used is meaningless. The real challenge today isn’t figuring out whether AI wrote something, but understanding <i>how well</i> AI has been used, and what that means for the value and credibility of what we’re reading.</p><p class="paragraph" style="text-align:left;">Now, when I reflect on any piece of content I’m reading, my concerns shift. Was AI used thoughtfully to summarise, synthesise, and clarify? Was human judgement applied to curate sources, verify facts, and ensure the final piece offers insight or a meaningful perspective? 
Did the process simply automate the bland, superficial, and lowest-common denominator view…or has it added something new, unusual, or unexpected?</p><h2 class="heading" style="text-align:left;" id="provenance-accuracy-and-context-the"><b>Provenance, Accuracy, and Context: The Three Pillars</b></h2><p class="paragraph" style="text-align:left;">This realisation leads me to three guiding principles that I now always keep front-of-mind when reading or reviewing content: provenance, accuracy, and context.</p><ul><li><p class="paragraph" style="text-align:left;"><b>Provenance</b>: Who stands behind the writing and what’s their reason for creating it? What methods, sources, and tools were used and why? Transparency of authorship now counts more than ever, whether it’s human, hybrid, or wholly machine-driven.</p></li><li><p class="paragraph" style="text-align:left;"><b>Accuracy</b>: How has the content been validated? Are the references solid, the claims substantiated, the figures real? In the AI era, it’s easy for plausible nonsense to slip into even authoritative-looking work, so verification rises to the top of my checklist.</p></li><li><p class="paragraph" style="text-align:left;"><b>Context</b>: How does this piece fit into the wider picture of what’s happening? Does it echo established research, contribute new value, or simply repeat the latest trend? With so much regurgitated material, judgement means going beyond the words themselves to assess the motivations, viewpoints, and environment that produced them.</p></li></ul><h2 class="heading" style="text-align:left;" id="lessons-for-leaders-and-decision-ma"><b>Lessons for Leaders and Decision Makers</b></h2><p class="paragraph" style="text-align:left;">These shifts carry real implications for anyone responsible for strategy, governance, or digital transformation. In this AI era, you now bear additional responsibilities for every piece of content you consume, produce, or refer to in your work. 
Here are a few thoughts on how to make sure that you’re up to the task.</p><p class="paragraph" style="text-align:left;">1. <b>Don’t Fixate on the Tool: Assess the Output</b><br>It doesn’t matter whether a human, robot, or committee wrote what you’re reading. What matters is its clarity, relevance, and reliability. Make your judgements based on substance, not origin.</p><p class="paragraph" style="text-align:left;">2. <b>Demand Transparent Sourcing</b><br>Push your teams to be explicit about how content is created and where information comes from. Ask for clear distinctions between AI-generated material, human analysis, and authoritative reference.</p><p class="paragraph" style="text-align:left;">3. <b>Expect New Forms of Peer Review</b><br>Consider how you might build new layers of review and validation, from AI-curated bibliographies to collaborative fact-checking. Accuracy is no longer a given; it’s the result of a systematic process.</p><p class="paragraph" style="text-align:left;">4. <b>Context Is King, But Challenge the Perspective</b><br>When engaging with reports or strategic recommendations, force yourself and your teams to reflect: Does this fit with what we know? Is it supported by real events, verifiable data, actionable insight, and tangible expertise?</p><p class="paragraph" style="text-align:left;">5. <b>Upgrade Digital Literacy</b><br>Equip your organisation not only to use AI, but to read and critique it. Encourage curiosity about techniques, models, and limits.
Make this more mature approach to digital literacy part of your leadership style.</p><h2 class="heading" style="text-align:left;" id="so-did-you-use-ai-for-this"><b>So, Did </b><i><b>You</b></i><b> Use AI For This?</b></h2><p class="paragraph" style="text-align:left;">The question, “how much of that did you write?” belongs to a simpler era that has gone forever. Today, the more relevant challenge is to interrogate what you’re reading for provenance, accuracy, and context, regardless of the blend of human and AI effort involved. Our responsibility is to move away from scepticism and towards discernment, learning to read critically and constructively in an age where intelligence is synthetic, collaborative, and ambiguous by design.</p><p class="paragraph" style="text-align:left;">So, next time you scan an email, report, newsletter, or strategic plan, pause for a moment. Not to speculate on the authorship, but to judge the substance. That’s the leadership skill we need most in the AI era.</p><p class="paragraph" style="text-align:left;">And yes, you should assume that I used AI tools to help me with this article. Your task is to decide how well you think I used it.</p></div><div class='beehiiv__footer'><br class='beehiiv__footer__break'><hr class='beehiiv__footer__line'><a target="_blank" class="beehiiv__footer_link" style="text-align: center;" href="https://www.beehiiv.com/?utm_campaign=952d3567-ae4b-4408-9249-8be5eff499b0&utm_medium=post_rss&utm_source=digital_economy_dispatches">Powered by beehiiv</a></div></div>
  ]]></content:encoded>
</item>

      <item>
  <title>Digital Economy Dispatch #255 -- Fragile AI and How Not to Fail the Resilience Test</title>
  <description>Our reliance on digital technology has left UK enterprises exposed to breakdowns and attacks. Rapid AI adoption will make this worse. Resilience and governance must now be urgent boardroom priorities.</description>
      <enclosure url="https://media.beehiiv.com/cdn-cgi/image/fit=scale-down,format=auto,onerror=redirect,quality=80/uploads/asset/file/66213ee8-eebf-4027-9a4b-8563f3d84e4b/fragile-photo-1.jpg" length="28826" type="image/jpeg"/>
  <link>https://dispatches.alanbrown.net/p/digital-economy-dispatch-255-fragile-ai-and-how-not-to-fail-the-resilience-test</link>
  <guid isPermaLink="true">https://dispatches.alanbrown.net/p/digital-economy-dispatch-255-fragile-ai-and-how-not-to-fail-the-resilience-test</guid>
  <pubDate>Sun, 26 Oct 2025 08:24:19 +0000</pubDate>
  <atom:published>2025-10-26T08:24:19Z</atom:published>
    <dc:creator>Alan Brown</dc:creator>
  <content:encoded><![CDATA[
    <div class='beehiiv'><style>
  .bh__table, .bh__table_header, .bh__table_cell { border: 1px solid #C0C0C0; }
  .bh__table_cell { padding: 5px; background-color: #FFFFFF; }
  .bh__table_cell p { color: #2D2D2D; font-family: 'Helvetica',Arial,sans-serif !important; overflow-wrap: break-word; }
  .bh__table_header { padding: 5px; background-color:#F1F1F1; }
  .bh__table_header p { color: #2A2A2A; font-family:'Trebuchet MS','Lucida Grande',Tahoma,sans-serif !important; overflow-wrap: break-word; }
</style><div class='beehiiv__body'><p class="paragraph" style="text-align:left;">Earlier this week, we experienced the fragility at the heart of our hyper-connected digital world. There I sat, all set to kick off an online webinar, when it became clear something had gone wrong. Just before going live, I realized that Zoom was not working. I just couldn’t connect. My phone started pinging with WhatsApp messages and emails: “Is the link broken?”, “Are you having trouble too?”, “How do I connect?”.</p><p class="paragraph" style="text-align:left;">The effect of a relatively minor cloud infrastructure outage was rippling across online platforms, taking out not just video conferencing but access to cloud files, shared workspaces, and supporting apps. Several hours of chaos followed as we all juggled alternative communication channels, tried to find explanations, or just went to find another cup of coffee and waited for things to get back to normal. Was this a one-off fluke, or is it a taste of what we can expect in our deeply interconnected, AI-dependent world? And perhaps more importantly, is widespread adoption of AI capabilities likely to make this better or worse?</p><h2 class="heading" style="text-align:left;" id="the-race-for-ai-adoption"><b>The Race for AI Adoption</b></h2><p class="paragraph" style="text-align:left;">We are all seeing AI spreading through enterprises at breakneck speed, seemingly much faster than governance frameworks can keep pace. <a class="link" href="https://www.mckinsey.com/capabilities/quantumblack/our-insights/the-state-of-ai-2024?utm_source=dispatches.alanbrown.net&utm_medium=newsletter&utm_campaign=digital-economy-dispatch-255-fragile-ai-and-how-not-to-fail-the-resilience-test" target="_blank" rel="noopener noreferrer nofollow">The 2025 McKinsey State of AI survey reported that</a> 75% of organizations now use AI in at least one business function, yet only 28% have clear executive accountability for governance or oversight.
Similarly, <a class="link" href="https://www.ey.com/en_gl/newsroom/2025/06/ey-survey-ai-adoption-outpaces-governance-as-risk-awareness-among-the-c-suite-remains-low?utm_source=dispatches.alanbrown.net&utm_medium=newsletter&utm_campaign=digital-economy-dispatch-255-fragile-ai-and-how-not-to-fail-the-resilience-test" target="_blank" rel="noopener noreferrer nofollow">the EY Global Responsible AI Pulse Survey found that</a> while 72% of companies have scaled AI extensively, fewer than one in three have formal governance policies in place.</p><p class="paragraph" style="text-align:left;">The scale of AI adoption is staggering. According to <a class="link" href="https://www.netguru.com/blog/ai-adoption-statistics?utm_source=dispatches.alanbrown.net&utm_medium=newsletter&utm_campaign=digital-economy-dispatch-255-fragile-ai-and-how-not-to-fail-the-resilience-test" target="_blank" rel="noopener noreferrer nofollow">Netguru’s 2025 AI Adoption Statistics</a>, 78% of firms now apply AI across core operations, up sharply from 55% in 2024, and usage of generative AI nearly doubled in the same period. Yet <a class="link" href="https://writer.com/blog/enterprise-ai-adoption-survey/?utm_source=dispatches.alanbrown.net&utm_medium=newsletter&utm_campaign=digital-economy-dispatch-255-fragile-ai-and-how-not-to-fail-the-resilience-test" target="_blank" rel="noopener noreferrer nofollow">nearly half of C-suite executives admit</a> their organizations are “tearing apart” under the strain of unmanaged adoption, citing frictions between IT and business units and a lack of coherent strategy.</p><p class="paragraph" style="text-align:left;">This mismatch between rapid adoption and weak governance has made many enterprises more dependent and more exposed than ever. 
It’s against this backdrop of speed and fragility that the recent AWS cloud outage serves as a blunt reminder of how thin the line is between flexible growth and unbounded vulnerability.</p><h2 class="heading" style="text-align:left;" id="when-one-line-of-code-breaks-the-wo"><b>When One Line of Code Breaks the World</b></h2><p class="paragraph" style="text-align:left;"><a class="link" href="https://www.theguardian.com/technology/2025/oct/24/amazon-reveals-cause-of-aws-outage?utm_source=dispatches.alanbrown.net&utm_medium=newsletter&utm_campaign=digital-economy-dispatch-255-fragile-ai-and-how-not-to-fail-the-resilience-test" target="_blank" rel="noopener noreferrer nofollow">According to reports</a>, this week’s failure was not a sophisticated breakdown. A simple DNS configuration error inside Amazon Web Services triggered a cascading failure that silenced half the internet for several hours. Banks, airlines, government systems, and even AI model hosting platforms were disrupted. The cause was mundane: a single mismanaged update that exposed deep dependencies few organizations truly understood.</p><p class="paragraph" style="text-align:left;">Cloud computing promised resilience through redundancy, yet this event revealed that redundancy is not the same as invulnerability. <a class="link" href="https://techinformed.com/aws-outage-exposes-cloud-reliance-and-sparks-surge-in-social-engineering-threats?utm_source=dispatches.alanbrown.net&utm_medium=newsletter&utm_campaign=digital-economy-dispatch-255-fragile-ai-and-how-not-to-fail-the-resilience-test" target="_blank" rel="noopener noreferrer nofollow">According to multiple analyses</a>, the outage showed how overly centralized infrastructures magnify fragility. Machine learning pipelines, analytics platforms, and digital services that rely on AWS’s backbone all fell silent.
We found out once more that the cloud is not an unbreakable safety net.</p><p class="paragraph" style="text-align:left;">It also reminds us that resilience is not just technological; it is strategic. <a class="link" href="https://techinformed.com/aws-outage-exposes-cloud-reliance-and-sparks-surge-in-social-engineering-threats/?utm_source=dispatches.alanbrown.net&utm_medium=newsletter&utm_campaign=digital-economy-dispatch-255-fragile-ai-and-how-not-to-fail-the-resilience-test" target="_blank" rel="noopener noreferrer nofollow">As one commentator observed</a>, “Assuming tech giants are too big to fail is itself a failure of imagination”.</p><h2 class="heading" style="text-align:left;" id="when-disruption-becomes-deliberate"><b>When Disruption Becomes Deliberate</b></h2><p class="paragraph" style="text-align:left;">Of course, not all failures are accidental. Just days before the AWS incident, we saw more evidence of the fragility of digital technology. Jaguar Land Rover (JLR) revealed that its 2025 ransomware attack, <a class="link" href="https://www.whichcar.com.au/news/jaguar-land-rover-ransomware-attack-cost-revealed?utm_source=dispatches.alanbrown.net&utm_medium=newsletter&utm_campaign=digital-economy-dispatch-255-fragile-ai-and-how-not-to-fail-the-resilience-test" target="_blank" rel="noopener noreferrer nofollow">initially downplayed as a “technical issue”</a>, could cost the company as much as £1.9 billion, <a class="link" href="https://news.sky.com/story/cyber-attack-on-jaguar-land-rover-the-most-financially-damaging-in-uk-history-13455008?utm_source=dispatches.alanbrown.net&utm_medium=newsletter&utm_campaign=digital-economy-dispatch-255-fragile-ai-and-how-not-to-fail-the-resilience-test" target="_blank" rel="noopener noreferrer nofollow">according to news reports</a>.
Production halted, 30,000 employees were sent home, and output stopped across key plants.</p><p class="paragraph" style="text-align:left;">What makes this alarming is not just the magnitude of the loss but the incentive structure behind it. <a class="link" href="https://www.bdemerson.com/article/complete-cybercrime-statistics?utm_source=dispatches.alanbrown.net&utm_medium=newsletter&utm_campaign=digital-economy-dispatch-255-fragile-ai-and-how-not-to-fail-the-resilience-test" target="_blank" rel="noopener noreferrer nofollow">Cybercriminals now operate with the sophistication and capital of legitimate enterprises</a>, exploiting digital dependencies for staggering profit. The attack on JLR underscores that resilience cannot be expected, only prepared for.</p><p class="paragraph" style="text-align:left;">Following a stream of these high-profile attacks, we now recognise that boards can no longer treat cyber resilience as an IT expense; it’s a business survival issue. Every digitally connected enterprise is a potential target because the rewards for attackers are so rich and the penalties so limited.</p><h2 class="heading" style="text-align:left;" id="when-ai-expands-the-attack-surface"><b>When AI Expands the Attack Surface</b></h2><p class="paragraph" style="text-align:left;">Into this already volatile environment, AI is introducing new complexity.
<a class="link" href="https://www.trendmicro.com/vinfo/us/security/news/threat-landscape/trend-micro-state-of-ai-security-report-1h-2025?utm_source=dispatches.alanbrown.net&utm_medium=newsletter&utm_campaign=digital-economy-dispatch-255-fragile-ai-and-how-not-to-fail-the-resilience-test" target="_blank" rel="noopener noreferrer nofollow">The 2025 Trend Micro State of AI Security Report</a> found that 93% of organizations expect daily AI-driven cyberattacks, and two-thirds believe AI will have the largest single influence on enterprise security in the coming year.</p><p class="paragraph" style="text-align:left;">Of course, AI can strengthen defences by detecting anomalies, correlating risk signals, and flagging intrusions before humans can. But it also amplifies threats. <a class="link" href="https://hai.stanford.edu/ai-index/2025-ai-index-report?utm_source=dispatches.alanbrown.net&utm_medium=newsletter&utm_campaign=digital-economy-dispatch-255-fragile-ai-and-how-not-to-fail-the-resilience-test" target="_blank" rel="noopener noreferrer nofollow">Malicious actors now use AI</a> for adaptive phishing, deepfake impersonation, automated vulnerability scanning, and even synthetic data poisoning. The same technology that empowers defenders empowers attackers too, and the arms race is accelerating.</p><h2 class="heading" style="text-align:left;" id="the-double-edged-sword-of-ai-accele"><b>The Double-Edged Sword of AI Acceleration</b></h2><p class="paragraph" style="text-align:left;">The lesson for digital leaders is that caution is essential when deploying AI. The frantic rollout of AI tools across enterprises often occurs without clear security or resilience frameworks. Each model, API, or agent adds a node to an expanding dependency web.
Just as AWS’s DNS failure caused downstream chaos, a corrupted model, a failing API, or a broken data pipeline could have similar ripple effects inside AI-dependent organizations.</p><p class="paragraph" style="text-align:left;">Unfortunately, <a class="link" href="https://www.stack-ai.com/blog/the-biggest-ai-adoption-challenges?utm_source=dispatches.alanbrown.net&utm_medium=newsletter&utm_campaign=digital-economy-dispatch-255-fragile-ai-and-how-not-to-fail-the-resilience-test" target="_blank" rel="noopener noreferrer nofollow">too many firms are deploying generative and predictive models before defining fallback procedures or validation checks</a>. Going faster is only meaningful if the rules of the road are well defined. Speed without structure creates fragility.</p><h2 class="heading" style="text-align:left;" id="building-resilience-into-the-ai-era"><b>Building Resilience into the AI Era</b></h2><p class="paragraph" style="text-align:left;">For digital leaders, this is a moment to rethink resilience not as redundancy but as adaptive capacity -- the ability to absorb disruption, reorganize rapidly, and keep critical operations functioning. Achieving that demands re-establishing good digital practice:</p><ul><li><p class="paragraph" style="text-align:left;"><b>Expose every dependency. </b>Know exactly where your risks lie -- from cloud and models to data sources.</p></li><li><p class="paragraph" style="text-align:left;"><b>Stress-test your systems. </b>Run real failure drills. Don’t wait for an outage to show your blind spots.</p></li><li><p class="paragraph" style="text-align:left;"><b>Design for resilience, not just compliance. </b>Build transparency and accountability into every AI project from the outset.</p></li><li><p class="paragraph" style="text-align:left;"><b>Don’t automate for its own sake. </b>Apply AI where it delivers real results, avoiding unnecessary complexity.</p></li><li><p class="paragraph" style="text-align:left;"><b>Take responsibility. 
</b>Outsourcing to the cloud is not outsourcing survival. Plan for failure and own your recovery.</p></li></ul><h2 class="heading" style="text-align:left;" id="the-human-dimension"><b>The Human Dimension</b></h2><p class="paragraph" style="text-align:left;">The recent failures also remind us that technology alone will not create resilience. What differentiates enduring organizations is leadership that sees technology as an ecosystem that is interconnected, evolving, and occasionally fragile. In the rush toward AI-driven transformation, it’s tempting to move faster than your governance can adapt. But speed without foresight just multiplies the risk.</p><p class="paragraph" style="text-align:left;">The AWS outage and JLR breach are not anomalies; they are symptoms of deeper structural fragility. Enterprise resilience now demands that leaders treat every line of code, every connection, and every algorithm as a potential point of failure. The challenge is not to slow innovation but to stabilize it: to ensure that as we scale AI, we also secure it. Effective digital leadership lies not in chasing AI’s next capability but in mastering its resilience.</p></div><div class='beehiiv__footer'><br class='beehiiv__footer__break'><hr class='beehiiv__footer__line'><a target="_blank" class="beehiiv__footer_link" style="text-align: center;" href="https://www.beehiiv.com/?utm_campaign=e6458e08-d94b-49d3-904b-4b40edae8956&utm_medium=post_rss&utm_source=digital_economy_dispatches">Powered by beehiiv</a></div></div>
  ]]></content:encoded>
</item>

  </channel>
</rss>
