<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom">
  <channel>
    <title>TrustGraph</title>
    <description>Adapt to Anything. Control Everything.</description>
    
    <link>https://blog.trustgraph.ai/</link>
    <atom:link href="https://rss.beehiiv.com/feeds/QM0s6nelQt.xml" rel="self"/>
    
    <lastBuildDate>Fri, 06 Mar 2026 03:13:06 +0000</lastBuildDate>
    <pubDate>Thu, 04 Dec 2025 20:33:05 +0000</pubDate>
    <atom:published>2025-12-04T20:33:05Z</atom:published>
    <atom:updated>2026-03-06T03:13:06Z</atom:updated>
    
      <category>Data Science</category>
      <category>Software Engineering</category>
      <category>Artificial Intelligence</category>
    <copyright>Copyright 2026, TrustGraph</copyright>
    
    <image>
      <url>https://media.beehiiv.com/cdn-cgi/image/fit=scale-down,format=auto,onerror=redirect,quality=80/uploads/publication/logo/b4da2711-0677-4806-86e5-e3649d54cba6/TG_logo_transparent.png</url>
      <title>TrustGraph</title>
      <link>https://blog.trustgraph.ai/</link>
    </image>
    
    <docs>https://www.rssboard.org/rss-specification</docs>
    <generator>beehiiv</generator>
    <language>en-us</language>
    <webMaster>support@beehiiv.com (Beehiiv Support)</webMaster>

      <item>
  <title>Streaming Responses for Ontology-Driven AI in TrustGraph Release 1.6</title>
  <description>Open-Source Platform TrustGraph Delivers Token-by-Token Streaming Across LLMs, RAG, and Agent Workflows.</description>
      <enclosure url="https://media.beehiiv.com/cdn-cgi/image/fit=scale-down,format=auto,onerror=redirect,quality=80/uploads/asset/file/ec387ea8-0469-4d15-ab61-4d27b7f7cf82/tg-adapter.png" length="976295" type="image/png"/>
  <link>https://blog.trustgraph.ai/p/streaming-responses-for-ontology-driven-ai-in-trustgraph-release-1-6</link>
  <guid isPermaLink="true">https://blog.trustgraph.ai/p/streaming-responses-for-ontology-driven-ai-in-trustgraph-release-1-6</guid>
  <pubDate>Thu, 04 Dec 2025 20:33:05 +0000</pubDate>
  <atom:published>2025-12-04T20:33:05Z</atom:published>
    <dc:creator>Daniel Davis</dc:creator>
    <category><![CDATA[Trustgraph]]></category>
    <category><![CDATA[Streaming Responses]]></category>
  <content:encoded><![CDATA[
    <div class='beehiiv'><style>
  .bh__table, .bh__table_header, .bh__table_cell { border: 1px solid #C0C0C0; }
  .bh__table_cell { padding: 5px; background-color: #FFFFFF; }
  .bh__table_cell p { color: #2D2D2D; font-family: 'Helvetica',Arial,sans-serif !important; overflow-wrap: break-word; }
  .bh__table_header { padding: 5px; background-color:#F1F1F1; }
  .bh__table_header p { color: #2A2A2A; font-family:'Trebuchet MS','Lucida Grande',Tahoma,sans-serif !important; overflow-wrap: break-word; }
</style><div class='beehiiv__body'><p class="paragraph" style="text-align:left;">SAN FRANCISCO, CA — December 4, 2025 — TrustGraph, the open-source agentic AI platform built for enterprises, today announces the release of version 1.6, introducing comprehensive streaming support that transforms real-time interactions with large language models, knowledge retrieval systems, and multi-step AI agents.</p><p class="paragraph" style="text-align:left;"><b>Streaming as a First-Class Capability</b><br>Version 1.6 marks a significant milestone in TrustGraph&#39;s evolution: streaming is now a native, production-ready feature across the entire platform. This release addresses a critical gap in enterprise AI infrastructure—the ability to deliver real-time, token-by-token responses that reduce latency, improve user experience, and enable truly interactive agentic workflows.</p><p class="paragraph" style="text-align:left;"><b>Full Streaming Support Across All LLM Providers</b><br>The release introduces comprehensive streaming infrastructure integrated with every major LLM provider, including Azure, Claude, Cohere, Google AI Studio, Mistral, OpenAI, Vertex AI, and private model services like Ollama, LM Studio, Llamafile, and Hugging Face Text Generation Inference (TGI).</p><p class="paragraph" style="text-align:left;">Developers can now configure streaming at the schema level and invoke it seamlessly across TrustGraph&#39;s full API surface—REST, WebSocket, Python client, and CLI tools. Critically, all streaming implementations are <b>fully backward compatible</b> with existing non-streaming clients, ensuring a smooth transition for production deployments.</p><p class="paragraph" style="text-align:left;"><b>Extended Streaming to GraphRAG and Document Retrieval</b><br>TrustGraph 1.6 extends streaming beyond raw LLM completions to knowledge-driven retrieval systems. 
GraphRAG, driven by ontologies, and DocumentRAG services now stream token-by-token responses, delivering insights from knowledge graphs and document stores in real time. Queries that previously waited for complete batch processing now return results incrementally, enabling enterprises to surface knowledge faster and improve responsiveness in mission-critical applications.</p><p class="paragraph" style="text-align:left;">The implementation maintains consistency across all TrustGraph services—developers work with a unified streaming interface whether querying an LLM directly, retrieving from a knowledge graph, or invoking a multi-step agent workflow.</p><p class="paragraph" style="text-align:left;"><b>Real-Time Agent Interactions and Streaming Thought Process</b><br>Enhanced streaming support for agentic workflows represents a watershed moment for transparent, observable AI reasoning.</p><ul><li><p class="paragraph" style="text-align:left;"><b>Observe reasoning in flight</b> — Watch agents think through problems step by step</p></li><li><p class="paragraph" style="text-align:left;"><b>Deliver incremental results</b> — Surface intermediate conclusions as agents work through multi-step workflows</p></li><li><p class="paragraph" style="text-align:left;"><b>Build transparent interfaces</b> — Create user-facing applications that show AI reasoning progressively, building trust through explainability</p></li><li><p class="paragraph" style="text-align:left;"><b>Reduce apparent latency</b> — Deliver immediate feedback during long-running agent operations</p></li></ul><p class="paragraph" style="text-align:left;">The release includes a robust streaming parser for agent responses, with comprehensive error handling and automatic recovery mechanisms designed for production reliability.</p><p class="paragraph" style="text-align:left;"><b>Gateway API and WebSocket Support</b><br>The API Gateway now natively supports WebSocket streaming, enabling real-time client applications to receive 
token-by-token responses without polling or long-polling inefficiencies. This architecture scales effortlessly across multiple concurrent connections, crucial for enterprise deployments serving thousands of users.</p><p class="paragraph" style="text-align:left;"><b>Enhanced Integration Testing</b><br>Comprehensive integration tests validate streaming functionality across LLM providers, RAG systems, and agent workflows, ensuring consistency and reliability in production environments.</p><p class="paragraph" style="text-align:left;"><b>Critical Bug Fixes and Compatibility Improvements</b><br><b>AWS Bedrock Model Invocation</b> — Resolved compatibility issues with newer Bedrock model invocation APIs, including full streaming support for the latest Bedrock models.</p><p class="paragraph" style="text-align:left;"><b>MinIO Library Compatibility</b> — Fixed incompatible library changes in the MinIO client for blob storage operations, ensuring stable object storage interactions in deployments using S3-compatible backends.</p><p class="paragraph" style="text-align:left;"><b>Streaming Agent Race Conditions</b> — Eliminated race conditions and message ordering issues in streaming agent responses, guaranteeing deterministic, auditable agent behavior—a critical requirement for enterprise compliance.</p><p class="paragraph" style="text-align:left;"><b>Developer Experience Enhancements</b><br>New CLI improvements expand TrustGraph&#39;s developer tooling. The <code>tg-dump-queues</code> utility now provides diagnostic insights into streaming queue states, enabling engineers to troubleshoot and optimize queue configurations in real time.</p><p class="paragraph" style="text-align:left;"><b>Deployment Flexibility and Model Support</b><br>Version 1.6 includes updated templates for the latest Bedrock and Claude models, ensuring seamless integration with state-of-the-art LLMs. 
The platform continues to support deployment across cloud providers (AWS, Azure, Google Cloud, OVHcloud, Scaleway), private data centers, and bare-metal infrastructure.</p><p class="paragraph" style="text-align:left;"><b>Building Production-Grade Agentic AI</b><br>TrustGraph&#39;s philosophy remains unchanged: build infrastructure that enables enterprises to move from AI demos to durable, mission-critical systems. Streaming in 1.6 is not a bolt-on feature—it&#39;s architected into the core platform, designed for production use from day one.</p><p class="paragraph" style="text-align:left;">The combination of Apache Pulsar&#39;s event-driven backbone, knowledge graph semantics, and now comprehensive streaming support creates an AI infrastructure stack uniquely suited to enterprises that refuse to compromise on data sovereignty, auditability, or reliability.</p><p class="paragraph" style="text-align:left;"><b>Availability</b><br>Version 1.6 is available immediately via GitHub at <a class="link" href="https://github.com/trustgraph-ai/trustgraph?utm_source=blog.trustgraph.ai&utm_medium=newsletter&utm_campaign=streaming-responses-for-ontology-driven-ai-in-trustgraph-release-1-6" target="_blank" rel="noopener noreferrer nofollow">https://github.com/trustgraph-ai/trustgraph</a> and through the Configuration Builder at <a class="link" href="https://config-ui.demo.trustgraph.ai/?utm_source=blog.trustgraph.ai&utm_medium=newsletter&utm_campaign=streaming-responses-for-ontology-driven-ai-in-trustgraph-release-1-6" target="_blank" rel="noopener noreferrer nofollow">https://config-ui.demo.trustgraph.ai/</a>.</p><p class="paragraph" style="text-align:left;">The release is fully open source under the Apache 2.0 license, enabling enterprises to deploy, modify, and operate the platform on their own infrastructure with complete transparency and control.</p><p class="paragraph" style="text-align:left;"><b>For more information:</b></p><ul><li><p class="paragraph" 
style="text-align:left;">Documentation: <a class="link" href="https://docs.trustgraph.ai/?utm_source=blog.trustgraph.ai&utm_medium=newsletter&utm_campaign=streaming-responses-for-ontology-driven-ai-in-trustgraph-release-1-6" target="_blank" rel="noopener noreferrer nofollow">https://docs.trustgraph.ai</a></p></li><li><p class="paragraph" style="text-align:left;">GitHub Repository: <a class="link" href="https://github.com/trustgraph-ai/trustgraph?utm_source=blog.trustgraph.ai&utm_medium=newsletter&utm_campaign=streaming-responses-for-ontology-driven-ai-in-trustgraph-release-1-6" target="_blank" rel="noopener noreferrer nofollow">https://github.com/trustgraph-ai/trustgraph</a></p></li><li><p class="paragraph" style="text-align:left;">Discord Community: <a class="link" href="https://discord.gg/sQMwkRz5GX?utm_source=blog.trustgraph.ai&utm_medium=newsletter&utm_campaign=streaming-responses-for-ontology-driven-ai-in-trustgraph-release-1-6" target="_blank" rel="noopener noreferrer nofollow">https://discord.gg/sQMwkRz5GX</a></p></li><li><p class="paragraph" style="text-align:left;">Website: <a class="link" href="https://trustgraph.ai/?utm_source=blog.trustgraph.ai&utm_medium=newsletter&utm_campaign=streaming-responses-for-ontology-driven-ai-in-trustgraph-release-1-6" target="_blank" rel="noopener noreferrer nofollow">https://trustgraph.ai</a></p></li></ul></div><div class='beehiiv__footer'><br class='beehiiv__footer__break'><hr class='beehiiv__footer__line'><a target="_blank" class="beehiiv__footer_link" style="text-align: center;" href="https://www.beehiiv.com/?utm_campaign=cf5d17f6-2c19-4e27-9a0c-becbc574661b&utm_medium=post_rss&utm_source=trustgraph">Powered by beehiiv</a></div></div>
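To make the token-by-token model concrete, here is a minimal, self-contained sketch of how a client might fold a stream of token events into an incrementally growing response. The event shape ({"token": ...} with a {"done": true} sentinel) is a hypothetical illustration, not TrustGraph's actual wire format.

```python
from typing import Iterator

def assemble_stream(events: Iterator[dict]) -> Iterator[str]:
    """Yield successively larger snapshots of the response as tokens arrive."""
    parts = []
    for event in events:
        if event.get("done"):        # end-of-stream sentinel
            break
        parts.append(event["token"])
        yield "".join(parts)         # the UI can render each snapshot immediately

# Simulated stream standing in for a WebSocket connection:
events = [{"token": "Hello"}, {"token": ", "}, {"token": "world"}, {"done": True}]
snapshots = list(assemble_stream(events))
```

A non-streaming client would see only the final string; a streaming client renders every intermediate snapshot, which is where the reduction in apparent latency comes from.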
  ]]></content:encoded>
</item>

      <item>
  <title>TrustGraph Launches Automated Ontology-Driven Graph Building for Reliable AI Agents</title>
  <description>TrustGraph announces the release of significant new capabilities and infrastructure improvements in release 1.5.8. </description>
      <enclosure url="https://media.beehiiv.com/cdn-cgi/image/fit=scale-down,format=auto,onerror=redirect,quality=80/uploads/asset/file/7443ffbd-b7ec-4fa4-85b6-c238d63e5139/tg-adapter.png" length="1757166" type="image/png"/>
  <link>https://blog.trustgraph.ai/p/trustgraph-launches-automated-ontology-driven-graph-building-for-reliable-ai-agents</link>
  <guid isPermaLink="true">https://blog.trustgraph.ai/p/trustgraph-launches-automated-ontology-driven-graph-building-for-reliable-ai-agents</guid>
  <pubDate>Mon, 24 Nov 2025 17:58:06 +0000</pubDate>
  <atom:published>2025-11-24T17:58:06Z</atom:published>
    <dc:creator>Daniel Davis</dc:creator>
    <category><![CDATA[Trustgraph]]></category>
    <category><![CDATA[Ontologies]]></category>
  <content:encoded><![CDATA[
    <div class='beehiiv'><style>
  .bh__table, .bh__table_header, .bh__table_cell { border: 1px solid #C0C0C0; }
  .bh__table_cell { padding: 5px; background-color: #FFFFFF; }
  .bh__table_cell p { color: #2D2D2D; font-family: 'Helvetica',Arial,sans-serif !important; overflow-wrap: break-word; }
  .bh__table_header { padding: 5px; background-color:#F1F1F1; }
  .bh__table_header p { color: #2A2A2A; font-family:'Trebuchet MS','Lucida Grande',Tahoma,sans-serif !important; overflow-wrap: break-word; }
</style><div class='beehiiv__body'><p class="paragraph" style="text-align:left;">SAN FRANCISCO, CA – November 24, 2025 – <a class="link" href="https://trustgraph.ai?utm_source=blog.trustgraph.ai&utm_medium=newsletter&utm_campaign=trustgraph-launches-automated-ontology-driven-graph-building-for-reliable-ai-agents" target="_blank" rel="noopener noreferrer nofollow">TrustGraph</a> announces the release of significant new capabilities and infrastructure improvements in release 1.5.8. The latest version introduces support for ontology-driven graph construction, dynamic embeddings model selection, MCP authentication support, and full Python 3.13 compatibility—alongside critical stability enhancements and workbench improvements.</p><p class="paragraph" style="text-align:left;"><b>Introducing Ontology-Driven Graph Construction</b><br>The centerpiece of this release is a new ontology-based extraction processor that fundamentally changes how organizations extract knowledge from unstructured data. Rather than relying on generic machine learning heuristics, TrustGraph can now leverage domain-specific ontology definitions to guide extraction logic, ensuring that the constructed knowledge graph aligns with business requirements and domain constraints.</p><p class="paragraph" style="text-align:left;">&quot;Most knowledge graph platforms ask what <i><b>can</b></i> we extract? TrustGraph now asks what <i><b>should</b></i> we extract?&quot; said Mark Adams, Co-Founder of TrustGraph. &quot;The ability to automatically build graphs with a specified ontology puts your data fully in control, dramatically reducing noise and ensuring consistency across distributed data exchange. 
This is what enterprise-grade knowledge graph infrastructure looks like - except TrustGraph builds it autonomously instead of needing a team of data engineers and 6 months.&quot;</p><p class="paragraph" style="text-align:left;">Key benefits of Ontologies include:</p><ul><li><p class="paragraph" style="text-align:left;"><b>Precise Data Extraction</b>: Define extraction rules once in your ontology configuration; apply them consistently across all extraction jobs</p></li><li><p class="paragraph" style="text-align:left;"><b>Entity Context Preservation</b>: Extract entity contexts alongside triple data for richer semantic understanding</p></li><li><p class="paragraph" style="text-align:left;"><b>Workbench Integration</b>: Seamlessly integrates with TrustGraph Workbench&#39;s ontology editor for visual rule definition</p></li><li><p class="paragraph" style="text-align:left;"><b>Reduced Post-Processing</b>: Dramatically cuts manual data cleaning and validation downstream</p></li></ul><p class="paragraph" style="text-align:left;"><b>Enterprise-Ready Platform Updates</b><br>Beyond ontologies, TrustGraph&#39;s latest release brings several enterprise-critical improvements:</p><p class="paragraph" style="text-align:left;"><b>Infrastructure & Compatibility:</b></p><ul><li><p class="paragraph" style="text-align:left;"><b>Python 3.13 Support</b>: Full compatibility with Python 3.13, including migration from cassandra-driver to scylla-driver for modern language support</p></li><li><p class="paragraph" style="text-align:left;"><b>Vector Store Lifecycle Improvements</b>: Fixed hard-coded vector size limitations; collections now initialize with proper support for variable embedding dimensions</p></li><li><p class="paragraph" style="text-align:left;"><b>Dynamic Embeddings Model Selection</b>: Users can now select embedding models dynamically, enabling optimization for specific use cases rather than one-size-fits-all approaches</p></li></ul><p class="paragraph" 
style="text-align:left;"><b>Security & Integration:</b></p><ul><li><p class="paragraph" style="text-align:left;"><b>MCP Authentication</b>: Added MCP (Model Context Protocol) authentication token header support, enabling secure integration with enterprise services and AI platforms</p></li></ul><p class="paragraph" style="text-align:left;"><b>Stability & Reliability:</b></p><ul><li><p class="paragraph" style="text-align:left;">Fixed batch deletion errors that occurred during large-scale collection operations</p></li><li><p class="paragraph" style="text-align:left;">Resolved agent step schema validation issues</p></li><li><p class="paragraph" style="text-align:left;">Improved OpenAI service compatibility for broader LLM integration</p></li><li><p class="paragraph" style="text-align:left;">Comprehensive test coverage for ontology extraction, dynamic embeddings, MCP authentication, and agent step parsing</p></li></ul><p class="paragraph" style="text-align:left;"><b>Enhanced TrustGraph Workbench</b><br>The TrustGraph Workbench—the visual interface for designing, testing, and deploying knowledge graphs—also receives substantial improvements:</p><ul><li><p class="paragraph" style="text-align:left;"><b>LLM Model Selection UI</b>: Users can now select from multiple LLM providers and models without leaving the workbench</p></li><li><p class="paragraph" style="text-align:left;"><b>MCP Authentication Interface</b>: Enterprise-grade authentication configuration now accessible through an intuitive UI</p></li><li><p class="paragraph" style="text-align:left;"><b>Performance Optimization</b>: System UI performance significantly improved across all workflows</p></li><li><p class="paragraph" style="text-align:left;"><b>Multi-Ontology Support</b>: Fixed critical issues with importing and managing multiple ontologies</p></li><li><p class="paragraph" style="text-align:left;"><b>Crash Stability</b>: Resolved flow class viewer crash that could interrupt design workflows</p></li><li><p 
class="paragraph" style="text-align:left;"><b>Refactored Architecture</b>: Library breakout and module reorganization improve maintainability and extensibility</p></li></ul><p class="paragraph" style="text-align:left;"><b>Technical Excellence & Documentation</b><br>This release includes comprehensive technical specifications for ontologies and vector store lifecycle management, ensuring practitioners have the architectural context needed for production deployments. Full test coverage and updated documentation support enterprise adoption.</p><p class="paragraph" style="text-align:left;"><b>Availability</b><br>The latest version of TrustGraph is available immediately on <a class="link" href="https://github.com/trustgraph-ai/trustgraph?utm_source=blog.trustgraph.ai&utm_medium=newsletter&utm_campaign=trustgraph-launches-automated-ontology-driven-graph-building-for-reliable-ai-agents" target="_blank" rel="noopener noreferrer nofollow">GitHub</a>. Comprehensive release notes, technical documentation, and migration guides are available <a class="link" href="https://docs.trustgraph.ai/reference/changelog/trustgraph.html?utm_source=blog.trustgraph.ai&utm_medium=newsletter&utm_campaign=trustgraph-launches-automated-ontology-driven-graph-building-for-reliable-ai-agents" target="_blank" rel="noopener noreferrer nofollow">here</a>.</p><p class="paragraph" style="text-align:left;"><b>TrustGraph</b> <b>Community Resources</b></p><ul><li><p class="paragraph" style="text-align:left;"><b>Documentation</b>: <a class="link" href="https://docs.trustgraph.ai/?utm_source=blog.trustgraph.ai&utm_medium=newsletter&utm_campaign=trustgraph-launches-automated-ontology-driven-graph-building-for-reliable-ai-agents" target="_blank" rel="noopener noreferrer nofollow">https://docs.trustgraph.ai/</a></p></li><li><p class="paragraph" style="text-align:left;"><b>GitHub Repository</b>: <a class="link" 
href="https://github.com/trustgraph-ai/trustgraph?utm_source=blog.trustgraph.ai&utm_medium=newsletter&utm_campaign=trustgraph-launches-automated-ontology-driven-graph-building-for-reliable-ai-agents" target="_blank" rel="noopener noreferrer nofollow">https://github.com/trustgraph-ai/trustgraph</a></p></li><li><p class="paragraph" style="text-align:left;"><b>Configuration Builder</b>: <a class="link" href="https://config-ui.demo.trustgraph.ai/?utm_source=blog.trustgraph.ai&utm_medium=newsletter&utm_campaign=trustgraph-launches-automated-ontology-driven-graph-building-for-reliable-ai-agents" target="_blank" rel="noopener noreferrer nofollow">https://config-ui.demo.trustgraph.ai/</a></p></li><li><p class="paragraph" style="text-align:left;"><b>Discord Community</b>: <a class="link" href="https://discord.gg/sQMwkRz5GX?utm_source=blog.trustgraph.ai&utm_medium=newsletter&utm_campaign=trustgraph-launches-automated-ontology-driven-graph-building-for-reliable-ai-agents" target="_blank" rel="noopener noreferrer nofollow">https://discord.gg/sQMwkRz5GX</a></p></li></ul></div><div class='beehiiv__footer'><br class='beehiiv__footer__break'><hr class='beehiiv__footer__line'><a target="_blank" class="beehiiv__footer_link" style="text-align: center;" href="https://www.beehiiv.com/?utm_campaign=f2083c13-0e0d-4c9c-8a43-b665ff6d9257&utm_medium=post_rss&utm_source=trustgraph">Powered by beehiiv</a></div></div>
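The idea behind ontology-driven extraction can be illustrated with a small sketch: only triples whose predicate is declared in the ontology, with subject and object of the declared classes, survive into the graph. The ontology format and all names below are invented for illustration and are not TrustGraph's actual configuration schema.

```python
# Hypothetical mini-ontology: predicate -> (subject class, object class).
ONTOLOGY = {
    "employs":   ("Organization", "Person"),
    "locatedIn": ("Organization", "Place"),
}

def conforms(triple, entity_classes, ontology=ONTOLOGY):
    """Accept a (subject, predicate, object) triple only if the ontology
    declares the predicate and both entities carry the declared classes."""
    s, p, o = triple
    if p not in ontology:
        return False
    s_cls, o_cls = ontology[p]
    return entity_classes.get(s) == s_cls and entity_classes.get(o) == o_cls

classes = {"Acme": "Organization", "Alice": "Person", "Berlin": "Place"}
candidates = [
    ("Acme", "employs", "Alice"),      # conforms
    ("Acme", "locatedIn", "Berlin"),   # conforms
    ("Alice", "employs", "Berlin"),    # wrong classes -> dropped
    ("Acme", "foundedBy", "Alice"),    # predicate not declared -> dropped
]
graph = [t for t in candidates if conforms(t, classes)]
```

This is the "what should we extract" filter in miniature: the ontology, not the extractor's heuristics, decides what enters the graph.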
  ]]></content:encoded>
</item>

      <item>
  <title>How TrustGraph Built Enterprise-Grade Agentic AI with Qdrant</title>
  <description>By grounding retrieval in graph semantics and Qdrant’s vector engine, TrustGraph pushes non-determinism to the edges while maintaining uptime, auditability, and sovereignty.
</description>
      <enclosure url="https://media.beehiiv.com/cdn-cgi/image/fit=scale-down,format=auto,onerror=redirect,quality=80/uploads/asset/file/0b4d6b24-9df6-4391-9546-ecc1b22a339e/qdrant%2Btrustgraph.jpg" length="303551" type="image/jpeg"/>
  <link>https://blog.trustgraph.ai/p/how-trustgraph-built-enterprise-grade-agentic-ai-with-qdrant</link>
  <guid isPermaLink="true">https://blog.trustgraph.ai/p/how-trustgraph-built-enterprise-grade-agentic-ai-with-qdrant</guid>
  <pubDate>Mon, 13 Oct 2025 10:00:00 +0000</pubDate>
  <atom:published>2025-10-13T10:00:00Z</atom:published>
    <dc:creator>Daniel Davis</dc:creator>
    <category><![CDATA[Trustgraph]]></category>
    <category><![CDATA[Qdrant]]></category>
  <content:encoded><![CDATA[
    <div class='beehiiv'><style>
  .bh__table, .bh__table_header, .bh__table_cell { border: 1px solid #C0C0C0; }
  .bh__table_cell { padding: 5px; background-color: #FFFFFF; }
  .bh__table_cell p { color: #2D2D2D; font-family: 'Helvetica',Arial,sans-serif !important; overflow-wrap: break-word; }
  .bh__table_header { padding: 5px; background-color:#F1F1F1; }
  .bh__table_header p { color: #2A2A2A; font-family:'Trebuchet MS','Lucida Grande',Tahoma,sans-serif !important; overflow-wrap: break-word; }
</style><div class='beehiiv__body'><p class="paragraph" style="text-align:left;">(original post courtesy of Qdrant <a class="link" href="https://qdrant.tech/blog/case-study-trustgraph/?utm_source=blog.trustgraph.ai&utm_medium=newsletter&utm_campaign=how-trustgraph-built-enterprise-grade-agentic-ai-with-qdrant" target="_blank" rel="noopener noreferrer nofollow">here</a>)</p><div class="image"><img alt="" class="image__image" style="" src="https://media.beehiiv.com/cdn-cgi/image/fit=scale-down,format=auto,onerror=redirect,quality=80/uploads/asset/file/79ecf063-2d4b-4cad-a8e0-85fc37267ede/TG-summary-box.jpg?t=1760144143"/></div><p class="paragraph" style="text-align:left;">When teams first experiment with agentic AI, the journey often starts with a slick demo: a few APIs stitched together, a large language model answering questions, and just enough smoke and mirrors to impress stakeholders.</p><p class="paragraph" style="text-align:left;">But as soon as those demos face enterprise requirements — constant data ingestion, compliance, thousands of users, and 24×7 uptime — the illusion breaks. Services stall at the first failure, query reliability plummets, and regulatory guardrails are nowhere to be found. 
What worked in a five-minute demo becomes impossible to maintain in production.</p><div class="image"><img alt="" class="image__image" style="" src="https://media.beehiiv.com/cdn-cgi/image/fit=scale-down,format=auto,onerror=redirect,quality=80/uploads/asset/file/31ea8410-0340-421e-ab4c-5fbac1875595/failure-map-poc-to-production-2.png?t=1760144162"/><div class="image__source"><span class="image__source_text"><p>Failure mode map — “From POC to production.”</p></span></div></div><p class="paragraph" style="text-align:left;">This is exactly the gap <a class="link" href="https://trustgraph.ai?utm_source=blog.trustgraph.ai&utm_medium=newsletter&utm_campaign=how-trustgraph-built-enterprise-grade-agentic-ai-with-qdrant" target="_blank" rel="noopener noreferrer nofollow">TrustGraph</a> set out to close. From day one, they designed their platform for availability, determinism, and scale — with <a class="link" href="https://qdrant.tech?utm_source=blog.trustgraph.ai&utm_medium=newsletter&utm_campaign=how-trustgraph-built-enterprise-grade-agentic-ai-with-qdrant" target="_blank" rel="noopener noreferrer nofollow">Qdrant</a> as a core piece of the architecture.</p><p class="paragraph" style="text-align:left;"><b>Building for Production, Not Demos</b><br>TrustGraph’s architecture is fully containerized, modular, and deployable across cloud, virtualized, or bare-metal environments.</p><p class="paragraph" style="text-align:left;">At its core are three pillars:</p><ul><li><p class="paragraph" style="text-align:left;">A streaming spine with Apache Pulsar. Persistent queues, schema evolution, and replayability provide resilience. If a process fails, it automatically restarts and resumes without data loss.</p></li><li><p class="paragraph" style="text-align:left;">Graph-native semantics. Knowledge is modeled in Resource Description Framework (RDF), with SPARQL templates guiding retrieval. 
This reduces dependence on brittle, model-generated queries and ensures answers are precise and auditable.</p></li><li><p class="paragraph" style="text-align:left;">Qdrant vector search. Entities are embedded and stored in Qdrant, enabling fast, reliable similarity search that integrates into the graph-driven workflow.</p></li></ul><p class="paragraph" style="text-align:left;"></p><div class="image"><img alt="" class="image__image" style="" src="https://media.beehiiv.com/cdn-cgi/image/fit=scale-down,format=auto,onerror=redirect,quality=80/uploads/asset/file/f61614e6-3c4a-40bb-ab5d-295932d1086a/architecture-overview.png?t=1760144228"/><div class="image__source"><span class="image__source_text"><p>Architecture overview</p></span></div></div><p class="paragraph" style="text-align:left;"><b>From Documents to Knowledge</b><br>Instead of breaking documents into chunks, TrustGraph extracts facts. An LLM identifies entities and relationships, assembling them into a knowledge graph. In parallel, embeddings of entities are stored in Qdrant.</p><p class="paragraph" style="text-align:left;">This dual representation allows queries to ground themselves in both semantic similarity and graph structure. 
For example, asking “Tell me about Alice” retrieves the “Alice” entity via Qdrant and maps it to her connections in the graph, rather than just surfacing sentences that happen to contain her name.</p><p class="paragraph" style="text-align:left;"></p><div class="image"><img alt="" class="image__image" style="" src="https://media.beehiiv.com/cdn-cgi/image/fit=scale-down,format=auto,onerror=redirect,quality=80/uploads/asset/file/29630b86-47a8-452a-b58e-bc58904f5083/ingestion-process.png?t=1760144258"/><div class="image__source"><span class="image__source_text"><p>Ingestion process</p></span></div></div><p class="paragraph" style="text-align:left;"><b>Retrieval That Goes Beyond RAG</b><br>When a query enters the system, it follows a deterministic path:</p><ol start="1"><li><p class="paragraph" style="text-align:left;">The query is embedded into vectors.</p></li><li><p class="paragraph" style="text-align:left;">Qdrant retrieves the nearest entities.</p></li><li><p class="paragraph" style="text-align:left;">Those entities expand into a subgraph of related facts.</p></li><li><p class="paragraph" style="text-align:left;">The subgraph is passed to the LLM, which answers strictly from that curated context.</p></li></ol><p class="paragraph" style="text-align:left;">This approach surpasses traditional RAG, which stops at semantically similar chunks. Graph-anchored retrieval allows TrustGraph to surface causal or related knowledge. 
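The four steps above can be sketched end to end in a few lines. To stay self-contained, the sketch uses a naive cosine-similarity scan where TrustGraph would call Qdrant, and a flat triple list where it would query the graph store; all names and vectors are illustrative.

```python
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

# Entity embeddings (Qdrant's role, done naively here).
entities = {"Alice": [1.0, 0.1], "Bob": [0.1, 1.0], "Acme": [0.9, 0.4]}

def nearest(query_vec, k=1):
    # Steps 1-2: embed the query, retrieve the nearest entities.
    return sorted(entities, key=lambda e: -cosine(entities[e], query_vec))[:k]

triples = [("Alice", "worksFor", "Acme"),
           ("Alice", "knows", "Bob"),
           ("Acme", "locatedIn", "Berlin")]

def subgraph(seeds):
    # Step 3: expand the hits into a one-hop subgraph of related facts.
    return [t for t in triples if t[0] in seeds or t[2] in seeds]

hits = nearest([1.0, 0.2])        # a query embedding for "Tell me about Alice"
context = subgraph(set(hits))
# Step 4: `context` (not raw document chunks) becomes the LLM's only input.
```

The answer is generated strictly from the curated subgraph, which is what keeps the pipeline deterministic and auditable.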
For example, “Why did the engine fail?” doesn’t just find mentions of “engine” and “failure” — it also uncovers related causes like <i>metal fatigue</i> or <i>coolant leaks</i> through graph connections.</p><p class="paragraph" style="text-align:left;"></p><div class="image"><img alt="" class="image__image" style="" src="https://media.beehiiv.com/cdn-cgi/image/fit=scale-down,format=auto,onerror=redirect,quality=80/uploads/asset/file/60337607-7c11-4cb6-93d2-c30472efe876/query-process.png?t=1760144280"/><div class="image__source"><span class="image__source_text"><p>Query process</p></span></div></div><p class="paragraph" style="text-align:left;"><b>Agentic AI at Scale</b><br>TrustGraph’s retrieval capabilities sit within a broader agentic AI framework. Developers can orchestrate pipelines that combine:</p><ul><li><p class="paragraph" style="text-align:left;">GraphRAG for structured fact retrieval</p></li><li><p class="paragraph" style="text-align:left;">Template-driven queries for determinism</p></li><li><p class="paragraph" style="text-align:left;">MCP tool invocation for external actions</p></li><li><p class="paragraph" style="text-align:left;">NLPR - Natural Language Precision Retrieval (experimental), which uses ontologies to drive specialized extraction</p></li></ul><p class="paragraph" style="text-align:left;">This enables enterprises to build retrieval pipelines that integrate internal knowledge graphs with external data sources, while maintaining reliability and control.</p><p class="paragraph" style="text-align:left;"></p><div class="image"><img alt="" class="image__image" style="" src="https://media.beehiiv.com/cdn-cgi/image/fit=scale-down,format=auto,onerror=redirect,quality=80/uploads/asset/file/cff7fe00-5f31-4c66-8856-76a3bf3a03d4/ingestion-querying-process.png?t=1760144304"/><div class="image__source"><span class="image__source_text"><p>Ingestion & querying process</p></span></div></div><p class="paragraph" style="text-align:left;"><b>Outcomes That 
Matter in Production</b><br>By combining a resilient streaming backbone, graph-native semantics, and Qdrant-powered retrieval, TrustGraph delivers production-grade architectural outcomes:</p><ul><li><p class="paragraph" style="text-align:left;">Determinism — Template-driven SPARQL and Qdrant similarity search eliminate fragile query synthesis.</p></li><li><p class="paragraph" style="text-align:left;">Resilience — Pulsar pipelines replay and recover automatically, keeping systems responsive during failures or rolling updates.</p></li><li><p class="paragraph" style="text-align:left;">Scalability & Sovereignty — The platform runs on diverse hardware stacks, including non-NVIDIA GPUs, and supports strict European data sovereignty requirements.</p></li><li><p class="paragraph" style="text-align:left;">Developer Simplicity — Qdrant’s open-source, containerized design makes scaling straightforward and reduces operational friction.</p></li></ul><p class="paragraph" style="text-align:left;"><i>“We haven’t had a reason to revisit alternatives. Qdrant checks the boxes for speed, reliability, and simplicity—and it keeps doing so.”</i><br>— <i>Daniel Davis, Co-founder, TrustGraph</i></p><p class="paragraph" style="text-align:left;"><b>From Demos to Durable Infrastructure</b><br>TrustGraph shows how agentic AI can evolve from flashy demos into mission-critical enterprise software. 
By grounding retrieval in graph semantics and Qdrant’s vector engine, they push non-determinism to the edges while maintaining uptime, auditability, and sovereignty.<br><br><b>TrustGraph</b> <b>Community Resources</b></p><ul><li><p class="paragraph" style="text-align:left;"><b>Documentation</b>: <a class="link" href="https://docs.trustgraph.ai/?utm_source=blog.trustgraph.ai&utm_medium=newsletter&utm_campaign=how-trustgraph-built-enterprise-grade-agentic-ai-with-qdrant" target="_blank" rel="noopener noreferrer nofollow">https://docs.trustgraph.ai/</a></p></li><li><p class="paragraph" style="text-align:left;"><b>GitHub Repository</b>: <a class="link" href="https://github.com/trustgraph-ai/trustgraph?utm_source=blog.trustgraph.ai&utm_medium=newsletter&utm_campaign=how-trustgraph-built-enterprise-grade-agentic-ai-with-qdrant" target="_blank" rel="noopener noreferrer nofollow">https://github.com/trustgraph-ai/trustgraph</a></p></li><li><p class="paragraph" style="text-align:left;"><b>Configuration Builder</b>: <a class="link" href="https://config-ui.demo.trustgraph.ai/?utm_source=blog.trustgraph.ai&utm_medium=newsletter&utm_campaign=how-trustgraph-built-enterprise-grade-agentic-ai-with-qdrant" target="_blank" rel="noopener noreferrer nofollow">https://config-ui.demo.trustgraph.ai/</a></p></li><li><p class="paragraph" style="text-align:left;"><b>Discord Community</b>: <a class="link" href="https://discord.gg/sQMwkRz5GX?utm_source=blog.trustgraph.ai&utm_medium=newsletter&utm_campaign=how-trustgraph-built-enterprise-grade-agentic-ai-with-qdrant" target="_blank" rel="noopener noreferrer nofollow">https://discord.gg/sQMwkRz5GX</a></p></li></ul></div><div class='beehiiv__footer'><br class='beehiiv__footer__break'><hr class='beehiiv__footer__line'><a target="_blank" class="beehiiv__footer_link" style="text-align: center;" href="https://www.beehiiv.com/?utm_campaign=4d81ae58-af94-46e2-aa67-2b2f126b6022&utm_medium=post_rss&utm_source=trustgraph">Powered by beehiiv</a></div></div>
  ]]></content:encoded>
</item>

      <item>
  <title>TrustGraph Unveils Version 1.4.22</title>
  <description>Revolutionary Flow Configuration and Enhanced Collection Management Transform Enterprise AI Operations</description>
      <enclosure url="https://media.beehiiv.com/cdn-cgi/image/fit=scale-down,format=auto,onerror=redirect,quality=80/uploads/asset/file/a1b63274-f53e-435a-b519-c6b04e92c1e5/tg-adapter.png" length="1757166" type="image/png"/>
  <link>https://blog.trustgraph.ai/p/trustgraph-unveils-version-1-4-22</link>
  <guid isPermaLink="true">https://blog.trustgraph.ai/p/trustgraph-unveils-version-1-4-22</guid>
  <pubDate>Tue, 07 Oct 2025 20:30:18 +0000</pubDate>
  <atom:published>2025-10-07T20:30:18Z</atom:published>
    <dc:creator>Daniel Davis</dc:creator>
    <category><![CDATA[Trustgraph]]></category>
  <content:encoded><![CDATA[
    <div class='beehiiv'><style>
  .bh__table, .bh__table_header, .bh__table_cell { border: 1px solid #C0C0C0; }
  .bh__table_cell { padding: 5px; background-color: #FFFFFF; }
  .bh__table_cell p { color: #2D2D2D; font-family: 'Helvetica',Arial,sans-serif !important; overflow-wrap: break-word; }
  .bh__table_header { padding: 5px; background-color:#F1F1F1; }
  .bh__table_header p { color: #2A2A2A; font-family:'Trebuchet MS','Lucida Grande',Tahoma,sans-serif !important; overflow-wrap: break-word; }
</style><div class='beehiiv__body'><p class="paragraph" style="text-align:left;"><b>SAN FRANCISCO, CA – October 7, 2025</b> — TrustGraph, the pioneering open-source agentic AI platform built for enterprise demands, today announces the release of version 1.4.22, delivering groundbreaking advancements in flow configurability, structured data processing, and collection management. This major update empowers enterprises to achieve unprecedented flexibility and control over their AI workflows while maintaining the platform&#39;s hallmark transparency and scalability.</p><p class="paragraph" style="text-align:left;"><b>Transforming AI Workflow Management with Dynamic Flow Configuration</b><br>The centerpiece of version 1.4.22 is <b>Flow Configurable Parameters</b>, a revolutionary system that enables dynamic workflow configuration without requiring system rebuilds. This breakthrough allows enterprises to adapt their AI pipelines in real time, adjusting Large Language Model (LLM) settings, temperature parameters, and processing configurations on demand.</p><p class="paragraph" style="text-align:left;">&quot;<i>Version 1.4.22 represents a fundamental shift toward truly dynamic AI infrastructure,</i>&quot; said <b>Daniel Davis</b>, Co-Founder of TrustGraph. &quot;<i>Our new flow configuration system eliminates the traditional barriers between AI system design and operational deployment. Enterprises can now adapt their AI workflows as quickly as their business requirements change, without sacrificing the robust foundation they depend on for mission-critical operations.</i>&quot;</p><p class="paragraph" style="text-align:left;">The new flow parameter system introduces sophisticated <b>Advanced Parameter Mode</b> with controlled-by relationships, enabling complex interdependencies between workflow components.
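A minimal sketch of how controlled-by parameter relationships could behave: change a controlling parameter at runtime and its dependents follow, with no rebuild. Only the <b>llm-model</b> and <b>llm-rag-model</b> parameter names come from the release notes; the class and methods below are hypothetical, not TrustGraph's API.

```python
# Hypothetical sketch of "controlled-by" flow parameters: updating a
# controller parameter recomputes every dependent parameter in place.
class FlowParameters:
    def __init__(self):
        self._values = {}
        self._controlled_by = {}   # dependent -> (controller, rule)

    def define(self, name, default):
        self._values[name] = default

    def control(self, dependent, controller, rule):
        # When `controller` changes, recompute `dependent` via `rule`.
        self._controlled_by[dependent] = (controller, rule)

    def set(self, name, value):
        # Applied at runtime: no rebuild or restart required.
        self._values[name] = value
        for dep, (ctrl, rule) in self._controlled_by.items():
            if ctrl == name:
                self._values[dep] = rule(value)

    def get(self, name):
        return self._values[name]

params = FlowParameters()
params.define("llm-model", "model-a")
params.define("llm-rag-model", "model-a")
# Keep the RAG model in lockstep with the main model unless overridden.
params.control("llm-rag-model", "llm-model", lambda v: v)
params.set("llm-model", "model-b")
```

The `rule` callable is where a real system would encode richer interdependencies than the identity mapping used here.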
Organizations can now deploy LLM dynamic settings using <b>llm-model</b> and <b>llm-rag-model</b> parameters, while the new <b>tg-show-parameter-types</b> CLI tool provides comprehensive visibility into available configuration options.</p><p class="paragraph" style="text-align:left;"><b>Enhanced Intelligence Through Structured Data Diagnosis</b><br>Version 1.4.22 significantly expands TrustGraph&#39;s data processing capabilities with the introduction of the <b>Structured Data Diagnosis Service</b>. This powerful new component provides automatic detection and classification of XML, JSON, and CSV data formats, enabling TrustGraph&#39;s knowledge graph construction to work seamlessly across diverse enterprise data ecosystems.</p><p class="paragraph" style="text-align:left;">The integrated <b>Type Detector</b> with schema selection capabilities ensures that structured data is properly understood and processed according to enterprise-specific requirements. This advancement bridges the gap between TrustGraph&#39;s renowned unstructured document processing and the structured data that forms the backbone of most enterprise operations.</p><p class="paragraph" style="text-align:left;"><b>Revolutionary Collection Management for Enterprise Scale</b><br>Addressing the complex requirements of large-scale enterprise deployments, version 1.4.22 introduces comprehensive <b>Collection Management</b> capabilities that provide granular control over data organization and access patterns. 
The enhanced system delivers <b>Collection metadata management and deletion capabilities</b>, integrated <b>Librarian services</b>, and sophisticated <b>Collection tracking across all processors</b>.</p><p class="paragraph" style="text-align:left;">A critical advancement for enterprise security and compliance, the new <b>User/Collection Isolation</b> feature provides <b>User/Collection Separation</b> and <b>User/Collection Processing</b>, ensuring that sensitive data remains properly segregated across different organizational contexts and access levels.</p><p class="paragraph" style="text-align:left;"><b>Mark Adams</b>, Co-Founder of TrustGraph, emphasized the significance of these enhancements: &quot;<i>Enterprise AI requires enterprise-grade data management. Our enhanced collection management system provides the governance and isolation capabilities that large organizations need while maintaining the flexibility that makes TrustGraph so powerful for innovation.</i>&quot;</p><p class="paragraph" style="text-align:left;"><b>Performance Optimizations and Infrastructure Enhancements</b><br>Version 1.4.22 delivers substantial performance improvements across the platform&#39;s core infrastructure. The <b>Cassandra Performance</b> enhancements include a refactored knowledge graph implementation optimized for single-table operations, multi-table implementation for enhanced performance, and the addition of a dedicated <b>Cassandra collection table</b>.</p><p class="paragraph" style="text-align:left;"><b>GraphRAG Optimizations</b> have been implemented throughout the system, providing faster query processing and more efficient relationship traversal. 
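The User/Collection Isolation described above can be illustrated with a toy store in which every read, write, and delete is scoped by a (user, collection) pair, so one tenant's data never answers another tenant's query. This is a sketch of the idea only, not TrustGraph's Cassandra-backed implementation.

```python
# Toy illustration of user/collection isolation: all operations are
# keyed by (user, collection), so contexts stay fully segregated.
class IsolatedStore:
    def __init__(self):
        self._data = {}   # (user, collection) -> {key: value}

    def _scope(self, user, collection):
        return self._data.setdefault((user, collection), {})

    def put(self, user, collection, key, value):
        self._scope(user, collection)[key] = value

    def get(self, user, collection, key, default=None):
        return self._scope(user, collection).get(key, default)

    def drop_collection(self, user, collection):
        # Deleting a collection removes only that user's copy of it.
        self._data.pop((user, collection), None)

store = IsolatedStore()
store.put("alice", "research", "doc1", "confidential")
store.put("bob", "research", "doc1", "public")
```

Two users can share a collection name ("research") without sharing any data, which is the separation property the release highlights.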
The <b>Vector Store Enhancements</b> now include automatic collection creation on query, streamlining deployment and reducing operational overhead.</p><p class="paragraph" style="text-align:left;">The platform&#39;s robust <b>Apache Pulsar</b> foundation receives additional strengthening through <b>Build System</b> improvements that enable parallel container builds, significantly reducing deployment times for large-scale enterprise implementations.</p><p class="paragraph" style="text-align:left;"><b>About TrustGraph&#39;s Enterprise Mission</b><br>Since its release in 2024, TrustGraph has emerged as the definitive open-source platform for enterprise agentic AI, combining <b>data streaming with knowledge graphs, structured data, VectorDBs, and MCP interoperability</b> in a single containerized platform. TrustGraph’s mission centers on providing enterprises with <b>reduced hallucinations and higher accuracy</b> by grounding AI with contextually rich intelligence built from enterprise data.</p><p class="paragraph" style="text-align:left;">TrustGraph&#39;s commitment to <b>full stack sovereignty</b> enables organizations to deploy the entire platform—including sensitive Knowledge Cores—wherever they choose: on-premises, private cloud, public cloud, or bare metal infrastructure. This approach ensures enterprises maintain ultimate control over their data and AI stack while escaping the &quot;black box&quot; limitations of proprietary systems.</p><p class="paragraph" style="text-align:left;"><b>Availability and Getting Started</b><br>TrustGraph version 1.4.22 is immediately available through the project&#39;s <b>GitHub repository</b> and comprehensive documentation portal. 
The platform&#39;s <b>Configuration Builder</b> provides guided setup for enterprise deployments, while the <b>Workbench</b> offers interactive development capabilities for AI engineers and data scientists.</p><p class="paragraph" style="text-align:left;"><b>Community Resources</b><br>- <b>Documentation</b>: <a class="link" href="https://docs.trustgraph.ai/?utm_source=blog.trustgraph.ai&utm_medium=newsletter&utm_campaign=trustgraph-unveils-version-1-4-22" target="_blank" rel="noopener noreferrer nofollow">https://docs.trustgraph.ai/</a><br>- <b>GitHub Repository</b>: <a class="link" href="https://github.com/trustgraph-ai/trustgraph?utm_source=blog.trustgraph.ai&utm_medium=newsletter&utm_campaign=trustgraph-unveils-version-1-4-22" target="_blank" rel="noopener noreferrer nofollow">https://github.com/trustgraph-ai/trustgraph</a><br>- <b>Configuration Builder</b>: <a class="link" href="https://config-ui.demo.trustgraph.ai/?utm_source=blog.trustgraph.ai&utm_medium=newsletter&utm_campaign=trustgraph-unveils-version-1-4-22" target="_blank" rel="noopener noreferrer nofollow">https://config-ui.demo.trustgraph.ai/</a><br>- <b>Discord Community</b>: <a class="link" href="https://discord.gg/sQMwkRz5GX?utm_source=blog.trustgraph.ai&utm_medium=newsletter&utm_campaign=trustgraph-unveils-version-1-4-22" target="_blank" rel="noopener noreferrer nofollow">https://discord.gg/sQMwkRz5GX</a></p></div></div>
  ]]></content:encoded>
</item>

      <item>
  <title>TrustGraph 1.3.20: Breakthrough in Natural Language Database Querying and Data Processing</title>
  <description>Major update introduces NLP-to-GraphQL capabilities, enhanced collection management, and comprehensive system improvements</description>
      <enclosure url="https://media.beehiiv.com/cdn-cgi/image/fit=scale-down,format=auto,onerror=redirect,quality=80/uploads/asset/file/53410818-7651-4a6b-813c-6c088e7576c3/tg-adapter.png" length="1757166" type="image/png"/>
  <link>https://blog.trustgraph.ai/p/trustgraph-1-3-20-breakthrough-in-natural-language-database-querying-and-data-processing</link>
  <guid isPermaLink="true">https://blog.trustgraph.ai/p/trustgraph-1-3-20-breakthrough-in-natural-language-database-querying-and-data-processing</guid>
  <pubDate>Tue, 23 Sep 2025 18:40:45 +0000</pubDate>
  <atom:published>2025-09-23T18:40:45Z</atom:published>
    <dc:creator>Daniel Davis</dc:creator>
    <category><![CDATA[Trustgraph]]></category>
    <category><![CDATA[Structured Data]]></category>
  <content:encoded><![CDATA[
    <div class='beehiiv'><style>
  .bh__table, .bh__table_header, .bh__table_cell { border: 1px solid #C0C0C0; }
  .bh__table_cell { padding: 5px; background-color: #FFFFFF; }
  .bh__table_cell p { color: #2D2D2D; font-family: 'Helvetica',Arial,sans-serif !important; overflow-wrap: break-word; }
  .bh__table_header { padding: 5px; background-color:#F1F1F1; }
  .bh__table_header p { color: #2A2A2A; font-family:'Trebuchet MS','Lucida Grande',Tahoma,sans-serif !important; overflow-wrap: break-word; }
</style><div class='beehiiv__body'><p class="paragraph" style="text-align:left;"><b>SAN FRANCISCO, CA – September 23, 2025</b> — TrustGraph today announces the release of version 1.3.20, a significant update that revolutionizes how users interact with structured data through natural language processing capabilities. This release marks a major milestone in making complex database operations accessible through conversational interfaces.</p><h2 class="heading" style="text-align:left;" id="revolutionary-natural-language-data"><b>Revolutionary Natural Language Database Querying</b></h2><p class="paragraph" style="text-align:left;">The standout feature of TrustGraph 1.3.20 is its breakthrough NLP query to GraphQL service, enabling users to query databases using natural language instead of complex query syntax. This innovation democratizes data access by allowing technical and non-technical users alike to retrieve information through simple, conversational commands.</p><p class="paragraph" style="text-align:left;">&quot;This release represents a fundamental shift in how organizations can interact with their data,&quot; said Daniel Davis, the Co-Founder of TrustGraph. 
&quot;By bridging the gap between human language and database queries, we&#39;re making structured data truly accessible to everyone.&quot;</p><h2 class="heading" style="text-align:left;" id="comprehensive-structured-data-enhan"><b>Comprehensive Structured Data Enhancements</b></h2><p class="paragraph" style="text-align:left;">Version 1.3.20 introduces a complete overhaul of structured data handling capabilities:</p><ul><li><p class="paragraph" style="text-align:left;"><b>Structured Query Tool Integration</b>: Seamlessly integrates with the agent platform for intelligent query processing</p></li><li><p class="paragraph" style="text-align:left;"><b>Advanced Object Batching</b>: Optimizes performance through enhanced batch processing capabilities</p></li><li><p class="paragraph" style="text-align:left;"><b>Automated Data Loading</b>: New CLI functionality with auto mode streamlines data import workflows</p></li><li><p class="paragraph" style="text-align:left;"><b>Batch Import Operations</b>: Efficiently handles large-scale object imports with intelligent batching</p></li></ul><h2 class="heading" style="text-align:left;" id="intelligent-collection-management"><b>Intelligent Collection Management</b></h2><p class="paragraph" style="text-align:left;">The new release features sophisticated collection management that adapts to user behavior:</p><ul><li><p class="paragraph" style="text-align:left;"><b>Automatic Collection Creation</b>: Stores dynamically create collections during query operations</p></li><li><p class="paragraph" style="text-align:left;"><b>Extended User Integration</b>: User and collection fields are now utilized consistently throughout the entire system</p></li></ul><h2 class="heading" style="text-align:left;" id="enhanced-organization-and-performan"><b>Enhanced Organization and Performance</b></h2><p class="paragraph" style="text-align:left;">TrustGraph 1.3.20 introduces tool grouping functionality, allowing users to organize and manage their tools more 
effectively. The release also includes substantial GraphQL enhancements, including expanded table query support and streamlined parameter handling.</p><h2 class="heading" style="text-align:left;" id="infrastructure-and-reliability-impr"><b>Infrastructure and Reliability Improvements</b></h2><p class="paragraph" style="text-align:left;">This version delivers significant infrastructure improvements:</p><ul><li><p class="paragraph" style="text-align:left;"><b>Unified Configuration</b>: Standardized Cassandra configuration across all utilities ensures consistent connectivity and authentication</p></li><li><p class="paragraph" style="text-align:left;"><b>Enhanced Monitoring</b>: Improved Prometheus metrics provide accurate system monitoring and performance insights</p></li><li><p class="paragraph" style="text-align:left;"><b>Robust Vector Operations</b>: Resolved Milvus vector store integration issues for improved reliability</p></li><li><p class="paragraph" style="text-align:left;"><b>Streamlined API Operations</b>: Enhanced API consistency eliminates confusion between chunks and documents</p></li></ul><h2 class="heading" style="text-align:left;" id="availability-and-compatibility"><b>Availability and Compatibility</b></h2><p class="paragraph" style="text-align:left;">TrustGraph 1.3.20 is available immediately for download. 
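The NLP-to-GraphQL capability at the heart of this release can be illustrated with a deliberately tiny stand-in. TrustGraph uses an LLM-backed service for the mapping; the keyword matcher and schema below are invented for the example so that it stays deterministic and self-contained.

```python
# Toy stand-in for NLP-to-GraphQL: map a natural-language question
# onto a GraphQL query against a known schema. A real service would
# use an LLM plus the actual schema; this matcher is illustrative only.
SCHEMA_FIELDS = {
    "customer": ["name", "email"],
    "order": ["id", "total"],
}

def to_graphql(question):
    q = question.lower()
    for entity, fields in SCHEMA_FIELDS.items():
        if entity in q:
            # Build a simple list query over the matched entity's fields.
            return "query { %ss { %s } }" % (entity, " ".join(fields))
    raise ValueError("no schema entity matched")

query = to_graphql("Show me all customers")
```

The point of the real feature is that the user writes the question, never the query syntax; the service owns the translation.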
The release maintains full backward compatibility while introducing powerful new capabilities that enhance both user experience and system performance.</p><h2 class="heading" style="text-align:left;" id="about-trust-graph"><b>About TrustGraph</b></h2><p class="paragraph" style="text-align:left;">TrustGraph is a leading platform for intelligent data management and querying, enabling organizations to unlock insights from their structured and unstructured data through advanced AI-powered tools and natural language interfaces.</p><ul><li><p class="paragraph" style="text-align:left;">🌟 TrustGraph on <span style="text-decoration:underline;"><i><a class="link" href="https://github.com/trustgraph-ai/trustgraph?utm_source=blog.trustgraph.ai&utm_medium=newsletter&utm_campaign=trustgraph-1-3-20-breakthrough-in-natural-language-database-querying-and-data-processing" target="_blank" rel="noopener noreferrer nofollow" style="color: rgb(12, 74, 110)">GitHub</a></i></span></p></li><li><p class="paragraph" style="text-align:left;">Join the TrustGraph Community on <span style="text-decoration:underline;"><i><a class="link" href="https://discord.gg/sQMwkRz5GX?utm_source=blog.trustgraph.ai&utm_medium=newsletter&utm_campaign=trustgraph-1-3-20-breakthrough-in-natural-language-database-querying-and-data-processing" target="_blank" rel="noopener noreferrer nofollow" style="color: rgb(12, 74, 110)">Discord</a></i></span></p></li><li><p class="paragraph" style="text-align:left;">Watch TrustGraph tutorials on <span style="color:inherit;"><span style="text-decoration:underline;"><i><a class="link" href="https://www.youtube.com/@TrustGraphAI?sub_confirmation=1&utm_source=blog.trustgraph.ai&utm_medium=newsletter&utm_campaign=trustgraph-1-3-20-breakthrough-in-natural-language-database-querying-and-data-processing" target="_blank" rel="noopener noreferrer nofollow" style="color: rgb(12, 74, 110)">YouTube</a></i></span></span></p></li></ul></div></div>
  ]]></content:encoded>
</item>

      <item>
  <title>Case Study: Apache Pulsar as the Event-Driven Backbone of TrustGraph</title>
  <description>The story of TrustGraph underscores a lesson for AI platform developers: a robust messaging backbone is the key to unlocking scalable, modular, event-driven systems.</description>
      <enclosure url="https://media.beehiiv.com/cdn-cgi/image/fit=scale-down,format=auto,onerror=redirect,quality=80/uploads/asset/file/856e27f2-ad07-4dcf-8391-a49a542c72f8/tgandpulsar.png" length="70420" type="image/png"/>
  <link>https://blog.trustgraph.ai/p/case-study-apache-pulsar-as-the-event-driven-backbone-of-trustgraph</link>
  <guid isPermaLink="true">https://blog.trustgraph.ai/p/case-study-apache-pulsar-as-the-event-driven-backbone-of-trustgraph</guid>
  <pubDate>Fri, 19 Sep 2025 17:11:00 +0000</pubDate>
  <atom:published>2025-09-19T17:11:00Z</atom:published>
    <dc:creator>Daniel Davis</dc:creator>
    <category><![CDATA[Trustgraph]]></category>
    <category><![CDATA[Data Streaming]]></category>
    <category><![CDATA[Pulsar]]></category>
  <content:encoded><![CDATA[
    <div class='beehiiv'><style>
  .bh__table, .bh__table_header, .bh__table_cell { border: 1px solid #C0C0C0; }
  .bh__table_cell { padding: 5px; background-color: #FFFFFF; }
  .bh__table_cell p { color: #2D2D2D; font-family: 'Helvetica',Arial,sans-serif !important; overflow-wrap: break-word; }
  .bh__table_header { padding: 5px; background-color:#F1F1F1; }
  .bh__table_header p { color: #2A2A2A; font-family:'Trebuchet MS','Lucida Grande',Tahoma,sans-serif !important; overflow-wrap: break-word; }
</style><div class='beehiiv__body'><p class="paragraph" style="text-align:left;">(<a class="link" href="https://streamnative.io/blog/case-study-apache-pulsar-as-the-event-driven-backbone-of-trustgraph?utm_source=blog.trustgraph.ai&utm_medium=newsletter&utm_campaign=case-study-apache-pulsar-as-the-event-driven-backbone-of-trustgraph" target="_blank" rel="noopener noreferrer nofollow">Original Post courtesy of StreamNative</a>)</p><h2 class="heading" style="text-align:left;" id="introduction-the-challenge-of-build">‍<b>Introduction: The Challenge of Building an AI Platform</b></h2><p class="paragraph" style="text-align:left;">Late one afternoon, a team of developers set out to build <b>TrustGraph</b> – an open-source AI product creation platform aimed at orchestrating sophisticated AI agents. They faced a familiar challenge: how to connect a constellation of microservices (knowledge extractors, vector indexers, agent runtimes, etc.) into one cohesive system that can scale and adapt dynamically. Traditional point-to-point integrations felt brittle and hard to scale. The team needed a <i>nervous system</i> for their platform – a messaging backbone that could seamlessly link all components in real-time. Enter <b>Apache Pulsar</b>, the technology that would become the high-performance core of TrustGraph’s event-driven architecture. Pulsar (with enterprise support from <b>StreamNative</b>) offered exactly what TrustGraph needed: a reliable publish/subscribe foundation with the flexibility to handle everything from real-time agent queries to large-scale data ingestion. 
What follows is the story of how Pulsar powers TrustGraph, enabling developers to build modular AI systems that are scalable, resilient, and a joy to work with.</p><h2 class="heading" style="text-align:left;" id="why-trust-graph-chose-pulsar-as-its"><b>Why TrustGraph Chose Pulsar as its Backbone</b></h2><p class="paragraph" style="text-align:left;">From the outset, the TrustGraph engineers recognized that building a scalable AI platform meant embracing <i>event-driven design</i>. They needed a messaging layer that could support <b>diverse workloads</b> – from synchronous API calls to asynchronous data pipelines – without becoming a bottleneck. Apache Pulsar stood out for several reasons:</p><ul><li><p class="paragraph" style="text-align:left;"><b>It “just works” for ops:</b> Pulsar provides an operations-friendly way to connect complex processing elements. Its simplicity in managing communication patterns and scaling freed the team from writing custom pipeline glue code. Site reliability engineers could focus on deploying and monitoring AI capabilities rather than debugging message passing.</p></li><li><p class="paragraph" style="text-align:left;"><b>Native Pub/Sub Model:</b> Pulsar’s publish-subscribe architecture was a perfect fit for TrustGraph’s decoupled microservices. Components like the Knowledge Graph Builder, AI Agent Runtime, and data processors communicate by publishing events and subscribing to the topics they care about – no direct dependencies needed. This decoupling means each service can evolve or scale independently, a critical requirement for a modular AI platform.</p></li><li><p class="paragraph" style="text-align:left;"><b>Persistent and Non-Persistent Topics:</b> Pulsar uniquely lets you choose between <b>persistent</b> and <b>non-persistent</b> messaging. TrustGraph leverages this to balance reliability vs. latency. For critical data (e.g. 
ingesting documents into a knowledge base), TrustGraph uses persistent topics to guarantee delivery – ensuring no data is lost even if a service goes down. Conversely, for high-speed, ephemeral interactions (like an AI agent responding to a user query), TrustGraph uses non-persistent topics to minimize overhead and latency. This flexible messaging guarantees that each use-case gets the right trade-off between speed and safety.</p></li><li><p class="paragraph" style="text-align:left;"><b>Multi-Tenancy and Isolation:</b> Pulsar’s built-in multi-tenancy (via tenants and namespaces) proved invaluable for TrustGraph’s vision of <b>dynamic “Flows.”</b> A Flow in TrustGraph is essentially an isolated AI pipeline or workspace. Pulsar’s tenant/namespace model allows TrustGraph to create isolated channels for each Flow, ensuring that projects or tenants don’t interfere with each other’s data streams. This strong isolation was critical for enabling TrustGraph to support multiple concurrent AI agent workflows in one cluster, whether they belong to different teams, customers, or use cases.</p></li></ul><p class="paragraph" style="text-align:left;">In summary, Pulsar provided the <b>scalability, flexibility, and reliability</b> that TrustGraph needed in a messaging backbone. As Mark Adams, Co-founder of TrustGraph, put it, building on Pulsar gave them confidence that the communication layer would not be the limiting factor in scaling intelligent agents. It laid a rock-solid foundation on which to construct an AI platform ready for both rapid iteration and production-grade stability.</p><h2 class="heading" style="text-align:left;" id="architecting-trust-graph-with-pulsa"><b>Architecting TrustGraph with Pulsar: Key Patterns</b></h2><p class="paragraph" style="text-align:left;">With Apache Pulsar at its core, TrustGraph’s architecture evolved a set of powerful patterns. 
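The reliability-versus-latency choice described above maps directly onto Pulsar's topic-URI scheme, where <code>persistent://</code> and <code>non-persistent://</code> are the two real schemes. The helper below is illustrative rather than TrustGraph code, and the workload classification is an assumption made for the example; only the URI scheme and the <code>tg</code> tenant come from the article.

```python
# Sketch: pick topic durability per workload using Pulsar's topic-URI
# scheme. The DURABLE_WORKLOADS set is an assumed classification,
# not TrustGraph's actual policy.
DURABLE_WORKLOADS = {"ingest", "config"}   # must never lose messages

def topic_uri(workload, name, tenant="tg"):
    # Critical pipelines get persistent topics (guaranteed delivery);
    # interactive traffic gets non-persistent topics (lowest latency).
    scheme = "persistent" if workload in DURABLE_WORKLOADS else "non-persistent"
    return f"{scheme}://{tenant}/{workload}/{name}"

ingest_topic = topic_uri("ingest", "documents")
query_topic = topic_uri("request", "graph-rag:research-flow")
```

Encoding the trade-off in topic names keeps the decision explicit and auditable: anyone reading a topic URI can see which guarantee that stream carries.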
These patterns illustrate how Pulsar’s features are used in practice to create an event-driven, modular AI system:</p><ol start="1"><li><p class="paragraph" style="text-align:left;"><b>Dynamic and Scalable “Flows”:</b> In TrustGraph, a <b>Flow</b> represents a configurable pipeline of AI tasks (for example, a data ingestion flow or an agent reasoning flow). Some services are <i>global</i> (shared across all Flows), while others are <i>flow-specific</i>. Pulsar enables this dynamic behavior through <b>dynamic queue naming and creation</b>.<br></p><ul><li><p class="paragraph" style="text-align:left;"><i>Global Services</i> (like configuration, knowledge base, and librarian APIs) listen on well-known, fixed Pulsar topics since they are always available and shared.</p></li><li><p class="paragraph" style="text-align:left;"><i>Flow-Hosted Services</i> (like a GraphRAG processor, Agent runtime, or custom embeddings service) spin up when a new Flow is started. TrustGraph automatically generates unique Pulsar topics for that Flow’s services. For example, if a Flow is named <code>research-flow</code>, the GraphRAG service in that flow might publish/subscribe on topics named:<br></p><ul><li><p class="paragraph" style="text-align:left;"><code>non-persistent://tg/request/graph-rag:research-flow</code></p></li><li><p class="paragraph" style="text-align:left;"><code>non-persistent://tg/response/graph-rag:research-flow</code><br>Each new Flow gets its own set of topics, isolating its traffic. Multiple Flows can run concurrently without stepping on each other’s messages – a huge win for <b>multi-project and multi-tenant deployments</b>. When the Flow is stopped, its topics can be torn down just as easily. 
This dynamic provisioning of queues means the platform can <b>scale out new pipelines on the fly</b> with full isolation, all thanks to Pulsar’s flexible naming and multi-tenancy.</p></li></ul></li></ul></li><li><p class="paragraph" style="text-align:left;"><b>Diverse Communication Patterns (Pub/Sub Flexibility):</b> TrustGraph doesn’t force a one-size-fits-all messaging style; instead, it uses Pulsar to support different interaction patterns within the platform:<br></p><ul><li><p class="paragraph" style="text-align:left;"><b>Request/Response Messaging:</b> For interactive services—such as an AI Agent API or the GraphRAG query service—TrustGraph sets up dedicated request and response topics. For example, when a user’s query hits the Agent service, it is published to a <i>request</i> topic, the agent processes it, and the answer comes back on a <i>response</i> topic tied to that user’s session or flow. This pub/sub request-response pattern feels like a direct call from the client’s perspective, but under the hood it’s decoupled and asynchronous. The client can await a response without knowing which specific service instance will handle it. This pattern gives <b>synchronous behavior on top of asynchronous internals</b>, combining interactivity with scalability.</p></li><li><p class="paragraph" style="text-align:left;"><b>Fire-and-Forget Ingestion:</b> For one-way data pipelines like ingesting documents, TrustGraph uses a simpler fire-and-forget approach. A client (say, a data loader component or a user uploading a file) will publish data to an <b>ingestion topic</b> and immediately move on. Downstream processor services (e.g. a Text Load service or a Triples Store loader) are subscribed and will process the data in due course. Crucially, these ingestion topics are <b>persistent</b> in Pulsar. This guarantees that if a processor is slow or temporarily down, the data remains in the queue until processed, ensuring <b>no loss</b>. 
Developers benefit by not having to babysit the pipeline – they trust Pulsar to eventually deliver data when the consumers are ready, improving the system’s resilience to spikes or faults.</p></li></ul></li><li><p class="paragraph" style="text-align:left;"><b>Centralized, Push-Based Configuration:</b> Running a complex AI platform means lots of configuration: prompts for the LLM, tool definitions for agents, pipeline parameters, etc. TrustGraph chose to manage configuration changes through Pulsar as well, turning config into an event stream. There is a dedicated Pulsar topic (e.g. <code>persistent://tg/config/config</code>) that acts as a <b>central config channel</b>. Whenever an administrator or developer updates a configuration – for instance, adjusting a prompt template or adding a new tool plugin – that update is published as a message on the config topic. All services that care about config subscribe to this channel. TrustGraph’s services (built on common base classes <code>FlowProcessor</code> or <code>AsyncProcessor</code>) are designed to receive these config events and <b>reconfigure themselves on the fly</b>. The moment a new Flow is launched or a parameter changes, every component gets the memo via Pulsar and updates its behavior without needing a restart. This push-based config distribution makes the platform <b>highly dynamic</b> – developers can deploy new capabilities or tune the system in real-time, and Pulsar ensures a consistent configuration state across the distributed system.</p></li></ol><p class="paragraph" style="text-align:left;">These patterns highlight a theme: <b>Pulsar decouples parts of the system while keeping them coordinated.</b> Dynamic topic creation lets TrustGraph scale out new processing flows easily. Multiple messaging patterns let each service communicate in the style that fits its role. A config event stream keeps everything in sync. 
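</p><p class="paragraph" style="text-align:left;">A minimal sketch of that push-based config pattern follows. Both classes are invented for illustration; they stand in for the <code>persistent://tg/config/config</code> topic and a subscribing service, while TrustGraph’s real <code>FlowProcessor</code>/<code>AsyncProcessor</code> machinery is more involved.</p>

```python
# Minimal sketch of push-based configuration (classes invented for
# illustration; they model the config topic and one subscribing
# service, not TrustGraph's actual FlowProcessor/AsyncProcessor code).
class ConfigChannel:
    """Plays the role of the persistent://tg/config/config topic."""
    def __init__(self):
        self._subscribers = []

    def subscribe(self, callback):
        self._subscribers.append(callback)

    def publish(self, update: dict):
        for callback in self._subscribers:  # every service gets the memo
            callback(update)

class AgentService:
    """A service that reconfigures itself on the fly, with no restart."""
    def __init__(self, channel: ConfigChannel):
        self.prompt = "default prompt"
        channel.subscribe(self.on_config)

    def on_config(self, update: dict):
        self.prompt = update.get("prompt", self.prompt)

channel = ConfigChannel()
agent = AgentService(channel)
channel.publish({"prompt": "research-assistant prompt v2"})
# agent.prompt now reflects the update without a service restart
```

<p class="paragraph" style="text-align:left;">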
All of it is implemented on Pulsar’s robust pub/sub substrate, meaning it inherits Pulsar’s strengths like horizontal scalability, durability, and back-pressure handling.</p><h2 class="heading" style="text-align:left;" id="benefits-to-developers-and-ai-teams"><b>Benefits to Developers and AI Teams</b></h2><p class="paragraph" style="text-align:left;">By weaving Pulsar so deeply into its design, TrustGraph reaps numerous benefits that directly address pain points developers often face in building AI systems:</p><ul><li><p class="paragraph" style="text-align:left;"><b>Easier Scaling:</b> Need to handle more load? Simply add more consumers to a Pulsar topic to scale out a microservice – no complex rebalancing needed. Because each TrustGraph component processes messages from a queue, scaling is as straightforward as running another instance that subscribes to the same topic. For example, if the AI Agent requests spike, the team can spin up additional agent service containers; Pulsar will automatically distribute requests among them. This elasticity means the system can handle varying workloads on different parts of the AI pipeline without a hitch.</p></li><li><p class="paragraph" style="text-align:left;"><b>Resilience and Fault Tolerance:</b> Pulsar’s persistent messaging ensures critical data isn’t lost if something fails. Developers don’t have to write custom retry logic or worry about data gaps – if the Knowledge Graph builder goes down for a bit, all pending documents remain queued. When it comes back up, it picks up where it left off. Also, thanks to the decoupled design, a failure in one component (e.g., the vector embedding service) won’t crash the entire platform. Messages will queue up until that service recovers, while the rest of the system continues unaffected. 
This isolation of failures makes the overall platform more <b>robust in production</b>.</p></li><li><p class="paragraph" style="text-align:left;"><b>Flexibility for New Features:</b> The dynamic Flow architecture allows teams to deploy new pipelines or custom components without modifying the core system. Because Pulsar handles the routing, a new service can be introduced by simply defining the topics it will use and plugging it in. This <b>pluggable architecture</b> means TrustGraph can evolve quickly. For instance, a developer could add a new “Sentiment Analysis” microservice into a Flow by having it subscribe to an intermediate topic – no need for a full redeploy or breaking existing flows. Pulsar’s multi-tenant setup means this can happen in an isolated way, so experimentation in one Flow won’t disrupt others.</p></li><li><p class="paragraph" style="text-align:left;"><b>Better Observability:</b> With Pulsar as the central hub for all messages, the team gets a one-stop view into the system’s activity. TrustGraph takes advantage of Pulsar’s metrics – like message rates, consumer backlogs, throughput, and latency per topic – to give developers deep insight into how each part of the platform is performing. These metrics feed into Grafana dashboards where the team can see, for example, if the “ingestion queue” is backing up or if the “response times on the agent request topic” are rising. Such observability helps pinpoint bottlenecks quickly (maybe a vector DB is slow, causing a backlog) and aids in capacity planning. It essentially turns Pulsar into a stethoscope on the health of the AI platform.</p></li><li><p class="paragraph" style="text-align:left;"><b>Faster Iteration:</b> Perhaps most importantly, this Pulsar-driven architecture empowers faster development cycles. Because adding new flows or services is low-friction, developers can prototype new AI capabilities without weeks of pipeline engineering.
The combination of <b>fewer bottlenecks, auto-scaling behavior, safe fault handling, and real-time config updates</b> means the team spends less time on infrastructure and more on innovating AI features. In practice, that could mean quickly trying a new large language model in the Agent service or connecting an experimental knowledge source – TrustGraph will handle the messaging and integration details, so the developer can focus on the AI logic.</p></li></ul><p class="paragraph" style="text-align:left;">All these benefits fundamentally spring from <b>Pulsar’s role as a unified messaging layer</b>. It abstracts away the hard parts of distributed communication (scaling, reliability, ordering, isolation), letting developers concentrate on building intelligent agents and knowledge pipelines.</p><h2 class="heading" style="text-align:left;" id="pulsar-in-action-a-day-in-the-life-"><b>Pulsar in Action: A Day in the Life with TrustGraph</b></h2><p class="paragraph" style="text-align:left;">To cement how Pulsar powers real-world usage of TrustGraph, let’s walk through a hypothetical scenario:</p><p class="paragraph" style="text-align:left;"><b>Meet Alice</b>, an AI engineer at an enterprise, who is using TrustGraph to build a new AI-powered research assistant. She begins her day by defining a new processing <b>Flow</b> for the project, aptly named <code>research-flow</code>. When Alice starts this Flow via TrustGraph’s CLI, under the hood the platform spins up microservices for that Flow – an Agent service, a GraphRAG service, an Embeddings service, etc. – each with their own Pulsar topics. Alice doesn’t have to manually configure any queues; <b>Pulsar automatically provisions topics</b> like <code>tg/request/graph-rag:research-flow</code> and <code>tg/response/graph-rag:research-flow</code> for her new Flow. Immediately, her Flow’s services begin running in isolation. 
In fact, a colleague can launch a separate <code>analysis-flow</code> in parallel, and thanks to Pulsar, the two sets of services won’t conflict. This allows different teams to use TrustGraph on the same infrastructure, each with their own dedicated message streams.</p><p class="paragraph" style="text-align:left;">Later that morning, Alice feeds a batch of documents (PDF reports) into TrustGraph for ingestion. As she uploads them via the Workbench UI, each document’s content is published as a message to the <b>Text Load</b> service’s Pulsar topic. The ingestion is designed as <b>fire-and-forget</b> – the upload request immediately returns, and Alice can go grab a coffee while TrustGraph pipelines the data. Pulsar’s persistent queue means even if the Text Load processor or downstream Knowledge Graph builder is busy, all documents will be queued reliably. After a brief break, Alice checks the dashboard: the documents are being processed one by one, and there are no errors. One of the processing containers did restart (maybe due to a transient error), but because of Pulsar, <b>no data was lost and the pipeline resumed automatically</b> once the service recovered. Alice is silently thankful for the decision to use Pulsar; in past projects with DIY messaging, a crash often meant writing custom retry logic or manual data cleanup, but not anymore.</p><p class="paragraph" style="text-align:left;">In the afternoon, Alice decides to improve the AI agent’s behavior by tweaking its prompt and adding a new tool for it. She opens TrustGraph’s configuration UI, updates the prompt template, and registers an external API as a new tool. The moment she hits “Save”, TrustGraph’s <b>Config service</b> publishes an update event to the <code>tg/config/config</code> topic. All running services in <code>research-flow</code> receive this update within milliseconds, thanks to their Pulsar subscriptions.
The Agent runtime immediately pulls in the new prompt and tool definitions – there’s no need to restart anything. Alice initiates a test query to her agent; it responds using the updated prompt format and can even call the new API tool as needed, all in real-time. This kind of <b>live reconfiguration</b> makes it incredibly easy for Alice to iterate on her AI agent’s capabilities. In traditional setups, such changes might require editing config files on multiple servers or restarting processes, disrupting the workflow. With Pulsar’s event-driven config, TrustGraph achieves seamless, centralized control.</p><p class="paragraph" style="text-align:left;">Before wrapping up, Alice reviews the system’s performance. Using TrustGraph’s observability stack, she notices the <b>message backlog</b> on the <code>research-flow</code> ingestion topic grew slightly during peak load, but then drained as additional consumers auto-scaled. The Grafana metrics (sourced from Pulsar) show healthy throughput. One insight stands out: the response queue for the Agent service shows occasional latency spikes. Investigating further, Alice realizes that complex user questions trigger multiple knowledge searches, slowing responses. She decides to allocate another instance of the GraphRAG service to that Flow to handle these heavy queries. Thanks to Pulsar, scaling out is straightforward – the new instance will simply become another consumer on the relevant topics. Sure enough, once deployed, the next test query is handled faster, as the load is now balanced. The <b>bottleneck was resolved by a one-line configuration change</b> to scale the service, without any code changes or downtime. 
This agility in tuning performance is a direct consequence of the Pulsar-based design.</p><p class="paragraph" style="text-align:left;">By the end of the day, Alice has not only built a functioning AI research assistant, but she’s also iterated on it multiple times – all without struggling with messaging middleware. TrustGraph, empowered by Pulsar, took care of the heavy lifting: routing messages, preserving data, triggering reconfigurations, and scaling services on demand. For Alice, the developer experience is night-and-day compared to earlier projects. She can focus on crafting AI logic, confident that the <b>event-driven backbone</b> (powered by Pulsar and StreamNative’s expertise) will handle the rest.</p><h2 class="heading" style="text-align:left;" id="conclusion-pulsar-as-the-foundation"><b>Conclusion: Pulsar as the Foundation for Event-Driven AI</b></h2><p class="paragraph" style="text-align:left;">The story of TrustGraph underscores a broader lesson for AI platform developers: a robust messaging backbone is the key to unlocking scalable, modular, event-driven systems. Apache Pulsar proved to be that backbone for TrustGraph – acting as the central nervous system that links independent AI modules into one intelligent whole. Its pub/sub model, dynamic queue management, multi-tenancy, and mix of persistent vs. transient messaging enabled TrustGraph to achieve a level of flexibility and resilience that would be hard to realize otherwise. By using Pulsar, the TrustGraph team and its users gained <b>scalability, fault tolerance, and speed of iteration</b> as first-class features of the architecture. 
Developers can add new capabilities without fear of breaking the system, ops engineers can sleep easier knowing spikes or failures won’t collapse the pipeline, and organizations can deploy multiple AI agent flows concurrently with confidence in their isolation and security.</p><p class="paragraph" style="text-align:left;">In essence, Pulsar (with StreamNative’s enterprise support in the wings) serves as the <i>foundation</i> for TrustGraph’s vision of an AI platform. It demonstrates how an advanced event streaming technology can solve the pain points of building AI products: eliminating brittle point-to-point links, preventing data loss, simplifying scaling, and improving observability. For any team looking to build the next generation of AI systems – be it autonomous agents, real-time analytics, or context-driven LLM applications – the combination of <b>TrustGraph’s modular framework and Pulsar’s event-driven backbone</b> offers a compelling blueprint. Pulsar enabled TrustGraph to transform from an ambitious idea into a production-grade reality, reinforcing its role as a foundational enabler for event-driven AI platforms. 
The result is a story of technology empowering developers: with Apache Pulsar under the hood, TrustGraph can truly deliver on its promise of creating intelligent, context-aware AI agents at scale.</p><ul><li><p class="paragraph" style="text-align:left;">🌟 TrustGraph on <a class="link" href="https://github.com/trustgraph-ai/trustgraph?utm_source=blog.trustgraph.ai&utm_medium=newsletter&utm_campaign=case-study-apache-pulsar-as-the-event-driven-backbone-of-trustgraph" target="_blank" rel="noopener noreferrer nofollow">GitHub</a></p></li><li><p class="paragraph" style="text-align:left;">Join the TrustGraph Community on <a class="link" href="https://discord.gg/sQMwkRz5GX?utm_source=blog.trustgraph.ai&utm_medium=newsletter&utm_campaign=case-study-apache-pulsar-as-the-event-driven-backbone-of-trustgraph" target="_blank" rel="noopener noreferrer nofollow">Discord</a></p></li><li><p class="paragraph" style="text-align:left;">Watch TrustGraph tutorials on <span style="color:inherit;"><a class="link" href="https://www.youtube.com/@TrustGraphAI?sub_confirmation=1&utm_source=blog.trustgraph.ai&utm_medium=newsletter&utm_campaign=case-study-apache-pulsar-as-the-event-driven-backbone-of-trustgraph" target="_blank" rel="noopener noreferrer nofollow">YouTube</a></span></p></li><li><p class="paragraph" style="text-align:left;">Learn more about <a class="link" href="https://pulsar.apache.org/?utm_source=blog.trustgraph.ai&utm_medium=newsletter&utm_campaign=case-study-apache-pulsar-as-the-event-driven-backbone-of-trustgraph" target="_blank" rel="noopener noreferrer nofollow">Apache Pulsar</a></p></li><li><p class="paragraph" style="text-align:left;">Learn more about the creators of Pulsar - <a class="link" href="https://streamnative.io/?utm_source=blog.trustgraph.ai&utm_medium=newsletter&utm_campaign=case-study-apache-pulsar-as-the-event-driven-backbone-of-trustgraph" target="_blank" rel="noopener noreferrer nofollow">StreamNative</a></p></li></ul></div><div class='beehiiv__footer'><br 
class='beehiiv__footer__break'><hr class='beehiiv__footer__line'><a target="_blank" class="beehiiv__footer_link" style="text-align: center;" href="https://www.beehiiv.com/?utm_campaign=24765973-3283-4438-890b-af774491fd80&utm_medium=post_rss&utm_source=trustgraph">Powered by beehiiv</a></div></div>
  ]]></content:encoded>
</item>

      <item>
  <title>The NeoCloud Revolution</title>
  <description>How TrustGraph Levels the AI Playing Field Against Hyperscale Giants</description>
      <enclosure url="https://media.beehiiv.com/cdn-cgi/image/fit=scale-down,format=auto,onerror=redirect,quality=80/uploads/asset/file/b16c9c27-a1bc-4c5f-b679-b6d3dcccab34/image__22_.jpg" length="1788657" type="image/jpeg"/>
  <link>https://blog.trustgraph.ai/p/the-neocloud-revolution</link>
  <guid isPermaLink="true">https://blog.trustgraph.ai/p/the-neocloud-revolution</guid>
  <pubDate>Wed, 10 Sep 2025 22:52:16 +0000</pubDate>
  <atom:published>2025-09-10T22:52:16Z</atom:published>
    <dc:creator>Daniel Davis</dc:creator>
    <category><![CDATA[Trustgraph]]></category>
    <category><![CDATA[Hyperscalers]]></category>
    <category><![CDATA[Neocloud]]></category>
  <content:encoded><![CDATA[
    <div class='beehiiv'><style>
  .bh__table, .bh__table_header, .bh__table_cell { border: 1px solid #C0C0C0; }
  .bh__table_cell { padding: 5px; background-color: #FFFFFF; }
  .bh__table_cell p { color: #2D2D2D; font-family: 'Helvetica',Arial,sans-serif !important; overflow-wrap: break-word; }
  .bh__table_header { padding: 5px; background-color:#F1F1F1; }
  .bh__table_header p { color: #2A2A2A; font-family:'Trebuchet MS','Lucida Grande',Tahoma,sans-serif !important; overflow-wrap: break-word; }
</style><div class='beehiiv__body'><p class="paragraph" style="text-align:left;">The cloud computing landscape has long been dominated by the &quot;Big Three&quot; hyperscalers—AWS, Microsoft Azure, and Google Cloud Platform. Their massive infrastructure investments and AI service portfolios have created seemingly insurmountable barriers for smaller cloud providers. But a new paradigm is emerging that could fundamentally reshape this dynamic, and it&#39;s powered by an unlikely catalyst: enterprise demand for AI sovereignty.</p><p class="paragraph" style="text-align:left;">Enter <a class="link" href="https://trustgraph.ai?utm_source=blog.trustgraph.ai&utm_medium=newsletter&utm_campaign=the-neocloud-revolution" target="_blank" rel="noopener noreferrer nofollow">TrustGraph</a>, the open-source agentic AI platform that&#39;s opening unprecedented opportunities for NeoClouds—the emerging generation of specialized, agile cloud providers—to compete directly with hyperscale giants on AI capabilities while offering something the big players fundamentally cannot: true data sovereignty and freedom from vendor lock-in.</p><h2 class="heading" style="text-align:left;" id="the-hyperscaler-ai-trap"><b>The Hyperscaler AI Trap</b></h2><p class="paragraph" style="text-align:left;">The current AI landscape presents enterprises with a troubling dilemma. While hyperscalers offer impressive AI services, they come with significant drawbacks that forward-thinking organizations are increasingly unwilling to accept:</p><p class="paragraph" style="text-align:left;"><b>Proprietary Black Boxes</b>: Enterprise data flows into opaque systems where organizations lose visibility into how their information is processed, stored, and utilized. 
This lack of transparency creates compliance nightmares and erodes trust.</p><p class="paragraph" style="text-align:left;"><b>Vendor Lock-in</b>: Once committed to a hyperscaler&#39;s AI ecosystem, organizations face astronomical switching costs and technical dependencies that make migration nearly impossible.</p><p class="paragraph" style="text-align:left;"><b>Data Sovereignty Concerns</b>: In an era of increasing regulatory scrutiny and geopolitical tensions, many organizations cannot afford to have their most sensitive data processed on infrastructure they don&#39;t control.</p><p class="paragraph" style="text-align:left;"><b>One-Size-Fits-All Limitations</b>: Hyperscaler AI services are designed for mass consumption, often lacking the customization and specialized industry knowledge that enterprises require.</p><h2 class="heading" style="text-align:left;" id="the-neo-cloud-opportunity"><b>The NeoCloud Opportunity</b></h2><p class="paragraph" style="text-align:left;">This is where NeoClouds—agile, specialized cloud providers focused on specific verticals, regions, or technical capabilities—see their opening. 
Unlike hyperscalers, NeoClouds can offer:</p><ul><li><p class="paragraph" style="text-align:left;"><b>Intimate Customer Relationships</b>: Direct access to decision-makers and custom solutions</p></li><li><p class="paragraph" style="text-align:left;"><b>Specialized Expertise</b>: Deep domain knowledge in specific industries or technologies</p></li><li><p class="paragraph" style="text-align:left;"><b>Regulatory Compliance</b>: Local data residency and industry-specific compliance frameworks</p></li><li><p class="paragraph" style="text-align:left;"><b>Flexible Pricing</b>: Competitive, transparent pricing without complex enterprise licensing schemes</p></li><li><p class="paragraph" style="text-align:left;"><b>Rapid Innovation</b>: Ability to quickly adopt and integrate cutting-edge technologies</p></li></ul><p class="paragraph" style="text-align:left;">The challenge has always been matching the technical sophistication of hyperscaler AI services. Until now.</p><h2 class="heading" style="text-align:left;" id="trust-graph-the-great-equalizer"><b>TrustGraph: The Great Equalizer</b></h2><p class="paragraph" style="text-align:left;">TrustGraph represents a paradigm shift that suddenly makes advanced AI capabilities accessible to any infrastructure provider, regardless of size. 
Here&#39;s how it transforms the competitive landscape:</p><h3 class="heading" style="text-align:left;" id="complete-technical-parity"><b>Complete Technical Parity</b></h3><p class="paragraph" style="text-align:left;">TrustGraph delivers enterprise-grade AI capabilities that match or exceed what hyperscalers offer for enterprise AI transformation:</p><ul><li><p class="paragraph" style="text-align:left;"><b>Advanced Context Engineering</b>: Sophisticated knowledge graph construction and agentic retrieval systems that often outperform hyperscaler RAG implementations</p></li><li><p class="paragraph" style="text-align:left;"><b>Full LLM Control</b>: Deploy and manage open LLMs where you have full control, paying only for the compute and maintaining total data residency</p></li><li><p class="paragraph" style="text-align:left;"><b>Production-Ready Orchestration</b>: Full agent lifecycle management with observability and monitoring</p></li><li><p class="paragraph" style="text-align:left;"><b>Scalable Architecture</b>: Enterprise-grade performance across diverse computing environments</p></li></ul><h3 class="heading" style="text-align:left;" id="the-sovereignty-advantage"><b>The Sovereignty Advantage</b></h3><p class="paragraph" style="text-align:left;">What makes TrustGraph revolutionary is that NeoClouds can now offer something hyperscalers fundamentally cannot:</p><p class="paragraph" style="text-align:left;"><b>Complete Data Sovereignty</b>: Organizations maintain total control over their data, processing pipelines, and AI models. Every component runs on infrastructure they choose and control.</p><p class="paragraph" style="text-align:left;"><b>Open Source Transparency</b>: Unlike hyperscaler black boxes, every aspect of the AI pipeline is visible, auditable, and modifiable. 
Organizations can see exactly how their data is processed and used.</p><p class="paragraph" style="text-align:left;"><b>Portable Knowledge Assets</b>: TrustGraph&#39;s revolutionary &quot;Knowledge Cores&quot; create truly portable AI knowledge packages that prevent vendor lock-in while enabling rapid deployment across different environments.</p><p class="paragraph" style="text-align:left;"><b>Universal Deployment Flexibility</b>: The same TrustGraph deployment works consistently across any infrastructure—from bare metal to public cloud to hybrid environments.</p><h3 class="heading" style="text-align:left;" id="economic-disruption"><b>Economic Disruption</b></h3><p class="paragraph" style="text-align:left;">TrustGraph enables NeoClouds to compete on economics in ways previously impossible:</p><p class="paragraph" style="text-align:left;"><b>Transparent Cost Structure</b>: Built-in economic observability provides real-time cost monitoring, eliminating the surprise billing that plagues hyperscaler relationships.</p><p class="paragraph" style="text-align:left;"><b>Efficient Resource Utilization</b>: Local deployment and optimization capabilities often result in better price-performance ratios than hyperscaler services.</p><p class="paragraph" style="text-align:left;"><b>No AI Tax</b>: Organizations avoid the premium pricing inherent in hyperscaler AI services, paying only for compute resources and platform licensing.</p><h2 class="heading" style="text-align:left;" id="strategic-implications-for-neo-clou"><b>Strategic Implications for NeoClouds</b></h2><p class="paragraph" style="text-align:left;">The emergence of TrustGraph creates several strategic opportunities for NeoClouds:</p><h3 class="heading" style="text-align:left;" id="1-industry-vertical-specialization"><b>1. 
Industry Vertical Specialization</b></h3><p class="paragraph" style="text-align:left;">NeoClouds can combine TrustGraph&#39;s technical capabilities with deep industry expertise to create compelling specialized offerings:</p><ul><li><p class="paragraph" style="text-align:left;"><b>Healthcare NeoClouds</b> can offer HIPAA-compliant AI with medical knowledge graphs</p></li><li><p class="paragraph" style="text-align:left;"><b>Financial Services providers</b> can deliver regulatory-compliant AI with specialized financial reasoning</p></li><li><p class="paragraph" style="text-align:left;"><b>Manufacturing specialists</b> can integrate IoT data with AI for predictive maintenance and quality control</p></li><li><p class="paragraph" style="text-align:left;"><b>Legal tech providers</b> can offer secure document analysis with legal reasoning capabilities</p></li></ul><h3 class="heading" style="text-align:left;" id="2-geographic-and-regulatory-advanta"><b>2. Geographic and Regulatory Advantages</b></h3><p class="paragraph" style="text-align:left;">TrustGraph enables NeoClouds to compete on regulatory compliance and data residency:</p><ul><li><p class="paragraph" style="text-align:left;"><b>Regional providers</b> can offer local data residency with global AI capabilities</p></li><li><p class="paragraph" style="text-align:left;"><b>Compliance specialists</b> can provide industry-specific regulatory frameworks</p></li><li><p class="paragraph" style="text-align:left;"><b>Sovereign cloud providers</b> can deliver national security-grade AI sovereignty</p></li></ul><h3 class="heading" style="text-align:left;" id="3-custom-ai-solutions"><b>3. 
Custom AI Solutions</b></h3><p class="paragraph" style="text-align:left;">Unlike hyperscalers&#39; standardized offerings, NeoClouds can provide deeply customized AI implementations:</p><ul><li><p class="paragraph" style="text-align:left;"><b>Bespoke Knowledge Graphs</b> tailored to specific organizational structures</p></li><li><p class="paragraph" style="text-align:left;"><b>Custom Agent Frameworks</b> designed for unique business processes</p></li><li><p class="paragraph" style="text-align:left;"><b>Specialized Model Integration</b> including private, fine-tuned models</p></li><li><p class="paragraph" style="text-align:left;"><b>Industry-Specific Tooling</b> through MCP integration and custom agent tools</p></li></ul><h3 class="heading" style="text-align:left;" id="4-innovation-velocity"><b>4. Innovation Velocity</b></h3><p class="paragraph" style="text-align:left;">TrustGraph&#39;s open architecture allows NeoClouds to innovate faster than hyperscalers:</p><ul><li><p class="paragraph" style="text-align:left;"><b>Rapid Feature Integration</b>: Direct access to cutting-edge AI research and tools</p></li><li><p class="paragraph" style="text-align:left;"><b>Customer-Driven Development</b>: Ability to quickly implement customer-specific requirements</p></li><li><p class="paragraph" style="text-align:left;"><b>Ecosystem Partnerships</b>: Integration with specialized AI tools and services</p></li></ul><h2 class="heading" style="text-align:left;" id="the-competitive-response"><b>The Competitive Response</b></h2><p class="paragraph" style="text-align:left;">This shift won&#39;t go unnoticed by hyperscalers, but their response options are limited:</p><p class="paragraph" style="text-align:left;"><b>They can&#39;t match sovereignty</b>: Hyperscalers&#39; business models fundamentally depend on data aggregation and vendor lock-in. 
Offering true sovereignty would cannibalize their core revenue streams.</p><p class="paragraph" style="text-align:left;"><b>They can&#39;t match agility</b>: Large organizations inherently move slower than specialized NeoClouds in responding to customer needs and market changes.</p><p class="paragraph" style="text-align:left;"><b>They can&#39;t match specialization</b>: Hyperscalers optimize for breadth and scale, not the depth of expertise that NeoClouds can provide in specific verticals.</p><h2 class="heading" style="text-align:left;" id="the-path-forward"><b>The Path Forward</b></h2><p class="paragraph" style="text-align:left;">For NeoClouds ready to seize this opportunity, the path is clear:</p><ol start="1"><li><p class="paragraph" style="text-align:left;"><b>Embrace TrustGraph Early</b>: First-mover advantage in implementing and optimizing TrustGraph deployments</p></li><li><p class="paragraph" style="text-align:left;"><b>Develop Vertical Expertise</b>: Combine TrustGraph&#39;s technical capabilities with deep industry knowledge</p></li><li><p class="paragraph" style="text-align:left;"><b>Build Compliance Frameworks</b>: Develop regulatory and security frameworks that hyperscalers can&#39;t match</p></li><li><p class="paragraph" style="text-align:left;"><b>Invest in Customer Success</b>: Leverage the intimate customer relationships that are NeoClouds&#39; natural advantage</p></li><li><p class="paragraph" style="text-align:left;"><b>Create Knowledge Core Libraries</b>: Build reusable, industry-specific Knowledge Cores that provide ongoing competitive advantages</p></li></ol><h2 class="heading" style="text-align:left;" id="conclusion-a-new-era-of-competition"><b>Conclusion: A New Era of Competition</b></h2><p class="paragraph" style="text-align:left;">TrustGraph represents more than just another AI platform—it&#39;s a fundamental shift in how AI capabilities are delivered and controlled. 
By democratizing access to sophisticated AI technology while prioritizing sovereignty and transparency, TrustGraph enables NeoClouds to compete not just on price or service, but on the fundamental values that enterprises increasingly demand: control, transparency, and freedom.</p><p class="paragraph" style="text-align:left;">The hyperscaler era was built on the assumption that only massive scale could deliver advanced capabilities. TrustGraph shatters that assumption, creating a new competitive landscape where agility, specialization, and customer intimacy matter more than raw scale.</p><p class="paragraph" style="text-align:left;">For NeoClouds willing to embrace this opportunity, the message is clear: the AI playing field has been leveled. The question isn&#39;t whether you can compete with the hyperscalers—it&#39;s whether you&#39;re ready to surpass them.</p><p class="paragraph" style="text-align:left;">The revolution in enterprise AI isn&#39;t just about better technology—it&#39;s about better business relationships, better economics, and better control. And for the first time in the cloud computing era, the advantage belongs to the agile, the specialized, and the customer-focused.</p><p class="paragraph" style="text-align:left;">Welcome to the NeoCloud era. The giants should be worried.</p></div><div class='beehiiv__footer'><br class='beehiiv__footer__break'><hr class='beehiiv__footer__line'><a target="_blank" class="beehiiv__footer_link" style="text-align: center;" href="https://www.beehiiv.com/?utm_campaign=6555c9f6-ae65-4a38-903f-074ad512509a&utm_medium=post_rss&utm_source=trustgraph">Powered by beehiiv</a></div></div>
  ]]></content:encoded>
</item>

      <item>
  <title>Join Us at the Data Streaming Summit San Francisco – Sept 29–30, 2025</title>
  <description>We’re excited to share that TrustGraph is a Community Sponsor of the upcoming Data Streaming Summit San Francisco, taking place September 29–30, 2025 at the Grand Hyatt SFO!</description>
      <enclosure url="https://media.beehiiv.com/cdn-cgi/image/fit=scale-down,format=auto,onerror=redirect,quality=80/uploads/asset/file/711a0701-b66d-4524-bc3a-abbe4b509f7c/SN-DSS25SF-sponsor-trustgraph.png" length="329859" type="image/png"/>
  <link>https://blog.trustgraph.ai/p/join-us-at-the-data-streaming-summit-san-francisco-sept-29-30-2025</link>
  <guid isPermaLink="true">https://blog.trustgraph.ai/p/join-us-at-the-data-streaming-summit-san-francisco-sept-29-30-2025</guid>
  <pubDate>Wed, 03 Sep 2025 21:27:15 +0000</pubDate>
  <atom:published>2025-09-03T21:27:15Z</atom:published>
    <dc:creator>Daniel Davis</dc:creator>
    <category><![CDATA[Trustgraph]]></category>
    <category><![CDATA[Pulsar]]></category>
  <content:encoded><![CDATA[
    <div class='beehiiv'><style>
  .bh__table, .bh__table_header, .bh__table_cell { border: 1px solid #C0C0C0; }
  .bh__table_cell { padding: 5px; background-color: #FFFFFF; }
  .bh__table_cell p { color: #2D2D2D; font-family: 'Helvetica',Arial,sans-serif !important; overflow-wrap: break-word; }
  .bh__table_header { padding: 5px; background-color:#F1F1F1; }
  .bh__table_header p { color: #2A2A2A; font-family:'Trebuchet MS','Lucida Grande',Tahoma,sans-serif !important; overflow-wrap: break-word; }
</style><div class='beehiiv__body'><p class="paragraph" style="text-align:left;">We’re excited to share that <a class="link" href="https://trustgraph.ai?utm_source=blog.trustgraph.ai&utm_medium=newsletter&utm_campaign=join-us-at-the-data-streaming-summit-san-francisco-sept-29-30-2025" target="_blank" rel="noopener noreferrer nofollow">TrustGraph</a> is a Community Sponsor of the upcoming Data Streaming Summit San Francisco, taking place September 29–30, 2025 at the Grand Hyatt SFO!</p><p class="paragraph" style="text-align:left;"><a class="link" href="https://pulsar.apache.org/?utm_source=blog.trustgraph.ai&utm_medium=newsletter&utm_campaign=join-us-at-the-data-streaming-summit-san-francisco-sept-29-30-2025" target="_blank" rel="noopener noreferrer nofollow">Apache Pulsar</a> is the backbone of TrustGraph, enabling enterprise-ready AI solutions. Check out how TrustGraph leverages the power of Pulsar below. 👇️ </p><iframe allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen="true" class="youtube_embed" frameborder="0" height="100%" src="https://youtube.com/embed/qTzY-YbQAXw" width="100%"></iframe><p class="paragraph" style="text-align:left;">This two-day event is the must-attend gathering for data streaming professionals — packed with hands-on technical training, a live hackathon, and a full day of inspiring talks from industry leaders.</p><p class="paragraph" style="text-align:left;">📅 September 29 – Workshops and hands-on training</p><p class="paragraph" style="text-align:left;">📅 September 30 – Full-day summit with expert speakers</p><p class="paragraph" style="text-align:left;">Don’t miss your chance to connect with engineers, architects, and innovators shaping the future of real-time data.</p><p class="paragraph" style="text-align:left;">👉 <a class="link" 
href="https://www.eventbrite.com/e/data-streaming-summit-san-francisco-2025-tickets-1432401484399?aff=oddtdtcreator&utm_source=blog.trustgraph.ai&utm_medium=newsletter&utm_campaign=join-us-at-the-data-streaming-summit-san-francisco-sept-29-30-2025" target="_blank" rel="noopener noreferrer nofollow">Register Now for DSS San Francisco</a></p><p class="paragraph" style="text-align:left;">Hope to see you there!</p><ul><li><p class="paragraph" style="text-align:left;">🌟 TrustGraph on <a class="link" href="https://github.com/trustgraph-ai/trustgraph?utm_source=blog.trustgraph.ai&utm_medium=newsletter&utm_campaign=join-us-at-the-data-streaming-summit-san-francisco-sept-29-30-2025" target="_blank" rel="noopener noreferrer nofollow">GitHub</a></p></li><li><p class="paragraph" style="text-align:left;">Join the TrustGraph Community on <a class="link" href="https://discord.gg/sQMwkRz5GX?utm_source=blog.trustgraph.ai&utm_medium=newsletter&utm_campaign=join-us-at-the-data-streaming-summit-san-francisco-sept-29-30-2025" target="_blank" rel="noopener noreferrer nofollow">Discord</a></p></li><li><p class="paragraph" style="text-align:left;">Watch TrustGraph tutorials on <a class="link" href="https://www.youtube.com/@TrustGraphAI?sub_confirmation=1&utm_source=blog.trustgraph.ai&utm_medium=newsletter&utm_campaign=join-us-at-the-data-streaming-summit-san-francisco-sept-29-30-2025" target="_blank" rel="noopener noreferrer nofollow">YouTube</a> </p></li></ul></div><div class='beehiiv__footer'><br class='beehiiv__footer__break'><hr class='beehiiv__footer__line'><a target="_blank" class="beehiiv__footer_link" style="text-align: center;" href="https://www.beehiiv.com/?utm_campaign=88726d5b-2a65-4103-a705-8574accdece7&utm_medium=post_rss&utm_source=trustgraph">Powered by beehiiv</a></div></div>
  ]]></content:encoded>
</item>

      <item>
  <title>TrustGraph Empowers AI Agents with MCP</title>
  <description>Unveiling MCP Support in Version 1.1</description>
      <enclosure url="https://media.beehiiv.com/cdn-cgi/image/fit=scale-down,format=auto,onerror=redirect,quality=80/uploads/asset/file/5cbc053e-6f7a-4e4b-92be-b04147c91296/tg-adapter.png" length="1757166" type="image/png"/>
  <link>https://blog.trustgraph.ai/p/trustgraph-empowers-ai-agents-with-mcp</link>
  <guid isPermaLink="true">https://blog.trustgraph.ai/p/trustgraph-empowers-ai-agents-with-mcp</guid>
  <pubDate>Thu, 17 Jul 2025 19:51:45 +0000</pubDate>
  <atom:published>2025-07-17T19:51:45Z</atom:published>
    <dc:creator>Daniel Davis</dc:creator>
    <category><![CDATA[Trustgraph]]></category>
    <category><![CDATA[Mcp]]></category>
  <content:encoded><![CDATA[
    <div class='beehiiv'><style>
  .bh__table, .bh__table_header, .bh__table_cell { border: 1px solid #C0C0C0; }
  .bh__table_cell { padding: 5px; background-color: #FFFFFF; }
  .bh__table_cell p { color: #2D2D2D; font-family: 'Helvetica',Arial,sans-serif !important; overflow-wrap: break-word; }
  .bh__table_header { padding: 5px; background-color:#F1F1F1; }
  .bh__table_header p { color: #2A2A2A; font-family:'Trebuchet MS','Lucida Grande',Tahoma,sans-serif !important; overflow-wrap: break-word; }
</style><div class='beehiiv__body'><p class="paragraph" style="text-align:left;"><b>SAN FRANCISCO, CA </b>– TrustGraph, the open-source scale platform for enterprise AI, today announced the release of version 1.1, a milestone update that significantly advances the capabilities of its AI agents. The new version introduces full support for the Model Context Protocol (MCP), an open standard that allows TrustGraph agents to connect with and utilize external tools, transforming them from passive knowledge retrievers into active, task-oriented systems.</p><p class="paragraph" style="text-align:left;">This pivotal update enables AI agents built on TrustGraph to perform complex, multi-step workflows that combine deep contextual understanding from enterprise knowledge graphs with real-time data and actions from external services. With MCP, developers can easily extend their agents with custom tools, allowing them to perform tasks like fetching dynamic information or interacting with any external API.</p><p class="paragraph" style="text-align:left;">“This isn&#39;t just an integration; it&#39;s a new architectural standard for enterprise AI. Our LLM-enhanced knowledge graphs act as the platform’s intelligent brain, and through MCP, we&#39;ve given it hands to interact with the world. The crucial link is our rock-solid data engineering foundation built on Pulsar, which acts as the nervous system. With fully automated configuration and deployment, we can deliver this entire, cohesive capability seamlessly and at scale,” said Mark Adams, Co-Founder of TrustGraph.</p><p class="paragraph" style="text-align:left;">TrustGraph’s core strength lies in its ability to build interconnected Knowledge Graphs from fragmented enterprise data, grounding AI responses and dramatically reducing hallucinations. With the integration of MCP, agents can now leverage this robust internal context to make more intelligent decisions when using external tools. 
For example, an agent can now answer a complex query like, &quot;Given the surveillance capabilities described in our research documents and my current bank balance, what privacy measures could I afford?&quot; by seamlessly querying both its internal knowledge graph and an external MCP tool for financial data.</p><p class="paragraph" style="text-align:left;">Key features in TrustGraph v1.1 include:</p><ul><li><p class="paragraph" style="text-align:left;"><b>Full Model Context Protocol (MCP) Integration:</b> Easily connect TrustGraph agents to any MCP-compliant tool or service, enabling interaction with external APIs, databases, and systems.</p></li><li><p class="paragraph" style="text-align:left;"><b>Advanced Agentic Workflows:</b> Build sophisticated agents that can analyze a user&#39;s request, select the appropriate combination of internal knowledge and external tools, and synthesize the information to provide a comprehensive answer or complete a task.</p></li><li><p class="paragraph" style="text-align:left;"><b>Seamless Knowledge Graph and Tool Synergy:</b> Agents can now fluidly combine facts and relationships from their internal knowledge graph with real-time data from external tools, leading to more accurate and contextually-aware outcomes.</p></li><li><p class="paragraph" style="text-align:left;"><b>Open Standard for Custom Tools:</b> Empower developers to create and integrate their own custom tools using the open MCP standard, ensuring flexibility and preventing vendor lock-in.</p></li></ul><p class="paragraph" style="text-align:left;">The new capabilities are available immediately and are detailed in the new MCP Integration guide in the TrustGraph documentation.</p><p class="paragraph" style="text-align:left;"><b>Availability</b><br>TrustGraph v1.1 is available now. 
Developers can access the code, deployment guides, and documentation on the official website and GitHub repository.</p><ul><li><p class="paragraph" style="text-align:left;"><b>Documentation:</b> <b><a class="link" href="https://docs.trustgraph.ai/?utm_source=blog.trustgraph.ai&utm_medium=newsletter&utm_campaign=trustgraph-empowers-ai-agents-with-mcp" target="_blank" rel="noopener noreferrer nofollow">https://docs.trustgraph.ai/</a></b></p></li><li><p class="paragraph" style="text-align:left;"><b>MCP Integration Guide:</b> <b><a class="link" href="https://docs.trustgraph.ai/guides/mcp-integration/?utm_source=blog.trustgraph.ai&utm_medium=newsletter&utm_campaign=trustgraph-empowers-ai-agents-with-mcp" target="_blank" rel="noopener noreferrer nofollow">https://docs.trustgraph.ai/guides/mcp-integration/</a></b></p></li><li><p class="paragraph" style="text-align:left;"><b>GitHub:</b> <b><a class="link" href="https://github.com/trustgraph-ai/trustgraph?utm_source=blog.trustgraph.ai&utm_medium=newsletter&utm_campaign=trustgraph-empowers-ai-agents-with-mcp" target="_blank" rel="noopener noreferrer nofollow">https://github.com/trustgraph-ai/trustgraph</a></b></p></li></ul><p class="paragraph" style="text-align:left;"><b>TrustGraph for Enterprise</b><br>For organizations moving AI from experiment to enterprise-wide adoption, TrustGraph will offer a suite of commercial solutions focused on advanced governance, security, and operational management. These offerings are designed to provide the robust controls and dedicated support that mission-critical, large-scale AI deployments demand. 
Contact <a class="link" href="mailto:sales@trustgraph.ai" target="_blank" rel="noopener noreferrer nofollow"><b>sales@trustgraph.ai</b></a><b> </b>about our enterprise preview program.</p><p class="paragraph" style="text-align:left;"><b>About TrustGraph</b><br>TrustGraph is an Open Source Scale Platform for Enterprise AI that helps organizations build, deploy, and manage sophisticated AI agents with deep contextual understanding. By creating interconnected Knowledge Graphs from enterprise data, TrustGraph enables agents to understand relationships and context, significantly reducing AI hallucinations and providing radical transparency in how AI systems process information. With a commitment to open standards and modular architecture, TrustGraph empowers organizations to build trustworthy and powerful AI solutions.</p><p class="paragraph" style="text-align:left;"><b>Contact:</b><br><b><a class="link" href="mailto:marketing@trustgraph.ai" target="_blank" rel="noopener noreferrer nofollow">marketing@trustgraph.ai</a></b><br><a class="link" href="https://trustgraph.ai?utm_source=blog.trustgraph.ai&utm_medium=newsletter&utm_campaign=trustgraph-empowers-ai-agents-with-mcp" target="_blank" rel="noopener noreferrer nofollow"><b>https://trustgraph.ai</b></a><br><a class="link" href="https://github.com/trustgraph-ai/trustgraph?utm_source=blog.trustgraph.ai&utm_medium=newsletter&utm_campaign=trustgraph-empowers-ai-agents-with-mcp" target="_blank" rel="noopener noreferrer nofollow"><b>https://github.com/trustgraph-ai/trustgraph</b></a></p></div><div class='beehiiv__footer'><br class='beehiiv__footer__break'><hr class='beehiiv__footer__line'><a target="_blank" class="beehiiv__footer_link" style="text-align: center;" href="https://www.beehiiv.com/?utm_campaign=75441b6c-b77e-4417-a209-34cf31204caa&utm_medium=post_rss&utm_source=trustgraph">Powered by beehiiv</a></div></div>
  ]]></content:encoded>
</item>

      <item>
  <title>TrustGraph Unveils Version 1.0</title>
  <description>Launching a Sovereign Universal AI Adapter for Enterprise Innovation</description>
      <enclosure url="https://media.beehiiv.com/cdn-cgi/image/fit=scale-down,format=auto,onerror=redirect,quality=80/uploads/asset/file/478a8688-efac-45b9-86ab-7f56beda29cc/tg-adapter.png" length="1757166" type="image/png"/>
  <link>https://blog.trustgraph.ai/p/trustgraph-unveils-version-1-0</link>
  <guid isPermaLink="true">https://blog.trustgraph.ai/p/trustgraph-unveils-version-1-0</guid>
  <pubDate>Tue, 08 Jul 2025 15:33:00 +0000</pubDate>
  <atom:published>2025-07-08T15:33:00Z</atom:published>
    <dc:creator>Daniel Davis</dc:creator>
    <category><![CDATA[Trustgraph]]></category>
  <content:encoded><![CDATA[
    <div class='beehiiv'><style>
  .bh__table, .bh__table_header, .bh__table_cell { border: 1px solid #C0C0C0; }
  .bh__table_cell { padding: 5px; background-color: #FFFFFF; }
  .bh__table_cell p { color: #2D2D2D; font-family: 'Helvetica',Arial,sans-serif !important; overflow-wrap: break-word; }
  .bh__table_header { padding: 5px; background-color:#F1F1F1; }
  .bh__table_header p { color: #2A2A2A; font-family:'Trebuchet MS','Lucida Grande',Tahoma,sans-serif !important; overflow-wrap: break-word; }
</style><div class='beehiiv__body'><p class="paragraph" style="text-align:left;"><b>San Francisco, CA – July 8, 2025</b> – TrustGraph, the open-source platform for sovereign AI, today announced the landmark release of TrustGraph Version 1.0. This release transforms the platform into a <b>Sovereign Universal AI Adapter</b>, empowering enterprises to adapt to the relentless pace of AI innovation while maintaining absolute control over their technology stack and proprietary knowledge. Version 1.0 delivers dynamic, enterprise-ready capabilities designed to eliminate AI vendor lock-in and provide a future-proof foundation for intelligent systems.</p><p class="paragraph" style="text-align:left;">In today&#39;s chaotic AI landscape, businesses face a critical dilemma: risk falling behind or sacrifice control to opaque, third-party platforms. TrustGraph 1.0 solves this by providing a stable, sovereign core that intelligently interfaces with any AI technology, giving organizations the power to innovate without compromise.</p><p class="paragraph" style="text-align:left;">&quot;Our mission is to earn trust in AI through radical transparency, and Version 1.0 embodies that mission by giving enterprises true sovereignty over their AI destiny,&quot; said TrustGraph Co-Founder Daniel Davis. &quot;We’ve designed TrustGraph to be the universal adapter that lets businesses plug into the latest AI advancements on their own terms. This release is about providing the dynamic agility needed to adapt, without sacrificing the foundational control required for secure, long-term enterprise value.&quot;<br><br>Key capabilities in TrustGraph Version 1.0 that deliver on the promise of a Sovereign Universal AI Adapter include:</p><ul><li><p class="paragraph" style="text-align:left;"><b>Dynamic Flows for Ultimate Modularity:</b> A revolutionary approach to data processing. 
&quot;Flows&quot; enable the dynamic start and stop of logical data streams, allowing enterprises to easily <b>plug in third-party processing elements and build custom, configurable workflows on the fly</b>. This is core to the &quot;adapter&quot; concept, making the platform exceptionally extensible.</p></li><li><p class="paragraph" style="text-align:left;"><b>Live, Dynamic Configuration:</b> The platform can now be reconfigured in real time without restarts. Enterprises can instantly <b>adapt their AI&#39;s behavior by changing LLM prompts, swapping agent tools, and modifying configurations dynamically</b>, ensuring their AI stack can evolve at the speed of their business needs.</p></li><li><p class="paragraph" style="text-align:left;"><b>Portable Knowledge Packages for Sovereign IP:</b> Knowledge Packages—TrustGraph&#39;s structured knowledge artifacts—are now fully portable. New workflows allow users to save a Knowledge Package, <b>download it for secure offline storage or transfer, upload it to any TrustGraph instance globally, and even remove it when not needed</b>. This capability makes enterprise knowledge a truly sovereign, portable asset, free from platform or infrastructure lock-in.</p></li><li><p class="paragraph" style="text-align:left;"><b>The Librarian & Workbench: A Control Plane for Your Knowledge:</b> The new <b>Librarian</b> workflow provides granular control over knowledge ingestion, while the new <b>Workbench UX</b> serves as the central command center. This intuitive interface allows users to manage everything from dynamic Flows to the download/upload of sovereign Knowledge Packages, providing a comprehensive control plane for their entire AI ecosystem.</p></li></ul><p class="paragraph" style="text-align:left;">With TrustGraph 1.0, enterprises can confidently build their AI strategy on a sovereign foundation. 
The platform&#39;s deploy-anywhere architecture continues to support any infrastructure—from on-premises and bare-metal servers to any private or public cloud—providing <b>Total Stack Sovereignty</b>.<br><br><b>Availability</b><br>TrustGraph Version 1.0 is available immediately under the Apache 2.0 license. To get started, visit the official <a class="link" href="https://github.com/trustgraph-ai/trustgraph?utm_source=blog.trustgraph.ai&utm_medium=newsletter&utm_campaign=trustgraph-unveils-version-1-0" target="_blank" rel="noopener noreferrer nofollow"><b>GitHub repository</b></a>.</p><p class="paragraph" style="text-align:left;"><b>TrustGraph for Enterprise</b><br>For organizations moving AI from experiment to enterprise-wide adoption, TrustGraph will offer a suite of commercial solutions focused on advanced governance, security, and operational management. These offerings are designed to provide the robust controls and dedicated support that mission-critical, large-scale AI deployments demand. Contact <a class="link" href="mailto:sales@trustgraph.ai" target="_blank" rel="noopener noreferrer nofollow"><b>sales@trustgraph.ai</b></a><b> </b>about our enterprise preview program.<br><br><b>About TrustGraph</b><br>TrustGraph is an open-source AI platform that functions as a Sovereign Universal AI Adapter. 
We empower enterprises to build, deploy, and manage trustworthy AI systems by providing a stable, transparent foundation that can adapt to any AI technology while ensuring complete control over data, deployments, and an organization&#39;s AI future.<br><br><b>Contact:</b><br><a class="link" href="mailto:info@trustgraph.ai" target="_blank" rel="noopener noreferrer nofollow"><b>info@trustgraph.ai</b></a><br><a class="link" href="https://trustgraph.ai?utm_source=blog.trustgraph.ai&utm_medium=newsletter&utm_campaign=trustgraph-unveils-version-1-0" target="_blank" rel="noopener noreferrer nofollow"><b>https://trustgraph.ai</b></a><br><a class="link" href="https://github.com/trustgraph-ai/trustgraph?utm_source=blog.trustgraph.ai&utm_medium=newsletter&utm_campaign=trustgraph-unveils-version-1-0" target="_blank" rel="noopener noreferrer nofollow"><b>https://github.com/trustgraph-ai/trustgraph</b></a></p></div><div class='beehiiv__footer'><br class='beehiiv__footer__break'><hr class='beehiiv__footer__line'><a target="_blank" class="beehiiv__footer_link" style="text-align: center;" href="https://www.beehiiv.com/?utm_campaign=8e2565d1-2d6b-423f-9d5e-d8a9fd4fd24a&utm_medium=post_rss&utm_source=trustgraph">Powered by beehiiv</a></div></div>
  ]]></content:encoded>
</item>

      <item>
  <title>Case Study: Optimizing LLM Deployments with TrustGraph</title>
  <description>VertexAI vs. Intel Tiber Cloud vs. Scaleway</description>
      <enclosure url="https://media.beehiiv.com/cdn-cgi/image/fit=scale-down,format=auto,onerror=redirect,quality=80/uploads/asset/file/156b695b-cbe9-4a3a-9eb9-8a760085dfa4/image__3_.png" length="1432869" type="image/png"/>
  <link>https://blog.trustgraph.ai/p/case-study-optimizing-llm-deployments-with-trustgraph</link>
  <guid isPermaLink="true">https://blog.trustgraph.ai/p/case-study-optimizing-llm-deployments-with-trustgraph</guid>
  <pubDate>Tue, 10 Jun 2025 01:17:54 +0000</pubDate>
  <atom:published>2025-06-10T01:17:54Z</atom:published>
    <dc:creator>Daniel Davis</dc:creator>
    <category><![CDATA[Trustgraph]]></category>
    <category><![CDATA[Knowledge Sovereignty]]></category>
  <content:encoded><![CDATA[
    <div class='beehiiv'><style>
  .bh__table, .bh__table_header, .bh__table_cell { border: 1px solid #C0C0C0; }
  .bh__table_cell { padding: 5px; background-color: #FFFFFF; }
  .bh__table_cell p { color: #2D2D2D; font-family: 'Helvetica',Arial,sans-serif !important; overflow-wrap: break-word; }
  .bh__table_header { padding: 5px; background-color:#F1F1F1; }
  .bh__table_header p { color: #2A2A2A; font-family:'Trebuchet MS','Lucida Grande',Tahoma,sans-serif !important; overflow-wrap: break-word; }
</style><div class='beehiiv__body'><p class="paragraph" style="text-align:left;">At <a class="link" href="https://github.com/trustgraph-ai/trustgraph?utm_source=blog.trustgraph.ai&utm_medium=newsletter&utm_campaign=case-study-optimizing-llm-deployments-with-trustgraph" target="_blank" rel="noopener noreferrer nofollow">TrustGraph</a>, we’ve been using <b>VertexAI</b> going all the way back to the <b>PaLM2</b> days. Up until recently, VertexAI had been a high-throughput API for using Google’s latest and greatest LLMs. But enter <code>429</code> errors. We assumed that we needed to increase our quotas or rate limits - that’s what you’d do for almost all other cloud services. We assumed wrong.</p><p class="paragraph" style="text-align:left;">We recently discovered that <b>Google</b> is taking a different approach to how they serve VertexAI (although if you’ve been using <b>AWS Bedrock </b>for a while, you can probably guess what I’m about to say next). Demand for Gemini is now handled dynamically and globally. What does that mean? If one region is particularly busy, your request is likely to return <code>429</code> errors. If you can’t increase your quotas or rate limits, what can you do?</p><p class="paragraph" style="text-align:left;">The first answer is to set your region to <code>global</code>. A <code>global</code> request is routed to whatever region has the most availability. You have no control over that routing. If all regions are busy, you could still get a <code>429</code> error. Making a <code>global</code> request does not guarantee a response.</p><p class="paragraph" style="text-align:left;">Then what’s the real answer? <i>Open up those wallets and buy</i> <i><b>provisioned throughput</b></i>! <b>Provisioned throughput</b> is essentially a reservation for “<i>units</i>” of LLM service. Why did I say “<i>units</i>”? 
Well, let’s look at the parameters that factor into your provisioned throughput purchase for the latest version of <b>Gemini 2.5 Pro</b>: </p><ul><li><p class="paragraph" style="text-align:left;"><span style="color:rgb(32, 33, 36);"><i>Percentage of queries using &gt; 200K context window</i></span></p></li><li><p class="paragraph" style="text-align:left;"><span style="color:rgb(32, 33, 36);"><i>Estimated queries per second</i></span></p></li><li><p class="paragraph" style="text-align:left;"><span style="color:rgb(32, 33, 36);"><i>Input tokens per query</i></span></p></li><li><p class="paragraph" style="text-align:left;"><span style="color:rgb(32, 33, 36);"><i>Input image tokens per query</i></span></p></li><li><p class="paragraph" style="text-align:left;"><span style="color:rgb(32, 33, 36);"><i>Input video tokens per query</i></span></p></li><li><p class="paragraph" style="text-align:left;"><span style="color:rgb(32, 33, 36);"><i>Input audio tokens per query</i></span></p></li><li><p class="paragraph" style="text-align:left;"><span style="color:rgb(32, 33, 36);"><i>Output response text tokens per query</i></span></p></li><li><p class="paragraph" style="text-align:left;"><span style="color:rgb(32, 33, 36);"><i>Output reasoning text tokens per query  </i></span></p></li></ul><p class="paragraph" style="text-align:left;"><span style="color:rgb(32, 33, 36);">And </span><span style="color:rgb(32, 33, 36);"><b>yes</b></span><span style="color:rgb(32, 33, 36);">, changing all of these parameters affects the provisioned throughput pricing differently.</span></p><p class="paragraph" style="text-align:left;"><span style="color:rgb(32, 33, 36);">So, I thought, what’s a reasonable amount of throughput someone might need to reserve? Without knowing how many users would be connected or the use case, how do you come up with reasonable estimates? What did I do? </span><span style="color:rgb(32, 33, 36);"><i>I guessed</i></span><span style="color:rgb(32, 33, 36);">. 
The lowest </span><span style="color:rgb(32, 33, 36);"><i>request per second</i></span><span style="color:rgb(32, 33, 36);"> you can set is </span><code>1</code><span style="color:rgb(32, 33, 36);">. I then picked some very conservative token numbers: </span><code>3000</code><span style="color:rgb(32, 33, 36);"> for </span><span style="color:rgb(32, 33, 36);"><i>input</i></span><span style="color:rgb(32, 33, 36);"> and </span><code>500</code><span style="color:rgb(32, 33, 36);"> for </span><span style="color:rgb(32, 33, 36);"><i>output</i></span><span style="color:rgb(32, 33, 36);">. This is certainly more tokens than would be used for a customer support chatbot, but considerably less than for agentic flows with RAG. With those conservative settings, how much provisioned throughput would I need to buy? </span><span style="color:rgb(32, 33, 36);"><b>$35,100/month</b></span><span style="color:rgb(32, 33, 36);">. Yes, you read that right. </span><span style="color:rgb(32, 33, 36);"><i>And that doesn’t include any images, video, audio, thinking tokens, or long context requests</i></span><span style="color:rgb(32, 33, 36);">. </span></p><p class="paragraph" style="text-align:left;"><span style="color:rgb(32, 33, 36);">My intent is </span><span style="color:rgb(32, 33, 36);"><b>not</b></span><span style="color:rgb(32, 33, 36);"> to be critical of Google.</span><span style="color:rgb(32, 33, 36);"><i> I’m merely stating the facts</i></span><span style="color:rgb(32, 33, 36);">. I’ve been saying for quite a while now that we’ve yet to see </span><span style="color:rgb(32, 33, 36);"><b>real</b></span><span style="color:rgb(32, 33, 36);"> AI pricing. </span><span style="color:rgb(32, 33, 36);"><b>OpenAI</b></span><span style="color:rgb(32, 33, 36);">, </span><span style="color:rgb(32, 33, 36);"><b>Anthropic</b></span><span style="color:rgb(32, 33, 36);">, </span><span style="color:rgb(32, 33, 36);"><b>Google</b></span><span style="color:rgb(32, 33, 36);">, et al. 
have been in a customer acquisition hypergrowth phase, selling AI services at a massive loss. However, now that we’re seeing a need to buy, </span><span style="color:rgb(32, 33, 36);"><b>upfront</b></span><span style="color:rgb(32, 33, 36);">, </span><span style="color:rgb(32, 33, 36);"><b>$35k</b></span><span style="color:rgb(32, 33, 36);"> worth of provisioned throughput just to guarantee we won’t get </span><code>429</code><span style="color:rgb(32, 33, 36);"> errors, we’re beginning to see </span><span style="color:rgb(32, 33, 36);"><b>real</b></span><span style="color:rgb(32, 33, 36);"> AI pricing. Oh by the way, Google did suggest a third solution to Gemini </span><code>429</code><span style="color:rgb(32, 33, 36);"> errors - fall back to using </span><span style="color:rgb(32, 33, 36);"><b>Claude</b></span><span style="color:rgb(32, 33, 36);">. Seriously, they </span><span style="color:rgb(32, 33, 36);"><b>really</b></span><span style="color:rgb(32, 33, 36);"> suggested that.</span></p><p class="paragraph" style="text-align:left;"><span style="color:rgb(32, 33, 36);">We’ve recently been doing some very interesting work with </span><span style="color:rgb(32, 33, 36);"><b>Intel</b></span><span style="color:rgb(32, 33, 36);"> on supporting Intel CPUs, GPUs, and </span><span style="color:rgb(32, 33, 36);"><b>Gaudi</b></span><span style="color:rgb(32, 33, 36);"> for LLM inference in </span><span style="color:rgb(32, 33, 36);"><b>Intel Tiber Cloud</b></span><span style="color:rgb(32, 33, 36);"> with </span><span style="color:rgb(32, 33, 36);"><b>TrustGraph</b></span><span style="color:rgb(32, 33, 36);">. We’ve been using TrustGraph to handle all of the LLM orchestration, deploying open models. The results have so far been fascinating. 
</span></p><p class="paragraph" style="text-align:left;"><span style="color:rgb(32, 33, 36);">While TGI/vLLM support for Intel GPUs and </span><span style="color:rgb(32, 33, 36);"><b>Gaudi</b></span><span style="color:rgb(32, 33, 36);"> is not as mature as Nvidia support, we’re already seeing very reasonable token throughputs at more than reasonable prices. A real wildcard is using Llamafiles (llama.cpp) for deploying purely with Intel CPUs (their </span><span style="color:rgb(32, 33, 36);"><b>Granite Ridge</b></span><span style="color:rgb(32, 33, 36);"> bare metal instance of 128 cores of Xeon CPUs has deployed Llama 3.3 70B for us quite well). And the pricing? Well…</span></p><ul><li><p class="paragraph" style="text-align:left;"><span style="color:rgb(32, 33, 36);">Granite Ridge 128 core CPU instance: ~</span><span style="color:rgb(32, 33, 36);"><b>$3,300/month</b></span></p></li><li><p class="paragraph" style="text-align:left;"><span style="color:rgb(32, 33, 36);">8x GPU bare metal instance: ~</span><span style="color:rgb(32, 33, 36);"><b>$3,400/month</b></span></p></li><li><p class="paragraph" style="text-align:left;"><span style="color:rgb(32, 33, 36);">8x Gaudi2 instance: ~</span><span style="color:rgb(32, 33, 36);"><b>$7,500/month</b></span></p></li></ul><p class="paragraph" style="text-align:left;"><span style="color:rgb(32, 33, 36);">We’ve also been doing some work lately deploying </span><span style="color:rgb(32, 33, 36);"><b>TrustGraph</b></span><span style="color:rgb(32, 33, 36);"> in </span><span style="color:rgb(32, 33, 36);"><a class="link" href="https://www.scaleway.com/en/?utm_source=blog.trustgraph.ai&utm_medium=newsletter&utm_campaign=case-study-optimizing-llm-deployments-with-trustgraph" target="_blank" rel="noopener noreferrer nofollow">Scaleway</a></span><span style="color:rgb(32, 33, 36);">. 
For comparison, these are monthly prices for Nvidia GPUs in Scaleway:</span></p><ul><li><p class="paragraph" style="text-align:left;"><span style="color:rgb(32, 33, 36);">H100-1-80G: ~</span><span style="color:rgb(32, 33, 36);"><b>$2,300/month</b></span></p></li><li><p class="paragraph" style="text-align:left;"><span style="color:rgb(32, 33, 36);">H100-SXM-8-80G: ~</span><span style="color:rgb(32, 33, 36);"><b>$19,200/month</b></span></p></li></ul><p class="paragraph" style="text-align:left;"><span style="color:rgb(32, 33, 36);">At the moment, we don’t have a good feel for how the Intel services compare to Nvidia in terms of LLM performance. However, I’d feel comfortable saying Intel’s offerings are certainly </span><span style="color:rgb(32, 33, 36);"><i>in the ballpark</i></span><span style="color:rgb(32, 33, 36);"> with most of the H100 line, but perhaps not the newest ones.</span></p><p class="paragraph" style="text-align:left;"><span style="color:rgb(32, 33, 36);">But let’s not forget how we began this conversation - </span><span style="color:rgb(32, 33, 36);"><b>$35,100/month</b></span><span style="color:rgb(32, 33, 36);"> to guarantee a small amount of service for </span><span style="color:rgb(32, 33, 36);"><b>Gemini 2.5 Pro</b></span><span style="color:rgb(32, 33, 36);">. Now begin the questions: </span><span style="color:rgb(32, 33, 36);"><i>what’s important to you</i></span><span style="color:rgb(32, 33, 36);">? </span></p><ul><li><p class="paragraph" style="text-align:left;"><span style="color:rgb(32, 33, 36);"><b>How much are the latest and greatest Gemini models worth to you?</b></span><span style="color:rgb(32, 33, 36);"> </span><span style="color:rgb(32, 33, 36);"><i>Are you going to be able to run a model of Gemini’s capability with the above Intel/Nvidia options? No. 
But do you need it?</i></span></p></li><li><p class="paragraph" style="text-align:left;"><span style="color:rgb(32, 33, 36);"><b>Does data sovereignty matter?</b></span><span style="color:rgb(32, 33, 36);"> </span><span style="color:rgb(32, 33, 36);"><i>If you want control over the physical location of your data, the global option in VertexAI isn’t for you. Your only options are to buy provisioned throughput or deploy open models on Intel/Nvidia.</i></span></p></li><li><p class="paragraph" style="text-align:left;"><span style="color:rgb(32, 33, 36);"><b>Does cost predictability matter?</b></span><span style="color:rgb(32, 33, 36);"> </span><span style="color:rgb(32, 33, 36);"><i>If you think you can accurately forecast the 8 parameters needed to estimate provisioned throughput in VertexAI, please tell me how you’re doing it. I think the rest of us would immediately shrug our shoulders, close our eyes, move some sliders, and hope for the best. What strange times we live in, where we find ourselves seeking certainty in forecasting CPU/GPU costs.</i></span></p></li></ul><p class="paragraph" style="text-align:left;"><span style="color:rgb(32, 33, 36);">Is there one right answer? </span><span style="color:rgb(32, 33, 36);"><b>No</b></span><span style="color:rgb(32, 33, 36);">. Depending on how you answer the above questions (and others), a particular solution will emerge as being </span><span style="color:rgb(32, 33, 36);"><i><b>best for you</b></i></span><span style="color:rgb(32, 33, 36);">. We designed </span><span style="color:rgb(32, 33, 36);"><b>TrustGraph</b></span><span style="color:rgb(32, 33, 36);"> with this flexibility in mind. 
</span><span style="color:rgb(32, 33, 36);"><b>TrustGraph</b></span><span style="color:rgb(32, 33, 36);"> supports all major LLM API services - including the cloud specific ones like </span><span style="color:rgb(32, 33, 36);"><b>AWS Bedrock</b></span><span style="color:rgb(32, 33, 36);">, </span><span style="color:rgb(32, 33, 36);"><b>Azure</b></span><span style="color:rgb(32, 33, 36);">, and </span><span style="color:rgb(32, 33, 36);"><b>VertexAI</b></span><span style="color:rgb(32, 33, 36);"> - in addition to LLM orchestration to deploy the entire TrustGraph knowledge automation platform along with open LLMs in any target environment. </span><span style="color:rgb(32, 33, 36);"><i><b>To learn more about how TrustGraph merges data silos into AI-optimized knowledge packages in a uniform agentic platform, check out the links below:</b></i></span></p><ul><li><p class="paragraph" style="text-align:left;">🌟 <b>TrustGraph on </b><span style="color:rgb(113, 183, 251);"><span style="text-decoration:underline;"><i><a class="link" href="https://github.com/trustgraph-ai/trustgraph?utm_source=blog.trustgraph.ai&utm_medium=newsletter&utm_campaign=case-study-optimizing-llm-deployments-with-trustgraph" target="_blank" rel="noopener noreferrer nofollow" style="color: rgb(12, 74, 110)"><b>GitHub</b></a></i></span></span> 🧠 </p></li><li><p class="paragraph" style="text-align:left;"><b>Join the </b><span style="color:rgb(113, 183, 251);"><span style="text-decoration:underline;"><i><a class="link" href="https://discord.gg/sQMwkRz5GX?utm_source=blog.trustgraph.ai&utm_medium=newsletter&utm_campaign=case-study-optimizing-llm-deployments-with-trustgraph" target="_blank" rel="noopener noreferrer nofollow" style="color: rgb(12, 74, 110)"><b>Discord</b></a></i></span></span><b> </b>👋 </p></li></ul></div><div class='beehiiv__footer'><br class='beehiiv__footer__break'><hr class='beehiiv__footer__line'><a target="_blank" class="beehiiv__footer_link" style="text-align: center;" 
href="https://www.beehiiv.com/?utm_campaign=997c81d1-d210-4290-abdc-d15527a9911f&utm_medium=post_rss&utm_source=trustgraph">Powered by beehiiv</a></div></div>
  ]]></content:encoded>
</item>

      <item>
  <title>How Your Enterprise&#39;s Greatest Strength Could Become Its Undoing</title>
  <description>Establish Knowledge Sovereignty with TrustGraph</description>
      <enclosure url="https://media.beehiiv.com/cdn-cgi/image/fit=scale-down,format=auto,onerror=redirect,quality=80/uploads/asset/file/bd91b17b-6660-42b1-8608-1bd4a4bfe852/Image_fx__42_.jpg" length="218252" type="image/jpeg"/>
  <link>https://blog.trustgraph.ai/p/how-your-enterprise-s-greatest-strength-could-become-its-undoing</link>
  <guid isPermaLink="true">https://blog.trustgraph.ai/p/how-your-enterprise-s-greatest-strength-could-become-its-undoing</guid>
  <pubDate>Thu, 15 May 2025 16:22:00 +0000</pubDate>
  <atom:published>2025-05-15T16:22:00Z</atom:published>
    <dc:creator>Daniel Davis</dc:creator>
    <category><![CDATA[Trustgraph]]></category>
    <category><![CDATA[Knowledge Sovereignty]]></category>
  <content:encoded><![CDATA[
    <div class='beehiiv'><style>
  .bh__table, .bh__table_header, .bh__table_cell { border: 1px solid #C0C0C0; }
  .bh__table_cell { padding: 5px; background-color: #FFFFFF; }
  .bh__table_cell p { color: #2D2D2D; font-family: 'Helvetica',Arial,sans-serif !important; overflow-wrap: break-word; }
  .bh__table_header { padding: 5px; background-color:#F1F1F1; }
  .bh__table_header p { color: #2A2A2A; font-family:'Trebuchet MS','Lucida Grande',Tahoma,sans-serif !important; overflow-wrap: break-word; }
</style><div class='beehiiv__body'><p class="paragraph" style="text-align:left;">We&#39;re in an AI gold rush. Every enterprise is scrambling to leverage it, hoping to unearth new efficiencies, personalize customer experiences, and gain an analytical edge. We diligently feed AI models our data, train them on our processes, and encourage them to learn from every interaction. The promise is tantalizing: an AI that intimately understands our business, ready to supercharge our operations.</p><p class="paragraph" style="text-align:left;">But in this race for AI integration, a new, insidious threat is emerging – one that traditional cybersecurity isn&#39;t built to address. This threat doesn&#39;t just target our systems; <b>it targets the very essence of our competitive advantage: our unique, accumulated knowledge.</b></p><p class="paragraph" style="text-align:left;"><b>Your Enterprise&#39;s Crown Jewels: More Than Just Data</b></p><p class="paragraph" style="text-align:left;">Every successful enterprise possesses a rich tapestry of knowledge, woven over years, sometimes decades. This isn&#39;t just raw data in a database; it&#39;s the deeply embedded intelligence that truly differentiates you:</p><ul><li><p class="paragraph" style="text-align:left;"><b>Subject Matter Expertise (SME):</b> The nuanced understanding held by your veteran engineers, your leading scientists, your shrewdest market analysts. 
The &quot;how-to&quot; and &quot;why&quot; that isn&#39;t written down in any manual but is evident in their work and decisions.</p></li><li><p class="paragraph" style="text-align:left;"><b>Proprietary Processes:</b> The unique workflows, manufacturing techniques, supply chain optimizations, and customer service protocols refined through countless iterations, which give you operational superiority.</p></li><li><p class="paragraph" style="text-align:left;"><b>Customer Learnings:</b> The deep insights into customer behavior, preferences, pain points, and feedback – gleaned from sales interactions, support tickets, and product usage – that allow you to tailor offerings and build loyalty.</p></li><li><p class="paragraph" style="text-align:left;"><b>Strategic Roadmaps & Failed Experiments:</b> Even your strategic plans, your product backlogs, and the &quot;lessons learned&quot; from projects that didn&#39;t pan out contain invaluable intelligence about your market perception and future direction.</p></li></ul><p class="paragraph" style="text-align:left;">This collective intelligence is your <b>organizational DNA</b>. It’s the reason you succeed, the foundation of your value chain, and the moat around your business.</p><p class="paragraph" style="text-align:left;"><b>The New Cyberattack: Knowledge Weaponized by AI</b></p><p class="paragraph" style="text-align:left;">Imagine a scenario: a sophisticated actor gains access to this rich, internal knowledge base. 
In the past, exploiting this would have been a Herculean effort – requiring immense human capital to sift through, understand, and then attempt to replicate.</p><p class="paragraph" style="text-align:left;">Enter modern AI.</p><p class="paragraph" style="text-align:left;">Suddenly, an attacker armed with your internal knowledge and sophisticated AI tools can:</p><ol start="1"><li><p class="paragraph" style="text-align:left;"><b>Replicate Your Success with Unprecedented Speed:</b> AI can rapidly analyze your documented processes, design documents, and even derive undocumented SME knowledge by studying patterns in your operational data. It can then be used to blueprint a competitor&#39;s product or service that mirrors your strengths, effectively shortcutting years of R&D and market learning.</p></li><li><p class="paragraph" style="text-align:left;"><b>Identify and Exploit Your Hidden Weaknesses:</b> More subtly, AI can scrutinize your entire value chain as revealed in your internal knowledge. It can find the inefficiencies in your processes that your own teams haven&#39;t spotted, identify subtle customer pain points that indicate market openings, or even model stress points in your operations, providing precise intelligence for exploitation or disruption.</p></li><li><p class="paragraph" style="text-align:left;"><b>Predict Your Next Move:</b> By analyzing internal communications, strategic planning documents, and development roadmaps, AI can develop a startlingly accurate predictive model of your company&#39;s future actions, eroding any element of surprise.</p></li></ol><p class="paragraph" style="text-align:left;"><b>The Asymmetric Threat: The Attacker with No Legacy</b></p><p class="paragraph" style="text-align:left;">Crucially, this new breed of competitor doesn&#39;t need your infrastructure or your legacy systems. They can operate with <b>zero tech debt</b>, using the latest tools and infrastructure. 
Armed <i>only</i> with your meticulously cultivated internal knowledge, they can effectively build a leaner, faster, and potentially more targeted version of what took you years to establish. They are the AI usurpers, ready to undercut you in the market with a fresh approach powered by <i>your own intelligence</i>.</p><p class="paragraph" style="text-align:left;"><b>This isn&#39;t a traditional data breach anymore; it&#39;s an intellectual property heist of your entire operational brain, with AI acting as the super-efficient translation engine to convert that knowledge into a direct competitive threat.</b></p><p class="paragraph" style="text-align:left;"><b>Rethinking Defense: Beyond Firewalls and Antivirus</b></p><p class="paragraph" style="text-align:left;">Our traditional cybersecurity measures – firewalls, endpoint detection, intrusion prevention – are designed to protect perimeters and known threats. They are less equipped to handle the insidious exploitation of intangible knowledge that may have been exfiltrated through subtle means, or even shared (perhaps unwisely) with AI systems whose governance is opaque.</p><p class="paragraph" style="text-align:left;">The battleground is shifting. Protecting your enterprise now means protecting the <i>sanctity and sovereignty</i> of your unique knowledge. It means having granular control over where this knowledge resides, who (or what AI) has access to it, and how its derived insights are managed and deployed.</p><p class="paragraph" style="text-align:left;">The ability to leverage AI on your own terms, within your own secure boundaries, using <i>your</i> knowledge exclusively for <i>your</i> benefit, is no longer just a good idea – it&#39;s becoming a fundamental imperative for survival in the AI-driven future. </p><p class="paragraph" style="text-align:left;">This is precisely the challenge <b>TrustGraph </b>is designed to address. 
By functioning as an AI Knowledge Platform that prioritizes <i><b>knowledge sovereignty</b></i>, TrustGraph enables your enterprise to harness the power of advanced AI while maintaining complete control over your proprietary knowledge. It allows you to deploy your entire AI stack—including sophisticated knowledge cores built by the TrustRAG engine from your internal data—within your secure, designated environments. This ensures your unique subject matter expertise, critical processes, and valuable customer learnings are used to generate insights exclusively for you, mitigating the risk of your knowledge being weaponized against you. With TrustGraph, you control where your knowledge resides, how it’s processed by AI, and who ultimately benefits from the intelligence it unlocks, turning your greatest asset back into your unassailable advantage.</p><p class="paragraph" style="text-align:left;">Establish knowledge sovereignty with <b>TrustGraph</b>:</p><ul><li><p class="paragraph" style="text-align:left;">🌟 <b>TrustGraph on </b><i><b><a class="link" href="https://github.com/trustgraph-ai/trustgraph?utm_source=blog.trustgraph.ai&utm_medium=newsletter&utm_campaign=how-your-enterprise-s-greatest-strength-could-become-its-undoing" target="_blank" rel="noopener noreferrer nofollow" style="color: rgb(113, 183, 251)">GitHub</a></b></i> 🧠 </p></li><li><p class="paragraph" style="text-align:left;"><b>Join the </b><i><b><a class="link" href="https://discord.gg/sQMwkRz5GX?utm_source=blog.trustgraph.ai&utm_medium=newsletter&utm_campaign=how-your-enterprise-s-greatest-strength-could-become-its-undoing" target="_blank" rel="noopener noreferrer nofollow" style="color: rgb(113, 183, 251)">Discord</a></b></i><b> </b>👋 </p></li></ul></div><div class='beehiiv__footer'><br class='beehiiv__footer__break'><hr class='beehiiv__footer__line'><a target="_blank" class="beehiiv__footer_link" style="text-align: center;" 
href="https://www.beehiiv.com/?utm_campaign=eafaa150-cc2c-4724-80e6-30d7a648c5c8&utm_medium=post_rss&utm_source=trustgraph">Powered by beehiiv</a></div></div>
  ]]></content:encoded>
</item>

      <item>
  <title>The Future is Imminent: Charting the Next Era of TrustGraph</title>
  <description>Version 1.0 COMING SOON</description>
      <enclosure url="https://media.beehiiv.com/cdn-cgi/image/fit=scale-down,format=auto,onerror=redirect,quality=80/uploads/asset/file/5613f6a4-66e4-463a-9e07-2dd154c42c0d/Image_fx__40_.jpg" length="142311" type="image/jpeg"/>
  <link>https://blog.trustgraph.ai/p/charting-the-next-era-of-trustgraph</link>
  <guid isPermaLink="true">https://blog.trustgraph.ai/p/charting-the-next-era-of-trustgraph</guid>
  <pubDate>Mon, 12 May 2025 15:56:00 +0000</pubDate>
  <atom:published>2025-05-12T15:56:00Z</atom:published>
    <dc:creator>Daniel Davis</dc:creator>
    <category><![CDATA[Trustgraph]]></category>
  <content:encoded><![CDATA[
    <div class='beehiiv'><style>
  .bh__table, .bh__table_header, .bh__table_cell { border: 1px solid #C0C0C0; }
  .bh__table_cell { padding: 5px; background-color: #FFFFFF; }
  .bh__table_cell p { color: #2D2D2D; font-family: 'Helvetica',Arial,sans-serif !important; overflow-wrap: break-word; }
  .bh__table_header { padding: 5px; background-color:#F1F1F1; }
  .bh__table_header p { color: #2A2A2A; font-family:'Trebuchet MS','Lucida Grande',Tahoma,sans-serif !important; overflow-wrap: break-word; }
</style><div class='beehiiv__body'><p class="paragraph" style="text-align:left;">For everyone who has been part of the <b>TrustGraph</b> journey, from our earliest releases to today, a heartfelt thank you! Your engagement, feedback, discussions about use cases, and enthusiasm for the future have been the bedrock of our development. It’s been an incredible adventure co-creating with this community, and we truly, <i>truly</i> appreciate every contribution.</p><p class="paragraph" style="text-align:left;">As many of you have sensed, a pivotal moment for TrustGraph is just around the corner: <b>Version 1.0 is imminent!</b> 🥳 We genuinely couldn&#39;t be more excited about what this signifies and the capabilities it will unlock.</p><p class="paragraph" style="text-align:left;"><b>A Quick Note on Version 0.23: Our Bridge to Tomorrow</b></p><p class="paragraph" style="text-align:left;">With the horizon of 1.0 so close, Version 0.23 holds a unique spot. Up until now, we’ve aimed for every iteration of TrustGraph to be production-worthy. Version 0.23, however, is a little different – think of it as a crucial <b>bridge to the future</b>, incorporating foundational work that sets the stage for the innovations in 1.0.</p><p class="paragraph" style="text-align:left;">Because of this transitional nature, we can’t, with a clear conscience, label 0.23 as fully &quot;production-grade.&quot; For those requiring absolute stability in their current deployments, we recommend sticking with <b>Version 0.22</b> for the time being. But for the adventurous souls keen to glimpse the architectural shifts and willing to explore, 0.23 is there for you to spin up! 
For everyone else, the comprehensive, robust, and groundbreaking Version 1.0 is coming VERY SOON.</p><p class="paragraph" style="text-align:left;"><b>Embracing Openness: A Shift to Apache 2.0 Licensing</b></p><p class="paragraph" style="text-align:left;">When we first launched TrustGraph, the licensing choice was a subject of much internal debate. We initially settled on AGPL-3.0, seeking a balance that included some protective measures. In retrospect, those measures were likely more complex than necessary.</p><p class="paragraph" style="text-align:left;">Our vision for TrustGraph’s commercial value is clear, and it doesn’t revolve around gatekeeping through complicated licenses. We believe in the power of open collaboration and want to remove any barriers to adoption and contribution. That&#39;s why, effective immediately, <b>TrustGraph is now licensed under Apache 2.0!</b> This shift reflects our commitment to fostering an even more open and collaborative ecosystem.</p><p class="paragraph" style="text-align:left;"><b>The Dawn of 1.0: A Dynamic, Living Knowledge Platform</b></p><p class="paragraph" style="text-align:left;">So, what does the future hold with TrustGraph 1.0 and beyond? 
We&#39;re set to <b>break the mold for knowledge platforms.</b> We’re moving beyond static pipelines or simple RAG add-ons to create a truly <b>dynamic and living knowledge platform.</b></p><p class="paragraph" style="text-align:left;">Imagine an AI Platform that not only deploys your LLMs and tools anywhere you need them but also enables:</p><ul><li><p class="paragraph" style="text-align:left;"><b>Actively Evolving Knowledge Cores:</b> Systems that can learn, adapt, and have their structured understanding dynamically updated and refined.</p></li><li><p class="paragraph" style="text-align:left;"><b>Deep Contextual Awareness:</b> AI that operates with a rich, interconnected understanding of your domain, powered by truly intelligent knowledge graph capabilities.</p></li><li><p class="paragraph" style="text-align:left;"><b>Unlocking Latent LLM Power:</b> Providing LLMs with precisely the right and interconnected knowledge to perform at their peak, reliably and across any environment.</p></li><li><p class="paragraph" style="text-align:left;"><b>Dynamic Intelligence</b>: Adapting to loaded knowledge cores and generating custom code processing modules and even frontend apps that auto-deploy without any need for code development.</p></li></ul><p class="paragraph" style="text-align:left;">TrustGraph 1.0 is designed to be this platform – the system that provides the fertile ground for these intelligent, adaptable, and powerful AI solutions to flourish. It&#39;s about creating an environment where AI can truly <i>know</i>, <i>learn</i>, and <i>reason</i> based on a foundation of trust and integrated knowledge.</p><p class="paragraph" style="text-align:left;">The future is incredibly bright for TrustGraph. 
We are on the cusp of delivering dynamic capabilities that will redefine how organizations deploy and leverage AI knowledge.</p><p class="paragraph" style="text-align:left;"><b>Stay Connected!</b></p><p class="paragraph" style="text-align:left;">We&#39;re excited to share more details as we approach the 1.0 launch. If you have any questions, feedback, or simply want to chat about the future, please don’t hesitate to reach out.</p><ul><li><p class="paragraph" style="text-align:left;">🌟 <b>TrustGraph on </b><span style="color:rgb(12, 74, 110);"><span style="text-decoration:underline;"><i><a class="link" href="https://github.com/trustgraph-ai/trustgraph?utm_source=blog.trustgraph.ai&utm_medium=newsletter&utm_campaign=the-future-is-imminent-charting-the-next-era-of-trustgraph" target="_blank" rel="noopener noreferrer nofollow" style="color: rgb(12, 74, 110)"><b>GitHub</b></a></i></span></span> 🧠 </p></li><li><p class="paragraph" style="text-align:left;"><b>Join the </b><span style="color:rgb(12, 74, 110);"><span style="text-decoration:underline;"><i><b><a class="link" href="https://discord.gg/sQMwkRz5GX?utm_source=blog.trustgraph.ai&utm_medium=referral&utm_campaign=how-to-use-ai-to-see-data-in-3d" target="_blank" rel="noopener noreferrer nofollow" style="color: rgb(12, 74, 110)">Discord</a></b></i></span></span><b> </b>👋 </p></li></ul><p class="paragraph" style="text-align:left;">Thank you once again for being on this journey with us. The best is yet to come!</p></div><div class='beehiiv__footer'><br class='beehiiv__footer__break'><hr class='beehiiv__footer__line'><a target="_blank" class="beehiiv__footer_link" style="text-align: center;" href="https://www.beehiiv.com/?utm_campaign=5ac1d1dd-a4dd-40e2-9dbc-5ab02923623c&utm_medium=post_rss&utm_source=trustgraph">Powered by beehiiv</a></div></div>
  ]]></content:encoded>
</item>

      <item>
  <title>Reclaiming Knowledge Sovereignty in the Age of AI</title>
  <description>Empower your enterprise with AI, without sacrificing an ounce of control. Provision your path to Knowledge Sovereignty with TrustGraph.</description>
      <enclosure url="https://media.beehiiv.com/cdn-cgi/image/fit=scale-down,format=auto,onerror=redirect,quality=80/uploads/asset/file/0ac4af4d-f675-4853-86b6-287f4a4fbf5b/image_fx__33_.jpg" length="271937" type="image/jpeg"/>
  <link>https://blog.trustgraph.ai/p/reclaiming-knowledge-sovereignty</link>
  <guid isPermaLink="true">https://blog.trustgraph.ai/p/reclaiming-knowledge-sovereignty</guid>
  <pubDate>Tue, 06 May 2025 22:17:55 +0000</pubDate>
  <atom:published>2025-05-06T22:17:55Z</atom:published>
    <dc:creator>Daniel Davis</dc:creator>
    <category><![CDATA[Trustgraph]]></category>
    <category><![CDATA[Trustrag]]></category>
    <category><![CDATA[Knowledge Sovereignty]]></category>
  <content:encoded><![CDATA[
    <div class='beehiiv'><style>
  .bh__table, .bh__table_header, .bh__table_cell { border: 1px solid #C0C0C0; }
  .bh__table_cell { padding: 5px; background-color: #FFFFFF; }
  .bh__table_cell p { color: #2D2D2D; font-family: 'Helvetica',Arial,sans-serif !important; overflow-wrap: break-word; }
  .bh__table_header { padding: 5px; background-color:#F1F1F1; }
  .bh__table_header p { color: #2A2A2A; font-family:'Trebuchet MS','Lucida Grande',Tahoma,sans-serif !important; overflow-wrap: break-word; }
</style><div class='beehiiv__body'><p class="paragraph" style="text-align:left;">The transformative power of AI is undeniable. Yet, as organizations increasingly leverage AI, a critical question emerges: <b>Who truly controls the invaluable knowledge these systems create?</b> This isn&#39;t just about raw data; it&#39;s about the derived insights, the refined intelligence, and the structured <b>knowledge cores</b> that will define competitive advantage. This is the essence of <b>Knowledge Sovereignty.</b></p><p class="paragraph" style="text-align:left;">For enterprises operating within complex regulatory landscapes that champion data privacy and digital autonomy, knowledge sovereignty is not a luxury—it&#39;s a mandate. It means having unequivocal control over where your intellectual property and strategic insights reside, how they are governed, and who can access them, especially when that knowledge is processed and refined by AI.</p><p class="paragraph" style="text-align:left;">Many current AI platforms, often centralized and operating within specific cloud ecosystems, can inadvertently challenge an organization&#39;s ability to maintain full knowledge sovereignty. 
When AI tools process sensitive data and generate valuable knowledge graphs outside of your direct control or preferred jurisdictions, enterprises can face:</p><ul><li><p class="paragraph" style="text-align:left;"><b>Compliance Risks:</b> Difficulty demonstrating adherence to stringent data protection and residency requirements.</p></li><li><p class="paragraph" style="text-align:left;"><b>Loss of Control:</b> Limited visibility or say over where refined knowledge assets are stored and processed.</p></li><li><p class="paragraph" style="text-align:left;"><b>Vendor Lock-in:</b> Dependency on specific provider ecosystems can make it challenging to repatriate or migrate valuable knowledge assets.</p></li></ul><p class="paragraph" style="text-align:left;">This is where <a class="link" href="https://trustgraph.ai?utm_source=blog.trustgraph.ai&utm_medium=newsletter&utm_campaign=reclaiming-knowledge-sovereignty-in-the-age-of-ai" target="_blank" rel="noopener noreferrer nofollow"><b>TrustGraph</b></a><b>, the Knowledge Provisioning Platform</b>, fundamentally shifts the paradigm. We understand that for enterprises, the &quot;where&quot; and &quot;how&quot; of AI deployment are as crucial as the &quot;what.&quot; TrustGraph is designed from the ground up to empower you with full control:</p><ol start="1"><li><p class="paragraph" style="text-align:left;"><b>Deploy Anywhere, Retain Control:</b> TrustGraph&#39;s core philosophy is universal deployability. As a <b>fully containerized solution</b>, it can be provisioned on <b>your choice of infrastructure</b>. 
This means you can deploy your entire AI stack – including models, tools, and the critical <b>TrustRAG knowledge cores</b> – within:</p><ul><li><p class="paragraph" style="text-align:left;"><b>Your On-Premises Hardware:</b> Ensuring data and knowledge never leave your physical control.</p></li><li><p class="paragraph" style="text-align:left;"><b>Private Cloud Environments:</b> Leveraging cloud agility within your secure, dedicated infrastructure.</p></li><li><p class="paragraph" style="text-align:left;"><b>Specific Cloud Regions/Providers:</b> Adhering to jurisdictional requirements and maintaining data residency for both raw data and derived knowledge.</p></li></ul></li><li><p class="paragraph" style="text-align:left;"><b>Knowledge Cores Under Your Governance:</b> With TrustGraph, the sophisticated <b>Knowledge Graphs</b> - semantic relationships mapped to vector embeddings - built by our TrustRAG engine are not black-box assets residing on a third-party platform. They are integral components of <i>your</i> <b>TrustGraph</b> deployment. 
This means:</p><ul><li><p class="paragraph" style="text-align:left;"><b>You own and manage the derived knowledge:</b> The insights, the structured understanding, the very &quot;intelligence&quot; in your data remain under your stewardship.</p></li><li><p class="paragraph" style="text-align:left;"><b>You dictate access and policies:</b> Applying your internal governance and compliance frameworks directly to these invaluable knowledge assets.</p></li></ul></li><li><p class="paragraph" style="text-align:left;"><b>Transparent & Verifiable Operations:</b> As an open-source platform, TrustGraph offers a level of transparency that builds confidence, allowing your teams to understand and verify how data is processed and knowledge is managed, ensuring alignment with the principle of data protection by design.</p></li></ol><p class="paragraph" style="text-align:left;"><b>The TrustGraph Advantage: Enabling Innovation with Confidence</b></p><p class="paragraph" style="text-align:left;">By choosing <b>TrustGraph</b>, enterprises no longer have to compromise between leveraging state-of-the-art AI and upholding the principles of knowledge sovereignty. 
You gain:</p><ul><li><p class="paragraph" style="text-align:left;"><b>Full Jurisdictional Control:</b> Ensure your most valuable digital assets – including complex knowledge graphs – are managed within your desired legal and geographical boundaries.</p></li><li><p class="paragraph" style="text-align:left;"><b>Enhanced Compliance & Trust:</b> Confidently meet and exceed data protection regulations, building trust with your customers and partners.</p></li><li><p class="paragraph" style="text-align:left;"><b>Unhindered Innovation:</b> Leverage powerful AI capabilities, including advanced RAG with rich knowledge cores, without ceding control over the critical intelligence they generate.</p></li></ul><p class="paragraph" style="text-align:left;"><b>Take Control of Your AI-Driven Future</b></p><p class="paragraph" style="text-align:left;">In an era where knowledge is the new currency, maintaining sovereignty over it is paramount. TrustGraph provides enterprises with the platform and philosophy to confidently embrace AI, secure in the knowledge that their insights and intellectual property remain firmly under their control.</p><p class="paragraph" style="text-align:left;"><b>Ready to build your AI future on a foundation of sovereignty?</b></p><ul><li><p class="paragraph" style="text-align:left;"><a class="link" href="https://github.com/trustgraph-ai/trustgraph?utm_source=blog.trustgraph.ai&utm_medium=newsletter&utm_campaign=reclaiming-knowledge-sovereignty-in-the-age-of-ai" target="_blank" rel="noopener noreferrer nofollow"><b>Explore how TrustGraph&#39;s deploy-anywhere architecture works</b></a></p></li><li><p class="paragraph" style="text-align:left;"><a class="link" href="https://github.com/trustgraph-ai/trustgraph?tab=readme-ov-file&utm_source=blog.trustgraph.ai&utm_medium=newsletter&utm_campaign=reclaiming-knowledge-sovereignty-in-the-age-of-ai#-trustrag" target="_blank" rel="noopener noreferrer nofollow">Discover how TrustRAG puts you in control of your AI&#39;s 
Knowledge Core</a></p></li><li><p class="paragraph" style="text-align:left;"><a class="link" href="https://discord.gg/sQMwkRz5GX?utm_source=blog.trustgraph.ai&utm_medium=newsletter&utm_campaign=reclaiming-knowledge-sovereignty-in-the-age-of-ai" target="_blank" rel="noopener noreferrer nofollow"><b>Engage with our team to discuss your specific sovereignty needs</b></a></p></li></ul><p class="paragraph" style="text-align:left;">Empower your enterprise with AI, without sacrificing an ounce of control. Provision your path to Knowledge Sovereignty with <a class="link" href="https://trustgraph.ai?utm_source=blog.trustgraph.ai&utm_medium=newsletter&utm_campaign=reclaiming-knowledge-sovereignty-in-the-age-of-ai" target="_blank" rel="noopener noreferrer nofollow"><b>TrustGraph</b></a>.</p></div><div class='beehiiv__footer'><br class='beehiiv__footer__break'><hr class='beehiiv__footer__line'><a target="_blank" class="beehiiv__footer_link" style="text-align: center;" href="https://www.beehiiv.com/?utm_campaign=1212dc85-a401-4837-ace1-6295b14279a0&utm_medium=post_rss&utm_source=trustgraph">Powered by beehiiv</a></div></div>
  ]]></content:encoded>
</item>

      <item>
  <title>Part 3: How TrustGraph&#39;s Knowledge Cores End the Memento Nightmare</title>
  <description>Giving AI Its Knowledge Back</description>
      <enclosure url="https://media.beehiiv.com/cdn-cgi/image/fit=scale-down,format=auto,onerror=redirect,quality=80/uploads/asset/file/223bbe4f-015b-4386-99c4-2b3116a0627c/TG-ship.jpg" length="208249" type="image/jpeg"/>
  <link>https://blog.trustgraph.ai/p/how-trustgraph-s-knowledge-cores-end-the-memento-nightmare</link>
  <guid isPermaLink="true">https://blog.trustgraph.ai/p/how-trustgraph-s-knowledge-cores-end-the-memento-nightmare</guid>
  <pubDate>Mon, 28 Apr 2025 16:22:03 +0000</pubDate>
  <atom:published>2025-04-28T16:22:03Z</atom:published>
    <dc:creator>Daniel Davis</dc:creator>
    <category><![CDATA[Trustgraph]]></category>
    <category><![CDATA[Trustrag]]></category>
    <category><![CDATA[Rag]]></category>
    <category><![CDATA[Memory]]></category>
    <category><![CDATA[Memento]]></category>
  <content:encoded><![CDATA[
    <div class='beehiiv'><style>
  .bh__table, .bh__table_header, .bh__table_cell { border: 1px solid #C0C0C0; }
  .bh__table_cell { padding: 5px; background-color: #FFFFFF; }
  .bh__table_cell p { color: #2D2D2D; font-family: 'Helvetica',Arial,sans-serif !important; overflow-wrap: break-word; }
  .bh__table_header { padding: 5px; background-color:#F1F1F1; }
  .bh__table_header p { color: #2A2A2A; font-family:'Trebuchet MS','Lucida Grande',Tahoma,sans-serif !important; overflow-wrap: break-word; }
</style><div class='beehiiv__body'><p class="paragraph" style="text-align:left;">In <a class="link" href="https://blog.trustgraph.ai/p/the-memento-problem-with-ai-memory?utm_source=blog.trustgraph.ai&utm_medium=newsletter&utm_campaign=part-3-how-trustgraph-s-knowledge-cores-end-the-memento-nightmare" target="_blank" rel="noopener noreferrer nofollow">Parts 1</a> and <a class="link" href="https://blog.trustgraph.ai/p/why-your-ai-is-stuck-in-a-memento-loop?utm_source=blog.trustgraph.ai&utm_medium=newsletter&utm_campaign=part-3-how-trustgraph-s-knowledge-cores-end-the-memento-nightmare" target="_blank" rel="noopener noreferrer nofollow">2</a>, we exposed the dangerous flaw in most current AI &quot;memory&quot;: like Leonard Shelby in <i>Memento</i>, our systems often operate on <b>disconnected fragments</b>, unable to form the <b>interconnected knowledge</b> needed for reliable reasoning. We saw how this reliance on context-stripped, relationship-blind, and provenance-oblivious data dooms AI to a cycle of confident errors and hallucinations, just as Leonard&#39;s fragmented note system led him down dangerous paths.</p><p class="paragraph" style="text-align:left;">So, how do we break the loop? How do we give AI the ability to truly <i>know</i>, not just <i>recall</i> fragments? The answer isn&#39;t a slightly better system of Polaroids and notes. The answer is to build the <b>integrated, structured understanding</b> Leonard tragically lacked: a <b>Knowledge Core</b>.</p><p class="paragraph" style="text-align:left;">This is precisely what <b>TrustGraph, the AI Provisioning Platform,</b> delivers through its advanced <b>TrustRAG</b> engine. It moves beyond the limitations of fragmented recall by architecting genuine knowledge:</p><ol start="1"><li><p class="paragraph" style="text-align:left;"><b>Mapping the Connections (Solving Relationship Blindness):</b> Unlike Leonard staring at isolated clues, TrustGraph automatically <b>builds a Knowledge Graph (KG)</b>. 
It doesn&#39;t just store facts; it explicitly <b>maps the relationships </b><i><b>between</b></i><b> them</b> (e.g., &quot;Person X <i>works for</i> Company Y,&quot; &quot;Event A <i>caused</i> Event B&quot;). This Knowledge Graph <i>is</i> the coherent narrative structure Leonard couldn&#39;t form – the understanding of <i>how</i> things connect.</p></li><li><p class="paragraph" style="text-align:left;"><b>Delivering Contextualized Scenes (Solving Context Stripping):</b> Leonard reviewed one Polaroid at a time, losing the big picture. TrustRAG uses a <b>hybrid retrieval</b> process. Vector search identifies relevant starting points <i>within</i> the Knowledge Graph, but then TrustRAG <b>traverses the graph connections</b>, constructing a <b>subgraph</b> of related entities and relationships. Instead of isolated fragments, the LLM receives a <i>connected scene</i> – a relevant slice of the knowledge core with inherent local context.</p></li><li><p class="paragraph" style="text-align:left;"><b>Verifying the Clues (Addressing Provenance Oblivion):</b> Leonard couldn&#39;t be sure when or why he wrote his notes. TrustGraph&#39;s Knowledge Graph architecture is designed to incorporate <b>provenance metadata</b> directly with the facts and relationships it stores (source, timestamp, reliability). 
TrustRAG can then leverage this, allowing the AI to weigh information based on its origins, escaping the trap of treating all retrieved fragments as equally trustworthy.</p></li></ol><p class="paragraph" style="text-align:left;"><b>Escaping the </b><i><b>Memento</b></i><b> Loop: The Power of a Knowledge Core</b></p><p class="paragraph" style="text-align:left;">By building and utilizing this structured Knowledge Core, TrustGraph fundamentally changes AI capabilities:</p><ul><li><p class="paragraph" style="text-align:left;"><b>Enables Reliable Reasoning:</b> Provides the interconnected facts and explicit relationships needed for complex reasoning, synthesis, and understanding causality – tasks impossible for Leonard (and fragment-based AI).</p></li><li><p class="paragraph" style="text-align:left;"><b>Dramatically Reduces Hallucinations:</b> Grounding responses in a verifiable graph of knowledge, potentially weighted by provenance, significantly reduces the chance of fabricating connections or asserting baseless claims.</p></li><li><p class="paragraph" style="text-align:left;"><b>Offers Explainable Insight:</b> The retrieved subgraph itself acts as an explanation, showing <i>how</i> the AI arrived at its context based on the knowledge core&#39;s structure – unlike Leonard&#39;s often opaque leaps of faith.</p></li></ul><p class="paragraph" style="text-align:left;"><b>Provisioning Reliable Knowledge, Not Just Infrastructure</b></p><p class="paragraph" style="text-align:left;">TrustGraph isn&#39;t just a concept. It&#39;s an <b>AI Provisioning Platform</b> that containerizes the <i>entire</i> intelligent system – the LLMs, the necessary tools, <i>and</i> the essential TrustRAG Knowledge Cores – allowing you to <b>reliably provision this complete, knowledgeable AI stack anywhere</b> (Cloud, On-Prem, Edge). 
We&#39;re providing the robust, managed infrastructure for knowledge that Leonard&#39;s fragile system lacked.</p><p class="paragraph" style="text-align:left;"><b>Stop building AI condemned to relive Leonard Shelby&#39;s nightmare.</b> Stop provisioning systems based on fragmented recall and start delivering applications grounded in genuine understanding.</p><p class="paragraph" style="text-align:left;"><b>Give your AI the gift of coherent memory. Build with a Knowledge Core.</b></p><ul><li><p class="paragraph" style="text-align:left;"><b>Explore </b><a class="link" href="https://trustgraph.ai?utm_source=blog.trustgraph.ai&utm_medium=newsletter&utm_campaign=part-3-how-trustgraph-s-knowledge-cores-end-the-memento-nightmare" target="_blank" rel="noopener noreferrer nofollow"><b>TrustGraph</b></a><b> on </b><a class="link" href="https://github.com/trustgraph-ai/trustgraph?utm_source=blog.trustgraph.ai&utm_medium=newsletter&utm_campaign=part-3-how-trustgraph-s-knowledge-cores-end-the-memento-nightmare" target="_blank" rel="noopener noreferrer nofollow"><b>GitHub</b></a><b> and see how we structure knowledge</b></p></li><li><p class="paragraph" style="text-align:left;"><b>Read the </b><a class="link" href="https://github.com/trustgraph-ai/trustgraph?tab=readme-ov-file&utm_source=blog.trustgraph.ai&utm_medium=newsletter&utm_campaign=part-3-how-trustgraph-s-knowledge-cores-end-the-memento-nightmare#-trustrag" target="_blank" rel="noopener noreferrer nofollow"><b>TrustRAG</b></a><b> documentation for technical details</b></p></li><li><p class="paragraph" style="text-align:left;"><b>Join our </b><a class="link" href="https://discord.gg/sQMwkRz5GX?utm_source=blog.trustgraph.ai&utm_medium=newsletter&utm_campaign=part-3-how-trustgraph-s-knowledge-cores-end-the-memento-nightmare" target="_blank" rel="noopener noreferrer nofollow"><b>community</b></a><b> and discuss the future of AI knowledge</b></p></li></ul><p class="paragraph" style="text-align:left;">Provision AI that 
<i>knows</i>. Provision it with <b>TrustGraph.</b></p></div><div class='beehiiv__footer'><br class='beehiiv__footer__break'><hr class='beehiiv__footer__line'><a target="_blank" class="beehiiv__footer_link" style="text-align: center;" href="https://www.beehiiv.com/?utm_campaign=6559329f-3de0-47a4-b277-eb96628c9646&utm_medium=post_rss&utm_source=trustgraph">Powered by beehiiv</a></div></div>
  ]]></content:encoded>
</item>

      <item>
  <title>Part 2: Why Your AI is Stuck in a Memento Loop</title>
  <description>The Relationship &amp; Provenance Black Hole</description>
      <enclosure url="https://media.beehiiv.com/cdn-cgi/image/fit=scale-down,format=auto,onerror=redirect,quality=80/uploads/asset/file/abcc89c2-e4ae-41c8-814a-7eda8c27c844/image_fx__28_.jpg" length="393130" type="image/jpeg"/>
  <link>https://blog.trustgraph.ai/p/why-your-ai-is-stuck-in-a-memento-loop</link>
  <guid isPermaLink="true">https://blog.trustgraph.ai/p/why-your-ai-is-stuck-in-a-memento-loop</guid>
  <pubDate>Sat, 26 Apr 2025 22:17:02 +0000</pubDate>
  <atom:published>2025-04-26T22:17:02Z</atom:published>
    <dc:creator>Daniel Davis</dc:creator>
    <category><![CDATA[Trustgraph]]></category>
    <category><![CDATA[Rag]]></category>
    <category><![CDATA[Memory]]></category>
    <category><![CDATA[Memento]]></category>
  <content:encoded><![CDATA[
    <div class='beehiiv'><style>
  .bh__table, .bh__table_header, .bh__table_cell { border: 1px solid #C0C0C0; }
  .bh__table_cell { padding: 5px; background-color: #FFFFFF; }
  .bh__table_cell p { color: #2D2D2D; font-family: 'Helvetica',Arial,sans-serif !important; overflow-wrap: break-word; }
  .bh__table_header { padding: 5px; background-color:#F1F1F1; }
  .bh__table_header p { color: #2A2A2A; font-family:'Trebuchet MS','Lucida Grande',Tahoma,sans-serif !important; overflow-wrap: break-word; }
</style><div class='beehiiv__body'><p class="paragraph" style="text-align:left;">In <a class="link" href="https://blog.trustgraph.ai/p/the-memento-problem-with-ai-memory?utm_source=blog.trustgraph.ai&utm_medium=newsletter&utm_campaign=part-2-why-your-ai-is-stuck-in-a-memento-loop" target="_blank" rel="noopener noreferrer nofollow">Part 1</a>, we likened today&#39;s typical AI &quot;memory&quot; to the plight of Leonard Shelby in <b><i>Memento</i></b>: brilliant at accessing isolated fragments (Polaroids, notes, tattoos) but unable to weave them into the coherent tapestry of true <b>knowledge</b>. He remembers that he <i>has</i> a note, but not necessarily the reliable <i>why</i>, or <i>how</i> it connects to everything else. Now, let&#39;s diagnose <i>why</i> popular RAG approaches inherently create this dangerous, fragmented reality for our AI.</p><p class="paragraph" style="text-align:left;">Imagine Leonard&#39;s investigation. His &quot;database&quot; consists of disconnected snapshots and cryptic assertions. When he tries to solve a problem (&quot;Who is John G?&quot;), he shuffles through these fragments, looking for clues that <i>feel</i> related. This is strikingly similar to how typical RAG approaches use “memory”:</p><ol start="1"><li><p class="paragraph" style="text-align:left;"><b>The Polaroid Snapshot (Context Stripping):</b> Just as Leonard&#39;s Polaroids capture only a single moment divorced from what came before or after, document chunking for vectorization <b>strips vital context</b>. A retrieved sentence saying &quot;Project Titan deadline is critical&quot; loses the surrounding discussion about <i>why</i> it&#39;s critical, <i>who</i> set it, or <i>what</i> happens if it&#39;s missed. 
The AI gets the snapshot, not the scene.</p></li><li><p class="paragraph" style="text-align:left;"><b>Cryptic Notes & Missing Links (Relationship Blindness):</b> Leonard&#39;s notes might say &quot;Meet Natalie&quot; and &quot;Don&#39;t believe Dodd&#39;s lies.&quot; Vector search can find documents mentioning &quot;Natalie&quot; and documents mentioning &quot;Dodd,&quot; but like Leonard, it <b>lacks the explicit map connecting them</b>. Does Natalie know Dodd? Is she part of the lies? The relationships aren&#39;t inherently encoded in the vector similarity. Finding <i>similar</i> topics doesn&#39;t mean understanding their <i>causal</i> or <i>structural</i> connection, leaving the AI to guess these critical links.</p></li><li><p class="paragraph" style="text-align:left;"><b>Trusting Faded Ink (Provenance Oblivion):</b> Leonard <i>must</i> trust his fragmented notes, even if they were written under duress, based on misinformation, or are simply outdated. Standard RAG often does the same, treating all retrieved text fragments as equally valid assertions. It frequently lacks a robust mechanism to track <b>provenance</b> – the source, timestamp, or reliability score of the information. 
An old, debunked &quot;fact&quot; retrieved via vector similarity looks just as convincing to the LLM as a fresh, verified one.</p></li></ol><p class="paragraph" style="text-align:left;"><b>The Leonard Shelby Effect in AI:</b></p><p class="paragraph" style="text-align:left;">When AI operates with only these disconnected, context-stripped, relationship-blind, and provenance-oblivious fragments, its reasoning becomes dangerously flawed:</p><ul><li><p class="paragraph" style="text-align:left;"><b>Hallucinating Connections:</b> Like Leonard assuming connections between unrelated clues, the LLM invents relationships between text fragments simply because they were retrieved together.</p></li><li><p class="paragraph" style="text-align:left;"><b>Contradictory Actions:</b> Acting on conflicting &quot;facts&quot; because it can&#39;t verify which source or connection is trustworthy or current.</p></li><li><p class="paragraph" style="text-align:left;"><b>Inability to Synthesize:</b> Unable to build a larger picture or draw reliable conclusions because the foundational links between data points are missing or inferred incorrectly.</p></li></ul><p class="paragraph" style="text-align:left;">We are building AI systems trapped in a <i>Memento</i> loop: <b>forever re-reading fragmented clues, capable of impressive recall but incapable of forming the durable, interconnected knowledge needed for reliable reasoning and true understanding</b>. They are architecturally destined to make potentially disastrous mistakes based on an incomplete and untrustworthy view of their &quot;world.&quot;</p><p class="paragraph" style="text-align:left;">If we want AI to escape this loop, we need to fundamentally change how we provide information. We need to move beyond retrieving isolated Polaroids and start building systems that can understand the <i>whole</i>, <b>interconnected</b> story.</p><p class="paragraph" style="text-align:left;"><i>How do we provide that interconnected narrative? 
How do we build AI memory that understands relationships and provenance? Stay tuned for Part 3 where we reveal the architecture for true AI knowledge.</i></p><p class="paragraph" style="text-align:left;"><i>Have you seen an AI confidently stitch together unrelated facts like Leonard building a flawed theory? Let us know:</i></p><ul><li><p class="paragraph" style="text-align:left;">🌟 <a class="link" href="http://trustgraph.ai?utm_source=blog.trustgraph.ai&utm_medium=newsletter&utm_campaign=part-2-why-your-ai-is-stuck-in-a-memento-loop" target="_blank" rel="noopener noreferrer nofollow">TrustGraph</a> on <a class="link" href="https://github.com/trustgraph-ai/trustgraph?utm_source=blog.trustgraph.ai&utm_medium=newsletter&utm_campaign=part-2-why-your-ai-is-stuck-in-a-memento-loop" target="_blank" rel="noopener noreferrer nofollow">GitHub</a> 🚢 </p></li><li><p class="paragraph" style="text-align:left;">Join the <a class="link" href="https://discord.gg/sQMwkRz5GX?utm_source=blog.trustgraph.ai&utm_medium=newsletter&utm_campaign=part-2-why-your-ai-is-stuck-in-a-memento-loop" target="_blank" rel="noopener noreferrer nofollow">Discord</a> 👋 </p></li></ul></div><div class='beehiiv__footer'><br class='beehiiv__footer__break'><hr class='beehiiv__footer__line'><a target="_blank" class="beehiiv__footer_link" style="text-align: center;" href="https://www.beehiiv.com/?utm_campaign=b88673bc-a81d-4c56-9f70-f5a331001683&utm_medium=post_rss&utm_source=trustgraph">Powered by beehiiv</a></div></div>
  ]]></content:encoded>
</item>

      <item>
  <title>Part 1: The Memento Problem with AI Memory</title>
  <description>Why Your AI Memory Can&#39;t Truly Remember</description>
      <enclosure url="https://media.beehiiv.com/cdn-cgi/image/fit=scale-down,format=auto,onerror=redirect,quality=80/uploads/asset/file/7970e658-3686-4051-9a4f-bb8e7a3b08df/image_fx__27_.jpg" length="343795" type="image/jpeg"/>
  <link>https://blog.trustgraph.ai/p/the-memento-problem-with-ai-memory</link>
  <guid isPermaLink="true">https://blog.trustgraph.ai/p/the-memento-problem-with-ai-memory</guid>
  <pubDate>Fri, 25 Apr 2025 21:37:48 +0000</pubDate>
  <atom:published>2025-04-25T21:37:48Z</atom:published>
    <dc:creator>Daniel Davis</dc:creator>
    <category><![CDATA[Trustgraph]]></category>
    <category><![CDATA[Rag]]></category>
    <category><![CDATA[Memory]]></category>
    <category><![CDATA[Memento]]></category>
  <content:encoded><![CDATA[
    <div class='beehiiv'><style>
  .bh__table, .bh__table_header, .bh__table_cell { border: 1px solid #C0C0C0; }
  .bh__table_cell { padding: 5px; background-color: #FFFFFF; }
  .bh__table_cell p { color: #2D2D2D; font-family: 'Helvetica',Arial,sans-serif !important; overflow-wrap: break-word; }
  .bh__table_header { padding: 5px; background-color:#F1F1F1; }
  .bh__table_header p { color: #2A2A2A; font-family:'Trebuchet MS','Lucida Grande',Tahoma,sans-serif !important; overflow-wrap: break-word; }
</style><div class='beehiiv__body'><p class="paragraph" style="text-align:left;">We&#39;re drowning in takes about AI &quot;memory.&quot; RAG is hailed as the silver bullet, promising intelligent systems that learn and retain information. But let&#39;s be brutally honest: most implementations are building <b>agents that are drowning in data and suffocating from a lack of knowledge.</b></p><p class="paragraph" style="text-align:left;">These systems excel at retrieving <i>fragments</i> – isolated data points plucked from documents and observations stripped of their origins. Ask it a question, and it surfaces a text snippet that <i>looks</i> relevant. This feels like memory - like recall.</p><p class="paragraph" style="text-align:left;">But it isn&#39;t <b>knowledge</b>.</p><p class="paragraph" style="text-align:left;">Real <b>knowledge</b> isn&#39;t just storing data points - it&#39;s understanding their context, their provenance (where did this information come from? is it reliable?), and their relationships with <i>other</i> data points. Human memory builds interconnected information networks while current AI &quot;memory&quot; approaches just hoard disconnected digital Post-it notes. We are mistaking the retrieval of isolated assertions for the synthesis of contextualized understanding.</p><p class="paragraph" style="text-align:left;">Think of Leonard Shelby in Christopher Nolan&#39;s film <b><i>Memento</i></b>. Suffering from anterograde amnesia, Leonard can&#39;t form new memories. To function, he relies on a system of Polaroids, handwritten notes, and even tattoos – externalized fragments representing supposed facts about his world and his mission to find his wife&#39;s killer.</p><p class="paragraph" style="text-align:left;"><b>Today&#39;s RAG systems often operate eerily like Leonard.</b> They receive a query and consult their &quot;Polaroids&quot; – the vector embeddings of text chunks. 
They retrieve the chunk that seems most relevant based on similarity, a fragment like &quot;<i>Don&#39;t believe his lies</i>&quot; or &quot;<i>Find John G</i>.&quot; Unfortunately, like Leonard, these RAG systems lack the <i>overarching context</i> and the <b>relationships</b> between these fragments. They don&#39;t inherently <i>know</i> how the note about John G. relates to the warning about lies, or the sequence of events that led to these assertions being recorded.</p><p class="paragraph" style="text-align:left;"><b>And this fragmentation is where disaster strikes.</b> Leonard, working only with disconnected clues, makes fatal misinterpretations. He trusts the wrong people, acts on incomplete information, and is manipulated because he cannot form a cohesive, interconnected understanding of his reality. His &quot;memory,&quot; composed of isolated data points, leads him not to truth, but deeper into confusion, madness, and <b>catastrophe</b>.</p><p class="paragraph" style="text-align:left;">An AI that can quote a source but doesn&#39;t inherently grasp <i>how that source connects</i> to related concepts or <i>whether that source is trustworthy</i> isn&#39;t remembering – it&#39;s echoing fragments, just like Leonard reading his own fragmented notes.</p><p class="paragraph" style="text-align:left;">This fundamental flaw leads to confident hallucinations, an inability to reason deeply about causality, and systems that can be misled. We&#39;re building articulate regurgitators, not truly knowledgeable thinkers.</p><p class="paragraph" style="text-align:left;">We need to stop celebrating glorified search indices as &quot;memory&quot; and start demanding systems capable of building actual <i>knowledge</i>. 
Until then, we&#39;re just building better mimics, doomed to repeat the mistakes born from disconnected understanding.</p><p class="paragraph" style="text-align:left;"><i>Next time in Part 2: We dissect why this fragment-recall approach fundamentally breaks down when AI needs to reason, synthesize, or understand causality.</i></p><p class="paragraph" style="text-align:left;">Does your AI feel like it <i>knows</i> things, or just <i>recalls</i> text like Leonard Shelby reading his notes? Reach out to us <b>below</b>:</p><ul><li><p class="paragraph" style="text-align:left;">🌟 <b>TrustGraph on </b><span style="color:rgb(12, 74, 110);"><b><a class="link" href="https://github.com/trustgraph-ai/trustgraph?utm_source=blog.trustgraph.ai&utm_medium=referral&utm_campaign=how-to-use-ai-to-see-data-in-3d" target="_blank" rel="noopener noreferrer nofollow" style="color: rgb(12, 74, 110)">GitHub</a></b></span> 🧠 </p></li><li><p class="paragraph" style="text-align:left;"><b>Join the </b><span style="color:rgb(12, 74, 110);"><b><a class="link" href="https://discord.gg/sQMwkRz5GX?utm_source=blog.trustgraph.ai&utm_medium=referral&utm_campaign=how-to-use-ai-to-see-data-in-3d" target="_blank" rel="noopener noreferrer nofollow" style="color: rgb(12, 74, 110)">Discord</a></b></span><b> </b>👋 </p></li></ul></div><div class='beehiiv__footer'><br class='beehiiv__footer__break'><hr class='beehiiv__footer__line'><a target="_blank" class="beehiiv__footer_link" style="text-align: center;" href="https://www.beehiiv.com/?utm_campaign=5e6aa2f7-1889-4f06-9a1a-02f6814dc551&utm_medium=post_rss&utm_source=trustgraph">Powered by beehiiv</a></div></div>
  ]]></content:encoded>
</item>

      <item>
  <title>The Symphony of the Holistic Agentic Platform</title>
  <description>Moving Beyond the LLM Monolith</description>
      <enclosure url="https://media.beehiiv.com/cdn-cgi/image/fit=scale-down,format=auto,onerror=redirect,quality=80/uploads/asset/file/535866d9-78ab-4c4d-832c-2360539d9e78/image_fx__2_.jpg" length="311771" type="image/jpeg"/>
  <link>https://blog.trustgraph.ai/p/the-symphony-of-the-ai-platform</link>
  <guid isPermaLink="true">https://blog.trustgraph.ai/p/the-symphony-of-the-ai-platform</guid>
  <pubDate>Mon, 14 Apr 2025 17:58:03 +0000</pubDate>
  <atom:published>2025-04-14T17:58:03Z</atom:published>
    <dc:creator>Daniel Davis</dc:creator>
    <category><![CDATA[Trustgraph]]></category>
    <category><![CDATA[System]]></category>
    <category><![CDATA[Ai]]></category>
    <category><![CDATA[Platform]]></category>
    <category><![CDATA[Synaptic Automation System]]></category>
  <content:encoded><![CDATA[
    <div class='beehiiv'><style>
  .bh__table, .bh__table_header, .bh__table_cell { border: 1px solid #C0C0C0; }
  .bh__table_cell { padding: 5px; background-color: #FFFFFF; }
  .bh__table_cell p { color: #2D2D2D; font-family: 'Helvetica',Arial,sans-serif !important; overflow-wrap: break-word; }
  .bh__table_header { padding: 5px; background-color:#F1F1F1; }
  .bh__table_header p { color: #2A2A2A; font-family:'Trebuchet MS','Lucida Grande',Tahoma,sans-serif !important; overflow-wrap: break-word; }
</style><div class='beehiiv__body'><h2 class="heading" style="text-align:left;" id="beyond-the-monolith-why-the-future-">Beyond the Monolith: Why the Future of AI is a Symphony, Not a Soloist</h2><p class="paragraph" style="text-align:left;">For years, the science fiction dream and much of the AI hype cycle have revolved around a singular goal: building the one giant Artificial General Intelligence (AGI). A single, monolithic model capable of learning, reasoning, and solving <i>any</i> problem like a human. It&#39;s a captivating vision, but is it the <i>only</i> path forward? Or even the <i>right</i> one for practical, powerful, and responsible AI?</p><p class="paragraph" style="text-align:left;">It&#39;s not. The pursuit of a single, all-encompassing model overlooks the messy, beautiful complexity of intelligence itself and ignores the profound limitations inherent in monolithic approaches. The true future of advanced machine intelligence lies not in a singular soloist, but in a <b>symphony of tightly interconnected, specialized software components working in harmony.</b></p><p class="paragraph" style="text-align:left;"><b>The Cracks in the Monolith</b></p><p class="paragraph" style="text-align:left;">Trying to build a single AI model to &quot;solve&quot; human-level intelligence faces immense hurdles:</p><ul><li><p class="paragraph" style="text-align:left;"><b>Garbage In, Garbage Out:</b> Training such models requires unfathomable amounts of data and human intervention to evaluate the quality of the inputs and outputs, an evaluation that is itself subject to individual biases.</p></li><li><p class="paragraph" style="text-align:left;"><b>Brittleness & Lack of Nuance:</b> A single model can struggle with specialized tasks outside its core training distribution. 
It&#39;s the ultimate &quot;jack of all trades, master of none,&quot; potentially failing when encountering edge cases.</p></li><li><p class="paragraph" style="text-align:left;"><b>Operational Nightmares:</b> Deploying, managing, updating, and securing a single, gigantic model across diverse environments (cloud, on-prem, edge) is incredibly complex and inefficient. How do you provide fine-grained updates or tailored capabilities?</p></li><li><p class="paragraph" style="text-align:left;"><b>Explainability & Auditability Black Holes:</b> Understanding <i>why</i> a monolithic model made a specific decision can be nearly impossible, hindering trust, debugging, and crucial safety checks.</p></li><li><p class="paragraph" style="text-align:left;"><b>Concentration of Power & Risk:</b> Placing all intelligent capabilities into a single entity creates immense concentrations of power and systemic risk.</p></li></ul><p class="paragraph" style="text-align:left;"><b>The Rise of the System: Intelligence as an Interconnected Network</b></p><p class="paragraph" style="text-align:left;">Nature offers a better blueprint. The human brain isn&#39;t a homogenous blob; it&#39;s a highly specialized, interconnected system of regions communicating dynamically. Complex tasks emerge from the coordinated activity of these specialized parts. Similarly, the future of advanced AI lies in building <b>systems</b> that mirror this principle.</p><p class="paragraph" style="text-align:left;">Imagine an AI architecture that functions less like a single giant brain and more like a biological nervous system – what we might call a <b>Synaptic Automation System</b>. This system possesses key characteristics:</p><ul><li><p class="paragraph" style="text-align:left;"><b>Modular Expertise:</b> Instead of one model knowing everything, the system leverages specialized <b>&quot;Intelligent Cores&quot;</b> – components encapsulating deep expertise, algorithms, or processing for specific domains. 
These cores are the seeds of adaptable skill.</p></li><li><p class="paragraph" style="text-align:left;"><b>Dynamic Synthesis & Deployment:</b> The system doesn&#39;t just run pre-built applications. Based on the available Cores and the task at hand, it <i>dynamically generates and deploys</i> the necessary processing modules on the fly. Think of it assembling a specialized task force exactly when needed.</p></li><li><p class="paragraph" style="text-align:left;"><b>Emergent Learning & Adaptation:</b> Faced with unique situations, the system doesn&#39;t just rely on past training. It can generate <i>custom learning modules</i> to analyze new data, identify patterns, and evolve its understanding over time through integrated feedback loops, constantly refining its capabilities.</p></li><li><p class="paragraph" style="text-align:left;"><b>Inherent Connectivity & Communication:</b> Like synapses firing, components constantly communicate, sharing context and triggering actions across the system. This allows for holistic reasoning and complex workflow execution far beyond simple pipelines.</p></li><li><p class="paragraph" style="text-align:left;"><b>Transparency & Trust:</b> Crucially, because the system generates plans and modules dynamically, it can also be designed to make these processes transparent. The &#39;reasoning&#39; behind an automated workflow can be audited, allowing for verification, compliance, and crucial safety checks.</p></li><li><p class="paragraph" style="text-align:left;"><b>Safety First:</b> Built-in mechanisms constantly monitor the system&#39;s actions, detecting potential harms or deviations from desired boundaries, enabling adaptive responses to ensure responsible operation.</p></li><li><p class="paragraph" style="text-align:left;"><b>Universal Presence:</b> This entire intelligent system isn&#39;t locked to specific hardware. 
It&#39;s designed as a fabric that can be deployed consistently across any cloud, bare-metal servers, or edge devices, bringing intelligence wherever it&#39;s needed.</p></li></ul><p class="paragraph" style="text-align:left;"><b>TrustGraph: Embodying the Synaptic Vision</b></p><p class="paragraph" style="text-align:left;">This isn&#39;t just theory. Platforms like <b>TrustGraph</b> are pioneering this <b>Synaptic Automation System</b> approach. By focusing on dynamically connecting modular Intelligent Cores, synthesizing processes on demand, enabling continuous learning through feedback, ensuring auditability and safety, and running universally across infrastructures, TrustGraph demonstrates the power of this interconnected model over the monolithic dream.</p><p class="paragraph" style="text-align:left;"><b>The Symphony Takes the Stage</b></p><p class="paragraph" style="text-align:left;">The future of impactful AI won&#39;t be a single, monolithic oracle attempting to know everything. It will be a dynamic, adaptable, and <b>interconnected system</b> – a symphony of specialized components working together seamlessly. This approach offers a path towards more scalable, resilient, trustworthy, and ultimately more powerful machine intelligence capable of tackling the world&#39;s complex challenges. 
It’s time to move beyond the monolith and embrace the power of the network.</p><ul><li><p class="paragraph" style="text-align:left;">🌟 <b>TrustGraph on </b><span style="color:rgb(12, 74, 110);"><a class="link" href="https://github.com/trustgraph-ai/trustgraph?utm_source=blog.trustgraph.ai&utm_medium=referral&utm_campaign=how-to-use-ai-to-see-data-in-3d" target="_blank" rel="noopener noreferrer nofollow" style="color: rgb(12, 74, 110)"><b>GitHub</b></a></span> 🧠 </p></li><li><p class="paragraph" style="text-align:left;"><b>Join the </b><span style="color:rgb(12, 74, 110);"><a class="link" href="https://discord.gg/sQMwkRz5GX?utm_source=blog.trustgraph.ai&utm_medium=referral&utm_campaign=how-to-use-ai-to-see-data-in-3d" target="_blank" rel="noopener noreferrer nofollow" style="color: rgb(12, 74, 110)"><b>Discord</b></a></span><b> </b>👋 </p></li><li><p class="paragraph" style="text-align:left;"><b>Watch tutorials on </b><span style="color:inherit;"><a class="link" href="https://www.youtube.com/@TrustGraphAI?sub_confirmation=1&utm_source=blog.trustgraph.ai&utm_medium=newsletter&utm_campaign=yes-you-still-need-rag" target="_blank" rel="noopener noreferrer nofollow" style="color: rgb(12, 74, 110)"><b>YouTube</b></a></span> 📺️ </p></li></ul></div><div class='beehiiv__footer'><br class='beehiiv__footer__break'><hr class='beehiiv__footer__line'><a target="_blank" class="beehiiv__footer_link" style="text-align: center;" href="https://www.beehiiv.com/?utm_campaign=89ebd85e-549d-40ba-b565-73afc8a39e63&utm_medium=post_rss&utm_source=trustgraph">Powered by beehiiv</a></div></div>
  ]]></content:encoded>
</item>

      <item>
  <title>Stop Thinking AI Agents, Start Engineering Autonomous Knowledge Operations</title>
  <description>Solving the Demo-to-Production Problem</description>
      <enclosure url="https://media.beehiiv.com/cdn-cgi/image/fit=scale-down,format=auto,onerror=redirect,quality=80/uploads/asset/file/a7df12ed-cff1-4389-b19a-212bd38f517d/image_fx.jpg" length="390713" type="image/jpeg"/>
  <link>https://blog.trustgraph.ai/p/autonomous-knowledge-operations</link>
  <guid isPermaLink="true">https://blog.trustgraph.ai/p/autonomous-knowledge-operations</guid>
  <pubDate>Wed, 09 Apr 2025 19:16:09 +0000</pubDate>
  <atom:published>2025-04-09T19:16:09Z</atom:published>
    <dc:creator>Daniel Davis</dc:creator>
    <category><![CDATA[Autoknops]]></category>
    <category><![CDATA[Trustgraph]]></category>
    <category><![CDATA[Ai]]></category>
    <category><![CDATA[Knowledge Ops]]></category>
    <category><![CDATA[Agents]]></category>
  <content:encoded><![CDATA[
    <div class='beehiiv'><style>
  .bh__table, .bh__table_header, .bh__table_cell { border: 1px solid #C0C0C0; }
  .bh__table_cell { padding: 5px; background-color: #FFFFFF; }
  .bh__table_cell p { color: #2D2D2D; font-family: 'Helvetica',Arial,sans-serif !important; overflow-wrap: break-word; }
  .bh__table_header { padding: 5px; background-color:#F1F1F1; }
  .bh__table_header p { color: #2A2A2A; font-family:'Trebuchet MS','Lucida Grande',Tahoma,sans-serif !important; overflow-wrap: break-word; }
</style><div class='beehiiv__body'><h2 class="heading" style="text-align:left;" id="beyond-the-buzz-why-autonomous-know">Beyond the Buzz: Why Autonomous Knowledge Operations Matters More Than Just AI Agents</h2><p class="paragraph" style="text-align:left;">The tech world has been ablaze with talk of AI agents. We see demos of agents booking flights, writing code snippets, or summarizing articles. It&#39;s exciting, capturing the imagination with glimpses of AI performing tasks that previously required human operators. But as we move from demos to deployment, simply thinking in terms of &quot;agents&quot; falls short.</p><p class="paragraph" style="text-align:left;">The real paradigm shift isn&#39;t just about creating smarter <i>tools</i> (agents); it&#39;s about building <i>systems</i> capable of continuous, reliable, and goal-directed operations that are powered by deep contextual understanding. This is the philosophy of <b>TrustGraph’s</b> <a class="link" href="https://github.com/trustgraph-ai/trustgraph?utm_source=blog.trustgraph.ai&utm_medium=newsletter&utm_campaign=stop-thinking-ai-agents-start-engineering-autonomous-knowledge-operations" target="_blank" rel="noopener noreferrer nofollow">Autonomous Knowledge Operations</a>.</p><p class="paragraph" style="text-align:left;"><b>What&#39;s the Difference? 
Isn&#39;t an Agent Autonomous?</b></p><p class="paragraph" style="text-align:left;">An <b>AI Agent</b>, in its common definition today, is often:</p><ul><li><p class="paragraph" style="text-align:left;"><b>Task-Oriented:</b> Designed to perform a specific, often short-lived task (e.g., answer a question, draft an email).</p></li><li><p class="paragraph" style="text-align:left;"><b>Reactive:</b> Primarily responds to direct input or triggers.</p></li><li><p class="paragraph" style="text-align:left;"><b>Component-Level:</b> Can be thought of as a sophisticated function call or a smart script.</p></li><li><p class="paragraph" style="text-align:left;"><b>Potentially Isolated & Knowledge-Poor:</b> Might operate with limited context or struggle to access and reason over the complex web of information within an enterprise.</p></li></ul><p class="paragraph" style="text-align:left;">While powerful, these agents often lack the deep knowledge integration, robustness, persistence, and manageability needed for mission-critical business functions. Running a complex business process isn&#39;t like asking an agent to write a poem; it requires continuous awareness, adaptation, reliability, and, critically, intelligent use of relevant <b>knowledge</b>.</p><p class="paragraph" style="text-align:left;"><a class="link" href="https://github.com/trustgraph-ai/trustgraph?utm_source=blog.trustgraph.ai&utm_medium=newsletter&utm_campaign=stop-thinking-ai-agents-start-engineering-autonomous-knowledge-operations" target="_blank" rel="noopener noreferrer nofollow"><b>Autonomous Knowledge Operations</b></a> is a broader, more systemic approach where autonomy is directly fueled by intelligent information:</p><ul><li><p class="paragraph" style="text-align:left;"><b>Goal-Oriented & Continuous:</b> Focused on achieving and maintaining a desired state or objective over time. 
Action is driven by understanding the goal within its knowledge context.</p></li><li><p class="paragraph" style="text-align:left;"><b>Proactive, Persistent & Knowledge-Driven:</b> Actively monitors, plans, and acts by constantly interpreting its environment through a rich knowledge base. It runs <i>continuously</i>, learning and adapting.</p></li><li><p class="paragraph" style="text-align:left;"><b>System-Level:</b> Encompasses not just agents but the entire infrastructure, <b>knowledge pipelines (RAG, KG, VectorDBs)</b>, integration points, and feedback loops required for sustained, intelligent operation.</p></li><li><p class="paragraph" style="text-align:left;"><b>Fueled by Deep Knowledge & Context:</b> Leverages rich, relevant, and timely information drawn from enterprise sources. This requires sophisticated <b>RAG pipelines with both vector databases </b><i><b>and</b></i><b> knowledge graphs</b>.</p></li><li><p class="paragraph" style="text-align:left;"><b>Observable & Manageable:</b> Designed with built-in monitoring, logging, tracing, and controls to ensure reliability, understand the knowledge-driven behavior, and allow for intervention or adjustments.</p></li><li><p class="paragraph" style="text-align:left;"><b>Reliable & Scalable:</b> Built on enterprise-grade infrastructure capable of handling failures, scaling resources, and meeting performance demands for both computation <i>and</i> knowledge processing.</p></li></ul><p class="paragraph" style="text-align:left;"><b>Why This Shift in Thinking Matters</b></p><p class="paragraph" style="text-align:left;">Focusing solely on &quot;agents&quot; leads to several potential pitfalls in enterprise adoption:</p><ol start="1"><li><p class="paragraph" style="text-align:left;"><b>The &quot;Demo-to-Production&quot; Gap:</b> Cool agent demos often bypass the hard parts: robust <b>knowledge integration</b>, error handling, scalability, security, and monitoring needed for real-world value.</p></li><li><p 
class="paragraph" style="text-align:left;"><b>Context Starvation:</b> Agents without deep, structured context – the kind derived from integrated Knowledge Graphs combined with Vector DBs – struggle with complex reasoning and nuanced tasks common in business. This is a <i>knowledge access</i> problem.</p></li><li><p class="paragraph" style="text-align:left;"><b>Infrastructure Nightmare:</b> Managing dozens of agents <i>and</i> their disparate, potentially inconsistent knowledge sources, ensuring reliability, and providing consistent data access is an operational burden.</p></li><li><p class="paragraph" style="text-align:left;"><b>Lack of Trust:</b> How do you monitor, debug, or guarantee the performance of agents acting on potentially incomplete or misunderstood information? Observability into the <i>knowledge retrieval and reasoning process</i> is non-negotiable.</p></li></ol><p class="paragraph" style="text-align:left;"><b>Building for </b><a class="link" href="https://github.com/trustgraph-ai/trustgraph?utm_source=blog.trustgraph.ai&utm_medium=newsletter&utm_campaign=stop-thinking-ai-agents-start-engineering-autonomous-knowledge-operations" target="_blank" rel="noopener noreferrer nofollow"><b>Autonomous Knowledge Operations</b></a><b>: The TrustGraph Philosophy</b></p><p class="paragraph" style="text-align:left;">This is precisely the philosophy behind <b>TrustGraph</b>. We realized that the conversation needed to evolve beyond just the agent itself to encompass the entire knowledge-driven system. 
TrustGraph is an <a class="link" href="https://github.com/trustgraph-ai/trustgraph?utm_source=blog.trustgraph.ai&utm_medium=newsletter&utm_campaign=stop-thinking-ai-agents-start-engineering-autonomous-knowledge-operations" target="_blank" rel="noopener noreferrer nofollow"><b>Autonomous Knowledge Operations</b></a><b> Platform</b> designed to provide the foundational elements missing from simple agent frameworks:</p><ul><li><p class="paragraph" style="text-align:left;"><b>Enterprise-Grade Infrastructure:</b> It provides the scalable, reliable backend needed to run operations continuously, managing both computation and knowledge flows.</p></li><li><p class="paragraph" style="text-align:left;"><b>Integrated RAG (KG + VectorDB):</b> It <b>automates the deployment of sophisticated RAG pipelines</b>, acknowledging that deep context and reliable autonomy stem from leveraging <i>both</i> semantic similarity (vectors) and structured relationships (knowledge graphs).</p></li><li><p class="paragraph" style="text-align:left;"><b>Unified LLM Access:</b> It abstracts the complexity of dealing with multiple LLM providers, allowing the system to focus on applying the best reasoning to the available knowledge.</p></li><li><p class="paragraph" style="text-align:left;"><b>Full Observability Stack:</b> It builds in logging, metrics, and tracing from the ground up, including insights into the RAG process, because trusting autonomous systems requires understanding <i>how</i> they arrive at decisions based on knowledge.</p></li></ul><p class="paragraph" style="text-align:left;">By focusing on the <i>knowledge-driven operation</i> rather than just the <i>agent</i>, we can build systems that don&#39;t just perform tasks but <b>achieve persistent business outcomes reliably, efficiently, and intelligently</b>.</p><p class="paragraph" style="text-align:left;"><b>The Future is Systemic and Knowledge-Rich</b></p><p class="paragraph" style="text-align:left;">AI agents are a vital 
<i>component</i> of the future. But the true transformation lies in weaving these components into robust, <b>knowledge-aware</b>, observable, and continuous <a class="link" href="https://github.com/trustgraph-ai/trustgraph?utm_source=blog.trustgraph.ai&utm_medium=newsletter&utm_campaign=stop-thinking-ai-agents-start-engineering-autonomous-knowledge-operations" target="_blank" rel="noopener noreferrer nofollow"><b>Autonomous Knowledge Operations</b></a>. This requires a shift in mindset and tooling – moving from building smart tools to engineering intelligent, self-managing systems powered by deep understanding. That&#39;s the future we&#39;re building towards with <b>TrustGraph</b>.</p><ul><li><p class="paragraph" style="text-align:left;">🌟 <b>TrustGraph on </b><span style="color:rgb(12, 74, 110);"><span style="text-decoration:underline;"><i><b><a class="link" href="https://github.com/trustgraph-ai/trustgraph?utm_source=blog.trustgraph.ai&utm_medium=referral&utm_campaign=how-to-use-ai-to-see-data-in-3d" target="_blank" rel="noopener noreferrer nofollow" style="color: rgb(12, 74, 110)">GitHub</a></b></i></span></span> 🚀 </p></li><li><p class="paragraph" style="text-align:left;"><b>Join the </b><span style="color:rgb(12, 74, 110);"><span style="text-decoration:underline;"><i><b><a class="link" href="https://discord.gg/sQMwkRz5GX?utm_source=blog.trustgraph.ai&utm_medium=referral&utm_campaign=how-to-use-ai-to-see-data-in-3d" target="_blank" rel="noopener noreferrer nofollow" style="color: rgb(12, 74, 110)">Discord</a></b></i></span></span><b> </b>👋 </p></li><li><p class="paragraph" style="text-align:left;"><b>Watch tutorials on </b><span style="color:inherit;"><span style="text-decoration:underline;"><i><b><a class="link" href="https://www.youtube.com/@TrustGraphAI?sub_confirmation=1&utm_source=blog.trustgraph.ai&utm_medium=newsletter&utm_campaign=yes-you-still-need-rag" target="_blank" rel="noopener noreferrer nofollow" style="color: rgb(12, 74, 
110)">YouTube</a></b></i></span></span> 📺️ </p></li></ul></div><div class='beehiiv__footer'><br class='beehiiv__footer__break'><hr class='beehiiv__footer__line'><a target="_blank" class="beehiiv__footer_link" style="text-align: center;" href="https://www.beehiiv.com/?utm_campaign=ff049985-8da5-41d3-b449-9ca89bab2e2d&utm_medium=post_rss&utm_source=trustgraph">Powered by beehiiv</a></div></div>
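The "Integrated RAG (KG + VectorDB)" idea can be sketched in a few lines of Python. This is a minimal toy illustration, not TrustGraph's actual API: the corpus, the KG_EDGES graph, and the bag-of-words embed function (standing in for a real embedding model) are all hypothetical stand-ins.

```python
import math
from collections import Counter

# Toy corpus: id -> text chunk.
CHUNKS = {
    "c1": "The MRT measures speech intelligibility for cockpit audio systems.",
    "c2": "Airworthiness certification requires verified communication paths.",
    "c3": "Vector databases rank passages by semantic similarity.",
}

# Minimal knowledge graph: chunk id -> (predicate, related chunk id) edges.
KG_EDGES = {
    "c1": [("verified_by", "c2")],
    "c2": [("supports", "c1")],
    "c3": [],
}

def embed(text: str) -> Counter:
    """Bag-of-words 'embedding' stand-in for a real encoder."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def hybrid_retrieve(query: str, top_k: int = 1) -> list[str]:
    """Vector search first, then expand each hit one hop through the graph."""
    q = embed(query)
    ranked = sorted(CHUNKS, key=lambda cid: cosine(q, embed(CHUNKS[cid])),
                    reverse=True)
    hits = ranked[:top_k]
    expanded = list(hits)
    for cid in hits:
        for _predicate, neighbor in KG_EDGES.get(cid, []):
            if neighbor not in expanded:
                expanded.append(neighbor)
    return expanded

context = hybrid_retrieve("How is speech intelligibility verified?")
```

The design point is the final loop: graph expansion pulls in related chunks that a pure similarity ranking would miss, which is why combining structured relationships with vector search yields deeper context than either alone.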
  ]]></content:encoded>
</item>

      <item>
  <title>Yes, You Still Need RAG</title>
  <description>Just Stating the Obvious</description>
      <enclosure url="https://media.beehiiv.com/cdn-cgi/image/fit=scale-down,format=auto,onerror=redirect,quality=80/uploads/asset/file/f0ddcb45-ab93-4cc3-9cd6-0adefdb10fa0/image_fx___5_.jpg" length="432847" type="image/jpeg"/>
  <link>https://blog.trustgraph.ai/p/yes-you-still-need-rag</link>
  <guid isPermaLink="true">https://blog.trustgraph.ai/p/yes-you-still-need-rag</guid>
  <pubDate>Tue, 28 Jan 2025 15:54:00 +0000</pubDate>
  <atom:published>2025-01-28T15:54:00Z</atom:published>
    <dc:creator>Daniel Davis</dc:creator>
    <category><![CDATA[Graph Rag]]></category>
    <category><![CDATA[Rag]]></category>
    <category><![CDATA[Agentic Graph Rag]]></category>
  <content:encoded><![CDATA[
    <div class='beehiiv'><style>
  .bh__table, .bh__table_header, .bh__table_cell { border: 1px solid #C0C0C0; }
  .bh__table_cell { padding: 5px; background-color: #FFFFFF; }
  .bh__table_cell p { color: #2D2D2D; font-family: 'Helvetica',Arial,sans-serif !important; overflow-wrap: break-word; }
  .bh__table_header { padding: 5px; background-color:#F1F1F1; }
  .bh__table_header p { color: #2A2A2A; font-family:'Trebuchet MS','Lucida Grande',Tahoma,sans-serif !important; overflow-wrap: break-word; }
</style><div class='beehiiv__body'><p class="paragraph" style="text-align:left;">Once again, AI hype trains are full steam ahead as people overload DeepSeek to try out <b>DeepSeek-R1</b>. In fact, <b>DeepSeek-R1</b> has generated so much hype that Nvidia’s stock shed nearly 17% of its value. No matter how impressive <b>DeepSeek-R1</b> is, yes, you still need <b>RAG</b>.</p><p class="paragraph" style="text-align:left;">No, I’m not going to tell you why you still need RAG. I’m going to show you. What’s a good use case for this experiment? We need a highly complex document that describes a topic that requires substantial human effort to understand and is publicly available.</p><p class="paragraph" style="text-align:left;"><a class="link" href="https://quicksearch.dla.mil/Transient/CF898F465040412391316695584A70DA.pdf?utm_source=blog.trustgraph.ai&utm_medium=newsletter&utm_campaign=yes-you-still-need-rag" target="_blank" rel="noopener noreferrer nofollow">MIL-HDBK-516C</a> to the rescue! Never thought I’d say that. <a class="link" href="https://quicksearch.dla.mil/Transient/CF898F465040412391316695584A70DA.pdf?utm_source=blog.trustgraph.ai&utm_medium=newsletter&utm_campaign=yes-you-still-need-rag" target="_blank" rel="noopener noreferrer nofollow">MIL-HDBK-516C</a> is a guide for satisfying airworthiness requirements for military aircraft. It is highly complex; people (like myself in a past life) spend months, if not years, working with it; and it is publicly available. Check. 
✅ </p><p class="paragraph" style="text-align:left;">Some quick stats on the doc:</p><ul><li><p class="paragraph" style="text-align:left;">Pages: <b>527</b></p></li><li><p class="paragraph" style="text-align:left;">Tokens: ~<b>550k</b></p></li><li><p class="paragraph" style="text-align:left;">Complexity: <b>Very High</b></p></li><li><p class="paragraph" style="text-align:left;">Format: <b>PDF</b></p></li></ul><p class="paragraph" style="text-align:left;">While there are nearly endless questions we could ask about this document, let’s choose one that has a concrete expected response.</p><p class="paragraph" style="text-align:center;"><i>Q: How does speech intelligibility impact airworthiness?</i></p><p class="paragraph" style="text-align:left;">The first issue is that if I ask <b>DeepSeek-R1</b> the test question, the model talks about airworthiness in a broader sense. To help narrow the model’s focus, I appended the question with “<i>per MIL-HDBK-516C</i>”.</p><p class="paragraph" style="text-align:left;"><b>DeepSeek-R1</b> response: </p><div class="blockquote"><blockquote class="blockquote__quote"><p class="paragraph" style="text-align:left;">MIL-HDBK-516C, <i>Department of Defense Handbook: Airworthiness Certification Criteria</i>, establishes standards for ensuring military aircraft safety and mission readiness. Speech intelligibility directly impacts airworthiness by affecting crew coordination, emergency response, and mission execution. Below is a breakdown of how speech intelligibility intersects with MIL-HDBK-516C requirements:</p><p class="paragraph" style="text-align:left;"><b>1. 
Communication System Performance (Section 8.4.2.3.3.1)</b> </p><p class="paragraph" style="text-align:left;">MIL-HDBK-516C mandates that <b>communication systems</b> (e.g., intercoms, radios, public address systems) must:  </p><p class="paragraph" style="text-align:left;">   - <b>Operate effectively in all expected environments</b> (e.g., high noise, vibration, electromagnetic interference).  </p><p class="paragraph" style="text-align:left;">   - Ensure <b>clear voice transmission</b> between crew members, air traffic control (ATC), and ground personnel.  </p><p class="paragraph" style="text-align:left;">   - Meet <b>intelligibility thresholds</b> to prevent misunderstandings during critical operations (e.g., takeoff, landing, emergencies).  </p><p class="paragraph" style="text-align:left;">Poor speech intelligibility could lead to non-compliance with these criteria, jeopardizing airworthiness certification.</p><p class="paragraph" style="text-align:left;"><b>2. Human Factors Engineering (Section 8.4.1.4.1)</b> </p><p class="paragraph" style="text-align:left;">The handbook emphasizes <b>human-system integration</b>:  </p><p class="paragraph" style="text-align:left;">   - <b>Crew workload and situational awareness</b> depend on clear communication.  </p><p class="paragraph" style="text-align:left;">   - Systems must account for <b>acoustic environments</b> (e.g., cockpit noise levels) to ensure crew members can hear and interpret commands.  </p><p class="paragraph" style="text-align:left;">   - Design must align with <b>MIL-STD-1472</b> (Human Engineering), which specifies noise limits and speech intelligibility requirements (e.g., using the Articulation Index or Speech Transmission Index).  </p><p class="paragraph" style="text-align:left;"><b>3. Audio Alerting Systems (Section 8.4.2.3.3.2)</b> </p><p class="paragraph" style="text-align:left;">   - Voice warnings and alerts must <b>not interfere with other communications</b>.  
</p><p class="paragraph" style="text-align:left;">   - Intelligibility requirements ensure that critical alerts (e.g., stall warnings, system failures) are distinguishable from routine speech.  </p><p class="paragraph" style="text-align:left;"><b>4. Environmental Testing (Section 8.4.1.3.2.1)</b> </p><p class="paragraph" style="text-align:left;">Aircraft systems must be tested under <b>worst-case operational conditions</b>, including:  </p><p class="paragraph" style="text-align:left;">   - High noise levels (e.g., engine roar, wind, rotor blades).  </p><p class="paragraph" style="text-align:left;">   - Stressful scenarios (e.g., combat, emergencies).  </p><p class="paragraph" style="text-align:left;">   - Speech intelligibility tests (e.g., using standardized phrases or metrics like STI/AI) are required to validate performance.  </p><p class="paragraph" style="text-align:left;"><b>Safety and Emergency Procedures (Section 8.4.1.5.1)</b> </p><p class="paragraph" style="text-align:left;">   - During emergencies (e.g., fires, cabin decompression), <b>evacuation commands</b> must be clearly understood by all personnel.  </p><p class="paragraph" style="text-align:left;">   - MIL-HDBK-516C requires redundant communication paths to ensure intelligibility even if primary systems fail.  </p><p class="paragraph" style="text-align:left;"><b>Interoperability (Section 8.4.2.3.3.4)</b> </p><p class="paragraph" style="text-align:left;">   - Communication systems must interface seamlessly with <b>ground control</b>, other aircraft, and NATO allies.  </p><p class="paragraph" style="text-align:left;">   - Poor intelligibility due to incompatible systems could violate interoperability requirements.  
</p><p class="paragraph" style="text-align:left;"><b>Impact on Airworthiness Certification</b> </p><p class="paragraph" style="text-align:left;">If speech intelligibility fails to meet MIL-HDBK-516C standards, the aircraft may be deemed <b>non-airworthy</b> due to:  </p><p class="paragraph" style="text-align:left;">   - Increased risk of crew errors or delayed emergency responses.  </p><p class="paragraph" style="text-align:left;">   - Non-compliance with human factors or communication system criteria.  </p><p class="paragraph" style="text-align:left;">   - Inability to safely execute missions in contested or noisy environments.  </p><p class="paragraph" style="text-align:left;"><b>Key Testing & Compliance Tools</b> </p><p class="paragraph" style="text-align:left;">   - <b>Articulation Index (AI)</b> or <b>Speech Transmission Index (STI)</b>: Quantitative metrics to measure intelligibility.  </p><p class="paragraph" style="text-align:left;">   - <b>MIL-STD-1474</b> (Noise Limits): Defines maximum permissible noise levels for speech zones.  </p><p class="paragraph" style="text-align:left;">   - <b>MIL-STD-882E</b>: Risk assessment for communication-related hazards.  </p><p class="paragraph" style="text-align:left;"><b>Conclusion</b> </p><p class="paragraph" style="text-align:left;">Per MIL-HDBK-516C, speech intelligibility is a <b>critical safety factor</b> in airworthiness. Compliance requires rigorous design, testing, and validation of communication systems to ensure clarity in all operational scenarios. Failure to meet these standards directly impacts mission safety and certification eligibility.</p><figcaption class="blockquote__byline"></figcaption></blockquote></div><p class="paragraph" style="text-align:left;">If you’ve never worked with <b>MIL-HDBK-516C</b>, you might be shocked at the progress of AI. Unfortunately, this response that <i><b>looks</b></i> very good, actually has <i><b>major</b></i> problems. 
Some of the problems:</p><ul><li><p class="paragraph" style="text-align:left;">The section numbers referenced don’t actually exist.</p></li><li><p class="paragraph" style="text-align:left;">Nowhere does <b>MIL-HDBK-516C</b> require testing under “worst-case operational conditions” including “combat”. Not only is this statement false, it is dangerously misleading. The airworthiness process exists to fully test an aircraft system <b>prior</b> to engaging in worst-case operational conditions.</p></li><li><p class="paragraph" style="text-align:left;">At no point does <b>MIL-HDBK-516C</b> reference “evacuation commands”. </p></li><li><p class="paragraph" style="text-align:left;">At no point, anywhere, in <b>MIL-HDBK-516C</b> are “NATO allies” referenced.</p></li><li><p class="paragraph" style="text-align:left;">At no point is speech intelligibility referenced as a “critical safety factor”. In fact, speech intelligibility is merely one of hundreds of criteria necessary to establish airworthiness.</p></li><li><p class="paragraph" style="text-align:left;">The most important part of the speech intelligibility criteria, the <b>Modified Rhyme Test</b>, is totally omitted from the response. The <b>Modified Rhyme Test</b>, MRT, is the primary testing method for demonstrating speech intelligibility compliance.</p></li></ul><p class="paragraph" style="text-align:left;">Are we surprised the response is so poor? No, although the degree of hallucination is a bit concerning. 
What happens when we try to upload the entire document to <b>DeepSeek-R1</b>?</p><div class="image"><img alt="" class="image__image" style="" src="https://media.beehiiv.com/cdn-cgi/image/fit=scale-down,format=auto,onerror=redirect,quality=80/uploads/asset/file/b9985965-909a-49dc-befb-601d55e54dae/DeepSeekLimit.jpg?t=1738035490"/><div class="image__source"><span class="image__source_text"><p>Well that’s a problem.</p></span></div></div><p class="paragraph" style="text-align:left;">Given the large number of tokens, what happens if we dump the entire document on a model like <b>Gemini-2.0-Flash-Exp</b> with its <b>1M</b> token limit? </p><div class="blockquote"><blockquote class="blockquote__quote"><p class="paragraph" style="text-align:left;">Based on the provided text, speech intelligibility is a factor in airworthiness certification, specifically within the context of crew systems. Here&#39;s a breakdown of how it impacts airworthiness:</p><p class="paragraph" style="text-align:left;">1. Crew Communication:</p><p class="paragraph" style="text-align:left;">Clear Communication is Essential: The handbook emphasizes that clear communication is crucial for safe operation of an air system. This includes communication between the pilot, crew members, and potentially ground personnel.</p><p class="paragraph" style="text-align:left;">Intelligibility as a Parameter: Speech intelligibility is explicitly mentioned as a parameter that needs to be defined and documented in the context of each unique air system. This means that the quality of voice communication needs to be high enough for the crew to understand each other and any instructions or warnings.</p><p class="paragraph" style="text-align:left;">2. Crew Safety and Performance:</p><p class="paragraph" style="text-align:left;">Impact on Mission Duties: The text states that any person on the air vehicle who performs mission duties is considered a crewmember. 
Therefore, clear communication is essential for the performance of these duties.</p><p class="paragraph" style="text-align:left;">Avoiding Mishaps: Poor speech intelligibility can lead to misunderstandings, misinterpretations, and ultimately, errors that could cause a mishap. This is why it is a key area of concern for airworthiness.</p><p class="paragraph" style="text-align:left;">3. Specific Standards:</p><p class="paragraph" style="text-align:left;">ANSI/ASA S3.2 and S3.5: The document references ANSI/ASA S3.2 (Method for Measuring the Intelligibility of Speech over Communication Systems) and ANSI/ASA S3.5 (Methods for Calculation of the Speech Intelligibility Index). These standards are used to measure and evaluate the intelligibility of speech over communication systems. This highlights the importance of using standardized methods to assess speech intelligibility.</p><p class="paragraph" style="text-align:left;">4. Testing and Verification:</p><p class="paragraph" style="text-align:left;">Measuring Intelligibility: The handbook implies that speech intelligibility should be measured and verified as part of the airworthiness certification process. This could involve using the standards mentioned above, or other accepted methods.</p><p class="paragraph" style="text-align:left;">Ensuring Acceptable Levels: The goal is to ensure that the speech intelligibility meets acceptable levels, so that the crew can communicate effectively and safely.</p><p class="paragraph" style="text-align:left;">In summary, speech intelligibility is a critical component of airworthiness because it directly impacts the ability of the crew to communicate effectively and safely, which is essential for the safe operation of the air system. 
The handbook provides specific standards and guidelines for measuring and ensuring adequate speech intelligibility, highlighting its importance in the certification process.</p><figcaption class="blockquote__byline"></figcaption></blockquote></div><p class="paragraph" style="text-align:left;">Not surprisingly, this response features far fewer misleading hallucinations since the full text could be provided to the model. However, the section on “Intelligibility as a Parameter” is at best <i><b>confusing</b></i> and somewhat <i><b>inaccurate</b></i>. The statement, “Speech intelligibility is explicitly mentioned as a parameter” is factually false since nowhere in the source is speech intelligibility called a parameter. Instead, the source discusses testing methods for verifying that communication systems meet the required quality standards. </p><p class="paragraph" style="text-align:left;">While this response features fewer inaccuracies, it still lacks the necessary conceptual depth to address the question. Just like <b>DeepSeek-R1</b>, <b>Gemini</b> omitted the most important information regarding speech intelligibility: the <b>Modified Rhyme Test</b> (MRT), the primary testing method for demonstrating compliance. Any response must discuss the MRT to be complete.</p><p class="paragraph" style="text-align:left;">How does Agentic Graph RAG compare? Using <b>TrustGraph</b> configured to use Gemini-2.0-Flash-Exp on an extracted cognitive core of <b>MIL-HDBK-516C</b>, the response is:</p><div class="blockquote"><blockquote class="blockquote__quote"><p class="paragraph" style="text-align:left;">Speech intelligibility is a critical factor in airworthiness, as it directly impacts the ability of flight crew and other personnel to communicate effectively, especially during critical situations. 
Several metrics are used to assess speech intelligibility, including the Modified Rhyme Test (MRT), Speech Transmission Index (STI), Speech Intelligibility Index (SII), and Articulation Index. These metrics are defined in standards like MIL-STD-1472 and ANSI/ASA S3.2 and are used to verify that communication systems meet the necessary airworthiness criteria. The MRT, specifically, is used to evaluate critical audio communication paths and requires a score of 80% (91% for Army) for operational military aircraft. Compliance is demonstrated through verification methods, often using standards like ANSI/ASA S3.2 for conducting the MRT. Therefore, ensuring adequate speech intelligibility through testing and verification is crucial for maintaining airworthiness and flight safety.</p><figcaption class="blockquote__byline"></figcaption></blockquote></div><p class="paragraph" style="text-align:left;">Our first impression is that this response is, <i><b>by far</b></i>, the shortest. However, it also has, <i><b>by far</b></i>, the <i><b>highest information density</b></i>. It is the most complete response, capturing all of the testing methodologies, including the most important: the <b>MRT</b>. In fact, this response also catches the scoring criteria for the MRT, “a score of 80% (91% for Army)”. 
</p><p class="paragraph" style="text-align:left;">But don’t take my word for it, here’s the source text’s excerpt on speech intelligibility:</p><div class="blockquote"><blockquote class="blockquote__quote"><p class="paragraph" style="text-align:left;"><i>9.2.8 Speech intelligibility.</i></p><p class="paragraph" style="text-align:left;"><i>Criterion: Verify that all audio communication systems have speech intelligibility of sufficient quality to ensure safe and effective aircraft system operation.</i></p><p class="paragraph" style="text-align:left;"><i>Standard: All critical audio communications paths are to be evaluated using the Modified Rhyme Test (MRT), in accordance with ANSI/ASA S3.2. Critical communication paths may include but are not limited to:</i></p><p class="paragraph" style="text-align:left;"><i>a. Pilot/operator audio communications with ground personnel or mission personnel.</i></p><p class="paragraph" style="text-align:left;"><i>b. Pilot/operator communications with Air Traffic Control.</i></p><p class="paragraph" style="text-align:left;"><i>c. Pilot/operator with another external air vehicle pilot/operator.</i></p><p class="paragraph" style="text-align:left;"><i>d. 
Pilots to aircrew and passengers.</i></p><p class="paragraph" style="text-align:left;"><i>An 80% (91% for Army) score on the modified rhyme test (MRT) conducted in accordance with ANSI/ASA S3.2 is considered adequate for all operational military aircraft communication paths essential to safety of flight.</i></p><p class="paragraph" style="text-align:left;"><i>In support of initial ground testing and first flight predictive measures of speech intelligibility including the Articulation Index (AI), Speech Transmission Index (STI), and Speech Intelligibility Index (SII) may, at the discretion of the airworthiness authority, be used to predict communication performance for some environments.</i></p><p class="paragraph" style="text-align:left;"><i>Method of Compliance: Speech intelligibility is verified by testing. Verify with talkers and listeners in the worst case noise environments where communications are required. Speech intelligibility scores cannot be combined for different personnel locations and flight conditions. Predictive measures are not used for full airworthiness compliance.</i></p><p class="paragraph" style="text-align:left;"><i>References: </i></p><p class="paragraph" style="text-align:left;"><i>MIL-STD-1472</i></p><p class="paragraph" style="text-align:left;"><i>MIL-STD-1474</i></p><p class="paragraph" style="text-align:left;"><i>ANSI/ASA S3.2</i></p><p class="paragraph" style="text-align:left;"><i>ANSI/ASA S3.5</i></p><figcaption class="blockquote__byline"></figcaption></blockquote></div><p class="paragraph" style="text-align:left;">Now that we see the source text, it becomes even more obvious how far <b>DeepSeek-R1</b> diverged from the ground truth and how accurate the <b>Agentic Graph RAG</b> response with <b>TrustGraph</b> is. But should we be surprised that <b>DeepSeek-R1</b> can’t answer this obscure question? No, it’s absolutely an <i><b>obscure</b></i> question, but that’s the point. 
<i><b>Obscure topics where people don’t know the answer are exactly the use cases where you would want to use AI</b></i>. Perhaps Gemini’s response with the full source text is a bit surprising. On the other hand, our testing has, <i><b>time and time again</b></i>, pointed to less than <b>10%</b> of the advertised context window being reliably usable.</p><p class="paragraph" style="text-align:left;">But this question covered only one document. <b>MIL-HDBK-516C</b> references hundreds of other documents needed to fully understand airworthiness compliance. The full set of documents would likely run to tens of millions of tokens, far exceeding any LLM’s context window. Yet RAG infrastructures, especially <b>Agentic Graph RAG</b> ones, can manage these complex problems. The ability to save these data sets as <b>Cognitive Cores</b> that can be shared and loaded on demand is fundamental to <b>TrustGraph’s Agentic Graph RAG</b> approach.</p><ul><li><p class="paragraph" style="text-align:left;">🌟 <b>TrustGraph on </b><span style="text-decoration:underline;"><i><a class="link" href="https://github.com/trustgraph-ai/trustgraph?utm_source=blog.trustgraph.ai&utm_medium=referral&utm_campaign=how-to-use-ai-to-see-data-in-3d" target="_blank" rel="noopener noreferrer nofollow" style="color: rgb(12, 74, 110)"><b>GitHub</b></a></i></span> 🚀 </p></li><li><p class="paragraph" style="text-align:left;"><b>Load the MIL-HDBK-516C </b><a class="link" href="https://github.com/trustgraph-ai/catalog/tree/master/v3?utm_source=blog.trustgraph.ai&utm_medium=newsletter&utm_campaign=yes-you-still-need-rag" target="_blank" rel="noopener noreferrer nofollow"><b>Knowledge Package</b></a> 🧠 </p></li><li><p class="paragraph" style="text-align:left;"><b>Join the </b><span style="text-decoration:underline;"><i><b><a class="link" href="https://discord.gg/sQMwkRz5GX?utm_source=blog.trustgraph.ai&utm_medium=referral&utm_campaign=how-to-use-ai-to-see-data-in-3d" target="_blank" rel="noopener noreferrer nofollow" 
style="color: rgb(12, 74, 110)">Discord</a></b></i></span><b> </b>👋 </p></li><li><p class="paragraph" style="text-align:left;"><b>Watch tutorials on </b><a class="link" href="https://www.youtube.com/@TrustGraphAI?sub_confirmation=1&utm_source=blog.trustgraph.ai&utm_medium=newsletter&utm_campaign=yes-you-still-need-rag" target="_blank" rel="noopener noreferrer nofollow"><span style="text-decoration:underline;"><i><b>YouTube</b></i></span></a> 📺️ </p></li></ul></div></div>
  ]]></content:encoded>
</item>

  </channel>
</rss>
