Enterprise Artificial Intelligence: Building Trusted AI in the Sovereign Cloud
The decade of responsible intelligence has begun — are you ready?
Enterprise AI is hitting a wall: Public models aren’t trained on your business data, but you can’t hand over your organization's proprietary information to a public system. The definitive roadmap for this new reality is Enterprise Artificial Intelligence: Building Trusted AI in the Sovereign Cloud, a new book written by OpenText leaders. Listen now to learn why this book is a must for organizations looking to move from isolated AI experiments to enterprise-grade deployments.
Learn more here: https://www.opentext.com/resources/enterprise-artificial-intelligence-building-trusted-ai
Introduction
The next era of innovation won’t be led by those who move the fastest, but by those who move the most responsibly. Every executive decision, every line of code, every AI model now carries a moral dimension – because information has power, and how that power is used will define the decade ahead.
Welcome to the Cognitive Computing Era. We're living through another major turning point in technology: the beginning of the cognitive computing era, driven by the rise of Enterprise Artificial Intelligence (EAI). What started as a digital revolution has evolved into something far more dynamic: a world where data doesn't just inform decisions, it enables technology to interpret, learn, and act.

Over the past few decades, the ground beneath the IT industry has shifted. The COVID-19 pandemic forced entire economies to digitize almost overnight. Cloud adoption, remote work, and automation leapt forward in months instead of years. The result? A completely rewired business landscape where digital infrastructure isn't just an operational layer; it's the heartbeat of the organization. Not long ago, enterprise IT meant makeshift servers running in overheated closets, where one bad line of code could crash an entire system. Today, those fragile setups have been replaced by hyperscale infrastructure: global, resilient, and endlessly scalable.

On top of that backbone, a new intelligence has emerged. Agentic artificial intelligence (AI) can reason, adapt, and respond in real time. Work that once took teams of developers, marketers, or analysts can now be orchestrated instantly across millions of users. Classic enterprise applications are transforming. The hyperscalers are laying siege to the B2B castle with low-cost services in the cloud. The castle's moat is its GUIs, workflows, and application configuration management; its wall is the historical data in the archive that is needed for training AI. The advent of agentic AI threatens to breach both defenses.

This is the new inflection point: the convergence of hyperscale computing and agentic AI. Hyperscalers and sovereign clouds have become the industrial grid of the digital economy, the power that runs everything else.
Layer AI on top, and you get a nervous system for modern business, one that doesn't just store and process data, but anticipates and acts on it. For enterprises, this isn't just another wave of innovation. It's a complete redefinition of how work, value, and intelligence flow.

Classic enterprise applications are changing fast to keep pace. Hyperscalers like Amazon Web Services, Google Cloud, and Microsoft Azure are moving in on the traditional B2B stronghold, offering powerful, low-cost cloud services that rewrite the rules of the game. For years, what kept those systems safe were the things that made them hard to copy: their complex workflows, their custom interfaces, and their mountains of historical data locked deep in archives. But that's exactly where the new pressure is building. Those old defenses aren't holding the way they used to. The user interfaces and configuration tools that once felt unique can now be replicated in minutes. And the data sitting quietly in archives, the years of transactions, interactions, and records, has become the fuel that every AI model wants to learn from. Agentic AI doesn't just compete with enterprise systems; it learns from them, imitates them, and in many cases outpaces them.

The real advantage now isn't in building taller walls; it's in building smarter foundations. What makes an enterprise resilient today isn't the size of its software stack, but how well it governs, protects, and activates the data inside it. That's where real intelligence and real differentiation begin. Leaders advancing cognitive computing must balance trust and innovation, recognizing that secure, well-managed data underpins trust. While AI drives progress and innovation, trusted data ensures confidence, reliability, and compliance as AI unlocks new solutions and efficiencies. Without strong data governance, rapid AI advancement can compromise privacy and security. But without innovation, progress stalls.
In this era, trust and innovation must be in sync. Aligning robust data practices with bold AI initiatives enables organizations to responsibly harness transformative technologies, ensuring every advancement is anchored in public trust and ethical standards. Simply put, only trusted data can power truly trustworthy and effective AI. Eighty-one percent of organizations have integrated Gen AI with content management to some degree, and AI adopters report average cost savings of twenty-three percent annually. Early adopters are already benefiting from integrated AI.

The next decade will belong to the organizations that understand this shift: the ones that treat data not as a byproduct of business, but as its operating system; the ones that know data can't be powerful unless it's also trusted, governed, and secure. AI will transform every industry, but only if it's built on a foundation strong enough to carry it.

Consider this. In May 2025, Salesforce announced it would acquire Informatica for approximately 8 billion US dollars, a signal event that tells us exactly where the battle lines of enterprise software are shifting. Why did this deal make sense? Salesforce relied on external systems for much of its critical data, its data integration capabilities were weak, and it wanted to close that gap so AI could run on governed, contextual information. Salesforce recognized that the future of AI depends on access to secure, structured, and unstructured data, the kind of data that lives deep within organizations, not out on the open web. This is especially true for the training of agentic AI systems, which are expected to become the cornerstone of global enterprise productivity in the decade ahead.

But here's the challenge. Most of the world's valuable information doesn't exist on public data sources. It sits inside corporations, governments, and institutions: governed, protected, and often regulated by law.
Public chatbots like ChatGPT, Claude, and Perplexity have already trained on nearly everything that's freely available: Reddit posts, Wikipedia pages, and other open repositories. As they reach the limits of that content and attempt to learn from more specialized information, they collide with copyright, privacy, and data ownership boundaries. This is proprietary data, the information organizations have sovereignty over, and it's increasingly governed by strict privacy and security regulations, from municipal and federal frameworks to global standards set by the UN and NATO.

In reality, more than 90% of the world's data sits behind corporate and government firewalls. It's not just hard to reach; it's protected by design. Gaining access requires permission, compliance, and governance. This is where enterprise information management comes in: document and records management systems, workflows, and rules engines that control who can see what, when, and why. For enterprise agentic AI to evolve responsibly, it will need to learn not just from information, but within the guardrails of trust that systems like these provide.

When a company trains or fine-tunes a large or small language model on data it doesn't have the right to use, it's basically teaching the machine someone else's homework, and that never ends well. If that data turns out to be proprietary, the fallout isn't a quick fix; it's a full reset. The organization that owns the data can demand the AI be untrained, which in today's world means starting from scratch. You can't just pluck out one bad data source. You have to roll the whole thing back to where the mistake began. That can cost millions of dollars and months of lost time. In some cases, it can take a promising AI program right off the board.

In short, when it comes to governing data to train effective AI models, organizations need to measure twice and cut once. That's why understanding data sovereignty has become mission critical.
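As a rough illustration of the "measure twice, cut once" discipline described above, the sketch below gates documents out of a training set unless the requesting organization owns them, they aren't restricted, and explicit training consent exists. The field names and rules are hypothetical assumptions for this sketch, not any specific product's API.

```python
from dataclasses import dataclass

@dataclass
class Document:
    doc_id: str
    owner_org: str            # who has sovereignty over this record
    classification: str       # e.g. "public", "internal", "restricted"
    consent_for_training: bool

def may_train_on(doc: Document, requesting_org: str) -> bool:
    """Allow training only on data the requester has the right to use."""
    if doc.owner_org != requesting_org:
        return False                   # someone else's data: no right to use
    if doc.classification == "restricted":
        return False                   # regulated content stays out by design
    return doc.consent_for_training    # explicit opt-in still required

corpus = [
    Document("d1", "acme", "internal", True),
    Document("d2", "other-co", "public", True),    # third-party data
    Document("d3", "acme", "restricted", True),    # regulated data
]
training_set = [d.doc_id for d in corpus if may_train_on(d, "acme")]
print(training_set)  # → ['d1']
```

A gate like this is far cheaper than the alternative the text describes: untraining a model after proprietary data has already leaked into it.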
Every organization holds something too valuable to lose: its institutional knowledge and data, the keys to the castle. It's what makes your business yours. Hand that over to the wrong system or the wrong partner, and you risk watching your own information come back to you repackaged and resold. Governments see that risk too, which is why they're racing to set guardrails. New AI and data protection laws are on the horizon, and they'll make the General Data Protection Regulation (GDPR) look like the opening act. They'll reshape how companies store, share, and train on data, especially when personal or sensitive information is in play.

To get AI ready, organizations need to understand the three kinds of data sets that fuel intelligent systems and manage them responsibly. One, human-generated content we create every day: documents, emails, presentations, images, videos, and conversations, the living record of how an organization thinks. Two, machine-generated content: log files, alerts, and telemetry from IT systems, networks, and security tools, the constant hum of how an organization operates. Three, data that flows between organizations: transactions, supplier exchanges, and B2B integrations, the connective tissue that keeps the economy running.

Organizations require sovereign data, and agentic enterprise AI needs all three data sets to function with real business context. The shift ahead is from content in context to AI in context, where intelligence doesn't just process data; it understands relationships, intent, and value in order to effectively take action and learn. And just like the information it learns from, AI itself must be secure, governed, and compliant. Those aren't just technical standards; they're the foundation of trust that determines whether AI can be safely put to work inside the enterprise. The real question isn't whether you'll share your data; it's how you'll stay in control of it.
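The three data-set categories above can be pictured as a simple tagging step applied to incoming records. The source labels and their mapping below are assumptions made for this sketch, not a real enterprise taxonomy.

```python
# Illustrative category names matching the three data sets in the text.
HUMAN = "human-generated"
MACHINE = "machine-generated"
INTER_ORG = "inter-organizational"

def classify(record: dict) -> str:
    """Tag a record by where it came from (hypothetical source labels)."""
    source = record.get("source", "")
    if source in {"email", "document", "presentation", "chat"}:
        return HUMAN       # how the organization thinks
    if source in {"syslog", "telemetry", "alert"}:
        return MACHINE     # how the organization operates
    if source in {"edi", "b2b-transaction", "supplier-exchange"}:
        return INTER_ORG   # how the organization transacts
    return "unknown"

records = [
    {"id": 1, "source": "email"},
    {"id": 2, "source": "telemetry"},
    {"id": 3, "source": "edi"},
]
print([classify(r) for r in records])
# → ['human-generated', 'machine-generated', 'inter-organizational']
```

In practice, tags like these would feed retention, access, and sovereignty policies, so an agentic system knows not only what a record says but how it may be used.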
True sovereignty means knowing where your data lives, who's accessing it, and how it's being used, not once, but continuously. You need secure ways to bring generative AI (Gen AI) inside, where your information already lives: inside your governed, secure systems. Let your people chat with their content, find it, summarize it, and build on it without ever breaking governance rules. Transform analytics into something you can simply ask about in plain language, and safely introduce other products that carry the same intelligence into cybersecurity, application management, and beyond. With secure, governed, sovereign data, you don't have to hand over your crown jewels to innovate. You can use AI confidently. Because in an era of responsible intelligence, the smartest organizations aren't the ones who feed AI the most data. They're the ones who know which data they can trust.

The Path Forward: A Call to Leadership. As we've discussed, the next era of innovation won't be led by those who move the fastest, but by those who move the most responsibly. It is the intersection of trust and innovation. Every executive decision, every line of code, every AI model now carries a moral dimension, because information has power, and how that power is used will define the decade ahead. This is the moment when leaders in enterprise, government, and industry will be required to treat their data and information not just as a resource, but as a responsibility: to strengthen data foundations, to embed governance into design, and to make security the default, not the exception.

The future of AI depends not only on how much we can automate, but on how deeply we can trust the systems and the data we build it on. Across sectors, we see the same challenge emerging: the need to transform at speed without sacrificing control. The organizations that succeed will be those that combine the courage to innovate with the discipline to govern.
They'll be the ones that protect privacy as fiercely as they pursue insight, that make transparency a competitive advantage, and that build enterprise AI capable not just of reasoning, but of earning trust. This is where readiness meets responsibility, where digital ambition becomes accountable intelligence, and where the most transformative technologies are also guided by principle. Just as trusted data forms the bedrock and AI innovation lights the path forward, the chapters ahead will unpack data frameworks, AI governance models, and key considerations for both sovereign and non-sovereign data architectures. At the end of each chapter, you'll find a Fast Five download to distill the essentials for quick reference, ensuring you're prepared to build, govern, and innovate with confidence. The decade of responsible intelligence has begun. Together, we can build it securely, ethically, and for the greater good.