Enterprise Artificial Intelligence: Building Trusted AI in the Sovereign Cloud
The decade of responsible intelligence has begun — are you ready?
Enterprise AI is hitting a wall: Public models aren’t trained on your business data, but you can’t hand over your organization's proprietary information to a public system. The definitive roadmap for this new reality is Enterprise Artificial Intelligence: Building Trusted AI in the Sovereign Cloud, a new book written by OpenText leaders. Listen now to learn why this book is a must for organizations looking to move from isolated AI experiments to enterprise-grade deployments.
Learn more here: https://www.opentext.com/resources/enterprise-artificial-intelligence-building-trusted-ai
Chapter 8: Putting Agentic AI to Work
Explore how to put AI to work across the enterprise by understanding the three levels of AI that together form the foundation for intelligent, adaptive enterprise AI systems.
In this chapter, we'll explore how to put AI to work across your enterprise. A successful framework requires understanding the three levels of AI: generative AI, which creates and synthesizes; agentic AI, which makes decisions and takes action; and artificial general intelligence, which extends reasoning across domains much like the human mind. Together, these layers form the foundation for intelligent, adaptive enterprise AI systems.

Enterprise AI has moved well beyond the novelty stage. Organizations are no longer experimenting with simple text generators or using AI to draft email subject lines. The focus has shifted from isolated tools to autonomous collaborators that work alongside people. These are EAI agents: intelligent software entities that not only generate output but act, react, and adapt within dynamic business environments. Unlike traditional AI applications that respond to static prompts, EAI agents function more like digital colleagues, executing multi-step workflows, engaging in contextual problem solving, and supporting daily operations with speed, consistency, and intelligence. The difference is subtle but profound: this new class of AI doesn't just answer questions, it gets work done.

EAI agents are moving beyond efficiency tools to become force multipliers for the enterprise. Always available, consistently accurate, and increasingly adept at understanding nuance, they deliver measurable lift without increasing overhead. Whether automating workflows, managing customer interactions, analyzing complex data sets, or guiding users through onboarding, EAI agents bring precision and scale to routine operations. Their reliability ensures uniform experiences, while their speed turns vast data into timely insight. The result is more than cost reduction; it's reclaimed capacity for strategic focus, innovation, and growth.

With integrated EIM and EAI, organizations in the public sector, like the Karlsruhe Institute of Technology (KIT), gain leading-edge analytics capabilities designed to mine, extract, and present the true value of information for improved research and analysis. Read about it in the following case study.

Case study: Karlsruhe Institute of Technology

The Karlsruhe Institute of Technology (KIT), one of the world's leading engineering research institutions, was founded in 2009 by a merger of Forschungszentrum Karlsruhe and Universität Karlsruhe. As a member of the Helmholtz Association, the largest science organization in Germany, KIT makes major contributions to top national and international research. According to its mission, the organization operates along three strategic fields of action: research, teaching, and innovation. KIT currently has 9,000 employees and 24,000 students.

KIT needed a leading-edge solution that would give researchers, students, and the general public a faster way to find information across 600 websites and 200,000 associated web pages. On the back end, the institute wanted a robust website management solution that would support its 1,300 editors worldwide on a day-to-day basis by supplying metadata, key phrases, and the ability to automatically generate extracts of text. KIT also wanted a collaborative platform that would bring together researchers, scientists, and students. KIT is using e-government technologies, semantic navigation, and content analytics in combination with website management to optimize web pages and provide relevant search results.
Previously manual, labor-intensive tasks have been replaced by an automated solution that assigns metadata and supports entity extraction by generating teaser texts for new pages, saving users time and reducing error. Visitors are given personalized access to highly relevant information, facilitated by faceted search and related hits, resulting in a more satisfying end-user experience. With improved access to information and the ability to connect with researchers in similar areas of study, the website has evolved into an advanced research network that successfully meets the needs of all stakeholder groups.

Three levels of AI

Generative AI (GenAI) gained widespread consumer popularity with models such as OpenAI's ChatGPT and Google's Gemini. These, along with other large language models (LLMs) and AI applications, are trained to make predictions or recommendations based on publicly available data sources, including websites, news, Reddit, and Wikipedia, among others. While GenAI models are useful for generating general insights, they are limited to general-purpose tasks. This is because they lack access to the private, real-time, and enterprise-centric data required for specific business use cases.

Agentic AI refers to artificial intelligence systems designed to function as autonomous agents. Unlike models that simply respond to a prompt, an agent can perceive its environment, create a multi-step plan, make independent decisions, and use tools to actively work toward a specific goal. In an enterprise context, these agents are a powerful engine for productivity, with data serving as fuel. They can be given access to private enterprise data sets and internal tools, allowing them to automate complex workflows that previously required human judgment. Agentic AI is powered by a digital brain: a single, competent model that can process decades of human responses. This technology is already reshaping industries, and enterprises that fail to adopt and orchestrate agentic AI will be left behind. It is also a route to artificial general intelligence (AGI).

AGI refers to AI that can understand, learn, and apply knowledge across a wide range of complex tasks, much like humans. Such a technology would be capable of redefining not just sectors but entire societies. Like all modern AI, the capabilities of a potential AGI would be fundamentally shaped by the quality and scale of its training data. The foundations of AGI will likely come from the orchestration of thousands of specialized agentic AI instances within a secure and sovereign framework, as described in chapter 7.

Building agentic AI for the enterprise

Good data and effective business processes are foundational to achieving optimal AI outcomes, and the reverse is equally true. With enterprise data safeguarded and large language models fine-tuned within your domain, the stage is set to build what we call agentic capabilities that deliver real business value. Agentic AI is not just about technology; it's about building capabilities on top of your business processes and supporting a combination of people, processes, culture, and change management. When done right, it becomes much more than an experiment in generative output. Rather than merely asking a model to summarize a document, agentic AI might detect a bottleneck in a workflow, break tasks into subtasks, call APIs or other systems to execute, monitor results, learn from them, and then refine the next step with minimal human direction.
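As a minimal sketch of that loop, consider the Python outline below. Everything in it is illustrative rather than an interface from the book: the llm object and its analyze, decompose, and select_tool methods stand in for a fine-tuned enterprise model, and tools stands in for whatever internal APIs an agent is permitted to call.

```python
from dataclasses import dataclass

@dataclass
class Task:
    description: str
    done: bool = False

class WorkflowAgent:
    """Sketch of the perceive -> plan -> act -> learn loop described above."""

    def __init__(self, llm, tools):
        self.llm = llm        # assumed interface to a fine-tuned enterprise model
        self.tools = tools    # dict mapping tool names to callable enterprise APIs
        self.memory = []      # outcomes the agent learns from over time

    def perceive(self, workflow_state):
        # Ask the model to spot bottlenecks in the current workflow state.
        return self.llm.analyze(workflow_state, context=self.memory)

    def plan(self, finding):
        # Break the finding into discrete subtasks.
        return [Task(step) for step in self.llm.decompose(finding)]

    def act(self, task):
        # Let the model pick a tool, then invoke the corresponding API.
        name, args = self.llm.select_tool(task.description, list(self.tools))
        result = self.tools[name](**args)
        self.memory.append((task.description, result))  # learn from the outcome
        task.done = True
        return result

    def run(self, workflow_state):
        # Perceive once, then execute the plan with minimal human direction.
        for task in self.plan(self.perceive(workflow_state)):
            self.act(task)
        return self.memory
```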
Consider the findings of the Massachusetts Institute of Technology's NANDA initiative, "The GenAI Divide: State of AI in Business 2025." Only about 5% of enterprise generative AI pilot projects achieved rapid revenue growth; the other 95% failed to deliver measurable P&L impact. According to the researchers, the barrier was not the model or the hardware. It was the learning gap: the inability of AI systems and organizational workflows to adapt together. Among the lessons derived: allocate AI investments where they align to specific processes, not just high-visibility use cases; embed feedback loops so the system learns; integrate deeply into existing operations rather than bolting on a generic tool; and ensure change management is in place so people and culture welcome the shift. In short, when data quality, process alignment, organizational readiness, and domain-specific modeling converge, you position your enterprise to move from pilot to real EAI-driven value creation.

The case for agentic AI

Organizations are increasingly adopting AI in response to pressures from rapid economic and technological change, including the need for digital transformation, new business models, real-time decision making, global skills, and the ability to adapt to disruption. AI agents, which can perform tasks semi- or fully autonomously, help them remain competitive, scale information flows, reduce cognitive load on humans, and improve agility.

Deploying systems of AI agents is only the start. The bigger challenge is sustaining them and ensuring they continue to deliver value, remain aligned with organizational goals, and evolve in step with the human-machine ecosystem. Starting with well-known business processes is critical, and this will be covered in more detail in the following chapter. It is best to adopt a standards-based approach and begin by identifying discrete, simple tasks for your first agents. This method will establish a foundation for a more complex, orchestrated model in the future.

The following feature about the European Court of Human Rights demonstrates how EIM helps consolidate information to ensure accuracy and integration of AI with key processes, reducing administrative burden and improving performance. As a result, agencies are better equipped to deliver on their mission to protect citizens.

Case study: A European court

The court is part of the Council of Europe, an international intergovernmental organization established in 1949 to promote political democracy, human rights, social progress, and cultural identity continent-wide. Over the past decade, the court has seen its caseload skyrocket from 14,000 applications to more than 50,000. To manage this surge, the court's IT department built an automated workflow solution that streamlined how committee and chamber cases moved through approval. What began as a digital transformation initiative soon evolved into an intelligent information ecosystem powered by analytics, EIM, and now agentic AI.

Today, the court's workflows do more than just route documents. They reason, adapt, and act. Built on a foundation of governed data, the system uses analytics to identify process bottlenecks and agentic AI to optimize them in real time. For example, the platform can detect when a case file lingers too long in review, automatically flag it for escalation, and redistribute workload across divisions to maintain throughput. Legal assistants no longer spend hours chasing paper trails.
The AI monitors progress, generates dynamic reports, and recommends next steps, continuously learning from outcomes to improve future routing decisions. The results speak for themselves: faster case turnaround times, fewer administrative bottlenecks, and more time for the court's legal experts to focus on interpretation rather than administration. With analytics, EIM, and agentic AI working in concert, the court has transformed from a reactive institution to a proactive one, ready to scale, adapt, and deliver justice at the speed modern caseloads demand.

Strong data foundations

The critical challenge is how to leverage secure, high-quality data to drive AI outcomes and achieve AI's full potential. There is a direct link between strong data foundations and successful AI outcomes. Beyond data, a successful strategy must also account for infrastructure. Different AI models have varying requirements, and the data and AI architecture must be designed to match the right model to the right business task.

Unlocking private data to support agentic AI

Given the quality and quantity of enterprise data sitting behind the firewall, it is crucial to leverage this private data. Fine-tuning or adapting an LLM with this domain-specific knowledge is what enables agentic AI to handle meaningful, real-world business use cases and applications. Data pipelines, data lineage, and data flows will become critical. Every enterprise will need to become a data warehouse company.

As we examine the breadth of data across the enterprise, it is clear that unlocking AI requires a strategy to work across both public and private data sets, which are each integral to how every private and public sector organization operates today. The diagram featured on page 140 of the book illustrates examples of these data sets. Private information within the enterprise (the secure zone) includes financial data (ERP), customer data (CRM), IT operations data, HR data, supply operations data, engineering code and documents, manufacturing documents, legal and contractual documents, personal emails and documents, and service operations data. Public information (the open domain) includes websites, research, annual reports, public records, public images, online forums (Reddit), media, news, social media, customer support chats, communities, and published articles.

Leveraging private data to fine-tune LLMs

Large language models rely on significant amounts of data to train. They learn from patterns in this data, including words, phrases, syntax, and semantic relationships. While the quality and scale of this initial training data are important, relevance is critical for enterprise use. As previously discussed, publicly trained LLMs, like those from OpenAI, Cohere, or Anthropic, are excellent for general-purpose tasks but lack deep context when it comes to specific businesses. To address this limitation, enterprises now employ multiple adaptation strategies to align models with their proprietary data and environments. The most established of these is fine-tuning, while techniques such as context engineering offer complementary flexibility and speed.

Fine-tuning a base model involves taking the publicly pre-trained model and continuing its training on a smaller, domain-specific or proprietary set of data. This allows the model to learn the enterprise's unique vocabulary, data, and processes without exposing that private data to the public.
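As one concrete illustration, continued training of this kind might look like the sketch below, which uses the open-source Hugging Face transformers and datasets libraries. The base model name, the private corpus path, and the hyperparameters are placeholder assumptions, not recommendations from the book.

```python
# A minimal fine-tuning sketch using the Hugging Face transformers and
# datasets libraries. Model name, corpus path, and hyperparameters are
# illustrative placeholders; a sovereign deployment would run this entirely
# inside the controlled environment.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

BASE_MODEL = "your-org/base-llm"  # hypothetical base model identifier

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
tokenizer.pad_token = tokenizer.eos_token  # causal LMs often lack a pad token
model = AutoModelForCausalLM.from_pretrained(BASE_MODEL)

# Private, domain-specific corpus that never leaves the firewall.
corpus = load_dataset("json", data_files="private_corpus.jsonl")["train"]

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = corpus.map(tokenize, batched=True, remove_columns=corpus.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="sovereign-derivative-model",
        num_train_epochs=1,
        per_device_train_batch_size=2,
    ),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
trainer.save_model("sovereign-derivative-model")  # derivative model stays private
```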
This fine-tuning delivers a customized derivative model that performs better on enterprise and domain-specific tasks, as it has internalized patterns from private data.

In parallel, context engineering enables enterprises to refine model behavior dynamically by structuring and updating the input context (for example, prompts, examples, or metadata) rather than retraining the model. This approach allows faster, lower-cost adaptation, supports selective unlearning for privacy or legal compliance, and is particularly useful when working with closed-source models. In practice, organizations often combine both approaches: fine-tuning for deep domain alignment and long-term performance, and context engineering for agile, real-time adaptation. Together, they create a layered, sustainable strategy for aligning LLMs with enterprise goals and compliance standards.

In the following feature, a travel tech leader in dynamic holiday packages is giving holidaymakers access to millions of real-time combinations. It uses technology to simplify, personalize, and enhance customers' travel experiences.

Case study: A travel tech company

With operations across multiple European markets expanding and demand for real-time holiday packages increasing, a travel technology company experienced a sustained surge in data volume and variety. Bookings, web interactions, partner feeds, marketing activities, and customer support engagements all generated high-velocity data with different structures and latency requirements. This growth threatened the company's governance, access-pattern, and time-to-insight initiatives. Over time, the data estate split into two silos: a data warehouse with ETL (extract, transform, load) and reporting tools, and a data lake for raw ingestion. The separation created duplication, maintenance overhead, and slower analytics. Teams relied on different tools and custom glue code to bridge systems, introducing a stark divide that made growth harder and insights slower to deliver.

The company adopted a unified data access layer, eliminating separate ingestion pipelines and the need for complex glue code. With a common interface and shared toolset, engineers and analysts could query and process data consistently across systems. By merging environments and decoupling producers from consumers while preserving full compatibility, the organization achieved a single source of truth. This integration enabled richer analytics, correlating BI data with marketing, CRM, and machine learning insights to power advanced, predictive customer models.

The company next moved to a containerized cluster environment using Kubernetes to automate deployment, scaling, and workload management. Query jobs can now spin up on demand and shut down when complete, using shared storage and right-sized compute for each task, from ETL to analytics dashboards. The result? Greater scalability and efficiency with lower compute cost, energy use, and carbon footprint. The company is able to study customer behavior across all of its channels and analyze every step of the customer journey, from preliminary searches to final payment. The implementation is positively impacting marketing campaign return on investment (ROI) and company revenue. In addition, AI-driven algorithms for attribution and bidding automation help optimize marketing costs overall, leading to increased profits.

But can I guarantee the sovereignty of my private data and fine-tuned model?

This question gets to the heart of the challenge.
Your data and the AI built on it are your competitive advantage. The quality of your data defines the efficacy and integrity of your LLM. However, once you have trained an LLM on a specific data set, it cannot unlearn that data. This is a key reason why private data and private AI are essential. When you train a model, it internalizes the data patterns, making an indelible mark. These learnings become interconnected with other data, as part of both the specific examples and the broader underlying data distribution. Because of this, attempting to unlearn specific training data requires a massive amount of cleanup of the parameters and connections that drive the model's behavior. In "Machine Unlearning Doesn't Do What You Think: Lessons for Generative AI Policy, Research, and Practice," the authors highlight that deleting information from a machine learning (ML) model is not well defined; information cannot be deleted from an ML model in the same way that it can from a database. While work is ongoing to refine approaches to machine unlearning, it is not a simple path for a private or public sector enterprise. The need to protect sovereign or private data in the resulting model is paramount.

This is precisely why a sovereign data and AI architecture is so critical. Instead of relying on unlearning, this approach is built on prevention. It is designed to keep your data and the fine-tuned model private from the start. This requires ensuring the model is deployed in a certified sovereign zone where your fine-tuned models are protected, as depicted in the architecture figure featured on page 143 of the book. There, public data is leveraged to deliver a base pre-trained LLM (1) that operates in the non-sovereign public zone. A complete set of agentic AI capabilities can be delivered in this environment. On the private zone side, the base model is fine-tuned with private data (2), which in turn provides a customized LLM (3) that includes domain or industry specifics that may be a differentiator for your business. This LLM (3) can be leveraged by agentic AI operating in the private zone, and the data is not supplied back to the LLM operating in the public zone. Notably, if you control the infrastructure on which the model is deployed, you can keep it private. If the model is hosted in the public cloud, you require assurances from both the public cloud provider and the model provider, if they are different, as to the sovereignty of your data.
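One way to operationalize this separation is a request router that keeps any prompt touching private data on the sovereign endpoint. The sketch below is an assumption-laden illustration: the endpoint URLs are placeholders, and the regex markers stand in for a real PII or data-loss-prevention classifier.

```python
# Illustrative router for a hybrid deployment: prompts that appear to touch
# private data go to the fine-tuned model in the sovereign zone; everything
# else may use the public base model. Endpoint URLs are placeholders, and the
# regex markers stand in for a real PII/DLP classifier.
import re
import requests

SOVEREIGN_LLM = "https://llm.internal.example/v1/completions"  # fine-tuned LLM (3)
PUBLIC_LLM = "https://api.public-llm.example/v1/completions"   # base LLM (1)

PRIVATE_MARKERS = [
    r"\bcustomer\b", r"\bcontract\b", r"\bemployee\b",  # naive keyword markers
    r"\b\d{3}-\d{2}-\d{4}\b",                           # e.g., SSN-like patterns
]

def is_private(prompt: str) -> bool:
    """Heuristic stand-in for an enterprise data-classification service."""
    return any(re.search(p, prompt, re.IGNORECASE) for p in PRIVATE_MARKERS)

def complete(prompt: str) -> str:
    """Route the prompt to the appropriate zone and return the completion."""
    url = SOVEREIGN_LLM if is_private(prompt) else PUBLIC_LLM
    resp = requests.post(url, json={"prompt": prompt}, timeout=30)
    resp.raise_for_status()
    return resp.json()["text"]
```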
Agentic AI in the sovereign context: a use case

If we place agentic AI within the sovereign context, we can use an example that's familiar to citizens of many nations: applying for a new passport. This example highlights the need for both private and public data sets and demonstrates how agentic AI can navigate both environments, with fine-tuned models operating in the sovereign private world working alongside models deployed in the non-sovereign public zone. When these environments work together, the end customer is the ultimate beneficiary, enjoying a greatly enhanced experience.

Step 1: Portal access and application start (public zone). The applicant begins by accessing a passport renewal portal. The system authenticates the user and provides guidance on completing and submitting the application. Passport information and supporting documents are uploaded with encryption both at rest and in transit. At this stage, the statuses include user authentication, document encryption, and readiness for processing.

Step 2: Handoff to the API gateway. The API gateway handles authentication and authorization, auto-classifying the data as protected. The payload is encrypted, biometric data is secured, and compliance validation is initiated.

Step 3: Compliance and policy validation. The government AI agent validates compliance with the relevant privacy and information regulations and policies. The system confirms language service capability and accessibility standards, logging all processing steps for audit trail and compliance purposes. The statuses at this point include compliance validation, active audit logging, and confirmed accessibility.

Step 4: Citizenship and identity verification. The government AI agent queries the citizenship database to match the identity from the submitted passport application. It cross-references existing passport records to confirm eligibility based on nationality and other criteria. The statuses include confirmed citizenship, an available match override if needed, and validated eligibility.

Step 5: Security screening and risk assessment (sovereign zone). The government AI agent conducts security screenings through law enforcement databases, utilizing travel patterns and risk factors. Machine learning models assess application risk based on travel patterns and cross-check watch lists, then alert for manual review if necessary. The statuses include completed security screening and a risk assessment categorized as low, medium, or high.

Step 6: Document and photo validation (sovereign zone). The government AI agent validates guarantor credentials through a database lookup, checks previous passport history for issues or travel fraud indicators, and utilizes machine learning models to authenticate the photo against government ID records and databases. The statuses include validated guarantor credentials and verified document integrity.

Step 7: Officer review and final authorization (sovereign zone). An expert human agent reviews all information flagged by the system. Using AI as a decision support tool, the human agent grants final authorization based on a comprehensive review of all validated information.

This workflow ensures that sensitive personal data is securely handled, validated, and processed in compliance with privacy regulations, maintaining a clear separation between public and sovereign zones. While specific use cases may vary, and this is a simplified example, it highlights how agentic AI interacts across sovereign and non-sovereign zones while handling personally identifiable information as part of a hybrid deployment.
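Condensed to code, the walkthrough above might be orchestrated as a simple staged pipeline. This is a sketch only: the step handlers are placeholders a real deployment would implement against its own systems, and the zone labels for steps 2 through 4, which the text does not assign explicitly, are inferred.

```python
from enum import Enum

class Zone(Enum):
    PUBLIC = "public"
    SOVEREIGN = "sovereign"

# Step names mirror the walkthrough above; zone labels for steps 2-4 are
# inferred assumptions rather than taken from the text.
PIPELINE = [
    ("portal_access",         Zone.PUBLIC),     # step 1: authenticate, encrypt uploads
    ("api_gateway_handoff",   Zone.PUBLIC),     # step 2: classify payload as protected
    ("compliance_validation", Zone.SOVEREIGN),  # step 3: privacy rules, audit logging
    ("identity_verification", Zone.SOVEREIGN),  # step 4: citizenship database match
    ("security_screening",    Zone.SOVEREIGN),  # step 5: ML-based risk assessment
    ("document_validation",   Zone.SOVEREIGN),  # step 6: guarantor and photo checks
    ("officer_review",        Zone.SOVEREIGN),  # step 7: human grants final approval
]

def process_application(application: dict, handlers: dict) -> list[str]:
    """Run each step with its registered handler and collect status lines."""
    statuses = []
    for step, zone in PIPELINE:
        result = handlers[step](application, zone)  # each handler returns a status
        statuses.append(f"{step} [{zone.value}]: {result}")
        if result == "rejected":
            break  # halt early; the audit trail still reaches the human officer
    return statuses
```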
Implementing and managing the deployment of agentic AI requires a broader consideration of how digital agents and human workforces work in concert. Agentic AI performs tasks using its built-in logic, while human oversight ensures successful outcomes. The next chapter will dig more deeply into how digital and human workforces must collaborate to realize AI's potential in the enterprise.

Read the following feature to find out how the General Council of the Judiciary uses a trusted EIM system to securely consolidate both public and private information, improving the delivery of its services to Spanish citizens.

Case study: General Council of the Judiciary

"Analytics help us measure the success of our services and the public site's overall performance, equipping us with the tools we need to present users with a relevant and responsive experience, supported by multimedia content."

The General Council of the Judiciary (Consejo General del Poder Judicial, or CGPJ) was established by the Spanish Constitution in 1978 as the constitutional body that governs the judiciary of Spain. The CGPJ wanted to combine its systems into an online portal to provide citizens with personalized access to the information and services they need. The new portal would support a variety of communication channels in multiple languages. On the back end, the system would be required to integrate all corporate services of the Judiciary Council to streamline collaboration, provide integrated services such as online applications, allow for the secure management of information, and comply with current regulations around transparency, accessibility, multilingualism, Law 11/2007, and more.

An e-government solution was selected as the basis for the CGPJ website and Judiciary Extranet, providing the council with a technologically sound and manageable platform for the future. The multilingual portal supports a substantial number of hits and is readily scalable. The web publishing process is more efficient: self-service capabilities have significantly reduced the time it takes to publish up-to-date information. The system went live internally with 6,500 active users and 5,400 messages exchanged on its forums. Members of the judiciary can participate and collaborate using the system's virtual environments, 45 communities of practice, and shared files. Secure access to integrated applications and services is provided through single sign-on and identity management. The system is customizable, allowing users to personalize and configure their working environment.

The Fast Five Download

1. Deploy agentic AI to drive enterprise value. Accelerate productivity and adaptability by adopting agentic AI applications that autonomously perceive, plan, decide, and act, enabling automation of complex workflows and reducing dependency on manual intervention.

2. Leverage private, domain-specific data to differentiate. Achieve business advantage by fine-tuning AI models with your organization's proprietary internal data, empowering agentic AI to solve domain-specific challenges that generic models cannot, and building a secure, high-quality data foundation.

3. Implement sovereign AI architectures to protect IP and privacy. Safeguard your enterprise's sensitive data and intellectual property by deploying AI models within secure, sovereign environments, ensuring compliance, data privacy, and full control over your AI assets.

4. Focus AI initiatives on targeted, learning-capable systems. Maximize ROI by directing agentic AI toward specific, well-understood business processes, and invest in systems that learn and adapt over time, avoiding generic solutions and minimizing disruptive organizational changes.

5. Champion human-AI collaboration for sustained impact. Ensure ongoing value by establishing standards for human oversight and alignment, starting with simple agentic tasks and cultivating effective human-machine collaboration that evolves with your business needs.