Enterprise Artificial Intelligence: Building Trusted AI in the Sovereign Cloud
The decade of responsible intelligence has begun — are you ready?
Enterprise AI is hitting a wall: Public models aren’t trained on your business data, but you can’t hand over your organization's proprietary information to a public system. The definitive roadmap for this new reality is Enterprise Artificial Intelligence: Building Trusted AI in the Sovereign Cloud, a new book written by OpenText leaders. Listen now to learn why this book is a must for organizations looking to move from isolated AI experiments to enterprise-grade deployments.
Learn more here: https://www.opentext.com/resources/enterprise-artificial-intelligence-building-trusted-ai
Chapter 9: The Management of EAI Applications
The agentic AI evolution is not about replacing people with automation—it’s about redefining how humans and AI work together. Explore the principles that enable organizations to deploy agentic AI, aligned with strategic goals, risk frameworks, and human-centered governance.
In today's enterprise, the management of AI applications is no longer about deploying a single model. It's about orchestrating a symphony of intelligent agents operating across public and sovereign zones, private clouds and open networks, and multitudes of workflows. This is enterprise artificial intelligence, and managing this complexity demands more than technical prowess. It requires a sophisticated organizational approach rooted in solid business processes, change management, and governance. As recent research has noted, companies falter in their application of AI not because the models are inadequate, but because their structures, ownership, and workflows aren't ready for it.

In this chapter, we lay out the key principles that ensure your organization can not only deploy agentic AI (systems that reason, plan, act, and collaborate) but also manage it in a way that aligns with strategic goals, risk frameworks, and human-centered governance. These four principles define the right conditions for the successful deployment of enterprise agentic AI systems:

1. An organizational model for agentic EAI deployments: ensuring that ownership and accountability, along with roles and responsibilities, are clear.
2. Developing agentic EAI applications: adopting a structured approach to prioritizing and building out capabilities for the organization.
3. Collaboration between human and agentic EAI workforces: ensuring that your teams embrace and adopt AI with clarity of role and purpose.
4. Performance management and measurement: closing the feedback loop by measuring outcomes and performance-managing your agentic AI workforce.

As we dive into each of these principles, you'll find frameworks, best practices, and real-world considerations for building and operating enterprise AI at scale, from defining the organizational model, to designing workflows that humans and agents share, to measuring value and adapting continuously.
This chapter equips you to move from isolated AI experiments to enterprise-grade deployments. Let's begin by exploring how the organizational model sets the foundation for accountable and scalable AI applications.

1. An organizational model for agentic AI deployments

How to select the right model. Enterprises seeking to drive AI adoption, and more specifically to deliver agentic AI capabilities, must make a critical decision about how to manage their AI applications. Several common models exist, each with distinct strengths and weaknesses, but the most crucial aspect is selecting the right one. The following examples outline four models that work in various organizations depending on the industry and level of regulation: the Centralized Model (AI Center of Excellence, or CoE), the Hub and Spoke Model, the Federated Model, and the Hybrid Model.

The Centralized Model. In this model, a centralized team holds deep competency and is responsible for driving strategy, model development, infrastructure, implementation, testing, and governance. This ensures consistency and control along with unified governance. However, it can absolve business units of accountability and ownership for driving outcomes from their AI adoption. Many organizations new to adopting the technology, as well as those in regulated industries and government, find this model attractive.

The Hub and Spoke Model. In this model, a small group at the center sets the strategy and provides tools and frameworks, while the different business units act as spokes, executing projects within their domain. This model is attractive to organizations with technology competency distributed across business units, and it is more agile and scalable, allowing the organization to drive multiple projects in parallel. One point of contention with this model is the question of who owns, trains, and fine-tunes the LLM. Generally, this is owned by the hub, and the business units are responsible for the agentic applications.
The Federated Model. In this model, there is no centralized function, and each business unit has a group responsible for operating its own AI systems and technology. In some cases, a small team may provide corporate oversight. Generally, this model offers complete control to the business units, allowing them to drive faster implementation. However, this comes at the expense of governance and security risks. This model is most useful for extremely mature organizations.

The Hybrid Model. This model blends components from the other approaches. Its key advantage lies in leveraging economies of scale through standard services that teams can utilize, coupled with giving business units the autonomy to deliver.

Ultimately, selecting the right model depends on the organization's specific needs. Success requires understanding how you want your teams to work and what guardrails you wish to put in place for your users. This clarity is critical to building scalable, outcome-focused agentic AI applications. In the following case study, the City of Barcelona implemented a hub and spoke model to make data and services available to all citizens from any device in any location, improving access to services and the overall quality of life for its citizens.

Case study: City of Barcelona. The City of Barcelona is the second-largest city in Spain, with over 1.5 million inhabitants. Fulfilling its vision of transformation into a smart city, the municipal government is relying on mobile and cloud-based e-government solutions to facilitate citizen engagement with administrative processes and city services. The goals of implementing an e-government system have been clear: to make data and services available to all citizens from any device in any location as a means to improve the quality of life for all citizens.
A first step toward achieving this was making city council and other data available in digital format, while promoting the reuse of this information to stimulate economic growth through opportunities for innovation. To standardize its information, the city needed to consolidate its infrastructure on interoperable and open standards and decommission its legacy systems. The city opted to migrate its solutions to the cloud: a content management system hosted in the cloud provides an alternative that is reliable, flexible, and produces economic gains in the long run. The result was the first Barcelona open data site, with 510 data sets. The solution, based on the principles of mobility, smart cities and administration, information systems, and innovation, supports 150 portals with over 4 million user visits and more than 65 million pages generated each month.

Is your organization ready? The AI hype sparked a race to adopt, driven by a fear of being left behind. But the truly innovative organizations didn't slam the accelerator; they tapped the brakes. They understood that while data may be the fuel for the AI engine, flooring it without direction doesn't get you further. It only risks burning out before you reach your destination. The organizations that took time to learn, plan, and prepare have achieved far greater success. By ensuring their data was ready and governance was in place, they knew how they would use AI before accelerating, a critical step that those who sped off from the starting line missed.

Even Microsoft eased off the accelerator when rolling out Copilot across its organization. As one of the first organizations to deploy at scale, Microsoft divided its implementation into distinct phases, from a limited early-access rollout for specific groups, to more focused groups, to ultimately a full rollout by cohort.
As they shared, "We divided our adoption along two vectors: internal organizations, like legal or sales and marketing, and regions, like North America or Europe. Different cohorts have different focuses, but the strategy is similar." The cohort-specific approach has been cited by other organizations as the key to their AI deployment success, as they sought to enable specific groups and users with technology specific to their needs, thereby driving adoption.

In the following case study, MillerCoors acts as the hub of its supply chain, overseeing its suppliers directly. MillerCoors is a joint venture of the U.S. operations of SABMiller and Molson Coors. With more than 450 years of combined brewing heritage, MillerCoors boasts an impressive portfolio of industry-leading beers. With nearly 30% of U.S. beer sales, MillerCoors is the second-largest beer company in the United States. The company operates eight major breweries as well as several craft breweries.

Miller Brewing, a legacy company of MillerCoors, found that its distributor-to-retail supply chain was falling short of the industry standard and, more importantly, user expectations. The company needed to modernize and standardize its inefficient and document-intensive processes to remain competitive in a complex and consumer-driven market. Using B2B Managed Services, Miller Brewing connected more than 400 distributors with 25 different business systems into a cohesive EDI (electronic data interchange) platform. Doing so enabled the entire distributor network to conduct business with any retailer that required an EDI capability. B2B Managed Services provides the technical foundation for a seamless end-to-end EDI platform for all of MillerCoors' supplier and banking connections. Critical documents are received, processed, and seamlessly exchanged to deliver efficiencies, cost savings, and, of course, beer.
In just one year, the business transformation eliminated 1.2 million hours of labor for distributors and 1.3 million hours for retailers, for a total of 2.5 million labor hours removed from the distributor-to-retailer supply chain. The time savings translates to an estimated reallocation of 1,200 full-time equivalent (FTE) resources to other tasks, freeing up potentially $50 million in labor savings.

2. Developing agentic EAI applications

Start small, keep it simple. Beyond a cohort-specific deployment approach, a core principle for successful AI implementation is to start small and keep it simple. Underlying this is a ruthless focus on driving business outcomes. Technology for technology's sake is a nice concept and can certainly be fun for technology teams, but it is not a winning strategy with AI. Starting small means picking a use case and working with a part of your business that understands the technology and its potential, has good, documented business processes and good data quality, and can drive adoption. The cultural impact of having change agents and champions who can drive adoption should not be underestimated. In our case, this meant selecting our human resources team. With a strong technology orientation across that team, they were keen to embrace the transformation and their role as both business and product owner. Product owners play a vital role in designing and implementing specific AI use cases. Ensuring they can act as change agents and champions for agentic AI adoption is as critical as ensuring they have the right skills for the role.

The importance of good processes and data to drive agentic AI applications. With a team and target use case defined, a deep understanding of the data and business process becomes the key to guiding the agent's development and behavior. The process is outlined below.

Life cycle management approach for agentic AI. Building on the use case, the process begins with an analysis of the data and business processes.
This analysis directly informs the development of the agentic AI application. This development phase, often driven by the product owner, is typically a collaboration between IT and the specific business unit. To lower the barrier to entry, many agentic frameworks now use low-code/no-code approaches, allowing non-developers to contribute. Following the agent build, the application moves to deployment. This step requires a standardized approach to ensure the proper governance and guardrails are in place for the organization. Finally, the process is completed with ongoing monitoring to manage performance.

An agent's success often hinges on a unified understanding of its role, the data and processes it uses, and its inputs and outputs. Beyond that, carefully documenting its expected behaviors is critical to success. The best way to achieve this is to start small. This means beginning with simple, discrete-function agents and avoiding complex scenarios with numerous edge cases. As these simple agents start delivering value, more complicated scenarios can be tackled through either a series of orchestration flows or more complex agents.

3. Collaboration between human and agentic AI workforces

Treating digital agents as an extension of your workforce. Should digital agents be managed by human resources just like human employees? This question is certainly a topic of debate as agentic AI applications become more mainstream. In their report on the agentic organization, McKinsey and Company examined how companies leverage human employees and digital agents to drive outcomes and results. As agents take on execution, people will increasingly define goals, make trade-offs, and steer outcomes. This will change how companies plan for a hybrid workforce, whom they hire or borrow, how they deploy human or AI talent, and how they measure success. HR systems will not only track human employees but also serve as a repository of agents in agentic workflows.
One effective approach that some organizations are undertaking is building job descriptions for their agents. We've adopted this practice internally, with our teams creating job descriptions during the initial development process that are similar to those we publish for human roles. This ensures that as we move into the deployment and measurement phase, we know the expectations. Culturally, this has also helped our team better understand their own roles in relation to these new digital agents.

How the human and the agentic AI assistant work together to select benefits for an employee:

Step one, ticket received. The agentic AI assistant classifies the ticket as benefits selection and extracts relevant employee data. The human specialist monitors the queue and reviews flagged tickets.

Step two, recommendation. The agentic AI assistant suggests a benefits package based on policy rules and the employee profile. The human specialist reviews the recommendation for accuracy and context.

Step three, communication. The agentic AI assistant sends a summary to the employee with links to enrollment forms and FAQs. The human specialist follows up with the employee if clarification or escalation is needed.

Step four, exception handling. The agentic AI assistant flags tickets with missing data or policy conflicts. The human specialist resolves exceptions and updates the agent's training data.

Step five, audit and feedback. The agentic AI assistant logs actions and learns from human feedback. The human specialist audits agentic decisions and provides feedback for improvement.

The agentic AI evolution is not about replacing people with automation. It's about redefining how humans and AI work together as complementary partners. Across industries, leading organizations are redesigning roles and workflows to integrate AI into editorial, creative, and strategic functions. Rather than viewing AI as a threat, they're embracing it as a catalyst for efficiency and innovation. The distinction lies in mindset.
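The division of labor in the five-step benefits workflow above can be sketched as a small routing function: the agent handles routine tickets, and anything with missing data or a policy conflict escalates to the human specialist. This is a hypothetical illustration under assumed field names and rules, not a product API.

```python
# Illustrative sketch of the benefits-selection workflow: the agent routes
# routine tickets automatically and escalates exceptions to a human.
# Field names and the "conflict" rule are assumptions for this example.

REQUIRED_FIELDS = {"employee_id", "profile", "policy"}

def route_ticket(ticket: dict) -> str:
    """Return the next action for a benefits ticket."""
    # Step four (exception handling): missing data or policy conflicts
    # go to the human specialist.
    if not REQUIRED_FIELDS <= ticket.keys() or ticket.get("policy") == "conflict":
        return "escalate_to_human"
    # Steps one through three: classify, recommend a package, notify employee.
    return "send_recommendation"

# Step five (audit) would log every routing decision for human review.
print(route_ticket({"employee_id": 101, "profile": {}, "policy": "standard"}))
print(route_ticket({"employee_id": 102}))  # incomplete ticket -> human review
```

The point of the sketch is the human-in-the-loop boundary: the agent never silently resolves an exception; it only acts where the data and policy are unambiguous.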
Success favors the enterprises that approach AI adoption with purpose and foresight: retraining teams, redesigning processes, and embedding AI where it amplifies human capability. At its core, this transformation shifts AI from an emotional flashpoint to an operational advantage. Creative professionals (writers, editors, designers) are not being replaced; their capacity is being elevated. Enterprise AI now accelerates research, produces initial drafts, and automates routine production, creating time and space for people to focus on what they do best: shaping strategy, safeguarding brand integrity, and exercising human judgment where it matters most.

In the case study below, AI and human agents in existing workflows orchestrate secure flows between public and private data sets within a private cloud, automating claims, protecting sensitive information, and enforcing compliance. This pattern unifies public cloud agility with on-premises governance and end-to-end data integrity.

Case study: Clerk of the Circuit Court, a U.S. county. The clerk of the circuit court in one U.S. county oversees a complex judicial ecosystem, maintaining court records, securing evidence, collecting fines, and managing documentation across 24 municipalities and unincorporated areas. For decades, paper-based systems created bottlenecks: clerks manually verified citations, judges relied on manila folders, and every data entry step slowed the pace of justice. The challenge wasn't just inefficiency; it was scale. As population growth accelerated, the volume of traffic cases threatened to overwhelm staff capacity and delay court outcomes. By introducing AI-driven automation inside a governed case management platform, the county transformed its courtroom operations. Agentic AI now monitors case status, validates records, and routes documents securely across systems.
While human clerks remain in the loop to approve exceptions and verify edge cases, judges can instantly retrieve digital case histories, access integrated state traffic databases, and review prior citations through a unified dashboard. Behind the scenes, AI agents connect the county's N-Card ticketing system, document repositories, and court databases, automating the capture, classification, and validation of information. Sensitive data never leaves the county's private cloud, and every decision point is logged for audit and compliance. What once required manual data entry now happens in seconds, freeing clerks to focus on higher-value tasks and reducing the risk of human error. With this hybrid AI model, combining agentic automation with human oversight, the clerk's office has modernized its entire information ecosystem. The outcome? Faster case processing, stronger data integrity, and full alignment with state mandates for information sharing. It's a blueprint for responsible AI in government, showing how intelligent systems and human judgment can work side by side to deliver more trusted, efficient public service.

4. Performance management and measurement

Measuring the success of your agentic AI application. Measuring outcomes is at the heart of whether an agentic AI application succeeds or fails. Therefore, these measurements must be clearly defined up front in a job description that outlines the agent's objective, tasks, and metrics. This job description, created while developing the business case, is the key to setting the proper scope for the agent's role. Today, there is no universally defined or adopted standard for KPIs for agentic EAI applications, although there are standards across many countries and regulatory bodies around AI ethics, transparency, risk, and so on.
In the absence of a universal standard, organizations can leverage the many frameworks in the public domain, either adopting one directly or adapting it, depending on their specific implementation. A framework proposed in the International Journal of Scientific Research and Modern Technology looked at five vital dimensions: model quality, system performance, business impact, human-AI interaction, and ethical and environmental considerations. On the model quality dimension, the paper examined accuracy, precision, task completion, hallucination, and output. Operational key performance indicators (KPIs) focused on latency, throughput, and resource utilization. Business impact assessed ROI, cost savings, productivity improvements, and market impact. Human-AI interaction was examined in terms of user satisfaction, trust, adoption, and engagement. Finally, in terms of ethical and environmental considerations, the paper examined bias, fairness, transparency, environmental impact, and ethical drift.

While not all of these KPIs may be relevant to every agentic AI deployment, it's important for teams to adopt a holistic approach to selecting the different dimensions. After the relevant KPIs are determined, they can be formalized into a scorecard. This approach mirrors the one successfully used for robotic process automation (RPA), where scorecards were key to driving automation and efficiency. The same rigor should be applied here, with daily assessments against the agent's key dimensions.

Equally important is having a defined remediation framework and process. Not all enterprise AI applications will be successful, and for those that fail, this framework is essential to understanding the root cause and making a data-driven decision about whether to invest in a fix or decommission the application. This process directly supports a fail-fast culture, a shift that many organizations struggle with.
Teams must be taught to accept that it is better to decommission a failing project than to force something to work that will never achieve its anticipated outcome. It's also important to remember that EAI thrives on secure, contextual data, not just quantity. By connecting structured systems, unstructured content, and interorganizational flows across the EIM cloud, organizations can train and deploy agents that act responsibly, trace their decisions, and meet compliance requirements by design. In the next chapter, we'll examine the evolution from agentic AI to AGI.

The Fast Five Download

1. Choose the right organizational model for AI deployment. Selecting an appropriate deployment model (centralized, hub and spoke, federated, or hybrid) is foundational for successful agentic AI adoption. The model should clarify ownership, accountability, and governance, balancing control with agility based on your industry's needs and regulatory environment.

2. Adopt a phased and cohort-based rollout strategy. Rushed AI adoption often leads to missteps. Successful organizations roll out agentic AI applications in deliberate phases targeting specific business units or regions. This cohort-driven approach ensures readiness, maximizes user adoption, and tailors support to each group's unique requirements.

3. Start small, focus on business value, and build from success. Begin with simple, well-defined use cases, prioritizing business units with mature processes and strong data quality. Empower change agents such as product owners to drive adoption, and iterate on early wins before scaling to more complex scenarios.

4. Integrate digital agents into workforce planning and management. Treat AI agents as an extension of your workforce. Define clear roles and expectations for digital agents, adopting HR practices like job descriptions and performance objectives to ensure alignment between human and AI team members and to foster organizational understanding and acceptance.

5. Implement rigorous performance measurement and remediation. Success hinges on clearly defined KPIs across multiple dimensions: model quality, business impact, human-AI interaction, ethics, and system performance. Develop scorecards for agents, regularly assess outcomes, and be prepared to remediate or retire underperforming applications. Embrace a culture that learns from failures and iterates quickly.
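The scorecard approach described above can be sketched as a simple weighted rollup across the five KPI dimensions. This is a minimal illustration; the weights and daily scores are assumptions for the example, not any published standard, and a real deployment would calibrate both against its own business case.

```python
# Minimal sketch of a daily agent scorecard across the five KPI dimensions.
# Weights and scores below are illustrative assumptions, not a standard.

WEIGHTS = {
    "model_quality": 0.30,
    "system_performance": 0.20,
    "business_impact": 0.25,
    "human_ai_interaction": 0.15,
    "ethics_and_environment": 0.10,
}

def scorecard(scores: dict) -> float:
    """Weighted average of per-dimension scores (each on a 0-100 scale)."""
    return sum(WEIGHTS[dim] * scores[dim] for dim in WEIGHTS)

daily_scores = {
    "model_quality": 90,
    "system_performance": 80,
    "business_impact": 70,
    "human_ai_interaction": 85,
    "ethics_and_environment": 95,
}
print(scorecard(daily_scores))  # one figure to trend day over day
```

A daily score trending below an agreed threshold would then feed the remediation framework: investigate the root cause, and make a data-driven call to fix or decommission the agent.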