Enterprise Artificial Intelligence: Building Trusted AI in the Sovereign Cloud
The decade of responsible intelligence has begun — are you ready?
Enterprise AI is hitting a wall: Public models aren’t trained on your business data, but you can’t hand over your organization's proprietary information to a public system. The definitive roadmap for this new reality is Enterprise Artificial Intelligence: Building Trusted AI in the Sovereign Cloud, a new book written by OpenText leaders. Listen now to learn why this book is a must for organizations looking to move from isolated AI experiments to enterprise-grade deployments.
Learn more here: https://www.opentext.com/resources/enterprise-artificial-intelligence-building-trusted-ai
Chapter 6: The Governance of EAI
AI governance provides the policies, processes, and controls that turn Enterprise AI (EAI) into a trusted, strategic asset. Explore how strong EAI governance underpins responsible, sustainable innovation across the enterprise.
Chapter 6. The Governance of EAI. As discussed in the previous chapter, with technological advances comes the need for effective governance and controls. While data governance is more mature in its history and evolution, the need for AI governance is catching up quickly. AI governance provides the policies, processes, and controls that ensure EAI technologies are aligned with organizational objectives and regulatory requirements. In this chapter, we'll explore how governance transforms AI from a technical capability into a trusted strategic asset. We'll also examine the importance of AI governance in both private and public sectors, with a focus on scope, ethics, compliance, risk management, and accountability.

According to Gartner, only 12% of organizations have implemented a dedicated AI governance framework, while 55% report they have not yet done so. As AI becomes embedded in every aspect of business, governance has emerged as its critical foundation. Effective AI governance assigns responsibility, defines oversight, and ensures that intelligent systems operate ethically and transparently. Without it, organizations risk exposing themselves to bias, privacy breaches, and reputational harm. Yet many still struggle, lacking the expertise, coordination, and unified data needed to govern AI at scale. Looking ahead, governance will determine not just how AI is deployed, but how it earns trust. Future-ready enterprises are already aligning governance with evolving regulations and embedding responsible AI practices into their operations from the start. In the following case study, read about how a global consulting firm is one of Gartner's 12% when it comes to AI governance.

Case study: a global consulting firm. The firm is an innovative leader in online and mobile strategy, design and development, and cybersecurity, offering world-class knowledge and resources from the leading global business and technology consultancy.
They recognize today's digital inflection point, one where artificial intelligence, automation, and cloud technologies are reshaping business models, workforce dynamics, and even organizational culture. The following are excerpts from an interview with a top tech analyst at the firm.

Alongside digital transformation, data itself has evolved. It's no longer just about transactions, it's about context. Value now lies in the relationships between structured and unstructured data, the conversations, images, and signals that give meaning to what's measured. AI and analytics make it possible to extract that meaning at scale, transforming unstructured information into actionable insight. When combined on a unified information platform, AI-driven analytics unlock exponential potential, revealing connections and risks we could never see before. But with this opportunity comes responsibility. As AI deepens its role in enterprise decision making, the importance of cybersecurity has never been greater. Every intelligent system depends on trusted data and secure infrastructure. A robust security practice that encompasses governance, compliance, and proactive defense helps secure the company's data. We understand that no matter how technologies evolve, the fundamentals remain constant: clear frameworks, enforced policies, and vigilant oversight, consolidated on an EIM platform. As enterprises embrace hybrid and multi-cloud models, the question isn't just where to store data, but how to protect it. AI amplifies both the power and the risk of digital transformation, making cybersecurity not just a technical safeguard, but the foundation of trust in an intelligent enterprise.

What is the scope of EAI governance? AI governance guides how an enterprise develops, deploys, and manages AI in alignment with its strategic objectives and regulatory obligations.
As an extension of both corporate and IT governance, EAI governance addresses challenges such as model oversight, bias, data management, cybersecurity risk controls, and compliance. Most organizations now establish policies on responsible AI use that embed principles such as fairness, transparency, and accountability throughout the AI life cycle. These guardrails are becoming essential for ensuring trustworthy, responsible AI adoption and maintaining public confidence in modern intelligent systems. Leading frameworks that guide EAI governance today include the Organization for Economic Cooperation and Development (OECD) AI Principles, the European Union's AI Act, the National Institute of Standards and Technology (NIST) AI Risk Management Framework (AI RMF), and the International Organization for Standardization (ISO) standards on AI. Increasingly, EAI governance also extends beyond the enterprise's immediate systems to the models and infrastructure it depends on. New regulatory frameworks, including the EU AI Act (2024) and U.S. Executive Order 14110 on AI Safety (2023), distinguish between AI system governance, the way an enterprise manages its own use of AI, and model-level governance, the obligations of those developing or fine-tuning general-purpose or frontier AI models. Enterprises are now expected to perform due diligence on model suppliers before integration (NIST 2023; European Commission 2024). This introduces accountability across the AI supply chain.

Data ethics is about more than compliance. It's about doing the right thing, even when the law doesn't require it.

Ensuring Ethical and Responsible AI. Ethical governance ensures that AI systems are fair, transparent, and respectful of human rights. While the term ethical AI may feel contemporary, its roots trace back decades to foundational discussions in computer ethics. In his landmark 1985 essay, What is Computer Ethics?
Moor highlighted that a typical problem in computer ethics arises because there is a policy vacuum about how computer technology should be used. Computers provide us with new capabilities, and these in turn give us new choices for action. Often, either no policies for conduct in these situations exist, or existing policies seem inadequate. A central task of computer ethics is to determine what we should do in such cases, i.e., to formulate policies to guide our actions. Of course, some ethical situations confront us as individuals and some as a society. Computer ethics includes consideration of both personal and social policies for the ethical use of computer technology.

Later writings considered AI ethics specifically, including the Principles on Artificial Intelligence (2019) by the OECD and, more recently, the Recommendation on the Ethics of Artificial Intelligence, published by the United Nations Educational, Scientific and Cultural Organization (UNESCO) in 2022. With 194 member states in UNESCO, this latter set of recommendations is the most wide-reaching framework published to date. The recommendations provide a strong rationale for the need for ethical guidelines. AI systems raise new types of ethical issues that include, but are not limited to, their impact on decision making, employment and labor, social interaction, healthcare, education, media, access to information, the digital divide, personal data and consumer protection, the environment, democracy, the rule of law, security and policing, dual use, and human rights and fundamental freedoms, including freedom of expression, privacy, and non-discrimination. Furthermore, new ethical challenges are created by the potential of AI algorithms to reproduce and reinforce existing biases, and thus to exacerbate already existing forms of discrimination, prejudice, and stereotyping. The recommendations further observe that as AI takes on more and more tasks previously executed by human beings, its impact on humanity will expand.
It has the potential to profoundly alter how we understand the world around us, as well as our very sense of self.

Ethical AI recommendations include, but are not limited to, the following principles: proportionality and do no harm, covering the breadth of uses of AI and their appropriateness to the context of use; safety and security, the avoidance of harm, including security risks; fairness and non-discrimination, including requirements to promote social justice and to safeguard fairness; sustainability, including consideration of human, social, cultural, economic, and environmental impacts on sustainability; the right to privacy and data protection, governing the use of data for AI; human oversight and determination, maintaining human control over AI; transparency and explainability, including a broad understanding and explanation of the use of AI and relevant data in specific cases; responsibility and accountability; the promotion of human rights and freedoms; awareness and literacy, leveraging education, training, and media literacy to increase awareness of the use of AI; and multi-stakeholder and adaptive governance and collaboration, respecting international laws and national sovereignty.

As ethical AI governance institutionalizes the do no harm principle, embedding values such as nondiscrimination, accountability, and transparency is critical for businesses and imperative for public sector organizations. Shifting the conversation from a checklist to an engineered system means we should think of ethics as infrastructure. That means embedding ethical guardrails into every phase of the AI life cycle. In doing so, ethics becomes part of your operational architecture, defining what your organization stands for and how it behaves, not just what it produces. As enterprises move toward responsible innovation, ethical and responsible AI are indispensable for sustaining growth and trust.

Managing risks and safeguarding trust.
AI introduces unique risks not covered by traditional IT governance. For this reason, strengthening accountability and oversight is essential. From a reliability perspective, because AI can fail in unpredictable ways, rigorous testing is required with strong fallback protocols. On the quality and performance side, continuous monitoring helps minimize risks. Most important, though, is ensuring security, privacy, and safety, because incidents can damage an organization's reputation and erode public trust. Integrating AI risk management with broader enterprise risk management processes ensures AI risks are treated with the same rigor as financial or operational risks. The following best practices make up a robust governance framework.

Integration with enterprise governance. As pointed out above, EAI governance intersects with existing structures (corporate governance, IT governance, and risk management) and requires the definition of roles, responsibilities, and oversight at all levels. Trusted AI governance is achieved through task-based policy management as an extension to existing role-based access controls (RBAC) for humans and, now, agents. Building trust requires cross-functional teams that blend technologists, ethicists, legal advisors, and sometimes even customers. Because AI is a newer technology, the risk profile should be considered high while the full scope of risk is being quantified.

Policies and standards. For enterprise organizations and their employees, having clear policies, codes of conduct, and internal standards makes expectations explicit and enforceable. These must be reinforced often and coded into AI agent business rules so autonomous decisions follow the exact same ethical framework to achieve desired outcomes. Most critical is the use of protected or private company data and ensuring that the guidelines for its use with respect to public AI models are straightforward.

Auditability.
Governance requires robust documentation, logging, and audit trails for all AI systems, supporting both internal reviews and regulatory audits. Setting this as a core operating principle helps you stay proactive and identify issues before they escalate. We described these aspects of data governance in the previous chapter.

Transparency. Transparency is the foundation of trustworthy AI governance. It ensures that decisions made by intelligent systems are understandable, traceable, and open to scrutiny. The principles of transparency, explainability, and contestability provide a structured approach. Organizations must design monitoring processes and conduct regular health checks to evaluate how clearly AI systems communicate their reasoning and how fairly they operate. By documenting decision logic, disclosing data use, and enabling users to question or challenge outcomes, enterprises transform AI from a black box into an accountable, human-centered system, one where visibility, fairness, and trust are built into every decision.

Development and Operations. A privacy by design approach to AI development and operations ensures that data protection is engineered into every stage of the system life cycle, from conception to deployment and beyond. Rather than treating privacy as a compliance requirement or an afterthought, it becomes an architectural principle guiding how data is collected, processed, and retained. This means minimizing data use to what's strictly necessary, applying anonymization and encryption by default, and embedding user consent and control mechanisms directly into workflows. Continuous monitoring and privacy impact assessments keep systems accountable as they evolve. By aligning development and operations with privacy by design principles, organizations not only reduce regulatory risk, they also build trust, resilience, and a competitive advantage grounded in ethical innovation.
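To make the auditability and transparency requirements above concrete, here is a minimal sketch of a tamper-evident AI decision log. The names (`AuditRecord`, `AuditTrail`) are illustrative, not from any particular product: each decision is recorded with the model version, a redacted input summary (privacy by design), the output, and its documented rationale, and entries are hash-chained so later alteration is detectable during an audit.

```python
import json
import hashlib
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass
class AuditRecord:
    """One auditable AI decision: what went in, what came out, and why."""
    model_id: str          # model name and version used for the decision
    input_summary: str     # redacted/summarized input, never raw personal data
    output: str            # the decision or generated content
    rationale: str         # documented decision logic, open to later scrutiny
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

class AuditTrail:
    """Append-only log with hash chaining so tampering is detectable."""
    def __init__(self):
        self.records = []
        self._last_hash = "0" * 64

    def log(self, record: AuditRecord) -> str:
        entry = asdict(record)
        entry["prev_hash"] = self._last_hash
        digest = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()).hexdigest()
        entry["hash"] = digest
        self.records.append(entry)
        self._last_hash = digest
        return digest

    def verify(self) -> bool:
        """Recompute the hash chain to confirm no record was altered."""
        prev = "0" * 64
        for entry in self.records:
            body = {k: v for k, v in entry.items() if k != "hash"}
            if entry["prev_hash"] != prev:
                return False
            recomputed = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()).hexdigest()
            if recomputed != entry["hash"]:
                return False
            prev = entry["hash"]
        return True
```

Calling `verify()` after any record has been edited returns False, giving auditors a cheap integrity check; a production system would typically add signing and write-once storage on top of this idea.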
Incident Response. While Chapter 11 covers the need for a different approach to operations, it is worth highlighting in the context of AI governance and risk management that protocols for rapid response to AI failures, including human appeals and remediation, are essential for accountability and continuous improvement. All of these elements are critical to managing risk and safeguarding trust. Because many organizations have not yet fully operationalized AI, it is worth considering how these elements impact your overall strategy. Today's AI governance also demands explicit security and resilience controls tailored to generative AI. Traditional IT security frameworks often do not cover threats specific to generative systems, such as prompt injection, hallucination, and output manipulation. NIST's Generative AI Profile (2024 draft) and CISA's Secure by Design guidance (2024) recommend AI-specific threat modeling, adversarial red teaming, monitoring for data exfiltration, and provenance tracking of model outputs (CISA 2024; NIST 2024). Organizations should institute measures for the novel risks associated with generative models to ensure security, reliability, and accountability across the entire generative AI life cycle.

In the following feature, a European telecom company is managing the life cycle of its information to meet compliance and data governance objectives, with strict rules about how long to keep information and when to dispose of it.

Case study: T-Systems. With operations in more than 20 countries, T-Systems, the business customer brand of Deutsche Telekom, is the provider of choice for conducting global business by many major European customers. Around 160,000 companies and public bodies make use of T-Systems integrated services, everything from managing data centers and global internet protocol services to developing and administering applications.
T-Systems teams needed a platform that would allow them to come together quickly and easily to exchange information and ensure the professional and efficient execution of customer projects. Approximately 40,000 T-Systems employees are now using a company-wide Enterprise Content Management (ECM) platform for collaboration, document management, and knowledge management. T-Systems is enhancing its collaboration platform with an extranet gateway to facilitate collaboration with customers and partners, and a lifecycle management system for project rooms with storage periods of up to 10 years. This second feature will enable T-Systems to meet its compliance obligations in the area of corporate governance while simultaneously making valuable but dormant project information searchable at a later date.

Leading frameworks, regulations, and standards. The regulatory landscape for AI is rapidly evolving, with compliance now a major governance driver. There is a mix of voluntary frameworks, like the National Institute of Standards and Technology (NIST) AI Risk Management Framework (AI RMF), and obligatory frameworks, like the EU AI Act, covered in Chapter 5. But even the voluntary frameworks are becoming a requirement for many organizations in terms of implementing a structured approach to trustworthy AI. Governance frameworks serve a purpose in helping to translate requirements into controls, policies, and oversight requirements to guide enterprises through adoption. Although the regulatory landscape is evolving, there are a variety of frameworks available today that can help an enterprise structure AI governance.

OECD AI Principles. These were adopted in 2019 by 46 countries and represented one of the first standards on AI governance. Five key principles pertaining to AI governance were outlined as follows. One, AI should benefit people and the planet by driving inclusive growth and well-being.
Two, AI systems should be designed to respect human rights, democratic values, and diversity. Three, there should be transparency and explainability in AI systems. Four, AI systems must be robust, secure, and safe throughout their life cycles. Five, organizations and individuals developing, deploying, or operating AI should be accountable for its outcomes.

EU AI Act. The regulation of the European Parliament and of the Council laying down harmonized rules on artificial intelligence (the Artificial Intelligence Act) was adopted in 2024. It is one of the first comprehensive legal frameworks for AI, setting requirements based on different risk levels, ranging from minimal to unacceptable. Key requirements of the Act are as follows: transparency for AI-generated content and biometric systems; strict compliance, testing, and documentation for high-risk AI systems (for example, in healthcare, critical infrastructure, and public administration); prohibition of AI use that manipulates behavior or exploits vulnerabilities; and mandatory human oversight, risk management systems, and alignment with the EU's digital and data governance regulations.

NIST AI RMF. The NIST AI Risk Management Framework, published in 2023, provides a voluntary, widely adopted framework for identifying, assessing, and managing risks in AI systems. It looks at mapping context and intended use, measuring AI risks, managing those risks through a series of controls, and governing AI systems across their life cycle. This framework is designed to work alongside security guidelines for zero trust architectures. The framework acknowledges that AI technology is still evolving. The AI RMF is intended to be practical, to adapt to the AI landscape as AI technologies continue to develop, and to be operationalized by organizations in varying degrees and capacities so society can benefit from AI while also being protected from its potential harms.

ISO/IEC 42001:2023.
ISO and the International Electrotechnical Commission (IEC) have developed standards for AI management and governance. These include standards on AI Management Systems (AIMS, ISO/IEC 42001:2023), AI Concepts and Terminology (ISO/IEC 22989:2022), and AI System Lifecycle Processes (ISO/IEC 23053:2022). Together, these standards and principles give enterprises a structured foundation for responsible innovation. They translate high-level ideals (fairness, transparency, accountability) into actionable governance practices aligned with global expectations. By anchoring their AI programs to these frameworks, your organization can build systems that are not only compliant, but consistent, explainable, and trusted across borders. There are other standards and frameworks, but these are the most broadly used and adopted. Over time, as technology evolves, new frameworks will emerge, so taking an adaptable approach to how you define controls is key to preventing future rework.

In the following feature, Metro Vancouver is proving that good governance is good business. An EIM backbone is helping to make sure that the region can prove through audits that the documents in the system are trustworthy records, and that they comply with statutes and regulations while promoting good business practice.

Case study: Metro Vancouver. Metro Vancouver is one of 29 regional districts that were created by the provincial government to ensure that all British Columbia residents have equal access to commonly needed services. Regional parks, affordable housing, labor relations, and regional urban planning are significant services provided directly to the public. The region supports thousands of full-time employees and serves a population of over 3 million. The region needed a central, secure repository for storing and distributing electronic records.
An e-government solution would enable them to enforce retention periods and disposition rules based on preset periods to help control risks, reduce storage costs, and ensure regulatory compliance. They were also looking for an improved user experience for the profiling of documents that included automation and improved accuracy. The system currently contains almost 2 million documents. An automated records management solution removes the complexities of electronic records management, making the process transparent to the end user. It maps record classifications to retention schedules, which fully automates the process of ensuring that records are kept as long as legally required and then destroyed when the time elapses. To enforce governance across the region, each of its 14 departments is responsible for complying with policies, best practices, and procedures issued by the corporate records team. The system is helping to make sure that the region can prove through audits that the documents in the system are trustworthy records and that they comply with statutes and regulations while promoting good business practice.

The path forward on EAI governance. AI governance is essential for both the public and private sectors. The implementation timelines and scope are relatively similar; however, there are a few subtle differences. For the public sector, the emphasis is on transparency and citizen trust. Governance is driven by public accountability, ethics, and compliance with human rights. For the private sector, the focus is more on innovation, business risk, and regulatory compliance. Governance is integrated with corporate social responsibility and environmental, social, and governance (ESG) agendas, balancing agility with impact. Both sectors benefit from aligning with international frameworks and standards, and both must treat AI governance as a dynamic, evolving program.
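The classification-to-retention mapping described in the records management feature above can be sketched in a few lines. The schedule below is purely illustrative (the classifications and periods are invented for the example and are not legal guidance): a record's classification determines how long it is kept after closure and when it becomes due for disposition.

```python
from datetime import date, timedelta

# Hypothetical classification-to-retention schedule. Real schedules come
# from statutes and corporate policy, not from code defaults.
RETENTION_SCHEDULE = {
    "project-room": timedelta(days=365 * 10),   # e.g., a 10-year storage period
    "hr-record": timedelta(days=365 * 7),
    "correspondence": timedelta(days=365 * 2),
}

def disposition_date(classification: str, closed_on: date) -> date:
    """A record is kept as long as its schedule requires, then destroyed."""
    return closed_on + RETENTION_SCHEDULE[classification]

def due_for_destruction(classification: str, closed_on: date,
                        today: date) -> bool:
    """True once the retention period for this record has elapsed."""
    return today >= disposition_date(classification, closed_on)
```

The point of the design is that end users never see this logic: they classify a document once, and retention and disposition follow automatically from the schedule.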
Overall, EAI governance ensures ethical principles, legal compliance, risk management, and accountability are embedded throughout the AI life cycle and across organizational boundaries. Successful AI governance blends high-level principles with concrete processes and tools, supported by a culture of responsibility at all levels. As AI technologies and regulations evolve, ongoing investment in governance will be critical, not only to mitigate risk but to build trust, drive sustainable innovation, and secure competitive advantage. AI governance goes beyond purely technical considerations to incorporate ethical and social dimensions too. Many organizations now adopt formal policies around responsible AI use, embedding key principles like fairness, transparency, accountability, and respect for human rights throughout the AI life cycle. These guardrails are becoming essential not just for compliance, but for sustaining trust in AI-driven operations. Without them, the promise of intelligent systems risks being undermined by ethical lapses, privacy breaches, or governance failures.

The next wave of AI governance concerns the alignment and control of autonomous and agentic AI systems that can initiate actions without explicit human approval. Governance requirements are expanding to include autonomy limits, real-time supervision, and escalation pathways when models display deceptive or goal-seeking behaviors (UK AI Safety Institute 2024). Similarly, compute and capability thresholds are becoming a policy tool to identify when AI development should trigger external review (NIST 2024; CISA 2024). For enterprises, this means EAI governance must shift from static policy compliance to continuous monitoring, assurance, and adaptive risk management. Organizations that institutionalize AI governance as a living system of controls, oversight, and external validation will be best positioned to innovate responsibly at the frontier.
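The autonomy limits and escalation pathways described above can be illustrated with a small guard that reviews each action an agent proposes before it executes. This is a hypothetical sketch (the class names, action kinds, and thresholds are invented for the example): low-impact actions proceed autonomously, actions above a defined impact threshold are escalated to a human, and some action kinds are never autonomous.

```python
from dataclasses import dataclass
from enum import Enum

class Verdict(Enum):
    ALLOW = "allow"          # proceeds without human involvement
    ESCALATE = "escalate"    # routed to a human for approval
    BLOCK = "block"          # never permitted autonomously

@dataclass
class ProposedAction:
    kind: str        # e.g. "send_email", "delete_records", "spend"
    amount: float    # monetary or impact magnitude, 0 if not applicable

class AutonomyGuard:
    """Illustrative autonomy-limit policy for an AI agent."""
    def __init__(self, spend_limit: float, blocked_kinds: set):
        self.spend_limit = spend_limit        # escalation threshold
        self.blocked_kinds = blocked_kinds    # kinds that are never autonomous

    def review(self, action: ProposedAction) -> Verdict:
        if action.kind in self.blocked_kinds:
            return Verdict.BLOCK
        if action.amount > self.spend_limit:
            return Verdict.ESCALATE           # human-in-the-loop pathway
        return Verdict.ALLOW
```

A real deployment would log every verdict to the audit trail and add real-time supervision, but the core governance idea, that the agent's business rules encode the same policy framework that binds human staff, is already visible here.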
In the following case study, discover how an enterprise software leader reduced document types by 96% to prepare for AI innovation and automation.

Case study: a global ERP vendor. "The possibilities for employee self-service are unlimited. For example, if an employee submits a document to change their address or marital status, AI-powered automation can update their employment record without anyone from the HR team getting involved," says the company's Head of Global HR Delivery.

Managing millions of employee records across a global workforce presented significant governance and compliance challenges. Manual, time-consuming processes made regulatory requirements, such as GDPR, difficult to meet consistently, while aging systems lacked compatibility with next-generation HR technologies. To support modernization, the organization set out to transform its information governance framework, automating compliance and embedding security and privacy by design into every stage of HR data management. The new governance model unified document retention, disposition, and access policies across regions, replacing thousands of inconsistent templates with standardized global formats. Automated retention scheduling and deletion policies now ensure continuous compliance, reducing operational risk while freeing HR teams from manual oversight. Strong encryption, access controls, and audit trails reinforce data integrity, while governance automation enables faster, more reliable decision making. With this foundation in place, the organization is preparing for the next phase, leveraging AI to enhance document classification, automate records management, and further strengthen governance at scale. By combining robust technical controls with strong oversight and accountability mechanisms, it is evolving from compliance maintenance to proactive governance, building a secure, data-driven environment ready for intelligent innovation.

The FAST Five Download. One, make AI governance a strategic imperative.
Establish executive support and clear ownership for AI governance to ensure all initiatives align with ethical, legal, and organizational objectives. Two, build ethics and accountability into the AI life cycle. Embed ethical guidelines, risk controls, regulatory compliance, and accountability measures at every stage, from design to deployment and monitoring, to actively prevent bias and unintended harm. Three, activate leading frameworks and standards. Implement frameworks such as the OECD AI Principles, the EU AI Act, the NIST AI RMF, and ISO/IEC 42001 to translate best practices and regulatory demands into actionable controls and oversight. Four, integrate AI governance across the enterprise. Align AI governance with corporate, IT, and data governance by clarifying roles, responsibilities, and processes, ensuring comprehensive oversight from project planning through decommissioning. Five, drive continuous improvement and trust. Institute ongoing audits, adapt governance protocols as technologies and regulations evolve, and embed a culture of learning and accountability to sustain trust and long-term value.