Enterprise Artificial Intelligence: Building Trusted AI in the Sovereign Cloud

Chapter 2: The Rise of Enterprise Artificial Intelligence

OpenText


Enterprise Artificial Intelligence (EAI) is redefining enterprise performance by turning intelligence into context. Explore how trusted, secure data is the cornerstone of effective AI. 


As the technology landscape evolves, artificial intelligence has the potential to reshape many aspects of our lives, from enhancing workplace productivity to revolutionizing how we interact with information and each other. AI is becoming a critical part of our personal and professional lives. Enterprise Artificial Intelligence (EAI) is redefining enterprise performance by turning intelligence into context. As adoption deepens across customer experience, operations, and content publishing, the real advantage isn't just automation; it's contextual understanding. This chapter will explore key AI technology concepts, fundamental principles, and applications across different sectors.

The number of companies that have fully modernized AI-led processes has nearly doubled, from 9% in 2023 to 16% in 2024. Compared to peers, these organizations achieve 2.5 times higher revenue growth, 2.4 times greater productivity, and 3.3 times greater success at scaling generative AI use cases.

Contextual intelligence enables AI to grasp business intent, interpreting not just data but the structures, workflows, and goals that define how an organization creates value. By mapping relationships across metrics, processes, and business logic, AI can anticipate outcomes, model dependencies, and recommend actions aligned with strategic priorities. It effectively bridges the gap between analytics and execution, transforming insight into measurable decisions that drive growth, efficiency, and competitive differentiation.

However, alongside its benefits, some important ethical considerations and implications must be addressed. By understanding AI's dual nature, its potential to innovate and its capacity to disrupt, we can better prepare for a future in which this technology plays a central role. Having discussed data in the first chapter, here we will start to unpack how trusted, secure data is the cornerstone of effective AI. Good data is the key to driving innovation while preventing negative disruption as AI proliferates.

Defining AI

There are many definitions of AI, but according to the International Organization for Standardization (ISO), artificial intelligence (AI) is a branch of computer science that creates systems and software capable of tasks once thought to be uniquely human. It enables machines to learn from experience, adapt to new information, and use data, algorithms, and computational power to interpret complex situations and make decisions with minimal human input.

AI is not a single concept or a single technology. In fact, AI is a broad field with numerous subfields, each with its own objectives and specifications. It is an umbrella term that encompasses many technologies, including machine learning, deep learning, and natural language processing (NLP).

Artificial intelligence usually falls into three categories: narrow, general, and superintelligence. The easiest way to understand the differences is to think about how each one learns. Artificial narrow intelligence (ANI) is like a student who excels at just one subject. For example, it might be great at playing chess, recognizing faces, or predicting traffic patterns, but it can't do much outside its specialty.
DeepMind's AlphaGo and its successor AlphaZero marked major milestones in AI by mastering complex games through self-play and reinforcement learning techniques, demonstrating powerful generalization within constrained domains, yet still firmly within the realm of narrow AI rather than true AGI. Artificial general intelligence (AGI), on the other hand, is more like a well-rounded graduate student. AGI can pick up new subjects, connect ideas, and solve problems in many different areas, much as a person can. Then there's artificial superintelligence (ASI), which goes a step further. ASI would be a kind of genius that outperforms humans at everything: reasoning, creativity, and even improving itself.

The European Commission's AI Watch report describes ANI as systems that can perform one specific task and operate within a predefined environment. ANI can process data at high speed and boost productivity and efficiency in many practical applications. While ANI is superior in specialized domains, it is incapable of generalization, i.e., of reusing learned knowledge across domains. The report further explains that AGI refers to machines that exhibit human intelligence; in other words, AGI aims to perform any intellectual task that a human being can. We are not yet at the stage of AGI, as AI's intelligence is not human in nature; it is simulated. To fully achieve AGI, AI systems need to be capable of learning new tasks without retraining, exhibiting autonomous reasoning, and understanding cause and effect in context. ASI is not yet defined in the standards and is acknowledged as a future state beyond AGI. Artificial superintelligence is a hypothetical software-based AI system with an intellectual scope beyond human intelligence. At the most fundamental level, this superintelligent AI would have cutting-edge cognitive functions and thinking skills more highly developed than any human's.

Artificial intelligence can be classified in two primary ways: by capability, how closely it approaches human-like cognition, and by functionality, how it behaves and interacts with data. AI researcher Arend Hintze introduced a widely recognized functional framework that explains how systems process information and respond to their environment. According to Hintze's framework, at the most basic level, reactive machines operate only on present inputs, with no ability to learn from the past; IBM's Deep Blue chess system is a well-known example. Limited memory AI adds the ability to retain short-term data to inform decisions, and it represents the foundation of nearly all modern AI, from autonomous vehicles to recommendation engines. Beyond this, the field moves into theoretical territory. Theory of mind AI imagines systems capable of understanding human beliefs, emotions, and intent, while self-aware AI represents a hypothetical stage in which machines possess true consciousness and self-perception. While those higher stages remain speculative, understanding this spectrum helps frame where today's enterprise systems operate: overwhelmingly within the limited memory category, where value is realized through responsible data use, governed learning, and disciplined deployment at scale.

As AI has evolved and different categories of AI have been identified, a common theme has emerged: data, compute, and governance are core to the different variants of AI. Models and capabilities may change, but the ability to organize, secure, and operationalize information remains the defining advantage.
This is where enterprise information management serves as a foundation for the application of AI, or Enterprise Artificial Intelligence (EAI).

Enterprise Artificial Intelligence

Enterprise AI is not a separate class of intelligence. It is a term that describes the strategic application and integration of various AI technologies and capabilities within an organization to solve specific problems, automate processes, and drive decision making. EAI primarily falls under the umbrella of artificial narrow intelligence (ANI), or weak AI, as the systems are designed to perform specialized tasks to enhance operations, not to exhibit human-like general intelligence or consciousness.

Enterprise AI is built on trusted and governed data (the sovereign data layer), AI lifecycle management platforms (for example, MLOps and LLMOps), hybrid or sovereign cloud infrastructure, secure APIs and orchestration layers, and agentic AI systems coordinating multiple specialized models. Enterprise AI is a governed architecture, not a single model. It stacks sovereign data, controlled compute, AI lifecycle discipline, secure integration, and agentic orchestration to deliver trusted automation at scale. It is a deployment context for operationalizing proven AI technologies and capabilities.

To deliver value, EAI solutions draw from a broad toolkit. Machine learning enables predictive analytics, with the ability to anticipate equipment failures, forecast demand, or optimize inventory. Natural language processing (NLP) powers intelligent chatbots, document summarization, and customer sentiment analysis. Computer vision brings automation that can be used for manufacturing inspections and for enhancing safety and security monitoring. Robotic process automation (RPA) streamlines structured, repetitive tasks such as data entry and invoice reconciliation. And increasingly, generative AI supports content creation, code generation, and knowledge assistance.

What differentiates enterprise AI is not the underlying model type. It is the governed integration of these technologies into business workflows, data systems, and decision processes. Success comes not from isolated models, but from orchestrating them responsibly at scale, underpinned by trusted data, secure infrastructure, and strong information governance.
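To make the machine learning entry in that toolkit concrete, here is a minimal, illustrative sketch of predictive maintenance in Python with scikit-learn. The sensor features, thresholds, and failure rule below are invented for the example; a real deployment would train on governed historical telemetry rather than synthetic data.

```python
# Illustrative predictive-maintenance sketch using scikit-learn.
# All sensor data here is synthetic; real systems train on governed telemetry.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

# Synthetic features per machine: [temperature_C, vibration_mm_s, runtime_hours]
X = np.column_stack([
    rng.normal(70, 10, 1000),
    rng.normal(3, 1, 1000),
    rng.uniform(0, 10000, 1000),
])
# Invented ground truth: hot, high-vibration, or long-running machines fail more.
y = ((X[:, 0] > 80) & (X[:, 1] > 3.5) | (X[:, 2] > 9000)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
print(f"Holdout accuracy: {model.score(X_test, y_test):.2f}")

# Score a new machine: surface a failure probability for maintenance planning.
risk = model.predict_proba([[85.0, 4.2, 9500.0]])[0, 1]
print(f"Estimated failure risk: {risk:.0%}")
```

The point of the sketch is the workflow, not the model: historical readings plus known outcomes train a classifier whose risk scores can then feed maintenance scheduling, which is the pattern behind the equipment-failure, demand-forecasting, and inventory examples above.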
Enterprise AI differs fundamentally from consumer-grade AI in both purpose and design. Consumer AI focuses on enhancing individual experiences: recommending movies, assisting with personal tasks, or powering virtual assistants. These systems typically operate at small scale, rely on publicly available or user-provided data, and require limited integration with other tools. Their value lies in convenience and personalization for a single user. By contrast, EAI is built for scale, security, and strategic impact. It operates on sensitive, proprietary business data stored in CRM records, ERP systems, and other operational databases, and it must adhere to strict governance, compliance, and cybersecurity requirements. Enterprise AI integrates deeply with existing systems and workflows, automates complex cross-departmental processes, and delivers measurable outcomes such as operational efficiency, risk reduction, cost savings, and innovation. In essence, Enterprise AI is the industrial application of modern AI technologies, engineered to operate across large environments where accuracy, accountability, and trust are as important as intelligence itself.

Discover how an international airport is using enterprise AI to keep more than 90 million passengers moving smoothly across the globe in the following case study.

Case study: an international airport

"Much of our data was siloed across multiple systems, and ensuring accuracy, particularly for tracking passenger flows and processing times, was a real challenge. Staff often lacked real-time insights or predictive tools to manage queues, staffing, and congestion proactively." (Airport IT Service Management Lead)

Serving more than 90 million travelers annually, this airport is among the world's busiest for international passenger traffic and one of the most digitally advanced. It stands out as a global hub known for innovation, efficiency, and exceptional customer service. Since opening in 1960, the airport has expanded dramatically, adding new runways, terminals, and concourses to accommodate growing air traffic. Laying the digital foundation for this growth wasn't easy. With stakeholders ranging from airlines to police, customs, and service providers, communication is complex, and everyone needs access to the same data for coordinated decision making. As part of a broad service management initiative, the airport partnered with a technology provider to extend its monitoring capabilities. An AI operations management component provides centralized, intelligent monitoring and management across complex IT environments. It enhances observability, reduces alert noise, predicts problems, and helps maintain uptime. Real-time insights and intelligent monitoring have transformed IT from a back-end function into a driver of service quality, stakeholder confidence, and customer satisfaction, with measurable impacts. Using AI on top of an EIM infrastructure, the airport has been able to prevent 30% of incidents with proactive monitoring, better align IT operations with business needs, and strengthen customer service through IT excellence.

The Evolution of Modern AI

Over the last 75 years, AI has evolved into what we recognize today, with some years marked by incredible innovation and others by hype as the technology developed. Let's take a walk down memory lane to understand how we got here.

Early 1950s. While AI has evolved over decades, Alan Turing is recognized as one of the early innovators. In his 1950 journal article, Computing Machinery and Intelligence, he proposed that machines could simulate human reasoning and introduced the Turing test for machine intelligence.

Mid-1950s. Fast forward to 1956, when the term artificial intelligence was introduced during the Dartmouth Summer Research Project on Artificial Intelligence. This conference, organized by John McCarthy, Marvin Minsky, Nathaniel Rochester, and Claude Shannon, is recognized as the origin of AI as a research topic. The event brought together researchers to formalize the goal of creating machines capable of human-like reasoning and learning.

1970s. However, the excitement from the conference didn't last long. While research progressed slowly, the reality of AI failed to live up to expectations, and early projects proved unsuccessful. The term AI winter was coined to describe a period in which criticism of the lack of progress, including the Lighthill Report in the UK in 1973, led to government funding being cut off.

1980s. After the 1970s AI winter, AI research got its second wind in the 1980s with the rise of expert systems, programs built on if-then rules designed to mimic the reasoning of human experts.
These systems started showing up everywhere, and for the first time companies began to see real commercial value in AI. This application of AI was narrow and task specific, rather than anything close to true general intelligence.

Late 1980s to early 1990s. But just as things were looking positive, there was a second AI winter. In the late 1980s, the excitement died off as businesses found these programs expensive to build and maintain, and the limitations of their hard-coded logic became apparent. As funding dried up, AI experienced another downturn.

Late 1990s. While AI went through a slower period, research didn't stop. Researchers worked on different approaches, moving away from hard-coded if-then rules and looking at opportunities for machines to learn from data. This was the rise of modern machine learning. Breakthroughs in the 1990s included algorithms for neural networks and decision trees, and this was when learning from data became a critical factor in the evolution of AI.

2010s. Another breakthrough came in 2012 with the dawn of deep learning, when a neural network called AlexNet dominated the ImageNet competition in image recognition, dramatically reducing error rates. This was a breakthrough for research because it showed that deep learning could far surpass previous approaches to visual recognition. Google, Facebook, and later OpenAI built on this momentum, creating AI systems that could not only recognize images but also translate languages and generate text.

Today, generative AI (GenAI) has become mainstream. Large language models (LLMs) like GPT, Claude, and Gemini are maturing and advancing AI capabilities. There has also been a realization that the size of models, the data used for training, and the computing power available are all crucial for performance. With these innovations, consumers are getting hands-on experience with AI, leading to rapid adoption in everyday life. Enterprise AI is also becoming a differentiator for business performance.

The graph summarizes the differences between traditional and generative AI. Traditional AI is primarily used to analyze data and make predictions; it excels at pattern recognition and can analyze data and explain what it sees. Generative AI creates new data similar to its training data; it excels at pattern creation and can use data to create something new based on what is available.

Data, compute, algorithms, and governance as the fuel for the modern enterprise AI engine

Artificial intelligence in the 2020s has advanced through the convergence of three forces, data, compute power, and algorithms, working together under the discipline of governance. Data provides the essential fuel, the raw material that allows AI systems to learn, adapt, and generalize across domains. The quality, labeling, and integration of that data determine how effectively models perform; diverse, well-governed data sets produce more accurate and resilient outcomes. Compute power enables scale. Advances in hardware, particularly graphics processing units (GPUs), tensor processing units (TPUs), and elastic cloud-based infrastructure, have made it possible to train models at a scope that was unimaginable a decade ago. At the same time, algorithmic innovation has accelerated, giving rise to the foundation models that support today's generative and multimodal AI systems. Together, these three elements, data, compute, and algorithms, form the technical core of modern AI. But governance provides the integrity layer that ensures each operates responsibly.
When these forces work in concert, enterprises gain not only progress in reasoning and creativity, but also trust, compliance, and sustainable performance.

As history has shown, the evolution of artificial intelligence has rarely followed a straight path. Progress has unfolded through familiar cycles of optimism and correction, each surge of innovation followed by a period of recalibration. Early breakthroughs often inspired outsized expectations, which gave way to disillusionment when results fell short. Yet these cycles have been essential to the field's maturity, forcing researchers and enterprises alike to balance ambition with realism. Over time, one insight has proven consistent: sustainable enterprise AI capability depends on equilibrium. True progress arises from the synergy among three interdependent pillars, well-managed data, scalable and efficient compute, and continually improving algorithms. Organizations that align these elements within a strong governance framework are the ones turning experimentation into measurable business value.

In the following case study, a medical research company uses enterprise AI to connect clinical, financial, and patient outcome data across the country, providing holistic analysis to help drive value-based healthcare transformation.

Case study: a countrywide healthcare platform

A Dutch medical research company allows its users (hospitals, government, pharmaceutical companies, and insurance companies), under strict data privacy regulations, to compare performance, patient experiences, and outcomes across hospitals and clinicians. It is continually helping healthcare professionals deliver value-based health care using combined clinical, financial, and patient outcome measurement data. The organization needed a solution capable of providing an intuitive online dashboard, suitable for scaling up, and selected a combination of AI and business intelligence (BI) reporting. More than 5,000 users regularly access the solution to review performance and make comparisons with their peers. Using a detailed analytics dashboard, they can drill down to the details behind high-level metrics. With ready access to this analytical intelligence, clinicians have been able to identify areas that are working well and areas that need further consideration, and then apply appropriate improvements where necessary. For example, the solution has helped reduce complications after colon cancer surgeries by more than half over four years. With all hospitals in the Netherlands now using the solution, more clinical areas are being planned for inclusion. The full scope of possibilities AI provides is only just becoming apparent. Areas such as decision support, where patients and clinicians can choose the optimal treatment together, are an exciting use case for the medical research company.

The expanding role of AI agents

AI agents are moving rapidly from concept to core capability. Across industries, they're reshaping how work gets done: automating what's repetitive, accelerating what's strategic, and amplifying human expertise. In customer service, agents now monitor behavior patterns, flag churn risks, and trigger retention campaigns before a support ticket is even filed. In sales, they qualify leads, automate follow-ups, and deliver real-time insights that help close deals faster. Marketing teams are using AI to optimize segmentation and personalize campaigns at scale.
In product development, intelligent agents sift through feedback, benchmark competitors, and accelerate roadmap decisions. Wherever data meets repetition, AI is stepping in, not to replace people, but to extend what they can accomplish.

The graph indicates the three pillars of agentic enterprise AI, understanding (LLM), context (RAG), and execution (MCP), with all three pillars relying on and connected to governance.

The three pillars of agentic AI

Large language models (LLMs) are algorithms that excel at understanding natural language, retrieving information, and delivering clear conversational responses. But executing real tasks, such as configuring marketing campaigns, building user journeys, or testing pricing models, requires context: an understanding of how enterprise systems actually work. This is where Retrieval Augmented Generation (RAG) and Model Context Protocol (MCP) architecture redefine the boundaries of capability.

Retrieval Augmented Generation (RAG) is a process that strengthens LLM performance by injecting relevant, domain-specific knowledge into each response. Proprietary documentation, code repositories, and process instructions, securely integrated through RAG, give an LLM access to the how behind the task. Rather than relying solely on public data, it draws from the organization's governed knowledge base to generate precise, compliant guidance.

The Model Context Protocol (MCP) server completes the loop, acting as the communication layer. An MCP server bridges generative AI with enterprise systems, databases, and APIs. It allows the AI to move beyond conversation into action: retrieving data, performing transactions, or triggering workflows in real time. Modern software environments may include hundreds of MCP endpoints, each enabling the AI to execute a specific operation under policy control.

Together, these three pillars, LLMs, RAG, and MCP, form the foundation for agentic AI in the enterprise. They turn language into logic, intent into execution, and insight into measurable outcomes. This is the next evolution of intelligent systems: governed, contextual, and capable of working alongside humans to drive transformation across every function. A sketch of how the pillars fit together in code follows.
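The sketch below is a minimal, hypothetical Python illustration of one agentic step: retrieve governed context (RAG), let an LLM turn intent into a plan, and execute the plan through a policy-controlled, MCP-style tool endpoint. Everything here is a stand-in, not a product API: the keyword retriever substitutes for an embedding-based vector store, call_llm is a stub for a governed LLM endpoint, and TOOL_REGISTRY plays the role of real MCP servers.

```python
# Minimal agentic loop: RAG for context, an LLM for intent, MCP-style tools
# for execution. All names are illustrative stand-ins, not a real product API.
import json

# Pillar 2, context (RAG): a toy keyword retriever over a governed knowledge
# base; production systems would use embeddings and a vector store.
KNOWLEDGE_BASE = [
    "Refunds over $500 require manager approval per policy FIN-12.",
    "Campaign budgets are configured through the marketing API.",
]

def retrieve(query: str, k: int = 1) -> list[str]:
    # Rank documents by word overlap with the query (stand-in for similarity search).
    overlap = lambda doc: len(set(query.lower().split()) & set(doc.lower().split()))
    return sorted(KNOWLEDGE_BASE, key=overlap, reverse=True)[:k]

# Pillar 1, understanding (LLM): stubbed here; a real system would call a
# governed LLM endpoint with the retrieved context injected into the prompt.
def call_llm(prompt: str) -> str:
    return json.dumps({"tool": "issue_refund",
                       "args": {"order_id": "A-17", "amount": 120.0}})

# Pillar 3, execution (MCP-style endpoint): every action runs under policy control.
def issue_refund(order_id: str, amount: float) -> str:
    if amount > 500:
        return "DENIED: manager approval required (policy FIN-12)"
    return f"Refund of ${amount:.2f} issued for order {order_id}"

TOOL_REGISTRY = {"issue_refund": issue_refund}

def agent_step(user_request: str) -> str:
    context = retrieve(user_request)          # RAG: ground the model in policy
    prompt = (f"Context: {context}\nRequest: {user_request}\n"
              "Respond with a tool call as JSON.")
    decision = json.loads(call_llm(prompt))   # LLM: intent becomes a plan
    tool = TOOL_REGISTRY[decision["tool"]]    # MCP-style: plan becomes action
    return tool(**decision["args"])

print(agent_step("Please refund order A-17 for $120"))
```

The design point worth noting is that the model never touches enterprise systems directly: every action flows through a registered, policy-checked endpoint, which is what keeps the loop governable as the number of tools grows.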
A global mining company has done just that, accelerating its research project timelines with AI, as described in the case study below.

Case study: a global mining company

A global mining company headquartered in Brazil produces iron, nickel, copper, manganese, and more. The company undertakes careful research to assess both the viability and the social and environmental impacts of its mining operations, and these research projects can take up to 10 years. Typically, cross-functional teams spend weeks or months manually collecting and consolidating information for review as new product opportunities, market fluctuations, or environmental standards emerge. The company's research projects were hampered by low-level manual work and an inadequate AI assistant that lacked scalability for global reach. With a history of technological innovation to support its operations, the company sought a solution that could help its specialists cut down on repetitive manual tasks, speeding up the research stage for both new and existing mines.

A spokesperson from the global mining company said, "Information about each mining project is typically stored in different formats across siloed systems, and accessing it takes up a significant amount of valuable time as workers search through mountains of documentation. AI, with its ability to quickly ingest and analyze large volumes of data, presented the perfect opportunity for reducing this manual work."

The mining company built a proof of concept with an AI vendor, and through expert training and AI best practices, it has been able to boost its AI response accuracy by 47%. Using AI for researching mining projects, the company has reduced months of low-level manual work and stimulated rapid growth. If the company needs to assess the feasibility of using an existing mine to produce ore, for example, an analysis a geologist would previously complete in two months, the relevant information can now be consolidated in just a few hours, helping the company rapidly close in on viable investment opportunities and stay ahead of market shifts.

A path forward for AI

Looking back over the past 75 years of artificial intelligence, one clear lesson stands out: true success in AI has never been about technology alone. While breakthroughs in data, compute, and algorithms have fueled remarkable progress, the most enduring impact has come from aligning these advances with strong governance, ethical principles, and human collaboration.

Enterprise AI represents more than the next phase of artificial intelligence. It marks a fundamental redesign of how people engage with technology. Unlike traditional generative AI, which produces outputs in response to prompts, agentic systems demonstrate autonomy, initiative, and adaptive reasoning. They can plan, act, and learn within real-world contexts, moving from conversation to execution without constant human direction. This shift signals the quiet retirement of the graphical user interface (GUI) as the dominant interaction model. The buttons, tabs, and menus of the GUI era are giving way to direct collaboration with intelligent agents through natural language and voice. Instead of navigating software, users now express intent, and the system interprets, decides, and acts. The result is not the disappearance of the interface, but its evolution: the visible surface of software recedes, and what remains is intelligence that operates through conversation, context, and trust. In this new paradigm, productivity is no longer measured by clicks per minute, but by the quality of outcomes achieved through human-AI partnership.

Agentic intelligence is already reshaping how organizations produce, manage, and personalize content. Research cycles are automated, first drafts generated, and experiences curated in real time, tailored to audience behavior and preference. AI doesn't just react; it reasons, predicting the downstream impact of every change. For enterprises, the opportunity is clear. Contextual intelligence doesn't replace human judgment; it amplifies it, scaling expertise, governance, and creativity across the organization.

As we look ahead, the next era of AI will be defined not just by greater power, but by greater autonomy, transparency, and accountability. It will be defined by systems that can act intelligently while remaining explainable and aligned with human values. Understanding what AI is, where it came from, how it works, and when key milestones occurred provides the foundation for thoughtful leadership in shaping what comes next. With this context in mind, the next chapter explores the intersection of data and AI, examining how the fusion of information and intelligence is creating new possibilities for innovation, trust, and value in the modern enterprise.
The Fast Five Download

1. AI is broad, evolving, and foundational. Enterprise artificial intelligence is not a single technology, but an umbrella term covering diverse subfields like machine learning, deep learning, and natural language processing. It ranges from narrow, task-specific systems (ANI) to the conceptual goal of superintelligent systems (ASI). Understanding these distinctions is key to informed decision making.

2. Data quality and governance are critical. The effectiveness and trustworthiness of AI depend on high-quality, well-governed data. Data is the fuel for AI innovation. Without reliable, secure, and well-managed data, AI systems are prone to errors, bias, and operational risk.

3. Technology progress follows hype cycles. AI's history is marked by cycles of rapid innovation and periods of disillusionment (AI winters). These cycles have matured the field, highlighting that sustainable value comes from balancing technological advances with realistic expectations and prudent investment.

4. Compute power and advanced algorithms drive modern AI. Breakthroughs in hardware (GPUs, TPUs, cloud infrastructure) and algorithm design have enabled today's large-scale generative AI systems. The synergy between data, compute, and algorithms is what sets leaders apart in the AI space.

5. Governance, ethics, and human alignment are essential for the future. The next era of AI will be defined by systems that are not just powerful, but also transparent, explainable, and aligned with human values. Success will require strong governance, ethical frameworks, and human oversight to ensure AI enhances business value while building trust.