Enterprise Artificial Intelligence: Building Trusted AI in the Sovereign Cloud

Chapter 3: The Intersection of Data and Artificial Intelligence

OpenText

The true potential of Enterprise AI emerges where enterprise data and intelligent systems converge, creating new possibilities for innovation, trust, and value. Explore how together they form the foundation of intelligent enterprise decision-making. 

Chapter 3. The Intersection of Data and Artificial Intelligence

In this chapter we explore the intersection of data and artificial intelligence, focusing on how information becomes intelligence. Building on the foundations set out in Chapter 1, Data, and Chapter 2, AI, we look at how data and intelligence form a continuous value chain. Data fuels the AI engine. AI in turn unlocks the latent value of data. Continuous learning closes the loop, driving accuracy, adaptability, and insight over time. Governance connects these worlds, ensuring that as enterprise intelligence grows, it remains explainable, auditable, and aligned with organizational trust. Finally, we consider the strategic and economic implications of bringing data and AI together.

Where Data and Intelligence Meet

In Chapter 1, we traced the evolution of data: how it became the foundation of enterprise information management, and how its structure, stewardship, and accessibility generate business value. Chapter 2 took a closer look at artificial intelligence as the engine of automation and intelligence, tracing its technological evolution beyond the hype cycle toward real-world agentic AI deployment. Throughout this book, our central thesis is that data and AI are symbiotic. Data gives AI the context and potential to learn, while AI transforms data into actionable insights. Together, they are driving innovation across modern enterprises.

High-performing AI requires high-quality data: data that is well governed, structured, context rich, and secure. Not all data is created equal. Different data sets have different requirements, particularly as organizations balance public and private data. Public data sets train the large language models (LLMs) that underpin tools like ChatGPT, but for enterprises, private, customized data is the critical differentiator. Strategies that maintain the privacy and sovereignty of this data while enabling AI to learn from it are essential for competitive advantage.
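The idea of letting a model learn from private data without surrendering it can be sketched in miniature using the retrieval pattern discussed later in this chapter. The snippet below is a toy illustration, not any vendor's implementation: the documents, term-overlap scoring, and function names are all invented, and a real deployment would use embedding models and a vector store.

```python
import re

# Toy sketch: grounding an AI prompt in private enterprise data.
# Corpus, scoring, and names are hypothetical; real systems use
# embedding models and vector stores rather than term overlap.

def tokenize(text):
    """Lowercase word tokens, punctuation stripped."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def retrieve(query, corpus, k=2):
    """Rank private documents by term overlap with the query."""
    q = tokenize(query)
    ranked = sorted(corpus, key=lambda doc: len(q & tokenize(doc)),
                    reverse=True)
    return ranked[:k]

def build_prompt(query, corpus):
    """Assemble a context-augmented prompt. The private documents stay
    inside the enterprise boundary; only this prompt reaches the model."""
    context = "\n".join(retrieve(query, corpus))
    return f"Context:\n{context}\n\nQuestion: {query}"

private_corpus = [
    "Q3 plant downtime was driven by conveyor belt failures.",
    "Supplier contracts renew in January for all EU facilities.",
    "Safety audits are scheduled quarterly at every plant.",
]

prompt = build_prompt("What caused plant downtime?", private_corpus)
```

Because the most relevant private documents are injected as context, even a model trained only on public data can answer with enterprise-specific facts, while the underlying data sets remain under the organization's control.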
It's at this intersection, where enterprise data meets intelligent systems, that the true potential of enterprise AI is realized. Here, context becomes capability and information becomes insight. Organizations that can harness this relationship responsibly will define the next era of digital performance.

Can AI Replace Data and Enterprise Information Management?

As AI adoption increases, a common question is whether AI can replace existing data and information management solutions. The short answer is no. However, it is the intersection of information management and AI that drives outcomes. One can't act without the other. AI automates data-specific actions, like data extraction and classification, whereas information management provides secure, organized content and, importantly, the foundational structure and rules for governance and compliance that AI does not. While data and information management solutions provide the fuel for AI, AI is also transforming information management. It does this through:

Automation. AI tags documents, runs extractions, summarizes reports, and reduces human error across key workflows.

Insights. AI delivers valuable insights on the content being managed, including key insights, sentiment, and other important notes.

Search and retrieval. AI, in combination with the metadata in content management, makes search interfaces more accurate, efficient, and easier to use.

Information management is the gatekeeper for trusted data. Data quality defines the credibility of every AI decision. The two disciplines are deeply interconnected. Effective AI relies on governed, high-integrity data, while information management gains new speed and intelligence through AI-driven automation. Integrating the two ensures consistency, compliance, and context across the information life cycle. We explore this in greater depth in Chapter 5.
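The automation and search points above can be made concrete with a toy sketch of AI-style tagging feeding metadata-aware search. The tag vocabulary, keyword lists, and documents here are invented for illustration; production systems use trained classifiers rather than keyword matching.

```python
import re

# Hypothetical tag vocabulary mapping tags to indicative keywords.
TAG_KEYWORDS = {
    "invoice": {"invoice", "payment", "amount", "due"},
    "contract": {"agreement", "party", "term", "renewal"},
    "report": {"summary", "quarterly", "findings"},
}

def auto_tag(text):
    """Assign every tag whose keywords appear in the document text,
    mimicking AI-driven classification at ingest time."""
    words = set(re.findall(r"[a-z]+", text.lower()))
    return {tag for tag, kws in TAG_KEYWORDS.items() if words & kws}

def search(docs, required_tag):
    """Metadata-aware retrieval: filter by tag before matching content."""
    return [d for d in docs if required_tag in d["tags"]]

docs = []
for text in ["Invoice 1042: payment due in 30 days.",
             "Quarterly findings summary for the board."]:
    docs.append({"text": text, "tags": auto_tag(text)})

hits = search(docs, "invoice")
```

The point of the sketch is the division of labor: AI enriches content with metadata at ingest, and information management then uses that metadata to make search more accurate and efficient.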
AI can enhance how organizations manage content, but it cannot replace the discipline and governance that make information trustworthy. This interdependency between AI and information management is illustrated in the following case study about a global foods producer that is applying AI to its business information to modernize operations and improve performance.

Case Study: A Global Foods Manufacturer

A foods manufacturer with operations around the world has implemented several strategies to position itself as an industry leader in AI adoption. What follows are excerpts of an interview with the company's EIM director.

As part of our transformation project, we're exploring how artificial intelligence can help us modernize operations. Today, roughly 10% of our data is stored in the cloud. It's not a big number yet, and the cloud solutions we've used so far have been private to ensure the security of our proprietary information. But the technology is advancing quickly, and we're becoming more open to public cloud adoption, provided we can ensure strong governance and maintain ownership of our data.

AI is becoming central to how we manage and extract value from our information. We're using AI-driven systems to pull insight from our operational data, helping management run our plants more efficiently. The results are tangible: higher-quality products, more sustainable practices, and measurable improvements to the bottom line.

AI has also inspired entirely new ways of working. We're piloting drone-based crop monitoring, an extension of a practice we've used for years with satellite imagery. Satellites helped us estimate crop health, but they don't perform well under cloud cover. Drones, on the other hand, can be programmed to fly over entire fields, capture high-resolution images, and feed that data directly into our AI models.
Once processed, the models predict crop performance, identify stress or disease, and even recommend specific irrigation or fertilizer adjustments. That same insight is then integrated into automated fertilizer spreaders, which apply the right amount of treatment in the right places, reducing waste and improving yield. We are also advancing into predictive agriculture. By combining AI models with decades of historical weather and crop data, we can forecast growing conditions two to three years out in specific regions. These models aren't perfect, but they're increasingly accurate and incredibly useful for planning.

Every region of the world is different: its soil, weather, crops, and farmers. Our challenge is to adapt to all of them, and AI helps us do that at scale. The technology lets us understand local conditions in real time and make decisions that improve productivity, sustainability, and resilience. What used to take weeks of manual analysis now happens continuously. AI has become not just a tool for insight, but a partner in how we grow, produce, and feed the world.

The Data and AI Value Chain

Unlocking value from data begins with understanding the data and AI value chain. This process starts with data generation and collection, where access to enterprise data becomes a fundamental enabler. Once collected, data must be integrated, cleaned, and governed to ensure quality and reliability. Organizations that have invested in strong information management practices are better positioned to accelerate AI adoption because they have already done the foundational work of enabling their data. However, data alone is not enough. Without structured processes and workflows, data is not actionable. AI becomes valuable only when applied to real business challenges, integrating into these workflows to generate measurable outcomes. This is where AI model training, fine-tuning, and validation come into play.
LLMs are initially trained on public data sets, but enterprises can extend their value by fine-tuning them with private data or by using retrieval-augmented generation (RAG), which connects AI to internal knowledge sources. The right strategy depends on the organization's goals, resources, and maturity level. However, regardless of approach, data quality and model governance are critical requirements.

The final, and often overlooked, step in the value chain is the feedback loop. Many organizations rush to deploy AI capabilities without establishing mechanisms for continuous learning and improvement. This is where the true value emerges, especially with AI. Iterative fine-tuning allows model accuracy to improve over time and drive more impactful results.

To put the data and AI value chain into context, consider an example from manufacturing. Sensors on the factory floor collect data, then feed this data into the AI models. Based on this data, the models make recommendations to optimize performance. This in turn generates new data that allows the models to be continually and iteratively improved. This approach is demonstrated in the following case study, which describes how Knorr-Bremse keeps the wheels rolling with predictive maintenance powered by actionable insights.

Case Study: Knorr-Bremse

Based in Munich, the Knorr-Bremse Group is the world's leading manufacturer of braking systems for rail and commercial vehicles. For more than 110 years, the company has pioneered the development, production, marketing, and servicing of state-of-the-art braking systems. Knorr-Bremse's iCOM (Intelligent Condition Oriented Maintenance) platform brings digitization to the rail business, connecting wireless-enabled sensors aboard trains to a back-office, cloud-based network using an Internet of Things (IoT) model. This platform transmits detailed data that can help predict repair and replacement needs.
The iCOM platform required a powerful and user-friendly analytics component to enable the analysis of the data received and help users make data-driven decisions. The ability to make predictive, data-driven decisions results in more efficient and cost-effective repairs. With data being continually collected, the volumes across a fleet are significant. Customers now have the ability to visualize the data through interactive graphical dashboards, reducing the reliance on IT to create new reports. For example, they can provide heat maps of condition-based events, such as overheating brakes on a specific incline, helping customers put measures in place to reduce component failures, extending component life, and ultimately saving money.

Data as the Fuel for AI

Data is the fuel that powers the AI engine. The quantity, quality, and diversity of data matter far more than the complexity of the AI models themselves. High-quality, diverse data sets give AI systems the context they need to learn effectively. Simple AI can deliver impressive results on high-quality and diverse data sets, whereas complex AI cannot deliver the same results on low-quality and homogeneous data sets. As described in Chapter 1, both structured and unstructured data fuel this mix. Unlocking this data safely and responsibly is the key to meaningful AI adoption in a business context. For most organizations, success depends not on training massive public models, but on leveraging private data strategically within existing frameworks. And once this private data is unlocked for AI, protecting it becomes critical. This is the key to the concept of sovereignty that we will discuss later in the book. Differentiating between public and private data sets and providing the appropriate protections for those private data sets is an urgent priority. AI doesn't merely consume data; it interprets, enriches, and organizes it for use across the enterprise. In this sense, AI and data form a two-way relationship.
AI uses data to learn, but it also enhances the value of data by improving its structure, integrity, and accessibility.

The Continuous Feedback Loop

Every effective enterprise AI system utilizes a continuous feedback loop. Data trains AI models, AI produces insights, and those insights generate new data that refines both the model and the underlying data sets. Intelligence improves not in a straight line, but in cycles of learning. For instance, recommendation systems constantly learn from user behavior. Each interaction creates new data that helps the system make better predictions. Over time, this iterative refinement increases accuracy, personalization, and efficiency. Think of this in the context of your favorite online shopping website. Every click, purchase, or pause creates new signals that reshape the system's understanding of user intent. The next set of recommendations reflects what the model has learned since the last one. It's trained to provide you with relevant recommendations, but as you continue shopping, it uses that data to generate new insights and, over time, improve the quality of the recommendations.

Equally important to this process are observability and monitoring. The loop must be governed and models evolved responsibly. Continuous oversight of model performance and data flows ensures that AI systems remain reliable, explainable, and aligned with business objectives. As we will explore later in the book, the operational management of AI systems cannot be an afterthought. It must be treated as a strategic capability that underpins long-term success.

Governance at the Intersection

Governance lies at the heart of the data and enterprise AI intersection. On the data side, governance focuses on privacy, lineage, access control, and compliance with regulations such as the General Data Protection Regulation (GDPR). On the AI side, governance emphasizes fairness, transparency, accountability, and explainability.
These two domains are now coming together under shared principles such as ethics, auditability, and trust. Emerging AI trust frameworks and international standards, like ISO/IEC 42001 for AI management and ISO/IEC 38505 for data governance, illustrate this convergence. As these frameworks mature, they will shape how organizations design, deploy, and monitor AI responsibly. We take a deeper dive into data and AI governance in Chapters 5 and 6, respectively. Integrating data and AI creates competitive advantage. Governing them responsibly is what turns that advantage into lasting economic value.

Strategic and Economic Implications

Finally, the integration of data and AI creates both strategic advantage and economic opportunity. Organizations that align these capabilities effectively are better equipped to innovate, optimize operations, and differentiate in competitive markets. With so much anticipation from boards and executive leadership about the potential of enterprise AI, it's easy to understand some of the market disappointment around the pace of change and impact. This has put a spotlight on early AI pilots in enterprises and their relative success. However, it's worth noting that many early AI pilots have underdelivered because they relied on publicly trained models without contextualizing them with enterprise data. Enterprise leaders need to understand that their competitive advantage lies in safely and securely unlocking this data. The next phase of success lies in taking a data-centric AI approach that prioritizes improving data quality and process design over building ever more complex AI models. Good data and sound processes lead to reliable AI outcomes. While large-scale computing remains necessary for training foundational models, most enterprises can achieve meaningful value through smaller, targeted deployments. Understanding your data requirements helps determine what level of compute investment is truly needed.
This prevents overspending and aligns AI initiatives with real business value. It also helps to reduce concerns around AI for leaders and employees who might still be working to fully understand the technology. Discover how iTAC Software AG is using intelligence to enable smart factories in the following case study.

Case Study: iTAC Software AG

Since its founding, iTAC (internet Technologies And Consulting) Software AG has specialized in providing Internet technologies for the manufacturing industry. The manufacturer of standard software and products for cross-company IT applications is an industry-leading system and solution provider of manufacturing execution systems (MES) for the entire supply chain. To offer its customers the greatest possible transparency and decision-making capability for production control, and to meet growing demands related to the Internet of Things (IoT), iTAC wanted to integrate business intelligence (BI) and analytics software into its MES suite. Doing so would support customer demands for manufacturing intelligence, quality control, and traceability. In addition to rapid, effective implementation and seamless integration, iTAC required the customization of reports, analysis, and dashboards with full interactivity and security. All of these needed to be web-based, offer transparent personalization for various applications, and be available through different channels.

iTAC now has the BI operational and analytical capabilities it needs to support customer demands for greater intelligence, quality control, and traceability throughout the entire manufacturing process. The solution ensures transparency and metrics management and supports product lifecycle management, budget control, and quality assurance, as well as field activity management. The company's clients can access and analyze large amounts of data centrally, with extensible support for future expansion, delivering competitive advantage.
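The continuous feedback loop that runs through this chapter (data trains the model, the model makes predictions, and observed outcomes flow back as new training data) can be reduced to a minimal sketch. The moving-average "model" below is a deliberately simple stand-in for a real learned model, and the observations are invented; the point is only the shape of the loop, in which prediction error shrinks as feedback accumulates.

```python
class FeedbackLoopModel:
    """A toy model that predicts a value and refines itself each cycle."""

    def __init__(self):
        self.history = []

    def predict(self):
        # Before any feedback arrives, fall back to a default guess.
        if not self.history:
            return 0.0
        return sum(self.history) / len(self.history)

    def feedback(self, observed):
        # Each observed outcome becomes new training data.
        self.history.append(observed)

model = FeedbackLoopModel()
errors = []
observations = [10.0, 10.0, 11.0, 12.0, 12.0]
for obs in observations:
    errors.append(abs(model.predict() - obs))  # monitor performance
    model.feedback(obs)                        # close the loop
```

Tracking the `errors` list is the observability side of the loop: the first prediction is a blind guess, and later predictions track the observed data, which is exactly the behavior continuous monitoring is meant to verify.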
As we've covered in this chapter, data and AI are inseparable partners, with data as the fuel for the AI engine. AI without data is directionless, and data without AI is not actionable. Together, they form the foundation of intelligent enterprise decision-making. As organizations increasingly move toward AI-driven decision frameworks, strong governance and strategic alignment become essential. The intersection of data and AI represents not only an operational shift, but also an innovation frontier, redefining how organizations think, decide, and compete. This convergence marks the beginning of a new chapter in digital transformation, one where information truly becomes intelligence.

The Fast Five Download

1. Prioritize data quality and governance. Establish data readiness as an organizational mandate, not a project deliverable. Direct your teams to perform comprehensive data audits and implement governance policies that ensure accuracy, security, and accessibility across all critical information assets. Make data quality a board-level priority to maximize AI effectiveness.

2. Integrate AI into real business workflows. Embed AI in high-impact business processes. Identify two to three key operational areas (for example, customer support, supply chain optimization, risk management) where it can deliver immediate benefits. Task business and technical leaders with deploying AI solutions that leverage proprietary data to address real business challenges.

3. Establish continuous feedback loops for AI improvement. AI performance is never static; it requires continual monitoring and training. Institute an organizational policy for ongoing AI model performance monitoring, including user feedback loops and automated retraining using new data. Assign accountability for this process to ensure models remain accurate, personalized, and aligned with business goals.

4. Align data and AI governance for trust and compliance.
Bring data and AI under a single governance framework. Appoint a cross-functional task force that unites privacy, security, compliance, and ethical oversight to create consistent standards for how intelligence is built and applied. Adopt or benchmark against emerging standards, such as ISO/IEC 42001 and 38505, to proactively manage legal, reputational, and operational risks.

5. Take a data-centric approach to AI investment. Tie all investment decisions to data value and business outcomes. Before approving new AI projects, require business units to articulate how the initiative unlocks value from enterprise data and delivers measurable business results. Limit investment in large-scale AI models unless justified by unique data assets and a clear path to ROI.