Enterprise Artificial Intelligence: Building Trusted AI in the Sovereign Cloud
The decade of responsible intelligence has begun — are you ready?
Enterprise AI is hitting a wall: Public models aren’t trained on your business data, but you can’t hand over your organization's proprietary information to a public system. The definitive roadmap for this new reality is Enterprise Artificial Intelligence: Building Trusted AI in the Sovereign Cloud, a new book written by OpenText leaders. Listen now to learn why this book is a must for organizations looking to move from isolated AI experiments to enterprise-grade deployments.
Learn more here: https://www.opentext.com/resources/enterprise-artificial-intelligence-building-trusted-ai
Enterprise Artificial Intelligence: Building Trusted AI in the Sovereign Cloud
Chapter 4: Making it Secure—The Importance of Cybersecurity
Use Left/Right to seek, Home/End to jump to start or end. Hold shift to jump forward or backward.
Innovation must be balanced with trust to ensure the intelligence we build cannot be used against us. Explore how cybersecurity must evolve alongside AI to address emerging threats and the advanced defences needed to protect data, models, and AI‑driven operations.
Chapter four Making It Secure The Importance of Cybersecurity Innovation must be balanced by trust to ensure that the intelligence we build cannot be turned against us. In this chapter, we explore how cybersecurity must evolve alongside AI. As intelligence systems reshape how enterprises operate, new risks demand equally advanced defenses. We'll examine emerging threats and the strategies required to secure data, models, and AI-driven operations against them. 62% of organizations experienced a deep fake attack involving social engineering or exploiting automated processes, while 32% said they experienced an attack on AI applications that leveraged the application prompt in the last 12 months. In recent years, cyber threats have evolved from simple breaches to sophisticated assaults aimed directly at enterprise AI systems. And the stakes have never been higher. As organizations accelerate adoption of technologies such as generative AI, Gen AI, we are witnessing a surge in attacks that leverage AI for phishing, deepfakes, and advanced social engineering. At the same time, a new wave of vulnerability is emerging: malicious actors who exploit Gen AI infrastructure, manipulate prompts, or compromise chained AI workflows to infiltrate and disrupt enterprises. In the previous chapter, we examined how the intersection of data and enterprise AI creates opportunities for innovation and operational efficiency. But opportunities also bring unwanted risks. And as organizations rely on private data to fuel their AI engines, they may inadvertently expose themselves to new and evolving cyber risks. The protection of enterprise data and AI must keep pace with technological evolution, as data and AI are attractive targets for threat actors. In the following feature, an energy company lays the foundation for AI and advanced analytics within a secure EIM system, building an enterprise architecture that blends its data with processes for governance and cybersecurity controls. Case study A Nordic Energy Company. This energy generation organization operates in a highly regulated sector, managing vast volumes of technical documentation critical to safety in operations. Faced with the challenge of enabling over 900 employees to reliably access the latest approved versions of these documents across office and plant environments, the company recognized that legacy, fragmented systems lacked the governance and visibility required for modern risk and trust management. In an era where data is both an asset and a vulnerability, establishing a security by design approach became imperative. To build a robust foundation, the company implemented a unified content management environment underpinned by strong identity and access controls, workflow automation, and document lifecycle governance. By centralizing control and enforcing policy-driven access rights, the system ensured that only the appropriate users could reach sensitive operational records at the right time and from the right context. Automated workflows moved documents through review, approval, and archiving in a governed fashion, strengthening the security of the data plane while maintaining usability for field and office teams alike. With this architecture in place, the organization laid the groundwork for advanced analytics and AI-enabled capabilities, on the understanding that those must rest on secure, well-managed information. The results were transformative. The company achieved a high stability record and dramatically improved user productivity with near real-time access to mission-critical content, supporting both safety and operational integrity. But perhaps more important, they now possess the trusted information infrastructure required to introduce AI-driven search, insights, and decision support tools, securely and responsibly. In short, by treating cybersecurity, data governance, and AI readiness as intertwined components, the company moved from document management to a modern intelligence platform posture, anchored in trust, visibility, and automation. We spend upwards of 70% of our time playing defense from a technology perspective, whether it's regulatory or cybersecurity threats. We have to remain vigilant in protecting the bank and our clients' data and keep up with the latest changes and patches that addresses vulnerabilities. CTO and Managing Director of Global Bank. The Cyber Threat Landscape for Data and AI. According to the World Economic Forum's Global Cybersecurity Outlook 2025, Gen AI tools are reshaping the cybercrime landscape by enabling criminals to refine their methods and automate and personalize their techniques, with 47% of organizations citing their top concerns surrounding Gen AI as the advance of adversarial capabilities. Cyber criminals are harnessing the efficiency of AI to automate and personalize deceptive communications. Some 42% of organizations experienced a successful social engineering attack in the past year, a number that can only increase with advances and the malicious adoption of AI. Cybersecurity for enterprise AI must be approached from a multidimensional perspective. One that recognizes the full spectrum of threats spanning data, models, infrastructure, and human interaction. The cyber threat landscape for data and AI runs across infrastructure, governance, and human behavior. Traditional cyber risks such as unauthorized access, insider threats, and ransomware, continue to target enterprise systems and compromise critical data. And as organizations move more of their data and operations into cloud environments, the overall attack surface continues to grow. Threat actors continue to exploit weaknesses in identity management, network segmentation, and vulnerable software. These foundational threats create the conditions for more advanced forms of attack that exploit the growing reliance on AI. Some of the expanding attack surfaces include the data pipelines used to train LLMs in the models themselves. As researchers from IBM and Carnegie Mellon University have observed, growing applications of large language models, LLMs, trained by a third party raise serious concerns on the security vulnerability of LLMs. It has been demonstrated that malicious actors can covertly exploit these vulnerabilities in LLMs through poisoning attacks aimed at generating undesirable outputs. In addition to model poisoning, other security risks such as data exfiltration and prompt injection are becoming more commonplace, with the latter being one of the biggest challenges for security in LLM. Emerging AI-specific threats introduce new vulnerabilities that go beyond conventional data breaches. Because new threats emerge every day, it is impossible to build an up-to-date, all-encompassing list of attacks. However, some of the most common types include data poisoning attacks, backdoor attacks, adversarial evasion attacks, model inversion attacks, bias exploitation attacks. The diagram below shows the AI model lifecycle mapped to the different types of cyber attacks we will review in this section. The life cycle begins with data collection and preparation, which feeds into the training phase where the model learns. This results in a trained model which is then used for inference, the process of making predictions or decisions, to generate the final output, a prediction, classification, decision, or generated response. Understanding how these different cyber attacks relate to the AI model lifecycle puts them in a context for where threats can occur. Let's look at the threats in more detail. 1. Data poisoning. Data poisoning attacks are common during the data phase in advance of training, through collection and ingestion. In these attacks, threat actors inject malicious inputs into the training data set, corrupting how the model learns and undermining its integrity and reliability. The root issue lies in a flawed assumption. Most learning algorithms presume that training data is clean and representative of reality. In security-sensitive environments, that assumption simply doesn't hold true. 2. Backdoor Attacks. Backdoor attacks are a form of data poisoning where a trigger pattern is hidden in the model during training. The model will behave normally for regular inputs but then will produce a malicious output when the trigger appears. These types of attacks can be complicated to detect because they remain inactive until the trigger condition is met. In this case, an adversary can create a maliciously trained network, a backdoored neural network, or a bad net, that has state-of-the-art performance on the user's training and validation samples, but behaves badly on specific attacker-chosen inputs. 3. Adversarial Attacks Another common type of attack is an adversarial attack. These occur when threat actors try to manipulate the AI model's input to produce incorrect results. Sometimes these changes can be so small they aren't recognizable, but they can alter behavior and undermine safety in AI use cases such as medical imaging or autonomous navigation. 4. Model inversion attacks. Model inversion attacks pose a threat to privacy and personal data. They do this through attempts to reconstruct sensitive input data from model parameters, outputs, or intermediate representations. In other words, the attack essentially reverse engineers the model to expose the specific private data it learned from. 5. Bias exploitation attacks. The final type of attack we'll highlight is bias exploitation attacks, which take advantage of bias that already exists in the dataset to manipulate decision making. These attacks are unlike data poisoning as they do not introduce new data into the dataset. Instead, they exploit inherent inequities that are already present in the data to achieve an attack. Across both the public and private sectors, risk now extend beyond technical compromise, like gaining system access, to include data manipulation, like altering or poisoning data. These are just five examples showing how threat actors are attacking different parts of the AI model lifecycle. For example, in the public sector, governments have faced ransomware attacks on public infrastructure that have impacted critical services. Likewise, in the private sector, companies have experienced model interference from threat actors that affect their websites and recommendation systems. More broadly, Gen AI has been used to spread misinformation. These attacks exploit model bias and erode the public trust in AI. These cases highlight that cybersecurity for enterprise data in AI is no longer confined to protecting systems. It is about defending the integrity of data and decisions and preserving public trust in the cognitive era. Data Security Foundations. This review of the threats highlight a central theme. The data used to train enterprise AI models must be secure. As organizations scale their use of AI, the volume and sensitivity of the data they manage will continue to grow exponentially. Recall the number of parameters used to train small and large language models and how the volume grows as we get to AGI. For enterprises leveraging private data sets to build private AI, it is critical to ensure the confidentiality and integrity of that data throughout its life cycle. Building this foundation requires a security by design approach, combining robust technical controls with strong governance mechanisms. This strategy is essential to safeguard information while maintaining compliance with standards and regulations. Protecting the data lifecycle. In general, data goes through different life cycle phases, including collection, storage, transmission, distribution, processing, archiving and retention, and disposition. Protecting data at each stage requires a combination of preventative, detective, and corrective controls to defend against cyber threats. Collection. Data collection needs to be handled carefully as it introduces opportunities for data compromise. It is essential to understand what data is being collected and for what purposes. ISO slash IEC 27001 2022 provides a framework to help organizations understand how to protect information through its life cycle. It offers a set of control categories to ensure that data collection and processing are lawful, fair, and transparent. Storage. After data has been safely collected, it needs to be securely stored. This can be in on-premises infrastructure or in cloud environments. General data protection includes encryption at rest, access controls, and segregation of sensitive data. It can also include capabilities like immutable storage, which may be part of a broader data protection strategy to mitigate cyber attacks such as ransomware. This must be part of your enterprise's zero trust data protection strategy. More on this shortly. Transmission Data transmitted or distributed between systems is vulnerable to interception. To protect this data in transit, technical methodologies like encryption are used. Encryption does not prevent data from being intercepted, but it makes the data unusable if it is. Processing. The data processing stage is a critical point where data can be intercepted or manipulated, making strong access controls essential to prevent unauthorized access. At this stage, the primary risk involves privacy breaches, especially when sensitive data sets are used for AI model training or analytics. To mitigate these risks, new computation methods have been developed. Homomorphic encryption, for example, preserves privacy by enabling operations on encrypted data without decrypting it. In addition, federated learning represents a shift towards secure, distributed AI. This allows models to be trained locally across multiple decentralized data sets. This bring the code to the data approach minimizes the need to centralize sensitive data, reducing exposure risk while maintaining model performance. Data disposal or deletion. Secure deletion of data is the final stage and ensures that old or redundant data is permanently removed. Under privacy regulations, such as GDPR Article 17, there is a right to eraser, or right to be forgotten, which means that organizations must demonstrate that they have effectively executed deletion requests. Each data lifecycle stage is interdependent, and a weakness in any one can compromise the entire life cycle. Understanding the risk across the life cycle ensures that, as you develop a zero trust data protection strategy, you have considered all aspects. In the following case study, a leading chemicals company is using an enterprise information management system to manage their data lifecycle, achieve compliance, and secure their data across multiple processes, partners, and locations. Case study Lanxus. The core business of Lanxis is the development, manufacturing, and marketing of chemical intermediates, additives, specialty chemicals, and plastics. What follows are excerpts of an interview with a company's process expert, ECM. Given the complexity of our portfolio, when we manufacture products, chemical intermediates, additives, specialty chemicals, and plastics, our processes leave paper trails that stem from scientific research through to sales and marketing. A typical starting point for research occurs when a customer requests a new product feature. Typically, we would then conduct research with an external partner, so there would be requirements around secure access and collaboration. Because we are a company that manufactures and distributes globally, our products, operations, and paper trails have to comply with global regulations. An Enterprise Content Management ECM platform helps us to ensure that information is compliant, from the research conducted to the procedures that engineers establish to manufacture at scale, to the construction and operation of a plant, and finally through to sales and marketing. We deal with large volumes of paper every day. Every step in a process has to be compliant and well documented, especially given that we operate in 25 countries and each one has a different set of regulations. Compliance is a benefit as a result of effective information management, along with efficiency and productivity, specifically being able to find information more quickly. To realize these benefits, we have to show our internal customers how using the technology will make their jobs easier. ECM delivers the tools we need to balance compliance and security with usability. Zero Trust Architecture for Enterprise AI. We have reviewed cyber threats related to data in AI, focusing specifically on the data lifecycle and the points where attacks can occur. To protect against these threats, the National Institute of Standards and Technology, NIST, defines a zero trust architecture, ZTA, as a strategic approach to cybersecurity that assumes no implicit trust within a network. The model is based on the idea that you should never trust and always verify. And this philosophy must govern every access decision. Instead of relying on defenses such as firewalls or VPNs, Zero Trust leverages continuous verification and access controls across all assets, users, and data flows. According to NIST SP 800-207, the Zero Trust model redefines traditional enterprise security by focusing on identity-centric protection, every access request must be authenticated and authorized in real time. Least privilege access. Users and systems receive only the minimum level of access necessary to perform their function. Dynamic policy enforcement. Access decisions are evaluated in real time based on factors such as user behavior and data sensitivity. Micro segmentation. Networks are divided into small, isolated zones to limit the movement of threat actors in the event of a compromise. Visibility and analytics. Continuous monitoring and threat detection ensure that certain behaviors trigger automated responses. Zero trust is not a single solution. It is achieved through a combination of technology solutions, including identity and access management, multifactor authentication, MFA, encryption, continuous monitoring, and automated policy enforcement. AIAM will become a critical security component of any enterprise system. Find out how a Latin American entertainment company is working to combine these technology solutions in their plan to evolve to a zero trust security model in the feature below. Case study A Latin American Entertainment Company. With millions of customers and thousands of employees across multiple countries, a leading Latin American entertainment company faced growing challenges managing identity and access for a large, distributed workforce. Over time, fragmented systems and manual provisioning made it difficult to maintain visibility across 15,000 user identities and more than 400 applications. The lack of unified governance slowed response times, created security blind spots, and made it harder to advance toward a zero trust model. To address this, the company implemented a comprehensive identity governance and administration IGA framework, consolidating global identity data into a single source of truth and central point of control. Integrated with HR Systems, Active Directory, and dozens of enterprise applications, the platform automated provisioning, deprovisioning, and access reviews, reducing manual workloads by half. Intelligent alerting, continuous attestation, and role-based access controls reinforced compliance, minimized risk, and enforced least privilege principles across the enterprise. The results were immediate. The company gained end-to-end visibility into more than 15,000 identities, streamlined access management, and strengthened its security posture across a global footprint. With identity governance now at the core of its cybersecurity strategy, the company is well positioned to advance its zero trust model, extending the same rigor and automation to protect its data, applications, and AI-driven operations across the digital enterprise. AI security and model protection. Zero trust, as we just reviewed, provides a philosophy and strategy for protecting access. But we must also consider AI security and model protection more broadly. AI models differ from traditional IT systems in that they combine logic with the ability to learn from data continuously. We reviewed earlier the attack surfaces throughout the AI model lifecycle. To protect against these risks, organizations are adopting strategies that combine classic cybersecurity approaches with new ones. This can include training teams and models on adversarial attack. Approaches, watermarking models, and running red team tests of their own pre-deployment environments. For context, a red team is a group that simulates real-world cyber attacks to test an organization's security. Their goal is to find weaknesses in systems, networks, and people. Training can improve model performance by exposing it to adversarial examples during training, thereby increasing its resilience against adversarial inputs. Model watermarking provides assurance and helps identify unauthorized reuse of models. Red team tests are helpful in exposing vulnerabilities through simulated attacks before deployment. In combination with a zero trust approach, these can be powerful tactics to protect against attacks. However, these are just a few of the many potential approaches, and this should be defined as part of an overall AI strategy across the enterprise. Looking ahead, the future of cybersecurity for AI. This chapter has looked at the growing importance of cybersecurity in relation to enterprise AI, highlighting the rising number of cyber attacks targeting enterprise AI systems. With reports indicating that 62% of organizations have faced deep fake attacks and concerns about Gen AI's adversarial capabilities. The urgency of addressing AI and data-related cyber risk is apparent. As organizations leverage private data to enhance operational efficiency, they are simultaneously exposing themselves to complex vulnerabilities that threaten their AI models and data. We also analyze how these attacks work, outlining threats like data poisoning, backdoor attacks, and model inversion attacks. These risks illuminate some of the limitations of traditional cybersecurity approaches in protecting advanced AI systems. By understanding how these attacks relate to the different phases of the AI model lifecycle, organizations can better anticipate potential vulnerabilities and implement strategies for data protection and model security. Looking to the future, organizations must adopt proactive, adaptive cybersecurity frameworks that incorporate AI-driven defense approaches to counter AI-powered attacks. This includes developing intelligent threat detection systems and new risk assessment models. Ultimately, collaboration across public and private sectors, coupled with investment in innovative cybersecurity solutions, will be essential to outpace the evolving threat landscape and ensure the safe integration of AI technologies into enterprise operations. As organizations strengthen their cyber defenses, one truth becomes clear. Security and trust are inseparable. Protecting enterprise AI systems isn't only about defending against attacks. It's about ensuring that the data powering those systems remains accurate, ethical, and reliable. In the next chapter, we'll explore the foundation of trusted AI, data governance. The Fast Five Download. One, adopt a zero trust architecture for all data in AI systems. Immediately implement a zero trust security model that assumes no implicit trust within your network. Enforce continuous identity verification, least privilege access, dynamic policy enforcement, and micro segmentation to minimize the risk of both internal and external breaches. 2. Secure the entire data lifecycle with integrated controls. Mandate that all data across collection, storage, transmission, processing, and disposal is protected with layered security measures. This includes encryption at rest and in transit, strong access controls, immutable storage, and strict compliance with regulations, like GDPR and ISO slash IEC 27001 2022, any gaps in one stage can compromise the entire system. 3. Harden AI models against emerging threats. Establish protocols to defend against AI-specific attacks such as data poisoning, backdoor exploits, adversarial inputs, model inversion, and bias exploitation. Incorporate adversarial training, model watermarking, and regular red team testing to identify and remediate vulnerabilities before deployment. Four, build security by design into AI initiatives. Insist that every new AI or data project incorporates security and privacy by design from inception. Require cross-functional teams, including data, IT, compliance, and security, to work together to ensure technical and governance controls are integrated into AI model development and operations. Five, invest in proactive, enterprise AI-driven cybersecurity capabilities. Allocate resources to develop and deploy intelligent, adaptive cybersecurity solutions powered by AI. These should include automated threat detection, risk assessment tools, and real time monitoring to keep pace with evolving AI enabled attack methods. Foster collaboration with industry peers and public sector partners to stay ahead of emerging threats.