Cyber Grant Blog

EU AI Act compliance for CISOs: obligations, risks, and DLP

Written by FEDERICA MARIA RITA LIVELLI | May 8, 2026 3:56:12 PM

The EU AI Act and data protection: what CISOs and IT managers need to do now

The first deadlines under the EU AI Act are already live. If your organization uses AI, even if it only deploys systems built by others, you now have legal obligations, audit exposure, and potential fines. This article explains what the regulation requires, how to identify high-risk AI in your environment, and why on-premise AI combined with file-centric DLP is the architecture most CISOs are converging on.

What is the EU AI Act and who has to comply?

The EU AI Act is the first comprehensive legal framework for artificial intelligence, adopted by the European Union and now phasing into application. It applies to providers (those who build AI systems), deployers (those who use them in operations), importers, and distributors. For most enterprises, the relevant role is deployer, and that is where most CISOs and IT managers are exposed.

As a deployer under Article 26, your organization is responsible for four things:

  • Using AI systems in line with the provider's instructions
  • Ensuring staff who interact with high-risk AI have adequate AI literacy
  • Monitoring system behavior, retaining logs, and reporting serious incidents (a minimal logging sketch follows this list)
  • Guaranteeing meaningful human oversight on critical decisions
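
What the monitoring and logging obligation looks like in practice is structured, retained evidence. As a minimal sketch (the field names, values, and log location are illustrative, not prescribed by the Act), a deployer-side usage log could be as simple as:

```python
import json
import uuid
from datetime import datetime, timezone

LOG_PATH = "ai_usage.jsonl"  # hypothetical location; retention per your policy


def log_ai_usage(system_id: str, user: str, purpose: str,
                 data_categories: list[str], human_reviewed: bool) -> dict:
    """Append one structured, timestamped record of an AI interaction."""
    record = {
        "event_id": str(uuid.uuid4()),
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system_id": system_id,              # ties the event to the AI inventory
        "user": user,                        # who interacted with the system
        "purpose": purpose,                  # why the system was used
        "data_categories": data_categories,  # e.g. personal, sensitive, production
        "human_reviewed": human_reviewed,    # evidence of human oversight
    }
    with open(LOG_PATH, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
    return record


log_ai_usage("cv-screener-01", "hr.analyst@example.com",
             "shortlist applicants", ["personal"], human_reviewed=True)
```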

These are not aspirational principles. They are auditable obligations, and the gap between "we use AI" and "we can prove how we use AI" is where most organizations are currently sitting.

What is a high-risk AI system under the EU AI Act?

A high-risk AI system, in the EU AI Act, is one whose intended purpose or operating context can materially affect health, safety, or fundamental rights. The Act classifies AI into four risk tiers: unacceptable, high, limited, and minimal. High-risk systems carry the strictest set of requirements.

The high-risk category includes, among others (see the triage sketch after this list):

  • AI used as a safety component in critical infrastructure or regulated products (transport, medical devices, robotic surgery)
  • AI in education and employment that affects access to training, hiring, or career progression (CV screening, automated exam scoring)
  • AI used to access essential public or private services, including credit scoring
  • Biometric identification, emotion recognition, and biometric categorization
  • AI used in law enforcement, justice, and judicial decision support
  • AI in migration, asylum, and border control (automated visa screening)
  • AI applied to democratic processes where it can influence outcomes
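
To make triage against these categories concrete, here is a deliberately naive screening sketch. The domain labels paraphrase the list above, not the Act's authoritative Annex III wording; a real classification requires that wording and legal review:

```python
from enum import Enum


class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"


# Paraphrased domain labels for the high-risk areas listed above.
HIGH_RISK_DOMAINS = {
    "critical_infrastructure", "education", "employment",
    "essential_services", "biometrics", "law_enforcement",
    "justice", "migration", "democratic_processes",
}


def triage(domain: str, affects_individuals: bool) -> RiskTier:
    """First-pass screening only: flags systems that need a formal
    high-risk assessment; it does not replace one."""
    if domain in HIGH_RISK_DOMAINS and affects_individuals:
        return RiskTier.HIGH
    return RiskTier.MINIMAL


print(triage("employment", affects_individuals=True))      # RiskTier.HIGH
print(triage("marketing_copy", affects_individuals=False))  # RiskTier.MINIMAL
```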

If any of those touch your operations, you are not just running a tool. You are operating a regulated system, and the audit trail starts now.

The five obligation areas for high-risk AI

Before a high-risk AI system can be placed on the market or put into service, it has to meet five sets of requirements. Each one maps directly onto a CISO or IT manager workstream (see the evidence-mapping sketch after this list):

  • Risk governance. A documented process to identify, assess, and mitigate AI-related risks across the system's lifecycle.

  • Data quality. Datasets must be representative, controlled, and validated to prevent biased or discriminatory outcomes.

  • Traceability and documentation. Activity logs, technical documentation, and evidence sufficient to demonstrate conformity.

  • Human oversight. Operators must be able to monitor, override, and reverse AI-supported decisions.

  • Cybersecurity, robustness, and accuracy. High security standards and operational reliability across the entire system lifecycle.
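
One way to operationalize that mapping is to track, per obligation area, the evidence you can actually produce. A sketch, with artifact names that are our own shorthand rather than terms from the Act:

```python
# Illustrative mapping from the five obligation areas to the evidence a
# CISO workstream would be expected to produce.
OBLIGATION_EVIDENCE = {
    "risk governance": ["risk register", "lifecycle risk assessments"],
    "data quality":    ["dataset documentation", "bias test results"],
    "traceability":    ["activity logs", "technical documentation"],
    "human oversight": ["override procedure", "decision review records"],
    "cybersecurity":   ["security test reports", "robustness benchmarks"],
}


def evidence_gaps(available: set[str]) -> dict[str, list[str]]:
    """Return, per obligation area, the artifacts still missing."""
    return {
        area: missing
        for area, artifacts in OBLIGATION_EVIDENCE.items()
        if (missing := [a for a in artifacts if a not in available])
    }


print(evidence_gaps({"risk register", "activity logs"}))
```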

A failure in any of these areas is not just a technical gap. It is direct exposure to enforcement action, especially when the absence of controls or documentation surfaces during an incident or an inspection.

Why most enterprises don't yet know what AI they're running

The single most common gap we see in enterprise AI compliance is the absence of an AI inventory. CISOs are being asked to certify compliance for systems they did not procure, did not approve, and in many cases did not know existed. This is shadow AI, and the EU AI Act does not care whether procurement signed off on it.

Without a structured inventory, demonstrating compliance is impossible, and so is managing sanction risk. For each system in use, the inventory has to capture at least the following (a minimal record sketch follows the list):

  • System name and provider
  • Function performed and business unit using it
  • Type of data processed (personal, sensitive, production data)
  • Risk classification under the EU AI Act
  • Compliance status and supporting documentation
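
A minimal sketch of what one inventory record could look like; the field names and example values are hypothetical, but the structure mirrors the list above:

```python
from dataclasses import dataclass, field


@dataclass
class AISystemRecord:
    """One row of the AI inventory; field names are illustrative."""
    name: str                # system name
    provider: str            # who built or supplies it
    function: str            # what it does
    business_unit: str       # who uses it
    data_types: list[str]    # personal, sensitive, production...
    risk_tier: str           # classification under the EU AI Act
    compliance_status: str   # e.g. compliant / gap / under review
    documentation: list[str] = field(default_factory=list)


inventory = [
    AISystemRecord(
        name="cv-screener-01", provider="ExampleVendor",
        function="CV screening", business_unit="HR",
        data_types=["personal"], risk_tier="high",
        compliance_status="under review",
        documentation=["provider conformity declaration"],
    ),
]
```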

This is not a one-time exercise. The inventory has to be maintained, integrated into procurement, and tied to vendor management and onboarding workflows. The companies that treat it as a project will be back at zero in twelve months. The companies that treat it as an operating control will pass audit.

On-premise AI and DLP: the architecture that holds up under audit

For organizations handling sensitive or regulated data, on-premise AI is becoming the default architectural choice, not a niche preference. Running AI inside your own infrastructure (including large language models) keeps data, prompts, and outputs under your control. Cloud-based generative AI cannot offer the same guarantees on data residency, log retention, or model behavior.
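
What "inside your own infrastructure" means operationally: the inference endpoint resolves to a host you run. A minimal sketch, assuming a local server such as Ollama or vLLM exposing an OpenAI-compatible API (the port, model name, and prompt are examples):

```python
import requests

# Prompt and output stay on infrastructure you control; nothing is sent
# to a SaaS API.
resp = requests.post(
    "http://localhost:11434/v1/chat/completions",  # local endpoint
    json={
        "model": "llama3",  # whichever model is actually deployed locally
        "messages": [
            {"role": "user",
             "content": "Summarize the retention clause in the attached policy."},
        ],
    },
    timeout=120,
)
print(resp.json()["choices"][0]["message"]["content"])
```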

On-premise AI alone is not enough. The data itself still has to be governed. This is where AI-driven Data Loss Prevention enters the picture, and where traditional DLP shows its limits.

Traditional DLP was designed to control exit channels (email, USB, cloud upload). It assumes the perimeter still exists. In an environment where employees paste contracts into ChatGPT and upload files to unsanctioned tools, perimeter logic breaks. The 2024 Verizon Data Breach Investigations Report ties 68% of breaches to the human element: errors, misconfigured permissions, accidental exposure. Traditional DLP cannot intercept any of that, because none of it is technically an attack.

The alternative is file-centric DLP: protect the data itself, not the channel. The file is encrypted at creation, classified automatically, and carries its policy with it wherever it goes (cloud, third-party, personal device, AI tool). This is the architecture CyberGrant has built around: AIGrant for private AI and automatic classification, FileGrant for persistent protection across the file lifecycle, RemoteGrant for endpoint enforcement. The point is not the product stack. The point is that compliance with the EU AI Act, GDPR, and NIS2 now requires controls that survive outside the perimeter, and file-centric protection is one of the few approaches that does.
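
To make the principle concrete rather than to describe any product's internals, here is a toy sketch using the open-source cryptography library: the policy is bound to the file at creation and travels with it:

```python
import json

from cryptography.fernet import Fernet  # pip install cryptography

# A simplified illustration of the file-centric principle, not how
# CyberGrant's products are implemented.
key = Fernet.generate_key()  # in practice held by a key management service

policy = {
    "classification": "confidential",
    "allowed_actions": ["view"],  # e.g. no copy into external AI tools
    "owner": "legal@example.com",
}


def protect(path: str) -> str:
    """Encrypt a file at creation and bundle its policy with the ciphertext."""
    with open(path, "rb") as f:
        ciphertext = Fernet(key).encrypt(f.read())
    out = path + ".protected"
    with open(out, "wb") as f:
        # The policy header travels with the data; an enforcement agent on
        # the endpoint reads it before deciding whether to decrypt.
        f.write(json.dumps(policy).encode() + b"\n" + ciphertext)
    return out
```

Whether the wrapper is a toy envelope like this or a commercial agent, the design choice is the same: the enforcement point moves from the network edge to the file itself.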
A 10-question self-assessment for CISOs and IT managers

This is not a substitute for a formal gap analysis, but it surfaces the priorities fast. Any "no" or "partial" answer is an active compliance gap.

| # | Question | Yes / No / Partial | Note / Action |
|---|----------|--------------------|---------------|
| 1 | Do you have a current inventory of every AI system in use, including tools adopted directly by business units? | | |
| 2 | Do you know which of those systems fall under the high-risk category of the EU AI Act? | | |
| 3 | Is there a formal AI risk assessment procedure before a new system is deployed? | | |
| 4 | Have you mapped shadow AI across your infrastructure? | | |
| 5 | Have AI governance roles been assigned (AI officer, compliance lead)? | | |
| 6 | Do you have technical documentation and audit logs for critical AI systems? | | |
| 7 | Have your AI providers delivered the conformity documentation required by the Act? | | |
| 8 | Have staff using high-risk AI received appropriate training? | | |
| 9 | Is AI Act assessment integrated into vendor onboarding and procurement? | | |
| 10 | Is there a remediation plan for non-compliant AI systems already in production? | | |

If you cannot say yes to at least seven of these, you have a roadmap, not a compliance position.

What this means for CISOs and IT managers

EU AI Act compliance is no longer a legal interpretation problem. It is a governance, architecture, and data control problem. The decisions in front of CISOs and IT managers right now are concrete: which AI systems to allow, how to integrate them, what oversight to enforce, how to keep sensitive data out of public LLMs without blocking legitimate use.

The organizations that will navigate this well are not the ones with the most policy documents. They are the ones that can demonstrate, at any given moment, what AI is running, what data it touches, and who is accountable. On-premise AI for high-sensitivity workloads, file-centric DLP for everything that leaves the boundary, and a maintained AI inventory tied to procurement: that combination answers most of what the Act will ask, and most of what an auditor will ask first.

That work starts before the next deadline, not after.