EU AI Act: UK Business Compliance Guide
By Ibrahim Mizi, 3 April 2026
What the EU AI Act means for UK companies, which systems are high-risk, and the six steps to meet the August 2026 deadline without derailing AI projects.
If you build or deploy AI systems and any of the output touches an EU user, the EU AI Act applies to you. Brexit did not create an exemption. The Act’s extraterritorial scope catches any company whose AI system outputs are “used” in the EU, regardless of where the company is incorporated.
This is not theoretical. If your hiring platform screens candidates who live in the EU, if your credit scoring model evaluates EU-based applicants, if your customer service bot handles queries from EU residents, you are in scope. The regulation defines “provider” and “deployer” broadly enough to cover most B2B software companies that operate cross-border.
This guide breaks down what the regulation actually requires, which deadlines matter, and what practical steps you can take now. No speculation. No hand-wringing about the future of AI regulation. Just the obligations, the timeline, and the work.
Does the EU AI Act apply to UK companies?
Yes. Article 2 of the EU AI Act establishes extraterritorial applicability. Three scenarios bring UK companies into scope.
You are a provider. You developed the AI system and it is placed on the EU market or put into service in the EU. This includes SaaS platforms accessible to EU users.
You are a deployer. You use an AI system within EU territory, even if the system was built elsewhere. UK companies with EU subsidiaries, clients, or operations fall here.
Your system’s output is used in the EU. This is the broadest trigger. If you build an AI model whose outputs inform decisions about people in the EU, the Act applies even if you never intended to serve the EU market.
The practical test is straightforward: does your AI system affect anyone in the EU? If the answer is yes, or even maybe, you should assume you are in scope and plan accordingly. Discovering this after enforcement begins is not a position you want to be in.
The four risk tiers
The EU AI Act classifies AI systems into four tiers based on the potential harm they can cause. Your compliance obligations depend entirely on where your systems land.
Prohibited (unacceptable risk)
These AI practices are banned outright, with enforcement already active since February 2, 2025. The list includes:
- Social scoring systems that evaluate people based on social behaviour or personal characteristics
- Real-time remote biometric identification in public spaces for law enforcement (with narrow exceptions)
- AI systems that exploit vulnerabilities of specific groups (age, disability, economic situation)
- Emotion recognition systems in workplaces and educational institutions
- Untargeted scraping of facial images from the internet or CCTV to build recognition databases
If any of your systems fall into this category, they must be decommissioned. There is no compliance pathway for prohibited AI.
High-risk
This is where most of the compliance burden sits. High-risk systems are permitted but subject to strict requirements before they can be placed on the EU market. The Act identifies two routes to high-risk classification.
Annex I systems: AI used as a safety component of products already covered by EU product safety legislation (medical devices, machinery, toys, aviation, automotive).
Annex III systems: AI used in sensitive domains listed explicitly in the regulation:
- Biometric identification and categorisation of people
- Management and operation of critical infrastructure (energy, water, transport)
- Education and vocational training (admissions, assessment, proctoring)
- Employment and worker management (recruitment screening, task allocation, performance monitoring)
- Access to essential services (credit scoring, insurance pricing, emergency dispatch)
- Law enforcement (risk assessment, polygraph tools, evidence analysis)
- Migration, asylum, and border control
- Administration of justice and democratic processes
For software companies, the most common triggers are employment-related AI (automated CV screening, interview analysis), credit and insurance scoring, and educational assessment tools.
Limited risk
Systems with specific transparency obligations but lighter compliance requirements. This includes:
- Chatbots and conversational AI (must disclose to users they are interacting with AI)
- Emotion recognition systems not covered by the prohibition (must inform subjects)
- AI-generated content including deepfakes (must be labelled as artificially generated)
- Generative AI systems (must disclose AI generation and comply with copyright obligations)
Most customer-facing AI tools fall here. The primary obligation is disclosure, not documentation.
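As a concrete sketch of the disclosure obligation, assuming a hypothetical chat backend (the Act requires that users are informed they are interacting with AI, but does not prescribe wording or mechanism):

```python
# Sketch: lead every chat session with an AI-interaction disclosure.
# The wording and session structure here are illustrative, not mandated.

AI_DISCLOSURE = (
    "You are chatting with an AI assistant. "
    "Ask to speak to a person at any time."
)

def start_chat_session(user_id: str) -> dict:
    """Open a chat session that begins with the required disclosure."""
    return {
        "user_id": user_id,
        "messages": [{"role": "system-notice", "text": AI_DISCLOSURE}],
        # Record that the disclosure was shown, so compliance can be evidenced later.
        "disclosure_shown": True,
    }

if __name__ == "__main__":
    session = start_chat_session("user-123")
    print(session["messages"][0]["text"])
```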
Minimal risk
Everything else. No specific regulatory obligations beyond existing law. This covers the majority of AI applications: spam filters, AI-assisted search, recommendation engines for non-sensitive domains, optimisation algorithms in logistics and manufacturing.
No registration, no conformity assessment, no mandatory documentation. You can still voluntarily apply a code of conduct (the Act encourages these for non-high-risk systems), but it is not required.
August 2, 2026: what must be in place
The EU AI Act rolls out in phases. Some obligations are already enforceable. Others arrive on specific dates. Here is the timeline that matters for UK companies.
February 2, 2025 (already passed): Prohibitions on banned AI practices took effect. If you operate any prohibited system with EU exposure, you are already non-compliant.
August 2, 2025 (already passed): Obligations for general-purpose AI (GPAI) models took effect. If you provide a foundation model or general-purpose AI system, you should already be meeting transparency and documentation requirements.
August 2, 2026: This is the big one. Full obligations for high-risk AI systems become enforceable. Providers and deployers of high-risk AI must have completed conformity assessments, established quality management systems, implemented post-market monitoring, and registered their systems in the EU database.
One important caveat. The European Parliament voted 569-45 in favour of the Digital Omnibus proposal, which would push the high-risk system deadline to December 2, 2027. As of April 2026, trilogue negotiations between the Parliament, Council, and Commission have not concluded. The extension is politically likely but not legally confirmed. Until the trilogue concludes and the amended regulation is published in the Official Journal, August 2, 2026 remains the binding date.
The responsible approach: prepare as if August is real. If the extension passes, you will have built compliance infrastructure ahead of schedule. If it does not pass, you will not be scrambling to meet a deadline that was always on the books.
What must be in place by August 2, 2026 for high-risk systems:
- Quality management system. Documented procedures covering the entire AI lifecycle: design, development, testing, deployment, monitoring, and decommissioning.
- Technical documentation. Detailed records of system architecture, training data, validation methodology, performance metrics, and known limitations. This must be prepared before the system is placed on the market.
- Conformity assessment. Either self-assessment or third-party assessment depending on the system category. Biometric identification systems require third-party (notified body) assessment. Most other high-risk systems allow self-assessment.
- EU database registration. High-risk systems must be registered in the EU-wide database before being placed on the market or put into service.
- Post-market monitoring plan. Active, documented processes for monitoring system performance after deployment, including mechanisms for identifying and addressing risks that emerge in production.
- Incident reporting. Procedures for reporting serious incidents to market surveillance authorities.
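For teams tracking readiness across several systems, even a trivial structured checklist beats a shared document. A minimal sketch, with obligation names paraphrasing the list above and a hypothetical system name:

```python
# Sketch: track the six August 2026 obligations per high-risk system.
HIGH_RISK_OBLIGATIONS = [
    "quality_management_system",
    "technical_documentation",
    "conformity_assessment",
    "eu_database_registration",
    "post_market_monitoring_plan",
    "incident_reporting_procedures",
]

readiness = {
    "cv-screening-ranker": {  # hypothetical system
        "quality_management_system": True,
        "technical_documentation": True,
        "conformity_assessment": False,
        "eu_database_registration": False,
        "post_market_monitoring_plan": True,
        "incident_reporting_procedures": False,
    },
}

for system, status in readiness.items():
    gaps = [o for o in HIGH_RISK_OBLIGATIONS if not status.get(o, False)]
    print(f"{system}: {len(gaps)} gap(s): {', '.join(gaps) or 'none'}")
```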
High-risk AI systems: documentation, oversight, and conformity assessment
If your AI system is classified as high-risk, the documentation and process requirements are substantial. This section covers what “compliance” actually looks like in practice.
Technical documentation
Article 11 requires technical documentation that demonstrates compliance before a system reaches the market. This is not a one-page summary. The regulation specifies what must be included:
- A general description of the system, its intended purpose, and the provider’s identity
- A detailed description of the system’s elements: algorithms, data, training processes, design choices, and their rationale
- Information about training, validation, and testing data: collection methods, data characteristics, preparation steps, assumptions, and any identified gaps
- Metrics used to measure accuracy, robustness, and cybersecurity, along with test results
- A description of the risk management system and decisions taken regarding risks identified
- A description of changes made to the system throughout its lifecycle
For development teams, this means documentation is not a post-build activity. It must be embedded in the development process from the start. If you are not already documenting model selection rationale, data provenance, test coverage, and performance benchmarks as you go, retrofitting this documentation will be expensive and error-prone.
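One way to embed it is to emit a versioned record alongside every trained model. The sketch below captures Annex IV-style content at training time; the field names and values are illustrative, not a regulatory template:

```python
# Sketch: write documentation metadata next to the model artifact at
# training time, so documentation and model never drift apart.
import json
from datetime import datetime, timezone

model_record = {
    "system_name": "cv-screening-ranker",  # hypothetical system
    "intended_purpose": "Rank inbound applications for recruiter review",
    "model_selection_rationale": "Gradient boosting chosen over a deep net "
                                 "for explainable feature attributions",
    "training_data": {
        "source": "internal ATS exports, 2021-2025",
        "provenance_checked": True,
        "known_gaps": ["sparse data for career changers"],
    },
    "metrics": {"auc": 0.83, "demographic_parity_gap": 0.04},
    "created_at": datetime.now(timezone.utc).isoformat(),
}

with open("model_card_v1.json", "w") as f:
    json.dump(model_record, f, indent=2)
```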
Human oversight
Article 14 mandates that high-risk AI systems be designed so that humans can effectively oversee them during use. This does not mean a person rubber-stamps the AI’s output. The regulation requires:
- Clear interfaces that enable human operators to understand system capabilities and limitations
- The ability for a human to interpret the system’s output correctly
- The ability for a human to decide not to use the system or to disregard, override, or reverse its output
- The ability for a human to intervene in or halt the system’s operation
For automated decision-making systems (hiring, credit, education), this means building genuine override mechanisms and training the people who use them. A “confirm” button that operators click reflexively does not satisfy the human oversight requirement.
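What a genuine override mechanism can look like in code, sketched against a hypothetical hiring screen: the AI output is a recommendation only, and no outcome exists until a named human records one, with overrides requiring a documented reason.

```python
# Sketch: an override-capable decision flow. Nothing takes effect until a
# human accepts, modifies, or rejects the recommendation, and the action
# and rationale are recorded for audit.
from dataclasses import dataclass
from typing import Optional

@dataclass
class ScreeningDecision:
    candidate_id: str
    ai_recommendation: str             # e.g. "advance" / "reject"
    ai_rationale: str                  # surfaced so the reviewer can evaluate it
    final_outcome: Optional[str] = None
    reviewed_by: Optional[str] = None
    override_reason: Optional[str] = None

    def resolve(self, reviewer: str, outcome: str, reason: str = "") -> None:
        """No outcome exists until a human records one."""
        if outcome != self.ai_recommendation and not reason:
            raise ValueError("Overrides require a documented reason")
        self.reviewed_by = reviewer
        self.final_outcome = outcome
        self.override_reason = reason or None

decision = ScreeningDecision("cand-42", "reject", "Low keyword match score")
decision.resolve("j.smith", "advance", "Relevant experience under a different job title")
```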
Conformity assessment
Before placing a high-risk system on the EU market, providers must complete a conformity assessment. The process depends on the system type.
Self-assessment (most high-risk systems): The provider evaluates their own system against the requirements in Chapter III, Section 2 of the Act. This includes verifying that the quality management system, technical documentation, and risk management procedures meet the regulatory standard. The provider issues an EU declaration of conformity and affixes the CE marking.
Third-party assessment (biometric identification systems): A notified body must independently assess the system. The provider cannot self-certify.
The conformity assessment is not a one-time event. If you make a substantial modification to a high-risk system after the initial assessment, you must repeat the process.
Post-market monitoring
Article 72 requires providers of high-risk AI systems to establish and document a post-market monitoring system. This system must actively and systematically collect, document, and analyse data on the AI system’s performance throughout its lifetime. The purpose is to identify risks or compliance failures that were not apparent during pre-market testing.
For software teams, this translates to production monitoring with a regulatory lens. Model drift detection, output quality metrics, fairness monitoring across protected characteristics, and a documented process for acting on findings.
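For model drift specifically, one widely used signal is the population stability index (PSI), which compares a production feature distribution against its training baseline. A self-contained sketch follows; the thresholds mentioned are conventional rules of thumb, not anything the AI Act specifies:

```python
# Sketch: population stability index (PSI) as a drift signal.
import numpy as np

def psi(baseline: np.ndarray, production: np.ndarray, bins: int = 10) -> float:
    edges = np.histogram_bin_edges(baseline, bins=bins)
    b_pct = np.histogram(baseline, bins=edges)[0] / len(baseline)
    p_pct = np.histogram(production, bins=edges)[0] / len(production)
    # Clip to avoid division by zero in sparse bins.
    b_pct = np.clip(b_pct, 1e-6, None)
    p_pct = np.clip(p_pct, 1e-6, None)
    return float(np.sum((p_pct - b_pct) * np.log(p_pct / b_pct)))

rng = np.random.default_rng(0)
baseline = rng.normal(0.0, 1.0, 10_000)     # training-time distribution
production = rng.normal(0.3, 1.1, 10_000)   # shifted production data

print(f"PSI = {psi(baseline, production):.3f}")  # > 0.2 conventionally means "investigate"
```

The documented process matters as much as the metric: a drift alert that nobody owns is not post-market monitoring.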
How this interacts with GDPR, ISO 27001, and UK AI principles
AI regulation does not exist in isolation. If you are already compliant with GDPR and certified under ISO 27001, you have a head start. But the overlap is partial, not complete.
GDPR
The EU AI Act and GDPR share a common concern with automated decision-making that affects individuals, but they regulate different aspects of it.
GDPR Article 22 gives individuals the right not to be subject to solely automated decisions with legal or similarly significant effects. The AI Act goes further, requiring that high-risk AI systems have built-in human oversight mechanisms, documented testing regimes, and conformity assessments before deployment.
Where they converge: data governance, transparency obligations, and impact assessments. If you already conduct Data Protection Impact Assessments (DPIAs) under GDPR, the methodology translates well to the fundamental rights impact assessments that Article 27 of the AI Act requires of certain high-risk AI deployers, such as public bodies and deployers of credit scoring or insurance pricing systems.
Where they diverge: the AI Act introduces requirements that have no GDPR equivalent. Conformity assessments, technical documentation of model architecture, mandatory registration in the EU database, and post-market monitoring obligations are all new.
ISO 27001
OpenKit holds ISO 27001 certification. The standard covers information security management: risk assessment, access controls, incident management, and continuous improvement. Several of these map onto AI Act requirements.
The overlap is real. ISO 27001’s risk management framework, documented procedures, audit trails, and approach to continuous monitoring align with what the AI Act demands in terms of quality management and data governance.
But ISO 27001 was not designed for AI. It does not address:
- Conformity assessments for AI systems
- Technical documentation of model architecture, training data, and performance metrics
- Human oversight mechanisms specific to automated decision-making
- Bias monitoring and fairness testing
- Post-market monitoring of AI-specific risks like model drift
Think of ISO 27001 as a strong foundation that covers perhaps 30-40% of what the AI Act requires for high-risk systems. The remaining 60-70% is AI-specific and must be built on top.
UK AI principles
The UK government published its pro-innovation approach to AI regulation in 2023, built around five principles: safety, security and robustness; appropriate transparency and explainability; fairness; accountability and governance; and contestability and redress. These principles are not legally binding. They are guidance for existing regulators to interpret within their own domains.
The alignment with the EU AI Act is philosophical rather than procedural. Both frameworks care about transparency, human oversight, and accountability. But the UK approach relies on sector-specific regulators applying these principles within existing legal frameworks, while the EU approach imposes uniform, cross-sector requirements with dedicated enforcement.
For UK companies subject to both regimes, meeting EU AI Act requirements will almost certainly satisfy UK AI principles as well. The reverse is not true. UK principles alone will not make you EU AI Act compliant.
Six steps to start compliance now
Waiting for the trilogue to conclude before beginning compliance work is a calculated risk that offers limited upside. The core requirements of the AI Act are settled. The Digital Omnibus debate concerns timing, not substance. These six steps are relevant regardless of whether the deadline is August 2026 or December 2027.
1. Audit your existing AI systems
Start with an inventory. Every AI system your company builds, deploys, or uses needs to be catalogued. For each system, document:
- What it does and what decisions it informs
- What data it uses and where that data comes from
- Who uses the output and in which jurisdictions
- Whether any output reaches EU users or affects EU residents
This inventory is the foundation for everything that follows. You cannot classify what you have not catalogued.
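A minimal inventory record can be as simple as the sketch below; the fields mirror the questions above, and the structure and names are illustrative rather than a regulatory template:

```python
# Sketch: one record per AI system, mirroring the audit questions above.
from dataclasses import dataclass

@dataclass
class AISystemRecord:
    name: str
    purpose: str                     # what it does, what decisions it informs
    data_sources: list[str]          # where the input data comes from
    output_consumers: list[str]      # who uses the output
    jurisdictions: list[str]         # where the output is used
    eu_exposure: bool                # does any output reach EU users or residents?
    risk_tier: str = "unclassified"  # filled in during step 2

inventory = [
    AISystemRecord(
        name="cv-screening-ranker",  # hypothetical system
        purpose="Rank inbound job applications for recruiter review",
        data_sources=["ATS exports", "application forms"],
        output_consumers=["recruitment team"],
        jurisdictions=["UK", "DE", "FR"],
        eu_exposure=True,
    ),
]
```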
2. Classify each system by risk tier
Using the four-tier framework above, assign a risk classification to each system in your inventory. Be honest about edge cases. If you are unsure whether a system qualifies as high-risk, err on the side of caution. Reclassifying from high-risk to limited-risk later is straightforward. Discovering you should have been treating a system as high-risk after enforcement begins is not.
Pay particular attention to AI used in hiring, credit, education, and critical infrastructure. These are the categories most likely to affect UK software companies.
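A coarse first pass can be encoded as rules, with the Annex III domains from earlier in this guide driving the high-risk flag. The sketch below uses simplified domain labels; treat its output as a prompt for legal review, not a determination:

```python
# Sketch: first-pass triage against the four tiers. Domain labels are a
# simplified paraphrase of the Act's categories, not official terms.
ANNEX_III_DOMAINS = {
    "biometrics", "critical_infrastructure", "education", "employment",
    "essential_services", "law_enforcement", "migration", "justice",
}
PROHIBITED_PRACTICES = {"social_scoring", "workplace_emotion_recognition"}
TRANSPARENCY_ONLY = {"chatbot", "content_generation"}

def triage(domain: str, eu_exposure: bool) -> str:
    if not eu_exposure:
        return "out of scope (verify: no output may reach the EU)"
    if domain in PROHIBITED_PRACTICES:
        return "prohibited: decommission"
    if domain in ANNEX_III_DOMAINS:
        return "high risk: full Chapter III obligations"
    if domain in TRANSPARENCY_ONLY:
        return "limited risk: disclosure obligations"
    return "minimal risk: no specific obligations"

print(triage("employment", eu_exposure=True))
```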
3. Document data flows end to end
For each high-risk system, map the complete data lifecycle: collection, preprocessing, storage, model training, inference, output delivery, and retention. Document where personal data enters the pipeline, how it is processed, and where it leaves your control.
This exercise serves dual purposes. It supports both AI Act technical documentation requirements and GDPR data mapping obligations. If you have already done thorough GDPR data mapping, extend it to cover the AI-specific elements: training data provenance, validation datasets, and model output flows.
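A lightweight way to make the lifecycle map concrete, sketched here with hypothetical stage names, is one record per pipeline stage, with a personal-data flag that makes the GDPR touchpoints visible alongside the AI Act documentation:

```python
# Sketch: record each stage a high-risk system's data passes through.
from dataclasses import dataclass

@dataclass
class PipelineStage:
    stage: str                    # e.g. "collection", "training", "inference"
    location: str                 # system or region where the data lives
    contains_personal_data: bool
    retention: str                # documented retention policy

lineage = [
    PipelineStage("collection", "web form -> EU-hosted Postgres", True, "24 months"),
    PipelineStage("training", "anonymised snapshot in ML platform", False, "until superseded"),
    PipelineStage("inference", "scoring API, logs in EU region", True, "90 days"),
    PipelineStage("output delivery", "recruiter dashboard", True, "per case file"),
]

for s in lineage:
    flag = "PERSONAL DATA" if s.contains_personal_data else "-"
    print(f"{s.stage:<16} {s.location:<38} {flag:<14} retain: {s.retention}")
```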
4. Establish human oversight checkpoints
For each high-risk system, define where and how human operators can intervene. This is not about adding a review step at the end. Effective oversight means:
- Operators understand what the AI system is doing and why
- Operators can access enough information to evaluate whether the output is reasonable
- Operators have genuine authority and practical ability to override, modify, or reject the AI’s output
- Escalation paths exist for edge cases the system was not designed to handle
Document these checkpoints. Train the people responsible for oversight. Test that the override mechanisms actually work under realistic conditions.
5. Prepare technical documentation
Begin building the documentation package required by Article 11. For active development projects, embed documentation into your development workflow now. For existing systems, start the retrospective documentation process.
Priority documentation:
- System description and intended purpose
- Model architecture and design rationale
- Training data characteristics, collection methods, and known limitations
- Testing methodology and results (accuracy, robustness, bias)
- Risk management decisions and their justification
- Change log tracking modifications since initial deployment
This is the most labour-intensive step. Starting it six months before a deadline is feasible. Starting it six weeks before is not.
6. Designate a governance lead
Someone in your organisation needs to own AI governance. This does not require a new hire (though larger organisations may need one). It requires a named individual with authority to make decisions about AI risk classification, documentation standards, and compliance timelines.
The governance lead should:
- Maintain the AI system inventory and risk classifications
- Coordinate technical documentation across development teams
- Liaise with legal counsel on regulatory interpretation
- Track regulatory developments (the trilogue, implementing acts, harmonised standards)
- Report to senior leadership on compliance status and outstanding gaps
Without clear ownership, compliance efforts fragment across teams and stall.
When you need outside help with AI governance
Some of this work is straightforward if you have the internal expertise. Auditing your AI systems, classifying risk tiers, and designating a governance lead are things most competent engineering and legal teams can handle.
Other parts are harder. Conformity assessments require specific knowledge of the regulation’s technical requirements. Technical documentation must be detailed enough to satisfy regulatory scrutiny, which means understanding what regulators will actually look for. Building human oversight mechanisms that genuinely work (rather than check a box) requires design thinking that most teams have not had to apply to compliance before.
If your team has not worked with EU product safety regulation before, the conformity assessment process will be unfamiliar territory. The AI Act borrows heavily from the New Legislative Framework used for medical devices, machinery, and other regulated products. Companies with experience in those domains will recognise the structure. Companies without that background may need guidance.
OpenKit works with UK businesses on AI governance and compliance, including risk classification, documentation frameworks, and the technical architecture decisions that make oversight practical rather than performative. We hold ISO 27001 and ISO 9001 certifications, which gives us a working foundation in governance, but we are direct about what those certifications cover and what they do not.
If you are earlier in your AI journey and still working out where AI fits your operations, our AI consulting work starts with the strategic questions before moving to implementation. For organisations concerned about data sovereignty in the context of compliance, our private AI deployment work addresses the infrastructure side of the equation.
We wrote a separate piece on why private AI infrastructure matters for regulated organisations that covers the technical architecture in more detail.
The regulation is coming. The substance is settled even if the exact timeline shifts by a few months. The companies that start now will treat compliance as a manageable engineering project. The companies that wait will treat it as a crisis.