EU AI Act Compliance: What Engineering Teams Need to Build Before the Deadline
By Shayan Ghasemnezhad on January 20, 2026 · 3 min read
The EU AI Act is law. High-risk classification triggers concrete engineering requirements. What to build now and what can wait.
The EU AI Act entered into force in August 2024, with obligations phasing in through 2027. It is the first comprehensive AI regulation, and unlike GDPR—which primarily affected data handling—the AI Act imposes requirements on how systems are designed, tested, and operated. For engineering teams building AI-enabled products that serve EU customers, this is not a legal concern that can be deferred to compliance. It has direct engineering implications.
Risk Classification: Where Your System Lands
The Act classifies AI systems into four risk tiers: unacceptable (banned: social scoring, and real-time remote biometric identification in public spaces with narrow law-enforcement exceptions), high-risk (significant obligations), limited risk (transparency requirements), and minimal risk (no obligations beyond voluntary codes). Most enterprise AI products land in the limited- or minimal-risk categories. But if your AI influences hiring, credit scoring, insurance underwriting, or critical infrastructure, your system falls into the high-risk tier.
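As a rough illustration of the tiering logic, the sketch below maps intended purposes to tiers. This is a simplification for orientation only: the category names follow the Act, but the purpose strings and mapping are invented for this example, and real classification requires legal analysis against Annex III.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

# Illustrative only -- real classification must follow Annex III
# and legal review; these purpose labels are assumptions.
HIGH_RISK_PURPOSES = {
    "hiring", "credit_scoring", "insurance_underwriting",
    "critical_infrastructure",
}
LIMITED_RISK_PURPOSES = {"chatbot", "content_generation"}

def classify(intended_purpose: str) -> RiskTier:
    """Classify by intended purpose -- what the system does,
    not how it is implemented."""
    if intended_purpose == "social_scoring":
        return RiskTier.UNACCEPTABLE
    if intended_purpose in HIGH_RISK_PURPOSES:
        return RiskTier.HIGH
    if intended_purpose in LIMITED_RISK_PURPOSES:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL
```

Note that the function takes the intended purpose, not a model type: a general-purpose LLM used for CV screening classifies as high-risk regardless of its architecture.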
High-risk classification triggers concrete requirements: risk management systems, data governance, technical documentation, logging, human oversight, accuracy and robustness testing, and cybersecurity. These are not vague principles—they map to specific engineering deliverables.
Engineering Requirements for High-Risk Systems
Logging and traceability: Article 12 requires automatic recording of events relevant to identifying risks and substantial modifications. In practice, this means structured logging of every inference: input data, model version, output, confidence score, and any human override decisions. Under Article 19, providers must retain these logs for a period appropriate to the system's intended purpose, and for at least six months, unless other Union or national law requires longer.
Human oversight: Article 14 requires that high-risk AI systems be designed to allow effective human oversight. This does not mean a human reviews every output—it means the system provides the tools for a human to understand, monitor, and override the AI. Concretely: dashboards showing model behaviour over time, the ability to disable the AI and fall back to manual processes, and alert mechanisms when the system operates outside expected parameters.
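One concrete piece of the oversight requirement is the ability to disable the AI and fall back to manual processes. The sketch below shows one way to structure that as a gate in front of the model; the class and field names are illustrative assumptions, not prescribed by the Act.

```python
import threading
from dataclasses import dataclass, field

@dataclass
class OversightGate:
    """Hypothetical oversight control: lets an operator disable the
    model and routes low-confidence cases to human review."""
    confidence_threshold: float = 0.8
    _enabled: bool = True
    _lock: threading.Lock = field(default_factory=threading.Lock)

    def disable(self) -> None:
        """Operator 'kill switch' -- stop serving AI decisions."""
        with self._lock:
            self._enabled = False

    def enable(self) -> None:
        with self._lock:
            self._enabled = True

    def route(self, prediction: dict, confidence: float) -> dict:
        """Serve the AI prediction only when the gate is enabled and
        confidence is high; otherwise queue the case for a human."""
        with self._lock:
            use_ai = self._enabled and confidence >= self.confidence_threshold
        if use_ai:
            return {"decision": prediction, "source": "ai"}
        return {"decision": None, "source": "human_review_queue"}
```

The design point is that the fallback path must exist and be tested before it is needed: if disabling the AI halts the business process entirely, the oversight requirement is not met in practice.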
Data governance: Article 10 sets requirements for training, validation, and testing datasets. They must be relevant, representative, complete, and free of errors “to the best extent possible.” Document dataset composition, known biases, and preprocessing steps. If you cannot describe your training data, you cannot demonstrate compliance.
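The dataset documentation Article 10 asks for can be captured as a machine-readable record rather than a wiki page, which makes it versionable alongside the data itself. A minimal sketch, with field names chosen for illustration (the Act does not prescribe a schema):

```python
import json
from dataclasses import asdict, dataclass

@dataclass
class DatasetRecord:
    """Illustrative documentation record for one dataset, capturing
    the composition, biases, and preprocessing Article 10 expects
    you to be able to describe."""
    name: str
    version: str
    purpose: str                    # training / validation / testing
    sources: list[str]
    size: int
    known_biases: list[str]
    preprocessing_steps: list[str]

    def to_json(self) -> str:
        """Serialise for storage next to the dataset artefact."""
        return json.dumps(asdict(self), indent=2)
```

Storing one such record per dataset version gives you an audit trail to point at when a conformity assessment asks what the model was trained on.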
# Structured inference log for EU AI Act compliance
import hashlib
import json
import logging
from datetime import datetime, timezone

audit_logger = logging.getLogger('ai_audit')

def hash_pii_safe(data: dict) -> str:
    """Hash the input so the audit log holds no raw personal data."""
    canonical = json.dumps(data, sort_keys=True)
    return hashlib.sha256(canonical.encode()).hexdigest()

def summarise(output: dict) -> str:
    """Compact, non-sensitive summary of the model output."""
    return json.dumps(output, sort_keys=True)[:200]

def log_inference(
    request_id: str,
    model_version: str,
    input_data: dict,
    output: dict,
    confidence: float,
    human_override: bool = False,
) -> None:
    audit_logger.info(json.dumps({
        'timestamp': datetime.now(timezone.utc).isoformat(),
        'request_id': request_id,
        'model_version': model_version,
        'input_hash': hash_pii_safe(input_data),
        'output_summary': summarise(output),
        'confidence': confidence,
        'human_override': human_override,
    }))
Transparency for Limited-Risk Systems
Even if your system is not high-risk, limited-risk classification requires transparency: users must be informed that they are interacting with an AI system. If your product uses AI to generate content (text, images, audio), the output must be labelled as AI-generated. This requires product design changes, not just legal disclaimers—visible, clear indicators in the UI.
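One way to make labelling structural rather than cosmetic is to attach the disclosure to every generated artefact at the API layer, so the UI cannot render content without it. A minimal sketch, with invented field names:

```python
from datetime import datetime, timezone

def wrap_generated_content(content: str, model_version: str) -> dict:
    """Illustrative API envelope: every AI-generated artefact carries
    an explicit disclosure flag the UI renders as a visible label.
    Field names here are assumptions, not mandated by the Act."""
    return {
        "content": content,
        "ai_generated": True,          # drives the visible UI indicator
        "model_version": model_version,
        "generated_at": datetime.now(timezone.utc).isoformat(),
    }
```

Putting the flag in the data contract means a new frontend or integration partner inherits the disclosure by default instead of having to remember to add it.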
Timeline and Prioritisation
Prohibited practices took effect in February 2025. General-purpose AI model requirements (for providers) apply from August 2025. Transparency obligations for limited-risk systems apply from August 2026, the Act's general application date, alongside most high-risk obligations; high-risk systems embedded in products already covered by EU product-safety legislation have until August 2027.
If you are building a high-risk system, start now with logging and data governance—these require architectural changes that are difficult to retrofit. Human oversight design can follow. Conformity assessment preparation should begin at least six months before your compliance deadline.
Decision Framework
Determine your risk classification first. If uncertain, consult legal—but bring the engineering context. Many legal teams lack the technical understanding to classify AI systems correctly. The classification depends on what the system does (intended purpose), not how it does it (technical implementation). Build compliance infrastructure as a platform capability, not as per-feature workarounds.
Failure Modes
The most dangerous failure: assuming GDPR compliance covers AI Act obligations. They overlap on data governance but diverge significantly on system design, testing, and human oversight requirements. GDPR compliance is necessary but not sufficient.
Another failure: treating compliance as a documentation exercise. The AI Act requires demonstrable compliance—working logging systems, real human oversight mechanisms, tested robustness. A PDF describing what you plan to build is not compliance. Build the infrastructure, then document it.
Regulation is a design input, not an afterthought. Teams that embed AI Act requirements into their development process now will ship compliant systems without a scramble at the deadline. Teams that defer will face a choice between expensive retrofitting and market withdrawal. Start with logging and data governance. The rest follows.