AI Act: Complete Guide for Companies in Spain — Obligations, Deadlines and Fines Under the EU AI Regulation
The EU AI Act is already in force. Find out if your company is affected, what obligations apply at each risk level, and what the penalties for non-compliance are.
The problem
Regulation (EU) 2024/1689, known as the AI Act, entered into force on 1 August 2024 and applies progressively depending on the risk level of AI systems. Prohibited AI practices have applied since 2 February 2025; general-purpose AI (GPAI) model obligations since 2 August 2025; and obligations for high-risk AI systems apply from 2 August 2026.
The problem most companies in Spain face is threefold: they do not know whether their AI systems fall within the regulation's scope, they do not know how to classify the risk level of the systems they use or deploy, and they do not know what practical obligations apply to them. Misclassifying risk can mean either overlooking critical obligations or facing fines of up to €35 million or 7% of global annual turnover, whichever is higher, for the most serious violations.
The AI Act does not only affect companies that develop AI systems. It affects all companies that deploy third-party AI systems in critical processes — recruitment, credit scoring, critical infrastructure management — and those using AI systems to make or support decisions that significantly affect individuals. The scope is considerably broader than most companies assume.
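The fine ceiling for the most serious violations follows a simple rule: the higher of a fixed amount or a percentage of worldwide turnover. A minimal sketch of that calculation (illustrative only; the AI Act sets different tiers for different violation types, and this function covers only the top tier for prohibited practices):

```python
def max_fine_prohibited_practices(global_turnover_eur: float) -> float:
    """Illustrative ceiling for the most serious AI Act violations
    (prohibited practices): the higher of EUR 35 million or 7% of
    total worldwide annual turnover."""
    return max(35_000_000, 0.07 * global_turnover_eur)

# A company with EUR 1 billion in global turnover faces a ceiling of
# 7% of turnover (EUR 70 million), since that exceeds EUR 35 million.
print(max_fine_prohibited_practices(1_000_000_000))
```

For smaller companies, 7% of turnover falls below €35 million, so the fixed amount becomes the ceiling instead.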
Our solution
BMC offers an AI Act compliance service that starts from where your company is today. We begin with an AI systems inventory and risk classification under the regulation's criteria, proceed with a gap analysis against applicable obligations, and develop the AI governance framework the company needs for sustained compliance. Our interdisciplinary team combines lawyers specialised in technology regulation and data protection with AI governance experts who understand both the technical dimension — how AI systems work, how they are documented, how they are audited — and the legal dimension — what obligations the regulation imposes, how compliance is demonstrated, what records must be maintained. We integrate AI Act compliance with GDPR (AI systems processing personal data generate obligations under both frameworks), with the harmonised standards the European Commission is developing, and with international AI governance frameworks such as the NIST AI RMF.
How we do it
AI systems inventory and classification
We identify all AI systems the company develops, deploys, or uses in its processes and classify them under the AI Act's four risk levels: unacceptable (prohibited), high risk, limited risk, and minimal risk. Classification determines which obligations apply to each system and with what urgency.
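The inventory step above pairs each system with its role, purpose, and risk tier. A minimal sketch of what one inventory entry might capture (the field names and the example record are our own illustration, not regulatory terms):

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class RiskLevel(Enum):
    """The AI Act's four risk tiers."""
    UNACCEPTABLE = "unacceptable (prohibited)"
    HIGH = "high risk"
    LIMITED = "limited risk"
    MINIMAL = "minimal risk"

@dataclass
class AISystemRecord:
    """One row of a hypothetical AI systems inventory."""
    name: str
    role: str                             # "provider" or "deployer"
    purpose: str                          # intended use in the company's processes
    risk_level: RiskLevel
    annex_iii_area: Optional[str] = None  # e.g. "employment", if high risk

# Example entry: a third-party CV screening tool used by HR
record = AISystemRecord(
    name="CV screening tool",
    role="deployer",
    purpose="candidate shortlisting",
    risk_level=RiskLevel.HIGH,
    annex_iii_area="employment",
)
print(record.risk_level.value)  # high risk
```

Keeping the risk level and Annex III area on each record is what lets the later gap analysis and roadmap steps be filtered to the high-risk systems that carry the heaviest obligations.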
Obligations gap analysis
For each identified high-risk AI system, we assess compliance with the regulation's obligations: training and input data, technical documentation, transparency and explainability, human oversight, accuracy and robustness, system cybersecurity, event logging, and auditability.
Compliance roadmap
We develop a prioritised roadmap to achieve compliance: technical documentation of systems, conformity assessment, registration in the EU high-risk AI systems database, implementation of human oversight controls, and adaptation of contracts with AI suppliers.
AI governance framework
We establish the organisational AI governance framework: AI use policy, evaluation process for new systems before deployment, roles and responsibilities (including, where applicable, an AI Officer role), staff training, and mechanisms for periodic monitoring and review of the framework.
Download our guide
Download our AI Act self-assessment tool: classify the risk level of your AI systems in 15 minutes
The AI Act: the world’s first risk-based legal framework for AI
Regulation (EU) 2024/1689 on Artificial Intelligence, in force since 1 August 2024, is the world’s first comprehensive legal framework for artificial intelligence. Unlike previous sectoral approaches, the AI Act adopts a horizontal, risk-based perspective: the higher the potential risk of an AI system to fundamental rights, health, and safety, the greater the obligations imposed on those who develop or deploy it.
The regulation is directly applicable in all EU member states without the need for national transposition, meaning its obligations apply equally in Spain regardless of whether national implementing legislation has been adopted. The only relevant exception relates to the national supervisory authority’s (AESIA’s) full enforcement powers, which do require a national legislative framework.
AI Act application timeline: what is already mandatory
The AI Act does not apply all at once. It has a staggered entry into force that is important to understand:
1 August 2024 — Entry into force: The regulation is live law. Actors must begin preparing.
2 February 2025 — Prohibitions: Unacceptable AI practices — subliminal manipulation, social scoring, emotion recognition in workplaces and educational institutions, and real-time remote biometric identification in publicly accessible spaces (subject only to narrow law-enforcement exceptions) — are prohibited from this date. Any company operating such systems should already have shut them down.
2 August 2025 — GPAI: General-purpose AI models have documentation, transparency, and copyright obligations from this date.
2 August 2026 — High-risk systems (Annex III): High-risk AI systems placed on the market from this date must comply with all Chapter III obligations before deployment.
2 August 2027 — Embedded high-risk systems (Annex I): High-risk AI systems that are safety components of products already covered by EU harmonisation legislation (machinery, medical devices, aviation products) have an additional year to adapt.
The four risk levels and their practical implications
The AI Act classifies AI systems into four risk categories, each with a different obligations regime:
Unacceptable risk (prohibited): Systems banned for unacceptably compromising fundamental rights. No company may operate them. If any internal process uses techniques of this type, it must have been modified before February 2025.
High risk: Systems with significant potential impact on individuals in critical areas. This is the most heavily regulated category: mandatory technical documentation, conformity assessment, registration in the EU database, and human oversight. The Annex III catalogue includes situations that many companies in non-technology sectors may not have identified as “high-risk AI” — such as CV screening tools or credit scoring systems.
Limited risk: Systems that interact with people — chatbots, AI-generated content — with transparency obligations: users must know they are interacting with AI, that content is AI-generated, or that their image or voice has been AI-manipulated (deepfakes). Generative models such as text-to-image tools or customer service chatbots fall in this category.
Minimal risk: The vast majority of AI applications — spam filters, content recommendations, AI productivity tools — fall into this category. They have no specific obligations under the AI Act, though companies developing them may adhere to voluntary codes of conduct.
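The four tiers above can be sketched as a first-pass triage helper. This is a deliberately crude illustration (the function name and boolean inputs are our own): real classification requires legal analysis of the prohibited practices list and Annexes I and III, not a three-question checklist.

```python
def triage_risk_level(uses_prohibited_practice: bool,
                      annex_iii_use_case: bool,
                      interacts_with_people: bool) -> str:
    """Simplified first-pass triage mirroring the AI Act's four tiers.
    Order matters: prohibited practices dominate, then high risk,
    then transparency obligations, then minimal risk by default."""
    if uses_prohibited_practice:
        return "unacceptable (prohibited)"
    if annex_iii_use_case:
        return "high risk"
    if interacts_with_people:
        return "limited risk (transparency obligations)"
    return "minimal risk"

# A CV screening tool is an Annex III employment use case -> high risk
print(triage_risk_level(False, True, False))
```

The ordering encodes the key point for companies: a single prohibited technique overrides everything else, and an Annex III use case makes a system high risk even if it also looks like an ordinary chatbot.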
High-risk AI systems in non-technology sectors
One of the most common surprises for companies is discovering they use high-risk AI systems without having identified them as such. Examples from the AI Act’s Annex III:
Human Resources: Any AI system used in candidate selection, performance evaluation, promotion management, or employee behaviour monitoring is high risk under the AI Act. This includes CV screening tools, AI-powered video interview analysis systems, and productivity analytics platforms.
Banking and credit: Credit scoring and creditworthiness assessment systems that determine whether a person can access a loan, or on what terms, are high risk. Note that Annex III expressly carves out AI systems used purely to detect financial fraud.
Insurance: AI systems used in policy underwriting, claims assessment, or premium pricing based on AI-generated risk profiles are high risk.
Public services and administration: AI systems that determine individuals’ access to public benefits or that make social assistance decisions are high risk.
Our AI Act compliance team conducts systematic AI system inventories to identify all high-risk use cases in the company, including those that are not immediately obvious.
AI governance: beyond one-off compliance
The AI Act cannot be approached as a one-time compliance project. It requires establishing an AI governance function within the organisation capable of:
- Assessing new AI systems before deployment
- Maintaining an up-to-date inventory of AI systems and their risk classification
- Managing the lifecycle of high-risk AI systems (including significant changes that trigger a new conformity assessment)
- Training staff who use or oversee AI systems
- Managing incidents and communications with AESIA
- Monitoring AI supplier compliance
BMC helps companies establish this AI governance function proportionate to their size and complexity. For mid-sized companies, this may be a function partly assumed by the existing DPO with external support. For large companies with multiple AI systems, it may require a dedicated governance team.
The data protection dimension: AI Act and GDPR together
High-risk AI systems processing personal data trigger obligations under the AI Act and under GDPR simultaneously. The most important point of intersection is the Data Protection Impact Assessment (DPIA) that GDPR requires for high-risk personal data processing.
According to guidance from the AEPD and the European Data Protection Board, the DPIA of a high-risk AI system should be integrated with the AI Act conformity assessment. This means two separate evaluations should not be conducted — rather, an integrated process that addresses both the risks to individuals’ rights and freedoms (GDPR) and the safety and functioning risks of the AI system (AI Act).
Our data protection and AI Act compliance teams work jointly to develop this integrated assessment process for clients with high-risk AI systems processing personal data.
AI Act and procurement: what to require from AI suppliers
The AI Act has significant implications for how companies procure AI systems. When a company deploys a third-party AI system in a high-risk context, it becomes the deployer with its own compliance obligations. To fulfil those obligations, it needs cooperation and information from the provider.
Contracts with AI suppliers should now routinely include: access to technical documentation sufficient to meet regulatory obligations, notification rights when the system undergoes significant changes, information about training data and model limitations, support in conformity assessment processes, and contractual warranties regarding EU AI Act compliance.
BMC advises companies on the AI Act provisions that must be incorporated into supplier contracts, and assists in reviewing existing AI vendor agreements to identify gaps and renegotiation priorities.
Take the first step
Request a no-obligation consultation and discover what we can do for your business.