
AI Act: Complete Guide for Companies in Spain — Obligations, Deadlines and Fines Under the EU AI Regulation

The EU AI Act is already in force. Find out if your company is affected, what obligations apply at each risk level, and what the penalties for non-compliance are.

Assess the risk level of my AI systems

The problem

Regulation (EU) 2024/1689, known as the AI Act, entered into force on 1 August 2024 and applies progressively depending on the risk level of AI systems. Prohibited AI practices have applied since 2 February 2025, general-purpose AI (GPAI) model obligations since 2 August 2025, and obligations for high-risk AI systems apply from 2 August 2026.

The problem most companies in Spain face is threefold: they do not know whether their AI systems fall within the regulation's scope, how to classify the risk level of the systems they use or deploy, or what practical obligations apply to them. A misclassification of risk can mean either overlooking critical obligations or facing fines of up to €35 million or 7% of global turnover for the most serious violations.

The AI Act does not only affect companies that develop AI systems. It affects all companies that deploy third-party AI systems in critical processes (recruitment, credit scoring, critical infrastructure management) and those using AI systems to make or support decisions that significantly affect individuals. The scope is considerably broader than most companies assume.

Our solution

BMC offers an AI Act compliance service that starts from where your company is today. We begin with an AI systems inventory and risk classification under the regulation's criteria, proceed with a gap analysis against applicable obligations, and develop the AI governance framework the company needs for sustained compliance.

Our interdisciplinary team combines lawyers specialised in technology regulation and data protection with AI governance experts who understand both the technical dimension (how AI systems work, how they are documented, how they are audited) and the legal dimension (what obligations the regulation imposes, how compliance is demonstrated, what records must be maintained). We integrate AI Act compliance with the GDPR (AI systems processing personal data generate obligations under both frameworks), with the harmonised standards the European Commission is developing, and with international AI governance frameworks such as the NIST AI RMF.

Process

How we do it

1

AI systems inventory and classification

We identify all AI systems the company develops, deploys, or uses in its processes and classify them under the AI Act's four risk levels: unacceptable (prohibited), high risk, limited risk, and minimal risk. Classification determines which obligations apply to each system and with what urgency.

2

Obligations gap analysis

For each identified high-risk AI system, we assess compliance with the regulation's obligations: training and input data, technical documentation, transparency and explainability, human oversight, accuracy and robustness, system cybersecurity, event logging, and auditability.

3

Compliance roadmap

We develop a prioritised roadmap to achieve compliance: technical documentation of systems, conformity assessment, registration in the EU high-risk AI systems database, implementation of human oversight controls, and adaptation of contracts with AI suppliers.

4

AI governance framework

We establish the organisational AI governance framework: AI use policy, evaluation process for new systems before deployment, roles and responsibilities (including, where applicable, an AI Officer role), staff training, and mechanisms for periodic monitoring and review of the framework.

€35M
Maximum fine (or 7% of global turnover) for prohibited AI practices
Aug 2026
Application date for high-risk AI system obligations
4 risk levels
Categories determining each company's compliance obligations

Download our guide

Download our AI Act self-assessment tool: classify the risk level of your AI systems in 15 minutes

Regulation (EU) 2024/1689 on Artificial Intelligence, in force since 1 August 2024, is the world’s first comprehensive legal framework for artificial intelligence. Unlike previous sectoral approaches, the AI Act adopts a horizontal, risk-based perspective: the higher the potential risk of an AI system to fundamental rights, health, and safety, the greater the obligations imposed on those who develop or deploy it.

The regulation is directly applicable in all EU member states without the need for national transposition, meaning its obligations apply equally in Spain regardless of whether national implementing legislation has been adopted. The only relevant exception relates to the national supervisory authority’s (AESIA’s) full enforcement powers, which do require a national legislative framework.

AI Act application timeline: what is already mandatory

The AI Act does not apply all at once. It has a staggered entry into force that is important to understand:

1 August 2024 — Entry into force: The regulation became binding law and the transition periods for each set of obligations started running. Actors must begin preparing.

2 February 2025 — Prohibitions: Unacceptable AI practices — subliminal manipulation, social scoring, emotion recognition in workplaces and education, real-time remote biometric identification in public spaces (subject to narrow law-enforcement exceptions) — are prohibited from this date. Any company still operating such systems must discontinue them immediately.

2 August 2025 — GPAI: General-purpose AI models have documentation, transparency, and copyright obligations from this date.

2 August 2026 — High-risk systems (Annex III): High-risk AI systems placed on the market from this date must comply with all Chapter III obligations before deployment.

2 August 2027 — Embedded high-risk systems (Annex I): High-risk AI systems that are safety components of products already regulated by other EU harmonisation legislation (machinery, medical devices, aviation products) have an additional year to adapt.

The four risk levels and their practical implications

The AI Act classifies AI systems into four risk categories, each with a different obligations regime:

Unacceptable risk (prohibited): Systems banned for unacceptably compromising fundamental rights. No company may operate them. If any internal process uses techniques of this type, it must have been discontinued or modified before 2 February 2025.

High risk: Systems with significant potential impact on individuals in critical areas. The most heavily regulated: mandatory technical documentation, conformity assessment, European registration, mandatory human oversight. The Annex III catalogue includes situations that many companies in non-technology sectors may not have identified as “high-risk AI” — such as CV screening tools or credit scoring systems.

Limited risk: Systems that interact with people — chatbots, AI-generated content — with transparency obligations: users must know they are interacting with AI, that content is AI-generated, or that their image or voice has been AI-manipulated (deepfakes). Generative models such as text-to-image tools or customer service chatbots fall in this category.

Minimal risk: The vast majority of AI applications — spam filters, content recommendations, AI productivity tools — fall into this category. They have no specific obligations under the AI Act, though companies developing them may adhere to voluntary codes of conduct.
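To make the tiering concrete, here is a minimal, purely illustrative Python sketch of how a first-pass internal triage checklist might map a use case to the four tiers. The use-case labels and their tier assignments are simplified assumptions for demonstration, not a legal classification — a real assessment requires case-by-case legal analysis.

```python
# Illustrative only: a simplified first-pass triage helper,
# NOT a legal risk assessment under the AI Act.
# Use-case labels and tier assignments are assumptions for demonstration.

PROHIBITED = {"social_scoring", "subliminal_manipulation",
              "workplace_emotion_recognition"}
HIGH_RISK = {"cv_screening", "credit_scoring", "insurance_pricing",
             "public_benefits_eligibility"}
LIMITED_RISK = {"customer_chatbot", "generated_content", "deepfake"}

def triage(use_case: str) -> str:
    """Map a use case to a provisional AI Act risk tier (triage only)."""
    if use_case in PROHIBITED:
        return "unacceptable (prohibited since 2 Feb 2025)"
    if use_case in HIGH_RISK:
        return "high risk (Annex III obligations from 2 Aug 2026)"
    if use_case in LIMITED_RISK:
        return "limited risk (transparency obligations)"
    return "minimal risk (no specific AI Act obligations)"

print(triage("cv_screening"))  # provisional: high risk
```

The point of a helper like this is not automation of a legal judgment, but forcing every system in the inventory through the same checklist so that nothing is classified by intuition alone.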

High-risk AI systems in non-technology sectors

One of the most common surprises for companies is discovering they use high-risk AI systems without having identified them as such. Examples from the AI Act’s Annex III:

Human Resources: Any AI system used in candidate selection, performance evaluation, promotion management, or employee behaviour monitoring is high risk under the AI Act. This includes CV screening tools, AI-powered video interview analysis systems, and productivity analytics platforms.

Banking and credit: Credit scoring systems, creditworthiness assessment, and fraud detection models that determine whether a person can access a loan or on what terms are high risk.

Insurance: AI systems used in policy underwriting, claims assessment, or premium pricing based on AI-generated risk profiles are high risk.

Public services and administration: AI systems that determine individuals’ access to public benefits or that make social assistance decisions are high risk.

Our AI Act compliance team conducts systematic AI system inventories to identify all high-risk use cases in the company, including those that are not immediately obvious.

AI governance: beyond one-off compliance

The AI Act cannot be approached as a one-time compliance project. It requires establishing an AI governance function within the organisation capable of:

  • Assessing new AI systems before deployment
  • Maintaining an up-to-date inventory of AI systems and their risk classification
  • Managing the lifecycle of high-risk AI systems (including significant changes that trigger a new conformity assessment)
  • Training staff who use or oversee AI systems
  • Managing incidents and communications with AESIA
  • Monitoring AI supplier compliance

BMC helps companies establish this AI governance function proportionate to their size and complexity. For mid-sized companies, this may be a function partly assumed by the existing DPO with external support. For large companies with multiple AI systems, it may require a dedicated governance team.

The data protection dimension: AI Act and GDPR together

High-risk AI systems processing personal data trigger obligations under the AI Act and under GDPR simultaneously. The most important point of intersection is the Data Protection Impact Assessment (DPIA) that GDPR requires for high-risk personal data processing.

According to guidance from the AEPD and the European Data Protection Board, the DPIA of a high-risk AI system should be integrated with the AI Act conformity assessment. This means that instead of conducting two separate evaluations, companies should run a single integrated process addressing both the risks to individuals' rights and freedoms (GDPR) and the safety and functioning risks of the AI system (AI Act).

Our data protection and AI Act compliance teams work jointly to develop this integrated assessment process for clients with high-risk AI systems processing personal data.

AI Act and procurement: what to require from AI suppliers

The AI Act has significant implications for how companies procure AI systems. When a company deploys a third-party AI system in a high-risk context, it becomes the deployer with its own compliance obligations. To fulfil those obligations, it needs cooperation and information from the provider.

Contracts with AI suppliers should now routinely include: access to technical documentation sufficient to meet regulatory obligations, notification rights when the system undergoes significant changes, information about training data and model limitations, support in conformity assessment processes, and contractual warranties regarding EU AI Act compliance.

BMC advises companies on the AI Act provisions that must be incorporated into supplier contracts, and assists in reviewing existing AI vendor agreements to identify gaps and renegotiation priorities.

FAQ

Frequently asked questions

Who is affected by the AI Act?

The AI Act affects four categories of actors: (1) providers that develop AI systems and place them on the EU market; (2) deployers that use third-party AI systems in their professional activities; (3) importers of AI systems developed outside the EU; and (4) distributors of AI systems. The key point is that the AI Act does not only affect technology companies developing AI: it affects any company that deploys AI systems in its processes — including CV screening tools, credit scoring systems, customer-facing chatbots, or predictive analytics tools — when those systems have a degree of autonomy and produce outputs that influence relevant decisions.

Which AI systems are high risk and what obligations do they carry?

Annex III of the AI Act lists AI systems that are considered high risk by default: biometric systems, critical infrastructure management systems, educational systems that determine access or outcomes, employment systems (recruitment, performance assessment, workforce management), essential public and private service systems (credit scoring, insurance assessment), law enforcement systems, justice administration systems, and systems affecting democratic processes. Obligations for these systems include: implementing a risk management system, ensuring training data quality, producing technical documentation, guaranteeing human oversight, meeting defined accuracy and robustness levels, and registering the system in the EU database before deployment.

Which AI practices are prohibited?

Since 2 February 2025, the following AI systems are prohibited in the EU: subliminal or manipulative techniques to influence human behaviour beyond the person's awareness; techniques exploiting vulnerabilities of specific groups (age, disability); social scoring systems (by public or private actors); real-time remote biometric identification in public spaces (with narrow law enforcement exceptions); emotion recognition systems in workplace and educational settings; biometric categorisation to infer sensitive characteristics (race, sexual orientation, religion, political views); and AI systems used for criminal prediction based on individual profiling.

What obligations apply to general-purpose AI (GPAI) models?

General-purpose AI models (GPAI) are AI models — such as large language models (LLMs) — capable of performing a wide range of different tasks. Their obligations have applied since August 2025 and include: preparing and maintaining up-to-date technical documentation, providing information to providers integrating the model in their systems, complying with EU copyright policy, and publishing a summary of training data used. For GPAI with systemic risk (the most powerful models, currently defined as exceeding 10^25 FLOPs of training compute), additional obligations apply: adversarial evaluation, notifying serious incidents to the European Commission, ensuring model cybersecurity, and declaring energy consumption.

What are the penalties for non-compliance?

The AI Act establishes three penalty tiers: (1) up to €35 million or 7% of global annual turnover (whichever is higher) for engaging in prohibited AI practices; (2) up to €15 million or 3% for non-compliance with other obligations in the regulation; (3) up to €7.5 million or 1.5% for providing incorrect, incomplete, or misleading information to authorities. For SMEs and startups, the regulation caps each fine at whichever of the two amounts is lower, to keep penalties proportionate. National supervisory authorities will be able to access companies' systems, data, and documentation.

How do the AI Act and GDPR interact?

The AI Act and GDPR are complementary frameworks that apply simultaneously when an AI system processes personal data. GDPR remains the reference framework for personal data protection. The AI Act adds a layer of regulation governing the AI system itself, regardless of whether it processes personal data. In practice, many high-risk AI systems are also heavily reliant on personal data (recruitment, credit scoring), which triggers obligations under both frameworks: AI Act conformity assessment + GDPR DPIA, AI Act technical documentation + GDPR record of processing activities, AI Act human oversight + GDPR data subject rights. Existing Data Protection Officers (DPOs) should extend their function to cover the AI Act dimension.

Who supervises AI Act compliance in Spain?

The European Commission's AI Office is the supervisory authority for GPAI models and has a coordinating role at EU level. In Spain, the Spanish Agency for the Supervision of Artificial Intelligence (AESIA), established in 2024, is the national supervisory authority for the AI Act. AESIA will supervise compliance by providers and deployers established in Spain, investigate incidents and complaints, and impose the regulation's penalties. AESIA collaborates with the AEPD (Spanish data protection authority) in cases where the AI Act and GDPR overlap.

What if my company only uses third-party AI tools?

If your company uses third-party AI tools in its processes, it has the status of a deployer under the AI Act. Your obligations as a deployer include: (1) using the AI system in accordance with the provider's instructions; (2) ensuring staff using it have adequate training; (3) maintaining activity logs where the system requires it; and (4) implementing human oversight. If the AI tool is used in a high-risk context under Annex III — for example, if you use an AI system to evaluate job applications or for credit decisions — the obligations are more demanding even if you did not develop the system yourself. We recommend auditing your company's specific AI use cases and classifying their risk level.
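The "whichever is higher" rule in the penalty tiers above means the effective ceiling scales with company size. A quick illustrative calculation (the function name and defaults are ours; only the €35M / 7% figures come from the regulation's top tier):

```python
# Illustrative arithmetic for the AI Act's top penalty tier:
# the cap is the HIGHER of a fixed amount and a percentage of turnover.
def fine_cap(annual_turnover_eur: float,
             fixed_eur: float = 35_000_000,
             pct: float = 0.07) -> float:
    """Return the maximum possible fine for a given global annual turnover."""
    return max(fixed_eur, pct * annual_turnover_eur)

# A company with €1bn turnover: 7% is €70m, which exceeds the €35m floor.
print(fine_cap(1_000_000_000))  # 70000000.0
# A company with €100m turnover: 7% is €7m, so the €35m floor applies.
print(fine_cap(100_000_000))    # 35000000.0
```

The fixed amount thus acts as a floor on the ceiling: small absolute turnover does not bring the top-tier cap below €35 million (except for SMEs, where the regulation applies the lower of the two amounts instead).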

Take the first step

Request a no-obligation consultation and discover what we can do for your business.

Call · Contact