Key points of the new regulation
The regulation classifies AI systems according to the risk they pose (a brief illustrative sketch follows the list):
1. Unacceptable-risk AI: Systems that infringe on fundamental rights, such as mass surveillance or behavioral manipulation, are prohibited outright.
2. High-risk AI: Systems used in areas such as healthcare, justice, recruitment or access to credit, which must meet strict transparency and safety requirements.
3. Limited-risk AI: Subject to transparency obligations, such as chatbots that must disclose to users that they are interacting with an AI system.
4. Minimal-risk AI: Not subject to additional obligations, such as spam filters or content recommendation systems.
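As a purely illustrative sketch (the example systems and the mapping below are assumptions chosen to mirror the examples in this article, not the Act's own classification criteria), the tiered approach can be thought of as assigning each AI use case to one of the four levels above, with obligations growing as the risk increases:

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk tiers of the EU AI Act, from highest to lowest."""
    UNACCEPTABLE = "prohibited outright"
    HIGH = "strict transparency and safety requirements"
    LIMITED = "transparency obligations (disclose that AI is in use)"
    MINIMAL = "no additional obligations"

# Hypothetical mapping of example systems to tiers, mirroring the
# examples in this article; real classification follows the Act's own
# criteria and annexes, not a simple lookup table like this one.
EXAMPLE_SYSTEMS = {
    "mass surveillance": RiskTier.UNACCEPTABLE,
    "behavioral manipulation": RiskTier.UNACCEPTABLE,
    "recruitment screening": RiskTier.HIGH,
    "credit scoring": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}

for system, tier in EXAMPLE_SYSTEMS.items():
    print(f"{system}: {tier.name} -> {tier.value}")
```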
Legislation on Artificial Intelligence in Spain and Europe
In Spain, the regulation of Artificial Intelligence is aligned with European rules. Initiatives such as the National Artificial Intelligence Strategy (ENIA), approved in 2020, seek to promote the ethical and safe development of AI.
At the European level, the EU Artificial Intelligence Regulation (AI Act) is the first major piece of legislation devoted to AI. Adopted in 2024, it establishes binding obligations for AI systems based on their level of risk and sets penalties for companies that fail to comply.
Impact on companies and users
- Technology companies: They will have to ensure regulatory compliance and conduct audits of their AI systems.
- Users and consumers: Data protection will be strengthened and tools will be provided to challenge automated decisions.
- Public administrations: Their use of AI systems in surveillance, personnel selection and administrative procedures will be regulated.
Penalties for non-compliance
Fines for the most serious infringements can reach up to €35 million or 7% of a company's worldwide annual turnover, whichever is higher, following a sanction model similar to that of the GDPR (General Data Protection Regulation).
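As a rough illustration of scale (the turnover figure below is hypothetical, and the thresholds shown are those for the most serious infringements), the ceiling works out as the higher of the fixed amount and the turnover percentage:

```python
# Maximum fine for the most serious infringements: the higher of
# EUR 35 million or 7% of worldwide annual turnover.
def max_fine_eur(worldwide_annual_turnover_eur: float) -> float:
    return max(35_000_000, 0.07 * worldwide_annual_turnover_eur)

# Hypothetical company with EUR 2 billion in global turnover:
print(f"{max_fine_eur(2_000_000_000):,.0f} EUR")  # 140,000,000 EUR
```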
In summary, the EU seeks to balance innovation with the protection of fundamental rights, ensuring responsible use of AI. Companies are advised to start adapting their systems and policies now, before the regulation becomes fully applicable in 2026.