Make AI work for people and progress

1843: Lovelace pioneered computer programming.

Today, red tape could bug the system.

What is the issue?

In the 19th century, Ada Lovelace imagined a world where machines could think – laying the foundation for modern computing and artificial intelligence (AI).

Today, that vision powers some of Europe’s most transformative innovations: from generative AI and advanced analytics to automation and cybersecurity tools. All areas where the EU has immense potential to lead globally.

Europe retains its capacity to innovate. Yet if Lovelace were navigating today’s AI landscape, she would encounter an increasingly complex web of EU digital rules – including the AI Act, General Data Protection Regulation, Cyber Resilience Act, NIS2 Directive, Digital Services Act, and the copyright framework. Each plays an important role, but overlaps and unclear definitions risk slowing responsible innovation.

For companies developing and deploying AI systems, these inconsistencies translate into longer product rollouts, diverging national interpretations of laws, and uncertainty about which rules take precedence.

To ensure Europe leads in both trust and innovation, policymakers should focus on three priorities:

1. Make the AI Act work for innovation.

2. Align AI rules to cut duplication.

3. Simplify data and copyright rules to enable AI uptake.

1. Make the AI Act work for innovation

Clear, predictable rules will help Europe become an innovation hub.

Rapid advances in AI and algorithmic tools are boosting productivity, transforming industries, and making the lives of European consumers easier. But uncertainty around timelines, inconsistent definitions, and overlapping obligations across the AI Act, General Data Protection Regulation, and sectoral rules creates unnecessary complexity for responsible AI innovators.

At the same time, Europe’s generative AI (GenAI) ecosystem is thriving: competition is dynamic and innovation is accelerating. Only through proportionate, evidence-based oversight can EU policymakers help ensure this momentum continues.

The way forward for bold and ambitious simplification:

I. Ensure predictable AI Act implementation.

Delay obligations until relevant guidance, standards, and codes of practice are finalised and published (at least 12 months before implementation) to allow meaningful compliance preparation by companies.

II. Enforce AI rules proportionately and consistently.

Ensure supervisory authorities apply risk-based enforcement across AI, cybersecurity, and data-protection frameworks so that compliance efforts focus on high-risk use cases rather than low-risk ones.

III. Allow Europe’s GenAI ecosystem to thrive with oversight.

Monitor market developments and act only when clear harm emerges, while supporting innovation by improving access to capital, simplifying rules, and encouraging AI adoption across industries.

2. Align AI rules to cut duplication

One coordinated approach will make AI compliance simpler and smarter.

AI systems and connected services currently fall under multiple EU laws, from data protection to cybersecurity, each with its own audits, authorities, and timelines. This fragmentation duplicates reporting and causes inconsistent supervision across the European Union.

The way forward for Europe:

I. Clarify and align key definitions across frameworks.

Clarify concepts such as ‘automated decision-making’ to prevent duplication and confusion across the AI Act, General Data Protection Regulation, and the Platform Work Directive.

II. Ensure coordinated enforcement through joint guidance.

Develop shared templates, coordinated oversight, and regular dialogues between the AI Office, data protection authorities, cybersecurity agencies, and national regulators to allow coherent application and legal certainty across the EU.

III. Streamline governance for dual compliance.

Simplify overlapping requirements and use existing compliance tools to meet obligations under new digital legislation – from cybersecurity and content moderation to labour-related data rules.

3. Simplify data and copyright rules to enable AI uptake

Clear data and copyright rules will let Europe train AI responsibly.

AI innovation depends on access to quality data. But misalignment between the AI Act, General Data Protection Regulation (GDPR), and the Copyright Directive leaves developers uncertain about how to use datasets legally and ethically.

The way forward for bold and ambitious simplification:

I. Adopt risk-based and contextual interpretation.

Provide guidance on how key GDPR principles – data minimisation, purpose limitation, and accuracy – apply to AI development and use, especially for general-purpose models. Clear, risk-based interpretation will help developers protect privacy while fostering innovation.

II. Improve access to data for AI and research.

Allow developers to responsibly process special-category data for AI bias detection using privacy-enhancing technologies, ensuring a pragmatic balance between data protection and innovation.

III. Clarify AI Act and copyright boundaries.

Remove or amend provisions that extend EU copyright rules beyond their territorial scope and conflict with international law, ensuring legal certainty and respect for recognised principles.

Simplifying and aligning Europe’s AI framework will empower innovators to turn Lovelace’s vision into real-world progress – ensuring algorithms serve people, creativity, and society across Europe.