Make AI work for people and progress

1843: Lovelace pioneered computer programming.

Today, red tape could bug the system.

What is the issue?

In the 19th century, Ada Lovelace imagined a world where machines could think – laying the foundation for modern computing and artificial intelligence (AI).

Today, that vision powers some of Europe’s most transformative innovations – from generative AI and advanced analytics to automation and cybersecurity tools – all areas where the EU has immense potential to lead globally.

Europe retains its capacity to innovate. Yet if Lovelace were navigating today’s AI landscape, she would encounter an increasingly complex web of EU digital rules – including the AI Act, General Data Protection Regulation (GDPR), Cyber Resilience Act (CRA), NIS2 Directive, Digital Services Act (DSA), and copyright framework. Each plays an important role, but overlaps and unclear definitions risk slowing responsible innovation.

For companies developing and deploying AI systems, these inconsistencies translate into longer product rollouts, diverging national interpretations of laws, and uncertainty about which rules take precedence.

To ensure Europe leads in both trust and innovation, policymakers should focus on three priorities:

1. Make the AI Act work for innovation.

2. Align rules to cut duplication.

3. Simplify data and copyright rules to enable AI uptake.

Simplifying and aligning Europe’s AI framework will empower innovators to turn Lovelace’s vision into real-world progress – ensuring algorithms serve people, creativity, and society across Europe.

1. Make the AI Act work for innovation

Clear, predictable rules will help Europe become an innovation hub.

Rapid advances in AI and algorithmic tools are powering a broad range of applications and digital services across Europe – boosting productivity, transforming entire industries, and making the lives of European consumers easier. But uncertainty around implementation timelines, inconsistent definitions, and overlapping risk-assessment obligations across the AI Act, GDPR, and sectoral rules create unnecessary complexity for responsible AI innovators.

At the same time, Europe’s generative AI (GenAI) ecosystem is thriving: competition is dynamic and innovation is accelerating. With proportionate, evidence-based oversight, EU policymakers can help ensure this momentum continues, fostering investment and broad adoption of AI technologies that benefit European users and businesses alike.

The way forward for Europe:

I. Ensure practical and predictable AI Act implementation.

Delay obligations until relevant guidance, standards, and codes of practice are finalised, and make them available at least 12 months before the obligations take effect.

II. Promote proportionate and innovation-friendly enforcement.

Ensure supervisory authorities apply consistent, risk-based enforcement across AI, cybersecurity, and data-protection frameworks so that compliance efforts focus on high-risk use cases rather than low-risk ones.

III. Support Europe’s thriving GenAI ecosystem with proportionate, evidence-based oversight.

Monitor market developments closely, but act only when clear harm emerges. Lower entry barriers by improving access to capital, simplifying rules, and promoting GenAI adoption across industries.

2. Align rules to cut duplication

One coordinated approach will make AI compliance simpler and smarter.

AI systems and connected services currently fall under multiple EU laws – from data protection to cybersecurity – each with its own audits, authorities, and timelines. This fragmentation duplicates reporting obligations and leads to inconsistent supervision across the European Union.

The way forward for Europe:

I. Clarify and align key definitions across frameworks.

Clarify concepts such as ‘automated decision-making’ to avoid duplication across the AI Act, GDPR, and Platform Work Directive (PWD).

II. Issue joint guidance to align enforcement and avoid diverging interpretations.

Shared templates, coordinated oversight, and structured dialogue between the European Commission’s AI Office, data protection authorities, cybersecurity agencies, and national supervisory authorities would promote coherent application of rules and legal certainty across the EU single market.

III. Streamline governance for dual compliance.

Simplify overlapping requirements and leverage existing compliance tools to meet relevant obligations under all recently enacted digital legislation – from cybersecurity to content-moderation and labour rules – particularly provisions relating to data.

3. Simplify data and copyright rules to enable AI uptake

Clear data and copyright rules will let Europe train AI responsibly.

AI innovation depends on access to quality data. But misalignment between the AI Act, GDPR, and Copyright Directive leaves developers uncertain about how to use datasets legally and ethically.

The way forward for Europe:

I. Adopt risk-based and contextual interpretation.

Provide guidance on how core GDPR principles (including data minimisation, purpose limitation, and accuracy) apply to AI development and use, particularly for general-purpose models. Consistent, risk-based guidance will help developers apply privacy safeguards meaningfully rather than mechanically, ensuring AI innovation and accountability go hand in hand.

II. Improve access to data for AI and research.

Adopt a pragmatic approach so developers can responsibly process special-category data for AI bias detection using privacy-enhancing technologies.

III. Simplify AI Act and copyright application while respecting territorial limits.

Remove or amend provisions that extend EU copyright rules beyond their territorial scope and run counter to international law.