Webinar: From Regulatory Compliance to Digital Sovereignty – new tradeoffs for Data Protection, Privacy and Security?
April 6, 2026 @ 4:00 pm – 5:00 pm
Register here: (free; you will receive the Zoom link)
Speakers (to be confirmed):
- Vinton Cerf, one of the “fathers of the Internet”, Vice President and Chief Internet Evangelist for Google, Disability Rights activist
- Geneviève Fieux-Castagnet, Head of AI Ethics, SNCF Group
- Gwendal Le Grand – Deputy Head of the EDPB Secretariat
- Gérald Santucci — President, European Education New Society Association (ENSA); former Head of Unit “Knowledge Sharing” (2012–2016) and “Networked Enterprise & RFID” (2006–2012), DG CONNECT
- Andrea Servida — Retired from the European Commission; former Head of Unit “Knowledge Management & Innovative Systems”, DG CONNECT
Over the last decade, some EU leaders have called on Europeans to be more independent, popularizing the term “strategic autonomy”. Recently, at the Munich Security Conference, French president Emmanuel Macron said that “A stronger Europe would be a better friend for its allies … Europe has to become a geopolitical power. We have to accelerate and deliver all the components of a geopolitical power: defense, technologies and de-risking from all the big powers.” The use of the term “strategic autonomy” is expanding rapidly: it is increasingly linked to the digital domain, where it is generally called “digital sovereignty”, but also to many other domains such as finance, energy, rare earths and other critical materials, and space.
Common to all the uses of the term is the increasingly shared conviction that Europe has become too dependent on foreign providers and big powers like China and the United States. Though strategic autonomy is still subject to various interpretations, the idea prevails in Europe that it is about ensuring control over one’s own future, and hence safeguarding sovereignty, in a new world order in which power is not a public good but an asset to be traded, and rules-based international law is replaced with the rule of the bully. As US president Donald Trump continues to use tariffs as leverage in economic and political disputes, US-China relations are subject to constant turbulence, swaying between trade war and temporary truce. The end result has the appearance of a game of musical chairs played by the US, China and the EU, reflecting a redistribution of political alliances and a redefinition of trade strategies (e.g., Argentina, China, EU, India, Indonesia, Japan, Malaysia, UK etc. for the US; Australia, Chile, India, Indonesia, Malaysia, Mercosur etc. for the EU). Regarding “middle powers”, Canada’s prime minister Mark Carney put it very clearly at Davos: “Today, I’ll talk about the rupture in the world order, the end of a nice story and the beginning of a brutal reality where geopolitics among the great powers is not subject to any constraints. But I also submit to you that other countries, particularly middle powers like Canada, are not powerless. They have the capacity to build a new order that embodies our values, like respect for human rights, sustainable development, solidarity, sovereignty and territorial integrity of states. The power of the less powerful begins with honesty.”
In this context, the terms “strategic autonomy” and “digital sovereignty” are not just a matter of semantics. “To call things by incorrect names is to add to the world’s misery”, Albert Camus wrote (in Sur une philosophie de l’expression, 1944). They are the natural reaction of nation-states in a world shaken by instability and uncertainty. Take the example of artificial intelligence (AI): over recent years, four Global AI Summits have taken place, but their key focus shifted from Safety (Bletchley Park, 2023) to Innovation/Inclusion (Seoul, 2024) to Creativity/Use Cases (Paris, 2025), and finally to Impact (New Delhi, 2026). Along the way, the spirit of cooperation has given way to a spirit of competition for investments in data centers, skills and nuclear energy, a competition theatrically staged in the media.
In Europe, where the General Data Protection Regulation (GDPR) turns ten and the EU AI Act comes into force, the idea of Europe’s digital sovereignty has gained momentum. GDPR, which requires companies around the world to obtain consent from natural persons to use their data, explain what they do with their data, and alert them every time that data is breached, has become a global data protection benchmark because of its extraterritorial reach and potentially high penalties. It is today the most emblematic example of how the EU’s laws can shape global practices without the need for trade agreements or coercion. The dynamic it has created has been conceptualized as the “Brussels Effect”: a regulatory superpower exports its rules not by force, but by gravity. Countries like Japan, South Korea, Brazil, Kenya, India, and others adopted GDPR-like regimes, driven less by fear of or sympathy for Brussels and more by the need to gain access to its single market.
But data protection was just the start. EU competition law became another template. High-profile cases against US big tech shaped antitrust debates in Australia and India. Environmental rules like REACH set the bar for product safety across entire supply chains. Then, the momentum carried into new terrain with the Digital Markets Act (DMA) that challenges the domination of digital platforms, and the AI Act that provides a global risk-based framework for AI. The message is clear: for companies to do business in Europe, they must play by its rules. And often, those rules spread elsewhere, too.
However, by the early 2020s signs of pushback began to emerge, from both corporate lobbyists and national governments. The language shifted from alignment to autonomy and from compliance to sovereignty – the new regulatory strategy. Regulatory convergence no longer seemed inevitable. Indeed, economic protectionism and techno-nationalism are driving fragmentation across digital, trade, and industrial policy. In sectors like AI and cybersecurity, governments are prioritizing self-reliance over harmonization, even at the expense of efficiency. Today, global rules are less about unilateral export and more about mutual influence and negotiated convergence, shaped by shifting constellations of power and interest. GDPR inspired near-universal imitation, but the new EU digital laws like the AI Act are entering a field already populated by rival standards, including the US risk-based, industry-led approach and China’s state-centric data governance regime.
Not surprisingly, the Draghi Report on EU Competitiveness, published in 2024 and making sense of this new dynamic, found that while the ambitions of the EU’s GDPR and AI Act are commendable, their complexity and the risk of overlaps and inconsistencies can undermine developments in the field of AI by EU industry actors: “The EU faces now an unavoidable trade-off between stronger ex ante regulatory safeguards for fundamental rights and product safety, and more regulatory light-handed rules to promote EU investment and innovation, e.g. through sandboxing, without lowering consumer standards. This calls for developing simplified rules and enforcing harmonized implementation of the GDPR in the Member States, while removing regulatory overlaps with the AI Act (…) Estimates point to high GDPR compliance costs, up to EUR 500,000 for SMEs and up to EUR 10 million for large organizations. Furthermore, due to these compliance costs, EU companies decreased data storage by 26% and data processing by 15% in relation to comparable US companies.”
Thanks to the new Digital Package released at the end of 2025, Europe’s businesses are expected to spend less time on administrative work and compliance and more time innovating and scaling up. This initiative aims at opening opportunities for European companies to grow and to stay at the forefront of technology while at the same time upholding Europe’s highest standards of fundamental rights, data protection, safety and fairness. At its core, the package includes a “Digital Omnibus” that streamlines rules on artificial intelligence (AI), cybersecurity and data, complemented by a Data Union Strategy to unlock high-quality data for AI and European Business Wallets that will offer companies a single digital identity to simplify paperwork and make it much easier to do business across EU Member States. The package aims to ease compliance, with simplification efforts estimated to save up to €5 billion in administrative costs by 2029. Additionally, the European Business Wallets could unlock another €150 billion in savings for businesses each year.
What comes out of this evolution is the realization that the digital transition is reshaping how people, objects and environments interact. The original Internet – let’s call it the “Internet of People” – has been complemented by an “Internet of Things”, which today is itself complemented by an “Internet of Agents”. In a world where analogue and digital have merged, questions arise: how can we regulate effectively? What are the trade-offs between seizing opportunities and mitigating harms? How can we protect one group without unduly restricting the tools others need to work and innovate? Which policy instruments are fit for purpose in a world where interests and strategies vary significantly?
The answers are not easy. For example, without careful design to safeguard the Internet’s operation and maintain citizens’ and businesses’ trust in it, governance models meant to support digital sovereignty can have unintended side-effects for the Internet itself. Furthermore, the original text of the European Commission’s Digital Omnibus included a reform of the definition of pseudonymized personal data, which has been resisted by the European Data Protection Board (EDPB) and the European Data Protection Supervisor (EDPS): in order to avoid increasing legal uncertainty, the definition of personal data “should say what personal data is, instead of what it is not”.
This Webinar will cover the broad aspects of the above dynamic – what are the next steps in governance now, can there be a new global governance, what are the lessons learned from the web and the IoT – as well as some of its specific aspects: what will privacy and security look like in ten years? Will identity be built in at the level of the chip or the operating system? How can we operationalize ethical guidelines?
