A call from participants in the International Forum on Digital and Democracy at the World AI Cannes Festival 2026
Cannes, 12 February 2026
We gather in Cannes at a moment when Artificial Intelligence (AI), and in particular Generative AI systems, is permeating daily life, markets, and public administration with extraordinary speed. What only recently appeared as a set of experimental tools is increasingly presented as the new interface to knowledge, services, and civic participation. In many places, it is also becoming a permanent companion: a conversational layer through which people organise their work, seek advice, express fears, and disclose fragments of their inner lives, in the hope of assistance, companionship, or reassurance.
This Declaration is addressed to governments, parliaments, international organisations, regulators, technology companies, investors, research institutions, media organisations, civil society, and citizens worldwide. It is offered in continuity with the Rome Declaration on Media Ecology and Technology Diplomacy and the Universal Declaration of the Rights of the Human Mind, while focusing here on a challenge that goes beyond media content: the sovereignty of the mind in the age of anthropomorphic AI and technologies of persuasion.
In Cannes, we have convened a public dialogue and a closed-door roundtable to confront what is, above all, a democratic risk: the rise of Artificial Intelligence as a technology of hyper-persuasion, or worse, a technology that appeals exclusively to human emotional vulnerabilities and is designed to bypass human reflection. These systems are not merely tools for productivity or information retrieval; they increasingly shape what people notice, trust, remember, think, and choose. In short, they are the new means and vehicles of knowledge. This is not a secondary effect. It is becoming a business model and, in some contexts, a geopolitical instrument.
We face a deeper challenge still: the emergence of an economy and an infrastructure aimed at the formation and reshaping of thought itself, one that challenges the very identity of humanity and its unique “human power”. When conversational systems mediate access to information and personalise influence at scale, they can become technologies of persuasion that operate continuously and asymmetrically, establishing simulated intimate relationships that many users experience as real and that can condition the formation of beliefs, preferences, and decisions. This is the “capitalism of minds”: value extraction that goes beyond attention to shape preferences, beliefs, and behaviour, while learning from users’ unguarded inner lives. No democracy should accept a situation in which the cognitive autonomy and freedom of entire populations depend on the incentives, policies, or vulnerabilities of a handful of gatekeepers.
In this context, we affirm a principle that must be explicitly defended: sovereignty of the mind. Article 18 of the Universal Declaration of Human Rights recognises the right to freedom of thought; in the digital age, that freedom must include protection against industrial-scale manipulation of human thought. The gravest risk is not hypothetical: a very small number of private or state actors, controlling the dominant models and the channels through which people interact with them, may acquire the power to steer the cognitive environment of hundreds of millions of individuals. We therefore petition for action that is demanding yet practical, values-based yet technologically informed, rooted in fundamental rights yet attentive to the strategic realities of our time. We do not ask for nostalgia, nor for a futile attempt to stop innovation. We ask for the conditions under which innovation can remain compatible with democracy and fundamental freedoms, including a firm boundary against systems designed or used to manipulate thought at scale or to evade human thought and reflection.1
We begin with human dignity and agency: people must not be reduced to datasets to be mined, nor to targets to be nudged. We recognise that human qualities cannot be represented or replicated in data, but have a unique status that must be defended. We call for clear limits on the design and deployment of anthropomorphic AI services whose purpose or foreseeable effect is to establish simulated intimacy. Such systems must not use emotional manipulation, “emotional traps”, or dependency-by-design, nor should they be engineered to replace social interaction, control users’ psychology, or induce addiction as a design goal. Providers should be required to build in emotional boundary guidance, dependency-risk warnings, and straightforward exit routes, and to assume responsibility for these safeguards throughout the service’s lifecycle. In public life, we call for safeguards against covert behavioural manipulation, including heightened scrutiny where such systems are used in political communication, civic information, education, employment, or access to essential services. The right to form one’s opinions freely must be treated as a practical design constraint, not as an abstract aspiration. Reflection and democratic discourse among humans must be encouraged rather than discouraged, whether through the design of communication or through built-in pressures to act.
We ask for measures to ensure strategic democratic resilience and a mature form of technological sovereignty that contributes to global openness rather than fragmentation. If the infrastructure of human thought becomes a central lever of power, democracies cannot remain structurally dependent on external actors for core capabilities. We therefore call for the creation of more European foundational models, consistent with European values and governed in the public interest. These models should be open-source and openly available for reuse by European companies and public institutions, with a binding commitment that they not be used for manipulative anthropomorphic services. Such an investment in public-interest infrastructure, from research and talent to trustworthy compute and open standards, is a call not to withdraw from the world, but to participate in it as a capable partner, able to uphold rights and public values rather than merely importing systems shaped elsewhere.
We ask for a step change in technology diplomacy and a global compact establishing red lines, by design, for AI supply chains, standards, risks, and cross-border influence operations. This demands a form of diplomacy that is technically literate and values-aware, able to convene states, companies, academia, and civil society in the same room, and able to translate principles into workable arrangements. We support the development of dedicated diplomatic capacity to address the socio-technical dimensions of AI, technologies of persuasion, and anthropomorphic interactive services. In particular, we encourage the exploration of a European Technology Diplomacy initiative designed to cooperate internationally while strengthening the expertise, negotiation capacity, and institutional memory required for this new domain, and to bring about a global coalition for red lines on AI, as outlined above.
We ask for education worthy of the moment. The resilience of democracy depends on the public’s ability to discern, deliberate, and participate, and to recognise when simulated intimacy is being used as a channel for influence. On the one hand, AI and digital literacy should be strengthened, while higher-level cognitive skills such as critical and analytical thinking must be preserved. On the other, AI ethics literacy and civic resilience, together with enforceable AI law that sets the necessary boundaries and incentives to innovate in the public interest, are not optional add-ons but vital civic infrastructure for any state. We call on parliaments, governments, educators, media, technology companies, and civil society to support the adoption of such rules and to support lifelong learning; professional training for civil servants, judges, and policymakers; and ethical formation for engineers and designers, so that those who build and deploy these systems understand the public sphere they are shaping.
As signatories, we commit to treating these aims not as rhetoric, but as a programme of work. We will use our respective roles to promote independent evaluation and accountability; to oppose the normalisation of opaque persuasion systems; to support public-interest research and infrastructures; and to build coalitions capable of turning principles into tangible measures.
We invite all participants in the World AI Cannes Festival (WAICF) and all organisations aligned with these aims to endorse this Declaration, which bridges cognitive and digital sovereignty in a new, broader framework. We ask public authorities and institutional leaders to respond publicly and to identify concrete measures they will take within the next twelve months.
Signed in Cannes, 12 February 2026.
1 Cf. UNESCO’s work on ‘neuro rights’, including “Freedom of thought, cognitive liberty and free will. External interference in brain activity could undermine free will and personal responsibility, affecting justice and social systems”. Find out more on UNESCO’s page on the Ethics of Neurotechnology, available at https://www.unesco.org/en/ethics-neurotech.