Data now sits at the heart of economic value creation. Yet the promise of meaningful privacy protection continues to collide with the realities of digital markets.
If consumers repeatedly say they are worried about how their personal data is used, why do they keep handing it over so willingly? And how do companies weigh regulatory compliance and the risk of sanctions against mounting competitive pressure, economic uncertainty, and the disruptive force of artificial intelligence?
These were the questions explored during a roundtable discussion organized by the International Chair on Smart City Uses and Practices (Cit’Us) and moderated by Dr. Anne-Sophie Cases, Professor at IAE Montpellier. Researchers and digital industry practitioners came together to unpack what has become known as the privacy paradox.
The Privacy Paradox: When Stated Concerns Fail to Shape Actual Behavior
The privacy paradox describes a striking inconsistency: people regularly claim to care deeply about protecting their personal data, yet their actions suggest otherwise.
This disconnect is particularly visible in cookie consent banners, where users overwhelmingly avoid granular settings.
“The binary option remains by far the most common choice,” explains Romain Bessuges-Meusy, co-founder of Axeptio. “On average, about 60% of users give blanket consent to the site they are visiting, while only 0.05% actually take the time to fine-tune their preferences and choose which partners may or may not receive their data.”
According to researchers, this gap between intention and behavior is largely driven by cognitive and time-related costs.
“Understanding what granular consent really means, anticipating how it might affect the browsing experience, all of this requires effort, skills, and attention that most consumers are simply unwilling to invest,” says Dr. Audrey Portes, Assistant Professor and Researcher at MBS.
A broader sense of resignation also plays a role, particularly among younger generations.
“If you grew up with Google, Snapchat, or Instagram, the circulation of personal data is already part of the background. The feeling that ‘the damage is already done’ ultimately feeds a form of fatalism,” adds Alexandre Cougnenc, CEO of Ailix.ai.
Romain Bessuges-Meusy and Dr. Audrey Portes also point to a more subtle dynamic: learning through negative experiences. Dark patterns, cookie walls, or degraded services after refusal gradually condition users to accept data collection almost automatically.
“Fear of negative consequences when refusing consent shapes future behavior. In that sense, it becomes legitimate to question whether consent is truly ‘free’ and ‘informed’, as required by the French data protection authority (CNIL),” Romain Bessuges-Meusy notes.
Habituation, Consent Fatigue and How to Disrupt the Routine
“In my research, we observed a clear process of habituation to digital surveillance,” explains Dr. Pauline Roques, PhD in Management Sciences at the MRM Laboratory. “As part of a study, two volunteer students lived in what we called a ‘future apartment’, equipped with multiple sensors — smart floors, cameras, and other monitoring devices. During the first weeks, anxiety was high: they frequently sought reassurance from researchers and even tried to bypass certain systems. Over time, however, stress gave way to more emotional coping strategies. Data collection was gradually reinterpreted in a positive light, accepted, and eventually forgotten altogether. This behavioral shift, which we refer to as ‘habituation’, ultimately leads to the disappearance of protective strategies.”
If surveillance becomes routine, how can users be re-engaged and made to feel protected again? For some participants, the answer lies in design choices. Alexandre Cougnenc advocates for privacy-by-default systems, where protective settings are built in from the outset, reducing data exposure without requiring any additional effort from users.
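To illustrate what privacy-by-default can look like in practice, here is a minimal sketch of a consent configuration in TypeScript. The consent categories, field names, and the `buildDefaultConsent` helper are hypothetical assumptions for illustration, not the API of any particular consent-management platform.

```typescript
// Hypothetical consent categories; only strictly necessary processing is enabled by default.
type ConsentCategory = "necessary" | "analytics" | "advertising" | "personalization";

interface ConsentState {
  granted: Record<ConsentCategory, boolean>;
  decidedByUser: boolean; // false until the user explicitly changes a setting
}

// Privacy by default: everything beyond strictly necessary processing starts disabled,
// so the user is protected even if they never open the settings panel.
function buildDefaultConsent(): ConsentState {
  return {
    granted: {
      necessary: true,
      analytics: false,
      advertising: false,
      personalization: false,
    },
    decidedByUser: false,
  };
}

// Data collection is gated on an explicit, per-category opt-in.
function mayCollect(state: ConsentState, category: ConsentCategory): boolean {
  return state.granted[category];
}

const consent = buildDefaultConsent();
console.log(mayCollect(consent, "advertising")); // false until the user opts in
```

The design choice is simply that protection requires no action from the user, whereas data sharing always does.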
Dr. Audrey Portes, working with fellow researchers from the Cit’Us Chair, is exploring another avenue: a privacy score, inspired by the Nutri-Score used in the food industry. The idea is to provide users with a clear, immediately understandable signal about data-related risks during online navigation. Early findings in transactional contexts show that its impact varies depending on brand trust and purchase intent.
“When consumers have not yet developed trust in a brand, the privacy score significantly influences their decision-making: a red score acts as a real warning signal. Conversely, once trust is established, rational assessment gives way, and users share their data while largely disregarding risk indicators. Silence also fuels suspicion — failing to communicate about data practices is perceived as a negative signal,” explains Dr. Audrey Portes.
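To make the idea more concrete, here is a minimal sketch of how such a score might be computed and mapped onto a Nutri-Score-style letter grade. The risk factors, weights, and thresholds below are purely illustrative assumptions, not those used in the Cit’Us research.

```typescript
// Illustrative risk factors a privacy score could aggregate (hypothetical).
interface DataPractices {
  thirdPartyPartners: number;   // number of partners receiving data
  sensitiveDataCollected: boolean;
  retentionMonths: number;
  transparentPolicy: boolean;   // whether practices are clearly communicated
}

// Map practices to a 0-100 risk value, then to a letter grade shown to the user,
// mirroring the immediately readable signal of a Nutri-Score label.
function privacyScore(p: DataPractices): { risk: number; grade: "A" | "B" | "C" | "D" | "E" } {
  let risk = 0;
  risk += Math.min(p.thirdPartyPartners, 50);          // more partners, more exposure
  risk += p.sensitiveDataCollected ? 25 : 0;
  risk += (Math.min(p.retentionMonths, 24) / 24) * 15; // long retention adds risk
  risk += p.transparentPolicy ? 0 : 10;                // silence itself is a negative signal

  const grade = risk < 20 ? "A" : risk < 40 ? "B" : risk < 60 ? "C" : risk < 80 ? "D" : "E";
  return { risk, grade };
}

console.log(privacyScore({
  thirdPartyPartners: 42,
  sensitiveDataCollected: true,
  retentionMonths: 36,
  transparentPolicy: false,
})); // high risk, grade "E": the red warning signal described above
```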
Romain Bessuges-Meusy nonetheless highlights the structural limits of such approaches. Consent remains, at its core, an expression of trust in a brand. But in a fragmented advertising ecosystem involving countless third-party actors, few companies have an incentive to publicly expose their least ethical partners. This is why such scoring systems are unlikely to emerge without intervention from public authorities or regulators.
Delegating Trust to Artificial Intelligence: A Tempting but Risky Prospect
The final part of the discussion took a more forward-looking turn, focusing on the rise of AI agents designed to accept terms of service or cookie banners on users’ behalf in order to access relevant information.
“These agents, which are expected to navigate and decide for users, ultimately depend on access to data,” says Romain Bessuges-Meusy. “Saying no to everything would make them economically and functionally unviable. The real challenge lies in defining the criteria that determine whether an agent grants access to data and, more importantly, who gets to define those criteria.”
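One way to picture those criteria is as a user-defined policy the delegated agent consults before accepting a banner. The sketch below is a hypothetical illustration under that assumption; the policy fields and the `agentShouldConsent` function are not an existing agent API.

```typescript
// Hypothetical user-defined policy the delegated agent consults before consenting.
interface UserPrivacyPolicy {
  allowedPurposes: Set<string>;    // e.g. "functional", "analytics"
  maxThirdPartyRecipients: number; // cap on partners allowed to receive data
  allowSensitiveData: boolean;
}

interface ConsentRequest {
  purposes: string[];
  thirdPartyRecipients: number;
  involvesSensitiveData: boolean;
}

// The agent grants access only when the request stays within the boundaries the user
// defined up front; the open question raised above is who gets to write this policy.
function agentShouldConsent(policy: UserPrivacyPolicy, request: ConsentRequest): boolean {
  if (request.involvesSensitiveData && !policy.allowSensitiveData) return false;
  if (request.thirdPartyRecipients > policy.maxThirdPartyRecipients) return false;
  return request.purposes.every((purpose) => policy.allowedPurposes.has(purpose));
}

const policy: UserPrivacyPolicy = {
  allowedPurposes: new Set(["functional", "analytics"]),
  maxThirdPartyRecipients: 5,
  allowSensitiveData: false,
};

console.log(agentShouldConsent(policy, {
  purposes: ["functional", "advertising"],
  thirdPartyRecipients: 30,
  involvesSensitiveData: false,
})); // false: too many recipients and an unapproved purpose
```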
Beyond technical and legal considerations, this shift signals a deeper reconfiguration of trust.
“In the near future, trust will no longer be placed in the terms and conditions themselves, but in the AI systems that read and accept them on our behalf. This delegation introduces new challenges, especially as the data processed by these agents becomes increasingly emotional, and even intimate,” explains Dr. Audrey Portes.
Dr. Pauline Roques also raises the issue of legal accountability in this emerging landscape. If an AI consents to data processing on our behalf, who bears responsibility in the event of a dispute — the user, the company, or the agent’s designer?
These questions remain largely unanswered. They nonetheless serve as a reminder that data protection cannot rely solely on individual choice. It requires collective frameworks, clear and intelligible signals, and a continuous rethinking of trust as technology evolves.