The Consent Gap: Why You Didn't Agree to What Tech Took
- Morris Pentel


IA1024 SERIES | INTERACTION-ANALYSIS.COM | FEBRUARY 2026
The Consent Gap: Why the Permission You Gave Is Not the Permission That Was Taken
INTRODUCTION
The legal and commercial conversation about technology harm has focused almost entirely on what companies knew and when. That framing, while important, misses a more fundamental question: what exactly did users consent to, and what was actually done to them? This article introduces the concept of the consent gap — the distance between macro-level permission and micro-level exploitation — and the related mechanism of micro-moment stimulation. Together, they reframe the liability question in ways that extend well beyond the defendants currently in court, and well beyond social media as a category. This piece is drawn from the IA1024 series: The Redefinition of Technology Harm.
When you downloaded the app, you agreed to the terms. You accepted the data policy. You ticked the box. By any conventional measure of consent, the commercial relationship between you and the platform was legitimate. Companies have relied on this for a decade, and courts, regulators, and public debate have largely accepted the framing.
The problem is that what you consented to and what was actually done to you are not the same thing. The gap between them is not a technicality. It is, we would argue, the most significant unexamined liability in the current redefinition of technology harm — and the mechanism that makes the harm both universal and structural.
Understanding why requires being precise about two things: the nature of consent in this relationship, and the nature of the mechanism being deployed against it.
Consent Operates at the Wrong Level
When you agreed to the terms of service, you consented at the macro level. You gave permission for a category of relationship: the platform can show you content, use your data to personalise your experience, and serve you advertising relevant to your interests. That is what you understood yourself to be agreeing to. That is what the terms described.
But the exploitation does not happen at the macro level. It happens at the micro level — in each individual moment, each specific intervention, each calibrated stimulus delivered to you at a particular point in your day when the system has identified that you are most susceptible to it. You did not consent to those moments. You could not have. They were not described. They happened thousands of times, invisibly, before you had any framework for understanding that they were happening at all.
This is the consent gap. Macro permission was given once. Micro exploitation happens continuously, without disclosure, without ongoing consent, and — critically — in a way specifically designed to remain below the threshold of conscious awareness.
The Mechanism: Micro-Moment Stimulation
To understand why the gap matters, you need to understand the mechanism that operates within it.
The commercial model these platforms built is not, at its core, about showing you content you might enjoy. A satisfied user who finds what they are looking for and leaves generates one transaction. The economic logic requires something different: a user held in a loop of partial satisfaction, emotional arousal, and variable reward generates continuous engagement. That continuous engagement is the product. You are not the customer. You are the mechanism.
The tool for creating and maintaining that loop is what we call micro-moment stimulation: small, continuous, calibrated interventions delivered through the intimate device, each one designed to act on your psychological state at the specific moment it is most likely to be effective.
Every push notification timed to the moment the system has identified as your lowest point of resistance. Every content recommendation that arrives when you are anxious, feeding the anxiety rather than resolving it. Every variable reward — the unpredictable like, the intermittent validation, the slot-machine scroll — calibrated to the individual vulnerability pattern the system has learned over months of observation. None of these interventions announces itself. None of them says: we have identified that you are emotionally susceptible right now and we are delivering content specifically designed to extend your engagement. Each one feels like the platform simply working as expected. Cumulatively, across thousands of moments in a day, they are not.
This is what distinguishes the mechanism from ordinary advertising or personalisation. Advertising speaks to a demographic at a scheduled time. Micro-moment stimulation speaks to a specific person, at the specific moment of maximum psychological susceptibility, using knowledge of that person's individual vulnerability patterns that they did not knowingly provide and cannot see being used.
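To make that structural distinction concrete, here is a minimal sketch of the contrast. It is purely hypothetical and illustrative, not drawn from any platform's actual code; every signal name, weight, and threshold below is invented. A scheduled campaign sends one message to a whole demographic at a fixed time, while a micro-moment system scores one individual's susceptibility from behavioural signals and fires only when that score peaks.

```python
# Hypothetical sketch only: contrasts scheduled, demographic advertising
# with individually calibrated "micro-moment" delivery. All signal names,
# weights, and thresholds are invented for illustration.

from dataclasses import dataclass


@dataclass
class BehaviouralSignal:
    hour: int                          # hour of day the signal was observed
    minutes_since_last_session: float  # time since the user last closed the app
    recent_negative_posts: int         # crude proxy for emotional state


def scheduled_campaign(users: list[str], hour: int, message: str) -> list[tuple[str, int, str]]:
    """Classic advertising: the same message, at the same hour, to a whole demographic."""
    return [(user, hour, message) for user in users]


def susceptibility_score(signal: BehaviouralSignal) -> float:
    """Toy score that rises when the user is online late, has been away from the
    app for a while, and is showing negative-affect activity. A real system would
    learn this per person from months of observation; here it is hard-coded."""
    late_night = 1.0 if signal.hour >= 22 or signal.hour <= 2 else 0.0
    withdrawal = min(signal.minutes_since_last_session / 60.0, 1.0)
    negative_affect = min(signal.recent_negative_posts / 5.0, 1.0)
    return 0.4 * late_night + 0.3 * withdrawal + 0.3 * negative_affect


def micro_moment_delivery(signal: BehaviouralSignal, threshold: float = 0.6) -> bool:
    """Fire the intervention only at this person's identified moment of peak susceptibility."""
    return susceptibility_score(signal) >= threshold


if __name__ == "__main__":
    # The scheduled campaign ignores individual state entirely.
    print(scheduled_campaign(["user_a", "user_b"], hour=18, message="New arrivals"))

    # The micro-moment system waits for this specific person's low point.
    tuesday_afternoon = BehaviouralSignal(hour=15, minutes_since_last_session=5, recent_negative_posts=0)
    midnight_low = BehaviouralSignal(hour=23, minutes_since_last_session=90, recent_negative_posts=4)
    print(micro_moment_delivery(tuesday_afternoon))  # False: no intervention
    print(micro_moment_delivery(midnight_low))       # True: notification fires now
```

The point of the toy score is structural rather than technical: the intervention is conditioned on the individual's inferred state at that instant, which is precisely the layer at which no consent was sought or given.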
Why the Concealment Was Necessary, Not Accidental
Here is the feature of the consent gap that elevates it from a transparency failure to something more serious: the concealment of the mechanism was not an oversight. It was a design requirement.
A system that operates by finding your moment of lowest psychological resistance and acting on it cannot function if you are aware of each intervention as it occurs. The effectiveness of the mechanism depends on it operating below conscious awareness. If users understood, in real time, that a notification had been timed to their identified emotional low point, the notification would lose its power. If users understood that a content recommendation was calibrated to extend anxiety rather than resolve it, the recommendation would prompt resistance rather than engagement.
The terms of service that established macro consent did not disclose this mechanism because disclosing it would have defeated it. That is not a coincidence. The companies that built these systems understood how they worked. The internal documents now in evidence — Project Myst at Meta, the TikTok Growth Team communications, the explicit references to addiction loops and compulsion mechanics, the A/B testing that measured time to first craving after app closure — confirm that the people designing and operating these systems understood the mechanism precisely. They understood that it required users not to understand what was being done to them.
This is what distinguishes the consent gap from a simple failure to warn about risks. A failure to warn says: we knew there were side effects and we did not tell you about them. The consent gap says: we deployed a mechanism that could only function through your ignorance of it, and we maintained that ignorance deliberately. That is a different and harder category of liability — closer to structural deception than negligence.
The Scope: This Is Not Only a Social Media Problem
The reason the consent gap matters beyond the trial in Los Angeles is that the mechanism is not specific to social media. It is specific to a commercial logic — exploit the economic drivers of human behaviour through the intimate device, using AI to make that exploitation precise and personal — that operates across multiple industries simultaneously.
Gambling platforms use real-time personalisation to identify and act on loss-chasing behaviour at the specific moments when the pattern is strongest. Buy-now-pay-later (BNPL) products deploy frictionless design calibrated to impulse moments, enabling credit decisions that users would not make under deliberate consideration. Food delivery apps time their notifications to craving windows identified through behavioural data. Fintech trading apps use variable reward mechanics and push alerts during market volatility, calibrated to generate trading activity at the moments of highest emotional susceptibility.
In each case the structure is the same: macro consent was given to a service. Micro-moment stimulation operates within that service, without disclosure of the mechanism, in a way that depends on users not understanding what is being done to them. In each case the gap exists. In each case the concealment was necessary to the commercial model.
The legal framework being tested in Los Angeles does not specify an industry. It specifies a mechanism. The three conditions it requires — behavioural optimisation of user engagement, AI-driven personalisation calibrated to individual patterns, and institutional knowledge of effects on user behaviour — are present wherever the intimate device meets a commercial model built on exploiting normal human motivations without meaningful disclosure.
The Question Every Organisation Needs to Answer
The consent gap reframes the liability question in a way that most risk assessments have not yet caught up with. The conventional question is: did we harm people and did we know? That is the question the trial in Los Angeles is asking, and it is an important one.
But the consent gap asks a different question: did we deploy a mechanism that required users not to understand what we were doing to them in order to function? If the answer is yes — if the commercial model depends on micro-moment stimulation operating below conscious awareness, if disclosure of the mechanism would defeat the mechanism, if the terms of service described a relationship categorically different from the one actually delivered — then the liability framework extends to any organisation that meets those conditions, regardless of industry, regardless of whether children are the primary affected population, regardless of whether the harm looks like the harm currently described in court.
We are all subject to potential harm from technology because we have an intimate relationship with it. The economic drivers being exploited — to engage, to buy, to compare, to seek validation, to avoid loss, to return — are not pathologies. They are normal human motivations present in every person who carries a smartphone. The consent gap is the distance between the relationship people understood themselves to have with technology and the relationship that was actually built. Closing that gap — through honest disclosure, through genuine product redesign, through commercial models that optimise for user outcomes rather than user vulnerability — is what the redefinition is ultimately requiring.
The organisations that understand this now, and act on it, will define what comes next. The organisations that wait for the gap to be closed by courts and regulators will have it closed for them.
This article is part of the IA1024 series: The Redefinition of Technology Harm. The full institutional analysis, business leader briefing, and weekly intelligence service are available at Interaction-Analysis.com.


