What Have You Been Doing To Me?
- Morris Pentel

IA1024 SERIES | INTERACTION-ANALYSIS.COM | FEBRUARY 2026
The Redefinition of Technology Harm – High Level Briefing
What Is Happening, Why It Matters, and Why the Timing Is Now
There is something important happening. A confluence of events and trends is redefining the fundamental terms of the relationship between technology and society — and with it, the liabilities faced by every organisation, in every sector, whether or not they consider themselves a technology business. This is a document about what is happening and why it matters. The purpose is understanding, not instruction. Read it. Share it.
A Confluence, Not a Single Event
What harm is technology doing in our lives, families, relationships and society? Most people are aware of this subject through a single lens: a survey, a personal experience, discussions about new laws restricting children's access to social media in countries across the world, or simply an awareness of their own intimate and sometimes intrusive relationship with technology.
For many, it is a court case in Los Angeles in which thousands of families are suing social media companies for designing platforms that harmed their children. That framing is accurate. It is also dangerously incomplete — because it makes this look like a single legal event with a defined outcome window, when it is something considerably larger and considerably less containable. Part of the focus relates to the number of victims and the celebrity of the defendants, with Meta's Mark Zuckerberg taking the stand; the sheer size and scale of social media, and its potential impact, is what grabs the attention.
The Bigger Picture
What is actually happening is a confluence of events and trends across multiple independent fronts, each moving simultaneously, none waiting for the others. Political action in more than forty countries. Court cases in the United States against social media and gambling companies around different aspects of addiction. A rapidly widening regulatory net that now encompasses artificial intelligence, financial products, food marketing, and any technology designed to act on the behaviour of vulnerable customers. A measurable shift in public consciousness — particularly among younger populations — about what technology has been doing to them.
These are not separate stories responding to separate problems. They are different expressions of the same underlying shift: a global cultural moment in which societies are questioning the relationship between human behaviour and technology and beginning to demand accountability for the consequences of that relationship.
The court case in Los Angeles is one formal expression of that demand. It is not the cause of it. The redefinition would be proceeding whether or not there were a trial.
The economic consequences of this confluence will be significant and fast. New definitions of harm do not announce themselves with a single event — they coalesce across legal, regulatory, and public fronts simultaneously, creating shocks across markets before most organisations have recognised that the ground has moved beneath them.
This document attempts to describe that confluence clearly enough that you can understand what is happening, explain it to others, and recognise how it connects to your organisation's situation — regardless of industry, regardless of whether you have followed the trial, and regardless of whether you currently think of your organisation as part of the technology sector.
Why This Is Different From Previous Technology Risk Events
Two historical comparisons help establish what is genuinely different about 2026.
The Facebook Cambridge Analytica scandal in 2018 caused an 18% share price fall over ten days. It was a single-company event driven by a specific incident of data misuse. The business model itself was not challenged. The company recovered commercially within months. The opioid litigation dragged on for years and produced enormous liability, but it challenged a product, not a mechanism: a specific category of drug, not a commercial logic that operates across industries simultaneously.
2026 is neither of those. The challenge being mounted — across courts, legislatures, regulatory bodies, and public opinion simultaneously — is to a mechanism: the deliberate exploitation of human psychological vulnerability for commercial return. That mechanism is not specific to Meta or TikTok or Snapchat. It is present wherever a commercial model is built on keeping people engaged, returning, spending, or borrowing through techniques designed to act on their impulses, anxieties, and vulnerabilities rather than serve their genuine interests.
The comparison that fits 2026 is not a corporate scandal. It is the redefinition that overtook the tobacco industry: a long-building social consensus about harm eventually producing irreversible commercial, legal, and regulatory consequences across an entire category of business. Tobacco companies won legal cases throughout the 1960s and 1970s. None of those victories stopped the redefinition, because none of them addressed what the public was responding to: real harm, visible to the people experiencing it, accumulating without reversal.
The mechanism through which public harm becomes commercially and legally irreversible is not primarily legal. It is social. When sufficient numbers of people — and their families, communities, and political representatives — reach independent conclusions about harm they are personally experiencing, the social licence for the business model that caused it begins to erode. Legal and regulatory action follows. It does not lead.
Four Fronts Moving Simultaneously
The redefinition is active across four independent mechanisms. Understanding each one separately is important — because they are often reported separately, which creates the misleading impression that any one of them stalling would slow the whole. It would not. None of the four requires the others to proceed.
The Legal Front
In Los Angeles, the Parents RISE litigation has consolidated 2,243 claims by families against Meta, Google, TikTok, and Snapchat. The legal theory is that these platforms deliberately designed products to exploit the psychological vulnerabilities of children, creating compulsive usage patterns that caused documented mental health harm. Two defendants — Snap and TikTok — settled before the trial reached a verdict. That matters: even defendants who calculated their litigation risk most carefully concluded that paying the framework's price was preferable to testing it in court. Settlement before verdict is commercial validation of the legal argument independent of what any jury decides. A bellwether verdict is expected around June 15, 2026.
The legal front is not only about social media. The framework being tested — that companies deliberately designed products to exploit psychological vulnerability, knew the effects, and chose commercial metrics over user welfare — applies wherever those conditions exist. That includes gambling platforms, buy-now-pay-later products, ultra-processed food marketing, gaming, and AI-driven financial services. The trial in Los Angeles is establishing the legal architecture. The sectors to which it will be applied extend well beyond its current defendants. The impacts will reach disparate domains, and other cases and proposed legislation are already testing variations in the nature of the harm.
The Geopolitical Front
Before the US trial reached its opening statements, Australia had already implemented and enforced a complete ban on social media for under-16s — removing 4.7 million accounts in the first month of enforcement. The EU AI Act entered into force in August 2024: it is not proposed legislation or guidance; it is binding law across 27 member states that explicitly prohibits AI systems that deploy subliminal techniques to circumvent conscious awareness or exploit psychological vulnerabilities. The United Kingdom moved from a specific incident involving an AI system generating harmful content to comprehensive chatbot regulation in thirty days — a speed of legislative response that has now been documented as achievable and will be repeated. France, Spain, and Malaysia followed on independent timelines.
More than forty countries are legislating independently, each responding to their own domestic public pressure through their own mechanisms. They are not coordinating. They are converging — toward the same standards, the same definitions of harm, the same expectations of accountability. The gap between the most and least restrictive jurisdictions is closing simultaneously from multiple directions.
The Public Front
48% of teenagers now say social media has a negative impact on people their age — up from 32% in 2022. This is not adult anxiety being attributed to technology. It is the demographic at the centre of the harm claim reaching its own conclusion about its own lived experience, independently of any legal or political framing. The direction of movement — 32% to 48% in four years — is more significant than the number itself. It is a trend, not a snapshot, and it is moving in one direction.
The public front translates into legislative action through a mechanism that has now been documented across multiple jurisdictions: a lawsuit filed or a family's testimony publicised on a Monday generates media coverage by Tuesday, legislative statements by Wednesday, and a bill or parliamentary motion by Thursday. The 72-hour cycle from public harm to political response has been observed repeatedly and should be treated as a reliable feature of the current environment, not an anomaly. Brand damage accumulates through this front independently of any court outcome. No communications strategy reverses it.
The Industry Front
On February 9, 2026 — the day opening statements were delivered in Los Angeles — Mrinank Sharma, who led Anthropic's safeguards research team, resigned publicly, writing that he had repeatedly seen how hard it was to let values govern actions. Two days later, Zoë Hitzig resigned from OpenAI. OpenAI dissolved its mission alignment team in the same period. These are individually significant events. Together they are analytically significant in a specific way: they are the same early signals — researchers who understood the risks resigning and speaking publicly — that preceded the accountability phase at Meta and TikTok a decade earlier. The industry front is building an evidentiary trail. The legal front will eventually use it.
The Mechanism: What Was Actually Built
To understand why the redefinition extends so far beyond social media — and why it creates liability for organisations that have not thought of themselves as implicated — you need to understand what was actually built, and what it was built on.
The starting point is the smartphone. Between 2015 and 2017, the device completed a transition no previous technology had managed at population scale: from something people chose to engage with to something they carried everywhere — in every pocket, beside every bed, checked before sleep and on waking. Technology stopped being an activity and became a condition of daily life. The relationship became intimate.
That intimacy created an opportunity that commercial models were built to exploit. A satisfied user who finds what they are looking for and leaves generates one transaction. A user held in a loop of partial satisfaction, social comparison, and variable reward — the slot-machine scroll, the unpredictable notification, the content calibrated to extend anxiety rather than resolve it — generates continuous engagement. The economic logic of these platforms required the loop, not the outcome. Users were not the customers. They were the mechanism.
Artificial intelligence made this exploitation precise and personal. Systems learned individual vulnerability patterns — what emotional state made a specific person most susceptible, what timing caught them at their lowest point of resistance — and optimised in real time. The device in your pocket, over months of observation, developed a more detailed model of your psychological vulnerabilities than you hold consciously yourself. And it deployed that model, continuously, without your knowledge, to extend your engagement.
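To make that loop concrete, here is a minimal sketch, assuming a toy epsilon-greedy bandit in Python. It illustrates the kind of per-user optimisation described above; it is not any platform's actual code, and every arm name, user identifier, and probability in it is hypothetical.

```python
# Illustrative sketch only: a toy per-user epsilon-greedy bandit that learns
# which notification timing and content type most reliably pulls a given user
# back into the app. All names, arms, and numbers are hypothetical; real
# systems are far larger, but the objective is the same: re-engagement,
# not user benefit.
import random
from collections import defaultdict

ARMS = [
    ("late_night", "social_comparison"),
    ("late_night", "fear_of_missing_out"),
    ("commute", "social_comparison"),
    ("commute", "variable_reward_teaser"),
]

class EngagementOptimizer:
    """Learns, per user, which (timing, content) arm maximises re-engagement."""

    def __init__(self, epsilon: float = 0.1):
        self.epsilon = epsilon
        # Per-user running estimate of re-engagement probability for each arm.
        self.value = defaultdict(lambda: {arm: 0.0 for arm in ARMS})
        self.count = defaultdict(lambda: {arm: 0 for arm in ARMS})

    def choose(self, user_id: str) -> tuple:
        # Mostly exploit the arm this specific user has responded to before;
        # occasionally explore to keep refining the individual profile.
        if random.random() < self.epsilon:
            return random.choice(ARMS)
        return max(ARMS, key=lambda arm: self.value[user_id][arm])

    def update(self, user_id: str, arm: tuple, re_engaged: bool) -> None:
        # Incremental mean update: the model of this user's susceptibility
        # sharpens with every notification sent.
        self.count[user_id][arm] += 1
        n = self.count[user_id][arm]
        reward = 1.0 if re_engaged else 0.0
        self.value[user_id][arm] += (reward - self.value[user_id][arm]) / n

if __name__ == "__main__":
    optimizer = EngagementOptimizer()
    # Simulate one user who, unknown to themselves, is most susceptible to
    # late-night social-comparison prompts.
    susceptibility = {arm: 0.05 for arm in ARMS}
    susceptibility[("late_night", "social_comparison")] = 0.6
    for _ in range(2000):
        arm = optimizer.choose("user_123")
        optimizer.update("user_123", arm, random.random() < susceptibility[arm])
    print(max(ARMS, key=lambda a: optimizer.value["user_123"][a]))
```

Even this toy version captures the asymmetry: the system's estimate of what pulls this particular person back improves with every notification it sends, while the person on the receiving end sees only the notifications.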
We are all subject to potential harm from technology because we have an intimate relationship with it. The economic drivers being exploited — to engage, to buy, to compare, to seek validation, to avoid loss, to return — are not pathologies unique to vulnerable groups. They are normal human motivations. The system does not find a weakness. It finds the person.
This is what the redefinition is ultimately about — not a specific product feature, not a specific platform, not a specific age group. It is about a commercial logic that treats human psychological vulnerability as a resource to be optimised against, and about the question of whether that logic is acceptable to the societies that have been living inside it for a decade.
The Consent Gap: What Was Never Disclosed
There is a further dimension to this that most risk analyses have not yet named clearly, and which has significant implications for who is exposed.
When users downloaded these apps, they consented at the macro level. They agreed to terms of service. They accepted data policies. That macro consent is what companies have pointed to as legitimising the commercial relationship. But the exploitation did not happen at the macro level. It happened at the micro level — in each individual moment when the system identified a specific psychological state and delivered calibrated content to act on it. Those interventions — thousands of them per day, per person — were never disclosed. They could not have been, because a mechanism that depends on operating below conscious awareness cannot function if the subject is aware of each intervention as it occurs. Disclosure would have defeated the mechanism. So, the mechanism was concealed. Not as an oversight. As a design requirement.
The internal documents now in evidence establish this precisely. Project Myst at Meta confirmed that teenagers showed vulnerability to social comparison mechanics comparable to substance addiction patterns. TikTok's Growth Team communications referenced addiction loops and compulsion mechanics explicitly and measured "time to first craving" after app closure. These documents do not merely show that harm was known and tolerated. They show that the mechanism required users not to understand what was being done to them — and that this was understood, designed, and maintained.
The consent framework that has historically protected these companies rests on a structural deception: not a failure to warn about risks, but a necessary concealment of the mechanism itself. That gap — between what was consented to and what was actually delivered — is where the liability lives. And it extends to any organisation that deploys the same mechanism, in any industry, under any name.
The Scope: Further Than Most Organisations Have Recognised
The legal framework being tested does not specify an industry and will not be constrained to a single definition of harm. It specifies a mechanism: behavioural optimisation of user engagement, AI-driven personalisation calibrated to individual patterns, and institutional knowledge of effects on user behaviour. Those conditions exist across the economy.
Gambling platforms use real-time personalisation to identify and act on loss-chasing behaviour at the moments when the pattern is strongest. Buy-now-pay-later products deploy frictionless design calibrated to impulse moments, enabling credit decisions that users would not make under deliberate consideration. Food delivery apps time their notifications to craving windows identified through behavioural data. Fintech trading apps use variable reward mechanics and push alerts during market volatility, calibrated to generate trading activity at moments of highest emotional susceptibility. Gaming platforms deploy compulsion mechanics on populations that include children, with loot boxes specifically designed around variable reward psychology.
In each case the structure is identical to what is on trial in Los Angeles. The commercial model exploits normal human motivations — the impulse to engage, to buy, to compare, to seek validation, to avoid loss, to return — through technology that learns and acts on individual vulnerability patterns without meaningful disclosure of what it is doing. The industry differs. The mechanism does not.
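For readers who want the mechanism stated operationally, the sketch below maps the three conditions named above — engagement optimisation, individual-level personalisation, and documented knowledge of effects — onto a crude product self-check. It is a hypothetical illustration only; the field names and labels are placeholders, not a legal or regulatory test.

```python
# Illustrative sketch only: a crude self-check mapping the three conditions the
# legal framework names onto product characteristics. Field names are
# hypothetical placeholders, not any court's or regulator's actual test.
from dataclasses import dataclass

@dataclass
class ProductProfile:
    optimises_engagement_metrics: bool         # e.g. session length or return rate used as KPIs
    personalises_to_individual_patterns: bool  # per-user models of timing, impulse, loss-chasing
    documented_knowledge_of_user_harm: bool    # internal research trading welfare for engagement

def mechanism_exposure(profile: ProductProfile) -> str:
    """Return a rough exposure label based on how many of the three conditions hold."""
    matched = sum([
        profile.optimises_engagement_metrics,
        profile.personalises_to_individual_patterns,
        profile.documented_knowledge_of_user_harm,
    ])
    if matched == 3:
        return "structurally identical to the mechanism on trial"
    if matched == 2:
        return "partial match: review design choices and internal documentation"
    return "low match on these conditions alone"

if __name__ == "__main__":
    # Hypothetical buy-now-pay-later app: engagement-optimised and personalised,
    # with no internal harm documentation identified yet.
    bnpl_app = ProductProfile(True, True, False)
    print(mechanism_exposure(bnpl_app))
```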
This means the redefinition creates exposure for organisations that have not thought of themselves as implicated: financial institutions whose credit products use engagement optimisation; retailers whose mobile apps are designed around compulsion mechanics; insurers whose portfolios contain concentrated exposure to sectors being challenged simultaneously; any board that has internal documentation describing trade-offs between user welfare and commercial engagement metrics — which is, on honest examination, most boards of organisations operating at scale in any consumer-facing sector.
Why the Timing Matters
The redefinition has been building for years. What changed in February 2026 is not the direction but the convergence: for the first time, the legal, geopolitical, public, and industry fronts are simultaneously at an active stage. The trial is in session. The geopolitical front is not proposing legislation — it is enforcing it. The public front has reached a scale at which the demographic most affected is independently articulating its own experience of harm. The industry front is producing the resignation statements and internal documentation that will shape future discovery.
The convergence creates a defined window. Organisations that understand what is happening now — that can describe it accurately to their boards, their investors, their regulators, and their teams — are in a different position from those discovering it after markets have moved, after regulations have crystallised, after the documentation trail has been established. The advantage of understanding early is not that it tells you exactly what to do. It is that it creates the space to consider your position deliberately, rather than having it decided for you.
The question is not whether the redefinition arrives. It is whether you understand it clearly enough — and early enough — to think clearly about what it means for your organisation. That is what this document is for.
What You Are Looking At
Set out plainly, this is what is happening. A global cultural shift is questioning the relationship between human behaviour and technology — specifically, whether a commercial model built on exploiting normal human motivations through the intimate device, without meaningful disclosure of the mechanism, is something societies will continue to permit. The answer, expressed through courts, legislatures, regulatory bodies, and the lived experience of the people inside these systems, is increasingly no.
The redefinition will produce economic shocks across multiple markets. New legal definitions of harm will establish liability for organisations that did not previously understand themselves to be exposed. New regulatory requirements are already in force in some jurisdictions and forming in others. The public licence for exploitation-based commercial models is eroding in ways that do not reverse because a court ruled for a defendant.
This is not a prediction about what will happen. It is a description of what is already happening — across forty countries, through four independent mechanisms, on timelines that none of the fronts requires the others to set. Understanding it clearly is the starting point for thinking clearly about it. That is the purpose of this document.
IA1024: The Redefinition of Technology Harm
The full institutional analysis — sixteen parts covering the legal framework, four fronts in depth, cross-sector exposure modelling, institution-specific risk frameworks, scenario planning, and monitoring protocols — is available at Interaction-Analysis.com. Weekly intelligence briefings, daily monitoring, and advisory services are available for organisations requiring ongoing situational awareness.



