
WEEKLY INTELLIGENCE UPDATE  |  The Redefinition of Technology Harm Series: Week of 17 February 2026

This is the weekly intelligence briefing accompanying IA1024: The Redefinition of Technology Harm. Each issue covers the most significant developments of the preceding seven days, identifies the events most worth monitoring in the week ahead, and provides the analytical context needed to understand what the news means — not just what happened.


It is written for senior leaders who have neither the time to track this story daily nor the luxury of missing what matters. The analysis follows a consistent structure: the background that makes a development significant; the event itself; why it matters beyond the headline; and what to think about strategically.

Sources are listed at the foot of each issue. No analysis is drawn from a single source. Where claims depend on a specific document or testimony, that is noted.


Background  For readers new to this story

Fundamental change in our relationship with technology has already begun. It is not being driven primarily by what courts decide or what governments legislate, though both are now active. It is being driven by what is happening to people — and by the growing recognition that the harm is not accidental.

The technology sector built a commercial model on a specific insight: that human attention is a resource, that psychological vulnerabilities are the mechanism for extracting it, and that AI can make the extraction precise. A system designed to maximise the time a user spends inside it, the frequency with which they return, and the emotional intensity of each session is not a neutral tool. It is an economic mechanism that treats the user's vulnerability as its primary input.

This model has produced real, widespread, accumulating harm — documented in the mental health outcomes of millions of young people, described in consistent terms by thousands of families, and now confirmed in the internal research of the companies themselves. What is new in 2026 is not the harm. It is the visibility of the harm, and the simultaneous response that visibility is producing across four independent fronts.



Each front operates on its own timeline and through its own mechanism. None requires the others to proceed. A court verdict can be appealed. A regulation can be challenged. But the harm being experienced by the people inside these technology systems does not reverse because a court ruled for the defendant. It accumulates. And as it accumulates, every available channel responds to it.

This week, all four fronts moved simultaneously — and for the first time, they did so in ways that visibly reinforced each other. That is what makes this week analytically significant, independent of any single event. 


What Happened This Week  10–17 February 2026

The EU named the mechanisms — and called them a legal breach

On 6 February, the European Commission issued preliminary findings that TikTok is in breach of the Digital Services Act through its addictive design. The Commission did not find a generalised concern about the platform's effects. It named specific mechanisms: infinite scroll, autoplay, push notifications, and the highly personalised recommender system. Its language was equally specific — concluding that by constantly rewarding users with new content, TikTok's design shifts users' brains into 'autopilot mode,' and that the company must change the basic design of its service.

These are preliminary findings, not a final ruling. But their analytical significance does not depend on the final outcome. The European Commission — the regulatory authority of the world's largest trading bloc — has established a working legal definition of addictive design, named the features that constitute it, and initiated formal enforcement against a global platform on that basis.

Why this matters beyond TikTok

TikTok is not the only platform using infinite scroll, autoplay, and algorithmic recommendation. These are standard features of social media, streaming services, gaming platforms, and many e-commerce and fintech products. The Commission's preliminary findings are being watched closely by platform operators and lawyers across every sector that uses these mechanisms — because the legal theory being tested is not TikTok-specific. If it holds, it establishes that these features are acceptable only where platforms can demonstrate effective safeguards that provably reduce harm at scale. That is a different standard from the current one, and it is being applied under a law already in force across all 27 EU member states.

What to think about

Every product in your portfolio or supply chain that uses infinite scroll, autoplay, or algorithmic recommendation is operating under a legal definition of harm that is now active and being enforced. This is not incoming regulation to prepare for. It is current law with a current case. The compliance question is immediate.


The first executive took the stand. What he said and what the documents said are different things.

This week, Adam Mosseri, the head of Instagram, became the first senior technology executive to testify before the jury in the Los Angeles trial. He argued that social media does not meet the clinical definition of addiction — a point that remains genuinely debated in the scientific literature — and stated that Instagram generates less revenue from teenagers than from any other demographic group.

The internal documents told a different story. Discovery has produced a Meta internal document identifying children aged 10 to 12 — pre-teens, not the teenagers whose welfare was the stated concern — as an especially commercially valuable group, specifically because of their higher probability of remaining on the platform long-term. The commercial logic is explicit: younger users, acquired early, generate more lifetime value. The document did not frame this as a welfare question. It framed it as a retention opportunity.

The gap between what executives say in public and what internal documents record is the evidentiary foundation of the entire case. It is also the most transferable risk in this story.

Why this matters for your organisation

Every organisation that has conducted internal analysis of user engagement patterns, demographic vulnerability, or retention mechanics has generated documents that follow a structurally similar pattern. The specific harm may differ. The analytical structure — identifying which users are most valuable and why — is the same. The legal question the trial is testing is whether that analysis, when it involves psychological vulnerability as the mechanism of value extraction, constitutes institutional knowledge of harm. General counsel needs to know what your documents say before litigation discovery does.

The UK moved — and named the mechanisms, not just the platforms

On 15 and 16 February, Prime Minister Keir Starmer announced that the government will seek new legislative powers, including a ban on social media access for under-16s, specific restrictions on infinite scroll and autoplay features, and limits on children's access to AI chatbots. The government committed to a March consultation with the explicit intention of implementing rules within months — using fast-track powers rather than the multi-year timeline of fresh primary legislation.

The Times reported on its front page on 16 February that ministers plan to prohibit under-16s from social media this year. The consultation is not a delay mechanism. It is the procedural prerequisite for fast-track implementation.

The detail that matters most

The UK government is not banning 'social media' as a category. It is restricting specific design features — infinite scroll and autoplay — by name. This is the regulatory language narrowing from category to mechanism. When regulators move from platform-level restriction to mechanism-level restriction, it means the compliance question has also changed. It is no longer whether you are a social media platform. It is whether you deploy these specific features, in any product, for any user population. That scope is significantly wider.

France's National Assembly passed the age ban — the Senate vote is imminent

The French National Assembly approved a bill banning social media access for under-15s by 130 votes to 21. The bill now moves to the Senate; former Prime Minister Gabriel Attal has publicly called for passage by mid-February, which would enable enforcement from 1 September for new accounts and a 31 December deadline for deactivating existing underage accounts.

President Macron framed the legislation in terms worth reading precisely: 'The brains of our children and adolescents are not for sale. Their emotions are not for sale or to be manipulated, whether by American platforms or Chinese algorithms.' This is a head of state, using the language of sale and manipulation, describing technology design as a commercial transaction conducted against the developmental interests of children. The framing is not accidental. It is the public argument that makes legislation politically viable — and it is an argument with broad democratic support across party lines.

What this signals geopolitically

If the French Senate passes the bill this week, France becomes the second country after Australia to legislate a social media age ban into enforceable law, and the first within the EU. Every EU member state government is watching. Greece, Denmark, and Spain have each been described in recent weeks as considering similar measures. The political cost of not acting rises each time another country acts — because the absence of equivalent legislation becomes, domestically, a choice to leave children less protected than they are in neighbouring countries. The contagion pattern is not metaphorical. It is a documented political mechanism operating in real time.

Parents slept on courthouse steps to watch an executive testify

A group of parents spent the night sleeping on the courthouse steps in Los Angeles to secure seats for Adam Mosseri's testimony. Several of the parents who had watched Mark Zuckerberg apologise to the Senate in 2024 flew in from across the country to be in the courtroom again. One mother, whose teenage daughter died by suicide after Instagram's algorithm served her self-harm content, stood in line from one in the morning.

These are not protesters. They are plaintiffs' families — people who filed legal claims because the legal system was the only mechanism available to them. They are not there because a campaign organised them. They are there because they lost children and they are watching the people they hold responsible answer questions under oath for the first time.

This is what the public front looks like in its human form. It is not a sentiment measure or a brand risk metric. It is people who have lost children, who slept on concrete, to watch a technology executive answer questions. The social licence being withdrawn in this story is being withdrawn by people like these, through every channel available to them simultaneously.

It is worth sitting with this for a moment before moving to the analytical implications. Those implications are real and significant. But the people in that queue are the reason all the other fronts exist.


What to Watch: Next 7 Days  18–24 February 2026

Tuesday 18 February

Mark Zuckerberg testifies before a jury for the first time

Zuckerberg has appeared before Congress multiple times. This is categorically different. Congressional hearings allow prepared statements, reward political skill, and involve questions from politicians with mixed levels of preparation. Jury testimony is given under oath, subject to cross-examination by plaintiff lawyers who have spent years in discovery and who hold the internal documents. The 2021 email — 'We need to be thoughtful about the trade-offs between growth and teen wellbeing' — will almost certainly be placed in front of him. The Project Myst research will be placed in front of him. The document identifying tweens as a high-value retention demographic will be placed in front of him. His answers to those questions, before twelve jurors, will be the most significant testimony of the trial so far.

Watch for: the gap between Zuckerberg's answers and the documents presented alongside them. The evidentiary record that matters is not his testimony in isolation — it is the relationship between what he says he knew, what he says he decided, and what the internal documents recorded at the time.

 

This week

France Senate vote on the under-15 social media ban

The National Assembly passed the bill 130 to 21. Attal's stated target was Senate passage by mid-February. If the Senate votes this week and passes the bill, France becomes the first EU member state to legislate a social media age ban into enforceable law. The timeline matters: enforcement from 1 September for new accounts gives platforms approximately six months to implement compliance mechanisms, and the 31 December deadline for existing accounts is tighter than most age-verification infrastructure is currently built to support.

Watch for: If France passes, track the response from Greece, Denmark, and Spain within the following two weeks. The contagion pattern from Australia to France took approximately four weeks. The EU-internal contagion, if it follows the same mechanism, would produce further legislative announcements before the end of February.

 

This week

Expert scientific testimony — the mechanism of harm under examination

Dr Anna Lembke of Stanford testified last week that features including infinite scroll, push notifications, and algorithm-driven recommendations are specifically designed to be habit-forming and to foster compulsive use. Further expert scientific testimony is expected throughout the coming week, building the causal architecture that connects specific design features to specific neurological and psychological outcomes.

Watch for: The scientific testimony is the layer of the trial that determines cross-sector applicability. If expert witnesses establish convincingly that the mechanism of harm — variable reward, interrupted attention, social comparison — produces measurable psychological outcomes independent of the specific platform delivering it, that framework applies to gaming, streaming, buy-now-pay-later (BNPL), and fintech products using the same mechanisms. This week's testimony is not just about Instagram.

 

Ongoing

EU Commission TikTok proceedings — response deadline and next steps

Following the preliminary findings announced on 6 February, TikTok has the opportunity to respond before a final ruling. Watch for the company's formal response, which will either contest the Commission's mechanism-level findings or propose remediation measures. The remediation proposals TikTok offers — or refuses to offer — will define what compliant platform design looks like under the Digital Services Act, establishing the standard other platforms in European markets must meet.

Watch for: If TikTok proposes specific design changes in response, those changes become the implicit compliance benchmark for every platform using equivalent features in EU markets. Watch what they offer to remove or modify — it will tell you what the Commission considers the minimum threshold for compliance.

The Connecting Threads

What this week reveals more clearly than any previous week is that the four fronts are no longer simply running in parallel. They are now reinforcing each other in real time — sharing vocabulary, evidence, and political momentum across jurisdictions and mechanisms simultaneously.


The European Commission's preliminary findings on TikTok gave regulatory language to specific design features — infinite scroll, autoplay, algorithmic recommendation — that UK politicians adopted within days in a domestic policy announcement. That language is the same language plaintiff lawyers used in Los Angeles opening statements eight days earlier. The parents sleeping on courthouse steps are the reason that language is politically viable in Paris, London, and Brussels simultaneously. And the gap between executive testimony and internal documents — visible to every journalist covering the trial this week — is the same evidentiary structure that the EU, the UK, and every other regulator moving on this issue is using as their justification for action.


The four fronts are drawing on the same underlying evidence, the same public sentiment, and increasingly the same legal and regulatory vocabulary. Each front makes the others stronger. A European ruling on mechanism-level harm strengthens plaintiff lawyers arguing the same mechanism is harmful in California. A scientific expert explaining the neurological effects of variable reward in a Los Angeles courtroom strengthens a French parliamentarian explaining why emotional manipulation is not for sale. A mother standing in line from one in the morning strengthens every political leader who needs to explain why this is urgent.

The redefinition is not a story that will be told in a single verdict. It is being told in multiple languages, in multiple jurisdictions, through multiple mechanisms, every week. This is what it looks like when a social consensus is forming — before it has fully formed, while there is still time to position for it rather than react to it.


The defining moment of this particular week arrives on Tuesday morning, when Mark Zuckerberg sits before a jury for the first time. But Tuesday's testimony is one scene in a much longer story. The story does not end when he leaves the witness stand. It continues — in the French Senate, in the European Commission's formal proceedings, in the expert testimony scheduled for the rest of this week, and in the queue of families outside a courthouse in Los Angeles who will be back tomorrow.

 

Sources

All developments verified across multiple independent outlets. Primary sources this issue: European Commission Digital Services Act enforcement proceedings; CNN trial coverage; NBC News and PBS trial reporting; ABC News and RTÉ on French legislation; The Times (UK) and BBC on UK government announcement; Addiction Center and KRDO on expert testimony; court reporting services for trial proceedings. Internal documents referenced are those presented in open court and reported by multiple outlets.


Interaction-Analysis.com does not draw analysis from single sources. Where specific quotes or documents are referenced, the source is the court record or named publication. The analytical interpretation is our own.



Next issue: 24 February 2026  |  IA1024 Full Analysis available at Interaction-Analysis.com



