Zuckerberg on the Stand: What He Said, What the Documents Said, and What It Means
By Morris Pentel
IA1024 INTELLIGENCE UPDATE | INTERACTION-ANALYSIS.COM | 19 FEBRUARY 2026
TRIAL INTELLIGENCE | DAY 8 | LOS ANGELES COUNTY SUPERIOR COURT

ANALYTICAL NOTE: This document is published as commercial and strategic intelligence commentary. Nothing in it constitutes legal advice or legal opinion. The analytical commentary on legal arguments below reflects publicly available information about a live proceeding and contested legal questions. Readers should obtain independent legal advice on any matter affecting their organisation. Interaction-Analysis.com is not a law firm and does not hold itself out as providing legal services.
Mark Zuckerberg testified in a Los Angeles courtroom on February 18, 2026. He denied that Meta designs Instagram to be addictive. Internal documents introduced in evidence the same day told a different story. The gap between those two things is not a detail. It is the mechanism the IA1024 framework has been tracking — visible in public, before a jury, for the first time.
What Happened in Court
The CEO of Meta arrived at Los Angeles County Superior Court in a dark suit, took the stand before a jury for the first time on the question of child safety, and spent the day being questioned by plaintiff's attorney Mark Lanier against a running background of his own company's internal documents. The case is the bellwether trial in a consolidated group of claims from more than 1,600 plaintiffs — families, individuals, and school districts — alleging that Instagram, YouTube, TikTok, and Snapchat were deliberately designed to exploit the psychological vulnerabilities of young users. TikTok and Snap settled before the trial reached a verdict. Meta and Google remain defendants.
The plaintiff, a 20-year-old woman referred to in proceedings as Kaley G.M., was present in the courtroom gallery. She alleges that Instagram, which she began using at age 9 — four years below the platform's stated minimum age — caused addiction and worsened her mental health. The jury heard her case in part through the testimony of the man who built the product she was using.
Lanier's examination covered four main lines: whether Meta deliberately designed addiction mechanics into its products; whether the company knowingly allowed millions of underage users onto the platform; whether internal time-spent benchmarks constituted commercial targets for youth engagement; and whether Meta's own safety research on beauty filters was suppressed when it produced inconvenient results. On every line, Zuckerberg's testimony was followed almost immediately by documents that challenged it.
The day's most arresting moment was not a question or an answer. Lanier had six lawyers unroll a 35-foot vinyl poster showing hundreds of photographs Kaley had posted to Instagram from the age of 9 to the present. Zuckerberg was asked to look at them. Kaley watched from the gallery. The jury watched both.
What Zuckerberg Said
On addiction by design:
"I'm focused on building a community that is sustainable. If you do something that's not good for people, maybe they'll spend more time short term, but if they're not happy with it, they're not going to use it over time. I'm not trying to maximize the amount of time people spend every month."
On underage users:
"There's a distinction about whether someone is allowed to do something and whether we've caught them for breaking the rule. I don't see why this is so complicated. It's been our clear policy that people under the age of 13 are not allowed."
On time-spent benchmarks, after internal documents showing targets of 40 to 46 minutes of daily engagement were introduced:
"These aren't goals we give the teams for executing their jobs. They're ways we measure across the industry whether what we're creating is on track."
On the science linking social media to mental health harm:
"The existing body of scientific work has not shown a link between social media and worse mental health outcomes."
When asked whether he would pledge money to support victims, he declined to accept the premise of the question. When Lanier's team unrolled the 35-foot poster of Kaley's childhood photographs, Zuckerberg was asked to look at them. He did. He said nothing.
What the Documents Said
The documents introduced in court told a materially different account on almost every point. On underage access, an internal Meta document from 2018 estimated that approximately 4 million users under 13 were active on the platform — roughly 30% of all 10-to-12-year-olds in the United States at the time. The platform's policy said they were not permitted. The document said they were there.
On time-spent targeting, internal documents dating from 2013 detailed deliberate strategies to grow time spent on the app among users under 13. An internal message exchange from 2017 captures two employees reacting to Zuckerberg's push to expand the under-13 user base — one describing the last time he raised it as 'gross.' A 2022 internal milestone document set a benchmark of 40 minutes of daily engagement for Instagram active users in 2023, scaling to 46 minutes by 2026.
On safety research, 18 external experts commissioned by Meta to assess beauty filters raised concerns. The filters remained. On the link between social media and mental health, Meta's own internal safety teams produced research that contradicted the position Zuckerberg took on the stand — in the same proceeding.
The pattern running through the day's evidence is what the IA1024 framework calls institutional knowledge: the trade-off between user welfare and commercial engagement metrics was understood, documented, and resolved in favour of the business model. Yesterday, that documentation entered the public court record for the first time.
How the Media Reacted
Coverage was broadly unfavourable to Zuckerberg and focused on the contradiction between his testimony and the documents. The LA Times characterised his manner as pugnacious. NBC described him as combative. Multiple outlets reproduced the Pew Research finding that the share of Americans who view Zuckerberg very favourably is statistically equivalent to the share who believe the Earth is flat.
Children's advocates moved quickly. Josh Golin, executive director of Fairplay, said Zuckerberg had proven again that he cannot be trusted on children's safety. Tammy Rodriguez, whose 11-year-old daughter died in circumstances she attributes to social media, said there was nothing new in what she heard and no satisfaction in hearing it.
The incident that received disproportionate coverage involved the AI glasses. Judge Carolyn B. Kuhl interrupted proceedings to warn that anyone using Meta's Ray-Ban AI glasses to record or conduct facial recognition of the jury must stop immediately and delete any data collected. The irony was inescapable and was reported as such: technology capable of covert AI-enabled data collection, worn by members of the CEO's team, in a courtroom examining covert AI-enabled exploitation of users who had not meaningfully consented to it.
The 'Not Allowed' Defence: How Strong Is It?
The following section analyses publicly contested legal arguments in active litigation. It reflects the analytical view of Interaction-Analysis.com based on publicly available information. It is not legal advice and should not be treated as such. The strength of any legal argument is ultimately a matter for courts to determine. Organisations should seek independent legal counsel.
One of yesterday's central moments was Zuckerberg's repeated return to the position that under-13 users are prohibited by policy. 'I don't see why this is so complicated,' he said. 'It's been our clear policy.' The analytical question — and one the jury will need to assess — is how much legal weight that position can carry.
The answer, based on how plaintiffs are framing the argument and how similar cases have proceeded, appears to be: not much, for several distinct reasons.
The foreseeability problem
In negligence law, what matters is not what a company prohibits but what it could reasonably foresee. The argument the plaintiffs are likely making is that if you design a product in a way that makes a foreseeable class of users particularly likely to encounter and be harmed by it, a policy statement does not discharge your duty of care. The test, as negligence law generally frames it, is whether a reasonable person designing the product would have anticipated that children would use it and taken meaningful steps to prevent that. An age gate requiring only a self-reported birthdate — which children routinely falsify — does not obviously satisfy that test. Meta's own 2018 document showing 4 million under-13 users is significant here: it establishes that the foreseeable outcome was the actual outcome, at scale, and was known.
The enticement distinction
A useful way to think about the policy defence is through an analogy. Leaving a firearm accessible with a sign prohibiting use by children would, in most jurisdictions, expose the owner to liability for a child's injury — not because the owner encouraged use, but because the owner failed to take reasonable precautions against a foreseeable harm. The argument the plaintiffs appear to be making is that the Meta situation is materially worse than that analogy, because the product does not merely sit accessible: it actively draws users in. Notification design, algorithmic recommendation, and variable reward mechanics are, on the plaintiffs' case, engineered to maximise engagement in exactly the age group the policy claims to prohibit. The enticement, on this argument, was designed in — which shifts the analysis from negligent storage toward something the plaintiffs may characterise as negligent or even deliberate enticement.
The COPPA dimension
The Children's Online Privacy Protection Act adds a statutory layer. COPPA requires operators of online services 'directed to children' under 13, or those with actual knowledge that they are collecting personal data from children under 13, to obtain verifiable parental consent before collection. Meta has historically maintained that Instagram is not directed to children — a separate legal question from whether children use it. The internal documents showing deliberate strategies to grow engagement among under-13s from 2013 onward substantially complicate that position, and the 2018 estimate of 4 million under-13 users bears directly on the actual-knowledge prong. If a court or regulator finds the platform was in practice directed at children, the 'not allowed' policy becomes a much weaker shield. The FTC has pursued this theory against other platforms.
The 'we can't verify age' argument
This is arguably the strongest version of Meta's position: that meaningful age verification online is genuinely difficult, that device makers control the infrastructure that would make it more reliable, and that Meta cannot be held liable for harms it lacked the technical means to prevent. This argument has some analytical force — but two significant weaknesses. First, difficulty is not impossibility. Financial services verify identity online. Gambling and alcohol platforms in the UK and Australia now implement age verification with meaningful friction, under legal requirement. The question the plaintiffs are likely pressing is whether Meta invested seriously in technical solutions given what its internal research told it about under-13 usage. Second, and more consequentially for organisations watching from outside the courtroom, the Australian and EU regulatory frameworks have moved past this argument entirely by making age verification a legal requirement rather than a voluntary practice. The 'we couldn't do it' defence is being rendered moot by jurisdictions that have decided: you must.
The 'not allowed' defence is likely not the ground on which this case is won or lost. The deeper exposure, on the plaintiffs' argument, is the claim that the product was designed to exploit psychological vulnerabilities regardless of any age policy — and that the company continued doing so after internal research confirmed the harm. The policy becomes a side issue the moment the jury is focused on mechanism rather than access. The gap between the public policy and the operational reality is the consent gap the IA1024 framework identifies. Yesterday's documents made that gap visible in a courtroom.
The IA1024 Context: Four Things That Matter Beyond Today's Headlines
1. The documents are now public.
Every internal Meta document introduced in court yesterday is now part of the public record. The 2017 employee exchange, the 2013 targeting strategy, the 40-to-46-minute engagement milestones. These will be cited in every subsequent proceeding, in every jurisdiction, against every organisation whose internal documentation resembles them. The question every business leader should be asking this morning is not what the documents say about Meta. It is what their own internal documents say about them.
2. The testimony gap is the consent gap in public.
Public position followed by internal document that contradicted it, repeated across four lines of questioning. This is the consent gap as the IA1024 framework defines it: the distance between what was said publicly and what was understood and documented internally. The jury saw it. Every regulator watching this trial saw it. The evidentiary template for future proceedings in every sector was established in a single day of testimony.
3. The 35-foot poster will outlast the transcript.
Lanier's photographic timeline of the plaintiff's Instagram life — from a 9-year-old child to a 20-year-old woman — is not merely a legal tactic. It is a public front intervention. That image will be reproduced globally. It puts a face, a timeline, and a childhood on the mechanism. The public front accumulates through exactly this kind of moment.
4. The AI glasses incident is not incidental.
A judge warning that members of the defendant CEO's team may have been conducting covert AI-enabled facial recognition of the jury — in a trial examining covert AI-enabled exploitation of users without meaningful consent — is not a footnote. The mechanism being litigated was present in the room, worn by representatives of the company arguing it causes no harm. It will be reproduced in regulatory testimony, in parliamentary debate, and in future litigation.
TRIAL CONTINUES | Kaley G.M. expected to testify | Bellwether verdict on track: c. June 15, 2026
The full IA1024 institutional analysis, business leader briefing, and weekly intelligence service are available at Interaction-Analysis.com. This update is part of the IA1024 daily monitoring service. Nothing in this publication constitutes legal advice.



