Meta Knew About Addiction Issues and Misled Parents, Whistleblowers Testify in Trial
Meta whistleblower Arturo Bejar, former director of Engineering for Protect and Care at Facebook in Berkeley, Calif., testifies before the Senate Judiciary subcommittee in Washington on Nov. 7, 2023. (Madalina Vasiliu/The Epoch Times)
By Beige Luciano-Adams
3/6/2026 | Updated: 3/6/2026

LOS ANGELES—A former director who oversaw safety at Meta told jurors on Thursday that the company’s leadership was aware of “staggering” rates of harm to children but chose to bury the problem. He testified in a high-stakes jury trial considering whether social media companies design their platforms to be addictive to young people despite known risks.

“Meta’s executive leadership are aware of a significant amount of harms kids experience on the platforms and chose not to address it, or tell parents about it,” Arturo Bejar, a former director of engineering at the tech giant, told the court.

“Today, they are really misleading parents about how safe Instagram is.”

As senior engineering and product lead at Facebook, Bejar oversaw security and safety, including child-safety tools such as age verification, from 2009 to 2015, and consulted on similar issues with Instagram’s “wellbeing” team from 2019 to 2021.

Facebook acquired Instagram in 2012 and rebranded as Meta in 2021.

Along with YouTube and its parent company, Google, Meta is a co-defendant in the bellwether civil trial brought by a 20-year-old woman who claims she became addicted to social media platforms as a child and suffered serious psychological harm as a result.

Snapchat, TikTok, and their parent companies were co-defendants in the lawsuit but settled shortly before the trial began; they remain named in related cases.

Bejar said he took his concerns—and “really good data” to back them up—to the company’s top leadership, including Meta CEO Mark Zuckerberg and Instagram head Adam Mosseri, but was ignored.

Both executives have testified that their products are not engineered to addict users and denied that clinical social media addiction exists.

The company’s defense team has also outlined progressive steps it has taken to address safety concerns in recent years.

On Thursday, Meta’s attorneys noted Bejar had averaged only three hours per week at Instagram when he consulted from 2019 to 2021.

They said he had little idea about the efforts and resources the company was directing toward safety and well-being after he left his full-time position in 2015, or whether those efforts were prioritized.

Meta representatives did not respond to The Epoch Times’ request for comment on Bejar’s testimony by the time of publication.

The offices of Meta in Menlo Park, Calif., on July 31, 2025. (John Fredricks/The Epoch Times)



‘A Dirty Word’


In the time between his two stints at the company, Bejar said, something had changed.

In earlier years, he could raise a concern and “just be able to run with it,” getting “great support” from Zuckerberg along with the resources he needed to research a problem and to build and implement a solution.

But the second time around, he found a company culture in which certain issues had become taboo.

“It was confusing to me at first. I thought it was because Mark [Zuckerberg] and company didn’t know the harm that was happening on the ground,” he said.

Specifically, he said, when he returned to Instagram in 2019, researchers there had already identified addiction as a potential problem—but opted to bury it.

“They changed the name of it, and you couldn’t talk about it,” Bejar said.

He recalled asking a researcher on the company’s wellbeing team about addiction.

“She told me there had been research done about addiction, and you couldn’t talk about it.”

Instead, he said, they had renamed it “problematic use” and defined it as “extreme behavior.”

“If you tried to work on that, you would get a lot of unwanted attention from leadership. I was left with this feeling that this was a dirty word we were not supposed to use, and a topic we were not supposed to research.”

Bejar has testified about the same concerns before a U.S. Senate subcommittee and in similar lawsuits against the company.

In 2021, he helped oversee an internal survey of more than 200,000 Instagram users to measure long-term harms.

In it, 60 percent of users reported being the target of bullying within the past seven days, which lawyers for the plaintiff argue proves the company’s top leadership was aware of rising harms and yet failed to act.

Instagram’s Mosseri testified that the company moved nine employees to its well-being team in 2019 to address concerns over negative user experiences.

Bejar called that team “tragically understaffed,” and the transfer of nine people “categorically not a substantial investment.”

The few people on the team, he said, were often moved to high-priority areas in other parts of the company.

“They didn’t have enough time to focus on whatever the urgent issue of the week was, let alone prevent safety issues that were known [to be] happening at the time,” he said.

Short-Lived Solutions


Much of the trial has focused on how various social media features can be psychologically addictive to users.

Over nearly seven days, two psychiatrists specializing in addiction testified about how young people can be particularly vulnerable to features that exploit primal reward circuitry in the brain.

Like the young plaintiff, such users become hooked at an early age and trapped in a vicious cycle of compulsion and addiction, despite obvious harms.

On Thursday, Bejar said Instagram’s features are particularly problematic because of the way they work together to encourage obsessive and compulsive use.

A highly adaptive algorithm notices what you interact with and delivers an infinite stream of it; if you decide to step away, notifications bring you back in, and you start scrolling again.

“And there is a profound social aspect, a really powerful hook,” Bejar said. “That’s where all the other kids are. You’re going to come back and see what they’re doing, but in the process, fall into the other features again.”

The witness said it’s well-known within Meta that these features contribute to problematic use, but attempts at change have been short-lived.

For example, Instagram added, then discontinued, an “all caught up” notice that told users when they had seen all the new posts from the past 48 hours.

“All these short-lived efforts trying to make it a little better ... were washed away by the company prioritizing usage—eyeballs and time—over safety.”

Turning off or restricting problematic features for kids, he said, could have made a huge difference.

Meta CEO and Chairman Mark Zuckerberg arrives at Los Angeles Superior Court ahead of the social media trial tasked to determine whether social media giants deliberately designed their platforms to be addictive to children, in Los Angeles, on Feb. 18, 2026. (Frederic J. Brown/AFP via Getty Images)


In his testimony, Zuckerberg said his “north star” was to create a product that provides value to users, and that he does not instruct his teams to focus on how much time users spend on an app.

“I think if people use something in the near-term but aren’t happy with what they’re doing ... or using it more than they want to, I don’t think it’s good for us in the long term,” the CEO said.

The company has evolved policies and features over time to respond to concerns in a way that balances different stakeholder interests—users, free speech advocates, and those concerned with problematic use, including among children—according to Zuckerberg.

Bejar said the company has access to all the data and resources it needs to address safety issues in earnest, but chooses not to.

“There are mountains of data that point at the problem ... you can bring in people who understand these issues from an academic or a clinical perspective, and they help you write the questions—I did that for six years and found it really effective.

“They have all the data and have the capacity to get any data they don’t have.”

‘Power and Growth’


During his time at Instagram, Bejar recalled the company was pushing its “Reels” feature, a vertical short-video feed to compete with TikTok, launched in 2020.

Along with other features, Reels has come under scrutiny in the trial, which focuses on how design and function—rather than content—might be addictive to young people.

“The gas pedal was on Reels. There was no effort for safety,” Bejar said.

Brian Boland, a former executive who helped build Facebook’s digital advertising model, testified on Feb. 19 about concerns leading up to his own departure in 2020.

He said Meta had a strategy to go after teens, and cutthroat competition drove a focus on younger users.

An internal email presented by plaintiff’s attorneys showed top executives lamenting the loss of teens, ostensibly to competitors like TikTok, but expressing optimism over “a shot at getting kids” with dedicated apps and then transitioning them to the Facebook and Instagram main platforms.

“Mark Zuckerberg is an extremely competitive individual, and spent a lot of time looking at competitive dynamics [with] other products, where people would spend time and apps people would use,” Boland explained.

“At this time, teens were spending more time on TikTok.”

Attorneys for the plaintiff on Feb. 18 grilled Zuckerberg over allegations that Meta targeted teens and “tweens,” presenting a cascade of internal documents and communications as evidence.

He suggested those documents referred to abandoned efforts to create products for younger users, not to efforts to target them for its main platforms.

Users under 13 are not allowed on Meta’s platforms, and Zuckerberg said the company removes them when it finds them, but he acknowledged it is difficult to keep people from faking their age.

Boland said the company knew more.

“We knew our data around age and gender was the best in the industry,” he said, adding that in addition to self-declared data, data based on how people interact on the platform allowed Meta to predict users’ ages.

Both whistleblowers recounted a similar narrative—true believers who had great experiences at Meta and got rich along the way, but at some point became disillusioned.

Boland said he went from a “deep, blind faith” in the leaders of the company—“I was drinking the Kool-Aid”—to a “firm belief that competition and power and growth were the things that [Meta CEO] Mark Zuckerberg cared about most.”

When there were opportunities to pay attention to red flags and safety concerns, Boland said, they were ignored.

He attributed his decision to quit after 11 years to mounting concerns over “siloed” and anemic safety efforts that amounted to little more than a public relations reflex.

Under cross-examination, both Boland and Bejar said they did not think Meta was intentionally designing Facebook and Instagram to hurt teenagers.

Beige Luciano-Adams is an investigative reporter covering Los Angeles and statewide issues in California. She has covered politics, arts, culture, and social issues for a variety of outlets, including LA Weekly and MediaNews Group publications. Reach her at beige.luciano@epochtimesca.com and follow her on X: https://twitter.com/LucianoBeige