How Schools Across America Are Struggling With AI Deepfakes
An AI Girl Generator on a cellphone in front of a computer screen, created in Washington on Nov. 16, 2023, in a photo illustration. (Stefani Reynolds/AFP via Getty Images)
By Aaron Gifford
10/21/2024Updated: 12/9/2024

Gone are the days when the biggest concern was students drawing alien ears on their science teacher or printing images of a friend’s face attached to a four-legged body with scales and a tail.

That was 30-something years ago. Now, schools are being forced to develop emergency response plans in case sexually explicit images of students or teachers generated by artificial intelligence (AI) pop up on social media.

In two separate cases, school principals were seen or heard in recordings spewing racist, violent language against Black students. Both recordings were AI-generated deepfakes: one was produced by students, and the other was made by a disgruntled athletic director who was later arrested.

Deepfakes are defined as “non-consensually AI-generated voices, images, or videos that are created to produce sexual imagery, commit fraud, or spread misinformation,” according to a nonprofit group focused on AI regulation.

As education leaders scramble to set policy to mitigate the damage of deepfakes—and as state legislators work to criminalize such malicious acts specific to schools or children—the technology to combat AI tools that can replicate a person’s image and voice doesn’t yet exist, according to Andrew Buher, founder and managing director of the Opportunity Labs nonprofit research organization.

“There is a lot of work to do, both with prevention and incident response,” he said during a virtual panel discussion on teaching digital and media literacy in the age of AI that was held by Education Week earlier this year. “This is about social norming [because] the technical mitigation is quite a ways away.”

Legislation Targets Deepfakes

On Sept. 29, California Gov. Gavin Newsom signed into law a bill criminalizing AI-generated child pornography. It’s now a felony in the Golden State to possess, publish, or pass along images, including AI-generated images, of people under the age of 18 simulating sexual conduct.

There are similar new laws in New York, Illinois, and Washington state.

At the national level, Sen. Ted Cruz (R-Texas) has proposed the Take It Down Act, which would criminalize the “intentional disclosure of nonconsensual intimate visual depictions.”

The federal bill defines a deepfake as “a video or image that is generated or substantially modified using machine-learning techniques or any other computer-generated or machine-generated means to falsely depict an individual’s appearance or conduct within an intimate visual depiction.”

Sen. Ted Cruz (R-Texas) speaks at a news conference to unveil the Take It Down Act to protect victims against nonconsensual intimate image abuse, at the U.S. Capitol on June 18, 2024. (Andrew Harnik/Getty Images)

School districts, meanwhile, seek guidance on an emerging problem that threatens not just students, but also staff.

At Maryland’s Pikesville High School in January, a fake audio recording impersonating the principal surfaced. School officials enlisted the help of local police agencies and the FBI to investigate.

The suspect, Dazhon Darien, 31, an athletic director, was charged with theft, stalking, disruption of school operations, and retaliation against a witness.

He allegedly made the recording to retaliate against the principal, who was investigating Darien’s alleged mishandling of school funds, according to an April 25 statement posted on the Baltimore County Government website.

Jim Siegl, a senior technologist with the Future of Privacy Forum, said during the Education Week panel discussion that investigators in the Baltimore case were able to link the suspect to the crime by reviewing “old school computer access logs.”

But as AI technology continues to evolve, he said, it may be necessary to develop a watermarking system for generated audio or video to replace outdated systems for monitoring and safeguarding school computer use.

In February 2023, high school students in Carmel, New York, used AI to impersonate a middle school principal. The deepfakes were posted on TikTok. Investigators were able to link the students’ activities to their accounts. They were disciplined under school code of conduct guidelines but not charged criminally, according to a statement released on the district’s Facebook page.

“As an organization committed to diversity and inclusion,” the statement read, “the Carmel Central School District Board of Education is appalled at, and condemns, these recent videos, along with the blatant racism, hatred, and disregard for humanity displayed in some of them.”

A parent, Abigail Lyons, said a coworker who also has children in the district showed her a text containing seven different videos.

“I basically fell to the floor,” Lyons, who is biracial, said. “It was horrific. It looked so real.”

They rewatched the videos and noticed that the lip movement and body language were a bit off from the sound. Lyons said most parents in the district had already seen or heard about the videos and probably knew they were deepfakes before Carmel school officials publicly acknowledged the incident and declared there “was no threat.”

Carmel High School, Carmel, N.Y., on Oct. 7, 2015. (Will2022/CC)

Lyons said the event scared her daughter, and that events such as school lockdowns or emergency drills still trigger anxiety and fear stemming from the 2023 deepfake.

“Seventh graders should not have to worry about these things,” she told The Epoch Times.

Lyons said she is unaware of any deepfake incidents so far this semester, but students have threatened each other on social media, including one threat that led to a two-hour building lockdown.

“We still don’t know what [the lockdown] was for,” she said. “The transparency still isn’t there.”

The Epoch Times reached out to the district offices in Carmel and Baltimore County but didn’t receive any responses.

California’s new law was prompted by several deepfake incidents that victimized students.

Patrick Gittisriboongul, assistant superintendent for technology and innovation in the Lynwood Unified School District near Los Angeles, said his district implemented a zero-tolerance policy that requires personnel to notify law enforcement, provide services to victims, and activate an AI incident response plan when such incidents occur.

The district restricts AI use, employs content filters for online functions, and requires students and staff to follow guidelines for the ethical use of technology.

“Incidents in other Southern California districts prompted us to take proactive measures,” he told The Epoch Times via email. “Given the rapid emergence of AI and its potential misuse, we drafted a comprehensive policy to ensure our district is prepared to address any future issues involving deep fakes or inappropriate AI-generated content.”

The Center for Democracy and Technology reported in September that 40 percent of students and 29 percent of teachers were aware of deepfakes depicting children or adults associated with their school during the 2023–2024 academic year.

Teenage girls look at a cellphone during a break on a school campus. (Nimito/Getty Images)

The report is based on a survey of about 3,300 respondents across the country representing sixth through 12th grades, and including parents and teachers. Thirty-eight percent considered the deepfake content offensive, and 33 percent described it as sexually explicit.

Most teachers said they had not received training on how to respond to deepfakes and that their districts had not updated their policies to deal with these types of incidents, according to the report.

The report also stated that most of the teachers surveyed indicated that their employers have taken little action to address the threat of deepfakes. Fewer than a quarter of respondents said the term “deepfake” has been added to the district sexual harassment policy or student code of conduct.

Teachers’ preferred response to deepfake perpetrators is contacting law enforcement, followed by long-term suspension as the second choice and counseling as the third, according to the report.

By contrast, parents did not include any of those actions in their top three options. They said they prefer educating first-time offenders about the harmful impacts of deepfakes, according to the report. Their second choice is counseling, and their third option is issuing a warning.

AI Regulation

A consortium of nonprofits, including the National Organization for Women, the Future of Life Institute, and the Center for Human-Compatible Artificial Intelligence, launched the Campaign to Ban Deepfakes (CBD), a global public awareness effort, earlier this year.

The CBD is circulating an online petition to support its call for criminalizing all deepfakes and holding technology developers and content creators liable for their actions.

“The only effective way to stop deepfakes is for governments to ban them at every stage of production and distribution,” the CBD states on its website. The site notes that deepfake sexual content increased by 400 percent and deepfake-related fraud jumped by 3,000 percent between 2022 and 2023.

“There are no laws effectively targeting and limiting the creation and circulation of deepfakes, and all current requirements on creators are ineffective,” the CBD states.

A phone screen displaying a statement from the head of security policy at Meta with a fake video (R) of Ukrainian President Volodymyr Zelenskyy calling on his soldiers to lay down their weapons shown in the background, in Washington, on Jan. 30, 2023, in this photo illustration. (Olivier Douliery/AFP via Getty Images)

Legal Guidance

There is plenty of free legal advice online that school districts can use to get ahead of the threats.

A California law firm that specializes in education law—Atkinson, Andelson, Loya, Ruud & Romo (AALRR)—released guidance on deepfakes ahead of the 2024–2025 academic year, citing the incidents in Carmel and Baltimore County, as well as an incident in Washington state.

In the Washington state incident, district officials said they didn’t call the police after a female student was victimized because their lawyers told them they weren’t required to report “fake” images.

AALRR attorneys declined an interview with The Epoch Times, but the firm’s website says districts can be sued for failing to respond appropriately to deepfakes under laws that predate AI. Its attorneys advise school leaders to establish AI policies and educate students on how AI can be used responsibly as well as misused.

The more difficult task is understanding and addressing actions that take place off school grounds but still impact K–12 school communities. The boundaries separating parent versus school responsibilities aren’t always clear.

“Student misuse of AI-generated content likely raises similar First Amendment concerns,” the AALRR’s website states. “A school seeking to punish a student for off-campus misuse of AI would need to show that student misuse of AI substantially impacted the school.”

While lawmakers and advocacy groups work to establish social norms around AI misuse, millions of students and school employees remain vulnerable.

Education leaders should work to raise awareness of the harms and consequences of AI-generated impersonations and make it clear to school communities that using this emerging technology to damage another person’s reputation is a form of harassment, Siegl said.

“Let law enforcement figure out if it’s a deepfake or not,” he said.

©2023-2024 California Insider All Rights Reserved. California Insider is a part of Epoch Media Group.