At least 15 states have enacted or are pursuing legislation that would require online companies to protect the safety and privacy of kids using their platforms, putting pressure on Congress to pass unifying federal legislation.
California in 2022 was among the first to do so when it enacted a measure that requires social media and online companies to “prioritize” the health and well-being of children before launching apps and services publicly.
The California law was halted after NetChoice, a tech industry trade group, sued to block it. In September 2023, a U.S. District Court ruled that parts of the law likely violated First Amendment free-speech rights. California Attorney General Rob Bonta has appealed the ruling to the U.S. Court of Appeals for the 9th Circuit.
Despite that setback, legislators in other states have proposed bills modeled on California’s approach, known as age-appropriate design, as well as measures that would require parental consent for kids using online services.
Advocates for children’s online safety are hoping that Congress will enact federal legislation rather than allowing a piecemeal, state-by-state approach. They want to rein in tech platforms designed to keep kids online for hours every day, blaming the platforms for a host of problems including poor mental health, sleeplessness and eating disorders.
A study by the Harvard T.H. Chan School of Public Health released in late December found that the social media platforms Facebook, Instagram, Snapchat, TikTok, X (formerly Twitter) and YouTube collectively generated $11 billion in ad revenue in 2022 from U.S.-based users younger than 18. Of that, about $2.1 billion came from users 12 or younger, who are not permitted on such platforms under their terms of service, the study found.
Unlike federal data privacy legislation, which has stalled in Congress while states have enacted measures, kids’ online safety is a more tangible issue for voters and therefore has broader bipartisan support, said Josh Golin, executive director at Fairplay for Kids, a nonprofit group that aims to stop marketers’ targeting of children.
“The harm that parents are seeing every day with kids who can’t put their phones down, who are cutting themselves, who are helpless and hopeless because of what they’re consuming nonstop online, is so real and so tangible,” Golin said in an interview. Therefore, “you have public opinion on the side of more regulation.”
Golin said polling commissioned by Fairplay and conducted in October showed that 87 percent of the 1,200 online respondents, including Democrats, Republicans and independent voters, ranked addressing the harmful impact of social media on children and teens as important, second only to improving the economy and ahead of border security and climate change concerns.
Golin is optimistic that two bipartisan bills approved by the Senate Commerce Committee last year, which combine elements of age-appropriate design and parental control requirements, will pass this year.
The first, sponsored by Sen. Richard Blumenthal, D-Conn., would require online platforms and social media apps to exercise a duty of care and take steps to mitigate harm to minors using their platforms. The requirement would apply to users younger than 17.
The other measure, sponsored by Sen. Edward J. Markey, D-Mass., and co-sponsored by Sen. Bill Cassidy, R-La., would prohibit online platforms from disseminating children’s personal information without obtaining verifiable parental consent, effectively ending ads targeted at kids and teens. The bill would raise the age of children protected to 17, from 12 and below under current law.
Sen. Maria Cantwell, D-Wash., chair of the Senate Commerce Committee, said in a brief interview in November that the Senate “would like to get some kids’ privacy bills done” before turning to broader data privacy measures.
The states that have either passed or are considering kids’ online safety laws include Arkansas, California, Connecticut, Iowa, Louisiana, Maryland, Minnesota, Nevada, New Jersey, New Mexico, New York, Oregon, Texas, Utah and Wisconsin.
Maryland, Minnesota, New Mexico and Nevada are pursuing an age-appropriate design approach to safety modeled on California’s law, said Nichole Rocha, the head of U.S. affairs at 5Rights Foundation, a U.K.-based nonprofit group that advocates for child safety online. Rocha has worked with the states to develop legislation.
The design approach to safety calls on platforms to build their products in ways that minimize or avoid harm to children before it occurs, rather than trying to fix problems after they appear, Rocha said.
The “design-based approach starts at a foundational level,” Rocha said. “It’s a data-privacy framework, so if there are kids on your site or app there are all sorts of privacy requirements, like, you can’t collect data that’s not necessary to provide services, you can’t track their location.”
This approach also requires companies to identify potential risks and to try to mitigate them, Rocha said.
That differs from the parental consent approach, which puts the burden on parents, Rocha and Golin said.
Kids use dozens of apps and online platforms, making it tough for parents to understand the risks of each and decide whether to allow their use, Golin said.
NetChoice, which represents Amazon.com Inc., Google LLC, Meta Platforms Inc. and TikTok, among other tech companies, argues that apps and online platforms are mostly neutral and that any attempt by state or federal governments to regulate or control platforms violates free-speech rights. The trade group also won an injunction against a kids’ online safety law passed by Arkansas and has sued to stop a similar measure in Utah.
Lawmakers around the country are “trying to use social media as a scapegoat” for a much broader mental health problem facing kids and teens, said Carl Szabo, vice president at NetChoice.
Szabo likened social media apps and online platforms to newspapers, TV shows, video games and movies, and pointed to self-regulation in those cases as the appropriate approach.
“What we should be talking about is, what is a better way to help teens and parents navigate and use this new technology,” Szabo said. “So the correct answer is not to begin banning free speech or creating essentially an ID for the Internet, which is the effect of California, which is the effect of Utah, which is the effect of Arkansas” laws. “The answer is not for the government to come in and decide what speech is appropriate for families.”
Szabo said laws requiring tech companies to figure out whether a user is a child or a teen would compel companies to use age verification systems that can lead to greater violations of privacy.
Child safety advocates dismiss such warnings.
“It’s a complete red herring,” Golin said of the tech industry’s argument. “It is laughable that the same companies that boast about their ability to micro-target their ads to people who are in same-sex marriages and live in the mountains and are unicycle enthusiasts say that if a child enters the wrong date on their birthday, they are helpless to determine what their actual age is.”
Children who enter fake birthdays to gain access to an app reveal real information about themselves when they wish “each other happy 10th birthdays or put things like fourth-grade hashtags in their posts,” Golin said. Tech companies already have such data to determine the real ages of kids, he said.
___
©2024 CQ-Roll Call, Inc. Visit at rollcall.com. Distributed by Tribune Content Agency, LLC.