The panel of judges on the Ninth Circuit Court of Appeals objected to a portion of the California Age-Appropriate Design Code Act — specifically, a requirement that online businesses “opine on and mitigate the risk that children may be exposed to harmful or potentially harmful materials online.” The rule “facially violates the First Amendment,” the appeals court concluded. As a result, it’s upholding a preliminary injunction on that portion of the law and related aspects.

But it sent another part of the law back to the lower court to reconsider and vacated the rest of the preliminary injunction, saying it was unclear whether the remainder of the law violated the First Amendment. The panel believes it's "too early" to say whether the unconstitutional parts of the statute could feasibly be severed from the rest.

The ruling, authored by Judge Milan Smith Jr., singles out the design code’s Data Protection Impact Assessment (DPIA) requirement. The DPIA would compel online businesses to craft reports on whether their designs could harm kids and “create a timed plan to mitigate or eliminate the risk[s].” Smith determined this would likely fail First Amendment scrutiny. California “could have easily employed less restrictive means to accomplish its protective goals,” he wrote, including incentives for voluntary content filters, education for children and parents, and the enforcement of existing criminal laws.

Instead, he added, the state's law "attempts to indirectly censor the material available to children online, by delegating the controversial question of what content may 'harm' children to the companies themselves."

That could be an ominous sign for other legislation like the Kids Online Safety Act (KOSA), which recently passed the Senate 91–3. KOSA demands platforms take reasonable steps to protect kids from certain kinds of harms, including mental health disorders like anxiety and depression.

Still, the judges ruled that other parts of the Age-Appropriate Design Code Act may not violate the First Amendment in every possible application of the law. Smith pointed to provisions like the ban on dark patterns that encourage kids to hand over more information than is necessary to operate the service. "Based on the record developed so far in this litigation, it is unclear whether a 'dark pattern' itself constitutes protected speech and whether a ban on using 'dark patterns' should always trigger First Amendment scrutiny, and the district court never grappled with this question," he wrote.

Smith’s ruling also said that the district court should have evaluated more closely whether other parts of the law could be upheld when applied to non-social media companies covered by the bill.

The ruling is the latest relative victory in NetChoice’s string of lawsuits against state-level internet regulations, including laws aimed at protecting children online. Courts have agreed with many of the First Amendment arguments that the group, which represents companies like Meta and Google, has made against such laws.

It's also significant as it comes after an instructive Supreme Court ruling earlier this year in Moody v. NetChoice, which affirmed that content moderation and curation by platforms is protected speech. The justices expressed skepticism about bringing facial challenges — which assert that any possible application of a law is unconstitutional — under the First Amendment in such cases. Even so, Smith wrote that the DPIA requirement of California's law is facially unconstitutional because "in every application to a covered business, [it] raises the same First Amendment issues."

The California attorney general’s office did not immediately respond to a request for comment. NetChoice Litigation Center director Chris Marchese called the ruling “a victory for free expression, online security and Californian families.” He added, “The court recognized that California’s government cannot commandeer private businesses to censor lawful content online or to restrict access to it.”
