Hearings to examine platform power as Section 230 turns 30.

Wednesday, March 18, 2026

Key Takeaways

  • Senators and witnesses reached a bipartisan consensus that Section 230 is not sacrosanct and requires modernization to address algorithmic harms, child safety, and generative AI liability.
  • Matthew Bergman (Founding Attorney, Social Media Victims Law Center) argued that platforms should face product liability for addictive design features that target children, rather than receiving blanket immunity.
  • Sen. Cruz (R-TX) pressed Daphne Keller (Director of Platform Regulation, Stanford Law School) on government "jawboning," prompting Keller to testify that both political parties have inappropriately pressured platforms to censor.
  • Republicans focused on curbing "Big Tech censorship" and government-led speech suppression, while Democrats prioritized child mental health protections, data privacy mandates, and opening up competition through interoperability requirements.
  • The committee signaled continued momentum for the Kids Off Social Media Act and COPPA 2.0 as it weighs whether to strip Section 230 immunity from generative AI outputs.

Hearing Analysis

Overview

This hearing examined the legacy and future of Section 230 of the Communications Decency Act as the statute approaches its 30th anniversary. Senators and witnesses debated whether the broad liability shield that helped build the modern internet remains appropriate in an era defined by algorithmic amplification, social media addiction, and the emergence of generative artificial intelligence. The discussion centered on balancing the protection of free expression and a competitive digital marketplace with the urgent need to hold platforms accountable for design choices that contribute to child exploitation, mental health crises, and illegal content distribution.

Key Testimony & Policy

The witness panel offered diverging perspectives on whether Section 230 requires fundamental repeal or surgical reform. Daphne Keller, Director of Platform Regulation at Stanford Law School, argued that Section 230 remains essential for protecting user speech rights. She warned that removing the shield would lead to "Disneyfication," where platforms purge any remotely controversial content to avoid litigation, or conversely, stop moderating altogether. Keller advocated for "middleware" and interoperability as market-based solutions to give users more control over their feeds. Similarly, Nadine Farid Johnson, Policy Director at the Knight First Amendment Institute, cautioned against a full repeal, suggesting instead that Section 230 protections be conditioned on platforms complying with transparency, privacy, and interoperability mandates. She highlighted the Platform Accountability and Transparency Act as a potential model for allowing independent researchers to study algorithmic impacts.

In contrast, Matthew Bergman, founding attorney of the Social Media Victims Law Center, argued that courts have interpreted Section 230 far beyond its original intent, creating a "get out of jail free card" for Big Tech. He contended that features like "infinite scroll," "streaks," and "push notifications" are deliberate product design choices intended to addict minors, rather than protected editorial speech. Bergman urged Congress to clarify that Section 230 does not immunize platforms for "negligent design" or "product liability." Brad Carson, President of Americans for Responsible Innovation, echoed these concerns, specifically regarding AI. He argued that generative AI outputs are not third-party content but products created by the companies themselves, and thus should not receive Section 230 protection.

Several legislative efforts were discussed as potential paths forward. These include the Take It Down Act, led by Sen. Ted Cruz (R-TX) and Sen. Amy Klobuchar (D-MN), which targets non-consensual intimate images; the Kids Online Safety Act (KOSA); and the Kids Off Social Media Act. Sen. Klobuchar also highlighted the American Innovation and Choice Online Act to address anti-competitive self-preferencing. Sen. Cruz mentioned his upcoming Jawbone Act, aimed at preventing government agencies from "bullying" platforms into censoring lawful speech, and the TERMS Act, which would prevent platforms from using terms of service to silence users.

Notable Exchanges & Partisan Dynamics

A significant point of bipartisan agreement emerged between Chairman Ted Cruz (R-TX) and Ranking Member Brian Schatz (D-HI), both of whom stated emphatically that Section 230 is "not one of the Ten Commandments" and is subject to reform. Both senators expressed frustration with the "light-touch" regulatory approach of the 1990s, arguing it is no longer sufficient for the modern era of "Big Tech gatekeepers."

However, sharp partisan tensions surfaced regarding "jawboning"—the practice of government officials pressuring platforms to remove content. Sen. Eric Schmitt (R-MO) engaged in a heated exchange with Daphne Keller over Stanford University's role in the Election Integrity Partnership and the *Missouri v. Biden* (now *Murthy v. Missouri*) litigation. Sen. Schmitt accused Stanford of acting as a "censorship regime" for the Biden administration, while Keller defended the university's actions as an exercise of First Amendment rights by academics. Sen. Tammy Baldwin (D-WI) countered by pointing to instances where the Trump administration and FCC Commissioner Brendan Carr allegedly pressured broadcasters and museums, arguing that the threat of government censorship is a multi-administration issue.

Sen. John R. Curtis (R-UT) used a "Post Office" analogy to distinguish a neutral carrier from a platform that "opens the letter" and uses its contents to design addictive features for millions of people. The analogy served to bridge the gap between protecting speech and regulating the "conduit" or "product" through which that speech is delivered.

Organizations Mentioned

  • Meta Platforms, Inc. (Meta): Discussed regarding its algorithmic targeting, racially biased ad delivery (Vargas case), and the impact of its design choices on youth mental health.
  • Snap Inc. (Snap): Mentioned in the context of the *Lemmon v. Snap* case involving a "speed filter" and its role in facilitating connections between predators and minors via Bitmoji.
  • Stanford Law School / Stanford Internet Observatory: Criticized by Republican members for its alleged role in flagging content for government-led "censorship," while being defended by witnesses as a center for First Amendment research.
  • United States Postal Service (Post Office): Used as a primary analogy to debate whether social media platforms should be treated as neutral common carriers or active publishers.
  • xAI: Criticized for its "Grok" chatbot's alleged dissemination of antisemitic conspiracy theories and Holocaust denial.
  • Knight First Amendment Institute: Proposed structural reforms including researcher safe harbors and interoperability mandates to break platform monopolies.
  • Americans for Responsible Innovation (ARI): Advocated for excluding generative AI from Section 230 protections to prevent "legal stagnation" in the emerging AI industry.
  • YouTube: Discussed regarding its recommendation algorithms and the "Veoh" case, illustrating how litigation costs can stifle smaller competitors.

What's Next

The committee indicated that several bills are moving toward floor action or further markup. Sen. Schatz and Sen. Cruz both expressed a desire to pass the Kids Off Social Media Act and the Kids Online Safety Act (KOSA) on the Senate floor. Sen. Cruz announced the imminent introduction of the Jawbone Act to address government-coerced censorship. Additionally, the committee will likely monitor the ongoing "product liability" lawsuits in lower courts, such as those handled by the Social Media Victims Law Center, to see if judicial interpretations of Section 230 shift without legislative intervention. Witnesses also urged Congress to act on "COPPA 2.0" to update child privacy protections.

Transcript

Sen. Cruz (TX)

Good morning. The Senate Committee on Commerce, Science, and Transportation will come to order. Welcome to all the witnesses. Within our lifetimes, the internet has impacted nearly every aspect of the world and our daily lives, especially how we communicate. It was only a short time ago that speech and newsworthiness was controlled by a handful of TV networks and giant newspaper publishers. If you held a position they didn't want to print or wasn't consistent with their political views, it didn't get said. The internet changed that, allowing anyone to bypass these gatekeepers and shape public opinion with their own views. The internet also created a new way to communicate anonymously and at greater scale through blogs, message boards, and comment sections.

But with opportunity came legal questions. The law wasn't written for the internet's ease and anonymity. Holding a platform liable for the illegal speech of another person threatened potentially to overwhelm early internet companies with ruinous lawsuits that would predictably result in less online speech. So Washington explicitly adopted a light-touch regulatory approach with the enactment of the Telecommunications Act of 1996. Congress included Section 230 to ensure that online platforms would not be liable for the illegal speech of another person. It did so to preserve a competitive free market, and the text of Section 230 explicitly recognized that the internet provided a "forum for a true diversity of political discourse."

But 30 years later, it seems that Big Tech has now become the new gatekeeper, the new speech police. If you disagree with a particular view, Big Tech doesn't answer that with more speech. They do not try to persuade. They do not debate. They simply make the view they disagree with disappear, and they silence you. That should scare everyone. What's even more concerning is how the government hijacks Big Tech's powers to shape online discourse and to suppress dissenting views and undermine free speech. This isn't fiction. As I detailed in my report and hearings last year, the Biden administration weaponized the Cybersecurity and Infrastructure Security Agency to bully Big Tech to censor lawful speech on COVID and on elections, disproportionately muzzling conservative voices.

We should recognize and celebrate how the free market can cause a course correction against Big Tech censorship. Elon Musk's purchase of Twitter was one of the most important steps for free speech in decades. It showed that the censorship regime is not inevitable, and it can be challenged in the marketplace and shifted to allow the kind of diverse viewpoints that Section 230 envisioned.

Congress must also consider every constitutional tool we have to ensure and to prevent social media from harming Americans, especially children, while not incentivizing Big Tech censorship. The Take It Down Act, which I led together with Senator Klobuchar, demonstrates that Congress can pass targeted legislation to protect children and adults online. The law prohibits non-consensual intimate images, including such images created with artificial intelligence, and it creates a process to provide notice and takedown for victims, all without amending Section 230 or chilling lawful speech protected by the First Amendment.
I've also introduced several other legislative reforms to actively support free speech online, including the TERMS Act, which stops online platforms from weaponizing their terms of service to silence Americans and deny them access to essential products and services. And I will soon be introducing the Jawbone Act to stop government agencies from bullying platforms into silencing the American people. The same reasons why Congress enacted Section 230 to prevent liability for a different person's speech are still relevant, and I'm concerned that a full repeal or sunset would lead platforms to engage in worse behavior, to engage in more censorship to protect themselves from litigation. I also don't believe, as some of my colleagues have suggested, that we should use Section 230 reform to silence more lawful speech or to turn the government into the arbiter of truth. But we should consider whether reform of Section 230 is needed to encourage and to protect more speech online and to stop Big Tech censorship. No government official, regardless of party, should have the power of censorship. I agree with John Stuart Mill that the best solution for bad ideas and for bad speech is better ideas and more speech. We don't need to use brute government force because the truth is much more powerful. I turn to Ranking Member Schatz.

