Congress Must Lead On AI While It Still Can – The Fulcrum

Opinion
Last month, Matthew and Maria Raine testified before Congress, describing how their 16-year-old son confided suicidal thoughts to AI chatbots, only to be met with validation, encouragement, and even help drafting a suicide note. The Raines are among multiple families who have recently filed lawsuits alleging that AI chatbots were responsible for their children’s suicides. Their deaths underscore an argument now playing out in federal courts: artificial intelligence is no longer an abstraction of the future; it is already shaping life and death.
And these teens are not outliers. According to Common Sense Media, a nonprofit dedicated to improving the lives of kids and families, 72 percent of teenagers report using AI companions, often relying on them for emotional support. This dependence is developing far ahead of any emerging national safety standard.
Notwithstanding the urgency, Congress has responded with paralysis, punctuated only by periodic attempts to stop others from acting. Senate Commerce Chair Ted Cruz insists that a ten-year federal moratorium blocking states and cities from passing their own AI laws is “not at all dead,” despite bipartisan opposition that kept it out of the summer budget bill. He has now doubled down with the SANDBOX Act, which would let AI companies sidestep existing protections by certifying the safety of their own systems and winning renewable waivers from agency oversight. Meanwhile, the Trump administration’s “AI Action Plan” rolls back Biden-era safety standards, threatens states with punishment for regulating, and promises to “unleash innovation” by removing so-called red tape.
This reflects a familiar but flawed assumption: that innovation and safety are fundamentally at odds, and that America must choose between technological leadership and responsible oversight. The idea that deregulation is the path to leadership fundamentally misunderstands American history—not to mention the law.
Far from stifling growth, regulations have turned legal uncertainty into public confidence, and confidence into robust industries. The railroad industry did not flourish because the government stayed out of the way. It flourished because congressionally mandated standards, such as block signaling and uniform track gauges, restored public trust after deadly collisions. The result reshaped America’s conception of itself, and railroads became the sinews of American economic dominance. Similarly, aviation did not become central to American power until Congress established the Federal Aviation Administration to regulate safety, unify air traffic control, and manage the national airspace system. Pharmaceuticals did not become a global industry until drug safety regulations gave consumers confidence in the products they were prescribed.

When Congress stalls, power moves elsewhere. Already, courts are left to improvise on fundamental questions such as whether AI companies bear liability when chatbots encourage suicide, whether training on copyrighted works is theft or fair use, and whether automated hiring systems violate civil rights laws. Federal judges are making rules without guidance. If this continues, the result will be a patchwork of contradictory precedents that destabilize both markets and public trust.
The internet age serves as a cautionary tale: when Congress chose “light-touch” regulation in the 1990s, courts issued contradictory rulings, forcing lawmakers to scramble. Their fix—Section 230 of the Communications Decency Act, the “twenty-six words that created the internet”—prevented courts from holding platforms liable and, over time, was interpreted so broadly by some courts that even its co-author noted the law had become misunderstood as a free pass for illegal behavior.
In addition to courts, state legislatures are filling this vacuum. Roughly a dozen bills to regulate AI chatbots have been introduced in states across the country. Illinois and Utah have banned AI therapy bots (chatbots that provide therapy services), and California has two bills winding their way through the state legislature that would mandate safeguards. But piecemeal lawsuits and a smattering of state laws are not enough. Americans need and deserve more fundamental protections.
Congress should empower a dedicated commission to set enforceable safety standards, establish the scope of legal liability for developers, and mandate transparency for high-risk applications. Courts are built to remedy past harms; Congress is built to prevent future ones by creating agencies with the technical expertise to set safety standards on highly complex and evolving technologies before disasters strike.
Bipartisan momentum around federal coordination is already growing. Senators Elizabeth Warren and Lindsey Graham have introduced legislation to create a new Digital Consumer Protection Commission with authority over tech platforms. Representatives Ted Lieu, Anna Eshoo, and Ken Buck have proposed a 20-member national AI commission to develop regulatory frameworks. Senators Richard Blumenthal and Josh Hawley announced a bipartisan framework calling for independent oversight of AI. Even senators who hold views as varied as Gary Peters and Thom Tillis agree on the need for federal AI governance standards. When progressive Democrats and conservative Republicans find common ground on AI regulation, the time is ripe for action.
To be clear, this is not about choking innovation. It is about ensuring AI does not collapse under the weight of public backlash, market confusion, and preventable harms. Regulation is what stabilizes innovation. America doesn’t lead the world by racing recklessly ahead. We lead when we set the rules of the road: rules that give innovators clarity, give the public confidence, and give democracy control over technologies that already touch life and death.
The parents who testified before Congress are right: their children’s deaths were avoidable. The question is whether lawmakers will act now to prevent more avoidable tragedies, or whether they will continue to abdicate their constitutional responsibility, leaving courts, corporations, and grieving families to pick up the pieces.
Aya Saed is an attorney and a leading voice in responsible AI legislation and the former counsel and legislative director for U.S. Representative Alexandria Ocasio-Cortez. She is the director of AI policy and strategy at Scope3 and the policy co-chair for the Green Software Foundation. She is a Public Voices Fellow with The OpEd Project in Partnership with the PD Soros Fellowship for New Americans.
With millions of child abuse images reported annually and AI creating new dangers, advocates are calling for accountability from Big Tech and stronger laws to keep kids safe online.
Forty-five years ago this month, Mothers Against Drunk Driving had its first national press conference, and a global movement to stop impaired driving was born. MADD was founded by Candace Lightner after her 13-year-old daughter was struck and killed by a drunk driver while walking to a church carnival in 1980. Terms like “designated driver” and the slogan “Friends don’t let friends drive drunk” came out of MADD’s campaigning, and a variety of state and federal laws, like a lowered blood alcohol limit and legal drinking age, were instituted thanks to their advocacy. Over time, social norms evolved, and driving drunk was no longer seen as a “folk crime,” but a serious, conscious choice with serious consequences.
Movements like this one, started by fed-up, grieving parents working with law enforcement and lawmakers, lowered road fatalities nationwide, inspired similar campaigns in other countries, and saved countless lives.
But today, one of the biggest dangers to children comes with almost no safeguards: the internet. Parents know the risks, yet there is no large-scale “movement” when it comes to keeping our kids safe online.
This is a big missed opportunity. The internet is not going anywhere, but in order to make it safer for children and young people, parents are key – and they need to get mad on a much larger scale.
In 2024, there were 20.5 million reports of child sexual abuse material made to the National Center for Missing and Exploited Children’s CyberTipline, and underreporting is a serious problem. These images represent real children who have been abused, their photos and videos of the abuse shared – exponentially – on platforms that we use every day. Add to that the rising number of teens who have died by suicide after being groomed and extorted, and the number of kids who are exposed to pornographic material on sites that are supposedly “safe” for children.
AI is complicating matters further, suggesting extreme dieting to teens and offering advice on how to commit suicide. According to Common Sense Media, 3 out of 4 kids have used an AI chatbot, and many parents have no idea.
Despite widespread acknowledgement of child sexual abuse imagery and exploitation on all major platforms, tech companies are still not required to proactively search for, detect, or remove content unless it is reported to them. Online safeguards are, by and large, voluntary, and tech companies are still rarely held accountable for crimes committed on their sites, creating a virtual playground where predators can groom children without consequences.

Much like the lax culture around drunk driving before MADD, the dangers online are often seen as an unfortunate risk that parents are forced to accept in order to let their children and teens exist in the digital world. Instead of anger, there is a sense of overwhelm and apathy at the scale and the ubiquity of online risks. Parents are mostly forced to throw up their hands, put in place whatever precautions they can, and just go along with it. This is unacceptable.
Congress is making some progress towards passing legislation that will help hold tech companies accountable and let law enforcement better prosecute these crimes. Other countries around the world, like Australia, the U.K., and Brazil, are starting to pass online safety legislation, too. But these achievements are largely uncoordinated, and they exist on a national scale, not a global one.
Since most Big Tech companies are based in the U.S., Congress must take the lead in holding companies accountable for the risks children face online. We also need a collective, organic effort led by parents and the public that will drive a global movement for sustainable, meaningful change.
It is not up to parents to solve this crisis. But parents can – and should – be angry. And we must use that anger to fuel change. We must educate ourselves and not be afraid to talk to others about the risks our kids are facing. The tech companies will not rein themselves in, so parents, teachers, and adults who care about children must keep pressuring Congress to act. We can end online child sexual abuse and make the internet a much safer place for everyone, but only if we come together first.
A grim-faced President Donald J. Trump looks out at the reader, under the headline “LAW AND ORDER.” Graffiti pictured in the corner of the White House Facebook post reads “Death to ICE.” Beneath that, a photo of protesters, choking on tear gas. And underneath it all, a smaller headline: “President Trump Deploys 2,000 National Guard After ICE Agents Attacked, No Mercy for Lawless Riots and Looters.”
The official communication from the White House appeared on Facebook in June 2025, after Trump sent in troops to quell protests against Immigration and Customs Enforcement agents in Los Angeles. Visually, it is melodramatic, almost campy, resembling a TV promotion.

The post is not an outlier.
In the Trump administration, White House social media posts often blur the lines between politics and entertainment, and between reality and illusion.
The White House has released AI images of Trump as the pope, as Superman and as a Star Wars Jedi, ready to do battle with “Radical Left Lunatics” who would bring “Murderers, Drug Lords … & well-known MS-13 Gang Members” into the country.
Most recently, on the weekend of the No Kings protests, both Trump and the White House released a video of the president wearing a crown and piloting a fighter jet, from which he dispenses feces onto a crowd of protesters below.
Underpinning it all is a calculated political strategy: an appeal to Trump’s political base – largely white, working-class, rural or small-town, evangelical and culturally conservative.
As scholars who study communication in politics and the media, we believe the White House’s rhetoric and style is part of a broader global change often found in countries experiencing increased polarization and democratic backsliding.
In the past, national leaders generally favored a professional tone, whether on social or traditional media. Their language was neutral and polished, laced with political jargon.
While populist political communication has become more common along with the proliferation of social media, the communication norms are further altered in Trump White House social media posts.
They are partisan, theatrical and exaggerated. Their tone is almost circuslike. The process of governing is portrayed as a reality TV show, in which political roles are performed with little regard for real-world consequences. Vivid color schemes and stylized imagery convert political messaging into visual spectacle. The language is colloquial, down-to-earth.
Just as other influencers in a variety of domains might create an emotional bond by tailoring social media messages, content, products and services to the needs and likes of individual customers, the White House tailors its content to the beliefs, language and worldview of Trump’s political base.

In doing so, the White House echoes a broad, growing trend in political communication, portraying Trump as “a champion of the people” and using direct and informal communication that appeals to fear and resentment.
Trump White House social media makes no effort to promote social unity or constructive dialogue, or reduce polarization – and often heightens it. Undocumented immigrants, for example, are often portrayed as inherently evil. White House social media amplifies dramatic, emotionally charged content.
In one video, Trump recites a poem about a kind woman who takes in a snake, a stand-in for an immigrant who in reality is a dangerous serpent. “Instead of saying thanks, that snake gave her a vicious bite,” Trump recites.
While some scholars have called the White House social media style “amateurish,” that hasn’t resulted in change.
The lack of response to negative feedback is partially explained by the strategic goal of these communications: to appeal to the frustrations of Trump’s deeply disaffected political base, which seems to revel in the White House social media style.
Scholars identify a large number of these voters as “the precariat,” a group whose once-stable, union-protected jobs have been outsourced or replaced with low-wage, insecure service work. These workers, many former Democrats, can no longer count on a regular paycheck, benefits or work they can identify with.
As a result, they are more likely to support political candidates whom they believe will respond to their economic instability.
In addition, many of these voters blame a breakdown in what they perceive as the racial pecking order for a loss of social status, especially when compared with more highly educated workers. Many of these workers distrust the media and other elite institutions they feel have failed them. Research shows that they are highly receptive to messages that confirm their grievances and that many regard Trump as their champion.
Trump and the White House social media play to this audience.
On social media, the president is free to violate norms that anger his critics but have little effect on his supporters, who view the current political system as flawed. One example: A White House Valentine’s Day communication that said “Roses are red, violets are blue, come here illegally, and we’ll deport you.”
In addition, Trump and the White House social media use the president’s status as a celebrity, coupled with comedy and spectacle, to immunize the administration from fallout, even among some of its critics.
Trump’s exaggerated gestures, over-the-top language, lampooning of opponents and use of caricature to ridicule whole categories of people – including Democrats, the disabled, Muslims, Mexicans and women – are read by his political base as a playful and entertaining takedown of political correctness. Together they may form a sturdy pillar of his support.
But prioritizing entertainment over facts has long-term significance.
Trump’s communication strategies are already setting a global precedent, encouraging other politicians to adopt similar theatrical and polarizing tactics that distort or deny facts.
These methods may energize some audiences but risk alienating others. Informed political engagement is reduced, and democratic backsliding is increasingly a reality.
Although the communication style of the White House is playful and irreverent, it has a serious goal: the diffusion of ideological messages whose intent is to create a sense of strength and righteousness among its supporters.
In simple terms, this is propaganda designed to persuade citizens that the government is strong, its enemies evil and that fellow citizens – “real Americans” – think the same way.
Scholars observe that the White House projection of the often comical images of authority echoes the visual style of authoritarian governments. Both seek to be seen as in control of the social and political order and thereby to discourage dissent.
The chief difference between the two is that in a deeply polarized democracy such as the U.S., citizens interpret these displays of authority in sharply different ways: They build opposition among Trump opponents but support among supporters.
The rising intolerance that results erodes social cohesion, undermines support for democratic norms and weakens trust in institutions. And that opens the door to democratic backsliding.
Andrew Rojecki is a professor of communication at the University of Illinois Chicago.
Tanja Aitamurto is an associate professor of communication at the University of Illinois Chicago.
King, Pope, Jedi, Superman: Trump’s Social Media Images Exclusively Target His Base and Try To Blur Political Reality was originally published by The Conversation and is republished with permission.
The massive outage that crippled Amazon Web Services this past October 20th sent shockwaves through the digital world. Overnight, the invisible backbone of our online lives buckled: Websites went dark, apps froze, transactions stalled, and billions of dollars in productivity and trust evaporated. For a few hours, the modern economy’s nervous system failed. And in that silence, something was revealed — how utterly dependent we have become on a single corporate infrastructure to keep our civilization’s pulse steady.
When Amazon sneezes, the world catches a fever. That is not a mark of efficiency or innovation. It is evidence of recklessness. For years, business leaders have mocked antitrust reformers like FTC Chair Lina Khan, dismissing warnings about the dangers of monopoly concentration as outdated paranoia. But the AWS outage was not a cyberattack or an act of God — it was simply the predictable outcome of a world that has traded resilience for convenience, diversity for cost-cutting, and independence for “efficiency.” Executives who proudly tout their “risk management frameworks” now find themselves helpless before a single vendor’s internal failure.
And the irony is brutal. Because those very same executives who love to rail against regulation and celebrate “the free market,” have built their empires on a single provider’s proprietary architecture — a fragile monoculture dressed up as digital progress. The lesson is as old as civilization: Centralization breeds vulnerability. When everything is connected through one hub, the entire system becomes hostage to its stability.
And yet, there is a strange silver lining. Outages like AWS’s, painful as they are, have the virtue of being visible. They hurt in real time. The pain is immediate, undeniable, and public. The fallout generates debate and, at least for a while, introspection. We may even take steps toward diversification — using multiple providers, investing in redundancy, designing systems that can withstand partial failure. The lesson, though learned the hard way, can be learned.
But what about the monopolies that never go down? The ones that never blink out for a few hours to expose their power?
Those may be even more dangerous, because they do not shock — they soothe and hum along. They shape the air we breathe, the stories we hear, the categories of thought we consider acceptable, and they do it quietly.
A case in point is the great consolidation of modern media: a handful of conglomerates controlling newspapers, television, digital platforms, film studios, and streaming has created a quieter, subtler outage, an outage of dissent, and with it the slow but relentless erosion of the informed and engaged citizenry on which democracy depends.

When every channel is owned by the same few hands, when public debate is filtered through the same editorial logic, and when the same “respectable” voices decide what counts as “reasonable” and what is “extreme,” we drift into a cultural monoculture no less brittle than AWS’s server farms. But this one never goes offline. It keeps running—shaping minds, narrowing horizons, policing language, and quietly defining the limits of permissible thought.
You don’t have to look far for proof. For more than two years, while much of the world took to the streets in outrage, the American media averted its gaze from the genocide in Gaza – the one that has been financed in our name by our own tax dollars. When it finally did turn its attention to the story – when images of children dying of famine became too unbearable to ignore – it did so in the antiseptic language of “conflict” and “security,” filtering suffering through euphemism and imbalance. And all along, pro-Israel voices dominated the airwaves, while those speaking for the other side were marginalized, stripped of context, lectured and manhandled in interviews, and denied the empathy so readily extended to their adversaries. None of this was by accident.
When democracy is being dismantled, there is no harsh moment of disruption to wake us up. No frozen app, no lost transaction. Only, perhaps, one day, the slow realization that our freedoms have eroded, that we are living inside a surveillance architecture of our own making, that the stories we tell ourselves about being informed and free were quietly rewritten while we scrolled. And by then, there may be no “reboot” — no simple fix, no alternative provider to migrate to.
That is the deeper danger of monopoly: Not the moment when it fails, but the long years when it works too well — when it serves power so efficiently that no one remembers what it was like to live outside its reach.
We will recover from AWS’s outage. We always do. But the question that should haunt us is not how to prevent the next system crash. It’s how to prevent the far greater one — the silent crash of democratic agency, cultural plurality, and free thought — that happens not when the lights go out, but when they shine only on what we are allowed to see.
Ahmed Bouzid is the co-founder of The True Representation Movement.
Fear is the worst possible response to AI. Actions taken out of fear are rarely a good thing, especially when it comes to emerging technology. Empirically-driven scrutiny, on the other hand, is a savvy and necessary reaction to technologies like AI that introduce great benefits and harms. The difference is allowing emotions to drive policy rather than ongoing and rigorous evaluation.
A few reminders of tech policy gone wrong, due at least in part to fear, help make this point clear. Fear is what has led the US to become a laggard in nuclear energy, while many of our allies and adversaries enjoy cheaper, more reliable energy. Fear is what explains opposition to autonomous vehicles in some communities, even though human drivers were responsible for 120 deaths per day as of 2022. Fear is what sustains delays in making drones more broadly available, even though many other countries are tackling issues like rural access to key medicine via drone delivery.
Again, this is not to say that new technology should automatically be treated as trustworthy, nor that individuals may not have some emotional response when a new creation is introduced into the world. It’s human nature to be skeptical and perhaps even scared of the new and novel. But to allow those emotions to rob us of our agency and to dictate our policy is a step too far. Yet, that’s where much of AI policy seems headed.
State legislatures have rushed forward with AI bills that aim to put this technology back in the bottle and freeze the status quo in amber. Bans on AI therapy tools, limitations on AI companions, and related legislation are understandable when viewed from an emotional perspective. Following the social media era, it's unsurprising that many of us feel disgust, anger, sadness, and unease at the idea of our kids again jumping on platforms of unknown capabilities and effects. Count me among those who are worried about helping our kids (and adults) navigate the Intelligence Age. But those emotions should not excessively steer our policy response to AI. Through close scrutiny of AI, we can make sure that policy is not resulting in unintended consequences, such as denying children the use of AI tools that could actually improve their physical and mental health.
The path to this more deliberate policy approach starts with combating the source of AI fear.
Fear of AI is often a response to the bogus claim that it’s beyond the control of humans. The core aspects of developing and deploying AI are the product of decisions made by people just like you and me. What data is available for AI training is subject to choices made by human actors. Laws often prevent certain data from being disclosed and later used for AI training. Technical systems can prevent data from being scraped from the Internet. Norms and business incentives influence what data even gets created and how it is stored and shared.

How and when AI companies release models is a function of human decisions. The structure of the AI market and the demand for AI products are variables that we can all shape, at least indirectly, through our representatives and purchasing decisions.
Integration of AI tools into sensitive contexts, such as schools and hospitals, is wholly a matter of human choices. Leaders and stakeholders of those institutions are anything but powerless when it comes to AI tool adoption. These folks are free to budget a lot or a little toward what AI tools they purchase. They can dictate what training, if any, their staff needs to receive before using those tools. They can impose strict procurement standards for any AI tools that can be acquired.
It's true that each of us has varying degrees of influence on how AI is developed and deployed, but it's a dangerous myth that we've lost agency at this important societal juncture.
This recognition of our agency is a license to collectively build the tech we want to see, not a mandate to stop its development. A society that acts out of fear defaults to prohibition, sacrificing tangible progress to avoid speculative harms. It chooses scarcity. A confident society, by contrast, establishes the conditions for responsible innovation to flourish, viewing risk not as something to be eliminated, but as something to be managed intelligently in the pursuit of a more abundant future.
The most effective way to foster this environment is not through a new thicket of prescriptive regulations, but through the clarification and modernization of our existing laws and reliance on healthy, competitive markets. Adaptive laws and robust competition have successfully governed centuries of technological change and can do so in the age of AI.
This approach creates powerful incentives for developers to prioritize safety and reliability, not to satisfy a bureaucratic checklist, but because it is the surest path to success in the marketplace. When innovators have a clear understanding of their responsibilities, and consumers are confident that their rights are protected, progress can accelerate. This is the true alternative to a policy of fear: a legal system and marketplace that enables dynamism, demands responsibility, and is squarely focused on unleashing the immense benefits of innovation.
Kevin Frazier is an AI Innovation and Law Fellow at Texas Law and Author of the Appleseed AI substack.
