AI · 8 min read

New York’s AI Chatbot Liability Bill Threatens $161B+ Industry with Professional Advice Restrictions

Senate Bill S7263 would create civil liability for AI systems dispensing legal, medical, or engineering guidance, advancing a model states could replicate.

New York lawmakers are advancing legislation that would make chatbot operators civilly liable when AI provides advice reserved for licensed professionals, in what could become the first state-level ban on AI in high-stakes domains. Senate Bill S7263 advanced out of the Internet and Technology Committee on February 26, 2026, on a 6–0 vote, positioning New York to set a regulatory template that could reshape how generative AI interacts with consumers across medicine, veterinary medicine, dentistry, physical therapy, pharmacy, nursing, podiatry, optometry, engineering, architecture, and social work.

Context

The bill targets a $161 billion global market expected to grow at 39.6% annually through 2034, according to Fortune Business Insights. Multiple sources value the generative AI sector between $103 billion and $161 billion in 2026, with projections climbing past $1 trillion within eight years.

What the Bill Does

Senate Bill S7263 would amend New York’s General Business Law to prohibit chatbot operators from allowing their systems to provide “substantive response, information, or advice” that would constitute unauthorized practice of licensed professions. The bill creates civil liability when a consumer-facing chatbot gives advice in licensed domains like medicine, law, licensed professional engineering, and mental health counseling.

Crucially, a proprietor may not waive or disclaim this liability merely by notifying consumers that they are interacting with a non-human chatbot system. The bill mandates that chatbot owners provide “clear, conspicuous, and explicit” notice to users that they are interacting with an AI system, but that disclosure does not shield operators from damages.

Users would be able to bring civil lawsuits against chatbot owners to recover damages and attorney’s fees; that fee shifting makes even low-value claims worth filing, since a prevailing plaintiff’s lawyer is paid by the defendant. The private right of action creates the conditions for high-volume litigation similar to New York’s web accessibility cases, which generated more than 1,400 repeat-defendant filings in 2025 alone.

Who Bears Liability
Party | Exposure
Frontier model developers (OpenAI, Anthropic, Google) | Liable if deployed directly
API deployers | Full exposure
Nonprofits & government agencies | Covered

The Constitutional Challenge

The bill reaches startups and enterprise software teams, but also hospitals, legal aid groups, nonprofits, schools, and government agencies that deploy chatbots for public guidance. That breadth exposes the law to First Amendment challenges: limiting AI responses to questions about tenant rights, medication side effects, or basic legal notices may be viewed as a content-based speech restriction requiring strict scrutiny.

According to Reason, Kevin Frazier of the Cato Institute argues the bill “unduly limits access to information in a manner that is not only unconstitutional, but also ‘contrary to both democratic values and a free market economy’”. Frazier notes that “nobody would demand libraries remove resources on the law or mental health”, yet the bill treats AI-generated information differently.

The bill leaves it unclear when chatbot output that blends explanation with suggested next steps crosses the line into “substantive” professional guidance, creating legal uncertainty for developers attempting to comply.

Enforcement Economics

The enforcement mechanism mirrors New York’s web accessibility litigation model. New York has already seen the serial-plaintiff pattern in web accessibility litigation: high-volume filings, template complaints, and settlement pressure, with 2025 producing more than 5,000 digital accessibility lawsuits nationwide.

When frontier models are licensed via API and deployed by someone else, the deployer is the proprietor, meaning liability falls on whoever runs the consumer-facing interface. That reaches everyone from the largest AI platforms to small teams shipping lightweight wrappers over OpenAI or Anthropic APIs; critics argue users pay the price when useful guidance is blocked.

April 7, 2025
Bill Introduced
Senator Kristen Gonzalez introduces S7263 alongside six other AI bills
February 26, 2026
Committee Approval
Bill advances 6–0 from Internet and Technology Committee to Senate floor
Pending
Full Senate Vote
Bill awaits floor vote; if passed, moves to Assembly and Governor

The Industry Response

The tech industry warns the bill could stifle the $161 billion generative AI market. Taylor Barkley of the Abundance Institute told Reason the ban is “shortsighted at best and protectionist at worst”.

Yet consumer protection advocates point to real harms. Senator Gonzalez cites a warning from the American Psychological Association to the Federal Trade Commission that chatbot therapists could drive vulnerable people to harm themselves or others. The bill follows high-profile lawsuits against Character.AI and Google over chatbots’ alleged roles in minors’ suicides, which settled in January 2026.

Studies have found that companion chatbot use is associated with substantial reductions in anxiety, depression, and loneliness, complicating the regulatory calculus. The bill does not distinguish between chatbots offering dangerous medical advice and those providing general health information.

State Regulatory Momentum

New York’s move comes amid a wave of state AI regulation. According to Lexology, chatbot bills advanced out of committees in Georgia, Illinois, New York, Oregon, and Washington in late February 2026, with bills crossing chambers in Arizona and Iowa.

California, Utah, Nevada, Illinois, and Maine enacted chatbot safety laws in 2025, addressing transparency in automated communications and safeguards for users at heightened risk of harm, though none impose blanket liability for professional advice.

New York already implemented companion chatbot safeguards in November 2025 requiring suicide prevention protocols and disclosures. The New York attorney general may seek injunctive relief and civil penalties of up to $15,000 per day for violations of that law.

State Chatbot Regulation Comparison
State | Scope | Enforcement
New York (S7263) | Professional advice ban | Private right of action; actual damages + fees
California (SB 243) | Companion chatbots (youth protection) | Private action; $1,000+ per violation
Illinois | Ban on AI in therapy | Up to $50,000 per violation
Utah | Mental health chatbot disclosure | AG enforcement

Federal Preemption Risk

The bill faces potential conflict with federal policy. On December 11, 2025, President Trump signed an executive order establishing a “minimally burdensome” federal AI framework and directing the Justice Department to challenge state laws deemed inconsistent with that policy, according to King & Spalding.

However, the executive order instructs officials not to preempt “otherwise lawful state AI laws relating to child safety protections”, leaving ambiguity over whether professional licensing restrictions qualify for the carve-out.

What to Watch

If S7263 passes, expect immediate litigation testing both the bill’s constitutionality and the boundaries of “substantive” advice. The Federal Trade Commission already targeted one AI legal service: in its DoNotPay settlement, the FTC brought deceptive advertising claims involving an AI chatbot marketed as a “robot lawyer” that allegedly generated legal documents without validation, resulting in outputs not fit for legal use.

Developers should audit chatbot outputs in regulated domains and prepare compliance frameworks assuming other states will follow New York’s lead. Companies deploying API-based models need clear contracts defining liability between model providers and deployers. And industry groups should coordinate constitutional challenges if the bill becomes law—the First Amendment implications extend beyond New York.
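For teams starting that audit work, one plausible first step is a screening layer that flags draft chatbot output touching regulated domains before it ships, routing flagged responses to human review. The sketch below is illustrative only: the keyword lists, function names, and disclosure text are assumptions, not anything S7263 specifies, and under the bill a disclosure alone would not shield an operator from liability.

```python
# Minimal sketch of a regulated-domain output screen. The domain markers
# below are hypothetical examples; real compliance screening would need
# counsel review and far more robust classification than keyword matching.
REGULATED_DOMAINS = {
    "medical": ["diagnos", "prescri", "dosage", "treatment plan"],
    "legal": ["file a lawsuit", "statute of limitations", "legal advice"],
    "engineering": ["load-bearing", "structural calculation", "stamped drawings"],
}

# Illustrative disclosure text; note that under S7263 disclosure by itself
# would not waive or limit an operator's liability.
AI_DISCLOSURE = (
    "You are interacting with an automated AI system, not a licensed "
    "professional. This is general information, not professional advice."
)

def audit_response(text: str) -> dict:
    """Flag draft chatbot output that touches regulated domains.

    Returns the (possibly annotated) text plus the domains that matched,
    so a human reviewer or a stricter policy layer can decide what ships.
    """
    lowered = text.lower()
    hits = [
        domain
        for domain, markers in REGULATED_DOMAINS.items()
        if any(marker in lowered for marker in markers)
    ]
    return {
        "text": f"{AI_DISCLOSURE}\n\n{text}" if hits else text,
        "flagged_domains": hits,
        "requires_review": bool(hits),
    }
```

In practice a deployer would replace the keyword lists with a trained classifier and log every flagged exchange, since the litigation record itself becomes evidence of a compliance program.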

The question is no longer whether states will regulate AI chatbots in professional services, but how far restrictions will go and whether a fragmented state-by-state approach survives federal coordination. New York’s vote will answer the first question. The courts will decide the rest.