Conn. Supreme Court Asked to Toss Case Over AI Use

A Connecticut malpractice case faces dismissal after AI-generated filings surfaced, raising questions about accountability at the intersection of artificial intelligence and legal ethics.

The Connecticut Supreme Court will hear arguments Wednesday on whether a legal malpractice case should be thrown out because the plaintiff's attorneys used artificial intelligence to draft court filings. The case, Kellogg v. Schatteman, marks one of the first times a state's highest court has been asked to rule on whether AI-generated content in legal documents constitutes grounds for dismissal.

The dispute arose when attorney Jeffrey Schatteman, representing himself and his law firm, moved to dismiss a malpractice lawsuit brought by former client Robert Kellogg. Schatteman's central claim: Kellogg's new legal team relied on AI tools to produce "hallucinated" case citations and fabricated quotes in their complaint and subsequent motions. The trial court denied the dismissal request, but Schatteman appealed directly to the state's highest court, bypassing the intermediate appellate level.

How the AI Content Was Discovered

The controversy began during routine pretrial proceedings in 2024. Schatteman, reviewing Kellogg's amended complaint, noticed something odd about the case citations. Several decisions appeared to reference judges who'd never served on the cited courts. One "case" quoted a Connecticut Supreme Court justice discussing legal malpractice standards — but that justice had retired before the supposed decision date.

Schatteman's team ran the citations through legal databases. Six of the twelve cases cited in Kellogg's motion to compel arbitration didn't exist. The quotes were fabricated. The procedural histories were invented.

Kellogg's attorneys, from the Hartford firm Morrison & Cole, later admitted they'd used an AI research tool — initially identified only as "a widely used legal assistant platform" but subsequently confirmed as Harvey AI — to draft portions of their filings. They told reporters the tool had "presented these citations as verified" and they'd failed to independently check them.

The firm replaced the AI-drafted sections with manually researched citations and apologized to the court. But Schatteman argued the damage was done: the case should end there, with sanctions against Kellogg's counsel.

What the Court Must Decide

Connecticut's Supreme Court granted certification on a narrow question: whether the use of fabricated legal authorities, regardless of intent, warrants dismissal of the underlying action as a sanction.

The case doesn't ask whether lawyers can use AI. It asks what happens when they do it badly.

Schatteman's brief argues that AI-generated hallucinations constitute "fraud on the court" equivalent to a lawyer knowingly submitting false evidence. "The source of the fabrication is irrelevant," his attorneys wrote. "Whether a lawyer invents a case from whole cloth or delegates that invention to an algorithm, the corruption of the judicial record is the same."

Kellogg's response, filed in March, counters that dismissal is a "draconian" remedy for what was ultimately a citation error, albeit a serious one. His attorneys note that no party was actually misled: the fabricated cases weren't central to their legal arguments, and the motion to compel was granted on other grounds.

The Connecticut Bar Association filed an amicus brief supporting neither party directly, but urging the court to establish clear standards rather than impose blanket prohibitions. "The legal profession must adapt to new tools," the brief states. "But adaptation requires accountability."

| Jurisdiction | AI Citation Sanctions (2023-2025) | Notable Cases |
| --- | --- | --- |
| Federal courts | 23 documented instances | Mata v. Avianca (S.D.N.Y. 2023), first major sanction |
| New York state | 4 attorney suspensions | Two lawyers fined $5,000 each |
| California | 3 public reprimands | No dismissals granted |
| Texas | 2 cases | One dismissal denied, one pending |
| Connecticut | 1 pending (Kellogg) | First state supreme court review |

The federal Mata case in 2023, in which two lawyers submitted ChatGPT-generated briefs citing entirely invented precedents, set the template for how courts have handled these incidents. Judge P. Kevin Castel of the Southern District of New York imposed $5,000 fines but declined to dismiss the underlying personal injury case, finding that "the client's claim has independent merit."

Connecticut's appellate courts have yet to address the issue directly.

Why This Case Matters Beyond Connecticut

The Kellogg decision will likely influence how other state courts structure AI-related sanctions. Legal ethics professors are watching closely.

"We're seeing a collision between professional responsibility rules written for a different century and tools that generate plausible-sounding nonsense at scale," said Rebecca Roiphe, a legal ethics scholar at New York Law School, in an interview with the Connecticut Law Tribune. "Courts need to decide: is this a technology problem or a lawyer problem? Because the answer determines everything about the remedy."

Roiphe noted that every major jurisdiction has rules requiring lawyers to verify their citations: Rule 3.3, on candor toward the tribunal, and Rule 1.1, on competence. "The AI doesn't violate the rules. The lawyer does, by failing to supervise."

But Schatteman's argument pushes further: that AI-generated content creates structural problems for judicial review that traditional sanctions can't address. How can opposing counsel effectively rebut arguments supported by non-existent authority? How much court time gets wasted chasing ghosts?

Kellogg's team responds that these concerns are overstated in their specific case. The fabricated citations appeared in a procedural motion, not dispositive briefing. No trial date was delayed. The "fraud on the court" framing, they argue, is rhetorical overreach designed to escape a malpractice claim with documented merit — Schatteman allegedly missed a filing deadline that cost Kellogg his original personal injury case.

---

What Happens If the Court Agrees with Schatteman?

A dismissal with prejudice would be unprecedented in AI citation cases. It would also create a powerful strategic weapon: defendants in malpractice and other cases could force plaintiffs to disclose their drafting methods, then seek dismissal if any AI involvement surfaces.

More likely, legal observers predict, is a middle path. The court could affirm that dismissal is available as a sanction but only on a showing of prejudice or bad faith, neither of which has been established on this record. Or it could punt, remanding for fact-finding on whether the AI use was reckless or merely negligent.

The justices may also address the broader question of AI disclosure. Several federal judges now require attorneys to certify whether AI drafted any portion of filings. Connecticut has no such rule. The Kellogg opinion could nudge the state's rules committee toward mandatory disclosure without the court imposing it directly.

Oral arguments are scheduled for 10 a.m. Wednesday in Hartford. A decision is expected by early 2026.

Whatever the outcome, the case signals that state supreme courts are done waiting for bar associations to set AI standards. The judiciary is writing the rules now, one sanction at a time.

---

Related Reading

- Startup Funding Hits $189B Record on AI Deal Surge
- Military AI Reshapes Modern Combat by 2026
- Teachers' New Playbook for Spotting AI-Written Work
- Nvidia CEO: AI Boom Is Only Getting Started
- Pentagon Standoff Shapes Future of AI in Warfare