Exhibit AI — WHO'S ACCOUNTABLE: The verification step moves from best practice to rulebook.
> FROM THE EDITORS

Issue 001 said the demo era was over. Issue 002 names what's replacing it: accountability. The California Bar proposed a rule that would require lawyers to independently verify every AI output. An Icertis survey said most in-house teams have no audit trail of what their agents actually did. And Clio published the first dataset showing AI-using lawyers are billing more hours, not fewer.

Underneath those three is the same structural shift. The market is no longer paying for capability; it is paying for verifiable, supervisable capability. The firms and labs hiring this week reflect it — senior lawyers, not technologists, at the front of the function. Five-minute read.

> 01 // ETHICS_REGULATION

California writes the verification step into the rulebook

State Bar of California · six AI-focused ethics changes · proposed

The State Bar of California proposed a package of six AI-focused ethics changes this week. The headline rule: a lawyer must independently verify every AI-generated output before relying on it. The other five cover competence, confidentiality, supervision, candor to the tribunal, and client disclosure.

California is not the first state to write about AI in the rules of professional conduct, but it is the first to put verify-every-output on the page as a duty rather than a recommendation. That language matters. ‘Reasonable steps’ gets argued in malpractice depositions; ‘verify every output’ gets argued in disciplinary hearings.

If the rule is adopted, the operating consequences are immediate: every AI-assisted task has a named human owner, a verification record, and an audit trail. That is a procurement spec. Vendors that ship a check-mark UI for ‘reviewed and approved’ ship a compliance product. Vendors that don't are shipping a liability.
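As a concrete illustration only, the spec above can be sketched as a data structure: a named owner, the output being relied on, and a timestamped verification note. This is a minimal sketch, not any vendor's or bar's schema; every field and name here is hypothetical.

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
from typing import List, Optional

@dataclass
class VerificationRecord:
    """One AI-assisted task: named human owner, verification record,
    audit trail. All field names are illustrative, not a real schema."""
    task_id: str
    owner: str                       # the named human accountable for the output
    output_fingerprint: str          # e.g. a hash of the AI output relied on
    verified: bool = False
    verified_at: Optional[str] = None
    notes: List[str] = field(default_factory=list)

    def mark_verified(self, note: str) -> None:
        # Record the substance of the check, not just a check-mark:
        # a note plus a UTC timestamp, appended to the audit trail.
        self.verified = True
        self.verified_at = datetime.now(timezone.utc).isoformat()
        self.notes.append(note)

rec = VerificationRecord("matter-042/task-7", "a.partner@example.com", "sha256:ab12")
rec.mark_verified("Citations independently confirmed against the reporter.")
print(asdict(rec)["verified"])  # True
```

The point of the sketch is the shape, not the code: ‘reviewed and approved’ becomes a record with a who, a what, and a when that survives discovery.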

WHY IT MATTERS

‘Verify every output’ is the line that turns AI from a productivity tool into a regulated workflow. Other bars will copy it. Vendors should assume the spec, not wait for it.

[ LawSites · California Bar proposes verify-every-output rule ]

> 02 // MARKET

In-house teams admit they can't see what their agents did

Icertis research · in-house legal · AI agent visibility

A new Icertis survey of in-house legal teams found a majority have no audit trail of what their AI agents actually did — not what they were asked, not what data they touched, not what they decided. The teams are deploying agents anyway.

Read this against Story 01. The bar wants verification on the front end. Icertis is telling you that the back end — the ‘what did the agent do, with whose data, on whose authority’ record — does not yet exist in most legal departments. Without it, ‘reasonable supervision’ under any state's rules is a sentence a lawyer cannot finish.

The product gap is named: agent observability. Not features — visibility. The next 18 months of legal-AI buying will be driven less by what the agents can do and more by whether the buyer can prove, after the fact, what they did.
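To make ‘agent observability’ concrete: the missing record is an append-only log of what the agent was asked, what data it touched, what it decided, and on whose authority. The sketch below is one hypothetical way to build such a log, hash-chained so after-the-fact edits are detectable; nothing in it reflects Icertis's research or any shipping product.

```python
import json
import hashlib
from datetime import datetime, timezone

class AgentAuditLog:
    """Append-only agent action log (illustrative, not a real product).
    Each entry chains to the previous one via a SHA-256 hash, so any
    later edit to an entry breaks the chain and is detectable."""

    def __init__(self):
        self.entries = []
        self._prev_hash = "genesis"

    def record(self, prompt, data_touched, decision, authorised_by):
        # The four things the survey says are missing: asked, touched,
        # decided, authorised — plus a timestamp and the chain link.
        entry = {
            "ts": datetime.now(timezone.utc).isoformat(),
            "prompt": prompt,
            "data_touched": data_touched,
            "decision": decision,
            "authorised_by": authorised_by,
            "prev": self._prev_hash,
        }
        self._prev_hash = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        entry["hash"] = self._prev_hash
        self.entries.append(entry)

    def verify_chain(self):
        # Recompute every hash from scratch; a single edited field
        # anywhere in the history returns False.
        prev = "genesis"
        for e in self.entries:
            body = {k: v for k, v in e.items() if k != "hash"}
            if body["prev"] != prev:
                return False
            h = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if h != e["hash"]:
                return False
            prev = h
        return True

log = AgentAuditLog()
log.record("Summarise NDA v3", ["matters/042/nda_v3.docx"],
           "flagged clause 7 for human review", "gc@example.com")
print(log.verify_chain())  # True
```

Twenty lines of logging is not the hard part; the hard part is that most deployed agents today write nothing at all, which is the gap the survey names.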

[ LawSites · Icertis on in-house AI-agent visibility ]

> 03 // MARKET

The billable-hour question, finally with data

Clio · AI-using lawyers · billable hours · work-life balance

Clio published the first credible dataset on what AI is doing to the billable hour, and the headline is the opposite of the popular narrative: lawyers using AI are billing more hours, not fewer, and reporting better work-life balance.

The intuitive explanation is wrong: AI is not making lawyers faster on the same work. It is letting them take on more work — and, per Stories 01 and 02, more defensible work, which is itself billable. Verification is real labour. Every ‘independently confirmed’ line in the matter file is a clock that runs.

This is a useful counter to the ‘AI will commoditise law’ thesis the analyst class still leans on. If the regulatory direction is ‘verify everything,’ the billable-hour direction is ‘bill for the verification.’

[ Clio · AI lawyers, billable hours, and work-life balance ]

> 04 // STRATEGY · CAPITAL FLOWS

The lab and the firm both put senior lawyers at the front

Anthropic · Mark Pike (AGC) · Stephenson Harwood · partner-led innovation

Two hires this week, read together, say the same thing. Anthropic's Associate General Counsel Mark Pike sat down with Artificial Lawyer to walk through how a frontier AI lab runs its legal function — heavy on accountability, audit trails, and human review at every decision point. The day before, Stephenson Harwood named its first head of innovation — and gave the seat to a partner, not a CIO.

Both are signals about who gets the authority. The lab puts senior counsel inside the engineering room. The firm puts a lawyer — not a technologist — at the top of the innovation function. In both cases the message is the same: when the question becomes ‘is this defensible?’, the person who answers it has to be a lawyer with seniority and authority.

Track this pattern. The firms and labs that look credible to general counsel in 2027 will be the ones whose AI decisions were made by people the GC can call by their first name.

PATTERN

Lawyer-led, not CIO-led. Two data points this week, building on a half-dozen since January. The category is consolidating around a name and a job title.

[ Artificial Lawyer · interview with Mark Pike (Anthropic AGC) ]  [ The Lawyer · Stephenson Harwood partner takes innovation seat ]

> 05 // MARKET

Cooley's Q1 numbers reset the legal-AI valuation conversation

Cooley · Q1 2026 venture financing report · legaltech

Cooley's Q1 2026 venture financing report dropped this week, and the legaltech section is the one to read. After the late-2025 reset — most visibly the re-priced Harvey Series E — the Q1 data is the first clean look at where AI legaltech valuations are actually landing.

The short version: Series A and B rounds continue to clear, often at strong multiples, but the ‘AI for X’ pitch alone no longer drags terms upward. Repeat-business signal, named-firm references, and verification/audit-trail capability are the new term-sheet inputs. Cooley's own preferred-stock data shows the median post-money holding steady, with the variance compressing into a tighter band — the sign of a market that has stopped over-paying for the demo.

For founders raising in the next two quarters: the partner deck question stops being ‘how big can this get’ and starts being ‘who actually uses it, and what do they bill on top of it.’ (Cf. Story 03.)

[ Cooley · Q1 2026 Venture Financing Report ]

> THE DOCKET · MOVES THIS WEEK
REGULATION   Watch: which state bar adopts ‘verify every output’ language next. New York and Florida are the obvious candidates given their existing AI task-force work.
PRODUCT   The gap: agent observability. Logs of what the agent saw, what it decided, who authorised it. The first vendor to ship this as a primary feature (not a buried admin tab) wins a renewal cycle.
FIRMS   Pattern: partner-led innovation seats. Stephenson Harwood this week. Linklaters last week. Two more before the end of Q2 makes it a category.
CAPITAL   Cooley Q1: the ‘AI for X’ multiplier is gone. Repeat-business signal and named-firm references now drive terms. Founders raising into Q3 should rebuild decks around those slides.
LABS   Read: the Mark Pike interview is the cleanest public account yet of how a frontier lab runs its own legal function. Worth a Friday afternoon.
> THE HOLD · ONE PARAGRAPH

The week's stories rhyme. A regulator wants verification. A survey says verification is structurally impossible without observability. A billing dataset says verification is real labour that shows up on invoices. And the hires — at a firm and at a lab — say the people doing it must be senior lawyers. The legal-AI category is consolidating around the same answer: accountable, observable, billable. The vendors and firms that arrive at that posture first will spend the next year selling to the rest. — The Editors

EXHIBIT AI
THE TRUSTED DAILY AI BRIEFING
FOR THE LEGAL INDUSTRY.

NO HYPE. NO LEGAL ADVICE.
NO VENDOR CAPTURE.
EXHIBITAI.NEWS

[ unsubscribe ]
[ forward to a partner ]

Exhibit AI reports on the AI industry. We do not provide legal advice. Sponsorships are disclosed and never shape coverage. Issue N°002 published TUE 12 MAY 2026.
