Legal AI inside Law Firms: What We Buy vs What We Demand

I want to be upfront about something before I make this argument: I am not a disinterested observer when it comes to LexisNexis.

A few years ago, I cancelled a LexisNexis subscription. I cancelled it because, despite having access to a significant volume of information, they had made no meaningful effort to tailor their product to what I actually needed as a transactional lawyer running a focused practice. The information was there. The utility was not. I stopped paying.

LexisNexis responded by retaining a UK law firm to collect the outstanding balance from me.

I paid, but I have not forgotten the experience, because I think it illustrates something important about how these companies think about the relationship between themselves and the law firms they serve. The data is theirs. The terms are theirs. The legal muscle, when needed, is theirs. The lawyer on the other side of the subscription? A revenue line. Not a client.

I raise this not to settle a personal score, but because my experience as a small firm founder who actually tried to use these products — and found them wanting — gives me a particular vantage point on the current “legal AI” moment. What the major vendors are selling right now follows the same logic it always has: maximise the vendor’s leverage, minimise the firm’s optionality, and charge accordingly. The packaging has changed. The underlying dynamic has not.

With that said, here is the technical and commercial case for what law firms should actually be demanding.

The Wrapper Problem

The products being marketed to law firms as “legal AI” are, with few exceptions, what the technology industry calls LLM wrappers. A wrapper takes a foundation model built by Anthropic, OpenAI, or Google at a cost of hundreds of millions of dollars and places a proprietary interface in front of it. The vendor fine-tunes the model on legal data, builds a retrieval layer, applies a brand, and charges a substantial markup for the result.

Products like Harvey, CoCounsel, Legora, and Lexis+ AI all follow this pattern. The firm ends up paying for someone else’s intelligence, delivered through a restricted interface that limits flexibility, enforces particular query patterns, and locks the firm into the vendor’s product roadmap rather than the frontier model’s actual capabilities. The wrapper does not make the underlying model more intelligent. It makes the underlying model more expensive and less flexible.

The vendors understand this. They are not confused about what they are selling. The commercial incentive to present a wrapper as a distinct product is significant, and the technical vocabulary required to see through the framing is not yet widespread in law firm management. That gap is the business model.

What the Database Providers Are Actually Offering

Thomson Reuters and LexisNexis are not primarily AI companies. They are information companies. Their genuine value lies in curated, authoritative, continuously updated legal databases such as Westlaw, Lexis+, and the surrounding case law and secondary source infrastructure that law firms have been licensing for decades. That value is real. The problem is not the data. The problem is what the vendors are trying to do with the AI moment.

Both providers have moved to insert themselves as AI intermediaries between law firms and the foundation models those firms are already paying for elsewhere. The commercial logic is clear: the AI moment represents an opportunity to construct a new revenue line, converting an existing database subscription into an AI product subscription at a higher price point. The result for the firm is that it pays twice — once for the database it has always licensed, and again for the AI layer the vendor has built on top of it.

My own experience with LexisNexis reflects a version of this pattern at the subscription level. They had a great deal of information. They had made essentially no effort to make that information useful for a transactional practice focused on real estate, corporate matters, and business transactions. When I concluded that the product was not worth what they were charging and cancelled, they did not reach out to ask what had gone wrong or how the product might have worked better. They instructed a law firm to recover the money. That is a revealing set of priorities.

The vendors have also made gestures toward building proprietary foundation models. This is a strategic distraction. The foundation model race is a multi-billion dollar infrastructure contest between Anthropic, Google, OpenAI, and Meta. Thomson Reuters and LexisNexis cannot meaningfully compete in that race, and the attempt to do so diverts resources from the thing they actually do well: maintaining authoritative legal data. The correct response to the AI moment was not to build an inferior general-purpose model. It was to make authoritative legal data directly accessible to the models that law firms will actually use.

Why API Access Is Not Enough

A step up from the wrapper is direct API integration between a law firm’s AI environment and a legal database. This is a genuine improvement. The firm writes its own retrieval logic, retains the freedom to choose its foundation model, and is not locked into the vendor’s product. For straightforward, predictable research tasks, it works.

The structural problem is that legal research is not straightforward or predictable. The decision to check a particular jurisdiction, retrieve a cited authority, or verify whether a principle has been overruled rarely arises at the beginning of a research task. It arises mid-task, in response to what has already been found. Conventional API integration front-loads the retrieval decision: the query logic must be defined before the research path is walked, which means it is usually wrong or incomplete by the time it is needed.

API access treats legal research like a database query. Legal research is not a database query. It is an iterative, responsive process in which each answer generates the next question.
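The difference can be sketched in a few lines of code. This is a toy illustration, not a real integration: the corpus, function names, and "citation following" logic are all hypothetical, standing in for calls to a licensed database.

```python
# Toy corpus: case name -> (holding, cited case to verify, or None).
# Purely illustrative data, not real authorities.
CORPUS = {
    "smith v jones": ("principle X applies", "doe v roe"),
    "doe v roe": ("principle X was narrowed", None),
}

def search_database(query):
    """Stub standing in for a licensed-database search call."""
    return [CORPUS[query]] if query in CORPUS else []

def frontloaded_research(question):
    # Conventional API integration: one query, planned before any
    # results exist. Cited authorities in the results are never followed.
    return search_database(question)

def iterative_research(question, max_steps=5):
    # Research as lawyers actually do it: each answer can trigger
    # the next query, up to a step limit.
    findings, next_query = [], question
    for _ in range(max_steps):
        results = search_database(next_query)
        findings.extend(results)
        cited = results[0][1] if results else None  # a citation worth checking
        if cited is None:
            break
        next_query = cited
    return findings
```

The front-loaded version returns the original holding and stops. The iterative version follows the citation and discovers that the principle was later narrowed, which is precisely the kind of finding that only emerges mid-task.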

What MCP Access Changes

The Model Context Protocol (MCP) changes the architecture in a way that matters for how legal work actually gets done.

Under an MCP architecture, the model itself decides, mid-task, that it needs to check an authority, retrieve a cited case, or query a specific jurisdiction. It makes that query directly, in real time, and incorporates the result into its reasoning before proceeding. The research path is not defined in advance. It emerges from the work, the same way a competent associate’s research path emerges from the work.

For law firms, this has three practical consequences. First, retrieval logic does not have to be written and maintained in advance, because the model handles it. Second, the firm retains full freedom to choose whichever frontier model it is using, because MCP is an open, standardised protocol rather than a proprietary integration. Third, the database remains the database: an MCP server in front of Westlaw or Lexis enforces existing licence terms at the server level, logs queries for methodology documentation, and adds no new data product to the stack.

This is the point the vendors have every commercial incentive to obscure: MCP access requires no new data product. The underlying database is already licensed to the firm. The protocol is open. An MCP server in front of the existing database is an engineering project, not a new product line. The vendors could build it at marginal cost and offer it as part of the existing subscription. The reason they are not doing so is that the existing subscription model does not produce AI product revenues, and AI product revenues are what they are currently chasing.
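To make the "engineering project, not a product line" point concrete, here is a minimal sketch of the server-side half of that architecture: a tool handler that logs every query and enforces licence scope before anything touches the database. Every name here is hypothetical, the licence table is invented for illustration, and a real server would wire this into an MCP SDK and the vendor's actual search API rather than a stub.

```python
import datetime

# Hypothetical licence table: jurisdictions the firm's existing
# subscription already covers (illustrative values, not real terms).
LICENSED_JURISDICTIONS = {"ontario", "federal"}

# Append-only query log for methodology and professional-responsibility
# documentation.
AUDIT_LOG = []

def search_case_law(query: str, jurisdiction: str) -> dict:
    """Sketch of an MCP-style tool handler: log first, enforce licence
    terms server-side, then (in a real server) query the database."""
    AUDIT_LOG.append({
        "tool": "search_case_law",
        "query": query,
        "jurisdiction": jurisdiction,
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    })
    if jurisdiction.lower() not in LICENSED_JURISDICTIONS:
        # Enforcement lives at the server, not in the model or a wrapper.
        return {"error": f"jurisdiction '{jurisdiction}' is outside licence scope"}
    # A real server would call the licensed database's search API here.
    return {"results": [], "note": "stub: licensed database call goes here"}
```

The point of the sketch is the shape, not the code: the model gets a door, every pass through the door is logged, and the existing licence terms are the lock. Nothing in it requires a new data product.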

What Firms Should Be Asking For

The ask is technically achievable and commercially defensible: open, governed, auditable MCP servers in front of Westlaw and Lexis, enforcing existing licence terms at the server level, with full query logging for methodology and professional responsibility documentation.

Firms that understand this should be asking for it explicitly in their renewal and procurement conversations. Not a wrapper. Not a proprietary model. Not an AI-branded upsell. MCP access to the data they are already paying for, usable with the frontier model of their choice.

Everything else being offered — the wrappers, the proprietary models, the AI-branded subscription tiers — is a product designed around the vendor’s margin, not the firm’s workflow.

The database subscription has been paying for the data since the beginning. MCP just gives the model a door. The vendors would prefer you not notice that the door was always there.

Barbarian Law is a transactional law firm based in Aurora, Ontario, focused on real estate, business purchase and sale, incorporations, corporate maintenance, contract drafting, and wills. If you want to talk through how AI tools are — or should be — working in your legal practice or business, visit barbarianlaw.ca/contact.
