Here in the heart of Silicon Valley, talk of AI is a daily routine. If you go to Blue Bottle Cafe on University Avenue in Palo Alto, 8 out of 10 people sitting there are writing AI code, pitching their AI startup to an investor, or chatting about AI with their friends.
But what does it look like across the country in NYC? For the second year in a row, we showed up at Legal Week in NYC to get the lay of the land and see some friends and colleagues. So, how was it this year? How does it compare to last year? Here are some interesting findings.
1. Almost every banner talks about AI.
This year, even more than last, EVERYONE is talking about AI. Granted, this is after Harvey’s $70M raise (and, interestingly enough, Harvey didn’t bother to show up) and Spellbook’s $20M raise—Spellbook was here last year, before raising their seed.
For the legal field, a few use cases for AI are no-brainers: legal research, e-discovery, and drafting certain documents, to name a few. These are also the most crowded areas, contested by larger companies and new startups alike.
Some forward-thinking law firms even have their own innovation teams and have deployed large language models on their own. During a talk titled “Talking with Lawyers about AI,” Deanna Fowler, Senior Innovation Manager at the law firm Troutman Sanders, discussed how her firm deployed its own “ChatGPT,” and how she has seen adoption increase. We were impressed by how quickly these forward-thinking law firms have embraced generative AI.
2. But do they really know how AI works?
In today’s world, if you build an application and don’t claim it has AI, it’s like being a high schooler without a date. And to extend the analogy: AI is like teenage romance—everyone talks about it, but few actually know how it works.
Many are not even fluent in the vocabulary. Lawyers should know that “LLM” does not just stand for a degree in law. In one instance, a panel moderator could not even get the phrase “prompt engineering” out correctly, clearly not understanding what it means. Others struggle with technical nuances: while many AI companies claim they “don’t send customer data back to OpenAI,” their terms and conditions may say otherwise. Having a “private instance” does not mean no data is shared with OpenAI at all.
To properly deploy Large Language Models (“LLMs,” unlike the degree above), a team needs to understand the legal nuances, grasp the intricacies of the technology, and build a useful, monetizable solution around real use cases rather than imaginary science experiments. Lacking any of these, the solution is guaranteed to be smoke and mirrors rather than a true transformation of how lawyers work.
3. Some companies chose not to show up.
E-discovery and legal practice management software still seem to dominate the floor, while many CLM companies, which used to have massive booths in previous years, chose not to participate. One CLM company just threw a happy hour at a bar next door. Others didn’t bother at all. A bad year for revenue and thus a tight budget? A challenging fundraising environment?
4. Across the street, are American white-shoe law firms really using AI?
After the conference, I had a drink with a longtime colleague, a partner at a white-shoe law firm across the street.
While British firms seem to be embracing Harvey and the like—contributing to the $8M revenue base behind its recent fundraising—American firms sit at opposite extremes of the spectrum. Troutman Sanders clearly has a tech innovation team and has launched its own LLM tools, per the panel above. A few other firms have invested in legal tech companies, built a tech innovation team, or both. But if you go down the Am Law 100, how many still think there is no need for AI whatsoever in their practice?
“Clients would never allow it.” “We’ll just hire more junior associates.” “What we do is so nuanced and needs so much human judgment that an AI tool would just not make sense.”
And yet we all know that AI is far better than humans at certain tasks: analyzing data, condensing long passages into simple summaries, drafting basic documents, and catching key terms a human reviewer might miss, to name a few. One could argue that 70% of what lawyers do can and should be handled by AI. But while the vision is up in the sky, the reality is still on the ground.
When Lisa Han published her article last year about legal tech and AI, I wrote to her, and we debated whether law firms or in-house legal counsel would be the better adopters of legal AI for reviewing and drafting documents in commercial transactions. Litigation aside, when it comes to transactions, as long as the billable hour is alive, I see no motivation or incentive for law firms to adopt AI. You cannot bill AI’s work at the human-hour equivalent, however much faster it is; that would be an ethical violation. Unfortunately, many law firms’ business models still center on billing the client at a high rate, hiring junior associates to do the work on a modest salary, and pocketing the margin in between. Unless law firms move to a flat, per-project fee model, I don’t see any incentive to adopt AI at scale for billable work.
All in all, lawyers need to wake up and see the world as it is. Clients will soon choose to work with the lawyers who wield AI as their finest tool, rather than treating it as a threat or a dream. The writing is on the wall.