AI in Law: Keep Judgment, Automate the Tedium (Full Transcript)

Why lawyers resist AI, what AI writing lacks, and how memory, RAG, and knowledge management can make legal AI genuinely useful.

[00:00:00] Speaker 1: So you think about the way that AI is advertised to law firms and to lawyers, right? It's like AI is going to step in and do these high-level tasks for you. And if I was still practicing, I would look at that and think, why? If AI is going to come in and do all of these high-level tasks, like write the research memo for me, write the brief, do all of this stuff that I thought I was learning how to do in law school, then what am I doing, except typing words in the magic box? That devalues what I do as a lawyer. And I think that kind of has it backwards. I would want lawyers to operate at the top of their license, not the bottom. The sweet spot for AI is: yes, it can help you write, but it's not going to replace your cognitive thinking and your cognitive ability and those hard skills that you learned over years of practice. It should step in and do the annoying things, the stuff that I never wanted to do: give me an update on where this case is; tell me what this guy said in the deposition. OK, let me think about that, use my lawyer brain, and think about where that fits into the case. Don't tell me where it fits in. Let me figure that out. Then I'll know, and I can prosecute this case better; I can interview the next witness with that in mind. So to me, that's where that cognitive disconnect comes in, because a lot of the naysayers in law firms are pushing back against this idea: no, I don't want it to practice law for me. I don't want to be just the air traffic controller watching the planes land. I want to be the pilot. Right. Yeah.

[00:01:58] Speaker 2: I don't know.

[00:01:59] Speaker 1: Just a thought.

[00:02:00] Speaker 2: Is that just a packaging problem, or is that an imagination problem? And what I mean by that is: is it a packaging problem from the people that are selling the AI, or are people getting a little bit wild with what they think artificial intelligence is really capable of?

[00:02:16] Speaker 1: I think it's both, in a weird way. A lot of the AI companies, we're starting to see them step back from that, like what I've seen with Clio and file, where it's taking out the tedium. But I think it's a packaging problem because AI is intrinsically OK at a lot of that. And so there's a push to say, hey, it can do all this cool stuff. It can write this stuff. It can search, or go through a RAG vector database, and pull out the information that you want, things like that. But there's a fine line between: I don't want to have to go through and read every deposition verbatim five times, but I also don't want to be told, here's what you need to focus on in your next deposition in this case, by a machine, without figuring it out on my own. So there's a fine line there. And I think it's a lack of imagination, too, on both ends. The lawyers are seeing it as: AI is coming to do the law tasks that I really wanted to do. They're not seeing it as a helper that can do the stuff they don't want to do and offload some of the stuff they don't want to have to think about. It's both. And I think there's a communication gap between the companies, the lawyers, and the people who are trying to encourage lawyers to use AI, either in their firm or outside of their firm. There's some big disconnect there that needs to be fixed.
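For readers unfamiliar with the "RAG vector database" Speaker 1 mentions, here is a minimal, hypothetical sketch of the retrieval step: score stored text chunks against a query and hand the best matches to the model. The bag-of-words "embedding" below stands in for a real embedding model, and the deposition snippets are invented for illustration.

```python
# Toy RAG-style retrieval: "embed" chunks as word-count vectors, rank
# them by cosine similarity to the query, and return the top matches.
# A real system would use a trained embedding model and a vector store.
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Stand-in embedding: a bag-of-words vector of lowercase tokens."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, chunks: list[str], k: int = 2) -> list[str]:
    q = embed(query)
    ranked = sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)
    return ranked[:k]

# Invented deposition snippets, purely for illustration.
depo_chunks = [
    "The witness said the brakes were inspected in March.",
    "Counsel objected to the form of the question.",
    "The witness could not recall the date of the inspection.",
]
hits = retrieve("when were the brakes inspected", depo_chunks, k=1)
```

The point of the sketch is the division of labor the panel keeps returning to: retrieval surfaces the relevant passage, but deciding what it means for the case stays with the lawyer.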

[00:04:00] Speaker 3: Yeah, I think AI is proving something psychologists debated for a long time: what makes cognition? What is cognitive ability, and what distinguishes what we can do from what a machine can do? The things that we used to hold in high regard and say prove our humanity, like my writing ability, my ability to parse out information, my ability to make quick snap decisions on gut impulse, we've now proven we can do with mathematics. And I think the really difficult part now is figuring out what it means to be a pilot, because what you thought was flying the plane is no longer what is really flying the plane. That's something we all have to grapple with across industries, but it's really important for law: figuring out the pieces that are now left to you. And that, to me, is flying the plane. That's the part where you will be valuable: taking command of deciding where and how I'm going to do my work. That's where the value will lie, not necessarily in the things we used to treat as the marks of our professionalism, like the quality of our work product. That will, of course, continue to be important. But it's almost that metacognition that's going to become, I think, what distinguishes us from these systems.

[00:05:23] Speaker 2: So that goes into... I was cruising the interwebs the other day, which is my fancy way of saying I was on TikTok. And I came across this person, Dr. Rachel Bell, I think, and she was talking about the really marked differences between what AI can do and what humans can do. A lot of times when we go to writing, we think of the em dash, you know, the things that show that this is AI. And one part of her argument was that when AI gives you a result, when it writes something, there's an issue we haven't really dealt with before: it writes beautifully, and it writes like it's smart, but the content is not deep. It doesn't know what it wrote before. It doesn't really have any idea of the context of what it's writing. And it got me thinking about how we are not used to that. You see this piece of writing that is using parallelism, that is using rhetorical tricks, and you're like, man, this person must be smart. And then it doesn't have any depth. And I think, going to what you were talking about, Drew, that's our place: giving the depth to these arguments, giving the actual context around them, and making these things actually connect to each other.

[00:07:00] Speaker 3: Yeah. And I'm going to add that the real definition of a meme is a way to communicate a cultural understanding. And I think that's one of the things humans do that machines really can't. Ask chat to come up with jokes, ask it to come up with punchlines, ask it to come up with slogans.

[00:07:17] Speaker 2: Yeah.

[00:07:17] Speaker 3: And those are the things that distinguish us: the ability to truly understand Zach, or understand Sam, in a way that I can send you an image that includes three words and make you laugh. Those are the things that will continue to have impact, and it's person-to-person communication. It's also about what you're saying, Zach: how do I have depth in my writing? Sometimes it's breaking conventions that makes us able to understand what's being said. And I think that's going to continue to be a way we can distinguish ourselves, because what you've noted beautifully is that these machines can imitate us in wonderful ways, but it's all quite surface level. Human-to-human communication is much deeper. And maybe it will be a TikTok-ified future where we just send memes to each other and communicate in shorts, and that will be the way we reach understanding, whereas these models have to vomit out a bunch of text to even poke around the meaning they're trying to make.

[00:08:16] Speaker 2: Well, and that goes to Sam's comment earlier about our imagination of what AI can do. We as attorneys think that AI is coming for the fun part. You know, Ben, you're saying have artificial intelligence help you write. Well, if you think that for some reason it actually has depth of writing, then you're going to let it write. But if you recognize that it doesn't really have any depth in its writing, then it has to help you. It cannot take over your writing, like we're imagining it's potentially going to do with the people Sam's talking about.

[00:08:57] Speaker 4: Ben, I think even when you do... for a long time, we've always said that even when you have the AI write, there's value in going back and revising what it wrote to add your voice, to add your point of view, because what the AI writes is often very generic. Yes, there are people who've done a lot of training, or who've written a very extensive prompt, to try to get the voice and tone a certain way. But in most cases, people who have the AI write are letting it write in a very generic, out-of-the-box way. And that comes across as a little bit superficial, especially now that people are starting to spot AI writing, or at least think they can. It comes off very trite. So I think adding your own voice and tone is really important, even if you let the AI do the writing. And if you start with your own writing, obviously you're there already. So the whole hands-off, click-a-button-and-have-the-AI-write-it approach, I think, is a little bit lazy, but it's also, I think, increasingly ineffective.

[00:10:03] Speaker 2: I think we should tell that to the people that wrote the most recent season of The Witcher. I don't know about you guys, but I keep trying to figure out which shows fired all the writers and let AI write it, and that's the leading contender for me right now of what's being written by AI. It just totally trashed it.

[00:10:28] Speaker 1: I think they also let the main guy go, and now he has like 10 lines in the entire season. Yeah, yeah. Most of which are one word.

[00:10:39] Speaker 3: One of the core things I want to add here, which I think makes The Witcher really relevant, is that the dialogue became so bad because you lost the guardrails you had to make sure it stayed true to the story. And that isn't a dialogue problem, right? That's actually a really deep understanding problem that people contribute to. It makes me think of the original Star Wars trilogy. George Lucas's wife had a heavy influence in making sure that wooden dialogue got converted to something a human would actually say. She was gone by the time he made the prequels, so there was no one there to keep the dialogue from being absolutely horrible, and there was no agency for the actors to convert it into something that sounded natural. The funny thing to me is, I think the story of both trilogies is beautiful and deep. The difference is that it's communicated in a very authentic, human way in the first three films, and then in the next three films you see the actors almost struggling to get through those lines.

[00:11:38] Speaker 4: Yeah, I wonder if they're still... so, I don't mean to sidetrack us too much, but my father, many years ago in college, had a roommate who turned out to be a fairly successful science fiction writer, James Stevens. Jim was asked by the people making a Star Trek TV show to write a script for them, and he asked my dad if he wanted to co-write it with him. He said, yeah, sure. So they wrote some proposals for some Star Trek episodes. This was for Next Generation or Deep Space, whatever; I don't know the Star Trek series very well, but it was one of them. Anyway, where I'm going with this is: what they received as part of their prep was basically an encyclopedia of every character in the series. This is their background, this is their motivation, and so on. And they had to stay true to that when they were writing for a character, so that they didn't go off the rails and, you know, Spock suddenly has a temper. So I wonder if that's still happening, or maybe that was an unusual thing that just the Star Trek people did. But that sort of guardrail, that sort of context you could provide, ties back to what we're doing here. I find that when I create an agent, I have the most success when, for all the agents I've created in my personal life, I write a Word document that gives it all the background and context it needs to do whatever I'm asking it to do, and I instruct it to refer to that frequently so that it understands my motivations, my point of view, and what I'm looking for it to do. And I wonder if the TV folks have gotten away from that practice.
But also, again, in AI, I think there's a lot of value in building that kind of context for your tool.

[00:13:37] Speaker 2: But I think there's a level there, because that's a really important note: if AI doesn't have that semantic understanding of what it's done before and of the world around it, you have to give it the world around it as much as you possibly can. But where it comes in is, it still doesn't actually understand it. So when you're creating that document, create it with the knowledge that, OK, it's going to use this as guardrails as best it can, but it's not really understanding this document in the way that we think of understanding. Right. Yeah.

[00:14:28] Speaker 4: And, you know, we talk about this when we do the brainstorming class, because I do a class on brainstorming with Copilot. We talk about using personas, giving the AI a persona: act as an experienced real estate attorney, act as a litigant, act as a property owner who's trying to lease this property, things like that. And I tell them it's valuable to do that, but it's also important to remember that the map is not the territory. This is not actually an experienced property owner trying to lease some property; it's an AI simulation of one. So yes, it may give you some valuable insights, but please don't confuse those insights with the actual person. It's just an AI simulation that hopefully gives you a perspective you didn't have before, but it shouldn't be taken literally as the ground truth.

[00:15:18] Speaker 2: So, OK, as we kind of wrap up here, this conversation leads me to this idea of memory. It makes me think that an AI, or a product like that, having, quote unquote, memory is not really memory in the way that we think of it, and it's not even really memory in the way we think of it for computers now. Could we define what we mean by, quote unquote, memory in an AI system?

[00:15:57] Speaker 3: I'll throw out a controversial take to kick us off here. I think that machine memory is actually a lot easier to define than human memory, and I think it's a really useful tool. It's typically created as a database, so you've got different rows that represent different understandings of, let's say, the user or the task, and they can be extracted. Or, typically, what happens is the entire thing is fed as part of the prompt that underlies how the system is operating. And to me, that gives it guidance. A human might walk into a situation with maybe their last two memories from this morning, but not equipped with everything from the last week, whereas the AI is immediately fed the 10 most important things to know about me and my work practices. So I think it's a very effective system, and a great thing for one of these systems to have.
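The database-style memory Speaker 3 describes can be made concrete with a small sketch. This is illustrative only: the stored "facts" and the prompt layout are invented, and real products vary in how they select and format memory rows before prepending them.

```python
# Sketch of "memory as a database fed into the prompt": each row is a
# stored fact about the user, and the whole table is prepended to the
# prompt so the model appears to remember without any deep understanding.
memory_rows = [
    {"id": 1, "fact": "User is a real estate attorney."},
    {"id": 2, "fact": "User prefers bullet-point summaries."},
]

def build_prompt(user_message: str, rows: list[dict]) -> str:
    """Assemble the final prompt: memory block first, then the message."""
    memory_block = "\n".join(f"- {r['fact']}" for r in rows)
    return (
        "Things to remember about this user:\n"
        f"{memory_block}\n\n"
        f"User: {user_message}"
    )

prompt = build_prompt("Summarize this lease.", memory_rows)
```

As Speaker 2 points out next, the rows themselves carry no weighting or interconnection; whatever makes it into the table is simply restated to the model on every turn.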

[00:16:48] Speaker 2: But it doesn't necessarily know the weight of those items, right? It wouldn't necessarily know which is of the most importance, or the connections between them. It really is still kind of static information.

[00:17:03] Speaker 3: You're absolutely right about that. But to me, I think about it in terms of what tasks we're asking this thing to do. I'm not asking it to have a deep understanding of me, or even of what we are doing. I'm having it do surface-level things on my behalf, and because of that, it needs surface-level, static understandings to be able to do them.

[00:17:21] Speaker 4: That makes sense. And circling back to our previous chat about movies and entertainment, I think there are two movies that might be very relevant to the discussion of AI and memory: 50 First Dates and Groundhog Day. You can have either experience, right? You can start a new chat every time you converse with your AI, and that's 50 First Dates: you're starting off with a "hi, who are you?", even though you've had 30 previous dates with this AI. You can do that. But maybe the more effective way is the Groundhog Day approach, because by the end of that movie, he had become a master of all these things simply because he had lived them through four thousand iterations, or however many there were, and had over time figured out what wasn't working. Or maybe it's the Tom Cruise one, Edge of Tomorrow or whatever it is, where he keeps waking up on that same morning, and each time he gets one step further because he learns what went wrong the time before. So with the AI, I've traditionally been in the 50 First Dates model, where I tended to start new conversations with my agents all the time. Now I'm starting to back off on that and go to the Groundhog Day model, where I go back to a previous conversation and continue it, so that the AI still has the context from before. The challenge is, if the AI went too far astray in the previous conversation, when do you give up and start over with a new conversation, realizing, OK, this is unrecoverable, so to speak, versus realizing the previous conversation was still on a productive track and you can build on it for your new task?
That's going to be an interesting challenge: making that judgment.

[00:19:11] Speaker 2: Well, and in a very real sense, with our definition of, quote unquote, memory here for AI, there's a point at which it's not holding all of your chats in its immediate... moment, I guess; I don't even know what to call it. It's not holding all your chats right there. It still has a flawed method or way of remembering.

[00:19:39] Speaker 4: Yeah, I mean, there's an overt context window, of course: after a while it forgets the stuff that happened at the beginning, because it can only hold so much. That's just a mathematical reality. But there's also a weird quirk that I'm not sure was ever figured out, but that I know we thought about when I was at Microsoft, which is the tendency of the AI to be really strong on the stuff you said at the beginning of the conversation and at the end of the conversation, but to get a little fuzzy on the stuff in the middle. I don't know that they ever quite figured out the technical reason why that happened, but it was something they knew was a challenge that had to be considered. I don't know what the solution to that problem is, other than continually reminding it of the key issues, just so you're more assured it isn't forgetting the thing you said two days ago while remembering the thing you said last week. For me, one of the places I'm using this is in our house hunt, because, as you may know, next year we're moving. So I've created an AI that's my real estate AI agent. It's tied into Zillow, and I would tie it into Redfin if I could. We've given it a document: this is what we're looking for in a property, these are our must-haves, these are our nice-to-haves, these are our deal-breakers, these are the areas geographically we want to look at, et cetera. It's about a seven-page Word document that goes through, in some detail, what we want in a property, and the AI is supposed to refer to that. As I'm having these conversations, it asks follow-up questions, like, OK, what should I prioritize?
Like your comment about it not knowing what's more or less important: yesterday, working with this agent, it actually asked me a question about, I think, one of our priorities around schools. It said, which is more important here, this or this? And so I answered that question, but I realized I answered it in that conversation. So if I start a new conversation now, we're in 50 First Dates territory: it's going to forget that priority. It won't know that priority. And so this comes back to me being in Groundhog Day mode and going, OK, I need to go back to that conversation so it can leverage the priorities I told it last week. Hopefully.

[00:21:50] Speaker 3: I think what you've pointed out, though, is actually a really great feature, maybe not a bug. If it focuses on the first and the last things we've told it, then machine memory functions a lot like memory does for a person. For us, the most salient memories are the ones made most recently, and the ones that last the longest are the ones that have mattered the most over time. That seems to be how the machine is going to operate. And if it can write to its own memory, then we're going to end up with some kind of dataset that guides it, I think, in the most optimal way to behave the way you want it to and work with you.

[00:22:26] Speaker 4: And the other thing I'm starting to do along that same line, to take a little bit of the ambiguity out of it: when it asks me those questions, I learn, just like with a person, what's actually important. I can't think of a good example off the top of my head, but there are things in your life as you go along that are trivial, and things where you realize, oh, wait, that's something important. I'm having the same experience with the AI agent: it asks a follow-up question, and there's an "oh, OK, that's an important thing." So to help the AI understand that it's important going forward, I'm actually going and updating the document I've uploaded to the AI and adding it. That way, in the future, if I've started a 51st date, it doesn't have to ask me again, because I've learned, OK, this is something it wants to know about that's important, so I'm going to throw that in the document too. My document is evolving over time as I learn what's important to this process, hopefully reducing the number of times the AI has to ask me.

[00:23:31] Speaker 1: As Drew mentioned, recency bias is a validated thing in humans, too. Right, that's a psychological effect. So the AI is kind of mimicking how people think, in a weird way, even though it's not a person. To take this to maybe a more metaphysical level on the idea of memory: I think about law firms and how they operate as kind of an organism that has its own memory. Law firms have lawyers who all have their own memories, and traditionally we've seen lawyers silo their knowledge in their own memories, so that they become an expert on some area of law, or an expert on the cases of some particular client, or they're great at arguing in front of this set of judges or on these issues. I think maybe in the not-too-distant future, we'll see AI supplement that memory for a law firm as an organism, or for the lawyers working in it. I don't know quite yet what that looks like. But to me, it's an interesting idea that instead of the traditional idea of knowledge management, where lawyers have to dump all of their prior work into a database that we then search with natural language search or something like that, we have an AI memory system that learns how the lawyers who came before have operated, what worked for them and what didn't. Again, it's kind of a weird metaphysical idea, maybe. But I think it's fascinating that that might be possible. You're not talking about replacing a person. You're talking about supplementing what a firm can do, how it remembers what it has done, and how it uses that to its advantage in the future.

[00:25:38] Speaker 4: Yeah, I have long wanted that Jarvis assistant. And now you're starting to see different Kickstarters and things, lapel pins or glasses that have this built in, that would be there alongside you, sort of Fathom for your whole day: an AI assistant that learns the things that go on in your day. Because one of the big gaps we have with AI is that it doesn't understand all that context, because it wasn't there. So maybe if the AI is there, it learns more context and can be more helpful to you. For me, I'm always paranoid that, oh, I forgot about some meeting, I forgot about some task. Well, if the AI is there helping you capture that information, then maybe that adds more value. And it certainly changes knowledge management in a significant way when you have some tool capturing that automatically, as opposed to relying on a person to do that data dump.

[00:26:38] Speaker 2: And I think this idea of capturing all the information leads me to: what about the trash? Because the one thing the AI doesn't have, at least initially, is, again, the context of which memory (a) is important and (b) needs to go the hell away. As law firms, we have a lot of... for my purposes, I would have old leases, old leases still in the files, still on the computer, all that. And if I was relying simply on the AI to treat my computer as a memory space, then without any instruction, it would potentially put the same value on that old lease as on the new lease. So I think we need to keep in mind that this context just isn't naturally there in the memory. Whereas we, and I don't know, I'm not a neuroscientist, but we kind of put context on memories as we go, and can say, well, that isn't worth anything, because in my experience that memory is not worth anything.

[00:27:55] Speaker 1: And I think... oh, go ahead, Sam. Zach, to your point, I think we don't have a way of doing that because we haven't really had to do it yet. So imagine if every time you saved a prior document, like prior work, to your document management system, there was a slider that said bad to good, zero to 10. Would people use that? Could it be that simple, or does it need to be more nuanced? I'm saving this, but this is bad; or, I'm saving this, and it's the best thing I've ever done. If you had just that simple a signal of whether something is bad or good, you could kind of extrapolate from there. Is it good on these issues? Is it bad on these issues? Is it good for this reason? Is it bad for this reason?
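The slider Speaker 1 imagines amounts to storing a quality score alongside each document and letting retrieval ground only on documents above a threshold. The sketch below is hypothetical; the filenames, scores, and cutoff are invented, and a real document management system would carry far richer metadata.

```python
# Sketch of the "slider" idea: each saved document carries a 0-10
# quality score, and the grounding set excludes anything below a
# threshold, so the old, bad lease never reaches the model.
documents = [
    {"name": "lease_2009_old.docx", "quality": 2},   # archived, outdated
    {"name": "lease_2024_model.docx", "quality": 9}, # the good template
    {"name": "research_memo_v3.docx", "quality": 7},
]

def grounding_set(docs: list[dict], min_quality: int = 5) -> list[str]:
    """Return names of documents the AI is allowed to ground on."""
    return [d["name"] for d in docs if d["quality"] >= min_quality]

usable = grounding_set(documents)
```

Whether the score comes from a human slider, as suggested here, or from usage statistics, as Speaker 3 proposes next, the curation step is the same: decide what the model is allowed to see before it ever retrieves anything.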

[00:28:50] Speaker 3: Do you need user input to have a system like that, though? Because our current memories, the more you access them, the more they get corrupted; they basically become just the flavor you put into them. They're triggered by the emotion, and you lose the actual content of the memory, but that's the part that constructs your deep understandings. If you had a system that just measured how often it's using the memories, the trash it's consumed, and then cleaned out the stuff that's not useful, that's not dissimilar to reaching adulthood and realizing you don't have many childhood memories left. You've only kept the ones that have been the most constructive, or the ones you enjoy the most, and those are the things that now define you. The rest of the garbage got scraped away, and you didn't need it. The system can do it by itself. I don't have to consciously tell myself as I'm going to bed, can we please not save that, because in a year it's going to be gone.

[00:29:49] Speaker 4: That's sort of what I'm going for with the document I talked about, the one I upload to the agents. I'd like to think that over time it's getting better and tighter, because that's the other thing: I don't just add to it. I also sometimes trim out stuff that seems irrelevant, or that maybe I didn't really need to put in there. So every now and then I do deliberately do the 51st-date thing, where I have it start over, but with my new and improved document. It's hopefully getting a better me each time, because I've improved the document enough and I'm letting it start over with a better one. And if there's a way to do that with our memories, that would be great. I've said for a while that one job in law firms that's going to grow is law librarians; there will be a resurgence, but in the role of curating that knowledge. Because you don't want the AI looking at that old lease; you want the AI looking at the best possible versions of the content available. So I think there's going to be a big role for somebody in the firm to be the curator: here's the body of knowledge the AI can ground on; here's that old lease document we want to archive off so the AI is not looking at it; and here's that new lease document that's really awesome, so let's be true to that. I think there's going to be a role for that human curation, to help the AI be grounded on the best stuff and not the least consequential stuff.

[00:31:23] Speaker 1: And circling back to what we were talking about earlier, the idea that, oh, AI is going to replace this stuff. I think when AI first came out, a lot of people looked at it and said this is going to totally replace knowledge management. Knowledge management is just going to poof, go away. We're not going to have to worry about it, not something we're going to have to mess with; AI is just going to do it all. And to what Ben was saying, I think that's completely wrong. I think you need those signals of importance and value and everything else. And law librarians and knowledge management staff and professionals are going to have to be the ones, because they can see at a higher level what's valuable and what's not, and organize it so that it can actually be useful. I think that's a really good point.

[00:32:20] Speaker 2: And I think that's a good place to end, on a moment of, hey, maybe we'll have more jobs — here's a job we're going to create. Guys, thank you for talking to me about all this. I always really enjoy speaking with you all. And for the viewers at home, if they want to learn how they can actually make their memory better so they can compete with AI, we have a podcast on that over at Lawyerist, episode number 578, where I interviewed a grandmaster of memory named Nelson Dellis. So that's a fun one. Guys, again — Ben, Drew, Sam — thanks for being with me. This is always very fun.

AI Insights

Summary
Panel discusses AI in legal practice: marketing often frames AI as replacing high-level legal thinking (briefs, memos), which triggers lawyer resistance and devalues professional identity. Better framing is AI as an assistant that removes tedium (status updates, summarizing depositions) while lawyers retain judgment, strategy, and contextual reasoning. They note AI writing can be fluent but shallow, lacking true understanding and long-term context; humans provide depth, coherence, and authentic communication (including humor/memes). Conversation shifts to “memory” in AI: typically database/RAG plus context windows, with recency/primacy effects and limited capacity. Effective use involves supplying structured background documents, maintaining persistent threads when useful, and periodically restarting with improved instructions. For organizations like law firms, AI could augment institutional memory, but requires curation: distinguishing valuable/current documents from outdated “trash.” Knowledge management and law librarianship may become more important to curate datasets, create signals of quality, and maintain guardrails.
Title
AI for Lawyers: Assistive Tools, Shallow Fluency, and Memory
Keywords
legal AI, law firms, AI marketing, automation, cognitive work, writing quality, context, RAG, vector databases, AI memory, context window, knowledge management, law librarians, document curation, agent personas
Key Takeaways
  • AI should help lawyers operate at the top of their license by offloading tedious tasks, not replace legal judgment and strategy.
  • AI-generated writing can be rhetorically polished yet shallow; lawyers must add context, depth, and authentic voice.
  • Misalignment is partly a packaging/marketing issue and partly users’ misunderstanding of AI capabilities and limits.
  • AI “memory” is usually retrieval from stored data plus limited context windows; it’s not human-like understanding.
  • Practical workflows: provide durable background docs, update them as preferences evolve, and decide when to continue a thread vs. restart.
  • Institutional AI memory in law firms will require strong knowledge management: curation, de-duplication, versioning, and quality signals.
  • KM professionals and law librarians may become more valuable as curators of the firm’s AI-grounding corpus.
Sentiment
Neutral: Balanced tone: cautious optimism about AI’s ability to reduce tedium and augment memory, paired with skepticism about AI replacing deep legal judgment and concerns about shallow, generic writing and context limitations.