AI Literacy for Lawyers: Risks, Workflow, and Privilege (Full Transcript)

A Lawyerist discussion on agentic AI workflows, why lawyers must learn AI basics, and how client AI use can threaten privilege and create discovery risk.

[00:00:00] Speaker 1: Hi, I'm Zach.

[00:00:03] Speaker 2: And I'm Stephanie, and this is episode 612 of the Lawyerist Podcast, part of the Legal Talk Network. Today, I'm talking with Kat Casey about what every lawyer needs to understand about AI because even if you're not using it, your client might be.

[00:00:18] Speaker 1: Ooh, I like that. I like that. You just have to at this point. You have to understand it. You don't necessarily have to use it, I would argue that you should, but.

[00:00:29] Speaker 2: Yeah, me too. I mean, we're going to get into it in the episode, but some new things came out about, you know, how you're advising your clients on using it. You should assume, just like we say you should assume your team is using it, and that's why you should have some policies and procedures in place, that your client might use it too. And what happens if they put, you know, that memo you just wrote them or the advice you just gave them into the tool? You should probably understand that.

[00:00:57] Speaker 1: That's a good point. Well, we are obviously using AI technology, ChatGPT, Copilot, but especially right now, at the very least, Claude here at Lawyerist. And one thing I've noticed, and I know you have too, is that the use of Claude and these agentic features, you know, the skills and things like that, has really affected how I approach work. And I know that you and I were talking about that. What's it done for the way you do work in the day?

[00:01:31] Speaker 2: Yeah. I mean, at a basic level, it's about the schedule. The best example I have is last Friday. I knew I wanted to use Claude to work on a presentation I was preparing, and it takes a minute to do slides. I work from home, but I'm very intentional about it: okay, I go in my office, it's work time. So normally I go get my coffee and do my morning routine, and then I come into my office, sit down at my desk, and I'm like, okay, now I'm ready to work. Let's get started. I noticed Friday I did something different, which was: okay, I know it's going to take Claude a little bit to work on this presentation. So I'm going to go into my office first, get Claude started, give it the instructions I know it needs, and let it get going on its work. And I'll go get my coffee while it's working. And I don't know, that was just really different and new for me, to be like, oh, this is what they mean by the future will be about managing our agents, getting these things working for us while we step away.

[00:02:34] Speaker 1: I like what you're saying there. When you talked about this to me earlier, I was like, oh, I feel that completely. And I like what you're saying about managing the agents. I think of myself a lot of times during the day as orchestrating my different agents that are doing different things. And for people who can't quite figure out what we're talking about with agents, because it's not a nailed-down term: what I'm talking about are AI tools that I'm able to give deep instructions and context to, and they can take some sort of action on my behalf. They can usually make something. And I think yours was making a presentation off of information and context that you had fed in. And I manage these different little agents that have different little tasks and different specialties and get them going. And I find myself kind of anxious when I don't have one running and doing something, because I'm like, I'm losing that time.
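[Editor's note: for readers who want a concrete picture of the "agent" pattern Zach describes, an AI tool given instructions and context that then takes actions on your behalf, it can be sketched as a simple loop. This is an illustrative sketch, not anything from the episode; the model call is a stub rather than a real LLM, and all the names (fake_model, make_outline, run_agent) are invented for the example.]

```python
# Minimal sketch of an agent loop: give the "model" instructions and
# context, let it choose the next action, run that action as a tool,
# and repeat until it says it's done.

def fake_model(instructions, context, history):
    """Stand-in for an LLM call: picks the next action from context."""
    if "draft" not in history:
        return ("make_outline", context["topic"])
    return ("done", None)

def make_outline(topic):
    """A 'tool' the agent can invoke on the user's behalf."""
    return f"Outline for presentation on {topic}"

TOOLS = {"make_outline": make_outline}

def run_agent(instructions, context):
    history = {}
    while True:
        action, arg = fake_model(instructions, context, history)
        if action == "done":
            return history
        # Take the action and record the result for the next iteration.
        history["draft"] = TOOLS[action](arg)

result = run_agent("Prepare my slides", {"topic": "AI literacy"})
print(result["draft"])
```

In a real workflow the stub would be an API call to a model, and the tool list would include things like file edits or web searches; the loop structure is the part the hosts are gesturing at.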

[00:03:45] Speaker 2: Yeah. We talk so much about the way we work, down to just the schedule we keep, and that shifts. And you know, in my house, I mean, you guys know what I do, but sometimes I talk about my husband, and he is new to all of this. He never used any AI tools until about 10 days ago, when I shared an article with him. He's starting a new business, and I said, listen, you need to read this, and I want to have a discussion about it. And he did. And since then he's purchased a subscription to Claude and started using it. And every day he's just like, Stephanie, this is blowing my mind because of what it can do. And it was almost like I had forgotten what a beginner's mind is like. I had to go back and kind of help him get it set up, because I showed him some of the things I was creating and he was like, that's amazing. And so it's interesting, because this morning I realized he sort of leapfrogged and is doing some of the work that I'm doing, but he didn't know some of the basics of how we all used to work with Claude, like, a few weeks ago. So I had to go back and teach him, like, oh no, this is just how you have a normal chat experience. He skipped that and went right into the projects and building stuff. Oh, that's funny.

[00:05:04] Speaker 1: And I think that's the thing: kind of imagining what you're able to do with this. I spend more time structuring the skills, structuring the things that Claude is able to do, as opposed to actually doing the things. And it really lets my ADHD or ADD or whatever brain go, because I can switch from task to task to task and just keep it moving, keep it moving, keep it moving.

[00:05:37] Speaker 2: I have a follow-up, but I'll take that offline with you because, well, or no, you know what? I'll just discuss it here, because I think a lot of lawyers have this, and I appreciate you bringing up, you know, your brain. I think the threat there, the risk, is that you have all these things moving, but then you've got to come back and close the loop. How do you get them finished? Maybe in a prior world we used to talk about procrastinating, and it was hard to get started on a task because that blank-page problem is so big. Now these tools kind of eliminate that blank-page problem. And in fact, Kat's going to talk about that here in a second, how lawyers can start to think about that and where they can really use their skills differently. But then we run the risk of starting so many things. How do we get them over the finish line?

[00:06:33] Speaker 1: I'm actually really glad that you asked that on air, because one of the things I've really noticed is that I'm only able to do that if I'm sticking to my project management system. I have to be disciplined. Zach, I have to have my discipline hat on, and remember I'm not the one doing these things. Again, I'm orchestrating this, and sticking to the project management system means I have to have scoped the project. I have multiple templates now for what's in scope, what's out of scope, and what does done look like. And so all these things exist and they're being done. But yes, if I just pick up a little project and I'm like, ooh, I want to play with this, you'll see me three weeks later and it won't be done. And I think that's a really good point. I'm glad that you brought that up, because that is the other side of this: personally, with my little ADHD brain, I have to have that structure and I have to stick with it.

[00:07:36] Speaker 2: Yeah. So lots of good opportunities out there, but again, it's about understanding these tools and how you leverage them in your work and your workday and how you structure your workday. It's almost like we're learning to work new again, right? Like we're putting new guardrails around our work. I'm setting up my day differently. It's kind of fun and interesting.

[00:08:00] Speaker 1: It is. It is fun and interesting. Well, speaking of fun and interesting, now let's get into your conversation with Kat.

[00:08:20] Speaker 3: Thank you so much for having me. My name is Kat Casey. A lot of people know me as the Techno Kat, and I am, well, I'm basically like Sisyphus in sequins. I've been shouting into the void about AI and legal for 20 years, and now the world's caught up. I am the chief legal AI futurist for the first AI-native, for-lawyers-by-lawyers conference series, called the Masters AI. I'm the author of, I don't know if you can see it behind me, but, AI and Legal Tech. And I've been a leader for technology at big firms like Gibson, Dunn & Crutcher, built foundational tech at KPMG and PwC, and been in the C-suite at a lot of AI companies. So been there, done that, the world caught up, and I want to help the rest of you catch up with us. Cause it's a really crazy, interesting, scary, awesome time.

[00:09:01] Speaker 2: Yeah. No, I love that. And I love that framing you just gave of maybe cutting through the noise a little bit and really catching up and figuring out what it is we need to know today. Cause it does feel like it just shifts. I used to say by the month; now it feels like it's by the hour. So maybe with that, which is a very broad question: what is happening today that lawyers need to be paying attention to, that maybe has shifted even in the last, you know, 30, 45 days?

[00:09:33] Speaker 3: Yeah. I mean, I think if you're a solo practitioner or a lit boutique, maybe you thought, hey, this AI thing, I'm going to let big law handle it. It's more of an enterprise play. And what's really shifted is a couple of things. One, once Cartman from South Park was talking about ChatGPT, your clients heard about it, your colleagues, your peers. Suddenly you couldn't avoid it. And that was pretty early on, two, three years ago. But then we had Anthropic on February 3rd come in and make a play in the space. We've got billion-dollar investments happening all over, and we have everything from the Georgia Supreme Court getting hallucinated cases to, you know, false information. There's a lot of stuff happening. And so I think the shift in the last 45 days or so is it went from, hey, this is a tech-lawyer play, this is an enterprise, big-law play, I don't really need to pay attention yet, to, oh my goodness, if I'm not advising my clients, they might be feeding our legal notes into ChatGPT, vitiating privilege, which happened two or three weeks ago. So in order to do the job of lawyering, you need to be at least AI literate. It doesn't mean you're using, you know, Claude to build the AI-empowered law firm of the future, but you need to know it to be able to issue spot, identify risk, and guide your clients. The world, I think, is so AI-enmeshed now that to effectively advise, you need to at least be able to issue spot, or, you know, use that linguistic power that lawyers have to parse problems, to identify them and to say, hey, I might need help. And so that's the big pivot for me. It's not that, oh, we all have to use AI; some will, some won't. But we all gotta be able to talk about it, because the way the world works has shifted so much.

[00:11:12] Speaker 2: Yeah, I think it's a good point. Even if you've been nervous about using it for your law firm because of security concerns, or people don't trust it yet, I get the reasons why lawyers are pushing back, and I'll gently remind them we have a lot of episodes on why they need to maybe rethink that. But the shift, too, is you better believe your clients are probably using it. So what does that mean we need to change in our conversations? Even if we're not using it, if our clients are, we need to be aware, and we probably need to be advising them on the front end about what the implications look like.

[00:11:49] Speaker 3: A hundred percent. I talk to people who are never going to be all in on AI. And you know what I tell them? Well, you need to be able to still talk about it, to explain why you're not all in on AI. So whether you consider yourself a never or an all-in or anywhere on the spectrum, the language of lawyering now has to include the ability to translate those tech issues into legal risk and opportunity for your clients. And so that's a big pivot. I think people have been looking at, say, the internet or even the printing press and how long it took to get mass adoption. Well, it took 20 years to get a hundred million users of the internet. And there were still articles coming out saying the internet is dead; I think Newsweek did it in '94, '95. That adoption curve's broken: 62 days for ChatGPT to get a hundred million users. Cartman was talking about it in month three. And then my mom's making god-awful knitting abominations, right? And my nephew's using it to gamify and win Minecraft, whatever that means. And my clients are using it and my colleagues are using it. So the adoption curve, it's really double exponential; it's moving so much faster. So even if you're towards the tail end of your career and you're like, hey, it's not going to trickle down to the smaller firms, that's what's shifted, because it's not trickling down, it's trickling out across the horizontal of how we live, work, and play. And because of that, even if you're not in an industry that's tech, even if you're not advising a client that you think will ever use AI, they might still have questions where you've got to be able to talk effectively about it. And so whether you're saying don't use it or do, that need for a common language, I think, is the biggest shift. The urgency, I always knew it was urgent, but I've been shouting that for 20 years. I felt a little bit like, is it Chicken Little, maybe it was Kitty Little, right? But the reality has caught up with that urgency.

[00:13:33] Speaker 2: And so at a basic level, for lawyers who are listening to this and saying, okay, great, I believe you, Kat, now I've got to figure this out: what is it that they need to figure out? What do they need to be able to talk about?

[00:13:46] Speaker 3: So the way I would look at it is there's a couple of tranches to it. You need to know the key terms of art and what they mean, right? So an LLM versus AI, AI versus generative AI. There's these big buckets, and there's different risks posed by them. Generative AI makes new stuff; well, that's a different risk from an AI that just finds patterns, right? And so you need to know that difference. The other thing, though, is you need to know what tools can and can't do, because some of the risks we're hearing about, hallucinations and bias and a whole host of other things in the generative AI space, are because of features of generative AI. And so if you're trying to find that one determinative answer, maybe a gen AI tool doesn't work. So you need to know the key terms of art and what the types of tools can and can't do. And then, frankly, hey, when do I raise my hand and ask for help? It's like going back to law school and IRAC, right? Be able to issue spot and be able to identify what that risk or rule is. It doesn't mean you have to learn to code. It doesn't mean you have to be vibing out and creating an app; I don't even do that. But you need to know when to say, hey, I need help, or, hey, my client, I know you think this is awesome and will solve all your problems, but there are some risks, so talk about it before you do anything. So it's that basic fluency. I mean, that's why I wrote the book. It's basically a primer for the rest of us who maybe went and studied existential philosophy instead of learning to code, who should be baristas, not AI evangelists. Those of us who maybe didn't lean into math and science, who aren't comfortable there. Maybe you decided at 12 on a swing set, because if you make a mistake as a doctor, someone dies, but if you make a mistake as a lawyer, you can appeal. Maybe I'm projecting, but that was my process at 12, right?
I pivoted away from the hard math and science to words and phrases and the power of language. And I think for a lot of lawyers it feels like, well, I missed that boat. I made that decision in high school, in middle school, whatever. The nice thing, along with the AI literacy, is that lawyers and legal people have a skill that makes you a superhero in the age of AI, because you have the power and precision of language. So if you have the right words and the right way to communicate in a natural-language way, these new tools really level up with you. So if you can combine basic literacy with your legal skills, the syntax, the semantics, that issue parsing, you're not just going to survive the age of AI. You can actually leapfrog people. So it's not just a, hey, existential dread, must fix this. It's, hey, if I combine basic literacy with these skills I've honed for 20, 30 years, I might be ahead of colleagues, ahead of peers who don't have that language prowess. So it's an urgency, an opportunity, and a really unique moment when the wordsmiths might rule the world in a way we didn't expect.

[00:16:39] Speaker 2: I like that. And I like that point about understanding the tools. I feel like we've been preaching that around here too. And when you were talking, it occurred to me: lawyers have been using Westlaw and Lexis online for years to return results of real cases. And with these tools, if you don't understand what a generative AI tool is actually doing, you may feel like, well, it's just like Westlaw, I ask it a question and it gives me an answer. And I think that false premise is probably what's getting a lot of these lawyers in trouble, because then they think, oh, it just gave me a case, so that must be a real case, because that's what it does.

[00:17:19] Speaker 3: I'd even push back a little on that. A lot of people making the headlines, wait, let me step back. Yes, and, right? I'll go back to my improv days. You're absolutely right. But also, I think a lot of the issues we're seeing come from bad lawyering. Would you trust a first-year who gave you an awesome citation you've never seen in your 20 years, that's so perfect? No, you would go and double-check, make sure they didn't type the Boolean search into Westlaw wrong, right? And so what we're seeing, even in the big Avianca case, the first one: the guy watched a YouTube video because his kid said gen AI is cool, and then when he got caught with his pants down, instead of saying, oh, let me look at the citations, and actually doing the work of lawyering, he asked the generative AI, did I get it wrong? That was bad lawyering. So a lot of times what you're seeing is people not doing the basic due diligence, the ethical obligation to supervise. A little bit of it is tech competence and knowing where the failings are with tech. But the other part is: just because the robot said it doesn't mean you don't have to do the lawyer stuff of trust but verify, authenticate, look for issues. I think some people are getting confused, and it's very human to get confused. We're trained to trust tech. Google, give us an answer; Google's right. After 25 years of being trained by Google, it's a little bit different now, and you can't give away your ethical duty of judgment, right? That thing that you've been honing for 30 years. It's more important. And for anyone afraid of job displacement, I would point that out with a big gold star: if you become AI literate and you can use these tools, you are more important to the process, not less. If you don't use the tools, you might fall behind, and there will be displacement. But if you want to safeguard your career, the best thing you can do is get this basic fluency and know the risks and opportunities.

[00:18:59] Speaker 2: Yeah. A hundred, a hundred percent. I agree on the bad lawyering. I always say like, I never even relied on a Westlaw headnote. I would still read the case and make sure that the opinion said what it said. So please don't forget how to be a lawyer, a good lawyer.

[00:19:14] Speaker 3: Exactly. Exactly. The language precision, and our brains trained to identify risk and not trust output: if we keep our lawyer brains on, that puts us in a better position to thrive in an age of AI, because that's how you need to work with these tools. It's not a "give me the answer, robot overlord," I wish. It's, hey, help me think through this process. Help me find my blind spots. What haven't I thought of? It's having a really good sparring partner, or a good, very eager-to-please, over-caffeinated junior that you're going to have a dialogue with, a discourse, not someone that's going to say, hey, here's the answer, I don't need your lawyer brain, good luck.

[00:19:53] Speaker 2: You know? Yes. Yes. And I just read this morning that the people who are power users of these tools get that. You go in and, I mean, you could fight with it, but you leverage it, you use it. It's an ongoing conversation. It's not a once-and-done. And I think the people who are starting to get that are using it much more effectively.

[00:20:15] Speaker 3: Well, I would say the other thing I think lawyers and legal pros struggle with is, you don't get the Esq. or the types of roles we're in by liking failure very much. We tend to be academic people who have thrived and done well and succeeded. And who likes to do stuff that doesn't feel like thriving and doing well and succeeding? And fortunately or unfortunately, iteration, that back-and-forth banter, not just having the AI give you an answer, is a feature, not a bug, with these new types of tools. But for legal professionals it can feel like the AI is not working. And so you do have to kind of recalibrate your brain on what success looks like and how you think and work with the tool. Or it can feel like, oh my gosh, I asked the AI to do five things, it took me longer, it's awful, the AI doesn't work. You need to realize it's about getting you to think differently, and about the AI training you to ask it questions in a way that gets answers more quickly. It's not a one-and-done. I was not great three, four years ago. I've gotten much better just through obsessively using it, not even on high-risk stuff. Like, I've got a good buddy who kept burning his brisket. So he took a picture of the brisket and asked ChatGPT, how do I quit burning the brisket? Or I'm writing a snarky email that needs to be like 17% snark, not 87.9%, so help me dial it back. There's a lot of ways you can gain that comfort level. But the first step is realizing it's iterative. You're not failing if it doesn't work the first time, even if it feels different from what you're used to. You can't draft the perfect Boolean search to have the AI make you a masterpiece of a brief.

[00:21:46] Speaker 2: Don't expect that. Yeah. I think that is really great advice that, I mean, can't be overstated. I just had someone on a call last week, and he's like, Stephanie, I asked it to write a brief and it just did a terrible job. And I was like, back up. Did you just say, write me a brief? Because it can't do that. That's not what it does. Maybe you could say, help me write the statement of facts, or help me write this argument. You've got to break it down, just like you do with the steps.

[00:22:12] Speaker 3: Or, hey, help me think about this. You know, what issues did I miss? I love it for the blank page. So if I'm just starting something: all right, how would you start? What's your thought on this? Right? So I'm not staring for 30 minutes trying to get the ADD hamster wheels to align appropriately. Maybe that's just me projecting again. But also: hey, what are my blind spots? How would you think about this? How can I change the tone? And what I love doing is, all right, now read this as a judge in the Southern District of New York who likes Sherlock Holmes a lot. Maybe I'm just thinking of Andy Peck, but you know what I mean. There are ways you can use it to help you think, to help you pressure test, as opposed to make the thing for me. Because it's designed off of billions, trillions of data points, and aimed at hundreds of millions, billions of users at this point. So it's going to aim for the midline. It's not going to hyper-customize for you if you don't go through that whole process. So what you'll get is, eh, not something super useful, and not better than what you would have made.

[00:23:05] Speaker 2: Yeah. Great advice. You hinted earlier that some stuff just came out about clients using these tools, and especially clients feeding lawyers' advice into the tools. And I feel like this is still pretty new and a lot of people aren't aware. So I'd love for you to talk to us a little bit about that and what we need to be understanding and thinking about differently now.

[00:23:25] Speaker 3: Yeah. Yeah. So I'm not sure we can add the case citation in after, but basically there was a case where attorney-client privileged information, you know, feedback that a client got about a certain matter, was fed into an LLM, and it was discoverable and it vitiated priv. Now, part of the reason had to do with it being an open model. So there's the type that's free, which means you're the product, right? It's training on your data, which means there's no expectation of privacy, no expectation of confidential information. They used that, which means you might as well have just posted it on my blog: great attorney advice, what do you think, universe? It's got about that much protection. So it vitiated priv. There are some judges who are even thinking that if you're using a paid model, because it still could potentially be used for some level of training, it may waive confidentiality or vitiate priv. And this kind of goes back, I mean, this is specific to legal output, but there were similar issues with people doing code maybe two, three years ago. I think Samsung had that, where, I believe, patent-protected code was added in and they were building out more code on it. And then the next person who asked a similar question got that patent-protected code, because it had trained the model. So that's why literacy is important, because how your clients use this can have a material impact: open versus closed, enterprise-grade, whether it's safe or not safe, what information you can even put in there. And there's a big gap; companies aren't training their people quickly enough. So it really falls on the lawyers to offer that advice. And I think it'll be refined some more, but we're going to see more and more clients who are like, well, I'm using this AI for everything I do. I take a picture of my fridge and ask what I should bake with the ingredients in my refrigerator, or I'm going on a road trip and I ask where the best places to stop are. Why wouldn't I if I've got a big case? There was also, and I don't remember who it was, but there was a $250 million case where the client was given advice by their attorneys that it's not going to win on the merits. They pursued ChatGPT's advice instead, lost, and got flamed in the media. So even if you, the practitioner, big or small, doesn't matter, aren't using these tools, your clients have heard about it. Their kids are using it. They're using it. It's all over the media. If you're not telling them, hey, this is a risk if you do something with what I'm telling you, they may inadvertently expose themselves in a way they didn't even anticipate. And it's very, very hard, to impossible, to claw back, especially if they did it in the free version.

[00:25:59] Speaker 2: Yeah. I mean, it's probably been a while, but there was a time where we would tell clients, hey, if I give you advice, don't share this. You wouldn't go out and tell your neighbor that. And maybe now we need to remember that great advice and learning and remember to educate our clients on this is what I need you to be thinking about when I give you information because there's a risk, to your point, if you feed it into these tools and especially the free tools.

[00:26:27] Speaker 3: Well, and it's all discoverable. Before I was TechnoCat, I was eDiscoveryCat, right? I did that for 15 years. So most of my early career was all around data. And I've got much younger siblings, they're seven, five, and 14 years younger. And I would tell them: don't tweet it, type it, Slack it, post it, or Snap it unless you want mom and dad to know about it, or potentially an employer, or your future wife or husband; doesn't matter. It's sort of the same thing. I think we sometimes think that because these tools feel like we're just working, they aren't discoverable. They are. Will everyone use this data in every case? Maybe not. But, you know, much like a Google search, it can be dispositive. And so you need to think about that from a discoverability standpoint, from a priv standpoint, from a confidentiality standpoint. This is all bread-and-butter stuff that lawyers know. You only need to know that the risk could be triggered, and it builds off of your decades of experience advising on risk: hey, don't do this. The way it's being exposed might be new, but what the exposure is and what risk it creates isn't. You're just adding another layer of, please don't do this.

[00:27:31] Speaker 2: Yeah. I think that's great advice. And same goes with our rules of professional responsibility. Like you said, the rule of being a competent lawyer still exists. That rule didn't change.

[00:27:41] Speaker 3: And supervising. You know, just because it's ones and zeros instead of highly caffeinated and people-pleasing doesn't mean you don't have to supervise it, especially with some of the co-work stuff coming out, where these agents might be operating somewhat autonomously. You need to put yourself back in the loop. And, you know, this all feels very scary and different, and in some ways it is, because we haven't had to think about this before. And a lot of us could really avoid the tech question if we didn't feel like talking about it. I think the shift is, we need to apply our legal brains to the tech question. And the first step, I mean, again, that goes back to why I wrote a book and why I relaunched the Masters AI. Lawyers need to have this fluency. And for a lot of us, we didn't put ourselves in that room. Maybe not me, I've been screaming about it for 20 years, but a lot of other lawyers. You know, there's only 10,000 of us who would consider ourselves legal tech. There's 1.3 million legal humans out there. So for the 1.29 million, I think now you've got to put yourself in that room and just start getting that basic level of fluency. Otherwise you're kind of like, and I think back to my early career, that partner whose office I walked into, where the big CRT monitor and the CPU were bookends. He wasn't using them, and he told me confidently: I have nothing discoverable, I don't send email. And his secretary, as I walked out, said, everything he dictates to me, I send in an email; we don't do interoffice mail anymore. So even if you think, I'm not doing it, you are still possibly looped in. And so I think it's a similar inflection point. The good thing is, 20 years ago there was no e-discovery. As an industry, we pivoted to all these new data sources and learned to translate a little bit between tech and not-tech. So it's not our first rodeo. We can do this.
We maybe are a little ahead of the curve from some other people; we just need to start doing it.

[00:29:19] Speaker 2: Yeah, I love that. And I love that you wrote a book to try to help folks figure this out and make it easy. Because I'll be honest, I have considered it, like our team's talked about, should we be writing an AI book? And it just seems so darn intimidating, because it changes so much. So I appreciate that you did it.

[00:29:36] Speaker 3: You know, it was more daunting than I thought. I write a lot, like 70 or 80 articles a year, and I thought, a book? It'll take me four months. Well, a year and change later. But what I tried to do, using pop culture references, in a very human, non-technical way, is explain the 70 years of AI history that got us to here, where and how you can use it, and what it's analogous to, like a baby associate trying to people-please you. And, you know, a prompt primer and glossary. But my goal was to create a foundation, because it feels like every conference talks for three minutes about AI on every single panel, so you get just enough to know, I should know more about it, and then it stops. And every book is either about should we or shouldn't we, or so technical that even I, who've been talking about AI for 20 years, am uncomfortable. So I wanted to bridge that gap. It's sort of an AI book for the rest of us.

[00:30:27] Speaker 2: I love it. And where can people find it if they're interested?

[00:30:29] Speaker 3: On Amazon. Just search AI and legal tech and type in Kat Casey, and I'm sure we can add a link in the comments. And, you know, I travel around and speak at corporations. I'm talking with Dolby later this week, to 100-plus of their legal ops people. So I'll go and talk with people about it too. I don't want legal to go it alone. It's a scary transition, especially if you built your career on words and phrases, not numbers and statistics. And so if I can help, I want to. That's sort of my why. I love it.

[00:30:58] Speaker 2: Well, we'll make sure to put a link to the book in the comments, in the show notes. And Kat, thank you so much for being with us today and making tech sound fun. Because this book is definitely like, I love that you said, oh, let's do pop culture references and let's make it easy and approachable.

[00:31:15] Speaker 3: Well, you know, we're all in this together. It doesn't have to be like a root canal. Nice. Thank you so much for having me.

[00:31:20] Speaker 1: Thank you.

AI Insights
Summary
In this Lawyerist Podcast segment, hosts Zach and Stephanie discuss how AI—especially agentic tools like Claude—has changed their daily workflows, from scheduling work around AI “agents” to the need for stronger project management to close loops and avoid endless starts. They then interview legal AI futurist Kat Casey (Techno Kat), who argues that every lawyer must become AI-literate because clients and colleagues are already using these tools. Casey emphasizes understanding core AI terms (AI vs. generative AI vs. LLMs), tool limitations (hallucinations, bias), and the ongoing, iterative nature of prompting. She stresses that many AI-related court mishaps reflect bad lawyering (failure to verify) rather than purely bad tech. A major recent risk shift: clients may paste privileged legal advice into free/open AI tools, potentially waiving confidentiality/attorney-client privilege and creating discoverable records. Lawyers should proactively advise clients on safe/unsafe AI use, supervise AI outputs like junior work product, and apply existing competence and supervision duties to new technology.
Title
Why Lawyers Need AI Literacy—Even If They Don’t Use AI
Keywords
Lawyerist Podcast, AI literacy, generative AI, LLMs, Claude, agentic workflows, project management, hallucinations, bias, attorney-client privilege, confidentiality, discoverability, client use of AI, competence duty, supervision duty, prompting, legal risk
Key Takeaways
  • AI is now unavoidable in legal practice because clients and colleagues are using it, even if a lawyer is not.
  • Lawyers should learn key AI concepts (AI vs. genAI vs. LLMs) and understand what tools can and can’t do.
  • Many AI fiascos in court stem from bad lawyering—failure to verify citations and supervise work—rather than the mere existence of AI.
  • Effective AI use is iterative and conversational; “write me a brief” prompts are unlikely to produce good results without decomposition and context.
  • Agentic tools can reshape work habits, but they increase the need for disciplined project management to finish what’s started.
  • Client misuse—especially pasting legal advice into free/open models—can risk waiving privilege/confidentiality and create discoverable evidence.
  • Lawyers should proactively counsel clients on safe AI practices and treat AI output like junior work product: trust but verify.
  • Legal professionals’ strength in precise language can be a competitive advantage when working with AI tools.
Sentiments
Positive: The tone is optimistic and pragmatic: speakers express excitement about productivity and new ways of working while candidly acknowledging risks (hallucinations, privilege waiver, discoverability) and the need for disciplined processes.