[00:00:00] Speaker 1: Welcome, everybody.
[00:00:02] Speaker 2: We've got a great topic today. Not really, but I'm glad to see you all here. For a lot of state bar associations, the cutoff to receive CLE credit for the year is June 30th. I know that's the case in Utah, where I'm at. And regardless of when your reporting deadline is, everyone knows that getting the ethics credit and the professionalism credit can be a little trickier, so a lot of providers schedule these ethics courses right at the end of the reporting period in an attempt to gain a larger audience. I'm going to run through a handful of housekeeping items. This is our agenda for the day, and we have what I think is a fairly comprehensive discussion planned on these topics. Housekeeping: we are pleased to report that this CLE has been approved in all 50 states. You will have some prompts during this meeting with code words that you need to write down. A form will be emailed out to you; you'll fill it out, complete it, and submit it. If you have questions, we will put an email address in the chat so you can reach out, but hopefully it will be a pretty seamless process for you to get that credit. I am General Counsel of Filevine, and I've been here about four years. With me today on the panel is John Risner, our Head of AI Legal Drafting. This is John's and my second gig together. The first time I saw John was, I think, during COVID. I was interviewing him over Zoom; he was in New York City and I was here in Utah. He looked a lot younger then, just a few years ago, and he had this big mustache, and I knew in that moment I was going to be friends with him. John joined us at Filevine a couple of years ago and really leads all of our AI initiatives here. I also have Kayla Grayson on. She's the Chief Operating Officer of Viles and Beckman, a personal injury law firm in South Florida, and her firm has been a Filevine customer for about six years, so she's probably much better at using our platform than I am. So we're going to dive in and start getting into our topic, which is ethics. I have drawn the short straw, so they're making me do the boring stuff. I assume that most of our audience consists of lawyers and other legal professionals. We're governed by the ethical rules of our respective jurisdictions, which are based on the model rules, and the very first rule is the duty of competence. Part of the duty of competence is staying abreast of new technology. Most lawyers who've been in their career for a while switched over to email at some point; older folks might have switched from typewriters to word processing. Artificial intelligence is another technological shift, and some would argue, myself included, that we as lawyers have a duty to be up to speed and to use new technologies when it makes sense. Again, that boils back down to the ethics rules. We're going to talk about the duty of confidentiality as well. For legal professionals, the attorney-client privilege is sacred. It's our most sacred obligation to keep our clients' confidences.
So if you are taking your clients' information, which they have entrusted to you, you need to be very, very careful with it. They're paying you, you've signed an agreement, and they believe you're bound by ethical rules. The younger generation, and I have three teenagers, seems OK putting all their personal information out into the world. Lawyers really don't have that luxury when it comes to client information and client data. So moving on a little bit, I'm going to assume for purposes of this discussion that many of the audience members have been playing with AI or maybe use it every day. These are a few of the ethical issues. First, bias. AI is based on an algorithm that analyzes lots of data, but it can only analyze the data it has access to. Bias in AI is a heavily scrutinized risk, particularly in the legal world, where fairness and impartiality are foundational to what we do. AI systems train on historical data sets: legal outcomes, case law, sentencing data, billing records, things that can reflect entrenched social or institutional biases. So we need to be careful with that. You need to police it, you need to use the right data, and I'll say this throughout the seminar today: you also need to do fairness testing and keep what they call a human in the loop, because we really can't turn this all over to a bot quite yet. Lawyers all believe our jobs will never be replaced by AI because we're so good and smart at what we do, but many parts of our jobs may become automated at some point. Accuracy is another big one. Most of the vendors you look at will tout some accuracy percentage, and it's likely going to be high, but verification is something lawyers need to do themselves, because you cannot rely on the output alone. We'll hear a couple of stories, and I'm sure many of you have seen them in the news lately. Accuracy is vital in legal work. We know that factual or interpretive errors can lead to people going to jail; they can lead to malpractice, to breaches of duties, to millions and billions of dollars in exposure. So accuracy is very, very important. Next, responsibility and accountability. Many of us who worked in a law firm know the model: a paralegal or a younger associate takes the first cut of a job and passes it up to somebody more senior, maybe a partner. Ultimately, the partner who sends out the work product is on the hook; his or her name is on it, as they say. I think of AI as almost like that paralegal or first-year associate, somebody who can get you a draft of something and get you started. Frankly, getting started is often the hard part of legal work; once I get started, I can start rolling. But AI should not be used to draft an important document that gets sent off unreviewed, and the review probably has to be a little more detailed. It's like training somebody who is learning how to practice law: they've been through law school, but that's about it, so you're going to look at their work more closely to make sure you catch mistakes.
Another ethical rule, 5.3, extends to tech tools and vendors. Only licensed attorneys can practice law, so you really can't turn your legal work over to a program; it's meant to enhance your work. Finally, privacy, security, and compliance should speak for themselves, but with data privacy you have multi-jurisdictional laws; states and countries differ a little. So you need to be up to speed on the GDPR, the CCPA, HIPAA, and, like I mentioned before, the privilege. Privacy, security, and compliance are things you're going to want to keep your eye on. OK, a little on the law here. The laws on AI in legal practice are changing rapidly; we don't necessarily have defined laws yet. The executive orders from the last two administrations have been a little at odds with one another, and states are trying to come up with laws too. The laws often don't keep up with technological changes, but they certainly try. So keeping an eye on the changing laws and regulations is something we recommend as well. I'm going to talk about a few cases of lawyers using AI and probably not using it effectively. A prominent PI firm, one of the biggest PI firms, got a really small fine, and that's probably not that big of a deal because these law firms have lots of money. But the reputational hit is pretty big, because these things make news, we lawyers like to read that stuff, and our clients are reading that stuff too. That's why there's a quote on this slide about a cautionary tale for firms: these reputational hits can be big, and those really are worth millions and millions of dollars, potentially. A few more of these as well. You're probably hearing about these cases daily, where lawyers use these tools to do their work and get caught. A few weeks ago, a California judge, Judge Wilner, became intrigued by a set of arguments he had read in a filing. He wanted to learn more, so he started looking at some of the authorities cited in the brief, and they weren't there; they didn't exist. So he follows up with the lawyers and says, hey, I can't find these authorities, and the lawyers come back and say, oh, we have a new brief, sorry about that. So he makes them give sworn testimony, and it turns out they were using Google Gemini. These are well-known, nationally recognized firms, and I won't name them here, that had these errors in their filings. The judge got pretty upset and fined the firm thirty-one thousand dollars, again, not much money for a giant law firm. One of the lawyers admitted he had used these AI tools to create the brief with all its case citations, and he shared it with his colleagues, and it just made its way through the firm, probably through that layer of different attorneys reviewing it. Each one assumed the person who put in the first citation knew what they were doing, so they thought, I don't need to go check Westlaw, or Shepardize, or KeyCite this; I'm just going to rely on it. And those citations made their way into later drafts of the document. So when the judge questioned these lawyers, they said, oh, we've got to fix it.
They sent a corrected version of the brief, thanking the judge for catching the problem and owning up to their mistake: we've addressed it, we've updated it. But the problem was that the corrected brief still had at least six other AI-generated errors. In their declaration, the lawyers confessed that nine of the 27 cases cited in their 10-page brief were incorrect, and two were completely fabricated, completely made-up cases. Again, this was their second try. So the judge said the attorneys had collectively acted in a manner that was tantamount to bad faith, chastised them over the filing's sketchy AI origins, and said what they had done was deeply troubling. I'm going to keep going; there are a few more of these. But you've got to check your work. You've got to check it as if you were doing it yourself. I've never used any of the AI tools to come up with cases. I've used them to write things, to improve writing, to answer quick questions. But you always have to double-check this work. Trust but verify. I'm going to turn it over to John Risner for a few minutes.
[00:13:31] Speaker 1: Awesome. Thanks, Alex. I'm going to talk about how these tools can be applied in legal practice, but before I get there, I want to give everyone a super quick intro to the type of tools we're mainly talking about here. For most folks in the legal world, the AI tools they're engaging with are largely going to be large language models. How these tools work is that they engage in next-token, or next-word, prediction: they put out whatever is statistically likely to be the next word or token in a sequence of words or tokens. As Alex noted, to do this they were trained on huge data sets: books, Wikipedia pages, articles, basically whatever the creators of these LLMs could get their hands on, to develop the statistical relationships in their models. A big note here is that these tools are usually not deterministic, meaning that if we put in the same request twice, say we ask it to draft a clause of an agreement, and then in a fresh run ask it again to draft the same clause, we'll likely get two slightly different answers. That's because the model doesn't decide that one fixed word must follow from a given line. Rather, it applies a distribution of probabilities across different words and tokens when, for example, drafting or reviewing. Because most legal work is text-based, whether you're a litigator or a transactional or regulatory practitioner, these tools, designed to engage with language in this predictive manner, can be super applicable to the types of activities we engage in. Drafting an argument, drafting a clause of an agreement, reviewing a document for its main points: these are all textual and word-based, so these tools can be particularly powerful in the legal environment. Now, because these tools are based on statistical relationships and predictive elements, and are nondeterministic, they produce potential ethical risks for the practitioners using them. So I'm going to go to the next slide here and talk about technological competence. With that in mind, that these tools are predictive in nature and nondeterministic, we as attorneys have an obligation to understand them and, as they become more ubiquitous throughout our practice areas, to understand the dangers and risks that flow from their being predictive and nondeterministic. In July 2024, the ABA came out with Opinion 512. Some of you may have heard about it or even read it. This was the ABA's first big opinion on AI, and it went through a number of the rules that Alex introduced and that we'll continue to talk about today. One of the ones I really want to dive in on is this rule of competence and of knowledge of these tools. One of my favorite footnotes of that opinion is on competence, where the ABA, quoting previous literature, noted that today no competent lawyer would rely solely upon a typewriter to draft a contract, brief, or memo.
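[Editor's note: to make the non-determinism point concrete, here is a minimal, illustrative sketch of how an LLM's decoder samples its next token from a probability distribution rather than picking one fixed word. The tokens, scores, and temperature value are invented for illustration; real models sample over vocabularies of tens of thousands of tokens.]

```python
import math
import random

def sample_next_token(scores: dict[str, float], temperature: float = 1.0) -> str:
    """Sample one token from a score distribution, roughly as an LLM decoder does."""
    # Softmax with temperature: higher temperature flattens the distribution,
    # making less-likely tokens more probable; near zero, it approaches argmax.
    scaled = {tok: s / temperature for tok, s in scores.items()}
    top = max(scaled.values())  # subtract the max for numerical stability
    exp = {tok: math.exp(s - top) for tok, s in scaled.items()}
    total = sum(exp.values())
    tokens = list(exp)
    weights = [exp[tok] / total for tok in tokens]
    return random.choices(tokens, weights=weights)[0]

# Toy scores for the word following "The party of the first":
scores = {"part": 4.1, "party": 3.6, "instance": 1.2, "banana": -2.0}
for _ in range(3):
    print(sample_next_token(scores, temperature=0.8))
# Separate runs can print different tokens: the same prompt, sampled twice,
# can yield two different "next words" -- the non-determinism described above.
```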
Typewriters are no longer part of the methods and procedures used by competent lawyers, and likewise, a lawyer in the 21st century who doesn't effectively use the Internet for legal research is likely to fall short of the minimum standards of professional competence and potentially be liable for malpractice. Where the word processor and the Internet were the technological tools for lawyers to become competent in, understand, and master in previous decades, with the rise of these large language models across all types of legal practice tools and programs, those same questions apply: the lawyer practicing today needs to understand and know how to use these tools to be competent. These are questions that should be at the top of our minds as we sit down and use technology to produce work product, whether that's a review, a memo, a contract, or advice given to our clients. Let's step to the next slides; I want to dive into the risks and limitations of using these tools with their probabilistic and nondeterministic nature in mind. But before I do that, I'm going to drop in the first CLE term for your form, and that's going to be the code word: legal. So keep that in mind. The first thing I want to talk about is the core limitations of generative AI in legal practice, and the first and biggest, which everyone's likely heard of in passing, is hallucinations. Again, these tools are built on the statistical relationships of words in massive data sets. These tools don't "think" as we do; how they process and produce language and knowledge is totally alien to how we do so. As a consequence of seeing relationships in these data sets, these tools can produce what are known as hallucinations: factually incorrect or fabricated information that doesn't actually exist in reality, but that the tool gives us confidently, as if it were correct. This is where we hear about made-up case law, or details in a report that never existed. Those are hallucinations: made-up cases that were never litigated, that never existed, yet a tool is giving them to a lawyer saying, hey, you should cite this in your brief or your filing. Another really major item to think about as a risk or limitation is outdated data, what you can think of as a knowledge cutoff. This one might be a little less familiar or less apparent to users of these technologies. When the developers of these models are working with the data sets they're training on, they're taking a piece of a giant corpus of data from a particular time period. They might take all the data they can find up to some certain date and start using that to train the model.
The consequence for you as a user, as a lawyer using these tools, is that the tool's training set cuts off, or stops, when those developers stopped pouring in material to develop and train the model. So, for example, if you go on to Google and look up Gemini 2.5 Pro, which is really one of their newest state-of-the-art models, that model was most recently updated as of June 2025, this month. That said, the knowledge cutoff for that model is January 2025, so no information from January to June would have been part of that model's corpus of training data. Where you are looking to use that model to understand a topic, and things have happened in that topic or area of knowledge since that knowledge cutoff, the model won't be able to engage with them or give you responses based on that new information. So where you are using a model with a cutoff in the past and relying on it to give you the most updated information on some topic, you're not going to land on anything useful or accurate, again, because the information poured into that model stopped at that earlier date.
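[Editor's note: a minimal sketch of how a firm might flag the knowledge-cutoff problem John describes. The registry below is hypothetical; always confirm a model's actual cutoff against the vendor's own documentation. The Gemini 2.5 Pro date comes from the discussion above.]

```python
from datetime import date

# Hypothetical registry of documented knowledge cutoffs -- illustrative only;
# confirm each date against the model provider's documentation.
KNOWLEDGE_CUTOFFS = {
    "gemini-2.5-pro": date(2025, 1, 1),  # January 2025, per the discussion above
}

def needs_fresh_sources(model: str, events_through: date) -> bool:
    """True if the matter involves developments after the model's training cutoff."""
    return events_through > KNOWLEDGE_CUTOFFS[model]

# A statute amended in March 2025 is invisible to a model whose training data
# stopped in January 2025, so the answer must be supplemented with live research.
print(needs_fresh_sources("gemini-2.5-pro", date(2025, 3, 15)))  # True
```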
Go ahead to the next slide, Alex. This is one of the core dangers of using these tools: one, getting false data, or two, having a model work with assumptions of knowledge that might be outdated by the time you are using it to produce your work product. Let's go ahead to the next slide as well. The other really large and core concern, which Alex touched on with the confidentiality piece, is data privacy and confidentiality. Any large language model is heavily reliant on the knowledge sources fed to it during training, as well as the reinforcement that happens in developing the model, so data is gold, data is treasure, for any developer or creator of these AI tools. Getting the right data, and as much of the right data as you can, makes a huge difference in creating a quality model. As a consequence, many developers of large language models are looking to capture whatever data they can from whatever sources they can, including, potentially, from the users of their own models. So where you are just throwing information into a large language model, and where you haven't, for example, set up a contractual relationship under which the model developer knows it can't train on your data, there's a fair likelihood that your inputs to that model are in fact being used by the developers to improve and expand their models. And wherever our information is going out and being grabbed and used by third parties, that should give us as practitioners real worry about how secure and how truly confidential our client data is. The other piece to watch out for, and this is more along the lines of your IT security considerations, is the proliferation of apps and tools in the online marketplace offering things like, oh, use our free chat tool. The reason it's free to you as a user is that they are scraping that data, capturing your inputs into the model for their own purposes, whether that's development or other uses of that information. So where you jump online, click the first sponsored ad for a quote-unquote free chat tool, and start pouring your information in there, the fact that you aren't paying for that tool and haven't set up a relationship with that vendor to confirm your information won't be trained on probably means that whatever material you give that model is captured by those third parties and might be used by them, and you're then at real risk of failing to meet your duties of confidentiality as a legal professional. I think with that, we'll turn right back to Alex. Actually, no, take that back. No, that's Alex. I think this is me. Yeah, it is you. Jump on in.
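[Editor's note: one practical safeguard implied by this discussion is screening text before it ever reaches a third-party model. Below is a deliberately simplistic sketch; the patterns catch only a few obvious identifiers, a real pipeline would need far broader coverage (names, addresses, medical details) plus human review, and all names and values are invented.]

```python
import re

# Illustrative-only patterns: SSNs, email addresses, and US-style phone numbers.
PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace obviously identifying strings before text leaves the firm."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

prompt = "Client Jane Roe (SSN 123-45-6789, jroe@example.com) slipped on 3/4/24."
print(redact(prompt))
# -> Client Jane Roe (SSN [SSN REDACTED], [EMAIL REDACTED]) slipped on 3/4/24.
# Note the name survives: simple pattern matching is a floor, not a ceiling.
```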
[00:26:17] Speaker 2: Thanks, everybody, for hanging in with us. One of the key parts of your ethical duties as you run a law firm or legal department is choosing the right vendor. You can find lots of foundational questions to ask a legal AI vendor about how they do things, but you need to do your due diligence, because not only are they taking some of your data and doing something with it, and you need to find out what, but even if they agree to be careful with your data and not disclose it, there are risks of data breach and other access to that data, even though they contracted with you to keep it safe. So ask about audit trails. Ask about how they handle privacy. Find out a little about the outputs and how accurate they are; maybe see some examples. Make sure you have the right vendor, and we'll get into that a little more. If the vendor can't answer questions or refuses to, that should be a red flag for you. At Filevine, I can answer basic questions, but if somebody wants to get their CISO or another person who really knows tech on the line, I have 50 people here I can hand them off to who can answer those questions easily. On review protocols: as I talked about with that case earlier, where a brief flew through a few layers of lawyers before it went out to a judge and opposing counsel, establish a protocol. You can have a little watermark or a footnote that says some of this was AI-generated content. That might be a little much, but depending on how your firm works, people will then know: I've got to look at this more carefully, because I know AI was used for some of this drafting or research. That helps you catch these things before you send them out, because a judge is one thing, but if it reaches opposing counsel or gets out in the press, it's just embarrassing. The last item in that first box is: provide training for staff. Some of us who have been in the profession a little longer are probably more resistant to changes in technology, and even if somebody is proficient with technology, using AI is somewhat of an art. How do you talk to the bot? I have people here at Filevine whose prompts are half a page long. Mine are like, "please review this." So it is an art, training can really help, and I'm sure there are a lot of resources out there for it. On the ongoing-responsibility side, the ABA is putting out some really interesting stuff; they're doing pretty well, better than state legislatures. But keep up on your specific state and its ethics guidance for everything you're doing. In the event that you get sideways with a client or a judge, document how decisions are made and which tools were used. You're not required to be perfect, but you are required to be competent, and when you take on a client, you have a duty to be a zealous advocate and to do as well as you can. So document how those decisions are being made.
So if something does slip through the cracks, you at least know why, and you can potentially avoid it in the future. Again, don't let AI be a substitute for legal judgment. It is pretty miraculous when you use some of these tools: they can spit stuff out so quickly, and it sounds so good. I saw a quote that basically says AI allows, you know, stupid people to sound smart. I can usually spot a ChatGPT-written email fairly easily. Em dashes. Okay, that's the only hint I'm going to give you. And the words "to be clear." Other than President Obama, nobody says that in real life. Okay, moving on. Developing firm-wide policies: I'm going to skip through these pretty quickly, because this really depends on your firm and how you do things. Policies are great, but they also need to be followed, so you need to be able to push people to do these things. And why it matters: we are professionals. We have expertise and obligations, so make sure those in your organization are following the policies, or playbooks, whatever you want to call them, and getting the training they need. If your firm doesn't have an enterprise account like John talked about, consider this: I've seen a lot of statistics, and depending on the size of your firm and what control you have, most of your attorneys have ChatGPT on their phones and computers, and it's probably not the version you want them using. Most teenagers have it on their phones, and adults too, I guess. So you've got to monitor that, unfortunately, to make sure people are doing the right things. Transparency is interesting, because different states have different requirements. As a legal technology and AI provider, we at Filevine have hired one of the foremost experts in ethics to advise us on a lot of this. But explain AI in your engagement letters. The ABA has said you can bill for AI tools; I think it's in Opinion 512, which John mentioned. Interestingly, if something normally takes you three hours but with AI takes you 15 minutes, you can't bill for three hours; you can only bill for 15 minutes. But you can also bill for some of the technology costs. Look into that and do your own diligence, but in many cases, subject to certain exceptions, you can pass those costs on to your client. And by the way, your client should hopefully get more efficient work, maybe a better work product, from the combination of AI and humans together. So be transparent; don't be embarrassed to say it. If I'm a client, I'm going to think: you should be using some basic AI tools. I don't want you to be reckless with them, but sure, use them to produce a better work product, to be more efficient, to be more thorough. I think everybody wins with the proper use of artificial intelligence. I'll keep going, because I know we have a few more people to get through, so I'm going to let this one speak for itself. We will be sending out these slides.
These are important: do the risk assessments and governance, vet your AI vendors well, always look at new ones, and continue to ask the hard questions. It's okay to ask them.
[00:33:49] Speaker 1: Awesome. Yeah, you want this one? Go ahead. Yeah, I'll pick up from here. I'll try to move through this quickly, because I know we've got other folks to hear from, Kayla as well. One of the other things to keep in mind when selecting AI tools is that even some of the newest technological developments in these tools do not solve the ethical problems we've been talking about today. The two new features you might hear about from vendors and tools are, one, RAG, retrieval-augmented generation, and two, the turn to more reasoning models. As a quick primer, go ahead and jump to the next one, Alex. What RAG does is take information, say a case file's information or a data room's information, turn it into numerical representations, and then group those numerical representations based on similarity in the words. So say you have a large set of medical records. With a traditional search, if you search for cancer, you'll get cancer, cancerous, precancerous, these iterations of the word cancer. With vectorization, if you put in cancer, you might get back cancer, but you might also get back leukemia or melanoma, because those words are related; there are relationships and linkages between their vectors and your vector for cancer. How a lot of modern tools using RAG work is that they take advantage of vectors to allow a data store of grounded knowledge to be surfaced up to the large language model, which uses those pieces of information when drafting a response. So if we go to the next slide, Alex: your pieces of data go through an embedding model and become vectors and relationships in a vector database. Then when you say, hey, draft me a section on the medical damages of our client, focused on, say, cancer diagnoses, the tool can surface back up from the vector store those instances of leukemia experienced by our client after taking some substance, for example. RAG is particularly exciting because it allows responses to be based in the facts and data surfaced up through the vector store, and you'll often see legal AI vendors talking about this as improving the outputs of their tools.
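[Editor's note: a self-contained toy sketch of the index-retrieve-generate loop John describes. A real RAG system uses a learned embedding model, which is what puts "cancer" near "leukemia"; the word-count stand-in below only captures shared words, but the retrieval mechanics are the same. All documents and the query are invented.]

```python
import math
import re
from collections import Counter

# Toy stand-in for an embedding model: a word-count vector. Real systems use
# learned embeddings so semantically related words land near each other.
def embed(text: str) -> Counter:
    return Counter(re.findall(r"[a-z]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# 1. Index: chunk the case file and store each chunk's vector.
chunks = [
    "Diagnosis notes: the client's leukemia, a form of cancer, confirmed in March.",
    "Medical bills and damages: chemotherapy costs billed to the client.",
    "Commercial lease agreement for the office property, signed in 2019.",
]
index = [(chunk, embed(chunk)) for chunk in chunks]

# 2. Retrieve: vectorize the request and pull the most similar chunks.
query = "medical damages from the client's cancer diagnosis"
qvec = embed(query)
top = sorted(index, key=lambda pair: cosine(qvec, pair[1]), reverse=True)[:2]

# 3. Generate: hand only the retrieved chunks to the LLM as grounding context.
context = "\n".join(chunk for chunk, _ in top)
print(f"Using ONLY the context below, draft the damages section.\n\n{context}")
```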
The other main tools you might hear about now are called reasoning models. If we go ahead to the next slide: some of you have played with or worked with the big foundational names, Gemini, OpenAI, Claude, and you've seen that they've come out with newer models like o3, Gemini 2.5, or Claude 3.7 Sonnet. What these models do, when given an instruction or a prompt, is automatically generate a chain of thinking. They use compute to think through an answer before giving you the final output, which often provides stronger lines of reasoning and more cohesive, coherent outputs, especially where the answer requires working through logical steps or something more arithmetic-like. The problem, if you go ahead to the next slide, Alex, and one more slide, is that recent studies as of March 2025 found that even applying RAG or reasoning models to legal work hasn't proved to be a panacea for hallucinations, limited windows of training data, or the tendency of any of these machines to make mistakes. What we can see here are snapshots out of one main study at the University of Minnesota Law School, looking at how law students at various levels of their legal education, when using tools with RAG and reasoning capabilities, were still turning in work product with hallucinations. The chart shows that even with these improvements in the actual abilities of the tools, the human users were still including hallucinations, these ethical errors, in the outputs they turned in to their graders. So just because an AI tool says, oh, we use advanced reasoning, we use RAG, we're grounding your data, that doesn't mean you can wipe your hands and assume you'll be totally fine because the tool uses RAG and the data it's working on must be correct. No, that's incorrect. Even with these tools, you can still run into those same dangers. So with that in mind, regardless of what tools you're using, regardless of how advanced they are or what vendors you're bringing in, you're still going to need a culture of, and training on, AI responsibility across your firm. Train staff that regardless of the AI tool in use, and regardless of what you're handing to clients or to a court, humans must go through these materials, checking that the material is right and that what you produce meets your ethical requirements. As Alex noted, this is important at all levels of your environment, because if, like at the law firm Alex talked about earlier, a junior associate uses an LLM or AI tool to produce the work and never checks it, and by the time it gets to the partner, the partner assumes it's been checked and turns it in, that's going to put the firm and the practitioners at real risk of an ethical breach. Regardless of what tools you're signing up with, these dangers will nevertheless still exist when working with them.
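[Editor's note: the human-in-the-loop review culture described above can be supported, though never replaced, by tooling. This sketch extracts reporter-style citations from an AI-assisted draft and emits a verification checklist. The regex is a simplification of real citation formats, the script cannot tell a real case from a hallucinated one, and the second case in the sample draft is invented; a human still has to check every item in Westlaw or Lexis.]

```python
import re

# Simplified pattern for citations like "550 U.S. 544" or "123 F.3d 456".
# Real citation formats are far more varied; this only illustrates the workflow.
CITATION = re.compile(
    r"\b\d{1,4}\s+"                                             # volume number
    r"(?:U\.S\.|S\. ?Ct\.|F\.(?: ?Supp\.)?(?: ?[23]d| ?4th)?)"  # reporter
    r"\s+\d{1,4}\b"                                             # first page
)

def citation_checklist(draft: str) -> list[str]:
    """List every citation in an AI-assisted draft for mandatory human review."""
    return sorted(set(CITATION.findall(draft)))

draft = (
    "Plaintiffs rely on Bell Atl. Corp. v. Twombly, 550 U.S. 544 (2007), "
    "and Smith v. Jones, 123 F.3d 456 (9th Cir. 1997)."  # second case is invented
)
for cite in citation_checklist(draft):
    print(f"[ ] Verify in Westlaw/Lexis before filing: {cite}")
```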
[00:41:17] Speaker 2: I'm going to skip through these real quick. Again, more about vetting your AI vendors and asking them about their security practices and certifications. I'm going to pause and give the next code for the CLE. The code is AI. And then I'm going to turn it over to Kayla.
[00:41:39] Speaker 3: Awesome. Thank you, Alex and John. Today I wanted to jump in because I know we're all asking: how do we implement AI into our law firms, and how do we get staff on board? A lot of what Alex and John talked about reflects things we've learned on our journey, some the easy way and some not, and I'd like to share them; if you're just getting used to AI, they will help. I am a big believer that the firms that embrace AI will outpace those that don't. We've seen it year over year, just as with the Internet and computers, as Alex talked about at the beginning. So now is the time to embrace AI. When we first started implementing AI, our people immediately started worrying about job security: AI is going to take my job. So we've really implemented a mindset of: we use AI to be more human, not less. It's not about replacing people; it's about empowering them. We take the administrative tasks, the repetitive tasks that don't take a lot of strategy, off our team's plates, giving them more time to focus on the strategic things: negotiations, client care, legal outcomes, trial. That has allowed our team to really embrace the use of AI. Next slide, Alex, please. How we rolled it out was our implementation playbook. We identified use cases: demand drafting, case handoffs. One thing we're doing right now is building out a focus group program utilizing AI, so we can bring in a whole group of potential jurors, focus-group our cases, and use that data and AI to project case outcomes. But where we really got going was piloting tools like Filevine AI chat, demand generation, and the new depo copilot, and I'm really excited to look at MedChron as soon as it's released. When we looked at these tools, we looked at the benefits, and we also looked at other programs. What we found was that with some of the external tools, while they may have other features, it actually took more time to download all of our documents and upload them into another software that wasn't integrated with Filevine. It was counterproductive: documents were getting lost, or it was taking more time and slowing our team down. So really take the time to pilot those tools, find out what features matter to you, and ask whether a tool adds to the overall outcome you're trying to achieve. One thing we have absolutely implemented across the board is that human verification is non-negotiable. AI assists; humans decide. That is really important. We've talked about hallucinations a little, and our team knows that anything AI-based gets an AI-generated tag in our Filevine, and that requires extra eyes. Yes, AI is helping you draft it, but we expect you to know the file in and out, so if something is added or omitted, it's on our human eyes to catch it. Making sure you train your team in that regard is really, really important. Getting team buy-in started with communicating the why: letting them know that AI makes our jobs easier and more impactful.
We created a champions program that helped with that. Each department had a subject matter expert who helped me demo the various tools, identify what features we were looking for, and figure out what would benefit us most. They led peer training and onboarding. And we drove adoption from the bottom up: rather than me or the firm owner pushing it down from the top, our champions trained everybody from the bottom all the way up. That really helped get everybody on board and understanding how AI works. I know Alex talked about prompts a little. One of the things I learned is that you can actually use AI to create your prompts and make them better, which was really cool to me; one of our champions showed me that. The next step was setting expectations and safeguards. We've talked a lot about training today: train early, train often, verify everything. You have to train your team that AI outputs are not final drafts. If you get an AI-written demand letter, even in Filevine AI, it's doing a lot of the legwork to pull everything together, but it does not replace human eyes. Our team knows, and is trained on the fact, that they must review it. They must trust but verify. We basically treat AI as a research assistant. I always say: you're still the lawyer. AI is great, but you're responsible for your final product. And I just wanted to speak to one more group. I watch a lot of these webinars, and a lot of the people watching may not be attorneys; they may be staff members who use Filevine. So I wanted to speak to that group a little as well. You don't need a title to lead AI transformation in your law firm; you can become the subject matter expert. By being here today, you are learning what it takes to protect your data and to vet the AI tools coming at you every which way; there are multiple AI tools being developed every single day. So take the time to learn: which ones are SOC 2 compliant? Which ones are going to protect your data? Are you on an enterprise plan? Have you addressed it in your contracts and engagement letters, like Alex said? All of these things are really, really important. It doesn't matter what level you're at in a law firm; taking the time to become that subject matter expert and drive AI transformation from within really helps. Even the small wins add up, and they ultimately bring big shifts in the law firm.
[00:49:25] Speaker 1: Awesome. Thanks, Kayla. Now we're going to jump into the Q&A. There's a Q&A box you can use to submit your questions, and we already have a few really great ones that I'll start working through now. If anyone has additional questions, please put them in. So, a really great question from Ricardo on Opinion 705 from the State Bar of Texas. I'm not a member of the State Bar of Texas, but during this webinar I was able to pull it up and take a look. And I'm going to actually go ahead and...
[00:49:58] Speaker 2: I'm a member of the State Bar of Texas, John.
[00:50:00] Speaker 1: There we go, Alex. One of the really interesting pieces of it, and I was going to type the response right into that question box, but I'll just talk through it: they note that Rule 1.1 doesn't require the use of generative AI; there's no obligation to use it. But they also note that lawyers should not, and I'm quoting right from the opinion, unnecessarily retreat from the use of new technology that may save significant time and money for clients. They then follow that up by saying the lawyer who opts to use generative AI needs to keep in mind those dangers of hallucination and confidentiality that we talked about today. So in a lot of ways it re-articulates the ABA opinion we discussed earlier. That pairs with a later question from an anonymous attendee who said they aren't worried about relying on AI too much; rather, they're worried about using it at all. I think it's interesting that bar associations like the State Bar of Texas are saying lawyers should not retreat from this and should keep in mind the savings and efficiencies gained by these tools, balanced, of course, against the ethical items we're talking about today. I'm going to go through some of the other great questions we have. I saw a question asking: if I have, say, a paid ChatGPT or Claude subscription, does that mean my data is not being trained on? Probably not. Even with a paid subscription, unless you have an arrangement with that provider, your data probably is, or might be, trained on by that provider. I have OpenAI's consumer-facing app on my personal phone and computer, and I know for a fact that it's saving my data and that the data can be trained on. The consumer-facing OpenAI product even has a feature called memory now, so it brings up items from my previous chats as context in later chats. So even if you are paying for your AI, that does not mean they aren't keeping that data or training on it. Unless you have an arrangement with the model provider specifying no retention and no training, your data is likely at risk of being trained on, used later, accessed, and so on. Some more great questions here. For John: what do you mean by verification? I probably mentioned that in the RAG context. Even if you have tools using a grounded data source, you still want to make sure the information coming out of a RAG tool is in fact correct. One, because these tools can still hallucinate. And two, because when these tools surface information, it could be that the relationships between the items being surfaced and used by the tool are not the ones you care about.
Even with our cancer example, it could be the case that, just because of how the model was developed, there isn't as strong a relationship between, say, the diagnosis you're expecting and the input you placed in. So just because a tool has surfaced information up through RAG and has that, quote, grounded information, doesn't mean that information has been verified. You still want to check what comes out against known source material. I see another great question here: is it worth looking into self-hosting something? This is from Chris Harshman: Paperless AI, Llama3, the big GPU. In some cases, having your own locally run tool set might be the way to go, depending on your confidentiality risks. That being said, these tools are compute-expensive, so unless you have a lot of resources to put toward compute, you might struggle to run some of the more contemporary foundational models just because of what they require. Whether your team needs those higher levels of confidentiality and control over your models is really going to be a practitioner-by-practitioner, firm-by-firm consideration.
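[Editor's note: a minimal self-hosting sketch, assuming the llama-cpp-python library (pip install llama-cpp-python) and a GGUF model file already downloaded to local disk; the file path and generation settings are hypothetical. The confidentiality appeal is that nothing leaves your own hardware, at the compute cost John mentions.]

```python
from llama_cpp import Llama

# Load a quantized local model; the path below is a placeholder for whatever
# GGUF file you have downloaded. n_gpu_layers=-1 offloads all layers to the
# GPU if one is available; omit it to run on CPU (slowly).
llm = Llama(
    model_path="./models/llama-3-8b-instruct.Q4_K_M.gguf",  # hypothetical path
    n_ctx=4096,      # context window, in tokens
    n_gpu_layers=-1,
)

response = llm(
    "In two sentences, summarize the duty of technological competence "
    "under Model Rule 1.1.",
    max_tokens=200,
    temperature=0.2,  # lower temperature reduces, but never removes, run-to-run variation
)
print(response["choices"][0]["text"])
```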
[00:55:21] Speaker 2: I answered a few just in the chat, John, but I'm leaving these trickier ones for you because they're more technical in nature.
[00:55:28] Speaker 1: Do you want to answer the one about Texas Bar Association server locations or just server locations?
[00:55:37] Speaker 2: I don't know that one. That would be a rule specific to Texas that I'm not familiar with, although I doubt there is one, as long as the servers are in the United States, unless your client has specific requirements. I'm not aware of any state bar rules that restrict servers to a particular state, but many do restrict them to the country. Filevine, for example, stores data in AWS, and it's redundant throughout the country: there are four data centers in different geographic locations, so if something happens at one location, the data is not lost, and it's backed up every minute or so. So I'm not aware of any rules or restrictions that are Texas-specific.
[00:56:28] Speaker 1: I see another great question about using Perplexity or similar AI models to orient yourself before jumping into an area of law. I think exploring these tools as a starting point is often a great way to experiment. (We have about two minutes, John. Sounds good.) The other piece I would keep in mind, though, is that how these large language models approach things like values and more normative judgments can be affected both by the underlying training data and by the decisions of the developers. So even when thinking about which legal argument to take, or whether something is the right or wrong decision, these tools aren't blank slates affected only by your inputs. They might have their own weights on what ethical approach to take or what philosophy to apply when considering a certain question. So keep in mind that the responses you get back, and your jumping-off point, will be based on the underlying training and the decisions of the developers as well as on what you put in. It can be a great jumping-off point, but it might not be the sole place to start, and there might be approaches the tool won't surface to you, based on its data or even on the decisions of its developers.
[00:58:07] Speaker 2: I apologize that we will not be able to get through all the questions. I think we have a hard stop right now. Thanks, everybody, for joining us. Really appreciate the engagement. Thank you, Kayla, for coming on. And thank you, John. Appreciate both of you guys and the insights you shared. And hopefully this was informative. Thanks, everybody. Have a great day.