
Lawyers, GenAI, and the Duty to Verify Citations (Full Transcript)

The Avianca case shows how overreliance on generative AI can breach ethical duties—lawyers must supervise tech use and independently verify authorities.

[00:00:00] Speaker 1: And so what we're seeing, even in the big Avianca case, the first one: the guy watched a YouTube video because his kid said GenAI is cool. And then when he got caught with his pants down, instead of saying, "Oh, let me look at the citations and actually do the work of lawyering," he asked the generative AI, "Did I get it wrong?" Right? That was bad lawyering. So a lot of times what you're seeing is people not doing the basic due diligence. Part of it is the ethical obligation to supervise; a little bit of it is tech competence and knowing where the failings are with tech. But the other part is that just because the robot said it doesn't mean you don't have to do the lawyer stuff of trust, but verify.

AI Insights
Summary
The speaker cites the Avianca case to illustrate poor lawyering involving generative AI: an attorney relied on a YouTube-inspired use of GenAI, failed to verify citations, and then asked the AI to confirm whether it was wrong. The core issue is neglecting basic due diligence and ethical duties of supervision and technological competence. The takeaway is that lawyers must “trust but verify” and still perform traditional legal validation rather than deferring to an AI’s assurance.
Title
GenAI Misuse in Law: Due Diligence and ‘Trust but Verify’
Keywords
Avianca case, generative AI, legal ethics, due diligence, citation checking, technological competence, supervision duty, hallucinations, trust but verify, professional responsibility
Key Takeaways
  • Generative AI outputs do not replace core lawyering tasks like checking citations and sources.
  • Ethical duties include supervising technology use and maintaining tech competence.
  • Asking an AI to validate its own work is not a reliable safeguard.
  • Failures often stem from skipping basic due diligence rather than the technology itself.
  • A practical rule is ‘trust but verify’: independently confirm AI-generated legal authorities.
Sentiments
Negative: Critical tone focused on misconduct and negligence, emphasizing ethical lapses, lack of verification, and consequences of overreliance on AI.