
AI Deposition Summary QA Checklist: Prevent Hallucinations and Misquotes

Daniel Chang
Posted in Legal · 7 Mar, 2026

An AI deposition summary can save time, but it can also invent facts, swap speakers, or soften key admissions if no one checks it. Use the QA checklist below to verify every decision, commitment, and admission against transcript citations, confirm speaker attribution, validate numbers and dates, and flag ambiguous wording before anyone relies on the summary.

This guide gives you a practical, repeatable process plus a “red flag” list that should trigger audio spot-checks or a full human review.


Key takeaways

  • Require a transcript citation for every material statement, especially decisions, commitments, and admissions.
  • Check speaker attribution and roles (witness vs. attorney) on every quoted or paraphrased point.
  • Validate all numbers, dates, names, and “never/always” statements directly against the transcript.
  • Flag ambiguity and legal conclusions; rewrite them into neutral, transcript-faithful language.
  • Use a red-flag list to decide when to spot-check audio or escalate to human review.

What can go wrong in AI deposition summaries (and what QA must catch)

AI tools can produce summaries that read as confident even when they are wrong. QA exists to stop a clean-looking document from spreading an error into case strategy, motions, or settlement talks.

Common failure modes include hallucinations (facts that are not in the record), misquotes, wrong speaker attribution, and “helpful” legal framing that the witness never said.

Typical issues to look for

  • Hallucinated events: the summary describes meetings, emails, policies, or timelines that the transcript does not support.
  • Misquotes: a sentence in quotes does not match the transcript word-for-word.
  • Speaker swaps: the summary assigns a statement from counsel to the witness (or vice versa).
  • Stronger language than the record: “admitted,” “confirmed,” or “agreed” when the witness said “I don’t recall” or “I think.”
  • Numbers drift: amounts, dates, times, and counts shift subtly (e.g., “two weeks” becomes “two months”).
  • Missing qualifiers: “to the best of my knowledge,” “as far as I remember,” or “generally” disappears, changing meaning.
  • Over-smoothing: contradictions, uncertainty, and corrections get edited away, removing impeachment value.

Before you start: Set minimum standards for any AI-generated summary

QA goes faster when you define what “acceptable” means. Start by setting clear requirements for format, citations, and scope so reviewers do not debate basics on every file.

Decide whether the summary is for internal understanding, a client update, or a filing-related work product, because that affects how strict you need to be.

Baseline requirements to enforce

  • Version control: label the AI draft with date/time and the transcript version used.
  • Citation format: require page:line or a consistent locator throughout.
  • Scope statement: confirm which sessions are included (day 1 vs. day 2, exhibits, errata).
  • Quoting rules: define when the summary may quote vs. paraphrase, and how to handle corrections.
  • Neutral tone: summaries should report testimony, not argue the case.

Practical tip: Build a “citation-first” workflow

Many hallucinations disappear if the summary must include a citation for each key point. If your current tool cannot reliably cite, plan to add citations during QA and treat uncited claims as untrusted.
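One way to operationalize "treat uncited claims as untrusted" is a simple pre-screen before human review. The sketch below is a minimal, hypothetical example: the citation pattern (`page:line`, e.g. "142:18") and the trigger-word list are assumptions you would adapt to your own citation format and vocabulary, not a standard.

```python
import re

# Assumed citation format: "page:line", e.g. "142:18". Adjust to your locator style.
CITATION = re.compile(r"\b\d{1,4}:\d{1,3}\b")
# Illustrative concession vocabulary; extend for your practice area.
TRIGGERS = re.compile(r"\b(admitted|confirmed|agreed|never|always)\b", re.I)

def uncited_claims(summary_sentences):
    """Return sentences that use concession language but carry no citation."""
    return [s for s in summary_sentences
            if TRIGGERS.search(s) and not CITATION.search(s)]

sentences = [
    "The witness admitted the policy changed in 2021 (142:18).",  # cited: passes
    "The witness confirmed he never reviewed the report.",        # uncited: flagged
]
for s in uncited_claims(sentences):
    print("NEEDS CITATION:", s)
```

A screen like this will not catch every uncited claim (paraphrases without trigger words slip through), so it narrows the reviewer's queue rather than replacing the audit.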

The AI deposition summary QA checklist (step-by-step)

Use this checklist in order. It starts with high-risk problems (things that change meaning) and ends with polish.

If you have limited time, prioritize the sections marked “high impact” because they tend to drive legal decisions.

1) Input integrity check (fast, but essential)

  • Confirm you are using the correct transcript (witness name, date, case, session/day).
  • Confirm the transcript is final (or note “draft/uncertified” if applicable).
  • Check that the AI summary covers the full range of pages intended (no missing chunks).
  • List all speakers and roles (witness, examining attorney, opposing counsel, interpreter, videographer) so attribution checks are easier.

2) Citation audit for material statements (high impact)

Verify every decision, commitment, and admission against transcript citations. These items often drive motions, settlement, and witness prep, so treat them as “must be perfect.”

  • Identify every statement in the summary that implies a concession or firm fact (e.g., “admitted,” “confirmed,” “agreed,” “never,” “always”).
  • For each, locate the exact transcript support and add a citation (page:line).
  • Confirm the testimony says what the summary claims, not something “close.”
  • Check that the summary includes any qualifying language that changes strength (e.g., “I believe,” “I don’t recall,” “approximately”).
  • If the summary uses a quote, verify it is verbatim, including filler words like “uh,” if your quoting standard requires them.
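The verbatim-quote step above can be partially automated. This is a sketch under assumptions: it pulls every double-quoted span from the summary and checks for an exact (whitespace-normalized) match in the transcript text. Function and variable names are illustrative, and a failed match only means "verify by hand," since punctuation or quoting-style differences can cause false negatives.

```python
import re

def verify_quotes(summary_text, transcript_text):
    """For each quoted span in the summary, report whether it appears
    verbatim (ignoring whitespace differences) in the transcript."""
    quotes = re.findall(r'"([^"]+)"', summary_text)
    norm = lambda s: " ".join(s.split())  # collapse whitespace only
    transcript = norm(transcript_text)
    return [(q, norm(q) in transcript) for q in quotes]

summary = 'She said "I do not recall the meeting" under oath.'
transcript = 'A. I do not recall the meeting. Q. Are you sure?'
for quote, ok in verify_quotes(summary, transcript):
    print(("OK    " if ok else "CHECK ") + repr(quote))
```

Any `CHECK` result goes back to a human with the transcript open; the script never rewrites the quote itself.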

3) Speaker attribution check (high impact)

Misattribution can flip the meaning of a record. Counsel questions are not witness testimony, and objections or instructions not to answer should not be rewritten as facts.

  • Confirm each key point clearly identifies who said it.
  • Spot-check any point that begins with “witness stated/confirmed” and ensure the witness actually said it.
  • Separate questions from answers; make sure the summary does not treat a question as an admission.
  • Verify handling of objections (e.g., “objection, form”) and instructions not to answer.
  • If an interpreter is used, confirm the summary does not attribute the interpreter’s words as the witness’s phrasing.

4) Numbers, dates, and proper nouns validation (high impact)

Validate numbers/dates directly against the transcript. Small errors here can cause big downstream problems, especially in timelines, damages, and document identification.

  • Check dates (month/day/year) and relative time references (“two weeks later”).
  • Check all quantities: dollar amounts, counts, durations, percentages, measurements.
  • Verify names of people, companies, products, locations, and systems.
  • Confirm exhibit numbers and document titles match the record.
  • Watch for transposition errors (e.g., 2019 vs. 2021; 15 vs. 50).
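Number drift lends itself to a crude but useful diff: extract every numeric token from the summary and confirm each one appears somewhere in the transcript. The sketch below is a hypothetical helper, not a complete validator; it cannot catch a number attached to the wrong fact, and spelled-out numbers ("two weeks") need the human pass.

```python
import re

def numbers_not_in_transcript(summary, transcript):
    """List numeric tokens (years, amounts, counts) that appear in the
    summary but never in the transcript, as a drift/transposition check."""
    nums = lambda text: set(re.findall(r"\b\d[\d,.]*\b", text))
    return sorted(nums(summary) - nums(transcript))

summary = "The contract was signed in 2021 for $15,000 over two weeks."
transcript = "Q. When was it signed? A. 2019, I think, for $15,000."
for n in numbers_not_in_transcript(summary, transcript):
    print("NOT IN TRANSCRIPT:", n)  # here the summary's "2021" is flagged
```

A flagged token is not automatically wrong (the transcript may phrase it differently), but every flag is cheap to verify against the record.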

5) Ambiguity and over-certainty scan (high impact)

Flag ambiguous statements and adjust the summary’s certainty to match the testimony. AI summaries often turn uncertainty into certainty, which can mislead readers who never open the transcript.

  • Highlight vague terms (“soon,” “regularly,” “a lot,” “standard practice”) and confirm how the witness defined them, if at all.
  • Flag any sentence where the summary draws an inference not stated (e.g., motive, intent, causation).
  • Replace legal conclusions with neutral language (e.g., “witness testified that…”).
  • Mark unresolved conflicts (contradictory testimony, corrections, or “I don’t recall” answers).
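If your review pairs each summary sentence with its cited transcript excerpt, the over-certainty scan can be pre-screened mechanically. This is a minimal sketch under assumptions: both word lists are illustrative starting points, and a match only flags the pair for human reading.

```python
import re

# Illustrative vocabularies; extend both lists for your own matters.
CERTAIN = re.compile(r"\b(admitted|confirmed|agreed|proved|established)\b", re.I)
HEDGES = re.compile(r"\b(I believe|I think|I don't recall|approximately|to the best of)\b", re.I)

def overcertainty_flags(pairs):
    """pairs: (summary_sentence, transcript_excerpt) tuples.
    Flag sentences that use certainty verbs where the testimony hedged."""
    return [s for s, t in pairs if CERTAIN.search(s) and HEDGES.search(t)]

pairs = [
    ("The witness confirmed the report was final.",
     "A. I believe it was final, but I don't recall signing it."),
]
for s in overcertainty_flags(pairs):
    print("OVERSTATED:", s)
```

The fix for a flagged pair is the one the checklist already prescribes: rewrite the summary sentence in neutral, transcript-faithful language with the qualifier restored.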

6) Context and completeness check

A quote can be accurate but still misleading if it drops key context. Check whether the summary captures limitations, follow-up questions, and corrections.

  • Confirm the summary includes clarifying follow-ups that change meaning.
  • Ensure it does not omit testimony that weakens the “headline” point.
  • Capture corrections (witness changes an answer later) and note where they occur.
  • Verify that the summary distinguishes personal knowledge from hearsay or speculation when the witness makes that distinction.

7) Consistency check across the document

AI text can contradict itself in different sections. Consistency checks catch subtle drift in names, roles, timelines, and key facts.

  • Confirm the witness’s role/title remains consistent.
  • Check timeline consistency (event A before event B) across sections.
  • Ensure recurring terms stay consistent (project names, policy names, department names).
  • Verify the summary does not present two incompatible versions of the same event without noting the conflict.

8) Formatting and usability check (low risk, but helpful)

  • Use clear headings (background, timeline, key admissions, damages, documents, credibility issues).
  • Keep paragraphs short and use bullets for dense fact lists.
  • Ensure every section has citations for factual statements, not just a few.
  • Remove filler language that adds confidence without support (“clearly,” “obviously,” “without question”).

Red flags that require audio spot-check or human review

Some situations make transcript-only QA risky. Use this list to decide when to spot-check audio/video or assign a deeper human review.

  • Critical “admission” with thin support: the summary calls it an admission, but the transcript shows hedging, confusion, or a re-asked question.
  • Heavy use of “inaudible,” “ph,” or unclear segments: key portions are unclear or disputed.
  • Overlapping speech or interruptions: the record has frequent cross-talk, making attribution uncertain.
  • Interpreter present: meaning can shift, and the transcript may not capture tone or corrections.
  • Technical terms, product names, or acronyms: mishearing one term can change the facts.
  • Fast reading of numbers: phone numbers, account numbers, dates, totals, or measurements.
  • High-stakes segments: testimony tied to liability, damages, notice, causation, spoliation, or intent.
  • Disputed exhibit identification: the witness is unsure which document they are looking at.
  • Significant corrections: the witness changes answers later or counsel revisits a topic after a break.
  • The summary introduces facts with no citation: any uncited “fact” should trigger review or deletion.
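Several of these red flags are literal markers in the transcript text, so a quick scan can build your audio spot-check list before anyone reads page by page. A minimal sketch, assuming your transcripts use bracketed uncertainty markers; the marker list is an assumption to adapt to your reporter's conventions.

```python
def red_flag_segments(transcript_lines):
    """Return (line_number, line) pairs containing uncertainty markers
    that should trigger an audio/video spot-check."""
    markers = ("[inaudible]", "(ph)", "[crosstalk]", "[unintelligible]")
    return [(i + 1, line) for i, line in enumerate(transcript_lines)
            if any(m in line.lower() for m in markers)]

lines = [
    "Q. What was the total?",
    "A. It was [inaudible], maybe fifteen thousand.",
]
for n, line in red_flag_segments(lines):
    print(f"SPOT-CHECK line {n}: {line}")
```

Pair each flagged line number with a timecode where available, and carry the list straight into the QA log described below.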

Practical QA workflow you can use in a real case

A checklist works best when it fits how legal teams actually review. The workflow below keeps the work bounded and creates a clear record of what you verified.

You can run it with one reviewer or split it across a team when the deposition is long.

Suggested roles and handoffs

  • Reviewer A (substance): admissions/commitments/decisions, speaker attribution, ambiguity flags.
  • Reviewer B (precision): numbers/dates/proper nouns, exhibit references, verbatim quote checks.
  • Optional spot-checker: audio/video checks for red-flag segments.

Time-boxed pass structure

  • Pass 1 (30–60 minutes): skim summary, highlight all material statements, and mark missing citations.
  • Pass 2 (60–120 minutes): verify each highlighted point against the transcript and add/repair citations.
  • Pass 3 (30 minutes): run the red-flag list and decide what needs audio or deeper review.
  • Pass 4 (15–30 minutes): clean formatting, remove over-confident wording, and finalize.

QA log template (copy/paste)

  • Summary version: [date/time, file name]
  • Transcript version: [date, certified/uncertified]
  • Verified admissions list: [bullet points + page:line]
  • Open issues: [uncited claims, ambiguous segments, contradictions]
  • Audio spot-check needed: [segments, timecodes if available]
  • Reviewer: [name/initials]

Common questions

Do I really need citations in an internal deposition summary?

Citations make internal summaries safer because they let any reader verify the record fast. They also reduce the chance that a paraphrase turns into a “fact” that no one can find later.

What should I do with uncited statements in the AI draft?

Treat them as untrusted until you can find transcript support. If you cannot support them, delete them or rewrite them as uncertainty with a note that the transcript does not confirm the point.

How do I check for misquotes efficiently?

Limit quotes to high-value lines, then verify them word-for-word against the transcript. If you need many quotes, consider converting them to paraphrases with citations unless your use case demands verbatim language.

Should I spot-check audio even if I have a certified transcript?

Audio spot-checks help most when the transcript shows uncertainty markers (like “inaudible”) or when the testimony is high stakes and turns on a single word or number. If the transcript is clear and consistent, transcript-based QA may be enough.

How do I handle contradictions in testimony within the summary?

Do not “resolve” contradictions unless the witness does so on the record. Note both versions with citations and indicate that the testimony differs across sections.

What wording helps avoid overstatement in summaries?

Use neutral verbs like “testified,” “stated,” “estimated,” “did not recall,” and “was unsure.” Avoid adding conclusions like “therefore,” “this proves,” or “clearly,” unless you label them as analysis outside the summary.

When should I escalate from AI summary QA to full human summarization?

Escalate when red flags stack up: unclear audio, complex technical topics, many exhibits, major inconsistencies, or any situation where an error would materially change decisions. In those cases, a human-generated or human-led summary with careful citation is safer.

Related services that can support your workflow

If you use AI to draft summaries, consider adding a human layer where accuracy matters most. You can also pair transcription and proofreading so your source transcript is clean before summarization begins.

If you want a reliable transcript foundation for summaries and citations, GoTranscript offers professional transcription services that fit legal workflows and help your team review testimony with more confidence.