In a grant methods section, describe transcription and analysis by stating (1) how you will record and store audio, (2) how you will transcribe it (verbatim or clean read), (3) how you will remove identifiers, and (4) how you will code and analyze the text. Reviewers mainly want to see a clear, repeatable plan that matches your study aims and follows your IRB-approved procedures.
This guide gives copy-ready methods text you can adapt, plus examples for interviews, focus groups, and observational audio.
Key takeaways
- Be specific: name your recording setup, file handling, transcription approach, and analysis steps.
- Choose transcript style on purpose: verbatim supports discourse/interaction analysis; clean read supports theme-focused reporting.
- Plan de-identification early: define what you will remove, who does it, and how you will store the key (if any).
- Make analysis auditable: describe coding steps, team roles, memoing, and how you will resolve disagreements.
- Align with IRB: keep your wording and workflow consistent with what the IRB approved (consent, retention, sharing).
What reviewers expect when you write about transcription and analysis
A strong methods section answers, in plain language, what you will do, who will do it, and how you will keep it consistent. It also shows you have thought through participant privacy and data handling from recording to reporting.
Include enough detail that another researcher could repeat your workflow without guessing. If you plan to change any element later (for example, switching from verbatim to clean read), state the decision rule now.
A simple checklist you can mirror in your methods
- Data capture: device, format, backup, and where files live.
- Transcription: verbatim vs clean read, timestamping, speaker labels, and quality checks.
- De-identification: what gets removed, how, and when.
- Analysis: approach (thematic, content, grounded theory, framework), steps, tools, and team process.
- Governance: consent language, access controls, retention, and sharing rules per IRB.
Copy-ready methods text: Recording plan (adaptable)
Use the text below as a template, then swap in your exact devices, storage locations, and consent details. Keep it consistent with your approved protocol and consent form.
Template: Recording and file handling
- Recording: “With participant consent, we will audio-record all [interviews/focus groups/observations] using [device/app]. We will also take brief field notes to capture nonverbal context (e.g., pauses, laughter, setting) and key timestamps.”
- File format: “Audio will be saved in [WAV/MP3/M4A] format with a study ID (e.g., INT001) and no names in the filename.”
- Transfer and storage: “After each session, audio files will be transferred to [encrypted drive/secure institutional server] and deleted from the recording device.”
- Access: “Only authorized study personnel listed on the IRB protocol will have access to raw audio.”
- Retention: “We will retain audio and transcripts for [X years/months] in accordance with IRB approval and institutional policy.”
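For internal planning (not grant text), the transfer-and-storage step above can be sketched as a small script. The function name, paths, and naming scheme here are illustrative assumptions; `secure_dir` stands in for whatever encrypted drive or institutional server your IRB protocol names.

```python
from pathlib import Path
import shutil

def ingest_session(raw_file: Path, secure_dir: Path, study_id: str) -> Path:
    """Copy a session recording into secure storage under its study ID,
    then remove it from the recording device, mirroring the template above."""
    secure_dir.mkdir(parents=True, exist_ok=True)
    # Filename carries the study ID only (e.g., INT001.wav) -- no participant names.
    dest = secure_dir / f"{study_id}{raw_file.suffix}"
    shutil.copy2(raw_file, dest)   # transfer to the secure location
    raw_file.unlink()              # delete from the recording device
    return dest
```

A workflow like this makes the "transferred after each session, then deleted from the device" language in your methods section easy to follow consistently across sessions.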
Optional line: Remote sessions
- “For remote sessions, we will use [platform] audio recording with participants’ permission and will instruct participants to join from a private space when possible.”
If your study falls under U.S. human subjects protections, keep your consent and data handling aligned with the Common Rule requirements (45 CFR 46). You can link to the current regulation text on the eCFR page for 45 CFR 46.
Copy-ready methods text: Transcription approach (verbatim vs clean read)
Reviewers often look for one sentence that tells them what kind of transcript you will produce and why. Pick the option that fits your research questions and analytic method.
Option A: Verbatim transcription (word-for-word)
- “Audio recordings will be transcribed verbatim (word-for-word), including false starts, repetitions, and non-lexical utterances (e.g., ‘um,’ ‘uh’) when they affect meaning.”
- “Transcripts will include speaker labels (e.g., P1, P2, Moderator) and timestamps at [e.g., every 30–60 seconds / at topic shifts] to support auditability.”
- “Unclear sections will be marked as [inaudible] with a timestamp, and we will attempt clarification using the audio and field notes.”
Option B: Clean read transcription (edited for readability)
- “Audio recordings will be transcribed using a clean read approach to improve readability, removing filler words and minor repetitions while preserving participants’ meaning.”
- “We will retain meaningful pauses, emphasis, and emotion notes (e.g., [laughs]) when they inform interpretation.”
- “Transcripts will include speaker labels and timestamps at [intervals/topic changes] to support efficient review and coding.”
Quality control language you can add
- “A second team member will review each transcript against the audio for accuracy and completeness using a standardized checklist.”
- “We will correct speaker attributions, key terms, and technical vocabulary (e.g., program names, clinical terms) during review.”
If you plan to use automated transcription first
- “We will generate an initial draft transcript using automated speech-to-text and then conduct full human review and correction against the audio before analysis.”
- “We will treat automated output as a draft and will not code transcripts until they meet our accuracy criteria.”
If you want a short, plain-language description of automated vs human workflows for your internal planning, see automated transcription options and how they can fit into a review-first process.
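For internal planning only, the "review before coding" gate described above could be sketched as a simple check on the reviewed draft. The `[inaudible ...]` marker convention and the zero-marker threshold are illustrative assumptions, not a standard:

```python
import re

# Illustrative accuracy gate: a draft is not coding-ready while it still
# contains unresolved [inaudible ...] markers beyond a chosen threshold.
INAUDIBLE = re.compile(r"\[inaudible[^\]]*\]", re.IGNORECASE)

def coding_ready(transcript: str, max_inaudible: int = 0) -> bool:
    """Return True when the reviewed transcript meets the (hypothetical)
    criterion of at most `max_inaudible` unresolved markers."""
    return len(INAUDIBLE.findall(transcript)) <= max_inaudible
```

A check like this is one concrete way to operationalize "we will not code transcripts until they meet our accuracy criteria"; your actual criteria should match whatever your protocol states.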
Copy-ready methods text: De-identification and confidentiality
De-identification is not just “remove names.” In your methods, define what identifiers you will remove and how you will handle indirect identifiers that could re-identify someone in a small sample.
Template: De-identification process
- “We will de-identify transcripts by removing direct identifiers (e.g., names, phone numbers, addresses) and replacing them with brackets (e.g., [NAME], [CLINIC], [CITY]).”
- “We will also review transcripts for indirect identifiers relevant to this setting (e.g., unique job titles, rare events, small community references) and generalize them when needed (e.g., ‘a local school’ instead of a specific school name).”
- “If we maintain a re-identification key, it will be stored separately from study data on [secure location] with access limited to [role], per IRB-approved procedures.”
- “We will use de-identified transcripts for coding, team discussion, and dissemination (quotes will be attributed to pseudonyms or participant IDs).”
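As a planning aid (again, not grant language), the direct-identifier replacement above can be sketched as a pass over the transcript text. The identifier map here is a hypothetical per-study list; in practice it would be built from consent forms and field notes and, like any re-identification key, stored separately from the transcripts:

```python
import re

# Hypothetical per-study identifier map (names are invented examples).
# Store this separately from study data, per the template above.
IDENTIFIER_MAP = {
    r"\bJane Doe\b": "[NAME]",
    r"\bRiverside Clinic\b": "[CLINIC]",
    r"\bSpringfield\b": "[CITY]",
}

def deidentify(text: str, id_map: dict[str, str] = IDENTIFIER_MAP) -> str:
    """Replace known direct identifiers with bracketed placeholders.
    Indirect identifiers still require human review, as described above."""
    for pattern, placeholder in id_map.items():
        text = re.sub(pattern, placeholder, text, flags=re.IGNORECASE)
    return text
```

Note that an automated pass like this only handles the listed direct identifiers; the review for indirect identifiers (unique job titles, rare events, small-community references) remains a human step.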
Template: Handling audio vs transcripts
- “Raw audio will be treated as identifiable data and stored with additional protections compared to de-identified transcripts.”
- “We will not share raw audio outside the approved study team unless explicitly approved by the IRB and described in consent materials.”
Reminder to align with IRB
- Make sure your consent form matches your plan for: recording, who hears the audio, whether you will use vendors, retention length, and whether you may reuse quotes in future work.
Copy-ready methods text: Analysis plan (coding and thematic analysis)
Your analysis plan should connect your research questions to concrete steps. For many qualitative grants, a thematic analysis description is appropriate, but you can adapt the structure for content analysis, grounded theory, or a framework approach.
Template: Thematic analysis (step-by-step)
- Preparation: “After transcript quality checks and de-identification, we will import transcripts and field notes into [software, e.g., NVivo/ATLAS.ti/Dedoose/Excel] for management and coding.”
- Familiarization: “Two analysts will read a subset of transcripts in full, write analytic memos, and note early patterns linked to the study aims.”
- Codebook development: “We will develop an initial codebook using a combination of deductive codes (from the interview guide and research questions) and inductive codes (emerging from the data).”
- Coding: “Analysts will double-code an initial set of transcripts to calibrate code definitions, then code remaining transcripts with regular meetings to discuss questions and refine the codebook.”
- Theme development: “We will group related codes into candidate themes, review themes against the dataset, and define each theme with clear inclusion/exclusion criteria.”
- Rigor and documentation: “We will maintain an audit trail (codebook versions, memos, meeting notes, and decision logs) and document how analytic decisions were made.”
- Reporting: “We will report themes with representative, de-identified quotes and will describe deviant or disconfirming cases when relevant.”
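For teams managing the codebook in plain files rather than QDA software, the structure implied by the steps above (code definitions with inclusion/exclusion criteria, plus a versioned decision log) can be sketched as follows; the class and field names are illustrative, not a standard format:

```python
from dataclasses import dataclass, field

@dataclass
class Code:
    """One codebook entry, per the codebook-development step above."""
    name: str
    definition: str
    include: str                 # inclusion criteria
    exclude: str                 # exclusion criteria
    source: str = "deductive"    # "deductive" (guide/RQs) or "inductive" (from data)

@dataclass
class Codebook:
    version: int = 1
    codes: dict[str, Code] = field(default_factory=dict)
    decision_log: list[str] = field(default_factory=list)

    def revise(self, code: Code, rationale: str) -> None:
        """Add or update a code and record the decision (audit trail)."""
        self.codes[code.name] = code
        self.version += 1
        self.decision_log.append(f"v{self.version}: {code.name} -- {rationale}")
```

Keeping every revision in `decision_log` gives you the audit trail (codebook versions plus rationale) that the rigor-and-documentation step calls for.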
Short add-on: How you will resolve coding differences
- “Coders will discuss discrepancies in regular meetings and revise code definitions as needed; unresolved items will be adjudicated by [PI/lead qualitative analyst].”
Short add-on: Linking analysis to implementation or theory
- “We will map themes to [framework/theory, e.g., CFIR, COM-B] to support interpretation and to inform recommendations.”
If your funder or journal expects explicit qualitative reporting standards, you may reference the COREQ checklist as a guide for what to document.
Methods examples by study type (copy-ready blocks)
Use these blocks as “drop-in” paragraphs. Replace bracketed text and keep the final wording consistent with your IRB-approved protocol.
Example 1: Semi-structured interviews (thematic analysis)
“With participant consent, we will audio-record semi-structured interviews (approximately [30–60] minutes) using [device/app]. Audio files will be labeled with a study ID and stored on [secure server/encrypted drive], with access limited to authorized study personnel. Recordings will be transcribed using a [verbatim/clean read] approach with speaker labels and timestamps at [interval/topic shifts]. Transcripts will be de-identified by removing direct identifiers and generalizing indirect identifiers, and we will analyze de-identified transcripts using thematic analysis. Two analysts will develop an initial codebook using deductive and inductive codes, double-code an initial subset to calibrate, and meet regularly to refine code definitions and document analytic decisions in memos and a decision log.”
Example 2: Focus groups (group dynamics and speaker management)
“With consent from all participants, we will audio-record focus groups (approximately [60–90] minutes) facilitated by a trained moderator using a semi-structured guide. We will assign participants a first-name or code at the start of the session (e.g., ‘P1,’ ‘P2’) to support speaker identification in the transcript, and we will capture field notes on turn-taking, group agreement/disagreement, and salient nonverbal cues. Audio will be transcribed [verbatim/clean read] with speaker labels to the extent possible; overlapping speech will be noted when it affects interpretation. We will de-identify transcripts prior to coding and conduct thematic analysis with a team-based codebook, documenting codebook changes and consensus decisions.”
Example 3: Observational audio (shadowing, clinical encounters, or naturalistic settings)
“During observational sessions, we will audio-record naturally occurring interactions where consent and setting allow, and we will create structured field notes to document context (location, participants’ roles, and relevant events) without recording unnecessary identifiers. Audio will be stored securely and transcribed using a [verbatim/clean read] approach, with additional context notes drawn from the field notes (e.g., ‘[door closes]’ or ‘[patient enters room]’) when relevant to interpretation. We will de-identify transcripts by removing names and other identifiers and will code de-identified transcripts and field notes together to develop themes related to [study aims].”
Example 4: Mixed approach (rapid analysis first, then deeper thematic coding)
“We will use a two-stage qualitative approach. First, we will conduct rapid analysis using structured summary templates completed from recordings and transcripts to identify time-sensitive findings related to [aims]. Second, we will complete thematic coding of de-identified transcripts using an evolving codebook, maintaining an audit trail of summaries, codebook versions, and analytic memos.”
Pitfalls to avoid (and what to write instead)
Many methods sections fail because they sound generic or omit privacy details. The fixes below keep your language specific and easy for reviewers to verify.
- Too vague: “Interviews will be transcribed and analyzed.”
  Write: “Recordings will be transcribed [verbatim/clean read], de-identified, imported into [software], and coded using [thematic/content] analysis with a documented codebook and audit trail.”
- No link to aims: “We will identify themes.”
  Write: “We will develop themes that answer research questions about [X], including comparisons across [groups/roles/sites].”
- Unclear confidentiality: “Data will be kept confidential.”
  Write: “Audio will be stored on [secure location] with access limited to authorized personnel; transcripts will be de-identified before coding and quote use.”
- Transcript style mismatch: Verbatim transcripts for a study that only needs meaning-focused themes (or clean read for a study about speech patterns).
  Write: “We chose [verbatim/clean read] because our research questions focus on [interactional detail vs content/meaning].”
- No quality check: “Transcripts will be produced.”
  Write: “A second team member will review transcripts against audio using a checklist before coding begins.”
Common questions
- Should I choose verbatim or clean read transcription for a grant?
  Choose verbatim if your analysis depends on speech details (pauses, repetitions, exact wording). Choose clean read if your goal is clearer text for theme-focused coding and reporting, while keeping meaning intact.
- Do I need timestamps in qualitative transcripts?
  Timestamps are not always required, but they help you audit quotes, revisit key moments, and resolve disagreements during coding, especially in focus groups.
- How do I describe de-identification without overpromising anonymity?
  State the concrete steps you will take (remove names, generalize indirect identifiers, limit access). Avoid absolute claims, and keep language consistent with your consent and IRB materials.
- Can I use automated transcription in an IRB-approved study?
  You often can, but you should describe it as a draft step and explain who will review and correct it, where data is processed, and how that fits your IRB-approved data handling plan.
- How many coders should I list?
  List the roles you will actually use (e.g., two analysts plus PI adjudication). Reviewers want a realistic plan for calibration and decision-making, not a specific “right” number.
- What software should I name in the methods section?
  Name what you will use for data management and coding (even if it is a spreadsheet). If the exact tool may change, state your criteria (secure access, audit trail, team collaboration) and keep it aligned with IRB procedures.
- How do I handle overlapping speech in focus groups?
  Say you will attribute speakers where possible, mark overlap when it affects meaning, and use field notes and timestamps to support interpretation.
A quick fill-in template you can paste into your grant
“With participant consent, we will audio-record [data collection type] using [device/app]. Audio files will be labeled with study IDs and stored on [secure location], with access limited to authorized study personnel per IRB-approved procedures. Recordings will be transcribed using a [verbatim/clean read] approach with speaker labels and [timestamps frequency]. We will de-identify transcripts by removing direct identifiers and generalizing indirect identifiers, and we will store any re-identification key separately with restricted access. We will analyze de-identified transcripts using [thematic/content/framework] analysis: [number] analysts will develop an initial codebook using [deductive/inductive] codes, double-code an initial subset to calibrate, meet regularly to resolve discrepancies and refine code definitions, and maintain an audit trail of memos, codebook versions, and analytic decisions. We will report findings using de-identified quotes aligned with study aims.”
If you need transcripts that are ready for coding, clean read, or verbatim, GoTranscript can support your workflow with transcription proofreading services when you already have drafts that need a careful accuracy check.
When you’re ready, GoTranscript provides the right solutions for research teams that need consistent, copy-ready transcripts and related deliverables. You can explore professional transcription services to match your study’s methods and IRB requirements.