
Focus Group Recruitment and Screener Notes: What to Document in the Transcript Pack

Andrew Russo
Posted in Zoom · Apr 8, 2026

Focus group transcripts can mislead you if they arrive without recruitment and screener context. To interpret quotes correctly, your transcript pack should document how you defined segments, filled quotas, and applied exclusions, plus any last-minute changes in the room. This article explains exactly what to record and includes a simple “transcript pack cover sheet” template you can reuse.


Key takeaways

  • Always attach the screener, segment definitions, quotas, and exclusions to every transcript pack.
  • Document what actually happened, not only what was planned (no-shows, substitutions, late arrivals, and off-script recruiting).
  • Give each participant an anonymized ID and map that ID to segment attributes used in analysis.
  • Use a one-page cover sheet so analysts can evaluate how “projectable” each quote is.
  • Standardize these notes across vendors, markets, and waves to avoid hidden differences.

Why recruitment and screener notes belong with transcripts

A transcript captures what people said, but it does not explain why these people were in the room. Without recruitment and screener notes, the same quote can be read as a broad insight when it really reflects a tight niche segment or a quota that was hard to fill.

Recruitment context also helps you avoid common analysis errors. For example, if you learn that half the group was recruited from one panel source or that several participants were “soft-qualified,” you can weigh the strength of certain themes differently.

What goes wrong when context is missing

  • Segment confusion: Analysts do not know which quotes came from which target segment.
  • False certainty: A loud opinion sounds representative, but the quota structure says otherwise.
  • Hidden bias: A sourcing change (panel vs. intercept) affects the type of participant recruited.
  • Misused exclusions: You may treat a quote as “category user” insight when the person should have been excluded.
  • Inconsistent waves: Two waves look comparable, but recruiting differed in small ways that matter.

What recruitment details to document (and where to place them)

Keep recruitment and screener notes in two places: (1) a one-page cover sheet at the front of the transcript pack, and (2) a full appendix with the screener and recruiter instructions. The cover sheet helps analysts work fast, and the appendix helps them audit details later.

1) Segment definitions (the “who” behind each quote)

Segment definitions tell the analyst what “Segment A” actually means in real screening terms. Write them in plain language and tie them to exact screener criteria so they are not open to interpretation.

  • Segment name: Use a short label that can appear in charts (for example, “New Movers,” “Switchers,” “Heavy Users”).
  • Operational definition: State the exact screening rule (for example, “moved in the last 12 months” or “switched primary brand in last 6 months”).
  • Must-have attributes: The attributes that define membership (not just nice-to-haves).
  • Key analysis flags: Any traits you expect to drive differences (for example, “uses competitor X,” “has kids under 10”).

If you have multiple markets or waves, note any segment definition changes. A small change (like 6 months vs. 12 months) can change the story.

2) Quotas (the “how many” and why it matters)

Quotas explain the planned mix of participants and how closely you hit it. Analysts use quotas to understand which themes might be over- or under-represented.

  • Planned quotas: Include the target counts per segment and any cross-quota (for example, “4 parents / 4 non-parents”).
  • Achieved quotas: List what you actually got in each group.
  • Quota difficulty notes: If a cell was hard to fill, say why (niche criteria, timing, incentive constraints).
  • Balancing rules: Any rules like “no more than 2 from the same employer” or “mix of devices.”

Include both “planned” and “achieved” numbers because reality often differs. Those differences can explain why one group had more extreme views or less category knowledge.
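
The planned-versus-achieved comparison is easy to automate as part of building the cover sheet. Here is a minimal sketch in Python; the segment names and counts are illustrative, not from a real study.

```python
# Hypothetical quota plan and field results for one group.
planned = {"New Movers": 4, "Switchers": 4, "Heavy Users": 4}
achieved = {"New Movers": 4, "Switchers": 2, "Heavy Users": 5}

def quota_gaps(planned, achieved):
    """Return {segment: shortfall (-) or surplus (+)} for cells that missed plan."""
    return {
        seg: achieved.get(seg, 0) - target
        for seg, target in planned.items()
        if achieved.get(seg, 0) != target
    }

print(quota_gaps(planned, achieved))
# → {'Switchers': -2, 'Heavy Users': 1}
```

Any non-empty result is a signal to add a quota difficulty note on the cover sheet explaining why the cell was missed.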

3) Exclusions (the “who was kept out”)

Exclusions protect the discussion from people who can distort results. In the transcript pack, exclusions help analysts judge whether a quote could come from a disallowed background.

  • Industry exclusions: Competitors, agencies, media, research, or other relevant industries.
  • Prior research participation: Rules like “no groups in last 6 months.”
  • Household conflicts: No multiple participants from the same household.
  • Employment conflicts: No direct involvement in purchasing if you want end users only (or vice versa).
  • Other sensitive exclusions: Anything tied to compliance or ethics for the study.

Also note if you used “soft” exclusions (allowed with approval). Analysts need to know when rules were flexible.

4) Sourcing and recruitment method (the “where they came from”)

Two participants with the same screener answers can still behave differently depending on how they were found. Document the sourcing method to help interpret engagement, comfort, and potential professional-respondent risk.

  • Source type: Panel, recruiter database, intercept, customer list, referral, social ads.
  • Any mix rules: For example, “max 3 from panel A.”
  • Verification steps: Basic steps like confirmation calls, ID checks, or proof-of-eligibility requirements.
  • Incentive: Amount and form (gift card, cash, product), plus any differences across groups.

5) The final roster and attendance reality (the “who actually spoke”)

Analysis should reflect the real session, not the planned one. Include a simple attendance log and note any deviations that change interpretation.

  • Participant count: Invited vs. attended vs. completed.
  • No-shows and late arrivals: Note who missed key activities.
  • Substitutions: Who was swapped in and why.
  • Dominant voices: If one person drove the discussion, note it briefly for context.
  • Technical issues: Audio dropouts, crosstalk, or any portion that is hard to transcribe.

What screener notes to include so quotes stay interpretable

The screener itself is necessary, but analysts also need “screener notes” that explain how the screener was used in practice. These notes prevent people from treating screeners as perfect filters when they often involve judgment calls.

Include the full screener (as administered)

Attach the screener version that recruiters actually used, including any edits made after kickoff. If you changed wording mid-field, include both versions and the change date.

  • Screener version name and date
  • Exact question wording and answer options
  • Termination points (what disqualified someone)
  • Skip logic notes (even if described in text)

Call out “judgment” questions

Some screeners include questions that rely on recruiter judgment, like “articulate” or “comfortable sharing opinions.” When you use these, document how recruiters defined and applied them.

  • What recruiters listened for (one sentence)
  • How you avoided bias (for example, using consistent prompts)
  • Any overrides (who approved and why)

Track any soft-qualifications or exceptions

If a participant did not perfectly match criteria but you allowed them in, label that clearly. During analysis, you can still use their comments, but you will avoid accidentally treating them as a perfect segment fit.

  • Exception type: quota fill, availability, borderline category use, etc.
  • What rule they missed
  • Approval: who approved it and when

Provide a participant attribute table tied to transcript IDs

The fastest way to keep quotes interpretable is to link each speaker label to key screener attributes. Do not include personally identifying information in the transcript pack; use anonymized IDs instead.

  • Participant ID: P1, P2, P3, etc. (match the transcript speaker tags)
  • Segment label
  • Quota cells: only the ones you will analyze (for example, age band, usage level)
  • Recruitment source
  • Any exceptions
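
In practice this table is usually a small CSV alongside the transcripts. The sketch below shows one way to load it and resolve a transcript speaker tag to its attributes; the column names and values are hypothetical.

```python
import csv
import io

# Hypothetical anonymized attribute table (normally a standalone .csv file).
attribute_csv = """participant_id,segment,age_band,source,exception
P1,Switchers,25-34,panel,none
P2,Switchers,35-44,customer list,none
P3,New Movers,25-34,panel,quota fill
"""

rows = list(csv.DictReader(io.StringIO(attribute_csv)))
by_id = {row["participant_id"]: row for row in rows}

# During analysis, map a transcript speaker tag straight to its context.
print(by_id["P3"]["segment"], "|", by_id["P3"]["exception"])
# → New Movers | quota fill
```

Because the keys match the transcript speaker tags (P1, P2, ...), an analyst can check a quote's segment and exception status without ever touching identifying data.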

If privacy rules apply, keep the ID-to-name key in a separate secure file that stays with the recruiter or project lead. For general privacy guidance, see the GDPR overview if you work with EU participants and handle personal data.

A simple “transcript pack cover sheet” template (copy/paste)

Use the template below as the first page of every transcript pack. Keep it to one page so it stays useful during analysis.

Transcript Pack Cover Sheet (Template)

  • Study name: [Project / client]
  • Wave / phase: [Wave 1, Wave 2, etc.]
  • Method: [In-person focus group / Online group / Mini-group]
  • Market / language: [City / country / language]
  • Field dates: [Start–end]
  • Session details: [Group 1 date/time, length, moderator]
  • Target segments (definitions):
    • Segment A: [plain-language definition + exact screener rule]
    • Segment B: [plain-language definition + exact screener rule]
  • Quota plan (planned → achieved):
    • Segment A: [n] → [n]
    • Segment B: [n] → [n]
    • Key cross-quotas: [example: parents/non-parents, device type] planned → achieved
  • Key exclusions: [list the top 5–10 that matter for interpretation]
  • Recruitment sources: [panel, customer list, intercept] + any mix limits
  • Incentive: [amount + form] (note differences, if any)
  • Attendance notes (what changed):
    • Invited: [n] | Attended: [n] | Completed: [n]
    • No-shows: [count + quota impact]
    • Late arrivals/early departures: [P# + which sections missed]
    • Substitutions: [P# replaced by P# + reason]
  • Transcript notes:
    • Speaker labeling convention: [P1–P8, Moderator = MOD]
    • Any sections with low audio quality: [timestamps]
    • Any off-topic or sensitive moments redacted: [yes/no + approach]
  • Appendix included:
    • Screener (final): [file name + version]
    • Recruiter instructions: [file name]
    • Participant attribute table (anonymized): [file name]
    • Stimuli: [file name(s)]

How to use the cover sheet during analysis (a simple workflow)

The cover sheet should not sit unused at the front of a PDF. Build it into your analysis steps so every theme gets checked against recruitment reality.

Step 1: Set up your coding frame with segments and quotas

Before you code, copy the segment labels and key quota cells into your analysis workspace (spreadsheet, Dovetail, NVivo, etc.). Then set up filters so you can compare themes by segment and spot when a theme is driven by one cell.

  • Create a tag for each segment (Segment A, Segment B).
  • Create tags for the quota cells you plan to compare (only the meaningful ones).
  • Add a tag for “exception” so you can isolate soft-qualified participants.
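
If your workspace is a spreadsheet or a lightweight script rather than a dedicated tool, the coding frame can be captured as a simple structure. This is a sketch with illustrative labels; the tag groups would come from your own cover sheet.

```python
# Minimal coding frame assembled before coding begins (labels are examples).
coding_frame = {
    "segments": ["Segment A", "Segment B"],
    "quota_cells": ["parent", "non-parent", "mobile-first"],
    "flags": ["exception"],  # isolates soft-qualified participants
}

def valid_tags(frame):
    """Flatten the frame into the set of tags analysts may apply."""
    return {tag for group in frame.values() for tag in group}

print(sorted(valid_tags(coding_frame)))
```

Validating applied tags against this set catches typos like "Segment AA" before they silently split a theme in two.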

Step 2: Sanity-check big quotes against the roster

When a quote feels decisive, check who said it and whether they represent a planned quota cell. If a powerful quote comes from an exception or a late arrival, you can still use it, but you should label it accurately.

  • Confirm the speaker’s segment and relevant attributes.
  • Check whether the group hit that quota or missed it.
  • Note any factors that could shape tone (recruitment source, incentive differences).

Step 3: Track “coverage” so you do not overgeneralize

Coverage is a simple check: did a theme appear across segments and groups, or only in one place? Use the achieved quota numbers to decide how careful your wording should be.

  • Wide coverage: Shows up in multiple segments and sessions.
  • Narrow coverage: Shows up in one session or one segment.
  • Exception-only: Shows up mainly among soft-qualified participants.
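
The three coverage buckets above can be computed directly from coded quotes. Here is a sketch assuming each quote record carries the segment, session, and exception tags from the coding frame; the themes and records are invented for illustration.

```python
# Hypothetical coded quotes; fields mirror the tags set up before coding.
quotes = [
    {"theme": "setup stress", "segment": "Switchers", "session": "G1", "exception": False},
    {"theme": "setup stress", "segment": "New Movers", "session": "G2", "exception": False},
    {"theme": "price doubt", "segment": "Switchers", "session": "G1", "exception": True},
]

def coverage(theme, quotes):
    """Classify a theme as wide, narrow, or exception-only."""
    hits = [q for q in quotes if q["theme"] == theme]
    if hits and all(q["exception"] for q in hits):
        return "exception-only"
    segments = {q["segment"] for q in hits}
    sessions = {q["session"] for q in hits}
    if len(segments) > 1 and len(sessions) > 1:
        return "wide"
    return "narrow"

print(coverage("setup stress", quotes))  # → wide
print(coverage("price doubt", quotes))   # → exception-only
```

A "narrow" or "exception-only" result does not kill a theme; it just tells you to hedge the wording in the findings.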

Step 4: Write findings with recruitment context baked in

Instead of writing “participants said,” specify the segment when it matters. If your quotas were intentionally balanced, say so, and if they were not met, write the finding with appropriate limits.

  • Good: “Switchers in Group 2 described trial as stressful when setup took longer than expected.”
  • Risky: “People find setup stressful.”

Pitfalls to avoid (and what to do instead)

Most transcript packs fail for predictable reasons. Fixing them usually takes a one-page standard and a habit of documenting deviations.

Pitfall 1: Using vague segments like “Gen Pop”

  • Problem: “General population” hides real inclusion rules.
  • Do instead: Write the actual definition (for example, “category aware, used in last 12 months, not employed in excluded industries”).

Pitfall 2: Sending quotas without achieved results

  • Problem: Analysts assume the plan happened.
  • Do instead: Add an achieved column and short notes on shortfalls and substitutions.

Pitfall 3: Not labeling exceptions

  • Problem: A single out-of-scope participant can skew interpretation.
  • Do instead: Mark exceptions in the roster and add a tag in your coding system.

Pitfall 4: Mixing recruitment sources without noting it

  • Problem: Differences in participant behavior get misread as “market differences.”
  • Do instead: Record source by participant ID and summarize the mix on the cover sheet.

Pitfall 5: Including identifying details inside transcripts

  • Problem: You increase privacy risk and limit who can access materials.
  • Do instead: Use anonymized IDs and store any identifying key separately with restricted access.

If you operate in the U.S. and do healthcare-related groups, you may also need to consider privacy rules for protected health information; see the HHS HIPAA Privacy Rule overview for foundational guidance.

Common questions

Do I need to include the full screener in the transcript pack?

Yes, include the full screener version that was administered, plus any revisions. The cover sheet is a summary, not a replacement.

How detailed should participant attributes be?

Include only attributes you will use in analysis and reporting, and keep them anonymized. Store names and contact info separately and securely.

What if quotas changed mid-field?

Document what changed, when it changed, and who approved it. Then reflect the final plan and achieved results on the cover sheet.

Should I add recruitment incident notes (like suspected professional respondents)?

If you have credible concerns, note them in a neutral way and tie them to a participant ID. Keep the note factual and avoid speculation.

How do I handle mixed methods (online + in-person) in one transcript pack?

Separate them by session and clearly label the method on each cover sheet. Method differences can change how comfortable participants feel sharing sensitive opinions.

Who should own the cover sheet: recruiter, moderator, or analyst?

One person should compile it, but it should pull inputs from all three roles. Recruiters provide sourcing and screening details, moderators add what happened in-session, and analysts confirm which attributes matter for coding.

Can I standardize this across vendors?

Yes, and you should. Provide the cover sheet template in your RFP or kickoff materials so every vendor delivers the same minimum context.

Helpful transcript formatting note (so IDs match analysis)

Ask for transcripts that label speakers consistently (MOD, P1, P2) and keep timestamps at a predictable interval. If you use AI tools for speed, plan a quick review for speaker labels and terminology; see GoTranscript’s automated transcription option if you need a starting point that you can refine.

If you already have drafts but need them cleaned up for analysis, a dedicated review step can help; GoTranscript also offers transcription proofreading services.

Conclusion: make transcripts usable by shipping the context

A transcript pack is most valuable when it lets any stakeholder understand who spoke, why they were recruited, and how closely the group matched your intended design. A one-page cover sheet, a complete screener appendix, and a participant attribute table will keep quotes grounded and reduce rework during analysis.

If you want transcripts that are easy to analyze and share across teams, GoTranscript provides the right solutions, including professional transcription services that fit well into a standardized transcript pack workflow.