Speaker 1: What I cover in the next slide is a short list of general best practices. The first one is always to start with purpose: define a clear, attainable goal for your survey. I'm going to say more about this in a later slide because it's so important, but for now, just remember that the first step is defining a clear, attainable goal for your survey.

We often say don't let your survey get too long. How long is too long? As with most of the answers I give here, it depends. The idea is to ask only the questions that make the most sense for you to ask, the ones that get you the information you want. We often say keep it brief, keep it simple, and keep it specific: ask only the questions you need to ask, and ask them as clearly and simply as possible. How do you know if a question is clear and simple? We're going to focus on that in a little bit, but keep it in mind.

We suggest that you save open-ended, challenging, and more personal questions for the end of the survey. Starting with the simpler, more general questions allows respondents to get comfortable with the survey first, so that's a best practice I always suggest.

The fifth one is to allow respondents to answer "not applicable," sometimes written as N/A. You want to do this because sometimes a question simply doesn't apply to the respondent, and you want to make sure you're capturing accurate data. If you don't offer this option, they might just skip the question, and when you analyze the data you won't know why they skipped it. So include "not applicable," because you want to know when a question doesn't apply to them.

The sixth one is to include a short introduction and a time estimate when you give out your survey. Before asking respondents to answer questions, you want them to feel comfortable and to be able to gauge how much time the survey will take. And how do you know how long it takes? That leads us into number seven, the final general best practice: test or pilot your survey and your survey platform beforehand. By that I mean, while you're developing a survey, ask a co-worker to take it. Tell them the purpose so they have context: "I'm giving this survey to teachers, so put on your teacher hat, take the survey, and give me feedback on how it reads. Is anything unclear?" As for the time estimate I mentioned: if they're taking the survey online, the online platforms automatically record start and end times and will calculate the duration for you. But if you're piloting a paper-and-pencil survey, you can still capture this useful information: put a spot for the start time at the top and a spot for the end time after the last question, then do the calculation yourself to see how long the survey takes. And if you ask, say, 20 folks to pilot your survey, you get an average estimate of how long it will take (see the sketch below).
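To make that arithmetic concrete, here is a minimal sketch in Python of the pilot-timing calculation just described. The times and the number of pilot respondents are hypothetical, not data from the webinar.

```python
from datetime import datetime

# Hypothetical start and end times recorded at the top and bottom of a
# paper-and-pencil pilot survey, one (start, end) pair per pilot respondent.
pilot_times = [
    ("10:02", "10:14"),
    ("10:05", "10:13"),
    ("10:00", "10:16"),
]

# Convert each pair into a duration in minutes.
durations = [
    (datetime.strptime(end, "%H:%M") - datetime.strptime(start, "%H:%M")).seconds / 60
    for start, end in pilot_times
]

# The average becomes the time estimate for the survey introduction.
average_minutes = sum(durations) / len(durations)
print(f"Estimated completion time: about {average_minutes:.0f} minutes")
```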
Okay, so those are some general best practices. Now, I mentioned that the idea of a survey goal is so important that I'm going to say a little bit more about it. In the next slide, you'll see that I give you an example of an unclear, unattainable survey goal, and then I provide a clear and attainable one.

So I have here, for an unclear, unattainable goal: "This survey will help us learn about everything parents are thinking about their child's math homework assignments." If you can see the slide, you'll see that I highlighted the word "everything." There is no survey that will allow you to capture everything; it's just not possible, not realistic. So instead I rewrote the survey goal. It's a little bit longer, but it explains more. The now clear and attainable survey goal is: "This survey is designed to understand parents' perceptions of the difficulty, frequency, and variety of their child's math homework last year. Responses will be used to shape the school's guidance to math teachers in the coming year." I hope you see the difference: the first goal claims you're going to learn everything parents think, while the second says we're asking you these questions so we can plan for next year. It's a very defined survey goal, and it honors your survey respondents, their time, and why you're asking them for this information.

I also mentioned that there are two common types of survey questions. The first is closed-ended questions: multiple choice, rating scales, or checkboxes. Those are really easy for your respondents to answer, and if well written, they provide rich quantitative data for you to analyze and share out (the second part of this presentation is about sharing out results). So closed-ended questions are one bucket. The second bucket is open-ended questions. It won't be surprising: open-ended questions ask respondents for feedback in their own words. You'll often hear this called free response. Open-ended questions can provide very rich qualitative data for you to analyze. However, since they take much longer to answer and to analyze, we suggest you include fewer of them and put them at the end of the survey. That's a best practice we have as well.

In this next section, I'm going to talk about what we call common problematic questions. I have a list of them, and I'm going to move through them a little faster, but these are things that are really important to keep in mind when you're writing survey questions. The first common problematic question that I see a lot is what's called a leading question. A leading question is really what it sounds like: a question that leads you to answer a certain way; it signals, prompts, or encourages a certain answer. Here's an example of what I would consider a leading question: "How helpful were your friendly library staff members as you engaged in summer school teaching?" Take a look at that example. I mentioned this is a leading question, so what's the problem with it, and how would you fix it? Type it for me in the chat. Oh, wonderful, folks are already typing in and flagging the word "friendly." That word is leading us on as survey takers; it signals that you want us to say the staff were friendly. So how would you fix this? As you'll see in the next slide, the fix is quite easy: just make the wording more neutral.
So instead of what I shared in the previous slide, in this slide I say: "Rate the helpfulness of library staff members as you engaged in summer school teaching." I'm saying "rate the helpfulness," taking away the "friendly" language, and then I give respondents options to answer on a Likert scale.

I'm going to give you another problematic question type, which is what we call a loaded question. A loaded question is a question that includes an unjustified assumption; it forces respondents to agree with the assumption embedded in the question. Here's the survey example: "How much do you think test scores will improve because of your school's new reading program?" How much do you think test scores will improve? Similarly, type for me in the chat: what's problematic about this question, and how would you fix it? Yes, I see folks are getting it: "improve." "Improve" is the word that makes this a loaded question. In the next slide, you'll see that the fix I offer is to swap out the word "improve," since it presumes improvement. Instead, let's say: "How do you expect test scores to change because of your school's new reading program?" A lot of times, questions like this get written because the program folks want information on how their new reading program is going, and naturally they want improvement. But the proper question to ask is about change.

There are lots of problematic question types. The next one is the double-barreled question: a question that asks for an opinion about two different things but allows for only one response. The survey example here is: "How do you think the students' test scores and attendance will change because of the new afterschool program?" How would you fix this? It's asking about two very different things, test scores and attendance. You would hope both will improve, but they're separate; one could improve while the other doesn't. And I love it, people are chatting in: ask it as two separate questions. That's exactly it. That's the simple fix. Perfect.

So we talked about double-barreled questions. Next we actually have something called a double-barreled answer, if you can believe that. That's an answer option that presents two possibly different opinions as a response to a single question. With a double-barreled answer, the question itself is actually okay; it's very similar to a double-barreled question, but in this case the problem is in the answer. Here's the example. The question is: "What was your personal experience with mathematics in high school?" That question is fine. But the answer choices I gave ran from one, "did not like / did not succeed," to five, "passionate about / excelled at," and those are not okay. You can imagine someone being good at something, succeeding at it, but not really liking it. I think about a friend of mine who is a real whiz at math but doesn't really like it. What she actually enjoys is cartooning.
Drawing and the creative side. So she's really good at math, but she doesn't really like it. You see how liking something and succeeding at it are different? All right.

And finally, we have something called the double negative question. Even talking about it makes my mind go a little crazy: it's a question that contains two negative elements intended to create a positive meaning, which really confuses the respondents who take the survey. It's what you get when people speak in double negatives. The example I give is: "Is it not uncommon for teachers to coach a sport after school?" "Is it not uncommon" is just too hard for people to process. You're asking for people's time when they take your survey, so you want to reduce their cognitive load as much as possible, and you do that by avoiding double negative questions. So instead of asking the double negative, "Is it not uncommon for teachers to coach a sport after school?", you want to say: "How common is it for teachers to coach a sport after school?" That's the fix: you just take out the double negative.

I'm going to move forward here and talk about Likert scales and rating scales. And yes, you heard me say "Lick-ert" scale. You'll also hear it pronounced "Lie-kert," and actually more people say it that way. But just a fun fact: when I went to grad school, I had a professor who actually knew Likert. Rensis Likert (thank you, Don) was a psychologist who, back in the 1930s, created the Likert scale that we're all familiar with and use, and that's how he said his name, so I say "Lick-ert" scale.

Likert scales, or rating scales, are closed-ended questions and are really great sources of quantitative data. There's a debate in the survey world, in survey design and survey analysis, about how many answer choices is enough. My answer might not be satisfactory to some of you: it depends. It really does depend. For my work and for most of my purposes, honestly, 95% of the time, five choices is plenty: rate from one to five, choose from five options. It depends on what you want from the data. If you don't need a lot of texture and you just want a sense of whether folks are positive, negative, or neutral, you only need three options. If you need a lot of texture in your responses, I would say five is really good.

We also suggest that you maintain balance and objectivity in your scales. That's easier said than done sometimes, which again goes back to one of our best practices: always get someone to review your survey and take it for you. Here I give you two examples of answer options. The one on the left starts with "not helpful"; the one on the right starts with "very unhelpful." So type in the chat: is it the one on the left or the one on the right that has more balance and objectivity? Oh my goodness, you are so fast. I'm seeing a lot of "right, right, right," and that is correct. The key indicator is the middle neutral option, three, "neither helpful nor unhelpful." The right-hand scale is balanced because it gives you two negative options to respond with, "very unhelpful" and "unhelpful," that neutral midpoint, and then two positive options, "helpful" and "very helpful." So it's a nice balance (see the sketch below).
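As a small illustration of what that balance means structurally, here is a sketch of both scales as Python data. The balanced scale matches the slide; the unbalanced one is a hypothetical reconstruction, since the transcript only tells us it starts with "Not helpful."

```python
# The balanced scale from the slide: two negative options, a neutral
# midpoint, and two positive options that mirror the negatives.
balanced_scale = {
    1: "Very unhelpful",
    2: "Unhelpful",
    3: "Neither helpful nor unhelpful",
    4: "Helpful",
    5: "Very helpful",
}

# A hypothetical unbalanced scale; only the first label comes from the
# slide, the rest are invented here. Note there is no neutral midpoint
# and the options skew toward the positive pole.
unbalanced_scale = {
    1: "Not helpful",
    2: "Somewhat helpful",
    3: "Helpful",
    4: "Very helpful",
    5: "Extremely helpful",
}
```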
Likert scales and rating scales are also really wonderful for gauging your respondents' experience before and after an event. Say you're doing a workshop; the example I give here is teachers attending a series of workshops to build their knowledge of specific strategies for English learners. What a perfect opportunity to embed a question in what we call a pre-survey. The first time you meet with them, before they even get your workshop content, ask them on a survey: "On a scale of one (least confident) to five (most confident), how confident do you feel in your ability to craft lesson plans with specific strategies for English learners?" Then, at the end of the series of sessions, you ask basically the same question; the only difference is it begins, "After participating in this training," and the rest of the question remains the same. That's a great opportunity for you to do some calculations about change in knowledge, change in experience, change in feeling, whatever topic you're covering. So I really love the use of Likert scales and rating scales for this.

On the following slide, I give you another example. Folks ask me, "Well, Tran, that's nice if you have three or four sessions, so you can ask before and after." But in the next example, it's really the same single event, and you can still capture this data, which is wonderful. This is an example of measuring change in understanding. Let's say you did a training, and in the same post-event survey you ask respondents to rate their understanding before and after, with answer choices like "not at all," "a little," "a moderate amount," "a lot," and "a great deal." Then you're able to show your boss, your funder, or yourself whether you've made an impact on their learning: respondents self-assess how much they knew beforehand, and then, after taking your three-hour workshop, whether their familiarity or comfort is now at a different spot. So it's a really great way to get change data from one survey at a single event (a small sketch of the change calculation follows below).
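To show what that pre/post calculation might look like, here is a minimal Python sketch. The ratings below are made-up numbers for illustration, not data from the webinar.

```python
# Hypothetical 1-5 confidence ratings from the same group of teachers:
# once before the first workshop session and once after the last one.
pre_responses = [2, 3, 2, 1, 3, 2, 4, 2]
post_responses = [4, 4, 3, 3, 5, 4, 4, 3]

pre_mean = sum(pre_responses) / len(pre_responses)
post_mean = sum(post_responses) / len(post_responses)

# A simple change-in-means summary you could share with a boss or funder.
print(f"Mean confidence before: {pre_mean:.2f}")
print(f"Mean confidence after:  {post_mean:.2f}")
print(f"Change: {post_mean - pre_mean:+.2f} points on a 5-point scale")
```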
I'm going to end by talking about a really meaty topic that deserves its own time slot: how to create more inclusive surveys. It's really important to my organization and to the REL program that we put out surveys that are inclusive of the various lived experiences folks have. I give you four pointers here on how to create more inclusive surveys.

The first is to be thoughtful about demographic questions. Thankfully, long gone are the days when you'd take a survey with 30 demographic questions and wonder why, because you weren't sure how they all applied to what you were doing. These days, the idea is to be really thoughtful about asking them: do you really even need to include them?

The second point is to make survey questions mandatory only if a response is necessary. This is critical, in my opinion, because I've seen so many surveys with that asterisk marking a question as required. Really ask yourself: is it required? If it's a sensitive question that you'd love to get information on but isn't strictly necessary, and there are other parts of the survey you do need answers to, don't make it required. Give people the option of skipping it.

Third, be mindful of language use in your survey. There are great resources out there for this, and we link to them in the slide deck you'll have access to. If you need support, there's a lot of really good, free, vetted support available. And finally, I do this in my own work: I do surveys for a living, but I still make sure I consult resources on inclusivity and bias-free language.

My final thought is this: when we administer better surveys, it leads to more specific and accurate data, which is what we want when we're collecting data. And better data leads to better evidence for us to make good, informed decisions. We do that by following the points I mentioned in the previous slides, so I won't go through them again. But everything I've reviewed for you, we actually published just late last week as an infographic, a reference guide. It's very short, about eight pages, and I highly suggest you get access to it, because it summarizes the half hour I just went over. I'm really thankful for the opportunity to share that infographic.