[00:00:03] Speaker 1: Hey friends, Katherine here from Research Rockstar, and today I have a topic that is really timely, because it's been coming up a lot lately in conversations with people who do survey research. So I want to dive right in. Let me ask you a question, and maybe you can tell me if you've had this debate recently. For years, professional survey researchers have debated the use of rating scales. Maybe it's the debate about even versus odd scales, or how you should label your scales, or whether you should use bipolar or unipolar scales. There are a lot of related debates, but perhaps the most common one, and the one that keeps coming up, is the length of the scale. Do you ever debate scale length with your colleagues or your clients? And what do you prefer? A 3, 4, or 5 point scale? Maybe a 6 or 7 point scale? Or a 10 or 11 point scale? It's an important question, and what really matters is how we determine which scale length is going to be best. Now the good news is that there has been research on scale length for years, including a lot of research on the reliability of data collected using rating scales of different lengths. Going back even to the early 90s, there have been some really great studies, some from academic researchers and some from practitioners, so there is a lot of documented research on scale length. You don't have to guess. You can look at actual published studies where the researchers have generously shared their results, and draw your own conclusions. Now, just for the sake of definition, and for people who may be newer to survey research, let me clarify what I mean by scale length.
Consider a very common rating scale like a satisfaction scale. That might be presented as a 5 point scale, a 7 point scale, or something else. Let's use the example of a 7 point satisfaction rating scale, where the value of 1 is perhaps completely dissatisfied and 7 is completely satisfied. Now this is an odd point scale, so there's a midpoint, and that midpoint might be labeled as neutral, or perhaps as something like neither satisfied nor dissatisfied. A very common type of scale. Another common rating scale would be a level of quality scale. So a 5 point rating scale to measure level of quality might range from 1 being poor to 5 being excellent. Another common rating scale that we see all the time is expectations. This is a scale that I've personally used many times. I like expectations because sometimes there's a big difference between satisfaction and expectations. I might be satisfied, but did you meet my expectations? There can be a really interesting difference between the two. So in this case, maybe I have a 5 point scale ranging from 1 being much worse than expected to 5 being much better than expected, measuring whether expectations have been missed, met, or exceeded. The midpoint on an expectation scale presented as an odd scale might be about as expected. So the full scale runs from much worse than expected, worse than expected, about as expected, better than expected, to much better than expected. Those are some simple examples of rating scales, and again, I'm showing some common odd scales formatted as 5 or 7 point scales. Now again, the good news is that if you're trying to decide whether to generally standardize your own survey research work on, say, 5 or 7 point scales, there's been a ton of research on this.
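For anyone following along in code, here is a minimal sketch of that 5 point expectations scale. The five labels are the ones given in the episode; the sample responses and the summary calculation are made up for illustration.

```python
# The 5-point expectations scale described above, encoded as a
# point-value -> label mapping. All five labels come from the episode.
expectations_5pt = {
    1: "Much worse than expected",
    2: "Worse than expected",
    3: "About as expected",  # midpoint of the odd-numbered scale
    4: "Better than expected",
    5: "Much better than expected",
}

# Hypothetical respondent ratings, just to show a simple summary.
responses = [4, 5, 3, 4, 2, 5, 4]
mean_rating = sum(responses) / len(responses)

print(f"Mean rating: {mean_rating:.2f}")  # about 3.86
print(f"Midpoint label: {expectations_5pt[3]}")
```

The same structure works for the satisfaction and quality scales mentioned above; only the labels change.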
So since this has been coming up a lot lately, I grabbed some of the historic resources that I've liked, ones with a great level of detail about how the scales were tested, and I've created a little one page summary of some of these articles. I will post a link in the show notes, so if you want to see my key notes from three of these specific studies, you can just click and download them. You'll see that a couple of the studies are from the 1990s, because that's when some of the original research took place. So there are two articles from the 1990s, and I think the third is from 2008. What you'll see, if we focus on the question of 5 versus 7 point rating scales and whether there is a difference in the reliability of the data you get from each, is that across these three studies, collected over quite a bit of time, there is a slight preference for 7 point scales. That's really interesting, because of course many of us have standardized on 5 point scales, often in the interest of keeping the questionnaire as simple as possible for our respondents. But if we're taking a pure data reliability point of view, there are cases to be made for favoring 7 point scales. So again, I've shared this with you in a PDF that you can get very easily; just click and it will download right away from the show notes. Now, this conversation has been happening a lot lately, and it's kind of interesting to me, because as many years as we've been doing survey research, there are these recurring debates that keep coming back about things like how we format and deal with scales.
And there was a really great article this week that ended up spurring some conversations I've had with some of our students here at Research Rockstar. It was published in Quirks Marketing Research Review. If you don't read Quirks Marketing Research Review, I really encourage you to go over to their website and sign up, because not only do they publish a lot of really great articles from research practitioners, they also publish case studies and in-depth articles that are practical for people who actually do market research and insights work. The article I'm referencing was published by Quirks and written by a couple of research experts from PNC Bank. It's about their experience experimenting with five and seven point scales to see which would give them the most reliable data when asking for customer experience feedback. Think about it: if you run a bank, you want to do surveys from time to time to find out what the experience is like for people who visit your branches, right? So what's the best scale? Will you get better data with a five point scale or a seven point scale? Now, I don't want to steal their thunder. You should definitely read the whole article; it's short and really well written. But the most important thing is not even their specific conclusion, it's the description of how they did the experiment. And the reason this is important is that one of the things we know from research on questionnaire design and data collection quality is that certain things can vary by your topic and by your population of interest. For example, some research has indicated that when you're dealing with a category with very low satisfaction, you might get better data reliability from different types of scales than if you're dealing with a population that's generally satisfied.
Of course, there's also the question of how diverse your population is. If you're primarily doing research with a very homogeneous population, you might draw a different conclusion than somebody doing survey research with a very culturally diverse population, because certain survey taking behaviors do vary by culture. If you want to learn more about that, look up extreme response bias; it's a good example of how survey taking behaviors can vary across different cultural populations. So again, I don't want to steal the results from PNC, but I will say this: they did a really great experiment, and the way they describe it in this article is something you can use to inspire your own. Maybe you need to do an experiment too. Maybe you and your colleagues are debating whether you should rethink your current standard of five or seven point scales, and whether you would get more reliable data if you rethought your scale length. So how are you going to do that? One of the things that was important to the folks at PNC is that they use a lot of top two box summaries. It's a very common practice for people who report survey research results to report top two box. Now, if you're comparing the top two box results from a five point scale versus a seven point scale, you should expect them to be different, right? After all, the top two values out of five is very different from the top two values out of seven. However, looking at the results from their experiment, you may very well find them surprising. So I encourage you to check it out. But again, the point here is that even though they had a particular result, they're dealing with their own banking customers. The results could have been different with different populations or topics.
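To make the top two box idea concrete, here is a quick sketch of a split sample comparison in Python. All of the numbers are simulated and made up; they are not PNC's results. The idea is simply that one cell of respondents sees a 5 point version of the question, another cell sees a 7 point version, and you compare the top two box percentages.

```python
import random

random.seed(7)

def simulate_ratings(n, points):
    """Hypothetical ratings: respondents skew positive, as satisfaction
    data often does (a made-up distribution, not real survey data)."""
    values = list(range(1, points + 1))
    # Weight the upper end of the scale more heavily.
    weights = [v ** 2 for v in values]
    return random.choices(values, weights=weights, k=n)

def top_two_box(ratings, points):
    """Share of respondents choosing the top two values on the scale."""
    top_two = {points, points - 1}
    return sum(r in top_two for r in ratings) / len(ratings)

# Split-sample design: each cell gets its own scale version.
cell_5pt = simulate_ratings(500, points=5)
cell_7pt = simulate_ratings(500, points=7)

print(f"5-point top-two-box: {top_two_box(cell_5pt, 5):.0%}")
print(f"7-point top-two-box: {top_two_box(cell_7pt, 7):.0%}")
```

In a real experiment you would of course randomize actual respondents into the two cells rather than simulate them, but the comparison step looks just like this.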
So if you do research with banking customers, you might find that their results are directly applicable to you. But maybe you do research for, I don't know, a restaurant chain, so you need a scale that's going to capture satisfaction and customer experiences for the in-restaurant experience. That could be very different from measuring a banking customer's experiences. Or what if you do B2B research? Maybe you're doing satisfaction research with HR professionals about some HR software product. Maybe you're going to get different results with that population. So I do encourage you to do experiments. And again, in this article, they show a really great example of a split sample experiment. It wasn't trivial, either, just to be clear; they had a very large sample size, so they really do have some great data to illustrate what they found for their population. So if you haven't done an experiment to see whether the scales you're using are indeed the best ones for your topics and populations, I really encourage you to check out this article and use it as a model, or at least inspiration, for designing and conducting your own experiment. If it's been a while since you've rethought your choice of scales and how you standardize on them, it may be time for a refresh. By the way, for those of you who do survey research, I want to mention that we have a new course I'm super excited about. It's about big data integration. A trend you may be aware of, and may have already started to experiment with, is that we increasingly see situations where people doing survey research need to append the raw survey data file with data from other sources, some of which might fall into the category of, quote, big data.
You might have a survey, for example, where you've got 1,000 records of data collected from your customers, and now you want to append that data set with variables from your customer database, a credit database, or another type of source, so that you can add variables based on actual behaviors, like purchase behaviors or online behaviors. In this new class, Jeff McKenna, who's a great market researcher, and someone I've known for years, is going to be talking about how to plan and execute those types of projects even if you've never done one before. So if you're being asked to append survey data sets with additional data sets, Jeff is going to walk you through it step by step. For more information on that course, please visit training.researchrockstar.com, and don't forget to check the show notes for the research summaries I mentioned earlier. That's it, everybody. If you have any questions, let me know. Otherwise, I'll look forward to talking to you soon. Thanks.
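For readers who want to see what appending a survey data set looks like mechanically, here is a minimal sketch using pandas. The column names, IDs, and values are all hypothetical, and this is not taken from Jeff's course; it just illustrates the kind of join the episode describes.

```python
import pandas as pd

# Hypothetical raw survey file: one row per respondent.
survey = pd.DataFrame({
    "customer_id": [101, 102, 103],
    "satisfaction": [5, 3, 4],  # e.g. a 5-point scale rating
})

# Hypothetical slice of a customer database with behavioral variables.
# Note 104 never took the survey, and 103 is missing from the database.
crm = pd.DataFrame({
    "customer_id": [101, 102, 104],
    "purchases_last_year": [12, 2, 5],
    "tenure_years": [4.5, 0.8, 6.0],
})

# Left-join so every survey record is kept, appended with CRM variables;
# respondents missing from the CRM get NaN for the appended columns.
merged = survey.merge(crm, on="customer_id", how="left")
print(merged)
```

A left join keeps the survey file as the spine of the analysis data set, which is usually what you want when the survey respondents are the unit of analysis.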