DIY 2.0: How AI Will Reshape Market Research (Full Transcript)

Panelists debate AI-driven automation in research—its democratizing benefits, quality risks, bias concerns, and how insight teams must adapt.

[00:00:04] Speaker 1: So, hello, everybody. Happy Friday morning, afternoon, or really, really early morning, depending on where you are joining us from. I am Nikki Lavoie from MindSpark Research International, based in Paris, and I am going to be bringing you a really interesting conversation today, live, with three great and well-respected research figures from our industry. By way of introduction, please allow me to introduce Ray Poynter. Ray has been around for a while: he's been working in the industry for 40 years at the intersection of innovation, technology, and research. He is the founder of NewMR, which focuses on training and consulting, and he writes a wide range of articles, books, chapters, and blog posts. Ray's mission is to have fun, help people learn more, and hopefully make a little bit of money in the process. Next to him, virtually speaking, is Catherine. Catherine Korosoff is a market research professional and college professor who's passionate about excellence in business innovation through fresh, objective customer insights. She advocates for the use of new methods and rigorous analysis to help all researchers become research rock stars. She leads a team of 10 instructors at Research Rockstar, which has a catalogue of more than 30 research training topics. And last, but certainly not least, is Edward Appleton, who runs global sales for Happy Thinking People. He has been working in market research for over 20 years, partly on the agency side, but also has some experience on the client side. He writes a very interesting market research blog, which, if you haven't checked it out already, please do, and he is a regular contributor to ESOMAR, MRS UK, and other industry organizations. He is currently fascinated by the changing role of qual in an increasingly digital world.
So that is all for us, and to explain a little bit more about why we decided to bring you this live conversation today, I'm going to start us off with Ray, who essentially started this conversation: as close to an organic, viral conversation within our industry on LinkedIn as you can possibly get. Ray has unleashed a wide variety of opinions and thoughts, some happy and some less than happy, on the idea of DIY 2.0. So Ray, maybe you could get us started by talking a little bit about what you mean by DIY 2.0, and why you think it's important for us to be talking about it today?

[00:02:40] Speaker 2: Thanks Nikki, and thanks everyone for being on the line. So what I mean by DIY 2.0 is the next step. If we think about DIY, we will often think about something like SurveyMonkey, but we could also be thinking about some of the pre-packaged solutions like Validated have, or some of the integrated packages for online communities that people are taking up and using. And those are great, but to do them properly they actually require quite a lot of skill. That is one of the things that's caused a lot of discussion in the past about whether it's a good thing or a bad thing that DIY is around. But the next step, I feel, is going to be using AI, using automation. So take a very simple case of a survey. It would say, okay, what do you want to do? I want to do a concept test. And it's going to say, well, are you more about feedback or are you more about scoring? You say, oh, I'm more about scoring. It says, right, how many concepts? That's it. Here are the questions you should be asking, here are two or three you might want to tweak. The sample should be regular buyers in the Midwest; do you want me to buy that for you? Yes, buy the sample for me. It runs the analysis, and it comes back and says, okay, the viable breaks are these, the patterns are these, here are some key comments you might want to use. Now, can you finish the report from there? And that's what I see as 2.0: something which is much more automated, much less error-prone, and capable of producing good research.

[00:04:20] Speaker 1: Great. So, Catherine, I'd like to jump over to you because I know that you are particularly interested in how we use the term DIY research. So building off of what Ray has said, what would you add or maybe even change about that definition if you could?

[00:04:34] Speaker 3: Thank you, Nikki. So I absolutely, of course, agree with Ray that there are a lot of changes going on. There are more tools that involve automation. Automation is becoming a huge part of what we do in market research on a day-to-day basis, and it is coming in many forms, thanks to AI, or things that are being called AI but are really just very elaborate rules-based systems. But in any case, a lot of automation. My concern is that I feel this is really more market research 3.0, and not necessarily DIY 2.0. I do think that referring to some of these things as DIY is a bit imprecise from my point of view, because it's really just about different tools that are available to us as professional researchers. So whether you're doing market research in a full-service agency, in an educational institution, or on a client-side insights team, there are a lot of different tools available to us, and some of them involve automation and some of them don't. I don't see the point in defining platforms of a certain nature as DIY, because it implies that those are only used by people with low levels of skill or training. In fact, I think a lot of the tools that are available for automation are great for people at different levels of skill, and I also know from direct personal experience that a lot of automation tools are in fact used by experienced researchers. So my concern is just that I don't think we should be calling it DIY 2.0. I think this is really just part of market research 3.0, or maybe it should be market research 4.0 at this point. I'm not really sure.

[00:06:17] Speaker 1: Okay. Great. Edward, I'm interested in your perspective on this as well, knowing that you have been both client and agency side, as has Catherine. What are your thoughts: is this just another evolution of the set of tools we have at our disposal, or is it some other form of evolution that we're seeing happen in the industry?

[00:06:39] Speaker 4: I think it's massive. It's massively important for both agencies and client-side people. A huge benefit is that there'll be more decisions that are based on evidence rather than gut, and it overcomes massive barriers in terms of cost and price. So I think it has huge, huge benefits: more organizations that historically couldn't afford to do a traditional piece of market research can now look to research, look for a consumer point of view, and factor that into their decision making. I think you need to be careful and ask, what are you actually looking to do for your particular need? Is it a question of just consumer closeness, or getting some quick feedback, or are you really looking for something which is more like an insight? Because the notion of doing it yourself is one you see in very, very many industries. If I wanted to, I could cut my own hair, but I probably don't. Well, maybe you think I did, but I didn't. And I think one of the most important things is to understand what it is suitable for. Is it fit for task? I don't know whether I'm concerned about the vocabulary, but if you take some software, some AI, how much can that take you into the interpretive, the insight generation? And where do you say, no, you need somebody with analytical skills and an academic background to help you shape the output so that it's really useful for marketing?

[00:08:27] Speaker 2: Okay. So, coming back to this, perhaps we don't worry about whether it's called DIY, but what I'm really talking about is massively changing who can do the research. And I'll give you a couple of examples from the past. When I was a student at university, a software package came in called the Statistical Package for the Social Sciences. Later on it was renamed SPSS, and that was referred to as DIY, because all of a sudden you didn't need statistics training and background to be able to run it, and it led to some shocking things. A lot of people would put the wrong data in, and they'd go everything by everything, run all of the tests. It moved forward, and we now see that as a specialist thing, but the number of people who could run tests massively increased. Sawtooth Software in the 1980s took conjoint analysis, which required extreme skills and being a SAS programmer, and made it accessible to a much greater number of people. You still had to be a bit of a marketing scientist, a bit of a programmer, and a bit of a nerd, but it was DIY in the sense that you no longer needed the skills of the previous cohort. And that is what I think is happening next: we're going to see this real expansion of people who can do research, and also an expansion of people who think they can, and because of these changes there isn't a complete overlap between those two groups.

[00:09:59] Speaker 1: So that's a really interesting point, and I'd like to build off of that, and off of what Edward was saying, and pose some questions back to the mini panel here. From my perspective, there are two potential ways to look at this, and probably more than that. But one is, let's use the phrase DIY for now, just to give us a nice term that we can all refer to. Where do you guys think that DIY can actually add value? And then I'd like to hear the other side of that: where do you think that the idea of DIY, so a new set of people having access and doing this, is going to potentially fall short and maybe create some new needs and some new things that we need to address? How about you, Catherine?

[00:10:46] Speaker 3: Thank you. So when we talk about these types of tools, and automation in general in market research, I think this is huge for those of us who are professional market researchers. The brutal reality is that today we have so much work to do. A lot of the teams that I work with are expected to be doing work that's great, that's fast, and that's on a budget. They're juggling projects, clients, suppliers. I think the work of professional market research and insights folks is actually really, really hard. So why not automate what we can? Those of us who do, for example, a lot of survey research have certain processes we use. We're sort of doing human automation when we're doing our questionnaire design and our reporting, and if I can use some of these automated tools to take some of those steps and truly automate them, I think that's fantastic. Then I get to do what I really need to be doing these days as an insights professional: being more consultative at the front end of a project to make sure that I'm coming up with truly the right methodology, because these days, how we blend methodologies and blend data sources is something that takes a lot of time. And it also saves me more time at the end of the project. So instead of just going through these mechanical steps, my human automation, I'm getting actual automation, and I can add my value on top of that. So I'm very excited about automation. I think it's going to gain us a lot of efficiency on things that, frankly, we were doing pretty much on automatic inside our heads anyway.

[00:12:26] Speaker 1: Yeah. Edward, what do you think? Are there any potential areas where you see that automation and DIY can be particularly helpful, or, on the contrary, potentially dangerous?

[00:12:41] Speaker 4: Well, as Ray mentioned, the expansion of the reach, that more people will be able to do market research themselves, and maybe more people who think they can, is potentially very, very beneficial. One of my main concerns is that if you take something which looks deceptively simple, like qualitative research, it's just talking to a bunch of people, right? And I'd say, well, actually, no, there are plenty of things you need to do correctly, even things that sound as simple as asking the right questions, not asking too many questions, making sure that the group discussion doesn't get dominated or go in the wrong direction. And that's just one aspect. Then in the analysis process, again, you need people who have the ability to structure and sort through data, see patterns quite quickly, and make sense of it. And third, and I think most important, as a client-side researcher, after about six months you become blind. You see your problem according to your internal perspective, often through a product lens, if you like. You're no longer genuinely consumer-driven. So if you just say, okay, we'll do this all ourselves on the qualitative side, you have a danger of genuine company myopia: you don't actually see things through an objective lens.

[00:14:11] Speaker 1: Yeah, that's a really good point. I'm going to throw it back to you guys really quickly, because we are already sort of talking about people who can do market research well, versus people who think they can do market research well. And I'm interested in whether any of you can clarify that a little bit further. So, just for full transparency with the audience watching now, we had some pre-discussion about this, and I will tell you that the panelists are in agreement that this does not inherently mean client-side researchers versus everyone else. So I'm wondering if you can maybe enlighten us a little bit more on these two camps of researchers: those who can, and those who think they can.

[00:14:54] Speaker 2: I think we have all seen awful surveys produced in SurveyMonkey by somebody who doesn't know the basic rules of writing a survey: they're leading, they're missing characteristics. One classic is assuming we all live in the United States, and therefore you have to pick which of the 50 states you live in. Equally, I think all of us have probably seen equally bad surveys from professional research companies. So there are people who can't actually do research well in both sets of organizations. And to some extent, as we improve these tools that I'm calling DIY, those people should make fewer mistakes, because the tools are going to push them towards canned solutions. Those canned solutions will not be great, but they will be better than the worst stuff that's coming out at the moment. So I think that is one of the things we will see with people who think they can do the research. And I think we should come back to the qualitative point, because it's a deeper point and a really important one.

[00:16:04] Speaker 1: Yeah, and that leads on to another great question. Edward raised an excellent point, which is that qualitative is one of those types of research which is inherently more complex than it seems. What are your thoughts on the interplay between these quote-unquote DIY solutions and the quantitative side, which might be more visibly complex, but has clearer instructions for how to arrive at the appropriate result? How do you think DIY is going to influence current and future quantitative methods? Catherine, what do you think?

[00:16:38] Speaker 3: Well, the truth of the matter is that 80% of survey projects are actually pretty simple. If you really look at the survey research being done, even by the most qualified researchers, there's a lot of very simple survey research out there these days, and for good reasons. One of the reasons, of course, is that over the last 20 years we've seen response rates plummet and data quality in online surveys plummet, and so we are very concerned about making sure our surveys are very short, mobile-first in design, and so forth. So we've already seen that survey research is in many cases extremely simple these days, and again, for good reasons. And frankly, I think that with some of these DIY tools, smart marketing people, maybe brand managers or product managers who have some light training, not necessarily people with 20 years of survey research experience, can do a great job with an automated tool. But then there's the other 20% of projects where you really do need somebody who's got that background, who understands statistics, who knows the difference between different data types and how to treat data, who knows how to do predictive analysis and so forth. In my experience, though, people who do their own research without a ton of skills and training, whether they work at a research agency or on an insights team, know when they're stepping into a project where they need extra help. They know the difference between a simple project and a complex project. So I don't want to disrespect those people. I think there are a lot of super smart people in general marketing roles who can take advantage of DIY tools really, really well, and I think most of them, with some exceptions, know when to ask for help from a true expert in the field.

[00:18:25] Speaker 1: Yeah. Go ahead, Edward.

[00:18:30] Speaker 4: Yeah. I listened to a talk, I think it was a year ago, from the head of insights at a Dutch airline company, who talked about her own in-house DIY solution, allowing people from all departments to access their data bank and then program their surveys themselves. And it was very hands-off. One of my concerns was, how do you avoid the situation where, say, an engineer sees the solution that an engineer wants to see, while somebody at the front end looks for the answers that they want to see? There isn't any triangulation anymore. You become judge and jury yourself. There's no third party. And I think this notion of bias is one that we need to be cognizant of, and we need to question how you manage it.

[00:19:19] Speaker 2: It is. But it's also, I think, a journey, because if that engineer was doing no consumer research, which is normal in design-led companies, because they know what the technology should do, then moving to bad research is probably a step on the journey towards good research, as opposed to leaving them in a silo where they're doing no research at all. Now, we can improve the research with more AI, more rules that say you can't ask that question, this is the wrong sequence, where you actually get feedback from the software platform. But I think there is both a risk and a benefit there.

[00:20:03] Speaker 4: I think you're right. But, I mean, we've all heard the phrase, "but we researched it." And you say, but what was the quality of the research? What were your assumptions? I've met plenty of marketing people who thought that a quantitative sample size was 30. And then you point out to them, well, that's pattern recognition, but do you know anything about data variance? And then their glasses, if they're wearing them, steam up a little bit. So I think the assumption that we're all in a savvy world of SPSS and statisticians is a nice one. Maybe we're moving towards it. My personal professional experience has contradicted that on a few occasions, more than a few.

[00:20:41] Speaker 2: Oh, I don't think we're moving to any better knowledge of stats, for sure.

[00:20:47] Speaker 1: So I want to throw you guys a curveball here, if you'll forgive me. We've been talking about clients, we've been talking about researchers, we've been talking about marketeers. We haven't said a whole lot about the participants. So I'd be curious to know what you think could be the impact of these quote-unquote DIY tools, and of the use of research tools by those who are not traditionally doing research or who have less experience with it. What do you think the impact could be on participants in our industry?

[00:21:22] Speaker 3: I'm going to jump in here first. I could see it going to either extreme. I tend to be an optimist, so with my optimist hat on, I would say, actually, I think there's a lot of potential for good here, because what I'm seeing with some of the automation tools is that they're enforcing some best practices that a lot of professional researchers don't like to enforce. For example: making sure that the wording of questions is not excessive, not allowing you to have too many answer options in a questionnaire, keeping your questionnaires nice and short. Now, I know I just used examples from survey research only, and obviously automation applies to other methodologies as well. But in that case, it seems pretty cool to me that some of these tools are actually enforcing best practices that we, as professionals, are sometimes unable to enforce ourselves directly.

[00:22:10] Speaker 2: Yeah, I think that is a definite positive. The biggest negative that I worry about is that we might see a 10, 20, 30%, or even a 300% increase in the number of research activities, which is kind of what we want: we want more brand managers, more engineers to be talking to consumers. But that could mean a lot more survey requests, requests to take part in online discussions, requests to join communities, and that in itself could be a bad thing.

[00:22:48] Speaker 4: Yeah, on the qualitative side, when we do what we call consumer immersions, where, as a form of training, we encourage clients to spend half a day with their customers in places of their choice, it has an amazingly energizing effect. They often come back thinking, oh my, wow, I really get it now. It has this emotional immediacy, which really, really works. So if you manage it, that's one positive example, and I'll give you another. Sometimes in co-creation sessions, towards the end, if clients are part of the process, the participants ask who they are, and they discover they're the end client. Then they say, oh, I've really wanted to ask somebody this question, it hasn't been part of the discussion at all until now, can I ask you? And it works wonderfully. There's a sense of huge connection between the participants and the people they see as the end client. On the downside, I've moderated groups myself with brand managers, in Munich actually, where, if the participants had known that the person in charge of the product they were in the process of slagging off was in the room, they would probably have shut up. But that's probably a very extreme example.

[00:24:20] Speaker 1: So it sounds like there are some potential good things and some potential less-than-good things that could come out of this, and maybe something we can talk about in a future discussion is how to shape those things more thoughtfully in light of all the new tools that are becoming available. One more question to pose to each of you as we wrap things up; we've only got a couple of minutes left. There are a lot of things changing, and each of you comes to the table with a really unique perspective. Ray, you work a lot with people from all over the industry, client- and agency-side folks alike. Catherine, you're doing a lot of training. And Edward, you are sort of at the forefront of thought leadership, specifically around qual, but not only. So I would be really interested to know what each of you thinks we need to be doing and thinking about next. What should we really have in mind, or be doing, to prepare for DIY 2.0, DIY 2.5, DIY 3.0? Ray, let's start with you.

[00:25:26] Speaker 2: I think unless you are going to be a real specialist, the ethnographer, the data scientist and so on, really the focus wants to be on understanding businesses and curating messages, because you are going to be able to use an ever-changing range of tools. So for me, if you're on the client side, it's the insight manager who's drawing on different things. If you're on the agency side, it's probably the customer success manager, rather than the researcher, who is going to be the real focal point.

[00:26:01] Speaker 1: Okay, great. Catherine, what do you think?

[00:26:02] Speaker 3: Well, I think all of this ultimately is good news, because we have to always start with the client point of view. What's going on on the client side? Because the suppliers, whether it's the technology and platform suppliers or the agencies, are ultimately all driven by what's going on on the client side. And I think that what's happening here with what we're calling DIY is really aligned with a bigger trend in corporate insights teams, which is that they need certain types of tasks done faster and with more automation than ever before, because they've got new types of things they're working on. A lot of insights teams these days are increasingly working on cross-functional, collaborative projects where market research is no longer just a silo. I see a lot of cases with the teams I work with where they're increasingly expected to collaborate with people in their organization who come from more of the customer analytics or e-commerce analytics side, people who are working with big data and other types of third-party data, to bring multiple data sources together and work in blended data environments. Market research and insights teams are expected to be very innovative and to be more data agnostic. They can't possibly do all of that if they're also doing, frankly, a lot of mechanical work that can be automated. I think this is what is going to happen on insights teams: they are embracing more automation so that they can save their time and brain power for the blended-data, data-agnostic methodologies that they need to be working on. And I think the suppliers and the full-service research agencies need to be very aware of that, because it has huge implications for their opportunities.

Speaker 1: Great. Edward, what do you think?

[00:27:53] Speaker 4: I think we need to embrace it for its potential excitement. If you look at the client companies I've worked with that have managed to institutionalize a good DIY system which overcomes some of the barriers, then the excitement across the organization, in departments which have otherwise been difficult to reach, the fact that they can get data far more quickly in a wonderful form, it's fabulous. From the insights point of view, whether you're an agency or a client-side person, if you're the insights department, you have to make sure it actually frees you up, that it doesn't become a volume trap, that you don't end up being the person doing hundreds and hundreds of tactical surveys. It has to free you up to become the data insights consultant that you've always wanted to be. That means you need to shift your focus towards being comfortable with data, looking for the stories in the data, and then being bold as a business partner and saying, I have a point of view and I'm going to argue for it. You want to be provocative and hopefully transformative, if not necessarily disruptive.

[00:29:02] Speaker 1: Great. Well, that's basically all the time we have for today. This was a really great conversation, and I'm sure there are several more avenues we could go down, and maybe we will in a future discussion. In the meantime, thanks to everybody who managed to watch live, and to all the future viewers who may watch this later on. Feel free to check out NewMR's website, where you'll catch more of Ray. You can head on over to Research Rockstar, where you'll see what Catherine's up to. And Edward's blogs are easy to find and fascinating to read, so please check them out as well. Thanks, everyone. Thanks, Nikki. Thank you.

AI Insights
Summary
A live panel discussion moderated by Nikki Lavoie explores “DIY 2.0” in market research—an evolution from do‑it‑yourself survey tools to AI- and automation-driven research workflows that guide design, sampling, analysis, and reporting. Ray Poynter argues AI will broaden who can conduct research, similar to past democratizing shifts like SPSS and conjoint software, while also increasing the number of people who think they can do research well. Catherine Korosoff challenges the “DIY” label, viewing automation as part of broader market research evolution (3.0/4.0) used by professionals as much as novices, and emphasizes automation’s ability to enforce best practices and free researchers for higher-value consultative and blended-data work. Edward Appleton highlights major benefits—more evidence-based decisions and lower barriers for organizations—but warns about risks: poor qualitative practice, organizational bias when teams act as “judge and jury,” loss of third-party objectivity, and participant fatigue if research volume spikes. The panel concludes that researchers should embrace automation while shifting toward business understanding, storytelling, consulting, and governance so DIY systems improve quality rather than create a volume trap.
Title
DIY 2.0 in Market Research: Automation’s Promise and Risks
Keywords
DIY research
market research automation
AI in insights
SurveyMonkey
concept testing
SPSS democratization
conjoint software
qualitative research risks
bias and objectivity
participant fatigue
client-side insights teams
blended data
data storytelling
research governance
Key Takeaways
  • DIY 2.0 is framed as AI-driven automation that can recommend designs, buy sample, run analysis, and draft outputs, reducing errors and effort.
  • The term “DIY” may be misleading; many automation tools are used by experienced researchers and reflect broader industry evolution.
  • Automation can enforce best practices (shorter surveys, cleaner question wording) and free professionals for consultative and methodological work.
  • Biggest benefits: broader access to research, lower cost barriers, and more decisions based on evidence rather than gut feel.
  • Key risks: poor-quality self-serve research, especially in qualitative; internal myopia and confirmation bias without third-party triangulation.
  • Participant impact could be positive (better experiences via best practices) or negative (more invitations and higher burden).
  • To prepare, researchers should build skills in business context, insight curation, data storytelling, and governance—avoiding becoming trapped in high volumes of tactical requests.
  • Future roles may shift toward insight managers/customer success/consultant profiles, alongside deep specialists (ethnography, data science).
Sentiments
Neutral: The tone is balanced and exploratory: speakers express optimism about automation’s efficiency and democratization benefits while candidly raising concerns about research quality, bias, qualitative complexity, and participant burden.