Navigating U.S. Privacy, Incentives, and AI in Research (Full Transcript)

A discussion on state privacy patchworks, prospects for a federal law, the 1099 incentive threshold rising to $2,000 in 2026, and AI transparency needs.
Speakers
Speaker 1: Katherine
Speaker 2: Howard

[00:00:04] Speaker 1: Hey, Howard, it's great to see you. Thank you for joining me here today for this conversation.

[00:00:09] Speaker 2: Thanks for having me, Katherine. I appreciate it.

[00:00:12] Speaker 1: Awesome. Then I'm going to jump right in. And the first thing I want to talk about is a topic that I know that you've been following very, very closely, which has to do with privacy regulations. As many researchers know, if you do business in the EU, you have to be familiar with GDPR. And GDPR gives all of the European countries that participate, I believe there's 27 countries that participate in GDPR, sort of a unified framework for what privacy regulations need to be. And in the US, it's been more of a patchwork. Different states have different rules. And a lot of people point to California and CCPA as sort of the example of a particularly rigorous set of privacy regulations here in the United States. Now, I have heard that because of the state differences, that some research firms actually avoid doing research in certain states, because they don't want to inadvertently run afoul of state specific regulations. Is this something that you've encountered?

[00:01:13] Speaker 2: So I don't have reports, or at least evidence, of anybody actually avoiding a specific state. The problem, as you've identified, is that there are 20 different states with comprehensive privacy laws now. They're not all interchangeable; a lot of them have different provisions here and there, which makes it very complicated. I know that early on, when California was originally getting ready to pass the California Consumer Privacy Act, some companies said, oh, well, we'll just avoid doing business, we'll avoid doing research in California, which was, of course, laughable. Come on. It's a massive state. You can't really do that. The problem is, though, once every other state started getting into the game, there's no real benefit to avoiding a state, now that 20, as of 2026, will have their own separate privacy laws. The only state I would single out, where if I were a company I would try to find a way to avoid doing research, is Washington state, because Washington state has a comprehensive privacy law that they talk about as a health care privacy law. But they were very, very good at defining health care personal information as pretty much everything that is personal information everywhere else. It's very expansive, and they have a private right of action. So private lawsuits, class actions, these are in store for people that step too far out of line in dealing with Washington state. So if I were to avoid a specific state, that would be my choice. Do I know of anybody that actually has? No. But I would imagine most companies don't talk about it if they are doing that sort of thing. You can imagine that if you're further down the food chain, you might choose not to pick up a contract that involves something specifically there, for example. But it's a hard thing to differentiate that way. It's a big country, and people are all over the place. 
And most clients, whether they're corporate or nonprofit, they want to know what's going on with consumers across the whole country.

[00:03:31] Speaker 1: Right. And that certainly makes sense. But your example from Washington state certainly would give people pause. I mean, the idea that it's not just a regulatory issue where you might get a slap on the wrist, but that people can actually sue your company if they have the sense that their information was improperly disclosed or not used appropriately.

[00:03:53] Speaker 2: Well, in any of these states, the attorney general, the privacy authorities, various authorities within the state can seek damages against you when you do bad things or when you do things badly, I guess, rather than doing bad things in most cases. When it comes to individualized lawsuits or class actions, yeah, that takes it to another painful level.

[00:04:17] Speaker 1: Okay. Got it. And I would imagine even for privacy rules, obviously, for example, companies that do survey research, they always have a privacy policy that documents how participants' information is used and protected. I got to imagine that keeping up those privacy policies to adhere to all the different states that have specific requirements, even just updating your privacy policy could become onerous.

[00:04:43] Speaker 2: It is complicated, but it's definitely something that everybody needs to do. And one of the reasons we don't offer just a blanket privacy policy that we recommend to people to use is because it does need to be tailored to what you actually do within your business. And that can vary dramatically from company to company, even within our industry, which is, I mean, the insights industry is extremely diverse. And so those policies need to not just reflect what is demanded by given states, but it needs to really reflect what you're doing internally. And that is an ongoing challenge, but one that companies need to focus on.

[00:05:24] Speaker 1: And of course, now there's the discussion that maybe the United States will have a unified set of privacy regulations, which potentially could help. And that would potentially replace the state-by-state variability. Is this likely to happen? And if so, in what timeframe do you think?

[00:05:46] Speaker 2: Well, we've been battling for it for a number of years now with a coalition called Privacy for America, and we have draft legislation that we have been offering to Congress. There have been some fits and starts along the way. In the last couple of Congresses they put together legislation, including a bill advancing in the House in 2024 that recognized market research, but it didn't actually help us in any way. So what looked like it might be positive from the beginning was kind of a poison pill that would actually make it much more difficult to ever do market research and a lot of insights work. I think things are looking better right now within the House Energy and Commerce Committee. They have a privacy working group that is trying to formulate their own draft legislation, and they're taking some cues from some of the more reasonable state laws and trying to put together a package that actually works. Because I think in recent attempts, everybody just made this big grab bag full of junk and said, all right, well, we're going to do this and we're going to do this. And sure, those two pieces don't fit together, and we say something over here that contradicts something over there. So what? We're going to jam it through anyway. These guys are taking a more patient approach and trying to come up with a process that will work: something with which all of the businesses are able to comply, but that will actually provide protections for consumers and give them transparency and choices. And that's going to take a little while. The odds that it gets passed into law next year are not great. It's hard to pass anything into law at the moment with this Congress; very thin margins between parties. That's just not a recipe for this law happening right away. 
But certainly it's something that I could see moving along to the point where we're set up to pass a law by 2027. Which obviously does not make everybody happy right now, because you still have to deal with the problems on the ground. But in the meantime, I have confidence that we're working towards what should be a workable and preemptive privacy law, one that preempts that patchwork of state laws. That was another failing in some of the past few years as they've tried to craft a federal privacy law: they've tried to carve out exceptions, saying, all right, well, we're going to preempt everybody except California, or, last year, everybody except Washington state, which would be disastrous. So it's all about give and take and trying to get the best compromise that will work for everybody involved. And I think it is a good possibility. Once it's something that we can help get passed, it would hopefully come into effect relatively quickly.

[00:08:55] Speaker 1: And can you give me an example of something that's like a sticking point?

[00:09:02] Speaker 2: Sometimes it can be very simple things, but a lot of it is details and definitions. So last year, a sticking point was that the sponsors wanted to say, all right, market research is special, but only if it involves a complete opt-in, like an express opt-in for any use of personal information for market research, which would be great if you're running a panel, for example, because you've got five layers of opt-in all the time anyways. But for most other kinds of research, you're left hanging. A sticking point has also been really the ability to track people across websites, even with an opt-in, they've tried to find ways to cut that off. Because again, mostly they're focused on actual targeted advertising and personalized advertising. But in reality, they end up going after any uses of data and market research, audience measurement. We're all a part of that. So trying to fine tune how they approach things like that has been a big focus of ours.

[00:10:15] Speaker 1: Got it. Thank you. Those were two great examples. I really appreciate that. All right. So if we time travel a couple of years in the future and we do get some sort of federal framework in place so that there's not a patchwork across states, would that necessarily mean that that's all you have to worry about or would you still have to worry about now you've got to comply with the state level and the federal level?

[00:10:42] Speaker 2: Well, our end goal is to get rid of the patchwork. There will still be other state laws on the books; it's likely that some of the specific laws that deal with privacy for children, for example, would be carved out, along with some other sectoral privacy laws. Those would still sometimes be handled at the state level. But the overall majority would be strictly federal. You might have to deal with your local state attorney general, for example, as part of the enforcement of it, but those guys wouldn't be writing the rules. And that's the key thing: keeping the writing of the rules at the federal level in an orderly fashion, hopefully with the proper guardrails to keep the Federal Trade Commission focused on the important things rather than getting out over their skis.

[00:11:34] Speaker 1: Got it. All right. Excellent. Thank you. Then the next thing I'd like to talk to you about is rules about incentive reporting. This is obviously something that you've written about in the Fighting for You newsletter that the Insights Association puts out, about some of your work trying to make sure that these things are handled appropriately and not in a way that would damage us as a profession. But I know you've written about this idea of a reporting threshold. That is, if I'm paying people incentives to participate in research, I can't pay any individual person more than six hundred dollars a year without reporting it. When did that threshold originate, and how did six hundred dollars become the number?

[00:12:31] Speaker 2: Yeah, so I had to go back and do the research, because I didn't even know where it came from. It dates back to the early nineteen hundreds. I think the first references to this are around the time of World War One, when the federal government was trying to raise revenue and set a reporting threshold of anything over, at the time, a thousand dollars, as best I can tell. A little later, the reporting threshold was lowered to eight hundred dollars. And then finally, when it became a proper part of tax law for good, it was set at six hundred dollars, apparently until now. Now, of course, six hundred dollars, even back in the early nineteen hundreds, was an insane amount of money. So they weren't really worried about capturing that much. It was designed to deal with independent contractors, which it still does, which is why we care about it: if a research insights company is paying someone to participate in a study, that person is an independent contractor rather than an employee of the company. It's a binary choice. So if they're an independent contractor, at a certain point you hit that six hundred dollar threshold overall for the year, and you have to issue a ten ninety nine form, either an NEC or a MISC, and you issue it to them and to the Internal Revenue Service. Certainly, when I started, I discovered that there were some companies that weren't issuing the forms because they thought other people were responsible for it. We helped clean that up, much to everybody's misery. But this has really been mostly a concern for high value audiences, because the average research subject is not going to get near even six hundred dollars in an average year. That's a lot of work for someone at the low value end of the scale. 
But if you're an IT pro, a health care professional, a business executive, or a policy influencer, which is the category I fall under if I want to participate in research, the incentive levels are much higher. So a single study for one specialty doctor will oftentimes put them over that six hundred dollar limit right away. But the good news is that in a weird and unexpected move, the One Big Beautiful Bill Act, the big tax and budget reconciliation law that passed in the summer, is raising the threshold. Where it has been six hundred dollars for many, many decades, it is finally going to be raised in twenty twenty-six to two thousand dollars, and then adjusted every year thereafter for inflation. So unless you're dealing with a serious high-level, attractive audience that requires lots of incentives and is doing maybe more than a handful of studies, you're probably not going to have to worry about that IRS reporting threshold for a while. And I think that's going to make life a lot easier for a lot of folks in our industry, because for a lot of audiences, incentives are a key aspect of getting and retaining interest from a research subject population. Anything that reduces the bureaucratic paperwork on our end is a great benefit. But it also makes it more attractive to our research subjects to participate, since obviously the onus is on them to report their income rather than us. And if they choose not to, that is their choice.
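The reporting logic Howard describes can be sketched as a simple check. This is an illustrative sketch only, not tax guidance: the dollar figures are the ones stated in the conversation ($600 today, $2,000 starting in 2026), the annual inflation indexing is omitted, and the "at or above" comparison is an assumption.

```python
# Illustrative sketch of the 1099 reporting check described above.
# Not tax guidance. Thresholds are as stated in the conversation:
# $600 through 2025, $2,000 starting in 2026 (inflation indexing omitted).

def reporting_threshold(tax_year: int) -> int:
    """Return the assumed 1099 reporting threshold for a given tax year."""
    return 2000 if tax_year >= 2026 else 600

def must_issue_1099(total_incentives: float, tax_year: int) -> bool:
    """True when a person's yearly incentive payments reach the threshold,
    meaning a 1099 (NEC or MISC) would go to them and to the IRS."""
    return total_incentives >= reporting_threshold(tax_year)

# A single high-incentive study for a specialty physician:
print(must_issue_1099(750, 2025))  # True: over the old $600 threshold
print(must_issue_1099(750, 2026))  # False: under the new $2,000 threshold
```

Under the new threshold, a general-population respondent earning $20 per study would need a hundred studies with a single company before reporting kicks in.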

[00:16:43] Speaker 1: Right. And obviously, a lot of people, when they get a survey invitation and it's just five, ten, twenty dollars, the last thing they're thinking about is, I'm going to have to report this income, or I'm going to get a ten ninety nine. But at two thousand dollars, gosh, even if I'm getting paid twenty bucks a shot for participating in research, what's the chance I'm going to participate in over one hundred studies over the course of a year? Probably not, not unless I'm a professional researcher.

[00:17:11] Speaker 2: And if you are, then you're probably spreading yourself around to multiple platforms, multiple companies, which would avoid the problem also.

[00:17:20] Speaker 1: Got it. So the benefit here, then, is both for the participant and the company. Obviously, companies would rather not have to issue a lot of ten ninety nines. Yeah.

[00:17:29] Speaker 2: Right. If someone chooses not to report it themselves, then they take that risk. But it also saves companies a lot of paperwork.

[00:17:41] Speaker 1: Absolutely. It's interesting: talking about both of these things, privacy regulations and trying to get to national consistency, and raising the reporting threshold, while they're very different topics, both do seem to point to the importance of staying up with what's changing and what's likely to change. That way, the business decision makers at research agencies and different types of research suppliers know that, hey, there are things coming soon that could have an impact on our costs or even our business processes.

[00:18:17] Speaker 2: Yeah, I think it's very important to keep abreast of all of this. Certainly at the Insights Association, I'm doing my best to affect policy in the right direction, but also to keep abreast of what's going on and keep our membership aware when things pass, whether positive or negative. At the moment, I am thoroughly immersed in going through the detritus of the latest legislative session in California, where we were successful in beating back lots of nasty things. But there are also new and interesting laws that get passed there every year: little tweaks and expansions of their privacy law, but also some new A.I. reporting requirements and A.I. transparency requirements, and weird protocols around chatbots. They're a particular engine of interesting law every year. For Insights Association members, we provide a monthly newsletter, which is available to the public as well, that summarizes what's been going on for the last month. It's called Fighting for You. And more or less every week there's a post on LinkedIn giving a quick summary of what's been going on during that week. So there are various ways to keep tabs on what's going on, beyond just entrusting it to me and turning a blind eye to it.

[00:19:44] Speaker 1: Right. No, thank you, Howard. I love the work that you do; I think it's so important and so valuable. Before I let you go, though, you did bring up A.I., and I'm just curious: did you happen to catch the news item in the last couple of weeks about the money that Deloitte has to pay back to the government of Australia?

[00:20:03] Speaker 2: So I've heard, but I do not know the details of it, to be fair.

[00:20:07] Speaker 1: It's OK. There have been a number of articles about this, and I think for our membership it's really important, because it's probably the first really public example involving a highly reputable, really well-known company. They delivered a report to the government of Australia as part of a larger contract, and somebody went through that report and found obvious A.I. errors. A couple of them are the really classic things, right? Somebody was quoted in the report, but when you checked that person's information, it was clear that was not what they said, or that they were not the source of that quote. So misattribution of quotes, which we all know is something that still happens. It doesn't happen as often as it used to, but it does still happen. And they did have to pay back some of the money for that contract, in restitution, to make the client feel whole about the experience. But the client caught that error. So it's a real wake-up call, I think, for anybody who's delivering reports, especially research reports, because when you use A.I. to help you with writing, it does make errors. In fact, I was just talking to some researchers yesterday and sharing an example of how an A.I. tool had misinterpreted data from different columns in a table. It thought that one column was about customer group A and one column was about customer group B, and it had actually referred to the wrong columns, even though they were labeled. And if you just read the text, you would have thought, OK, this is the result. You wouldn't catch it unless you actually checked the numbers to make sure they were being pulled from the right column.
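The column mix-up Katherine describes is the kind of error a mechanical check catches: recompute every figure from the column it is attributed to in the source table before the text ships. A minimal sketch, with hypothetical data and claims:

```python
# Minimal QA sketch for the column mix-up described above: verify that
# each figure quoted in a draft matches the column it is attributed to.
# The table and the claims below are hypothetical.

table = {
    "customer_group_A": [72, 68, 75],  # e.g., satisfaction scores
    "customer_group_B": [55, 58, 51],
}

# Claims extracted from a drafted report: (column cited, claimed mean).
claims = [
    ("customer_group_A", 71.7),
    ("customer_group_B", 54.7),
]

results = {}
for column, claimed in claims:
    actual = sum(table[column]) / len(table[column])
    # Flag any claim that doesn't match the column it cites (rounding tolerance).
    results[column] = "OK" if abs(actual - claimed) < 0.05 else "MISMATCH"

print(results)  # {'customer_group_A': 'OK', 'customer_group_B': 'OK'}
```

The same pattern works for human-written drafts; the check doesn't care whether a misattribution came from an A.I. tool or a tired analyst.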

[00:21:58] Speaker 2: Which, you know, used to be a problem that was made by people.

[00:22:02] Speaker 1: Right. And so we always checked people's work.

[00:22:04] Speaker 2: Unfortunately, double checks are a necessity, whether you have A.I. or not. A.I. obviously requires its own separate eye on the product. It's complicated, but I think one of the key things that we stress at the Insights Association is transparency in the use of A.I. This is coming up now in state laws, and will likely be federal law before too long: transparency to users when A.I. is being used, when they're interacting with an A.I. That's because of chatbots impersonating people and causing strange and weird problems. Transparency in the use of A.I. is an important thing on the business-to-business side as well. Your clients, your partners, they should know when you use A.I. to do something, because that can change the equation. It's not necessarily a bad thing, but people need to know what tools you're using. That's just sort of a common sense approach, and you would demand that sort of thing anyway. A.I. is not different; it should be treated the same way. You should let people know when it's being used.

[00:23:32] Speaker 1: Great. I love that point. We've got to check the work, whether it's human or A.I. Bottom line is, we're responsible for what we deliver. And it is true that now we just have to have a slightly different process, because how we do the checking is going to be different, whether we're the producer checking before we submit something, or we're the client receiving something. And now I know, OK, you've been transparent. You told me that you used A.I. for some of this analysis. That gives me some priorities in terms of how I'm going to check the work.

[00:24:04] Speaker 2: Yeah. And look, sometimes it's simple things. Spelling and grammar check now is done entirely by a relatively rudimentary A.I. system. But you know what? They're not perfect. So a human eye can always be helpful.

[00:24:25] Speaker 1: Excellent. Well, thank you so much for your time, Howard. This was a great conversation, and I love both of these topics. I really look forward to seeing how some of these things evolve; especially getting to a national standard for privacy regulations, I think, would be great.

[00:24:42] Speaker 2: Yeah, I am very excited about it and I will keep working hard on it and look forward to talking on these and other subjects again soon.

[00:24:49] Speaker 1: Excellent. Thanks, Howard.

AI Insights
Summary
Katherine interviews Howard about evolving compliance issues for the U.S. insights/market research industry. They discuss the growing patchwork of state privacy laws (about 20 comprehensive laws by 2026) versus GDPR’s unified EU framework, noting companies generally can’t realistically avoid states, though Washington state is flagged as uniquely risky due to an expansive health-data definition and a private right of action. They cover the effort to pass a federal, preemptive U.S. privacy law via the Privacy for America coalition, with progress in the House Energy & Commerce privacy working group but slim odds of passage immediately; 2027 is suggested as a plausible timeframe. Sticking points include definitions, opt-in requirements for market research, and rules affecting cross-site tracking that often target advertising but can inadvertently restrict research and measurement. They then shift to IRS incentive reporting thresholds: the longstanding $600 1099 threshold (dating back to early 1900s tax policy) creates friction mainly for high-incentive audiences (HCPs, IT pros, executives). A new law raises the threshold to $2,000 in 2026 and indexes it to inflation, reducing paperwork for companies and deterrence for participants. Finally, they address AI in reporting, referencing a publicized case where Deloitte had to repay fees after AI-related errors/misattributions in a report for the Australian government. They emphasize that errors happen with humans and AI, but AI requires tailored QA and, critically, transparency to clients and users about when AI is used—an expectation increasingly reflected in emerging state (and likely future federal) AI transparency rules.
Title
Privacy Patchwork, 1099 Incentives, and AI Transparency in Insights
Keywords
GDPR, CCPA, U.S. state privacy laws, Washington My Health My Data, private right of action, federal privacy law, preemption, Privacy for America, House Energy and Commerce, market research exemptions, opt-in consent, cross-site tracking, audience measurement, FTC enforcement, 1099 reporting threshold, research incentives, high-value respondents, inflation indexing, AI transparency, quality assurance, Deloitte Australia report
Key Takeaways
  • U.S. privacy compliance is increasingly complex as ~20 states adopt comprehensive privacy laws; most firms can’t realistically avoid states, but Washington state poses outsized risk due to broad health-data scope and private lawsuits.
  • A federal privacy law that preempts state patchwork remains a goal; near-term passage is uncertain, with 2027 presented as a plausible window.
  • Key legislative sticking points often come down to definitions and consent mechanics (e.g., strict opt-in for market research) and limits on cross-site tracking that can inadvertently restrict research and measurement.
  • Privacy policies can’t be one-size-fits-all; they must reflect a company’s actual data practices and be maintained as laws evolve.
  • The IRS 1099 incentive reporting threshold historically set at $600 (originating in early 20th-century policy) mainly impacts high-incentive respondent groups.
  • A new U.S. law raises the threshold to $2,000 starting in 2026 and indexes it to inflation, reducing administrative burden and participation friction.
  • AI can introduce report errors (e.g., misattributed quotes, misread tables); robust human review remains necessary.
  • Transparency about AI use is becoming a best practice and likely legal requirement; clients should be informed when AI materially contributes to deliverables.
Sentiment
Neutral: The tone is pragmatic and policy-focused, balancing concerns (state-law complexity, litigation risk, AI errors) with cautious optimism (potential federal privacy framework, higher 1099 threshold).