Speaker 1: Artificial intelligence is fast becoming a part of our daily lives, showing up in places we would never have expected, including at border crossings around the world. With more people fleeing their countries due to war, climate change, and other destabilizing events, our next guest argues that countries are turning to AI-driven technology to help secure their borders, but at a profound human cost. Joining us now, Petra Molnar, author of The Walls Have Eyes: Surviving Migration in the Age of Artificial Intelligence. And Petra, it's great to meet you. Welcome. Thank you so much for having me. As people are going to discover, you have just the smallest hint of an accent. Now, I know you split your time between Toronto and New York right now, but your background is really interesting. Tell us about it.
Speaker 2: Yeah, I mean, like so many people in Canada, I have my own migration story that brought me here. First as a teenager to Winnipeg, and then to Toronto. But, yeah, my family is split between different continents, and I think that's probably what motivated me to do this work.
Speaker 1: Where did your life start? In the Czech Republic. Czech Republic, as it then was and still is. In Czechoslovakia, actually.
Speaker 2: Maybe I'm dating myself.
Speaker 1: You're not old enough to be... Are you? No, you're not old enough to be in Czechoslovakia. I am. I guess you are. It's good makeup. Live and learn. Let's do an excerpt from the book, and then we'll come back and chat. Sheldon, if you would. Here's Petra Molnar in her book. Technology is often presented as being neutral, but it is always socially constructed. All technologies have an inherently political dimension, whether it be, for instance, about who counts and why, and they replicate biases that render certain communities at risk of being harmed. Consequently, what we are really talking about when we talk about borders is a human laboratory of high-risk experiments. Okay, lots to unpack here, Petra. A human laboratory of high-risk experiments. What does that mean?
Speaker 2: So, it means that a lot of governments around the world are using new technologies, things like drones, surveillance, AI-type projects to augment the way that migration control is happening. But borders have historically been these spaces where there's not a lot of oversight or accountability, a lot of decisions get made behind closed doors, and it creates this kind of high-risk experimentation where technologies are tested out without even public knowledge.
Speaker 1: These drones and sensors and cameras and so on, are they inherently nefarious?
Speaker 2: Well, it depends how you look at it, right? Because I think technology is a social construct. It's something that replicates vast power differentials that we have in our world, and it's definitely not a neutral tool. So, the way that it's being used right now, unfortunately, is kind of exacerbating the way that it's harming people who are crossing borders.
Speaker 1: Okay, let's look at the most famous border in the world anyway right now, and that's the US-Mexico border, where it's either under control or there's an invasion every day, depending on which presidential candidate you want to listen to. What does the technology testing on that border look like right now?
Speaker 2: So, the US-Mexico border is a really interesting place where you can actually go and see some of this technology, and the book took me there. I've been in Arizona many times, trying to understand what's happening there. And you can see, for example, AI-type towers that have been erected in the Sonoran Desert to sweep the desert and create this kind of surveillance dragnet. There are also robodogs that were announced a couple of years ago by the Department of Homeland Security to augment the way that borders are being policed.
Speaker 1: So, when we hear, we got to build a wall, we got to build a wall, there's a lot more than just a wall happening there.
Speaker 2: Absolutely. It's really a smart wall and a smart border system that has expanded the border from its kind of physical location into the skies and across the land and has really created, again, this kind of laboratory of experimentation with tech that you wouldn't see in other spaces.
Speaker 1: I think we get just by definition what a robodog is, but do you want to just put a little more flesh on that bone, no pun intended?
Speaker 2: Sure. I mean, it's one of those, I think, most draconian examples that I've come across. It's essentially a four-legged, military-grade piece of technology that you might have actually come across on a sci-fi show like Black Mirror or in other futuristic movies, but it's really not the future. It's here.
Speaker 1: Interesting. Now, for as long as I can recall, the US-Canada border has always been described as the longest undefended border in the world. What's going on on that border?
Speaker 2: So, the Canada-US border is another site where technology has been tested out for a number of years, and that's actually where my work in this started. Back in 2018, we came across some documents that proved that the Canadian government was using AI and different types of technologies to augment our immigration decision-making system, too. Like what? Visa triaging algorithms, for example.
Speaker 1: English, please. What does that mean?
Speaker 2: So, an algorithm is basically like a recipe of programmable steps. So, it's something that helps you make decisions and augment the way that human decisions are made. And so, you feed it a bunch of information and it comes out with a result. And so, essentially, what the Canadian government has been doing is using machine learning and algorithms to automate the way that visas are given out.
Speaker 1: On this issue of the US-Canada border, DNA collection pilot projects. What's going on?
Speaker 2: So, that's kind of one of the, again, latest manifestations of how much this space is expanding. DNA collection was tested out at the Canada-US border and also at the US-Mexico border, including on minors in that situation. And again, it's almost like your body becoming your passport in that instance.
Speaker 1: Do you know, when I drive to Buffalo to go to a Sabres game, I get to the booth where the person is, but the car stops before that and there's clearly something taking my picture, looking at me, surveying me. I don't know what it's doing. What's it doing and where's that picture going?
Speaker 2: That's a great question and I think that's part of the story, too. We just oftentimes don't know what kind of technology is being used, what data is being collected, and whether we can actually meaningfully opt out. It's similar stuff that we see in airports, for example, right? With facial recognition being used or just even other types of technology that's made its way into migration.
Speaker 1: Yeah, there's a suspicion... Well, maybe this is my inference, but when you go to the airport, you kind of expect everything, anything. You know, it's all hands on deck. The border, at least, you know, I remember the days of my childhood when going to the border was like... almost Ontario to Quebec. There was nothing to it. There's a lot more to it now, isn't there? Do I need to be worried about that?
Speaker 2: Well, borders have been changing and they've become sharper, I would say, in the last number of decades, really. And technology plays that up, right? Like, we are seeing more infrastructure, more technology, more kind of projects that are being tested out, oftentimes, again, without our knowledge. And you might end up at a border and not even know kind of what you're interacting with.
Speaker 1: And even you, with all the research you've done, you're not sure where all that information goes?
Speaker 2: Yeah. I mean, that's part of the story. I mean, I do this for a living and I've sent out so many access to information requests to different governments. I've been to different conferences. I have closed-door meetings with officials. And it's very difficult to get a full picture of just how much technology there really is and how it's harming people.
Speaker 1: I've seen your CV. You have been everywhere. How many countries do you think you've visited?
Speaker 2: Quite a few. I mean, for this book, at least six or seven.
Speaker 1: Oh, no, no. But, I mean, you've been... I mean, you've done speaking engagements, your research takes you all over the place. You've been to lots and lots of countries. Can you tell us which border in the world between which two countries you think right now is the most concerning as it relates to these issues?
Speaker 2: You know, it's hard to pick one, but I would probably have to say the US-Mexico border because, of course, it's also an election issue this year and it's becoming a political free-for-all, so to speak, and it's definitely a border to watch.
Speaker 1: We're going to talk some US politics, okay? Okay. Okay, here we go. Republicans, of course, have been very strong in their view on border security, finishing the border wall, increasing deportations, that kind of thing. Many people in the Democratic Party, conversely, are for softer immigration policies, so-called. More pathways to citizenship, better treatment for undocumented immigrants, that kind of thing. So a Harris presidency versus a Trump presidency presumably makes a big difference in your line of work, fair to say?
Speaker 2: Yes and no. I mean, I think, at first blush, definitely a Harris presidency may not be, again, as sharp on migration as another Trump administration. But the Democrats have also been expanding a lot of problematic policy-making, especially on the smart border side of things. So I think we need to keep a careful eye on what their policy actually is going to be if there is to be a Harris administration.
Speaker 1: To the best of my knowledge, the president who deported the most people in my lifetime was Barack Obama. Exactly, yeah.
Speaker 2: Sometimes known as the Deporter-in-Chief.
Speaker 1: Deporter-in-Chief. So for those people who think Trump is the biggest problem going here, maybe not?
Speaker 2: I think it's the politics of migration. That's the biggest problem.
Speaker 1: How did Biden do? Joseph Biden, the current president, how has he done following up on what Donald Trump started?
Speaker 2: I mean, there has definitely been kind of a return, somewhat at least, to a human rights-respecting framework. But there have also been very problematic expansions of the surveillance state, of the border-industrial complex, and, again, of deportations as well.
Speaker 1: So the military-industrial complex, in your line of thinking, is not the biggest problem we have nowadays, eh?
Speaker 2: I think it's the military and the border-industrial complex. They're actually together.
Speaker 1: The bigger concern, in your view, the technology deployed at the borders or the lack of oversight deploying those technologies at the borders?
Speaker 2: I'll give you a bit of a lawyerly answer: I think it's both. I do actually think there's some technology that is concerning, things like robodogs or AI lie detectors. Those are things that I think we should just not be using; there needs to be a red line there. But a lot of it is about the governance, or lack thereof. We just don't really have a lot of law or policymaking that would put some guardrails around this.
Speaker 1: What technology are you okay with?
Speaker 2: I think technology that ultimately would help kind of rebalance the power differentials that are inherent in immigration, you know, and maybe make the system more transparent and fair and make access to resources better for people.
Speaker 1: Are lie detectors okay?
Speaker 2: No, definitely not.
Speaker 1: Because?
Speaker 2: Well, I mean, lie detectors as a technology, the kind of traditional ones, you know, the ones you've seen on shows like CSI, they're really problematic and they don't even work properly. Now there's experimentation with AI-type lie detectors in the European Union, for example, using facial recognition to make inferences about people's behaviour and say, oh, this person's likely a liar or not. This is happening and being researched.
Speaker 1: Let me be provocative just for the heck of it, okay? It wasn't that long ago that we saw somebody who was in Quebec trying to get into New York State to make trouble, to perform a terrorist act in New York State. If it... if that person could have been polygraphed before anything dramatic... anything more dramatic happened, would that not be worth doing?
Speaker 2: Well, I guess it ultimately goes back to what the polygraphs are even doing, because this AI-type project that I mentioned, I mean, it's been basically debunked as snake oil. It doesn't even work the way it's intended to. And I understand, of course, there are real security concerns that come with certain people and certain borders, but the answer isn't untested, unregulated and high-risk technology that can then also be used to harm others.
Speaker 1: There are biases associated with AI. We have heard them, particularly as it relates to people of colour. You want to expand on that a bit?
Speaker 2: Absolutely. I mean, AI in particular has a really poor track record when it comes to discrimination, particularly along racial lines. I mean, we know that facial recognition is very biased against people with darker skin, for example. Not to mention some of these more kind of nuanced ways that bias can be baked into it if we're using biased data sets to make decisions.
Speaker 1: If they're smart enough to figure out how to do face-recognition AI, how can they not be smart enough to take the racism out of it?
Speaker 2: Well, maybe that's the question to ask, right?
Speaker 1: I just did.
Speaker 2: I guess that's really what it's about. Is that a goal, or is it really about automating certain processes to make certain goals easier?
Speaker 1: On the other hand, I can also imagine people saying we saw all sorts of headlines about how inhumane the wall at, for example, the US-Mexico border could be. Kids taken away from their parents, put in cages, separated for weeks, if not months, at a time. That's pretty scary stuff. If you can come up with technology that would obviate the need for any of that, to the extent that there is any need at all, wouldn't you rather go that route?
Speaker 2: Well, in an ideal world, yes, right? But, again, we are in a world that is, um, kind of built on power differentials. And also, in immigration, there's so much discretion that goes into the way that decisions get made. And so, for sure, I mean, if we are working towards a world that is more fair and more transparent, then technology can help with that. But what it's doing now is it's just exacerbating the kind of wall that you're describing. It's just a digital wall now.
Speaker 1: So, what kind of ethical considerations that are not currently being undertaken should we be thinking about?
Speaker 2: I mean, ethics are definitely part of it. You know, we need to think about what this is doing to real humans, and especially people who are marginalized and people who have been historically disenfranchised from these conversations. But I actually think ethics don't go far enough. We actually need to have conversations about human rights, about regulation, and about some no-go zones. I just... I don't think it's ever appropriate to introduce a robodog at the border or use predictive policing, for example, or even welfare algorithms, things like this that are being developed without public scrutiny.
Speaker 1: Predictive policing is very Minority Report, isn't it? It is. Remember that Tom Cruise movie? Of course you do. Yeah, yeah. It, uh... Yeah, you have a problem with that.
Speaker 2: For sure, because, again, it's about who gets to make decisions about which groups of people. And if we're moving into prediction, I mean, that can get out of hand incredibly quickly.
Speaker 1: During COVID, we got... Well, I think we probably got accustomed to having our civil rights curtailed for what we were told was the greater good, namely the health of society. Have we, therefore, to some extent, normalized, um, you know, a lot of what we're talking about here today to the point where we're not as vigilant or concerned about it maybe as we should be?
Speaker 2: Yeah, absolutely. I think the COVID moment did normalize surveillance and certain incursions when it comes to, for example, personal data being shared. And, you know, there are colleagues of mine who have been tracking the trends that arose during COVID and have stayed with us. And definitely, I think these moments of crisis do normalize surveillance.
Speaker 1: You, when you fly... I mean, I guess not now, but when you flew from New York to Toronto, for example, during COVID, when it was at its worst, did you use the ArriveCAN app?
Speaker 2: Yeah, I definitely used it. What did you think about that? Well, you know, it's one of those apps that came up as a result of a crisis, right, where I think there really wasn't an option not to use it. But it reminded me, again, of all these kind of themes and trends that I've been seeing around the world. The fact that a lot of these tools and apps are kind of presented as a solution without even having a conversation about it and what it's doing or whether it's even working.
Speaker 1: Never mind that it was a $60 million boondoggle for something that probably could have been done for two or three million bucks. Did you think the point of it was something worth doing?
Speaker 2: Well, it's, you know... I mean, it's important to situate it, I think, in the historical moment of COVID, which was a major global health emergency. But I think in retrospect, right, seeing the costs and seeing also the kind of inefficiencies and also the lack of uptake, right, I think shows that perhaps it wasn't the best way forward.
Speaker 1: You don't still use it, do you? No. Nobody uses it anymore, do they? Yeah, I don't think so. It's still on my phone.
Speaker 2: Yeah, yeah. I see signs for it at the airport. Yeah. But I haven't used it.
Speaker 1: I don't know anybody who uses it anymore. Okay, border security as a business. How lucrative?
Speaker 2: Very lucrative. Very lucrative. I mean, we're talking a figure of about $70 billion. Worldwide? Mm-hmm. And this is, again, kind of a border-industrial complex that has grown up around the use of this technology.
Speaker 1: We want to be safe these days, Petra, right? We have a lot of fearful people who want to be safe and who want to think that somebody out there is looking out for their interests. Is that an understandable feeling?
Speaker 2: Of course. And I think, you know, despite the kind of competing crises that people are dealing with these days, also in the aftermath of COVID, economically and environmentally, there is a lot to think about. But I think decisions motivated by fear are never really the way to go.
Speaker 1: No, but that's... I mean, that's how you get people motivated, right? I mean, that's what a lot of politics is about, scaring people into allowing governments to do various things that they otherwise couldn't do. But there... Are you prepared to throw the baby out with the bathwater?
Speaker 2: I mean, I think, ultimately, it's about looking at what the effects of these technologies actually are, right? And even if they're motivated by fear, we have to keep coming back to, what is this really doing on the ground? And the thing is, it also doesn't just stay at the border, right? The robodogs that we were talking about, a year after they were announced by the Department of Homeland Security, the New York City Police Department announced they're going to be using them on the streets of New York.
Speaker 1: Hmm. Starting when?
Speaker 2: That's to be determined. But some of them have walked around. One was even painted white with black spots on it, like a Dalmatian.
Speaker 1: Just to look kind of cute, eh?
Speaker 2: Yeah, I guess so. To normalize it again.
Speaker 1: Huh. There... I mean, presumably, there is a trick to finding the sweet spot between legitimate security concerns and balancing the ethical worries that you have. Are we any good at finding that sweet spot?
Speaker 2: It's a hard one to find, because so many of these conversations happen behind closed doors, right? It's government officials and, often, the private sector that stands to make a lot of money from the development of this border-industrial complex. And a lot of people who have training in human rights law, or even affected communities, are actually not really part of the conversation. I think we need to broaden that out and actually build a bigger table around which we can all sit to have these nuanced points discussed.
Speaker 1: Who should be sitting at that table?
Speaker 2: I think affected communities, definitely. People who cross borders, people who move. In other contexts, you know, other groups that are kind of on the sharpest edges of technological interventions. That's a major population that's missing from the conversation.
Speaker 1: I am trying to imagine you as a... I mean, you are a frequent flyer, right? You travel a lot and you go to a lot of airports. You see a lot of this kind of security. And, um, I do wonder whether you... Like, are you the kind of person who's a nightmare at the border who, when they ask you for some information, you say, you don't need to know that. You have no business asking me that. Are you that kind of... Are you that person?
Speaker 2: Well, I try to not, you know, be unpleasant, but I, you know, I think as a lawyer I do know my rights and I know how difficult it can be at the border, definitely.
Speaker 1: Okay, well, tell me how that goes. Like, give me an example of an ask that a border official would have made of you and how you responded if you didn't think that they were on solid ground.
Speaker 2: Oh, you know, I remember this one time, they were asking me a lot about the work that I did, and I think I was maybe feeling particularly sharp that day, and I said, oh, you know, I work on immigration and refugee issues. And then somehow we got into a bit of a political debate, which could have gone on for another 20 minutes, half an hour, but, you know, we decided to just leave it there, and I was allowed to enter.
Speaker 1: Enter which country?
Speaker 2: Canada.
Speaker 1: Enter Canada? So, you got the third degree entering Canada?
Speaker 2: Yeah.
Speaker 1: Even though you're a Canadian citizen?
Speaker 2: Yeah, yeah, yeah.
Speaker 1: What did you think of that?
Speaker 2: Well, I mean, I think this is a common experience for many people, many Canadians also. Based on your background, based on your travel history, lots of very problematic decisions can get made at the border under the guise of security again.
Speaker 1: I... Okay, I want more on this. How much did you push back?
Speaker 2: I pushed back quite a lot, yeah, because, you know, I was coming home, I think, from a conference on human rights, you know, or something like that, and it just felt a little bit of an unnecessary, kind of sharp interaction, you know.
Speaker 1: That person that you had the argument with, they're under no obligation to let you into the country, right? Or they certainly could have put you aside and subjected you to all kinds of extra security concerns. Were you not worried they would do that to you if you pushed back too much?
Speaker 2: No, not at all, because I know my rights and I know my travel history, and because of my work, I think I know how to defend myself. But not everybody can, right? And to me, these reminders are always there to, I guess, highlight again how difficult it can be for people to be interacting with these kind of operations of the border.
Speaker 1: Let's say the title of the book again, shall we? The Walls Have Eyes: Surviving Migration in the Age of Artificial Intelligence. Petra Molnar, thanks for coming in to TVO tonight.
Speaker 2: Thanks so much for having me.