How is artificial intelligence being used for disinformation purposes? How effective can it be in influencing our reality and political choices? We discuss the rise of synthetic media with Craig Silverman, a reporter for ProPublica who covers voting, platforms, disinformation, and online manipulation, and one of the world’s leading experts on online disinformation.
In the first season of Machines That Fail Us, our focus was to explore a fundamental question: what do AI errors reveal about their societal impact and our future with artificial intelligence? Through engaging discussions with global experts from journalism, activism, entrepreneurship, and academia, we examined how AI and its shortcomings are already influencing various sectors of society. Alongside analyzing present challenges, we envisioned strategies for creating more equitable and effective AI systems. As artificial intelligence becomes increasingly integrated into our lives, we decided to expand these conversations in the new season, delving into additional areas where machine learning, generative AI, and their societal effects are making a significant mark. This season begins by examining AI's role in the spread of misinformation, disinformation, and the ways generative AI has been used to orchestrate influence campaigns. Are we unknowingly falling victim to machine-generated falsehoods? With 2024 being a record year for global elections, we will explore the extent to which AI-driven disinformation has shaped democratic processes. Has it truly had an impact, and if so, how? In this episode, we are joined by Craig Silverman, an award-winning journalist, author, and one of the foremost authorities on online disinformation, fake news, and digital investigations. Currently reporting for ProPublica, Craig specializes in topics such as voting, disinformation, online manipulation, and the role of digital platforms.
Machines that Fail Us, a podcast by the Media and Culture Research Group at the University of St. Gallen about the human rights implications of AI errors and failures and our future with artificial intelligence.
Welcome to the first episode of the second season of Machines that Fail Us. My name is Philip Di Salvo. I'm a researcher and a lecturer at the University of St.
Gallen, a member of the Media and Culture Research team in the Institute for Media and Communications Management and the host of this podcast. Welcome back to the show.
With the first season of Machines that Fail Us, we aimed at answering one fundamental question: what are the implications of AI errors for society, and what do they tell us about our future with artificial intelligence?
In the first season of the podcast, we discussed these issues with international experts from journalism, AI activism, entrepreneurship and academia, looking at different fields of our society where AI and its errors are already having an impact. While discussing current issues in this area, we also tried to imagine ways of building better and fairer futures with artificial intelligence.
You can still listen to all the episodes on the University of St. Gallen website and all major audio and podcast platforms.
Given the even more tangible presence of artificial intelligence in our world and lives, we decided to continue those conversations and to touch on even more areas of society that are being influenced by the rise of machine learning, generative AI, and their social impact.
This new season starts with the issue of AI and the spread of misinformation and disinformation, and the ways generative AI has been used to orchestrate influence campaigns. Are we falling victim, perhaps without knowing it, to content created by machines? With 2024 being a record year for elections worldwide, in what ways has disinformation created by AI influenced these democratic processes and, most importantly, has it really done so? For this episode, I'm extremely happy to be joined by Craig Silverman.
Craig Silverman is an award-winning journalist and author and one of the world's leading experts on online disinformation, fake news and digital investigations. He is now a reporter for ProPublica covering voting, platforms, disinformation and online manipulation. Craig, welcome to Machines that Fail Us. It's great to have you on the show.
Craig Silverman:Yeah, thank you very much for having me.
Philip Di Salvo:So the first question I would like to ask you is about, generally speaking, the impact of AI on mis- and disinformation. Because there's been lots of speculation around the use of AI for these kinds of malicious uses, if you want. And my question is quite straightforward.
Is this already happening? Are we already using AI to do MIS and disinformation? Or not.
Craig Silverman:I mean, the short answer to that is, yes, it's definitely being used to create it and to encourage people to then spread that content. So it is definitely there. It is definitely out there.
I don't think we've seen the kind of information apocalypse that people have been sort of warning of. And that doesn't mean there isn't real harm and real problems.
And so we have this kind of interesting dynamic where, because there hasn't been a really crazy example where people think it had a significant effect on an election or what have you, now you have folks sort of saying, oh, you see, it's not that much of a concern, we shouldn't be that worried about it.
But it is absolutely being used.
And there were some great efforts last year by Rest of World and by Wired magazine to sort of track the uses of AI around elections, because we had so many elections last year. And that was sort of where a lot of the concern was: will we see a big moment where it, you know, impacts an election?
And I don't think we saw it significantly impact an election, but there are lots of examples where AI generated audio, AI generated video, AI generated images or manipulated images or audio were used to forward a false or misleading message.
And then there were also examples where it was just used to sort of assist, you know, examples where you might have candidates in India, where there are so many official languages, using AI to sort of automatically translate and then also to sort of make a deepfake of a candidate seeming like they speak one of these languages. So absolutely, it's present. And I think now it's sort of just part of it. It's part of what we can expect in elections.
It's part of what we can expect in sort of efforts to manipulate the information environment.
Philip Di Salvo:Okay. And there's been lots of speculation around the deep fakes in video.
So, like those videos where people are kind of created synthetically, like famous politicians or VIPs of any sort, saying things they never said. In your experience, is that the real risk, or is it more around images and audio, for instance?
Craig Silverman:Yeah, I think the stuff that tends to be most effective falls into what people sometimes put in the category of cheap fakes, where you haven't necessarily done a really advanced job of taking a politician and putting words in their mouth that they never said. I mean, that is definitely going on.
Like, just as an example of that, during the recent elections in Venezuela, you know, with the authoritarian leader Maduro, there were videos after the election, which is hotly disputed and appears to have involved widespread fraud. You know, there was a deepfake video of Maduro seeming to, like, confirm and concede that yes, the election was manipulated.
There was also manipulated video of Venezuelan military leaders acknowledging that, yes, the opposition parties won. And then there were also deepfake videos of opposition figures. So it's absolutely something that's out there.
And I do think it's a scenario where it's always tough to know if a particular video or piece of information really impacts someone's view or changes someone's mind. And that's a very hard thing to prove, to study that kind of direct impact.
And so I feel like sometimes the bar gets set really high, like, oh, well, if it didn't change people's minds, does it matter?
And the larger thing is absolutely, you have those deep fakes, then you have cheaper fakes, which is, you know, maybe just speeding up or slowing down someone's speech to make them seem like maybe they're, you know, incapacitated in some way.
Doing very simple video edits or clipping video, leaving out context, manipulating images in a very basic way, that stuff is in some ways just as, or even more, effective.
And so I think it's kind of, you know, the technology itself can assist with what is already the goal, which is to just appeal to people's emotions, pre existing beliefs, to feed that bias, to feed the partisanship and to sort of reconfirm people's views.
I think it is much harder to change someone's mind or to change someone's vote based on a deep fake. But as a confirmation machine and a media tactic to assist with that, and to perhaps deepen divides, that's where it's probably pretty effective.
So there are absolutely deepfakes in the wild, but there's also a lot of other stuff that doesn't require as much effort, which can be just as effective for sure.
Philip Di Salvo:And how much AI is there in cheap fakes? In the end, maybe AI is not even the right term in these cases.
Craig Silverman:That's true because in some cases speeding up something, slowing something down, doing a deceptive edit, I mean, this is stuff that's been around for a very long time. But this sort of goes to, I think the larger question around, like, where does AI fit in all of this?
And it fits in, in just ease of use and ease of manipulation. And it also fits in, in scale and production of content.
And so it sort of fits with this trajectory that we've been on, where media is getting easier and easier to create, to disseminate, to build an audience around, to manipulate.
And that's, you know, because we have a much more open media environment with social media and other dynamics, and we have phones and cameras and all these things. And all of that can have so many positive effects, but it also makes manipulation really easy.
And it's very easy for anyone to create and spread something. And so AI just sort of exacerbates that.
Where it's like, if you wanted to make a fake or misleading image, now it is super easy and super fast to do that. You don't need to know Photoshop, you don't have to have advanced skills in these areas.
Deep fakes and other things are getting very easy and less technical to produce.
And so the impact is really just that ease of use, to create something that is sort of good enough. It doesn't have to be a super convincing thing to convince people who already want to believe what you're portraying. And there's also production at scale: I can produce tons of images, I can produce tons of video, I can produce tons of text, using AI to assist with my manipulation and influence campaign.
Philip Di Salvo:Speaking of elections, with all of those that happened in 2024, there was a lot of hype around the risks of AI, just as there was anytime before, when it was social media in 2016, for instance. Do you think those concerns were overblown?
Craig Silverman:On the one level, it's good that there is discussion and concern and sort of game planning of like, well, how might these new technologies and new systems and products assist with manipulation, assist with deception?
Like, I think that is a good and productive thing to think about because our media environment is ever evolving and any new thing that is out there can be used to manipulate as well. And so that kind of game planning is good.
But yes, there is an element where sometimes things get overheated and people talk about, like, oh, AI, that's it, that's, you know, going to be the end of democracy. People won't know what to believe and what to see.
And what you mentioned sort of with 2016: that was when my reporting about fake news got a huge amount of attention. And, you know, so did stories about the Macedonian, sort of, you know, the money-oriented, partisan publishers from Macedonia, and how the dynamics of it was basically that they realized the stuff that got the most traction on Facebook, and that earned the most money for them by people clicking on it and going to their websites with ads, was, you know, pro-Trump, anti-Clinton stuff, and the more false, the better. And that's what Facebook was rewarding.
Like, the vast majority of them, with the exception of like one guy, weren't really passionate about American politics. They just wanted to know what earned them money. And so that's what Facebook was rewarding.
And so that reporting, then when Trump wins, people are like, oh, it's the Macedonian teens and it's the fake news that got him elected. And, you know, so for me, it was a weird experience to have my reporting used for something that I didn't support at all.
Like, that's not the conclusion.
You can talk very realistically and in a serious way and make the point about the bad incentives and lack of oversight and new dynamics that these platforms and new types of media have brought on without having to go so far as to say, that's it, they decided the election.
And the difference was that in 2016, a lot of that talk came after the election, whereas this time the warnings came beforehand. And so we have sort of the lead-up hype of, oh, AI is going to fundamentally change elections and democracy. And so I think it's good to be thinking about it and good to be warning about it and tracking it.
But it's true, we could maybe calibrate some of the warnings a bit more, because then you get the other extreme reaction, which is: see, they were wrong again, you know, AI means nothing.
No, I mean, it is a very important new tool and dynamic being used for manipulation, but also being used in other productive ways as well. And we need to be looking at that and paying attention to it.
Philip Di Salvo:And what about the United States and its own presidential elections in 2024? Is there any kind of notable example of an attempt to deceive the public through AI, for instance?
Craig Silverman:Yeah, I mean, I'll point to two examples. So one, there was use of AI for audio.
So back during the primaries, there were robocalls, so automated phone calls, that went to voters in New Hampshire in the United States.
In advance of the primary, somebody cloned President Biden's voice and sent out these calls telling people not to vote, which would have potentially depressed his turnout and helped his opponent. He did have a challenger at that time.
And so it turned out there was a political consultant who had paid someone to clone Biden's voice, produce this message, and then they paid to get the robocalls. And they'd been caught and they were fined, I believe it was $6 million.
And that was, you know, that was a pretty interesting, probably the most interesting example of sort of deepfake stuff to try and influence the election. And it happened before the general election.
And then, you know, the other thing, which is a story that I worked on, which is we found like tens of millions of dollars, more than $20 million worth of ads that had been purchased on Facebook, Instagram, by people not necessarily looking to sort of influence politics, but to make money.
And so we found huge amounts of ads that were using deepfakes of Trump's voice and in some cases deepfake video to have his voice cloned and then match it. They were cloning Trump, they were cloning Eric Trump.
They were using a clone of Melania's voice and other people to basically try and sell really what appears to be low quality Trump merchandise. So they were targeting Trump supporters.
They weren't trying to change anybody's minds, but they were basically taking real video, interspersing that with deep faked audio and in some cases video to have Trump saying, like, oh, I've got this new coin and you really need to buy it, and this is the way to support me. But also, some of the scripts that they used did sort of push really divisive messaging.
You know, they talked about immigrants, they talked about migrants. And so they were not about political messaging to help Trump win the election. They were not about trying to change the outcome.
They were seizing upon people's political beliefs and the perception of Trump among his supporters to try and make money. And in some cases there were also AI-generated images pushing what turned out to be scam healthcare offers and other things as well.
And so people often think in terms of elections and like voting. But the reality is, like, a lot of the big impact from AI technology has actually been around scams and harassment and things like that.
And that's where arguably some of the biggest harm has taken place, which is using AI to clone people's voices and call up relatives and say, I'm in jail. I need you to help me and send money.
In some cases, there have been reports, and I haven't seen the official confirmation myself, where even police officers in local parts of the United States had their voices cloned in order to call people and say, hey, you know, you missed jury duty. You've got a fine out for you.
And so I think sometimes we get very focused on politics when it's, you know, AI generated sexual abuse material or scams or fraud. That's actually probably the worst stuff that's going on right now.
Philip Di Salvo:Yeah, most definitely. And with deepfake videos, I mean, there were reports that the vast majority of the synthetic content produced that way is actually sexual harassment.
Craig Silverman:Yes.
Philip Di Salvo:But the audio fakes that you mentioned are quite interesting, because I think those are really underrated in the level of concern that we have.
I ran an experiment with my students a few months ago, trying to spot whether fake audios of either Trump or Harris were actually effective in being credible to us. And both me, a journalist and professor, and my students, master's students, sometimes got it wrong.
So how would you potentially respond to this kind of audio fake? And what can the audience do to respond and be prepared?
Craig Silverman:Yeah, no, I think audio fakes are actually in some ways the most concerning because it's just audio. You have a lot less to work with from my perspective, as someone trying to figure out whether it's real or not.
In general, with AI detection tools, the advice I give, you know, students and journalists in trainings is: maybe this can be a helpful indicator, but you can never trust it 100%. Not all of these detection tools have been trained on all the models. You can't be guaranteed that they are actually equipped to detect what you're looking at.
And it's even more, I mean, there's far more detection tools around video, I think, than there are audio, although there are ones sort of coming out into the market. So audio, I think, is the biggest challenge.
And it's also because you don't have these other cues of, like, oh, does the audio match the video, and, you know, the person, are their lips moving weird? Is there stuff in the background that doesn't make sense?
There's far less material to actually dig into from a verification perspective, whether on a professional basis or just as a casual viewer, than the cues that people might pick up on naturally for a deepfaked video or for an image. And so I think audio presents really unique and specific challenges, because sometimes we're used to hearing voices in a poor quality recording.
And so sometimes just like a good enough kind of audio thing might be convincing for enough people. So, yes, I think audio is really tricky and in some ways the most concerning one.
And it's funny because it is one of the ones, you know, thinking of case studies, like years ago there was an audio file and I forget the exact circumstances, but one that really went viral in Brazil.
It was an audio message that was supposedly, I think, of Bolsonaro speaking, and him saying things that really would have damaged his credibility with voters.
And there's a case study of this in the most recent edition of the verification handbook written by some of the fact checkers in Brazil who tried to nail it down.
And they went through a process of, of course, like trying to get comment from Bolsonaro and his people, but also trying to track down the origin of it.
They took it to forensic audio experts at a university in Brazil and it was a really, really hard thing for them to be able to track down and to have any kind of certainty about. And so that is, I think, a very tricky thing about audio that still exists to this day.
Philip Di Salvo:Yeah. And one obvious question is, who's behind these AI-powered disinformation or misinformation campaigns?
In your experience, is it really foreign governments, as we usually assume, the famous Russian actors 2.0 with AI, or is it kind of smaller in scale overall?
Craig Silverman:I think it's in some ways it's kind of all of the above. We've certainly seen like Russian information operations incorporate AI elements into what they're doing.
And it could be as simple as AI generated headshots for fake social media accounts to all the way to sort of more deep fake videos and clips and things like that.
Then there's also individual people who are just kind of having fun with these tools and messing around with it and might be passionate about one topic or one candidate and deciding to use it. And so I think it's just as always, that element of attribution and understanding who's behind something and their exact motivation.
You sometimes get left in an unsatisfying spot in the digital environment, and sometimes we can get to a very, very high level of confidence in attribution.
But a lot of times it's just like, well, this thing is out there and it's being shared, and, you know, it seems to have been shared early on this platform. But we don't know who created it. We don't know why this originally started.
And so, you know, there's that sort of lack of attribution and traceability. At a certain point, these things get out there in the ether and so many people start interacting with them and seeing them that it sort of feels like they're real. And it is a real thing for some people in some cases. So it's tricky. But I do think, in some cases, you also see the campaigns themselves using AI-generated images of their own candidates. Like, Trump shared some of his own AI-generated images of him and other people.
You definitely saw the BJP in India using AI to assist with things like, you know, videos in different languages.
And you see candidates in other places, like in Indonesia, using it to help sort of rebrand and put leaders in a sort of more warm and fuzzy place. So the use of it is, I think, really across the board.
And it's not just for manipulation, but also sort of for branding and campaigning as well.
Philip Di Salvo:Yeah, Trump and the little animals, for instance. Yes, those kind of things.
Craig Silverman:Yeah.
Taking people who have a very harsh persona and making, you know, warm, fuzzy or funny memes of them is definitely something I think we saw over the last year. And it creates kind of easily shareable stuff.
And so that sort of AI slop element of taking someone and putting them in a completely different context really easily, even though some people might wonder whether it's real. But that's not the point. The point is like, the message and the mood and the sort of meme being conveyed by it.
Philip Di Salvo:And I have one question about journalism and the kind of investigations you do now with ProPublica, really targeting these campaigns, unmasking them, seeing who's behind them and what their tactics are. What's your comment on this kind of journalism? How do you position it in the contemporary investigative field, for instance?
Craig Silverman:The thing that I've really been trying to do now for, I guess, roughly around 10 years or so, comes from the fact that I think the average journalist still does not have the basic sort of digital investigative and digital verification skills that they should. It is not widely taught and widely known.
I do a lot of workshops for newsrooms and I'm still surprised the extent to which the average journalist is not familiar with things like reverse image search. Whereas I teach a university course here in Canada and usually around half of my students have used reverse image search.
Now maybe they've used it via Google Lens to, like, look at a piece of clothing they're interested in. But that's cool, fine. You know, at least they're using it and understand it. And so it's easy to shift their use and their thinking of it.
And so I think this work of sort of digital investigative work and incorporating that into journalism overall, we still have a lot of work to do.
And for me, that accountability lens of like trying to figure out who is behind something, I have always felt that that's important because again, you know, you want to be able to show and have some sense of accountability and maybe it helps with deterrence as well, of if someone has created and spread something that is, you know, truly malicious, truly damaging, scamming people, what have you, they are typically doing their best to hide their identity and to be able to keep doing it and to not be accountable for it. And so I think it's important to try to have that, that approach.
And overall, like these digital skills now can be helpful for not just a sort of like disinformation oriented or digital oriented investigation, but anytime you are dealing with trying to figure out people or companies or entities involved in something, there's often some type of digital trail there.
And if you're not able to dig into that and understand what might be available to you, then you're potentially missing out on some really important information or the things that might actually lead you to the real person or the real entity behind something.
For me, it's like I often try to just fold it into my overall responsibility as a journalist and into investigative work. People too often, I think, sort of put this in a box and say, oh, well, you know, that's stuff for disinformation reporters.
But my view is that there are fundamental skills that every journalist needs to have and I don't think we've been doing a great job of getting these spread as far as they need to be.
Philip Di Salvo:Craig, I have one last question for you.
Because we are recording this a few days after Mark Zuckerberg announced profound changes in how Meta's platforms are responding to content moderation and mis- and disinformation. And of course, in my view, it's going to be bad, and it's going to get worse than it is now.
And I wonder how you see that, especially in light of the kind of AI-powered content we discussed today.
Craig Silverman:Yeah, so it was in some ways a very surprising announcement to me and in some ways not surprising.
So I mean, the gist of it, like the core thing, is Zuck has said: we're getting rid of the fact-checking partners we have in the US, and we're going to move towards a community notes model like X has.
And also they're getting rid of and rolling back, you know, changing some policies that they had that sort of restricted that kind of speech: like, you can now refer to a trans person or a gay person as "it", you can now, you know, sort of criticize their right to exist or what have you.
So they've rolled back some policies, but also they're rolling back some of their automated content systems that might have removed or flagged posts. And so in that sense, it is a very significant change. I see it as really, in some ways, kind of a rollback to pre-late-2016. But the other part: the change itself, in some ways, wasn't super surprising to me. The part that I found really surprising is the way and the tone and the actual content of the announcement, where there's lots of ways that Meta could have framed it. They could have said, we want to try a different approach.
We feel like, you know, the fact checking approach isn't what we want to be doing, you know, et cetera.
They could have framed it in a lot of ways, but actually they really went to the extent where Mark Zuckerberg basically parroted a line, which I have not seen any evidence to back up, that, you know, the fact checkers in the US are too biased, they're too politically biased. Meta has more data than anyone else to be able to judge whether they are or not, and Meta has provided zero data, zero backing for that.
He was repeating a line that has been out there that is usually just used as a political attack, and he basically threw them under the bus and framed it as a free speech thing. And the thing that Meta is not acknowledging here is, number one, they have the data to actually prove whether there's been issues and bias.
They've never shared that. I'm not aware of any of that data of showing bias existing.
The second thing is that the fact checkers themselves were hired by Meta to review content on Meta's platform, to, you know, fact-check it, and then to send the results of that fact checking to Meta. They had absolutely zero control over how Meta's products used their fact checks or displayed them. Meta chose how to use them.
And that's the other thing: Zuck framed this as censorship. He said, well, the fact checking stuff was used to censor, and that was Meta's choice. And for eight years they have said it is not censorship.
We are labeling content, we are reducing the spread of false content. We are not censoring, we are not removing it, we are not preventing people from seeing it. We are just providing contextual information.
That is what Meta has said for eight years. And now Mark Zuckerberg is basically saying: no, for eight years, I guess, apparently, we've been lying or we've been wrong.
So it's shocking to me the way that they chose to say it, because I don't know that there's any going back on this. They've basically kind of gone full MAGA on it and have thrown the US Fact checkers under the bus, said they're biased, said this was censorship.
When Meta was the one who chose how to use the fact checks, Meta was the one who could see whether there was political bias. So that, to me was the part that was in a way shocking.
But I also think it's just genuinely how Mark Zuckerberg feels, and he's felt constrained for eight years because I don't think he ever liked that program.
And so beyond that, I mean, my thought over the last couple of days has just been: it's going to be a great boom time for viral hoaxes again in the US. It's going to be a great boom time for false claims.
Basically, it feels like a reset to 2016. And without that layer of oversight and, you know, the deterrence of the fact checkers, I think there's nothing to stop people in the US, but also people from outside of the US, from targeting people in the US with tons of viral hoaxes and fakes.
And so I just don't see how, especially in the short term, this is anything other than a deterioration of sort of the quality of information on Meta. But the last thing I'll say is I think crowdsourced fact checking should be part of the mix.
And I think a lot of journalists would say, you know, community notes has great potential, but you can't just turn it on. Like, you have to build a community and put resources into this and build it up over years.
So to remove the fact checkers in basically two months' time, and to say you're doing community notes, there's no way that program is going to be at a level where it can actually serve people the way it could. I hope that in the coming years it grows, because Meta has invested in it in a big way.
That would be great, but the short term here is going to be, I think, pretty bad.
Philip Di Salvo:Craig, that was my last question. Thanks a lot for your time. It was great to have you here on Machines that Fail Us.
Craig Silverman:Yes, thank you for having me.
Philip Di Salvo:The Machines that Fail Us podcast is produced by the Media and Culture Research Group at the Institute of Media and Communications Management at the University of St. Gallen together with the Communications Department of the University. The post production is curated by Podcast Schmiede.
The next episode of the podcast will be released next month. You can listen to all the episodes of Machines that Fail Us on the University of St.
Gallen website, the Human Error Project website and all major audio and podcasting platforms. This is Philip Di Salvo and, on behalf of the entire Media and Culture Research Group, thank you for listening.