With a slate of major elections planned across the world in 2024, including in the UK, we cannot simply adopt a ‘wait and see’ approach when it comes to AI’s impact on electoral integrity
Grace Barnett, Head of Membership, Unlock Democracy
Democracy is not faring well in the digital age. Social media and 24-hour, online news have been charged with driving polarisation, undermining trust in politics, and spreading disinformation on a scale that has disrupted free and fair elections. Better access to information has been a fundamental ingredient of democracy’s success, but what happens when that information becomes unreliable, of poor quality, or just plain untrue?
Which brings us to the present day, and 2023’s hottest topic: AI. In particular, our attention and imagination have been caught by user-friendly, free-to-access generative AI tools like ChatGPT. These are tools which can create media, be it text, images, or audio, based on a few words or “prompts” provided by the user. This technology has big implications for the way we create, consume, and share information. In short, information is about to become very cheap indeed.
When information becomes cheap, it can also lose integrity. When the gatekeepers of knowledge are brushed aside, we are at greater risk of exposing ourselves to bad information and bad actors taking advantage of a media landscape where anything goes.
It’s likely that AI will grow ever more powerful and integrated into our society over the next decade. AI tools learn and improve from the data they process, meaning their capabilities can grow exponentially. This, combined with potential use cases in almost every corner of our lives, means it’s difficult to say exactly what AI development might look like, and how it could change our world.
With a slate of major elections planned across the world in 2024, including (probably) in the UK, we cannot simply adopt a ‘wait and see’ approach when it comes to AI’s impact on electoral integrity. Social media has been at the heart of some major disruptions to the democratic process in recent years. Throwing AI tools into the mix could be about to make things a whole lot worse.
Here are just a few ways that might happen.
1 – Use of Deepfakes will supercharge the existing problems with fake news and disinformation
Generative AI can be used to create misleading images, video, or audio of public figures – often called Deepfakes – which can be used to discredit political opponents. The most notorious example to date in the UK context was the Deepfake audio of Keir Starmer, supposedly caught on a hot mic bullying his staff, which was published during the Labour Party’s annual conference this year.
So far, most of these fakes have been easily debunked, but it seems only a matter of time before a Deepfake emerges that is contested enough to sow the seeds of doubt and distrust in the minds of many citizens.
The deeper threat here is to trust in news media itself. Once the idea that what we’re seeing with our own eyes could be an illusion takes hold, the ability to establish fundamental truths and a shared understanding of facts is undermined.
2 – AI will lower the barriers to access for all, and that includes bad actors
Generative AI tools can create convincing content quickly, and they are cheap and freely available. These days, a small, expert team and $400 can buy you an entire online disinformation campaign.
AI tools are not fundamentally bad. Indeed, many will use them for good or neutral ends. But they are universally available, and so are also going to be in the hands of those who will be actively using them to do harm.
3 – AI will leave us more vulnerable to cyber attacks
Last but not least, AI tools may be used to mount more sophisticated cyber attacks – such as phishing scams that mimic the voices of colleagues or loved ones – which threaten the infrastructure of elections and voting directly. We need only look to the successful hack of the Electoral Commission in 2021 to appreciate the risks here for UK democracy.
So, is democracy doomed?
If all this feels overwhelmingly bleak, it’s also worth remembering that AI tools can be used for good, and present real opportunities for progressive campaigners to work more efficiently and effectively on low budgets.
Governments across the world are also beginning to tackle the challenge of regulating AI tools without choking off innovation. Social media platforms must also play their part, and start to take their responsibility to protect election integrity more seriously.
But it’s important to note that, while AI could deliver a critical hit to trust in democracy, it won’t have been acting alone. In the UK, at least, we cannot ignore other factors which have been chipping away at trust in democracy. I’m talking about the rigid First-Past-the-Post voting system which leaves so many feeling voiceless, a scandal-ridden government that partied on while the rest of the country was in Covid lockdown, and new voter ID rules which could block over a hundred thousand people from voting at the next general election, to name but a few.
AI is going to be part of all our lives now, and there might be a rocky ride ahead as we adapt to this new technology as individuals and as a society. That’s why it’s never been more important to strengthen and renew our democratic systems and infrastructure, and ensure they are flexible enough to grow with a changing society.