Artificial intelligence is supercharging the threat of election disinformation worldwide, making it easy for anyone with a smartphone and a devious imagination to create fake – but convincing – content aimed at fooling voters.
It marks a quantum leap from a few years ago, when creating phony photos, videos or audio clips required teams of people with time, technical skill and money. Now, using free and low-cost generative artificial intelligence services from companies like Google and OpenAI, anyone can create high-quality “deepfakes” with just a simple text prompt.
A wave of AI deepfakes tied to elections in Europe and Asia has coursed through social media for months, serving as a warning for more than 50 countries heading to the polls this year.
I wouldn’t be so confident that intelligent and informed people are immune. They get duped by propaganda too. When we are dealing with entirely realistic-looking videos and photos, after spending our whole lives being trained to trust photo and video evidence, it’s going to be hard not to fall for some of this disinformation.
I wouldn’t trust myself never to fall for any of it just on the basis of “intelligence”, and I’m only partly dumb. Avoiding that will mean retraining a lifetime of habitual trust in recorded evidence.