Beyond Deepfakes: How AI Impacts Democracies


Most discussions about the impact of generative AI on elections this year have centred on how the technology allows political groups to cheaply create deepfakes or mass disinformation campaigns. It’s a valid concern, given the global proliferation and influence of such campaigns.

In the United States, an AI-generated robocall impersonating President Joe Biden urged voters to skip the New Hampshire primary. In Slovakia, AI-generated audio recordings falsely claimed a candidate planned to rig the election, and in Nigeria, manipulated audio clips implicated a presidential candidate in ballot tampering.

The affordability and efficiency of AI-driven misinformation campaigns enable them to reach large audiences swiftly, particularly on social media, where sensational and emotionally charged content spreads rapidly, often before it can be fact-checked.

Fake content is a significant issue in how political actors utilise technology, but it is not the most critical one. The real trouble lies in how many politicians are muddying the debate and attempting to benefit from the confusion.

There is a growing trend of using generative AI as a scapegoat. American law professors Bobby Chesney and Danielle Citron have dubbed this phenomenon, in which liars invoke the perception of widespread deepfakes to avoid accountability, the “liar’s dividend”.

In this year’s elections, an Indian politician claimed that genuine audio of him criticising party members was AI-generated. In Turkey, a candidate claimed a compromising video was a deepfake, although it was authentic.

This tactic casts doubt on all information, undermining public trust in genuine evidence and the very possibility of a shared truth.

Modern authoritarian governments were using similar tactics long before generative AI. They discredit any notion of reliable truth or independent sources to demobilise and demoralise their citizens. The resulting uncertainty and general lack of trust immobilise the population even in critical situations. Putin demonstrated the method again when he refused even to call the war in Ukraine a war.

These authoritarian governments still employ more traditional tactics like eliminating the opposition or controlling the media, but these more violent approaches are becoming increasingly unnecessary in societies overwhelmed by misinformation and doubt.

Even without central authoritarian governments orchestrating these processes, democracies now face a similar challenge.

Trust in democratic institutions has declined globally. A recent Edelman survey shows that 59% of Australians think that political leaders “are purposely trying to mislead people by saying things they know are false or gross exaggerations”.

This amplifies the perception that democratic systems are broken and increases the appeal of politicians who don’t play by the same rules.

One evident consequence of this erosion of shared reality is the trend of political campaigns based on pure propaganda and emotional messages. The 2024 US election is an example. Thus far, it has been dominated by sensationalism, personal attacks, and tribalism rather than discussions on problems, policies, and solutions. This shift fosters division and fear, impoverishes the political debate, and undermines democratic institutions.

There is no panacea to address these challenges. Educating the public about AI, deepfakes, and disinformation is crucial. By improving media and information literacy, citizens can become more discerning consumers of information and better equipped to identify and reject false content.

Investing in advanced technologies to detect and debunk deepfakes and other AI-generated misinformation can help mitigate the spread of false information.

It is also important to implement and enforce regulations that require transparency and accountability from technology companies. Policies can mandate clear labelling of AI-generated content and hold creators of malicious disinformation campaigns accountable.

If democracies don’t find ways to resolve the crisis of deepfakes and AI-generated disinformation, and the consequent plunge in trust in democratic systems, at best we will see an impoverishment of political debate and policy. At worst, the crisis could threaten the democratic endeavour entirely.

Information Obesity and Keeping The Brain Fit


It’s estimated that we create more information every two days than human civilisation produced from the beginning of our species until 2003. On YouTube alone, users upload over 500 hours of video every minute. On Instagram, people share over 100 million photos daily. And globally, more than 600 million blogs are publishing new content every second.

This deluge of information can lead to anxiety, a fear of missing out, or even what some experts call Information Obesity: the idea that we consume loads of information without retaining, learning from, or using much of it.

Some people argue the comparison with body fat is inappropriate and misleading, but I like the expression because it makes it easier to comprehend that too much information can be harmful, and that we sometimes consume excessive amounts of it without noticing.

I often feel my brain is obese (regarding the body, I’m sure :-)), and I have been trying to get it into informational shape.

One of the most important notions I came across recently is the need to be more intentional and disciplined about setting priorities for what information I want to consume and how to do it.

Defining this is tricky and personal. Not everything I read, listen to, or watch is for work or some practical purpose. I also do it for pleasure and distraction.

But as digital platforms become better at holding my attention for as long as possible, my old, laissez-faire approach to consuming information no longer works. Like many people, I have experienced significant amounts of time evaporating while scrolling social media – without anything to show for it or even remember afterwards.

Now, I use strategies and technologies to improve my chances of beating the algorithms.

One strategy is to reduce the number of sources I follow to a few trusted, high-quality ones. It helps me focus on what I find relevant and less on what makes me feel bloated and unsatisfied.

I actively choose when to read or listen to publications and podcasts, ensuring they occupy a privileged space on my devices and alerts. I follow routines for information consumption, including always listening to the same podcasts and reading the same outlets in the morning.

Having specific times for checking social media, news, and even guilty-pleasure snacks, like random TikTok videos, helps control the constant distractions.

Because this is not easy, and there are engineers much brighter than me working to keep me in their apps forever, I use technology to stay on track.

I limit my digital consumption and subscribe to fewer newsletters, which are automatically moved to an email folder so they don’t flash in my face whenever I open my inbox. The filing step itself is a simple rule, as the sketch below shows.
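For anyone who prefers to script that rule rather than rely on a mail provider’s built-in filters, here is a minimal Python sketch using the standard-library imaplib module. The server, credentials, sender address, and folder name are all hypothetical placeholders, and it assumes the target folder already exists on the server.

```python
import imaplib

# Hypothetical placeholders: substitute your provider's IMAP host,
# your own credentials, the newsletter's sender, and a real folder.
HOST = "imap.example.com"
USER = "me@example.com"
PASSWORD = "app-password"  # ideally an app-specific password
SENDER = "digest@newsletter.example.com"
FOLDER = "Newsletters"     # must already exist on the server

with imaplib.IMAP4_SSL(HOST) as imap:
    imap.login(USER, PASSWORD)
    imap.select("INBOX")

    # Find inbox messages from the newsletter sender.
    _, data = imap.search(None, f'(FROM "{SENDER}")')
    for num in data[0].split():
        # Copy each message to the folder, then mark the inbox
        # copy deleted so it stops cluttering the inbox.
        imap.copy(num, FOLDER)
        imap.store(num, "+FLAGS", "\\Deleted")

    imap.expunge()  # permanently remove the flagged inbox copies
```

Run on a schedule, say via cron, a script like this keeps the inbox clear without touching the newsletters themselves.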

I’ve also been testing different systems, like AI-powered RSS readers and generative-AI filters, to tailor the information I receive, as well as content-curation apps that let me save articles and videos for later. The sketch below illustrates the basic filtering idea.
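As an illustration of what such a filter does under the hood, here is a minimal Python sketch using the third-party feedparser library. The feed URL and keyword list are hypothetical; a generative-AI version would replace the keyword test with a call to a language model.

```python
import feedparser  # third-party: pip install feedparser

# Hypothetical feed and topics -- substitute your own.
FEED_URL = "https://example.com/feed.xml"
KEYWORDS = {"ai", "democracy", "deep work"}

feed = feedparser.parse(FEED_URL)
for entry in feed.entries:
    # Match against the title and summary, case-insensitively.
    text = (entry.get("title", "") + " " + entry.get("summary", "")).lower()
    if any(keyword in text for keyword in KEYWORDS):
        # Only matching entries reach the reading queue;
        # everything else is silently dropped.
        print(entry.get("title"), "->", entry.get("link"))
```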

I’m using AI to reduce the time I spend on tasks I consider of low value so that I can consume in-depth content or do deeper work.

Of course, creating these rules and systems is one thing. Turning what I consume, particularly work-related material, into something valuable and useful is another.

This has never been easy, but our constant snacking on superficial information is making it harder just when we need the skill most. In a world where continuing education is increasingly essential, finding, absorbing, and applying new and relevant knowledge is critical.

Carving out larger chunks of time for in-depth work, reading longer-form content, or listening to a two-hour podcast that delves deep into a topic are some of the ways that have helped me retain and apply information better.

The author Cal Newport, who advocates for deep work, develops this idea in his book Slow Productivity, which offers practical insights into fighting distraction and information overload.

None of this is easy, and I’m certainly far from perfect. However, learning to deal with information obesity will only become more critical, no matter your area of work or career stage.

PS—I hope you find this helpful and not just another info snack that makes you obese.