
Beyond Deepfakes: How AI Impacts Democracies


Most discussions about the impact of generative AI on elections this year have centred on how the technology allows political groups to cheaply create deepfakes or mass disinformation campaigns. It’s a valid concern, given the global proliferation and influence of such campaigns.

In the United States, an AI-generated robocall impersonated President Joe Biden, falsely advising voters about the primary in New Hampshire. In Slovakia, AI-generated audio recordings falsely claimed a candidate planned to rig the election, and in Nigeria, manipulated audio clips implicated a presidential candidate in ballot tampering.

The affordability and efficiency of AI-driven misinformation campaigns enable them to reach large audiences swiftly, particularly on social media, where sensational and emotionally charged content spreads rapidly and usually before it can be fact-checked.

Fake content is a significant issue in how political actors utilise technology, but it’s not the most critical. The real trouble lies in how many politicians are obfuscating the debate and attempting to benefit from the confusion.

There is a growing trend of using generative AI as a scapegoat. The phenomenon of liars exploiting the perception that deepfakes are everywhere to dodge accountability has even been dubbed the “liar’s dividend” by American law professors Bobby Chesney and Danielle Citron.

In this year’s elections, an Indian politician claimed that genuine audio of him criticising party members was AI-generated. In Turkey, a candidate claimed a compromising video was a deepfake, although it was authentic.

This use casts doubt on all information, undermining public trust in genuine data and the possibility of a shared truth.

Modern authoritarian governments used similar tactics long before generative AI. They discredit any notion of reliable truth or independent sources to demobilise and demoralise their citizens. The resulting uncertainty and general lack of trust immobilise the population even in critical situations. Putin demonstrated this method once again when he refused even to call the war in Ukraine a war, insisting on the term “special military operation”.

These authoritarian governments still employ more traditional tactics like eliminating the opposition or controlling the media, but these more violent approaches are becoming increasingly unnecessary in societies overwhelmed by misinformation and doubt.

Even without central authoritarian governments orchestrating these processes, democracies now face a similar challenge.

Trust in democratic institutions has declined globally. A recent Edelman survey shows that 59% of Australians think that political leaders “are purposely trying to mislead people by saying things they know are false or gross exaggerations”.

This amplifies the perception that democratic systems are broken and increases the appeal of politicians who don’t play by the same rules.

One evident consequence of this erosion of shared reality is the trend of political campaigns based on pure propaganda and emotional messages. The 2024 US election is an example. Thus far, it has been dominated by sensationalism, personal attacks, and tribalism rather than discussions on problems, policies, and solutions. This shift fosters division and fear, impoverishes the political debate, and undermines democratic institutions.

There is no panacea to address these challenges. Educating the public about AI, deepfakes, and disinformation is crucial. By improving media and information literacy, citizens can become more discerning consumers of information and better equipped to identify and reject false content.

Investing in advanced technologies to detect and debunk deepfakes and other AI-generated misinformation can help mitigate the spread of false information.

It is also important to implement and enforce regulations that require transparency and accountability from technology companies. Policies can mandate clear labelling of AI-generated content and hold creators of malicious disinformation campaigns accountable.
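As a minimal sketch of what mandatory labelling could look like in practice, the functions below attach and check an explicit AI-provenance flag on a piece of content’s metadata. The field names (“ai_generated”, “generator”) are illustrative assumptions, not the schema of any real standard such as C2PA.

```python
# Hypothetical sketch of content labelling. Field names are assumptions,
# not a real standard's schema.

def label_content(metadata: dict, generator: str) -> dict:
    """Return a copy of the metadata with an explicit AI-provenance label."""
    labelled = dict(metadata)
    labelled["ai_generated"] = True
    labelled["generator"] = generator
    return labelled

def is_labelled_ai(metadata: dict) -> bool:
    """True if the content declares itself AI-generated."""
    return bool(metadata.get("ai_generated"))

meta = label_content({"title": "Campaign clip"}, generator="some-image-model")
print(is_labelled_ai(meta))  # True
```

The policy question, of course, is harder than the mechanism: labels only help if platforms are required to preserve and display them, and if stripping them carries consequences.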

If democracies don’t find ways to resolve the crisis of deepfakes and AI-generated disinformation and the consequent plunge in trust in democratic systems, at best we will see an impoverishment of political debate and policy. At worst, it could threaten the democratic endeavour entirely.

Information Obesity and Keeping The Brain Fit


It’s estimated that we now create more information every two days than human civilisation produced from the dawn of our species until 2003. On YouTube alone, users upload over 500 hours of video every minute. On Instagram, people share over 100 million photos daily. And more than 600 million blogs worldwide publish new content around the clock.

This deluge of information can lead to anxiety, a fear of missing out, or even what some experts call Information Obesity—the idea that we’re consuming loads of information without retaining, learning, or using much of it.

Some people argue the comparison with body fat is inappropriate and misleading, but I like the expression because it makes it easier to comprehend that too much information can be harmful and that sometimes we consume excessive amounts of it without noticing it.

Many times, I feel my brain is obese (as for my body, I’m sure :-)), and I have been trying to get it into informational shape.

One of the most important notions I came across recently is the need to be more intentional and disciplined about setting priorities for what information I want to consume and how to do it.

Defining this is tricky and personal. Not everything I read, listen to, or watch is for work or some practical purpose. I also do it for pleasure and distraction.

But as digital platforms become better at holding my attention for as long as possible, my old, laissez-faire approach to consuming information no longer works. Like many people, I have experienced significant amounts of time evaporating while scrolling social media – without anything to show for it or even remember afterwards.

Now, I use strategies and technologies to improve my chances of beating the algorithms.

One strategy is to reduce the number of sources I follow to a few trusted, high-quality ones. It helps me focus on what I find relevant and less on what makes me feel bloated and unsatisfied.

I actively choose when to read or listen to publications and podcasts, ensuring they occupy a privileged space on my devices and alerts. I follow routines for information consumption, including always listening to the same podcasts and reading the same outlets in the morning.

Having specific times for checking social media, news, and even guilty-pleasure snacks—like random TikTok videos—helps control constant distractions.

Because this is not easy, and there are engineers much brighter than me working to keep me in their apps forever, I use technology to keep on track.

I limit my digital consumption and subscribe to fewer newsletters, which automatically move to an email folder and don’t flash in my face whenever I open my inbox.

I’ve also been testing different systems, like AI RSS feeds and Gen AI filters, to tailor the information I receive, and content curation apps that let you save articles and videos to check them later.
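The filtering idea can be sketched in a few lines: parse an RSS feed and keep only the items that mention my priority topics. The sample feed and keyword list below are made up for illustration, and the substring matching is deliberately crude; a real setup would fetch live feeds over HTTP and match more carefully.

```python
# Minimal sketch of a keyword filter over an RSS feed (illustrative data).
import xml.etree.ElementTree as ET

SAMPLE_RSS = """<rss><channel>
  <item><title>Deep dive: how AI search engines cite sources</title></item>
  <item><title>Celebrity gossip roundup</title></item>
  <item><title>Slow productivity and deep work</title></item>
</channel></rss>"""

PRIORITY_TOPICS = {"ai", "deep work", "productivity"}

def filter_feed(rss_xml: str, topics: set) -> list:
    """Return titles of feed items that mention any priority topic."""
    root = ET.fromstring(rss_xml)
    titles = [item.findtext("title", "") for item in root.iter("item")]
    return [t for t in titles if any(k in t.lower() for k in topics)]

print(filter_feed(SAMPLE_RSS, PRIORITY_TOPICS))
```

The point is less the code than the posture: I decide the topics in advance, instead of letting an engagement-optimised feed decide for me.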

I’m using AI to reduce the time I spend on tasks I consider of low value so that I can consume in-depth content or do deeper work.

Of course, one thing is to create these rules and systems. Another is to turn what I consume—particularly work-related material—into something valuable and useful.

This has never been easy, but our constant snacking on superficial information is making it harder at the very moment we need it most. In a world where continued education is increasingly essential, finding, absorbing, and applying new and relevant knowledge is critical.

Carving larger chunks of time for in-depth work, reading longer-form content, or listening to a two-hour podcast that delves deep into a topic are some ways that have been helping me retain and apply information better.

The author Cal Newport, who advocates for deep work, develops this idea in his book Slow Productivity, which offers practical insights into fighting distraction and information overload.

None of this is easy, and I’m certainly far from perfect. However, learning to deal with information obesity will only become more critical, no matter your area of work or career stage.

PS—I hope you find this helpful and not just another info snack that makes you obese.

Apple Intelligence: Can Apple Do It Again?


Apple has a history of transforming existing technologies into revolutionary products. Although the iPhone wasn’t the first smartphone, the giant’s innovative integration, design, and marketing ushered in the smartphone era.

With this week’s launch of Apple Intelligence at its Worldwide Developers Conference (WWDC) in Cupertino, California, the company aspires to do the same with artificial intelligence. However, the mixed response to the event intensifies the question of whether Apple can pull off this feat again.

The company excels at integrating and refining technologies to make them accessible and desirable for consumers. Its strength lies in the ability to package complex innovations into sleek, user-friendly products. This ability has made technologies like touchscreens, mobile internet, and app ecosystems what they are today.

With Apple Intelligence, the objective is to leapfrog the competition using hardware integration, privacy, and excellent user experience. The partnership with OpenAI to incorporate ChatGPT into its ecosystem is a move in this direction and mirrors Apple’s historical approach of using alliances and acquisitions to fill technological gaps.

As usual, Tim Cook and his team are exploring their strengths. They are focusing on on-device processing and user privacy. Running AI functionalities on its own hardware (as Microsoft also proposes) can provide a faster, more secure experience. This is particularly appealing to consumers who are increasingly concerned about data privacy.

Apple’s ability to create intuitive, seamless user experiences is also a terrific asset. Embedding AI capabilities deeply into iOS and macOS can make AI tools more integral to daily use and generate practical and appealing applications for the average consumer.

The company’s control over both hardware and software potentially allows it to deliver better solutions, with stronger performance, reliability, and user satisfaction.

However, the AI landscape is more complex and competitive than during the iPhone’s debut. Apple is entering the AI space later than large competitors, which have aggressively been developing and promoting AI integrations. Its lack of proprietary AI research and cloud capabilities poses a significant challenge, making it more vulnerable to partners and less competitive in access to processing power.

While Apple’s focus on privacy and on-device processing can be seen as an advantage, it may limit AI’s potential compared to more data-intensive, cloud-based solutions.

What may prove critical for Apple is that, despite all the speed and innovation, the AI game is still in its infancy, and the company has a clear window of opportunity. Beyond the hype, the technology’s real utility for everyday tasks remains a challenge, and most consumers find it difficult to integrate AI into their daily routines in a meaningful way.

Even after massive investments and efforts, players like Microsoft, OpenAI, and Google haven’t managed to make AI’s practical benefits indispensable for users.

Apple has found or created these benefits in the past, and it has the potential to do so again in the AI space. However, the road ahead will likely be harder and less straightforward than ever.

Have you tried Perplexity AI? Here’s why you should


Many companies have tried to challenge Google Search over the years. Microsoft Bing seemed well-placed to disrupt it when it integrated Generative AI last year, but the hype has faded.

However, Google Search has never seemed so outdated and at risk as it does today. One challenger contesting its dominance is Perplexity AI, a company that has received extensive coverage in the US but very little attention in Australia.

Andy Konwinski, Denis Yarats, Johnny Ho, and Aravind Srinivas (the current CEO) launched it in 2022 and made headlines after raising almost US$100 million in the past year. Backing from names like Jeff Bezos, Tobias Lütke, and NVIDIA also helped to grab media attention.

Unlike traditional search engines, which often lead users down a rabbit hole of links, Perplexity AI offers a conversational interface, synthesising information from multiple sources to provide direct answers.

Crucially, Perplexity AI automatically shows all its sources – something ChatGPT and Gemini Advanced don’t do.

The free version uses a neural network architecture similar to an optimised GPT-3.5, while the paid version combines GPT-4, Anthropic’s Claude, and other algorithms.

The search experience is much more pleasant than clicking through links. It often speeds up the process and improves the ability to find the right source.

For me, these qualities have transformed Google Search into a support tool rather than my first search option. This shift directly challenges Google’s model, which relies on a massive user base and traffic to sell ads. Perplexity relies solely on subscriptions.

Google could probably implement the same model and kill Perplexity AI tomorrow. But this move would challenge its incredibly lucrative ad model, as no one knows how to monetise AI search conversations.

While Perplexity AI represents a significant leap forward, the battle to make Google Search obsolete is far from over. The new tool must expand its capabilities and refine its user experience, including adding image search and diversifying sources.

However, its threat to traditional search engines is real, offering a glimpse into the future of search in the age of generative AI.