The profound challenges facing the world today appear in daily headlines on everything from global terror to inflation and catastrophic weather events.
But according to the World Economic Forum, the most serious global risk over the next two years is actually the spread of misinformation and disinformation aimed at widening social and political divisions.
The threat itself is not new, but the convergence of technological advances and geopolitical dynamics has made the stakes higher than ever.
AI-generated content, bots, algorithmic manipulation and other tools are being used to suppress dissent, manipulate elections and – as Avril Haines, the US Director of National Intelligence, recently reported – to shape false narratives, incite anger and spread hatred against Israel and the West.
In 2024, half the world’s population is participating in elections: in the United States, Mexico, India, Britain, Taiwan, France, and even (not so democratically) Iran. More than ever, lies spread by bad actors have the potential to empower tyrants and embolden terrorist groups.
“Misinformation” is generally defined as false or inaccurate information. It can be repeated unintentionally, but it can still do harm. “Disinformation” is more insidious: it aims to mislead people through deliberate lies.
Throughout history, autocrats and dictators have told lies to gain power and marginalize their enemies.
During the Cold War, for example, the Soviet Union saw Israel as an unwanted American outpost in the Middle East and attempted to undermine both nations through disinformation that drew on age-old anti-Semitic tropes.
The Soviets said Zionism was fascist, imperialist, racist and colonialist. They said that Zionists were Nazis – and that all Jews were Zionists.
The Soviets also successfully injected those lies into the halls of Western academia, poisoning students with the rhetoric we now hear on campuses across America.
Today, state-backed propagandists can be exponentially more effective using social media to rapidly spread malicious content and AI, which can easily make the absurd or evil seem real.
Microsoft, for example, reports that Russian social media accounts are once again spreading anti-US propaganda ahead of the US presidential election, often masquerading as legitimate accounts.
“AI-based [programs] are being used to create deepfakes of political leaders by adapting video, audio and photographs,” the World Economic Forum wrote in 2022. “Such deepfakes can be used to sow the seeds of discord in society and create chaos in the markets.”
Other global threats—hurricanes, terrorist attacks, rising food prices—are extremely damaging, but also easily identifiable. AI-generated disinformation is more dangerous because, by its very nature, it is cloaked to appear authentic.
Sometimes, problems stem from the platforms themselves. TikTok promotes itself as an open forum that fosters personal connections among its billion users. But the dark truth is that the Chinese-owned company appears to suppress topics critical of the Chinese government.
A report by the Network Contagion Research Institute found that TikTok suppressed conversations about China’s treatment of Uighurs and the Hong Kong protests, while amplifying conversations that undermine America and its allies.
When the Wall Street Journal created accounts registered as 13-year-old users, it took only a few hours for highly polarized content to appear, including pro-Palestinian and anti-Israel content.
Disinformation about Israel spreads far beyond TikTok.
A day after the Oct. 7 attacks in southern Israel, more than 40,000 fake social media accounts that had been inactive for more than a year suddenly began posting pro-Hamas messages hundreds of times a day, according to Cyabra, an Israel-based social threat intelligence company. The accounts hijacked hashtags like #IStandWithIsrael to reach wider audiences and spread pro-Hamas propaganda.
As videos circulated online showing Israeli women and children being held captive by Hamas, these fake profiles declared that no harm would come to them because Hamas was known for its “humanity and compassion.” We see these false narratives persist even today, even as former hostages have revealed the barbaric conditions in Hamas captivity.
Across Facebook, X, Instagram and TikTok, Cyabra found that 26% of profiles – one in four – participating in conversations about the Hamas-Israel war were fake. Frighteningly, the tactics seem to be working. While 79% of Americans support Israel in its fight against Hamas, 43% of 18- to 24-year-olds support Hamas – not surprising for a demographic that gets its news disproportionately through social media.
As the Director of National Intelligence recently warned, Iran is covertly encouraging protests against Israel on social media platforms, “seeking to foment discord and undermine confidence in our democratic institutions.”
“Americans who are targeted by this Iranian campaign,” she said, “may not be aware that they are interacting with or receiving support from a foreign government.”
It is time for government agencies and social media platforms to take more proactive steps to combat these disinformation campaigns. The rest of us have a role to play, too. We need to educate ourselves to identify misinformation, verify what we see online, seek out reliable sources, and—most importantly—apply critical thinking skills before hitting the share button.
Aviva Klompas is the former director of speechwriting at the Israeli Mission to the United Nations and co-founder of Israel Without Borders.