
By Marcelina Horrillo Husillos, Journalist and Correspondent at The World Financial Review

Old tricks to manipulate and control public opinion in the era of communication: the impact of noise and biased information on audiences

We are well into the Fourth Industrial Revolution, and a new relationship between technology and geopolitics is emerging. Artificial intelligence, blockchain, the metaverse, Web3, Web4, tokenization, robotics, and 5G are quickly becoming the front lines of new work and social systems.

The race for AI proliferation intensifies geopolitical rivalries and corporate competition as countries seek to harness AI’s potential for economic advantage, technological supremacy, and influence over global norms and standards.

The world is increasingly fragmented, yet communication remains the most significant pillar of globalization. The tools for communication, however, are primarily provided by private “Big Tech” companies, which are scarcely accountable to the nation-states in which they operate and whose national roots can make them be seen as channels of outside influence, be it American, Chinese, or anybody else’s.

Democratic societies may face challenges from AI-generated misinformation during elections, not to mention mass unemployment or societal pushback from rapid technological change. Authoritarian regimes may find that AI empowers individuals in ways they do not welcome, while poorer countries may suffer from a rapidly widening “digital divide” that deepens economic inequality. Ultimately, nations will pursue AI-driven development according to their prevailing political, societal, and economic models.

The danger of using AI for domination is real, and how nations handle the potential social, cultural, and political disruptions from AI will also affect their relative power. Leading-edge communication channels speed up the flow of information, but paradoxically create saturation and unnecessary noise, sowing confusion about almost any topic among audiences.

AI and Trust

Thousands of CEOs, technologists, researchers, academics, and others signed an open letter in early 2023 calling for a pause in AI deployments, even as millions of people were starting to use ChatGPT and other generative AI systems.

The March letter starts by calling out AI’s “profound risks to society and humanity” and chastising AI labs for engaging in “an out-of-control race to develop and deploy ever more powerful digital minds that no one can understand, predict, or reliably control.”

They are not the only ones worried about AI: 46% of respondents to a February 2023 Monmouth University poll said AI would do equal amounts of harm and good, while 41% said the technology would do more harm than good. Only 9% of respondents believed computer scientists could develop AI that would benefit society.

Although AI has been tasked with creating everything from computer code to visual art, it lacks original thought, and its use in some circumstances demonstrates both its limits and its potential to cause harm. The following real-world examples show how AI can be used inappropriately:

In late spring 2023, a New York lawyer faced judicial scrutiny for submitting court filings citing fictitious cases that had been made up by ChatGPT. The lawyer acknowledged using ChatGPT to draft the document and told a federal judge that he didn’t realize the tool could make such an error.

In 2019, a video appearing to show a drunk Nancy Pelosi, a California Democrat who was then U.S. House Speaker, circulated online. The video had been manipulated, and its believability, along with that of the deepfakes (media altered or generated using AI) that followed, set off alarms about how AI-generated content could be used to distort truth and spread misinformation.

Another well-publicized example of a poorly executed AI use case came in 2016, when Microsoft released a chatbot on Twitter. Microsoft engineers designed the bot, named Tay, to act like a female teenager and expected it to learn to sound more like other teens as it engaged with users. Instead, within hours, users had coaxed Tay into posting offensive and inflammatory messages, and Microsoft took the bot offline.

AI makes predictions based on algorithms and the training data it has been fed. Although machine learning lets a system improve with more data over time, it does not have the human capacity to be selective, to build criteria around ideas, or to exercise unbiased common sense. It is a fast, effective tool, but one that largely recombines the data it has absorbed from the internet rather than moving beyond it.
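To make that limitation concrete, here is a minimal sketch: a toy bigram text generator trained on an invented three-sentence corpus (both the corpus and the function names are illustrative, not taken from any real system). It shows why a purely statistical model can only echo its training data: every word it emits comes from a transition it has already seen.

```python
# A toy illustration of why a statistical model can only echo the patterns
# present in its training data: a bigram text generator that has no notion
# of truth or novelty beyond the word pairs it has already observed.
import random
from collections import defaultdict

# Illustrative, made-up "training data".
corpus = (
    "ai systems learn patterns from data . "
    "ai systems repeat patterns from data . "
    "biased data produces biased patterns ."
).split()

# Build a table of observed word-to-next-word transitions.
transitions = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    transitions[current_word].append(next_word)

def generate(start: str, length: int = 8) -> str:
    """Generate text by sampling only transitions seen during 'training'."""
    words = [start]
    for _ in range(length):
        candidates = transitions.get(words[-1])
        if not candidates:
            break  # the model cannot continue beyond what it has seen
        words.append(random.choice(candidates))
    return " ".join(words)

print(generate("ai"))  # output recombines the corpus; it never adds new facts
```

Modern generative models are vastly more sophisticated than this sketch, but their dependence on the statistical patterns of their training data, biases included, is the same in kind.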

Tech Shaping Geopolitics

Geopolitical swing states will have a meaningful role in shaping the AI-enabled future. In particular, the UK, the UAE, Israel, Japan, the Netherlands, South Korea, Taiwan, and India are key players in this category. Moreover, these players may form innovation blocs, creating alliances and partnerships with more dominant states or cooperating to pursue common goals.

The most profound impact of generative AI may be on economic growth. Goldman Sachs Research estimates a baseline case in which the widespread adoption of AI could add roughly 1.5 percentage points to annual productivity growth over a ten-year period, lifting global GDP by nearly $7 trillion. In the upside case, that uplift reaches 2.9%. These outcomes are not guaranteed, however, and will be determined by four key components: energy, computing, data, and models.
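As a back-of-the-envelope illustration (generic compound-growth arithmetic, not Goldman Sachs Research’s own methodology), an extra 1.5 percentage points of annual productivity growth sustained over a decade would raise the level of productivity by roughly

$$(1 + 0.015)^{10} - 1 \approx 0.16,$$

that is, about 16 percent, which is why apparently small annual gains carry such large aggregate stakes.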

The technology’s development will shape its geopolitical effects. Next year will see several key milestones for generative AI: the world’s three largest democracies, and approximately 41% of the global population, will take part in national elections. AI will continue to accelerate and to be adopted by state and commercial actors for everything from defense to health care to education. By the end of 2024, we will have a better idea of how AI will transform scientific discovery, labor, and the balance of power.

Noise in the Communication Era

In the so-called communication era, misinformation and disinformation are ramping up across the media and the internet. The Oxford Internet Institute’s global inventory of organized social media manipulation found organized media manipulation campaigns in 81 surveyed countries, up from the 70 identified in 2019. Governments, public relations firms, and political parties are producing misinformation on an industrial scale, according to the report. It shows that disinformation has become a common strategy, with more than 93% of the surveyed countries (76 out of 81) seeing disinformation deployed as part of political communication.

Deliberately creating fake news and inaccurate content drives the recipient’s attention toward other matters, manipulating their perception and, in consequence, their opinions and behavior. Generating public noise has long been a standard tactic of dominant powers seeking to sow confusion in public opinion. A public over-saturated with tons of disinformation becomes confused, weakened, and unable to see what is happening around it.

Fear is also a powerful tool that can blur people’s logic and change their behavior, and the cumulative effect of these individual judgments over time shapes public opinion. Those who control the levers of power always tend to “manage” information and, in doing so, to present themselves in the best light. Public trust is the stock in trade of all who hold office; once lost, it is very hard to rebuild, whether in the public or the private sector.

Conclusion

We are at the beginning of a new order, and its future is uncertain. The long-term effects of technological revolutions do not become clear overnight.

Geopolitical competition is a constant, but the technologies that animate that competition are not. While the United States, China, and Russia do not agree on many things, they all acknowledge that AI could reshape the balance of power.

Today’s geopolitical rivals are putting AI at the center of their national strategies. In 2017, Russian President Vladimir Putin said, “Artificial intelligence is the future not only of Russia but of all of mankind.” Five years later, Chinese Communist Party General Secretary Xi Jinping declared, “We will focus on national strategic needs, gather strength to carry out indigenous and leading scientific and technological research, and resolutely win the battle in key core technologies.” And, in 2023, US President Joe Biden summarized what AI will mean for humanity: “We’re going to see more technological change in the next 10—maybe the next 5 years—than we’ve seen in the last 50 years…Artificial intelligence is accelerating that change.”

Today, there are two AI superpowers, the United States and China. But they are not the only countries that will define the technology’s future. Earlier this year, we identified a category of geopolitical swing states—non-great powers with the capacity, agency, and increasing will to assert themselves on the global stage. Many of these states have the power to meaningfully shape the future of AI. There are also emerging economies that have the potential to reap the rewards of AI—if the right policies and institutions are established—and whose talent, resources, and voices are essential to ensure that the creation of a human-like intelligence benefits all of humanity.