The spread of misinformation is nothing new, but it has gained access to a virtual superhighway on the internet. From video clips to personalized messages on your smartphone to social media networks, misinformation and conspiracy theories are circulating at an all-time high, especially during the COVID-19 pandemic. After all, busy people don’t have time to fact-check everything they see or read online, and nefarious actors understand and prey on this new reality.
The Threat Posed by Misinformation and Conspiracy Theories
The proliferation of news and information sources targeting niche communities, the personalization of social media feeds, and other developments in 21st-century media have made it increasingly difficult to reference a shared, objective reality. Even mainstream media outlets often amplify misinformation when it serves a particular agenda or is perceived as something their viewers or readers want.
Conspiracy theories may fall flat when they’re ultimately put to the test, but the damage they cause along the way is real. For instance, the violent insurrection at the United States Capitol in January 2021 was triggered by widely debunked claims that the 2020 U.S. election was rigged. Likewise, fear of the COVID-19 vaccine, fueled by misinformation about deaths supposedly caused by the vaccines themselves or by deaths in already vulnerable nursing home populations reported out of context, could prolong the pandemic if enough people refuse to get vaccinated.
Catching these conspiracy theories early, as they form and spread through social media, can help authorities defuse them sooner and, ideally, prevent the mayhem that can result once they take on lives of their own. It turns out that AI is an extremely useful tool for cutting through the fog of misinformation.
Tracking Conspiracy Theories and Misinformation With the Help of AI
Real conspiracies are, by definition, nefarious schemes deliberately hidden and planned by a small group of people. Conspiracy theories, in contrast, are typically constructed through broad collaboration and in broad daylight. When a conspiracy theory is taking shape, the tell-tale sign is usually a complex narrative that attempts to link together several seemingly unrelated people, places, and events.
To shed light on this phenomenon, a University of California data analytics group led by professors Timothy Tangherlini and Vwani Roychowdhury has developed an automated, machine learning-based process for spotting social media activity that shows the signs of misinformation. The goal is to make it easier to monitor online conspiracy theories and prevent the real-world harm they can prompt.
Since conspiracy theories are often cooked up on social media networks, the UC analytics group realized that identifying certain patterns, such as disjointed rumors that gradually coalesce into a comprehensive narrative, could help reveal conspiracy theories and their origins. Machine learning algorithms excel at finding and making sense of patterns in data, so the researchers set out to create a set of machine learning tools that identify misinformation based on how sets of people, places, and things are related.
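To make the idea concrete, here is a minimal sketch of how relationships between people, places, and things can be pulled out of raw posts and stitched into a narrative graph. This is purely illustrative and not the UC team’s actual pipeline: it assumes the spaCy and NetworkX libraries, spaCy’s small English model, and a handful of fabricated example posts.

```python
# Illustrative sketch only (not the UC researchers' method): extract named
# entities from each post and link entities that co-occur, so scattered rumors
# gradually knit together into one narrative graph.
# Assumes spaCy's small English model is installed:
#   python -m spacy download en_core_web_sm

import itertools
import networkx as nx
import spacy

nlp = spacy.load("en_core_web_sm")

def build_narrative_graph(posts):
    """Build a weighted co-occurrence graph of people, places, and things."""
    graph = nx.Graph()
    for post in posts:
        doc = nlp(post)
        # Keep only entity types that roughly map to "people, places, and things."
        entities = {ent.text for ent in doc.ents
                    if ent.label_ in {"PERSON", "ORG", "GPE", "FAC"}}
        # Every pair of entities mentioned in the same post gets (or strengthens) an edge.
        for a, b in itertools.combinations(sorted(entities), 2):
            weight = graph.get_edge_data(a, b, {}).get("weight", 0) + 1
            graph.add_edge(a, b, weight=weight)
    return graph

# Fabricated posts, for illustration only.
posts = [
    "John Smith was seen near Comet Pizza in Washington last week.",
    "People say Comet Pizza is secretly connected to the Smith Foundation.",
]
graph = build_narrative_graph(posts)
print(graph.number_of_nodes(), "entities,", graph.number_of_edges(), "connections")
```

The design choice here mirrors the intuition in the paragraph above: each post contributes a few small, local connections, and the conspiracy narrative only becomes visible once many posts are layered into a single graph.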
The team at UC (made up of researchers at both UC Berkeley and UCLA) tested its model by analyzing more than 17,000 posts on 4chan and Reddit forums discussing the so-called “Pizzagate” theory. Broadly speaking, the model set out to identify which elements of the narrative were substantiated (or unsubstantiated) and how they were connected, if at all.
To gauge the model’s effectiveness, the researchers compared its conclusions to illustrations published by the New York Times outlining the Pizzagate narrative. Not only did the model’s output match the Times illustration, it also added an extra layer of detail about how the people, places, events, and things were linked to one another. They found that the actual facts underlying the broader Pizzagate theory separated into four unrelated narratives once the false interpretations connecting them were removed.
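A rough illustration of that key test, continuing the hypothetical graph sketched above: prune the weakly supported connections (the claimed links with little evidence behind them) and check whether the “single” narrative falls apart into several disconnected sub-stories. The minimum-weight threshold here is an arbitrary assumption, not a value from the research.

```python
# Continuing the hypothetical graph from the previous sketch: remove edges with
# little support, then see how the narrative splits into separate components.

import networkx as nx

def prune_and_split(graph, min_weight=3):
    """Drop edges backed by fewer than min_weight co-mentions, then list the pieces."""
    pruned = graph.copy()
    weak_edges = [(u, v) for u, v, w in pruned.edges(data="weight", default=1)
                  if w < min_weight]
    pruned.remove_edges_from(weak_edges)
    pruned.remove_nodes_from(list(nx.isolates(pruned)))
    # Each connected component is a self-contained cluster of people, places, and things.
    return [sorted(component) for component in nx.connected_components(pruned)]

for i, story in enumerate(prune_and_split(graph), start=1):
    print(f"Sub-narrative {i}: {', '.join(story)}")
```

If the entities only hang together because of thinly supported links, the graph shatters into independent clusters, which is exactly the pattern the researchers reported for Pizzagate.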
The same approach that let the researchers expose the falsehood of the Pizzagate theory from social media chatter could, in principle, be used to detect other conspiracy theories in the early stages of their development. In essence, it could provide an “early warning system” that a critical mass of individuals is being mobilized around misinformation.
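What might such an early warning look like in code? One toy version, again purely an assumption rather than anything described by the researchers, is to rebuild the narrative graph at regular intervals and flag the moment previously separate clusters of entities merge into one fast-growing component.

```python
# Toy early-warning check (illustrative assumption, not the UC team's system):
# compare snapshots of the narrative graph and alert when the largest cluster
# of linked entities grows sharply between snapshots.

import networkx as nx

def largest_component_size(graph):
    """Size of the biggest cluster of connected entities, or 0 for an empty graph."""
    return max((len(c) for c in nx.connected_components(graph)), default=0)

def should_alert(previous_graph, current_graph, growth_threshold=2.0):
    """Alert when the biggest cluster roughly doubles from one snapshot to the next."""
    before = largest_component_size(previous_graph)
    after = largest_component_size(current_graph)
    return before > 0 and after / before >= growth_threshold
```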
Beyond Conspiracy Theories: The Potential of AI and Machine Learning
As with other disruptive, groundbreaking technologies, AI, and machine learning in particular, has the potential to transform not only business processes but also our approach to social problems such as the spread of misinformation. The sky’s the limit for how this technology may be applied across so many aspects of our lives, which means there’s probably an existing (or potential) AI application for your passion project. It’s the “Swiss Army knife” of technology, adaptable to countless uses.
Given its rapid development and nearly limitless uses, professionals with AI knowledge and the hands-on ability to wield its power are in high demand. Wherever there’s demand, high wages and plentiful career opportunities are sure to follow. Even if you’re a recent college graduate, it’s likely that you haven’t learned the latest AI and machine learning skills and theory. Thankfully, you have many options for getting certified in this field from the comfort of your own home.
Simplilearn’s proven, world-class programs in AI and machine learning are taught by today’s leaders in the field and include self-guided coursework along with live, online classes, and hands-on projects. These programs are focused on getting you career-ready upon completion. Whatever your passion is, whether it’s business intelligence or rooting out potentially dangerous conspiracy theories, AI and machine learning are powerful tools.