Week 10: The Second Rise of Misinformation – 1/31 [11:58 AM]

In our digital age, information exists in spades, instantly accessible to anyone with an internet connection, which leads to… misinformation. Why is this a problem? When discussion of online misinformation first broke out in the early 2010s, people were concerned about edited or modified content, such as falsified videos or images, which made it hard to trust what they saw online. Now, our collective attention has shifted to AI. These AI models make it possible for vast quantities of falsified documents and media to completely saturate the internet with misinformation and change society at large. These models may be held under tight lock and key by corporate giants such as Microsoft and Google to prevent this behavior, but recent developments and analyses call these companies’ claims of “safety” into question.

Despite these companies’ claims about the relative “safety” of their AI systems, the models are trained on tremendous amounts of data with many sources of inherent bias. This bias can manifest in these systems spreading the misinformation that they process. Unfortunately, the problem isn’t limited to ChatGPT and other text-generation models that require human input. These models influence the image-recognition algorithms in many pieces of software today, shape your views by changing what media you consume, and can even steer discussions online, as more and more AI accounts log in to popular social media platforms such as TikTok, Twitter (R.I.P.), and the like.
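To make that “bias in, bias out” point concrete, here is a minimal toy sketch, nothing like a real large language model, with the corpus and function names invented purely for illustration: a tiny bigram generator trained on a one-sided corpus can only echo that skew back out.

```python
import random
from collections import defaultdict

# Toy illustration (not any real model): a tiny bigram text generator.
# If the training corpus is skewed, the generated text repeats the skew,
# which is the basic mechanism behind "bias in, bias out."

def train_bigrams(corpus):
    """Count which word follows which in the training text."""
    model = defaultdict(list)
    words = corpus.split()
    for current, following in zip(words, words[1:]):
        model[current].append(following)
    return model

def generate(model, start, length=8):
    """Walk the bigram table, sampling the next word at each step."""
    out = [start]
    for _ in range(length):
        followers = model.get(out[-1])
        if not followers:
            break
        out.append(random.choice(followers))
    return " ".join(out)

# A deliberately one-sided corpus: the false claim is all the model ever sees.
corpus = "the moon landing was faked the moon landing was faked the moon landing was staged"
model = train_bigrams(corpus)
print(generate(model, "the"))  # e.g. "the moon landing was faked the moon landing was staged"
```

A real model has billions of parameters instead of a lookup table, but the same principle holds: it can only reproduce patterns present in its training data, including the false ones.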

That last point about the risks of AI-driven content recommendation is more significant than most realize. These algorithms, employed by social media platforms and news aggregators, tailor content to users’ preferences, essentially creating “echo chambers” of users who accept this misinformation and pass it along the rest of the information superhighway through other online channels and chats. Repeated exposure to content aligned with their existing beliefs, which these AI systems inadvertently fortify, pulls users into these “private clubs” on the internet, making it challenging for accurate information to penetrate these isolated digital enclaves.
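As a thought experiment, here is a minimal sketch of how that feedback loop can arise. This is a hypothetical toy, with every catalog item and function invented for illustration; real platform rankers are vastly more complex. Simply recommending whatever resembles past clicks mechanically narrows the feed into an echo chamber.

```python
# Toy sketch of an engagement-driven recommender (purely hypothetical; no real
# platform's algorithm is shown here). Ranking items by similarity to what a
# user already clicked narrows the feed toward one viewpoint over time.

CATALOG = {
    "a": {"topic": "politics-left"},
    "b": {"topic": "politics-left"},
    "c": {"topic": "politics-right"},
    "d": {"topic": "science"},
    "e": {"topic": "sports"},
}

def recommend(click_history, k=2):
    """Score each unseen item by how often its topic appears in the history."""
    topic_counts = {}
    for item in click_history:
        topic = CATALOG[item]["topic"]
        topic_counts[topic] = topic_counts.get(topic, 0) + 1
    unseen = [i for i in CATALOG if i not in click_history]
    # Highest score = the topic the user has engaged with most: the echo chamber.
    return sorted(unseen, key=lambda i: -topic_counts.get(CATALOG[i]["topic"], 0))[:k]

print(recommend(["a"]))  # ['b', ...] -- more of the same topic ranks first
```

Each click reinforces the very topic counts that decide the next recommendation, so the loop closes on itself without anyone explicitly designing an “echo chamber.”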

Moreover, AI can be intentionally exploited as a tool for generating and disseminating misinformation. Deepfakes, for instance, leverage AI to create hyper-realistic but entirely false media. If the recent incident with Taylor Swift is anything to go by, these tools represent a real danger to society. In the wrong hands, deepfakes have been used to manipulate political events, fabricate statements from officials, and spread false narratives and straight-up propaganda.

We have yet to see the full extent of the consequences of releasing these tools, or of the relationship between AI and misinformation in general. Going forward, the tech companies that provide these models need, more than ever, to set guidelines and terms of service for their use, and to work with governments to regulate these technologies. Until those changes are made, we can only wait, watch, and conduct research on our current climate of misinformation to keep it from spreading any further, and to create a better internet.

Comments

  1. Hi Saahith! I've seen a lot of videos and articles warning against the dangers of newer AI, and your blog effectively summarizes most, if not all, of their main points. Deepfaking and hyper-realistic image generation are two terrifying things brought about by the rise of AI, both of which, as you mentioned, fall under the umbrella of misinformation. However, I think there are a lot of good things about AI as well, and it's too early to tell whether or not it will be a truly detrimental addition to society. Your hyperlink usage is really good and provides a source for pretty much every topic or event that may need elaboration throughout the blog. I think it's impressive that you were able to integrate six total hyperlinks, something unseen in most blogs. Great job!

  2. Little does Veeramani know, I have been sitting on the edge of my seat, waiting for yet another masterfully written blog by Saahith Blogger himself, CEO of blogger.com. When I saw this post, I must admit I pumped my fist in the air, stomped my legs on the ground, and may have let loose a "Yahoo!" (not the company) myself. Fake news has always been a prevalent issue in the online space. Yet, as a newer technology, moderation online has always been belittled or treated as secondary. And even when its harms are contested, it is mainly done in a rather overly antagonistic or fear-mongering way, with people claiming that the internet and social media are outright harmful and should be removed. In addition, the nature of the internet naturally promotes extremes and partisanship, with algorithms pushing for interaction and baiting. Either you see someone ignoring the problem or arguing for no changes, or you see someone silly arguing to shut down the internet, making all legitimate arguments irrelevant in the face of goofy and evident biases. Companies purposefully create echo chambers, hoping to trap users in a cycle of "feel-good" content or "rage baiting," deriving profit from such harmful practices. The January 6th incident is an example of the harms of such echo chambers. Yet Section 230, an act from 1996 that is thus super out-of-date, essentially provides federal immunity for the large companies that perpetuate these shady and harmful tactics. At the time, Section 230 was written out of fear that small, random forum owners would get in trouble for something an unrelated user on the forum wrote. Now, it is supported by big tech companies who wield the power of "donations" and lobbying.

