The growing accessibility of artificial intelligence (AI) programs has made it easier than ever to create deepfakes. A recent example is an AI-generated image of an explosion near the Pentagon that made headlines online and even briefly moved the stock market before it was exposed as a hoax. According to Cayce Myers, a professor in Virginia Tech’s School of Communication, it is becoming increasingly difficult to identify disinformation, particularly sophisticated AI-generated deepfakes. Myers expects far more disinformation, both visual and written, over the next few years, and he emphasizes that spotting it will require media literacy and savvy in examining the truth of any claim.

The Danger of AI-Generated Disinformation

Myers explains that the difference between traditional photo manipulation and AI-generated disinformation lies in the sophistication and scope of the latter. While photo-editing programs such as Photoshop have been around for years, AI can now create altered videos that are extremely convincing. Because disinformation has become a widespread source of online content, this kind of fake material can reach a much larger audience, especially if it goes viral. Combating it, Myers says, will fall to two main parties: individuals and the companies behind the technology. Examining sources, recognizing the warning signs of disinformation, and being diligent about what we share online are personal ways to curb its spread. At the same time, the companies that produce AI content and the social media platforms where disinformation circulates will need to implement some level of guardrails.

The Need for Regulation of AI

Attempts to regulate AI are under way in the U.S. at the federal, state, and even local level. Lawmakers are considering a variety of issues, including disinformation, discrimination, intellectual property infringement, and privacy. However, Myers warns that regulating too quickly can stifle AI’s development and growth, while moving too slowly may open the door to serious problems; striking a balance will be a challenge. Because the technology has developed so fast, he notes, any mechanism to prevent the spread of AI-generated disinformation is unlikely to be foolproof. Awareness, therefore, remains the key to limiting its spread.
