
Expert Warns that AI Programs are Making it Harder to Spot Deep Fakes

The growing accessibility of artificial intelligence (AI) programs has made it easy for almost anyone to create deep fakes. A recent example is an AI-generated image of an explosion near the Pentagon that made headlines online and even briefly moved the stock market before it was exposed as a hoax. According to Cayce Myers, a professor in Virginia Tech’s School of Communication, it is becoming increasingly difficult to identify disinformation, particularly sophisticated AI-generated deep fakes. Myers believes we will see much more disinformation, both visual and written, over the next few years, and he emphasizes that spotting it will require media literacy and a savvy, skeptical eye toward the truth of any claim.

The Danger of AI-generated Disinformation

Myers explains that disinformation created with AI differs from traditional photo manipulation in both sophistication and scope. While Photoshop-style programs have been used to alter images for years, AI can produce convincing altered videos. Because disinformation is now a pervasive form of online content, such fake news can reach a much wider audience, especially if it goes viral. Myers identifies two main lines of defense: individuals and companies. On a personal level, examining sources, recognizing the warning signs of disinformation, and being diligent about what we share online can slow its spread. At the same time, the companies that produce AI content and the social media platforms where disinformation circulates will need to implement guardrails of their own.

The Need for Regulation of AI

Attempts to regulate AI are underway in the U.S. at the federal, state, and even local levels. Lawmakers are weighing a range of issues, including disinformation, discrimination, intellectual property infringement, and privacy. Myers warns, however, that legislating too quickly can stifle AI’s development and growth, while moving too slowly may open the door to serious problems; striking a balance will be a challenge. Because the technology has developed so fast, he adds, any mechanism to prevent the spread of AI-generated disinformation is unlikely to be foolproof. Awareness, therefore, remains key to limiting its spread.

adam1
