Expert Warns that AI Programs are Making it Harder to Spot Deep Fakes

The growing accessibility of artificial intelligence (AI) programs has made deep fakes easier than ever to create. A recent example is an AI-generated image of an explosion near the Pentagon that made headlines online and even briefly moved the stock market before it was exposed as a hoax. According to Cayce Myers, a professor in Virginia Tech’s School of Communication, it is becoming increasingly difficult to identify disinformation, particularly sophisticated AI-generated deep fakes. Myers believes we will see far more disinformation, both visual and written, over the next few years, and he emphasizes that spotting it will require media literacy and savvy in examining the truth of any claim.

The Danger of AI-generated Disinformation

Myers explains that disinformation created with AI differs from traditional photo manipulation in its sophistication and scope. While Photoshop-style programs have been used for years, AI can create altered videos that are highly convincing. Because disinformation is now a widespread source of online content, such fake news can reach a much larger audience, especially if it goes viral. Combating it, Myers says, falls to two main parties: individuals and companies. Examining sources, recognizing the warning signs of disinformation, and being diligent about what we share online are personal ways to slow its spread. However, the companies that produce AI content, and the social media platforms where disinformation circulates, will also need to implement some level of guardrails to keep it from spreading widely.

The Need for Regulation of AI

Attempts to regulate AI are underway in the U.S. at the federal, state, and even local levels. Lawmakers are weighing a variety of issues, including disinformation, discrimination, intellectual property infringement, and privacy. However, Myers warns that legislating too quickly can stifle AI’s development and growth, while legislating too slowly may open the door to serious problems; striking a balance will be a challenge. Because the technology has developed so fast, he adds, any mechanism to prevent the spread of AI-generated disinformation is unlikely to be foolproof. Awareness, therefore, is key to limiting its spread.
