
Expert Warns that AI Programs are Making it Harder to Spot Deep Fakes

The growing accessibility of artificial intelligence (AI) programs has made it easier than ever to create deep fakes. A recent example is an AI-generated image of an explosion near the Pentagon that made headlines online and even briefly moved the stock market before it was exposed as a hoax. According to Cayce Myers, a professor in Virginia Tech’s School of Communication, it is becoming increasingly difficult to identify disinformation, particularly sophisticated AI-generated deep fakes. Myers expects a surge in disinformation, both visual and written, over the next few years, and emphasizes that spotting it will require media literacy and savvy in examining the truth of any claim.

The Danger of AI-generated Disinformation

Myers explains that disinformation created with AI differs from traditional photo manipulation in its sophistication and scope. While Photoshop-style programs have been used for years, AI can now produce altered videos that are highly convincing. Because disinformation is a widespread source of online content, such fake news can reach a much larger audience, especially if it goes viral. Myers says the responsibility for combating it falls on two main parties: individuals and companies. Examining sources, recognizing the warning signs of disinformation, and being diligent about what we share online are personal ways to limit its spread. At the same time, the companies that produce AI content and the social media platforms where disinformation circulates will need to implement guardrails to keep it from spreading widely.

The Need for Regulation of AI

Attempts to regulate AI are underway in the U.S. at the federal, state, and even local levels. Lawmakers are considering a variety of issues, including disinformation, discrimination, intellectual property infringement, and privacy. However, Myers warns that regulating too quickly can stifle AI’s development and growth, while regulating too slowly may open the door to many potential problems; striking a balance will be a challenge. He adds that AI has developed so rapidly that any mechanism to prevent the spread of AI-generated disinformation is unlikely to be foolproof. Awareness, therefore, remains key to preventing its spread.
