Categories: Technology

Expert Warns that AI Programs are Making it Harder to Spot Deep Fakes

The growing accessibility of artificial intelligence (AI) programs has made it easier than ever to create deep fakes. A recent example is an AI-generated image of an explosion near the Pentagon, which made headlines online and even briefly moved the stock market before it was debunked as a hoax. According to Cayce Myers, a professor in Virginia Tech’s School of Communication, it is becoming increasingly difficult to identify disinformation, particularly sophisticated AI-generated deep fakes. Myers believes we will see much more disinformation, both visual and written, over the next few years, and he emphasizes that spotting it will require media literacy and savvy in examining the truth of any claim.

The Danger of AI-generated Disinformation

Myers explains that the difference between traditional photo manipulation and AI-generated disinformation lies in the sophistication and scope of the latter. While photo-editing programs have been used for years, AI can now create altered videos that are highly convincing. Because disinformation is already a widespread source of content online, this type of fake news can reach a much larger audience, especially if it goes viral. Myers says the responsibility for combating disinformation falls on two main parties: individuals and companies. On a personal level, examining sources, recognizing the warning signs of disinformation, and being diligent about what we share online can all slow its spread. But the companies that produce AI content, and the social media platforms where disinformation circulates, will also need to implement guardrails to keep it from spreading widely.

The Need for Regulation of AI

Attempts to regulate AI are underway in the U.S. at the federal, state, and even local levels. Lawmakers are weighing a variety of issues, including disinformation, discrimination, intellectual property infringement, and privacy. However, Myers warns that regulating too quickly can stifle AI’s development and growth, while regulating too slowly may open the door to serious problems; striking a balance will be a challenge. He also notes that AI has developed so fast that no mechanism to prevent the spread of AI-generated disinformation is likely to be foolproof. Awareness, therefore, remains the key defense.
