
Novel Artificially Intelligent Computer Program for Detecting Fake Online Photos, Videos

Imagine seeing yourself in a photo or video that was never taken, with your head possibly appearing on another person's body. You're likely the victim of a deepfake cyberattack, in which attackers expertly alter images and videos shared on social media to fool people into believing that what they are seeing is real.

As these attacks grow more sophisticated, stronger detection methods and quicker responses are needed to counteract the threats. This type of digital deception could lead to a wide range of harms, including violations of personal privacy, such as stealing someone's likeness to sell a product, heightened political or religious tensions between countries, or chaos in financial markets.

Recently, Dan Lin, the director of MU's I-Privacy Lab in the College of Engineering, was awarded nearly $1.2 million from the National Science Foundation to design an artificially intelligent computer program that provides real-time detection of deepfake threats. The grant is shared with her project collaborator Jianping Fan, a professor of computer science at the University of North Carolina at Charlotte and an expert in image processing. Together, their goal is to enable a quicker response that prevents these false images and videos from spreading in the public domain.

Powered by artificial intelligence, the program will need only a small number of deepfake examples to build its knowledge base. Then, using its capability to self-learn and self-evolve, the program will be able to detect evolving deepfake techniques over time, learning from previous decisions to make more accurate detections and avoid misidentifying content. The project is scheduled to take four years to complete and will include a mobile app that alerts smartphone users to the presence of deepfake content on their downloaded social media platforms.

"We want the detector to be able to learn by itself by pulling previous knowledge from its deep neural network, much like a human brain," Lin said. "For example, when kids see a picture of an elephant, then they go to a zoo, they can easily relate the picture with the animal. But, this type of analysis is hard for machines to do. So, we want to be able to have the program provide an educated guess at what is an unknown deepfake threat by relating to what it already has stored in its knowledge base."
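The "educated guess" Lin describes, relating an unknown sample to examples already stored in a knowledge base, can be illustrated with a toy similarity search. The sketch below is purely hypothetical: the actual MU/UNC-Charlotte detector is not public, and the feature vectors, labels, and nearest-neighbor rule here are stand-ins for whatever representations a real deep neural network would learn.

```python
import numpy as np

# Hypothetical sketch: each image is represented by a made-up feature vector
# ("embedding"), and an unknown sample is labeled according to its most
# similar stored example -- the "educated guess" analogy from the article.

def cosine_similarity(a, b):
    """Cosine similarity between two feature vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def classify(sample, knowledge_base):
    """Return the label and similarity score of the closest stored example."""
    best_label, best_score = None, -1.0
    for label, vec in knowledge_base:
        score = cosine_similarity(sample, vec)
        if score > best_score:
            best_label, best_score = label, score
    return best_label, best_score

# Toy knowledge base: a handful of labeled feature vectors (invented numbers).
kb = [
    ("real",     np.array([0.9, 0.1, 0.2])),
    ("real",     np.array([0.8, 0.2, 0.1])),
    ("deepfake", np.array([0.1, 0.9, 0.8])),
]

# An unseen sample is matched against stored knowledge rather than requiring
# retraining -- loosely analogous to relating a zoo elephant to its picture.
label, score = classify(np.array([0.2, 0.8, 0.9]), kb)
print(label)  # closest stored example is labeled "deepfake"
```

In practice the knowledge base would hold learned neural-network embeddings and would grow as new deepfake techniques are encountered, which is what allows detection from only a small number of examples.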

Lin's interest in the technical aspects of computing began at the age of 12, after she helped promote a computer programming competition at a summer camp. At the time, she didn't own a computer and asked her parents to buy her one. They did, and she was hooked. Years later, while completing her doctorate in Singapore, her advisor suggested she visit his collaborator in the U.S. After years of conducting national security-related research in the U.S., Lin now works to find ways to protect people's privacy on the internet.

Lin has joint appointments in the Department of Electrical Engineering and Computer Science in the MU College of Engineering and the Department of Management in the Trulaske College of Business.
