Thursday, May 29, 2025

Deepfake Videos as a Social Challenge

By Janpha Thadphoothon

I am not a computer vision or cybersecurity specialist, so what you are about to read might be something you have already come across in the media, or perhaps even witnessed firsthand: the phenomenon of deepfake videos. I am certain you would agree with me that the pace at which artificial intelligence (AI) is evolving is nothing short of remarkable, and occasionally a little unsettling. This is no longer the stuff of futuristic science fiction films. As we speak, we are already living in a world where the line between authentic video content and AI-generated visual forgeries is increasingly difficult to discern.

A "deepfake" video is a digitally manipulated video in which a person's face or body has been altered to convincingly appear as someone else. This technology is frequently exploited for malicious purposes, including spreading disinformation.

In my opinion, this isn’t just an intriguing technological feat. It is a social challenge — one with consequences that we are only beginning to comprehend.


Let me first explain, in the simplest possible terms, what a deepfake is. As far as I understand it — informed by my readings and my modest certificate course in Generative AI from DeepLearning.AI — a deepfake is a type of synthetic media. It is created using AI techniques, particularly deep learning, where a computer system learns to convincingly swap or generate faces, voices, or entire human figures in a video, making them appear as if someone said or did something they never did. It often feels like magic, though it is anything but.
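For readers who enjoy a peek under the hood, the classic face-swap recipe is often described as a shared encoder paired with one decoder per identity: the encoder learns a common representation of faces, and "swapping" amounts to decoding one person's representation with the other person's decoder. What follows is only a toy sketch of that architecture, assuming PyTorch; the tensors are random stand-ins, and no actual training is shown.

```python
# A toy illustration (not a working deepfake) of the classic face-swap idea:
# one shared encoder learns a common "face space", and a separate decoder is
# trained per identity. Swapping decodes person A's encoding with B's decoder.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 64 * 3, 256),  # 64x64 RGB face crop -> latent code
            nn.ReLU(),
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(256, 64 * 64 * 3),
            nn.Sigmoid(),  # pixel values in [0, 1]
        )

    def forward(self, z):
        return self.net(z).view(-1, 3, 64, 64)

encoder = Encoder()
decoder_a = Decoder()  # would be trained only on person A's faces
decoder_b = Decoder()  # would be trained only on person B's faces

# After (hypothetical) training, a "swap" decodes A's code with B's decoder:
face_of_a = torch.rand(1, 3, 64, 64)     # random stand-in for a face crop
swapped = decoder_b(encoder(face_of_a))  # B's appearance, A's pose/expression
print(swapped.shape)                     # torch.Size([1, 3, 64, 64])
```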

Some years ago, out of sheer curiosity, I decided to experiment with one of the free online deepfake generators, purely for educational purposes. I uploaded a short video clip of myself delivering a familiar classroom greeting: “Good morning, students.” Then, using an archive of public domain celebrity images, I overlaid a famous actor’s face onto mine. What emerged after a few minutes of processing was frankly unsettling. It was my body, my gestures, but with a different face — blinking, smiling, and speaking in perfect synchrony. It wasn’t perfect by today’s standards, but it was good enough to unsettle me. I remember thinking to myself: if this is possible now, what about in five years?

The answer, as it turns out, is more alarming than I had anticipated.

As with many powerful tools, deepfake technology embodies a dual nature. On the one hand, it holds enormous creative potential — in cinema, advertising, education, and even healthcare. Imagine bringing historical figures to life for interactive museum exhibits, or creating personalized educational content delivered in a familiar voice. On the other hand, and this is where my concern lies, it is an instrument that can be weaponized in deeply troubling ways.

Let’s be a little more systematic in outlining these potential threats.

First and foremost, there’s the risk of misinformation and disinformation. You may wish to picture this: a fabricated video emerges online showing a world leader announcing a military escalation, or a corporate CEO confessing to financial wrongdoing. The clip spreads like wildfire on social media before any official clarification is issued. The damage — to reputations, markets, and public trust — would be swift and possibly irreversible. Experts in digital forensics have warned that in an age already saturated with “fake news” and clickbait headlines, video-based deception introduces a far more visceral and convincing layer of falsehood.

Secondly, I would like to draw your attention to the personal and social risks. Consider the implications for individual privacy and dignity. There have been disturbing cases, well documented by investigative journalists, of malicious actors creating non-consensual deepfake videos, often of an explicit nature, targeting private individuals and celebrities alike. Victims of such digital violations often endure lasting psychological trauma, compounded by the difficulty of removing such content once it circulates online. It perplexes me how, as a society, we will navigate a world where anyone's image can be hijacked and manipulated without consent.

Thirdly, there is a risk that particularly concerns me as an educator: the erosion of trust in evidence. In the past, a video clip was considered a definitive form of proof. Now, with the proliferation of AI-generated media, we may soon reach a point where every video's authenticity is suspect. I often emphasize to my students the importance of critical thinking when evaluating information sources. But in a world of deepfakes, we might need to teach new digital verification skills: scrutinizing metadata, employing forensic tools, and relying on corroborating evidence before accepting a video as genuine.
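To give one concrete example of such a verification habit: every video file carries container metadata that can be dumped and inspected. The sketch below assumes the ffprobe tool (part of FFmpeg) is installed, and the file name is hypothetical. Metadata can of course be forged or stripped, so it raises questions rather than settles them.

```python
# A small sketch of one verification habit: dumping a video's container
# metadata with ffprobe (ships with FFmpeg). Metadata can be forged or
# stripped, so treat this as a starting point for questions, never proof.
# "suspect_clip.mp4" is a hypothetical file name.
import json
import subprocess

result = subprocess.run(
    ["ffprobe", "-v", "quiet", "-print_format", "json",
     "-show_format", "-show_streams", "suspect_clip.mp4"],
    capture_output=True, text=True, check=True,
)
info = json.loads(result.stdout)

# Fields worth a second look: encoder tags, creation time, odd re-encodes.
print(info["format"].get("tags", {}))
for stream in info["streams"]:
    print(stream["codec_type"], stream.get("codec_name"))
```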

Globally, the response to deepfake technology has been uneven. Some countries, notably in Europe and North America, have begun drafting legislation aimed at curbing malicious deepfake creation and distribution. Tech companies, too, are developing AI-based detection tools, though experts admit it’s a technological arms race — with forgers and detectors constantly trying to outsmart one another. In Thailand, where awareness of cybercrime is steadily growing, specific laws addressing AI-generated video forgeries might still be in their infancy. And even if comprehensive legislation were in place, the transnational nature of the internet makes enforcement a daunting task.

In my humble opinion, education remains our most potent defense. I often tell my students that digital literacy today means far more than knowing how to use social media or search for information. It entails the ability to critically assess digital content, to question what one sees and hears, and to seek multiple sources of verification before drawing conclusions. It is well understood that 21st-century literacy must include media literacy and AI literacy.

I have read somewhere that researchers are working on watermarking techniques — subtle digital signatures embedded within videos to indicate authenticity or AI generation. Some suggest developing centralized verification systems, where major platforms label or block suspected deepfake content. However, the democratization of AI tools means that increasingly sophisticated deepfake software is now available to individuals with modest technical skills, making regulation and control an uphill battle.
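To make the watermarking idea a little more concrete, here is a deliberately naive sketch that hides a short message in the least significant bits of an image frame's pixels. I should stress that real provenance schemes are far more sophisticated and robust than this toy, which would not survive even mild compression; NumPy is assumed, and the frame is random stand-in data.

```python
# A deliberately naive sketch of the watermarking idea: hide one bit per
# pixel channel in the least significant bit (LSB) of an image frame.
# Real provenance watermarks survive compression and editing; this does not.
import numpy as np

def embed(frame: np.ndarray, bits: np.ndarray) -> np.ndarray:
    flat = frame.reshape(-1).copy()
    flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits  # overwrite LSBs
    return flat.reshape(frame.shape)

def extract(frame: np.ndarray, n_bits: int) -> np.ndarray:
    return frame.reshape(-1)[:n_bits] & 1  # read the LSBs back

frame = np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8)
message = np.frombuffer(b"AI-GENERATED", dtype=np.uint8)
bits = np.unpackbits(message)

marked = embed(frame, bits)
recovered = np.packbits(extract(marked, bits.size)).tobytes()
print(recovered)  # b'AI-GENERATED'
```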

One cannot help but worry about the implications for younger generations. They have grown up in a digital world where images, voices, and now videos can be easily manipulated. While digital natives may be adept at navigating online spaces, their immersion may paradoxically render them more vulnerable to well-crafted deceptions. Nevertheless, I have faith in their adaptability. History suggests that each generation learns to cope with the unique technological challenges it faces.

I would like to propose a kind of "zero-trust" principle for video media, much like the one cybersecurity experts advocate for network access. In situations where critical decisions rely on video evidence, we may need to adopt multi-layered verification: requiring corroborating testimony, contextual evidence, or even AI-detection confirmation. This might add complexity to our information consumption habits, but it may be a necessary inconvenience.
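What might such multi-layered verification look like in practice? The sketch below is purely illustrative: the checks, names, and threshold are hypothetical placeholders, and the point is only that no single signal, not even an AI detector's score, is treated as decisive on its own.

```python
# A sketch of the "zero-trust" idea: no single signal is decisive; a clip
# is acted on only when independent checks agree. The checks here are
# hypothetical placeholders, not real detectors.
from dataclasses import dataclass

@dataclass
class Evidence:
    source_verified: bool   # e.g., confirmed by the original publisher
    corroborated: bool      # e.g., a second independent recording exists
    detector_score: float   # 0.0 (likely fake) .. 1.0 (likely genuine)

def trust_level(e: Evidence, threshold: float = 0.8) -> str:
    checks = [e.source_verified, e.corroborated, e.detector_score >= threshold]
    passed = sum(checks)
    if passed == 3:
        return "treat as likely genuine"
    if passed >= 1:
        return "hold: seek further corroboration"
    return "treat as unverified"

print(trust_level(Evidence(source_verified=True, corroborated=False,
                           detector_score=0.65)))
# -> "hold: seek further corroboration"
```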

At its core, I would argue, the conversation around deepfakes is not merely a technical debate. It is an ethical, legal, and social one. Developers of such technology bear an undeniable responsibility to anticipate misuse and implement safeguards wherever possible. Likewise, policymakers, educators, and media professionals must play an active role in raising awareness and fostering a culture of critical digital engagement.

I suspect that the genie, so to speak, is already out of the bottle. The technology exists, and it will only improve. Our task now is to mitigate its harmful effects while harnessing its positive potential. Some call for sweeping regulations, while others fear that overregulation might stifle creativity and innovation. Finding the right balance will not be easy.

It is my long-held belief — though I may be mistaken — that we cannot afford to be passive observers. A multi-pronged approach, combining technical safeguards, legal frameworks, public education, and ethical reflection, is urgently needed. The days when “seeing is believing” held unchallenged authority are behind us. We must teach ourselves, and the generations to come, to question and verify.

While I marvel at the technical brilliance behind deepfake videos, like many of you, I am also deeply uneasy about their darker possibilities. It is a powerful reminder that, as the saying goes, with great power comes great responsibility. Fostering a culture of awareness and ethical engagement with technology, rather than succumbing to fear, is — in my view — the most constructive path forward.

Indeed, the future will be shaped not merely by what technology can do, but by how we choose to use it.

About Me:
Janpha Thadphoothon is an assistant professor of ELT at the International College, Dhurakij Pundit University in Bangkok, Thailand. He also holds a certificate in Generative AI with Large Language Models issued by DeepLearning.AI.
