Thursday, May 29, 2025

AI in Decision-Making: A Double-Edged Sword

By Janpha Thadphoothon

In recent years, artificial intelligence (AI) has evolved from a futuristic concept into a tangible force shaping various aspects of our daily lives. Among its many applications, one of the most intriguing — and potentially unsettling — is its growing role in decision-making. Whether we notice it or not, AI systems are already making decisions for us, about us, and around us. Some of these are trivial; others have profound consequences.


At its core, AI in decision-making means using artificial intelligence to analyze data, identify trends, and surface insights that help us make faster, better-informed choices. It can automate routine tasks, improve accuracy, and offer recommendations grounded in data, which greatly accelerates how decisions get made.

In my opinion, this is an issue that deserves careful, public reflection.

At first glance, delegating decisions to AI seems sensible. Machines can process vast amounts of data quickly, identify patterns invisible to the human eye, and offer solutions free from fatigue or emotion. For example, AI is being used in financial markets to predict stock movements, in logistics to optimize delivery routes, and in healthcare to assist in diagnosing diseases based on complex data. In these contexts, AI can enhance efficiency, reduce human error, and in some cases, save lives.

Decision-making, however, is not always a purely technical process. Many decisions, especially those involving people, are entangled with values, emotions, and cultural understandings that no algorithm, however advanced, can fully grasp. This is where concerns begin to surface.

Consider, for instance, the use of AI in recruitment. Several companies have adopted AI tools to screen job applications, rank candidates, and even conduct initial video interviews using facial analysis software. While this might improve efficiency and reduce paperwork, it raises uncomfortable questions: Can a machine fairly assess a candidate’s potential? What biases might be hidden within the data it was trained on? Who takes responsibility if a qualified person is unfairly rejected because of a flawed algorithm?

A similar dilemma emerges in the judicial system. In some countries, AI tools are used to assess the risk of re-offending among criminal defendants, influencing decisions about bail, sentencing, and parole. The promise here is impartiality — a system free from human prejudice. Yet studies have shown that these AI systems can, paradoxically, reproduce and even amplify existing social biases, particularly against minority groups. The reason lies in the data: AI learns from historical records, which may reflect past injustices.
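
To see how this happens, consider a small, purely illustrative sketch in Python, with invented data and no connection to any real risk-assessment tool: a classifier trained on historically biased labels dutifully reproduces the bias.

```python
# Illustrative only: synthetic data showing how a classifier inherits bias.
# "group" stands in for any protected attribute; all numbers are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)           # 0 = majority, 1 = minority
prior_record = rng.random(n) < 0.2      # the true rate is the same in both groups

# Historical labels: biased record-keeping flagged the minority group more often.
label = prior_record | ((group == 1) & (rng.random(n) < 0.15))

X = np.column_stack([group, prior_record.astype(int)])
model = LogisticRegression().fit(X, label)

# The model now assigns higher "risk" to the minority group, even though
# the true incidence was identical; only the recorded labels were skewed.
for g in (0, 1):
    risk = model.predict_proba(np.array([[g, 0]]))[0, 1]
    print(f"group={g}: predicted risk with no prior record = {risk:.2f}")
```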

I recently read a comprehensive review of AI in decision-making by Balbaa and Abdurashidova, and it is timely. They do a solid job of explaining how AI, through machine learning and natural language processing, is transforming how we make choices. The article highlights the major benefits: AI can automate tasks, improve accuracy, and provide data-driven insights, making decision-making faster and better. What I appreciate most is that they do not focus only on the positives. They are honest about the challenges too, including ethical considerations, algorithmic bias, and even job displacement. They clearly grasp the importance of "transparency, accountability, and interpretability" in AI systems, and they stress the need for human-AI collaboration. The real-world examples, from finance to healthcare, are a nice touch, showing how AI is already making an impact.

Common sense tells us that data is never neutral. It carries with it the imprint of human choices, values, and assumptions. When AI systems are asked to make decisions based on such data, they inherit its imperfections. The danger lies in the illusion of objectivity. A decision handed down by a machine may appear impartial, but it is still shaped by human-designed processes, priorities, and blind spots.

Another area where AI decision-making is rapidly advancing is in personalized advertising and content recommendation. Algorithms decide which news articles we see, which products are suggested to us, and even which potential partners appear on dating apps. While this may seem benign, it subtly shapes our preferences, habits, and beliefs. Over time, it can create echo chambers, limiting exposure to diverse viewpoints and reinforcing existing biases. In this sense, AI does not merely respond to our preferences; it actively shapes them.
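
A toy simulation makes this feedback loop visible. The sketch below, with invented numbers and no connection to any real platform's algorithm, simply recommends more of whatever was clicked before, and one topic quickly crowds out the rest:

```python
# Illustrative only: a naive click-driven recommender drifting into an echo chamber.
import random
from collections import Counter

topics = ["politics", "sports", "science", "arts"]
clicks = Counter({t: 1 for t in topics})   # start from uniform interest

random.seed(42)
for _ in range(200):
    # Recommend in proportion to past clicks: more of what was liked before.
    shown = random.choices(topics, weights=[clicks[t] for t in topics])[0]
    if random.random() < 0.9:              # users mostly click what is shown
        clicks[shown] += 1

print(clicks.most_common())  # one topic tends to dominate: exposure has narrowed
```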

One might ask: Should AI be involved in decisions that affect human lives so directly? My answer is not a simple yes or no. I believe AI can and should assist in decision-making processes, particularly where large-scale data analysis is needed. But it should not replace human judgment in areas where empathy, ethical reasoning, and cultural sensitivity are essential.

Moreover, transparency is vital. People affected by AI-driven decisions deserve to know how those decisions were made, what data was used, and whether they have the right to appeal or challenge the outcome. Too often, AI systems operate as opaque “black boxes,” making decisions that even their designers struggle to explain. This undermines accountability and trust.

In educational settings, for example, AI might help identify students at risk of dropping out based on attendance, grades, and engagement data. While this can prompt timely interventions, it also risks labeling students prematurely, reducing them to data points rather than seeing them as individuals with complex, changing lives.
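
To illustrate how reductive such flagging can be, here is a deliberately crude sketch; the thresholds and fields are invented for illustration, not taken from any real system:

```python
# Illustrative only: a crude early-warning rule of the kind described above.
from dataclasses import dataclass

@dataclass
class StudentRecord:
    attendance_rate: float   # 0.0 to 1.0
    gpa: float               # 0.0 to 4.0
    logins_per_week: float   # a rough engagement proxy

def at_risk(s: StudentRecord) -> bool:
    # A student is flagged if any single indicator dips below a cutoff.
    # Note how much context this ignores: illness, work, family, recovery.
    return s.attendance_rate < 0.75 or s.gpa < 2.0 or s.logins_per_week < 1

print(at_risk(StudentRecord(0.70, 3.8, 5)))  # True: flagged on attendance alone
```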

In my view, the real challenge is not whether AI should be involved in decision-making, but how we can design systems that enhance human welfare while respecting human dignity. This requires multidisciplinary collaboration — not only from engineers and data scientists but also from ethicists, educators, legal scholars, and ordinary citizens.

As a university lecturer, one of my missions is to equip students with critical thinking skills. I often remind my students that technology is never neutral. Every tool we create reflects human intentions and limitations. AI is no exception. As it becomes a silent decision-maker in more areas of life, we must ensure it serves not only efficiency but also fairness, compassion, and the public good.

AI in decision-making is neither wholly good nor entirely bad. It is a double-edged sword. Used wisely, it can improve lives and make societies fairer. Used carelessly, it can entrench inequalities and erode human agency. The choice, as always, lies with us.


Janpha Thadphoothon is an assistant professor of ELT at the International College, Dhurakij Pundit University in Bangkok, Thailand. He also holds a certificate in Generative AI with Large Language Models issued by DeepLearning.AI.


Deepfake Videos as a Social Challenge

By Janpha Thadphoothon

I am not a computer vision or cybersecurity specialist, so what you are about to read might be something you have already come across in the media, or perhaps even witnessed firsthand: the phenomenon of deepfake videos. I am certain you would agree with me that the pace at which artificial intelligence (AI) is evolving is nothing short of remarkable, and occasionally, a little unsettling. It is no longer the stuff of futuristic science fiction films. As we speak, we are already living in a world where the line between authentic video content and AI-generated visual forgeries is increasingly difficult to discern.

A "deepfake" video is a digitally manipulated video in which a person's face or body has been altered to convincingly appear as someone else. This technology is frequently exploited for malicious purposes, including spreading disinformation.

In my opinion, this isn’t just an intriguing technological feat. It is a social challenge — one with consequences that we are only beginning to comprehend.


Let me first explain, in the simplest possible terms, what a deepfake is. As far as I understand it — informed by my readings and my modest certificate course in Generative AI from DeepLearning.AI — a deepfake is a type of synthetic media. It is created using AI techniques, particularly deep learning, where a computer system learns to convincingly swap or generate faces, voices, or entire human figures in a video, making them appear as if someone said or did something they never did. It often feels like magic, though it is anything but.

Some years ago, out of sheer curiosity, I decided to experiment with one of the free online deepfake generators, purely for educational purposes. I uploaded a short video clip of myself delivering a familiar classroom greeting: “Good morning, students.” Then, using an archive of public domain celebrity images, I overlaid a famous actor’s face onto mine. What emerged after a few minutes of processing was frankly unsettling. It was my body, my gestures, but with a different face — blinking, smiling, and speaking in perfect synchrony. It wasn’t perfect by today’s standards, but it was good enough to unsettle me. I remember thinking to myself: if this is possible now, what about in five years?

The answer, as it turns out, is more alarming than I had anticipated.

As with many powerful tools, deepfake technology embodies a dual nature. On the one hand, it holds enormous creative potential — in cinema, advertising, education, and even healthcare. Imagine bringing historical figures to life for interactive museum exhibits, or creating personalized educational content delivered in a familiar voice. On the other hand, and this is where my concern lies, it is an instrument that can be weaponized in deeply troubling ways.

Let’s be a little more systematic in outlining these potential threats.

First and foremost, there’s the risk of misinformation and disinformation. You may wish to picture this: a fabricated video emerges online showing a world leader announcing a military escalation, or a corporate CEO confessing to financial wrongdoing. The clip spreads like wildfire on social media before any official clarification is issued. The damage — to reputations, markets, and public trust — would be swift and possibly irreversible. Experts in digital forensics have warned that in an age already saturated with “fake news” and clickbait headlines, video-based deception introduces a far more visceral and convincing layer of falsehood.

Secondly, I would like to draw your attention to the personal and social risks. Consider the implications for individual privacy and dignity. There have been disturbing cases, well documented by investigative journalists, of malicious actors creating non-consensual deepfake videos, often of an explicit nature, targeting private individuals and celebrities alike. Victims of such digital violations often endure lasting psychological trauma, compounded by the difficulty of removing such content once it circulates online. It perplexes me to think how, as a society, we will navigate a world where anyone's image can be hijacked and manipulated without consent.

Thirdly, and this is something that particularly concerns me as an educator, is the erosion of trust in evidence. In the past, a video clip was considered a definitive form of proof. Now, with the proliferation of AI-generated media, we may soon reach a point where every video’s authenticity is suspect. I often emphasize to my students the importance of critical thinking when evaluating information sources. But in a world of deepfakes, we might need to teach new digital verification skills — scrutinizing metadata, employing forensic tools, and relying on corroborating evidence before accepting a video as genuine.
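
As one concrete example of such verification, the sketch below inspects a video's container metadata using ffprobe, part of the free FFmpeg toolkit, which is assumed to be installed; the file name is hypothetical. I should stress that metadata can itself be stripped or forged, so this is one layer of scrutiny, never proof of authenticity:

```python
# Illustrative only: inspect a video's container metadata with ffprobe.
# Metadata can be stripped or forged; treat this as one verification layer.
import json
import subprocess

def probe(path: str) -> dict:
    out = subprocess.run(
        ["ffprobe", "-v", "quiet", "-print_format", "json",
         "-show_format", "-show_streams", path],
        capture_output=True, text=True, check=True,
    )
    return json.loads(out.stdout)

info = probe("clip.mp4")                 # hypothetical file name
print(info["format"].get("tags", {}))    # creation time, encoder, etc.
for stream in info["streams"]:
    print(stream.get("codec_name"), stream.get("width"), stream.get("height"))
```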

Globally, the response to deepfake technology has been uneven. Some countries, notably in Europe and North America, have begun drafting legislation aimed at curbing malicious deepfake creation and distribution. Tech companies, too, are developing AI-based detection tools, though experts admit it’s a technological arms race — with forgers and detectors constantly trying to outsmart one another. In Thailand, where awareness of cybercrime is steadily growing, specific laws addressing AI-generated video forgeries might still be in their infancy. And even if comprehensive legislation were in place, the transnational nature of the internet makes enforcement a daunting task.

In my humble opinion, education remains our most potent defense. I often tell my students that digital literacy today means far more than knowing how to use social media or search for information. It entails the ability to critically assess digital content, to question what one sees and hears, and to seek multiple sources of verification before drawing conclusions. It is well understood that 21st-century literacy must include media literacy and AI literacy.

I have read somewhere that researchers are working on watermarking techniques — subtle digital signatures embedded within videos to indicate authenticity or AI generation. Some suggest developing centralized verification systems, where major platforms label or block suspected deepfake content. However, the democratization of AI tools means that increasingly sophisticated deepfake software is now available to individuals with modest technical skills, making regulation and control an uphill battle.
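
To give a feel for the signing idea, here is a toy sketch. It only signs the file's bytes with a secret key, so any re-encoding breaks the tag; real watermarking schemes embed signals in the pixels themselves, and all names and keys below are invented:

```python
# Illustrative only: a toy "authenticity tag", an HMAC over a video file
# computed with a creator-held secret key. Signing raw bytes means any
# re-encoding invalidates the tag; real watermarks survive in the pixels.
import hashlib
import hmac

SECRET_KEY = b"creator-private-key"  # hypothetical; keep real keys out of code

def sign_file(path: str) -> str:
    digest = hmac.new(SECRET_KEY, open(path, "rb").read(), hashlib.sha256)
    return digest.hexdigest()

def verify_file(path: str, tag: str) -> bool:
    return hmac.compare_digest(sign_file(path), tag)

tag = sign_file("lecture.mp4")          # published alongside the video
print(verify_file("lecture.mp4", tag))  # True only if the bytes are untouched
```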

One cannot help but worry about the implications for younger generations. They have grown up in a digital world where images, voices, and now videos can be easily manipulated. While digital natives may be adept at navigating online spaces, their immersion may paradoxically render them more vulnerable to well-crafted deceptions. Nevertheless, I have faith in their adaptability. History suggests that each generation learns to cope with the unique technological challenges it faces.

I would like to propose a kind of “zero-trust” principle for video media, much like cybersecurity experts advocate in network security. In situations where critical decisions rely on video evidence, we may need to adopt multi-layered verification: requiring corroborating testimony, contextual evidence, or even AI-detection confirmation. This might add complexity to our information consumption habits, but it may be a necessary inconvenience.

At its core, I would argue, the conversation around deepfakes is not merely a technical debate. It is an ethical, legal, and social one. Developers of such technology bear an undeniable responsibility to anticipate misuse and implement safeguards wherever possible. Likewise, policymakers, educators, and media professionals must play an active role in raising awareness and fostering a culture of critical digital engagement.

I suspect that the genie, so to speak, is already out of the bottle. The technology exists, and it will only improve. Our task now is to mitigate its harmful effects while harnessing its positive potentials. Some call for sweeping regulations, while others fear that overregulation might stifle creativity and innovation. Finding the right balance will not be easy.

It is my long-held belief — though I may be mistaken — that we cannot afford to be passive observers. A multi-pronged approach, combining technical safeguards, legal frameworks, public education, and ethical reflection, is urgently needed. The days when “seeing is believing” held unchallenged authority are behind us. We must teach ourselves, and the generations to come, to question and verify.

While I marvel at the technical brilliance behind deepfake videos, like many of you, I am also deeply uneasy about their darker possibilities. It is a powerful reminder that, as the saying goes, with great power comes great responsibility. Fostering a culture of awareness and ethical engagement with technology, rather than succumbing to fear, is — in my view — the most constructive path forward.

Indeed, the future will be shaped not merely by what technology can do, but by how we choose to use it.

About Me:
Janpha Thadphoothon is an assistant professor of ELT at the International College, Dhurakij Pundit University in Bangkok, Thailand. He also holds a certificate in Generative AI with Large Language Models issued by DeepLearning.AI.

AI-Generated Images: A Beautiful Threat

By Janpha Thadphoothon

I am not an AI researcher, nor am I an expert in digital image processing. What you are about to read from me might be something you have come across already — that AI today can generate images so stunning, so realistic, and so eerily convincing that it blurs the line between what is real and what is artificially created. I would be surprised if you haven’t seen or heard about the latest AI-generated artworks, photorealistic portraits of people who don’t exist, or historic moments fabricated with uncanny precision. It is, I must confess, a development that leaves me both amazed and a little unsettled.


In my opinion, this is not just another harmless digital novelty. It represents a profound shift in how images are created, perceived, and, most importantly, trusted. What was once the exclusive domain of artists, photographers, and graphic designers is now accessible to anyone with an internet connection and a prompt.

Let me share with you a personal story. It must have been around 2023 when I first ventured into this AI image generation phenomenon. I remember being curious, as someone naturally drawn to the intersection of language, visual culture, and technology. I tried one of those free online AI art platforms, entering a simple phrase: “A peaceful library by the river at sunset.” Within seconds, the screen displayed a breathtaking image — one that could easily have graced the cover of a travel magazine or an art book. The colors, the shadows, the reflection of the sun on the water — it was all there. And it was beautiful. And it was fake.

The speed and ease of it were both exhilarating and disconcerting. What took human artists hours, sometimes days, was now produced in mere moments by an indifferent machine. I remember feeling a strange mixture of admiration and quiet concern. It was one of those moments when you realize that the world you know is changing right before your eyes.

As far as I understand, the underlying technology relies on feeding AI systems massive datasets of images — millions of them, sourced from the internet, often without explicit consent from creators. The AI learns patterns, textures, proportions, and artistic styles. It doesn’t understand beauty, of course, but it knows how to statistically approximate what we have historically considered beautiful or convincing.
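
For readers curious what this looks like in practice, the open-source Hugging Face diffusers library exposes the whole pipeline in a few lines. This is a minimal sketch, not a recommendation: the model name is one publicly available example among many, and a GPU is assumed:

```python
# Illustrative only: generating an image from a text prompt with the
# open-source `diffusers` library. A GPU is assumed for reasonable speed.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

image = pipe("A peaceful library by the river at sunset").images[0]
image.save("library_sunset.png")  # seconds of compute; no camera, no river
```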

Now, I would like to turn to what concerns me the most: the ethical implications.

First of all, there’s the obvious issue of misinformation. One could easily imagine AI-generated images of a natural disaster that never happened, a politician at a protest they never attended, or a celebrity in a compromising situation they were never in. The potential for disinformation is immense. In a world where people already struggle to separate truth from fabrication in text and video, adding high-quality, undetectable fake images to the mix is a recipe for confusion. I am sure you would agree that this is deeply worrying.

Secondly, I feel uneasy about the questions of consent and ownership. Many of these AI models are trained on images scraped from the web, often without the explicit permission of the photographers, artists, or subjects. Imagine an artist finding their distinctive style mimicked by a machine, with no credit or compensation. Or a person discovering their face, or one eerily similar, in an image they never posed for. It is, in my opinion, a violation not just of intellectual property but of personal dignity.

What’s more, there’s a deeper, more philosophical concern — one that, as a teacher of language and human expression, resonates with me. If images can be so easily manufactured, what happens to authenticity? What happens to the power of a photograph to bear witness, or a painting to reflect the inner world of an artist? Some might argue that AI-generated images democratize creativity, making artistic expression accessible to all. While I see merit in that viewpoint, I also fear the erosion of meaning when everything is possible, and nothing is certain.

In Thailand, for instance, while the conversation around AI-generated images is still relatively nascent, the potential implications are global. Different societies will wrestle with these issues differently. Some may embrace the technology wholeheartedly; others, like myself, might advocate for caution, regulation, and ethical guidelines.

As you might expect, major tech companies are already responding. Some AI platforms now watermark their creations or restrict certain types of image generation, particularly around sensitive topics like violence, politics, or explicit content. But as history has shown, restrictions can be bypassed, and tools meant for good can quickly be repurposed for harm.

One proposed solution is what cybersecurity experts call provenance tracking — a system where every digital image carries an unalterable record of its origin, including whether it was AI-generated. It’s a promising idea, though not without technical and ethical hurdles. I somehow believe that no technological safeguard is foolproof. Ultimately, the responsibility lies with us — as creators, consumers, and citizens — to question, verify, and think critically about the images we encounter.
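
A stripped-down sketch of the idea: pair each image with a record of its origin and a hash of its bytes. Real efforts, such as the C2PA standard, go much further and cryptographically sign the record; this toy version is unsigned and purely illustrative, with invented file names:

```python
# Illustrative only: a minimal provenance record, a hash of the image bytes
# plus origin metadata. Real systems sign such records; this one is unsigned.
import hashlib
import json

def make_record(path: str, generator: str) -> dict:
    digest = hashlib.sha256(open(path, "rb").read()).hexdigest()
    return {"file_sha256": digest, "generator": generator,
            "ai_generated": generator != "camera"}

record = make_record("library_sunset.png", generator="text-to-image model")
print(json.dumps(record, indent=2))

# Verification: recompute the hash and compare with the stored record.
assert make_record("library_sunset.png", record["generator"]) == record
```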

I would also like to raise a point about education. Just as we teach students to be critical readers of texts, we must now teach them to be critical viewers of images. In an age where visual literacy is as essential as reading and writing, this is no longer optional. It is well known that young people, though digital natives, often lack the skepticism necessary to navigate such complex digital environments.

Make no mistake, the ability for AI to generate images is a double-edged sword. It has tremendous potential for good — from medical imaging and historical reconstructions to personalized art therapy and virtual tourism. I am genuinely excited about these positive applications. But like any powerful technology, it demands vigilance, ethics, and public awareness.

I am not suggesting we halt progress. That would be neither possible nor desirable. But I do believe we need to approach AI-generated images with informed caution. The genie is out of the bottle, as they say, and while it may grant us new wonders, it also casts long, complex shadows.

In closing, let me reiterate a personal conviction: the conversation about AI-generated images isn’t just about pixels and algorithms. It’s about truth, consent, creativity, and the kind of digital world we wish to inhabit. And in my opinion, that is a conversation worth having.


Janpha Thadphoothon is an assistant professor of ELT at the International College, Dhurakij Pundit University in Bangkok, Thailand. He also holds a certificate in Generative AI with Large Language Models issued by DeepLearning.AI.

Tuesday, May 27, 2025

Voice-Cloning as a Threat

By Janpha Thadphoothon

I am not an IT or AI expert, so what you are about to hear from me might be something you already know: AI can mimic or reproduce a human voice, a technique known as voice cloning. I am sure you would agree with me that the pace at which artificial intelligence is developing is nothing short of astonishing, and sometimes, a little unsettling. As we know, the technology has advanced so rapidly that, as of today, distinguishing a machine-generated voice from an authentic human one is almost impossible.


In my opinion, this isn't just a remarkable feat; it's a development that departs sharply from what we consider normal, carrying implications we are only beginning to understand.


Fundamentally, voice cloning is a technology where Artificial Intelligence (AI) is used to create a synthetic, artificial copy of a person's voice.

Let me begin with a personal anecdote. Some years ago, when whispers about AI-driven voice synthesis were just beginning to gain traction beyond niche tech circles, I tried cloning my own voice and found the technology viable. Intrigued, and as someone always curious about the intersection of language and technology (a natural consequence of being a language teacher, I suppose), I ventured onto one of the early platforms. I uploaded samples of my own voice, meticulously following the instructions. After some processing time, which felt like an eternity, I prompted the system. I typed in a simple phrase, something mundane like, "Hello, this is Janpha," and lo and behold, a voice that was unmistakably mine, yet not me, spoke those words. The fidelity wasn't perfect by today's standards, but it was good enough to send a shiver down my spine. It was a profound moment, I must admit.

[Image: created by Gemini AI, prompted by J.T.]
Let me try to explain the technology. As far as I know, you train a model on a dataset; the more data and the more compute power, the better. Then you ask the system to reproduce speech: you can type "I disagree with what you have said" or "You need money." As we know, the underlying principle, at least as I understand it from my readings and my certificate course in Generative AI with Large Language Models from DeepLearning.AI, isn't magic, though it often feels like it. Fundamentally, it is all about feeding the AI a substantial dataset of a target voice. This dataset consists of audio recordings, often hours of them, meticulously transcribed. The AI, typically a type of neural network, then learns the unique characteristics of that voice: the pitch, the timbre, the cadence, the subtle inflections, even the common speech patterns and filler words. The more high-quality data you provide, and the more computational power you throw at it, the more convincingly the AI can learn to "speak" in that voice. Once trained, you can simply provide text, and the AI will synthesize audio output in the target voice. Imagine typing, "The meeting has been rescheduled," or more alarmingly, "Transfer the funds immediately," and having it spoken in a voice your colleagues or family would implicitly trust.
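
For the technically curious, the workflow I have just described can be sketched with an open-source library such as Coqui TTS; this is a minimal illustration with hypothetical file names, and certainly not a recipe I endorse for mimicking anyone without consent:

```python
# Illustrative only: zero-shot voice cloning with the open-source Coqui TTS
# library. A short reference recording conditions the model on the voice.
from TTS.api import TTS

tts = TTS("tts_models/multilingual/multi-dataset/xtts_v2")
tts.tts_to_file(
    text="Hello, this is Janpha.",       # any text you type...
    speaker_wav="my_voice_sample.wav",   # ...spoken in the sampled voice
    language="en",
    file_path="cloned_output.wav",
)
```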

Later on, I discovered that Google, for example, prohibited free users from cloning voices on its sandbox platform. This observation, I think, was quite telling. When a tech giant like Google, which is at the forefront of AI development, decides to put restrictions on such a feature, especially on its more accessible platforms, it signals a recognition of potential misuse. My gut tells me that this wasn't a decision taken lightly. It suggested that the gatekeepers of this technology were already grappling with the ethical implications.

There are risks and threats, of course. And this is where my initial intrigue gradually turned into a more profound concern. Like it or not, the world moves on, and with every technological leap, new challenges emerge. What we all know and agree upon is that powerful tools can be used for both good and ill. Voice cloning, in my opinion, is a quintessential example of this duality.

Let's be a bit more scientific, or at least systematic, in exploring these threats.

First of all, there's the obvious threat of impersonation for fraud. You may wish to picture this scenario: an elderly person receives a call. The voice on the other end is their grandchild, sounding distressed, claiming to be in trouble and urgently needing money. The voice is a perfect mimic. How many would hesitate, especially when emotions are heightened? The news has it that such scams, often called "vishing" (voice phishing), are already on the rise, even with less sophisticated voice manipulation. With high-fidelity voice cloning, this could become devastatingly effective. People say that the human ear, and the trust we place in familiar voices, are remarkably easy to exploit.

Secondly, I am sure you would agree with me that the potential for disinformation and fake news is massive. Imagine a cloned voice of a political leader appearing to endorse a controversial policy or make an inflammatory statement just before an election. Or a CEO seemingly announcing a catastrophic failure in their company, causing stock prices to plummet. Experts say that in an era already struggling with "fake news" in text and doctored images, deepfake audio could add a potent and harder-to-detect layer of deception. The saying "seeing is believing" has long been challenged; soon, "hearing is believing" might become equally fraught.

Thirdly, and this is something that particularly resonates with me as a language teacher, is the erosion of personal reputation and trust. Critics, such as those focused on digital ethics, would tell you that the ability to make anyone say anything could be weaponized for personal vendettas, blackmail, or severe harassment. Consider the psychological impact on the victim, who might struggle to prove their innocence against a recording that sounds exactly like them. It perplexes me to think how we, as a society, will navigate a world where our own words can be so easily fabricated and turned against us.

What's more, there's the challenge to intellectual property and the creative arts. Voice actors, narrators, singers: their unique vocal talents are their livelihood. I'd like you to entertain the idea that widespread, unauthorized voice cloning could devalue their work or see their voices used in ways they never consented to. Some argue for the creative possibilities, perhaps generating new performances by long-deceased artists. Others argue against it, citing the ethical nightmare of consent and compensation. It is my personal belief that we need to tread very carefully here.

Globally, the response to these emerging threats is still nascent. Different countries will undoubtedly adopt different regulatory approaches. In Thailand, for example, while awareness of cybercrime is growing, specific legislation addressing AI-generated voice impersonation might still be in its early stages. We often see technology outpace the legal frameworks designed to govern it. That's not all; even if laws are in place, the cross-border nature of the internet makes enforcement incredibly challenging.

My conviction is that education and awareness are paramount. As a language teacher, I often emphasize critical thinking when interpreting texts. Now, we must extend that critical thinking to auditory information. We need to cultivate a healthy skepticism, a habit of cross-referencing, and an understanding of the technological capabilities that exist. It is well known that literacy in the 21st century encompasses more than just reading and writing; it includes digital and media literacy.

I am not an expert in cybersecurity, but I have read somewhere that researchers are working on AI tools to detect AI-generated voices. This is a kind of technological arms race, in which "good" AI is developed to fight "bad" AI. However, I must make it clear that technology alone is unlikely to be the complete solution. The "democratization" of AI tools means that cloning capabilities are becoming more accessible, not just to large corporations or state actors, but to individuals with moderate technical skills and, potentially, malicious intent. They say that what was once the domain of specialized labs can now be achieved with off-the-shelf software or cloud-based services.

I notice that younger generations, often dubbed "digital natives," might be particularly vulnerable, paradoxically because they are so immersed in digital communication. My first impression is that they might be quicker to trust digital interactions. However, I also have faith in their adaptability and their capacity to learn and navigate new digital terrains. Wisdom from the past hints that every generation faces its unique technological challenges and finds ways to adapt.

Let me introduce you to the notion of a "zero-trust" approach, but for audio. Perhaps, in high-stakes situations, we might need to move towards systems where voice alone is not sufficient for authentication or verification. Multi-factor authentication, video confirmation, or even pre-established code words could become more commonplace, even in personal communications, if the threat escalates. I know you would agree with me that this adds layers of complexity to our interactions, but it might be a necessary inconvenience.
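
A toy sketch of what I mean: a challenge-response check that takes the voice out of the trust equation altogether. Both parties agree on a secret in advance, and each call gets a fresh challenge, so a replayed or cloned voice alone gains nothing. Everything below is invented for illustration:

```python
# Illustrative only: a toy challenge-response check for high-stakes calls.
# The shared secret is agreed in person beforehand; a fresh challenge per
# call means a cloned or replayed voice cannot produce the right answer.
import hashlib
import hmac
import secrets

SHARED_SECRET = b"agreed-in-person-beforehand"  # hypothetical

def make_challenge() -> str:
    return secrets.token_hex(4)  # fresh random challenge for each call

def expected_response(challenge: str) -> str:
    mac = hmac.new(SHARED_SECRET, challenge.encode(), hashlib.sha256)
    return mac.hexdigest()[:6]  # short code the caller must read back

challenge = make_challenge()
print("Ask the caller for the code for:", challenge)
print("Expected answer:", expected_response(challenge))
```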

Fundamentally, I would argue that the conversation around voice cloning is not just about technology; it's about ethics, responsibility, and the kind of society we want to build. It is my personal belief that developers of these powerful AI models bear a significant responsibility. They must proactively think about safeguards, responsible deployment, and the potential for misuse from the very inception of their creations. The decision by Google, which I mentioned earlier, to restrict access on its sandbox platform, was perhaps an early example of such corporate responsibility, however limited.

I somehow think that the genie is already out of the bottle. The technology exists, and it will continue to improve. Therefore, our efforts must focus on mitigating the risks. Some argue for stringent regulations, while others fear stifling innovation. Finding that balance is incredibly difficult. Nevertheless, it is my long-held belief, though I could be wrong, that we cannot afford to be passive observers. We need a multi-pronged approach involving technological safeguards, legal frameworks, public education, and a strong ethical compass guiding AI development and deployment.

What's more interesting is that the positive applications, though not the focus of this piece, do exist. For individuals who have lost their voice due to illness or injury, voice cloning offers a path to regaining a part of their identity. For creative industries, it can offer new tools for dubbing, character creation, or even personalized digital assistants that sound more natural and engaging. I like the idea of technology serving humanity in positive ways. As a matter of fact, AI has tremendous potential for good.

However, we must remain vigilant about the "threat" aspect. We may look back fondly on days when everything seemed simple, but that simplicity was often a veil of ignorance over underlying complexities. Like it or not, the world moves on, and we must move with it, armed with knowledge and caution.

Make no mistake, the capacity for AI to clone voices with increasing accuracy is a double-edged sword. One may ask what the ultimate impact will be. No one knows everything, but I would like to sound a note of informed caution. My gut tells me that we are still in the early stages of understanding the full societal implications of this technology. Accordingly, continuous dialogue, research, and proactive measures are essential.

While I marvel at the technological prowess behind voice cloning, like most people, I am also deeply concerned about its potential for misuse. It is a stark reminder that with great power comes great responsibility, as the saying goes. We, as individuals and as a society, need to be prepared. Having said that, I realize that fostering a culture of critical engagement with technology, rather than outright fear, is the most constructive path forward. 

Indeed, the future will be shaped by how we choose to navigate these powerful new tools. The past is the past; we must look to how we responsibly manage such innovations for a more secure future.

Janpha Thadphoothon is an assistant professor of ELT at the International College, Dhurakij Pundit University in Bangkok, Thailand. He also holds a certificate in Generative AI with Large Language Models issued by DeepLearning.AI.

Monday, May 26, 2025

Navigating Trump's Tariffs: Thailand's Strategic Response

By Janpha Thadphoothon, Assistant Professor of ELT, International College, Dhurakij Pundit University, Bangkok, Thailand

Like millions of people, I am increasingly concerned about Donald Trump’s trade policies, especially his tariff measures.

I am sure you would agree with me that the global economic landscape is in a state of flux, especially with the resurgence of protectionist policies. As we know, the reimplementation of tariffs under President Trump's administration has sent ripples through international trade networks. I think it's imperative to delve into how Thailand, an export-driven economy, is strategizing to mitigate these challenges. We need to do something or change our ways of doing things.


Understanding the Tariff Landscape

The news has it that the U.S. has imposed tariffs ranging from 10% to 49% on various imports, affecting several ASEAN countries, including Thailand. Critics such as economists from the Penn Wharton Budget Model would tell you that these tariffs could reduce long-run GDP by about 6% and wages by 5%. Accordingly, these measures have significant implications for Thailand's economy, which relies heavily on exports.

Thailand's Economic Vulnerabilities

It is well known that Thailand's economy is export-oriented, with the U.S. being its largest export market, accounting for 18.3% of total exports and worth about $55 billion last year. My conviction is that such heavy reliance on a single market makes the economy susceptible to external shocks. People say that diversification is key, and in this context, it's more relevant than ever.
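
For perspective, here is some back-of-envelope arithmetic implied by the figures above. It is illustrative only; real tariff incidence is far more complicated, and duties are formally paid by U.S. importers:

```python
# Back-of-envelope arithmetic from the figures cited above (illustrative only).
us_exports = 55e9   # USD: Thai exports to the U.S. last year
us_share = 0.183    # U.S. share of total Thai exports

total_exports = us_exports / us_share
print(f"Implied total exports: ${total_exports / 1e9:.0f}B")  # roughly $300B

# The article cites a 10% to 49% tariff range; duties at the endpoints:
for rate in (0.10, 0.49):
    print(f"{rate:.0%} tariff on ${us_exports / 1e9:.0f}B of goods: "
          f"${us_exports * rate / 1e9:.1f}B in duties")
```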

It seems clear to me that these tariffs will have far-reaching effects on everyone, not just the countries targeted by them but also the American people themselves. In my opinion, trade measures of this kind often produce unintended consequences. While they may be designed to protect domestic industries, they inevitably lead to higher prices for imported goods, disrupt global supply chains, and invite retaliatory actions from trading partners. As we know, when tariffs are imposed, the costs are frequently passed down to consumers, businesses, and workers alike. I am sure you would agree with me that in a globally connected economy, no one is entirely immune from such policy shifts — and, in the long run, it is ordinary people who often bear the brunt.

Government's Strategic Response

First of all, the Thai cabinet has approved the reallocation of 157 billion baht ($4.7 billion) from a consumer stimulus program to fund projects aimed at countering the economic impact of looming U.S. tariffs. This move indicates a shift from short-term consumer spending to long-term investments in infrastructure and support for small businesses.

Furthermore, Thailand has submitted a proposal to the U.S. that includes improving market access for U.S. goods, preventing transshipment violations, and encouraging Thai investments that would generate jobs in the U.S. I like the idea of fostering mutual economic benefits to ease trade tensions.

ASEAN's Collective Stance

What's more interesting is that at a recent ASEAN summit, Malaysia's Foreign Minister urged Southeast Asian nations to deepen economic integration and market diversification in response to U.S. tariffs. I somehow think that a unified regional approach could amplify the negotiating power of individual countries like Thailand.

Private Sector's Concerns

I notice that the private sector is also voicing concerns. The Federation of Thai Industries has warned that if the tariffs stand, the economy might grow by just 0.7% this year, with potential export revenue losses over the next decade amounting to 1.4 trillion baht ($43 billion). In my opinion, this underscores the urgency for both government and industry to collaborate on mitigation strategies.

Exploring Alternative Markets

Some argue for exploring alternative markets to reduce dependency on the U.S. market. Thailand's participation in the Regional Comprehensive Economic Partnership (RCEP) could open new avenues for trade within Asia. I think leveraging such agreements can provide a buffer against unilateral trade policies.

Enhancing Domestic Competitiveness

Fundamentally, it is all about enhancing domestic competitiveness. The government's focus on infrastructure and support for small businesses aims to strengthen the internal economy. I must admit that building resilience from within is a prudent approach in these uncertain times.

Conclusion

Having said that, I realize the challenges are multifaceted. Nevertheless, it is my belief that through strategic planning, regional cooperation, and domestic strengthening, Thailand can navigate the complexities of Trump's tariffs. Like it or not, the world moves on, and adaptability is the key to thriving amidst change.

It won't be easy.

About me: Janpha Thadphoothon is an assistant professor of ELT at the International College, Dhurakij Pundit University in Bangkok, Thailand. He also holds a certificate in Generative AI with Large Language Models issued by DeepLearning.AI.


Disclaimer:

The above is my personal opinion and perspective. I am not an economist, financial analyst, or expert in the field of international trade and tariffs. This article is written for informational and reflective purposes only, based on publicly available information and personal observation. Readers are advised to consult qualified professionals or conduct their own research before making any economic or policy-related decisions.
