When Two Super AIs Do Not Understand Each Other
Janpha Thadphoothon
It is my personal belief that we are sleepwalking into a digital Tower of Babel. Math, they say, is the universal language, but advanced AI doesn't just process numbers; it processes context. Critics watching the decoupling of the Western and Chinese tech stacks would tell you that we are building two separate "Super-Intelligences" with fundamentally different worldviews.
Fundamentally, it is all about the "Semantic Gap." As China moves ahead with its 15th Five-Year Plan, the news has it that its AI models are being trained on a diet of "Sino-centric" data—cultural conventions, social governance models, and specific national aims. Meanwhile, Western models from Google or Microsoft are being fed a different diet of individualistic legal precedents and market-driven ethics.
Accordingly, when these two systems meet at the digital border, I suspect they won't just disagree; they might view each other as "corrupt data." My conviction is that if their underlying "Aims" and "Conventions" are incompatible, even a "Super-Intelligence" cannot bridge the gap without a common frame of reference.
Critics such as Nick Bostrom or researchers at the 2026 Davos AI Summit would tell you that if two super-intelligences have different "world models," they might suffer from "Specification Drift." At bottom, it is an alignment problem. If System A's "Aim" is to maximize efficiency under China's 15th Five-Year Plan, and System B's "Aim" is to maximize shareholder value under Western conventions, the clash is not a matter of translation but of incompatible goals.
People say that AI will solve all our communication problems, but some argue that it will actually inherit our deepest cultural biases. I notice this even in the simple act of living in a largely cashless city like Tianjin. The digital system here handles "trust" and "transaction" in a way that feels seamless, but it is built on a specific social architecture.
At first glance, it looks like a technical glitch. Gradually, I have come to realize it is something deeper. If System A considers a transaction "valid" based on social credit and System B considers it "valid" based on Western financial privacy norms, they aren't just speaking different languages—they are living in different realities.
Ultimately, we have to ask: if these systems are "super," shouldn't they be smart enough to translate? Perhaps. But I'd like to entertain the idea that the more "human-oriented" an AI becomes, the more it absorbs the "untranslatable" nuances of its creators.
No one knows everything, but I would like to suggest that this is where the ACA Model becomes critical once more. I suspect that if we don't teach these systems to understand Aims, Conventions, and Audiences across borders, we will end up with a digital world that is "synced" locally but "broken" globally.
Nevertheless, it is my long-held belief (though I could be wrong) that the "difficulty" of the near future is exactly what will force us to become better bridges ourselves. I would argue that the "Sync Tax" we are paying now—the struggle to make different tech systems talk to each other—is just the beginning.
My gut tells me that the future belongs to those who can navigate the friction. As the saying goes, "A bridge-builder is only as strong as the foundations on both sides." Wisdom from the past hints that we cannot rely on the machines to find common ground for us. It perplexes me to imagine two "God-like" AIs stalled because they cannot agree on a semantic definition of "fairness," but what is more interesting is that this very impasse forces the human back into the loop.
The past is the past; like it or not, the world moves on. Let's be a bit more scientific: we are heading toward a "Multi-Polar AI" world. In Thailand, for example, we will have to become the ultimate "tech-translators," using one system for trade with China and another for the West. I like the idea of Thailand as the neutral "Dashboard" where these two super-systems are forced to shake hands.
As a language teacher, I know that true understanding requires more than vocabulary; it requires empathy. I suspect that even for a Super AI, the same rule might apply.
About the Author:
Janpha Thadphoothon is an assistant professor of ELT at the Faculty of Arts, Dhurakij Pundit University in Bangkok, Thailand. He holds a Doctorate in Education and a certificate in Generative AI with Large Language Models.