Monday, November 18, 2024

Chomsky's Last Intellectual Debate?


By Janpha Thadphoothon

I am not a big fan of Noam Chomsky's political stance, but I hold great respect for his monumental contributions to linguistics and related fields. His work has shaped how we understand the human capacity for language, making him a towering figure in intellectual history. 

One of my most vivid memories is of a teacher harshly criticizing Chomsky's theory of Universal Grammar in an attempt to elevate her own. At the time, in the early 2000s, I had a limited understanding of the complex linguistic concepts involved and the importance of building upon the work of others.

I discovered Geoffrey Hinton's groundbreaking AI ideas through online resources. A British-born researcher who now calls Canada home, he is widely regarded as one of the leading figures in artificial intelligence, and in 2024 he shared the Nobel Prize in Physics for his foundational work on neural networks.



Chomsky's theory of Universal Grammar (UG) and his nativist perspective on language revolutionized linguistics. At its core, UG posits that the ability to acquire language is hardwired into the human brain—a genetic endowment unique to our species. This theory stood in direct opposition to behaviorist views, famously debated in his clash with B. F. Skinner, who emphasized external stimuli and conditioning as the primary drivers of language acquisition.  

Fast-forward to the present, and Chomsky faces a new intellectual challenge. The rise of Artificial Intelligence (AI), particularly Large Language Models (LLMs) and generative AI systems like ChatGPT, has reignited debates about his theory of language. Unlike humans, these models are not born with innate linguistic knowledge. Instead, they process vast amounts of data and rely on probabilistic algorithms to generate human-like text.  

Critics argue that LLMs undermine Chomsky's nativist framework by demonstrating that language can emerge from statistical patterns and data-driven learning, without the need for an innate grammar. This debate has unfolded against the backdrop of Chomsky's advancing age—he is now in his mid-nineties—and a rapidly evolving AI landscape.  

Chomsky has not remained silent on this issue. He has described AI systems like ChatGPT as "sophisticated tricks" that excel at mimicry but lack the deeper cognitive capacities that characterize human language. Unlike Geoffrey Hinton and other AI pioneers who see these models as a paradigm shift, Chomsky remains steadfast in his belief that true language use requires understanding, intentionality, and a biological foundation that machines inherently lack.  

This debate may well be one of the last intellectual battlegrounds for Chomsky, a scholar who has spent decades defending the biological roots of language. Whether one agrees with him or not, his enduring presence in the discussion is a testament to the profound influence he has had on how we think about human nature and communication. 

The debate between Noam Chomsky and proponents of AI advancements, including Geoffrey Hinton, revolves around fundamental disagreements on the nature of intelligence, language, and the role of generative AI models such as Large Language Models (LLMs).

Chomsky's Perspective

Chomsky argues that LLMs, such as ChatGPT, lack genuine understanding and are merely sophisticated statistical systems predicting text based on prior data. He has described such AI systems as "a trick," emphasizing that they do not engage in reasoning or possess the innate linguistic structures central to his Universal Grammar theory. Chomsky's work posits that language is an innate human capability governed by biological principles, setting it apart from AI-driven language generation, which lacks the conceptual depth and cognitive framework to mirror human linguistic competence.

Hinton and the AI Community

On the other hand, figures like Geoffrey Hinton highlight the transformative potential of LLMs, emphasizing their emergent abilities. These models, despite their lack of explicit programming to understand concepts, demonstrate skills such as contextual reasoning and stylistic imitation. Critics of Chomsky's views argue that the success of LLMs challenges the necessity of innate linguistic principles. They suggest that such systems can approximate understanding through their training on vast data sets, showing capabilities that resemble human-like behavior, even if derived differently.

LLMs and the Neo-Behaviorist Perspective

Hinton’s perspective on language within AI systems can be seen as echoing elements of a neo-behaviorist approach, focusing on patterns, frequency, and stimuli. In this view, the functioning of Large Language Models (LLMs) resembles the behaviorist emphasis on observable and measurable responses, as these models learn language by identifying patterns in massive datasets without requiring innate structures like Universal Grammar. This alignment is worth exploring in several ways:

Neo-behaviorist Features in LLMs

Pattern Recognition Over Cognition: LLMs process language through statistical analysis, identifying frequencies and co-occurrences of words and phrases. This approach aligns with the behaviorist principle that learning results from exposure to patterns in stimuli, rather than from innate cognitive mechanisms.

Stimuli-Response Dynamics: Behaviorists, including B. F. Skinner, viewed learning as the strengthening of responses to specific stimuli. Similarly, LLMs “learn” by adjusting weights in neural networks based on input-output pairings during training, which mirrors this behaviorist framework.

Emergence Through Data, Not Innateness: Unlike Chomsky's nativist theories that posit a biological basis for language, Hinton and others in the AI field see language as an emergent property of processing vast amounts of data. This perspective suggests that complex linguistic behavior can arise from simpler mechanisms, a hallmark of behaviorist thinking.
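To make the pattern-recognition point concrete, here is a toy sketch of statistical next-word prediction using bigram counts. This is my own simplified illustration, not how modern LLMs actually work (they use neural networks over vastly larger contexts), and the corpus and function names are invented for the example:

```python
from collections import Counter, defaultdict

# A toy corpus; a real LLM is trained on trillions of words.
corpus = (
    "the cat sat on the mat . "
    "the dog sat on the rug . "
    "the cat chased the dog ."
).split()

# Count how often each word follows each other word (bigram statistics).
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Return the most frequent successor of `word` seen in the corpus."""
    return following[word].most_common(1)[0][0]

print(predict_next("sat"))  # "on" in this corpus
print(predict_next("on"))   # "the" in this corpus
```

Even this crude frequency table produces locally plausible continuations, which is the behaviorist-flavored point: the "knowledge" is nothing but patterns extracted from exposure.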

Where Neo-Behaviorism Diverges

Hinton’s approach differs in one critical way: while behaviorism traditionally rejected the idea of internal cognitive states, LLMs involve intricate neural network architectures. These architectures, while not biological, model internal representations that enable context-sensitive responses, suggesting a more nuanced framework than strict behaviorism.

Relevance to the Debate

In framing LLMs as neo-behaviorist, the criticism from Chomsky becomes more pointed: these systems, like behaviorist theories, may fail to capture the deeper generative and cognitive aspects of human language. The debate thus highlights whether linguistic competence is a product of surface-level pattern learning or innate faculties.

This perspective offers a valuable lens for interpreting how AI reshapes our understanding of language learning and cognition, blending modern computational techniques with echoes of mid-20th-century psychological theory.

Key Debate Points

1. Understanding vs. Mimicry: Chomsky believes LLMs mimic language without understanding, while proponents argue that the models exhibit emergent properties indicating complex skill synthesis.

2. Innateness vs. Learning: Chomsky’s theory of Universal Grammar emphasizes innate structures, whereas LLM success suggests that massive data exposure and pattern recognition might suffice for language-like behavior.

3. Cognitive Boundaries: Critics, including Hinton, challenge Chomsky’s notion that AI models cannot approach human-like cognition, pointing to their practical effectiveness in real-world tasks.

Current Implications

This debate extends beyond linguistics to broader questions about AI’s role in society. The AI community acknowledges that while LLMs lack human-like intentionality, their applications in communication, knowledge synthesis, and decision-making are revolutionary. Researchers are actively exploring whether the capabilities of LLMs signify a fundamental shift in understanding intelligence or merely an extension of statistical modeling.

For more in-depth information, you can explore discussions about AI's linguistic capabilities on sites like Quanta Magazine or CBMM’s panel discussions.

Who wins?

Predicting the "winner" of a debate between Noam Chomsky and Geoffrey Hinton on the nature of language and AI depends on how one defines "winning" and the audience's perspective. Each side represents a fundamentally different approach to understanding intelligence and language, and their strengths lie in their respective domains.

Why Chomsky Could Prevail

1. Philosophical and Biological Consistency: Chomsky’s Universal Grammar theory has withstood decades of scrutiny and is deeply rooted in biology and cognitive science. His arguments that AI lacks genuine understanding resonate with those who view language as more than just statistical patterns.

2. Critique of AI Limitations: Chomsky's critique of AI as "a trick" reflects concerns about the absence of reasoning and consciousness in models like ChatGPT. His points resonate with skeptics who prioritize human-like cognition over performance.

Why Hinton and AI Proponents Could Prevail

1. Demonstrable Results: AI systems like LLMs have produced remarkable, measurable outcomes, excelling in tasks previously thought to require human intelligence. For many, this pragmatic success outweighs philosophical objections.

2. Challenging Innateness: The ability of LLMs to generate coherent and contextually relevant language through training on large datasets challenges the necessity of innate linguistic structures, directly undermining Chomsky’s theory.

3. Broader Acceptance of Data-Driven Models: In the age of AI, data-driven approaches have gained wide acceptance for their scalability and application, making Hinton's views more appealing to technologists and applied linguists.

Who Wins?

The "winner" depends on the framing:

- Academically: Chomsky’s theories hold a foundational place in linguistic thought and remain essential for understanding human language development and cognition.

- Practically: Hinton and the AI community are reshaping how society interacts with and understands language through technology.

Chomsky's Perspective on Language Acquisition

  • Universal Grammar (UG): Chomsky argues that humans are born with an innate ability to acquire language, governed by a "universal grammar" hardwired into the brain. This framework provides the structures and rules necessary for language learning.
  • Poverty of the Stimulus: He emphasizes that children acquire complex language structures despite limited exposure (or incomplete input), suggesting the existence of internal mechanisms that fill in the gaps.
  • Critique of AI in Language Learning: Chomsky asserts that LLMs (like ChatGPT) and their statistical approaches are not analogous to how humans acquire or understand language because they lack an intrinsic grasp of grammar and semantics.

Hinton's (and AI's) Implications for Language Acquisition

  • Pattern Recognition Over Innateness: Hinton’s work with neural networks implies that language acquisition might be more about recognizing and replicating patterns in large datasets (a process similar to AI training) than relying on innate mechanisms.
  • Empirical Learning: Neural networks learn through exposure to massive amounts of data, resembling behaviorist theories where input (stimuli) and repetition shape learning. This contrasts with Chomsky’s claim that exposure alone is insufficient for language acquisition in humans.
  • AI as a Model for Learning: Hinton’s perspective challenges Chomsky’s by showing that systems can generate meaningful linguistic output without innate grammar, calling into question the necessity of a universal grammar for learning language.

Overlap and Tensions

  • The debate implicitly examines whether humans acquire language via:
    • Internal, biologically encoded rules (Chomsky).
    • External data-driven processes of pattern recognition (Hinton/AI models).

While Chomsky’s theory focuses on human-specific biological mechanisms, Hinton’s AI-driven approach suggests that learning could be explained by exposure and interaction with linguistic data. This contrast invites further exploration into whether human language acquisition is unique or shares similarities with machine learning processes.

Geoffrey Hinton does acknowledge the role of biological factors, including genetics, in language development, but his focus primarily differs from Chomsky's. Hinton’s work centers on computational models and neural networks, emphasizing the power of learning from data and experiences rather than relying on strictly innate mechanisms.


Hinton’s Recognition of Nature in Language Development

1. Brain-Inspired Models:

   - Hinton’s neural networks are based on how the brain functions, reflecting his acknowledgment of the biological foundations of intelligence, including language. These models simulate neurons and synaptic connections, inspired by human cognitive processes, which are ultimately rooted in our genetic makeup.

2. Initial Neural Capacities:

   - Hinton recognizes that humans are born with certain innate capacities, such as the structure of the brain and the ability to form connections between neurons. This mirrors a basic form of nature's contribution, though he views these as general cognitive mechanisms rather than a language-specific module like Chomsky’s Universal Grammar.

3. Adaptation Through Experience:

   - Unlike Chomsky, who emphasizes pre-wired linguistic structures, Hinton suggests that genetic predispositions provide the foundation for learning but that language itself is shaped largely by interaction with the environment and exposure to data.

Key Differences from Chomsky’s View

While Hinton doesn’t deny the influence of genetics, he diverges from Chomsky by:

- Downplaying the need for an innate, language-specific grammar.

- Highlighting the role of exposure and iterative learning in shaping linguistic abilities.

- Suggesting that human intelligence, including language, emerges from more general neural mechanisms rather than a pre-programmed linguistic blueprint.

Hinton acknowledges that nature plays a role in language development through the biological structures that facilitate learning, but he focuses on the adaptability and emergent properties of these systems. His work bridges the gap between acknowledging innate capacities and demonstrating how sophisticated learning arises primarily from data-driven processes. This creates a more empiricist view compared to Chomsky’s rationalist stance on Universal Grammar.


Ultimately, this debate may not produce a definitive "winner" because it highlights two complementary perspectives. Chomsky's work emphasizes the uniqueness of human cognition, while Hinton's contributions showcase how technology can mimic and extend aspects of intelligence. The real value lies in how these debates push the boundaries of what we know about both human and artificial intelligence.

As AI continues to advance, the question remains: does it challenge Chomsky's theories, or does it merely highlight the profound differences between artificial and human intelligence? Only time—and further debate—will tell.  

In retrospect, this controversial debate stimulated scholarly exploration and investigation. After all, such intellectual discourse is the hallmark of human civilization.



Janpha Thadphoothon is an assistant professor of ELT at the International College, Dhurakij Pundit University in Bangkok, Thailand. Janpha Thadphoothon also holds a certificate of Generative AI with Large Language Models issued by DeepLearning.AI.

Stages in AI Development and the Future of AI


By Janpha Thadphoothon  

I am sure you would agree with me that the development of artificial intelligence (AI) is one of the most transformative technological shifts in human history. What we refer to as "artificial intelligence" is, in essence, a software application—or more accurately, a set of digital applications—that satisfies two key criteria: it can learn or be trained, and it can exhibit human-like behavior.  

I am not a data scientist but an English teacher, so my perspective on AI is not overly technical. However, I have read somewhere that AI has been evolving through distinct stages since its early beginnings. From its humble roots in the 1970s to the innovations of the 1980s, and now, as we look toward 2030, we can observe a remarkable trajectory.  



Experts in the field say that AI can be understood through a series of developmental stages, each with its unique characteristics and potential. In this blog post, I will share what I believe are five key stages in AI's evolution—from simple chatbots to what I prefer to call "agentic entities," systems that could one day manage businesses or even entire organizations.  

Types of AI and How They Learn

Before diving into the stages, let me briefly touch on the different types of AI and how they are trained.

  1. Types of AI:

    • Generative AI: These systems, such as ChatGPT, DALL-E, and MidJourney, can create new content, including text, images, music, or videos. They use large datasets and advanced models to generate outputs that mimic human creativity.
    • Predictive AI: Systems like recommendation engines analyze data to predict future outcomes, such as which movies you might like or stock market trends.
    • Reactive AI: These are limited systems that only respond to specific tasks, like playing chess or diagnosing faults in machines.
    • Adaptive AI: AI capable of learning and evolving in real time, improving its performance as it interacts with its environment.
  2. Training Methods:

    • Supervised Learning: AI is trained on labeled data, where it learns by example. For instance, a system might be trained on images of cats and dogs to identify which is which.
    • Unsupervised Learning: The system works with unlabeled data, finding patterns or clusters on its own. This approach is often used in market segmentation.
    • Reinforcement Learning: This involves training AI through trial and error, rewarding it for correct actions and penalizing it for mistakes. A good example is AlphaGo, which learned to master the game of Go through countless simulations.
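As a concrete illustration of supervised learning, here is a minimal sketch that "trains" on labeled examples and classifies new inputs by nearest centroid. The data, feature names, and functions are invented for the example; real systems use far richer models, but the learn-by-example principle is the same:

```python
# Toy supervised learning: learn from labeled examples, then classify new ones.
# Features are (weight_kg, ear_length_cm); the numbers are made up.
labeled_data = [
    ((4.0, 6.0), "cat"), ((4.5, 7.0), "cat"), ((3.8, 6.5), "cat"),
    ((20.0, 12.0), "dog"), ((25.0, 14.0), "dog"), ((18.0, 11.0), "dog"),
]

def train(examples):
    """'Training': compute the average feature vector (centroid) per label."""
    sums, counts = {}, {}
    for features, label in examples:
        s = sums.setdefault(label, [0.0] * len(features))
        for i, value in enumerate(features):
            s[i] += value
        counts[label] = counts.get(label, 0) + 1
    return {label: tuple(v / counts[label] for v in s)
            for label, s in sums.items()}

def predict(centroids, features):
    """Prediction: pick the label whose centroid is closest (Euclidean)."""
    def dist(centroid):
        return sum((a - b) ** 2 for a, b in zip(features, centroid)) ** 0.5
    return min(centroids, key=lambda label: dist(centroids[label]))

model = train(labeled_data)
print(predict(model, (5.0, 6.8)))    # near the cat examples
print(predict(model, (22.0, 13.0)))  # near the dog examples
```

The system is never told what a cat is; it only generalizes from labeled examples, which is exactly what "learning by example" means in the supervised setting.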

Now, let’s look at how these training methods have contributed to the development of AI through its various stages.

The Five Stages of AI Development

1. Reactive Agents (Chatbots)  

In its earliest stage, AI is reactive, designed to handle specific inputs and generate pre-programmed outputs. These agents lack memory or the ability to understand context. They say the first chatbots, like ELIZA from the 1960s, were pioneers of this stage. Today, this level of AI is still widely used in customer service chatbots.  

2. Contextual Agents (Assistants)  

The second stage involves AI systems that can learn from data and adapt to context. Virtual assistants like Siri and Alexa fall into this category. They are smarter than simple chatbots and can perform a range of tasks, from setting reminders to answering trivia questions.  

3. Collaborative Agents (Strategists)

By the 2020s, AI began to take on more collaborative roles, assisting humans in making strategic decisions. For example, AI tools in finance or logistics analyze data and provide actionable insights. I am sure you would agree with me that such systems already show potential as strategic partners.  

4. Agentic Entities (Entrepreneurs)  

Looking toward the near future, it is believed that AI will evolve into fully autonomous systems. These "agentic entities" will be capable of managing entire enterprises, from identifying business opportunities to executing strategies. This stage could redefine what it means to lead and innovate.  

5. Networked Entities (Ecosystem Leaders)  

In the final stage, AI systems will likely function as part of interconnected networks. They say these entities will not only work independently but also coordinate with other systems to optimize global operations in fields like healthcare, education, and transportation.  

From the 1970s to 2030: A Brief Timeline  

- 1970s: The early days of AI were driven by academic curiosity and foundational theories. ELIZA, one of the first chatbots (built in the mid-1960s), had already demonstrated the potential for AI to simulate conversation, albeit in a limited way.  

- 1980s: Expert systems emerged, allowing computers to make decisions based on pre-defined rules. This decade saw AI applications in industries like medicine and engineering.  

- 1990s-2000s: Machine learning gained traction, with systems becoming more adaptive. Breakthroughs like IBM's Deep Blue defeating a world chess champion in 1997 showcased the growing capabilities of AI.  

- 2010s: The era of deep learning and big data began. Virtual assistants like Siri, Alexa, and Google Assistant became household names. AI began assisting in areas such as autonomous vehicles and personalized recommendations.  

- 2020s-2030s: Experts predict that AI will evolve into agentic entities capable of entrepreneurship and leadership. These systems will be smarter, more autonomous, and able to navigate ethical challenges, prompting the need for robust AI regulations.  

Ethical Concerns and the Role of AI Regulations  

As AI progresses, ethical concerns inevitably arise. I have read somewhere that questions about privacy, bias, and accountability dominate discussions about AI's future. For example, who is responsible when an autonomous system makes a mistake? Can we ensure that AI decisions are fair and unbiased?  

It is believed that governments and organizations are taking steps to address these issues. AI regulations are being developed to create a balance between innovation and responsibility. For instance, the European Union has proposed frameworks to ensure AI systems respect fundamental rights and promote transparency.  

They say we are entering an age where ethics must go hand in hand with technology. Without thoughtful regulation, the potential misuse of AI could overshadow its benefits. As educators, we have a role to play in fostering discussions about these challenges and preparing the next generation to navigate this new world responsibly.  


At present, it should be clear that AI is neither a passing fad nor mere hype. It is as real and transformative as air, water, or electricity. If anyone still has doubts, I encourage them to seek the truth and explore the subject further. I cannot force people to believe in the reality of AI, but I can share my insights and experiences to raise awareness.  

The future is being shaped before our eyes, with AI playing a pivotal role alongside other beings—humans and cyborgs—working together to lead the way forward.  


Saturday, November 16, 2024

Random Errors and the Event Horizon



Janpha Thadphoothon

Some of the most profound questions in life come to us unexpectedly, often when we are unprepared to grapple with their depth. Many years ago, I had the privilege of conversing with two remarkable individuals whose insights shaped my understanding of randomness, design, and the nature of the universe.  



The Statistician and Random Errors  

The first individual was an expert in statistics. He once posed a question about the random error in a language (English) test I had designed. At the time, I was young and inexperienced, and I could only respond with confusion. Thankfully, he was kind, acknowledging my ignorance without judgment.  

It took me years to fully appreciate his question. I later learned that in measurement theory, a measured score equals the true score plus error, and that errors can be classified into two types: random errors and systematic errors. While systematic errors follow a predictable pattern, random errors are, by their nature, unpredictable and scattered.  
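This decomposition can be illustrated with a small simulation. The numbers and names below are my own invention; the point is that averaging many measurements cancels random error but leaves the systematic bias intact:

```python
import random

random.seed(0)  # make the illustration reproducible

TRUE_SCORE = 70.0        # a learner's hypothetical "true" ability
SYSTEMATIC_ERROR = -3.0  # e.g. a consistently over-harsh rater

def measure():
    """Measured score = true score + systematic error + random error."""
    random_error = random.gauss(0, 2.0)  # unpredictable, mean zero
    return TRUE_SCORE + SYSTEMATIC_ERROR + random_error

scores = [measure() for _ in range(10_000)]
mean = sum(scores) / len(scores)

# The average converges toward 67, not 70: random error washes out,
# but the systematic bias of -3 remains.
print(round(mean, 1))
```

This is why repeated testing alone cannot fix a biased test: repetition tames randomness, not systematic error.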

At the philosophical plane, the concept of randomness is intellectually stimulating. It raises questions about whether chaos truly governs the universe or whether what appears random is part of a grander order. The physicist Roger Penrose, for instance, has argued that the extraordinarily ordered initial state of the universe is difficult to attribute to mere chance, pointing to structure, or something beyond our current comprehension.  

I cannot claim to have an answer to such a monumental question. However, one thing is clear: when randomness approaches zero, we encounter the realm of absolutes or singularities—a point where certainty reigns, much like the event horizon of a black hole. At this boundary, the status of things teeters in a gray zone, neither fully defined as "0" nor "1."  

The Engineer and the Event Horizon  

The second individual was an engineer specializing in telecommunications. We met at the university canteen, where I had the chance to read one of his papers on event horizons. He explained to me that in certain systems, signals encoded in the "gray areas" between 0 and 1 are secure from hacking. His work left me puzzled, as I lacked the technical background to fully grasp his ideas.  

What intrigued me, however, was his description of the "gray area" as a zone of uncertainty and transition. At the boundaries—whether in black holes or in data encoding—the usual rules break down, and we confront a state of liminality. It is a state where definitions blur and possibilities multiply.  

Randomness or Design: A Personal Choice  

This brings me back to the question that lingers: is the universe a product of randomness or design? Perhaps the answer lies within us. If you choose to see the universe as designed, you might find evidence to support that belief. If you see it as random, you might be equally justified.  

After all, we are products of this universe, composed of atoms and molecules that themselves are the outcomes of countless interactions. In a way, we embody both randomness and design—a balance of chaos and order.  

Einstein once remarked, "God does not play dice with the universe," suggesting a deterministic view of existence. Yet he also said that perhaps even God had no choice. These ideas remind us that the boundary between randomness and design may be as fluid as the event horizon, where clarity dissolves into mystery.  

In the end, the question of randomness versus design might not demand an answer. Instead, it invites us to marvel at the complexity of existence and our role within it.  


Janpha Thadphoothon is an assistant professor of ELT at the International College, Dhurakij Pundit University in Bangkok, Thailand. Janpha Thadphoothon also holds a certificate of Generative AI with Large Language Models issued by DeepLearning.AI.

Thursday, November 14, 2024

How to Communicate with AI through Prompts


Janpha Thadphoothon

I'm writing this blog article in reaction to a question students asked me in class: "Sir, what is a prompt?" The question was both surprising and refreshing. It's funny how some of the most profound questions are the simplest. The students asked in the context of using prompts to improve their writing, particularly with the help of AI tools. "You need to learn how to use prompts to work with machine (AI) agents," I advised, knowing that it is a skill they will need sooner rather than later.

I hesitated to utter the term "prompt engineering." In my opinion, it sounds technical, even intimidating. I don’t consider myself an expert in prompt engineering, but I do know what a prompt is. To me, it’s essentially a command, like telling the AI, “Explain what global warming is.” AI agents, like ChatGPT or Gemini, operate according to our commands—our prompts. 

I think of a prompt as a way of communicating with the AI, shaping its response to suit our needs. You would agree with me that, when used thoughtfully, prompts are a powerful tool. They allow us to tap into AI’s vast knowledge in a way that’s personalized and useful. For instance, we could ask, “List some strategies for learning English,” and the AI could provide helpful methods, examples, and even suggest interactive exercises.

They say prompt engineering is like speaking a new language. While it may seem complex, it’s really about clarity and specificity—telling the AI precisely what we want. People often say that AI only knows what we tell it, and I think there’s truth to that. Crafting a good prompt is about understanding the details you need and then directing the AI to focus on those details.

As far as I know, using prompts effectively is about having a conversation with the AI. Think of it as guiding a partner in a dance. For example, if you’re researching climate change, you could ask the AI for “climate change data from the past decade” or “an explanation on how climate change impacts tropical ecosystems.” Each prompt guides the AI differently.

My perception is that learning to communicate with AI will be as essential as learning to write a formal letter or make a presentation. This skill opens doors to endless information and insights, empowering us to learn more efficiently. In my opinion, mastering prompts doesn’t just improve our interaction with AI; it enhances our critical thinking by teaching us to frame questions and guide conversations with purpose. 

Examples of Prompts in Communication with AI Agents

When communicating with AI, prompts can range from simple to complex depending on the desired response. Here are a few examples to illustrate how prompts work:

1. Basic Inquiry Prompt

   - Example: “What is climate change?”

   - Explanation: This is a straightforward question, and ChatGPT will typically provide a general, concise answer. It’s often referred to as a “zero-shot” prompt, meaning the AI doesn’t have any extra context or examples and must answer directly based on its training data.


2. Elaborative Prompt

   - Example: “Explain climate change in simple terms for a 10-year-old.”

   - Explanation: Here, the prompt includes additional information, requesting a response suitable for a younger audience. This guides ChatGPT to simplify complex concepts.

3. Analytical Prompt

   - Example: “Compare climate change policies in the US and Europe.”

   - Explanation: This prompt requires the AI to perform a comparative analysis, resulting in a more detailed response that considers policy differences.

4. Creative Prompt

   - Example: “Write a short story about a robot exploring a new planet.”

   - Explanation: This prompt nudges the AI to take on a creative task, generating a story rather than a factual answer.

5. Multi-Step Inquiry

   - Example: “Explain the greenhouse effect, then list three ways individuals can reduce their carbon footprint.”

   - Explanation: This prompt has multiple parts, directing the AI to provide a layered answer.

Understanding Zero-Shot Prompting vs. Multi-Layer Prompting (Structured Prompts)

Zero-Shot Prompting

Zero-shot prompting involves giving the AI a single question or command with no additional context, guidance, or examples. The AI answers based on what it “knows” from its training data.

- Example of Zero-Shot Prompt: “Summarize the plot of To Kill a Mockingbird.”

  - Here, the AI provides a direct response with no extra prompting or follow-up questions. This method is quick and straightforward but may yield a simpler response.

- Best Use Case: Zero-shot prompts work well for general knowledge questions or simple tasks that don’t require specific customization or depth.

Multi-Layer Prompting (Structured Prompts)

Multi-layer prompting, or structured prompting, breaks down a question into multiple, structured parts or layers, guiding the AI through a step-by-step approach. This approach is also known as few-shot prompting if it involves examples, or prompt chaining if it builds on previous prompts.

- Example of Multi-Layer Prompt: 

  - Layer 1: “List three major themes in To Kill a Mockingbird.”

  - Layer 2: “Now, explain each theme with a quote from the book.”

  - Layer 3: “Provide a short analysis of how each theme is relevant today.”

- Best Use Case: Multi-layer prompting is ideal for complex tasks that require deeper analysis, detailed information, or a more structured response. This method allows the AI to generate responses that build on prior information or context.

Key Differences

- Depth of Response: Zero-shot prompts often lead to brief, direct answers, while multi-layer prompts result in richer, more comprehensive responses.

- Control over Output: Multi-layer prompting gives the user more control over the AI’s output by guiding it through specific steps, whereas zero-shot relies on the AI’s interpretation of a single, isolated question.

- Application Suitability: Zero-shot is efficient for straightforward inquiries; multi-layer is better for tasks that require detailed, organized information or creative content with specific direction.

Zero-shot prompting is quick and simple but less detailed, while multi-layer prompting allows for structured, complex responses that align more closely with specific needs or goals.
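The layered example above can be sketched in code. The following is a minimal illustration of multi-layer (chained) prompting, assuming a generic chat-style interface: `ask` is a placeholder stub I have invented for illustration, not a real library function, and in practice it would wrap whatever LLM service you use.

```python
# A minimal sketch of multi-layer (chained) prompting.
# `ask` is a stand-in for any LLM chat-completion call; here it
# just echoes the last prompt so the sketch runs without a network.

def ask(history):
    """Placeholder for an LLM call: takes the conversation so far,
    returns the model's reply (canned here for demonstration)."""
    return f"[model reply to: {history[-1]['content']}]"

def chain_prompts(layers):
    """Send prompts one layer at a time, feeding each reply back in
    as context so later layers can build on earlier ones."""
    history = []
    for prompt in layers:
        history.append({"role": "user", "content": prompt})
        reply = ask(history)
        history.append({"role": "assistant", "content": reply})
    return history

layers = [
    "List three major themes in To Kill a Mockingbird.",
    "Now, explain each theme with a quote from the book.",
    "Provide a short analysis of how each theme is relevant today.",
]
conversation = chain_prompts(layers)
# Each user prompt now has a reply attached, and every later layer
# is sent with the full conversation as context.
```

A zero-shot prompt, by contrast, would be a single call to `ask` with one question and no accumulated history.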


When interacting with AI agents, the quality of your instructions plays a crucial role in shaping the responses you receive. Clear, specific, and well-structured instructions guide the AI, allowing it to understand your intent better and deliver results that align with your expectations. 

In other words, the way you “communicate” through prompts directly impacts how effectively the AI understands and responds. If you’re precise and provide necessary details in your instructions, the AI can generate a response that’s not only accurate but also relevant to your needs. 

For example:

- Vague Prompt: “Explain climate change.”

- Detailed Prompt: “Explain climate change in simple terms, focusing on how it affects daily life, and provide three examples of actions people can take to reduce their impact.”

The second prompt is likely to produce a more insightful and targeted response. So, yes—clear and thoughtful instructions really do matter when communicating with AI agents.


Does Politeness and Hedging Affect Responses?

From a practical standpoint, using polite expressions (like “please” or “could you…”) or hedging phrases (“I think…”, “Would you agree…?”) doesn’t change the technical function of AI responses, because ChatGPT processes the core of a question rather than its emotional tone. The AI isn’t aware of politeness or human-like intentions; it simply analyzes input to generate relevant output. However, in my personal experience, politeness can actually improve interactions in specific ways. Why?

1. Clarity and Completeness: Phrasing questions as polite requests often naturally encourages you to give clearer, more specific prompts. For example, “Could you provide a list of…” often yields a better response than a vague “List…”

2. Human-Like Interaction: When AI is embedded in tools for tasks like customer service or virtual assistance, polite phrasing can feel more natural and create a sense of empathy, improving user experience for people engaging with AI in a social or professional context.

Psychological Side: Should We Treat AI as Sentient?

Regarding the deeper question of whether it’s desirable to treat AI as if it’s sentient, here are some factors to consider:

1. Social and Emotional Conditioning: Language shapes how we think and feel. If we talk to AI with human-like courtesy, we may unconsciously attribute human characteristics to it. For some people, this could create comfort, encouraging them to explore and interact more. But for others, it could blur the line between technology and true social interaction, potentially leading to misunderstandings about AI’s capabilities and limitations.

2. Empathy and Ethics: There’s a rising view that polite language may foster respectful behavior overall, even toward non-sentient systems. Teaching people to interact respectfully with AI could help reinforce empathy and patience in broader social interactions, especially for younger users. Yet, it’s crucial to remind ourselves that AI, unlike a human, doesn’t feel and doesn’t require empathy.

3. Effective Communication: Hedging and politeness often bring clarity and specificity, as we tend to be more intentional with our wording when we use polite phrases. This approach is useful, not because the AI needs it, but because it helps users articulate thoughts more clearly, leading to better AI responses.

Final Thought

In my opinion, polite and hedged language can enhance the interaction experience with AI, making it feel more human-like and approachable, which may foster exploration and creativity. However, keeping in mind that AI lacks true understanding, sentience, and emotion helps us maintain realistic expectations and prevents us from ascribing too much human-like agency to the technology.

So, it’s effective to use polite language for our benefit in terms of clarity and comfort—but remembering AI’s non-sentient nature is key to using it wisely and effectively.

This article, as I mentioned earlier, is a reaction to a question asked by some of my students. I hope you’ve found it insightful. One key takeaway is that when you ask questions, good things happen. Curiosity opens doors to new knowledge and understanding. So, don’t hesitate—ask questions whenever you want to learn more, and seek answers from both people and AI agents alike.

About Janpha Thadphoothon


Janpha Thadphoothon is an assistant professor of ELT at the International College, Dhurakij Pundit University in Bangkok, Thailand. He also holds a certificate in Generative AI with Large Language Models issued by DeepLearning.AI.


Saturday, November 9, 2024

English and AI Skills


Janpha Thadphoothon

Hello, I’m Janpha Thadphoothon, a professional English language teacher and author. Today, we are on the cusp of a new frontier of cooperation, one that transcends the traditional human-to-human collaboration and opens doors to human-AI partnerships. My goal here is to highlight that English skills and AI skills are closely connected, with English serving as a crucial component of AI competence. In other words, if English is your strength, you're already halfway to thriving in the digital age.


Working with AI 

In the past, successful collaboration was built on interpersonal skills among humans. Now, with the rise of artificial intelligence, it has become crucial to equip ourselves with the skills needed to work effectively alongside these automated systems. This is especially relevant for Thai university students who will soon navigate a world where human-AI cooperation is routine.


Why Thai University Students Need English and AI Skills

AI is advancing rapidly in understanding language and responding to our needs, yet it requires a solid foundation in English to unlock its full potential. Thai students, in particular, need robust English language skills to communicate with AI systems that largely operate in English and to access global information. English proficiency allows students to better understand and control these technologies, enhancing their capabilities to use AI for research, data analysis, and communication.

Key Skills for Working with AI Agents

To collaborate with AI effectively, students need to develop both technical and soft skills. Technical skills include understanding basic principles of machine learning, data analysis, and prompt engineering — skills that enable them to interact with AI agents in a way that maximizes their utility. Soft skills, such as critical thinking and problem-solving, are equally important, as they help students evaluate AI outputs critically and use them effectively in their studies and future careers.

Mindset and Attitudes Needed

Beyond technical proficiency, the mindset toward AI is equally significant. Thai students need to adopt an open-minded and adaptable approach, understanding that AI is a tool, not a threat. Embracing curiosity and a willingness to learn is vital, as AI technologies continuously evolve. The ability to question, evaluate, and adapt to new AI tools will allow students to stay current in a fast-changing landscape.

For Thailand to thrive in this age of AI, there is an urgent need to integrate these skills into education. While other countries are making strides in AI-driven education, Thailand lags due to limited access and a lack of focus on AI-readiness in curricula. Thai universities must prioritize English and digital skills, preparing students not just for the present but for the future. 

We need to take bold steps to ensure that our education system fully embraces AI, giving Thai students the skills and confidence to work alongside AI agents. Only by addressing these issues can Thailand harness the transformative potential of AI and secure a stronger position in the global economy.


Illustrated by Gemini (Dated 13 Nov 2024)

Why Canada is an AI Superpower


Janpha Thadphoothon

Canada is often imagined as a serene country filled with nature lovers and cyclists, a place where people cherish the environment and embrace a green lifestyle. Though I’ve never been to Canada myself, my impression of the country comes from media and its portrayal as a peaceful, eco-conscious nation. However, recent discoveries about Canada’s progress in artificial intelligence (AI) have reshaped my view. Canada is not only known for its vast wilderness but is emerging as a leader in AI innovation.

Canada’s AI Powerhouses

Over the past decade, Canada has quietly positioned itself as a global hub for AI research. Toronto, Montreal, and Edmonton have become centers of AI activity, drawing researchers from around the world. The presence of luminaries like Geoffrey Hinton—often referred to as one of the "Godfathers of AI"—has put Canada on the AI map. Hinton’s groundbreaking work in deep learning, conducted primarily in Canada, has paved the way for many AI advancements we see today.

Government Support and Policy as Catalysts

One reason for Canada’s rise in AI is its supportive policies. Canada has a history of investing in research and development, and the government has specifically supported AI as a strategic field. The Pan-Canadian Artificial Intelligence Strategy, launched in 2017, was one of the first of its kind globally, with a significant investment aimed at supporting AI research and commercialization. This commitment has attracted talent and funding, bolstering Canada’s position as a leader in the field.

Collaborative Culture and Inclusivity

Canada’s culture of collaboration and inclusivity has also played a part. The country has fostered a research environment where collaboration between universities, private companies, and government agencies thrives. Canadian universities, such as the University of Toronto, McGill University, and the University of Alberta, are known for their strong computer science and AI programs. This collaborative environment has allowed Canada to advance AI technologies in ways that align with ethical considerations, fairness, and inclusivity.

AI for Sustainable Development

Interestingly, Canada’s AI initiatives are often linked with sustainable development. From AI models that monitor climate change to technologies that optimize resource management, Canada’s AI advancements reflect its commitment to a greener world. AI labs in Canada are not just creating products; they are innovating with a vision for environmental responsibility and sustainable progress.

Canada's AI-Enhanced Approach to Risk Assessment

The recent decision by Canada to dissolve TikTok's Canadian business while allowing the app to remain highlights the country's nuanced approach to balancing digital freedom with national security. This action could indeed reflect Canada's growing expertise in AI, which has likely enhanced its ability to assess and manage cybersecurity risks from foreign tech entities like ByteDance, TikTok's parent company. By allowing individual Canadians the choice to continue using TikTok, Canada underscores its commitment to digital autonomy and personal freedom while taking preventative action to protect sensitive national data from potential threats.

Canada’s AI capabilities may play a role in its cautious stance on foreign digital platforms, leveraging advanced predictive modeling and data analysis to assess risks. Canadian AI research institutions and government agencies likely have the means to simulate and model potential security concerns, including privacy vulnerabilities and data exposure risks associated with apps developed abroad. This expertise allows Canada to make informed decisions that strike a balance between security needs and the freedoms of its citizens.

Why dissolution instead of a ban? Unlike countries that have outright banned TikTok, Canada’s approach of ordering TikTok Technology Canada Inc. to dissolve reflects a targeted intervention that addresses corporate and regulatory concerns without impeding individual freedoms. This decision is rooted in the Investment Canada Act, which enables the review of foreign investments for potential threats to national security. In this case, Canada’s decision illustrates its approach to handling security risks without resorting to outright censorship, showing respect for user choice while managing its own security policies more directly.

Canada’s Digital Security Landscape

Canada’s leadership in AI could be a significant factor behind this tailored approach. Canada has invested in AI to support ethical decision-making, data transparency, and security, all of which strengthen its stance on cybersecurity. This expertise supports Canada’s capability to monitor and assess foreign digital entities in real time, which may have informed the outcome of the TikTok review. Canada's AI capabilities thus seem to enhance its ability to navigate the complex interplay between security and digital freedoms, serving as a model for responsible tech governance.

Canada’s Unique AI Strengths

Canada’s AI strength may come as a surprise to those who think of it solely as a land of nature lovers, but this perception overlooks the country’s quiet ambition and strategic investments. Canada’s dedication to AI research is grounded in values that emphasize collaboration, ethical responsibility, and sustainability. Canada, it seems, has harnessed its green values and applied them to the digital age, creating a model of AI leadership that is as thoughtful as it is innovative.


References

"Canada Shuts down TikTok's Canadian Offices, but Allows App to Remain." CBS News, 7 Nov. 2024, https://www.cbsnews.com/news/tiktok-ban-canada/

"AICan: The Impact of the Pan-Canadian AI Strategy." CIFAR, https://cifar.ca/ai/impact/#topskipToContent

"Canada: A Global Leader in AI Technology." Let's Talk Science, 11 Jan. 2024, https://letstalkscience.ca/educational-resources/backgrounders/canada-a-global-leader-in-ai-technology


Please cite as:

Thadphoothon, J. (November 2024). "Why Canada is an AI Superpower". JT Blog. https://janpha.blogspot.com/2024/11/why-canada-is-ai-superpower.html


Janpha Thadphoothon is an assistant professor of English Language Teaching (ELT) at the International College, Dhurakij Pundit University in Bangkok, Thailand. He holds a certificate in Generative AI with Large Language Models issued by DeepLearning.AI. His research interests include the intersection of language, technology, and the philosophies underpinning social structures.

Sunday, November 3, 2024

Agentic AI and Multi-Agent Collaboration



Janpha Thadphoothon


The rapid evolution of AI is astonishing. Consider ChatGPT, which was introduced only a few years ago. Today, we're contemplating a future where humans and agentic AI agents work side-by-side.


Agentic AI and multi-agent collaboration are two closely related concepts in the field of artificial intelligence, both focused on creating intelligent systems that can act autonomously and interact with their environment.

It is highly likely that in the near future, humans will work alongside AI agents and other humans, including those with cybernetic enhancements. This collaboration is already happening in various forms:

  • AI as tools: AI systems are increasingly used as tools to assist humans in their work, such as in data analysis, design, and decision-making.
  • AI as collaborators: AI agents can work alongside humans to solve complex problems, providing insights, automating tasks, and learning from human feedback.
  • Cybernetic enhancements: Humans are already integrating technology into their bodies, from prosthetics to implants that enhance their abilities. This trend is likely to continue, leading to a future where humans and machines are more closely integrated.

This collaboration will likely lead to increased productivity, innovation, and new opportunities for both humans and AI. However, it also raises important ethical and societal questions about the nature of work, the distribution of wealth, and the potential for job displacement.



Source: Gemini

Agentic AI

Agentic AI refers to artificial intelligence systems that are capable of independent action and goal achievement. They can perceive their environment, make decisions, and take actions to achieve specific objectives.

Key Characteristics:

  • Autonomy: Agentic AI systems can operate independently without constant human intervention.
  • Goal-Oriented: They have specific objectives that they strive to achieve.
  • Learning and Adaptation: They can learn from their experiences and adapt their behavior to changing circumstances.
  • Complex Decision-Making: They can make complex decisions based on incomplete or uncertain information.

  • Applications:
    • Robotics: Autonomous robots that can navigate complex environments and perform tasks.
    • Virtual Assistants: Intelligent virtual assistants that can understand and respond to user queries.
    • Self-Driving Cars: Autonomous vehicles that can navigate roads and traffic.
    • Game AI: AI agents that can play complex games at a high level.
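The characteristics listed above can be illustrated with a toy perceive-decide-act loop. This is a sketch under simplifying assumptions (the "environment" is just a number line), not a real agentic system; `SimpleAgent` is a name invented for illustration.

```python
# A toy agent loop: perceive the environment, decide, act toward a goal.
# A real agentic system would wrap a robot, a browser, or an API instead
# of this one-dimensional number line.

class SimpleAgent:
    def __init__(self, goal):
        self.goal = goal       # goal-oriented: a target state to reach
        self.position = 0      # the agent's current state

    def perceive(self):
        # Observe the gap between the current state and the goal.
        return self.goal - self.position

    def decide(self, observation):
        # Choose an action: step toward the goal, or stop when reached.
        if observation > 0:
            return +1
        if observation < 0:
            return -1
        return 0

    def act(self):
        # One autonomous perceive-decide-act cycle.
        move = self.decide(self.perceive())
        self.position += move
        return move

agent = SimpleAgent(goal=3)
while agent.act() != 0:   # runs without human intervention until the goal is met
    pass
# agent.position is now 3
```

The loop captures autonomy (no human in the cycle) and goal-orientation; learning and adaptation would replace the fixed `decide` rule with one updated from experience.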

Multi-Agent Collaboration

Multi-agent collaboration involves multiple AI agents working together to achieve a common goal, much like human-human collaboration. These agents (not the FBI kind :-)) can coordinate their actions, share information, and negotiate with each other to achieve their objectives.

Back in the early days, AI agents were lone wolves, each handling their own tasks. But as things got more complicated, we realized that these agents needed to work together. Enter multi-agent collaboration! By teaming up, these AI agents can tackle tough problems, share insights, and make smart decisions together. This collaborative approach not only makes things more efficient but also opens up a world of possibilities for innovation and problem-solving.

  • Key Challenges:
    • Communication: Effective communication between agents is essential for successful collaboration.
    • Coordination: Agents must coordinate their actions to avoid conflicts and maximize efficiency.
    • Trust and Reputation: Agents must be able to trust each other and build reputations.
  • Applications:
    • Distributed Systems: Multiple AI agents working together to solve complex problems.
    • Supply Chain Management: AI agents coordinating the flow of goods and materials.
    • Smart Cities: AI agents managing traffic, energy consumption, and other urban systems.
    • Military Simulations: AI agents simulating complex military scenarios.

The Intersection of Agentic AI and Multi-Agent Collaboration

The combination of agentic AI and multi-agent collaboration enables the creation of highly intelligent and adaptable systems that can tackle complex problems. By working together, AI agents can achieve goals that would be difficult or impossible for a single agent to accomplish.

For example, in a supply chain management system, agentic AI agents could autonomously monitor inventory levels, predict demand, and optimize logistics. Multiple agents could collaborate to coordinate the movement of goods, negotiate with suppliers, and respond to disruptions in the supply chain.
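The supply-chain scenario above can be sketched as two cooperating agents that communicate through messages. This is an illustrative toy, not a production design; `InventoryAgent` and `LogisticsAgent` are hypothetical names invented for this example.

```python
# Two agents collaborating on the supply-chain scenario: one monitors
# inventory, the other schedules shipments based on the first agent's
# messages. Communication here is a plain dict passed between them.

class InventoryAgent:
    def __init__(self, stock, reorder_point):
        self.stock = stock
        self.reorder_point = reorder_point

    def report(self):
        # Share information: broadcast a restock request when stock is low.
        if self.stock < self.reorder_point:
            return {"type": "restock", "amount": self.reorder_point - self.stock}
        return {"type": "ok"}

class LogisticsAgent:
    def __init__(self):
        self.shipments = []

    def handle(self, message):
        # Coordinate actions: schedule a shipment in response to a request.
        if message["type"] == "restock":
            self.shipments.append(message["amount"])

inventory = InventoryAgent(stock=20, reorder_point=50)
logistics = LogisticsAgent()
logistics.handle(inventory.report())   # the agents coordinate via a message
# logistics.shipments == [30]
```

Even this tiny example shows the key challenges named earlier: the agents need a shared message format (communication) and a rule for who acts on what (coordination).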

As AI technology continues to advance, we can expect to see increasing applications of agentic AI and multi-agent collaboration in various fields, leading to significant innovations and improvements in efficiency and productivity.

Works Cited

Ariel. “AI Agentic Workflow: A New Era of Smarter, More Collaborative AI - Viscovery.” Viscovery, 29 Aug. 2024, viscovery.com/en/ai-agentic-workflow-a-new-era-of-smarter-more-collaborative-ai/. Accessed 3 Nov. 2024.

DeepLearning.AI. “Agentic Design Patterns Part 5, Multi-Agent Collaboration.” Agentic Design Patterns Part 5, Multi-Agent Collaboration, 17 Apr. 2024, www.deeplearning.ai/the-batch/agentic-design-patterns-part-5-multi-agent-collaboration/. Accessed 3 Nov. 2024.

“RagaAI- Blog.” Raga.ai, 2024, raga.ai/blogs/ai-agent-workflow-collaboration. Accessed 3 Nov. 2024.

Roth, Wes. "Andrew Ng STUNNING AI Architecture Revealed | 'AI Agentic Workflows Will Drive Massive AI Progress.'" YouTube, www.youtube.com/watch?v=wM5837pVh1g.

tee.t@scbx.com. "Foundation of Agentic AI Workflow: Use Cases and How SCBX Can Adopt This Technology [from the SCBX Unlocking AI seminar]." SCB X Public Company Limited | SCBX, 9 Oct. 2024, www.scbx.com/th/scbx-exclusive/foundation-of-agentic-aiworkflow/. Accessed 3 Nov. 2024.

