Tuesday, January 13, 2026

From Coding to Teaching: Scaffolding in the Age of AI

By: Asst. Prof. Dr. Janpha Thadphoothon

Introduction: The Winds of Change

As we know, the landscape of technology is shifting beneath our feet at a velocity that is both exhilarating and, at times, disorienting. It feels like only yesterday we were marveling at the basic conversational abilities of early chatbots. Yet, today, we stand on the precipice of a new era defined not just by generative text, but by "Physical AI"—machines that can navigate, manipulate, and interact with the messy reality of our physical world.

Image created by Janpha Thadphoothon & Gemini.
Like it or not, the world moves on. The headlines from global tech hubs are no longer just about faster processors or larger language models; they are about robots that can fold laundry, navigate cluttered kitchens, and perform complex industrial tasks. But beneath these flashy demonstrations lies a profound philosophical and practical shift in how humans interact with machines.

It is well known that for decades, the primary mechanism for controlling a machine was programming. We were the dictators of logic, writing rigid lines of code to define every possible movement and contingency. If we wanted a robot arm to move, we had to calculate the precise coordinates and type something akin to move_arm_x_axis_10cm. It was an exacting, often frustrating process rooted in explicit instruction.

However, my gut tells me that this era is drawing to a close. We are moving away from writing code to define individual movements toward a paradigm of "Robot Learning." The future engineer—or perhaps even the general student in a smart home—might not be writing Python scripts to get things done. Instead, they might act as "teachers" in highly sophisticated simulations, correcting the AI's behavior until it learns to generalize a task.

This transition fascinates me deeply. As a language teacher, I notice a striking parallel between this emerging technological method and the fundamental principles of human pedagogy. We are entering a time where our relationship with technology is becoming less about engineering and more about education. I'd like to entertain you with the idea that the future of human-AI interaction will rest heavily on a concept beloved by educators worldwide: scaffolding.

The Old Paradigm: The Rigid Instructor

Those were the days when everything was simple, but rigidity was king. In traditional robotics programming, the human was responsible for foreseeing every variable. The machine had zero agency; it was merely a highly efficient executor of explicit commands.

You may wish to picture this scenario: trying to program a robot to pick up a strawberry. In the old paradigm, you would need to code the exact pressure required to grip the fruit. Too little, it drops; too much, it becomes jam. You would have to hard-code the visual recognition of the strawberry, its exact location in 3D space, and the trajectory of the arm. If someone moved the strawberry two inches to the left, the entire program might fail.
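
For readers who enjoy a concrete picture, here is a deliberately tiny Python sketch of what that hard-coded approach looks like. Everything in it, from the RobotArm class to the coordinates and the grip force, is a made-up stand-in for illustration, not any real robotics library.

```python
# A minimal sketch of the old, rigid paradigm. Every value is hard-coded by
# a human, and the RobotArm class is a hypothetical stand-in (it only prints),
# not part of any real robotics toolkit.

class RobotArm:
    def move_to(self, x, y, z):
        print(f"Moving to ({x}, {y}, {z}) along a fixed trajectory")

    def set_gripper_force(self, newtons):
        print(f"Setting gripper force to {newtons} N")

    def close_gripper(self):
        print("Closing gripper")

# Exact 3D coordinates of the strawberry, in metres. If someone moves the
# fruit two inches to the left, nothing below will notice or adapt.
STRAWBERRY_POSITION = (0.42, 0.15, 0.03)
GRIP_FORCE_NEWTONS = 1.2   # too little and it drops; too much and it becomes jam

arm = RobotArm()
arm.move_to(*STRAWBERRY_POSITION)
arm.set_gripper_force(GRIP_FORCE_NEWTONS)
arm.close_gripper()
arm.move_to(0.42, 0.15, 0.30)   # lift straight up along a pre-planned path
```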

Make no mistake, this approach built the modern industrial world. Automotive assembly lines are testaments to the power of precise, repetitive pre-programming. But the real world is not an assembly line. It is chaotic, unstructured, and unpredictable. A robot programmed rigidly cannot handle a child leaving a toy on the kitchen floor or a coffee cup placed on a slightly different shelf.

I am sure you would agree with me that this reliance on explicit coding is a bottleneck. It requires enormous amounts of human labor and expertise to define every micro-action. Furthermore, it creates brittle systems. As an educator, I liken this to teaching a student by forcing them to memorize thousands of specific sentences without ever teaching them the underlying grammar or giving them the confidence to construct their own thoughts. It works only as long as the context never changes.

Critics such as traditional software engineers would tell you that explicit coding offers control and predictability. They are not wrong. Yet, having said that, I realize that to achieve true utility in our homes and workplaces, machines need something more than rigid obedience; they need a form of adaptability that hard-coding cannot provide.

The New Paradigm: The Rise of the Simulator Classroom

The news has it that major players in AI and robotics are fundamentally rethinking this approach. They are moving toward systems where the AI learns not through line-by-line instruction, but through experience, observation, and feedback—much like a human child.

Experts say that the key unlock here is the use of massive, photorealistic simulations—what some call "world foundation models." These are digital twins of reality where a robot can practice a task millions of times in seconds without ever breaking a real physical object.

What's more interesting is that the human role shifts from writing the code to designing the "curriculum" inside these simulations. We become the mentors.

Fundamentally, it is all about data and feedback. Instead of writing grasp_object, a human operator might wear sensors and perform the action themselves, providing the AI with demonstration data. Or, the AI might try the task in the simulation, fail, and receive a "reward" signal when it gets closer to the goal.
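
To make the "reward signal" idea a little more concrete, here is a toy sketch in Python. The one-dimensional grasping task, the reward function, and the trial-and-error rule are all invented for illustration; real systems rely on rich simulators and far more sophisticated reinforcement learning algorithms.

```python
import random

# A toy illustration of "try, fail, receive a reward signal" in simulation.
GOAL = 0.75   # where the simulated object sits (arbitrary units)

def reward(grip_position):
    """Closer to the goal earns a higher reward (negative distance)."""
    return -abs(grip_position - GOAL)

best_guess = 0.0
best_reward = reward(best_guess)

for attempt in range(500):
    # The "robot" perturbs its current behaviour and keeps whatever scores better.
    candidate = best_guess + random.uniform(-0.1, 0.1)
    if reward(candidate) > best_reward:      # feedback, not explicit instructions
        best_guess, best_reward = candidate, reward(candidate)

print(f"Learned grip position: {best_guess:.2f} (goal was {GOAL})")
```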

This is where the concept of "teaching" truly enters the frame. If an AI robot in a simulation tries to stack blocks and keeps knocking them over, the human doesn't rewrite the physics engine. The human provides corrective feedback, perhaps by demonstrating a more stable stacking technique or adjusting the parameters of the goal. We are guiding the AI toward a desired outcome rather than dictating the precise path to get there.

Let me introduce you to the notion of reinforcement learning from human feedback (RLHF), which has been crucial in training language models like ChatGPT. We are now seeing this applied to physical actions. We are telling the AI, "That was a good try, but a bit too clumsy," or "Yes, that is exactly how you hold a delicate glass."
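
For the curious, here is a small sketch written in the spirit of that kind of preference-based feedback. The two candidate behaviours, the "policy", and the single nudge update are purely illustrative; real RLHF pipelines train a reward model from many human comparisons and then optimise the policy against it.

```python
# A toy sketch of preference feedback applied to a physical action.
# All names and numbers here are invented for illustration.

attempts = {
    "gentle grasp": {"force": 1.0, "speed": 0.2},
    "clumsy grab":  {"force": 4.0, "speed": 0.9},
}

# The human teacher does not write code defining the correct behaviour;
# they simply state which attempt they prefer.
human_preference = "gentle grasp"   # "Yes, that is exactly how you hold a delicate glass."

policy = {"force": 2.5, "speed": 0.5}   # the robot's current tendencies
preferred = attempts[human_preference]
learning_rate = 0.5

for key in policy:
    # Nudge the policy toward the behaviour the human preferred.
    policy[key] += learning_rate * (preferred[key] - policy[key])

print(policy)   # now closer to the gentle, careful behaviour
```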

I somehow think this shift is profound. It democratizes the control of complex machinery. You may not need a degree in computer science to teach a future household robot how to fold your specific type of shirts; you might just need the patience to demonstrate it a few times in a virtual interface and correct its initial mistakes.

Scaffolding: Bridging Pedagogy and Technology

This is where my background intersects intimately with this technological frontier. Wisdom from the past hints that the best way to teach complex skills is through a process known as scaffolding.

Introduced by cognitive psychologists such as Jerome Bruner and deeply influenced by Lev Vygotsky's concept of the Zone of Proximal Development (ZPD), scaffolding is the temporary support given by a teacher that allows a student to perform a task they could not yet do alone. As the student gains competence, the support is gradually faded until they are independent.

I know you would agree with me that you don't teach a child to write an essay by immediately demanding five pages on Shakespeare. You start with sentences. Then paragraphs with sentence starters. Then structured outlines. You provide models, feedback, and encouragement. You operate within their ZPD—the space just beyond what they can do right now, but within reach with guidance.

I'd like to entertain you with the idea that we are now applying Vygotskian scaffolding to artificial intelligence.

When we train a robot in a simulation, we rarely start with the most complex version of the task. We start simple. If we want it to learn to navigate a busy warehouse, we might first place it in an empty virtual room with one target. That is the initial scaffold. Once it masters that, we add static obstacles. Then, perhaps moving obstacles. Finally, we introduce unpredictable elements like other agents.
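
Here is a small, purely illustrative Python sketch of that scaffolded progression. The simulator and the notion of "skill" are toy stand-ins; only the sequence of stages mirrors what I described above.

```python
import random

# A minimal sketch of scaffolded ("curriculum") training. The simulator,
# the agent's skill, and the mastery rule are toy inventions for illustration.

CURRICULUM = [
    {"name": "empty room, one target",    "difficulty": 0.1},
    {"name": "static obstacles added",    "difficulty": 0.4},
    {"name": "moving obstacles added",    "difficulty": 0.7},
    {"name": "other agents in the scene", "difficulty": 0.9},
]

def run_episode(skill, difficulty):
    """Toy simulator: success is more likely when skill exceeds difficulty."""
    return random.random() < 0.5 + skill - difficulty

def train_with_scaffolding(mastery_threshold=0.9, episodes_per_check=100):
    skill = 0.0
    for stage in CURRICULUM:
        while True:
            successes = sum(
                run_episode(skill, stage["difficulty"])
                for _ in range(episodes_per_check)
            )
            if successes / episodes_per_check >= mastery_threshold:
                break           # the scaffold is only removed once mastered
            skill += 0.02       # practice slowly builds competence
        print(f"Mastered: {stage['name']} (skill is now {skill:.2f})")

train_with_scaffolding()
```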

Gradually, I see the parallels emerging clearly. In the classroom, I provide corrective feedback to an English language learner not by writing the essay for them, but by pointing out patterns of error and guiding them toward self-correction. Similarly, in training Physical AI, the human "teacher" provides feedback signals that guide the robot toward the optimal policy for action. We don't move the robot arm for it; we tell it whether its own attempt was successful or unsuccessful.

Fundamentally, I would argue that the effectiveness of future AI will depend heavily on the quality of our human pedagogy. How well can we design these simulation environments? How precisely can we scaffold the tasks from simple to complex? How clear and consistent is the feedback we provide?

Some argue against anthropomorphizing AI, warning that we shouldn't treat machines like humans. While I agree on a technical level—they are mathematical functions, not conscious beings—I must admit that the methods we use to train them are becoming strikingly humanistic. We are moving from an engineering mindset of specifications to a pedagogical mindset of guidance and development.

Implications for the Future of Learning

What does this mean for us, educators and students, particularly here in Southeast Asia?

In Thailand, for example, we have a strong emphasis on STEM education, often focused heavily on coding syntax. While knowing Python remains valuable, I suspect, though I cannot be sure, that the skill set required for the immediate future is broadening.

If the future of human-machine interaction is teaching and scaffolding, then we need to cultivate skills that are inherently human: patience, the ability to deconstruct complex tasks into simpler steps, clear communication, and the ability to provide constructive, targeted feedback.

My conviction is that we might soon see "prompt engineering" evolve into "interaction pedagogy." Students might not just be learning how to query a database; they might be learning how to set up a simulation environment to teach a virtual agent how to identify ripe agricultural produce, a skill highly relevant to our economy.

Furthermore, this shift offers a unique opportunity for reflection. By having to explicitly teach an AI how to perform a task, we are forced to deeply analyze how we understand that task ourselves. You cannot effectively scaffold learning for another entity—human or machine—if you do not possess a deep, metacognitive understanding of the subject matter. In teaching AI, we may end up learning a great deal about our own learning processes.

Some argue for a future where humans are entirely replaced by automation. Nevertheless, it is my long-held belief that the future is collaborative. The "human-in-the-loop" remains essential, but the nature of that loop is changing from a tight, control-based loop to a looser, feedback-based pedagogical loop.

What's next?

It has perplexed me for some time how quickly we have moved from viewing computers as glorified calculators to viewing them as entities capable of learning. We are stepping out of the rigid confines of traditional programming and into the fluid, dynamic world of teaching in simulations.

At first impression, this might seem like merely a technical upgrade. But, as a language teacher, I see it as a validation of educational theory. The principles of scaffolding, giving feedback, and guiding learners through their Zone of Proximal Development are no longer just classroom techniques; they are becoming the foundational principles of human-AI interaction.

Ultimately, as we move forward, we must remember that while the student may be silicon, the teacher is still carbon. Our ability to guide these new tools effectively will depend not on how well we code, but on how well we teach. It is a brave new world, and it turns out, good pedagogy is the key to unlocking it.





Asst. Prof. Dr. Janpha Thadphoothon is an Assistant Professor of English Language Teaching (ELT) at the Faculty of Arts, Dhurakij Pundit University (DPU) in Bangkok, Thailand. With an Ed.D. and a Master’s in Industrial and Organizational Psychology, he specializes in the intersection of human pedagogy and emerging technology. He holds a certificate in Generative AI with Large Language Models from DeepLearning.AI and is a passionate advocate for applying educational frameworks, such as scaffolding, to the future of human-AI interaction.
