How to Communicate with AI through Prompts
Janpha Thadphoothon
I'm writing this blog article in reaction to a question some students asked me in class: "Sir, what is a prompt?" The question was both surprising and refreshing. It's funny how some of the most profound questions are the simplest. The students asked in the context of using prompts to improve their writing, particularly with the help of AI tools. "You need to learn how to use prompts to work with machine (AI) agents," I advised, knowing that it would be a skill they would need sooner rather than later.
I hesitated to utter the term "prompt engineering." In my opinion, it sounds technical, even intimidating. I don’t consider myself an expert in prompt engineering, but I do know what a prompt is. To me, it’s essentially a command, like telling the AI, “Explain what global warming is.” AI agents, like ChatGPT or Gemini, operate according to our commands—our prompts.
I think of a prompt as a way of communicating with the AI, shaping its response to suit our needs. You would probably agree that, when used thoughtfully, prompts are a powerful tool. They allow us to tap into AI’s vast knowledge in a way that’s personalized and useful. For instance, we could ask, “List some strategies for learning English,” and the AI could provide helpful methods, examples, and even suggest interactive exercises.
They say prompt engineering is like speaking a new language. While it may seem complex, it’s really about clarity and specificity—telling the AI precisely what we want. People often say that AI only knows what we tell it, and I think there’s truth to that. Crafting a good prompt is about understanding the details you need and then directing the AI to focus on those details.
As far as I know, using prompts effectively is about having a conversation with the AI. Think of it as guiding a partner in a dance. For example, if you’re researching climate change, you could ask the AI for “climate change data from the past decade” or “an explanation of how climate change impacts tropical ecosystems.” Each prompt guides the AI differently.
My perception is that learning to communicate with AI will be as essential as learning to write a formal letter or make a presentation. This skill opens doors to endless information and insights, empowering us to learn more efficiently. In my opinion, mastering prompts doesn’t just improve our interaction with AI; it enhances our critical thinking by teaching us to frame questions and guide conversations with purpose.
Examples of Prompts in Communication with AI Agents
When communicating with AI, prompts can range from simple to complex, depending on the desired response. Here are a few examples to illustrate how prompts work (a short code sketch after the list shows how one of them could be sent to a chat model):
1. Basic Inquiry Prompt
- Example: “What is climate change?”
- Explanation: This is a straightforward question, and ChatGPT will typically provide a general, concise answer. It’s often referred to as a “zero-shot” prompt, meaning the AI doesn’t have any extra context or examples and must answer directly based on its training data.
2. Elaborative Prompt
- Example: “Explain climate change in simple terms for a 10-year-old.”
- Explanation: Here, the prompt includes additional information, requesting a response suitable for a younger audience. This guides ChatGPT to simplify complex concepts.
3. Analytical Prompt
- Example: “Compare climate change policies in the US and Europe.”
- Explanation: This prompt requires the AI to perform a comparative analysis, resulting in a more detailed response that considers policy differences.
4. Creative Prompt
- Example: “Write a short story about a robot exploring a new planet.”
- Explanation: This prompt nudges the AI to take on a creative task, generating a story rather than a factual answer.
5. Multi-Step Inquiry
- Example: “Explain the greenhouse effect, then list three ways individuals can reduce their carbon footprint.”
- Explanation: This prompt has multiple parts, directing the AI to provide a layered answer.
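For readers who would like to try these prompts in code, here is a minimal sketch of how one of them could be sent to a chat model from Python. I use the OpenAI Python SDK purely as an illustration; the model name, and the assumption that an API key is already set in your environment, are my own choices rather than anything required by the examples.

```python
# Minimal sketch: sending one of the example prompts to a chat model.
# Assumes the OpenAI Python SDK is installed (pip install openai) and that
# an API key is available in the OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompt = "Explain climate change in simple terms for a 10-year-old."

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name; any chat model would do
    messages=[{"role": "user", "content": prompt}],
)

print(response.choices[0].message.content)
```

Swapping in any of the other example prompts, or another provider's SDK, works in essentially the same way.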
Understanding Zero-Shot Prompting vs. Multi-Layer Prompting (Structured Prompts)
Zero-Shot Prompting
Zero-shot prompting involves giving the AI a single question or command with no additional context, guidance, or examples. The AI answers based on what it “knows” from its training data.
- Example of Zero-Shot Prompt: “Summarize the plot of To Kill a Mockingbird.”
- Here, the AI provides a direct response with no extra prompting or follow-up questions. This method is quick and straightforward but may yield a simpler response.
- Best Use Case: Zero-shot prompts work well for general knowledge questions or simple tasks that don’t require specific customization or depth.
Multi-Layer Prompting (Structured Prompts)
Multi-layer prompting, or structured prompting, breaks a question down into multiple structured parts or layers, guiding the AI through a step-by-step approach. Related techniques include few-shot prompting, where the prompt contains worked examples, and prompt chaining, where each new prompt builds on the AI’s previous response (see the sketch after the example below).
- Example of Multi-Layer Prompt:
- Layer 1: “List three major themes in To Kill a Mockingbird.”
- Layer 2: “Now, explain each theme with a quote from the book.”
- Layer 3: “Provide a short analysis of how each theme is relevant today.”
- Best Use Case: Multi-layer prompting is ideal for complex tasks that require deeper analysis, detailed information, or a more structured response. This method allows the AI to generate responses that build on prior information or context.
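To make the contrast with zero-shot prompting concrete, below is a rough sketch of how the three layers above could be sent as a chain, with each layer appended to the same conversation so the AI can build on its earlier answers. The SDK calls and model name are, again, illustrative assumptions on my part.

```python
# Sketch of multi-layer (chained) prompting: each layer is appended to the
# same conversation history so the model can build on its previous answers.
from openai import OpenAI

client = OpenAI()

layers = [
    "List three major themes in To Kill a Mockingbird.",
    "Now, explain each theme with a quote from the book.",
    "Provide a short analysis of how each theme is relevant today.",
]

messages = []  # running conversation history
for layer in layers:
    messages.append({"role": "user", "content": layer})
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=messages,
    )
    answer = reply.choices[0].message.content
    messages.append({"role": "assistant", "content": answer})
    print(f"--- {layer}\n{answer}\n")
```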
Key Differences
- Depth of Response: Zero-shot prompts often lead to brief, direct answers, while multi-layer prompts result in richer, more comprehensive responses.
- Control over Output: Multi-layer prompting gives the user more control over the AI’s output by guiding it through specific steps, whereas zero-shot relies on the AI’s interpretation of a single, isolated question.
- Application Suitability: Zero-shot is efficient for straightforward inquiries; multi-layer is better for tasks that require detailed, organized information or creative content with specific direction.
Zero-shot prompting is quick and simple but less detailed, while multi-layer prompting allows for structured, complex responses that align more closely with specific needs or goals.
When interacting with AI agents, the quality of your instructions plays a crucial role in shaping the responses you receive. Clear, specific, and well-structured instructions guide the AI, allowing it to understand your intent better and deliver results that align with your expectations.
In other words, the way you “communicate” through prompts directly impacts how effectively the AI understands and responds. If you’re precise and provide necessary details in your instructions, the AI can generate a response that’s not only accurate but also relevant to your needs.
For example:
- Vague Prompt: “Explain climate change.”
- Detailed Prompt: “Explain climate change in simple terms, focusing on how it affects daily life, and provide three examples of actions people can take to reduce their impact.”
The second prompt is likely to produce a more insightful and targeted response. So, yes—clear and thoughtful instructions really do matter when communicating with AI agents.
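If you are curious, you can test this difference yourself. The sketch below sends both versions of the prompt to the same model and prints the two replies for comparison; the small ask() helper and the model name are simply illustrative choices of mine.

```python
# Sketch: comparing a vague prompt with a detailed one.
from openai import OpenAI

client = OpenAI()

def ask(prompt: str) -> str:
    """Send a single prompt to the chat model and return its reply."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

vague = "Explain climate change."
detailed = (
    "Explain climate change in simple terms, focusing on how it affects "
    "daily life, and provide three examples of actions people can take "
    "to reduce their impact."
)

print("VAGUE PROMPT:\n", ask(vague))
print("\nDETAILED PROMPT:\n", ask(detailed))
```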
Do Politeness and Hedging Affect Responses?
From a practical standpoint, using polite expressions (like “please” or “could you…”) or hedging phrases (“I think…”, “Would you agree…?”) doesn’t impact the technical function of AI responses because ChatGPT processes the core of a question rather than emotional tone. The AI isn’t aware of politeness or human-like intentions; it simply analyzes input to generate relevant output. However, my personal experience is this: the use of politeness can actually improve interactions in specific ways. Why?
1. Clarity and Completeness: Phrasing questions as polite requests often naturally encourages you to give clearer, more specific prompts. For example, “Could you provide a list of…” often yields a better response than a vague “List…”
2. Human-Like Interaction: When AI is embedded in tools for tasks like customer service or virtual assistance, polite phrasing can feel more natural and create a sense of empathy, improving user experience for people engaging with AI in a social or professional context.
Psychological Side: Should We Treat AI as Sentient?
Regarding the deeper question of whether it’s desirable to treat AI as if it’s sentient, here are some factors to consider:
1. Social and Emotional Conditioning: Language shapes how we think and feel. If we talk to AI with human-like courtesy, we may unconsciously attribute human characteristics to it. For some people, this could create comfort, encouraging them to explore and interact more. But for others, it could blur the line between technology and true social interaction, potentially leading to misunderstandings about AI’s capabilities and limitations.
2. Empathy and Ethics: There’s a rising view that polite language may foster respectful behavior overall, even toward non-sentient systems. Teaching people to interact respectfully with AI could help reinforce empathy and patience in broader social interactions, especially for younger users. Yet, it’s crucial to remind ourselves that AI, unlike a human, doesn’t feel and doesn’t require empathy.
3. Effective Communication: Hedging and politeness often bring clarity and specificity, as we tend to be more intentional with our wording when we use polite phrases. This approach is useful, not because the AI needs it, but because it helps users articulate thoughts more clearly, leading to better AI responses.
Final Thought
In my opinion, polite and hedged language can enhance the interaction experience with AI, making it feel more human-like and approachable, which may foster exploration and creativity. However, keeping in mind that AI lacks true understanding, sentience, or emotion helps maintain realistic expectations and prevents us from ascribing too much human-like agency to the technology.
So, it’s worth using polite language for our own benefit, in terms of clarity and comfort, but remembering AI’s non-sentient nature is key to using it wisely and effectively.
This article, as I mentioned earlier, is a reaction to a question asked by some of my students. I hope you’ve found it insightful. One key takeaway is that when you ask questions, good things happen. Curiosity opens doors to new knowledge and understanding. So, don’t hesitate—ask questions whenever you want to learn more, and seek answers from both people and AI agents alike.
About Janpha Thadphoothon
Janpha Thadphoothon is an assistant professor of ELT at the International College, Dhurakij Pundit University in Bangkok, Thailand. Janpha also holds a certificate in Generative AI with Large Language Models issued by DeepLearning.AI.