Last updated: April 17, 2025
[Following information generated by perplexity.ai]
Prompting in Large Language Models (LLMs) refers to the process of structuring input to effectively communicate desired outcomes to the model, enabling it to perform various tasks without specialized fine-tuning. It's the primary interface between users and LLMs, allowing us to harness their in-context learning abilities.
See Tina Huang's "Google's 9 Hour AI Prompt Engineering Course In 20 Minutes", https://youtu.be/p09yRj47kNM?si=gCqCDYVQnrd_1WZP
When crafting prompts, it's crucial to use clear, unambiguous language and provide sufficient context. For example:
Unclear: "Who won the election?"
Clear: "Which party won the 2023 general election in Paraguay?"
Unspecific: "Generate a list of titles for my autobiography."
Specific: "Generate a list of ten titles for my autobiography. The book is about my journey as an adventurer who has lived an unconventional life, meeting many different personalities and finally finding peace in gardening."
Zero-Shot Prompting: This involves asking the model to perform a task without providing examples. For instance:
Translate "Hello, how are you?" from English to French.
Few-Shot Prompting: This technique provides the model with a few examples of the desired input-output pairs before asking it to perform a similar task. For example:
Example 1: "The sun sets gently, painting the sky with hues of orange and pink, a masterpiece of the night."
Example 2: "The rain whispers secrets to the leaves, a soft symphony of nature's voice, soothing the soul."
Now, write a short poem in this style.
Chain-of-Thought (CoT) Prompting: This method guides the LLM to outline its thought process step by step. For instance:
Sarah bought 5 apples for $2 each and 3 oranges for $1 each. How much did she spend in total?
Explain each step of your reasoning before providing the final answer.
Zero-Shot Chain-of-Thought: This technique involves appending "let's think step by step" to the prompt, triggering the LLM to exhibit a reasoning trail without examples.
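As a minimal sketch, the trigger phrase can be appended programmatically before the prompt is sent to a model (the helper name is illustrative):

```python
def zero_shot_cot(prompt: str) -> str:
    """Append the zero-shot chain-of-thought trigger phrase to a prompt."""
    return prompt.rstrip() + "\n\nLet's think step by step."

print(zero_shot_cot("Sarah bought 5 apples for $2 each and 3 oranges for $1 each. "
                    "How much did she spend in total?"))
```

The returned string would then be passed to whichever LLM API you use.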
Model Sensitivity: Different LLMs have varying strengths and may require different levels of detail in instructions.
Task Complexity: More complex tasks may necessitate more elaborate and precise instructions.
Experimentation: Devising suitable prompts often requires experimentation and benchmarking.
Context Maintenance: LLMs are designed to understand and generate text based on the provided context, including from initial prompts.
Follow-up Prompts: Use follow-up prompts to narrow the context and elicit more specific information.
"Ask Before Prompting" Technique: Start by asking the model to clarify or seek more information before providing a direct prompt.
Fine-tuning vs. Prompt-tuning: Consider whether your task requires fine-tuning the model or if prompt-tuning (using few-shot or zero-shot prompting) is sufficient.
By keeping these aspects in mind and utilizing various prompting techniques, you can effectively harness the power of LLMs for a wide range of tasks, from creative writing to problem-solving and code generation.
See the collection of prompts by Tulsi Soni, https://x.com/shedntcare_/status/1893620948265283868?t=uzeUmBfF3jMc-BsZdMxtSQ&s=03
Pentagram - five steps to a good prompt: Persona, Context, Task, Output, and Constraint. From a course by Alina Zhang (https://www.linkedin.com/in/alina-li-zhang/?trk=lil_instructor), where she calls it the Pentagram framework: https://www.linkedin.com/learning/build-your-own-gpts/pentagram-framework-for-prompt-engineering?u=2106537
Pentagram
Persona: [GPT is talking like a salesman, a professor...]
Context: [For whom is this GPT intended? Its real users]
Task: [What exactly the GPT is supposed to provide. The specific actions from the GPT, perhaps in some measurable way]
Output: [The tone, like encouraging, professional, criticizing, etc. Output a pdf file, an image, or a csv file, etc.]
Constraint: [Constraints like don't use overly technical jargon, don't answer questions that are not related to the GPT, etc.]
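A sketch of how the five Pentagram fields could be assembled into a single instruction block (the function name and example field wording are illustrative):

```python
def pentagram_prompt(persona: str, context: str, task: str,
                     output: str, constraint: str) -> str:
    """Assemble the five Pentagram fields into one instruction block."""
    fields = [("Persona", persona), ("Context", context), ("Task", task),
              ("Output", output), ("Constraint", constraint)]
    return "\n".join(f"{label}: {value}" for label, value in fields)

prompt = pentagram_prompt(
    persona="You talk like a friendly gardening professor.",
    context="The users are beginner home gardeners.",
    task="Recommend three vegetables to plant this month and explain why.",
    output="An encouraging tone; a short bulleted list.",
    constraint="Avoid technical jargon; refuse off-topic questions.",
)
print(prompt)
```

The assembled string would typically be used as a system prompt or custom-GPT instruction.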
Define persona and then:
Task
Context
References: [Provide examples that help]
Evaluate
Iterate and Refine: [(1) Revisit the prompting framework, (2) separate prompts into shorter sentences, (3) try different phrasing or switch to an analogous task, (4) introduce constraints]
This guide explores various prompting techniques used in AI and machine learning, with examples from practical applications in data science. These techniques enhance the performance of large language models (LLMs) by tailoring how they interpret and respond to queries.
Zero-shot prompting involves directly instructing the model to perform a task without providing any examples. The model relies entirely on its pre-trained knowledge.
Use Cases:
Text classification
Sentiment analysis
Simple data transformations
Example:
Task: Classify the sentiment of a review.
Prompt:
Classify the text as positive, negative, or neutral.
Text: "The product quality is excellent but delivery was delayed."
Sentiment:
Output: Positive.
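The zero-shot classification prompt above can be templated; a minimal sketch (the function name and label set are illustrative, and the returned string would then be sent to an LLM):

```python
def zero_shot_sentiment_prompt(text: str) -> str:
    """Build a zero-shot sentiment-classification prompt."""
    return (
        "Classify the text as positive, negative, or neutral.\n"
        f'Text: "{text}"\n'
        "Sentiment:"
    )

print(zero_shot_sentiment_prompt(
    "The product quality is excellent but delivery was delayed."))
```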
Few-shot prompting provides a handful of examples to guide the model toward the desired outcome. This is useful when tasks require specific formatting or nuanced understanding.
Use Cases:
Data formatting
Sentiment analysis with unique expressions
Classification tasks with limited labeled data
Example:
Task: Convert dates into a specific format.
Prompt:
Convert these dates to DD-MM-YYYY format:
Input: March 15, 2024
Output: 15-03-2024
Input: December 1, 2023
Output: 01-12-2023
Input: April 7, 2024
Output:
Output: 07-04-2024.
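A few-shot prompt like the one above can be built from (input, output) example pairs; a minimal sketch with illustrative names:

```python
def few_shot_prompt(instruction: str, examples: list[tuple[str, str]],
                    query: str) -> str:
    """Build a few-shot prompt from (input, output) example pairs."""
    lines = [instruction]
    for given, wanted in examples:
        lines += [f"Input: {given}", f"Output: {wanted}"]
    lines += [f"Input: {query}", "Output:"]  # leave the final output for the model
    return "\n".join(lines)

prompt = few_shot_prompt(
    "Convert these dates to DD-MM-YYYY format:",
    [("March 15, 2024", "15-03-2024"), ("December 1, 2023", "01-12-2023")],
    "April 7, 2024",
)
print(prompt)
```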
Chain-of-thought prompting encourages step-by-step reasoning, making it ideal for complex problem-solving tasks.
Use Cases:
Mathematical reasoning
Logical problem solving
Debugging workflows
Example:
Task: Solve a math problem.
Prompt:
A store has 120 apples and sells 30% on Monday, then 25% of the remaining apples on Tuesday. How many apples are left? Let’s solve this step-by-step.
Output:
Monday sales = 30% of 120 = 36. Remaining = 120 − 36 = 84.
Tuesday sales = 25% of 84 = 21. Remaining = 84 − 21 = 63.
Final answer: 63 apples remain.
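The arithmetic in the chain-of-thought answer can be checked directly:

```python
apples = 120
monday_sold = apples * 30 // 100         # 36 apples sold on Monday
after_monday = apples - monday_sold      # 84 remain
tuesday_sold = after_monday * 25 // 100  # 21 apples sold on Tuesday
after_tuesday = after_monday - tuesday_sold
print(after_tuesday)  # 63
```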
Iterative prompting refines responses through feedback loops, allowing the model to improve its output over multiple iterations.
Use Cases:
Code optimization
Sentiment analysis refinement
Essay writing
Example:
Task: Improve a draft essay.
Generate an essay draft.
Critique for clarity and coherence.
Rewrite based on critique.
Repeat until satisfactory results are achieved.
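The steps above can be sketched as a generic generate-critique-rewrite cycle. Here `model` is any callable mapping a prompt string to a response text; a real LLM API call in practice, a trivial stub below:

```python
def iterative_refine(task: str, model, rounds: int = 3) -> str:
    """Generate a draft, then repeatedly critique and rewrite it."""
    draft = model(f"Write an essay draft about: {task}")
    for _ in range(rounds):
        critique = model(f"Critique this draft for clarity and coherence:\n{draft}")
        draft = model(
            "Rewrite the draft to address the critique.\n"
            f"Draft:\n{draft}\nCritique:\n{critique}"
        )
    return draft

def stub_model(prompt: str) -> str:  # stand-in for a real LLM call
    return "response to: " + prompt.splitlines()[0]

print(iterative_refine("the history of gardening", stub_model, rounds=2))
```

In practice you would also stop early once a critique reports no further issues.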
Role-play prompting assigns the model a specific persona or role to generate more contextually relevant responses.
Use Cases:
Simulating expert advice (e.g., data scientist, doctor)
Customer support automation
Educational tools
Example:
Task: Analyze customer behavior for an e-commerce site.
Prompt: "You are a data scientist analyzing customer behavior for an e-commerce platform. Explain how you would segment customers based on purchase history."
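With chat-style APIs, a persona is usually assigned through a system message. A minimal sketch assuming an OpenAI-style message list (the structure is illustrative and no network call is made):

```python
def role_play_messages(persona: str, user_prompt: str) -> list[dict]:
    """Build a chat message list that assigns the model a persona."""
    return [
        {"role": "system", "content": f"You are {persona}."},
        {"role": "user", "content": user_prompt},
    ]

messages = role_play_messages(
    "a data scientist analyzing customer behavior for an e-commerce platform",
    "Explain how you would segment customers based on purchase history.",
)
print(messages[0]["content"])
```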
Tree-of-Thought Prompting: This technique builds on Chain-of-Thought by exploring multiple reasoning paths (branches) before converging on an answer.
Use Cases:
Decision-making processes
Exploring alternative solutions
Example: For solving complex optimization problems like supply chain logistics, each branch could represent a different strategy (e.g., cost minimization vs time efficiency).
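A sketch of the branch-and-select idea, with a stub model and a placeholder scoring function; real Tree-of-Thought implementations expand and prune branches over several steps rather than scoring whole answers once:

```python
def tree_of_thought(problem: str, strategies: list[str], model, score) -> str:
    """Explore one reasoning branch per strategy, keep the best-scoring answer."""
    branches = [
        model(f"Solve the problem using a '{s}' strategy, step by step:\n{problem}")
        for s in strategies
    ]
    return max(branches, key=score)

def stub_model(prompt: str) -> str:  # stand-in for a real LLM call
    return prompt.splitlines()[0]

best = tree_of_thought(
    "Plan a supply chain for perishable goods.",
    ["cost minimization", "time efficiency"],
    stub_model,
    score=len,  # placeholder: prefer the longest answer
)
print(best)
```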
Self-refine prompting involves generating an initial response, critiquing it, and iteratively improving it until the output meets specific criteria.
Use Cases:
Code debugging
Research summaries
Creative writing
Example: Generate an essay on climate change, critique it for missing points (e.g., economic impacts), and revise iteratively until complete.
Negative prompting specifies what not to include in responses, helping refine outputs by avoiding irrelevant or undesired elements.
Use Cases:
Filtering out biases in text generation
Excluding certain topics or keywords
Example: "Write a summary of this article but exclude any mention of politics."
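The exclusion can be templated; a trivial sketch with illustrative names:

```python
def negative_prompt(task: str, exclusions: list[str]) -> str:
    """Append an explicit exclusion list to a task prompt."""
    return f"{task} Exclude any mention of: {', '.join(exclusions)}."

print(negative_prompt("Write a summary of this article.", ["politics"]))
```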
Generated Knowledge Prompting: This technique prompts the model to first generate relevant facts or subproblems before addressing the main task.
Use Cases:
Writing essays or reports
Complex problem-solving
Example: Before writing an essay on deforestation, prompt the model to generate key facts like "Deforestation contributes to climate change" and "Leads to loss of biodiversity."
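The two-stage pattern can be sketched with any prompt-to-text callable (a stub stands in for a real LLM call; names are illustrative):

```python
def generated_knowledge(topic: str, model, n_facts: int = 3) -> str:
    """Stage 1: elicit facts. Stage 2: answer the main task using those facts."""
    facts = model(f"List {n_facts} key facts about {topic}.")
    return model(f"Using these facts:\n{facts}\nWrite a short essay on {topic}.")

def stub_model(prompt: str) -> str:  # stand-in for a real LLM call
    return "response to: " + prompt.splitlines()[0]

print(generated_knowledge("deforestation", stub_model))
```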
Least-to-Most Prompting: In this approach, the model breaks down a problem into smaller subproblems and solves them sequentially.
Use Cases:
Multi-step calculations
Hierarchical task completion
Example: Solve 2x + 3 = 11.
Subproblem: Subtract 3 from both sides (2x = 8).
Subproblem: Divide by 2 (x = 4).
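The decompose-then-solve pattern can be chained programmatically, feeding each subproblem's answer into the next step (a stub stands in for a real LLM call; names are illustrative):

```python
def least_to_most(problem: str, model) -> str:
    """Ask for subproblems, then solve them in order, accumulating context."""
    subproblems = model(f"List the subproblems needed to solve: {problem}").splitlines()
    context = f"Problem: {problem}"
    for sub in subproblems:
        answer = model(f"{context}\nSubproblem: {sub}\nAnswer:")
        context += f"\nSubproblem: {sub}\nAnswer: {answer}"
    return context

def stub_model(prompt: str) -> str:  # stand-in for a real LLM call
    return "response to: " + prompt.splitlines()[0]

print(least_to_most("Solve 2x + 3 = 11.", stub_model))
```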
Meta-prompting involves guiding the model through iterative steps while refining scope and depth at each stage.
Use Cases:
Task automation
Comprehensive research
Example: "List steps for market analysis → Break down each step → Suggest tools for each step."
This technique uses specific keywords or cues to guide output generation toward desired themes or styles.
Use Cases:
Creative writing (e.g., poetry)
Thematic content generation
Example: "Write a poem about love using the words 'heart,' 'passion,' and 'eternal.'"
These innovative prompting techniques enable tailored interactions with LLMs, making them highly effective for diverse applications in data science and machine learning workflows!
Brown et al., Language Models are Few-Shot Learners, https://papers.nips.cc/paper_files/paper/2020/file/1457c0d6bfcb4967418bfb8ac142f64a-Paper.pdf
Wei et al., Chain-of-Thought Prompting Elicits Reasoning in Large Language Models, https://arxiv.org/pdf/2201.11903
Liu et al., Prompt Injection Attack Against LLM-integrated Applications, https://arxiv.org/abs/2306.05499
Lee Boonstra (Google), Prompt Engineering, https://www.linkedin.com/posts/awaiskhanli_google-prompt-guide-ugcPost-7318611802974511104-T73_?utm_source=share&utm_medium=member_desktop&rcm=ACoAAAPsW64BNtxodzQe3M_H7WwpWaac2Y0ycAQ