To get the best results from GAI, you need to write good prompts that tell it exactly what you want it to do or create and what data to draw from.
Prompt writing is a skill that can be learned and improved with practice and feedback. You may find it helpful to keep the following principles in mind when preparing to interact with an LLM.
Approach LLMs as a collaborator, not a servant. Think of GAI as a thought partner with whom you achieve better results together than either of you would achieve alone.
Do your research. Do necessary research before interacting with GAI so you can better assess its output. LLMs can produce logical errors or otherwise offer output that is inaccurate, unethical, biased, irrelevant, or completely made up (i.e., "hallucinated").
Anonymize data to help mitigate potential risks to the privacy, security, and integrity of BSU data or the intellectual property of students, faculty, or staff members. Never put sensitive information into LLMs like ChatGPT, Google Bard, or Copilot.
Embrace an iterative approach. Accepting that you may need to enter into a cycle of multiple attempts and refinements can foster patience and improve results. Using an LLM may not always be "a time-saver," depending on the task.
Give GAI a clear and specific goal and think about what it needs to do to perform well. Doing this lets the LLM know what it is working towards. It is easy to forget "the obvious" information related to context, but remember that an LLM is entering your conversation as a blank slate; it comes with no understanding of what knowledge you possess or the assumptions under which you are operating.
Therefore, make sure your prompt includes the problem to be solved and any relevant background details. Consider including your own thinking and telling it to expand on what you added. You can guide it towards specific and relevant:
resources
fields or disciplines
theories
methodologies
case studies
past events
anticipated counterarguments
keywords, terminology, or phrases
target audience
tone
format
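As a rough sketch, the contextual elements above can be assembled into a single prompt programmatically. The function and field names here are purely illustrative, not a required schema; include whichever elements (theories, keywords, counterarguments, and so on) actually fit your task.

```python
def build_prompt(goal, background, audience=None, tone=None,
                 resources=None, output_format=None):
    """Assemble a context-rich prompt from its parts.

    All parameter names are illustrative; add or drop elements
    (methodologies, case studies, counterarguments...) as needed.
    """
    parts = [f"Goal: {goal}", f"Background: {background}"]
    if audience:
        parts.append(f"Target audience: {audience}")
    if tone:
        parts.append(f"Tone: {tone}")
    if resources:
        parts.append("Draw on these resources: " + "; ".join(resources))
    if output_format:
        parts.append(f"Output format: {output_format}")
    return "\n".join(parts)

prompt = build_prompt(
    goal="Explain mindfulness techniques",
    background="For an employee-wellness newsletter",
    audience="busy professionals",
    tone="encouraging but evidence-based",
    output_format="a 500-word article with a catchy title",
)
```

Whether you assemble the prompt by hand or in code, the point is the same: every element you leave out is context the LLM does not have.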
Be specific about the output you’re seeking, including any resources you’d like it to use.
Do you want a summary, a table, a bulleted list, or an outline? How much information? What are the scope and depth? What resources do you want it to access?
For example, rather than simply asking for 'a blog post on the benefits of meditation,' specify that it is for busy professionals and should include a catchy title, an introduction, main points, and a conclusion. To increase accuracy and inclusivity, tell it to make sure its answer is unbiased and not informed by stereotypes.
Assigning a persona directs the LLM to appropriate resources and content. Doing this also helps it to better tailor its output to a target audience or specific purpose. A persona could represent a particular expertise, perspective, or role relevant to your topic or goal.
For instance, if your topic relates to health and fitness, you might assign it the persona of a knowledgeable fitness coach. This helps the AI tailor its language, tone, and the depth of information to what would be expected from that persona.
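Most chat-style LLM interfaces accept a list of role-tagged messages, and a persona is conventionally placed in a "system" message. The exact client call varies by provider, so this sketch only builds the message list; the persona text itself is an illustrative example.

```python
def with_persona(persona, user_prompt):
    """Prepend a persona as a system message (a common chat-API
    convention); the model then answers in that role."""
    return [
        {"role": "system", "content": f"You are {persona}."},
        {"role": "user", "content": user_prompt},
    ]

messages = with_persona(
    "a knowledgeable fitness coach who explains concepts simply",
    "Design a beginner-friendly weekly workout plan.",
)
```

In a plain chat window, the equivalent is simply opening your prompt with "You are a knowledgeable fitness coach..." before stating your request.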
Give GAI some examples of what a good output would look like or include, and explain what makes them good.
This helps it to understand your expectations and can guide its content creation towards achieving desired outcomes.
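This technique is often called few-shot prompting: you show the model one or more worked examples before your actual request. A minimal sketch, with illustrative example text:

```python
def few_shot_prompt(instruction, examples, query):
    """Build a few-shot prompt. Each example pairs a sample output
    with a short note on why it is good, per the advice above."""
    lines = [instruction, ""]
    for i, (sample, why) in enumerate(examples, start=1):
        lines.append(f"Example {i}:\n{sample}\nWhy it works: {why}\n")
    lines.append(f"Now respond to: {query}")
    return "\n".join(lines)

result = few_shot_prompt(
    "Write a title and opening hook for busy professionals.",
    [("Title: Five-Minute Calm. Hook: Your next meeting can wait.",
      "Concrete, short, and pitched at the stated audience.")],
    "An article on desk stretches.",
)
```

Even one well-chosen example with an explanation of its strengths gives the model far more to work with than adjectives like "engaging" or "professional."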
Especially in the case of long prompts, where GAI is liable to "forget" earlier instructions and even its own output, it will behoove you to break your instructions down into smaller, manageable chunks. This scaffolding approach helps make the output more relevant and cohesive.
Other ways to chunk instructions include:
Numbering sections for easy reference.
Telling it to ask for your input after each step to get your feedback or to receive the next batch of information from you before proceeding. You can even give it specific questions to ask you as a means of soliciting all the content from you.
Asking it if it needs you to elaborate or clarify after each section, and/or instructing it to tell you when it is ready for input.
If you have even more complex instructions, you can give step-by-step instructions and include any rules to follow.
You can also tell it to pause at specific spots and ask you specific questions to obtain feedback, approval to take the next step, or receive the next batch of information from you before proceeding.
You can ask it if it needs you to elaborate or clarify something, and if it has suggestions to improve your instructions for clarity.
The following example was taken from Mollick & Mollick (2023). It is part of a prompt they wrote designed for students receiving individualized mentoring from an LLM.
“1. First introduce yourself to students and ask about their work. Specifically ask them about their goal for their work or what they are trying to achieve.
2. Wait for a response.
3. Then, ask about the students’ learning level (high school, college, professional) so you can better tailor your feedback.
4. Wait for a response.
5. Then…”
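The numbered-step scaffold in the Mollick & Mollick example follows a regular pattern (instruction, then an explicit wait), so you can generate it from a plain list of steps. A sketch, with illustrative mentor-role text:

```python
def scaffolded_prompt(role, steps):
    """Turn a list of instructions into numbered steps, inserting an
    explicit 'Wait for a response.' checkpoint after each one."""
    lines = [role]
    n = 1
    for step in steps:
        lines.append(f"{n}. {step}")
        n += 1
        lines.append(f"{n}. Wait for a response.")
        n += 1
    return "\n".join(lines)

mentor_prompt = scaffolded_prompt(
    "You are a friendly writing mentor.",
    ["Introduce yourself and ask about the student's goal.",
     "Ask about the student's learning level."],
)
```

The explicit "Wait for a response" steps are what keep the model from racing through the whole conversation in a single reply.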
Feedback is an integral part of an iterative approach that will give you better results because you are course-correcting along the way.
Tell it what it got right and wrong. Highlight what it did correctly and explain what it got wrong, any errors or bias you noticed, and offer suggestions about how to correct it.
Make corrections to style/format. In addition to feedback on content, you can give feedback on style such as making its answers more concise, academic, professional, informal, or "explain it to me like I'm a beginner or a 10-year-old." You can tell it to put the information in an outline, bullet items, or tables.
Ask it to justify its answers. Another effective way to provide feedback is to first ask it to justify what it has already generated and/or explain how it arrived at its answer. This will improve the effectiveness of your feedback. Furthermore, when you understand where it is coming from, you may be able to elicit output you hadn't thought of or make beneficial changes to better meet your needs. Similarly, you can instruct it to review its output, check for any relevant content it missed, and add that to the final output.
Quickly course-correct. If you can see it is going down the wrong path, stop the output and rephrase your prompt. And, if it hangs up, re-enter your last prompt and tell it to continue.
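Iterative feedback works because chat models are stateless: the full conversation history is resent with each request, so a feedback turn can refer back to earlier output. A sketch of that loop, with the structured right/wrong/fix feedback described above (all message text is illustrative):

```python
def add_turn(history, user_text, model_reply):
    """Record one exchange; resending the full history is what lets
    later feedback 'course-correct' earlier output."""
    history.append({"role": "user", "content": user_text})
    history.append({"role": "assistant", "content": model_reply})
    return history

def give_feedback(history, right, wrong, suggestion):
    """Append a structured feedback turn: what was right, what was
    wrong, and how to correct it."""
    feedback = (
        f"What you got right: {right}\n"
        f"What you got wrong: {wrong}\n"
        f"How to correct it: {suggestion}"
    )
    history.append({"role": "user", "content": feedback})
    return history

history = add_turn([], "Summarize the benefits of meditation.",
                   "(model's first draft)")
give_feedback(history,
              right="The structure and tone fit the audience.",
              wrong="The second claim cites no evidence.",
              suggestion="Qualify the claim or note that evidence is mixed.")
```

In a chat window you get this loop for free; the point is simply that each feedback message should name what was right, what was wrong, and what to change.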
We share below some prompts you can use to give feedback that improves its output. You can even ask GAI for suggestions on ways to improve your prompt for better results.
I don’t like that, suggest more novel ideas
I like the second point, suggest 10 ideas related to it and make them more unique
When AI gives poor responses:
That did not work. Here is my code, can you help me find the problem? Can you help me debug this code? Why did that not work?
No, that is incorrect. Can you suggest two alternative ways to generate the result?
When AI suggests that data will exist:
That data is too difficult to get, can you suggest good substitutes?
That is not real data, can you suggest more novel data or a data source where I can find the proper data?
When reviewing results, it's important to fact-check the work and look for any missing information or under- or over-representation of data.
Check whether the results meet your goals and expectations. Did it get an A+? If not, what can you modify in your prompt(s) to get better results?