Careful prompt design can significantly reduce hallucinations by shaping the model's behavior. Here I introduce five prompting techniques that help mitigate hallucinations: they guide large language models to audit their own answers through structured steps, and they direct the models to designated, reliable data sources.
Chain of Verification (CoVE) Prompting
Chain of Verification (CoVE) prompting is a technique designed to reduce mistakes and hallucinations in large language models (LLMs) by building a self-checking process into the prompt.
Instead of just asking the model to answer a question directly, CoVE prompting adds extra steps:
Answer the question first (just like normal).
Generate follow-up verification questions based on the original answer — these questions are designed to check if the answer is correct or if there are gaps.
Answer those verification questions separately.
Review all answers together to either confirm or revise the original answer.
Normal Prompt:
What is the correct journal entry when a company receives cash in advance for services it has not yet performed?
CoVE Prompt:
Step 1 – First, answer directly:
What is the correct journal entry when a company receives cash in advance for services it has not yet performed?
Step 2 – Generate verification questions:
Based on the answer above, now create 2–3 verification questions to double-check whether the journal entry is correct.
For example:
Should unearned revenue be classified as an asset, a liability, or revenue?
At the point of receiving cash, has the revenue been earned or not according to accrual accounting?
What does the matching principle say about when revenue should be recognized?
Step 3 – Answer the verification questions:
Answer each verification question carefully based on accounting principles.
Step 4 – Review and Finalize:
Based on your verification answers, confirm if the original journal entry is correct.
If needed, revise your original answer.
It forces the model to double-check itself systematically.
It catches mistakes that might happen if the model just rushed to answer.
It encourages a "pause and verify" mindset, similar to how a human might double-check an important answer.
Chain of Verification (CoVE) prompting makes the model slow down, ask itself follow-up questions, and rethink its original answer before finalizing it — leading to more accurate and reliable responses.
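To make the four-step loop concrete, here is a minimal Python sketch. It assumes the OpenAI chat-completions client as the backend; the ask_llm helper and the exact prompt wordings are illustrative choices, not part of CoVE itself, and any LLM API could be swapped in.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def ask_llm(prompt: str) -> str:
    # Hypothetical helper: one prompt in, one text reply out.
    response = client.chat.completions.create(
        model="gpt-4o",  # assumption: any capable chat model works here
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

def cove(question: str) -> str:
    # Step 1 - answer directly, as in a normal prompt.
    draft = ask_llm(question)

    # Step 2 - generate verification questions about the draft.
    checks = ask_llm(
        f"Question: {question}\nDraft answer: {draft}\n"
        "Write 2-3 verification questions that would expose any error "
        "or gap in the draft answer. Output one question per line."
    )

    # Step 3 - answer each verification question in a fresh call,
    # without the draft in context, so the check is unbiased.
    findings = "\n".join(
        f"Q: {q}\nA: {ask_llm(q)}"
        for q in checks.splitlines() if q.strip()
    )

    # Step 4 - review everything and confirm or revise the draft.
    return ask_llm(
        f"Question: {question}\nDraft answer: {draft}\n"
        f"Verification Q&A:\n{findings}\n"
        "Based on the verification answers, confirm the draft answer "
        "or output a corrected final answer."
    )

print(cove("What is the correct journal entry when a company receives "
           "cash in advance for services it has not yet performed?"))
```

Answering the verification questions in fresh calls, without the draft in context, keeps the model from simply rubber-stamping its first answer.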
Step-Back Prompting
Step-Back prompting is a technique designed to improve the accuracy and depth of an LLM's answer by first asking the model to think about the bigger picture before it dives into the specific question.
Instead of answering immediately, Step-Back prompting makes the model pause and reflect at a higher level:
First, identify and describe the broader concept or category related to the question.
Then, based on that broader view, answer the specific question.
Normal Prompt:
What is goodwill in accounting?
Answer:
In accounting, goodwill is an intangible asset that arises when a company acquires another business for a price higher than the fair value of the identifiable net assets (assets minus liabilities) of the acquired company...
Step-Back Prompt:
"
Step Back First:
In accounting, what are the main types of assets (and how are they classified)?
Among intangible assets, what types are separately identifiable versus those that are not?
What happens when a company acquires another company and pays more than the fair value of identifiable net assets?
Based on this context, what is goodwill in accounting?
"
It frames the answer within a larger logical structure.
It helps avoid shallow or misdirected answers by grounding the model in the correct context first.
It naturally leads to better explanations — richer and more trustworthy.
Step-Back Prompting makes the model think about the bigger picture first (what kind of thing we are talking about) and then narrow down to give a specific, correct, and contextually anchored answer.
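The same two-stage flow can be scripted directly: one call surfaces the broader principles, and a second call answers the specific question with those principles in context. This is a rough sketch under the same assumptions as above (OpenAI client, hypothetical ask_llm helper, illustrative prompt wording):

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def ask_llm(prompt: str) -> str:
    # Hypothetical helper around any chat-completion API.
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

def step_back(question: str) -> str:
    # Stage 1 - step back to the broader concept behind the question.
    principles = ask_llm(
        "Before answering, step back: what broader concept or category "
        "does this question belong to, and what general principles "
        f"govern it?\nQuestion: {question}"
    )
    # Stage 2 - answer the specific question grounded in that context.
    return ask_llm(
        f"Background principles:\n{principles}\n\n"
        f"Using this background, answer the specific question:\n{question}"
    )

print(step_back("What is goodwill in accounting?"))
```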
Retrieval-Augmented Generation (RAG) Prompting
Retrieval-Augmented Generation (RAG) prompting is a technique that combines two steps:
First, it retrieves relevant information from a trusted source, such as a textbook, accounting standard, or database.
Then, it generates an answer based on both the retrieved information and the model’s internal knowledge.
This approach helps reduce hallucinations by grounding the response in verified facts rather than relying solely on the model’s internal assumptions.
Retrieval can draw from both internal data sources and external references.
Normal Prompt:
"How do you account for the impairment of goodwill under U.S. GAAP?"
RAG Prompt:
"How do you account for the impairment of goodwill under U.S. GAAP? Refer directly to the FASB Accounting Standards Codification (ASC)."
For extra clarity about the data source, you can also append a pointer such as: "Focus on ASC 350, Intangibles—Goodwill and Other, when retrieving goodwill impairment guidance."
It anchors the model’s answer to external, verified information (like official accounting standards).
It reduces the risk of the model "making up" wrong accounting rules.
It improves precision, especially when rules are technical or have changed over time.
RAG prompting makes the model look up real information first and then write the answer — combining retrieval and generation to produce more accurate, grounded, and trustworthy responses.
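Below is a minimal sketch of the retrieve-then-generate pattern. The passage store and the keyword scoring are deliberately toy stand-ins: a real system would query an indexed copy of the Codification, typically via embeddings or a search engine, and the excerpt texts here are paraphrased placeholders rather than authoritative quotations.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Toy stand-in for a real document store (e.g., indexed excerpts of
# the FASB Codification); the text below is paraphrased placeholder
# content, not an authoritative quotation.
PASSAGES = [
    "ASC 350: Goodwill is tested for impairment at the reporting-unit "
    "level at least annually, or when a triggering event occurs.",
    "ASC 350: An impairment loss is recognized for the amount by which "
    "a reporting unit's carrying amount exceeds its fair value, limited "
    "to the goodwill allocated to that unit.",
    "ASC 842 covers lease accounting and is unrelated to goodwill.",
]

def retrieve(query: str, k: int = 2) -> list[str]:
    # Naive keyword-overlap scoring; a real system would use embeddings
    # or a search index instead.
    terms = set(query.lower().split())
    ranked = sorted(PASSAGES,
                    key=lambda p: len(terms & set(p.lower().split())),
                    reverse=True)
    return ranked[:k]

def rag_answer(question: str) -> str:
    context = "\n".join(retrieve(question))
    prompt = (
        "Answer using ONLY the retrieved guidance below; if it does not "
        "cover the question, say so instead of guessing.\n\n"
        f"Retrieved guidance:\n{context}\n\nQuestion: {question}"
    )
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

print(rag_answer("How do you account for the impairment of goodwill "
                 "under U.S. GAAP?"))
```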
ReAct Prompting
ReAct prompting stands for Reasoning + Acting. It is a technique where the model is encouraged to reason step-by-step and to take actions (like retrieving information, making calculations, or asking sub-questions) before producing a final answer.
In ReAct, the model alternates between:
Thinking ("what should I do next?")
Acting (retrieving data, calculating, double-checking)
Observing the results
Continuing reasoning until it is confident about the final answer.
Normal Prompt:
"A company purchased equipment for $100,000 with a 10-year useful life and no salvage value. What is the annual depreciation expense under the straight-line method?"
ReAct Prompt:
"
You are solving an accounting problem.
First, think aloud step-by-step: What information is needed? What formula or method applies?
Then, take the necessary action: Do any calculations, look up any necessary accounting rules, or define any terms.
After each action, observe the result: Summarize what you learned or calculated.
Repeat thinking → acting → observing as many times as needed until you can confidently provide the final answer.
A company purchased equipment for $100,000 with a 10-year useful life and no salvage value. What is the annual depreciation expense under the straight-line method?
"
Step-by-Step ReAct Process:
Step 1 – Reason:
To calculate straight-line depreciation, I need to divide the depreciable cost by the useful life. Depreciable cost = purchase price - salvage value.
Step 2 – Act (do the calculation):
Depreciable cost = $100,000 - $0 = $100,000. Useful life = 10 years. Annual depreciation expense = $100,000 ÷ 10 = $10,000.
Step 3 – Observe:
The calculation shows $10,000 per year.
Step 4 – Final Answer:
The annual depreciation expense under the straight-line method is $10,000.
It forces the model to slow down and reason explicitly rather than jumping to an answer.
It makes the thought process transparent, so you can spot mistakes earlier.
It helps with complex, multi-step accounting problems, like reconciliations, journal entries, or tax calculations.
ReAct prompting makes the model think, act, and check in a loop, making the final answer more accurate, logical, and well-supported — which is critical in accounting.
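Here is a compact sketch of that think-act-observe loop, again assuming the OpenAI client. The Thought/Action/Observation format, the single calculate tool, and the parsing regex are all illustrative choices; a production agent would use a safe expression evaluator and a richer tool set.

```python
import re
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

SYSTEM = (
    "Solve the problem step by step. At each turn output either\n"
    "Thought: <reasoning> followed by Action: calculate[<expression>]\n"
    "or Final Answer: <answer>.\n"
    "After each Action you will receive Observation: <result>."
)

def react(question: str, max_steps: int = 5) -> str:
    messages = [{"role": "system", "content": SYSTEM},
                {"role": "user", "content": question}]
    for _ in range(max_steps):
        reply = client.chat.completions.create(
            model="gpt-4o", messages=messages,
        ).choices[0].message.content
        messages.append({"role": "assistant", "content": reply})
        if "Final Answer:" in reply:
            return reply.split("Final Answer:", 1)[1].strip()
        action = re.search(r"Action: calculate\[(.+?)\]", reply)
        if action:
            # Act: run the arithmetic outside the model so the
            # observation fed back is exact, not hallucinated.
            # (eval is for demo only; never eval untrusted input.)
            try:
                result = str(eval(action.group(1), {"__builtins__": {}}))
            except Exception as exc:
                result = f"error: {exc}"
            messages.append({"role": "user",
                             "content": f"Observation: {result}"})
    return "No final answer within the step limit."

print(react("A company purchased equipment for $100,000 with a 10-year "
            "useful life and no salvage value. What is the annual "
            "depreciation expense under the straight-line method?"))
```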
Decompose Prompting (DecoPrompting)
DecoPrompting is short for Decompose Prompting. It is a technique where, instead of tackling a complex question all at once, you break the problem into smaller, manageable parts (sub-questions) and then solve each part step-by-step to build the final answer.
Key Idea: Instead of answering a big question directly, decompose it into smaller pieces → solve each piece carefully → combine the pieces for the final solution.
Normal Prompt:
"Prepare the journal entry for the issuance of $500,000 bonds at 98 (98% of face value) with interest payable annually."
DecoPrompt:
"
Prepare the journal entry for the issuance of $500,000 bonds at 98 (98% of face value) with interest payable annually. Work through the following steps:
Calculate the cash the company receives from issuing the bonds.
Calculate the discount on bonds payable.
Identify the accounts involved (specify which ones are debited and credited).
Prepare the complete journal entry based on the above steps.
"
It breaks complex tasks into easier, smaller steps, making it less likely that something important is missed.
It improves clarity: every step is deliberate and easy to verify.
It works especially well for structured problems, like journal entries, reconciliations, tax calculations, and preparing financial statements.
DecoPrompting means decomposing a big accounting question into small sub-questions, solving them carefully, and then combining the answers, resulting in better accuracy, structure, and confidence.
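As a closing sketch, the decomposition can be driven programmatically: each sub-question is answered with the earlier answers in context, and a final call assembles the journal entry. The sub-question list is written by hand here, and ask_llm is the same hypothetical helper pattern used in the earlier sketches.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def ask_llm(prompt: str) -> str:
    # Hypothetical helper around any chat-completion API.
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

PROBLEM = ("Prepare the journal entry for the issuance of $500,000 "
           "bonds at 98 (98% of face value) with interest payable annually.")

# The decomposition is written by hand here; it could also be produced
# by a first "break this problem into steps" call to the model.
SUB_QUESTIONS = [
    "Calculate the cash the company receives from issuing the bonds.",
    "Calculate the discount on bonds payable.",
    "Identify the accounts involved and whether each is debited or credited.",
]

def decoprompt(problem: str, subs: list[str]) -> str:
    notes = ""
    for sub in subs:
        # Solve each sub-question with the earlier answers in context.
        answer = ask_llm(f"Problem: {problem}\n"
                         f"Work so far:\n{notes or '(none)'}\n"
                         f"Sub-question: {sub}")
        notes += f"- {sub}\n  {answer}\n"
    # Combine the solved pieces into the final journal entry.
    return ask_llm(f"Problem: {problem}\nWork so far:\n{notes}\n"
                   "Using the work above, write the complete journal entry.")

print(decoprompt(PROBLEM, SUB_QUESTIONS))
```

For these numbers the sub-answers are easy to verify by hand: cash received is $500,000 × 0.98 = $490,000, and the discount on bonds payable is $10,000.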