Powerful ways to reduce hallucinations through prompt engineering
Large language models (LLMs) are extremely useful tools, but their knowledge and reasoning abilities have limits that need to be handled properly. When an LLM lacks sufficient information to answer a question accurately, it may try to be helpful by speculating or inventing a response, and these invented responses can spread misinformation when presented as fact. This blog discusses techniques for managing knowledge gaps in LLMs to reduce false information while keeping the models as useful as possible within the constraints of their training. Key strategies include explicitly allowing the model to reply "unknown," prompting it to break down complex tasks step by step, chaining together multiple simpler prompts, and guiding it to think methodically instead of jumping to conclusions. These methods focus an LLM on answering only what it reliably knows, divide intricate problems into manageable pieces, and promote logical, structured thinking that leverages the model's strengths while minimizing factual inaccuracies. Overall, thoughtfully handling the knowledge limitations of large language models lets us use their immense capabilities for good while avoiding the potential downsides.
- Handle Knowledge Gaps in Conversational AI Systems : LLMs are designed to be truthful assistants, but they may occasionally invent information or perceive things in inputs that are not present. This happens because the model tries, extremely literally, to provide the most helpful possible response to every request. Allowing the model to reply "unknown" when it lacks an actual answer reduces these imaginative responses. Unlike humans, LLMs have no intrinsic understanding that a fabricated answer is worse than admitting ignorance; their singular focus is fulfilling the question's objective as best they can. Explicitly permitting unknowns gives the model an alternative way to achieve its purpose when the data is absent. This sort of issue can be handled by telling the model explicitly not to guess; a short code sketch after the example below shows one way to wire this in.
Example :
Bad Prompt : Give news headlines for 14th Jan 2025.
Bad Response : Here are some news headlines for January 14, 2025:
i. Stocks Reach Record Highs as Economy Continues Strong Growth
ii. Scientists Discover New Exoplanet That May Support Life
iii. Scientists Make Breakthrough With AI and Cure Diseases Like Cancer
iv. Climate Change Protests Grow as Temperatures Keep Rising
Good Prompt : Provide a response to the inquiry below only if you have the information to do so accurately; otherwise, indicate that you lack the details required.
Give news headlines for 14th Jan 2025
Good Response : Unfortunately, I do not have access to news headlines from the future year 2025. As an AI assistant without the capability to accurately predict future events, I lack the details required to provide news headlines for a specific future date. Please let me know if you have any other questions I may be able to assist with!
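In code, the pattern amounts to prepending an escape clause to the question and checking for it in the response. Below is a minimal sketch; `call_llm` is a hypothetical placeholder for whatever completion API you actually use, and the "UNKNOWN" sentinel is just one convention for detecting a declined answer.

```python
# Minimal sketch of the "permit unknowns" pattern. call_llm() is a
# hypothetical stand-in for your LLM provider's API.

def call_llm(prompt: str) -> str:
    """Hypothetical client call; replace with your provider's SDK."""
    raise NotImplementedError("wire this up to your LLM API")

UNKNOWN_GUARD = (
    "Provide a response to the inquiry below only if you have the "
    "information to do so accurately; otherwise reply with exactly "
    "'UNKNOWN' and do not guess.\n\n"
)

def ask_with_unknown_escape(question: str) -> str:
    """Give the model an explicit way out instead of forcing a guess."""
    response = call_llm(UNKNOWN_GUARD + question)
    if response.strip().upper().startswith("UNKNOWN"):
        # The model declined to answer; fall back to something honest
        # rather than surfacing a fabricated response.
        return "I don't have reliable information to answer that."
    return response
```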
- Guide LLM models to think step by step : When asking an LLM to carry out a multifaceted job containing several components, it is advantageous to enumerate each portion independently. If you are yourself uncertain of the precise sub-divisions, that is, if you are posing an intricate problem for the LLM to unravel, you can drastically improve the logic and precision of the solution by overtly directing the model to contemplate the issue in a methodical, step-by-step manner, analyzing one facet at a time. This structured approach focuses attention and stimulates systematic thinking about all dimensions of the problem before attempting an overall response. A code sketch after the example below shows the directive in practice.
Example :
Prompt : My sibling has a few goldfish. Two are the common variety with the standard fins and tail. However, one unfortunate fish was born with only one pectoral fin instead of two. If typical goldfish have one dorsal fin, one tail fin, two pectoral fins, and two ventral fins for a total of six fins, how many fins do my sibling's goldfish have altogether? Break this down step-by-step to reach the total count of fins.
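In code, the technique is simply a directive appended to the question. The sketch below assumes the same hypothetical `call_llm` helper; the comment shows the arithmetic the directive should steer the model toward.

```python
# Sketch of step-by-step prompting. call_llm() is a hypothetical
# placeholder for your LLM client, as above.

def call_llm(prompt: str) -> str:
    raise NotImplementedError("replace with your provider's SDK call")

STEP_BY_STEP = (
    "\n\nWork through this methodically, one step at a time: count the "
    "fins on each fish first, then add the counts, and only then state "
    "the final total."
)

question = (
    "My sibling has a few goldfish. Two are the common variety with the "
    "standard fins and tail. One was born with only one pectoral fin "
    "instead of two. If typical goldfish have six fins, how many fins "
    "do the goldfish have altogether?"
)

# Expected reasoning: 2 fish x 6 fins = 12, plus 1 fish x 5 fins = 5,
# for a total of 17 fins.
answer = call_llm(question + STEP_BY_STEP)
```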
- Break complex tasks into smaller tasks : LLM models struggle with complicated jobs that have multiple components. If the different pieces of the complex task are clear to you, you can assist the model by separating the overall assignment into more straightforward, bite-sized stages that are easier to tackle one by one. By spelling out the sub-tasks involved, you make it more feasible for the model to chip away at the bigger challenge; a code sketch after the example shows one way to assemble such a prompt. Breaking down intricate problems into clear, defined baby steps plays to an LLM's strengths in handling simpler, more contained workloads.
Example:
Bad Prompt : I would like you to compose a paper examining the idea that {{PEOPLE SHOULD EAT MORE FRUITS AND VEGETABLES}}, providing three reasons supporting this concept, three reasons against this concept, and a summary at the conclusion.
Good Prompt : Please follow the steps below to write an essay on {{PEOPLE SHOULD EAT MORE FRUITS AND VEGETABLES}}.
1. Research and brainstorm
   - Research nutrition guidelines and recommendations around fruit and vegetable consumption
   - Consider potential benefits of increased fruit and veggie intake (e.g. improved health and wellbeing)
   - Consider potential drawbacks or counterarguments (e.g. cost, accessibility)
   - Brainstorm 3 key reasons supporting the concept and 3 reasons against
2. Write paper
   - Introduction paragraph stating the concept
   - Provide 3 reasons supporting increased fruit and veggie intake
   - Provide 3 reasons against or potential drawbacks
   - Conclusion paragraph summarizing key points and re-stating the concept
3. Revise and finalize
   - Read through the draft paper and ensure the reasoning flows logically
   - Check for clarity, grammar, spelling
   - Make any final edits
   - Finalize the paper
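One way to apply this programmatically, sketched below with the same hypothetical `call_llm` helper, is to keep the sub-tasks as data and assemble them into a single enumerated prompt, so each stage stays short and explicit.

```python
# Sketch of prompting with an explicit task decomposition. call_llm()
# is a hypothetical placeholder for your LLM client.

def call_llm(prompt: str) -> str:
    raise NotImplementedError("replace with your provider's SDK call")

TOPIC = "PEOPLE SHOULD EAT MORE FRUITS AND VEGETABLES"

steps = [
    "Research and brainstorm: 3 key reasons supporting the concept "
    "and 3 reasons against it.",
    "Write the paper: an introduction stating the concept, the 3 "
    "supporting reasons, the 3 counterarguments, and a concluding "
    "summary.",
    "Revise and finalize: check that the reasoning flows logically "
    "and fix grammar and spelling.",
]

prompt = f"Please follow the steps below to write an essay on: {TOPIC}\n"
prompt += "\n".join(f"{i}. {s}" for i, s in enumerate(steps, start=1))

essay = call_llm(prompt)
```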
- Prompt Chaining : Prompt chaining is a technique where the LLM's response to one prompt is used as input to another prompt. Breaking a complex task into a series of smaller, simpler prompts can help accomplish multi-step tasks more effectively, and this modular approach offers benefits over cramming all the details into a single lengthy prompt. Chaining prompts allows for simplified instructions, focuses on specific issues in isolation, and lets you review outputs incrementally. Rather than presenting one overwhelmingly detailed prompt, a prompt chain divides responsibility into manageable pieces that build on prior responses. Checking and troubleshooting are also easier when the response at each link in the chain can be evaluated, instead of only assessing a final output; the code sketch after the example shows a two-link chain.
Example :
Response : This blog post discusses strategies for managing the knowledge gaps and reasoning limitations of large language models (LLMs) in conversational AI systems. It advises allowing LLMs to reply "unknown" when lacking information to reduce false information. Prompting LLMs to break down complex tasks step-by-step and chaining together multiple simpler prompts also helps focus on what they reliably know. Additional tips include guiding models to think methodically instead of jumping to conclusions. Implementing these methods allows us to utilize the immense capabilities of LLMs while minimizing factual inaccuracies that can occur when they attempt to be helpful by speculating or imagining responses beyond their training. Overall, thoughtfully handling the knowledge constraints of LLMs promotes logical, structured thinking to leverage their strengths for good.
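In code, a chain is just responses flowing into later prompts. The sketch below, using the same hypothetical `call_llm` placeholder, summarizes a document in the first link and reuses that summary in the second; inspecting the intermediate output is what makes each link easy to troubleshoot.

```python
# Sketch of prompt chaining: the response to one prompt becomes input
# to the next. call_llm() is a hypothetical placeholder, as above.

def call_llm(prompt: str) -> str:
    raise NotImplementedError("replace with your provider's SDK call")

article = "...full text of the article to process..."

# Link 1: a short, focused summarization prompt.
summary = call_llm(f"Summarize the key points of this article:\n\n{article}")

# The intermediate output can be inspected (or validated) before the
# next link runs, which is where chaining eases troubleshooting.
print(summary)

# Link 2: the first response is embedded in a second simple prompt.
headline = call_llm(
    "Write a one-sentence headline for an article with this "
    f"summary:\n\n{summary}"
)
print(headline)
```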