How to ask AI questions to get the answers we want: creating effective prompts

If we ask ChatGPT, or any AI large language model (LLM), to complete the sentence “The best food is…”, it will give us a different answer every time we ask. LLMs in fact have a random component, and we will never get exactly the same answer to the same question. To get what we want from an LLM, we need to learn to write the request well, that is, to write a good prompt. To start, it is enough to be clear, direct and detailed in our request. We can also show some example answers, or show how we would like it to “reason”, to guide its response, and we can ask it to cite documents directly to prevent it from inventing information.
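That randomness comes from how LLMs pick the next word: each candidate word gets a probability, and a “temperature” setting controls how often less likely words are chosen. A minimal sketch of the idea, using an invented toy vocabulary and made-up probabilities rather than a real model:

```python
import math
import random

def sample_next_word(word_probs, temperature=1.0, rng=random):
    """Sample one word from a toy next-word distribution.

    word_probs: dict mapping candidate words to raw probabilities.
    temperature: > 0; higher values flatten the distribution, so
    unlikely words get picked more often (more "random" answers).
    """
    words = list(word_probs)
    # Rescale log-probabilities by temperature, then renormalize (softmax).
    logits = [math.log(word_probs[w]) / temperature for w in words]
    m = max(logits)
    weights = [math.exp(l - m) for l in logits]
    return rng.choices(words, weights=weights, k=1)[0]

# Toy distribution for completing "The best food is...":
probs = {"pizza": 0.5, "sushi": 0.3, "pasta": 0.15, "broccoli": 0.05}

# At temperature 1.0, repeated calls give different completions;
# at a temperature close to 0, the most likely word dominates.
completions = [sample_next_word(probs, temperature=1.0) for _ in range(20)]
```

This is only an illustration of why two identical questions can get two different answers; real models sample from distributions over tens of thousands of tokens.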

HOW TO ASK QUESTIONS TO THE AI
  • 1. Test the AI’s response on something we know
  • 2. Ask clear, direct and detailed questions
  • 3. Provide examples of responses and reasoning: Few-Shot Prompting and Chain-of-Thought
  • 4. How to limit hallucinations: allow uncertainty and make several attempts

Test the AI’s response on something we know

First of all, if we are new to generative AI tools such as ChatGPT, it is important to start using them in areas in which we are competent. That way, we can see what the AI does and does not know how to do. Its capabilities may surprise us, and not always positively. Its skills follow what is technically called a “jagged frontier”: a model may be able to do something very difficult for us (outside our frontier), yet be incompetent at tasks that are very easy for us (inside our frontier). Moreover, if we test its skills in fields where we are competent, we can easily recognize “hallucinations”, that is, statements that sound true but are not true at all.

Ask clear, direct and detailed questions

When we interact with AI, let’s think of it as a new colleague: extremely brilliant, infinitely patient, but with severe memory problems. Since it is new on the job, it will need clear, direct and detailed instructions in order to give the best result. For example, it is useful to explain who is writing, what the result will be used for, and in what format the answer should be given. Asking “Prepare a test on Manzoni” will give a mediocre result, while asking “Prepare a test on The Betrothed for a secondary school senior. The test must be multiple-choice and contain ten questions, all on the fourth chapter” will produce something much closer to the result we want.
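The “who, for what, and in what format” checklist can be turned into a small helper that assembles the prompt from its parts. A minimal sketch, where the field names and wording are our own convention, not a standard:

```python
def build_prompt(task, audience=None, purpose=None, output_format=None):
    """Assemble a clear, detailed prompt from its parts.

    Only `task` is required; the optional fields add the context
    (who it is for, why, and in what shape) that improves answers.
    """
    parts = [task.strip()]
    if audience:
        parts.append(f"Audience: {audience}.")
    if purpose:
        parts.append(f"It will be used for: {purpose}.")
    if output_format:
        parts.append(f"Answer format: {output_format}.")
    return "\n".join(parts)

# The vague vs. detailed requests from the example above:
vague = build_prompt("Prepare a test on Manzoni")
detailed = build_prompt(
    "Prepare a test on The Betrothed, all questions on the fourth chapter",
    audience="a secondary school senior",
    purpose="an in-class assessment",
    output_format="ten multiple-choice questions",
)
```

The point is not the code itself but the habit: forcing ourselves to fill in audience, purpose and format before sending the request.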

However, it is not a given that we will get what we want on the first try, and this is where the “infinitely patient” nature of AI comes in handy. We can rewrite the prompt again and again until we get exactly what we need; the AI will never tire of answering us. Remember, though, that it has “severe memory problems”: if we change chat, it will forget everything we have told it and will start again from scratch. This can even be an advantage when we are at a dead end and not managing to get the answer we want: we take the prompt that gave the best result and start over in a new chat.

Provide examples of responses and reasoning: Few-Shot Prompting and Chain-of-Thought

If we want more targeted responses, we can teach the AI how we want it to respond by placing some example answers in the prompt. This method is called few-shot prompting and lets the AI pick up the pattern to follow and be more accurate. For example, if we need to turn some notes into a social media post, we can show 2-3 finished examples and ask it to apply the same style to the new notes.
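Few-shot prompting amounts to putting solved input-to-output pairs before the new input. A minimal sketch of how such a prompt can be assembled; the notes and posts are invented placeholders:

```python
def few_shot_prompt(instruction, examples, new_input):
    """Build a few-shot prompt: instruction, solved examples, new case.

    examples: list of (notes, post) pairs the model should imitate.
    """
    lines = [instruction, ""]
    for notes, post in examples:
        lines.append(f"Notes: {notes}")
        lines.append(f"Post: {post}")
        lines.append("")
    lines.append(f"Notes: {new_input}")
    lines.append("Post:")  # left open for the model to complete
    return "\n".join(lines)

examples = [
    ("launch on friday, 20% discount",
     "Big news: we launch on Friday, with 20% off!"),
    ("new office, hiring designers",
     "We moved! And we're hiring designers."),
]
prompt = few_shot_prompt(
    "Turn the notes into a short social post, in the style of the examples.",
    examples,
    "workshop next week, free signup",
)
```

Ending the prompt with an open “Post:” invites the model to continue the pattern it has just seen.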


If we are tackling complex tasks, we can guide the AI by asking it directly to break down the problem, by explaining the reasoning strategy to follow, or by showing solved examples. All these techniques fall under Chain-of-Thought (CoT) prompting, which is extremely useful for difficult problems. Versions of ChatGPT from “o1” onwards have become so good at solving difficult tasks because, among other things, they integrate CoT into the model itself. As AI models improve, the most advanced prompting techniques will be incorporated into the models’ internal structures and will gradually become less useful for us users.
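In its simplest “zero-shot” form, chain-of-thought prompting just appends an instruction to reason step by step before answering. A minimal sketch; the wording of the suffix is one common choice, not a fixed formula:

```python
def chain_of_thought(question):
    """Wrap a question in a simple zero-shot chain-of-thought prompt."""
    return (
        f"{question}\n"
        "Let's think step by step, then state the final answer "
        "on its own line as 'Answer: ...'."
    )

prompt = chain_of_thought(
    "A train leaves at 9:40 and the trip takes 2 hours 35 minutes. "
    "When does it arrive?"
)
```

Asking for the final answer on its own line also makes the model’s conclusion easy to extract from the surrounding reasoning.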

How to limit hallucinations: Allow uncertainty and make several attempts

To reduce the chance of hallucinations, Anthropic has published a guide with some possible strategies:

  • Allow it to say “I don’t know”: AI models are built to always give an answer, even when they lack the necessary information. Explicitly writing in the prompt that the model may answer “I don’t know” when uncertain reduces the generation of false information.
  • Ask it to cite the document: if we are analyzing a very long document, it helps to ask the model first to extract word-for-word quotes from the text, and then to use those quotes to answer. This way, the model will deviate little from what is actually written in the document.
  • Ask it to show its reasoning: asking the model to explain the process by which it arrived at an answer, simply by saying “Explain the reasoning step by step”, lets us spot possible logical errors or incorrect assumptions.
  • Compare multiple answers: by giving it the same prompt several times, we can compare the answers: if they are very different from each other, we can assume the model is inventing the answer and that it is therefore a hallucination.
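The last strategy, comparing several answers to the same prompt, can even be automated: collect the answers and measure how much they agree. A minimal sketch using exact-match voting on invented answers; real responses would need a fuzzier comparison than lowercased string equality:

```python
from collections import Counter

def agreement(answers):
    """Fraction of answers that match the most common one.

    Close to 1.0 means the model answers consistently; a low value
    is a warning sign that it may be hallucinating.
    """
    counts = Counter(a.strip().lower() for a in answers)
    return counts.most_common(1)[0][1] / len(answers)

# Answers collected by sending the same prompt several times:
consistent = ["Paris", "paris", "Paris", "Paris"]
suspicious = ["1815", "1821", "1830", "1815"]

agreement(consistent)  # 1.0 -> likely reliable
agreement(suspicious)  # 0.5 -> treat with suspicion
```

A high agreement score does not prove the answer is correct, only that the model is not guessing at random; it is one signal among the others listed above.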

In any case, remember that these techniques, although they significantly reduce hallucinations, do not eliminate them completely.