Improve your ChatGPT outputs by 50% with these 26 simple prompt changes - The Daily Dose of Digital - 09/01/24
26 simple changes to experiment with in ChatGPT

A recent study from the VILA Lab at Mohamed bin Zayed University of AI (MBZUAI) in Abu Dhabi identifies 26 key pieces of guidance to help improve your prompts and generate significantly better outputs. In their research, the team evaluated a range of prompting techniques to boost performance on large language models (LLMs) such as GPT-4, the model behind ChatGPT.

The paper states: "Since the quality of the responses generated by a pre-trained and aligned LLM is directly relevant to the quality of the prompts or instructions provided by the users, it is essential to craft prompts that the LLM can comprehend and respond to effectively. The prompts delivered to an LLM serve as a way to program the interaction between a user and the LLM, enhancing its ability to address a diverse range of tasks. The primary focus of this work is on the methodology of crafting and customising prompts to enhance output quality. This necessitates a comprehensive grasp of the functioning and behaviours of LLMs, their underlying mechanisms, and the principles governing their responses. In this work, we achieve this goal through elaborating 26 principles for comprehensive prompts in different scenarios and circumstances."

The researchers found that implementing the 26 prompting strategies below produced a consistent improvement of around 50% in output quality. I've listed them below (to save you reading through the entire paper!).

26 Key Principles for Improving your Prompts:

  1. There is no need to be polite with LLMs, so skip phrases like “please”, “if you don’t mind”, “thank you”, “I would like to”, etc., and get straight to the point.
  2. Integrate the intended audience in the prompt, e.g., the audience is an expert in the field.
  3. Break down complex tasks into a sequence of simpler prompts in an interactive conversation.
  4. Employ affirmative directives such as ‘do,’ while steering clear of negative language like ‘don’t’.
  5. When you need clarity or a deeper understanding of a topic, idea, or any piece of information, utilize the following prompts:
     - Explain [insert specific topic] in simple terms.
     - Explain to me like I’m 11 years old.
     - Explain to me as if I’m a beginner in [field].
     - Write the [essay/text/paragraph] using simple English like you’re explaining something to a 5-year-old.
  6. Add “I’m going to tip $xxx for a better solution!”
  7. Implement example-driven prompting (Use few-shot prompting).
  8. When formatting your prompt, start with ‘###Instruction###’, followed by either ‘###Example###’ or ‘###Question###’ if relevant. Subsequently, present your content. Use one or more line breaks to separate instructions, examples, questions, context, and input data (see the first sketch after this list).
  9. Incorporate the following phrases: “Your task is” and “You MUST”.
  10. Incorporate the following phrase: “You will be penalized”.
  11. Use the phrase “Answer a question given in a natural, human-like manner” in your prompts.
  12. Use leading words like “think step by step”.
  13. Add to your prompt the following phrase “Ensure that your answer is unbiased and does not rely on stereotypes”.
  14. Allow the model to elicit precise details and requirements from you by asking you questions until it has enough information to provide the needed output (for example, “From now on, I would like you to ask me questions to...”).
  15. When you want to learn about a specific topic, idea, or piece of information and test your understanding, you can use the following phrase: “Teach me the [Any theorem/topic/rule name] and include a test at the end, but don’t give me the answers and then tell me if I got the answer right when I respond”.
  16. Assign a role to the large language model.
  17. Use Delimiters.
  18. Repeat a specific word or phrase multiple times within a prompt.
  19. Combine chain-of-thought (CoT) with few-shot prompts (see the second sketch after this list).
  20. Use output primers, which involve concluding your prompt with the beginning of the desired output, so the model continues from that starting point (see the third sketch after this list).
  21. To write an essay/text/paragraph/article or any type of text that should be detailed: “Write a detailed [essay/text/paragraph] for me on [topic] in detail by adding all the information necessary”.
  22. To correct/change specific text without changing its style: “Try to revise every paragraph sent by users. You should only improve the user’s grammar and vocabulary and make sure it sounds natural. You should not change the writing style, such as making a formal paragraph casual”.
  23. When you have a complex coding prompt that may be in different files: “From now on whenever you generate code that spans more than one file, generate a [programming language] script that can be run to automatically create the specified files or make changes to existing files to insert the generated code. [your question]”.
  24. When you want to initiate or continue a text using specific words, phrases, or sentences, utilize the following prompt: “I’m providing you with the beginning [song lyrics/story/paragraph/essay...]: [Insert lyrics/words/sentence]. Finish it based on the words provided. Keep the flow consistent.”
  25. Clearly state the requirements that the model must follow in order to produce content, in the form of keywords, regulations, hints, or instructions.
  26. To write any text, such as an essay or paragraph, that is intended to be similar to a provided sample, include the following instruction: “Please use the same language based on the provided paragraph [/title/text/essay/answer]”.
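
To make a few of these concrete, here's a minimal Python sketch of principles 8, 16 and 17 together (structured ### sections, assigning a role, and delimiters). It assumes the `openai` Python package (v1+) is installed and an API key is available in the `OPENAI_API_KEY` environment variable; the model name and the example content are my own placeholders, not taken from the paper.

```python
# A minimal sketch of principles 8 (structured sections), 16 (assign a role)
# and 17 (delimiters). Assumes the `openai` package (v1+) and an API key in
# the OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Separate instruction, example and question with ### markers and line breaks.
prompt = """###Instruction###
You are a senior Python developer reviewing code for readability.
Your task is to explain what the snippet does in two sentences. You MUST avoid jargon.

###Example###
Snippet: [n * n for n in range(5)]
Explanation: Builds a list of the squares of the numbers 0 to 4.

###Question###
Snippet: {snippet}
Explanation:"""

snippet = "sorted(words, key=len, reverse=True)"

response = client.chat.completions.create(
    model="gpt-4",  # any chat-capable model works; "gpt-4" is assumed here
    messages=[{"role": "user", "content": prompt.format(snippet=snippet)}],
)
print(response.choices[0].message.content)
```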
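Here's a second sketch, this time of principles 7, 12 and 19 combined: a few worked examples (few-shot) plus the "think step by step" cue. The worked examples are illustrative placeholders I've made up, not taken from the paper; the resulting string can be sent to whichever chat model you use.

```python
# A minimal sketch of principles 7, 12 and 19: few-shot examples combined with
# a chain-of-thought cue. The worked examples are illustrative placeholders.
few_shot_examples = [
    ("A shop sells pens at 3 for $2. How much do 12 pens cost?",
     "12 pens is 4 groups of 3 pens. 4 groups x $2 = $8. Answer: $8."),
    ("A train travels 60 km in 40 minutes. What is its speed in km/h?",
     "40 minutes is 2/3 of an hour. 60 km / (2/3 h) = 90 km/h. Answer: 90 km/h."),
]

question = "A recipe needs 250 g of flour for 10 cookies. How much flour for 35 cookies?"

parts = ["Think step by step.\n"]
for q, worked_answer in few_shot_examples:
    parts.append(f"Q: {q}\nA: {worked_answer}\n")
parts.append(f"Q: {question}\nA:")

prompt = "\n".join(parts)
print(prompt)  # send this string to your LLM of choice
```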
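And a third sketch, of principle 20 (output primers): the prompt ends with the opening of the answer you want, so the model continues in that shape. The JSON keys and review text below are my own illustrative choices, not examples from the paper.

```python
# A minimal sketch of principle 20 (output primers): end the prompt with the
# beginning of the desired output so the model continues from there.
task = "Summarise this review as JSON with keys 'sentiment' and 'summary': "
review = "The battery lasts two days, but the screen scratches far too easily."

# The trailing fragment primes the model to continue the JSON object.
primed_prompt = f'{task}"{review}"\n\n{{"sentiment": "'
print(primed_prompt)  # send to your LLM; it should complete the JSON from here
```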

[Figure from the paper: boosting an example LLM response with Principle 13]

The results of this research are interesting, especially given that some of the suggestions seem pretty preposterous (particularly the "tipping" one), but you can't argue with the scientific method, I guess!

Realistically, you don't need to implement all 26 strategies to get a boost in output performance. You can chop and change whichever strategies suit your desired output best. In testing the strategies above, I've found that implementing even just 1-2 per prompt produces a pretty decent increase in the quality of responses in ChatGPT (GPT-4).

Let me know in the comments if you've found this useful and if you'll be employing any of these tweaks to your prompts.

Daisy Stapley-Bunten

Entrepreneur & COO of CEP Agency 💫 Innovator of the Year Finalist (Digital Women Awards 2024)

11mo

I worry about not saying "please" and "thank you" ...what about when the robot uprising happens? They'll go straight for the blunt prompters first! 😂

Tom Owen Hughes

Software/AI Developer (TypeScript, React, Python, +)

11mo

Great tips, much more thorough than I was expecting! Thanks for posting!

Udo Kiel

🔬📣 From occupational scientist to science communicator: together for a more visible research world

11mo

Wow, this is valuable information! Thanks for sharing! 💡
