“Deep-dive” prompting technique to improve the quality of LLM responses

Mangesh Pise
5 min read · Oct 27, 2023

For folks who have tried various prompting techniques, such as zero-shot, few-shot, and chain-of-thought (CoT), I intend to share yet another prompting technique I have been working with. For lack of a fancier name, and to call it what it is, I will refer to it here as the “deep-dive prompting technique”.

But before I go into the details of the deep-dive prompting technique, I think we should step back a little to understand that, at its core, a Large Language Model (LLM) relies on clear expectations of what its human user wants from it (a.k.a. instructions). I find this no different from how we have programmed software for a long time; only this time the language is in its natural form, and the system we are working with can handle complex, multi-modal tasks.

The Concept

It happens to all of us! Yes, we are sometimes stumped by a question. What do we do then? We ask for the question to be asked differently. “Please, come again!”, “Could you say that again?”, and “What do you mean?” are some of the ways we gather additional context behind the question.

Essentially, this back-and-forth to develop a better question is the crux of the “deep-dive” prompting technique. Do it at least three times, and you should be able to form enough context to provide a much better response.

Prompting with deep-dive

While classical zero-shot prompts may fall short of eliciting enough detail, chain-of-thought (CoT) prompting can elevate response quality by instilling reasoning. But it doesn’t have to be one or the other: by simply adding a statement like “Let’s think step by step”, you nudge the LLM to take a zero-shot-CoT route.
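For example (paraphrasing the canonical illustration from Kojima et al.), the zero-shot-CoT variant simply appends that cue to an otherwise plain question:

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Q: A juggler can juggle 16 balls. Half of the balls are golf balls,
and half of the golf balls are blue. How many blue golf balls are
there?

A: Let's think step by step.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~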

Image: example prompts for (a) few-shot, (b) few-shot-CoT, (c) zero-shot, and (d) zero-shot-CoT. Source: Kojima et al. (2022)

The deep-dive technique is similar to the above ideas, but without the overhead of providing an example for the LLM to learn how to approach the actual question (as shown in (b) above), and without taking the longer route of thinking through various steps to arrive at a response (as shown in (d) above).

The main idea behind the deep-dive prompting technique is to instruct the LLM to introspect on the user’s original question and predict follow-up questions the user may want to ask. Let’s look at an example for better articulation.

Example: Zero-Shot Prompt
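For illustration, assume the user asked the simple factual question used throughout this article; a zero-shot exchange might look like this (paraphrased, not verbatim):

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
User: How far is the Moon from Earth?

LLM: The Moon is about 384,400 km (238,855 miles) from Earth,
on average.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~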

As you can see from the above example, the LLM provided a pretty straightforward response. Just to be clear, I don’t see anything wrong with this response, but could it have been better? I believe so. “And how so?” Did you just deep-dive? (Pun intended!)

Let’s see what a deep-dive prompt looks like and what it would do for this question.

Example: Prompt with Deep-Dive technique
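A sketch of such a prompt, with the exact wording assumed for illustration, might read:

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
You are a helpful assistant responsible for providing complete and
meaningful responses to the user's questions.

Before answering, think of at least 3 deep-dive questions the user
may want to ask as follow-ups, then formulate one holistic response
that covers them.

Respond in well-structured markdown, and keep the response under
200 words.

Question: How far is the Moon from Earth?
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~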

Before we look at the LLM’s response to this prompt, let’s break it down.

  1. A role/responsibility is assigned to the LLM: to provide complete and meaningful responses.
  2. Chain-of-thought instructions are provided: to think of deep-dive questions and formulate a holistic response.
  3. The expected output format is specified: via a set of instructions and a length for the response.

The response from the above prompt would look something like the one below:

Example: Response to the prompt that used a deep-dive technique
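An illustrative response in that spirit (again paraphrased, not the verbatim output) could be:

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
On average, the Moon is about 384,400 km (238,855 miles) from Earth.
Because its orbit is elliptical rather than circular, this distance
varies: roughly 363,300 km at its closest point (perigee) and roughly
405,500 km at its farthest (apogee). Scientists measure the distance
precisely through lunar laser ranging, timing laser pulses bounced
off retroreflectors left on the Moon by the Apollo missions.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~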
--------
# TLDR;
--------
* Wondering what the deep-dive questions were?
> By simply adding "DEBUG: list the deep-dive questions" towards the
> end of the above prompt, we can see the deep-dive questions the LLM
> generated on behalf of the user.

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Deep-Dive Questions:

- What is the average distance between Earth and the Moon?
- Why does the distance between Earth and the Moon vary?
- How is the distance between Earth and the Moon measured?
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
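If you would like to bake this technique into an application, here is a minimal Python sketch of the idea. The template wording mirrors the illustrative prompt above, and the complete() helper is a purely hypothetical stand-in for whichever LLM client you use:

# Hypothetical sketch of the deep-dive technique: the template wording
# and the complete() helper are illustrative assumptions, not a real API.

DEEP_DIVE_TEMPLATE = """\
You are a helpful assistant responsible for providing complete and
meaningful responses.

Before answering, think of at least 3 deep-dive questions the user may
want to ask as follow-ups, then formulate one holistic response that
covers them. Respond in well-structured markdown, in under 200 words.

Question: {question}
"""

def deep_dive_prompt(question: str, debug: bool = False) -> str:
    """Wrap the user's original question with deep-dive instructions."""
    prompt = DEEP_DIVE_TEMPLATE.format(question=question)
    if debug:
        # Surface the generated deep-dive questions, as described above.
        prompt += "\nDEBUG: list the deep-dive questions\n"
    return prompt

# Usage, assuming complete(prompt: str) -> str wraps your LLM client:
# answer = complete(deep_dive_prompt("How far is the Moon from Earth?"))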

Conclusion

I would like to remind you that the purpose of this approach is to help LLMs generate a more comprehensive and subjectively better response, one that aligns with human cognitive expectations and standards. The technique is open to improvement, as it has not been tested thoroughly for reliability, speed, and so on. I will, however, add that reliability and accuracy can be elevated by implementing grounding techniques, such as Retrieval-Augmented Generation (RAG), or by providing reference text (a.k.a. context) within the prompt, but that is for another day’s conversation.

Until then, try this technique and comment if it did or did not help you. Please also comment if you find any additional techniques useful for a better-quality response.

Now, if you are an application developer who finds yourself spending a lot (really, a lot) of time perfecting your prompts while trying to integrate API responses for real-time use cases, take a look at AI-Dapter. This open-source project simplifies the complex journey between orchestrating API calls and obtaining LLM responses, all with just a few lines of code and without prompt engineering! By the way, AI-Dapter internally uses the deep-dive prompting technique discussed here, so the responses are naturally complete and meaningful. Check it out and comment about it here as well!
