Prompt Like a Data Scientist: Auto Prompt Optimization and Testing with DSPy

We will first spend some time on environment preparation. After that, the article is divided into three sections:

1. Signatures and Modules: the building blocks of prompt programming in DSPy
2. Optimizing a prompt for a RAG application with training samples
3. Evaluating prompts systematically, expressed as combinations of modules and optimizers

We are now ready to start!

Signatures and Modules are the building blocks of prompt programming in DSPy. Let's dive in to see what they are about!

A signature is the most fundamental building block in DSPy's prompt programming: a declarative specification of the input/output behavior of a DSPy module. Signatures let you tell the LM what it needs to do, rather than specifying how we should ask it to do so.

Say we want to obtain the sentiment of a sentence. Traditionally, we might write a prompt like this:
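For instance, something along the following lines; the wording of this hand-written prompt is purely illustrative, not the article's exact example:

```python
# A hand-crafted prompt, manually assembled with an f-string
sentence = "it's a charming and often affecting journey."  # example input

prompt = f"""Classify the sentiment of the following sentence as positive, negative or neutral.

Sentence: {sentence}
Sentiment:"""
```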

But in DSPy, we can achieve the same by defining a signature, as shown below. In its most basic form, a signature is as simple as a single string separating the inputs and output with a ->:
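A minimal sketch of what that looks like; the LM client setup (dspy.OpenAI with gpt-3.5-turbo) is an assumption matching the older DSPy API this article uses:

```python
import dspy

# Configure the LM once; the model name and client class are assumptions
turbo = dspy.OpenAI(model="gpt-3.5-turbo")
dspy.settings.configure(lm=turbo)

# The whole signature is the string "sentence -> sentiment"
classify = dspy.Predict("sentence -> sentiment")

# Reusing the example sentence defined above
print(classify(sentence=sentence).sentiment)
```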

Note: some code in this section is adapted from DSPy's documentation on Signatures.

The prediction is not a good one, but for instructional purposes let's inspect the prompt that was issued.
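One way to do this, assuming the turbo client from the sketch above, is to ask the LM client for its recent history:

```python
# Print the last prompt/completion pair sent to the LM
turbo.inspect_history(n=1)
```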

We can see the above prompt is assembled from the sentence -> sentiment signature. But how did DSPy come up with the "Given the fields..." instruction in the prompt?

Inspecting the dspy.Predict() class, we see that when we pass it our signature, the signature is parsed into the signature attribute of the class and subsequently assembled into a prompt. The instructions are a default hardcoded in the DSPy library.

What if we want to provide a more detailed description of our objective to the LLM, beyond the basic sentence -> sentiment signature? To do so, we need to provide a more verbose signature in the form of a class-based DSPy Signature.
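A sketch of such a class-based signature is shown below; the docstring and field descriptions are illustrative rather than the article's exact wording:

```python
class ClassifySentiment(dspy.Signature):
    """Classify the sentiment of the given sentence."""

    # Describe the task via the fields, not via instructions on how to solve it
    sentence = dspy.InputField(desc="the sentence whose sentiment we want to know")
    sentiment = dspy.OutputField(desc="one of: positive, negative, neutral")

classify = dspy.Predict(ClassifySentiment)
print(classify(sentence=sentence).sentiment)
```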

Notice we provide no explicit instruction as to how the LLM should obtain the sentiment. We are just describing the task at hand and the expected output.

It now outputs a much better prediction! Again, we see that the descriptions we wrote when defining the class-based signature are assembled into the prompt.

This might do for simple tasks, but advanced applications may require sophisticated prompting techniques like Chain of Thought or ReAct. In DSPy, these are implemented as Modules.

We may be used to applying prompting techniques by hardcoding phrases like "let's think step by step" in our prompts. In DSPy, these prompting techniques are abstracted as Modules. Let's see below an example of applying our class-based signature to the dspy.ChainOfThought module.
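A sketch of that swap, reusing the ClassifySentiment signature from above (the variable names are assumptions):

```python
# Same signature, different Module: dspy.ChainOfThought adds the step-by-step reasoning
classify_cot = dspy.ChainOfThought(ClassifySentiment)
prediction = classify_cot(sentence=sentence)

print(prediction)            # includes the generated reasoning and the sentiment
print(prediction.sentiment)
```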

Notice how the "Reasoning: Let's think step by step" phrase is added to our prompt, and the quality of our prediction is even better now.

According to DSPy's documentation, as of the time of writing DSPy provides the following prompting techniques in the form of Modules. Notice that dspy.Predict, which we used in the initial example, is also a Module, representing no prompting technique!

1. dspy.Predict: the basic predictor; does not modify the signature.
2. dspy.ChainOfThought: teaches the LM to think step by step before committing to the signature's response.
3. dspy.ProgramOfThought: teaches the LM to output code, whose execution results dictate the response.
4. dspy.ReAct: an agent that can use tools to implement the given signature.
5. dspy.MultiChainComparison: compares multiple outputs from ChainOfThought to produce a final prediction.

DSPy also has some function-style modules:

6. dspy.majority: can do basic voting to return the most popular response from a set of predictions.

You can check out further examples in each module's respective guide.

What about RAG, then? We can chain modules together to tackle bigger problems!

First, we define a retriever. For our example, we use a ColBERTv2 retriever that fetches information from Wikipedia Abstracts 2017.
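A sketch of the retriever setup; the endpoint URL below is the one used in DSPy's intro notebook at the time of writing and may change:

```python
# ColBERTv2 endpoint serving Wikipedia Abstracts 2017 (from DSPy's intro notebook)
colbertv2_wiki17_abstracts = dspy.ColBERTv2(url="http://20.102.90.50:2017/wiki17_abstracts")

# Register it alongside the LM so that dspy.Retrieve can use it
dspy.settings.configure(lm=turbo, rm=colbertv2_wiki17_abstracts)
```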

Then we define a RAG class that inherits from dspy.Module. It needs two methods:

- The __init__ method declares the sub-modules it needs: dspy.Retrieve and dspy.ChainOfThought, the latter built for a context, question -> answer signature.
- The forward method describes the control flow of answering a question using those modules.

Note: the code in this section is borrowed from DSPy's introduction notebook.
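For reference, here is a minimal version of that class following the intro notebook's pattern; treat the num_passages default and the "context, question -> answer" signature as illustrative:

```python
class RAG(dspy.Module):
    def __init__(self, num_passages=3):
        super().__init__()
        # Declare the sub-modules: a retriever and a Chain-of-Thought answer generator
        self.retrieve = dspy.Retrieve(k=num_passages)
        self.generate_answer = dspy.ChainOfThought("context, question -> answer")

    def forward(self, question):
        # Control flow: retrieve passages, then generate an answer grounded in them
        context = self.retrieve(question).passages
        prediction = self.generate_answer(context=context, question=question)
        return dspy.Prediction(context=context, answer=prediction.answer)
```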

Then we make use of the class to perform RAG:
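For example (the question below is an arbitrary HotpotQA-style one, not necessarily the article's):

```python
rag = RAG()
pred = rag(question="Which American actor was Candace Kita guest starred with?")
print(pred.answer)
```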

Inspecting the prompt, we see that the 3 passages retrieved from Wikipedia Abstracts 2017 are interspersed as context for the Chain-of-Thought generation.

The above examples might not seem like much. In its most basic application, DSPy may appear to do nothing that can't be done with an f-string, but it actually presents a paradigm shift for prompt writing, because it brings modularity to prompt composition!

First we describe our objective with a Signature, then we apply different prompting techniques with Modules. To test different prompting techniques for a given problem, we can simply switch the modules used and compare their results, rather than hardcoding the "let's think step by step" (for Chain of Thought) or "you will interleave Thought, Action, and Observation steps" (for ReAct) phrases. The benefit of modularity will be demonstrated later in this article with a full-fledged example.

The power of DSPy is not limited to modularity; it can also optimize our prompt based on training samples and test it systematically. We will explore this in the next section!

In this section we try to optimize our prompt for a RAG application with DSPy.

Taking Chain of Thought as an example, beyond just adding the "let's think step by step" phrase, we can boost its performance with a few tweaks, such as adding few-shot examples and bootstrapped demonstrations to the prompt.

Doing this manually would be highly time-consuming and wouldn't generalize to different problems, but with DSPy it can be done automatically. Let's dive in!

#1 Load the test data: As in machine learning, to train our prompt we need to prepare training and test datasets. Initially this cell will take around 20 minutes to run.
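A sketch of the data loading, using DSPy's built-in HotPotQA wrapper; the split sizes and seeds are illustrative:

```python
from dspy.datasets import HotPotQA

# Load a small slice of HotpotQA; sizes and seeds are illustrative
dataset = HotPotQA(train_seed=1, train_size=20, eval_seed=2023, dev_size=50, test_size=0)

# Tell DSPy that 'question' is the input field of each example
trainset = [x.with_inputs("question") for x in dataset.train]
devset = [x.with_inputs("question") for x in dataset.dev]
```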

Inspecting our dataset, we see it is basically a set of question-and-answer pairs:

#2 Set up Phoenix for observability: To facilitate understanding of the optimization process, we launch Phoenix to observe our DSPy application; it is a great tool for LLM observability in general! I will skip pasting the code here, but you can execute it in the notebook.

Note: if you are on Windows, please also install the Windows C++ Build Tools, which are necessary for Phoenix.

Then we are ready to see what this optimization is about! To train our prompt, we need three things:

1. A training set (the question-and-answer pairs we loaded above)
2. A metric to validate the predictions against
3. A teleprompter (optimizer), in this case BootstrapFewShot

Now we train our prompt.
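A sketch of the compilation step along the lines of the intro notebook; the metric below (exact match plus a check that the answer appears in the retrieved passages) and max_bootstrapped_demos=4 are assumptions consistent with the behaviour described next:

```python
from dspy.teleprompt import BootstrapFewShot

def validate_context_and_answer(example, pred, trace=None):
    # The predicted answer must match the gold answer exactly...
    answer_em = dspy.evaluate.answer_exact_match(example, pred)
    # ...and must be grounded in one of the retrieved passages
    answer_pm = dspy.evaluate.answer_passage_match(example, pred)
    return answer_em and answer_pm

# Set up the optimizer and compile our RAG program against the trainset
teleprompter = BootstrapFewShot(metric=validate_context_and_answer, max_bootstrapped_demos=4)
compiled_rag = teleprompter.compile(RAG(), trainset=trainset)
```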

Before using the compiled_rag to answer a question, let's see what went on behind the scenes during the training process (a.k.a. compilation). We launch the Phoenix console by visiting http://localhost:6006/ in a browser.

In my run, 14 calls were made using the RAG class; in each of those calls, we post a question to the LM to obtain a prediction.

Referring to the result summary table in my notebook, 4 correct answers were produced from these 14 samples, thus reaching our max_bootstrapped_demos parameter and stopping the calls.

But what are the prompts DSPy issued to obtain the bootstrapped demos? Here's the prompt for question #14. We can see that as DSPy tries to generate a bootstrapped demo, it randomly adds samples from our trainset for few-shot learning.

Time to put the compiled_rag to the test! Here we raise a question that was answered wrongly in our summary table, and see if we can get the right answer this time.

We now get the right answer!

Again, let's inspect the prompt issued. Notice how the compiled prompt differs from the ones used during bootstrapping: apart from the few-shot examples, bootstrapped Context-Question-Reasoning-Answer demonstrations from correct predictions are added to the prompt, improving the LM's capability.

So this is basically what went on behind the scenes with BootstrapFewShot during compilation:

1. Run the RAG program over samples from the trainset, adding random trainset samples to the prompt as few-shot examples.
2. Keep the traces whose predictions pass the metric as bootstrapped Context-Question-Reasoning-Answer demonstrations, stopping once max_bootstrapped_demos is reached.
3. Assemble the compiled prompt from these bootstrapped demonstrations plus labeled few-shot examples from the trainset.

The above example still falls short of what we typically do in machine learning: even though bootstrapping may be useful, we have not yet proven that it improves the quality of the responses.

Ideally, as in traditional machine learning, we should define a few candidate models, see how they perform against the test set, and select the one achieving the highest score. This is what we will do next!

In this section, we want to evaluate which prompt (expressed in terms of a module and optimizer combination) performs best for RAG against the HotpotQA dataset (distributed under a CC BY-SA 4.0 License), given the LM we use (GPT 3.5 Turbo).

The Modules under evaluation are:

- COT: dspy.ChainOfThought
- ReAct: dspy.ReAct
- BasicMultiHop: a custom multi-hop module that generates a search query for each retrieval hop

And the Optimizer candidates are:

- None (no examples, i.e. the uncompiled module)
- LabeledFewShot
- BootstrapFewShot

As for the evaluation metric, we again use exact match (dspy.evaluate.metrics.answer_exact_match) as the criterion against the test set.

Let's begin! First, we define our modules.
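COT and ReAct come directly from dspy.ChainOfThought and dspy.ReAct. The custom multi-hop module might look roughly like the sketch below; the class name, hop count and the "context, question -> search_query" signature are assumptions mirroring the notebook, included here because the search_query field becomes relevant later:

```python
class BasicMultiHop(dspy.Module):
    def __init__(self, passages_per_hop=3, max_hops=2):
        super().__init__()
        self.retrieve = dspy.Retrieve(k=passages_per_hop)
        self.generate_query = dspy.ChainOfThought("context, question -> search_query")
        self.generate_answer = dspy.ChainOfThought("context, question -> answer")
        self.max_hops = max_hops

    def forward(self, question):
        context = []
        for _ in range(self.max_hops):
            # Generate a follow-up search query from what we know so far, then retrieve
            query = self.generate_query(context=context, question=question).search_query
            context += self.retrieve(query).passages
        return self.generate_answer(context=context, question=question)
```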

Then we define the permutations of our candidates (module and optimizer combinations).

Then I defined a helper class to facilitate the evaluation. The code is a tad long so I am not pasting it here, but it can be found in my notebook. What it does is apply each of the optimizers to each of the modules, compile the prompt, and then evaluate it against the test set.
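The core of such a helper might look like this sketch for a single (module, optimizer) pair, using dspy.evaluate.Evaluate with the exact-match metric named above; the function name and thread count are assumptions:

```python
from dspy.evaluate import Evaluate
from dspy.evaluate.metrics import answer_exact_match

def evaluate_candidate(module, optimizer, trainset, devset):
    # Compile with the optimizer if one is given; otherwise evaluate the module as-is
    program = optimizer.compile(module, trainset=trainset) if optimizer else module

    evaluator = Evaluate(devset=devset, metric=answer_exact_match,
                         num_threads=4, display_progress=True)
    return evaluator(program)  # percentage of exact-match answers
```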

We are now ready to start the evaluation; it takes around 20 minutes to complete.

Here's the evaluation result. We can see the COT module with the BootstrapFewShot optimizer has the best performance. The scores represent the percentage of correct answers (judged by exact match) on the test set.

But before we conclude the exercise, it is worth inspecting the result more deeply: Multihop with BootstrapFewShot, which is supposedly equipped with more relevant context than COT with BootstrapFewShot, performs worse. That is strange!

Now head to the Phoenix console to see what's going on. We pick a random question, "William Hughes Miller was born in a city with how many inhabitants ?", and inspect how COT, ReAct, and BasicMultiHop with the BootstrapFewShot optimizer came up with their answers. You can type this into the search bar to filter: """William Hughes Miller was born in a city with how many inhabitants ?""" in input.value

These are the answers provided by the 3 models during my run:

The correct answer is "7,402 at the 2010 census". Both ReAct with BootstrapFewShot and COT with BootstrapFewShot provided relevant answers, but Multihop with BootstrapFewShot simply failed to provide one.

Checking the execution trace in Phoenix for Multihop with BootstrapFewShot, it looks like the LM fails to understand what is expected for the search_query field specified in the signature.

So we revise the signature and re-run the evaluation with the code below.
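The revision is essentially about spelling out what search_query should be. A hypothetical illustration of that kind of change, not the article's exact code:

```python
class GenerateSearchQuery(dspy.Signature):
    """Write a simple search query that will help answer a complex question."""

    context = dspy.InputField(desc="may contain relevant facts gathered so far")
    question = dspy.InputField()
    search_query = dspy.OutputField(desc="a short keyword query for a search engine")

# The multi-hop module then builds its query generator from the clarified signature,
# e.g. self.generate_query = dspy.ChainOfThought(GenerateSearchQuery)
```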

We now see the score improved across all models, and Multihop with LabeledFewShot and Multihop with no examples now have the best performance! This indicates that even though DSPy tries to optimize the prompt, there is still some prompt engineering involved in articulating your objective in the signature.

The best model now produces an exact match for our question!

Since the best prompt is Multihop with LabeledFewShot, the prompt does not contain bootstrapped Context-Question-Reasoning-Answer demonstrations. So bootstrapping does not necessarily lead to better performance; we need to prove scientifically which prompt is best.

This does not mean Multihop with BootstrapFewShot performs worse in general, however. It only means that for our task, if we use GPT 3.5 Turbo both to bootstrap demonstrations (which might be of questionable quality) and to output predictions, then we might be better off without the bootstrapping, keeping only the few-shot examples.

This leads to the question: is it possible to use a more powerful LM, say GPT-4 Turbo (a.k.a. the teacher), to generate demonstrations, while keeping cheaper models like GPT 3.5 Turbo (a.k.a. the student) for prediction?

The answer is YES, as the following cell demonstrates: we will use GPT-4 Turbo as the teacher.
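A sketch of how this can be wired up with BootstrapFewShot's teacher_settings parameter; the GPT-4 Turbo model name is an assumption, and the metric and module are reused from the earlier sketches:

```python
# A stronger LM used only as the teacher for bootstrapping demonstrations
gpt4_turbo = dspy.OpenAI(model="gpt-4-turbo-preview", max_tokens=1000)

teleprompter = BootstrapFewShot(
    metric=validate_context_and_answer,
    teacher_settings=dict(lm=gpt4_turbo),  # the teacher generates the demos
)

# GPT 3.5 Turbo (the globally configured LM) remains the student for prediction
compiled_with_teacher = teleprompter.compile(BasicMultiHop(), trainset=trainset)
```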

Using GPT-4 Turbo as the teacher does not significantly boost our model's performance, however. Still, it is worthwhile to see its effect on our prompt. Below is the prompt generated using just GPT 3.5:

And here's the prompt generated using GPT-4 Turbo as the teacher. Notice how the Reasoning is much better articulated here!

Currently, we often rely on manual prompt engineering, at best abstracted as f-strings. Also, for LM comparison we often raise underspecified questions like "how do different LMs compare on a certain problem", to borrow the wording of the Stanford NLP paper.

But as the above examples demonstrate, with DSPy's modular, composable programs and optimizers, we are now equipped to answer "how they compare on a certain problem with Module X when compiled with Optimizer Y" instead, which is a well-defined and reproducible run, thus reducing the role of artful prompt construction in modern AI.

That's it! I hope you enjoyed this article.

*Unless otherwise noted, all images are by the author
