Inside GPT II. The core mechanics of prompt engineering | by Fatih Demirci

As you can see above, with the greedy strategy we append the token with the highest probability to the input sequence and then predict the next token.

Using this strategy, let's generate a longer text of 128 new tokens with greedy-search decoding.
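A minimal sketch of what that looks like with the Hugging Face transformers library (the GPT-2 checkpoint and the exact call are assumptions for illustration; any causal language model works the same way):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load GPT-2 (checkpoint name is an assumption for this example)
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "Germany is known for its"
inputs = tokenizer(prompt, return_tensors="pt")

# Greedy-search decoding: a single beam, no sampling,
# always pick the single most likely next token
output_ids = model.generate(
    **inputs,
    max_new_tokens=128,
    do_sample=False,
    num_beams=1,
)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```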

As we can see from the text above, although it is the simplest logic, the drawback of this approach is that it generates repetitive sequences. It fails to capture the probabilities of sequences, meaning that the overall probability of several words coming one after another is overlooked. Greedy search predicts and considers only the probability one step at a time.

Repetitive text is a problem. We would like our generated output to be concise and free of repetition; how can we achieve that?

Instead of choosing the token with the highest probability at each step, we keep several candidate sequences, calculate the joint probability of each (simply the multiplication of the consecutive probabilities), and choose the token sequence that is most probable overall. The number of candidate sequences we track at each step, x, is the number of beams, and it determines how broadly we look into future steps. This strategy is known as beam search.

Let's go back to our GPT-2 example and explore beam search vs. greedy search scenarios.

Given the prompt, let's look at the two tokens with the highest probability and their continuations (4 beams) in a tree diagram.

Let's calculate the joint probabilities of the green sequences above.

Germany is known for its -> high-quality beer

with the joint probability 3.30% * 24.24% * 31.26% * 6.54% ≈ 0.000164,

whereas the lower path, with the sequence

Germany is known for its -> strong tradition of life

has the joint probability 2.28% * 2.54% * 87.02% * 38.26% ≈ 0.000193.

The bottom sequence results in the higher joint probability overall, even though the first next-token prediction step in the top sequence has the higher probability.
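As a quick sanity check, reading the step probabilities off the tree diagram above, the two products can be reproduced in a few lines of Python:

```python
# Joint probability = product of the conditional probabilities along each path
top_path = [0.0330, 0.2424, 0.3126, 0.0654]     # "... high-quality beer"
bottom_path = [0.0228, 0.0254, 0.8702, 0.3826]  # "... strong tradition of life"

def joint(probs):
    p = 1.0
    for step in probs:
        p *= step
    return p

print(f"top:    {joint(top_path):.6f}")     # ~0.000164
print(f"bottom: {joint(bottom_path):.6f}")  # ~0.000193
```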

While greedy search prioritises the absolute maximum probability at each prediction step, it neglects the token probabilities across whole sequences. Beam search decoding lets us look deeper into candidate sequences and decode text in a more extensive fashion. So is beam search the ultimate solution?

Let's explore further and decode the next 128 tokens with 5 beams.
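Assuming the same Hugging Face generate API and reusing the model, tokenizer and inputs from the greedy-search sketch above, this roughly corresponds to:

```python
# Beam-search decoding: keep the 5 most probable candidate sequences at each step
output_ids = model.generate(
    **inputs,
    max_new_tokens=128,
    num_beams=5,
    do_sample=False,
    early_stopping=True,
)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```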

Here are 10 of the most beautiful places in Germany.

1. Lake Constance

Lake Constance is one of the largest lakes in Germany.

It is located in the state of North Rhine-Westphalia and is the second largest lake in Germany after Lake Constance in Bavaria.

Lake Constance is located in the state of North Rhine-Westphalia and is the second largest lake in Germany after Lake Constance in Bavaria.

"""

Although less so than greedy search, beam search suffers from repetitive output too. However, with beam search decoding we can address this problem by penalising repeated sequences of words. In other words, the probability of a token sequence is set to zero if the sequence has already been decoded before. This penalisation of repeated token sequences is also known as an n-gram penalty.

Here, n signifies the length of the sequence, and "gram" is a term from computational linguistics that refers to a unit, which in our case corresponds to a token.

The reasoning behind it is to discourage the generation of sequences that contain repeating n-grams: the decoding algorithm penalises generated sequences that contain repeated pairs of words in the output.

Knowing this, let's apply an n-gram penalty with n = 2.
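In the Hugging Face generate API this penalty is exposed as the no_repeat_ngram_size parameter; a minimal sketch, again reusing the model and inputs from above:

```python
# Beam search plus a bigram penalty: any 2-gram that has already been
# generated gets its probability set to zero, so it cannot repeat
output_ids = model.generate(
    **inputs,
    max_new_tokens=128,
    num_beams=5,
    no_repeat_ngram_size=2,
    do_sample=False,
)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```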

The country's capital, Berlin, is the largest city in Europe, with a population of more than 8.5 million people.

The city is located in the former East Germany, which was divided into East and West Germany after World War II.

Today, Germany is a member of both the European Union and NATO, as well as the World Trade Organization and the Organization for Economic Cooperation and Development (OECD).<|endoftext|>

"""

This is the best completion of the input prompt we have extracted from the model so far in terms of coherence and compactness. Through n-gram penalisation, the output decoded with beam search became more human-like.

When should we use beam search and when greedy search? Where factuality is paramount, as in solving a math problem, key information extraction, summarisation or translation, greedy search should be preferred. However, when we want creative output and factuality is not our priority (as can be the case in story generation), beam search is often the better-suited approach.

Why exactly does your prompt matter? Because every word you choose, the sentence structure and the layout of your instructions activate a different series of parameters in the deep layers of the large language model, and the probabilities are formed differently for each prompt. In essence, text generation is a probability distribution conditioned on your prompt.
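Written out, this is the standard autoregressive factorisation: each generated token is conditioned on the prompt and on everything generated so far,

$$
P(y_1, \dots, y_T \mid \text{prompt}) = \prod_{t=1}^{T} P\big(y_t \mid y_{<t}, \text{prompt}\big)
$$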

There are also alternative methods to prevent repetition and to influence the factuality or creativity of the generated text, such as truncating the vocabulary distribution or using sampling methods. If you are interested in a more in-depth exploration of the subject, I'd highly recommend the article by Patrick von Platen on the Hugging Face blog.
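As a rough sketch of what such sampling-based decoding looks like with the same generate API (the specific top_k, top_p and temperature values here are arbitrary choices for illustration):

```python
# Sampling from a truncated vocabulary distribution instead of searching:
# top_k keeps only the 50 most likely tokens, top_p keeps the smallest set of
# tokens whose cumulative probability exceeds 0.95, temperature rescales the logits
output_ids = model.generate(
    **inputs,
    max_new_tokens=128,
    do_sample=True,
    top_k=50,
    top_p=0.95,
    temperature=0.8,
)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```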

The next and last article of this series will explore fine-tuning and reinforcement learning from human feedback, which played an important role in why pre-trained models managed to surpass SOTA models on several benchmarks. I hope this blog post helped you understand the reasoning behind prompt engineering better. Many thanks for the read. Until next time.
