Top-p sampling

From Wikipedia, the free encyclopedia

Top-p sampling, also known as nucleus sampling, is a stochastic decoding strategy for generating sequences from autoregressive probabilistic models. It was originally proposed by Ari Holtzman and his colleagues in 2019 for natural language generation to address the issue of repetitive and nonsensical text generated by other common decoding methods like beam search.[1] The technique has since been applied in other scientific fields, such as protein engineering[2] and geophysics.[3]

In top-p sampling, a probability threshold p is set, and the next item in a sequence is sampled only from the smallest possible set of high-probability candidates whose cumulative probability is at least p. This method adapts the size of the candidate pool to the model's certainty, making it more flexible than top-k sampling, which samples from a fixed number of candidates. Due to its effectiveness, top-p sampling is widely used in large language model applications.[4]

Technique

At each step of the text generation process, a language model calculates a probability distribution over its entire vocabulary for the next token. It is possible to simply pick the token with the highest probability (greedy search) or to track a small set of high-probability sequences (beam search), but these deterministic methods often produce text that is dull, repetitive, or nonsensical.[1] Top-p sampling introduces randomness to avoid these issues while maintaining quality.

The core idea is to sample at each step from a smaller, more credible set of tokens called the nucleus. This nucleus contains the most likely next tokens, chosen so that their combined (cumulative) probability just reaches the threshold p. By sampling only from this dynamically sized group, the model can adapt to different situations. When the model is confident about the next token (e.g., one token has a very high probability), the nucleus will be small. When the model is uncertain (the probabilities are more evenly distributed), the nucleus will be larger, allowing for more diversity.

The process at each step is as follows:

  1. The model calculates the probabilities for all possible next tokens.
  2. The tokens are sorted by their probability in descending order.
  3. The nucleus is formed by selecting tokens from the top of the list until their cumulative probability reaches or exceeds the predefined threshold p.
  4. The probabilities of tokens within this nucleus are then rescaled so that they sum to 1. All tokens outside the nucleus are discarded (given a probability of 0).
  5. The final next token is randomly sampled from this new, smaller distribution.

Formally, the nucleus $V^{(p)} \subseteq V$ is defined as the smallest set of tokens satisfying

$$\sum_{x \in V^{(p)}} P(x \mid x_{1:i-1}) \geq p.$$

In this formula, $P(x \mid x_{1:i-1})$ represents the probability of a token $x$ given the preceding tokens $x_{1:i-1}$.
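
The whole procedure amounts to sorting, a cumulative sum, and a renormalized draw. The following is a minimal sketch in Python with NumPy; the function name `top_p_sample` and the small tolerance on the threshold comparison are illustrative choices rather than any particular library's API.

```python
import numpy as np

def top_p_sample(probs, p=0.9, rng=None):
    """Draw one token index from `probs` using top-p (nucleus) sampling.

    probs: 1-D array of next-token probabilities summing to 1.
    p:     cumulative-probability threshold that defines the nucleus.
    """
    if rng is None:
        rng = np.random.default_rng()
    probs = np.asarray(probs, dtype=float)

    # Steps 1-2: sort token indices by probability, highest first.
    order = np.argsort(probs)[::-1]
    sorted_probs = probs[order]

    # Step 3: smallest prefix whose cumulative probability reaches p.
    # The tiny tolerance guards against round-off when the sum equals p exactly.
    cumulative = np.cumsum(sorted_probs)
    nucleus_size = int(np.searchsorted(cumulative, p - 1e-12) + 1)

    # Step 4: discard everything outside the nucleus and rescale to sum to 1.
    nucleus_ids = order[:nucleus_size]
    nucleus_probs = sorted_probs[:nucleus_size] / sorted_probs[:nucleus_size].sum()

    # Step 5: sample the next token from the renormalized distribution.
    return int(rng.choice(nucleus_ids, p=nucleus_probs))
```

In an autoregressive generation loop, such a function would be applied to the model's next-token distribution at every step, with the sampled token appended to the context before the next prediction.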

Example

Imagine that at a certain step, a language model with a five-word vocabulary `[the, a, cat, dog, eats]` produces the following probabilities:

  • the: 0.5
  • a: 0.2
  • cat: 0.1
  • dog: 0.1
  • eats: 0.1

If we set p = 0.8:

  1. The tokens are sorted by probability: [the, a, cat, dog, eats].
  2. The cumulative probability is calculated:
    • the: 0.5
    • the + a: 0.5 + 0.2 = 0.7
    • the + a + cat: 0.7 + 0.1 = 0.8
  3. The nucleus is the smallest set with cumulative probability ≥ 0.8, which is {the, a, cat}.
  4. The probabilities for this set are rescaled to sum to 1:
    • P(the) = 0.5 / 0.8 = 0.625
    • P(a) = 0.2 / 0.8 = 0.25
    • P(cat) = 0.1 / 0.8 = 0.125
  5. The next token is then sampled from this new distribution, meaning dog and eats have a 0% chance of being chosen; the arithmetic is reproduced in the short sketch below.
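
The arithmetic above can be checked with a short NumPy sketch (illustrative only; the tolerance on the threshold comparison merely guards against floating-point round-off when the cumulative sum equals p exactly):

```python
import numpy as np

tokens = ["the", "a", "cat", "dog", "eats"]       # already sorted by probability
probs = np.array([0.5, 0.2, 0.1, 0.1, 0.1])
p = 0.8

cumulative = np.cumsum(probs)                     # [0.5, 0.7, 0.8, 0.9, 1.0]
nucleus_size = int(np.searchsorted(cumulative, p - 1e-12) + 1)   # 3 -> {the, a, cat}

nucleus_probs = probs[:nucleus_size] / probs[:nucleus_size].sum()
for token, q in zip(tokens, nucleus_probs):
    print(token, round(float(q), 3))              # the 0.625, a 0.25, cat 0.125
```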

Top-k sampling

Top-k sampling is a similar technique in which the pool of candidate tokens is restricted to the k most likely tokens. The main advantage of top-p is its adaptability. When the model is very certain about the next token (a peaked distribution), the nucleus can be very small. When the model is uncertain (a flat distribution), the nucleus can be much larger, allowing for more diversity. In contrast, top-k always samples from a fixed number of tokens, which may be too restrictive or too broad depending on the context.[1]
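
For comparison, a minimal top-k counterpart (again an illustrative sketch, not any library's implementation) keeps a fixed number of tokens no matter how the probability mass is spread:

```python
import numpy as np

def top_k_sample(probs, k=2, rng=None):
    """Draw one token index from the k highest-probability tokens."""
    if rng is None:
        rng = np.random.default_rng()
    probs = np.asarray(probs, dtype=float)

    # The candidate pool always has exactly k tokens, unlike the nucleus in top-p.
    kept_ids = np.argsort(probs)[::-1][:k]
    kept_probs = probs[kept_ids] / probs[kept_ids].sum()
    return int(rng.choice(kept_ids, p=kept_probs))
```

With the example distribution above, top-k with k = 2 would never choose cat, whereas top-p with p = 0.8 keeps it because the remaining probability mass is spread fairly evenly.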

Applications

While top-p sampling is most famously used as a decoding strategy for large language models, the technique has also been adapted for use in other scientific domains that involve generating or analyzing sequential data from probabilistic models.

Natural language generation

In its original domain of natural language generation, top-p sampling is valued for its ability to produce more diverse and coherent text compared to deterministic methods. It has been shown to be beneficial in tasks like automatic question generation, where sample diversity is important for creating effective training data for question answering models.[5]

Drug and protein design

Top-p sampling is used in computational biology to generate novel molecular and protein sequences from specialized language models. In de novo drug design, chemical language models trained on molecular structures use nucleus sampling to generate focused libraries of new, valid drug candidates. By combining this generation with a predictive model for bioactivity, researchers have identified novel, potent kinase inhibitors.[6] Similarly, in protein engineering, the technique is used to sample protein language models to explore the vast space of possible amino acid sequences to find novel, functional candidates for use in therapeutics or new materials.[2]

Geophysics

The technique has also been applied in geophysics for denoising audio magnetotelluric (AMT) data. In one method, nucleus sampling is integrated into an attention mechanism to help identify and remove complex anthropogenic noise from geophysical signals. This improves the accuracy of AMT in interpreting the Earth's subsurface resistivity structure, which is critical for applications like mineral exploration.[3]

Limitations and alternatives

While top-p and top-k sampling address many of the issues found in deterministic methods like beam search, they are not without shortcomings. Research has shown that these stochastic methods can produce text with undesirable repetitions and may not fully capture the statistical properties of human language.[7][8][9]

A range of alternative sampling strategies have been proposed to address these limitations.

  • Factual-nucleus sampling was proposed to counter the tendency for the "uniform randomness" applied to the nucleus to harm the factuality of the generated text. It dynamically adapts the level of randomness to improve factual accuracy while maintaining text quality.[10]
  • Locally typical sampling frames text generation in information-theoretic terms. Instead of selecting only the highest-probability tokens, it samples from a set of tokens whose information content is close to the expected information content (entropy) of the distribution, which has been shown to reduce repetition and improve quality (a minimal sketch follows this list).[7]
  • Priority sampling is a deterministic alternative designed to address the issue of repeated or incoherent samples. It produces a set of unique samples ordered by the model's confidence and has been shown to outperform nucleus sampling in some compiler optimization tasks.[11]
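
As an illustration of one such alternative, the truncation rule of locally typical sampling can be sketched as follows. This is a rough reading of the method described above, with `tau` as the probability-mass parameter (analogous to p); it is not the authors' reference implementation.

```python
import numpy as np

def locally_typical_sample(probs, tau=0.95, rng=None):
    """Sketch of locally typical sampling: keep the tokens whose surprisal
    (-log p) is closest to the entropy of the distribution, until their
    cumulative probability reaches tau, then sample from that set."""
    if rng is None:
        rng = np.random.default_rng()
    probs = np.asarray(probs, dtype=float)

    surprisal = -np.log(probs + 1e-12)           # information content of each token
    entropy = float(np.sum(probs * surprisal))   # expected information content

    # Rank tokens by how close their surprisal is to the entropy ("typicality").
    order = np.argsort(np.abs(surprisal - entropy))
    cumulative = np.cumsum(probs[order])
    keep = int(np.searchsorted(cumulative, tau - 1e-12) + 1)

    kept_ids = order[:keep]
    kept_probs = probs[kept_ids] / probs[kept_ids].sum()
    return int(rng.choice(kept_ids, p=kept_probs))
```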

References

  1. ^ a b c Holtzman, Ari; Buys, Jan; Du, Li; Forbes, Maxwell; Choi, Yejin (22 April 2019). "The Curious Case of Neural Text Degeneration". arXiv:1904.09751 [cs.CL].
  2. ^ a b Darmawan, Jeremie Theddy; Gal, Yarin; Notin, Pascal (6 March 2025). "Sampling Protein Language Models for Functional Protein Design". ICLR 2025 Workshop on Reasoning and Learning. Retrieved 31 July 2025.
  3. ^ a b Li, Jin; Luo, Yucheng; Li, Guang; Liu, Yecheng; Tang, Jingtian (2024). "Atom-profile updating dictionary learning with nucleus sampling attention mechanism sparse coding for audio magnetotelluric denoising". Geophysics. 89 (3): E73 – E85. doi:10.1190/geo2023-0205.1.
  4. ^ von Platen, Patrick. "How to generate text: using different decoding methods for language generation with Transformers". Hugging Face. Retrieved 23 August 2023.
  5. ^ Sultan, Md Arafat; Chandel, Shubham; Astudillo, Ramón Fernandez; Castelli, Vittorio (July 2020). "On the Importance of Diversity in Question Generation for QA". Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. Online: Association for Computational Linguistics. pp. 5651–5656. doi:10.18653/v1/2020.acl-main.500.
  6. ^ Moret, Michael; Angona, Irene Pachon; Cotos, Leandro; Yan, Shen; Atz, Kenneth; Brunner, Cyrill; Baumgartner, Martin; Grisoni, Francesca; Schneider, Gisbert (7 January 2023). "Leveraging molecular structure and bioactivity with chemical language models for de novo drug design". Nature Communications. 14 (1): 114. doi:10.1038/s41467-022-35692-6. PMC 9825484. PMID 36611005.
  7. ^ a b Meister, Clara; Pimentel, Tiago; Wiher, Gian; Cotterell, Ryan (2023-01-12). "Locally Typical Sampling". Transactions of the Association for Computational Linguistics. 11: 102–121. doi:10.1162/tacl_a_00536. ISSN 2307-387X.
  8. ^ Nadeem, Moin; He, Tianxing; Cho, Kyunghyun; Glass, James (15 September 2020). "A Systematic Characterization of Sampling Algorithms for Open-ended Language Generation". arXiv:2009.07243 [cs.CL].
  9. ^ Chang, Haw-Shiuan; Peng, Nanyun; Bansal, Mohit; Ramakrishna, Anil; Chung, Tagyoung (2025-07-18). "REAL Sampling: Boosting Factuality and Diversity of Open-ended Generation by Extrapolating the Entropy of an Infinitely Large LM". Transactions of the Association for Computational Linguistics. 13: 760–783. doi:10.1162/tacl_a_00757. ISSN 2307-387X.
  10. ^ Lee, Nayeon; Ping, Wei; Xu, Peng; Patwary, Mostofa; Fung, Pascale N.; Shoeybi, Mohammad; Catanzaro, Bryan (2022). "Factuality Enhanced Language Models for Open-Ended Text Generation". Advances in Neural Information Processing Systems 35 (NeurIPS 2022).
  11. ^ Grubisic, Dejan; Seeker, Volker; Synnaeve, Gabriel; Leather, Hugh; Mellor-Crummey, John; Cummins, Chris (22 April 2024). "Priority Sampling of Large Language Models for Compilers". Proceedings of the 4th Workshop on Machine Learning and Systems. pp. 91–97. arXiv:2402.18734. doi:10.1145/3642970.3655831.