Beam Search Visualizer

Play with the parameters below to understand how beam search decoding works!

Here's GPT-2 doing beam search decoding for you.

Parameters:

  • Sentence to decode from (inputs): the input sequence to your decoder.
  • Number of steps (max_new_tokens): the number of tokens to generate.
  • Number of beams (num_beams): the number of beams to use.
  • Length penalty (length_penalty): the length penalty applied to the outputs. Because a sequence score is a sum of log-probabilities (and therefore negative), length_penalty > 0.0 promotes longer sequences, while length_penalty < 0.0 encourages shorter sequences. This parameter does not change the beam search paths themselves; it only influences which finished sequences are selected at the end, steering the choice towards longer or shorter outputs.
  • Number of return sequences (num_return_sequences): the number of sequences returned at the end of generation. It must be <= num_beams. See the example call after this list.
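
These parameters map directly onto the generate() method of the transformers library. Below is a minimal sketch of an equivalent beam search call; the prompt and parameter values are only illustrative and are not necessarily the demo's defaults.

```python
# A minimal sketch of a beam search call, assuming the transformers library
# and the public "gpt2" checkpoint; the values below are illustrative.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Sentence to decode from (inputs)
inputs = tokenizer("The quick brown fox", return_tensors="pt")

outputs = model.generate(
    **inputs,
    max_new_tokens=12,       # number of steps
    num_beams=4,             # number of beams
    length_penalty=1.0,      # > 0.0 favors longer sequences, < 0.0 shorter ones
    num_return_sequences=4,  # must be <= num_beams
)

for i, sequence in enumerate(outputs):
    print(f"Beam {i}: {tokenizer.decode(sequence, skip_special_tokens=True)}")
```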