vllm.config.pooler ¶
SEQ_POOLING_TYPES module-attribute ¶
SEQ_POOLING_TYPES: tuple[SequencePoolingType, ...] = (
get_args(SequencePoolingType)
)
TOK_POOLING_TYPES module-attribute ¶
TOK_POOLING_TYPES: tuple[TokenPoolingType, ...] = get_args(
TokenPoolingType
)
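For illustration, a minimal sketch of how get_args enumerates the members of a Literal alias such as SequencePoolingType or TokenPoolingType (the member names below are hypothetical, not vLLM's):

from typing import Literal, get_args

# Hypothetical Literal alias; the real member names live in vLLM.
ExamplePoolingType = Literal["CLS", "LAST", "MEAN"]
assert get_args(ExamplePoolingType) == ("CLS", "LAST", "MEAN")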
PoolerConfig ¶
Controls the behavior of output pooling in pooling models.
activation class-attribute instance-attribute ¶
activation: float | None = None
DEPRECATED: please use use_activation instead.
dimensions class-attribute instance-attribute ¶
dimensions: int | None = None
Reduce the dimensionality of embeddings if the model supports Matryoshka representations. Defaults to None.
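For example (a hedged sketch; the value 256 is arbitrary and only valid for a model trained with Matryoshka representation learning):

from vllm.config.pooler import PoolerConfig

# Truncate returned embeddings to 256 dimensions (placeholder value):
matryoshka_config = PoolerConfig(dimensions=256)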
enable_chunked_processing class-attribute instance-attribute ¶
enable_chunked_processing: bool | None = None
Whether to enable chunked processing for long inputs that exceed the model's maximum position embeddings. When enabled, long inputs will be split into chunks, processed separately, and then aggregated using weighted averaging. This allows embedding models to handle arbitrarily long text without CUDA errors. Defaults to False.
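A hedged usage sketch, assuming the runner and override_pooler_config engine arguments and an example embedding model:

from vllm import LLM
from vllm.config.pooler import PoolerConfig

llm = LLM(
    model="intfloat/e5-mistral-7b-instruct",  # example embedding model
    runner="pooling",
    override_pooler_config=PoolerConfig(enable_chunked_processing=True),
)
# Inputs longer than the model's maximum position embeddings are split
# into chunks, embedded separately, and merged by weighted averaging
# instead of failing on length.
outputs = llm.embed(["a very long document ..."])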
logit_bias class-attribute instance-attribute ¶
logit_bias: float | None = None
If provided, apply classification logit biases. Defaults to None.
max_embed_len class-attribute instance-attribute ¶
max_embed_len: int | None = None
Maximum input length allowed for embedding generation. When set, allows inputs longer than max_model_len to be accepted for embedding models. When an input exceeds max_embed_len, it is handled according to the original max_model_len validation logic. Defaults to None (i.e. set to max_model_len).
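For instance (a hedged sketch; the limit value is an arbitrary placeholder), combining both long-input options:

from vllm.config.pooler import PoolerConfig

# Accept embedding inputs of up to 32768 tokens (placeholder value)
# even when max_model_len is smaller:
long_input_config = PoolerConfig(
    enable_chunked_processing=True,
    max_embed_len=32768,
)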
normalize class-attribute instance-attribute ¶
normalize: bool | None = None
DEPRECATED: please use use_activation instead.
pooling_type class-attribute instance-attribute ¶
pooling_type: (
SequencePoolingType | TokenPoolingType | None
) = None
The pooling method of the pooling model.
If set, seq_pooling_type or tok_pooling_type is automatically populated from this field. Alternatively, users can set seq_pooling_type and tok_pooling_type explicitly.
This field exists mainly for user convenience. Internal code should always use seq_pooling_type or tok_pooling_type instead of pooling_type.
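For illustration (a hedged sketch; "MEAN" is assumed to be a valid SequencePoolingType member):

from vllm.config.pooler import PoolerConfig

cfg = PoolerConfig(pooling_type="MEAN")
# __post_init__ mirrors the generic field into the specific one:
assert cfg.seq_pooling_type == "MEAN"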
returned_token_ids class-attribute instance-attribute ¶
returned_token_ids: list[int] | None = None
A list of indices for the vocabulary dimensions to be extracted, such as the token IDs of good_token and bad_token in the math-shepherd-mistral-7b-prm model.
seq_pooling_type class-attribute instance-attribute ¶
seq_pooling_type: SequencePoolingType | None = None
The pooling method used for sequence pooling.
softmax class-attribute instance-attribute ¶
softmax: float | None = None
DEPRECATED: please use use_activation instead.
step_tag_id class-attribute instance-attribute ¶
step_tag_id: int | None = None
If set, only the score corresponding to step_tag_id in the generated sequence is returned. Otherwise, the scores for all tokens are returned.
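A hedged sketch for a process-reward model such as math-shepherd-mistral-7b-prm; "STEP" is assumed to be a valid pooling type here, and the token IDs below are placeholders, not the real vocabulary indices:

from vllm.config.pooler import PoolerConfig

prm_config = PoolerConfig(
    pooling_type="STEP",            # assumed token-wise step pooling
    step_tag_id=12345,              # placeholder: vocabulary ID of the step tag
    returned_token_ids=[648, 387],  # placeholder: IDs of good_token / bad_token
)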
tok_pooling_type class-attribute instance-attribute ¶
tok_pooling_type: TokenPoolingType | None = None
The pooling method used for tokenwise pooling.
use_activation class-attribute instance-attribute ¶
use_activation: bool | None = None
Whether to apply activation function to the classification outputs. Defaults to True.
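For example, to get raw classification logits rather than activation-applied scores:

from vllm.config.pooler import PoolerConfig

# Disable the activation function on classification outputs:
raw_logits_config = PoolerConfig(use_activation=False)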
__post_init__ ¶
compute_hash ¶
compute_hash() -> str
WARNING: Whenever a new field is added to this config, ensure that it is included in the factors list if it affects the computation graph.
Provide a hash that uniquely identifies all the configs that affect the structure of the computation graph from input ids/embeddings to the final hidden states, excluding anything before input ids/embeddings and after the final hidden states.
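This is not the vLLM implementation, only a minimal sketch of the pattern the docstring describes: collect the graph-affecting fields into a factors list and hash their serialized form.

import hashlib

def compute_hash_sketch(factors: list[object]) -> str:
    # Two configs with identical graph-affecting factors
    # produce identical hashes.
    return hashlib.sha256(repr(factors).encode()).hexdigest()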
get_seq_pooling_type ¶
get_seq_pooling_type() -> SequencePoolingType
get_tok_pooling_type ¶
get_tok_pooling_type() -> TokenPoolingType
get_use_activation ¶
get_use_activation(o: object)
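A hedged usage sketch of the typed accessors ("MEAN" and "ALL" are assumed members of the respective Literal types):

from vllm.config.pooler import PoolerConfig

cfg = PoolerConfig(seq_pooling_type="MEAN", tok_pooling_type="ALL")
print(cfg.get_seq_pooling_type())  # "MEAN"
print(cfg.get_tok_pooling_type())  # "ALL"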