curl"https://qstash.upstash.io/llm/v1/chat/completions"\-X POST \-H"Authorization: Bearer QSTASH_TOKEN"\-H"Content-Type: application/json"\-d '{"model":"meta-llama/Meta-Llama-3-8B-Instruct","messages":[{"role":"user","content":"What is the capital of Turkey?"}]}'
{"id":"cmpl-abefcf66fae945b384e334e36c7fdc97","object":"chat.completion","created":1717483987,"model":"meta-llama/Meta-Llama-3-8B-Instruct","choices":[{"index":0,"message":{"role":"assistant","content":"The capital of Turkey is Ankara."},"logprobs":null,"finish_reason":"stop","stop_reason":null}],"usage":{"prompt_tokens":18,"total_tokens":26,"completion_tokens":8}}
Creates a chat completion that generates a textual response
for one or more messages using a large language model.
Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing
frequency in the text so far, decreasing the model’s likelihood to repeat the same line verbatim.
Modify the likelihood of specified tokens appearing in the completion.
Accepts a JSON object that maps tokens (specified by their token ID in the tokenizer)
to an associated bias value from -100 to 100. Mathematically, the bias is added to
the logits generated by the model prior to sampling. The exact effect will vary
per model, but values between -1 and 1 should decrease or increase likelihood
of selection; values like -100 or 100 should result in a ban or exclusive
selection of the relevant token.
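As a sketch, a request applying a token bias might look like the following; the token IDs used here are hypothetical placeholders, since real IDs depend on the model's tokenizer:

# Sketch only: the token IDs 2028 and 1131 are made-up placeholders.
# Look up real token IDs in the Meta-Llama-3 tokenizer before relying on this.
curl "https://qstash.upstash.io/llm/v1/chat/completions" \
  -X POST \
  -H "Authorization: Bearer QSTASH_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
        "model": "meta-llama/Meta-Llama-3-8B-Instruct",
        "messages": [{"role": "user", "content": "What is the capital of Turkey?"}],
        "logit_bias": {"2028": -100, "1131": 5}
      }'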
Whether to return log probabilities of the output tokens or not. If true, returns
the log probabilities of each output token returned in the content of message.
An integer between 0 and 20 specifying the number of most likely tokens to return at
each token position, each with an associated log probability. logprobs must be set
to true if this parameter is used.
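For example, the request below asks for the two most likely alternatives at each token position; note that logprobs is enabled alongside top_logprobs, as required above:

# Sketch: request per-token log probabilities plus the top 2 alternatives per position.
curl "https://qstash.upstash.io/llm/v1/chat/completions" \
  -X POST \
  -H "Authorization: Bearer QSTASH_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
        "model": "meta-llama/Meta-Llama-3-8B-Instruct",
        "messages": [{"role": "user", "content": "What is the capital of Turkey?"}],
        "logprobs": true,
        "top_logprobs": 2
      }'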
Number between -2.0 and 2.0. Positive values penalize new tokens
based on whether they appear in the text so far, increasing the
model’s likelihood to talk about new topics.
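A sketch combining this penalty with the frequency penalty described earlier, assuming the standard OpenAI-compatible parameter names frequency_penalty and presence_penalty:

# Sketch: parameter names assume the OpenAI-compatible request schema this endpoint mirrors.
curl "https://qstash.upstash.io/llm/v1/chat/completions" \
  -X POST \
  -H "Authorization: Bearer QSTASH_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
        "model": "meta-llama/Meta-Llama-3-8B-Instruct",
        "messages": [{"role": "user", "content": "Write a short poem about Ankara."}],
        "frequency_penalty": 0.5,
        "presence_penalty": 0.5
      }'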
An object specifying the format that the model must output.
Setting to { "type": "json_object" } enables JSON mode,
which guarantees the message the model generates is valid JSON.
Important: when using JSON mode, you must also instruct the model
to produce JSON yourself via a system or user message. Without this,
the model may generate an unending stream of whitespace until the
generation reaches the token limit, resulting in a long-running and
seemingly “stuck” request. Also note that the message content may
be partially cut off if finish_reason="length", which indicates the
generation exceeded max_tokens or the conversation exceeded the max context length.
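Putting that together, a JSON-mode request might look like the sketch below; the system message explicitly instructs the model to emit JSON, as required above:

# Sketch: JSON mode plus an explicit instruction to produce JSON,
# to avoid the "stuck" whitespace behavior described above.
curl "https://qstash.upstash.io/llm/v1/chat/completions" \
  -X POST \
  -H "Authorization: Bearer QSTASH_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
        "model": "meta-llama/Meta-Llama-3-8B-Instruct",
        "messages": [
          {"role": "system", "content": "Reply only with valid JSON."},
          {"role": "user", "content": "List the three largest cities in Turkey."}
        ],
        "response_format": {"type": "json_object"}
      }'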
This feature is in Beta. If specified, our system will make a best effort to sample
deterministically, such that repeated requests with the same seed and parameters
should return the same result. Determinism is not guaranteed, and you should
refer to the system_fingerprint response parameter to monitor changes in the backend.
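A sketch of a best-effort deterministic request; repeating it with the same seed and parameters should return the same result, and system_fingerprint in the response can be compared across calls to detect backend changes:

# Sketch: the seed value is arbitrary; reuse it to request reproducible sampling.
curl "https://qstash.upstash.io/llm/v1/chat/completions" \
  -X POST \
  -H "Authorization: Bearer QSTASH_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
        "model": "meta-llama/Meta-Llama-3-8B-Instruct",
        "messages": [{"role": "user", "content": "What is the capital of Turkey?"}],
        "seed": 42
      }'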
If set, partial message deltas will be sent. Tokens will be sent as
data-only server-sent events as they become available, with the stream
terminated by a data: [DONE] message.
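A streaming sketch; curl's -N (--no-buffer) flag prints each data-only server-sent event as it arrives, ending with the data: [DONE] message:

# Sketch: -N disables curl's output buffering so message deltas print as they stream in.
curl -N "https://qstash.upstash.io/llm/v1/chat/completions" \
  -X POST \
  -H "Authorization: Bearer QSTASH_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
        "model": "meta-llama/Meta-Llama-3-8B-Instruct",
        "messages": [{"role": "user", "content": "What is the capital of Turkey?"}],
        "stream": true
      }'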
What sampling temperature to use, between 0 and 2. Higher values
like 0.8 will make the output more random, while lower values
like 0.2 will make it more focused and deterministic.
We generally recommend altering this or top_p but not both.
An alternative to sampling with temperature, called nucleus sampling,
where the model considers the results of the tokens with top_p
probability mass. So 0.1 means only the tokens comprising the top
10% probability mass are considered.
We generally recommend altering this or temperature but not both.
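Per the recommendation in both sections, adjust only one of the two. A sketch that lowers temperature for more focused output and leaves top_p at its default:

# Sketch: lower temperature for more deterministic output; top_p is deliberately omitted.
curl "https://qstash.upstash.io/llm/v1/chat/completions" \
  -X POST \
  -H "Authorization: Bearer QSTASH_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
        "model": "meta-llama/Meta-Llama-3-8B-Instruct",
        "messages": [{"role": "user", "content": "What is the capital of Turkey?"}],
        "temperature": 0.2
      }'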
The reason the model stopped generating tokens. This will be stop if the
model hit a natural stop point or a provided stop sequence, length if
the maximum number of tokens specified in the request was reached.
The stop string or token id that caused the completion to stop,
null if the completion finished for some other reason including
encountering the EOS token.
The log probability of this token, if it is within the top 20 most likely tokens.
Otherwise, the value -9999.0 is used to signify that the token is very unlikely.
A list of integers representing the UTF-8 bytes representation of the token.
Useful in instances where characters are represented by multiple tokens and
their byte representations must be combined to generate the correct text
representation. Can be null if there is no bytes representation for the token.
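As a small illustration of combining byte representations: if one token's bytes are [226, 130] and the next token's are [172], concatenating them yields the UTF-8 encoding of a single character. The shell sketch below decodes those example bytes (values chosen purely for illustration):

# Sketch: bytes 226 130 172 (hex e2 82 ac) concatenate to the UTF-8 encoding of "€".
printf '\xe2\x82\xac\n'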
List of the most likely tokens and their log probability, at this token position.
In rare cases, there may be fewer than the number of requested top_logprobs returned.
The log probability of this token, if it is within the top 20 most likely tokens.
Otherwise, the value -9999.0 is used to signify that the token is very unlikely.
A list of integers representing the UTF-8 bytes representation of the token.
Useful in instances where characters are represented by multiple tokens and
their byte representations must be combined to generate the correct text
representation. Can be null if there is no bytes representation for the token.
The reason the model stopped generating tokens. This will be stop if the
model hit a natural stop point or a provided stop sequence, length if
the maximum number of tokens specified in the request was reached.
The log probability of this token, if it is within the top 20 most likely tokens.
Otherwise, the value -9999.0 is used to signify that the token is very unlikely.
A list of integers representing the UTF-8 bytes representation of the token.
Useful in instances where characters are represented by multiple tokens and
their byte representations must be combined to generate the correct text
representation. Can be null if there is no bytes representation for the token.
List of the most likely tokens and their log probability, at this token position.
In rare cases, there may be fewer than the number of requested top_logprobs returned.
The log probability of this token, if it is within the top 20 most likely tokens.
Otherwise, the value -9999.0 is used to signify that the token is very unlikely.
A list of integers representing the UTF-8 bytes representation of the token.
Useful in instances where characters are represented by multiple tokens and
their byte representations must be combined to generate the correct text
representation. Can be null if there is no bytes representation for the token.
Total number of tokens used in the request (prompt + completion).
curl"https://qstash.upstash.io/llm/v1/chat/completions"\-X POST \-H"Authorization: Bearer QSTASH_TOKEN"\-H"Content-Type: application/json"\-d '{"model":"meta-llama/Meta-Llama-3-8B-Instruct","messages":[{"role":"user","content":"What is the capital of Turkey?"}]}'
{"id":"cmpl-abefcf66fae945b384e334e36c7fdc97","object":"chat.completion","created":1717483987,"model":"meta-llama/Meta-Llama-3-8B-Instruct","choices":[{"index":0,"message":{"role":"assistant","content":"The capital of Turkey is Ankara."},"logprobs":null,"finish_reason":"stop","stop_reason":null}],"usage":{"prompt_tokens":18,"total_tokens":26,"completion_tokens":8}}
curl"https://qstash.upstash.io/llm/v1/chat/completions"\-X POST \-H"Authorization: Bearer QSTASH_TOKEN"\-H"Content-Type: application/json"\-d '{"model":"meta-llama/Meta-Llama-3-8B-Instruct","messages":[{"role":"user","content":"What is the capital of Turkey?"}]}'
{"id":"cmpl-abefcf66fae945b384e334e36c7fdc97","object":"chat.completion","created":1717483987,"model":"meta-llama/Meta-Llama-3-8B-Instruct","choices":[{"index":0,"message":{"role":"assistant","content":"The capital of Turkey is Ankara."},"logprobs":null,"finish_reason":"stop","stop_reason":null}],"usage":{"prompt_tokens":18,"total_tokens":26,"completion_tokens":8}}