stream-chatgpt endpoint
Performs a managed ChatGPT API request to create a streaming endpoint.
Performs an OpenAI Chat Completion request and generates custom endpoints to check the response status and receive the streamed output in text/event-stream format.
This endpoint is designed for sending streaming data. If that is not what you need, it is recommended to use the async-chatgpt endpoint instead.
The only required parameter, besides the headers, is the messages parameter. All other parameters fall back to the default values of the Chat Completion API.
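As a minimal sketch, assuming a Node 18+ or browser environment with the global fetch API and using the endpoint URL and headers documented below (the environment variable names are only illustrative):

// Minimal request: only the required headers and the `messages` parameter.
// All other Chat Completion parameters keep their default values.
const res = await fetch("https://offloadgpt.p.rapidapi.com/v1/stream-chatgpt", {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    "X-OpenAI-API-Key": process.env.OPENAI_API_KEY ?? "",
    "X-RapidAPI-Key": process.env.RAPIDAPI_KEY ?? "",
    "X-RapidAPI-Host": "offloadgpt.p.rapidapi.com",
  },
  body: JSON.stringify({
    messages: [{ role: "user", content: "Hello!" }],
  }),
});

// The response contains the generated endpoints described below.
const { endpoints } = await res.json();
console.log(endpoints.stream_events_url);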
Request for a Stream ChatGPT endpoint
Generates streaming and asynchronous endpoints for chat responses.
POST
https://offloadgpt.p.rapidapi.com/v1/stream-chatgpt
Headers
Content-Type
String
application/json
X-OpenAI-API-Key*
String
<Your OpenAI API key>
X-RapidAPI-Key*
String
<Your RapidAPI key>
X-RapidAPI-Host*
String
offloadgpt.p.rapidapi.com
Request Body
access
String
Privacy of the generated endpoints: public to make them available to anyone, or private to restrict access to a generated Bearer Token. Defaults to public.
timeout
Number
The timeout of the request in seconds. The default value is 90 seconds, which is also the maximum allowed.
connect_timeout
Number
The timeout to establish the connection with the OpenAI API. The default value is 5 seconds. The maximum connection timeout allowed is 10 seconds.
from_status_url
String
The URL of a previously generated status_url. This allows the previous messages to be concatenated with the new ones sent in the current request.
from_bearer_token
String
If a value is set for the from_status_url argument and that URL is private, you must also provide the bearer_token generated in that same request.
conversation_id
String
If provided, any other conversation derived from this one will keep this conversation identifier. If not provided, a default id will be generated in UUID format.
webhook_url
String
An external URL that receives all the processed information via the POST method. The request contains a single parameter called response, holding a JSON object with the same information as the final status_url response.
model
String
Refers to the model parameter of the OpenAI Chat Completion API. If omitted, the default value is gpt-3.5-turbo.
messages*
Array
Refers to the messages parameter of the OpenAI Chat Completion API. This is the only required parameter.
temperature
Number
Refers to the temperature parameter of the OpenAI Chat Completion API. Defaults to 1.
max_tokens
Integer
Refers to the max_tokens parameter of the OpenAI Chat Completion API. Defaults to inf.
stop
String or Array
Refers to the stop parameter of the OpenAI Chat Completion API. Defaults to null.
presence_penalty
Number
Refers to the presence_penalty parameter of the OpenAI Chat Completion API. Defaults to 0.
frequency_penalty
Number
Refers to the frequency_penalty parameter of the OpenAI Chat Completion API. Defaults to 0.
logit_bias
Map
Refers to the logit_bias parameter of the OpenAI Chat Completion API. Defaults to null.
from_max_length
Number
If a value is set for the from_status_url argument, this restricts the number of characters taken from the last response of the previous messages.
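As an illustrative sketch, a request body combining several of the optional parameters above could look like the following; the concrete values are arbitrary examples (not API defaults), the commented-out fields would be copied from a previous response, and the object would be serialized with JSON.stringify and sent exactly as in the minimal sketch near the top of this page:

// Illustrative request body; values are arbitrary examples.
const body = {
  access: "private",          // generated endpoints will require the Bearer token
  timeout: 60,                // seconds (maximum 90)
  connect_timeout: 5,         // seconds (maximum 10)
  model: "gpt-3.5-turbo",
  temperature: 0.7,
  conversation_id: "support-chat-42",
  // To continue a previous conversation, copy these from that request's response:
  // from_status_url: "<previous status_url>",
  // from_bearer_token: "<previous bearer_token>",
  messages: [{ role: "user", content: "Summarize our conversation so far." }],
};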
Response from the Stream ChatGPT endpoint
For a successful request, the response will look as follows, with a success status:
{
"status": "success",
"created_at": 1685617626,
"conversation_id": "24b94bef-d2a6-4faa-bb20-1429e846c9d3",
"README": "The `stream_events_url` endpoint below streams data sent by the ChatGPT API. Open it to receive incoming messages.",
"authorization": {
"access": "public"
},
"endpoints": {
"status_url": "https://offloadgpt.microdeploy.com/1/r/pub/2023/06/01/11/07/06/24b94bef-d2a6-4faa-bb20-1429e846c9d3.json",
"stream_events_url": "https://offloadgpt.microdeploy.com/1/r/pub/2023/06/01/11/07/06/24b94bef-d2a6-4faa-bb20-1429e846c9d3.txt",
"stop_url": "https://offloadgpt.microdeploy.com/1/r/pub/2023/06/01/11/07/06/24b94bef-d2a6-4faa-bb20-1429e846c9d3/stop"
}
}
We can see other properties such as created_at, the conversation_id (taken from the request parameters or generated if missing), and the generated endpoints property.
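As a brief sketch, checking progress via the returned status_url only takes a GET request for public access; the exact fields of the status document are not assumed here beyond it being JSON:

// Fetch the status document of a public request (no extra headers needed).
const statusUrl =
  "https://offloadgpt.microdeploy.com/1/r/pub/2023/06/01/11/07/06/24b94bef-d2a6-4faa-bb20-1429e846c9d3.json";

const status = await fetch(statusUrl).then((r) => r.json());
console.log(status); // current state of the managed request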
Response for private access requests
For private access requests, the response would look as follows:
{
"status": "success",
"created_at": 1685686261,
"conversation_id": "633249b8-cee5-4636-ae71-5ed45624ac93",
"README": "The `stream_events_url` endpoint below streams data sent by the ChatGPT API. Open it to receive incoming messages.",
"authorization": {
"access": "private",
"bearer_token": "718862c1382b2ffbb445f6c1abec79b2",
"stream_url_arg": "stream_token=2d1fe9502cd65b84dc90577f322d0300"
},
"endpoints": {
"status_url": "https://offloadgpt.microdeploy.com/1/r/priv/2023/06/02/06/11/01/633249b8-cee5-4636-ae71-5ed45624ac93.json",
"stream_events_url": "https://offloadgpt.microdeploy.com/1/r/priv/2023/06/02/06/11/01/633249b8-cee5-4636-ae71-5ed45624ac93.txt",
"stop_url": "https://offloadgpt.microdeploy.com/1/r/priv/2023/06/02/06/11/01/633249b8-cee5-4636-ae71-5ed45624ac93/stop"
}
}
Here we can see the following changes in the authorization property:
- The value of access is now private.
- It provides a bearer_token property.
- Additionally, there is a stream_url_arg property.
In private requests, the generated endpoints can be accessed via GET requests using this header:
Authorization: Bearer <bearer_token>
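For example, a hedged sketch of reading the private status_url from the response above, using its bearer_token:

// Read a private endpoint with the Bearer token from the creation response.
const statusUrl =
  "https://offloadgpt.microdeploy.com/1/r/priv/2023/06/02/06/11/01/633249b8-cee5-4636-ae71-5ed45624ac93.json";

const status = await fetch(statusUrl, {
  headers: { Authorization: "Bearer 718862c1382b2ffbb445f6c1abec79b2" },
}).then((r) => r.json());
console.log(status);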
If you are consuming the stream_events_url from the JavaScript EventSource object (which does not allow adding headers), you can grant access by appending the stream_url_arg value to the stream_events_url endpoint:
https://offloadgpt.microdeploy.com/1/r/priv/...5ed45624ac93.txt?<stream_url_arg>
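A minimal browser-side sketch using the stream_url_arg value from the private example above; the streamed events are assumed here to be plain text chunks:

// EventSource cannot send headers, so the stream_url_arg query string grants access.
const streamEventsUrl =
  "https://offloadgpt.microdeploy.com/1/r/priv/2023/06/02/06/11/01/633249b8-cee5-4636-ae71-5ed45624ac93.txt";
const streamUrlArg = "stream_token=2d1fe9502cd65b84dc90577f322d0300";

const source = new EventSource(`${streamEventsUrl}?${streamUrlArg}`);
source.onmessage = (event) => {
  console.log(event.data); // chunk of the streamed ChatGPT output
};
source.onerror = () => source.close(); // close once the stream ends or fails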
Stopping active requests using the stop_url endpoint
While the request is active and has not finished, you can stop the streaming flow of data and terminate the request using the stop_url endpoint.
It works the same way as the other endpoints: it is publicly accessible for public access requests, and requires the Authorization: Bearer header for private access.
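As a sketch, assuming the stop_url accepts a plain GET call like the other generated endpoints, stopping the private request from the example above would look like this:

// Stop an active private request; public stop_url endpoints need no headers.
const stopUrl =
  "https://offloadgpt.microdeploy.com/1/r/priv/2023/06/02/06/11/01/633249b8-cee5-4636-ae71-5ed45624ac93/stop";

await fetch(stopUrl, {
  headers: { Authorization: "Bearer 718862c1382b2ffbb445f6c1abec79b2" },
});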