async-chatgpt endpoint
Performs a managed ChatGPT API request creating a status endpoint.
Performs an OpenAI Chat Completion request and generates custom endpoints to store and display the final response data.
This endpoint is designed for storing final request data. If you need streaming capabilities, it is recommended to use the stream-chatgpt endpoint.
The only required parameter, besides the headers, is the messages
parameter. All other parameters fall back to the default values of the Chat Completion API.
Request to the Async ChatGPT endpoint
Generates an asynchronous endpoint to store the final chat response.
POST
https://offloadgpt.p.rapidapi.com/v1/async-chatgpt
Headers
Content-Type
String
application/json
X-OpenAI-API-Key*
String
<Your OpenAI API key>
X-RapidAPI-Key*
String
<Your RapidAPI key>
X-RapidAPI-Host*
String
offloadgpt.p.rapidapi.com
Request Body
access
string
Privacy of the generated endpoints: public to make them available to anyone, or private to restrict access to requests carrying a generated Bearer token. Defaults to public.
timeout
Number
The timeout of the request in seconds. Default value is 90 seconds. Max timeout allowed is 90 seconds.
connect_timeout
Number
The timeout to establish a connection with the OpenAI API. Default value is 5 seconds. Max connection timeout allowed is 10 seconds.
from_status_url
String
The URL of a previously generated status_url. This allows concatenating the previous messages with the new ones sent in the current request.
from_bearer_token
String
If a value is set for the from_status_url argument and that URL is private, you must also provide the bearer_token generated in that same request.
conversation_id
String
If provided, any other conversation derived from this one will keep this conversation identifier. If not provided, a default ID will be generated in UUID format.
webhook_url
String
An external URL that will receive, via the POST method, all the processed information. The payload contains a single parameter named response, holding a JSON object with the same information as the final status_url response.
model
String
Refers to the model parameter of the OpenAI Chat Completion API. If omitted, the default value is gpt-3.5-turbo.
messages*
Array
Refers to the messages parameter of the OpenAI Chat Completion API. This is the only required parameter.
temperature
Number
Refers to the temperature parameter of the OpenAI Chat Completion API. Defaults to 1.
max_tokens
Integer
Refers to the max_tokens parameter of the OpenAI Chat Completion API. Defaults to inf.
stop
String or Array
Refers to the stop parameter of the OpenAI Chat Completion API. Defaults to null.
presence_penalty
Number
Refers to the presence_penalty parameter of the OpenAI Chat Completion API. Defaults to 0.
frequency_penalty
Number
Refers to the frequency_penalty parameter of the OpenAI Chat Completion API. Defaults to 0.
logit_bias
Map
Refers to the logit_bias parameter of the OpenAI Chat Completion API. Defaults to null.
from_max_length
Number
If a value is set for the from_status_url argument, this restricts the number of characters taken from the last response of the previous messages.
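Taken together, a call to this endpoint can be sketched as follows. This is a minimal standard-library illustration, not an official client: the helper names and the option values shown (model, access, timeout) are assumptions chosen for the example.

```python
import json
import urllib.request

API_URL = "https://offloadgpt.p.rapidapi.com/v1/async-chatgpt"

def build_request(openai_key, rapidapi_key, messages, **options):
    """Assemble the headers and JSON body for the async-chatgpt endpoint.

    `messages` is the only required body parameter; anything passed via
    `options` (model, temperature, access, timeout, ...) is forwarded as-is.
    """
    headers = {
        "Content-Type": "application/json",
        "X-OpenAI-API-Key": openai_key,
        "X-RapidAPI-Key": rapidapi_key,
        "X-RapidAPI-Host": "offloadgpt.p.rapidapi.com",
    }
    body = {"messages": messages, **options}
    return headers, body

def send(headers, body):
    """Fire the actual POST request (not executed in this sketch)."""
    req = urllib.request.Request(
        API_URL, data=json.dumps(body).encode(), headers=headers, method="POST"
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

headers, body = build_request(
    "<Your OpenAI API key>",
    "<Your RapidAPI key>",
    [{"role": "user", "content": "Hello!"}],
    model="gpt-3.5-turbo",
    access="private",
    timeout=60,
)
```

The response to this POST contains the generated endpoints shown in the examples below.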
Response from the Async ChatGPT endpoint
For a successful request, the response will look as follows, with a success status:
{
"status": "success",
"created_at": 1685695773,
"conversation_id": "b7c4669e-40d4-4d16-bd83-bb34511db8a1",
"README": "The `status_url` endpoint below continuously updates with data sent by the ChatGPT API. Load it to check for new data.",
"authorization": {
"access": "public"
},
"endpoints": {
"status_url": "https://offloadgpt.microdeploy.com/1/r/pub/2023/06/02/08/49/33/b7c4669e-40d4-4d16-bd83-bb34511db8a1.json",
"stop_url": "https://offloadgpt.microdeploy.com/1/r/pub/2023/06/02/08/49/33/b7c4669e-40d4-4d16-bd83-bb34511db8a1/stop"
}
}
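Since the final chat response only appears at the status_url once the underlying request completes, a client typically polls that endpoint. The sketch below shows one way to do that; the assumption that a non-final document reports a "pending" status is illustrative only, so adjust the check to the actual payload.

```python
import json
import time
import urllib.request

def _fetch_json(url):
    """Download and decode one JSON document."""
    with urllib.request.urlopen(url) as resp:
        return json.loads(resp.read())

def poll_status(status_url, fetch=_fetch_json, interval=2.0, max_attempts=45):
    """Poll status_url until its document reports a final state.

    `fetch` is injectable so the loop can be tested without network access.
    The "pending" value is an assumption about the intermediate state.
    """
    for _ in range(max_attempts):
        data = fetch(status_url)
        if data.get("status") != "pending":
            return data
        time.sleep(interval)
    raise TimeoutError("status_url never reached a final state")
```

With the default fetch function, `poll_status(response["endpoints"]["status_url"])` would block until the stored document settles.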
We can see other properties such as created_at, the conversation_id (filled from the parameters or generated if missing), and the generated endpoints property.
Response for private access requests
For private access requests, the response would look as follows:
{
"status": "success",
"created_at": 1685695812,
"conversation_id": "c23780e2-2fc5-4b83-b5bc-5297f47d5360",
"README": "The `status_url` endpoint below continuously updates with data sent by the ChatGPT API. Load it to check for new data.",
"authorization": {
"access": "private",
"bearer_token": "ad7b1834232536e9c59cb141b5fabe61"
},
"endpoints": {
"status_url": "https://offloadgpt.microdeploy.com/2/r/priv/2023/06/02/08/50/12/c23780e2-2fc5-4b83-b5bc-5297f47d5360.json",
"stop_url": "https://offloadgpt.microdeploy.com/2/r/priv/2023/06/02/08/50/12/c23780e2-2fc5-4b83-b5bc-5297f47d5360/stop"
}
}
Here we can see the following changes in the authorization property:
- The value of access is now private.
- It provides a bearer_token property.
In private requests, the generated endpoints can be accessed via GET requests using this header:
Authorization: Bearer <bearer_token>
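For example, a private status_url could be fetched with a sketch like the following; auth_headers and get_private are hypothetical helper names, and only the Authorization header itself comes from the documentation above.

```python
import json
import urllib.request

def auth_headers(bearer_token):
    """Header required to access privately generated endpoints."""
    return {"Authorization": f"Bearer {bearer_token}"}

def get_private(url, bearer_token):
    """GET a private endpoint such as status_url (not executed here)."""
    req = urllib.request.Request(url, headers=auth_headers(bearer_token))
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())
```

The bearer_token value comes from the authorization.bearer_token field of the creation response shown above.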
Stopping active requests using the stop_url endpoint
While the request is still active and has not finished, you can terminate it using the stop_url endpoint.
It works the same way as the other generated endpoints: it is publicly accessible for public access, and requires the Authorization: Bearer header for private access.
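As a sketch, terminating a request could look like this; the plain GET mirrors how the other generated endpoints are accessed, and the opener parameter is only there so the logic can be exercised without a live request.

```python
import urllib.request

def stop_request(stop_url, bearer_token=None, opener=urllib.request.urlopen):
    """Terminate an active request via its stop_url.

    Pass bearer_token only for requests created with private access;
    public stop_url endpoints need no Authorization header.
    """
    headers = {}
    if bearer_token is not None:
        headers["Authorization"] = f"Bearer {bearer_token}"
    req = urllib.request.Request(stop_url, headers=headers)
    with opener(req) as resp:
        return getattr(resp, "status", None)
```

For a private request, call it as `stop_request(endpoints["stop_url"], bearer_token=token)`.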