Quick Start
The OffloadGPT API works in combination with the RapidAPI platform and the OpenAI Chat Completions API.
Get your API keys
Your API requests are authenticated using API keys in the request headers. Any request that doesn't include an API key will return an error.
You can generate an API key from your RapidAPI Developer Dashboard at any time.
You then need to subscribe to the OffloadGPT API in order to make requests. For testing purposes, there is a free subscription plan allowing 1,000 requests per month.
In addition, you will need an OpenAI API key for the internal request to the Chat Completions API.
Make your first request
To make your first request, send an authenticated request to the stream-chatgpt endpoint. This creates a new generated endpoint that will store the ChatGPT API response.
Here is how you might call this method from any programming language or via cURL:
curl --request POST \
  --url https://offloadgpt.p.rapidapi.com/v1/stream-chatgpt \
  --header 'content-type: application/json' \
  --header 'X-RapidAPI-Host: offloadgpt.p.rapidapi.com' \
  --header 'X-RapidAPI-Key: <REQUIRED>' \
  --header 'X-OpenAI-API-Key: <REQUIRED>' \
  --data '{
    "messages": [
      {
        "role": "system",
        "content": "You are an assistant of an online store selling hardware and I do not want you to talk about anything other than my products"
      },
      {
        "role": "user",
        "content": "Can you summarize the pros and cons of the Soundcore by Anker Space Q45 Adaptive Active Noise Cancelling Headphones?"
      }
    ]
  }'
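The same request can also be built from code. Below is a minimal Python sketch using only the standard library; `build_request` is a hypothetical helper name (not part of the API), and the placeholder keys must be replaced with your own:

```python
import json
import urllib.request

URL = "https://offloadgpt.p.rapidapi.com/v1/stream-chatgpt"

def build_request(rapidapi_key: str, openai_key: str) -> urllib.request.Request:
    """Build the same stream-chatgpt POST request as the cURL example above."""
    payload = {
        "messages": [
            {
                "role": "system",
                "content": "You are an assistant of an online store selling hardware "
                           "and I do not want you to talk about anything other than my products",
            },
            {
                "role": "user",
                "content": "Can you summarize the pros and cons of the Soundcore by Anker "
                           "Space Q45 Adaptive Active Noise Cancelling Headphones?",
            },
        ]
    }
    headers = {
        "content-type": "application/json",
        "X-RapidAPI-Host": "offloadgpt.p.rapidapi.com",
        "X-RapidAPI-Key": rapidapi_key,
        "X-OpenAI-API-Key": openai_key,
    }
    return urllib.request.Request(
        URL, data=json.dumps(payload).encode("utf-8"), headers=headers, method="POST"
    )

# To actually send it (requires valid keys and network access):
# with urllib.request.urlopen(build_request("<RAPIDAPI_KEY>", "<OPENAI_KEY>")) as resp:
#     print(json.loads(resp.read()))
```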
Check the results
If all goes well, the expected HTTP response code is 200, with the content served in JSON format as indicated by the application/json Content-Type header.
This is an example of a valid response:
{
  "status": "success",
  "created_at": 1685617626,
  "conversation_id": "24b94bef-d2a6-4faa-bb20-1429e846c9d3",
  "README": "The `stream_events_url` endpoint below streams data sent by the ChatGPT API. Open it to receive incoming messages.",
  "authorization": {
    "access": "public"
  },
  "endpoints": {
    "status_url": "https://offloadgpt.microdeploy.com/1/r/pub/2023/06/01/11/07/06/24b94bef-d2a6-4faa-bb20-1429e846c9d3.json",
    "stream_events_url": "https://offloadgpt.microdeploy.com/1/r/pub/2023/06/01/11/07/06/24b94bef-d2a6-4faa-bb20-1429e846c9d3.txt",
    "stop_url": "https://offloadgpt.microdeploy.com/1/r/pub/2023/06/01/11/07/06/24b94bef-d2a6-4faa-bb20-1429e846c9d3/stop"
  }
}
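If you are handling this response in code, the fields of interest can be pulled out with a small helper. The Python sketch below assumes the response shape shown above; `extract_endpoints` is a hypothetical name, not part of the API:

```python
import json

def extract_endpoints(response_text: str) -> dict:
    """Return the generated endpoint URLs from a stream-chatgpt response body.

    Raises RuntimeError when the API does not report success.
    """
    body = json.loads(response_text)
    if body.get("status") != "success":
        raise RuntimeError(f"request failed: {body}")
    return body["endpoints"]
```

You can then read `endpoints["stream_events_url"]`, `endpoints["status_url"]`, and `endpoints["stop_url"]` directly.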
The first property to check is status: the string success indicates that everything went well and the request has been forwarded to the OpenAI API.
Next comes the endpoints property and its subproperty stream_events_url. This value provides a URL where the response is sent in text/event-stream format, allowing you to stream text to the browser using JavaScript Server-Sent Events.
You can see an example of how to use this streaming API in this demo project.
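In the browser, `new EventSource(stream_events_url)` handles the text/event-stream format for you. Outside the browser, the `data:` lines can be parsed by hand; the following Python sketch is a simplified parser illustrating the format, not a full SSE implementation:

```python
def parse_sse(stream_text: str) -> list:
    """Split text/event-stream content into event payloads.

    Each event's payload appears on "data:" lines; a blank line
    terminates an event. Multiple data lines in one event are joined
    with a newline, per the SSE format.
    """
    events, buf = [], []
    for line in stream_text.splitlines():
        if line.startswith("data:"):
            buf.append(line[5:].lstrip())
        elif line == "" and buf:
            events.append("\n".join(buf))
            buf = []
    if buf:  # flush a trailing event with no final blank line
        events.append("\n".join(buf))
    return events
```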
Another important endpoints property is status_url, a URL intended to report the request status: whether the request is still waiting for the OpenAI ChatGPT response, an error has occurred, or the request has completed and the response is available. In addition to the streaming endpoint, status_url also provides the response as it is generated.
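A simple way to consume status_url from code is to poll it until it reports completion or an error. The sketch below is a hypothetical Python helper; the field names checked in `is_finished` are an assumption about the status document, so adjust them to what status_url actually returns:

```python
import json
import time
import urllib.request

def is_finished(status_doc: dict) -> bool:
    """Return True once the status document reports a terminal state.

    The "completed"/"error" values are assumptions about the status
    document's shape, not documented constants.
    """
    return status_doc.get("status") in ("completed", "error")

def wait_for_completion(status_url: str, interval: float = 1.0,
                        timeout: float = 60.0) -> dict:
    """Poll status_url until the request finishes or the timeout expires."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        with urllib.request.urlopen(status_url) as resp:
            doc = json.loads(resp.read())
        if is_finished(doc):
            return doc
        time.sleep(interval)
    raise TimeoutError("status_url did not report completion in time")
```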