OffloadGPT API
async-chatgpt endpoint

Performs a managed ChatGPT API request creating a status endpoint.


Performs a request that generates custom endpoints to store and display the final response data.

This endpoint is designed for storing final request data. If you need streaming capabilities, it is recommended to use the stream-chatgpt endpoint instead.

The only required parameter, besides the headers, is the messages parameter. All other parameters take the default values of the OpenAI Chat Completion API.
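As a sketch, a minimal request body therefore needs nothing but the messages array; every other field falls back to its Chat Completion default:

```python
import json

# Minimal async-chatgpt request body: only `messages` is required.
# All other Chat Completion parameters fall back to their defaults.
body = {
    "messages": [
        {"role": "user", "content": "Hello!"}
    ]
}

payload = json.dumps(body)
print(payload)
```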

Request for the Async ChatGPT endpoint

Generates an asynchronous endpoint to store the final chat response.

POST https://offloadgpt.p.rapidapi.com/v1/async-chatgpt

Headers

Name               Type    Description
Content-Type       String  application/json
X-OpenAI-API-Key*  String  <Your OpenAI API key>
X-RapidAPI-Key*    String  <Your RapidAPI key>
X-RapidAPI-Host*   String  offloadgpt.p.rapidapi.com

Request Body

Name               Type             Description
access             String           Privacy of the generated endpoints: public to make them available to anyone, or private to restrict access to a generated Bearer Token. Default is public.
timeout            Number           Timeout of the request in seconds. Default is 90 seconds, which is also the maximum allowed.
connect_timeout    Number           Timeout to establish a connection with the OpenAI API. Default is 5 seconds; the maximum allowed is 10 seconds.
from_status_url    String           URL of a previously generated status_url. This concatenates the previous messages with the ones sent in the current request.
from_bearer_token  String           If from_status_url points to a private URL, the bearer_token generated on that same request must be provided here.
conversation_id    String           See the parameter notes below.
webhook_url        String           External URL that receives, via POST, all the processed information. It carries a single parameter called response, containing a JSON with the same information as the final status_url response.
model              String           See the parameter notes below.
messages*          Array            See the parameter notes below.
temperature        Number           See the parameter notes below.
top_p              Number           See the parameter notes below.
n                  Integer          See the parameter notes below.
max_tokens         Integer          See the parameter notes below.
stop               String or Array  See the parameter notes below.
presence_penalty   Number           See the parameter notes below.
frequency_penalty  Number           See the parameter notes below.
logit_bias         Map              See the parameter notes below.
user               String           See the parameter notes below.
from_max_length    Number           If from_status_url is set, restricts the number of characters taken from the last response of the previous messages.
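Putting the headers and body together, a complete request can be assembled with Python's standard library. The request below is only constructed, not sent, and the key values are placeholders:

```python
import json
import urllib.request

# Placeholder credentials -- substitute your real keys.
OPENAI_KEY = "sk-..."
RAPIDAPI_KEY = "your-rapidapi-key"

body = {
    "access": "private",  # request bearer_token-protected endpoints
    "timeout": 60,        # seconds, capped at 90 by the API
    "messages": [{"role": "user", "content": "Summarize HTTP/2 in one line."}],
}

req = urllib.request.Request(
    "https://offloadgpt.p.rapidapi.com/v1/async-chatgpt",
    data=json.dumps(body).encode("utf-8"),
    headers={
        "Content-Type": "application/json",
        "X-OpenAI-API-Key": OPENAI_KEY,
        "X-RapidAPI-Key": RAPIDAPI_KEY,
        "X-RapidAPI-Host": "offloadgpt.p.rapidapi.com",
    },
    method="POST",
)
# urllib.request.urlopen(req) would perform the call; it is omitted here.
print(req.full_url, req.method)
```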


Response from the Async ChatGPT endpoint

For a successful request, the response will look as follows, with a success status:

{
    "status": "success",
    "created_at": 1685695773,
    "conversation_id": "b7c4669e-40d4-4d16-bd83-bb34511db8a1",
    "README": "The `status_url` endpoint below continuously updates with data sent by the ChatGPT API. Load it to check for new data.",
    "authorization": {
        "access": "public"
    },
    "endpoints": {
        "status_url": "https://offloadgpt.microdeploy.com/1/r/pub/2023/06/02/08/49/33/b7c4669e-40d4-4d16-bd83-bb34511db8a1.json",
        "stop_url": "https://offloadgpt.microdeploy.com/1/r/pub/2023/06/02/08/49/33/b7c4669e-40d4-4d16-bd83-bb34511db8a1/stop"
    }
}

We can see other properties such as created_at, the conversation_id (filled from the parameters or generated if missing), and the generated endpoints property.
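These properties can be consumed programmatically. The sketch below parses the sample success response shown above (field names taken verbatim from that example):

```python
import json

# Sample success response from the async-chatgpt endpoint (from the docs).
raw = """{
    "status": "success",
    "created_at": 1685695773,
    "conversation_id": "b7c4669e-40d4-4d16-bd83-bb34511db8a1",
    "authorization": {"access": "public"},
    "endpoints": {
        "status_url": "https://offloadgpt.microdeploy.com/1/r/pub/2023/06/02/08/49/33/b7c4669e-40d4-4d16-bd83-bb34511db8a1.json",
        "stop_url": "https://offloadgpt.microdeploy.com/1/r/pub/2023/06/02/08/49/33/b7c4669e-40d4-4d16-bd83-bb34511db8a1/stop"
    }
}"""

resp = json.loads(raw)
assert resp["status"] == "success"            # the request was accepted
status_url = resp["endpoints"]["status_url"]  # poll this for the final answer
stop_url = resp["endpoints"]["stop_url"]      # call this to cancel early
print(resp["conversation_id"], status_url)
```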

Note that this response has been created with the public value of the access argument, as specified in the property authorization.access.

This means that the resulting endpoints are publicly available via GET requests, and can be accessed by anyone even when navigating from a web browser.

Response for private access requests

For private access requests, the response would look as follows:

{
    "status": "success",
    "created_at": 1685695812,
    "conversation_id": "c23780e2-2fc5-4b83-b5bc-5297f47d5360",
    "README": "The `status_url` endpoint below continuously updates with data sent by the ChatGPT API. Load it to check for new data.",
    "authorization": {
        "access": "private",
        "bearer_token": "ad7b1834232536e9c59cb141b5fabe61"
    },
    "endpoints": {
        "status_url": "https://offloadgpt.microdeploy.com/2/r/priv/2023/06/02/08/50/12/c23780e2-2fc5-4b83-b5bc-5297f47d5360.json",
        "stop_url": "https://offloadgpt.microdeploy.com/2/r/priv/2023/06/02/08/50/12/c23780e2-2fc5-4b83-b5bc-5297f47d5360/stop"
    }
}

Here we can see the following changes from the authorization property:

  • The value of access is now private.

  • It provides a bearer_token property.

In private requests, the generated endpoints can be accessed via GET requests using this header:

Authorization: Bearer <bearer_token>
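For example, a GET against a private status_url could attach that header as follows (URL and token are the sample values from the private response above; the request is only constructed, not sent):

```python
import urllib.request

bearer_token = "ad7b1834232536e9c59cb141b5fabe61"  # from the private response
status_url = ("https://offloadgpt.microdeploy.com/2/r/priv/2023/06/02/08/50/12/"
              "c23780e2-2fc5-4b83-b5bc-5297f47d5360.json")

# A Request without a body defaults to the GET method.
req = urllib.request.Request(
    status_url,
    headers={"Authorization": f"Bearer {bearer_token}"},
)
# urllib.request.urlopen(req) would fetch the status; omitted here.
print(req.get_method())
```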

Likewise, if you are chaining conversations using the from_status_url parameter and the referenced conversation has private access, you need to set the from_bearer_token parameter to the previous bearer_token value in order to continue from a private request (even if the new request is public).
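A chained request continuing a private conversation would therefore carry both parameters. In this sketch, the URL and token are the sample values from the private response above, and the message content is illustrative:

```python
import json

# Continue a previous (private) conversation: pass its status_url and
# the bearer_token that was generated with it.
body = {
    "from_status_url": (
        "https://offloadgpt.microdeploy.com/2/r/priv/2023/06/02/08/50/12/"
        "c23780e2-2fc5-4b83-b5bc-5297f47d5360.json"
    ),
    "from_bearer_token": "ad7b1834232536e9c59cb141b5fabe61",
    "from_max_length": 500,  # optional: cap characters kept from the last answer
    "messages": [{"role": "user", "content": "And what about HTTP/3?"}],
}
print(json.dumps(body, indent=2))
```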

Stopping active requests using the stop_url endpoint

While the request is active and has not finished, you can stop and terminate the request using the stop_url endpoint.

It works the same way as the other endpoints: it is publicly accessible for public access, and requires the Authorization: Bearer header for private access.

After the request has finished and the OpenAI API response has been processed, this endpoint no longer has any effect and returns a 405 status code.
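A hedged sketch of this behavior: the http_get callable is injected so it can be illustrated without a live request, the 405 branch reflects the note above, and treating 200 as the success code is an assumption for illustration.

```python
def stop_request(stop_url, http_get, bearer_token=None):
    """Try to stop an active request; return True if it was stopped.

    `http_get` is any callable (url, headers) -> status_code, so a real
    HTTP client can be plugged in. A 405 means the request already finished.
    """
    headers = {}
    if bearer_token:  # private endpoints need the bearer token
        headers["Authorization"] = f"Bearer {bearer_token}"
    status = http_get(stop_url, headers)
    if status == 405:
        return False  # too late: the OpenAI response was already processed
    return status == 200  # assumed success code for illustration

# Stub transport simulating an already-finished request.
finished = stop_request("https://example.invalid/stop", lambda url, hdrs: 405)
print(finished)
```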

Notes on the remaining request-body parameters:

conversation_id: If provided, any other conversation derived from this one will keep this conversation identifier. If not provided, a default id will be generated in uuid format.

model: Refers to the model parameter of the OpenAI Chat Completion API. If omitted, the default value is gpt-3.5-turbo.

messages: Refers to the messages parameter of the OpenAI Chat Completion API. This is the only required parameter.

temperature: Refers to the temperature parameter of the OpenAI Chat Completion API. Defaults to 1.

top_p: Refers to the top_p parameter of the OpenAI Chat Completion API. Defaults to 1.

n: Refers to the n parameter of the OpenAI Chat Completion API. Defaults to 1.

max_tokens: Refers to the max_tokens parameter of the OpenAI Chat Completion API. Defaults to inf.

stop: Refers to the stop parameter of the OpenAI Chat Completion API. Defaults to null.

presence_penalty: Refers to the presence_penalty parameter of the OpenAI Chat Completion API. Defaults to 0.

frequency_penalty: Refers to the frequency_penalty parameter of the OpenAI Chat Completion API. Defaults to 0.

logit_bias: Refers to the logit_bias parameter of the OpenAI Chat Completion API. Defaults to null.

user: Refers to the user parameter of the OpenAI Chat Completion API. Defaults to null.
