# async-chatgpt endpoint

Performs an [OpenAI Chat Completion](https://platform.openai.com/docs/api-reference/chat/create) request and generates custom endpoints to store and display the final response data.

This endpoint is designed for storing final request data. If you need **streaming capabilities**, it is recommended to use the [stream-chatgpt endpoint](https://offloadgpt-docs.microdeploy.com/reference/api-reference/stream-chatgpt-endpoint).

The only required parameter, besides the headers, is `messages`. All other parameters fall back to the default values of the Chat Completion API.

## Request to the Async ChatGPT endpoint

Generates an asynchronous endpoint to store the final chat response.

<mark style="color:green;">`POST`</mark> `https://offloadgpt.p.rapidapi.com/v1/async-chatgpt`

#### Headers

| Name                                               | Type   | Description                 |
| -------------------------------------------------- | ------ | --------------------------- |
| Content-Type                                       | String | `application/json`          |
| X-OpenAI-API-Key<mark style="color:red;">\*</mark> | String | \<Your OpenAI API key>      |
| X-RapidAPI-Key<mark style="color:red;">\*</mark>   | String | \<Your RapidAPI key>        |
| X-RapidAPI-Host<mark style="color:red;">\*</mark>  | String | `offloadgpt.p.rapidapi.com` |

#### Request Body

| Name                                       | Type            | Description                                                                                                                                                                                                                      |
| ------------------------------------------ | --------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| access                                     | string          | Privacy of the generated endpoints: `public` to be available for anyone, or `private` to access only using a generated Bearer Token. Default is `public`.                                                                        |
| timeout                                    | Number          | The timeout of the request in seconds. Default value is 90 seconds. Max timeout allowed is 90 seconds.                                                                                                                           |
| connect\_timeout                           | Number          | The timeout to establish a connection with the OpenAI API. Default value is 5 seconds. Max connection timeout allowed is 10 seconds.                                                                                             |
| from\_status\_url                          | String          | The URL of a previously generated `status_url`. This lets you prepend the previous messages to the new one sent in the current request.                                                                                          |
| from\_bearer\_token                        | String          | If `from_status_url` is set and that URL is private, you must also provide the `bearer_token` that was generated in that same request.                                                                                           |
| conversation\_id                           | String          | If provided, any other conversation derived from this one will keep this conversation identifier. If not provided, a default id will be generated in [uuid format](https://en.wikipedia.org/wiki/Universally_unique_identifier). |
| webhook\_url                               | String          | An external URL that receives, via the POST method, all of the processed information. The request carries a single parameter, `response`, containing a JSON with the same information as the final `status_url` response.         |
| model                                      | String          | Refers to the [model parameter](https://platform.openai.com/docs/api-reference/chat/create#chat/create-model) of the OpenAI Chat Completion API. If omitted, the default value is `gpt-3.5-turbo`.                               |
| messages<mark style="color:red;">\*</mark> | Array           | Refers to the [messages parameter](https://platform.openai.com/docs/api-reference/chat/create#chat/create-messages) of the OpenAI Chat Completion API. This is the only required parameter.                                      |
| temperature                                | Number          | Refers to the [temperature parameter](https://platform.openai.com/docs/api-reference/chat/create#chat/create-temperature) of the OpenAI Chat Completion API. Defaults to 1.                                                      |
| top\_p                                     | Number          | Refers to the [top\_p parameter](https://platform.openai.com/docs/api-reference/chat/create#chat/create-top_p) of the OpenAI Chat Completion API. Defaults to 1.                                                                 |
| n                                          | Integer         | Refers to the [n parameter](https://platform.openai.com/docs/api-reference/chat/create#chat/create-n) of the OpenAI Chat Completion API. Defaults to 1.                                                                          |
| max\_tokens                                | Integer         | Refers to the [max\_tokens parameter](https://platform.openai.com/docs/api-reference/chat/create#chat/create-max_tokens) of the OpenAI Chat Completion API. Defaults to inf.                                                     |
| stop                                       | String or Array | Refers to the [stop parameter](https://platform.openai.com/docs/api-reference/chat/create#chat/create-stop) of the OpenAI Chat Completion API. Defaults to null.                                                                 |
| presence\_penalty                          | Number          | Refers to the [presence\_penalty parameter](https://platform.openai.com/docs/api-reference/chat/create#chat/create-presence_penalty) of the OpenAI Chat Completion API. Defaults to 0.                                           |
| frequency\_penalty                         | Number          | Refers to the [frequency\_penalty parameter](https://platform.openai.com/docs/api-reference/chat/create#chat/create-frequency_penalty) of the OpenAI Chat Completion API. Defaults to 0.                                         |
| logit\_bias                                | Map             | Refers to the [logit\_bias parameter](https://platform.openai.com/docs/api-reference/chat/create#chat/create-logit_bias) of the OpenAI Chat Completion API. Defaults to null.                                                    |
| user                                       | String          | Refers to the [user parameter](https://platform.openai.com/docs/api-reference/chat/create#chat/create-user) of the OpenAI Chat Completion API. Defaults to null.                                                                 |
| from\_max\_length                          | Number          | If `from_status_url` is set, this restricts the number of characters taken from the last response of the previous messages.                                                                                                      |
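As a minimal sketch of how a request could be assembled (the API keys and message content below are placeholders, and the helper name is illustrative):

```python
import json

# Placeholder credentials -- substitute your own keys.
OPENAI_API_KEY = "sk-your-openai-key"
RAPIDAPI_KEY = "your-rapidapi-key"

def build_async_request(messages, **overrides):
    """Assemble headers and JSON body for POST /v1/async-chatgpt.

    `messages` is the only required body parameter; any keyword
    argument (model, temperature, access, ...) overrides a default.
    """
    headers = {
        "Content-Type": "application/json",
        "X-OpenAI-API-Key": OPENAI_API_KEY,
        "X-RapidAPI-Key": RAPIDAPI_KEY,
        "X-RapidAPI-Host": "offloadgpt.p.rapidapi.com",
    }
    body = {"messages": messages, **overrides}
    return headers, json.dumps(body)

headers, payload = build_async_request(
    [{"role": "user", "content": "Hello!"}],
    access="private",
    timeout=60,
)
```

The resulting `headers` and `payload` can then be sent with any HTTP client.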

{% tabs %}
{% tab title="200 Endpoints successfully created" %}

```json
{
    "status": "success",
    "created_at": 1685617626,
    "conversation_id": "24b94bef-d2a6-4faa-bb20-1429e846c9d3",
    "README": "The `stream_events_url` endpoint below streams data sent by the ChatGPT API. Open it to receive incoming messages.",
    "authorization": {
        "access": "public"
    },
    "endpoints": {
        "status_url": "https://api.offloadgpt.com/1/r/pub/2023/06/01/11/07/06/24b94bef-d2a6-4faa-bb20-1429e846c9d3.json",
        "stream_events_url": "https://api.offloadgpt.com/1/r/pub/2023/06/01/11/07/06/24b94bef-d2a6-4faa-bb20-1429e846c9d3.txt",
        "stop_url": "https://api.offloadgpt.com/1/r/pub/2023/06/01/11/07/06/24b94bef-d2a6-4faa-bb20-1429e846c9d3/stop"
    }
}
```

{% endtab %}

{% tab title="401 Permission denied" %}

{% endtab %}
{% endtabs %}

## Response from the Async ChatGPT endpoint

For a successful request, the response will look as follows, with a `success` status:

```json
{
    "status": "success",
    "created_at": 1685695773,
    "conversation_id": "b7c4669e-40d4-4d16-bd83-bb34511db8a1",
    "README": "The `status_url` endpoint below continuously updates with data sent by the ChatGPT API. Load it to check for new data.",
    "authorization": {
        "access": "public"
    },
    "endpoints": {
        "status_url": "https://offloadgpt.microdeploy.com/1/r/pub/2023/06/02/08/49/33/b7c4669e-40d4-4d16-bd83-bb34511db8a1.json",
        "stop_url": "https://offloadgpt.microdeploy.com/1/r/pub/2023/06/02/08/49/33/b7c4669e-40d4-4d16-bd83-bb34511db8a1/stop"
    }
}
```

We can see other properties such as `created_at`, the `conversation_id` (filled from the parameters or generated if missing), and the generated `endpoints` property.

{% hint style="info" %}
Note that this response has been created with the `access` argument set to **`public`**, as reflected in the `authorization.access` property.

This means that the resulting endpoints are **publicly available via GET requests**, and can be accessed by anyone even when navigating from a web browser.
{% endhint %}
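Reading the generated endpoints out of such a creation response is straightforward; a small helper (the function name is illustrative) might look like this, using the public-access example above:

```python
import json

# The public-access creation response shown above.
creation_response = json.loads("""
{
    "status": "success",
    "created_at": 1685695773,
    "conversation_id": "b7c4669e-40d4-4d16-bd83-bb34511db8a1",
    "authorization": {"access": "public"},
    "endpoints": {
        "status_url": "https://offloadgpt.microdeploy.com/1/r/pub/2023/06/02/08/49/33/b7c4669e-40d4-4d16-bd83-bb34511db8a1.json",
        "stop_url": "https://offloadgpt.microdeploy.com/1/r/pub/2023/06/02/08/49/33/b7c4669e-40d4-4d16-bd83-bb34511db8a1/stop"
    }
}
""")

def endpoint_for(response, name):
    """Return a named endpoint after checking the request succeeded."""
    if response["status"] != "success":
        raise RuntimeError(f"endpoint creation failed: {response['status']}")
    return response["endpoints"][name]

status_url = endpoint_for(creation_response, "status_url")
```

Since the access is public, `status_url` can then be fetched with a plain GET, no extra headers needed.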

### Response for private access requests

For **private** access requests, the response would look as follows:

```json
{
    "status": "success",
    "created_at": 1685695812,
    "conversation_id": "c23780e2-2fc5-4b83-b5bc-5297f47d5360",
    "README": "The `status_url` endpoint below continuously updates with data sent by the ChatGPT API. Load it to check for new data.",
    "authorization": {
        "access": "private",
        "bearer_token": "ad7b1834232536e9c59cb141b5fabe61"
    },
    "endpoints": {
        "status_url": "https://offloadgpt.microdeploy.com/2/r/priv/2023/06/02/08/50/12/c23780e2-2fc5-4b83-b5bc-5297f47d5360.json",
        "stop_url": "https://offloadgpt.microdeploy.com/2/r/priv/2023/06/02/08/50/12/c23780e2-2fc5-4b83-b5bc-5297f47d5360/stop"
    }
}
```

Here we can see the following changes in the `authorization` property:

* The value of `access` is now `private`.
* It provides a `bearer_token` property.

In private requests, the generated endpoints can be accessed via GET requests using this header:

```
Authorization: Bearer <bearer_token>
```
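For example, with Python's standard library, an authenticated GET against the private `status_url` above could be prepared as follows (the request is only built here, not sent):

```python
import urllib.request

# Values taken from the private-access example response above.
bearer_token = "ad7b1834232536e9c59cb141b5fabe61"
status_url = (
    "https://offloadgpt.microdeploy.com/2/r/priv/2023/06/02/08/50/12/"
    "c23780e2-2fc5-4b83-b5bc-5297f47d5360.json"
)

# Build a GET request carrying the Authorization: Bearer header.
request = urllib.request.Request(
    status_url,
    headers={"Authorization": f"Bearer {bearer_token}"},
)
```

Passing `request` to `urllib.request.urlopen` would then perform the authenticated fetch.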

{% hint style="info" %}
Likewise, if you are chaining conversations using the `from_status_url` parameter and the referenced conversation has private access, you need to set the **`from_bearer_token`** parameter to the previous `bearer_token` value in order to continue from a private request (even if the new request is public).
{% endhint %}
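A follow-up request body for such a chained conversation could be built along these lines (the helper name is illustrative; the `previous` dictionary mirrors the private-access response above):

```python
def chain_request_body(new_content, previous):
    """Body for a request chained from a previous creation response.

    If the previous request was private, its bearer_token travels
    along as from_bearer_token, even when the new request is public.
    """
    body = {
        "messages": [{"role": "user", "content": new_content}],
        "from_status_url": previous["endpoints"]["status_url"],
        "conversation_id": previous["conversation_id"],
    }
    if previous["authorization"]["access"] == "private":
        body["from_bearer_token"] = previous["authorization"]["bearer_token"]
    return body

previous = {
    "conversation_id": "c23780e2-2fc5-4b83-b5bc-5297f47d5360",
    "authorization": {
        "access": "private",
        "bearer_token": "ad7b1834232536e9c59cb141b5fabe61",
    },
    "endpoints": {
        "status_url": "https://offloadgpt.microdeploy.com/2/r/priv/2023/06/02/08/50/12/c23780e2-2fc5-4b83-b5bc-5297f47d5360.json"
    },
}
body = chain_request_body("Tell me more.", previous)
```

Reusing the previous `conversation_id` keeps the whole chain under one identifier.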

### Stopping active requests using the stop\_url endpoint

While the request is still active, you can stop and terminate it using the **`stop_url`** endpoint.

It works the same way as the other endpoints: it is publicly accessible for public access, and requires the `Authorization: Bearer` header for private access.
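A small sketch of preparing both variants, reusing the `stop_url` values from the example responses above (requests are only built, not sent):

```python
import urllib.request

def stop_request(stop_url, bearer_token=None):
    """Build a GET request to stop_url; auth only for private access."""
    headers = {}
    if bearer_token is not None:
        headers["Authorization"] = f"Bearer {bearer_token}"
    return urllib.request.Request(stop_url, headers=headers)

# Public access needs no header; private access reuses the bearer_token.
public_req = stop_request(
    "https://api.offloadgpt.com/1/r/pub/2023/06/01/11/07/06/"
    "24b94bef-d2a6-4faa-bb20-1429e846c9d3/stop"
)
private_req = stop_request(
    "https://offloadgpt.microdeploy.com/2/r/priv/2023/06/02/08/50/12/"
    "c23780e2-2fc5-4b83-b5bc-5297f47d5360/stop",
    bearer_token="ad7b1834232536e9c59cb141b5fabe61",
)
```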

{% hint style="info" %}
After the request has finished and the OpenAI API response has been processed, this endpoint no longer has any effect and returns a `405` status code.
{% endhint %}
