# Introduction

The [OffloadGPT API](https://rapidapi.com/microdeploy/api/offloadgpt) is a server-side service that manages OpenAI ChatGPT API requests on your behalf.

## How it works

OffloadGPT is an asynchronous API that stores ChatGPT responses at generated permalinks, **so you never have to wait for the OpenAI API response**.

It saves the chat status of every request so you can easily retrieve or continue the conversation.
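The asynchronous flow above can be sketched as: submit a request, get a permalink back immediately, and poll that permalink later for the stored response. The snippet below builds such a submission; the endpoint path, header names, and response fields are assumptions for illustration, not the documented API contract.

```python
# Hypothetical sketch of the asynchronous flow. The host, endpoint path,
# and header names are assumptions, not the documented API contract.
import json
import urllib.request

API_BASE = "https://offloadgpt.p.rapidapi.com"  # assumed RapidAPI host

def build_chat_request(api_key: str, messages: list) -> urllib.request.Request:
    """Build the submission request; the body mirrors OpenAI Chat Completion params."""
    payload = {
        "model": "gpt-3.5-turbo",
        "messages": messages,
    }
    return urllib.request.Request(
        f"{API_BASE}/chat",  # hypothetical submission endpoint
        data=json.dumps(payload).encode(),
        headers={
            "Content-Type": "application/json",
            "X-RapidAPI-Key": api_key,  # RapidAPI's usual auth header
        },
        method="POST",
    )

# The submission returns at once with a permalink (e.g. a "status_url"
# field) that can be polled later, so your server never blocks on the
# OpenAI round trip.
req = build_chat_request("YOUR_KEY", [{"role": "user", "content": "Hello"}])
```

Polling the returned permalink (or receiving a webhook notification, as described below) then yields the full processed response once OpenAI has answered.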

The API offers support for the following features:

* Delegates API requests, relieving your server of long-running scripts.
* Instantly generates a custom endpoint permalink for each ChatGPT request.
* Executes multiple requests in parallel without increasing your server load.
* Is fully compatible with the official [OpenAI Chat Completion](https://platform.openai.com/docs/api-reference/chat/create) parameters.
* Offers private and public access, so you can share conversations or keep them confidential.
* Stores streaming and asynchronous API responses in real time.
* Notifies external webhook URLs when a request finishes, delivering the full processed data.
* Concatenates messages from previous responses using the `from_status_url` param.
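As a quick illustration of the last point, a follow-up request can point at an earlier response's permalink via the documented `from_status_url` param so the prior messages are concatenated automatically. This is a minimal sketch; the permalink value and the other payload fields are hypothetical, modeled on the OpenAI Chat Completion parameters.

```python
# Hedged sketch: continue a conversation by referencing a previous
# response's permalink. Only `from_status_url` is a documented param;
# the permalink below and the surrounding fields are illustrative.

def build_follow_up(prev_status_url: str, user_text: str) -> dict:
    """Payload that pulls in the messages from an earlier stored response."""
    return {
        "model": "gpt-3.5-turbo",
        "from_status_url": prev_status_url,  # documented param: prior messages
        "messages": [{"role": "user", "content": user_text}],
    }

payload = build_follow_up(
    "https://offloadgpt.p.rapidapi.com/status/abc123",  # hypothetical permalink
    "And what about tomorrow?",
)
```

Because the prior conversation lives behind the permalink, your server does not need to resend the whole message history itself.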

## Want to jump right in?

Feeling like an eager beaver? Jump into the quick start docs and make your first request:

{% content-ref url="quick-start" %}
[quick-start](https://offloadgpt-docs.microdeploy.com/quick-start)
{% endcontent-ref %}

## Want to deep dive?

Dive a little deeper and start exploring our API reference to get an idea of everything that's possible with the API:

{% content-ref url="reference/api-reference" %}
[api-reference](https://offloadgpt-docs.microdeploy.com/reference/api-reference)
{% endcontent-ref %}
