OffloadGPT API

Introduction

Generate asynchronous conversation endpoints for the OpenAI ChatGPT API


Last updated 1 year ago

OffloadGPT is a server-side API client that manages OpenAI ChatGPT API requests.

How it works

OffloadGPT is an asynchronous API that stores ChatGPT responses at generated permalinks, so your application does not need to wait for the OpenAI API response.

It saves the chat status of every request so the conversation can easily be retrieved or continued.
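The asynchronous flow described above can be sketched as follows. The base URL, endpoint path, and JSON field names here are illustrative assumptions, not the documented API; check the API Reference for the real schema.

```python
import json
import time
import urllib.request

# Hypothetical base URL, for illustration only.
BASE_URL = "https://offloadgpt.example.com"

def build_chat_request(messages, model="gpt-3.5-turbo"):
    """Build the JSON body for the async-chatgpt endpoint (official OpenAI params)."""
    return {"model": model, "messages": messages}

def submit_chat(body, api_key):
    """POST the request and return the generated status permalink (assumed field name)."""
    req = urllib.request.Request(
        f"{BASE_URL}/async-chatgpt",
        data=json.dumps(body).encode(),
        headers={"Authorization": f"Bearer {api_key}",
                 "Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["status_url"]

def poll_status(status_url, fetch, interval=2.0, max_tries=30):
    """Poll the status permalink until the stored response is marked complete.

    `fetch` is any callable returning the decoded JSON at `status_url`,
    which keeps the polling logic testable without a live server.
    """
    for _ in range(max_tries):
        data = fetch(status_url)
        if data.get("status") == "completed":
            return data
        time.sleep(interval)
    raise TimeoutError(f"response not ready at {status_url}")
```

Because the permalink is generated instantly, your server can return it to the client right away and poll (or let the client poll) at its own pace.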

The API offers support for the following features:

  • Delegates API requests, relieving your server of long-running scripts.

  • Instantly generates a custom endpoint permalink for each ChatGPT request.

  • Executes multiple requests in parallel without increasing your server load.

  • Full compatibility with the official OpenAI Chat Completion parameters.

  • Private and public access modes, to share conversations with others or keep them confidential.

  • Real-time storage of streaming and asynchronous API responses.

  • Notifies external webhook URLs when a request finishes, sending the full processed data.

  • Concatenates messages from previous responses using the from_status_url param.
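The last two features can be combined to continue a conversation without resending its history. In this sketch, only `from_status_url` comes from the docs; the other field names (including `webhook_url`) are hypothetical placeholders for illustration.

```python
def build_followup_request(previous_status_url, user_message,
                           model="gpt-3.5-turbo", webhook_url=None):
    """Build a request whose message history is concatenated from a prior response."""
    body = {
        "model": model,
        # Prior messages are pulled from this permalink (documented param).
        "from_status_url": previous_status_url,
        "messages": [{"role": "user", "content": user_message}],
    }
    if webhook_url is not None:
        # Assumed parameter name for the webhook notification feature.
        body["webhook_url"] = webhook_url
    return body
```

A follow-up request built this way only carries the new user message; the API resolves the earlier turns from the stored status permalink.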

Want to jump right in?

Feeling like an eager beaver? Jump into the Quick Start docs and make your first request.

Want to deep dive?

Dive a little deeper and start exploring our API Reference to get an idea of everything that's possible with the API.
