This repository provides a Dockerized proxy for the OpenAI API, simplifying and streamlining interaction with the models.
With the Docker image, you can easily deploy a proxy instance that serves as a gateway between your application and the OpenAI API, reducing the complexity of API interactions and enabling more efficient development.
- For users who are restricted from direct access to the OpenAI API, particularly those in countries where OpenAI blocks API access (effective July 2024)
- For users who need to access private APIs that lack Cross-Origin Resource Sharing (CORS) headers; this solution provides a proxy that bypasses CORS restrictions and enables seamless API interactions
- API demo: https://api.aiql.com
- UI demo: ChatUI
- OpenAI API Reference (official docs)
- RESTful OpenAPI (provided by AIQL)
Just run:

```shell
sudo docker run -d -p 9017:9017 aiql/openai-proxy-docker:latest
```

Then you can use it via YOURIP:9017.

For example, the proxied OpenAI Chat Completion API will be:

YOURIP:9017/v1/chat/completions

It should behave the same as:

api.openai.com/v1/chat/completions
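As a quick sanity check, you can call the proxied endpoint exactly as you would the upstream API. The sketch below assumes a valid OpenAI API key in the OPENAI_API_KEY environment variable; the model name is only a placeholder:

```shell
# Call the Chat Completion API through the proxy instead of api.openai.com
curl http://YOURIP:9017/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -d '{
    "model": "gpt-4o-mini",
    "messages": [{"role": "user", "content": "Hello!"}]
  }'
```

The request body and headers are unchanged; only the host differs from a direct OpenAI call.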
You can change the default port and the default target by setting environment variables with `-e`
in Docker, which means you can use it for any backend that follows the OpenAI API format:

| Parameter | Default Value |
|---|---|
| PORT | 9017 |
| TARGET | https://api.openai.com |
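For example, a sketch of overriding both defaults to run the proxy on a custom port (the port 8080 here is illustrative; PORT and TARGET are the variables from the table above):

```shell
# Run the proxy on port 8080, still targeting the OpenAI API
sudo docker run -d -p 8080:8080 \
  -e PORT=8080 \
  -e TARGET=https://api.openai.com \
  aiql/openai-proxy-docker:latest
```

The proxied API would then be reachable at YOURIP:8080/v1/chat/completions.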
Click below to use the GitHub Codespace:
Or fork this repo and create a codespace manually:
- Wait for the environment to be ready in your browser
```shell
npm ci
npm start
```
Then, the codespace will forward a port (9017 by default) so you can check that the proxy is running.
If everything is OK, check the Docker build:

```shell
docker build .
```
If you want to maintain your own Docker image, refer to the GitHub Actions workflow:
fork this repo and set DOCKERHUB_USERNAME
and DOCKERHUB_TOKEN
in your repository secrets.
Normally, the steps are:
- Fork this repo
- Settings → Secrets and variables → Actions → New repository secret
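If you prefer the command line, the same secrets can be set with GitHub's gh CLI. This is a sketch that assumes gh is installed and authenticated against your fork; the values are placeholders:

```shell
# Set the Docker Hub credentials as repository secrets on the current repo
gh secret set DOCKERHUB_USERNAME --body "your-dockerhub-username"
gh secret set DOCKERHUB_TOKEN --body "your-dockerhub-access-token"
```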
You can apply this approach to other APIs, such as Nvidia NIM:
- The proxied Nvidia NIM Completion API will be:
YOURIP:9101/v1/chat/completions
For convenience, a readily available API is provided for those who prefer not to deploy it independently:
https://nvidia.aiql.com/v1/chat/completions
```yaml
services:
  nvidia-proxy:
    image: aiql/openai-proxy-docker:latest
    container_name: nvidia-proxy
    environment:
      PORT: "9101"
      TARGET: "https://integrate.api.nvidia.com"
    restart: always
    network_mode: host
```
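With the compose file above saved as docker-compose.yml, the proxy can be started and smoke-tested as follows. This assumes an Nvidia API key in NVIDIA_API_KEY and that the upstream exposes the OpenAI-compatible /v1/models endpoint:

```shell
# Start the nvidia-proxy service in the background
docker compose up -d

# List the available models through the proxy on port 9101
curl http://localhost:9101/v1/models \
  -H "Authorization: Bearer $NVIDIA_API_KEY"
```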
You can apply this approach with your own domain over HTTPS:

- YOUREMAILADDR@example.com will be used to receive certificate notifications from the ACME server
- The proxied OpenAI Chat Completion API will be: api.example.com/v1/chat/completions
- api.example.com should be replaced by your domain name
```yaml
services:
  nginx-proxy:
    image: nginxproxy/nginx-proxy
    container_name: nginx-proxy
    ports:
      - "80:80"
      - "443:443/tcp"
      - "443:443/udp"
    environment:
      ENABLE_HTTP3: "true"
    volumes:
      - conf:/etc/nginx/conf.d
      - vhost:/etc/nginx/vhost.d
      - html:/usr/share/nginx/html
      - certs:/etc/nginx/certs:ro
      - /var/run/docker.sock:/tmp/docker.sock:ro
    restart: always
    network_mode: bridge

  acme-companion:
    image: nginxproxy/acme-companion
    container_name: nginx-proxy-acme
    environment:
      - DEFAULT_EMAIL=YOUREMAILADDR@example.com
    volumes_from:
      - nginx-proxy
    volumes:
      - certs:/etc/nginx/certs:rw
      - acme:/etc/acme.sh
      - /var/run/docker.sock:/var/run/docker.sock:ro
    network_mode: bridge

  openai-proxy:
    image: aiql/openai-proxy-docker:latest
    container_name: openai-proxy
    environment:
      LETSENCRYPT_HOST: api.example.com
      VIRTUAL_HOST: api.example.com
      VIRTUAL_PORT: "9017"
    network_mode: host
    depends_on:
      - "nginx-proxy"

volumes:
  conf:
  vhost:
  html:
  certs:
  acme:
```
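After substituting your own domain and email into the compose file, the stack can be brought up like this. Certificate issuance by acme-companion can take a minute or two; the container names below match those in the compose file:

```shell
# Start nginx-proxy, acme-companion, and openai-proxy in the background
docker compose up -d

# Follow the ACME companion logs until the certificate is issued
docker logs -f nginx-proxy-acme
```

Once the certificate is in place, the proxied API should answer over HTTPS at api.example.com/v1/chat/completions.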