- 📍 Overview
- 📦 Features
- 📂 Structure
- 💻 Installation
- 🏗️ Usage
- 🌐 Hosting
- 📜 License
- 👏 Authors
The AI Request Handler is a backend service designed to simplify integration with the OpenAI API, allowing developers and businesses to leverage AI capabilities in their applications with minimal complexity. The system acts as middleware, encapsulating the complexities of API requests and responses while enhancing the user experience.
|    | Feature | Description |
|----|---------|-------------|
| ⚙️ | Simplified API | Streamlines requests to OpenAI through a dedicated API endpoint. |
| ✅ | Request Validation | Validates incoming requests to ensure they meet the expected format prior to processing. |
| 🔄 | Response Formatting | Reformats responses from OpenAI into a standardized JSON format for easier consumption. |
| 🚨 | Error Handling | Captures errors during API interactions, providing meaningful feedback to users in a well-defined format. |
| 🏥 | Health Check | Endpoint to monitor the application's status, ensuring it is operational at all times. |
| ⏱️ | Rate Limiting | Implements controls to manage the frequency of requests made to the OpenAI API. |
| 📖 | Comprehensive Logging | Tracks all interactions and errors for easier debugging and monitoring. |
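To make the validation, response-formatting, and health-check features concrete, here is a minimal hypothetical sketch of how they could be wired together. It assumes a FastAPI application (suggested, but not confirmed, by the `uvicorn main:app` entry point used below); the model classes and the `call_openai` placeholder are illustrative names, not the project's actual code.

```python
# Hypothetical sketch (not the project's source) of request validation,
# standardized response formatting, and a health check, assuming FastAPI.
from typing import Optional

from fastapi import FastAPI, HTTPException
from pydantic import BaseModel, Field

app = FastAPI()


class AIRequest(BaseModel):
    """Incoming payload; fields mirror the example request later in this README."""
    prompt: str = Field(..., min_length=1)
    max_tokens: int = Field(50, ge=1, le=2048)


class AIResponse(BaseModel):
    """Standardized envelope; fields mirror the example response shown later."""
    output: Optional[str]
    status_code: int
    message: Optional[str] = None


async def call_openai(prompt: str, max_tokens: int) -> str:
    """Placeholder for the service-layer call to OpenAI."""
    raise NotImplementedError


@app.get("/health")
async def health_check() -> dict:
    # Simple liveness probe matching the /health endpoint listed under Usage.
    return {"status": "ok"}


@app.post("/ai/request", response_model=AIResponse)
async def handle_ai_request(request: AIRequest) -> AIResponse:
    try:
        text = await call_openai(request.prompt, request.max_tokens)
        return AIResponse(output=text, status_code=200)
    except Exception as exc:
        # Errors are surfaced in a well-defined JSON error format.
        raise HTTPException(status_code=502, detail=str(exc))
```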
.
├── .env
├── .gitignore
├── requirements.txt
├── main.py
├── startup.sh
├── api
│   └── v1
│       ├── routes.py
│       └── controllers
│           └── aiController.py
├── models
│   ├── requestModel.py
│   └── responseModel.py
├── services
│   └── aiService.py
├── database
│   └── db.py
├── utils
│   ├── logger.py
│   └── validation.py
└── tests
    ├── unit
    │   └── test_ai_service.py
    └── integration
        └── test_routes.py
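The README does not include the module sources. As a rough orientation only, a service layer like `services/aiService.py` could look something like the sketch below; the use of the official `openai` Python client (v1.x), the `generate_completion` function name, and the `gpt-3.5-turbo-instruct` model are assumptions, not details confirmed by this project.

```python
# Hypothetical sketch of a service layer in the spirit of services/aiService.py.
# Assumes the official `openai` client (v1.x); the model name is illustrative only.
import os

from openai import OpenAI

client = OpenAI(api_key=os.getenv("OPENAI_API_KEY"))


def generate_completion(prompt: str, max_tokens: int = 50) -> str:
    """Send a prompt to the OpenAI completions endpoint and return the text."""
    response = client.completions.create(
        model="gpt-3.5-turbo-instruct",  # assumed model, not specified by the README
        prompt=prompt,
        max_tokens=max_tokens,
    )
    return response.choices[0].text.strip()
```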
Prerequisites:
- Python 3.9+
- PostgreSQL Database
- OpenAI API Key
- Clone the repository:
  `git clone https://github.com/coslynx/ai-request-handler-mvp.git`
  `cd ai-request-handler-mvp`
- Create a `.env` file and add your environment configuration (the sketch after this list shows one way these variables might be read at runtime):
  `OPENAI_API_KEY=your_openai_api_key`
  `DATABASE_URL=postgresql://username:password@localhost/dbname`
  `JWT_SECRET=your_jwt_secret`
  `PORT=8000`
- Install the necessary dependencies:
  `pip install -r requirements.txt`
- Run the application:
  `uvicorn main:app --host 0.0.0.0 --port 8000 --reload`
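The README does not show how these settings are consumed at runtime. A common pattern, sketched below as an assumption rather than the project's actual code, is to load the `.env` file with `python-dotenv` and read the values through `os.getenv`:

```python
# Hypothetical configuration loader (not the project's actual code).
# Assumes the python-dotenv package: pip install python-dotenv
import os

from dotenv import load_dotenv

load_dotenv()  # reads variables from the .env file created above

OPENAI_API_KEY = os.getenv("OPENAI_API_KEY")
DATABASE_URL = os.getenv("DATABASE_URL")
JWT_SECRET = os.getenv("JWT_SECRET")
PORT = int(os.getenv("PORT", "8000"))

if not OPENAI_API_KEY:
    raise RuntimeError("OPENAI_API_KEY is not set; check your .env file")
```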
- Start the application:
  `uvicorn main:app --host 0.0.0.0 --port $PORT`
- Access the application through the following endpoints:
  - Health Check: http://localhost:8000/health
  - AI Request: http://localhost:8000/ai/request
- Modify the `.env` file for different environment settings, ensuring your database and API keys are correctly set.
Example request:

POST http://localhost:8000/ai/request
Content-Type: application/json

{
  "prompt": "Translate the following English text to French: 'Hello, world!'",
  "max_tokens": 50
}

Example response:

{
  "output": "Bonjour le monde!",
  "status_code": 200,
  "message": null
}
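Beyond raw HTTP, the endpoints can be exercised from a short Python script. The example below is illustrative: it assumes the `requests` package and a server running locally on port 8000, and mirrors the request and response shapes shown above.

```python
# Quick end-to-end check of the running service.
# Assumes the `requests` package and a local server on port 8000.
import requests

# The health check should return a 200 while the service is operational.
health = requests.get("http://localhost:8000/health", timeout=5)
print("health:", health.status_code)

payload = {
    "prompt": "Translate the following English text to French: 'Hello, world!'",
    "max_tokens": 50,
}
resp = requests.post("http://localhost:8000/ai/request", json=payload, timeout=30)
resp.raise_for_status()
print(resp.json())  # e.g. {'output': 'Bonjour le monde!', 'status_code': 200, 'message': None}
```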
Deploy this application on any platform that supports ASGI applications, such as Heroku or AWS.
- Use a service like Heroku for hosting:
  - Install the Heroku CLI, then log in and create the app:
    `heroku login`
    `heroku create ai-request-handler`
  - Set the environment variables:
    `heroku config:set OPENAI_API_KEY=your_key`
    `heroku config:set DATABASE_URL=your_url`
  - Deploy the app:
    `git push heroku main`
Required environment variables:
- `DATABASE_URL`: Connection string for PostgreSQL.
- `JWT_SECRET`: Secret key used for JWT encoding.
- `OPENAI_API_KEY`: Key for accessing OpenAI services.
This project is licensed under the MIT License - see the LICENSE file for details.
This Minimum Viable Product (MVP) was created by Drix10 and Kais Radwan.
Create Your Custom MVP in Minutes With CosLynxAI!