A Flask-based application that rephrases text generated by large language models (LLMs) to sound more natural and human-like. The tool combines context-aware rephrasing, grammar enhancement, and semantic restructuring.
- Real-time text humanization
- Context-aware rephrasing
- Secure API with CSRF protection
- Caching for improved performance
- Response compression
- Detailed error handling
- Logging and monitoring
- Python 3.11 or higher
- pip (Python package manager)
- Virtual environment (recommended)
- Clone the repository:
git clone https://github.com/yourusername/re-phrasing-tool.git
cd re-phrasing-tool
- Create and activate a virtual environment:
python -m venv .venv
source .venv/bin/activate # On Windows: .venv\Scripts\activate
- Install dependencies:
pip install -r requirements.txt
- Configure the application:
- Copy text_humanizer/config/config.example.json to text_humanizer/config/config.development.json
- Adjust settings as needed
- Set the environment:
export APP_ENV=development # On Windows: set APP_ENV=development
- Start the server:
python -m text_humanizer.main
The application will be available at http://127.0.0.1:5000
Humanizes the provided text using context-aware rephrasing.
Request Body:
{
"text": "Text to be humanized",
"context": "Optional context for better understanding",
"style": "Optional style preferences"
}
Response:
{
"status": "success",
"humanized_text": "Rephrased human-like text",
"confidence_score": 0.95
}
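The request and response shapes above can be exercised with a small client-side helper. This is an illustrative sketch, not part of the actual API: the helper names are made up, and only the fields shown in the examples above are assumed to exist.

```python
def build_humanize_request(text, context=None, style=None):
    """Build the JSON body for the humanize endpoint, omitting unset optional fields."""
    if not text or not text.strip():
        raise ValueError("text is required and must be non-empty")
    body = {"text": text}
    if context is not None:
        body["context"] = context
    if style is not None:
        body["style"] = style
    return body

def parse_humanize_response(payload):
    """Extract the rephrased text, checking the documented success envelope."""
    if payload.get("status") != "success":
        raise RuntimeError(f"humanize failed: {payload}")
    return payload["humanized_text"], payload.get("confidence_score")

# Exercising the documented response shape:
req = build_humanize_request("Text to be humanized", context="Optional context")
text, score = parse_humanize_response(
    {"status": "success", "humanized_text": "Rephrased human-like text", "confidence_score": 0.95}
)
```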
Retrieves available context segments for text humanization.
Response:
{
"status": "success",
"contexts": [
{
"id": "context-1",
"description": "Context description",
"active": true
}
]
}
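Since each context entry carries an active flag, a client will typically filter the payload down to usable contexts. A minimal sketch (the helper name is illustrative):

```python
def active_contexts(payload):
    """Return only the contexts flagged active in a contexts response."""
    if payload.get("status") != "success":
        return []
    return [c for c in payload.get("contexts", []) if c.get("active")]

sample = {
    "status": "success",
    "contexts": [
        {"id": "context-1", "description": "Context description", "active": True},
        {"id": "context-2", "description": "Inactive example", "active": False},
    ],
}
print([c["id"] for c in active_contexts(sample)])  # ['context-1']
```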
text_humanizer/
├── main.py # Application entry point
├── input_processor/ # Text preprocessing
├── context_manager/ # Context handling
├── llm_client/ # LLM integration
├── providers/ # Service providers
└── utils/ # Utility functions
- Input Processor: Handles text preprocessing and validation
- Context Manager: Manages context storage and retrieval
- LLM Client: Interfaces with language models
- Local LLM Provider: Implements local model integration
- Config Manager: Handles configuration and hot-reloading
- Request received → Input validation
- Context retrieval and processing
- LLM processing with context
- Response formatting and caching
- Compressed response delivery
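The five stages above can be sketched as a plain pipeline. This is an illustration only: the function names and collaborator objects are assumptions, and the real logic lives in the input_processor, context_manager, and llm_client modules.

```python
import gzip
import json

def handle_request(raw_body, llm, context_store, cache):
    # 1. Request received -> input validation
    data = json.loads(raw_body)
    text = data.get("text", "").strip()
    if not text:
        return {"status": "error", "message": "text is required"}

    # 4. Caching: serve a previously formatted response when available
    if text in cache:
        return cache[text]

    # 2. Context retrieval and processing
    context = context_store.get(data.get("context", ""), "")

    # 3. LLM processing with context
    humanized = llm(text, context)

    # 4. Response formatting and caching
    response = {"status": "success", "humanized_text": humanized}
    cache[text] = response
    return response

def compress(response):
    # 5. Compressed response delivery (gzip over the JSON body)
    return gzip.compress(json.dumps(response).encode())
```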
The application uses a hierarchical configuration system:
- Default configuration (config.default.json)
- Environment-specific configuration (config.[environment].json)
- Environment variables (override file-based config)
{
"debug_mode": false,
"secret_key": "your-secret-key",
"csrf_enabled": true,
"cache_settings": {
"type": "simple",
"timeout": 300
},
"model_settings": {
"model_name": "your-model",
"temperature": 0.7
}
}
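The layering described above (defaults, then the environment-specific file, then environment variables) amounts to a recursive merge. The sketch below follows the file names and APP_ENV convention used in this README, but the helper itself is illustrative, not the project's actual loader:

```python
import json
import os

def deep_merge(base, override):
    """Recursively overlay override onto base, returning a new dict."""
    merged = dict(base)
    for key, value in override.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = deep_merge(merged[key], value)
        else:
            merged[key] = value
    return merged

def load_config(config_dir, env=None, environ=None):
    """Load config.default.json, overlay config.<env>.json, then env vars."""
    environ = os.environ if environ is None else environ
    env = env or environ.get("APP_ENV", "development")
    with open(os.path.join(config_dir, "config.default.json")) as f:
        config = json.load(f)
    try:
        with open(os.path.join(config_dir, f"config.{env}.json")) as f:
            config = deep_merge(config, json.load(f))
    except FileNotFoundError:
        pass
    # Environment variables override file-based config (e.g. SECRET_KEY)
    if "SECRET_KEY" in environ:
        config["secret_key"] = environ["SECRET_KEY"]
    return config
```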
- Set up a production server (e.g., Ubuntu 20.04 LTS)
- Install Python 3.11 and required packages
- Configure a reverse proxy (nginx recommended)
- Set up SSL/TLS certificates
- Configure environment variables:
export APP_ENV=production   # On Windows: set APP_ENV=production
export SECRET_KEY=your-secure-key
- Use a process manager (e.g., supervisord)
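For the process-manager step, a supervisord program section might look like the following. All paths and names here are placeholders; adjust them to your install location:

```
; /etc/supervisor/conf.d/text-humanizer.conf (illustrative)
[program:text-humanizer]
command=/opt/re-phrasing-tool/.venv/bin/python -m text_humanizer.main
directory=/opt/re-phrasing-tool
environment=APP_ENV="production",SECRET_KEY="your-secure-key"
autostart=true
autorestart=true
stdout_logfile=/var/log/text-humanizer.out.log
stderr_logfile=/var/log/text-humanizer.err.log
```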
FROM python:3.11-slim
WORKDIR /app
COPY . .
RUN pip install -r requirements.txt
CMD ["python", "-m", "text_humanizer.main"]
Build and run:
docker build -t text-humanizer .
docker run -p 5000:5000 text-humanizer
- Fork the repository
- Create a feature branch
- Commit your changes
- Push to the branch
- Create a Pull Request
This project is licensed under the MIT License - see the LICENSE file for details.
For support, please open an issue in the GitHub repository or contact the maintainers.