Text Humanizer

A sophisticated Flask-based application that rephrases text generated by large language models (LLMs) to make it sound more natural and human-like. The tool employs advanced techniques including context-aware rephrasing, grammar enhancement, and semantic restructuring.

Features

  • Real-time text humanization
  • Context-aware rephrasing
  • Secure API with CSRF protection
  • Caching for improved performance
  • Response compression
  • Detailed error handling
  • Logging and monitoring

Quick Start

Prerequisites

  • Python 3.11 or higher
  • pip (Python package manager)
  • Virtual environment (recommended)

Installation

  1. Clone the repository:
git clone https://github.com/yourusername/re-phrasing-tool.git
cd re-phrasing-tool
  2. Create and activate a virtual environment:
python -m venv .venv
source .venv/bin/activate  # On Windows: .venv\Scripts\activate
  3. Install dependencies:
pip install -r requirements.txt
  4. Configure the application:
  • Copy text_humanizer/config/config.example.json to text_humanizer/config/config.development.json
  • Adjust settings as needed

Running the Application

  1. Set the environment:
export APP_ENV=development  # On Windows: set APP_ENV=development
  2. Start the server:
python -m text_humanizer.main

The application will be available at http://127.0.0.1:5000

API Documentation

REST Endpoints

POST /api/humanize

Humanizes the provided text using context-aware rephrasing.

Request Body:

{
    "text": "Text to be humanized",
    "context": "Optional context for better understanding",
    "style": "Optional style preferences"
}

Response:

{
    "status": "success",
    "humanized_text": "Rephrased human-like text",
    "confidence_score": 0.95
}
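
For example, the endpoint can be called from Python with the requests library. This is a minimal sketch that assumes the development server from the Quick Start is running locally on port 5000; if CSRF protection is enforced for API clients, the corresponding token header would also need to be supplied.

import requests

# Minimal client call; assumes a local development server on port 5000.
payload = {
    "text": "The aforementioned methodology facilitates optimal outcomes.",
    "context": "Casual blog post",   # optional
    "style": "conversational"        # optional
}

response = requests.post("http://127.0.0.1:5000/api/humanize", json=payload, timeout=30)
response.raise_for_status()

result = response.json()
print(result["humanized_text"])
print("confidence:", result["confidence_score"])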

GET /api/context

Retrieves available context segments for text humanization.

Response:

{
    "status": "success",
    "contexts": [
        {
            "id": "context-1",
            "description": "Context description",
            "active": true
        }
    ]
}
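
The context endpoint can be queried the same way; again a sketch assuming a local development server:

import requests

# List the context segments currently available for humanization.
response = requests.get("http://127.0.0.1:5000/api/context", timeout=10)
response.raise_for_status()

for ctx in response.json()["contexts"]:
    status = "active" if ctx["active"] else "inactive"
    print(f'{ctx["id"]}: {ctx["description"]} ({status})')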

Architecture

Component Overview

text_humanizer/
├── main.py            # Application entry point
├── input_processor/   # Text preprocessing
├── context_manager/   # Context handling
├── llm_client/        # LLM integration
├── providers/         # Service providers
└── utils/             # Utility functions

Key Components

  1. Input Processor: Handles text preprocessing and validation
  2. Context Manager: Manages context storage and retrieval
  3. LLM Client: Interfaces with language models
  4. Local LLM Provider: Implements local model integration
  5. Config Manager: Handles configuration and hot-reloading

Data Flow

  1. Request received → Input validation
  2. Context retrieval and processing
  3. LLM processing with context
  4. Response formatting and caching
  5. Compressed response delivery
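
The sketch below illustrates this flow in simplified Python. All names and signatures are illustrative assumptions for readability, not the project's actual internals; CSRF checks, error handling, and response compression live in the web layer and are omitted here.

_cache: dict = {}

def rephrase_with_llm(text: str, context: str | None) -> dict:
    # Stand-in for the real LLM call; returns a fixed shape for illustration.
    return {"text": text, "confidence": 0.95}

def handle_humanize_request(payload: dict) -> dict:
    # 1. Input validation
    text = (payload.get("text") or "").strip()
    if not text:
        raise ValueError("'text' is required")

    # 2. Context retrieval and processing
    context = payload.get("context")

    # 3. LLM processing with context, behind a simple cache
    key = (text, context)
    if key not in _cache:
        _cache[key] = rephrase_with_llm(text, context)

    # 4. Response formatting (cached result reused on repeat requests)
    result = _cache[key]
    return {
        "status": "success",
        "humanized_text": result["text"],
        "confidence_score": result["confidence"]
    }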

Configuration

The application uses a hierarchical configuration system:

  1. Default configuration (config.default.json)
  2. Environment-specific configuration (config.[environment].json)
  3. Environment variables (override file-based config)
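
A rough sketch of how this layering resolves (the file names follow the convention above; the loader itself is an illustration, not the project's actual configuration code):

import json
import os
from pathlib import Path

def load_config(config_dir: str = "text_humanizer/config") -> dict:
    """Illustrative loader: defaults first, then the environment file, then env vars."""
    env = os.environ.get("APP_ENV", "development")
    config: dict = {}

    # 1-2. Default configuration, then environment-specific overrides
    for name in ("config.default.json", f"config.{env}.json"):
        path = Path(config_dir) / name
        if path.exists():
            config.update(json.loads(path.read_text()))

    # 3. Environment variables override file-based settings
    if "SECRET_KEY" in os.environ:
        config["secret_key"] = os.environ["SECRET_KEY"]

    return config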

Key Configuration Options

{
    "debug_mode": false,
    "secret_key": "your-secret-key",
    "csrf_enabled": true,
    "cache_settings": {
        "type": "simple",
        "timeout": 300
    },
    "model_settings": {
        "model_name": "your-model",
        "temperature": 0.7
    }
}

Deployment

Production Deployment

  1. Set up a production server (e.g., Ubuntu 20.04 LTS)
  2. Install Python 3.11 and required packages
  3. Configure a reverse proxy (nginx recommended)
  4. Set up SSL/TLS certificates
  5. Configure environment variables:
    export APP_ENV=production
    export SECRET_KEY=your-secure-key
  6. Use a process manager (e.g., supervisord)

Docker Deployment

FROM python:3.11-slim
WORKDIR /app
COPY . .
RUN pip install --no-cache-dir -r requirements.txt
EXPOSE 5000
# Note: the app must listen on 0.0.0.0 (not 127.0.0.1) inside the container
# for the published port to be reachable from the host.
CMD ["python", "-m", "text_humanizer.main"]

Build and run:

docker build -t text-humanizer .
docker run -p 5000:5000 text-humanizer

Contributing

  1. Fork the repository
  2. Create a feature branch
  3. Commit your changes
  4. Push to the branch
  5. Create a Pull Request

License

This project is licensed under the MIT License - see the LICENSE file for details.

Support

For support, please open an issue in the GitHub repository or contact the maintainers.
