Controlling diffusion-based image generation with just a few strokes
Updated Dec 21, 2023 - Python
Text-to-image generation using Hugging Face Stable Diffusion with ControlNet conditioning, plus prompt translation via AWS Translate
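The two stages described above can be sketched as follows. This is a minimal illustration, not the repo's actual code: the `translate_prompt` helper wraps boto3's `translate_text` call, and the model IDs and file names in `generate()` are assumptions.

```python
# Sketch: translate the user's prompt with AWS Translate, then condition
# Stable Diffusion on an edge map via ControlNet.

def translate_prompt(client, text, source="auto", target="en"):
    """Translate a prompt using an AWS Translate-style client."""
    resp = client.translate_text(
        Text=text,
        SourceLanguageCode=source,
        TargetLanguageCode=target,
    )
    return resp["TranslatedText"]

def generate(prompt_text, edge_image_path="edges.png"):
    # Heavy path; requires AWS credentials, diffusers, and a GPU.
    # Model IDs and the edge-image path are illustrative assumptions.
    import boto3
    import torch
    from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
    from diffusers.utils import load_image

    prompt = translate_prompt(boto3.client("translate"), prompt_text)
    controlnet = ControlNetModel.from_pretrained(
        "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16
    )
    pipe = StableDiffusionControlNetPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5",
        controlnet=controlnet,
        torch_dtype=torch.float16,
    ).to("cuda")
    return pipe(prompt, image=load_image(edge_image_path)).images[0]
```

Passing the client into `translate_prompt` keeps the translation step testable without live AWS credentials.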
This repo implements ControlNet on top of DDPM and a Latent Diffusion Model in PyTorch, using Canny edges as the conditioning control, for MNIST and CelebA-HQ
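The DDPM forward (noising) process that such training builds on can be sketched in a few lines: a linear beta schedule and the closed-form sample from q(x_t | x_0). This is a generic DDPM sketch, not the repo's implementation; the schedule endpoints follow the common defaults.

```python
import numpy as np

def linear_beta_schedule(timesteps, beta_start=1e-4, beta_end=0.02):
    """Linearly spaced noise variances, as in the original DDPM paper."""
    return np.linspace(beta_start, beta_end, timesteps)

def q_sample(x0, t, alphas_cumprod, noise):
    """Sample x_t ~ q(x_t | x_0) in closed form."""
    abar = alphas_cumprod[t]
    return np.sqrt(abar) * x0 + np.sqrt(1.0 - abar) * noise

betas = linear_beta_schedule(1000)
alphas_cumprod = np.cumprod(1.0 - betas)  # cumulative signal retention

x0 = np.zeros((1, 28, 28))            # e.g. an MNIST-sized image
noise = np.random.randn(*x0.shape)
xt = q_sample(x0, 999, alphas_cumprod, noise)  # near-pure noise at the last step
```

In ControlNet training, the denoiser that learns to invert this process additionally receives the Canny edge map as a conditioning input.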
Virtual Try-On: FashionGenAI is an AI-powered tool that lets users visualize themselves in different clothes from their own images and text prompts. It uses Stable Diffusion Inpainting and the U2-Net segmentation model to isolate clothing regions (upper or lower body) and generate a customized outfit based on the user's choice.
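The bridge between the segmentation model and the inpainting pipeline is a binary mask over the chosen clothing region. A minimal sketch of that step, assuming illustrative label values (1 = upper body, 2 = lower body) rather than the actual U2-Net output format:

```python
# Turn a per-pixel clothing segmentation map into a 0/255 inpainting mask.
# Label values here are illustrative assumptions.

def clothing_mask(seg_map, region="upper"):
    """Return a binary mask (255 = repaint) for the chosen region."""
    target = {"upper": 1, "lower": 2}[region]
    return [[255 if px == target else 0 for px in row] for row in seg_map]

seg = [
    [0, 1, 1, 0],
    [0, 1, 1, 0],
    [0, 2, 2, 0],
]
upper_mask = clothing_mask(seg, region="upper")
```

The resulting mask would then be passed, alongside the user's photo and text prompt, to a Stable Diffusion inpainting pipeline, which repaints only the masked region.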
This model modifies mainstream logos to give them a personalized revamp.
Uses ControlNetModel and StableDiffusionControlNetPipeline to generate stylized QR codes for any given link.
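The idea is to render the link as a QR code and feed it to the pipeline as the conditioning image. A sketch under assumptions: the `qrcode` library and the model IDs in `generate_qr_art()` are illustrative, while the `modules_to_image` helper is plain Python.

```python
# Render a QR module matrix as a grayscale pixel grid suitable as a
# ControlNet conditioning image.

def modules_to_image(matrix, scale=8):
    """Expand a True/False QR module matrix into 0/255 pixels."""
    img = []
    for row in matrix:
        pixel_row = []
        for module in row:
            pixel_row.extend([0 if module else 255] * scale)
        img.extend([pixel_row] * scale)
    return img

def generate_qr_art(url, prompt):
    # Heavy path, not executed here; requires qrcode, diffusers, a GPU.
    # Model IDs are illustrative assumptions.
    import numpy as np
    import qrcode
    import torch
    from PIL import Image
    from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

    qr = qrcode.QRCode(border=2)
    qr.add_data(url)
    qr.make(fit=True)
    pixels = modules_to_image(qr.get_matrix())
    cond = Image.fromarray(np.array(pixels, dtype=np.uint8)).convert("RGB")
    cond = cond.resize((512, 512))

    controlnet = ControlNetModel.from_pretrained(
        "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16
    )
    pipe = StableDiffusionControlNetPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5",
        controlnet=controlnet,
        torch_dtype=torch.float16,
    ).to("cuda")
    return pipe(prompt, image=cond).images[0]
```

Keeping the conditioning strength moderate is what lets the diffusion model stylize the code while the dark/light module structure stays scannable.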
Takes a logo as input and generates a cool branding video!