Hello to everyone visiting my homepage!
I'm thrilled to share my latest project, where I leveraged Stable Diffusion together with a fine-tuned LoRA (Low-Rank Adaptation) model to create striking portraits of myself.
Figure 1. Unveiling my AI-generated portraits! 🎨 Shared a few on social media and, fun fact, many of my family and friends were tricked into thinking they were genuine shots of me. The power of technology, right? 😄 #StableDiffusionMagic
Data Collection: The journey began with collecting a dataset comprising photographs of myself. These photos span various lighting conditions, expressions, and angles, ensuring a comprehensive representation for the subsequent steps.
Figure 2. Take a look at my original dataset and marvel at the uncanny resemblance between the AI-generated portraits and my real images. Can you spot the difference?
Fine-tuning with LoRA: Low-Rank Adaptation, or LoRA, freezes the pre-trained model weights and injects trainable rank decomposition matrices into each layer of the Transformer architecture, greatly reducing the number of trainable parameters for downstream tasks. For this project, I applied LoRA to the Stable Diffusion model and trained the injected matrices on my personal photo dataset. This personalized the model, making it far better at generating images that closely resemble my features.
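To make the "rank decomposition" idea concrete, here is a minimal sketch of a LoRA update for a single linear layer, in plain numpy. The hidden size and rank are illustrative values I picked for the example, not the settings from my actual training run.

```python
import numpy as np

# Illustrative sizes: hidden dimension d, LoRA rank r (assumed values).
d, r = 768, 8
rng = np.random.default_rng(0)

W = rng.standard_normal((d, d))          # frozen pre-trained weight
A = rng.standard_normal((r, d)) * 0.01   # trainable down-projection
B = np.zeros((d, r))                     # trainable up-projection, zero-init

alpha = 16                               # LoRA scaling hyperparameter
W_eff = W + (alpha / r) * (B @ A)        # effective weight used at inference

# Only A and B are trained: 2*d*r parameters instead of d*d.
full_params = W.size                     # 589824
lora_params = A.size + B.size            # 12288
```

Because B starts at zero, the effective weight initially equals the frozen pre-trained weight, so fine-tuning begins exactly at the base model and only gradually learns the personalization.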
Generating Portraits using Stable Diffusion: The crux of the project involved generating images with Stable Diffusion. Armed with text prompts, I could guide the diffusion process to create a series of portraits that were both surreal and deeply personal.
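The way a text prompt "guides" the diffusion process is usually classifier-free guidance: at each denoising step the model predicts the noise twice, with and without the prompt, and the final prediction is pushed toward the prompted one. Here is a conceptual numpy sketch; the arrays are random stand-ins for the U-Net's actual noise predictions.

```python
import numpy as np

rng = np.random.default_rng(1)
eps_uncond = rng.standard_normal((4, 64, 64))  # noise prediction without prompt
eps_cond = rng.standard_normal((4, 64, 64))    # noise prediction with prompt

def guide(eps_uncond, eps_cond, scale):
    # Move away from the unconditional prediction, toward the conditional one.
    return eps_uncond + scale * (eps_cond - eps_uncond)

# scale = 1 recovers the plain conditional prediction;
# scale > 1 amplifies the prompt's influence on the image.
guided = guide(eps_uncond, eps_cond, 7.5)
```

Turning up the guidance scale makes the portraits follow the prompt more literally, at the cost of some variety, which is a trade-off I played with a lot while generating these images.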