IC-Light

Open Source
4.5
Lvmin Zhang

IC-Light (Imposing Consistent Light) is an AI relighting model developed by Lvmin Zhang, the creator of ControlNet, that manipulates and transforms lighting conditions in photographs with remarkable realism. Built on a Stable Diffusion backbone with specialized lighting conditioning, the model, with over one billion parameters, can take any photograph of an object or person and completely alter the light source direction, color temperature, intensity, and ambient lighting while maintaining photorealistic shadows, highlights, and surface reflections. IC-Light operates in two distinct modes: foreground relighting, where the subject is extracted and relit independently, and background-compatible relighting, where the lighting is adjusted to match a new background environment. The model understands physical light behavior, including specular reflections, subsurface scattering on skin, metallic surfaces, and transparent materials, producing results that respect real-world optical properties. IC-Light accepts text descriptions or reference images to define the target lighting setup, offering intuitive control over the final appearance. Released under the Apache 2.0 license, the model is fully open source and has been integrated into ComfyUI with dedicated workflow nodes. Professional photographers, product photographers, digital artists, and e-commerce teams use IC-Light for correcting unfavorable lighting in existing photos, creating studio-quality lighting from casual snapshots, matching product lighting across catalog images, generating dramatic cinematic lighting for creative projects, and preparing composited images with consistent illumination across elements.
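
A hypothetical Python interface makes the two modes concrete. The function names and signatures below are illustrative only; the project ships Gradio demos and ComfyUI nodes rather than a packaged Python API.

```python
# Hypothetical signatures sketching IC-Light's two conditioning modes.
# These names are illustrative; the project itself ships Gradio demos
# and ComfyUI nodes rather than a packaged Python API.
from PIL import Image


def relight_foreground(subject: Image.Image, lighting_prompt: str) -> Image.Image:
    """Foreground-conditioned mode: relight the extracted subject per the
    text description and synthesize a background that matches."""
    ...


def relight_to_background(subject: Image.Image, background: Image.Image,
                          lighting_prompt: str) -> Image.Image:
    """Foreground-and-background mode: relight the subject so its
    illumination is consistent with the supplied background."""
    ...
```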

Image to Image
Image Editing

Key Highlights

AI-Powered Lighting Control

Produces professional studio-quality results by rearranging lighting in existing images with artificial intelligence.

HDRI Map Support

Achieves realistic environment lighting by using HDRI environment maps as lighting references.

Light Direction and Color Control

Provides the ability to create the desired atmosphere by precisely controlling light direction and color.

Background-Consistent Lighting

Creates natural compositions by making the lighting on objects or portraits consistent with the background.

About

IC-Light (Imposing Consistent Light) is a relighting model developed in 2024 by lllyasviel (Lvmin Zhang, the creator of ControlNet) that rearranges and manipulates lighting in images using artificial intelligence. The model takes a photograph of an object or person and can completely alter the light source direction, intensity, color, and overall lighting atmosphere. Designed to resolve lighting inconsistencies in compositing and photo-editing workflows, IC-Light produces professional-grade results that would traditionally require expensive studio setups or complex 3D rendering pipelines.

The technical foundation of IC-Light is built upon intrinsic image decomposition principles. The model decomposes an image into its constituent components (albedo or material color, surface normals, and shading), enabling independent manipulation of each. Through this decomposition, it can completely recalculate lighting while preserving the original object's material properties. Two primary modes are offered: IC-Light FC (foreground conditioned) relights the foreground subject while automatically generating a compatible background, whereas IC-Light FBC (foreground and background conditioned) harmonizes lighting when both foreground and background images are provided.
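
Concretely, the released checkpoints are distributed as weight offsets on top of a Stable Diffusion 1.5 UNet whose input convolution is widened to accept the extra conditioning latents (8 channels for FC: noise plus the encoded foreground; 12 for FBC, which adds the background). A minimal loading sketch in the style of the reference demo script follows; any SD 1.5 checkpoint should work as the base.

```python
# Sketch of loading IC-Light FC on top of an SD 1.5 UNet, following the
# pattern used in the reference repository (lllyasviel/IC-Light).
import torch
from diffusers import UNet2DConditionModel
from huggingface_hub import hf_hub_download
from safetensors.torch import load_file

# Any SD 1.5 checkpoint can serve as the base; the reference demo uses a
# Realistic Vision variant.
unet = UNet2DConditionModel.from_pretrained(
    "stablediffusionapi/realistic-vision-v51", subfolder="unet"
)

# The FC model conditions on the encoded foreground, so the input conv
# takes 8 latent channels (4 noise + 4 foreground) instead of SD's 4.
with torch.no_grad():
    new_conv_in = torch.nn.Conv2d(
        8, unet.conv_in.out_channels,
        unet.conv_in.kernel_size, unet.conv_in.stride, unet.conv_in.padding,
    )
    new_conv_in.weight.zero_()
    new_conv_in.weight[:, :4, :, :].copy_(unet.conv_in.weight)
    new_conv_in.bias = unet.conv_in.bias
    unet.conv_in = new_conv_in

# The checkpoint stores offsets, which are added to the patched weights.
fc_path = hf_hub_download("lllyasviel/ic-light", "iclight_sd15_fc.safetensors")
sd_offset = load_file(fc_path)
sd_base = unet.state_dict()
unet.load_state_dict({k: sd_base[k] + sd_offset[k] for k in sd_base}, strict=True)
```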

IC-Light produces remarkable results in terms of lighting quality and realism across diverse scenarios. It achieves consistent and physically plausible results for different light directions (top, bottom, side, backlighting), various light colors (warm, cool, colored gels), soft and hard light transitions, and multi-source lighting simulations. It demonstrates particularly strong performance in portrait photography relighting, product photography studio light simulation, and lighting harmonization in composite images where elements from different sources must appear naturally unified.
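For instance, the reference Gradio demo realizes directional presets with nothing more elaborate than a brightness ramp that seeds the initial latent. A simplified sketch of that idea (dimensions and preset names are illustrative):

```python
# Sketch of the directional lighting hints used by the reference demo:
# a plain brightness ramp, bright on the side the key light should come
# from, is encoded and used to seed the initial latent.
import numpy as np

def light_direction_hint(direction: str, h: int = 640, w: int = 512) -> np.ndarray:
    """Return an HxWx3 uint8 gradient that is bright on the chosen side."""
    if direction == "left":
        img = np.tile(np.linspace(255, 0, w), (h, 1))
    elif direction == "right":
        img = np.tile(np.linspace(0, 255, w), (h, 1))
    elif direction == "top":
        img = np.tile(np.linspace(255, 0, h)[:, None], (1, w))
    elif direction == "bottom":
        img = np.tile(np.linspace(0, 255, h)[:, None], (1, w))
    else:
        raise ValueError(f"unknown direction: {direction}")
    return np.stack([img] * 3, axis=-1).astype(np.uint8)

hint = light_direction_hint("left")  # bright left edge -> key light from the left
```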

The practical applications span from professional image production to e-commerce and content creation. Photographers use IC-Light to add professional lighting effects to outdoor shoots, product photographers virtually experiment with different lighting scenarios without physical setups, graphic designers ensure lighting consistency across compositing projects, and game developers adjust character and object lighting for scene integration. It is also employed in film post-production for correcting scene lighting and in social media content creation for producing dramatic lighting effects that enhance visual impact.

IC-Light is published as fully open source under the Apache 2.0 license on GitHub. Custom nodes are available for ComfyUI, enabling complex lighting editing workflows to be constructed visually through a node-based interface. An interactive demo page is hosted on Hugging Face Spaces for immediate experimentation. Since the model is built on Stable Diffusion infrastructure, it can run on consumer GPUs with 8 GB VRAM. A Gradio-based local interface is also provided for standalone use without cloud dependencies.
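
The hosted Space can also be driven from Python with gradio_client for quick experimentation. The endpoint and argument names below are illustrative assumptions, not documented API; call view_api() to inspect the real signature first.

```python
# Minimal sketch of driving the hosted IC-Light demo from Python.
# The Space name is real; the api_name and parameters are assumptions
# for illustration -- check Client.view_api() for the actual endpoints.
from gradio_client import Client, handle_file

client = Client("lllyasviel/IC-Light")
print(client.view_api())  # list the real endpoints and their arguments

# Hypothetical call shape: a foreground photo, a text description of the
# target lighting, and a light-direction preference.
result = client.predict(
    handle_file("portrait.png"),   # foreground image to relight
    "warm golden-hour sunlight",   # text prompt describing the lighting
    "Left Light",                  # light direction preference
    api_name="/relight",           # assumed endpoint name
)
print(result)
```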

In the lighting manipulation domain, IC-Light stands out as one of the first open-source models to make AI-based relighting practically usable. While traditional relighting methods require 3D scene information, detailed depth maps, or multi-view captures, IC-Light's ability to work from a single 2D image represents a major step forward in accessibility and workflow simplification. Created by the same developer behind ControlNet, the project successfully transfers deep expertise in controllable image generation to the lighting manipulation domain, strengthening the bridge between professional photography and AI-powered creative tools.

Use Cases

1

Product Photography

Boosting sales by rearranging e-commerce product images with professional studio lighting.

2

Portrait Editing

Creating dramatic or natural light effects by changing lighting in portrait photographs.

3

Scene Compositing

Creating natural scenes by combining images from different sources with consistent lighting.

4

Concept Art Lighting

Setting the atmosphere by quickly testing different lighting scenarios in concept artwork.

Pros & Cons

Pros

  • AI-powered relighting capability for images
  • Independent control of foreground and background lighting
  • Achieves studio-like results in product photography
  • Integrates with ComfyUI and other diffusion tools

Cons

  • Lighting artifacts can occur in complex scenes
  • Only lighting changes — cannot perform general image editing
  • High GPU requirements — real-time processing limited
  • Light reflections may not look natural on some material types

Technical Details

Parameters

1B+

Architecture

Stable Diffusion + Lighting Conditioning

Training Data

Synthetic lighting pairs

License

Apache 2.0

Features

  • Relighting
  • HDR lighting
  • Direction control
  • Color control
  • Background generation
  • Portrait relighting
  • ComfyUI support

Benchmark Results

Metric | Value | Compared To | Source
Lighting Accuracy (user study) | 87% preference rate | DiffusionLight: 62% | IC-Light Paper (arXiv:2405.00126)
SSIM (relighting) | 0.89 | Total Relighting: 0.83 | Papers With Code
Processing Time (512×512) | ~5 seconds (A100) | N/A | GitHub Repository

Available Platforms

GitHub
ComfyUI
Replicate

Related Models

ControlNet

Lvmin Zhang|1.4B

ControlNet is a conditional control framework for Stable Diffusion models that enables precise structural guidance during image generation through various conditioning inputs such as edge maps, depth maps, human pose skeletons, segmentation masks, and normal maps. Developed by Lvmin Zhang and Maneesh Agrawala at Stanford University, ControlNet adds trainable copy branches to frozen diffusion model encoders, allowing the model to learn spatial conditioning without altering the original model's capabilities. This architecture preserves the base model's generation quality while adding fine-grained control over composition, structure, and spatial layout of generated images. ControlNet supports multiple conditioning types simultaneously, enabling complex multi-condition workflows where users can combine pose, depth, and edge information to guide generation with extraordinary precision. The framework revolutionized professional AI image generation workflows by solving the fundamental challenge of maintaining consistent spatial structures across generated images. It has become an essential tool for professional artists and designers who need precise control over character poses, architectural layouts, product placements, and scene compositions. ControlNet is open-source and available on Hugging Face with pre-trained models for various Stable Diffusion versions including SD 1.5 and SDXL. It integrates seamlessly with ComfyUI and Automatic1111. Concept artists, character designers, architectural visualizers, fashion designers, and animation studios rely on ControlNet for production workflows. Its influence has extended beyond Stable Diffusion, inspiring similar control mechanisms in FLUX.1 and other modern image generation models.

Open Source
4.8
InstantID

InstantX Team|N/A

InstantID is a zero-shot identity-preserving image generation framework developed by InstantX Team that can generate images of a specific person in various styles, poses, and contexts using only a single reference photograph. Unlike traditional face-swapping or personalization methods that require multiple reference images or time-consuming fine-tuning, InstantID achieves accurate identity preservation from just one facial photograph through an innovative architecture combining a face encoder, IP-Adapter, and ControlNet for facial landmark guidance. The system extracts detailed facial identity features from the reference image and injects them into the generation process, ensuring that the generated person maintains recognizable facial features, proportions, and characteristics across diverse output scenarios. InstantID supports various creative applications including generating portraits in different artistic styles, placing the person in imagined scenes or contexts, creating profile pictures and avatars, and producing marketing materials featuring consistent character representations. The model works with Stable Diffusion XL as its base and is open-source, available on GitHub and Hugging Face for local deployment. It integrates with ComfyUI through community-developed nodes and can be accessed through cloud APIs. Portrait photographers, social media content creators, marketing teams creating personalized campaigns, game developers designing character variants, and digital artists exploring identity-based creative work all use InstantID. The framework has influenced subsequent identity-preservation models and remains one of the most effective solutions for single-image identity transfer in the open-source ecosystem.

Open Source
4.7
IP-Adapter

Tencent|22M

IP-Adapter is an image prompt adapter developed by Tencent AI Lab that enables image-guided generation for text-to-image diffusion models without requiring any fine-tuning of the base model. The adapter works by extracting visual features from reference images using a CLIP image encoder and injecting these features into the diffusion model's cross-attention layers through a decoupled attention mechanism. This allows users to provide reference images as visual prompts alongside text prompts, guiding the generation process to produce images that share stylistic elements, compositional features, or visual characteristics with the reference while still following the text description. IP-Adapter supports multiple modes of operation including style transfer, where the generated image adopts the artistic style of the reference, and content transfer, where specific subjects or elements from the reference appear in the output. The adapter is lightweight, adding minimal computational overhead to the base model's inference process. It can be combined with other control mechanisms like ControlNet for multi-modal conditioning, enabling sophisticated workflows where pose, style, and content can each be controlled independently. IP-Adapter is open-source and available for various Stable Diffusion versions including SD 1.5 and SDXL. It integrates with ComfyUI and Automatic1111 through community extensions. Digital artists, product designers, brand managers, and content creators who need to maintain visual consistency across generated images or transfer specific aesthetic qualities from reference material particularly benefit from IP-Adapter's capabilities.

Open Source
4.6
IP-Adapter FaceID

Tencent|22M (adapter)

IP-Adapter FaceID is a specialized adapter module developed by Tencent AI Lab that injects facial identity information into the diffusion image generation process, enabling the creation of new images that faithfully preserve a specific person's facial features. Unlike traditional face-swapping approaches, IP-Adapter FaceID extracts face recognition feature vectors from the InsightFace library and feeds them into the diffusion model through cross-attention layers, allowing the model to generate diverse scenes, styles, and compositions while maintaining consistent facial identity. With only approximately 22 million adapter parameters layered on top of existing Stable Diffusion models, FaceID achieves remarkable identity preservation without requiring per-subject fine-tuning or multiple reference images. A single clear face photo is sufficient to generate the person in various artistic styles, different clothing, diverse environments, and novel poses. The adapter supports both SDXL and SD 1.5 base models and can be combined with other ControlNet adapters for additional control over pose, depth, and composition. IP-Adapter FaceID Plus variants incorporate additional CLIP image features alongside face embeddings for improved likeness and detail preservation. Released under the Apache 2.0 license, the model is fully open source and widely integrated into ComfyUI workflows and the Diffusers library. Common applications include personalized avatar creation, custom portrait generation in various artistic styles, character consistency in storytelling and comic creation, personalized marketing content, and social media content creation where maintaining a recognizable likeness across multiple generated images is essential.

Open Source
4.5

Quick Info

Parameters: 1B+
Type: Diffusion
License: Apache 2.0
Released: 2024-05
Architecture: Stable Diffusion + Lighting Conditioning
Rating: 4.5 / 5
Creator: Lvmin Zhang

Tags

relighting
lighting
photo
editing