ProPainter is an advanced deep learning model developed by S-Lab at Nanyang Technological University for video inpainting and object removal with exceptional temporal consistency. The model employs a dual-domain propagation architecture combined with Transformer-based attention to fill in masked or removed regions across video frames while maintaining seamless visual continuity. ProPainter takes a video and a binary mask indicating the regions to be removed or filled, then generates a completed video whose new content blends naturally with surrounding pixels and remains consistent across frames.

The dual-domain approach propagates information in both spatial and temporal dimensions, using optical flow-guided warping to transfer texture details from neighboring frames and Transformer attention to synthesize content for regions with no visible reference. This combination allows ProPainter to handle challenging scenarios, including large masked areas, fast camera motion, and complex scene dynamics, that cause previous methods to produce flickering or ghosting artifacts.

The model achieves state-of-the-art results on standard video inpainting benchmarks including DAVIS and YouTube-VOS, significantly outperforming previous approaches in both quantitative metrics and perceptual quality. Released under the S-Lab license, the model is open source for research purposes.

Practical applications include removing unwanted objects or people from video footage, restoring damaged or corrupted video content, removing watermarks, creating clean background plates for visual effects compositing, and video-based content moderation. ProPainter integrates with standard video processing pipelines and can process videos at practical speeds on modern GPUs.
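To make the flow-guided propagation idea concrete, the sketch below shows the basic operation it builds on: backward-warping a neighboring frame along an optical flow field with bilinear sampling, then copying the warped pixels into the masked hole of the current frame. This is a minimal NumPy illustration, not ProPainter's actual implementation; the function names (`warp_frame`, `fill_hole`) are hypothetical, and ProPainter additionally completes the flow inside the hole with a learned network and refines the result with Transformer attention.

```python
import numpy as np

def warp_frame(reference: np.ndarray, flow: np.ndarray) -> np.ndarray:
    """Backward-warp a reference frame (H, W, C) by a flow field (H, W, 2).

    flow[y, x] holds the (dx, dy) offset from the current frame into the
    reference frame; bilinear sampling pulls each pixel's value from there.
    """
    h, w = flow.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    # Source coordinates in the reference frame, clamped to the image.
    src_x = np.clip(xs + flow[..., 0], 0, w - 1)
    src_y = np.clip(ys + flow[..., 1], 0, h - 1)
    # Bilinear interpolation between the four surrounding pixels.
    x0 = np.floor(src_x).astype(int)
    y0 = np.floor(src_y).astype(int)
    x1 = np.clip(x0 + 1, 0, w - 1)
    y1 = np.clip(y0 + 1, 0, h - 1)
    wx = (src_x - x0)[..., None]
    wy = (src_y - y0)[..., None]
    top = reference[y0, x0] * (1 - wx) + reference[y0, x1] * wx
    bottom = reference[y1, x0] * (1 - wx) + reference[y1, x1] * wx
    return top * (1 - wy) + bottom * wy

def fill_hole(current, reference, flow, hole_mask):
    """Replace masked pixels in `current` with flow-warped reference content."""
    warped = warp_frame(reference, flow)
    return np.where(hole_mask[..., None] > 0, warped, current)
```

In the full model this propagation runs across many frames in both directions, so texture visible in any neighboring frame can reach the hole; only pixels that no frame ever reveals must be hallucinated by the attention module.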