FidelityFX Super Resolution
FidelityFX Super Resolution (FSR) is AMD's open-source upscaling technology designed to boost performance in real-time rendering applications, particularly video games. Unlike NVIDIA's DLSS, which requires dedicated Tensor Cores, FSR is hardware-agnostic and runs on AMD, NVIDIA, and Intel GPUs, including integrated graphics. The technology has evolved through multiple generations: FSR 1.0 used Lanczos-based spatial upscaling on single frames, FSR 2.0 introduced temporal upscaling leveraging motion vectors and previous-frame data for near-native quality, and FSR 3.0 added optical flow-based frame generation to dramatically increase perceived frame rates. Quality modes range from Ultra Quality to Ultra Performance, letting users balance visual fidelity against performance gains of 2x or more in the more aggressive modes. FSR supports the DirectX 11, DirectX 12, and Vulkan APIs and is deployed across PC, Xbox, PlayStation, and portable devices like the Steam Deck, where it enables playable frame rates within limited GPU power budgets. Hundreds of major titles, including Cyberpunk 2077, Starfield, and Hogwarts Legacy, feature FSR integration, with engine-level support in Unreal Engine and Unity simplifying adoption. Released under the MIT license through AMD's GPUOpen platform, FSR encourages transparent collaboration and modification by developers and researchers. Its platform independence and open-source nature have made it one of the most widely adopted upscaling solutions in the gaming industry, shaping the future of real-time image quality enhancement.
Key Highlights
Hardware Agnostic Approach
Works on any supported GPU including AMD, NVIDIA and Intel, providing broad compatibility without requiring dedicated AI hardware
Temporal Upscaling (FSR 2.0)
Advanced temporal upscaling that accumulates information from multiple frames using motion vectors, depth buffers and historical frame data
Frame Generation Technology (FSR 3.0)
Offers frame generation capability that doubles the perceived frame rate by creating new interpolated frames between rendered frames
Wide Game Compatibility
Broad ecosystem support integrated into hundreds of game titles and major game engines like Unreal Engine and Unity
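The temporal upscaling highlighted above rests on accumulating samples across frames. The following is a minimal per-pixel sketch of that idea, assuming a simple exponential history blend with a hypothetical `alpha` weight; FSR 2's actual pipeline adds motion-vector reprojection, depth-based disocclusion handling, and more sophisticated history rejection than the plain neighborhood clamp shown here.

```python
def clamp(value, lo, hi):
    """Clamp a history sample to the current frame's local neighborhood range.
    Temporal upscalers use a similar 'neighborhood clamp' to reject stale history."""
    return max(lo, min(hi, value))

def accumulate(history, current, neighborhood_min, neighborhood_max, alpha=0.1):
    """Blend one pixel's clamped history with the current frame's sample.
    alpha is the blend weight toward the new sample (illustrative value)."""
    clamped = clamp(history, neighborhood_min, neighborhood_max)
    return (1.0 - alpha) * clamped + alpha * current

# A pixel converging toward a stable value over several frames:
pixel = 0.0
for frame_sample in [1.0, 1.0, 1.0, 1.0]:
    pixel = accumulate(pixel, frame_sample, 0.0, 1.0, alpha=0.5)
# pixel approaches 1.0 as more frames are accumulated
```

The clamp is what keeps accumulation from ghosting: when the scene changes, the history value is forced into the range of the current frame's neighborhood before blending.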
About
FidelityFX Super Resolution (FSR) is AMD's open-source upscaling technology designed primarily for real-time applications such as video games. First released in 2021, FSR bridges the gap between low-resolution frame rendering and high-resolution display output, significantly boosting gaming performance while minimizing visual quality trade-offs. It embodies AMD's commitment to an open platform approach that benefits all gamers regardless of their hardware vendor, and its hardware-agnostic philosophy has helped set the direction of the wider industry.
The technical evolution of FSR spans multiple generations, with each generation introducing significant innovations. FSR 1.0 employed a spatial upscaling approach that analyzes a single frame and applies advanced Lanczos-based filtering combined with edge enhancement algorithms for improved visual clarity. FSR 2.0 introduced a fundamental architectural shift to temporal upscaling, leveraging motion vectors and information from previous frames to achieve substantially higher quality results comparable to native resolution rendering. FSR 3.0 added frame generation technology, using optical flow-based intermediate frame synthesis to dramatically increase perceived FPS beyond what the GPU can natively render. Each generation has progressively improved the visual quality and performance balance, elevating the overall gaming experience.
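The Lanczos-based filtering mentioned for FSR 1.0 can be illustrated in one dimension. This is an illustrative sketch only: FSR 1's actual EASU pass is a 2D, edge-adaptive GPU shader that approximates a Lanczos-2 filter, while the code below is a textbook Lanczos resampler with hypothetical function names.

```python
import math

def lanczos(x, a=2):
    """Lanczos-a kernel: sinc(x) * sinc(x/a) for |x| < a, else 0."""
    if x == 0.0:
        return 1.0
    if abs(x) >= a:
        return 0.0
    px = math.pi * x
    return a * math.sin(px) * math.sin(px / a) / (px * px)

def resample_1d(samples, t, a=2):
    """Reconstruct a value at fractional position t from discrete samples
    by weighting the 2*a nearest taps with the Lanczos kernel."""
    lo = math.floor(t) - a + 1
    acc = weight_sum = 0.0
    for i in range(lo, lo + 2 * a):
        if 0 <= i < len(samples):
            w = lanczos(t - i, a)
            acc += samples[i] * w
            weight_sum += w
    return acc / weight_sum if weight_sum else 0.0

# Upscaling means evaluating resample_1d at more positions than there
# are source samples, e.g. t = 0.0, 0.5, 1.0, ... for a 2x upscale.
```

At integer positions the kernel reduces to a pass-through (weight 1 on the exact sample, 0 elsewhere), which is why Lanczos preserves existing pixels while interpolating sharply between them.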
Platform independence is one of FSR's most defining characteristics and the primary advantage that distinguishes it from competing solutions. Unlike NVIDIA's DLSS, which requires dedicated Tensor Cores available only on RTX hardware, FSR operates without specialized hardware requirements and runs across NVIDIA, AMD, and Intel GPUs, including integrated graphics solutions found in budget systems and laptops. This broad compatibility enables game developers to reach their entire player base with a single implementation effort. FSR supports DirectX 11, DirectX 12, and Vulkan graphics APIs, and is widely deployed on console platforms including Xbox and PlayStation, ensuring comprehensive cross-platform coverage. It also plays a critical role on portable gaming devices.
Industry adoption within the gaming ecosystem has been exceptionally widespread and continues to grow with each new title release. Hundreds of AAA and independent titles offer FSR support across diverse genres. Major productions including God of War, Cyberpunk 2077, Forspoken, Starfield, and Hogwarts Legacy feature FSR integration as a standard option. Engine-level integrations with Unreal Engine and Unity enable developers to incorporate FSR into their projects with minimal implementation effort and rapid deployment. On portable gaming devices such as the Steam Deck, FSR has become a critical technology for achieving playable frame rates within limited GPU power budgets, fundamentally transforming the mobile gaming experience.
Quality modes include Ultra Quality, Quality, Balanced, Performance, and Ultra Performance tiers, each designed for a different quality-performance trade-off. These tiers correspond to per-axis render scale factors of roughly 1.3x, 1.5x, 1.7x, 2.0x, and 3.0x respectively (Ultra Quality originated as an FSR 1.0 preset). Ultra Quality mode delivers near-native image quality with modest performance gains, while Performance mode renders at half the display resolution on each axis and can provide up to 2x performance improvement in demanding titles. Ultra Performance mode offers dramatic performance gains even at 4K resolution, though with more noticeable visual quality trade-offs. Users can select the most appropriate mode based on their hardware capabilities, display resolution, and personal quality-performance preferences to achieve an optimal gaming experience.
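The relationship between quality mode and internal render resolution can be sketched as a small lookup, using the per-axis scale factors from AMD's documentation (function name and rounding behavior here are illustrative, not part of the FSR API):

```python
# Per-axis render scale factors as documented by AMD;
# Ultra Quality is an FSR 1.0 preset, the rest apply to FSR 2/3 as well.
SCALE = {
    "Ultra Quality": 1.3,
    "Quality": 1.5,
    "Balanced": 1.7,
    "Performance": 2.0,
    "Ultra Performance": 3.0,
}

def render_resolution(display_w, display_h, mode):
    """Internal render resolution for a given display size and quality mode."""
    s = SCALE[mode]
    return round(display_w / s), round(display_h / s)

# At a 4K display, Performance mode renders internally at 1080p:
# render_resolution(3840, 2160, "Performance") -> (1920, 1080)
```

Because pixel count scales with the square of the axis factor, Performance mode's 2.0x per-axis scale means the GPU shades only a quarter of the display's pixels each frame.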
FSR is released as open source under the MIT license through AMD's GPUOpen platform, encouraging transparent collaboration and innovation. This open approach enables both game developers and researchers to inspect, modify, and adapt the technology for their specific needs, accelerating the pace of innovation across the industry. AMD's continued investment in the FSR ecosystem pushes quality and performance boundaries with each new generation, maintaining its position as a leading platform for real-time image upscaling technology. FSR is widely recognized as one of the foundational technologies shaping the future of the gaming industry.
Use Cases
Performance-Boosted Gaming
Maintaining visual quality while increasing frame rates by rendering at lower internal resolution
High Resolution on Older GPUs
Providing 4K or high-resolution gaming experience on mid-range or older GPUs
Game Engine Integration
Game developers adding upscaling technology to their Unreal Engine and Unity projects
Portable Device Optimization
Optimizing the balance between performance and visual quality on Steam Deck and other portable gaming devices
Pros & Cons
Pros
- AMD's open-source gaming upscaling technology — vendor agnostic
- Works on both AMD and NVIDIA GPUs
- Frame generation support with FSR 3 — FPS boost
- Easy integration into game engines — Unreal and Unity support
- No license fee — free to use
Cons
- Falls behind NVIDIA DLSS in visual quality
- Temporal instability — ghosting and shimmer artifacts
- Not ML-based through FSR 3 — hand-tuned analytical algorithms rather than a trained neural network
- Noticeable quality loss in performance mode
Technical Details
Parameters
N/A
Architecture
Spatial upscaling (FSR 1), temporal upscaling (FSR 2), and optical flow-based frame generation (FSR 3)
Training Data
N/A (algorithmic approach, not ML-trained)
License
MIT
Features
- Temporal Upscaling (FSR 2.0/3.0)
- Frame Generation (FSR 3.0)
- Cross-Platform GPU Support
- Open Source Implementation
- Game Engine Integration (Unreal/Unity)
- Multiple Quality Presets
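The frame generation feature listed above can be illustrated with a toy per-pixel blend between two rendered frames. This is deliberately simplified: FSR 3's actual frame generation warps pixels along optical flow and game-supplied motion vectors before compositing, precisely because a plain blend like this one ghosts on fast motion.

```python
def interpolate_frame(frame_a, frame_b, t=0.5):
    """Toy intermediate frame: per-pixel linear blend between two rendered
    frames, with t the temporal position of the generated frame (0..1).
    Frames are modeled as flat lists of pixel intensities for simplicity."""
    return [(1.0 - t) * a + t * b for a, b in zip(frame_a, frame_b)]

# Presenting the generated midpoint frame between two rendered frames
# doubles the perceived frame rate without rendering extra frames.
mid = interpolate_frame([0.0, 1.0], [1.0, 1.0])
```

Note that generated frames add presentation latency, which is why FSR 3 is typically paired with a latency-reduction technique on the driver side.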
Benchmark Results
| Metric | Value | Compared To | Source |
|---|---|---|---|
| Performance Gain (Ultra Quality) | ~1.3x FPS | DLSS Ultra Quality: ~1.3x | AMD GPUOpen Documentation |
| Performance Gain (Performance) | ~2.0x FPS | DLSS Performance: ~2.0x | AMD GPUOpen Documentation |
| Supported GPUs | Vendor-agnostic (AMD, NVIDIA, Intel) | DLSS: NVIDIA RTX only | AMD GPUOpen |
| Upscale Ratios (per axis) | 1.3x, 1.5x, 1.7x, 2.0x | — | AMD GPUOpen Documentation |
Related Models
Real-ESRGAN
Real-ESRGAN is an open-source image upscaling and restoration model developed by Xintao Wang and collaborators at Tencent ARC Lab that enhances low-resolution, degraded, or compressed images to high-resolution outputs with remarkable detail recovery. Released in 2021 under the BSD license, Real-ESRGAN builds on the original ESRGAN architecture by introducing a high-order degradation modeling approach that simulates the complex, unpredictable quality loss found in real-world images, including compression artifacts, noise, blur, and downsampling. The model uses a U-Net architecture with Residual-in-Residual Dense Blocks as its generator network, trained with a combination of perceptual loss, GAN loss, and pixel loss to produce sharp, natural-looking upscaled results. Real-ESRGAN supports upscaling factors of 2x, 4x, and higher, and includes specialized model variants for anime and illustration content alongside the general-purpose photographic model. The model handles real-world degradations far better than its predecessor ESRGAN, which was trained only on synthetic degradation patterns. Real-ESRGAN has become one of the most widely deployed AI upscaling solutions, integrated into numerous applications including desktop tools, web services, mobile apps, and professional image editing workflows. The model runs efficiently on both CPU and GPU, with the lighter RealESRGAN-x4plus-anime variant optimized for consumer hardware. As a fully open-source project available on GitHub with pre-trained weights, it serves as the backbone for popular tools like Upscayl and various ComfyUI nodes. Real-ESRGAN is essential for photographers, content creators, game developers, and anyone who needs to enhance image resolution while preserving natural appearance and adding realistic detail.
Topaz Gigapixel AI
Topaz Gigapixel AI is a commercial desktop application for AI-powered image upscaling and enhancement developed by Topaz Labs, positioned as an industry-standard tool for professional photographers, graphic designers, and image processing specialists. Available on Windows and macOS, the software uses a proprietary hybrid neural network architecture that combines multiple AI models to upscale images by up to 600 percent while preserving and even enhancing fine details, textures, and sharpness. Topaz Gigapixel AI includes specialized processing modes for different content types including faces, standard photography, computer graphics, and low-resolution sources, with each mode optimized to produce the best possible results for its target content. The software features intelligent face detection and enhancement that improves facial details during upscaling, producing natural-looking results even from very low-resolution source images. Topaz Gigapixel AI supports batch processing for handling large volumes of images and integrates with Adobe Lightroom and Photoshop as a plugin, fitting seamlessly into professional photography workflows. The application processes images locally on the user's machine using GPU acceleration, ensuring privacy and fast processing without requiring an internet connection. Output quality is widely regarded as among the best available in commercial upscaling software, with particular strength in preserving natural textures and avoiding the artificial smoothing common in many AI upscalers. As a proprietary product with a one-time purchase or subscription model, Topaz Gigapixel AI is particularly valued by professional photographers enlarging prints, real estate photographers enhancing property images, forensic analysts improving evidence imagery, and archivists restoring historical photographs to modern resolution standards.
Upscayl
Upscayl is a free and open-source desktop application for AI-powered image upscaling, built on top of Real-ESRGAN and other super-resolution models. Developed by Nayam Amarshe and TGS963, Upscayl provides a user-friendly graphical interface that makes advanced AI image upscaling accessible to non-technical users on Windows, macOS, and Linux platforms. The application wraps multiple AI upscaling models in an Electron-based desktop app, allowing users to enhance image resolution with just a few clicks without any command-line knowledge or Python environment setup. Upscayl includes several pre-installed upscaling models optimized for different content types including general photography, digital art, anime, and sharpening, with each model producing different aesthetic characteristics suited to its target content. Users can select upscaling factors of 2x, 3x, or 4x and process individual images or entire folders through batch processing. The application supports common image formats including PNG, JPG, and WebP, and provides options for output format and quality settings. Upscayl also supports custom model loading, allowing users to import additional NCNN-compatible upscaling models from the community. Released under the AGPL-3.0 license, Upscayl is fully open source with its code available on GitHub and has accumulated a large community of users and contributors. The application runs entirely locally with no internet connection required, ensuring privacy for sensitive images. Upscayl is particularly popular among photographers, graphic designers, content creators, and hobbyists who need a simple, free solution for enhancing image quality without subscriptions or cloud processing dependencies.
CodeFormer
CodeFormer is a state-of-the-art blind face restoration model developed by researchers at Nanyang Technological University in collaboration with Tencent ARC, presented at NeurIPS 2022. The model employs a unique Transformer-based architecture with a discrete codebook lookup mechanism to restore severely degraded facial images with exceptional fidelity. Its most distinguishing feature is an adjustable w parameter ranging from 0.0 to 1.0 that gives users precise control over the balance between identity preservation and restoration quality. Architecturally, CodeFormer consists of three core components: a VQGAN encoder-decoder that learns discrete visual codes from high-quality face datasets, a codebook that stores these learned representations, and a Transformer module that predicts optimal code combinations during restoration. This approach enables the model to produce plausible facial details even under extreme degradation because it draws information from learned priors rather than solely from the corrupted input. In benchmark evaluations on CelebA-HQ and WIDER-Face datasets, CodeFormer achieves superior results across FID, NIQE, and identity similarity metrics compared to previous methods. Practical applications include restoring old family photographs, enhancing faces in AI-generated images, extracting facial details from low-resolution video frames, and professional photo retouching. The model is open source, integrates with popular tools like ComfyUI, AUTOMATIC1111 WebUI, and Fooocus, and offers cloud inference through Replicate API and Hugging Face Spaces demos for accessible experimentation.