This Person Does Not Exist
This Person Does Not Exist is a web-based demonstration created by Uber software engineer Philip Wang that generates photorealistic portraits of entirely fictional people using NVIDIA's StyleGAN technology. Launched in February 2019, the website became a viral sensation by producing a new AI-generated human face on each page refresh, showcasing the ability of generative adversarial networks to synthesize portraits that are, at a glance, hard to distinguish from real photographs. The underlying model was trained on the FFHQ dataset containing 70,000 high-resolution photographs of real human faces, learning to generate novel facial compositions with realistic skin textures, hair patterns, lighting, eye reflections, and natural asymmetries. The generated faces span diverse demographics including various ages, ethnicities, and genders, demonstrating the model's understanding of facial diversity. While outputs are convincing at first glance, careful examination occasionally reveals telltale artifacts such as asymmetric earrings, distorted backgrounds, or inconsistencies in hair at image edges. The project serves multiple purposes beyond demonstration: it has been widely used in discussions about deepfake technology and media literacy, serves as a privacy-preserving source of placeholder portraits for design mockups and UI prototyping, and provides stock-photo-like imagery without licensing concerns. The website itself is proprietary, though the underlying StyleGAN architecture is open source. This Person Does Not Exist remains one of the most recognized public demonstrations of GAN capabilities and continues to spark conversations about AI-generated media authenticity and digital trust in an era of increasingly sophisticated synthetic content.
Key Highlights
Photorealistic Face Generation
Generates highly convincing synthetic faces at 1024x1024 resolution that are difficult to distinguish from real people
Instant Generation
Creates a completely new and unique human face within seconds on each page refresh
Broad Diversity
Generates faces spanning a broad range of ethnicities, ages, genders, and facial features
Education and Awareness Tool
Creates public awareness about the capabilities and limitations of AI-generated imagery
About
This Person Does Not Exist is a viral website and technology demonstrator created in February 2019 by Philip Wang, then a software engineer at Uber, generating completely artificial, photorealistic human faces using NVIDIA's StyleGAN technology. The site produces a unique, non-existent human face at 1024x1024 resolution with every page refresh. Going viral immediately after launch, the project dramatically demonstrated to the general public the level of realism achievable by AI-generated images, accelerating public discourse around deepfakes and synthetic media authenticity.
The technological foundation is built on NVIDIA's StyleGAN and subsequently StyleGAN2 architecture. StyleGAN first maps a random latent vector through a mapping network into an intermediate style vector; the synthesis network then starts from a learned constant tensor and generates feature maps at progressively increasing resolutions. Adaptive Instance Normalization (AdaIN) layers inject style information derived from that vector at each resolution level, giving coarse-to-fine control over attributes such as facial structure and pose at low resolutions and hair color, skin tone, and fine texture at high resolutions. The discriminator network continuously challenges the generator by attempting to distinguish generated images from real face photographs, driving iterative quality improvements. The model is trained on the FFHQ (Flickr-Faces-HQ) dataset comprising 70,000 high-quality face images.
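To make the AdaIN mechanism concrete, here is a minimal PyTorch sketch of style injection: a per-channel scale and bias are predicted from the style vector and applied to a normalized feature map. The class name, dimensions, and the affine projection are illustrative assumptions, not NVIDIA's implementation.

```python
import torch
import torch.nn as nn

class AdaIN(nn.Module):
    """Adaptive Instance Normalization: normalize a feature map, then
    re-scale and re-shift it per channel using parameters predicted from
    the style vector w. Illustrative sketch, not NVIDIA's code."""
    def __init__(self, channels, w_dim):
        super().__init__()
        self.norm = nn.InstanceNorm2d(channels)
        # Affine projection from the style vector to per-channel scale/bias.
        self.style = nn.Linear(w_dim, channels * 2)

    def forward(self, x, w):
        scale, bias = self.style(w).chunk(2, dim=1)   # (N, C) each
        scale = scale[:, :, None, None]               # broadcast over H, W
        bias = bias[:, :, None, None]
        return (1 + scale) * self.norm(x) + bias

# Illustrative usage: inject one style vector into a 64x64 feature map.
w = torch.randn(1, 512)                  # intermediate (style) latent
features = torch.randn(1, 256, 64, 64)   # synthesis-network feature map
styled = AdaIN(256, 512)(features, w)
print(styled.shape)                      # torch.Size([1, 256, 64, 64])
```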
The quality of generated faces has reached a level where the majority of human observers cannot reliably distinguish them from real photographs. Multiple research studies have found that participants' accuracy in identifying StyleGAN faces versus real photographs approaches chance level. The model produces outputs exhibiting diversity across different ethnicities, age groups, genders, and facial expressions. However, occasional artifacts can be observed in areas such as eyeglasses, earrings, asymmetric backgrounds, and fine hair details at image boundaries.
The use cases encompass both legitimate and controversial dimensions. Legitimate applications include placeholder profile photographs for UI/UX design mockups, privacy-friendly user representations in marketing materials, NPC faces for game and virtual world development, educational AI awareness demonstrations, and anonymous face datasets for research purposes. Controversial uses include fake social media profiles, identity fraud, and disinformation campaigns, raising significant ethical concerns about the technology's potential for misuse.
The website is free and openly accessible, requiring no registration for immediate use. The underlying StyleGAN and StyleGAN2 models have been open-sourced by NVIDIA on GitHub, enabling researchers and developers to build upon the technology. Numerous derivative projects have emerged, including This Cat Does Not Exist, This Artwork Does Not Exist, and This Rental Does Not Exist, applying similar technology across different visual domains and demonstrating the generality of the GAN-based approach.
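As a rough illustration of how the open-source release can be used, the sketch below samples one face with NVIDIA's stylegan2-ada-pytorch code. It assumes that repository is cloned and importable and that the FFHQ checkpoint URL is still hosted; the module and function names follow that repository's generate.py and should be verified against the current repo.

```python
# Sketch: sample one face with NVIDIA's open-source stylegan2-ada-pytorch code.
# Assumes the repo is on PYTHONPATH and the checkpoint URL below is still hosted.
import torch
import PIL.Image
import dnnlib, legacy   # modules provided by the stylegan2-ada-pytorch repo

NETWORK_PKL = 'https://nvlabs-fi-cdn.nvidia.com/stylegan2-ada-pytorch/pretrained/ffhq.pkl'
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

with dnnlib.util.open_url(NETWORK_PKL) as f:
    G = legacy.load_network_pkl(f)['G_ema'].to(device)    # trained generator

z = torch.randn([1, G.z_dim], device=device)               # random latent vector
label = torch.zeros([1, G.c_dim], device=device)           # FFHQ is unconditional
img = G(z, label, truncation_psi=0.7, noise_mode='const')  # (1, 3, 1024, 1024) in [-1, 1]

# Convert to uint8 and save, following the repo's generate.py conventions.
img = (img.permute(0, 2, 3, 1) * 127.5 + 128).clamp(0, 255).to(torch.uint8)
PIL.Image.fromarray(img[0].cpu().numpy(), 'RGB').save('face.png')
```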
This Person Does Not Exist is widely regarded as a landmark moment in artificial intelligence history. While technically serving as a demonstrator application for StyleGAN, its societal impact extends far deeper: it brought the potential risks of AI-generated content into mainstream public awareness, accelerated deepfake detection research, and emphasized the critical importance of digital media literacy. The platform continues to serve as a symbolic reference point in ongoing discussions about AI ethics, synthetic media governance, and the broader societal implications of generative models in an era of increasing digital content creation.
Use Cases
Privacy-Preserving Data Generation
Creating synthetic face data for research and training purposes while protecting real people's privacy
Design Mockups
Creating placeholder profile photos for websites, applications, and marketing materials (a minimal fetch sketch follows this list)
Digital Literacy Education
Used as educational material to teach identification of AI-generated images
Character Design
Creating unique character faces and reference images for game, animation, and storytelling projects
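For the design-mockup use case above, the following sketch downloads one synthetic portrait for use as a placeholder image. It assumes the site still serves a fresh JPEG at its root URL to plain HTTP clients; the custom User-Agent string and output file name are illustrative.

```python
# Minimal sketch: pull one synthetic placeholder portrait for a design mockup.
# Assumes thispersondoesnotexist.com still serves a fresh JPEG at its root URL.
import urllib.request

def fetch_placeholder_face(path: str = "placeholder_face.jpg") -> str:
    req = urllib.request.Request(
        "https://thispersondoesnotexist.com/",
        headers={"User-Agent": "Mozilla/5.0 (placeholder-fetch script)"},  # precaution only
    )
    with urllib.request.urlopen(req) as resp, open(path, "wb") as out:
        out.write(resp.read())   # each request returns a different face
    return path

if __name__ == "__main__":
    print(fetch_placeholder_face())
```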
Pros & Cons
Pros
- Photorealistic StyleGAN-based face generation that is often hard to distinguish from real photographs
- Instant new face generation with single click
- Free and web-based — no registration required
- Practical for prototyping and placeholder images
Cons
- Face generation only — no full body or scene creation
- No control — cannot specify age, gender, ethnicity
- Risk of deepfake and fake profile creation
- Occasional artifacts in backgrounds and at hair edges
Technical Details
Parameters
N/A
Architecture
StyleGAN2 (NVIDIA) for face generation
Training Data
FFHQ dataset (70K high-quality face images from Flickr)
License
Proprietary (website); the underlying StyleGAN/StyleGAN2 code is released separately by NVIDIA under its own source code license
Features
- Photorealistic Face Synthesis
- Instant Web Generation
- 1024x1024 Resolution
- StyleGAN2 Powered
- Random Latent Sampling
- No Registration Required
Benchmark Results
| Metric | Value | Compared To | Source |
|---|---|---|---|
| FID Score (StyleGAN2-based) | 2.84 | — | StyleGAN2 Paper (CVPR 2020, NVIDIA) |
| Output Resolution | 1024x1024 | — | thispersondoesnotexist.com |
| Generation Speed | ~0.05s (GPU inference) | — | StyleGAN2 NVIDIA Benchmarks |
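For context on the first row: FID (Fréchet Inception Distance) measures how closely the Inception-feature statistics of generated images match those of real images, with lower values indicating higher fidelity. Using the feature means and covariances of the real and generated sets, it is defined as:

```latex
\mathrm{FID} = \lVert \mu_r - \mu_g \rVert_2^2
  + \operatorname{Tr}\!\bigl(\Sigma_r + \Sigma_g - 2(\Sigma_r \Sigma_g)^{1/2}\bigr)
```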
Related Models
LivePortrait
LivePortrait is an efficient AI portrait animation model developed by Kuaishou Technology that generates expressive and lifelike facial animations from a single static portrait photograph. The model takes a source portrait image and a driving video containing facial movements, then transfers the expressions, head rotations, eye movements, and mouth gestures from the video onto the portrait while maintaining the original person's identity and appearance. Built on an implicit keypoint detection architecture with warping-based rendering, LivePortrait achieves real-time inference speeds that make it practical for interactive applications and live content creation. The model introduces stitching and retargeting modules that prevent common artifacts in portrait animation such as face boundary distortion, neck disconnection, and unnatural eye movements, producing seamless results that preserve the natural appearance of the subject. LivePortrait handles diverse portrait types including photographs, paintings, illustrations, and even cartoon characters, adapting its animation approach to different artistic styles. The model supports fine-grained control over individual facial action units, allowing selective animation of specific facial features like eyebrow raises, eye blinks, or smile intensity independently. Released under the MIT license, LivePortrait is fully open source and has been integrated into ComfyUI and other creative tools. Common applications include creating animated avatars for social media and messaging, producing animated portrait NFTs, generating facial animations for virtual presenters and digital humans, creating engaging content from historical photographs, and building interactive portrait experiences for museums and exhibitions.
StyleGAN3
StyleGAN3 is the third generation of NVIDIA's groundbreaking StyleGAN series of generative adversarial networks, designed to produce high-quality, photorealistic images with fine-grained control over visual attributes. Presented at NeurIPS 2021, StyleGAN3 addresses a fundamental limitation of its predecessors by eliminating texture sticking artifacts that occurred during continuous transformations and animations. Previous GAN architectures suffered from features that appeared fixed to pixel coordinates rather than moving naturally with objects, creating noticeable visual glitches during interpolation. StyleGAN3 solves this through alias-free generation using continuous signal processing principles, ensuring that fine details move smoothly and naturally with the underlying content. The architecture introduces rotation and translation equivariance, meaning generated features transform correctly and consistently when the image undergoes geometric transformations. This makes StyleGAN3 particularly suited for video generation, animation, and any application requiring smooth transitions between generated frames. The model supports configurable output resolutions and maintains the style mixing capabilities from earlier versions, allowing granular control over coarse features like pose and face shape independently from fine details like hair texture and skin quality. StyleGAN3 has been trained on various domains including human faces (FFHQ dataset), animal faces (AFHQv2), and other image categories. The model's code and pretrained weights are publicly released under NVIDIA's source code license, which permits non-commercial research use, with official PyTorch implementations available on GitHub. It continues to serve as a benchmark reference for unconditional image generation quality and has influenced numerous subsequent GAN architectures and diffusion model designs in the generative AI landscape.
ProGAN
ProGAN (Progressive Growing of GANs) is a generative adversarial network architecture developed by NVIDIA researchers Tero Karras, Timo Aila, Samuli Laine, and Jaakko Lehtinen, introduced in 2017, that pioneered progressively growing both generator and discriminator networks during training to produce high-resolution face images. Instead of training at the target resolution directly, ProGAN starts at 4x4 pixels and incrementally adds layers handling progressively higher resolutions, smoothly fading in each detail level. This progressive strategy stabilizes training by learning large-scale structure before fine details, reduces training time compared to full-resolution training from scratch, and enables much higher resolution output than previously possible with GANs. ProGAN was the first GAN to convincingly generate 1024x1024 photorealistic face images, a milestone that captured widespread attention. The model was trained on CelebA-HQ, a high-quality celebrity faces dataset curated for this research. Beyond faces, ProGAN successfully generated high-resolution images of bedrooms, cars, and other categories, demonstrating versatility. The architecture introduced minibatch standard deviation for output diversity and equalized learning rate for training stability. ProGAN is fully open source with official TensorFlow implementations and community PyTorch ports. While subsequent architectures like StyleGAN built upon ProGAN's progressive training foundation to achieve higher quality and controllability, ProGAN remains a landmark contribution that changed how high-resolution GANs are trained and inspired an entire generation of improved generative models.
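To illustrate the smooth fade-in described above, here is a minimal sketch of how the output of a newly added resolution level can be blended with the upsampled output of the previous stage. The function name and tensor shapes are illustrative and do not reproduce the original TensorFlow implementation.

```python
import torch
import torch.nn.functional as F

def progressive_fade_in(low_res_rgb, high_res_rgb, alpha):
    """Blend the upsampled output of the previous (lower-resolution) stage
    with the new stage's output, as in ProGAN's smooth layer fade-in.
    alpha ramps from 0 to 1 while the new resolution level is introduced."""
    upsampled = F.interpolate(low_res_rgb, scale_factor=2, mode='nearest')
    return (1.0 - alpha) * upsampled + alpha * high_res_rgb

# Illustrative shapes: an 8x8 stage blended into a 16x16 stage midway through fade-in.
low = torch.randn(1, 3, 8, 8)
high = torch.randn(1, 3, 16, 16)
print(progressive_fade_in(low, high, alpha=0.5).shape)   # torch.Size([1, 3, 16, 16])
```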
DCGAN Face
DCGAN (Deep Convolutional Generative Adversarial Network) Face is a pioneering architecture introduced by Alec Radford, Luke Metz, and Soumith Chintala in their influential 2015 paper that established foundational principles for using convolutional neural networks in GAN architectures. DCGAN was among the first models to demonstrate that deep convolutional networks could reliably generate coherent images, particularly human faces, moving GANs beyond simple fully-connected architectures into practical image generation. The architecture introduces key design guidelines that became standard practice: replacing pooling layers with strided convolutions in the discriminator and fractional-strided convolutions in the generator, using batch normalization to stabilize training, removing fully connected hidden layers, and applying ReLU activation in the generator with LeakyReLU in the discriminator. Trained on the CelebA celebrity faces dataset, DCGAN Face produces 64x64 pixel facial images that, while modest by modern standards, were groundbreaking at publication. The model also demonstrated meaningful latent space arithmetic, showing that vector operations produce semantically meaningful results such as combining features from different faces. This work has become one of the most cited papers in GAN literature and remains essential reading in deep learning education. DCGAN is fully open source with implementations in PyTorch, TensorFlow, and other frameworks. While surpassed in quality by ProGAN, StyleGAN, and diffusion models, DCGAN remains historically significant as the architecture that proved convolutional GANs were viable for image generation and established design patterns still used in modern generative models.
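As a rough illustration of those design guidelines, the sketch below implements a minimal DCGAN-style generator in PyTorch: transposed (fractional-strided) convolutions, batch normalization, no fully connected hidden layers, ReLU activations, and a tanh output at 64x64. Layer widths and the latent size of 100 follow common convention and are illustrative rather than taken from the original release.

```python
import torch
import torch.nn as nn

class DCGANGenerator(nn.Module):
    """Minimal DCGAN-style generator following the paper's guidelines:
    transposed convolutions, batch norm, no fully connected hidden layers,
    ReLU activations, tanh output, 64x64 RGB images. Illustrative sketch."""
    def __init__(self, z_dim=100, feat=64, channels=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(z_dim, feat * 8, 4, 1, 0, bias=False),     # 1x1 -> 4x4
            nn.BatchNorm2d(feat * 8), nn.ReLU(True),
            nn.ConvTranspose2d(feat * 8, feat * 4, 4, 2, 1, bias=False),  # 4x4 -> 8x8
            nn.BatchNorm2d(feat * 4), nn.ReLU(True),
            nn.ConvTranspose2d(feat * 4, feat * 2, 4, 2, 1, bias=False),  # 8x8 -> 16x16
            nn.BatchNorm2d(feat * 2), nn.ReLU(True),
            nn.ConvTranspose2d(feat * 2, feat, 4, 2, 1, bias=False),      # 16x16 -> 32x32
            nn.BatchNorm2d(feat), nn.ReLU(True),
            nn.ConvTranspose2d(feat, channels, 4, 2, 1, bias=False),      # 32x32 -> 64x64
            nn.Tanh(),
        )

    def forward(self, z):
        return self.net(z.view(z.size(0), -1, 1, 1))

# Illustrative usage: one random latent -> one 64x64 RGB image in [-1, 1].
fake = DCGANGenerator()(torch.randn(1, 100))
print(fake.shape)   # torch.Size([1, 3, 64, 64])
```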