AI Design Glossary
All the AI design terms you need to know, explained simply and clearly
AI Art
Artwork created with AI technologies, or in which AI plays an active role in the creative process. It makes digital art possible without traditional artistic skills.
AI Design
A broad field encompassing the integration of AI technologies into design processes. Its applications range from UI/UX and graphic design to architectural visualization and product design.
CLIP
A multimodal AI model developed by OpenAI that can represent text and images in the same vector space. Used as a prompt understanding layer in image generation tools.
ControlNet
A neural network architecture that adds additional control layers to diffusion models to specify structural conditions like pose, edges, and depth maps during image generation.
Deepfake
A technique that uses deep learning technology to realistically superimpose a person's face, voice, or movements onto another person.
Diffusion Model
A deep learning model that generates images by gradually removing noise. It starts from pure random noise and refines it step by step into a coherent image.
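The reverse (denoising) loop can be sketched in a toy form. A real diffusion model predicts the noise with a trained neural network; here a stand-in "denoiser" simply nudges each value toward a known clean target, so only the loop structure is illustrative:

```python
import random

random.seed(0)

target = [0.2, 0.8, 0.5]                   # pretend "clean image" (toy values)
x = [random.gauss(0, 1) for _ in target]   # start from pure random noise

for _ in range(50):
    # stand-in for "predict and remove a little of the noise" each step
    x = [xi + 0.1 * (ti - xi) for xi, ti in zip(x, target)]

print([round(v, 2) for v in x])  # values have converged near the target
```

After enough steps the residual noise shrinks geometrically, which mirrors how each denoising step leaves a slightly cleaner image.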
Embedding
The process of converting text, images, or other data types into dense, fixed-size numerical vectors. Used for semantic similarity calculation and model input representation.
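Semantic similarity between embeddings is typically measured with cosine similarity. A minimal sketch with toy 4-dimensional vectors (real models use hundreds of dimensions, and the values below are invented for illustration):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy embeddings: semantically close concepts get nearby vectors.
cat = [0.9, 0.1, 0.4, 0.2]
kitten = [0.85, 0.15, 0.45, 0.25]
car = [0.1, 0.9, 0.0, 0.7]

print(cosine_similarity(cat, kitten))  # close to 1.0: similar meaning
print(cosine_similarity(cat, car))     # much lower: different meaning
```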
Fine-Tuning
The process of customizing a pre-trained AI model by providing additional training on a specific task, style, or dataset.
GAN (Generative Adversarial Network)
A deep learning model where two neural networks are trained against each other: a generator and a discriminator. The generator tries to produce realistic data, while the discriminator tries to distinguish between real and fake data.
Generative AI
The general term for AI systems that can produce new and original content based on patterns learned from training data. It covers text, image, video, music, and code generation.
Image-to-Image
A technique that generates or transforms a new image using AI by referencing an existing image. The structure of the input image is preserved while style, content, or details can be modified.
img2img
Abbreviation for image-to-image. In the Stable Diffusion ecosystem, it refers to the mode of generating new images using a reference image. It transforms while preserving the structure of the original image.
Inference
The process where a trained AI model makes predictions or generates output on new inputs. In image generation, it corresponds to converting a prompt into an image.
Inpainting
A technique for masking a specific area of an image and regenerating or editing it with AI. Used to remove unwanted objects or modify selected regions.
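The masking idea can be shown in a toy form. A real inpainting model regenerates the masked region with a diffusion model; here masked cells of a 1D "image" are simply filled with the average of the unmasked values, so only the mask mechanics are illustrative:

```python
image = [10, 12, 0, 0, 20, 22]                 # toy 1D "image"
mask = [False, False, True, True, False, False]  # True = regenerate this cell

known = [v for v, m in zip(image, mask) if not m]
fill = sum(known) / len(known)                 # crude stand-in for generation
result = [fill if m else v for v, m in zip(image, mask)]

print(result)  # [10, 12, 16.0, 16.0, 20, 22]
```

Only the masked cells change; the rest of the image is passed through untouched, which is exactly what an inpainting pipeline guarantees.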
Latent Space
A multidimensional space where data is compressed and mathematically represented. Diffusion models perform image generation in this compressed space for computational efficiency.
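The efficiency gain comes from the size of the compressed representation. A rough back-of-the-envelope calculation, assuming a Stable-Diffusion-style setup where a 512×512 RGB image is encoded to a 64×64×4 latent (the exact shapes vary by model):

```python
# Values per image in pixel space vs. latent space (illustrative shapes).
pixel_values = 512 * 512 * 3   # RGB image: 786,432 values
latent_values = 64 * 64 * 4    # encoded latent: 16,384 values

print(pixel_values / latent_values)  # 48x fewer values to denoise per step
```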
LoRA (Low-Rank Adaptation)
A method for efficiently fine-tuning large AI models by adding small, trainable matrices. The original model weights remain unchanged.
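The saving comes from replacing the full weight update with two small matrices: instead of training a d×d update, LoRA trains a d×r matrix A and an r×d matrix B (with rank r much smaller than d) and adds their product to the frozen weights. A toy parameter count, assuming a 4096×4096 layer and rank 8 (sizes are illustrative):

```python
d, r = 4096, 8  # assumed layer width and LoRA rank

full_params = d * d            # training the weight matrix W directly
lora_params = d * r + r * d    # training only A (d x r) and B (r x d)

print(full_params)                # 16777216
print(lora_params)                # 65536
print(full_params // lora_params)  # 256x fewer trainable parameters
```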
Negative Prompt
A text command that defines unwanted elements in AI image generation. The model generates images while avoiding the elements specified in the negative prompt.
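In many diffusion pipelines the negative prompt is applied through classifier-free guidance: the model's prediction for the negative prompt takes the place of the unconditioned prediction, and the output is pushed away from it. A toy scalar version of that formula (real predictions are large tensors, and 7.5 is just a common default scale):

```python
def guided_prediction(positive, negative, scale=7.5):
    """Classifier-free guidance: negative + scale * (positive - negative)."""
    return negative + scale * (positive - negative)

# With scale > 1, the result is pulled toward the positive prompt's
# prediction and pushed away from the negative prompt's.
print(guided_prediction(positive=1.0, negative=0.2, scale=7.5))  # ~6.2
```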
Outpainting
A technique that extends the boundaries of an existing image using AI. It enlarges the image by creating new areas that are consistent with the original content.
Prompt
A text-based instruction or command given to AI models. It is used to describe the desired output and guides the model's generation process.
Prompt Engineering
A discipline that encompasses techniques and strategies for writing prompts to get the best results from AI models. It includes proper word choice, structuring, and parameter usage.
Style Transfer
A technique that applies the artistic style of one image while preserving the content of another. Typical uses include reinterpreting photographs in the style of famous painters.
Text-to-Image
Technology that generates images from natural language text descriptions using artificial intelligence. The prompt written by the user is interpreted by the AI model and converted into an image.
Text-to-Video
Technology that generates video content from natural language text descriptions using artificial intelligence. It converts text prompts into moving, consistent frame sequences.
Token
The basic unit used by AI models when processing text. It can be a word, word fragment, or character. Prompt length and model capacity are measured in token count.
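A naive illustration of counting tokens. Real tokenizers use subword algorithms such as BPE, so a word like "watercolor" may split into several tokens and the true count usually exceeds the word count; splitting on whitespace only approximates it:

```python
def naive_word_tokens(text):
    """Toy tokenizer: whitespace split (real tokenizers use subword units)."""
    return text.split()

prompt = "a watercolor painting of a lighthouse at sunset"
tokens = naive_word_tokens(prompt)

print(len(tokens))  # 8 "tokens" under this toy scheme
```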
Transformer
A deep learning architecture based on the attention mechanism, capable of processing sequences in parallel. It forms the foundation of both language and vision models.
txt2img
Abbreviation for text-to-image. In the Stable Diffusion ecosystem, it refers to the mode of generating images from text prompts. Unlike img2img, it produces images from scratch.
Upscaling
The process of enlarging low-resolution images with AI while preserving or even enhancing quality. Unlike traditional resizing, it can add detail and sharpen the result.
VAE (Variational Autoencoder)
A probabilistic deep learning model that encodes data into a compressed latent space and can generate new data from this space. Used in the image encoding layer of diffusion models.