AI Design Glossary

All the AI design terms you need to know, explained simply and clearly

A

AI Art

General Concepts

Artwork created with AI technologies, or in which AI plays an active role in the creative process. It makes producing digital art possible without traditional artistic training.

AI Design

General Concepts

A broad field encompassing the integration of AI technologies into design processes. It includes applications ranging from UI/UX and graphic design to architectural visualization and product design.

C

CLIP

Model Architectures

A multimodal AI model developed by OpenAI that can represent text and images in the same vector space. Used as a prompt understanding layer in image generation tools.

ControlNet

Advanced Techniques

A neural network architecture that adds additional control layers to diffusion models to specify structural conditions like pose, edges, and depth maps during image generation.

D

Deepfake

Generation Techniques

A technique that uses deep learning to realistically superimpose one person's face, voice, or movements onto footage of another person.

Diffusion Model

Model Architectures

A deep learning model that generates images by gradually removing noise: starting from pure random noise, it refines the sample step by step into a coherent image.
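The denoising loop can be sketched conceptually. Note this is an illustration only: the "predicted noise" below is computed from a known target so the loop structure is visible, whereas a real diffusion model estimates it with a trained neural network.

```python
import numpy as np

rng = np.random.default_rng(0)

target = np.array([1.0, -1.0, 0.5])   # stands in for "a meaningful image"
x = rng.normal(size=3)                # start from pure random noise

for step in range(100):               # reverse diffusion: denoise step by step
    predicted_noise = x - target      # a trained network would estimate this
    x = x - 0.1 * predicted_noise     # remove a fraction of the predicted noise

print(np.allclose(x, target, atol=1e-2))  # True: the noise has been refined away
```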

E

Embedding

Model Architectures

The process of converting text, images, or other data types into dense, fixed-size numerical vectors. Used for semantic similarity calculation and model input representation.
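As an illustrative sketch (the vectors and their values are invented, not taken from any real model), embeddings make semantic similarity computable with simple vector math:

```python
import numpy as np

# Toy 4-dimensional embeddings; real models use hundreds or
# thousands of dimensions.
cat = np.array([0.9, 0.1, 0.3, 0.0])
kitten = np.array([0.8, 0.2, 0.4, 0.1])
car = np.array([0.0, 0.9, 0.1, 0.8])

def cosine_similarity(a, b):
    # 1.0 = same direction (very similar), near 0.0 = unrelated.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Related terms score higher than unrelated ones.
print(cosine_similarity(cat, kitten) > cosine_similarity(cat, car))  # True
```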

F

Fine-Tuning

Advanced Techniques

The process of customizing a pre-trained AI model by providing additional training on a specific task, style, or dataset.

G

GAN (Generative Adversarial Network)

Model Architectures

A deep learning model where two neural networks are trained against each other: a generator and a discriminator. The generator tries to produce realistic data, while the discriminator tries to distinguish between real and fake data.
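The adversarial setup can be illustrated with a deliberately tiny 1-D example in which both "networks" are single parameters updated with hand-written gradients (real GANs use deep networks and automatic differentiation; all values here are toy choices):

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

real_mean = 3.0   # "real data" is drawn from N(3, 1)
w, b = 0.0, 0.0   # discriminator: D(x) = sigmoid(w*x + b)
mu = 0.0          # generator: G(z) = mu + z
lr, batch = 0.05, 64

for step in range(2000):
    x_real = rng.normal(real_mean, 1.0, batch)
    x_fake = mu + rng.normal(0.0, 1.0, batch)

    # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
    s_real = sigmoid(w * x_real + b)
    s_fake = sigmoid(w * x_fake + b)
    w += lr * np.mean((1 - s_real) * x_real - s_fake * x_fake)
    b += lr * np.mean((1 - s_real) - s_fake)

    # Generator step: shift mu so fakes look real to the current D.
    s_fake = sigmoid(w * (mu + rng.normal(0.0, 1.0, batch)) + b)
    mu += lr * np.mean((1 - s_fake) * w)

# The generator's mean typically drifts from 0 toward the real mean.
print(f"generator mean after training: {mu:.2f}")
```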

Generative AI

General Concepts

The general term for AI systems that can produce new and original content based on patterns learned from training data. It covers text, image, video, music, and code generation.

I

Image-to-Image

Generation Techniques

A technique that generates or transforms a new image using AI by referencing an existing image. The structure of the input image is preserved while style, content, or details can be modified.

img2img

Generation Techniques

Abbreviation for image-to-image. In the Stable Diffusion ecosystem, it refers to the mode of generating new images using a reference image. It transforms while preserving the structure of the original image.

Inference

Basic Concepts

The process where a trained AI model makes predictions or generates output on new inputs. In image generation, it corresponds to converting a prompt into an image.

Inpainting

Generation Techniques

A technique for regenerating or editing a specific area of an image by masking it with AI. Used for removing unwanted objects or modifying specific areas.
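The core masking idea can be sketched with arrays (the "generated" patch below is a stand-in for real model output):

```python
import numpy as np

original = np.arange(16.0).reshape(4, 4)       # the existing image
mask = np.zeros((4, 4))
mask[1:3, 1:3] = 1.0                           # region to regenerate
generated = np.full((4, 4), -1.0)              # stand-in for AI output

# Only masked pixels are replaced; everything else is preserved exactly.
result = mask * generated + (1 - mask) * original
print(result[0, 0], result[1, 1])  # 0.0 -1.0: unmasked kept, masked replaced
```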

L

Latent Space

Model Architectures

A multidimensional space where data is compressed and mathematically represented. Diffusion models perform image generation in this compressed space for computational efficiency.

LoRA (Low-Rank Adaptation)

Advanced Techniques

A method for efficiently fine-tuning large AI models by adding small, trainable matrices. The original model weights remain unchanged.
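A minimal sketch of the idea with toy matrix sizes (the initialization follows the common LoRA convention of starting B at zero, so the adapter is a no-op before training):

```python
import numpy as np

rng = np.random.default_rng(42)

# Frozen pre-trained weight matrix (sizes are made up for illustration).
d_out, d_in, rank = 8, 8, 2
W = rng.normal(size=(d_out, d_in))

# LoRA adds two small trainable matrices A and B; only they are updated
# during fine-tuning, while W itself is never modified.
A = rng.normal(size=(rank, d_in)) * 0.01
B = np.zeros((d_out, rank))

def forward(x):
    return W @ x + B @ (A @ x)  # original path + low-rank correction

x = rng.normal(size=d_in)
print(np.allclose(forward(x), W @ x))  # True: adapter adds nothing yet
```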

N

Negative Prompt

Basic Concepts

A text command that defines unwanted elements in AI image generation. The model generates images while avoiding the elements specified in the negative prompt.
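In many diffusion pipelines this avoidance works through classifier-free guidance; a sketch with made-up noise predictions standing in for model outputs:

```python
import numpy as np

noise_positive = np.array([0.4, -0.2, 0.1])  # conditioned on the prompt
noise_negative = np.array([0.1,  0.3, 0.0])  # conditioned on the negative prompt
guidance_scale = 7.5

# The final prediction is pushed toward the prompt and away from the
# negative prompt; larger scales mean stronger steering.
guided = noise_negative + guidance_scale * (noise_positive - noise_negative)
print(guided)  # [ 2.35 -3.45  0.75]
```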

O

Outpainting

Generation Techniques

A technique that extends the boundaries of an existing image using AI. It enlarges the image by creating new areas that are consistent with the original content.

P

Prompt

Basic Concepts

A text-based instruction or command given to AI models. It is used to describe the desired output and guides the model's generation process.

Prompt Engineering

Basic Concepts

A discipline that encompasses techniques and strategies for writing prompts to get the best results from AI models. It includes proper word choice, structuring, and parameter usage.

S

Style Transfer

Generation Techniques

A technique that applies the artistic style of one image while preserving the content of another. It has uses such as reinterpreting photographs in the style of famous painters.

T

Text-to-Image

Generation Techniques

Technology that generates images from natural language text descriptions using artificial intelligence. The prompt written by the user is interpreted by the AI model and converted into an image.

Text-to-Video

Generation Techniques

Technology that generates video content from natural language text descriptions using artificial intelligence. It converts text prompts into moving, consistent frame sequences.

Token

Basic Concepts

The basic unit used by AI models when processing text. It can be a word, word fragment, or character. Prompt length and model capacity are measured in token count.
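A deliberately naive illustration (real tokenizers such as BPE split text into subword units, so actual counts differ):

```python
def naive_tokenize(text):
    # Lowercase and split on whitespace; punctuation stays attached.
    return text.lower().split()

prompt = "A castle above the clouds, watercolor style"
tokens = naive_tokenize(prompt)
print(len(tokens))  # 7 -- models limit prompt length by counts like this
```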

Transformer

Model Architectures

A deep learning architecture based on the attention mechanism, which lets it process entire sequences in parallel. It forms the foundation of modern language and vision models.
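The attention mechanism at the heart of the architecture can be sketched as scaled dot-product attention (toy sizes and random values, illustration only):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    # Every position attends to every other position in one matrix
    # multiply, which is what enables parallel processing.
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)
    weights = softmax(scores, axis=-1)   # each row sums to 1
    return weights @ V

rng = np.random.default_rng(1)
Q = rng.normal(size=(4, 8))  # 4 sequence positions, 8-dim vectors (toy sizes)
K = rng.normal(size=(4, 8))
V = rng.normal(size=(4, 8))
out = attention(Q, K, V)
print(out.shape)  # (4, 8): one output vector per position
```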

txt2img

Generation Techniques

Abbreviation for text-to-image. In the Stable Diffusion ecosystem, it refers to the mode of generating images from text prompts. Unlike img2img, it produces images from scratch.

U

Upscaling

Generation Techniques

The process of enlarging low-resolution images with AI while preserving, and often enhancing, quality. It differs from traditional resizing in its ability to add detail and sharpen edges.

V

VAE (Variational Autoencoder)

Model Architectures

A probabilistic deep learning model that encodes data into a compressed latent space and can generate new data from this space. Used in the image encoding layer of diffusion models.
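A sketch of the VAE sampling step, the "reparameterization trick" (the encoder outputs below are made-up values; a real VAE learns the encoder and decoder):

```python
import numpy as np

rng = np.random.default_rng(0)

mean = np.array([0.5, -1.0])      # encoder output: center of the latent code
log_var = np.array([-2.0, -2.0])  # encoder output: log-variance of the code

# z = mean + std * noise keeps the randomness differentiable for training.
z = mean + np.exp(0.5 * log_var) * rng.normal(size=2)

# Because z is sampled, decoding it can produce new, varied data.
print(z.shape)  # (2,): a latent code ready for the decoder
```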
