Mochi 1 Preview
Mochi 1 Preview is an open-source text-to-video AI model developed by Genmo that sets a new standard for motion quality and physical realism in generated video content. With 10 billion parameters built on an Asymmetric Diffusion Transformer architecture, Mochi 1 Preview produces videos with remarkably natural and physically plausible motion that distinguishes it from competing models. The asymmetric architecture processes spatial and temporal information through dedicated pathways optimized for their respective characteristics, resulting in videos where objects move with realistic momentum, gravity, and interaction dynamics. Mochi 1 Preview generates 480p resolution videos at 30 frames per second with smooth, continuous motion free from the temporal flickering and object morphing artifacts common in earlier video generation models. The model demonstrates strong understanding of real-world physics including fluid dynamics, rigid body interactions, and natural phenomena like fire, smoke, and water, producing content that feels grounded in physical reality. Mochi 1 Preview responds well to detailed text prompts describing camera movements, scene transitions, and specific motion choreography, giving creators meaningful control over the generated output. Released under the Apache 2.0 license, the model is fully open source and represents one of the strongest open alternatives to proprietary video generation services. It is available through Hugging Face and supported by cloud inference providers for accessible deployment. Key applications include creating concept videos for film and advertising pre-production, generating social media video content, producing animated product demonstrations, creating visual references for motion design projects, and prototyping video ideas before committing to expensive live-action production.
Key Highlights
Realistic Motion and Physics
Generates natural, realistic motion with physically plausible dynamics, from momentum to object interactions.
Temporal Coherence
Produces flicker-free and smooth video output by maintaining strong temporal coherence between frames.
Open Source and Accessible
Released as fully open source, allowing community development and customization by developers worldwide.
Natural Dynamics
Realistically simulates natural dynamics such as water flow, wind effects, and object interactions.
About
Mochi 1 Preview is an open-source text-to-video AI model developed by Genmo. Producing particularly impressive results in motion quality and temporal consistency, Mochi 1 Preview has set a new standard among open-source video generation models. Released as a preview ahead of the full Mochi 1, it is the first publicly available implementation of the AsymmDiT architecture and a demonstration of its potential. It has attracted significant attention as one of the pioneering projects of the open-source movement in video generation.
The most notable feature of Mochi 1 Preview is the naturalness and smoothness of motion in its generated videos. The model uses a new architecture called the Asymmetric Diffusion Transformer (AsymmDiT), which processes the temporal and spatial dimensions of video generation more effectively. Text tokens and video tokens are handled through attention patterns tailored to each modality, optimizing efficiency and quality simultaneously. The model can produce videos up to 5.4 seconds long at 848x480 pixel resolution, and its 30fps frame rate ensures smooth, natural-looking output. Its T5-XXL text encoder demonstrates strong performance in translating detailed descriptions of complex scenes into their correct visual counterparts.
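To make the asymmetry concrete, the sketch below shows one way such a joint attention layer could look in PyTorch: text and video tokens keep residual streams of different widths but attend jointly over a shared sequence. The class name, dimensions, and layer layout are illustrative assumptions, not Genmo's actual implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AsymmetricJointAttention(nn.Module):
    """Conceptual sketch of asymmetric multi-modal self-attention:
    the video stream is wider than the text stream, but both modalities
    project into a shared attention space and attend over one sequence.
    All sizes here are hypothetical, chosen only for illustration."""

    def __init__(self, video_dim=3072, text_dim=1536, attn_dim=3072, num_heads=24):
        super().__init__()
        self.num_heads = num_heads
        self.head_dim = attn_dim // num_heads
        # Separate projections per modality (the "asymmetric" part):
        self.video_qkv = nn.Linear(video_dim, 3 * attn_dim)
        self.text_qkv = nn.Linear(text_dim, 3 * attn_dim)
        self.video_out = nn.Linear(attn_dim, video_dim)
        self.text_out = nn.Linear(attn_dim, text_dim)

    def forward(self, video_tokens, text_tokens):
        B, Nv, _ = video_tokens.shape

        def split_heads(x):
            # (B, N, attn_dim) -> (B, heads, N, head_dim)
            return x.view(B, -1, self.num_heads, self.head_dim).transpose(1, 2)

        qv, kv, vv = self.video_qkv(video_tokens).chunk(3, dim=-1)
        qt, kt, vt = self.text_qkv(text_tokens).chunk(3, dim=-1)
        # Joint attention over the concatenated video + text sequence
        q = split_heads(torch.cat([qv, qt], dim=1))
        k = split_heads(torch.cat([kv, kt], dim=1))
        v = split_heads(torch.cat([vv, vt], dim=1))
        out = F.scaled_dot_product_attention(q, k, v)
        out = out.transpose(1, 2).reshape(B, -1, self.num_heads * self.head_dim)
        # Route results back to each modality's own width
        return self.video_out(out[:, :Nv]), self.text_out(out[:, Nv:])

# Example usage with dummy tensors:
layer = AsymmetricJointAttention()
video = torch.randn(1, 1024, 3072)  # e.g. patchified video latents
text = torch.randn(1, 77, 1536)     # encoded prompt tokens
v_out, t_out = layer(video, text)
```

Keeping the text stream narrow while the visual stream stays wide concentrates parameters on visual reasoning, which is the efficiency argument usually given for this design.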
In quality evaluations, Mochi 1 Preview delivers strong results in creating physically realistic scenes. It accurately renders physical interactions such as human movement, fluid dynamics, and camera motion, and it has achieved high scores on the VBench benchmark, particularly in the motion-consistency and aesthetic-quality categories. The model's temporal consistency, meaning that objects and scenes remain coherent from one frame of a video to the next, is remarkable among open-source models. Its ability to naturally render complex motion such as hair, fabric, and smoke reveals the depth of its understanding of the physical world, and color harmony and lighting consistency are maintained throughout each video for a professional appearance.
Mochi 1 Preview's practical applications span a wide range of use cases. It serves as an ideal tool for storyboard animation and concept visualization in creative video production. In social media content creation, it offers an accessible way for individual creators to produce short, impactful clips. In education, it facilitates visual explanation of scientific concepts, historical events, and complex processes. In the gaming and animation industries, it can supply reference videos and atmosphere studies, and its ability to quickly produce client presentation materials adds professional value for advertising agencies.
Genmo has released Mochi 1 Preview under the Apache 2.0 license, which also covers commercial use. The model can be used for both research and production purposes. Weights and source code are accessible through Hugging Face and GitHub. ComfyUI integration is available, allowing easy incorporation into visual workflows. Thanks to Diffusers library support, custom Python-based pipelines can be created for specialized applications. Considered an important step in the democratization of video generation, Mochi 1 Preview also forms the foundation of Genmo's commercial video generation platform and has played a critical role in gathering community feedback on the path to the full Mochi 1 release.
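As a concrete starting point, a minimal text-to-video pipeline using the Diffusers integration might look like the sketch below. The model identifier comes from the Hugging Face model card; the prompt, frame count, and memory-saving calls are illustrative and may need adjusting for your Diffusers version and GPU.

```python
import torch
from diffusers import MochiPipeline
from diffusers.utils import export_to_video

# Load the preview weights from Hugging Face; bfloat16 keeps memory manageable.
pipe = MochiPipeline.from_pretrained("genmo/mochi-1-preview", torch_dtype=torch.bfloat16)
pipe.enable_model_cpu_offload()  # offload idle submodules to CPU to reduce VRAM use
pipe.enable_vae_tiling()         # decode the video latents in tiles

prompt = "Close-up of ocean waves breaking on a rocky shore at sunset"  # illustrative prompt
frames = pipe(prompt, num_frames=85).frames[0]
export_to_video(frames, "mochi.mp4", fps=30)
```

On smaller GPUs, CPU offloading and VAE tiling trade generation speed for memory headroom; without them, running the full bfloat16 pipeline typically requires a high-end accelerator.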
Use Cases
Creative Video Content Generation
Accelerating creative production by generating realistic motion video content from text descriptions.
Physics-Based Simulation
Creating motion and interaction simulations that follow natural physics rules.
Prototype and Concept Video
Creating rapid prototype videos to visualize product and project concepts.
Research and Academic Work
Serving as an open-source foundation model for academic research in video generation and motion modeling.
Pros & Cons
Pros
- Early access version of Genmo AI's Mochi 1 model
- Open source — open to community contributions
- First implementation of AsymmDiT architecture
- Usable for research and prototyping
Cons
- Preview version — no stability or quality guarantee
- Low resolution and short video durations
- Limited features compared to full Mochi 1 model
- High GPU requirements
Technical Details
Parameters
10B
Architecture
Asymmetric Diffusion Transformer
Training Data
Proprietary
License
Apache 2.0
Features
- Realistic motion
- Physics simulation
- 480p output
- Open source
- Temporal coherence
- Natural dynamics
Benchmark Results
| Metric | Value | Compared To | Source |
|---|---|---|---|
| Çözünürlük & Süre | 848×480, 5.4 saniye (163 kare) | CogVideoX-5B: 720×480, 6 saniye | Genmo Official Blog |
| Hareket Kalitesi (VBench Motion) | 0.85 | Open-Sora 1.2: 0.78 | Genmo Technical Report |
| Parametre Sayısı | 10B (AsymmetricDiT) | CogVideoX-5B: 5B | Hugging Face Model Card |
Related Models
Sora
Sora is OpenAI's groundbreaking text-to-video generation model that can create realistic and imaginative video content up to one minute long from text descriptions, still images, or existing video inputs. Announced in February 2024, Sora represents a major advancement in video generation AI, demonstrating an unprecedented ability to understand and simulate the physical world in motion with remarkable temporal coherence and visual fidelity. The model operates as a diffusion transformer trained on a vast dataset of video and image data at varying durations, resolutions, and aspect ratios, enabling it to generate content in multiple formats without cropping or resizing. Sora can produce videos with complex camera movements, multiple characters with consistent appearances, detailed environments with accurate lighting and reflections, and physically plausible interactions between objects. The model demonstrates emergent capabilities in understanding 3D consistency, object permanence, and cause-and-effect relationships within generated scenes. Beyond text-to-video generation, Sora supports image-to-video animation, video extension, video-to-video style transfer, and connecting multiple video segments with seamless transitions. The model handles a wide range of creative styles from photorealistic footage to animated content, architectural visualizations, and abstract artistic compositions. As a proprietary model, Sora is available exclusively through OpenAI's platform with usage-based pricing and content safety filtering. While the model occasionally struggles with complex physical simulations and may produce artifacts in longer sequences, its overall quality and versatility have established it as a benchmark for video generation capability, pushing the boundaries of what AI can achieve in dynamic visual content creation.
Runway Gen-3 Alpha
Runway Gen-3 Alpha is an advanced video generation model developed by Runway that offers fine-grained temporal and visual control over generated video content, representing a significant evolution from the company's earlier Gen-1 and Gen-2 models. Released in June 2024, Gen-3 Alpha was trained jointly on images and videos to develop deep understanding of both spatial composition and temporal dynamics, resulting in substantially improved motion coherence, visual fidelity, and prompt adherence. The model supports both text-to-video and image-to-video generation modes, allowing users to create video from detailed text descriptions or animate existing still images with natural motion. Gen-3 Alpha introduces enhanced camera control capabilities, enabling users to specify pans, tilts, zooms, and tracking shots through intuitive text-based or parametric controls. The model excels at generating consistent character appearances across frames, maintaining temporal coherence in complex scenes, and accurately interpreting nuanced creative direction from text prompts. It handles diverse visual styles including photorealistic footage, cinematic compositions, stylized animation, and artistic interpretations with professional-grade quality. The model also supports motion brush functionality for localized motion control and video extension for seamlessly continuing existing clips. As a proprietary model available exclusively through Runway's platform, Gen-3 Alpha operates on a credit-based pricing system with various subscription tiers. It has been widely adopted by filmmakers, content creators, and advertising professionals as a rapid prototyping and production tool for video content that previously required extensive live-action filming or complex CGI production pipelines.
Veo 3
Veo 3 is Google DeepMind's most advanced video generation model, producing high-quality video content with native audio from text descriptions. The model generates videos at up to 4K resolution with remarkable temporal consistency, smooth motion, and realistic physics simulation. Veo 3's most distinguishing feature is generating synchronized audio alongside video, including ambient sounds, music, dialogue, and sound effects matching the visual content, eliminating the need for separate audio generation. The model understands cinematic concepts including camera movements like dolly shots, pans, and zooms, lighting conditions, depth of field, and film grain effects, enabling professional-grade cinematographic directions in prompts. Veo 3 handles complex multi-subject scenes with coherent interactions, maintains character consistency throughout clips, and produces natural-looking transitions between actions and poses. The architecture builds on Google DeepMind's diffusion transformer expertise and leverages large-scale training on diverse video datasets for broad stylistic range from photorealistic footage to animation and artistic interpretations. Video outputs extend to multiple seconds with smooth temporal coherence. The model is available through Google's AI platforms and integrated into creative tools within the Google ecosystem. Applications span advertising content creation, social media video production, film previsualization, educational content, product demonstrations, and creative storytelling. Veo 3 represents the current state of the art in AI video generation, setting new benchmarks for quality, audio integration, and prompt understanding in the generative video space.
Runway Gen-4 Turbo
Runway Gen-4 Turbo is Runway's fastest and most advanced video generation model, producing high-quality AI-generated video with significantly improved speed, visual fidelity, and motion coherence compared to predecessors. The model generates videos from text descriptions and image inputs with enhanced temporal consistency, producing smooth natural-looking motion that maintains subject integrity throughout clips. Gen-4 Turbo features substantially faster inference than previous Runway models, making it practical for iterative creative workflows where rapid feedback is essential. It handles diverse content types including human figures with realistic body mechanics, natural environments with dynamic elements, architectural scenes with accurate perspective, and abstract artistic compositions. Multiple generation modes are supported: text-to-video for creating clips from descriptions, image-to-video for animating still images, and video-to-video for style transformations on existing footage. The architecture builds on Runway's years of video diffusion research, incorporating temporal attention mechanisms and motion modeling for physically plausible results. Gen-4 Turbo is available through Runway's web platform and API with integration options for creative applications. Professional use cases include commercial content creation, social media video production, music video concepts, film previsualization, product advertising, and motion design. The model operates on a credit-based pricing system within Runway's subscription tiers. Gen-4 Turbo solidifies Runway's position as a leading AI video generation platform, offering professional-grade tools enabling creators to produce compelling video content without traditional production infrastructure.