
【2025-11】Cosmos-Predict2.5 - 01 - Data: World Simulation with Video Foundation Models for Physical AI【200M raw videos (35M hours) ➞ 6B curated clips ➞ 4% ➞ 200M high-quality clips】


“World Simulation with Video Foundation Models for Physical AI”

Abstract

We introduce [Cosmos-Predict2.5], the latest generation of the Cosmos World Foundation Models for Physical AI. Built on a flow-based architecture, [Cosmos-Predict2.5] unifies Text2World, Image2World, and Video2World generation in a single model and leverages [Cosmos-Reason1], a Physical AI vision-language model, to provide richer text grounding and finer control of world simulation. Trained on 200M curated video clips and refined with reinforcement learning-based post-training, [Cosmos-Predict2.5] achieves substantial improvements over [Cosmos-Predict1] in video quality and instruction alignment, with models released at multiple scales. These capabilities enable more reliable synthetic data generation, policy evaluation, and closed-loop simulation for robotics and autonomous systems. We further extend the family with [Cosmos-Transfer2.5], a control-net style framework for Sim2Real and Real2Real world translation. Despite being 3.5× smaller than [Cosmos-Transfer1], it delivers higher fidelity and robust long-horizon video generation. Together, these advances establish [Cosmos-Predict2.5] and [Cosmos-Transfer2.5] as versatile tools for scaling embodied intelligence. To accelerate research and deployment in Physical AI, we release source code, pretrained checkpoints, and curated benchmarks under the NVIDIA Open Model License at https://github.com/nvidia-cosmos/cosmos-predict2.5 and https://github.com/nvidia-cosmos/cosmos-transfer2.5. We hope these open resources lower the barrier to adoption and foster innovation in building the next generation of embodied intelligence.
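The data funnel quoted in the title (200M raw videos, 35M hours, ~6B clips, ~4% kept, 200M final clips) can be sanity-checked with quick arithmetic. The sketch below only restates the title's rounded figures; it assumes nothing beyond them:

```python
# Back-of-envelope check of the curation funnel quoted in the title.
# All numbers come from the title and should be treated as rounded figures.
raw_videos = 200_000_000          # ~200M raw videos
raw_hours = 35_000_000            # ~35M hours of footage
candidate_clips = 6_000_000_000   # ~6B clips after splitting and curation
final_clips = 200_000_000         # ~200M high-quality clips kept

keep_rate = final_clips / candidate_clips          # fraction of clips kept
clips_per_video = candidate_clips / raw_videos     # clips yielded per raw video
avg_video_minutes = raw_hours * 60 / raw_videos    # mean raw video length

print(f"{keep_rate:.1%}")           # 3.3% -- the title rounds this to ~4%
print(f"{clips_per_video:.0f}")     # 30
print(f"{avg_video_minutes:.1f}")   # 10.5
```

So the "4%" in the title is a rounding of a ~3.3% keep rate, and each raw video (about 10.5 minutes long on average) yields roughly 30 candidate clips.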


Contents

1 Introduction
2 Data
2.1 Video Curation Pipeline
2.2 Domain Specific Data
2.2.1 Robotics
2.2.2 Autonomous Driving
2.2.5 Physics
3 Method
3.1 Flow Matching
3.2 Network Architecture
4 Training
4.1 Pre-training
4.2 Post-training
4.2.1 Supervised Fine-tuning
4.2.2 Reinforcement Learning
4.2.3 Timestep Distillation
4.3 Infrastructure
5 Results
6 Applications
6.1 Cosmos-Transfer2.5
6.1.1 Results
6.1.2 Long Video Generation
6.2 Cosmos-Transfer2.5 for Robot Policy Learning
6.2.1 System and Task Settings
6.2.2 Data Augmentation Strategy
6.2.3 Experiments and Results
6.3 Cosmos-Transfer2.5 for Driving Simulation
6.3.1 Model Architecture
6.3.2 Training Datasets
6.3.3 Experiments and Results
6.4 Multi-view Generation with Camera Control
6.5 Synthetic Data Generation for VLA Training
6.6 Action-Conditioned World Generation
7 Related Work
8 Conclusion
A Contributors and Acknowledgments
A.1 Contributors
A.2 Acknowledgments

1. Introduction

Physical AI systems, embodied agents equipped with sensors and actuators, assist humans in carrying out physical tasks by interacting with the environment: their actuators act on the world in response to sensory inputs. Training such systems directly in the real world, however, is often slow, costly, and risky.

This is particularly true in the early stages, when system imperfections may lead to unsafe actions that damage either the agent, the environment, or both.

A world simulator that can generate high-quality, diverse visual environments based on the actions of a Physical AI agent can serve as a safe proxy for the physical world.

Such simulators enable agents to acquire perception and control skills entirely in silicon before deployment in the real world (NVIDIA, 2025).

In this paper, we introduce [Cosmos-Predict2.5], our latest world foundation model based on flow matching that significantly enhances simulation fidelity across diverse Physical AI domains.
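Flow matching, the training objective named above, is detailed in the paper's Section 3.1 (not reproduced here). As a minimal, hedged sketch of the general technique from the literature (the conditional flow matching / rectified-flow formulation with a linear noise-to-data path, not necessarily this paper's exact recipe):

```python
import numpy as np

rng = np.random.default_rng(0)

def fm_training_targets(x1, t, rng):
    """Build one conditional-flow-matching training example (linear path).

    x1 is a data sample (a stand-in for a latent video tensor) and t is a
    time in [0, 1]. Returns (x_t, v_target): the network input and the
    velocity the network should regress to.
    """
    x0 = rng.standard_normal(x1.shape)  # Gaussian-noise endpoint of the path
    x_t = (1.0 - t) * x0 + t * x1       # straight-line interpolant
    return x_t, x1 - x0                 # constant velocity of that line

def fm_loss(v_pred, v_target):
    """Mean-squared flow-matching loss for one example."""
    return float(np.mean((v_pred - v_target) ** 2))

# Sanity check: at t = 1 the interpolant equals the data itself, and a
# perfect velocity predictor achieves zero loss.
x1 = rng.standard_normal((4, 4))
x_t, v_target = fm_training_targets(x1, t=1.0, rng=rng)
print(np.allclose(x_t, x1), fm_loss(v_target, v_target))  # True 0.0
```

At sampling time, the learned velocity field is integrated from noise (t = 0) to data (t = 1), e.g. with a simple Euler loop; this is the sense in which a flow-based model replaces the iterative denoising of a diffusion model.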


[Cosmos-Predict2.5] leapfrogs the diffusion video world model in [Cosmos-Predict1] (NVIDIA, 2025) via three key improvements. First, we strengthen the data filtering pipeline to produce higher-quality pre-training datasets and manually curate specialized post-training data tailored for Physical AI. Second, we simplify the model architecture and combine Text2World, Image2World, and Video2World capabilities into a single model. Third, we improve the training recipe, leveraging model merging and a novel reinforcement learning algorithm for post-training, and replace the T5 text encoder used in [Cosmos-Predict1] with [Cosmos-Reason1] (NVIDIA, 2025), a modern decoder-only VLM architecture specialized for Physical AI, providing richer text representations and enabling finer-grained control over world generation. Through extensive experiments, we demonstrate that [Cosmos-Predict2.5] delivers substantial gains over [Cosmos-Predict1] in both output quality and prompt alignment.


We further demonstrate that these advancements yield broad and practical benefits across diverse downstream Physical AI applications.

In particular, they enable more efficient synthetic data generation for Vision-Language-Action (VLA) model training (Jang et al., 2025), a key ingredient for scaling embodied intelligence.

Beyond this, [Cosmos-Predict2.5] improves action-conditioned video world generation for policy validation, enhances coherent multi-view video world generation for autonomous driving simulation, and supports camera-controllable multi-view video world generation for robotic manipulation.

