News

A robot powered by V-JEPA 2 can be deployed in a new environment and successfully manipulate objects it has never encountered ...
V-JEPA 2, our state-of-the-art world model, trained on video, enables robots and other AI agents to understand the physical ...
Dubbed a “world model,” Meta’s new V-JEPA 2 AI model uses visual understanding and physical intuition to enhance reasoning ...
Meta on Wednesday announced it’s rolling out a new AI “world model” that can better understand the 3D environment and ...
The new open-source model, called Video Joint Embedding Predictive Architecture 2, or V-JEPA 2, is designed to help artificial intelligence understand things like gravity and object permanence, Meta ...
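The items above describe V-JEPA 2 only at a high level; the core idea of a joint embedding predictive architecture is to predict the latent representation of masked video regions rather than reconstructing pixels. The following is a minimal, hypothetical PyTorch sketch of that training pattern. It is not Meta's released code: the toy dimensions, the zero-out masking scheme, and the names (encoder, predictor, target_encoder, ema_update) are illustrative assumptions.

```python
# Illustrative JEPA-style training sketch (not Meta's V-JEPA 2 implementation).
# Encode visible video patches, predict latent features of masked patches,
# and regress them against a slowly updated (EMA) target encoder's output.
import copy
import torch
import torch.nn as nn

D_IN, D_EMB, N_PATCHES = 768, 256, 196   # assumed toy dimensions

encoder = nn.Sequential(nn.Linear(D_IN, D_EMB), nn.GELU(), nn.Linear(D_EMB, D_EMB))
predictor = nn.Sequential(nn.Linear(D_EMB, D_EMB), nn.GELU(), nn.Linear(D_EMB, D_EMB))
target_encoder = copy.deepcopy(encoder)          # tracked by EMA, never trained directly
for p in target_encoder.parameters():
    p.requires_grad_(False)

opt = torch.optim.AdamW(
    list(encoder.parameters()) + list(predictor.parameters()), lr=1e-4
)

def ema_update(online, target, decay=0.99):
    """Slowly move the target encoder toward the online encoder's weights."""
    with torch.no_grad():
        for po, pt in zip(online.parameters(), target.parameters()):
            pt.mul_(decay).add_(po, alpha=1 - decay)

for step in range(10):                           # toy loop over random "video patches"
    patches = torch.randn(8, N_PATCHES, D_IN)    # (batch, patches, patch features)
    mask = torch.rand(8, N_PATCHES) < 0.5        # positions the predictor must infer

    ctx = encoder(patches * (~mask).unsqueeze(-1))   # context view: masked patches zeroed
    pred = predictor(ctx)                            # predicted latents for all positions
    with torch.no_grad():
        tgt = target_encoder(patches)                # targets come from the full clip

    loss = (pred[mask] - tgt[mask]).abs().mean()     # L1 loss on masked positions only
    opt.zero_grad(); loss.backward(); opt.step()
    ema_update(encoder, target_encoder)
```

In the published V-JEPA line of work the encoders are vision transformers over spatiotemporal patches and the loss is computed entirely in latent space; the sketch keeps only that structural point, with everything else simplified.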