Meta’s Motivo AI Model Could Deliver More Lifelike Digital Avatars: Here’s How it Works

Meta is researching and developing new AI models that could have potential uses in Web3 applications. The Facebook parent firm has released an AI model called Meta Motivo that can control the body movements of digital avatars, with the aim of improving the overall metaverse experience. The newly unveiled model is expected to offer more natural body motion and interaction for avatars in metaverse ecosystems.

The company claims that Motivo is a 'first-of-its-kind' behavioural foundation model. The AI model can enable virtual human avatars to complete a variety of complex whole-body tasks, while making virtual physics more seamless in the metaverse.

Through unsupervised reinforcement learning, Meta has made it possible for Motivo to perform an array of tasks in complex environments. A novel algorithm was used to train the model on an unlabelled dataset of motions, helping it pick up human-like behaviours while retaining zero-shot inference capabilities, the company said in a blog post.
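
Meta has not detailed the training recipe here, but the general idea of fitting a single latent-conditioned policy on unlabelled motion clips can be sketched very roughly as below. This toy uses plain supervised regression rather than reinforcement learning, and every name, dimension, and network in it is an illustrative assumption, not Meta's actual code.

```python
# Toy stand-in, not Meta's training code: fit one latent-conditioned policy
# on unlabelled motion clips (no task labels anywhere in the data).
import torch
import torch.nn as nn

OBS_DIM, ACT_DIM, Z_DIM = 64, 32, 16  # arbitrary illustrative sizes

encoder = nn.GRU(OBS_DIM, Z_DIM, batch_first=True)        # clip -> latent z
policy = nn.Sequential(nn.Linear(OBS_DIM + Z_DIM, 128),
                       nn.ReLU(),
                       nn.Linear(128, ACT_DIM))           # (pose, z) -> action
opt = torch.optim.Adam(list(encoder.parameters()) + list(policy.parameters()),
                       lr=3e-4)

# Stand-ins for an unlabelled motion dataset: 8 clips of 50 frames each,
# with per-frame targets derived from the clips themselves.
poses = torch.randn(8, 50, OBS_DIM)
targets = torch.randn(8, 50, ACT_DIM)

for step in range(100):
    _, h = encoder(poses)                 # summarise each clip into a latent
    z = h[-1]                             # (8, Z_DIM)
    z_rep = z.unsqueeze(1).expand(-1, poses.shape[1], -1)
    pred = policy(torch.cat([poses, z_rep], dim=-1))
    loss = ((pred - targets) ** 2).mean() # reproduce each clip, given its latent
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Because no task labels are involved, the same policy can later be steered simply by handing it a different latent, which is the shape of the zero-shot capability Meta describes.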

Announcing the launch of Motivo on X, Meta shared a short video demo showing what integrating the model with virtual avatars would look like. The clip showed a humanoid avatar performing whole-body tasks such as dance moves and kicks. Meta said it is using unsupervised reinforcement learning to elicit these human-like behaviours in virtual avatars, as part of its attempts to make them look more realistic.

The company says that Motivo can solve a range of whole-body control tasks, including motion tracking, goal pose reaching, and reward optimisation, without any additional training.
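
To make that zero-shot claim concrete, here is a minimal, purely hypothetical sketch of what such an interface could look like: one frozen policy, with motion tracking, goal reaching, and reward optimisation each expressed only as a different latent 'prompt'. The function names, dimensions, and weight matrices are stand-ins invented for illustration and are not Meta's Motivo API.

```python
# Illustrative sketch only, not Meta's actual Motivo interface.
# One shared, frozen policy; the task is specified purely through a latent z.
import numpy as np

OBS_DIM, ACT_DIM, Z_DIM = 64, 32, 16   # arbitrary illustrative sizes
rng = np.random.default_rng(0)

# Stand-ins for pretrained weights, frozen at inference time.
W_pi = rng.standard_normal((ACT_DIM, OBS_DIM + Z_DIM)) * 0.1
W_track = rng.standard_normal((Z_DIM, OBS_DIM)) * 0.1   # motion frame -> task latent
W_goal = rng.standard_normal((Z_DIM, OBS_DIM)) * 0.1    # goal pose -> task latent
W_reward = rng.standard_normal((Z_DIM, OBS_DIM)) * 0.1  # reward weights -> task latent

def policy(obs: np.ndarray, z: np.ndarray) -> np.ndarray:
    """Same policy for every task; only the latent prompt z changes."""
    return np.tanh(W_pi @ np.concatenate([obs, z]))

def prompt_tracking(reference_frame: np.ndarray) -> np.ndarray:
    """Embed a frame of reference motion to imitate."""
    return W_track @ reference_frame

def prompt_goal(goal_pose: np.ndarray) -> np.ndarray:
    """Embed a target pose to reach."""
    return W_goal @ goal_pose

def prompt_reward(reward_weights: np.ndarray) -> np.ndarray:
    """Embed a description of the reward to optimise."""
    return W_reward @ reward_weights

obs = rng.standard_normal(OBS_DIM)
for z in (prompt_tracking(rng.standard_normal(OBS_DIM)),
          prompt_goal(rng.standard_normal(OBS_DIM)),
          prompt_reward(rng.standard_normal(OBS_DIM))):
    action = policy(obs, z)   # no retraining between tasks, only a new prompt
    print(action.shape)       # (32,)
```

The point of the sketch is the design choice itself: tracking, goal reaching, and reward optimisation all flow through the same frozen policy, differing only in how the prompt is embedded.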

Reality Labs is the Meta unit working on the company's metaverse-related initiatives. Since being launched in 2022, Reality Labs has recorded consecutive losses. Despite the pattern, Zuckerberg has continued to bet on the metaverse, testing newer technologies to fine-tune the overall experience.

Earlier this year, Meta showcased a demo of Hyperscape, which turns a smartphone camera into a gateway to photorealistic metaverse environments. The tool lets smartphones scan physical spaces and transform them into hyperrealistic metaverse backgrounds.

In June, Meta split its Reality Labs team into two divisions: one tasked with the metaverse-focussed Quest headsets, and the other responsible for hardware wearables that Meta may launch in the future. The move was aimed at consolidating the time the Reality Labs team spends developing newer AI and Web3 technologies.