
Apple Researchers Working on MM1, a Family of Multimodal AI Model With Up to 30 Billion Parameters

Apple researchers have shared their work on building a multimodal artificial intelligence (AI) large language model (LLM) in a pre-print paper. Published on an online portal on March 14, the paper describes how the team achieved multimodal capabilities by training the foundation model on both text-only data and images. The advancement for the Cupertino-based tech giant comes after CEO Tim Cook said during the company's earnings call that AI features could arrive later this year.

The pre-print version of the research paper has been published on arXiv, an open-access online repository of scholarly papers; papers posted there are not peer-reviewed. While the paper itself does not mention Apple, most of the listed researchers are affiliated with the company's machine learning (ML) division, suggesting that the project belongs to the iPhone maker as well.

As per the researchers, they are working on MM1, a family of multimodal models containing up to 30 billion parameters. Calling it a "performant multimodal LLM (MLLM)," the authors highlight that the image encoders, the vision-language connector, and other architecture components and data choices were made to create an AI model capable of understanding both text-based and image-based inputs.

Giving an example, the paper stated, “We demonstrate that for large-scale multimodal pre-training using a careful mix of image-caption, interleaved image-text, and text-only data is crucial for achieving state-of-the-art (SOTA) few-shot results across multiple benchmarks, compared to other published pre-training results.”

To break it down, the AI model is currently in the pre-training phase, meaning it has not yet been trained to the point of producing the desired outputs. This is the stage at which the architecture and training workflow are designed, determining how the model will eventually process data. The team of Apple researchers added computer vision to the model using image encoders and a vision-language connector. When testing with a mix of image-only, image-and-text, and text-only data sets, the team found the results competitive with existing models at the same stage.
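The pipeline the paragraph describes (image encoder → vision-language connector → unified token sequence for the LLM) can be sketched roughly as follows. This is a minimal illustration with made-up dimensions and random weights standing in for trained components; it is not MM1's actual architecture or sizes.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions chosen for illustration only.
IMG_PATCHES = 16   # number of image patches produced by the vision encoder
PATCH_DIM = 32     # raw (flattened) patch dimension
VISION_DIM = 64    # vision-encoder output dimension
LLM_DIM = 128      # LLM token-embedding dimension
TEXT_TOKENS = 8    # number of text tokens in the prompt

def image_encoder(patches):
    """Stand-in for an image encoder (e.g. a ViT): patches -> vision embeddings."""
    w = rng.standard_normal((PATCH_DIM, VISION_DIM)) * 0.02
    return patches @ w

def vision_language_connector(vision_embeddings):
    """Project vision embeddings into the LLM's token-embedding space."""
    w = rng.standard_normal((VISION_DIM, LLM_DIM)) * 0.02
    return vision_embeddings @ w

# Fake inputs: flattened image patches and already-embedded text tokens.
patches = rng.standard_normal((IMG_PATCHES, PATCH_DIM))
text_embeddings = rng.standard_normal((TEXT_TOKENS, LLM_DIM))

# The connector's output has the same width as text embeddings, so image
# "tokens" and text tokens can be concatenated into one sequence for the LLM.
image_tokens = vision_language_connector(image_encoder(patches))
llm_input = np.concatenate([image_tokens, text_embeddings], axis=0)

print(llm_input.shape)  # (24, 128): image tokens followed by text tokens
```

The key design point is that the connector makes visual features look like ordinary token embeddings, which is what lets a single language model attend over text and images in one sequence.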

While the work is significant, this research paper alone is not enough to conclude that a multimodal AI chatbot will be added to Apple's operating systems. At this stage it is not even clear whether the model is multimodal only in its inputs or in its outputs as well (that is, whether it can generate images). But if the results hold up under peer review, the tech giant can be said to have taken another big step towards building a native generative AI foundation model.
