Multimodal: AI’s new frontier

by MIT Technology Review Insights

Multimodality is a relatively new term for something extremely old: how people have learned about the world since humanity appeared. Individuals receive information from myriad sources via their senses, including sight, sound, and touch. Human brains combine these different modes of data into a highly nuanced, holistic picture of reality.

"Communication between humans is multimodal," says Jina AI CEO Han Xiao. "They use text, voice, emotions, expressions, and sometimes photos." Those are just a few obvious means of sharing information. Given this, he adds, "it is very safe to assume that future communication between human and machine will also be multimodal."

[Image: A technology that sees the world from different angles]

We are not there yet. The most advanced work in this direction has occurred in the fledgling field of multimodal AI. The problem is not a lack of vision. While a technology able to translate between modalities would clearly be valuable, Mirella Lapata, a professor at the University of Edinburgh and director of its Laboratory for Integrated Artificial Intelligence, says it's "a lot more complicated" to execute than unimodal AI.


In practice, generative AI tools use different strategies for different types of data when building large data models, the complex neural networks that organize vast amounts of information. For example, those that draw on textual sources segment their input into individual tokens, usually words. Each token is assigned an "embedding" or "vector": an array of numbers representing how and where the token is used compared to others. Collectively, the numbers in the vector create a mathematical representation of the token's meaning. An image model, on the other hand, might use pixels as its tokens for embedding, and an audio model might use sound frequencies.
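To make the idea concrete, here is a minimal sketch in Python of an embedding lookup for text tokens. The toy vocabulary, the four-dimensional vectors, and the random initialization are all illustrative assumptions; real models learn these vectors from vast training data and use hundreds or thousands of dimensions.

```python
import numpy as np

# Toy vocabulary mapping each token to a row in the embedding table.
# (Assumption for illustration; real vocabularies hold tens of thousands of tokens.)
vocab = {"tree": 0, "oak": 1, "leaf": 2}
embedding_dim = 4  # real models use hundreds or thousands of dimensions

# Random vectors stand in for learned ones here; training would adjust
# them so tokens used in similar ways end up with similar vectors.
rng = np.random.default_rng(seed=0)
embedding_table = rng.normal(size=(len(vocab), embedding_dim))

def embed(token: str) -> np.ndarray:
    """Look up the vector (embedding) assigned to a token."""
    return embedding_table[vocab[token]]

print(embed("tree"))  # a 4-number representation of the token "tree"
```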


A multimodal AI model typically relies on several unimodal ones. As Henry Ajder, founder of AI consultancy Latent Space, puts it, this involves "almost stringing together" the various contributing models. Doing so requires techniques to align the elements of each unimodal model, in a process called fusion. For example, the word "tree", an image of an oak tree, and audio in the form of rustling leaves might be fused in this way. This allows the model to create a multifaceted description of reality.
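One simple way to picture fusion is as a projection of each modality's embedding into a shared space, where the results can be combined. The sketch below, with random stand-in projection matrices and made-up dimensions, is only an illustration of this idea; real multimodal models learn the projections so that related inputs, such as the word "tree", an oak photo, and rustling-leaf audio, land near each other.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# Each unimodal model produces embeddings of a different size
# (dimensions here are arbitrary, for illustration only).
text_dim, image_dim, audio_dim, shared_dim = 8, 16, 12, 4

# Projection matrices map each modality into one shared space.
# Random values stand in for what a real model would learn.
W_text = rng.normal(size=(text_dim, shared_dim))
W_image = rng.normal(size=(image_dim, shared_dim))
W_audio = rng.normal(size=(audio_dim, shared_dim))

def fuse(text_vec, image_vec, audio_vec):
    """Project each modality into the shared space, then average."""
    projected = [text_vec @ W_text, image_vec @ W_image, audio_vec @ W_audio]
    return np.mean(projected, axis=0)

# Toy unimodal embeddings for the word "tree", an oak photo,
# and the sound of rustling leaves.
fused = fuse(rng.normal(size=text_dim),
             rng.normal(size=image_dim),
             rng.normal(size=audio_dim))
print(fused.shape)  # (4,) -- a single multimodal representation
```

Averaging is only one of several fusion strategies; models may also concatenate projected vectors or let the modalities attend to one another, but the alignment step is the common thread.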

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review's editorial staff.
