'TRIBE v2': Meta's new AI model can predict brain activity
27 Mar 2026
Meta has unveiled an innovative artificial intelligence (AI) model, TRIBE v2 (Trimodal Brain Encoder), that predicts neural responses to sight, sound, and language.
The foundation model uses pre-trained audio, video, and text embeddings to predict brain activity.
The tech giant hopes this groundbreaking development will help create digital twins of neural activity and accelerate breakthroughs in treating neurological disorders.
It is trained on data from over 700 volunteers
Model features
TRIBE v2 passes the combined embeddings through a transformer that learns a shared representation across all stimuli, tasks, and individuals.
Meta trained the system on brain imaging data from over 700 volunteers, a major improvement over earlier versions that used only a handful of subjects.
The participants were exposed to various media such as podcasts, movies, images, and text while their brain activity was recorded using functional magnetic resonance imaging (fMRI).
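The trimodal setup described above can be illustrated with a minimal sketch: per-timepoint embeddings from audio, video, and text encoders are fused and mapped to predicted voxel responses. All dimensions, variable names, and the linear readout are illustrative assumptions, not Meta's actual architecture (which reportedly uses a transformer in place of the linear map).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-timepoint embeddings from pretrained encoders.
# Sizes are illustrative placeholders, not TRIBE v2's real dimensions.
T = 100                              # fMRI timepoints
audio = rng.normal(size=(T, 32))     # audio embedding per timepoint
video = rng.normal(size=(T, 48))     # video embedding per timepoint
text = rng.normal(size=(T, 16))      # text embedding per timepoint

# Fuse the three modalities into one feature vector per timepoint.
features = np.concatenate([audio, video, text], axis=1)  # shape (T, 96)

# A linear "encoding model" readout mapping features to voxel responses.
# This is the simplest stand-in that shows the input/output shapes;
# the article says TRIBE v2 uses a transformer here instead.
n_voxels = 1000
W = rng.normal(size=(features.shape[1], n_voxels)) * 0.1
predicted_bold = features @ W        # (T, n_voxels): predicted scan per timepoint

print(predicted_bold.shape)          # (100, 1000)
```

The key idea this sketch captures is that the model never sees raw media at prediction time, only embeddings, which is what lets it generalize across stimulus types.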
Model offers 70-fold increase in resolution
Model performance
TRIBE v2 learns patterns from fMRI data and predicts what a brain scan would look like without actually running the scan.
Meta claims this new model offers a 70-fold increase in resolution over similar systems, along with significant improvements in speed and accuracy.
This enables "zero-shot prediction," or the ability to predict brain responses for new individuals, languages, and tasks without retraining the model.
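Encoding models like this are commonly scored by correlating each voxel's predicted time course with the measured one. The sketch below shows that standard voxelwise Pearson metric on synthetic stand-in data; the data and noise level are assumptions for illustration, not results from TRIBE v2.

```python
import numpy as np

rng = np.random.default_rng(1)

T, n_voxels = 200, 5

# Synthetic stand-ins: a "measured" BOLD signal and a model
# prediction that tracks it with added noise.
measured = rng.normal(size=(T, n_voxels))
predicted = measured + 0.5 * rng.normal(size=(T, n_voxels))

def voxelwise_pearson(a, b):
    """Pearson r between matching columns (voxels) of a and b."""
    a = (a - a.mean(axis=0)) / a.std(axis=0)
    b = (b - b.mean(axis=0)) / b.std(axis=0)
    return (a * b).mean(axis=0)

scores = voxelwise_pearson(measured, predicted)
print(scores)  # one correlation per voxel, near 0.9 for this noise level
```

In a zero-shot evaluation, the same metric would simply be computed on a subject, language, or task the model was never trained on.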