Meta is working on two proprietary frontier models: Avocado, a large language model, and Mango, a multimedia file generator. The open-source variants are expected to be made available at a later date.
EchoPrime, a video-based vision-language model, analyses echocardiogram footage and generates a written report of cardiac form and function. Its findings were published in Nature (volume 650, pages 970-977) in February 2026, under the title 'Comprehensive echocardiogram evaluation with view primed vision language AI.'
The large volume of abdominal computed tomography (CT) scans, coupled with the shortage of radiologists, has intensified the need for automated medical image analysis tools. Previous state-of-the-art approaches for automated analysis leverage vision-language models (VLMs) that jointly model images and radiology reports.
There was a group of neurons that predicted the wrong answer, yet they kept getting stronger as the model learned. So the researchers went back to the original macaque data, and the same signal was there, hiding in plain sight. It wasn't a quirk of the model: the monkeys' brains were doing it too. Even as their performance improved, both the real and simulated brains maintained a reserve of neurons that continued to predict the incorrect answer.
In a head-to-head comparison with five experienced physicians, each with more than a decade of practice, the system achieved higher accuracy across the board. DeepRare correctly identified the disease on its first suggestion 64.4 per cent of the time, compared to 54.6 per cent for the doctors. When given three suggestions instead of one, the AI system achieved diagnostic success in 79 per cent of cases versus 66 per cent for the human specialists.
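The first-suggestion and three-suggestion figures above are instances of top-k accuracy: a case counts as a success if the true diagnosis appears among the system's first k suggestions. A minimal sketch of the metric, using made-up diagnosis lists rather than anything from the study:

```python
def top_k_accuracy(ranked_predictions, truths, k):
    """Fraction of cases where the true label appears in the top k suggestions."""
    hits = sum(truth in preds[:k] for preds, truth in zip(ranked_predictions, truths))
    return hits / len(truths)

# Hypothetical ranked diagnosis lists for four cases (illustrative only).
preds = [
    ["marfan", "ehlers-danlos", "loeys-dietz"],
    ["pompe", "gaucher", "fabry"],
    ["wilson", "hemochromatosis", "alpha-1"],
    ["rett", "angelman", "prader-willi"],
]
truths = ["marfan", "fabry", "alpha-1", "fragile-x"]

print(top_k_accuracy(preds, truths, 1))  # 0.25 (only the first case is right on the first try)
print(top_k_accuracy(preds, truths, 3))  # 0.75 (three of four within three suggestions)
```

Top-3 accuracy is always at least as high as top-1, which is why both systems and physicians improve when allowed three guesses.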
Artificial intelligence (AI) is making a difference in assistive technology, helping to restore movement for people with paralysis. A new study in the American Institute of Physics journal APL Bioengineering shows how AI has the potential to restore lower-limb functions in those with severe spinal cord injuries (SCIs) by identifying patterns in brain signals captured noninvasively via electroencephalography (EEG).
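"Identifying patterns in brain signals" typically means mapping a feature vector extracted from an EEG window (for example, per-electrode band power) to an intended movement. As a rough illustration of that idea only, and not the study's actual method, here is a minimal nearest-centroid classifier on synthetic features; every label and number below is hypothetical:

```python
import math

def centroid(vectors):
    """Component-wise mean of a list of feature vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def classify(sample, centroids):
    """Assign a feature vector to the class whose centroid is nearest."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(centroids, key=lambda label: dist(sample, centroids[label]))

# Hypothetical band-power features (e.g. mu/beta power at three electrodes)
# collected during calibration trials for two intended movements.
training = {
    "step_left":  [[0.9, 0.20, 0.40], [1.0, 0.30, 0.50], [0.8, 0.25, 0.45]],
    "step_right": [[0.3, 0.90, 0.60], [0.2, 1.10, 0.70], [0.35, 0.95, 0.65]],
}
centroids = {label: centroid(trials) for label, trials in training.items()}

print(classify([0.85, 0.22, 0.43], centroids))  # step_left
print(classify([0.28, 1.00, 0.66], centroids))  # step_right
```

Real EEG decoders use far richer features and models, but the pipeline shape (calibrate on labeled trials, then classify new windows in real time) is the same.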
Brain implants are beginning to help people with severe disabilities to speak and even sing in near-real time. Now, a company wants to read people's minds and treat mental conditions without implanting electrodes deep into the brain by using ultrasound - high-frequency sound waves above the range of human hearing. Merge Labs, which launched last month with only a vague description of its goals, is one of many companies in a booming brain-computer interface (BCI) market.