RESEARCHERS have developed an AI-powered tool that can recreate music people are listening to during brain scans.

The findings were shared in a new paper, posted to the preprint server arXiv, that has not yet been peer-reviewed.

The experiment, conducted by a team of researchers from Google and Osaka University in Japan, is the first of its kind.

Scientists said the AI tool, called Brain2Music, works by analyzing brain imaging data from people who are listening to music.

After examining a person’s brain activity, the AI produces a song that matches the genre, rhythm, mood, and instrumentation of the music the subject was listening to.

Brain imaging data fed to the AI pipeline was collected via a technique called functional magnetic resonance imaging, or fMRI.

This imaging method tracks regional, time-varying changes in blood flow, an indirect measure of activity across the brain.

In other words, fMRI can show which parts of the brain are active, and when, while a person listens to music.
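To a computer, one of these scans is just a big grid of numbers: one row per moment in time, one column per tiny region of the brain, called a voxel. Here is a minimal sketch of loading such data with the nilearn library; the file names are hypothetical placeholders, not the study's data.

    # Turn a 4-D fMRI scan into a 2-D matrix of time points x voxels.
    # File names are hypothetical placeholders.
    from nilearn.maskers import NiftiMasker

    masker = NiftiMasker(mask_img="brain_mask.nii.gz", standardize=True)

    # Rows are time points (one per scan volume), columns are voxels.
    voxel_timeseries = masker.fit_transform("listening_run_bold.nii.gz")
    print(voxel_timeseries.shape)  # e.g. (600 time points, 5000 voxels)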

THE EXPERIMENT

In the experiment, participants listened to 15-second music clips of blues, classical, country, disco, hip-hop, jazz, and pop. 

After data was collected, the AI program was trained to identify links between elements of the music and participants’ brain signals.
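The paper reportedly does this with a linear model that maps each clip's brain response to a numerical summary, or embedding, of the music. Below is a minimal sketch of that idea using ridge regression from scikit-learn; all shapes and data are illustrative stand-ins, not the study's.

    # Learn a linear mapping from fMRI responses to music embeddings.
    # Random arrays stand in for real scans and real embeddings.
    import numpy as np
    from sklearn.linear_model import Ridge

    n_clips, n_voxels, embed_dim = 480, 5000, 128

    X_brain = np.random.randn(n_clips, n_voxels)   # one row per 15-second clip
    Y_music = np.random.randn(n_clips, embed_dim)  # the clip's music embedding

    decoder = Ridge(alpha=1.0)
    decoder.fit(X_brain, Y_music)  # learns voxel -> embedding weights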


The AI would then convert the imaging data into a compact numerical summary, or embedding, capturing musical properties of the original song clips.

Researchers then input this data into an AI model developed by Google, dubbed MusicLM.

The experimental AI tool, announced by Google earlier this year, works by turning your text descriptions into music.
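Continuing the sketch above: there is no public MusicLM library, so the final generation step can only be hinted at. The prediction step below is real scikit-learn code; the generation call is a hypothetical comment, not an actual Google interface.

    # Predict a music embedding from a held-out brain scan, then hand it
    # to the music generator in place of a text prompt's embedding.
    import numpy as np

    new_scan = np.random.randn(1, 5000)    # held-out fMRI features
    embedding = decoder.predict(new_scan)  # decoder from the sketch above

    # Hypothetical interface -- MusicLM has no public API:
    # audio = musiclm.generate(conditioning=embedding, length_seconds=15)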

“The agreement, in terms of the mood of the reconstructed music and the original music, was around 60 percent,” study co-author Timo Denk, a software engineer at Google in Switzerland, told Live Science.
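To make that figure concrete: if mood is treated as one label per clip, agreement is simply the fraction of clips whose reconstructed mood matches the original. A toy illustration, not the study's actual evaluation:

    # Three of five moods match, giving 60 percent agreement.
    original_moods      = ["happy", "sad", "tense", "calm", "happy"]
    reconstructed_moods = ["happy", "sad", "happy", "calm", "tense"]

    matches = sum(a == b for a, b in zip(original_moods, reconstructed_moods))
    print(f"mood agreement: {matches / len(original_moods):.0%}")  # 60%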

“The method is pretty robust across the five subjects we evaluated,” Denk said.

“If you take a new person and train a model for them, it’s likely that it will also work well.”