Researchers at Kaunas University of Technology, Lithuania, have developed an artificial intelligence model that combines speech and brain activity data to diagnose depression with an accuracy of 97.53 percent. This approach, detailed in an article in Brain Sciences dated October 15, 2024, leverages both electroencephalogram (EEG) signals and voice recordings to assess emotional states more comprehensively than methods based on a single data source.
The study used EEG data from the Multimodal Open Dataset for Mental Disorder Analysis (MODMA) and audio recordings from speech tasks. The AI model visualizes these inputs through spectrograms, applies noise reduction, and analyzes the data using a modified DenseNet-121 deep-learning algorithm. Results classify individuals as either healthy or depressed, demonstrating potential for remote and objective depression diagnosis.
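The fusion step described above can be illustrated with a minimal sketch. The helper below is a generic short-time Fourier transform spectrogram, not the paper's exact preprocessing, and the synthetic random signals are stand-ins for real EEG and speech data; the resulting two-channel "image" is the kind of input a convolutional network such as a modified DenseNet-121 could consume after resizing.

```python
import numpy as np

def log_spectrogram(signal, win=256, hop=128):
    """Log-scaled STFT magnitude. A generic sketch; the study's
    exact windowing and noise reduction may differ."""
    frames = [signal[i:i + win] * np.hanning(win)
              for i in range(0, len(signal) - win + 1, hop)]
    spec = np.abs(np.fft.rfft(np.array(frames), axis=1)).T
    return np.log1p(spec)  # shape: (freq_bins, time_frames)

rng = np.random.default_rng(0)
eeg = rng.standard_normal(4096)    # stand-in for one EEG channel
audio = rng.standard_normal(4096)  # stand-in for a speech recording

eeg_spec = log_spectrogram(eeg)
audio_spec = log_spectrogram(audio)

# Fuse the two modalities as channels of a single array that a
# CNN classifier could map to a healthy/depressed prediction.
fused = np.stack([eeg_spec, audio_spec], axis=0)
print(fused.shape)  # (2, 129, 31)
```

This only shows the multimodal spectrogram fusion idea; the classification itself is performed in the study by a modified DenseNet-121 trained on such representations.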
Professor Rytis Maskeliūnas emphasized the need for future clinical trials and further refinement to ensure the technology effectively supports medical professionals. The study also highlights the growing importance of explainable artificial intelligence for building trust and transparency in sensitive applications such as mental health care.
This breakthrough could transform depression diagnostics, providing accessible and reliable tools for early intervention and improved patient care.
Read the full article in Brain Sciences.
Citation
Yousufi, M.; Damaševičius, R.; Maskeliūnas, R. Multimodal Fusion of EEG and Audio Spectrogram for Major Depressive Disorder Recognition Using Modified DenseNet121. Brain Sci. 2024, 14, 1018. https://doi.org/10.3390/brainsci14101018
Many thanks to Louise Bouman for summarising the original article.


