Modular Multi-Modal Attention Network for Alzheimer’s Disease Detection Using Patient Audio and Language Data

Wang, N.; Cao, Y.; Hao, S.; Shao, Z.; Subbalakshmi, K. P.

In this work, we propose a modular multi-modal architecture to automatically detect Alzheimer’s disease using the dataset provided in the ADReSSo challenge. Both acoustic and text-based features are used in this architecture. Since the dataset provides only audio recordings of controls and patients, we use the Google Cloud Speech-to-Text API to automatically transcribe the audio files and extract text-based features; several kinds of acoustic features are extracted using standard packages. The proposed approach consists of four networks: a C-Attention-Acoustic network (acoustic features only), a C-Attention-FT network (linguistic features only), a C-Attention-Embedding network (language embeddings and acoustic embeddings), and a unified network that uses all of these features. Each network combines attention layers with a convolutional neural network (a C-Attention network) to process its features. Experimental results show that the C-Attention-Unified network with linguistic features and x-vector embeddings achieves the best accuracy of 80.28% and F1 score of 0.825 on the test set.
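The paper itself contains no code; the following is a minimal sketch of the transcription step described above, assuming the standard google-cloud-speech Python client. The transcribe function, file path handling, and audio parameters are illustrative assumptions, not details from the paper.

# Transcribe one mono WAV file with the Google Cloud Speech-to-Text API.
# Note: client.recognize() handles audio up to about one minute; longer
# recordings would need long_running_recognize() instead.
from google.cloud import speech

def transcribe(path: str, sample_rate: int = 16000) -> str:
    client = speech.SpeechClient()
    with open(path, "rb") as f:
        audio = speech.RecognitionAudio(content=f.read())
    config = speech.RecognitionConfig(
        encoding=speech.RecognitionConfig.AudioEncoding.LINEAR16,
        sample_rate_hertz=sample_rate,
        language_code="en-US",
    )
    response = client.recognize(config=config, audio=audio)
    # Join the top hypothesis of each returned result segment.
    return " ".join(r.alternatives[0].transcript for r in response.results)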
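Similarly, a minimal PyTorch sketch of a CNN-plus-attention ("C-Attention") style block that fuses an acoustic stream and a linguistic stream is given below. All class names, layer sizes, and the pooling and fusion choices are assumptions made for illustration; they do not reproduce the authors' exact architecture.

import torch
import torch.nn as nn

class CAttentionBlock(nn.Module):
    # Self-attention followed by a 1-D convolution over the sequence axis.
    def __init__(self, dim: int, heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.conv = nn.Conv1d(dim, dim, kernel_size=3, padding=1)

    def forward(self, x):  # x: (batch, seq, dim)
        h, _ = self.attn(x, x, x)
        h = self.conv(h.transpose(1, 2)).transpose(1, 2)
        return torch.relu(h)

class UnifiedClassifier(nn.Module):
    # Fuses acoustic and linguistic feature sequences for a binary
    # control-vs-AD prediction.
    def __init__(self, acoustic_dim: int, text_dim: int, dim: int = 128):
        super().__init__()
        self.proj_a = nn.Linear(acoustic_dim, dim)
        self.proj_t = nn.Linear(text_dim, dim)
        self.block_a = CAttentionBlock(dim)
        self.block_t = CAttentionBlock(dim)
        self.head = nn.Linear(2 * dim, 2)

    def forward(self, acoustic, text):  # each: (batch, seq, feat_dim)
        a = self.block_a(self.proj_a(acoustic)).mean(dim=1)  # pool over time
        t = self.block_t(self.proj_t(text)).mean(dim=1)
        return self.head(torch.cat([a, t], dim=-1))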

Keywords: Alzheimer’s disease; multi-modal approach; CNN-attention network; acoustic features; linguistic features

  • Contribution to proceedings
    INTERSPEECH 2021, 30.08.2021, Brno, Czech Republic
    Alzheimer's Dementia Recognition through Spontaneous Speech: The ADReSSo Challenge


Permalink: https://www.hzdr.de/publications/Publ-33597