Deep Learning Architectures for EEG: from CNN to Transformers
DOI: https://doi.org/10.62051/3nfg1g43

Keywords: Electroencephalography (EEG); convolutional neural networks (CNNs); transformer; brain-computer interfaces (BCIs)

Abstract
This review provides a comprehensive overview of the advancements in deep learning for electroencephalography (EEG) analysis. EEG offers a non-invasive means to monitor brain activity, with applications in brain-computer interfaces (BCIs), clinical diagnostics, emotion recognition, and sleep monitoring. We trace the historical progression from early convolutional neural networks (CNNs) in the mid-2010s to the recent emergence of transformer-based models. Key CNN architectures, such as EEGNet, are examined for their efficiency in feature extraction from raw EEG data. Transformer and hybrid models are discussed, highlighting their ability to model long-range dependencies and achieve state-of-the-art performance across diverse tasks like motor imagery, seizure detection, and sleep staging. The review also addresses self-supervised and unsupervised learning methods to mitigate data scarcity. Challenges including inter-subject variability, noise artifacts, and model interpretability are analyzed, alongside future directions such as transfer learning, synthetic data generation, domain adaptation, and multi-modal fusion. By synthesizing these developments, this work aims to guide future research toward more generalizable and practical EEG deep learning systems.
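The compact CNNs the review highlights, EEGNet among them, rest on two operations: a temporal convolution that learns frequency-selective filters shared across electrodes, followed by a depthwise spatial convolution that learns a per-filter weighting over electrodes. The sketch below illustrates only that idea with random weights and assumed shapes; it is not the published EEGNet implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical EEG segment: 8 electrodes, 128 time points.
n_channels, n_samples = 8, 128
x = rng.standard_normal((n_channels, n_samples))

# 1) Temporal convolution: F1 filters of length k applied along the time
#    axis, shared across all electrodes (padding omitted for brevity).
F1, k = 4, 16
temporal_filters = rng.standard_normal((F1, k))
temporal_out = np.stack([
    np.stack([np.convolve(x[c], temporal_filters[f], mode="valid")
              for c in range(n_channels)])
    for f in range(F1)
])  # shape: (F1, n_channels, n_samples - k + 1)

# 2) Depthwise spatial convolution: for each temporal filter, a learned
#    weighting over electrodes collapses the channel axis.
spatial_weights = rng.standard_normal((F1, n_channels))
spatial_out = np.einsum("fct,fc->ft", temporal_out, spatial_weights)

print(temporal_out.shape)  # (4, 8, 113)
print(spatial_out.shape)   # (4, 113)
```

Factoring the spatial step per temporal filter, rather than using one dense convolution over all channels and filters, is what keeps such models small enough to train on the limited EEG datasets the review discusses.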
Copyright (c) 2025 Transactions on Computer Science and Intelligent Systems Research

This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.