Abstract
Cognitive state detection holds significant research value in human–computer interaction and neural engineering. However, existing works model the temporal dynamics of multimodal physiological signals insufficiently, leading to heterogeneous distribution differences in cross-modal feature interactions. In addition, domain shift under cross-subject and few-sample conditions restricts model generalization. To address these problems, this work proposes a cognitive state detection framework that integrates Transformer-based multimodal feature interaction with self-training driven by pseudolabel optimization. First, a multihead attention mechanism is introduced to model the temporal evolution patterns across modalities, dynamically harmonizing cross-modal contributions to extract shared features related to cognitive state. Then, a dual-model cross-validation strategy is designed to filter high-quality pseudolabeled samples from the target domain for subsequent self-training, effectively avoiding dependence on auxiliary modules for domain adaptation. Finally, extensive experiments show that the proposed method significantly improves recognition accuracy, and the designed pseudolabel optimization mechanism transfers to related tasks without increasing model complexity.
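The dual-model cross-validation idea described in the abstract can be illustrated with a minimal sketch: two independently trained models predict on unlabeled target-domain samples, and a sample receives a pseudolabel only when both models agree on the class and both are confident. The function name, the confidence threshold, and the agreement rule below are assumptions for illustration, not the paper's exact procedure.

```python
import numpy as np

def select_pseudolabels(probs_a, probs_b, threshold=0.9):
    """Hypothetical dual-model pseudolabel filter.

    probs_a, probs_b: (N, C) class-probability arrays produced by two
    independently trained models on the same N unlabeled target samples.
    A sample is kept only when both models predict the same class and
    both confidences meet the threshold (assumed selection criterion).
    Returns the kept sample indices and their agreed pseudolabels.
    """
    pred_a = probs_a.argmax(axis=1)   # each model's predicted class
    pred_b = probs_b.argmax(axis=1)
    conf_a = probs_a.max(axis=1)      # each model's confidence
    conf_b = probs_b.max(axis=1)
    keep = (pred_a == pred_b) & (conf_a >= threshold) & (conf_b >= threshold)
    return np.flatnonzero(keep), pred_a[keep]
```

Selected samples would then be folded back into the training set for the next self-training round, which is what lets the scheme avoid auxiliary domain-adaptation modules.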
| Original language | English |
|---|---|
| Journal | IEEE Transactions on Industrial Informatics |
| DOIs | |
| State | Accepted/In press - 2025 |
Keywords
- Cognitive state detection
- domain adaptation
- multimodal
- self-training
- transformer
Title: Multimodal Feature Interaction and High-Quality Pseudolabel Generation With Self-Training for Cognitive State Detection