A Grassmannian Manifold Self-Attention Network for Signal Classification
Proceedings of the Thirty-Third International Joint Conference on Artificial …, 2024, ijcai.org
Abstract
In artificial intelligence research, significant progress has been made in encoding sequential data with deep learning techniques. Nevertheless, effectively mining useful information from the channel dimensions remains a major challenge, as these features have a submanifold structure. The linear subspace, the basic element of the Grassmannian manifold, has proven to be an effective manifold-valued feature descriptor for statistical representation. Moreover, the Euclidean self-attention mechanism has shown great success in capturing long-range relationships in data. Inspired by these facts, we extend the self-attention mechanism to the Grassmannian manifold. Our framework can effectively characterize the spatiotemporal fluctuations of sequential data encoded on the Grassmannian manifold. Extensive experimental results on three benchmark datasets (a drone recognition dataset and two EEG signal classification datasets) demonstrate that our method outperforms the state of the art. The code and supplementary material for this work can be found at https://github.com/ChenHu-ML/GDLNet.
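The abstract does not spell out the construction, but the two ingredients it names — subspace features as points on the Grassmannian and a self-attention mechanism over them — can be sketched together. The snippet below is a minimal illustration, not the paper's actual architecture: it assumes channel features are summarized by their top-p left singular vectors (an orthonormal basis of a p-dimensional subspace), scores pairs of subspaces with the standard projection metric, and aggregates attention outputs as an extrinsic mean re-projected onto the manifold. All function names and the aggregation choice are our own assumptions for illustration.

```python
import numpy as np

def grassmann_point(x, p):
    """Map a feature matrix (n x m) to a point on the Grassmannian Gr(p, n):
    the span of its top-p left singular vectors, stored as an n x p
    orthonormal basis. (Illustrative encoding, not the paper's.)"""
    u, _, _ = np.linalg.svd(x, full_matrices=False)
    return u[:, :p]

def projection_metric_sim(a, b):
    """Similarity via the projection metric: for orthonormal bases A, B,
    ||A A^T - B B^T||_F^2 = 2p - 2||A^T B||_F^2. We negate the squared
    distance so aligned subspaces score highest."""
    p = a.shape[1]
    d2 = 2 * p - 2 * np.linalg.norm(a.T @ b, "fro") ** 2
    return -d2

def grassmann_self_attention(points):
    """Toy self-attention over a sequence of Grassmannian points:
    softmax weights from projection-metric similarity; each output is the
    weighted sum of projection matrices Q Q^T, re-projected onto the
    manifold by taking its top-p eigenvectors."""
    n = len(points)
    scores = np.array([[projection_metric_sim(a, b) for b in points]
                       for a in points])
    # row-wise softmax (stabilized by subtracting the row max)
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)
    outputs = []
    for i in range(n):
        m = sum(w * (q @ q.T) for w, q in zip(weights[i], points))
        vals, vecs = np.linalg.eigh(m)          # eigenvalues ascending
        outputs.append(vecs[:, -points[i].shape[1]:])  # top-p eigenvectors
    return outputs

# toy usage: a sequence of 4 subspace features, each a 2-plane in R^6
rng = np.random.default_rng(0)
seq = [grassmann_point(rng.standard_normal((6, 8)), 2) for _ in range(4)]
out = grassmann_self_attention(seq)
```

Each output is again an orthonormal basis, so the attended features stay on the manifold and attention layers of this kind can be stacked; how GDLNet actually parameterizes queries, keys, and values on the Grassmannian is detailed in the paper and repository, not here.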