Neurocomputing, Volume 569, February 2024
- Yujun Ma, Ruili Wang, Ming Zong, Wanting Ji, Yi Wang, Baoliu Ye: Convolutional transformer network for fine-grained action recognition. 127027
- Kejie Lyu, Yingming Li, Zhongfei Zhang: Dual-DIANet: A sharing-learnable multi-task network based on dense information aggregation. 127035
- Zhongtian Hu, Lifang Wang, Yangqi Chen, Yushuang Liu, Ronghan Li, Meng Zhao, Xinyu Lu, Zejun Jiang: Dynamically retrieving knowledge via query generation for informative dialogue generation. 127036
- Anastasia G. Giannari, Alessandro Astolfi: Nonlinear control of neurodegenerative diseases. A case study on optical illusion networks disrupted by diabetic retinopathy. 127099
- Manuel García-Domínguez, César Domínguez, Jónathan Heras, Eloy J. Mata, Vico Pascual: Deep style transfer to deal with the domain shift problem on spheroid segmentation. 127105
- Minyang Jiang, Yongwei Wang, Martin J. McKeown, Zhen Jane Wang: Occlusion-robust FAU recognition by mining latent space of masked autoencoders. 127107
- Tao Meng, Yuntao Shou, Wei Ai, Jiayi Du, Haiyan Liu, Keqin Li: A multi-message passing framework based on heterogeneous graphs in conversational emotion recognition. 127109
- Xianbin Wei, Kechen Song, Wenkang Yang, Yunhui Yan, Qinggang Meng: A visible-infrared clothes-changing dataset for person re-identification in natural scene. 127110
- Fan Xu, Lei Zeng, Qi Huang, Keyu Yan, Mingwen Wang, Victor S. Sheng: Hierarchical graph attention networks for multi-modal rumor detection on social media. 127112
- Tianyue Zheng, Zhe Chen, Shuya Ding, Chao Cai, Jun Luo: Adv-4-Adv: Thwarting changing adversarial perturbations via adversarial domain adaptation. 127114
- Jing Mi, Xuxiu Zhang, Honghai Zeng, Lin Wang: DERGCN: Dynamic-Evolving graph convolutional networks for human trajectory prediction. 127117
- Bingjie Zhang, Jian Wang, Chao Zhang, Jie Yang, Tufan Kumbasar, Wei Wu: Zero-order fuzzy neural network with adaptive fuzzy partition and its applications on high-dimensional problems. 127118
- Shixuan Zhou, Peng Song: Consistency-exclusivity guided unsupervised multi-view feature selection. 127119
- Zhenyang Hao, Xinggang Wang, Jiawei Liu, Zhihang Yuan, Dawei Yang, Wenyu Liu: Stabilized activation scale estimation for precise Post-Training Quantization. 127120
- Zezheng Zhang, Ryan K. Y. Chan, Kenneth K. Y. Wong: GlocalFuse-Depth: Fusing transformers and CNNs for all-day self-supervised monocular depth estimation. 127122
- Andrea Marinelli, Michele Canepa, Dario Di Domenico, Emanuele Gruppioni, Matteo Laffranchi, Lorenzo De Michieli, Michela Chiappalone, Marianna Semprini, Nicoló Boccardo: A comparative optimization procedure to evaluate pattern recognition algorithms on Hannes prosthesis. 127123
- Yongbin Zheng, Peng Sun, Qiang Ren, Wanying Xu, Di Zhu: A novel and efficient model pruning method for deep convolutional neural networks by evaluating the direct and indirect effects of filters. 127124
- Hanchi Ren, Jingjing Deng, Xianghua Xie, Xiaoke Ma, Yichuan Wang: FedBoosting: Federated learning with gradient protected boosting for text recognition. 127126
- Gopendra Vikram Singh, Mauajama Firdaus, Dushyant Singh Chauhan, Asif Ekbal, Pushpak Bhattacharyya: Zero-shot multitask intent and emotion prediction from multimodal data: A benchmark study. 127128
- Tongtong Chen, Fuyong Wang, Meiling Feng, Chengyi Xia, Zengqiang Chen: Fully distributed consensus of linear multi-agent systems via dynamic event-triggered control. 127129
- Ji Zhang, Guoping Liu: Model-free distributed integral sliding mode predictive control for multi-agent systems with communication delay. 127133
- Jiacun Wang, GuiPeng Xi, XiWang Guo, Shixin Liu, ShuJin Qin, Henry Han: Reinforcement learning for Hybrid Disassembly Line Balancing Problems. 127145