User profile matching "Wasi Uddin Ahmad"

Wasi Uddin Ahmad

Senior Research Scientist, NVIDIA
Verified email at ucla.edu
Cited by 3380

Unified pre-training for program understanding and generation

WU Ahmad, S Chakraborty, B Ray… - arXiv preprint arXiv …, 2021 - arxiv.org
Code summarization and generation empower conversion between programming language
(PL) and natural language (NL), while code translation avails the migration of legacy code …

A transformer-based approach for source code summarization

WU Ahmad, S Chakraborty, B Ray… - arXiv preprint arXiv …, 2020 - arxiv.org
Generating a readable summary that describes the functionality of a program is known as
source code summarization. In this task, learning code representation by modeling the …

Retrieval augmented code generation and summarization

MR Parvez, WU Ahmad, S Chakraborty, B Ray… - arXiv preprint arXiv …, 2021 - arxiv.org
Software developers write a lot of source code and documentation during software development.
Intrinsically, developers often recall parts of source code or code summaries that they …

Multi-lingual evaluation of code generation models

…, Z Wang, X Li, Y Tian, M Tan, WU Ahmad… - arXiv preprint arXiv …, 2022 - arxiv.org
We present new benchmarks for evaluating code generation models: MBXP, Multilingual
HumanEval, and MathQA-X. These datasets cover over 10 programming languages and …

Context attentive document ranking and query suggestion

WU Ahmad, KW Chang, H Wang - … of the 42nd international ACM SIGIR …, 2019 - dl.acm.org
We present a context-aware neural ranking model to exploit users' on-task search activities
and enhance retrieval performance. In particular, a two-level hierarchical recurrent neural …

BanglaBERT: Language model pretraining and benchmarks for low-resource language understanding evaluation in Bangla

A Bhattacharjee, T Hasan, WU Ahmad, K Samin… - arXiv preprint arXiv …, 2021 - arxiv.org
In this work, we introduce BanglaBERT, a BERT-based Natural Language Understanding (NLU)
model pretrained in Bangla, a widely spoken yet low-resource language in the NLP …

Immediate psychological responses during the initial period of the COVID-19 pandemic among Bangladeshi medical students

…, V Podder, KN Koly, DT Azad, WU Ahmad… - Children and youth …, 2021 - Elsevier
Background: The most recent global pandemic of COVID-19 has been creating multidimensional
damages, including a detrimental impact on the mental health status of individuals. …

Cocomic: Code completion by jointly modeling in-file and cross-file context

Y Ding, Z Wang, WU Ahmad, MK Ramanathan… - arXiv preprint arXiv …, 2022 - arxiv.org
While pre-trained language models (LM) for code have achieved great success in code
completion, they generate code conditioned only on the contents within the file, i.e., in-file context, …

On difficulties of cross-lingual transfer with order differences: A case study on dependency parsing

WU Ahmad, Z Zhang, X Ma, E Hovy, KW Chang… - arXiv preprint arXiv …, 2018 - arxiv.org
Different languages might have different word orders. In this paper, we investigate cross-lingual
transfer and posit that an order-agnostic model will perform better when transferring to …

Gate: Graph attention transformer encoder for cross-lingual relation and event extraction

WU Ahmad, N Peng, KW Chang - … of the AAAI Conference on Artificial …, 2021 - ojs.aaai.org
Recent progress in cross-lingual relation and event extraction use graph convolutional
networks (GCNs) with universal dependency parses to learn language-agnostic sentence …