User profiles matching "Wasi Uddin Ahmad"
Wasi Uddin Ahmad, Senior Research Scientist, NVIDIA. Verified email at ucla.edu. Cited 3380 times.
Unified pre-training for program understanding and generation
Code summarization and generation empower conversion between programming language
(PL) and natural language (NL), while code translation avails the migration of legacy code …
A transformer-based approach for source code summarization
Generating a readable summary that describes the functionality of a program is known as
source code summarization. In this task, learning code representation by modeling the …
Retrieval augmented code generation and summarization
Software developers write a lot of source code and documentation during software development.
Intrinsically, developers often recall parts of source code or code summaries that they …
Multi-lingual evaluation of code generation models
We present new benchmarks for evaluating code generation models: MBXP, Multilingual
HumanEval, and MathQA-X. These datasets cover over 10 programming languages and …
Context attentive document ranking and query suggestion
We present a context-aware neural ranking model to exploit users' on-task search activities
and enhance retrieval performance. In particular, a two-level hierarchical recurrent neural …
BanglaBERT: Language model pretraining and benchmarks for low-resource language understanding evaluation in Bangla
In this work, we introduce BanglaBERT, a BERT-based Natural Language Understanding (NLU)
model pretrained in Bangla, a widely spoken yet low-resource language in the NLP …
Immediate psychological responses during the initial period of the COVID-19 pandemic among Bangladeshi medical students
Background: The most recent global pandemic of COVID-19 has been creating multidimensional
damages, including a detrimental impact on the mental health status of individuals. …
CoCoMIC: Code completion by jointly modeling in-file and cross-file context
While pre-trained language models (LM) for code have achieved great success in code
completion, they generate code conditioned only on the contents within the file, i.e., the in-file context, …
On difficulties of cross-lingual transfer with order differences: A case study on dependency parsing
Different languages might have different word orders. In this paper, we investigate cross-lingual
transfer and posit that an order-agnostic model will perform better when transferring to …
GATE: Graph attention transformer encoder for cross-lingual relation and event extraction
Recent progress in cross-lingual relation and event extraction uses graph convolutional
networks (GCNs) with universal dependency parses to learn language-agnostic sentence …