Hey all, I'm excited to share that our recent paper, "Crafting Clarity: Leveraging Large Language Models to Decode Consumer Reviews," has been accepted for publication in the Journal of Retailing and Consumer Services (ABDC - A, SSCI, Impact Factor: 11.0).
This is one of the most interesting works I have published so far. In this study, we used 1,031,478 consumer reviews to train four prominent large language models (Falcon-7B, MPT-7B, GPT-2, and BERT) and assessed how well each decodes consumer sentiment. Our findings show that Falcon-7B performed the best and GPT-2 the worst at decoding consumer emotions. We also performed topic modeling on the reviews using both LDA and Falcon-7B. Comparing the two sets of results, we highlight the need for a powerful deep learning model that captures semantic relationships to understand consumer perceptions, as opposed to a Bayesian probabilistic approach that relies only on the likelihood of observing specific words within topics.
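For anyone curious how this kind of experiment can be set up, here is a minimal, hypothetical sketch (not our exact pipeline) of fine-tuning a pretrained transformer for review sentiment classification with Hugging Face Transformers. The model name, label scheme, and toy reviews below are illustrative assumptions; in the paper we worked with the full review corpus and the four models listed above.

```python
# Minimal sketch: fine-tuning a pretrained transformer for review sentiment.
# "bert-base-uncased" is a small stand-in; the toy data replaces the 1M+ reviews.
from datasets import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

# Hypothetical toy reviews with binary sentiment labels (1 = positive, 0 = negative).
reviews = {
    "text": ["Great product, arrived on time.",
             "Terrible quality, broke after a day."],
    "label": [1, 0],
}
dataset = Dataset.from_dict(reviews)

model_name = "bert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

def tokenize(batch):
    # Convert raw review text into fixed-length token IDs for the model.
    return tokenizer(batch["text"], truncation=True,
                     padding="max_length", max_length=128)

dataset = dataset.map(tokenize, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="sentiment-model",
                           num_train_epochs=1,
                           per_device_train_batch_size=8),
    train_dataset=dataset,
)
trainer.train()
```

The same pattern extends to larger decoder models such as Falcon-7B or MPT-7B, typically with parameter-efficient fine-tuning to keep memory requirements manageable.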
To the best of my knowledge, this study represents the first comprehensive evaluation of large language models' ability to decode consumer reviews.
Thanks to my co-authors Pranshav Gajjar and Ashutosh Dutt for working alongside me on this project.