GPT-4 Passes the Bar Exam
Philosophical Transactions of the Royal Society A, Vol. 382 (2024)
35 pages · Posted: 15 Mar 2023 · Last revised: 3 Apr 2024
Date Written: March 15, 2023
Abstract
In this paper, we experimentally evaluate the zero-shot performance of a preliminary version of GPT-4 against prior generations of GPT on the entire Uniform Bar Examination (UBE), including not only the multiple-choice Multistate Bar Examination (MBE), but also the open-ended Multistate Essay Exam (MEE) and Multistate Performance Test (MPT) components. On the MBE, GPT-4 significantly outperforms both human test-takers and prior models, demonstrating a 26% increase over ChatGPT and beating humans in five of seven subject areas. On the MEE and MPT, which have not previously been evaluated by scholars, GPT-4 scores an average of 4.2/6.0, as compared to much lower scores for ChatGPT. Graded across the UBE components in the manner a human test-taker would be, GPT-4 scores approximately 297 points, significantly in excess of the passing threshold for all UBE jurisdictions. These findings document not only the rapid and remarkable advance of large language model performance generally, but also the potential for such models to support the delivery of legal services in society.
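To make the 297-point figure concrete, the sketch below shows how component scores combine under the UBE's published weighting (MBE 50%, MEE 30%, MPT 20%, on a 400-point scale). This is an illustrative approximation only, not the authors' grading pipeline: the MBE scaled score used below is hypothetical, and actual UBE scaling relies on NCBE equating rather than the linear conversion shown here.

```python
# Illustrative sketch of UBE composite scoring (not the paper's grading code).
# Assumptions: MBE counts for 50% and the written components (MEE 30%, MPT 20%)
# for 50% of a 400-point scale; real scoring uses NCBE equating, so the linear
# rescaling of 6-point essay grades below is only an approximation.

def approximate_ube_score(mbe_scaled: float, mee_avg: float, mpt_avg: float) -> float:
    """Combine component scores into an approximate 400-point UBE total.

    mbe_scaled: MBE scaled score on its 200-point scale.
    mee_avg, mpt_avg: average raw grades on a 6-point scale, linearly
        rescaled to 200 points here for illustration.
    """
    # MEE and MPT contribute in a 30:20 ratio within the written half.
    written_scaled = (0.6 * mee_avg + 0.4 * mpt_avg) / 6.0 * 200
    return mbe_scaled + written_scaled  # each half contributes up to 200 points


if __name__ == "__main__":
    # The 4.2/6.0 essay average is from the abstract; the MBE scaled score of
    # 157 is a hypothetical value chosen so the total matches the reported ~297.
    print(approximate_ube_score(mbe_scaled=157, mee_avg=4.2, mpt_avg=4.2))  # 297.0
```

For context, UBE passing thresholds range from roughly 260 to 280 depending on jurisdiction, which is why a total near 297 clears the bar in every UBE state.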
Keywords: GPT, GPT-4, Bar Exam, Legal Data, NLP, Legal NLP, Legal Analytics, natural language processing, natural language understanding, evaluation, machine learning, artificial intelligence, artificial intelligence and law, Neural NLP, Legal Tech, ChatGPT
JEL Classification: C45, C55, K49, O33, O30