The Painful Irony
I'm not amused that the organization in charge of the MPRE refuses to release the raw-to-scaled conversion it uses to score the exams.
I've searched Google and asked every legal person I know how to translate a percentage correct on practice exams into a scaled score. It turns out no one seems to know.
Instead, I get:
"Don't worry. Everyone passes the MPRE."
Or, better yet:
"80 is passing in California."
Oh, that's comforting. Too bad a scaled score of 80 means approximately as much to me as "two monkeys." Scaled to what, exactly? We know nothing about it other than that 100 is the mean the scorers shoot for when scaling. The only relevant information I have found is that only about 15 percent of the people who took the exam in 2004 scored low enough to fail in California. So, pragmatically, I guess I can go to sleep and rely on the hope that, prepared and lucky, I don't usually land in the bottom 15 percent.
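Out of curiosity, the few numbers we do have can be combined into a rough sketch. Assuming (purely hypothetically, since the NCBE publishes no distribution) that scaled scores fall on a normal curve centered at the stated mean of 100, we can ask what standard deviation would put California's passing score of 80 at roughly the 15th percentile, consistent with the failure rate above:

```python
# A back-of-the-envelope sketch. The normality assumption and the
# resulting standard deviation are guesses, NOT published NCBE figures.
from statistics import NormalDist

mean = 100        # the mean the NCBE reportedly scales to
passing = 80      # California's passing score
fail_rate = 0.15  # roughly 15% scored below passing in 2004

# z-score at the 15th percentile of a standard normal distribution
z = NormalDist().inv_cdf(fail_rate)

# standard deviation implied by the assumption that the passing
# score sits at the 15th percentile
implied_sd = (passing - mean) / z
print(round(implied_sd, 1))  # roughly 19.3
```

Which is to say: if the curve really were normal, a standard deviation of about 19 points would square these numbers. But that's exactly the kind of thing the NCBE could simply tell us, rather than leaving test-takers to reverse-engineer it.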
But the bigger question is this: How can the ABA allow an organization to assign numbers that determine whether or not you are "ethical" enough to be a lawyer without requiring that the organization disclose exactly what those numbers mean? Why do we give the National Conference of Bar Examiners the power to make this judgment without disclosing the methods by which it determines its scaled scores, or, at the very least, which raw scores corresponded to which scaled scores on past tests?
I can think of several reasons why an organization with a monopoly on testing would want to keep its scoring methodology a secret. But most of those reasons lead to the conclusion that the NCBE is hiding something, and not one of them is justifiable. Any help? Or are you all with me?