Throwing a Wrench in the Document Review Machine
Computers versus humans: is this the debate being waged around predictive coding to determine the future of document review? And if it is, is it what legal practitioners should really be focusing on in e-discovery?
In a recent New York Law Journal article, Steve Green and Mark Yacano of Hudson Legal question two sources widely cited to support the use of technology-assisted review. One is Maura Grossman and Gordon Cormack's "Technology-Assisted Review Can (and Does) Yield More Accurate Results Than Exhaustive Manual Review, With Much Lower Effort," from the Richmond Journal of Law & Technology; the other is the 2009 Text REtrieval Conference (TREC) Legal Track Interactive Task study.
After providing a useful primer on the metrics used to gauge the efficiency of review methods, such as recall, precision, and the F1 score, the authors dig into the heart of their argument: the Richmond Journal article relies too heavily on the TREC 2009 study to support its assertions about manual review, since TREC "doesn't appear designed to compare manual review with technology-assisted review at all."
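For readers unfamiliar with those metrics, here is a minimal sketch of how they are computed. The function and the sample figures are illustrative only, not drawn from the article or the TREC study: precision asks what fraction of the documents a review marked responsive actually were responsive; recall asks what fraction of all truly responsive documents the review found; F1 is their harmonic mean.

```python
def review_metrics(found_responsive, marked_responsive, total_responsive):
    """Compute precision, recall, and F1 for a document review.

    found_responsive: truly responsive documents the review flagged
    marked_responsive: all documents the review flagged as responsive
    total_responsive: all truly responsive documents in the collection
    """
    precision = found_responsive / marked_responsive
    recall = found_responsive / total_responsive
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Hypothetical example: a review flags 1,000 documents, of which 700
# are truly responsive, out of 1,400 responsive documents overall.
p, r, f = review_metrics(700, 1000, 1400)
print(round(p, 2), round(r, 2), round(f, 2))  # 0.7 0.5 0.58
```

Note the trade-off the F1 score captures: a team can boost recall by flagging nearly everything, but only at the cost of precision, which is why a single combined figure is used to compare review methods.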
The authors argue persuasively that, as the study was designed, it did not pit manual review teams directly against technology-assisted review teams. There was no manual review control group competing against the 11 participants who submitted their computer-aided coding methods to the test. Manual reviewers were used only to judge the success of each team's technology-assisted review, by assessing the responsiveness of samples from each team's final document production, a process open to appeal. As the authors put it: "No exhaustive manual review was actually conducted on the full data set."
They further assert that only two of the 11 teams performed demonstrably better than the limited manual review that did take place. The upshot: Grossman and Cormack's conclusions against manual document review are drawn from what appears to be a flawed study, or at least an ineffective source for pitting man against machine.
But wait, there's more. In the comments section at the bottom of the article on the Law Technology News website, LTN's own Craig Ball feels compelled to question the questioners of Grossman and Cormack. He asserts that their conclusions about the inefficacy of exhaustive manual review rest on data from other studies, not only TREC 2009; by mounting their attack squarely on TREC, Green and Yacano miss their target. While insisting that the authors, as industry veterans, should by all means air their views, he also writes that "the economics and the efficiency of technology-assisted review cannot be gainsaid by taking peculiar potshots at the messengers."
This is a friendly debate — Ball ties up his comments with a quote from "A Few Good Men" and a dose of good humor.
Who do you agree with?
Image by Clipart