Reduce the Document Set for Human Review = 29%
Comment and Analysis on Results
Most of the recent interest in Technology Assisted Review (TAR), or Predictive Coding as it is sometimes called, stems from Da Silva Moore
(2012 U.S. Dist. LEXIS 23350 (S.D.N.Y. Feb. 24, 2012)), a decision rendered by Magistrate Judge Andrew Peck and affirmed by U.S. District Judge Andrew Carter (11 Civ. 1279 (ALC)(AJP)). However, TAR is not new, and its value and reliability have been studied for years.
As an example, a 2009 study by Roitblat, Kershaw and Oot, published in the Journal of the American Society for Information Science and Technology and titled “Document Categorization in Legal Electronic Discovery: Computer Classification vs. Manual Review,” found that two TAR systems each agreed with an original human review on about 83% of the documents reviewed. By comparison, two new human teams agreed on only about 73% of the documents. As a result, the Roitblat, Kershaw and Oot study concluded that TAR was “no worse than using human review.”
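The 83% and 73% figures above are simple percent-agreement scores. As a minimal sketch of how such a metric is computed, here is a short Python example; the reviewer data below is hypothetical sample data for illustration, not taken from the study.

```python
def percent_agreement(calls_a, calls_b):
    """Fraction of documents on which two reviewers made the same
    responsiveness call."""
    assert len(calls_a) == len(calls_b), "reviewers must code the same set"
    matches = sum(a == b for a, b in zip(calls_a, calls_b))
    return matches / len(calls_a)

# Hypothetical calls: True = responsive, False = not responsive
reviewer_1 = [True, True, False, False, True, False, True, False, True, True]
reviewer_2 = [True, False, False, False, True, False, True, True, True, True]

print(percent_agreement(reviewer_1, reviewer_2))  # 0.8, i.e., 80% agreement
```

Note that percent agreement is the simplest such measure; studies in this area also report recall, precision, and chance-corrected statistics, which can tell a different story on heavily skewed document sets.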
Other studies have also addressed the “myth” that human document review is somehow inherently more reliable than TAR (e.g., Grossman & Cormack, 2011; Baron et al., 2009). As such, the evidence has been clear for some time that TAR can be very effective.
However, in an industry known for embracing new technology at a snail's pace, with a built-in prejudice toward and comfort level with human review, I am not convinced that the market has agreed upon a best practice for using TAR. As an example, I have seen users employ TAR to cull down very large data sets with statistical “hit rate” settings in the high 90s, leaving the “important” review work to humans. I have also seen users employ TAR to “spot check” the accuracy of human reviewers, with the necessary “rework” going back to a different set of human reviewers. During this early adopter phase in the evolution of TAR, neither of the uses I cited is necessarily wrong. I simply believe the research and evidence already show that the most productive and financially rewarding use of TAR is to replace human reviewers.
Given all of this, I was actually very pleased to see that 57% of the voters in this eDSG poll agreed that the number one reason for using Technology Assisted Review (TAR) is to replace human reviewers. However, more in line with what I would have expected, 43% of the respondents voted otherwise. I suspect these are the mainstream buyers and the laggards (see Crossing the Chasm). Or maybe they are the voters who own and operate offshore document review organizations?
In any case, maybe TAR will be the “tipping point” technology that finally drives the legal industry to explore, and possibly embrace, the value of technology. Let’s hope so.