Is 31,000 Missed Relevant Documents an Acceptable Outcome? – eDiscovery Case Law
January 28, 2013
It might be, if the alternative is 62,000 missed relevant documents.
Last week, we reported on the first case in which a technology-assisted review process was completed, Global Aerospace Inc., et al, v. Landow Aviation, L.P. dba Dulles Jet Center, et al, in which predictive coding was approved last April by Virginia State Circuit Court Judge James H. Chamblin. Now, as reported by the ABA Journal (by way of the Wall Street Journal Law Blog), we have an idea of the results from the predictive coding exercise. Notable numbers:
- Attorneys “coded a sample of 5,000 documents out of 1.3 million as either relevant or irrelevant” to serve as the seed set for the predictive coding process,
- The predictive coding “program turned up about 173,000 documents deemed relevant”,
- The attorneys “checked a sample of about 400 documents deemed relevant by the computer program. About 80 percent were indeed relevant. The lawyers then checked a sample of the documents deemed irrelevant. About 2.9 percent were possibly relevant”,
- Subtracting the 173,000 documents deemed relevant from the 1.3 million total document population yields 1,127,000 documents not deemed relevant. Extrapolating the 2.9 percent rate of missed potentially relevant documents from the sample to the rest of the documents deemed non-relevant yields 32,683 potentially relevant documents missed.
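The extrapolation above is straightforward arithmetic; a quick sketch, using only the figures reported in the bullets, confirms the numbers:

```python
# Figures reported from the Global Aerospace predictive coding exercise
total_docs = 1_300_000       # total document population
deemed_relevant = 173_000    # documents the program deemed relevant
miss_rate = 0.029            # sampled rate of possibly relevant docs in the "irrelevant" pile

# Documents not deemed relevant by the program
deemed_irrelevant = total_docs - deemed_relevant

# Extrapolate the sampled miss rate across all documents deemed irrelevant
estimated_missed = deemed_irrelevant * miss_rate

print(deemed_irrelevant)        # 1127000
print(round(estimated_missed))  # 32683
```

This is the source of the "more than 31,000" figure: a 2.9 percent sampled miss rate applied to the 1,127,000 unproduced documents.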
“For some this may be hard to stomach,” the WSJ Law Blog says in the article. “The finding suggests that more than 31,000 documents may be relevant to the litigation but won’t get turned over to the other side. What if the smoking gun is among them?”
However, the defendants, in arguing for the use of predictive coding in this case, asserted that “manual review of the approximately two million documents at issue would be extremely costly while locating only about 60 percent of potentially relevant documents”. Of course, the rise in popularity of technology assisted review is not only due to the cost savings but also the growing belief of increased accuracy over human review as concluded in the oft-cited Richmond Journal of Law and Technology white paper from Maura Grossman and Gordon Cormack, Technology-Assisted Review in e-Discovery can be More Effective and More Efficient than Exhaustive Manual Review.
Assuming the defendants’ estimate of manual review effectiveness is reasonable, it could be argued that more than 62,000 relevant documents could have been missed using manual review, at a much higher review cost. While we don’t know what the actual number of missed documents would have been, it’s certainly fair to conclude that the predictive coding effort saved considerable review costs in this case with comparable, if not better, accuracy.
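That comparison can be sketched the same way. The figures below come from the post itself; the 80 percent precision assumption (from the 400-document sample) is used to estimate the total number of truly relevant documents, which is an assumption layered on top of the reported numbers, not something stated in the case record:

```python
# Rough comparison of predictive coding vs. manual review, using reported figures.
# Assumptions: 80% of the 173,000 flagged documents are truly relevant (per the
# 400-document sample), and 2.9% of the 1,127,000 unflagged documents were missed.
relevant_found = 173_000 * 0.80      # ~138,400 relevant documents retrieved
relevant_missed = 1_127_000 * 0.029  # ~32,683 relevant documents missed

estimated_total_relevant = relevant_found + relevant_missed  # ~171,083

# Defendants asserted manual review would locate only about 60% of
# potentially relevant documents, i.e. miss about 40%
manual_missed = estimated_total_relevant * 0.40

print(round(manual_missed))  # 68433
```

Under those assumptions, manual review would have missed roughly 68,000 relevant documents, which is consistent with the "more than 62,000" figure and roughly double the predictive coding shortfall.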
So, what do you think? What do you think of the results? Please share any comments you might have or if you’d like to know more about a particular topic.

Disclaimer: The views represented herein are exclusively the views of the author, and do not necessarily represent the views held by CloudNine Discovery. eDiscoveryDaily is made available by CloudNine Discovery solely for educational purposes to provide general information about general eDiscovery principles and not to provide specific legal advice applicable to any particular circumstance. eDiscoveryDaily should not be used as a substitute for competent legal advice from a lawyer you have retained and who has agreed to represent you.