ILTA White Papers

Litigation and Practice Support

person reading and classifying a pile of documents. The reviewer is asked to read the document and apply a review code based on his or her judgment. While this may appear to be an easy task, it can be one of the most demanding and, at the same time, boring jobs for a well-educated, well-trained knowledge worker. Even with technology and software advances, a reviewer is required to read documents in relatively constrained workflows; scrolling through pages of a document to determine its meaning and intent in the context of the e-discovery request can be stressful. Adding to this stress, reviewers are often measured on their productivity (i.e., the number of documents or pages they review per day or per hour) and quality of work, without consideration for the workflow constraints that impact quality. It's no wonder that study after study has found that a straight plow-through linear review produces less-than-desirable results, which can lead to higher electronic discovery costs.

One way to measure the effectiveness of a review exercise is to submit the same collection of documents to multiple reviewers and assess their level of agreement on the classification of documents into specific categories. One such study, "Document Categorization in Legal Electronic Discovery: Computer Classification vs. Manual Review," found that the level of agreement among human reviewers was only in the 70 percent range, even when agreement was limited to positive determinations. As noted in the study, previous TREC inter-assessor agreement notes and other studies on this subject show a similar and consistent result. Especially noteworthy, from the Text REtrieval Conference (TREC), is the finding that only nine of the 40 topics studied had an agreement level higher than 70 percent, and four topics had no agreement at all.

Some of the disagreement is due to the fact that most documents admit varying levels of interpretation and cannot easily be reduced to a binary yes/no decision (i.e., where do you draw the line on relevance?). However, a significant amount of the variability can also be attributed to the fatigue that comes with the repetitiveness of the review process.

Given that linear review is flawed and can result in higher e-discovery costs, what are the remedies? Intelligent use of new technologies, along with a workflow that leverages those technologies, can produce the kind of improvement in accuracy and speed that has been demonstrated in other industries. Let's examine a few strategies.
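To make the agreement measurement described above concrete, here is a minimal sketch in Python. It is not taken from the cited study; the function and variable names are illustrative assumptions. It compares two reviewers' responsive/not-responsive calls on the same document set and reports both overall agreement and agreement limited to positive determinations, in the spirit of the figures quoted above.

# Minimal sketch: pairwise agreement between two reviewers' relevance calls.
# calls_a and calls_b are hypothetical dicts mapping document IDs to
# True (responsive) or False (not responsive); the names are illustrative.

def review_agreement(calls_a, calls_b):
    shared = set(calls_a) & set(calls_b)   # documents both reviewers coded
    if not shared:
        return None

    # Overall agreement: fraction of shared documents coded identically.
    matches = sum(1 for doc in shared if calls_a[doc] == calls_b[doc])
    overall = matches / len(shared)

    # Positive agreement: of the documents either reviewer marked responsive,
    # the fraction both marked responsive. A stricter measure, similar in
    # spirit to the positive-determination figure cited above.
    pos_a = {doc for doc in shared if calls_a[doc]}
    pos_b = {doc for doc in shared if calls_b[doc]}
    either = pos_a | pos_b
    positive = len(pos_a & pos_b) / len(either) if either else 1.0

    return {"overall": overall, "positive": positive}

# Example: two reviewers disagree on one of four documents.
a = {"doc1": True, "doc2": False, "doc3": True, "doc4": False}
b = {"doc1": True, "doc2": False, "doc3": False, "doc4": False}
print(review_agreement(a, b))   # {'overall': 0.75, 'positive': 0.5}

The exact statistics used in the cited studies may differ; the point of the sketch is simply that agreement restricted to positive calls is a stricter measure and is typically lower than overall agreement.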
