Outsourcing Subjective Review – a few things to keep in mind.
When a client has given you, say, 400,000 documents, records, emails, or other pieces of data, however they are constituted, being advised by your subjective review team (SRT) that you have 160,000 “relevant” documents is of limited practical assistance.
Part of the problem with outsourcing subjective review is that the “team” you have retained may not be functioning as a team. In some organisations, subjective reviewers work alone, off-site and without any real control.
Any review project begins with a slower and quite conservative approach to assessing “relevance”: what are the facts, who are the key players, what are the issues, and how are these files and documents related? Certainly, no one wants to “miss” a relevant piece of information, let alone the proverbial “smoking gun.” However, a conservative mindset results in too many “relevant” documents. Once the learning curve has been passed there is a danger, especially where supervision and monitoring are insufficient, that over-inclusion will continue and the necessary attrition will not be achieved.
lmi has devised methods to overcome the problem of “too many relevant documents.” First and foremost, lmi uses a “real team” approach: groups are assembled and work in the same room, or in close enough proximity to permit real-time interaction and assessment. The concept of being “in the room” is fundamental to the lmi model. If a reviewer has a question, or better still “an idea,” they are comfortable talking it through with their colleagues. From a management perspective, team members’ strengths and weaknesses are easier to detect, and those not meeting lmi’s standards are readily identified.
As previously mentioned, the common “managed review” model used by other firms involves having reviewers work from home. The weaknesses in this model are significant:
- individuals work in complete isolation from one another and will not learn from or co-ordinate with their colleagues;
- after an initial training session, they necessarily rely on the initial materials for guidance;
- it does not lend itself to a reviewer being comfortable asking questions or expressing concerns;
- each team member is limited by the strength and stability of their internet connection;
- the review platform is often slow and unstable, as is the nature of web-based applications;
- management response to ideas is necessarily slow; and
- the individual nature of the process lacks the “collective” wisdom of a group.
The reality is that privilege and relevancy review is subjective. Clients will sometimes look at the work of a review team and disagree with the decisions being made. With lmi’s model, when this occurs a very constructive process takes place. Because the team is “on the same page” as a group, they can explain why they are reviewing in a particular manner. A client, looking at comparatively few documents, may not appreciate the full ramifications of a decision. The reviewers “in the room” always have a sense of the mass of information they are dealing with, and of how a small decision will cascade through the process. Often the client will either agree with the wisdom of the group or a new approach will develop. In either case, provided there is consistency, changes in categorisation can be made quickly and efficiently by programmatic methods.
Compare this with the “work from home” model: reviewers are sent updated instructions from the client with no real back-and-forth on whether that is the wise course of action. Each reviewer has only their own small set of documents by which to judge how the review is proceeding, and with almost no contact with other reviewers they cannot gain a realistic sense of the “whole.”
Another important aspect of the lmi model is the in-house relationship between the objective and subjective teams. The workflow is timely and seamless, and information is openly exchanged. Everyone, from the first person who opens a PST file to the most senior subjective reviewer, has “relevancy” on their mind.