For data-based evidence, the analysis is the heart of the matter: the output derived from the data compiled for a case. In most instances, the analytics do not need to be complex. Indeed, powerful results can be derived by simply calculating summary statistics (mean, median, standard deviation). More complicated techniques, like regressions, time-series models, and pattern analyses, do require a background in statistics and programming. But even the most robust results are ineffective if an opposing witness successfully argues they are immaterial to the case. Whether simple or complex, litigants and expert witnesses should ensure an analysis is both relevant and robust against criticism.
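
As a minimal sketch of how far simple summary statistics can go, the hypothetical pandas snippet below computes the mean, median, and standard deviation of an invented set of hourly pay rates; the values and column names are illustrative only.

```python
import pandas as pd

# Hypothetical pay records; values and column names are invented for illustration.
pay = pd.DataFrame({
    "employee_id": [101, 102, 103, 104, 105],
    "hourly_rate": [12.50, 12.75, 13.00, 18.25, 12.60],
})

# Simple summary statistics often carry most of the analytical weight.
print(pay["hourly_rate"].mean())    # average rate
print(pay["hourly_rate"].median())  # middle of the distribution
print(pay["hourly_rate"].std())     # spread around the mean
```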


What type of result would provide evidence of a party’s assertion? The admissibility and validity of statistical evidence vary by jurisdiction. In general, data-based evidence should be as straightforward as possible; more complex models should be used only when necessary. Superfluous analytics are distractions, leading to expert witnesses “boiling the ocean” in search of additional evidence. Additionally, courts still approach statistical techniques with some skepticism, despite their acceptance in other fields.


If more complex techniques, like regressions, are necessary, litigants must be confident in their methods. For example, what kind of regression will be used? Which variables are “relevant” as inputs? What is the output, and how does it relate to a party’s assertion of fact? Parties need to link outputs, big or small, to a “therefore” moment: “the analysis gave us a result, therefore it is proof of our assertion in the following ways.” Importantly, this refocuses the judge or jury’s attention on the relevance of the output, rather than its complex derivation.
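
For instance, a wage claim might be supported with an ordinary least squares regression. The sketch below uses the statsmodels package with invented file and column names; the actual model and inputs would, of course, depend on the case.

```python
import pandas as pd
import statsmodels.api as sm

# Hypothetical payroll extract; the file and column names are placeholders.
df = pd.read_csv("payroll.csv")

# The "relevant" inputs must be justified, not merely available.
X = sm.add_constant(df[["hours_worked", "tenure_years", "protected_class"]])
y = df["weekly_pay"]

model = sm.OLS(y, X).fit()

# The output that gets linked to the "therefore" moment: the coefficient on
# protected_class, its sign, and its confidence interval.
print(model.summary())
```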


Does the analysis match the scope of the complaint or a fact in dispute? Is the certified class all employees, or just a subset of employees within a company? Is the location a state, or a county within a state? If the defendant is accused of committing fraud, for how many years? Generalizing from a smaller or tangential analysis is inherently risky, and an easy target for opposing witnesses. If given a choice, avoid conjecture. Do not assume that an analysis in one area, for one class, or for one time period automatically applies to another.
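
As a small illustration of keeping an analysis within scope, the hypothetical filter below restricts a dataset to the certified class, location, and period named in the complaint; the column names and cutoffs are assumptions made for the example.

```python
import pandas as pd

# Illustrative file and column names.
records = pd.read_csv("transactions.csv", parse_dates=["date"])

# Keep only the certified class, the named location, and the alleged period.
in_scope = records[
    (records["job_class"] == "hourly_warehouse")              # certified class
    & (records["state"] == "TX")                              # named location
    & (records["date"].between("2016-01-01", "2019-12-31"))   # alleged period
]
```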


A key component of analytical and statistical work is replicability. In fields such as finance and insurance, or in large-scale employment cases, both parties’ analyses should be replicable. Outside parties should be able to analyze the same data and obtain the same results. In addition, replicability can expose errors, sleights of hand, or outright manipulation.
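
One low-tech way to support replicability, sketched below with an assumed file name, is to record a cryptographic fingerprint of the input data alongside the scripted analysis, so an outside party can confirm they are starting from the same file before re-running the same code.

```python
import hashlib

# Fingerprint the exact input file used in the analysis (file name is illustrative).
with open("payroll.csv", "rb") as f:
    digest = hashlib.sha256(f.read()).hexdigest()

print(f"Input data SHA-256: {digest}")
# Re-running the same script on a file with the same hash should
# reproduce the same results, line for line.
```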


Data-based evidence requires focus, clarity, and appropriate analytical techniques; otherwise, an output is just another number.

After acquiring and merging data, litigants will be tempted to rush into analysis. But raw datasets, no matter how carefully constructed, are inevitably riddled with errors. Such errors can bias or invalidate results. Data cleaning, the process that ensures a dataset is correct, consistent, and usable, is a vital step for any data-based evidence.


There is an often-quoted rule in data science that 80% of one’s time is spent cleaning and manipulating data, while only 20% is spent actually analyzing it. Spelling mistakes, outliers, duplicates, extra spaces, missing values: the list of potential complications is nearly infinite. Corrections should be recorded at every stage, ideally in scripts of the program being used (e.g., R, SAS, SQL, Stata); data cleaning scripts leave behind a structured, defensible record. Different types of data will require different types of cleaning, but a structured approach is what produces error-free analytical results.


Start with simple observations. Look at batches of random rows: what values are stored for a given variable, and are those values consistent? Some rows may format phone numbers differently, capitalize inconsistently, or round values. How many values are null, and are there patterns in the null entries? Calculate summary statistics for each variable: are there obvious mistakes (e.g., negative time values)? Only after this assessment can cleaning begin.
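
A first pass along these lines might look like the pandas sketch below; the file and column names are assumed for illustration.

```python
import pandas as pd

df = pd.read_csv("timekeeping.csv")  # illustrative file name

# Eyeball a random batch of rows for formatting inconsistencies.
print(df.sample(10, random_state=1))

# How many values are null in each column, and do they cluster anywhere?
print(df.isna().sum())

# Summary statistics flag obvious mistakes, such as negative time values.
print(df["hours_worked"].describe())
print((df["hours_worked"] < 0).sum(), "rows with negative hours")
```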


Fixing structural errors is straightforward: inconsistent spellings or capitalization, split values (e.g., data containing both ‘N/A’ and ‘Not Available’), and formatting issues (e.g., numbers stored as strings rather than integers) can be systematically reformatted. Duplicate observations, common when datasets are merged, can be easily removed.
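
The hypothetical pandas snippet below shows how such corrections can be scripted so that each step leaves a defensible record; the file and column names are placeholders.

```python
import pandas as pd

df = pd.read_csv("claims.csv", dtype=str)  # illustrative file name

# Standardize capitalization and stray whitespace.
df["provider_name"] = df["provider_name"].str.strip().str.title()

# Collapse split values into a single convention.
df["status"] = df["status"].replace({"N/A": "Not Available"})

# Convert numbers stored as strings into numeric values.
df["billed_amount"] = pd.to_numeric(df["billed_amount"], errors="coerce")

# Remove exact duplicates introduced by merging, and record how many were dropped.
rows_before = len(df)
df = df.drop_duplicates()
print(f"Removed {rows_before - len(df)} duplicate rows")
```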


However, data cleaning is not entirely objective. Reasonable assumptions must be made when handling irrelevant observations, outliers, and missing values. If class X or transaction type Y is excluded from the litigation, it is reasonable to remove those observations. However, one cannot automatically assume that Z, a similar class, can be removed as well. Outliers work the same way: what legal reasoning justifies removing this value from the dataset? A demonstrably suspicious measurement is a good reason; the mere fact that a value is very large or very small is not.
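
When exclusions are made, recording the rationale next to the code keeps each assumption visible and defensible; a hedged sketch with invented class and column labels:

```python
import pandas as pd

df = pd.read_csv("transactions.csv")  # illustrative file name

# Transaction type Y is outside the scope of the complaint, so it is removed,
# and the reason is documented here rather than applied silently.
df = df[df["transaction_type"] != "Y"]

# Class Z resembles the excluded classes but is not named in the complaint;
# it stays in the dataset absent a legal basis for removal.

# An extreme value is dropped only with a documented reason (e.g., a timestamp
# showing the entry was a system test), not merely because it is large or small.
```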


Missing data is a difficult problem: how many missing or null values are acceptable for an analysis to still produce robust results? Should you ignore missing values, or should you impute values based on similar data points? There is no easy answer. Both approaches assume missing observations are similar to the rest of the dataset, yet the fact that observations are missing data is informative in and of itself. A more cautious stance, the one with the fewest assumptions, will generally be easier to defend in court.
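
The sketch below contrasts the two approaches on a hypothetical column, so the assumption behind each is explicit; the file and column names are invented.

```python
import pandas as pd

df = pd.read_csv("overtime.csv")  # illustrative file name

# Option 1: drop rows with missing hours (assumes the missing rows
# resemble the rest of the dataset and can safely be ignored).
dropped = df.dropna(subset=["overtime_hours"])

# Option 2: impute missing hours with the median for the same job code
# (assumes employees with missing records look like their peers).
df["overtime_hours_imputed"] = df["overtime_hours"].fillna(
    df.groupby("job_code")["overtime_hours"].transform("median")
)

# Either way, report how much of the dataset the assumption touches.
print(df["overtime_hours"].isna().mean(), "share of rows with missing hours")
```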


Skipping data cleaning, and assuming perfect data, casts doubt on any final product. Data-based evidence follows the maxim “garbage in, garbage out.”

Data analytics is only beginning to tap into the unstructured data that forms the bulk of everyday life. Text messages, emails, maps, audio files, PDF files, pictures, blog posts: these sources represent ‘unstructured data,’ as opposed to the structured data sources mentioned thus far. Up to 80% of all enterprise data is unstructured. So, how can a client’s text messages or recorded phone calls be analyzed like a SQL table? Unstructured data does not fit easily into predefined models or schemas; some CRM tools (e.g., Salesforce) do store text-based fields, but typically documents do not lend themselves to traditional database queries. This does not mean ‘structured’ and ‘unstructured’ data are in conflict with each other.


Document-based evidence is, of course, an integral part of the legal system. Lawyers and law offices now have access to comprehensive e-discovery programs, which sift through millions of documents based on keywords and terms. Selecting relevant information to prove a case is nothing new. The intersection with Data Analytics arises when hundreds of thousands or millions of text-based records are analyzed as a whole to prove an assertion in court.


Turning unstructured text into analyzable, structured data is made possible by increasingly sophisticated methods. Some machine learning algorithms, for example, analyze pictures and pick up on repeating patterns. Text mining programs scrape PDFs, websites, and social media for content, and then store the text in preassigned columns and variables. Analyses can be run, for example, on the positivity or negativity of a sentence, the frequency of certain words, or the correlation of certain phrases with one another. Natural language processing (NLP) includes speech recognition, which itself has seen significant progress in the past two decades. Analytics on unstructured data is now more useful than ever in producing relevant evidence.
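
As a toy example of structuring text for analysis, the snippet below counts word frequencies across a handful of invented messages; real matters would involve far larger corpora and more careful preprocessing.

```python
import re
from collections import Counter

# Invented messages standing in for documents produced in discovery.
messages = [
    "Please adjust the rate before the fixing today",
    "Can you move the submission down a notch?",
    "Rate looks fine, no adjustment needed",
]

# Tokenize each message and count word frequencies across the corpus.
words = Counter(
    token for msg in messages for token in re.findall(r"[a-z']+", msg.lower())
)
print(words.most_common(5))

# Frequencies, phrase co-occurrence, or sentiment scores can then be stored
# as structured variables and analyzed like any other dataset.
```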


As important as the unstructured data is its corresponding metadata: data that describes data. A text message or email contains additional information about itself: for example, the author, the recipient, the time, and the length of the message. These bits of information can be stored in a structured dataset, without any reference to the original content, and then analyzed. For example, a company has metadata on electronic documents at specific points in a transaction’s life cycle; running a pattern analysis on this metadata could identify whether certain documents were created, altered, or destroyed after an event.
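
As a small illustration, the sketch below pulls a few header fields from a fabricated email into a structured table using Python's standard email library, without touching the body of the message.

```python
import pandas as pd
from email import message_from_string
from email.utils import parsedate_to_datetime

# A fabricated message used only to illustrate the idea.
raw = """From: trader.a@example.com
To: trader.b@example.com
Date: Mon, 12 Mar 2012 09:15:00 +0000
Subject: fixing

see you at 11
"""

msg = message_from_string(raw)

# Store only the metadata: author, recipient, time, and length.
meta = pd.DataFrame([{
    "author": msg["From"],
    "recipient": msg["To"],
    "sent": parsedate_to_datetime(msg["Date"]),
    "length_chars": len(msg.get_payload()),
}])
print(meta)
```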


In instances of high-profile fraud, such as the London Inter-bank Offered Rate (LIBOR) manipulation scandal, prolific emails and text messages between traders added a new dimension to regulators’ cases against major banks. Overwhelming and repeated textual evidence, which can be produced through analyses of unstructured data, is yet another tool for litigating parties to prove a pattern of misconduct.

Evidence based on Data Analytics hinges on the relevance of its underlying sources. Determining what potential data sources can prove is as important as generating an analysis. The first question should be “What claims do I want to assert with data?” The type of case and nature of the complaint should inform litigants where they should start looking in discovery. For example, a dataset of billing information could determine whether or not a healthcare provider committed fraud. Structured data sources like Excel files, SQL servers, and third-party databases (e.g., Oracle) are the primary source material for statistical analyses, particularly those using transactional data.
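
For instance, a first pass over hypothetical billing data might flag providers whose claim volumes sit far outside the norm; the sketch below uses invented file and column names and is a screening step, not a fraud test in itself.

```python
import pandas as pd

claims = pd.read_csv("billing.csv")  # illustrative file name

# Claims per provider, compared against the distribution across peers.
per_provider = claims.groupby("provider_id")["claim_id"].count()
cutoff = per_provider.mean() + 3 * per_provider.std()

# Providers billing far above the norm warrant closer review,
# not an automatic conclusion of fraud.
print(per_provider[per_provider > cutoff])
```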


In discovery, it’s important that both parties be aware of these structured data sources. Often, these sources do not have a single designated custodian; rather, they may be the purview of siloed departments or an IT group. For any particular analysis, rarely is all the necessary data held in one place. Identifying valuable source material becomes more difficult as the complexity of interactions between different sources increases. To efficiently stitch together smaller databases and tables, a party should conduct detailed data mapping by identifying links between structured data sources: for example, how two tables relate to one another, how a SQL table relates to an Excel file, or how a data cube relates to a cloud file. Data mapping identifies which structured data sources are directly linked to one another through their variables, and how they fit together as a whole in an analysis.
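
In practice, a mapped link often comes down to a shared key between two sources; a minimal pandas sketch with assumed table and column names:

```python
import pandas as pd

# Hypothetical sources: a payroll extract (from a SQL export) and an HR
# roster kept in Excel, linked by an employee identifier.
payroll = pd.read_csv("payroll_extract.csv")
roster = pd.read_excel("hr_roster.xlsx")

# The data map records that employee_id links the two sources.
merged = payroll.merge(roster, on="employee_id", how="left", validate="m:1")

# Unmatched rows reveal gaps in the mapping.
print(merged["job_class"].isna().sum(), "payroll rows with no roster match")
```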


However, when using data-based evidence to answer a question, structured data is rarely clean or well organized. Variables defined in a table may be underutilized or unused. Legacy files imported into newer systems can become corrupted. The originators of macros or scripts for data pulls may no longer work for the organization and may have left no detailed instructions. Sometimes the data simply do not exist: not because a party buried evidence, but by the very nature of electronically stored information (ESI).


Any defensible analysis is inherently limited by what data is available. With data analytics, the maxim “absence of evidence is not evidence of absence” is especially apparent. It is always more dangerous to exaggerate or generalize from the available data than to produce a narrow but statistically sound result. Thus, given the data available, what questions can be asked? What questions can be answered? Finally, if there is no data, does it mean there is no problem?

With businesses and government now firmly reliant on electronic data for their regular operations, litigants are increasingly presenting data-driven analyses to support their assertions of fact in court. This application of Data Analytics, the ability to draw insights from large data sources, is helping courts answer a variety of questions. For example, can a party establish a pattern of wrongdoing based on past transactions? Such evidence is particularly important in litigation involving large volumes of data: business disputes, class actions, fraud, and whistleblower cases. The use cases for data-based evidence increasingly cut across industries, whether financial services, education, healthcare, or manufacturing.


Given the increasing importance of Big Data and Data Analytics, parties with a greater understanding of data-based evidence have an advantage. Statistical analyses of data can provide judges and juries with information that otherwise would not be known. Electronic data hosted by a party is discoverable, data is impartial (in the abstract), and large data sets can be readily analyzed with increasingly sophisticated techniques. Data-based evidence, effectively paired with witness testimony, strengthens a party’s assertion of the facts. Realizing this, litigants engage expert witnesses to provide dueling tabulations or interpretations of data at trial. As a result, US case law on data-based evidence is still evolving. Judges and juries are making important decisions based on the validity and correctness of complex and at times contradictory analyses.


This series will discuss best practices in applying analytical techniques to complex legal cases, while focusing on important questions that must be answered along the way. Everything from acquiring data to preparing an analysis, running statistical tests, and presenting results carries huge consequences for the applicability of data-based evidence. In cases where both parties employ expert witnesses to analyze thousands if not millions of records, a party’s assertions of fact are easily undermined if its analysis is deemed less relevant or inappropriate. Outcomes may turn on the statistical significance of a result, the relevance of a prior analysis to a certain class, the importance of excluded data, or the rigor of an anomaly detection algorithm. At worst, expert testimony can be excluded altogether.


Many errors in data-based evidence, at their heart, stem from faulty assumptions about what the data can prove. Lawyers and clients may overestimate the relevance of their supporting analysis, or mold data (and assumptions) to fit certain facts. Litigating parties and witnesses must constantly ensure data-driven evidence is grounded in best practices while addressing the matter at hand. Data analytics is a powerful tool, but it is only as good as its user.

All data projects can benefit from building a Data Management Plan (“DMP”) before the project begins.  Typically a DMP is a formal document that describes your data and what your team will do with it during and after the data project.

There is no cookie-cutter DMP that is right for every project, but in most cases the following questions should be addressed in your DMP:

  1. What kind of data will your project analyze?  What file formats and software packages will you use?  What will your data output be?  How will you collect and process the data?
  2. How will you document and organize your data?  What metadata will you collect?  What standards and formats will you use?
  3. What are your plans for data access within your team?  What are the roles that the individuals in your team will play in the data analysis process?  How will you address any privacy or ethical issues, if applicable?
  4. What are your plans for long term archiving?  What file formats will you archive the data in?  Who will be responsible for the data after the project is complete?  Where will you save the files?
  5. What outside resources do you need for your project?  How much time will the project take your team to complete and audit?  How much will it cost?

When working on any type of data project, planning ahead is a crucial step.  Before starting a project, it’s important to think through as many of the details as possible so you can budget enough time and resources to accomplish all of the objectives.  In fact, some organizations and government entities require a Data Management Plan (“DMP”) to be in place for all of their projects.


A DMP is a formal document that describes the data and what your team will do with it during and after the data project.  Many organizations and agencies require one, and each entity has its own specific requirements for DMPs.


DMPs can be created in a simple readme.txt file, or can be as detailed as discipline-specific plans built with online templates such as those at DMPTool.org.  The DMPTool is designed to help create ready-to-use data management plans.
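
For a small project, even a bare-bones readme can serve as the DMP; the skeleton below is purely illustrative and simply mirrors the questions listed above.

```text
DATA MANAGEMENT PLAN (readme.txt) -- illustrative skeleton

1. Data and formats:    payroll extracts (CSV), analysis scripts, summary tables
2. Documentation:       variable dictionary, cleaning log for each script
3. Access and roles:    analyst (cleaning), economist (modeling), counsel (review)
4. Long-term archiving: read-only copies of final datasets and scripts, kept with the case file
5. Resources and cost:  estimated hours, software licenses, time to audit
```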

Doug Berg, Ph.D., is an expert in big data, and has been working with EmployStats and Principal Economist Dr. Dwight Steward for several years on class action and discrimination lawsuits.  Dr. Berg is currently a professor at Sam Houston State University in the Department of Economics.  He received his Bachelor’s degree in Accounting from the University of Minnesota, and his Ph.D. in Economics from Texas A&M University.  Dr. Berg will provide additional support and his expert insight into using big data in employment litigation.  He describes litigation as “living on data”: the better the data, the better the argument.  EmployStats welcomes his insight into the underlying meaning behind the data our clients provide us!

Due to the massive computational requirements of analyzing big data, finding the best approach to big data projects can be a daunting task for most individuals.  At EmployStats, our team of experts utilizes top-of-the-line data systems and software to seamlessly analyze big data and provide our clients with high-quality analysis as efficiently as possible.

  1. The general approach for big data analytics begins with fully understanding the data provided as a whole.  Not only must the variable fields in the data be identified, but one must also understand what these variables represent and determine what values are reasonable for each variable in the data set.  
  2. Next, the data must be cleaned and reorganized into the clearest format, ensuring that data values are not missing and are within reasonable ranges.  As the size of the data increases, so does the amount of work necessary to clean it.  Larger datasets contain more individual components that are typically dependent on each other, so it is necessary to write computer programs to evaluate the accuracy of the data.
  3. Once the entire dataset has been cleaned and properly formatted, one needs to define the question that will be answered with the data.  One must look at the data and see how it relates to the question.  The questions for big data projects may be related to frequencies, probabilities, economic models, or any number of statistical properties.  Whatever it is, one must then process the data in the context of the question at hand.
  4. Once the answer has been obtained, one must determine that it is a strong answer.  A delicate answer, one that would change significantly if the technique of the analysis were altered, is not ideal.  The goal of big data analytics is a robust answer, and one must try to attack the same question in a number of different ways in order to build confidence in the answer (see the sketch after this list).
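
As an illustration of attacking the same question in several ways, the hypothetical sketch below estimates an average weekly underpayment three different ways and compares the results; the file and column names are invented.

```python
import pandas as pd

df = pd.read_csv("underpayment.csv")  # illustrative file name
x = df["underpayment_per_week"]

# Three routes to the same quantity; a robust answer should not swing
# wildly between them.
estimates = {
    "mean": x.mean(),
    "median": x.median(),
    "trimmed mean (middle 90%)": x[
        (x > x.quantile(0.05)) & (x < x.quantile(0.95))
    ].mean(),
}
for name, value in estimates.items():
    print(f"{name}: {value:.2f}")
```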

Big data is not simply a size; it is a way of describing the type of data tools that will be utilized for an analysis.  Most, if not all, of the big data we work with at EmployStats requires specialized data tools that are constantly changing and evolving, with new tools being introduced to the market all the time.

Each tool handles big data differently and offers specific benefits that determine how an analysis will be performed, as well as how results will be interpreted.  EmployStats constantly keeps up to date with the latest and greatest data analytics software for large data sets in order to optimize the outcome of these types of analyses.

Recent cases such as United States of America v. Abbott Laboratories and Pompliano v. Snapchat have utilized big data analysis techniques in litigation, showing that the use of big data is not only common in litigation but often necessary to bring cases to a successful close.