Gathering Data for Labor Market and Mitigation Studies

Performing labor market and mitigation studies requires gathering and using specific information, often about the plaintiff’s job search efforts within the labor market. For example, did the individual apply for jobs that matched their expertise and education level? Additionally, if the individual applied for a different type of job, we must determine whether its qualifications are similar to those of their previous position.

Labor market data sources, such as the U.S. Bureau of Labor Statistics (BLS) labor market surveys and the U.S. Department of Labor’s O*NET, are often used to analyze an individual’s potential job matches. Through this type of research, an accurate picture of the plaintiff’s job search efforts can be developed, providing the information needed in these types of labor market and mitigation studies.

For more information visit Employstats or contact us at info@employstats.com.

EmployStats Advises the EEOC on Collection of Survey Data

EmployStats was brought on to provide feedback on the best uses of this EEOC-2 data. In these panel meetings, we testified about our industry-level experience in using available pay data to analyze claims of disparate pay and employment discrimination. We described to the EEOC how companies like EmployStats, research institutions, and public users utilize federally maintained datasets in practice, comparing the survey data the EEOC collected to other federal databases, such as those of the Bureau of Labor Statistics (BLS).

We explained the benefits of current benchmark pay data from different public and private sources, and the additional value the EEO-2 survey data could bring. We also provided the EEOC with recommendations on best practices for formatting and publishing its data, so that this survey data can be of maximum utility to researchers and the general public.

A few years ago, the EEOC created an additional component to its Equal Employment Opportunity (EEO) survey sent to employers in the United States, known as Component 2 (EEOC-2 / EEO-2). This addition to the survey asked employers about employees’ compensation and hours worked, organized by job category, gender, race, ethnicity, and pay band. After collecting this data, the EEOC was interested in analyzing it and determining how it could best be utilized by both the commission and the public at large. Partnering with the National Academy of Sciences (NAS), the EEOC formed a panel to closely examine this compensation data and collect input on its utilization. EmployStats was able to collaborate with several well-known professionals, including William Rogers, Elizabeth Hirsh, Jenifer Park, and Claudia Goldin.

To discuss a potential case or to answer any questions, you can email info@employstats.com or contact us at 1-866-629-0011.

Meet Our Team!

Our Personnel

  • Dwight Steward: Principal Economist
  • Roberto Cavazos: Practice Lead Economist
  • Valentyna Katsalap: Economist
  • Matt Rigling: Consultant
  • Proma Paromita: Research Associate
  • Carl McClain: Economic Researcher
  • Mawching Griffin: Office Manager
  • Adela Botello: Operations Manager
  • Emma Dooley: Marketing and Operations Assistant

To learn more about us visit www.employstats.com

Research Associate Proma Paromita Participates in Stata Online Training Course

Our very own Research Associate Proma Paromita recently participated in an online training course for Stata. Stata is an integrated statistical software package that provides tools for data manipulation, visualization, statistics, and automated reporting.

Here at EmployStats our researchers are always working with huge data sets that need to be analyzed and formatted. This course provided Proma with a better understanding of Stata programming and insight on how to more efficiently dissect large quantities of data.

In a sit-down interview, Proma discussed her experience and the general layout of the course. She explained how the course content was scaffolded, beginning with the basics and increasing in complexity as the course continued. Proma described the course as concise, with practical examples, making it a perfect tool for future Stata programming. Stata also offers many resources on its website and YouTube channel to help individuals work through challenges.

As a takeaway from the course, Proma believes that understanding Stata’s syntax is more beneficial than memorizing it. Additionally, she stressed that it is essential to understand the data you are working with in order to produce tangible results.

Data Mining and Litigation (Part 1)

Data Mining is one of the many buzzwords floating about in the data science ether, a noun high on enthusiasm, but typically low on specifics. It is often described as a cross between statistics, analytics, and machine learning (yet another buzzword). Data mining is not, as is often believed, a process that extracts data. It is more accurate to say that data mining is a process of extracting unobserved patterns from data. Such patterns and information can represent real value in unlikely circumstances.

Those who work in economics and the law may find themselves confused by, and suspicious of, the latest fads in computer science and analytics. Indeed, concepts in econometrics and statistics are already difficult to convey to judges, juries, and the general public. Expecting a jury composed entirely of mathematics professors is fanciful, so the average economist and lawyer must find a way to convincingly say that X output from Y method is reliable, and presents an accurate account of the facts. In that instance, why make a courtroom analysis even more remote with “data mining” or “machine learning”? Why risk bamboozling a jury, especially with concepts that even the expert witness struggles to understand? The answer is that data mining and machine learning open up new possibilities for economists in the courtroom, if used for the right reasons and articulated in the right manner.

Consider the following case study:

A class action lawsuit is filed against a major Fortune 500 company, alleging gender discrimination. In the complaint, the plaintiffs allege that female executives are, on average, paid less than their male counterparts. One of the allegations is that starting salaries for women are lower than those for men, and that this bias against women persists as they continue working and advancing at the company. After constructing several different statistical models, the plaintiffs’ expert witness economist confirms that starting salaries for women are, on average, several percentage points lower than those for men. This pay gap is statistically significant, the findings are robust, and the regressions control for a variety of different employment factors, such as the employee’s department, age, education, and salary grade.

However, the defense now raises an objection in the following vein: “Of course men and women at our firm have different starting salaries. The men we hire tend to have more relevant prior job experience than the women.” An employee with more relevant prior experience would (one would suspect) be paid more than an employee with less. In that case, the perceived pay gap would not be discriminatory, but the result of an as-yet unaccounted-for variable. So, how can the expert economist quantify relevant prior job experience?

For larger firms, one source could be the employees’ job applications. In this case, each job application was filed electronically and can be read into a data analytics program. These job applications list the last dozen job titles each employee held prior to their position at this company. Now the expert economist lets out a small groan. In all, there are tens of thousands of unique job titles. It would be difficult (or if not difficult, silly) to add every single prior job title as a control in the model. So, it would make sense to organize these prior job titles into defined categories. But how?

This is one instance where new techniques in data science come into play.
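As a simple illustration of what such categorization might look like, the sketch below maps raw job titles to a few coarse categories using keyword rules. The category names, keywords, and titles are hypothetical, for illustration only; this is not the method used in the case study, and at the scale of tens of thousands of titles, text-clustering or classification techniques would typically be needed.

```python
from collections import Counter

# Hypothetical categories and keywords, for illustration only.
CATEGORY_KEYWORDS = {
    "management": ["manager", "director", "chief", "vice president"],
    "engineering": ["engineer", "developer", "programmer"],
    "sales": ["sales", "account executive"],
    "administrative": ["assistant", "clerk", "receptionist"],
}

def categorize_title(title):
    """Return the first category whose keyword appears in the title."""
    t = title.lower()
    for category, keywords in CATEGORY_KEYWORDS.items():
        if any(kw in t for kw in keywords):
            return category
    return "other"

# A handful of hypothetical prior job titles from the applications.
prior_titles = [
    "Senior Software Engineer", "Regional Sales Manager",
    "Office Receptionist", "Web Developer", "Forklift Operator",
]
counts = Counter(categorize_title(t) for t in prior_titles)
print(counts)
```

Note that “Regional Sales Manager” lands in “management” because the rules fire in order; hand-written rules like these quickly become unwieldy, which is exactly why data-mining techniques become attractive.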

What is the Latest on the Minimum Wage?

A working paper published by the NBER in January 2021 attempts to cast new light on minimum wage research in the United States. The paper, co-authored by Professor David Neumark and Peter Shirley, is titled “Myth or Measurement: What Does the New Minimum Wage Research Say About Minimum Wages and Job Loss in the United States?”. The paper argues that, contrary to more traditional summaries of the literature, there is clear evidence of negative impacts of minimum wages on employment.

Concentrating on research evidence from within the United States since the early 1990s, Neumark and Shirley assembled the available papers and literature published on the topic over the past 30 years. They identified the core estimates and the key takeaways of the authors and researchers of each study. After assembling the literature, they find that almost 80% of the studies suggest negative employment effects from raising the minimum wage.

There were several other takeaways from Neumark’s research. For instance, the evidence that the minimum wage had strong, negative employment effects was far more robust for certain populations, such as teens, young adults, and the less educated. At the same time, while studies of low-wage industries broadly show negative employment effects, that research is not as decisively one-sided.

The evidence is not unambiguous, with some research in specific categories (such as low-skilled workers) showing net-zero or even positive effects from raising the minimum wage. But the paper clearly shows that most of the evidence indicates that “minimum wages reduce low-skilled employment,” and that “It is incumbent on anyone arguing that research supports the opposite conclusion to explain why most of the studies are wrong.”

See here for the Neumark & Shirley working paper.

When Does Age Discrimination Begin?

An article published by Forbes in January 2020 discusses age discrimination in the workplace, specifically during the hiring process. The article, written by Patricia Barnes, highlights a working paper by Professor David Neumark titled “Age Discrimination in Hiring: Evidence from Age-Blind vs. Non-Age-Blind Hiring Procedures”. Both the article and the paper indicate that discrimination begins to occur at the time age becomes apparent to the employer. When that happens varies and is often specific to each employer’s practices and hiring procedures.

One of the key findings of Neumark’s research is that individuals who apply for a job position in person are substantially less likely to continue on in the hiring process than those who apply on the Internet. While other indicators of age, such as dates of education and employment, may lead to discrimination through Internet applications, they are less obvious and less accurate indicators of age.

In Neumark’s study and working paper, an individual turning in a restaurant application in person was about 50% more likely not to receive a job offer than someone who did not apply in person but received an interview. Potential discrimination existed throughout the hiring process, depending on when the employer was made aware of an applicant’s age.

When calculating damages in discrimination lawsuits specifically claiming failure to hire, it is important to understand the timeline and when during the hiring process potential discrimination might have taken place. As Neumark’s paper outlines, it is likely necessary to investigate multiple steps in the hiring process to reveal or refute discriminatory hiring practices.

See here for Professor Neumark’s full working paper.

Upcoming EmployStats Seminar for State Auditors

EmployStats is honored to announce that it will be teaching a course on statistical sampling for the Texas State Auditor’s Office (SAO) this winter. The course, titled Statistical Sampling for Large Audits, will take place online on December 14 and 15, 2020.

The State Auditor’s Office (SAO) is the independent auditor for Texas state government. The SAO performs audits, reviews, and investigations of any entity receiving state funds. EmployStats’ principal economist, Dwight Steward, Ph.D., along with Matt Rigling, MA and Carl McClain, MA, will be instructing this course for auditors from state and local government.

Over this two-day, all-online course, the EmployStats team will give participants a crash course in the uses of statistical sampling, how statistical samples are conducted, and when statistical samples are legally and scientifically valid in performing audits.

To find out more about the seminar and the Texas State Auditor’s Office, please visit the SAO Website. For more on EmployStats, visit our website: Employstats.com!

EmployStats Publishes Big Data Book

Big Data permeates our society, but how will it affect U.S. courts? In civil litigation, attorneys and experts are increasingly reliant on the analysis of large volumes of electronic data, which provides information and insight into legal disputes that could not be obtained through traditional sources. The sources of Big Data are virtually limitless: time and payroll records, medical reimbursements, stock prices, GPS histories, job openings, credit data, sales receipts, and social media posts, just to name a few. Experts must navigate complex databases and often messy data to generate reliable quantitative results. Attorneys must always keep an eye on how such evidence is used at trial. Big Data analyses also present new legal and public policy challenges in areas like privacy and cybersecurity, while advances continue in artificial intelligence and algorithmic design. For these and many other topics, EmployStats has a roadmap on the past, present, and future of Big Data in our legal system.

Order your copy of Dr. Dwight Steward and Dr. Roberto Cavazos’ book on Big Data Analytics in U.S. Courts!

Sampling in Wage and Hour Cases

Often in wage and hour cases, attorneys are faced with the decision of analyzing the complete time and payroll records for a class population, or analyzing just a sample of the population’s records.  While in an ideal world, analyzing the full population of data is the best approach, it may not always be feasible to do so.

For instance, some of the individuals within the class may be missing records due to poor data management, or both sides may agree that analyzing the full population would be too costly or time consuming. In these cases, the attorneys can elect to have an expert draw a reliable, statistically valid random sample from the full population.

Below are some common terms that attorneys can expect to hear when discussing sampling in their wage and hour cases:

Random Sampling, n. sampling in which every individual has a known probability of being selected.

Sample, n. a set of individuals or items drawn from a parent population.

Sample Size, n. the number of individuals or items in a sample.

Simple Random Sampling, n. sampling in which every individual has an equal probability of being selected and each selection is independent of the others.

Discussion: This very common statistical routine is analogous to ‘pulling a name out of a hat’.

Stratified Sampling, n. a method of statistical sampling that draws sub-samples from different sections, or strata, of the overall data population.

Discussion: Stratified sampling routines are used in employment settings when there are important differences between different groups of employees being surveyed. For example, in a survey of off-the-clock work, workers at different locations, and with different supervisors, may have different work cultures that make them more (or less) likely than other workers to have worked during their lunch period. In this instance, a stratified sampling routine may be used to account for those differences.
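As a sketch of how these two routines differ in practice, the following example draws both a simple random sample and a proportionally allocated stratified sample from a hypothetical class population of 200 employees at two work locations (the population, locations, and sample sizes are invented for illustration):

```python
import random
from collections import defaultdict

random.seed(42)  # fixed seed so the draw is reproducible

# Hypothetical class population: 200 employees across two locations.
population = [{"id": i, "location": "Austin" if i < 120 else "Dallas"}
              for i in range(200)]

# Simple random sampling: every employee has an equal chance of
# selection, like pulling names out of a hat.
simple_sample = random.sample(population, 20)

# Stratified sampling: group employees by location (the strata), then
# draw a proportional sub-sample from each stratum so both work sites
# are represented in the right shares.
strata = defaultdict(list)
for emp in population:
    strata[emp["location"]].append(emp)

stratified_sample = []
for location, members in strata.items():
    n = round(20 * len(members) / len(population))  # proportional allocation
    stratified_sample.extend(random.sample(members, n))

print(len(simple_sample), len(stratified_sample))
```

With 120 Austin and 80 Dallas employees, the stratified routine draws 12 and 8 employees respectively, guaranteeing each location its proportional share, whereas the simple random sample may over- or under-represent a location by chance.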