
30 Errors in Joan Meier’s 2019 Study

January 5, 2021 by Robert Franklin, JD, Member, National Board of Directors


Jennifer Harman and Demosthenes Lorandos’ takedown of Joan Meier’s survey of cases involving allegations of domestic violence, child abuse and parental alienation effectively reveals Meier’s frank bias in the matter.  Meier has long played for the team that seeks to convince us that parental alienation is a sham concocted by scheming fathers to take custody from “protective” mothers.  Her study, published in September 2019, is Meier’s effort to coat that bias with a scientific gloss.

Here and here are my two previous pieces on Harman/Lorandos as they lay waste to Meier’s shoddy work.

Meier is plainly biased against fathers and against the very idea of parental alienation, and her 2019 study’s many flaws all serve that bias.  Harman/Lorandos identify an astonishing 30 errors in the design and implementation of Meier’s work.  Needless to say, I won’t go into all of them, but I will hit a few of the most egregious ones.

One of the most significant issues is that Meier plainly doesn’t want skeptics (or anyone else) to know exactly what she did in her study or how she did it.  The organization that funded her work, the taxpayer-funded National Institute of Justice, requires that recipients make their methodology and data readily available prior to publication, but Meier simply refused.  She published in September 2019, and almost a year passed before she got around to complying with her obligations of honesty and transparency.  That way, apparently, she and her fellow travelers could point to her work as authoritative without anyone being able to criticize her methodology or her treatment of data.

That included behavior that comes close to the “dog ate my homework” dodge that even fourth-graders get called out for.  Not Meier, whose work is routinely cited as authoritative by the MSM.

So, for example, readers of Meier’s study are directed to “Appendix B” and “Appendix C” for information on coding and the plan for analyzing data.  Those appendices would allow other academics to understand how the coding of legal cases and the allegations therein was conducted and how the data were analyzed.  But Meier failed to include those two appendices, rendering criticism of her work all but impossible.  Worse, when Harman/Lorandos contacted Meier to obtain the information, she refused to provide it.  Normally, academics are quite open and accommodating of such requests from other academics, but not in this case.

Meier’s study examined mostly appellate cases, each of which was coded for data such as claims of parental alienation, various forms of abuse, etc.  Now, the facts of each case are unique, so questions arise about the nature of the alienation, the nature of the abuse, the severity of each, whether either was deemed “founded” by the court, and so on.  In dealing with a large number of cases, researchers have to rely on coders to accurately analyze the facts and findings of each case and code them correctly for later statistical analysis.

Who were Meier’s coders?  How were they trained?  How were discrepancies in the coding resolved?  We don’t know, because Meier refused to tell us.  Perhaps more importantly, did the coders know what the study was about before coding the data?  Did they know beforehand the hypotheses to be tested?  How were they selected?  Did Meier hire only coders who share her biases about PA, allegations of abuse and child custody?  Did she tell them the goals of her study?  We don’t know, but clearly that information is vital to the validity or invalidity of her work.  Truth to tell, anyone hired by Meier to code data likely knows, without being told, both her biases and that the study is intended to confirm them.
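To see why those questions matter: studies that rely on multiple coders typically report an inter-rater reliability statistic such as Cohen’s kappa, which measures how often coders agree beyond what chance alone would produce.  Without that information, readers can’t judge whether the coding was consistent at all.  Below is a minimal sketch of the kappa calculation in Python; the labels and codings are entirely hypothetical and come from nothing in Meier’s data.

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Cohen's kappa: inter-rater agreement corrected for chance."""
    assert len(coder_a) == len(coder_b)
    n = len(coder_a)
    # Observed agreement: fraction of cases both coders labeled identically.
    p_o = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Expected chance agreement, from each coder's marginal label frequencies.
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    labels = set(coder_a) | set(coder_b)
    p_e = sum((freq_a[l] / n) * (freq_b[l] / n) for l in labels)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical codings of ten cases by two coders ("PA" = alienation found,
# "AB" = abuse found, "NO" = neither) -- illustrative data only.
coder_1 = ["PA", "PA", "AB", "NO", "PA", "AB", "NO", "PA", "AB", "PA"]
coder_2 = ["PA", "AB", "AB", "NO", "PA", "PA", "NO", "PA", "AB", "AB"]
print(f"kappa = {cohens_kappa(coder_1, coder_2):.2f}")  # ~0.53: only moderate agreement
```

A kappa near 1.0 means the coders applied the scheme consistently; a figure like the 0.53 above would signal real trouble.  That’s exactly the kind of check Meier’s missing appendices prevent anyone from performing.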

Then there was the problem of how Meier selected the cases to be studied:

One of the most striking problems with Meier et al.’s (2019) research paper is how the legal cases for two data sets were selected, leading to what may be a “cherry-picked” sample that is stacked in favor of the hypotheses that were described. There was a lack of transparency about the search terms used to select cases and processes by which they were developed in the original paper. The inclusion and exclusion criteria for the cases appeared biased because the Meier team deliberately selected the “cleanest” and most “paradigmatic” cases involving abuse and alienation (Meier et al. 2019, p. 7). The Meier et al. paper also notes that a large number of cases that reflect a significant proportion of postdecree appellate cases, such as cases where both parents claimed the other was abusive, were excluded.

The clear shortcomings of Meier’s work go on and on, eventually leaving the reader agog at the paper’s lack of basic honesty and integrity.  In short, not only Meier’s work itself but also her history of frank bias strongly suggests that her 2019 study is less a disinterested search for the truth about the interaction of dueling claims of PA and abuse than an effort simply to confirm that bias.  Harman/Lorandos suggest the same.

Given the sampling, coding, and analytical problems described above, it is highly likely that Meier et al.’s (2019) interpretation of their findings is plagued by confirmation bias…

The good news is that Harman and Lorandos have done their own analysis of legal cases and very publicly and transparently corrected Meier’s many errors.  Unlike Meier, they’ve done their work for all to see and are content to let the chips fall where they may.  That means their findings are much more likely to be reliable and therefore much more interesting than hers.

We’ll see what those findings are next time.
