As the American press reports breathlessly on the #WarOnWomen in conjunction with Hillary Clinton’s Presidential Run Version 2.0, two of Professor Jacobson’s colleagues are battling to defend research showing that there may actually be a campus #WarOnMen.

A favorite assertion of campus-level feminist activists is that women in the sciences have a harder time obtaining jobs, recognition, and tenure than their male counterparts. Cornell University professors Wendy M. Williams and Stephen J. Ceci decided to test that theory, and published a study of faculty hiring preferences showing that women were preferred over identically qualified men.

A look at the hard data reveals a shocking truth: Women are being offered science positions at colleges and universities at rates higher than their actual presence within the pool of applicants. For example, analysis of the numbers between 2002 and 2004 reveals that 20% of applicants in mathematics were women, but they received 32% of the job offers.
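The arithmetic behind that gap is worth making explicit. A quick sketch, using only the two figures cited above (20% of applicants, 32% of offers), shows the size of the overrepresentation:

```python
# Figures cited above for mathematics hiring, 2002-2004:
# women were 20% of applicants but received 32% of job offers.
applicant_share = 0.20
offer_share = 0.32

# Representation ratio: share of offers relative to share of the applicant pool.
# A ratio of 1.0 would mean offers tracked the pool exactly.
ratio = offer_share / applicant_share
print(f"Women received offers at {ratio:.1f}x their share of the applicant pool")
# → Women received offers at 1.6x their share of the applicant pool
```

In other words, on these numbers a female applicant in mathematics was offered a position at 1.6 times the rate her presence in the pool would predict.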

As it was with the other “incontrovertible” science of climate change, the actual result is the opposite of the one predicted by the models; and, as was the case with climate scientists when their data refuted preferred models, the response to the researchers and their report has been exceedingly hostile.

However, Williams and Ceci have soldiered on, countering the numerous attacks on their study.

The bulk of the charges made by academics unhappy with the results focused on the methodology used to obtain the numbers. Williams and Ceci have explained why those allegations are untrue.

Perhaps the most interesting aspect of this research is that one of the professors’ major goals was to help resolve hiring barriers by determining the root causes of discrimination against women. A careful and unbiased review of their work clearly demonstrates that if the data had shown men were hired preferentially, Williams and Ceci would have detailed that result.

The difference, however, is that their integrity would have remained unquestioned had their results coincided with activist talking points. In all likelihood, both would be receiving numerous scholarly accolades. They would have been the academic equivalent of rock stars.

Instead, both researchers have spent a great deal of time beating back attacks on their work. A favorite claim being used to dismiss their results is that the participants knew that the study was focusing on sexist hiring practices, so the answers obtained were biased to be politically correct.

Williams describes why this charge is completely unsubstantiated:

… First, 30 faculty were asked to guess what the study was about. None guessed correctly. Second, in Experiment 5, faculty were given only a single applicant to rate, male or female. They had no knowledge a mirror applicant was sent to other faculty with the gender changed. Thus, faculty rating the male applicant could not have downgraded him to 7.14 under an assumption that some unknown faculty member elsewhere would upgrade a female applicant to 8.20. Third, if faculty were aware that our purpose was to determine if they are biased, they should have given the same rank to the identically-qualified man and woman (i.e., tie them for first place). Only a handful of faculty chose this option. Fourth, if respondents knew the purpose of the study, why in some conditions was there no female preference? It seems implausible to argue that faculty knew the hypothesis, but only acted upon it occasionally.

In another post, Williams counters the charges that “they studied the wrong question” and that there was “self selection bias” in their study. A review of the evidence she offers shows these claims are also baseless.

…If self-selection was a factor, it would have changed the outcome when the analyses were re-run using sample weights to control for nonresponse. But the results did not change. The irony of this criticism about our alleged failure to check on self-selection is that we went far beyond any previous experiment on gender bias in hiring, by using national-polling-type sample weights to control for response and nonresponse rates in each subgroup of our sample.
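The weighting technique Williams describes can be illustrated with a toy calculation. The idea is that when a subgroup responds at a lower rate, each of its responses is up-weighted so the weighted sample mirrors the full invited pool. This is only a hedged sketch of the general method; the ratings and response rates below are invented for illustration and do not come from the study:

```python
# Hypothetical illustration of nonresponse weighting (not the authors' code).
ratings = [8.0, 7.5, 9.0, 6.5]          # invented faculty ratings
response_rates = [0.5, 0.8, 0.5, 0.8]   # invented subgroup response rates

# Each observed response stands in for 1 / response_rate invited faculty,
# so low-response subgroups count for more in the weighted average.
weights = [1.0 / r for r in response_rates]

unweighted = sum(ratings) / len(ratings)
weighted = sum(w * x for w, x in zip(weights, ratings)) / sum(weights)

print(f"unweighted mean: {unweighted:.2f}")  # → 7.75
print(f"weighted mean:   {weighted:.2f}")    # → 7.92
```

If self-selection were driving the results, the weighted and unweighted answers would diverge; Williams's point is that in the actual study they did not.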

It will be fascinating to read their responses to the more ludicrous and politically charged allegations directed at their work: that they are tools of the right wing, their work was not peer-reviewed, they responded only to comments from men, etc. etc. etc.

This is just one more example of the meltdown that occurs when the data generated by models differ from the numbers reality actually produces.