Richard Dawkins saying poorly thought through reactionary things again

Oh dear: [embedded tweet from Richard Dawkins, musing about a qualifying test for voters]

And… [a follow-up tweet in the same vein]

Alternatively we could not do anything like that because it is an appalling idea.

There are at least three levels of confused thinking here. The first is that, in the past, attempts to ensure people were intellectually ‘qualified’ to vote have been attempts to disenfranchise specific ethnic groups. Coupled with restricted access to education, and with tests wittingly and unwittingly full of the biases of the more powerful ethnic group, such schemes were simply a way of creating a kind of apartheid electoral system.

OK, but what if somehow only people who could really understand the issues of the day could vote? Wouldn’t that be better? Isn’t it because of stupid people that we have Trump and Brexit? No, or at least not ‘stupid’ as the term is usually used. Voting for Trump or falling for Nigel Farage’s propaganda are certainly daft things to do, but a terrible secret of the world is that these are the kinds of ‘stupid’ that otherwise intelligent people do. There are connections between levels of education and political preference but they are neither simple nor straightforward. There is evidence of an ‘educational gradient’ in how people voted in the UK on Brexit, but that gradient does not account for other regional variations (e.g. Scotland). It’s also important to remember that any educational gradient represents people with quite different economic interests as well. Nor was that gradient as smooth as it might sound:

“So, based on the above, the Leave vote was not more popular among the low skilled, but rather among individuals with intermediate levels of education (A-Levels and GCSE high grades), especially when their socio-economic position was perceived to be declining and/or to be stagnant.”

https://blogs.lse.ac.uk/politicsandpolicy/brexit-and-the-squeezed-middle/

Blaming the UK’s current Brexit confusion on stupidity may be cathartic but it provides zero insight into a way forward. Further, it ignores that the architects of the political chaos are products of reputedly the best education you can get in Britain. Boris Johnson is manifestly a buffoon, but he is a buffoon with a good degree in classics from Oxford. The Boris Johnsons of this world would waltz past Dawkins’s test.

US politics also has a complex relationship with educational attainment. Conservative views peak at mid-ranges of education (e.g. https://www.pewresearch.org/fact-tank/2016/09/15/educational-divide-in-vote-preferences-on-track-to-be-wider-than-in-recent-elections/ ). People with college degrees and more advanced higher education are currently more likely to vote Democrat, but in the past (e.g. the 1990s) this was less so. The growing (indeed, reversed) education divide doesn’t account for differences among ethnic groups or between genders. Other divides (e.g. urban versus rural) may work causally in the other direction: different economic demands make higher education a different kind of choice in rural versus urban contexts, while the underlying politics rests on other urban-versus-rural differences.

Even if we imagine a Dawkins-dystopia in which you had to have a university degree to vote (a much more substantial hurdle than the demands of either the UK or US citizenship tests), the proposal falls into the political fallacy of technocracy as an alternative to democracy. By ‘fallacy’ I don’t mean that competence or technical understanding or evidence-based policy are bad ideas or things we don’t want to see in government, but rather that it is a reasoning error to judge democracy in principle as a process by which technically competent policy is formed.

Democracy serves to provide consent from the governed to the government. That’s its purpose. It provides a moral and practical basis on which there can be any kind of government that is even vaguely just. Logically, a vote doesn’t determine whether something is true or not (except in trivial cases on questions about ‘what will people vote for’). Consequently, it is always easy to attack democracy by setting it up AS IF that’s what voting is supposed to achieve. A referendum can’t determine what the smartest course of action is but then that’s not what a referendum or an election is supposed to do. Instead asking people to vote is a way of trying to establish broad social agreement on what a country will do.

Without that kind of broad social agreement a country has only two options: disunity or authoritarianism. Restricting the franchise along any axis will lead to overt authoritarianism. Paternalistic ‘benevolent’ authoritarianism is still a system that depends on brutality.

The shorter version: democracy is about consent of the governed, not about how smart voters are. The political divides we currently have wouldn’t be solved by a test that a high-school graduate would pass. A nation in which only college graduates could vote would be a shitty one and politically unstable. Well-educated people can and do advance bad, ‘stupid’ political ideas. Come to think of it, there’s a great example here: Richard Dawkins is very well educated and here he is putting forward a stupid idea.

It’s Voynich time again

Which means another round of breathless headlines, as I discussed in 2017: [https://camestrosfelapton.wordpress.com/2017/08/29/speaking-of-fantastical-drawings/ ] That time the theory was that the author was an Italian Jewish doctor. This time the theory is that the author was a nun who lived here: https://en.wikipedia.org/wiki/Aragonese_Castle

The peer-reviewed paper [https://www.tandfonline.com/doi/full/10.1080/02639904.2019.1599566 ] presents a plausible case that the writing is in a “proto-romance” language with its own writing system related to Latin letters.

Unfortunately the response to “the Voynich manuscript has been cracked!” is “oh no it hasn’t!”. Ars Technica has a more sceptical article than most on the news [https://arstechnica.com/science/2019/05/no-someone-hasnt-cracked-the-code-of-the-mysterious-voynich-manuscript/ ] Specifically, it seems “proto-Romance language” is more of a means of matching possible labels to words, i.e. finding a word that matches from any one of Europe’s Romance languages, from Spanish to Romanian, without requiring a consistent mapping. It also allows for borrowings from nearby languages (e.g. Slavic or Persian). It’s plausible for a language to have such borrowings, but it gives a lot of freedom for matching script to words and then “translating” the words.
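The degrees-of-freedom worry can be put in rough numbers. Suppose, purely for illustration (both numbers below are invented, not taken from the paper or the Ars Technica piece), that any single candidate language gives a random Voynich word only a small chance of a plausible-looking match. Allowing a match from any of many languages, plus borrowings, makes finding some match far more likely:

```python
# Invented numbers purely to illustrate the "freedom to match" problem.
p_match_one_language = 0.05   # chance a random word looks plausible in ONE language
candidate_languages = 12      # Romance languages plus allowed borrowings

# Probability that at least one candidate language offers a match,
# assuming (roughly) independent chances per language.
p_some_match = 1 - (1 - p_match_one_language) ** candidate_languages
print(round(p_some_match, 2))   # ~0.46: spurious matches become easy to find
```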

There’s a longer sceptical analysis here https://voynichportal.com/2019/05/07/cheshire-recast/ [ETA: and a follow up that is even more brutal https://voynichportal.com/2019/05/16/cheshire-reprised/ ]

Way beyond my capacity to judge but the test would be in translating longer un-illustrated passages, which apparently has yet to occur.

Poverty and IQ

Among the section of the right that regards IQ as the only explanatory variable in society aside from money, the relationship between poverty and IQ is used to defend the huge inequities in our society as an outcome of a functioning meritocracy. It does not require much deep inspection of how modern capitalist societies work to see that they are neither functioning well nor are they meritocracies.

The opposite view is that differences in performance on IQ tests are more caused by poverty than vice versa. There are multiple reasons for believing this, from access to education and motivation to attitudes towards the role of test taking in a person’s life (e.g. how much effort do you put into something that you expect to do poorly in?). However, specific causes are hard to demonstrate empirically. Hard to demonstrate, perhaps, but maybe a clever experimental design can shed more light on that.

I was just reading a 2013 paper that looked at the impact of poverty on cognition in an interesting way: Poverty Impedes Cognitive Function, by Anandi Mani, Sendhil Mullainathan, Eldar Shafir and Jiaying Zhao (abstract: http://science.sciencemag.org/content/341/6149/976 )

“The poor often behave in less capable ways, which can further perpetuate poverty. We hypothesize that poverty directly impedes cognitive function and present two studies that test this hypothesis. First, we experimentally induced thoughts about finances and found that this reduces cognitive performance among poor but not in well-off participants. Second, we examined the cognitive function of farmers over the planting cycle. We found that the same farmer shows diminished cognitive performance before harvest, when poor, as compared with after harvest, when rich. This cannot be explained by differences in time available, nutrition, or work effort. Nor can it be explained with stress: Although farmers do show more stress before harvest, that does not account for diminished cognitive performance. Instead, it appears that poverty itself reduces cognitive capacity. We suggest that this is because poverty-related concerns consume mental resources, leaving less for other tasks. These data provide a previously unexamined perspective and help explain a spectrum of behaviors among the poor. We discuss some implications for poverty policy.”

Poverty Impedes Cognitive Function Anandi Mani et al. Science 341, 976 (2013); DOI: 10.1126/science.1238041

First, some caveats. These are two fairly narrow experiments, both of which involve somewhat contrived circumstances (for good reasons). I don’t know if these results have been reproduced.

Having said that, it is interesting to look at the two experiments and what their results were.

The basic hypothesis was this:

“We propose a different kind of explanation, which focuses on the mental processes required by poverty. The poor must manage sporadic income, juggle expenses, and make difficult trade-offs. Even when not actually making a financial decision, these preoccupations can be present and distracting. The human cognitive system has limited capacity (12–15). Preoccupations with pressing budgetary concerns leave fewer cognitive resources available to guide choice and action.”

Poverty Impedes Cognitive Function Anandi Mani et al. Science 341, 976 (2013); DOI: 10.1126/science.1238041

To test this they compared individual performance on cognitive tests both with and without some degree of financial stress. The first study was a ‘laboratory study’ that demonstrates the impact as a kind of proof of concept. The financial stress here is artificial, but if anything that makes the results more interesting.

In the first study the researchers went to a New Jersey shopping mall and recruited shoppers (who got paid) to take part in four related experiments. The basic principle of each experiment was two tasks. One task asked people to consider a realistic but hypothetical financial problem. For example, they might be asked about their car having to get some urgent repairs. Participants were randomly given either a ‘hard’ situation where the costs would be high or an ‘easy’ situation where the costs were low, but the easy and hard situations were cognitively similar. The second task was a more classic IQ-style test (Raven’s Progressive Matrices) plus a spatial compatibility task.

The four versions were designed to control for cognitive impacts of the first activity. The first two versions changed the amount of maths needed in the financial scenario. The third version added incentives for correct answers. The fourth version separated the two activities so that the first was completely finished before the person sat the IQ-style test.

The group being studied also provided information on their income, and the data was analysed by classifying the participants as either rich or poor. The point was to see not whether the ‘rich’ participants performed better on the IQ test, but rather how much impact the first activity (i.e. having to engage with a potentially financially stressful situation) had on the cognitive scores.

[Figure from the paper: accuracy on the Raven’s matrices and the cognitive control tasks in the hard and easy conditions, for the poor and the rich participants in experiment 1.]

The graph is for experiment 1 but the results were similar for all four. The impact of the ‘hard’ versus ‘easy’ versions of the first activity on the second activity was much bigger for people with less money. For the wealthier participants, the ‘hard’ scenario had less impact, almost certainly because they were faced with a situation that would have less of an impact on their own finances. In short, having to worry about money and how you will pay for things that you need has a genuine and measurable impact on your ability to perform some cognitive tasks… at least within this experimental scenario, but the fact that a PRETEND bit of financial stress had a measurable impact is itself notable.
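To be concrete about what is being compared: the quantity of interest is an interaction, the gap between ‘easy’ and ‘hard’ performance for the poor participants versus the same gap for the rich participants. A minimal sketch of that comparison, using made-up accuracy numbers purely to show the shape of the analysis (the real figures are in the paper):

```python
# Illustrative sketch only: the accuracy numbers below are invented to show the
# shape of the comparison, not the paper's actual results.

# Hypothetical mean accuracy on the Raven's-style task, by income group and
# by whether the preceding financial scenario was 'easy' or 'hard'.
mean_accuracy = {
    ("poor", "easy"): 0.55,
    ("poor", "hard"): 0.40,
    ("rich", "easy"): 0.57,
    ("rich", "hard"): 0.55,
}

def scenario_impact(group):
    """Drop in accuracy when the preceding scenario is 'hard' rather than 'easy'."""
    return mean_accuracy[(group, "easy")] - mean_accuracy[(group, "hard")]

impact_poor = scenario_impact("poor")   # 0.15 with these made-up numbers
impact_rich = scenario_impact("rich")   # 0.02

# The quantity of interest is the difference between those impacts (an
# interaction effect), not whether rich participants score higher overall.
interaction = impact_poor - impact_rich
print(f"poor: {impact_poor:.2f}, rich: {impact_rich:.2f}, interaction: {interaction:.2f}")
```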

The second study was quite different and looked at some real financial stress.

“Our second study examined 464 sugarcane farmers living in 54 villages in the sugarcane-growing areas around the districts of Villupuram and Tiruvannamalai in Tamil Nadu, India. These were a random sample of small farmers (with land plots of between 1.5 and 3 acres) who earned at least 60% of their income from sugarcane and were interviewed twice—before and after harvest—over a 4-month period in 2010. There were occasional nonresponses, but all of our pre-post comparisons include only farmers we surveyed twice.”

Poverty Impedes Cognitive Function Anandi Mani et al. Science 341, 976 (2013); DOI: 10.1126/science.1238041

The work of the sugarcane farmers created a set of natural controls. An individual farmer has only one harvest a year and hence essentially only one pay-day a year. However, the timing of harvests is staggered over several months, so at a particular time of year it may be post-harvest for one farmer but pre-harvest for another. The farmers naturally face greater financial pressure the longer it has been since their last harvest.

The results showed a similar but slightly smaller impact than the laboratory study. Farmers performed better on an IQ-style test (Raven’s Progressive Matrices) after* they had been paid than before, and the difference was large.

“How large are these effects? Sleep researchers have examined the cognitive impact (on Raven’s) of losing a full night of sleep through experimental manipulations (38). In standard deviation terms, the laboratory study findings are of the same size, and the field findings are three quarters that size. Put simply, evoking financial concerns has a cognitive impact comparable with losing a full night of sleep. In addition, similar effect sizes have been observed in the performance on Raven’s matrices of chronic alcoholics versus normal adults (39) and of 60- versus 45-year-olds (40). By way of calibration, according to a common approximation used by intelligence researchers, with a mean of 100 and a standard deviation of 15 the effects we observed correspond to ~13 IQ points. These sizable magnitudes suggest the cognitive impact of poverty could have large real consequences.”

Poverty Impedes Cognitive Function Anandi Mani et al. Science 341, 976 (2013); DOI: 10.1126/science.1238041
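The ‘by way of calibration’ arithmetic is worth spelling out. An effect measured in standard-deviation units can be re-expressed on the conventional IQ scale (mean 100, SD 15). A small sketch, treating the paper’s ~13-point figure as referring to the laboratory effect (my assumption) and using the quoted three-quarters ratio for the field study:

```python
IQ_SD = 15  # conventional IQ scale: mean 100, standard deviation 15

def sd_to_iq_points(effect_in_sd):
    """Re-express an effect size measured in standard deviations as IQ points."""
    return effect_in_sd * IQ_SD

# Back-solving from the paper's ~13 IQ points gives a lab effect of roughly 0.87 SD.
lab_effect_sd = 13 / IQ_SD
# The quoted passage says the field (farmer) effect was about three quarters that size.
field_effect_sd = 0.75 * lab_effect_sd

print(round(sd_to_iq_points(lab_effect_sd), 1))    # 13.0 IQ points
print(round(sd_to_iq_points(field_effect_sd), 1))  # roughly 9.8 IQ points
```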

Put another way: we don’t think in isolation (even if you aren’t neurotypical). Background concerns and worries all have an impact on how you think and your capacity to problem solve. They definitely have an impact on your thinking in the artificial conditions of an IQ test.

*[There were also controls on the order they did the tests. Some of the participants took the test first after they had been paid and then were tested later in the year when their money had run low.]

The Right would rather men died than admit any flaws in masculinity

I shouldn’t read Quillette. For those unfamiliar with the Australian/international online magazine, it is part of that genre of modern political thought that could be called anti-left contrarianism, which covers various sorts from Steven Pinker to Jordan Peterson. Its stock style of article is shallowness dressed up as depth, utilising the same style of misrepresentation of issues as the tabloid press but with longer sentences and a broader vocabulary.

Over the past few days it has published a couple of pieces on the American Psychological Association’s Guidelines for Psychological Practice with Boys and Men. Now you would think that the stalwart defenders of innate gender differences would be happy that an influential body like the APA would overtly recognise that men and boys have distinct psychological needs that require special advice for practitioners. After all, is this not the ‘moderate’ criticism of the rise of feminism? That somehow, men’s needs and men’s issues have been sidelined? Ha, ha, who am I kidding 🙂 The APA guidelines were characterised by MRAs, conservatives and the so-called “Intellectual dark web” as a direct attack on masculinity.

Here is one particularly stupid piece at Quillette that reflects the harrumphing style of response: https://quillette.com/2019/01/23/thank-you-apa/ The writer (a professor of psychology at North Dakota State University) either hasn’t read the guidelines or is actively misrepresenting them.

However, a second piece is what actually caught my attention. It’s better written but it also attacks a strawman version of the guidelines: https://quillette.com/2019/01/23/how-my-toxic-stoicism-helped-me-cope-with-brain-cancer/

The writer describes how his stoical attitude helped him through a diagnosis of and treatment for brain cancer, and uses that to lambast the APA’s (apparent) criticism of stoicism in its guidelines. I, perhaps foolishly, left a comment on the piece. What follows is an edited version of my comment.

The piece is basically a strawman argument. It misrepresents what the APA guidelines say so as to imply that the guidelines express blanket disapproval of people acting stoically. Take, for example, the APA’s own article on the guidelines:

“It’s also important to encourage pro-social aspects of masculinity, says McDermott. In certain circumstances, traits like stoicism and self-sacrifice can be absolutely crucial, he says”

https://www.apa.org/monitor/2019/01/ce-corner.aspx

In the guidelines themselves, the word “stoicism” appears only twice, and neither instance is a blanket condemnation of it. One is in relation to difficulties SOME men have forming emotional bonds with other men:

“Psychologists can discuss with boys and men the messages they have received about withholding affection from other males to help them understand how components of traditional masculinity such as emotional stoicism, homophobia, not showing vulnerability, self-reliance, and competitiveness might deter them from forming close relationships with male peers”

American Psychological Association, Boys and Men Guidelines Group. (2018).
APA guidelines for psychological practice with boys and men

And the other connects with a broader health issue of men not seeking care that they may need:

“Psychologists also strive to reduce mental health stigma for men by acknowledging and challenging socialized messages related to men’s mental health stigma (e.g., male stoicism, self-reliance). “

American Psychological Association, Boys and Men Guidelines Group. (2018).
APA guidelines for psychological practice with boys and men

Neither example relates to being stoical in the face of a medical diagnosis, but rather to social pressures that mean some men (no, not ALL men) don’t seek care that they need (including for physical ailments) because of a misguided belief that they have to battle through by themselves.

The writer’s example is NOT an example of the case the APA guidelines were addressing. The writer sought out medical care, received a diagnosis and stuck with treatment. The writer’s self-described actions are the OPPOSITE of what the guidelines are discussing — they show a man taking his health seriously and SEEKING HELP. That’s good and healthy, but many men aren’t doing that and as a consequence are dying of treatable diseases.

As guideline 8 points out:

“For most leading causes of death in the United States and in every age group, males have higher death rates than females”

American Psychological Association, Boys and Men Guidelines Group. (2018).
APA guidelines for psychological practice with boys and men

At least some of this is due to men not seeking out healthcare they need:

“Between 2011 and 2013, men’s mortality rates for colorectal cancer, a generally preventable disease with regular screenings, were significantly higher than women’s, suggesting that many men do not engage in preventative care (American Cancer Society, 2015).”

American Psychological Association, Boys and Men Guidelines Group. (2018).
APA guidelines for psychological practice with boys and men

A stoical attitude need not be toxic, but when misapplied, misunderstood or adopted out of a feeling of social obligation, it can take on a harmful form: the belief that you shouldn’t seek out help. I’m glad the writer’s stoicism was of the positive kind, but the writer should perhaps also take greater care in researching what the APA guidelines actually say.


Not to put too fine a point on it: toxic aspects of masculinity kill men. There is nothing pro-man about it. Nobody is actually sticking up for men by pushing back against the APA guidelines.

50% chance of doing X

This is a bit abstract and it follows on from this previous post about voting demographics.

Let’s say you’ve got a statistical model that predicts that a person Z with characteristics Y has a 50% chance of doing X. The actual percentage doesn’t matter, but 50% is a nice amount of measurable uncertainty — maximal uncertainty, in that we know that we don’t know what person Z will do about X given the context of Y.

Empirically, the data would come from looking at lots of Y people and seeing that they do X 50% of the time. However, note that there’s a big and important distinction here between two extremes.

  1. Half of Y people do X and half of Y people don’t but those two halves are distinct. This implies that Y isn’t really the relevant factor here and we should be looking for some other feature of these people that better explains X behaviour.
  2. Y people do X half of the time randomly. That is, Y people are essentially a coin toss with regards to X. In that case Y isn’t great for predicting whether an individual will do X, but it is still really relevant to the question (particularly if W people behave more decisively). A small simulation after this list illustrates the difference between the two extremes.
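A tiny simulation (illustrative only; the populations and behaviour below are invented) makes the distinction concrete: both populations show X happening 50% of the time in aggregate, but observing the same individuals repeatedly tells them apart.

```python
import random

random.seed(0)
N_PEOPLE, N_OBSERVATIONS = 1000, 10

# Extreme 1: a deterministic split. Half of the Y-people always do X, half never do.
deterministic = [person < N_PEOPLE // 2 for person in range(N_PEOPLE)]
det_history = [[does_x for _ in range(N_OBSERVATIONS)] for does_x in deterministic]

# Extreme 2: every Y-person is a fair coin toss on every occasion.
coin_history = [[random.random() < 0.5 for _ in range(N_OBSERVATIONS)]
                for _ in range(N_PEOPLE)]

def aggregate_rate(history):
    """Share of all observations in which X happened."""
    return sum(map(sum, history)) / (N_PEOPLE * N_OBSERVATIONS)

def within_person_variance(history):
    """Average variance of each individual's own behaviour across observations."""
    def var(row):
        p = sum(row) / len(row)
        return p * (1 - p)
    return sum(var(row) for row in history) / len(history)

# Both populations look like "Y people do X 50% of the time"...
print(aggregate_rate(det_history), aggregate_rate(coin_history))   # ~0.5 and ~0.5
# ...but only the coin-toss population varies within individuals.
print(within_person_variance(det_history),   # 0.0: each person is perfectly predictable
      within_person_variance(coin_history))  # ~0.22: each person really is a coin toss
```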

In the demographic voting model, and taking a figure of say 80%:20% for atheists splitting between left and right, I suspect this is a grouping where individuals have even less variability in their actual voting patterns. Some of that 20% will be Ayn Rand-style atheists who are very committed to a right-wing viewpoint, rather than the 20% representing a 20% chance that any given atheist would vote Republican. However, that is not necessarily true of other groups, where the percentage may more closely represent a degree of individual variability.

 

US Voting Demographic Model

The Economist has a fascinating demographic model on US voters here: https://www.economist.com/graphic-detail/2018/11/03/how-to-forecast-an-americans-vote

There are no details on how robust the model is, but they claim to have built it up from a large number of surveys of sufficient detail to compare the relative chance of a given person voting Republican or Democrat within a sub-group while controlling for the other sub-groups that person would be in.

It is an interesting perspective on political groupings. It’s not causal exactly but could help disentangle what relates to what in other groups.

For example, imagine you had a group of people who weren’t ostensibly related by politics. It could be a profession or the members of a hobby-related club. Now imagine that the members of the club were 70%/30% atheist v Christian and 60%/40% Democrat v Republican. Does the club lean Democrat because it has so many atheists in it, or does it lean atheist because it has so many Democrats in it? The Economist’s model helps answer that question. Most Democrats aren’t atheists (mainly because few Americans are atheists) but atheism strongly implies a person will vote Democrat. Based on those numbers it looks like the Democratic lean is more due to the large number of atheists than vice versa.
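A back-of-envelope version of that reasoning, where the 80% figure echoes the rough 80:20 atheist split mentioned in the previous post and the figure for Christians is an assumption of mine purely for illustration (not a number taken from the Economist model):

```python
# The 0.80 figure echoes the rough 80:20 left/right split for atheists mentioned
# earlier; the 0.45 figure for Christians is an assumed placeholder for
# illustration, not taken from the Economist model.
p_dem_given_atheist = 0.80
p_dem_given_christian = 0.45

club_share_atheist = 0.70     # the hypothetical club: 70% atheist, 30% Christian
club_share_christian = 0.30

# If religious make-up were doing the work, the club's expected Democratic share is:
expected_dem_share = (club_share_atheist * p_dem_given_atheist
                      + club_share_christian * p_dem_given_christian)
print(round(expected_dem_share, 2))   # ~0.69, in the ballpark of the observed 60/40 lean
```

The expected Democratic share comes out in the same ballpark as the club’s actual 60/40 split, which is what “the lean is mostly explained by the atheist membership” looks like in numbers.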

You can plug in your own demographic details to see how close you fit. You can also plug in counterfactuals about yourself. I’m not American so I can’t factually describe what part of the US I live in but in a parallel universe in which I did but was otherwise much the same I’d have at least an 80% chance of voting Democratic REGARDLESS of where I was from in the US.

Claims and false claims

[A content warning: this post discusses sexual assault reports.]

All reports of a crime have potential consequences. We live in an age where false reports of crimes lead to death and where “SWATting” is a murderous prank. However, only one class of crime leads to constant concern from conservatives that false allegations are sufficiently common to require a kind of blanket scepticism. Amid the allegations against Supreme Court nominee Brett Kavanaugh, conservatives are pushing back against treating allegations of sexual assault at face value. This is part of a long history of people demanding that sexual assault crimes, in particular, require additional scepticism and scrutiny. That history has pushed the idea that rape claims are made by women to ruin a man’s reputation, even though historically the consequences of speaking out have always fallen more heavily on women than on men*.

A piece by David French at the conservative magazine National Review attempts to push back against modern feminist advocacy for supporting victims of sexual violence:

“It happens every single time there’s a public debate about sex crimes. Advocates for women introduce, in addition to the actual evidence in the case, an additional bit of  “data” that bolsters each and every claim of sexual assault. You see, “studies” show that women rarely file false rape claims. According to many activists, when a woman makes a claim of sexual assault, there is an empirically high probability that she’s telling the truth. In other words, the very existence of the claim is evidence of the truth of the claim.” https://www.nationalreview.com/2018/09/brett-kavanaugh-accusations-rape-claim-statistics/

The tactic here is one we’ve seen in multiple circumstances where research runs counter to conservative beliefs. FUD, fear-uncertainty-doubt — everything from cigarettes to DDT to climate change has had the FUD treatment as an intentional strategy to undermine research. Note the ‘how ridiculous’ tone of ‘In other words, the very existence of the claim is evidence of the truth of the claim.’ when, yes, the existence of somebody claiming a crime happened to them IS evidence that a crime happened to them. It is typically the first piece of evidence of a crime! It isn’t always conclusive evidence of a crime, for multiple reasons, but yes, manifestly it is evidence. The rhetorical trick here is to take something that is actually commonplace (i.e. a default assumption that when a person makes a serious claim of a crime there probably was a crime) and make it sound spurious or unusual.

The thrust of the article rests on an attempt to debunk research that has been done on the issue of false rape allegations. To maintain the fear of men suffering from false rape allegations, the article emphasises the uncertainty in the statistics to provoke doubt (and more uncertainty) among its target audience.

After a broad preamble, the article focuses on one study in particular and, to the article’s credit, it does actually link to the paper. The 2010 study in question is False Allegations of Sexual Assault: An Analysis of Ten Years of Reported Cases by David Lisak, Lori Gardinier, Sarah C. Nicksa and Ashley M. Cote. The specific study looks at reports of sexual assault to campus police at a major US Northeastern university. However, the study also contains (as you might expect) a literature review of other studies. What is notable about the studies listed is that they found the frequency of false allegations to be over-estimated. For example, a 2005 UK Home Office study found:

“There is an over-estimation of the scale of false allegations by both police officers and prosecutors which feeds into a culture of skepticism, leading to poor communication and loss of confidence between complainants and the police.”

The space where David French seeks to generate uncertainty around these studies is twofold:

  1. That sexual assault and rape are inherently difficult topics to research because of the trauma of the crime and social stigma [both factors that actually point to false allegations being *less* likely than other crimes, of course…]
  2. That there are large numbers of initial reports of sexual assault where an investigation does not proceed.

That large numbers of rape and sexual assault reports to police go uninvestigated may sound more like a scandal than a counter-argument to believing victims, but this is a fertile space for the right to generate doubt.

French’s article correctly reports that:

“researchers classified as false only 5.9 percent of cases — but noted that 44.9 percent of cases were classified as “Case did not proceed.””

And goes on to say:

“There is absolutely no way to know how many of the claims in that broad category were actually true or likely false. We simply know that the relevant decision-makers did not deem them to be provably true. Yet there are legions of people who glide right past the realities of our legal system and instead consider every claim outside those rare total exonerations to be true. According to this view, the justice system fails everyone else.”

The rhetorical trick is to confuse absolute certainty (i.e. we don’t know exactly what proportion of the uninvestigated claims might be false) with reasonable inferences that can be drawn from everything else we know (i.e. it is very, very unlikely to be most of them). We can be confident that cases that did not proceed BECAUSE the allegation was false (i.e. it was investigated and found to be false) were NOT included in the 44.9% of cases, precisely because those cases were counted as false allegations. More pertinently, linking back to the “fear” aspect of the FUD strategy, the 44.9% of cases also led to zero legal or formal consequences for alleged perpetrators.

I don’t know if this fallacy has a formal name, but it is one I see over and over. I could call it “methodological false isolation of evidence”, by which I mean the tendency to treat each piece of evidence for a hypothesis as separate, with no capacity for multiple sources of evidence to cross-corroborate. If I may depart into anthropogenic global warming for a moment, you can see the fallacy work like this:

  • The physics of carbon dioxide and the greenhouse effect imply that increased CO2 will lead to warming: countered by – ah yes, but we can’t know by how much and maybe it will be less than natural influences on climate and maybe the extra CO2 gets absorbed…
  • The temperature record shows warming consistent with the rises in anthropogenic greenhouse gases: countered by – ah yes, but maybe the warming is caused by something natural…

Rationally, the two pieces of evidence function together: correlation might not be causation, but if you have a causal mechanism AND correlation then, well, that’s stronger evidence than the sum of its parts.
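One way to make “stronger than the sum of its parts” precise is Bayesian: if two pieces of evidence are roughly independent, their likelihood ratios multiply. The numbers below are invented purely to show the structure of the argument, not to quantify the climate case.

```python
def update_odds(prior_odds, *likelihood_ratios):
    """Multiply prior odds by each independent piece of evidence's likelihood ratio."""
    odds = prior_odds
    for lr in likelihood_ratios:
        odds *= lr
    return odds

def odds_to_probability(odds):
    return odds / (1 + odds)

# Invented numbers, purely to show the structure of the argument.
prior_odds = 1.0      # start agnostic: 50/50
lr_mechanism = 3.0    # a known physical mechanism that predicts the effect
lr_correlation = 3.0  # an observed correlation consistent with that mechanism

# Either piece alone is only moderately persuasive...
print(odds_to_probability(update_odds(prior_odds, lr_mechanism)))                   # 0.75
# ...but together (if roughly independent) they push much further.
print(odds_to_probability(update_odds(prior_odds, lr_mechanism, lr_correlation)))   # 0.9
```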

With these statistics we are not operating in a vacuum. They need to be read and understood along with the other data that we know. Heck, that idea is built into the genre of research papers and is exactly why literature reviews are included. Police report statistics are limited, do contain uncertainty and aren’t a window into some Platonic world of ideal truth, BUT that does not mean we know nothing and can infer nothing. Not even remotely. What it means is we have context to examine the limitations of that data and consider where the bias is likely to lie, i.e. is the police report data more likely to OVERestimate the rate of false allegations or UNDERestimate it, compared to the actual number of sexual assaults/rapes?

It’s not even a contest. Firstly, as the 2010 report notes:

“It is notable that in general the greater the scrutiny applied to police classifications, the lower the rate of false reporting detected. Cumulatively, these findings contradict the still widely promulgated stereotype that false rape allegations are a common occurrence.”

But the deeper issue is the basic bias in the data that depends on reports to the police.

“It is estimated that between 64% and 96% of victims do not report the crimes committed against them (Fisher et al., 2000; Perkins & Klaus, 1996), and a major reason for this is victims’ belief that his or her report will be met with suspicion or outright disbelief (Jordan, 2004).”

Most victims of sexual assault do not report the crime at all, i.e. most victims aren’t even in the data sets we are looking at. Assume for a moment that the lower bound of that figure (64%) is itself exaggerated (although why that would be the case I don’t know) and assume, to give David French an advantage, that only 50% of actual sexual assaults go unreported and that half of the 44.9% figure were somehow actual FALSE allegations (again, very unlikely). That would still make the proportion of false allegations, compared with (actual assaults + false allegations), only about 14% based on the 2010 study’s campus figures. It STILL, even with those overt biases included, points to false allegations being very unlikely.
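For anyone who wants to check the back-of-envelope arithmetic, here is one way to run it. The bookkeeping is deliberately crude and stacked in French’s favour; counted this way it lands a few points above the rough ~14% figure above, but either way false allegations come out as a small minority.

```python
# Back-of-envelope version of the paragraph above, per 100 campus reports.
# The generous-to-French assumptions are flagged; the point is the order of
# magnitude, not the exact percentage.

reports = 100.0
classified_false = 5.9        # the study's actual false-report rate
did_not_proceed = 44.9

# Assume HALF of the 'did not proceed' cases were false (very unlikely).
assumed_false = classified_false + 0.5 * did_not_proceed
true_reports = reports - assumed_false

# Assume only 50% of actual assaults are ever reported, which is more generous
# to French than the 64-96% non-reporting estimates cited above.
actual_assaults = true_reports / 0.5

share_false = assumed_false / (actual_assaults + assumed_false)
print(f"{share_false:.0%}")   # ~17% under this particular bookkeeping
```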

It makes sense to believe. The assumption that rape in particular is likely to draw malicious allegations is a misogynistic assumption. That does not mean nobody has ever made a false claim of rape; it just means that we do not place the same burden of doubt on people when they claim to have been robbed or mugged etc. People make mistakes and some people do sometimes maliciously accuse others of crimes, but such behaviour is unusual and, if anything, it is particularly unusual with sexual crimes where, in fact, the OPPOSITE is more likely to occur: the victim makes no allegation out of fear of the consequences and because of the trauma involved.

Somehow it is 2018 and we still have to say this.

*[I don’t want to ignore that men are also victims of sexual violence, perhaps at far greater rates than are currently quantified, but the specific issue here relates to a very gendered view of sex and sexual assault.]