A Big Hugo Finalist List

I am primarily a machine that turns carbohydrates into spreadsheets. I’m OK with relational databases but they are not my natural territory. I prefer my data in great big lists of everything and if I want lots of little tables then I’ll pivot it.

I’d started rationalising some of the throw-away spreadsheets I made for blog posts, beginning with the sheet I’d made for my Hugo Window posts [https://camestrosfelapton.wordpress.com/2020/06/21/one-way-the-mid-1980s-did-change-in-the-hugo-awards/ ]. As Google Sheets now has pivot tables and other features that make it a reasonable alternative to Excel, I’ve put all that data there. You can see for yourself here: https://docs.google.com/spreadsheets/d/1lL9bm3I7yrkKxSAZwN1NhWr6OB8-s10IkV1g_MSSGXY/edit?usp=sharing

Now, I was also working on a new sheet for the IGNYTE awards (which I need to get back to) but I got diverted by a query over Twitter, which was sufficiently interesting that it led to some extra oomph for the Google Sheet listed above.

Yasser Bahjatt is a fan from Saudi Arabia who was part of the JeddiCon bid and is a Guest of Honour at FIYAHCON. He’s also trying to collate Hugo data and looking at diversity across awards. I said I’d share what I had and see what I could add to my great-big-spreadsheet.

I started asking a few other people who had collected Hugo data what they had (thanks in particular to ErsatzCulture for tips on getting stuff out of ISFDB efficiently) but then life and work got in the way. There are other people I meant to hassle but time etc… (so don’t feel left out that I didn’t bug you!)

I’m still thinking about categories that could help inform analysis and track diversity and inclusion in awards. One issue is that some of the most relevant fields are not easily collated – in particular ethnicity. A second issue is that once you step away from data that is easily found on Wikipedia or an author’s public bio, you start shifting towards collecting personal data about living individuals, which is an ethical and legal minefield.

Here’s a list of categories and some thoughts on them:

  • Year: Already have this in my sheet (note my great big sheet only has the main story categories)
  • Award: I have Novel, Novella, Novelette and Short Story but not other Hugos
  • Name of Nominee: I had names but I augmented this with ISFDB data to help match pseudonyms
  • Number of nominations: available and I’ll add this progressively. I do have the number of times the author was a finalist as well.
  • Number of Votes: available and I’ll add this progressively.
  • Gender of Nominee: I’ve done pronouns instead because it is quicker to collect and the data is more reliable. I’ll probably need an extra column for the pronouns of the named author v the pronouns the author uses, aka James Tiptree v Alice Sheldon.
  • Year of Birth: Imported from ISFDB
  • Age of Nominee when nominated: Age at finalist based on the ISFDB data. Approximate, as I’m not taking time of year into account (i.e. which side of the award announcement their birthday fell on). There’s a quick sketch of this calculation just after this list.
  • Ethnicity of Nominee: Tricky, tricky. I need to think about this more because you really need a personal identification of ethnicity here and it’s relative to the country the person lives in and the general USA-context of the Hugo Awards.
  • Religion of Nominee: Less ambiguous than ethnicity and sometimes public but one of those categories that is on the border between public data and personal/private data.
  • Country of Nominee: ISFDB has country of birth (which can be complex for pre-WW2/WW1 births) but nationality may be more relevant.
  • Original language of the Nominated work: ISFDB has this.
  • Nominated language of the Nominated work: I think all the Hugo finalist stories have been nominated in English (originals or English translations), so possibly redundant.
  • Profession(s) of Nominee: This is interesting and I’ve looked at this kind of info before when looking at the number of academics nominated for Hugos. It’s largely on the public side of the public/private divide, I think. However, it’s very mutable, and what would really be most interesting is “profession at first time finalist”, so that the column isn’t just “writer”.
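
Since the age column is derived rather than collected, here’s a minimal Python sketch of that calculation, assuming a CSV export of the sheet with hypothetical column headers finalist, award_year and birth_year (the real sheet’s headers may well differ):

```python
import csv

def approximate_age(award_year, birth_year):
    """Approximate age when the author was a finalist.

    Deliberately ignores which side of the award announcement the
    birthday fell on, so the result can be out by one year.
    """
    if not birth_year:
        return None  # ISFDB has no year of birth for this author
    return int(award_year) - int(birth_year)

# Hypothetical file and column names, for illustration only.
with open("hugo_finalists.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        age = approximate_age(row["award_year"], row.get("birth_year", ""))
        print(row["finalist"], row["award_year"], age)
```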

Anyway, a work-in-progress.

Review: Superior by Angela Saini

Science journalist Angela Saini’s third book Superior: the Return of Race Science is a very timely survey of the history and contemporary impact of the attempts to use science to prop up racism and beliefs about race.

From Carl Linnaeus to the sinister Pioneer Fund, Saini maps the shifts both in actual understanding and the layers of post-hoc rationalisations for prejudices. She does this with minimal (but appropriate) editorialising and instead lets the views of a very wide range of interviewees inform the reader about how views have shifted or, in some cases, stubbornly refused to shift.

Much of it covered topics and personalities I was already familiar with, and if you have read books like Stephen J Gould’s The Mismeasure of Man then you’ll be familiar with a lot of the background. However, Saini takes a broader survey and branches out into topics like the misguided but often well-intentioned use of race in prescription medicines. I found that the sections covering areas I was already very familiar with were both interesting and provided good insights, although I obviously got more value out of the sections on topics I was less aware of.

Saini also charts recent events such as the rise of the alt-right, the renewed ideological racism in populist governments (in particular Trump’s America but also Modi’s Hindu nationalism) and demonstrates how the 18th century obsession with race is connected to modern concerns and pseudoscience.

The people-centred approach of the book gives it a very human quality. Saini has a knack for humanising many of the protagonists without excusing or apologising for their mistakes or (in many cases) their bigotry. Rather, by focusing on the individuals, her approach highlights their motives and, in the case of many of the scientists involved, how they managed to fool themselves into thinking they had transcended their own prejudices and found objective truths, when instead they had discovered convoluted ways of having their own biased assumptions echoed back to them.

I listened to the audiobook version, which is narrated by Saini herself. I highly recommend this book, both for the insights she gives on the topic and as an example of excellent modern science writing.

Back to Flint

A follow-up to yesterday’s post. One rabbit-hole I had to stop myself running down was Eric Flint’s 2015 post THE DIVERGENCE BETWEEN POPULARITY AND AWARDS IN FANTASY AND SCIENCE FICTION. Eric Flint, often cast as the token left-winger of Baen’s stable, trod a difficult line during the Debarkle, with many of his colleagues or professional collaborators (e.g. Dave Freer) very much advocating the Sad Puppy line. Flint’s overall position could be described as conceding that there was some sort of issue with the Hugo Awards but disagreeing with the tactics and rhetoric of the Sad Puppies and about the underlying causes of the problem.

Flint’s diagnosis of the issue is explained in the post I linked to and can be summarised by this proposition:

“the Hugos (and other major F&SF awards) have drifted away over the past thirty years from the tastes and opinions of the mass audience”

This was not a post-hoc reaction to the Debarkle but a view he had held for several years:

Here’s the history: Back in 2007, I wound up (I can’t remember how it got started) engaging in a long email exchange with Greg Benford over the subject of SF awards. Both of us had gotten a little exasperated over the situation, which is closely tied to the issue of how often different authors get reviewed in major F&SF magazines.

[some punctuation characters have been cleaned up -CF]

Flint goes on to describe the issues he had trying to substantiate the feeling. He acknowledges that the basic problem with any simple analysis to corroborate his impression is that sales data is not readily available or tractable. He goes on to attempt to address that deficit of data in other ways. However, regardless of his method (how much space book stores dedicate to given writers), his approach only addresses one part of what is actually a two-part claim:

  • There is a current disparity between popularity of authors and recognition of authors in the Hugo Award.
  • Thirty years ago this was not the case (or was substantially less).

Now, I have even less access to sales data than Flint, and publishing has changed even further since 2015. Nor do I have any way of travelling back to 1985 (or 1977) to compare book stores then with the Hugo Awards. Flint’s claim is far too subject to impressions and confirmation bias to really get a handle on. I could counter Flint’s more anecdotal evidence of current (at the time) big genre sellers unrecognised by the Hugo Awards with examples from 1985. An obvious one would be Jean M. Auel, whose Clan of the Cave Bear series was selling bucketloads in the early 80s and beyond (The Mammoth Hunters would have been cluttering up book stores in 1985). A more high-brow megaseller from 1985 would be Carl Sagan and Ann Druyan’s Contact, which, again, did not make it into the Hugo list of finalists. Yet these counter-examples lack bite because the Hugos missing a couple of books doesn’t demonstrate that Flint’s impression is wrong, even if it helps demonstrate that his evidence for the current disparity (as of 2015 or 2007*) is weak.

However, Flint does go on to make a different kind of argument by using the example of Orson Scott Card:

“With the last figure in the group, of course, Orson Scott Card, we find ourselves in the presence of a major award-winner. Card has been nominated for sixteen Hugo awards and won four times, and he was nominated for a Nebula on nine occasions and won twice. And he was nominated for a World Fantasy Award three times and won it once.
But…
He hasn’t been nominated for a WFC in twenty years, he hasn’t been nominated for a Nebula in eighteen years, and hasn’t been nominated for a Hugo in sixteen years. And he hasn’t won any major award (for a piece of fiction) in twenty years.
This is not because his career ended twenty years ago. To the contrary, Card continues to be one of our field’s active and popular authors. What’s really happened is that the ground shifted out from under him – not as far as the public is concerned, but as far as the in-crowds are concerned. So, what you’re really seeing with Orson Scott Card’s very impressive looking track record is mostly part of the archaeology of our field, not its current situation. As we’ll see in a moment, the situation is even more extreme with Anne McCaffrey and almost as bad with George R.R. Martin.

[some punctuation characters have been cleaned up -CF]

Well, this is more tractable. We can track authors over time through the Hugo Awards and we can look at what we might call ‘windows’ in which they receive awards. So that’s what I did. I grabbed a list of Hugo finalists for the story categories (novel, novella, novelette, short story), put them in a big spreadsheet, cleaned up all sorts of things as per usual and went to have a look.

I’ll save a lot of the data for another post. There are two big issues with looking at the data over time. The first is that there are built-in patterns showing change over time that arise just out of how the data is collected. Back in 1953 a Hugo finalist could only possibly have been nominated that once. Likewise, a first-time Hugo finalist in 2020 has a hard limit on the span of years between their first and last Hugo nomination.

A different issue is exemplified by this grouping of data, where span of years is the difference between the first year an author was a Hugo finalist and the last year.

| Span of Years | Total |
| --- | --- |
| 0 | 201 |
| 1 to 5 | 76 |
| 6 to 10 | 35 |
| 11 to 15 | 27 |
| 16 to 20 | 21 |
| 21 to 25 | 17 |
| 26 to 30 | 9 |
| 31 to 35 | 7 |
| 36 to 40 | 2 |
fee-fi-fo-fum I smell the blood of a power-law distributi-um

More than half of the data set are one-hit wonders because everybody’s first go as a finalist is a one-hit wonder until they get their next one. That’s quite a healthy sign IMHO, but I digress. 70% of the authors are in the 0 to 5 year span, but there are a small number of authors who have large time spans of nominations, the top two being George R.R. Martin and Isaac Asimov (38 years and 36 years). This kind of data is not summarised well by arithmetic means.
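
For the curious, here is roughly the calculation behind the span table and the one-hit-wonder claim: a minimal Python sketch, reusing the same hypothetical CSV of finalist/year rows as the earlier sketch (real file and column names may differ).

```python
import csv
from collections import defaultdict
from statistics import mean, median

# One row per (finalist, award_year) listing; hypothetical file and columns.
years_by_author = defaultdict(list)
with open("hugo_finalists.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        years_by_author[row["finalist"]].append(int(row["award_year"]))

# Span = last year as a finalist minus first year as a finalist.
spans = [max(years) - min(years) for years in years_by_author.values()]

print("authors:", len(spans))
print("mean span:", round(mean(spans), 1))  # dragged upwards by the long tail
print("median span:", median(spans))        # 0: over half are (so far) one-hit wonders
```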

I’ll save some of the geekier aspects for another time. Is there a shift in some of these spans recently? Maybe but both the structural issues with the data and (ironically) the Debarkle itself make it hard to spot.

What we can do though is look at specific cases and Orson Scott Card is a great example. He’s great because he undeniably fell out of favour with people by being an enormous arse and we can corroborate that externally from this data set. However! EVEN GIVEN THAT the table of groupings I posted shows us something that severely undermines Flint’s point.

Card’s Hugo span (last year as finalist minus first year as a finalist) is 14 years. That puts him in the top 14% of writers by Hugo span. Card has been very far from short-changed compared to other authors. These are his 14-year-span companions:

| Finalist | Min of Year | Max of Year |
| --- | --- | --- |
| C. M. Kornbluth | 1959 | 1973 |
| Dan Simmons | 1990 | 2004 |
| James Blish | 1956 | 1970 |
| Joan D. Vinge | 1978 | 1992 |
| Orson Scott Card | 1978 | 1992 |
| Robert J. Sawyer | 1996 | 2010 |

Note that the group is from multiple decades. The broader 11 to 15 group includes writers like Frank Herbert, China Miéville, C. M. Kornbluth, Philip K. Dick, and John Scalzi. Now Miéville and Scalzi might still extend their spans (as might Card, but probably not).

Flint goes on to suggest that awards get more literary over time, and maybe they do, but looking at the data I think Flint is sort of seeing a phenomenon but misreading what it is.

I would suggest instead that awards favour a sweet spot of novelty. A work that is too out-there won’t garner enough support quickly enough to win awards. A work that is too much like stuff people have seen before isn’t going to win awards either — almost by definition, if we are saying ‘this book is notable’ it has to stand out from other books. For the Sad Puppies, or even the LMBPN Nebula slate, this was apparent in works that struggled to differentiate themselves from other stories in an anthology or another book in a series. Jim Butcher’s Skin Game (to pick a Debarkle example) was just another book in his long-running series and not even a particularly good episode.

The same applies to some degree for authors. I am not saying John Scalzi will never win another Hugo Award but I don’t expect him to even though I think he’ll be writing good, entertaining sci-fi for many years. This is not because he’s not sufficiently left-wing for current Hugo voters but because we’ve read lots of John Scalzi now and sort of know what to expect.

A future equivalent of Eric Flint in 2036 may look back to 2006 and say “Back in the day the Hugos used to reward popular authors like John Scalzi. Look at the virtual-cyber shelf on Googlazon and you’ll see rows of Scalzi books up to his latest ‘Collapsing Old Red Shirt 23: Yogurt’s Revenge’ – why don’t the Hugos give him rockets any more!”**

The Hugos move on, it is true, but they have repeatedly picked out not exactly brand-new talent but authors at a sweet spot in their careers. Yes, some have much longer Hugo spans, but those authors are unusual: many are the sci-fi giants of yore and others are people with long gaps between nominations.

Card actually had a good run, but even without his more giant-arsehole-like antics it is very unlikely that he would have got a Hugo nomination any time soon. Note, for example, that Card has not yet been a Dragon Award finalist despite having eligible novels and despite the Dragons (championed by Flint) supposedly addressing the popularity issue.

*[Or 2020, as I don’t think Flint has said everything is fine now.]

**[I suspect future John Scalzi will be more inventive than just rehashing his former hits but also I think he’d actually be quite brilliant at writing a parody pastiche of his own work.]

Loved Books: The Mismeasure of Man by Stephen J Gould

Stephen Jay Gould is a voice that is missed in today’s world. Smart, compassionate and analytical, but also with a deft capacity to write about complex ideas in an engaging way. In The Mismeasure of Man, Gould stepped out of his main field of paleontology and looked at the history of attempts to measure intelligence and the racist assumptions that have run through those attempts. This is the 1981 edition, which doesn’t have the chapters on The Bell Curve, but it is still a worthy read.

Is it perfect? No, but then a popular account of a broad area of research necessarily simplifies and skips over some details. As a gateway into understanding the issues there is no better book that I’m aware of.

A not-actually-a-paper has the Right excited about global warming denial again

One of my favourite topics is the methodical destruction of our planet’s climatic status quo by our fun habit of burning the deep past for larks, aka Global Warming. As a reminder, global warming currently looks like this*:

[Chart: UAH satellite temps – not because they are the best record but just because they avoid two thoughtless arguments]

The 1990s argument of ‘we need more research’ is dead; the 2000s ‘pause’ argument is dead. It’s getting hotter and anthropogenic greenhouse gas emissions are definitely the cause.

One lingering hypothesis is Henrik Svensmark’s cosmic-rays versus cloud cover theory (https://en.wikipedia.org/wiki/Henrik_Svensmark#Galactic_Cosmic_Rays_vs_Cloud_Cover ). It doesn’t work and the evidence is against it, but the mills of denial keep coming back to it because cloud cover is hard to model. So there’s always some mileage in obfuscating the question by waving your hands at clouds.

Enter a new ‘paper’ with the clickbait title “No experimental evidence for the significant anthropogenic climate change”. The paper isn’t about experiments or experimental data and doesn’t back up that title. Instead it is an unreviewed discussion of some modelling, available on the open-access arXiv.org: https://arxiv.org/abs/1907.00165

The paper points to a relationship between temperature and cloud cover (fewer clouds ~ warmer temperatures), asserts that it is changes in cloud cover that are driving changes in temperature (rather than vice versa, or a complex mix of both), and claims that if clouds change temperature following their model then they can account for all the increase in warmth.

Except that then leaves a massive hole: why aren’t the anthropogenic gases leading to warming as well, never mind why cloud cover should be changing in this way?

It would be uninteresting, except that the usual suspects have got very excited about it because it looks sciencey. Russia Today published this article: https://www.rt.com/news/464051-finnish-study-no-evidence-warming/ and from there the story was picked up by brainiacs such as Paul Joseph Watson, Stefan Molyneux and, of course, our old pal Vox Day.

Richard Dawkins saying poorly thought through reactionary things again

Oh dear:

[embedded tweet from Richard Dawkins]

And…

[a second embedded tweet]

Alternatively we could not do anything like that because it is an appalling idea.

There are at least three levels of confused thinking here. The first is that, in the past, attempts to ensure people were sufficiently intellectually ‘qualified’ to vote have been attempts to disenfranchise specific ethnic groups. When coupled with restricted access to education, and with tests wittingly and unwittingly full of the biases of the more powerful ethnic group, such tests would simply be a way of creating a kind of apartheid electoral system.

OK, but what if somehow only people who could really understand the issues of the day could vote? Wouldn’t that be better? Isn’t it because of stupid people that we have Trump and Brexit? No, or at least not ‘stupid’ as the term is usually used. Voting for Trump or falling for Nigel Farage’s propaganda are certainly daft things to do, but a terrible secret of the world is that these are the kinds of ‘stupid’ that otherwise intelligent people do. There are connections between levels of education and political preference but they are neither simple nor straightforward. There is evidence of an ‘educational gradient’ in how people voted in the UK on Brexit, but that gradient does not account for other regional variations (e.g. Scotland). It’s also important to remember that any educational gradient represents people with quite different economic interests as well. Nor was that gradient as smooth as it might sound:

“So, based on the above, the Leave vote was not more popular among the low skilled, but rather among individuals with intermediate levels of education (A-Levels and GSCE high grades), especially when their socio-economic position was perceived to be declining and/or to be stagnant. “

https://blogs.lse.ac.uk/politicsandpolicy/brexit-and-the-squeezed-middle/

Blaming the UK’s current Brexit confusion on stupidity may be cathartic but it provides zero insight into a way forward. Further, it ignores that the architects of the political chaos are products of reputedly the best education you can get in Britain. Boris Johnson is manifestly a buffoon, but he is a buffoon with a good degree in classics from Oxford. The Boris Johnsons of this world would waltz past Dawkins’s test.

US politics also has a complex relationship with educational attainment. Conservative views peak at mid-ranges of education (e.g. https://www.pewresearch.org/fact-tank/2016/09/15/educational-divide-in-vote-preferences-on-track-to-be-wider-than-in-recent-elections/ ). People with college degrees and more advanced higher education are currently more likely to vote Democrat, but in the past (e.g. the 1990s) this was less so. The growing (indeed, reversed) education divide doesn’t account for differences among ethnic groups or between genders. Other divides (e.g. urban versus rural) may work causally in the other direction (i.e. different economic demands make decisions about higher education a different choice in rural v urban contexts, but the underlying politics rest on other urban v rural differences).

Even if we imagine a Dawkins-dystopia in which you had to have a university degree to vote (a much more substantial hurdle than the demands of either the UK or US citizenship tests), the proposal falls into the political fallacy of technocracy as an alternative to democracy. By ‘fallacy’ I don’t mean that competence or technical understanding or evidence-based policy are bad ideas or things we don’t want to see in government, but rather that it is a reasoning error to judge democracy in principle as a process by which technically competent policy is formed.

Democracy serves to provide consent from the governed to the government. That’s its purpose. It provides a moral and practical basis on which there can be any kind of government that is even vaguely just. Logically, a vote doesn’t determine whether something is true or not (except in trivial cases on questions about ‘what will people vote for’). Consequently, it is always easy to attack democracy by setting it up AS IF that’s what voting is supposed to achieve. A referendum can’t determine what the smartest course of action is but then that’s not what a referendum or an election is supposed to do. Instead asking people to vote is a way of trying to establish broad social agreement on what a country will do.

Without that kind of broad social agreement a country has only two options: disunity or authoritarianism. Restricting the franchise along any axis will lead to overt authoritarianism. Paternalistic ‘benevolent’ authoritarianism is still a system that depends on brutality.

The shorter version: democracy is about consent of the governed, not about how smart voters are. The political divides we currently have wouldn’t be solved by a test that a high school graduate could pass. A nation in which only college graduates could vote would be a shitty one and politically unstable. Well-educated people can and do advance bad ‘stupid’ political ideas. Come to think of it, there’s a great example here: Richard Dawkins is very well educated and here he is putting forward a stupid idea.

It’s Voynich time again

Which means another round of breathless headlines, as I discussed in 2017: [https://camestrosfelapton.wordpress.com/2017/08/29/speaking-of-fantastical-drawings/ ]. That time the theory was that the author was an Italian Jewish doctor. This time the theory is that the author was a nun who lived here: https://en.wikipedia.org/wiki/Aragonese_Castle

The peer-reviewed paper [https://www.tandfonline.com/doi/full/10.1080/02639904.2019.1599566 ] presents a plausible case that the writing is in a “proto-romance” language with its own writing system related to Latin letters.

Unfortunately the response to “the Voynich manuscript has been cracked!” is “oh no it hasn’t!”. Ars Technica has a more sceptical article than most on the news [https://arstechnica.com/science/2019/05/no-someone-hasnt-cracked-the-code-of-the-mysterious-voynich-manuscript/ ]. Specifically, it seems “proto-romance language” is more of a means of matching possible labels to words, i.e. finding a word that matches from across any one of Europe’s Romance languages, from Spanish to Romanian, without requiring a consistent mapping. It also allows for borrowings from nearby languages (e.g. Slavic or Persian) — it’s plausible for a language to have such borrowings but it gives a lot of freedom for matching script to words and then “translating” the words.

There’s a longer sceptical analysis here https://voynichportal.com/2019/05/07/cheshire-recast/ [ETA: and a follow up that is even more brutal https://voynichportal.com/2019/05/16/cheshire-reprised/ ]

This is way beyond my capacity to judge, but the test would be in translating longer unillustrated passages, which apparently has yet to occur.

Poverty and IQ

Among the section of the right that regards IQ as the only explanatory variable in society aside from money, the relationship between poverty and IQ is used to defend the huge inequities in our society as an outcome of a functioning meritocracy. It does not require much deep inspection of how modern capitalist societies work to see that they are neither functioning well nor are they meritocracies.

The opposite view is that differences in performance on IQ tests are more caused by poverty than vice versa. There are multiple reasons for believing this, from access to education to motivation and attitudes towards the role of test-taking in a person’s life (e.g. how much effort do you put into something that you expect to do poorly in?). However, specific causes are hard to demonstrate empirically. Hard to demonstrate, perhaps, but maybe a clever experimental design can shed more light on that.

I was just reading a 2013 paper that looked at the impact of poverty on cognition in an interesting way: Poverty Impedes Cognitive Function by Anandi Mani, Sendhil Mullainathan, Eldar Shafir and Jiaying Zhao (abstract: http://science.sciencemag.org/content/341/6149/976 )

“The poor often behave in less capable ways, which can further perpetuate poverty. We hypothesize that poverty directly impedes cognitive function and present two studies that test this hypothesis. First, we experimentally induced thoughts about finances and found that this reduces cognitive performance among poor but not in well-off participants. Second, we examined the cognitive function of farmers over the planting cycle. We found that the same farmer shows diminished cognitive performance before harvest, when poor, as compared with after harvest, when rich. This cannot be explained by differences in time available, nutrition, or work effort. Nor can it be explained with stress: Although farmers do show more stress before harvest, that does not account for diminished cognitive performance. Instead, it appears that poverty itself reduces cognitive capacity. We suggest that this is because poverty-related concerns consume mental resources, leaving less for other tasks. These data provide a previously unexamined perspective and help explain a spectrum of behaviors among the poor. We discuss some implications for poverty policy.”

Poverty Impedes Cognitive Function Anandi Mani et al. Science 341, 976 (2013); DOI: 10.1126/science.1238041

First, some caveats. It’s two fairly narrow experiments, both of which have some contrived circumstances (for good reasons). I don’t know if these results have been reproduced.

Having said that, it is interesting to look at the two experiments and what the results were.

The basic hypothesis was this:

“We propose a different kind of explanation, which focuses on the mental processes required by poverty. The poor must manage sporadic income, juggle expenses, and make difficult trade-offs. Even when not actually making a financial decision, these preoccupations can be present and distracting. The human cognitive system has limited capacity (12–15). Preoccupations with pressing budgetary concerns leave fewer cognitive resources available to guide choice and action.”

Poverty Impedes Cognitive Function Anandi Mani et al. Science 341, 976 (2013); DOI: 10.1126/science.1238041

To test this they compared individual performance on cognitive tests both with and without some degree of financial stress. The first was a ‘laboratory study’ that demonstrates the impact as a kind of proof of concept. The financial stress here is artificial but, if anything, that makes the results more interesting.

In the first study the researchers went to a New Jersey shopping mall and recruited shoppers (who got paid) to take part in four related experiments. The basic principle of each experiment was two tasks. One task asked people to consider a realistic but hypothetical financial problem. For example, they might be asked about their car having to get some urgent repairs. Participants were randomly given either a ‘hard’ situation, where the costs would be high, or an ‘easy’ situation, where the costs were low, but both situations were cognitively similar. The second task was a more classic IQ-style test (Raven’s Progressive Matrices) together with a spatial compatibility task.

The four versions were designed to control for cognitive impacts of the first activity. The first two versions changed the amount of maths needed in the financial scenario. The third version added incentives for correct answers. The fourth version separated the two activities so that the first was completely finished before the person sat the IQ-style test.

The group being studied also provided information on their income, and the data was analysed by classifying the participants as either rich or poor. The point was to see not whether the ‘rich’ participants performed better on the IQ test, but rather how much impact the first activity (i.e. having to engage with a potentially financially stressful situation) had on the cognitive scores.

[Graph: accuracy on the Raven’s matrices and the cognitive control tasks in the hard and easy conditions, for the poor and the rich participants in experiment 1]

The graph is for experiment 1 but the results were similar for all four. The impact of the ‘hard’ versus ‘easy’ first activity on the second activity was much bigger for people with less money. For the wealthier participants the ‘hard’ scenario had less impact, almost certainly because they were faced with a situation that would have less of an impact on their own finances. In short, having to worry about money and how you will pay for things that you need has a genuine and measurable impact on your ability to perform some cognitive tasks… at least within this experimental scenario, but the fact that a PRETEND bit of financial stress had a measurable impact is itself notable.
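
To make the shape of that comparison concrete, here is a toy simulation in Python. The numbers are invented for illustration (they are not the paper’s estimates); the point is that the quantity of interest is the hard-minus-easy gap within each income group:

```python
import random
from statistics import mean

def simulate_accuracy(group, condition, n=2000):
    """Generate made-up task accuracies: only the poor/hard cell takes a hit."""
    base = 0.65  # invented baseline accuracy
    hit = 0.15 if (group == "poor" and condition == "hard") else 0.0
    return [random.random() < base - hit for _ in range(n)]

cells = {(g, c): mean(simulate_accuracy(g, c))
         for g in ("poor", "rich") for c in ("easy", "hard")}

# The interaction effect: how much does the hard scenario cost each group?
for g in ("poor", "rich"):
    gap = cells[(g, "easy")] - cells[(g, "hard")]
    print(f"{g}: accuracy drop from easy to hard scenario = {gap:.3f}")
```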

The second study was quite different and looked at some real financial stress.

“Our second study examined 464 sugarcane farmers living in 54 villages in the sugarcane-growing areas around the districts of Villupuram and Tiruvannamalai in Tamil Nadu, India. These were a random sample of small farmers (with land plots of between 1.5 and 3 acres) who earned at least 60% of their income from sugarcane and were interviewed twice—before and after harvest—over a 4-month period in 2010. There were occasional nonresponses, but all of our pre-post comparisons include only farmers we surveyed twice.”

Poverty Impedes Cognitive Function Anandi Mani et al. Science 341, 976 (2013); DOI: 10.1126/science.1238041

The work of the sugarcane farmers created a set of natural controls. An individual farmer has only one harvest a year and hence essentially only one pay-day a year. However, the timing of harvests is staggered over several months, so at a particular time of year it may be post-harvest for one farmer but pre-harvest for another. The farmers naturally face greater financial pressure the longer it has been since their last harvest.

The results showed a similar but slightly smaller impact than the laboratory study. Farmers performed better on an IQ-style test (Raven’s Progressive Matrices) after* they had been paid than before, and the difference was large.

“How large are these effects? Sleep researchers have examined the cognitive impact (on Raven’s) of losing a full night of sleep through experimental manipulations (38). In standard deviation terms, the laboratory study findings are of the same size, and the field findings are three quarters that size. Put simply, evoking financial concerns has a cognitive impact comparable with losing a full night of sleep. In addition, similar effect sizes have been observed in the performance on Raven’s matrices of chronic alcoholics versus normal adults (39) and of 60- versus 45-year-olds (40). By way of calibration, according to a common approximation used by intelligence researchers, with a mean of 100 and a standard deviation of 15 the effects we observed correspond to ~13 IQ points. These sizable magnitudes suggest the cognitive impact of poverty could have large real consequences.”

Poverty Impedes Cognitive Function Anandi Mani et al. Science 341, 976 (2013); DOI: 10.1126/science.1238041
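
As a side note on the arithmetic: the “~13 IQ points” figure is the standard conversion between effect sizes in standard-deviation units and the IQ scale (mean 100, SD 15). A trivial Python sketch; the back-calculated effect size is my own inference from the quoted numbers, not a figure taken from the paper:

```python
IQ_SD = 15  # conventional IQ scale standard deviation

def sd_to_iq_points(effect_size_sd):
    """Convert an effect size in standard-deviation units to IQ points."""
    return effect_size_sd * IQ_SD

def iq_points_to_sd(points):
    """Back out the effect size implied by a gap measured in IQ points."""
    return points / IQ_SD

print(iq_points_to_sd(13))    # ~0.87 SD, i.e. roughly a lost night of sleep
print(sd_to_iq_points(0.87))  # ~13 IQ points
```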

Put another way: we don’t think in isolation (even if you aren’t neurotypical). Background concerns and worries all have an impact on how you think and your capacity to problem solve. They definitely have an impact on your thinking in the artificial conditions of an IQ test.

*[There were also controls on the order they did the tests. Some of the participants took the test first after they had been paid and then were tested later in the year when their money had run low.]

The Right would rather men died than admit any flaws in masculinity

I shouldn’t read Quillette. For those unfamiliar with the Australian/international online magazine, it is part of that genre of modern political thought that could be called anti-left contrarianism, covering various strands from Steven Pinker to Jordan Peterson. Its stock style of article is shallowness dressed up as depth, utilising the same style of misrepresentation of issues as the tabloid press but with longer sentences and a broader vocabulary.

Over the past few days it has published a couple of pieces on the American Psychological Association’s Guidelines for Psychological Practice with Boys and Men. Now, you would think that the stalwart defenders of innate gender differences would be happy that an influential body like the APA would overtly recognise that men and boys have distinct psychological needs that require special advice for practitioners. After all, is this not the ‘moderate’ criticism of the rise of feminism? That somehow men’s needs and men’s issues have been sidelined? Ha, ha, who am I kidding 🙂 The APA guidelines were characterised by MRAs, conservatives and the so-called “Intellectual dark web” as a direct attack on masculinity.

Here is one particularly stupid piece at Quillette that reflects the harrumphing style of response: https://quillette.com/2019/01/23/thank-you-apa/ The writer (a professor of psychology at North Dakota State University) either hasn’t read the guidelines or is actively misrepresenting them.

However, a second piece is what actually caught my attention. It’s better written, but it too attacks a strawman version of the guidelines: https://quillette.com/2019/01/23/how-my-toxic-stoicism-helped-me-cope-with-brain-cancer/

The writer describes how his stoical attitude helped him through a diagnosis & treatment for brain cancer and uses that to lambast the APA’s (apparent) criticism of stoicism in its guidelines. I, perhaps foolishly, left a comment on the piece. What follows is an edited version of my comment.

The piece is basically a strawman argument. It misrepresents what the APA guidelines say, implying that they express blanket disapproval of people acting stoically. Take, for example, the APA’s own article on the guidelines:

“It’s also important to encourage pro-social aspects of masculinity, says McDermott. In certain circumstances, traits like stoicism and self-sacrifice can be absolutely crucial, he says”

https://www.apa.org/monitor/2019/01/ce-corner.aspx

In the guidelines themselves, the word “stoicism” appears only twice, and neither instance is a blanket condemnation of it. One is in relation to difficulties SOME men have forming emotional bonds with other men:

“Psychologists can discuss with boys and men the messages they have received about withholding affection from other males to help them understand how components of traditional masculinity such as emotional stoicism, homophobia, not showing vulnerability, self-reliance, and competitiveness might deter them from forming close relationships with male peers”

American Psychological Association, Boys and Men Guidelines Group. (2018).
APA guidelines for psychological practice with boys and men

And the other connects with a broader health issue of men not seeking care that they may need:

“Psychologists also strive to reduce mental health stigma for men by acknowledging and challenging socialized messages related to men’s mental health stigma (e.g., male stoicism, self-reliance). “

American Psychological Association, Boys and Men Guidelines Group. (2018).
APA guidelines for psychological practice with boys and men

Neither example relates to being stoical in the face of a medical diagnosis, but rather to social pressures that mean some men (no, not ALL men) don’t seek care that they need (including for physical ailments) because of a misguided belief that they have to battle through by themselves.

The writer’s example is NOT an example of the case the APA guidelines were addressing. The writer sought out medical care, received a diagnosis and stuck with treatment. The writer’s self-described actions are the OPPOSITE of what the guidelines are discussing — they show a man taking their health seriously and SEEKING HELP. That’s good and healthy, but many men aren’t doing that and as a consequence are dying of treatable diseases.

As guideline 8 points out:

“For most leading causes of death in the United States and in every age group, males have higher death rates than females”

American Psychological Association, Boys and Men Guidelines Group. (2018).
APA guidelines for psychological practice with boys and men

At least some of this is due to men not seeking out healthcare they need:

“Between 2011 and 2013, men’s mortality rates for colorectal cancer, a generally preventable disease with regular screenings, were significantly higher than women’s, suggesting that many men do not engage in preventative care (American Cancer Society, 2015).”

American Psychological Association, Boys and Men Guidelines Group. (2018).
APA guidelines for psychological practice with boys and men

A stoical attitude need not be toxic, but when misapplied, misunderstood, or adopted out of a feeling of social obligation, it can take on a harmful form: thinking that you shouldn’t seek out help. I’m glad the writer’s stoicism was of the positive kind, but the writer should perhaps also take greater care in researching what the APA guidelines actually said.


Not to put too fine a point on it: toxic aspects of masculinity kill men. There is nothing pro-man about them. Nobody is actually sticking up for men by pushing back against the APA guidelines.

50% chance of doing X

This is a bit abstract and it follows on from this previous post about voting demographics.

Let’s say you’ve got a statistical model that predicts that a person Z with Y characteristics has a 50% chance of doing X. The actual percentage doesn’t matter, but 50% is a nice amount of measurable uncertainty — maximally knowing that we don’t know what person Z will do about X given the context of Y.

Empirically, the data would come from looking at lots of Y people and seeing that they do X 50% of the time. However, note that there’s a big and important distinction here between two extremes (a small simulation after the list below makes the difference concrete).

  1. Half of Y people do X and half of Y people don’t but those two halves are distinct. This implies that Y isn’t really the relevant factor here and we should be looking for some other feature of these people that better explains X behaviour.
  2. Y people do X half of the time, randomly. That is, Y people are essentially a coin toss with regards to X. In that case Y isn’t great for predicting whether an individual will do X, but it is really relevant to the question (particularly if W people behave more decisively).
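
Here’s the promised sketch. Both hypothetical populations below do X 50% of the time in aggregate, but observing the same individuals twice tells the two extremes apart (all the names and numbers are mine, purely for illustration):

```python
import random

N = 100_000

# Extreme 1: two distinct halves; each person either always does X or never does.
fixed = [i < N // 2 for i in range(N)]
first_1, second_1 = list(fixed), list(fixed)  # same person, same behaviour each time

# Extreme 2: every person is a fair coin toss with respect to X.
first_2 = [random.random() < 0.5 for _ in range(N)]
second_2 = [random.random() < 0.5 for _ in range(N)]

def rate(xs):
    return sum(xs) / len(xs)

def consistency(a, b):
    """Fraction of people who did the same thing on both observations."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

print(rate(first_1), rate(first_2))    # both ~0.5 in aggregate
print(consistency(first_1, second_1))  # 1.0: the halves are stable
print(consistency(first_2, second_2))  # ~0.5: pure chance
```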

In the demographic voting model, taking a figure of say 80%:20% for atheists splitting between left and right, I suspect this is a grouping where individuals have even less variability in their actual voting patterns. Some of that 20% will be Ayn Rand-style atheists who are very committed to a right-wing viewpoint, rather than representing a 20% chance that a given atheist would vote Republican. However, that is not necessarily true of other groups, where the percentage may more closely represent a degree of individual variability.