A new sighting of Dave Freer’s argument has been spotted on Sarah A Hoyt’s blog.
But it goes beyond that. Yeah, this started by noticing that anyone who wasn’t parroting the mintruth’s line of the year had as much chance of winning awards (except for the Prometheus) as a snowball of setting up residence in hell. As Dave Freer noted, and File 770 figured, only 19 conservatives earned an award in the last 20 years (and that’s counting as conservative anyone who doesn’t think Stalin had some good ideas but was a bit eager.) This is far less than is statistically likely.
I don’t know in which post Freer claimed 19 conservatives in 20 years, as I don’t think that figure appeared in the Petunias argument. If anybody knows, I’d be grateful for a pointer (or, if it was in Petunias, which bit).
Anyway: 19 winners in 20 years. I’ll assume this refers to awards rather than nominations, but I’m unsure which categories she is including. The more categories, the more unlikely a small number will be. For example, if she were referring just to Best Novel (she presumably wasn’t), then 19 out of 20 would make the Hugos the Fox News of literary awards. If it is 13 categories then we’d expect about 30 winners, and 19 or fewer would be just under p=1%. 12 categories gives p=2.7%.
8 categories seems like the best guess (on the grounds that the leanings of artists, editors and other such contributors may not be very obvious). For 8, I think it comes to about a 54% chance of 19 or fewer (assuming 12% as the US proportion of steadfast conservatives). I’ll also note that Hoyt’s characterization of “anyone who doesn’t think Stalin had some good ideas but was a bit eager”, if literally applied, would give a different value than 19 – for example, by that definition China Miéville would be a conservative (he is/was a Trotskyist – they aren’t keen on Stalin because of the whole ice-pick thing).
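These tail probabilities can be checked with a simple binomial model. This is a sketch under the assumptions stated above – 12% as the base rate of steadfast conservatives, winners treated as independent draws, and 13, 12 or 8 categories per year over 20 years:

```python
from math import comb

def binom_cdf(k, n, p):
    """P(X <= k) for X ~ Binomial(n, p), computed exactly from the pmf."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

p = 0.12  # assumed US proportion of "steadfast conservatives" (Pew)
for cats in (13, 12, 8):
    n = cats * 20  # total awards over 20 years
    print(f"{cats} categories (n={n}): P(19 or fewer) = {binom_cdf(19, n, p):.3f}")
```

The three results line up with the figures quoted above: just under 1% for 13 categories, about 2.7% for 12, and roughly 54% for 8.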
Part 1, 2, 3, 4, 5, 6, 7 and 8
Dave Freer’s argument does not show what he thinks it shows. The flaws in the argument are:
- His description of a left wing category of authors is probably faulty as it relies on key issues that enjoy more popular support in the US public than some conservatives realize.
- Consequently his estimate of 15%, while accurate for genuinely “solid liberal” people, is too low when considering Hugo eligible authors. The likelihoods he needed to model may have an upper range beyond 50%.
- The model he uses in his analogy has some flaws, but it is not unreasonable and the flaws don’t severely undermine his argument.
- Using his model an expected proportion of 45% for what he calls “red” nominees would produce results that are not highly improbable and which match his analysis of past Hugo nominees for best novel.
- His choice of years to analyze may be distorted by avoiding 2004 and by including WorldCon years held in countries other than the US, but his analysis would still hold if his assumption of 15% for reds was correct.
- There is some plausible evidence of statistical bias against very conservative authors, but overall the evidence of bias is slim.
- Dave’s argument, even if it were sound, does not address multiple sources of bias – some of which may be beyond WorldCon (or Puppy) influence.
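The fourth point above can be illustrated with a quick sketch (my own illustration, not Dave’s calculation): how likely is a majority-“red” Best Novel shortlist of five under his 15% assumption versus the revised 45% estimate?

```python
from math import comb

def binom_pmf(k, n, p):
    """P(X = k) for X ~ Binomial(n, p)."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

# Chance that 3 or more of a 5-book shortlist are "red", under
# Freer's 15% assumption versus the 45% estimate argued for here.
majority_red = {}
for p in (0.15, 0.45):
    majority_red[p] = sum(binom_pmf(k, 5, p) for k in range(3, 6))
    print(f"p={p}: P(3+ of 5 red) = {majority_red[p]:.3f}")
```

At 15% a red-majority shortlist is a rare event (under 3%); at 45% it happens in roughly four years out of ten – which is why the 45% figure makes the observed results unremarkable.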
In truth there is no good reason why we should expect the Hugo awards to reflect the political spectrum of the USA. Neither authors, readers nor fans are a random sample of the US population. Ideology in the United States has geographic, socioeconomic, ethnic and cultural dimensions. While none of those are deterministic, there is no reason to assume that a group defined by common cultural interests would coincidentally be a decent random sample of the US population when it comes to ideology.
The shortest, simplest objection to Dave’s argument is this: any person knowledgeable about statistics would not use science-fiction/fantasy readers as a way of generating a representative cross-section of US politics. Yet the core premise of his argument relies on that being the case – otherwise in what sense is there a discrepancy?
Worse yet, the 2015 Year of the Puppy has revealed a very narrow set of nominees, with conservative works being represented by a small number of authors.
In part 7 I found some evidence of bias – specifically a plausible bias against Hugo eligible authors who might fit into the Pew typology (covered in previous posts) of “steadfast conservatives”. Dave Freer’s argument had looked at this from the other direction – considering whether there was a bias in favor of “red” authors.
Overall I don’t think the numbers do suggest an active political bias in the Hugo voting against people who are particularly left wing. However, there may be a bias against people on the right of the spectrum. [NOTE: I shan’t say ‘far right’ because that is a whole other argument. The Pew typology I’ve used is looking at mainstream beliefs on mainstream issues. There are fringe views beyond these that will behave quite differently.]
This takes me back to Part 1 and Part 2 of this discussion. If you recall I’d discussed the fact that discovering a statistical bias is not the same as discovering some active discrimination, prejudice, vote rigging or nefarious acts against a given group. The bias can come from many directions. Here are a few: Continue reading “On petunias and whales: part 8”
In part 6 I wandered off topic to nit-pick on issues that do not add much to the overall argument.
So far I think I have shown that we can’t, based on Dave Freer’s red/white/black classification of Hugo nominees, conclude that there is political bias towards the left in the Hugos. The central feature of that argument has been simply that the authors aren’t as left-wing as he might think they are – but the case isn’t closed yet, and I’ve made challengeable assumptions and broad estimates.
In the next two posts I’m going to do a couple of things. Firstly, the checks for bias aren’t quite finished; secondly, I need to return to an earlier point – the potential sources and characteristics of bias in the Hugos.
Note that all I’ve shown so far is that we can’t really reject the “unbiased” hypothesis. That is not the same as actually showing there isn’t bias – it just shows that, using the methods we employed (Dave Freer’s methodology), I couldn’t detect it.
What about the steadfast conservatives?
Dave’s account of past Hugo results was intended to show two things that pertain to bias, and I’ve only looked at one of them: “too many reds”. The other thing we have not considered is the flip side, “not enough blacks”, i.e. not enough outspoken conservatives. Continue reading “On petunias and whales: part 7”
Part 5 was the number crunching post. I promised pedantry and I delivered 🙂
This post is about some nit-picks, caveats and other points that are worth raising: partly because it is important to get the maths right as best we can, and partly to show that they don’t matter that much in terms of the broad sweep of Dave Freer’s argument. Put another way: all models are simplifications and imperfect representations. When critiquing an argument based on a model, figuring out what is a deep flaw and what is a minor departure from reality is important.
Dave uses for his analogy a person pulling colored balls from a bag. When he calculates the probability of several of the same color in a row he uses the same probability each time, e.g. “½ x ½ x ½ = 1/8”. This is not quite correct. Continue reading “On petunias and whales: part 6”
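A minimal sketch of the nit-pick: multiplying the same probability each time assumes sampling with replacement, but drawing from a finite bag is sampling without replacement, so the odds shift after every draw. Assuming a hypothetical bag of five red and five white balls:

```python
from fractions import Fraction

# With replacement (Freer's calculation): each draw is an independent 1/2.
with_replacement = Fraction(1, 2) ** 3  # = 1/8

# Without replacement from a bag of 5 red and 5 white:
# each red drawn leaves fewer reds for the next draw.
without_replacement = Fraction(5, 10) * Fraction(4, 9) * Fraction(3, 8)  # = 1/12

print(f"with replacement: {with_replacement}, without: {without_replacement}")
```

The smaller the bag, the bigger the gap between the two figures; with a large pool of authors the with-replacement figure is a reasonable approximation, which is why this is a nit-pick rather than a fatal flaw.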
In part 4 I started trying to get a better handle on Dave’s 15% estimate. I explained why the category he thinks of as “left wing” may be much larger than he imagines when considering authors as a population.
In this post I’ll try and look at how Dave Freer then models the actual results from various Hugos and what results we might have expected.
Dave starts with 2005 – unfortunately that breaks the model straight away. I assume he picked 2005 to avoid Dan Simmons’s nomination for Ilium in 2004. In fact 2005 is an excellent year to consider bias because it was a year in which the best novel nominations show an indisputable bias! Continue reading “On petunias and whales: part 5”
In a bit of a marathon, Part 3 looked at some of the political markers raised by Dave Freer’s post and compared those markers with survey data in the US. I also discussed why caution had to be applied when thinking about authors as reflecting the US population as a whole. The short version is: US SF/F authors are not a random sample of the US population.
Dave listed a range of issues, and his choices were apt in the sense that they do show marked differences between the 15% of Americans the Pew typology calls “Solid Liberal” and the 12% of Americans the Pew typology calls “Steadfast Conservatives”. What should be clear from the data, though, is that only on some issues is there a smooth left-right transition. Indeed, the only issue on which there was a neat match was gun rights. On same-sex marriage and affirmative action, “left wing” views could be found in significant numbers to the right of the US center. Continue reading “On petunias and whales: part 4”