Firstly, if you haven’t already seen Martin Pyne’s Sankey diagrams showing how the preferences flowed, check them out on Twitter.
One thing we’ve looked at before is how many finalists there should be. I still think six is the sweet spot, and I think this year validates that.

This bubble graph compares the ranking of the finalists in the EPH stats with the final ranking from the transferable-vote stats. As a generality, popular nominees are popular finalists, as you might expect. If you had to bet on the final rankings with nothing but the EPH rankings to go on, you generally wouldn’t be far wrong if you just picked the EPH order. However, you’d still be wrong quite often.
Notably, Best Related Work and Best Editor Short Form both had winners that placed sixth in the nomination process, which is the clearest way the bubble graph departs from a simple diagonal line. A quirkier difference is that Jonathan Strahan in Best Editor Short Form is the only finalist to place second in both processes.
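One rough way to quantify “not far wrong, but wrong quite often” is the rank correlation between the two orderings. Here is a minimal sketch with made-up ranks for a hypothetical six-finalist category, not the actual 2020 numbers:

```python
from scipy.stats import spearmanr

# Made-up example: each finalist's rank in the EPH nomination tally
# versus their rank in the final transferable-vote result.
eph_rank = [1, 2, 3, 4, 5, 6]
final_rank = [2, 1, 3, 6, 4, 5]

rho, _ = spearmanr(eph_rank, final_rank)
print(rho)  # ~0.77 here; a rho near 1 means betting on the EPH order is a decent bet
```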
O Westin asked in the comments:
“I might be misreading/misrepresenting the data, but if I understand things correctly, the closer the initial points are to the number of nominations, the more focused that person’s nominators are”
I think that is correct, and if so we can quantify it a bit by looking at the ratio of the initial points to the raw nomination votes. Here I’ve ranked fan writers by that stat (sorry, it’s the only category where I grabbed these numbers); a rough sketch of the calculation follows the table.
| Writer | Ratio (points / votes) |
| --- | --- |
| Elsa Sjunneson | 81% |
| Adam Whitehead | 76% |
| O. Westin/MicroSFF | 74% |
| Gavia Baker-Whitelaw | 71% |
| Stitch | 70% |
| James Davis Nicoll | 66% |
| Jason Sanford | 60% |
| Alasdair Stuart | 53% |
| Paul Weimer | 51% |
| Sarah Gailey | 48% |
| Bogi Takács | 48% |
| Charles Payseur | 47% |
| Cora Buhlert | 43% |
| Camestros Felapton | 41% |
| Adri Joy | 40% |
| Aidan Moher | 39% |
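For what it’s worth, here’s a minimal sketch of how that ratio falls out of the EPH rules, assuming access to the raw nomination ballots (which the published stats don’t give you); focus_ratio is a hypothetical helper:

```python
def focus_ratio(ballots, work):
    """Initial EPH points for `work` divided by its raw nomination count.

    ballots: list of sets of nominated works, one set per nominator.
    Each ballot contributes one point, split evenly across its works, so
    this ratio is the average of 1/len(ballot) over the work's nominators:
    near 100% means "bullety" ballots, low means the nominators spread
    their point across many other works.
    """
    nominating = [b for b in ballots if work in b]
    if not nominating:
        return 0.0
    points = sum(1.0 / len(b) for b in nominating)
    return points / len(nominating)

# e.g. one bullet ballot plus one two-work ballot:
# focus_ratio([{"A"}, {"A", "B"}], "A") -> (1.0 + 0.5) / 2 = 0.75
```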
Note that the ratio certainly doesn’t sort finalists from non-finalists: there is a finalist (Adam) in the 70s and a finalist (Cora) in the 40s. Primarily this is because, with EPH, raw votes matter most. At each elimination step, more “bullety” ballots give you more points per raw vote, which makes it a bit less likely that you end up in a head-to-head elimination in the first place. However, in the end, it is raw votes that decide whether you get eliminated. And as nominees get eliminated, the surviving nominees’ own points get more bullety anyway.
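Here’s a minimal sketch of that elimination loop as I understand it (simplified; the real EPH rules handle ties and edge cases more carefully):

```python
from collections import Counter

def eph_shortlist(ballots, num_finalists=6):
    """Simplified E Pluribus Hugo tally; ballots are sets of works."""
    remaining = set().union(*ballots)
    while len(remaining) > num_finalists:
        points, raw_votes = Counter(), Counter()
        for ballot in ballots:
            live = ballot & remaining
            for work in live:
                points[work] += 1.0 / len(live)  # each ballot's point re-splits as works drop out
                raw_votes[work] += 1
        # The two lowest-point works face off...
        contenders = sorted(remaining, key=lambda w: points[w])[:2]
        # ...and raw votes, not points, decide who is eliminated.
        eliminated = min(contenders, key=lambda w: raw_votes[w])
        remaining.discard(eliminated)
    return remaining
```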
tl;dr: a “bullet vote” set of ballots is neither a substantial advantage nor a disadvantage with EPH, and nor is the opposite. EPH really only makes a difference when comparing two nominees with a similar number of raw votes.
I think the two-stage voting process for the Hugos is pretty neat all round. I wouldn’t change it currently. However, if I was devising a new award and wanted only one stage of voting, EPH looks pretty good.
- Voters only have to list things they like.
- You get many of the features of ranked voting without the rankings.
- It avoids ties (arguably this is a bug rather than a feature).
If I suddenly had a lot of money/time to create a new SF award program, I’d go with single-stage EPH voting with voters having up to 10 nominees per category. However, rather than a single winner, I’d award the final three as joint winners.
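In terms of the sketch above, that hypothetical award is just a different stopping point for the same loop:

```python
# Toy ballots (sets of works); in this scheme each voter could list up to 10.
ballots = [{"A", "B"}, {"A", "C"}, {"B", "C", "D"}, {"D"}, {"A"}]
winners = eph_shortlist(ballots, num_finalists=3)  # the final three, as joint winners
```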
16 responses to “A few more Hugo Stats”
“However, rather than a single winner I’d award the final three as joint winners.”
Yeah, that’s never gonna fly. (I mean, I support it completely, but the inertia in favour of “single winner” is really large. Even in a field that is good at embracing change.)
Yes, but these would be my magical awards and I’d get to make the rules 🙂 [seriously, I wouldn’t suggest it for the Hugos or Nebulas etc.]
Even with EPH (and this was a conscious design criterion for it), adding a work to your nominating ballot always increases the chances of its making the shortlist. If you want to maximize the chance that something you nominated makes the shortlist, optimal is not bullet-nominating, but rather using all five nomination slots.
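A toy illustration of why, using made-up ballots and the same simplified model as the sketch in the post:

```python
from collections import Counter

def first_round(ballots):
    """Raw votes and initial EPH points per work (toy model)."""
    votes, points = Counter(), Counter()
    for ballot in ballots:
        for work in ballot:
            votes[work] += 1
            points[work] += 1.0 / len(ballot)
    return votes, points

# The same voter either bullet-nominates A or fills all five slots.
v1, p1 = first_round([{"A"}])
v5, p5 = first_round([{"A", "B", "C", "D", "E"}])

assert v1["A"] == v5["A"] == 1  # A's raw vote, the elimination currency, is unchanged
assert p5["A"] < p1["A"]        # only A's points are diluted (0.2 vs 1.0)
assert v5["B"] == 1             # and B now has a raw vote it otherwise wouldn't have
```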
[…] bloggers, such as Hugo nominee Cora Buhlert and Camestros Felapton attempted to take a somewhat higher perspective by presenting analysis and statistics of the awards […]
Funnily enough, whenever I’ve pondered “how would I structure a new SF award?”, I’ve also tended towards 10 finalists/3 winners, albeit with an early nomination close (Jan/Feb, say) and an extended period for reading/voting (till Sep/Oct?).
The big issue I have with all (AFAIK?) the current awards – and I’m just considering the “Best Novel” category here – is the use of the 12-month publication window for eligibility. For public-vote awards, I feel that this privileges people who are prepared to splash out for new release hardbacks – or similarly priced ebooks – and/or people who have access to ARCs, and thus runs the risk of appearing elitist or disconnected from many readers.
Prompted by earlier comments from several people about them not recognising any books in the then-recent Goodreads Choice awards, I did a very informal survey on a Reddit group early this year, asking what/how many books people had read in 2019. The resulting averages were: 51 books read of any type, of which 31 were SF/F/H novels, but only 6.7 were 2019-published SF/F/H novels. Now, my survey was carried out very non-scientifically and had fewer than 30 responses, and I’m sure a similar one done here or at File 770 would produce very different results, but IIRC a Clarke Award survey from a few years ago reported an average of 50 books a year read, of which 5 were current-year publications, which is pretty close to my findings.
What I’d instead want to try in my hypothetical award is a 3-5 year rolling period of eligibility, which would open things up to titles that got later paperback publications, late-in-year releases that didn’t have a torrent of pre-release hype, maybe even works translated from English to other languages. (The Kurd-Laßwitz finalists in the translated category are an interesting mix of titles that would never go up against each other in a US or UK award, for example.) This is basically the same as the “judgement of history” argument described by Dave Langford way back in 1981. https://ansible.uk/writing/ff08.html
Given that this is clearly not a new idea, I’d be curious whether wiser or more experienced heads know if anyone has ever tried anything like this, and/or what the flaws are. I suspect that such an award would run the risk of just being a delayed echo of the titles that won/were nominated for the regular annual awards, and, similar to the filtering on Hugo BDP categories, you’d want to have countermeasures for the same authors/series coming up multiple times.
Since tying my reading more to anticipating the Hugos, I’ve read more books in their year of publication but even then I’m playing catch-up with a lot of books (e.g. The Raven Tower) this year.
I think it’s another illustration of the fact that while the general public does a decent job of picking from a shortlist, it does a terrible job of selecting that shortlist in the first place. The primary elections in the US are another example of this. What’s needed is a panel of experts to select the shortlist, much as political parties used to do in the US.
The Nebulas have it worse than the Hugos since they have so few voters. Busy writers just don’t have the time to read broadly enough to do a good job with nominations. They’d be much better served by a nominating committee.
The Hugos and Nebulas, of course, are not going to change, but that doesn’t mean we shouldn’t at least be aware of the problem: popular vote generally does a poor job of nominating candidates.
Is there any evidence that experts pick better book shortlists than the Hugo or Nebula nominators? As I’m not a reader of short fiction, I don’t have any data to hand for those categories – although I imagine RSR probably does? – but digging through a spreadsheet I maintain of Best Novel nominees/finalists for various awards, it seems that the majority of recent Hugo or Nebula novel finalists were also on the recommended reading lists produced by the Locus team, the exceptions being:
2016/2017 – A Closed and Common Orbit, Too Like the Lightning (Hugo finalists); Borderline (Nebula finalist)
2017/2018 – Six Wakes (Hugo and Nebula finalist)
2018/2019 – None; all Hugo and Nebula finalists were on the Locus recommended lists
(I forgot to update the spreadsheet for this year 😦 )
Obviously the Locus recommended lists are much bigger than the Hugo and Nebula shortlists, but those popular vote nominations don’t seem to be much out of sync with the lists produced by experts, and I think there’s a strong argument that Too Like the Lightning is a major omission from the Locus lists.
Plus, this year’s Clarke Award panel of judges – which includes two of this year’s Hugo finalists – recently produced a shortlist including The Last Astronaut, which is by a long way the worst book I’ve read this year, and recent reviews from the likes of Nina Allan and Ian Sales were equally scathing. (Has anyone else had the misfortune of reading it?)
John S / ErsatzCulture: “The Last Astronaut, which is by a long way the worst book I’ve read this year, and recent reviews from the likes of Nina Allan and Ian Sales were equally scathing. (Has anyone else had the misfortune of reading it?)”
I loved it. A lot. And I’m not fond of horror, of which it has a fair bit. I suppose I’m going to have to go look up the Allan & Sales reviews now — though I disagree with them on almost everything, so they’re not likely to change my mind.
Ooooh – a marmite book!
I was about to say that the fact that Ian Sales and Nina Allan hated the book is not necessarily a negative recommendation, since I tend to disagree with both of them a lot.
Just to make it more confusing, I’d go with the number of winners dependent on the number of voters on a logarithmic scale (a one-line version follows the scale):
< 10 voters = No winner
10 – 99 voters = 1 winner
100 – 999 voters = 2 winners
1000 – 9999 voters = 3 winners
…
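In other words, the number of winners is the integer part of log₁₀ of the voter count. A one-line version of the scale above (joint_winners is a hypothetical helper):

```python
import math

def joint_winners(voters):
    """Winners on the proposed scale: 10-99 voters -> 1, 100-999 -> 2, and so on."""
    return 0 if voters < 10 else math.floor(math.log10(voters))

assert joint_winners(9) == 0 and joint_winners(99) == 1 and joint_winners(1000) == 3
```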
I was nodding along, but I thought the relationship would go the other way around, i.e. fewer voters = more winners. With lots of voters, it should be easier to pick a single winner.
I just want to thank Martin again for his graphs. They explained it perfectly at a glance.
I had fun making them! Dramatic Presentation (Short) in particular is one I am going to pull out the next time somebody worries that having two episodes of the same series on the ballot will split the vote.
I’m just surprised that “A God Walks into Abar” did so much better than “This Extraordinary Being”, since the latter stood alone much better and was also IMO the better story.
But Best Dramatic Presentation Short is one category where I’m very much out of step with Hugo voters and nominators.