A few more Hugo Stats

First, if you haven’t already seen Martin Pyne’s Sankey diagrams showing how the preferences flowed, check them out on Twitter.

One thing we’ve looked at before is how many finalists there should be. I still think six is the sweet spot, and I think this year validates that.

This bubble graph compares the ranking of the finalists in the EPH stats with the final ranking from the transferable-vote stats. As a generality, popular nominees are popular finalists, as you might expect. If you had to bet on the final rankings with nothing but the EPH rankings to go on, you generally wouldn’t be far wrong if you just picked the EPH order. However, you’d still be wrong quite often.

Notably, Best Related Work and Best Editor Short Form both had winners that were sixth in the nomination process. Those are the clearest points where the bubble graph departs from a diagonal line. A quirkier difference is that Jonathan Strahan in Best Editor Short Form is the only finalist to place second in both processes.

O Westin asked in the comments:

“I might be misreading/misrepresenting the data, but if I understand things correctly, the closer the initial points are to the number of nominations, the more focused that person’s nominators are”

I think that is correct, and if so, we could quantify it a bit by looking at the ratio of a nominee’s initial points to their raw votes. Here I’ve ranked fan writers by that stat (sorry, it’s the only category where I grabbed these numbers); a sketch of the computation follows the table.

Writer                  Ratio
Elsa Sjunneson          81%
Adam Whitehead          76%
O. Westin / MicroSFF    74%
Gavia Baker-Whitelaw    71%
Stitch                  70%
James Davis Nicoll      66%
Jason Sanford           60%
Alasdair Stuart         53%
Paul Weimer             51%
Sarah Gailey            48%
Bogi Takács             48%
Charles Payseur         47%
Cora Buhlert            43%
Camestros Felapton      41%
Adri Joy                40%
Aidan Moher             39%
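
For anyone who wants to replicate the stat, here is a minimal sketch, assuming the raw ballot data comes as a list of sets of nominee names; the function name `focus_ratio` is mine, not anything from the official stats:

```python
from fractions import Fraction

def focus_ratio(ballots, nominee):
    """Initial EPH points as a share of raw votes for one nominee.

    ballots: list of sets of nominee names, one set per nominating ballot.
    A ratio near 100% means the nominee's supporters listed little else.
    """
    relevant = [b for b in ballots if nominee in b]
    votes = len(relevant)                                # raw nominations
    points = sum(Fraction(1, len(b)) for b in relevant)  # initial points
    return float(points / votes) if votes else 0.0
```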

Note that the ratio certainly doesn’t sort finalists from non-finalists: there is a finalist (Adam) in the 70s and a finalist (Cora) in the 40s. Primarily this is because with EPH the raw votes matter most. At each elimination your points per raw vote are higher if your supporters’ ballots are more “bullety”, which makes it a bit less likely that you end up in a head-to-head elimination. However, in the end, it is raw votes that decide whether you get eliminated. And as nominees get eliminated, the survivors’ own ballots effectively get more bullety.

tl;dr: a “bullet vote” set of ballots is neither a substantial advantage nor a disadvantage with EPH, and neither is the opposite. EPH really only makes a difference when comparing two nominees with a similar number of raw votes.
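
For readers who want the mechanics spelled out, here is a minimal sketch of the EPH tallying loop as I understand it, using the same list-of-sets ballot format as above; the tie handling is simplified relative to the actual WSFS rules:

```python
from collections import defaultdict
from fractions import Fraction

def eph_tally(ballots, num_finalists=6):
    """Sketch of the E Pluribus Hugo elimination loop.

    Each ballot contributes one point, split equally among its
    surviving nominees; a nominee's raw votes are simply how many
    ballots list it.
    """
    survivors = set().union(*ballots)
    while len(survivors) > num_finalists:
        points = defaultdict(Fraction)
        votes = defaultdict(int)
        for ballot in ballots:
            live = ballot & survivors
            for nominee in live:
                points[nominee] += Fraction(1, len(live))  # point share
                votes[nominee] += 1                        # raw vote
        # The two lowest point totals go head to head...
        a, b = sorted(survivors, key=lambda n: points[n])[:2]
        # ...but raw votes decide who is eliminated.
        survivors.discard(a if votes[a] <= votes[b] else b)
    return survivors
```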

I think the two-stage voting process for the Hugos is pretty neat all round. I wouldn’t change it currently. However, if I was devising a new award and wanted only one stage of voting, EPH looks pretty good.

  • Voters only have to list things they like.
  • You get many of the features of ranked voting without the rankings.
  • It avoids ties (arguably a bug rather than a feature).

If I suddenly had a lot of money/time to create a new SF award program, I’d go with single-stage EPH voting, with voters having up to 10 nominees per category. However, rather than a single winner, I’d award the final three as joint winners.
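
In terms of the eph_tally sketch above, that hypothetical award would just be a parameter change (with ballots here being sets of up to 10 nominees):

```python
# Hypothetical single-stage award: run the same elimination
# loop down to three survivors and call them joint winners.
joint_winners = eph_tally(ballots, num_finalists=3)
```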

16 thoughts on “A few more Hugo Stats”

  1. “However, rather than a single winner, I’d award the final three as joint winners.”
    Yeah, that’s never gonna fly. (I mean, I support it completely, but the inertia in favour of “single winner” is really large. Even in a field that is good at embracing change.)


  2. Even with EPH (and this was a conscious design criterion for it), adding a work to your nominating ballot always increases the chances of its making the shortlist. If you want to maximize the chance that something you nominated makes the shortlist, optimal is not bullet-nominating, but rather using all five nomination slots.
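
    A toy sketch of the arithmetic behind this (using the assumed list-of-sets ballot format from the post above, not anything from the comment): adding a work dilutes your other nominees’ point shares but never reduces their raw votes, and raw votes are what decide eliminations.

    ```python
    from fractions import Fraction

    # One voter's ballot before and after adding work B.
    before = {"A"}       # A: 1 point, 1 raw vote; B: nothing at all
    after = {"A", "B"}   # A: 1/2 point, 1 raw vote; B: 1/2 point, 1 raw vote

    # A's point share drops but its raw vote count is unchanged,
    # while B goes from zero support to half a point and a full vote.
    print(Fraction(1, len(before)), Fraction(1, len(after)))
    ```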


  3. Funnily enough, whenever I’ve pondered “how would I structure a new SF award?”, I also tended towards 10 finalists/3 winners, albeit with an early nomination close (Jan/Feb say), and an extended period for reading/voting (till Sep/Oct?).

    The big issue I have with all (AFAIK?) the current awards – and I’m just considering the “Best Novel” category here – is the use of the 12-month publication window for eligibility. For public-vote awards, I feel that this privileges people who are prepared to splash out for new release hardbacks – or similarly priced ebooks – and/or people who have access to ARCs, and thus runs the risk of appearing elitist or disconnected from many readers.

    Prompted by earlier comments from several people about them not recognising any books in the then-recent Goodreads Choice awards, I did a very informal survey on a Reddit group early this year, asking what/how many books people had read in 2019. The resulting averages were: 51 books read of any type, of which an average of 31 were SF/F/H novels, but only 6.7 were 2019-published SF/F/H novels. Now, my survey was carried out very non-scientifically, and had fewer than 30 responses, and I’m sure a similar one done here or at File 770 would produce very different results, but IIRC a Clarke Award survey from a few years ago reported an average 50 books a year read, of which 5 were current year publications, which is pretty close to my findings.

    What I’d instead want to try in my hypothetical award is a 3-5 year rolling period of eligibility, which would open things up to titles that got later paperback publications, late-in-year releases that didn’t have a torrent of pre-release hype, maybe even works translated from English to other languages. (The Kurd-Laßwitz finalists in the translated category are an interesting mix of titles that would never go up against each other in a US or UK award, for example.) This is basically the same as the “judgement of history” argument described by Dave Langford way back in 1981. https://ansible.uk/writing/ff08.html

    Given that this is clearly not a new idea, I’d be curious if wiser or more experienced heads know if anyone has ever tried anything like this, and/or what the flaws are? I suspect that such an award would run the risk of just being a delayed echo of the titles that won/were nominated for the regular annual awards, and – similar to the filtering on Hugo BDP categories – you’d want to have countermeasures for the same authors/series coming up multiple times.


    1. Since tying my reading more to anticipating the Hugos, I’ve read more books in their year of publication but even then I’m playing catch-up with a lot of books (e.g. The Raven Tower) this year.


  4. I think it’s another illustration of the fact that while the general public does a decent job of picking from a short list, it does a terrible job of selecting a shortlist in the first place. The primary elections in the US are another example of this. What’s needed is a panel of experts to select the short list, much as political parties used to do in the US.

    The Nebulas have it worse than the Hugos since they have so few voters. Busy writers just don’t have the time to read broadly enough to do a good job with nominations. They’d be much better served by a nominating committee.

    The Hugos and Nebulas, of course, are not going to change, but that doesn’t mean we shouldn’t at least be aware of the problem: popular vote generally does a poor job of nominating candidates.


    1. Is there any evidence that experts pick better book shortlists than the Hugo or Nebula nominators? As I’m not a reader of short fiction, I don’t have any data to hand for those categories – although I imagine RSR probably does? – but digging through a spreadsheet I maintain of Best Novel nominees/finalists for various awards, it seems that the majority of recent Hugo or Nebula novel finalists were also on the recommended reading lists produced by the Locus team, the exceptions being:

      2016/2017 – A Closed and Common Orbit, Too Like the Lightning (Hugo finalists); Borderline (Nebula finalist)

      2017/2018 – Six Wakes (Hugo and Nebula finalist)

      2018/2019 – None; all Hugo and Nebula finalists were on the Locus recommended lists

      (I forgot to update the spreadsheet for this year 😦 )

      Obviously the Locus recommended lists are much bigger than the Hugo and Nebula shortlists, but those popular vote nominations don’t seem to be much out of sync with the lists produced by experts, and I think there’s a strong argument that Too Like the Lightning is a major omission from the Locus lists.

      Plus, this year’s Clarke Award panel of judges – which includes two of this year’s Hugo finalists – recently produced a shortlist which included The Last Astronaut, which is by a long way the worst book I’ve read this year, and recent reviews from the likes of Nina Allan and Ian Sales were equally scathing. (Has anyone else had the misfortune of reading it?)


      1. John S / ErsatzCulture: “The Last Astronaut, which is by a long way the worst book I’ve read this year, and recent reviews from the likes of Nina Allan and Ian Sales were equally scathing. (Has anyone else had the misfortune of reading it?)”

        I loved it. A lot. And I’m not fond of horror, of which it has a fair bit. I suppose I’m going to have to go look up the Allan & Sales reviews now — though I disagree with them on almost everything, so they’re not likely to change my mind.


      2. I was about to say that the fact that Ian Sales and Nina Allan hated the book is not necessarily a negative recommendation, since I tend to disagree with both of them a lot.


  5. Just to make it more confusing, I’d go with number of winners dependent on number of voters on a logarithmic scale.

    < 10 voters = No winner
    10 – 99 voters = 1 winner
    100 – 999 voters = 2 winners
    1000 – 9999 voters = 3 winners
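
    In code, that scale is just the integer part of log10 of the voter count; a tiny sketch (computed via digit count to sidestep float rounding):

    ```python
    def winners_for(voters):
        """Joint winners on the proposed logarithmic scale: floor(log10(voters))."""
        return 0 if voters < 10 else len(str(voters)) - 1

    assert [winners_for(n) for n in (9, 10, 99, 100, 999, 1000)] == [0, 1, 1, 2, 2, 3]
    ```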


    1. I had fun making them! Dramatic Presentation (Short) in particular is one I am going to pull out the next time somebody worries that having two episodes of the same series on the ballot will split the vote.


      1. I’m just surprised that “A God Walks into Abar” did so much better than “This Extraordinary Being”, since the latter stood alone much better and was also IMO the better story.

        But Best Dramatic Presentation Short is one category where I’m very much out of step with Hugo voters and nominators.

