This is a follow-up to the earlier post. Read that post first for background and for the data I’m looking at.

I’ve looked at 2018 Hugo data for both stages:

- The nomination stage, decided by EPH (E Pluribus Hugo)
- The final voting stage, decided by IRV (instant-runoff voting)

My impression was that there are some changes in ranking between the two stages: not so many as to cast doubt on the nomination process itself, but not so few as to make the final voting stage redundant. It looks like things are pretty much in a sweet spot:

- final winners are often the top finalists — which implies there’s not a mismatch between how people nominate and how they vote (or between the people voting at each stage etc)
- low ranked finalists often do better in the final voting — which implies that there is a lot of value in a two stage process.

To show that, here is a graph of how the rankings compare between EPH stage 1 and IRV stage 2 of the Hugo voting process:

The width of a blob indicates the frequency of that pair of ranks. For example, there were 9 cases of the 1st-ranked EPH finalist coming 1st in the final stage, and 10 cases of a 4th-ranked finalist coming 3rd in the final stage. I’m not sure a simple linear regression is appropriate for rank data, but Excel tells me that first-stage voting accounts for about 25% of the variance in the second-stage ranks (an R² of roughly 0.25).
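For anyone wanting to redo the variance calculation outside Excel, here’s a minimal sketch of the R² computation on (EPH rank, IRV rank) pairs. The rank pairs shown are hypothetical placeholders, not the real 2018 ballots:

```python
# R^2 between first-stage (EPH) and second-stage (IRV) ranks:
# the squared Pearson correlation of the paired ranks.
def r_squared(xs, ys):
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return (sxy * sxy) / (sxx * syy)

# Hypothetical example: one category's six finalists,
# EPH rank vs final IRV rank. Not the actual 2018 data.
eph_ranks = [1, 2, 3, 4, 5, 6]
irv_ranks = [1, 3, 2, 6, 4, 5]
print(round(r_squared(eph_ranks, irv_ranks), 2))
```

With the real data this would be one (EPH rank, IRV rank) pair per finalist, pooled across the 15 categories.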

However, can we look at this data and say how long the finalist list should be? Are there ENOUGH finalists? Should there be a list of 7 or 8? Putting administrative and practical limits aside, I think we can examine this question with the data.

Obviously, I’m only looking at one year, so any conclusions are tentative and limited. I could look further but recent data is weird due to Puppy activities and there have been rule changes since. So, I’m sticking with 2018 (also I’m lazy).

Another graph I drew looks at the distribution of the differences in rank between the two stages.

Again we can see that no change (zero on the x-axis) is common but that bigger changes in rank do happen. Unfortunately, we can’t take this distribution as being true of every rank, because it’s squeezed at the edges: obviously, rank 6 finalists can only stay the same or move upwards.

A different way of thinking about the issue would be to consider what would happen with different numbers of finalists. For example, what if in 2018 there had been only 1 finalist per category? Yes, that’s silly, but we can work out that of the 15 categories I looked at, 9 would have the same winner as actually happened and 6 wouldn’t. One finalist would capture 60% of the actual winners.

- 1 finalist: 9 or 60% of winners
- 2 finalists: 12 or 80% of winners
- 3 finalists: 14 or 93% of winners
- 4 finalists: 14 or 93% of winners (i.e. no extra winners)
- 5 finalists: 15 or 100% of winners
- 6 finalists: 15 or 100% of winners
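The counts above can be reproduced with a short sketch. The list of winners’ nomination-stage ranks is reconstructed from the tallies in the post (9 winners ranked 1st, 3 ranked 2nd, 2 ranked 3rd, 1 ranked 5th), not from the raw ballots:

```python
# Winners' nomination-stage (EPH) ranks across the 15 categories,
# inferred from the cumulative counts in the post.
winner_ranks = [1] * 9 + [2] * 3 + [3] * 2 + [5]

# How many eventual winners a finalist list of length k would capture.
captured = [sum(1 for r in winner_ranks if r <= k) for k in range(1, 7)]
for k, c in zip(range(1, 7), captured):
    print(f"{k} finalists: {c} winners ({c / len(winner_ranks):.0%})")
```

Running this prints the same 9 / 12 / 14 / 14 / 15 / 15 progression as the list above.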

So for most categories, 3 finalists would just about do. Adding finalists after 3 brings only small gains, but 2018 still needed 5 finalists to capture all the eventual winners.

Now, obviously, if we added more finalists people’s choices and the voting would change but we can see from the trend that the gains trail off quickly after 3 finalists.

So is five enough? Five clearly worked, but that’s actually an argument for having six finalists if you want to be confident you’ve got all the plausible contenders. As we definitely had one fifth-ranked finalist winning a category (Rebecca Roanhorse in the Campbell Award), there’s maybe a 7% chance of a rank 5 finalist winning (one winner out of 15).

Add in the possibility of one finalist being in some way dodgy or having cheated, and 6 is a safe contingency. Does the same argument not work for 7 or 8 finalists? No, because we can see that the gains trail off rapidly after 3 finalists. Five is probably enough; six is almost certainly enough.

## 2 responses to “How many finalists? Crunching continued…”

Thanks for looking at this. While five is fine, I do like having six. By the way, I checked the ranks you had against my own post and the only error I found turned out to be mine!


Thanks for checking!
