On the systematic evaluation of creative works of science fiction: part the first

At Felapton Towers our crack team of bloggers* give great thought to the intersection of speculative fiction, logic and mathematics.

[Image: The staff. Your blogging team, hard at blogging.]

Applying mathematics to the Hugo awards has been done in various ways, but one method has not yet been attempted: to establish a statistical model of Hugo-award-worthiness, i.e. a way of examining a work of fiction and scoring it so that the overall score tells us how good it is.

Why has this not been attempted? The primary reason is that it is both stupid and un-useful: stupid because we don’t want speculative fiction to fit some cookie-cutter model, and un-useful because it is exactly the idiosyncratic qualities of a work that set it apart.

However, I have never let stupidity be a deterrent. First, though, I’ll need some general rules. Luckily, at the MadGeniusClub Kate Paulk (a key Sad Puppy) has provided two posts expressing her thoughts here and here. A set of guidelines is what I need, and from that something alchemical can be produced: a marking rubric, the tool of exam boards worldwide for assessing the writing of hapless students.

How does a marking rubric work? The idea is a clever one. For a given kind of writing (and often for a specific writing prompt) a set of criteria is developed. Each criterion carries a set of marks, and expert markers evaluate the piece of writing criterion by criterion. The writing ends up with a set of marks that add up to an overall score. Criteria can relate to features such as grammar, use of paragraphs, vocabulary, textual devices and so on, but they all need to work together (i.e. in general not be negatively correlated over significant numbers of students) while remaining independent of one another (i.e. not evaluating the same thing twice). The statistical properties of the criteria can be assessed by psychometricians and further refined. There is a neat overview here: http://pareonline.net/getvn.asp?v=7&n=10
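For the spreadsheet-inclined, the rubric mechanism described above can be sketched in a few lines of Python. The criteria names and mark ranges here are entirely hypothetical placeholders (they are not Kate Paulk’s criteria, which come later); the point is just the shape of the thing: each criterion carries a maximum mark, a marker awards marks criterion by criterion, and the totals sum to an overall score.

```python
# A minimal sketch of a marking rubric as a mapping from
# criterion name to maximum available marks. The criteria
# below are invented examples, not the real rubric.
rubric = {
    "plot": 5,
    "characters": 5,
    "prose": 5,
    "originality": 5,
}

def score_work(marks_awarded, rubric):
    """Sum the per-criterion marks, clamping each to [0, max]."""
    total = 0
    for criterion, max_marks in rubric.items():
        awarded = marks_awarded.get(criterion, 0)
        total += min(max(awarded, 0), max_marks)
    return total

# An expert marker evaluates a work criterion by criterion:
marks = {"plot": 4, "characters": 3, "prose": 5, "originality": 2}
print(score_work(marks, rubric))  # 14, out of a possible 20
```

Clamping each mark to its criterion’s range is what keeps one enthusiastic (or vindictive) marker from blowing out the total, which is exactly why exam boards like the format.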

Now, an obvious flaw with this approach is that writing that is weird, unusual or non-standard may score unreasonably poorly. For educational assessment purposes this does limit the kind of writing that students should do, but so long as it is understood as assessing a general level of competence in writing rather than ground-breaking creativity, it isn’t going to be too misleading. In other words, if you are James Joyce and you are asked to write an essay on why a tourist should visit Dublin, write an effective essay in the style of tourist information and don’t write Ulysses (write Ulysses later); there is no reason why one kind of writing will stop you doing the other. Fears that this will somehow destroy students’ innate creativity are, I believe, unwarranted. Knowing what the supposed rules are actually makes it much easier to deliberately break them.

A weird consequence of this way of assessing writing is that it turns out it can be mechanized: http://www.journalofwritingassessment.org/article.php?article=65 Naturally that only adds to the impression that the process is a soulless, reductive system stripping all joy from the world. On the other hand, I like robots, and a full discussion would take us off track.

So the mission: take Kate Paulk’s two blog posts. Turn them into a set of criteria. Try them out on some Hugo nominees. See what happens.

I can hardly wait and neither can Timothy the talking cat.

*{and fictional cat}
