Category: Statistics

You say ‘a-loomin-um’, I say ‘al-you-min-ee-um’, we both say ‘bunkum’

I resolved not to bother talking about Vox Day for a while but circumstances compel me. The synergies of nonsense bind extreme nationalism, Trumpism, misogyny, creationism and antivaxxerism together. It is always remarkable to see which apparently scientific studies the Alt-Right will quote as if they were gospel and which they will turn their selective scepticism to.

To wit: https://web.archive.org/web/20171202043719/http://voxday.blogspot.com.au/2017/11/a-new-meaning-to-metalhead.html

What is all this about? It is the old and thoroughly debunked canard that vaccines cause autism. The idea is rooted in two coincidences: an increase in the number of people diagnosed with autism (primarily due to better clinical descriptions of the autism spectrum and increased awareness among doctors and the public) and the timing – autism symptoms are often identified at an age close to when early childhood vaccinations occur. Campaigners against vaccination have been looking for a more substantial way of linking the two, and one generic culprit has been ‘toxins’ in vaccines – i.e. various additives used in the manufacture of vaccines. For a long time the supposed guilty party was mercury, particularly in the form of thiomersal – a preservative used in some vaccines. However, studies linking the two were famously debunked and many vaccines didn’t use thiomersal or other mercury compounds anyway.

Of late the antivaxxers have been pointing their fingers at a different metal: aluminium – which is just like the metal aluminum but more British. ‘Aluminium adjuvants’ are additives to vaccines that use aluminium. Adjuvants are substances added to vaccines whose role is to provoke a stronger immune response (see here for a better explanation https://www.cdc.gov/vaccinesafety/concerns/adjuvants.html ). Tiny amounts of aluminium are added intentionally because the body’s immune system will react to the aluminium, and it is that principle (which is central to the whole idea of vaccines) that has vaccination critics concerned.

Back to the study quoted. Vox Day is quoting from The Daily Mail:

http://www.dailymail.co.uk/health/article-5133049/Aluminium-vaccines-cause-autism.html

BUT….the Mail article is little more than a cut and paste from here:

https://www.hippocraticpost.com/infection-disease/aluminium-and-autism/

Which is an article by a “Chris Exley” who mainly writes alarming articles about the terrible things aluminium might do to you. Exley is quoting a study from Keele University which is available here:

http://www.sciencedirect.com/science/article/pii/S0946672X17308763

And that study was conducted by three people including…Professor Chris Exley. Who, coincidentally enough, is on the editorial board of the journal the study is published in:

https://www.journals.elsevier.com/journal-of-trace-elements-in-medicine-and-biology/editorial-board

It is a long chain, and yet oddly this is a rare case where the populist half-baked version of the study comes almost directly from the scientist involved.

Now I don’t know much about Professor Exley’s field, so I can’t really comment on the validity of the methods used. The study involved detecting aluminium in a very small number of samples of brain tissue from dead people who at some point in their lives had been diagnosed with an Autism Spectrum Disorder. There’s not much in the way of comparisons in the paper and I get the (perhaps mistaken) impression that the method is relatively new. The paper correctly concedes that “A limitation of our study is the small number of cases that were available to study and the limited availability of tissue.”

But take a critical look at the next step in the reasoning. Exley hedges what he says but Vox follows the dog whistle:

“So, the obvious question this raises is: how did so much aluminum get into the brain tissue in the first place? And the obvious answer is: from being injected with vaccines containing aluminum.” (Vox Day)

Of course a moment’s thought reveals that cannot be the answer. Most people do not have a diagnosed Autism Spectrum Disorder but most people are vaccinated. For Exley’s hypothesis to be correct there would need to be some additional factor, which Exley does describe in his media article:

“Perhaps there is something within the genetic make-up of specific individuals which predisposes them to accumulate and retain aluminium in their brain, as is similarly suggested for individuals with genetically passed-on Alzheimer’s disease.”

Well, perhaps there is, but Exley’s study doesn’t show that. More to the point, if this IS true then vaccines and aluminium adjuvants are irrelevant – we encounter far more aluminium in our diets than the tiny amounts we might get from vaccinations. Exley has zero reason to point at vaccines; indeed, his speculation would imply that vaccines CANNOT be the main source of the larger amounts of aluminium in his samples because the necessarily bigger sources (such as diet) are the more likely culprits.

Exley appears to be trying to join two different health-scare bandwagons together: general concerns about aluminium in stuff (see his other posts) and antivaxxerism.

Is the study itself flawed? As I said, I don’t know but the connection the paper makes to vaccines has zero substance and no evidence from the study itself. That in itself should have raised red flags with reviewers.

In the past, I’d have gone to Science Blogs for some extra background on something like this but that venerable home of blogs has been wound down.

Luckily ‘Orac’ of Respectful Insolence has set up their own blog here https://respectfulinsolence.com/ and has a deep dive into Exley’s paper here:

https://respectfulinsolence.com/2017/11/29/christopher-exley-using-bad-science-to-demonize-aluminum-adjuvants-in-vaccines/

Yup, it is as dodgy as somebody dodging things in a dodgy dodge. Orac points out the dubious funding source:

“The second time, I noted that he’s one of a group of scientists funded by the Child Medical Safety Research Institute (CMSRI), which is a group funded by Claire and Al Dwoskin, who are as rabidly antivaccine as anyone I’ve seen, including even Mike Adams. Among that group of antivaccine “scientists” funded by CMSRI? Anthony Mawson, Christopher Shaw, Lucija Tomljenovic, and Yehuda Shoenfeld, antivaccine crank “scientists” all. And guess what? This study was funded by CMSRI, too. Fair’s fair. If antivaxers can go wild when a study is funded by a pharmaceutical company and reject it out of hand, I can point out that a study funded by an antivaccine “foundation” is deserving of more scrutiny and skepticism.”

And it just gets worse from there. No controls, some tiny sample jiggery-pokery with the numbers and so on. Best read directly.

 


Blogstrology

It is only a tiny step from pointless science to pseudoscience and I’m thinking…it’s a rainy Sunday and my head hurts…

After my previous post on this topic, it occurred to me that I should check the profile of some other websites. I’d already identified that Vox Day’s blog was disproportionately Goat-Wolf-Rabbit. What about Monster Hunter Nation?

[Image: mhiblogstrology – animal-word frequency chart for Monster Hunter Nation]

A clear Tiger-Goat-Cow blog. Cats do quite well at MHI in terms of raw numbers but not when compared against their general frequency.

Moving away from the right, how about File770?

[Image: 770blogstrology – animal-word frequency chart for File 770]

Mike is running a Cat-Tiger-Goat blog it seems. Now note that the search method includes comments, so it may be the readers that have a thing about cats (this has been independently confirmed).

What do all three blogs have in common? GOATS.

[ETA – Rocket Stack Rank www.rocketstackrank.com is interesting because the animals mentioned would be more determined by their incidence in short fiction. Overall low frequencies and RSR has no presence on the otter or goose dimensions. Wolf-Rabbit-Cat blog – “Cat” strongly assisted by reviews of the works of Cat Rambo 🙂

Goat has a presence but is just shy of the top 3.]

[Image: RSRanimals – animal-word frequency chart for Rocket Stack Rank]

Today in Pointless Statistics

Yesterday, I was speculating about how the far-right may have a fear of rabbits. I’ve no means of ascertaining that but I did wonder if rabbits got mentioned more than you would expect.

Disproportionate Lagomorphic Referencing in Ideologically Extreme Propaganda

By C.Felapton, M.Robot 2017

Abstract

It has been postulated that the alt-right talks about rabbits a lot. Our research unit examined this hypothesis empirically using highly advanced data-mining techniques.

Using a sample of common animal words, the frequency of use of those words was established and then compared with word frequency in an established corpus of English words. It was established that at least one member of the alt-right talks about rabbits disproportionately.

Method

A weblog site produced by a notable “alt-right” writer was identified by a process of his being the obvious one to have a look at (a blogger who uses the pseudonym of “Vox Day”). A set of 13 common animal nouns was identified: cat, chicken, cow, dog, elephant, goat, goose, mouse, otter, rabbit, sheep, tiger, wolf.

For comparison purposes, a corpus of English words was identified to establish standard frequencies for each word. The selected corpus was the BYU-BNC.

The British National Corpus (BNC) was originally created by Oxford University Press in the 1980s – early 1990s, and it contains 100 million words of text from a wide range of genres (e.g. spoken, fiction, magazines, newspapers, and academic). https://corpus.byu.edu/bnc/

Using Google’s site-specific search function, the target website was searched using each animal word in turn as the search term. An example search query being “mouse site:voxday.blogspot.com”

The number of “hits” per search term was recorded.
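The tallying-and-ratio step of the method can be sketched in a few lines of Python (the hit counts below are the ones reported in Appendix A; the Google searches themselves were done by hand):

```python
# Animal-word hit counts from the blog (via Google site: searches) and the
# BYU-BNC corpus, as recorded in Appendix A.
blog_hits = {"mouse": 559, "chicken": 757, "otter": 79, "sheep": 1420,
             "cow": 662, "cat": 1880, "dog": 4880, "elephant": 616,
             "goose": 404, "tiger": 764, "rabbit": 1670, "wolf": 975,
             "goat": 844}
bnc_hits = {"mouse": 1728, "chicken": 2027, "otter": 188, "sheep": 2942,
            "cow": 1334, "cat": 3788, "dog": 7780, "elephant": 892,
            "goose": 479, "tiger": 870, "rabbit": 1393, "wolf": 804,
            "goat": 593}

# Ratio > 1 means the blog over-mentions the animal relative to the corpus.
ratios = {animal: blog_hits[animal] / bnc_hits[animal] for animal in blog_hits}

# Print in ascending order of ratio, as in Appendix A.
for animal, ratio in sorted(ratios.items(), key=lambda kv: kv[1]):
    print(f"{animal:9s} {ratio:.3f}")
```

(The raw counts don’t control for the relative sizes of the blog and the corpus, which is why only the rank ordering of the ratios is really meaningful.)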

Results

The most common animal name used from the sample was “dog”. However, given the very high frequency of “dog” in English, this result is unremarkable. The ratio of the blog frequency versus the corpus frequency was calculated. The mean ratio for the sample was 0.728 (to 3 s.f.) [blog freq/BNC freq].

The most disproportionately under-mentioned animal was “mouse”. The most disproportionately over-mentioned animal was “goat”. While “rabbit” and “wolf” have quite different raw frequencies in both the blog and the corpus, both words were over-mentioned by a similar ratio (1.20 for rabbit and 1.21 for wolf).

Full results are shown in Appendix A.

Discussion

It was agreed by the research team that this had been a pointless exercise that provided no valuable insights and which was methodologically flawed due to its arbitrary choice of words, blog and corpus. Meat Robot complained a lot about having a cold and suggested that a day spent re-watching Rogue One: A Star Wars Story would be a better plan. “You’re not the boss of me,” said Camestros, but had to concede that it was impossible to exist as an incorporeal being.

A cat refused to comment on the result and no other animals were consulted.

Appendix A: Full results

The table shows the full results in ascending order of ratio.

Animal     Blog Freq   BNC Freq   Ratio
mouse          559       1,728    0.323
chicken        757       2,027    0.373
otter           79         188    0.420
sheep        1,420       2,942    0.483
cow            662       1,334    0.496
cat          1,880       3,788    0.496
dog          4,880       7,780    0.627
elephant       616         892    0.691
goose          404         479    0.843
tiger          764         870    0.878
rabbit       1,670       1,393    1.199
wolf           975         804    1.213
goat           844         593    1.423

Spotting Fakery?

I previously pointed to an article on people manipulating Amazon rankings for their books; today there is a bigger brouhaha over whether somebody has manipulated the New York Times bestseller list: http://www.pajiba.com/book_reviews/did-this-book-buy-its-way-onto-the-new-york-times-bestseller-list.php The method used (if true) isn’t new, and political books have been prone to this approach before, i.e. buy lots of copies of the book from the right bookshops and head up the rankings.

One thing new to me from those articles was this site: http://fakespot.com/about It claims to analyse reviews on sites like Amazon and Yelp and then rate them in terms of how “fake” they seem to be. The mechanism looks at reviewers and review content, looks for relations with other reviews, and rates reviewers who only ever give positive reviews lower. Now, I don’t know if their methods are sound or reliable, so take the rest of this with a pinch of salt for the time being.
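Fakespot doesn’t publish its actual algorithm, but the two signals described above – reviewers who only ever give positive ratings, and suspicious similarity between reviews – can be illustrated with a toy scorer. Everything here (function name, weights, thresholds) is my own invention for illustration, not Fakespot’s method:

```python
def suspicion_score(review_text, reviewer_ratings, other_review_texts):
    """Toy heuristic: higher score = more 'fake'-looking (capped at 1.0).

    reviewer_ratings: every star rating this reviewer has ever given.
    other_review_texts: the other reviews on the same product.
    """
    score = 0.0

    # Signal 1: a reviewer who has only ever handed out 5-star ratings
    # looks less trustworthy than one with a mixed history.
    if reviewer_ratings and all(rating == 5 for rating in reviewer_ratings):
        score += 0.5

    # Signal 2: heavy word overlap with other reviews on the same product
    # (Jaccard similarity over word sets) suggests a copy-paste campaign.
    words = set(review_text.lower().split())
    for other in other_review_texts:
        other_words = set(other.lower().split())
        if words and other_words:
            overlap = len(words & other_words) / len(words | other_words)
            if overlap > 0.6:
                score += 0.25

    return min(score, 1.0)

# Example: a 5-star-only reviewer posting a near-duplicate review.
print(suspicion_score("a great epic read", [5, 5], ["a great epic read indeed"]))
```

Note that, as written, this toy version shares the limitation discussed below: it has no way to tell whether a cluster of similar NEGATIVE reviews is an organised pile-on rather than genuine dislike.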

Time to plug some things into their machine – but what? Steve J No-Relation Wright has very bravely volunteered to start reading Vox Day’s epic fantasy book because it was available for $0 ( https://stevejwright.wordpress.com/2017/08/23/a-throne-of-bones-by-vox-day-preamble-on-managing-expectations/ ) and so why not see what Fakespot has to say about “A Throne of Bones”: http://fakespot.com/product/a-throne-of-bones-arts-of-dark-and-light

[Image: thronebonesFAKESPOT – Fakespot analysis of A Throne of Bones]

Ouch…but to some extent we already know that the comment section of Vox’s blog is full of willing volunteers ready to do sycophantic stuff and/or trolling/griefing at Vox’s request. Arguably those are genuine reviews – they are just hard to distinguish from click-farm fakery. Think of it as a kind of Turing Test, which his right-wing minions repeatedly fail by acting like…well, minions.

How reliable is this? There’s no easy way to tell. As a side-by-side experiment I put in Castalia’s attempt at a spoiler campaign versus the mainstream SF book they were trying to spoil:

http://fakespot.com/product/corrosion-the-corroding-empire-book-1

http://fakespot.com/product/the-collapsing-empire

Ironically, the reviews that Vox complains about probably improve the Fakespot rating – i.e. many genuine negative reviews will make the assessed quality of the reviews better. I also don’t see a way in general for Fakespot to distinguish fake NEGATIVE reviews – i.e. to show that the poor ratings of a book aren’t genuine.

[A note of caution: the site doesn’t re-analyse automatically so the analysis you get may be out of date. The initial ratings for those two books were different but changed when I clicked the option to re-analyse]

The basic report seems to assume that fake reviews are for the purpose of the seller artificially boosting a book rather than somebody maliciously trying to make a book look bad.

 

Even More Hugo Wisdening

I’ve never been a fan of cricket, but my family growing up were, and there were numerous copies of Wisden in the house. For those who don’t know of it, it is best described here: https://en.wikipedia.org/wiki/Wisden_Cricketers%27_Almanack I guess some in the house hoped that I might find it intriguing and, while I could see the appeal, I resisted.

These days we’ve got something better! All the fun of tables of dry numbers PLUS science fiction books! I don’t have a round up of other takes on the numbers yet though.

Normally Brandon Kempner at Chaos Horizon has posted something by now but there’s not been a post there since February. I hope he is OK.

Greg Hullender of Rocket Stack Rank is actually in Helsinki – and having a fun time I hope – so probably won’t post anything yet.

In the comments JJ gave links to three rich sources of data:

The first one is great for seeing EPH in action.


The Black SFF Writer Survey Report

This is an interesting read http://www.fiyahlitmag.com/bsfreport/ from FIYAH Literary Magazine. I’ll let the report speak for itself and I’m still digesting it but I’d like to pick up a point they make in the introduction:

“A final note: We know that some usual suspects will attempt to invalidate what we’ve captured by claiming that our analysis lacks rigor, or our methodology was faulty. This is a smokescreen that these individuals use to hide the fact that they are against making the speculative fiction publishing space inclusive and respectful to black writers–all writers, really–and their work. Using assumed (and faulty) scientific expertise to attack the experiences of marginalized people is not a new tactic, and one that is frequently used by these groups in an attempt to maintain the oppressive systems that they believe should solely benefit them. They will never admit that fact so we are making it plain here.”

Strongly worded but a reasonable response given some of the muddleheaded reactions we saw to the Fireside report.

This is not to say that the report is somehow methodologically perfect or has flawless data or answers all questions. Rather, the point is that gathering a complete data picture of an area of study takes time and multiple studies, and is necessarily an iterative process of collecting incomplete data which then informs new surveys and new studies. There is a bootstrap element to all statistical study, e.g. how do you know whether your sample is representative without first having statistical data about the population you are sampling – which you can’t get without first taking a representative sample of that population? The answer is that *perfection* is unobtainable but *good-enough* is both obtainable and part of an iterative process of gaining knowledge.

So does the report have limitations? Yes, obviously – the writers aren’t omniscient. The question is: does it improve our understanding?

Survey results! Freeped by squirrels

[Image: surveymonkey – poll results graph]

After 77 votes, some of which were rigged, the surprise result was “Maybe its is squirrels who do all the real work around here. Just saying” – which isn’t even grammatically correct and wasn’t even an option initially.

Freeped by squirrels.

Again.

[Also: nice graph option there from Survey Monkey. The proportionally divided bar graph is a nice alternative to the pie-chart and is arguably easier to read.]