What Daniel Andrews Gets Wrong about Science

Dan Andrews, the divisive Premier of Victoria, triggered a collective sigh amongst his state’s 6.4 million constituents recently when he announced that Victoria’s lockdown restrictions would be extended for yet another two weeks, bringing the total term of the state’s strict Stage 4 restrictions to at least 10 weeks.

At the commencement of his 90-minute press conference, Andrews played the science card.

“You can’t argue with this sort of data. You can’t argue with science”.

Because Science

Whatever you think about Victoria’s lockdown laws, playing the “because science” card should not, and does not, immediately render one absolutely correct, nor does it eliminate room for inquiry and discourse.

Firstly, I should stress that this piece is by no means an attack on science, nor on the forecasting model Andrews describes, nor on his decision to extend the lockdown.

I for one am a massive advocate of science, having built a company around emerging tech startups over the past six years. Science has improved our lives by orders of magnitude over the past 150 years: clean water, sanitation, automobiles, planes, electricity and lighting, communications, medicine, the doubling of the average lifespan since 1900 and, of course, the internet (for all its drawbacks).

As the saying goes, status and all of its trappings aside, a typical person today lives a better and more comfortable life than kings and queens did just several centuries ago.

But I am also a massive advocate for truth and reason, and for not pulling the wool over unsuspecting people’s eyes by pointing to the science and declaring the case closed, especially when it comes to politicians wielding authority over millions of people, as is currently the case in Victoria.

You Can Argue with Science

The fact is that despite Andrews’ rhetoric, you can argue with science.

The scientific method, at its core, is about testing and falsifying hypotheses. It brings us closer to the truth, but it rarely if ever amounts to absolute truth.

As the late Nobel Prize-winning physicist Richard P. Feynman put it, “we can never be sure we’re right, we can only ever be sure we’re wrong”.

This is because science is subject to error, known unknowns and unknown unknowns, and it can also be gamed.

As Carl Bergstrom and Jevin D West, authors of the bestselling Calling Bullshit: The Art of Scepticism in a Data-Driven World, put it, “there’s plenty of bullshit in science, some accidental and some deliberate”.

They point out that every living scientist acts from the same human motivations as everyone else, motivations that go beyond the quest for understanding: money, reputation, influence, power.

The Replication Crisis

These flawed incentives have contributed to the replication crisis that is currently plaguing the sciences. A significant number of scientific studies are difficult or impossible to replicate or reproduce. And this is particularly true of medicine, psychology, and economics.

Medicine: Of 49 medical studies from 1990–2003 with more than 1,000 citations, only 24% remained largely unchallenged, prompting John Ioannidis, Professor of Medicine at Stanford University, to publish an article on Why Most Clinical Research Is Not Useful. Furthermore, in a 2012 paper, Glenn Begley, a biotech consultant, argued that only 11% of pre-clinical cancer studies could be replicated.

Psychology: A report published by the Open Science Collaboration in August 2015 found that replication rates in psychology ranged from just 23% for social psychology up to about 50% for cognitive psychology. To take the more encouraging of those numbers, that still translates to half of such studies not being replicable.

Economics: Perhaps this should be the least surprising of the lot. A 2016 study found that one-third of 18 studies from two top-tier economics journals failed to replicate. A subsequent study argued that “the majority of the average effects in the empirical economics literature are exaggerated by a factor of at least 2 and at least one-third are exaggerated by a factor of 4 or more”.

A single study doesn’t tell you much about the world. Researchers weigh the evidence across multiple studies to form not a correct view, but a ‘more likely to be correct’ view of how the world works, and oftentimes that view is revisited and updated. Hello, butter and fat being nothing but bad for you.

Science is ultimately susceptible to all of the following failings.

1 Garbage in, Garbage out

Andrews said that “you can’t argue with this sort of data”.

But as computer scientists know all too well, the quality of the output is determined by the quality of the input. For data to be useful, it needs to be the right data, it needs to be interpreted correctly, and the conclusions drawn from it need to be free of error and intentional fudging.

Not only that, but things change. The past isn’t always a reliable predictor of the future, and it also fails to account for unknown unknowns and black swan events such as, say, a novel coronavirus bringing down the world economy.

This is precisely why there is a growing trend in business away from being data-driven and towards being data-informed. The former puts all of your faith in the data, flawed or otherwise, whereas the latter combines data with the professional judgment obtained from many years of experience in a particular domain, accounting for the nuances surrounding a particular decision.

We wouldn’t think that artificial intelligence would be racist, but it turns out that when you feed an algorithm data based on how the world currently works, it adopts our failings too.
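To make that concrete, here is a minimal Python sketch, with entirely invented data and group labels (nothing here comes from the article), showing how a model trained on biased historical hiring decisions reproduces that bias in its predictions:

```python
# Hypothetical "garbage in, garbage out" sketch: biased training labels
# produce a biased model, even though the two groups are equally skilled.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

group = rng.integers(0, 2, n)        # 0 = group A, 1 = group B (invented)
skill = rng.normal(0, 1, n)          # true ability, identical across groups

# Biased historical labels: at the same skill level, group B was hired less often
hired = (skill + 1.0 * (group == 0) + rng.normal(0, 1, n)) > 1.0

model = LogisticRegression().fit(np.column_stack([group, skill]), hired)

# Two equally skilled candidates, differing only in group membership
p_a = model.predict_proba([[0, 1.0]])[0, 1]
p_b = model.predict_proba([[1, 1.0]])[0, 1]
print(f"P(hired | group A) = {p_a:.2f}")
print(f"P(hired | group B) = {p_b:.2f}")  # lower, purely because the inputs were biased
```

The model isn’t “racist” in any deliberate sense; it is simply faithful to flawed inputs, which is exactly the point.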

2 Confirmation and Selection Bias

This refers to a tendency to search for or interpret information in a way that confirms one’s preconceptions, and to ignore all of the disconfirming evidence. Whatever argument I have, I could probably find supporting evidence for it depending on how I choose to select and interpret the data.

I might want to argue that people do their best work at night, and to support my case I might sample only night owls, excluding everybody else, including early birds who do their best work in the morning.
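Here is a toy Python simulation of that night-owl survey, with made-up numbers, just to show how a biased recruitment channel flips the conclusion:

```python
# Toy selection-bias simulation: the population is split roughly 50/50,
# but recruiting respondents from a late-night forum "proves" otherwise.
import random

random.seed(42)

# Invented population: half do their best work in the morning, half at night
population = [{"best_time": random.choice(["morning", "night"])} for _ in range(100_000)]

def share_night(people):
    return sum(p["best_time"] == "night" for p in people) / len(people)

# An unbiased random sample tells the true story
unbiased = random.sample(population, 1_000)
print(f"Unbiased sample: {share_night(unbiased):.0%} do their best work at night")

# A biased sample: in this toy model, night owls are nine times more likely
# to be found on the late-night forum where respondents are recruited
weights = [9 if p["best_time"] == "night" else 1 for p in population]
biased = random.choices(population, weights=weights, k=1_000)
print(f"Biased sample:   {share_night(biased):.0%} do their best work at night")
```

Same population, two very different headlines, and the only thing that changed was who got asked.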

3 Cherry Picking

Closely related to selection bias, cherry-picking is all about presenting the results of a study or experiment that best support an argument, instead of reporting all of the findings. If only 10% of your results support your argument, but the other 90% don’t, it’s tempting and common to report only the 10%, especially if it leads to said reputation, influence, money and power.

The same goes with referencing studies that support your position on something like vaccines, but ignoring all of the studies that don’t.

4 Confusing Correlation with Causation

Just because A and B correlate, it does not mean that A causes B.

My friend, Daniel Cannizzaro, recently raised $1 million for his fintech company, at around the same time that Victoria’s coronavirus case numbers started to plummet.

There is an inverse correlation here. I could make the wild claim that the Victorian Government should invest millions into his startup and see case numbers drop to zero, but that would just be stupid. The two clearly have nothing to do with each other. Correlation does not mean causation.
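If you want to see how easily such a “relationship” can be manufactured, here is a toy Python example with invented figures (they are not the real funding or case numbers); the two series have nothing to do with one another, yet the correlation coefficient is strongly negative:

```python
# Spurious correlation sketch: two unrelated, made-up series that both
# happen to trend over the same ten weeks correlate almost perfectly.
import numpy as np

# Hypothetical cumulative capital raised by a startup ($m), trending up
capital_raised = np.array([0.1, 0.2, 0.3, 0.5, 0.6, 0.7, 0.8, 0.9, 0.95, 1.0])

# Hypothetical daily new COVID-19 cases over the same weeks, trending down
new_cases = np.array([700, 650, 500, 420, 300, 220, 150, 90, 55, 40])

r = np.corrcoef(capital_raised, new_cases)[0, 1]
print(f"Correlation: {r:.2f}")  # strongly negative, yet neither causes the other
```

Any two series that merely trend over the same period will correlate like this, which is precisely why correlation alone proves nothing about causation.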

5 P-Hacking

A p-value is the probability of seeing a result at least as extreme as the one observed if there were no real effect. By convention, a result with a p-value below 0.05 is called ‘statistically significant’, and findings generally need to clear that bar to be taken seriously and published.

Scientists can engage in the conscious or subconscious manipulation of data in a way that produces the desired p-value and therefore a ‘statistically significant’ result, in order to get published and build their brand and reputation.

Just like teenagers are prone to do whatever it takes to rack up likes on Instagram, scientists are chasing citations, and many will do whatever it takes.
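To see how easily a ‘significant’ result can be conjured from nothing, here is a minimal Python simulation, using nothing but random noise and an invented 20-subgroup analysis:

```python
# P-hacking sketch: there is no real effect anywhere, but test enough
# subgroups and some comparison will clear p < 0.05 by chance alone.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
alpha = 0.05
n_subgroups = 20

significant = []
for i in range(n_subgroups):
    treatment = rng.normal(0, 1, 50)   # pure noise, no real effect
    control = rng.normal(0, 1, 50)     # pure noise, no real effect
    p = stats.ttest_ind(treatment, control).pvalue
    if p < alpha:
        significant.append((i, round(p, 3)))

print(f"'Significant' findings from pure noise: {significant}")
# Report only these, quietly drop the rest, and you have a publishable 'result'.
```

With a 5% false-positive rate per test, roughly one in twenty comparisons of pure noise will look significant, which is all a motivated researcher needs.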

6 Predatory Journals

Nowadays, there are numerous predatory journals that effectively operate on a ‘pay to play’ model.

Scientists can get all sorts of junk findings published in these journals, provided they pay up, and they can then point to their published papers to further their influence or careers.

For example, John McCool, a Seinfeld fan, submitted an article to the predatory Urology and Nephrology Open Access Journal. It was based entirely on the Seinfeld episode ‘The Parking Garage’, in which Jerry, unable to find the car in a shopping-centre garage, is ultimately forced to urinate in public and is caught doing so.

He pleaded his case by claiming that he would die of uromycitisis poisoning if he didn’t relieve himself. No such condition exists, but that didn’t stop the journal from accepting McCool’s mock paper, ‘Uromycitisis Poisoning Results in Lower Urinary Tract Infection and Acute Renal Failure’, provided, of course, that he fronted up with the $799 fee. He never did.

7 Peer Review

The fact that something has been peer-reviewed does not make it incontestable.

Usually, peer review amounts to a high-level review of the researchers’ methodology and a scan of the work for obvious errors or oversights; it does not amount to a complete replication of the entire study.

One systematic review into the practice found that peer review sometimes picks up errors and fraud by chance, but that it is generally not a reliable method for detecting fraud.

As Bergstrom and West put it, “peer reviewers make mistakes and they cannot possibly check every aspect of the work…peer review cannot catch every innocent mistake, let alone uncover well-concealed acts of scientific misconduct”.

How to Refute Science and Data

The list goes on. These are just some of the many ways that science can indeed be wrong.

“The first thing to recognize is that any scientific paper can be wrong”, says Bergstrom.

Science can be and is gamed by researchers, politicians and entrepreneurs: basically, anybody who has a vested interest in the science telling a specific story that benefits them.

So the next time someone pulls out the “because science” card, be they a politician or your friend, realise that it’s not the end of the conversation, but the start of it.

You might want to question:

  • the publication (is it credible? even so, it might be part of the replication crisis)
  • the number of studies (was this just an isolated study or is there a massive body of literature that more or less comes to the same conclusion?)
  • who conducted the study and what their vested interests are
  • the source data and how it was interpreted
  • whether correlation is being confused with causation
  • whether visual charts are being presented in a way to tell a story (charts can be hacked to tell almost any story by zooming in, zooming out, changing the x and y-axis scale, and so on; a quick sketch of this follows this list)
  • the myriad ways data can be fudged — through selection bias, cherry picking, p-hacking and so on — to come to a specific conclusion
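On the point about charts above, here is a quick Python (matplotlib) sketch with invented figures showing how the same data can look like barely any change or a dramatic surge, depending entirely on where the y-axis starts:

```python
# Chart-hacking sketch: identical data, two very different stories,
# achieved purely by truncating the y-axis.
import matplotlib.pyplot as plt

labels = ["Last week", "This week"]   # invented figures for illustration
values = [1000, 1020]                 # a ~2% difference

fig, (honest, hacked) = plt.subplots(1, 2, figsize=(8, 3))

honest.bar(labels, values)
honest.set_ylim(0, 1200)
honest.set_title("Axis starts at zero: barely any change")

hacked.bar(labels, values)
hacked.set_ylim(995, 1025)
hacked.set_title("Truncated axis: looks like a surge")

plt.tight_layout()
plt.show()
```

Nothing about the underlying numbers changes between the two panels; only the framing does.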

Final Thoughts

If you know what to look for, you can call bullshit quite easily and have people scrambling to change the subject. But rather than let them, engage in a conversation instead.

In our efforts to be right and win arguments, finding out what’s actually more right and improving our world view often becomes the victim.

The more informed we are about how the world works, and that includes how science works, the better our decisions will become.

Pick up Calling Bullshit, the most important book of the year as far as I’m concerned, here.

Posted September 8, 2020