By the numbers: when you should NOT do the experiment.

Doing scientific research on medical treatments helps us to know whether or not they work. That seems obvious, but there are times when a scientist may not want to test whether or not something works. Let me explain.

It may seem obvious that we should not conduct medical experiments on treatments whose potential harms are plain, like castration for depression. But what about interventions that seem benign at worst, like your best friend's mom's cure-all tea or Reiki?

What is the harm in doing research on anything and everything out there? Great question! The answer is this: the less likely something is to be measurably true, the more likely one is to find misleading results. Ergo, only research things that are measurable and likely to be true. And even then, you need to replicate to make sure you are not fooling yourself.

When the result is likely only to increase confusion and explain nothing, don’t do the experiment.

Dr. O

False positives. When doctors give screening tests to patients, the tests are good at flagging anyone who might possibly have a condition. However, they are often too good! That means they produce a lot of false positives. That is why doctors usually give these tests only to people who already have a high prior probability of having the condition.

For example, say there is an island that no one has left and no one has entered since the beginning of the COVID-19 pandemic. The likelihood that anyone on that island has COVID-19 is practically zero. If we were to give COVID screening tests to the entire island, some would give false positive results. Now the inhabitants are forced to socially distance themselves because of false positives.

Now consider giving the same test to the same number of people in a place where there are a lot of SARS-CoV-2 infections. In that case, the positive tests are far less likely to be false positives and far more likely to be true positives.
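To put rough numbers on the island example, here is a minimal Python sketch using Bayes' rule. The 95 percent sensitivity and specificity figures are illustrative assumptions, not the performance of any particular COVID-19 test.

```python
# Minimal sketch: how the prior probability (prevalence) drives the chance
# that a positive screening test is actually a true positive.
# Sensitivity and specificity values are assumed for illustration only.

def positive_predictive_value(prevalence, sensitivity=0.95, specificity=0.95):
    """Probability that a positive result is a true positive (Bayes' rule)."""
    true_positives = prevalence * sensitivity
    false_positives = (1 - prevalence) * (1 - specificity)
    return true_positives / (true_positives + false_positives)

# Isolated island: essentially no one is infected.
print(positive_predictive_value(prevalence=0.0001))  # ~0.002, nearly every positive is false

# Community in the middle of an outbreak: 20% of those tested are infected.
print(positive_predictive_value(prevalence=0.20))    # ~0.83, most positives are real
```

Same test, same error rates; the only thing that changed is the prior probability.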

The same is true of scientific research. To vastly oversimplify, most medical research that compares a real treatment to a fake treatment accepts a statistical baseline 5 percent chance of finding an effect when one does not exist. It's called the alpha, or the acceptable false positive rate, because, well, we have to pick one, so we did.

Basically, by performing tests at all, you are guaranteed to get some false positives, so if the hypothesis is unlikely to be true in the first place, you are greatly increasing the relative probability that any positive you get is a false one. You are turning a 1 in a bazillion chance into a 1 in 20 chance.
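Here is a small Python sketch of that arithmetic. The 80 percent statistical power and the prior probabilities are illustrative assumptions; only the 5 percent alpha comes from the convention described above.

```python
# Sketch: among studies that come back "positive," what fraction are false?
# alpha is the conventional 5% false positive rate; the power and prior
# probabilities below are assumptions chosen for illustration.

def share_of_positives_that_are_false(prior_true, alpha=0.05, power=0.80):
    true_positives = prior_true * power          # real effects that get detected
    false_positives = (1 - prior_true) * alpha   # non-effects that "pass" anyway
    return false_positives / (true_positives + false_positives)

# A plausible, well-motivated treatment: say a 50/50 prior chance it works.
print(share_of_positives_that_are_false(prior_true=0.5))   # ~0.06

# A "1 in a bazillion" hypothesis, here 1 in a million.
print(share_of_positives_that_are_false(prior_true=1e-6))  # ~0.99998, essentially every positive is false
```

The lower the prior plausibility, the less a "statistically significant" result actually tells you.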

So let's apply this to Reiki (or other immeasurable types of energy therapy), which is what happens when a Reiki practitioner channels god-consciousness through you. It is immeasurable, and therefore the probability of it being scientifically valid is very low. In fact, it is nearly infinitely small. In this respect, it is on the same level as channeling the spirit of King Arthur's sword, Excalibur. And few people would be lining up for that, because it is sold as fantasy.

So here is the powerful reason why research into highly unlikely hypotheses should not even be conducted in the first place: any result is only likely to increase confusion and explain nothing.

However, when something is being pushed into the public realm and many people are spending money and time seeking the treatment, then no matter how unlikely it is to be helpful, putting it to the test may be necessary. These tests need to be carefully designed to mimic the actual therapy closely, and they need to be conducted with the knowledge that, statistically, a positive result is much more likely to be a false positive than a true positive.

In addition to this random confusion, there are many biases that can be introduced into an experiment. These are IN ADDITION to the already inflated chance of a false signal described above, and they push the odds of a false positive even higher.

  • Blinding bias. If a patient knows they are getting the fake treatment, they may report that it is not working, even if it is. If the patient does not know but the researcher giving the fake treatment does, the patient is still likely to report less of an effect. The opposite is true for the actual treatment: knowing that one is getting the “real” treatment may produce a placebo response, feeling better despite no actual effect. Even the person asking the patient how they feel can influence the result if they know whether or not the patient got the fake treatment. These are biases that occur because of a lack of blinding. The people giving, receiving, and measuring the treatment should all be blinded.
  • Publication bias. There are other ways to give false confidence that an ineffective treatment actually works. One is to publish only the positive trials and let the negative ones find their way to the paper shredder. This is called publication bias, and it is especially effective when the false positive rate is higher than the true positive rate.
  • Selection bias. All you have to do is select patients who are likely to give you a certain result. Let’s say you are studying a surgery and you only choose patients who are likely to do better whether or not they get the surgery. This is selection bias.
  • Randomization bias. A sneaky way to get false positive results is to not randomize patients between the experimental treatment and the fake treatment. This is another way of stacking the deck in your favor, and it is called randomization bias.
  • P-hacking. There are many other ways to introduce false results. One common way is for a researcher to keep running an experiment and re-analyzing the data over and over until they get a “positive” result, and then stop. This is a form of p-hacking, because “p” is the number that tells a researcher whether their results are significant or not. There are many other ways to p-hack that I won’t cover here; a simulation of the repeated re-analysis trick is sketched just after this list.
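Here is a small Python simulation of that repeated re-analysis (“optional stopping”) form of p-hacking. Both groups are drawn from the same distribution, so the treatment truly does nothing; the batch size, maximum sample size, and number of looks are assumptions chosen only for illustration.

```python
# Simulate p-hacking by optional stopping: add patients in batches and
# run a t-test after every batch, stopping as soon as p < 0.05.
# The "treatment" is pure noise, so every "success" is a false positive.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)

def peeking_finds_significance(max_n=100, batch=10, alpha=0.05):
    treated, control = [], []
    while len(treated) < max_n:
        treated.extend(rng.normal(0.0, 1.0, batch))   # fake "treatment" group
        control.extend(rng.normal(0.0, 1.0, batch))   # identical control group
        if ttest_ind(treated, control).pvalue < alpha:
            return True   # declare victory and stop the trial early
    return False

trials = 2000
false_positive_rate = sum(peeking_finds_significance() for _ in range(trials)) / trials
print(false_positive_rate)  # well above the nominal 5%
```

Each individual test honors the 5 percent alpha, but getting to peek repeatedly and stop on a win inflates the overall false positive rate considerably.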

I hope this helps you understand why scientists may not even do an experiment in the first place, or why they may not believe the results of an experiment. The take-home is that you can pick your poison, or your fake poison, and then make it medicine with a wave of your wand. It is so easy to do this that scientific journals are full of the stuff. They hope that the ethics of the researchers are aligned with finding actual measures of truth and that the reviewers will identify the nonsense. Unfortunately, as the pool of nonsense grows larger, it gets harder and harder to filter.
