Do you feel bombarded with medical news and not sure who to trust?
Clinical trials are a mystery to you? And now you see them all over the news?
In this article, I am not going to tell you what to believe; I am going to show you how to critically appraise medical literature.
This list will help you better understand the information regarding clinical trials and studies that hit the headlines and get shared over social media.
Here I summarise what you need to look for in a medical study that will make it worth your time.
Due to the urgency of publishing COVID-19 studies, peer review is often postponed.
Knowing how to critically appraise medical literature is an especially important skill today, amidst the coronavirus crisis.
Almost all COVID-19 papers are now going through the fast-track line, or are even published before peer review, as preprints.
The consequence is that some studies are not of the highest quality.
More than 4500 clinical trials on COVID-19
Currently, there are more than 4500 clinical trials on COVID-19 listed in clinicaltrials.gov. (Update January 2021: when I first published this article in May 2020, there were around 1500 trials; the number has tripled in 8 months!)
Furthermore, the number of publications on COVID-19 doubles every 2 weeks.
This means that soon medical communicators will need to evaluate, translate and communicate results that could save the lives of thousands.
And the public is waiting for this information more than ever.
Not only do you need to choose your sources carefully, you also need to go deep and fast into the data without drowning in it.
Even in normal times, it is not easy.
The truth is that many published medical papers have serious flaws, even after detailed peer review in regular, non-pandemic times.
With publications still being the currency of science, there is always a risk of flaws when it comes to publishing.
Predatory journals are a real threat to science
Furthermore, predatory journals are another threat that scientific communications face.
Is it possible to know if the study is serious?
Meanwhile, how can you detect flaws in a study? What study results are relevant?
It is surely possible when you pay critical attention to the parts of the study.
Let’s check out the groundwork of a clinical study.
Scientific evidence hierarchy
First, you need to know that not all scientific evidence is equal.
What this means is: there is a hierarchy when it comes to scientific data.
Accordingly, meta-analyses that evaluate several clinical trials carry more weight than a single clinical trial, which in turn carries more weight than experiments on animals or cells.
Each type of evidence has its own importance and limitations.
How can we tell whether a study is trustworthy?
Introducing the Study Design.
The study design is the essence of a study.
Here is where the authors plan and put together all the questions and how they will approach them.
Therefore, proper planning is the foundation for good research.
Depending on the question the researchers want to answer, they will need to choose one or another type of study:
Observational studies refer to empirical, non-experimental research.
An observational study can only establish an association, not causation.
Most epidemiologic studies are observational.
- Cross-sectional: researchers collect data at a certain point in time.
Can be used as a guide to other studies and is easy to do.
Limitation: it is not possible to know what happened first or what will happen afterwards. It gives an idea, not the big picture.
- Case-control: researchers look at people after the disease (case-participants) and compare them to people that do not have the disease (control participants).
It is a retrospective study based on information that patients can provide.
Limitation: the answers of participants may not be accurate.
- Cohort: researchers follow a group of people over time to see whether any of them develop the disease.
This is a prospective study and the most reliable type of observational research.
Limitation: requires more effort, time and resources.
Experimental (interventional) studies are a synonym for clinical trials.
Researchers provide participants with treatment or placebo and measure the outcomes of the therapy.
- Uncontrolled: useful for cancer or other diseases that do not tend to resolve on their own.
Limitation: there is no negative control to compare with and is not helpful for many other conditions.
- Historical control: data of the new study is compared to old data obtained from different research.
It is used in trials where it would be unethical to have a control group, for example, cancer patients.
Limitation: the availability, quality and nature of historical data limit the conclusions.
- Parallel group: one group of patients receives treatment A, and another group gets treatment B.
Limitation: there may be ethical issues regarding the allocation of the drug.
- Crossover studies: one group receives treatment A, another group gets treatment B, then the groups switch treatments. Each patient serves as their own control.
Statistically, this is the most reliable type of study.
Nevertheless, it is not suitable for curative treatments, treatments with long-lasting effects like vaccines and unstable diseases (worsening, improving or randomly fluctuating).
Most often used for healthy volunteer studies and it does not need a lot of participants.
However, it only works for diseases where symptoms return when you withdraw the medicine, like Parkinson's or other chronic conditions. Limitation: patients who get the medication in the second half of the study might already be cured or might be at a different stage of the disease.
These are essential parts of a study with a direct influence on its reliability.
What was measured in the study? Was it relevant for patients?
Outcomes in morbidity, mortality, quality of life, or tumour size are crucial for the patient.
Check if the measurements taken in the study are relevant for patients.
Usually, you can find them in the objectives of the study.
Bias is a systematic error that researchers may or may not account for in the study.
More importantly, it makes results deviate from the truth.
Many known and unknown variables cannot be easily controlled, which makes bias a massive problem in epidemiological studies.
Types of Bias
Bias can take many forms: there are some 56 forms described.
Types of bias include:
- publication bias (positive results publish better),
- recall bias (patients may not remember accurately),
- allocation bias (randomisation error),
- observer bias (not properly blinded),
- and selection bias.
Example of selection bias: in a clinical trial for patients with a cognitive disorder, patients received a very complicated piece of information about the test.
Most probably, people with the disease did not enrol because they did not understand the information.
Confounding is a false correlation between two unrelated variables as a result of relationships between both variables and a third variable.
- The patient takes medicine X, feels better but gains weight.
- Question: does medicine X make patients gain weight?
- Answer: No, because there is an alternative explanation for what we observe: the patient feels better, eats more, gains weight.
Correlation is not causation.
Therefore, confounding is another massive problem in epidemiological studies and a mistake everybody makes.
Fortunately, randomised trials are less prone to confounding, but they are not immune either.
With a critical appraisal, one can find third variables that are not considered in the study like gender, personality, IQ, time of day, etc.
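The medicine-and-weight example above can be sketched as a tiny simulation. This is a purely hypothetical illustration (the variable names and the "feeling better" mechanism are invented for the example): both the apparent response to the medicine and the weight gain are driven by the same third variable, so they correlate strongly even though neither causes the other.

```python
import random

rng = random.Random(1)

def pearson(xs, ys):
    # Plain Pearson correlation, no external libraries needed
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

n = 2000
# "Feeling better" is the hidden third variable (the confounder):
feels_better = [rng.random() for _ in range(n)]
# It drives both the apparent response to the medicine...
medicine_response = [f + 0.3 * rng.random() for f in feels_better]
# ...and the weight gain (via increased appetite):
weight_gain = [f + 0.3 * rng.random() for f in feels_better]

# Strong correlation, although the medicine never caused the weight gain
print(pearson(medicine_response, weight_gain))
```

Adjusting for "feeling better" would make this correlation disappear, which is exactly what statisticians try to do when a confounder is known.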
How to get rid of Bias and Confounding?
Again, with proper study design.
Also, statistical analysis can adjust for confounding if the variable is known, although a good study design is more important.
As a matter of fact, statisticians and epidemiologists are continually looking for the confounding variable of a study.
Design features of clinical trials that aim at reducing bias and confounding:
- Single-blind: patients do not know whether they get treatment or placebo, but researchers do.
- Double-blind: neither patients nor researchers know who receives treatment or placebo.
- Triple-blind: independent researchers, for example, a radiologist who analyses some data, also do not know who gets treatment or placebo.
If the study was blinded, check out how the researchers did it:
- Were the pills different? Colour, form, flavour, etc.
- Were the treatments different? For example, one group received massage, the other did exercise.
- Was it possible to do a blind study at all? For example, researchers tested whether touching a dog or a cat was more helpful (you cannot make the pets invisible).
- Was there a mild side effect that only patients within the treatment group showed? This might signal to everyone that patient X was receiving the treatment.
Participants of the study are assigned to treatment or control groups by chance.
Randomisation is the essential feature of clinical trials and ensures that groups are balanced for known and unknown variables.
Notably, randomised trials are less prone to bias, although there is always the possibility of accidentally introducing some type of bias.
Check out how randomisation was done:
- Random-number table?
- Computerised random number generator?
- Third-party randomisation?
- Are participants intentionally allocated in unequal numbers to each intervention?
Usually, third-party randomisation is preferred, although it may be very expensive, especially for small preliminary studies.
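As an illustration, a computerised randomisation could be sketched like this in Python. This is a minimal, hypothetical example: real trials use validated systems with features such as block randomisation and allocation concealment.

```python
import random

def randomise(participants, seed=None):
    # Shuffle the participant list by chance and split it in half.
    # Fixing a seed makes the allocation reproducible for auditing.
    rng = random.Random(seed)
    shuffled = list(participants)
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]  # (treatment, control)

# Hypothetical participant IDs, invented for the example
participants = [f"P{i:02d}" for i in range(20)]
treatment, control = randomise(participants, seed=42)
print(treatment)
print(control)
```

Because the split is left entirely to chance, known and unknown variables tend to balance out between the two groups as the sample grows.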
Statistical methods, if you insist
Ideally, a single trial should use only one statistical test.
Fact: on average, 1 in 20 statistical tests done on completely random data will be significant at the conventional p < 0.05 threshold.
This means that impressive results in subgroups can be random.
Multiple testing is bad statistical practice
Certainly, multiple testing and subgroup analysis are especially prone to false conclusions.
If multiple testing is unavoidable or if it has an excellent biological rationale, then corrections like Bonferroni can be used.
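The 1-in-20 figure and the Bonferroni fix can be checked with a quick simulation. This is a sketch, not a full trial analysis: it relies on the fact that under the null hypothesis (pure noise), p-values are uniformly distributed, so we can draw them directly.

```python
import random

rng = random.Random(0)
n_tests, alpha = 10_000, 0.05

# Under the null hypothesis, p-values are uniform on [0, 1]
p_values = [rng.random() for _ in range(n_tests)]

false_pos = sum(p < alpha for p in p_values)
print(f"Uncorrected: {false_pos}/{n_tests} 'significant'")  # roughly 1 in 20

# Bonferroni correction: divide alpha by the number of tests performed
bonferroni = sum(p < alpha / n_tests for p in p_values)
print(f"Bonferroni-corrected: {bonferroni}/{n_tests}")  # almost none
```

This is why an impressive result found in the 17th subgroup of a trial deserves extra scepticism: with enough tests, chance alone will produce "significant" findings.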
Besides, a result can be statistically significant, but the clinical significance is more important.
Furthermore, absolute risks are more important than relative risks.
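A small worked example shows why: a drug that "halves the risk" (relative) may still benefit only 1 patient in 100 (absolute). The numbers below are invented purely for illustration.

```python
# Invented numbers for illustration only
control_risk = 0.02  # 2% of untreated patients have the event
treated_risk = 0.01  # 1% of treated patients have the event

relative_risk_reduction = (control_risk - treated_risk) / control_risk
absolute_risk_reduction = control_risk - treated_risk
number_needed_to_treat = 1 / absolute_risk_reduction

print(f"Relative risk reduction: {relative_risk_reduction:.0%}")  # 50%: sounds impressive
print(f"Absolute risk reduction: {absolute_risk_reduction:.1%}")  # only 1 percentage point
print(f"Number needed to treat:  {number_needed_to_treat:.0f}")   # 100 patients per benefit
```

Headlines usually quote the relative number because it is bigger; the absolute number tells you how many patients actually benefit.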
Fast checklist for clinical studies
This list is to help you know what you are looking for in a study.
It will also be a first step to help you elaborate on your conclusions.
How relevant is the study?
Once you can identify all the critical aspects of clinical studies, you can have an overview of its relevance.
By identifying the critical components of the study design, sources of bias, or confounding, it is possible to understand how much information can be used as a take-home message from the study.
Randomised clinical trials are the gold standard in medical research.
But even they can have serious flaws, like bad randomisation or no real blinding.
Every study has its own value, range of applicability and limits.
What about your personal bias?
And last, be aware of your own bias.
Do you want the study to be valid? Maybe you are overlooking some fundamental flaws.
Be critical, think, and look for the key parts of the study.
Based on EMWA workshop by Adam Jacobs, November 2019 Malmö