Children’s Health Defense notes that there are some glaring examples of mainstream COVID articles in which the grand conclusions do not follow from the body of argument in the papers. One important example is the rejection of the lab leak hypothesis by an authoritative statement in the journal Nature Medicine, followed by a summary in Science and an affidavit in the Lancet signed by a list of prominent scientists.
There was in fact only one very weak argument given in the original Nature Medicine article: the virus’s spike protein was not a perfect fit to the human ACE-2 receptor. The implicit assumptions are that a lab-designed virus would have had a perfect fit, and that natural evolution does not produce such a fit. Both assumptions need to be supported by evidence and argument, but neither was even articulated in the article. We now know that an array of scientists at the time wanted to shield communist China from investigation.
Trust science? Sure, but it is another thing to blindly trust scientists, who are controlled by those who pay their bills. But what is science other than that which scientists do?
“It was January 2020, the very beginning of COVID, when news articles began appearing that connected the genetics of the virus with gain-of-function research on bat coronaviruses at the Wuhan Institute of Virology.
These speculations were put to rest by an authoritative statement in the prestigious journal Nature Medicine, echoed by a summary in Science and an unusual affidavit in the Lancet signed by an impressive list of prominent scientists.
The message in the Nature Medicine article was dispositive: “Our analyses clearly show that SARS-CoV-2 is not a laboratory construct or a purposefully manipulated virus.”
But where was the support for this confident conclusion in the article itself?
The 2,200-word article in Nature Medicine (Andersen et al.) contained a lot of natural history and sociological speculation, but only one tepid argument against laboratory origin: that the virus’s spike protein was not a perfect fit to the human ACE-2 receptor.
The authors expressed confidence that any genetic engineers would certainly have computer-optimized the virus in this regard, and since the virus was not so optimized, it could not have come from a laboratory. That was the full content of their argument.
Most readers, even most scientists, take in the executive summary of an article and do not wade through the technical details. But for careful readers of the article, there was a stark disconnect between the CliffsNotes and the novel, between the article’s succinct (and specious) conclusion and its detailed scientific content.
This was the beginning of a new practice in the write-up of medical research. Recent revelations in the Fauci/Collins emails shed light on the origins of this tactic and the motives behind it.
In the past, if a company wanted, for example, to make a drug look more effective than it really was, it would choose a statistical technique that masked its downside, or it would tamper with the data.
What companies would not do, in the past, was describe the results of a statistical analysis that proves X is false, then publish it with an Abstract that claims X is true.
But this strange practice has become more common in the last two years. Academic papers are being published in which the abstract, the discussion section and even the title flatly contradict the content within.
Why is this happening? There are at least three possibilities:
- The authors cannot understand their own data.
- The authors are being impelled by the editorial staff to arrive at conclusions that match the ascendant narrative.
- The authors and editors realize the only way to get their results into publication is to avoid a censorship net that gets activated by any statement critical of vaccination efficacy or safety.
Before reaching any conclusions, let’s take a closer look at some examples of this troubling phenomenon arising in what should be the foundation of what is known: published scientific data.
In this article, we present five different published studies. Each, to varying degrees, exemplifies a disconnect between the data and the conclusions.
Example 1: ‘Phase I Study of High-Dose L-Methylfolate in Combination with Temozolomide and Bevacizumab in Recurrent IDH Wild-Type High-Grade Glioma’
This example is unrelated to the pandemic, but it typifies a common practice in the pharma-dominated world of medical research. If a remedy is cheap and out of patent, there is no one motivated to study its efficacy.
But research practice has gone well beyond neglect. In fact, investigators are skewing statistics to make cheap, effective treatments look ineffective if they are in competition with expensive pharma products.
This is ridiculously easy to do — all it requires is incompetence. Using the wrong statistical test, using a weak test when a stronger one applies — or just about any mistake in parsing the data — is far more likely to make compelling data appear random than the opposite.
Is it always incompetence? Or is it more often a well-thought-out deception that uses seemingly erudite analysis to lead the undiscerning reader into believing the wrong conclusion?
In the case of this article, a simple B vitamin (L-Methylfolate) was shown to double the life expectancy of 6 out of 14 brain cancer patients who received it, while showing no benefit (and no harm) to the remaining patients.
In the study’s survival plot, the purple jagged line extending out to the right represents the roughly 40% of patients who lived dramatically longer when treated with L-Methylfolate (LMF).
The abstract reports that “LMF-treated patients had median overall survival of 9.5 months [95% confidence interval (CI), 9.1–35.4] comparable with bevacizumab historical control 8.6 months (95% CI, 6.8–10.8).”
The increase in median survival time is just a few months and not statistically significant. But the average survival time of the folate-treated group was more than double, and the difference was statistically significant (by my calculation, not in the article).
But it is the median that is reported in the abstract, and most readers don’t understand the difference between the median and the average.
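The median-versus-average point can be made concrete with a small sketch. The survival times below are invented for illustration only (they are not the study’s data); they mimic the pattern described above, where roughly half the treated patients respond dramatically. In such a bimodal distribution, the median barely moves while the mean roughly doubles:

```python
# Hypothetical survival times in months -- illustrative only, NOT the study's data.
# Control arm: no long-term survivors. Treated arm: about half respond dramatically.
control = [6, 7, 8, 9, 9, 10, 10, 11, 11, 12, 12, 13, 14, 15]
treated = [6, 7, 8, 9, 9, 10, 10, 11, 24, 28, 30, 34, 38, 42]

def median(xs):
    """Middle value of a sorted list (average of the two middle values if even)."""
    s = sorted(xs)
    n = len(s)
    mid = n // 2
    return s[mid] if n % 2 else (s[mid - 1] + s[mid]) / 2

def mean(xs):
    """Arithmetic average."""
    return sum(xs) / len(xs)

print(f"median: control={median(control):.1f}, treated={median(treated):.1f}")
print(f"mean:   control={mean(control):.1f},  treated={mean(treated):.1f}")
# Both medians are 10.5, yet the treated mean (19.0) is nearly double
# the control mean (10.5) -- the responders are invisible to the median.
```

A reader who sees only a "comparable median survival" in an abstract would never suspect that a large minority of patients fared dramatically better.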
The longest surviving patient on the B vitamin was still alive at the end of the study (3.5 years) when every one of the patients treated only with traditional chemo was dead before 1.5 years.
There were three different dosages in the study (30, 60 and 90 mg), and it was not reported whether the longest-living patients were receiving the highest dosages.
This is, in fact, a hugely promising pilot study about treating a common, fatal cancer with a simple vitamin. If it were an expensive chemotherapy drug instead of a cheap vitamin, you can be sure it would have been hailed as a breakthrough.
But this study will not create much excitement, and few oncologists will even know to prescribe methylfolate for their glioma patients.”