
Statistical issues in the analysis of BITs P2 trial

    This post is in response to @Southoz's post. I thought I would make it into a new thread so we can skip a lot of the other distracting posts. Southoz raised a number of issues, but to keep this focused I will deal with just the first of his claims at this point (we can deal with the others later if he wishes). This is what @Southoz wrote.

    I’ll have a crack at explaining it Davisite – because I also don’t see a problem with this figure and how BIT have applied that statistical test.

    From the figure, CD8 mean levels do appear to increase in the first few weeks in the BIT225 cohort compared to placebo. This fits with BIT's theory that those patients were exposed to a unique antigen source.

    And at the third time point, guess what, these mean levels were statistically different. Funnily enough this was tested using a Welch t-test, which is more conservative than a Student t-test and less prone to Type I error.

    This all seems reasonable, and on the surface it appears that @Southoz has refuted what I have written. The only problem is that I was not suggesting an inappropriate statistical test was performed by BIT when analysing the CD8 data, but that they were engaging in post hoc analysis (also known as data dredging). What I was suggesting is that no statistical tests should have been applied to the CD8 (and CD4) data, because no pre-specified hypothesis was registered for these data.
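    As an aside, for readers unfamiliar with the test Southoz mentions: Welch's t-test differs from the Student (pooled) t-test only in that it does not assume the two groups have equal variances. A minimal pure-Python sketch of the statistic (any numbers you feed it are your own; nothing here is BIT's data):

```python
import math

def welch_t(a, b):
    """Welch's t statistic and Welch-Satterthwaite degrees of freedom.

    Unlike Student's pooled t-test, no equal-variance assumption is made.
    """
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)  # unbiased sample variances
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    se2a, se2b = va / na, vb / nb                  # squared standard errors
    t = (ma - mb) / math.sqrt(se2a + se2b)
    # Welch-Satterthwaite approximation for the degrees of freedom
    df = (se2a + se2b) ** 2 / (se2a ** 2 / (na - 1) + se2b ** 2 / (nb - 1))
    return t, df
```

    With equal variances and equal sample sizes the Welch degrees of freedom coincide with the pooled n1 + n2 - 2; with unequal variances they shrink, which is what makes the test more conservative. None of this, of course, bears on whether the test should have been run at all, which is the actual issue.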

    What is data dredging and why is it bad?

    In a scientific study, "post hoc" analysis (from Latin post hoc, "after this"), also known as "data dredging", consists of statistical analyses that were not specified before the data were seen. A humorous but very revealing example of this is described by Siddhartha Mukherjee in the NY Times [1].

    This kind of search-and-rescue mission is called “post hoc” analysis. It’s exhilarating — and dangerous. On one hand, it promises the possibility of resuscitating the medicine: Find the right group of responsive patients within the trial group — men above 60, say, or postmenopausal women — and you can, perhaps, pull the drug out of the rubble of the failed study.

    But it’s also a treacherous seduction. The reasoning is fatally circular — a just-so story. You go hunting for groups of patients that happened to respond — and then you turn around and claim that the drug “worked” on, um, those very patients that you found. (It’s quite different if the subgroups are defined before the trial. There’s still the statistical danger of overparsing the groups, but the reasoning is fundamentally less circular.) It would be as if Sacks, having found that the three long-term responders to L-dopa happened to be 80-year-old women from one nursing home, then published a study claiming that the drug “worked” on Brooklyn octogenarians.

    Perhaps the most stinging reminder of these pitfalls comes from a timeless paper published by the statistician Richard Peto. In 1988, Peto and colleagues had finished an enormous randomized trial on 17,000 patients that proved the benefit of aspirin after a heart attack. The Lancet agreed to publish the data, but with a catch: The editors wanted to determine which patients had benefited the most. Older or younger subjects? Men or women?

    Peto, a statistical rigorist, refused — such analyses would inevitably lead to artifactual conclusions — but the editors persisted, declining to advance the paper otherwise. Peto sent the paper back, but with a prank buried inside. The clinical subgroups were there, as requested — but he had inserted an additional one: “The patients were subdivided into 12 ... groups according to their medieval astrological birth signs.” When the tongue-in-cheek zodiac subgroups were analyzed, Geminis and Libras were found to have no benefit from aspirin, but the drug “produced halving of risk if you were born under Capricorn.” Peto now insisted that the “astrological subgroups” also be included in the paper — in part to serve as a moral lesson for posterity. I’ve often thought of Peto’s paper as required reading for every medical student.

    The reason post hoc analysis is unreliable is that if you torture any data for long enough you will find some relationship that appears to be statistically significant but has really occurred only by chance. It is for this reason that regulatory agencies (like the FDA and TGA) require hypotheses to be pre-specified before the trial and will not accept post hoc analyses.
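    The multiple-comparisons problem behind this is easy to demonstrate with a small simulation (pure Python, illustrative numbers only). Here 100 hypothetical endpoints are tested where the "drug" has, by construction, no effect at all:

```python
import math
import random

random.seed(1)  # fixed seed so the demonstration is reproducible

def z_test_p(a, b):
    """Two-sided p-value for a difference in means (large-sample z approximation)."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    z = (ma - mb) / math.sqrt(va / na + vb / nb)
    return math.erfc(abs(z) / math.sqrt(2))  # P(|Z| > |z|) under the null

# 100 endpoints, all pure noise: both "arms" are drawn from the same
# distribution, so the true effect is zero for every endpoint.
hits = 0
for _ in range(100):
    arm_a = [random.gauss(0, 1) for _ in range(50)]
    arm_b = [random.gauss(0, 1) for _ in range(50)]
    if z_test_p(arm_a, arm_b) < 0.05:
        hits += 1
print(f"{hits} of 100 null endpoints came out 'significant' at p < 0.05")
```

    On average about 5 of the 100 null endpoints will clear p < 0.05 by chance alone, which is exactly why an unregistered hypothesis that "turns out" to be significant carries so little weight.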

    What were the pre-specified hypotheses of the BIT-009 trial?

    The key to determining whether BIT has engaged in data dredging is to look at the registered (pre-specified) aims (hypotheses) of the BIT-009 trial. These are described in detail in the ANZCTR Registry [2] and were summarised on the BIT-009 trial poster.

    [Figure: pre-specified aims of the BIT-009 trial, from the trial poster (BIT009_Aims.png)]
    The primary aims were plasma viral load and safety. BIT225 proved to be safe in that no patients dropped out of the trial (tolerability was more debatable), but BIT failed to show any significant effect on viral load (i.e. the trial failed the primary objective).

    The secondary aims were the "impact" (whatever that means) of BIT225 on the sCD163 monocyte activation marker and the pharmacokinetics of BIT225 (this data has not been released). There were no pre-specified hypotheses as to the effect of BIT225 on either CD4 or CD8 activation.

    What does all this mean?

    BIT should not have engaged in any statistical analysis of the CD4 and CD8 data, since they did not pre-register any hypotheses for these data; by doing so they have engaged in data dredging. No conclusions should be drawn from the CD4 or CD8 data. Combined with the failure of BIT225 to have any effect on viral load, the only data of any significance was the sCD163 data. The sCD163 result was only marginally significant, with a p-value of 0.036. The clinical importance of the sCD163 change is unknown. While it is known that the level of sCD163 correlates with HIV disease progression [3], the effect of reducing sCD163 independently of reducing viral load on a patient's health is unknown. A large and complex trial would need to be undertaken to explore this question.
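    To put that 0.036 in context: if one allows that several markers were effectively being tested (CD4, CD8, sCD163, viral load), a standard Bonferroni adjustment for multiplicity removes even the marginal significance. The endpoint counts below are illustrative, since the exact number of tests BIT effectively ran is not public:

```python
# How the sCD163 p-value fares under a Bonferroni correction as the
# assumed number of tested endpoints grows. Endpoint counts here are
# illustrative, not a claim about BIT's actual analysis plan.
alpha = 0.05
p_scd163 = 0.036  # reported sCD163 p-value
for n_tests in (1, 2, 3, 4):
    threshold = alpha / n_tests  # Bonferroni-adjusted significance threshold
    verdict = "significant" if p_scd163 < threshold else "not significant"
    print(f"{n_tests} endpoint(s): threshold {threshold:.4f} -> {verdict}")
```

    At two or more endpoints the adjusted threshold drops below 0.036, so the sCD163 result only stands if you treat it as the sole test performed.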

    It is for each investor (or potential investor) in BIT to determine what a drug like BIT225 is worth, but so far no pharma company has leapt at the chance to fund further trials of BIT225. It is my opinion that none will on the basis of the data and analysis BIT has released.

    References
    1. https://www.nytimes.com/2017/11/28/magazine/a-failure-to-heal.html
    2. https://www.anzctr.org.au/Trial/Registration/TrialReview.aspx?id=372075
    3. https://www.frontiersin.org/articles/10.3389/fimmu.2017.01698/full