IMU 3.53% 8.2¢ imugene limited

The Importance of P values to clinical trials

    Hoping some of the more informed posters on this forum can shed some light on the HER-Vaxx Phase 2 trial and its interim results, and on why IMU decided on a pre-specified one-sided alpha of 0.10 (10%), against which the interim analysis achieved a p-value of 0.083 (8.3%).

    Is there any pre-trial guidance we can view to confirm that the pre-specified alpha was set at 0.1? I've searched the clinicaltrials.gov website but could not find details on the alpha in the trial design.

    Could this be the reason for the shorting of IMU, which started at the time of the interim results?

    A few links to help understand statistical significance and the meaning of p-values:

    https://www.investopedia.com/terms/s/statistically_significant.asp

    https://www.investopedia.com/terms/p/p-value.asp

    https://hotcopper.com.au/data/attachments/2861/2861504-961fbcf562584cb40e8f508401f0b657.jpg

    Some common misperceptions about p-values


    A p-value < 0.05 is perceived by many as the Holy Grail of clinical trials (as with most research in the natural and social sciences). It is greatly sought after because of its (undeserved) power to persuade the clinical community to accept or not accept a new treatment into practice. Yet few, if any, of us know why 0.05 is so sacred. Literature abounds with answers to the question, “What is a p-value?” and how the value 0.05 was adopted, more or less arbitrarily or subjectively, by R. A. Fisher in the 1920s. He selected 0.05 partly because of the convenient fact that in a normal distribution, the five percent cutoff falls around the second standard deviation away from the mean.[1]

    But little is written on how 0.05 became the standard by which many clinical trial results have been judged. A commentary[2] ponders whether this phenomenon is similar to the results from the “monkeys in the stairs” experiment, whereby a group of monkeys were placed in a cage with a set of stairs with some fruit at the top. When a monkey went on the steps, blasts of air descended upon it as a deterrent. After a while, any monkey that attempted to get on the steps was dissuaded by the group. Eventually, the monkeys were gradually replaced by new monkeys, but the practice of dissuasion continued, even when the deterrent was no longer rendered. In other words, the new monkeys were unaware of the reason why they were not supposed to go up the steps, yet the practice continued.

    In the following, I first review what a p-value is. Then, I address two of the many issues regarding p-values in clinical trials. The first challenges the conventional need to show p<0.05 to conclude statistical significance of a treatment effect; and the second addresses the misuse of p-values in the context of testing group differences in baseline characteristics in randomized trials. Many excellent papers and books have been published that address these topics; nevertheless, the intention of this paper is to revive and renew them (using a less statistical language) to aid the clinical investigators in planning and reporting study results.

    What is a p-value anyway?

    We equate p < 0.05 with statistical significance. Statistical significance is about hypothesis testing, specifically of the null hypothesis (H0), which states that the treatment has no effect. For example, if the outcome measure is continuous, the H0 may be that the group difference in mean response (Δ) is equal to zero. Statistical significance is the rejection of the H0 based on the level of evidence in the study data. Note that failure to reject the H0 does not imply that Δ=0 is necessarily true; just that the data from the study provide insufficient evidence to show that Δ≠0.

    To declare statistical significance, we need a criterion. The alpha (also known as the Type I error probability or the significance level) is that criterion. The alpha does not change with the data. In contrast, the p-value depends on the data. A p-value is defined as the probability of observing a treatment effect (e.g., group difference in mean response) as extreme or more extreme (away from the H0) if the H0 is true. Hence, the smaller the p-value, the more extreme or rare the observed data are, given the H0 to be true. The p-value obtained from the data is judged against the alpha. If alpha=0.05 and p=0.03, then statistical significance is achieved. If alpha=0.01 and p=0.03, statistical significance is not achieved. Intuitively, if the p-value is less than the pre-specified alpha, then the data suggest that the study result is so rare that it does not appear to be consistent with the H0, leading to rejection of the H0. For example, if the p-value is 0.001, it indicates that, if the null hypothesis were indeed true, then there would be only a 1 in 1,000 chance of observing data this extreme. So either very unusual data have been observed, or else the supposition regarding the veracity of the H0 is incorrect. Therefore, small p-values (less than alpha) lead to rejection of the H0.
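    The alpha-versus-p decision rule described above is mechanical enough to sketch in a few lines of Python (the function name is mine; the 0.083 vs 0.10 figures are the HER-Vaxx interim numbers discussed at the top of this thread):

```python
def is_significant(p_value: float, alpha: float) -> bool:
    """Reject H0 when the observed p-value falls below the pre-specified alpha."""
    return p_value < alpha

# Examples from the article: the same data (p = 0.03) is "significant"
# under alpha = 0.05 but not under alpha = 0.01.
print(is_significant(0.03, alpha=0.05))   # True
print(is_significant(0.03, alpha=0.01))   # False

# The HER-Vaxx interim case: p = 0.083 against a pre-specified
# one-sided alpha of 0.10 clears that (liberal) threshold.
print(is_significant(0.083, alpha=0.10))  # True
```

    The point the code makes explicit is that "significance" is a property of the pair (p, alpha), not of the p-value alone: the same result can be declared significant or not purely depending on the threshold chosen before the trial.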

    In the Interventional Management of Stroke (IMS) III Trial that compared the efficacy of IV tPA (N=222) and IV tPA plus endovascular (N=434) treatment of acute ischemic stroke, the alpha was specified as 0.05. The unadjusted absolute group difference in the proportion of the good outcome, defined as a modified Rankin Scale score of 0-2, was 2.1% (40.8% in endovascular and 38.7% in IV tPA).[3] Under the normal theory test for binomial proportions, this yields a p-value of 0.30, meaning that if the H0 were true (i.e., the treatment did not work), there would be a 30% chance of observing a difference between the treatment groups at least as large as 2.1%. Since this is not so unusual, we fail to reject H0: Δ=0 and conclude that the difference of 2.1% is not “statistically significant.”
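    The normal-theory test for binomial proportions used in that example can be sketched as follows. Note the event counts (86 of 222, 177 of 434) are my reconstruction from the reported percentages, not figures taken from the IMS III publication, so the computed p comes out near, rather than exactly at, the reported 0.30:

```python
from math import sqrt, erf

def norm_cdf(z: float) -> float:
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def two_proportion_test(x1: int, n1: int, x2: int, n2: int):
    """One-sided normal-theory (z) test for a difference in proportions.

    Returns (observed difference, one-sided p-value) under
    H0: the two underlying proportions are equal.
    """
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = sqrt(pooled * (1.0 - pooled) * (1.0 / n1 + 1.0 / n2))
    z = (p2 - p1) / se
    return p2 - p1, 1.0 - norm_cdf(z)

# 38.7% of 222 ≈ 86 events; 40.8% of 434 ≈ 177 events (reconstructed counts)
diff, p = two_proportion_test(86, 222, 177, 434)
# diff ≈ 0.02 (the 2.1% absolute difference); p ≈ 0.3, consistent with the text
```

    With p ≈ 0.3 well above alpha = 0.05, the code reproduces the article's conclusion: the 2.1% difference is unremarkable under the null hypothesis.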

    Thinking outside the “p<0.05” box

    Another interpretation of the alpha is that it is the probability of rejecting the H0 when in fact it is true. In other words, alpha is the false positive probability. Typically, we choose an alpha of 0.05, and hence our desire to obtain p < 0.05. There is nothing magical about 0.05. Why not consider the risk (or cost) to benefit ratio in the choice of the false positive probability the research community is willing to tolerate for a particular study? For some studies, should one consider a more conservative (like 0.01) or more liberal (like 0.10) alpha? In the case of a comparative effectiveness trial, where two or more treatments, similar in cost and safety profile, that have been adopted in clinical practice are tested to identify the “best” treatment, one might be willing to risk a higher likelihood of a false positive finding with an alpha of, say, 0.10. In contrast, if a new intervention to be tested is associated with high safety risks and/or is very expensive, one would want to be sure that the treatment is effective by minimizing the false positive probability to, say, 0.01. For a certain Phase II clinical trial, where the safety and efficacy of a new treatment are still being explored, one can argue for a more liberal alpha to give the treatment a higher level of the benefit of the doubt, especially when the disease or condition has only a few, if any, effective treatment options. If a false positive should pass, it would be weeded out in a Phase III trial with a more stringent significance level. Also, if the H0 is widely accepted as true (perhaps, for example, in the case of hyperbaric oxygen treatment for stroke), then one might wish to be more sure that rejecting the H0 implies that the treatment is effective by using an alpha of 0.01 or even lower. Of course, this means a larger study has to be conducted.
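    The trade-off at the end of that paragraph, a smaller alpha forcing a larger study, can be made concrete with the standard normal-approximation sample-size formula for a one-sided two-sample comparison of means. The 80% power and 0.5 standardized effect size below are purely illustrative assumptions of mine, not parameters from any trial mentioned here:

```python
from math import ceil, erf, sqrt

def norm_cdf(z: float) -> float:
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def norm_ppf(q: float) -> float:
    """Inverse normal CDF by bisection (crude but adequate here)."""
    lo, hi = -10.0, 10.0
    for _ in range(100):
        mid = (lo + hi) / 2.0
        if norm_cdf(mid) < q:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

def n_per_group(alpha: float, power: float = 0.8, effect: float = 0.5) -> int:
    """Per-group n for a one-sided two-sample test of means,
    standardized effect size Δ/σ = `effect` (illustrative values)."""
    z_a, z_b = norm_ppf(1.0 - alpha), norm_ppf(power)
    return ceil(2.0 * (z_a + z_b) ** 2 / effect ** 2)

for a in (0.10, 0.05, 0.01):
    print(a, n_per_group(a))  # required n per group grows as alpha shrinks
```

    Under these assumptions, tightening alpha from 0.10 to 0.01 roughly doubles the required sample size per group, which is the cost the article warns about when arguing for a more stringent significance level.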

    While proposing to use anything greater than an alpha of 0.05 may be challenging, especially for studies to be submitted to the US Food and Drug Administration for New Drug Application approval, scientifically sound rationale and experienced clinical judgment should encourage one to think outside the box about the choice of the alpha. In doing so, one should ensure that scientific and ethical rationale is the driving argument for proposing a larger alpha, and not only the financial savings (as a result of the smaller required sample size with a larger alpha).


    https://hotcopper.com.au/data/attachments/2861/2861537-f96cd2a9bc615b48a758b7572901d3bf.jpg

    Is it possible that certain hedge funds are short due to the high p-value and the small sample size, reduced from 68 patients to 36?
    I am hoping some of the more informed posters here might be able to put my mind at ease on this. I am quite new to biotech; most of my knowledge is in mining, so forgive my ignorance here.

 