TSN 0.00% 1.0¢ the sustainable nutrition group ltd

what could go wrong?, page-70

    Have been reading the comments regarding the trial length and thought it might be useful to repost the bootstrapping simulation that I ran to get an idea of when the 350th PFS event should occur.

    To avoid misinterpretation and confusion, I’ll first spend some text describing the method and assumptions so that it’s clear what you can and can’t conclude from the analysis.

    Firstly, the sole point of the analysis is to understand the distribution of dates that the 350th PFS event is expected to occur. The reason that we need to examine the distribution is that this is a prospective analysis, meaning there is uncertainty around when the event occurs.

    Obviously, there is no closed-form theoretical distribution for the date of the 350th event, so we need to simulate the study using as much known data as possible.

    The two known data pools that we need (and have) are:

    1) A function that describes enrollment into the trial; and

    2) A probability function for time to progression.

    The function for trial enrollment is fairly well defined, as ACL have provided enough cumulative updates to fit a polynomial, so this assumption is pretty reliable. The curve is below.
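    To show what fitting and inverting that recruitment curve looks like, here is a minimal sketch. The enrollment figures below are placeholders I made up for illustration, not ACL's actual cumulative updates, and the quadratic degree is an assumption.

    ```python
    import numpy as np

    # Hypothetical cumulative enrollment updates: (days since trial start, patients enrolled).
    # These numbers are illustrative placeholders, NOT ACL's reported figures.
    days = np.array([0, 90, 180, 270, 360, 450])
    enrolled = np.array([0, 60, 150, 260, 370, 460])

    # Fit a quadratic to cumulative recruitment (degree is an assumption).
    coeffs = np.polyfit(days, enrolled, deg=2)
    enroll_curve = np.poly1d(coeffs)

    def enrollment_day(n, horizon=600):
        """Invert the fitted curve numerically: first day on which
        cumulative enrollment reaches n patients."""
        grid = np.arange(horizon)
        return int(np.argmax(enroll_curve(grid) >= n))
    ```

    With a curve like this, the nth patient's enrollment date in each simulation is just `enrollment_day(n)` added to the trial start date.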



    As for the PFS probability function, there are three relevant studies that I can get my hands on, all of which differ, with about 1-month difference for median PFS (4 – 5 months). I can use any of these (or a pooled estimate), but to be as conservative as possible, I ran with the longest PFS function (VELOUR). Even so, this might not be a good proxy for the ACL control arm (but it’s as good as I can do).
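    Sampling from a published PFS curve can be done by inverse-transform sampling on the Kaplan-Meier survival function. The survival points below are illustrative placeholders, not the actual VELOUR data.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Illustrative survival-curve points (months, fraction progression-free).
    # Placeholders only -- NOT the actual VELOUR Kaplan-Meier figures.
    months = np.array([0, 2, 4, 6, 9, 12, 18])
    surv   = np.array([1.0, 0.80, 0.55, 0.40, 0.25, 0.15, 0.05])

    def sample_ttp(n):
        """Inverse-transform sampling: draw u ~ Uniform and find the time t
        at which the survival function S(t) equals u."""
        u = rng.uniform(surv[-1], 1.0, n)  # restrict to the tabulated range
        # np.interp needs increasing x, and surv is decreasing, so reverse both
        return np.interp(u, surv[::-1], months[::-1])
    ```

    The same routine works for any of the three candidate PFS curves (or a pooled estimate); only the tabulated points change.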



    As I have quantitative descriptions of both the enrollment date (i.e. the polynomial fitted to cumulative recruitment) and the PFS probability function, it’s pretty straightforward to run a simulation of the trial, record when the 350th event occurs, and then examine its distribution by simulating the trial multiple times (I ran it 10,000 times).

    Each simulation generates a progression date for each patient in the trial by:

    1) Generating an enrollment date for the nth patient using the enrollment polynomial; and

    2) Generating a stochastic time to progression by sampling the progression probability function (using a random number generator); then

    3) Adding the time to progression to the enrollment date to get a progression date; and finally

    4) Recording the date of the 350th progression.
    To help visualise the process, each simulation generates a curve like the one below. This is just one of the 10,000 curves that determine the 350th-event distribution (the number of progression events is on the y-axis; the date is on the x-axis).
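    The four steps above can be sketched end-to-end as follows. To keep it self-contained, the sketch uses a simple linear enrollment ramp in place of the fitted polynomial and an exponential with a VELOUR-like ~5-month median in place of the real PFS curve; the trial size of 700 is also an assumption.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    N_PATIENTS = 700            # assumed trial size (placeholder)
    EVENT_TARGET = 350
    MEDIAN_PFS_DAYS = 5 * 30.4  # stand-in for the VELOUR-like ~5-month median

    def simulate_350th_event(n_sims=10_000):
        """Return the day (offset from trial start) of the 350th progression
        in each simulated trial."""
        results = np.empty(n_sims)
        for i in range(n_sims):
            # 1) an enrollment day for each patient -- a uniform ramp over
            #    ~18 months stands in for the fitted recruitment polynomial
            enroll = np.sort(rng.uniform(0, 540, N_PATIENTS))
            # 2) a stochastic time to progression -- an exponential with the
            #    assumed median is a crude proxy for the real PFS function
            ttp = rng.exponential(MEDIAN_PFS_DAYS / np.log(2), N_PATIENTS)
            # 3) progression date = enrollment date + time to progression
            prog = enroll + ttp
            # 4) the 350th event is the 350th order statistic of those dates
            results[i] = np.sort(prog)[EVENT_TARGET - 1]
        return results
    ```

    Swapping in the fitted polynomial and a sampled Kaplan-Meier curve recovers the actual analysis; the order-statistic logic in steps 3 and 4 is unchanged.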



    When I do this I get the following histogram for the frequency of the 350th event falling on a certain date. The mean of the date of the 350th event is 15 October 2013, and the median is also 15 October 2013. The upper 95th percentile for the 350th event is 21 November 2013 and the upper 99th is 8 December 2013. In other words, only 1% of the trial simulations reach the 350th event after 8 December 2013.
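    Turning the 10,000 simulated day-offsets into the calendar-date summary statistics quoted above is then a one-liner per statistic. The trial start date here is a placeholder, not the actual first-enrollment date.

    ```python
    import numpy as np
    from datetime import date, timedelta

    TRIAL_START = date(2011, 1, 1)  # placeholder start date (assumption)

    def summarise(event_days):
        """Convert simulated day offsets for the 350th event into
        calendar-date mean, median, 95th and 99th percentiles."""
        stats = {
            "mean":   float(np.mean(event_days)),
            "median": float(np.median(event_days)),
            "p95":    float(np.percentile(event_days, 95)),
            "p99":    float(np.percentile(event_days, 99)),
        }
        return {k: TRIAL_START + timedelta(days=v) for k, v in stats.items()}
    ```

    Reading the 99th percentile off this summary is exactly the "only 1% of simulations reach the 350th event after this date" statement.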



    So what can we conclude from this?

    Not much, really. Even though this is a quantitative analysis, we can really only make a qualitative conclusion, i.e. that the trial has taken significantly longer to reach the 350th event than expected. And while we can say this with absolute certainty, we have no information as to why.

    Hopefully most of the difference is explained by the fact that half of the patients are on an experimental drug, and that the experimental drug boosts PFS. Alternatively, the difference could be due to a more systemic difference between the cohort enrolled into this trial versus the VELOUR study (hopefully not).

    That’s about all we can infer. We can’t infer anything quantitative about the shape of the PFS curve, and therefore can’t make ANY quantitative assessment of what is happening at the median of either the pooled data or each arm of the trial.

    We also don’t know the profile of dropouts and censored events in the trial. Both of these factors would extend the time to reach the 350th event and would not be captured in this analysis.

    From my point of view, this analysis is encouraging, but it does have a downside. If the extension in trial time is due to a systemic factor (a tendency of patients to have a longer PFS), then, all else equal (and as pointed out by NEO), it will inflate the p-value. This is interesting, as it increases the probability of both a cracking result (the increase in time is due to the experimental treatment) AND a disappointing result (the increase is due to something systemic, which threatens the p-value).

    One last word – there’s been some comments about the relationship between the phase-II and phase-III studies. Neo pointed out that the extension observed in the phase-II was a boundary for what to expect in the phase-III.

    Ordinarily I’d agree with that, but in this instance I’m a little more relaxed, as there’s a sound biochemical rationale around FOLFIRI to explain why this combination therapy’s outcomes might respond very non-linearly to changes in both the tolerability and potency of the chemo agents.

    I’ve re-posted the model to 4Shared.com with the hope that someone can help me extend and improve the analysis.

    http://www.4shared.com/file/EGttwF2Tce/HyAct_p3_Trial_Analysis_v3_n10.html

    ……oh yeah, and we do know that the trial is yet to reach the 350th event (a few posters have rung the company about this).
 