05/06/18 08:16
Originally posted by Southoz
So … where are we, round two?
Nurses who receive training, follow a strict clinical trial protocol, and know their recordings are going to be checked for quality are able to collect analysable cough recordings in a hospital setting.
Yep … you would hope so. And this rigour will flow through to the “conditions of use” requirements, which will present quite a barrier to adoption.
In addition, and notably, the study population (children presenting to hospitals) has next to nothing to do with one of the intended-use populations often discussed here: telehealth.
But these will be “morning after the big party” problems.
The first challenge of the trial is to meet the PPA/PNA benchmarks. And this occurs in a very high-stakes context.
It is high stakes because the results from this second US clinical trial trump all other studies conducted by RAP. And these results will stand until a methodologically stronger trial is conducted. What this means is that the results from the non-regulated, non-registered Australian studies can support the US trial results, but not overturn them.
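For anyone unfamiliar with those endpoints: PPA and PNA are agreement analogues of sensitivity and specificity, computed against the reference comparator rather than against ground truth. A minimal sketch in Python, with made-up counts purely for illustration:

```python
# Sketch: PPA (positive percent agreement) and PNA (negative percent
# agreement) from a 2x2 table of app result vs. reference comparator.
# The counts below are made up purely for illustration.

def percent_agreement(tp, fp, tn, fn):
    """Return (PPA, PNA) as percentages.

    PPA plays the role of sensitivity and PNA of specificity, except
    that the comparison is against an imperfect reference comparator,
    not against ground truth.
    """
    ppa = 100.0 * tp / (tp + fn)  # agreement on comparator-positive cases
    pna = 100.0 * tn / (tn + fp)  # agreement on comparator-negative cases
    return ppa, pna

ppa, pna = percent_agreement(tp=85, fp=20, tn=80, fn=15)
print(f"PPA = {ppa:.1f}%, PNA = {pna:.1f}%")  # PPA = 85.0%, PNA = 80.0%
```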
So what are the chances of success second time around?
The first trial failed because of (a) dud sound recordings (poor PI oversight), (b) the algorithm needing tweaking (RAP’s fault) and (c) the wrong reference comparator (the CRO’s fault).
Naturally RAP talk most about (a) and (c). Not so much about (b).
With respect to the dud recordings, the results presented in the first trial excluded the duds – and they were not much better than chance. So this is not where the problem was.
The reference comparator was obviously a problem. But I would guess it’s only worth a few percentage points – worth addressing, as it might mean the difference between a near miss and success.
But it’s problem (b) that is the fly in the ointment.
In discussing the failure of the first US trial, RAP noted that there had been only limited prospective testing of the app in Australia prior to the US study.
This was the point I made in a couple of posts prior to the trial failure (see post 24202980): the prospective nature of the US trial would shrink the estimates derived from the earlier training “studies”.
The upshot of this lack of prospective testing, according to RAP, was that the app needed to be retrained for the US situation. But this is not actually right. The problem is not US children coughing differently to Australian children. The problem was the app predicting on a data-set it hadn’t been trained on.
But even I didn’t envisage that this would shrink the estimates to no better than chance. This is actually the big issue – which is kind of obvious … because RAP don’t talk about it much.
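For the statistically minded, the shrinkage mechanism is easy to demonstrate. The sketch below (a generic classifier on synthetic data, not RAP’s algorithm or data) fits a model to pure noise: apparent accuracy on the training data looks excellent, while accuracy on genuinely new data collapses to chance – exactly the pattern of flattering training “studies” followed by a failed prospective trial.

```python
# Sketch: why estimates from training data "shrink" under prospective
# testing. Synthetic data and a generic classifier; purely illustrative,
# not RAP's algorithm or data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n, p = 100, 500                        # few subjects, many acoustic features
X_train = rng.normal(size=(n, p))      # noise masquerading as signal
y_train = rng.integers(0, 2, size=n)   # labels unrelated to the features

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("apparent (training) accuracy:", model.score(X_train, y_train))  # ~1.0

X_new = rng.normal(size=(n, p))        # a genuinely prospective sample
y_new = rng.integers(0, 2, size=n)
print("prospective accuracy:", model.score(X_new, y_new))              # ~0.5
```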
But RAP do have a clever solution. Use the data from the first study to re-train the app. Then mimic the first study as closely as possible in the second study: the same hospitals, same staff, the same population of children. With any luck you will get the same results as if you had just performed a split-sample validation.
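To make that concrete, here is a hypothetical sketch of the logic. If the second study samples from essentially the same population as the retraining data, it behaves like the held-out half of a split-sample validation and performance holds up; shift the population and it does not. The distributions and model below are stand-ins, not RAP’s.

```python
# Sketch: retrain on "study 1", then test on (a) a second sample from
# the same population and (b) a shifted population. Hypothetical
# stand-ins for the real data and algorithm.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

def sample(n, shift=0.0):
    """Two-class Gaussian data; `shift` moves the whole population."""
    y = rng.integers(0, 2, size=n)
    X = rng.normal(size=(n, 20)) + np.outer(y, np.ones(20)) + shift
    return X, y

X1, y1 = sample(400)                      # "study 1" data, used to retrain
model = LogisticRegression(max_iter=1000).fit(X1, y1)

X_same, y_same = sample(400)              # "study 2": same sites, same kids
X_diff, y_diff = sample(400, shift=1.5)   # a genuinely different population

print("same-population accuracy:", model.score(X_same, y_same))    # stays high
print("shifted-population accuracy:", model.score(X_diff, y_diff)) # degrades
```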
On this basis I think the second study has a much better chance of success – as it sidesteps, to some extent, the difficulty of prospective testing. The FDA will spot this trick, but what they will make of it is anyone’s guess.
So, as bizarro as this might sound coming from me about RAP, I suspect the true probability of success is higher here than what the market might be expecting. One of the rare binary trial results at this end of the market with a good risk–reward scenario – for those with strong stomachs only.
South Oz … you mentioned that: “With respect to the dud recordings, the results presented in the first trial excluded the duds – and they were not much better than chance. So this is not where the problem was.”
As far as I can read and recall, the poor results for the first study included the poor recordings and other errors.
Please correct me if I am wrong.
Thanks
Fox