Thanks for your detailed analysis, @Southoz. It's always good to hear from people who don't have a vested interest in the company and to see their take on the story.
Whilst I read your posts in detail and found them intriguing to say the least, I disagree with your take on things, and so does the rest of the top 20 by the looks of it. Actually, the top 50 haven't really changed around much since the results were posted...
"I think the results were very poor and RAP will realise that it would now be better off not to apply (and get rejected) for FDA approval."
The announcement clearly stated "positive results from Smartcough-C2 clinical trial". Also take note of this in the quarterly: "ResApp is working closely with its US and Australian-based consultants on a De Novo premarket submission for its acute paediatric respiratory disease diagnostic smartphone application and plans to file this with the US Food and Drug Administration (FDA) in the first quarter of this calendar year." Clearly the company will be filing for regulatory submission. Our US consultants would not have anything to do with us if it would tarnish their record of a 99% clearance rate from thousands of cases...
"
The US second trial failed to meet the pre-defined end-points that had been set in the first US trial (superiority to 75% PPA+NPA) for any illness. RAP now claims the endpoints are arbitrary but they were set following a pre-submission FDA meeting (in 2016)."
The 75% PPA and NPA endpoints were set by ResApp all along; we are a novel technology, and we only need to be as accurate as a doctor with a stethoscope!
Refer to this cited paper as one example from one quick google search:
"Comparing the auscultatory accuracy of health care professionals using three different brands of stethoscopes on a simulator"
"The overall correct identification of all the sounds put together was 68.6%."
"The aim of this prospective study was to report how well three different brands of stethoscopes perform in the hands of a diverse group of health care professionals in a controlled setting. Furthermore, the accuracy of health care professionals at different levels of expertise was tested in identifying common auscultatory sounds."
As you can clearly see, their accuracy level was less than 70%, and these were professionals using quality stethoscopes.
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4140709/
The accuracy of X-rays is very similar, hence the need for an additional tool. This will be particularly useful in a telehealth setting, where a doctor is not physically able to reach through the screen of your phone or monitor to give you a proper diagnosis.
From the recent quarterly report:
"A total of 1,470 patients were recruited at three hospital sites in the US from which 1,251 patients completed the study and were analysable. ResApp announced in October that ResAppDx achieved a positive percent agreement between 73% and 78% and a negative percent agreement between 71% and 86% when compared to a clinical diagnosis for lower respiratory tract disease, asthma/reactive airway disease and primary upper respiratory tract disease"
Better than 68.6%! I say ResApp will be bringing home the bacon!
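For anyone unfamiliar with these metrics, here's a rough sketch (Python, with made-up counts, not the actual trial data) of how positive and negative percent agreement are calculated against a clinical reference diagnosis:

```python
# PPA/NPA against a clinical reference diagnosis.
# All counts below are illustrative only, NOT SmartCough-C2 data.
def percent_agreement(tp, fn, tn, fp):
    ppa = tp / (tp + fn)  # agreement on reference-positive patients
    npa = tn / (tn + fp)  # agreement on reference-negative patients
    return ppa, npa

# Hypothetical: 300 reference-positive, 700 reference-negative patients
ppa, npa = percent_agreement(tp=225, fn=75, tn=560, fp=140)
print(f"PPA = {ppa:.0%}, NPA = {npa:.0%}")  # PPA = 75%, NPA = 80%
```

Note these are agreement figures, not sensitivity/specificity: the comparator is the clinician's diagnosis, which (per the stethoscope paper above) is itself imperfect.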
"There were a high number of participants (15%) in RAPs second US trial who did not provide analysable data (only 1251 out of 1470). Model a worse-case scenario and assume all these participants would have been wrongly diagnosed by the app. RAPs algorithm accuracy estimates will drop below non-inferiority to clinical diagnosis and (most likely) are not statistically significantly better than chance (50%)."
One cannot assume that the remaining 14.9% of test subjects were all in disagreement. These patients were excluded from the study because their data were not analysable, so they are not included in the analysis or on the minds of the FDA regulatory body. At the end of the day it all comes down to having statistically significant numbers for each individual respiratory disease; they have the numbers, and that's all that counts.
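The "worst-case" assumption is easy to put numbers on (Python sketch; the 0.75 agreement figure is a rough mid-point of the reported ranges, and counting every non-analysable patient as a disagreement is @Southoz's assumption, not anything from the study):

```python
# Back-of-envelope check of the "all dropouts were wrong" worst case.
recruited = 1470    # total recruited (from the quarterly)
analysable = 1251   # completed and analysable (from the quarterly)
agreement = 0.75    # rough mid-point of the reported agreement ranges

dropout_rate = 1 - analysable / recruited
print(f"non-analysable: {dropout_rate:.1%}")      # 14.9%

# Worst case: treat every non-analysable patient as a disagreement.
worst_case = agreement * analysable / recruited
print(f"worst-case agreement: {worst_case:.1%}")  # 63.8%
```

That 63.8% is where the "below non-inferiority" claim comes from, but it only holds if every single excluded recording would have been misdiagnosed, which is an extreme assumption; excluded recordings carry no diagnostic information either way.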
"MEB tried exactly this same algorithm non-inferiority to clinical diagnosis argument to the FDA and failed. It too had a high problem with drop outs."
Stress measurement and depression are no comparison to our machine learning algorithms, which diagnose respiratory disease from the sound of one's cough. Also, three (possibly four) out of six respiratory diseases are above 70% accuracy; only pneumonia and bronchitis fell short and may require this argument. In saying that, without looking into MEB in detail, ResApp have Massachusetts General Hospital backing us up; a paper will be published in a medical journal soon, and the information within will aid this argument. Did MEB have a collaboration agreement with a world-leading hospital? I'm salivating waiting for the scientific board to present the detailed findings of the SmartCough-C2 study very shortly at some of the upcoming medical conferences. Representatives of ResApp will attend, and an announcement will most likely be published thereafter, IMO...
"Finally the accuracy estimates presented by RAP include patient (parent) reported symptoms. What exactly does the analysis of cough sounds contribute over and above symptoms? No-one knows. The results presented to the market to date conceals what is supposed to be the whole point of the app (analysis of cough sounds). But the FDA is unlikely to approve an app with an interesting charade - (you have to cough into a smartphone)."
Detailed work has already been done using cough sounds alone to identify the signatures in one's cough with high accuracy.
SCIENTIFIC PRESENTATIONS & PUBLICATIONS
https://www.resapphealth.com.au/technology/#pubs
Yes, time will tell. My money is where my mouth is; this will be a beast!
All the best mate and thanks for taking out some time to share your opinion...
Cheers
Red bar