You expressed some concerns and requested information on the likelihood of the RAP algorithm being able to detect asymptomatic COVID.
There have been several studies published by others on the use of cough sounds for COVID detection. Linked extracts from three of these reports are summarised below.
It is important to note that the results reported there relate only to the ability of those algorithms to distinguish a known COVID-positive forced cough from a healthy one. Unlike RAP, whose cough signal database should allow it to distinguish COVID from other respiratory infections, these algorithms have no such capability.
The first relates to an MIT study that has been referred to previously on the RAP threads, and which was specifically designed to test for asymptomatic recognition.
The encouraging results reported give some idea of what capability might be expected from RAP’s developed algorithm.
Goal: We hypothesized that COVID-19 subjects, especially including asymptomatics, could be accurately discriminated only from a forced-cough cell phone recording using Artificial Intelligence. Methods: We developed an AI speech processing framework that leverages acoustic biomarker feature extractors to pre-screen for COVID-19 from cough recordings, … Results: When validated with subjects diagnosed using an official test, the model achieves COVID-19 sensitivity of 98.5% with a specificity of 94.2% (AUC: 0.97). For asymptomatic subjects it achieves sensitivity of 100% with a specificity of 83.2%. Conclusions: AI techniques can produce a free, non-invasive, real-time, any-time, instantly distributable, large-scale COVID-19 asymptomatic screening tool to augment current approaches in containing the spread of COVID-19.
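To put the reported sensitivity and specificity figures in context for a screening tool, a short sketch (purely illustrative, with a population prevalence of 1% assumed by me rather than taken from the study) shows how they translate into positive predictive value, i.e. the chance that a positive screen really is COVID:

```python
# Illustrative only: convert reported sensitivity/specificity into
# positive predictive value (PPV) via Bayes' rule.
def ppv(sensitivity, specificity, prevalence):
    true_pos = sensitivity * prevalence              # truly infected, flagged positive
    false_pos = (1 - specificity) * (1 - prevalence)  # healthy, flagged positive
    return true_pos / (true_pos + false_pos)

# Figures from the MIT abstract: sensitivity 98.5%, specificity 94.2%.
# The 1% prevalence is an assumed value for illustration.
print(round(ppv(0.985, 0.942, 0.01), 3))  # → 0.146
```

So even with the strong figures quoted, at low prevalence most positive screens would be false alarms, which is why such tools are proposed as a pre-screen to augment, not replace, official testing.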
We demonstrate that solicited-cough sounds collected over a phone, when analysed by our AI model, have statistically significant signal indicative of COVID-19 status (AUC 0.72, t-test, p < 0.01, 95% CI 0.61-0.83). This holds true for asymptomatic patients as well.