Like many here, I’m very excited about the possibility that ResApp could mass-screen for COVID-19 and monitor disease progression.
Cough analysis is becoming a hot topic in the COVID-19 literature. There are now a number of publications supporting the proposition that COVID-19 patients exhibit a unique cough in both symptomatic and asymptomatic cases. For those who are interested, I’ve copied links below to a number of these papers, along with some extracts (these are the papers/studies I’m aware of; there would be many others). Nevertheless, it’s important to call out that the authors of these publications note that their work is preliminary in nature and requires further validation work. DYOR
My high level observations from the publications:
- Overall, these publications are very encouraging and support ResApp’s hypothesis that a COVID-19 cough contains a specific signature
- There are still risks and many unknowns in this space, but the rewards could be massive for ResApp if the company can successfully discern the COVID-19 signature and commercialise the tech
- ResApp’s approach does not rely on crowdsourced data, which addresses some of the limitations experienced in other studies performed to date
- ResApp has a library of non-COVID-19 related respiratory disease coughs which will be very useful / important in being able to confirm whether the COVID-19 cough is truly unique compared to coughs associated with other respiratory diseases (e.g. asthma, bronchitis etc). Two of the studies referred to below did conduct some analysis of COVID-19 vs non-COVID-19 respiratory disease coughs and concluded that they were distinguishable. This is a very exciting development! Nevertheless, their analysis was quite preliminary and the non-COVID-19 respiratory disease coughs appeared to be crowdsourced, impacting the robustness of their findings
- Regarding potential competition for a smartphone-based mass-screening tool, the MIT-based “cough in a box” solution appears to be the front runner. The technology continues to be developed and tested across the UK. Whilst the MIT paper was published in 2020, an update on the progress of further work was provided on 5 August 2021. Refer to the links below for further info:
Publications:
1. Pay Attention to the cough: Early Diagnosis of COVID-19 using Interpretable Symptoms Embeddings with Cough Sound Signal Processing - https://arxiv.org/abs/2010.02417
“The proposed framework's performance was evaluated using a medical dataset containing Symptoms and Demographic data of 30000 audio segments, 328 cough sounds from 150 patients with four cough classes (COVID-19, Asthma, Bronchitis, and Healthy). Experiments' results show that the model captures the better and robust feature embedding to distinguish between COVID-19 patient coughs and several types of non-COVID-19 coughs with higher specificity and accuracy of 95.04 ± 0.18% and 96.83 ± 0.18% respectively, all the while maintaining interpretability.” Pg 1
“Compared to asthma and bronchitis, it consists of fewer peaks, starting with multiple small peaks and ending with large peaks in phase 3, which shows a significant difference between COVID-19 and other types of cough”
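For anyone curious what “counting peaks” in a cough recording might look like in practice, below is a minimal Python sketch using librosa and scipy. The file name, frame sizes and peak-detection thresholds are my own placeholder assumptions, not parameters taken from the paper.

```python
# Minimal sketch: count amplitude peaks in a cough recording.
# File path, frame sizes and thresholds are placeholder assumptions,
# not parameters from the paper above.
import librosa
import numpy as np
from scipy.signal import find_peaks

y, sr = librosa.load("cough_sample.wav", sr=16000)  # hypothetical recording

# Short-time RMS energy gives a smooth amplitude envelope of the cough.
envelope = librosa.feature.rms(y=y, frame_length=1024, hop_length=256)[0]
envelope = envelope / (envelope.max() + 1e-9)  # normalise to [0, 1]

# Peaks above 20% of the maximum, at least ~50 ms apart (assumed thresholds).
peaks, props = find_peaks(envelope, height=0.2, distance=3)

print(f"{len(peaks)} peaks detected")
print("relative peak heights:", np.round(props["peak_heights"], 2))
```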
2. COVID-19 Artificial Intelligence Diagnosis Using Only Cough Recordings - https://ieeexplore.ieee.org/document/9208795
“Abstract—Goal: We hypothesized that COVID-19 subjects, especially including asymptomatics, could be accurately discriminated only from a forced-cough cell phone recording using Artificial Intelligence. To train our MIT Open Voice model we built a data collection pipeline of COVID-19 cough recordings through our website (opensigma.mit.edu) between April and May 2020 and created the largest audio COVID-19 cough balanced dataset reported to date with 5,320 subjects. Methods: We developed an AI speech processing framework that leverages acoustic biomarker feature extractors to pre-screen for COVID-19 from cough recordings, and provide a personalized patient saliency map to longitudinally monitor patients in real-time, non-invasively, and at essentially zero variable cost. Cough recordings are transformed with Mel Frequency Cepstral Coefficient and inputted into a Convolutional Neural Network (CNN) based architecture made up of one Poisson biomarker layer and 3 pre-trained ResNet50’s in parallel, outputting a binary pre-screening diagnostic. Our CNN-based models have been trained on 4256 subjects and tested on the remaining 1064 subjects of our dataset. Transfer learning was used to learn biomarker features on larger datasets, previously successfully tested in our Lab on Alzheimer’s, which significantly improves the COVID-19 discrimination accuracy of our architecture. Results: When validated with subjects diagnosed using an official test, the model achieves COVID-19 sensitivity of 98.5% with a specificity of 94.2% (AUC: 0.97). For asymptomatic subjects it achieves sensitivity of 100% with a specificity of 83.2%. Conclusions: AI techniques can produce a free, non-invasive, real-time, any-time, instantly distributable, large-scale COVID-19 asymptomatic screening tool to augment current approaches in containing the spread of COVID-19. Practical use cases could be for daily screening of students, workers, and public as schools, jobs, and transport reopen, or for pool testing to quickly alert of outbreaks in groups. General speech biomarkers may exist that cover several disease categories, as we demonstrated using the same ones for COVID-19 and Alzheimer’s.”
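To make the MIT pipeline described above a bit more concrete, here is a rough PyTorch sketch of the general idea: MFCCs computed from a cough recording, fed through ResNet50 backbones in parallel, with a binary output. This is only my interpretation of the abstract (the Poisson biomarker layer, transfer-learned weights and training details are omitted), not the authors’ actual code; the file name and sizes are placeholders.

```python
# Rough sketch of an MFCC -> parallel ResNet50 -> binary output pipeline,
# loosely following the abstract above. Not the authors' implementation.
import librosa
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import resnet50  # torchvision >= 0.13

class CoughScreener(nn.Module):
    def __init__(self, n_backbones: int = 3):
        super().__init__()
        # The paper uses pre-trained (transfer-learned) ResNet50s; weights=None
        # here just keeps the sketch self-contained.
        self.backbones = nn.ModuleList(
            [resnet50(weights=None) for _ in range(n_backbones)])
        for b in self.backbones:
            b.fc = nn.Identity()  # expose the 2048-d pooled features
        self.head = nn.Linear(2048 * n_backbones, 1)  # binary pre-screening output

    def forward(self, mfcc_image: torch.Tensor) -> torch.Tensor:
        # mfcc_image: (batch, 3, 224, 224) "image" built from the MFCC matrix
        feats = torch.cat([b(mfcc_image) for b in self.backbones], dim=1)
        return torch.sigmoid(self.head(feats))  # probability-like screening score

def mfcc_to_image(path: str) -> torch.Tensor:
    y, sr = librosa.load(path, sr=16000)                    # hypothetical file
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=40)      # (40, frames)
    x = torch.tensor(mfcc, dtype=torch.float32)[None, None] # (1, 1, 40, T)
    x = F.interpolate(x, size=(224, 224), mode="bilinear", align_corners=False)
    return x.repeat(1, 3, 1, 1)                             # fake 3-channel image

model = CoughScreener().eval()
with torch.no_grad():
    prob = model(mfcc_to_image("cough_sample.wav"))
print(f"pre-screening score: {prob.item():.3f}")
```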
3. COVID-19 cough classification using machine learning and global smartphone recordings - https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8213969/
“Respiratory data such as breathing, sneezing, speech, eating behaviour and coughing can be processed by machine learning algorithms to diagnose respiratory illness [21–23]. Simple machine learning tools, like binary classifiers, are able to distinguish COVID-19 respiratory sounds from healthy counterparts with an area under the ROC curve (AUC) exceeding 0.80 [24]. Detecting COVID-19 by analysing only the cough sounds is also possible. AI4COVID-19 is a mobile app that records 3 s of cough audio which is analysed automatically to provide an indication of COVID-19 status within 2 min [25]. A deep neural network (DNN) was shown to distinguish between COVID-19 and other coughs with an accuracy of 96.83% on a dataset containing 328 coughs from 150 patients of four different classes: COVID-19, asthma, bronchitis and healthy [26]. There appear to be unique patterns in COVID-19 coughs that allow a pre-trained Resnet18 classifier to identify COVID-19 coughs with an AUC of 0.72. In this case, cough samples were collected over the phone from 3621 individuals with confirmed COVID-19 [27]. COVID-19 coughs were classified with a higher AUC of 0.97 (sensitivity = 98.5% and specificity = 94.2%) by a Resnet50 architecture, trained on coughs from 4256 subjects and evaluated on 1064 subjects that included both COVID-19 positive and COVID-19 negative subjects by implementing four biomarkers [28]. A high AUC exceeding 0.98 was also achieved in Ref. [29] when discriminating COVID-19 positive coughs from COVID-19 negative coughs on a clinically validated dataset consisting of 2339 COVID-19 positive and 6041 COVID-19 negative subjects using DNN based classifiers” pg 2
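As a sense-check on what a “simple binary classifier” with an AUC above 0.80 means in practice, here’s a small scikit-learn sketch: each cough is summarised as a vector of MFCC statistics and a logistic regression is scored by ROC AUC. The feature choice and the `load_labelled_coughs()` helper are purely my own hypothetical stand-ins for a labelled clinical dataset, not anything from these studies.

```python
# Minimal sketch of a "simple binary classifier" evaluated by ROC AUC.
# Feature choice (MFCC summaries) and the data-loading helper are hypothetical.
import numpy as np
import librosa
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

def cough_features(path: str) -> np.ndarray:
    """Summarise a cough recording as mean/std of 20 MFCCs (40-d vector)."""
    y, sr = librosa.load(path, sr=16000)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=20)  # (20, frames)
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

# Hypothetical helper returning file paths and 0/1 labels (1 = COVID-19 positive).
paths, labels = load_labelled_coughs()

X = np.stack([cough_features(p) for p in paths])
y = np.array(labels)

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("ROC AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
```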
4. Exploring Automatic Diagnosis of COVID-19 from Crowdsourced Respiratory Sound Data - https://arxiv.org/abs/2006.05919
“In this paper we describe our data analysis over a large-scale crowdsourced dataset of respiratory sounds collected to aid diagnosis of COVID-19. We use coughs and breathing to understand how discernible COVID-19 sounds are from those in asthma or healthy controls. Our results show that even a simple binary machine learning classifier is able to classify correctly healthy and COVID-19 sounds. We also show how we distinguish a user who tested positive for COVID-19 and has a cough from a healthy user with a cough, and users who tested positive for COVID-19 and have a cough from users with asthma and a cough. Our models achieve an AUC of above 80% across all tasks. These results are preliminary and only scratch the surface of the potential of this type of data and audio-based machine learning. This work opens the door to further investigation of how automatically analysed respiratory patterns could be used as pre-screening signals to aid COVID-19 diagnosis.” Pg 1
5. Telehealthcare and Covid-19: A Noninvasive & Low Cost Invasive, Scalable and Multimodal Real-Time Smartphone Application for Early Diagnosis of SARS-CoV-2 Infection - https://arxiv.org/abs/2109.07846
“This paper proposes and experimentally demonstrates a novel online framework to identify positive Covid-19 patients using machine learning. It can be accessed through a Smartphone application by both non-ambulatory and hospitalized patients and provides three modes of diagnosis using symptoms, cough sound and hematological biomarkers with an accuracy of 77.59%, 95.65% and 95.24% (for 5 blood features, but 100% for 25 blood features) respectively. The algorithms had a sensitivity of 100% for blood and sound, which indicates correct identification of Covid-19 positive patients” pg 12
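Since sensitivity and specificity get quoted a lot in these papers, here’s a quick sketch of how they’re computed from a screening tool’s predictions. The labels and predictions are made-up toy values purely to illustrate the arithmetic (and why a 100% sensitivity can coexist with a lower specificity); they are not data from any of these studies.

```python
# Sensitivity = TP / (TP + FN), specificity = TN / (TN + FP).
# Labels and predictions below are made-up toy values, not data from the papers.
from sklearn.metrics import confusion_matrix

y_true = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]   # 1 = confirmed COVID-19 positive
y_pred = [1, 1, 1, 1, 0, 0, 0, 0, 1, 0]   # screening tool's binary output

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)   # proportion of true positives that are caught
specificity = tn / (tn + fp)   # proportion of true negatives correctly cleared
print(f"sensitivity = {sensitivity:.0%}, specificity = {specificity:.0%}")
```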
6. Exploring Self-Supervised Representation Ensembles for COVID-19 Cough Classification - https://arxiv.org/abs/2105.07566
“As pointed out by Imran et al. [17], cough is one of the major symptoms of COVID-19 patients. Compared to PCR (Polymerase Chain Reaction) tests and radiological images, diagnosis using cough sounds can be easily accessed by people through a smartphone app. In the meantime, however, cough is also a common symptom of many other medical conditions that are not related to COVID-19. Therefore, automatically classifying respiratory sounds for COVID-19 diagnostic is a non-trivial and challenging task. During the pandemic, many crowdsourcing platforms (such as COUGHVID [24], COVID Voice Detector, and COVID-19 Sounds App) have been designed to gather respiratory sound audios from both healthy and COVID-19 positive groups for the research purpose. With these collected datasets, researchers in the artificial intelligence community have started to develop machine learning and deep learning based methods (e.g., [5, 12, 17, 25, 27]) for cough classification to detect COVID-19. Nevertheless, these methods share one common characteristic, that is they are all designed and trained in a fully-supervised way. On the one hand, the fully-supervised setting limits the applicability, effectiveness and impact of the collected datasets, since the method has to be trained and tested on the same dataset. This means additional datasets cannot be directly used to boost the predictive performance and the model is limited to the same source dataset. On the other hand, such fully-supervised based classification methods inevitably need to rely on well-annotated cough sounds data” pg 1
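To illustrate the “self-supervised representation ensemble” idea at a high level: features from several frozen, self-supervised pre-trained audio encoders are concatenated, and only a small classifier head is trained on the labelled cough data. The encoders below are stand-in modules of my own invention, not the actual pre-trained models used in the paper.

```python
# High-level sketch of a self-supervised representation ensemble:
# frozen pre-trained encoders -> concatenated embeddings -> small trainable head.
# The encoder modules here are placeholders, not the models used in the paper.
import torch
import torch.nn as nn

class DummyEncoder(nn.Module):
    """Stand-in for a self-supervised pre-trained audio encoder."""
    def __init__(self, embed_dim: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=9, stride=4), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            nn.Linear(16, embed_dim))

    def forward(self, wav: torch.Tensor) -> torch.Tensor:  # wav: (batch, 1, samples)
        return self.net(wav)

encoders = nn.ModuleList([DummyEncoder(), DummyEncoder()])
for enc in encoders:
    enc.requires_grad_(False)   # freeze: only the head is trained on cough labels
    enc.eval()

head = nn.Sequential(nn.Linear(256 * len(encoders), 64), nn.ReLU(), nn.Linear(64, 1))

wav = torch.randn(8, 1, 16000)           # a batch of 1-second dummy cough clips
with torch.no_grad():
    emb = torch.cat([enc(wav) for enc in encoders], dim=1)
logits = head(emb)                        # binary COVID-19 / non-COVID-19 logit
print(logits.shape)                       # torch.Size([8, 1])
```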