How come you would lose compliance? As long as they're still performing the other validations, why would such an automated testing system not be compliant, or a compliant part of that testing? Can you point out the issue?

The only main difference between an actual recording from a person and playing back the recorded cough is any degradation of the sound: loss of quality and other artefacts appearing in the waveform. Yes, that could affect the model, but the approach I'm talking about isn't about testing the ML model itself, it's about testing the mic/hardware (the models will behave the same if they're given the same data, and that code can easily be validated in software alone through unit tests, property tests and other testing strategies).

So if you play the recordings back to an iPhone, you can compare the results against the real data, measure the margin of error, and see what effect that has on the model's ability to identify which conditions are present. Then when you do the same on any Android device (or a future iPhone), you can compare against the iPhone results and see whether it's still performing within that measured degree of error or whether it's way off. You could also combine this with more mic-specific tests, such as recording known signals (single sine waves at specific frequencies, or more complex waveforms run through an FFT to see which frequencies they're made of) and checking how accurately the mic reproduces them, and whether any inaccuracies fall in frequency ranges that matter to the ML model's feature extraction, or in ranges that just get filtered out before feature extraction (and so have no effect on the model/diagnosis).
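To make that sine-wave check concrete, here's a minimal sketch of what one test in such a rig could look like. This is purely my illustration, not anything RAP has described: the tolerances, the `dominant_frequency`/`band_level_db` helpers and the simulated "capture" are all invented, and in a real setup the captured signal would come from the device's mic while the tone plays through a calibrated speaker.

```python
import numpy as np

FS = 44100            # sample rate (Hz)
LEVEL_TOL_DB = 3.0    # hypothetical allowed band-level deviation
FREQ_TOL_HZ = 5.0     # hypothetical allowed pitch deviation

def magnitude_spectrum(signal: np.ndarray, fs: int = FS):
    """Windowed real-FFT magnitude spectrum and its frequency axis."""
    mags = np.abs(np.fft.rfft(signal * np.hanning(len(signal))))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    return freqs, mags

def dominant_frequency(signal: np.ndarray, fs: int = FS) -> float:
    """Frequency of the strongest FFT bin (the 'what is it made of' check)."""
    freqs, mags = magnitude_spectrum(signal, fs)
    return float(freqs[np.argmax(mags)])

def band_level_db(signal: np.ndarray, lo: float, hi: float, fs: int = FS) -> float:
    """Total energy in the [lo, hi] Hz band, in dB."""
    freqs, mags = magnitude_spectrum(signal, fs)
    band = (freqs >= lo) & (freqs <= hi)
    return float(10 * np.log10(np.sum(mags[band] ** 2) + 1e-12))

if __name__ == "__main__":
    t = np.arange(FS) / FS                    # 1 second of audio
    reference = np.sin(2 * np.pi * 1000 * t)  # 1 kHz test tone

    # Stand-in for the device-under-test capture: slightly attenuated,
    # with a little noise. In a real rig this would be recorded by the
    # phone's mic while the tone plays through a calibrated speaker.
    captured = 0.8 * reference + 0.01 * np.random.randn(len(t))

    freq = dominant_frequency(captured)
    level_err = abs(band_level_db(reference, 800, 1200)
                    - band_level_db(captured, 800, 1200))

    ok = abs(freq - 1000.0) <= FREQ_TOL_HZ and level_err <= LEVEL_TOL_DB
    print(f"dominant: {freq:.1f} Hz, band error: {level_err:.2f} dB "
          f"-> {'PASS' if ok else 'FAIL'}")
```

The same band-energy comparison would extend naturally to the played-back cough recordings: compare per-band levels on each device against the iPhone baseline and flag anything outside the measured tolerance.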

Anyway, I'm not saying this is specifically what they're doing or what was meant when TK mentioned they've built an automated testing environment; this is all just a guess/speculation on my part as to what that might've meant. They'll know whatever they're doing is compliant and abides by the quality control guidelines they've set out, so I wouldn't be concerned about that. I just have an interest in the technical side, so trying to figure this stuff out is interesting to me. As soon as one of my 64-bit iDevices becomes "decommissioned" I'll be jailbreaking it so I can reverse engineer SleepCheck/ResAppDx, just to satisfy my own personal curiosity (the only really good insight into how things work under the hood is the patent filing; the rest has just been little crumbs we get every now and then).

And the calibration remark is because I just assume that's what they'll be doing, as it's what they already do on the iOS version of SleepCheck (e.g. the tone that gets played when you begin recording), so they can make sure the mic is working properly and possibly use it to adjust levels. For instance, when you start it unobstructed it plays the tone once, but if you obstruct the mic (place your finger over it) it'll play the tone twice. There's no guarantee a person's mic is working correctly: it may have been damaged, or the device may have been repaired with non-OEM parts, etc.
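Again, pure speculation on my part, but a calibration/self-test along those lines is easy to sketch: play a known tone while recording, then check that the mic actually picked it up at a sane level. A minimal illustration, assuming the cross-platform `sounddevice` library (my choice, not theirs) and an invented `MIN_LEVEL_DBFS` threshold:

```python
import numpy as np
import sounddevice as sd

FS = 44100
TONE_HZ = 1000.0
MIN_LEVEL_DBFS = -40.0  # invented threshold for "mic seems unobstructed"

def mic_self_check() -> bool:
    """Play a short tone and record simultaneously, then check that the
    mic captured meaningful energy at the tone's frequency."""
    t = np.arange(int(0.5 * FS)) / FS
    tone = (0.5 * np.sin(2 * np.pi * TONE_HZ * t)).astype(np.float32)

    # Full-duplex: emit the tone from the speaker, capture via the mic.
    recording = sd.playrec(tone, samplerate=FS, channels=1)
    sd.wait()
    captured = recording[:, 0]

    # Magnitude at the tone frequency, normalised so a full-scale,
    # Hann-windowed sine lands near 0 dBFS.
    mags = np.abs(np.fft.rfft(captured * np.hanning(len(captured))))
    freqs = np.fft.rfftfreq(len(captured), d=1.0 / FS)
    peak = mags[np.argmin(np.abs(freqs - TONE_HZ))]
    level_dbfs = 20 * np.log10(peak / (len(captured) / 4) + 1e-12)

    return level_dbfs >= MIN_LEVEL_DBFS

if __name__ == "__main__":
    # On failure an app could replay the tone and warn the user,
    # which would match the once-vs-twice tone behaviour above.
    print("mic OK" if mic_self_check() else "mic obstructed or faulty")
```

A level check like this wouldn't prove the mic is accurate across the whole spectrum, but it's enough to catch an obstructed, dead or badly repaired mic before a recording starts.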
 