Show simple item record

dc.contributor.author	Wang, Weiyao
dc.contributor.author	Tamhane, Aniruddha
dc.contributor.author	Santos, Christine
dc.contributor.author	Rzasa, John R
dc.contributor.author	Clark, James H
dc.contributor.author	Canares, Therese L
dc.contributor.author	Unberath, Mathias
dc.date.accessioned	2022-03-01T17:38:33Z
dc.date.available	2022-03-01T17:38:33Z
dc.date.issued	2022-02-10
dc.identifier.uri	http://hdl.handle.net/10713/18128
dc.description.abstract	Ear-related concerns and symptoms represent the leading indication for seeking pediatric healthcare attention. Despite the high incidence of such encounters, the diagnostic process for commonly encountered diseases of the middle and external ear presents a significant challenge. Much of this challenge stems from the lack of cost-effective diagnostic testing, which necessitates that the presence or absence of ear pathology be determined clinically. Research has, however, demonstrated considerable variation among clinicians in their ability to accurately diagnose and consequently manage ear pathology. With recent advances in computer vision and machine learning, there is an increasing interest in helping clinicians to accurately diagnose middle and external ear pathology with computer-aided systems. It has been shown that AI has the capacity to analyze a single clinical image captured during the examination of the ear canal and eardrum, from which it can determine the likelihood that a pathognomonic pattern for a specific diagnosis is present. The capture of such an image can, however, be challenging, especially for inexperienced clinicians. To help mitigate this technical challenge, we have developed and tested a method using video sequences. The videos were collected using a commercially available otoscope smartphone attachment in an urban, tertiary-care pediatric emergency department. We present a two-stage method that first identifies valid frames by detecting and extracting eardrum patches from the video sequence, and second performs the proposed shift contrastive anomaly detection (SCAD) to flag the otoscopy video sequences as normal or abnormal. Our method achieves an AUROC of 88.0% on the patient level and also outperforms the average of a group of 25 clinicians in a comparative study, which is the largest such study published to date. We conclude that the presented method represents a promising first step toward the automated analysis of otoscopy video.	en_US
dc.description.uri	https://doi.org/10.3389/fdgth.2021.810427	en_US
dc.language.iso	en	en_US
dc.publisher	Frontiers Media S.A.	en_US
dc.relation.ispartof	Frontiers in Digital Health	en_US
dc.rights	Copyright © 2022 Wang, Tamhane, Santos, Rzasa, Clark, Canares and Unberath.	en_US
dc.subject	anomaly detection	en_US
dc.subject	deep learning	en_US
dc.subject	otoscope	en_US
dc.subject	pediatric healthcare	en_US
dc.subject	self-supervised learning	en_US
dc.title	Pediatric Otoscopy Video Screening With Shift Contrastive Anomaly Detection.	en_US
dc.type	Article	en_US
dc.identifier.doi	10.3389/fdgth.2021.810427
dc.identifier.pmid	35224535
dc.source.journaltitle	Frontiers in Digital Health
dc.source.volume	3
dc.source.beginpage	810427
dc.source.endpage	
dc.source.country	Switzerland
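
For readers who want a concrete picture of the two-stage pipeline summarized in the abstract, the following is a minimal illustrative sketch, not the authors' SCAD implementation: the eardrum-patch detector, the frame embedding, and the nearest-centroid anomaly score are hypothetical stand-ins (the paper's method uses a trained detector and a shift contrastive, self-supervised encoder), and the synthetic data exists only so the script runs end to end and reports a patient-level AUROC with scikit-learn.

# Minimal sketch of the two-stage screening pipeline described in the abstract.
# NOT the authors' SCAD implementation: the detector, the embedding, and the
# anomaly score below are simple stand-ins chosen so the example is runnable.
import numpy as np
from sklearn.metrics import roc_auc_score


def extract_eardrum_patches(video_frames):
    """Stage 1 (stand-in): keep only 'valid' frames that contain an eardrum.

    A real system would run a trained patch detector here; this placeholder
    keeps frames whose mean intensity falls in a plausible range (assumption).
    """
    return [f for f in video_frames if 30.0 < f.mean() < 220.0]


def embed(frame):
    """Stand-in embedding: the paper uses a self-supervised contrastive
    encoder; here we use simple intensity statistics instead."""
    return np.array([frame.mean(), frame.std()])


def sequence_anomaly_score(video_frames, normal_centroid):
    """Stage 2 (stand-in): score each valid frame by its distance to a centroid
    of 'normal' embeddings, then aggregate to a per-video (patient-level) score."""
    patches = extract_eardrum_patches(video_frames)
    if not patches:
        return np.nan  # no usable frames in this sequence
    frame_scores = [np.linalg.norm(embed(p) - normal_centroid) for p in patches]
    return float(np.mean(frame_scores))


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic "videos": 20 normal and 20 abnormal sequences of 8 frames each.
    normal_videos = [rng.normal(120, 20, size=(8, 64, 64)) for _ in range(20)]
    abnormal_videos = [rng.normal(160, 40, size=(8, 64, 64)) for _ in range(20)]

    # Fit the "normal" reference from normal training sequences only.
    train_embeddings = np.array(
        [embed(p) for v in normal_videos[:10] for p in extract_eardrum_patches(v)]
    )
    centroid = train_embeddings.mean(axis=0)

    test_videos = normal_videos[10:] + abnormal_videos
    labels = [0] * 10 + [1] * 20  # 1 = abnormal
    scores = [sequence_anomaly_score(v, centroid) for v in test_videos]
    print("patient-level AUROC on synthetic data:", roc_auc_score(labels, scores))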

