dc.contributor.author    Jiang, Zhuoran
dc.contributor.author    Zhang, Zeyu
dc.contributor.author    Chang, Yushi
dc.contributor.author    Ge, Yun
dc.contributor.author    Yin, Fang-Fang
dc.contributor.author    Ren, Lei
dc.date.accessioned    2021-11-18T19:05:19Z
dc.date.available    2021-11-18T19:05:19Z
dc.date.issued    2021-12-01
dc.identifier.uri    http://hdl.handle.net/10713/17162
dc.description.abstract    Background: Acquiring sparse-view cone-beam computed tomography (CBCT) is an effective way to reduce the imaging dose. However, images reconstructed by the conventional filtered back-projection method suffer from severe streak artifacts due to projection under-sampling. Existing deep learning models have demonstrated feasibility in restoring volumetric structures from highly under-sampled images. However, because of inter-patient variability, they fail to restore patient-specific details with the common restoration pattern learned from group data. Patient-specific models trained on intra-patient data have proven effective at restoring these details, but such models must be retrained for each individual patient. It is therefore highly desirable to develop a generalized model that can utilize patient-specific information for under-sampled image augmentation.
Methods: In this study, we proposed a merging-encoder convolutional neural network (MeCNN) to realize prior image-guided augmentation of under-sampled CBCT. Instead of learning patient-specific structures, the proposed model learns a generalized pattern for utilizing the patient-specific information in the prior images to facilitate enhancement of the under-sampled images. Specifically, the MeCNN consists of a merging-encoder and a decoder. The merging-encoder extracts image features from both the prior CT images and the under-sampled CBCT images, and merges the features at multiple scales via deep convolutions. The merged features are then connected to the decoder via shortcuts to yield high-quality CBCT images. The proposed model was tested on both simulated and clinical CBCTs. The predicted CBCT images were evaluated qualitatively and quantitatively in terms of image quality and tumor localization accuracy. The Mann-Whitney U test was conducted for the statistical analysis, with P<0.05 considered statistically significant.
Results: The proposed model yields CT-like, high-quality CBCT images from only 36 half-fan projections. Compared with other methods, CBCT images augmented by the proposed model have significantly lower intensity errors, significantly higher peak signal-to-noise ratio, and significantly higher structural similarity with respect to the ground-truth images. The proposed method also significantly reduced the 3D distance of the CBCT-based tumor localization errors. In addition, the CBCT augmentation is nearly real-time.
Conclusions: With the prior-image guidance, the proposed method is effective in reconstructing high-quality CBCT images from highly under-sampled projections, considerably reducing the imaging dose and improving the clinical utility of the CBCT. © Quantitative Imaging in Medicine and Surgery. All rights reserved.    en_US
dc.description.sponsorship    National Institutes of Health    en_US
dc.description.uri    https://doi.org/10.21037/qims-21-114    en_US
dc.language.iso    en    en_US
dc.publisher    AME Publishing Company    en_US
dc.relation.ispartof    Quantitative Imaging in Medicine and Surgery    en_US
dc.subject    Deep learning    en_US
dc.subject    Merging-encoder    en_US
dc.subject    Prior-image guidance    en_US
dc.subject    Under-sampled CBCT augmentation    en_US
dc.title    Prior image-guided cone-beam computed tomography augmentation from under-sampled projections using a convolutional neural network    en_US
dc.type    Article    en_US
dc.identifier.doi    10.21037/qims-21-114
dc.source.volume    11
dc.source.issue    12
dc.source.beginpage    4767
dc.source.endpage    4780
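
The abstract describes the MeCNN architecture only at a high level: a merging-encoder extracts features from both the prior CT and the under-sampled CBCT, merges them at multiple scales via convolutions, and passes the merged features to a decoder through shortcuts. The sketch below illustrates that idea in PyTorch. It is a minimal reading of the description, not the authors' implementation: the 2D (rather than 3D) convolutions, the three-scale depth, and the channel widths are all assumptions made for illustration.

```python
import torch
import torch.nn as nn

class MergingEncoderCNN(nn.Module):
    """Illustrative MeCNN-style sketch: two encoder branches (prior CT,
    under-sampled CBCT), per-scale feature merging, and a decoder fed
    by the merged features via skip connections. Depth, channel widths,
    and 2D convolutions are assumptions, not the published design."""

    def __init__(self, ch=(16, 32, 64)):
        super().__init__()
        def block(cin, cout):
            return nn.Sequential(
                nn.Conv2d(cin, cout, 3, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(cout, cout, 3, padding=1), nn.ReLU(inplace=True))
        # one encoder branch per input modality
        self.enc_cbct = nn.ModuleList(
            [block(1, ch[0]), block(ch[0], ch[1]), block(ch[1], ch[2])])
        self.enc_prior = nn.ModuleList(
            [block(1, ch[0]), block(ch[0], ch[1]), block(ch[1], ch[2])])
        # merge the two feature streams at each scale by convolution
        self.merge = nn.ModuleList(
            [nn.Conv2d(2 * c, c, 3, padding=1) for c in ch])
        self.pool = nn.MaxPool2d(2)
        # decoder with skip connections from the merged features
        self.up2 = nn.ConvTranspose2d(ch[2], ch[1], 2, stride=2)
        self.dec2 = block(2 * ch[1], ch[1])
        self.up1 = nn.ConvTranspose2d(ch[1], ch[0], 2, stride=2)
        self.dec1 = block(2 * ch[0], ch[0])
        self.out = nn.Conv2d(ch[0], 1, 1)

    def forward(self, cbct, prior):
        merged = []
        x, p = cbct, prior
        for i in range(3):
            x = self.enc_cbct[i](x)
            p = self.enc_prior[i](p)
            # concatenate the two branches and fuse them at this scale
            m = torch.relu(self.merge[i](torch.cat([x, p], dim=1)))
            merged.append(m)
            if i < 2:
                x, p = self.pool(x), self.pool(p)
        # decode from the deepest merged feature, using the shallower
        # merged features as skip connections
        d = self.dec2(torch.cat([self.up2(merged[2]), merged[1]], dim=1))
        d = self.dec1(torch.cat([self.up1(d), merged[0]], dim=1))
        return self.out(d)

# smoke test on random tensors standing in for a CBCT slice and a prior CT slice
net = MergingEncoderCNN()
y = net(torch.randn(1, 1, 64, 64), torch.randn(1, 1, 64, 64))
print(y.shape)  # torch.Size([1, 1, 64, 64])
```

Merging the two feature streams at every encoder scale, rather than only concatenating the inputs once, is what lets the decoder draw on prior-image detail at every resolution; this multi-scale fusion is the property the abstract attributes to the merging-encoder.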

