
    Generative Self-training for Cross-Domain Unsupervised Tagged-to-Cine MRI Synthesis

Author: Liu, Xiaofeng; Xing, Fangxu; Stone, Maureen; Zhuo, Jiachen; Reese, Timothy; Prince, Jerry L.; El Fakhri, Georges; Woo, Jonghye
Date: 2021-09-21
Journal: Lecture Notes in Computer Science
Publisher: Springer Nature
Type: Article
    
See at:
https://doi.org/10.1007/978-3-030-87199-4_13
https://arxiv.org/abs/2106.12499
    Abstract
Self-training-based unsupervised domain adaptation (UDA) has shown great potential for addressing domain shift when a deep learning model trained on a source domain is applied to unlabeled target domains. However, while self-training UDA has demonstrated its effectiveness on discriminative tasks, such as classification and segmentation, via reliable pseudo-label selection based on the softmax discrete histogram, self-training UDA for generative tasks, such as image synthesis, has not been fully investigated. In this work, we propose a novel generative self-training (GST) UDA framework with continuous-value prediction and a regression objective for cross-domain image synthesis. Specifically, we propose to filter the pseudo-labels with an uncertainty mask and to quantify the predictive confidence of generated images with practical variational Bayes learning. Fast test-time adaptation is achieved by a round-based alternating optimization scheme. We validated our framework on the tagged-to-cine magnetic resonance imaging (MRI) synthesis problem, where the source- and target-domain datasets were acquired from different scanners or centers. Extensive validations were carried out to compare our framework against popular adversarial-training UDA methods. The results show that our GST, given tagged MRI of test subjects in new target domains, improved synthesis quality by a large margin compared with the adversarial-training UDA methods.
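The core idea of uncertainty-masked pseudo-labeling for a generative (regression) task can be sketched as follows. This is a minimal illustration, not the authors' implementation: the toy generator, the use of Monte Carlo dropout as the variational Bayes approximation of predictive variance, the variance threshold, the sample count, and the function name pseudo_labels_with_uncertainty are all assumptions made for the example.

```python
import torch
import torch.nn as nn

# Hypothetical image-to-image generator with dropout, so that stochastic
# forward passes give Monte Carlo samples of the prediction.
generator = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Dropout2d(0.2),
    nn.Conv2d(16, 1, 3, padding=1),
)

def pseudo_labels_with_uncertainty(gen, x, n_samples=10, threshold=0.05):
    """Produce continuous-valued pseudo-labels for unlabeled target images
    plus an uncertainty mask that keeps only low-variance (confident) pixels."""
    gen.train()  # keep dropout active during sampling
    with torch.no_grad():
        samples = torch.stack([gen(x) for _ in range(n_samples)])
    mean = samples.mean(dim=0)        # pseudo-label (regression target)
    var = samples.var(dim=0)          # per-pixel predictive variance
    mask = (var < threshold).float()  # 1 = keep, 0 = filter out
    return mean, mask

# One round of test-time adaptation: regress the generator toward its own
# confident pseudo-labels on target-domain images (placeholder batch).
target_batch = torch.randn(4, 1, 64, 64)
pseudo, mask = pseudo_labels_with_uncertainty(generator, target_batch)
opt = torch.optim.Adam(generator.parameters(), lr=1e-4)
pred = generator(target_batch)
loss = ((pred - pseudo) ** 2 * mask).sum() / mask.sum().clamp(min=1.0)
opt.zero_grad(); loss.backward(); opt.step()
```

In the round-based scheme described in the abstract, pseudo-label generation and this masked regression step would alternate over several rounds; the threshold and number of Monte Carlo samples shown here are illustrative choices, not values from the paper.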
Sponsors: National Institutes of Health
Keywords: generative self-training (GST); unsupervised domain adaptation; Artificial Intelligence; Machine Learning
Identifier to cite or link to this item: http://hdl.handle.net/10713/16877
Collections: UMB Open Access Articles

