Self-contained deep learning-based boosting of 4D cone-beam CT reconstruction

Abstract

PURPOSE: Four-dimensional cone-beam computed tomography (4D CBCT) imaging has been suggested as a solution to account for interfraction motion variability of moving targets such as the lung and liver during radiotherapy (RT). However, due to severe sparse-view sampling artifacts, current 4D CBCT data lack sufficient image quality for accurate motion quantification. In the present paper, we introduce a deep learning-based framework for boosting the image quality of 4D CBCT image data that can be combined with any CBCT reconstruction approach and clinical 4D CBCT workflow.

METHODS: Boosting is achieved by learning the relationship between so-called sparse-view pseudo-time-average CBCT images, obtained by a projection selection scheme introduced to mimic the sparse-view artifact characteristics of the phase images, and corresponding time-average CBCT images obtained by full-view reconstruction. The employed convolutional neural network architecture is the residual dense network (RDN). The underlying hypothesis is that the RDN learns the appearance of the streaking artifacts that are typical for 4D CBCT phase images and removes them without influencing the anatomical image information. After training, the RDN can be applied to the 4D CBCT phase images to enhance their image quality without affecting the contained temporal and motion information. Unlike existing approaches, no patient-specific prior knowledge about anatomy or motion characteristics is needed; that is, the proposed approach is self-contained.
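To make the architecture concrete, the following is a minimal PyTorch sketch of a residual dense block, the building block of the RDN. The channel width, growth rate, and layer count are illustrative assumptions, not the configuration used in the paper.

```python
import torch
import torch.nn as nn

class ResidualDenseBlock(nn.Module):
    """One RDN building block: densely connected convolutions, local
    feature fusion, and a local residual connection (illustrative sizes)."""

    def __init__(self, channels: int = 64, growth: int = 32, num_layers: int = 4):
        super().__init__()
        self.layers = nn.ModuleList()
        in_ch = channels
        for _ in range(num_layers):
            # Each conv layer sees the concatenation of all preceding feature maps
            self.layers.append(nn.Sequential(
                nn.Conv2d(in_ch, growth, kernel_size=3, padding=1),
                nn.ReLU(inplace=True),
            ))
            in_ch += growth
        # 1x1 conv fuses the dense features back to the block's input width
        self.fusion = nn.Conv2d(in_ch, channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        features = [x]
        for layer in self.layers:
            features.append(layer(torch.cat(features, dim=1)))
        # Local residual learning: the block only has to model the artifact
        # component, leaving the anatomical content in the skip connection
        return x + self.fusion(torch.cat(features, dim=1))
```

The local residual connection matches the stated hypothesis: the network learns the streak artifact appearance while the underlying anatomy passes through unchanged.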

RESULTS: Application of the trained network to reconstructed phase images of an external data set (SPARE challenge) as well as in-house 4D CBCT patient and motion phantom data sets reduces the phase image streak artifacts consistently for all patients and state-of-the-art reconstruction approaches. Using the SPARE data set, we show that the root mean squared error with respect to the ground truth data provided by the challenge is reduced by approximately 50%, while the normalized cross correlation between reconstruction and ground truth is improved by up to 10%. Compared to a direct deep learning-based 4D CBCT to 4D CT mapping, the proposed method performs better because no potentially inappropriate prior knowledge about the patient anatomy and physiology is taken into account. Moreover, the image quality enhancement leads to more plausible motion fields estimated by deformable image registration (DIR) in the 4D CBCT image sequences.
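For reference, below is a minimal sketch of the two reported evaluation metrics, assuming NumPy volumes of identical shape; it illustrates the metric definitions and is not the SPARE challenge's evaluation code.

```python
import numpy as np

def rmse(recon: np.ndarray, reference: np.ndarray) -> float:
    # Root mean squared error between reconstruction and ground truth
    return float(np.sqrt(np.mean((recon - reference) ** 2)))

def ncc(recon: np.ndarray, reference: np.ndarray) -> float:
    # Normalized cross correlation; 1.0 indicates identical images
    # up to a linear intensity transformation
    a = recon - recon.mean()
    b = reference - reference.mean()
    return float(np.sum(a * b) / (np.linalg.norm(a) * np.linalg.norm(b)))
```

Under these definitions, the reported roughly 50% lower RMSE and up to 10% higher NCC after boosting indicate a substantially closer match to the ground truth volumes.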

CONCLUSIONS: The presented framework enables significant boosting of 4D CBCT image quality as well as improved DIR and motion field consistency. Thus, the proposed method facilitates the extraction of motion information from severely artifact-affected images, which is one of the key challenges of integrating 4D CBCT imaging into RT workflows.

Bibliographic data

Original language: English
ISSN: 0094-2405
DOIs
Status: Published - 11.2020

Notes from the dean's office

© 2020 The Authors. Medical Physics published by Wiley Periodicals LLC on behalf of American Association of Physicists in Medicine.

PubMed 33063329