Enhancement of image quality and quantification in PET
AI-based noise reduction
Low noise in PET image data is important for the detectability of small lesions and thus for good image quality and quantification. The amount of noise in PET data depends on several factors inherent to the PET methodology, e.g. the physiological limit on the amount of radiotracer that can be administered. To reduce noise in PET image data, image filters are therefore usually applied, either within the image reconstruction chain of the PET system itself or retrospectively by clinicians in the evaluation software. However, applying such image filters not only reduces image noise but also reduces image resolution, ultimately degrading the detectability of small lesions. In addition, because standard Gaussian filters are commonly used, filtering of PET data usually impairs quantitative accuracy and thus negatively affects quantitative analyses of PET data. As an alternative to traditional Gaussian filtering, we showed that noise in PET image data can be reduced using edge-preserving filter variants, such as the bilateral filter. However, as these special filters require careful and time-consuming manual adjustment of the filter parameters for each individual PET image dataset, the bilateral filter has not yet been introduced into clinical application.
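For illustration, the sketch below contrasts the two filter types on a single image slice: a Gaussian filter weights neighboring voxels by spatial distance only, whereas a bilateral filter additionally weights them by intensity difference, so that edges (e.g. lesion boundaries) are preserved. This is a minimal brute-force NumPy implementation; the parameter values are illustrative and not those used in our work.

```python
import numpy as np

def bilateral_filter(img, sigma_spatial=2.0, sigma_intensity=0.1, radius=4):
    """Edge-preserving bilateral filter on a 2D slice (brute-force sketch)."""
    img = np.asarray(img, dtype=float)
    pad = np.pad(img, radius, mode="reflect")
    out = np.zeros_like(img)
    # Spatial (domain) Gaussian weights, shared by all voxels.
    y, x = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial_w = np.exp(-(x**2 + y**2) / (2 * sigma_spatial**2))
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            patch = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            # Range weights penalize intensity differences relative to the
            # center voxel -- this is what preserves edges. A plain Gaussian
            # filter is the special case where these weights are constant.
            range_w = np.exp(-(patch - img[i, j])**2 / (2 * sigma_intensity**2))
            w = spatial_w * range_w
            out[i, j] = np.sum(w * patch) / np.sum(w)
    return out
```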
The goal was therefore to train a convolutional neural network (CNN) to reproduce the edge-preserving properties of a manually adjusted bilateral filter, thereby maintaining the quantitative accuracy and the improved image quality of PET image data with minimal user intervention.
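A minimal sketch of this training setup is shown below, assuming pairs of unfiltered PET patches and their manually tuned bilateral-filtered counterparts as training data (PyTorch; the network size, loss, and the `train_loader` are illustrative assumptions, not our exact architecture).

```python
import torch
import torch.nn as nn

class DenoiseCNN(nn.Module):
    """Small 3D CNN trained to mimic a tuned bilateral filter."""
    def __init__(self, channels=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(1, channels, 3, padding=1), nn.ReLU(),
            nn.Conv3d(channels, channels, 3, padding=1), nn.ReLU(),
            nn.Conv3d(channels, 1, 3, padding=1),
        )

    def forward(self, x):
        # Residual formulation: the network predicts the noise to subtract.
        return x - self.net(x)

model = DenoiseCNN()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.L1Loss()

# noisy, target: batches of unfiltered PET patches and their manually
# tuned bilateral-filtered counterparts (hypothetical data loader).
for noisy, target in train_loader:
    opt.zero_grad()
    loss = loss_fn(model(noisy), target)
    loss.backward()
    opt.step()
```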
Our deep-learning-based neural network enables noise suppression of PET image data with image quality and quantification properties similar to those of a manually tuned bilateral filter, while at the same time reducing the amount of manual parameter tuning, supporting a smoother clinical integration of such superior filter algorithms.
Key publications:
AI-based respiratory motion correction
In order to achieve acceptable diagnostic image quality, PET image information has to be acquired over a prolonged period of time. Due to patient motion during PET data acquisition, image artefacts can occur, resulting in smeared-out parts of the image. As a consequence, tumors and metastases, e.g. in the lung or the liver dome area, may be blurred by respiratory movements. Moreover, motion also affects the quantitative accuracy of PET image data, thus negatively impacting the detectability of small lesions and biasing quantitative values in PET data. In order to compensate for motion, the patient's respiratory motion signal can be recorded in parallel with the PET acquisition and used to split the PET data into suitable bins (so-called gates), each corresponding to a specific phase of the respiratory cycle. These gates can then be reconstructed separately, resulting in a sequence of images with reduced motion artefacts but also an increased noise level compared to the original image, due to the lower amount of data used for reconstruction.
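A phase-based gating step could look like the following sketch: the recorded respiratory signal is interpolated at each list-mode event time, and the respiratory cycle is split into equally wide phase bins (NumPy; the function and variable names are hypothetical).

```python
import numpy as np

def assign_gates(event_times, resp_signal_times, resp_phase, n_gates=8):
    """Phase-based gating: map each PET list-mode event to a respiratory gate.

    event_times       -- timestamps of list-mode PET events (s)
    resp_signal_times -- timestamps of the recorded respiratory signal (s)
    resp_phase        -- respiratory phase in [0, 1) at those timestamps
    """
    # Interpolate the respiratory phase at each event time.
    phase_at_event = np.interp(event_times, resp_signal_times, resp_phase)
    # Split the cycle into n_gates equally wide phase bins.
    gate_index = np.minimum((phase_at_event * n_gates).astype(int), n_gates - 1)
    # Events sharing a gate index are reconstructed together into one gate.
    return gate_index
```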
Most motion compensation algorithms rely on non-rigid co-registration of all gates to one reference gate to improve the noise characteristics. Co-registration is the process of deforming and aligning one image to match the state of another. Once all gates are co-registered to a single reference gate, they are averaged to produce the final motion-corrected image. Since the resulting image is built using all available data, it possesses noise characteristics similar to the original image. The quality of the motion artefact reduction, however, relies heavily on the quality of the selected co-registration algorithm. Current approaches use iterative co-registration, a lengthy process of multidimensional optimization of an image similarity function performed in small steps. In addition to requiring quite time-consuming computations (up to an hour), the accuracy of such registration methods can be limited by several factors, such as a high image noise level or large motion amplitudes. We have therefore developed an AI-based approach for correcting respiratory motion in oncological PET image data, which is able to perform a one-shot co-registration of the gates and is robust against varying imaging conditions.
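The conventional pipeline can be sketched as follows with SimpleITK: each gate is iteratively co-registered to the reference gate with a non-rigid B-spline transform, warped, and the warped gates are averaged. The metric, optimizer, and mesh size below are assumptions for illustration, not a specific clinical implementation.

```python
import SimpleITK as sitk

def motion_correct(gates, ref_index=0):
    """Iteratively co-register every gate to a reference gate, then
    average the warped gates (illustrative sketch)."""
    fixed = gates[ref_index]
    accum = sitk.Image(fixed)  # start the sum with the reference gate
    for i, moving in enumerate(gates):
        if i == ref_index:
            continue
        reg = sitk.ImageRegistrationMethod()
        reg.SetMetricAsMeanSquares()
        reg.SetOptimizerAsGradientDescent(learningRate=1.0,
                                          numberOfIterations=200)
        # Non-rigid B-spline transform to model respiratory deformation.
        tx = sitk.BSplineTransformInitializer(fixed, [8, 8, 8])
        reg.SetInitialTransform(tx)
        reg.SetInterpolator(sitk.sitkLinear)
        # Iterative multidimensional optimization -- the slow step.
        final_tx = reg.Execute(fixed, moving)
        warped = sitk.Resample(moving, fixed, final_tx, sitk.sitkLinear, 0.0)
        accum = accum + warped
    # Averaging all warped gates restores noise characteristics similar
    # to the full (ungated) acquisition.
    return accum / float(len(gates))
```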
Our solution is based on a convolutional neural network (CNN), which predicts the deformation required for co-registration, with an image transformation module attached to it. This architecture is usually referred to as a Spatial Transformer Network (STN). While training a conventional CNN for co-registration tasks would require knowledge of the ground-truth deformation fields mapping one gate to another, which are not available in vivo, STNs can be trained in an unsupervised manner. We have shown that after training on 80 gated datasets, our AI-based solution is able to efficiently reduce motion-induced artefacts without increasing the noise level. It is competitive with commercially available motion compensation solutions while outperforming them for motion with high amplitudes.
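A minimal sketch of such an unsupervised registration STN in PyTorch is shown below: a CNN predicts a dense displacement field from the stacked gate pair, a differentiable warping module applies it, and the loss only compares the warped gate to the reference gate, so no ground-truth deformations are needed. Layer sizes, the loss, and the `gate_pairs` loader are illustrative assumptions, not our exact architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RegistrationSTN(nn.Module):
    """Spatial Transformer Network for one-shot gate co-registration."""
    def __init__(self, channels=16):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv3d(2, channels, 3, padding=1), nn.ReLU(),
            nn.Conv3d(channels, channels, 3, padding=1), nn.ReLU(),
            nn.Conv3d(channels, 3, 3, padding=1),  # 3D displacement field
        )

    def forward(self, moving, reference):
        # Predict per-voxel displacements from the stacked image pair
        # (displacements expressed in normalized [-1, 1] grid units).
        flow = self.cnn(torch.cat([moving, reference], dim=1))
        # Sampling grid = identity grid + predicted displacement.
        n = moving.shape[0]
        identity = F.affine_grid(
            torch.eye(3, 4).unsqueeze(0).expand(n, -1, -1).to(moving),
            moving.shape, align_corners=False)
        grid = identity + flow.permute(0, 2, 3, 4, 1)
        # Differentiable warping module -> enables unsupervised training.
        return F.grid_sample(moving, grid, align_corners=False)

# Unsupervised training: the loss only compares the warped gate with the
# reference gate, so no ground-truth deformation fields are required.
model = RegistrationSTN()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
for moving, reference in gate_pairs:  # hypothetical data loader
    warped = model(moving, reference)
    loss = F.mse_loss(warped, reference)  # + smoothness penalty in practice
    opt.zero_grad()
    loss.backward()
    opt.step()
```

At inference time, this predicts the deformation in a single forward pass, which is the "one-shot" property that replaces the lengthy iterative optimization described above.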
Key publications: