Deep Learning in Ultrasound Imaging

Deep Learning in Medical Ultrasound

Deep learning is a subfield of machine learning that is currently used in a wide range of research areas and is characterised by its ability to identify complex relations and patterns in large amounts of data. The current boom in deep learning can be explained not only by algorithmic advances that allow deeper networks to be trained, but also by the availability of increasingly powerful hardware, especially graphics processing units (GPUs), as well as by the introduction of deep learning frameworks such as TensorFlow, Keras and PyTorch, which have significantly simplified the programming and training of neural networks.

Deep learning involves the design of deep neural networks (DNNs), whose "depth" refers to the number of successive layers chained together. The layers perform operations such as multiplying their inputs with learned weights and introducing non-linearities via activation functions. According to the Universal Approximation Theorem, such networks can in principle approximate continuous functions to arbitrary accuracy; in practice, the required weights are learned by training on vast amounts of data.
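As a minimal illustration of these layer operations, the following NumPy sketch chains two fully connected layers, each multiplying its input with a weight matrix, adding a bias, and applying a non-linearity (all sizes and values are illustrative):

```python
import numpy as np

def relu(x):
    # Non-linear activation: without it, stacked layers would
    # collapse into a single linear map.
    return np.maximum(0.0, x)

def forward(x, params):
    """Forward pass through a small fully connected network.

    Each hidden layer multiplies its input with a weight matrix,
    adds a bias, and applies a non-linearity, as described above.
    """
    h = x
    for W, b in params[:-1]:
        h = relu(h @ W + b)
    W, b = params[-1]
    return h @ W + b  # linear output layer

# Toy network: 4 inputs -> 8 hidden units -> 1 output
rng = np.random.default_rng(0)
params = [
    (rng.normal(size=(4, 8)), np.zeros(8)),
    (rng.normal(size=(8, 1)), np.zeros(1)),
]
print(forward(rng.normal(size=(1, 4)), params))
```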

At the Chair for Medical Engineering, deep learning is used in several application areas:
Deep-learning based tomographic reconstruction methods

The quantitative reconstruction of material parameters using tomographic reconstruction methods such as Contrast Source Inversion or the Kaczmarz method is very computationally intensive and time-consuming. The high time expenditure is a consequence of the fact that an inverse problem has to be solved that is both ill-posed and non-linear; solving it requires a time-consuming iterative data-fitting procedure.
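To make the cost of such iterative solvers concrete, here is a minimal sketch of the classical Kaczmarz iteration for a generic linear(ized) system Ax = b; the toy problem, sizes, and names are illustrative and not the actual tomographic setup:

```python
import numpy as np

def kaczmarz(A, b, n_sweeps=100, x0=None):
    """Classical Kaczmarz iteration for solving A x = b.

    Each update projects the current estimate onto the hyperplane
    defined by a single measurement row; many full sweeps over all
    rows are typically needed, which is what makes such iterative
    solvers slow for large tomographic problems.
    """
    m, n = A.shape
    x = np.zeros(n) if x0 is None else x0.astype(float).copy()
    for _ in range(n_sweeps):
        for i in range(m):
            a_i = A[i]
            x = x + (b[i] - a_i @ x) / (a_i @ a_i) * a_i
    return x

# Toy consistent system: the iteration converges towards x_true.
rng = np.random.default_rng(0)
A = rng.normal(size=(20, 5))
x_true = rng.normal(size=5)
x_rec = kaczmarz(A, A @ x_true)
print(np.linalg.norm(x_rec - x_true))  # small error after 100 sweeps
```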

As a novel, alternative reconstruction approach, deep-learning-based methods are currently being developed. A convolutional neural network (CNN) is trained on simulated ultrasound data, specifically on a coherence measure derived from the data, together with the ground truths to be reconstructed in the form of the speed-of-sound (SoS) distribution. The aim is to enable the neural network to learn the non-linear relationship between the measured ultrasound data and the material parameter distribution in the form of the SoS. Instead of the ring-shaped transducer setups enclosing the medium to be imaged that classical tomographic reconstruction methods require, this method uses conventional linear ultrasound transducers. The challenges of this approach are to generate a sufficiently large and realistic training data set and to transfer the technique from simulated data to real ultrasound data.
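As a rough illustration of this training setup, the following PyTorch sketch maps a coherence-measure map to an SoS map of the same spatial size; the architecture, input sizes, and the random placeholder data are assumptions for demonstration only:

```python
import torch
import torch.nn as nn

class SoSNet(nn.Module):
    """Illustrative CNN mapping a coherence-measure map to a
    speed-of-sound (SoS) distribution of the same spatial size."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, kernel_size=3, padding=1),
        )

    def forward(self, coherence_map):
        return self.net(coherence_map)

model = SoSNet()
loss_fn = nn.MSELoss()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# One supervised training step (random placeholders stand in for
# the simulated coherence maps and ground-truth SoS maps):
coherence = torch.randn(8, 1, 64, 64)
sos_truth = torch.randn(8, 1, 64, 64)
opt.zero_grad()
loss = loss_fn(model(coherence), sos_truth)
loss.backward()
opt.step()
```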

Single-shot Harmonic Imaging

Tissue Harmonic Imaging (THI) is an invaluable tool in clinical ultrasound owing to its enhanced contrast resolution and reduced reverberation clutter compared to fundamental-mode imaging. However, harmonic content separation based on high-pass filtering suffers from potential contrast degradation or reduced axial resolution due to spectral leakage, while nonlinear multi-pulse harmonic imaging schemes, such as amplitude modulation and pulse inversion, suffer from a reduced frame rate and comparatively stronger motion artefacts because at least two pulse-echo acquisitions are required. To address this problem, we propose a deep-learning-based single-shot harmonic imaging technique capable of generating image quality comparable to conventional methods, yet at a higher frame rate and with fewer motion artefacts. The model was evaluated across various targets and samples to illustrate generalizability as well as the possibility and impact of transfer learning. We demonstrate that the proposed approach generates harmonic images from a single firing that are comparable to those from a multi-pulse acquisition and outperform those obtained by linear filtering.
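For context, the pulse-inversion scheme mentioned above can be illustrated with a toy quadratic propagation model: the fundamental cancels when the echoes of a pulse and its inverted copy are summed, but two firings are needed. This sketch is purely illustrative and is not the proposed deep-learning method:

```python
import numpy as np

def echo(tx):
    # Toy nonlinear propagation model: linear term plus a small
    # quadratic term that generates second-harmonic content.
    return tx + 0.1 * tx**2

t = np.linspace(0, 1e-5, 1000)
f0 = 2e6                       # fundamental frequency (illustrative)
pulse = np.sin(2 * np.pi * f0 * t)

e_pos = echo(pulse)            # first firing
e_neg = echo(-pulse)           # second, phase-inverted firing

# Summation cancels the fundamental and doubles the harmonic part,
# but the two acquisitions halve the frame rate and admit motion
# artefacts -- the drawback the single-shot approach avoids.
harmonic = e_pos + e_neg
print(np.allclose(harmonic, 0.2 * pulse**2))  # True
```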

Learning-based Needle Localization in Ultrasound Images

Sonography is commonly used to monitor the insertion of a needle into human tissue. However, good needle visibility in B-mode images is rarely guaranteed, especially with deep punctures and steep insertion angles. We hypothesize that deep neural networks are particularly suitable for processing this nonlinear needle-related information. Moreover, we presume that exploiting the different information contained in the individual plane waves can enhance needle localization compared to the direct use of compounded B-mode images. Therefore, a deep-learning-based framework capable of extracting the information provided by the separate plane waves is employed to enhance the localization of the needle shaft and tip. Using separate plane-wave B-mode images and their difference images substantially improved needle visualization, achieving a 4.5-times lower tip localization error and improving the estimation of the puncture angle by 30.6% compared to the direct use of compounded B-mode images.
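As an illustration of how such network inputs could be assembled, the following sketch stacks the per-angle plane-wave B-mode images with difference images of consecutive steering angles; the exact difference construction used in the original work is an assumption here:

```python
import numpy as np

def needle_network_inputs(plane_wave_stack):
    """Build network inputs from per-angle plane-wave B-mode images.

    plane_wave_stack: array of shape (n_angles, H, W), one B-mode
    image per steering angle, before compounding.
    Returns the individual images together with consecutive-angle
    difference images, in which the angle-dependent needle echo
    stands out against the largely angle-independent tissue.
    """
    diffs = np.diff(plane_wave_stack, axis=0)
    return np.concatenate([plane_wave_stack, diffs], axis=0)

stack = np.random.rand(5, 128, 128)   # placeholder plane-wave images
inputs = needle_network_inputs(stack)
print(inputs.shape)  # (9, 128, 128): 5 images + 4 difference images
```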

Project Information

Project start
Project end
Funded by: not funded (proposal state)
Project Number

Contact