Generative Adversarial Networks: Ultrasound Image Translation

Mentor 1

Istvan Lauko

Mentor 2

Adam Honts

Start Date

1-5-2020 12:00 AM

Description

Low-cost, highly portable ultrasound devices under development are designed around low-frequency ultrasound transducers that yield relatively low image quality after reconstruction. There is strong interest in developing software that improves such images and approximates the image quality of high-end devices equipped with high-frequency transducers. In this research, we use deep learning to translate images produced with a low-frequency transducer into the higher-frequency image domain, countering the hardware limitations. The type of deep learning we experiment with is the Generative Adversarial Network (GAN), built on convolutional neural networks; this network structure is commonly used for image translation tasks. Specifically, using unpaired ultrasound images from the high- and low-frequency domains, collected from volunteers by the staff of UWM's sonography program, we aim to help better identify cephalic veins through GAN-enhanced imaging. We present results, which could readily translate to other anatomies, showing substantial, structurally stable enhancement of image quality over low-frequency imaging, compensating for the lack of high-end hardware components.
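Because the high- and low-frequency images are unpaired, this kind of translation is typically trained with a cycle-consistency objective: translating an image to the other domain and back should recover the original. The sketch below illustrates that loss with toy stand-in "generators" (hypothetical linear functions; the abstract does not specify the actual architecture, which would be convolutional networks):

```python
import numpy as np

# Toy stand-ins for the two generators in an unpaired-translation setup.
# These are hypothetical placeholders, not the project's actual networks.
def g_low_to_high(x, gain=2.0):
    """Map a 'low-frequency' image toward the high-frequency domain."""
    return gain * x

def g_high_to_low(y, gain=2.0):
    """Map a 'high-frequency' image back toward the low-frequency domain."""
    return y / gain

def cycle_consistency_loss(x_low, y_high):
    """L1 cycle loss: x -> G(x) -> F(G(x)) should return to x, and likewise
    for the high-frequency image through the reverse cycle."""
    forward_cycle = g_high_to_low(g_low_to_high(x_low))
    backward_cycle = g_low_to_high(g_high_to_low(y_high))
    return (np.abs(forward_cycle - x_low).mean()
            + np.abs(backward_cycle - y_high).mean())

# Toy 8x8 "ultrasound" images standing in for the two domains.
rng = np.random.default_rng(0)
x_low = rng.random((8, 8))
y_high = rng.random((8, 8))

loss = cycle_consistency_loss(x_low, y_high)
print(loss)  # exact inverse generators -> loss is 0.0
```

In training, this cycle term is added to the adversarial losses so the generators learn mappings between domains without ever seeing paired low/high images of the same anatomy.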
