Title

Generative adversarial network prediction of optical properties from single images with structured or homogeneous illumination

Conference Dates

June 2-6, 2019

Abstract

Figure 1: Example results for an ex vivo human sample. Row (a): RGB image and raw structured-illumination input; (b) SFDI ground truth; (c) SSOP results; (d) GANPOP results. All values in mm⁻¹.

We present a deep learning approach for mapping absorption and scattering coefficients from widefield images, called Generative Adversarial Network Prediction of Optical Properties (GANPOP). This approach is purely data-driven: no explicit model of light-tissue interaction is assumed. We obtain ground truth reduced scattering and absorption coefficient maps from tissue phantoms, ex vivo pig gastrointestinal tissues, and ex vivo human esophagus samples using Spatial Frequency Domain Imaging (SFDI). SFDI was performed at 660 nm with a three-phase (0, 2π/3, and 4π/3) and two-spatial-frequency (0 and 0.2 mm⁻¹) acquisition strategy. The training set comprised nine tissue phantoms and seven ex vivo human esophagus samples; the test set comprised nine phantoms, four pig samples, and one human esophagus specimen. Patches of paired optical property maps and raw acquired images were used to simultaneously train a Generator, which estimates realistic optical properties from a conventional image, and a Discriminator, which learns to classify real versus predicted image pairs. After training, the Generator estimates optical properties from a single widefield image together with three constant calibration images. Compared to model-based approaches such as Single Snapshot Optical Properties (SSOP), GANPOP estimates both reduced scattering and absorption coefficients in human esophagus with an approximately 50% smaller root-mean-square error when given a 0.2 mm⁻¹ spatial frequency input image (Figure 1). Notably, GANPOP estimates from a homogeneous-illumination image are closer to the SFDI ground truth than SSOP estimates from structured light. When tested on pig tissues, a species entirely absent from the training set, GANPOP estimates optical properties with errors comparable to SSOP. These results suggest that, given relevant training data, GANPOP can improve the accuracy of single-snapshot optical property estimation from structured-illumination images. Furthermore, GANPOP may enable optical property mapping from conventional flood-illuminated images, raising the possibility of recovering optical properties with unmodified commercial endoscopes.
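For reference, the three-phase, two-frequency acquisition described above is demodulated in standard SFDI processing with a well-known formula; the Python (NumPy) sketch below illustrates only that demodulation step, and the function and variable names are ours, not from the abstract:

```python
import numpy as np

def demodulate_ac(i1, i2, i3):
    """Standard three-phase SFDI demodulation.

    i1, i2, i3: floating-point images acquired at phase offsets
    0, 2*pi/3, and 4*pi/3 of the projected pattern.
    Returns the AC modulation amplitude at the projected spatial frequency.
    """
    return (np.sqrt(2.0) / 3.0) * np.sqrt(
        (i1 - i2) ** 2 + (i2 - i3) ** 2 + (i3 - i1) ** 2
    )

def demodulate_dc(i1, i2, i3):
    """DC (planar, 0 mm^-1) amplitude is the mean of the three phases."""
    return (i1 + i2 + i3) / 3.0
```

In standard SFDI, these demodulated amplitudes at the two spatial frequencies are calibrated against a reference phantom and inverted to optical properties; GANPOP instead learns the mapping from raw images directly.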
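The adversarial training is described only at a high level above. The sketch below shows one common way such a conditional GAN is trained (a pix2pix-style objective combining an adversarial term with an L1 fidelity term). The network interfaces, the `lambda_l1` weight, and the specific loss choices are assumptions for illustration, not details from the abstract:

```python
import torch
import torch.nn as nn

bce = nn.BCEWithLogitsLoss()  # adversarial loss on discriminator logits
l1 = nn.L1Loss()              # pixelwise fidelity to ground-truth optical properties
lambda_l1 = 100.0             # fidelity weight (assumed; tune per dataset)

def gan_losses(generator, discriminator, image, op_map):
    """One conditional-GAN step: widefield image -> predicted optical-property map.

    generator, discriminator: assumed nn.Module instances; image and op_map
    are batched tensors of the input image and ground-truth property map.
    """
    fake_op = generator(image)

    # Discriminator: classify real vs. predicted (image, property-map) pairs.
    d_real = discriminator(torch.cat([image, op_map], dim=1))
    d_fake = discriminator(torch.cat([image, fake_op.detach()], dim=1))
    d_loss = bce(d_real, torch.ones_like(d_real)) + \
             bce(d_fake, torch.zeros_like(d_fake))

    # Generator: fool the discriminator while staying close to ground truth.
    d_fake_for_g = discriminator(torch.cat([image, fake_op], dim=1))
    g_loss = bce(d_fake_for_g, torch.ones_like(d_fake_for_g)) + \
             lambda_l1 * l1(fake_op, op_map)
    return d_loss, g_loss
```

Pairing the input image with the property map in the discriminator is what makes the GAN conditional: the discriminator judges whether a property map is plausible for that particular image, not merely plausible in isolation.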
