Date of Award

12-1-2016

Degree Type

Dissertation

Degree Name

Doctor of Philosophy

Department

Engineering

First Advisor

Zeyun Yu

Committee Members

Ichiro Suzuki, Jun Zhang, Dexuan Xie, Roshan M. Dsouza, Zeyun Yu

Abstract

With the increasing use of various imaging techniques (such as CT, MRI, and PET) in medical fields, there is often a great need to computationally extract the boundaries of objects of interest, a process commonly known as image segmentation. While numerous approaches to automatic/semi-automatic image segmentation have been proposed in the literature, most of them operate on image pixels. The number of pixels in an image can be huge, especially for 3D imaging volumes, which makes pixel-based image segmentation inevitably slow. On the other hand, 3D mesh generation from imaging data has become important not only for visualization and quantification but, more critically, for finite element based numerical simulation. Traditionally, image-based mesh generation follows this procedure: (1) image boundary segmentation, (2) surface mesh generation from the segmented boundaries, and (3) volumetric (e.g., tetrahedral) mesh generation from the surface meshes. These three major steps have commonly been treated as separate algorithms, and hence the image information, once segmented, is no longer considered during mesh generation.
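The sketch below illustrates, in rough outline only, the traditional three-step pipeline described above; it uses scikit-image as an assumed toolkit (an Otsu threshold for step 1 and marching cubes for step 2), and merely indicates step 3, which is typically delegated to an external tetrahedral mesher. It is not the dissertation's implementation.

```python
# Minimal sketch of the conventional image-to-mesh pipeline (illustrative only).
import numpy as np
from skimage import filters, measure

def traditional_pipeline(volume: np.ndarray):
    # (1) Image boundary segmentation: a simple global Otsu threshold here.
    level = filters.threshold_otsu(volume)
    binary = volume > level

    # (2) Surface mesh generation from the segmented boundary (marching cubes).
    verts, faces, normals, values = measure.marching_cubes(binary.astype(float), level=0.5)

    # (3) Volumetric (tetrahedral) meshing would take (verts, faces) as input,
    #     e.g. via an external mesher; note that the image itself is no longer
    #     consulted at this stage, which is the limitation discussed above.
    return verts, faces
```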

In this thesis, we investigate a super-pixel based scheme that integrates image segmentation and mesh generation into a single method, making mesh generation a truly image-incorporated approach. Our method, called image content-aware mesh generation, consists of several main steps. First, we generate a set of feature-sensitive, adaptively distributed points from 2D grayscale images or 3D volumes. A novel image edge enhancement method based on randomized shortest paths is introduced as an optional way to generate the feature boundary map in the mesh node generation step. Second, a Delaunay triangulation generator (2D) or a tetrahedral mesh generator (3D) is used to produce a 2D triangulation or a 3D tetrahedral mesh from these points. The resulting triangulation (or tetrahedralization) provides an adaptive partitioning of the given image (or volume). Each cluster of pixels within a triangle (or voxels within a tetrahedron) is called a super-pixel; each super-pixel forms a node of a graph, and adjacent super-pixels define an edge of the graph. A graph-cut method is then applied to this graph to define the boundary between two subsets of nodes, yielding good boundary segmentations together with high quality meshes. Because the number of elements (super-pixels) is far smaller than the number of pixels in an image, the super-pixel based segmentation method greatly improves segmentation speed, making real-time feature detection feasible. In addition, incorporating image segmentation into mesh generation makes the generated mesh well adapted to image features, a desired property known as feature-preserving mesh generation.
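To make the super-pixel idea concrete, the following 2D sketch builds one graph node per Delaunay triangle and separates foreground from background with an s-t minimum cut. It is an illustrative assumption-laden toy, not the dissertation's algorithm: the seed intensities `fg_mean`/`bg_mean`, the Gaussian edge weights, and the helper name `superpixel_graph_cut` are all hypothetical choices, and SciPy/NetworkX stand in for whatever mesh and max-flow libraries the thesis actually uses.

```python
# Toy 2-D super-pixel graph cut: points -> Delaunay triangles -> graph -> min cut.
import numpy as np
import networkx as nx
from scipy.spatial import Delaunay

def superpixel_graph_cut(image, points, fg_mean, bg_mean, beta=10.0):
    tri = Delaunay(points)

    # Assign every pixel to the triangle (super-pixel) that contains it.
    ys, xs = np.mgrid[0:image.shape[0], 0:image.shape[1]]
    pix = np.column_stack([xs.ravel(), ys.ravel()])
    labels = tri.find_simplex(pix)                      # -1 = outside the hull

    # Mean intensity per super-pixel (the node feature used for the weights).
    n_tri = len(tri.simplices)
    means = np.array([image.ravel()[labels == t].mean() if np.any(labels == t) else 0.0
                      for t in range(n_tri)])

    g = nx.Graph()
    # t-links: how well each super-pixel matches the assumed fg/bg seed intensities.
    for t in range(n_tri):
        g.add_edge('s', t, capacity=np.exp(-beta * (means[t] - fg_mean) ** 2))
        g.add_edge(t, 'T', capacity=np.exp(-beta * (means[t] - bg_mean) ** 2))
    # n-links: adjacent triangles with similar intensity are expensive to cut apart.
    for t, nbrs in enumerate(tri.neighbors):
        for n in nbrs:
            if n != -1:
                g.add_edge(t, n, capacity=np.exp(-beta * (means[t] - means[n]) ** 2))

    # Minimum s-t cut labels each triangle (super-pixel) as foreground or background.
    _, (fg_side, _) = nx.minimum_cut(g, 's', 'T', capacity='capacity')
    return fg_side - {'s'}                              # triangle ids labeled foreground
```

Because the graph has one node per triangle rather than one per pixel, the cut is computed over only a few thousand nodes even for large images, which is the source of the speedup claimed above.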
