Date of Award

August 2014

Degree Type

Dissertation

Degree Name

Doctor of Philosophy

Department

Mathematics

First Advisor

Yi Ming Zou

Committee Members

Allen Bell, Jeb Willenbring, Gabriella Pinter, Peter Hinow

Abstract

The study of Newton's method in complex-valued neural networks (CVNNs) faces many difficulties. In this dissertation, we derive Newton's method backpropagation algorithms for complex-valued holomorphic multilayer perceptrons (MLPs), and we investigate the convergence of the one-step Newton steplength algorithm for the Newton's method minimization of real-valued functions of complex variables. The problem of singular Hessian matrices poses an obstacle to the use of Newton's method backpropagation to train CVNNs. We approach this problem by developing an adaptive underrelaxation factor algorithm that avoids singularity of the Hessian matrices in the minimization of real-valued polynomial functions of complex variables.
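To illustrate the kind of update an underrelaxation factor modifies, the following is a minimal sketch of a damped (underrelaxed) Newton step in Python. It is not the dissertation's adaptive algorithm: the function name `underrelaxed_newton_step`, the fixed factor `alpha`, and the pseudo-inverse fallback are illustrative assumptions, and the complex-calculus details (e.g., how the gradient and Hessian of a real-valued function of complex weights are formed) are left to the supplied callables.

```python
import numpy as np

def underrelaxed_newton_step(grad, hess, w, alpha=0.5):
    """One damped (underrelaxed) Newton update for minimizing a
    real-valued cost over a complex weight vector w.

    grad  : callable returning the gradient at w (complex ndarray)
    hess  : callable returning the Hessian at w (complex square ndarray)
    alpha : underrelaxation factor in (0, 1]; values below 1 shorten the
            step, which is one way to keep iterates away from points
            where the Hessian becomes singular.
    """
    H = hess(w)
    g = grad(w)
    # Solve H * delta = g; fall back to a pseudo-inverse if H is
    # numerically singular.
    try:
        delta = np.linalg.solve(H, g)
    except np.linalg.LinAlgError:
        delta = np.linalg.pinv(H) @ g
    return w - alpha * delta
```

In the dissertation's setting the factor is chosen adaptively rather than held fixed; the sketch above only shows where such a factor enters the update.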

To provide experimental support for our algorithms, we compare sigmoidal activation functions with their Taylor polynomial approximations, training with the Newton and pseudo-Newton backpropagation algorithms developed here as well as the standard gradient descent backpropagation algorithm. Our experiments indicate that the Newton's method based algorithms, combined with polynomial activation functions, significantly reduce the number of training iterations required compared with the existing algorithms. We also test our underrelaxation factor algorithm on a small-scale polynomial neuron and a polynomial MLP. Finally, we investigate the application of an algebraic root-finding technique to a polynomial MLP to develop a theoretical framework for locating initial weight vectors that guarantee successful training.
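As a small illustration of replacing a sigmoidal activation by a polynomial one, the sketch below truncates the Taylor series of tanh (one common sigmoidal choice) about 0. The specific activation functions and truncation degrees used in the dissertation's experiments are not stated in this abstract, so the function name `tanh_taylor` and the degree-5 cutoff are assumptions made for illustration only.

```python
import numpy as np

def tanh_taylor(z, degree=5):
    """Taylor polynomial approximation of tanh about 0, which is
    holomorphic and polynomial, so it admits complex inputs:
    tanh(z) ~ z - z**3/3 + 2*z**5/15 for degree 5."""
    coeffs = {1: 1.0, 3: -1.0 / 3.0, 5: 2.0 / 15.0}
    return sum(c * z**k for k, c in coeffs.items() if k <= degree)

# Example: evaluate on a complex input.
print(tanh_taylor(0.3 + 0.2j))
```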
