Date of Award

August 2023

Degree Type

Dissertation

Degree Name

Doctor of Philosophy

Department

Mathematics

First Advisor

Richard H Stockbridge

Keywords

control, probability, stochastic

Abstract

Under consideration are convergence results relating the optimality criteria of two infinite-horizon stochastic control problems: the long-term average problem and the $\alpha$-discounted problem, where $\alpha \in (0,1]$ is a given discount rate. The objects under control are the stochastic processes that arise as (relaxed) solutions of a controlled martingale problem; such controlled processes, subject to a given budget constraint, comprise the feasible sets for the two stochastic control problems.
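As a rough sketch of the two criteria (the notation here is illustrative and is not taken from the dissertation): for a controlled process $(X_t, \Lambda_t)$ with running cost $c$, the long-term average and $\alpha$-discounted criteria are typically of the form
$$\limsup_{t \to \infty} \frac{1}{t}\, E\!\left[ \int_0^t c(X_s, \Lambda_s)\, ds \right] \qquad \text{and} \qquad E\!\left[ \int_0^\infty e^{-\alpha s}\, c(X_s, \Lambda_s)\, ds \right],$$
each to be minimized over the feasible controlled processes.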

In this dissertation, we define and characterize the expected occupation measures associated with each of these stochastic control problems, and then reformulate each problem as an equivalent linear program over a space of such measures. We then establish sufficient conditions under which the long-term average linear program can be "asymptotically approximated" by the $\alpha$-parameterized family of (suitably normalized) $\alpha$-discounted linear programs as $\alpha \downarrow 0$; this approach is commonly referred to as the vanishing discount method. To state these conditions precisely, our analysis turns to set-valued mappings called correspondences. In particular, once the appropriate framework is established, our main results can be stated in a manner similar to that of Berge's Theorem.
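As a hedged illustration of the kind of normalization that makes this comparison possible (again with illustrative notation rather than the dissertation's), the $\alpha$-discounted expected occupation measure of a state-control set $A$ is typically taken to be
$$\mu_\alpha(A) = \alpha\, E\!\left[ \int_0^\infty e^{-\alpha s}\, \mathbf{1}_A(X_s, \Lambda_s)\, ds \right],$$
which is a probability measure, so that the $\alpha$-discounted criterion equals $\alpha^{-1} \int c\, d\mu_\alpha$; the vanishing discount method then compares the normalized values $\int c\, d\mu_\alpha$ with the long-term average value as $\alpha \downarrow 0$.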

Included in

Mathematics Commons
