Abstract. This paper deals with the control of partially observable discrete-time stochastic systems.
It introduces and studies Markov Decision Processes with Incomplete Information and with
semiuniform Feller transition probabilities. The important feature of these models is that their
classic reduction to Completely Observable Markov Decision Processes with belief states preserves
the semiuniform Feller continuity of transition probabilities. Under mild assumptions on the cost functions,
optimal policies exist, optimality equations hold, and value iterations converge to optimal values for
these models. In particular, for Partially Observable Markov Decision Processes the results of this
paper imply new sufficient conditions, and generalize several known ones, on transition and observation
probabilities for the weak continuity of transition probabilities of Markov Decision Processes with belief
states, the existence of optimal policies, the validity of optimality equations defining optimal policies,
and the convergence of value iterations to optimal values.
Key words. Markov decision process, incomplete information, semiuniform Feller transition
probabilities, value iterations, optimality equations
MSC codes. Primary, 90C40; Secondary, 90C39