Title: Robust optimal policies for Markov decision processes with safety-threshold constraints
Authors: Dimitrova, Rayna
Fu, Jie
Topcu, Ufuk
First Published: 29-Dec-2016
Presented at: 55th IEEE Conference on Decision and Control (CDC), Las Vegas, NV, USA
Start Date: 12-Dec-2016
End Date: 14-Dec-2016
Publisher: IEEE
Citation: 55th IEEE Conference on Decision and Control (CDC), 2016, pp. 7081-7086
Abstract: We study the synthesis of robust optimal control policies for Markov decision processes with transition uncertainty (UMDPs), subject to two types of constraints: (i) constraints on the worst-case, maximal total cost and (ii) safety-threshold constraints that bound the worst-case probability of visiting a set of error states. For maximal-total-cost constraints, we propose a state-augmentation method and a two-step synthesis algorithm that generates deterministic, memoryless policies optimal for the given reward to be maximized. For safety-threshold constraints, we introduce a new cost function and obtain an approximately optimal solution by a reduction to an uncertain Markov decision process under a maximal-total-cost constraint; safety-threshold constraints in general require memory and randomization for optimality. We discuss the use and the limitations of the proposed solution.
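The core robust-synthesis step described in the abstract can be illustrated with a toy sketch (not the authors' implementation): a robust Bellman backup for a UMDP whose transition probabilities are only known to lie in intervals. The adversary picks the worst distribution in the interval set; the controller then minimizes worst-case expected total cost over actions. All names, the interval-uncertainty model, and the greedy inner maximization below are illustrative assumptions.

```python
# Illustrative robust Bellman backup for an interval-uncertainty MDP.
# Hypothetical toy sketch; not the paper's algorithm or code.

def worst_case_dist(lo, hi, V):
    """Adversary's choice: the distribution p with lo <= p <= hi,
    sum(p) = 1, that maximizes the expected value of V.
    Greedy: start at the lower bounds, then assign the remaining
    probability mass to successors with the largest V first."""
    p = list(lo)
    slack = 1.0 - sum(lo)
    for i in sorted(range(len(V)), key=lambda i: -V[i]):
        add = min(hi[i] - p[i], slack)
        p[i] += add
        slack -= add
    return p

def robust_backup(cost, lo, hi, V, gamma=1.0):
    """One robust value-iteration step at a single state:
    minimize over actions the worst-case expected total cost.
    cost[a] is the immediate cost of action a; lo[a], hi[a] are
    interval bounds on the transition distribution under a."""
    best = float("inf")
    for a in range(len(cost)):
        p = worst_case_dist(lo[a], hi[a], V)
        q = cost[a] + gamma * sum(pi * vi for pi, vi in zip(p, V))
        best = min(best, q)
    return best

# Tiny example: two successor states with values V, two actions.
V = [0.0, 10.0]
cost = [1.0, 3.0]
lo = [[0.4, 0.2], [0.7, 0.1]]
hi = [[0.8, 0.6], [0.9, 0.3]]
print(robust_backup(cost, lo, hi, V))  # worst-case optimal cost: 6.0
```

Iterating this backup over all states until convergence yields the robust value function; the state-augmentation step in the paper additionally tracks the remaining cost budget so that the resulting policy stays deterministic and memoryless in the augmented state space.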
DOI Link: 10.1109/CDC.2016.7799360
ISSN: 0743-1546
Version: Post-print
Status: Peer-reviewed
Type: Conference Paper
Rights: Copyright © 2016, IEEE. Deposited with reference to the publisher’s open access archiving policy.
Appears in Collections: Conference Papers & Presentations, Dept. of Computer Science

Files in This Item:
File: cdc16.pdf
Description: Post-review (final submitted author manuscript)
Size: 313.1 kB
Format: Adobe PDF

Items in LRA are protected by copyright, with all rights reserved, unless otherwise indicated.