{"id":582,"date":"2013-09-30T13:08:06","date_gmt":"2013-09-30T11:08:06","guid":{"rendered":"http:\/\/cio.umh.es\/?p=582"},"modified":"2013-09-30T13:08:06","modified_gmt":"2013-09-30T11:08:06","slug":"conferencia-del-prof-dr-pavlo-kasyanov","status":"publish","type":"post","link":"https:\/\/cio.umh.es\/en\/2013\/09\/30\/conferencia-del-prof-dr-pavlo-kasyanov\/","title":{"rendered":"Conferencia del Prof. Dr. Pavlo Kasyanov"},"content":{"rendered":"<p><!--:es--><strong>T\u00edtulo<\/strong>: Partially observable total-cost Markov decision processes with general state and action spaces<br \/>\n<strong>Ponente<\/strong>: Pavlo Kasyanov<br \/>\n<strong>Fecha<\/strong>: 30\/09\/2013 12:30h<br \/>\n<strong>Lugar<\/strong>: Sala de Seminarios, Edificio Torretamarit<\/p>\n<p><!--:--><!--:en--><strong>Title<\/strong>: Partially observable total-cost Markov decision processes with general state and action spaces<br \/>\n<strong>Speaker<\/strong>: Pavlo Kasyanov<br \/>\n<strong>Date<\/strong>: 30\/09\/2013 12:30h<br \/>\n<strong>Location<\/strong>: Sala de Seminarios, Edificio Torretamarit<\/p>\n<h4><!--:--><!--more--><!--:es--><\/h4>\n<h4 style=\"color: #555\">Resumen<\/h4>\n<p>For Partially Observable Markov Decision Processes (POMDPs) with Borel state, observation, and action sets and with expected total costs, this talk provides sufficient conditions for the existence of optimal policies and the validity of other optimality properties, including that optimal policies satisfy optimality equations and value iterations converge to optimal values. Action sets may not be compact and one-step cost functions may not be bounded. Since POMDPs can be reduced to Completely Observable Markov Decision Processes (COMDPs), whose states are posterior state distributions, this talk focuses on the validity of the above-mentioned optimality properties for COMDPs. The central question is whether the transition probabilities for a COMDP are weakly continuous. 
We introduce sufficient conditions for this and show that the transition probabilities for a COMDP are weakly continuous if the observation probabilities for the POMDP are continuous in total variation; this continuity cannot be weakened to setwise continuity. The results are illustrated with examples and counterexamples.<\/p>\n<h4 style=\"color: #555\">Breve Bio<\/h4>\n<p>Pavlo Kasyanov es Director del departamento de Sistemas Matem\u00e1ticos del Instituto de An\u00e1lisis Aplicado de Sistemas de la Universidad Polit\u00e9cnica de Kiev. Ha publicado cinco monograf\u00edas y gran cantidad de art\u00edculos en revistas cient\u00edficas de alto nivel. Sus \u00e1reas de inter\u00e9s se centran en las inclusiones diferenciales no lineales de evoluci\u00f3n, la teor\u00eda de sistemas din\u00e1micos en dimensi\u00f3n infinita y los m\u00e9todos num\u00e9ricos en el an\u00e1lisis no lineal y la teor\u00eda de optimizaci\u00f3n.<!--:--><!--:en--><\/p>\n<h4 style=\"color: #555\">Abstract<\/h4>\n<p>For Partially Observable Markov Decision Processes (POMDPs) with Borel state, observation, and action sets and with expected total costs, this talk provides sufficient conditions for the existence of optimal policies and the validity of other optimality properties, including that optimal policies satisfy optimality equations and value iterations converge to optimal values. Action sets may not be compact and one-step cost functions may not be bounded. Since POMDPs can be reduced to Completely Observable Markov Decision Processes (COMDPs), whose states are posterior state distributions, this talk focuses on the validity of the above-mentioned optimality properties for COMDPs. The central question is whether the transition probabilities for a COMDP are weakly continuous. 
We introduce sufficient conditions for this and show that the transition probabilities for a COMDP are weakly continuous if the observation probabilities for the POMDP are continuous in total variation; this continuity cannot be weakened to setwise continuity. The results are illustrated with examples and counterexamples.<\/p>\n<h4 style=\"color: #555\">Brief Bio<\/h4>\n<p>Pavlo Kasyanov is Director of the Department of Mathematical Systems at the Institute for Applied System Analysis of the Kyiv Polytechnic University. He has published five monographs and a large number of articles in leading scientific journals. His research interests focus on nonlinear evolution differential inclusions, the theory of infinite-dimensional dynamical systems, and numerical methods in nonlinear analysis and optimization theory.<!--:--><\/p>","protected":false},"excerpt":{"rendered":"<p>T\u00edtulo: Partially observable total-cost Markov decision processes with general state and action spaces<br \/>\nPonente: Pavlo Kasyanov<br \/>\nFecha: 30\/09\/2013 12:30h<br \/>\nLugar: Sala de Seminarios, Edificio Torretamarit<br \/>\nTitle: Partially observable total-cost Markov decision processes with general state and action spaces<br \/>\nSpeaker: Pavlo Kasyanov<br \/>\nDate: 30\/09\/2013 12:30h<br \/>\nLocation: Sala de Seminarios, Edificio Torretamarit<\/p>\n<p>Resumen<br \/>\nFor Partially Observable Markov Decision Processes (POMDPs) with Borel state, observation, and action 
[&#8230;]<\/p>","protected":false},"author":3477,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_links_to":"","_links_to_target":""},"categories":[4,873],"tags":[],"_links":{"self":[{"href":"https:\/\/cio.umh.es\/en\/wp-json\/wp\/v2\/posts\/582"}],"collection":[{"href":"https:\/\/cio.umh.es\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/cio.umh.es\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/cio.umh.es\/en\/wp-json\/wp\/v2\/users\/3477"}],"replies":[{"embeddable":true,"href":"https:\/\/cio.umh.es\/en\/wp-json\/wp\/v2\/comments?post=582"}],"version-history":[{"count":0,"href":"https:\/\/cio.umh.es\/en\/wp-json\/wp\/v2\/posts\/582\/revisions"}],"wp:attachment":[{"href":"https:\/\/cio.umh.es\/en\/wp-json\/wp\/v2\/media?parent=582"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/cio.umh.es\/en\/wp-json\/wp\/v2\/categories?post=582"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/cio.umh.es\/en\/wp-json\/wp\/v2\/tags?post=582"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}