
CITATIONS TO RESEARCH ARTICLES

 

[1]. R. Cavazos-Cadena, Finite State Approximations for Denumerable State Discounted Markov Decision Processes, Journal of Applied Mathematics and Optimization, 14, 1-26, 1986.

Cited in:

1. E. Altman, Denumerable Constrained Markov Decision Problems and Finite Approximations, Mathematics of Operations Research, in press.

2. O. Hernández-Lerma and M. Muñoz de Ozak, Discrete-time Markov control processes with discounted unbounded costs: Optimality criteria, Kybernetika (Prague), 28, 191-212, 1992.

3. O. Hernández-Lerma, Value Iteration and Rolling Horizon Policies in General Markov Control Processes, Proc. 29th Conference on Decision and Control, Honolulu, Hawaii, 1381-1386, 1990.

4. O. Hernández-Lerma, Adaptive Markov Control Processes, Springer-Verlag, New York, 1989.

5. O. Hernández-Lerma and Jean B. Lasserre, Error bounds for rolling horizon policies in general Markov control processes, IEEE Transactions on Automatic Control, 35, 1118-1124, 1990.

6. M. Puterman, Bounds and approximations for discounted countable state Markov decision processes with unbounded rewards, IEEE Transactions on Automatic Control, in press.

7. E. Altman, Asymptotic properties of constrained Markov decision processes, Zeitschrift für Operations Research, 37, 151-170, 1993.

[2]. R. Cavazos-Cadena, Finite state approximations and adaptive control of discounted Markov decision processes with unbounded rewards, Control and Cybernetics, 14, 31-58, 1987.

Cited in:

1. O. Hernández-Lerma, Adaptive Markov Control Processes, Springer-Verlag, New York, 1989.

2. M. Puterman, Bounds and approximations for discounted countable state Markov decision processes with unbounded rewards, IEEE Transactions on Automatic Control, in press.

[3]. R. Cavazos-Cadena, Necessary and sufficient conditions for the optimality equation in average reward Markov decision processes, Systems and Control Letters, 10, 71-78, 1988.

Cited in:

1. E. Fernández-Gaucherand, A. Arapostathis and S.I. Marcus, Remarks on the existence of solutions to the optimality equation in Markov decision processes, Systems and Control Letters, 15, 425-432, 1990.

2. O. Hernández-Lerma, Adaptive Markov Control Processes, Springer-Verlag, New York, 1989.

3. M.K. Ghosh and S.I. Marcus, Ergodic control of Markov chains, Proc. 29th Conference on Decision and Control, Honolulu, Hawaii, 258-263, 1990.

4. E. Fernández-Gaucherand, A. Arapostathis and S.I. Marcus, On the average cost optimality equation and the structure of optimal stationary policies for partially observable Markov decision processes, Annals of Operations Research, 29, 439-470, 1991.

5. E. Fernández-Gaucherand, A. Arapostathis and S.I. Marcus, Controlled Markov processes on the infinite planning horizon with a weighted cost criterion, Contribuciones en Probabilidad y Estadística Matemática, 3, 145-162, 1992.

6. A. Arapostathis, V.K. Borkar, E. Fernández-Gaucherand, M.K. Ghosh, and S.I. Marcus, Discrete time controlled Markov processes with average cost criterion: a survey, SIAM Journal on Control and Optimization, 31, 282-334, 1993.

[4]. R. Cavazos-Cadena and Jean B. Lasserre, Strong 1-optimal stationary policies in denumerable Markov decision chains, Systems and Control Letters, 11, 65-71, 1988.

Cited in:

1. F.M. Spieksma, Geometrically Ergodic Markov Chains and the Control of Queues, Ph.D. Thesis, University of Leiden, The Netherlands, 1990.

2. A. Hordijk and F.M. Spieksma, Are limits of α-discounted optimal policies Blackwell optimal? A counterexample, Systems and Control Letters, 13, 31-41, 1989.

[5]. R. Cavazos-Cadena, Weak conditions for the existence of optimal stationary policies in average Markov decision chains with unbounded costs, Kybernetika (Prague), 25, 145-156, 1989.

Cited in:

1. Linn I. Sennott, Average cost optimal stationary policies in infinite state Markov decision processes with unbounded costs, Operations Research, 37, 626-633, 1989.

2. O. Hernández-Lerma and J.B. Lasserre, Average cost optimal policies for Markov control processes with Borel state space and unbounded costs, Systems and Control Letters, 15, 349-356, 1990.

3. O. Hernández-Lerma, Average optimality in dynamic programming on Borel spaces: unbounded costs and controls, Systems and Control Letters, 17, 237-242, 1991.

4. O. Hernández-Lerma, Adaptive Markov Control Processes, Springer-Verlag, New York, 1989.

5. O. Hernández-Lerma, Existence of average optimal policies in Markov control processes with strictly unbounded costs, Kybernetika (Prague), 29, 1-17, 1993.

6. Linn I. Sennott, The average cost optimality equation and critical number policies, Probability in the Engineering and Informational Sciences, 7, 47-67, 1993.

[6]. R. Cavazos-Cadena, Necessary conditions for the optimality equation in average reward Markov decision processes, Journal of Applied Mathematics and Optimization, 19, 97-112, 1989.

Cited in:

1. E. Fernández-Gaucherand, A. Arapostathis and S.I. Marcus, Remarks on the existence of solutions to the optimality equation in Markov decision processes, Systems and Control Letters, 15, 425-432, 1990.

2. O. Hernández-Lerma, Adaptive Markov Control Processes, Springer-Verlag, New York, 1989.

3. M.K. Ghosh and S.I. Marcus, Ergodic control of Markov chains, Proc. 29th Conference on Decision and Control, Honolulu, Hawaii, 258-263, 1990.

4. E. Fernández-Gaucherand, A. Arapostathis and S.I. Marcus, On the average cost optimality equation and the structure of optimal stationary policies for partially observable Markov decision processes, Annals of Operations Research, 29, 439-470, 1991.

5. M.K. Ghosh and S.I. Marcus, On strong average optimality of Markov decision processes with unbounded costs, Operations Research Letters, 11, 99-104, 1992.

6. A. Arapostathis, V.K. Borkar, E. Fernández-Gaucherand, M.K. Ghosh, and S.I. Marcus, Discrete time controlled Markov processes with average cost criterion: a survey, SIAM J. Control and Optimization, 31, 282-334, 1993.

[7]. R. Cavazos-Cadena, Solution to the optimality equation in a class of Markov decision chains with the average cost criterion, Kybernetika (Prague), 27, 1-25, 1991.

Cited in:

1. Linn I. Sennott, The average cost optimality equation and critical number policies, Probability in the Engineering and Informational Sciences, 7, 47-67, 1993.

[8]. R. Cavazos-Cadena, A counterexample on the optimality equation in Markov decision chains with the average cost criterion, Systems and Control Letters, 16, 387-392, 1991.

Cited in:

1. O. Hernández-Lerma and J.B. Lasserre, Average cost optimal policies for Markov control processes with Borel state space and unbounded costs, Systems and Control Letters, 15, 349-356, 1990.

2. E. Fernández-Gaucherand, A. Arapostathis and S.I. Marcus, Convex Stochastic Control Problems, Proc. 31st IEEE Conference on Decision and Control, Tucson, AZ, 1992.

3. Linn I. Sennott, The average cost optimality equation and critical number policies, Probability in the Engineering and Informational Sciences, 7, 47-67, 1993.

4. A. Arapostathis, V.K. Borkar, E. Fernández-Gaucherand, M.K. Ghosh, and S.I. Marcus, Discrete time controlled Markov processes with average cost criterion: A survey, SIAM J. Control and Optimization, 31, 282-334, 1993.

5. Raúl Montes de Oca and Onésimo Hernández-Lerma, The average cost optimality equation for Markov control processes on Borel spaces, Systems and Control Letters, in press.

6. O. Hernández-Lerma and Jean B. Lasserre, Linear programming and average optimality of Markov control processes on Borel spaces: unbounded costs, SIAM Journal on Control and Optimization, in press.

[9]. R. Cavazos-Cadena, Recent results on conditions for the existence of average optimal stationary policies, Annals of Operations Research, 28, 3-28, 1991.

Cited in:

1. R.K. Ritt and L.I. Sennott, Optimal stationary policies in general state space Markov decision chains with finite action sets, Mathematics of Operations Research, 17, 901-909, 1992.

2. Linn I. Sennott, The average cost optimality equation and critical number policies, Probability in the Engineering and Informational Sciences, 7, 47-67, 1993.

3. A. Arapostathis, V.K. Borkar, E. Fernández-Gaucherand, M.K. Ghosh, and S.I. Marcus, Discrete time controlled Markov processes with average cost criterion: A survey, SIAM J. Control and Optimization, 31, 282-334, 1993.

[10]. R. Cavazos-Cadena and Linn I. Sennott, Comparing recent assumptions for the existence of average optimal stationary policies, Operations Research Letters, 11, 33-37, 1992.

Cited in:

1. O. Hernández-Lerma and J.B. Lasserre, Average cost optimal policies for Markov control processes with Borel state space and unbounded costs, Systems and Control Letters, 15, 349-356, 1990.

2. M.K. Ghosh and S.I. Marcus, Ergodic control of Markov chains, Proc. 29th Conference on Decision and Control, Honolulu, Hawaii, 258-263, 1990.

3. M.K. Ghosh and S.I. Marcus, On strong average optimality of Markov decision processes with unbounded costs, Operations Research Letters, 11, 99-104, 1992.

4. A. Arapostathis, V.K. Borkar, E. Fernández-Gaucherand, M.K. Ghosh, and S.I. Marcus, Discrete time controlled Markov processes with average cost criterion: A survey, SIAM J. Control and Optimization, 31, 282-334, 1993.

[11]. O. Hernández-Lerma, R. Montes de Oca and R. Cavazos-Cadena, Recurrence conditions for Markov decision processes with Borel state space: A survey, Annals of Operations Research, 29, 29-46, 1991.

Cited in:

1. E. Fernández-Gaucherand, A. Arapostathis and S.I. Marcus, On partially observable Markov decision processes with an average cost criterion, Proc. 28th IEEE Conference on Decision and Control, Tampa, FL, 1267-1272, 1989.

2. A. Arapostathis, V.K. Borkar, E. Fernández-Gaucherand, M.K. Ghosh, and S.I. Marcus, Discrete time controlled Markov processes with average cost criterion: a survey, SIAM J. Control and Optimization, 31, 282-334, 1993.

3. E. Fernández-Gaucherand, A. Arapostathis and S.I. Marcus, On the average cost optimality equation and the structure of optimal stationary policies for partially observable Markov decision processes, Annals of Operations Research, 29, 439-470, 1991.

4. E. Fernández-Gaucherand, A. Arapostathis and S.I. Marcus, Remarks on the existence of solutions to the optimality equation in Markov decision processes, Systems and Control Letters, 15, 425-432, 1990.

5. E. Fernández-Gaucherand, Controlled Markov Processes on the Infinite Planning Horizon: Optimal and Adaptive Control, Ph.D. Dissertation, University of Texas at Austin, 1991.

6. A.S. Nowak, Zero-sum average payoff stochastic games with general state space, Games and Economic Behaviour, in press.

7. O. Vega Amaya, Average optimality in semi-Markov control models on Borel spaces: Unbounded costs and controls, Boletín de la Sociedad Matemática Mexicana, in press.

8. M.T. Robles Alcaraz, Procesos de Control Markoviano con Ganancia Promedio (Markov Control Processes with Average Reward), Master's Thesis, Department of Mathematics, UAM-Iztapalapa, 1991.

9. J.A. Minjarez Sosa, Procesos de Control de Markov con Costos y Controles no Acotados (Markov Control Processes with Unbounded Costs and Controls), Master's Thesis, UAM-Iztapalapa, 1993.

10. E. Fernández-Gaucherand, A. Arapostathis and S.I. Marcus, Controlled Markov processes on the infinite planning horizon with a weighted cost criterion, Contribuciones en Probabilidad y Estadística Matemática, 3, 145-162, 1992.

[12]. O. Hernández-Lerma and R. Cavazos-Cadena, Density estimation and adaptive control of Markov processes: average and discounted criteria, Acta Applicandae Mathematicae, 20, 285-307, 1990.

Cited in:

1. M. Duflo, Méthodes Récursives Aléatoires, Masson, Paris, 1990.

[13]. R. Cavazos-Cadena, Existence of optimal stationary policies in average reward Markov decision processes with a recurrent state, Journal of Applied Mathematics and Optimization, 26, 171-194, 1992.

Cited in:

1. Jean B. Lasserre, Conditions for the existence of average and Blackwell optimal stationary policies in denumerable Markov decision chains, Journal of Mathematical Analysis and Applications, 136, 479-490, 1988.

2. E. Altman and A. Schwartz, Markov optimization problems: state action frequencies revisited, Proc. 27th IEEE Conference on Decision and Control, Austin, TX, 640-645, 1988.

3. E. Altman and A. Schwartz, Markov decision problems and state action frequencies, SIAM J. Control and Optimization, 29, 786-809, 1991.

 

rcavcad@coah1.telmex.net.mx; rcavazos@uaaan.mx