Bachelor's Degree Final Project in Computer Engineering, Facultat de Matemàtiques, Universitat de Barcelona, Year: 2015, Advisor: Jesús Cerquides Bueno
The purpose of this project is to implement the one-step Q-Learning algorithm and a variant using linear function approximation in a combat scenario in the Real-Time Strategy game StarCraft: Brood War™. First, there is a brief description of Real-Time Strategy games, StarCraft in particular, and of previous work in the field of Reinforcement Learning. After the introduction and previous work are covered, the Reinforcement Learning problem in Real-Time Strategy games is described. Then, the development of the Reinforcement Learning agents using Q-Learning and Approximate Q-Learning (whose update rules are sketched below) is explained. It is divided into three phases: the first phase consists of defining the task that the agents must solve as a Markov Decision Process and implementing the Reinforcement Learning agents. The second phase is the training period: the agents have to learn how to destroy the rival units and avoid being destroyed on a set of training maps. This is done through exploration, since the agents have no prior knowledge of the outcomes of the available actions.
The third and last phase tests the knowledge the agents acquired during training on a different set of maps, observing the results and finally comparing which agent performed better. The expected behavior is that both Q-Learning agents will learn how to kite (attack and flee) in any combat scenario. Ultimately, this behavior could become the micro-management component of a new bot or be added to an existing one.
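For reference, the following is a minimal LaTeX sketch of the standard update rules behind the two agents. The learning rate alpha, discount factor gamma, and features f_i(s, a) are generic symbols, not the concrete parameters or feature set used in this project, which are defined later in the text.

\documentclass{article}
\usepackage{amsmath}
\begin{document}
% One-step Q-Learning update (tabular case), with learning rate $\alpha$,
% discount factor $\gamma$ and reward $r_{t+1}$ observed after taking $a_t$ in $s_t$:
\begin{align*}
  Q(s_t, a_t) &\leftarrow Q(s_t, a_t)
    + \alpha \bigl[ r_{t+1} + \gamma \max_{a'} Q(s_{t+1}, a') - Q(s_t, a_t) \bigr] \\[4pt]
% Approximate Q-Learning: Q is a linear combination of n generic features f_i(s, a)
% with weights w_i (placeholders here; the thesis body defines the actual feature set):
  Q(s, a) &= \sum_{i=1}^{n} w_i \, f_i(s, a) \\[4pt]
% Each weight is adjusted with the same temporal-difference error:
  w_i &\leftarrow w_i
    + \alpha \bigl[ r_{t+1} + \gamma \max_{a'} Q(s_{t+1}, a') - Q(s_t, a_t) \bigr] f_i(s_t, a_t)
\end{align*}
\end{document}

Both agents share the same temporal-difference error; they differ only in whether Q is stored as a table or represented by the weight vector over features.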