Changes between Version 3 and Version 4 of GettingStartedReinforcementLearning


Timestamp: Feb 19, 2021, 7:36:46 PM
Author: Brian Broll

  • GettingStartedReinforcementLearning

    v3 v4  
    66Since r23917 the Pyrogenesis engine now features a dedicated interface for reinforcement learning.
    77
    8 Machine learning and reinforcement learning have been making impressive strides across a variety of domains from videogames to robotics. In this post, we will show how you can get up and running with reinforcement learning within 0 A.D., an open source RTS game! Before we start, we will be assuming some background knowledge of the [https://spinningup.openai.com/en/latest/spinningup/rl_intro.html key concepts] in reinforcement learning and familiarity with OpenAI gym. Another good resource for learning about [https://gym.openai.com/docs/#spaces state and actions spaces] is available on the OpenAI gym website!
     8Machine learning and reinforcement learning have been making impressive strides across a variety of domains from videogames to robotics. In this post, we will show how you can get up and running with reinforcement learning within 0 A.D., an open source RTS game! Before we start, we will be assuming some background knowledge of the [https://spinningup.openai.com/en/latest/spinningup/rl_intro.html key concepts] in reinforcement learning and familiarity with OpenAI gym. Another good resource for learning about [https://gym.openai.com/docs/#spaces state and action spaces] is available on the OpenAI gym website!
    99
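The paragraph above points to the OpenAI gym notion of state and action spaces. As a rough illustration of that interface (a hypothetical toy environment, not the actual 0 A.D. RL API), an agent interacts with an environment through `reset` and `step` calls:

```python
import random  # imported for completeness; a random policy is one easy baseline


class ToySkirmishEnv:
    """Hypothetical gym-style environment sketch (not the real 0 A.D. interface).

    Observation space: distance to the nearest enemy unit (a single float).
    Action space: 0 = hold position, 1 = advance, 2 = retreat.
    """

    def __init__(self):
        self.distance = None

    def reset(self):
        # Start each episode ten units away from the enemy.
        self.distance = 10.0
        return self.distance  # initial observation

    def step(self, action):
        # Advance closes the distance; retreat opens it; hold does nothing.
        if action == 1:
            self.distance = max(0.0, self.distance - 1.0)
        elif action == 2:
            self.distance += 1.0
        reward = 1.0 if self.distance == 0.0 else 0.0
        done = self.distance == 0.0
        # gym convention: (observation, reward, done, info)
        return self.distance, reward, done, {}


# Standard interaction loop, here with a trivial "always advance" policy.
env = ToySkirmishEnv()
obs = env.reset()
done = False
steps = 0
while not done:
    obs, reward, done, info = env.step(1)
    steps += 1
print(steps)  # → 10
```

A learned policy would replace the hard-coded action with one chosen from the observation; the shape of the loop stays the same.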
    1010== Installation ==
     
    8181Although we were able to effectively train an RL agent from scratch to learn to play our small skirmish scenario, there is still plenty of room for improvement! A few ideas include:
    8282
    83     make the RL agent generalize better by using a more expressive state and action spaces
    84     train the agent where the enemy units are spawned in different locations
    85     train it using a different scenario
    86     train the agent via imitation from human demonstrations first
     83    * make the RL agent generalize better by using more expressive state and action spaces
     84    * train the agent where the enemy units are spawned in different locations
     85    * train it using a different scenario
     86    * train the agent via imitation from human demonstrations first
    8787
    8888Stay tuned for a future post on how to get started on some of these!