Conference

Transfer learning between RTS combat scenarios using component-action deep reinforcement learning

Abstract

Real-time Strategy (RTS) games provide a challenging environment for AI research due to their large state and action spaces, hidden information, and real-time gameplay. StarCraft II has become a new test-bed for deep reinforcement learning systems via the StarCraft II Learning Environment (SC2LE). Recently, the full game of StarCraft II was approached with a complex multi-agent reinforcement learning (RL) system; however, this is currently possible only with extremely large financial investments beyond the reach of most researchers. In this paper, we show progress using variations of easier-to-use RL techniques, modified to accommodate the multi-component actions used in the SC2LE. Our experiments show that we can effectively transfer trained policies between RTS combat scenarios of varying complexity. First, we train combat policies on varying numbers of StarCraft II units and then apply those policies to larger-scale battles, maintaining similar win rates. Second, we demonstrate the ability to train combat policies on one StarCraft II unit type (Terran Marine) and then apply those policies to another unit type (Protoss Stalker) with similar success.
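To illustrate the component-action idea the abstract refers to: in the SC2LE, a single action consists of a function identifier plus several arguments (such as a screen coordinate). A minimal sketch of one common way to handle this, assumed here for illustration rather than taken from the paper itself, factorises the policy into one categorical head per action component and sums the component log-probabilities to get the joint log-probability. The head names and sizes below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(logits):
    """Numerically stable softmax over a 1-D logit vector."""
    z = np.exp(logits - logits.max())
    return z / z.sum()

def sample_component_action(heads):
    """Sample one value per action component and accumulate log-probability.

    heads: dict mapping component name -> logit vector for that component.
    Returns (action dict, joint log-probability). Treating components as
    independent categoricals means the joint log-prob is simply the sum
    of per-component log-probs.
    """
    action, log_prob = {}, 0.0
    for name, logits in heads.items():
        probs = softmax(logits)
        choice = rng.choice(len(probs), p=probs)
        action[name] = int(choice)
        log_prob += float(np.log(probs[choice]))
    return action, log_prob

# Toy heads: a function id plus two spatial arguments (hypothetical sizes).
heads = {
    "function_id": rng.normal(size=4),
    "screen_x": rng.normal(size=8),
    "screen_y": rng.normal(size=8),
}
action, logp = sample_component_action(heads)
```

This per-component factorisation is what lets standard single-action RL algorithms be adapted to SC2LE's structured action space, since each head can be trained with an ordinary categorical policy-gradient loss.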

Authors

Kelly R; Churchill D

Volume

2862

Publication Date

January 1, 2020

Conference proceedings

CEUR Workshop Proceedings

ISSN

1613-0073
