This paper presents a competitive combat strategy and tactics for RTS game AI. The key to winning in StarCraft, or any other RTS game, is to balance strategy, tactics, macro, and micro. Put simply, if a player is building up his base, he is losing out on creating an army, and if he is building up his army, he is losing out on having a strong base. To play well, one has to keep track of everything going on across the entire map, and one must be able to give orders quickly and efficiently, so in this paper we propose a competitive battle strategy built on a plot and a decision tree. We evaluate the strategy in MicroRTS (developed in Java EE) through game play between a human player and the built-in MicroRTS AI (Game AI). Our proposed strategy rarely outperforms the Game AI, since we did not account for playing speed, which makes a large difference to victory, but it competes well with the Game AI and can occasionally defeat it.

Significant progress has been made in recent years towards stronger real-time strategy (RTS) game playing agents. Some of the latest approaches have focused on enhancing standard game tree search techniques with smart sampling of the search space, or on directly reducing this search space. However, experiments have thus far only been performed using small scenarios. We provide experimental results on the performance of these agents on increasingly larger scenarios. Our main contribution is Puppet Search, a new adversarial search framework that reduces the search space by using scripts that can expose choice points to a look-ahead search procedure. Selecting a combination of a script and decisions for its choice points represents an abstract move to be applied next. Such moves can be directly executed in the actual game, or in an abstract representation of the game state which can be used by an adversarial tree search algorithm. We tested Puppet Search in µRTS, an abstract RTS game popular within the research community, allowing us to directly compare our algorithm against state-of-the-art agents published in the last few years. We show similar performance to other scripted and search-based agents on smaller scenarios, while outperforming them on larger ones.

A commonly used technique for managing AI complexity in real-time strategy (RTS) games is to use action and/or state abstractions. High-level abstractions can often lead to good strategic decision making, but tactical decision quality may suffer due to lost details. A competing method is to sample the search space, which often leads to good tactical performance in simple scenarios, but poor high-level planning. We propose to use a deep convolutional neural network (CNN) to select among a limited set of abstract action choices, and to utilize the remaining computation time for game tree search to improve low-level tactics. The CNN is trained by supervised learning on game states labelled by Puppet Search, a strategic search algorithm that uses action abstractions. The network is then used to select a script, an abstract action, to produce low-level actions for all units.
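The decision-tree combat strategy described in the first abstract can be illustrated with a minimal sketch. The unit attributes, thresholds, and order names below are hypothetical illustrations, not the rules actually used in the paper:

```python
# Minimal sketch of a combat decision tree for a single RTS unit.
# The state fields, thresholds, and order names are illustrative
# assumptions, not the actual rules from the paper.

def combat_decision(unit):
    """Return an abstract order for one unit based on a small decision tree."""
    if unit["hp_fraction"] < 0.25:
        return "retreat"    # badly hurt: fall back to base
    if unit["enemy_in_range"]:
        return "attack"     # engage the closest enemy in weapon range
    if unit["enemy_visible"]:
        return "advance"    # close the distance before attacking
    return "scout"          # nothing visible: gather map information

orders = [combat_decision(u) for u in [
    {"hp_fraction": 0.1, "enemy_in_range": True,  "enemy_visible": True},
    {"hp_fraction": 0.9, "enemy_in_range": True,  "enemy_visible": True},
    {"hp_fraction": 0.9, "enemy_in_range": False, "enemy_visible": False},
]]
print(orders)  # ['retreat', 'attack', 'scout']
```

A tree like this is cheap enough to evaluate for every unit on every frame, which matters in a real-time setting where order speed affects the outcome.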
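The core Puppet Search idea, scripts that expose choice points to a look-ahead search, can be sketched roughly as follows. This is a single-agent simplification (the real framework searches adversarially over both players' abstract moves), and the choice points and evaluation function are toy stand-ins for the actual µRTS interfaces:

```python
import itertools

# A "puppet" script exposes named choice points; an abstract move is an
# assignment of one decision to every choice point. The search enumerates
# all combinations and keeps the best-evaluated one. The choice points and
# evaluation are illustrative stand-ins, not the real Puppet Search code,
# and a single-agent simplification of its adversarial look-ahead.

CHOICE_POINTS = {
    "opening":  ["rush", "expand"],
    "army_mix": ["melee", "ranged"],
}

def evaluate(state, decisions):
    """Toy evaluation: prefer expanding on large maps, rushing on small ones."""
    score = 0.0
    preferred = "expand" if state["map_size"] > 16 else "rush"
    score += 1.0 if decisions["opening"] == preferred else 0.0
    score += 0.5 if decisions["army_mix"] == "ranged" else 0.0
    return score

def puppet_search(state):
    """Return the best abstract move: a decision for every choice point."""
    best = None
    for combo in itertools.product(*CHOICE_POINTS.values()):
        decisions = dict(zip(CHOICE_POINTS, combo))
        score = evaluate(state, decisions)
        if best is None or score > best[0]:
            best = (score, decisions)
    return best[1]

print(puppet_search({"map_size": 32}))  # {'opening': 'expand', 'army_mix': 'ranged'}
```

The point of the abstraction is visible in the enumeration: the search branches over a handful of script decisions instead of the combinatorial space of per-unit low-level actions.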
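The supervised setup in the last abstract amounts to generating (state, best-script) pairs with Puppet Search and fitting a classifier to them. The sketch below shows only that labelling pipeline; the script names and the stand-in labelling rule are hypothetical, not taken from the paper:

```python
# Schematic of the training-data pipeline from the abstract: label game
# states with the script a strategic search selects, then a CNN (omitted
# here) is trained to predict that script directly. All names and the
# labelling rule are hypothetical stand-ins.

SCRIPTS = ["worker_rush", "light_rush", "ranged_rush", "heavy_defense"]

def puppet_search_label(state):
    """Stand-in for a full Puppet Search run: index of the best script."""
    # Toy rule: rush on small maps, defend on large ones.
    return 0 if state["map_size"] <= 8 else 3

def build_dataset(states):
    """Pair each game state with the script the search selects for it."""
    return [(s, puppet_search_label(s)) for s in states]

dataset = build_dataset([{"map_size": 8}, {"map_size": 64}])
print([SCRIPTS[y] for _, y in dataset])  # ['worker_rush', 'heavy_defense']
```

At play time the trained network replaces the expensive search for strategy selection, freeing the remaining computation budget for tactical game tree search, as the abstract describes.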