A StarCraft 2 robot based on intercepting and analyzing the rendering stream
Matthew Fisher from Stanford University wrote an interesting article about implementing a robot by intercepting the API stream of the D3D9 library (Microsoft Direct3D, which is part of DirectX).
According to the author, the robot plays StarCraft 2 (SC2) by intercepting, understanding, and reacting to the D3D9 API stream, then sending keypresses and mouse commands back to the game. This sets it apart from robots built in the SC2 editor with a scripting language, or from projects like BWAPI (which works with the original StarCraft only) that attach to the address space of the host application. Robots based on those methods can often bypass restrictions that human players must cope with: they can give different orders to different units at the same time, they can see exactly what is happening off-screen at any moment, and they never have to deal with trying to click on ground units that are covered by flying units.
The article contains many technical details along with source code that implements the robot. Its main objective is to show a simple program that interfaces with the D3D9 interceptor and interprets the intercepted commands. The advantages of this approach over other methods are obvious: it is universal and can be applied to other programs, so creating a robot this way should be easier and more accessible. The disadvantages are just as obvious: scene analysis takes a lot of time and effort, and matching the APM (actions per minute) of the other methods will likely fail.
The robot is divided into three main components:
1. Mirror Driver - a store of the graphics objects drawn in each frame: textures, shaders, pixel data, and other graphics information.
2. Scene Understanding - receives the data captured by the Mirror Driver and converts it into the entities present in the game. In other words, low-level information is lifted to a higher level, which makes it possible to build a strategy for the game.
3. Decision Making - the component responsible for making decisions; simply put, the robot's brain.
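To make this division concrete, here is a minimal sketch of how the three components might be wired together. The original implementation is in C++ against the D3D9 interceptor; this Python version is illustrative only, and every class, method, and texture name in it is hypothetical:

```python
# Hypothetical sketch of the three-component pipeline described above.
# None of these names come from the original robot's source code.

class MirrorDriver:
    """Stores the graphics objects (textures, positions) seen in a frame."""
    def __init__(self):
        self.draw_calls = []

    def record(self, texture_id, x, y):
        # In the real robot this data comes from intercepted D3D9 calls.
        self.draw_calls.append((texture_id, x, y))

class SceneUnderstanding:
    """Turns raw draw calls into game entities."""
    # Hypothetical mapping from texture identifiers to unit types.
    TEXTURE_TO_UNIT = {"tex_marine": "Marine", "tex_scv": "SCV"}

    def parse(self, draw_calls):
        return [
            {"unit": self.TEXTURE_TO_UNIT[tex], "pos": (x, y)}
            for tex, x, y in draw_calls
            if tex in self.TEXTURE_TO_UNIT
        ]

class DecisionMaking:
    """The robot's brain: picks an action from the parsed scene."""
    def decide(self, entities):
        if any(e["unit"] == "Marine" for e in entities):
            return "attack"
        return "build_marine"

driver = MirrorDriver()
driver.record("tex_marine", 100, 200)
driver.record("tex_unknown", 5, 5)    # unrecognized texture is ignored

entities = SceneUnderstanding().parse(driver.draw_calls)
action = DecisionMaking().decide(entities)
```

The key design point is the strict layering: the Mirror Driver knows nothing about StarCraft, and the Decision Making layer knows nothing about D3D9.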
Since the stream of scene-rendering calls is continuous, scene analysis produces a large table of data (a lot of traffic) that must be converted into information suitable for making decisions and controlling the game. Once a decision has been made on the basis of this graphical information, a mouse click or keypress is sent to the game.
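As a toy illustration of that conversion, one frame's worth of intercepted draw calls might be condensed into a count table before any decision is made. The texture names and the input-command shape below are assumptions, not the robot's actual data:

```python
from collections import Counter

# Hypothetical per-frame table: each entry is the texture used by one
# draw call. In the real robot this comes from the D3D9 stream; here it
# is hard-coded for illustration.
frame_draw_calls = ["tex_marine", "tex_marine", "tex_scv", "tex_minerals"]

# Condense the raw stream into a summary the decision layer can use.
unit_counts = Counter(frame_draw_calls)

def next_input(counts):
    """Turn the frame summary into one concrete input command (a sketch)."""
    if counts.get("tex_marine", 0) >= 2:
        # e.g. press a hotkey that selects the army and attack-moves.
        return ("keypress", "a")
    return ("mouse_click", (640, 480))

command = next_input(unit_counts)
```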
The robot stores information about all available game parameters, such as the number of units it controls, the enemies, etc., and makes its decisions based on this information. It also prints everything it does to the console, which makes debugging easy.
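A minimal sketch of such a state store with a console trace, assuming nothing about the robot's real data layout:

```python
class GameState:
    """Hypothetical store of everything the robot knows about the game."""
    def __init__(self):
        self.own_units = {}      # unit type -> count under our control
        self.enemy_units = {}    # unit type -> count observed
        self.log = []            # debug trace of every action taken

    def record_action(self, action):
        line = f"[bot] {action}"
        self.log.append(line)
        print(line)              # echo to the console for easy debugging

state = GameState()
state.own_units["Marine"] = 12
state.record_action("attack-move to (100, 200)")
```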
The decision-making algorithm closely resembles human play: a particular command is issued and the robot tries to carry it out - build a structure, add a unit to a group, attack the enemy, and so on.
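Such a command-at-a-time scheme could be sketched as a dispatch function that attempts one command against the current state and reports whether it succeeded. The commands, costs, and state fields here are invented for illustration:

```python
# Hypothetical command dispatch: the robot is handed one command at a
# time and tries to carry it out, much like a human player would.
def try_command(command, state):
    if command == "build_barracks":
        if state["minerals"] >= 150:   # assumed cost, for illustration
            state["minerals"] -= 150
            state["buildings"].append("Barracks")
            return True
        return False                   # not enough resources; retry later
    if command == "attack":
        return len(state["army"]) > 0  # cannot attack with no army
    return False                       # unknown command

state = {"minerals": 200, "buildings": [], "army": []}
built = try_command("build_barracks", state)   # succeeds, spends 150
attacked = try_command("attack", state)        # fails: no army yet
```

Failed commands are simply retried on a later pass, which mirrors how a human keeps queuing an action until the resources for it exist.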
The most interesting part is watching the game and the robot's actions. Here is a video of a game under the robot's control:
According to the play analysis, the SC2 robot achieves an "idle" APM of about 500, and its battle-mode APM is between 1000 and 2000. Not all of these actions are useful, and battle micro is very difficult to implement; occasionally the robot's battle commands are useless or even detrimental compared to the default combat behavior.
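For reference, APM is just the number of issued actions normalized to a one-minute window. A tiny sketch of the computation, with invented timestamps rather than the robot's real action log:

```python
# Sketch of computing APM (actions per minute) from a stream of
# timestamped actions; the timestamps below are invented.
def apm(action_times_sec):
    """Actions per minute over the window spanned by the timestamps."""
    if len(action_times_sec) < 2:
        return 0.0
    window = action_times_sec[-1] - action_times_sec[0]
    return len(action_times_sec) * 60.0 / window

# 50 actions spread evenly across a 6-second window comes out to
# roughly 500 APM, the robot's reported "idle" rate.
times = [i * (6 / 49) for i in range(50)]
rate = apm(times)
```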
Considering that the author's aim was to show the tools that can be used to write the "base" for a robot, rather than to deliver a finished robot, this is a pretty good result. After all, the decision-making component can be swapped out to improve performance, and this implementation imposes no technical limitations on building the robot.