Analysis of a MOBA Game Research Paper
Keywords: analysis, MOBA, game, research, paper
Introduction
The problem analyzed here is how Multiplayer Online Battle Arena (MOBA) games are affected by the implementation of different hierarchical macro strategy models. As competitive games continue to develop, MOBA games continue to face new challenges.
This is a significant problem, especially for people who have never played a competitive game: most new players simply want to attack more, because attacking sounds like fun to them. Increasing research attention has been paid to MOBA and related titles, including Defense of the Ancients (Dota) and StarCraft (Wu, 2019). Such issues continue to add another level of complexity to the problems affecting MOBA game AI.
Although the micro and macro concepts of MOBA games may be stable, the key problem with these concepts is that they do not interact with one another. MOBA games are among the most complex games one can play at any level. Dota is well known as a science-fiction 5v5 Multiplayer Online Battle Arena (MOBA) game (Ripamonti et al., 2018).
Coordination among different agents is a key problem in MOBA macro strategy operation, and it has not yet been formulated explicitly. Developing top coordination capability at the macro strategy level remains a challenge because decisions such as those made by OpenAI Five are complicated.
It is also important to understand the Real-Time Strategy (RTS) game as one of the greatest challenges facing game AI. The first point to understand is that success in these games comes not only from clicking buttons faster but also from making sound decisions under in-game pressure.
Players try to gain an advantage in a MOBA game so that they can defeat their opponents as quickly as possible (Wu, 2019), before the opponent has enough time to react. Once a player gains such an advantage, the outcome of the game can be decided quickly.
There is strong motivation for studying this problem. One key motivation is that players are forced to make different strategic decisions, and they are expected to have a clear understanding of the game phases, on which those decisions are based.
They do this by first reading the map and deciding where and how to dispatch their heroes. During the laning phase, players tend to focus on their own lanes, while in later phases attention shifts to the teamfight spots.
MOBA-style games have different forms of applications. One such potential application is Plague Inc., which combines a grim premise with engaging gameplay. This application has the potential to become one that everyone would wish to use, and one of the best MOBA-style game applications of all time.
Another potential application is Rebel Inc. In this application, the player takes responsibility for a country that has just emerged from a war (Zhang et al., 2019). A clear understanding of such applications helps in understanding MOBA games.
Beyond the applications discussed above, there are other potential applications relevant to MOBA-style games, including Dungeon Warfare 2. This is quite a challenging tower defense application, in which the player must make decisions about how best to guard the treasure inside the dungeon's depths.
With all the complications associated with these potential applications, it is important to have a graphical style that promotes better use of the applications. Such applications need to be streamlined so that new players can easily understand them while still being able to manage their resources.
Another potential application relevant to the problem is Iron Marines. If there were ever a need for a StarCraft-like application on mobile phones, Iron Marines would be the most efficient candidate.
It combines a variety of graphics and art-style elements reminiscent of Awesomenauts (Wu, 2019). These features have greatly contributed to making Iron Marines one of the best MOBA-style games on Android. Thanks to the simplicity of most of these potential applications, the problem retains a delightful real-time tension.
Previous Work
To better understand MOBA games, much attention has been paid to both the macro and micro levels of execution; nevertheless, recent studies have focused more on micro-level execution. At The International 2018, OpenAI Five was demonstrated publicly (Wu, 2019).
The demonstration was conducted to show how strong teamfights within a game could be, and how coordination in such games compared with top professional Dota 2 play. OpenAI's reinforcement learning approach has since been used to understand the development of Dota 2 AI.
Similar work has also been conducted on macro strategy operation, with an emphasis on navigation. The main purpose of emphasizing navigation was to identify and present destination spots.
Navigation was also emphasized so that efficient routes for the agents could be identified. However, macro strategy management remained a great weakness, and this weakness contributed considerably to the failures of OpenAI Five.
The majority of previous works applied influence maps. A key purpose of influence maps is to quantify units, a process that depended mostly on handcrafted equations. Multiple influence maps were then considered throughout the process.
The main purpose of using multiple influence maps was to fuse them into a single-value output used for navigating the agents (Wu, 2019). Under the macro strategy operation used in these studies, providing a destination was one of the most significant purposes of navigation.
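As a concrete illustration of this approach, the sketch below builds influence maps with NumPy and fuses them into a single-value navigation output. The exponential decay, the grid size, and the unit strengths are illustrative assumptions, not the handcrafted equations of any specific prior work.

```python
import numpy as np

def influence_map(units, shape=(32, 32), decay=0.3):
    """Build one influence map: each unit projects a strength that
    decays exponentially with distance from its grid position."""
    ys, xs = np.mgrid[0:shape[0], 0:shape[1]]
    grid = np.zeros(shape)
    for (uy, ux, strength) in units:
        dist = np.hypot(ys - uy, xs - ux)
        grid += strength * np.exp(-decay * dist)
    return grid

# Handcrafted fusion: friendly influence minus enemy influence.
friendly = influence_map([(8, 8, 1.0), (10, 12, 0.5)])
enemy = influence_map([(24, 24, 1.2)])
fused = friendly - enemy

# A single-value navigation output: the cell with the highest
# fused influence becomes the suggested destination for the agent.
dest = np.unravel_index(np.argmax(fused), fused.shape)
```

The choice of decay constants and fusion weights is exactly the kind of manually tuned numerical parameter that, as noted below, limited the performance of these methods.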
Previous work was conducted with the aim of reaching the right spots at the correct time, and it helped identify the main difference between higher-level players and the rest. In determining the macro strategy operation, researchers planned carefully to ensure that the derived results would be correct and efficient.
Adversarial Hierarchical-Task Network (AHTN) planning was proposed in the literature as a significant planning approach (Wu, 2019). It was applied mostly to searching hierarchical tasks in MOBA game playing.
Various results have already been derived for this problem. Although previous works produced promising results, efficiency issues remain, and these inefficiencies have made it challenging to apply the methods to full MOBA games.
Despite a rich and promising literature, the previous works on this problem have not yet produced complete solutions.
The results derived from previous works showed that reasoning about macro strategy implicitly is challenging, and the capability gap between micro- and macro-level execution was obvious. Leaving models to figure out high-level strategies on their own, simply by looking at micro-level actions and rewards, was overly optimistic about the results that would follow (Wu, 2019). The results indicated the necessity of considering explicit macro strategy, and it is this necessity that shapes the micro-level actions and rewards.
The previous works on this problem were mostly based on handcrafted equations for influence maps, which derived their results through the effectiveness of the computation and fusion process. In deriving the results, a variety of numerical parameters had to be decided manually.
These factors contributed considerably to the impossibility of achieving better results, because manually tuned numerical parameters limit performance. Moreover, the planning methods could not meet the efficiency requirements of full MOBA games.
Main Results
Opening attention is one of the results derived from the study. The opening is a significant strategy in MOBA play, and the results indicated a consistent opening attention across a variety of heroes. Each subfigure derived from the work contained two square images (Stanescu, 2019).
One square image showed the distribution of attention, while the other showed the mini-map. Four different heroes were listed under the predicted attention. The opening attention was ultimately understood as a safe and efficient finding of the study.
Another result derived from the study was that the attention distribution was affected by the phase layer. This emerged after a visualization was conducted across a variety of phases; the results indicated that attention was distributed among the key resources of each phase.
Attention was mainly distributed in the middle lane, which is located in the area in front of the base (Wu, 2019). The results of the study also showed that the phase layer correlated with the game phases.
This was identified after the researchers applied t-Distributed Stochastic Neighbor Embedding (t-SNE), whose results showed that the samples separated according to time stages.
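The kind of check described above can be sketched as follows. Since the trained network's phase-layer activations are not available here, synthetic activations drawn around three stage-dependent centres stand in for them; the dimensionality, sample counts, and perplexity are all assumptions.

```python
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)

# Synthetic stand-ins for phase-layer activations: three time stages,
# each drawn around a different centre. Real activations would come
# from the trained network, which we do not have here.
stages = [rng.normal(loc=c, scale=0.5, size=(40, 16)) for c in (0.0, 3.0, 6.0)]
acts = np.vstack(stages)             # (120, 16) activation vectors
labels = np.repeat([0, 1, 2], 40)    # time-stage label for each sample

# Project the activations to 2-D with t-SNE for visual inspection.
emb = TSNE(n_components=2, perplexity=15, init="random",
           random_state=0).fit_transform(acts)

# If the phase layer correlates with game phases, points sharing a
# time-stage label should cluster together in the 2-D embedding.
```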
The macro strategy embedding was another main result, derived after evaluating the significance of macro strategy modeling. In an ablation, the macro strategy embedding was removed from the experiment, and different forms of actions were examined in the replays. The micro-level model used in the study was the same as that of OpenAI Five, and the results gave a detailed description of the micro-level modeling.
The results presented after the study showed that the AI without macro strategy was outperformed by the HMS. The most obvious change was that the AI without macro strategy focused more on nearby targets.
Its agents cared less about how to support their teammates; teammates were pushed farther apart than before and concentrated more on killing neutral creeps. This change could be observed by comparing the engagement rate and the number of turrets.
Another main result derived from the study was the match against human players, undertaken mainly to understand the accuracy of the AI's performance. The results were evaluated after approximately 250 human player teams were invited to participate in the study.
Following the standard ranking procedure, the evaluators used ban-pick rules (Wu, 2019), whose implementation followed well-understood conventions. The gamecore of Honour of Kings limited the AI to the same action frequency as that used by the humans.
Another result of the study concerned imitated cross-agent communication. The key aim here was to understand the significance of the cross-agent communication mechanism by comparing the AI's ability with and without it.
Matches were then conducted between the full HMS and an HMS variant that did not apply cross-agent communication. The full HMS attained a higher win percentage than the version with no communication at all. Once cross-agent communication was introduced, coordinated behavior could be identified.
The impact of the phase layer on HMS performance was also identified. The researchers removed the phase layer and compared the result with the full version of the HMS. The results indicated that the phase layer brought a drastic improvement to the HMS (Khan et al., 2017).
A clear downgrade in the AI's ability was observed once the researchers removed the phase layer: timing was no longer accurate at the first appearance of the baron. The full-version HMS agents, by contrast, were prepared to engage in gaining the baron.
Techniques
In evaluating the problem, different techniques were employed, which contributed to the effectiveness of the derived results. The statistics used in the study were first listed in a column named Human Teams.
From these statistics, the AI achieved a 48% winning rate (Wu, 2019), measured over the total number of games played. By this measure, the AI team had no built-in advantage over the human teams.
Another technique used in the work was the comparison of different variables. One comparison was of turret destruction between the AI and humans; another was of gold per minute between the AI and humans.
The results derived through this technique showed that the AI destroyed approximately 2.5 more turrets than the human average during the first 10 minutes of the experiment. This suggested that the AI's macro strategy operated at the level of its human opponents, and could even exceed it.
Another technique was the quantification of the computational complexity of MOBA games, using Honour of Kings as the example. The normal length of the example game was approximately 20 minutes (Wu, 2019), which in terms of the gamecore can be understood as approximately 20,000 frames.
Under this computational-complexity analysis, at every frame a player makes a decision among the total number of available options; one example of an option is the movement button with 24 directions.
Depending on the decisions players take, the reaction period can increase to 200 ms, which reduces the magnitude of the action space. As for the state space, the resolution of the example game was 130,000 by 130,000 pixels (Wu, 2019), and the diameter of each unit was about 1,000 pixels. The technique determined that in each frame a unit could have a different status, including its hit points.
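The figures above can be tied together with some back-of-the-envelope arithmetic; the conversion from reaction time to decision points is an illustrative assumption, not a calculation from the paper.

```python
# Numbers taken from the text: a ~20-minute game is roughly 20,000
# gamecore frames, and the map is 130,000 x 130,000 pixels.
game_seconds = 20 * 60                 # ~20-minute game
frames = 20_000                        # frames per game (gamecore)
fps = frames / game_seconds            # ~= 16.7 frames per second

reaction_s = 0.2                       # ~200 ms reaction period
decisions = game_seconds / reaction_s  # ~= 6,000 decision points per game

map_pixels = 130_000 * 130_000         # state-space resolution in pixels
unit_diameter = 1_000                  # approximate unit size in pixels

print(round(fps, 1), int(decisions), map_pixels)
```

Even at roughly one decision every 200 ms, thousands of sequential decisions over a huge map make exhaustive search infeasible, which is the point of the complexity argument.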
Another technique used in the study was the model setup. Under this technique, a mixture of convolutional layers was used, mainly so that the model could take input from either the visual or the attribute features.
Several convolution layers were set up under the model-setup technique, each with 512 channels and a padding of one (Wu, 2019). A similar configuration followed the convolution layers: two shared fully-connected layers with approximately 512 nodes each.
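A minimal PyTorch sketch of this setup is shown below. The kernel size, the number of convolution layers, the input channel count, and the input resolution are assumptions, since the text states only the channel width (512), the padding (one), and the two shared fully-connected layers of roughly 512 nodes.

```python
import torch
import torch.nn as nn

class MacroStrategySketch(nn.Module):
    """Rough sketch of the described setup: convolution layers with
    512 channels and padding 1, followed by two shared fully-connected
    layers of 512 nodes. Kernel size (3), depth (2 conv layers), and
    the input shape are assumptions not stated in the text."""
    def __init__(self, in_channels=4, spatial=8):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(in_channels, 512, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(512, 512, kernel_size=3, padding=1),
            nn.ReLU(),
        )
        self.fc = nn.Sequential(
            nn.Linear(512 * spatial * spatial, 512),
            nn.ReLU(),
            nn.Linear(512, 512),
            nn.ReLU(),
        )

    def forward(self, x):
        h = self.conv(x)                       # (B, 512, H, W)
        return self.fc(h.flatten(start_dim=1)) # (B, 512)

model = MacroStrategySketch()
out = model(torch.zeros(1, 4, 8, 8))  # one dummy mini-map-style input
```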
Data preparation was then used to train the model. A total of 300 game replays were collected under this technique; the samples included those of the King Professional League competition, and training records were also collected as samples.
A total of 250 million instances were used in training (Adil et al., 2017). On the visual side, 85 features were extracted, including the positions and hit points of all units. Approximately 181 attribute features were also extracted, including the role of heroes and the kill-death-assist statistics.
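The feature dimensions reported above can be made concrete with a small packing sketch. Only the named features (unit position, hit points, hero role, kill-death-assist statistics) and the dimensions (85 visual, 181 attribute) come from the text; the slot layout and the helper below are hypothetical.

```python
import numpy as np

N_VISUAL = 85   # visual features, e.g. positions and hit points of units
N_ATTR = 181    # attribute features, e.g. hero role, kill-death-assist stats

def build_sample(unit_xy, unit_hp, hero_role_id, kda):
    """Pack one training instance. Only the named slots follow the
    text; the layout of the remaining slots is a placeholder."""
    visual = np.zeros(N_VISUAL)
    visual[0:2] = unit_xy       # a unit's (x, y) position
    visual[2] = unit_hp         # its hit points

    attrs = np.zeros(N_ATTR)
    attrs[0] = hero_role_id     # role of the hero
    attrs[1:4] = kda            # kills, deaths, assists

    return np.concatenate([visual, attrs])  # 266-dimensional instance

sample = build_sample((0.4, 0.7), 0.9, 2, (3, 1, 5))
```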
Discussion
The paper proposed a novel Hierarchical Macro Strategy (HMS) model as a significant model for MOBA games. The HMS considers attention over the game map together with modeling of the game phases, and cooperation is strengthened by imitating cross-agent communication.
For a better evaluation of the HMS, Honour of Kings was used as the example, with a comparison between the AI and the best-performing human player teams. The HMS can be termed the first learning-based macro strategy model on record.
Although the techniques used in the study show strong potential for MOBA games, they can also be used to address other problems. For example, the model setup can be applied to StarCraft, where the techniques can be extended to more meaningful behaviors (Wu, 2019).
Such behaviors can be categorized as building operations. Although the phase-layer modeling may seem specific to particular games, it can also be used to understand cooperation; more generally, the underlying idea of capturing game phases can be generalized.
Different suggestions can be integrated in the future to perform other tasks. Planning can be integrated more deeply on top of the HMS, and MCTS rollouts will need to be included in the planning process; including this factor can help in outperforming the top human players (Wu, 2019).
It is expected that MCTS rollouts will be used in imperfect-information gaming. It will also be important to introduce expected rewards, which would help in supervising the cases where learning fails.
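To make the MCTS-rollout suggestion concrete, here is a minimal UCT implementation on a toy two-player game (players alternately add 1 or 2 to a counter; whoever reaches 10 wins). The game and all parameters are stand-ins; this sketches only the generic selection-expansion-rollout-backup loop, not a plan for a full MOBA.

```python
import math
import random

TARGET = 10  # toy game: reach exactly 10 to win

def moves(n):
    return [m for m in (1, 2) if n + m <= TARGET]

class Node:
    def __init__(self, n, parent=None, move=None):
        self.n, self.parent, self.move = n, parent, move
        self.children, self.visits, self.wins = [], 0, 0.0
        self.untried = moves(n)

def uct_select(node):
    # UCB1: exploit average win rate, explore rarely visited children.
    return max(node.children, key=lambda c: c.wins / c.visits +
               math.sqrt(2 * math.log(node.visits) / c.visits))

def rollout(n):
    """Play random moves; +1 if the player to move at n wins."""
    to_move = 0
    while n < TARGET:
        n += random.choice(moves(n))
        if n == TARGET:
            return 1 if to_move == 0 else -1
        to_move = 1 - to_move

def mcts(root_n, iters=3000, seed=0):
    random.seed(seed)
    root = Node(root_n)
    for _ in range(iters):
        node = root
        # Selection: descend while the node is fully expanded.
        while not node.untried and node.children:
            node = uct_select(node)
        # Expansion: add one untried move as a new child.
        if node.untried:
            m = node.untried.pop()
            node = Node(node.n + m, parent=node, move=m)
            node.parent.children.append(node)
        # Rollout result, from the view of the player to move at node;
        # at a terminal node the player to move has already lost.
        result = -1.0 if node.n == TARGET else rollout(node.n)
        # Backup: each node stores wins for the player who moved into it.
        while node:
            node.visits += 1
            node.wins += (1 - result) / 2
            result = -result
            node = node.parent
    return max(root.children, key=lambda c: c.visits).move

best = mcts(0)  # adding 1 first leaves the opponent a losing position
```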
References
Adil, K., Jiang, F., Liu, S., Jifara, W., Tian, Z., & Fu, Y. (2017). State-of-the-art and open challenges in RTS game-AI and Starcraft. International Journal of Advanced Computer Science & Applications, 8(12), 16-24.
Khan, A., Yang, K., Fu, Y., Lou, F., Jifara, W., Jiang, F., & Shaohui, L. (2017, September). A competitive combat strategy and tactics in RTS games AI and StarCraft. In Pacific Rim Conference on Multimedia (pp. 3-12). Springer, Cham.
Ripamonti, L. A., Granato, M., Trubian, M., Knutas, A., Gadia, D., & Maggiorini, D. (2018). Multi-agent simulations for the evaluation of Looting Systems design in MMOG and MOBA games. Simulation Modelling Practice and Theory, 83, 124-148.
Stanescu, A. M. (2019). Outcome Prediction and Hierarchical Models in Real-Time Strategy Games.
Wu, B. (2019, July). Hierarchical macro strategy model for MOBA game AI. In Proceedings of the AAAI Conference on Artificial Intelligence (Vol. 33, pp. 1206-1213).
Ye, D., Liu, Z., Sun, M., Shi, B., Zhao, P., Wu, H., … & Chen, Q. (2019). Mastering Complex Control in MOBA Games with Deep Reinforcement Learning. arXiv preprint arXiv:1912.09729.
Zhang, Z., Li, H., Zhang, L., Zheng, T., Zhang, T., Hao, X., … & Zhou, W. (2019). Hierarchical Reinforcement Learning for Multi-agent MOBA Game. arXiv preprint arXiv:1901.08004.