Improve performance of entity management for the AI
Reported by: Yves | Owned by: (none)
Description
Entities are created, destroyed and modified in the simulation code. The AI scripts also need access to these entities, so they store a separate representation of the ones they require. The AIProxy and AIInterface simulation components keep track of the differences that are relevant to the AIs. These differences are passed to the AI on each AI update (which doesn't happen every turn), and the sharedAI updates its internal (full) representation of the entity data.
ApplyEntitiesDelta in shared.js does this updating. It updates not only the entities themselves but also the entity collections. Collections can have filters associated with them, and the collections are updated according to these filters (entities are added to or removed from collections).
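To make the discussion concrete, here is a minimal sketch of how delta application with filtered collections works. This is not the actual shared.js code; `EntityCollection`, `applyEntitiesDelta` and the delta shape are illustrative assumptions:

```javascript
// Illustrative sketch only — names and delta format are assumptions,
// not the real shared.js implementation.
class EntityCollection {
  constructor(filter) {
    this.filter = filter; // predicate deciding membership
    this.entities = {};   // id -> entity
  }
  // Re-check the filter for one touched entity; add or remove it.
  updateEntity(id, entity) {
    if (entity && this.filter(entity))
      this.entities[id] = entity;
    else
      delete this.entities[id];
  }
}

function applyEntitiesDelta(state, collections, deltas) {
  for (const d of deltas) {
    if (d.type === "create")
      state[d.id] = d.data;
    else if (d.type === "destroy")
      delete state[d.id];
    else if (d.type === "modify")
      Object.assign(state[d.id], d.changes);
    // Every collection re-evaluates its filter for the touched entity —
    // this per-delta, per-collection work is where the cost piles up.
    for (const c of collections)
      c.updateEntity(d.id, state[d.id]);
  }
}

// Hypothetical usage:
const state = {};
const idleUnits = new EntityCollection(e => e.idle);
applyEntitiesDelta(state, [idleUnits], [
  { type: "create", id: 1, data: { idle: true } },
  { type: "create", id: 2, data: { idle: false } },
  { type: "modify", id: 2, changes: { idle: true } },
  { type: "destroy", id: 1 },
]);
// idleUnits.entities now contains only entity 2
```

With many collections and thousands of deltas per update, the inner loop over all collections explains why this single function dominates late-game profiles.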
Each AI player then works with the entity collections: it iterates through them and calls actions per entity, and it can also add or remove collections.
An important aspect of the current system is its support for multi-threading. See this forum thread for a detailed explanation of the interface and the multi-threading support.
The performance bottlenecks
In our experience so far, the parts that happen on the simulation side (AIInterface, AIProxy) are relatively inexpensive. ApplyEntitiesDelta takes way too long in the late game, which is easy to profile because it's all in one function. I also think that iterating through collections adds far too much overhead; that problem is harder to measure because it's distributed across multiple AI objects working with entities and entity collections.
Approaches for improvement
Currently I see two approaches for improving performance:
1. Improve entity collections in JS code
Using more suitable data structures for the collections could reduce the time for iterating and updating significantly. I've found an interesting overview of data structures that could help in this case. IIRC the AI often organizes some of its entity collections in a kind of tree hierarchy anyway (without actually using a tree data structure). Maybe it even makes sense to offer different types of collections that are optimized for different usages.
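As an example of a collection specialized for one usage, a grid-bucketed collection could answer "entities near a point" queries without scanning every entity, which flat iteration forces today. This is a hypothetical sketch; `GridCollection` and its methods are invented for illustration:

```javascript
// Hypothetical spatially-bucketed collection — not existing 0 A.D. code.
class GridCollection {
  constructor(cellSize) {
    this.cellSize = cellSize;
    this.cells = {}; // "cx,cz" -> array of entities
  }
  _key(x, z) {
    return Math.floor(x / this.cellSize) + "," + Math.floor(z / this.cellSize);
  }
  add(ent) {
    const k = this._key(ent.x, ent.z);
    (this.cells[k] = this.cells[k] || []).push(ent);
  }
  // Visits only the cells overlapping the query circle,
  // instead of iterating over the whole collection.
  near(x, z, r) {
    const out = [];
    const c = this.cellSize;
    for (let cx = Math.floor((x - r) / c); cx <= Math.floor((x + r) / c); cx++)
      for (let cz = Math.floor((z - r) / c); cz <= Math.floor((z + r) / c); cz++)
        for (const e of this.cells[cx + "," + cz] || [])
          if ((e.x - x) * (e.x - x) + (e.z - z) * (e.z - z) <= r * r)
            out.push(e);
    return out;
  }
}

// Hypothetical usage:
const grid = new GridCollection(16);
grid.add({ id: 1, x: 5, z: 5 });
grid.add({ id: 2, x: 100, z: 100 });
const nearby = grid.near(0, 0, 10); // only entity 1
```

A general-purpose collection would still handle removals and movement updates; the point is only that different collection types can trade update cost against query cost.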
There was some input from SpiderMonkey developers here. One suggestion was using Map instead of normal objects. I think Map isn't supported in SpiderMonkey 1.8.5, so you'd have to apply the patch in #1886 first or wait until the upgrade is completed.
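For reference, the suggested switch looks like this once Map is available. A plain object coerces every key to a string (so numeric entity ids become property names), while Map keeps integer keys as-is and avoids per-property shape changes when entities churn:

```javascript
// Plain object: the numeric entity id 42 is coerced to the string key "42".
const byObject = {};
byObject[42] = { name: "unit" };
// Object.keys(byObject) yields ["42"]

// Map (ES6, not in SpiderMonkey 1.8.5): the integer key stays an integer.
const byMap = new Map();
byMap.set(42, { name: "unit" });
byMap.get(42).name; // "unit"
byMap.has("42");    // false — keys are not coerced
```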
2. Store entity data in C++, implement an interface to JS
Also keep in mind the requirements for multi-threading! We most likely still need a copy of the entity data, because the simulation code should be able to run independently of the AI code, and because the two sides have different requirements for their data structures.