Opened 10 years ago

Last modified 2 years ago

#2370 new defect

Improve performance of entity management for the AI — at Version 1

Reported by: Yves Owned by:
Priority: Should Have Milestone: Backlog
Component: AI Keywords:
Cc: Patch:

Description (last modified by Yves)

Overview

Entities are created, destroyed and modified in the simulation code. The AI scripts also need access to these entities, so they store a separate representation of the entities they require. The AIProxy and AIInterface simulation components keep track of the differences that are interesting for AIs. These differences are passed to the AI on each AI update (which doesn't happen every turn), and the sharedAI updates its internal (full) representation of entity data.

ApplyEntitiesDelta in shared.js performs this updating. It updates not only the entities themselves but also entity collections. Collections can have filters associated with them, and each collection is kept in sync with its filter (entities are added to or removed from collections).
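To illustrate the mechanism, here is a minimal sketch of such a delta-application step. This is not the actual shared.js implementation; the delta format and all names (`applyEntitiesDelta`, `state`, `members`) are hypothetical:

```javascript
// Hypothetical sketch of delta application (NOT the real shared.js code).
// Each delta entry describes a created, destroyed, or modified entity;
// filtered collections are kept in sync as the delta is applied.
function applyEntitiesDelta(state, delta) {
    for (const ev of delta) {
        if (ev.type === "create") {
            state.entities.set(ev.id, ev.data);
        } else if (ev.type === "destroy") {
            state.entities.delete(ev.id);
        } else if (ev.type === "modify") {
            Object.assign(state.entities.get(ev.id), ev.changes);
        }
        // Re-evaluate every filtered collection for the touched entity.
        for (const coll of state.collections) {
            const ent = state.entities.get(ev.id);
            if (ent !== undefined && coll.filter(ent))
                coll.members.add(ev.id);
            else
                coll.members.delete(ev.id);
        }
    }
}
```

The inner loop over all collections for every delta entry is exactly the kind of work that grows with both delta size and collection count, which fits the late-game profile described below.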

Later, each AI player works with entity collections by iterating through them and calling actions per entity, or by adding and removing collections.
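A simplified picture of that usage pattern, with a hypothetical `EntityCollection` class (the real collections have a richer API):

```javascript
// Hypothetical EntityCollection sketch: AI code iterates members,
// runs an action per entity, and derives filtered sub-collections.
class EntityCollection {
    constructor(entities, ids) {
        this.entities = entities;   // Map of id -> entity data
        this.ids = new Set(ids);
    }
    forEach(action) {
        for (const id of this.ids)
            action(this.entities.get(id), id);
    }
    filter(pred) {
        const out = [];
        for (const id of this.ids)
            if (pred(this.entities.get(id)))
                out.push(id);
        return new EntityCollection(this.entities, out);
    }
}
```

Every derived sub-collection repeats a full scan of its parent, so chained filters multiply the iteration cost mentioned in the bottleneck section.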

An important aspect of the current system is its support for multi-threading. Also check this forum thread for a detailed explanation of the interface and the multi-threading support.

The performance bottlenecks

From current experience, the parts happening on the simulation side (AIInterface, AIProxy) are relatively inexpensive. ApplyEntitiesDelta takes far too long in the late game, which is easy to profile because it's all in one function. I also think that iterating through collections adds far too much overhead; that problem is harder to measure because it's distributed across multiple AI objects working with entities and entity collections.

Approaches for improvement

Currently I see two approaches for improving performance:

1. Improve entity collections in JS code

Using more suitable data structures for the collections could reduce the time for iterating and updating significantly. I've found an interesting overview of data structures that could help in this case. IIRC the AI often uses a kind of tree structure for some of its entity collections anyway (without actually using a tree data structure). Maybe it even makes sense to offer different types of collections that are optimized for different usages.

When optimizing JavaScript code, the JIT compiler should also be considered. Using less polymorphic objects and avoiding deleting and adding properties allows the JIT compiler to optimize the code much better. I currently suspect that the polymorphic structure of entity collections is the main reason why the SpiderMonkey upgrade didn't improve performance for us.
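As a sketch of the JIT-friendly style (hypothetical helpers, not existing engine code): give every entity object the same set of properties in the same order, so they share one hidden class/shape, and mark removal with a flag instead of `delete`, which can push the engine into a slower dictionary-mode representation:

```javascript
// JIT-friendliness sketch: all entities get the same property layout.
function makeEntity(id, owner, template) {
    return { id: id, owner: owner, template: template, alive: true };
}

function destroyEntity(ent) {
    ent.alive = false;   // keep the shape stable; do NOT `delete` properties
}
```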

There was some input from SpiderMonkey developers here. One suggestion was using Map instead of normal objects. I think Map isn't supported in SpiderMonkey 1.8.5, so you'd have to apply the patch in #1886 first or wait until the upgrade is completed.
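The Map suggestion would look roughly like this (names are illustrative). A `Map` keyed by integer entity ids avoids the implicit number-to-string key conversion of plain objects (`obj[5]` is really `obj["5"]`) and tolerates heavy insert/delete churn without degrading a hidden class:

```javascript
// Sketch of a Map-based entity store (hypothetical helper names).
function makeEntityMap(entries) {
    const m = new Map();
    for (const [id, data] of entries)
        m.set(id, data);
    return m;
}

// Typical read path: iterate values without string key lookups.
function countOwnedBy(entityMap, player) {
    let n = 0;
    for (const ent of entityMap.values())
        if (ent.owner === player)
            n++;
    return n;
}
```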

2. Store entity data in C++, implement an interface to JS

C++ is generally faster and easier to optimize predictably. It also avoids garbage-collection overhead and the cost of passing structured clones to JavaScript. On the other hand, each call from JS into C++ carries overhead: arguments must be converted to C++ types, and the call itself isn't free. The SpiderMonkey developers even use a system called self-hosting, which implements parts of the JS engine in JS, to avoid exactly this kind of overhead! I don't know how many calls it takes before the overhead becomes too big, but it's probably enough to cause serious problems. The fact that the context-wrapping overhead for keeping the sharedAI in its own JS compartment was so large also suggests that the JS/C++ overhead could be significant in this case and needs special consideration.
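One common mitigation for that boundary cost is batching: instead of one JS-to-C++ call per entity property, a single call returns a flat snapshot (e.g. a typed array) that the AI then scans entirely on the JS side. A hypothetical sketch, with the C++ side stood in by a plain JS function:

```javascript
// Stand-in for a single C++ call returning a packed buffer of
// [id, x, z] triples for all entities (names are hypothetical).
function snapshotPositions(entities) {
    const buf = new Float64Array(entities.length * 3);
    entities.forEach((e, i) => {
        buf[i * 3] = e.id;
        buf[i * 3 + 1] = e.x;
        buf[i * 3 + 2] = e.z;
    });
    return buf;
}

// The AI scans the snapshot locally: no further boundary crossings.
function nearestEntity(buf, x, z) {
    let best = -1, bestDist = Infinity;
    for (let i = 0; i < buf.length; i += 3) {
        const dx = buf[i + 1] - x, dz = buf[i + 2] - z;
        const d = dx * dx + dz * dz;
        if (d < bestDist) { bestDist = d; best = buf[i]; }
    }
    return best;
}
```

With this shape, the number of JS/C++ crossings is one per query rather than one per entity, at the cost of copying the snapshot.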

Also keep the multi-threading requirements in mind! We most likely still need a copy of the entity data, because the simulation code should be able to run independently of the AI code and because the data-structure requirements differ.

Change History (1)

comment:1 by Yves, 10 years ago

Description: modified (diff)