Opened 10 years ago
Closed 10 years ago
#2481 closed enhancement (fixed)
[PATCH] test_headerless.h test failure on i686 gcc4.9
Reported by: | pcpa | Owned by: | philip |
---|---|---|---|
Priority: | Should Have | Milestone: | Alpha 17 |
Component: | Core engine | Keywords: | patch |
Cc: | Patch: |
Description
I did some trial&error changing/reverting code in the failure path and found that the attached patch corrects the problem.
I do not fully understand the code, but the comment explaining the reason for the cast suggests the test case is relying on undefined behaviour. Moving the variable initialization out of the loop appears to correct it, though this may still not be the right fix.
Attachments (1)
Change History (10)
by , 10 years ago
Attachment: | 0ad-check.patch added |
---|
comment:1 by , 10 years ago
I think you're breaking the point of the test. It should try to allocate a random chunk of memory 1000 times, while your patch only tests it for one value.
So I think the allocators need to be fixed, rather than the test.
comment:2 by , 10 years ago
I see that I did break the point of the test by not randomizing the arguments.
At first I assumed that neither the test implementation nor gcc 4.9 was wrong, so I thought it could be a strict-aliasing issue or an access to a variable out of scope, since the test passes with gcc 4.8, and on x86_64 it still works with gcc 4.9.
I just did a test only changing the srand argument to see if there was some pattern... The numbers are from:

printf("%d %d %d %d\n", maxSize, size, HeaderlessAllocator::minAllocationSize, HeaderlessAllocator::allocationAlignment);

in the allocate branch.

srand(1);
413540 413536 128 16
837225 837216 128 16
207147 207136 128 16
382504 382496 128 16
998485 998480 128 16
666592 666592 128 16
148481 148480 128 16
17092 17088 128 16
headerless.cpp(490): Assertion failed: "m_freeBytes >= freedBlock->Size()"

srand(2);
849007 848992 128 16
69609 69600 128 16
674199 674192 128 16
310083 310080 128 16
400739 400736 128 16
288687 288672 128 16
583449 583440 128 16
193591 193584 128 16
headerless.cpp(490): Assertion failed: "m_freeBytes >= freedBlock->Size()"

srand(3)
235912 235904 128 16
906681 906672 128 16
242016 242016 128 16
990771 990768 128 16
10482 10480 128 16
140445 140432 128 16
596056 596048 128 16
216927 216912 128 16
807892 807888 128 16
98313 98304 128 16
255305 255296 128 16
headerless.cpp(490): Assertion failed: "m_freeBytes >= freedBlock->Size()"

srand(4)
140490 140480 128 16
203807 203792 128 16
702733 702720 128 16
311786 311776 128 16
10715 10704 128 16
99679 99664 128 16
962384 962384 128 16
headerless.cpp(210): Assertion failed: "m_freeBlocks != 0"

srand(5)
83918 83904 128 16
16436 16432 128 16
1012126 1012112 128 16
500557 500544 128 16
16954 16944 128 16
50348 50336 128 16
658493 658480 128 16
775067 775056 128 16
491707 491696 128 16
628359 628352 128 16
591909 591904 128 16
999526 999520 128 16
577093 577088 128 16
1020384 1020384 128 16
342392 342384 128 16
360498 360496 128 16
508977 508976 128 16
548698 548688 128 16
943559 943552 128 16
388323 388320 128 16
21007 20992 128 16
937526 937520 128 16
862609 862608 128 16
81889 81888 128 16
420786 420784 128 16
810187 810176 128 16
81661 81648 128 16
171092 171088 128 16
333569 333568 128 16
892014 892000 128 16
799375 799360 128 16
1030413 1030400 128 16
headerless.cpp(210): Assertion failed: "m_freeBlocks != 0"
comment:3 by , 10 years ago
Milestone: | Backlog → Alpha 17 |
---|
comment:4 by , 10 years ago
After a system upgrade here (using i686 Arch), I noticed it's indeed not a problem with that test, but a problem with the allocators. Long or big games crash immediately with the same assertion failure.
It's pretty easy to reproduce by loading the units_demo map (which is a map containing every entity in game).
comment:5 by , 10 years ago
I cannot try it myself now, but can you try changing "OptimizeSpeed" to "Optimize" here: http://trac.wildfiregames.com/browser/ps/trunk/build/premake/premake4.lua#L166
Building with -O3 sometimes leads to problems, as it is not as well tested as -O2 (and produces bigger and possibly also slower code).
comment:6 by , 10 years ago
I added some logging to try to narrow this down, and it looks like the problem is when BoundaryTagManager::RemoveTags calls freedBlock->~FreedBlock();, which is:

~FreedBlock()
{
	// clear all fields to prevent accidental reuse
	prev = next = 0;
	m_id = 0;
	m_size = ~size_t(0);
	m_magic = 0;
}
In GCC 4.9.0, that call does not set the memory to 0, so the code subsequently misbehaves.
comment:7 by , 10 years ago
$ cat test.cpp
#include <stdio.h>
#include <memory.h>
struct C
{
	int m;
	~C() { m = 0; }
};
int main()
{
	char x[4] = { -1, -1, -1, -1 };
	C *p = (C *)x;
	printf("%d\n", p->m);
	p->~C();
	printf("%d\n", p->m);
}
$ g++-4.8.1 test.cpp -O1 && ./a.out
-1
0
$ g++-4.9.0 test.cpp -O1 && ./a.out
-1
-1
comment:8 by , 10 years ago
I don't see an obvious way to argue that GCC is wrong here. The C++11 standard says "Once a destructor is invoked for an object, the object no longer exists", so the explicit destructor call is clearly more than just a simple function call, and it seems reasonable for GCC to optimise out the memory writes to an object just before it ceases to exist.
(GCC is not just optimising away the whole destructor - if you put e.g. a printf in there, the printf gets called but it still ignores the writes to memory.)
To make BoundaryTagManager safe, I think it mustn't access a pointer as a FreedBlock until it knows a valid FreedBlock exists there; and it mustn't make any assumptions about the bytes that correspond to an existing FreedBlock.
0ad-check.patch