It does create .bidmp files, but I have no way to work with them. If there IS a way to work with them (even one aimed at advanced users who can use WinDbg to debug operating-system and application issues), please point me to that information and I can debug this myself to save you guys some time.
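For what it's worth, the companion .mdmp files (where present, e.g. in the dump_default_malloc_with_mdmp.zip below) look like standard minidumps, so I assume they can at least be opened the usual way - the .bidmp format itself is what I can't do anything with. Roughly what I would try (file name is just an example):

    cdb -z crash.mdmp        (or File > Open Crash Dump in WinDbg)
    !analyze -v              (automatic exception analysis)
    .ecxr                    (switch to the faulting exception context)
    kb                       (call stack - mostly raw addresses without your symbols)

Without your symbol files that only gets me so far, which is exactly the problem.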
Since I had to re-create an issue that was never fixed, I have put up all the files, each archive containing a .bidmp and its corresponding .rpt file (I have had RPT output enabled since this started happening). You can get them from the following links:
https://paronity.com/i/crash1.zip
https://paronity.com/i/crash2.zip
https://paronity.com/i/crash3.zip
https://paronity.com/i/crash4.zip
https://paronity.com/i/crash5_default_malloc.zip
https://paronity.com/i/dump_default_malloc_with_mdmp.zip
https://paronity.com/i/crash_6_default_malloc.zip
https://paronity.com/i/crash_7_default_malloc.zip
https://paronity.com/i/crash_8_default_malloc.zip
https://paronity.com/i/crash_9_default_malloc.zip
------------------------------------------------------------------------
It's not Windows. If it were Windows, there would be events being logged and a memory dump would be created (because I have user-mode dumps enabled for that executable).
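(For reference, user-mode dumps are enabled through the standard WER LocalDumps registry key; the executable name and dump folder below are just examples of my configuration:)

    reg add "HKLM\SOFTWARE\Microsoft\Windows\Windows Error Reporting\LocalDumps\arma3server.exe" /v DumpFolder /t REG_EXPAND_SZ /d "D:\CrashDumps" /f
    reg add "HKLM\SOFTWARE\Microsoft\Windows\Windows Error Reporting\LocalDumps\arma3server.exe" /v DumpType /t REG_DWORD /d 2 /f

With that in place, any unhandled crash of that executable should produce a full dump in that folder - and it never does.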
A memory-related crash is possible, but you, as the software that is running, are responsible for making those memory calls and handling them should they fail.
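To be clear about what I mean by "handling them": a failed allocation is something the application can detect and report cleanly instead of silently dying. A trivial sketch of the idea (illustrative only, obviously not your engine code):

    #include <cstdio>
    #include <cstdlib>
    #include <new>

    // Illustrative only: report a failed allocation instead of dying silently.
    void* allocate_or_report(std::size_t bytes)
    {
        void* p = std::malloc(bytes);
        if (p == nullptr)
        {
            std::fprintf(stderr, "allocation of %zu bytes failed - writing report and exiting\n", bytes);
            // ...write a crash report / minidump here, then terminate cleanly...
            std::abort();
        }
        return p;
    }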
Is there anything I can do to help further debug the issue on your end? I do operating-system-level debugging for a living and have a lot of experience in this area; I just need to know what I can do with your software to apply it. I am doing everything possible from the OS level and am getting nowhere, which leads me to believe it's on ARMA's side of things.
------------------------------------------------------------------
The only thing on the box is this particular server (and the MySQL instance for it), and the box has 32 GB of RAM.
Most of the time, the ARMA server idles around 110-200 MB of RAM (with the 3rd-party malloc) or 1.4-1.8 GB (with the default malloc), depending on player count, object count, etc.
The system NEVER gets above 25% RAM usage, and that's only when we are testing another server or doing something else with the box at the time. Point being, we aren't even coming close to running out of RAM.
https://paronity.com/i/8i8I3.png
------------------------------------------------------------------
For the sake of proving absolutely nothing, I ran a server on the box (default map with no customization) since my last post, with a tool restarting it every 12 hours, and it never crashed. So it is something in the custom code that is causing the issue (but we already knew that), which doesn't help us (content creators and server admins) figure out what is causing it, because it's YOUR engine that isn't handling the exception correctly. What's more, your code is "handling" it just enough to prevent Windows from seeing the failure (hence no user-mode dumps and no event log entries), but not enough to actually do its job.
I would like to be able to figure out what code is causing the issue, but you, the creator of the engine, provide me no valid way of doing so.
-----------------------------------------------------------------------
I poked around in the .bidmp files with a hex editor and can see the error that the engine "thinks" is causing the issue:

    "Out of memory (requested 4203 KB).
    footprint 536870912 KB.
    pages 81920 KB.
    B, mapped 50040832 B), free 60440576 B"

But as I said before, the server is on a box with 32 GB of RAM (https://paronity.com/i/6Z7g3.png) and it's the only thing running. Its RAM consumption stays in the 110 MB-169 MB range. (https://paronity.com/i/1E2l4.png)
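Side note on those numbers: taken literally, "footprint 536870912 KB" would be 512 GB, which is impossible on this box, so I assume that counter is actually in bytes like the "mapped 50040832 B" and "free 60440576 B" fields, which would make it 512 MB. Either way it is nowhere near the 32 GB of physical RAM available here.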