As of now, it appears that fog decay syncing from server to client has a precision of 1/127 (8-bit); any fog decay value that is not a multiple of 1/127 is rounded to the nearest one when transferred to the client.
This is quite problematic when trying to achieve precise values below 0.1, especially since <b>the visual effect of fog decay varies dramatically for values between 0 and 0.01</b>.
Here are some of the readings I took:
<pre>
server   client
0        0
0.0025   0
0.005    0.00787401 (1/127)
0.0075   0.00787401 (1/127)
0.01     0.00787401 (1/127)
0.0125   0.015748   (2/127)
0.015    0.015748   (2/127)
0.0175   0.015748   (2/127)
0.02     0.023622   (3/127)
0.0225   0.023622   (3/127)
0.025    0.023622   (3/127)
0.0275   0.023622   (3/127)
0.03     0.0314961  (4/127)
</pre>
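If the current behaviour really is round(value * 127) / 127, the following minimal C++ sketch reproduces every client reading in the table above (the helper name and the exact packing are my assumptions, not the engine's actual code):
<pre>
// Sketch reproducing the suspected quantization: pack into an 8-bit
// integer (value * 127, rounded), then divide by 127 on the client.
#include &lt;cmath&gt;
#include &lt;cstdio&gt;

// Hypothetical helper mirroring what the netcode appears to do today.
static float QuantizeTo8Bit(float serverValue)
{
    int packed = static_cast&lt;int&gt;(std::lround(serverValue * 127.0f)); // 8-bit payload
    return static_cast&lt;float&gt;(packed) / 127.0f;                       // client-side decode
}

int main()
{
    const float serverValues[] = {0.0f, 0.0025f, 0.005f, 0.0075f, 0.01f,
                                  0.0125f, 0.015f, 0.0175f, 0.02f, 0.0225f,
                                  0.025f, 0.0275f, 0.03f};
    for (float v : serverValues)
        std::printf("server %.4f -&gt; client %.8f\n", v, QuantizeTo8Bit(v));
    return 0;
}
</pre>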
Please make it so the fog decay is transmitted as a full 32-bit float over the network, rather than as a rounded 8-bit integer that gets divided by 127 on the receiving side.
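For reference, this is roughly what I mean, assuming a plain byte-buffer serializer (these function names are placeholders, not the engine's real API):
<pre>
// Sketch of the requested change: send all 4 bytes of the float verbatim.
#include &lt;cstdint&gt;
#include &lt;cstring&gt;

// Send side: serialize the full 32-bit float, no quantization.
void WriteFogDecay(uint8_t* buffer, float fogDecay)
{
    std::memcpy(buffer, &amp;fogDecay, sizeof(fogDecay)); // 4 bytes, no precision loss
}

// Receive side: read the float back exactly as it was sent.
float ReadFogDecay(const uint8_t* buffer)
{
    float fogDecay;
    std::memcpy(&amp;fogDecay, buffer, sizeof(fogDecay));
    return fogDecay;
}
</pre>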
Thanks.