Just a thought from participating in large-scale battles under TiDi.
GPU-assisted computing (GPGPU) for some or all of the Taylor-expansion calculations would be very interesting to see. The GPU and CPU can use shared memory, unlike running multiple CPUs where every CPU needs dedicated RAM.
Moving some calculations to the GPU, especially floating-point work such as the Taylor expansions used for sin/cos, tracking, orbits, etc., could offload the CPU quite a bit.
My thought is that on every server tick, the CPU delegates a clearly defined array of calculations in RAM to the GPU; that way it can be ensured the calculations are done before the server-tick time is up.
I think CUDA calculations on a Tesla card could really boost performance, and since the set of calculations it is supposed to do is fixed, you also know in advance how long the number crunching will take.
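A hypothetical sketch of that per-tick batching idea (all names and the batch size are my own assumptions). Here the "kernel" is a plain C loop standing in for what would be a CUDA `__global__` kernel launched over the whole array; the point is that the work per tick is a fixed-size batch of independent elements, so the crunch time is predictable:

```c
#include <stddef.h>

#define BATCH_SIZE 4096  /* assumed: one fixed-size batch per server tick */

/* Stand-in for the per-element work, e.g. a 3-term sin series:
 *   x - x^3/6 + x^5/120, written in nested (Horner-like) form. */
static double crunch_one(double x)
{
    double x2 = x * x;
    return x * (1.0 - x2 / 6.0 * (1.0 - x2 / 20.0));
}

/* Stand-in for the kernel launch: every element is independent, so a
 * GPU could process the whole batch in parallel, and the CPU knows the
 * batch is done before the tick deadline. */
void crunch_batch(const double *in, double *out, size_t n)
{
    for (size_t i = 0; i < n; i++)
        out[i] = crunch_one(in[i]);
}
```

In a real CUDA build, `crunch_batch` would become a kernel launch over `BATCH_SIZE` threads, with the input array copied to (or shared with) device memory at the start of the tick and the results read back before the tick ends.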