by IV_Mark_VI Wed Sep 19, 2012 3:00 pm
Whee...
Any programmers around?
This is how I understand they'd likely do it:
Keep your main game code in one thread and put the calls to the GPU in a separate one. The rendering runs once per frame, but it's kept in sync with the main thread.
Let's assume they aim for 30 fps, so a call is made to the GPU roughly every 33 ms. Because they want to squeeze every bit of performance out of this game, their models for each console and platform are different.
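A minimal sketch of how I picture that, assuming a ~33 ms (30 fps) budget. GameState, update(), and submitToGpu() are made-up placeholders, not anything from the actual engine:
[code]
#include <atomic>
#include <chrono>
#include <condition_variable>
#include <mutex>
#include <thread>

struct GameState { int frame = 0; };          // hypothetical shared state

std::mutex stateMutex;
std::condition_variable frameReady;
GameState state;
std::atomic<bool> running{true};
bool newFrame = false;

void update(GameState& s) { ++s.frame; }      // placeholder for game logic
void submitToGpu(const GameState&) {}         // placeholder for the GPU call

// Main/simulation thread: runs game logic, then signals the render thread.
void mainLoop() {
    using namespace std::chrono;
    const auto frameBudget = milliseconds(33); // ~30 fps
    while (running) {
        auto start = steady_clock::now();
        {
            std::lock_guard<std::mutex> lock(stateMutex);
            update(state);
            newFrame = true;
        }
        frameReady.notify_one();               // keep rendering in sync with main
        std::this_thread::sleep_until(start + frameBudget);
    }
}

// Render thread: waits for the main thread, then issues the GPU calls.
void renderLoop() {
    while (running) {
        std::unique_lock<std::mutex> lock(stateMutex);
        frameReady.wait(lock, [] { return newFrame || !running; });
        if (!running) break;
        newFrame = false;
        submitToGpu(state);
    }
}

int main() {
    std::thread render(renderLoop);
    std::thread sim(mainLoop);
    std::this_thread::sleep_for(std::chrono::seconds(1)); // run briefly for demo
    running = false;
    frameReady.notify_all();
    sim.join();
    render.join();
}
[/code]
Real engines would double-buffer the state instead of sharing one struct under a mutex, but the "render thread kept in step with the main thread" idea is the same.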
So if on one box the CPU is over-utilized and it takes, say, 35 ms to get a frame ready, the frame misses its window and it'll look glitchy when the call to the GPU is made.
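A rough illustration of that over-budget case: if one frame's work runs longer than the budget, it misses its slot and the previous image gets shown again. doWork() here is just a stand-in that sleeps to simulate a slow frame:
[code]
#include <chrono>
#include <cstdio>
#include <thread>

void doWork() {                                       // pretend this frame ran long
    std::this_thread::sleep_for(std::chrono::milliseconds(36));
}

int main() {
    using namespace std::chrono;
    const auto frameBudget = milliseconds(33);        // ~30 fps target
    auto start = steady_clock::now();
    doWork();
    auto elapsed = duration_cast<milliseconds>(steady_clock::now() - start);
    if (elapsed > frameBudget) {
        // Missed the window: the display repeats the last frame,
        // which is what shows up on screen as a hitch or glitch.
        std::printf("frame took %lld ms, over budget -- frame dropped\n",
                    static_cast<long long>(elapsed.count()));
    } else {
        std::printf("frame took %lld ms, on time\n",
                    static_cast<long long>(elapsed.count()));
    }
}
[/code]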
OS latency issues are negligible until you go down to the nanosecond or picosecond level.
The faster the CPU, the faster they can update the GPU, if they want to tighten their model and raise their utilization percentage. But to my limited understanding, that would just increase fluidity (which, combined with good anti-aliasing, would make it seem really smooth). The only way the animations would actually be out of sync is if the models are different (my guess) or if the Xbox is always behind on rendering (doubtful).
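One hypothetical way the "different models" thing could show up as drift: if animation is advanced by a fixed step per rendered frame instead of by elapsed time, the box running at a lower frame rate falls behind. Made-up numbers, just to show the idea:
[code]
#include <cstdio>

// Advance an animation clock for one simulated second of wall time.
double runOneSecond(double fps, bool useDeltaTime) {
    const double fixedStep = 1.0 / 30.0;   // step tuned for a 30 fps target
    double animTime = 0.0;
    int frames = static_cast<int>(fps);    // frames actually rendered in 1 s
    for (int i = 0; i < frames; ++i)
        animTime += useDeltaTime ? (1.0 / fps) : fixedStep;
    return animTime;
}

int main() {
    // Fixed-step animation: the 25 fps box only reaches ~0.83 s of the clip.
    std::printf("fixed step:  30fps=%.2fs  25fps=%.2fs (drifts apart)\n",
                runOneSecond(30.0, false), runOneSecond(25.0, false));
    // Delta-time animation: both boxes reach 1.00 s and stay in sync.
    std::printf("delta time:  30fps=%.2fs  25fps=%.2fs (stays in sync)\n",
                runOneSecond(30.0, true), runOneSecond(25.0, true));
}
[/code]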
It's also possible that their GPU calls aren't in a separate thread, but I really really doubt that.
It does appear that the Xbox is dropping frames, but I'm fine with that.
Anyone have access to a dev kit?