(Also remember there are a lot of other issues with Sleep(n) ; the times are only reliable here because this is in a no-op test app)
This actually started because I was looking into Linux thread sleep timing, so I wrote a little test to just Sleep(n) a bunch of times and measure the observed duration of the sleep.
(Of course on Windows I do timeBeginPeriod(1) and bump my thread to very high priority (and timeGetDevCaps says the min period is 1)).
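The measurement loop is basically this (a rough sketch of the idea, not the exact test code; error handling omitted) :

#include <windows.h>
#include <mmsystem.h>   // timeBeginPeriod / timeGetDevCaps
#include <stdio.h>
#include <math.h>
#pragma comment(lib, "winmm.lib")

int main()
{
    // the "min period" mentioned above is tc.wPeriodMin from timeGetDevCaps
    TIMECAPS tc;
    timeGetDevCaps(&tc, sizeof(tc));
    timeBeginPeriod(tc.wPeriodMin);   // typically 1 ms
    SetThreadPriority(GetCurrentThread(), THREAD_PRIORITY_TIME_CRITICAL);

    LARGE_INTEGER freq;
    QueryPerformanceFrequency(&freq);

    for (int n = 1; n <= 3; n++)
    {
        const int reps = 1000;
        double sum = 0, sumsq = 0, lo = 1e9, hi = 0;
        for (int i = 0; i < reps; i++)
        {
            LARGE_INTEGER t0, t1;
            QueryPerformanceCounter(&t0);
            Sleep(n);
            QueryPerformanceCounter(&t1);
            double ms = 1000.0 * (double)(t1.QuadPart - t0.QuadPart) / (double)freq.QuadPart;
            sum += ms; sumsq += ms*ms;
            if (ms < lo) lo = ms;
            if (ms > hi) hi = ms;
        }
        double avg  = sum / reps;
        double sdev = sqrt(sumsq / reps - avg*avg);
        printf("sleep(%d) : average = %.3f , sdev = %.3f , min = %.3f , max = %.3f\n",
            n, avg, sdev, lo, hi);
    }

    timeEndPeriod(tc.wPeriodMin);
    return 0;
}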
Anyway, what I'm seeing is this :
Win7 :
sleep(1) : average = 0.999 , sdev = 0.035 , min = 0.175 , max = 1.568
sleep(2) : average = 2.000 , sdev = 0.041 , min = 1.344 , max = 2.660
sleep(3) : average = 3.000 , sdev = 0.040 , min = 2.200 , max = 3.774
Sleep(n) averages n
duration in [n-1,n+1]

XP :
sleep(1) : average = 1.952 , sdev = 0.001 , min = 1.902 , max = 1.966
sleep(2) : average = 2.929 , sdev = 0.004 , min = 2.665 , max = 2.961
sleep(3) : average = 3.905 , sdev = 0.004 , min = 3.640 , max = 3.927
Sleep(n) averages (n+1)
duration very close to (n+1) every time (tiny sdev)

Win8 :
sleep(1) : average = 2.002 , sdev = 0.111 , min = 1.015 , max = 2.101
sleep(2) : average = 2.703 , sdev = 0.439 , min = 2.017 , max = 3.085
sleep(3) : average = 3.630 , sdev = 0.452 , min = 3.003 , max = 4.130
average no good
Sleep(n) minimum very precisely n
duration in [n,n+1] (+ a little error)
rather larger sdev

It's like completely different logic on each of my 3 machines. XP is the most precise,
but it's sleeping for (n+1) millis instead of (n)! Win8 has a very precise min of n, but
the average and max are quite sloppy (sdev of almost half a milli, very high variation even
with nothing happening on the system). Win7 hits the average really nicely but has a large
range, and is the only one that will go well below the requested duration.
As noted before, I had a look at this because I'm running Linux in a VM and seeing very poor
performance from my threading code under Linux in the VM. So I ran this experiment :
Sleep(1) on Linux :
native : average = 1.094 , sdev = 0.015 , min = 1.054 , max = 1.224
in VM : average = 3.270 , sdev = 14.748 , min = 1.058 , max = 656.297
in VM2 : average = 1.308 , sdev = 2.757 , min = 1.052 , max = 154.025
Obviously being inside a VM on Windows is not very kind to Linux's threading system.
On the native box, Linux's sleep time is way more reliable than Windows' (small min-max range),
and that's just with default-priority threads and SCHED_OTHER, not even using a high-priority
trick like in the Windows tests above.
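The Linux side is the same idea, just clock_gettime around a 1 ms sleep (again a rough sketch of the idea, not the exact test code) :

#include <stdio.h>
#include <math.h>
#include <time.h>
#include <unistd.h>

static double now_ms()
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec * 1000.0 + ts.tv_nsec / 1000000.0;
}

int main()
{
    // default-priority thread, SCHED_OTHER, no special scheduler setup
    const int reps = 1000;
    double sum = 0, sumsq = 0, lo = 1e9, hi = 0;
    for (int i = 0; i < reps; i++)
    {
        double t0 = now_ms();
        usleep(1000);               // the "Sleep(1) on Linux" case : sleep 1 ms
        double ms = now_ms() - t0;
        sum += ms; sumsq += ms*ms;
        if (ms < lo) lo = ms;
        if (ms > hi) hi = ms;
    }
    double avg  = sum / reps;
    double sdev = sqrt(sumsq / reps - avg*avg);
    printf("sleep(1) : average = %.3f , sdev = %.3f , min = %.3f , max = %.3f\n",
        avg, sdev, lo, hi);
    return 0;
}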
added "in VM2". So the VM threading seems to be much better if you let it see many fewer cores than you have. I'm running on a 4 core (8 hypercore) machine; the base "in VM" numbers are with the VM set to see 4 cores. "in VM2" is with the VM set to 2 cores. Still a really bad max in there, but much better overall.