shadow at agamin.co.il
Mon Jul 22 10:06:08 PDT 2002
I've read in the changelog that you have disabled this API
(QueryPerformanceCounter, which the quote below discusses) in builds
of SDL 1.2.4 and newer.
How does that change affect the function SDL_GetTicks? How can this
function measure time now?
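For reference, here is the kind of use I have in mind, a minimal
sketch of SDL_GetTicks-based timing (everything besides the SDL 1.2
calls themselves is just my illustration):

    #include <stdio.h>
    #include "SDL.h"

    int main(int argc, char *argv[])
    {
        if (SDL_Init(SDL_INIT_TIMER) < 0)
            return 1;

        Uint32 start = SDL_GetTicks();  /* milliseconds since SDL_Init() */
        SDL_Delay(100);                 /* sleep roughly 100 ms */
        Uint32 elapsed = SDL_GetTicks() - start;

        printf("elapsed: %u ms\n", (unsigned) elapsed);
        SDL_Quit();
        return 0;
    }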
Let me quote part of a post from the GRC newsgroups that discusses
that API. I would also like to ask whether the problem described in
that message is the same one that is causing trouble in SDL.
The original message was written by Steve Gibson:
------------------ start quote ----------------------
ALL IBM-compatible PCs, from the original 4.77 MHz 8088 IBM PC on,
have contained a clock running at 14,318,160 Hz. For reasons tied to
the original PC design, this clock is internally divided by 12,
yielding a frequency of 1,193,180 Hz. That divided clock is then
fed into a 16-bit counter which overflows once every 65,536 counts,
effectively dividing the 1,193,180 Hz by 65,536 to result in a
frequency of 18.2 Hz. And THAT frequency was available as a
hardware interrupt on the original PC.
The current count of the 16-bit counter/timer could be read out of
the chip, though only 8 bits at a time, and the number of
interrupts (overflows) which had occurred could be counted.
So, the bottom line is that with sufficiently clever programming, to
make up for the fact that you can't read both the interrupt count
and the hardware divider at the same instant, and to deal with
little complications like not being sure, when you read the number
of interrupts (overflows), whether you read them before or after the
next one came in relative to the count in the 16-bit divider, it
*is* possible to "synthesize" the hardware state into a coherent
counter running at 1.193180 MHz. Inverting this gives a timer tick
interval of 0.838 microseconds, or 838 nanoseconds.
Okay. So ... *THAT* is the count that has been available (by hook
or by crook) on every PC ever made.
Then, with the Pentium, Intel added a cool new instruction known as
RDTSC (read time stamp counter) which gives the software direct
access to the number of clock counts the processor has experienced
since its last power-on or hardware reset. With contemporary clock
rates of, for example, 850 MHz, that results in a timing period of
only 1.176 nanoseconds (billionths of a second).
The problem is ... this clock rate is 100% dependent upon the clock
speed of the system, so naturally, unlike the older, slower, but
still plenty fast 1.193180 MHz timer, this RDTSC counter
varies its speed across machines.
Well ... here's what I discovered: The Uniprocessor version of
Windows 2000 uses the same 1.193180 MHz counter for its
"QueryPerformanceCounter" API as all previous versions of Windows...
but the Multiprocessor version of Windows 2000 uses the system's own
clock rate -- as surfaced through the RDTSC processor instruction.
So ... on Multiprocessor machines, like my own 850 MHz box, the
counter is running *much* faster than on uniprocessor systems or
earlier (Win9x) platforms. Thus ... just as I was concerned, the
timing values really WERE overflowing ... because they were running
so much faster.
------------------ end quote ----------------------
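To make sure I follow the counter synthesis Gibson describes, here
is a minimal DOS-style sketch of the technique (the outportb/inportb
port I/O, the function name, and the interrupt-maintained overflow
count are my own assumptions, not anything from SDL; the IRQ0
handler that bumps the count is not shown):

    #include <dos.h>    /* outportb()/inportb() on DOS-era compilers */

    /* Incremented by a hooked IRQ0 handler (not shown) on each
     * 16-bit counter overflow, i.e. 18.2 times per second. */
    volatile unsigned long timer_overflows;

    /* Synthesize a coherent 1,193,180 Hz tick count from the overflow
     * count plus the current contents of PIT channel 0. */
    unsigned long read_pit_ticks(void)
    {
        unsigned long ovf1, ovf2;
        unsigned int count;

        do {
            ovf1 = timer_overflows;

            outportb(0x43, 0x00);          /* latch channel 0 count */
            count  = inportb(0x40);        /* low byte  */
            count |= inportb(0x40) << 8;   /* high byte */

            ovf2 = timer_overflows;
        } while (ovf1 != ovf2);  /* an overflow raced the read; retry */

        /* The channel counts DOWN from 65,536, so invert the value. */
        return ovf2 * 65536UL + (65536UL - count);
    }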
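Reading the time stamp counter from C is straightforward; here is a
minimal GCC inline-assembly sketch (the wrapper name is mine):

    #include <stdio.h>

    /* RDTSC: cycles since power-on, low half in EAX, high in EDX. */
    static unsigned long long rdtsc(void)
    {
        unsigned int lo, hi;
        __asm__ __volatile__ ("rdtsc" : "=a" (lo), "=d" (hi));
        return ((unsigned long long) hi << 32) | lo;
    }

    int main(void)
    {
        unsigned long long t0 = rdtsc();
        /* ... the code being timed ... */
        unsigned long long t1 = rdtsc();

        /* These are cycles, NOT seconds: dividing by the CPU clock
         * rate (e.g. 850,000,000 on an 850 MHz box) gives elapsed
         * time, which is exactly the machine-dependence Gibson
         * complains about. */
        printf("%llu cycles\n", t1 - t0);
        return 0;
    }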
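And that rate difference is why Win32 code is supposed to pair
QueryPerformanceCounter with QueryPerformanceFrequency rather than
assume 1,193,180 counts per second; a minimal sketch:

    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        LARGE_INTEGER freq, t0, t1;

        /* Counts per second: 1,193,180 on the uniprocessor Win2K
         * kernel, the CPU clock rate (via RDTSC) on the
         * multiprocessor kernel. */
        QueryPerformanceFrequency(&freq);

        QueryPerformanceCounter(&t0);
        Sleep(100);                       /* stand-in for real work */
        QueryPerformanceCounter(&t1);

        printf("elapsed: %.6f s\n",
               (double) (t1.QuadPart - t0.QuadPart)
               / (double) freq.QuadPart);
        return 0;
    }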
Thanks in advance.