Example 11: Measuring Timing Jitter
Real-time code should execute with more deterministic timing than
normal Linux process code. How much more deterministic is it? This
example measures the variation in the actual timing of a pure periodic
task, using the Pentium's built-in Time Stamp Counter (TSC), and plots
the result.
Refer to the commented real-time
source code and the commented application
source code for the details.
Principle of Operation
- A pure periodic real-time task reads the TSC at the beginning of
each 50-microsecond cycle. The TSC is available via the Intel
instruction RDTSC [INT2B]. See tsc.c for the code used here; a sketch
of the idea appears after this list.
- The TSC is a 64-bit unsigned number that increments each clock
cycle. On a 1 GHz processor, this gives 1 nanosecond resolution. The
TSC will roll over after 2^64 cycles, which for the same 1 GHz
processor is over 584 years.
- The TSC is logged into a shared memory array with 1K entries.
- A Linux process waits for the array to fill up, and computes the
difference between successive entries, as sketched after this list.
This should nominally be 50 microseconds, but variation in execution
time due to the cache, branching in the scheduling algorithm, etc.
introduces some variation ("jitter").
- The differences are plotted to show the magnitude of the jitter,
which is typically on the order of 10 microseconds.
- More detail on jitter analysis is available in [PRO], including a software technique
that reduces jitter to below a tenth of a microsecond.
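For reference, here is a minimal sketch of the real-time side. It is
not the tutorial's actual source: the names read_tsc(),
wait_for_next_period(), and the stamps array are illustrative, the
real task would use whatever periodic-wait call its real-time API
provides, and the shared array would come from the shared memory
setup rather than a plain global.

    #include <stdint.h>

    #define NUM_STAMPS 1024          /* the "1K entries" noted above */

    /* Assumed to live in shared memory, visible to both the
       real-time task and the Linux application. */
    volatile uint64_t stamps[NUM_STAMPS];

    /* Read the 64-bit Time Stamp Counter. RDTSC returns the low
       half of the counter in EAX and the high half in EDX. */
    static inline uint64_t read_tsc(void)
    {
        uint32_t lo, hi;

        __asm__ __volatile__("rdtsc" : "=a"(lo), "=d"(hi));
        return ((uint64_t)hi << 32) | lo;
    }

    /* Placeholder for the real-time API's periodic wait call. */
    extern void wait_for_next_period(void);

    void jitter_task(void)
    {
        int i;

        for (i = 0; i < NUM_STAMPS; i++) {
            stamps[i] = read_tsc();  /* stamp the start of this cycle */
            wait_for_next_period();  /* sleep until the next 50-us cycle */
        }
    }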
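And a corresponding sketch of the Linux application side, computing
the successive differences once the array has filled. The 1 GHz clock
rate is an assumption carried over from the text; the real code would
determine the actual processor frequency.

    #include <stdio.h>
    #include <stdint.h>

    #define NUM_STAMPS 1024
    #define CPU_HZ     1.0e9         /* assumed 1 GHz clock, per the text */
    #define NOMINAL_US 50.0          /* nominal task period, microseconds */

    /* Print each period's deviation from nominal, in microseconds,
       as two columns suitable for plotting. */
    void print_jitter(const volatile uint64_t stamps[])
    {
        int i;

        for (i = 1; i < NUM_STAMPS; i++) {
            double period_us =
                (double) (stamps[i] - stamps[i - 1]) / CPU_HZ * 1.0e6;
            printf("%d %f\n", i, period_us - NOMINAL_US);
        }
    }

Since the timestamps are unsigned, the subtraction gives the correct
difference even if the TSC happened to roll over between two samples,
though as noted above a rollover is not a practical concern.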
Running the Demo
To run the demo, change to the 'ex11_jitter' subdirectory of the
top-level tutorial directory, and run the 'run' script by typing
./run
Alternatively, change to the top-level tutorial directory and run the
'runall' script there by typing
./runall
and selecting the "Jitter Testing" button.
You'll see a plot window showing the magnitude of the jitter.
See the Real-Time Task Code
See the Linux Application Code
Next: Example 12, Floating Point in Real Time Tasks
Back: Example 10, Determining Stack Size