The QNX Community Portal

Timer quantization error

Read-only archive of qnx.rtos (Writing resources managers, and general discussion around the QNX Neutrino RTOS) at inn.qnx.com

Timer quantization error

Postby Mustafa Yavas » Sat Aug 18, 2007 5:42 pm

Hi,

We are trying to run a task at 100 Hz, so we wait for a 10 ms interval
between the start times of consecutive frames. But every 6.53 seconds one
frame shifts by 1 ms, so that it starts 11 ms after the previous frame's
start time.

We have used a SIGEV_SIGNAL_CODE signal and connected it to a timer. We
noticed that the source of the problem is timer quantization error. When we
try to set the clock period to 1 ms, it is actually set to 999.847
microseconds. I have tried setting this value to 500, 200, or 100
microseconds, but the actual tick rate was always a bit different from what
I had set. Is there any way to handle this problem?

Thanks,

Mustafa Yavas
Mustafa Yavas
 

Re: Timer quantization error

Postby evanh » Mon Aug 20, 2007 9:25 am

Mustafa Yavas wrote:
We have used a SIGEV_SIGNAL_CODE signal and connected it to a timer. We
noticed that the source of the problem is timer quantization error. When we
try to set the clock period to 1 ms, it is actually set to 999.847
microseconds. I have tried setting this value to 500, 200, or 100
microseconds, but the actual tick rate was always a bit different from what
I had set. Is there any way to handle this problem?



Don't use the POSIX timers. You have to hook the system tick and count the ticks yourself, then generate your own event.

It should be safe to use InterruptAttachEvent() with this IRQ, as it won't be shared with a device driver.
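
A minimal sketch of that approach is below (x86 assumed, where the OS timer is IRQ 0; the SIGEV_INTR/InterruptWait() structure and the 10-tick divisor for 100 Hz are assumptions for illustration, not code from this thread):

/* Hook the system tick and count ticks to derive a 100 Hz rate. */
#include <stdint.h>
#include <stdio.h>
#include <sys/neutrino.h>
#include <sys/siginfo.h>

int main(void)
{
    struct sigevent ev;
    uint64_t ticks = 0;
    int id;

    /* I/O privileges are required before attaching to an interrupt. */
    if (ThreadCtl(_NTO_TCTL_IO, 0) == -1) {
        perror("ThreadCtl");
        return 1;
    }

    /* Deliver a SIGEV_INTR event each time the system timer interrupt fires. */
    SIGEV_INTR_INIT(&ev);
    id = InterruptAttachEvent(0, &ev, _NTO_INTR_FLAGS_TRK_MSK);
    if (id == -1) {
        perror("InterruptAttachEvent");
        return 1;
    }

    for (;;) {
        InterruptWait(0, NULL);        /* block until the next tick         */
        InterruptUnmask(0, id);        /* re-enable the auto-masked IRQ     */
        if (++ticks % 10 == 0) {       /* 10 ticks of ~1 ms each -> 100 Hz  */
            /* run the 100 Hz frame here */
        }
    }
}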


Evan
evanh
QNX Master
 
Posts: 737
Joined: Sat Feb 01, 2003 8:04 am

Re: Timer quantization error

Postby evanh » Mon Aug 20, 2007 9:32 am

Evan Hillas wrote:
Don't use the POSIX timers. You have to hook the system tick and count the ticks yourself, then generate your own event.

It should be safe to use InterruptAttachEvent() with this IRQ, as it won't be shared with a device driver.


POSIX timers are not designed to give a regular interval; they are designed to wait for an amount of time. There is no guarantee that any one period will be exact.


Evan
evanh
QNX Master
 
Posts: 737
Joined: Sat Feb 01, 2003 8:04 am

Re: Timer quantization error

Postby Yuriy Synytskyy » Mon Aug 20, 2007 2:13 pm

Hello Mustafa,

Please check this link:


http://www.qnx.com/developers/articles/ ... 826_2.html



Regards,

Yuriy



"Mustafa Yavas" <mustafayavas@gmail.com> wrote in message
news:fa7b1s$3fb$1@inn.qnx.com...
Hi,

We are trying to run a task at 100 Hz, so we wait for a 10 ms interval
between the start times of consecutive frames. But every 6.53 seconds one
frame shifts by 1 ms, so that it starts 11 ms after the previous frame's
start time.

We have used a SIGEV_SIGNAL_CODE signal and connected it to a timer. We
noticed that the source of the problem is timer quantization error. When we
try to set the clock period to 1 ms, it is actually set to 999.847
microseconds. I have tried setting this value to 500, 200, or 100
microseconds, but the actual tick rate was always a bit different from what
I had set. Is there any way to handle this problem?

Thanks,

Mustafa Yavas



Yuriy Synytskyy
 

Re: Timer quantization error

Postby David Gibbs » Tue Aug 21, 2007 5:05 pm

Mustafa Yavas <mustafayavas@gmail.com> wrote:
Hi,

We are trying to run a task at 100 Hz, so we wait for a 10 ms interval
between the start times of consecutive frames. But every 6.53 seconds one
frame shifts by 1 ms, so that it starts 11 ms after the previous frame's
start time.

We have used a SIGEV_SIGNAL_CODE signal and connected it to a timer. We
noticed that the source of the problem is timer quantization error. When we
try to set the clock period to 1 ms, it is actually set to 999.847
microseconds. I have tried setting this value to 500, 200, or 100
microseconds, but the actual tick rate was always a bit different from what
I had set. Is there any way to handle this problem?

Try using CLOCK_MONOTONIC, and using a delay that is some multiple of
the actual ClockPeriod() that the system is using.

That is, not a 10ms interval, but a 10*.999847 ms interval.
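
A minimal sketch of that suggestion (the signal choice, the sigwait() loop, and the hard-coded 999847 ns tick are assumptions for illustration; a real application would query ClockPeriod() first, as discussed later in the thread):

/* A repeating CLOCK_MONOTONIC timer whose interval is an exact multiple
 * of the system clock period (999847 ns assumed here for x86). */
#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <time.h>

int main(void)
{
    struct sigevent ev;
    struct itimerspec its;
    timer_t id;
    sigset_t set;
    int sig;

    /* Block SIGUSR1 so it can be collected synchronously with sigwait(). */
    sigemptyset(&set);
    sigaddset(&set, SIGUSR1);
    sigprocmask(SIG_BLOCK, &set, NULL);

    memset(&ev, 0, sizeof(ev));
    ev.sigev_notify = SIGEV_SIGNAL;
    ev.sigev_signo = SIGUSR1;
    if (timer_create(CLOCK_MONOTONIC, &ev, &id) == -1) {
        perror("timer_create");
        return 1;
    }

    /* 10 ticks of 999847 ns each: the closest whole number of ticks to the
     * desired 10 ms frame time. */
    its.it_value.tv_sec = 0;
    its.it_value.tv_nsec = 10 * 999847;
    its.it_interval = its.it_value;
    timer_settime(id, 0, &its, NULL);

    for (;;) {
        sigwait(&set, &sig);
        /* run the 100 Hz frame here */
    }
}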

-David
--
David Gibbs
QNX Training Services
dagibbs@qnx.com
David Gibbs
 

Re: Timer quantization error

Postby evanh » Wed Aug 22, 2007 1:04 am

David Gibbs wrote:
Try using CLOCK_MONOTONIC, and using a delay that is some multiple of
the actual ClockPeriod() that the system is using.

That is, not a 10ms interval, but a 10*.999847 ms interval.


I wouldn't trust that.
evanh
QNX Master
 
Posts: 737
Joined: Sat Feb 01, 2003 8:04 am

RE: Re: Timer quantization error

Postby maschoen » Wed Aug 22, 2007 2:57 am

Hey David,

The last documentation I read said CLOCK_MONOTONIC wasn't implemented yet. I take it from your post it is now?

Mitchell
maschoen
QNX Master
 
Posts: 2640
Joined: Wed Jun 25, 2003 5:18 pm

Re: Timer quantization error

Postby Armin » Wed Aug 22, 2007 8:30 am

Mustafa,

try using the timer interrupt on IRQ 8 ... that interrupt is independent
of the OS.

Regards

--Armin

PS: Let me know if you need the code
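
For reference, a rough sketch of that idea (x86 assumed, where IRQ 8 is the RTC periodic interrupt; note that the RTC only offers power-of-two rates, so 128 Hz is used here purely for illustration rather than the 100 Hz this thread needs):

/* Drive a periodic task from the RTC interrupt (IRQ 8) on x86,
 * independently of the OS tick. Rate selector 9 gives 128 Hz. */
#include <stdio.h>
#include <hw/inout.h>
#include <sys/neutrino.h>
#include <sys/siginfo.h>

#define RTC_INDEX 0x70
#define RTC_DATA  0x71

int main(void)
{
    struct sigevent ev;
    int id;

    /* I/O and interrupt privileges. */
    if (ThreadCtl(_NTO_TCTL_IO, 0) == -1) {
        perror("ThreadCtl");
        return 1;
    }

    /* Register A: keep the upper nibble, set the rate selector to 9 (128 Hz).
     * Register B: set bit 6 to enable the periodic interrupt. */
    out8(RTC_INDEX, 0x0A);
    out8(RTC_DATA, (in8(RTC_DATA) & 0xF0) | 0x09);
    out8(RTC_INDEX, 0x0B);
    out8(RTC_DATA, in8(RTC_DATA) | 0x40);

    SIGEV_INTR_INIT(&ev);
    id = InterruptAttachEvent(8, &ev, _NTO_INTR_FLAGS_TRK_MSK);

    for (;;) {
        InterruptWait(0, NULL);
        out8(RTC_INDEX, 0x0C);      /* reading register C re-arms the RTC IRQ */
        (void)in8(RTC_DATA);
        InterruptUnmask(8, id);
        /* periodic work here */
    }
}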


Mustafa Yavas wrote:
Hi,

We are trying to run a task at 100 Hz, so we wait for a 10 ms interval
between the start times of consecutive frames. But every 6.53 seconds one
frame shifts by 1 ms, so that it starts 11 ms after the previous frame's
start time.

We have used a SIGEV_SIGNAL_CODE signal and connected it to a timer. We
noticed that the source of the problem is timer quantization error. When we
try to set the clock period to 1 ms, it is actually set to 999.847
microseconds. I have tried setting this value to 500, 200, or 100
microseconds, but the actual tick rate was always a bit different from what
I had set. Is there any way to handle this problem?

Thanks,

Mustafa Yavas



Armin
 

Re: Timer quantization error

Postby Steve Reid » Wed Aug 22, 2007 1:19 pm

maschoen <maschoen@pobox-dot-com.no-spam.invalid> wrote:
Hey David,

The last documentation I read said CLOCK_MONOTONIC wasn't
implemented yet. I take it from your post it is now?


Maybe you should read the docs more than once every six years. :-)

------------------------------------------
Steve Reid stever@qnx.com
Technical Editor
QNX Software Systems
------------------------------------------
Steve Reid
 

Re: Timer quantization error

Postby David Gibbs » Wed Aug 22, 2007 3:39 pm

Evan Hillas <evanh@clear.net.nz> wrote:
David Gibbs wrote:
Try using CLOCK_MONOTONIC, and using a delay that is some multiple of
the actual ClockPeriod() that the system is using.

That is, not a 10ms interval, but a 10*.999847 ms interval.


I wouldn't trust that.

Why would you not trust that?

Ok... missed interrupts and the like can still screw with it, and it may not
run exactly on time due to quartz crystal irregularities and other factors,
but it may still be the best choice.

-David
--
David Gibbs
QNX Training Services
dagibbs@qnx.com
David Gibbs
 

Re: Timer quantization error

Postby David Gibbs » Wed Aug 22, 2007 3:40 pm

maschoen <maschoen@pobox-dot-com.no-spam.invalid> wrote:
Hey David,

The last documentation I read said CLOCK_MONOTONIC wasn't
implemented yet. I take it from your post it is now?

It doesn't say that any more. (6.3.0 SP2 documentation.)

-David
--
David Gibbs
QNX Training Services
dagibbs@qnx.com
David Gibbs
 

Re: Timer quantization error

Postby evanh » Thu Aug 23, 2007 12:16 pm

David Gibbs wrote:
Why would you not trust that?


Two reasons:
- It's simply not a guaranteed method of achieving a sampling rate, and by that I mean one that adds no jitter. In fact, to the contrary, the articles on "tick-tock" make it clear that jitter is added. By the way, sampling is the usual reason for needing a perfectly regular trigger. On the other side of this coin is the question of why such a design isn't using some hardware assist to perform the sampling into or out of a hardware buffer.

- I may be out of date now but, to back up the above point, the results from clock_getres() may not exactly match the OS's calculated interval of time per system tick, and therefore may not map one-to-one with the IRQ even when the application tries to.
evanh
QNX Master
 
Posts: 737
Joined: Sat Feb 01, 2003 8:04 am

Re: Timer quantization error

Postby David Gibbs » Thu Aug 23, 2007 6:23 pm

Evan Hillas <evanh@clear.net.nz> wrote:
David Gibbs wrote:
Why would you not trust that?


Two reasons:
- It's simply not a guaranteed method of achieving a sampling rate,
and by that I mean one that adds no jitter. In fact, to the contrary,
the articles on "tick-tock" make it clear that jitter is added. By the way,
sampling is the usual reason for needing a perfectly regular trigger.
On the other side of this coin is the question of why such a design
isn't using some hardware assist to perform the sampling into or out of a
hardware buffer.

True, it isn't a guaranteed method -- but, really, nothing is. For
more precise work, you do need an external hardware timer; the better
the quality of the hardware timer, the better your precision.

Now, I'm not sure what you mean by "added jitter", especially as mentioned
in the tick-tock articles.

Clearly if you use delay(), nanosleep(), sleep(), or whatever, there is
the need to start from the next tick.

But, if you use a repeating timer that is a multiple of the period
of the hardware clock (e.g. a multiple of 999847 ns on an x86), then
nothing in those articles suggests any further jitter is added.

If using such a repeating timer off CLOCK_REALTIME, there is one
possible source of jitter which is not described in either of the
tick-tock articles, and that is any use of ClockAdjust(). Using
CLOCK_MONOTONIC should avoid the jitter from ClockAdjust() as well.

- I may be out of date now but, to back up the above point, the results
from clock_getres() may not exactly match the OS's calculated interval
of time per system tick, and therefore may not map one-to-one with the IRQ
even when the application tries to.

I recommended ClockPeriod(), rather than clock_getres(), though I expect
that clock_getres() actually just calls ClockPeriod(). From what I have
seen, ClockPeriod() gives the actual value the OS is using, the actual
calculated interval.

e.g. if I set the clock period to 1ms, and then do a ClockPeriod() to
query it, I will see 999847 ns as the result.
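
A short sketch of that query (error handling and output format are incidental; ClockPeriod() with a NULL new value just reads the current period):

/* Query the actual clock period the OS is using; on x86 with a nominal
 * 1 ms tick this prints 999847 ns. */
#include <stdio.h>
#include <time.h>
#include <sys/neutrino.h>

int main(void)
{
    struct _clockperiod cp;

    if (ClockPeriod(CLOCK_REALTIME, NULL, &cp, 0) == -1) {
        perror("ClockPeriod");
        return 1;
    }
    printf("clock period: %lu ns\n", (unsigned long)cp.nsec);
    return 0;
}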

-David
--
David Gibbs
QNX Training Services
dagibbs@qnx.com
David Gibbs
 

Re: Timer quantization error

Postby evanh » Fri Aug 24, 2007 11:27 am

David Gibbs wrote:
True, it isn't a guaranteed method -- but, really, nothing is. For
more precise work, you do need an external hardware timer; the better
the quality of the hardware timer, the better your precision.


What I mean is that it's not clear the stored resolution is the only factor in the calculation for each clock tick, so we are not guaranteed that using this resolution figure will produce a flawless metronome, i.e. absolutely no accumulating error and therefore no skipped or dropped ticks in event generation.


Now, I'm not sure what you mean by "added jitter", especially as mentioned
in the tick-tock articles.


The skipping/dropping is what I mean by the intentional adding of jitter, from the app's point of view. The tick-tock article refers to this as a beat, but from the point of view of an unwanted signal it's also jitter. I guess I could also have said noise.


I recommended ClockPeriod(), rather than clock_getres(), though I expect
that clock_getres() actually just calls ClockPeriod(). From what I have
seen, ClockPeriod() gives the actual value the OS is using, the actual
calculated interval.

e.g. if I set the clock period to 1ms, and then do a ClockPeriod() to
query it, I will see 999847 ns as the result.


Yup, and any future enhancements have a chance of breaking an app that relies on an exact multiple of the OS's calculation. You are right about using ClockPeriod() - any use of the extended data structure will show up there, but not so likely in clock_getres().


Evan
evanh
QNX Master
 
Posts: 737
Joined: Sat Feb 01, 2003 8:04 am

Re: Timer quantization error

Postby David Gibbs » Fri Aug 24, 2007 3:50 pm

Evan Hillas <evanh@clear.net.nz> wrote:
David Gibbs wrote:
True, it isn't a guaranteed method -- but, really, nothing is. For
more precise work, you do need an external hardware timer; the better
the quality of the hardware timer, the better your precision.


What I mean is that it's not clear the stored resolution is the only
factor in the calculation for each clock tick, so we are not
guaranteed that using this resolution figure will produce a flawless
metronome, i.e. absolutely no accumulating error and therefore no
skipped or dropped ticks in event generation.

Unless you miss hardware interrupts (due to extended periods of
interrupts being disabled, masked, or some hardware errors), as long
as you use a multiple of the timer period and use CLOCK_MONOTONIC,
there should be no further accumulated error -- there should be no
other factor in the calculation for a clock tick.

QNX Neutrino stores time internally as 2 64-bit nanosecond values:
a) nanoseconds since boot +
b) boot time in nanoseconds since Jan 1st 1970.

On each timer interrupt, one clock period (as reported by ClockPeriod())
is added to the nanoseconds since boot. If there is a currently active
ClockAdjust(), then the adjustment is added to the boot time as well.

Current time is a+b.

CLOCK_REALTIME timers fire based on a+b.
CLOCK_MONOTONIC timers fire based on a.
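
As a rough illustration of the accounting described above (all names here are invented for the sketch; this is not the kernel's actual code):

/* Illustrative only: the two 64-bit nanosecond counters described above
 * and how the two clock bases derive from them. */
#include <stdint.h>

static uint64_t nsec_since_boot;  /* (a) advances by one clock period per tick */
static uint64_t boot_time_nsec;   /* (b) boot time since Jan 1st 1970, in ns;
                                         ClockAdjust() nudges this one         */

/* Conceptually, what happens on every timer interrupt. */
static void on_timer_interrupt(uint64_t clock_period_ns, int64_t adjust_ns)
{
    nsec_since_boot += clock_period_ns;  /* e.g. 999847 on x86                 */
    boot_time_nsec  += adjust_ns;        /* 0 unless a ClockAdjust() is active */
}

static uint64_t monotonic_now_ns(void) { return nsec_since_boot; }
static uint64_t realtime_now_ns(void)  { return nsec_since_boot + boot_time_nsec; }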

Now, I'm not sure what you mean by "added jitter", especially as mentioned
in the tick-tock articles.


The skipping/dropping is what I mean by the intentional adding of jitter,
from the app's point of view. The tick-tock article refers to this as
a beat, but from the point of view of an unwanted signal it's also jitter.
I guess I could also have said noise.

Yes, but I said to use an exact multiple of the ClockPeriod(), so you won't
get this noise/jitter/beat.

I recommended ClockPeriod(), rather than clock_getres(), though I expect
that clock_getres() actually just calls ClockPeriod(). From what I have
seen, ClockPeriod() gives the actual value the OS is using, the actual
calculated interval.

e.g. if I set the clock period to 1ms, and then do a ClockPeriod() to
query it, I will see 999847 ns as the result.


Yup, and any future enhancements have a chance of breaking an app that
relies on an exact multiple of the OS's calculation. You are
right about using ClockPeriod() - any use of the extended data structure
will show up there, but not so likely in clock_getres().

You have to be a bit smart about figuring out your exact multiple. Query
the fundamental clock period, then calculate the closest match between
integer multiples of that value and what your actual wanted period is.

You may even need to choose a different clock period as part of your
system design, to give closer/better/more accurate results.

But, yeah, if you naively hard-code it as 15* ClockPeriod(), that could
definitely cause problems.
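
A small sketch of that calculation (the 10 ms target is the frame time from this thread; the rounding logic is just one reasonable choice):

/* Query the real clock period and pick the integer multiple of it that
 * is closest to the wanted frame time. */
#include <stdint.h>
#include <stdio.h>
#include <time.h>
#include <sys/neutrino.h>

int main(void)
{
    const uint64_t wanted_ns = 10000000;   /* 10 ms frame time */
    struct _clockperiod cp;
    uint64_t n, interval_ns;

    if (ClockPeriod(CLOCK_REALTIME, NULL, &cp, 0) == -1) {
        perror("ClockPeriod");
        return 1;
    }

    /* Round to the nearest whole number of ticks, with a minimum of one. */
    n = (wanted_ns + cp.nsec / 2) / cp.nsec;
    if (n == 0) {
        n = 1;
    }
    interval_ns = n * cp.nsec;

    /* With a 999847 ns tick this gives 10 ticks = 9998470 ns per frame. */
    printf("tick = %lu ns, frame = %llu ticks = %llu ns\n",
           (unsigned long)cp.nsec, (unsigned long long)n,
           (unsigned long long)interval_ns);
    return 0;
}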

-David
--
David Gibbs
QNX Training Services
dagibbs@qnx.com
David Gibbs
 
