For any given 'float f', consider what is the smallest increment/decrement
that can be represented at that value, i.e. the smallest amount that will
generate a different binary value.

Then when we want:

if( float_1 == float_2 )

I'm willing to accept:

if( float_1 <= float_2 + smallest_increment

&& float_1 >= float_2 - smallest_increment )

as an acceptable calculation.

Yes, I know that this represents a lot of extra work. But you would only

have to do it when you needed this kind of testing.

"Weijie Zhang" <wzhang@qnx.com> wrote in message

news:ai9dcd$jm2$1@nntp.qnx.com...

"Bill Caroselli (Q-TPS)" <QTPS@EarthLink.net> wrote in message

news:ai6jht$dtf$1@inn.qnx.com...

Has anyone come up with a math library that will do comparisons down to
the last bit that CAN be accurately identified?

There is no "standard" lib for that, because it is embedded in your
definition of a "good" value. For example, you can use absolute difference
or relative difference as the "cut-off" policy.

As to "go to the last bit" (how do you define?), you can know it is

dangerous

simply by noticing the fact that a school of scientific workers are taking

adantage

of its randomness ..., they call that "Monte Carlo" simulation ...

In the practical world, you may also consider a way of:

1) first abstracting your model into the integer field

(simply because you have up to 2^31 values to play with

without defining your own "big-number", and that space is huge

enough for me: on the order of 2^32 elements! may cost my

whole life to count, though :) ...)

2) then playing in the integer field instead.

Although there is a lot of C in play in numerical simulation, Fortran
sounds more serious to me for this.

Weijie

i.e.

If I really want to test:

if( f == 0.9 )

I wouldn't want to write in my code

if( f > 0.89999 && f < 0.90001 )

if the value 0.900003 could pass as equal when in fact the correct value
should be 0.89999999927