for any given 'float f', consider the smallest
increment/decrement that can be represented at that value, i.e. the
smallest amount that will generate a different binary value.
Then when we want:
if( float_1 == float_2 )
I'm willing to accept:
if( float_1 - smallest_increment <= float_2
 && float_1 + smallest_increment >= float_2 )
as an acceptable calculation.
Yes, I know that this represents a lot of extra work. But you would only
have to do it when you needed this kind of testing.
"Weijie Zhang" <email@example.com> wrote in message
"Bill Caroselli (Q-TPS)" <QTPS@EarthLink.net> wrote in message
Has anyone come up with a math library that will do comparisons down to
the last bit that CAN be accurately identified?
There is no "standard" lib for that, because the test is "embedded" in your
definition of a "good" value. For example,
you can use the absolute difference or the relative difference as the "cut-off".
As to "going to the last bit" (how do you define it?), you can see it is not
simple by noticing the fact that a whole school of scientific workers takes
advantage of its randomness ..., they call that "Monte Carlo" simulation ...
In the practical world, you may also consider a way of
1) first abstracting your model into the integer field
(simply because you have up to 2^31 values to play with
without defining your own "big number",
and that space is huge enough for me (on the order of 2^32 elements! it may
cost a whole life to count them, though :) ...)
2) then playing in the integer field instead.
Although there is a lot of C in play in numerical simulation, Fortran
sounds to me like the more natural fit for this kind of work.
If I really want to test:
if( f == 0.9 )
I wouldn't want to write in my code
if( f > 0.89999 && f < 0.90001 )
if a value of 0.900003 could pass as equal when in fact the correct
value is something else.