Reducing Error in Self-Dependent Recurrent Functions

Discussion in 'Physics & Math' started by CheskiChips, Oct 26, 2009.

  1. CheskiChips Banned

    Messages:
    3,538
    This is partially a computer science question, and it may belong there.

    There's probably a proper term for the kind of function I am referring to.

    \(x_{n+1} = f(x_n)\) is a simple example. I think these can also be written as differential equations, but for my case I'm opting not to.

    In computer modeling, one of the biggest flaws is often the rounding error that compounds over numerous iterations of the function.

    For instance, #.####### computed from \(\frac{A}{B}\) is accurate only to \(\pm 0.00000001\).

    Here's my proposal: store the number as an array of integer data,

    Code:
        int[] number = {A, B, ...};
    An example format would be:

    \(\mathrm{FORMAT} \to x_n = \left(\frac{A}{B}\right)^{\frac{C}{D}} \left(\exp\left[\left(\frac{E}{F}\right)^{\frac{G}{H}} + I\right] + \ln\left(\left(\frac{K}{L}\right)^{\frac{M}{N} + O} + P\right)\right) + J\)

    This is chosen such that \(f(x_n)\) can always be written in the FORMAT, where every variable in the int[] number array is an integer.

    Therefore the error will not compound: it is isolated to the single evaluation, and the value handed to the next instance of the function is exact.
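    As a minimal sketch of the idea (assuming Java; the Rational class and the sample map are mine, and I stick to the rational part of the FORMAT, since exp and ln of rationals are not generally rational):

    Code:
        import java.math.BigInteger;

        // The number is stored as a pair of integers {A, B} meaning A/B,
        // and all arithmetic stays exact, so no rounding error compounds
        // between iterations. The cost is that A and B can grow quickly.
        final class Rational {
            final BigInteger num, den;

            Rational(long a, long b) {
                this(BigInteger.valueOf(a), BigInteger.valueOf(b));
            }

            Rational(BigInteger a, BigInteger b) {
                BigInteger g = a.gcd(b);   // keep A/B in lowest terms
                num = a.divide(g);
                den = b.divide(g);
            }

            Rational mul(Rational o) {
                return new Rational(num.multiply(o.num), den.multiply(o.den));
            }

            Rational add(Rational o) {
                return new Rational(num.multiply(o.den).add(o.num.multiply(den)),
                                    den.multiply(o.den));
            }
        }

        public class ExactIteration {
            public static void main(String[] args) {
                // x_{n+1} = (3/4) x_n + 1/10, iterated with zero rounding error
                Rational a = new Rational(3, 4), b = new Rational(1, 10);
                Rational x = new Rational(1, 2);
                for (int n = 0; n < 10; n++) {
                    x = a.mul(x).add(b);
                    System.out.println(x.num + "/" + x.den);
                }
            }
        }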



    Does anyone know why this would not work, or whether it's been implemented before?
     
  2. rpenner Fully Wired Valued Senior Member

    Messages:
    4,833
    Have you looked into the behavior of the iterated logistic map?

    \(x_{n+1} = \frac{37}{10} x_n ( 1 - x_n )\)

    The Lyapunov exponent is about 0.358, which means any uncertainty in \(x_0\) doubles about every two iterations. That means for an IEEE float you have about 50 iterations, and for an IEEE double about 110 iterations, before the predictability of the outcome goes to zero. This uncertainty is fundamental to the input specification and the nature of the problem.
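    You can watch this happen directly (a minimal sketch; the perturbation of one part in \(10^{15}\) is roughly the last bit of a double):

    Code:
        // Two runs of x_{n+1} = 3.7 x (1 - x) whose starting points differ
        // by 1e-15. The gap doubles roughly every two iterations until it
        // is of order 1, at which point the trajectories are unrelated.
        public class LogisticDivergence {
            public static void main(String[] args) {
                double a = 0.5, b = 0.5 + 1e-15;
                for (int n = 0; n <= 120; n++) {
                    if (n % 20 == 0)
                        System.out.printf("n=%3d  |a-b| = %.3e%n", n, Math.abs(a - b));
                    a = 3.7 * a * (1 - a);
                    b = 3.7 * b * (1 - b);
                }
            }
        }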

    One thing that has been done to reduce error is to represent \(x = A + B\), where A and B are both IEEE doubles chosen so that A holds the most significant bits and B is left over to hold any extra bits desired. This approach can be (and has been) used to develop a math library with over 100 significant bits.
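    For illustration, the basic building block might look like this (a minimal sketch of Knuth's error-free TwoSum; real double-double libraries build their +, -, *, / on primitives like it):

    Code:
        // Splits the exact sum of two doubles into a rounded high part
        // and the exact low-order bits that the rounding threw away.
        public class TwoSum {
            public static double[] twoSum(double a, double b) {
                double s  = a + b;                 // rounded sum (the "A" part)
                double a1 = s - b;
                double b1 = s - a1;
                double err = (a - a1) + (b - b1);  // exact rounding error (the "B" part)
                return new double[] { s, err };
            }

            public static void main(String[] args) {
                double[] r = twoSum(1.0, 1e-17);   // 1e-17 is lost in a plain double add
                System.out.println(r[0] + " + " + r[1]);  // prints 1.0 + 1.0E-17
            }
        }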

    Other math libraries allow arbitrary amounts of precision. I myself have used PHP to calculate exact integers with thousands of digits.
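    (For comparison, the same sort of thing in Java would use java.math.BigInteger; this factorial example is mine:)

    Code:
        import java.math.BigInteger;

        // An exact integer with thousands of digits: 1000! has 2568 of them.
        public class BigFactorial {
            public static void main(String[] args) {
                BigInteger f = BigInteger.ONE;
                for (int i = 2; i <= 1000; i++)
                    f = f.multiply(BigInteger.valueOf(i));
                System.out.println(f.toString().length() + " digits");
            }
        }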

    Another technique is to quantify error -- One example of this is interval arithmetic.
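    A minimal sketch of the idea (directed rounding is omitted for brevity; a real interval library would round lower bounds down and upper bounds up):

    Code:
        // Track every quantity as [lo, hi]; the true value is guaranteed
        // to stay inside the interval through every operation, and the
        // width of the interval measures the accumulated uncertainty.
        public class Interval {
            final double lo, hi;

            Interval(double lo, double hi) { this.lo = lo; this.hi = hi; }

            Interval mul(Interval o) {
                double a = lo * o.lo, b = lo * o.hi, c = hi * o.lo, d = hi * o.hi;
                return new Interval(Math.min(Math.min(a, b), Math.min(c, d)),
                                    Math.max(Math.max(a, b), Math.max(c, d)));
            }

            public String toString() { return "[" + lo + ", " + hi + "]"; }

            public static void main(String[] args) {
                Interval x = new Interval(0.4999, 0.5001);
                Interval r = new Interval(3.7, 3.7);
                for (int n = 0; n < 5; n++) {
                    Interval oneMinusX = new Interval(1 - x.hi, 1 - x.lo);
                    x = r.mul(x).mul(oneMinusX);   // logistic map, interval-wise
                    System.out.println(x);         // watch the interval widen
                }
            }
        }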
     
  3. CheskiChips Banned

    Messages:
    3,538
    I am unfamiliar with the iterated logistic map. Can you suggest some good reading?

    A series of doubles would be universally applicable, whereas an array of integers would be limited to the specific functions it's programmed to fit. Other than this, are there any advantages?

    As for interval arithmetic, as I understand it, it's only useful if you know the nature of the divergence, and it would fail in examples such as this one, which I think is referred to as the Laplacian (on second thought, it might be a Jacobian):

    Code:
       i1  i2  i3  i4  i5  i6
    j1   ...................
    j2   ...................
    j3   ...................
    j4   ...................
    j5   ...................
    j6   .............    i6j6
    
    Code:
       i1 i2 i3 i4 i5 i6
    j1 1  2  3  4  5  6
    j2 20 21 22 23 24 7
    j3 19 32 33 34 25 8
    j4 18 31 36 35 26 9
    j5 17 30 29 28 27 10
    j6 16 15 14 13 12 11
    
    Okay, the second table is my better way to organize and explain the first pattern: it gives the sequential order of the sweep. The first table was my crude attempt at describing the same pattern.

    \(\nabla_{i_n j_n} = \sqrt{\frac{4(i_n j_n)^2 + (i_{n-1} j_n)^2 + (i_{n+1} j_n)^2 + (i_n j_{n+1})^2 + (i_n j_{n-1})^2}{4}} \to \nabla_{i_{n+1} j_n}\)

    In this pattern the sweep runs, for example, clockwise working inwards and then clockwise working outwards, repeated until \(\nabla_{i_x j_y} - \nabla_{i_{x-1} j_{y-1}} < 0.01\) for each variation of i and j, where x and y are instances.

    In that case it resembles interval error reduction; however, the algorithm is so expensive to compute that a certain part of the convergence to the final solution is due solely to the compounded error.
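    To make that concrete, here is a minimal sketch of this kind of sweep (my reading; I've substituted a plain four-neighbour average, i.e. Gauss-Seidel relaxation of the Laplacian, for the exact update formula above, since the point is the in-place spiral sweep and the < .01 stopping rule; the grid size and boundary values are arbitrary):

    Code:
        // In-place relaxation swept in rings, clockwise working inwards,
        // repeated until no cell changes by 0.01 or more in a full sweep.
        public class SpiralRelax {
            static final int N = 6;

            public static void main(String[] args) {
                double[][] g = new double[N][N];
                for (int i = 0; i < N; i++) g[0][i] = 1.0;  // fixed top edge

                double change;
                int sweeps = 0;
                do {
                    change = 0;
                    for (int ring = 1; ring < N / 2; ring++) {  // boundary ring 0 stays fixed
                        int lo = ring, hi = N - 1 - ring;
                        for (int i = lo; i <= hi; i++) change = Math.max(change, relax(g, lo, i));
                        for (int j = lo + 1; j <= hi; j++) change = Math.max(change, relax(g, j, hi));
                        for (int i = hi - 1; i >= lo; i--) change = Math.max(change, relax(g, hi, i));
                        for (int j = hi - 1; j > lo; j--) change = Math.max(change, relax(g, j, lo));
                    }
                    sweeps++;
                } while (change >= 0.01);
                System.out.println("converged after " + sweeps + " sweeps");
            }

            // Replace cell (j, i) with the average of its four neighbours,
            // returning how much it moved; updates are in place, so later
            // cells in the sweep already see the new values.
            static double relax(double[][] g, int j, int i) {
                double v = (g[j-1][i] + g[j+1][i] + g[j][i-1] + g[j][i+1]) / 4;
                double d = Math.abs(v - g[j][i]);
                g[j][i] = v;
                return d;
            }
        }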
     
    Last edited: Oct 26, 2009