3-Body Figure 8 orbit questions.

D H: You seem to be knowledgeable, and are surely at least partially correct in recommending Runge-Kutta methods. They would not be advocated in so many texts and articles if they did not have advantages over first-order methods.

My recent experiences have been as follows.
  • Precision was poor using first-order numerical integration for analysis of geodesics on a torus. I had analytical expressions for the second derivatives, and did not consider Runge-Kutta. The second-order integration resulted in excellent precision.

    I verified precision by integrating backwards to initial points, which I assume is a good indicator of precision. Do you consider this a valid assumption?

    BTW: I had two differential equations defining the torus geodesics: one for latitude and one for longitude. Would Runge-Kutta be better than a second-order method using known second derivatives? (A sketch of the kind of second-order step I mean appears after this list.)

  • First-order methods seemed to work very well for gravitational simulation. For my software, I simulated our solar system for many revolutions of the inner planets to verify precision, which seemed good.
I hope to revise my gravitational software to provide a Runge-Kutta option, allowing me to compare precision with the simple first-order method.
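To be concrete about the second-order method mentioned above, the step looks roughly like this (a Python sketch with a placeholder right-hand side; my actual torus code is Visual Basic and uses the geodesic equations for latitude and longitude):

[code]
import numpy as np

def taylor2_step(y, v, h, accel):
    # Second-order Taylor step for y'' = accel(y, v), using the
    # analytically known second derivative directly:
    #   y_new = y + h*v + (h^2/2)*a
    #   v_new = v + h*a
    # The position update is second-order accurate in h; the
    # velocity update here is the plain first-order one.
    a = accel(y, v)
    return y + h * v + 0.5 * h * h * a, v + h * a

# Placeholder right-hand side (the real code couples the second
# derivatives of latitude and longitude along the geodesic).
def accel(y, v):
    return -y

y, v, h = np.array([1.0, 0.0]), np.array([0.0, 1.0]), 0.01
for _ in range(1000):
    y, v = taylor2_step(y, v, h, accel)
[/code]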

I am not convinced that anything more sophisticated than second-order (if the second derivative is known) or fourth-order Runge-Kutta is required for sufficient precision. I believe that the sophisticated methods were developed prior to ultra-fast systems, which allow working with extremely small increments of the independent variable. With very small increments of the independent variable, even first-order methods can be reasonably accurate for some types of equations.

Except for extreme initial conditions, I suspect that gravitational systems tend to have orbits with small curvatures, allowing first-order methods to obtain excellent precision if small time slices are used.

I cannot believe that your comments about Cartesian coordinates are valid. While cylindrical & spherical coordinates are more convenient for describing gravitational systems, the actual calculations should use Cartesian coordinates, because they make it very easy to calculate distances between objects and to add the force vectors. The distance & vector-addition calculations are complex in anything but Cartesian coordinates.

My gravitational simulation software allowed cylindrical, spherical, or Cartesian coordinates for describing initial conditions and displaying results, but did all the internal computations using Cartesian coordinates.
 
Dinosaur said:
Except for extreme initial conditions, I suspect that gravitational systems tend to have orbits with small curvatures, allowing first-order methods to obtain excellent precision if small time slices are used.
Generally the problem with the Euler method has nothing to do with whether or not the curvature is large; after all, you can always find a step size for which any given curvature is small. The problem is when the curvature is always in the same direction: then the errors accumulate without bound regardless of the step size.

-Dale
 
Dinosaur said:
I am not convinced that anything more sophisticated than second-order (if the second derivative is known) or fourth-order Runge-Kutta is required for sufficient precision. ... I suspect that gravitational systems tend to have orbits with small curvatures, allowing first-order methods to obtain excellent precision if small time slices are used.

Using first-order methods to propagate state represented as Cartesian coordinates is an always-lose proposition. DaleSpam's response is spot-on:
DaleSpam said:
Generally the problem with the Euler method has nothing to do with whether or not the curvature is large; after all, you can always find a step size for which any given curvature is small. The problem is when the curvature is always in the same direction: then the errors accumulate without bound regardless of the step size.

Two new problems will arise if you try to combat the erroneous propagation problem by shrinking the step size:
  • Computational load. It takes a lot of one-microsecond steps to simulate a system for hundreds of years. Make the time step small enough and you will eventually come to the point where you need a roomful of supercomputers.
  • Precision loss. Make the step size small enough and the planets won't move. This is because most computers represent floating point numbers in the IEEE floating point standard. With 64-bit doubles, (1.0+1e-16)-1.0 = 0.0, which represents complete loss of precision. Precision loss begins to kick in at a much larger step size. With simple Euler integration, precision loss kicks in before conservation problems become negligible. (A quick demonstration follows this list.)
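You can see the loss directly with a couple of lines of Python (any language using IEEE-754 doubles behaves the same way):

[code]
import sys

# 64-bit doubles carry ~16 significant decimal digits (machine
# epsilon is about 2.2e-16), so an increment below that threshold
# vanishes when added to 1.0.
print((1.0 + 1e-16) - 1.0)     # 0.0 -- the increment is lost entirely
print((1.0 + 1e-15) - 1.0)     # ~1.1e-15 -- already badly rounded
print(sys.float_info.epsilon)  # 2.220446049250313e-16
[/code]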

Dinosaur said:
I cannot believe that your comments about Cartesian coordinates are valid. While cylindrical & spherical coordinates are more convenient for describing gravitational systems, the actual calculations should use Cartesian coordinates due it being very easy to calculate distances between objects and add the force vectors. The distance & vector addition calculations are complex in anything but Cartesian coordinates.

The form in which state is propagated does not have to be the same as the form in which derivatives are calculated or results are displayed. Some examples:
  • A planet with non-spherical gravity. The gravitational acceleration can only be computed in the planet-fixed frame. That frame is usually not the one in which the state is propagated (too messy).
  • A planet with moons, all orbiting a star. Representing the states of the moons in Cartesian solar system barycentric coordinates is a losing proposition. If the moon is fairly massive (as in the Earth-Moon and Pluto-Charon systems), propagating the planet state in Cartesian solar system barycentric coordinates is also a losing proposition.
  • Lagrange, or libration, point orbits. Suppose you want to determine whether a vehicle can use a solar sail to maintain an orbit about the Earth-Moon L1 point. You won't be able to properly answer the question if you propagate state in Cartesian solar system barycentric coordinates.
  • A simple two-body point-mass system. The best state representation for propagating such a system is the mean anomaly M plus five other orbital elements. With this representation, all but M are constant, and M varies linearly with time. This scheme even works with Euler integration for a very large step size.

Expounding on the final example, orbital elements are not directly useful for purposes such as portraying the results graphically. It is, however, an easy matter to compute the Cartesian position and velocity given the orbital elements. The orbital elements are the propagated state, and the Cartesian position and velocity vectors are derived state variables.
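For the curious, here is a rough Python sketch of the elements-to-Cartesian step, restricted to the orbit plane for brevity (the three orientation angles would be applied afterwards as fixed rotations); the only non-trivial part is solving Kepler's equation, done here by Newton iteration:

[code]
import numpy as np

def elements_to_cartesian(a, e, M, mu):
    # Solve Kepler's equation M = E - e*sin(E) for the eccentric
    # anomaly E by Newton iteration (converges quickly for e < 1).
    E = M
    for _ in range(20):
        E -= (E - e * np.sin(E) - M) / (1.0 - e * np.cos(E))

    # Position in the orbit (perifocal) plane.
    r = np.array([a * (np.cos(E) - e),
                  a * np.sqrt(1.0 - e**2) * np.sin(E)])

    # Velocity: differentiate r(E) using dE/dt = n / (1 - e*cos(E)).
    n = np.sqrt(mu / a**3)            # mean motion
    Edot = n / (1.0 - e * np.cos(E))
    v = np.array([-a * np.sin(E) * Edot,
                  a * np.sqrt(1.0 - e**2) * np.cos(E) * Edot])
    return r, v

# Propagation is then trivial: M(t) = M0 + n*t, everything else constant;
# the Cartesian state is derived on demand.
[/code]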

The choice of propagated versus derived parameters is an art. One rather exotic choice of propagated state is the set of Brouwer-Lyddane mean orbital elements, an analytical solution of satellite motion that incorporates the J2 through J5 zonal harmonics.
 
DaleSpam: Your remarks about positive curvature are cogent. The errors must accumulate. Now that you mentioned it, I am surprised that my simulator was able to produce what seemed like good results over many years of simulated real time. Even very symmetric systems resisted chaos for a while.

I agree in principle with your (and D H’s) remarks about accumulation of errors when using first-order methods. My program for torus geodesics rapidly encountered precision problems, and could not move away from special positions where the first derivative was zero. I had to provide an option for a second-order method. I never considered Runge-Kutta because I had an analytical expression for the second derivatives.

My gravitational simulator seemed to do well with first-order methods.

D H: You seem very knowledgeable. Do you have an opinion on Runge-Kutta versus a second-order method usable when there is an analytical expression for the second derivative?

As mentioned above: In principle, first-order methods will accumulate errors. In practice, the problem does not seem as bad as you suggest. Following are your remarks in bold, with some of my comments.
  • <b>Computational load. It takes a lot of one-microsecond steps to simulate a system for hundreds of years. Make the time step small enough and you will eventually come to the point where you need a roomful of supercomputers.</b> Microsecond time steps do not seem necessary.

  • <b>Precision loss. Make the step size small enough and the planets won't move. This is because most computers represent floating point numbers in the IEEE floating point standard. With 64-bit doubles, (1.0+1e-16)-1.0 = 0.0, which represents complete loss of precision. Precision loss begins to kick in at a much larger step size. With simple Euler integration, precision loss kicks in before conservation problems become negligible.</b> There is a typo in the previous remark, but I get the point. However, 10<sup>-15</sup> is extreme enough to be suggestive of a straw-man argument. If you sliced a year into seconds, your interval would be 3.17*10<sup>-8</sup> years, which would not strain IEEE floating point precision. One-second intervals for simulating the (Mars, Earth, Luna, Venus, Mercury, & Sol) system seem like overkill.
The following expands on the above issues.
  • After being given initial conditions, my software determines a default display interval equivalent to a few degrees or less of angular displacement for the fastest object in the system. This display interval is divided by 4096 to determine the initial time slice. The concept here is to simulate a few years of real time with the default time slice. Additional simulations are then run using smaller time slices in order to decide how small a slice is required for reasonable precision. To avoid some minor round-off errors due to binary arithmetic, I always divide the display interval into 2<sup>n</sup> slices for some integer value of n. (A rough sketch of this slice-selection logic appears after this list.)

    I simulated a symmetric system of four massive stars in eccentric elliptical orbits about their mutual center of gravity, using 4096 time slices for an 11-day display interval. It took about 15 seconds to simulate about ten years of real time, at which point the system became asymmetric, indicating that a smaller time slice was required. This type of system is expected to be a problem due to the high velocities that occur when the four objects are very close to their common center of gravity.

    A similar system with nearly circular orbits remained symmetric for over 100 years using 4096 time slices for 60-day display intervals. This took about 30 seconds of computer time.

    I used such symmetric systems to gain intuition about the choice of time-slice size, because they theoretically remain stable indefinitely but deteriorate in practice due to loss of precision. My experience with them indicated that, for my purposes, first-order methods could be used for simulating systems with 10-20 objects for a few hundred years of real time. I let the computer run while I was asleep for some of the more intractable systems.
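Roughly, the slice-selection logic described above is as follows (a Python sketch; the names are made up, and my actual code is Visual Basic):

[code]
def default_time_slice(fastest_period, degrees_per_display=3.0, n=12):
    # Display interval: a few degrees of angular displacement for
    # the fastest object in the system.
    display_interval = fastest_period * degrees_per_display / 360.0
    # Divide by 2**n (4096 for n = 12). A power-of-two divisor only
    # changes the binary exponent, so the division itself is exact.
    return display_interval, display_interval / 2**n

# Example: Mercury (~88-day period) as the fastest object.
interval, time_slice = default_time_slice(88 * 86400.0)
print(interval / 86400.0, "day display interval;", time_slice, "s slices")
[/code]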
I am not spending a lot of time on the project, but have been converting the gravity-simulation software from Visual Basic 6.0 to Visual Basic .NET. In view of the remarks by you and Dale, I intend to structure the source code to allow me to later add a user option giving a choice of first-order or Runge-Kutta methods. With just a little planning, it should not be difficult to provide such an option after converting and testing the current source code.

In the remainder of your post, you seem to be referring to methods with which I am not familiar. I assume these methods are used for very precise computations relating to Earth satellites and/or our solar system. I would appreciate your describing them. In particular, can you explain the following, which you posted?
The choice of propagated versus derived parameters is an art. One rather exotic choice of propagated state is the set of Brouwer-Lyddane mean orbital elements, an analytical solution of satellite motion that incorporates the J2 through J5 zonal harmonics.

For purposes of simulating gravitational systems assuming point masses and classical gravitational equations, I find it hard to believe that Cartesian coordinates would not be used for the actual computation of the forces, accelerations, velocities, & positions. A 10-body system requires adding nine vectors to determine the force on each object. Adding those vectors seems like a formidable task using cylindrical or spherical coordinates.

If not Cartesian coordinates & classical equations, what equations and coordinate system would you prefer for the actual computations?

I am aware that Cartesian coordinates are a nightmare for working out initial conditions or understanding a display of computed positions & velocities. I use cylindrical & spherical coordinates when developing initial conditions for a made up system. I also use such coordinates for display of computed positions & velocities. Internally, I use only Cartesian coordinates.
 
Dinosaur said:
My gravitational simulator seemed to do well with first-order methods.


Try it with a two-body system and compare with analytic results. If you used simple Euler integration, I will be very surprised if you report good results.

Dinosaur said:
Do you have an opinion on Runge-Kutta versus a second-order method usable when there is an analytical expression for the second derivative?
In my experience, second-order Euler-Cromer single-step methods fare slightly better than RK2 but are not as good as RK4.
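For reference, Euler-Cromer in its simplest form is the one-line change below (a Python sketch; the accel() function is a generic stand-in). The position update uses the freshly updated velocity, which is what tames the energy drift:

[code]
def euler_cromer_step(x, v, dt, accel):
    # Update velocity first, then advance position with the *new*
    # velocity. Plain Euler would use the old velocity here, and its
    # orbit errors grow without bound; Euler-Cromer's energy error
    # stays bounded for oscillatory problems like orbits.
    v_new = v + dt * accel(x)
    x_new = x + dt * v_new
    return x_new, v_new
[/code]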


Dinosaur said:
In principle, first-order methods will accumulate errors. In practice, the problem does not seem as bad as you suggest.

I would <b>never</b> use Euler integration. It is much worse in practice than you think. I just finished an exercise of verifying our propagation techniques. We include Euler integration primarily as the simplest example of how to write a propagator.

I used a two-body system to test our integrators, as an analytic solution exists for such systems.
  • We provide nine propagation techniques: Euler integration, Nystrom-Lear 2, RK2 (i.e., the midpoint method), a modified midpoint method, RK4 and a variant, two Runge-Kutta-Fehlberg methods, and one Adams-Bashforth-Moulton integrator.
  • I initialized a small body in orbit around a very large body, varying the orbital plane and initial position/velocity.
  • I propagated the state for ten true orbits with varying step sizes and looked at the worst error over the ten orbits as a function of integration technique and step size.
  • I characterize the step size as the product of orbital mean motion and simulator time step (i.e., dividing the orbit into angular chunks instead of dividing it into time steps).
  • I characterize the error as ||x_computed-x_true||/||x_true|| + ||v_computed-v_true||/||v_true||.
  • I deem an integrator to be acceptable if this scaled error is 1e-7 or smaller, good if the error is 1e-9 or smaller, and excellent if the error is 1e-11 or smaller. The best one could hope for is 1e-15 or so, numerical precision.
  • <b>I could not find an acceptable step size for Euler integration.</b>
I have such a pretty graph of the results, but no site to which to upload it. A summary:
  • At Mdot*dt = 1e-4 degrees, the scaled error for Euler integration is 1e-2; all others are less than 1e-8.
  • Euler integration exhibits complete failure (scaled error > 1) near Mdot*dt = 1e-2 degrees. Even the midpoint methods still provide acceptable values at these time steps.
  • The winner (both in terms of best accuracy and fewest steps) is RK7/8, a Runge-Kutta-Fehlberg method. Its accuracy improves as the step size <b>increases</b> up until Mdot*dt = 2 degrees, at which point the scaled error is 1e-12 (i.e., incredible results are obtained chopping an orbit into just 180 pieces).
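A stripped-down version of that test is easy to reproduce. The Python sketch below uses a circular two-body orbit with mu = 1 and compares only Euler against RK4, with the scaled error defined above; it will not reproduce my exact numbers, but the Euler failure is unmistakable:

[code]
import numpy as np

mu = 1.0
def accel(x):
    return -mu * x / np.linalg.norm(x)**3

def euler_step(x, v, dt):
    return x + dt * v, v + dt * accel(x)

def rk4_step(x, v, dt):
    def f(s):  # s = (x, v); derivative is (v, a)
        return np.hstack((s[2:], accel(s[:2])))
    s = np.hstack((x, v))
    k1 = f(s)
    k2 = f(s + 0.5 * dt * k1)
    k3 = f(s + 0.5 * dt * k2)
    k4 = f(s + dt * k3)
    s = s + (dt / 6.0) * (k1 + 2*k2 + 2*k3 + k4)
    return s[:2], s[2:]

# Circular orbit of radius 1: period 2*pi, speed 1, mean motion 1.
# Ten orbits at one degree per step (Mdot*dt = 1 degree).
x0, v0 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
steps = 3600
dt = 10 * 2 * np.pi / steps
for step, name in ((euler_step, "Euler"), (rk4_step, "RK4")):
    x, v = x0, v0
    for _ in range(steps):
        x, v = step(x, v, dt)
    err = (np.linalg.norm(x - x0) / np.linalg.norm(x0)
           + np.linalg.norm(v - v0) / np.linalg.norm(v0))
    print(name, err)  # Euler fails; RK4 is orders of magnitude better
[/code]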

Dinosaur said:
If you sliced a year into seconds, your interval would be 3.17*10<sup>-8</sup> years, which would not strain IEEE floating point precision. One-second intervals for simulating the (Mars, Earth, Luna, Venus, Mercury, & Sol) system seem like overkill.

That would strain IEEE floating point incredibly. Precision loss starts to occur when the orbit is sliced into more than 4000 or so parts. You are talking about dividing the Earth's orbit about the Sun into 32e6 parts.

The error results from computing x<sub>new</sub> = x<sub>old</sub> + delta_t*v<sub>old</sub> and v<sub>new</sub> = v<sub>old</sub> + delta_t*a<sub>old</sub>. The added terms (delta_t*v, delta_t*a) shrink as the time step shrinks. Example: Low Earth Orbit is about 6.7e6 meters from the center of the Earth, with v = 7.7e3 m/s and a period of about 5400 sec. With a step size of one second, the magnitude of v*delta_t is three orders of magnitude smaller than the magnitude of the position vector. <b>The very first addition loses accuracy.</b> The losses pile up as more and more steps are taken.
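It is easy to watch this happen (a Python sketch using the numbers above; np.spacing gives the gap between adjacent doubles):

[code]
import numpy as np

r = 6.7e6                    # distance from Earth's center, meters
print(np.spacing(r))         # ~9.3e-10 m: position resolution near r

inc = 7.7e3 * 1.0e-3         # v*delta_t for a one-millisecond step
print((r + inc) - r - inc)   # nonzero: low-order bits of v*delta_t are lost

# An increment smaller than half the spacing vanishes outright:
print((r + 4.0e-10) - r)     # 0.0 -- the position stops moving at all
[/code]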

Dinosaur said:
For purposes of simulating gravitational systems assuming point masses and classical gravitational equations, I find it hard to believe that Cartesian coordinates would not be used for the actual computation of the forces, accelerations, velocities, & positions.
The coordinates in which derivatives are calculated do not need to be the same as the coordinates in which states are propagated.

Dinosaur said:
If not Cartesian coordinates & classical equations, what equations and coordinate system would you prefer for the actual computations?
I use primarily Cartesian coordinates; in fact, I use a lot of different Cartesian coordinates. Then again, I am not in the business of determining the state of the solar system. (I get that for free from the JPL models). I am primarily concerned with the relative states between two or more space vehicles over a relatively short time span.
 
Dinosaur,

Do you have access to Mathematica? The numerical differential equation solver there is pretty nice; it automatically switches algorithms as needed unless you specify one.

I think Matlab also has many pre-packaged solvers.
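For what it's worth, the free Python stack has comparable canned solvers; SciPy's solve_ivp with method="LSODA" even switches automatically between stiff and non-stiff algorithms, much like Mathematica's default behavior. A minimal sketch on a two-body problem:

[code]
import numpy as np
from scipy.integrate import solve_ivp

mu = 1.0

def two_body(t, s):  # s = [x, y, vx, vy]
    r = np.hypot(s[0], s[1])
    return [s[2], s[3], -mu * s[0] / r**3, -mu * s[1] / r**3]

# Ten orbits of a circular unit orbit (period 2*pi).
sol = solve_ivp(two_body, (0.0, 20.0 * np.pi), [1.0, 0.0, 0.0, 1.0],
                method="LSODA", rtol=1e-10, atol=1e-12)
print(sol.y[:2, -1])  # should land close to the starting point (1, 0)
[/code]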

-Dale
 
DaleSpam: I am aware of Mathematica, but cannot afford it. I use MathCad, but have yet to use its differential-equation capabilities.

You and D H are probably correct about loss of precision with first-order methods. I have no reliable method of determining the precision obtained by my gravitational simulator, which was developed more as an amusement than a serious tool.

I originally adopted second-order methods for the torus geodesics because first-order integration stalled at places where the first derivative was zero, a very common occurrence in this type of analysis. I made use of the second derivative as an option, allowing me to compare first- and second-order results. The first-order results were good enough in most instances to determine the shape of the geodesics, but the precision was hardly acceptable.
 