next up previous
Next: Results Up: Traces and determinants of Previous: Formalism

  
Numerical methods

The required quantities, ${\rm tr}{\cal L}^n$, are n-dimensional integrals whose integrands, in the case of weak noise ( $\sigma\ll 1$), have a large number of sharp peaks surrounding the periodic points of the deterministic map. Obtaining an accurate numerical estimate of the integral for any n>2 seems prohibitively difficult: Monte Carlo approaches take too long to converge, while direct integration schemes require a small step size yet must cover a large configuration space. See Ref. [14] for more discussion.

In the case of Gaussian (or similar) noise and smooth dynamics f(x), the integrand is smooth and decays exponentially fast at the boundaries. This in turn implies that the simplest possible integration algorithm, summing the integrand on a cubic array of coordinate values, converges faster than any power of the step size; the convergence is typically exponential once the step size is smaller than $\sigma$.

This remarkable convergence rate for smooth, exponentially decaying integrands follows from the observation that, by multiplying the terms near the boundary by appropriate factors, it is possible to obtain algorithms of higher and higher order in the step size [14]. For an exponentially decaying integrand the boundary terms are negligible, so these factors are irrelevant: the plain sum achieves every such order simultaneously, and hence converges faster than any power of the step size.
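This behaviour is easy to observe directly. The following minimal sketch (not the computation from the paper) sums a one-dimensional Gaussian on a uniform grid with no endpoint corrections; halving the step size from $\sigma$ to $\sigma/2$ drops the error far below what any fixed-order rule would predict.

```python
import math

sigma = 0.1
exact = sigma * math.sqrt(2.0 * math.pi)   # exact value of the Gaussian integral

def grid_sum(h):
    # plain equally spaced sum over [-10 sigma, 10 sigma], no endpoint corrections
    N = round(10 * sigma / h)
    return h * sum(math.exp(-(i * h)**2 / (2.0 * sigma**2)) for i in range(-N, N + 1))

err1 = abs(grid_sum(sigma) - exact) / exact        # step size h = sigma
err2 = abs(grid_sum(sigma / 2) - exact) / exact    # step size h = sigma/2
```

With h equal to $\sigma$ the relative error is already around $10^{-9}$; halving h pushes it to the roundoff level, far beyond the factor $2^p$ improvement of any fixed order p.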

Note also that if, having chosen say x0 and x1, the argument of the exponential, $-(x_1-f(x_0))^2/(2\sigma^2)$, is already too negative for the factor to contribute, it is not necessary to consider the other xj at all. This provides a very substantial saving in time for n>2.
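A minimal sketch of this pruning for n=3, using a hypothetical map f(x)=2x(1-x) (chosen only for illustration, not one treated here): once the x0-x1 link alone makes the exponent exceed a cutoff, the whole inner loop over x2 is skipped, yet the result agrees with the full sum to high accuracy.

```python
import math
from itertools import product

def f(x):
    return 2.0 * x * (1.0 - x)       # hypothetical smooth map, illustration only

sigma, cut = 0.05, 50.0              # discard branches once the exponent exceeds cut
grid = [-0.2 + 0.02 * i for i in range(61)]   # step size below sigma
n = 3

# full sum, no pruning
full = 0.0
for t in product(grid, repeat=n):
    e = sum((t[(j + 1) % n] - f(t[j]))**2 for j in range(n)) / (2.0 * sigma**2)
    full += math.exp(-e)

# pruned sum: the x2 loop is never entered when the x0-x1 link is negligible
pruned, visited = 0.0, 0
for x0 in grid:
    for x1 in grid:
        e01 = (x1 - f(x0))**2 / (2.0 * sigma**2)
        if e01 > cut:
            continue                 # exp(-e01) < 2e-22: skip all values of x2
        for x2 in grid:
            e = e01 + ((x2 - f(x1))**2 + (x0 - f(x2))**2) / (2.0 * sigma**2)
            if e <= cut:
                pruned += math.exp(-e)
                visited += 1
```

The dropped terms are each below exp(-50), so the pruned sum differs from the full one only at a negligible level while visiting a small fraction of the tuples.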

Finally, the integral is symmetric under cyclic interchange of the xj, which yields an additional saving of a factor of n. The logic required here is not trivial, since the contribution differs depending on whether some of the xj are identical. For example, for n=4, choose two values xmin and xmax beyond which there is no possible contribution. Then sum $x_{min}\leq x_0\leq x_{max}$, defining x0 to be the largest of the xj and, if more than one attain the maximum, the one occurring first. Sum $x_{min}\leq x_1\leq x_0$, checking that the argument of the exponential is not too negative. Sum $x_{min}\leq x_2\leq x_0$, again checking the argument of the exponential. Then sum $x_{min}\leq x_3<x_0$, and multiply each contribution by 4. If x2=x0, the xj could form a 2-cycle repeated twice, so when x3=x1 count the term twice instead of four times, and stop the sum over x3 to avoid double counting. Finally, the repeated fixed point x0=x1=x2=x3 has been excluded, so sum this explicitly and count it once. The case n=5 is simpler, as only a repeated fixed point can occur, but there are more possibilities for which of the xj is the maximum.
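The counting rule behind this (multiply a generic tuple by 4, a repeated 2-cycle by 2, a repeated fixed point by 1) can be checked on a small example: each cyclic orbit is counted once via a canonical rotation and weighted by its period under rotation. A minimal sketch with a hypothetical map, not the bookkeeping used in the actual computation:

```python
import math
from itertools import product

def f(x):
    return 2.0 * x * (1.0 - x)         # hypothetical map, illustration only

sigma, n = 0.3, 4
grid = [0.1 * i for i in range(8)]     # deliberately coarse: this is a counting check

def weight(t):
    # cyclically invariant integrand factor
    e = sum((t[(j + 1) % n] - f(t[j]))**2 for j in range(n))
    return math.exp(-e / (2.0 * sigma**2))

def rotations(t):
    return [t[i:] + t[:i] for i in range(len(t))]

def period(t):
    # smallest rotation mapping t to itself; equals the number of distinct rotations
    return next(d for d in range(1, len(t) + 1) if t == t[d:] + t[:d])

full = sum(weight(t) for t in product(grid, repeat=n))

# one canonical representative per cyclic orbit, weighted by its period:
# 4 for a generic tuple, 2 for a repeated 2-cycle, 1 for a repeated fixed point
reduced = sum(period(t) * weight(t)
              for t in product(grid, repeat=n)
              if t == max(rotations(t)))
```

The period-weighted sum over canonical representatives reproduces the full sum exactly, confirming the multiplicities used above.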

Even with the above shortcuts, large n, small $\sigma$, and stringent precision requirements can lead to sums of $10^9$ terms. It is then advisable to group the terms by size (using the argument of the exponential) as they are summed, and to combine the groups from smallest to largest to minimize roundoff error.
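A minimal sketch of such grouping (illustrative, not the implementation used here): terms spanning many orders of magnitude are binned by their order of magnitude, and the bins are then combined from smallest to largest, which keeps the result close to the exactly rounded sum.

```python
import math
import random

random.seed(1)
# widely varying positive terms, as produced by the exponential factors
terms = [math.exp(-random.uniform(0.0, 60.0)) for _ in range(100000)]

bins = {}
for t in terms:
    k = math.floor(math.log10(t))     # bin index = order of magnitude
    bins[k] = bins.get(k, 0.0) + t

total = 0.0
for k in sorted(bins):                # combine bins from smallest to largest
    total += bins[k]

exact = math.fsum(terms)              # exactly rounded reference sum
```

Summing within narrow magnitude bins and then ascending through the bins avoids repeatedly adding tiny terms to a large accumulator, where their low-order bits would be lost.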

Given the above algorithm, the step size h is decreased until two successive estimates agree to within a specified precision (for example 10 digits). Since the computation time increases as $h^{-n}$, the optimal sequence is probably $h_j=h_0 e^{-j/n}$. Note that a large initial value of h can lead to a zero result, as the entire contributing region may be missed.
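The refinement loop can be sketched for n=1 with a hypothetical map f(x)=2x(1-x) (illustration only): successive estimates at $h_{j+1}=h_j e^{-1}$ are compared until they agree to 10 digits, starting from $h_0 = \sigma$ so that the peaks are not missed.

```python
import math

def f(x):
    return 2.0 * x * (1.0 - x)   # hypothetical smooth 1-d map, illustration only

sigma = 0.02
xmin, xmax = -0.3, 0.8           # chosen so the integrand is negligible outside

def estimate(h):
    # plain grid sum for tr L with n = 1; converges exponentially once h < sigma
    N = int((xmax - xmin) / h)
    s = sum(math.exp(-(xmin + i * h - f(xmin + i * h))**2 / (2.0 * sigma**2))
            for i in range(N + 1))
    return s * h / math.sqrt(2.0 * math.pi * sigma**2)

h = sigma                        # starting much larger than sigma risks a zero result
prev = estimate(h)
while True:
    h /= math.e                  # h_j = h_0 e^{-j/n} with n = 1
    cur = estimate(h)
    if abs(cur - prev) < 1e-10 * abs(cur):   # 10-digit agreement
        break
    prev = cur
```

For this map the weak-noise trace is dominated by the fixed points at 0 and 1/2, each contributing approximately $1/|1-f'|=1$, so the converged value is close to 2.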

With the above algorithm, calculation of the trace up to n=5 with $\sigma\geq 0.01$, and up to n=6 for somewhat larger values of $\sigma$, is feasible for the case of Gaussian noise and smooth one-dimensional dynamics.


Carl Philip Dettmann
1998-05-15