In the case of Gaussian (or similar) noise and smooth dynamics $f(x)$ the integrand is smooth and decays exponentially fast at the boundaries. This in turn implies that the simplest possible integration algorithm, summing the integrand at a cubic array of coordinate values, converges faster than any power of the step size; the convergence is typically exponential once the step size is smaller than the noise width $\sigma$.
This remarkable convergence rate for smooth, exponentially decaying integrands follows from the observation that, by multiplying the terms near the boundary by appropriate factors, it is possible to obtain integration algorithms of higher and higher order in the step size [14]. For an exponentially decaying integrand the boundary terms are negligible, so such correction factors make no difference: the plain sum already agrees with every such higher-order rule up to exponentially small corrections, and therefore converges faster than any power of the step size.
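As an illustration, the sketch below evaluates the trace by exactly this brute-force prescription. The integrand is taken here to be the cyclic product of Gaussian kernels $e^{-(x_{j+1}-f(x_j))^2/2\sigma^2}/\sqrt{2\pi\sigma^2}$, an assumption based on the surrounding discussion rather than a formula quoted from the text; the function names, the example map, and the parameter values are illustrative choices.

    import numpy as np

    def trace_brute_force(f, n, sigma, x_min, x_max, h):
        # Sum the cyclic product of Gaussian kernels over an n-dimensional grid
        # of spacing h; the kernel normalisation 1/sqrt(2 pi sigma^2) is assumed.
        grid = np.arange(x_min, x_max + 0.5 * h, h)
        norm = 1.0 / (np.sqrt(2.0 * np.pi) * sigma)
        total = 0.0
        for idx in np.ndindex(*([len(grid)] * n)):
            x = grid[list(idx)]
            # argument of the exponential for the cycle x_0 -> x_1 -> ... -> x_0
            expo = -np.sum((np.roll(x, -1) - f(x)) ** 2) / (2.0 * sigma ** 2)
            total += np.exp(expo)
        return (norm * h) ** n * total

    # hypothetical smooth one-dimensional map and a moderate noise width
    f = lambda x: 4.0 * x * (1.0 - x)
    print(trace_brute_force(f, n=2, sigma=0.5, x_min=-2.0, x_max=3.0, h=0.1))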
Note also that if, having chosen say $x_0$ and $x_1$, the argument of the exponential, $-(x_1-f(x_0))^2/2\sigma^2$, already happens to be too small, it is not necessary to consider the other $x_j$ at all. This provides a very substantial saving in time for $n>2$.
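A minimal sketch of this pruning, assuming the same Gaussian kernel as above: the tuple $(x_0,\dots,x_{n-1})$ is built up one coordinate at a time, and a partial tuple is abandoned as soon as its accumulated exponent falls below a cutoff. The value $-50$ used here is a hypothetical threshold, far below 10-digit precision since $e^{-50}\approx 2\times 10^{-22}$.

    import numpy as np

    def trace_pruned(f, n, sigma, grid, h, cutoff=-50.0):
        norm = 1.0 / (np.sqrt(2.0 * np.pi) * sigma)

        def recurse(k, x0, x_prev, expo):
            # expo = accumulated argument of the exponential for the links fixed so far
            if expo < cutoff:
                return 0.0          # the remaining x_j cannot rescue this term
            if k == n:
                # close the cycle with the link x_{n-1} -> x_0
                expo += -(x0 - f(x_prev)) ** 2 / (2.0 * sigma ** 2)
                return np.exp(expo) if expo > cutoff else 0.0
            total = 0.0
            for x in grid:
                link = -(x - f(x_prev)) ** 2 / (2.0 * sigma ** 2)
                total += recurse(k + 1, x0, x, expo + link)
            return total

        return (norm * h) ** n * sum(recurse(1, x0, x0, 0.0) for x0 in grid)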
Finally, the integral is symmetric under a cyclic interchange of the $x_j$: this implies an additional saving of a factor of $n$. The logic required here is not trivial, since the contribution differs depending on whether some of the $x_j$ are identical. For example, for $n=4$, choose two values $x_{\min}$ and $x_{\max}$ beyond which there is no
possible contribution. Then sum
$$\sum_{x_0=x_{\min}}^{x_{\max}},$$
defining $x_0$ to be the largest of the $x_j$, and the one occurring first if more than one are maximal. Sum
$$\sum_{x_1=x_{\min}}^{x_0},$$
checking that the argument of the exponential is not too small. Sum
$$\sum_{x_2=x_{\min}}^{x_0},$$
again checking the argument of the exponential. Then sum
$$\sum_{x_3=x_{\min}}^{x_0},$$
and multiply each contribution by 4.
If $x_2=x_0$ the $x_j$ could form a 2-cycle repeated twice, so when $x_3=x_1$ count the term twice instead of four times, and stop the sum over $x_3$ there to avoid double counting. Finally, the repeated fixed point $x_0=x_1=x_2=x_3$ has been excluded, so sum this term explicitly and count it once. The case $n=5$ is simpler, since the repeated fixed point is then the only degenerate case, but there are more possibilities for which of the $x_j$ is the maximum.
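The sketch below realises the same factor-of-$n$ saving for $n=4$, though with slightly different bookkeeping from the case analysis above: instead of handling the degenerate tuples by hand, each tuple with $x_0$ maximal is kept only if it is the lexicographically largest of its cyclic rotations, and is then weighted by the number of distinct rotations (4 generically, 2 for a 2-cycle repeated twice, 1 for the repeated fixed point). The kernel and cutoff are the same assumptions as in the earlier sketches.

    import numpy as np
    from itertools import product

    def trace_n4_cyclic(f, sigma, grid, h, cutoff=-50.0):
        n = 4
        norm = 1.0 / (np.sqrt(2.0 * np.pi) * sigma)
        total = 0.0
        for i0, x0 in enumerate(grid):
            lower = grid[:i0 + 1]                 # enforce x_j <= x_0 for j = 1, 2, 3
            for x1, x2, x3 in product(lower, repeat=3):
                x = (x0, x1, x2, x3)
                rots = [x[k:] + x[:k] for k in range(n)]
                if x != max(rots):
                    continue                      # keep one representative per cyclic class
                weight = len(set(rots))           # 4, 2, or 1
                xs = np.array(x)
                expo = -np.sum((np.roll(xs, -1) - f(xs)) ** 2) / (2.0 * sigma ** 2)
                if expo > cutoff:
                    total += weight * np.exp(expo)
        return (norm * h) ** n * total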
Even with the above short cuts, large $n$, small $\sigma$, and stringent precision requirements can lead to sums of $10^9$ terms. It is therefore advisable to group the terms by size (using the argument of the exponential) as they are summed, and then to combine the groups from smallest to largest to minimize roundoff error.
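A minimal sketch of such a grouping, assuming each contribution is available as a pair (exponent, prefactor): terms are accumulated in bins labelled by the integer part of the exponent, and the bins are then added starting from the most negative exponent, i.e. from the smallest terms.

    import numpy as np
    from collections import defaultdict

    def grouped_sum(contributions):
        # contributions: iterable of (expo, weight) pairs, term = weight * exp(expo)
        bins = defaultdict(float)
        for expo, weight in contributions:
            bins[int(expo)] += weight * np.exp(expo)
        # most negative exponent (smallest terms) first
        return sum(bins[k] for k in sorted(bins))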
Given the above algorithm, the step size $h$ is decreased until two successive estimates agree to within a specified precision (for example 10 digits). Since the computation time increases as $h^{-n}$, the optimal sequence of step sizes is probably $h_j = h_0\,e^{-j/n}$. Note that a large initial value of $h$ can lead to a zero result, as the entire contributing region may be missed by the coarse grid.
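A sketch of this refinement loop, taking any of the estimators above as a function of the spacing $h$; the tolerance and iteration cap are illustrative choices.

    import numpy as np

    def refine_step(trace_at_h, n, h0, rel_tol=1e-10, max_iter=20):
        # shrink the spacing as h_j = h_0 exp(-j/n) until two successive
        # estimates agree to the requested relative precision
        prev = trace_at_h(h0)
        for j in range(1, max_iter):
            curr = trace_at_h(h0 * np.exp(-j / n))
            # a zero estimate means the grid missed the contributing region,
            # so it is never accepted as converged
            if curr != 0.0 and abs(curr - prev) <= rel_tol * abs(curr):
                return curr
            prev = curr
        raise RuntimeError("step-size refinement did not converge")

    # e.g. refine_step(lambda h: trace_pruned(f, 4, 0.5, np.arange(-2.0, 3.0, h), h),
    #                  n=4, h0=0.25)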
With the above algorithm, calculation of the trace up to $n=5$ at small noise width $\sigma$, and up to $n=6$ for somewhat larger values of $\sigma$, is feasible for the case of Gaussian noise and smooth one-dimensional dynamics.