
Scaling and memoization

by John D. Cook

The previous post explained that Lobatto's integration method is more efficient than Gaussian quadrature when the end points of the interval need to be included as integration points. It mentioned that this is an advantage when you need to integrate over a sequence of contiguous intervals, say [1, 2] then [2, 3], because the function being integrated only needs to be evaluated at the common end points once.

This occurs in applications, for example, when numerically solving differential equations: an integral might need to be evaluated over a sequence of contiguous intervals, one for each time step.

This post will illustrate the time savings from combining Lobatto integration with memoization, i.e. caching function evaluations. Some languages have a built-in feature for memoization; in other languages you may need to write your own memoization code.

Scaling

In order to integrate functions over intervals other than [-1, 1] we need a change of variables to rescale the domain. We'll incorporate that in the code below.
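Explicitly, substituting x = ct + d turns an integral over [a, b] into one over [-1, 1]:

    \int_a^b f(x)\,dx = c \int_{-1}^1 f(ct + d)\,dt, \qquad c = \frac{b-a}{2}, \quad d = \frac{b+a}{2}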

Here is our code to integrate a function f over an interval [a, b].

    def lobatto(f, a, b):
        # Change integration interval to [-1, 1]
        c = (b-a)/2
        d = (b+a)/2
        # Multiply by c because c is the
        # Jacobian of the change of variables
        return c*integrate(lambda x: f(c*x + d), lobatto_points, lobatto_weights)
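The function integrate and the Lobatto points and weights come from the previous post. For completeness, here is a minimal sketch of what they might look like, assuming the five-point Lobatto rule:

    import numpy as np

    # Five-point Lobatto rule on [-1, 1]; the nodes include both end points
    lobatto_points = [-1, -np.sqrt(3/7), 0, np.sqrt(3/7), 1]
    lobatto_weights = [1/10, 49/90, 32/45, 49/90, 1/10]

    def integrate(f, points, weights):
        # Weighted sum of function values at the quadrature nodes
        return sum(w*f(x) for x, w in zip(points, weights))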
Reducing function evaluations

Next, we create a function to integrate that takes 10 seconds to evaluate. This is an artificial example, but the time required for numerical integration is often dominated by function evaluations. Here we choose an example that makes this obvious.

    from time import sleep, time

    def my_slow_function(x):
        sleep(10)
        return x**3 + x**2

The following code integrates my_slow_function over three contiguous intervals.

    t0 = time()
    lobatto(my_slow_function, 1, 2)
    lobatto(my_slow_function, 2, 3)
    lobatto(my_slow_function, 3, 4)
    t1 = time()
    print("Elapsed time: ", t1-t0)

This code takes 150 seconds because each integration requires five function evaluations at 10 seconds each.

However, by adding one line of code we can reduce the run time to 130 seconds. We add the decorator functools.lru_cache() to ask Python to cache evaluations of our integrand.

    import functools

    @functools.lru_cache()
    def my_slow_function(x):
        sleep(10)
        return x**3 + x**2
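One caveat: functools.lru_cache keys the cache on the function's arguments, so they must be hashable, and a cache hit requires bit-for-bit identical floating point values. That works out here because the change of variables maps the nodes ±1 to the end points a and b without rounding error for the integer end points in this example.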

Now the three integrations above take 130 seconds because my_slow_function is evaluated at the shared end points 2 and 3 only once each, saving two of the fifteen evaluations.

You could write your own code to cache function evaluations, and that might be worthwhile if efficiency is a priority, but it's easy to let the language do it for you.
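For example, a hand-rolled cache might look like the following minimal sketch (the decorator name memoize is illustrative, not from the original post):

    def memoize(f):
        # Store results of f, keyed by its argument
        cache = {}
        def wrapper(x):
            if x not in cache:
                cache[x] = f(x)
            return cache[x]
        return wrapper

    @memoize
    def my_slow_function(x):
        sleep(10)
        return x**3 + x**2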

More on memoization