
Yogi Berra meets Pafnuty Chebyshev

by John D. Cook

I just got an evaluation copy of The Best Writing on Mathematics 2017. My favorite chapter was Inverse Yogiisms by Lloyd N. Trefethen.

Trefethen gives several famous Yogi Berra quotes and concludes that

Yogiisms are statements that, if taken literally, are meaningless or contradictory or nonsensical or tautological, yet nevertheless convey something true.

An inverse yogiism is the opposite,

[a] statement that is literally true, yet conveys something false.

What a great way to frame a chapter! Now that I've heard the phrase, I'm trying to think of inverse yogiisms. Nothing particular has come to mind yet, but I feel like there must be lots of things that fit that description. Trefethen comes up with three inverse yogiisms, and my favorite is the middle one: Faber's theorem on polynomial interpolation.

Faber's theorem is a non-convergence result for interpolants of continuous functions. Trefethen quotes several numerical analysis textbooks that comment on Faber's theorem in a way that suggests an overly pessimistic interpretation. The theorem is true for continuous functions in general, but if the function f being interpolated is smooth, or even just Lipschitz continuous, its pessimistic conclusion no longer applies. In particular, Chebyshev interpolation produces a sequence of polynomials converging to f.

A few years ago I wrote a blog post showing a famous example due to Carl Runge: if you interpolate f(x) = 1/(1 + x^2) over [-5, 5] at evenly spaced nodes, the sequence of interpolating polynomials diverges. In other words, adding more interpolation points makes the fit worse.

Here's the result of fitting a 16th-degree polynomial to f at evenly spaced nodes.

[Figure: Runge's function and its degree-16 interpolant at evenly spaced nodes on [-5, 5]]
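
Here's a minimal Python sketch of the experiment (a reconstruction, not the code from the original post). It uses SciPy's BarycentricInterpolator, which passes exactly through the given nodes in a numerically stable way; 17 nodes determine a degree-16 interpolant.

    # Sketch reproducing the Runge example (not the post's original code)
    import numpy as np
    import matplotlib.pyplot as plt
    from scipy.interpolate import BarycentricInterpolator

    def f(x):
        return 1 / (1 + x**2)

    nodes = np.linspace(-5, 5, 17)   # 17 evenly spaced nodes -> degree-16 interpolant
    p = BarycentricInterpolator(nodes, f(nodes))

    x = np.linspace(-5, 5, 1000)
    plt.plot(x, f(x), label="f(x) = 1/(1 + x^2)")
    plt.plot(x, p(x), label="degree-16 interpolant")
    plt.plot(nodes, f(nodes), "ko")  # mark the interpolation nodes
    plt.legend()
    plt.show()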

The error near the ends is terrible, though the fit does improve in the middle. If instead of using evenly spaced nodes you use the roots of Chebyshev polynomials, the interpolating polynomials do in fact converge, and converge quickly. If the kth derivative of f has bounded variation, then the error in interpolating f at n Chebyshev points is O(n^-k).
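
To make the contrast concrete, here's a minimal sketch along the same lines that compares the maximum interpolation error at evenly spaced nodes with the error at Chebyshev nodes, i.e. the roots of the Chebyshev polynomials scaled from [-1, 1] to [-5, 5], as the number of nodes grows.

    # Sketch comparing evenly spaced vs. Chebyshev nodes for Runge's example
    import numpy as np
    from scipy.interpolate import BarycentricInterpolator

    def f(x):
        return 1 / (1 + x**2)

    x = np.linspace(-5, 5, 2000)  # dense grid for estimating the max error

    def max_error(nodes):
        p = BarycentricInterpolator(nodes, f(nodes))
        return np.max(np.abs(f(x) - p(x)))

    for n in [9, 17, 33, 65]:
        even = np.linspace(-5, 5, n)
        j = np.arange(n)
        cheb = 5 * np.cos((2*j + 1) * np.pi / (2*n))  # Chebyshev roots, scaled to [-5, 5]
        print(f"n = {n:2d}: evenly spaced {max_error(even):.1e}, Chebyshev {max_error(cheb):.1e}")

Since this particular f is analytic on [-5, 5], the error at Chebyshev nodes decreases geometrically as n grows, while the error at evenly spaced nodes grows without bound.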
