Within one percent
This post looks at some common approximations and determines the range over which they have an error of less than 1 percent. So everywhere in this post "≈" means "with relative error less than 1%."
Whether 1% relative error is good enough completely depends on context.
Constants
The familiar approximations for π and e are good to within 1%: π ≈ 22/7 and e ≈ 19/7. (OK, the approximation for e isn't so familiar, but it should be.)
Also, the speed of light is c ≈ 300,000 km/s and the fine structure constant is α ≈ 1/137. See also Koide's coincidence.
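As a quick sanity check (a sketch using only the standard library; the helper name `rel_error` is mine), the constant approximations above really are within 1%:

```python
import math

def rel_error(approx, true):
    """Relative error of an approximation to a true value."""
    return abs(approx - true) / abs(true)

print(rel_error(22/7, math.pi))  # about 0.0004, i.e. 0.04%
print(rel_error(19/7, math.e))   # about 0.0015, i.e. 0.15%
```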
Trig functions
The following hold for angles in radians.
- sin x ≈ x for |x| < 0.244.
- cos x ≈ 1 - x^2/2 for |x| < 0.662.
- tan x ≈ x for |x| < 0.173.
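Where does a bound like 0.244 come from? One way to find it is to solve for the x where the relative error of sin x ≈ x reaches exactly 1%. A sketch using plain bisection (the names here are my own):

```python
import math

def sin_rel_error(x):
    # Relative error of the approximation sin x ≈ x
    return (x - math.sin(x)) / math.sin(x)

# The relative error is increasing on (0, π/2), so bisect
# for the point where it crosses 0.01.
lo, hi = 0.1, 0.5
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if sin_rel_error(mid) < 0.01:
        lo = mid
    else:
        hi = mid

print(lo)  # about 0.244, the bound quoted above
```

The same bisection with tan or arctan in place of sin recovers the 0.173 bounds.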
Here again angles are in radians.
- arcsin x ≈ x for |x| < 0.242.
- arccos x ≈ π/2 - x for |x| < 0.4.
- arctan x ≈ x for |x| < 0.173.
Natural log has the following useful approximation:
- log(1 + x) ≈ x for -0.0199 < x < 0.0200.
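Note that the interval for log is slightly asymmetric. A sketch showing the relative error at both endpoints (`rel_error` is my name):

```python
import math

def rel_error(x):
    # Relative error of the approximation log(1 + x) ≈ x
    return abs(x - math.log1p(x)) / abs(math.log1p(x))

# Both endpoints sit just under 1% relative error, but the
# negative endpoint is slightly closer to zero.
print(rel_error(0.0200))
print(rel_error(-0.0199))
```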
Stirling's approximation leads to the following.
- Γ(x) ≈ √(2π/x) (x/e)^x for x > 8.2876.
- n! ≈ √(2π/(n+1)) ((n+1)/e)^(n+1) for n ≥ 8.
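For example (a sketch; the function name is mine), the shifted Stirling formula for n! at n = 10 comes within 1% of the exact value 3,628,800:

```python
import math

def stirling_factorial(n):
    # n! = Gamma(n + 1), so apply Stirling's formula with x = n + 1
    x = n + 1
    return math.sqrt(2 * math.pi / x) * (x / math.e) ** x

approx = stirling_factorial(10)
exact = math.factorial(10)
print(abs(approx - exact) / exact)  # under 0.01
```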
Stirling's approximation is different from the other approximations because it is an asymptotic approximation: it improves as its argument gets larger.
The rest of the approximations are valid over finite intervals. These intervals are symmetric when the function being approximated is symmetric, that is, even or odd. So, for example, the interval is symmetric for sine but not for log.
For sine and tangent, and their inverses, the absolute error is O(x^3) and the value is O(x), so the relative error is O(x^2). [1]
The widest interval is for cosine. That's because the absolute error and relative error are O(x^4). [2]
The narrowest interval is for log(1 + x), due to its lack of symmetry. The absolute error is O(x^2), the value is O(x), and so the relative error is only O(x).
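The O(x^2) behavior is easy to see numerically: halving x cuts the relative error of sin x ≈ x by roughly a factor of four. A sketch (the helper name is mine):

```python
import math

def rel_error(x):
    # Relative error of the approximation sin x ≈ x
    return abs(x - math.sin(x)) / math.sin(x)

ratio = rel_error(0.2) / rel_error(0.1)
print(ratio)  # close to 4, since the relative error is O(x^2)
```

The same experiment with cosine and its quadratic approximation gives a ratio near 16, consistent with O(x^4) relative error.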
Verification
Here's Python code to validate the claims above, assuming the maximum relative error always occurs on the ends, which it does in these examples. We only need to test one side of symmetric approximations to symmetric functions because they have symmetric error.
```python
from numpy import *
from scipy.special import gamma

def stirling_gamma(x):
    return sqrt(2*pi/x)*(x/e)**x

id = lambda x: x

for f, approx, x in [
        (sin,    id, 0.244),
        (tan,    id, 0.173),
        (arcsin, id, 0.242),
        (arctan, id, 0.173),
        (cos,    lambda x: 1 - 0.5*x*x, 0.662),
        (arccos, lambda x: 0.5*pi - x, 0.4),
        (log1p,  id, 0.02),
        (log1p,  id, -0.0199),
        (gamma,  stirling_gamma, 8.2876),
    ]:
    assert abs((f(x) - approx(x))/f(x)) < 0.01
```

Related posts
- Sine approximation for small angles
- Simple approximations for logarithms
- Simple approximation for e^x
- Simple approximation for the gamma function
- Two useful asymptotic series
[1] Odd functions have only terms with odd exponents in their series expansion around 0. The error near 0 in a truncated series is roughly equal to the first series term not included. That's why we get third order absolute error from a first order approximation.
[2] Even functions have only even terms in their series expansion, so our second order expansion for cosine has fourth order error. And because cos(0) = 1, the relative error is basically the absolute error near 0.
The post Within one percent first appeared on John D. Cook.