Here's a collection of different ways to generate sine waves in software.

Calling the library function sin() is by far the simplest technique: the library does all of the work. However, unless you have dedicated co-processor hardware, this is also one of the slowest ways.

In the table look up approach, you first generate an array of sine values and store them in memory. Typically, one makes use of the symmetry properties of the sine wave to minimize the memory requirements. For example, suppose we wish to obtain y = sin(x) in one-degree steps. For x between zero and 90 degrees, we could create the array:

float sine[91], pi = 3.141592653;

for(int i = 0; i <= 90; i++)
    sine[i] = sin(pi/180 * i);

Then, if we want the sine of 45 degrees, we simply write

y = sine[45];

Now, to obtain the other three quarters of the circle, we can use the symmetry of the sine wave. Each quadrant is obtained as follows:

y =  sine[180 - x];   /*  90 <= x <= 180 */
y = -sine[x - 180];   /* 180 <= x <= 270 */
y = -sine[360 - x];   /* 270 <= x <= 360 */

The table look up technique works very well; however, it is limited to relatively few values (only 360 in the previous example). The apparent resolution may be increased by interpolating between successive values. For example, if we want the sine of 45.5 degrees, we write

y = sine[45] + (sine[46] - sine[45])/2;

We have taken the sine of 45 degrees and added to it the one half degree interpolation between it and the sine of 46 degrees. In general, the first order interpolation formula goes as follows:

y = sine[x] + (sine[x+1] - sine[x]) * delta_x

where x is the integer part of the angle and delta_x is the fractional part (between 0 and 1).

A natural question to ask is, "How much error does linear interpolation introduce?" Well, you can either use the Cauchy Remainder Theorem for Polynomial Interpolation (1) or you can do a brute force calculation. I used the brute force approach (I only learned of Cauchy's theorem later) and calculated a relative error, which I define to be:

relative error = approximated sine / actual sine.

While this formula appears to blow up when sine is zero, my calculations (which may be prone to error) indicate the max relative error is:

max relative error = cos(w*delta_t/2)

where w is the sinusoid's angular frequency and delta_t is the separation between the tabulated interpolation points. The frequency, as the example below shows, is really the normalized one. In other words, if the table spans one full sine wave, the normalized frequency is 2*pi. As an example, suppose you want to know how much error to expect from a table of 256 values at a frequency of 60 Hz.

w, which is in radians per second, is related to f by:

w = 2*pi*f = 120*pi

delta_t is related to the period, T (= 1/f), and the number of tabulated points, N:

delta_t = T/N = 1/(f*N) = 1/(60*256)

When w and delta_t are multiplied, notice the frequency drops out:

w * delta_t = (2*pi*f) * (1/(f*N)) = 2*pi/N.

Substituting back into the formula:

max rel error = cos(2*pi/N/2) = cos(pi/N)

Thus the max error depends only on the number of samples in the table, N, as one might intuitively expect. For this example:

max relative error = cos(pi/256) = 0.9999247

or an error of about 0.0075%. Since a relative error of 1 would mean no error, we are very close with 256 values in our table.

In fact, this may be too good. So next we might ask, "How many samples do I need in my table to keep the error below 1%?" (or whatever percentage you desire). We can work backwards and show:

1 - rel error < 1%

or,

1 - cos(pi/N) < 1%

cos(pi/N) > .99

N > pi / arccos(.99) ~ 22.19

Since N is an integer, we would need a table containing 23 samples of the sine wave. We would probably increase the size to 32 for practical reasons, and also divide it by four by using the symmetry arguments described above. So, if we have 8 equally spaced samples of the first quadrant of a sine wave, we can use interpolation to find the value of the sine wave to within 1%. That's not bad!

Suppose you need a consecutive sequence of evenly spaced sine values like:

sin(1), sin(1.1), sin(1.2), ..., sin(9.9)

Furthermore, suppose you don't feel like creating a table to store them. Consider the trigonometric identities:

sin(a+b) = sin(a)cos(b) + cos(a)sin(b)

cos(a+b) = cos(a)cos(b) - sin(a)sin(b)

We can define the starting angle as *a* and the step in angles as *b*, and precompute the following:

s1 = sin(a);
c1 = cos(a);
sd = sin(b);
cd = cos(b);

For the first iteration, we have

sin(a+b) = s1 * cd + c1 * sd;
cos(a+b) = c1 * cd - s1 * sd;

Now, let `s1 = sin(a+b)` and `c1 = cos(a+b)`.

For the second iteration:

sin(a+2*b) = sin((a+b) + b) = sin(a+b)*cd + cos(a+b)*sd = s1 * cd + c1 * sd;
cos(a+2*b) = cos((a+b) + b) = cos(a+b)*cd - sin(a+b)*sd = c1 * cd - s1 * sd;

Again, let `s1 = sin(a+2*b)` and `c1 = cos(a+2*b)`. The third and successive steps are similar. These steps can be collected into a simple program:

s1 = sin(a);
c1 = cos(a);
sd = sin(b);
cd = cos(b);

for(i = 0; i < N; i++)
{
    temp = s1 * cd + c1 * sd;
    c1 = c1 * cd - s1 * sd;
    s1 = temp;
}

Each iteration requires four multiplications and two additions. While this is a substantial savings compared with computing each sine from scratch, the method shown below optimizes this approach one step further and cuts the computation in half.

In addition, notice that we have obtained the cosine values for *free* (a matter of perspective; they could be a burden if you don't need them). There are cases when having both the sine and cosine values is handy (e.g. to draw a circle). However, the next algorithm can produce sines only, or sines and cosines, more efficiently than this one.

The previous section required both sines and cosines to generate a series of equally spaced (in angle) sine values. However, if you only want sines (or cosines) there is a more efficient way to obtain them.

Using the nomenclature of the previous section, we can define the starting angle as *a* and the step in angles as *b*. We wish to compute:

sin(a), sin(a + b), sin(a + 2*b), sin(a + 3*b), ..., sin(a + n*b)

The goal is to recursively compute `sin(a+n*b)` using the two previous values in the sine series, `sin(a + (n-1)*b)` and `sin(a + (n-2)*b)`:

`sin(a + n*b) = x * sin(a + (n-2)*b) + y * sin(a + (n-1)*b)`

Here, x and y are two constants to be determined. Note that the n'th value depends on the two previous values, the (n-1)'th and the (n-2)'th. Re-arranging and simplifying:

sin(a + n*b) = x * sin(a + n*b - 2*b) + y * sin(a + n*b - 1*b)
sin(a + n*b) = x * [sin(a + n*b) * cos(2*b) - cos(a + n*b) * sin(2*b)] + y * [sin(a + n*b) * cos(b) - cos(a + n*b) * sin(b)]
sin(a + n*b) = [x * cos(2*b) + y * cos(b)] * sin(a + n*b) - [x * sin(2*b) + y * sin(b)] * cos(a + n*b)

For this to be true for all n, we must have the two expressions in brackets satisfy:

`[x * cos(2*b) + y * cos(b)] = 1`

`[x * sin(2*b) + y * sin(b)] = 0`

which, when solved, yields

`x = -1`

`y = 2*cos(b)`

Finally, substituting into the original equation we get:

`sin(a + n*b) = -sin(a + (n-2)*b) + 2*cos(b) * sin(a + (n-1)*b)`

The fortuitous "x=-1" reduces a multiplication to a simple subtraction. Consequently, we only have to do one multiplication and one addition (subtraction) per iteration.

Here's a simple program to implement the algorithm:

c = 2 * cos(b);
snm2 = sin(a + b);
snm1 = sin(a + 2*b);

for(i = 0; i < N; i++)
{
    s = c * snm1 - snm2;
    snm2 = snm1;
    snm1 = s;
}

Incidentally, the cosine function has an identical recursive relationship:

`cos(a + n*b) = -cos(a + (n-2)*b) + 2*cos(b) * cos(a + (n-1)*b)`

Putting this into the program yields:

cb = 2 * cos(b);
snm2 = sin(a + b);
snm1 = sin(a + 2*b);
cnm2 = cos(a + b);
cnm1 = cos(a + 2*b);

for(i = 0; i < N; i++)
{
    s = cb * snm1 - snm2;
    c = cb * cnm1 - cnm2;
    snm2 = snm1;
    cnm2 = cnm1;
    snm1 = s;
    cnm1 = c;
}

Two multiplications and two additions are required at each step to produce both the sine and the cosine. Counting multiplications, this is twice as fast as the previous algorithm.

When using this algorithm (or the previous one), be aware that round-off errors will accumulate. This may not be too serious a problem for a series of, say, 50 terms when floating point arithmetic is used. However, if we want 8-bit accuracy and start off with one bit of error, the accumulated error rapidly eats into the useful dynamic range.

The Taylor series of the sine function is:

sin(x) = x - (x^3)/3! + (x^5)/5! - (x^7)/7! + ...

Keep in mind that x is in radians and not in degrees!

For small values of x, only the first few terms are needed. However, as x approaches pi/2, many terms are needed (assuming you wish to have more than just a few significant figures). The number of needed terms is easy to compute since each successive term is smaller than its predecessor. For example, if we want to compute sin(pi/4) to 10 significant figures, then we have to compute all terms up to the one satisfying the inequality:

10^(-10) < ((pi/4)^n)/n!,

where n is odd.
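
A short search makes the term count concrete. This sketch (first_negligible_term is a name coined here) steps through the odd-order terms until one drops below the tolerance:

```c
#include <math.h>

/* First odd n whose Taylor term x^n/n! falls below tol; every earlier
   term must be kept. */
static int first_negligible_term(double x, double tol)
{
    double term = x;   /* the n = 1 term */
    int n = 1;
    while (term >= tol) {
        n += 2;
        term *= x * x / ((double)(n - 1) * n);   /* step to the next odd term */
    }
    return n;
}
```

For x = pi/4 and a tolerance of 10^(-10), the search stops at n = 13, meaning the terms through n = 11 must be kept.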

While it is possible to use the Taylor series over the entire range (minus to plus infinity), it is practical to limit the range to -pi/2 < x < pi/2. For angles outside this range, use the periodicity and symmetry of the sine function to map the angle into it.

Also, notice that it is not practical to compute x^3, x^5, x^7,... Instead, re-write the Taylor series in a nested form

sin(x) = x*(1 - (x^2)/3! + (x^4)/5! - (x^6)/7! + ...)

sin(x) = x*(1 - (x^2)*(1/3! - (x^2)/5! + (x^4)/7! - ...))

Or, dropping the higher order terms

sin(x) ~ x*(1 - (x^2)*(1/3! - (x^2)(1/5! - (x^2)/7! )))

Finally, the factorial terms may be factored:

sin(x) ~ x*(1 - (x^2)*(1 - (x^2)(1 - (x^2)/(6*7) ) / (4*5) ) /(2*3) )

Thus, we only need to calculate x^2 as opposed to x^7 and we don't have to calculate all of the factorial terms. Note that this technique of factoring a series is called "Horner's Rule".
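
As a sketch, the fully factored form translates directly into C (taylor_sin is a name coined here):

```c
#include <math.h>

/* Horner (nested) evaluation of the four-term Taylor series;
   x is in radians, intended for -pi/2 <= x <= pi/2. */
static double taylor_sin(double x)
{
    double x2 = x * x;   /* the only power we compute explicitly */
    return x * (1.0 - x2 * (1.0 - x2 * (1.0 - x2 / (6.0 * 7.0))
                                 / (4.0 * 5.0))
                         / (2.0 * 3.0));
}
```

With only four terms, the truncation error stays below the first omitted term, (pi/2)^9/9!, or about 1.6e-4 at the edge of the range.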

7. Small Angle Approximation

8. Polynomial Approximation

9. Infinite Product

10. Padé Approximation

A Padé approximation is a rational function whose power series agrees with the power series of the function it is approximating. For example, the ratio of two polynomials N(x) and D(x):

R(x) = N(x) / D(x)

is said to be a Padé approximation to P(x) if

R(0) = P(0)

R'(x) = P'(x) , evaluated at x=0

R''(x) = P''(x), evaluated at x=0, etc. up to the order of interest

Here, the primes refer to differentiation with respect to x. So in each case above, we differentiate R and P, and then evaluate them at x=0.

The Padé approximation for the sine function is

sin(x) ~ x * num(x) / den(x)

num(x) = 1 + n1*x^2 + n2*x^4 + n3*x^6

den(x) = 1 + d1*x^2 + d2*x^4 + d3*x^6

n1 = -325523/2283996

n2 = 34911/7613320

n3 = -479249/11511339840

d1 = 18381/761332

d2 = 1261/4567992

d3 = 2623/1644477120
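
Put together, here is a sketch of the approximation in C (pade_sin is a name coined here). Note that matching the Taylor series of sin(x)/x forces the x^6 numerator coefficient, n3, to be negative:

```c
#include <math.h>

/* Padé approximation to sin(x) using the coefficients listed above,
   evaluated in nested (Horner) form. */
static double pade_sin(double x)
{
    double x2  = x * x;
    double num = 1.0 + x2 * (-325523.0 / 2283996.0
                    + x2 * ( 34911.0 / 7613320.0
                    + x2 * (-479249.0 / 11511339840.0)));
    double den = 1.0 + x2 * ( 18381.0 / 761332.0
                    + x2 * (  1261.0 / 4567992.0
                    + x2 * (  2623.0 / 1644477120.0)));
    return x * num / den;
}
```

Because the rational function matches the sine's power series to high order, the error over -pi/2 <= x <= pi/2 is far smaller than a Taylor polynomial of comparable cost.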

References:

1) *Interpolation & Approximation*, Philip J. Davis. Dover Publications, Inc. 1975

2) *Numerical Mathematics and Computing*, Ward Cheney and David Kincaid. Brooks/Cole Publishing Company.

3) *The Art of Computer Programming Vol 1*, Donald E. Knuth, Addison-Wesley Publishing Company

This page is maintained by Scott Dattalo. You can reach me at: scott@dattalo.com Last modified on 10MAR00.