Fourier series
Decomposition of periodic functions
A Fourier series (/ˈfʊrieɪ, –iər/[1]) is a series expansion of a periodic function into a sum of trigonometric functions. The Fourier series is an example of a trigonometric series.[2] By expressing a function as a sum of sines and cosines, many problems involving the function become easier to analyze because trigonometric functions are well understood. For example, Fourier series were first used by Joseph Fourier to find solutions to the heat equation. This application is possible because the derivatives of trigonometric functions fall into simple patterns. Fourier series cannot be used to approximate arbitrary functions, because most functions have infinitely many terms in their Fourier series, and the series do not always converge. Well-behaved functions, for example smooth functions, have Fourier series that converge to the original function. The coefficients of the Fourier series are determined by integrals of the function multiplied by trigonometric functions, described in Fourier series § Definition.
The study of the convergence of Fourier series focuses on the behavior of the partial sums, which means studying the behavior of the sum as more and more terms from the series are summed. The figures below illustrate some partial Fourier series results for the components of a square wave.
-
A square wave (represented as the blue dot) is approximated by its sixth partial sum (represented as the purple dot), formed by summing the first six terms (represented as arrows) of the square wave’s Fourier series. Each arrow starts at the vertical sum of all the arrows to its left (i.e. the previous partial sum).
-
The first four partial sums of the Fourier series for a square wave. As more harmonics are added, the partial sums converge to (become more and more like) the square wave.
-
Function $s_6(x)$ (in red) is a Fourier series sum of 6 harmonically related sine waves (in blue). Its Fourier transform $S(f)$ is a frequency-domain representation that reveals the amplitudes of the summed sine waves.
Fourier series are closely related to the Fourier transform, a more general tool that can even find the frequency information for functions that are not periodic. Periodic functions can be identified with functions on a circle; for this reason Fourier series are the subject of Fourier analysis on the circle group, denoted by $\mathbb{T}$ or $S_1$. The Fourier transform is also part of Fourier analysis, but is defined for functions on $\mathbb{R}^n$.
Since Fourier’s time, many different approaches to defining and understanding the concept of Fourier series have been discovered, all of which are consistent with one another, but each of which emphasizes different aspects of the topic. Some of the more powerful and elegant approaches are based on mathematical ideas and tools that were not available in Fourier’s time. Fourier originally defined the Fourier series for real-valued functions of real arguments, and used the sine and cosine functions in the decomposition. Many other Fourier-related transforms have since been defined, extending his initial idea to many applications and birthing an area of mathematics called Fourier analysis.
The Fourier series is named in honor of Jean-Baptiste Joseph Fourier (1768–1830), who made important contributions to the study of trigonometric series, after preliminary investigations by Leonhard Euler, Jean le Rond d’Alembert, and Daniel Bernoulli.[A] Fourier introduced the series for the purpose of solving the heat equation in a metal plate, publishing his initial results in his 1807 Mémoire sur la propagation de la chaleur dans les corps solides (Treatise on the propagation of heat in solid bodies), and publishing his Théorie analytique de la chaleur (Analytical theory of heat) in 1822. The Mémoire introduced Fourier analysis, specifically Fourier series. Fourier’s research established that an arbitrary (at first continuous,[3] and later generalized to any piecewise-smooth[4]) function can be represented by a trigonometric series. The first announcement of this great discovery was made by Fourier in 1807, before the French Academy.[5] Early ideas of decomposing a periodic function into the sum of simple oscillating functions date back to the 3rd century BC, when ancient astronomers proposed an empirical model of planetary motions based on deferents and epicycles.
Independently of Fourier, the astronomer Friedrich Wilhelm Bessel introduced Fourier series to solve Kepler’s equation. He published his work in 1819, unaware of Fourier’s work, which remained unpublished until 1822.[6]
The heat equation is a partial differential equation. Prior to Fourier’s work, no solution to the heat equation was known in the general case, although particular solutions were known if the heat source behaved in a simple way, in particular, if the heat source was a sine or cosine wave. These simple solutions are now sometimes called eigensolutions. Fourier’s idea was to model a complicated heat source as a superposition (or linear combination) of simple sine and cosine waves, and to write the solution as a superposition of the corresponding eigensolutions. This superposition or linear combination is called the Fourier series.
From a modern point of view, Fourier’s results are somewhat informal, due to the lack of a precise notion of function and integral in the early nineteenth century. Later, Peter Gustav Lejeune Dirichlet[7] and Bernhard Riemann[8][9][10] expressed Fourier’s results with greater precision and formality.
Although the original motivation was to solve the heat equation, it later became obvious that the same techniques could be applied to a wide array of mathematical and physical problems, and especially those involving linear differential equations with constant coefficients, for which the eigensolutions are sinusoids. The Fourier series has many such applications in electrical engineering, vibration analysis, acoustics, optics, signal processing, image processing, quantum mechanics, econometrics,[11] shell theory,[12] etc.
Beginnings
Joseph Fourier wrote[13]
$$\varphi(y)=a_0\cos\frac{\pi y}{2}+a_1\cos 3\frac{\pi y}{2}+a_2\cos 5\frac{\pi y}{2}+\cdots.$$
Multiplying both sides by $\cos(2k+1)\frac{\pi y}{2}$, and then integrating from $y=-1$ to $y=+1$ yields:
$$a_k=\int_{-1}^{1}\varphi(y)\cos(2k+1)\frac{\pi y}{2}\,dy.$$
— Joseph Fourier, Mémoire sur la propagation de la chaleur dans les corps solides (1807).
This immediately gives any coefficient ak of the trigonometric series for φ(y) for any function which has such an expansion. It works because if φ has such an expansion, then (under suitable convergence assumptions) the integral
$$\begin{aligned}&\int_{-1}^{1}\varphi(y)\cos(2k+1)\frac{\pi y}{2}\,dy\\&=\int_{-1}^{1}\left(a\cos\frac{\pi y}{2}\cos(2k+1)\frac{\pi y}{2}+a'\cos 3\frac{\pi y}{2}\cos(2k+1)\frac{\pi y}{2}+\cdots\right)\,dy\end{aligned}$$
can be carried out term-by-term. But all terms involving
$$\cos(2j+1)\frac{\pi y}{2}\cos(2k+1)\frac{\pi y}{2}$$
for $j\neq k$ vanish when integrated from $-1$ to $1$, leaving only the $k^{\text{th}}$ term, which is 1.
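The orthogonality argument above is easy to check numerically. The following is a minimal sketch (our own illustration; the grid size and helper name are assumptions, not from the text), using midpoint quadrature on $[-1,1]$:

```python
# Numerical sanity check that the modes cos((2j+1)*pi*y/2) are orthogonal
# on [-1, 1] and that each has squared norm 1 -- the fact that isolates a
# single coefficient a_k in Fourier's derivation.
import numpy as np

M = 200_000
dy = 2.0 / M
y = -1.0 + (np.arange(M) + 0.5) * dy  # midpoint grid on [-1, 1]

def mode(j):
    """The half-integer cosine mode cos((2j+1) * pi * y / 2)."""
    return np.cos((2 * j + 1) * np.pi * y / 2)

for j in range(4):
    for k in range(4):
        inner = np.sum(mode(j) * mode(k)) * dy  # approximate the integral
        expected = 1.0 if j == k else 0.0
        assert abs(inner - expected) < 1e-8, (j, k, inner)

print("orthogonality verified")
```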
In these few lines, which are close to the modern formalism used in Fourier series, Fourier revolutionized both mathematics and physics. Although similar trigonometric series were previously used by Euler, d’Alembert, Daniel Bernoulli and Gauss, Fourier believed that such trigonometric series could represent any arbitrary function. In what sense that is actually true is a somewhat subtle issue and the attempts over many years to clarify this idea have led to important discoveries in the theories of convergence, function spaces, and harmonic analysis.
When Fourier submitted a later competition essay in 1811, the committee (which included Lagrange, Laplace, Malus and Legendre, among others) concluded: “…the manner in which the author arrives at these equations is not exempt of difficulties and…his analysis to integrate them still leaves something to be desired on the score of generality and even rigour“.[14]
Fourier’s motivation
The Fourier series expansion of the sawtooth function (below) looks more complicated than the simple formula $s(x)=\tfrac{x}{\pi}$, so it is not immediately apparent why one would need the Fourier series. While there are many applications, Fourier’s motivation was in solving the heat equation. For example, consider a metal plate in the shape of a square whose sides measure $\pi$ meters, with coordinates $(x,y)\in[0,\pi]\times[0,\pi]$. If there is no heat source within the plate, and if three of the four sides are held at 0 degrees Celsius, while the fourth side, given by $y=\pi$, is maintained at the temperature gradient $T(x,\pi)=x$ degrees Celsius, for $x$ in $(0,\pi)$, then one can show that the stationary heat distribution (or the heat distribution after a long time has elapsed) is given by
$$T(x,y)=2\sum_{n=1}^{\infty}\frac{(-1)^{n+1}}{n}\sin(nx)\,\frac{\sinh(ny)}{\sinh(n\pi)}.$$
Here, $\sinh$ is the hyperbolic sine function. This solution of the heat equation is obtained by multiplying each term of the equation from Analysis § Example by $\sinh(ny)/\sinh(n\pi)$. While our example function $s(x)$ seems to have a needlessly complicated Fourier series, the heat distribution $T(x,y)$ is nontrivial. The function $T$ cannot be written as a closed-form expression. This method of solving the heat problem was made possible by Fourier’s work.
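Although $T$ has no closed form, the series can be evaluated by truncation. The sketch below (function names and the truncation level are our own; the $\sinh$ ratio is rewritten with decaying exponentials purely for numerical stability) checks the truncated solution against the plate’s boundary conditions:

```python
# Evaluate the truncated series for the stationary heat distribution
# T(x, y) = 2 * sum_n (-1)^(n+1)/n * sin(n x) * sinh(n y)/sinh(n pi).
import numpy as np

def T(x, y, terms=2000):
    n = np.arange(1, terms + 1)
    # sinh(n y)/sinh(n pi) rewritten with decaying exponentials to avoid overflow
    ratio = np.exp(n * (y - np.pi)) * (1 - np.exp(-2 * n * y)) / (1 - np.exp(-2 * n * np.pi))
    return 2 * np.sum((-1.0) ** (n + 1) / n * np.sin(n * x) * ratio)

# The three cold sides (x = 0, x = pi, y = 0) stay at 0 degrees:
assert abs(T(0.0, 1.0)) < 1e-9
assert abs(T(1.0, 0.0)) < 1e-12
assert abs(T(np.pi, 1.0)) < 1e-6
# On the hot side y = pi, the series converges (slowly) to T(x, pi) = x:
print(T(1.0, np.pi))  # close to 1.0, improving as more terms are added
```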
Other applications
Another application is solving the Basel problem using Parseval’s theorem. The example generalizes, and one may compute $\zeta(2n)$ for any positive integer $n$.
The Fourier series of a complex-valued $P$-periodic function $s(x)$, integrable over the interval $[0,P]$ on the real line, is defined as a trigonometric series of the form
$$\sum_{n=-\infty}^{\infty}c_n\,e^{i2\pi\tfrac{n}{P}x},$$
such that the Fourier coefficients $c_n$ are complex numbers defined by the integral[15][16]
$$c_n=\frac{1}{P}\int_{0}^{P}s(x)\,e^{-i2\pi\tfrac{n}{P}x}\,dx.$$
The series does not necessarily converge (in the pointwise sense) and, even if it does, it is not necessarily equal to $s(x)$. Only when certain conditions are satisfied (e.g. if $s(x)$ is continuously differentiable) does the Fourier series converge to $s(x)$, i.e.,
$$s(x)=\sum_{n=-\infty}^{\infty}c_n\,e^{i2\pi\tfrac{n}{P}x}.$$
For functions satisfying the Dirichlet sufficiency conditions, pointwise convergence holds.[17] However, these are not necessary conditions and there are many theorems about different types of convergence of Fourier series (e.g. uniform convergence or mean convergence).[18] The definition naturally extends to the Fourier series of a (periodic) distribution $s$ (also called a Fourier–Schwartz series).[19] Then the Fourier series converges to $s(x)$ in the distribution sense.[20]
The process of determining the Fourier coefficients of a given function or signal is called analysis, while forming the associated trigonometric series (or its various approximations) is called synthesis.
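The analysis step can be illustrated by computing the coefficients $c_n$ numerically for a signal whose coefficients are known by construction. The test signal below is our own illustration, not from the article; the Riemann sum over one period is exact here because the signal contains only a few harmonics:

```python
# "Analysis": recover c_n = (1/P) * integral over one period of
# s(x) * exp(-i 2 pi n x / P) dx for a signal built from known harmonics.
import numpy as np

P = 2.0
N = 4096
x = np.arange(N) * P / N  # uniform samples over one period

# 3 (mean) + 2 cos(1st harmonic) + 4 sin(2nd harmonic)
s = 3 + 2 * np.cos(2 * np.pi * x / P) + 4 * np.sin(2 * np.pi * 2 * x / P)

def c(n):
    # Riemann sum over one full period; exact for this band-limited signal
    return np.sum(s * np.exp(-1j * 2 * np.pi * n * x / P)) / N

assert np.isclose(c(0), 3)    # c_0 is the mean value
assert np.isclose(c(1), 1)    # 2 cos(theta) contributes 1 to c_1 and c_-1
assert np.isclose(c(2), -2j)  # 4 sin(2 theta) contributes -2i to c_2
assert np.isclose(c(-2), 2j)
print("recovered coefficients match the construction")
```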
Synthesis
A Fourier series can be written in several equivalent forms, shown here as the $N^{\text{th}}$ partial sum $s_N(x)$ of the Fourier series of $s(x)$:[21]
$s(x)$ in blue, defined only over the red interval from 0 to $P$. The function can be analyzed over this interval to produce the Fourier series in the bottom graph. The Fourier series is always a periodic function, even if the original function $s(x)$ is not.
$$s_N(x)=a_0+\sum_{n=1}^{N}\left(a_n\cos\left(2\pi\tfrac{n}{P}x\right)+b_n\sin\left(2\pi\tfrac{n}{P}x\right)\right)\tag{Eq.1}$$
$$s_N(x)=\sum_{n=-N}^{N}c_n\,e^{i2\pi\tfrac{n}{P}x}\tag{Eq.2}$$
The harmonics are indexed by an integer, $n$, which is also the number of cycles the corresponding sinusoids make in interval $P$. Therefore, the sinusoids have:
- a wavelength equal to $\tfrac{P}{n}$ in the same units as $x$.
- a frequency equal to $\tfrac{n}{P}$ in the reciprocal units of $x$.
These series can represent functions that are just a sum of one or more frequencies in the harmonic spectrum. In the limit $N\to\infty$, a trigonometric series can also represent the intermediate frequencies or non-sinusoidal functions because of the infinite number of terms.
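As a concrete illustration of such partial sums, the following sketch uses the well-known square-wave series $\tfrac{4}{\pi}\sum_{n\text{ odd}}\tfrac{\sin(nx)}{n}$ (a standard result, not derived in this section) and shows the partial sums settling toward the wave’s value at a point away from the jumps:

```python
# Partial sums s_N of the square wave's Fourier sine series,
# (4/pi) * sum over odd n of sin(n x)/n, which converge pointwise
# to +/-1 away from the discontinuities.
import numpy as np

def s_N(x, N):
    """N-term partial sum using the odd harmonics 1, 3, ..., 2N-1."""
    n = np.arange(1, 2 * N, 2)
    return (4 / np.pi) * np.sum(np.sin(np.outer(x, n)) / n, axis=1)

x = np.array([np.pi / 2])  # a point where the square wave equals +1
for N in (1, 10, 100, 1000):
    print(N, s_N(x, N)[0])   # values tend to 1 as N grows
```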
Analysis
The coefficients can be given or assumed, as with a music synthesizer or the time samples of a waveform. In the latter case, the exponential form of the Fourier series synthesizes a discrete-time Fourier transform, where the variable $x$ represents frequency instead of time. In general, the coefficients are determined by analysis of a given function $s(x)$ whose domain of definition is an interval of length $P$:
$$\begin{aligned}a_0&=\frac{1}{P}\int_{P}s(x)\,dx&\\a_n&=\frac{2}{P}\int_{P}s(x)\cos\left(2\pi\tfrac{n}{P}x\right)\,dx\qquad&\text{for }n\geq 1\\b_n&=\frac{2}{P}\int_{P}s(x)\sin\left(2\pi\tfrac{n}{P}x\right)\,dx\qquad&\text{for }n\geq 1\end{aligned}\tag{Eq.3}$$
The $\tfrac{2}{P}$ scale factor follows from substituting Eq.1 into Eq.3 and utilizing the orthogonality of the trigonometric system.[23] The equivalence of Eq.1 and Eq.2 follows from Euler’s formula
$$\cos x=\frac{e^{ix}+e^{-ix}}{2},\quad\sin x=\frac{e^{ix}-e^{-ix}}{2i},$$
resulting in:
$$c_n=\begin{cases}\tfrac{1}{2}(a_n-ib_n)&\text{if }n>0,\\a_n&\text{if }n=0,\\\tfrac{1}{2}(a_{-n}+ib_{-n})&\text{if }n<0,\end{cases}$$
with $c_0$ being the mean value of $s$ on the interval $P$.[24] Conversely:
$$\begin{aligned}a_0&=c_0&\\a_n&=c_n+c_{-n}\qquad&\text{for }n>0\\b_n&=i(c_n-c_{-n})\qquad&\text{for }n>0\end{aligned}$$
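The round trip between the two coefficient conventions is easy to verify numerically. A small sketch (the random coefficients are our own illustration):

```python
# Check the conversion between sine-cosine coefficients (a_n, b_n)
# and exponential coefficients c_n, in both directions.
import numpy as np

rng = np.random.default_rng(0)
a = rng.normal(size=5)  # a_1..a_5, arbitrary real coefficients
b = rng.normal(size=5)  # b_1..b_5

for n in range(1, 6):
    c_pos = 0.5 * (a[n - 1] - 1j * b[n - 1])  # c_n   for n > 0
    c_neg = 0.5 * (a[n - 1] + 1j * b[n - 1])  # c_-n
    # Conversely: a_n = c_n + c_-n  and  b_n = i (c_n - c_-n)
    assert np.isclose(c_pos + c_neg, a[n - 1])
    assert np.isclose(1j * (c_pos - c_neg), b[n - 1])
print("round trip between (a_n, b_n) and c_n is exact")
```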
Example: $s(x)=x/\pi$ on the interval $(-\pi,\pi]$
Consider a sawtooth function:
$$s(x)=s(x+2\pi k)=\frac{x}{\pi},\quad\text{for }-\pi<x<\pi,\text{ and }k\in\mathbb{Z}.$$
In this case, the Fourier coefficients are given by
$$\begin{aligned}a_0&=0.\\a_n&=\frac{1}{\pi}\int_{-\pi}^{\pi}s(x)\cos(nx)\,dx=0,\quad n\geq 1.\\b_n&=\frac{1}{\pi}\int_{-\pi}^{\pi}s(x)\sin(nx)\,dx\\&=-\frac{2}{\pi n}\cos(n\pi)+\frac{2}{\pi^2 n^2}\sin(n\pi)\\&=\frac{2\,(-1)^{n+1}}{\pi n},\quad n\geq 1.\end{aligned}$$
It can be shown that the Fourier series converges to $s(x)$ at every point $x$ where $s$ is differentiable, and therefore:
$$\begin{aligned}s(x)&=a_0+\sum_{n=1}^{\infty}\left[a_n\cos(nx)+b_n\sin(nx)\right]\\&=\frac{2}{\pi}\sum_{n=1}^{\infty}\frac{(-1)^{n+1}}{n}\sin(nx),\quad\text{for }(x-\pi)\text{ not a multiple of }2\pi.\end{aligned}$$
When $x=\pi$, the Fourier series converges to 0, which is the half-sum of the left- and right-limits of $s$ at $x=\pi$. This is a particular instance of the Dirichlet theorem for Fourier series.
This example leads to a solution of the Basel problem.
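The closed-form coefficients of this example can be cross-checked numerically, along with pointwise convergence at a differentiable point. The grid sizes below are our own choices:

```python
# Check the sawtooth coefficients b_n = 2*(-1)^(n+1)/(pi*n) by midpoint
# quadrature, then evaluate the partial sum at the differentiable point x = 1.
import numpy as np

M = 200_000
x = -np.pi + (np.arange(M) + 0.5) * (2 * np.pi / M)  # midpoints on (-pi, pi)
s = x / np.pi                                        # the sawtooth over one period

for k in range(1, 6):
    b_num = np.sum(s * np.sin(k * x)) * (2 * np.pi / M) / np.pi
    b_formula = 2 * (-1) ** (k + 1) / (np.pi * k)
    assert abs(b_num - b_formula) < 1e-6

# Partial sum (2/pi) * sum (-1)^(n+1)/n * sin(n x) at x = 1, target s(1) = 1/pi:
n = np.arange(1, 5001)
partial = (2 / np.pi) * np.sum((-1.0) ** (n + 1) / n * np.sin(n * 1.0))
assert abs(partial - 1.0 / np.pi) < 1e-3
print("coefficients and pointwise convergence check out")
```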
Amplitude-phase form
If the function $s(x)$ is real-valued then the Fourier series can also be represented as[25][26]
$$s_N(x)=A_0+\sum_{n=1}^{N}A_n\cos\left(2\pi\tfrac{n}{P}x-\varphi_n\right)\tag{Eq.4}$$
where $A_n$ is the amplitude and $\varphi_n$ is the phase shift of the $n^{\text{th}}$ harmonic.
The equivalence of Eq.4 and Eq.1 follows from the trigonometric identity:
$$\cos\left(2\pi\tfrac{n}{P}x-\varphi_n\right)=\cos(\varphi_n)\cos\left(2\pi\tfrac{n}{P}x\right)+\sin(\varphi_n)\sin\left(2\pi\tfrac{n}{P}x\right),$$
which implies[27]
$$a_n=A_n\cos(\varphi_n)\quad\text{and}\quad b_n=A_n\sin(\varphi_n)$$
are the rectangular coordinates of a vector written in polar coordinates as $A_n\angle\varphi_n=a_n+ib_n$
where
$$A_n=\sqrt{a_n^2+b_n^2}\quad\text{and}\quad\varphi_n=\operatorname{atan2}(b_n,a_n)=-\operatorname{Arg}(c_n)$$
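These relations are a standard rectangular-to-polar conversion. A short sketch with illustrative values (the coefficients 3 and 4 are our own, not from the article):

```python
# Amplitude-phase form: A_n = sqrt(a_n^2 + b_n^2), phi_n = atan2(b_n, a_n),
# so that a_n cos(theta) + b_n sin(theta) = A_n cos(theta - phi_n).
import math

a_n, b_n = 3.0, 4.0           # illustrative coefficients
A_n = math.hypot(a_n, b_n)    # amplitude: sqrt(3^2 + 4^2) = 5.0
phi_n = math.atan2(b_n, a_n)  # phase, placed in the correct quadrant

# Verify the identity at a few sample angles
for theta in (0.0, 0.7, 2.0, 5.5):
    lhs = a_n * math.cos(theta) + b_n * math.sin(theta)
    rhs = A_n * math.cos(theta - phi_n)
    assert math.isclose(lhs, rhs)
print(A_n)  # 5.0
```

Using `atan2` rather than `arctan(b_n/a_n)` keeps the phase in the correct quadrant even when $a_n$ is negative or zero.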
An example of determining the parameter $\varphi_n$ for one value of $n$ is shown in Figure 2. It is the value of $\varphi$ at the maximum correlation between $s(x)$ and a cosine template, $\cos\left(2\pi\tfrac{n}{P}x-\varphi\right)$. The blue graph is the cross-correlation function, also known as a matched filter:
$$\begin{aligned}X(\varphi)&=\int_{P}s(x)\cdot\cos\left(2\pi\tfrac{n}{P}x-\varphi\right)\,dx,\quad\varphi\in[0,2\pi]\\&=\cos(\varphi)\underbrace{\int_{P}s(x)\cdot\cos\left(2\pi\tfrac{n}{P}x\right)\,dx}_{X(0)}+\sin(\varphi)\underbrace{\int_{P}s(x)\cdot\sin\left(2\pi\tfrac{n}{P}x\right)\,dx}_{X(\pi/2)}\end{aligned}$$
Fortunately, it is not necessary to evaluate this entire function, because its derivative is zero at the maximum:
$$X'(\varphi)=\sin(\varphi)\cdot X(0)-\cos(\varphi)\cdot X(\pi/2)=0,\quad\text{at }\varphi=\varphi_n.$$
Hence
$$\varphi_n\equiv\arctan(b_n/a_n)=\arctan\bigl(X(\pi/2)/X(0)\bigr).$$
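The two-correlation recipe can be sketched numerically: build a single harmonic with a known phase, compute $X(0)$ and $X(\pi/2)$ by quadrature, and recover the phase (the signal parameters below are our own illustration):

```python
# Recover the phase of a single harmonic from the two correlations
# X(0) and X(pi/2), as in the matched-filter derivation.
import numpy as np

P, n, phi_true = 1.0, 3, 0.9
M = 100_000
x = (np.arange(M) + 0.5) * P / M               # midpoint grid over one period
s = 2.5 * np.cos(2 * np.pi * n / P * x - phi_true)

X0    = np.sum(s * np.cos(2 * np.pi * n / P * x)) * (P / M)  # X(0)
Xhalf = np.sum(s * np.sin(2 * np.pi * n / P * x)) * (P / M)  # X(pi/2)

phi_n = np.arctan2(Xhalf, X0)  # atan2 handles all quadrants
assert abs(phi_n - phi_true) < 1e-6
print(phi_n)
```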
Common notations
The notation $c_n$ is inadequate for discussing the Fourier coefficients of several different functions. Therefore, it is customarily replaced by a modified form of the function ($s$, in this case), such as $\widehat{s}(n)$ or $S[n]$, and functional notation often replaces subscripting:
$$\begin{aligned}s(x)&=\sum_{n=-\infty}^{\infty}\widehat{s}(n)\cdot e^{i2\pi\tfrac{n}{P}x}&&\text{(common mathematics notation)}\\&=\sum_{n=-\infty}^{\infty}S[n]\cdot e^{i2\pi\tfrac{n}{P}x}&&\text{(common engineering notation)}\end{aligned}$$
In engineering, particularly when the variable $x$ represents time, the coefficient sequence is called a frequency domain representation. Square brackets are often used to emphasize that the domain of this function is a discrete set of frequencies.
Another commonly used frequency domain representation uses the Fourier series coefficients to modulate a Dirac comb:
$$S(f)\triangleq\sum_{n=-\infty}^{\infty}S[n]\cdot\delta\left(f-\frac{n}{P}\right),$$
where $f$ represents a continuous frequency domain. When variable $x$ has units of seconds, $f$ has units of hertz. The “teeth” of the comb are spaced at multiples (i.e. harmonics) of $\tfrac{1}{P}$, which is called the fundamental frequency.
$s(x)$ can be recovered from this representation by an inverse Fourier transform:
$$\begin{aligned}\mathcal{F}^{-1}\{S(f)\}&=\int_{-\infty}^{\infty}\left(\sum_{n=-\infty}^{\infty}S[n]\cdot\delta\left(f-\frac{n}{P}\right)\right)e^{i2\pi fx}\,df\\&=\sum_{n=-\infty}^{\infty}S[n]\cdot\int_{-\infty}^{\infty}\delta\left(f-\frac{n}{P}\right)e^{i2\pi fx}\,df\\&=\sum_{n=-\infty}^{\infty}S[n]\cdot e^{i2\pi\tfrac{n}{P}x}\triangleq s(x).\end{aligned}$$
The constructed function $S(f)$ is therefore commonly referred to as a Fourier transform, even though the Fourier integral of a periodic function is not convergent at the harmonic frequencies.[C]
Some common pairs of periodic functions and their Fourier series coefficients are shown in the table below.
- $s(x)$ designates a periodic function with period $P$.
- $a_0,a_n,b_n$ designate the Fourier series coefficients (sine-cosine form) of the periodic function $s(x)$.
| Time domain $s(x)$ | Plot | Frequency domain (sine-cosine form) | Remarks | Reference |
|---|---|---|---|---|
| $s(x)=A\left\lvert\sin\left(\frac{2\pi}{P}x\right)\right\rvert$ for $0\leq x<P$ | | $a_0=\frac{2A}{\pi}$, $\ a_n=\begin{cases}\frac{-4A}{\pi}\frac{1}{n^2-1}&n\text{ even}\\0&n\text{ odd}\end{cases}$, $\ b_n=0$ | Full-wave rectified sine | [28]: p. 193 |
| $s(x)=\begin{cases}A\sin\left(\frac{2\pi}{P}x\right)&\text{for }0\leq x<P/2\\0&\text{for }P/2\leq x<P\end{cases}$ | | $a_0=\frac{A}{\pi}$, $\ a_n=\begin{cases}\frac{-2A}{\pi}\frac{1}{n^2-1}&n\text{ even}\\0&n\text{ odd}\end{cases}$, $\ b_n=\begin{cases}\frac{A}{2}&n=1\\0&n>1\end{cases}$ | Half-wave rectified sine | [28]: p. 193 |
| $s(x)=\begin{cases}A&\text{for }0\leq x<D\cdot P\\0&\text{for }D\cdot P\leq x<P\end{cases}$ | | $a_0=AD$, $\ a_n=\frac{A}{n\pi}\sin\left(2\pi nD\right)$, $\ b_n=\frac{2A}{n\pi}\left(\sin\left(\pi nD\right)\right)^2$ | $0\leq D\leq 1$ | |
| $s(x)=\frac{Ax}{P}$ for $0\leq x<P$ | | $a_0=\frac{A}{2}$, $\ a_n=0$, $\ b_n=\frac{-A}{n\pi}$ | | [28]: p. 192 |
| $s(x)=A-\frac{Ax}{P}$ for $0\leq x<P$ | | | | |
|
a 0 = A a n = 0 b n = A n {displaystyle {begin{aligned}a_{0}=&{frac {A}{2}}\a_{n}=&0\b_{n}=&{frac {A}{npi }}\end{aligned}}} |
[28]: p.192 | |
| <math xmlns="http://www.w3.org/1998/Math/MathML" alttext="{displaystyle s(x)={frac {4A}{P^{2}}}left(x-{frac {P}{2}}right)^{2}quad {text{for }}0leq x
s 4 P 2 ( x P ) 2 for 0 {displaystyle s(x)={frac {4A}{P^{2}}}left(x-{frac {P}{2}}right)^{2}quad {text{for }}0leq x<P} <img src="//wikimedia.org/api/rest_v1/media/math/render/svg/345db196841ac1e610ae5ef64593cad35f1d8db3" class="mwe-math-fallback-image-inline mw-invert skin-invert" aria-hidden="true" style="vertical-align: -2.505ex; width:38.42ex; height:6.509ex;" alt="{displaystyle s(x)={frac {4A}{P^{2}}}left(x-{frac {P}{2}}right)^{2}quad {text{for }}0leq x |
|
a 0 = A a n = 4 π 2 n 2 b n = 0 {displaystyle {begin{aligned}a_{0}=&{frac {A}{3}}\a_{n}=&{frac {4A}{pi ^{2}n^{2}}}\b_{n}=&0\end{aligned}}} |
[28]: p.193 |
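One row of the table can be spot-checked numerically. The sketch below, assuming nothing beyond the standard coefficient integrals (the quadrature grid and the values of $A$ and $P$ are arbitrary choices), verifies the sawtooth row: $s(x)=Ax/P$ on $[0,P)$ gives $a_0=A/2$ (the mean value), $a_n=0$, and $b_n=-A/(n\pi)$.

```python
import math

# Spot-check of the sawtooth row: s(x) = A*x/P on [0, P) should give
# a0 = A/2 (mean value), an = 0, bn = -A/(n*pi).
# A, P, and the grid size M are arbitrary choices for this sketch.
A, P, M = 2.0, 3.0, 20_000
h = P / M
xs = [(k + 0.5) * h for k in range(M)]  # midpoint rule over one period

def coeffs(s, n):
    """Midpoint-rule approximation of a_n and b_n for a P-periodic s."""
    an = (2 / P) * sum(s(x) * math.cos(2 * math.pi * n * x / P) for x in xs) * h
    bn = (2 / P) * sum(s(x) * math.sin(2 * math.pi * n * x / P) for x in xs) * h
    return an, bn

saw = lambda x: A * x / P
a0 = sum(saw(x) for x in xs) / M          # mean value of s over one period
a1, b1 = coeffs(saw, 1)
a2, b2 = coeffs(saw, 2)

print(round(a0, 4))   # 1.0      (= A/2)
print(round(b1, 4))   # -0.6366  (= -A/pi)
print(round(b2, 4))   # -0.3183  (= -A/(2*pi))
```

The midpoint rule is accurate here because the integrands are smooth polynomials times trigonometric factors on $[0,P]$.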
This table shows some mathematical operations in the time domain and the corresponding effect in the Fourier series coefficients. Notation:
- Complex conjugation is denoted by an asterisk.
- $s(x), r(x)$ designate $P$-periodic functions, or functions defined only for $x \in [0, P].$
- $S[n], R[n]$ designate the Fourier series coefficients (exponential form) of $s$ and $r.$
| Property | Time domain | Frequency domain (exponential form) | Remarks | Reference |
|---|---|---|---|---|
| Linearity | $a\cdot s(x)+b\cdot r(x)$ | $a\cdot S[n]+b\cdot R[n]$ | $a,b\in\mathbb{C}$ | |
| Time reversal / Frequency reversal | $s(-x)$ | $S[-n]$ | | [29]: p. 610 |
| Time conjugation | $s^{*}(x)$ | $S^{*}[-n]$ | | [29]: p. 610 |
| Time reversal & conjugation | $s^{*}(-x)$ | $S^{*}[n]$ | | |
| Real part in time | $\operatorname{Re}(s(x))$ | $\frac{1}{2}(S[n]+S^{*}[-n])$ | | |
| Imaginary part in time | $\operatorname{Im}(s(x))$ | $\frac{1}{2i}(S[n]-S^{*}[-n])$ | | |
| Real part in frequency | $\frac{1}{2}(s(x)+s^{*}(-x))$ | $\operatorname{Re}(S[n])$ | | |
| Imaginary part in frequency | $\frac{1}{2i}(s(x)-s^{*}(-x))$ | $\operatorname{Im}(S[n])$ | | |
| Shift in time / Modulation in frequency | $s(x-x_0)$ | $S[n]\cdot e^{-i2\pi\tfrac{x_0}{P}n}$ | $x_0\in\mathbb{R}$ | [29]: p. 610 |
| Shift in frequency / Modulation in time | $s(x)\cdot e^{i2\pi\tfrac{n_0}{P}x}$ | $S[n-n_0]$ | $n_0\in\mathbb{Z}$ | [29]: p. 610 |
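The shift/modulation row can be checked numerically. In this sketch the exponential-form coefficients $S[n]=\frac{1}{P}\int_P s(x)e^{-i2\pi nx/P}\,dx$ are approximated by a midpoint rule; the sample function and the shift $x_0$ are arbitrary choices, not part of the table.

```python
import cmath
import math

# Spot-check of the shift property: the coefficients of s(x - x0) equal
# S[n] * exp(-i*2*pi*(x0/P)*n). The sample s and x0 are arbitrary choices.
P, x0, M = 2.0, 0.3, 4_000
h = P / M

def S(f, n):
    """S[n] = (1/P) * integral over one period of f(x) e^{-i 2 pi n x / P}."""
    return sum(f((k + 0.5) * h) * cmath.exp(-2j * math.pi * n * (k + 0.5) * h / P)
               for k in range(M)) * h / P

s = lambda x: math.cos(2 * math.pi * x / P) + 0.5 * math.sin(4 * math.pi * x / P)
shifted = lambda x: s(x - x0)

for n in (1, 2):
    lhs = S(shifted, n)
    rhs = S(s, n) * cmath.exp(-2j * math.pi * (x0 / P) * n)
    print(abs(lhs - rhs) < 1e-9)   # True
```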
Symmetry relations
When the real and imaginary parts of a complex function are decomposed into their even and odd parts, there are four components, denoted below by the subscripts RE, RO, IE, and IO. And there is a one-to-one mapping between the four components of a complex time function and the four components of its complex frequency transform:[30][31]
$$
\begin{array}{rccccccccc}
\mathsf{Time\ domain} & s & = & s_{\mathrm{RE}} & + & s_{\mathrm{RO}} & + & i\,s_{\mathrm{IE}} & + & i\,s_{\mathrm{IO}} \\
& \Bigg\Updownarrow\mathcal{F} & & \Bigg\Updownarrow\mathcal{F} & & \Bigg\Updownarrow\mathcal{F} & & \Bigg\Updownarrow\mathcal{F} & & \Bigg\Updownarrow\mathcal{F} \\
\mathsf{Frequency\ domain} & S & = & S_{\mathrm{RE}} & + & i\,S_{\mathrm{IO}} & + & i\,S_{\mathrm{IE}} & + & S_{\mathrm{RO}}
\end{array}
$$
From this, various relationships are apparent, for example:
- The transform of a real-valued function $(s_{\mathrm{RE}}+s_{\mathrm{RO}})$ is the conjugate symmetric function $S_{\mathrm{RE}}+i\,S_{\mathrm{IO}}.$ Conversely, a conjugate symmetric transform implies a real-valued time domain.
- The transform of an imaginary-valued function $(i\,s_{\mathrm{IE}}+i\,s_{\mathrm{IO}})$ is the conjugate antisymmetric function $S_{\mathrm{RO}}+i\,S_{\mathrm{IE}},$ and the converse is true.
- The transform of a conjugate symmetric function $(s_{\mathrm{RE}}+i\,s_{\mathrm{IO}})$ is the real-valued function $S_{\mathrm{RE}}+S_{\mathrm{RO}},$ and the converse is true.
- The transform of a conjugate antisymmetric function $(s_{\mathrm{RO}}+i\,s_{\mathrm{IE}})$ is the imaginary-valued function $i\,S_{\mathrm{IE}}+i\,S_{\mathrm{IO}},$ and the converse is true.
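The first relation is easy to confirm numerically: a real-valued function has conjugate-symmetric coefficients, $S[-n]=S^{*}[n]$. A minimal sketch (the smooth, real-valued, 1-periodic sample function is an arbitrary choice):

```python
import cmath
import math

# A real-valued s has conjugate-symmetric coefficients: S[-n] = S*[n].
# The sample function is an arbitrary smooth, real-valued, 1-periodic choice.
M = 1_024
h = 1.0 / M

def S(f, n):
    return sum(f((k + 0.5) * h) * cmath.exp(-2j * math.pi * n * (k + 0.5) * h)
               for k in range(M)) * h

s = lambda x: math.exp(math.cos(2 * math.pi * x))
ok = all(abs(S(s, -n) - S(s, n).conjugate()) < 1e-12 for n in (1, 2, 3))
print(ok)   # True
```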
Riemann–Lebesgue lemma
If $s$ is integrable, then $\lim_{|n|\to\infty} S[n] = 0,$ $\lim_{n\to+\infty} a_n = 0,$ and $\lim_{n\to+\infty} b_n = 0.$
Parseval’s theorem
If $s$ belongs to $L^2(P)$ (periodic over an interval of length $P$) then:
$$\frac{1}{P}\int_P |s(x)|^2\,dx = \sum_{n=-\infty}^{\infty} \bigl|S[n]\bigr|^2.$$
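Parseval's theorem can be checked directly on a short trigonometric polynomial, whose coefficients are nonzero at only a few indices. The sample function below is an arbitrary choice; its mean square is $1^2 + 2^2/2 + 0.5^2/2 = 3.125$.

```python
import cmath
import math

# Numerical check of Parseval's theorem for a short trigonometric polynomial
# with nonzero coefficients only at n = 0, ±1, ±3.
P, M, N = 1.0, 4_096, 8
h = P / M
xs = [(k + 0.5) * h for k in range(M)]

s = lambda x: 1.0 + 2.0 * math.cos(2 * math.pi * x) - 0.5 * math.sin(6 * math.pi * x)

power = sum(abs(s(x)) ** 2 for x in xs) * h / P      # (1/P) * integral of |s|^2

def S(n):
    return sum(s(x) * cmath.exp(-2j * math.pi * n * x) for x in xs) * h / P

coeff_sum = sum(abs(S(n)) ** 2 for n in range(-N, N + 1))
print(round(power, 6), round(coeff_sum, 6))   # 3.125 3.125
```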
Plancherel’s theorem
If $c_0,\,c_{\pm 1},\,c_{\pm 2},\ldots$ are coefficients and $\sum_{n=-\infty}^{\infty} |c_n|^2 < \infty,$ then there is a unique function $s \in L^2(P)$ such that $S[n] = c_n$ for every $n.$
Convolution theorems
Given $P$-periodic functions, $s_P$ and $r_P,$ with Fourier series coefficients $S[n]$ and $R[n],$ $n \in \mathbb{Z}:$

- The pointwise product $h_P(x) \triangleq s_P(x)\cdot r_P(x)$ is also $P$-periodic, and its Fourier series coefficients are given by the discrete convolution of the $S$ and $R$ sequences: $H[n] = \{S*R\}[n].$
- The periodic convolution $h_P(x) \triangleq \int_P s_P(\tau)\cdot r_P(x-\tau)\,d\tau$ is also $P$-periodic, with Fourier series coefficients: $H[n] = P\cdot S[n]\cdot R[n].$
- A doubly infinite sequence $\left\{c_n\right\}_{n\in\mathbb{Z}}$ in $c_0(\mathbb{Z})$ is the sequence of Fourier coefficients of a function in $L^1([0,2\pi])$ if and only if it is a convolution of two sequences in $\ell^2(\mathbb{Z})$. See [32]
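The pointwise-product rule can be checked numerically on two cosines. Since $\cos u \cos v = \tfrac{1}{2}[\cos(u-v)+\cos(u+v)]$, the product of $s(x)=\cos(2\pi x/P)$ and $r(x)=\cos(4\pi x/P)$ has coefficients $H[\pm1]=H[\pm3]=1/4$, and these must agree with the discrete convolution $\{S*R\}[n]$. The grid and truncation below are arbitrary choices for this sketch.

```python
import cmath
import math

# Check H[n] = {S*R}[n] for s(x) = cos(2*pi*x/P), r(x) = cos(4*pi*x/P);
# their product is (1/2)cos(2*pi*x/P) + (1/2)cos(6*pi*x/P).
P, M, N = 1.0, 2_048, 5
h = P / M

def fourier(f, n):
    return sum(f((k + 0.5) * h) * cmath.exp(-2j * math.pi * n * (k + 0.5) * h / P)
               for k in range(M)) * h / P

s = lambda x: math.cos(2 * math.pi * x / P)
r = lambda x: math.cos(4 * math.pi * x / P)
prod = lambda x: s(x) * r(x)

S = {n: fourier(s, n) for n in range(-N, N + 1)}
R = {n: fourier(r, n) for n in range(-N, N + 1)}

for n in (1, 3):
    direct = fourier(prod, n)                                     # H[n] directly
    conv = sum(S[m] * R.get(n - m, 0) for m in range(-N, N + 1))  # {S*R}[n]
    print(round(direct.real, 4), round(conv.real, 4))             # 0.25 0.25
```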
Derivative property
If $s$ is a $P$-periodic function on $\mathbb{R}$ which is $k$ times differentiable, and its $k^{\text{th}}$ derivative is continuous, then $s$ belongs to the function space $C^k(\mathbb{R})$.

- If $s \in C^k(\mathbb{R})$, then the Fourier coefficients of the $k^{\text{th}}$ derivative of $s$ can be expressed in terms of the Fourier coefficients $\widehat{s}[n]$ of $s$, via the formula
$$\widehat{s^{(k)}}[n] = \left(i\,\frac{2\pi n}{P}\right)^{\!k} \widehat{s}[n].$$
In particular, since for any fixed $k \geq 1$ we have $\widehat{s^{(k)}}[n] \to 0$ as $n \to \infty$, it follows that $|n|^k\, \widehat{s}[n]$ tends to zero, i.e., the Fourier coefficients converge to zero faster than the $k^{\text{th}}$ power of $|n|$.
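The $k=1$ case of the derivative rule can be checked on a smooth periodic function whose derivative is known in closed form. The particular function below is an arbitrary choice for this sketch.

```python
import cmath
import math

# Check the derivative rule: coefficient of s' at n equals (i*2*pi*n/P)
# times the coefficient of s at n. The sample s is an arbitrary choice.
P, M = 2.0, 4_096
w = 2 * math.pi / P
h = P / M

s  = lambda x: math.sin(w * x) + 0.25 * math.cos(2 * w * x)
ds = lambda x: w * math.cos(w * x) - 0.5 * w * math.sin(2 * w * x)   # s' exactly

def coeff(f, n):
    return sum(f((k + 0.5) * h) * cmath.exp(-1j * w * n * (k + 0.5) * h)
               for k in range(M)) * h / P

ok = all(abs(coeff(ds, n) - 1j * w * n * coeff(s, n)) < 1e-9 for n in (1, 2, 3))
print(ok)   # True
```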
Compact groups
One of the interesting properties of the Fourier transform mentioned earlier is that it carries convolutions to pointwise products. If that is the property we seek to preserve, Fourier series can be produced on any compact group. Typical examples include those classical groups that are compact. This generalizes the Fourier transform to all spaces of the form L2(G), where G is a compact group, in such a way that the Fourier transform carries convolutions to pointwise products. The Fourier series exists and converges in ways similar to the [−π,π] case.
An alternative extension to compact groups is the Peter–Weyl theorem, which proves results about representations of compact groups analogous to those about finite groups.
Riemannian manifolds
If the domain is not a group, then there is no intrinsically defined convolution. However, if $X$ is a compact Riemannian manifold, it has a Laplace–Beltrami operator. The Laplace–Beltrami operator is the differential operator that corresponds to the Laplace operator for the Riemannian manifold $X$. Then, by analogy, one can consider heat equations on $X$. Since Fourier arrived at his basis by attempting to solve the heat equation, the natural generalization is to use the eigensolutions of the Laplace–Beltrami operator as a basis. This generalizes Fourier series to spaces of the type $L^2(X)$, where $X$ is a Riemannian manifold. The Fourier series converges in ways similar to the $[-\pi,\pi]$ case. A typical example is to take $X$ to be the sphere with the usual metric, in which case the Fourier basis consists of spherical harmonics.
Locally compact Abelian groups
The generalization to compact groups discussed above does not generalize to noncompact, nonabelian groups. However, there is a straightforward generalization to locally compact Abelian (LCA) groups. This generalizes the Fourier transform to $L^1(G)$ or $L^2(G)$, where $G$ is an LCA group. If $G$ is compact, one also obtains a Fourier series, which converges similarly to the $[-\pi,\pi]$ case, but if $G$ is noncompact, one obtains instead a Fourier integral. This generalization yields the usual Fourier transform when the underlying locally compact Abelian group is $\mathbb{R}$.
Fourier-Stieltjes series
Formally, the Fourier-Stieltjes series can be defined as the Fourier series whose coefficients are given by
$$c_n = \hat{\mu}(n) = \frac{1}{P}\int_0^P e^{-i2\pi\tfrac{n}{P}x}\,d\mu(x), \quad \forall n \in \mathbb{Z},$$
for any $\mu \in M$, where $M$ is the space of finite Borel measures on the interval $[0,P]$. As such, when $\mu \in M$, the function $\hat{\mu}(n)$ is also referred to as a Fourier-Stieltjes transform.[33][34]
This follows from an earlier and more concrete representation of a Radon measure (i.e. a locally finite Borel measure) on $\mathbb{R}$, given by F. Riesz. That is, if $F$ is a function of bounded variation on the interval $[0,P]$, then the Fourier coefficients can be expressed by the Riemann-Stieltjes integral
$$c_n = \frac{1}{P}\int_0^P e^{-i2\pi\tfrac{n}{P}x}\,dF(x), \quad \forall n \in \mathbb{Z},$$
called the Fourier-Stieltjes coefficients of $F$.[35] As the distributional derivative of $F$ is a Radon measure, it is subject to the Lebesgue decomposition and can be expressed as
$$dF = F'\,dx + dF_s.$$
When $dF_s = 0$, the expression reduces to the original definition of the Fourier coefficients, hence a Fourier series is a Fourier-Stieltjes series.
The question whether or not $\mu$ exists for a given sequence of $c_n$ forms the basis of the trigonometric moment problem.[38]
The Fourier series can be generalized still further from measures to distributions. If the Fourier coefficients are determined by a distribution $F \in \mathcal{D}'$, then the series is sometimes described as a Fourier-Schwartz series.[39]
While it is often extremely difficult to decide whether a given series is a Fourier or a Fourier-Stieltjes series, deciding whether or not it is a Fourier-Schwartz series is relatively trivial.[40]
Fourier series on a square
We can also define the Fourier series for functions of two variables $x$ and $y$ in the square $[-\pi,\pi]\times[-\pi,\pi]$:
$$\begin{aligned}
f(x,y) &= \sum_{j,k \in \mathbb{Z}} c_{j,k}\, e^{ijx} e^{iky},\\
c_{j,k} &= \frac{1}{4\pi^2}\int_{-\pi}^{\pi}\int_{-\pi}^{\pi} f(x,y)\, e^{-ijx} e^{-iky}\,dx\,dy.
\end{aligned}$$
Aside from being useful for solving partial differential equations such as the heat equation, one notable application of Fourier series on the square is in image compression. In particular, the JPEG image compression standard uses the two-dimensional discrete cosine transform, a discrete form of the Fourier cosine transform, which uses only cosine as the basis function.
For two-dimensional arrays with a staggered appearance, half of the Fourier series coefficients disappear, due to additional symmetry.[41]
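The two-dimensional coefficient formula above can be evaluated with a tensor-product quadrature rule. For $f(x,y)=\cos x\,\cos y$ the only nonzero coefficients are $c_{\pm1,\pm1}=1/4$; the grid size below is an arbitrary choice for this sketch.

```python
import cmath
import math

# 2-D Fourier coefficients c_{j,k} on [-pi,pi] x [-pi,pi] by a
# tensor-product midpoint rule. For f(x,y) = cos(x)cos(y) the only
# nonzero coefficients are c_{±1,±1} = 1/4.
M = 256
h = 2 * math.pi / M
pts = [-math.pi + (k + 0.5) * h for k in range(M)]

def c(f, j, k):
    total = 0j
    for x in pts:
        ex = cmath.exp(-1j * j * x)          # hoist the x-dependent factor
        for y in pts:
            total += f(x, y) * ex * cmath.exp(-1j * k * y)
    return total * h * h / (4 * math.pi ** 2)

f = lambda x, y: math.cos(x) * math.cos(y)
print(round(c(f, 1, 1).real, 4))   # 0.25
print(abs(c(f, 1, 2)) < 1e-9)      # True
```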
Fourier series of a Bravais-lattice-periodic function
A three-dimensional Bravais lattice is defined as the set of vectors of the form
$$\mathbf{R} = n_1\mathbf{a}_1 + n_2\mathbf{a}_2 + n_3\mathbf{a}_3,$$
where $n_i$ are integers and $\mathbf{a}_i$ are three linearly independent, but not necessarily orthogonal, vectors. Let us consider some function $f(\mathbf{r})$ with the same periodicity as the Bravais lattice, i.e. $f(\mathbf{r}) = f(\mathbf{R}+\mathbf{r})$ for any lattice vector $\mathbf{R}$. This situation frequently occurs in solid-state physics, where $f(\mathbf{r})$ might, for example, represent the effective potential that an electron "feels" inside a periodic crystal. In the presence of such a periodic potential, the quantum-mechanical description of the electron results in a periodically modulated plane wave, commonly known as a Bloch state.
In order to develop $f(\mathbf{r})$ in a Fourier series, it is convenient to introduce an auxiliary function
$$g(x_1,x_2,x_3) \triangleq f(\mathbf{r}) = f\!\left(x_1\frac{\mathbf{a}_1}{a_1} + x_2\frac{\mathbf{a}_2}{a_2} + x_3\frac{\mathbf{a}_3}{a_3}\right).$$
Both $f(\mathbf{r})$ and $g(x_1,x_2,x_3)$ contain essentially the same information. However, instead of the position vector $\mathbf{r}$, the arguments of $g$ are coordinates $x_{1,2,3}$ along the unit vectors $\mathbf{a}_i/a_i$ of the Bravais lattice, such that $g$ is an ordinary periodic function in these variables,
$$g(x_1,x_2,x_3) = g(x_1+a_1,x_2,x_3) = g(x_1,x_2+a_2,x_3) = g(x_1,x_2,x_3+a_3) \quad \forall\; x_1,x_2,x_3.$$
This trick allows us to develop $g$ as a multi-dimensional Fourier series, in complete analogy with the square-periodic function discussed in the previous section. Its Fourier coefficients are
$$c(m_1,m_2,m_3) = \frac{1}{a_3}\int_0^{a_3} dx_3\, \frac{1}{a_2}\int_0^{a_2} dx_2\, \frac{1}{a_1}\int_0^{a_1} dx_1\; g(x_1,x_2,x_3)\, e^{-i2\pi\left(\tfrac{m_1}{a_1}x_1 + \tfrac{m_2}{a_2}x_2 + \tfrac{m_3}{a_3}x_3\right)},$$
where $m_1, m_2, m_3$ are all integers.
$c(m_1,m_2,m_3)$ plays the same role as the coefficients $c_{j,k}$ in the previous section, but in order to avoid double subscripts we write them as a function.
Once we have these coefficients, the function $g$ can be recovered via the Fourier series
$$g(x_1,x_2,x_3) = \sum_{m_1,m_2,m_3 \in \mathbb{Z}} c(m_1,m_2,m_3)\, e^{i2\pi\left(\tfrac{m_1}{a_1}x_1 + \tfrac{m_2}{a_2}x_2 + \tfrac{m_3}{a_3}x_3\right)}.$$
We would now like to abandon the auxiliary coordinates $x_{1,2,3}$ and to return to the original position vector $\mathbf{r}$. This can be achieved by means of the reciprocal lattice, whose vectors $\mathbf{b}_{1,2,3}$ are defined such that they are orthonormal (up to a factor $2\pi$) to the original Bravais vectors $\mathbf{a}_{1,2,3}$,
$$\mathbf{a}_i \cdot \mathbf{b}_j = 2\pi\,\delta_{ij},$$
with $\delta_{ij}$ the Kronecker delta. With this, the scalar product between a reciprocal lattice vector $\mathbf{Q}$ and an arbitrary position vector $\mathbf{r}$ written in the Bravais lattice basis becomes
$$\mathbf{Q}\cdot\mathbf{r} = \left(m_1\mathbf{b}_1 + m_2\mathbf{b}_2 + m_3\mathbf{b}_3\right)\cdot\left(x_1\frac{\mathbf{a}_1}{a_1} + x_2\frac{\mathbf{a}_2}{a_2} + x_3\frac{\mathbf{a}_3}{a_3}\right) = 2\pi\left(x_1\frac{m_1}{a_1} + x_2\frac{m_2}{a_2} + x_3\frac{m_3}{a_3}\right),$$
which is exactly the expression occurring in the Fourier exponents. The Fourier series for $f(\mathbf{r}) = g(x_1,x_2,x_3)$ can therefore be rewritten as a sum over all reciprocal lattice vectors $\mathbf{Q} = m_1\mathbf{b}_1 + m_2\mathbf{b}_2 + m_3\mathbf{b}_3$,
$$f(\mathbf{r}) = \sum_{\mathbf{Q}} c(\mathbf{Q})\, e^{i\mathbf{Q}\cdot\mathbf{r}},$$
and the coefficients are
$$c(\mathbf{Q}) = \frac{1}{a_3}\int_0^{a_3} dx_3\, \frac{1}{a_2}\int_0^{a_2} dx_2\, \frac{1}{a_1}\int_0^{a_1} dx_1\; f\!\left(x_1\frac{\mathbf{a}_1}{a_1} + x_2\frac{\mathbf{a}_2}{a_2} + x_3\frac{\mathbf{a}_3}{a_3}\right) e^{-i\mathbf{Q}\cdot\mathbf{r}}.$$
The remaining task will be to convert this integral over lattice coordinates back into a volume integral. The relation between the lattice coordinates $x_{1,2,3}$ and the original cartesian coordinates $\mathbf{r} = (x,y,z)$ is a linear system of equations,
$$\mathbf{r} = x_1\frac{\mathbf{a}_1}{a_1} + x_2\frac{\mathbf{a}_2}{a_2} + x_3\frac{\mathbf{a}_3}{a_3},$$
which, when written in matrix form,
$$\begin{bmatrix} x \\ y \\ z \end{bmatrix} = \mathbf{J} \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} \dfrac{\mathbf{a}_1}{a_1}, \dfrac{\mathbf{a}_2}{a_2}, \dfrac{\mathbf{a}_3}{a_3} \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix},$$
involves a constant matrix $\mathbf{J}$ whose columns are the unit vectors $\mathbf{a}_j/a_j$ of the Bravais lattice. When changing variables from $\mathbf{r}$ to $(x_1,x_2,x_3)$ in an integral, the same matrix $\mathbf{J}$ appears as a Jacobian matrix
$$\mathbf{J} = \begin{bmatrix} \dfrac{\partial x}{\partial x_1} & \dfrac{\partial x}{\partial x_2} & \dfrac{\partial x}{\partial x_3} \\[12pt] \dfrac{\partial y}{\partial x_1} & \dfrac{\partial y}{\partial x_2} & \dfrac{\partial y}{\partial x_3} \\[12pt] \dfrac{\partial z}{\partial x_1} & \dfrac{\partial z}{\partial x_2} & \dfrac{\partial z}{\partial x_3} \end{bmatrix}.$$
Its determinant $J$ is therefore also constant and can be inferred from an integral over any domain; here we choose to calculate the volume of the primitive unit cell $\Gamma$ in both coordinate systems:
$$V_\Gamma = \int_\Gamma d^3 r = J \int_0^{a_1} dx_1 \int_0^{a_2} dx_2 \int_0^{a_3} dx_3 = J\, a_1 a_2 a_3.$$
The unit cell being a parallelepiped, we have $V_\Gamma = \mathbf{a}_1 \cdot (\mathbf{a}_2 \times \mathbf{a}_3)$ and thus
$$d^3 r = J\, dx_1\, dx_2\, dx_3 = \frac{\mathbf{a}_1 \cdot (\mathbf{a}_2 \times \mathbf{a}_3)}{a_1 a_2 a_3}\, dx_1\, dx_2\, dx_3.$$
This allows us to write $c(\mathbf{Q})$ as the desired volume integral over the primitive unit cell $\Gamma$ in ordinary cartesian coordinates:
$$c(\mathbf{Q}) = \frac{1}{\mathbf{a}_1 \cdot (\mathbf{a}_2 \times \mathbf{a}_3)} \int_\Gamma d^3 r\; f(\mathbf{r})\, e^{-i\mathbf{Q}\cdot\mathbf{r}}.$$
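The reciprocal lattice vectors satisfying $\mathbf{a}_i\cdot\mathbf{b}_j = 2\pi\delta_{ij}$ can be built explicitly with the standard cross-product construction, e.g. $\mathbf{b}_1 = 2\pi\,(\mathbf{a}_2\times\mathbf{a}_3)/(\mathbf{a}_1\cdot(\mathbf{a}_2\times\mathbf{a}_3))$ and cyclic permutations (this construction, and the sample cell below, are standard facts and arbitrary choices respectively, not taken from the text above).

```python
import math

# Build reciprocal lattice vectors b_i for non-orthogonal Bravais vectors a_i
# via b1 = 2*pi*(a2 x a3)/(a1 . (a2 x a3)), cyclically, and verify
# a_i . b_j = 2*pi*delta_ij. The sample cell is an arbitrary choice.

def cross(u, v):
    return (u[1]*v[2] - u[2]*v[1], u[2]*v[0] - u[0]*v[2], u[0]*v[1] - u[1]*v[0])

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

a1, a2, a3 = (1.0, 0.0, 0.0), (0.5, 1.0, 0.0), (0.2, 0.3, 1.5)
vol = dot(a1, cross(a2, a3))          # primitive-cell volume a1 . (a2 x a3)

b1 = tuple(2 * math.pi * u / vol for u in cross(a2, a3))
b2 = tuple(2 * math.pi * u / vol for u in cross(a3, a1))
b3 = tuple(2 * math.pi * u / vol for u in cross(a1, a2))

ok = all(abs(dot(a, b) - (2 * math.pi if i == j else 0.0)) < 1e-12
         for i, a in enumerate((a1, a2, a3))
         for j, b in enumerate((b1, b2, b3)))
print(round(vol, 4), ok)   # 1.5 True
```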
Hilbert space
As the trigonometric series is a special class of orthogonal system, Fourier series can naturally be defined in the context of Hilbert spaces. For example, the space of square-integrable functions on $[-\pi,\pi]$ forms the Hilbert space $L^2([-\pi,\pi])$. Its inner product, defined for any two elements $f$ and $g$, is given by:
$$\langle f, g \rangle = \frac{1}{2\pi}\int_{-\pi}^{\pi} f(x)\,\overline{g(x)}\,dx.$$
This space is equipped with the orthonormal basis $\left\{ e_n = e^{inx} : n \in \mathbb{Z} \right\}$.
Then the (generalized) Fourier series expansion of $f \in L^2([-\pi,\pi])$, given by
$$f(x) = \sum_{n=-\infty}^{\infty} c_n e^{inx},$$
can be written as[42]
$$f = \sum_{n=-\infty}^{\infty} \langle f, e_n \rangle\, e_n.$$
The sine-cosine form follows in a similar fashion. Indeed, the sines and cosines form an orthogonal set: each of the integrals below is 0 if $m$, $n$, or the functions are different, and $\pi$ only if $m$ and $n$ are equal and the function used is the same. (They would form an orthonormal set if the integral equaled 1, that is, if each function were scaled by $1/\sqrt{\pi}$.)
$$\int_{-\pi}^{\pi} \cos(mx)\,\cos(nx)\,dx = \frac{1}{2}\int_{-\pi}^{\pi} \cos((n-m)x) + \cos((n+m)x)\,dx = \pi\delta_{mn}, \quad m,n \geq 1,$$
$$\int_{-\pi}^{\pi} \sin(mx)\,\sin(nx)\,dx = \frac{1}{2}\int_{-\pi}^{\pi} \cos((n-m)x) - \cos((n+m)x)\,dx = \pi\delta_{mn}, \quad m,n \geq 1$$
(where $\delta_{mn}$ is the Kronecker delta), and
$$\int_{-\pi}^{\pi} \cos(mx)\,\sin(nx)\,dx = \frac{1}{2}\int_{-\pi}^{\pi} \sin((n+m)x) + \sin((n-m)x)\,dx = 0;$$
Hence, the set
$$\left\{ \frac{1}{\sqrt{2}}, \frac{\cos x}{\sqrt{2}}, \frac{\sin x}{\sqrt{2}}, \dots, \frac{\cos(nx)}{\sqrt{2}}, \frac{\sin(nx)}{\sqrt{2}}, \dots \right\},$$
also forms an orthonormal basis for $L^2([-\pi,\pi])$. The density of their span is a consequence of the Stone–Weierstrass theorem, but follows also from the properties of classical kernels like the Fejér kernel.
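The orthogonality relations above are easy to confirm numerically; the sketch below checks the cosine-cosine and cosine-sine integrals for a few small $m, n$ (the grid size is an arbitrary choice).

```python
import math

# Numerical check of the orthogonality relations on [-pi, pi]:
# integral of cos(mx)cos(nx) = pi*delta_mn, integral of cos(mx)sin(nx) = 0.
M = 20_000
h = 2 * math.pi / M
xs = [-math.pi + (k + 0.5) * h for k in range(M)]

def integral(f):
    return sum(f(x) for x in xs) * h   # midpoint rule over [-pi, pi]

ok = True
for m in (1, 2, 3):
    for n in (1, 2, 3):
        cc = integral(lambda x: math.cos(m * x) * math.cos(n * x))
        cs = integral(lambda x: math.cos(m * x) * math.sin(n * x))
        expected = math.pi if m == n else 0.0
        ok = ok and abs(cc - expected) < 1e-9 and abs(cs) < 1e-9
print(ok)   # True
```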
In engineering, the Fourier series is generally assumed to converge except at jump discontinuities, since the functions encountered in engineering are usually better behaved than those in other disciplines. In particular, if $s$ is continuous and the derivative of $s(x)$ (which may not exist everywhere) is square integrable, then the Fourier series of $s$ converges absolutely and uniformly to $s(x)$.[43] If a function is square-integrable on the interval $[x_0, x_0+P]$, then the Fourier series converges to the function almost everywhere. It is possible to define Fourier coefficients for more general functions or distributions, in which case pointwise convergence often fails, and convergence in norm or weak convergence is usually studied.
- Four partial sums (Fourier series) of lengths 1, 2, 3, and 4 terms, showing how the approximation to a square wave improves as the number of terms increases (animation)
- Four partial sums (Fourier series) of lengths 1, 2, 3, and 4 terms, showing how the approximation to a sawtooth wave improves as the number of terms increases (animation)
- Example of convergence to a somewhat arbitrary function. Note the development of the "ringing" (Gibbs phenomenon) at the transitions to/from the vertical sections.
The theorems proving that a Fourier series is a valid representation of any periodic function (that satisfies the Dirichlet conditions), and informal variations of them that do not specify the convergence conditions, are sometimes referred to generically as Fourier’s theorem or the Fourier theorem.[44][45][46][47]
Least squares property
The earlier Eq.2:
$$s_N(x) = \sum_{n=-N}^{N} S[n]\, e^{i2\pi\tfrac{n}{P}x},$$
is a trigonometric polynomial of degree $N$ that can be generally expressed as:
$$p_N(x) = \sum_{n=-N}^{N} p[n]\, e^{i2\pi\tfrac{n}{P}x}.$$
Parseval’s theorem implies that:
Theorem—The trigonometric polynomial $s_N$ is the unique best trigonometric polynomial of degree $N$ approximating $s(x)$, in the sense that, for any trigonometric polynomial $p_N \neq s_N$ of degree $N$, we have:
$$\|s_N - s\|_2 < \|p_N - s\|_2,$$
where the Hilbert space norm is defined as:
$$\|g\|_2 = \sqrt{\frac{1}{P}\int_P |g(x)|^2\,dx}.$$
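The least squares property can be illustrated numerically: among trigonometric polynomials of degree $N$, the truncated Fourier series minimizes the $L^2$ distance to $s$, so perturbing any of its coefficients can only increase the error. The sawtooth test function and the perturbation size below are arbitrary choices for this sketch.

```python
import cmath
import math

# The truncated Fourier series s_N is the L2-best trigonometric polynomial
# of degree N; perturbing a coefficient increases the error.
P, M, N = 1.0, 2_000, 3
h = P / M
xs = [(k + 0.5) * h for k in range(M)]
s = lambda x: x                          # sawtooth on [0, 1)

S = {n: sum(s(x) * cmath.exp(-2j * math.pi * n * x) for x in xs) * h
     for n in range(-N, N + 1)}

def l2_error(coeffs):
    """Discrete approximation of ||s - p_N||_2 for given coefficients."""
    err = 0.0
    for x in xs:
        approx = sum(c * cmath.exp(2j * math.pi * n * x) for n, c in coeffs.items())
        err += abs(s(x) - approx) ** 2 * h
    return math.sqrt(err)

best = l2_error(S)
perturbed = dict(S)
perturbed[1] += 0.05
print(best < l2_error(perturbed))   # True
```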
Convergence theorems
Because of the least squares property, and because of the completeness of the Fourier basis, we obtain an elementary convergence result.
Theorem—If $s$ belongs to $L^2(P)$, then $s_N$ converges to $s$ in $L^2(P)$ as $N \to \infty$, that is:
$$\lim_{N\to\infty} \|s_N - s\|_2 = 0.$$
If $s$ is continuously differentiable, then $(in)S[n]$ is the $n$th Fourier coefficient of the first derivative $s'$. Since $s'$ is continuous, and therefore bounded, it is square-integrable and its Fourier coefficients are square-summable. Then, by the Cauchy–Schwarz inequality,
$$\left( \sum_{n \neq 0} |S[n]| \right)^2 \leq \sum_{n \neq 0} \frac{1}{n^2} \cdot \sum_{n \neq 0} \bigl| n S[n] \bigr|^2.$$
This means that the Fourier series of $s$ is absolutely summable. The sum of this series is a continuous function, equal to $s$, since the Fourier series converges in $L^1$ to $s$:
Theorem — If $s \in C^1(\mathbb{R})$, then $s_N$ converges to $s$ uniformly (and hence also pointwise).
This result can be proven easily if $s$ is further assumed to be $C^2$, since in that case $n^2 S[n]$ tends to zero as $n \to \infty$. More generally, the Fourier series is absolutely summable, thus converges uniformly to $s$, provided that $s$ satisfies a Hölder condition of order $\alpha > \tfrac{1}{2}$. In the absolutely summable case, the inequality:
$$\sup_x \bigl| s(x) - s_N(x) \bigr| \leq \sum_{|n| > N} \bigl| S[n] \bigr|$$
proves uniform convergence.
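This tail bound can be checked numerically (a sketch, not from the article): for the triangle wave $s(x) = |x|$ on $[-\pi, \pi]$, whose nonzero coefficients are $S[0] = \pi/2$ and $S[n] = -2/(\pi n^2)$ for odd $n$, the coefficients are absolutely summable and the supremum error of $s_N$ never exceeds the tail sum. The choice of test function and $N$ is an illustrative assumption.

```python
# Uniform tail bound sup_x |s(x) - s_N(x)| <= sum_{|n|>N} |S[n]|
# for the triangle wave s(x) = |x| on [-pi, pi].
import numpy as np

x = np.linspace(-np.pi, np.pi, 20001)  # grid includes x = 0
s = np.abs(x)

N = 9  # odd, so the last included harmonic is n = 9

def partial_sum(N):
    # s_N(x) = S[0] + sum_{n=1..N} 2 S[n] cos(nx), with S[0] = pi/2 and
    # S[n] = -2/(pi n^2) for odd n (zero for even n != 0)
    out = np.full_like(x, np.pi / 2)
    for n in range(1, N + 1, 2):
        out += 2 * (-2 / (np.pi * n**2)) * np.cos(n * x)
    return out

sup_err = np.max(np.abs(partial_sum(N) - s))
# tail = sum_{|n|>N} |S[n]| = (4/pi) * (pi^2/8 - sum_{odd n<=N} 1/n^2),
# using sum over all odd n of 1/n^2 = pi^2/8
tail = (4 / np.pi) * (np.pi**2 / 8 - sum(1 / n**2 for n in range(1, N + 1, 2)))
assert sup_err <= tail + 1e-9
```

For this example the bound is actually tight: at $x = 0$ every omitted cosine equals $1$, so the pointwise error there equals the tail sum.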
Many other results concerning the convergence of Fourier series are known, ranging from the moderately simple result that the series converges at $x$ if $s$ is differentiable at $x$, to more sophisticated results such as Carleson's theorem, which states that the Fourier series of an $L^2$ function converges almost everywhere.
Divergence
Since Fourier series have such good convergence properties, some of the negative results can come as a surprise. For example, the Fourier series of a continuous T-periodic function need not converge pointwise. The uniform boundedness principle yields a simple non-constructive proof of this fact.
In 1922, Andrey Kolmogorov published an article titled Une série de Fourier-Lebesgue divergente presque partout in which he gave an example of a Lebesgue-integrable function whose Fourier series diverges almost everywhere. He later constructed an example of an integrable function whose Fourier series diverges everywhere.[48]
It is possible to give explicit examples of a continuous function whose Fourier series diverges at 0: for instance, the even and 2π-periodic function f defined for all x in [0,π] by[49]
$$f(x) = \sum_{n=1}^{\infty} \frac{1}{n^2} \sin\left[\left(2^{n^3} + 1\right) \frac{x}{2}\right].$$
Because the function is even, the Fourier series contains only cosines:

$$\sum_{m=0}^{\infty} C_m \cos(mx).$$
The coefficients are:

$$C_m = \frac{1}{\pi} \sum_{n=1}^{\infty} \frac{1}{n^2} \left\{ \frac{2}{2^{n^3} + 1 - 2m} + \frac{2}{2^{n^3} + 1 + 2m} \right\}$$
As $m$ increases, the coefficients will be positive and increasing until they reach a value of about $C_m \approx 2/(n^2 \pi)$ at $m = 2^{n^3}/2$ for some $n$, and then become negative (starting with a value around $-2/(n^2 \pi)$) and getting smaller, before starting a new such wave. At $x = 0$ the Fourier series is simply the running sum of $C_m$, and this builds up to around
$$\frac{1}{n^2 \pi} \sum_{k=0}^{2^{n^3}/2} \frac{2}{2k+1} \sim \frac{1}{n^2 \pi} \ln 2^{n^3} = \frac{n}{\pi} \ln 2$$
in the nth wave before returning to around zero, showing that the series does not converge at zero but reaches higher and higher peaks. Note that though the function is continuous, it is not differentiable.
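A rough numerical check (an illustrative sketch, not from the article's source) of the peak-height estimate above, for the smallest interesting wave $n = 2$, where $2^{n^3}/2 = 128$. The asymptotic $\sim$ only guarantees agreement for large $n$, so here we check that the two sides are the same order of magnitude.

```python
# Running sum of the n-th wave at x = 0 versus the asymptotic (n/pi) ln 2.
import math

n = 2
K = 2 ** (n**3) // 2  # upper limit 2^{n^3}/2 = 128 for n = 2
peak = sum(2 / (2 * k + 1) for k in range(K + 1)) / (n**2 * math.pi)
estimate = (n / math.pi) * math.log(2)
# the relation is only asymptotic in n; for n = 2 they agree to within a
# modest constant factor
assert 0.5 < peak / estimate < 2
```

The peaks grow linearly in $n$, which is why the partial sums at $x = 0$ are unbounded even though the function itself is continuous.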
See also
- ATS theorem
- Carleson’s theorem
- Dirichlet kernel
- Discrete Fourier transform
- Fast Fourier transform
- Fejér’s theorem
- Fourier analysis
- Fourier inversion theorem
- Fourier sine and cosine series
- Fourier transform
- Gibbs phenomenon
- Half-range Fourier series
- Laurent series – the substitution $q = e^{ix}$ transforms a Fourier series into a Laurent series, or conversely. This is used in the q-series expansion of the j-invariant.
- Least-squares spectral analysis
- Multidimensional transform
- Non-harmonic Fourier series
- Residue theorem integrals of f(z), singularities, poles
- Sine and cosine transforms
- Spectral theory
- Sturm–Liouville theory
- Trigonometric moment problem
-
These three did some important early work on the wave equation, especially D'Alembert. Euler's work in this area was mostly contemporaneous with, or in collaboration with, Bernoulli, although the latter made some independent contributions to the theory of waves and vibrations. (See Fetter & Walecka 2003, pp. 209–210).
-
Typically $[-P/2, P/2]$ or $[0, P]$. Some authors define $P \triangleq 2\pi$ because it simplifies the arguments of the sinusoid functions, at the expense of generality.
-
Since the integral defining the Fourier transform of a periodic function is not convergent, it is necessary to view the periodic function and its transform as distributions. In this sense $\mathcal{F}\{e^{i 2\pi \frac{n}{P} x}\}$ is a Dirac delta function, which is an example of a distribution.
-
"Fourier". Dictionary.com Unabridged (Online). n.d.
-
Zygmund 2002, p. 1-8.
-
Stillwell, John (2013). “Logic and the philosophy of mathematics in the nineteenth century”. In Ten, C. L. (ed.). Routledge History of Philosophy. Vol. VII: The Nineteenth Century. Routledge. p. 204. ISBN 978-1-134-92880-4.
-
Fasshauer, Greg (2015). “Fourier Series and Boundary Value Problems” (PDF). Math 461 Course Notes, Ch 3. Department of Applied Mathematics, Illinois Institute of Technology. Retrieved 6 November 2020.
-
Cajori, Florian (1893). A History of Mathematics. Macmillan. p. 283.
-
Dutka, Jacques (1995). “On the early history of Bessel functions”. Archive for History of Exact Sciences. 49 (2): 105–134. doi:10.1007/BF00376544.
-
Lejeune-Dirichlet, Peter Gustav (1829). “Sur la convergence des séries trigonométriques qui servent à représenter une fonction arbitraire entre des limites données” [On the convergence of trigonometric series which serve to represent an arbitrary function between two given limits]. Journal für die reine und angewandte Mathematik (in French). 4: 157–169. arXiv:0806.1294.
-
“Ueber die Darstellbarkeit einer Function durch eine trigonometrische Reihe” [About the representability of a function by a trigonometric series]. Habilitationsschrift, Göttingen; 1854. Abhandlungen der Königlichen Gesellschaft der Wissenschaften zu Göttingen, vol. 13, 1867. Published posthumously for Riemann by Richard Dedekind (in German). Archived from the original on 20 May 2008. Retrieved 19 May 2008.
-
Mascre, D.; Riemann, Bernhard (2005) [1867], “Posthumous Thesis on the Representation of Functions by Trigonometric Series”, in Grattan-Guinness, Ivor (ed.), Landmark Writings in Western Mathematics 1640–1940, Elsevier, p. 49, ISBN 9780080457444
-
Remmert, Reinhold (1991). Theory of Complex Functions: Readings in Mathematics. Springer. p. 29. ISBN 9780387971957.
-
Nerlove, Marc; Grether, David M.; Carvalho, Jose L. (1995). Analysis of Economic Time Series. Economic Theory, Econometrics, and Mathematical Economics. Elsevier. ISBN 0-12-515751-7.
-
Wilhelm Flügge, Stresses in Shells (1973) 2nd edition. ISBN 978-3-642-88291-3. Originally published in German as Statik und Dynamik der Schalen (1937).
-
Fourier, Jean-Baptiste-Joseph (2014) [1890]. “Mémoire sur la propagation de la chaleur dans les corps solides, présenté le 21 Décembre 1807 à l’Institut national” [Report on the propagation of heat in solid bodies, presented on December 21, 1807 to the National Institute]. In Darboux, Gaston (ed.). Oeuvres de Fourier [The Works of Fourier] (in French). Vol. 2. Paris: Gauthier-Villars et Fils. pp. 218–219. doi:10.1017/CBO9781139568159.009. ISBN 9781139568159.
Whilst the cited article does list the author as Fourier, a footnote on page 215 indicates that the article was actually written by Poisson and that it is, “for reasons of historical interest”, presented as though it were Fourier’s original memoire.
-
Fourier, Jean-Baptiste-Joseph (2013) [1888]. “Avant-propos des oevres de Fourier” [Foreword]. In Gaston Darboux (ed.). Oeuvres de Fourier [The Works of Fourier] (in French). Vol. 1. Paris: Gauthier-Villars et Fils. pp. VII–VIII. doi:10.1017/cbo9781139568081.001. ISBN 978-1-108-05938-1.
-
Folland 1992, pp. 18–25.
-
Hardy & Rogosinski 1999, pp. 2–4.
-
Edwards 1979, pp. 8–9.
-
Edwards 1982, pp. 57, 67.
-
Schwartz 1966, pp. 152–158.
-
Strang, Gilbert (2008), “4.1” (PDF), Fourier Series And Integrals (2 ed.), Wellesley-Cambridge Press, p. 323 (eq 19)
-
Stade 2005, p. 6.
-
Zygmund, Antoni (1935). “Trigonometrical series”. EUDML. p. 6. Retrieved 2024-12-14.
-
Folland 1992, pp. 21.
-
Stade 2005, pp. 59–64.
-
Alexander & Sadiku 2009, pp. 759–760.
-
Kassam, Saleem A. (2004). "Fourier Series (Part II)" (PDF). Retrieved 2024-12-11. "The phase relationships are important because they correspond to having different amounts of 'time shifts' or 'delays' for each of the sinusoidal waveforms relative to a zero-phase waveform."
-
Papula, Lothar (2009). Mathematische Formelsammlung: für Ingenieure und Naturwissenschaftler [Mathematical Functions for Engineers and Physicists] (in German). Vieweg+Teubner Verlag. ISBN 978-3834807571.
-
Shmaliy, Y.S. (2007). Continuous-Time Signals. Springer. ISBN 978-1402062711.
-
Proakis & Manolakis 1996, p. 291.
-
Oppenheim & Schafer 2010, p. 55.
-
“Characterizations of a linear subspace associated with Fourier series”. MathOverflow. 2010-11-19. Retrieved 2014-08-08.
-
Edwards 1982, p. 67.
-
Katznelson 2004, p. 164.
-
Zygmund 2002, p. 11.
-
Edwards 1982, pp. 53, 72–73.
-
Katznelson 2004, p. 40.
-
Akhiezer 1965, pp. 180–181.
-
Edwards 1982, pp. 48, 67–68.
-
Rudin 1987, p. 82.
-
Tolstov, Georgi P. (1976). Fourier Series. Courier-Dover. ISBN 0-486-63317-9.
-
Siebert, William McC. (1985). Circuits, signals, and systems. MIT Press. p. 402. ISBN 978-0-262-19229-3.
-
Marton, L.; Marton, Claire (1990). Advances in Electronics and Electron Physics. Academic Press. p. 369. ISBN 978-0-12-014650-5.
-
Kuzmany, Hans (1998). Solid-state spectroscopy. Springer. p. 14. ISBN 978-3-540-63913-8.
-
Pribram, Karl H.; Yasue, Kunio; Jibu, Mari (1991). Brain and perception. Lawrence Erlbaum Associates. p. 26. ISBN 978-0-89859-995-4.
-
Gourdon, Xavier (2009). Les maths en tête. Analyse (2ème édition) (in French). Ellipses. p. 264. ISBN 978-2729837594.
Bibliography
- Akhiezer, N. I. (1965). The Classical Moment Problem and Some Related Questions in Analysis. Philadelphia, PA: Society for Industrial and Applied Mathematics. doi:10.1137/1.9781611976397. ISBN 978-1-61197-638-0.
- Alexander, Charles K.; Sadiku, Matthew N. O. (2009). Fundamentals of Electric Circuits. Boston: McGraw-Hill. ISBN 978-0-07-352955-4.
- Boyce, William E.; DiPrima, Richard C. (2005). Elementary Differential Equations and Boundary Value Problems (8th ed.). New Jersey: John Wiley & Sons, Inc. ISBN 0-471-43338-1.
- Charpentier, Eric; Lesne, Annick; Nikolski, Nikolaï K. (2007). Kolmogorov’s Heritage in Mathematics. Springer. doi:10.1007/978-3-540-36351-4. ISBN 978-3-540-36349-1.
- Edwards, R. E. (1979). Fourier Series. Graduate Texts in Mathematics. Vol. 64. New York, NY: Springer New York. doi:10.1007/978-1-4612-6208-4. ISBN 978-1-4612-6210-7.
- Edwards, R. E. (1982). Fourier Series. Graduate Texts in Mathematics. Vol. 85. New York, NY: Springer New York. doi:10.1007/978-1-4613-8156-3. ISBN 978-1-4613-8158-7.
- Fourier, Joseph (2003). The Analytical Theory of Heat. Dover Publications. ISBN 0-486-49531-0. 2003 unabridged republication of the 1878 English translation by Alexander Freeman of Fourier’s work Théorie Analytique de la Chaleur, originally published in 1822.
- Fetter, Alexander L.; Walecka, John Dirk (2003). Theoretical Mechanics of Particles and Continua. Courier. ISBN 978-0-486-43261-8.
- Folland, Gerald B. (1992). Fourier analysis and its applications. Pacific Grove, Calif: Wadsworth & Brooks/Cole. ISBN 978-0-534-17094-3.
- Gonzalez-Velasco, Enrique A. (1992). “Connections in Mathematical Analysis: The Case of Fourier Series”. American Mathematical Monthly. 99 (5): 427–441. doi:10.2307/2325087. JSTOR 2325087.
- Hardy, G. H.; Rogosinski, Werner (1999). Fourier series. Mineola, N.Y: Dover Publications. ISBN 978-0-486-40681-7.
- Katznelson, Yitzhak (2004). An Introduction to Harmonic Analysis. Cambridge University Press. doi:10.1017/cbo9781139165372. ISBN 978-0-521-83829-0.
- Khare, Kedar; Butola, Mansi; Rajora, Sunaina (2023). Fourier Optics and Computational Imaging. Cham: Springer International Publishing. doi:10.1007/978-3-031-18353-9. ISBN 978-3-031-18352-2.
- Klein, Félix (1979). Development of mathematics in the 19th century. Brookline, Mass: Math Science Press. ISBN 978-0-915692-28-6. Translated by M. Ackerman from Vorlesungen über die Entwicklung der Mathematik im 19. Jahrhundert, Springer, Berlin, 1928.
- Lion, Georges A. (1986). “A Simple Proof of the Dirichlet-Jordan Convergence Test”. The American Mathematical Monthly. 93 (4): 281–282. doi:10.1080/00029890.1986.11971805. ISSN 0002-9890.
- Oppenheim, Alan V.; Schafer, Ronald W. (2010). Discrete-time Signal Processing. Upper Saddle River Munich: Prentice Hall. p. 55. ISBN 978-0-13-198842-2.
- Proakis, John G.; Manolakis, Dimitris G. (1996). Digital Signal Processing: Principles, Algorithms, and Applications (3rd ed.). Prentice Hall. ISBN 978-0-13-373762-2.
- Rudin, Walter (1976). Principles of mathematical analysis (3rd ed.). New York: McGraw-Hill, Inc. ISBN 0-07-054235-X.
- Rudin, Walter (1987). Real and Complex Analysis. New York, NY: McGraw-Hill Education. ISBN 978-0-07-100276-9.
- Stade, Eric (2005). Fourier Analysis. Wiley. doi:10.1002/9781118165508. ISBN 978-0-471-66984-5.
- Schwartz, Laurent (1966). Mathematics for the Physical Sciences. Paris & Reading, MA: Hermann/ Addison-Wesley Publishing.
- Zygmund, A. (2002). Trigonometric Series (third ed.). Cambridge: Cambridge University Press. ISBN 0-521-89053-5. The first edition was published in 1935.
- “Fourier series”, Encyclopedia of Mathematics, EMS Press, 2001 [1994]
- Hobson, Ernest (1911). Encyclopædia Britannica. Vol. 10 (11th ed.). pp. 753–758.
- Weisstein, Eric W. “Fourier Series”. MathWorld.
- Joseph Fourier – A site on Fourier’s life which was used for the historical section of this article at the Wayback Machine (archived December 5, 2001)
This article incorporates material from example of Fourier series on PlanetMath, which is licensed under the Creative Commons Attribution/Share-Alike License.
Source: Wikipedia. License: CC BY-SA 4.0. Changes may have been made. See authors on source page history.