My name is Hanzhang Yin, and I developed this website as a resource for reviewing key concepts in numerical analysis; it is not yet complete.
Feel free to email me at hanyin@ku.edu if you find any errors or have suggestions for improvement.
Numerical Analysis
Binary
A normalized binary machine number has the form
$$x = \pm q \times 2^m,$$
where the mantissa $q$ satisfies $\tfrac12 \le q < 1$ and $m$ is an integer exponent.
For the MARC-32, $\beta = 2$ and $t = 24$, so the unit roundoff is $u = 2^{-24}$. For the MARC-64, $\beta = 2$ and $t = 53$, so $u = 2^{-53}$.
Roundoff Error
Let $x$ be a real number within the range of the machine. Define $\mathrm{fl}(x)$ to be the machine number closest to $x$. Suppose that $x > 0$ (the negative case is symmetric); then we pick two adjacent machine numbers $x_-$ and $x_+$ such that $x_-$ is less than or equal to $x$ and $x_+$ is greater than or equal to $x$. Then we have
$$|\mathrm{fl}(x) - x| \le \tfrac12\,|x_+ - x_-| \qquad\text{and}\qquad \left|\frac{\mathrm{fl}(x) - x}{x}\right| \le u.$$
Number System with Base $\beta$
The unit roundoff is
$$u = \tfrac12\,\beta^{1-t},$$
where $\beta$ is the base and $t$ is the number of digits in the mantissa.
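A quick numerical check of this formula in Python (using NumPy, which is my choice here rather than anything from the notes): for IEEE double precision, $\beta = 2$ and $t = 53$.

```python
import numpy as np

beta, t = 2, 53                  # IEEE double precision: base 2, 53 mantissa digits
u = 0.5 * beta ** (1 - t)        # unit roundoff from the formula above

# np.finfo(float).eps is the gap between 1.0 and the next float, i.e. 2u
print(u, np.finfo(float).eps / 2)   # both print 1.1102230246251565e-16
```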
Floating point error analysis
Let $x, y$ be two machine numbers and let $\odot$ denote a binary operation (such as $+, -, \times, \div$). Then
$$\mathrm{fl}(x \odot y) = (x \odot y)(1 + \delta),$$
where $|\delta| \le u$ is the roundoff error.
Orders of Convergence
Let $[x_n]$ be a sequence of real numbers tending to a limit $x^*$. We say that the rate of convergence is at least linear if there are a constant $c < 1$ and an integer $N$ such that
$$|x_{n+1} - x^*| \le c\,|x_n - x^*| \qquad (n \ge N).$$
We say that the rate of convergence is at least superlinear if there exist a sequence $(\varepsilon_n)$ tending to 0 and an integer $N$ such that
$$|x_{n+1} - x^*| \le \varepsilon_n\,|x_n - x^*| \qquad (n \ge N).$$
The convergence is at least quadratic if there are a constant $C$ (not necessarily less than 1) and an integer $N$ such that
$$|x_{n+1} - x^*| \le C\,|x_n - x^*|^2 \qquad (n \ge N).$$
In general, if there are positive constants $C$ and $\alpha$ and an integer $N$ such that
$$|x_{n+1} - x^*| \le C\,|x_n - x^*|^{\alpha} \qquad (n \ge N),$$
we say that the rate of convergence is of order $\alpha$ at least.
"Numerical Analysis: Mathematics of Scientific Computing" page. 17.
Newton's Method
Definition. Newton's method, also known as the Newton-Raphson method, is an iterative root-finding algorithm used to find successively better approximations to the roots (or zeros) of a real-valued function. Given a function $f(x)$ and its derivative $f'(x)$, Newton's method starts with an initial guess $x_0$ and iteratively refines this guess using the following formula:
$$x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)},$$
where:
- $x_n$ is the current approximation.
- $x_{n+1}$ is the next approximation.
- $f(x_n)$ is the value of the function at $x_n$.
- $f'(x_n)$ is the value of the derivative of the function at $x_n$.
Convergence Criteria. The method typically converges if the initial guess $x_0$ is sufficiently close to the actual root and $f$ is continuously differentiable in a neighborhood of the root.
Example. Consider finding the root of the function $f(x) = x^2 - 2$. The derivative of the function is $f'(x) = 2x$. Firstly, we start with an initial guess $x_0 = 1$. Secondly, we apply the Newton iteration:
$$x_{n+1} = x_n - \frac{x_n^2 - 2}{2 x_n}.$$
So, the iterative formula becomes:
$$x_{n+1} = \frac{1}{2}\left(x_n + \frac{2}{x_n}\right).$$
By repeatedly applying this formula, the sequence will converge to the square root of 2.
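A minimal Python sketch of this iteration (the function names and stopping tolerance are illustrative choices, not from the text):

```python
def newton(f, fprime, x0, tol=1e-12, max_iter=50):
    """Newton's method: x_{n+1} = x_n - f(x_n)/f'(x_n)."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / fprime(x)     # Newton correction f(x_n)/f'(x_n)
        x -= step
        if abs(step) < tol:         # stop once the update is negligible
            break
    return x

root = newton(lambda x: x * x - 2.0, lambda x: 2.0 * x, x0=1.0)
print(root)  # 1.4142135623730951, i.e. sqrt(2)
```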
Error Analysis for Newton's Method
Let $r$ be a solution of $f(x) = 0$ (i.e. $f(r) = 0$).
Suppose that we have already computed $x_n$ and the error in $x_n$ is $e_n = x_n - r$. We now derive a formula that relates the error after the next step, $e_{n+1}$, to $e_n$.
According to Taylor's theorem, there is a $\xi_n$ between $x_n$ and $r$ such that
$$0 = f(r) = f(x_n) - e_n f'(x_n) + \tfrac{1}{2} e_n^2 f''(\xi_n). \tag{1}$$
By the definition of Newton's method, we have
$$e_{n+1} = x_{n+1} - r = x_n - \frac{f(x_n)}{f'(x_n)} - r = e_n - \frac{f(x_n)}{f'(x_n)} = \frac{e_n f'(x_n) - f(x_n)}{f'(x_n)}. \tag{2}$$
Substituting (1) into (2): since (1) gives $e_n f'(x_n) - f(x_n) = \tfrac12 e_n^2 f''(\xi_n)$, we obtain
$$e_{n+1} = \frac{f''(\xi_n)}{2 f'(x_n)}\, e_n^2.$$
If $x_n$ is close to $r$, then $\xi_n$, which must be between $x_n$ and $r$, is also close to $r$ and
$$e_{n+1} \approx \frac{f''(r)}{2 f'(r)}\, e_n^2 = C\, e_n^2.$$
Thus, we can see that Newton's Method is quadratically convergent.
However, a disadvantage of Newton's Method is that convergence is guaranteed only if the initial value is close enough to the root.
Theorems on Newton's Method
Theorem.
Let $f''$ be continuous and let $r$ be a simple zero of $f$. Then there is a neighborhood of $r$ and a constant $C$ such that if Newton's method is started in that neighborhood, the successive points become steadily closer to $r$ and satisfy
$$|x_{n+1} - r| \le C\,|x_n - r|^2 \qquad (n \ge 0).$$
Theorem on Newton's Method for a Convex Function. If $f$ belongs to $C^2(\mathbb{R})$, is increasing, is convex, and has a zero, then the zero is unique, and the Newton iteration will converge to it from any starting point.
Convergence of Newton's Method. Assume that $f$, $f'$, and $f''$ are continuous for all $x$ in some neighborhood of $r$, and that $f(r) = 0$ and $f'(r) \ne 0$. If $x_0$ is sufficiently close to $r$, then $x_n \to r$ as $n \to \infty$. Moreover,
$$\lim_{n \to \infty} \frac{x_{n+1} - r}{(x_n - r)^2} = \frac{f''(r)}{2 f'(r)}.$$
Horner's Algorithm
This method is also known as Horner's method, nested multiplication, or
synthetic division.
Horner's algorithm is a method for evaluating a polynomial at a point. Given a polynomial
$$p(x) = a_n x^n + a_{n-1} x^{n-1} + \cdots + a_1 x + a_0,$$
Horner's algorithm evaluates $p$ at a point $x_0$ as follows: set $b_n = a_n$ and
$$b_k = a_k + x_0\, b_{k+1}, \qquad k = n - 1, n - 2, \ldots, 0.$$
The final number $b_0$ satisfies $b_0 = p(x_0)$.
For example, given the polynomial $p(x) = x^3 - 2x + 5$ and $x_0 = 3$, evaluate $p(3)$: $b_3 = 1$, $b_2 = 0 + 3 \cdot 1 = 3$, $b_1 = -2 + 3 \cdot 3 = 7$, $b_0 = 5 + 3 \cdot 7 = 26$. Hence, we have $p(3) = 26$.
Theorem. Let $p(x) = \sum_{i=0}^{n} a_i x^i$. Define pairs $(\alpha_k, \beta_k)$ for $k = n, n-1, \ldots, 0$ by the algorithm
$$\alpha_n = a_n, \quad \beta_n = 0; \qquad \alpha_k = a_k + x_0\,\alpha_{k+1}, \quad \beta_k = \alpha_{k+1} + x_0\,\beta_{k+1} \quad (k = n-1, \ldots, 0).$$
Then $\alpha_0 = p(x_0)$ and $\beta_0 = p'(x_0)$.
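The following Python sketch implements the $(\alpha_k, \beta_k)$ recurrence, returning both $p(x_0)$ and $p'(x_0)$; the coefficient ordering is my own convention:

```python
def horner(coeffs, x0):
    """coeffs = [a_n, ..., a_1, a_0] (highest degree first).
    Returns (p(x0), p'(x0)) via the (alpha, beta) recurrence."""
    alpha = coeffs[0]               # alpha_n = a_n
    beta = 0.0                      # beta_n = 0
    for a in coeffs[1:]:
        beta = alpha + x0 * beta    # beta_k = alpha_{k+1} + x0 * beta_{k+1}
        alpha = a + x0 * alpha      # alpha_k = a_k + x0 * alpha_{k+1}
    return alpha, beta

# p(x) = x^3 - 2x + 5 at x0 = 3: p(3) = 26 and p'(3) = 3*9 - 2 = 25
print(horner([1, 0, -2, 5], 3))     # (26, 25)
```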
Contractive Mapping
Newton's method and Steffensen's method are examples of procedures whereby a sequence of points is computed by a formula of the form
$$x_{n+1} = F(x_n) \qquad (n \ge 0). \tag{1}$$
The algorithm defined by such an equation is called functional iteration.
"Numerical Analysis: Mathematics of Scientific Computing" page. 100.
A mapping $F$ of a metric space $(X, d)$ into itself is called a contraction if there exists a constant $\lambda$ with $0 \le \lambda < 1$ such that
$$d(F(x), F(y)) \le \lambda\, d(x, y)$$
for all $x, y$ in $X$.
"Numerical Analysis: Mathematics of Scientific Computing" page. 101.
Contractive Mapping Theorem
Theorem (Contractive Mapping Theorem).
Let $C$ be a closed subset of the real line.
If $F$ is a contractive mapping of $C$ into $C$, then $F$ has a unique fixed point.
Moreover, this fixed point is the limit of every sequence obtained from Equation (1) with a starting point $x_0 \in C$.
"Numerical Analysis: Mathematics of Scientific Computing" page. 102.
Polynomial Interpolation
Theorem on Polynomial Interpolation Error
Let $f$ be a function in $C^{n+1}[a, b]$, and let $p$ be the polynomial of degree at most $n$ that interpolates the function $f$ at $n + 1$ distinct points $x_0, x_1, \ldots, x_n$ in the interval $[a, b]$. To each $x$ in $[a, b]$ there corresponds a point $\xi_x$ in $(a, b)$ such that
$$f(x) - p(x) = \frac{1}{(n+1)!}\, f^{(n+1)}(\xi_x) \prod_{i=0}^{n} (x - x_i).$$
Lagrange's Form
The Lagrange form of the interpolation polynomial is a method for constructing a polynomial that passes through a given set of points. Specifically, given a set of $n + 1$ distinct data points $(x_0, y_0), (x_1, y_1), \ldots, (x_n, y_n)$, the Lagrange interpolation polynomial is defined as:
$$p(x) = \sum_{i=0}^{n} y_i\, \ell_i(x),$$
where $\ell_i(x)$ are the Lagrange basis polynomials given by:
$$\ell_i(x) = \prod_{\substack{j=0 \\ j \ne i}}^{n} \frac{x - x_j}{x_i - x_j}.$$
Each $\ell_i$ is a polynomial of degree $n$ that is 1 at $x_i$ and 0 at $x_j$ for $j \ne i$.
Given points: $(0, 1)$, $(1, 2)$, $(2, 5)$. Here $x_0 = 0$, $x_1 = 1$, and $x_2 = 2$.
Compute the Lagrange basis polynomials $\ell_i(x)$:
$$\ell_0(x) = \frac{(x - 1)(x - 2)}{(0 - 1)(0 - 2)} = \tfrac12 (x - 1)(x - 2),$$
$$\ell_1(x) = \frac{(x - 0)(x - 2)}{(1 - 0)(1 - 2)} = -x(x - 2),$$
$$\ell_2(x) = \frac{(x - 0)(x - 1)}{(2 - 0)(2 - 1)} = \tfrac12\, x(x - 1).$$
Construct the interpolation polynomial $p(x) = 1 \cdot \ell_0(x) + 2 \cdot \ell_1(x) + 5 \cdot \ell_2(x)$. Simplifying, we get:
$$p(x) = x^2 + 1.$$
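The worked example can be checked with a short Python sketch (helper names are illustrative):

```python
import numpy as np

xs = np.array([0.0, 1.0, 2.0])
ys = np.array([1.0, 2.0, 5.0])

def lagrange_eval(x, xs, ys):
    """Evaluate the Lagrange form sum_i y_i * ell_i(x)."""
    total = 0.0
    for i in range(len(xs)):
        # ell_i(x) = prod_{j != i} (x - x_j) / (x_i - x_j)
        ell = np.prod([(x - xs[j]) / (xs[i] - xs[j])
                       for j in range(len(xs)) if j != i])
        total += ys[i] * ell
    return total

# p(x) = x^2 + 1, so p(0.5) = 1.25 and p(1.5) = 3.25
print([lagrange_eval(t, xs, ys) for t in (0.5, 1.5)])
```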
Divided Differences
The divided differences of a set of data points are a sequence of numbers that are used to construct the Newton form of the interpolation polynomial. They are defined recursively by $f[x_i] = f(x_i)$ and
$$f[x_i, x_{i+1}, \ldots, x_{i+k}] = \frac{f[x_{i+1}, \ldots, x_{i+k}] - f[x_i, \ldots, x_{i+k-1}]}{x_{i+k} - x_i}.$$
Arranged in a triangular table, the first column holds the values $f[x_i]$, the second the first-order differences $f[x_i, x_{i+1}]$, the third the second-order differences, and so on.
Given the following table of data points:
$x$: $3$, $1$, $5$, $6$; $\quad f(x)$: $1$, $-3$, $2$, $4$.
Solution: We arrange the given table vertically and compute divided differences by the use of the formula above, arriving at
$$f[3, 1] = 2, \qquad f[1, 5] = \tfrac{5}{4}, \qquad f[5, 6] = 2,$$
$$f[3, 1, 5] = -\tfrac{3}{8}, \qquad f[1, 5, 6] = \tfrac{3}{20}, \qquad f[3, 1, 5, 6] = \tfrac{7}{40}.$$
Theorem on Permutations in Divided Differences. The divided difference is a symmetric function of its arguments. Thus, if $(z_0, z_1, \ldots, z_n)$ is a permutation of $(x_0, x_1, \ldots, x_n)$, then
$$f[z_0, z_1, \ldots, z_n] = f[x_0, x_1, \ldots, x_n].$$
Theorem on Derivatives and Divided Differences. If $f$ is $n$ times continuously differentiable on $[a, b]$ and if $x_0, x_1, \ldots, x_n$ are distinct points in $[a, b]$, then there exists a point $\xi$ in $(a, b)$ such that
$$f[x_0, x_1, \ldots, x_n] = \frac{f^{(n)}(\xi)}{n!}.$$
Newton's Form
The Newton form of the interpolation polynomial is a method for constructing a polynomial that passes through a given set of points. Specifically, given a set of $n + 1$ distinct data points $(x_0, y_0), \ldots, (x_n, y_n)$, the Newton interpolation polynomial is defined as:
$$p(x) = \sum_{k=0}^{n} c_k \prod_{j=0}^{k-1} (x - x_j),$$
where $c_k = f[x_0, x_1, \ldots, x_k]$ are the divided differences of the data points.
Here, the empty product $\prod_{j=0}^{-1} (x - x_j)$ is interpreted as $1$.
Given the following table of data points:
$x$: $3$, $1$, $5$, $6$; $\quad f(x)$: $1$, $-3$, $2$, $4$.
We already found the divided differences in the previous example. The Newton form of the interpolation polynomial is
$$p(x) = 1 + 2(x - 3) - \tfrac{3}{8}(x - 3)(x - 1) + \tfrac{7}{40}(x - 3)(x - 1)(x - 5).$$
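Both the divided-difference table and the Newton-form evaluation are easy to mechanize; this Python sketch (my own helper names) reproduces the coefficients and values from the example:

```python
def divided_differences(xs, ys):
    """Return [f[x0], f[x0,x1], ..., f[x0,...,xn]] computed in place."""
    n = len(xs)
    coef = list(ys)
    for k in range(1, n):
        for i in range(n - 1, k - 1, -1):
            coef[i] = (coef[i] - coef[i - 1]) / (xs[i] - xs[i - k])
    return coef

def newton_eval(x, xs, coef):
    """Nested (Horner-like) evaluation of the Newton form."""
    p = coef[-1]
    for k in range(len(coef) - 2, -1, -1):
        p = coef[k] + (x - xs[k]) * p
    return p

xs, ys = [3.0, 1.0, 5.0, 6.0], [1.0, -3.0, 2.0, 4.0]
c = divided_differences(xs, ys)
print(c)                                    # [1.0, 2.0, -0.375, 0.175]
print([newton_eval(t, xs, c) for t in xs])  # reproduces [1, -3, 2, 4]
```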
Hermite Interpolation
Hermite Interpolation Theorem. Let $n \ge 0$, and suppose that $x_i$, $i = 0, \ldots, n$, are distinct real numbers. Then, given two sets of real numbers $y_i$, $i = 0, \ldots, n$, and $z_i$, $i = 0, \ldots, n$, there is a unique polynomial $p_{2n+1}$ in $\mathcal{P}_{2n+1}$ such that
$$p_{2n+1}(x_i) = y_i, \qquad p_{2n+1}'(x_i) = z_i, \qquad i = 0, \ldots, n.$$
Remark. In general, given a set of $n + 1$ distinct points, there is a unique polynomial of degree at most $2n + 1$ that interpolates the function values and first derivatives at those points for Hermite interpolation.
"Introduction to Numerical Analysis", page. 188.
Theorem. Suppose that $n \ge 0$ and let $f$ be a real-valued function, defined, continuous and $2n + 2$ times differentiable on the interval $[a, b]$, such that $f^{(2n+2)}$ is continuous on $[a, b]$. Further, let $p_{2n+1}$ denote the Hermite interpolation polynomial of $f$ defined by (6.16). Then, for each $x \in [a, b]$ there exists $\xi = \xi(x)$ in $(a, b)$ such that
$$f(x) - p_{2n+1}(x) = \frac{f^{(2n+2)}(\xi)}{(2n+2)!}\, [\pi_{n+1}(x)]^2,$$
where $\pi_{n+1}$ is as defined in (6.9). Moreover,
$$|f(x) - p_{2n+1}(x)| \le \frac{M_{2n+2}}{(2n+2)!}\, [\pi_{n+1}(x)]^2,$$
where $M_{2n+2} = \max_{\zeta \in [a, b]} |f^{(2n+2)}(\zeta)|$.
"Introduction to Numerical Analysis", page. 190.
Theorem on Hermite Interpolation Error Estimate
Let $x_0, x_1, \ldots, x_n$ be distinct nodes in $[a, b]$ and let $f \in C^{2n+2}[a, b]$. If $p$ is the polynomial of degree at most $2n + 1$ such that
$$p(x_i) = f(x_i), \qquad p'(x_i) = f'(x_i), \qquad 0 \le i \le n,$$
then to each $x$ in $[a, b]$ there corresponds a point $\xi$ in $(a, b)$ such that
$$f(x) - p(x) = \frac{f^{(2n+2)}(\xi)}{(2n+2)!} \prod_{i=0}^{n} (x - x_i)^2.$$
"Numerical Analysis: Mathematics of Scientific Computing" page. 344.
Change of Intervals
Suppose that a numerical integration formula is given:
$$\int_a^b f(x)\,dx \approx \sum_{i=0}^{n} A_i f(x_i).$$
Let us assume that it is exact for all polynomials of degree $m$ or less. If we want to integrate the function $f$ over the interval $[c, d]$, we can use the change of interval formula.
We first define a linear function $\lambda$ of $t$ such that if $t$ traverses $[a, b]$, $\lambda(t)$ will traverse $[c, d]$. The function $\lambda$ is given explicitly by
$$\lambda(t) = \frac{(d - c)\,t + bc - ad}{b - a}.$$
Now in the integral
$$\int_c^d f(x)\,dx$$
we make the change of variable $x = \lambda(t)$. Then $dx = \lambda'(t)\,dt = \frac{d - c}{b - a}\,dt$, and so we have
$$\int_c^d f(x)\,dx = \frac{d - c}{b - a} \int_a^b f(\lambda(t))\,dt.$$
Hence, we have
$$\int_c^d f(x)\,dx \approx \frac{d - c}{b - a} \sum_{i=0}^{n} A_i f(\lambda(x_i)).$$
Observe that, because $\lambda$ is linear, $f(\lambda(t))$ is a polynomial in $t$ if $f$ is a polynomial, and the degrees are the same. Hence, the new formula is exact for polynomials of degree $m$.
For Simpson’s rule, this procedure yields Equation (6) as a consequence of Equation (5).
"Numerical Analysis: Mathematics of Scientific Computing," page. 486.
Gaussian Quadrature
The theory can be formulated for quadrature rules of a slightly more general form; namely,
$$\int_a^b w(x) f(x)\,dx \approx \sum_{k=0}^{n} W_k f(x_k),$$
where $w$ is a fixed positive weight function.
The Gauss quadrature rule:
$$\int_a^b w(x) f(x)\,dx \approx \sum_{k=0}^{n} W_k f(x_k),$$
where the quadrature weights are
$$W_k = \int_a^b w(x)\,[L_k(x)]^2\,dx, \qquad k = 0, \ldots, n,$$
and the quadrature points $x_k$, $k = 0, \ldots, n$, are chosen as the zeros of the polynomial of degree $n + 1$ from a system of orthogonal polynomials over $(a, b)$ with respect to the weight function $w$. Since this quadrature rule was obtained by exact integration of the Hermite interpolation polynomial of degree $2n + 1$ for $f$, it gives the exact result whenever $f$ is a polynomial of degree $2n + 1$ or less.
"Introduction to Numerical Analysis", page. 279.
Error Analysis for Gaussian Quadrature
Given the Gaussian quadrature formula
$$\int_a^b w(x) f(x)\,dx \approx \sum_{k=0}^{n} W_k f(x_k), \qquad W_k = \int_a^b w(x)\,[L_k(x)]^2\,dx.$$
Theorem. Suppose that $w$ is a weight function, defined, integrable, continuous and positive on $(a, b)$, and that $f$ is defined and continuous on $[a, b]$; suppose further that $f$ has a continuous derivative of order $2n + 2$ on $[a, b]$, $n \ge 0$. Then, there exists a number $\eta$ in $(a, b)$ such that
$$\int_a^b w(x) f(x)\,dx - \sum_{k=0}^{n} W_k f(x_k) = K_n\, f^{(2n+2)}(\eta)$$
and
$$K_n = \frac{1}{(2n+2)!} \int_a^b w(x)\,[\pi_{n+1}(x)]^2\,dx,$$
where $\pi_{n+1}(x) = (x - x_0)(x - x_1)\cdots(x - x_n)$.
Consequently, the integration formula (10.6), (10.7) will give the exact result for every polynomial of degree $2n + 1$.
"Introduction to Numerical Analysis," page. 282.
Theorem on Gaussian Formula with Error Term
Consider a Gaussian formula with error term:
$$\int_a^b f(x)\, w(x)\,dx = \sum_{i=0}^{n} A_i f(x_i) + E.$$
For an $f \in C^{2n+2}[a, b]$, we have
$$E = \frac{f^{(2n+2)}(\xi)}{(2n+2)!} \int_a^b q(x)\, w(x)\,dx,$$
where $q(x) = \prod_{i=0}^{n} (x - x_i)^2$ and $a < \xi < b$.
Proof. By Hermite interpolation (Section 6.3, p. 338), a polynomial $p$ of degree at most $2n + 1$ exists such that
$$p(x_i) = f(x_i), \qquad p'(x_i) = f'(x_i), \qquad 0 \le i \le n.$$
The error formula for this interpolation is
$$f(x) - p(x) = \frac{f^{(2n+2)}(\zeta_x)}{(2n+2)!}\, q(x), \tag{11}$$
as given in Theorem 2 of Section 6.3 (p. 344). Here $\prod_{i=0}^{n}(x - x_i)$ is the polynomial of degree $n + 1$ which is $w$-orthogonal to the polynomials of degree at most $n$.
It follows that
$$\int_a^b f(x)\, w(x)\,dx - \int_a^b p(x)\, w(x)\,dx = \frac{1}{(2n+2)!} \int_a^b f^{(2n+2)}(\zeta_x)\, q(x)\, w(x)\,dx.$$
Now use the fact that $p$ is of degree at most $2n + 1$ to see that
$$\int_a^b p(x)\, w(x)\,dx = \sum_{i=0}^{n} A_i\, p(x_i) = \sum_{i=0}^{n} A_i\, f(x_i).$$
Furthermore, the Mean-Value Theorem for Integrals can be used to write
$$\int_a^b f^{(2n+2)}(\zeta_x)\, q(x)\, w(x)\,dx = f^{(2n+2)}(\xi) \int_a^b q(x)\, w(x)\,dx.$$
This requires continuity of $x \mapsto f^{(2n+2)}(\zeta_x)$, which can be inferred from Equation (11). The desired error formula results from making the substitution described.
"Numerical Analysis: Mathematics of Scientific Computing" page. 498.
Theorem 8. Let $w$ be a weight function and $q_{n+1}$ be the monic polynomial of degree $n + 1$ orthogonal to all polynomials of smaller degrees. Let $x_i$, $i = 0, \ldots, n$, be zeros of $q_{n+1}$. Let $I_n(f)$ be the quadrature rule defined in Eqs. (36) and (37). Suppose $f \in C^{2n+2}[a, b]$. Then one can find $\xi \in (a, b)$ such that
$$\int_a^b w(x) f(x)\,dx - I_n(f) = \frac{f^{(2n+2)}(\xi)}{(2n+2)!} \int_a^b w(x)\,[q_{n+1}(x)]^2\,dx.$$
Proof. We use the Hermite interpolation to prove the error estimate. There exists a unique polynomial $p_{2n+1}$ of degree $2n + 1$ such that
$$p_{2n+1}(x_i) = f(x_i), \qquad p_{2n+1}'(x_i) = f'(x_i), \qquad i = 0, \ldots, n.$$
In addition, there exists $\zeta_x$ depending on $x$ such that
$$f(x) = p_{2n+1}(x) + \frac{f^{(2n+2)}(\zeta_x)}{(2n+2)!} \prod_{i=0}^{n} (x - x_i)^2.$$
Note that $\prod_{i=0}^{n} (x - x_i) = q_{n+1}(x)$ as $q_{n+1}$ is monic and $x_i$, $i = 0, \ldots, n$, are its roots. Therefore,
$$\int_a^b w(x) f(x)\,dx = \int_a^b w(x)\, p_{2n+1}(x)\,dx + \int_a^b w(x)\, \frac{f^{(2n+2)}(\zeta_x)}{(2n+2)!}\, [q_{n+1}(x)]^2\,dx.$$
Since the quadrature is exact for polynomials of degree $2n + 1$, it is exact for $p_{2n+1}$. Hence
$$\int_a^b w(x)\, p_{2n+1}(x)\,dx = I_n(p_{2n+1}) = I_n(f),$$
the last equality because $p_{2n+1}$ agrees with $f$ at the quadrature points.
On the other hand, because $w(x)\,[q_{n+1}(x)]^2 \ge 0$ on $[a, b]$, we can apply the mean value theorem and get
$$\int_a^b w(x)\, \frac{f^{(2n+2)}(\zeta_x)}{(2n+2)!}\, [q_{n+1}(x)]^2\,dx = \frac{f^{(2n+2)}(\xi)}{(2n+2)!} \int_a^b w(x)\,[q_{n+1}(x)]^2\,dx$$
for some $\xi \in (a, b)$. This completes the proof.
$\theta$-Method
Theta methods. We consider methods of the form
$$y_{n+1} = y_n + h\left[\theta\, f(t_n, y_n) + (1 - \theta)\, f(t_{n+1}, y_{n+1})\right], \qquad n = 0, 1, \ldots, \tag{3.3}$$
where $\theta \in [0, 1]$ is a parameter:
- If $\theta = 1$, we recover Euler's method.
- If $\theta \ne 1$, then the theta method (3.3) is implicit: each time step requires the solution of (in general, nonlinear) algebraic equations for the unknown vector $y_{n+1}$.
- The choices $\theta = 0$ and $\theta = \tfrac12$ are known as
Backward Euler: $y_{n+1} = y_n + h\, f(t_{n+1}, y_{n+1})$
Trapezoidal rule: $y_{n+1} = y_n + \tfrac{h}{2}\left[f(t_n, y_n) + f(t_{n+1}, y_{n+1})\right]$
Solution of the nonlinear algebraic equations can be done by iteration. For example, for backward Euler, letting $y_{n+1}^{[0]} = y_n$, we may use
Direct iteration: $y_{n+1}^{[j+1]} = y_n + h\, f(t_{n+1}, y_{n+1}^{[j]})$
Newton–Raphson: $y_{n+1}^{[j+1]} = y_{n+1}^{[j]} - \left[I - h\,\frac{\partial f}{\partial y}(t_{n+1}, y_{n+1}^{[j]})\right]^{-1}\left[y_{n+1}^{[j]} - y_n - h\, f(t_{n+1}, y_{n+1}^{[j]})\right]$
Modified Newton–Raphson: the same update with the Jacobian frozen at a single point, e.g. $\frac{\partial f}{\partial y}(t_{n+1}, y_{n+1}^{[0]})$, reused for all iterations.
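To make the Newton–Raphson variant concrete, here is a minimal scalar sketch of one backward Euler step (the test problem $y' = -y^3$ and the tolerances are my own illustrative choices):

```python
def backward_euler_step(f, dfdy, t_next, y_n, h, tol=1e-12):
    """Solve y = y_n + h f(t_next, y) for y by Newton-Raphson."""
    y = y_n                                  # initial iterate y^[0] = y_n
    for _ in range(50):
        g = y - y_n - h * f(t_next, y)       # residual of the implicit equation
        gprime = 1.0 - h * dfdy(t_next, y)   # d(residual)/dy
        step = g / gprime
        y -= step                            # Newton-Raphson update
        if abs(step) < tol:
            break
    return y

f = lambda t, y: -y ** 3
dfdy = lambda t, y: -3.0 * y ** 2
print(backward_euler_step(f, dfdy, t_next=0.1, y_n=1.0, h=0.1))  # ~0.9217
```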
Error
In numerically solving a differential equation, several types of errors
arise. These are conveniently classified as follows:
- Local truncation error
- Local roundoff error
- Global truncation error
- Global roundoff error
- Total error
Local Truncation Error. The local truncation error is the
error made in a single step of a numerical method.
Local Roundoff Error. The local roundoff error is the error made in a single step of a numerical method due to roundoff.
Global Truncation Error. The global truncation error is the
accumulation of the local truncation errors in the previous steps.
Global Roundoff Error. The global roundoff error is the
accumulation of the local roundoff errors in the previous steps.
Total Error. The total error is the sum of the global
truncation error and the global roundoff error.
"Numerical Analysis: Mathematics of Scientific Computing" page. 533.
Initial value problems for ODEs
Our model is an initial-value problem written in the form
$$\begin{cases} x' = f(t, x) \\ x(t_0) = x_0. \end{cases} \tag{1}$$
Here $x$ is an unknown function of $t$ that we hope to construct from the information given in Equations (1), where $x' = dx/dt$. The second of the two equations in (1) specifies one particular value of the function $x$. The first equation gives the slope.
"Numerical Analysis: Mathematics of Scientific Computing" page. 524.
Existence
First Existence Theorem, Initial-Value Problem. If $f$ is continuous in a rectangle $R$ centered at $(t_0, x_0)$, say
$$R = \{(t, x) : |t - t_0| \le \alpha,\ |x - x_0| \le \beta\}, \tag{4}$$
then the initial-value problem above has a solution $x(t)$ for $|t - t_0| \le \min(\alpha, \beta/M)$, where $M$ is the maximum of $|f(t, x)|$ in the rectangle $R$.
"Numerical Analysis: Mathematics of Scientific Computing" page. 525.
Second Existence Theorem, Initial-Value Problem. If $f$ is continuous in the strip $a \le t \le b$, $-\infty < x < \infty$, and satisfies there an inequality
$$|f(t, x_1) - f(t, x_2)| \le L\,|x_1 - x_2|, \tag{5}$$
then the initial-value problem (1) has a unique solution in the interval $[a, b]$.
"Numerical Analysis: Mathematics of Scientific Computing" page. 526.
Lipschitz Condition
Definition: A function $f(t, x)$ satisfies the Lipschitz condition in its second variable on a domain $D$ if there exists a constant $L$ such that for all $(t, x_1), (t, x_2) \in D$:
$$\|f(t, x_1) - f(t, x_2)\| \le L\,\|x_1 - x_2\|,$$
where $\|\cdot\|$ denotes a norm (commonly the Euclidean norm). And $L$ is called the Lipschitz constant.
Note. Hence, inequality (5) is the Lipschitz condition.
Uniqueness
Uniqueness Theorem, Initial-Value Problem.
If $f$ and $\partial f/\partial x$ are continuous in the rectangle $R$ defined by $\textbf{(4)}$, then the initial-value problem (1) has a unique solution in the interval $|t - t_0| < \min(\alpha, \beta/M)$.
Theorem (Picard's Theorem). Suppose that the real-valued function $f(x, y)$ is continuous in the rectangular region $D$ defined by $x_0 \le x \le X_M$, $y_0 - C \le y \le y_0 + C$; that $|f(x, y_0)| \le K$ when $x_0 \le x \le X_M$; and that $f$ satisfies the Lipschitz condition: there exists $L > 0$ such that
$$|f(x, u) - f(x, v)| \le L\,|u - v| \qquad \text{for all } (x, u), (x, v) \in D.$$
Assume further that
$$C \ge \frac{K}{L}\left(e^{L(X_M - x_0)} - 1\right).$$
Then, there exists a unique function $y \in C^1[x_0, X_M]$ such that $y(x_0) = y_0$ and $y' = f(x, y)$ for $x \in [x_0, X_M]$; moreover,
$$|y(x) - y_0| \le C, \qquad x_0 \le x \le X_M.$$
"Introduction to Numerical Analysis", page. 311.
One-step methods
Definition. Generally, a one-step method may be written in the form
$$y_{n+1} = y_n + h\,\Phi(x_n, y_n; h), \qquad n = 0, 1, \ldots, N - 1, \qquad y_0 = y(x_0), \tag{12.13}$$
where $\Phi(\cdot, \cdot; \cdot)$ is a continuous function of its variables.
"Introduction to Numerical Analysis", page. 317.
Definition. In order to find the accuracy of the numerical method (12.13), we define the global error, $e_n$, by
$$e_n = y(x_n) - y_n.$$
Definition. The truncation error, $T_n$, is defined by
$$T_n = \frac{y(x_{n+1}) - y(x_n)}{h} - \Phi(x_n, y(x_n); h). \tag{12.14}$$
"Introduction to Numerical Analysis", page. 317.
Theorem. Consider the general one-step method (12.13) where, in addition to being a continuous function of its arguments, $\Phi$ is assumed to satisfy a Lipschitz condition with respect to its second argument; that is, there exists a positive constant $L_\Phi$ such that, for $0 \le h \le h_0$ and for all $(x, u)$ and $(x, v)$ in the rectangle
$$D = \{(x, y) : x_0 \le x \le X_M,\ |y - y(x)| \le C\},$$
we have that
$$|\Phi(x, u; h) - \Phi(x, v; h)| \le L_\Phi\,|u - v|.$$
Then, assuming that $|y_n - y(x_n)| \le C$, $n = 1, 2, \ldots, N$, it follows that
$$|e_n| \le \frac{T}{L_\Phi}\left(e^{L_\Phi (x_n - x_0)} - 1\right), \qquad n = 0, 1, \ldots, N,$$
where $T = \max_{0 \le n \le N-1} |T_n|$.
"Introduction to Numerical Analysis", page. 318.
Euler's method. Given that $y(x_0) = y_0$, let us suppose that we have already calculated $y_n$, up to some $n$, $0 \le n \le N - 1$; we define
$$y_{n+1} = y_n + h\, f(x_n, y_n).$$
Thus, taking in succession $n = 0, 1, \ldots, N - 1$, one step at a time, the approximate values $y_n$ at the mesh points $x_n$ can be easily obtained. This numerical method is known as Euler's method.
"Introduction to Numerical Analysis", page. 317.
Consistency
Definition. The numerical method (12.13) is consistent with the differential equation (12.1) if the truncation error, defined by (12.14), is such that for any $\varepsilon > 0$ there exists a positive $h(\varepsilon)$ for which $|T_n| < \varepsilon$ for $0 < h < h(\varepsilon)$ and any pair of points $(x_n, y(x_n))$, $(x_{n+1}, y(x_{n+1}))$ on any solution curve in $D$.
"Introduction to Numerical Analysis", page. 321.
Theorem. Suppose that the initial value problem (12.1), (12.2) satisfies the conditions of Picard's Theorem, and also that its approximation generated from (12.13) when $h \le h_0$ lies in the region $D$. Assume further that the function $\Phi(\cdot, \cdot; \cdot)$ is continuous on $D \times [0, h_0]$, and satisfies the consistency condition (12.21) and the Lipschitz condition
$$|\Phi(x, u; h) - \Phi(x, v; h)| \le L_\Phi\,|u - v| \qquad \text{on } D \times [0, h_0].$$
Then, if successive approximation sequences $(y_n)$, generated by using the mesh points $x_n = x_0 + nh$, $n = 0, 1, \ldots, N$, are obtained from (12.13) with successively smaller values of $h$, each less than $h_0$, we have convergence of the numerical solution to the solution of the initial value problem in the sense that
$$|y(x_n) - y_n| \to 0 \quad \text{as } h \to 0,\ x_n \to x \in [x_0, X_M].$$
"Introduction to Numerical Analysis", page. 322.
Adams-Bashforth Formula
Suppose the resulting formula is of the following type:
$$x_{n+1} = x_n + h\left[\beta_0 f_n + \beta_1 f_{n-1} + \cdots + \beta_k f_{n-k}\right],$$
where $f_i$ denotes $f(t_i, x_i)$. An equation of this type is called an Adams-Bashforth formula.
Note. The Adams-Bashforth formula is an explicit method since it does not include $f_{n+1}$.
Adams-Bashforth Formula of Order $5$
$$x_{n+1} = x_n + \frac{h}{720}\left[1901 f_n - 2774 f_{n-1} + 2616 f_{n-2} - 1274 f_{n-3} + 251 f_{n-4}\right]$$
Adams-Moulton Formula
Definition. Suppose the resulting formula is of the following type:
$$x_{n+1} = x_n + h\left[\beta_0 f_{n+1} + \beta_1 f_n + \cdots + \beta_k f_{n-k+1}\right],$$
where $f_i$ denotes $f(t_i, x_i)$. An equation of this type is called an Adams-Moulton formula.
Note. The Adams-Moulton formula is an implicit method since it includes $f_{n+1}$.
Adams-Moulton Formula of Order $5$
$$x_{n+1} = x_n + \frac{h}{720}\left[251 f_{n+1} + 646 f_n - 264 f_{n-1} + 106 f_{n-2} - 19 f_{n-3}\right]$$
"Numerical Analysis: Mathematics of Scientific Computing", page. 551.
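As a sanity check, the following Python sketch drives the order-5 Adams-Bashforth formula on $x' = x$, $x(0) = 1$; the five starting values are bootstrapped from the exact solution $e^t$ (an illustrative setup of mine, not from the book):

```python
import math

h, N = 0.01, 100
f = lambda t, x: x
ts = [i * h for i in range(N + 1)]
xs = [math.exp(t) for t in ts[:5]]          # exact starting values x_0 .. x_4

for n in range(4, N):
    fn = [f(ts[n - j], xs[n - j]) for j in range(5)]   # f_n, ..., f_{n-4}
    xs.append(xs[n] + h / 720 * (1901 * fn[0] - 2774 * fn[1] + 2616 * fn[2]
                                 - 1274 * fn[3] + 251 * fn[4]))

print(xs[-1], math.exp(1.0))   # agree to about 10 decimal places
```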
Linear Multistep Methods
Linear $k$-step method
The format of any such method is
$$a_k x_n + a_{k-1} x_{n-1} + \cdots + a_0 x_{n-k} = h\left[b_k f_n + b_{k-1} f_{n-1} + \cdots + b_0 f_{n-k}\right].$$
Or in general, it can be written as
$$\sum_{i=0}^{k} a_i\, x_{n-k+i} = h \sum_{i=0}^{k} b_i\, f_{n-k+i}, \tag{9}$$
where $a_k \ne 0$. This is called a $k$-step method.
Note. The coefficients on the left side of equation (9) define the corresponding (first) characteristic polynomial of the method, $\rho(z) = a_k z^k + a_{k-1} z^{k-1} + \cdots + a_0$.
Note
- The coefficients $a_i$ and $b_i$ are given.
- $x_i$ denotes an approximation to the solution at $t_i = t_0 + ih$, with $h$ the step size.
- The letter $f_i$ denotes $f(t_i, x_i)$.
- The formula above is used to compute $x_n$, assuming that $x_{n-k}, \ldots, x_{n-1}$ are already known (assume that $a_k \ne 0$).
- If $b_k = 0$, the method is said to be explicit.
- If $b_k \ne 0$, the method is then said to be implicit.
Examples.
- The implicit Euler method $y_{n+1} - y_n = h\, f(x_{n+1}, y_{n+1})$ is an implicit linear one-step method.
- The trapezium rule method $y_{n+1} - y_n = \tfrac{h}{2}\left[f(x_{n+1}, y_{n+1}) + f(x_n, y_n)\right]$ is an implicit linear one-step method.
- The Adams-Bashforth method $y_{n+4} = y_{n+3} + \tfrac{h}{24}\left[55 f_{n+3} - 59 f_{n+2} + 37 f_{n+1} - 9 f_n\right]$ is an example of an explicit linear four-step method.
- The Adams-Moulton method $y_{n+3} = y_{n+2} + \tfrac{h}{24}\left[9 f_{n+3} + 19 f_{n+2} - 5 f_{n+1} + f_n\right]$ is an implicit linear three-step method.
"Introduction to Numerical Analysis" page. 331.
Order
The accuracy of a numerical solution of a differential equation is largely determined by the order of the algorithm used.
Definition. The order indicates how many terms in a Taylor-series solution are being simulated by the method.
For example, the Adams-Bashforth method
$$x_{n+1} = x_n + \frac{h}{720}\left[1901 f_n - 2774 f_{n-1} + 2616 f_{n-2} - 1274 f_{n-3} + 251 f_{n-4}\right]$$
is said to be of order 5 because it produces approximately the same accuracy as the Taylor-series method with terms $h, h^2, \ldots, h^5$, that is, up to and including $\frac{h^5}{5!}\, x^{(5)}$. The error can then be expected to be $\mathcal{O}(h^6)$ in each step of a solution using the above method.
In order to have a better understanding of the order of multistep methods, we will define the following linear functional:
$$Lx = \sum_{i=0}^{k}\left[a_i\, x(ih) - h\, b_i\, x'(ih)\right]. \tag{10}$$
In the following analysis, we assume that $x$ is represented by its Taylor series at $0$. By using the Taylor series for $x$, one can express $Lx$ in this form:
$$Lx = d_0\, x(0) + d_1\, h\, x'(0) + d_2\, h^2\, x''(0) + \cdots \tag{11}$$
To compute the coefficients, $d_j$, in Equation (11), we write the Taylor series for $x(ih)$ and $x'(ih)$:
$$x(ih) = \sum_{j=0}^{\infty} \frac{(ih)^j}{j!}\, x^{(j)}(0), \qquad x'(ih) = \sum_{j=0}^{\infty} \frac{(ih)^j}{j!}\, x^{(j+1)}(0).$$
These series are then substituted into Equation (10). Rearranging the result in powers of $h$, we obtain an equation having the form of (11), with these values of $d_j$:
$$d_0 = \sum_{i=0}^{k} a_i, \qquad d_j = \frac{1}{j!}\sum_{i=0}^{k} i^j a_i - \frac{1}{(j-1)!}\sum_{i=0}^{k} i^{j-1} b_i \quad (j \ge 1).$$
Here we use the conventions $0^0 = 1$ and $0! = 1$.
Theorem on Multistep Method Properties.
These three properties of the multistep method (9) are equivalent:
- $d_0 = d_1 = \cdots = d_m = 0$
- $Lp = 0$ for each polynomial $p$ of degree $\le m$
- $Lx$ is $\mathcal{O}(h^{m+1})$ for all $x \in C^{m+1}$
"Numerical Analysis: Mathematics of Scientific Computing", page. 552 & 553.
Alternate Definition of Order
Now, we can define the order of the multistep method as follows:
The order of the multistep method in Equation (9) is the unique natural number $m$ such that
$$d_0 = d_1 = \cdots = d_m = 0 \ne d_{m+1}.$$
EXAMPLE 1. What is the order of the method described by the following equation?
$$x_n = -4 x_{n-1} + 5 x_{n-2} + h\left[4 f_{n-1} + 2 f_{n-2}\right]$$
Solution. Firstly, we rewrite the equation in the form of Equation (9):
$$x_n + 4 x_{n-1} - 5 x_{n-2} = h\left[4 f_{n-1} + 2 f_{n-2}\right].$$
And we have to match it to
$$a_2 x_n + a_1 x_{n-1} + a_0 x_{n-2} = h\left[b_2 f_n + b_1 f_{n-1} + b_0 f_{n-2}\right].$$
The vector $(a_0, a_1, a_2)$ is $(-5, 4, 1)$ and the vector $(b_0, b_1, b_2)$ is $(2, 4, 0)$. Thus, the $d_j$ are
$$d_0 = d_1 = d_2 = d_3 = 0, \qquad d_4 = \tfrac{1}{6} \ne 0.$$
The order of the method is $3$.
"Numerical Analysis: Mathematics of Scientific Computing", page 554.
Root Condition
Definition. The condition that the roots of the first characteristic polynomial $\rho(z)$ lie in the closed unit disc $|z| \le 1$, and that those on the unit circle are simple, is often called the Root Condition.
Stability
Definition. The roots of $\rho(z)$ of modulus one are called essential roots.
The root $z = 1$ is called the principal root.
The roots of $\rho(z)$ of modulus less than one are called nonessential roots.
Weak Stability
Definition. A linear multistep method is strongly stable if all roots of $\rho(z)$ are $\le 1$ in magnitude and only one root has magnitude one.
If more than one root has magnitude one, the method is called weakly or conditionally stable.
Note. We still require that every root of magnitude one be simple. Also, note these definitions refer to the case $\bar h = h\lambda = 0$, i.e. to the roots of $\rho$ alone.
Strong Stability
Definition. A multistep method is said to be strongly stable if $z = 1$ is a simple root of $\rho$, and all other roots $z$ of $\rho$ satisfy the inequality $|z| < 1$.
"Numerical Analysis: Mathematics of Scientific Computing", page 564.
Absolute Stability
Definition. A linear multistep method is said to be absolutely stable for a given value of $\bar h = h\lambda$ if each root $z_r = z_r(\bar h)$ of the associated stability polynomial $\pi(z; \bar h) = \rho(z) - \bar h\,\sigma(z)$ satisfies $|z_r| < 1$. (Here $\sigma(z) = b_k z^k + \cdots + b_0$ is the second characteristic polynomial.)
Region of Absolute Stability
Definition. The region of absolute stability of a linear multistep method is the set of all points $\bar h = h\lambda$ in the complex plane for which the method is absolutely stable.
A-stable
Definition. A linear multistep method is said to be A-stable if its region of absolute stability contains the negative (left) complex half-plane.
Second Barrier Theorem:
- No explicit linear multistep method is A-stable.
- No A-stable linear multistep method can have order greater than 2.
- The second-order A-stable linear multistep method with the smallest error constant is the trapezium rule method.
"Introduction to Numerical Analysis", page 348.
Local Truncation Error, Multistep Method
Theorem (Local Truncation Error, Multistep Method).
If the multistep method (2) is of order $m$, if $x \in C^{m+2}$, and if $\partial f/\partial x$ is continuous, then under the hypotheses of the preceding paragraph,
$$x(t_n) - x_n = \left(\frac{d_{m+1}}{a_k}\right) h^{m+1}\, x^{(m+1)}(t_{n-k}) + \mathcal{O}(h^{m+2}). \tag{13}$$
(The coefficients $d_i$ are defined in Section 8.4, p. 553.)
"Numerical Analysis: Mathematics of Scientific Computing", page 560.
Proof. It suffices to prove the equation when $n = k$, because $x_n$ can be interpreted as the value of a numerical solution that began at the point $t_{n-k}$. Using the linear functional $L$, Equation (10) in Section 8.4 (p. 552), we have
$$Lx = \sum_{i=0}^{k}\left[a_i\, x(t_i) - h\, b_i\, f(t_i, x(t_i))\right]. \tag{14}$$
On the other hand, the numerical solution satisfies the equation
$$0 = \sum_{i=0}^{k}\left[a_i\, x_i - h\, b_i\, f(t_i, x_i)\right]. \tag{15}$$
Because we have assumed that $x_i = x(t_i)$ for $0 \le i \le k - 1$, the result of subtracting Equation (15) from Equation (14) will be
$$Lx = a_k\left[x(t_k) - x_k\right] - h\, b_k\left[f(t_k, x(t_k)) - f(t_k, x_k)\right]. \tag{16}$$
To the last term of Equation (16) we apply the Mean-Value Theorem, obtaining
$$Lx = \left[a_k - h\, b_k\, \frac{\partial f}{\partial x}(t_k, \xi)\right]\left[x(t_k) - x_k\right], \tag{17}$$
where $\xi$ lies between $x(t_k)$ and $x_k$. Now recall from Equation (13) in Section 8.4 (p. 553) that if the method being used is of order $m$, then $Lx$ will have the form
$$Lx = d_{m+1}\, h^{m+1}\, x^{(m+1)}(t_0) + \mathcal{O}(h^{m+2}). \tag{18}$$
Combining Equations (17) and (18), we obtain (13). Here we can ignore the term $-h\, b_k\, \partial f/\partial x$ in the denominator. (See Problem 8.5.8, p. 564.)
"Numerical Analysis: Mathematics of Scientific Computing", page 563.
Hermitian Matrix
Definition. A matrix $A$ is called a Hermitian matrix if it is equal to its conjugate transpose, that is, $A = A^*$.
Theorem. Every Hermitian matrix can be diagonalized by a unitary matrix,
$$A = U \Lambda U^*,$$
where $U$ is unitary and $\Lambda$ is a diagonal matrix.
Proof. By the Schur decomposition, the Hermitian matrix $A$ has the Schur form $A = U T U^*$, where $T$ is upper triangular.
Since $A = A^*$, we have $U T U^* = U T^* U^*$. Thus, $T = T^*$. Since $T$ is upper triangular, $T^*$ is lower triangular. Thus, $T$ is diagonal.
QR Factorization
Reduced QR Factorization
Definition [Reduced QR factorization]. Let $A \in \mathbb{C}^{m \times n}$ ($m \ge n$). There is a matrix $\hat{Q} \in \mathbb{C}^{m \times n}$ with orthonormal columns and a square upper triangular matrix $\hat{R} \in \mathbb{C}^{n \times n}$ such that
$$A = \hat{Q}\hat{R}.$$
There are several ways to compute the QR factorization of a matrix.
- Gram-Schmidt Process. The Gram-Schmidt process is a method for orthonormalizing a set of vectors. Given a set of linearly independent vectors $a_1, \ldots, a_n$, the Gram-Schmidt process produces an orthonormal set of vectors $q_1, \ldots, q_n$ such that $\mathrm{span}\{q_1, \ldots, q_j\} = \mathrm{span}\{a_1, \ldots, a_j\}$ for each $j = 1, \ldots, n$.
- Householder Transformation. The Householder transformation is a reflection across a hyperplane. Given a vector $x$, the Householder reflector is defined as
$$H = I - 2\,\frac{v v^*}{v^* v},$$
where $v = x + \operatorname{sign}(x_1)\,\|x\|_2\, e_1$ and $e_1$ is the first unit vector.
- Givens Rotation. The Givens rotation is a rotation in the plane spanned by two coordinate axes. Given a vector $(a, b)^T$, the Givens rotation is defined as
$$G = \begin{pmatrix} c & s \\ -s & c \end{pmatrix},$$
where $c = \dfrac{a}{\sqrt{a^2 + b^2}}$ and $s = \dfrac{b}{\sqrt{a^2 + b^2}}$.
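The sketch below assembles a QR factorization from Householder reflections as described above; it is illustrative and unoptimized, not library-grade code:

```python
import numpy as np

def householder_qr(A):
    """Return Q, R with A = Q R, built from Householder reflections."""
    m, n = A.shape
    R = A.astype(float).copy()
    Q = np.eye(m)
    for k in range(n):
        x = R[k:, k]
        v = x.copy()
        v[0] += (np.sign(x[0]) if x[0] != 0 else 1.0) * np.linalg.norm(x)
        v /= np.linalg.norm(v)
        # Apply H = I - 2 v v^T to the trailing submatrix and accumulate Q
        R[k:, k:] -= 2.0 * np.outer(v, v @ R[k:, k:])
        Q[:, k:] -= 2.0 * np.outer(Q[:, k:] @ v, v)
    return Q, R

A = np.array([[2.0, 1.0], [1.0, 3.0], [0.0, 1.0]])
Q, R = householder_qr(A)
print(np.allclose(Q @ R, A), np.allclose(Q.T @ Q, np.eye(3)))  # True True
```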
LU Decomposition
The LU decomposition of a matrix $A$ is a factorization of $A$ into a product of a lower triangular matrix $L$ and an upper triangular matrix $U$. The LU decomposition with partial pivoting is given by
$$PA = LU,$$
where $P$ is a permutation matrix, $L$ is a lower triangular matrix with ones on the diagonal, and $U$ is an upper triangular matrix. The LU decomposition is computed using Gaussian elimination with partial pivoting.
One thing we need to know is when the LU decomposition exists, and the following proposition gives us the answer.
Proposition 5. Let $A$ be an $n \times n$ matrix and for $1 \le k \le n$, let $A_k$ denote the submatrix of $A$ consisting of the first $k$ rows and columns of $A$. If $A_k$ is invertible for $k = 1, \ldots, n - 1$, then $A$ has a unique LU-decomposition.
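In practice one typically calls a library routine instead of rolling one's own; for instance, SciPy's `scipy.linalg.lu` (note that SciPy returns the factorization as $A = PLU$):

```python
import numpy as np
from scipy.linalg import lu

A = np.array([[2.0, 1.0, 1.0],
              [4.0, 3.0, 3.0],
              [8.0, 7.0, 9.0]])

P, L, U = lu(A)                        # SciPy convention: A = P @ L @ U
print(np.allclose(A, P @ L @ U))       # True
print(L)                               # unit lower triangular
print(U)                               # upper triangular
```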
Bauer-Fike Theorem
Consider a diagonalizable matrix $A \in \mathbb{C}^{n \times n}$ with non-singular eigenvector matrix $V \in \mathbb{C}^{n \times n}$, such that $A = V \Lambda V^{-1}$, where $\Lambda$ is a diagonal matrix. If $X \in \mathbb{C}^{n \times n}$ is invertible, its condition number in the $p$-norm is denoted by $\kappa_p(X)$ and defined as:
$$\kappa_p(X) = \|X\|_p\,\|X^{-1}\|_p.$$
Let $A$ be an $n \times n$ matrix with a complete set of linearly independent eigenvectors and suppose that $A = V \Lambda V^{-1}$, where $V$ is non-singular and $\Lambda$ is diagonal.
Let $E$ be a perturbation of $A$ and let $\mu$ be an eigenvalue of $A + E$. Then $A$ has an eigenvalue $\lambda$ such that
$$|\mu - \lambda| \le \kappa_p(V)\,\|E\|_p,$$
where $\kappa_p(V)$ is the $p$-norm condition number of $V$.
The Bauer-Fike Theorem:
- Suppose $\mu$ is an eigenvalue of $A + E$.
- Then there exists $\lambda \in \sigma(A)$ (i.e., an eigenvalue of $A$) such that:
$$|\mu - \lambda| \le \kappa_p(V)\,\|E\|_p.$$
- Here, $\|\cdot\|_p$ denotes the induced $p$-norm of a matrix.
The Bauer–Fike theorem is a powerful result, and its
proof involves several key concepts from linear algebra and perturbation
theory. Let's outline the main steps of the proof:
- Eigenvalue Perturbation:
  - Consider a matrix $A$ with eigenvalues $\lambda_1, \ldots, \lambda_n$.
  - Suppose we have a perturbed matrix $A + E$, where $E$ represents a small perturbation.
  - We want to understand how the eigenvalues of $A + E$ relate to the eigenvalues of $A$.
- Eigenvector Matrix and Diagonalization:
  - Recall that $A$ can be diagonalized as $A = V \Lambda V^{-1}$, where:
    - $V$ is the matrix of eigenvectors of $A$.
    - $\Lambda$ is the diagonal matrix with eigenvalues $\lambda_i$ on the diagonal.
  - The eigenvector matrix $V$ is invertible.
- Bounding the Deviation:
  - Let $\mu$ be an eigenvalue of $A + E$.
  - We want to find a bound on $\min_i |\mu - \lambda_i|$, where $\lambda_i$ is an eigenvalue of $A$.
  - Using the eigenvector matrix $V$, we can express $A + E$ as $A + E = V\left(\Lambda + V^{-1} E V\right)V^{-1}$. Here, the $i$-th column of $V$ corresponds to the eigenvector for $\lambda_i$.
  - Since $E$ is small, we can bound the deviation:
$$\min_i |\mu - \lambda_i| \le \|V^{-1} E V\|_p \le \|V^{-1}\|_p\,\|E\|_p\,\|V\|_p.$$
The inequality follows from the submultiplicativity of matrix norms.
- Condition Number of $V$:
  - The condition number of $V$ in the $p$-norm is denoted by $\kappa_p(V)$.
  - It measures how sensitive the eigenvectors are to perturbations.
  - We have: $\kappa_p(V) = \|V\|_p\,\|V^{-1}\|_p$.
- Alternate Formulation:
  - If we have an approximate eigenvalue-eigenvector pair $(\tilde{\lambda}, \tilde{v})$ with residual $r = A\tilde{v} - \tilde{\lambda}\,\tilde{v}$, we can bound the error:
$$\min_{\lambda \in \sigma(A)} |\tilde{\lambda} - \lambda| \le \kappa_p(V)\,\frac{\|r\|_p}{\|\tilde{v}\|_p}.$$
- Relative Bound:
  - If $A$ is invertible and $\mu$ is an eigenvalue of $A + E$, then:
$$\min_{\lambda \in \sigma(A)} \frac{|\mu - \lambda|}{|\lambda|} \le \kappa_p(V)\,\|A^{-1} E\|_p.$$
Theorem [Bauer-Fike Theorem]. Suppose $A = V \Lambda V^{-1}$ and $\mu \in \sigma(A + E)$, where
$$\Lambda = \mathrm{diag}(\lambda_1, \ldots, \lambda_n).$$
Then for any eigenvalue $\mu$ of $A + E$, there exists $\lambda$, an eigenvalue of $A$, such that
$$|\mu - \lambda| \le \kappa_p(V)\,\|E\|_p,$$
where $\kappa_p(V) = \|V\|_p\,\|V^{-1}\|_p$.
Proof. If $\mu \in \sigma(A)$ the claim is trivial, so suppose $\mu \notin \sigma(A)$. Suppose $x$ is an eigenvector of $A + E$ corresponding to $\mu$. Then for $y = V^{-1}x \ne 0$,
$$(A + E)x = \mu x \qquad\text{and}\qquad (\mu I - A)x = Ex,$$
from which
$$(\mu I - \Lambda)\,y = V^{-1} E V\, y.$$
Then
$$\|y\|_p \le \|(\mu I - \Lambda)^{-1}\|_p\,\|V^{-1}\|_p\,\|E\|_p\,\|V\|_p\,\|y\|_p, \qquad\text{so}\qquad 1 \le \|(\mu I - \Lambda)^{-1}\|_p\,\kappa_p(V)\,\|E\|_p.$$
Since $\mu I - \Lambda$ is diagonal,
$$\|(\mu I - \Lambda)^{-1}\|_p = \max_i \frac{1}{|\mu - \lambda_i|} = \frac{1}{\min_i |\mu - \lambda_i|}.$$
Suppose the minimum is attained when $i = k$. Then
$$|\mu - \lambda_k| \le \kappa_p(V)\,\|E\|_p.$$
"Lecture note No. 22"
Norms
Definition (Frobenius norm). For a matrix $A \in \mathbb{C}^{m \times n}$, the Frobenius norm is defined as
$$\|A\|_F = \left(\sum_{i=1}^{m}\sum_{j=1}^{n} |a_{ij}|^2\right)^{1/2}.$$
Definition ($p$-norm). For a matrix $A \in \mathbb{C}^{m \times n}$, the induced $p$-norm is defined as
$$\|A\|_p = \sup_{x \ne 0} \frac{\|Ax\|_p}{\|x\|_p},$$
where $x$ is a vector in $\mathbb{C}^n$.
Theorem. For any $A \in \mathbb{C}^{m \times n}$ and unitary $Q \in \mathbb{C}^{m \times m}$, we have
$$\|QA\|_2 = \|A\|_2, \qquad \|QA\|_F = \|A\|_F.$$
"Numerical Linear Algebra", Page. 23
Theorem. For any $A \in \mathbb{C}^{m \times n}$ and unitary $Q \in \mathbb{C}^{n \times n}$, we have
$$\|AQ\|_2 = \|A\|_2, \qquad \|AQ\|_F = \|A\|_F.$$
Positive Definite
The following definitions all involve the term $x^* M x$. Notice that this is always a real number for any Hermitian square matrix $M$.
An $n \times n$ Hermitian complex matrix $M$ is said to be positive-definite if $x^* M x > 0$ for all non-zero $x$ in $\mathbb{C}^n$. Formally,
$$x^* M x > 0 \quad \text{for all } x \in \mathbb{C}^n \setminus \{0\}.$$
An $n \times n$ Hermitian complex matrix $M$ is said to be positive semi-definite or non-negative-definite if $x^* M x \ge 0$ for all $x$ in $\mathbb{C}^n$. Formally,
$$x^* M x \ge 0 \quad \text{for all } x \in \mathbb{C}^n.$$
Theorem. A matrix $M$ is positive-definite if and only if it satisfies any of the following equivalent conditions:
- $M$ is congruent with a diagonal matrix with positive real entries.
- $M$ is symmetric or Hermitian, and all its eigenvalues are real and positive.
- $M$ is symmetric or Hermitian, and all its leading principal minors are positive.
- There exists an invertible matrix $B$ with conjugate transpose $B^*$ such that $M = B^* B$.
Gerschgorin’s Theorem
Theorem [Row version Gerschgorin’s Theorem] Let $A \in \mathbb{C}^{n \times n}$. Consider $n$ disks in the complex plane:
$$D_i = \left\{ z \in \mathbb{C} : |z - a_{ii}| \le \sum_{j \ne i} |a_{ij}| \right\}, \qquad i = 1, \ldots, n.$$
Then we have the following results.
- Any eigenvalue of $A$ must lie in at least one of these disks.
- Suppose the disks form disjoint regions $R_1, \ldots, R_m$, and each $R_\ell$ ($1 \le \ell \le m$) is connected and consists of $k_\ell$ disks. Then for each $\ell$ from 1 to $m$, exactly $k_\ell$ eigenvalues of $A$ are in $R_\ell$.
Theorem [Column version Gerschgorin’s Theorem] Let $A \in \mathbb{C}^{n \times n}$. Consider $n$ disks in the complex plane:
$$D_j = \left\{ z \in \mathbb{C} : |z - a_{jj}| \le \sum_{i \ne j} |a_{ij}| \right\}, \qquad j = 1, \ldots, n.$$
Then we have the following results.
- Any eigenvalue of $A$ must lie in at least one of the disks.
- Suppose the disks form disjoint regions $R_1, \ldots, R_m$, and each $R_\ell$ ($1 \le \ell \le m$) is connected and consists of $k_\ell$ disks. Then for each $\ell$ from 1 to $m$, exactly $k_\ell$ eigenvalues of $A$ are in $R_\ell$.
Theorem [Row version Generalized Gerschgorin’s Theorem] Let $A \in \mathbb{C}^{n \times n}$. Consider $n$ disks in the complex plane:
$$D_i = \left\{ z \in \mathbb{C} : |z - a_{ii}| \le \frac{1}{d_i} \sum_{j \ne i} d_j\,|a_{ij}| \right\}, \qquad i = 1, \ldots, n,$$
where $d_1, \ldots, d_n$ are arbitrary positive numbers. Then we have the following results.
- Any eigenvalue of $A$ must lie in at least one of these disks.
- Suppose the disks form disjoint regions $R_1, \ldots, R_m$, and each $R_\ell$ ($1 \le \ell \le m$) is connected and consists of $k_\ell$ disks. Then for each $\ell$ from 1 to $m$, exactly $k_\ell$ eigenvalues of $A$ are in $R_\ell$.
"Lecture note No. 22"
Rayleigh Quotient
Definition. Let $A \in \mathbb{R}^{m \times m}$ be symmetric. For a given vector $x \ne 0$, the Rayleigh quotient of $x$ is the scalar defined by
$$r(x) = \frac{x^T A x}{x^T x}.$$
Algorithm: Rayleigh quotient iteration
Choose an initial vector $v^{(0)}$ with $\|v^{(0)}\|_2 = 1$.
Compute $\lambda^{(0)} = (v^{(0)})^T A\, v^{(0)}$.
For $k = 1, 2, \ldots$
- Solve $\left(A - \lambda^{(k-1)} I\right) w = v^{(k-1)}$ for $w$
- Compute $v^{(k)} = w / \|w\|_2$
- Compute $\lambda^{(k)} = (v^{(k)})^T A\, v^{(k)}$
End
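A direct transcription of the iteration in NumPy (the matrix and starting vector are my own test data; if $A - \lambda^{(k)} I$ ever became exactly singular, the iteration has already converged):

```python
import numpy as np

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 4.0]])

v = np.array([1.0, 1.0, 1.0]) / np.sqrt(3.0)     # ||v^(0)||_2 = 1
lam = v @ A @ v                                  # lambda^(0)
for k in range(1, 6):
    w = np.linalg.solve(A - lam * np.eye(3), v)  # (A - lam I) w = v^(k-1)
    v = w / np.linalg.norm(w)                    # v^(k)
    lam = v @ A @ v                              # lambda^(k)
    print(k, lam, np.linalg.norm(A @ v - lam * v))  # residual collapses fast
```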
Upper Hessenberg Matrix
Definition. A matrix $H \in \mathbb{C}^{n \times n}$ is upper Hessenberg if $h_{ij} = 0$ for all $i > j + 1$; or equivalently, every entry below the first subdiagonal is zero.
$H$ is called unreduced upper Hessenberg if $h_{j+1, j} \ne 0$ for $j = 1, \ldots, n - 1$. Otherwise $H$ is reduced upper Hessenberg.
References
- D. Kincaid & W. Cheney, "Numerical Analysis: Mathematics of Scientific Computing", AMS, Third Edition, 2009.
- E. Süli & D. F. Mayers, "An Introduction to Numerical Analysis", Cambridge University Press, 2003.
- L. N. Trefethen & D. Bau III, "Numerical Linear Algebra", SIAM, 1997.