== Computational efficiency ==

The number of [[Arithmetic#Arithmetic operations|arithmetic operations]] required to perform row reduction is one way of measuring the algorithm's computational efficiency. For example, to solve a system of {{math|''n''}} equations for {{math|''n''}} unknowns by performing row operations on the matrix until it is in echelon form, and then solving for each unknown in reverse order, requires {{math|''n''(''n'' + 1)/2}} divisions, {{math|(2''n''<sup>3</sup> + 3''n''<sup>2</sup> − 5''n'')/6}} multiplications, and {{math|(2''n''<sup>3</sup> + 3''n''<sup>2</sup> − 5''n'')/6}} subtractions,<ref>{{harvnb|Farebrother|1988|p=12}}</ref> for a total of approximately {{math|2''n''<sup>3</sup>/3}} operations. Thus it has an ''arithmetic complexity'' ([[time complexity]], where each arithmetic operation takes a unit of time, independently of the size of the inputs) of {{math|O(''n''<sup>3</sup>)}}. This complexity is a good measure of the time needed for the whole computation when the time for each arithmetic operation is approximately constant. This is the case when the coefficients are represented by [[Floating-point arithmetic|floating-point numbers]] or when they belong to a [[finite field]].
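The operation counts above can be checked directly. The following is a minimal Python sketch (not from any particular library) of elimination to echelon form followed by back substitution, with counters for each kind of operation; it assumes all pivots are nonzero, since no pivoting is performed.

```python
def gaussian_solve(a, b):
    """Solve a x = b by forward elimination and back substitution,
    counting divisions, multiplications, and subtractions.
    Assumes every pivot a[k][k] is nonzero (no pivoting)."""
    n = len(b)
    a = [row[:] for row in a]   # work on copies
    b = b[:]
    divs = mults = subs = 0
    for k in range(n):                       # eliminate below column k
        for i in range(k + 1, n):
            m = a[i][k] / a[k][k]; divs += 1
            for j in range(k + 1, n):
                a[i][j] -= m * a[k][j]; mults += 1; subs += 1
            b[i] -= m * b[k]; mults += 1; subs += 1
            a[i][k] = 0.0
    x = [0.0] * n
    for i in range(n - 1, -1, -1):           # back substitution
        s = b[i]
        for j in range(i + 1, n):
            s -= a[i][j] * x[j]; mults += 1; subs += 1
        x[i] = s / a[i][i]; divs += 1
    return x, divs, mults, subs
```

For {{math|1=''n'' = 3}} the formulas predict 6 divisions and 11 multiplications and subtractions each, which is what the counters report.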
If the coefficients are [[integer]]s or [[rational number]]s exactly represented, the intermediate entries can grow exponentially large, so the [[bit complexity]] is exponential.<ref>{{Cite conference | first1 = Xin Gui | last1 = Fang | first2 = George | last2 = Havas | title = On the worst-case complexity of integer Gaussian elimination | book-title = Proceedings of the 1997 international symposium on Symbolic and algebraic computation | conference = ISSAC '97 | pages = 28–31 | publisher = ACM | year = 1997 | location = Kihei, Maui, Hawaii, United States | url = https://scholar.archive.org/work/2htta67odrfilhxegzjqfratyy | doi = 10.1145/258726.258740 | isbn = 0-89791-875-4 }}</ref> However, [[Bareiss algorithm|Bareiss' algorithm]] is a variant of Gaussian elimination that avoids this exponential growth of the intermediate entries; with the same arithmetic complexity of {{math|O(''n''<sup>3</sup>)}}, it has a bit complexity of {{math|O(''n''<sup>5</sup>)}}, and therefore has [[strongly-polynomial time]] complexity.

Gaussian elimination and its variants can be used on computers for systems with thousands of equations and unknowns. However, the cost becomes prohibitive for systems with millions of equations. These large systems are generally solved using [[iterative method]]s. Specific methods exist for systems whose coefficients follow a regular pattern (see [[system of linear equations]]).

===Bareiss algorithm===
{{main|Bareiss algorithm}}
The first [[strongly-polynomial time]] algorithm for Gaussian elimination was published by [[Jack Edmonds]] in 1967.<ref name=":0">{{Cite Geometric Algorithms and Combinatorial Optimization}}</ref>{{Rp|page=37}} Independently, and almost simultaneously, Erwin Bareiss discovered another algorithm, based on the following remark, which applies to a division-free variant of Gaussian elimination.
In standard Gaussian elimination, one subtracts from each row <math>R_i</math> below the pivot row <math>R_k</math> a multiple of <math>R_k</math> by <math>r_{i,k}/r_{k,k},</math> where <math>r_{i,k}</math> and <math>r_{k,k}</math> are the entries in the pivot column of <math>R_i</math> and <math>R_k,</math> respectively. Bareiss' variant consists, instead, of replacing <math>R_i</math> with <math display=inline>\frac{r_{k,k}R_i-r_{i,k}R_k}{r_{k-1,k-1}}.</math> This produces a row echelon form that has the same zero entries as with the standard Gaussian elimination. Bareiss' main remark is that each matrix entry generated by this variant is the determinant of a submatrix of the original matrix. In particular, if one starts with integer entries, the divisions occurring in the algorithm are exact divisions resulting in integers. So, all intermediate entries and final entries are integers. Moreover, [[Hadamard's inequality]] provides an upper bound on the absolute values of the intermediate and final entries, and thus a bit complexity of <math>\tilde O(n^5),</math> using [[soft O notation]]. Moreover, as an upper bound on the size of final entries is known, a complexity <math>\tilde O(n^4)</math> can be obtained with [[modular arithmetic|modular computation]] followed either by [[Chinese remainder theorem|Chinese remaindering]] or [[Hensel lifting]].
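The fraction-free update above can be sketched in a few lines of Python (an illustrative sketch, not a library implementation). Every division by the previous pivot is exact, because each intermediate entry is the determinant of a submatrix of the input; the last remaining entry is the determinant of the whole matrix.

```python
def bareiss(a):
    """Determinant of a square integer matrix via Bareiss'
    fraction-free elimination: integer arithmetic only, with
    exact divisions by the pivot of the previous step."""
    a = [row[:] for row in a]   # work on a copy
    n = len(a)
    prev = 1                    # pivot of the previous step (r_{k-1,k-1})
    sign = 1                    # flips on each row swap
    for k in range(n - 1):
        if a[k][k] == 0:        # simple row search for a nonzero pivot
            for i in range(k + 1, n):
                if a[i][k] != 0:
                    a[k], a[i] = a[i], a[k]
                    sign = -sign
                    break
            else:
                return 0        # whole pivot column is zero: singular
        for i in range(k + 1, n):
            for j in range(k + 1, n):
                # exact integer division: the quotient is a minor of a
                a[i][j] = (a[k][k] * a[i][j] - a[i][k] * a[k][j]) // prev
            a[i][k] = 0
        prev = a[k][k]
    return sign * a[n - 1][n - 1]
```

Note that `//` here is always an exact division by Bareiss' remark, so no rational arithmetic is ever needed.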
As a corollary, the following problems can be solved in strongly polynomial time with the same bit complexity:<ref name=":0" />{{Rp|page=40}}
* Testing whether ''m'' given rational vectors are [[linearly independent]]
* Computing the [[determinant]] of a rational matrix
* Computing a solution of a rational equation system ''Ax'' = ''b''
* Computing the [[inverse matrix]] of a nonsingular rational matrix
* Computing the [[Rank (linear algebra)|rank]] of a rational matrix

=== Numeric instability ===
One possible problem is [[numerical stability|numerical instability]], caused by the possibility of dividing by very small numbers. If, for example, the leading coefficient of one of the rows is very close to zero, then to row-reduce the matrix, one would need to divide by that number. This means that any error that existed for the number that was close to zero would be amplified. Gaussian elimination is numerically stable for [[diagonally dominant]] or [[Positive-definite matrix|positive-definite]] matrices. For general matrices, Gaussian elimination is usually considered to be stable when using [[Pivot element#Partial and complete pivoting|partial pivoting]], even though there are examples of stable matrices for which it is unstable.<ref>{{harvnb|Golub|Van Loan|1996|at=§3.4.6}}</ref>
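The amplification effect can be seen on a tiny example. The following Python sketch (illustrative only; the specific 2×2 system is chosen here for demonstration, not taken from a reference) solves a well-conditioned system whose exact solution is close to (1, 1), once without pivoting and once with partial pivoting. Without pivoting, elimination divides by the tiny leading coefficient 10<sup>−20</sup> and the rounding error swamps the first unknown.

```python
def solve_2x2(a, b, pivot):
    """Eliminate a 2x2 float system; if pivot is True, first swap
    rows so the larger first-column entry is the pivot."""
    a = [row[:] for row in a]
    b = b[:]
    if pivot and abs(a[1][0]) > abs(a[0][0]):
        a[0], a[1] = a[1], a[0]     # partial pivoting: row swap
        b[0], b[1] = b[1], b[0]
    m = a[1][0] / a[0][0]           # dividing by a tiny pivot if no swap
    a[1][1] -= m * a[0][1]
    b[1] -= m * b[0]
    x1 = b[1] / a[1][1]
    x0 = (b[0] - a[0][1] * x1) / a[0][0]
    return [x0, x1]

a = [[1e-20, 1.0], [1.0, 1.0]]      # exact solution is very nearly (1, 1)
b = [1.0, 2.0]
print(solve_2x2(a, b, pivot=False)) # x0 comes out 0.0: completely wrong
print(solve_2x2(a, b, pivot=True))  # [1.0, 1.0]: correct to full precision
```

Without the swap, the multiplier is 10<sup>20</sup>, so computing 1 − 10<sup>20</sup> in double precision rounds away the 1 entirely; the row swap keeps all multipliers at most 1 in magnitude, which is exactly what partial pivoting guarantees.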