== Definitions and example of algorithm ==
The process of row reduction makes use of [[elementary row operations]], and can be divided into two parts. The first part (sometimes called forward elimination) reduces a given system to row echelon form, from which one can tell whether there are no solutions, a unique solution, or infinitely many solutions. The second part (sometimes called [[Triangular matrix#Forward and back substitution|back substitution]]) continues to use row operations until the solution is found; in other words, it puts the matrix into reduced row echelon form.

Another point of view, which turns out to be very useful to analyze the algorithm, is that row reduction produces a [[matrix decomposition]] of the original matrix. The elementary row operations may be viewed as the multiplication on the left of the original matrix by [[elementary matrix|elementary matrices]]. Alternatively, a sequence of elementary operations that reduces a single row may be viewed as multiplication by a [[Frobenius matrix]]. Then the first part of the algorithm computes an [[LU decomposition]], while the second part writes the original matrix as the product of a uniquely determined invertible matrix and a uniquely determined reduced row echelon matrix.

=== Row operations ===
There are three types of elementary row operations which may be performed on the rows of a matrix:
# Interchanging two rows.
# Multiplying a row by a non-zero [[scalar (mathematics)|scalar]].
# Adding a scalar multiple of one row to another.

If the matrix is associated to a system of linear equations, then these operations do not change the solution set. Therefore, if one's goal is to solve a system of linear equations, then using these row operations could make the problem easier.

=== Echelon form ===
{{Main|Row echelon form}}
For each row in a matrix, if the row does not consist of only zeros, then the leftmost nonzero entry is called the ''[[leading coefficient]]'' (or ''pivot'') of that row. So if two leading coefficients are in the same column, then a row operation of [[#Row operations|type 3]] could be used to make one of those coefficients zero. Then by using the row swapping operation, one can always order the rows so that for every non-zero row, the leading coefficient is to the right of the leading coefficient of the row above. If this is the case, then the matrix is said to be in row echelon form. So the lower left part of the matrix contains only zeros, and all of the zero rows are below the non-zero rows. The word "echelon" is used here because one can roughly think of the rows being ranked by their size, with the largest being at the top and the smallest being at the bottom.

For example, the following matrix is in row echelon form, and its leading coefficients are shown in red:
<math display="block">\begin{bmatrix}
0 & \color{red}{\mathbf{2}} & 1 & -1 \\
0 & 0 & \color{red}{\mathbf{3}} & 1 \\
0 & 0 & 0 & 0
\end{bmatrix}.</math>
It is in echelon form because the zero row is at the bottom, and the leading coefficient of the second row (in the third column) is to the right of the leading coefficient of the first row (in the second column).

A matrix is said to be in reduced row echelon form if furthermore all of the leading coefficients are equal to 1 (which can be achieved by using the elementary row operation of type 2), and in every column containing a leading coefficient, all of the other entries in that column are zero (which can be achieved by using elementary row operations of type 3).
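The three elementary row operations can be expressed directly on an array. The following is a minimal sketch in Python with NumPy; the function names (<code>swap_rows</code>, <code>scale_row</code>, <code>add_multiple</code>) are illustrative only and not part of any standard library, and each function returns a new matrix rather than modifying its input. The matrix used is the augmented matrix of the example system in the next subsection.

<syntaxhighlight lang="python">
import numpy as np

def swap_rows(A, i, j):
    """Type 1: interchange rows i and j."""
    B = A.astype(float).copy()
    B[[i, j]] = B[[j, i]]
    return B

def scale_row(A, i, c):
    """Type 2: multiply row i by a non-zero scalar c."""
    B = A.astype(float).copy()
    B[i] *= c
    return B

def add_multiple(A, i, j, c):
    """Type 3: add c times row j to row i."""
    B = A.astype(float).copy()
    B[i] += c * B[j]
    return B

# Augmented matrix of the example system 2x + y - z = 8, -3x - y + 2z = -11, -2x + y + 2z = -3.
M = np.array([[ 2,  1, -1,   8],
              [-3, -1,  2, -11],
              [-2,  1,  2,  -3]], dtype=float)

# Eliminate x from the second row: L2 + (3/2) L1 -> L2.
print(add_multiple(M, 1, 0, 1.5))
</syntaxhighlight>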
=== Example of the algorithm ===
Suppose the goal is to find and describe the set of solutions to the following system of linear equations:
<math display="block">\begin{alignat}{4}
 2x &{}+{}& y &{}-{}& z &{}={}& 8 & \qquad (L_1) \\
-3x &{}-{}& y &{}+{}& 2z &{}={}& -11 & \qquad (L_2) \\
-2x &{}+{}& y &{}+{}& 2z &{}={}& -3 & \qquad (L_3)
\end{alignat}</math>

The table below is the row reduction process applied simultaneously to the system of equations and its associated [[augmented matrix]]. In practice, one does not usually deal with the systems in terms of equations, but instead makes use of the augmented matrix, which is more suitable for computer manipulations. The row reduction procedure may be summarized as follows: eliminate {{mvar|x}} from all equations below {{math|''L''<sub>1</sub>}}, and then eliminate {{mvar|y}} from all equations below {{math|''L''<sub>2</sub>}}. This will put the system into [[triangular form]]. Then, using back-substitution, each unknown can be solved for.

:{| class="wikitable"
|-
! System of equations !! Row operations !! Augmented matrix
|- align="center"
| <math>\begin{alignat}{4}
 2x &{}+{}& y &{}-{}& z &{}={}& 8 & \\
-3x &{}-{}& y &{}+{}& 2z &{}={}& -11 & \\
-2x &{}+{}& y &{}+{}& 2z &{}={}& -3 &
\end{alignat}</math>
|
| <math>\left[\begin{array}{rrr|r}
 2 & 1 & -1 & 8 \\
-3 & -1 & 2 & -11 \\
-2 & 1 & 2 & -3
\end{array}\right]</math>
|- align="center"
| <math>\begin{alignat}{4}
2x &{}+{}& y &{}-{}& z &{}={}& 8 & \\
& & \tfrac12 y &{}+{}& \tfrac12 z &{}={}& 1 & \\
& & 2y &{}+{}& z &{}={}& 5 &
\end{alignat}</math>
| <math>\begin{align}
L_2 + \tfrac32 L_1 &\to L_2 \\
L_3 + L_1 &\to L_3
\end{align}</math>
| <math>\left[\begin{array}{rrr|r}
2 & 1 & -1 & 8 \\
0 & \frac12 & \frac12 & 1 \\
0 & 2 & 1 & 5
\end{array}\right]</math>
|- align="center"
| <math>\begin{alignat}{4}
2x &{}+{}& y &{}-{}& z &{}={}& 8 & \\
& & \tfrac12 y &{}+{}& \tfrac12 z &{}={}& 1 & \\
& & & & -z &{}={}& 1 &
\end{alignat}</math>
| <math>L_3 - 4 L_2 \to L_3</math>
| <math>\left[\begin{array}{rrr|r}
2 & 1 & -1 & 8 \\
0 & \frac12 & \frac12 & 1 \\
0 & 0 & -1 & 1
\end{array}\right]</math>
|-
| colspan=3 align="center" | The matrix is now in echelon form (also called triangular form)
|- align="center"
| <math>\begin{alignat}{4}
2x &{}+{}& y & & &{}={} 7 & \\
& & \tfrac12 y & & &{}={} \tfrac32 & \\
& & &{}-{}& z &{}={} 1 &
\end{alignat}</math>
| <math>\begin{align}
L_1 - L_3 &\to L_1 \\
L_2 + \tfrac12 L_3 &\to L_2
\end{align}</math>
| <math>\left[\begin{array}{rrr|r}
2 & 1 & 0 & 7 \\
0 & \frac12 & 0 & \frac32 \\
0 & 0 & -1 & 1
\end{array}\right]</math>
|- align="center"
| <math>\begin{alignat}{4}
2x &{}+{}& y &\quad& &{}={}& 7 & \\
& & y &\quad& &{}={}& 3 & \\
& & &\quad& z &{}={}& -1 &
\end{alignat}</math>
| <math>\begin{align}
2 L_2 &\to L_2 \\
-L_3 &\to L_3
\end{align}</math>
| <math>\left[\begin{array}{rrr|r}
2 & 1 & 0 & 7 \\
0 & 1 & 0 & 3 \\
0 & 0 & 1 & -1
\end{array}\right]</math>
|- align="center"
| <math>\begin{alignat}{4}
x &\quad& &\quad& &{}={}& 2 & \\
&\quad& y &\quad& &{}={}& 3 & \\
&\quad& &\quad& z &{}={}& -1 &
\end{alignat}</math>
| <math>\begin{align}
L_1 - L_2 &\to L_1 \\
\tfrac12 L_1 &\to L_1
\end{align}</math>
| <math>\left[\begin{array}{rrr|r}
1 & 0 & 0 & 2 \\
0 & 1 & 0 & 3 \\
0 & 0 & 1 & -1
\end{array}\right]</math>
|}

The second column describes which row operations have just been performed.
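The same reduction can be carried out mechanically. The following is a minimal Python/NumPy sketch of Gauss–Jordan elimination applied to the augmented matrix above; it omits any pivoting strategy, so it assumes every pivot encountered is non-zero, and it normalizes each pivot as soon as it is reached, so the intermediate matrices differ slightly from the table even though the final reduced row echelon form is the same. The function name <code>to_reduced_row_echelon</code> is illustrative.

<syntaxhighlight lang="python">
import numpy as np

def to_reduced_row_echelon(M):
    """Reduce an augmented matrix to reduced row echelon form.

    Bare-bones sketch: no row swaps, so every pivot is assumed non-zero.
    """
    A = M.astype(float).copy()
    rows, cols = A.shape
    for i in range(min(rows, cols - 1)):
        A[i] /= A[i, i]                 # type-2 operation: make the pivot equal to 1
        for j in range(rows):
            if j != i:
                A[j] -= A[j, i] * A[i]  # type-3 operation: clear the rest of the column
    return A

M = np.array([[ 2,  1, -1,   8],
              [-3, -1,  2, -11],
              [-2,  1,  2,  -3]], dtype=float)

print(to_reduced_row_echelon(M))
# [[ 1.  0.  0.  2.]
#  [ 0.  1.  0.  3.]
#  [ 0.  0.  1. -1.]]
</syntaxhighlight>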
So, for the first step, {{mvar|x}} is eliminated from {{math|''L''<sub>2</sub>}} by adding {{math|{{sfrac|3|2}}''L''<sub>1</sub>}} to {{math|''L''<sub>2</sub>}}. Next, {{mvar|x}} is eliminated from {{math|''L''<sub>3</sub>}} by adding {{math|''L''<sub>1</sub>}} to {{math|''L''<sub>3</sub>}}. These row operations are labelled in the table as
<math display="block">\begin{align}
L_2 + \tfrac32 L_1 &\to L_2, \\
L_3 + L_1 &\to L_3.
\end{align}</math>
Once {{mvar|y}} is also eliminated from the third row, the result is a system of linear equations in triangular form, and so the first part of the algorithm is complete. From a computational point of view, it is faster to solve for the variables in reverse order, a process known as back-substitution. One sees that the solution is {{math|1=''z'' = −1}}, {{math|1=''y'' = 3}}, and {{math|1=''x'' = 2}}. So there is a unique solution to the original system of equations.

Instead of stopping once the matrix is in echelon form, one could continue until the matrix is in ''reduced'' row echelon form, as is done in the table. The process of row reducing until the matrix is reduced is sometimes referred to as Gauss–Jordan elimination, to distinguish it from stopping after reaching echelon form.
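Back-substitution itself is short to express in code. The following is a minimal sketch, assuming an upper-triangular coefficient matrix with non-zero diagonal entries, applied to the triangular system reached at the end of forward elimination in the table above; the function name <code>back_substitution</code> is illustrative.

<syntaxhighlight lang="python">
import numpy as np

def back_substitution(U, b):
    """Solve Ux = b for upper-triangular U with non-zero diagonal.

    The variables are recovered in reverse order, each one using the
    values of the variables already determined after it.
    """
    n = len(b)
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - U[i, i + 1:] @ x[i + 1:]) / U[i, i]
    return x

# Triangular system from the table: 2x + y - z = 8, (1/2)y + (1/2)z = 1, -z = 1.
U = np.array([[2.0, 1.0, -1.0],
              [0.0, 0.5,  0.5],
              [0.0, 0.0, -1.0]])
b = np.array([8.0, 1.0, 1.0])

print(back_substitution(U, b))  # [ 2.  3. -1.], i.e. x = 2, y = 3, z = -1
</syntaxhighlight>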