{{Short description|Mathematical operation on vectors in 3D space}} {{About|the cross product of two vectors in three-dimensional Euclidean space}} [[File:Cross product vector.svg|thumb|right|The cross product with respect to a right-handed coordinate system]] In [[mathematics]], the '''cross product''' or '''vector product''' (occasionally '''directed area product''', to emphasize its geometric significance) is a [[binary operation]] on two [[Euclidean vector|vector]]s in a three-dimensional [[Orientation (vector space)|oriented]] [[Euclidean vector space]] (named here <math>E</math>), and is denoted by the symbol <math>\times</math>. Given two [[linearly independent vectors]] {{math|'''a'''}} and {{math|'''b'''}}, the cross product, {{math|'''a''' × '''b'''}} (read "a cross b"), is a vector that is [[perpendicular]] to both {{math|'''a'''}} and {{math|'''b'''}},<ref name=":1">{{Cite web|last=Weisstein|first=Eric W.|title=Cross Product|url=https://mathworld.wolfram.com/CrossProduct.html|access-date=2020-09-06|website=Wolfram MathWorld |language=en}}</ref> and thus [[Normal (geometry)|normal]] to the plane containing them. It has many applications in mathematics, [[physics]], [[engineering]], and [[computer programming]]. It should not be confused with the [[dot product]] (projection product). The magnitude of the cross product equals the area of a [[parallelogram]] with the vectors for sides; in particular, the magnitude of the product of two perpendicular vectors is the product of their lengths. The [[Unit of measurement|units]] of the cross-product are the product of the units of each vector. 
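The magnitude property stated above is easy to check numerically. The following Python sketch (the example vectors and the use of NumPy are illustrative choices, not part of the article) computes the cross product of two perpendicular vectors and confirms that its magnitude is the product of their lengths:

```python
import numpy as np

# Two perpendicular example vectors along the x- and y-axes (arbitrary choice).
a = np.array([2.0, 0.0, 0.0])
b = np.array([0.0, 3.0, 0.0])

c = np.cross(a, b)        # a vector perpendicular to both a and b
area = np.linalg.norm(c)  # area of the parallelogram with a and b as sides

print(c)     # [0. 0. 6.]
print(area)  # 6.0 = 2.0 * 3.0, the product of the two lengths
```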
If two vectors are [[parallel vectors|parallel]] or are [[antiparallel vectors|anti-parallel]] (that is, they are linearly dependent), or if either one has zero length, then their cross product is zero.<ref name=":2">{{Cite web|title=Cross Product|url=https://www.mathsisfun.com/algebra/vectors-cross-product.html|access-date=2020-09-06|website=www.mathsisfun.com}}</ref> The cross product is [[anticommutativity|anticommutative]] (that is, {{math|'''a''' × '''b''' {{=}} − '''b''' × '''a'''}}) and is [[distributivity|distributive]] over addition, that is, {{math|'''a''' × ('''b''' + '''c''') {{=}} '''a''' × '''b''' + '''a''' × '''c'''}}.<ref name=":1" /> The space <math>E</math> together with the cross product is an [[algebra over a field|algebra over the real numbers]], which is neither [[commutative]] nor [[associative]], but is a [[Lie algebra]] with the cross product being the [[Lie bracket]]. Like the dot product, it depends on the [[metric space|metric]] of [[Euclidean space]], but unlike the dot product, it also depends on a choice of [[orientation (mathematics)|orientation]] (or "[[Right-hand rule|handedness]]") of the space (which is why an oriented space is needed). The resultant vector is invariant under rotation of the basis. Due to the dependence on [[Right-hand rule|handedness]], the cross product is said to be a [[pseudovector]]. In connection with the cross product, the [[exterior algebra|exterior product]] of vectors can be used in arbitrary dimensions (with a [[bivector]] or [[2-form]] result) and is independent of the orientation of the space. The product can be generalized in various ways, using the orientation and metric structure just as for the traditional 3-dimensional cross product; one can, in {{mvar|n}} dimensions, take the product of {{math|''n'' − 1}} vectors to produce a vector perpendicular to all of them. 
But if the product is limited to non-trivial binary products with vector results, it exists only in three and seven dimensions.<ref name="Massey2">{{cite journal |title=Cross products of vectors in higher dimensional Euclidean spaces |first=William S. |last=Massey |author-link=William S. Massey |jstor=2323537 |journal=The American Mathematical Monthly |volume=90 |issue=10 |date=December 1983 |pages=697–701 |doi=10.2307/2323537 |s2cid=43318100 |url=https://pdfs.semanticscholar.org/1f6b/ff1e992f60eb87b35c3ceed04272fb5cc298.pdf |archive-url=https://web.archive.org/web/20210226011747/https://pdfs.semanticscholar.org/1f6b/ff1e992f60eb87b35c3ceed04272fb5cc298.pdf |url-status=dead |archive-date=2021-02-26 |quote=If one requires only three basic properties of the cross product ... it turns out that a cross product of vectors exists only in 3-dimensional and 7-dimensional Euclidean space. }}</ref> The [[Seven-dimensional cross product|cross-product in seven dimensions]] has undesirable properties (e.g. it [[Seven-dimensional cross product#Relation to the octonions|fails]] to satisfy the [[Jacobi identity]]), so it is not used in mathematical physics to represent quantities such as multi-dimensional [[space-time]].<ref>{{Cite book |last=Arfken |first=George B. |title=Mathematical Methods for Physicists |publisher=Elsevier |year= |isbn= |edition=4th}}</ref> (See {{slink|#Generalizations}} below for other dimensions.) == Definition == [[File:Right hand rule cross product.svg|thumb|Finding the direction of the cross product by the [[right-hand rule]] ]] The cross product of two vectors '''a''' and '''b''' is defined only in three-dimensional space and is denoted by {{nowrap|1='''a''' × '''b'''}}. In [[physics]] and [[applied mathematics]], the wedge notation {{nowrap|1='''a''' ∧ '''b'''}} is often used (in conjunction with the name ''vector product''),<ref>{{cite book|author1=Jeffreys, H. |author2=Jeffreys, B. S. 
|title=Methods of mathematical physics |year=1999 |publisher=Cambridge University Press |oclc=41158050}}</ref><ref>{{cite book |author1=Acheson, D. J. |author-link=David Acheson (mathematician) |title=Elementary Fluid Dynamics |year=1990 |publisher=Oxford University Press |isbn=0198596790}}</ref><ref>{{cite book |author1=Howison, Sam |title=Practical Applied Mathematics |year=2005 |publisher=Cambridge University Press |isbn=0521842743}}</ref> although in pure mathematics such notation is usually reserved for just the exterior product, an abstraction of the vector product to {{mvar|n}} dimensions. The cross product {{nowrap|'''a''' × '''b'''}} is defined as a vector '''c''' that is [[perpendicular]] (orthogonal) to both '''a''' and '''b''', with a direction given by the [[right-hand rule]]<ref name=":1" /><!-- this is how first time students, who also use right-hand coordinates, learn --> and a magnitude equal to the area of the [[parallelogram]] that the vectors span.<ref name=":2" /> The cross product is defined by the formula<ref>{{harvnb|Wilson|1901|page=60–61}}.</ref><ref name=Cullen>{{cite book |title=Advanced engineering mathematics |author1=Dennis G. Zill |author2=Michael R. Cullen |edition=3rd |year=2006 |publisher=Jones & Bartlett Learning |chapter-url=https://books.google.com/books?id=x7uWk8lxVNYC&pg=PA324 |page=324 |chapter=Definition 7.4: Cross product of two vectors |isbn=0-7637-4591-X}}</ref> : <math>\mathbf{a} \times \mathbf{b} = \| \mathbf{a} \| \| \mathbf{b} \| \sin(\theta) \, \mathbf{n},</math> where : ''θ'' is the [[angle]] between '''a''' and '''b''' in the plane containing them (hence, it is between 0° and 180°), : ‖'''a'''‖ and ‖'''b'''‖ are the [[Magnitude (vector)|magnitudes]] of vectors '''a''' and '''b''', : '''n''' is a [[unit vector]] [[perpendicular]] to the plane containing '''a''' and '''b''', with direction such that the ordered set ('''a''', '''b''', '''n''') is [[Orientation (vector space)|positively oriented]]. 
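The defining formula can be verified against a direct componentwise computation. The sketch below (arbitrary example vectors; NumPy is our choice of tool, not part of the article) checks both the orthogonality of the result and the ‖'''a'''‖ ‖'''b'''‖ sin ''θ'' magnitude:

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0])   # arbitrary example vectors
b = np.array([4.0, 5.0, 6.0])

c = np.cross(a, b)

# c is perpendicular to both a and b ...
assert abs(np.dot(c, a)) < 1e-12
assert abs(np.dot(c, b)) < 1e-12

# ... and its magnitude equals ||a|| ||b|| |sin(theta)|.
theta = np.arccos(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
assert np.isclose(np.linalg.norm(c),
                  np.linalg.norm(a) * np.linalg.norm(b) * np.sin(theta))
```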
If the vectors '''a''' and '''b''' are parallel (that is, the angle ''θ'' between them is either 0° or 180°), by the above formula, the cross product of '''a''' and '''b''' is the [[zero vector]] '''0'''. === Direction === [[File:Cross product.gif|left|thumb|The cross product {{nowrap|'''a''' × '''b'''}} (vertical, in purple) changes as the angle between the vectors '''a''' (blue) and '''b''' (red) changes. The cross product is always orthogonal to both vectors, and has magnitude zero when the vectors are parallel and maximum magnitude ‖'''a'''‖‖'''b'''‖ when they are orthogonal.]] The direction of the vector '''n''' depends on the chosen orientation of the space. Conventionally, it is given by the right-hand rule, where one simply points the forefinger of the right hand in the direction of '''a''' and the middle finger in the direction of '''b'''. Then, the vector '''n''' is coming out of the thumb (see the adjacent picture). Using this rule implies that the cross product is [[Anticommutativity|anti-commutative]]; that is, {{nowrap|1='''b''' × '''a''' = −('''a''' × '''b''')}}. By pointing the forefinger toward '''b''' first, and then pointing the middle finger toward '''a''', the thumb will be forced in the opposite direction, reversing the sign of the product vector. As the cross product operator depends on the orientation of the space, in general the cross product of two vectors is not a "true" vector, but a ''pseudovector''. See {{Section link||Handedness}} for more detail. == Names and origin == [[File:Sarrus rule.svg|upright=1.25|thumb|right|According to [[Sarrus's rule]], the [[determinant]] of a 3×3 matrix involves multiplications between matrix elements identified by crossed diagonals]] In 1842, [[William Rowan Hamilton]] first described the algebra of [[quaternion|quaternions]] and the non-commutative Hamilton product. 
In particular, when the Hamilton product of two vectors (that is, pure quaternions with zero scalar part) is performed, it results in a quaternion with a scalar and vector part. The scalar and vector part of this Hamilton product corresponds to the negative of dot product and cross product of the two vectors. In 1881, [[Josiah Willard Gibbs]],<ref>{{cite book |others=Founded upon the lectures of J. William Gibbs |author=Edwin Bidwell Wilson |title=[[Vector Analysis]] |chapter=Chapter II. Direct and Skew Products of Vectors |publisher=Yale University Press |place=New Haven |year=1913}} The dot product is called "direct product", and cross product is called "skew product".</ref> and independently [[Oliver Heaviside]], introduced the notation for both the dot product and the cross product using a period ({{nowrap|1='''a''' ⋅ '''b'''}}) and an "×" ({{nowrap|1='''a''' × '''b'''}}), respectively, to denote them.<ref name=ucd>[https://www.math.ucdavis.edu/~temple/MAT21D/SUPPLEMENTARY-ARTICLES/Crowe_History-of-Vectors.pdf ''A History of Vector Analysis''] by Michael J. Crowe, Math. UC Davis.<!-- this source is typeset poorly, using "." for "⋅" and "x" for "×" --></ref> In 1877, to emphasize the fact that the result of a dot product is a [[scalar (mathematics)|scalar]] while the result of a cross product is a [[Euclidean vector|vector]], [[William Kingdon Clifford]] coined the alternative names '''scalar product''' and '''vector product''' for the two operations.<ref name=ucd/> These alternative names are still widely used in the literature. Both the cross notation ({{nowrap|1='''a''' × '''b'''}}) and the name '''cross product''' were possibly inspired by the fact that each [[scalar component]] of {{nowrap|1='''a''' × '''b'''}} is computed by multiplying non-corresponding components of '''a''' and '''b'''. Conversely, a dot product {{nowrap|1='''a''' ⋅ '''b'''}} involves multiplications between corresponding components of '''a''' and '''b'''. 
As explained [[#Matrix notation|below]], the cross product can be expressed in the form of a determinant of a special {{nowrap|3 × 3}} matrix. According to [[Sarrus's rule]], this involves multiplications between matrix elements identified by crossed diagonals. == Computing == === Coordinate notation === [[File:3D Vector.svg|thumb|right|[[Standard basis]] vectors '''i''', '''j''', '''k''' and [[vector component]]s of '''a''', denoted here '''a'''<sub>x</sub>, '''a'''<sub>y</sub>, '''a'''<sub>z</sub>]] If <math>(\mathbf{\color{blue}{i}}, \mathbf{\color{red}{j}}, \mathbf{\color{green}{k}})</math> is a positively oriented orthonormal basis, the basis vectors satisfy the following equalities<ref name=":1" /> :<math>\begin{alignat}{2} \mathbf{\color{blue}{i}}&\times\mathbf{\color{red}{j}} &&= \mathbf{\color{green}{k}}\\ \mathbf{\color{red}{j}}&\times\mathbf{\color{green}{k}} &&= \mathbf{\color{blue}{i}}\\ \mathbf{\color{green}{k}}&\times\mathbf{\color{blue}{i}} &&= \mathbf{\color{red}{j}} \end{alignat}</math> A [[mnemonic]] for these formulas is that each of them can be deduced from any other by a [[cyclic permutation]] of the basis vectors. This mnemonic applies also to many formulas given in this article. The [[anticommutativity]] of the cross product implies that :<math>\begin{alignat}{2} \mathbf{\color{red}{j}}&\times\mathbf{\color{blue}{i}} &&= -\mathbf{\color{green}{k}}\\ \mathbf{\color{green}{k}}&\times\mathbf{\color{red}{j}} &&= -\mathbf{\color{blue}{i}}\\ \mathbf{\color{blue}{i}}&\times\mathbf{\color{green}{k}} &&= -\mathbf{\color{red}{j}} \end{alignat}</math> The anticommutativity of the cross product (and the obvious lack of linear independence) also implies that :<math>\mathbf{\color{blue}{i}}\times\mathbf{\color{blue}{i}} = \mathbf{\color{red}{j}}\times\mathbf{\color{red}{j}} = \mathbf{\color{green}{k}}\times\mathbf{\color{green}{k}} = \mathbf{0}</math> (the [[zero vector]]). 
These equalities, together with the [[distributivity]] and [[linearity]] of the cross product (though neither follows easily from the definition given above), are sufficient to determine the cross product of any two vectors '''a''' and '''b'''. Each vector can be defined as the sum of three orthogonal components parallel to the standard basis vectors: :<math>\begin{alignat}{3} \mathbf{a} &= a_1\mathbf{\color{blue}{i}} &&+ a_2\mathbf{\color{red}{j}} &&+ a_3\mathbf{\color{green}{k}} \\ \mathbf{b} &= b_1\mathbf{\color{blue}{i}} &&+ b_2\mathbf{\color{red}{j}} &&+ b_3\mathbf{\color{green}{k}} \end{alignat}</math> Their cross product {{nowrap|1='''a''' × '''b'''}} can be expanded using distributivity: :<math> \begin{align} \mathbf{a}\times\mathbf{b} = {} &(a_1\mathbf{\color{blue}{i}} + a_2\mathbf{\color{red}{j}} + a_3\mathbf{\color{green}{k}}) \times (b_1\mathbf{\color{blue}{i}} + b_2\mathbf{\color{red}{j}} + b_3\mathbf{\color{green}{k}})\\ = {} &a_1b_1(\mathbf{\color{blue}{i}} \times \mathbf{\color{blue}{i}}) + a_1b_2(\mathbf{\color{blue}{i}} \times \mathbf{\color{red}{j}}) + a_1b_3(\mathbf{\color{blue}{i}} \times \mathbf{\color{green}{k}}) + {}\\ &a_2b_1(\mathbf{\color{red}{j}} \times \mathbf{\color{blue}{i}}) + a_2b_2(\mathbf{\color{red}{j}} \times \mathbf{\color{red}{j}}) + a_2b_3(\mathbf{\color{red}{j}} \times \mathbf{\color{green}{k}}) + {}\\ &a_3b_1(\mathbf{\color{green}{k}} \times \mathbf{\color{blue}{i}}) + a_3b_2(\mathbf{\color{green}{k}} \times \mathbf{\color{red}{j}}) + a_3b_3(\mathbf{\color{green}{k}} \times \mathbf{\color{green}{k}})\\ \end{align}</math> This can be interpreted as the decomposition of {{nowrap|1='''a''' × '''b'''}} into the sum of nine simpler cross products involving vectors aligned with '''i''', '''j''', or '''k'''. Each one of these nine cross products operates on two vectors that are easy to handle as they are either parallel or orthogonal to each other. 
From this decomposition, by using the above-mentioned [[#Coordinate notation|equalities]] and collecting similar terms, we obtain: :<math>\begin{align} \mathbf{a}\times\mathbf{b} = {} &\quad\ a_1b_1\mathbf{0} + a_1b_2\mathbf{\color{green}{k}} - a_1b_3\mathbf{\color{red}{j}} \\ &- a_2b_1\mathbf{\color{green}{k}} + a_2b_2\mathbf{0} + a_2b_3\mathbf{\color{blue}{i}} \\ &+ a_3b_1\mathbf{\color{red}{j}}\ - a_3b_2\mathbf{\color{blue}{i}}\ + a_3b_3\mathbf{0} \\ = {} &(a_2b_3 - a_3b_2)\mathbf{\color{blue}{i}} + (a_3b_1 - a_1b_3)\mathbf{\color{red}{j}} + (a_1b_2 - a_2b_1)\mathbf{\color{green}{k}}\\ \end{align}</math> meaning that the three [[scalar component]]s of the resulting vector '''s''' = ''s''<sub>1</sub>'''i''' + ''s''<sub>2</sub>'''j''' + ''s''<sub>3</sub>'''k''' = {{nowrap|1='''a''' × '''b'''}} are :<math>\begin{align} s_1 &= a_2b_3-a_3b_2\\ s_2 &= a_3b_1-a_1b_3\\ s_3 &= a_1b_2-a_2b_1 \end{align}</math> Using [[column vector]]s, we can represent the same result as follows: :<math>\begin{bmatrix}s_1\\s_2\\s_3\end{bmatrix}=\begin{bmatrix}a_2b_3-a_3b_2\\a_3b_1-a_1b_3\\a_1b_2-a_2b_1\end{bmatrix}</math> === Matrix notation === [[File:Sarrus_rule_cross_product_ab.svg|thumb|Use of Sarrus's rule to find the cross product of '''a''' and '''b''']] The cross product can also be expressed as the [[formal calculation|formal]] determinant:<ref group="note">Here, "formal" means that this notation has the form of a determinant, but does not strictly adhere to the definition; it is a mnemonic used to remember the expansion of the cross product.</ref><ref name=":1" /> :<math>\mathbf{a\times b} = \begin{vmatrix} \mathbf{i}&\mathbf{j}&\mathbf{k}\\ a_1&a_2&a_3\\ b_1&b_2&b_3\\ \end{vmatrix}</math> This determinant can be computed using [[Rule of Sarrus|Sarrus's rule]] or [[cofactor expansion]]. 
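The three scalar components ''s''<sub>1</sub>, ''s''<sub>2</sub>, ''s''<sub>3</sub> derived above translate directly into code. The following sketch implements them in plain Python (the helper name cross3 is illustrative, not a standard API):

```python
# Direct implementation of the component formulas s1, s2, s3 above
# (the function name cross3 is illustrative, not a standard API).
def cross3(a, b):
    a1, a2, a3 = a
    b1, b2, b3 = b
    return (a2 * b3 - a3 * b2,   # s1
            a3 * b1 - a1 * b3,   # s2
            a1 * b2 - a2 * b1)   # s3

print(cross3((1, 0, 0), (0, 1, 0)))  # (0, 0, 1), i.e. i x j = k
print(cross3((0, 1, 0), (0, 0, 1)))  # (1, 0, 0), i.e. j x k = i
```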
Using Sarrus's rule, it expands to :<math>\begin{align} \mathbf{a\times b} &=(a_2b_3\mathbf{i}+a_3b_1\mathbf{j}+a_1b_2\mathbf{k}) - (a_3b_2\mathbf{i}+a_1b_3\mathbf{j}+a_2b_1\mathbf{k})\\ &=(a_2b_3 - a_3b_2)\mathbf{i} -(a_1b_3 - a_3b_1)\mathbf{j} +(a_1b_2 - a_2b_1)\mathbf{k}. \end{align}</math> which gives the components of the resulting vector directly. === Using Levi-Civita tensors === * In any basis, the cross-product <math>a \times b</math> is given by the tensorial formula <math>E_{ijk}a^ib^j</math> where <math>E_{ijk} </math> is the covariant [[Levi-Civita symbol#Levi-Civita tensors|Levi-Civita]] tensor (we note the position of the indices). That corresponds to the intrinsic formula given [[#As an external product|here]]. * In an orthonormal basis '''having the same orientation as the space''', <math>a \times b</math> is given by the pseudo-tensorial formula <math> \varepsilon_{ijk}a^ib^j</math> where <math>\varepsilon_{ijk}</math> is the Levi-Civita symbol (which is a pseudo-tensor). That is the formula used for everyday physics but it works only for this special choice of basis. * In any orthonormal basis, <math>a \times b</math> is given by the pseudo-tensorial formula <math>(-1)^B\varepsilon_{ijk}a^ib^j</math> where <math>(-1)^B = \pm 1</math> indicates whether the basis has the same orientation as the space or not. The latter formula avoids having to change the orientation of the space when we inverse an orthonormal basis. == Properties == === Geometric meaning === {{See also|Triple product}} [[File:Cross product parallelogram.svg|right|thumb|Figure 1. The area of a parallelogram as the magnitude of a cross product]] [[File:Parallelepiped volume.svg|right|thumb|240px|Figure 2. 
Three vectors defining a parallelepiped]] The [[Euclidean norm|magnitude]] of the cross product can be interpreted as the positive [[area]] of the [[parallelogram]] having '''a''' and '''b''' as sides (see Figure 1):<ref name=":1" /> <math display="block"> \left\| \mathbf{a} \times \mathbf{b} \right\| = \left\| \mathbf{a} \right\| \left\| \mathbf{b} \right\| \left| \sin \theta \right| .</math> Indeed, one can also compute the volume ''V'' of a [[parallelepiped]] having '''a''', '''b''' and '''c''' as edges by using a combination of a cross product and a dot product, called [[scalar triple product]] (see Figure 2): :<math> \mathbf{a}\cdot(\mathbf{b}\times \mathbf{c})= \mathbf{b}\cdot(\mathbf{c}\times \mathbf{a})= \mathbf{c}\cdot(\mathbf{a}\times \mathbf{b}). </math> Since the result of the scalar triple product may be negative, the volume of the parallelepiped is given by its absolute value: :<math>V = |\mathbf{a} \cdot (\mathbf{b} \times \mathbf{c})|.</math> Because the magnitude of the cross product goes by the sine of the angle between its arguments, the cross product can be thought of as a measure of ''perpendicularity'' in the same way that the dot product is a measure of ''parallelism''. Given two [[unit vectors]], their cross product has a magnitude of 1 if the two are perpendicular and a magnitude of zero if the two are parallel. The dot product of two unit vectors behaves just oppositely: it is zero when the unit vectors are perpendicular and 1 if the unit vectors are parallel. Unit vectors enable two convenient identities: the dot product of two unit vectors yields the cosine (which may be positive or negative) of the angle between the two unit vectors. The magnitude of the cross product of the two unit vectors yields the sine (which will always be positive). === Algebraic properties === [[File:Cross product scalar multiplication.svg|350px|thumb|Cross product [[scalar multiplication]]. 
'''Left:''' Decomposition of '''b''' into components parallel and perpendicular to '''a'''. Right: Scaling of the perpendicular components by a positive real number ''r'' (if negative, '''b''' and the cross product are reversed).]] [[File:Cross product distributivity.svg|350px|thumb|Cross product distributivity over vector addition. '''Left:''' The vectors '''b''' and '''c''' are resolved into parallel and perpendicular components to '''a'''. '''Right:''' The parallel components vanish in the cross product, only the perpendicular components shown in the plane perpendicular to '''a''' remain.<ref>{{cite book|title=Vector Analysis|author1=M. R. Spiegel |author2=S. Lipschutz |author3=D. Spellman |series=Schaum's outlines|year=2009|page=29|publisher=McGraw Hill|isbn=978-0-07-161545-7}}</ref>]] [[File:Cross product triple.svg|thumb|350px|The two nonequivalent triple cross products of three vectors '''a''', '''b''', '''c'''. In each case, two vectors define a plane, the other is out of the plane and can be split into parallel and perpendicular components to the cross product of the vectors defining the plane. These components can be found by [[vector projection]] and [[vector rejection|rejection]]. The triple product is in the plane and is rotated as shown.]] If the cross product of two vectors is the zero vector (that is, {{nowrap|1='''a''' × '''b''' = '''0'''}}), then either one or both of the inputs is the zero vector, ({{nowrap|1='''a''' = '''0'''}} or {{nowrap|1='''b''' = '''0'''}}) or else they are parallel or antiparallel ({{nowrap|'''a''' ∥ '''b'''}}) so that the sine of the angle between them is zero ({{nowrap|1=''θ'' = 0°}} or {{nowrap|1=''θ'' = 180°}} and {{nowrap|1=sin ''θ'' = 0}}). 
The self cross product of a vector is the zero vector: :<math>\mathbf{a} \times \mathbf{a} = \mathbf{0}.</math> The cross product is [[anticommutativity|anticommutative]], :<math>\mathbf{a} \times \mathbf{b} = -(\mathbf{b} \times \mathbf{a}),</math> [[distributive property|distributive]] over addition, : <math>\mathbf{a} \times (\mathbf{b} + \mathbf{c}) = (\mathbf{a} \times \mathbf{b}) + (\mathbf{a} \times \mathbf{c}),</math> and compatible with scalar multiplication so that :<math>(r\,\mathbf{a}) \times \mathbf{b} = \mathbf{a} \times (r\,\mathbf{b}) = r\,(\mathbf{a} \times \mathbf{b}).</math> It is not [[associative]], but satisfies the [[Jacobi identity]]: :<math>\mathbf{a} \times (\mathbf{b} \times \mathbf{c}) + \mathbf{b} \times (\mathbf{c} \times \mathbf{a}) + \mathbf{c} \times (\mathbf{a} \times \mathbf{b}) = \mathbf{0}.</math> Distributivity, linearity and Jacobi identity show that the '''R'''<sup>3</sup> [[Real coordinate space|vector space]] together with vector addition and the cross product forms a [[Lie algebra]], the Lie algebra of the real [[orthogonal group]] in 3 dimensions, [[SO(3)]]. The cross product does not obey the [[cancellation law]]; that is, {{nowrap|1='''a''' × '''b''' = '''a''' × '''c'''}} with {{nowrap|'''a''' ≠ '''0'''}} does not imply {{nowrap|1='''b''' = '''c'''}}, but only that: :<math> \begin{align} \mathbf{0} &= (\mathbf{a} \times \mathbf{b}) - (\mathbf{a} \times \mathbf{c})\\ &= \mathbf{a} \times (\mathbf{b} - \mathbf{c}).\\ \end{align}</math> This can be the case where '''b''' and '''c''' cancel, but additionally where '''a''' and {{nowrap|'''b''' − '''c'''}} are parallel; that is, they are related by a scale factor ''t'', leading to: :<math>\mathbf{c} = \mathbf{b} + t\,\mathbf{a},</math> for some scalar ''t''. 
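These algebraic properties can be exercised numerically. The sketch below (arbitrary example vectors, NumPy as the tool of our choosing) demonstrates the failure of the cancellation law via {{nowrap|1='''c''' = '''b''' + ''t'' '''a'''}} and checks the Jacobi identity:

```python
import numpy as np

# a x b = a x c does not imply b = c: any c = b + t*a gives the same
# cross product with a (the particular values here are arbitrary).
a = np.array([1.0, -2.0, 0.5])
b = np.array([0.0, 3.0, 4.0])
t = 2.5
c = b + t * a

assert np.allclose(np.cross(a, b), np.cross(a, c))
assert not np.allclose(b, c)

# The product is not associative, but it satisfies the Jacobi identity.
u, v, w = a, b, np.array([2.0, 1.0, -1.0])
jacobi = (np.cross(u, np.cross(v, w))
          + np.cross(v, np.cross(w, u))
          + np.cross(w, np.cross(u, v)))
assert np.allclose(jacobi, 0.0)
```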
If, in addition to {{nowrap|1='''a''' × '''b''' = '''a''' × '''c'''}} and {{nowrap|'''a''' ≠ '''0'''}} as above, it is the case that {{nowrap|1='''a''' ⋅ '''b''' = '''a''' ⋅ '''c'''}} then :<math>\begin{align} \mathbf{a} \times (\mathbf{b} - \mathbf{c}) &= \mathbf{0} \\ \mathbf{a} \cdot (\mathbf{b} - \mathbf{c}) &= 0, \end{align}</math> As {{nowrap|1='''b''' − '''c'''}} cannot be simultaneously parallel (for the cross product to be '''0''') and perpendicular (for the dot product to be 0) to '''a''', it must be the case that '''b''' and '''c''' cancel: {{nowrap|1='''b''' = '''c'''}}. From the geometrical definition, the cross product is invariant under proper [[rotation (mathematics)|rotations]] about the axis defined by {{nowrap|'''a''' × '''b'''}}. In formulae: :<math>(R\mathbf{a}) \times (R\mathbf{b}) = R(\mathbf{a} \times \mathbf{b})</math>, where <math>R</math> is a [[rotation matrix]] with <math>\det(R)=1</math>. More generally, the cross product obeys the following identity under [[matrix (mathematics)|matrix]] transformations: :<math>(M\mathbf{a}) \times (M\mathbf{b}) = (\det M) \left(M^{-1}\right)^\mathrm{T}(\mathbf{a} \times \mathbf{b}) = \operatorname{cof} M (\mathbf{a} \times \mathbf{b}) </math> where <math>M</math> is a 3-by-3 [[matrix (mathematics)|matrix]] and <math>\left(M^{-1}\right)^\mathrm{T}</math> is the [[transpose]] of the [[inverse matrix|inverse]] and <math>\operatorname{cof}</math> is the cofactor matrix. It can be readily seen how this formula reduces to the former one if <math>M</math> is a rotation matrix. 
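The matrix-transformation identity above can likewise be checked numerically. In the sketch below the particular matrix ''M'' is just an arbitrary invertible example:

```python
import numpy as np

M = np.array([[2.0, 1.0, 0.0],   # an arbitrary invertible 3x3 example matrix
              [0.0, 1.0, 3.0],
              [1.0, 0.0, 1.0]])
a = np.array([1.0, 2.0, 3.0])
b = np.array([-1.0, 0.5, 2.0])

# (Ma) x (Mb) = det(M) * (M^-1)^T (a x b)
lhs = np.cross(M @ a, M @ b)
rhs = np.linalg.det(M) * (np.linalg.inv(M).T @ np.cross(a, b))
assert np.allclose(lhs, rhs)
```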
If <math>M</math> is a 3-by-3 symmetric matrix applied to a generic cross product <math>\mathbf{a} \times \mathbf{b}</math>, the following relation holds true: :<math>M(\mathbf{a} \times \mathbf{b}) = \operatorname{Tr}(M)(\mathbf{a} \times \mathbf{b}) - \mathbf{a} \times M\mathbf{b} + \mathbf{b} \times M\mathbf{a}</math> The cross product of two vectors lies in the [[null space]] of the {{nowrap|2 × 3}} matrix with the vectors as rows: :<math>\mathbf{a} \times \mathbf{b} \in NS\left(\begin{bmatrix}\mathbf{a} \\ \mathbf{b}\end{bmatrix}\right).</math> For the sum of two cross products, the following identity holds: :<math>\mathbf{a} \times \mathbf{b} + \mathbf{c} \times \mathbf{d} = (\mathbf{a} - \mathbf{c}) \times (\mathbf{b} - \mathbf{d}) + \mathbf{a} \times \mathbf{d} + \mathbf{c} \times \mathbf{b}.</math> === Differentiation === {{Main|Vector-valued_function#Derivative_and_vector_multiplication|l1=Vector-valued function § Derivative and vector multiplication}} The [[product rule]] of differential calculus applies to any bilinear operation, and therefore also to the cross product: :<math>\frac{d}{dt}(\mathbf{a} \times \mathbf{b}) = \frac{d\mathbf{a}}{dt} \times \mathbf{b} + \mathbf{a} \times \frac{d\mathbf{b}}{dt} ,</math> where '''a''' and '''b''' are vectors that depend on the real variable ''t''. === Triple product expansion === {{Main|Triple product}} The cross product is used in both forms of the triple product. The [[scalar triple product]] of three vectors is defined as :<math>\mathbf{a} \cdot (\mathbf{b} \times \mathbf{c}), </math> It is the signed volume of the [[parallelepiped]] with edges '''a''', '''b''' and '''c''' and as such the vectors can be used in any order that's an [[even permutation]] of the above ordering. 
The following therefore are equal: :<math>\mathbf{a} \cdot (\mathbf{b} \times \mathbf{c}) = \mathbf{b} \cdot (\mathbf{c} \times \mathbf{a}) = \mathbf{c} \cdot (\mathbf{a} \times \mathbf{b}), </math> The [[vector triple product]] is the cross product of a vector with the result of another cross product, and is related to the dot product by the following formula :<math>\begin{align} \mathbf{a} \times (\mathbf{b} \times \mathbf{c}) = \mathbf{b}(\mathbf{a} \cdot \mathbf{c}) - \mathbf{c}(\mathbf{a} \cdot \mathbf{b}) \\ (\mathbf{a} \times \mathbf{b}) \times \mathbf{c} = \mathbf{b}(\mathbf{c} \cdot \mathbf{a}) - \mathbf{a} (\mathbf{b} \cdot \mathbf{c}) \end{align}</math> The [[mnemonic]] "BAC minus CAB" is used to remember the order of the vectors in the right hand member. This formula is used in [[physics]] to simplify vector calculations. A special case, regarding [[gradient]]s and useful in [[vector calculus]], is :<math>\begin{align} \nabla \times (\nabla \times \mathbf{f}) &= \nabla (\nabla \cdot \mathbf{f} ) - (\nabla \cdot \nabla) \mathbf{f} \\ &= \nabla (\nabla \cdot \mathbf{f} ) - \nabla^2 \mathbf{f},\\ \end{align}</math> where ∇<sup>2</sup> is the [[vector Laplacian]] operator. Other identities relate the cross product to the scalar triple product: :<math>\begin{align} (\mathbf{a}\times \mathbf{b})\times (\mathbf{a}\times \mathbf{c}) &= (\mathbf{a}\cdot(\mathbf{b}\times \mathbf{c})) \mathbf{a} \\ (\mathbf{a}\times \mathbf{b})\cdot(\mathbf{c}\times \mathbf{d}) &= \mathbf{b}^\mathrm{T} \left( \left( \mathbf{c}^\mathrm{T} \mathbf{a}\right)I - \mathbf{c} \mathbf{a}^\mathrm{T} \right) \mathbf{d}\\ &= (\mathbf{a}\cdot \mathbf{c})(\mathbf{b}\cdot \mathbf{d})-(\mathbf{a}\cdot \mathbf{d}) (\mathbf{b}\cdot \mathbf{c}) \end{align}</math> where ''I'' is the identity matrix. 
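The triple-product identities above lend themselves to a quick numerical check (the example vectors are arbitrary; NumPy is our choice of tool):

```python
import numpy as np

a = np.array([1.0, 2.0, -1.0])
b = np.array([3.0, 0.0, 1.0])
c = np.array([0.5, 1.0, 2.0])

# "BAC minus CAB": a x (b x c) = b (a . c) - c (a . b)
lhs = np.cross(a, np.cross(b, c))
rhs = b * np.dot(a, c) - c * np.dot(a, b)
assert np.allclose(lhs, rhs)

# The scalar triple product is invariant under cyclic permutation.
assert np.isclose(np.dot(a, np.cross(b, c)), np.dot(b, np.cross(c, a)))
assert np.isclose(np.dot(a, np.cross(b, c)), np.dot(c, np.cross(a, b)))
```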
=== Alternative formulation === The cross product and the dot product are related by: :<math> \left\| \mathbf{a} \times \mathbf{b} \right\| ^2 = \left\| \mathbf{a}\right\|^2 \left\|\mathbf{b}\right\|^2 - (\mathbf{a} \cdot \mathbf{b})^2 .</math> The right-hand side is the [[Gramian matrix|Gram determinant]] of '''a''' and '''b''', the square of the area of the parallelogram defined by the vectors. This condition determines the magnitude of the cross product. Namely, since the dot product is defined, in terms of the angle ''θ'' between the two vectors, as: :<math> \mathbf{a \cdot b} = \left\| \mathbf a \right\| \left\| \mathbf b \right\| \cos \theta , </math> the above given relationship can be rewritten as follows: :<math> \left\| \mathbf{a \times b} \right\|^2 = \left\| \mathbf{a} \right\| ^2 \left\| \mathbf{b}\right \| ^2 \left(1-\cos^2 \theta \right) .</math> Invoking the [[Pythagorean trigonometric identity]] one obtains: :<math> \left\| \mathbf{a} \times \mathbf{b} \right\| = \left\| \mathbf{a} \right\| \left\| \mathbf{b} \right\| \left| \sin \theta \right| ,</math> which is the magnitude of the cross product expressed in terms of ''θ'', equal to the area of the parallelogram defined by '''a''' and '''b''' (see [[#Definition|definition]] above). The combination of this requirement and the property that the cross product be orthogonal to its constituents '''a''' and '''b''' provides an alternative definition of the cross product.<ref name=Massey>{{cite journal |title=Cross products of vectors in higher dimensional Euclidean spaces |author=WS Massey |journal=The American Mathematical Monthly |volume=90 |date=Dec 1983 |pages=697–701 |issue=10 |doi=10.2307/2323537 |publisher=The American Mathematical Monthly, Vol. 90, No. 
10 |jstor=2323537}}</ref> === Cross product inverse === Given two vectors {{math|'''a'''}} and {{math|'''c'''}} with {{nowrap|1='''a'''≠'''0'''}}, the equation {{nowrap|1='''a''' × '''b''' = '''c'''}} admits solutions for {{math|'''b'''}} if and only if {{math|'''a'''}} is orthogonal to {{math|'''c'''}} (that is, if {{nowrap|1='''a''' ⋅ '''c''' = 0}}). In that case, there exists an infinite family of solutions for {{math|'''b'''}}, which are :<math> \mathbf{b} = \frac{\mathbf{c} \times \mathbf{a}}{\left\| \mathbf{a} \right\|^2} + t \mathbf{a} ,</math> where {{nowrap|1=''t''}} is an arbitrary constant. This can be derived using the triple product expansion: :<math> \mathbf{c} \times \mathbf{a} = (\mathbf{a} \times \mathbf{b}) \times \mathbf{a} = \left\| \mathbf{a} \right\|^2 \mathbf{b} - (\mathbf{a} \cdot \mathbf{b})\mathbf{a} </math> Rearrange to solve for {{nowrap|1='''b'''}} to give :<math> \mathbf{b} = \frac{\mathbf{c} \times \mathbf{a}}{\left\| \mathbf{a} \right\|^2} + \frac{\mathbf{a}\cdot \mathbf{b}}{\left\| \mathbf{a} \right\|^2}\mathbf{a} </math> The coefficient of the last term can be simplified to just the arbitrary constant {{nowrap|1=''t''}} to yield the result shown above. === Lagrange's identity === The relation :<math> \left\| \mathbf{a} \times \mathbf{b} \right\|^2 \equiv \det \begin{bmatrix} \mathbf{a} \cdot \mathbf{a} & \mathbf{a} \cdot \mathbf{b} \\ \mathbf{a} \cdot \mathbf{b} & \mathbf{b} \cdot \mathbf{b}\\ \end{bmatrix} \equiv \left\| \mathbf{a} \right\| ^2 \left\| \mathbf{b} \right\| ^2 - (\mathbf{a} \cdot \mathbf{b})^2 </math> can be compared with another relation involving the right-hand side, namely [[Lagrange's identity]] expressed as<ref name=Boichenko>{{cite book |title=Dimension theory for ordinary differential equations |author1=Vladimir A. 
Boichenko |author2=Gennadiĭ Alekseevich Leonov |author3=Volker Reitmann |url=https://books.google.com/books?id=9bN1-b_dSYsC&pg=PA26 |page=26 |isbn=3-519-00437-2 |year=2005 |publisher=Vieweg+Teubner Verlag}}</ref> :<math> \sum_{1 \le i < j \le n} \left( a_ib_j - a_jb_i \right)^2 \equiv \left\| \mathbf a \right\|^2 \left\| \mathbf b \right\|^2 - ( \mathbf{a \cdot b } )^2, </math> where '''a''' and '''b''' may be ''n''-dimensional vectors. This also shows that the [[Riemannian volume form]] for surfaces is exactly the [[Volume form|surface element]] from vector calculus. In the case where {{nowrap|1=''n'' = 3}}, combining these two equations results in the expression for the magnitude of the cross product in terms of its components:<ref name=Lounesto1>{{cite book |url=https://books.google.com/books?id=kOsybQWDK4oC&q=%22which+in+coordinate+form+means+Lagrange%27s+identity%22&pg=PA94 |author=Pertti Lounesto |page=94 |title=Clifford algebras and spinors |isbn=0-521-00551-5 |edition=2nd |publisher=Cambridge University Press |year=2001}}</ref> :<math>\begin{align} \|\mathbf{a} \times \mathbf{b}\|^2 &\equiv \sum_{1 \le i < j \le 3} (a_ib_j - a_jb_i)^2 \\ &\equiv (a_1 b_2 - b_1 a_2)^2 + (a_2 b_3 - a_3 b_2)^2 + (a_3 b_1 - a_1 b_3)^2. \end{align}</math> The same result is found directly using the components of the cross product found from :<math>\mathbf{a} \times \mathbf{b} \equiv \det \begin{bmatrix} \hat\mathbf{i} & \hat\mathbf{j} & \hat\mathbf{k} \\ a_1 & a_2 & a_3 \\ b_1 & b_2 & b_3 \\ \end{bmatrix}.</math> In '''R'''<sup>3</sup>, Lagrange's equation is a special case of the multiplicativity {{nowrap|1={{abs|'''vw'''}} = {{abs|'''v'''}}{{abs|'''w'''}}}} of the norm in the [[Quaternion#Algebraic properties|quaternion algebra]]. It is a special case of another formula, also sometimes called Lagrange's identity, which is the three dimensional case of the [[Binet–Cauchy identity]]:<ref name=Liu/><ref name=Weisstein>by {{cite book |author=Eric W. 
Weisstein |chapter=Binet-Cauchy identity |title=CRC concise encyclopedia of mathematics |chapter-url=https://books.google.com/books?id=8LmCzWQYh_UC&pg=PA228 |page=228 |isbn=1-58488-347-2 |edition=2nd |year=2003 |publisher=CRC Press}}</ref> :<math> (\mathbf{a} \times \mathbf{b}) \cdot (\mathbf{c} \times \mathbf{d}) \equiv (\mathbf{a} \cdot \mathbf{c})(\mathbf{b} \cdot \mathbf{d}) - (\mathbf{a} \cdot \mathbf{d})(\mathbf{b} \cdot \mathbf{c}). </math> If {{nowrap|1='''a''' = '''c'''}} and {{nowrap|1='''b''' = '''d'''}}, this simplifies to the formula above. === Infinitesimal generators of rotations === {{further|Infinitesimal rotation matrix#Generators of rotations}} The cross product conveniently describes the infinitesimal generators of [[rotation (mathematics)|rotation]]s in '''R'''<sup>3</sup>. Specifically, if '''n''' is a unit vector in '''R'''<sup>3</sup> and ''R''(''φ'', '''n''') denotes a rotation about the axis through the origin specified by '''n''', with angle φ (measured in radians, counterclockwise when viewed from the tip of '''n'''), then :<math>\left.{d\over d\phi} \right|_{\phi=0} R(\phi,\boldsymbol{n}) \boldsymbol{x} = \boldsymbol{n} \times \boldsymbol{x}</math> for every vector '''x''' in '''R'''<sup>3</sup>. The cross product with '''n''' therefore describes the infinitesimal generator of the rotations about '''n'''. These infinitesimal generators form the [[Lie algebra]] '''so'''(3) of the [[rotation group SO(3)]], and we obtain the result that the Lie algebra '''R'''<sup>3</sup> with cross product is isomorphic to the Lie algebra '''so'''(3). 
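The identities above lend themselves to direct numerical spot-checks. The following is a minimal sketch in plain Python (the helper names `cross`, `dot`, and `rotate` are ours, not from any library): integer vectors make the Lagrange and Binet–Cauchy comparisons exact, and a finite difference of Rodrigues' rotation formula approximates the generator property.

```python
import math

def cross(a, b):
    # component formula for the 3D cross product
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def dot(a, b):
    return sum(x*y for x, y in zip(a, b))

a, b, c, d = (1, 2, 3), (4, 5, 6), (2, 0, 1), (1, 1, 4)

# Lagrange's identity: ||a x b||^2 = ||a||^2 ||b||^2 - (a . b)^2
assert dot(cross(a, b), cross(a, b)) == dot(a, a)*dot(b, b) - dot(a, b)**2

# Binet-Cauchy identity: (a x b) . (c x d) = (a . c)(b . d) - (a . d)(b . c)
assert dot(cross(a, b), cross(c, d)) == dot(a, c)*dot(b, d) - dot(a, d)*dot(b, c)

def rotate(phi, n, x):
    # Rodrigues' rotation formula: rotate x by angle phi about the unit axis n
    co, s = math.cos(phi), math.sin(phi)
    k = dot(n, x)*(1 - co)
    return tuple(x[i]*co + cross(n, x)[i]*s + n[i]*k for i in range(3))

# d/dphi R(phi, n) x at phi = 0 equals n x x (finite-difference check)
n, x, h = (0.0, 0.0, 1.0), (1.0, 2.0, 3.0), 1e-6
deriv = tuple((r - xi)/h for r, xi in zip(rotate(h, n, x), x))
assert all(abs(dv - t) < 1e-4 for dv, t in zip(deriv, cross(n, x)))
```

Because all the quantities in the first two checks are integers, the equalities hold exactly rather than up to rounding.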
== Alternative ways to compute == === Conversion to matrix multiplication === The vector cross product can also be expressed as the product of a [[skew-symmetric matrix]] and a vector:<ref name=Liu>{{cite journal |title=Hadamard, Khatri-Rao, Kronecker and other matrix products |journal=Int J Information and Systems Sciences |volume=4 |pages=160–177 |year=2008 |publisher=Institute for scientific computing and education |url=https://www.researchgate.net/publication/251677036 |author1=Shuangzhe Liu |author2=Götz Trenkler |issue=1 }}</ref> <math display="block">\begin{align} \mathbf{a} \times \mathbf{b} = [\mathbf{a}]_{\times} \mathbf{b} &= \begin{bmatrix}\,0&\!-a_3&\,\,a_2\\ \,\,a_3&0&\!-a_1\\-a_2&\,\,a_1&\,0\end{bmatrix}\begin{bmatrix}b_1\\b_2\\b_3\end{bmatrix} \\ \mathbf{a} \times \mathbf{b} = {[\mathbf{b}]_\times}^\mathrm{\!\!T} \mathbf{a} &= \begin{bmatrix}\,0&\,\,b_3&\!-b_2\\ -b_3&0&\,\,b_1\\\,\,b_2&\!-b_1&\,0\end{bmatrix}\begin{bmatrix}a_1\\a_2\\a_3\end{bmatrix}, \end{align}</math> where superscript {{math|{{sup|T}}}} refers to the [[transpose]] operation, and ['''a''']<sub>×</sub> is defined by: <math display="block">[\mathbf{a}]_{\times} \stackrel{\rm def}{=} \begin{bmatrix}\,\,0&\!-a_3&\,\,\,a_2\\\,\,\,a_3&0&\!-a_1\\\!-a_2&\,\,a_1&\,\,0\end{bmatrix}.</math> The columns ['''a''']<sub>×,i</sub> of the skew-symmetric matrix for a vector '''a''' can also be obtained by calculating the cross product with [[unit vectors]]. That is, <math display="block">[\mathbf{a}]_{\times, i} = \mathbf{a} \times \mathbf{\hat{e}_i}, \; i\in \{1,2,3\} </math> or <math display="block">[\mathbf{a}]_{\times} = \sum_{i=1}^3\left(\mathbf{a} \times \mathbf{\hat{e}_i}\right)\otimes\mathbf{\hat{e}_i},</math> where <math>\otimes</math> is the [[outer product]] operator.
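As an illustrative sketch in plain Python (the helper names `cross`, `skew`, and `matvec` are ours), one can check that the matrix ['''a''']<sub>×</sub> defined above reproduces the cross product, and that its columns are the cross products with the unit vectors:

```python
def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def skew(a):
    # [a]_x, the skew-symmetric matrix with [a]_x b = a x b
    return [[    0, -a[2],  a[1]],
            [ a[2],     0, -a[0]],
            [-a[1],  a[0],     0]]

def matvec(M, v):
    return tuple(sum(M[i][j]*v[j] for j in range(3)) for i in range(3))

a, b = (1, 2, 3), (4, 5, 6)
assert matvec(skew(a), b) == cross(a, b)          # a x b = [a]_x b

# a x b = [b]_x^T a, since the transpose of a skew-symmetric matrix is its negative
transpose = lambda M: [[M[j][i] for j in range(3)] for i in range(3)]
assert matvec(transpose(skew(b)), a) == cross(a, b)

# column i of [a]_x equals a x e_i
basis = [(1, 0, 0), (0, 1, 0), (0, 0, 1)]
for i, e in enumerate(basis):
    assert tuple(skew(a)[r][i] for r in range(3)) == cross(a, e)
```

All arithmetic is on integers, so the three assertions are exact equalities.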
Also, if '''a''' is itself expressed as a cross product: <math display="block">\mathbf{a} = \mathbf{c} \times \mathbf{d}</math> then <math display="block">[\mathbf{a}]_{\times} = \mathbf{d}\mathbf{c}^\mathrm{T} - \mathbf{c}\mathbf{d}^\mathrm{T} .</math> {{math proof|title=Proof by substitution |proof=Evaluation of the cross product gives <math display="block"> \mathbf{a} = \mathbf{c} \times \mathbf{d} = \begin{pmatrix} c_2 d_3 - c_3 d_2 \\ c_3 d_1 - c_1 d_3 \\ c_1 d_2 - c_2 d_1 \end{pmatrix} </math> Hence, the left hand side equals <math display="block"> [\mathbf{a}]_{\times} = \begin{bmatrix} 0 & c_2 d_1 - c_1 d_2 & c_3 d_1 - c_1 d_3 \\ c_1 d_2 - c_2 d_1 & 0 & c_3 d_2 - c_2 d_3 \\ c_1 d_3 - c_3 d_1 & c_2 d_3 - c_3 d_2 & 0 \end{bmatrix} </math> Now, for the right hand side, <math display="block"> \mathbf{c} \mathbf{d}^{\mathrm T} = \begin{bmatrix} c_1 d_1 & c_1 d_2 & c_1 d_3 \\ c_2 d_1 & c_2 d_2 & c_2 d_3 \\ c_3 d_1 & c_3 d_2 & c_3 d_3 \end{bmatrix} </math> And its transpose is <math display="block"> \mathbf{d} \mathbf{c}^{\mathrm T} = \begin{bmatrix} c_1 d_1 & c_2 d_1 & c_3 d_1 \\ c_1 d_2 & c_2 d_2 & c_3 d_2 \\ c_1 d_3 & c_2 d_3 & c_3 d_3 \end{bmatrix} </math> Evaluation of the right hand side gives <math display="block"> \mathbf{d} \mathbf{c}^{\mathrm T} - \mathbf{c} \mathbf{d}^{\mathrm T} = \begin{bmatrix} 0 & c_2 d_1 - c_1 d_2 & c_3 d_1 - c_1 d_3 \\ c_1 d_2 - c_2 d_1 & 0 & c_3 d_2 - c_2 d_3 \\ c_1 d_3 - c_3 d_1 & c_2 d_3 - c_3 d_2 & 0 \end{bmatrix} </math> Comparison shows that the left hand side equals the right hand side. }} This result can be generalized to higher dimensions using [[geometric algebra]]. 
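The identity ['''c''' × '''d''']<sub>×</sub> = '''dc'''<sup>T</sup> − '''cd'''<sup>T</sup> proved above can likewise be spot-checked numerically; a small sketch in plain Python (helper names are ours):

```python
def cross(c, d):
    return (c[1]*d[2] - c[2]*d[1],
            c[2]*d[0] - c[0]*d[2],
            c[0]*d[1] - c[1]*d[0])

def skew(a):
    return [[    0, -a[2],  a[1]],
            [ a[2],     0, -a[0]],
            [-a[1],  a[0],     0]]

def outer(u, v):
    # outer product u v^T as a 3x3 matrix
    return [[u[i]*v[j] for j in range(3)] for i in range(3)]

c, d = (1, 2, 3), (4, 5, 6)
dcT, cdT = outer(d, c), outer(c, d)
rhs = [[dcT[i][j] - cdT[i][j] for j in range(3)] for i in range(3)]
assert skew(cross(c, d)) == rhs   # [c x d]_x = d c^T - c d^T
```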
In particular in any dimension bivectors can be identified with skew-symmetric matrices, so the product between a skew-symmetric matrix and vector is equivalent to the grade-1 part of the product of a bivector and vector.<ref name="lounesto2001">{{cite book | author = Lounesto, Pertti | title = Clifford algebras and spinors | url = https://archive.org/details/cliffordalgebras00loun | url-access = limited | publisher = Cambridge: Cambridge University Press | year = 2001 | isbn = 978-0-521-00551-7 | pages = [https://archive.org/details/cliffordalgebras00loun/page/n200 193] }}</ref> In three dimensions bivectors are [[Hodge dual|dual]] to vectors so the product is equivalent to the cross product, with the bivector instead of its vector dual. In higher dimensions the product can still be calculated but bivectors have more degrees of freedom and are not equivalent to vectors.<ref name="lounesto2001"/> This notation is also often much easier to work with, for example, in [[epipolar geometry]]. From the general properties of the cross product it follows immediately that <math display="block">[\mathbf{a}]_{\times} \, \mathbf{a} = \mathbf{0}</math> and <math display="block">\mathbf{a}^\mathrm T \, [\mathbf{a}]_{\times} = \mathbf{0}</math> and from the fact that ['''a''']<sub>×</sub> is skew-symmetric it follows that <math display="block">\mathbf{b}^\mathrm T \, [\mathbf{a}]_{\times} \, \mathbf{b} = 0. </math> The above-mentioned triple product expansion (bac–cab rule) can be easily proven using this notation. As mentioned above, the Lie algebra '''R'''<sup>3</sup> with cross product is isomorphic to the Lie algebra '''so'''(3), whose elements can be identified with the 3×3 skew-symmetric matrices. The map '''a''' → ['''a''']<sub>×</sub> provides an isomorphism between '''R'''<sup>3</sup> and '''so'''(3). Under this map, the cross product of 3-vectors corresponds to the [[commutator]] of 3×3 skew-symmetric matrices.
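The correspondence between the cross product and the matrix commutator can be checked directly; a minimal sketch in plain Python (helper names are ours):

```python
def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def skew(a):
    return [[    0, -a[2],  a[1]],
            [ a[2],     0, -a[0]],
            [-a[1],  a[0],     0]]

def matmul(A, B):
    return [[sum(A[i][k]*B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

a, b = (1, 2, 3), (4, 5, 6)
AB, BA = matmul(skew(a), skew(b)), matmul(skew(b), skew(a))
commutator = [[AB[i][j] - BA[i][j] for j in range(3)] for i in range(3)]
# [[a]_x, [b]_x] = [a x b]_x : the map a -> [a]_x respects the Lie brackets
assert commutator == skew(cross(a, b))
```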
:{| class="toccolours collapsible collapsed" width="70%" style="text-align:left" !Matrix conversion for cross product with canonical base vectors |- |Denoting with <math>\mathbf{e}_i \in \mathbf{R}^{3 \times 1}</math> the <math>i</math>-th canonical base vector, the cross product of a generic vector <math>\mathbf{v} \in \mathbf{R}^{3 \times 1}</math> with <math>\mathbf{e}_i</math> is given by: <math>\mathbf{v} \times \mathbf{e}_i = \mathbf{C}_i \mathbf{v}</math>, where <math display="block"> \mathbf{C}_1 = \begin{bmatrix} 0 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & -1 & 0 \end{bmatrix}, \quad \mathbf{C}_2 = \begin{bmatrix} 0 & 0 & -1 \\ 0 & 0 & 0 \\ 1 & 0 & 0 \end{bmatrix}, \quad \mathbf{C}_3 = \begin{bmatrix} 0 & 1 & 0 \\ -1 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix} </math> These matrices share the following properties: * <math>\mathbf{C}_i^\textrm{T} = -\mathbf{C}_i</math> (skew-symmetric); * Both trace and determinant are zero; * <math>\text{rank}(\mathbf{C}_i) = 2</math>; * <math>\mathbf{C}_i \mathbf{C}_i^\textrm{T} = \mathbf{P}^{ ^\perp}_{\mathbf{e}_i}</math> (see below); The [[projection (linear algebra)#Orthogonal projection|orthogonal projection matrix]] of a vector <math>\mathbf{v} \neq \mathbf{0}</math> is given by <math>\mathbf{P}_{\mathbf{v}} = \mathbf{v}\left(\mathbf{v}^\textrm{T} \mathbf{v}\right)^{-1} \mathbf{v}^T</math>. The projection matrix onto the [[orthogonal complement]] is given by <math>\mathbf{P}^{ ^\perp}_{\mathbf{v}} = \mathbf{I} - \mathbf{P}_{\mathbf{v}}</math>, where <math>\mathbf{I}</math> is the identity matrix. 
For the special case of <math>\mathbf{v} = \mathbf{e}_i</math>, it can be verified that <math display="block"> \mathbf{P}^{^\perp}_{\mathbf{e}_1} = \begin{bmatrix} 0 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}, \quad \mathbf{P}^{ ^\perp}_{\mathbf{e}_2} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 1 \end{bmatrix}, \quad \mathbf{P}^{ ^\perp}_{\mathbf{e}_3} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 0 \end{bmatrix} </math> For other properties of orthogonal projection matrices, see [[projection (linear algebra)]]. |} ===Index notation for tensors=== The cross product can alternatively be defined in terms of the [[Levi-Civita symbol#Levi-Civita tensors|Levi-Civita tensor]] ''E<sub>ijk</sub>'' and a dot product ''η<sup>mi</sup>'', which are useful in converting vector notation for tensor applications: :<math>\mathbf{c} = \mathbf{a \times b} \Leftrightarrow\ c^m = \sum_{i=1}^3 \sum_{j=1}^3 \sum_{k=1}^3 \eta^{mi} E_{ijk} a^j b^k</math> where the [[Indexed family|indices]] <math>i,j,k</math> correspond to vector components. This characterization of the cross product is often expressed more compactly using the [[Einstein summation convention]] as :<math>\mathbf{c} = \mathbf{a \times b} \Leftrightarrow\ c^m = \eta^{mi} E_{ijk} a^j b^k</math> in which repeated indices are summed over the values 1 to 3. In a positively-oriented orthonormal basis ''η<sup>mi</sup>'' = δ<sup>''mi''</sup> (the [[Kronecker delta]]) and <math> E_{ijk} = \varepsilon_{ijk}</math> (the [[Levi-Civita symbol]]). In that case, this representation is another form of the skew-symmetric representation of the cross product: :<math>[\varepsilon_{ijk} a^j] = [\mathbf{a}]_\times.</math> In [[classical mechanics]]: representing the cross product by using the Levi-Civita symbol can cause mechanical symmetries to be obvious when physical systems are [[isotropic]]. 
(An example: consider a particle in a [[Hooke's law]] potential in three-space, free to oscillate in three dimensions; none of these dimensions are "special" in any sense, so symmetries lie in the cross-product-represented angular momentum, which are made clear by the abovementioned Levi-Civita representation).{{Citation needed|date=November 2009}} === Mnemonic === [[File:cross_product_mnemonic.svg|thumb|Mnemonic to calculate a cross product in vector form]] {{redirect|Xyzzy (mnemonic)||Xyzzy (disambiguation){{!}}Xyzzy}} The word "xyzzy" can be used to remember the definition of the cross product. If :<math>\mathbf{a} = \mathbf{b} \times \mathbf{c}</math> where: :<math> \mathbf{a} = \begin{bmatrix}a_x\\a_y\\a_z\end{bmatrix},\ \mathbf{b} = \begin{bmatrix}b_x\\b_y\\b_z\end{bmatrix},\ \mathbf{c} = \begin{bmatrix}c_x\\c_y\\c_z\end{bmatrix} </math> then: :<math>a_x = b_y c_z - b_z c_y </math> :<math>a_y = b_z c_x - b_x c_z </math> :<math>a_z = b_x c_y - b_y c_x. </math> The second and third equations can be obtained from the first by simply vertically rotating the subscripts, {{nowrap|''x'' → ''y'' → ''z'' → ''x''}}. The problem, of course, is how to remember the first equation, and two options are available for this purpose: either to remember the relevant two diagonals of Sarrus's scheme (those containing '''''i'''''), or to remember the xyzzy sequence. Since the first diagonal in Sarrus's scheme is just the [[main diagonal]] of the [[#Matrix notation|above]]-mentioned 3×3 matrix, the first three letters of the word xyzzy can be very easily remembered. === Cross visualization === Similarly to the mnemonic device above, a "cross" or X can be visualized between the two vectors in the equation. This may be helpful for remembering the correct cross product formula. If :<math>\mathbf{a} = \mathbf{b} \times \mathbf{c}</math> then: :<math> \mathbf{a} = \begin{bmatrix}b_x\\b_y\\b_z\end{bmatrix} \times \begin{bmatrix}c_x\\c_y\\c_z\end{bmatrix}. 
</math> If we want to obtain the formula for <math>a_x</math> we simply drop the <math>b_x</math> and <math>c_x</math> from the formula, and take the next two components down: :<math> a_x = \begin{bmatrix}b_y\\b_z\end{bmatrix} \times \begin{bmatrix}c_y\\c_z\end{bmatrix}. </math> When doing this for <math>a_y</math> the next two elements down should "wrap around" the matrix so that after the z component comes the x component: the two components taken are z and x (in that order), while for <math>a_z</math> they are x and y. :<math> a_y = \begin{bmatrix}b_z\\b_x\end{bmatrix} \times \begin{bmatrix}c_z\\c_x\end{bmatrix},\ a_z = \begin{bmatrix}b_x\\b_y\end{bmatrix} \times \begin{bmatrix}c_x\\c_y\end{bmatrix} </math> For <math>a_x</math> then, if we visualize the cross operator as pointing from an element on the left to an element on the right, we can take the first element on the left and simply multiply by the element that the cross points to in the right-hand matrix. We then subtract the next element down on the left, multiplied by the element that the cross points to here as well. This results in our <math>a_x</math> formula: :<math>a_x = b_y c_z - b_z c_y.</math> We can do this in the same way for <math>a_y</math> and <math>a_z</math> to construct their associated formulas. == Applications == The cross product has applications in various contexts. For example, it is used in computational geometry, physics and engineering. A non-exhaustive list of examples follows. === Computational geometry === The cross product appears in the calculation of the distance of two [[Skew lines#Distance|skew lines]] (lines not in the same plane) from each other in three-dimensional space. The cross product can be used to calculate the normal for a triangle or polygon, an operation frequently performed in [[computer graphics]].
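For instance, a normal for a triangle with vertices p1, p2, p3 can be obtained as the cross product of two edge vectors; a minimal sketch in plain Python (the function names are ours):

```python
def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def sub(p, q):
    return tuple(x - y for x, y in zip(p, q))

def triangle_normal(p1, p2, p3):
    # unnormalized normal; its length equals twice the triangle's area
    return cross(sub(p2, p1), sub(p3, p1))

n = triangle_normal((0, 0, 0), (1, 0, 0), (0, 1, 0))
assert n == (0, 0, 1)   # unit right triangle in the xy-plane, normal along +z

# the normal is perpendicular to both edge vectors
dot = lambda u, v: sum(x*y for x, y in zip(u, v))
assert dot(n, sub((1, 0, 0), (0, 0, 0))) == 0
assert dot(n, sub((0, 1, 0), (0, 0, 0))) == 0
```

Note that the result depends on vertex order: listing the vertices clockwise instead of anticlockwise flips the normal.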
For example, the winding of a polygon (clockwise or anticlockwise) about a point within the polygon can be calculated by triangulating the polygon (like spoking a wheel) and summing the angles (between the spokes) using the cross product to keep track of the sign of each angle. In [[computational geometry]] of [[Plane (geometry)|the plane]], the cross product is used to determine the sign of the [[acute angle]] defined by three points <math> p_1=(x_1,y_1), p_2=(x_2,y_2)</math> and <math> p_3=(x_3,y_3)</math>. It corresponds to the direction (upward or downward) of the cross product of the two coplanar [[vector (geometry)|vector]]s defined by the two pairs of points <math>(p_1, p_2)</math> and <math>(p_1, p_3)</math>. The sign of the acute angle is the sign of the expression :<math> P = (x_2-x_1)(y_3-y_1)-(y_2-y_1)(x_3-x_1),</math> which is the signed length of the cross product of the two vectors. To use the cross product, simply extend the 2D vectors <math>p_1, p_2, p_3</math> to co-planar 3D vectors by setting <math>z_k=0</math> for each of them. In the "right-handed" coordinate system, if the result is 0, the points are [[collinear]]; if it is positive, the three points constitute a positive angle of rotation around <math> p_1</math> from <math> p_2</math> to <math> p_3</math>, otherwise a negative angle. From another point of view, the sign of <math>P</math> tells whether <math> p_3</math> lies to the left or to the right of line <math> p_1, p_2.</math> The cross product is used in calculating the volume of a [[polyhedron]] such as a [[tetrahedron#Volume|tetrahedron]] or [[parallelepiped#Volume|parallelepiped]]. === Angular momentum and torque === The [[angular momentum]] {{math|'''L'''}} of a particle about a given origin is defined as: : <math>\mathbf{L} = \mathbf{r} \times \mathbf{p},</math> where {{math|'''r'''}} is the position vector of the particle relative to the origin, {{math|'''p'''}} is the linear momentum of the particle. 
In the same way, the [[Moment (physics)|moment]] {{math|'''M'''}} of a force {{math|'''F'''<sub>B</sub>}} applied at point B around point A is given as: : <math> \mathbf{M}_\mathrm{A} = \mathbf{r}_\mathrm{AB} \times \mathbf{F}_\mathrm{B}\,</math> In mechanics the ''moment of a force'' is also called ''[[torque]]'' and written as <math>\boldsymbol\tau</math>. Since position {{nowrap|{{math|'''r'''}},}} linear momentum {{math|'''p'''}} and force {{math|'''F'''}} are all ''true'' vectors, both the angular momentum {{math|'''L'''}} and the moment of a force {{math|'''M'''}} are ''pseudovectors'' or ''axial vectors''. === Rigid body === The cross product frequently appears in the description of rigid motions. Two points ''P'' and ''Q'' on a [[rigid body]] can be related by: : <math>\mathbf{v}_P - \mathbf{v}_Q = \boldsymbol\omega \times \left( \mathbf{r}_P - \mathbf{r}_Q \right)\,</math> where <math>\mathbf{r}</math> is the point's position, <math>\mathbf{v}</math> is its velocity and <math>\boldsymbol\omega</math> is the body's [[angular velocity]]. Since position <math>\mathbf{r}</math> and velocity <math>\mathbf{v}</math> are ''true'' vectors, the angular velocity <math>\boldsymbol\omega</math> is a ''pseudovector'' or ''axial vector''. === Lorentz force === {{See also|Lorentz force}} The cross product is used to describe the [[Lorentz force]] experienced by a moving electric charge {{nowrap|{{math|''q<sub>e</sub>''}}:}} : <math>\mathbf{F} = q_e \left( \mathbf{E}+ \mathbf{v} \times \mathbf{B} \right)</math> Since velocity {{nowrap|{{math|'''v'''}},}} force {{math|'''F'''}} and electric field {{math|'''E'''}} are all ''true'' vectors, the magnetic field {{math|'''B'''}} is a ''pseudovector''. === Other === In [[vector calculus]], the cross product is used to define the formula for the [[vector operator]] [[Curl (mathematics)|curl]].
The trick of rewriting a cross product in terms of a matrix multiplication appears frequently in [[epipolar geometry|epipolar]] and multi-view geometry, in particular when deriving matching constraints. == As an external product == [[File:Exterior calc cross product.svg|right|thumb|The cross product in relation to the exterior product. In red are the orthogonal [[unit vector]] and the "parallel" unit bivector.]] The cross product can be defined in terms of the exterior product. It can be generalized to an [[#External product|external product]] in other than three dimensions.<ref>{{cite book |author=Greub, W. |title=Multilinear Algebra |year=1978}}</ref> This generalization allows a natural geometric interpretation of the cross product. In [[exterior algebra]] the exterior product of two vectors is a bivector. A bivector is an oriented plane element, in much the same way that a vector is an oriented line element. Given two vectors ''a'' and ''b'', one can view the bivector {{nowrap|1=''a'' ∧ ''b''}} as the oriented parallelogram spanned by ''a'' and ''b''. The cross product is then obtained by taking the [[Hodge star]] of the bivector {{nowrap|1=''a'' ∧ ''b''}}, mapping [[p-vector|2-vectors]] to vectors: : <math>a \times b = \star (a \wedge b).</math> This can be thought of as the oriented multi-dimensional element "perpendicular" to the bivector. In a ''d''-dimensional space, the Hodge star takes a ''k''-vector to a (''d'' − ''k'')-vector; thus only in ''d'' = 3 dimensions is the result an element of dimension one (3 − 2 = 1), i.e. a vector. For example, in ''d'' = 4 dimensions, the cross product of two vectors has dimension 4 − 2 = 2, giving a bivector. Thus, only in three dimensions does the cross product define an algebra structure to multiply vectors.
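In coordinates this construction can be made concrete: the bivector ''a'' ∧ ''b'' has the three independent components ''a<sub>i</sub>b<sub>j</sub>'' − ''a<sub>j</sub>b<sub>i</sub>'' (''i'' < ''j''), and applying the Hodge star with respect to a right-handed orthonormal basis yields the cross product. A minimal sketch in plain Python (0-based indices; the names are ours):

```python
def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

a, b = (1, 2, 3), (4, 5, 6)

# components of the bivector a ^ b, indexed by pairs i < j (0-based)
wedge = {(i, j): a[i]*b[j] - a[j]*b[i]
         for i in range(3) for j in range(3) if i < j}

# Hodge star in 3D: e1^e2 -> e3, e2^e3 -> e1, e3^e1 -> e2;
# with 0-based pairs: (1,2) -> axis 0, (0,2) -> minus axis 1, (0,1) -> axis 2
star = (wedge[(1, 2)], -wedge[(0, 2)], wedge[(0, 1)])

assert star == cross(a, b)   # a x b = *(a ^ b)
```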
== Handedness ==<!-- This section is linked from [[Vector calculus]] --> {{Original research section|date=September 2021}} === Consistency === When physics laws are written as equations, it is possible to make an arbitrary choice of the coordinate system, including handedness. One should be careful to never write down an equation where the two sides do not behave equally under all transformations that need to be considered. For example, if one side of the equation is a cross product of two [[polar vector]]s, one must take into account that the result is an [[Pseudovector|axial vector]]. Therefore, for consistency, the other side must also be an axial vector.{{Citation needed|date=April 2008}} More generally, the result of a cross product may be either a polar vector or an axial vector, depending on the type of its operands (polar vectors or axial vectors). Namely, polar vectors and axial vectors are interrelated in the following ways under application of the cross product: * polar vector × polar vector = axial vector * axial vector × axial vector = axial vector * polar vector × axial vector = polar vector * axial vector × polar vector = polar vector or symbolically * polar × polar = axial * axial × axial = axial * polar × axial = polar * axial × polar = polar Because the cross product may also be a polar vector, it may not change direction with a mirror image transformation. This happens, according to the above relationships, if one of the operands is a polar vector and the other one is an axial vector (e.g., the cross product of two polar vectors). For instance, a [[vector triple product]] involving three polar vectors is a polar vector. A handedness-free approach is possible using exterior algebra. === The paradox of the orthonormal basis === Let ('''i''', '''j''', '''k''') be an orthonormal basis. The vectors '''i''', '''j''' and '''k''' do not depend on the orientation of the space. They can even be defined in the absence of any orientation. 
They can not therefore be axial vectors. But if '''i''' and '''j''' are polar vectors, then '''k''' is an axial vector for '''i''' × '''j''' = '''k''' or '''j''' × '''i''' = '''k'''. This is a paradox. "Axial" and "polar" are ''physical'' qualifiers for ''physical'' vectors; that is, vectors which represent ''physical'' quantities such as the velocity or the magnetic field. The vectors '''i''', '''j''' and '''k''' are mathematical vectors, neither axial nor polar. In mathematics, the cross-product of two vectors is a vector. There is no contradiction. == Generalizations == There are several ways to generalize the cross product to higher dimensions. === Lie algebra === {{Main|Lie algebra}} The cross product can be seen as one of the simplest Lie products, and is thus generalized by [[Lie algebra]]s, which are axiomatized as binary products satisfying the axioms of multilinearity, skew-symmetry, and the Jacobi identity. Many Lie algebras exist, and their study is a major field of mathematics, called [[Lie theory]]. For example, the [[Heisenberg algebra]] gives another Lie algebra structure on <math>\mathbf{R}^3.</math> In the basis <math>\{x,y,z\},</math> the product is <math>[x,y]=z, [x,z]=[y,z]=0.</math> === Quaternions === {{Further|quaternions and spatial rotation}} The cross product can also be described in terms of [[quaternion]]s. In general, if a vector {{nowrap|[''a''<sub>1</sub>, ''a''<sub>2</sub>, ''a''<sub>3</sub>]}} is represented as the quaternion {{nowrap|''a''<sub>1</sub>''i'' + ''a''<sub>2</sub>''j'' + ''a''<sub>3</sub>''k''}}, the cross product of two vectors can be obtained by taking their product as quaternions and deleting the real part of the result. The real part will be the negative of the dot product of the two vectors. === Octonions === {{See also|Seven-dimensional cross product|Octonion}} A cross product for 7-dimensional vectors can be obtained in the same way by using the [[octonion]]s instead of the quaternions. 
The nonexistence of nontrivial vector-valued cross products of two vectors in other dimensions is related to the result from [[Hurwitz's theorem (normed division algebras)|Hurwitz's theorem]] that the only [[normed division algebra]]s are the ones with dimension 1, 2, 4, and 8. === Exterior product === {{Main|Exterior algebra|Comparison of vector algebra and geometric algebra#Cross and exterior products}} In general dimension, there is no direct analogue of the binary cross product that yields specifically a vector. There is however the exterior product, which has similar properties, except that the exterior product of two vectors is now a [[p-vector|2-vector]] instead of an ordinary vector. As mentioned above, the cross product can be interpreted as the exterior product in three dimensions by using the [[Hodge star]] operator to map 2-vectors to vectors. The Hodge dual of the exterior product yields an {{nowrap|(''n'' − 2)}}-vector, which is a natural generalization of the cross product in any number of dimensions. The exterior product and dot product can be combined (through summation) to form the [[Geometric algebra|geometric product]] in geometric algebra. === External product === As mentioned above, the cross product can be interpreted in three dimensions as the Hodge dual of the exterior product. In any finite ''n'' dimensions, the Hodge dual of the exterior product of {{nowrap|''n'' − 1}} vectors is a vector. So, instead of a binary operation, in arbitrary finite dimensions, the cross product is generalized as the Hodge dual of the exterior product of some given {{nowrap|''n'' − 1}} vectors. 
This generalization is called '''external product'''.<ref>{{cite book|editor=Hogben, L|editor-link= Leslie Hogben |title=Handbook of Linear Algebra|year=2007}}{{page needed|date=September 2019}}</ref> === Commutator product === {{Main|Geometric algebra#Extensions of the inner and exterior products|Cross product#Cross product and handedness|Cross product#Lie algebra}} Interpreting the three-dimensional [[vector space]] of the algebra as the [[bivector|2-vector]] (not the 1-vector) [[Graded vector space|subalgebra]] of the three-dimensional geometric algebra, where <math>\mathbf{i} = \mathbf{e_2} \mathbf{e_3}</math>, <math>\mathbf{j} = \mathbf{e_1} \mathbf{e_3}</math>, and <math>\mathbf{k} = \mathbf{e_1} \mathbf{e_2}</math>, the cross product corresponds exactly to the [[geometric algebra#Extensions of the inner and exterior products|commutator product]] in geometric algebra and both use the same symbol <math>\times</math>. The commutator product is defined for 2-vectors <math>A</math> and <math>B</math> in geometric algebra as: : <math>A \times B = \tfrac{1}{2}(AB - BA),</math> where <math>AB</math> is the geometric product.<ref>{{cite book|title=Understanding Geometric Algebra for Electromagnetic Theory|year=2011|last1=Arthur|first1=John W.|page=49|isbn=978-0470941638|publisher=[[IEEE Press]]|url=https://books.google.com/books?id=rxGCaDvBCoAC}}</ref> The commutator product could be generalised to arbitrary [[multivector#Geometric algebra|multivectors]] in three dimensions, which results in a multivector consisting of only elements of [[Graded vector space|grades]] 1 (1-vectors/[[#Handedness|true vectors]]) and 2 (2-vectors/pseudovectors). While the commutator product of two 1-vectors is indeed the same as the exterior product and yields a 2-vector, the commutator of a 1-vector and a 2-vector yields a true vector, corresponding instead to the [[Geometric algebra#Extensions of the inner and exterior products|left and right contractions]] in geometric algebra. 
The commutator product of two 2-vectors has no corresponding equivalent product, which is why the commutator product is defined in the first place for 2-vectors. Furthermore, the commutator triple product of three 2-vectors is the same as the [[vector triple product]] of the same three pseudovectors in vector algebra. However, the commutator triple product of three 1-vectors in geometric algebra is instead the [[Sign (mathematics)#Sign of a direction|negative]] of the [[vector triple product]] of the same three true vectors in vector algebra. Generalizations to higher dimensions are provided by the same commutator product of 2-vectors in higher-dimensional geometric algebras, but the 2-vectors are no longer pseudovectors. Just as the commutator product/cross product of 2-vectors in three dimensions [[#Lie algebra|corresponds to the simplest Lie algebra]], the 2-vector subalgebras of higher dimensional geometric algebra equipped with the commutator product also correspond to Lie algebras.<ref>{{cite book|title=Geometric Algebra for Physicists|year=2003|last1=Doran|first1=Chris|last2=Lasenby|first2=Anthony|pages=401–408|isbn=978-0521715959|publisher=[[Cambridge University Press]]|url=https://books.google.com/books?id=VW4yt0WHdjoC}}</ref> Also as in three dimensions, the commutator product could be further generalised to arbitrary multivectors. === Multilinear algebra === In the context of [[multilinear algebra]], the cross product can be seen as the (1,2)-tensor (a [[mixed tensor]], specifically a [[bilinear map]]) obtained from the 3-dimensional [[volume form]],<ref group="note">By a volume form one means a function that takes in ''n'' vectors and gives out a scalar, the volume of the [[Parallelepiped#Parallelotope|parallelotope]] defined by the vectors: <math> V\times \cdots \times V \to \mathbf{R}.</math> This is an ''n''-ary multilinear skew-symmetric form.
In the presence of a basis, such as on <math>\mathbf{R}^n,</math> this is given by the determinant, but in an abstract vector space, this is added structure. In terms of [[G-structure|''G''-structures]], a volume form is an [[Special linear group|<math> SL</math>]]-structure.</ref> a (0,3)-tensor, by [[Raising and lowering indices|raising an index]]. In detail, the 3-dimensional volume form defines a product <math> V \times V \times V \to \mathbf{R},</math> by taking the determinant of the matrix given by these 3 vectors. By [[Dual space|duality]], this is equivalent to a function <math> V \times V \to V^*,</math> (fixing any two inputs gives a function <math> V \to \mathbf{R}</math> by evaluating on the third input) and in the presence of an [[inner product]] (such as the dot product; more generally, a non-degenerate bilinear form), we have an isomorphism <math> V \to V^*,</math> and thus this yields a map <math> V \times V \to V,</math> which is the cross product: a (0,3)-tensor (3 vector inputs, scalar output) has been transformed into a (1,2)-tensor (2 vector inputs, 1 vector output) by "raising an index". Translating the above algebra into geometry, the function "volume of the parallelepiped defined by <math> (a,b,-)</math>" (where the first two vectors are fixed and the last is an input), which defines a function <math> V \to \mathbf{R}</math>, can be ''represented'' uniquely as the dot product with a vector: this vector is the cross product <math> a \times b.</math> From this perspective, the cross product is ''defined'' by the [[scalar triple product]], <math>\mathrm{Vol}(a,b,c) = (a\times b)\cdot c.</math> In the same way, in higher dimensions one may define generalized cross products by raising indices of the ''n''-dimensional volume form, which is a <math> (0,n)</math>-tensor. 
The most direct generalizations of the cross product are to define either: * a <math> (1,n-1)</math>-tensor, which takes as input <math> n-1</math> vectors, and gives as output 1 vector – an <math> (n-1)</math>-ary vector-valued product, or * an <math> (n-2,2)</math>-tensor, which takes as input 2 vectors and gives as output a [[skew-symmetric tensor]] of rank {{nowrap|''n'' − 2}} – a binary product with rank {{nowrap|''n'' − 2}} tensor values. One can also define <math>(k,n-k)</math>-tensors for other ''k''. These products are all multilinear and skew-symmetric, and can be defined in terms of the determinant and [[parity (physics)|parity]]. The <math> (n-1)</math>-ary product can be described as follows: given <math> n-1</math> vectors <math> v_1,\dots,v_{n-1}</math> in <math>\mathbf{R}^n,</math> define their generalized cross product <math> v_n = v_1 \times \cdots \times v_{n-1}</math> as: * perpendicular to the hyperplane defined by the <math> v_i,</math> * with magnitude equal to the volume of the parallelotope defined by the <math> v_i,</math> which can be computed as the square root of the Gram determinant of the <math> v_i,</math> * oriented so that <math> v_1,\dots,v_n</math> is positively oriented. This is the unique multilinear, alternating product which evaluates to <math> e_1 \times \cdots \times e_{n-1} = e_n</math>, <math> e_2 \times \cdots \times e_n = e_1,</math> and so forth for cyclic permutations of indices. In coordinates, one can give a formula for this <math> (n-1)</math>-ary analogue of the cross product in '''R'''<sup>''n''</sup> by: :<math>\bigwedge_{i=1}^{n-1}\mathbf{v}_i = \begin{vmatrix} v_1{}^1 &\cdots &v_1{}^{n}\\ \vdots &\ddots &\vdots\\ v_{n-1}{}^1 & \cdots &v_{n-1}{}^{n}\\ \mathbf{e}_1 &\cdots &\mathbf{e}_{n} \end{vmatrix}. </math> This formula is identical in structure to the determinant formula for the normal cross product in '''R'''<sup>3</sup> except that the row of basis vectors is the last row in the determinant rather than the first.
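The determinant formula lends itself to direct computation by cofactor expansion along the basis-vector row. The sketch below, assuming NumPy, is illustrative only (the function name <code>generalized_cross</code> is not standard):

```python
import numpy as np

def generalized_cross(*vectors):
    """(n-1)-ary cross product of n-1 vectors in R^n, by cofactor
    expansion along the row of basis vectors (placed last, as in the
    determinant formula above)."""
    vs = np.asarray(vectors, dtype=float)
    n_minus_1, n = vs.shape
    assert n_minus_1 == n - 1, "need n-1 vectors in R^n"
    # Component j is the cofactor of e_j in the last row:
    # sign (-1)^((n-1)+j) times the determinant of the minor
    # obtained by deleting column j.
    return np.array([
        (-1) ** (n - 1 + j) * np.linalg.det(np.delete(vs, j, axis=1))
        for j in range(n)
    ])

# In R^3 this reduces to the ordinary binary cross product ...
a, b = np.array([1.0, 2.0, 3.0]), np.array([4.0, 5.0, 6.0])
assert np.allclose(generalized_cross(a, b), np.cross(a, b))

# ... and in R^4: e1 x e2 x e3 = e4, perpendicular to every argument.
e = np.eye(4)
assert np.allclose(generalized_cross(e[0], e[1], e[2]), e[3])
v = np.random.default_rng(1).standard_normal((3, 4))
w = generalized_cross(*v)
assert np.allclose(v @ w, 0)
```

Perpendicularity falls out of the determinant itself: dotting the result with any argument reproduces a determinant with a repeated row, which vanishes.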
The reason for this is to ensure that the ordered vectors ('''v'''<sub>1</sub>, ..., '''v'''<sub>''n''−1</sub>, Λ{{su|b=i=1|p=''n''–1}}'''v'''<sub>''i''</sub>) have a positive [[orientation (mathematics)|orientation]] with respect to ('''e'''<sub>1</sub>, ..., '''e'''<sub>''n''</sub>). If ''n'' is odd, this modification leaves the value unchanged, so this convention agrees with the normal definition of the binary product. In the case that ''n'' is even, however, the distinction must be kept. This <math> (n-1)</math>-ary form enjoys many of the same properties as the vector cross product: it is [[alternating form|alternating]] and linear in its arguments, it is perpendicular to each argument, and its magnitude gives the hypervolume of the region bounded by the arguments. And just like the vector cross product, it can be defined in a coordinate-independent way as the Hodge dual of the wedge product of the arguments. Moreover, the product <math>[v_1,\ldots,v_n]:=\bigwedge_{i=1}^n v_i</math> satisfies the Filippov identity, :<math> [[x_1,\ldots,x_n],y_2,\ldots,y_n] = \sum_{i=1}^n [x_1,\ldots,x_{i-1},[x_i,y_2,\ldots,y_n],x_{i+1},\ldots,x_n], </math> and so it endows '''R'''<sup>''n''+1</sup> with the structure of an ''n''-Lie algebra (see Proposition 1 of <ref>{{cite journal |last1=Filippov |first1=V.T. |date=1985 |title=n-Lie algebras |url=https://link.springer.com/article/10.1007/BF00969110 |journal=Sibirsk. Mat. Zh. |volume=26 |issue=6 |pages=879–891 |doi=10.1007/BF00969110 |bibcode=1985SibMJ..26..879F |s2cid=125051596 |access-date=}}</ref>).
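For the ordinary cross product – the case <math>n = 2</math>, acting on '''R'''<sup>3</sup> – the Filippov identity specializes to <math>[[x_1,x_2],y_2] = [[x_1,y_2],x_2] + [x_1,[x_2,y_2]],</math> which is a rearrangement of the Jacobi identity. A quick numerical check (illustrative only, assuming NumPy):

```python
import numpy as np

rng = np.random.default_rng(42)
x1, x2, y2 = rng.standard_normal((3, 3))

# Filippov identity for n = 2, where [a, b] = a x b on R^3:
#   [[x1, x2], y2] = [[x1, y2], x2] + [x1, [x2, y2]]
lhs = np.cross(np.cross(x1, x2), y2)
rhs = np.cross(np.cross(x1, y2), x2) + np.cross(x1, np.cross(x2, y2))
assert np.allclose(lhs, rhs)
```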
== History == In 1773, [[Joseph-Louis Lagrange]] used the component form of both the dot and cross products in order to study the [[tetrahedron]] in three dimensions.<ref> {{cite book |author=Lagrange, Joseph-Louis |title=Oeuvres |volume=3 |chapter=Solutions analytiques de quelques problèmes sur les pyramides triangulaires |year=1773 |page=661 |url=https://gallica.bnf.fr/ark:/12148/bpt6k229222d/f662.item.zoom }}</ref><ref group=note>In modern notation, Lagrange defines <math>\boldsymbol{\xi} = \mathbf{y} \times \mathbf{z}</math>, <math>\boldsymbol{\eta} = \mathbf{z} \times \mathbf{x}</math>, and <math>\boldsymbol{\zeta} = \mathbf{x} \times \mathbf{y}</math>. Thereby, the modern <math>\mathbf{x}</math> corresponds to the three variables <math>(x, x', x'')</math> in Lagrange's notation.</ref> In 1843, [[William Rowan Hamilton]] introduced the [[quaternion]] product, and with it the terms ''vector'' and ''scalar''. Given two quaternions {{nowrap|[0, '''u''']}} and {{nowrap|[0, '''v''']}}, where '''u''' and '''v''' are vectors in '''R'''<sup>3</sup>, their quaternion product can be summarized as {{nowrap|[−'''u''' ⋅ '''v''', '''u''' × '''v''']}}. [[James Clerk Maxwell]] used Hamilton's quaternion tools to develop his famous [[Maxwell's equations|electromagnetism equations]], and for this and other reasons quaternions for a time were an essential part of physics education. In 1844, [[Hermann Grassmann]] published a geometric algebra not tied to dimension two or three.
Grassmann developed several products, including a cross product represented then by {{math|[uv]}}.{{sfnp|Cajori|1929|p=[https://archive.org/details/historyofmathema00cajo_0/pages/134 134]}} (''See also: [[exterior algebra]].'') In 1853, [[Augustin-Louis Cauchy]], a contemporary of Grassmann, published a paper on algebraic keys which were used to solve equations and had the same multiplication properties as the cross product.{{sfnp|Crowe|1994|p=[https://archive.org/details/historyofvectora0000crow/page/83 83]}}<ref>{{cite book|last=Cauchy|first=Augustin-Louis|title=Oeuvres |volume=12 |page=[https://books.google.com/books?id=0k9eAAAAcAAJ&pg=PA16 16]|year=1900}}</ref> In 1878, [[William Kingdon Clifford]], known for a [[Geometric algebra|precursor]] to the [[Clifford algebra]] named in his honor, published ''[[Elements of Dynamic]]'', in which the term ''vector product'' is attested. In the book, this product of two vectors is defined to have magnitude equal to the [[area]] of the [[parallelogram]] of which they are two sides, and direction perpendicular to their plane.<ref>{{cite web |author-link=William Kingdon Clifford |last=Clifford |first=William Kingdon |date=1878 |url=https://archive.org/details/elementsofdynami01clifiala/page/94 |title=Elements of Dynamic, Part I |page=95 |location=London |publisher=MacMillan & Co }}</ref> In lecture notes from 1881, [[Josiah Willard Gibbs|Gibbs]] represented the cross product by <math>u \times v</math> and called it the ''skew product''.<ref> {{cite book |last1=Gibbs |first1=Josiah Willard |title=Elements of vector analysis : arranged for the use of students in physics |url=https://archive.org/details/elementsvectora00gibb/page/4 |publisher=New Haven : Printed by Tuttle, Morehouse & Taylor |date=1884 }} </ref>{{sfnp|Crowe|1994|p=[https://archive.org/details/historyofvectora0000crow/page/154 154]}} In 1901, Gibbs's student [[Edwin Bidwell Wilson]] edited and extended these lecture notes into the [[textbook]] ''[[Vector
Analysis]]''. Wilson kept the term ''skew product'', but observed that the alternative terms ''cross product''<ref group=note>since {{math|A × B}} is read as "{{math|A}} cross {{math|B}}"</ref> and ''vector product'' were more frequent.{{sfnp|Wilson|1901|p=[https://archive.org/details/117714283/page/61 61]}} In 1908, [[Cesare Burali-Forti]] and [[Roberto Marcolongo]] introduced the vector product notation {{math|u ∧ v}}.{{sfnp|Cajori|1929|p=[https://archive.org/details/historyofmathema00cajo_0/pages/134 134]}} This notation is still used in [[France]] and other areas today, since the symbol <math>\times</math> is already used to denote [[multiplication]] and the [[Cartesian product]].{{fact|date=July 2024}} == See also == * [[Cartesian product]] – a product of two sets * [[Geometric algebra#Rotating systems|Geometric algebra: Rotating systems]] * [[Multiple cross products]] – products involving more than three vectors * [[Multiplication of vectors]] * [[Quadruple product]] * [[×]] (the symbol) == Notes == {{Reflist|group=note}} == References == {{reflist|30em}} == Bibliography == * {{Cite book | last=Cajori | first=Florian | author-link=Florian Cajori | title=A History Of Mathematical Notations Volume II | year=1929 | publisher=[[Open Court Publishing Company|Open Court Publishing]] | url=https://archive.org/details/historyofmathema00cajo_0 | isbn=978-0-486-67766-8 | page=[https://archive.org/details/historyofmathema00cajo_0/pages/134 134] }} * {{Cite book |last=Crowe |first=Michael J. |title=A History of Vector Analysis |url=https://archive.org/details/historyvectorana00crow |url-access=limited |year=1994 |publisher=Dover |isbn=0-486-67910-1 }} * [[E. A. Milne]] (1948) [[Vectorial Mechanics]], Chapter 2: Vector Product, pp. 11–31, London: [[Methuen Publishing]]. * {{Cite book | last=Wilson | first=Edwin Bidwell | title=Vector Analysis: A text-book for the use of students of mathematics and physics, founded upon the lectures of J.
Willard Gibbs | year=1901 | publisher=[[Yale University Press]] | isbn=<!--none--> | url=https://archive.org/details/117714283 }} * {{Cite book |author = T. Levi-Civita |author2 = U. Amaldi |title = Lezioni di meccanica razionale |publisher = Zanichelli editore |location = Bologna |year = 1949 |language = it }} == External links == * {{Springer |title = Cross product |id = p/c027120}} * [http://behindtheguesses.blogspot.com/2009/04/dot-and-cross-products.html A quick geometrical derivation and interpretation of cross products] * [https://web.archive.org/web/20060424151900/http://physics.syr.edu/courses/java-suite/crosspro.html An interactive tutorial] created at [[Syracuse University]] – (requires [[Java (programming language)|java]]) * [http://www.cs.berkeley.edu/~wkahan/MathH110/Cross.pdf W. Kahan (2007). Cross-Products and Rotations in Euclidean 2- and 3-Space. University of California, Berkeley (PDF).] * [https://www.mathcentre.ac.uk/resources/uploaded/mc-ty-vectorprod-2009-1.pdf The vector product], Mathcentre (UK), 2009 {{linear algebra}} {{DEFAULTSORT:Cross Product}} [[Category:Bilinear maps]] [[Category:Operations on vectors]] [[Category:Analytic geometry]]