{{Short description|Algebraic operation on coordinate vectors}} {{redirect|Scalar product|the abstract scalar product|Inner product space|the product of a vector and a scalar|Scalar multiplication}} In [[mathematics]], the '''dot product''' or '''scalar product'''<ref group="note">The term ''scalar product'' means literally "product with a [[Scalar (mathematics)|scalar]] as a result". It is also used for other [[symmetric bilinear form]]s, for example in a [[pseudo-Euclidean space]]. Not to be confused with [[scalar multiplication]].</ref> is an [[algebraic operation]] that takes two equal-length sequences of numbers (usually [[coordinate vector]]s), and returns a single number. In [[Euclidean geometry]], the dot product of the [[Cartesian coordinates]] of two [[Euclidean vector|vector]]s is widely used. It is often called the '''inner product''' (or rarely the '''projection product''') of [[Euclidean space]], even though it is not the only inner product that can be defined on Euclidean space (see ''[[Inner product space]]'' for more). It should not be confused with the [[cross product]]. Algebraically, the dot product is the sum of the [[Product (mathematics)|products]] of the corresponding entries of the two sequences of numbers. Geometrically, it is the product of the [[Euclidean vector#Length|Euclidean magnitude]]s of the two vectors and the [[cosine]] of the angle between them. These definitions are equivalent when using Cartesian coordinates. In modern [[geometry]], [[Euclidean space]]s are often defined by using [[vector space]]s. In this case, the dot product is used for defining lengths (the length of a vector is the [[square root]] of the dot product of the vector by itself) and angles (the cosine of the angle between two vectors is the [[quotient]] of their dot product by the product of their lengths). 
The name "dot product" is derived from the [[dot operator]] " '''⋅''' " that is often used to designate this operation;<ref name=":1">{{cite web|title=Dot Product|url=https://www.mathsisfun.com/algebra/vectors-dot-product.html|access-date=2020-09-06|website=www.mathsisfun.com}}</ref> the alternative name "scalar product" emphasizes that the result is a [[scalar (mathematics)|scalar]], rather than a vector (as with the [[vector product]] in three-dimensional space). == Definition == The dot product may be defined algebraically or geometrically. The geometric definition is based on the notions of angle and distance (magnitude) of vectors. The equivalence of these two definitions relies on having a [[Cartesian coordinate system]] for Euclidean space. In modern presentations of [[Euclidean geometry]], the points of space are defined in terms of their [[Cartesian coordinates]], and [[Euclidean space]] itself is commonly identified with the [[real coordinate space]] <math>\mathbf{R}^n</math>. In such a presentation, the notions of length and angle are defined by means of the dot product. The length of a vector is defined as the [[square root]] of the dot product of the vector by itself, and the [[cosine]] of the (non oriented) angle between two vectors of length one is defined as their dot product. So the equivalence of the two definitions of the dot product is a part of the equivalence of the classical and the modern formulations of Euclidean geometry. === Coordinate definition === The dot product of two vectors <math>\mathbf{a} = [a_1, a_2, \cdots, a_n]</math> and {{nowrap|1=<math>\mathbf{b} = [b_1, b_2, \cdots, b_n]</math>,}} specified with respect to an [[orthonormal basis]], is defined as:<ref name="Lipschutz2009">{{cite book |author1=S. Lipschutz |author2=M. 
Lipson |title= Linear Algebra (Schaum's Outlines) | edition= 4th | year= 2009|publisher= McGraw Hill|isbn=978-0-07-154352-1}}</ref> <math display="block">\mathbf a \cdot \mathbf b = \sum_{i=1}^n a_i b_i = a_1 b_1 + a_2 b_2 + \cdots + a_n b_n</math> where <math>\Sigma</math> denotes [[summation]] and <math>n</math> is the [[dimension]] of the [[vector space]]. For instance, in [[Three-dimensional space (mathematics)|three-dimensional space]], the dot product of vectors {{nowrap|<math> [1,3,-5] </math>}} and {{nowrap|<math> [4,-2,-1] </math>}} is: <math display="block"> \begin{align} \ [1, 3, -5] \cdot [4, -2, -1] &= (1 \times 4) + (3\times-2) + (-5\times-1) \\ &= 4 - 6 + 5 \\ &= 3 \end{align} </math> Likewise, the dot product of the vector {{nowrap|<math>[1,3,-5]</math>}} with itself is: <math display="block"> \begin{align} \ [1, 3, -5] \cdot [1, 3, -5] &= (1 \times 1) + (3\times 3) + (-5\times -5) \\ &= 1 + 9 + 25 \\ &= 35 \end{align} </math> If vectors are identified with [[column matrix|column vectors]], the dot product can also be written as a [[matrix multiplication|matrix product]] <math display="block">\mathbf a \cdot \mathbf b = \mathbf a^{\mathsf T} \mathbf b,</math> where <math>\mathbf a{^\mathsf T}</math> denotes the [[transpose]] of <math>\mathbf a</math>. Expressing the above example in this way, a 1 × 3 matrix ([[row vector]]) is multiplied by a 3 × 1 matrix ([[column vector]]) to get a 1 × 1 matrix that is identified with its unique entry: <math display="block"> \begin{bmatrix} 1 & 3 & -5 \end{bmatrix} \begin{bmatrix} 4 \\ -2 \\ -1 \end{bmatrix} = 3 \, . 
</math> === Geometric definition === [[File:Inner-product-angle.svg|thumb|Illustration showing how to find the angle between vectors using the dot product]] [[File:Tetrahedral angle calculation.svg|thumb|216px|<!-- specify width as minus sign vanishes at most sizes --> Calculating bond angles of a symmetrical [[tetrahedral molecular geometry]] using a dot product]] In [[Euclidean space]], a [[Euclidean vector]] is a geometric object that possesses both a magnitude and a direction. A vector can be pictured as an arrow. Its magnitude is its length, and its direction is the direction to which the arrow points. The [[Magnitude (mathematics)|magnitude]] of a vector <math>\mathbf{a}</math> is denoted by <math> \left\| \mathbf{a} \right\| </math>. The dot product of two Euclidean vectors <math>\mathbf{a}</math> and <math>\mathbf{b}</math> is defined by<ref name="Spiegel2009">{{cite book |author1=M.R. Spiegel |author2=S. Lipschutz |author3=D. Spellman |title= Vector Analysis (Schaum's Outlines)|edition= 2nd |year= 2009|publisher= McGraw Hill|isbn=978-0-07-161545-7}}</ref><ref>{{cite book|author1=A I Borisenko|author2=I E Taparov|title=Vector and tensor analysis with applications | publisher=Dover | translator=Richard Silverman | year=1968 | page=14}}</ref><ref name=":1" /> <math display="block">\mathbf{a}\cdot\mathbf{b}= \left\|\mathbf{a}\right\| \left\|\mathbf{b}\right\|\cos\theta ,</math> where <math>\theta</math> is the [[angle]] between <math>\mathbf{a}</math> and <math>\mathbf{b}</math>. 
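As a concrete check of both definitions, here is a minimal plain-Python sketch (the helper names <code>dot</code> and <code>norm</code> are ours, not from any particular library) that reproduces the worked example [1, 3, −5] ⋅ [4, −2, −1] = 3 and recovers the angle from the geometric form:

```python
import math

def dot(a, b):
    """Algebraic dot product: sum of products of corresponding entries."""
    return sum(x * y for x, y in zip(a, b))

def norm(a):
    """Euclidean length, ||a|| = sqrt(a . a)."""
    return math.sqrt(dot(a, a))

a = [1, 3, -5]
b = [4, -2, -1]

print(dot(a, b))   # 3, matching the worked example above
print(dot(a, a))   # 35, so ||a|| = sqrt(35)

# Geometric form: a . b = ||a|| ||b|| cos(theta), so the angle is
cos_theta = dot(a, b) / (norm(a) * norm(b))
theta = math.acos(cos_theta)   # angle between a and b, in radians
```

By construction, multiplying <code>norm(a) * norm(b) * cos_theta</code> back out returns the same value as the coordinate sum, which is exactly the equivalence of the two definitions discussed below.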
In particular, if the vectors <math>\mathbf{a}</math> and <math>\mathbf{b}</math> are [[orthogonal]] (i.e., their angle is <math>\frac{\pi}{2}</math> or <math>90^\circ</math>), then <math>\cos \frac \pi 2 = 0</math>, which implies that <math display="block">\mathbf a \cdot \mathbf b = 0 .</math> At the other extreme, if they are [[codirectional]], then the angle between them is zero with <math>\cos 0 = 1</math> and <math display="block">\mathbf a \cdot \mathbf b = \left\| \mathbf a \right\| \, \left\| \mathbf b \right\| </math> This implies that the dot product of a vector <math>\mathbf{a}</math> with itself is <math display="block">\mathbf a \cdot \mathbf a = \left\| \mathbf a \right\| ^2 ,</math> which gives <math display="block"> \left\| \mathbf a \right\| = \sqrt{\mathbf a \cdot \mathbf a} ,</math> the formula for the [[Euclidean length]] of the vector. === Scalar projection and first properties === [[File:Dot Product.svg|thumb|right|Scalar projection]] The [[scalar projection]] (or scalar component) of a Euclidean vector <math>\mathbf{a}</math> in the direction of a Euclidean vector <math>\mathbf{b}</math> is given by <math display="block"> a_b = \left\| \mathbf a \right\| \cos \theta ,</math> where <math>\theta</math> is the angle between <math>\mathbf{a}</math> and <math>\mathbf{b}</math>. In terms of the geometric definition of the dot product, this can be rewritten as <math display="block">a_b = \mathbf a \cdot \widehat{\mathbf b} ,</math> where <math> \widehat{\mathbf b} = \mathbf b / \left\| \mathbf b \right\| </math> is the [[unit vector]] in the direction of <math>\mathbf{b}</math>. [[File:Dot product distributive law.svg|thumb|right|Distributive law for the dot product]] The dot product is thus characterized geometrically by<ref>{{cite book | last1=Arfken | first1=G. B. | last2=Weber | first2=H. J. 
| title=Mathematical Methods for Physicists | publisher=[[Academic Press]] | location=Boston, MA | edition=5th | isbn=978-0-12-059825-0 | year=2000 | pages=14–15 }}</ref> <math display="block"> \mathbf a \cdot \mathbf b = a_b \left\| \mathbf{b} \right\| = b_a \left\| \mathbf{a} \right\| .</math> The dot product, defined in this manner, is [[Homogeneous function|homogeneous]] under scaling in each variable, meaning that for any scalar <math>\alpha</math>, <math display="block"> ( \alpha \mathbf{a} ) \cdot \mathbf b = \alpha ( \mathbf a \cdot \mathbf b ) = \mathbf a \cdot ( \alpha \mathbf b ) .</math> It also satisfies the [[distributive law]], meaning that <math display="block"> \mathbf a \cdot ( \mathbf b + \mathbf c ) = \mathbf a \cdot \mathbf b + \mathbf a \cdot \mathbf c .</math> These properties may be summarized by saying that the dot product is a [[bilinear form]]. Moreover, this bilinear form is [[positive definite bilinear form|positive definite]], which means that <math> \mathbf a \cdot \mathbf a </math> is never negative, and is zero if and only if <math> \mathbf a = \mathbf 0 </math>, the zero vector. === Equivalence of the definitions === If <math>\mathbf{e}_1,\cdots,\mathbf{e}_n</math> are the [[standard basis|standard basis vectors]] in <math>\mathbf{R}^n</math>, then we may write <math display="block">\begin{align} \mathbf a &= [a_1 , \dots , a_n] = \sum_i a_i \mathbf e_i \\ \mathbf b &= [b_1 , \dots , b_n] = \sum_i b_i \mathbf e_i. \end{align} </math> The vectors <math>\mathbf{e}_i</math> are an [[orthonormal basis]], which means that they have unit length and are at right angles to each other. 
Since these vectors have unit length, <math display="block"> \mathbf e_i \cdot \mathbf e_i = 1 </math> and since they form right angles with each other, if <math>i\neq j</math>, <math display="block"> \mathbf e_i \cdot \mathbf e_j = 0 .</math> Thus in general, we can say that: <math display="block"> \mathbf e_i \cdot \mathbf e_j = \delta_ {ij} ,</math> where <math>\delta_{ij}</math> is the [[Kronecker delta]]. [[File:Skalarprodukt geometrisch.svg|thumb|upright=1.0|Vector components in an orthonormal basis]] Also, by the geometric definition, for any vector <math>\mathbf{e}_i</math> and a vector <math>\mathbf{a}</math>, we note that <math display="block"> \mathbf a \cdot \mathbf e_i = \left\| \mathbf a \right\| \left\| \mathbf e_i \right\| \cos \theta_i = \left\| \mathbf a \right\| \cos \theta_i = a_i ,</math> where <math>a_i</math> is the component of vector <math>\mathbf{a}</math> in the direction of <math>\mathbf{e}_i</math>. The last step in the equality can be seen from the figure. Now applying the distributivity of the geometric version of the dot product gives <math display="block"> \mathbf a \cdot \mathbf b = \mathbf a \cdot \sum_i b_i \mathbf e_i = \sum_i b_i ( \mathbf a \cdot \mathbf e_i ) = \sum_i b_i a_i= \sum_i a_i b_i ,</math> which is precisely the algebraic definition of the dot product. So the geometric dot product equals the algebraic dot product. 
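The orthonormality relations above can be checked numerically. The following sketch (helper name <code>dot</code> is ours) verifies that the standard basis vectors of R³ satisfy e<sub>i</sub> ⋅ e<sub>j</sub> = δ<sub>ij</sub>, and that dotting a vector against a basis vector recovers its component, the key step in the equivalence argument:

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

# Standard basis of R^3
e = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]

# e_i . e_j equals the Kronecker delta
for i in range(3):
    for j in range(3):
        assert dot(e[i], e[j]) == (1 if i == j else 0)

# a . e_i recovers the i-th component of a
a = [1, 3, -5]
assert [dot(a, e[i]) for i in range(3)] == a
```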
== Properties== The dot product fulfills the following properties if <math>\mathbf{a}</math>, <math>\mathbf{b}</math>, <math>\mathbf{c}</math> and <math>\mathbf{d}</math> are real [[vector (geometry)|vectors]] and <math>\alpha</math>, <math>\beta</math>, <math>\gamma</math> and <math>\delta</math> are [[scalar (mathematics)|scalars]].<ref name="Lipschutz2009" /><ref name="Spiegel2009" /> ; [[Commutative]] : <math display="block"> \mathbf{a} \cdot \mathbf{b} = \mathbf{b} \cdot \mathbf{a} ,</math> which follows from the definition (<math>\theta</math> is the angle between <math>\mathbf{a}</math> and <math>\mathbf{b}</math>):<ref>{{cite web|last=Nykamp|first=Duane|title=The dot product|url=https://mathinsight.org/dot_product|access-date=September 6, 2020|website=Math Insight}}</ref> <math display="block"> \mathbf{a} \cdot \mathbf{b} = \left\| \mathbf{a} \right\| \left\| \mathbf{b} \right\| \cos \theta = \left\| \mathbf{b} \right\| \left\| \mathbf{a} \right\| \cos \theta = \mathbf{b} \cdot \mathbf{a} .</math> The commutative property can also be easily proven with the algebraic definition, and in [[Inner product space|more general spaces]] (where the notion of angle might not be geometrically intuitive but an analogous product can be defined) the angle between two vectors can be defined as <math display="block"> \theta = \operatorname{arccos}\left( \frac{\mathbf{a}\cdot\mathbf{b}}{\left\|\mathbf{a}\right\| \left\|\mathbf{b}\right\|} \right). </math> ; [[bilinear form|Bilinear]] (additive, distributive and scalar-multiplicative in both arguments) : <math display="block"> \begin{align} (\alpha \mathbf{a} + \beta\mathbf{b})&\cdot (\gamma\mathbf{c}+\delta\mathbf{d}) \\ &=\alpha\gamma(\mathbf{a}\cdot\mathbf{c}) + \alpha\delta(\mathbf{a}\cdot\mathbf{d}) +\beta\gamma(\mathbf{b}\cdot\mathbf{c}) +\beta\delta(\mathbf{b}\cdot\mathbf{d}) . 
\end{align}</math> ; Not [[associative]] : Because the dot product is not defined between a scalar <math>\mathbf{a}\cdot\mathbf{b}</math> and a vector <math>\mathbf{c},</math> associativity is meaningless.<ref>Weisstein, Eric W. "Dot Product". From MathWorld--A Wolfram Web Resource. http://mathworld.wolfram.com/DotProduct.html</ref> However, bilinearity implies <math display="block">c (\mathbf{a} \cdot \mathbf{b}) = (c\mathbf{a})\cdot\mathbf{b} = \mathbf{a}\cdot(c\mathbf{b}).</math> This property is sometimes called the "associative law for scalar and dot product",<ref name="BanchoffWermer1983">{{cite book|author1=T. Banchoff|author2=J. Wermer | title=Linear Algebra Through Geometry|year=1983|publisher=Springer Science & Business Media| isbn=978-1-4684-0161-5 | page=12| url=https://archive.org/details/linearalgebrathr00banc_0/page/12/mode/2up}}</ref> and one may say that "the dot product is associative with respect to scalar multiplication".<ref name="BedfordFowler2008">{{cite book | author1=A. Bedford|author2=Wallace L. Fowler|title=Engineering Mechanics: Statics|year=2008|publisher=Prentice Hall | isbn=978-0-13-612915-8 | edition=5th | page=60}}</ref> ; [[Orthogonal]] : Two non-zero vectors <math>\mathbf{a}</math> and <math>\mathbf{b}</math> are ''orthogonal'' if and only if <math>\mathbf{a} \cdot \mathbf{b} = 0</math>. 
; No [[cancellation law|cancellation]] : Unlike multiplication of ordinary numbers, where if <math>ab=ac</math>, then <math>b</math> always equals <math>c</math> unless <math>a</math> is zero, the dot product does not obey the [[cancellation law]]: {{pb}} If <math>\mathbf{a}\cdot\mathbf{b}=\mathbf{a}\cdot\mathbf{c}</math> and <math>\mathbf{a}\neq\mathbf{0}</math>, then we can write: <math>\mathbf{a}\cdot(\mathbf{b}-\mathbf{c}) = 0</math> by the [[distributive law]]; the result above says this just means that <math>\mathbf{a}</math> is perpendicular to <math>(\mathbf{b}-\mathbf{c})</math>, which still allows <math>(\mathbf{b}-\mathbf{c})\neq\mathbf{0}</math>, and therefore allows <math>\mathbf{b}\neq\mathbf{c}</math>. ; [[Product rule]] : If <math>\mathbf{a}</math> and <math>\mathbf{b}</math> are vector-valued [[differentiable function]]s, then the derivative ([[Notation for differentiation#Lagrange's notation|denoted by a prime]] <math>{}'</math>) of <math>\mathbf{a}\cdot\mathbf{b}</math> is given by the rule <math display="block">(\mathbf{a}\cdot\mathbf{b})' = \mathbf{a}'\cdot\mathbf{b} + \mathbf{a}\cdot\mathbf{b}'.</math> === Application to the law of cosines === [[File:Dot product cosine rule.svg|100px|thumb|Triangle with vector edges '''a''' and '''b''', separated by angle ''θ'']] {{main|Law of cosines}} Given two vectors <math>{\color{red}\mathbf{a}}</math> and <math>{\color{blue}\mathbf{b}}</math> separated by angle <math>\theta</math> (see the upper image), they form a triangle with a third side <math>{\color{orange}\mathbf{c}} = {\color{red}\mathbf{a}} - {\color{blue}\mathbf{b}}</math>. Let <math>\color{red}a</math>, <math>\color{blue}b</math> and <math>\color{orange}c</math> denote the lengths of <math>{\color{red}\mathbf{a}}</math>, <math>{\color{blue}\mathbf{b}}</math>, and <math>{\color{orange}\mathbf{c}}</math>, respectively. 
The dot product of this with itself is: <math display="block"> \begin{align} \mathbf{\color{orange}c} \cdot \mathbf{\color{orange}c} & = ( \mathbf{\color{red}a} - \mathbf{\color{blue}b}) \cdot ( \mathbf{\color{red}a} - \mathbf{\color{blue}b} ) \\ & = \mathbf{\color{red}a} \cdot \mathbf{\color{red}a} - \mathbf{\color{red}a} \cdot \mathbf{\color{blue}b} - \mathbf{\color{blue}b} \cdot \mathbf{\color{red}a} + \mathbf{\color{blue}b} \cdot \mathbf{\color{blue}b} \\ & = {\color{red}a}^2 - \mathbf{\color{red}a} \cdot \mathbf{\color{blue}b} - \mathbf{\color{red}a} \cdot \mathbf{\color{blue}b} + {\color{blue}b}^2 \\ & = {\color{red}a}^2 - 2 \mathbf{\color{red}a} \cdot \mathbf{\color{blue}b} + {\color{blue}b}^2 \\ {\color{orange}c}^2 & = {\color{red}a}^2 + {\color{blue}b}^2 - 2 {\color{red}a} {\color{blue}b} \cos \mathbf{\color{purple}\theta} \\ \end{align} </math> which is the [[law of cosines]]. {{clear}} == Triple product == {{Main|Triple product}} There are two [[ternary operation]]s involving dot product and [[cross product]]. The '''scalar triple product''' of three vectors is defined as <math display="block"> \mathbf{a} \cdot ( \mathbf{b} \times \mathbf{c} ) = \mathbf{b} \cdot ( \mathbf{c} \times \mathbf{a} )=\mathbf{c} \cdot ( \mathbf{a} \times \mathbf{b} ).</math> Its value is the [[determinant]] of the matrix whose columns are the [[Cartesian coordinates]] of the three vectors. It is the signed [[volume]] of the [[parallelepiped]] defined by the three vectors, and is isomorphic to the three-dimensional special case of the [[exterior product]] of three vectors. 
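A small sketch can confirm the two stated properties of the scalar triple product, its invariance under cyclic permutation and its equality with the 3 × 3 determinant (the helpers <code>dot</code>, <code>cross</code>, and <code>det3</code> are illustrative names, not from any library; <code>det3</code> expands along the first row, and the determinant is unchanged by transposing, so rows or columns give the same value):

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def cross(a, b):
    return [a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0]]

def det3(m):
    # Cofactor expansion along the first row of a 3x3 matrix
    return (m[0][0]*(m[1][1]*m[2][2] - m[1][2]*m[2][1])
          - m[0][1]*(m[1][0]*m[2][2] - m[1][2]*m[2][0])
          + m[0][2]*(m[1][0]*m[2][1] - m[1][1]*m[2][0]))

a, b, c = [1, 2, 3], [4, 5, 6], [7, 8, 10]

s = dot(a, cross(b, c))
# Invariant under cyclic permutation of a, b, c
assert s == dot(b, cross(c, a)) == dot(c, cross(a, b))
# Equals the determinant of the matrix with rows (equivalently columns) a, b, c
assert s == det3([a, b, c])
```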
The '''vector triple product''' is defined by<ref name="Lipschutz2009" /><ref name="Spiegel2009" /> <math display="block"> \mathbf{a} \times ( \mathbf{b} \times \mathbf{c} ) = ( \mathbf{a} \cdot \mathbf{c} )\, \mathbf{b} - ( \mathbf{a} \cdot \mathbf{b} )\, \mathbf{c} .</math> This identity, also known as ''Lagrange's formula'', [[mnemonic|may be remembered]] as "ACB minus ABC", keeping in mind which vectors are dotted together. This formula has applications in simplifying vector calculations in [[physics]]. == Physics == In [[physics]], the dot product takes two vectors and returns a [[scalar (mathematics)|scalar]] quantity. It is also known as the "scalar product". The dot product of two vectors can be defined as the product of the magnitudes of the two vectors and the cosine of the angle between the two vectors. Thus, <math display=block>\mathbf{a} \cdot \mathbf{b} = |\mathbf{a}| \, |\mathbf{b}| \cos \theta</math> Alternatively, it is defined as the product of the projection of the first vector onto the second vector and the magnitude of the second vector. For example:<ref name="Riley2010">{{cite book |author1=K.F. Riley |author2=M.P. Hobson | author3=S.J. Bence |title= Mathematical methods for physics and engineering|url=https://archive.org/details/mathematicalmeth00rile |url-access=registration |edition= 3rd|year= 2010|publisher= Cambridge University Press | isbn=978-0-521-86153-3}}</ref><ref>{{cite book |author1=M. Mansfield |author2=C. O'Sullivan |title= Understanding Physics | edition= 4th |year= 2011|publisher= John Wiley & Sons|isbn=978-0-47-0746370}}</ref> * [[Mechanical work]] is the dot product of [[force]] and [[Displacement (vector)|displacement]] vectors, * [[Power (physics)|Power]] is the dot product of [[force]] and [[velocity]]. == Generalizations == === Complex vectors === For vectors with [[complex number|complex]] entries, using the given definition of the dot product would lead to quite different properties. 
For instance, the dot product of a vector with itself could be zero without the vector being the zero vector (e.g. this would happen with the vector {{nowrap|<math>\mathbf{a} = [1\ i]</math>).}} This in turn would have consequences for notions like length and angle. Properties such as the positive-definite norm can be salvaged at the cost of giving up the symmetric and bilinear properties of the dot product, through the alternative definition<ref>{{cite book | page = 287 | first= Sterling K. | last = Berberian | title = Linear Algebra | year = 2014 | orig-year = 1992 | publisher = Dover | isbn = 978-0-486-78055-9}}</ref><ref name="Lipschutz2009" /> <math display="block"> \mathbf{a} \cdot \mathbf{b} = \sum_i {{a_i}\,\overline{b_i}} ,</math> where <math>\overline{b_i}</math> is the [[complex conjugate]] of <math>b_i</math>. When vectors are represented by [[column vector]]s, the dot product can be expressed as a [[matrix product]] involving a [[conjugate transpose]], denoted with the superscript H: <math display="block"> \mathbf{a} \cdot \mathbf{b} = \mathbf{b}^\mathsf{H} \mathbf{a} .</math> In the case of vectors with real components, this definition is the same as in the real case. The dot product of any vector with itself is a non-negative real number, and it is nonzero except for the zero vector. However, the complex dot product is [[sesquilinear]] rather than bilinear, as it is [[conjugate linear]] and not linear in <math>\mathbf{a}</math>. The dot product is not symmetric, since <math display="block"> \mathbf{a} \cdot \mathbf{b} = \overline{\mathbf{b} \cdot \mathbf{a}} .</math> The angle between two complex vectors is then given by <math display="block"> \cos \theta = \frac{\operatorname{Re} ( \mathbf{a} \cdot \mathbf{b} )}{ \left\| \mathbf{a} \right\| \left\| \mathbf{b} \right\| } .</math> The complex dot product leads to the notions of [[Hermitian form]]s and general [[inner product space]]s, which are widely used in mathematics and [[physics]]. 
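These properties of the complex dot product can be seen directly with Python's built-in complex numbers (the helper name <code>cdot</code> is ours): the naive bilinear sum vanishes on the nonzero vector [1, i], while the sesquilinear definition gives a positive norm and conjugate symmetry.

```python
def cdot(a, b):
    """Complex dot product: sum of a_i * conj(b_i).
    Linear in a, conjugate-linear in b."""
    return sum(x * y.conjugate() for x, y in zip(a, b))

a = [1, 1j]

# Naive bilinear definition: a . a = 1 + i^2 = 0 although a != 0
assert sum(x * x for x in a) == 0

# Sesquilinear definition restores positivity: ||a||^2 = |1|^2 + |i|^2 = 2
assert cdot(a, a) == 2

# Conjugate symmetry instead of symmetry: a . b = conj(b . a)
b = [2, 3 - 1j]
assert cdot(a, b) == cdot(b, a).conjugate()
```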
{{anchor|Norm squared}}The self dot product of a complex vector <math>\mathbf{a} \cdot \mathbf{a} = \mathbf{a}^\mathsf{H} \mathbf{a} </math>, involving the conjugate transpose of a row vector, is also known as the '''norm squared''', <math display="inline">\mathbf{a} \cdot \mathbf{a} = \|\mathbf{a}\|^2</math>, after the [[Euclidean norm]]; it is a vector generalization of the ''[[absolute square]]'' of a complex scalar (see also: ''[[Squared Euclidean distance]]''). === Inner product === {{main|Inner product space}} The inner product generalizes the dot product to [[vector space|abstract vector spaces]] over a [[field (mathematics)|field]] of [[scalar (mathematics)|scalars]], being either the field of [[real number]]s <math> \R </math> or the field of [[complex number]]s <math> \Complex </math>. It is usually denoted using [[angular brackets]] by <math> \left\langle \mathbf{a} \, , \mathbf{b} \right\rangle </math>. The inner product of two vectors over the field of complex numbers is, in general, a complex number, and is [[Sesquilinear form|sesquilinear]] instead of bilinear. An inner product space is a [[normed vector space]], and the inner product of a vector with itself is real and positive-definite. === Functions === The dot product is defined for vectors that have a finite number of [[coordinate vector|entries]]. Thus these vectors can be regarded as [[discrete function]]s: a length-<math>n</math> vector <math>u</math> is, then, a function with [[domain of a function|domain]] <math>\{k\in\mathbb{N}:1\leq k \leq n\}</math>, and <math>u_i</math> is a notation for the image of <math>i</math> by the function/vector <math>u</math>. 
This notion can be generalized to [[Square-integrable function|square-integrable functions]]: just as the inner product on vectors uses a sum over corresponding components, the inner product on functions is defined as an integral over some [[measure space]] <math>(X, \mathcal{A}, \mu)</math>:<ref name="Lipschutz2009" /> <math display="block"> \left\langle u , v \right\rangle = \int_X u v \, \text{d} \mu.</math> For example, if <math> f</math> and <math>g</math> are [[Continuous function|continuous functions]] over a [[Compact space|compact subset]] <math> K</math> of <math>\mathbb{R}^n</math> with the standard [[Lebesgue measure]], the above definition becomes: <math display="block"> \left\langle f , g \right\rangle = \int_K f(\mathbf{x}) g(\mathbf{x}) \, \operatorname{d}^n \mathbf{x} .</math> Generalized further to [[complex function|complex continuous functions]] <math>\psi</math> and <math>\chi</math>, by analogy with the complex inner product above, gives: <math display="block"> \left\langle \psi, \chi \right\rangle = \int_K \psi(z) \overline{\chi(z)} \, \text{d} z.</math> === Weight function === Inner products can have a [[weight function]] (i.e., a function which weights each term of the inner product with a value). Explicitly, the inner product of functions <math>u(x)</math> and <math>v(x)</math> with respect to the weight function <math>r(x)>0</math> is <math display="block"> \left\langle u , v \right\rangle_r = \int_a^b r(x) u(x) v(x) \, d x.</math> === Dyadics and matrices === A double-dot product for [[Matrix (mathematics)|matrices]] is the [[Frobenius inner product]], which is analogous to the dot product on vectors. 
It is defined as the sum of the products of the corresponding components of two matrices <math>\mathbf{A}</math> and <math>\mathbf{B}</math> of the same size: <math display="block"> \mathbf{A} : \mathbf{B} = \sum_i \sum_j A_{ij} \overline{B_{ij}} = \operatorname{tr} ( \mathbf{B}^\mathsf{H} \mathbf{A} ) = \operatorname{tr} ( \mathbf{A} \mathbf{B}^\mathsf{H} ) .</math> And for real matrices, <math display="block"> \mathbf{A} : \mathbf{B} = \sum_i \sum_j A_{ij} B_{ij} = \operatorname{tr} ( \mathbf{B}^\mathsf{T} \mathbf{A} ) = \operatorname{tr} ( \mathbf{A} \mathbf{B}^\mathsf{T} ) = \operatorname{tr} ( \mathbf{A}^\mathsf{T} \mathbf{B} ) = \operatorname{tr} ( \mathbf{B} \mathbf{A}^\mathsf{T} ) .</math> Writing a matrix as a [[dyadics|dyadic]], we can define a different double-dot product (see ''{{slink|Dyadics|Product of dyadic and dyadic}}'') however it is not an inner product. === Tensors === The inner product between a [[tensor]] of order <math>n</math> and a tensor of order <math>m</math> is a tensor of order <math>n+m-2</math>, see ''[[Tensor contraction]]'' for details. == Computation == === Algorithms === The straightforward algorithm for calculating a floating-point dot product of vectors can suffer from [[catastrophic cancellation]]. To avoid this, approaches such as the [[Kahan summation algorithm]] are used. 
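The cancellation problem and its compensated-summation remedy can be sketched as follows. This uses the Neumaier (Kahan–Babuška) variant, since plain Kahan summation can still lose the residual when a single term exceeds the running sum; note that a full compensated dot product would also compensate the rounding of each product x*y itself (e.g. via fused multiply-add), which this sketch omits.

```python
def naive_dot(a, b):
    s = 0.0
    for x, y in zip(a, b):
        s += x * y
    return s

def neumaier_dot(a, b):
    """Dot product with Neumaier-compensated summation of the products."""
    s = 0.0
    c = 0.0                       # accumulated low-order error
    for x, y in zip(a, b):
        p = x * y
        t = s + p
        if abs(s) >= abs(p):
            c += (s - t) + p      # low-order bits of p were lost
        else:
            c += (p - t) + s      # low-order bits of s were lost
        s = t
    return s + c

# Ill-conditioned example: two huge terms cancel, leaving a small residual.
a = [1e16, 1.0, -1e16]
b = [1.0, 1.0, 1.0]
print(naive_dot(a, b))     # 0.0 -- the middle term is swallowed
print(neumaier_dot(a, b))  # 1.0 -- the exact answer
```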
=== Libraries === A dot product function is included in: * [[BLAS]] level 1 real {{code|SDOT}}, {{code|DDOT}}; complex {{code|CDOTU}}, {{code|1=ZDOTU = X^T * Y}}, {{code|CDOTC}}, {{code|1=ZDOTC = X^H * Y}} * [[Fortran]] as {{code|dot_product(A,B)}} or {{code|sum(conjg(A) * B)}} * [[Julia (programming language)|Julia]] as {{code|A' * B}} or standard library LinearAlgebra as {{code|dot(A, B)}} * [[R (programming language)]] as {{code|sum(A * B)}} for vectors or, more generally for matrices, as {{code|A %*% B}} * [[Matlab]] as {{code|A' * B}} or {{code|conj(transpose(A)) * B}} or {{code|sum(conj(A) .* B)}} or {{code|dot(A, B)}} * [[Python (programming language)|Python]] (package [[NumPy]]) as {{code|np.matmul(A, B)}} or {{code|np.dot(A, B)}} or {{code|np.inner(A, B)}} * [[GNU Octave]] as {{code|sum(conj(X) .* Y, dim)}}, and similar code as Matlab * Intel oneAPI Math Kernel Library real p?dot {{code|1=dot = sub(x)'*sub(y)}}; complex p?dotc {{code|1=dotc = conjg(sub(x)')*sub(y)}} == See also == {{div col}} * [[Cauchy–Schwarz inequality]] * [[Cross product]] * [[Dot product representation of a graph]] * [[Euclidean norm]], the square-root of the self dot product * [[Matrix multiplication]] * [[Metric tensor]] * [[Multiplication of vectors]] * [[Outer product]] {{div col end}} == Notes == {{reflist|group=note}} == References == {{reflist}} == External links == {{Commons category|Scalar product}} * {{springer|title=Inner product|id=p/i051240}} * [http://www.mathreference.com/la,dot.html Explanation of dot product including with complex vectors] * [http://demonstrations.wolfram.com/DotProduct/ "Dot Product"] by Bruce Torrence, [[Wolfram Demonstrations Project]], 2007. {{linear algebra}} {{tensors}} {{Authority control}} [[Category:Articles containing proofs]] [[Category:Bilinear forms]] [[Category:Operations on vectors]] [[Category:Analytic geometry]] [[Category:Tensors]] [[Category:Scalars]]