{{short description|Vector space with generalized dot product}} {{redirect|Inner product|the inner product of coordinate vectors|Dot product}} [[File:Inner-product-angle.png|thumb|300px|Geometric interpretation of the angle between two vectors defined using an inner product]] [[File:Product Spaces Drawing (1).png|alt=Scalar product spaces, inner product spaces, Hermitian product spaces.|thumb|300px|Scalar product spaces, over any field, have "scalar products" that are symmetrical and linear in the first argument. Hermitian product spaces are restricted to the field of complex numbers and have "Hermitian products" that are conjugate-symmetrical and linear in the first argument. Inner product spaces may be defined over any field, having "inner products" that are linear in the first argument, conjugate-symmetrical, and positive-definite. Unlike inner products, scalar products and Hermitian products need not be positive-definite.]] In [[mathematics]], an '''inner product space''' (or, rarely, a '''[[Hausdorff space|Hausdorff]] pre-Hilbert space'''{{sfn|Trèves|2006|pp=112-125}}{{sfn|Schaefer|Wolff|1999|pp=40-45}}) is a [[real vector space]] or a [[complex vector space]] with an [[operation (mathematics)|operation]] called an '''inner product'''. The inner product of two vectors in the space is a [[Scalar (mathematics)|scalar]], often denoted with [[angle brackets]] such as in <math>\langle a, b \rangle</math>. Inner products allow formal definitions of intuitive geometric notions, such as lengths, [[angle]]s, and [[orthogonality]] (zero inner product) of vectors. Inner product spaces generalize [[Euclidean vector space]]s, in which the inner product is the [[dot product]] or ''scalar product'' of [[Cartesian coordinates]]. Inner product spaces of infinite [[Dimension (vector space)|dimension]] are widely used in [[functional analysis]]. Inner product spaces over the [[Field (mathematics)|field]] of [[complex number]]s are sometimes referred to as '''unitary spaces'''. The first usage of the concept of a vector space with an inner product is due to [[Giuseppe Peano]], in 1898.<ref>{{cite journal|last1=Moore|first1=Gregory H.|title=The axiomatization of linear algebra: 1875-1940|journal=Historia Mathematica|date=1995|volume=22|issue=3|pages=262–303|doi=10.1006/hmat.1995.1025|doi-access=free}}</ref> An inner product naturally induces an associated [[Norm (mathematics)|norm]] (denoted <math>|x|</math> and <math>|y|</math> in the picture); thus, every inner product space is a [[normed vector space]].
If this normed space is also [[complete metric space|complete]] (that is, a [[Banach space]]) then the inner product space is a [[Hilbert space]].{{sfn|Trèves|2006|pp=112-125}} If an inner product space {{mvar|H}} is not a Hilbert space, it can be ''extended'' by [[Complete topological vector space#Completions|completion]] to a Hilbert space <math>\overline{H}.</math> This means that <math>H</math> is a [[linear subspace]] of <math>\overline{H},</math> the inner product of <math>H</math> is the [[restriction (mathematics)|restriction]] of that of <math>\overline{H},</math> and <math>H</math> is [[Dense subset|dense]] in <math>\overline{H}</math> for the [[topology (structure)|topology]] defined by the norm.{{sfn|Trèves|2006|pp=112-125}}{{sfn|Schaefer|Wolff|1999|pp=36-72}} == Definition == In this article, {{math|''F''}} denotes a [[field (mathematics)|field]] that is either the [[real number]]s <math>\R,</math> or the [[complex number]]s <math>\Complex.</math> A [[scalar (mathematics)|scalar]] is thus an element of {{math|''F''}}. A bar over an expression representing a scalar denotes the [[complex conjugate]] of this scalar. The zero vector is denoted <math>\mathbf 0</math> to distinguish it from the scalar {{math|0}}. An ''inner product space'' is a [[vector space]] {{math|''V''}} over the field {{math|''F''}} together with an ''inner product'', that is, a map <math display="block"> \langle \cdot, \cdot \rangle : V \times V \to F </math> that satisfies the following three properties for all vectors <math>x,y,z\in V</math> and all scalars {{nowrap|<math>a,b \in F</math>.<ref name= Jain>{{cite book |title=Functional Analysis |first1=P. K. |last1=Jain |first2=Khalil |last2=Ahmad |chapter-url=https://books.google.com/books?id=yZ68h97pnAkC&pg=PA203 |page=203 |chapter=5.1 Definitions and basic properties of inner product spaces and Hilbert spaces |isbn=81-224-0801-X |year=1995 |edition=2nd |publisher=New Age International}}</ref><ref name="Prugovec̆ki">{{cite book |title=Quantum Mechanics in Hilbert Space |first=Eduard |last=Prugovečki |chapter-url=https://books.google.com/books?id=GxmQxn2PF3IC&pg=PA18 |chapter=Definition 2.1 |pages=18ff |isbn=0-12-566060-X | year = 1981 |publisher=Academic Press |edition = 2nd}}</ref>}} * ''Conjugate symmetry'': <math display=block>\langle x, y \rangle = \overline{\langle y, x \rangle}.</math> As <math display="inline"> a = \overline{a} </math> [[if and only if]] <math>a</math> is real, conjugate symmetry implies that <math>\langle x, x \rangle </math> is always a real number. If {{math|''F''}} is <math>\R</math>, conjugate symmetry is just symmetry. * [[Linear map|Linearity]] in the first argument:<ref group="Note">By combining the ''linear in the first argument'' property with the ''conjugate symmetry'' property, one obtains ''conjugate-linearity in the second argument'': <math display="inline"> \langle x,by \rangle = \langle x,y \rangle \overline{b} </math>. This is how the inner product was originally defined and is used in most mathematical contexts.
A different convention has been adopted in theoretical physics and quantum mechanics, originating in the [[bra-ket]] notation of [[Paul Dirac]], where the inner product is taken to be ''linear in the second argument'' and ''conjugate-linear in the first argument''; this convention is used in many other domains such as engineering and computer science.</ref> <math display=block> \langle ax+by, z \rangle = a \langle x, z \rangle + b \langle y, z \rangle.</math> * [[Definite bilinear form|Positive-definiteness]]: if <math>x</math> is not zero, then <math display=block> \langle x, x \rangle > 0 </math> (conjugate symmetry implies that <math>\langle x, x \rangle</math> is real). If the positive-definiteness condition is replaced by merely requiring that <math>\langle x, x \rangle \geq 0</math> for all <math>x</math>, then one obtains the definition of a ''positive semi-definite Hermitian form''. A positive semi-definite Hermitian form <math>\langle \cdot, \cdot \rangle</math> is an inner product if and only if for all <math>x</math>, if <math>\langle x, x \rangle = 0</math> then <math>x = \mathbf 0</math>.{{sfn|Schaefer|Wolff|1999|p=44}} === Basic properties === In the following properties, which result almost immediately from the definition of an inner product, {{math|''x'', ''y''}} and {{mvar|z}} are arbitrary vectors, and {{mvar|a}} and {{mvar|b}} are arbitrary scalars. *<math>\langle \mathbf{0}, x \rangle=\langle x,\mathbf{0}\rangle=0.</math> *<math> \langle x, x \rangle</math> is real and nonnegative. *<math>\langle x, x \rangle = 0</math> if and only if <math>x=\mathbf{0}.</math> *<math>\langle x, ay+bz \rangle= \overline a \langle x, y \rangle + \overline b \langle x, z \rangle.</math><br>This implies that an inner product is a [[sesquilinear form]]. *<math>\langle x + y, x + y \rangle = \langle x, x \rangle + 2\operatorname{Re}(\langle x, y \rangle) + \langle y, y \rangle,</math> where <math>\operatorname{Re}</math> denotes the [[real part]] of its argument. Over <math>\R</math>, conjugate-symmetry reduces to symmetry, and sesquilinearity reduces to bilinearity. Hence an inner product on a real vector space is a ''positive-definite symmetric [[bilinear form]]''. The [[binomial expansion]] of a square becomes <math display="block">\langle x + y, x + y \rangle = \langle x, x \rangle + 2\langle x, y \rangle + \langle y, y \rangle .</math> === Notation === Several notations are used for inner products, including <math> \langle \cdot, \cdot \rangle </math>, <math> \left ( \cdot, \cdot \right ) </math>, <math> \langle \cdot | \cdot \rangle </math> and <math> \left ( \cdot | \cdot \right ) </math>, as well as the usual dot product. === Convention variant === Some authors, especially in [[physics]] and [[matrix algebra]], prefer to define inner products and sesquilinear forms with linearity in the second argument rather than the first. Then the first argument becomes conjugate-linear, rather than the second. [[Bra–ket notation|Bra-ket notation]] in [[quantum mechanics]] also uses slightly different notation, i.e. <math> \langle \cdot | \cdot \rangle </math>, where <math> \langle x | y \rangle := \left ( y, x \right ) </math>.
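The three defining properties can be verified numerically for the standard inner product on <math>\Complex^n</math>. The following is a minimal Python sketch (using NumPy; the helper name <code>inner</code> and the random test vectors are illustrative choices, not standard notation), written with this article's convention of linearity in the first argument:
<syntaxhighlight lang="python">
import numpy as np

def inner(x, y):
    """Standard inner product on C^n: linear in x, conjugate-linear in y."""
    return complex(np.sum(x * np.conj(y)))

rng = np.random.default_rng(0)
x = rng.normal(size=3) + 1j * rng.normal(size=3)
y = rng.normal(size=3) + 1j * rng.normal(size=3)
z = rng.normal(size=3) + 1j * rng.normal(size=3)
a, b = 2.0 - 1.0j, 0.5 + 3.0j

# Conjugate symmetry: <x, y> = conjugate of <y, x>.
assert np.isclose(inner(x, y), np.conj(inner(y, x)))
# Linearity in the first argument: <ax + by, z> = a<x, z> + b<y, z>.
assert np.isclose(inner(a * x + b * y, z), a * inner(x, z) + b * inner(y, z))
# Positive-definiteness: <x, x> is real (up to rounding) and positive.
assert np.isclose(inner(x, x).imag, 0.0) and inner(x, x).real > 0
</syntaxhighlight>
Under the physics convention mentioned above, one would instead conjugate the first argument, i.e. <code>np.sum(np.conj(x) * y)</code>.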
==Examples== ===Real and complex numbers=== Among the simplest examples of inner product spaces are <math>\R</math> and <math>\Complex.</math> The [[real number]]s <math>\R</math> are a vector space over <math>\R</math> that becomes an inner product space with arithmetic multiplication as its inner product: <math display=block>\langle x, y \rangle := x y \quad \text{ for } x, y \in \R.</math> The [[complex number]]s <math>\Complex</math> are a vector space over <math>\Complex</math> that becomes an inner product space with the inner product <math display=block>\langle x, y \rangle := x \overline{y} \quad \text{ for } x, y \in \Complex.</math> Unlike with the real numbers, the assignment <math>(x, y) \mapsto x y</math> does {{em|not}} define a complex inner product on <math>\Complex.</math> ===Euclidean vector space=== More generally, the [[Real coordinate space|real <math>n</math>-space]] <math>\R^n</math> with the [[dot product]] is an inner product space, an example of a [[Euclidean vector space]]. <math display=block> \left\langle \begin{bmatrix} x_1 \\ \vdots \\ x_n \end{bmatrix}, \begin{bmatrix} y_1 \\ \vdots \\ y_n \end{bmatrix} \right\rangle = x^\textsf{T} y = \sum_{i=1}^n x_i y_i = x_1 y_1 + \cdots + x_n y_n, </math> where <math>x^{\operatorname{T}}</math> is the [[transpose]] of <math>x.</math> A function <math>\langle \,\cdot, \cdot\, \rangle : \R^n \times \R^n \to \R</math> is an inner product on <math>\R^n</math> if and only if there exists a [[Symmetric matrix|symmetric]] [[positive-definite matrix]] <math>\mathbf{M}</math> such that <math>\langle x, y \rangle = x^{\operatorname{T}} \mathbf{M} y</math> for all <math>x, y \in \R^n.</math> If <math>\mathbf{M}</math> is the [[identity matrix]] then <math>\langle x, y \rangle = x^{\operatorname{T}} \mathbf{M} y</math> is the dot product. For another example, if <math>n = 2</math> and <math>\mathbf{M} = \begin{bmatrix} a & b \\ b & d \end{bmatrix}</math> is positive-definite (which happens if and only if <math>\det \mathbf{M} = a d - b^2 > 0</math> and the diagonal entry <math>a</math> is positive, in which case <math>d</math> is positive as well) then for any <math>x := \left[x_1, x_2\right]^{\operatorname{T}}, y := \left[y_1, y_2\right]^{\operatorname{T}} \in \R^2,</math> <math display=block>\langle x, y \rangle := x^{\operatorname{T}} \mathbf{M} y = \left[x_1, x_2\right] \begin{bmatrix} a & b \\ b & d \end{bmatrix} \begin{bmatrix} y_1 \\ y_2 \end{bmatrix} = a x_1 y_1 + b x_1 y_2 + b x_2 y_1 + d x_2 y_2.</math> As mentioned earlier, every inner product on <math>\R^2</math> is of this form (where <math>b \in \R, a > 0</math> and <math>d > 0</math> satisfy <math>a d > b^2</math>). ===Complex coordinate space=== The general form of an inner product on <math>\Complex^n</math> is known as the [[Hermitian form]] and is given by <math display=block>\langle x, y \rangle = y^\dagger \mathbf{M} x = \overline{x^\dagger \mathbf{M} y},</math> where <math>\mathbf{M}</math> is any [[Hermitian matrix|Hermitian]] [[positive-definite matrix]] and <math>y^{\dagger}</math> is the [[conjugate transpose]] of <math>y.</math> For the real case, this corresponds to the dot product of the results of directionally-different [[Scaling (geometry)|scaling]] of the two vectors, with positive [[scale factor]]s and orthogonal directions of scaling. It is a [[Weight function|weighted-sum]] version of the dot product with positive weights—up to an orthogonal transformation.
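The matrix description above is easy to evaluate directly. The following short Python sketch (using NumPy; the particular matrix <code>M</code>, the vectors, and the helper name <code>inner_M</code> are arbitrary illustrative choices) computes <math>\langle x, y \rangle = x^{\operatorname{T}} \mathbf{M} y</math> for a symmetric positive-definite <math>\mathbf{M}</math> on <math>\R^2</math>:
<syntaxhighlight lang="python">
import numpy as np

# Symmetric matrix with a = 2 > 0 and det M = 2*3 - 1*1 = 5 > 0,
# hence positive-definite by the criterion stated above.
M = np.array([[2.0, 1.0],
              [1.0, 3.0]])

def inner_M(x, y):
    """<x, y> = x^T M y, an inner product on R^2."""
    return float(x @ M @ y)

x = np.array([1.0, -2.0])
y = np.array([0.5, 4.0])

print(inner_M(x, y))                             # a*x1*y1 + b*x1*y2 + b*x2*y1 + d*x2*y2
assert np.isclose(inner_M(x, y), inner_M(y, x))  # symmetry
assert inner_M(x, x) > 0                         # positive-definiteness for x != 0
</syntaxhighlight>
Taking <code>M</code> to be the identity matrix recovers the ordinary dot product, as noted above.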
===Hilbert space=== The article on [[Hilbert spaces]] has several examples of inner product spaces, wherein the metric induced by the inner product yields a [[complete metric space]]. An example of an inner product space which induces an incomplete metric is the space <math>C([a, b])</math> of continuous complex valued functions <math>f</math> and <math>g</math> on the interval <math>[a, b].</math> The inner product is <math display=block>\langle f, g \rangle = \int_a^b f(t) \overline{g(t)} \, \mathrm{d}t.</math> This space is not complete; consider, for example, for the interval {{closed-closed|−1, 1}} the sequence of continuous "step" functions, <math>\{ f_k \}_k,</math> defined by: <math display=block>f_k(t) = \begin{cases} 0 & t \in [-1, 0] \\ 1 & t \in \left[\tfrac{1}{k}, 1\right] \\ kt & t \in \left(0, \tfrac{1}{k}\right) \end{cases}</math> This sequence is a [[Cauchy sequence]] for the norm induced by the preceding inner product, but it does not converge to a {{em|continuous}} function. ===Random variables=== For real [[random variable]]s <math>X</math> and <math>Y,</math> the [[expected value]] of their product <math display="block">\langle X, Y \rangle = \mathbb{E}[XY]</math> is an inner product.<ref>{{cite web|last1=Ouwehand|first1=Peter|title=Spaces of Random Variables|url=http://users.aims.ac.za/~pouw/Lectures/Lecture_Spaces_Random_Variables.pdf|website=AIMS|access-date=2017-09-05|date=November 2010|archive-date=2017-09-05|archive-url=https://web.archive.org/web/20170905225616/http://users.aims.ac.za/~pouw/Lectures/Lecture_Spaces_Random_Variables.pdf|url-status=dead}}</ref><ref>{{cite web|last1=Siegrist|first1=Kyle|title=Vector Spaces of Random Variables|url=http://www.math.uah.edu/stat/expect/Spaces.html|website=Random: Probability, Mathematical Statistics, Stochastic Processes|access-date=2017-09-05|date=1997}}</ref><ref>{{cite thesis|last1=Bigoni|first1=Daniele|title=Uncertainty Quantification with Applications to Engineering Problems|date=2015|type=PhD|publisher=Technical University of Denmark|chapter-url=http://orbit.dtu.dk/files/106969507/phd359_Bigoni_D.pdf|access-date=2017-09-05|chapter=Appendix B: Probability theory and functional spaces}}</ref> In this case, <math>\langle X, X \rangle = 0</math> if and only if <math>\mathbb{P}[X = 0] = 1</math> (that is, <math>X = 0</math> [[almost surely]]), where <math>\mathbb{P}</math> denotes the [[probability]] of the event. This definition of expectation as inner product can be extended to [[random vector]]s as well. ===Complex matrices=== The inner product for complex square matrices of the same size is the [[Frobenius inner product]] <math>\langle A, B \rangle := \operatorname{tr}\left(AB^\dagger\right)</math>. Since the trace and the transposition are linear and the conjugation is applied only to the second matrix, this defines a sesquilinear form. Hermitian symmetry follows from <math display=block>\langle A, B \rangle = \operatorname{tr}\left(AB^\dagger\right) = \overline{\operatorname{tr}\left(BA^\dagger\right)} = \overline{\left\langle B,A \right\rangle}</math> Finally, since <math>\langle A, A\rangle = \sum_{ij} \left|A_{ij}\right|^2 > 0 </math> for nonzero <math>A</math>, the Frobenius inner product is positive-definite as well, and so it is an inner product.
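As a numerical sanity check of the two properties just derived, the following Python sketch (using NumPy; the random matrices and the helper name <code>frobenius</code> are illustrative choices) evaluates the Frobenius inner product on <math>2 \times 2</math> complex matrices:
<syntaxhighlight lang="python">
import numpy as np

def frobenius(A, B):
    """Frobenius inner product: <A, B> = tr(A B^dagger)."""
    return complex(np.trace(A @ B.conj().T))

rng = np.random.default_rng(1)
A = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
B = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))

# Hermitian symmetry: <A, B> = conjugate of <B, A>.
assert np.isclose(frobenius(A, B), np.conj(frobenius(B, A)))
# Positive-definiteness: <A, A> equals the sum of |A_ij|^2, which is > 0 for A != 0.
assert np.isclose(frobenius(A, A), np.sum(np.abs(A) ** 2))
</syntaxhighlight>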
===Vector spaces with forms=== On an inner product space, or more generally a vector space with a [[nondegenerate form]] (hence an isomorphism <math>V \to V^*</math>), vectors can be sent to covectors (in coordinates, via transpose), so that one can take the inner product and outer product of two vectors—not simply of a vector and a covector. ==Basic results, terminology, and definitions== ===Norm properties {{anchor|Norm}}===<!-- This section is linked from [[Cauchy–Schwarz inequality]] --> Every inner product space induces a [[Norm (mathematics)|norm]], called its {{em|{{visible anchor|canonical norm}}}}, that is defined by <math display=block>\|x\| = \sqrt{\langle x, x \rangle}.</math> With this norm, every inner product space becomes a [[normed vector space]]. So, every general property of normed vector spaces applies to inner product spaces. <!-- In particular, an inner product space is a [[metric space]], for the distance defined by <math display=block>d(x, y) = \|y - x\|.</math> --> In particular, one has the following properties: {{glossary}} {{term|[[Absolute homogeneity]]}}{{defn| <math display=block>\|ax\| = |a| \, \|x\|</math> for every <math>x \in V</math> and <math>a \in F</math> (this results from <math>\langle ax, ax \rangle = a\overline a \langle x, x \rangle</math>). }} {{term|[[Triangle inequality]]}}{{defn| <math display=block>\|x + y\| \leq \|x\| + \|y\|</math> for <math>x, y\in V.</math> These two properties show that one has indeed a norm.}} {{term|[[Cauchy–Schwarz inequality]]}}{{defn| <math display=block>|\langle x, y \rangle| \leq \|x\| \, \|y\|</math> for every <math>x, y\in V,</math> with equality if and only if <math>x</math> and <math>y</math> are [[Linearly independent|linearly dependent]]. }} {{term|[[Parallelogram law]]}}{{defn| <math display=block>\|x + y\|^2 + \|x - y\|^2 = 2\|x\|^2 + 2\|y\|^2</math> for every <math>x, y\in V.</math> The parallelogram law is a necessary and sufficient condition for a norm to be defined by an inner product. }} {{term|[[Polarization identity]]}}{{defn| <math display=block>\|x + y\|^2 = \|x\|^2 + \|y\|^2 + 2\operatorname{Re}\langle x, y \rangle</math> for every <math>x, y\in V.</math> The inner product can be retrieved from the norm by the polarization identity, since its imaginary part is the real part of <math>\langle x, iy \rangle.</math> }} {{term|[[Ptolemy's inequality]]}}{{defn| <math display=block>\|x - y\| \, \|z\| ~+~ \|y - z\| \, \|x\| ~\geq~ \|x - z\| \, \|y\|</math> for every <math>x, y,z\in V.</math> Ptolemy's inequality is a necessary and sufficient condition for a [[seminorm]] to be the norm defined by an inner product.<ref>{{Cite journal|last=Apostol|first=Tom M.|date=1967|title=Ptolemy's Inequality and the Chordal Metric|url=https://www.tandfonline.com/doi/pdf/10.1080/0025570X.1967.11975804|journal=Mathematics Magazine|volume=40|issue=5|pages=233–235|language=en|doi=10.2307/2688275|jstor=2688275}}</ref> }} {{glossary end}} ===Orthogonality=== {{glossary}} {{term|[[Orthogonality (mathematics)|Orthogonality]]}}{{defn| Two vectors <math>x</math> and <math>y</math> are said to be {{em|{{visible anchor|orthogonal|Orthogonal vectors}}}}, often written <math>x \perp y,</math> if their inner product is zero, that is, if <math>\langle x, y \rangle = 0.</math> <br> This happens if and only if <math>\|x\| \leq \|x + s y\|</math> for all scalars <math>s,</math>{{sfn|Rudin|1991|pp=306-312}} and if and only if the real-valued function <math>f(s) := \|x + s y\|^2 - \|x\|^2</math> is non-negative. 
(This is a consequence of the fact that, if <math>y \neq 0</math> then the scalar <math>s_0 = - \tfrac{\overline{\langle x, y \rangle}}{\|y\|^2}</math> minimizes <math>f</math> with value <math>f\left(s_0\right) = - \tfrac{|\langle x, y \rangle|^2}{\|y\|^2},</math> which is always non-positive).<br> For a {{em|complex}} inner product space <math>V,</math> a linear operator <math>T : V \to V</math> is identically <math>0</math> if and only if <math>x \perp T x</math> for every <math>x \in V.</math>{{sfn|Rudin|1991|pp=306-312}} This is not true in general for real inner product spaces, as it is a consequence of conjugate symmetry being distinct from symmetry for complex inner products. A counterexample in a real inner product space is <math>T</math> a 90° rotation in <math>\mathbb{R}^2</math>, which maps every vector to an orthogonal vector but is not identically <math>0</math>. }} {{term|[[Orthogonal complement]]}}{{defn|The ''orthogonal complement'' of a subset <math>C \subseteq V</math> is the set <math>C^{\bot}</math> of the vectors that are orthogonal to all elements of {{mvar|C}}; that is, <math display=block>C^{\bot} := \{\,y \in V : \langle y, c \rangle = 0 \text{ for all } c \in C\,\}.</math> This set <math>C^{\bot}</math> is always a closed vector subspace of <math>V</math> and if the [[Closure (topology)|closure]] <math>\operatorname{cl}_V C</math> of <math>C</math> in <math>V</math> is a vector subspace then <math>\operatorname{cl}_V C = \left(C^{\bot}\right)^{\bot}.</math> }} {{term|[[Pythagorean theorem]]}}{{defn| If <math>x</math> and <math>y</math> are orthogonal, then <math display=block>\|x\|^2 + \|y\|^2 = \|x + y\|^2.</math> This may be proved by expressing the squared norms in terms of the inner products, using additivity for expanding the right-hand side of the equation.<br> The name {{em|Pythagorean theorem}} arises from the geometric interpretation in [[Euclidean geometry]]. }} {{term|[[Parseval's identity]]}}{{defn| An [[Mathematical induction|induction]] on the Pythagorean theorem yields: if <math>x_1, \ldots, x_n</math> are pairwise orthogonal, then <math display=block>\sum_{i=1}^n \|x_i\|^2 = \left\|\sum_{i=1}^n x_i\right\|^2.</math> }} {{anchor|Angle}}{{term|[[Angle]]}}{{defn| When <math>\langle x, y \rangle</math> is a real number then the Cauchy–Schwarz inequality implies that <math display=inline>\frac{\langle x, y \rangle}{\|x\| \, \|y\|} \in [-1, 1],</math> and thus that <math display=block>\angle(x, y) = \arccos \frac{\langle x, y \rangle}{\|x\| \, \|y\|},</math> is a real number. This allows defining the (non-oriented) {{em|angle}} of two vectors in modern definitions of [[Euclidean geometry]] in terms of [[linear algebra]]. This is also used in [[data analysis]], under the name "[[cosine similarity]]", for comparing two vectors of data. Furthermore, if <math>\langle x, y \rangle</math> is negative, the angle <math>\angle(x, y)</math> is larger than 90 degrees. This property is often used in computer graphics (e.g., in [[back-face culling]]) to analyze a direction without having to evaluate [[trigonometric functions]].}} {{glossary end}} ===Real and complex parts of inner products=== Suppose that <math>\langle \cdot, \cdot \rangle</math> is an inner product on <math>V</math> (so it is antilinear in its second argument).
The [[polarization identity]] shows that the [[real part]] of the inner product is <math display=block>\operatorname{Re} \langle x, y \rangle = \frac{1}{4} \left(\|x + y\|^2 - \|x - y\|^2\right).</math> If <math>V</math> is a real vector space then <math display=block>\langle x, y \rangle = \operatorname{Re} \langle x, y \rangle = \frac{1}{4} \left(\|x + y\|^2 - \|x - y\|^2\right)</math> and the [[imaginary part]] (also called the {{em|complex part}}) of <math>\langle \cdot, \cdot \rangle</math> is always <math>0.</math> Assume for the rest of this section that <math>V</math> is a complex vector space. The [[polarization identity]] for complex vector spaces shows that <math display="block">\begin{alignat}{4} \langle x, \ y \rangle &= \frac{1}{4} \left(\|x + y\|^2 - \|x - y\|^2 + i\|x + iy\|^2 - i\|x - iy\|^2 \right) \\ &= \operatorname{Re} \langle x, y \rangle + i \operatorname{Re} \langle x, i y \rangle. \\ \end{alignat}</math> The map defined by <math>\langle x \mid y \rangle = \langle y, x \rangle</math> for all <math>x, y \in V</math> satisfies the axioms of the inner product except that it is antilinear in its {{em|first}}, rather than its second, argument. The real parts of both <math>\langle x \mid y \rangle</math> and <math>\langle x, y \rangle</math> are equal to <math>\operatorname{Re} \langle x, y \rangle</math> but the inner products differ in their complex part: <math display="block">\begin{alignat}{4} \langle x \mid y \rangle &= \frac{1}{4} \left(\|x + y\|^2 - \|x - y\|^2 - i\|x + iy\|^2 + i\|x - iy\|^2 \right) \\ &= \operatorname{Re} \langle x, y \rangle - i \operatorname{Re} \langle x, i y \rangle. \\ \end{alignat}</math> The last equality is similar to the formula [[Real and imaginary parts of a linear functional|expressing a linear functional]] in terms of its real part. These formulas show that every complex inner product is completely determined by its real part. Moreover, this real part defines an inner product on <math>V,</math> considered as a real vector space. There is thus a one-to-one correspondence between complex inner products on a complex vector space <math>V,</math> and real inner products on <math>V.</math> For example, suppose that <math>V = \Complex^n </math> for some integer <math>n > 0.</math> When <math>V</math> is considered as a real vector space in the usual way (meaning that it is identified with the <math>2n</math>-dimensional real vector space <math>\R^{2n},</math> with each <math>\left(a_1 + i b_1, \ldots, a_n + i b_n\right) \in \Complex^n</math> identified with <math>\left(a_1, b_1, \ldots, a_n, b_n\right) \in \R^{2n}</math>), then the [[dot product]] <math>x \,\cdot\, y = \left(x_1, \ldots, x_{2n}\right) \, \cdot \, \left(y_1, \ldots, y_{2n}\right) := x_1 y_1 + \cdots + x_{2n} y_{2n}</math> defines a real inner product on this space. The unique complex inner product <math>\langle \,\cdot, \cdot\, \rangle</math> on <math>V = \C^n</math> induced by the dot product is the map that sends <math>c = \left(c_1, \ldots, c_n\right), d = \left(d_1, \ldots, d_n\right) \in \Complex^n</math> to <math>\langle c, d \rangle := c_1 \overline{d_1} + \cdots + c_n \overline{d_n}</math> (because the real part of this map <math>\langle \,\cdot, \cdot\, \rangle</math> is equal to the dot product). ====Real vs. complex inner products==== Let <math>V_{\R}</math> denote <math>V</math> considered as a vector space over the real numbers rather than complex numbers.
The [[real part]] of the complex inner product <math>\langle x, y \rangle</math> is the map <math>\langle x, y \rangle_{\R} = \operatorname{Re} \langle x, y \rangle ~:~ V_{\R} \times V_{\R} \to \R,</math> which necessarily forms a real inner product on the real vector space <math>V_{\R}.</math> Every inner product on a real vector space is a [[Bilinear map|bilinear]] and [[symmetric map]]. For example, if <math>V = \Complex</math> with inner product <math>\langle x, y \rangle = x \overline{y},</math> where <math>V</math> is a vector space over the field <math>\Complex,</math> then <math>V_{\R} = \R^2</math> is a vector space over <math>\R</math> and <math>\langle x, y \rangle_{\R}</math> is the [[dot product]] <math>x \cdot y,</math> where <math>x = a + i b \in V = \Complex</math> is identified with the point <math>(a, b) \in V_{\R} = \R^2</math> (and similarly for <math>y</math>); thus the standard inner product <math>\langle x, y \rangle = x \overline{y},</math> on <math>\Complex</math> is an "extension" of the dot product. Also, had <math>\langle x, y \rangle</math> been instead defined to be the {{EquationNote|Symmetry|symmetric map}} <math>\langle x, y \rangle = x y</math> (rather than the usual {{EquationNote|Conjugate symmetry|conjugate symmetric map}} <math>\langle x, y \rangle = x \overline{y}</math>) then its real part <math>\langle x, y \rangle_{\R}</math> would {{em|not}} be the dot product; furthermore, without the complex conjugate, if <math>x \in \C</math> but <math>x \not\in \R</math> then <math>\langle x, x \rangle = x x = x^2 \not\in [0, \infty)</math> so the assignment <math display="inline">x \mapsto \sqrt{\langle x, x \rangle}</math> would not define a norm. The next examples show that although real and complex inner products have many properties and results in common, they are not entirely interchangeable. For instance, if <math>\langle x, y \rangle = 0</math> then <math>\langle x, y \rangle_{\R} = 0,</math> but the next example shows that the converse is in general {{em|not}} true. Given any <math>x \in V,</math> the vector <math>i x</math> (which is the vector <math>x</math> rotated by 90°) belongs to <math>V</math> and so also belongs to <math>V_{\R}</math> (although scalar multiplication of <math>x</math> by <math>i = \sqrt{-1}</math> is not defined in <math>V_{\R},</math> the vector in <math>V</math> denoted by <math>i x</math> is nevertheless still also an element of <math>V_{\R}</math>). For the complex inner product, <math>\langle x, ix \rangle = -i \|x\|^2,</math> whereas for the real inner product the value is always <math>\langle x, ix \rangle_{\R} = 0.</math> If <math>\langle \,\cdot, \cdot\, \rangle</math> is a complex inner product and <math>A : V \to V</math> is a continuous linear operator that satisfies <math>\langle x, A x \rangle = 0</math> for all <math>x \in V,</math> then <math>A = 0.</math> This statement is no longer true if <math>\langle \,\cdot, \cdot\, \rangle</math> is instead a real inner product, as this next example shows. Suppose that <math>V = \Complex</math> has the inner product <math>\langle x, y \rangle := x \overline{y}</math> mentioned above. Then the map <math>A : V \to V</math> defined by <math>A x = ix</math> is a linear map (linear for both <math>V</math> and <math>V_{\R}</math>) that is rotation by <math>90^{\circ}</math> in the plane.
Because <math>x</math> and <math>A x</math> are perpendicular vectors and <math>\langle x, Ax \rangle_{\R}</math> is just the dot product, <math>\langle x, Ax \rangle_{\R} = 0</math> for all vectors <math>x;</math> nevertheless, this rotation map <math>A</math> is certainly not identically <math>0.</math> In contrast, using the complex inner product gives <math>\langle x, Ax \rangle = -i \|x\|^2,</math> which (as expected) is not identically zero. ==Orthonormal sequences== {{See also|Orthogonal basis|Orthonormal basis}} Let <math>V</math> be a finite-dimensional inner product space of dimension <math>n.</math> Recall that every [[Basis (linear algebra)|basis]] of <math>V</math> consists of exactly <math>n</math> linearly independent vectors. Using the [[Gram–Schmidt process]] we may start with an arbitrary basis and transform it into an orthonormal basis, that is, a basis in which all the elements are mutually orthogonal and have unit norm. In symbols, a basis <math>\{e_1, \ldots, e_n\}</math> is orthonormal if <math>\langle e_i, e_j \rangle = 0</math> for every <math>i \neq j</math> and <math>\langle e_i, e_i \rangle = \|e_i\|^2 = 1</math> for each index <math>i.</math> This definition of orthonormal basis generalizes to the case of infinite-dimensional inner product spaces in the following way. Let <math>V</math> be any inner product space. Then a collection <math display=block>E = \left\{ e_a \right\}_{a \in A}</math> is a {{em|basis}} for <math>V</math> if the subspace of <math>V</math> generated by finite linear combinations of elements of <math>E</math> is dense in <math>V</math> (in the norm induced by the inner product). Say that <math>E</math> is an {{em|[[orthonormal basis]]}} for <math>V</math> if it is a basis and <math display=block>\left\langle e_{a}, e_{b} \right\rangle = 0</math> if <math>a \neq b</math> and <math>\langle e_a, e_a \rangle = \|e_a\|^2 = 1</math> for all <math>a, b \in A.</math> Using an infinite-dimensional analog of the Gram–Schmidt process one may show: '''Theorem.''' Any [[Separable space|separable]] inner product space has an orthonormal basis. Using the [[Hausdorff maximal principle]] and the fact that in a [[Hilbert space|complete inner product space]] orthogonal projection onto linear subspaces is well-defined, one may also show that '''Theorem.''' Any [[Hilbert space|complete inner product space]] has an orthonormal basis. The two previous theorems raise the question of whether all inner product spaces have an orthonormal basis. The answer, it turns out, is negative. This is a non-trivial result, and is proved below. The following proof is taken from Halmos's ''A Hilbert Space Problem Book'' (see the references).{{citation needed|date=October 2017}} :{| class="toccolours collapsible collapsed" width="90%" style="text-align:left" !Proof |- | Recall that the dimension of an inner product space is the [[cardinality]] of a maximal orthonormal system that it contains (by [[Zorn's lemma]] it contains at least one, and any two have the same cardinality). An orthonormal basis is certainly a maximal orthonormal system but the converse need not hold in general.
If <math>G</math> is a dense subspace of an inner product space <math>V,</math> then any orthonormal basis for <math>G</math> is automatically an orthonormal basis for <math>V.</math> Thus, it suffices to construct an inner product space <math>V</math> with a dense subspace <math>G</math> whose dimension is strictly smaller than that of <math>V.</math> Let <math>K</math> be a [[Hilbert space]] of dimension [[Aleph-null|<math>\aleph_0</math>]] (for instance, <math>K = \ell^2(\N)</math>). Let <math>E</math> be an orthonormal basis of <math>K,</math> so <math>|E| = \aleph_0.</math> Extend <math>E</math> to a [[Basis (linear algebra)#Related notions|Hamel basis]] <math>E \cup F</math> for <math>K,</math> where <math>E \cap F = \varnothing.</math> Since it is known that the [[Hamel dimension]] of <math>K</math> is <math>c,</math> the cardinality of the continuum, it must be that <math>|F| = c.</math> Let <math>L</math> be a Hilbert space of dimension <math>c</math> (for instance, <math>L = \ell^2(\R)</math>). Let <math>B</math> be an orthonormal basis for <math>L</math> and let <math>\varphi : F \to B</math> be a bijection. Then there is a linear transformation <math>T : K \to L</math> such that <math>T f = \varphi(f)</math> for <math>f \in F,</math> and <math>Te = 0</math> for <math>e \in E.</math> Let <math>V = K \oplus L</math> and let <math>G = \{ (k, T k) : k \in K \}</math> be the graph of <math>T.</math> Let <math>\overline{G}</math> be the closure of <math>G</math> in <math>V</math>; we will show <math>\overline{G} = V.</math> Since for any <math>e \in E</math> we have <math>(e, 0) \in G,</math> it follows that <math>K \oplus 0 \subseteq \overline{G}.</math> Next, if <math>b \in B,</math> then <math>b = T f</math> for some <math>f \in F \subseteq K,</math> so <math>(f, b) \in G \subseteq \overline{G}</math>; since <math>(f, 0) \in \overline{G}</math> as well, we also have <math>(0, b) \in \overline{G}.</math> It follows that <math>0 \oplus L \subseteq \overline{G},</math> so <math>\overline{G} = V,</math> and <math>G</math> is dense in <math>V.</math> Finally, <math>\{(e, 0) : e \in E \}</math> is a maximal orthonormal set in <math>G</math>; if <math display=block>0 = \langle (e, 0), (k, Tk) \rangle = \langle e, k \rangle + \langle 0, Tk \rangle = \langle e, k \rangle</math> for all <math>e \in E</math> then <math>k = 0,</math> so <math>(k, Tk) = (0, 0)</math> is the zero vector in <math>G.</math> Hence the dimension of <math>G</math> is <math>|E| = \aleph_0,</math> whereas it is clear that the dimension of <math>V</math> is <math>c.</math> This completes the proof. |} [[Parseval's identity]] leads immediately to the following theorem: '''Theorem.''' Let <math>V</math> be a separable inner product space and <math>\left\{e_k\right\}_k</math> an orthonormal basis of <math>V.</math> Then the map <math display=block>x \mapsto \bigl\{\langle x, e_k \rangle\bigr\}_{k \in \N}</math> is an isometric linear map <math>V \rightarrow \ell^2</math> with a dense image. This theorem can be regarded as an abstract form of [[Fourier series]], in which an arbitrary orthonormal basis plays the role of the sequence of [[trigonometric polynomial]]s. Note that the underlying index set can be taken to be any countable set (and in fact any set whatsoever, provided <math>\ell^2</math> is defined appropriately, as is explained in the article [[Hilbert space]]).
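In finite dimensions the coordinate map of the preceding theorem can be computed directly. The following Python sketch (using NumPy; the starting basis, the test vector, and the helper names <code>inner</code> and <code>gram_schmidt</code> are arbitrary illustrative choices) orthonormalizes a basis of <math>\R^3</math> with the Gram–Schmidt process mentioned above and verifies Parseval's identity for the resulting coordinates:
<syntaxhighlight lang="python">
import numpy as np

def inner(x, y):
    """Inner product, linear in the first argument."""
    return np.sum(x * np.conj(y))

def gram_schmidt(vectors):
    """Orthonormalize a linearly independent list of vectors."""
    basis = []
    for v in vectors:
        w = v - sum(inner(v, e) * e for e in basis)   # remove components along earlier vectors
        basis.append(w / np.sqrt(inner(w, w).real))   # normalize to unit norm
    return basis

vectors = [np.array([1.0, 1.0, 0.0]),
           np.array([1.0, 0.0, 1.0]),
           np.array([0.0, 1.0, 1.0])]
e = gram_schmidt(vectors)

x = np.array([2.0, -1.0, 0.5])
coords = [inner(x, ek) for ek in e]   # the coordinate map x -> {<x, e_k>}

# Parseval's identity: sum_k |<x, e_k>|^2 = ||x||^2.
assert np.isclose(sum(abs(c) ** 2 for c in coords), inner(x, x).real)
</syntaxhighlight>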
In particular, we obtain the following result in the theory of Fourier series: '''Theorem.''' Let <math>V</math> be the inner product space <math>C[-\pi, \pi].</math> Then the sequence (indexed on the set of all integers) of continuous functions <math display=block>e_k(t) = \frac{e^{i k t}}{\sqrt{2 \pi}}</math> is an orthonormal basis of the space <math>C[-\pi, \pi]</math> with the <math>L^2</math> inner product. The mapping <math display=block>f \mapsto \frac{1}{\sqrt{2 \pi}} \left\{\int_{-\pi}^\pi f(t) e^{-i k t} \, \mathrm{d}t \right\}_{k \in \Z}</math> is an isometric linear map with dense image. Orthogonality of the sequence <math>\{ e_k \}_k</math> follows immediately from the fact that if <math>k \neq j,</math> then <math display=block>\int_{-\pi}^\pi e^{-i (j - k) t} \, \mathrm{d}t = 0.</math> Normality of the sequence is by design, that is, the coefficients are chosen so that the norm comes out to 1. Finally, the fact that the sequence has a dense algebraic span, in the {{em|inner product norm}}, follows from the fact that the sequence has a dense algebraic span, this time in the space of continuous periodic functions on <math>[-\pi, \pi]</math> with the uniform norm. This is the content of the [[Weierstrass approximation theorem|Weierstrass theorem]] on the uniform density of trigonometric polynomials. ==Operators on inner product spaces== {{Main|Operator theory}} Several types of [[linear]] maps <math>A : V \to W</math> between inner product spaces <math>V</math> and <math>W</math> are of relevance: * {{em|[[Continuous linear operator|Continuous linear maps]]}}: <math>A : V \to W</math> is linear and continuous with respect to the metric defined above, or equivalently, <math>A</math> is linear and the set of non-negative reals <math>\{ \|Ax\| : \|x\| \leq 1\},</math> where <math>x</math> ranges over the closed unit ball of <math>V,</math> is bounded. * {{em|Symmetric linear operators}}: <math>A : V \to V</math> is linear and <math>\langle Ax, y \rangle = \langle x, Ay \rangle</math> for all <math>x, y \in V.</math> * {{em|[[Isometry|Isometries]]}}: <math>A : V \to W</math> satisfies <math>\|A x\| = \|x\|</math> for all <math>x \in V.</math> A {{em|linear isometry}} (resp. an {{em|[[Antilinear map|antilinear]] isometry}}) is an isometry that is also a linear map (resp. an [[antilinear map]]). For inner product spaces, the [[polarization identity]] can be used to show that <math>A</math> is an isometry if and only if <math>\langle Ax, Ay \rangle = \langle x, y \rangle</math> for all <math>x, y \in V.</math> All isometries are [[injective]]. The [[Mazur–Ulam theorem]] establishes that every surjective isometry between two {{em|real}} normed spaces is an [[affine transformation]]. Consequently, an isometry <math>A</math> between real inner product spaces is a linear map if and only if <math>A(0) = 0.</math> Isometries are [[morphism]]s between inner product spaces, and morphisms of real inner product spaces are orthogonal transformations (compare with [[orthogonal matrix]]). * {{em|Isometrical isomorphisms}}: <math>A : V \to W</math> is an isometry which is [[surjective]] (and hence [[bijective]]). Isometrical isomorphisms are also known as unitary operators (compare with [[unitary matrix]]). From the point of view of inner product space theory, there is no need to distinguish between two spaces which are isometrically isomorphic.
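As a numerical illustration of the last two items, the following Python sketch (using NumPy; obtaining a unitary matrix from the QR factorization of a random matrix is an illustrative construction) checks that a unitary operator on <math>\Complex^3</math> preserves norms and, equivalently by the polarization identity, inner products:
<syntaxhighlight lang="python">
import numpy as np

def inner(x, y):
    """Inner product on C^n, linear in the first argument."""
    return complex(np.sum(x * np.conj(y)))

rng = np.random.default_rng(2)
Z = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
Q, _ = np.linalg.qr(Z)   # Q is unitary: Q^dagger Q = I

x = rng.normal(size=3) + 1j * rng.normal(size=3)
y = rng.normal(size=3) + 1j * rng.normal(size=3)

# An isometry preserves norms ...
assert np.isclose(inner(Q @ x, Q @ x).real, inner(x, x).real)
# ... and therefore inner products: <Qx, Qy> = <x, y>.
assert np.isclose(inner(Q @ x, Q @ y), inner(x, y))
</syntaxhighlight>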
The [[spectral theorem]] provides a canonical form for symmetric, unitary and more generally [[normal operator]]s on finite-dimensional inner product spaces. A generalization of the spectral theorem holds for continuous normal operators in Hilbert spaces.<ref>{{harvnb|Rudin|1991}}</ref> ==Generalizations== Any of the axioms of an inner product may be weakened, yielding generalized notions. The generalizations that are closest to inner products occur where bilinearity and conjugate symmetry are retained, but positive-definiteness is weakened. ===Degenerate inner products=== {{Main|Krein space}} If <math>V</math> is a vector space and <math>\langle \,\cdot\,, \,\cdot\, \rangle</math> a semi-definite sesquilinear form, then the function: <math display=block>\|x\| = \sqrt{\langle x, x\rangle}</math> makes sense and satisfies all the properties of a norm except that <math>\|x\| = 0</math> does not imply <math>x = 0</math> (such a functional is then called a [[semi-norm]]). We can produce an inner product space by considering the quotient <math>W = V / \{x : \|x\| = 0\}.</math> The sesquilinear form <math>\langle \,\cdot\,, \,\cdot\, \rangle</math> factors through <math>W.</math> This construction is used in numerous contexts. The [[Gelfand–Naimark–Segal construction]] is a particularly important example of the use of this technique. Another example is the representation of [[Mercer's theorem|semi-definite kernel]]s on arbitrary sets. ===Nondegenerate conjugate symmetric forms=== {{Main|Pseudo-Euclidean space}} Alternatively, one may require that the pairing be a [[nondegenerate form]], meaning that for every non-zero <math>x</math> there exists some <math>y</math> such that <math>\langle x, y \rangle \neq 0,</math> though <math>y</math> need not equal <math>x</math>; in other words, the induced map to the dual space <math>V \to V^*</math> is injective. This generalization is important in [[differential geometry]]: a manifold whose tangent spaces have an inner product is a [[Riemannian manifold]], while if this is weakened to a nondegenerate conjugate symmetric form, the manifold is a [[pseudo-Riemannian manifold]]. By [[Sylvester's law of inertia]], just as every inner product is similar to the dot product with positive weights on a set of vectors, every nondegenerate conjugate symmetric form is similar to the dot product with {{em|nonzero}} weights on a set of vectors, and the numbers of positive and negative weights are called the positive index and negative index, respectively. The product of vectors in [[Minkowski space]] is an example of an indefinite inner product, although, technically speaking, it is not an inner product according to the standard definition above. Minkowski space has four [[Dimension (mathematics)|dimensions]] and indices 3 and 1 (assignment of [[Sign (mathematics)|"+" and "−"]] to them [[Sign convention#Metric signature|differs depending on conventions]]). Purely algebraic statements (ones that do not use positivity) usually only rely on the nondegeneracy (the injective homomorphism <math>V \to V^*</math>) and thus hold more generally. ==Related products== The term "inner product" is opposed to [[outer product]] ([[tensor product]]), which is slightly more general.
Simply, in coordinates, the inner product is the product of a <math>1 \times n</math> {{em|covector}} with an <math>n \times 1</math> vector, yielding a <math>1 \times 1</math> matrix (a scalar), while the outer product is the product of an <math>m \times 1</math> vector with a <math>1 \times n</math> covector, yielding an <math>m \times n</math> matrix. The outer product is defined for different dimensions, while the inner product requires the same dimension. If the dimensions are the same, then the inner product is the {{em|[[Trace (linear algebra)|trace]]}} of the outer product (trace only being properly defined for square matrices). In an informal summary: "inner is horizontal times vertical and shrinks down, outer is vertical times horizontal and expands out". More abstractly, the outer product is the bilinear map <math>W \times V^* \to \hom(V, W)</math> sending a vector and a covector to a rank 1 linear transformation ([[simple tensor]] of type (1, 1)), while the inner product is the bilinear evaluation map <math>V^* \times V \to F</math> given by evaluating a covector on a vector; the order of the domain vector spaces here reflects the covector/vector distinction. The inner product and outer product should not be confused with the [[interior product]] and [[exterior product]], which are instead operations on [[vector field]]s and [[differential form]]s, or more generally on the [[exterior algebra]]. As a further complication, in [[geometric algebra]] the inner product and the {{em|exterior}} (Grassmann) product are combined in the geometric product (the Clifford product in a [[Clifford algebra]]) – the inner product sends two vectors (1-vectors) to a scalar (a 0-vector), while the exterior product sends two vectors to a bivector (2-vector) – and in this context the exterior product is usually called the {{em|outer product}} (alternatively, {{em|[[wedge product]]}}). The inner product is more correctly called a {{em|scalar}} product in this context, as the nondegenerate quadratic form in question need not be positive definite (need not be an inner product). ==See also== * {{annotated link|Bilinear form}} * {{annotated link|Biorthogonal system}} * {{annotated link|Dual space}} * {{annotated link|Energetic space}} * {{annotated link|L-semi-inner product}} * {{annotated link|Minkowski distance}} * {{annotated link|Orthogonal basis}} * {{annotated link|Orthogonal complement}} * {{annotated link|Orthonormal basis}} * [[Riemannian manifold]] ==Notes== {{reflist|group="Note"|refs=<!-- <ref group="Note" name=ConjugateNotation>A line over an expression or symbol, such as <math>\overline{s}</math> or <math>\overline{\langle y, x \rangle},</math> denotes [[Complex conjugate|complex conjugation]]. A scalar <math>s</math> is real if and only if <math>s = \overline{s}.</math></ref> <ref group="Note" name=DefAsPosDefSesquilinear>This is because {{EquationNote|Additivity in the 1st argument|condition (1)}} (that is, linearity in the first argument) and {{EquationNote|Positive definite|positive definiteness}} implies that <math>\langle x, x \rangle</math> is always a real number. 
And as mentioned before, a sesquilinear form is Hermitian if and only if <math>\langle x, x \rangle</math> is real for all <math>x.</math></ref> <ref group="Note" name=DefByPolarization>Let <math>R(x, y) := \frac{1}{4} \left(\|x + y\|^2 - \|x - y\|^2\right).</math> If <math>\mathbb{F} = \R</math> then let <math>\langle x,\, y \rangle_P := R(x, y)</math> while if <math>\mathbb{F} = \C</math> then let <math>\langle x,\, y \rangle_P := R(x, y) + i R(x, i y).</math> See the [[polarization identity]] article for more details.</ref> <ref group="Note: name=SuggestsConjHom>If <math>\langle x,\, c y \rangle</math> can be written as <math>\langle x,\, c y \rangle = f(c, y) \langle x,\, y \rangle</math> for some function <math>f</math> (in particular, this assumes that the scalar in front of <math>\langle x,\, y \rangle</math> that results from trying to "pull <math>c</math> out of <math>\langle x,\, c y \rangle</math>" does not depend on <math>x</math>) then <math>\langle y,\, c y \rangle = \overline{c} \langle y,\, y \rangle</math> implies that <math>f(c, y) = \overline{c}</math> (when <math>y \neq 0</math>) and consequently, <math>\langle x,\, c y \rangle = \overline{c} \langle x,\, y \rangle</math> will hold for all <math>x, y, \text{ and } c.</math></ref> --> }} <!-- '''Proofs''' {{reflist|group=proof|refs= <ref group=proof name=ZeroVecProduces0AndRationalHomogeneousProof>{{EquationNote|Homogeneity in the 1st argument}} implies <math> \langle q x, y \rangle = 0 \langle x, y \rangle</math> for all rational <math>q</math> so that <math>\langle \mathbf{0}, y \rangle = \langle 0 y, y \rangle = 0 \langle y, y \rangle = 0.</math> <math>\blacksquare</math> Assume {{EquationNote|Additivity in the 1st argument|additivity in the 1st argument}}. Then <math>\langle \mathbf{0}, y \rangle = \langle \mathbf{0} + \mathbf{0}, y \rangle = \langle \mathbf{0}, y \rangle + \langle \mathbf{0}, y \rangle</math> so adding <math>- \langle \mathbf{0}, y \rangle</math> to both sides proves <math>\langle \mathbf{0}, y \rangle = 0.</math> Consequently, <math>0 = \langle \mathbf{0}, y \rangle = \langle x + (-x), y \rangle = \langle x, y \rangle + \langle -x, y \rangle,</math> which implies <math>\langle - x, y \rangle = - \langle x, y \rangle.</math> Induction shows that <math>\langle m x, y \rangle = m \langle x, y \rangle</math> for all integers <math>m.</math> If <math>n > 0</math> is an integer then <math>\langle x, y \rangle = \langle n \left(\tfrac{1}{n} x\right), y \rangle = n \langle \tfrac{1}{n} x, y \rangle</math> so that <math>\langle \tfrac{1}{n} x, y \rangle = \tfrac{1}{n} \langle x, y \rangle.</math> It follows that <math>\langle q x, y \rangle = q \langle x, y \rangle</math> for all rational <math>q \in \Q.</math> <math>\blacksquare</math> An analogous proof show that {{EquationNote|Additivity in the 2nd argument|additivity in the 2nd argument}} and {{EquationNote|Conjugate homogeneity in the 2nd argument|conjugate homogeneity in the 2nd argument}} each individually imply that <math>\langle x, q y \rangle = q \langle x, y \rangle</math> for all rational <math>q \in \Q.</math></ref> <ref group=proof name=SesqHermEquivProof>Assume that it is a sesquilinear form that satisfies <math>\langle x, x \rangle \in \R</math> for all <math>x.</math> To conclude that <math>\langle x, y \rangle = \overline{\langle y, x \rangle},</math> it is necessary and sufficient to show that the real parts of <math>\langle y, x \rangle</math> and <math>\langle x, y \rangle</math> are equal and that their imaginary parts are negatives of 
each other. For all <math>x, y,</math> because <math>\langle x + y, x + y \rangle - \langle x, x \rangle - \langle y, y \rangle = \langle y, x \rangle + \langle x, y \rangle</math> and the left hand side is real, <math>\langle y, x \rangle + \langle x, y \rangle</math> is also real, which implies that the <math>0 = \operatorname{im} \left[\langle y, x \rangle + \langle x, y \rangle\right] = \left(\operatorname{im} \langle y, x \rangle\right) + \operatorname{im} \langle x, y \rangle.</math> Similarly, <math>\langle i y, x \rangle + \langle x, i y \rangle \in \R.</math> But sesquilinearity implies <math>\langle i y, x \rangle + \langle x, i y \rangle = i (\langle y, x \rangle - \langle x, y \rangle),</math> which is only possible if the real parts of <math>\langle y, x \rangle</math> and <math>\langle x, y \rangle</math> are equal. <math>\blacksquare</math></ref> <ref group=proof name=HermSymImpliesRealProof>A complex number <math>c</math> is a real number if and only if <math>c = \overline{c}.</math> Using <math>y = x</math> in {{EquationNote|Conjugate symmetry|condition (2)}} gives <math>\langle x, x \rangle = \overline{\langle x, x \rangle},</math> which implies that <math>\langle x, x \rangle</math> is a real number. <math>\blacksquare</math></ref> <ref group=proof name=BilinearRangeIsCProof>Assume that <math>\langle \,\cdot, \cdot\, \rangle</math> is a [[bilinear map]] and that <math>x \in V</math> satisfies <math>\langle x, x \rangle \neq 0.</math> Let <math>N : \mathbb{F} \to \mathbb{F}</math> be defined by <math>N(c) := \langle c x, c x \rangle</math> where bilinearity implies that <math>N(c) = \langle c x, c x \rangle = c^2 \langle x, x \rangle = c^2 N(1)</math> holds for all scalars <math>c.</math> Since <math>N(1) = \langle x, x \rangle \neq 0,</math> the scalar <math>1/N(1)</math> is well-defined and so <math>N(c) = 0</math> if and only if <math>c = 0.</math> If <math>c \in \Complex</math> is a scalar such that <math>c^2 \not\in \R</math> (which implies <math>c \neq 0</math> and <math>\frac{1}{c^2} \not\in \R</math>) then <math>N(1) \in \R</math> implies <math>N(c) = c^2 N(1) \not\in \R</math> and similarly, <math>N(c) \in \R</math> implies <math>N(1) = \frac{1}{c^2} N(c) \not\in \R;</math> this shows that for such a <math>c,</math> at most one of <math>N(1) \text{ and } N(c)</math> can be real. <math>\blacksquare</math> If <math>\mathbb{F} = \Complex</math> and <math>s \in \mathbb{F}</math> then pick <math>c \in \Complex</math> such that <math>c^2 = \frac{s}{N(1)},</math> which implies that <math>N(c) = c^2 N(1) = \frac{s}{N(1)} N(1) = s;</math> thus <math>N(\Complex) = \Complex</math> so <math>N : \Complex \to \Complex</math> is surjective. If <math>\mathbb{F} = \R</math> and <math>R(1) > 0</math> (resp. <math>R(1) < 0</math>) then for any <math>s \geq 0</math> (resp. any <math>s \leq 0</math>), <math>N\left(\sqrt{s/N(1)}\right) = s,</math> which shows that <math>N(\R) = [0, \infty)</math> (resp. <math>N(\R) = (-\infty, 0]</math>). 
<math>\blacksquare</math></ref> <ref group=proof name=parallelogramLawSatisfiedProof>Note that <math>\|x+y\|^2 = \langle x+y, x+y\rangle = \langle x, x\rangle + \langle x, y\rangle + \langle y, x\rangle + \langle y, y\rangle</math> and <math>\|x-y\|^2 = \langle x-y, x-y\rangle = \langle x, x\rangle - \langle x, y\rangle - \langle y, x\rangle + \langle y, y\rangle,</math> which implies that <math>\|x+y\|^2 + \|x-y\|^2 = 2\langle x, x\rangle + 2\langle y, y\rangle = 2\|x\|^2 + 2\|y\|^2.</math> This proves that <math>\|\,\cdot\,\|</math> satisfies the [[parallelogram law]]. It also follows that <math>\|x+y\|^2 = \|x - y\|^2 + 2[\langle x, y \rangle + \langle y, x \rangle],</math> which proves that <math>\langle x, y \rangle + \langle y, x \rangle</math> is a real number and thus that its [[imaginary part]] is <math>0.</math> This implies that <math>\operatorname{im} \langle x, y \rangle = - \operatorname{im} \langle y, x \rangle.</math> If <math>\mathbb{F} = \Complex</math> then also <math>\langle x, iy \rangle + \langle iy, x \rangle = -[\langle ix, y \rangle + \langle y, ix \rangle].</math> <math>\blacksquare</math></ref> <ref group=proof name=InnerProductOfxANDixProof>Combining <math>\|i x\| = |i| \|x\| = \|x\|</math> and <math>2\|x\|^2 = |1+i|^2 \, \|x\|^2 = \|(1+i)x\|^2 = \langle x + i x, x + i x \rangle = \|x\|^2 + \langle x, ix \rangle + \langle i x, x \rangle + \|ix\|^2</math> proves that <math>0 = \langle x, ix \rangle + \langle i x, x \rangle.</math> <math>\blacksquare</math></ref> <ref group=proof name=RealHomIfContinuousProof>Fix <math>x, y \in V.</math> The equality <math>\langle q x, y \rangle = q \langle x, y \rangle</math> will be discussed first. Define <math>L, R : \R \to \mathbb{F}</math> by <math>L(q) := \langle q x, y \rangle</math> and <math>R(q) := q \langle x, y \rangle.</math> Because <math>\langle q x, y \rangle = q \langle x, y \rangle</math> for all <math>q \in \Q,</math> <math>L</math> and <math>R</math> are equal on a [[Dense set|dense subset]] of <math>\R.</math> Since <math>\langle x, y \rangle</math> is constant, the map <math>R : \R \to \mathbb{F}</math> is continuous (where the [[Hausdorff space]] <math>\mathbb{F},</math> which is either <math>\R</math> or <math>\Complex,</math> has its usual [[Euclidean topology]]). Consequently, if <math>L : \R \to \mathbb{F}</math> is also continuous then <math>L</math> and <math>R</math> will necessarily be equal on all of <math>\R;</math> that is, <math>\langle q x, y \rangle = q \langle x, y \rangle</math> will hold for all {{em|real}} <math>q \in \R.</math> If <math>f : \R \to V \text{ and } g : V \to \mathbb{F}</math> are defined by <math>f(q) := q x</math> and <math>g(v) := \langle v, y \rangle</math> then <math>L = g \circ f.</math> So for <math>L</math> to be continuous, it suffices for there to exist some topology <math>\tau</math> on <math>V</math> that makes both <math>f</math> and <math>g</math> continuous (or even just [[sequentially continuous]]). The map <math>f : \R \to (V, \tau)</math> will automatically be continuous if <math>\tau</math> is a [[topological vector space]] topology, such as a topology induced by a norm. The map <math>g : (V, \tau) \to \mathbb{F}</math> will be continuous if <math>\langle \,\cdot, \cdot\, \rangle : V \times V \to \mathbb{F}</math> is [[separately continuous]] (which will be true if <math>\langle \,\cdot, \cdot\, \rangle</math> is continuous). 
The discussion of the equality <math>\langle x, q y \rangle = q \langle x, y \rangle</math> is nearly identical, with the main difference being that <math>L, f, g</math> must be redefined as <math>L(q) := \langle x, q y \rangle,</math> <math>f(q) := q y,</math> and <math>g(v) := \langle x, v \rangle.</math> <math>\blacksquare</math></ref> <ref group=proof name=LinAndHermSymImplyAntilinearProof>Let <math>x, y, z</math> be vectors and let <math>s</math> be a scalar. Then <math>\langle x, s y \rangle = \overline{\langle s y, x \rangle} = \overline{s} \overline{\langle y, x \rangle} = \overline{s} \langle x, y \rangle</math> and <math>\langle x, y + z \rangle = \overline{\langle y + z, x \rangle} = \overline{\langle y, x \rangle} + \overline{\langle z, x \rangle} = \langle x, y \rangle + \langle x, z \rangle.</math> <math>\blacksquare</math></ref> }} --> ==References== {{reflist}} ==Bibliography== * {{Cite book|last1=Axler|first1=Sheldon|title=Linear Algebra Done Right|publisher=[[Springer-Verlag]]|location=Berlin, New York|edition=2nd|isbn=978-0-387-98258-8|year=1997}} * {{cite book|first=Jean|last=Dieudonné|author-link=Jean Dieudonné|title=Treatise on Analysis, Vol. I [Foundations of Modern Analysis]|publisher=[[Academic Press]]|year=1969|isbn=978-1-4067-2791-3|edition=2nd}} * {{Cite book|last1=Emch|first1=Gerard G.|title=Algebraic Methods in Statistical Mechanics and Quantum Field Theory|publisher=[[Wiley-Interscience]]|isbn=978-0-471-23900-0|year=1972}} * {{Halmos A Hilbert Space Problem Book 1982}} <!-- {{sfn|Halmos|1982|pp=}} --> * {{Lax Functional Analysis}} <!-- {{sfn|Lax|2002|p=}} --> * {{Rudin Walter Functional Analysis|edition=2}} <!-- {{sfn|Rudin|1991|p=}} --> * {{Schaefer Wolff Topological Vector Spaces|edition=2}} <!-- {{sfn|Schaefer|Wolff|1999|p=}} --> * {{Schechter Handbook of Analysis and Its Foundations}} <!-- {{sfn|Schechter|1996|p=}} --> * {{Swartz An Introduction to Functional Analysis}} <!-- {{sfn|Swartz|1992|p=}} --> * {{Trèves François Topological vector spaces, distributions and kernels}} <!-- {{sfn|Trèves|2006|p=}} --> * {{Cite book|last1=Young|first1=Nicholas|title=An Introduction to Hilbert Space|publisher=[[Cambridge University Press]]|isbn=978-0-521-33717-5|year=1988}} * Zamani, A.; Moslehian, M. S.; Frank, M. (2015). "Angle Preserving Mappings". ''Journal of Analysis and Applications'' 34: 485–500. {{doi|10.4171/ZAA/1551}} <!-- AWB bots, please leave this space alone. --> {{linear algebra}} {{Functional analysis}} {{HilbertSpace}} [[Category:Normed spaces]] [[Category:Bilinear forms]]