==Solutions to variational problems==
{{see also|Average#Summary of types}}
Several measures of central tendency can be characterized as solving a variational problem, in the sense of the [[calculus of variations]], namely minimizing variation from the center. That is, given a measure of [[statistical dispersion]], one asks for a measure of central tendency that minimizes variation: such that variation from the center is minimal among all choices of center. In a quip, "dispersion precedes location". These measures are initially defined in one dimension, but can be generalized to multiple dimensions. This center may or may not be unique. In the sense of [[Lp space|{{math|{{var|L}}<sup>{{var|p}}</sup>}} spaces]], the correspondence is:

{| class="wikitable"
! {{math|''L''<sup>{{var|p}}</sup>}} !! dispersion !! central tendency
|-
! {{math|{{var|L}}<sup>0</sup>}}
| [[variation ratio]]
| [[Mode (statistics)|mode]]{{efn|Unlike the other measures, the mode does not require any geometry on the set, and thus applies equally in one dimension, multiple dimensions, or even for [[categorical variable]]s.}}
|-
! {{math|{{var|L}}<sup>1</sup>}}
| [[average absolute deviation]]
| [[median]] ([[geometric median]]){{efn|The median is only defined in one dimension; the geometric median is a multidimensional generalization.}}
|-
! {{math|{{var|L}}<sup>2</sup>}}
| [[standard deviation]]
| [[mean]] ([[centroid]]){{efn|The mean can be defined identically for vectors in multiple dimensions as for scalars in one dimension; the multidimensional form is often called the centroid.}}
|-
! {{math|{{var|L}}<sup>∞</sup>}}
| [[maximum deviation]]
| [[midrange]]{{efn|In multiple dimensions, the midrange can be defined coordinate-wise (taking the midrange of each coordinate), though this is not common.}}
|}

The associated functions are called [[p-norm|{{math|{{var|p}}}}-norms]]: respectively the 0-"norm", 1-norm, 2-norm, and ∞-norm. The function corresponding to the {{var|L}}<sup>0</sup> space is not a norm, and is thus often written in quotes: 0-"norm".

In equations, for a given (finite) data set {{math|X}}, thought of as a vector {{math|{{strong|x}} {{=}} ({{var|x}}{{sub|1}},…,{{var|x}}{{sub|{{var|n}}}})}}, the dispersion about a point {{math|{{strong|c}}}} is the "distance" from {{math|{{strong|x}}}} to the constant vector {{math|{{strong|c}} {{=}} ({{var|c}},…,{{var|c}})}} in the {{var|p}}-norm (normalized by the number of points {{var|n}}):

:<math>f_p(c) = \left\| \mathbf{x} - \mathbf{c} \right\|_p := \bigg( \frac{1}{n} \sum_{i=1}^n \left| x_i - c\right| ^p \bigg) ^{1/p}</math>

For {{math|{{var|p}} {{=}} 0}} and {{math|{{var|p}} {{=}} ∞}} these functions are defined by taking limits, respectively as {{math|{{var|p}} → 0}} and {{math|{{var|p}} → ∞}}. For {{math|{{var|p}} {{=}} 0}} the limiting values are {{math|0<sup>0</sup> {{=}} 0}} and {{math|{{var|a}}<sup>0</sup> {{=}} 1}} for {{math|{{var|a}} ≠ 0}}: each term contributes 0 or 1 according to whether the point equals the center, so the 0-"norm" counts the number of ''unequal'' points. For {{math|{{var|p}} {{=}} ∞}} the largest difference dominates, and thus the ∞-norm is the maximum difference.
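This correspondence can be checked numerically. The following Python sketch (the data set and the helper function <code>dispersion</code> are illustrative, not drawn from any library) evaluates the dispersion about a candidate center for each {{math|{{var|p}}}}, handling {{math|{{var|p}} {{=}} 0}} and {{math|{{var|p}} {{=}} ∞}} as the limiting cases described above, and checks that each named center does no worse than a nearby competitor:

<syntaxhighlight lang="python">
import statistics

def dispersion(x, c, p):
    """Dispersion of the data x about the center c: the p-norm of the
    deviations, normalized by the number of points. The cases p = 0 and
    p = float("inf") are the limits (fraction of points unequal to c,
    and maximum deviation, respectively)."""
    devs = [abs(xi - c) for xi in x]
    if p == 0:
        return sum(d != 0 for d in devs) / len(x)   # variation ratio
    if p == float("inf"):
        return max(devs)                            # maximum deviation
    return (sum(d ** p for d in devs) / len(x)) ** (1 / p)

x = [1, 2, 2, 3, 9]                        # illustrative data set
centers = {
    0: statistics.mode(x),                 # mode minimizes the 0-"norm"
    1: statistics.median(x),               # median minimizes the 1-norm
    2: statistics.mean(x),                 # mean minimizes the 2-norm
    float("inf"): (min(x) + max(x)) / 2,   # midrange minimizes the ∞-norm
}

for p, c in centers.items():
    # Each named center should do no worse than a shifted competitor.
    assert dispersion(x, c, p) <= dispersion(x, c + 0.5, p)
    print(f"p = {p}: center = {c}, dispersion = {dispersion(x, c, p):.3f}")
</syntaxhighlight>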
===Uniqueness===
The mean (''L''<sup>2</sup> center) and midrange (''L''<sup>∞</sup> center) are unique (when they exist), while the median (''L''<sup>1</sup> center) and mode (''L''<sup>0</sup> center) are not in general unique. This can be understood in terms of [[convex function|convexity]] of the associated functions ([[coercive function]]s).

The 2-norm and ∞-norm are [[strictly convex function|strictly convex]], and thus (by convex optimization) the minimizer is unique (if it exists), and exists for bounded distributions. Thus the standard deviation about the mean is lower than the standard deviation about any other point, and the maximum deviation about the midrange is lower than the maximum deviation about any other point.

The 1-norm is convex but not ''strictly'' convex, whereas strict convexity is needed to ensure uniqueness of the minimizer. Correspondingly, the median (in this sense of minimizing) is not in general unique, and in fact any point between the two central points of a discrete distribution minimizes the average absolute deviation.

The 0-"norm" is not convex (hence not a norm). Correspondingly, the mode is not unique – for example, in a uniform distribution ''any'' point is the mode.
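The non-uniqueness of the 1-norm minimizer can be seen directly. In the following sketch (with an illustrative four-point data set), every center between the two middle values attains the same average absolute deviation, while centers outside that interval do strictly worse:

<syntaxhighlight lang="python">
x = [1, 2, 3, 4]   # even-sized data set: the middle values are 2 and 3

def avg_abs_dev(c):
    """Average absolute deviation of x about the center c."""
    return sum(abs(xi - c) for xi in x) / len(x)

# Every center between the middle values attains the minimum value 1.0.
for c in [2.0, 2.25, 2.5, 2.75, 3.0]:
    assert avg_abs_dev(c) == 1.0
# Centers outside the interval [2, 3] do strictly worse.
assert avg_abs_dev(1.5) > 1.0 and avg_abs_dev(3.5) > 1.0
</syntaxhighlight>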
===Clustering===
Instead of a single central point, one can ask for multiple points such that the variation from these points is minimized. This leads to [[cluster analysis]], where each point in the data set is clustered with the nearest "center". Most commonly, using the 2-norm generalizes the mean to [[k-means clustering|''k''-means clustering]], while using the 1-norm generalizes the (geometric) median to [[k-medians clustering|''k''-medians clustering]]. Using the 0-"norm" simply generalizes the mode (the most common value) to using the ''k'' most common values as centers.

Unlike the single-center statistics, this multi-center clustering cannot in general be computed in a [[closed-form expression]], and instead must be computed or approximated by an [[iterative method]]; one general approach is [[expectation–maximization algorithm]]s.
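As an illustration of the iterative approach, the following sketch implements a Lloyd-style iteration in one dimension: points are assigned to the nearest center, then each center is recomputed from its cluster. Passing the mean as the center function gives a basic ''k''-means iteration; passing the median gives ''k''-medians. (The data set, initial centers, and function names are illustrative.)

<syntaxhighlight lang="python">
from statistics import mean, median

def cluster_1d(x, centers, center_fn, iterations=20):
    """Lloyd-style iteration in one dimension: assign each point to its
    nearest center, then recompute each center with center_fn.
    center_fn = mean gives k-means; center_fn = median gives k-medians."""
    for _ in range(iterations):
        groups = {c: [] for c in centers}
        for xi in x:
            nearest = min(centers, key=lambda c: abs(xi - c))
            groups[nearest].append(xi)
        # Recompute each center from its cluster, dropping empty clusters.
        centers = [center_fn(g) for g in groups.values() if g]
    return sorted(centers)

x = [1, 2, 3, 10, 11, 12, 100]
print(cluster_1d(x, [0.0, 50.0], mean))    # two centers via k-means
print(cluster_1d(x, [0.0, 50.0], median))  # two centers via k-medians
</syntaxhighlight>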
===Information geometry===
The notion of a "center" as minimizing variation can be generalized in [[information geometry]] as a distribution that minimizes [[divergence (statistics)|divergence]] (a generalized distance) from a data set. The most common case is [[maximum likelihood estimation]], where the maximum likelihood estimate (MLE) maximizes likelihood (minimizes expected [[surprisal]]), which can be interpreted geometrically by using [[Entropy (statistics)|entropy]] to measure variation: the MLE minimizes [[cross-entropy]] (equivalently, [[relative entropy]] or Kullback–Leibler divergence).

A simple example of this is for the center of nominal data: instead of using the mode (the only single-valued "center"), one often uses the [[empirical measure]] (the [[frequency distribution]] divided by the [[sample size]]) as a "center". For example, given [[binary data]], say heads or tails, if a data set consists of 2 heads and 1 tail, then the mode is "heads", but the empirical measure is 2/3 heads, 1/3 tails, which minimizes the cross-entropy (total surprisal) from the data set. This perspective is also used in [[regression analysis]], where [[least squares]] finds the solution that minimizes the sum of squared distances from it, and analogously in [[logistic regression]], where the maximum likelihood estimate minimizes the surprisal (information distance).
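The heads-and-tails example can be verified numerically: among candidate probabilities {{math|{{var|q}}}} of heads, the empirical frequency 2/3 minimizes the total surprisal (cross-entropy) of the data set. The following sketch uses an illustrative grid search:

<syntaxhighlight lang="python">
import math

data = [1, 1, 0]   # the example above: two heads (1), one tail (0)

def total_surprisal(q):
    """Cross-entropy of the data under a model assigning probability q
    to heads: the sum of -log(assigned probability) over observations."""
    return sum(-math.log(q if xi == 1 else 1 - q) for xi in data)

# Grid search over candidate probabilities; the minimizer is the
# candidate nearest the empirical frequency 2/3.
candidates = [i / 1000 for i in range(1, 1000)]
print(min(candidates, key=total_surprisal))   # 0.667
</syntaxhighlight>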