{{Short description|Observation far apart from others in statistics and data science}}
{{about|the statistical term||}}
{{Use American English|date = January 2019}}
[[Image:Michelsonmorley-boxplot.svg|thumb|Figure 1. [[Box plot]] of data from the [[Michelson–Morley experiment]] displaying four outliers in the middle column, as well as one outlier in the first column.]]
In [[statistics]], an '''outlier''' is a [[data point]] that differs significantly from other observations.<ref>{{Cite journal |last=Grubbs |first=F. E. |date=February 1969 |title=Procedures for detecting outlying observations in samples |journal=Technometrics |volume=11 |issue=1 |pages=1–21 |doi=10.1080/00401706.1969.10490657 |quote=An outlying observation, or "outlier," is one that appears to deviate markedly from other members of the sample in which it occurs.}}</ref><ref>{{cite book |last=Maddala |first=G. S. |author-link=G. S. Maddala |chapter=Outliers |title=Introduction to Econometrics |location=New York |publisher=MacMillan |edition=2nd |year=1992 |isbn=978-0-02-374545-4 |pages=[https://archive.org/details/introductiontoec00madd/page/89 89] |quote=An outlier is an observation that is far removed from the rest of the observations. |chapter-url=https://books.google.com/books?id=nBS3AAAAIAAJ&pg=PA89 |url=https://archive.org/details/introductiontoec00madd/page/89 }}</ref> An outlier may be due to variability in the measurement, an indication of novel data, or the result of experimental error; the latter are sometimes excluded from the [[data set]].<ref name="pimentel2014">Pimentel, M. A., Clifton, D. A., Clifton, L., & Tarassenko, L. (2014). A review of novelty detection. Signal Processing, 99, 215–249.</ref><ref>{{harvnb|Grubbs|1969|p=1}} stating "An outlying observation may be merely an extreme manifestation of the random variability inherent in the data. ... On the other hand, an outlying observation may be the result of gross deviation from prescribed experimental procedure or an error in calculating or recording the numerical value."</ref> An outlier can be an indication of an exciting possibility, but can also cause serious problems in statistical analyses.

Outliers can occur by chance in any distribution, but they can indicate novel behaviour or structures in the data set, [[measurement error]], or a population with a [[heavy-tailed distribution]]. In the case of measurement error, one wishes to discard the outliers or use statistics that are [[robust statistics|robust]] to them, while in the case of heavy-tailed distributions, they indicate that the distribution has high [[skewness]] and that one should be very cautious in using tools or intuitions that assume a [[normal distribution]]. A frequent cause of outliers is a mixture of two distributions, which may be two distinct sub-populations, or may indicate 'correct trial' versus 'measurement error'; this is modeled by a [[mixture model]].

In most larger samplings of data, some data points will be further away from the [[Arithmetic mean|sample mean]] than what is deemed reasonable. This can be due to incidental [[systematic error]] or flaws in the [[theory]] that generated an assumed family of [[probability distribution]]s, or it may be that some observations are far from the center of the data. Outlier points can therefore indicate faulty data, erroneous procedures, or areas where a certain theory might not be valid. However, in large samples, a small number of outliers is to be expected (and is not due to any anomalous condition).
Outliers, being the most extreme observations, may include the [[sample maximum]] or [[sample minimum]], or both, depending on whether they are extremely high or low. However, the sample maximum and minimum are not always outliers because they may not be unusually far from other observations.

Naive interpretation of statistics derived from data sets that include outliers may be misleading. For example, if one is calculating the [[average]] temperature of 10 objects in a room, and nine of them are between 20 and 25 [[degrees Celsius]], but an oven is at 175&nbsp;°C, the [[median]] of the data will be between 20 and 25&nbsp;°C but the mean temperature will be between 35.5 and 40&nbsp;°C. In this case, the median better reflects the temperature of a randomly sampled object (but not the temperature in the room) than the mean; naively interpreting the mean as "a typical sample", equivalent to the median, is incorrect. As illustrated in this case, outliers may indicate data points that belong to a different [[Statistical population|population]] than the rest of the [[Sample (statistics)|sample]] set.

[[Estimator]]s capable of coping with outliers are said to be robust: the median is a robust statistic of [[central tendency]], while the mean is not.<ref>Ripley, Brian D. 2004. [http://www.stats.ox.ac.uk/pub/StatMeth/Robust.pdf Robust statistics] {{webarchive|url=https://web.archive.org/web/20121021081319/http://www.stats.ox.ac.uk/pub/StatMeth/Robust.pdf |date=2012-10-21 }}</ref>

== Occurrence and causes ==
[[File:Standard_deviation_diagram_micro.svg|thumb|250px|Relative probabilities in a normal distribution]]
In the case of [[normal distribution|normally distributed]] data, the [[three sigma rule]] means that roughly 1 in 22 observations will differ by twice the [[standard deviation]] or more from the mean, and 1 in 370 will deviate by three times the standard deviation.<ref>{{cite book|last1=Ruan|first1=Da|author1-link=Da Ruan|last2=Chen|first2=Guoqing|last3=Kerre|first3=Etienne|editor1-last=Wets|editor1-first=G.|title=Intelligent Data Mining: Techniques and Applications|url=https://archive.org/details/intelligentdatam00ruan_742|url-access=limited|date=2005|publisher=Springer|isbn=978-3-540-26256-5|page=[https://archive.org/details/intelligentdatam00ruan_742/page/n326 318]|series=Studies in Computational Intelligence Vol. 5}}</ref> In a sample of 1000 observations, the presence of up to five observations deviating from the mean by more than three times the standard deviation is within the range of what can be expected, being less than twice the expected number and hence within 1 standard deviation of the expected number – see [[Poisson distribution]] – and does not indicate an anomaly. If the sample size is only 100, however, just three such outliers are already reason for concern, being more than 11 times the expected number.

In general, if the nature of the population distribution is known ''a priori'', it is possible to test whether the number of outliers deviates [[Statistical significance|significant]]ly from what can be expected: for a given cutoff (so samples fall beyond the cutoff with probability ''p'') of a given distribution, the number of outliers will follow a [[binomial distribution]] with parameter ''p'', which can generally be well-approximated by the [[Poisson distribution]] with λ&nbsp;=&nbsp;''pn''. Thus, if one takes a normal distribution with a cutoff of 3 standard deviations from the mean, ''p'' is approximately 0.3%, and so for 1000 trials one can approximate the number of samples whose deviation exceeds 3 sigmas by a Poisson distribution with λ&nbsp;=&nbsp;3.
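For illustration, the counts above can be checked with a short Python sketch (a minimal example assuming SciPy is available; the sample sizes and thresholds are the ones discussed above):

<syntaxhighlight lang="python">
# Expected number of observations beyond 3 standard deviations in n
# normal draws, with the Poisson approximation lambda = p * n.
from scipy.stats import norm, poisson

p = 2 * norm.sf(3)  # P(|z| > 3) for a standard normal, about 0.0027
for n, k in ((1000, 5), (100, 3)):
    lam = p * n
    # poisson.sf(k - 1, lam) = P(at least k three-sigma points);
    # large for n=1000 (unremarkable), tiny for n=100 (concerning).
    print(f"n={n}: lambda = {lam:.2f}; "
          f"P(at least {k}) = {poisson.sf(k - 1, lam):.4f}")
</syntaxhighlight>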
=== Causes ===
Outliers can have many anomalous causes. A physical apparatus for taking measurements may have suffered a transient malfunction. There may have been an error in data transmission or transcription. Outliers can arise from changes in system behaviour, fraudulent behaviour, human error, or instrument error, or simply through natural deviations in populations. A sample may have been contaminated with elements from outside the population being examined. Alternatively, an outlier could be the result of a flaw in the assumed theory, calling for further investigation by the researcher. Additionally, the pathological appearance of outliers of a certain form appears in a variety of datasets, indicating that the causative mechanism for the data might differ at the extreme end ([[King effect]]).

== Definitions and detection ==
There is no rigid mathematical definition of what constitutes an outlier; determining whether or not an observation is an outlier is ultimately a subjective exercise.<ref name="ZimekFilzmoser2018">{{cite journal|last1=Zimek|first1=Arthur|last2=Filzmoser|first2=Peter|title=There and back again: Outlier detection between statistical reasoning and data mining algorithms|journal=Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery|volume=8|issue=6|year=2018|pages=e1280|issn=1942-4787|doi=10.1002/widm.1280|s2cid=53305944|url=https://findresearcher.sdu.dk:8443/ws/files/153197807/There_and_Back_Again.pdf|access-date=2019-12-11|archive-date=2021-11-14|archive-url=https://web.archive.org/web/20211114121638/https://findresearcher.sdu.dk:8443/ws/files/153197807/There_and_Back_Again.pdf|url-status=dead}}</ref> There are various methods of outlier detection, some of which are treated as synonymous with novelty detection.<ref name="pimentel2014" /><ref>{{citation |last1=Rousseeuw |first1=P |author1-link=Peter Rousseeuw |last2=Leroy |first2=A. |year=1996 |title=Robust Regression and Outlier Detection |publisher=John Wiley & Sons |edition=3rd |title-link=Robust Regression and Outlier Detection}}</ref><ref>{{citation |first1=Victoria J. |last1=Hodge |first2=Jim |last2=Austin |title=A Survey of Outlier Detection Methodologies |journal=Artificial Intelligence Review |volume=22 |issue=2 |pages=85–126 |doi=10.1023/B:AIRE.0000045502.10941.a9 |year=2004 |citeseerx=10.1.1.109.1943 |s2cid=3330313 }}</ref><ref>{{Citation | last1 = Barnett | first1 = Vic | last2 = Lewis | first2 = Toby | year = 1994 | orig-year = 1978 | title = Outliers in Statistical Data | edition = 3 | publisher = Wiley | isbn =978-0-471-93094-5}}</ref><ref name="subspace" /> Some are graphical, such as [[normal probability plot]]s; others are model-based. [[Box plot]]s are a hybrid.
Model-based methods which are commonly used for identification assume that the data are from a normal distribution, and identify observations which are deemed "unlikely" based on mean and standard deviation:
* [[Chauvenet's criterion]]
* [[Grubbs's test for outliers]]
* [[Dixon's Q test|Dixon's ''Q'' test]]
* [[ASTM]] E178: Standard Practice for Dealing With Outlying Observations<ref>[https://www.nrc.gov/docs/ML1023/ML102371244.pdf E178: Standard Practice for Dealing With Outlying Observations]</ref>
* [[Mahalanobis distance]] and [[leverage (statistics)|leverage]] are often used to detect outliers, especially in the development of linear regression models.
* Subspace and correlation based techniques for high-dimensional numerical data<ref name="subspace">{{cite journal | last1 = Zimek | first1 = A. | last2 = Schubert | first2 = E.| last3 = Kriegel | first3 = H.-P. | author-link3=Hans-Peter Kriegel| title = A survey on unsupervised outlier detection in high-dimensional numerical data | doi = 10.1002/sam.11161 | journal = Statistical Analysis and Data Mining | volume = 5 | issue = 5 | pages = 363–387| year = 2012| s2cid = 6724536 }}</ref>
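The tests in this list share a common pattern: standardize the most extreme deviation from the mean and compare it with a critical value. As an illustration, the following Python sketch implements Grubbs's test for a single outlier (the data values are invented for the example; a vetted statistical library should be preferred in practice):

<syntaxhighlight lang="python">
# Grubbs's test for one outlier: compare the largest standardized
# deviation from the mean against a critical value derived from the
# Student t distribution.
import numpy as np
from scipy import stats

def grubbs_statistic(x):
    x = np.asarray(x, dtype=float)
    return np.max(np.abs(x - x.mean())) / x.std(ddof=1)

def grubbs_critical(n, alpha=0.05):
    # Two-sided critical value for sample size n.
    t = stats.t.ppf(1 - alpha / (2 * n), n - 2)
    return (n - 1) / np.sqrt(n) * np.sqrt(t**2 / (n - 2 + t**2))

x = [20.1, 21.5, 22.0, 23.3, 24.9, 175.0]  # room temperatures plus an oven
print(grubbs_statistic(x) > grubbs_critical(len(x)))  # True: flagged
</syntaxhighlight>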
=== Peirce's criterion ===
{{main|Peirce's criterion}}
<blockquote>
It is proposed to determine in a series of <math>m</math> observations the limit of error, beyond which all observations involving so great an error may be rejected, provided there are as many as <math>n</math> such observations. The principle upon which it is proposed to solve this problem is, that the proposed observations should be rejected when the probability of the system of errors obtained by retaining them is less than that of the system of errors obtained by their rejection multiplied by the probability of making so many, and no more, abnormal observations. (Quoted in the editorial note on page 516 to Peirce (1982 edition) from ''A Manual of Astronomy'' 2:558 by Chauvenet.)<ref>[[Benjamin Peirce]], [http://articles.adsabs.harvard.edu/cgi-bin/nph-iarticle_query?1852AJ......2..161P;data_type=PDF_HIGH "Criterion for the Rejection of Doubtful Observations"], ''Astronomical Journal'' II 45 (1852) and [http://articles.adsabs.harvard.edu/cgi-bin/nph-iarticle_query?1852AJ......2..176P;data_type=PDF_HIGH Errata to the original paper].</ref><ref>{{cite journal |title=On Peirce's criterion |author-link=Benjamin Peirce |first=Benjamin |last=Peirce |journal=Proceedings of the American Academy of Arts and Sciences |volume=13 |date=May 1877 – May 1878 |pages=348–351 |jstor=25138498 |doi=10.2307/25138498 }}</ref><ref>{{cite journal |first=Charles Sanders |last=Peirce |author-link=Charles Sanders Peirce |title=Appendix No. 21. On the Theory of Errors of Observation |journal=Report of the Superintendent of the United States Coast Survey Showing the Progress of the Survey During the Year 1870 |orig-year=1870 |year=1873 |pages=200–224 }}. NOAA [http://docs.lib.noaa.gov/rescue/cgs/001_pdf/CSC-0019.PDF#page=215 PDF Eprint] (goes to Report p. 200, PDF's p. 215).</ref><ref>{{cite book |first=Charles Sanders |last=Peirce |author-link=Charles Sanders Peirce |contribution=On the Theory of Errors of Observation |title=Writings of Charles S. Peirce: A Chronological Edition |volume=3, 1872–1878 |editor=Kloesel, Christian J. W. |display-editors=etal |publisher=Indiana University Press |location=Bloomington, Indiana |orig-year=1982 |year=1986 <!-- copyright=1986, but publication is listed as 1982 --> |pages=[https://archive.org/details/writingsofcharle0002peir/page/140 140–160] |isbn=978-0-253-37201-7 |url=https://archive.org/details/writingsofcharle0002peir/page/140 }} – Appendix 21, according to the editorial note on page 515</ref>
</blockquote>

=== Tukey's fences ===
Other methods flag observations based on measures such as the [[interquartile range]]. For example, if <math>Q_1</math> and <math>Q_3</math> are the lower and upper [[quartile]]s respectively, then one could define an outlier to be any observation outside the range:
:<math> \big[ Q_1 - k (Q_3 - Q_1 ) , Q_3 + k (Q_3 - Q_1 ) \big]</math>
for some nonnegative constant <math>k</math>. [[John Tukey]] proposed this test, where <math>k=1.5</math> indicates an "outlier", and <math>k=3</math> indicates data that is "far out".<ref>{{cite book |last=Tukey |first=John W |title=Exploratory Data Analysis |year=1977 |publisher=Addison-Wesley |isbn=978-0-201-07616-5 |oclc=3058187 |url=https://archive.org/details/exploratorydataa00tuke_0 }}</ref>
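A minimal Python sketch of Tukey's fences (the data values are invented for the example):

<syntaxhighlight lang="python">
# Tukey's fences: flag points outside [Q1 - k*IQR, Q3 + k*IQR].
import numpy as np

def tukey_fences(x, k=1.5):
    x = np.asarray(x, dtype=float)
    q1, q3 = np.percentile(x, [25, 75])
    iqr = q3 - q1
    return x[(x < q1 - k * iqr) | (x > q3 + k * iqr)]

data = [2.1, 2.4, 2.5, 2.7, 2.9, 3.0, 9.8]
print(tukey_fences(data))        # [9.8] is an "outlier" (k = 1.5)
print(tukey_fences(data, k=3))   # [9.8] is also "far out" (k = 3)
</syntaxhighlight>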
=== In anomaly detection ===
{{main|Anomaly detection}}
In various domains such as, but not limited to, [[statistics]], [[signal processing]], [[finance]], [[econometrics]], [[manufacturing]], [[Network science|networking]] and [[data mining]], the task of ''anomaly detection'' may take other approaches. Some of these may be distance-based<ref>{{Cite journal | doi = 10.1007/s007780050006| title = Distance-based outliers: Algorithms and applications| journal = The VLDB Journal the International Journal on Very Large Data Bases| volume = 8| issue = 3–4| pages = 237| year = 2000| last1 = Knorr | first1 = E. M. | last2 = Ng | first2 = R. T. | last3 = Tucakov | first3 = V. | citeseerx = 10.1.1.43.1842| s2cid = 11707259}}</ref><ref>{{Cite conference | doi = 10.1145/342009.335437| title = Efficient algorithms for mining outliers from large data sets| conference = Proceedings of the 2000 ACM SIGMOD international conference on Management of data - SIGMOD '00| pages = 427| year = 2000| last1 = Ramaswamy | first1 = S. | last2 = Rastogi | first2 = R. | last3 = Shim | first3 = K. | isbn = 1581132174}}</ref> and density-based, such as the [[Local Outlier Factor]] (LOF).<ref>{{Cite conference| doi = 10.1145/335191.335388| title = LOF: Identifying Density-based Local Outliers| year = 2000| last1 = Breunig | first1 = M. M.| last2 = Kriegel | first2 = H.-P. | author-link2 = Hans-Peter Kriegel| last3 = Ng | first3 = R. T.| last4 = Sander | first4 = J.| work = Proceedings of the 2000 ACM SIGMOD International Conference on Management of Data| series = [[SIGMOD]]| isbn = 1-58113-217-4| pages = 93–104| url = http://www.dbs.ifi.lmu.de/Publikationen/Papers/LOF.pdf}}</ref> Some approaches may use the distance to the [[k-nearest neighbor]]s to label observations as outliers or non-outliers.<ref>{{Cite journal | last1 = Schubert | first1 = E. | last2 = Zimek | first2 = A. | last3 = Kriegel | first3 = H.-P. | author-link3 = Hans-Peter Kriegel| doi = 10.1007/s10618-012-0300-z | title = Local outlier detection reconsidered: A generalized view on locality with applications to spatial, video, and network outlier detection | journal = Data Mining and Knowledge Discovery | volume = 28 | pages = 190–237 | year = 2012 | s2cid = 19036098 }}</ref>

=== Modified Thompson Tau test ===
{{see also|Studentized residual#Distribution}}
The modified Thompson Tau test is a method used to determine if an outlier exists in a data set.<ref>{{Cite web |last=Wheeler |first=Donald J. |date=11 January 2021 |title=Some Outlier Tests: Part 2 |url=https://www.qualitydigest.com/inside/statistics-column/some-outlier-tests-part-2-011121.html |access-date=2025-02-09 |website=Quality Digest |language=en}}</ref> The strength of this method lies in the fact that it takes into account a data set's standard deviation and average, and provides a statistically determined rejection zone, thus offering an objective method to determine if a data point is an outlier.{{Citation needed|reason=Although intuitively appealing, this method appears to be unpublished (it is ''not'' described in Thompson (1985)), so one should use it with caution.|date=October 2016}}<ref>Thompson, R. (1985). "[https://www.jstor.org/stable/2345543?seq=1#page_scan_tab_contents A Note on Restricted Maximum Likelihood Estimation with an Alternative Outlier Model]". Journal of the Royal Statistical Society. Series B (Methodological), Vol. 47, No. 1, pp. 53–55.</ref>

How it works: first, the data set's average is determined. Next, the absolute deviation between each data point and the average is determined. Thirdly, a rejection region is determined using the formula:
:<math>\text{Rejection Region} = \frac{{t_{\alpha/2}}{\left ( n-1 \right )}}{\sqrt{n}\sqrt{n-2+{t_{\alpha/2}^2}}};</math>
where <math>\scriptstyle{t_{\alpha/2}}</math> is the critical value from the Student {{mvar|t}} distribution with ''n''&nbsp;−&nbsp;2 degrees of freedom, ''n'' is the sample size, and ''s'' is the sample standard deviation.

To determine if a value is an outlier, calculate <math>\scriptstyle \delta = |(X - \operatorname{mean}(X)) / s|</math>. If ''δ'' > Rejection Region, the data point is an outlier; if ''δ'' ≤ Rejection Region, the data point is not an outlier.

The modified Thompson Tau test is used to find one outlier at a time (the largest value of ''δ'' is removed if it is an outlier). That is, if a data point is found to be an outlier, it is removed from the data set and the test is applied again with a new average and rejection region. This process is continued until no outliers remain in the data set.
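The iterative procedure can be sketched in a few lines of Python (an illustration of the steps described above, subject to the caution already noted; the data values are invented):

<syntaxhighlight lang="python">
# Modified Thompson tau: repeatedly test the point with the largest
# absolute deviation from the mean, removing it if its standardized
# deviation exceeds the rejection region, until none is rejected.
import numpy as np
from scipy import stats

def modified_thompson_tau(x, alpha=0.05):
    x, outliers = list(x), []
    while len(x) > 2:
        a = np.asarray(x, dtype=float)
        n = len(a)
        t = stats.t.ppf(1 - alpha / 2, n - 2)
        rejection = t * (n - 1) / (np.sqrt(n) * np.sqrt(n - 2 + t**2))
        dev = np.abs(a - a.mean())
        i = int(np.argmax(dev))
        if dev[i] / a.std(ddof=1) > rejection:
            outliers.append(x.pop(i))  # remove, then re-test the rest
        else:
            break
    return outliers, x

print(modified_thompson_tau([9.1, 9.3, 9.2, 9.4, 17.2]))
# ([17.2], [9.1, 9.3, 9.2, 9.4])
</syntaxhighlight>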
Some work has also examined outliers for nominal (or categorical) data. In the context of a set of examples (or instances) in a data set, instance hardness measures the probability that an instance will be misclassified (<math>1-p(y|x)</math>, where {{mvar|y}} is the assigned class label and {{mvar|x}} represents the input attribute values for an instance in the training set {{mvar|t}}).<ref>Smith, M.R.; Martinez, T.; Giraud-Carrier, C. (2014). "[https://link.springer.com/article/10.1007%2Fs10994-013-5422-z An Instance Level Analysis of Data Complexity]". Machine Learning, 95(2): 225–256.</ref> Ideally, instance hardness would be calculated by summing over the set of all possible hypotheses {{mvar|H}}:
:<math>\begin{align}IH(\langle x, y\rangle) &= \sum_H (1 - p(y|x, h))p(h|t)\\ &= \sum_H p(h|t) - p(y|x, h)p(h|t)\\ &= 1 - \sum_H p(y|x, h)p(h|t).\end{align}</math>
In practice, this formulation is infeasible, as {{mvar|H}} is potentially infinite and <math>p(h|t)</math> is unknown for many algorithms. Thus, instance hardness can be approximated using a diverse subset <math>L \subset H</math>:
:<math>IH_L (\langle x,y\rangle) = 1 - \frac{1}{|L|} \sum_{j=1}^{|L|} p(y|x, g_j(t, \alpha))</math>
where <math>g_j(t, \alpha)</math> is the hypothesis induced by learning algorithm <math>g_j</math> trained on training set {{mvar|t}} with hyperparameters <math>\alpha</math>. Instance hardness provides a continuous value for determining if an instance is an outlier instance.
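A sketch of this approximation, using scikit-learn to supply a small, diverse set of learners (the library choice, the particular learners, and the data are illustrative assumptions, not part of the original formulation):

<syntaxhighlight lang="python">
# Approximate instance hardness as 1 minus the average probability the
# ensemble assigns to each instance's given label, p(y | x).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

def instance_hardness(X, y, learners):
    per_learner = []
    for clf in learners:
        clf.fit(X, y)
        proba = clf.predict_proba(X)  # p(class | x), columns in label order
        per_learner.append(proba[np.arange(len(y)), y])  # p(given y | x)
    return 1.0 - np.mean(per_learner, axis=0)

X = np.array([[0.0], [0.2], [0.4], [1.0], [1.2], [0.1]])
y = np.array([0, 0, 0, 1, 1, 1])  # the last instance looks mislabeled
learners = [LogisticRegression(), DecisionTreeClassifier(),
            KNeighborsClassifier(n_neighbors=3)]
print(instance_hardness(X, y, learners).round(2))  # last value is largest
</syntaxhighlight>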
== Working with outliers ==
The choice of how to deal with an outlier should depend on the cause. Some estimators are highly sensitive to outliers, notably the [[estimation of covariance matrices]].

=== Retention ===
Even when a normal distribution model is appropriate to the data being analyzed, outliers are expected for large sample sizes and should not automatically be discarded.<ref name="karch2023">{{cite journal |last1=Karch |first1=Julian D. |title=Outliers may not be automatically removed. |journal=Journal of Experimental Psychology: General |date=2023 |volume=152 |issue=6 |pages=1735–1753 |doi=10.1037/xge0001357|pmid=37104797 |s2cid=258376426 |url=https://psyarxiv.com/47ezg/ |hdl=1887/4103722 |hdl-access=free }}</ref> Instead, one should use a method that is robust to outliers to model or analyze data with naturally occurring outliers.<ref name="karch2023"/>

=== Exclusion ===
When deciding whether to remove an outlier, the cause has to be considered. As mentioned earlier, if the outlier's origin can be attributed to an experimental error, or if it can be otherwise determined that the outlying data point is erroneous, it is generally recommended to remove it.<ref name="karch2023"/><ref name="bakker2014">{{cite journal |last1=Bakker |first1=Marjan |last2=Wicherts |first2=Jelte M. |title=Outlier removal, sum scores, and the inflation of the type I error rate in independent samples t tests: The power of alternatives and recommendations. |journal=Psychological Methods |date=2014 |volume=19 |issue=3 |pages=409–427 |doi=10.1037/met0000014|pmid=24773354 }}</ref> However, it is more desirable to correct the erroneous value, if possible. Removing a data point solely because it is an outlier, on the other hand, is a controversial practice, often frowned upon by many scientists and science instructors, as it typically invalidates statistical results.<ref name="karch2023"/><ref name="bakker2014"/> While mathematical criteria provide an objective and quantitative method for data rejection, they do not make the practice more scientifically or methodologically sound, especially in small sets or where a normal distribution cannot be assumed. Rejection of outliers is more acceptable in areas of practice where the underlying model of the process being measured and the usual distribution of measurement error are confidently known.

The two common approaches to exclude outliers are [[truncation (statistics)|truncation]] (or trimming) and [[Winsorising]]. Trimming discards the outliers whereas Winsorising replaces the outliers with the nearest "nonsuspect" data.<ref>{{cite book |title=Data Analysis: A Statistical Primer for Psychology Students |pages=24–25 |first=Edward L. |last=Wike |date=2006 |publisher=Transaction Publishers |isbn=9780202365350}}</ref> Exclusion can also be a consequence of the measurement process, such as when an experiment is not entirely capable of measuring such extreme values, resulting in [[censoring (statistics)|censored]] data.<ref>{{cite journal |title=Simplified estimation from censored normal samples |first=W. J. |last=Dixon |journal=The Annals of Mathematical Statistics |volume=31 |number=2 |date=June 1960 |pages=385–391 |url=http://projecteuclid.org/download/pdf_1/euclid.aoms/1177705900 |doi=10.1214/aoms/1177705900|doi-access=free }}</ref>

In [[Regression analysis|regression]] problems, an alternative approach may be to exclude only points which exhibit a large degree of influence on the estimated coefficients, using a measure such as [[Cook's distance]].<ref>Cook, R. Dennis (Feb 1977). "Detection of Influential Observations in Linear Regression". Technometrics (American Statistical Association) 19 (1): 15–18.</ref>

If a data point (or points) is excluded from the [[data analysis]], this should be clearly stated in any subsequent report.
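The trimming/Winsorising distinction above can be illustrated with a short Python sketch (invented data; SciPy's <code>winsorize</code> is one readily available implementation):

<syntaxhighlight lang="python">
# Trimming drops the extreme observations; winsorizing replaces them
# with the nearest retained ("nonsuspect") values.
import numpy as np
from scipy.stats.mstats import winsorize

x = np.array([2.0, 2.2, 2.3, 2.5, 2.6, 2.8, 41.0])

trimmed = np.sort(x)[1:-1]                # drop one point in each tail
wins = winsorize(x, limits=(0.15, 0.15))  # clip ~15% in each tail

print(trimmed)           # [2.2 2.3 2.5 2.6 2.8]
print(np.asarray(wins))  # 2.0 -> 2.2 and 41.0 -> 2.8
</syntaxhighlight>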
=== Non-normal distributions ===
The possibility should be considered that the underlying distribution of the data is not approximately normal, having "[[fat tails]]". For instance, when sampling from a [[Cauchy distribution]],<ref>Weisstein, Eric W. [http://mathworld.wolfram.com/CauchyDistribution.html Cauchy Distribution. From MathWorld--A Wolfram Web Resource]</ref> the sample variance increases with the sample size, the sample mean fails to converge as the sample size increases, and outliers are expected at far larger rates than for a normal distribution. Even a slight difference in the fatness of the tails can make a large difference in the expected number of extreme values.

=== Set-membership uncertainties ===
A [[set estimation|set membership approach]] considers that the uncertainty corresponding to the ''i''th measurement of an unknown random vector ''x'' is represented by a set ''X''<sub>''i''</sub> (instead of a probability density function). If no outliers occur, ''x'' should belong to the intersection of all the ''X''<sub>''i''</sub>. When outliers occur, this intersection could be empty, and we should relax a small number of the sets ''X''<sub>''i''</sub> (as few as possible) in order to avoid any inconsistency.<ref>{{cite journal|last1=Jaulin|first1=L.| title=Probabilistic set-membership approach for robust regression| journal=Journal of Statistical Theory and Practice|volume=4|pages=155–167| year=2010| url=http://www.ensta-bretagne.fr/jaulin/paper_probint_0.pdf|doi=10.1080/15598608.2010.10411978|s2cid=16500768}}</ref> This can be done using the notion of the ''q''-[[relaxed intersection]]. As illustrated by the figure, the ''q''-relaxed intersection corresponds to the set of all ''x'' which belong to all of the sets except ''q'' of them. Sets ''X''<sub>''i''</sub> that do not intersect the ''q''-relaxed intersection could be suspected to be outliers.
[[File:Wiki q inter def.jpg|thumb|Figure 5. ''q''-relaxed intersection of 6 sets for ''q''&nbsp;=&nbsp;2 (red), ''q''&nbsp;=&nbsp;3 (green), ''q''&nbsp;=&nbsp;4 (blue), and ''q''&nbsp;=&nbsp;5 (yellow).]]

=== Alternative models ===
In cases where the cause of the outliers is known, it may be possible to incorporate this effect into the model structure, for example by using a [[hierarchical Bayes model]] or a [[mixture model]].<ref>Roberts, S. and Tarassenko, L.: 1995, A probabilistic resource allocating network for novelty detection. Neural Computation 6, 270–284.</ref><ref>{{Cite journal |last=Bishop |first=C. M. |date=August 1994 |title=Novelty detection and Neural Network validation |journal=IEE Proceedings - Vision, Image, and Signal Processing|volume=141 |issue=4 |pages=217–222 |doi=10.1049/ip-vis:19941330 |doi-broken-date=7 December 2024 }}</ref>

== See also ==
{{Div col|colwidth=20em}}
* [[Anomaly (natural sciences)]]
* [[Novelty detection]]
* [[Anscombe's quartet]]
* [[Data transformation (statistics)]]
* [[Extreme value theory]]
* [[Influential observation]]
* [[Random sample consensus]]
* [[Robust regression]]
* [[Studentized residual]]
* [[Winsorizing]]
{{Div col end}}

== References ==
{{Reflist|30em}}

== External links ==
{{Commons category|Outliers}}
* {{MathWorld|Outlier|author=Renze, John}}
* {{SpringerEOM|id=O/o110080|title=Outlier|first1=N. |last1=Balakrishnan |first2=A. |last2=Childs}}
* [http://www.itl.nist.gov/div898/handbook/eda/section3/eda35h.htm Grubbs test] described by NIST manual

{{Authority control}}

[[Category:Statistical charts and diagrams]]
[[Category:Robust statistics]]
[[Category:Statistical outliers| ]]