=== In measurements ===<!-- Heavily linked section: Standard uncertainty, Concise notation -->
{{Main article|Measurement uncertainty}}
{{See also|Uncertainty quantification|Uncertainty propagation}}
The most commonly used procedure for calculating measurement uncertainty is described in the "Guide to the Expression of Uncertainty in Measurement" (GUM) published by [[ISO]]. Derived works include the [[National Institute of Standards and Technology]] (NIST) Technical Note 1297, "Guidelines for Evaluating and Expressing the Uncertainty of NIST Measurement Results", and the Eurachem/CITAC publication "Quantifying Uncertainty in Analytical Measurement".

The uncertainty of a measurement result generally consists of several components. The components are regarded as [[random variables]] and may be grouped into two categories according to the method used to estimate their numerical values:
* Type A, those evaluated by [[statistical]] methods
* Type B, those evaluated by other means, e.g., by assigning a [[probability distribution]]

By propagating the [[variance]]s of the components through a function relating the components to the measurement result, the combined measurement uncertainty is given as the square root of the resulting variance. The simplest form is the [[standard deviation]] of a repeated observation.

{{anchor|Uncertainty notation}}In [[metrology]], [[physics]], and [[engineering]], the uncertainty or [[margin of error]] of a measurement, when explicitly stated, is given by a range of values likely to enclose the true value. This may be denoted by [[error bar]]s on a graph, or by the following notations:{{citation needed|date=November 2017}}
* ''measured value'' ± ''uncertainty''
* ''measured value'' {{su|p=+uncertainty|b=−uncertainty}}
* ''measured value'' (''uncertainty'') {{Anchor|Shortest help for uncertainty notation}}
In the last notation, parentheses are the concise form of the ± notation.
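The Type A evaluation and variance propagation described above can be sketched in a few lines of Python. This is a minimal illustration with made-up readings and a hypothetical Type B component (a ±0.02 m calibration bound treated as a rectangular distribution); it assumes the result is a simple sum of independent components, so the sensitivity coefficients are all 1.

```python
import math

# Type A evaluation: repeated observations of one input quantity
# (the readings below are invented for illustration).
readings = [10.48, 10.52, 10.49, 10.51, 10.50]
n = len(readings)
mean = sum(readings) / n
# Sample standard deviation (n - 1 in the denominator).
std_dev = math.sqrt(sum((x - mean) ** 2 for x in readings) / (n - 1))
# Standard uncertainty of the mean.
u_a = std_dev / math.sqrt(n)

# Hypothetical Type B component: a calibration certificate quoting
# +/-0.02 m with an assumed uniform (rectangular) distribution,
# so u = half-width / sqrt(3).
u_b = 0.02 / math.sqrt(3)

# Combined standard uncertainty: square root of the summed variances
# (valid here because the components are independent and enter the
# result with unit sensitivity).
u_c = math.sqrt(u_a ** 2 + u_b ** 2)
print(f"{mean:.3f} m +/- {u_c:.3f} m")
```

For a result that is a nonlinear function of the inputs, each variance would be weighted by the squared partial derivative of that function before summing, per the GUM's law of propagation of uncertainty.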
For example, to report a measurement of 10 {{frac|1|2}} meters in a scientific or engineering application, it could be written {{val|10.5|u=m}} or {{val|10.50|u=m}}, by convention meaning accurate to ''within'' one tenth of a meter, or one hundredth, respectively. The precision is symmetric around the last digit: in this case, half a tenth up and half a tenth down, so 10.5 means between 10.45 and 10.55. Thus it is ''understood'' that 10.5 means {{val|10.5|.05}}, and 10.50 means {{val|10.50|.005}}, also written {{val|10.50|(5)}} and {{val|10.500|(5)}} respectively. But if the accuracy is within two tenths, the uncertainty is ± one tenth, and it is ''required'' to be stated explicitly: {{val|10.5|.1}} and {{val|10.50|.01}}, or {{val|10.5|(1)}} and {{val|10.50|(1)}}.

The numbers in parentheses ''apply'' to the numeral to their left; they are not part of that number, but part of the notation of uncertainty, and they apply to the [[significant figure|least significant digits]]. For instance, {{val|1.00794|(7)}} stands for {{val|1.00794|0.00007}}, while {{val|1.00794|(72)}} stands for {{val|1.00794|0.00072}}.<ref>{{cite web|url=http://physics.nist.gov/cgi-bin/cuu/Info/Constants/definitions.html|title=Standard Uncertainty and Relative Standard Uncertainty|work=[[CODATA]] reference|publisher=[[NIST]]|access-date=26 September 2011|url-status=live|archive-url=https://web.archive.org/web/20111016021440/http://physics.nist.gov/cgi-bin/cuu/Info/Constants/definitions.html|archive-date=16 October 2011}}</ref> This concise notation is used, for example, by [[IUPAC]] in stating the [[list of elements by atomic mass|atomic mass]] of [[chemical element|elements]].

The middle notation is used when the error is not symmetric about the value – for example, {{val|3.4|+0.3|-0.2}}. This can occur, for example, when using a logarithmic scale.

Uncertainty of a measurement can be determined by repeating a measurement to arrive at an estimate of the standard deviation of the values.
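The rule that the parenthesized digits apply to the least significant digits of the value can be made concrete with a small parser. This is an illustrative sketch for the plain decimal form only (no exponents); the function name and structure are my own, not any standard library API.

```python
def parse_concise(s):
    """Parse concise uncertainty notation such as '1.00794(7)' into
    (value, uncertainty).  The digits in parentheses are scaled to
    line up with the least significant digits of the value."""
    value_part, _, rest = s.partition("(")
    digits = rest.rstrip(")")
    value = float(value_part)
    # The number of decimal places in the value fixes the scale
    # of the uncertainty digits.
    decimals = len(value_part.partition(".")[2])
    uncertainty = int(digits) * 10 ** -decimals
    return value, uncertainty

print(parse_concise("1.00794(7)"))   # -> 1.00794 with uncertainty 0.00007
print(parse_concise("1.00794(72)"))  # -> 1.00794 with uncertainty 0.00072
```

The two calls reproduce the atomic-mass examples from the text: 1.00794(7) means 1.00794 ± 0.00007, and 1.00794(72) means 1.00794 ± 0.00072.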
Then, any single value has an uncertainty equal to the standard deviation. However, if the values are averaged, the mean measurement value has a much smaller uncertainty, equal to the [[standard error (statistics)|standard error]] of the mean: the standard deviation divided by the square root of the number of measurements. This procedure neglects [[systematic error]]s, however.{{citation needed|date=November 2017}}

When the uncertainty represents the standard error of the measurement, then about 68.3% of the time the true value of the measured quantity falls within the stated uncertainty range. For example, it is likely that for 31.7% of the atomic mass values given on the [[list of elements by atomic mass]], the true value lies outside the stated range. If the width of the interval is doubled, probably only 4.6% of the true values lie outside the doubled interval, and if the width is tripled, probably only 0.3% lie outside. These values follow from the properties of the [[normal distribution]], and they apply only if the measurement process produces normally distributed errors. In that case, the quoted [[standard error (statistics)|standard errors]] are easily converted to 68.3% ("one [[sigma]]"), 95.4% ("two sigma"), or 99.7% ("three sigma") [[confidence interval]]s.{{citation needed|date=September 2014}}

In this context, uncertainty depends on both the [[accuracy and precision]] of the measurement instrument: the lower the accuracy and precision of an instrument, the larger the measurement uncertainty. Precision is often determined as the standard deviation of repeated measures of a given value, using the same method described above to assess measurement uncertainty. However, this method is correct only when the instrument is accurate. When it is inaccurate, the uncertainty is larger than the standard deviation of the repeated measures, making it evident that the uncertainty does not depend only on instrumental precision.
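The one-, two-, and three-sigma coverage percentages quoted above follow directly from the normal distribution: the fraction of values within ''k'' standard deviations of the mean is erf(''k''/√2). A short sketch using only the standard library:

```python
import math

def coverage(k):
    """Fraction of normally distributed values falling within k
    standard deviations of the mean: erf(k / sqrt(2))."""
    return math.erf(k / math.sqrt(2))

for k in (1, 2, 3):
    inside = coverage(k) * 100
    print(f"{k} sigma: {inside:.1f}% inside, {100 - inside:.1f}% outside")
```

Running this reproduces the figures in the text: 68.3% inside (31.7% outside) at one sigma, 95.4% (4.6%) at two, and 99.7% (0.3%) at three.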