Integer (computer science)
== Value and representation ==

The ''value'' of an item with an integral type is the mathematical integer that it corresponds to. Integral types may be ''unsigned'' (capable of representing only non-negative integers) or ''signed'' (capable of representing negative integers as well).<ref>{{cite web |url=http://www.swarthmore.edu/NatSci/echeeve1/Ref/BinaryMath/NumSys.html |title=Representation of numbers |last=Cheever |first=Eric |publisher=Swarthmore College |access-date=2011-09-11}}</ref>

An integer value is typically specified in the [[source code]] of a program as a sequence of digits optionally prefixed with + or −. Some programming languages allow other notations, such as hexadecimal (base 16) or octal (base 8). Some programming languages also permit [[digit group separator]]s.<ref>{{cite web|author=Madhusudhan Konda |url=http://radar.oreilly.com/2011/09/java7-features.html |title=A look at Java 7's new features - O'Reilly Radar |publisher=Radar.oreilly.com |date=2011-09-02 |access-date=2013-10-15}}</ref>

The ''internal representation'' of this datum is the way the value is stored in the computer's memory. Unlike mathematical integers, a typical datum in a computer has some minimum and maximum possible value. The most common representation of a positive integer is a string of [[bit]]s, using the [[binary numeral system]]. The order of the memory [[byte]]s storing the bits varies; see [[endianness]].

The ''width'', ''precision'', or ''bitness''<ref>{{Cite book |last=Barr |first=Adam |url=https://books.google.com/books?id=BxdxDwAAQBAJ&dq=%22bitness%22&pg=PA268 |title=The Problem with Software: Why Smart Engineers Write Bad Code |date=2018-10-23 |publisher=MIT Press |isbn=978-0-262-34821-8 |language=en}}</ref> of an integral type is the number of bits in its representation. An integral type with ''n'' bits can encode 2<sup>''n''</sup> numbers; for example, an unsigned type typically represents the non-negative values 0 through {{nowrap|2<sup>''n''</sup> − 1}}.
Other encodings of integer values to bit patterns are sometimes used, for example [[binary-coded decimal]] or [[Gray code]], or as printed character codes such as [[ASCII]].

There are four well-known [[signed number representations|ways to represent signed numbers]] in a binary computing system. The most common is [[two's complement]], which allows a signed integral type with ''n'' bits to represent numbers from {{nowrap|−2<sup>(''n''−1)</sup>}} through {{nowrap|2<sup>(''n''−1)</sup> − 1}}. Two's complement arithmetic is convenient because there is a perfect [[one-to-one correspondence]] between representations and values (in particular, [[signed zero|no separate +0 and −0]]), and because [[addition]], [[subtraction]] and [[multiplication]] do not need to distinguish between signed and unsigned types. Other possibilities include [[offset binary]], [[sign-magnitude]], and [[ones' complement]].

Some computer languages define integer sizes in a machine-independent way; others have varying definitions depending on the underlying processor word size. Not all language implementations define variables of all integer sizes, and defined sizes may not even be distinct in a particular implementation. An integer in one [[programming language]] may be a different size in a different language, on a different processor, or in an execution context of different bitness; see {{Section link||Words}}.

Some [[Decimal computer|older computer architectures]] used decimal representations of integers, stored in [[binary-coded decimal|binary-coded decimal (BCD)]] or other format. These values generally require data sizes of 4 bits per decimal digit (sometimes called a [[nibble]]), usually with additional bits for a sign. Many modern CPUs provide limited support for decimal integers as an extended datatype, providing instructions for converting such values to and from binary values.
Depending on the architecture, decimal integers may have fixed sizes (e.g., 7 decimal digits plus a sign fit into a 32-bit word), or may be variable-length (up to some maximum digit size), typically occupying two digits per byte (octet).