Character (computing)

A string of seven characters: the word "example" shown with each letter in a separate box; the label "String" refers to the entire word, while "Character" points to an individual box.

In computing and telecommunications, a character is the internal representation of a symbol used within a computer or other system.

Examples of characters include letters, numerical digits, punctuation marks (such as "." or "-"), and whitespace. The concept also includes control characters, which do not correspond to visible symbols but rather to instructions to format or process the text. Examples of control characters include carriage return and tab as well as other instructions to printers or other devices that display or otherwise process text.

Characters are typically combined into strings.

Historically, the term character was used to denote a specific number of contiguous bits. While a character is most commonly assumed to refer to 8 bits (one byte) today, other options such as the 6-bit character code were once popular,<ref name="Dreyfus_1958_Gamma60"/><ref name="Buchholz_1962"/> and the 5-bit Baudot code has been used in the past as well. The term has even been applied to 4 bits<ref name="Intel_1973_MCS-4"/> with only 16 possible values. All modern systems use a varying-size sequence of these fixed-size pieces; for instance, UTF-8 uses a varying number of 8-bit code units to define a "code point", and Unicode uses a varying number of those to define a "character".

Encoding


Main article: Character encoding

Computers and communication equipment represent characters using a character encoding that assigns each character to something that can be stored or transmitted through a network, typically an integer quantity represented by a sequence of digits. Two examples of common encodings are ASCII and the UTF-8 encoding for Unicode. While most character encodings map characters to numbers and/or bit sequences, Morse code instead represents characters using a series of electrical impulses of varying length.
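For illustration, the following minimal C sketch (an example written for this article, not code from any standard library, and omitting error handling such as rejecting invalid code points) maps a single Unicode code point to the one to four 8-bit code units used by UTF-8:

  #include <stdio.h>
  #include <stdint.h>

  /* Illustrative sketch: encode one Unicode code point (up to U+10FFFF)
     as UTF-8.  Writes 1 to 4 bytes into out and returns the count. */
  static int utf8_encode(uint32_t cp, unsigned char out[4])
  {
      if (cp <= 0x7F) {                       /* one byte (the ASCII range) */
          out[0] = (unsigned char)cp;
          return 1;
      } else if (cp <= 0x7FF) {               /* two bytes */
          out[0] = (unsigned char)(0xC0 | (cp >> 6));
          out[1] = (unsigned char)(0x80 | (cp & 0x3F));
          return 2;
      } else if (cp <= 0xFFFF) {              /* three bytes */
          out[0] = (unsigned char)(0xE0 | (cp >> 12));
          out[1] = (unsigned char)(0x80 | ((cp >> 6) & 0x3F));
          out[2] = (unsigned char)(0x80 | (cp & 0x3F));
          return 3;
      } else {                                /* four bytes */
          out[0] = (unsigned char)(0xF0 | (cp >> 18));
          out[1] = (unsigned char)(0x80 | ((cp >> 12) & 0x3F));
          out[2] = (unsigned char)(0x80 | ((cp >> 6) & 0x3F));
          out[3] = (unsigned char)(0x80 | (cp & 0x3F));
          return 4;
      }
  }

  int main(void)
  {
      /* 'A', 'é', '水' and an emoji: 1, 2, 3 and 4 code units respectively. */
      uint32_t samples[] = { 0x41, 0xE9, 0x6C34, 0x1F600 };
      for (int i = 0; i < 4; i++) {
          unsigned char buf[4];
          int n = utf8_encode(samples[i], buf);
          printf("U+%04X ->", (unsigned)samples[i]);
          for (int j = 0; j < n; j++)
              printf(" %02X", buf[j]);
          printf("\n");
      }
      return 0;
  }

Running it prints, for example, "U+0041 -> 41" and "U+1F600 -> F0 9F 98 80", showing how a single abstract character can occupy a varying number of fixed-size code units.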

Terminology


The dictionary Merriam-Webster defines a "character", in the relevant sense, as "a symbol (such as a letter or number) that represents information; also: a representation of such a symbol that may be accepted by a computer".<ref name="MW_Definition"/>

Historically, the term character has been widely used by industry professionals to refer to an encoded character, often as defined by the programming language or API. Likewise, character set has been widely used to refer to a specific repertoire of characters that have been mapped to specific bit sequences or numerical codes. The term glyph is used to describe a particular visual appearance of a character. Many computer fonts consist of glyphs that are indexed by the numerical code of the corresponding character.

With the advent and widespread acceptance of Unicode<ref name="movingtounicode"/> and bit-agnostic coded character sets, a character is increasingly being seen as a unit of information, independent of any particular visual manifestation. The ISO/IEC 10646 (Unicode) International Standard defines character, or abstract character, as "a member of a set of elements used for the organization, control, or representation of data". Unicode's definition supplements this with explanatory notes that encourage the reader to differentiate between characters, graphemes, and glyphs, among other things. Such differentiation is an instance of the wider theme of the separation of presentation and content.

For example, the Hebrew letter aleph ("א") is often used by mathematicians to denote certain kinds of infinity (ℵ), but it is also used in ordinary Hebrew text. In Unicode, these two uses are considered different characters, and have two different Unicode numerical identifiers ("code points"), though they may be rendered identically. Conversely, the Chinese logogram for water ("水") may have a slightly different appearance in Japanese texts than it does in Chinese texts, and local typefaces may reflect this. But nonetheless in Unicode they are considered the same character, and share the same code point.
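A short C sketch (assuming a C11 compiler and the <uchar.h> header) makes the distinction concrete: the two aleph uses occupy different code points, while the water ideograph is a single shared code point regardless of typeface:

  #include <stdio.h>
  #include <uchar.h>

  int main(void)
  {
      /* Two "aleph" characters that may render identically but are
         distinct Unicode characters with distinct code points. */
      char32_t hebrew_alef = U'\u05D0';  /* HEBREW LETTER ALEF         */
      char32_t alef_symbol = U'\u2135';  /* ALEF SYMBOL (mathematical) */

      /* One "water" character shared by Chinese and Japanese text,
         even though local typefaces may draw it slightly differently. */
      char32_t water = U'\u6C34';        /* CJK UNIFIED IDEOGRAPH-6C34 */

      printf("U+%04X U+%04X U+%04X\n",
             (unsigned)hebrew_alef, (unsigned)alef_symbol, (unsigned)water);
      return 0;
  }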

The Unicode standard also differentiates between these abstract characters and coded characters or encoded characters that have been paired with numeric codes that facilitate their representation in computers.

Combining character


Combining characters are also addressed by Unicode. For instance, Unicode allocates a code point to each of

  • 'i' (U+0069),
  • the combining diaeresis (U+0308), and
  • 'ï' (U+00EF).

This makes it possible to code the middle character of the word 'naïve' either as a single character 'ï' (U+00EF) or as a combination of the character 'i' with the combining diaeresis (U+0069 LATIN SMALL LETTER I + U+0308 COMBINING DIAERESIS); the latter is also rendered as 'ï'.

These are considered canonically equivalent by the Unicode standard.
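To make this concrete, here is a minimal C sketch in which the UTF-8 byte values are written out by hand for illustration: the two spellings are canonically equivalent as characters but are different byte sequences, so a byte-wise comparison does not treat them as equal:

  #include <stdio.h>
  #include <string.h>

  int main(void)
  {
      /* "naïve" with the precomposed ï (U+00EF, UTF-8 bytes C3 AF). */
      const char *precomposed = "na\xC3\xAFve";
      /* "naïve" as i (U+0069) followed by COMBINING DIAERESIS
         (U+0308, UTF-8 bytes CC 88). */
      const char *decomposed  = "nai\xCC\x88ve";

      printf("%zu bytes vs %zu bytes\n",
             strlen(precomposed), strlen(decomposed));            /* 6 vs 7 */
      printf("byte-equal? %s\n",
             strcmp(precomposed, decomposed) == 0 ? "yes" : "no"); /* no */
      return 0;
  }

Treating the two spellings as equal requires Unicode normalization (for example NFC or NFD), which is outside the scope of the standard C library.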

char


A char in the C programming language is a data type with the size of exactly one byte,<ref name="ISO9899"/><ref name="ISO14882"/> which in turn is defined to be large enough to contain any member of the "basic execution character set". The exact number of bits can be checked via the CHAR_BIT macro. By far the most common size is 8 bits, and the POSIX standard requires it to be 8 bits.<ref name="Opengroup_Limits"/> In newer C standards, char is required to hold UTF-8 code units,<ref name="ISO9899"/><ref name="ISO14882"/> which requires a minimum size of 8 bits.
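For example, a minimal check on a hosted C implementation:

  #include <limits.h>
  #include <stdio.h>

  int main(void)
  {
      /* CHAR_BIT, from <limits.h>, is the number of bits in a char.
         The C standard requires at least 8; POSIX requires exactly 8. */
      printf("char is %d bits; sizeof(char) is always %zu\n",
             CHAR_BIT, sizeof(char));
      return 0;
  }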

A Unicode code point may require as many as 21 bits.<ref name="Unicode_Glossary"/> This will not fit in a char on most systems, so more than one char is used to represent some code points, as in the variable-length encoding UTF-8, where each code point takes 1 to 4 bytes. Furthermore, a "character" may require more than one code point (for instance with combining characters), depending on what is meant by the word "character".

The fact that a character was historically stored in a single byte led to the two terms ("char" and "character") being used interchangeably in most documentation. This often makes the documentation confusing or misleading when multibyte encodings such as UTF-8 are used, and has led to inefficient and incorrect implementations of string manipulation functions (such as computing the "length" of a string as a count of code units rather than characters). Modern POSIX documentation attempts to fix this, defining "character" as a sequence of one or more bytes representing a single graphic symbol or control code, and attempts to use "byte" when referring to char data.<ref name="Opengroup_POSIX_Character"/><ref name="Opengroup_POSIX_Strlen"/> However, it still contains errors such as defining an array of char as a character array (rather than a byte array).<ref name="Opengroup_POSIX_CharacterArray"/>
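The difference matters in practice. In the following C sketch, the code-point counter is an illustrative helper written for this article (not a standard library function) and assumes well-formed UTF-8 input:

  #include <stdio.h>
  #include <string.h>

  /* Count code points in a UTF-8 string by skipping continuation bytes
     (those of the form 10xxxxxx).  Assumes well-formed UTF-8. */
  static size_t utf8_codepoints(const char *s)
  {
      size_t count = 0;
      for (; *s != '\0'; s++)
          if (((unsigned char)*s & 0xC0) != 0x80)
              count++;
      return count;
  }

  int main(void)
  {
      const char *word = "na\xC3\xAFve";    /* "naïve" encoded as UTF-8 */
      printf("bytes (chars): %zu\n", strlen(word));           /* 6 */
      printf("code points:   %zu\n", utf8_codepoints(word));  /* 5 */
      return 0;
  }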

Unicode can also be stored in strings made up of code units that are larger than char. These are called "wide characters". The original C type was called wchar_t. Because some platforms define wchar_t as 16 bits and others define it as 32 bits, recent versions of the language standards have added char16_t and char32_t. Even then, the objects being stored might not be characters; for instance, the variable-length UTF-16 is often stored in arrays of char16_t.
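A brief C11 sketch of this point (assuming <uchar.h> and the u/U string literal prefixes are available): one code point may occupy one or two char16_t code units in UTF-16, but always exactly one char32_t in UTF-32:

  #include <stdio.h>
  #include <uchar.h>

  int main(void)
  {
      /* UTF-16 string literals: U+00E9 needs one 16-bit code unit,
         U+1F600 needs a surrogate pair (two code units: D83D DE00). */
      const char16_t one[] = u"\u00E9";
      const char16_t two[] = u"\U0001F600";

      printf("%zu and %zu UTF-16 code units\n",
             sizeof(one) / sizeof(one[0]) - 1,
             sizeof(two) / sizeof(two[0]) - 1);   /* prints: 1 and 2 */

      /* UTF-32 string literal: every code point fits in one char32_t. */
      const char32_t wide[] = U"\U0001F600";
      printf("%zu UTF-32 code unit(s)\n",
             sizeof(wide) / sizeof(wide[0]) - 1); /* prints: 1 */
      return 0;
  }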

Other languages also have a char type. Some, such as C++, use at least 8 bits like C.<ref name="ISO14882"/> Others, such as Java, use 16 bits for char in order to represent UTF-16 values.


References


Template:Reflist
