This article contains special characters. Without proper rendering support, you may see question marks, boxes, or other symbols.

In computer and machine-based telecommunications terminology, a character is a unit of information that roughly corresponds to a grapheme, grapheme-like unit, or symbol, such as in an alphabet or syllabary in the written form of a natural language.

Examples of characters include letters, numerical digits, common punctuation marks (such as "." or "-"), and whitespace. The concept also includes control characters, which do not correspond to visible symbols but rather to instructions to format or process the text. Examples of control characters include carriage return and tab, as well as other instructions to printers or other devices that display or otherwise process text.

Characters are typically combined into strings.
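The distinction between printable characters and control characters is easy to see at the byte level. The following minimal C sketch (using only standard escape sequences and the standard library) prints the numeric code of each character in a short string, which makes the otherwise invisible tab and line-feed control characters visible:

```c
#include <stdio.h>

int main(void) {
    /* A string is a sequence of characters. '\t' (tab) and '\n' (line feed)
       are control characters: they direct formatting rather than denote a
       visible symbol. */
    const char *line = "name\tvalue\n";
    for (const char *p = line; *p != '\0'; p++)
        printf("0x%02X ", (unsigned char)*p);
    printf("\n");
    return 0;
}
```

On an ASCII-compatible system this prints 0x09 for the tab and 0x0A for the line feed alongside the letter codes.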
Historically, the term character was used to denote a specific number of contiguous bits. While a character is most commonly assumed to refer to 8 bits (one byte) today, other options like the 6-bit character code were once popular, and the 5-bit Baudot code has been used in the past as well. The term has even been applied to 4 bits, with only 16 possible values, which would be inadequate for the Latin (or English) alphabet, though it would do for the 12-letter Rotokas alphabet. See also Universal Character Set characters, where 8 bits are not enough, though they can all be represented with one or more 8-bit code units as in UTF-8.
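The capacities behind these widths are simple powers of two. As a quick illustration (a plain C sketch with no assumptions beyond the standard library):

```c
#include <stdio.h>

int main(void) {
    /* An n-bit character code can distinguish 2^n values:
       4 -> 16, 5 (Baudot) -> 32, 6 -> 64, 8 (one byte) -> 256. */
    int widths[] = { 4, 5, 6, 8 };
    for (int i = 0; i < 4; i++)
        printf("%d bits -> %d possible characters\n", widths[i], 1 << widths[i]);
    return 0;
}
```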
Historically, the term character has been widely used by industry professionals to refer to an encoded character, often as defined by the programming language or API. Likewise, character set has been widely used to refer to a specific repertoire of characters that have been mapped to specific bit sequences or numerical codes. The term glyph is used to describe a particular visual appearance of a character. Many computer fonts consist of glyphs that are indexed by the numerical code of the corresponding character. With the advent and widespread acceptance of Unicode and bit-agnostic coded character sets, a character is increasingly being seen as a unit of information, independent of any particular visual manifestation.
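The pairing of characters with numerical codes can be made concrete with a short sketch. This assumes an ASCII-compatible execution character set, in which case each C character constant already carries its numeric code:

```c
#include <stdio.h>

int main(void) {
    /* In a coded character set, each character is paired with a numeric
       code. Under ASCII, 'A' is 65, 'z' is 122, '0' is 48, and so on. */
    const char repertoire[] = "Az09.-";
    for (const char *p = repertoire; *p != '\0'; p++)
        printf("'%c' -> %d\n", *p, *p);
    return 0;
}
```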
The ISO/IEC 10646 (Unicode) International Standard defines character, or abstract character, as "a member of a set of elements used for the organization, control, or representation of data". Unicode's definition supplements this with explanatory notes that encourage the reader to differentiate between characters, graphemes, and glyphs, among other things. Such differentiation is an instance of the wider theme of the separation of presentation and content.

For example, the Hebrew letter aleph ("א") is often used by mathematicians to denote certain kinds of infinity (ℵ), but it is also used in ordinary Hebrew text. In Unicode, these two uses are considered different characters, and have two different Unicode numerical identifiers ("code points"), though they may be rendered identically. Conversely, the Chinese logogram for water ("水") may have a slightly different appearance in Japanese texts than it does in Chinese texts, and local typefaces may reflect this. Nonetheless, in Unicode they are considered the same character, and share the same code point.

The Unicode standard also differentiates between these abstract characters and coded characters or encoded characters that have been paired with numeric codes that facilitate their representation in computers.

The combining character is also addressed by Unicode. For instance, Unicode allocates a code point to each of 'i' (U+0069), the combining diaeresis (U+0308), and 'ï' (U+00EF). This makes it possible to code the middle character of the word 'naïve' either as the single character 'ï' or as a combination of the character 'i' with the combining diaeresis (U+0069 LATIN SMALL LETTER I + U+0308 COMBINING DIAERESIS); this is also rendered as 'ï'. These are considered canonically equivalent by the Unicode standard.

A char in the C programming language is a data type with the size of exactly one byte, which in turn is defined to be large enough to contain any member of the “basic execution character set”. The exact number of bits can be checked via the CHAR_BIT macro. By far the most common size is 8 bits, and the POSIX standard requires it to be 8 bits. In newer C standards, char is required to hold UTF-8 code units, which requires a minimum size of 8 bits. A Unicode code point may require as many as 21 bits. This will not fit in a char on most systems, so more than one is used for some of them, as in the variable-length encoding UTF-8, where each code point takes 1 to 4 bytes.
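How a 21-bit code point is spread over 1 to 4 char-sized bytes can be sketched directly. The encoder below is a minimal illustration of the standard UTF-8 bit layout, not a production routine (for instance, it does not reject the surrogate range U+D800–U+DFFF); utf8_encode is a name chosen here for the example:

```c
#include <stdio.h>
#include <limits.h>

/* Write code point cp (at most 21 bits, i.e. up to U+10FFFF) into out[]
   using the UTF-8 bit layout; returns the number of bytes, 1 to 4. */
static int utf8_encode(unsigned long cp, unsigned char out[4]) {
    if (cp <= 0x7F) {                       /* 7 bits  -> 1 byte  */
        out[0] = (unsigned char)cp;
        return 1;
    } else if (cp <= 0x7FF) {               /* 11 bits -> 2 bytes */
        out[0] = (unsigned char)(0xC0 | (cp >> 6));
        out[1] = (unsigned char)(0x80 | (cp & 0x3F));
        return 2;
    } else if (cp <= 0xFFFF) {              /* 16 bits -> 3 bytes */
        out[0] = (unsigned char)(0xE0 | (cp >> 12));
        out[1] = (unsigned char)(0x80 | ((cp >> 6) & 0x3F));
        out[2] = (unsigned char)(0x80 | (cp & 0x3F));
        return 3;
    } else if (cp <= 0x10FFFF) {            /* 21 bits -> 4 bytes */
        out[0] = (unsigned char)(0xF0 | (cp >> 18));
        out[1] = (unsigned char)(0x80 | ((cp >> 12) & 0x3F));
        out[2] = (unsigned char)(0x80 | ((cp >> 6) & 0x3F));
        out[3] = (unsigned char)(0x80 | (cp & 0x3F));
        return 4;
    }
    return 0;                               /* not a valid code point */
}

int main(void) {
    printf("char is %d bits wide on this system\n", CHAR_BIT);

    /* 'A', aleph, the water logogram, and an emoji take 1-4 bytes each. */
    unsigned long points[] = { 0x41, 0x5D0, 0x6C34, 0x1F600 };
    unsigned char buf[4];
    for (int i = 0; i < 4; i++) {
        int n = utf8_encode(points[i], buf);
        printf("U+%04lX -> %d byte(s):", points[i], n);
        for (int j = 0; j < n; j++)
            printf(" 0x%02X", buf[j]);
        printf("\n");
    }
    return 0;
}
```

Running this on an 8-bit-char system shows U+0041 taking one byte and U+1F600 taking four, matching the 1-to-4-byte range described above.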