In computer architecture, 12-bit integers, memory addresses, or other data units are those that are 12 bits (1.5 octets) wide. Also, 12-bit central processing unit (CPU) and arithmetic logic unit (ALU) architectures are those that are based on registers, address buses, or data buses of that size.
Before the widespread adoption of ASCII in the late 1960s, six-bit character codes were common, and a 12-bit word, which could hold two such characters, was a convenient size. A 12-bit word was also large enough to store a single decimal digit along with a sign. Possibly the best-known 12-bit CPU is the PDP-8 and its descendants (such as the Intersil 6100 microprocessor), produced in various forms from August 1963 to mid-1990. Many analog-to-digital converters (ADCs) have a 12-bit resolution. Some PIC microcontrollers use a 12-bit instruction word but handle only 8-bit data.
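As an illustration of how two six-bit character codes fit into a single 12-bit word, the following C sketch packs and unpacks such a pair. The bit layout (first character in the high six bits) and the helper names are assumptions chosen for the example, not a description of any particular historical machine.

```c
#include <stdint.h>
#include <stdio.h>

/* Pack two 6-bit character codes (each 0..63) into one 12-bit word.
   Layout assumption: the first character occupies the high six bits. */
static uint16_t pack12(uint8_t hi, uint8_t lo)
{
    return (uint16_t)(((hi & 0x3F) << 6) | (lo & 0x3F));
}

/* Recover the two 6-bit codes from a 12-bit word. */
static void unpack12(uint16_t word, uint8_t *hi, uint8_t *lo)
{
    *hi = (word >> 6) & 0x3F;
    *lo = word & 0x3F;
}

int main(void)
{
    uint8_t a, b;
    uint16_t w = pack12(011, 042);   /* two arbitrary 6-bit codes, in octal */
    unpack12(w, &a, &b);
    printf("word = %04o (octal), characters = %02o %02o\n",
           (unsigned)w, (unsigned)a, (unsigned)b);
    return 0;
}
```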
12 binary digits, or 3 nibbles (a 'tribble'), have 2^12 = 4096 (10000 octal, 1000 hexadecimal) distinct combinations. Hence, a microprocessor with 12-bit memory addresses can directly access 4096 words (4 kwords) of word-addressable memory. IBM System/360 instruction formats use a 12-bit displacement field which, added to the contents of a base register, can address 4096 bytes of memory in a region that begins at the address held in the base register.
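A minimal sketch of this base-plus-displacement addressing is shown below in C. It is simplified (no index register, 32-bit arithmetic, illustrative function and variable names) and only demonstrates that a 12-bit displacement reaches 4096 bytes beyond the base address.

```c
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

/* Effective address = base register contents + 12-bit displacement.
   Masking to 0x0FFF models the 12-bit width of the displacement field. */
static uint32_t effective_address(uint32_t base_reg, uint16_t disp12)
{
    return base_reg + (disp12 & 0x0FFF);
}

int main(void)
{
    uint32_t base = 0x00012000;   /* hypothetical base register contents */

    printf("lowest  reachable byte: 0x%08" PRIX32 "\n",
           effective_address(base, 0x000));
    printf("highest reachable byte: 0x%08" PRIX32 "\n",
           effective_address(base, 0xFFF));
    /* 0xFFF = 4095, so the addressable region spans 4096 bytes
       starting at the address in the base register. */
    return 0;
}
```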