When talking about retro games, terms like '8-bit music' or '16-bit graphics' often come up. I myself often use these terms, but I'm not exactly sure what they refer to. What do they mean?
Kevin Yap
7 Answers
8-bit and 16-bit, for video games, specifically refers to the processors used in the console. The number references the size of the words of data used by each processor. The 8-bit generation of consoles (starting with Nintendo's Famicom, also called Nintendo Entertainment System) used 8-bit processors; the 16-bit generation (starting with NEC/Hudson's PC Engine, also called TurboGrafx-16) used a 16-bit graphics processor. This affects the quality and variety in the graphics and the music by affecting how much data can be used at once; Oak's answer details the specifics of graphics.
If you don't know what a bit is, here is the Wikipedia article on bits: http://en.wikipedia.org/wiki/Bit. I'll quote the first sentence, which is all one really needs to know:
A bit or binary digit is the basic unit of information in computing and telecommunications; it is the amount of information that can be stored by a digital device or other physical system that can usually exist in only two distinct states.
Now, note that in modern times, terms like '8-bit music' and '16-bit graphics' don't necessarily have anything to do with processors or data size, as most machinery doesn't run that small anymore. They may instead refer to the style of music or graphics used in games during those generations, as a nostalgic homage. 8-bit music is the standard chiptune fare, and the graphics were simplistic in terms of colour. 16-bit music is higher quality but often still has a distinct electronic feel, while the graphics got much more complex but remained largely 2-dimensional and at 240p resolution.
Grace Note
8-bit, 16-bit, 32-bit and 64-bit all refer to a processor's word size. A 'word' in processor parlance means the native size of information it can place into a register and process without special instructions. It also typically matches the size of the memory address space. The word size of a chip is the most defining aspect of its design. There are several reasons why it is so important:
- First off, the maximum value you can hold. An 8-bit unsigned integer can hold a value up to 255; a 16-bit one can hold up to 65,535.
- Memory addressing: With bigger numbers, you can track more address space (a gross oversimplification, but it holds true).
- Double-words and quad-words. There are cases when you want to use a larger word for a variable. A double word is just 2 words, so a 32-bit variable on a 16-bit machine or a 16-bit variable on an 8-bit machine.
- Instructions. Again, with a larger number you can have more opcodes (the actual machine instructions). Even though adding 2 integers looks simple, on the hardware level even that is quite complicated. For instance a machine may have separate MOV instructions for loading a nibble (half-byte), byte, word, double word or quad word into a register. From there you would need to add it to another register or add from a variable in memory, and that's another set of possible instructions. Floating point instructions are also a completely separate set of instructions.
- Floating point: aside from memory limits, an 8-bit machine usually has no floating-point hardware at all and must emulate it in software; 16-bit and later machines often paired the CPU with a separate floating-point coprocessor, and more recent designs integrate the floating point unit into the CPU itself.
- With a larger word size you can put in more specialized instructions, like specialized direct hardware access, built-in functions (hardware graphics processing for example), hardware memory management, etc.
- Memory management: With a bigger word comes the possibility of being able to address more memory. Many 8 and 16-bit machines used a variety of schemes to be able to address as much memory as possible, often exceeding the limitations of their word size. Your typical 32 & 64-bit personal computer CPUs use memory registers that are equal to their word size giving them access to 4,294,967,296 and 18,446,744,073,709,551,616 bytes, respectively.
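As a rough sketch of the first and last points above, the maximum value and address space for each word size can be computed directly (plain Python, just restating the powers of two from the list):

```python
# Maximum unsigned value and addressable bytes for a given word size.
# This restates the powers of two mentioned in the list above.
def word_size_limits(bits):
    max_value = 2**bits - 1   # largest unsigned integer one register can hold
    address_space = 2**bits   # bytes reachable with one register as a pointer
    return max_value, address_space

assert word_size_limits(8) == (255, 256)
assert word_size_limits(16) == (65_535, 65_536)
assert word_size_limits(32)[1] == 4_294_967_296
assert word_size_limits(64)[1] == 18_446_744_073_709_551_616
```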
TL;DR
The difference in word size has a dramatic impact on the capabilities and performance of a given chip. Once you get up to 32-bits, the differences mainly become those of refinement (unless you are running a really big application, like genetic analysis or counting all the stars in the galaxy big).
I hope this ramble of an answer is of some help.
CyberSkull
The term '8-bit graphics' literally means that every pixel uses 8 bits for storing the color value - so only 256 options. Modern systems use 8 bits to store each color channel, so every pixel typically uses 24 bits.
There's nothing preventing modern games from limiting themselves to a stricter, 8-bit color palette; but the term is often used to describe old games in which using 8 bits per pixel was necessary.
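As a sketch of the distinction, in Python (the palette entries below are invented for illustration):

```python
# 8-bit color: each pixel is one byte, an index into a palette of
# at most 256 (r, g, b) entries.
palette = [(0, 0, 0), (255, 0, 0), (0, 255, 0)]  # illustrative entries
pixel_8bit = 1
assert palette[pixel_8bit] == (255, 0, 0)  # lookup needed to get the color

# 24-bit color: each pixel stores 8 bits per channel directly,
# so no palette lookup is needed.
r, g, b = 255, 0, 0
pixel_24bit = (r << 16) | (g << 8) | b
assert pixel_24bit == 0xFF0000
```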
Oak
Way back in the day, the bit size of a CPU was a reference to how wide the processor's registers were. A CPU typically has several registers in which you can move data around and do operations on it: for example, add two numbers together, then store the result in another register. In the 8-bit era the registers were 8 bits wide, so if you had a big number like 4,000 it wouldn't fit in a single register, and you would have to do two operations to simulate a 16-bit operation. For example, if you had 10,000 gold coins, you would need two add instructions: one to handle the lower 8 bits and another to add the upper 8 bits (with carrying taken into account). A 16-bit system could have done it in one operation. You may remember that in The Legend of Zelda you would max out at 255 rupees, as that's the largest unsigned 8-bit number possible.
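The two add instructions described above can be sketched in Python, emulating 8-bit registers with bit masks (a real 8-bit CPU would use an add-with-carry instruction; the function name is just for illustration):

```python
# Add two 16-bit numbers using only 8-bit operations, the way an
# 8-bit CPU must: low bytes first, then high bytes plus the carry.
def add16_on_8bit(a, b):
    lo = (a & 0xFF) + (b & 0xFF)
    carry = lo >> 8                              # 1 if the low add overflowed
    hi = ((a >> 8) + (b >> 8) + carry) & 0xFF
    return (hi << 8) | (lo & 0xFF)

assert add16_on_8bit(10_000, 10_000) == 20_000

# A single 8-bit register on its own wraps around at 255,
# like Zelda's rupee counter:
assert (255 + 1) & 0xFF == 0
```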
Nowadays registers in a CPU come in all different sizes, so this isn't really a good measure anymore. For example, the AVX registers in today's amd64 processors are 256 bits wide (for real), but the processors are still considered 64-bit. These days most people go by the addressing size the CPU is capable of supporting; the bit size of a machine really follows the hardware trends of the time. But I still go by the size of a native integer register, which seems correct even today and still matches the addressing size of the CPU as well. That makes sense, since the native integer size of a register is typically the same size as a memory pointer.
carlos
In addition to Oak's answer, the 8 bits for graphics not only limit¹ the color palette, but also the screen resolution to a maximum of 256 in each direction (e.g. the NES has 256×240 pixels, of which 256×224 are typically visible). For sprite graphics you need to split these 8 bits: e.g. to obtain 32 = 2⁵ different x-positions and 16 = 2⁴ different y-positions, you have 8×16 (2³×2⁴) pixels left for a sprite's resolution. That is why you get that typical pixel look.
The same applies to music: 8 bit means a maximum of 256 levels for your sound output (per sample; the temporal resolution is another issue), which is too coarse to produce sounds that don't sound like chiptune (or noisy, if you still try PCM sound) to the human ear. 16 bits per sample is what the CD standard uses, by the way. But '16 bit music' refers more to tracker music, whose limits are similar to those of popular game consoles with a 16-bit processor.
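Sample bit depth can be sketched in Python: quantizing the same sample to 8 and 16 bits shows how much coarser the 256-level grid is (the helper function is just for illustration):

```python
import math

def quantize(sample, bits):
    """Snap a sample in [-1.0, 1.0] to one of 2**bits signed levels."""
    levels = 2 ** (bits - 1)                 # e.g. 128 for 8-bit signed audio
    return round(sample * (levels - 1)) / (levels - 1)

x = math.sin(1.0)                            # an arbitrary analog sample
err8 = abs(x - quantize(x, 8))               # coarse: 256 levels total
err16 = abs(x - quantize(x, 16))             # fine: 65,536 levels (CD audio)
assert err16 < err8                          # 16-bit loses far less detail
```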
Another interesting point is that an 8-bit input device is limited¹ to 8 boolean button states, split up into the four directions of the D-pad plus four buttons. Or a 2-button joystick with 3 bits (a mere 8 levels, including the sign!) remaining for each of the x- and y-axes.
So, for genuinely old games, 8 bit / 16 bit might be considered to refer to the system's capabilities (but consider Grace's point about the inconsistency of the label '8 bit'). For a retro game, consider whether it would be theoretically possible to obey the mentioned constraints (neglecting shader effects like bloom), although you might have to allow some 'cheating' - I'd consider a sprite-based game using 8×16-square sprites still 8 bit, even if the sprites could float at any position in HD resolution and the squares were 16×16 pixels each...
¹) Well, obviously you can use two 8-bit values to circumvent that limit, but as BlueRaja points out in a comment on Grace's answer, with the accumulator register being only 8 bit as well, that would cause a performance loss. Also, it would be cheating your way to 16 bit, IMHO.
Zommuter
Despite all the interesting technical discussions provided by other contributors, the 8-bit and 16-bit descriptors for gaming consoles don't mean anything consistently. Effectively, 16-bit is only meaningful as a marketing term.
Briefly, in word size:
- The Super Nintendo uses the Ricoh 5A22 CPU (a 65c816 derivative), which has 16-bit index registers and opcodes that can process 16-bit numbers into a 16-bit accumulator, but it doesn't have the 16-bit register file we might associate with a typical 16-bit processor. I suppose this is a 16-bit word size in 650x terms, but it's strange terminology to me; I might rather say the 5A22 instruction set supports 16-bit operations. The 65c816 documentation does not in any location define words as any particular size.
- The Turbo Grafx 16 doesn't have native 16 bit operations, nor a 16 bit accumulator to store them in. Like the Super Nintendo, this is a 650x family CPU, but this one only supports 8 bit operations and has only 8-bit registers. If it has a word size, it is 8-bit.
- The Genesis/Mega Drive with the Motorola 68000 offers 32 bit word sizes (with 32 bit registers, and 32 bit operations) but was marketed with '16-bit' in the molded plastic. As a relatively new 32-bit cpu, and due to historical patterns, the 68k family names a 16-bit value a 'word', but has full native support for nearly all operations with 32-bit values named 'long'. This represents the beginning of the era when 'word size' had become a legacy concept. Previously, there were architectures with things like 9 bit words, or 11 bit words. From here on, word size becomes most commonly 'two 8-bit bytes'.
In addressing space:
Most 8-bit consoles had 16-bit physical addressing space (256 bytes wouldn't get you very far.) They used segmenting schemes but so did the Turbo Grafx 16. The Genesis had a cpu capable of 32-bit addressing.
In data bus:
The Turbo Grafx 16 and the Super Nintendo had an 8 bit data bus. The Genesis/Mega Drive had a 16 bit data bus.
In color Depth:
The total possible color palette is owned by the graphics circuitry, and the palette table is expressed however suits its needs. You wouldn't expect this to correlate much across systems, and it doesn't.
- The Super Nintendo had a 15 bit palette space, and 8 bits of space to select colors out of that space.
- The Genesis had a 9 bit palette space, with essentially 6 bits of space to select colors out of that space.
- The Turbo Grafx 16 also had a 9 bit palette space with a complicated scheme of many simultaneous palettes all of which were 4 bit.
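As a sketch of how such a palette entry works: the SNES's 15-bit entries pack 5 bits per channel in BGR order, and decoding one into familiar 8-bit RGB might look like this in Python (the scaling convention here is an assumption, one of several used in practice):

```python
# Decode a 15-bit SNES-style BGR555 palette entry into 8-bit RGB.
def bgr555_to_rgb888(entry):
    r = entry & 0x1F                   # bits 0-4: red
    g = (entry >> 5) & 0x1F            # bits 5-9: green
    b = (entry >> 10) & 0x1F           # bits 10-14: blue
    scale = lambda c: (c * 255) // 31  # stretch 5 bits across the 8-bit range
    return scale(r), scale(g), scale(b)

assert bgr555_to_rgb888(0x7FFF) == (255, 255, 255)  # all 15 bits set: white
assert bgr555_to_rgb888(0x001F) == (255, 0, 0)      # low 5 bits: pure red
```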
This doesn't fully describe graphics capabilities of the systems even in terms of colors, which had other features like special layer features, or specifics of their sprite implementations or other details. However, it does accurately portray the bit depth of the major features.
So you can see there are many features of systems which can be measured in bit-size which don't have a requirement to agree, and there is no particular grouping around any feature that is 16-bit for consoles grouped this way. Moreover, there is no reason to expect that consumers would care at all about word size or data paths. You can see that systems with 'small' values here were regardless very capable gaming platforms for the time.
Essentially '16-bit' is just a generation of consoles which received certain marketing in a certain time period. You can find a lot more commonality between them in terms of overall graphics capability than you can in terms of any specific bitness, and that makes sense because graphics innovation (at a low cost) was the main goal of these designs.
'8-bit' was a retroactive identification for the previous consoles. In the US this was the dominant Nintendo Entertainment System and the less present Sega Master System. Does it apply to an Atari 7800? A 5200? An Intellivision? An Atari 2600, a ColecoVision, or an Odyssey²? Again, there is no clear bitness boundary among these consoles. By convention, it probably only includes the consoles introduced from around 1984 to 1988 or so, but this is essentially a term we apply now that was not used then, and it refers to no particular set of consoles except by convention.
jrodman
When talking about retro gaming, 8-bit, 16-bit, and 64-bit simply refer to the amount of detail in the pixels used to create the images. For example, the NES and Sega Master System are very blocky with large pixels ('8 bit'), the SNES and Sega Genesis improve this to '16 bit', the N64 takes the concept to 64-bit, and so on to 128, to 256, and eventually to 1080 HD, even though the terms are and were slightly out of context.
Nintendo Power in the early 90s actually created these 'terms' when they released articles about how Nintendo's 8-bit power was so much better than Sega's. To each their own, but anyway, they did this because 99% of people would have no clue what they were actually talking about.
the avid nintendo freak