Thinking back on the history of video games, you’re likely familiar with technological terms like “8-bit” or “32-bit.” But do you know what the word “bit” actually means? That’s far less likely.
So, what is a bit, exactly? And what does a bit stand for in the world of computing? Let’s take a closer look at the bit, paying close attention to its meaning and what it represents in the digital communications and computing industry.
A Bit in Computing Explained
Simply put, a “bit” represents the simplest, most basic unit of information in the world of computing and digital communication. Its name is a contraction of “binary digit”: the “b” comes from “binary,” and the “it” comes from “digit.” Put them together and that spells “bit.”
Functionally, the bit serves as a unit that describes a logical state. The “logical state,” also known as the “truth value,” is an expression used in mathematics and logic that indicates whether a proposition is true or false. In basic terms, the logical state can only have two values: true or false. These values can be represented by the numbers 1 and 0.

©Yurchanka Siarhei/Shutterstock.com
With this in mind, a bit can hold just one of two possible values: 1 or 0. No more, no less. These binary values are almost always written as a 1 or a 0, but it’s not out of the ordinary to see them represented in other two-state terms: true or false, yes or no, on or off, plus or minus, and so on.
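To make that concrete, here is a minimal Python sketch (the variable names are purely illustrative) showing that 1/0, True/False, and on/off are all just labels for the same underlying two-state value.

```python
# A single bit can only take one of two values. In Python, the same
# two states can be written as integers or as booleans.
bit_on = 1
bit_off = 0

# Booleans map directly onto the same two values.
print(int(True), int(False))   # 1 0
print(bool(1), bool(0))        # True False

# Other labels ("on"/"off", "yes"/"no") are simply conventions layered
# on top of the same two-state value.
labels = {1: "on", 0: "off"}
print(labels[bit_on], labels[bit_off])  # on off
```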
Interestingly enough, which value stands for which state is purely a matter of convention. In other words, different assignments of 1 and 0 can be used in different parts of the same program or device. The unit dates all the way back to 1732, when Basile Bouchon and Jean-Baptiste Falcon encoded data as discrete bits on perforated paper tape and punched cards.
Bits were developed in bits and pieces (no pun intended) over the next couple of centuries, popping up in Morse code and stock ticker machines before they were eventually adopted by early computer makers such as IBM. While it sounds nearly identical, the bit differs quite significantly from the byte: the former is a single binary digit, while the latter is a group of eight bits.
Other Units of Information in Computing
It doesn’t get any smaller than a bit in computing. It’s the most basic, least complex unit of information in all of computing. As such, after the bit, there’s nowhere to go but up.

©iStock.com/iambuff
Furthermore, at the heart of every larger unit lies nothing more than a collection of bits. In truth, the units below simply serve as names for groups of bits. Here are some of the most commonly used examples.
The Byte
If a bit in computing represents a single binary digit, then a byte represents eight binary digits. For this reason, you might also see a byte described as an octet. With each bit representing either a 1 or a 0, there are 2^8, or 256, different possible values within a single byte.
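Here is a quick Python sketch showing where that figure of 256 comes from: eight bits, each with two possible states, give 2 raised to the power of 8 combinations.

```python
# A byte is a group of eight bits, so the number of distinct values
# it can hold is 2 raised to the power of 8.
BITS_PER_BYTE = 8
print(2 ** BITS_PER_BYTE)        # 256

# The smallest and largest values a single byte can hold,
# shown as eight binary digits.
print(format(0, "08b"))          # 00000000
print(format(255, "08b"))        # 11111111
```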
The Nibble
So, if a bit in computing represents the binary digit, and a byte represents eight bits, then a nibble represents half a byte. (Or four bits.)
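A short sketch of how that plays out in practice: splitting a byte into its high and low nibbles is a classic use of bit shifts and masks (the example value below is arbitrary).

```python
# A nibble is four bits, i.e. half a byte.
byte_value = 0b1011_0110         # 182 in decimal

high_nibble = byte_value >> 4    # top four bits:    0b1011 -> 11
low_nibble = byte_value & 0x0F   # bottom four bits: 0b0110 -> 6

print(format(high_nibble, "04b"), format(low_nibble, "04b"))  # 1011 0110
print(2 ** 4)                    # 16 possible values per nibble
```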
The Crumb
Following this train of thought, a crumb is half a nibble, which — in turn — is half a byte, which — as we know — is eight binary digits. To put it another way: a crumb is two bits, or a quarter of a byte.
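In the same spirit, a quick sketch showing that one byte holds four crumbs, each of which can take one of four values (again, the example value is arbitrary).

```python
# A crumb is two bits, so one byte contains four crumbs, each of
# which can hold one of 2 ** 2 = 4 values (0 through 3).
byte_value = 0b11_01_10_00

crumbs = [(byte_value >> shift) & 0b11 for shift in (6, 4, 2, 0)]
print(crumbs)      # [3, 1, 2, 0]
print(2 ** 2)      # 4
```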
The image featured at the top of this post is ©gonin/Shutterstock.com.