64 bit

From iGeek
Bits of bits... how many bits should my computer be and why should I care? It mattered before 64 bits (2002 or so). After we got to 64 bit computing, this became ancient history. The idea was if 16 is good, then 32 must be twice as good, and then 64 has to be great. The truths of engineering aren't that clear. 32 bit did 99% of what people needed. 64 covered the rest.
ℹ️ Info          
~ Aristotle Sabouni
Created: 2002-10-14 

So what is 64 bits?

There are three ways to measure a processor's "size":

  • How many bits of data a processor works with
  • How many bits a processor uses to address memory
  • How many bits can move around at once

History of Data Size

A processor works on a certain amount of data at a time.

  • Microcomputers started at 4 bits (with the Intel 4004). That turned out to be too little data to do anything of value -- even then, a character usually took 8 bits, so with a 4 bit computer you always had to do multiple 4 bit instructions to process a single 8 bit chunk (character) of data. That's not optimum.
  • Quickly, 8 bits became the standard. 8 bits made sense since a single character of text (upper or lower case, plus all numbers and symbols) took 7 or 8 bits to encode. So 8 bits was a good size... and that lasted for a few years.
  • While 8 bits was good for characters (back when characters were only 8 bits), it wasn't as good for number crunching. An 8 bit number (2^8 values) can only hold a whole number between 0 and 255. To do heavy math, you needed to work with more bits at once. The more the merrier. 16 bits could get you a value between 0 and 65,535 (an integer), or -32,768 to +32,767 if you liked signed math -- which is a lot more detail in a single pass. On top of that, instead of just having 256 different characters (Roman alphabet with symbols and accents), we went to Unicode (UTF-16), which usually used 16 bits of data and allowed for 65,000+ characters, which could add in most other languages.
  • While 16 bits was better than 8 for math, lots of numbers in daily use are larger than 65,000 -- so 16 bits was also requiring double-passes to get things done. Thus if 16 bits was better for math, then 32 was better still. 32 bits allowed a range of 0 to 4,000,000,000 (or -2B to +2B signed). That was good enough for about 99%+ of integer math. And with some tricky encoding, you could actually get a near infinite range of numbers with 8 digits of accuracy (fixed or floating-point math: a concept where the computer sacrifices some of the resolution of the number so that it can have a mantissa (multiplier), basically allowing numbers much larger, much smaller, and with a decimal point).
  • Then along came 64 bits, and since this stuff is exponential, it gave us a lot more headroom for scientific stuff -- in a single pass (instruction). You could always do 64 bit or 128 bit math, even with a 4 bit processor, it just took a lot more passes (instructions). While 32 bits was good enough for most things (and worked from the mid 80's until the mid 2000's), for some scientific applications (floating point, and large integers), 64 bit was better.
In the early 1980's people used to add special "floating point processors" or FPUs (Floating Point Units) to help the main processor do this kind of math -- and make microcomputers behave like big mainframes and lab computers. By the early 90s, floating point units got added to the main processors (and are integral) -- and we've stayed there ever since. But there is a separation between kinds of data: 32 bits for integers (or short floats), and 64 bits for long floats.
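To make the single-pass idea concrete, here's a minimal C sketch (using the standard fixed-width types; the formatting is just for illustration) showing what each integer width can hold in one chunk, how a narrow integer wraps around, and roughly how much precision the two common float sizes carry:

#include <stdint.h>
#include <stdio.h>
#include <float.h>

int main(void) {
    /* The biggest whole number each fixed-width integer can hold in one chunk. */
    printf("8-bit  max: %llu\n", (unsigned long long)UINT8_MAX);   /* 255 */
    printf("16-bit max: %llu\n", (unsigned long long)UINT16_MAX);  /* 65,535 */
    printf("32-bit max: %llu\n", (unsigned long long)UINT32_MAX);  /* ~4.29 billion */
    printf("64-bit max: %llu\n", (unsigned long long)UINT64_MAX);  /* ~18.4 quintillion */

    /* Overflow: an 8-bit value wraps around, which is why narrow processors
       needed multiple passes (instructions) to handle bigger numbers. */
    uint8_t small = 255;
    small = (uint8_t)(small + 1);   /* wraps back to 0 */
    printf("255 + 1 in 8 bits = %u\n", small);

    /* Floating point trades exact resolution for range: a 32-bit float keeps
       roughly 6-7 decimal digits, a 64-bit double roughly 15. */
    printf("float:  %zu bytes, ~%d digits\n", sizeof(float), FLT_DIG);
    printf("double: %zu bytes, ~%d digits\n", sizeof(double), DBL_DIG);
    return 0;
}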

How many bits of data

When we used to ask how many bits of data a processor worked with, it was easy. There was one unit, and it always worked in that number of bits.

Nowadays there are 3 primary ALUs (arithmetic units), and each works on different sizes:

  • Integer units are for smaller sized stuff
  • Floating point units are for higher resolution math
  • Vector units are even larger registers (128 or 256 bits) for doing the same thing to multiple smaller sized things (often many 8 or 16 bit chunks at the same time). Great for managing pixels, or characters (see the sketch after this list).
  • GPU is like a vector unit on steroids: it can have hundreds of processors paired together, that all do the same thing to multiple smaller data chunks at the same time. Great for graphics.
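As a rough illustration of that "same operation on many small chunks at once" idea, here's a small C sketch using x86 SSE2 intrinsics (this assumes an x86 processor and a compiler that ships <emmintrin.h>; the 16-pixel brightening is just a made-up example):

#include <emmintrin.h>
#include <stdio.h>

int main(void) {
    /* Sixteen 8-bit pixel values packed side by side. */
    unsigned char pixels[16] = { 10, 20, 30, 40, 50, 60, 70, 80,
                                 90, 100, 110, 120, 130, 140, 150, 160 };
    unsigned char bump[16];
    for (int i = 0; i < 16; i++) bump[i] = 5;      /* brighten every pixel by 5 */

    /* Load both arrays into 128-bit vector registers... */
    __m128i a = _mm_loadu_si128((const __m128i *)pixels);
    __m128i b = _mm_loadu_si128((const __m128i *)bump);

    /* ...and do all sixteen 8-bit additions with one instruction. */
    __m128i sum = _mm_add_epi8(a, b);
    _mm_storeu_si128((__m128i *)pixels, sum);

    for (int i = 0; i < 16; i++) printf("%u ", pixels[i]);
    printf("\n");
    return 0;
}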

This is all great for data fidelity -- but we also want to deal with more data as well: how much the computer can address (or see/access at one time).

How many bits of address

Now a computer has an address for each and every memory location. 8 bits of address means that your computer can address 256 locations -- usually these were each one byte long, but in theory, they could be as wide as the computer needed them to be.

256 addresses isn't much -- so even 8 bit computers would often use 16 bits of address, letting them reach 65,536 bytes (or address 64K of memory). You'd be surprised what we could do with computers back then, even with that little memory. (The 70s were 16 bit addressing... mostly.) Now the little controller in your mouse is more powerful than the 1970s computers that I started programming on.

32 bit addresses caught on in the mid 80s and were popular a lot longer. A 32 bit address can deal with 4 billion addresses (4 gigabytes of memory). 32 bit addresses have been standard for quite some time, and will be for a while. But we are starting to get to the point where 4 gigabytes of RAM isn't that much. For some large databases or large 3D or math problems, 4 billion locations is very small. Now most of us aren't mapping the human genome on our home computers, so it isn't like we're all bumping our heads daily. But it is getting to the point where video and graphics work especially could use more space. So we want to make room for it now. And so designers are looking at jumping to 64 bits of addressing, or roughly 16 exabytes of memory, to prepare for the future.

Now an exabyte is a quintillion memory locations: 1,000,000,000,000,000,000 of them, so 16 exabytes is enough to give every person on Earth a couple of billion locations each. We shouldn't bump our heads on that limit any time soon. The naming goes: mega (million), giga (billion), tera (trillion), peta (quadrillion), exa (quintillion).
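A quick C sketch of that arithmetic (just computing 2 to the power of the address width, assuming one byte per address):

#include <stdint.h>
#include <stdio.h>

int main(void) {
    int widths[] = { 8, 16, 32 };

    /* 2^width addressable bytes, for the widths that fit in a 64-bit shift. */
    for (int i = 0; i < 3; i++) {
        uint64_t locations = (uint64_t)1 << widths[i];
        printf("%2d-bit addresses reach %llu bytes\n",
               widths[i], (unsigned long long)locations);
    }

    /* 2^64 itself overflows a 64-bit integer, so state it directly:
       18,446,744,073,709,551,616 bytes, i.e. roughly 16 exabytes. */
    printf("64-bit addresses reach 18,446,744,073,709,551,616 bytes (~16 exabytes)\n");
    return 0;
}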

For a few problems the extra address space helps, but not as much as you might think. 64 bits of addressing is a heck of a lot of memory, and as I said, 32 bits is good enough for most users today (and for the next 4 or 5 years or so). So going from 32 to 64 bit addressing isn't a huge win for the average user, most of the time. And it comes with a cost: if you have to double the size of every address, everything gets bigger. (It takes more memory, and has to move more stuff around to do the same job.)
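A hedged illustration of that cost in C (exact sizes depend on the compiler and its padding rules): every pointer doubles from 4 to 8 bytes when you rebuild for 64 bit, so pointer-heavy structures grow even though they hold the same data.

#include <stdio.h>

/* A typical linked-list node: one integer of payload, two pointers of overhead. */
struct node {
    int value;            /* 4 bytes on either build                      */
    struct node *next;    /* 4 bytes on a 32-bit build, 8 on a 64-bit one */
    struct node *prev;
};

int main(void) {
    /* Commonly 12 bytes when built 32-bit, 24 bytes when built 64-bit
       (the int gets padded so the pointers stay aligned). */
    printf("sizeof(void *)      = %zu bytes\n", sizeof(void *));
    printf("sizeof(struct node) = %zu bytes\n", sizeof(struct node));
    return 0;
}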

The common work-around for 32 bit computers is that the computer just keeps many pages of 4 gigabyte chunks, and flips around which 4GB page it is looking at at any one time. This only rarely costs much in overhead (for paging around), so 32 bit addresses lasted for 30+ years, and even with 64 bit (or larger) computers, many will stick with smaller addressing (32 or 40 bits).
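A toy C sketch of that windowing idea (the helper function and the numbers here are hypothetical, just to show the mechanics): the program only ever hands out a 32-bit offset, and a separate selector says which 4GB chunk of physical memory that offset lands in.

#include <stdint.h>
#include <stdio.h>

/* Hypothetical helper: combine a window selector with a 32-bit offset to
   pick a spot in a physical memory space bigger than 4GB. */
static uint64_t physical_address(uint32_t window, uint32_t offset) {
    /* Each window covers 2^32 bytes (4GB); the selector picks which chunk. */
    return ((uint64_t)window << 32) | offset;
}

int main(void) {
    /* Offset 0x1000 inside window 3 lands 12GB + 4KB into physical memory. */
    printf("0x%llx\n", (unsigned long long)physical_address(3, 0x1000));
    return 0;
}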

How many bits can move at once

While the computer itself is some number of bits, sometimes it talks to memory (or peripherals) on smaller or larger buses (connections). Obviously being the same width as the processor is good. But since processors are faster than memory, what if it could load 2 things at once? That would keep the processor fed better. A few designs did this, but all those connections on a bus are expensive and hard to run over distances (even as small as inside a computer case), so often the bus is narrower than the CPU. Internal to the CPU, when it's connecting one part to another, things can run much wider. But there is a balancing act in design, between all the sizes in your system. And if you make one part 10 or 100 times faster than the rest, it is just wasted potential, because it sits and waits for the other parts to catch up all the time.

Conclusion

There is a Murphy's law of communication (or there should be) -- that no matter which way you mean something, others will assume you mean it a different way. And when talking about size, you could mean data size, path size (bus or internal), or address size. Generally, when we're talking nowadays about chip size (how many bits), we mean whether it has full 64 bit, non-paged address and integer (data) support -- since it already has 64, 128 or 256 bit support for other things.

Mostly, computers need to be balanced: how fast the processor is against how fast the memory is, and what the program you're using actually needs. More than that just wastes battery or something else... so there are reasons that computers have been 64 bits (mostly) for the last couple decades, and will likely remain so for a lot longer: it fits the problems we're doing. There are special units for special functions that work a lot larger -- but they're special units because most of the time they are not needed. So I think of it like a range extender on an electric car: great when you need it, but just something extra to haul around when you don't.





Tags: Tech  Hardware  Programming
