Digital or Analog

From iGeek
DigitalHand.png
What is Digital? Why not just use analog? Why is digital better? What do they mean?
ℹ️ Info
~ Aristotle Sabouni
Created: 2002-07-15

A lot of people hear and use the term "digital", but do not really know what it means. In humans, digital refers to digits (fingers); in clocks, it means the clock shows the actual digits (numbers) rather than hands that point at them. In computers, digital is another way of saying "binary". That means there are really only two states: either on or off, power signal high or low, the value zero or one -- those all mean the same thing. Analog, by contrast, has a lot of values between zero and one. So why are fewer choices better? That takes some explaining.

Binary

In binary, every larger value is built by grouping these on/off states. So if you need to represent a value between zero and seven, you group three digital lines (bits) together. Together, those three bits can represent 8 different values (zero through seven). A table to represent this would look as follows:

  • off, off, off = 0
  • off, off, on = 1
  • off, on, off = 2
  • off, on, on = 3
  • on, off, off = 4
  • on, off, on = 5
  • on, on, off = 6
  • on, on, on = 7

And you can add more and more "bits", until you get the resolution you need.
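
If code makes the grouping clearer, here is a tiny Python sketch (the function name and the 4/2/1 weights are just my illustration) that rebuilds the table above from three on/off bits:

def bits_to_value(bit2, bit1, bit0):
    # Each bit is True (on) or False (off); the weights are 4, 2, and 1.
    return (4 if bit2 else 0) + (2 if bit1 else 0) + (1 if bit0 else 0)

# Walk every on/off combination of the three lines, just like the table above.
for bit2 in (False, True):
    for bit1 in (False, True):
        for bit0 in (False, True):
            states = ", ".join("on" if b else "off" for b in (bit2, bit1, bit0))
            print(states, "=", bits_to_value(bit2, bit1, bit0))

Add a fourth bit and the same pattern gives you sixteen values, and so on.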

In contrast, analog is a way of having more than just on or off: states (voltages) that lie anywhere between minimum and maximum.

In analog, if you want to represent a value between zero and seven, you would need eight levels (steps) from minimum to maximum, one for each possibility. If the voltage was at the minimum the value would be a zero, one step up would represent a 1, two steps up a 2, and so on, up to a 7 at the maximum.
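
As a rough Python sketch of that mapping (the 7-volt full scale is a made-up number, purely for illustration):

MAX_VOLTS = 7.0                   # assumed full-scale voltage for this example
LEVELS = 8                        # eight possible values (0 through 7)
STEP = MAX_VOLTS / (LEVELS - 1)   # voltage difference between adjacent levels

def value_to_volts(value):
    return value * STEP

def volts_to_value(volts):
    # Read back by picking the nearest level; noise over half a step misreads.
    return round(volts / STEP)

print(value_to_volts(3))      # 3.0 volts stands for the value 3
print(volts_to_value(3.4))    # a little noise still reads back as 3
print(volts_to_value(3.6))    # a bit more noise and it misreads as 4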

Now to a lot of people it seems that digital and binary are more complex, and that you can get more information in analog because each line holds more possible values. And this is true. In fact, the first computers were analog for just that reason. But when you learn more about binary and electronics, things change.

The problem is that in electronics it takes time for a line to change from one voltage (level) to another, and then more time to stabilize/settle down (and not be too high or too low a voltage). The more steps (levels) you have, the more careful you have to be, and the longer it takes to set a good value and to be able to read that value; or in other words, the slower the computer has to be.

In digital electronics, you can set the on and off voltages very close together, which means it is quicker to change from one state to the other. You also don't really care if the voltage is higher than on (one), or lower than off (zero); it just has to pass some threshold, which you call zero or one. So digital electronics can change state faster (more times in the same amount of time); or in other words, it can do its calculations faster (you can look at the value many more times each second without waiting for voltages to "settle" or measuring them as closely).

Also in electronics, you can get "noise" or interference from other electronic devices, from static electricity, power lines, radio waves, from the sun, and so on. This interference can be more easily detected and ignored in digital electronics than in analog ones.
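
One hedged way to picture that in code (all the voltages and noise amounts below are invented): pass a level through several stages that each add a little interference. The digital side gets snapped back to a clean 0 or 1 at every stage, so the noise never builds up, while the analog side just carries it along.

import random
random.seed(42)

def noisy(volts):
    return volts + random.uniform(-0.3, 0.3)   # a little interference per stage

analog_level = 6.0    # an analog line trying to hold one particular level
digital_bit = 1       # a digital line trying to hold a "one"
for stage in range(5):
    analog_level = noisy(analog_level)                         # noise piles up
    digital_bit = 1 if noisy(5.0 * digital_bit) > 2.5 else 0   # re-snapped to 0/1
print("analog drifted to", round(analog_level, 2), "volts")
print("digital is still", digital_bit)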

💭 Anything else?
Quantum Computing: Now some person might ask if there are alternatives to digital or analog; and of course there are. One of them is something called a quantum computer, which exists in laboratories. Quantum mechanics gives me a headache, but basically it uses a concept called a quantum bit (or QuBit), which allows a "bit" to be in both states (on and off) at the same time. The potential is faster computers or new ways of thinking; but we are decades away from that. It takes a demented mind to really understand quantum mechanics, and now I need an aspirin for talking about it, so if you want to read more, go search the Internet.

For now, the choices are pretty much digital and analog. And for the last 50 years or so, it is easier to design digital computers (and many other devices) that run quicker and are more reliable than analog ones, and hopefully this article helped you understand why.

💭 Frankenstein's digital watch
This is an old engineering joke: "Did you hear about Frankenstein's digital watch? He used real digits." See, digits could mean real fingers, and, well, never mind... engineers have their own sense of humor, and it probably loses something in the translation.

More Analog and Digital

Imagine the signal hits certain levels over time. At some set period of time (the clock) we are going to "check" the level and get a value. Then we have some time while the electronics (the line level) try to settle to their new level and "beat the clock" -- so that when we sample again, things are all in place.

Here is a little timeline chart to help explain how an analog computer might work. It is similar to something you would see on a logic analyzer or oscilloscope. The line refreshes from left to right, over time. The following chart shows four discrete samples (from left to right), with sample values of 3, 2, 3, and 0. And there are basically four analog levels.

WhatsDigital-1.jpeg

Notice that the red line (the line level) is changing over time, but it can't magically/instantly jump from one voltage (level) to another; it can overshoot, then over-correct, and has a little "settle" time before it gets to the proper state. This means we can't sample until the line has had enough time to get to the point and settle in (stabilize), and you can't know when this is going to be done, since it varies for each change; so you just have to assume the worst case (usually from the maximum value to the minimum value, or vice versa) and sample slower than that. During most of the smaller changes (between two nearby levels) the line actually transitions quicker, since it has less far to go; the line level is actually ready early and we are just waiting around, because we can't be sure that it wasn't a larger transition until after we sample. If we didn't do this we would get a bad value (either too high or too low in voltage), and that would be an error or false reading.
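
To put rough numbers on that worst-case waiting (the times below are invented, just to show the arithmetic):

worst_case_swing_us = 8.0   # assumed time for a full minimum-to-maximum swing
settle_time_us = 2.0        # assumed extra time to stop over/under-shooting
safe_period_us = worst_case_swing_us + settle_time_us
max_samples_per_second = 1_000_000 / safe_period_us
print("must wait", safe_period_us, "microseconds per sample")
print("so at most", int(max_samples_per_second), "samples per second")

Even if most transitions finish early, the sampling clock is stuck at the speed of the slowest one.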

Now let's compare this to what might happen to a digital/binary signal over time.

WhatsDigital-2.jpeg

First notice that we don't need as much voltage, because the resolution (detail) is less -- it is either on or off, with no "degrees" or multiple steps in between -- so the electronics can be simpler. Less voltage means that things run cooler (require less power), and the time (distance) to transition from minimum to maximum is decreased as well. So since the level has less distance (time) to travel, things are faster.

Also, unlike analog, the digital signal only has to go beyond a threshold. It doesn't have to be completely "settled in" and be right on -- it can be measured even when it has spiked well past the threshold, with no fear of this giving a false value. Let's face it, it is either on or off. This is a gain, because we don't have to wait for the device to settle (as much) -- and another gain because we can just overdrive things (or over-drop them) and not have to worry about what this will do to the settle time or other levels. That overdriving allows us to increase the speed of the transition even further.

In our little sample, if we were to overshoot 5 volts and go to 6 volts, we would still get a 1 (on) value. If we were off by that same 20% on the analog sample, we would probably get a false value (reading) and introduce an error. So digital is more reliable (resistant to noise).

In fact, for clarity of signal, notice that the analog sample only has about 3 volts between the different values, while the digital signal has at least 5 volts (and can be a bit more) between levels. So there is a wider spread between values on the digital sample, even though there is less total spread across all values. Again, this means more resistance to noise or errors, and a clearer, more discrete signal. And the more levels (steps) you add to the analog signal, the more susceptible to noise it becomes.
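
To put numbers on that, here is a small Python simulation. The 3-volt spacing between analog levels and the roughly 5-volt digital swing come from the example above; the 2.5-volt threshold and the couple of volts of noise are my own assumptions.

import random
random.seed(1)

ANALOG_STEP = 3.0           # analog levels about 3 volts apart (as above)
DIGITAL_THRESHOLD = 2.5     # assumed threshold for the digital line

def read_analog(volts):
    return round(volts / ANALOG_STEP)

def read_digital(volts):
    return 1 if volts > DIGITAL_THRESHOLD else 0

analog_errors = digital_errors = 0
for _ in range(10_000):
    noise = random.uniform(-2.0, 2.0)
    if read_analog(6.0 + noise) != 2:     # analog line driven at the "2" level
        analog_errors += 1
    if read_digital(5.0 + noise) != 1:    # digital line driven at "on"
        digital_errors += 1
print("analog misreads:", analog_errors)    # noise past 1.5 volts flips the level
print("digital misreads:", digital_errors)  # never gets anywhere near the threshold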

So by going digital over analog we've increased the speed (more samples in the same amount of time), decreased the heat and power, simplified the design, and increased the reliability. All of them are wins so far.

The devil is in the details

Of course there are still details -- the biggest issue against digital is that we lost "resolution". The analog sample had more information (4 levels, either 0, 3, 6 or 9 volts -- representing a 0, 1, 2, or 3), and the binary sample has two levels (on or off -- 0 or 1). We can make up for this loss (or lesser information) in a few different ways.

If you don't understand binary counting, then I recommend you read the article on computer counting: http://www.igeek.com/browse.php?id=1057

One way is by just sending more samples (sequentially or serially). We can take two samples in a row and pair them up -- two bits of binary data give you four possible levels -- which gives us the same detail as our analog sample. This is basically how a "serial port" sends a stream of bits and builds them into bytes of data -- but this is getting off topic. Since it takes two digital samples to equal the resolution of one analog one, if you can send the digital samples over twice as fast then you are still ahead. My example only shows the digital as being about 2.5 times faster than the analog one -- but in the real world it is probably more like 8 or 10 times faster or more. And the more resolution in the analog sample, the harder (slower) it can be to get accurate samples.
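
If the pairing-up is easier to see in code, here is a rough Python sketch (not how a real serial port is built, just the arithmetic of turning a stream of one-bit samples into bigger values):

def pack_bits(bits):
    # Bits arrive oldest first; each one is 0 or 1.
    value = 0
    for bit in bits:
        value = (value << 1) | bit
    return value

print(pack_bits([1, 1]))                     # two samples -> 3 (one of four levels)
print(pack_bits([1, 0, 1, 1, 0, 0, 1, 0]))   # eight samples -> one byte: 178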

The other way of sending more digital data, and the way that is used inside computers more often, is by sending lots of samples in parallel (at the same time). Instead of just one binary line, they run many. This offers an even greater performance increase. Look at the following example with 3 lines (bits) of resolution.

WhatsDigital-3.jpeg

Our 3 bits (lines) of binary data give us twice the resolution of our analog example -- and we can still take more samples per second. So it is faster, and has more resolution. And I only chose 3 bits (lines). In the 1970s computers used 8 bits at a time (256 levels), and modern computers use 32, 64 or even 128 bits at a time. This is far more resolution than an analog computer could handle on a single line.
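
As a sketch of reading several lines at once (the sample rates below are invented, loosely echoing the 2.5-times figure from earlier):

def read_parallel(lines):
    # lines is (bit2, bit1, bit0), most significant first; each is 0 or 1.
    bit2, bit1, bit0 = lines
    return (bit2 << 2) | (bit1 << 1) | bit0

print(read_parallel((1, 0, 1)))   # the three lines together read as the value 5

digital_samples_per_sec = 2_500_000   # assumed digital sample rate
analog_samples_per_sec = 1_000_000    # assumed analog sample rate
digital_bits_per_sec = digital_samples_per_sec * 3   # 3 lines = 3 bits per sample
analog_bits_per_sec = analog_samples_per_sec * 2     # 4 levels = 2 bits per sample
print(digital_bits_per_sec, "vs", analog_bits_per_sec)   # digital moves far more data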

On a side note, there are complex issues with sending lots of parallel bits of information over long distances. Basically, one line can create interference that bothers the other lines, and creates noise (messes with their signal/level) if they are too close. Inside a computer, or a chip, this is easier to address and control, because there isn't as much outside interference and the distances are shorter, so most things are done in parallel. Outside the computer, on a wire, it is far harder to control, and the distances are a lot larger, so most of those problems are solved serially.

What about infinite levels?

The analog example I used was doing something that could be called discrete-analog -- where the analog level is expected to be at an absolute (discrete) value, and not wandering anywhere in between. You could allow a different type of analog, where the signal is some floating level (an infinite-degrees analog); let's call that floating-analog. Floating-analog still has the same issues of settle time and speed, just more resolution crammed into the same space (theoretically an infinite amount). Yet the practicalities of noise and the resolution of the electronics mean that "infinite" is really a not-so-detailed "finite". In fact, it usually has fewer levels (in practical terms) than a digital solution could -- this is one of the reasons why things like a CD player can sound so much better than, say, an old cassette or AM station. I was also talking about an analog computing device -- and computing demands precision and repeatability. This type of floating-analog gets even more errors (it is susceptible to noise and the environment, and gets "slop" in the signal), and all that noise in the signal means that your computing device really becomes an approximation device (as it does not get the same results consistently). So I wouldn't call that an analog computer -- just an analog approximator.
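
One hedged way to see the repeatability problem in code (the noise amount is invented): do the same tiny "computation" a few times with a little slop on the analog level, and it stops giving the same answer, while the digital version always does.

import random
random.seed(0)

def analog_add(a, b):
    # Pretend the result rides on a voltage that picks up a little noise.
    return (a + b) + random.uniform(-0.2, 0.2)

def digital_add(a, b):
    return a + b   # exact: each bit either cleared the threshold or it didn't

for _ in range(3):
    print("analog:", round(analog_add(2, 3), 3), " digital:", digital_add(2, 3))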

Conclusion

The only thing constant is change. The way we design and manufacture computers and storage for now makes digital the better solution -- but there is all sorts of research and new concepts being implemented. We have some biological-based storage, electrochemical storage, holographic (light) storage, optical computers, and so on. A major breakthrough in some area might totally change the rules and make analog (multilevel) computing or storage more viable and cost effective again -- so don't rule anything out. I'll just go insane if we start making quantum storage devices and I have to figure out how they work. But for now, and the immediate future, it looks like the world will stay digital -- and I'll retain my tenuous grasp on sanity.

I hope this helps explain more clearly why computers aren't analog. It isn't that analog is bad, or that it can't be done -- some of the early computers, and some research computers, have been analog. It is just that digital is simpler and faster -- which also means cheaper and more reliable. Digital is also very versatile, in that you just pair up more samples (add more bits of resolution) to get more detail -- and it can have more detail (discrete levels) than any analog computer (single line) ever could. So we learned through experience that for computers (for now), digital is better.




Tags: Tech  Hardware
