## Fallacies of computer topics: A physicist's point of view I

As a physicist, I have had many opportunities to use computers. My job has involved managing hardware and software for physics experiments, programming numerical calculations, and troubleshooting system problems.

However, I am not a computer expert. I don’t have a degree in computer science, and I don’t have experience developing computer software either. So you can see I am somewhere between a computer amateur and an expert. In this article, I would like to point out fallacies related to computer topics from a physicist’s point of view.

First of all, for people who are not very familiar with computers, the concept of bits is not well understood. A common mistake is to think that "64 bits is twice as much as 32 bits."

A bit is a unit for counting digits in the binary number system. One bit is one binary digit, which can represent two values: 0 and 1. If you have 2 bits, you get four possible values, since each of the two digits can take two values.
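As a quick sketch of this counting rule, the following Python snippet (names are my own, for illustration) enumerates every value a 2-bit number can take:

```python
from itertools import product

# Each bit can be 0 or 1; with 2 bits there are 2 * 2 = 4 combinations.
two_bit_values = list(product([0, 1], repeat=2))
print(two_bit_values)       # [(0, 0), (0, 1), (1, 0), (1, 1)]
print(len(two_bit_values))  # 4
```

Changing `repeat=2` to any other bit count shows the same pattern: n bits give 2 to the power of n combinations.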

Following this rule, we conclude that 32 bits can represent 2 to the power of 32 distinct values, and 64 bits can represent 2 to the power of 64. Therefore, 64 bits can represent 4294967296 times as many values as 32 bits. (You can simply calculate 2^{64} divided by 2^{32}.)
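The arithmetic above can be checked directly, since Python handles integers of this size exactly:

```python
# Number of distinct values representable in n bits is 2**n.
values_32 = 2 ** 32
values_64 = 2 ** 64

# The ratio is 2**(64 - 32) = 2**32, not 2.
ratio = values_64 // values_32
print(ratio)  # 4294967296
```

So doubling the number of bits squares the number of representable values, rather than doubling it.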

Certainly, the number of digits doubles, but when judging computational power, the number of cases that can be handled is the more meaningful figure.

This is the same as in physics: units tell you what the numbers are and give them their meaning.

Picture by John Kovacich