Here's my question, based on what I understand from the book I'm reading. Hopefully someone can follow my rough idea:
1) Say we have 64 bits. Each bit is either 0 or 1, so 64 bits give us only 64 spots to store 0s and 1s. But 10^308 is a very long (and huge) number: written out in decimal, it needs over 300 digits, i.e., 10…000. Then how could it be possible for a computer to store this number? I'm totally lost here.
2) The "uint64" data type means the computer requires 64 bits to store such a number. The maximum integer this type can store is 2^64 − 1. Compare that to "double", which also uses 64 bits but stores fractions. Yet the largest number it can store is about 1.79×10^308, which is vastly larger than 2^64. How could this be? I mean, why don't we just throw away (literally throw away) "uint64", since it uses the same amount of memory as "double" but "double" can store even larger numbers?
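For example, here is what I see when I experiment in Python (I'm assuming Python's float is an IEEE 754 double, which I believe is the same "double" the book means):

```python
import sys

# Largest value a 64-bit unsigned integer can hold: 2^64 - 1
uint64_max = 2**64 - 1
print(uint64_max)            # 18446744073709551615, about 1.8e19

# Largest finite double (IEEE 754 binary64): about 1.79e308
print(sys.float_info.max)

# But doubles seem to trade precision for range: integers above 2^53
# can no longer all be represented exactly.
print(float(2**53) == float(2**53) + 1.0)  # True -- the +1 is lost
```

So it looks like the double can reach much larger magnitudes, but then it can't tell some nearby integers apart, whereas uint64 is exact over its whole range. Is that the reason we keep both?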
Unless I'm crazy here, or I'm misunderstanding something. Could someone please help explain?
Thanks.
Best Answer