I don't know too much about computers, but I started to wonder one day why modern computers can't calculate something like the value of pi to as many decimal places as we want. Isn't the computer only doing one calculation at a time (the next step in the division process), storing the digit it gets, and then moving on to the next one? I mean, if each digit takes up one byte of information and you had a 30 gigabyte HD, shouldn't you be able to calculate pi to that many digits of precision?
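Just to make my mental model concrete, here's roughly the kind of step-at-a-time, digit-at-a-time process I'm imagining. This is only a rough sketch I put together (the function names are my own), using Machin's formula pi = 16*arctan(1/5) - 4*arctan(1/239) with plain integer arithmetic, so please correct me if this isn't how it's actually done:

```python
def arctan_inv(x, scale):
    """Return arctan(1/x) * scale using the alternating series
    arctan(1/x) = 1/x - 1/(3*x^3) + 1/(5*x^5) - ...,
    carried out entirely with Python's big integers."""
    power = scale // x          # scale / x^(2k+1), starting at k = 0
    total = power               # the k = 0 term
    k = 0
    while power != 0:           # stop once the terms fall below one scaled unit
        power //= x * x
        k += 1
        if k % 2:               # odd k: subtract this term
            total -= power // (2 * k + 1)
        else:                   # even k: add this term
            total += power // (2 * k + 1)
    return total


def pi_digits(digits):
    """Return pi * 10**digits as an integer (i.e. the digits of pi
    with the decimal point left out)."""
    scale = 10 ** (digits + 10)                 # 10 guard digits for rounding error
    pi_scaled = 16 * arctan_inv(5, scale) - 4 * arctan_inv(239, scale)
    return pi_scaled // 10 ** 10                # drop the guard digits


if __name__ == "__main__":
    print(pi_digits(50))    # prints 314159265358979323846... (50 digits after the 3)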
My thinking must be flawed somewhere in here, because if it were this easy, people would have done it already. If someone could point me in the right direction on this one, it'd be great. Thanks!