These look like interesting links. I'll add them to my reading queue, which gets longer every day. However, I don't need to read these articles to know that they can't possibly prove what you claim. That's because nobody can generate perfectly random numbers, for two fundamental reasons.
1) We don't know, and can't ever prove, that there are any truly random processes in the world.
2) And even if there are, any physical implementation would be subject to design bias and physical error.
So we could never have a mechanism that generates truly random numbers.
Even if that is slightly hyped,
I'm so happy you agree!
algorithmically generated true randomness 'fapp' has been possible for a long time.
"Fapp" (presumably "for all practical purposes") means something else, at least in the US. Better to use a different acronym.
Why should it need to be perfectly random anyway?
Ah. Because Michael 345 suggested using random numbers to test reality:
I did think I could test throwing random numbers into a fixed running program (Physics) without changing the fixed values of the running program but making it swerve away from its previously set destination outcome.
It seems to me that for this purpose, "almost random" or "sort of random" numbers won't do. So I asked:
How do you generate a random number? Is it even possible?
That's when you jumped in. I certainly agree that for most practical purposes, "sort of" random processes work just fine. Coin-flipping is a common example. Taking the low-order bit of the femtosecond timestamp of the next cosmic ray to hit a particular detector is another. Neither is truly random. Both are "good enough" random.
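To make "good enough" concrete, here's a toy sketch in Python. The nanosecond timer stands in for the femtosecond cosmic-ray detector, and the whole thing is illustrative, not a serious entropy source:

```python
# Toy sketch of a "good enough" random bit source: the low-order bit of a
# high-resolution timestamp sampled at irregular moments. The nanosecond
# timer stands in for the femtosecond cosmic-ray detector; the output is
# unpredictable in practice, but in no way provably random.
import time

def timestamp_bit() -> int:
    """Return the low-order bit of the current nanosecond timestamp."""
    return time.perf_counter_ns() & 1

def good_enough_bits(n: int) -> str:
    """Collect n bits, burning a little CPU between samples so that
    scheduler and clock jitter, not the loop itself, dominate the low bit."""
    bits = []
    while len(bits) < n:
        for _ in range(10_000):  # irregular delay from OS scheduling noise
            pass
        bits.append(str(timestamp_bit()))
    return "".join(bits)

print(good_enough_bits(64))
```

Bits like these pass the "practical purposes" test, but nothing in the code (or in the cosmic-ray version) rules out bias or hidden determinism, which is exactly the point.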
But Michael 345 wants to test physics. And for that, we need true randomness. Of which there is no such thing.
Just surmise that, via pencil & paper, you could write down TM code(s), implemented in that sophisticated network of TMs, that managed the feat of reproducing internal sensation - say colour.
There is no way you as code writer would ever perceive that sensation via writing down those code lines while labouring away for however many years it took.
Yes, truly. This is one of my favorite arguments against the computational theory of mind. If a mind is a computation, then it can in principle be carried out using pencil and paper. So if I sat at a desk day in and day out, tediously executing the "mind" algorithm one instruction at a time, exactly the way a supercomputer does, only slower, I would thereby create or instantiate a mind. And where would that mind live? Inside my own mind? In disembodied space? In Searle's Chinese room?
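To see that hand-execution is perfectly coherent, here is what a single TM step amounts to. This is my own toy simulator, not anyone's "mind" algorithm; the machine below merely increments a binary number. Every step is one table lookup and one cell write, and nothing in the rules cares whether a step takes a nanosecond or a week:

```python
# A minimal Turing machine simulator. Each step is a single table lookup and
# a one-cell write - exactly the move a person could make with pencil and
# paper. The example machine increments a binary number.
def run_tm(rules, tape, state, head=0, blank="_"):
    """Run until the machine reaches the state 'halt'; return the tape."""
    tape = dict(enumerate(tape))  # sparse tape; missing cells read as blank
    while state != "halt":
        symbol = tape.get(head, blank)
        write, move, state = rules[(state, symbol)]  # the entire "physics"
        tape[head] = write
        head += {"L": -1, "R": 1, "N": 0}[move]
    cells = range(min(tape), max(tape) + 1)
    return "".join(tape.get(i, blank) for i in cells).strip(blank)

# Increment rules: flip trailing 1s to 0s, then turn the first 0 (or blank)
# into a 1. The head starts on the rightmost digit.
INC = {
    ("inc", "1"): ("0", "L", "inc"),
    ("inc", "0"): ("1", "N", "halt"),
    ("inc", "_"): ("1", "N", "halt"),
}

print(run_tm(INC, "1011", "inc", head=3))  # 1011 + 1 -> 1100
```

Whether `run_tm` is executed by a CPU in microseconds or by me at a desk over years, the sequence of tape configurations is identical. That is the substrate-independence the argument leans on.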
But with sufficient computational speed, the TM network would. So speed matters.
Perfectly possible. Run a computation fast enough, and it does something qualitatively different from when you run it slowly. It has "emergent" properties. We often hear this argument.
For all I know, it's true. But one thing I am certain of: whatever emergent qualities arise from running a given algorithm faster, those qualities cannot be computational. Because it's inherent in the nature of computation that implementation details and speed do not matter. We would be quite surprised if running the Euclidean algorithm on a supercomputer not only found GCDs, but also played chess. That would be astonishing. It does not happen.
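For the record, here is Euclid's algorithm with an adjustable per-step delay; a small sketch showing that slowing execution down changes nothing about what is computed:

```python
# Euclid's algorithm with an optional delay per step. The delay alters how
# long the run takes, but the sequence of states - and the answer - are the
# same at any speed.
import time

def gcd(a: int, b: int, step_delay: float = 0.0) -> int:
    """Greatest common divisor; step_delay slows each iteration."""
    while b:
        time.sleep(step_delay)
        a, b = b, a % b
    return a

assert gcd(1071, 462) == gcd(1071, 462, step_delay=0.01) == 21
```

No value of `step_delay`, large or small, makes `gcd` play chess.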
But you (and many others) say that when we run an algorithm faster, it DOES do new things, like develop a mind. If it does, then mind is not computational. Because if it were computational, then a mind would be produced by the algorithm at very low speed as well.
Think of it the other way around. A landscape painting is nowadays easily converted into a string of digits, as, say, a PNG file. But you, just reading that digitized string of 1's and 0's, will never experience the sensation 'landscape painting'.
Of course. Yes. Intentionality. Searle's point of view. It's we humans who give meaning, or "aboutness," to the bitstrings. Bitstrings don't experience anything. And computers flipping bits do not experience anything, or give meaning to the bits. Minds give meaning to bitstrings.
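To make the bitstring point concrete, here's a fragment that turns an image file into its raw 1's and 0's ('landscape.png' is a hypothetical filename; any image would do). You get the digits, and nothing but the digits:

```python
# Read an image file as raw bytes and print it as a bitstring. The file name
# is hypothetical; the point is that the painting "is" nothing but digits.
with open("landscape.png", "rb") as f:
    data = f.read()

bitstring = "".join(f"{byte:08b}" for byte in data)
print(bitstring[:64], "...")  # the first 64 of millions of 1's and 0's
```

Nothing in those digits is a landscape until a mind (via a decoder and a screen) treats them as one.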
But as a highly integrated network of TMs operating in some hierarchic feedback arrangement - you will see a landscape painting when those 1's and 0's are correctly reassembled on your PC screen. And yes, maybe that requires fuzzy logic, maybe not.
Don't know. Nobody knows. But it can't be computational. If the organization and speed of a computation make a difference, that difference is not computational. It must be something else. And exactly what is that something else?