And you have repeatedly posted claims - such as the one quoted - that indicate you have not come to grips with it at all.
I don't think that's justified at all. And it's not conducive to productive conversation.
But you couldn't play a game of Go that way. The source code does not contain the necessary information - it does not tell you how to play Go.
But of course you COULD play a game of Go exactly that way. You would do what the computer does: start from the first line of code and a given initial state and, using pencil and paper, step through the training algorithm. The training algorithm has you play millions of games against yourself and store the results, each of the form "this move in that situation resulted in a win, loss, or draw." You would play millions of games to seed your knowledge base, whether that information is stored as a neural net, a relational database, or any other data structure. Then, in Play mode, you would load your data and play according to the contents of your database or collection of nodes.
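To make the train-then-play loop concrete, here is a minimal sketch. It is NOT AlphaGo Zero; it substitutes a toy stand-in (single-heap Nim: take 1-3 stones, taking the last stone wins) and stores the "knowledge base" as a plain table of (state, move) win statistics, exactly the kind of record a person could keep on paper. All names here are invented for illustration.

```python
import random
from collections import defaultdict

MOVES = (1, 2, 3)  # legal amounts to take from the heap

def self_play_training(games=20000, heap=7, seed=0):
    """Training mode: play random games against yourself and record results."""
    rng = random.Random(seed)
    stats = defaultdict(lambda: [0, 0])  # (state, move) -> [wins, plays]
    for _ in range(games):
        state, history, player = heap, [], 0
        while state > 0:
            move = rng.choice([m for m in MOVES if m <= state])
            history.append((player, state, move))
            state -= move
            player ^= 1
        winner = history[-1][0]  # whoever took the last stone won
        for player, s, m in history:
            rec = stats[(s, m)]
            rec[0] += (player == winner)
            rec[1] += 1
    return stats

def play(stats, state):
    """Play mode: greedily pick the move with the best recorded win rate."""
    legal = [m for m in MOVES if m <= state]
    return max(legal, key=lambda m: stats[(state, m)][0] / max(stats[(state, m)][1], 1))

stats = self_play_training()
best = play(stats, 5)  # optimal play leaves the opponent a multiple of 4
```

Every step above is a mechanical table update or table lookup; nothing in it requires silicon rather than pencil and paper, only patience.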
Do you agree with this? If not, please tell me what's different.
I assume you do understand that a computation does not depend on the speed or nature of its hardware. A supercomputer and a human with pencil and paper compute the exact same results. Of course we do not take into account any practical limitations such as the age of the universe or the supply of pencils. This is the principle of substrate independence.
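The principle can be demonstrated with a toy machine in the Turing style: a two-rule table for adding 1 to a binary numeral. Each step is a single table lookup that a person with pencil and paper could perform, and the answer does not depend on what executes the steps. (The rule table and function below are a made-up illustration, not any particular published machine.)

```python
# (state, symbol) -> (symbol to write, head movement, next state)
RULES = {
    ("carry", "1"): ("0", -1, "carry"),  # 1 + carry = 0, carry moves left
    ("carry", "0"): ("1", 0, "halt"),    # 0 + carry = 1, done
    ("carry", "_"): ("1", 0, "halt"),    # overflow onto a blank cell
}

def increment(bits):
    """Add 1 to a binary string by mechanically applying RULES."""
    tape = ["_"] + list(bits)          # leading blank in case of overflow
    head, state = len(tape) - 1, "carry"
    while state != "halt":
        write, move, state = RULES[(state, tape[head])]
        tape[head] = write
        head += move
    return "".join(tape).lstrip("_")
```

Run it on a chip or trace it by hand on graph paper: `increment("1011")` yields `"1100"` either way, because the computation is the rule table, not the hardware.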
Do you agree with this? That given my description of the process, you COULD play an expert game of Go like this.
If not, you have to say why not, and please be very specific, because you would be violating well-established principles of computation. You really need to explain very carefully why a program run on a computer can compute something different from that same program executed with pencil and paper. If you did this, you would become famous, because NOBODY knows how to do this.
Do you agree with that? If you disagree, please be as specific as you can.
Of course I am myself very impressed that AlphaGo Zero has shown that weak AI is no longer constrained by human knowledge. That's amazing, and it's of great interest for practical weak AI. But as far as the theory goes, it makes no difference. What AlphaGo Zero does can be replicated by a human being with pencil and paper, just a lot more slowly.
Those are errors. The break comes when the incomprehensible is not an error, but the correct response.
No, actually not. Even the correct functioning of a complex system could well be incomprehensible to its programmers. First, because every sufficiently complex system becomes incomprehensible to its designers. Just look at the systems of society: the law, politics, and economics. We no longer understand the operation of our own formal systems.
This is a great crisis in rationality, on the order of the discovery of non-Euclidean geometry. Our rational systems are producing results that nobody understands and that nobody likes.
But secondly, the people who designed the old mainframe programs are no longer around. The programmers who maintain these systems do their best not to break anything.
Every formal system eventually gets so complex we can no longer understand it.
Weak AI systems built as neural nets or learning algorithms will certainly make the societal problems worse. But they are no different in principle than the systems that run our lives right now.
That is a different kind of unexpected behavior. And now, from the latest AI, we have unexpected responses that cannot be classified as correct or incorrect by the programmers of the machine.
Just as with our economic policies, our legal system, our educational system, and any nontrivial program ever written. You are conflating a difference in degree with a difference that's qualitative. It's not. The inscrutability of modern weak AI systems is just the next phase in the general inscrutability of every several-million-line chunk of computer code ever engineered by man.
None of my posts (including the one you are responding to) attribute mystical properties to anything - including human brains.
By mystical I simply mean that you are ascribing unconventional properties to perfectly conventional programs. You seem to believe that AlphaGo Zero does something that differs in principle from a computation that could be carried out with paper and pencil. This is not so, and it cannot be so by the theory of computation as it has been understood since Turing's 1936 paper. Nobody has ever found a mode of computation in the real world that goes beyond it. The only modes of computation that go beyond Turing machines involve infinitary principles not implementable in the real world using known physics.
No, it doesn't. Its possible behaviors are fixed in advance, specifically, and it cannot alter them.
Responding to input is not the same as altering one's possible responses to input. And in the recent developments, we find the ability to alter the possible responses to input.
You're making semantic quibbles. I know how AlphaGo Zero works and I stipulate that it's a hell of a breakthrough in weak AI. But its inscrutability is not fundamentally different than the inscrutability of any large conventional program or formal system. It's just worse.
It's like a 4GL database language. We don't have to tell a program HOW to search a database. We tell the program WHAT we want, and it figures out HOW to optimally search the database to produce the desired result. These 4GL database systems were developed in the 1980s.
Likewise, with modern AIs we no longer tell them HOW to do the thing we want done; we tell them how to figure that out for themselves. Yes, this is cool, but it's hardly unprecedented in the history of computing.
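The WHAT-versus-HOW distinction can be shown in a few lines. Below, the same question is answered twice over a small made-up table: once declaratively in SQL (the 4GL style, where the engine plans the search itself) and once imperatively, spelling out the search by hand. The table and values are invented purely for illustration.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE games (move TEXT, result TEXT)")
conn.executemany("INSERT INTO games VALUES (?, ?)",
                 [("D4", "W"), ("Q16", "L"), ("D4", "W"), ("K10", "D")])

# WHAT: declare the result we want; the engine decides HOW to get it.
wins_sql = conn.execute(
    "SELECT COUNT(*) FROM games WHERE move = ? AND result = 'W'", ("D4",)
).fetchone()[0]

# HOW: spell out the search step by step ourselves.
wins_loop = sum(1 for move, result
                in conn.execute("SELECT move, result FROM games")
                if move == "D4" and result == "W")
```

Both paths compute the same count; the declarative version simply delegates the HOW to the query planner, which is the same division of labor the post describes.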
This is what I mean by mystical. You seem overly impressed by advances in software design that are evolutionary, not revolutionary. Neural nets were invented in the 1940s. We've been teaching computers to figure out how to do the things we want done for decades. AI is just the latest achievement in this direction of telling computers the WHAT and letting them figure out the HOW. It's really cool stuff, yes. But radically different in its fundamental aspects from what's come before? No.