Why are you trying to spin my simple observations of fact and attach innuendo to them?
I apologize if I've been making innuendos. I'd be glad to de-escalate any misunderstandings and try to understand each other's point of view.
You have also pointed out that all sufficiently complex software does unexpected stuff that its programmers have trouble explaining without a lot of work. My observation was that recent AI has been doing that not as a bug but as a feature - that it does unexpected and inexplicable good stuff, superior performance, not just bugs and breakdowns.
I completely agree. I wonder why you think I don't. I hope you don't take my style of writing as an innuendo. I have a genuine feeling of curiosity as to why you feel you must laboriously explain to me things I already understand and agree to.
No, I did nothing of the kind. That's an important point, and I'm not sure how I was unclear in that matter.
Not just a human - you could probably train a chicken to implement a neural net, step by step, pecking keys.
If you can train a group of chickens to act as a logic gate, you could implement the world's largest supercomputer and use it to run weather simulations. Then you could have chicken dinner.
My emphasis has been on the direction or nature of the inexplicable performance - that it isn't bugs and breakdowns only, that recent AI is producing unexpected output that is superior function, and most tellingly output that cannot be securely classified as function or malfunction even by its programmers.
I completely agree. I do believe you've mischaracterized my posts a little bit. I never said that the inscrutability of ancient accounting programs pertained only to bugs or misbehavior of the program. Even the correct functioning of the system is literally a black box to the latest generation of maintenance programmers.
But of course I agree completely with your point that the modern neural nets are inscrutable and do clever things. Deep Blue played moves that startled chess experts; and AlphaGo played moves that startled Go experts.
I wish you would credit me with being up on the state of the art on neural nets and weak AI. Just because I disagree with someone's philosophical conclusions doesn't make me ignorant of the technology.
And stepping through the source code somehow (paper and pencil, whatever) - even with all the auxiliaries attached, so that one is actually emulating the machine - wouldn't necessarily help. You still wouldn't know what was going on, sometimes - especially if you couldn't figure out whether you were dealing with function or malfunction.
I completely agree. Perhaps I haven't made my strong agreement clear. I fully agree with your assessment of the profound and arguably revolutionary nature of the inscrutability of an approach like AlphaGo Zero, where there is no human domain knowledge programmed into the system at all. [You think the inscrutability is revolutionary, I think it's evolutionary, but that's a minor issue. I agree that one way or the other it's super-duper inscrutable and I hope that's sufficient for your needs].
We are only disagreeing on the deeper meaning of this inscrutability. I still don't understand what SIGNIFICANCE this profound inscrutability has. Is it philosophical? Does it have implications for the theory of mind? What exactly? My only point is that no matter how inscrutable the program is, it's still reducible to a TM, and therefore nothing out of the ordinary in terms of the theory of computation. That's a simple statement of technical fact.
Again: the source code of AlphaGo does not contain instructions on how to play winning Go.
I really wonder why you think I don't know this. I've been following AI since the 1980s. I read about neural networks back then. I know how neural nets work, I know some of the math. I started taking weak AI seriously when Deep Blue beat Kasparov, a player known to have a very deep understanding of the game. I was as seriously impressed as everyone else when AlphaGo played the very difficult game of Go at an expert level. And I'm impressed by AlphaGo Zero.
Why do you think I don't know any of these things?
I do know these things. And I also know that the code can be reduced to a Turing machine so that no claim of qualitatively different computing can be made. A neural net is ultimately just a different way of organizing a computation. One that mimics the neurons in the brain, so it's of some neurobiological interest. But a TM nonetheless.
It indicates no moves. The computer as a whole does that, after being trained. At the beginning of the training, the computer plays poorly. After a million games, with (in principle) the exact same source code, the computer plays much better.
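To make the "same code, different weights" point concrete, here's a toy sketch in Python (a single perceptron learning AND - nothing remotely like AlphaGo, obviously). The program text never changes; training only adjusts a few numbers:

```python
import random

random.seed(0)

def predict(weights, bias, x):
    # Threshold unit: fire if the weighted sum exceeds zero.
    return 1 if weights[0]*x[0] + weights[1]*x[1] + bias > 0 else 0

# Truth table for AND: the "games" the unit learns from.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

def accuracy(w, b):
    return sum(predict(w, b, x) == y for x, y in data) / len(data)

weights, bias = [random.uniform(-1, 1) for _ in range(2)], 0.0
before = accuracy(weights, bias)   # random weights: plays "poorly"

# Perceptron learning rule: nudge the weights toward correct answers.
# The source code is identical before and after; only the numbers move.
for _ in range(20):
    for x, y in data:
        err = y - predict(weights, bias, x)
        weights[0] += 0.1 * err * x[0]
        weights[1] += 0.1 * err * x[1]
        bias += 0.1 * err

after = accuracy(weights, bias)    # now classifies every case correctly
```

Nothing in the source says "compute AND" - that knowledge ends up encoded in the learned weights, which is the miniature version of the AlphaGo point.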
As the chickens would if they could only implement logic gates. Logic gates are all you need. In fact that's what's interesting. Formal neurons are very different from logic gates. But what you can compute with formal neurons can be reduced to logic gates. That's another way to express my point.
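That reduction can be shown in miniature. A McCulloch-Pitts threshold neuron over binary inputs computes some boolean function, so it can always be replaced by a circuit of ordinary logic gates. Here's a toy sketch where a single threshold neuron and a NAND gate have identical truth tables (the weights are hand-picked for illustration):

```python
def neuron(w1, w2, theta):
    """The boolean function computed by a 2-input threshold unit."""
    return lambda a, b: int(w1*a + w2*b >= theta)

# NAND as a threshold neuron: weights -1, -1, threshold -1.
nand_neuron = neuron(-1, -1, -1)

def nand_gate(a, b):
    return int(not (a and b))

# Same truth table on all four binary inputs: same computation,
# whether organized as a formal neuron or as a gate.
table = [(a, b) for a in (0, 1) for b in (0, 1)]
assert all(nand_neuron(a, b) == nand_gate(a, b) for a, b in table)
```

And since NAND is universal, the reduction runs both ways: any gate circuit can be built from threshold neurons, and any binary-input neuron is just a gate circuit in disguise.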
And if I don't, or don't care, then functional transfer is not trivial - or as I posted above, I have nowhere claimed it would be easy.
As I've mentioned, I'm not even arguing functional transfer at this point. I'm only arguing against the claim that the mind is a computation, or that a neural net does some kind of special computation that a conventional computer can't do.
Not the inscrutability, the qualitative nature of the output that is unexpected and inexplicable.
In this discussion, it bears directly on whether AI can emulate human brain malfunction. It checks off another box.
Ah. How does it bear directly? I don't see that at all. If a neural net "bears directly" on the question of whether an AI can emulate a brain, then so can a TM, because neural nets can be implemented as TMs.
I'm responding as clearly as I can and not trying to express any innuendos. In my mind I am making a simple point of very well-known and well agreed-upon computer science.
You say neural nets shed light on human minds. I say that IF they do that, then so do TMs -- because any neural net can be implemented as a TM. And in fact real world neural nets ARE implemented as TMs, namely as conventional programs on conventional hardware.
In fact we have a syllogism.
Premise 1: Neural nets shed light on the functioning of the human mind.
Premise 2: Neural nets are TMs.
Conclusion: TMs shed light on the functioning of the human mind.
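Premise 2 can be illustrated quite literally: here's a toy two-layer net for XOR written as a perfectly conventional program - just loops and arithmetic, the kind of thing any TM can grind through. The weights are hand-picked for illustration, not learned:

```python
def step(x):
    # Threshold activation for a formal neuron.
    return 1 if x > 0 else 0

def layer(inputs, weights, biases):
    # Ordinary weighted sums: nothing non-computational happening here.
    return [step(sum(w*i for w, i in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

def xor_net(a, b):
    # Hidden layer: one unit computes OR, the other NAND.
    hidden = layer([a, b], [[1, 1], [-1, -1]], [-0.5, 1.5])
    # Output unit computes AND of the two: OR and NAND gives XOR.
    return layer(hidden, [[1, 1]], [-1.5])[0]
```

Run it on all four inputs and it computes XOR exactly - a "neural net" that is, from top to bottom, an ordinary conventional program.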
So if you say that neural nets are doing "something" that TMs are NOT doing ... what exactly is that?
It cannot be categorized as being computational. It's something else. The organization of a computation does not affect what the computation does. You can organize a given computation as a conventional TM or as a neural net. But it still computes the same thing.
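For instance (a toy sketch): the 3-input majority function, organized once as a threshold neuron and once as boolean gates. Different organization, identical input-output behavior:

```python
from itertools import product

def majority_neuron(a, b, c):
    # Neural-style organization: a weighted threshold.
    return int(a + b + c >= 2)

def majority_gates(a, b, c):
    # Gate-style organization: an ordinary boolean expression.
    return int((a and b) or (a and c) or (b and c))

# Exhaustively check all eight binary inputs: the two organizations
# compute exactly the same function.
assert all(majority_neuron(*bits) == majority_gates(*bits)
           for bits in product((0, 1), repeat=3))
```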