# Why can't machines program themselves?

Discussion in 'Intelligence & Machines' started by Spectrum, Sep 10, 2007.

Not open for further replies.
1. ### Spectrum (Registered Senior Member)

Messages:
459
What's the deal with having programs write to themselves? I read somewhere that self-modification crashed early computer systems that were programmed to update themselves. It is possible now, but getting the program to choose a command line for itself is still tough. For example, if we examine the following program:

```
1 print line 2, "run"
2 end
```

then we can have the program overwrite line 2 with "run" instead of "end", so it runs itself again, and it has done this to itself. If we could somehow question the computer and have it choose a line to run, then it would be directing its own execution. I have written a batch program along these lines:

```
copy c:\folder\filename$.txt c:\folder\filename$.bat
call filename$.bat
```

Now when I open up the .txt file and save modifications, the program is run live. I could do with an autosave. Does anyone know how this would work in a batch program? The above program looks better like this:

```
cls
prompt $t
copy c:\folder\filename$.txt c:\folder\filename$.bat
type c:\folder\filename$.bat
call filename$.bat
```
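The copy-then-call trick above can be sketched in a testable way. This is a minimal sketch in Python rather than batch (batch is awkward to test here): plain text is written to a file and then executed, mirroring `copy filename$.txt filename$.bat` followed by `call filename$.bat`. The function name and file handling are my own illustration, not part of the original post.

```python
# Sketch of "edit a text file, then run it live": the text is inert data
# until the moment it is copied into a runnable file and called.
import os
import runpy
import tempfile

def run_live(source_text: str) -> dict:
    """Write source_text to a temporary .py file and execute it,
    returning the resulting global namespace."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(source_text)
        path = f.name
    try:
        # Like `call filename$.bat`: execute the freshly written program.
        return runpy.run_path(path)
    finally:
        os.remove(path)

# The "source" is ordinary text that we (or the program itself) could edit
# before each run -- saving a modification changes what executes next.
namespace = run_live("result = 2 + 2\n")
print(namespace["result"])
```

An autosave would just be a loop that watches the text file's modification time and re-runs `run_live` whenever it changes.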

Last edited: Sep 11, 2007

3. ### MrCrowley (Registered Member)

Messages:
8
Wouldn't you have to program a machine so it knows how to program itself?

5. ### Spectrum (Registered Senior Member)

Messages:
459
Yes, but if it could choose its own lines...

7. ### superluminal (I am MalcomR, Valued Senior Member)

Messages:
10,876
They can and do. This usually results from an unhandled exception from an invalid branch or interrupt vector and the computer then "chooses" to run any damned line it "wants". This is never a very good thing.

9. ### superluminal (I am MalcomR, Valued Senior Member)

Messages:
10,876
I always took this argument to be incredibly weak and shallow. It assumes that the "computer" in question will never gain a semantic grasp of the symbols it processes. This is clearly presupposing the conclusion in what I always took to be a tour de force of circular reasoning.

10. ### Zephyr (Humans are ONE, Registered Senior Member)

Messages:
3,371
The Chinese Room is more a metaphor than an argument. If you slow a brain down, and look at the individual interactions between neurons, does it still look like the brain, as a whole, can 'understand' something?

Which is a little like saying, if you slow down a bee, does it still look like it can 'fly'? It's only your perspective that has changed. The bee is the same.

11. ### superluminal (I am MalcomR, Valued Senior Member)

Messages:
10,876
Hmmm. Not sure I agree with that. It was always presented to me as an argument for why computers will never be "intelligent" the way we are: because we assign meaning to the symbols we "process", whereas a computer doesn't. That assumes a sufficiently advanced computer could never begin to extract meaning from the symbols as its algorithms "evolved". Seems to presuppose a conclusion based on anthropocentric bias.

Meh.

12. ### Zephyr (Humans are ONE, Registered Senior Member)

Messages:
3,371
That's probably what it's meant to say. But I think if you encode electric signal patterns as Chinese symbols and neural cluster behaviour as rules, the Chinese Room can simulate a brain. Just much slower.

Another thought, which has little to do with the argument but seems interesting anyway - couldn't an intelligent, curious person running a Chinese Room actually learn to understand the symbols with time?

13. ### superluminal (I am MalcomR, Valued Senior Member)

Messages:
10,876
Why not? After all, isn't that what happens to all humans? We're born with no idea of the specific symbols of a particular language (although we probably have an inherent "language processor" in our brains somewhere) and we learn to assign meaning to these otherwise arbitrary symbols (letters, pictograms, etc.)?

14. ### Zephyr (Humans are ONE, Registered Senior Member)

Messages:
3,371
Yet another hole in the Chinese Room argument.


15. ### Yorda (Registered Senior Member)

Messages:
2,275
A computer would have to be able to experience before it could understand anything. It would have to be able to see the symbols, letters, words...

16. ### Zephyr (Humans are ONE, Registered Senior Member)

Messages:
3,371
An AI program could 'see' via a camera input. But there are other ways of experiencing information. A blind person can still understand many things.

17. ### Yorda (Registered Senior Member)

Messages:
2,275
A machine can't experience what it sees. It can't see because there is no observer in a machine.

A machine can only do what YOU choose for it to do. It can't choose by itself because it has no feelings or thoughts. It experiences nothing. There is nobody, no consciousness, in a machine... that feels the information.

19. ### Yorda (Registered Senior Member)

Messages:
2,275
The observer sees through the eyes, but it is nowhere specific in the brain, because it (consciousness) continues to see and live even when the physical brain is dead.

Observation and brains are just thoughts.

20. ### Zephyr (Humans are ONE, Registered Senior Member)

Messages:
3,371
Your faith in dualism is impressive. Mine would be stronger if there were observable effects to vouch for it.

21. ### Nasor (Valued Senior Member)

Messages:
6,221
I don’t think the “Chinese room” is meant to be taken as proof that a computer could never be intelligent – it’s simply pointing out that a system could appear to be perfectly intelligent without actually being intelligent. So, just because the computer always makes the correct “intelligent” choice, it doesn’t mean that there is necessarily intelligence present.

The ultimate example of a “Chinese room” AI would be scanning every bit of information about every atom in someone’s brain and feeding the data into a computer with an advanced physics engine that allowed it to calculate exactly how the brain would respond at the atomic/molecular level. Such a computer might be able to always generate exactly the same response to a complex question that an actual human brain would, but as far as the computer knows it’s just running a physics simulation on a whole bunch of atoms – it has no idea that it’s “considering” anything more than what each atom in the brain will do in the next picosecond based on the laws of physics.

22. ### leopold (Valued Senior Member)

Messages:
17,455
the earliest computers were designed so that they could change their own programming. it isn't that they can't; it's more that it's too hard for a programmer to share such programs with others.
you must admit that sharing source code that the computer itself modifies is nearly useless:
a programmer trying to follow such code will find himself helpless.

23. ### superluminal (I am MalcomR, Valued Senior Member)

Messages:
10,876
Yes. I suppose then the question really becomes, what is the difference between Us and a computer that works exactly like Us? Is (as I believe) our sense of self, consciousness if you will, an illusion born of the highly complex and self referential nature of the brain?