Obama Seeks to Ax Moon Mission

Me neither. I still think exploration is important.


I was wondering: if 0.5% of the population owns 80-90% or more of the wealth, why did we have to bail out Wall Street? I mean, why not just tax that 0.5% on a great deal of their wealth? It seems silly that we're willing to sacrifice a person's life in the name of security but not a person's wealth. Is it fair that a middle- or lower-class citizen should die in a war in Iraq while an upper-class citizen in that 0.5% doesn't lose their life ... savings? Why is one person's wealth worth more to us than one person's life?

I just don't see the fairness in this whole enterprise.
 

I don't either.
 
Fucking global warming bullshit. What a surprise that Obama wants to kill one of the few government programs I support and change its mission from pushing the limits of science, technology, and human achievement to studying climate change. Unbelievable.
Another sign of your nation’s decline under the BO administration.
 
(2) No, like most of my posts, I am trying to educate.
Not this time. You are preaching -- and you don't know what you're talking about.

Spoken by one who obviously has a very limited view about computers and AI.

For example, not all computers are digital, as you seem to be falsely assuming (or analogue, if you still remember those). The neural network* type does not even need to be programmed, so it is not limited to the set of possibilities that a human programmer can anticipate; i.e., it can learn from experience. In several fields they were already outperforming humans 20 years ago!
No. Neural networks do need to be programmed; it's just a different kind of programming. The network architecture needs to be designed, the training and evaluation data need to be collected and evaluated, and most importantly, the scoring algorithm needs to be developed -- and all of these tasks are done by humans. There is no such thing as hard AI yet.
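For concreteness, here is a toy sketch of the human-supplied pieces I mean. Every choice in it (the layer sizes, the XOR training data, the squared-error-style scoring, the learning rate) is an arbitrary illustration of mine, but each one has to come from a person before the network can "learn" anything:

```python
import numpy as np

rng = np.random.default_rng(0)

# Human-supplied training data: inputs and the "correct" outputs (here, XOR).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Human-designed architecture: 2 inputs -> 4 hidden nodes -> 1 output.
W1, b1 = rng.normal(size=(2, 4)), np.zeros((1, 4))
W2, b2 = rng.normal(size=(4, 1)), np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0  # human-chosen learning rate

for _ in range(20000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Human-chosen scoring: compare achieved output with the correct output.
    err = out - y

    # Decide which connection strengths to adjust, and by how much.
    d_out = err * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0, keepdims=True)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0, keepdims=True)

print(out.round(2).ravel())  # after training, typically close to [0, 1, 1, 0]
```

The "learning" runs on its own once it starts, but nothing above came into existence by itself.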

To name one: The evaluation of loan applications.
There is a simple and very nefarious reason why machine learning techniques are used for such purposes: They can discriminate. People can't, lest they or their employers be sued. The best patterns that indicate ability to repay a loan are blatantly discriminating. Redlining neighborhoods is illegal -- unless a machine learning program "learns" to do so "all by itself." Of course, that isn't what happens: That would take hard AI, and that remains a pipe dream. No program learns "all by itself" (yet).
 
Since I like a good challenge...

No one can name one thing man in space has achieved that benefits Earth-bound humans and that could not have been achieved at least 100 times more economically. Anyone willing to try?

Back problems (which I have). Astronauts say that weightlessness helps back problems; they disappear. You can get the same effect only in water on Earth, but it's kind of hard to live immersed in water. Yes, I am being facetious. :)

Otherwise I agree with what you said. First let's solve the energy problem; then we can go to Mars, Venus, wherever...
 
China's brief period as a naval power brought them great wealth and power. They had, by far, the greatest navy in the world. They even had gunpowder. They could have conquered the world. Instead, they turned inward and stagnated. That is the risk we take if we're not out there on the cutting edge. If we abandon space exploration and leave it to India and China, we are giving up on the future.
We have been going down that road for quite some time. China, on the other hand, appears to have learned from its mistakes. So has Great Britain. A quarter of a century ago, space scientists convinced Parliament to ban government funding of human space flight activities. Those space scientists won the battle, but lost the war. Great Britain's funding for space exploration dropped, then dropped some more. There are very few space scientists left in Great Britain. Great Britain has recently rescinded this ban.

Without the motivation that humans might follow, there is little point to doing robotic space exploration. Robotic space exploration only looks cheap in comparison to human space flight. Compared to scientific research done right here on Earth it is danged expensive.
 
The Obama administration is seeking to seriously cut back on NASA. He wants to kill the planned moon shot, the Ares rocket that was to replace the shuttle, the Ares V, and the moon base. So he pretty much wants to kill the entire manned space program for the foreseeable future.
Of course.

Because in the course of human history it has always been the redistribution of wealth that has advanced civilization, increasing health and wealth, easing the toil and suffering of all people, vastly increasing our longevity and our ability to communicate... It has solved problems thought insoluble by previous generations. Never in history has technological development aided mankind in any measurable way. Why should we waste money on such ventures?

~Raithere
 
To DH:

Your post 64 extends the concept of “programming” much too far, IMHO. What you are speaking of is truly needed and supplied by humans (at least for now), but it is “computer design,” not “computer programming.”

In post 64, you state:
“... Neural networks do need to be programmed; it's just a different kind of programming. The network architecture needs to be designed, the training and evaluation data need to be collected and evaluated, and most importantly, the scoring algorithm* needs to be developed -- and all of these tasks are done by humans. ..."
----------
* I will assume that by “scoring” you mean what I would call the “learning procedure.” That includes not only comparing the output achieved to the correct output in the training set (which might be called “scoring”), but also, and most importantly, deciding which internodal connection strengths to adjust and by how much.

The network architecture (for example, how many nodes should be in the intermediate layer) is exactly the same type of choice the designer of a digital computer must make (and that is never called “computer programming”). There are many more of these architecture design decisions for humans to make for the digital machine than for the NN machine. Some examples:

Word length (8, 16, 32, or 64 bits)
Clock cycle speed (and whether or not it is subdivided)
Number of processors
RAM size
Hard disks, size and number
Screen memory size and its refresh rate
Keyboard or touch screen
Battery or only AC power
Bluetooth or cable internet I/O
(I am sure there are dozens more, but I have never designed a digital machine. I have not even built one on a motherboard, where probably 30 or more design decisions have already been made for you.)

If you are consistent in your definition of “programming,” you must call all these human decisions “programming” (or a different type of “programming”) and not “machine hardware design.”

I.e. if you call specifying the number of nodes in each of the three layers of a NN “computer programming,” then you logically must call all of the above “computer programming” too. You are obliterating the distinction between “computer programming” and “hardware design.” I never claimed that NNs were free of hardware design questions for humans to solve (at least until machines can do this too).

It is also true that no machine (digital or NN) generates its own input data. That is a human task (as is giving any meaning to the data; neither the digital machine nor the NN has any idea of meaning, they only process data to produce data with zero understanding of it). So again, you cannot say that supplying the input data (or its meaning) to a NN machine is “computer programming” (as you did in the text quoted at the start).

Perhaps you are on slightly firmer ground when it comes to the human supplying the learning algorithm. The original was Hebb's method. He was a neuropsychologist, as I recall, and it is very much like how nerves do “learn” to increase their probability of discharge when the same two inputs are again present. (This change in neural sensitivity is called “potentiation” and is quite well understood biochemically now.) Hebb's method has been improved upon by dozens of variants, some better for a particular type of task, others better for a different set of tasks. There are two ways for me to reply to this:

(1) Not all NNs require any human-supplied learning algorithm. At least 25 years ago a Finn (I think his name starts with a K, but that is all I now recall) built into the architecture the ability to develop the learning algorithm the machine would use, i.e., made hardware that could “learn how to learn.” AFAIK it is still only of academic interest, as it was at least ten times slower than the simple Hebbian learning algorithm. Perhaps now, more than 25 years later, there are machines that do not need any human-supplied “scoring algorithm” and are as fast as Hebbian learning. I am more than 25 years removed from any work in this field, so I do not know.

(2) The human-supplied learning algorithm is built into the hardware; it is not a program related to the solution of some particular problem. The whole point of the von Neumann machine is that the same machine solves many different problems because it is programmed with instructions UNIQUE to each problem it is to solve. The NN machine does not have the capacity to receive unique programs. It has one learning algorithm, built into the hardware, which it applies to many different problems. It must be provided with the learning data set for each problem it is to solve. This is not “programming” but data input.
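To illustrate that last point with a toy sketch in Python (the simple Hebb-plus-normalization rule and the made-up numbers below are only my illustrative assumptions, far cruder than any practical NN): the learning rule is written once, like hardware, and the only thing that changes from problem to problem is the data set it is fed.

```python
import numpy as np

rng = np.random.default_rng(1)

def hebbian_learn(data, steps=2000, lr=0.01):
    """One fixed, 'built-in' learning rule: strengthen a connection when its
    input and the unit's output are active together (Hebb), then renormalize
    so the weights stay bounded. Nothing here is specific to any one problem."""
    w = rng.normal(size=data.shape[1])
    for _ in range(steps):
        x = data[rng.integers(len(data))]  # present one input pattern
        out = w @ x                        # the unit's output
        w += lr * out * x                  # Hebb: co-activity strengthens weights
        w /= np.linalg.norm(w)             # keep weights from growing without bound
    return w

# Two different "problems" are just two different training sets; the rule
# above never changes. (Toy data: in set_a the two inputs tend to rise and
# fall together, in set_b they tend to move oppositely.)
base = rng.normal(size=(500, 2))
set_a = base @ np.array([[1.0, 0.9], [0.9, 1.0]])
set_b = base @ np.array([[1.0, -0.9], [-0.9, 1.0]])

print(hebbian_learn(set_a).round(2))  # weights align with the "together" pattern
print(hebbian_learn(set_b).round(2))  # same rule, different data, different weights
```

Same machine, same rule; only the data input differs.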

SUMMARY: I think we have essentially only a semantic dispute. I do not have as broad a definition of “programming” as you seem to have. For me, programming is problem-specific, not part of the machine’s hardware.* I clearly distinguish between “programming” and “hardware design,” but at least when speaking of NNs, you do not seem to. (And if you are consistent, you must call the items listed above, such as the number of bits in a digital “word,” programming instead of hardware design.)

The introduction of separate and distinct problem-specific programming is the great advance von Neumann made. At least 100 years earlier there were machines that could solve specific problems with only their hardware, but their utility was limited to that specific problem. The NN is also a great advance** in that it eliminates the need for any problem-specific programming. In this regard it is like those earlier machines, as it needs no problem-specific programming, but unlike them the same machine can learn to solve many different problems. All it needs is a training set of input data to then learn how to solve any similarly structured problem.
-------------
* Analogue computers are of course excepted. Are you old enough to have built any of them? They are of course problem-specific, but if you need the ultimate in speed (and can tolerate a little error accumulation, or periodically reset), they cannot be beat.

** The NN is a great advance because, unlike with a programmed computer, no human even needs to know how to solve the problem, and in most cases no human even understands how the NN is solving it! This is a very important advance, as solving problems is no longer limited by the human ability to do so, i.e., to develop an algorithm for the solution. Thus, if robots are ever to become much smarter than humans, the NN will probably be playing a central role.
 
It's this same dynamic programming in artificial intelligence that's going to account for an AI revolt. If you think about how the gaining of knowledge was seen as a poison for society in the past, you can see how a robot gaining knowledge will be a poison for the 21st century. And with quantum computing replacing parallel computing, and robots thinking for themselves setting standards that are non-standard (based on what's rational), AI independence is not too far off.
 
Force, just like you don't know much about your last bit of pseudoscience, you know nothing about this one.

First off, AI stands for artificial intelligence. People think that the term "artificial" refers to the fact that it is not biological or that it was engineered; that is false.

The term "artificial" refers to the fact that it shares some similarities with intelligence, but it has no real intelligence.

Robots cannot problem solve at all.

Take for example a metaphor for solving a problem.


Pretend the problem is 2 + 3.

Humans know it is five because we count it out in our heads, or more often we just know it is five because we have counted it so many times.

A computer knows the answer is five because we have told it that it is five. It didn't actually count out the number but searched through a library of answers to problems that we gave it.

Of course, for a real machine, mathematical operations would use the same concept as humans use, but I basically replaced a real-life problem with something simpler.
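If it helps, here is the metaphor in code form (a toy illustration of what I mean, with made-up entries; obviously not how real hardware does arithmetic):

```python
# The "library of answers" I'm describing: the robot can only answer
# what was put into the library ahead of time.
answer_library = {(2, 3): 5, (1, 1): 2, (4, 4): 8}

def robot_add(a, b):
    return answer_library.get((a, b), "no entry in the library -- stuck")

# Actually working the problem out, the way a person would:
def human_add(a, b):
    return a + b

print(robot_add(2, 3))  # 5, because it was in the library
print(robot_add(7, 9))  # stuck; nothing unanticipated is covered
print(human_add(7, 9))  # 16
```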

The problem with AI is that computers cannot edit this library of answers and, chances are, assuming there is no hugely significant innovation in the far future, won't be able to.

There are several reasons why, from the actual programming, to the robot's perception of the world, to the physical chipsets.

A robot has a set library of solutions to problems.

For a human, if we see a person hold out their hand, we think to ourselves that since we just met, we are shaking hands. I suppose you can call it context.

With a robot, when it sees someone offering their hand, it will shake it, but it does not know why, what the significance of doing so is, or the context.

Now, so long as the robot doesn't come across problems that aren't in its library, everything is okay. But when something out of the blue happens, the robot is dead.

(Yes, this is a metaphor; I don't know for a fact that this would happen. It could be something in the library of answers already.)
A good example is the Predator drone: if it loses its link with the control center, it reverts to level flight. But if there is, for example, a cliff in front of it and the Predator does not have any programming for what it should do, it will fly right into the cliff without a second thought.

The other problem is even bigger, and it relates to the chipset. For high-level programs that are almost always running, you actually use transistors and other chip parts to implement a mathematical function on the board for certain conditions and other things.

The problem is that in order for something intelligent to actually alter that kind of wiring, it would have to be able to physically do it.

A robot has absolutely no way to alter its circuitry, which is also why, if we decided to put in safeguards to shut down a robot, not only would the robot have no knowledge of said safeguards if they were physically part of the circuit, but there would be no way to remove them without destroying the robot.

But the brain is constantly rewiring the neurons and other nerve cells that make it up, and so we can constantly rewrite the parameters of our lives.

On a philosophical tangent, one could say humans have no purpose, because a purpose is a singular goal, like what a machine is constructed to fulfill by setting its operations using a set of parameters. Since we can change the parameters of our lives with relative ease, our purpose would likewise have to be altered too; so we have no set function or purpose, because the parameters are always changing.


But for computers, rewriting the parameters in a program is difficult enough for a human programmer, and it is impossible to do on the chipsets without a human. And if you can't rewrite the chipsets, then there is no way a computer could rewrite its own parameters, because its parameters are in its physical circuits.
 
To Fedr88:

Your post 72 again reflects your limited concept of what a computer is. At least read the second footnote of my post 70, which is:

"The NN is a great advance because, unlike with a programmed computer, no human even needs to know how to solve the problem, and in most cases no human even understands how the NN is solving it! This is a very important advance, as solving problems is no longer limited by the human ability to do so, i.e., to develop an algorithm for the solution. Thus, if robots are ever to become much smarter than humans, the NN will probably be playing a central role."
 

Right here is where I stopped reading his post:
A computer knows the answer is five because we have told it that it is five. It didn't actually count out the number but searched through a library of answers to problems that we gave it. ...

I'm amazed you read further.
 

It was a metaphor; of course we wouldn't make a library of mathematical answers, because that would be impossible.

I even explicitly stated it was a metaphor several times...
 
To Fedr88:

Your post 72 again reflects your limited concept of what a computer is. At least read the second footnote of my post 70, which is:

"The NN is a great advance because, unlike with a programmed computer, no human even needs to know how to solve the problem, and in most cases no human even understands how the NN is solving it! This is a very important advance, as solving problems is no longer limited by the human ability to do so, i.e., to develop an algorithm for the solution. Thus, if robots are ever to become much smarter than humans, the NN will probably be playing a central role."

Okay, then why in the world aren't neural networks standard? Why don't we have intelligent, sentient robots?

Besides the fact that my explanation refers to computers in the conventional sense of how they are made and operate: since, according to you, neural networks are so different, why would you assume the same rules would apply?

Actually, I even said that unless there is some extraordinary innovation, AI on conventional computers as we know them today is near impossible.
 
Speaking of the value of robots...the Spirit Rover was just declared (on Tuesday) to be a "stationary" research platform. Spirit's mission to Mars was slated to last 90 days. We're at 2277 days and still counting. No human mission will likely ever beat the overextension of time that that rover gave us.

On the downside, if you anthropomorphize it, you get this:

spirit.png
 

I'm torn between laughing :D and feeling sad :bawl: ...


But yeah, it goes to show how good the engineering was on that little thing.
 
Okay, then why in the world aren't neural networks standard? Why don't we have intelligent, sentient robots? ...
Inertia is part of the answer to the first question. In some industrial areas, such as paper making, NNs are replacing human production managers. Typically these are areas where some old guy with years of experience makes the dozens of adjustments needed to keep the product quality up. In a paper production plant there are the pH, the moisture content at dozens of stations along the production line, the temperature at those stations as well, and a dozen other factors I do not know about (perhaps he even periodically tastes the pulp).

Much the same is true of the wine and beer industries. The "master" may die, taking his knowledge, which cannot even be verbalized, to the grave. Thus many of these complex industries where quality control is an art are switching to NNs. But one drawback of the NN is that there is only the data set of current production runs for it to learn on; i.e., it may take more than 10 years of all these factors being measured hourly by instruments (and some may not even be known well enough to measure), together with records of the quality of the beer, paper, etc. being produced, before there exists a "training set of data" that will let the NN even match the judgment of that old master's years of experience.
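As a rough sketch of the shape of that training-set problem (everything below is an illustrative assumption of mine: the sensor names, the invented numbers, and a plain least-squares fit standing in for a real NN):

```python
import numpy as np

rng = np.random.default_rng(2)

# Pretend one year of hourly logged measurements from the production line
# (all numbers invented for illustration).
hours = 24 * 365
ph       = rng.normal(7.0, 0.3, hours)
moisture = rng.normal(0.45, 0.05, hours)
temp     = rng.normal(80.0, 4.0, hours)

# Pretend quality score the old master would have assigned (an unknown rule
# plus noise). In reality this column is the slow, expensive part to collect.
quality = (10.0 - 3.0 * (ph - 7.0) - 20.0 * (moisture - 0.45)
           - 0.1 * (temp - 80.0) + rng.normal(0.0, 0.2, hours))

# The training set: measurements in, quality out. A linear least-squares fit
# stands in for the NN here; the point is only what data has to exist first.
X = np.column_stack([ph, moisture, temp, np.ones(hours)])
coef, *_ = np.linalg.lstsq(X, quality, rcond=None)

new_reading = np.array([7.1, 0.50, 82.0, 1.0])  # a fresh hour of measurements
print("predicted quality:", round(float(new_reading @ coef), 2))
```

Until years of both the measurements and the master's quality judgments have been logged together, there is nothing for any model, NN or otherwise, to learn from.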

Human intelligence took more than a million years to develop. The NN will probably take a few hundred years to equal it. But then machine intelligence will advance rapidly, as each generation can significantly improve the next, in contrast to humans, where thousands of generations were required to make even a one-point increase in average IQ.
 