Is it possible to functionally transfer knowledge from one neural network to another?

Discussion in 'Intelligence & Machines' started by Buckaroo Banzai, Jan 3, 2018.

  1. Write4U Valued Senior Member

    For one, in the creation of art: painting, music, poetry. Can an algorithm be imaginative?
  3. someguy1 Registered Senior Member

  5. birch Valued Senior Member

    Why not, in theory? It's just the complexity of the program. We are imaginative based on what we can utilize. That's not magic.
  7. Michael 345 New year. PRESENT is 71 years old Valued Senior Member

    I'm reading up about consciousness at the moment.
    Yes, I would agree about the complexity of the program. However, the complexity of the brain incorporates the experiences from the life led up to that point.
    Ways to duplicate a brain's (or a computer's memory's) complexity up to a particular stage of life:
    • Allow the second brain to live the same life (not possible)
    • Expose the computer to the life experiences of the first brain (not possible)
    • Snapshot the first brain - figure out the code - write the code - upload the code to the second brain (in theory dubious, in practice very dubious)
    Any hope for the original post? Perhaps in 1 million years


  8. Write4U Valued Senior Member

    I agree. That's the point I made by calling it a "flexible algorithm", because how we process information is dependent on types and values which we recognize and subconsciously believe need processing.

    We may have billions of neurons in our brain, but the information available consists of trillions upon trillions of bits and pieces of information. Thus our brain's limitations in cognitive ability make it selective in recognizing which information is pertinent and which can be discarded. A "best guess".

    But our neural network can reprogram itself "on the fly". I believe this falls in fields of "inductive" and "deductive" reasoning.

    This is why optical illusions are so effective: they "fool" the brain by purposely misleading it into forming a false inner picture of what is being observed, as Anil Seth clearly demonstrated.

    This simplest of all optical illusions demonstrates this flexibility in most, but not all, people.
    Is this girl dancing clockwise or counterclockwise?
  9. someguy1 Registered Senior Member

    You're using the word algorithm in a way that's totally different from how the word is used in computer science.

    If you said, "That's the point I made by calling it a "flexible FOOZLE", because how we process information is dependent on types and values which we recognize and subconsciously believe need processing."

    then I would have no disagreement. But you couldn't use the word tunafish, because tunafish already has a meaning. You couldn't use the word brick, because brick already has a meaning. And you can't use the word algorithm there, because what you describe is not what algorithms are.
  10. Write4U Valued Senior Member

    Well, IMO, an algorithm is a mathematical function. And from what I understand, it is intimately involved in the logical processing of information, in computers and in the brain.

    Look deeper into the meaning of words at a fundamental universal level.
    Last edited: Jan 16, 2018
  11. someguy1 Registered Senior Member

    Could not be more false. There are uncountably many functions (from the naturals to the naturals, which is the usual domain of computer science) but only countably many Turing machines. There are VASTLY more functions than algorithms.

    If you randomly pick a mathematical function, the probability is 1 that it is NOT expressible as an algorithm.
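
    The counting argument above can be sketched in code. This is a hedged illustration, not a proof: the enumeration below is a toy stand-in for "some countable list of 0/1-valued functions", and the point is that Cantor's diagonal trick manufactures a function missing from any such list, so no countable collection (in particular, no listing of all algorithms) can contain every function.

    ```python
    # Cantor's diagonal argument, executable sketch. Given ANY enumeration
    # f_0, f_1, ... of functions from naturals to {0, 1}, the "diagonal"
    # function differs from f_i at input i, so it is not in the list.

    def diagonal(enumeration):
        """Return a function that disagrees with enumeration(i) at input i."""
        def d(n):
            return 1 - enumeration(n)(n)  # flip the diagonal bit
        return d

    # A toy enumeration: f_i(n) = 1 if n is a multiple of i+1, else 0.
    def toy_enumeration(i):
        return lambda n: 1 if n % (i + 1) == 0 else 0

    d = diagonal(toy_enumeration)
    # d disagrees with f_i at input i, for every i we care to check:
    for i in range(100):
        assert d(i) != toy_enumeration(i)(i)
    ```

    The same flip works no matter which enumeration is plugged in, which is why the conclusion does not depend on the toy choice.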
  12. someguy1 Registered Senior Member

    I'm afraid that when it comes to technical stuff I am a literalist. And you are a poet. Which is cool. You talk about "... looking deeper into the meaning of words at a universal level." That sounds a little vague to me. It sounds like you are asking me to use my imagination and intuition and take flights of fancy to what could be going on in our brains that gives rise to intelligence and consciousness.

    So that's fine. I get that.

    But I suspect that you don't think you're doing that. I think that you think you're making a scientific point. And if that's true, then I have no choice but to disagree on technical grounds. Algorithms are defined in very specific terms. The capabilities you ascribe to algorithms are not present in the technical literature on what algorithms can do.

    I hope this clarifies where I'm coming from.

    I also wanted to add to what I said about there being way more noncomputable functions than computable ones. That's the terminology. Turing himself discussed this exact point in his 1936 paper. A function is computable if its bits can be cranked out by an algorithm.

    So you are right if you alter your statement to, "An algorithm is just a COMPUTABLE function." But then we haven't learned anything new. It's still true that the vast majority of functions are NON-computable. The technically accurate phrase is, "all but countably many functions are noncomputable." If you put all the functions in a bowl, close your eyes, and randomly pick one out ... you have zero probability of picking out a computable function!

    That which is computable is only a tiny subset of that which is.

    That's a mathematical fact, which I put forth as a metaphysical thesis. I believe that the qualities of intelligence and self-awareness and subjective experience go beyond the realm of what algorithms can do. Whether this is ultimately true or not is of course an open scientific question. I don't claim to know the ultimate truth. But I have evidence. The evidence is that non-computable functions exist. We must ask ourselves if perhaps they have some part to play in how our minds work and in how our world works.
    Last edited: Jan 17, 2018
  13. Michael 345 New year. PRESENT is 71 years old Valued Senior Member

    Me - maths = not good connection
    I think of an algorithm as a long string of calculations/formulas which might pass the result on to another algorithm, and just keep going until the whole bunch has assembled an answer.

    I found this, but it was not much help.


  14. Write4U Valued Senior Member

    I gave you the reference you asked for, which clearly explains the function of algorithms.
    Yes, there are several "fundamental" natural mathematical functions, but that's not what you asked for.

    Natural Algorithms is one.

    The Exponential Function is another, which is the inverse of a Logarithm.

    These are natural functions, universal potentials, which we have been able to symbolize and use where these functions are applicable.

    According to Tegmark, there are some 33 significant numbers (natural values) and a handful of equations by which all of reality can be calculated.
  15. someguy1 Registered Senior Member

    * That's an extremely limited class of functions, the "well known freshman calculus functions." Trust me, there are a lot more functions than that. And of course these particular functions are definitely computable.

    You have exhibited SOME functions that are computable. But MOST functions are NOT computable.

    You are confusing the functions you've learned, which are called the elementary functions for a good reason, with the class of all functions. There are a lot of functions out there. Most are not computable.

    * Regarding Tegmark's 33 variables, those define our universe up to our current level of physical understanding. Given the history of physics from Aristotle to Newton to Einstein to ... I don't know, Witten say ... wouldn't it be fair to conclude that the physics of the future might extend the physics of the present? And maybe Tegmark's 33 variables will turn out not to be sufficient to tune the universe after all.

    Claims of physics are claims about what our current consensus theory says. It's historically contingent.

    Claims about the ultimate nature of the world are metaphysics.

    * But the real bottom line on this post is that to understand algorithms, you need to understand functions in a larger context than the elementary functions of calculus. Consider all the possible functions there could be that input a positive integer, and output a 0 or a 1. So each function can be represented as an infinite bitstring, like 0010101010101010101010... going on forever.

    If you like, put a "binary point" in front of each bitstring. Then it can be interpreted as the binary expansion of some real number in the unit interval, between 0 and 1.
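
    The bitstring-to-real interpretation just described can be made concrete. A minimal sketch: a function from positive integers to {0, 1} is the infinite bitstring f(1) f(2) f(3) ..., and the "binary point" reads it as a real in [0, 1]. In code we can only evaluate a finite prefix, which approximates the real number.

    ```python
    # Interpreting a 0/1-valued function as a real number in [0, 1],
    # via a finite prefix of its bitstring.

    def bits_to_real(bits):
        """Read a finite bit prefix b1 b2 ... bn as 0.b1b2...bn in binary."""
        return sum(b / 2 ** (i + 1) for i, b in enumerate(bits))

    # Example: f(n) = n mod 2 gives the bitstring 1 0 1 0 1 0 ...
    prefix = [n % 2 for n in range(1, 21)]
    x = bits_to_real(prefix)
    # 0.101010... in binary converges to 2/3
    assert abs(x - 2 / 3) < 1e-5
    ```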

    Now there are a LOT of these real numbers. If you know about Cantor's diagonal argument, you know that there are uncountably many of these bitstrings. Or functions, or real numbers. All the interpretations are really the same thing.

    And only countably many of them are computable. Turing worked all of this out in his 1936 paper in which he defined what it means for something to be a computation. Turing's ideas are still taught and are still valid today. The definition of an algorithm hasn't changed.

    One of the first things Turing did in fact was show that he could define a problem that could not possibly be solved by a computation. The example he came up with is called the Halting problem.

    Turing discovered that there are problems whose solution is simply not computable by an algorithm. He showed us the limitations of algorithms. For some reason this point is not appreciated by those who claim that the world's an algorithm, the mind's an algorithm, everything that can be done by a human can be done by an algorithm. It's just not true. Turing showed us on day one of the computer revolution that there are problems that computers cannot solve, not with all the processing power and memory in the world.
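
    Turing's Halting-problem argument can be sketched in Python form. This is an illustration of the contradiction, not Turing's original construction: suppose a function `halts(f, x)` existed that correctly decided whether f(x) terminates; then the program below defeats it on itself.

    ```python
    # Halting-problem sketch: any claimed halting "oracle" is refuted
    # by a program built to do the opposite of what the oracle predicts.

    def paradox(halts):
        def g(f):
            if halts(f, f):      # if the oracle says f(f) halts...
                while True:      # ...loop forever instead
                    pass
            return None          # otherwise halt immediately
        # g(g) halts exactly when halts(g, g) says it doesn't:
        # contradiction either way, so no correct halts() can exist.
        return g

    # Example: the (wrong) oracle that always answers "doesn't halt"
    # is refuted because g(g) then halts immediately:
    g_false = paradox(lambda f, x: False)
    assert g_false(g_false) is None
    ```

    The oracle that always answers "halts" fails symmetrically: g(g) would then loop forever, so it is not run here.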

    Here is a pdf of Turing's paper if anyone is interested.
    Last edited: Jan 17, 2018
  16. Write4U Valued Senior Member

    OK, my turn. Name me a few mathematical functions which are not computable.
  17. Write4U Valued Senior Member

    Actually I found this very interesting, especially the opening paragraph.
    I just got up, so I'll have me a cup of Java.....Algorithmically brewed.....


    Last edited: Jan 17, 2018
  18. someguy1 Registered Senior Member

    Not arguing the point or playing games. Turing was considering precisely the space of functions I explained to you. I'm explaining, not arguing.

    You want to play semantic games about what it means for a function to be mathematical. All functions are mathematical as far as I'm concerned. The ones we use in calculus, the ones they only use in the farthest reaches of set theory. The ones with names and the ones that can never have names. All functions are mathematical objects by definition.

    You are arguing from a very limited mathematical perspective. Your lack of broader understanding is not an argument! It's an opportunity for you to learn something. I'm doing my best to point you in a direction. If I should stop, please let me know.

    What I mean is ... you're interested in algorithms. I'm trying to explain a couple of things about algorithms that might help you to sharpen your own ideas. If that's helpful, ok. If not, not.
    Last edited: Jan 17, 2018
  19. iceaura Valued Senior Member

    I specifically set any assumption of "digital" aside, as irrelevant.
    Are you claiming my assumption that the brain is a physical object goes far beyond anything known to contemporary physics? Or is it something like the supposed absence of a clock making the critical difference - preventing the "freezing" of the brain like a CPU?

    There's an article in Science recently - last couple of months - that describes the great efficiency gains possible by varying the connection patterns in a neural network over time. A neural network that is always fully connected wastes a lot of energy in repetition and duplication, as a mathematical fact. This slows it down.

    What, exactly, is "speed of execution" in that context?
  20. someguy1 Registered Senior Member

    Of course not. The brain is physical. It does not happen to be computational. It's not an algorithm. It does obey the laws of physics, but it is not reducible to a Turing machine.

    That's one reason: there is no clock, and there is no discrete sequence of states as there is in a digital computer.

    But this is a question of physics. Is the world continuous in the sense of the real numbers in math? Infinitely divisible with no gaps? Or is it discrete? Nobody knows. Perhaps nobody can ever know (or perhaps someone will publish a definitive proof one way or the other tomorrow morning).

    So when you talk about freezing the brain in an instant, you are supposing that time itself can be frozen in an instant. But it's unknown as to whether time and space work that way.

    On the other hand, digital computers DO work that way. That's how we design them.

    If you are talking about artificial neural nets implemented on computers, they are programs that run on conventional hardware. They're reducible in principle to Turing machines. A neural net consists of a set of nodes that are assigned numeric weights. It's very conventional programming, although a very clever approach to organizing a computation. You could take a pencil and paper and a copy of the program and execute the neural net algorithm by hand.
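
    The "pencil and paper" claim is easy to make concrete. A minimal sketch of a forward pass through a tiny two-layer net (the weights below are made up for illustration): it is nothing but weighted sums and a simple nonlinearity, arithmetic anyone could carry out by hand.

    ```python
    # A tiny neural-net forward pass: numerically weighted nodes,
    # executable by hand or by machine with identical results.

    def relu(x):
        return max(0.0, x)

    def forward(inputs, hidden_weights, output_weights):
        """Weighted sums into each hidden node, ReLU, then a weighted output sum."""
        hidden = [relu(sum(w * x for w, x in zip(ws, inputs)))
                  for ws in hidden_weights]
        return sum(w * h for w, h in zip(output_weights, hidden))

    inputs = [1.0, 2.0]
    hidden_weights = [[0.5, -1.0], [1.0, 1.0]]   # two hidden nodes (toy weights)
    output_weights = [2.0, 0.5]

    y = forward(inputs, hidden_weights, output_weights)
    # hidden node 1: relu(0.5*1 - 1.0*2) = relu(-1.5) = 0
    # hidden node 2: relu(1.0*1 + 1.0*2) = relu(3)    = 3
    # output: 2.0*0 + 0.5*3 = 1.5
    assert y == 1.5
    ```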

    On the other hand if by neural net you mean brain, there is no evidence at all that the brain is a neural net and nothing else. After all, there are no numerically-weighted nodes in the brain.

    A speedup in the programming of a neural net, whether achieved via more clever organization or by running it on faster hardware, has no possible effect on what that net can compute. It's just like running the Euclidean algorithm to determine the GCD of two integers. You can run it by hand with pencil and paper or you can run it on a supercomputer. In either case it has the exact same capability: to determine the GCD of two integers. Running it fast doesn't make it do something that it couldn't do when done by hand. Of course the numbers it can handle get larger as you add processing power and memory. But the set of mathematical functions it can compute are exactly the same. Neural nets are reducible to Turing machines.
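
    The Euclidean algorithm mentioned above, for reference; run by hand or on a supercomputer, it computes exactly the same function.

    ```python
    # Euclidean algorithm: GCD via repeated remainders.

    def gcd(a, b):
        while b:
            a, b = b, a % b
        return a

    assert gcd(48, 36) == 12
    assert gcd(17, 5) == 1
    ```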
  21. iceaura Valued Senior Member

    hmmm. Wouldn't you need quantum processing to escape that reduction? Nothing that fails to violate Bell's inequality or some similar logical establishment will do, iirc.
    A bit misleading in omission - the real numbers are not as dense on the line as is possible; there are different kinds of "gaps".
    "Nothing else"? Even single purpose constructed neural networks in physical use are not neural nets and nothing else.
    There are physical models of numerically weighted nodes in the brain (primed and suppressed neurons) - and of course such things (physical analogs of numerically weighted nodes) are also among the basic components of digital computers. The setups are of course radically different, qualitatively different, in the complexity of their organization - including the existence of temporal patterns of change in node connection, in the brain - but is that the key to your objections?
    Last edited: Jan 17, 2018
  22. someguy1 Registered Senior Member

    I have to plead ignorance on that subject. If you have a ref I'd appreciate it. I don't see why saying something is physical but not reducible to a TM requires quantum physics but that's definitely one of the weak areas in my knowledge.
    That you'll have to explain. What kind of gaps?

    I phrased my remark as I did because people often ask if space is infinitely divisible. Well the rational numbers are infinitely divisible but they're not a continuum. They are full of holes. For example there's a hole where sqrt(2) is supposed to be. There's a hole at every irrational.
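
    A quick numeric illustration of that hole: no fraction p/q squares exactly to 2, so the rationals crowd around sqrt(2) but never land on it. A brute-force check over small denominators (a finite sample, not a proof):

    ```python
    # The "hole" at sqrt(2): exact rational arithmetic shows no p/q
    # with these denominators squares to 2.
    from fractions import Fraction

    for q in range(1, 200):
        for p in range(q, 2 * q + 1):          # p/q in [1, 2] is enough
            assert Fraction(p, q) ** 2 != 2    # never exactly 2
    ```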

    On the other hand, the reals are topologically complete. This is often stated as the least upper bound property: Every nonempty subset of reals that's bounded above has a least upper bound. This means there are no holes in the reals.

    What gaps did you have in mind?

    Didn't follow your meaning here. You're asking me if I said that neural nets are not neural nets? Don't follow.

    What I said is that artificial neural nets are reducible to TMs, and they are just regular programs implemented on conventional computing hardware. They can't compute anything that a TM can't already compute.

    Yes of course, there are terrific models. We're talking about actual brains though. A model of a brain is not a brain. We can model a brain as a neural net but we don't know if a brain IS a neural net.

    My "and nothing else" remark was intended to head off objections along the lines of saying that the brain implements SOME functionality of neural nets. But neural nets alone have not yet been able to explain everything the brain does. We simply don't know enough about the brain.

    I'll have a look at the article.

    ps -- I looked at the article. The pdf renders fuzzy for me, did you see that too or is it my eyes?

    In any event, my remarks would still stand for a simple reason. Every possible mode of computation is reducible to a TM. That's the Church-Turing thesis. It's a thesis and not a theorem, and it's possible that someday someone could disprove it. But so far nobody has. It's stood for 80 years. If someone's neural net broke Church-Turing it would make the news.

    I do confess that I read all the same breathless hype about neural nets that everyone else does, but that I have not seen anyone address this particular objection. Neural nets can't compute anything TMs can't. So if neural nets are responsible for consciousness, then either a TM can be conscious (which I don't believe) or whatever it is neural nets are doing to implement consciousness, it can't be called a computation.

    I'd love to get a straight answer on this. For all I know I'm totally wrong but I have not found any kind of discussion or article that puts my concerns in context.
    Last edited: Jan 18, 2018
  23. iceaura Valued Senior Member

    Turing machines can emulate Boolean logic, and every process described by classical physics can be described in that logic (by way of propositional calculus, etc.).
    The reference was to stuff like this: complex numbers etc. We are dealing with electronic current in three dimensions - complex numbers, even quaternions, are involved.
    I failed to be clear: the "numerically weighted node" is the abstraction; the physical realization of it is the model. With their connections, some neurons, some transistors, are physical models of numerically weighted nodes. Your claim was that the brain contains no such things; my claim is that it does.

    Of course it contains much more, not only in hardware but in organizational complexity - but so do the actual machines running neural nets, at least the hardware.
