Is it possible to functionally transfer knowledge from one neural network to another?

This is IMO an argument against mind being some kind of property or output of a computation. If this were true, then a person sitting with a big box of pencils and a big stack of paper and a copy of the computer program could start at the first line and execute that program line by line and thereby implement a mind. And where exactly would that mind be? When does the mind come into existence?
For one, in the creation of art, painting, music, poetry. Can an algorithm be imaginative?
 
For one, in the creation of art, painting, music, poetry. Can an algorithm be imaginative?

why not, in theory? it's just the complexity of the program. we are imaginative based on what we can utilize. that's not magic.
 
why not, in theory? it's just the complexity of the program. we are imaginative based on what we can utilize. that's not magic.

I'm reading up about consciousness at the moment.
Yes, I would agree about the complexity of the program. However, the complexity of the brain incorporates the experiences from the life led up to that point.
Ways to duplicate a brain's (or a computer's memory's) complexity up to a particular stage of life:
  • Allow the second brain to live the same life (not possible)
  • Expose the computer to the life experiences of the first brain (not possible)
  • Snapshot the first brain - figure out the code - write the code - upload the code to the second brain (dubious in theory, very dubious in practice)
Any hope for the original post? Perhaps in 1 million years

:)
 
why not, in theory? it's just the complexity of the program. we are imaginative based on what we can utilize. that's not magic.
I agree. That's the point I made by calling it a "flexible algorithm", because how we process information is dependent on types and values which we recognize and subconsciously believe need processing.

We may have some 86 billion neurons in our brains, but the information available consists of trillions upon trillions of bits and pieces of information. Thus our brain's limitations in cognitive ability make it selective in recognizing which information is pertinent and which can be discarded. A "best guess".

But our neural network can reprogram itself "on the fly". I believe this falls within the fields of "inductive" and "deductive" reasoning.

This is why optical illusions are so effective: they "fool" the brain by purposely misleading it into forming a false inner picture of what is being observed, as Anil Seth clearly demonstrated.

This simplest of all optical illusions demonstrates this flexibility in most, but not all, people.
Is this girl dancing clockwise or counterclockwise?
 
I agree. That's the point I made by calling it a "flexible algorithm", because how we process information is dependent on types and values which we recognize and subconsciously believe need processing.

You're using the word algorithm in a way that's totally different than how the word is used in computer science.

If you said, "That's the point I made by calling it a "flexible FOOZLE", because how we process information is dependent on types and values which we recognize and subconsciously believe need processing."

then I would have no disagreement. But you couldn't use the word tunafish, because tunafish already has a meaning. You couldn't use the word brick, because brick already has a meaning. And you can't use the word algorithm there, because what you describe is not what algorithms are.
 
You're using the word algorithm in a way that's totally different than how the word is used in computer science.

If you said, "That's the point I made by calling it a "flexible FOOZLE", because how we process information is dependent on types and values which we recognize and subconsciously believe need processing."

then I would have no disagreement. But you couldn't use the word tunafish, because tunafish already has a meaning. You couldn't use the word brick, because brick already has a meaning. And you can't use the word algorithm there, because what you describe is not what algorithms are.
Well, IMO, an algorithm is a mathematical function. And from what I understand, it is intimately involved in the logical processing of information in computers and in the brain.
Algorithms: Introduction
The first step towards an understanding of why the study and knowledge of algorithms are so important is to define exactly what we mean by an algorithm. According to the popular algorithms textbook Introduction to Algorithms (Second Edition by Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, Clifford Stein), "an algorithm is any well-defined computational procedure that takes some value, or set of values, as input and produces some value, or set of values as output." In other words, algorithms are like road maps for accomplishing a given, well-defined task. So, a chunk of code that calculates the terms of the Fibonacci sequence is an implementation of a particular algorithm. Even a simple function for adding two numbers is an algorithm in a sense, albeit a simple one.

Some algorithms, like those that compute the Fibonacci sequences, are intuitive and may be innately embedded into our logical thinking and problem solving skills. However, for most of us, complex algorithms are best studied so we can use them as building blocks for more efficient logical problem solving in the future. In fact, you may be surprised to learn just how many complex algorithms people use every day when they check their e-mail or listen to music on their computers. This article will introduce some basic ideas related to the analysis of algorithms, and then put these into practice with a few examples illustrating why it is important to know about algorithms.
https://www.topcoder.com/community/data-science/data-science-tutorials/the-importance-of-algorithms/
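To make the article's Fibonacci example concrete, here is a minimal sketch in Python (my own toy code, not taken from the article):

```python
def fib(n):
    """Return the first n terms of the Fibonacci sequence.

    A well-defined computational procedure: it takes a value n as input
    and produces a set of values as output -- an algorithm in exactly
    the textbook's sense.
    """
    terms = []
    a, b = 0, 1
    for _ in range(n):
        terms.append(a)
        a, b = b, a + b
    return terms

print(fib(8))  # [0, 1, 1, 2, 3, 5, 8, 13]
```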

Look deeper into the meaning of words at a fundamental universal level.
 
Last edited:
Well, IMO, an algorithm is a mathematical function.

Could not be more false. There are uncountably many functions (from the naturals to the naturals, which is the usual domain of computer science) but only countably many Turing machines. There are VASTLY more functions than algorithms.

If you randomly pick a mathematical function, the probability is 1 that it is NOT expressible as an algorithm.
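The counting half of that argument can even be sketched in code: every program text is a finite string over a finite alphabet, so all programs can be listed one by one (a toy two-letter alphabet here, but any finite alphabet works the same way):

```python
from itertools import count, islice, product

ALPHABET = "ab"  # toy alphabet; a real programming language's charset is also finite

def all_finite_strings():
    """Yield every finite string over ALPHABET, shortest first.

    Every program text appears somewhere in a list like this one, so
    programs (and hence algorithms) are countable.  Functions from the
    naturals to the naturals are uncountable -- Cantor's diagonal
    argument -- so most functions have no algorithm.
    """
    for length in count(0):
        for chars in product(ALPHABET, repeat=length):
            yield "".join(chars)

print(list(islice(all_finite_strings(), 7)))  # ['', 'a', 'b', 'aa', 'ab', 'ba', 'bb']
```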
 
Look deeper into the meaning of words at a fundamental universal level.

I'm afraid that when it comes to technical stuff I am a literalist. And you are a poet. Which is cool. You talk about "... looking deeper into the meaning of words at a universal level." That sounds a little vague to me. It sounds like you are asking me to use my imagination and intuition and take flights of fancy to what could be going on in our brains that gives rise to intelligence and consciousness.

So that's fine. I get that.

But I suspect that you don't think you're doing that. I think that you think you're making a scientific point. And if that's true, then I have no choice but to disagree on technical grounds. Algorithms are defined in very specific terms. The capabilities you ascribe to algorithms are not present in the technical literature on what algorithms can do.

I hope this clarifies where I'm coming from.

I also wanted to add to what I said about there being way more noncomputable functions than computable ones. That's the terminology. Turing himself discussed this exact point in his 1936 paper. A function is computable if its bits can be cranked out by an algorithm.

So you are right if you alter your statement to, "An algorithm is just a COMPUTABLE function." But then we haven't learned anything new. It's still true that the vast majority of functions are NON-computable. The technically accurate phrase is, "all but countably many functions are noncomputable." If you put all the functions in a bowl, close your eyes and randomly pick one out ... you have zero probability of picking out a computable function!

That which is computable is only a tiny subset of that which is.

That's a mathematical fact, which I put forth as a metaphysical thesis. I believe that the qualities of intelligence and self-awareness and subjective experience go beyond the realm of what algorithms can do. Whether this is ultimately true or not is of course an open scientific question. I don't claim to know the ultimate truth. But I have evidence. The evidence is that non-computable functions exist. We must ask ourselves if perhaps they have some part to play in how our minds work and in how our world works.
 
Last edited:
I'm afraid that when it comes to technical stuff I am a literalist. And you are a poet. Which is cool. You talk about "... looking deeper into the meaning of words at a universal level." That sounds a little vague to me. It sounds like you are asking me to use my imagination and intuition and take flights of fancy to what could be going on in our brains that gives rise to intelligence and consciousness.

So that's fine. I get that.

But I suspect that you don't think you're doing that. I think that you think you're making a scientific point. And if that's true, then I have no choice but to disagree on technical grounds. Algorithms are defined in very specific terms. The capabilities you ascribe to algorithms are not present in the technical literature on what algorithms can do.

I hope this clarifies where I'm coming from.

I also wanted to add to what I said about there being way more noncomputable functions than computable ones. That's the terminology. Turing himself discussed this exact point in his 1936 paper. A function is computable if its bits can be cranked out by an algorithm.

So you are right if you alter your statement to, "An algorithm is just a COMPUTABLE function." But then we haven't learned anything new. It's still true that the vast majority of functions are NON-computable. The technically accurate phrase is, "all but countably many functions are noncomputable." If you put all the functions in a bowl, close your eyes and randomly pick one out ... you have zero probability of picking out a computable function!

That which is computable is only a tiny subset of that which is.

That's a mathematical fact, which I put forth as a metaphysical thesis. I believe that the qualities of intelligence and self-awareness and subjective experience go beyond the realm of what algorithms can do. Whether this is ultimately true or not is of course an open scientific question. I don't claim to know the ultimate truth. But I have evidence. The evidence is that non-computable functions exist. We must ask ourselves if perhaps they have some part to play in how our minds work and in how our world works.

Me - maths = not good connection
I think of algorithms as a long string of calculations/formulas which might pass the result on to another algorithm and just keep going until the whole bunch has assembled an answer.

I found this but not much help

https://www.google.com.au/amp/bigth...n-algorithm-your-brain-is-an-operating-system

:)
 
Could not be more false. There are uncountably many functions (from the naturals to the naturals, which is the usual domain of computer science) but only countably many Turing machines. There are VASTLY more functions than algorithms.

If you randomly pick a mathematical function, the probability is 1 that it is NOT expressible as an algorithm.
I gave you the reference you asked for which clearly explains the function of algorithms.
Even a simple function for adding two numbers is an algorithm in a sense, albeit a simple one.

Yes, there are several "fundamental natural mathematical functions", but that's not what you asked for.

Natural Algorithms are one example:
Algebra studies two main families of equations: polynomial equations and, among them, the special case of linear equations. When there is only one variable, polynomial equations have the form P(x) = 0, where P is a polynomial, and linear equations have the form ax + b = 0, where a and b are parameters. To solve equations from either family, one uses algorithmic or geometric techniques that originate from linear algebra or mathematical analysis.
https://en.wikipedia.org/wiki/Equation
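For the linear case in the quoted passage, the algorithmic technique amounts to one line (a sketch; the function name is mine):

```python
def solve_linear(a, b):
    """Solve a*x + b = 0 for x -- the simplest algorithmic technique
    the quoted passage refers to.  Requires a != 0, otherwise the
    equation is not linear in x."""
    if a == 0:
        raise ValueError("not a linear equation in x")
    return -b / a

print(solve_linear(2.0, -6.0))  # 3.0
```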

The Exponential Function is another; it is the inverse of the logarithm.
In mathematics, an exponential function is a function of the form f(x) = b^x, in which the input variable x occurs as an exponent. A function of the form f(x) = b^(x+c), where c is a constant, is also considered an exponential function and can be rewritten as f(x) = ab^x, with a = b^c.

As functions of a real variable, exponential functions are uniquely characterized by the fact that the growth rate of such a function (i.e., its derivative) is directly proportional to the value of the function. The constant of proportionality of this relationship is the natural logarithm of the base b: d/dx b^x = b^x ln(b).
https://en.wikipedia.org/wiki/Exponential_function
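The proportionality property described in the quoted passage is easy to check numerically (a rough sketch; the base, the point, and the step size are arbitrary choices of mine):

```python
import math

def numeric_derivative(f, x, h=1e-6):
    """Central-difference estimate of f'(x)."""
    return (f(x + h) - f(x - h)) / (2 * h)

b = 2.0
f = lambda x: b ** x
x = 1.5

# The growth rate of b^x should equal the function's value times ln(b).
est = numeric_derivative(f, x)
exact = f(x) * math.log(b)
print(abs(est - exact) < 1e-6)  # True
```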

These are natural functions, universal potentials, which we have been able to symbolize and use where these functions are applicable.

According to Tegmark, there are some 33 significant numbers (natural values) and a handful of equations by which all of reality can be calculated.
 
I gave you the reference you asked for which clearly explains the function of algorithms.

Yes there are several "fundamental natural mathematical functions, but that's not what you asked for.

* That's an extremely limited class of functions, the "well known freshman calculus functions." Trust me, there are a lot more functions than that. And of course these particular functions are definitely computable.

You have exhibited SOME functions that are computable. But MOST functions are NOT computable.

You are confusing the functions you've learned, which are called the elementary functions for a good reason, with the class of all functions. There are a lot of functions out there. Most are not computable.

* Regarding Tegmark's 33 variables, those define our universe up to our current level of physical understanding. Given the history of physics from Aristotle to Newton to Einstein to ... I don't know, Witten say ... wouldn't it be fair to conclude that the physics of the future might extend the physics of the present? And maybe Tegmark's 33 variables will turn out not to be sufficient to tune the universe after all.

Claims of physics are claims about what our current consensus theory says. It's historically contingent.

Claims about the ultimate nature of the world are metaphysics.

* But the real bottom line on this post is that to understand algorithms, you need to understand functions in a larger context than the elementary functions of calculus. Consider all the possible functions there could be that input a positive integer, and output a 0 or a 1. So each function can be represented as an infinite bitstring, like 0010101010101010101010... going on forever.

If you like, put a "binary point" in front of each bitstring. Then it can be interpreted as the binary expansion of some real number in the unit interval, between 0 and 1.

Now there are a LOT of these real numbers. If you know about Cantor's diagonal argument, you know that there are uncountably many of these bitstrings. Or functions, or real numbers. All the interpretations are really the same thing.

And only countably many of them are computable. Turing worked all of this out in his 1936 paper in which he defined what it means for something to be a computation. Turing's ideas are still taught and are still valid today. The definition of an algorithm hasn't changed.
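The diagonal step itself can be sketched with a finite stand-in for the infinite list: flip the k-th bit of the k-th row, and the result is guaranteed to differ from every row in the list:

```python
def diagonal_complement(rows):
    """Given a square list of 0/1 rows, build the bitstring whose k-th
    bit is the flip of rows[k][k].  It differs from row k at position k,
    so it cannot appear anywhere in the list -- Cantor's diagonal trick,
    here on a finite sample of the infinite construction."""
    return [1 - rows[k][k] for k in range(len(rows))]

rows = [
    [0, 0, 1, 0],
    [1, 1, 1, 1],
    [0, 1, 0, 1],
    [1, 0, 0, 0],
]
d = diagonal_complement(rows)
print(d)          # [1, 0, 1, 1]
print(d in rows)  # False -- guaranteed by construction, not a coincidence
```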

One of the first things Turing did in fact was show that he could define a problem that could not possibly be solved by a computation. The example he came up with is called the Halting problem.

https://en.wikipedia.org/wiki/Halting_problem

Turing discovered that there are problems whose solution is simply not computable by an algorithm. He showed us the limitations of algorithms. For some reason this point is not appreciated by those who claim that the world's an algorithm, the mind's an algorithm, everything that can be done by a human can be done by an algorithm. It's just not true. Turing showed us on day one of the computer revolution that there are problems that computers cannot solve, not with all the processing power and memory in the world.
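Turing's argument can be sketched in a few lines of Python. Suppose, for contradiction, that a perfect halts(program, input) existed (the names here are hypothetical, of course; Turing proved no such function can be written):

```python
def halts(program, arg):
    """Hypothetical perfect halting oracle.  Turing proved that no
    total, always-correct implementation can exist; this stub only
    marks the assumption."""
    raise NotImplementedError("no algorithm can implement this")

def troublemaker(program):
    # Do the opposite of whatever the oracle predicts about running
    # `program` on its own source.
    if halts(program, program):
        while True:   # predicted to halt -> loop forever
            pass
    else:
        return        # predicted to loop -> halt at once

# Feeding troublemaker to itself makes either answer from `halts`
# wrong: if it says "halts", troublemaker loops; if it says "loops",
# troublemaker halts.  That contradiction is the whole proof.
```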

Here is a pdf of Turing's paper if anyone is interested. https://www.cs.virginia.edu/~robins/Turing_Paper_1936.pdf
 
Last edited:
That's an extremely limited class of functions, the "well known freshman calculus functions." Trust me, there are a lot more functions than that. And of course these particular functions are definitely computable.

You have exhibited SOME functions that are computable. But MOST functions are NOT computable.

You are confusing the functions you've learned, which are called the elementary functions for a good reason, with the class of all functions. There are a lot of functions out there. Most are not computable.
Ok, my turn. Name me a few mathematical functions which are not computable.
 
Me - maths = not good connection
I think of algorithms as a long string of calculations/formulas which might pass the result on to another algorithm and just keep going until the whole bunch has assembled an answer.

I found this but not much help

https://www.google.com.au/amp/bigth...n-algorithm-your-brain-is-an-operating-system :)
Actually I found this very interesting, especially the opening paragraph.
Ever wondered how you were supposed to keep up with the never-ending stream of content and data in your life? Not to worry, the elves of the Internet are busy at work, creating everything from magical little algorithms that automatically execute basic tasks to sophisticated utility apps that run in the background, taking care of all the minutiae in your daily life. Forget about hiring a personal assistant, you can “hire” off-the-shelf algorithms and digital apps that do all the heavy lifting for you. If that doesn't work, just ask Siri. Your life is an algorithm, your brain is an operating system, now go get some sleep.

I just got up, so I'll have me a cup of Java.....Algorithmically brewed.....:)
 
Last edited:
Ok, my turn. Name me a few mathematical functions which are not computable.

Not arguing the point or playing games. Turing was considering precisely the space of functions I explained to you. I'm explaining, not arguing.

You want to play semantic games about what it means for a function to be mathematical. All functions are mathematical as far as I'm concerned. The ones we use in calculus, the ones they only use in the farthest reaches of set theory. The ones with names and the ones that can never have names. All functions are mathematical objects by definition.

You are arguing from a very limited mathematical perspective. Your lack of broader understanding is not an argument! It's an opportunity for you to learn something. I'm doing my best to point you in a direction. If I should stop, please let me know.

What I mean is ... you're interested in algorithms. I'm trying to explain a couple of things about algorithms that might help you to sharpen your own ideas. If that's helpful, ok. If not, not.
 
Last edited:
Oh but this is NOT true; or at the very least, it's very far from being known. We have no theory of how the brain works from one instant to the next. There's no proof it's digital at all and in my opinion it isn't.

You are making a metaphysical assumption about the nature of the world. Your assumption goes far beyond anything known to contemporary physics.
I specifically set any assumption of "digital" aside, as irrelevant.
Are you claiming my assumption that the brain is a physical object goes far beyond anything known to contemporary physics? Or is it something like the supposed absence of a clock making the critical difference - preventing the "freezing" of the brain like a CPU?

If mind is any kind of byproduct or output of a computation that depends on the speed of execution, then mind, whatever it is, is not computational.
There's an article in Science recently - last couple of months - that describes the great efficiency gains possible via varying the connection patterns in a neural network over time. A neural network that is always fully connected wastes a lot of energy in repetition and duplication, as a mathematical fact. This slows it.

What, exactly, is "speed of execution" in that context?
 
Are you claiming my assumption that the brain is a physical object goes far beyond anything known to contemporary physics?

Of course not. The brain is physical. It does not happen to be computational. It's not an algorithm. It does obey the laws of physics, but it is not reducible to a Turing machine.

Or is it something like the supposed absence of a clock making the critical difference - preventing the "freezing" of the brain like a CPU?

That's one reason: there is no clock, and there is no discrete sequence of states as there is in a digital computer.

But this is a question of physics. Is the world continuous in the sense of the real numbers in math? Infinitely divisible with no gaps? Or is it discrete? Nobody knows. Perhaps nobody can ever know (or perhaps someone will publish a definitive proof one way or the other tomorrow morning).

So when you talk about freezing the brain in an instant, you are supposing that time itself can be frozen in an instant. But it's unknown as to whether time and space work that way.

On the other hand, digital computers DO work that way. That's how we design them.

There's an article in Science recently - last couple of months - that describes the great efficiency gains possible via varying the connection patterns in a neural network over time. A neural network that is always fully connected wastes a lot of energy in repetition and duplication, as a mathematical fact. This slows it.

What, exactly, is "speed of execution" in that context?

If you are talking about artificial neural nets implemented on computers, they are programs that run on conventional hardware. They're reducible in principle to Turing machines. A neural net consists of a set of nodes that are assigned numeric weights. It's very conventional programming, although a very clever approach to organizing a computation. You could take a pencil and paper and a copy of the program and execute the neural net algorithm by hand.
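For instance, a single neuron of such a net is just a weighted sum and a threshold, small enough to run by hand (the weights below are my own choice, wired to compute logical AND):

```python
def step(x):
    """Threshold activation: fire (1) if the weighted sum is positive."""
    return 1 if x > 0 else 0

def forward(inputs, weights, bias):
    """One neuron of an artificial neural net: weighted sum plus
    activation.  Nothing here goes beyond ordinary arithmetic a person
    could do with pencil and paper -- which is the point: it is
    reducible to a Turing machine."""
    total = sum(w * x for w, x in zip(weights, inputs)) + bias
    return step(total)

# A two-input neuron whose weights make it compute logical AND.
w, b = [1.0, 1.0], -1.5
print([forward([a, c], w, b) for a in (0, 1) for c in (0, 1)])  # [0, 0, 0, 1]
```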

On the other hand if by neural net you mean brain, there is no evidence at all that the brain is a neural net and nothing else. After all, there are no numerically-weighted nodes in the brain.

A speedup in the programming of a neural net, whether achieved via more clever organization or by running it on faster hardware, has no possible effect on what that net can compute. It's just like running the Euclidean algorithm to determine the GCD of two integers. You can run it by hand with pencil and paper or you can run it on a supercomputer. In either case it has the exact same capability: to determine the GCD of two integers. Running it fast doesn't make it do something that it couldn't do when done by hand. Of course the numbers it can handle get larger as you add processing power and memory. But the set of mathematical functions it can compute are exactly the same. Neural nets are reducible to Turing machines.
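For reference, the Euclidean algorithm mentioned here, exactly the same procedure whether run by hand or on a supercomputer (a standard sketch):

```python
def gcd(a, b):
    """Euclid's algorithm: repeatedly replace (a, b) with (b, a mod b)
    until the remainder is zero.  The same steps, and the same
    capability, whether executed with pencil and paper or on the
    fastest hardware available."""
    while b:
        a, b = b, a % b
    return a

print(gcd(1071, 462))  # 21
```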
 
It does obey the laws of physics, but it is not reducible to a Turing machine.
hmmm. Wouldn't you need quantum processing to escape that reduction? Nothing that fails to violate Bell's inequality or some similar logical establishment will do, iirc.
Is the world continuous in the sense of the real numbers in math? Infinitely divisible with no gaps? Or is it discrete? Nobody knows.
A bit misleading in omission - the real numbers are not as dense on the line as is possible, there are different kinds of "gaps".
On the other hand if by neural net you mean brain, there is no evidence at all that the brain is a neural net and nothing else. After all, there are no numerically-weighted nodes in the brain.
"Nothing else"? Even single purpose constructed neural networks in physical use are not neural nets and nothing else.
There are physical models of numerically weighted nodes in the brain (primed and suppressed neurons) - and of course such things (physical analogs of numerically weighted nodes) are also among the basic components of digital computers. The setups are of course radically different, qualitatively different, in the complexity of their organization - including the existence of temporal patterns of change in node connection, in the brain http://www.sciencemagazinedigital.org/sciencemagazine/24_november_2017?pg=84#pg84 - but is that the key to your objections?
 
Last edited:
hmmm. Wouldn't you need quantum processing to escape that reduction? Nothing that fails to violate Bell's inequality or some similar logical establishment will do, iirc.

I have to plead ignorance on that subject. If you have a ref I'd appreciate it. I don't see why saying something is physical but not reducible to a TM requires quantum physics but that's definitely one of the weak areas in my knowledge.
A bit misleading in omission - the real numbers are not as dense on the line as is possible, there are different kinds of "gaps".

That you'll have to explain? What kind of gaps?

I phrased my remark as I did because people often ask if space is infinitely divisible. Well the rational numbers are infinitely divisible but they're not a continuum. They are full of holes. For example there's a hole where sqrt(2) is supposed to be. There's a hole at every irrational.
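That hole can even be exhibited computationally: Newton's iteration with exact rational arithmetic squeezes in on sqrt(2) without ever landing on it (a sketch using Python's exact fractions):

```python
from fractions import Fraction

def approach_sqrt2(steps):
    """Newton's iteration for sqrt(2) in exact rational arithmetic.
    Every iterate is rational and the squares close in on 2, but no
    iterate ever equals sqrt(2) -- the 'hole' in the rationals."""
    x = Fraction(2)
    for _ in range(steps):
        x = (x + 2 / x) / 2
    return x

x = approach_sqrt2(5)
print(abs(x * x - 2) < Fraction(1, 10 ** 10))  # True: the squares converge to 2
print(x * x == 2)                              # False: never exactly sqrt(2)
```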

On the other hand, the reals are topologically complete. This is often stated as the least upper bound property: Every nonempty subset of reals that's bounded above has a least upper bound. This means there are no holes in the reals.

What gaps did you have in mind?


"Nothing else"? Even single purpose constructed neural networks in physical use are not neural nets and nothing else.

Didn't follow your meaning here. You're asking me if I said that neural nets are not neural nets? Don't follow.

What I said is that artificial neural nets are reducible to TMs, and they are just regular programs implemented on conventional computing hardware. They can't compute anything that a TM can't already compute.

There are physical models of numerically weighted nodes in the brain (primed and suppressed neurons) - and of course such things (physical analogs of numerically weighted nodes) are also among the basic components of digital computers.

Yes of course, there are terrific models. We're talking about actual brains though. A model of a brain is not a brain. We can model a brain as a neural net but we don't know if a brain IS a neural net.

My "and nothing else" remark was intended to head off objections along the lines of saying that the brain implements SOME functionality of neural nets. But neural nets alone have not yet been able to explain everything the brain does. We simply don't know enough about the brain.


The setups are of course radically different, qualitatively different, in the complexity of their organization - including the existence of temporal patterns of change in node connection, in the brain http://www.sciencemagazinedigital.org/sciencemagazine/24_november_2017?pg=84#pg84 - but is that the key to your objections?

I'll have a look at the article.

ps -- I looked at the article. The pdf renders fuzzy for me, did you see that too or is it my eyes?

In any event, my remarks would still stand for a simple reason. Every possible mode of computation is reducible to a TM. That's the Church-Turing thesis. It's a thesis and not a theorem, and it's possible that someday someone could disprove it. But so far nobody has. It's stood for 80 years. If someone's neural net broke Church-Turing it would make the news.

I do confess that I read all the same breathless hype about neural nets that everyone else does, but that I have not seen anyone address this particular objection. Neural nets can't compute anything TMs can't. So if neural nets are responsible for consciousness, then either a TM can be conscious (which I don't believe) or whatever it is neural nets are doing to implement consciousness, it can't be called a computation.

I'd love to get a straight answer on this. For all I know I'm totally wrong but I have not found any kind of discussion or article that puts my concerns in context.
 
Last edited:
I don't see why saying something is physical but not reducible to a TM requires quantum physics ...
Turing machines can emulate Boolean logic, and every process described by classical physics can be described in that logic (by way of propositional calculus, etc.: https://en.wikipedia.org/wiki/Propositional_calculus).
Classical propositional calculus as described above is equivalent to Boolean algebra.
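One way to see that reduction: a single NAND gate is functionally complete, so any finite Boolean description can be built from it and evaluated mechanically (a toy sketch):

```python
def nand(a, b):
    """NAND gate: functionally complete, so every Boolean function --
    and hence any finite description in propositional logic -- can be
    built from it, and a Turing machine can evaluate it step by step."""
    return 0 if (a and b) else 1

def not_(a):    return nand(a, a)
def and_(a, b): return not_(nand(a, b))
def or_(a, b):  return nand(not_(a), not_(b))

print([and_(a, b) for a in (0, 1) for b in (0, 1)])  # [0, 0, 0, 1]
print([or_(a, b) for a in (0, 1) for b in (0, 1)])   # [0, 1, 1, 1]
```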
On the other hand, the reals are topologically complete. This is often stated as the least upper bound property: Every nonempty subset of reals that's bounded above has a least upper bound. This means there are no holes in the reals.
The reference was to stuff like this: http://mathworld.wolfram.com/SurrealNumber.html and complex numbers etc. We are dealing with electronic current in three dimensions - complex numbers, even quaternions, are involved.
Yes of course, there are terrific models. We're talking about actual brains though.
I failed to be clear: the "numerically weighted node" is the abstraction, the physical realization of it is the model - with their connections some neurons, some transistors, are physical models of numerically weighted nodes. Your claim was that the brain contains no such things, my claim is that it does.

Of course it contains much more, not only in hardware but in organizational complexity - but so do the actual machines running neural nets, at least the hardware.
 