I gave you the reference you asked for which clearly explains the function of algorithms.
Yes, there are several "fundamental natural mathematical functions," but that's not what you asked for.
* That's an extremely limited class of functions, the "well known freshman calculus functions." Trust me, there are a lot more functions than that. And of course these particular functions are definitely computable.
You have exhibited SOME functions that are computable. But MOST functions are NOT computable.
You are confusing the functions you've learned, which are called the elementary functions for a good reason, with the class of all functions. There are a lot of functions out there. Most are not computable.
* Regarding Tegmark's 33 variables, those define our universe up to our current level of physical understanding. Given the history of physics from Aristotle to Newton to Einstein to ... I don't know, Witten say ... wouldn't it be fair to conclude that the physics of the future might extend the physics of the present? And maybe Tegmark's 33 variables will turn out not to be sufficient to tune the universe after all.
Claims of physics are claims about what our current consensus theory says. It's historically contingent.
Claims about the ultimate nature of the world are metaphysics.
* But the real bottom line on this post is that to understand algorithms, you need to understand functions in a larger context than the elementary functions of calculus. Consider all the possible functions that take a positive integer as input and output a 0 or a 1. Each such function can be represented as an infinite bitstring — the n-th bit is the function's output on input n — like 0010101010101010101010... going on forever.
If you like, put a "binary point" in front of each bitstring. Then it can be interpreted as the binary expansion of some real number in the unit interval, between 0 and 1.
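To make the correspondence concrete, here's a small Python sketch (the function names and the example `parity` function are mine, purely for illustration) that prints a prefix of a function's bitstring and then reads that prefix as the start of a binary fraction:

```python
# Sketch: view a function f : positive integers -> {0, 1}
# as an infinite bitstring, and a prefix of that bitstring
# as the start of a binary expansion of a real in [0, 1].

def parity(n):
    """One example function: 1 if n is odd, else 0."""
    return n % 2

def bitstring_prefix(f, k):
    """First k bits of the bitstring for f: f(1), f(2), ..., f(k)."""
    return [f(n) for n in range(1, k + 1)]

def as_binary_fraction(bits):
    """Interpret bits b1, b2, b3, ... as 0.b1b2b3... in base 2."""
    return sum(b / 2**i for i, b in enumerate(bits, start=1))

bits = bitstring_prefix(parity, 10)
print(bits)                      # [1, 0, 1, 0, 1, 0, 1, 0, 1, 0]
print(as_binary_fraction(bits))  # approaches 0.101010... in binary = 2/3
```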
Now there are a LOT of these real numbers. If you know about Cantor's diagonal argument, you know that there are uncountably many of these bitstrings. Or functions, or real numbers. All the interpretations are really the same thing.
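If you haven't seen the diagonal argument, here is the idea as a sketch, assuming someone hands us a claimed enumeration of all the functions: build a new function that disagrees with the n-th listed function at input n, so it can't appear anywhere on the list. (The toy enumeration below is mine, just to have something to run.)

```python
# Sketch of Cantor's diagonal argument. Suppose someone claims
# an enumeration F, where F(n) is the n-th function on their list.
# Flipping the diagonal yields a function missing from the list.

def diagonal(F):
    """Return a function that differs from F(n) at input n, for every n."""
    def d(n):
        return 1 - F(n)(n)   # disagree with the n-th function at input n
    return d

# Toy "enumeration": F(n) is the function that outputs 1 exactly at n.
F = lambda n: (lambda m, n=n: 1 if m == n else 0)
d = diagonal(F)
# d disagrees with F(n) at n for every n, so d is not on the list.
print([d(n) for n in range(1, 6)])  # [0, 0, 0, 0, 0]
```

No matter what enumeration you feed in, the diagonal function is left out, so no list can contain them all.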
And only countably many of them are computable. Turing worked all of this out in his 1936 paper in which he defined what it means for something to be a computation. Turing's ideas are still taught and are still valid today. The definition of an algorithm hasn't changed.
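One way to see why the computable ones form only a countable set: every program is a finite string over a finite alphabet, and finite strings can be listed in order. A sketch, using a toy two-symbol alphabet of my choosing:

```python
from itertools import count, product

# Sketch: enumerate all finite strings over a finite alphabet.
# Every program is such a string, so programs (and hence the
# functions they compute) form a countable set.

def all_strings(alphabet="01"):
    """Yield every finite string: by length, then lexicographically."""
    for length in count(1):
        for chars in product(alphabet, repeat=length):
            yield "".join(chars)

gen = all_strings()
print([next(gen) for _ in range(6)])  # ['0', '1', '00', '01', '10', '11']
```

Countably many programs, uncountably many functions: most functions have no program.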
One of the first things Turing did, in fact, was show that he could define a problem that could not possibly be solved by a computation. The example he came up with is called the halting problem.
https://en.wikipedia.org/wiki/Halting_problem
Turing discovered that there are problems whose solution is simply not computable by an algorithm. He showed us the limitations of algorithms. For some reason this point is not appreciated by those who claim that the world's an algorithm, the mind's an algorithm, everything that can be done by a human can be done by an algorithm. It's just not true. Turing showed us on day one of the computer revolution that there are problems that computers cannot solve, not with all the processing power and memory in the world.
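The proof is short enough to sketch in code. Suppose, hypothetically, that someone handed us a function `halts(program, argument)` that always answers correctly; the following self-referential program forces a contradiction, so no such function can exist:

```python
# Sketch of Turing's argument. Suppose a correct halting oracle
# existed (it can't; this is the hypothetical we refute).

def halts(program, argument):
    """Hypothetical: returns True iff program(argument) would halt."""
    raise NotImplementedError("No such function can exist.")

def trouble(program):
    """Do the opposite of what halts() predicts about program(program)."""
    if halts(program, program):
        while True:          # predicted to halt, so loop forever
            pass
    else:
        return               # predicted to loop, so halt immediately

# Now ask: does trouble(trouble) halt?
# If halts(trouble, trouble) is True, trouble(trouble) loops forever: wrong.
# If it's False, trouble(trouble) halts at once: wrong again.
# Either way halts() answers incorrectly, so it cannot exist.
```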
Here is a pdf of Turing's paper if anyone is interested.
https://www.cs.virginia.edu/~robins/Turing_Paper_1936.pdf