# Why science must use math -- feedback welcome -- work in progress

Discussion in 'Science & Society' started by rpenner, Feb 15, 2012.

1. ### leopold (Valued Senior Member)

Messages:
17,455
math itself cannot be wrong.
1+1=2.
x is always equal to x.
in my opinion it's the ASSUMPTIONS that lead you to faulty reasoning, not the math.

if you observe a phenomenon and build a math model to describe it but the model gives faulty predictions, then the ASSUMPTIONS you have made must be questioned. giving the model a once-over helps.

maybe you are talking about revisions.
sure, math models can be revised, but usually it's due to new discoveries or, you guessed it, faulty assumptions.

- layman's point of view.

3. ### HectorDecimal (Registered Senior Member)

Messages:
438
You may want to look up Einsteinian math. 1 + 1 + 1... +n = 1

Math is nice because it doesn't always have to be a hundred deviations in a linear algebraic equation or expression. A proposition can be composed of a simple Aristotelian syllogism devoid of actual numbers. The machine you are using involves millions of such logical expressions tried and retried each second. On the lowest level they reduce to binary machine code. Humans are most comfortable with base 10. Some mathematicians and those of us in the sciences often need to refine things like pi, for example, to a more exact quantity.

If we look at basic chemistry, from the simplest technical level, we add together say 2 + 1 to resolve from 3 + 9. We end up with two or three isomers yielding only 2 +1 of the desired isomer.

5. ### AlphaNumeric (Fully ionized, Registered Senior Member)

Messages:
6,697
Citation needed. What precisely is 'Einsteinian math', and how does it say 1+1+...+n = 1? That's nonsense.

A more exact quantity? Pi is pi is pi. Decimal or binary representations will be inexact but pi is pi is pi.
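(Concretely, in Python, and assuming IEEE 754 doubles as CPython uses: the stored value of `math.pi` is an exact rational number, so it can only approximate the irrational pi. A small sketch, mine rather than the thread's:)

```python
import math
from fractions import Fraction

# math.pi is the double-precision float nearest to pi. Converting it to
# an exact fraction shows it is rational, hence only an approximation.
approx = Fraction(math.pi)
print(approx)  # 884279719003555/281474976710656, exactly the stored double
```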

I get the distinct feeling your actual knowledge of mathematics is a long way short of where you think it is.

Messages:
438
Feel away...

8. ### HectorDecimal (Registered Senior Member)

Messages:
438
Actually the example usually given is 1 raindrop + 1 raindrop = 1 raindrop. I don't have time to dig up something most mathematically proficient people have already seen.

Have a nice day...

9. ### AlphaNumeric (Fully ionized, Registered Senior Member)

Messages:
6,697
That's not 'Einsteinian math'. I asked you to define what that even is, and you haven't. Secondly, your 'equation' didn't refer to anything like that. Thirdly, I've got a maths degree and I'm a professional mathematician, which is why I asked you to justify your statement: it didn't align with anything I'm familiar with.

You failed.

10. ### wellwisher (Banned)

Messages:
5,160
Theory and conceptual design are like the horse, and the math is like the cart. The theory is supposed to lead the math. The math, in turn, like a cart, can be used to store and haul all types of goodies. There is a limit based on how strong the horse or theory is.

But what would happen if the cart led the horse, or math was allowed to lead theory? I suppose this depends on the situation. It can work on a straight road going downhill. The theory or horse can follow, or appear to stem from, the math. But this arrangement would break down going uphill, or if the road had turns, unless the horse pushes the cart so that the cart appears to be leading. But pushing the cart involves less control and results in more randomness in the motion.

One common aspect of science that has the cart before the horse is the theory of randomness and probabilities. Before investigating a phenomenon, one starts with the assumptions of statistical math. The specific theory needs to follow from the math, which comes first.

There are cases, such as the statistical mechanics of gases, which are like rolling downhill in a straight line. This works fine. But there are also cases where we just assume random math and then formulate theory after the math, with less rational control, since a horse cannot push a cart accurately.

For example, the evolution of life from nothing assumes random events and therefore has the math cart first. Any accepted theory (horse) has to conform to this math cart leading. If you assume randomness and then calculate the odds, the theory can never be proven, yet it is still accepted in the world where carts lead horses. If you try to put the horse first with a logical theory, this looks weird in the world of carts leading horses.

There are two areas of science, pure and applied. Applied science, as in a factory, is more final-results oriented. In applied science, if you have a process and cannot explain how it works, this may not be important as long as the process works and quality is good. You can even treat it like a black box. There is room for a math cart to lead the horse if QC is better. A better pure-science theory but a lower-quality product is less acceptable.

Since business often funds pure science, there is a results mentality. Business has no problem letting the cart lead the horse. This mentality can filter into pure science, causing the approach to become linear. Each turn in the phenomena will become a range of separate linear paths; specialty cart-before-horse theory, with each sort of moving downhill toward results = more money.

11. ### HectorDecimal (Registered Senior Member)

Messages:
438
You're right. I failed to put in the right keywords for search, so I will have to dig through my machine. I save a lot of public domain articles and have found the one example before; I'll find it again, but it's on the far back burner. The keyword of your post is outlined. I just had a conversation about this, only involving quantum packets, with a colleague in real life. Both of us pretty much agreed it's introduced in Calc II, but I thought it might have been mentioned in Calc I too. When I have it, or a publication that it can be found in, I'll visit here again.

12. ### HectorDecimal (Registered Senior Member)

Messages:
438
BTW... It may show up in wave packet theory somewhere. The example I'm familiar with, the one always used, is the raindrop splattering. Same as two raindrops colliding. How many raindrops result? What are their volumes on average? What is the central volume?

It's an analogy used to describe two particle collisions.

You really don't have that in your repertoire?

13. ### gmilam (Valued Senior Member)

Messages:
2,985
And we all know, one is the loneliest number.

Verdict's in. Troll.

14. ### rpenner (Fully Wired, Registered Senior Member)

Messages:
4,833
Much was made of the as-yet undeveloped part of my outline at point IV.

Actual experience shows that, given an axiom system, there are a great number of additional axioms one can add that render the system self-contradictory. In this sense, software is fragile, since software is inherently prone to bugs -- unintended consequences of instructions given to a machine. Large parts of modern software engineering practice are devoted to making software systems robust to the point that they fail piecewise and not catastrophically. Mathematical models, however, treat all axioms and givens in the same space of logical thought, and if you can prove a contradiction, the entire system is unreliable -- with respect to itself -- not just the physical system it is modeling.

The prior (also undeveloped) section of the outline was "Is a fragile model of more or less use to science than a flexible and ambiguous model?" So if you are flexible in your terms, or otherwise ambiguous about what is or isn't predicted by your model or what is or isn't evidence against it, then your model is robust. "Rainy days are depressing" is ambiguous and flexible; "Wholly overcast days, where no disc of the sun can be seen and natural illumination never rises above 15% of tropical cloudless noon standards, are associated with 80% more of high-solarization populations reporting 3 or more symptoms of clinical depression" is less ambiguous and already mathematical in nature. Some data could potentially be gathered that clearly refutes the second statement while leaving the first relatively unmolested.

See above.
There is almost a developed thought here. I think it's going to turn out to be wrong-headed, but that's because I'm basing my opinion on what was previously observed.

The universe is not required by law to make itself easily known.

I believe the Internet has been a great enabler of the crank population to link up and organize. Instead of working off a mimeograph machine in San Diego like creationists did in the 1980s, Ken Ham has a multi-million-dollar sham museum with no actual research being conducted. Anyone can build an ebook or register an ISBN, and these are being touted as demonstrations of their worthiness rather than their obvious narcissism.

Depends what day it is.

Math foundations is about what axioms (assumptions) day-to-day math is based on. Historically, it was realized with great horror that naive set theory -- the basis on which 1+1 = 2 and x = x were built -- was unreliable. Other set theories, including ZFC, were more successful. But the list of definitions and axioms accepted varies from mathematician to mathematician and from paper to paper. Proving that they aren't ever logically inconsistent can be onerous.
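(The classic failure of naive set theory is Russell's paradox: unrestricted comprehension lets one form the set of all sets that are not members of themselves, and either answer to whether that set contains itself yields a contradiction:

$R = \{ x \mid x \notin x \} \; \Rightarrow \; \left( R \in R \Leftrightarrow R \notin R \right)$

ZFC avoids this by restricting comprehension to subsets of already-constructed sets.)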

Math models of physics build on top of commonly accepted sets of axioms by introducing more axioms. And it's hard to do this without having unphysical infinities show up or other pathological behavior. Physicists can duck around some of the constraints of logic by limiting their models to a domain of applicability. But ultimately we don't have a single set of axioms which holds together self-consistently and usefully describes all known phenomena in the universe.

Well, that's a separate way math models are fragile. If the model says x=3 and testing shows x = 3, that's good. If later testing says x=3.1±0.2, that's still good. If later testing says x=3.1±0.02, that's much less good. Finally, if testing shows x=5 in some cases, then the model is no good -- or perhaps just good in a more limited domain of applicability than originally proposed.
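(The x = 3 example can be put in rough numerical terms with a toy consistency check. The function and the two-sigma criterion are my own illustration, not anything from the thread:)

```python
def consistent(predicted: float, measured: float, sigma: float, k: float = 2.0) -> bool:
    """Crude check: is the prediction within k standard deviations
    of the measured value?"""
    return abs(predicted - measured) <= k * sigma

print(consistent(3.0, 3.1, 0.2))   # True: 3 is within 2 sigma of 3.1 +/- 0.2
print(consistent(3.0, 3.1, 0.02))  # False: 3 is 5 sigma from 3.1 +/- 0.02
```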

In physics, the Standard Model and General Relativity are the two most successful models we have. We don't know their limits, but we have good reason to believe they do have limits, because all attempts to combine their low-energy behaviors into a single model that is reliable at high energies have failed due to the first sense of mathematical models being fragile. But even if we find out what those limits are for the individual models, in the region of current physics experience these models are very good effective models.

15. ### rpenner (Fully Wired, Registered Senior Member)

Messages:
4,833
I am so happy Latin is not the language of science today.

I enjoyed reading them again, thank you.

16. ### AlphaNumeric (Fully ionized, Registered Senior Member)

Messages:
6,697
You haven't yet even defined what 'Einsteinian mathematics' is. I'm well versed in the material covered in introductory calculus courses, certainly more than you. Nothing in them says 1+1+...+n = 1. I'm actually working on an area of quantum mechanics right now, including things to do with coherent wave packets. None of it mentions what you've said.

May? So now you're just clutching at straws.

Firstly, that isn't Einsteinian mathematics, whatever that is. The only work Einstein did pertaining to fluids is the alteration to viscosity due to a static suspension. Secondly, the fact that raindrops (or any other fluid) combine and split doesn't mean 1+1+...+n = 1. The raindrops still split according to things like volume conservation (assuming constant density, as is pretty much the case for liquids). In quantum field theory you can have 2 particles collide and arbitrarily many produced, but that doesn't mean 2 = n; that isn't what the mathematics says at all.

You're again showing two things. Firstly that you're grossly ignorant of mathematics and physics and secondly that you're a terrible troll.

17. ### James R (Just this guy, you know?, Staff Member)

Messages:
31,445
Moderator note: HectorDecimal has been officially warned for trolling.

18. ### Emil (Valued Senior Member)

Messages:
2,801
I have a very simple math question.

There are three objects.
Object 1 is stationary. W is a vector.
Object 2 (relative to the object 1) has a velocity, W(2-1).
Object 3 (relative to the object 2) has a velocity, W(3-2).

What is the velocity of the object 3 (relative to object 1), W(3-1) ?

I know that is done as follows:

- where W(2-1)=a ; W(3-2)=b ; W(3-1)=a+b

19. ### rpenner (Fully Wired, Registered Senior Member)

Messages:
4,833
That's not simple, since we have different models for relative velocity.
In the Newtonian case, which you may not know you are implying (since you say object 1 is stationary, and the Newtonian model allows that in an objective sense), the magnitude of the velocity is $\sqrt{ \left| a^2 + b^2 + 2 ab \cos \theta \right| }$, where theta is the angle between the direction of a and the direction of b. Essentially, this is the content of your triangle, formed by placing the two vectors head to tail and preserving the direction of each.

You might want to read up on the Law of Cosines that applies to triangles (but notice how they use interior angles which differ from the angle between vector directions).
http://mathworld.wolfram.com/LawofCosines.html
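As a numerical sketch of the Newtonian formula above (the function name and the units-of-c convention are mine, not from the thread):

```python
import math

def newtonian_speed(a: float, b: float, theta_deg: float) -> float:
    """Magnitude of the Newtonian sum of two velocities of magnitudes a
    and b, where theta_deg is the angle between the two velocity
    *directions* (not the triangle's interior angle)."""
    theta = math.radians(theta_deg)
    return math.sqrt(abs(a * a + b * b + 2 * a * b * math.cos(theta)))

# Emil's numbers, in units of c: 0.7c and 0.8c at right angles.
print(newtonian_speed(0.7, 0.8, 90.0))  # about 1.063, i.e. sqrt(1.13) c
```

Note the sign convention: with the angle between vector directions the cross term is $+2ab\cos\theta$; the Law of Cosines' $-2ab\cos\gamma$ form uses the triangle's interior angle $\gamma = 180^{\circ} - \theta$, which is exactly the caveat flagged above.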

20. ### Emil (Valued Senior Member)

Messages:
2,801
Are you sure? Never mind.

For: a=0.7c ; b=0.8c ; theta = 90 degrees. (c is the speed of light in vacuum)
What is the magnitude of the vector a+b?

21. ### wlminex (Banned)

Messages:
1,587
An interesting (and humorous) perspective:

"Today's scientists have substituted mathematics for experiments and they wander off through equation after equation and eventually build a structure which has no relation to reality." Nikola Tesla

22. ### rpenner (Fully Wired, Registered Senior Member)

Messages:
4,833
Yes, I'm sure that you said object 1 was stationary. Yes, if this was objectively true then it implies a preferred rest frame, and I'm sure the Newtonian model of space and time had just such a preferred frame. Given that we are talking about the Newtonian model, then I am sure that my expression for the magnitude of the sum of vectors in a Euclidean space of 2 or more dimensions is correct.

Precisely, what of that did you wish to call into question?

Given that you have not introduced any contradictory postulates, the relative velocity of object 3 relative to object 1 has a magnitude of $\sqrt{a^2 + b^2 + 2ab \, \cos \, 90^{\circ}} = \sqrt{a^2 + b^2} = \frac{\sqrt{113}}{10} \, c \; \approx \; 1.063 \, c$.

Because of your curious contextomy that removes all discussion of why I used that one particular model, and your curious dismissive attitude towards the answer for the question as you posed it, I suspect you of trolling and the adoption of belligerent ignorance, therefore I will state this explicitly: The above result is model-dependent. A different model would give different results.

At the beginning of the 20th century, such a new model was proposed: that the geometry of space-time wasn't Euclidean (the only geometry known to Newton) but Lorentzian. (Of course, Lorentzian space-time does not admit that any object can be determined to be actually "stationary", but since you do not actually use "stationary" in any physical sense, it is superfluous unless you were trying to be deliberately obtuse, and in this case we ignore it.)
Representing $\Lambda_a = \begin{pmatrix} \cosh \, \tanh^{\tiny -1} \, \frac{a}{c} & \sinh \, \tanh^{\tiny -1} \, \frac{a}{c} & 0 \\ \sinh \, \tanh^{\tiny -1} \, \frac{a}{c} & \cosh \, \tanh^{\tiny -1} \, \frac{a}{c} & 0 \\ 0 & 0 & 1 \end{pmatrix}$ and $\Lambda_b = \begin{pmatrix} \cosh \, \tanh^{\tiny -1} \, \frac{b}{c} & \cos \theta \; \sinh \, \tanh^{\tiny -1} \, \frac{b}{c} & - \sin \theta \; \sinh \, \tanh^{\tiny -1} \, \frac{b}{c} \\ \sinh \, \tanh^{\tiny -1} \, \frac{b}{c} & \cos \theta \; \cosh \, \tanh^{\tiny -1} \, \frac{b}{c} & - \sin \theta \; \cosh \, \tanh^{\tiny -1} \, \frac{b}{c} \\ 0 & \sin \theta & \cos \theta \end{pmatrix}$ then
$\begin{pmatrix} c \Delta t_3 \\ \Delta x_3 \\ \Delta y_3 \end{pmatrix} = \Lambda_b \begin{pmatrix} c \Delta t_2 \\ \Delta x_2 \\ \Delta y_2 \end{pmatrix} = \Lambda_b \Lambda_a \begin{pmatrix} c \Delta t_1 \\ \Delta x_1 \\ \Delta y_1 \end{pmatrix}$
and so the magnitude of the relative velocity between 1 and 3 is:
$c \; \tanh \, \cosh^{\tiny -1} \, \left( \cosh \, \tanh^{\tiny -1} \, \frac{a}{c} \; \cosh \, \tanh^{\tiny -1} \, \frac{b}{c} \; + \; \cos \theta \; \sinh \, \tanh^{\tiny -1} \, \frac{a}{c} \; \sinh \, \tanh^{\tiny -1} \, \frac{b}{c} \right)$ and the result for this particular example is: $\frac{\sqrt{8164}}{100} \, c \; \approx \; 0.9035 \, c$

For $\theta = 0$ (and $0 \leq a \lt c , \; 0 \leq b \lt c$) this reduces to $\frac{a + b}{1 + \frac{ab}{c^2}}$. For $\theta = 90^{\circ}$ this reduces to $\sqrt{a^2 + b^2 - \frac{a^2 b^2}{c^2}}$ which can be used to get the above answer.
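The relativistic composition above can be checked numerically. A minimal Python sketch, assuming speeds are given as fractions of c (the function name is mine):

```python
import math

def sr_speed(a: float, b: float, theta_deg: float) -> float:
    """Composed speed (in units of c) of two boosts of speeds a and b at
    angle theta, via rapidity addition:
    cosh(phi_tot) = cosh(phi_a)cosh(phi_b) + cos(theta)sinh(phi_a)sinh(phi_b)."""
    theta = math.radians(theta_deg)
    phi_a = math.atanh(a)  # rapidity of the first boost
    phi_b = math.atanh(b)  # rapidity of the second boost
    phi_tot = math.acosh(math.cosh(phi_a) * math.cosh(phi_b)
                         + math.cos(theta) * math.sinh(phi_a) * math.sinh(phi_b))
    return math.tanh(phi_tot)

print(sr_speed(0.7, 0.8, 90.0))  # about 0.9035, i.e. sqrt(0.8164) c
print(sr_speed(0.7, 0.8, 0.0))   # about 0.9615, i.e. (a+b)/(1+ab/c^2)
```

Unlike the Newtonian sum, the result stays below c for any sub-light inputs, which is the "two models, two answers" point.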

Two models, two answers to a physical situation. If experiment favors one model over the other, the fragility of mathematics requires us to discard the disfavored model as unphysical.

23. ### Emil (Valued Senior Member)

Messages:
2,801
The correct formula is: $\sqrt{ \left| a^2 + b^2 - 2 ab \cos \theta \right| }$
You give a result greater than the speed of light. Got something to add? Is this possible?
I know the forum rules prohibit the accusation of trolling.
No, I only gave a simple example.
I gave an example as simple as possible of calculation, hoping that you could do this calculation.
It has nothing to do with my example.
You simply are not able to do a vector calculation taking SR into account. Are you?
Please give the general formula from which you deduced those cases. (If you know physics, then you know that substituting values is the last stage.)