# How to prove it? (Apostol)

Discussion in 'Physics & Math' started by alyosha, Jun 14, 2006.

1. ### §outh§tar (Registered Senior Member)

I'll have to check this, alyosha. I've been lagging a little, trying to master the proof of the integral of cosine.

Off the top of my head, I think what he actually might be doing is using the concept of a piecewise monotonic function to construct the integral of a general polynomial. This is backed by the additivity property for intervals: the integral from a to b plus the integral from b to c equals the integral from a to c.

Sooner or later I'll be eyeball deep in multivariable calculus and linear algebra. Think of the fun! Without even being grounded in the theoretical aspects of single variable calculus...

3. ### shmoe (Registered Senior Member)

Lie algebras and Galois theory are a little beyond a first course in algebra. Some linear algebra will be needed when you get into field extensions; the basic ideas should be more or less covered in Apostol. The main jump will be going from vector spaces over the reals to arbitrary fields, but this isn't much of a jump. Apostol's looks like a typical first exposure to linear algebra; you'll want to get something more thorough after, like the old standard Hoffman and Kunze.

Nothing at all in calculus requires any pictures. They can help you understand what's going on and help lead to proofs and such, but all the theory can be written in a meaningless symbol-pushing kind of way where you can check each step carefully.

He did 'cheat' a little in that he left the proof of the 'linearity property' of integrals until the end of the section: a linear combination of integrable functions is integrable. A polynomial is just a linear combination of monotonic functions (provided you split your interval at 0 if you have terms with even powers). If you believe that linearity theorem (and you should read the proof, of course), the jump to polynomials being integrable is a tiny hop.
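To make this concrete, here is a small Python sketch (my own illustration, not from Apostol; the helper name `lower_upper_sums` is mine): integrate p(x) = 3x^2 + 2x on [0, 1] term by term, trapping each monotone monomial between lower and upper step-function sums.

```python
# Illustration (not from Apostol): a monotone increasing f on [a, b] is
# trapped between step functions built from the subinterval endpoints.
def lower_upper_sums(f, a, b, n):
    """For f increasing on [a, b]: left endpoints give a lower sum,
    right endpoints give an upper sum."""
    h = (b - a) / n
    lower = sum(f(a + i * h) for i in range(n)) * h
    upper = sum(f(a + (i + 1) * h) for i in range(n)) * h
    return lower, upper

# Integrate p(x) = 3x^2 + 2x on [0, 1] term by term (the linearity
# property); each monomial is monotone on [0, 1].  Exact value: 1 + 1 = 2.
lo2, up2 = lower_upper_sums(lambda x: 3 * x ** 2, 0, 1, 10000)
lo1, up1 = lower_upper_sums(lambda x: 2 * x, 0, 1, 10000)
print(lo2 + lo1, up2 + up1)  # lower and upper sums both squeeze toward 2
```

For a term with an even power on an interval containing 0, you would first split the interval at 0 so each piece is monotone, then add the two integrals back together by the additivity property.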

5. ### §outh§tar (Registered Senior Member)

Hello shmoe,

Can you recommend any good rigorous textbooks for linear algebra and multivariable mathematics? Right now my course will be using Multivariable Mathematics by Shifrin, but I was wondering if there were any good supplements to this. I'll check up on Hoffman and Kunze, for sure. Another grad student recommended it to me.

7. ### shmoe (Registered Senior Member)

I'm not familiar with Shifrin, but the description looks fine.

Hoffman and Kunze contains just about everything you'll ever need to know about linear algebra. Friedberg, Insel, Spence is less advanced and would be better suited for an introduction, and possibly more useful for you right now in that it has more computational examples that will come in handy in your calc class.

A fantastic multivariable calculus book is Spivak's 'Calculus on Manifolds'. It was my constant reference during a differential topology course and contained everything that I should have learned in multivariable calculus but didn't; it may be hellish as a first course though, I don't know. Rudin's "Principles of Mathematical Analysis" also contains multivariable material and covers single variable calculus in more depth than Apostol. What about volume 2 of Apostol? I don't have any experience with it, but if you're liking the style of volume 1, carrying on with Apostol might be a good idea.

8. ### alyosha (Registered Senior Member)

I'm looking at his exposition on the integrals of sine and cosine, and it's interesting that his results apply to virtually any functions with the four fundamental properties he starts with, so his theory appears to be more sweeping than it is at first glance. The way he presents it, though (by using terms like sine and cosine right away), you might be fooled into thinking that he's "only" talking about the trigonometric functions. Could he have just replaced pi with some arbitrary number in his fundamental properties and made an even more general theory?

9. ### §outh§tar (Registered Senior Member)

alyosha, I haven't read far enough, but I don't think that particular section contains principles which are very general. In other words, there are not many non-sinusoidal functions with the periodic properties which accompany those four. Or maybe I'm not getting what you're saying. Can you give any examples of other non-sinusoidal functions?

--

shmoe

I am taking an honors class, so computational examples won't be much help. I don't know though, maybe in building intuition? I did many of the problem sets for the linear algebra part of Apostol's vol. 1, all the way into linear spaces, and it didn't seem as if they built my intuition in the slightest. I was having difficulty feeling as comfortable/fluent with some of the proofs in that section. In other words, I don't know that I have the real understanding of the concepts necessary to, for example, find another way of proving that every set of k+1 elements in the linear span of an independent set of k elements is dependent. That's pretty tough to internalize, and Apostol isn't as big on building intuition here as he is in the calculus portion of the book.
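For what it's worth, that theorem can at least be checked in examples. A throwaway Python sketch (the `rank` helper is mine, not Apostol's): any three vectors lying in the span of two independent vectors row-reduce to rank at most 2, i.e. they are dependent.

```python
# Illustration: k+1 vectors in the span of k vectors are dependent.
# With k = 2, any 3 vectors in span{(1,0,0), (0,1,0)} have rank <= 2.
def rank(rows):
    """Gaussian elimination on a list of row vectors; returns the number
    of pivots found, i.e. the rank."""
    rows = [list(map(float, r)) for r in rows]
    m, n = len(rows), len(rows[0])
    r = 0
    for c in range(n):
        pivot = next((i for i in range(r, m) if abs(rows[i][c]) > 1e-12), None)
        if pivot is None:
            continue
        rows[r], rows[pivot] = rows[pivot], rows[r]
        for i in range(m):
            if i != r and abs(rows[i][c]) > 1e-12:
                factor = rows[i][c] / rows[r][c]
                rows[i] = [a - factor * b for a, b in zip(rows[i], rows[r])]
        r += 1
    return r

three = [[2, 5, 0], [-1, 3, 0], [7, 7, 0]]   # each lies in span{e1, e2}
print(rank(three))  # at most 2, so the three vectors are dependent
```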

10. ### alyosha (Registered Senior Member)

Hehe, I don't mean to say I even know of any other functions with properties similar to those that Apostol gave. I guess my point, though, was that perhaps his theory doesn't HAVE to refer to only the trigonometric functions.

12. ### shmoe (Registered Senior Member)

I don't think any functions can satisfy these properties if you try to change the 'pi'. If f and g satisfy these 4 conditions with the 'pi' changed to something else, it looks like everything in 2.5 and 2.6 follows through without trouble, no? The important bits are theorem 2.5 and the initial values f(0)=0 and g(0)=1. Theorem 2.5 will lead to f'=g and f''=-f from the fundamental theorem of calculus. This, plus the initial conditions f(0)=0 and f'(0)=g(0)=1, is enough to determine that f is our usual sine function. Likewise for g equaling cosine.

The part you'd need to check is whether all those properties still hold if you changed the pi. I just skimmed it, but I don't see anywhere that the pi is important in sections 2.5 and 2.6; it is explicitly used only in the geometric construction of these functions.
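You can see this numerically. A quick Python sketch (my code, just classical RK4 applied to the system f' = g, g' = -f with f(0)=0, g(0)=1): the solution tracks math.sin and math.cos.

```python
import math

# Solve f'' = -f, f(0) = 0, f'(0) = 1 numerically (as the first-order
# system f' = g, g' = -f) and compare with math.sin and math.cos.
def solve(t_end, n):
    h = t_end / n
    f, g = 0.0, 1.0          # f(0) = 0, f'(0) = g(0) = 1
    for _ in range(n):
        # one classical RK4 step
        k1f, k1g = g, -f
        k2f, k2g = g + h / 2 * k1g, -(f + h / 2 * k1f)
        k3f, k3g = g + h / 2 * k2g, -(f + h / 2 * k2f)
        k4f, k4g = g + h * k3g, -(f + h * k3f)
        f += h / 6 * (k1f + 2 * k2f + 2 * k3f + k4f)
        g += h / 6 * (k1g + 2 * k2g + 2 * k3g + k4g)
    return f, g

f1, g1 = solve(1.0, 1000)
print(f1 - math.sin(1.0), g1 - math.cos(1.0))  # both differences are tiny
```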

13. ### alyosha (Registered Senior Member)

Shmoe, I was having a little trouble understanding the conclusion of your first paragraph; are you trying to say that any functions satisfying those properties must be sine and cosine? Perhaps that is the case, and if it is, I suppose the idea that his theory may be more general is an empty one. The very idea of the trigonometric "functions" always makes me a little uncomfortable in the calculus because of how little reference there is to just what they are. However, Apostol's theory cheered me a little because I realized that the important results in the calculus do not at all have to depend on definitions involving the ratios of the sides of triangles and other things that I find rather nauseating. It is much more enlightening to see the results laid out in the abstract for any functions with the four properties he gave, rather than believing there is some mysterious reason why this or that particular curve derived from the ratios of sides of triangles should have this particular integral.

Last edited: Sep 24, 2006
14. ### shmoe (Registered Senior Member)

Yes. If you have a function f that satisfies f''=-f, f(0)=0, f'(0)=1, then f(x)=sin(x). You can use this differential equation as the definition of sine, once you prove it has a solution, of course.

So if those four properties lead to this differential equation, then you would have been talking about the sine and cosine functions all along, and the constant would have to be pi.

It looks like those properties do give this differential equation even if you changed the 'pi' to something else (you should check this more carefully though!), so I'm led to believe there are no functions satisfying them with a constant other than pi.

Notice his definition still involved the geometric thing 'pi'. This can be done away with entirely. The differential equation above would work, and so will the analytic version he promises in a later chapter. He'll define sine and cosine in terms of power series; 'pi' is then defined to be the smallest positive root of sin(x), after showing it has positive roots, of course.
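Here is roughly what that analytic route looks like in Python (my sketch; the series cutoff and the bracketing interval [3, 3.5] are choices made by hand, not part of the definition): define sin by its Taylor series, then recover pi as the smallest positive root by bisection.

```python
import math

# Define sin by its power series, then find its smallest positive root.
def sine_series(x, terms=30):
    """Partial sum of x - x^3/3! + x^5/5! - ... with the given number
    of terms (plenty for |x| < 4 in double precision)."""
    total, term = 0.0, x
    for k in range(terms):
        total += term
        term *= -x * x / ((2 * k + 2) * (2 * k + 3))
    return total

# The series is positive at 3 and negative at 3.5, so bisect for the root.
lo, hi = 3.0, 3.5
for _ in range(60):
    mid = (lo + hi) / 2
    if sine_series(mid) > 0:
        lo = mid
    else:
        hi = mid
print(lo)  # prints something very close to pi
```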

In essentially the same way, you can define e^x as a power series, then get sine and cosine out of the real and imaginary parts of e^(i*y). This is nice, and points out the connections between exponentials and trig functions.
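A one-line check of that connection in Python, using the standard cmath module:

```python
import cmath
import math

# Euler's formula: e^{iy} = cos(y) + i*sin(y), checked at an arbitrary y.
y = 0.7
z = cmath.exp(1j * y)
print(z.real - math.cos(y), z.imag - math.sin(y))  # both differences are ~0
```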

15. ### alyosha (Registered Senior Member)

Hmm, I would have thought that sine and cosine would imply those properties, but not necessarily the other way around. I haven't studied differential equations in depth; do they depend on the power series definitions of sine and cosine?

16. ### shmoe (Registered Senior Member)

Given any other definition of sine and cosine, you can prove they satisfy those differential equations.

However, it is possible to build up all your theory and prove that those d.e.'s have unique solutions without ever mentioning sine or cosine (not even their power series). You take sine and cosine to be these solutions, and from the d.e.'s you can prove all the other properties of sine and cos you are used to, the usual power series, relations to angles and so on.

It happens often in maths that you find many different ways to characterize the same object. In hindsight you can pick whichever of them you like as your definition, then prove the rest are equivalent.

17. ### alyosha (Registered Senior Member)

Shmoe, what do you plan to do with your math degree?

18. ### alyosha (Registered Senior Member)

I have an issue with his proofs of the limit theorems. For parts (i) and (ii), why does it "suffice" to prove the case where the limits are equal to zero? Is this because you could just consider the limit of the function h(x) = f(x) - A, which should approach zero as f(x) approaches A? Would it have been so much trouble just to prove the general case with the inequalities |f(x) - A| < e/2, etc.?

19. ### Absane (Valued Senior Member)

I'm looking forward to his answer.. because I do not have one for myself.

20. ### alyosha (Registered Senior Member)

If I went into mathematics I would be interested in doing research on big open ended problems, but I'm not sure who exactly would pay me to do this.

21. ### shmoe (Registered Senior Member)

You're right, it wasn't necessary. This kind of normalization is common in proofs: reduce the general problem to a simpler case, then prove the simpler case, which is easier to work with. For (i) and (ii) the simpler case of the limits equaling 0 wasn't much simpler than the general case. It makes more of an impact for (iii), though. It's never really necessary, but it can make a big difference if things are complicated enough, and seeing this in action in a fairly elementary situation like these limit proofs is good practice.
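For the record, the reduction alyosha described can be written out in a couple of lines (for part (i), the limit of a sum):

```latex
\text{Let } h(x) = f(x) - A \text{ and } k(x) = g(x) - B,
\text{ so that } \lim_{x \to p} h(x) = \lim_{x \to p} k(x) = 0.
\text{Then } \bigl(f(x) + g(x)\bigr) - (A + B) = h(x) + k(x) \to 0,
```

so the general sum rule follows from the special case where both limits equal 0.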

Probably frame it and put it in the closet with the others.

Seriously, I'll be aiming for a post-doc, then a university position*. There are plenty of research positions available for mathematicians in industry, cryptography being a really hopping area, as well as anything relating to financial things, but I do enjoy teaching as well.

*Going straight into a tenure-track position at a university is of course possible before a post-doc, but unlikely. I am attracted to smaller-town living, though, and won't cry if I don't end up at a big research university, so I'm more open than some to where I end up.

22. ### §outh§tar (Registered Senior Member)

shmoe how can I tackle #18 on page 94?

23. ### alyosha (Registered Senior Member)

In his proof of the "small span theorem" on page 152, I'm having a hard time seeing how the continuity of f at alpha ties into the span being less than epsilon. Because of the continuity at alpha,

|f(x) - f(alpha)| < epsilon

But I'm not seeing why this definitely implies that the difference between the maximum and minimum of f on the respective interval must also be less than epsilon.

edit: I think I've found why after writing it out.

From the above inequality we have

f(alpha) - epsilon < f(x) < f(alpha) + epsilon

so on that interval the maximum of f is at most f(alpha) + epsilon and the minimum is at least f(alpha) - epsilon, so the difference is at most

[f(alpha) + epsilon] - [f(alpha) - epsilon] = 2*epsilon.

That bounds the span by 2*epsilon; applying continuity with epsilon/2 in place of epsilon makes the span less than epsilon, as the theorem needs.