How to prove it? (Apostol)

Discussion in 'Physics & Math' started by alyosha, Jun 14, 2006.

  1. §outh§tar is feeling caustic Registered Senior Member

    Messages:
    4,832
    either shmoe or alyosha:

    Could you explain to me how Theorem I.35 on page 29 works? I'm trying to make intuitive sense of the motivation for the proof, so I can see why the different values of c were chosen, but it's pretty difficult.
     
  2. shmoe Registered User Registered Senior Member

    Messages:
    524
    I can't think of what he has in mind here. Using some trig it's not difficult, but you don't have that yet. He is expecting you to be familiar with trig functions though, so maybe that is his intention. I'm not sure.



    If b^2>a then you should have b>sqrt(a), so it should be possible to find an r>0 where b-r>sqrt(a). We don't have a "sqrt(a)" yet, so instead we must find a positive r where (b-r)^2>a, which would mean b is not the sup of that set S (since b-r would also be an upper bound). r^2>0 if r>0, so it would be enough to find an r where b^2-2*b*r>=a. When we have equality here, we'd get r=(b^2-a)/(2*b), which is greater than zero by our assumption b^2>a, and is the r they picked.


    If a>b^2, we do a similar thing, only what he decided to call "c" in this part is analogous to what I called "r" above. In this case we're after a>(b+c)^2, which will show b is not an upper bound for S at all, since b+c is in S and strictly larger than b.
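
    Not in the book, but here is a quick numeric sanity check of the first case (the r above; the function name and test values are mine):

    Code:
    # Sanity check (not a proof): if b > 0 and b^2 > a, then with
    # r = (b^2 - a)/(2b) we expect r > 0 and (b - r)^2 > a, so b - r
    # is also an upper bound of S, contradicting b = sup S.
    def check_r(a, b):
        assert b > 0 and b * b > a
        r = (b * b - a) / (2 * b)
        return r > 0 and (b - r) ** 2 > a

    print(all(check_r(a, b) for a, b in [(2, 1.5), (2, 10), (0.5, 0.8)]))  # True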
     
  3. §outh§tar is feeling caustic Registered Senior Member

    Messages:
    4,832
    shmoe, how do I prove #30 on page 382? I'm assuming it doesn't have anything to do with any limit definition given earlier (I haven't read the chapter on the limit definition for functions), but I can't see how to prove it.

    I can see that for all e>0, there is an N>0 such that |a(n)|^2 < e^2, where n>=N.

    But this isn't leading anywhere; it applies for e>0 but in this case it would have to apply for every e^2 > 0. Kinda frustrating.
     
  4. Zephyr Humans are ONE Registered Senior Member

    Messages:
    3,371
    Given an ε^2 > 0, just take the positive square root to get an ε > 0 and apply what you derived ... or am I missing something?
     
  5. §outh§tar is feeling caustic Registered Senior Member

    Messages:
    4,832
    What lim a(n) = 0 implies is that for every e > 0

    there is an N>0 such that |a(n)|^2 < e^2, where n>=N.

    But that's different from saying for every e^2 > 0

    there is an N>0 such that |a(n)|^2 < e^2, where n>=N.

    which would be necessary to show that lim a(n)^2 = 0.
     
  6. Zephyr Humans are ONE Registered Senior Member

    Messages:
    3,371
    I think there are a few ways you can do it:

    1) You're given that for all e > 0, there is an N > 0 s.t. n>=N
    =>|a(n)| < e
    => |a(n)|^2 < e^2 < e for e < 1
    Where you specify e < 1 (it has to hold for arbitrarily small e, not arbitrarily large*)

    2) Label the e's differently, that is say you're given

    for all e_1 > 0, there is an N_1 > 0 s.t. n>=N_1
    => |a(n)| < e_1
    => |a(n)|^2 < e_1^2

    But you're given e_2 instead, so say
    e_1^2 = e_2
    e_1 = e_2^0.5

    Then by the first part, there is an N_1 > 0 s.t. n>=N_1
    => |a(n)| < e_1
    => |a(n)|^2 < e_1^2 = e_2



    (*if you can show |a(n)| < b, where b < e, obviously |a(n)| < e by transitivity.)
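
    To make method 2 concrete, here is a small illustration with a(n) = 1/n (my example, not from the problem):

    Code:
    # Illustration (not a proof): a(n) = 1/n -> 0. Given e2 > 0, set
    # e1 = sqrt(e2) and take the N that works for e1; then n >= N gives
    # |a(n)|^2 < e1^2 = e2.
    import math

    def N_for(e):  # smallest N with 1/n < e for all n >= N
        return math.floor(1 / e) + 1

    for e2 in [0.5, 0.01, 1e-6]:
        N = N_for(math.sqrt(e2))
        assert all((1.0 / n) ** 2 < e2 for n in range(N, N + 1000))
    print("ok")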
     
    Last edited: Oct 13, 2006
  7. alyosha Registered Senior Member

    Messages:
    121
    On the theorem that continuous functions are bounded

    About Apostol's proof that continuous functions are bounded (on page 151): is his wordy bisection argument really necessary? Couldn't we have just gotten to the point (forgive the pun, ha!) and said "suppose f is not bounded at some point alpha on the interval" and then immediately used the definition of continuity to obtain a contradiction?


    Edit: Never mind; I was recently informed that it does not even make sense to speak of a "point" at which a function is unbounded.
     
    Last edited: Oct 24, 2006
  8. alyosha Registered Senior Member

    Messages:
    121
    On the convexity of the indefinite integrals of increasing functions

    I believe I have reworked Apostol's proof of the convexity of the indefinite integrals of increasing functions. His proof, to me, seems far too slick, and his definition of convexity does not seem to capture the intuitive notion of a line. I formulated the definition in terms of the point-slope formula, and found that the issue could be turned into an intuitively obvious problem about average value. A few lemmas about average value should be noted first.

    1. First we recall a theorem from the problem set on average value:

    If a< b < c , then there is some t such that 0 < t < 1 and


    A(a,c) = tA(a,b) + (1-t)A(b,c)


    where A(x,y) denotes the average value of a function on the interval [x,y].

    This theorem is noticeably similar in form to the way Apostol defines convexity.
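
    For what it's worth, the t can be written down explicitly (my derivation, not Apostol's): since \( \int_a^c f = \int_a^b f + \int_b^c f = (b-a)A(a,b) + (c-b)A(b,c) \), dividing by c - a gives

    \( A(a,c) = \frac{b-a}{c-a}\,A(a,b) + \frac{c-b}{c-a}\,A(b,c) \)

    so t = (b-a)/(c-a), which is strictly between 0 and 1 when a < b < c.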

    2. If f is an increasing function, and a <= b <= c , then

    A(a,b) <= A(b,c)

    This is intuitively obvious and can be demonstrated by recalling the implied inequalities of increasing functions and integrating them in the same way employed in Apostol's proof.

    3. Finally, if f is an increasing function, and a <= b <= c , then

    A(a,b) <= A(a,c) <= A(b,c)

    This can be deduced from lemmas 1 and 2.


    Now we define convexity with the point-slope formula.

    A function f is said to be convex on [a,b] if for every x < y in [a,b], and for every u in [x,y],



    f(u) <= [(f(y)-f(x))/(y-x)](u-x) + f(x)

    where the right side is just the equation of the line passing through (x, f(x)) and (y, f(y)).


    Now we assert that if f is an increasing function on [a,b], then the indefinite integral defined by I(u), with a as the lower limit and u as the variable upper limit, is convex on [a,b].


    Now, when we write our indefinite integral into the point-slope definition, a little rearranging brings us to an inequality between average values.
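
    Spelling out the rearranging (a sketch of how I read it): substituting I into the point-slope definition and dividing by u - x (for x < u) gives

    \( \frac{I(u)-I(x)}{u-x} \le \frac{I(y)-I(x)}{y-x} \)

    and since \( I(u) - I(x) = \int_x^u f = (u-x)\,A(x,u) \), each difference quotient is an average value of f. The inequality becomes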


    A(x,u) <= A(x,y)


    But this was already demonstrated in the lemmas (recall that x <= u <= y, so lemma 3 applies with a = x, b = u, c = y), so the convexity of I(u) has been established.
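
    A quick numeric spot-check of the conclusion (my own sketch; the particular increasing f, interval, and points are arbitrary choices):

    Code:
    # Spot-check (not a proof): for increasing f, I(u) = integral from a to u
    # of f should lie on or below the chord through (x, I(x)) and (y, I(y))
    # for every u in [x, y].
    def I(f, a, u, steps=10000):  # midpoint Riemann sum for the integral
        h = (u - a) / steps
        return sum(f(a + (k + 0.5) * h) for k in range(steps)) * h

    f = lambda t: t ** 3  # increasing on [0, 2]
    a, x, y = 0.0, 0.3, 1.7
    chord = lambda u: (I(f, a, y) - I(f, a, x)) / (y - x) * (u - x) + I(f, a, x)
    print(all(I(f, a, u) <= chord(u) + 1e-9
              for u in [0.3, 0.7, 1.0, 1.4, 1.7]))  # True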

    Thanks to anyone who took the time to read this; I'm sure there are typos and perhaps misunderstandings. I can thoroughly demonstrate all of the claims here, but I felt that would have taken a great deal of space. Feedback would be appreciated.
     
    Last edited: Nov 5, 2006
  9. §outh§tar is feeling caustic Registered Senior Member

    Messages:
    4,832
    alyosha, just to ask: are you still doing all/most of the exercises, or are you just reading through the book? I started school and I'm doing multivariable as well as a little bit of analysis, so going through one-variable as well is a bit stressful, but I would like to master the concepts, because in Vol. 2 Apostol goes through multivariable calculus assuming quite a deal of maturity, which one can't gain from merely doing the problems in the introduction to Vol. 1.

    What year of school are you in, anyway? And did you say you were planning to major in math?
     
  10. alyosha Registered Senior Member

    Messages:
    121
    I'm attempting to do the exercises that involve proving (interesting) theorems. I don't have the time or desire to do the more mechanical exercises. I usually skim pretty far ahead and then slowly catch up with the problems. I admit Apostol's book is difficult to learn from; it might be accurate to say it's probably best as a reference for people who already know the material, hah! The material itself and the style are so "self-assured" (I can't think of a better way to express what I mean). I am a freshman at the University of Florida and am considering a double major in mathematics and physics.
     
  11. §outh§tar is feeling caustic Registered Senior Member

    Messages:
    4,832
    That's cool. I am a freshman too, and I am considering precisely the same majors you are (a double major too). If you find these books difficult, wait till you actually get to the REAL mathematical analysis book by Apostol. It's as dry as beans, no motivation whatsoever, hardly any pictures. More of an encyclopedia than anything. I'm taking a multivar/intro to analysis class right now and it's kicking my ass, so I'm slaving over some of the basics.

    I would advise looking into different analysis books. I'm using a couple myself, and often the exercises are more 'interesting' than Apostol's. The main convenience of Apostol Vol. 1, I find, is that the exercises are structured in such a way that solving #n gives you a helpful insight into #n+1. Personally, after taking a look at some other books, I think Apostol is good mental preparation but in no way close to an introduction to analysis. It turns out that he's actually shielded us from a lot of the meat behind what completeness means, etc. It's nice that he slips in some measure theory, though; I don't think Spivak does the same. Unfortunately, I have multivar to handle, so I suppose I'll be mastering that before I master single variable. Personally, I think the best way for me to gain confidence is to work a ton of problems. Not rote, of course.

    I expect you are considering academia too?
     
  12. alyosha Registered Senior Member

    Messages:
    121
    It isn't that I consider the material in Apostol itself difficult, but rather the incredibly brief way in which he treats most topics (like cramming calculation of volume and integration in polar coordinates into single "examples" as if they were the most unimportant and uninteresting of things). Spivak's book, I think, contains problems and discussion that involve you more in the material.


    I realize that in an actual analysis book the brevity will be even greater, but Apostol's ultra-brief treatments in his "Calculus" book are almost comical. This isn't to say his book is bad, because it's excellent as far as the purely theoretical part goes. His proof of the integrability of continuous functions is especially pleasing. I will probably go to Rudin after I get through both Spivak and Apostol (I doubt I'll be looking into Apostol's analysis book). I have also thought about Spivak's Calculus on Manifolds, but I'm not sure where that fits in.


    As far as what I want to "do": reading about people like Feynman and Ed Witten, I have felt for a long time that this is what I want to do and be involved with. If I don't end up as a theoretical physicist or pure mathematician, I can only imagine that I will at least be somewhat involved in both fields, perhaps in the development of new technology.


    But, it is possible that one day I will decide that I do not actually want to do those things as a profession. I may give it up to become a starving pianist. The point is that I would rather do these things first hand than have a job I cared nothing about.
     
    Last edited: Nov 14, 2006
  13. alyosha Registered Senior Member

    Messages:
    121
    hello again

    Hello all. I have returned to these boards with a particularly simple but vexing problem. I came across this problem in Spivak, and then went back to see how I solved it in Apostol. As it turns out, my proof was flawed (the flaw being an easy one to make if you are rushing).

    If a function f has period a, prove that

    \( \int_0^a f(x)\,dx = \int_b^{b+a} f(x)\,dx \)

    holds for all b.


    This is intuitively extremely clear. The easy erroneous proof comes about when you immediately invoke the translation property to add b to the upper and lower limits, and it seems as though you are done. But, unfortunately, f has period a, not b, so the integrand won't simplify just yet. I was given the advice to write b = na - c for some integer n and some 0 <= c < a. I fooled around with this but simply lost patience.
     
  14. §outh§tar is feeling caustic Registered Senior Member

    Messages:
    4,832
    I'm using Apostol's Mathematical Analysis. I won't bother with Rudin until an actual analysis class, because I have a couple of months to go before I will be 'mature'.

    You have not defined the domain of your function and so it is not clear what you mean by 'for all b'.

    Assuming the domain is the set of reals, f(x) = f(x-b), and from there Int[f(x),0,a] = Int[f(x-b),0,a] = Int[f(x),b,(a+b)], where the middle expression is 'a shift to the right'.
     
  15. alyosha Registered Senior Member

    Messages:
    121
    Unless something has gone on behind the scenes, you have made the easy mistake I spoke of; that is, b is not the period of f(x), a is the period. So we cannot simply write f(x) = f(x+b), because we only started by assuming f(x) = f(x+a). This mistake is amazingly easy to make, and I never would have noticed it had I not done the problem twice. And yes, it seems we are assuming that we are considering some integrable function defined on R. Spivak is usually not very precise about such things in wording the problems.
     
  16. §outh§tar is feeling caustic Registered Senior Member

    Messages:
    4,832
    Oops, sorry. I guess after reading your explanation I thought you meant b was the period.

    I don't think it's too difficult. Try to convince yourself that the integral of f over [0,b] is the integral of f over [a,b+a]. That should be easy. Then use this result to obtain what you want.
     
  17. alyosha Registered Senior Member

    Messages:
    121
    I will probably just draw a picture and look at it for a long time (if I find the patience) to see what the right analytic argument should be.
     
  18. quadraphonics Bloodthirsty Barbarian Valued Senior Member

    Messages:
    9,391
    That's good advice, although I'd use b = na + c. Doesn't really matter, though. You can proceed like this:

    \(\int_b^{b+a} f(x) dx = \int_{na + c}^{(n+1)a+c} f(x) dx\)

    Now, shift the thing to the origin as y = x - na:

    \( = \int_c^{a+c} f(y + na) dy = \int_c^{a+c} f(y) dy \)

    where we've used the periodicity of f (recall that n is an integer). Now, break this integral apart:

    \( = \int_0^a f(y) dy - \int_0^c f(y) dy + \int_a^{a+c} f(y) dy \)

    Then, use the periodicity of f on the last term:

    \( \int_a^{a+c} f(y) dy = \int_0^c f(y) dy \)

    and you've got the desired result.
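
    If you want to see the identity numerically (a sanity check only; sin^2, with period pi, is my stand-in for f):

    Code:
    # Numeric check (not a proof): if f has period a, the integral over
    # [b, b+a] should equal the integral over [0, a] for any b.
    import math

    def integrate(f, lo, hi, steps=100000):  # midpoint Riemann sum
        h = (hi - lo) / steps
        return sum(f(lo + (k + 0.5) * h) for k in range(steps)) * h

    f, a = lambda x: math.sin(x) ** 2, math.pi  # sin^2 has period pi
    base = integrate(f, 0, a)                   # equals pi/2
    print(all(abs(integrate(f, b, b + a) - base) < 1e-6
              for b in [-7.3, -1.0, 0.4, 12.9]))  # True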
     
  19. §outh§tar is feeling caustic Registered Senior Member

    Messages:
    4,832
    Well, minus the damn fancy LaTeX, here is how I conceived of it.

    Assume you can prove that the integral of f over [0,b] is the integral of f over [a,b+a]. That should be trivial.

    Then Int[f,0,a] = (Int[f,0,a] - Int[f,0,b]) + Int[f,a,b+a] = Int[f,b,a] + Int[f,a,b+a] = Int[f,b,b+a], where the first step uses the assumed equality to replace Int[f,0,b] with Int[f,a,b+a].

    I think that's more straightforward.

     
  20. alyosha Registered Senior Member

    Messages:
    121
    Thanks, everyone. The old algebraic "adding and subtracting the same thing" trick always fools me.
     
