Relativity fails with Magnetic Force

Discussion in 'Physics & Math' started by martillo, May 24, 2009.

  1. przyk squishy Valued Senior Member

    Well if you're familiar with differential calculus the only new step you'd need to learn is how to use the Green's function method to invert the wave equation.

    It's your intuition that needs to be revised here. Second-order ordinary differential equations only have two linearly independent solutions. Second order partial differential equations can easily have an infinite number of linearly independent solutions. You see this with Maxwell's equations in vacuum for example: every plane wave propagating at c in any direction with any wavelength is a solution. That's an infinite number of plane wave solutions, all linearly independent.
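przyk's point about the infinite family of plane-wave solutions is easy to check numerically. Below is a minimal sketch (my own illustration, not from the thread; the function name and the finite-difference check are arbitrary choices) verifying that sin(k(x − ct)) satisfies the 1D wave equation for several wavenumbers and both propagation directions:

```python
import numpy as np

def wave_residual(u, x, t, c, h=1e-4):
    """Finite-difference estimate of u_tt - c^2 u_xx at the points (x, t)."""
    u_tt = (u(x, t + h) - 2*u(x, t) + u(x, t - h)) / h**2
    u_xx = (u(x + h, t) - 2*u(x, t) + u(x - h, t)) / h**2
    return u_tt - c**2 * u_xx

c = 1.0                               # units with c = 1
x = np.linspace(-2.0, 2.0, 50)
t = 0.7

# Any wavenumber k, either propagation direction: all are solutions,
# and they are mutually linearly independent.
for k in (0.5, 1.0, 5.0):
    for direction in (+1.0, -1.0):
        u = lambda x, t, k=k, d=direction: np.sin(k * (x - d*c*t))
        assert np.allclose(wave_residual(u, x, t, c), 0.0, atol=1e-3)
```

The residual is zero (up to truncation error) for every k and direction tried, which is the infinite linearly independent family the post describes.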
  3. martillo Registered Senior Member

    Well, this then brings a possible "Fourier decomposition" of some special wave into play, since any function has one.
    As I said, in this case I don't see how a final solution could have a valid source for its electric and magnetic fields if the individual plane-wave components don't have one.
    Last edited: Apr 16, 2010
  5. AlphaNumeric Fully ionized Moderator

    Suppose you have a 1-variable function f(x) which has a Fourier decomposition \(f(x) = \sum_{n} a_{n} \sin(nx) + b_{n} \cos(nx)\). Suppose you also have a 2nd order linear ODE which can be written in the form of a linear operator \(\mathcal{L}(y) = 0\). If you make the ansatz that f(x) satisfies the equation, \(\mathcal{L}(f)=0\), then you will find an infinite set of constraints on the coefficients of the Fourier decomposition. You know that any 2nd order linear ODE has 2 and only 2 independent solutions, and so you'll find that there are 2 ways for you to satisfy the conditions on the coefficients, thus giving you your two solutions.

    This is nothing more than a demonstration that if you find a solution to an ODE you can then Fourier decompose it or alternatively you can Fourier decompose it and then find the form of the Fourier coefficients which solve it.

    For instance, suppose you have y' = 1 on the interval [0,1). If you make the ansatz that \(y(x) = \sum_{n} a_{n} \sin(n\pi x) + b_{n}\cos(n\pi x)\) you'll find constraints on the coefficients. Alternatively, it's pretty obvious that y(x) = x + constant, and you'll find that if you do the Fourier decomposition of f(x) = x you'll get the same coefficients.
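To make that concrete, here is a small numeric sketch (my own illustration, not part of the original post; the grid of test points is arbitrary) of the half-range sine decomposition of f(x) = x on [0,1], whose coefficients work out to b_n = 2(−1)^(n+1)/(nπ). No individual sine term satisfies y' = 1, yet the partial sum reproduces x:

```python
import numpy as np

# Sine-series coefficients of f(x) = x on [0,1]:
#   b_n = 2 * (-1)**(n+1) / (n * pi)
N = 2000
n = np.arange(1, N + 1)
b = 2.0 * (-1.0)**(n + 1) / (n * np.pi)

# No individual term b_n*sin(n*pi*x) satisfies y' = 1, but the partial
# sum reproduces f(x) = x at interior points (convergence is slow near
# the jump of the odd periodic extension at x = 1).
x = np.array([0.1, 0.25, 0.5, 0.7])
partial = (b[None, :] * np.sin(np.pi * np.outer(x, n))).sum(axis=1)
assert np.allclose(partial, x, atol=1e-2)
```

Exactly as described: the specific combination of basis terms recovers the solution even though each term alone fails the equation.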

    Individually the terms in the decomposition are not solutions, only specific combinations of them are. Yes, in certain cases for certain decompositions you'll find you have only one or two terms which are non-zero but this is entirely to be expected. Decomposing a polynomial in terms of sin and cos is the algebraic version of a change of basis, something you should be familiar with from vector calculus.

    Given some quantity/object in a vector space \(\lambda\) you pick a basis \(e_{n}\) such that \(\langle e_{n} | e_{m} \rangle = \delta_{mn}\), the Kronecker delta. For polynomial bases you might define \(\langle e_{n} | e_{m} \rangle = \int \omega(x) e_{n}e_{m}\,dx = \delta_{mn}\), and if you want to make life easy for yourself you pick a basis that simplifies \(\omega(x)\) (i.e. an orthonormal basis) and you get \(\lambda = \sum C_{n}e_{n}\). If \(\lambda\) is a vector then you'll often write the \(e_{n}\) as \(\mathbf{i},\mathbf{j},\mathbf{k}\). If \(\lambda\) is a function you might use the nice infinite basis \((1,x,x^{2},x^{3},x^{4},\ldots)\). Or you might change to a less pleasant basis \((1,x+1,x^{2}+x,x^{3}+x^{2},\ldots)\), or any other combination which is linearly independent (you should know what that means). If you're doing something involving periodicity then picking a periodic basis would be a natural choice, i.e. \(\sin \frac{n\pi x}{L}\) and \(\cos \frac{n \pi x}{L}\), or specific linear combinations of them, \(e^{i\frac{n\pi x}{L}}\) and \(e^{-i\frac{n \pi x}{L}}\).

    This is all basic stuff covered in any methods course at university. The terms which sum to make f(x) do not have to satisfy the equation itself. This is OBVIOUS, else f(x) could be any linear combination of those terms, but it's not: it's one and only one combination of those terms, by their linear independence. Even if the equation is linear, the terms which sum to make a solution do not need to be solutions themselves. Only linear combinations of solutions need be (indeed ARE) solutions.

    If \(f(x) = x + x^{4}\) is a solution to some linear equation and \(g(x) = 3x^{2} + x^{3}\) is too, then the linear nature of the equation means \(Af(x) + Bg(x)\) is a solution. This does NOT mean the individual terms in f(x) or g(x) are: x need not be a solution, and \(x^{4}\) need not be. The fact that their linear combination is a solution doesn't mean they are.

    As a vague side comment, if a 1st year in a mathematical methods class of mine made that mistake I'd correct them politely. If they continued to make it class after class I'd be a little bit blunter. You've been saying it for YEARS, despite being corrected, despite claiming to be familiar with all of this. I can only assume you're either mind-blowingly stupid for not grasping a point you've been exposed to many, many times, or you're intellectually dishonest in lying about having learnt the relevant mathematical material and thus have never seen these results, or you're fully aware you're wrong and are simply telling lies, hoping you won't come across anyone who hasn't slept through their 1st year methods course.

    If you're stupid enough to think no one would notice you telling lies again and again then you're fantastically stupid, even more so than being unable to grasp the results themselves. Hence I'm hoping you're either mildly stupid and don't understand the maths, or you're just intellectually dishonest and lazy and have made no attempt to find the relevant material. Otherwise it's a truly terrible statement about you if you are aware you're wrong and just hoped no one would notice. You've posted your nonsense on many forums for many years; you must know someone who passed 1st year methods would read your claptrap.
  7. przyk squishy Valued Senior Member

    Why not? You can also perform a Fourier decomposition of a physical source onto a superposition of plane waves.
  8. martillo Registered Senior Member

    You all take the same approach. You tell me you can do this and that, but there is no already-worked case of finding the solution the way you say it can be done. There is no known case in the world of a solution for an electromagnetic wave based on that "Fourier decomposition", but you insist it can be done and that I'm just stupid for thinking it cannot.
    I'm not going to continue discussing.
    I will leave with yet another "stupid" comment:
    I think there could also be a problem with the solution for the antenna rpenner posted: those electric and magnetic fields "enclose" (or are related to) charges and currents in the element of the antenna. An "electromagnetic wave" travels in free space without "enclosing" any travelling charge, and the Maxwell's equations that determine it express exactly this: "enclosed" charges and currents = zero.
    In other words, rpenner's solution might not verify all four of the Maxwell's equations needed to determine an "electromagnetic wave".
    I will check this someday; I don't have time at the moment.
    But never mind, it's just one more of my stupid comments, since I'm totally stupid.
  9. AlphaNumeric Fully ionized Moderator

    Intelligence, knowledge and intellectual honesty? You're basically trying to insult us for actually knowing stuff.

    How do you know? You obviously haven't read the literature, and you obviously aren't familiar with stuff 1st years should know. The fact that you can't do it, don't understand it and haven't read about it doesn't mean people can't do it and never have done it. Fourier decompositions are the bread and butter of signal analysis; they're used by any kind of digital media for audio.

    In fact it's even more common than that. Every time someone does a power expansion they are using exactly the method I just outlined: assuming the solution expands in some basis and then working out the coefficients of such an expansion from the form the constraint equation takes when you use such an ansatz. I personally have done that many times, and it's standard methodology in any and all mathematical physics, taught in universities. And how do I know that? Because I've both been taught it and have taught it.

    Have you honestly never seen "Assume a power expansion \(f(x) = \sum a_{n}x^{n}\) and insert into the equation" anywhere? It's in so many textbooks it's not even funny. And this isn't new; it's been done for centuries. It allows you to break a difficult problem down into lots of simpler ones. It's also related to 'separation of variables', the standard methodology in wave mechanics.

    You're attempting to tell me that someone cannot and has not done something I not only know for a fact other people have done but have done myself. Want an example? Alright jackass, it's hammer time.

    Let \(L(y) = y''+y = 0\). If you do enough maths or physics you immediately recognise this as simple harmonic motion so you can jump immediately to the solution \(y(x) = A \sin(x) + B \cos(x)\). This is a linear combination of two possible solutions, \(\sin(x)\) and \(\cos(x)\).

    But suppose you didn't know that sin and cos are the solutions and you make a power series ansatz, \(y(x) = \sum_{n=0}^{\infty} a_{n}x^{n}\). Now consider \(L(y)\). We want y'', which is easily seen to be \(\sum_{n=2}^{\infty}n(n-1)a_{n}x^{n-2}\). We lose the n=0 and n=1 cases as they differentiate twice to zero. So L(y) = 0 now becomes \(\sum_{n=2}n(n-1)a_{n}x^{n-2} + \sum_{m=0}a_{m}x^{m} = 0\). A bit of relabelling and shuffling about and you have \(\sum_{s=0}\left((s+2)(s+1)a_{s+2} + a_{s}\right) x^{s} = 0\). Since the \(x^{s}\) are linearly independent (in a sense which I won't bother to go into but can if needs be), their coefficients must vanish separately, and we have \(L(y) = 0\) implying \((s+2)(s+1)a_{s+2} + a_{s} = 0\) for all \(s \geq 0\), and thus \(a_{s+2} = -\frac{a_{s}}{(s+2)(s+1)}\) for all \(s \geq 0\). It's clear that due to the double derivative in L(y) there's a relationship between \(a_{r}\) and \(a_{r+2}\). More generally, a k-th order ODE would relate \(a_{r}\) and \(a_{r+k}\). Furthermore, this means there are 2 distinct sets of coefficients, and for a k-th order L there would be k. We consider this more specifically, starting with the \(a_{r}\) for r even

    \(a_{2} = -\frac{1}{2}\frac{1}{1}a_{0} = -\frac{1}{2}a_{0} =- \frac{1}{2!}a_{0}\)
    \(a_{4} = -\frac{1}{4}\frac{1}{3}a_{2} = +\frac{1}{4}\frac{1}{3}\frac{1}{2}\frac{1}{1}a_{0} = \frac{1}{4!}a_{0}\)
    It's easy to see you get generally \(a_{2n} =(-1)^{n}\frac{1}{(2n)!}a_{0}\)

    Now consider the \(a_{r}\) for r odd.
    \(a_{3} = -\frac{1}{3}\frac{1}{2}a_{1} = -\frac{1}{3!}a_{1}\)
    \(a_{5} = -\frac{1}{5}\frac{1}{4}a_{3} = +\frac{1}{5!}a_{1}\)
    It's easy to see you get generally \(a_{2n+1} =(-1)^{n}\frac{1}{(2n+1)!}a_{1}\)

    So now I have \(y(x) = \sum_{n=0}^{\infty} a_{n}x^{n} = \sum_{n=0}^{\infty} a_{2n}x^{2n} + \sum_{m=0}^{\infty} a_{2m+1} x^{2m+1} = \sum_{n=0}^{\infty}a_{0}(-1)^{n}\frac{1}{(2n)!}x^{2n} + \sum_{m=0}^{\infty} a_{1}(-1)^{m}\frac{1}{(2m+1)!}x^{2m+1}\).

    Lo and behold, we seem to have split the solution into two linear combinations, \(y(x) = a_{0}\sum_{n=0}^{\infty}(-1)^{n}\frac{1}{(2n)!}x^{2n} + a_{1} \sum_{m=0}^{\infty} (-1)^{m}\frac{1}{(2m+1)!}x^{2m+1}\). Recognise them? We have \(\sum_{n=0}^{\infty}(-1)^{n}\frac{1}{(2n)!}x^{2n} = \cos x\) and \( \sum_{m=0}^{\infty} (-1)^{m}\frac{1}{(2m+1)!}x^{2m+1} = \sin x\), and have obtained the general solution \(y(x) = A \cos x + B \sin x\) where \(A = a_{0}\) and \(B = a_{1}\).
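The recurrence \(a_{s+2} = -\frac{a_{s}}{(s+2)(s+1)}\) can be checked directly in a few lines. This is a sketch of my own (not part of the original post; the grid and number of terms are arbitrary): build the coefficients from arbitrary a_0, a_1, sum the truncated series, and compare against a_0 cos x + a_1 sin x:

```python
import numpy as np

def series_solution(x, a0, a1, terms=30):
    """Sum the power series whose coefficients obey a_{s+2} = -a_s/((s+2)(s+1))."""
    a = [a0, a1]
    for s in range(terms - 2):
        a.append(-a[s] / ((s + 2) * (s + 1)))   # new index is s + 2
    return sum(a[n] * x**n for n in range(terms))

xs = np.linspace(-3.0, 3.0, 13)
for a0, a1 in [(1.0, 0.0), (0.0, 1.0), (2.0, -0.5)]:
    approx = [series_solution(x, a0, a1) for x in xs]
    exact = a0 * np.cos(xs) + a1 * np.sin(xs)
    # the series built from the recurrence reproduces A cos x + B sin x
    assert np.allclose(approx, exact, atol=1e-8)
```

The agreement holds for every (a_0, a_1) pair tried: the recurrence alone, with no prior knowledge of sin and cos, reproduces the general solution.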

    There, categorical proof that you can construct solutions to a linear differential equation out of terms which are not individually solutions to the equation. Series solutions are commonplace in physics.

    *goes to check 1st year textbook*

    Yep, the textbook recommended to 1st years by Cambridge includes dozens of pages outlining the methods and applications of series solutions in physics. In fact one of the papers my father published when he was a PhD student studying magnetohydrodynamics (i.e. the interplay between electromagnetism and fluid mechanics in flowing conductors) was entirely on the series solution to some horrible differential equation. This isn't new, it's not advanced, it's not obscure; it's something anyone calling themselves a 'physicist' should be very familiar with.

    Would you care to explain why you seem not to know about a common methodology, despite having spent years whining about it? In all those years didn't you ever open even one textbook?
  10. martillo Registered Senior Member

    Seems you don't understand what I write.

    Of course I know Fourier analysis works and is used in communications and other areas like medicine. Don't you know it is also taught to engineers, and that engineers work with it?
    Of course I saw and even did some, and did it well (I was good at mathematics), while at university (Electrical Engineering).

    I wrote:
    Did you read it properly? I'm talking about a solution for an "electromagnetic wave", a solution for some particular antenna, and I repeat: this has never been done.

    You have wasted your time writing a Fourier solution to an equation which is not related to any antenna, so it is not what I'm talking about.
    Last edited: Apr 16, 2010
  11. funkstar ratsknuf Valued Senior Member

    Ah, the usual bit of self-delusion: "I don't understand what you just did, so I'll pretend that you misunderstood, so as not to lose face."
    The applications of Fourier analysis are readily apparent to everyone with even a passing familiarity with it: You're not impressing anybody with this.
    No, you weren't. Last time you were here you couldn't even do something as basic as a coordinate change, and in this thread you're confused about some basic material that an electrical engineer would be expected to know.

    Of course, you might have gotten a degree from Diploma Mill University, but who really cares, when you're hopelessly ignorant of utterly pedestrian stuff?
  12. AlphaNumeric Fully ionized Moderator

    I sometimes struggle to grasp lesser minds, yes.

    I'm the one who just told you it's taught in universities. I'm the one who just said I've taught it! Did you understand what I said? And as it happens, my father was head of an engineering department for more than a decade. If you're going to try the "I know more about how this is done in universities" line, you're going to lose. Or rather, you have lost.

    I don't for a second think you're 'good' at mathematics. You don't seem to be well versed in things I'd expect a 1st year to know inside out. And if you've done this at university and you're good at it, why are you saying things which contradict it?

    I gave you a very simple example because quite frankly I don't think you'd understand anything more than that. I also said that my father did a paper which involves something very close to that method for a magnetohydrodynamics problem. Electromagnetism in a flowing conductor. Real world applications to electromagnetism of precisely the methodology I've just outlined. Once again, you're telling me that no one has ever done things I have read with my own two eyes and which people have already corrected you on.

    You made it clear that you didn't understand the application of Fourier decompositions to differential equations. You whined about how the individual terms in the series would have to be solutions of the differential equation, and I just proved you wrong. Now you're trying to twist this and say that unless I prove you wrong with one particular specific example, you're justified in carrying on with your BS. This is essentially nothing more than a change of basis. We're all used to changes of basis for a set of vectors, but functions are elements of infinite dimensional vector spaces too. But then you'd know that if you hadn't slept through your methods course at university, if indeed you went to university, which I don't entirely believe.
  13. martillo Registered Senior Member

    "Very close" is not enough. I'm saying it doesn't work for a supposed electromagnetic wave of any real antenna. I said that nobody has ever developed a solution of an electromagnetic wave for a real antenna based on the method you talk about, and I think I'm right. If not, show me one case.
    You never saw a solution of an electromagnetic wave for a real antenna based in the method you talk about.
    Not at all.

    Some may wonder how I can be so sure about what I'm saying.
    Two reasons.
    First, because if theoretical solutions had been developed for antennas, then perfect ideal antennas would already have been developed based on those solutions, and I, as an Electrical Engineer, would have heard about that.
    Second, because I trust Mathematics. Mathematics will not show something that does not exist, and I know that "electromagnetic waves" do not exist, as I argue on my site, where I show how antennas really work through emission/absorption of photons and the photoelectric effect present in the antennas, maintaining all the wave theory developed for communications.
    Last edited: Apr 17, 2010
  14. przyk squishy Valued Senior Member

    Of course no-one has. The usual way of solving Maxwell's equations for an antenna is via the retarded potentials or Jefimenko's equations. This is the avenue rpenner's link as well as textbooks take. Solving for an antenna via Fourier decomposition is technically possible but also a complete waste of time. The only reason anyone is bringing up Fourier decompositions at all is because of statements you make like this:
    Alphanumeric has already pointed out to you that a function can be a solution to a differential equation even if the individual modes aren't, and has given you specific examples that illustrate this. In electromagnetism, it's possible to have (say) a spherical wave produced by a physical source (eg. an oscillating dipole) that's a solution to Maxwell's equations, even if the individual plane waves that compose it aren't solutions with that source term. There's nothing mathematically invalid about that.

    I've also given you an alternative way of thinking about the same thing if it helps you intuitively: you can also perform a Fourier decomposition of a physical source onto plane waves. So if it helps you, you can think of the source itself as a superposition of plane waves, each one with a corresponding electromagnetic plane wave solution. Add up the individually non-physical "source waves" and you'll get your physical source back. Add up the corresponding electromagnetic plane waves, and you'll get the physical field it produces.

    To emphasize this: I'm not suggesting this as a way of actually calculating the fields produced by an antenna. There are much better ways of doing that. I'm only suggesting this as a way of reconciling this intuitive difficulty you're having with plane waves vs. sources by giving you another way of thinking about it.
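przyk's superposition picture can be made tangible with a short sketch (my own illustration, not from the thread; the Gaussian weighting and grids are arbitrary choices): superpose plane-wave modes e^{ik(x−ct)} with Gaussian weights. No single mode is localized, yet their weighted sum is a localized pulse that travels at c:

```python
import numpy as np

c = 1.0
k = np.linspace(-40.0, 40.0, 2001)          # wavenumber grid
dk = k[1] - k[0]
weights = np.exp(-(k - 10.0)**2 / 4.0)      # Gaussian weighting, centred on k0 = 10

def field(x, t):
    """Superposition of plane waves exp(i k (x - c t)) with the weights above."""
    return (weights * np.exp(1j * k * (x - c * t))).sum() * dk

x = np.linspace(-5.0, 15.0, 401)
envelope_t0 = np.abs(np.array([field(xi, 0.0) for xi in x]))
envelope_t5 = np.abs(np.array([field(xi, 5.0) for xi in x]))

# No single plane-wave mode is localized, but the superposition is a
# pulse, and the whole pulse translates by c*t between snapshots.
peak0 = x[np.argmax(envelope_t0)]
peak5 = x[np.argmax(envelope_t5)]
assert abs((peak5 - peak0) - c * 5.0) < 0.1
```

Each mode extends over all space and carries no localized "source", but the sum behaves like the physical pulse, which is exactly the reconciliation being suggested.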

    Well the point is, they have, and yes, you should have heard of it. For an electrical engineer, not having studied the solution for an oscillating dipole is a really serious omission.
  15. martillo Registered Senior Member

    Well, I'm pretty sure "electromagnetic waves" do not exist in reality, but I feel I would need to spend a lot of time on the high mathematics of your "level" to discuss this further with you, and I don't have time for that. Thanks for your comments, although I disagree.
  16. AlphaNumeric Fully ionized Moderator

    So the application of a series expansion, in which the individual terms are not solutions of the governing equation(s) but a specific linear combination is, to the realm of electromagnetism, which has real-world applications, isn't enough to completely counter your arguments?!

    You didn't accept even the methodology. I proved it worked. You whined about real-world applications. I proved they exist. Nothing more needs to be said; Fourier analysis, or any other basis decomposition in a finite or even infinite dimensional vector space, is EVERYWHERE in physics. Fourier analysis is used in wave mechanics. Fourier analysis is used in quantum field theory. The very mathematical foundation of quantum mechanics, a Hilbert space, is an infinite dimensional vector space!! If you were 'good' at mathematics then you'd know about these applications. The opening page of 'Mathematics: A Very Short Introduction' by Cambridge professor of mathematics and Fields Medal winner Tim Gowers (who, incidentally, taught me in my first term of university) says, and I quote, "The notion of a Hilbert space sheds light on so much of modern mathematics, from number theory to quantum mechanics, that if you do not know at least the rudiments of Hilbert space theory then you cannot claim to be a well-educated mathematician." You are not a good mathematician. You are not sufficiently informed to make the evaluations you do.

    Simply not understanding something doesn't make it justified to denounce it. To give an example relating to me I know a fair bit of general relativity. I know the basics of experiments like the Pound-Rebka one or the 1919 eclipse. I'm also aware of LIGO, which uses 4km long tunnels to measure variations on scales fractions of the width of a proton. My knowledge of thermodynamics also tells me that the vibrating motion of atoms simply due to their thermal energy is trillions of times more than this tiny variation LIGO looks for so I quite readily admit I do not know how they can measure for gravitational waves. The small amount of relevant knowledge I have actually makes me understand their methods less because it seems counter to what I know. But also I haven't tried to find out the specific methodologies used. Do I therefore think that LIGO is a massive pile of crap sucking up pointless funding? No. I admit I don't have enough understanding and accept that if I wished to learn I could find out but until then I default to the knowledge of the experts, who have convinced other experts they are onto something.

    This is not how you behave. You or any other cranks here (Jack* being the main example at present). You don't know something, you don't want to know, but you want to tell the people who do know that in fact they don't. You're trying to tell me things about mathematics and physics I know are false. I know them to be false not because I am blindly accepting the comments of someone else but because I can demonstrate, off the top of my head, in several ways, that you are wrong. Immediately.

    * Jack even goes a step further. When I suggested a relevant book on the formal construction of geometry in Minkowski space-time he said he didn't wish to read it because the book wouldn't say what he says (I'm paraphrasing: he said the book wouldn't contain 'the latest mathematics', by which he means his own claims).

    I didn't actually study electronics or do too much specific electromagnetism (without wrapping it up into a quantum field theory construction), but you can't employ the logic "I've never seen it so it's never been done". I've never seen Japan but I don't claim there's an enormous conspiracy of Chinese people to pretend they're Japanese when I meet them.

    Yes, you're obviously up to date on the latest and greatest research in the engineering world. I hate to break it to you, but you don't have omniscience in engineering. You don't even seem to be familiar with basic homework problems.

    You do realise maths isn't physics and physics isn't maths, right? I can construct infinitely many electromagnetism-like mathematical theories which are not real. In fact that's what Yang-Mills theory is: the generalisation of electromagnetism! On the purest level a physicist is someone who finds/selects/constructs the correct (or as near as we can tell) mathematical model, out of the infinity of possibilities in mathematics, which most accurately describes the real world. Give me any theory in physics and I'll give you a mathematical construction close but not equal to it. It's mathematically valid but physically wrong.

    Chalk up another basic qualitative concept you don't understand in science.

    Yes, you know. Just like Jack knows relativity is wrong. Just like Pat Robertson knows Jesus existed and so does God. Just as a 5 year old knows Santa exists.

    The easiest person to fool is yourself (as said by Feynman I think). Hence peer review. You think you've got something amazing but a fresh pair of eyes sees the mistakes more easily. You've got no interest in peer review because you refuse to listen to anyone who has enough knowledge and/or intelligence to provide valid comments. Have you submitted your work to a journal? If not, why not? If so, why are you still pushing it? Intellectual honesty starts with being able to say "I'm wrong". Happens to the best of us. Einstein was wrong almost entirely post 1930. Newton was wrong about alchemy. Kelvin thought the Sun was fuelled by gravitational collapse and the atom was like a pudding. Stick a pin in the physicists' 'Who's Who' and you'll find someone who at one time or another (probably many more times than once) has said "I'm wrong". And that's why you're not in it.
  17. martillo Registered Senior Member

    Then you should do that.
  18. AlphaNumeric Fully ionized Moderator

    In regards to what?

    I'm not being hypocritical in holding you to some requirement which I myself am unwilling to be held to. I have done physics research myself and had to bin months of work when it turned out to be a mistake. I've had to do corrections on papers I've written in order to satisfy the criticisms of my work received during peer review for a journal. I've had to correct things in my thesis. I have repeatedly submitted my work for peer review and have made mistakes which I've acknowledged. And as a result I understood more.

    This is not the behaviour you engage in. Furthermore you ignored almost the entirety of my post. I'll assume you can't retort my criticisms of your level of knowledge or that I've demonstrated claims of yours false. You also ignored direct questions, which doesn't do wonders for your wish to be seen as intellectually honest and worth listening to.

    Tell you what, I'll make it easier for you by simply listing a few straightforward questions I'd like you to answer:

    1. Do you understand / have you done series expansions of functions, such as Taylor expansions and Fourier expansions?

    2. Do you understand vector spaces in general, not just those relating to actual vectors?

    3. Do you understand the notion of changes of basis in vector spaces?

    4. For linear differential equations do you understand any linear combination of solutions is itself a solution?

    5. Do you understand that such linear combinations are not the same linear combinations referred to in Q1?

    6. Do you understand the linear combinations referred to in Q1 have nothing to do with possible solutions of differential equations?

    7. Do you understand that writing a solution to a given differential equation via the expansions of Q1 need not satisfy the equation term by term, in the sense used in the explicit example in my previous post?

    8. Have you submitted your work to journals? If not, why not? If so, what did they say?

    If you disagree with any of the statements made in the questions explain why. Is that direct and straightforward enough for you?
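    Questions 4 and 7 can even be illustrated numerically in a few lines (a sketch of my own, not part of the post; the grid and coefficients are arbitrary): for L(y) = y'' + y, any linear combination A cos x + B sin x is a solution, while an individual power-series term such as y = x is not:

```python
import numpy as np

x = np.linspace(0.0, 2.0, 50)

def L(f, d2f):
    """Evaluate y'' + y on the grid, given y and its second derivative."""
    return d2f(x) + f(x)

# Q4: any linear combination A cos + B sin solves y'' + y = 0 ...
A, B = 1.7, -0.3
comb = lambda z: A*np.cos(z) + B*np.sin(z)
comb2 = lambda z: -A*np.cos(z) - B*np.sin(z)   # second derivative of comb
assert np.allclose(L(comb, comb2), 0.0)

# Q7: ... but a single power-series term, e.g. y = x, does not:
# L(x) = 0 + x, which is not identically zero.
term = lambda z: z
term2 = lambda z: np.zeros_like(z)
assert not np.allclose(L(term, term2), 0.0)
```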
  19. przyk squishy Valued Senior Member

    Yup. It's from his speech on cargo cult science.
  20. tsmid Registered Senior Member

    x' is the position of the reflecting mirror in the moving system (i.e. the distance between the source and reflector). x=x'+vt is the corresponding position in the stationary system (relative to which the source and reflector are moving). At time t=0, x=x', so the distance is the same here in both reference frames.

    The fact that the wavelength transforms as well is only an additional characteristic of light waves (due to the fact that the speed of light is constant). For other waves, the wavelength does not exhibit a Doppler effect, only the frequency. And the frequency change is merely caused by the change of the distance between the source and receiver, which means that subsequent pulses have a different distance (and thus a different time) to travel.

  21. tsmid Registered Senior Member

    No, you haven't disproved it. You merely gave an example that resulted in an inconsistency, which however was due to the fact that you did the substitution for the differentials only for one variable rather than doing it for both and adding the two (as the definition of the total differential requires). I also mentioned already that one could do the variable substitution before the differentiation, with the same result. So your general assertion that the differentials cannot be treated like normal algebraic quantities is incorrect (for linear functions anyway, and linearity is actually already implied by the factor 1/2 in 1/2*tau(x2,t2)=tau(x1,t1)).

    So the conclusion on my page that, according to Eq.(11), there can not be a general function tau satisfying the equation 1/2*tau(x2,t2)=tau(x1,t1) unless v=0, is correct. Your argument that the Lorentz transformation (or indeed any function of the form tau=alpha*(t-vx/c^2)) satisfies the equation does not invalidate this conclusion because t and x can not take on arbitrary values here but only those given by the equations x1=x'+v*t1 , t1=x'*a1 ; x2=v*t2 , t2=x'*a2 , where a1=1/(c-v) and a2=1/(c-v) +1/(c+v) . So tau=alpha*(t-vx/c^2) can not be interpreted as a general function of the two variables x and t.

    The inconsistency of Einstein's approach is indeed readily apparent if one doesn't reverse the direction of the light signal but has for instance a partially transparent detector at distance x' and a second detector at distance 2x' (rather than 0) i.e. a1=1/(c-v) and a2=2/(c-v). As a general transformation equation for the space and time variables should be independent of the path of the light signal, Einstein should get the same transformation for this case as well, but because he now has 2x' rather than 0 as an argument on the left hand side, his way of doing the derivatives would now yield

    2*dtau/dx' + 1/(c-v)*dtau/dt = dtau/dx' + 1/(c-v)*dtau/dt

    i.e. dtau/dx' = 0, which looking at his original equation for the reversed light path (see ) would only be consistent if v=0.

    On the other hand, my general Eq.(11) does hold in any case. In the case a1=1/(c-v) and a2=2/(c-v) it is indeed always satisfied (i.e. any linear function of x and t would satisfy the equation 1/2*tau(x2,t2)=tau(x1,t1) ) whereas for a1=1/(c-v) and a2=1/(c-v) +1/(c+v) (light reversal) it can only be satisfied for v=0. And that suggests in fact just the identity 'transformation' as the only consistent transformation here.

  22. tsmid Registered Senior Member

    The fact that the tank is open, i.e. the object can move freely within the water rather than being constrained by the walls of the tank (which results in the object only being able to move with the water).

    You only come to that conclusion because you are arbitrarily enforcing a single continuous wave function throughout space. This is physically unjustified, and indeed inconsistent if you have physically separated regions. As an example, consider a water tank divided in two by a solid wall. If you independently generate waves in both halves, then these are completely independent of each other and cannot be described by one continuous wave function. Trying nevertheless to define one is merely a logical contradiction in terms (and just calling it 'quantum mechanical' doesn't change anything about it; the energy conservation law would still be violated, for instance).

    How in practice do you define a barrier and its thickness here?

    Last edited: Apr 25, 2010
  23. przyk squishy Valued Senior Member

    No it isn't. What Einstein said in his paper about the variable x' was:
    That's all. x' is a variable that you can use to characterize the position of the moving mirror compared to the origin of the moving frame, but it is using the rest frame's notion of distance. Nowhere does Einstein say that x' is the distance between the origin and the mirror as measured in the moving frame. You yourself appear to understand this when you say:
    so you've either forgotten this important distinction or you never understood it to begin with.

    It also contradicts your assumption that the distance light travels is invariant. Suppose a light pulse travels a distance of "ten wavelengths" from a source to something that absorbs it. That distance isn't invariant, and in fact it transforms asymmetrically (i.e. it depends on the direction the pulse is travelling in with respect to the direction of the boost). So when you complain that the Lorentz transformation is asymmetric with respect to a flip in the sign of x, you're complaining about the Lorentz transformation actually getting something right. It's a perfectly sensible, correct prediction of the Doppler effect.

    Correction: the Galilean transformation predicts an invariant wavelength in its prediction of the Doppler effect.
