Retrocausality in action

This makes it clear that Alice and Bob should be able to compare their measurements and decide whether or not their respective photons are entangled or separable with no information from Victor; they should be able to deduce Victor's choice of whether or not to entangle the photon pairs.
I can see why you might get that impression from the text you quoted, but that text comes from the introduction of the paper. Details given later in the article, not to mention the actual quantum theory the experiment is based on, contradict your interpretation.

Specifically: if you work in this retrocausal "picture" of the experiment, then an important detail is that when Victor performs an entangling measurement, he effectively retroactively projects the state shared by Alice and Bob randomly onto one of four different entangled states, only one of which Alice and Bob are actually interested in. So to see correlations they need to know which of the photons they receive are in the entangled state they're interested in, and they can only get that information from Victor.

This kind of detail is clear if you're already familiar with the theory behind this sort of experiment, but it is also stated in a few places in the paper. For instance, near the beginning of page 4:
When used for a BSM, our BiSA can project onto two of the four Bell states, namely onto $$| \Phi^{+} \rangle_{23} = ( | HH \rangle_{23} + | VV \rangle_{23} ) / \sqrt{2}$$ (both detectors in b'' firing or both detectors in c'' firing) and $$| \Phi^{-} \rangle_{23} = ( | HH \rangle_{23} - | VV \rangle_{23} ) / \sqrt{2}$$ (one photon in b'' and one in c'' with the same polarization).
Here they explain that their implementation of the Bell state (entangling) measurement only distinguishes two of the four Bell states. So their Bell state measurement returns $$\Phi^{+}$$ a quarter of the time, $$\Phi^{-}$$ a quarter of the time, and an inconclusive result half the time. On the next page, they say:
For each pair of photons 1&4, we record the chosen measurement configurations and the 4-fold coincidence detection events. All raw data are sorted into four subensembles in real time according to Victor's choice and measurement results. After all the data had been taken, we calculated the polarization correlation function of photons 1 and 4. It is derived from the coincidence counts of photons 1 and 4, conditional on projecting photons 2 and 3 to $$| \Phi^{-} \rangle_{23} = (| HH \rangle_{23} - | VV \rangle_{23}) / \sqrt{2}$$ when the Bell-state measurement was performed, and to $$| HH \rangle_{23}$$ or $$| VV \rangle_{23}$$ when the separable state measurement was performed.
Here they say that their correlation function for the entangling measurement cases is also conditioned on Victor getting the $$\Phi^{-}$$ result.
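
To make the role of this conditioning concrete, here is a minimal numerical sketch (mine, not taken from the paper): it assumes both sources emit $$|\Phi^{+}\rangle$$ pairs and that Victor's Bell-state measurement is ideal, resolving all four Bell states, unlike the partial BSM used in the actual experiment. The names and analyzer settings are purely illustrative.

```python
import numpy as np
from itertools import product

# Polarization basis vectors.
H = np.array([1.0, 0.0])
V = np.array([0.0, 1.0])

def kron(*vs):
    """Tensor product of several vectors."""
    out = np.array([1.0])
    for v in vs:
        out = np.kron(out, v)
    return out

# Source I emits photons 1 & 2, source II emits photons 3 & 4,
# each pair (by assumption here) in the |Phi+> state.
phi_plus = (kron(H, H) + kron(V, V)) / np.sqrt(2)
psi_1234 = np.kron(phi_plus, phi_plus)   # ordering: photons 1, 2, 3, 4

# Bell basis for Victor's photons 2 & 3 (an idealised, complete BSM).
bell_23 = {
    "Phi+": (kron(H, H) + kron(V, V)) / np.sqrt(2),
    "Phi-": (kron(H, H) - kron(V, V)) / np.sqrt(2),
    "Psi+": (kron(H, V) + kron(V, H)) / np.sqrt(2),
    "Psi-": (kron(H, V) - kron(V, H)) / np.sqrt(2),
}

def analyzer(theta):
    """Two orthogonal linear-polarization outcomes at angle theta."""
    return (np.cos(theta) * H + np.sin(theta) * V,
            -np.sin(theta) * H + np.cos(theta) * V)

def joint_probs(alpha, beta):
    """P(Alice outcome, Bob outcome, Victor's Bell result)."""
    probs = {}
    for (ia, a), (ib, b) in product(enumerate(analyzer(alpha)),
                                    enumerate(analyzer(beta))):
        for name, bell in bell_23.items():
            amp = kron(a, bell, b) @ psi_1234   # <a|_1 <bell|_23 <b|_4 |psi>
            probs[(ia, ib, name)] = abs(amp) ** 2
    return probs

def correlation(alpha, beta, victor_result=None):
    """E = P(same) - P(different), optionally conditioned on Victor's result."""
    probs = joint_probs(alpha, beta)
    sel = {k: p for k, p in probs.items()
           if victor_result is None or k[2] == victor_result}
    total = sum(sel.values())
    same = sum(p for (ia, ib, _), p in sel.items() if ia == ib)
    return (2 * same - total) / total

alpha, beta = 0.0, np.pi / 8   # arbitrary analyzer settings
print("conditioned on Phi-:", correlation(alpha, beta, "Phi-"))  # ~0.707
print("unconditioned:      ", correlation(alpha, beta))          # ~0.0
```

For this toy choice of states, the correlation conditioned on Victor's $$\Phi^{-}$$ result comes out to about 0.707 (it equals $$\cos 2(\alpha+\beta)$$ for the post-selected $$|\Phi^{-}\rangle_{14}$$ state), while the unconditioned correlation is exactly zero: without Victor's outcome record, Alice and Bob see no correlations at all.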

Let me ask it another way, putting Victor's choice ahead of Alice and Bob's measurements: if Victor produces a stream of entangled photon pairs and sends one each to Alice and Bob, you agree that they should be able to measure them, compare their results, and after sufficient iterations determine whether or not the stream is correlated WITHOUT asking Victor, yes?
That depends. If Victor always sends the same entangled state, then yes. If Victor sends a random stream of the four different entangled Bell states, then no. They just see completely uncorrelated results in that case, and they need information from Victor to filter out the three entangled states they're not interested in.

That's the whole point of Bell's experiment!
This is an aside, but the experiment described in the paper doesn't actually perform a Bell test (though they could have, and there's not much doubt about the result they'd have obtained if they had).
 
What "traditional definition" are you working with? MWI is deterministic in the sense that if you know the initial quantum state of the entire universe (or an isolated subsystem), and you know everything there is to know about the interactions and evolution taking place, then the Schrödinger equation predicts a unique future state at any future time. But individual observers generally won't have access to complete information about that state for various reasons.


Huh? The branching that would occur according to an MWI account of the experiment you cited in the OP is probably quite a bit more complicated than you imagine and it doesn't happen all at once. Assuming "free will" (for simplicity, not by necessity), then the global quantum state generally splits into a number of branches corresponding to the number of measurement outcomes for every measurement performed. So if we just look at one iteration of the experiment, then by the time Alice and Bob have performed their measurements the global state has split into four branches (because Alice and Bob each have two outcomes), and when Victor performs his measurement (which ideally has four outcomes), the global quantum state further splits into a total of sixteen branches. Each time, exactly which branching happens depends on the measurements that Alice, Bob, and Victor choose to perform.

If you drop the "free will" assumption and you imagine modeling Alice, Bob, and Victor as physical systems, then there's additional branching depending on the number of decisions they could make. For example, as you say, in the experiment Victor was a quantum random number generator with two possible outcomes, and in a more complete MWI description, you'd consider that the measurements Victor performs are correlated with this outcome, which doubles the number of branches. In the experiment, Alice and Bob each chose between 3 different measurements they could perform, so if you also think of Alice and Bob as quantum random number generators, that's a total of 16 x 2 x 3 x 3 = 288 branches. That's 288 branches per iteration (generation of four photons) and assuming Alice, Bob, and Victor are no more complicated than two- or three-outcome quantum random number generators.
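
Spelling out that 288 as a worked product, using just the numbers already given above:

$$\underbrace{(2 \times 2)}_{\text{Alice's and Bob's outcomes}} \times \underbrace{4}_{\text{Victor's measurement outcomes}} \times \underbrace{2}_{\text{Victor's QRNG choice}} \times \underbrace{3 \times 3}_{\text{Alice's and Bob's setting choices}} = 288.$$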


With all that said, if these responses don't make too much sense to you, I wouldn't worry about it too much. I've never really considered determinism a selling point of MWI anyway.

All your responses were great in my estimation. Thanks for linking the paper in the archive. If a QM interpretation predicts that QM is deterministic, makes a different prediction than QM, or makes a new prediction, it's not an interpretation of QM. QM interpretations use determinism to create student-friendly ways to interpret QM. The interpretations must make the exact same predictions as QM. The most interesting interpretation I've read, though the author doesn't claim it's a formal interpretation, was posted on an Internet physics forum by Mr_Homm.
 
Does this help?:
Many-worlds reconciles the observation of non-deterministic events, such as random radioactive decay, with the fully deterministic equations of quantum physics.
In many-worlds, the subjective appearance of wavefunction collapse is explained by the mechanism of quantum decoherence, and this is supposed to resolve all of the correlation paradoxes of quantum theory, such as the EPR paradox[13][14] and Schrödinger's cat,[1] since every possible outcome of every event defines or exists in its own "history" or "world".
https://en.wikipedia.org/wiki/Many-worlds_interpretation

Retrocausality and MWI seem to be separate issues. With the former, something happening to an entangled electron in the present could be affected by something happening to its entangled twin electron in the future.

With MWI, there is a world where the electron event itself occurs and another world where the event doesn't occur.

This opening part in your article sounds pretty deep mentally:
Quantum entanglement is a state where two particles have correlated properties: when you make a measurement on one, it constrains the outcome of the measurement on the second, even if the two particles are widely separated. It's also possible to entangle more than two particles, and even to spread out the entanglements over time, so that a system that was only partly entangled at the start is made fully entangled later on.

This sequential process goes under the clunky name of "delayed-choice entanglement swapping." And, as described in a Nature Physics article by Xiao-song Ma et al., it has a rather counterintuitive consequence. You can take a measurement before the final entanglement takes place, but the measurement's results depend on whether or not you subsequently perform the entanglement.

Since SCIFORUMs seems to have a decent number of Skeptics, I would be interested in their reaction to the experiment explained in your article as below:
Delayed-choice entanglement swapping consists of the following steps. (I use the same names for the fictional experimenters as in the paper for convenience, but note that they represent acts of measurement, not literal people.)

  1. Two independent sources (labeled I and II) produce pairs of photons such that their polarization states are entangled. One photon from I goes to Alice, while one photon from II is sent to Bob. The second photon from each source goes to Victor. (I'm not sure why the third party is named "Victor".)
  2. Alice and Bob independently perform polarization measurements; no communication passes between them during the experiment—they set the orientation of their polarization filters without knowing what the other is doing.
  3. At some time after Alice and Bob perform their measurements, Victor makes a choice (the "delayed choice" in the name). He either allows his two photons from I and II to travel on without doing anything, or he combines them so that their polarization states are entangled. A final measurement determines the polarization state of those two photons.
The results of all four measurements are then compared. If Victor did not entangle his two photons, the photons received by Alice and Bob are uncorrelated with each other: the outcome of their measurements is consistent with random chance. If Victor entangled the photons, then Alice and Bob's photons have correlated polarizations—even though they were not part of the same system and never interacted. (This is the "entanglement swapping" portion of the name.)
https://arstechnica.com/science/201...ects-results-of-measurements-taken-beforehand
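
For what it's worth, the algebra behind step 3 is the standard entanglement-swapping identity. Writing it for the case where each source emits a $$|\Phi^{+}\rangle$$ pair (a choice made here purely for illustration; the Bell states the paper's sources actually emit change only the labels, not the structure), the four-photon state re-expanded in the Bell basis of Victor's photons 2 and 3 is

$$|\Phi^{+}\rangle_{12} \otimes |\Phi^{+}\rangle_{34} = \tfrac{1}{2}\left( |\Phi^{+}\rangle_{14}|\Phi^{+}\rangle_{23} + |\Phi^{-}\rangle_{14}|\Phi^{-}\rangle_{23} + |\Psi^{+}\rangle_{14}|\Psi^{+}\rangle_{23} + |\Psi^{-}\rangle_{14}|\Psi^{-}\rangle_{23} \right).$$

So projecting photons 2 and 3 onto any one Bell state leaves photons 1 and 4 in the corresponding Bell state, even though those two photons never interacted. Which Bell state photons 1 and 4 end up in is random, which is the detail the earlier replies in this thread keep pointing to.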

One relevant issue I see there is the question of how this retrocausality would affect events at a more macro level and in human experience. For example, does it correlate to events in humans' lives like precognition or premonitions? Let's say that two systems of particles are entangled like the article says can happen, instead of just a single pair. Let's also say that those two systems of particles happen to make up a major component of a particular person or else are witnessed by that person. For example, suppose that a set of photons is witnessed by a person in 500 BC and those photons are entangled with another set of photons that undergo change in 100 AD. Would the apparition or particle pattern witnessed by the person in 500 BC undergo a related change?

A more fundamental issue is whether the principle of retrocausality at least opens the door to other possibilities in theoretical physics that Skeptics have considered absolutely impossible. For example, let's say that you reject what I wrote above about some person in 500 BC getting a light image of a later event because you see light patterns as too vague or incoherent for a person to use. There is still a more fundamental objection to the passage above that is more common among skeptics, which is the principle of Forward Causality. That is, to talk about Precognition or Premonitions has to be as nonsensical as talking about a flat earth because it's simply impossible by the most fundamental physics. To say that a person could directly experience or observe in some way an event occurring later in time would be so illogical, by Forward Causality principles and common sense, that it would be like claiming that the earth is flat. And so the relevant question I see, beyond just the experiment done in the article, is whether, if retrocausality is proven to be a real phenomenon, it opens the door to the possibility of precognition/premonitions etc. by showing that those paranormal predictions are no longer as impossible as claiming that the earth is flat.

I raised the question of whether supernatural predictions like the Biblical ones are even possible in my thread here: (http://www.sciforums.com/threads/ho...t-prophecies-and-how-do-we-know.158973/page-3).
 
Just to be clear about two things with this sort of experiment, which I'm not sure are made clear in the article cited in the OP: 2) Victor has no control over which particular subset of Alice and Bob's data shows nonlocal correlations. In all experiments of this type, Victor's ability to entangle the photons is nondeterministic: ideally what Victor would do is perform a Bell state measurement which randomly projects his photons into one of four different entangled states. Then Victor knows that Alice and Bob's results could show nonlocal correlations in those four subsets of cases (provided Alice and Bob are making the right measurements). In reality, real world implementations of Bell state measurements aren't successful a significant fraction of the time (on the order of 50% to 75%).

So overall, what Victor has is a way of identifying (but not choosing or controlling) a subset of Alice and Bob's measurement results that exhibits nonlocal correlations, without actually having to look at their data.
A defect I see in what you answered is that you are saying that Victor doesn't have a way to choose a subset of their measurement results. However, if you check the article, it says that Victor himself does in fact make a real choice that really decides what happens to the entangled particles, and then his choice does in fact correlate directly to what Alice and Bob witness:
At some time after Alice and Bob perform their measurements, Victor makes a choice (the "delayed choice" in the name). He either allows his two photons from I and II to travel on without doing anything, or he combines them so that their polarization states are entangled. A final measurement determines the polarization state of those two photons.
Do you see what I mean, Przyk?
 
That's the whole point: there isn't really anything tangible that Victor does determine. The article you linked to is a bit misleading in this regard, when it says:

But when Victor chooses to "entangle the photons" as this article puts it, what he actually does is perform a Bell state measurement which, in the case of the experiment cited in the OP (arXiv preprint available here), could distinguish two of the four Bell states and gave an inconclusive result the rest (50%) of the time. In the subset of cases where he gets a given Bell state, he can know that the corresponding subset of Alice and Bob's results will show nonlocal correlations (provided Alice and Bob performed the right measurements, which is arranged beforehand). But he has no control of what that subset will be: he only learns it by looking out for certain results to measurements he performs, which occur randomly.
Even if he did not have control over which subset that would be, the fact that he caused nonlocal correlations to occur in the past does show an example of retrocausality.

The same thing goes for this:
Specifically: if you work in this retrocausal "picture" of the experiment, then an important detail is that when Victor performs an entangling measurement, he effectively retroactively projects the state shared by Alice and Bob randomly onto one of four different entangled states, only one of which Alice and Bob are actually interested in. So to see correlations they need to know which of the photons they receive are in the entangled state they're interested in, and they can only get that information from Victor.
Even if Alice and Bob didn't know which particles were entangled, he would still be able to create an entanglement and retroactively cause an effect in Alice and Bob's present time.

Furthermore, let's say that Alice and Bob do succeed in getting the information from Victor before they do their measurement, and then later on Victor performs his entangling work that he promised. In that instance, Alice and Bob would in fact be able to observe an event in their own time that was caused directly by an event occurring at a later date - Victor's later implementation of his promise.

Let's say, though, that Victor has an intervening event: for some reason he changes his mind and doesn't follow through with his promise. Alice and Bob would then find no nonlocal correlations and think that the result went against his promise and against their expectations. In that case, Victor's later choice, exercised through his future control, would still retroactively determine the result of Alice and Bob's measurements.

Do you see what I mean about how this resolves the issue that you are raising, Przyk?
 
A defect I see in what you answered is that you are saying that Victor doesn't have a way to choose a subset of their measurement results. However, if you check the article, it says that Victor himself does in fact make a real choice that really decides what happens to the entangled particles, and then his choice does in fact correlate directly to what Alice and Bob witness

In the experiment, Victor can measure either the polarisation of a couple of photons or perform something called a (partial) "Bell-state measurement" on them. Both measurements have multiple possible outcomes. Victor can choose which of the two measurements he will do, but he cannot control what the outcome of that measurement will be -- that is completely random.


Even if he did not have control over which subset that will be, the fact that he caused nonlocal correlations to occur in the past does show an example of retrocausality.

He doesn't cause "nonlocal correlations to occur in the past". He gets results such that, when you postselect on some of the specific results (i.e. pick out cases where he got a certain result and ignore the rest) then you find certain correlations between Alice's and Bob's results. This is just a different way to say that the experiment shows a three-way correlation between Alice's, Bob's, and Victor's results when Victor does the Bell-state measurement.

The reason for the talk of "retrocausality" has to do with a certain mathematical feature of quantum mechanics, the theory that predicts these correlations and that the experiment was testing. This was a type of experiment in which you do certain measurements on quantum states at different sites (three sites, in this case). According to quantum mechanics, the correlations you end up with in this kind of experiment depend on what measurements are done at each site, but are independent of the order in which the measurements are done in time. So while in the experiment Victor does his measurement after Alice and Bob do theirs, for the purpose of predicting the correlations using quantum mechanics, you can just as well pretend Victor did his measurement right at the beginning. For certain purposes it's more convenient/interesting to think of the experiment this way, and according to quantum mechanics the end result is the same either way.
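
Here is a minimal numpy sketch of that order-independence claim (the state and the particular outcomes chosen are my own, purely for illustration): projectors acting on different photons commute, so the Born-rule probability for a joint set of outcomes is the same whichever measurement you treat as happening first.

```python
import numpy as np

I2 = np.eye(2)
H = np.array([1.0, 0.0])
V = np.array([0.0, 1.0])

def proj(vec):
    """Projector onto a (normalised) state vector."""
    return np.outer(vec, vec)

# Four photons ordered (1, 2, 3, 4): Alice measures 1, Victor measures 2 & 3, Bob measures 4.
phi_plus  = (np.kron(H, H) + np.kron(V, V)) / np.sqrt(2)
phi_minus = (np.kron(H, H) - np.kron(V, V)) / np.sqrt(2)
psi = np.kron(phi_plus, phi_plus)                           # |Phi+>_12 (x) |Phi+>_34

P_alice  = np.kron(proj(H), np.kron(I2, np.kron(I2, I2)))   # Alice finds H on photon 1
P_victor = np.kron(I2, np.kron(proj(phi_minus), I2))        # Victor finds Phi- on photons 2 & 3
P_bob    = np.kron(I2, np.kron(I2, np.kron(I2, proj(H))))   # Bob finds H on photon 4

# Projectors on disjoint subsystems commute...
print(np.allclose(P_alice @ P_victor, P_victor @ P_alice))  # True

# ...so the joint probability is the same whether Victor measures last or first.
p_victor_last  = np.linalg.norm(P_victor @ (P_bob @ (P_alice @ psi))) ** 2
p_victor_first = np.linalg.norm(P_bob @ (P_alice @ (P_victor @ psi))) ** 2
print(p_victor_last, p_victor_first)                        # both 0.125
```

The 0.125 is just the joint probability of this one particular triple of outcomes for the toy state; the only point is that the two orderings agree, which is what licenses the "pretend Victor measured first" picture.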

What the experiment doesn't show is any retrocausality in the sense of being able to send messages back in time or the like.


Furthermore, let's say that Alice and Bob do succeed in getting the information from Victor before they do their measurement

There is no way to access such information. Alice and Bob don't know anything at all until they measure something, and the correlations from their measurements that they observe just between themselves are the same whichever measurement Victor does (which is the only thing controllable by Victor in the experiment).

Or at least: 1) there is no way to access such information according to quantum mechanics (technically because all the information available to Alice and Bob is summarised by the marginal quantum state they share before they measure anything, and this is independent of whatever controllable manipulations or measurements Victor does), and 2) the experiment doesn't show any result different from what quantum mechanics predicts.
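
The technical statement in point 1) can also be checked directly in a few lines (again a toy sketch assuming $$|\Phi^{+}\rangle$$ source states and an idealised complete Bell-state measurement; none of this is from the paper): whatever measurement Victor performs, as long as his outcomes are not fed back, Alice and Bob's reduced (marginal) state is unchanged, in this case the maximally mixed state.

```python
import numpy as np

I2 = np.eye(2)
H = np.array([1.0, 0.0])
V = np.array([0.0, 1.0])

def proj(vec):
    return np.outer(vec, vec)

def trace_out_23(rho):
    """Reduced density matrix of photons 1 & 4 (ordering 1, 2, 3, 4)."""
    r = rho.reshape(2, 4, 2, 2, 4, 2)
    return np.einsum('iajkal->ijkl', r).reshape(4, 4)

phi_plus  = (np.kron(H, H) + np.kron(V, V)) / np.sqrt(2)
phi_minus = (np.kron(H, H) - np.kron(V, V)) / np.sqrt(2)
psi_plus  = (np.kron(H, V) + np.kron(V, H)) / np.sqrt(2)
psi_minus = (np.kron(H, V) - np.kron(V, H)) / np.sqrt(2)

rho = np.outer(np.kron(phi_plus, phi_plus), np.kron(phi_plus, phi_plus))

def after_victor(rho, basis_23):
    """Average state after Victor measures photons 2 & 3 in the given basis
    and nobody is told the outcome."""
    out = np.zeros_like(rho)
    for s in basis_23:
        P = np.kron(I2, np.kron(proj(s), I2))
        out += P @ rho @ P
    return out

bell_basis      = [phi_plus, phi_minus, psi_plus, psi_minus]
separable_basis = [np.kron(a, b) for a in (H, V) for b in (H, V)]

rho_14_untouched = trace_out_23(rho)
rho_14_after_bsm = trace_out_23(after_victor(rho, bell_basis))
rho_14_after_sep = trace_out_23(after_victor(rho, separable_basis))

print(np.allclose(rho_14_untouched, rho_14_after_bsm))   # True
print(np.allclose(rho_14_untouched, rho_14_after_sep))   # True
print(np.allclose(rho_14_untouched, np.eye(4) / 4))      # True: maximally mixed
```

Those first two checks are the no-signalling statement: Victor's choice of measurement leaves Alice and Bob's marginal statistics untouched, so nothing he chooses to do can be read off by them without his outcome record.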

P.S. this thread is five years old.
 
What I'm asking is, how can this undeniable demonstration of the future affecting the past be encapsulated in an MWI interpretation?
Incorrect question, because it is in no way undeniable. A simple way to deny it is to describe the whole experiment in dBB theory or another causal realistic interpretation of QM. In such a description, you will have some faster than light causal influences (this is unavoidable, Bell's theorem tells this), but no future affecting the past.
 
In such a description, you will have some faster than light causal influences (this is unavoidable, Bell's theorem tells this), but no future affecting the past.
How is a faster-than-light causal influence different from the future affecting the past? See tachyons.
 
How is a faster-than-light causal influence different from the future affecting the past? See tachyons.
For example, causal influence from the future is simply paradoxical; think about the grandfather paradox. FTL influences in a preferred frame are instead completely unproblematic from the point of view of classical causality.
 
Again, how is faster than light different from causality from the future?

(It isn't)
Let us propose a scenario where the laboratory (non-time-dilated) rest frame is shared by most of the mass/energy of the Big Bang, starting at t=0, the start of the expansion of the known universe.

Think about the possibility of a frame of reference in relative motion to this that is capable of seeing an extreme amount of relative time dilation as it might affect Doppler red shifting of a photon leaving the center of the known universe at t=0. In such a scenario, it would be possible for every energy transfer event (unrelated to the propagation of the photon) in the rest of the known universe to transpire BEFORE the wavefront of the extreme red shifted photon saw the event of its SECOND peak amplitude emerge. That's one tick of a timepiece based on the frequency clock of the red shifted photon.

Next imagine a second photon and frame of reference traveling at an equal speed in the opposite direction, the second photon quantum entangled with the first, so that it is extremely blue shifted. It has also been traveling in the diametrically opposite direction for just as long as the red shifted one, or since the Big Bang. How many peak amplitude wavecrests (ticks of the clock) of the extreme blue shifted photon exist since the Big Bang? As many ticks as you might want, of course. There isn't a particular limit placed on time dilation in that respect, so long as one does not exceed the speed of light.

Finally, place an observer/detector in the path ahead of one of the photons and determine its polarization in a way that causes the entanglement state of the other red/blue shifted photon to instantly flip polarization (entanglement) states.

The difference in the number of wavelengths (ticks of the atomic clock) of each photon has no effect on the near instantaneous timing of the entanglement state transition of the other.

In this scenario, causality depends on which entangled photon you choose to observe. For the "past to affect the future", you detect the red-shifted one "first"; for the "future to affect the past", you detect the blue-shifted one "first". Except that, from the point of view of this thought experiment, there is no "first" or "second" quantum entanglement state flip, because whichever one you detect affects the polarization state flip of the other one more or less instantaneously. In the lab or "rest" frame where they started, each photon has undergone an equal number of peak amplitude "ticks" of their respective clocks, and there is no observed Doppler shift at all.

Of course, in order to understand the dynamics of entanglement, you must first acknowledge that a Special Relativistic description of the dynamics of motion of bulk matter/energy at or below the speed of light will not suffice to explain the order of the timing of causality, and also that Minkowski's edict that no two events in a universe separated/connected by means of the limit imposed by the finite speed of propagation of light are EVER "simultaneous" must somehow be wrong. His mistake was making time itself proportional to the propagation of light. At speeds less than or equal to c, it undoubtedly is, but this is not the case for quantum entanglement.

The entanglement events referenced are simultaneous in the lab or rest frame the photons were created in. They are simultaneous in both of the frames in relative motion to the lab or rest frame. The only conclusion possible is that the speed of propagation of light is not the basis of time itself. The basis of time itself either needs to be faster than the speed of the propagation of light or else light cannot propagate. The behavior of time dilation observed in matter demands this conclusion. Quantum entanglement and time dilation is the only reason matter exists. If it were not, matter itself would not have the temporal permanence it manifestly does. The only choice for the basis of time itself is quantum entanglement. Quantum entanglement is not actually a velocity, and so proportional mathematics won't work in this domain. Welcome to one of Gödel's little inconsistencies of the system of mathematical reasoning we call Special Relativity.

ALL of this post is based on known science AND the mathematics that goes with it. The inconsistencies cited are science also. Science will NEVER be a complete body of knowledge. Neither will mathematics. Gödel did many mathematical proofs of the latter point.

But I can understand completely how it might appear that this is pseudoscience to someone uninitiated to both the mathematics and the physics and who has not pondered the problem for literally a lifetime, as I have. The conclusion reached does nothing to disperse the strangeness of the problem or the counterintuitive nature of the twin paradox, which is as physical and real as it gets in this strange universe.
 
The discrete and unique nature of quantum entanglement identifies it as something unrelated to a velocity but intimately related to time itself. This is a fact, even if it means there is not yet a suitable mathematical means for relating it to quantum spin or anything else that moves without division by zero.

The suggestion of mine that time and entanglement and quantum spin ARE related is of course speculative.

Ponder for a moment the idea that the matter comprising the atomic and chemical composition of the twins in the twin paradox is largely quantum entangled. What does this suggest about the temporal anomaly of aging at different rates when only one of them is sent on a round trip relativistic joyride? They are timepieces as well. What, internally, determines the rate at which each one ages?
 
There are a lot of words here trying to make the case that FTL and time travel are not the same thing but it is a well-established fact that they are equivalent. FTL in one frame is literally travelling backwards in time in another frame. These are not profound statements, they are a consequence of SR.

From here, #18
A better argument against FTL travel is the Grandfather Paradox. In special relativity, a particle moving FTL in one frame of reference will be travelling back in time in another. FTL travel or communication should therefore also give the possibility of travelling back in time or sending messages into the past. If such time travel is possible, you would be able to go back in time and change the course of history by killing your own grandfather. This is a very strong argument against FTL travel, but it leaves open the perhaps-unlikely possibility that we may be able to make limited journeys at FTL speed that did not allow us to come back. Or it may be that time travel is possible and causality breaks down in some consistent fashion when FTL travel is achieved. That is not very likely either, but if we are discussing FTL then we had better keep an open mind.

I think the difference between FTL travel and retrocausality referred to in the OP of this thread is that the retrocausality in this thread is retrocausal from all frames. The cause is in the future light cone of the effect.
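
For reference, the standard argument behind "FTL in one frame is travelling backwards in time in another" is a one-line Lorentz transformation (textbook special relativity, nothing specific to this experiment). If a signal covers $$\Delta x$$ in time $$\Delta t$$ at speed $$w = \Delta x / \Delta t > c$$ in one frame, then in a frame moving at velocity $$v$$ (with $$|v| < c$$) along the same direction, with $$\gamma = 1/\sqrt{1 - v^{2}/c^{2}}$$,

$$\Delta t' = \gamma\left(\Delta t - \frac{v\,\Delta x}{c^{2}}\right) = \gamma\,\Delta t\left(1 - \frac{v w}{c^{2}}\right),$$

which is negative for any ordinary frame with $$c^{2}/w < v < c$$: in that frame the signal arrives before it is sent.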
 
There are a lot of words here trying to make the case that FTL and time travel are not the same thing but it is a well-established fact that they are equivalent. FTL in one frame is literally travelling backwards in time in another frame. These are not profound statements, they are a consequence of SR.

From here, #18


I think the difference between FTL travel and retrocausality referred to in the OP of this thread is that the retrocausality in this thread is retrocausal from all frames. The cause is in the future light cone of the effect.
What I am suggesting is that simultaneity exists for quantum entanglement state flips in all frames of reference, independent of whether their light cones are connected by means of a 'light' cone enclosed path. It's like time travel, but only for discrete, quantum entangled particles, not bulk masses, and it only works because the propagation of light 'light cones' are fundamentally not the basis of time itself. That's quite a bit more complicated.

Minkowski's edict that no two events are simultaneous is replaced by "quantum entanglement state changes are simultaneous in all frames, but time dilation (the rate at which time proceeds) is wildly different depending on both local energy density / proximity to gravitating inertial masses and relative motion".

Because of the discrete nature of entanglement, and its ability to operate over basically any distance, it may not ever be possible to correlate seemingly random quantum spin flips locally with paired transitions by means available to science. I understand, this makes it difficult, but not impossible, to study the basis of time itself, other than locally.

A framework of mutually rectilinear chronometers (connected by meter sticks) would all run at the same rate if they shared the same frame. Only a chronometer moving with respect to those sees a difference in rate (time dilation), but the difference is seen in EVERY chronometer in the stationary framework at once, not just in one direction. This strongly suggests that the basis of time itself is something inherent in the frame of the chronometer that is moving (<c), relative to the chronometers that are at rest, and also that unless relative v=c (impossible), time itself doesn't completely stop. What else is moving, or transitioning, within atomic structure, that would enable the entire structure to persist in time independent of relative state of motion? The same feature that makes it persist in time when it is at rest. It has nothing whatsoever to do with light cones or the propagation of light, or even the process of bulk matter or energy getting from point A to point B. What temporal component remains when these candidates for the basis of time itself are eliminated?

You can't understand what is meant by "time travel" without possessing a knowledge of the basis of time itself, and contemplating proportional relationships between relative velocities gets you nowhere, because the fastest process in this universe is not a velocity. You can't make a proportional relationship out of entanglement vs velocity of anything because you can't divide by zero.
 
Inertial mass also increases in the direction of relative motion (<c). The Higgs mechanism provides inertial mass (ALL directions) to quarks, electrons, electroweak bosons, neutrinos, and their anti-particles. Higgs is the only spin zero boson, which means that all quantum spin everywhere is referred or relative to its spin state. If you are looking for a basis for time itself in free space, look no further. The Higgs field is your stationary network of clocks, quantum entangled everywhere. This IS HIGHLY SPECULATIVE, but makes perfect sense.

I see it working every time I marvel at the geometric precision of the surface of the water in a still pond or the plane mirror in my bathroom. There is good reason such things are so precise, depending on how flat we can make them. Electrons on the reflective coating of that mirror get their geometrical precision in the handling of photons from somewhere other than being very round. So do the electrons in the semitransparent sensory apparatus you are viewing it with.

People miss thinking about how temporally persistent most matter is, too. There is a reason matter persists and unbound energy disperses. In looking for a basis for time, why would you fixate only on the dynamics of the latter, without trying to understand why the former is so temporally immutable?

There is power in understanding what inertia is.
 
And there is also power in understanding that this might actually be related to why the Earth and other large gravitating bodies aren't flat, or even mostly flat, on a sufficiently large and dynamic scale, at least. Or if they are, they do not remain that way for as long as we once believed the Earth to be flat.

Newton only explained this in a context so restricted that, when the radius of something is set to zero, his theory of universal gravitation yields an infinite gravitational force. Einstein's general relativity field equations suffer from the same flaw, with a physical radius in the denominator of a proportional relation, but it's a fact that we don't yet have anything better to replace them with.

Proportional relations of any kind only make sense as long as you avoid dividing by zero, or even a quantity negligibly close to zero. Relationships like those don't bring us closer to an understanding of the infinite mind of G-d either; just the limitation of the symbols necessary as tools for a finite one.

If you are happy with this state of affairs, fine, but don't believe it makes you a better scientist than someone who never stops looking for a more satisfying and nuanced answer.
 
If you are happy with this state of affairs, fine, but don't believe it makes you a better scientist than someone who never stops looking for a more satisfying and nuanced answer.
You make fair points, danshawen, and I personally am anything but satisfied with current sentiments. I do have my own pet theory resolution for all things QM and SR and it starts with the definition of "local". We think we know what "local" means when we use the word...but no one actually defines it in any rigorous, mathematical manner. If we define "local" to be a system of a certain practical size d, then for any arbitrary separation D there is a frame which would consider D = d.

The photonic version of the EPR paradox is trivially resolved in a completely local and realistic manner, for example, if we consider the frame of the entangled photons themselves.
 
You make fair points, danshawen, and I personally am anything but satisfied with current sentiments. I do have my own pet theory resolution for all things QM and SR and it starts with the definition of "local". We think we know what "local" means when we use the word...but no one actually defines it in any rigorous, mathematical manner. If we define "local" to be a system of a certain practical size d, then for any arbitrary separation D there is a frame which would consider D = d.

The photonic version of the EPR paradox is trivially resolved in a completely local and realistic manner, for example, if we consider the frame of the entangled photons themselves.
That sounds like a promising direction of inquiry, RJBeery. Well worth an extra thought or three.
 