# Retrocausality in action

Discussion in 'Physics & Math' started by RJBeery, Apr 24, 2012.

1. ### RJBeery (Natural Philosopher, Valued Senior Member)
So you're saying they would have to have information from Victor in order to make sense of their data? This doesn't ring true to me, not because I'm doubting your education, but for logical reasons:

Scenario 1: Alice and Bob compare data. They then ask Victor if that data should be correlated and he says yes. "Ahh! It IS correlated!!" they exclaim.

Scenario 2: Alice and Bob compare the same set of data. This time, Victor says that their photons should not be correlated because he took no action. "Ahh! The data is perfectly random just as we should expect!!" they exclaim.​

I hope you can understand my doubt here. You're asking me to question the article linked to, which is fine, but I want to know where the confusion lies. You know me well enough to know that I can't just "let it go"...


3. ### przyk (squishy, Valued Senior Member)
I've kind of explained this two or three times now: the article is misleading where it seems to imply Victor can "correlate" Alice's or Bob's data. He doesn't. What he actually does is perform a certain entangling measurement with four different possible outcomes (well, ideally it would be four, but the implementation in the actual experiment could only distinguish two of them), and he can use his own measurement results to identify correlated subsets of Alice's and Bob's results.

I'll illustrate with an example that simplifies things a bit. Suppose Alice and Bob only measured for horizontal polarisation (outcome 0) or vertical polarisation (outcome 1). They measure 8 sets of particles and get the following results:
Code:
 Event | Alice |  Bob
-------|-------|-------
   1   |   1   |   0
   2   |   0   |   0
   3   |   1   |   0
   4   |   0   |   1
   5   |   1   |   1
   6   |   1   |   1
   7   |   0   |   0
   8   |   0   |   1

These results show complete randomness. All of the four possible joint outcomes (00, 01, 10, and 11) occur equally often. But in this (and any other similar) set, you can identify subsets of correlated data. If you only look at subset A, consisting of events 2, 5, 6, and 7, Alice's and Bob's results look perfectly correlated. In set B, consisting of events 1, 3, 4, and 8, the results look perfectly anticorrelated.
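The subset bookkeeping above can be sketched in a few lines of Python (a hypothetical illustration of the principle, not code from the experiment):

```python
# Any set of random (Alice, Bob) bit pairs splits into a subset where the
# results agree (perfectly correlated) and a subset where they disagree
# (perfectly anticorrelated) -- the same sets A and B described above.
data = {1: (1, 0), 2: (0, 0), 3: (1, 0), 4: (0, 1),
        5: (1, 1), 6: (1, 1), 7: (0, 0), 8: (0, 1)}

subset_a = [e for e, (a, b) in data.items() if a == b]  # correlated
subset_b = [e for e, (a, b) in data.items() if a != b]  # anticorrelated

print(subset_a)  # events 2, 5, 6, 7
print(subset_b)  # events 1, 3, 4, 8
```

The point is that such subsets exist in any globally random dataset; the content of the experiment is only in how Victor identifies them.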

Things are a little more complicated in the actual experiment cited in the OP (in particular measurements in more than just the horizontal/vertical polarisation basis were performed), but the principle is the same: the data is globally uncorrelated, but you can always identify correlated subsets a bit like the sets A and B I described above. The point of the experiment is that Victor has a way of identifying (but not controlling or creating) such correlated subsets without having or needing access to Alice's or Bob's raw data.

EDIT: The reason for the retrocausal-sounding language (and I've used this kind of language myself in some cases) is because once Victor communicates a correlated subset to Alice and Bob and they throw all their other results away, everything is "as if" they had been sharing a certain set of entangled states right from the beginning. In particular they can then do cryptography or device-independent random number generation or whatever else they might have liked to do with measurements made on entangled states. Typically they can certify that their protocol is working correctly and securely even if they don't trust Victor.


5. ### RJBeery (Natural Philosopher, Valued Senior Member)
I get it, well done; now let me make it through the arXiv paper to verify that my understanding of what you're saying and my understanding of what the paper says can be reconciled.

7. ### Syne (Sine qua non, Valued Senior Member)
Very good post, przyk. Persistence and clarity paying off.

8. ### Jarek Duda (Registered Senior Member)
Not exactly. Here are the A-B correlations for the first and second choices of Victor, from the paper:

If A-B get |R>|R>, it is more likely that V has chosen the left option; if A-B get |R>|L>, V has much more probably chosen the right one. There is nonzero mutual information, so we could transmit information through such a channel.

The simplest way to use nonzero mutual information is a repetition code, like sending the same bit 3 times and reading it as the value occurring at least twice. An extension of such an (extremely weak!) method is just observing statistics - e.g. Victor fixes some choice and A-B estimate the correlations over succeeding measurements, improving their certainty about Victor's choice.
It's easier to see in the simpler and much more effective setting (a single SPDC instead of two) I've posted before - just set the polarizer to the erasing or non-erasing angle and observe how the statistics (light intensity) change on the second arm.
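A minimal sketch of the repetition-code idea mentioned above (a hypothetical illustration; in practice an extremely weak channel would need far more repetitions per bit than three):

```python
from collections import Counter

def encode(bits, n=3):
    """Repetition code: send each bit n times."""
    return [b for b in bits for _ in range(n)]

def decode(received, n=3):
    """Majority vote over each block of n received bits."""
    return [Counter(received[i:i + n]).most_common(1)[0][0]
            for i in range(0, len(received), n)]

sent = encode([1, 0, 1])   # -> [1,1,1, 0,0,0, 1,1,1]
noisy = sent.copy()
noisy[1] = 0               # one bit of the first block flips in transit
print(decode(noisy))       # majority vote still recovers [1, 0, 1]
```

Whether the channel in question actually has nonzero mutual information is exactly the point under dispute in this thread; the code only shows how one would exploit it if it did.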

9. ### Syne (Sine qua non, Valued Senior Member)
There is no using this for FTL communication because of the necessity for coincidence counting.

10. ### Jarek Duda (Registered Senior Member)
Yes, exactly - thanks to entanglement swapping: by performing a Bell-state measurement (BSM - left) or a separable-state measurement (SSM - right) on photons 2 and 3, he controls the entanglement/correlations of photons 1 and 4 (A-B).

FTL??? No, everything happens inside the light cone of the SPDCs.
What is nonintuitive here is that causality works backward - but why couldn't it?
Quite the opposite: fundamental physics is time/CPT symmetric. From GRT to QFT we use Lagrangian mechanics, which has e.g. three equivalent evolution formulations:
a) the situation and its derivative in the past determine the future, through the Euler-Lagrange equations,
b) the situations in the past and in the future determine the situation in between, through action optimization,
c) the situation and its derivative in the future determine the past, through the Euler-Lagrange equations.
Formulation a) is natural for our intuition, but we have to remember that b) and c) are equivalent to it. There is no reason to emphasize one time direction for causality.
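The equivalence of formulations a) and c) can be checked numerically (a hypothetical sketch using a harmonic oscillator, x'' = -x, not anything from the paper): integrating the equation of motion forward from past data, or backward from the resulting future data, traces the same trajectory.

```python
# Leapfrog (kick-drift-kick) integration is used because it is
# time-reversible up to floating-point error.
def leapfrog(x, v, dt, steps):
    for _ in range(steps):
        v += -x * dt / 2   # half kick (force = -x for the oscillator)
        x += v * dt        # drift
        v += -x * dt / 2   # half kick
    return x, v

x0, v0 = 1.0, 0.0
xf, vf = leapfrog(x0, v0, dt=0.01, steps=1000)   # formulation a): past -> future
xb, vb = leapfrog(xf, -vf, dt=0.01, steps=1000)  # formulation c): future -> past
print(abs(xb - x0), abs(-vb - v0))               # both tiny: the past is recovered
```

The same dynamics run in either time direction; nothing in the equations of motion singles out one direction.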

If someone is anxious about the "conflict" between fundamental time/CPT symmetry and our 2nd-law-based intuition, it is instructive to look at a very simple model: the Kac ring. On a ring there are black and white balls, which all shift one position each step. Some positions are marked, and when a ball passes through a marked position, it switches color.
Using the natural statistical assumption ("Stoßzahlansatz") - that if a proportion p of positions is marked, then a proportion p of both the black and the white balls changes color each step - we can easily prove that this should lead to equal numbers of black and white balls, maximizing the entropy...
...on the other hand, after two complete rotations all balls have to return to their initial colors - so from an 'all balls white' fully ordered state, the system returns back to it. The entropy would first increase to its maximum and then symmetrically decrease back to its minimum.
Here is a good paper with simulations: http://www.maths.usyd.edu.au/u/gottwald/preprints/kac-ring.pdf
The lesson is that when, on top of time/CPT-symmetric fundamental physics, we "prove" e.g. Boltzmann's H-theorem that entropy always grows... we could apply the time-symmetry transformation to the system and use the same "proof" to conclude that entropy always grows in the backward direction - a contradiction.
The problem with such "proofs" is that they always contain some very subtle uniformity assumption, generally called the Stoßzahlansatz. If the underlying physics is time/CPT symmetric, we just cannot be sure that entropy will always grow - as for the Kac ring, and maybe our universe...
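The Kac ring recurrence is easy to simulate (a hypothetical sketch of the model described above): after two complete rotations, every ball has passed every marked position exactly twice, so every color flip is undone and the initial state returns regardless of where the markers are.

```python
import random

def kac_step(balls, marked):
    """All balls move one site clockwise; a ball entering a marked site
    flips color (0 <-> 1)."""
    n = len(balls)
    new = [0] * n
    for i in range(n):
        j = (i + 1) % n
        new[j] = balls[i] ^ marked[j]
    return new

random.seed(0)
n = 50
marked = [1 if random.random() < 0.3 else 0 for _ in range(n)]
balls = [0] * n            # fully ordered state: all balls white

state = balls[:]
for _ in range(2 * n):     # two complete rotations
    state = kac_step(state, marked)
print(state == balls)      # True: the ordered state recurs
```

Midway through, the Stoßzahlansatz-style argument correctly predicts the mixing, yet the exact dynamics still bring the "entropy" back down.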

11. ### Syne (Sine qua non, Valued Senior Member)
For the same reason that coincidence counting precludes superluminal signaling, it also precludes reverse causation/signaling. Alice and Bob cannot know anything of the actions taken by Victor until after Victor has acted. And it is trivial that Victor can know information about his own past light cone.

12. ### Jarek Duda (Registered Senior Member)
A nice hypothesis - now please prove it in time/CPT-conserving physics. Or prove that entropy always grows.
The problem is that if you transform the system using the corresponding symmetry and then apply the same "proof" to it, you get a contradiction.

13. ### Syne (Sine qua non, Valued Senior Member)
This last (implying that entropy doesn't increase) is a fringe claim by you, which means the onus is yours.

14. ### Jarek Duda (Registered Senior Member)
I cannot prove the last one, and moreover, using the Kac ring example, I've explained that no one can.
But the situation with causality is analogous: time/CPT symmetry forbids emphasizing one direction at the fundamental level.
However, one direction can be emphasized at the level of the solution - the concrete realization we live in - e.g. a well spatially localized (low-entropy) big bang relatively nearby creates an entropy gradient: our 2nd law.
But it is a statistical property (of the solution we live in) only - it says nothing about a "fundamental directionality of causality".

15. ### keith1 (Guest)

Do any of the implications of this experiment's results leave any room for tampering with those results, outside the time limits of 0 and 520 ns, by Alice, Bob, Victor, or any possible privy and undisclosed fourth-party participant?

16. ### przyk (squishy, Valued Senior Member)
No. The graph on the left isn't just for the cases where Victor chose to do a Bell state measurement. It's also conditioned on Victor getting the $| \Phi^{-} \rangle$ result. The graph on the right is for the cases where Victor chooses the separable measurement and gets either $| HH \rangle$ or $| VV \rangle$. Alice and Bob can't predict which measurement Victor will choose to perform, but if Victor tells them which measurement he performed, they can sometimes predict his result.

17. ### RJBeery (Natural Philosopher, Valued Senior Member)
I'm still confused how an MWI proponent can claim that the "universal wavefunction" can be deterministic (in its traditional definition) if there are properties of it at a given state which are not determinable until a future event has passed. In order for the subsets of Alice and Bob's data to be identified by Victor's future measurements, those "branches" would by necessity be required to lead to Victor making those measurements. You can deny Free Will but I would be surprised if you denied the indeterminability of a quantum random number generator (which is what was used in this test) making the decision after Alice and Bob have taken their measurements. :shrug:

18. ### przyk (squishy, Valued Senior Member)
What "traditional definition" are you working with? MWI is deterministic in the sense that if you know the initial quantum state of the entire universe (or an isolated subsystem), and you know everything there is to know about the interactions and evolution taking place, then the Schrödinger equation predicts a unique future state at any future time. But individual observers generally won't have access to complete information about that state for various reasons.

Huh? The branching that would occur according to an MWI account of the experiment you cited in the OP is probably quite a bit more complicated than you imagine and it doesn't happen all at once. Assuming "free will" (for simplicity, not by necessity), then the global quantum state generally splits into a number of branches corresponding to the number of measurement outcomes for every measurement performed. So if we just look at one iteration of the experiment, then by the time Alice and Bob have performed their measurements the global state has split into four branches (because Alice and Bob each have two outcomes), and when Victor performs his measurement (which ideally has four outcomes), the global quantum state further splits into a total of sixteen branches. Each time, exactly which branching happens depends on the measurements that Alice, Bob, and Victor choose to perform.

If you drop the "free will" assumption and you imagine modeling Alice, Bob, and Victor as physical systems, then there's additional branching depending on the number of decisions they could make. For example, as you say, in the experiment Victor was a quantum random number generator with two possible outcomes, and in a more complete MWI description, you'd consider that the measurements Victor performs are correlated with this outcome, which doubles the number of branches. In the experiment, Alice and Bob each chose between 3 different measurements they could perform, so if you also think of Alice and Bob as quantum random number generators, that's a total of 16 x 2 x 3 x 3 = 288 branches. That's 288 branches per iteration (generation of four photons) and assuming Alice, Bob, and Victor are no more complicated than two- or three-outcome quantum random number generators.
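The branch counting above, as a sanity-check calculation (following the assumptions stated in the post, not figures from the paper):

```python
# Branches from measurement outcomes in one iteration:
alice_outcomes = 2             # Alice's measurement has 2 outcomes
bob_outcomes = 2               # so does Bob's
victor_outcomes = 4            # an ideal Bell-state measurement has 4
measurement_branches = alice_outcomes * bob_outcomes * victor_outcomes
print(measurement_branches)    # 16

# Additional branching if the choices themselves are treated as
# quantum random number generators:
victor_choices = 2             # BSM or SSM
alice_settings = 3             # 3 possible measurement settings each
bob_settings = 3
total = measurement_branches * victor_choices * alice_settings * bob_settings
print(total)                   # 288 branches per iteration (a maximum)
```

As noted below, some of these branches have weight 0 depending on the measurements chosen, so these counts are upper bounds.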

With all that said, if these responses don't make too much sense to you, I wouldn't worry about it too much. I've never really considered determinism a selling point of MWI anyway.

19. ### RJBeery (Natural Philosopher, Valued Senior Member)
Yes, exactly. And when you speak of the number of branches created by Alice and Bob you're presuming an equal weight between them. My point is that the weights cannot be equal; they must be skewed depending upon the future and yet to be determined results of the quantum random number generator, or else there would be no correlation between the data. This does not qualify for the traditional definition of determinism.
Forgive me but I interpret this as a weak concession. Perhaps you see my point?

20. ### przyk (squishy, Valued Senior Member)
Where did I say that? The weights of the branches happen to be equal after Alice and Bob have made their measurements, but they're generally not equal after Victor has made his measurements.

Actually, depending on the measurements, some of the 16 branches I counted are actually of weight 0, so you should really read that figure of 16 as a maximum.

No, that doesn't follow. I think you've forgotten that in MWI all the possible results occur. If Alice and Bob both make the same measurement, then there are two branches where they get correlated data and two where they get anticorrelated data. All of equal weight.

Again, what's this traditional definition of determinism? You keep using terms without defining them.

No.

21. ### RJBeery (Natural Philosopher, Valued Senior Member)
What's the point of Victor making a random choice of whether or not to correlate Alice and Bob's data? What would the data look like if he correlated all photons received? I'm trying to see things from your perspective because we aren't seeing eye-to-eye.

22. ### przyk (squishy, Valued Senior Member)
Victor doesn't "correlate" Alice and Bob's data as such. He can only identify correlated subsets of their data that would be there anyway.

If you're just talking about Alice and Bob's data, it would look no different than if Victor didn't exist: the results would look completely random.

23. ### RJBeery (Natural Philosopher, Valued Senior Member)
It's one thing to claim that ArsTechnica did a poor job summarizing this experiment, but this is directly from the paper:
This makes it clear that Alice and Bob should be able to compare their measurements and decide whether or not their respective photons are entangled or separable with no information from Victor; they should be able to deduce Victor's choice of whether or not to entangle the photon pairs. Do you disagree? Let me ask it another way, putting Victor's choice ahead of Alice and Bob's measurements: if Victor produces a stream of entangled photon pairs and sends one each to Alice and Bob, you agree that they should be able to measure them, compare their results, and after sufficient iterations determine whether or not the stream is correlated WITHOUT asking Victor, yes? That's the whole point of Bell's experiment! Now, simply put Victor's choice of whether or not to entangle that stream AFTER Alice and Bob take their measurement. That's what is happening in this paper as I read it...