Probability and energy

Discussion in 'Physics & Math' started by wellwisher, Apr 7, 2011.

  1. wellwisher Banned

    Messages:
    5,160
    This idea came to me several months ago and I thought I would share.

    Say I had a six-sided die. The odds are 1 in 6 for any side to appear on any given throw. Although not very obvious, the unwritten assumption behind these odds is that there is sufficient energy for full randomization of the die.

    Say there was not sufficient energy for full randomization of that die. Say I threw the die weakly, so there is only enough energy in my throw for it to flip 90 degrees. The expected odds change: not all sides are possible. The side 180 degrees away cannot be reached, so its odds become different from those of its four adjacent sides.
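
    Here is a rough sketch of what I mean, as a small Python simulation. It is only a toy model, and it assumes the weak throw tips the die exactly one quarter turn in a random direction:

    Code:
    import random
    from collections import Counter

    OPPOSITE = {1: 6, 2: 5, 3: 4, 4: 3, 5: 2, 6: 1}   # opposite faces of a die sum to 7

    def weak_throw(top):
        # only enough energy for one quarter turn: the new top face is one of
        # the four faces adjacent to the current top; the current top and the
        # face 180 degrees away cannot come up
        adjacent = [f for f in range(1, 7) if f not in (top, OPPOSITE[top])]
        return random.choice(adjacent)

    trials = 100_000
    counts = Counter(weak_throw(top=1) for _ in range(trials))
    for face in range(1, 7):
        print(face, counts[face] / trials)
    # faces 2-5 each come up about 0.25 of the time; faces 1 and 6 never appear,
    # so the usual "1 in 6" odds no longer apply to this low-energy throw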

    As another example, say we had a new deck of sequenced cards. If we add enough energy for full randomization (shuffle it completely), each hand has given odds, which are consistent and predictable across all full-energy randomizations. Say, instead, I took a new deck and shuffled it with less energy, so there is less than full randomization; I cut it only once. The odds are now totally different. If you have ever played cards, you know such a low-energy shuffle would not be allowed, since one could alter the odds.
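
    The same kind of toy simulation works for the cards. In the Python sketch below, "single cut" just means cutting the fresh, sequenced deck once at a random depth:

    Code:
    import random

    def fresh_deck():
        return list(range(52))          # a new, sequenced deck: 0, 1, ..., 51

    def full_shuffle(deck):
        d = deck[:]
        random.shuffle(d)               # full randomization
        return d

    def single_cut(deck):
        k = random.randrange(1, 52)     # low-energy "shuffle": one cut
        return deck[k:] + deck[:k]

    def top_two_in_sequence(deck):
        return deck[1] == deck[0] + 1   # are the first two cards still consecutive?

    trials = 100_000
    for name, method in [("full shuffle", full_shuffle), ("single cut", single_cut)]:
        hits = sum(top_two_in_sequence(method(fresh_deck())) for _ in range(trials))
        print(name, hits / trials)
    # full shuffle: about 0.02 (roughly 1 in 51); single cut: about 0.98

    Interestingly, any one card is still equally likely to end up on top after a single random cut; it is the joint odds, such as two consecutive cards still being in factory order, that give the low-energy shuffle away.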

    The question is, say we have a phenomenon in nature that we characterize using probability. How do we know and prove that there is sufficient energy for complete randomization, so that the assumed odds are valid?

    As an example, DNA is assumed to change in a random way. But observation shows that certain parts of DNA change more quickly than others, while other parts are more conserved. There is not sufficient energy for full randomization of the entire DNA, which allows the dice to be loaded.

    The idea was that maybe partial-energy randomization simulations could be done on a computer, to see how the odds change with energy. The assumption of sufficient energy for full randomization may not always be correct; the data might be better fit by assuming less energy.
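
    For example, one could run something like the Python sketch below, where (as a crude stand-in for energy) the throw is modelled as some number of random quarter turns:

    Code:
    import random
    from collections import Counter

    OPPOSITE = {1: 6, 2: 5, 3: 4, 4: 3, 5: 2, 6: 1}

    def tip(top):
        # one quarter turn: the new top is one of the four adjacent faces
        return random.choice([f for f in range(1, 7) if f not in (top, OPPOSITE[top])])

    def throw(n_tips, start=1):
        top = start
        for _ in range(n_tips):
            top = tip(top)
        return top

    trials = 50_000
    for n_tips in (1, 2, 3, 5, 10):     # number of quarter turns, standing in for "energy"
        counts = Counter(throw(n_tips) for _ in range(trials))
        print(n_tips, [round(counts[f] / trials, 3) for f in range(1, 7)])
    # with 1 tip only four faces are possible; as the number of tips grows,
    # the distribution approaches the familiar 1/6 per face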

    For example, odds were calculated for the spontaneous decay of the proton, which has never panned out. Maybe there is not sufficient energy for those odds, which assume full-energy randomization.
     
  3. mathman Valued Senior Member

    Messages:
    2,002
    "The question is, say we have a phenonema in nature that we characterize using probability. How do we know and prove if there is sufficient energy for complete randomization so the assumed odds are valid?"

    You have it backwards. The odds for physical phenomena in complex situations are often determined by experimental studies. Simple assumptions (as in dice tossing) can be made only if the underlying physics is obvious.
     
  5. wellwisher Banned

    Messages:
    5,160
    I see what you are saying. Let me give an example to better illustrate my point, and then compare it to yours.

    If we have a die, there are certain odds, provided there is sufficient energy for full randomization. A weak throw will not allow those same odds. Randomization is connected to entropy. Entropy, which amounts to degrees of freedom, absorbs energy. With insufficient energy added to the throw of a die, there is not enough entropy for full-randomization odds, since the degrees of freedom are reduced by that shortfall in energy.

    Besides changing the energy input into the die, another way to alter its randomization is to load it with a small weight. The pull of gravity on the weight limits the degrees of freedom, so we can't get full randomization. We would need to add energy to neutralize gravity to regain full-randomization odds. But since that technology does not exist, the loaded die is in effect built with fewer degrees of freedom in mind. It is not designed to play by the odds of full randomization.

    Conceptually, we can ignore the limited energy/entropy consideration by looking at the loaded die as a unique phenomenon. In other words, instead of comparing the loaded die to a standard die with full randomization energy, we can treat the loaded die as something different and then apply normal probability to this unique phenomenon.

    What I was trying to do was show a way to standardize odds as a function of energy/entropy (internal and external), to avoid all the fudge factors needed to turn special cases into unique cases so that normal odds can be applied. The fudge factors add extra degrees of freedom to the analysis, which means we are intellectually adding energy to the analysis so that, on paper, there is sufficient energy for full randomization.
     
  7. mathman Valued Senior Member

    Messages:
    2,002
    I am not sure what you are trying to do. Dice-throwing odds can be affected by the mechanics of the throw; energy is one factor, but there are others, such as the height of the initial release, the direction, and the surface material.
     
  8. tashja Registered Senior Member

    Messages:
    715
    For a memory to record anything, entropy has to increase. When entropy is maximized, there is no entropy difference left with which to remember anything.

     
  9. BenTheMan Dr. of Physics, Prof. of Love Valued Senior Member

    Messages:
    8,967
    Hi wellwisher:

    This is an interesting question.

    At first sight, the answer is "we check". Physics is an experimental science, and when we build a model to describe nature, we assign probabilities to events. Those probabilities could be 1, or 0, or anywhere between.

    But we can rephrase the question, right? We can say: suppose that something happens with probability 1. How do we know that we haven't imparted enough energy to the process to find something more interesting?

    In your dice analogy, you only know the result if you know where you started from. Dice rolls are random in the sense that you don't really know the initial state, so even knowing the dynamics of the roll, you don't know what the final state is.

    But ok---let's assume you know the initial state (which is reasonable in an experiment) and you can control the forces you impart on the dice (again, not unreasonable). Then you know, with certainty, the final state of the dice. (People actually do this.)

    Now we have to ask: how do we _know_ the dice throw is random? Suppose you don't know the initial state of the dice, but you can control the throw. Then your results appear random---the randomization wasn't in the throw, but in the initial state. So you have to have control over both the initial state _AND_ the "randomization". You can see that lack of knowledge of either one renders the process effectively random.
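
    A toy illustration of that point in Python (the "dynamics" here are invented; the only point is that the outcome is fixed once you know both the initial state and the throw):

    Code:
    import random
    from collections import Counter

    def deterministic_roll(initial_face, throw):
        # made-up but completely deterministic dynamics: the same initial face
        # and the same throw always give the same final face
        return (initial_face - 1 + sum(throw)) % 6 + 1

    throw = [1, 3, 2, 2]                    # a perfectly controlled throw

    # known initial state: the outcome is certain
    print(deterministic_roll(3, throw))     # prints the same face every time

    # unknown initial state: the very same throw now looks random
    results = Counter(deterministic_roll(random.randint(1, 6), throw)
                      for _ in range(60_000))
    print({f: round(results[f] / 60_000, 3) for f in range(1, 7)})
    # roughly 1/6 each: the randomness came from not knowing the initial state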

    The difference with laws of nature is that, even if you _know_ the initial state, and you _know_ the dynamics which govern the process, at best you can still only make probabilistic statements about the outcome of experiments. One can see this (for example) in a particle accelerator. You shoot two particles at each other---often times you know very precisely the initial energies of the particles, and you certainly know what particles you start with. These two particles can collide, and (depending on the energy of the collision), you have some theory which tells you what should happen---the theory assigns probabilities to events. So, while you cannot determine the outcome of any individual experiment, you can determine the outcome of an infinite number of experiments, given a theory.

    After reading this, I'm not sure I answered your question. So I'll stop here and wait for you to interrogate me.

     
  10. RJBeery Natural Philosopher Valued Senior Member

    Messages:
    4,222
    Wellwisher, it's a bit of an interesting idea but I think you're presuming that "normal odds" and "full randomization" are attainable, which I don't agree with.
     
  11. arfa brane call me arf Valued Senior Member

    Messages:
    7,832
    If the universe generates "events" randomly, how can we be sure that it's "completely" random?
    How can we be sure that the amount of energy in the universe (or connected to an event) is "sufficient", so entropy tends to a maximum?
    Since entropy does tend to a maximum, does that mean the amount of energy in the universe is sufficient?

    With a finite range of values, such as the faces of dice or decks of cards, it's always possible to maximise random throws/shuffles; but the "randomness" of any outcome is always relative to all the others, and the possibilities are always finite--does the universe contain a finite amount of energy, though, and is it sufficient for "randomness"?
     
  12. RJBeery Natural Philosopher Valued Senior Member

    Messages:
    4,222
    Again, this is presumptive speculation; it isn't definitively testable in the real world, and it requires a leap of faith that relies on our ability to imagine "perfectly" fair dice or "perfect" shuffling of "perfect" cards, infinite iterations for testing and verification, etc.
     
  13. arfa brane call me arf Valued Senior Member

    Messages:
    7,832
    I don't think so.
    If you have two dice and you don't know if they're biased, you can still throw them as often as you want.

    Any bias will show up after a sufficiently large number of "random" throws, even if it takes forever. So it is always possible to maximise randomness if you have a closed system. Put some gas in a container and the molecules distribute themselves "randomly" and "maximally".
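
    Roughly what that looks like in practice (a Python sketch; the bias here is invented just so there is something to detect):

    Code:
    import random

    def biased_die():
        # hypothetical loaded die: face 6 is favoured slightly
        return random.choices(range(1, 7), weights=[1, 1, 1, 1, 1, 1.2])[0]

    rolls = [biased_die() for _ in range(100_000)]
    expected = len(rolls) / 6
    chi_sq = sum((rolls.count(f) - expected) ** 2 / expected for f in range(1, 7))
    print(chi_sq)
    # compare with 11.07, the 5% critical value of chi-square with 5 degrees of
    # freedom: with enough throws even this small bias pushes the statistic far
    # past it, while a fair die would average about 5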
     
  14. RJBeery Natural Philosopher Valued Senior Member

    Messages:
    4,222
    And you believe that "taking forever" qualifies as being testable in the real world?
    They do? All of the phase space will eventually be represented by the gas, including very orderly states. It sounds like what you mean to say is that the molecules are random unless they aren't, which is not a meaningful statement. In any event, how would you know? This again requires presumption on our part.

    Regarding a deck of cards, again how would you know they are "maximally" random? Would you have an algorithm to test for it?

    I think there's no problem agreeing that some systems are generally random, but I take exception to the phrase "maximally random". Entropy isn't well defined enough to proclaim something is maximally random in the real world. Ironically, if it were, and if you had an algorithm to test for this attribute in a shuffled deck of cards, then the randomness disappears because the testing algorithm itself becomes a predictor for the order of the deck!
     
  15. BenTheMan Dr. of Physics, Prof. of Love Valued Senior Member

    Messages:
    8,967
    It sounds like you are operating from a very weird (i.e., non-rigorous, and probably wrong) definition of random. The state where all of the gas molecules congregate into a corner is just as random as any other state.

    "Random" implies "sampling". The statement that "all of the phase space will be represented by the gas" is the same as saying "if I roll the dice indefinitely, I'll eventually roll all of the numbers". Is rolling a 6 any more "random" than rolling a 1? "Random" means that repeating an experiment under similar conditions can yield different results---again, see my prior post.

    The phase space is the space of all possible states, and we need a probability measure to tell us the likelihood of each state. Every state in the space is an element of a "random" set.
     
  16. BenTheMan Dr. of Physics, Prof. of Love Valued Senior Member

    Messages:
    8,967
  17. RJBeery Natural Philosopher Valued Senior Member

    Messages:
    4,222
    Ben, your link just made my point.
    Pi qualifies as being statistically random, yet its digits are 100% deterministic! I can announce with certainty what the "next" digit will be at any point in the sequence. My point is that random isn't as well defined as many people think it is (i.e. I believe it's a completely subjective term, and intrinsic randomness does not exist), and the concept of "maximal randomness" is not only difficult to define but, I contend, impossible to achieve or test for.
     
  18. BenTheMan Dr. of Physics, Prof. of Love Valued Senior Member

    Messages:
    8,967
    "Random" has a very well defined meaning mathematically.

    If you'd ever read any books about probability theory, then you'd know that.
     
  19. arfa brane call me arf Valued Senior Member

    Messages:
    7,832
    Pi itself isn't a random number; its digits occur in a random sequence--which you might say is a function of pi, or of computing it as a number.
    A lot of people can recognise the first n digits of pi. Not many people can tell if a number is random by looking at it.
     
  20. mathman Valued Senior Member

    Messages:
    2,002
    The concept of random (in mathematics) is defined using an axiomatic approach (Kolmogorov). The digits of pi (like those of almost all numbers) are believed to be "normal", not random. This means that any particular finite sequence of digits shows up with the expected frequency (i.e. any single digit 1/10 of the time, any digit pair 1/100, etc.) when you look long enough at the expansion.
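
    A quick empirical check of that digit-frequency claim, as a Python sketch using the mpmath library (only a finite sample, of course):

    Code:
    from collections import Counter
    from mpmath import mp

    mp.dps = 10010                       # working precision, a bit more than we print
    digits = mp.nstr(mp.pi, 10000)[2:]   # "3.1415..." -> drop the "3." prefix

    counts = Counter(digits)
    n = len(digits)
    print({d: round(counts[d] / n, 3) for d in "0123456789"})
    # each digit appears with frequency close to 1/10; the same check on digit
    # pairs gives frequencies close to 1/100, the "normal" behaviour described
    # above (observed empirically for pi, though not proven)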
     
  21. RJBeery Natural Philosopher Valued Senior Member

    Messages:
    4,222
    So is the system state of a gas container wherein we place all molecules initially in the corner maximally random or not? Or must we wait for the molecules to distribute themselves and eventually wind up back in the corner to make this claim?
     
  22. RJBeery Natural Philosopher Valued Senior Member

    Messages:
    4,222
    Yes, mathman, but my objection is to the term "maximally" random; how does a squishy phrase like "long enough" allow us to make an absolutely definitive statement about something being MAXIMAL? Maybe I'm being pedantic here, but in our analysis for randomness we must necessarily restrict ourselves to one or more subsets of a sequence. "Maximal" randomness, to me, means that the sequence appears random for any scope (i.e. the set of all subsets), and this requirement has problems. Or not?? If someone wants to make the claim that maximal randomness is achievable, please define it first.
     
  23. arfa brane call me arf Valued Senior Member

    Messages:
    7,832
    Heat a gas in a container or other closed system to infinite temperature, and all the molecules will be distributed maximally.
    Or shuffle a deck of cards without stopping, or toss a coin forever.

    Then the question of sufficient energy becomes more like the question of a sufficient number of coin tosses: does it imply you have a truly random sequence only after an infinite number of tosses? That a gas is only fully randomised at infinite temperature?
     
    Last edited: Apr 10, 2011
