Randomly selecting from an infinite number of choices

Discussion in 'Physics & Math' started by Nasor, Jan 20, 2009.

  1. Trippy ALEA IACTA EST Staff Member

    Messages:
    10,890
    It doesn't have to be infinite. By definition it only has to be finite; it just has to be undefined (arbitrarily large).

    Go back and reread what I actually said, and bear in mind that what I'm talking about is an expansion, not a compression. Also bear in mind that the number I'm talking about, if I define the rule beforehand, can be 'nested expansions'.

    For example, I could say that my mashed number is '1234' and that "each of those numbers represents a multiple of Graham's number, and they are to be read sequentially, but each of the numbers from this expansion is also a multiple of Graham's number, and this process is to be repeated a Graham's number of times".

    It's still a finite number, it is arbitrarily large, it was selected at random, and it was just as likely to be selected as '1' with the rule "but let's apply the standard rules this time".

    In essence, I'm not really talking about compression algorithms; I'm talking about the rules we use to expand the numbers to interpret their meaning - for example, the rules we use to expand the number 123 and interpret its meaning as \(1 \times 10^2 + 2 \times 10^1 + 3 \times 10^0\).
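    As a rough illustration, here's a Python sketch (the rule functions are made up, and 10**100 is just standing in for Graham's number, which is far too big to actually compute with) of two different expansion rules applied to the same digit string:

    Code:
    # Two "expansion rules" applied to the same digit string.
    # standard_rule: ordinary place-value expansion (123 -> 1*10^2 + 2*10^1 + 3*10^0).
    # big_rule: a toy stand-in for "each digit is a multiple of some huge number";
    # the huge number here is just 10**100, not Graham's number.

    def standard_rule(digits):
        return sum(int(d) * 10 ** (len(digits) - 1 - i) for i, d in enumerate(digits))

    def big_rule(digits, huge=10 ** 100):
        # each digit read sequentially as a multiple of 'huge'
        return sum(int(d) * huge for d in digits)

    print(standard_rule("123"))   # 123
    print(big_rule("1234"))       # 10 * 10**100

    The point is only that the same string decodes to wildly different values depending on which rule set you declare up front.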
     
  3. Trippy ALEA IACTA EST Staff Member

    Messages:
    10,890
    And you just proved my point about the shortcomings of any specific set of rules. But this is actually irrelevant, because all I have to do is specify, when I state the number, a different set of rules that is capable of handling the number of the room you're in.

    And when did the life time of a mortal become relevant?

    The other thing is you could combine rules.
     
  5. Pete It's not rocket surgery Registered Senior Member

    Messages:
    10,167
    Only if it hasn't already decayed. Think about it - the atom has a 50% chance of decaying within any given half-life.
     
  7. RJBeery Natural Philosopher Valued Senior Member

    Messages:
    4,222
    Trippy, if time (such as a mortal's lifetime) isn't relevant then why are we discussing compression/expansion algorithms at all?? You brought them up as a resolution to the representation/communication issue. If we're assuming that the participant is immortal and capable of supertasks then they might as well represent and communicate arbitrarily large numbers by growing that many daisies in their infinite garden.:thumbsup:
     
  8. RJBeery Natural Philosopher Valued Senior Member

    Messages:
    4,222
    Also, the half-life suggestion is NOT uniform. Try it 1,000,000 times and graph it. Is the number of decays in each time interval the same? No, it is not; the counts fall off exponentially, by definition. The only way it would show a uniform distribution is if the element had a half-life of infinity, in which case the graph would in fact be flat everywhere (but not very useful).
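    If you want to see it rather than take my word for it, here's a rough Python sketch (the numbers are arbitrary) of the "try it 1,000,000 times" experiment, with a half-life of one time step:

    Code:
    import random
    from collections import Counter

    # Simulate the decay time of a single atom, 1,000,000 times.
    # Each time step the atom has a 50% chance of decaying (half-life = 1 step).
    def decay_time():
        t = 1
        while random.random() >= 0.5:   # survives this step with probability 0.5
            t += 1
        return t

    counts = Counter(decay_time() for _ in range(1_000_000))
    for step in range(1, 11):
        print(step, counts[step])   # roughly 500000, 250000, 125000, ... not flat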

     
  9. quadraphonics Bloodthirsty Barbarian Valued Senior Member

    Messages:
    9,391
    Am I supposed to be more comfortable with waiting an "undefined" amount of time than an "infinite" amount of time?

    "Expansion" is just the final step in a compression algorithm, where you decode the actual result. In order to perform an expansion, you have to have first defined a compression, or there isn't anything to expand. That you generate number directly in the compressed domain is irrelevant: the compressor is specified when you specify the expander (i.e., the "rules").

    Said another way, your "rules" represent an invertible mapping between an alphabet of codewords and a set of symbols. The mapping from symbols to codewords is a "compression" and the inverse is "expansion." Taken together, they represent a coder, or "compression algorithm." But the key point is that specifying just ONE of the mappings specifies the entire thing, and you can then apply results from data compression to the situation.
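    As a toy illustration in Python (the dictionary is hypothetical and has nothing to do with your actual rules), once you write down the decoder (the "expansion"), the encoder (the "compression") is already determined, because the mapping has to be invertible:

    Code:
    # A toy invertible mapping between codewords and symbols.
    # Writing down the decoder ("expansion") fixes the encoder ("compression"):
    # the encoder is just the inverse dictionary.

    decoder = {"a": 1, "b": 10, "c": 100}          # codeword -> symbol ("expansion")
    encoder = {v: k for k, v in decoder.items()}   # symbol -> codeword ("compression")

    assert decoder[encoder[100]] == 100            # the two directions are inverses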

    Only if you are also randomly selecting over the rules, as well as numbers. How long will it take you to randomly select from the infinite set of possible rules?

    That is exactly what the decoder side of a compression algorithm is, which in turn specifies the encoder side. I.e., you are designing a compression algorithm.
     
  10. zanket Human Valued Senior Member

    Messages:
    3,777
    I agree with Trippy that a single atom never takes infinite time (forever) to decay, just an indefinite time. Those are subtly different. An indefinite time would be the expected time too. It could be 5 minutes, or 5 billion years.

    How could the set of natural numbers, which uses only symbols 0 to 9, have a finite number of numbers in it?

    I've been discussing Nasor's #23 post, which morphed into "If I randomly pick a natural number, what are the odds that you will be able to guess my number on the first try?" What I read (Wikipedia) shows that the set of natural numbers contains only finite numbers, and contains an infinite number of numbers. Can you show otherwise?

    I agree that if the set of natural numbers contains numbers of infinite length, one cannot randomly select from it.
     
  11. RJBeery Natural Philosopher Valued Senior Member

    Messages:
    4,222
    Then we are in agreement that the answer to Nasor's #23 post is zero?
     
  12. Pete It's not rocket surgery Registered Senior Member

    Messages:
    10,167
    Hi Trippy,
    You're not following what I'm saying at all.

    When you specify a set of rules, you must do so using some notation (eg English words). These specified rules, plus the data, form a complete set of characters that uniquely specify a number.

    There is a finite number of sets of rules you can describe (within any finite time), just as there is a finite number of data sets you can describe (within any finite time), simply because there is a finite number of symbols you can select (within any finite time).

    In algorithm terms, you could consider a universal Turing machine that accepts a description of a Turing machine (a set of rules), followed by a string of data to operate on.

    Talking about algorithms isn't "quibbling", it's cutting to the absolute heart of the problem. Any method (of finite precision) that you can describe can also be described as an algorithm.

    And as I showed earlier, an algorithm that chooses from an infinite uniform distribution has zero probability of halting in finite time.
    Or equivalently, if an algorithm selects a random number from an infinite distribution in finite time, then the distribution is not uniform.
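    To make that concrete, here's a toy Python sketch (the stopping rule is arbitrary, just for illustration): any number generator that is guaranteed to halt has to decide at each step whether to stop, and that stopping rule is exactly what skews the distribution away from uniform.

    Code:
    import random

    # A toy generator that always halts: keep appending random digits,
    # stopping after each digit with probability p. It can produce any natural
    # number, but short numbers are far more likely than long ones, so the
    # distribution cannot be uniform over all naturals.
    def random_natural(p=0.5):
        digits = [random.choice("123456789")]          # leading digit nonzero
        while random.random() >= p:
            digits.append(random.choice("0123456789"))
        return int("".join(digits))

    sample = [random_natural() for _ in range(100_000)]
    print(sum(n < 10 for n in sample) / len(sample))   # about 0.5: single digits dominate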

    When you maintained that you could select randomly from a uniform infinite distribution. You're mortal, right?

    Yes, you could get surprisingly high. But not infinitely.
     
  13. RJBeery Natural Philosopher Valued Senior Member

    Messages:
    4,222
    Well said, Pete. So we're all in agreement that the answer to Nasor's #23 post is zero, then?
     
  14. Pete It's not rocket surgery Registered Senior Member

    Messages:
    10,167
    A single atom has a 50% chance of decaying in any given half-life. Think about it - start with 2 billion atoms and wait one half-life. If each individual atom has a 50% probability of decaying in that time, then about 50% should have decayed... which is what a half-life is!

    If an atom has a half life of 1 second, then it has a 50% chance in the first second, 50% chance of decaying in the second (if it survived the first), 50% chance of decaying in the third (if it survived the previous two) and so on. The probability that it decays in any given half-life is constant - but is contingent on it having survived up to that point.

    The output of your program has about the same chance of being a one button mash as a six button mash. A one might come up as 000001 (the same chance as 347692), but a one has a better chance because it could also come up as 00001, 0001, 001, 01, or just 1.
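    You can count it directly. Here's a rough Python enumeration (I'm assuming the mash length is uniform on 1 to 6 and each press is a uniform digit 0-9, which is just my reading of the setup):

    Code:
    from fractions import Fraction

    # Probability that a button mash (length uniform on 1..6, digits uniform 0-9)
    # reads as a given value. A "1" can appear as 1, 01, 001, ..., 000001,
    # so it beats any specific six-digit string like 347692.
    def prob_of_value(value):
        total = Fraction(0)
        for length in range(1, 7):
            if len(str(value)) <= length:              # value fits, padded with leading zeros
                total += Fraction(1, 6) * Fraction(1, 10 ** length)
        return total

    print(prob_of_value(1))        # six ways to appear (one per length)
    print(prob_of_value(347692))   # only the six-digit mash: 1/6 * 10**-6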

    So whoever does it needs unlimited precision?

    Consider it further, and you'll find it does matter. If you are surrounded by an infinity of numbers, then how closely packed must they be?
     
  15. zanket Human Valued Senior Member

    Messages:
    3,777
    Well, whether or not the algorithm is selecting randomly depends on the odds of the atom decaying at any moment, so "Only if it hasn't already decayed" is all that matters, right? Once it decays the algorithm is finished.

    Suppose the half-life of an element is 1 day. Does that mean that there is a greater chance that the single atom the algorithm is watching will decay in the first month than in the second month? No, the odds of that atom decaying in any month are the same. The half-life tells only when to expect that 50% of a group of like atoms will have decayed.

    Flip a coin every day. Heads is survive for another day, tails is decay. On any given day you have the same odds of decaying. But there will be a half-life statistic for a group of such coin flippers.
     
  16. Pete It's not rocket surgery Registered Senior Member

    Messages:
    10,167
    I don't think so, because if Nasor randomly picks an integer, he's not randomly selecting from a uniform infinite distribution. There is a non-zero chance that he chooses 1736209878016208, for example.
     
  17. Pete It's not rocket surgery Registered Senior Member

    Messages:
    10,167
    Yes!
    The atom's chance of decaying in the first 30 days is 99.999999907%.
    The atom's chance of decaying in the second month is also 99.999999907%... if it survives the first month.

    Start with a zillion atoms. In the first half life, about half decay, leaving half surviving.

    Start with 1000 flippers. The first day, about half flip tails, leaving half surviving. That's a half-life.
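    If you don't want to take it on faith, here's a quick Python sketch of the 1000 flippers (the numbers are arbitrary):

    Code:
    import random

    # 1000 coin flippers, each flipping once a day; tails = "decay".
    # The surviving population roughly halves every day (that's the half-life),
    # even though each individual's chance per day never changes.
    survivors = 1000
    for day in range(1, 8):
        survivors = sum(1 for _ in range(survivors) if random.random() < 0.5)
        print(day, survivors)   # roughly 500, 250, 125, 62, 31, 16, 8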
     
  18. zanket Human Valued Senior Member

    Messages:
    3,777
    True. You can substitute "moment" for "half-life". And it guarantees that the algorithm will randomly select; it will pick a number out of an infinity of numbers, with every number having equal odds of being picked.
     
  19. Pete It's not rocket surgery Registered Senior Member

    Messages:
    10,167
    Look, don't take my word for it... look it up in a chemistry or physics text, or ask Wikipedia:
     
  20. zanket Human Valued Senior Member

    Messages:
    3,777
    (Quickly, 'cause I gotta leave) What's your point? The algorithm employs a single atom, not a group of them, and you seem to agree that the odds of a single atom decaying at any moment are constant.
     
  21. Pete It's not rocket surgery Registered Senior Member

    Messages:
    10,167
    No, subsequent moments are less likely to be picked, because they are contingent on the survival of previous moments. Look at post 109, and try working the maths. It takes a little effort, but it's not that difficult. All you need are your basic probability laws.
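    For instance, with a half-life of one interval, the chance of the atom decaying in the \(n\)th interval is \((1/2)^{n-1} \times \tfrac{1}{2} = (1/2)^n\): 1/2 for the first interval, 1/4 for the second, 1/8 for the third, and so on.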
     
  22. RJBeery Natural Philosopher Valued Senior Member

    Messages:
    4,222
    As I said, the half-life issue is irrelevant because it is not a UNIFORM distribution; it's exponential. It might help in choosing a number that was not predetermined, but it does not help in choosing a uniformly distributed random number from infinity.
     
  23. zanket Human Valued Senior Member

    Messages:
    3,777
    Oh, I thought "Yes!" was excitement. I'll reply later.
     
