Determinism or Indeterminism

Discussion in 'General Science & Technology' started by Plato, May 3, 2000.

Thread Status:
Not open for further replies.
  1. Plato Registered Senior Member

    Messages:
    366
    We are touching here on a subject that has been addressed since the very beginning of constructive rational thought. However, I believe that we are living in a time in which it will find its final solution.
    To open the discussion I would like to refer to the work of Ilya Prigogine, a Belgian scientist (yes, we have some good scientists too) who made his contribution in the field of thermodynamics and received the Nobel Prize in Chemistry for it.
    His work primarily focused on thermodynamics far from equilibrium and on irreversible processes. He also made many contributions to the fairly new field of deterministic chaos.
    In deterministic chaos we find that some systems (non-equilibrium systems) exhibit behavior in which the tiniest change in the initial conditions results in totally divergent solutions. This is also known as the butterfly effect: the flapping of the wings of a butterfly in the Amazon rainforest could result in a huge storm in Europe.
    If this is the case, why are we still talking about deterministic chaos? Because if you input exactly the same initial values (exactly as in infinite precision) you get the same results. You would expect that the determinists have won the day, because even in something as random as 'chaos' there still is determinism.
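    (A minimal numerical sketch, in Python, of the sensitivity just described; the chaotic map and the starting values are illustrative choices, not anything taken from Prigogine's work.)

```python
# Two trajectories of the chaotic logistic map x -> 4x(1-x), started a
# tiny distance apart, soon disagree completely -- yet each run is exactly
# reproducible if the very same starting value is fed in again
# (deterministic chaos).

def logistic(x):
    return 4.0 * x * (1.0 - x)

x_a, x_b = 0.3, 0.3 + 1e-12      # almost identical initial conditions
for n in range(1, 61):
    x_a, x_b = logistic(x_a), logistic(x_b)
    if n % 10 == 0:
        print(f"step {n:2d}: x_a={x_a:.6f}  x_b={x_b:.6f}  |diff|={abs(x_a - x_b):.2e}")
```
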
    Where does this determinism come from? Time symmetry!
    All basic equations, from Newton through Einstein to quantum mechanics, are time-symmetric. This means that if you replace t with -t, the equations remain the same. It makes no difference to a particle or a wavefunction whether time runs to the past or to the future.
    Where does the apparent arrow of time come from? The second law of thermodynamics states that entropy always increases as time increases. This, however, is an ad hoc proposition that we have allowed to exist but that actually describes our lack of accurate measurement. It was believed, when the laws of thermodynamics were formulated, that the individual particles that interacted still obeyed Newton's time-symmetric and deterministic equations, but that we had to resort to statistical formulations because of our inadequacy of measurement. Entropy is then the value that expresses our ignorance of the system. This was accepted as a temporary setback since, after all, the particles were very small (10^-10 m) and there were a lot of them (10^23 particles in a mole of gas).
    However, something very disturbing happened near the end of the 19th century. There was a particularly nasty problem going on in the field of celestial mechanics, namely the three-body problem. Newton had touched upon it but wisely left it for others to solve; it stayed with theoretical physicists all through the eighteenth and nineteenth centuries until Poincaré, the famous French mathematician (who almost beat Einstein to the formulation of special relativity), proved that it could not be solved! The three-body system proved to be highly unstable and led to chaotic behavior. This is of course very disturbing: if even something as simple as a three-body interaction is a source of chaos, how on earth are we ever going to solve the chaos that reigns at the molecular level in gases?
    The solution that Prigogine proposes actually is quite simple.
    Normally, solutions of systems can be shown in something called phase space. This is an imaginary space in which one can plot all the observables involved, like position and momentum, to show the solution of the equations of motion. For example, periodic motions like that of an idealised pendulum are represented by a circle in phase space. This phase space in Newtonian mechanics is actually a Hilbert space, that is, a Euclidean space with an infinite number of dimensions.
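    (A small Python sketch of that phase-space picture, assuming the small-angle (harmonic) approximation of the pendulum; the mass, frequency and amplitude are arbitrary illustrative values.)

```python
# Phase-space portrait of an idealised pendulum in the harmonic
# approximation: x(t) = A cos(wt), p(t) = -m A w sin(wt).  The point
# (x, p) traces a closed curve -- a circle once x and p are rescaled --
# which is how periodic motion shows up in phase space.

import math

m, w, A = 1.0, 2.0, 0.5                  # mass, angular frequency, amplitude
for k in range(8):                       # eight samples over one period
    t = k * (2.0 * math.pi / w) / 8.0
    x = A * math.cos(w * t)
    p = -m * A * w * math.sin(w * t)
    invariant = (x / A) ** 2 + (p / (m * A * w)) ** 2
    print(f"t={t:5.2f}  x={x:+.3f}  p={p:+.3f}  (x/A)^2 + (p/mAw)^2 = {invariant:.6f}")
```
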
    What Prigogine proposed was to generalise Hilbert space in order to allow distribution functions instead of trajectories to represent the solution. Once he did this, chaos turned to order again and he could actually solve the non-linear equations. In doing this, something else was solved as well: the time symmetry was broken! As it turned out, the generalised Newtonian equations carried the arrow of time in them and allowed for an explanation of the second law of thermodynamics.
    Why have the laws of Newton been so successful? What are the criteria that make them break down? It turns out that persistent interactions, as opposed to transient interactions, are the key. A transient interaction is when two free particles meet, interact and are free again. However, this is an idealisation; in nature there is no such thing as a 'free particle'. What actually happens is: a particle interacts, flies a bit, interacts again, and this never stops. This last kind is called persistent interaction and is the very reason for chaos and irreversibility.
    What we have to come to grips with is that probability has actually found its way into our basic equations. This, however, doesn't mean that nature will forever stay a mystery to us, quite the contrary: by accepting the inherent probabilistic nature of the universe we have found a new way of questioning it and finding new and interesting answers. Life isn't deterministic, but it is also not a totally random event; it is a narrow path between the two.

    ------------------
    "If I have been able to see further, it was only because I stood on the shoulders of giants."
    Isaac Newton
     
  3. Adlerian Registered Senior Member

    Messages:
    107
    Plato: I hope you get a lot of responses on this but perhaps you're too far ahead for our audience. I'll bite though.

    This is not my given field, but I fail to see how what you have said translates into existence being something in between deterministic and indeterministic. Existence is both, as I see it, but the way in which it is indeterministic is in the realm of human existence, hence choice (and personal responsibility). Nature itself seems completely deterministic in every other way.

    Heisenberg was wrong, by the way, God doesn't play dice.

    Please elaborate...

    Adlerian
     
  5. Plato Registered Senior Member

    Messages:
    366
    Adlerian,

    how can a deterministic nature give rise to an indeterministic human? This is a logical contradiction!
    Actually, this dualism has plagued philosophy and theology all through history. Where does this belief in nature as an automaton come from? I think it is partly because we have always made a clear distinction between "living" and "non-living" matter. The distinction is on one side a natural one, because what can be more distinct than a stone and a human?
    However, it results in a whole range of difficulties when one wants to apply logic to it. The reason for living things to be alive has always been some kind of divine spark that animated them. This, however, alienated us from the very universe that we inhabit. It is as though we are foreign visitors who are thrown into this world and have nothing in common with it.

    Once we put probability into nature itself, we find ourselves in a whole different universe! One that welcomes and anticipates life, one (I dare say) in which life is inevitable!
    When I say that life follows a narrow path between determinism and total randomness, I mean that probability doesn't mean total randomness! There is only one probability function that describes total randomness, and that is the flat distribution. There is a whole range of other probability functions out there, even ones that very closely resemble the certainty of delta functions.

    ------------------
    "If I have been able to see further, it was only because I stood on the shoulders of giants."
    Isaac Newton
     
  7. Crisp Gone 4ever Registered Senior Member

    Messages:
    1,339
    Hi Plato,

    I'd have to agree with Adlerian here (from a personal point of view). I also believe that nature is deterministic and governed by laws that predict its future behavior very well.

    The key in my reasoning that nature is deterministic is exactly what you've already written when talking about entropy: simply because we do not have the conceptual and mechanical power to calculate every movement of every molecule of a mole of gas, we create a statistical model that describes all the particles as, e.g., a point or a rigid spherical body. Everything we know in physics is based on a certain way of representing matter and its interactions in a simple model that we can compute. When creating a model, you are forced to leave out certain aspects of nature (minor side-effects). A good example is, as you also already mentioned, the ideal pendulum (harmonic oscillator). If you damp the oscillation a bit and add an external force, you can approximate the differential equation to first order (sin -> x), and you can easily get some results. But those results are not consistent with what you see in the lab, so you decide to take another term in the approximation of the sine (sin -> x - x^3/6). And all of a sudden, effects that you'd never expect (hysteresis in phase/amplitude diagrams) appear. This is also a form of chaos (the reaction of the oscillation to the external force depends on what you did before), but does this necessarily mean that the damped oscillator with external force behaves chaotically? I don't think so: we simply have to take all the higher-order terms in the approximation to get a complete result.
    (Sidenote: this is a silly example and definitely not fundamental enough, but I thought it might illustrate my point.)
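    (A rough numerical sketch, in Python, of the comparison Crisp describes above; the damping, drive and initial values are invented for illustration, and it simply shows the linear and cubic approximations drifting apart rather than reproducing the hysteresis he mentions.)

```python
# Damped, driven pendulum integrated with a basic RK4 stepper, once with
# the linear restoring force (sin theta -> theta) and once with the next
# term of the sine kept (theta - theta^3/6).  All parameter values are
# arbitrary illustrative choices.

import math

GAMMA, FORCE, OMEGA = 0.3, 0.25, 0.9     # damping, drive amplitude, drive frequency

def rhs(t, theta, v, restoring):
    """Right-hand side of theta'' + GAMMA*theta' + restoring(theta) = FORCE*cos(OMEGA*t)."""
    return v, -GAMMA * v - restoring(theta) + FORCE * math.cos(OMEGA * t)

def rk4_step(t, theta, v, dt, restoring):
    k1 = rhs(t, theta, v, restoring)
    k2 = rhs(t + dt / 2, theta + dt / 2 * k1[0], v + dt / 2 * k1[1], restoring)
    k3 = rhs(t + dt / 2, theta + dt / 2 * k2[0], v + dt / 2 * k2[1], restoring)
    k4 = rhs(t + dt, theta + dt * k3[0], v + dt * k3[1], restoring)
    theta += dt / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
    v += dt / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
    return theta, v

linear = lambda th: th                   # first-order approximation of sin
cubic = lambda th: th - th ** 3 / 6      # one more term of the sine kept

dt, steps = 0.01, 10000
th1 = th2 = 0.5
v1 = v2 = 0.0
for i in range(steps):
    t = i * dt
    th1, v1 = rk4_step(t, th1, v1, dt, linear)
    th2, v2 = rk4_step(t, th2, v2, dt, cubic)
    if (i + 1) % 2000 == 0:
        print(f"t={t + dt:6.1f}  linear theta={th1:+.4f}  cubic theta={th2:+.4f}  |diff|={abs(th1 - th2):.4f}")
```
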

    About the three-body problem: ask yourself why there is a non-linear differential equation to solve in the first place. It's because Newton (or Poincaré, or whoever) described the interactions between those three bodies using a mathematical model. They had to resort to this model because it is impossible to describe all the n*10^23 particles in differential equations. The probability in nature is not a characteristic of nature itself but, as you said yourself, a manifestation of our own ignorance.

    Life is something similar. What makes us think or do things? On a molecular level this is just a chemical reaction between organic compounds (to quote Pacino from "The Devil's Advocate" here: "love is chemically equal to eating large quantities of chocolate"). If we knew exactly how many chemical reactions take place inside the brain and how all the particles and molecules interact, we would be able to predict how a human being will react. But we can't, so we say a human being behaves chaotically and indeterministically.
    (Tricky subject by the way, since I don't think we know yet whether there's such a thing as a "human soul" (the spark of life)... Perhaps Boris can enlighten us a bit here.)

    As I see it, the question we have to ask ourselves before saying nature is indeterministic is what matter is really made of. Particles aren't little spheres of mass and charge. They also don't seem to be some sort of wave... These are all mathematical models that are not accurate enough, so the question whether nature is (in)deterministic is not yet ours to answer... I'll stick with deterministic for now.

    Bye!

    Crisp
     
  8. Plato Registered Senior Member

    Messages:
    366
    I'm sorry Crisp, but you told me nothing new. Again I have to say it makes no sense: how can time-symmetric laws give rise to irreversible processes? This is impossible and self-contradictory!

    Actually, the argument is quite simple to verify. We need an experiment that verifies a prediction of the changed laws that is not predicted by the classical laws. I will have a look on the net to see if I can find something that does this very thing, or that proposes such an experiment.

    ------------------
    "If I have been able to see further, it was only because I stood on the shoulders of giants."
    Isaac Newton
     
  9. Crisp Gone 4ever Registered Senior Member

    Messages:
    1,339
    Hi Plato,

    I totally agree that time-symmetric laws cannot predict irreversible processes. But the point I tried to raise (and, after rereading, I have to admit it got totally lost somewhere) is that from this observation (and others) you conclude that nature behaves indeterministically. I would simply say that the laws we have are wrong, and once we find a set of laws that have the direction of time implemented in them, nature would be deterministic again (to put it very simply, I know there are some complications).

    Correct me if I am wrong, but your point on this is "our current laws of physics cannot predict the indeterministic behaviour of nature, and there are very good indications that nature actually is indeterministic", right? (Just to be certain we're on the same wavelength here.)

    Bye!

    Crisp


    [This message has been edited by Crisp (edited May 05, 2000).]
     
  10. Boris Senior Member Registered Senior Member

    Messages:
    1,052
    Plato,

    Enlighten me please, but how is it that a process is termed "irreversible"? Is it because the process is modelled by a set of functions that map a single point in configuration space (initial parameters, in other words) to an entire region of phase space, and the mapping is not 1-1?

    But there is a big difference between observing a property of a model (such as irreversibility) and claiming that the reality described by the model in fact intrinsically possesses such a quality. We have to look beyond mere formulae, and remember that the formulae are models, not "laws". The only Laws there are, are summarized in the axioms used to construct the models. However, a model typically involves much more than a set of axioms; namely, in modeling physical reality we often neglect or simplify large swaths of it. So when our limited model predicts that two distinct sets of initial conditions result in the same outcome, all it means is that the outcome is only identical because we neglected some finer details of the interaction; were we to dig deeper, we might discover that the "identical" outcomes only look similar, but indeed are distinct and distinguishable, at least in principle. What does it mean when a one-to-many relationship is defined between initial configurations and final outcomes? All it means is that there is noise in the system that deflects a trajectory under ideal conditions, so that the result is a distribution of trajectories. However, a distribution is merely a result of the fact that we model the noise as a distribution; it is impossible for us to determine precisely what the noise will be at any moment in time -- however, that does not mean that at any moment in time the noise does not possess a definite, and indeed deterministic, value.

    As to the problem of disentangling determinism from free will, I am afraid I've done an inadequate job at ruining your day on that particular issue. Let me correct that particular mistake right here and now.

    Let's assume that you are right and the universe is indeed nondeterministic at the fundamental level. Then, of course, you have the problem of explaining how the large-scale world we observe is so sharply defined and causal. But even allowing that you found an ingenious explanation for how order arises from chaos (and not the other way around, which would be the position I'm trying to argue), you must observe that both the brain and indeed the cells from which it is built are large-scale objects. As such, they do not suffer from the quantum "indeterminacies" of the subatomic realm. As surely as a rock flies when you throw it, a neuron will fire when stimulated. Even at the level of individual synapses, we are dealing with effects of thousands, if not millions, of neurotransmitter molecules acting simultaneously, so that their individual "random" behavior gets averaged into a very predictable and dependable outcome. So even the behavior of an individual synapse is as deterministic as that of a transistor in an electronic circuit. Now, consider that brain activity is a resultant of many, often redundant, computational units summed together -- so that additional averaging and smoothing occurs even beyond the molecular level. I am afraid that when you look at the cognitive machine as the high-level result of brain activity, it is therefore deterministic for all practical purposes even if the underlying substrate is not (and I claim that it is.) Similarly, when you consider the interaction of that cognitive machine with the environment, you must understand that we are always dealing with smooth averages and therefore sheer determinacy. Even if you assume that at the subatomic level the universe is nondeterministic, you still cannot escape the conclusion that determinacy rules at the large scale, and therefore governs our very nature as human beings.
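    (A small Python sketch of the averaging argument above; the number of simulated "molecules", the release probability and the trial count are invented for illustration. The point is only that the relative fluctuation of the summed effect shrinks roughly as 1/sqrt(N).)

```python
# Averaging of many independent stochastic events: the relative spread
# (std/mean) of the summed effect shrinks roughly like 1/sqrt(N), so a
# synapse driven by thousands of "random" molecular releases behaves very
# predictably at the level that matters.

import random
import statistics

def relative_spread(n_molecules, p_release=0.5, trials=500):
    totals = [sum(random.random() < p_release for _ in range(n_molecules))
              for _ in range(trials)]
    return statistics.stdev(totals) / statistics.mean(totals)

for n in (10, 100, 1000, 10000):
    print(f"N = {n:6d}  relative fluctuation ~ {relative_spread(n):.4f}")
```
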

    But going back to the issue of what comes first: order, or chaos -- I choose order. This is simply because in such a scenario we indeed know that chaos still arises in a sufficiently complex system (e.g. at least 3 orbiting bodies), and everything we observe sounds like a possible outcome. If, on the other hand, utter chaos stood at the foundation of reality, then a question must be asked how that chaos manages to acquire deterministic characteristics at any scale whatsoever. I have yet to see even a hint of a satisfactory answer to that question.

    ------------------
    I am; therefore I think.
     
  11. Plato Registered Senior Member

    Messages:
    366
    Crisp,

    Yes, you are correct. I claim that our current laws are incomplete insofar as they don't show the order that arises from chaos, which we can observe in a range of experiments (e.g. the Belousov-Zhabotinsky reaction) and which has been called the self-organisation of dynamical systems.

    Boris,

    I mean irreversible as in having a distinct arrow of time in the basic equations that provides a different answer for t and -t. This could indeed be shown as going from a point to a region in phase space.
    A simple example is diffusion in gases: a mixture of two gases will never "unmix" itself by itself.
    A much more dramatic example is the conception and subsequent growth of a human embryo. There is absolutely zero chance that this new being will unconceive itself into an egg cell and a sperm again. This is one of the most beautiful examples of self-organisation and irreversibility.

    You see, chaos only reigns at the trajectory level of the description in phase space; if you go to the probability description you get order.
    Perhaps a small example can illustrate my point more clearly. I am taking this from Prigogine's book "The End of Certainty", which is a very accessible and interesting book that I recommend very much.
    Take the recursive equation: X_(n+1) = X_n + 1/2 modulo 1 (X_n is the n'th iterate of the recursive formula).
    This has a very predictable and periodic behavior. For example, take X_0 = 1/4; then we have X_1 = 3/4, X_2 = 5/4 modulo 1 = 1/4, and so on. Instead of considering individual points we can examine ensembles of points, which I write as P(x), the probability distribution of x. A trajectory description then corresponds to a specific distribution called a delta function; it can be written as P_n(x) = \delta(x - X_n) (with \delta being the Greek letter, a function that vanishes for all values of x except x = X_n).
    We can formally write the relation between P_n and P_(n+1) as P_(n+1)(x) = U P_n(x). U is the so-called Perron-Frobenius operator which, for example, in quantum mechanics is used to describe the time evolution of a system.
    As a special case for trajectories we have:
    \delta(x - X_(n+1)) = U \delta(x - X_n), which is exactly the same equation as the first. There are no solutions that cannot be expressed in terms of trajectories. This is the case for a periodic map; if we take a chaotic map like the Bernoulli map we get the following equation:
    X_(n+1) = 2 X_n modulo 1. This is a deterministic equation, because once we know X_n we also know X_(n+1); however, every non-rational number will result in utter chaos, deterministic chaos. One could argue that the result is still periodic but with an infinite period. We can show that any number will be approached arbitrarily closely after an infinite amount of time. We have a dynamical system leading to randomness.
    If we switch back to the statistical formulation of the Bernoulli map we get a totally different view: after 4 steps the distribution finds stability in a flat distribution. The chaos is gone! The equivalence between the statistical description and the trajectory description is broken. After only 4 steps we already know the system has a flat distribution and is therefore completely random; this knowledge we find only after an infinite amount of iterations in the trajectory description.

    This is only a very simple example of course, but it shows that probabilistic descriptions can actually provide more information than trajectory descriptions.
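    (A small Python sketch of the contrast described above: a single Bernoulli-map trajectory wanders erratically, while a histogram over an ensemble of initial points flattens out within a handful of iterations. The ensemble size, seed and bin count are arbitrary illustrative choices.)

```python
# Bernoulli map x -> 2x mod 1: a single trajectory wanders erratically,
# while a histogram over an ensemble of initial points flattens toward
# the uniform distribution within a handful of iterations.

import random

def bernoulli(x):
    return (2.0 * x) % 1.0

# One trajectory from an arbitrary seed: no visible pattern.
x, traj = 0.1234567891, []
for _ in range(8):
    x = bernoulli(x)
    traj.append(round(x, 4))
print("single trajectory:", traj)

# An ensemble of points, initially bunched together in [0, 0.1).
ensemble = [random.uniform(0.0, 0.1) for _ in range(100000)]
bins = 10
for step in range(7):
    counts = [0] * bins
    for x in ensemble:
        counts[min(int(x * bins), bins - 1)] += 1
    print(f"step {step}: fraction per bin =",
          [round(c / len(ensemble), 3) for c in counts])
    ensemble = [bernoulli(x) for x in ensemble]
```
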

    Boris, I'm utterly shocked to hear you describe the brain as deterministic, while it is the example par excellence of self-organisation from randomness! We have neurons transmitting information to each other in an apparently random way while the end result is a coherent thought process. Although 'coherent' is not really the right word here: if we stop and follow our thoughts for a moment about anything, they follow a really strange path of associations and eliminations of nonsense. This has nothing to do with determinism, because that would mean I could somehow unthink a certain idea that just occurred to me. This of course is utter nonsense.
    I claim there is no comparison whatsoever between the Turing machines that our computers are and the self-organising processes that go on in our brains.
    The apparent determinism, as you call it, would have a very big problem if, for example, a part of the brain suddenly stopped functioning because of a tumor or some other traumatic event. However, apart from some severe headaches, the person still retains most of his or her cognitive capacities. There is no way a deterministic system can handle this.

    It seems to me that people have concentrated too much on the negative aspects of chaos. It also has a negative connotation; however, during the past decades it has become more and more apparent that chaos is actually very positive. That is the way life and evolution work. I would like to refer here to the work done in microbiology and how quasispecies provide mechanisms for evolution.

    ------------------
    "If I have been able to see further, it was only because I stood on the shoulders of giants."
    Isaac Newton
     
  12. Plato Registered Senior Member

    Messages:
    366
    Metaphysical plea for indeterminism.

    First of all, I would like to point out that, being an atheist, I don't like order as a prerequisite for obtaining a universe. This reeks too much of a god concocting it all.

    Second, I claim that in a deterministic universe time doesn't really exist as such. What is time ultimately? It is something that is very closely related to change; however, nothing really changes in a deterministic universe. I mean by this that if you know all the preconditions to the infinite decimal point, you know everything there is to know and can predict the future and reconstruct the past. This means that each timeslice of the universe holds in itself all the information of all other timeslices. Therefore I would call time an illusion in a deterministic universe (as did Einstein on several occasions).
    However, time as change is the basic dimension of our existence: if there is no change, there is no existence. Non-existence never changes, for there is nothing to change; everything that doesn't change (and I mean never, as in through all time) doesn't really exist. So a deterministic universe doesn't exist!
    This looks like one of those nasty syllogisms, but actually it isn't. If you think about it, determinism takes away everything that makes life worth living; it puts life on the same level as death. It puts existence on the same level as non-existence, and that is actually a contradiction.

    Something else that strikes me in the belief that it would be possible to know everything is that this very much resembles the Icarus story. It is a longing to become God, to be able to obtain immortality and become detached from the universe itself.
    It is also a disturbing belief in the controllability of the universe, or, in society, something that could serve very well as a base philosophy for a totalitarian regime. Such a regime also has a very strict set of rules and tries to mold society in that way. The reason why these regimes never succeeded is ultimately because the universe is not governed by strict laws but rather searches its own way.
    In eastern philosophy nature is something that is not describable by laws but rather something that becomes by itself. I like this very much. I also think that eastern philosophers and religious thinkers are much less anthropocentric than our western thinkers, who ultimately always need some form of intelligence behind everything. Because, after all, God is the ultimate anthropocentricity that we assign to nature...

    ------------------
    "If I have been able to see further, it was only because I stood on the shoulders of giants."
    Isaac Newton
     
  13. Boris Senior Member Registered Senior Member

    Messages:
    1,052
    Plato,

    I can't believe I am about to lecture a physicist about the foundations of his own trade, but I am getting a distinct impression that you are not hearing what I'm saying. So, for better or worse, here it goes...

    Ok, let's dwell on that one for a while. If the collisions between gas molecules are perfectly elastic, then indeed we can build a simulation of an arbitrary number of gas molecules, take it through some n state updates, then exactly reverse the momenta of all the molecules, and watch the system re-establish its initial conditions. Deterministic indeed.

    Of course, in real life the collisions are not perfectly elastic; every now and then energy is leaked and absorbed as photons. If we were to simulate the system to that level of detail, then to reverse it we would need to reverse the outbound photon trajectories, as well as make sure that subatomic particles are simulated at a sufficient level of detail that they will remember what happened to them in the past, so that they can re-enact such events in reverse if we reversed the momenta of their respective components. Of course, here we are faced with the problem of deterministically simulating both photons and other "particles" of matter/energy -- and so far, we do not possess sufficiently detailed models to do so (in fact, all we currently possess are crude models that describe the statistics of trajectories, rather than the actual mechanisms causing the trajectories.) So you may cry foul all you want, and for now you are right -- deterministic computational understanding of reality at all levels is currently nonexistent. However, I would not vouch for the future if I were you.

    Nothing but a more complex version of mixing gases. Still the same type of matter/energy, still complex inelastic reactions, still complex interactions with environment, but every one of those reactions and interactions are causal and therefore deterministic.

    I said it before, and I'll say it again. There are two kinds of knowledge: applied, and theoretical. The former is more concerned with what is observed, and doesn't care as much about how the observations are brought about. The latter is concerned with the mechanisms that give rise to trajectories. You seem to be having trouble separating the two kinds and realizing that their objectives are quite different -- even though both fit equally well under the aegis of science. The primary difference, I suppose, is that you can always derive a probabilistic description from a trajectory description, but not the other way around. So you ought to realize that one of these levels of description is more fundamentally powerful than the other (although not necessarily more usable or indeed useful in applied endeavors.)

    This is just what I've been talking about. Sure, a statistical description may provide you with some useful insights about the system, but it will never allow you to understand how the system really works, what are the rules that guide its intrinsics. In fact, if you wanted to know that, you'd have to somehow discover the formulaic definition of the Bernoulli map -- and once you do, then you will have gained knowledge of how the system works, not just what the results of all that work look like.

    Here, you are confusing amounts of information with types of information. The statistics of outcomes vs. trajectory descriptions are not referring to the same qualities of the system; they are not mutually exclusive but in fact complementary. However, you must note that you would never be able to derive a statistical description of the system unless you first knew the deterministic formulation of what the Bernoulli map is in the first place. This is the difference in descriptive power I've been trying to point out all this time.

    What randomness? Neither the components of the brain, nor of the environment within which it develops and functions, are random; they all follow the same set of laws, and they are all driven by causal interactions. Causality == determinism, n'est-ce pas?

    Apparently random, but not actually random. Consider, if you knew nothing about computers, and were given a chance to observe real-time contents of registers in the CPU, what your conclusion would be as to the nature of the process that changes those contents. You'd probably say it was random -- at least that's my guess. All the while, we know that particular process is deterministic as can be.

    I keep making analogies between computers and brains, largely because it is indeed true that fundamentally they are the same -- deterministic information processors. Yes indeed, the brain is a rather sophisticated Turing machine. You must have missed that bit I wrote about the utterly deterministic input/output behavior of neurons. An entire branch of science is burgeoning due to that fact, called neural computation. Welcome to the year 2000!

    Plato, you can no more unthink an idea, than a broken Humpty Dumpty can reassemble itself. Remember that funny branch of science called thermodynamics? Time never reverses itself. The only way to achieve a complete reversal of the system, is to simultaneously and precisely flip all of its momenta -- which is statistically next to impossible for most systems, and physically impossible altogether (because everything is causal, and in a closed system a complete reversal cannot occur spontaneously.) Hence, the "arrow" of time. What would we do without inertia, eh?

    A brain can, at least in principle, be modelled at an atomic level on a computer. Anything that can be precisely modelled on a computer, is a computational process and is describable by a Turing machine. Like it or not, brains are Turing machines. Otherwise, you are going to have to provide me with a fundamental (and not just practical) reason why a brain cannot be completely functionally simulated on a computer.

    Now you are treading on ground that will burn your feet. In fact, the brain is severely vulnerable to injury, and most brain traumas that take out a region of the brain indeed result in disability (which is the more severe the larger the disabled chunk of the brain is, and the later in life the injury occurs.) Some aspects of cognition are relatively robust to injury, but only as long as only peripheral functionality is damaged; every brain process has a critical brain region without which it can no longer function at all. Some of the brain's robustness is due to its holography-like representation of information, where data is not stored in fixed locations, but rather is spread out across a network of storage devices; taking out part of the network in such situations will not destroy the data, but only degrade it a little bit. This has been demonstrated quite convincingly with many computational neural network models. In fact, these days one of the standard methodologies in neural network research is "lesioning" a part of the network and observing the effects of the lesion on the network's performance (which is essentially a measure of how localized vs. distributed the network's representation is). Another feature that helps the brain withstand injury is the redundancy of many functions due to its bilateral symmetry -- similar to how your body is somewhat resistant to injury due to the fact that you have a backup kidney, a backup lung, a backup eye, etc. in case one of them gets damaged or destroyed.
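    (A toy Python/NumPy sketch of the "lesioning" idea; the Hopfield-style associative memory, the pattern sizes and the lesion fractions are illustrative choices, not a model from the literature referred to above.)

```python
# Toy distributed ("holographic") memory: a Hopfield-style network stores
# patterns across the whole weight matrix, so "lesioning" (zeroing) a
# fraction of the units degrades recall gradually instead of wiping the
# stored pattern out.

import numpy as np

rng = np.random.default_rng(0)
n_units, n_patterns = 200, 5
patterns = rng.choice([-1, 1], size=(n_patterns, n_units))

# Hebbian outer-product storage rule, no self-connections.
W = sum(np.outer(p, p) for p in patterns) / n_units
np.fill_diagonal(W, 0.0)

def recall(weights, cue, steps=10):
    state = cue.copy()
    for _ in range(steps):                 # synchronous updates
        state = np.sign(weights @ state)
        state[state == 0] = 1
    return state

target = patterns[0]
noisy_cue = target * rng.choice([1, -1], size=n_units, p=[0.9, 0.1])  # ~10% flipped bits

for lesion_frac in (0.0, 0.1, 0.3, 0.5):
    lesioned = W.copy()
    dead = rng.choice(n_units, size=int(lesion_frac * n_units), replace=False)
    lesioned[dead, :] = 0.0                # knock out all connections of the
    lesioned[:, dead] = 0.0                # "dead" units
    overlap = np.mean(recall(lesioned, noisy_cue) == target)
    print(f"lesioned {lesion_frac:.0%} of units -> fraction of bits recalled: {overlap:.2f}")
```
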

    So indeed, you can't be more wrong. The brain is a deterministic system, and deterministic brain models are taking lightyear strides in the enterprise of understanding cognition. For some discussion on the deterministic and material nature of the brain, as well as some mention of interesting brain pathology results, see the "souls...................oh Boris" thread (http://www.exosci.com/ubb/Forum8/HTML/000245.html) in the Religious Debate forum.

    Once again, nobody is belittling statistical models. What is at issue is the fundamental nature of reality. Chaos can arise from order. But order cannot arise from chaos. Even though chaos-based, statistical, or Monte Carlo approaches are useful in learning about the world or even gaining control of the world, we must not lose track of the fact that the ground upon which all of these methodologies stand is purely deterministic, and that they merely give us generalizations of systems while doing nothing to elucidate the details of the inner function. The fundamental mechanisms that drive reality must by necessity be causal if they are to give rise to our causal world. And causality is just another word for determinism.


    With regard to your "physical plea".

    This is ridiculous. Why should order entail design? I think you are overgeneralizing based on your anthropomorphic perspective.

    I concur, partially. The "continuum" of past-future-present is an illusion; there is only the present. However, change does occur, and therefore the universe is indeed evolving and not static. So, of course, there is still the "arrow" of time, however improperly we have tended to perceive it to date.

    Here, you are playing on words. A non-interactive computer program is entirely deterministic, so that given its precise state at any moment, you can trace back to all of its past states, as well as predict all of its future states. Nevertheless, the computer program is still doing something, accomplishing some goal, and oftentimes, I'd say, it even has a purpose.

    The "existence" you refer to is from a perspective that is somehow external to our universe. From that point of view, the universe is indeed either just a static object, or a record playing itself out. Mind you, the object (or the record) still exists -- even though it does not interact with its external environment, whatever that environment may be (and even if such an environment exists.) So if the universe indeed lives in some larger world, but is completely closed off from that world, then you could say that to that external world the universe does not exist. For us, however, the perspective changes. Being part of the universe's state, we are directly involved and are part and parcel of state change across timeslices. To us, there is indeed change, and there is indeed existence.

    What is it that makes life worth living in a nondeterministic universe vs. a deterministic universe, may I ask? And there only are contradictions if you choose to define your terms in a contradictory fashion. Paradigm shifts usually involve novel use of language; you cannot use a nondeterminist vocabulary on determinist issues and expect reasonable outcomes. But a challenge I'd like to fire back at you is: how can any type of ordered existence whatsoever be possible in a universe where there is no order at the most fundamental level?

    In fact, as a determinist, I have successfully argued that it is impossible to know everything, even if you learn all the deterministic rules and all the state fundamentals of the universe's operation -- see the "Contradictions" thread (http://www.exosci.com/ubb/Forum8/HTML/000175.html) in the Religious Debate forum. In fact, having complete knowledge of the mechanisms that drive reality does not constitute a complete knowledge of reality; a God's perspective would be to not only know all of the reality's mechanisms, but to also know the reality's complete state during some timeslice (which is what is impossible).

    This is even more ridiculous. There is nothing about determinism that dictates one social policy or the other; in fact, there is no mention of human societies at all, nor should there be. Determinism is about the fundamental nature of reality, not about how we should behave ourselves. And coming from a physicist, the statement that "the universe is not governed by strict laws" is baffling, to say the least. What seems to be the case in terms of humanity, is that humans are not idealized nor identical either in their individual structure or function. Therefore, they will never as a population completely comply with a rigid set of rules. That is why there has always been crime, and always will be. And in case you haven't noticed, capitalistic societies, which I assume you insinuate are successful, are also invariably governed by pretty rigid rules and laws.

    Which begs the following question: just how exactly is this process of "becoming by itself" accomplished? Which is where determinism comes in.

    ------------------
    I am; therefore I think.
     
  14. Plato Registered Senior Member

    Messages:
    366
    Boris, I don't think I will top the length of your message, but then again, I think you hold the record on that.

    Isn't it? I like to think it is.

    Let me let you in on a secret. Did you know, for example, that the so-called fundamental interaction 'constants' of all four basic interactions are not constants at all? What actually happens is that as two particles get closer and closer together, these so-called constants change. Hence the search for some unifying force that emerges at the highest energies or (equivalently) the smallest distances.
    You must think of the current values of the 'constants' as some kind of settlement with the current average energy level of our universe. This means that even the fundamental forces are subject to evolution. Don't worry, however, there is still a nice formula that explains the change, but it is the writing on the wall.
    What I'm saying is that all those nice formulas we keep finding to explain everything might just be a temporary view on how the universe behaves in our observable region of space. This would mean that the universe itself actually evolves and adapts...

    I agree with the first part but not with the last. You see, if reality is probabilistic at the fundamental level, then nothing prevents causality from still occurring. For something to have some chance of occurring there still needs to be a reason, but if something occurs it needn't exactly follow that there is a cause for it.
    Causation is only a necessary constraint, not a sufficient one as it is in mechanistic causation.
    Take for example an unstable atomic nucleus: physically there is absolutely no difference between a nucleus that is about to decay and one that isn't. The necessary condition for the decay is an unstable nucleus, but the actual emission of, for example, an alpha particle has no cause at all.
    This is no defeat of our human endeavor to understand our universe, quite the contrary, it opens new ways that have been closed until now. It provides a new fresh way to look at the world.

    Furthermore, you seem to think of statistical information as inferior to knowledge of each and every particle of a gas, for example.
    I'm afraid you will never understand self-organisation if you don't go to the statistical description of a dynamical system. You see, if you take trajectory descriptions of, let's say, turbulence in a gas, all you find is chaos; if on the other hand you take statistical descriptions you see the fundamental mechanisms, for example how a vortex is formed and how it behaves. If you are after fundamental mechanisms, you will get nothing out of deterministic, time-symmetric equations.

    You see, it is exactly this attitude that I resent, but it is what determinism ultimately leads to. A true determinist can't possibly have any respect for life. What is more, he can't even have respect for the gases and chemicals of which he himself is composed.
    Mind you, I'm not pleading for some kind of world spirit here, but only for a certain acknowledgement of where we come from and what we are made of. You see, 'lifeless' matter must have on some basic level the same qualities that we have, and one of the most treasured of these qualities is choice.
    This, by the way, is the reason why I prefer an indeterministic universe over a deterministic universe: choice.

    You know, I find it pretty concerning that you claim that observed knowledge is inferior to theoretical knowledge. The next thing you will say is that we had better stop doing those messy experiments altogether and construct our own theoretical paradise of how the universe should be...
    For me observation still has precedence over theory, even though I am a theoretical physicist by education. We can think and ponder as much as we like; what counts is what we observe.

    Oh yes, one more thing. The reason why we can model the functions of our brain on a Turing machine is because we are actually only modelling the statistics, and these behave deterministically, as in according to some formula. Again you are confusing what really happens with a model.

    ------------------
    "If I have been able to see further, it was only because I stood on the shoulders of giants."
    Isaac Newton

    [This message has been edited by Plato (edited May 10, 2000).]
     
  15. Plato Registered Senior Member

    Messages:
    366
    Why does determinism lead to some form of god or creator ?

    As I said, in a deterministic universe each timeslice holds all the information of all other timeslices. This means that the actual information content of the universe is a closed system and is static.
    If we look at the big bang event, this information needed to be there, but how can such information exist in a singularity? Or, if we go just before that (if we can speak of such a thing as before the big bang, of course), how can this information be there in the nothing?
    Somehow at the big bang there must have been some kind of information transfer into the otherwise closed deterministic universe. This can only point to some kind of creator. I know the believers will like this argument, but then they must realise there is no such thing as Free Will.
    If, however, the big bang itself was some kind of self-organising system, information was created along the way and is still being created now.
     
  16. Crisp Gone 4ever Registered Senior Member

    Messages:
    1,339
    Hi Plato,

    Constants that aren't constants, no problem. We just replace them by an adequate function c(t) and we have deterministic behaviour again. The universe evolves? So what? If we study it long enough then we could even find a good model (and I have to be careful not to contradict myself with that word) that predicts this adaptation.
    Please note that I am not raising the question whether we (meaning the current and coming generations) will "discover" this magical formula; from my belief in a fundamentally deterministic view of the universe, I am convinced that one day we will.

    From my personal point of view this tells me that the theory that cannot predict the exact time of emission is a flawed theory. To counter this argument (rather badly), why don't all the unstable nuclei emit their alpha-particles all at once ? This doesn't occur, and hence I am forced to conclude that there is some mechanism that determines why some nuclei emit them today and some will in 1000 years.

    How can you still live with yourself ? If things happen without a cause, then you could disintegrate without any apparent reason into trillions of nuclei just when reading this message. If things happen without reason, then the whole mechanism of cause and effect is destroyed, and hence about 99% of all physics we know (which is rather heavily based on this particular assumption).

    I totally agree on this one. A fresh new look might give us more answers and aid us in our way to comprehend the universe and in our efforts to describe it (deterministically). Without fresh new looks we'd still be in the 15th century kind of "technology".

    I disagree. A determinist can have an even higher appreciation of life than an indeterminist can. The determinist will step back and have a look at the wonderful theory of everything he just wrote down, marvel at its beauty and say "the mechanism of life is magnificent".

    Also, I have to agree with Boris when he says you use the paradigm of determinism to prove your point of indeterminism (as you know, everything about the big bang is fundamentally deduced from deterministic laws). But on the other hand, we can never ask you to deduce a big bang model based on indeterminism.

    As a sidenote I would like to add that, after attending a lecture by Prigogine, I also had my doubts about a completely deterministic universe. However, when I had some more thoughts on what he said, I was not satisfied by his answer: his conclusion was that because we simply cannot know all the equations of state of every elementary particle, there is non-linear and chaotic behaviour. (This is a very rough summary of what I've understood from what he said - the man's English sounded more like German.) Perhaps it's just because I am still a newbie in this world, but I still have to find a good argument that convinces me of indeterministic behaviour at a fundamental level.

    I must say I was as baffled as Boris was, but heck, what the h*ll do I know?

    Bye!

    Crisp

    --
    "The best thing you can become in life is yourself" - M. Eyskens.


    [This message has been edited by Crisp (edited May 10, 2000).]
     
  17. Plato Registered Senior Member

    Messages:
    366
    Crisp,

    calm down, sit, have a cigar.

    We have always been taught that there is a one-to-one relationship between cause and effect; that is why we can't help but conclude that everything must be deterministic. There are other logics, however, that do not adhere to this confined principle.

    I know this constitutes a severe paradigm shift, but a lot of the research in many different fields of science points in this new direction.
    The struggle of people like Bohm to put some determinism into quantum mechanics is really almost desperate. I like to compare it with the epicycle theory of Ptolemy to explain the retrograde motion of some of the planets.

    You see, ultimately determinism and evolution are incompatible; it is just crazy to propose that the ultimate rise and fall of the dinosaurs was already predetermined from the moment that gravity split itself from the GUT force. You must understand that the baby universe simply could not contain that much information, so where does it all come from? You have the same problem with the development of a brain from a strand of DNA: there isn't enough information in the DNA to form something as complex as a brain, so where does the information come from?

    What we need is a new kind of logic that incorporates deterministic behavior as a special case of a system from which all choices have been removed. This means that deterministic behavior is still possible, but only under strict rules.

    Maybe I was a bit obscure on that. I'm not saying that things happen without a reason, but I'm saying that the same reason can have different effects. As in the case of an unstable nucleus: the same reason (namely being unstable) can have different effects (namely one nucleus decays now, the other decays in 1000 years).
    Quantum mechanically there is absolutely no difference between different unstable nuclei of, for example, C14 (or any other particles of the same kind); they are interchangeable, so we are really talking about the same reason resulting in different effects.

    You see, this is dialectic thinking; the one doesn't exclude the other. I'm not making a case for complete indeterminism here, I'm rather arguing for something like incomplete determinism, of which complete determinism is a special case, hence still valid under certain conditions.
    I also think that the way we think about the big bang will be severely changed once the new physics that is emerging has found its way into cosmology.

    However, I envy you, Crisp, that you could attend one of Prigogine's lectures. If you know of another lecture of his, please let me know, maybe I will come too.
    (Then perhaps we could also get to know each other in real life sometime.)

    ------------------
    "If I have been able to see further, it was only because I stood on the shoulders of giants."
    Isaac Newton
     
  18. Boris Senior Member Registered Senior Member

    Messages:
    1,052
    Plato,

    I concur with Crisp that variability in constants or indeed even laws is irrelevant. What matters is that any such variability be systematic and be itself causal.

    As to your allusion to probabilities being a fundamental property of reality, this is where our fundamental difference arises. We might as well debate nothing else, other than the issue of whether probability, or a mechanism, comes first.

    So, suppose that reality is indeed fundamentally probabilistic. What does that entail? Well, I shouldn't be too far off to assume that it entails certain events have a higher probability of occurrence than others. Now, the question becomes: why should some events be more probable than others? If you assume that probability is a fundamental building block, you cannot provide a satisfactory answer to such a question; in fact, the only answer you can come up with is "just because it is this way." Which is not too far from the kinds of answers you get from religious people when you inquire why God has this or the other attribute. From a fundamentally determinist perspective, I will be able to tell you precisely why the probabilities are the way they are (e.g. a fair die has 6 sides with 6 different numbers, so the probability of seeing each number on a random toss is 1/6 -- but notice that I had to know something about the die (or indeed have a concept of a die) to provide such a derivation.) Of course, to be able to take advantage of a fundamentally determinist model, you have to have the model first. It is not something we currently have, and personally I don't even like the best running candidate (M-theory) all that much (simply because it assumes too many primitive entities for my taste).

    You could argue that there is really no difference between knowing why certain probabilities exist in a nondeterministic framework, versus knowing why the fundamental building block exists in a determinist framework. The difference is, determinism indeed aims to reduce everything to just one, simple, and self-referential building block. A probabilistic approach will merely result in gazillions of different probability distributions for different situations, and will not be able to reduce that complexity to something simple and beautiful. From an aesthetic standpoint, determinism is certainly more promising. And even supposing that either approach ultimately can give the same levels of control over reality, you still must concede that while probability density functions can be derived from a deterministic description of the system, a reverse derivation is mathematically undefined. Therefore, a deterministic formulation is fundamentally more powerful, as well as far more concise. Furthermore, it is potentially much more insightful, since you can derive new and unexpected probabilistic solutions from a deterministic description -- but if your starting point is probability itself, you can only describe what you have observed so far, and you cannot induce previously unseen phenomena.

    You are making a circular argument. You are arguing that causality is not sufficient for mechanistic causation, based on the assumption that nothing actually causes an unstable nucleus to decay. In case you haven't noticed, not only is your argument circular, but it is a classical mistake. You assume a negative, and hence assume a passive role of waiting for someone to prove you wrong. Provided everyone agreed with you, zero progress would be made. If, on the other hand, you assumed a positive, then you might be continuously trying to come up with a model that indeed explains all the probabilities in a deterministic fashion. Should you or any of your co-workers eventually succeed, you will have on your hands a beautiful theory that not only completely explains all the bird's-eye observations of nondeterminists, but probably also makes lots of empirical predictions -- which make your theory testable. And yes indeed, testability -- we completely forgot about that crucial criterion, haven't we? The reason why determinists have had such a difficult time formulating a theory of everything is because their approach indeed leads to testable predictions -- and so far they have been unable to pass the tests. Another reason is that we still don't have a good idea of what reality actually looks like -- heck, we are still missing 90% of the stuff in this universe! But just because it's difficult doesn't mean it's not worth it. In my view, a fallback on empirically-grounded mathematical models with empirically-determined constants is a bit of a copout in the ultimate quest for understanding. We certainly need empirical science in order to tell us what the world looks and acts like -- but such shallow understanding should not be our ultimate goal. What should be our ultimate goal is to understand *why* the world looks and acts like it does.

    With respect to your decaying atom argument, let me make an analogy. We know that every year some 30,000 people die in car accidents in a certain country, with the overall probability of any person dying in an accident within one year being something like 1 in 10,000. Now, these people are actually otherwise immortal, so their only way to die is through a car accident. Given one such person, can you predict when they will die? Absolutely not, since the person does not carry such information within them; the best you can do is build a probability density function, and talk about likelihoods of a person dying within some time period (in fact, given a constant death rate, your model would be a decaying exponential.) Yet, when the person does die, you know with certainty that they had a car accident (because they couldn't die through any other mechanism.) So indeed the event of their death was deterministic and strictly causal. But how come you couldn't predict it? Simple: to predict that event, you would have had to know the complete state of not just that one person, but of the entire environment of that person, which, depending on whether nonlocality is real, may be the entire universe. So there is your answer: even though the event itself is deterministic, there is no way of predicting it, because there is no way of knowing the precise state of the entire universe. My and Crisp's argument is that the same is true of decaying atoms, and indeed of any other random variable in the universe.
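    (A quick Python sketch of the statistics appealed to here: with a constant 1-in-10,000 yearly risk, taken from the example above, simulated waiting times pile up in a decaying exponential histogram even though, in the story, every individual accident has a definite cause.)

```python
# Constant hazard: every simulated "person" faces the same 1-in-10,000
# chance per year of a fatal accident.  Each individual outcome is
# unpredictable, yet the waiting times pile up in a decaying (geometric /
# exponential) histogram with mean ~10,000 years.

import math
import random

RATE = 1.0 / 10000.0                     # per-year probability from the example above
PEOPLE = 200000

def years_until_accident():
    # Inverse-transform sample of a geometric waiting time with success prob RATE.
    u = random.random()
    return int(math.log(1.0 - u) / math.log(1.0 - RATE)) + 1

lifetimes = [years_until_accident() for _ in range(PEOPLE)]
print(f"mean waiting time: {sum(lifetimes) / PEOPLE:.0f} years (expected ~10000)")

# Counts per 10,000-year bin fall off by roughly a factor e per bin.
for k in range(4):
    lo, hi = k * 10000, (k + 1) * 10000
    frac = sum(lo < t <= hi for t in lifetimes) / PEOPLE
    print(f"fraction with accident in years {lo:6d}-{hi:6d}: {frac:.3f}")
```
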

    We are after different levels of understanding here. First of all, I don't just "think" that there is more information, mathematically speaking, in a more detailed model -- <u>I know so</u>. And so do you. The problem with understanding is that our cognitive resources are limited, and we cannot keep track in our heads of billions of individual entities, each with hundreds of parameters, all interacting simultaneously with one another and with their immediate environment. This is why we need to compress information in order to get it into our heads. So sure, to understand the world at a cognitive level, we need to simplify and abstract away -- and there is nothing wrong with that. However, the type of understanding that I am talking about is not the kind that directly allows you to easily grasp all of the implications of a model. Rather, if you had sufficient computational resources at hand, the type of understanding I mean would allow you to carry out timestepped simulations of a complex dynamical system and observe the results. If the simulation matches observations, then you know the fundamental model is correct. Also, starting with the fundamental model, you can indeed proceed to build up a framework of escalating generalizations, probabilistic or otherwise -- but this time, you are mathematically assured of never missing anything important, because at each step of abstraction you can derive an exact measure of how much information about the system is lost in the process.
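    For what it's worth, that "exact measure of how much information is lost" can be made literal with Shannon entropy. A toy sketch of my own (the microstates and their probabilities are entirely made up): take a detailed distribution over microstates, lump them into coarser macrostates, and compare entropies -- the fine-grained description always carries at least as many bits.

```python
import numpy as np

def entropy_bits(p):
    """Shannon entropy in bits of a discrete distribution."""
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

# A "detailed" model: 16 microstates with arbitrary (non-uniform) probabilities.
rng = np.random.default_rng(1)
micro = rng.random(16)
micro /= micro.sum()

# An "abstracted" model: lump the microstates into 4 macrostates of 4 microstates each.
macro = micro.reshape(4, 4).sum(axis=1)

h_micro = entropy_bits(micro)
h_macro = entropy_bits(macro)
print(f"fine-grained description  : {h_micro:.3f} bits")
print(f"coarse-grained description: {h_macro:.3f} bits")
print(f"information discarded by the abstraction: {h_micro - h_macro:.3f} bits")
```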

    This is a matter of your personal opinion. I still do not understand why you are so eager to prescribe such a detached attitude for me. In fact, I could construct a diametrically opposite stance based on determinism -- that the entirety of the universe is just as precious and just as wonderful as the tiny subset of it that we call life. Respect is a matter of personal attitude; it has nothing to do with knowledge of the universe's fundamental nature.

    And do you remember why people used to prefer a Newtonian clockwork space over an Einsteinian spacetime continuum? As I recall, it was all about <u>absolute frame of reference</u>. Back then, people were so used to that notion, it was so natural to them, that they simply couldn't accept the suggestion that absolute references are only an illusion. You are facing a similar conundrum: you are refusing to accept the suggestion that choice is nothing but an illusion -- a suggestion I and many others are making. However, I have the upper hand in this debate, because at the very least on the human level, I understand that the cognitive machine is a deterministic state automaton. So at least with respect to <u>us</u>, I *know* that choice is an illusion. It may take you a while to see this, but science is squarely on my side this time around, and this time I am the expert.

    You are misinterpreting my position. What I said, is that theoretical models carry more information than empirical models. Naturally, empirical models must come first, but theoretical models are the ultimate end-result. This is why in the end theory is always superior to practice when it comes to actually understanding what is happening, rather than merely memorizing a rather arbitrary observed association between inputs and outputs. In studying, there are two fundamental approaches: rote memorization, and deep understanding. With the former, you are stumped by novel situations and must possess a heck of a memory to perform well in classical situations. With the latter, you don't need as much memory to explicitly remember all the formulae and results; you can derive many of them based on your deeper understanding of the system; furthermore, you will be able to successfully tackle novel situations, because you understand not just how a system responds under classical conditions, but why.

    You are simply wrong. As computer power grows, the brain is increasingly modelled at synaptic and molecular levels. True, at those levels we still use statistical abstractions to describe ion transport or neurotransmitter degradation, but at the level of neural behavior, we have complete determinism. And it is only at that level that cognition occurs -- therefore, a cognitive machine is a Turing machine, regardless of the actual substrate (and I claim that all substrates are ultimately Turing-describable anyway, at least in principle.)
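    To illustrate what "complete determinism at the level of neural behavior" looks like in a model -- a sketch of my own using a textbook leaky integrate-and-fire neuron, not any particular brain simulator -- note that given the same parameters and the same input, the simulated spike train comes out identical every run:

```python
import numpy as np

def lif_spike_times(input_current, dt=1e-4, tau=0.02, v_thresh=1.0, v_reset=0.0):
    """Leaky integrate-and-fire neuron: dv/dt = (-v + I) / tau, spike when v >= threshold."""
    v = 0.0
    spikes = []
    for i, I in enumerate(input_current):
        v += dt * (-v + I) / tau
        if v >= v_thresh:
            spikes.append(i * dt)
            v = v_reset
    return spikes

t = np.arange(0, 0.5, 1e-4)
I = 1.5 + 0.5 * np.sin(2 * np.pi * 10 * t)   # a fixed, repeatable input signal

run1 = lif_spike_times(I)
run2 = lif_spike_times(I)
print("identical spike trains on repeated runs:", run1 == run2)
print("first few spike times (s):", [round(s, 4) for s in run1[:5]])
```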

    As for your other posts:

    Indeed, there was -- and the information content of the universe has been fixed ever since. That is why whenever information is concentrated in one place, it must be diluted elsewhere -- which is just another way to paraphrase the laws of conservation of energy and matter.

    No, it does not point to a creator at all. It simply means that the universe emerged somehow -- and nobody knows how. Indeed, a determinist view gives absolutely no indication of where the observed universe came from. If, on the other hand, you assume a continuous influx of information into the universe since the Big Bang, then you are indeed implying a meddling Creator! Talk about getting it backwards...

    Plato, just because something sounds crazy to you doesn't mean that it cannot be true. If you want to really demonstrate that evolution and determinism are incompatible, you are going to have to construct a mathematical proof. Until then, all I can conclude is that you simply find determinism too incredible to accept -- in which case, you must be growing old.

    Indeed, the claim is that the baby universe contained all of the same information that it contains now. It contained all the matter and energy, and all the spacetime, that it contains now -- didn't it? And units of matter/energy/spacetime are merely units of information.

    The answer is easy: the information is contained both in the DNA and in the environment within which the brain develops. In fact, DNA is meaningless without an environment -- and a common mistake is to assume that all information needed to construct a human is contained in DNA.

    Your problems with determinism result from the fact that in order to create an abstract representation of some entity, you decouple it from its environment. This is when you begin to have events that apparently had no cause inherent in your definition of the entity. Of course such events will have no cause, because you have removed the environment -- the source of cause itself -- from the equation.

    ------------------
    I am; therefore I think.
     
  19. Plato Registered Senior Member

    Messages:
    366
    1. Interchangeability of elementary particles. There is actually far less information lost in a statistical description of a gas than in a statistical description of a population in which each individual actually is unique. Molecules are not unique, they are all the same. The only things that differ between two atoms in a gas are their momentum and position (assuming that the kinetic energy of the particles is low enough not to excite them to higher energy levels).
    2. Influx of information at the beginning of the universe. What is this, we don't know where it came from? Suddenly we don't want to know any more? The famous urge to know everything has come to a full stop? Besides, you forget something: the baby universe couldn't possibly contain all the information that it contains today, because it was smaller. Yes, I know, the only thing that has changed is simply that there is more 'empty space', but this empty space has a direct impact on the matter/energy contained in it: it poses new boundary conditions. So actually the information content of the universe has been and still is increasing, and this alone is enough to refute deterministic theories. And then I haven't even mentioned the vacuum fluctuations that pervade so-called empty space -- more and more influx of unknowns that disable your nice deterministic model!
    3. What are the mechanisms that make probability distributions that are not flat? Boundary conditions! You seem to be unable to detach probability from some kind of underlying mechanism (e.g. dice). Observe the hydrogen atom: the central proton provides a boundary condition for the bound electron, hence its probability of being found close to the proton is much higher than farther away. Ok, you could object that I am now presupposing a distinctly localised proton, but give me some credit here: the boundary conditions for the proton are something like 100000 times smaller than for the electron, thus giving me every right to temporarily treat it as a mathematical point for calculating electron orbitals.
    4. Thinking that the brain is deterministic because cognition only arises at macroscopic levels is committing the same error as thinking you can predict the weather by measuring temperature and air pressure all over the world. This is true, but only for a limited amount of time; long-term predictions are as valid as wild guesses. I guess you are familiar with the butterfly effect? This means that even tiny quantum variations can ultimately lead to totally new behavior. So if at the basic level the universe is probabilistic, so is the brain! I know that you know a lot about how the brain functions, but ultimately it is just another non-equilibrium system and thus behaves in the same way. Since a Turing machine is in total equilibrium at all times -- you get as much information back as you put in -- it can never become a means to make true intelligence; it will always be 'artificial'.
    5. Why have the deterministic equations been so successful up until now? Because they have always described ideal situations, for example two-body interactions; they describe but a small sample of reality. They are correct, but only under certain conditions, and it is a logical error to assume that they are applicable in all situations.
    6. About my circular argument: I didn't say that causality is not sufficient for mechanistic causation, I said it WAS sufficient for mechanistic causation but not for PROBABILISTIC causation. I do not assume a negative; I prove there is no fundamental mechanism at work by showing that it is impossible to observe this mechanism. You see, even Bohm ultimately admitted that his theory could never be proved -- the so-called hidden variables are doomed to stay hidden forever! So why persist in believing in them? It is almost like a religious conviction in the ultimate triumph of determinism. Besides, the proof that the Einstein-Podolsky-Rosen paradox isn't really a paradox was given in the eighties and clearly demonstrates the non-locality of quantum mechanics -- and if you say non-locality, you say probability. Determinism is dead and buried, but it doesn't want to admit it!
    7. If you get the feeling that I'm prescribing some kind of detached attitude for you, then that is totally not my intention. I was talking more about a determinist in general. Once you start marvelling about beauty you give up your deterministic views. You think there is something that divides personal attitude from the fundamental nature of the universe? How is that? For a determinist, marvelling about beauty is something he can't help but do, since this was compelled at the time the universe was conceived.



    8. A meddling god in an indeterministic universe would not be a true god in the Christian sense, since he wouldn't know what his meddling would ultimately lead to. The universe is in constant need of meddling -- no problem for an omnipotent god of course, but this means that the system of universe plus god is still deterministic, unless you assume that the god doesn't know what he is going to do next. Besides, an open universe doesn't need a god to make choices for it, it can do that for itself (thanks to Occam we can dispose of the meddling-god notion), while a deterministic universe does need this information input at its conception.

    ------------------
    "If I have been able to see further, it was only because I stood on the shoulders of giants."
    Isaac Newton
     
  20. Crisp Gone 4ever Registered Senior Member

    Messages:
    1,339
    Hi Plato,

    It depends on how you look at it of course. What you call "all the information" is something I would prefer to call "initial conditions" for the deterministic super-equation. Then, by using these initial conditions, you can determine what will happen to each particle in the past, present and future. Sidenote: you will probably (and I mean this in a deterministic sense) argue that in the baby universe there were no particles but only energy (and hence nothing to apply these initial conditions to), but I would like to refer to the "Energy = Matter = Fields" post in the General Astronomy forum, where I argue for a theory that describes everything as energy (and because of the equivalence with matter, particle initial conditions can also be applied to their equivalent amount of energy).

    Now, the problem you seem to have with this view is that the universe cannot possibly store all this "information" somewhere.

    The way I look at it, this information doesn't need to be stored anywhere. Let's assume for an instant that time is discretized (e.g. into steps of 10^(-50) second). In your point of view, the universe then has to contain the following amount of information:

    (lifetime of universe)*(number of parameters)/(10^(-50))

    The "number of parameters" refers to the number of ways a particle can interfer with other particles (eg. electromagnetic forces) and the number of coordinates needed to exactly pinpoint a particle in space (eg. 3 for place, 3 for rotation, 1 for time).
    Ofcourse this is a huge amount of information to store.
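    Just to put a very rough number on that formula -- the particle count, the 7 parameters and the 10^(-50) s step below are all assumptions for illustration, not established figures:

```python
# Back-of-the-envelope evaluation of
#   (lifetime of universe) * (number of parameters) / (10^-50 s)
# using assumed values only.
lifetime_s   = 13.8e9 * 3.15e7     # rough age of the universe in seconds (~4.3e17 s)
params       = 7                   # e.g. 3 position + 3 rotation + 1 time, as above
dt           = 1e-50               # assumed time slice

per_particle = lifetime_s * params / dt
n_particles  = 1e80                # often-quoted rough count of particles in the universe

print(f"values to store per particle: ~{per_particle:.1e}")
print(f"values for all particles    : ~{per_particle * n_particles:.1e}")
```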

    But why should this information be stored anyway? I'd like to look at it the following way: after every 10^(-50) second, the universe evaluates all particles, determines how they should interact with the other particles in the next 10^(-50) s slice of time, and lets the particles act accordingly. This is the "we don't care about tomorrow, let's care about the next 10^(-50) s" approach.
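    That "care only for the next slice" picture is basically a memoryless state update: the next state is computed from the current one alone, and nothing older ever needs to be stored. A minimal sketch of the idea (a toy two-particle system of my own, not a claim about actual physics):

```python
def step(state, dt=1e-3):
    """Advance a toy two-particle 1D system by one time slice.

    Only the *current* positions and velocities are needed; no history is kept.
    """
    (x1, v1), (x2, v2) = state
    k = 1.0                                  # toy attractive interaction
    a1 = k * (x2 - x1)
    a2 = k * (x1 - x2)
    return ((x1 + v1 * dt, v1 + a1 * dt),
            (x2 + v2 * dt, v2 + a2 * dt))

state = ((0.0, 0.1), (1.0, -0.1))            # the entire "memory" of this toy universe
for _ in range(5000):
    state = step(state)                      # old states are simply discarded
print("state after 5000 slices:", state)
```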

    Okay, this discretization of time is a dangerous thing to say of course, but I personally think that it would save us loads of trouble, e.g.:
    • the problem of time travel is solved when time is discretized: there is no past or future, only the present, hence time travel is impossible;
    • the problem of living matter is solved: every interaction of a living being is evaluated at every instant.
    But I still have to do some thinking on that, for example to see what effect time dilation has on this model.

    Okay, to me this is a confirmation that quantum mechanics fails at this point. "Why, but not when" is not a satisfactory answer.

    In regard to your reply to Boris:

    Hrm, doesn't this violate the conservation of energy/matter rules that apply to any closed system, as information (and hence energy/matter) is introduced? And how would you explain, from an "incomplete determinism" point of view, where this new information comes from?

    About the virtual/real particle creation/annihilation in vacuum: those particles are formed from energy, and energy can be described deterministically.

    Completely true, but it is, as you called it, a logical error to apply this model to larger systems. The deterministic equations we know indeed only apply to a specific range of scales (from femtometers to lightyears, depending on the theory). I don't see how this can be a plea for indeterminism; personally, I'd say it's a plea for more research and a confirmation of our limited knowledge.

    As a last note, I would like to concur with Boris when he talks about determinism in probability. The only reason we have probability is because we have a deterministic procedure to determine what all the possible outcomes of a process are. For example, take the Gaussian distribution: the only reason that this distribution yields results from -oo to +oo is because we know for sure that the particle (or anything else) can take this set of values. On the other hand, the Chi-Squared distribution only allows a region of |R (the real numbers), namely the nonnegative half. If we use this distribution, then suddenly determinism is introduced about the region of space the particle can be in, and to create the distribution, assumptions of determinism are used.
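    A quick illustration of that point about supports (a sketch of my own using SciPy's standard distributions, nothing specific to any physical system): samples from a Gaussian land on both sides of zero, while samples from a chi-squared distribution never go below zero -- that restriction is exactly the kind of built-in "deterministic" knowledge I mean.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
gauss = stats.norm.rvs(size=100_000, random_state=rng)          # support: all of R
chi2  = stats.chi2.rvs(df=3, size=100_000, random_state=rng)    # support: [0, +oo)

print(f"Gaussian samples   : min = {gauss.min():+.2f}, max = {gauss.max():+.2f}")
print(f"Chi-squared samples: min = {chi2.min():+.2f}, max = {chi2.max():+.2f}")
# The chi-squared minimum is always >= 0: the distribution itself already encodes
# a definite statement about where the value can and cannot be.
```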

    In schematic form: determinism -> probability -> determinism

    This is not a convincing argument at the moment, but I'll work this out a bit further.

    Bye!

    Crisp


    ------------------
    "The best thing you can become in life is yourself" -- M. Eyskens.

    [This message has been edited by Crisp (edited May 13, 2000).]
     
  21. Plato Registered Senior Member

    Messages:
    366
    I will try something new here, and see if visualisation can help my cause. (I must be getting desperate.)

    [image: diagram of complete determinism]



    This represents complete determinism. It has been the foundation of science since Newton. However, it needs revision:

    [image: diagram of incomplete determinism]



    would be the representation of incomplete determinism. According to this logic, the strangeness of quantum mechanics makes perfect sense. You see, it is just a question of using the correct logical frame; there is no magic or act of god involved.

    If you want to read more about this new form of logic or the philosophy behind it, I refer you to Emergence Logic Explained

    Crisp, in response to your question of where new information comes from in an open universe: you only have to look at the second relationship. Once you let go of the strict relationship between cause and effect, new information automatically emerges (hence emergence philosophy).
    Besides, I don't really think you grasp the implications of one-to-one cause and effect (my apologies if you do and I have simply misinterpreted your last post). Even though the universe might seem to live from moment to moment (moments which you can choose arbitrarily close together, so as far as we are concerned time is a continuum and not discrete), every one of these moments holds all the information of all moments to come and all that have been. Boris will confirm this, even though it might be the only thing in my entire post he agrees with.




    Something else entirely: how are the exams going for you two? (Just to keep things from being so serious all the time.)
     
  22. Crisp Gone 4ever Registered Senior Member

    Messages:
    1,339
    Hi Plato,

    Thanks for the images, but they didn't tell me anything new (they told me exactly what I had understood from your posts).

    I thought I did, but now you have made me doubt.

    Why should that be? Of course all the particles and other energy have to be contained (assuming the universe is a closed system and conservation of energy is valid), but I don't see any reason why information about "old" positions of particles should be contained. To get back to the discretization-of-time point of view, the universe just "looks" at where all the particles are (regardless of where they were in a previous slice of time) and "repositions" them according to some law we'd call a physical law.

    Now, on the other hand, if the universe lives from moment to moment, how can there be a physical law that predicts something about the future when the information of the future is not contained in the universe...

    Hrm... Better do some thinking on this.

    Bye!

    Crisp
     
  23. Boris Senior Member Registered Senior Member

    Messages:
    1,052
    There is a statement you made in Evolution vs. Creation that I want to address:

    Especially in light of your illustration, I fail to understand how you can make such a statement.

    First of all, given a many-one mapping from cause to effect, you are in fact losing information by only observing the whole -- which is in part what I've been trying to state in this thread. Think about it: given an effect in a many-one mapping, can you deduce the cause? Given a cause in a many-one mapping, can you deduce the effect? It's "no" in the first case, and "yes" in the second case, if there are any doubts. In the first case, you know only the effect. In the second case, you know both the effect and the cause. This means that in the second case we possess fundamentally more information than in the first case. Therefore, by only looking at the effects, or at the "whole" in the sense you use the word, you are in fact losing information.
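    The information loss in a many-one mapping is easy to make concrete (a toy sketch of mine, with made-up cause and effect labels, nothing to do with any specific physical system): once several causes map to one effect, the inverse simply is not a function any more.

```python
# A many-one mapping from "causes" to "effects".
effect_of = {
    "cause_A": "effect_1",
    "cause_B": "effect_1",   # two distinct causes map to the same effect
    "cause_C": "effect_2",
}

# Forward direction: cause -> effect is always answerable.
print(effect_of["cause_A"])            # effect_1

# Reverse direction: effect -> cause is ambiguous for effect_1.
causes_of = {}
for cause, effect in effect_of.items():
    causes_of.setdefault(effect, []).append(cause)

print(causes_of["effect_1"])           # ['cause_A', 'cause_B'] -- the cause cannot be recovered
print(causes_of["effect_2"])           # ['cause_C']
```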

    Yes indeed, the "whole" is different from the parts -- but only in the sense that it is an incomplete, lossy representation of what's truly going on. You could argue that the whole may possess some attribute that the parts do not, such as, for example, transparency -- but this would be a play on semantics. The attributes normally understood to belong to the "whole", are merely attributes of a collection of the parts. In the same way that three Newtonian orbiting bodies give rise to chaos (an "emergent" feature of a system of deterministic parts) -- so are all the other properties of composite objects driven by the collective behavior of those objects' deterministic constituents.

    Secondly, I contest the entire idea that two distinct causes will have a precisely identical effect. If that were the case, then I do not see how the two causes can be defined as distinct in the first place. The very reason we consider causes to be distinct, is because they arise out of distinct interactions or types of interaction -- which implies that even if the <u>observed</u> effect is the same, internally the mechanisms are quite different, and the observations that we make describe only a part of the outcome. In essence, were we to indeed discover such an apparent many-one mapping, it would suggest to me that we are observing only part of the outcome, and failing to capture information that would have enabled us to distinguish the underlying causes from one another.

    Now, back to this thread, and your itemized rebuttals... For the sake of keeping this post somewhat brief (yes, I know, brevity is seldom a virtue I exhibit) I'll just refer to your post by the same numbers you used, instead of quoting it directly.

    1. It is irrelevant whether we are talking about people or subatomic particles. When you say that molecules are all the same, you cannot be more wrong than when you say that people are all the same. For we all know that each molecule is composed of a large number of known elementary "particles", each of which possesses a slew of its own parameters. Moreover, because of that complexity at the level of the Standard Model, as a determinist I suggest that the "particles" we observe are no more primitive than a molecule or perhaps indeed a human being; they may in turn consist of many other components of which we are unaware at present. The point is that all those components, besides having a unique identity unto themselves, also possess their own unique parameters; thus, even a simple molecule's parameter vector is quite long indeed. But that is not the real point; the real point is the difference between observing a deterministic outcome, and being able to determine it ahead of time. While the former happens all the time (any time we observe something), the latter is impossible in practice (because we cannot know the exact state of anything, due to the uncertainty principle). We can, however, pretend to know all the parameters of a theoretical system, and proceed to simulate it in full detail; if the simulation outcomes match real-world observations, we can judge our deterministic model to be correct. And in fact, if we are able to create special cases where the mapping from state to outcome is smooth, we might be able to determine an entire complex state of a system by matching its behavior to a simulated model with that initial state.

    2. It would be nice to know where the universe came from, but regardless of your stance on determinism, you gain no information concerning that particular question. What I refer to as "information" is the complete set of all parameters of all elementary entities in the universe at any single instant in time. Because matter/energy/vacuum are not being created or destroyed, the total number of parameters in the universe remains constant. This is what I mean when I say that the information content of the universe is fixed. The parameters may be changing values due to mutual interactions, but such changes do not alter the total information content. Entropy may flow from one subset of parameters to another, but the sum total stays fixed. It is true that there is ever more "empty space" in the universe; however, we do not yet understand where it is coming from. In M-theory, for example, the three spatial dimensions we observe are expanding at the cost of other dimensions contracting, so the sum total is still constant. The "vacuum fluctuations" are something I like to liken to waves on an ocean's surface -- though apparently random, they are still a deterministic result of the motion and mutual interaction of the countless molecules and ions that constitute the ocean's essence.

    You seem to have some kind of an urge to oversimplify my "nice deterministic model" -- which allows you to subsequently balk at all the inconsistencies that come from such oversimplification. Inconsistencies only arise because you are focusing your attention upon a superficial description of reality, rather than trying to picture an entire underlying hidden universe seething with activity and complexity. Contrary to your apparent suggestion, determinism promises a universe far more complex, multilayered, hierarchical, and structured than any other possible approach. The difference between you and me, is that I want to resolve the individual trees where you only wish to see the forest. I enjoy the concept of a forest and I find it useful, but I also want to know about the trees, and the grass, and indeed the very cells and elementary particles from which the forest is composed.

    3. Fine, let's observe the hydrogen atom. Suppose, for argument's sake, that we have discovered the inner structure of the electron and indeed of the very field through which the electron is interacting with the photon. Now we have the exact mechanism that gives the exact characteristics of the electron at any moment in time, provided we know its exact starting conditions. So, we calculate the state of the electron into the future, and discover that something in the fundamental mechanisms we discovered causes the electron to spend more timeslices closer to the proton than farther away. So now we have the actual mechanism behind the probability distribution. In general, there is absolutely no way you can "detach" a probability distribution from an underlying mechanism. Indeed, by its very nature a probability distribution is a low-moment, lossy description of some mechanism (and it is precisely because of the extreme lossiness of probability distributions as descriptions that we cannot trivially derive a mechanism from a distribution).
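    For the record, the hydrogen ground-state numbers behind this back-and-forth can be written down directly. A short sketch of my own, evaluating the standard textbook formulas: the probability density |psi_1s|^2 is largest right at the proton and falls off exponentially, while the radial probability 4*pi*r^2*|psi|^2 peaks at one Bohr radius.

```python
import numpy as np

a0 = 5.29177e-11                      # Bohr radius in meters

def psi2_1s(r):
    """Probability density |psi_1s(r)|^2 = exp(-2r/a0) / (pi * a0^3)."""
    return np.exp(-2.0 * r / a0) / (np.pi * a0**3)

def radial_prob(r):
    """Radial probability density P(r) = 4*pi*r^2 * |psi_1s(r)|^2."""
    return 4.0 * np.pi * r**2 * psi2_1s(r)

r = np.linspace(1e-14, 5 * a0, 100_000)
print(f"|psi|^2 is maximal at r = {r[np.argmax(psi2_1s(r))] / a0:.3f} a0 (i.e. at the proton)")
print(f"P(r)    is maximal at r = {r[np.argmax(radial_prob(r))] / a0:.3f} a0 (the Bohr radius)")
```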

    4. You are missing the point again. Take the weather example. If you knew absolutely all the parameters of the Earth's current atmosphere, you would still be unable to predict the weather far into the future accurately -- but not because of some fundamental nondeterminacy in the system. The reason is that the system is not isolated -- it is receiving input from the Earth's interior, from the Sun, from infalling meteorites, and indeed from butterfly wings. You see, if we knew the complete state of an entire region of the universe within some light-radius of Earth, then we would indeed be able to predict the exact state of the atmosphere for up to the amount of time it takes light to traverse that radius (and thus provide unforeseen input to the atmosphere). And actually, if quantum nonlocality is for real, we'd need to know the precise state of the entire universe to deterministically predict the weather.
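    The "butterfly wings" part is just sensitivity to initial conditions, which is easy to demonstrate with a toy chaotic system. A sketch of my own, reusing the logistic map rather than any real atmosphere model: two states that differ by one part in a trillion are indistinguishable at first and completely different a few dozen steps later -- even though every step is perfectly deterministic.

```python
r = 4.0                       # logistic map in its chaotic regime
x = 0.3
y = 0.3 + 1e-12               # "butterfly wings": a tiny, unaccounted-for perturbation

for step in range(1, 61):
    x = r * x * (1.0 - x)
    y = r * y * (1.0 - y)
    if step % 10 == 0:
        print(f"step {step:2d}: |x - y| = {abs(x - y):.3e}")
# The difference grows roughly exponentially until the two "forecasts"
# bear no resemblance to each other.
```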

    I am getting tired of repeating this over and over, but I'll keep repeating it until it sinks in: there is a difference between understanding how something works, and being able to completely describe something. The former is the goal of determinism. The latter is impossible -- not only for the weather or for the brain, but even for a single photon. The brain is indeed a Turing machine, but in real-world conditions its state is impossible to predict because it is constantly receiving input through its sensory apparatus, as well as through interaction with the body that encases it. However, if we were able to computationally simulate a complete developed brain and feed it controlled input, we would observe the same deterministic outcome every time we repeated the same exact experiment. In essence, with respect to the brain, its immediate surroundings become part of the Turing machine -- part of its input on the tape. Without knowledge of both the complete state of the brain and its current input, you will not be able to exactly predict its next state.

    5. Equations are nice, but there are very few things in nature (aside from simple, idealized laboratory experiments) that can be described by nice tidy equations. The majority of natural systems do not offer clean analytical solutions. This is why many tasks in real-world engineering are tackled through computer simulation -- be it weather forecasts, plasma flow in an inertial confinement reactor, or virtual car crashes. However, what is crucial in such endeavors is to know the fundamental interactions between the components of the system you are trying to simulate -- and yes, those fundamental interactions are deterministic and indeed described by nice tidy equations.
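    A tiny example of that simulation approach (a sketch of my own with arbitrary masses and units, not production code): three gravitating bodies have no general closed-form solution, but stepping the deterministic equations of motion numerically is straightforward.

```python
import numpy as np

G = 1.0
m = np.array([1.0, 1.0, 1.0])                       # three equal masses (arbitrary units)
pos = np.array([[-1.0, 0.0], [1.0, 0.0], [0.0, 0.8]])
vel = np.array([[0.0, -0.3], [0.0, 0.3], [0.3, 0.0]])

def accelerations(pos):
    """Newtonian gravitational acceleration on each body from the other two."""
    acc = np.zeros_like(pos)
    for i in range(3):
        for j in range(3):
            if i != j:
                d = pos[j] - pos[i]
                acc[i] += G * m[j] * d / (np.linalg.norm(d) ** 3 + 1e-12)
    return acc

dt = 1e-3
acc = accelerations(pos)
for _ in range(20_000):                             # velocity-Verlet time stepping
    pos = pos + vel * dt + 0.5 * acc * dt**2
    new_acc = accelerations(pos)
    vel = vel + 0.5 * (acc + new_acc) * dt
    acc = new_acc

print("positions after 20 time units:")
print(np.round(pos, 3))
```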

    6. First of all, I do not see why we have to assume that we have reached a limit in our resolving power. There may be such a thing as the Planck scale, but we still have a long way to go before our measurements can approach that degree of precision. And in the meantime, who knows what "hidden variables" will yet come out of hiding? Remember the time when protons and neutrons were assumed to be fundamental particles? They were described by simple models, and everything worked nicely -- until we started using particle accelerators and discovered quarks. Indeed, just when do you suppose we are going to finally hit that impervious wall beyond which we supposedly can no longer peer? Maybe Niels Bohr can give you some insight regarding what science of the future will and will not be able to do? And even then, when we become somehow ultimately and fundamentally limited by our instruments and can no longer hope to detect the "hidden variables" directly, we might be able to deduce their nature from what our instruments do allow us to observe. You see, when a tree falls it makes a sound, even if nobody is listening; and if we listen for it, we just might be able to tell that a tree fell without ever having a chance to directly observe it. It is precisely due to such possibilities (or, indeed, expectations) of future discovery that a determinist believes in "hidden variables"; to refuse to look deeper is to give up on any further progress that might have been made otherwise. And once again, nonlocality has nothing to do with determinism or lack thereof. It may carry significant implications for relativity (due to its faster-than-light interactions), as well as for the ultimate theory of everything -- but whatever mechanisms mediate nonlocal interactions must still in themselves be deterministic and indeed causal. Indeed, the EPR experiments have demonstrated, if anything, that nonlocal interactions still obey the laws of cause and effect -- only by altering one of the entangled particles do we also determine the state of the other.
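    Since the Bell-test results of the eighties keep coming up, here is the arithmetic they turn on, in a short sketch of my own: any local "pre-assigned answers" model is bounded by |S| <= 2 in the CHSH combination, while the quantum singlet-state correlation E(a,b) = -cos(a-b) reaches 2*sqrt(2). What that violation implies about determinism versus nonlocality is exactly what we are arguing about; the number itself is not in dispute.

```python
import numpy as np

def E(a, b):
    """Quantum correlation of spin measurements on a singlet pair at analyzer angles a and b."""
    return -np.cos(a - b)

# Standard CHSH measurement angles.
a, a2 = 0.0, np.pi / 2
b, b2 = np.pi / 4, 3 * np.pi / 4

S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)
print(f"CHSH value |S| = {abs(S):.3f}")   # ~2.828 = 2*sqrt(2)
print("classical (local hidden variable) bound: 2.000")
```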

    7. Here, we are once again getting into discussion of meaning. Even though as a determinist I am convinced that both my attitudes toward things and your challenges to my attitudes have been predetermined from the beginning (whatever that "beginning" is), I would say that as a sentient entity, my own attitudes and actions still have meaning to me. It does not matter if I am only a tiny part in a larger machine; as a very special kind of part I have my own internal machinery (at least as long as I continue to exist as a "human being") -- and that machinery has its own internal states which construct a representation of meaningfulness and purpose.

    8. Care to elaborate what you mean by "open universe"? Are you suggesting that the universe as we know it is not a closed system, and that information is coming into it from somewhere else? In which case, where from? Does such a construct give you fundamentally any more information about the universe than the assertion that all of the universe's information was already contained by it at the moment of the Big Bang? As to where the information came from, the question is exactly equivalent to asking where the universe came from, since in my book the universe and "information" are really one and the same thing.

    Oh, and also, with regard to "choice" -- under your framework, just how is it that the universe makes its "choices"? Are you saying that there is no mechanism behind the "choice"? In which case, why is "choice" the correct word to use? (Wouldn't it be more accurate to call it "noise"?)

    ------------------
    I am; therefore I think.

    [This message has been edited by Boris (edited May 16, 2000).]
     