Are You Living in a Computer Simulation?

Discussion in 'Intelligence & Machines' started by Magical Realist, Apr 18, 2011.

  1. Billy T Use Sugar Cane Alcohol car Fuel Valued Senior Member

    Yes, as Chalmers said, "consciousness is the hard problem." I doubt any set of hardware can ever be made to have even the simple experiences ("qualia") that humans have all the time.

    However, I think we have them because the brain, specifically the parietal part, is running a real-time simulation and creating "us" in the process. I have also suggested that, because of this, genuine free will is not necessarily inconsistent with the natural laws that control the firing of every nerve in your body. See how I think we have experiences and may even have genuine free will (but I doubt that) at:

    BTW, IMHO, John Searle makes the most sense on this subject.

    SUMMARY: My POV is not that we are living in a simulation, but that we are part of a real-time parietal (brain) simulation. I.e. "we" are not a physical body, but an informational process. Note: "I", "we", "us", etc. in quotes refer to this created psychological self, not the physical self/body.
    Last edited by a moderator: May 20, 2011
  3. angslan Registered Senior Member

    Well, if you saw a computer running a program that it was impossible for that computer to run - let's say that you saw a classical computer running a program that only a quantum computer could run - the conclusion would have to be that (assuming your observations were correct, and there is no mistake that it is a classical computer and a quantum program) the program is not being run by that computer. This could imply that it has been designed to make it appear that the classical computer is running a quantum program, whereas really a quantum computer is running it.

    Now, if our brains are equivalent to a computer, and our consciousness is a program, we would expect our consciousness to be a program that can be run by the hardware of our brain. But if we found that this was not the case - that our conscious experience was a program that could not possibly run on our brains - this could be used to conclude that although at first glance it appears that our brains run the program of our consciousness, the program of our consciousness is in fact being run by a different, and unseen, set of hardware.

    Despite being highly implausible, this is the only phenomenon that I can think of that I would count as evidence of being in a computer simulation.
  5. Believe Happy medium Valued Senior Member

    I guess my question is what difference does it make?
  7. Stryder Keeper of "good" ideas. Valued Senior Member

    A lot of difference. I'm still struggling to output a whitepaper on the subject; however, one of the conceptual things that could occur is creating a form of Nanotechnology that doesn't actually use "small robots" but instead uses "small instances of emulation". Controlling each emulation would then allow for potentially rewriting how that emulation functions. This means potentially taking emulated DNA and rewriting any mutated code back into the structure of the original code, thus removing aging, defeating disease, prolonging life and undermining the concept of death.

    There is then the potential to run sensors at the emulation level to aid in transferring communication across severed nerve structures, or the ability to augment systems beyond what we currently believe possible, to incorporate senses across networks, etc.

    There is also a potential to utilise "Non-Locality" as a communication medium, since Non-Locality in an emulated environment means moving the communication outside of the presumed direct structure and placing it into the sub-levels of the emulation structure. This could allow instant long-distance communication systems (for instance, remotely controlling a Martian lander in real time with no 20-minute delay) and also has potential use in creating a method to "communicate" power across distance, so you could have a battery that never needs to be recharged.

    It will all sound very ludicrous, but these are just potentials in an Emulation system.
  8. Billy T Use Sugar Cane Alcohol car Fuel Valued Senior Member

    Your post seems a little confused to me, but perhaps you are only stating that anything is possible within a simulation, or the emulation of a simulation? Certainly the simulated creatures don't need to die or have DNA, etc., but that does not have anything to do with the physical world. Yes, in the simulation one can have infinite light speed, i.e. no delay in the control, by simulated creatures on simulated Earth, of their simulated robots on simulated Mars; but in the real world the speed of light is finite and there is a delay in the control of robots on Mars from Earth. Not even entangled states can transmit information ("robot, turn right") to the real Mars without delay.

    Are you saying anything more than:
    "I make a simulated world / universe where the laws of physics are different?" If so, what?
  9. Stryder Keeper of "good" ideas. Valued Senior Member

    I've touched on it before, but it's to do with a recursive method of Emulation. The notion initially started with the hypothetical act of emulating 1cm[sup]3[/sup] from data supplied by observing a real 1cm[sup]3[/sup]. Obviously an Emulation would be like a "Slave" to the real world, following it after the smallest of time delays. The hypothetical was then to suggest that if, by manipulation of a paradox (which requires the use of Qubit computing), you could make any "Slave" action in the emulation occur prior to the real world that's being modelled, you could actually get to the point of manipulating reality through the use of the emulation.

    This was initially hypothesised as a way to "trick the universe into the machine", as such. It did, however, raise the question of whether that initial real-world 1cm[sup]3[/sup] was already in fact a running emulation, and whether the attempt to emulate it was actually just compounding its makeup into a recursive emulation state.

    In essence, the physics laws of that 1cm[sup]3[/sup] will initially start as a "Default". A default is what you would want to begin with: after all, if the theory is right we are playing catch-up, and the last thing we want to do is cut corners and jump the gun when we don't actually know how something was done. A "Default" is also a good state to reassure those people who might worry about potential technological disasters like "Grey goo" in Nanotechnology. It should be understood, however, that subtle manipulation can then be done to alter the very physics of this finite model space by iterating per recursion.

    Hypothetically, if a 1cm[sup]3[/sup] is such an emulation, there is no upstairs universe sitting at the top of a hierarchy to define its limitations. Its limitations would more likely be tied to the logic of an inverse-square law, in the sense that only so many iterations could be done before the communication between iterations becomes too large to be transmitted as a packet. Of course, this implies there is a data-transfer specification, which I would hope would have a "Broadcast" method applied.
    (It will seem like word salad, but that's because I'm cutting across interdisciplinary schools of thought, and as I've mentioned I am still struggling to write it up to the point of explaining every detail. I also know that I shouldn't edit things out, no matter how word-salad they are, because they have some meaning; even if you don't get it, you can still ask me to reiterate bits, which you wouldn't be able to do if I just pressed the delete key.)

    As for the statement about "Non-Locality communication": I have a theoretical model where you first start with a finite number of servers emulating 1cm[sup]3[/sup]. This is the first finite building block, which, through the use of a super-symmetry method of planning and the capacity to create paradoxes, will eventually allow for infinite duplicate volumes that, through the super-symmetry, are initialised at a different global coordinate to the original and also differ in what is output during the initial "Universe"-building calculations.

    The 1cm[sup]3[/sup] is then populated, similar to how load-bearing benchmarking is done: the volume is populated by dimensions until it cannot be processed any further. The hypothetical is that all the infinite spatial volumes that are paradoxically interlinked and placed through super-symmetry will also initially duplicate this initial event. This is the equivalent of spamming a volume with Gamma Radiation prior to element building.

    Once this "load test" is complete, it's then possible to run "builder algorithms", in the sense that you reduce the volume's dimensional population and then start to repopulate, but to specifications. I hypothesised something similar to Conway's Game of Life, in the sense that the algorithms would attempt, through basic default fundamentals, to build structures that are stable.
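    The "builder algorithm" comparison can be made concrete. Below is a minimal one-step Game of Life in Python, a hedged sketch only: the grid is a toy stand-in for the hypothetical emulated volume, and the `step` function and example patterns are the standard Conway rules, not anything from the model described above.

    ```python
    from collections import Counter

    def step(live):
        """Advance Conway's Game of Life one generation.

        `live` is a set of (x, y) coordinates of live cells. A cell is alive
        next generation if it has exactly 3 live neighbours, or has 2 and is
        already alive: simple local rules from which stable structures emerge.
        """
        counts = Counter((x + dx, y + dy)
                         for (x, y) in live
                         for dx in (-1, 0, 1)
                         for dy in (-1, 0, 1)
                         if (dx, dy) != (0, 0))
        return {cell for cell, n in counts.items()
                if n == 3 or (n == 2 and cell in live)}

    # A 2x2 "block" is stable: it survives every generation unchanged.
    block = {(0, 0), (0, 1), (1, 0), (1, 1)}
    assert step(block) == block

    # A "blinker" oscillates with period 2 rather than settling down.
    blinker = {(0, 1), (1, 1), (2, 1)}
    assert step(step(blinker)) == blinker
    ```

    The point of the analogy is only that very simple default rules, iterated, are enough for stable structures to appear without being specified directly.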

    What is interesting about the infinite array of 1cm[sup]3[/sup] volumes at this point is that they do not necessarily stack like blocks on top of each other, or staggered like a brick weave; in essence you've potentially got one volume existing composited over eight other volumes. (This is where the term "composition" or "composite" comes into play.)

    Imagine you have a constructed cube of eight blocks, 2 x 2 x 2. You then have an emulated block the same size as the other blocks, but centred on the constructed cube's centre. In essence, if that centred block were to contain, say, the emulation of an atom, you could suggest that all the components that make up that atom will only be emulated by that block; the other blocks emulate an "effect", or a negative impression of what the emulated block outputs.

    The reason for this is that in this emulated hypothetical I am trying to follow certain rules about protecting data and making sure that it's "non-volatile". So, having a hydrogen atom with one electron implies that extra electrons can't suddenly be conjured by the other emulation areas desynchronising their matrix into duplications; instead, the rule of thumb is that you can only ever have that one electron in the emulation model. If the model just happens to be emulated as a construct, it means that electron would shift in and out of existence with each emulation block, so as to be non-volatile.

    What is also interesting is how data exists within a composite environment. Obviously we are used to thinking in 3D; however, data transfers in computers are 2D arrays that are processed with respect to planar spacetime. (Basically, a computer's timing "eventually" processes the data.)

    An emulated construct (in this case the observer) will see 3D; however, the information that was composited together to create it was stored as 2D arrays but processed into 3D. (I'm not entirely sure if this is what Fermilab was investigating with its Holometer, but I have a fix for "Emulators" that use 2D arrays, in the sense that the array is duplicated and placed next to an inverse version, thereby negating the "one direction of time" arrow. The effect is that a 2D array no longer suffers from processing along one planar axis alone, but is in fact capable of processing from both "ends" at the same time. It also hypothetically increases data integrity on disc storage media, since a scratch on a 2D-array disc would have to damage both ends of the data for it to be completely corrupted.)
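    The duplicate-next-to-inverse idea can be sketched as a toy redundancy scheme in Python. Everything here is an illustrative assumption (the `encode`/`repair` names, the `None`-for-damage convention); it only shows that storing data next to its reversed copy lets a single damaged slot be recovered from its mirror.

    ```python
    def encode(data):
        """Store the data followed by a reversed copy, so every element
        exists at two positions and can be read from either "end"."""
        data = list(data)
        return data + data[::-1]

    def repair(stored):
        """Recover the original data; a corrupted slot is marked None and
        is filled in from its mirror-image position in the reversed half."""
        n = len(stored) // 2
        forward, mirror = stored[:n], stored[n:]
        return [f if f is not None else mirror[n - 1 - i]
                for i, f in enumerate(forward)]

    disc = encode([1, 2, 3, 4])   # -> [1, 2, 3, 4, 4, 3, 2, 1]
    disc[2] = None                # a "scratch" corrupts one slot
    assert repair(disc) == [1, 2, 3, 4]
    ```

    A real storage format would use checksums or error-correcting codes rather than plain duplication, but duplication-plus-reversal matches the post's description most directly.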

    In essence it's all part and parcel: creating paradoxes requires violating physics, which requires manipulation of time, which would currently be impossible in a system that doesn't see itself as an emulation. If it were indeed possible, it would explain some, if not all, of the "oddities" of particle physics, and open the potential to re-write the very foundations of the universe (when we are, of course, ready).
  10. Billy T Use Sugar Cane Alcohol car Fuel Valued Senior Member

    Thanks for the long reply, but it still seems to me that:
    (1) You are making models, in some computer, of a world/universe with zero effect upon the “real world”, and you can have any set of interaction laws for the objects of that simulated world that you want to build into it (infinite light speed, etc.). I.e. your “interaction laws” play the role in your simulated world that the “laws of physics” play in the “real world”, with one exception: your “interaction laws” definitely control the evolution of changes in your simulated world, whereas the “laws of physics” only describe how matter and energy evolve in the real world and are not causal of that evolution. Exactly what causes matter and energy to evolve in general accord with the “laws of physics” is not well known. (I say “in general accord with” as man’s laws of physics may not be a perfect description of nature’s laws of physics, which are always obeyed for reasons not clear to us.)
    (2) You are presuming that the real world is being created in some simulation and evolves in accord with the program of that simulation. This is quite like what Bishop Berkeley suggested more than 300 years ago. I.e. he too thought that there was no “real physical world”: the perception of it was created by God. He did not know about computer simulations, which could be making both the simulated world and the perceivers of that simulated world.*

    I especially liked the good Bishop’s explanation as to why the perceived world did appear to be governed by a set of “laws of nature/physics.” If there were no such rules normally appearing to control the interactions in his perceived (but not existing) world, but only chaos/random behavior, then God could not work any miracles, as by definition a miracle is a violation of the normally governing laws.

    * I have a crackpot POV (in the sense that it repudiates the accepted view that “perception emerges after a sequence of neural transformations of the sensed environment”) about how human perception is achieved, which is very much like a hybrid of your (2) and Berkeley’s POV. That is, I think all perception is produced in a parietal brain simulation, as is the perceiver. I.e. instead of Berkeley’s God creating “me” and what “I” perceive, parietal neural activity, which is a Real Time Simulation, RTS, of the physical world (including my physical body), is making both “me” and what “I” perceive. As a Ph.D. physicist, I do however believe the real physical world does exist, so I differ from Berkeley in this respect; “I” just never perceive it directly, only via computational transforms of neural inputs.

    Note that “me”, “I”, etc. in quotes refer to the perceiver created in (or at least represented in) the parietal computer, and not to my physical body and its nerves, etc. (what most people think of as themselves). I.e. “I” only exist when awake or in a dream state. (I say “represented in” as an alternative to pure parietal creation, as I believe much of “me” is sort of like an old Fortran “call” on information stored in memory (I have no idea where or how it is stored in the brain) and on the frontal lobes, which certainly store a lot about your personality structure, as the old “ice pick” operations proved.)

    See how I think we have our experiences & perceptions in the RTS and may even have genuine free will (but I doubt that) at:
    Last edited by a moderator: May 23, 2011
  11. Michael 歌舞伎 Valued Senior Member

    Does it come down to this: is our world digital? It seems like it is... If you zoom in on any simulation (or emulation), you will eventually get to the pixels. No matter how crisp a picture is, keep zooming and it pixelates. Our world similarly seems to be produced from pixelated information, in the form of sub-atomic particles of various energy levels. Also, isn't it odd that things change depending on whether they are being observed or not? Like in a video game: a PS3 only models what you are observing. Lastly, we will probably make Sim-like worlds ourselves soon. What are the chances that this isn't one of these types of worlds? Apparently 1 in 100 million, or so I read.

    I vote PS12


    Last edited: May 23, 2011
  12. Stryder Keeper of "good" ideas. Valued Senior Member

    It wouldn't be a pixel; it would be a vertex. A vertex is mathematical: you could get as low as a value of '1', but this doesn't imply you couldn't go further; after all, you'd likely move to floating point (0.1111r1 etc.).

    The limitation at this point is actually more to do with the architecture of your Emulator systems and what data throughput can be handled, which according to Moore's Law evolves exponentially over time.

    While there might be an upper limit in volume, there is potentially no limit to how low you can go because of the potential to constantly upgrade the technology and software used.
  13. sniffy Banned Banned

    Technically speaking, isn't this forum a computer simulation? And are we posters not 2D simulations of our 3D selves?

    just sayin'
  14. Believe Happy medium Valued Senior Member

    This is assuming that we have root control; however, the world would indicate that we do not.
  15. Stryder Keeper of "good" ideas. Valued Senior Member

    There is the reason that we don't just start with "root". Such "Privileges" require "Elevation".

    Namely if we never had the chance to observe ourselves as a default, and automatically had "root" access, we'd likely do something extremely idiotic and mess up the very system that we are trying to use and probably never get to the point where we could observe how to safely utilize this "root" access.

    So we have a "Default" not just for a dry run, but as a "safe mode" or "jailed sandbox". We can't accidentally cause an emulation to fail so drastically that it creates an emulation version of the Grey Goo effect; instead, we have to learn that the default limitations allow us to create structure, to gain knowledge and to create protocol, and once we understand enough we no longer have to be bound by the "Default". At that point I'm hoping enough scientists worldwide would be working to make sure that toggling the "Default" off to "User defined" isn't a mistake.
  16. sniffy Banned Banned

    I think in these instances the scientists and philosophers should work together to ensure that sensitive dependence on initial conditions (chaos) does not occur.

    Perhaps some sort of synthesis would be the best option in view of the OP. However, having said this, it is probably best not to overlook the contributions of artists (even aspiring ones), poets in particular, to simulations of this nature.

    'It is the very nature of poetry to be forever setting up problems of meaning that require an alert solving response in the reader, and that this is one of a poem's greatest pleasures.'
    John Fuller 2011
  17. Cyperium I'm always me Valued Senior Member

    Was it Cantor who thought that human intuition could compute the impossible? He derived his thoughts from having discovered many things himself through intuition.

    I think it was Cantor who proved that logic couldn't prove it all (which was a major setback for logicians at the time).

    If intuition could compute things that are uncomputable, though, that still wouldn't be evidence that we live in a simulation/emulation.
  18. StrangerInAStrangeLand SubQuantum Mechanic Valued Senior Member

    One thing I find missing from every discussion of this is that what we perceive as possible or impossible, and probable or improbable, may not reflect what actually is possible or probable in the genuine world outside the simulation we're in.
  19. Billy T Use Sugar Cane Alcohol car Fuel Valued Senior Member

    No. Logic cannot even evaluate as true or false all statements permitted by its rules.*
    "...Gödel's incompleteness theorems are two theorems of mathematical logic that establish inherent limitations of all but the most trivial axiomatic systems for mathematics. The theorems, proven by Kurt Gödel in 1931, are important both in mathematical logic and in the philosophy of mathematics. ..."- from wiki.

    Gödel sank the "logical positivist" program, but it was dead earlier due to the ambiguity in words.

    * Simple example: what is the "truth value" of the following four words: "This statement is false"? I.e. is that a true or false statement?
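    The footnote's four-word sentence can even be checked mechanically. The tiny Python sketch below (my illustration, not from the post) tries both truth values: since the sentence asserts its own negation, a consistent value t would have to satisfy t == (not t), and neither boolean does.

    ```python
    # "This statement is false" asserts its own negation, so a consistent
    # truth value t must satisfy t == (not t). Try both candidates:
    consistent = [t for t in (True, False) if t == (not t)]
    assert consistent == []   # neither True nor False works
    ```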
    Last edited by a moderator: Jun 5, 2011
  20. NietzscheHimself Banned Banned

    Dreaming about dodging bullets beats the alternative...
  21. StrangerInAStrangeLand SubQuantum Mechanic Valued Senior Member

    Nearly everyone discusses this as if they know it's not so.
  22. NietzscheHimself Banned Banned

    Well, technically the whole universe works much like an optical computer, sending light, mass, and other signals around itself. The functions of our reality work just like a computer's, sending electrons and other particles around that hold information. Perhaps it is regrettable that the word "simulation" was used in the original post. Still, reality came before computers, so perhaps computers are simulating reality.
  23. Stryder Keeper of "good" ideas. Valued Senior Member

    That, however, is philosophical; after all, what is "reality"?

    You can assume everything exists in some construct you name "reality", with any sibling creations being emulations/simulations or virtual; however, it's still an assumption that reality is existent.

    A simple way to undermine reality is to just look at how we ourselves observe and interact with this "real" universe. In essence we don't directly interact with it; we are instead reliant upon our senses to convey the outside universe into egocentric observations that we assume are directly related to the stimulus. In essence we already live in a virtual world, created by our brains attempting to emulate what's outside of our sensory bubble.

    Are we real?
