curvature of spacetime

Discussion in 'Physics & Math' started by jaiii, Mar 2, 2011.

  1. jaiii Registered Senior Member

    Messages:
    195
    Hello.

    I have a lot of trouble understanding what spacetime is and how it changes in different situations.
    Could someone recommend a website where I can read about it?
    Or is there software or some other tool that would help me with this?

    Bye.
     
  3. Dinosaur Rational Skeptic Valued Senior Member

    Messages:
    4,885
    First, you should understand that space-time is used as a model. It is not a reality.

    Space-Time is a simple concept, but difficult to deal with in practice. It models the laws of physics using 4 dimensional geometry. The basic idea is as follows.
    All (at least most) of the laws of physics require the specification of locations using (x, y, z) coordinates to indicate where and a time variable to indicate when something happens at each location.

    For example: Classical gravitational equations allow the prediction of the path of a planet in the solar system, specifying when (time variable) the planet will be at each specific location (x, y, z).

    The space-time concept expresses the laws of physics using sets of 4 values (x, y, z, t), with t being a time variable. (x, y, z, t) is suggestive of a point in a 4-dimensional space and there is a lot of mathematics which is applicable to 4D geometry.

    BTW: (x, y, z, t) is often referred to as an Event. The lines/curves modeling particle or planetary motion are often referred to as World Lines.
    Using the mathematics of 4D geometry, the path of a particle or a planet is viewed as a static curve in 4D space rather than as an object moving in 3D space. Consider our solar system.
    All the planets & asteroids have orbits around the sun which are approximately in the same plane.

    The classical view of the solar system describes a planet as moving on an elliptical path.

    Suppose we ignore one dimension (z) & use (x, y, t) to describe motion in the solar system, where x & y are coordinates in a plane, while t is a time variable (think of it as the z-axis). Think of the sun as always being at x = 0 & y = 0, in which case it plots as a line perpendicular to the XY-plane (like the z-axis in a 3D space).

    Using this model, an elliptical orbit looks like a helix on the surface of a cylinder. A decaying orbit looks like a helix on the surface of a cone (base on the XY-Plane & apex coincident with the sun at some point in time).

    An object falling directly into the sun looks like a straight line starting in the XY-plane & ending at the sun.
    The above models the solar system using 3D curves & lines. You might refer to it as using 3D space-time.
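    To make the (x, y, t) picture concrete, here is a minimal Python sketch (an illustration only, using an idealised circular orbit rather than a true ellipse) that traces the world lines described above: the orbit becomes a helix around the sun's world line, while a radial infall becomes a straight line.

    Code:
    import numpy as np
    import matplotlib.pyplot as plt

    # Time samples in arbitrary units; t plays the role of the vertical axis.
    t = np.linspace(0.0, 4.0 * np.pi, 500)

    # Circular orbit of radius 1 about the sun at x = y = 0:
    # in (x, y, t) space this world line is a helix around the t-axis.
    x_orbit, y_orbit = np.cos(t), np.sin(t)

    # Object falling radially into the sun: a straight world line that
    # starts at (1, 0) and reaches the origin at the final time.
    x_fall, y_fall = 1.0 - t / t[-1], np.zeros_like(t)

    fig = plt.figure()
    ax = fig.add_subplot(projection="3d")
    ax.plot(x_orbit, y_orbit, t, label="circular orbit (helix)")
    ax.plot(x_fall, y_fall, t, label="radial infall (straight line)")
    ax.plot(np.zeros_like(t), np.zeros_like(t), t, "k--", label="sun's world line")
    ax.set_xlabel("x"); ax.set_ylabel("y"); ax.set_zlabel("t")
    ax.legend()
    plt.show()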

    Note that the 3D space-time is not reality. It is only a model of reality. Similarly, 4D space time used in relativity physics is only a model. It is not reality.

    The mathematics used for relativity is differential geometry using tensor notation (aka Tensor Analysis). The concept of the model is easy to understand. Using the mathematics is a formidable task, requiring at least a few semesters of mathematics after taking prerequisite courses in calculus & algebra.
     
  5. jaiii Registered Senior Member

    Messages:
    195
    Very interesting.
    Thanks.
     
  7. OnlyMe Valued Senior Member

    Messages:
    3,914
     
  8. BenTheMan Dr. of Physics, Prof. of Love Valued Senior Member

    Messages:
    8,967
    How can you ever tell the difference?

    If you can devise a test that disproves it, then you've disproven the model.

    But if you can never devise a test to disprove a model, then what use is it to wax prosaic about "reality"? That's not science, that's philosophy.
     
  9. CptBork Valued Senior Member

    Messages:
    6,465
    At the end of the day, predictive power is what matters. We respect people who can get s--t done and make things happen, and demonstrate it regularly on demand if needed.

    You've got a simple theory that can be written down in a few hundred pages and successfully models hundreds of thousands of pages' worth of data? Then we're buying what you're selling.
     
  10. OnlyMe Valued Senior Member

    Messages:
    3,914
    The Antikythera Mechanism

    Credit & Copyright: Wikipedia

    Explanation: What is it? It was found at the bottom of the sea aboard an ancient Greek ship. Its seeming complexity has prompted decades of study, although some of its functions remained unknown. Recent X-rays of the device have now confirmed the nature of the Antikythera mechanism, and discovered several surprising functions. The Antikythera mechanism has been discovered to be a mechanical computer of an accuracy thought impossible in 80 BC, when the ship that carried it sank. Such sophisticated technology was not thought to be developed by humanity for another 1,000 years. Its wheels and gears create a portable orrery of the sky that predicted star and planet locations as well as lunar and solar eclipses. The Antikythera mechanism, shown above, is 33 centimeters high and similar in size to a large book.

    All this at a time when the earth was still thought to be the center of the universe. A model does not explain how something is as it is. It only predicts how it interacts with its environment. Our theories and models, especially those involving purely theoretical physics, are always part science and mathematics and part imagination.

    Everything we do is part philosophy. We cannot escape that as what we believe the world to be is a construct. A construct of knowledge, experience and imagination. When we come to the place that we believe we know the truth of what is real, there will be nothing left we can learn.

    Special relativity was based in part on an assumed fact that empty space, vacuum, was in fact truly empty. At the time, atoms were only just becoming accepted as real. Einstein and his contemporaries had no way to know then that space is full of neutrinos, which do have mass and interact at least weakly with ordinary matter. The assumption was in error.

    Einstein seems to have agreed with you though. Somewhere I seem to remember him saying that the success of the theory (its predictive success) was more important than the validity of the a priori facts upon which it was based.

    Still, for myself...
     
  11. arfa brane call me arf Valued Senior Member

    Messages:
    7,832
    A theory shouldn't be more complex than the data or observations it explains.

    If it is, it isn't a theory.

    And btw, the Antikythera mechanism had to be calibrated by an expert, a "simple" procedure based on a theory of synchronisation, I suppose. It was the first portable computer with tech support! The single surviving example is believed to have been more than a century old when it was lost, but presumably several were made.
     
  12. jaiii Registered Senior Member

    Messages:
    195
    Thank you very much.
     
  13. BenTheMan Dr. of Physics, Prof. of Love Valued Senior Member

    Messages:
    8,967
    How does one objectively judge "complexity"?

    I suspect you will say something about the number of free parameters versus the number of predictions in the theory, but that isn't clear from your comment.
     
  14. arfa brane call me arf Valued Senior Member

    Messages:
    7,832
    From the point of view of information and an algorithm that generates it.

    The least complex program that can generate the information corresponds to the simplest "theory" that explains the information.
    --http://www.cs.auckland.ac.nz/CDMTCS/chaitin/sciamer3.html
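    A crude way to see the idea in practice (only an illustration: compressed size stands in for "shortest program length", and true Kolmogorov complexity is uncomputable):

    Code:
    import random
    import zlib

    # A very regular string is generated by this single tiny expression.
    regular = "01" * 5000

    # An irregular string has no obviously shorter description than itself.
    random.seed(0)
    irregular = "".join(random.choice("01") for _ in range(10000))

    print(len(zlib.compress(regular.encode())))    # small: simple pattern
    print(len(zlib.compress(irregular.encode())))  # far larger: little structure to exploit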
     
  15. AlphaNumeric Fully ionized Registered Senior Member

    Messages:
    6,702
    The definitions of 'complexity' and 'information content' are not clear cut though, so saying "simplest means less complex" begs the question "What's your definition of complexity?". Some very complex systems have very simple definitions.

    \(z_{n+1} = z_{n}^{2} + c\) defines things like the Mandelbrot set. So how complex is it? Do we measure it using the Kolmogorov complexity measure, or do we measure it by the amount of data required to encode a picture of the fractal? But then that depends on your compression method. If you use JPEG it would take a lot of data, but if you use fractal compression methods it'd be very cheap, though it'd need more computation.
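    For what it's worth, the membership test behind that one-line recurrence fits in a few lines of Python (a sketch, with an arbitrary iteration cap standing in for "remains bounded forever"):

    Code:
    def in_mandelbrot(c, max_iter=100, bound=2.0):
        """Return True if the orbit of z -> z^2 + c stays bounded (up to max_iter)."""
        z = 0j
        for _ in range(max_iter):
            z = z * z + c
            if abs(z) > bound:
                return False  # the iterates escaped, so c is outside the set
        return True

    print(in_mandelbrot(0.0 + 0.0j))  # True: the orbit of 0 stays at 0
    print(in_mandelbrot(1.0 + 0.0j))  # False: 0, 1, 2, 5, ... blows up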

    A layperson would consider something like quantum field theory extremely complicated, but once you get your head around things like basic gauge theory and symmetries an enormous amount of quantum field theory becomes streamlined and elegant. I personally found my first course in electromagnetism extremely clumsy and bogged down in lengthy algebraic manipulation. Later I covered proper tensor calculus, and what used to take 2 pages to do in EM then took 5 lines. Later still I learnt gauge theory and differential forms, and what used to take 5 lines took 1. Which is more complex: the naive approach, which is simple to grasp but takes 20 times as long to work through, or the more advanced method, which reduces the problem to triviality?

    Which is heavier, a ton of bricks or a ton of feathers?
     
  16. BenTheMan Dr. of Physics, Prof. of Love Valued Senior Member

    Messages:
    8,967
    Hi arfa---

    I'm not disagreeing with you, but I don't quite understand the metric.

    As AN points out, the same theory has many representations. I can write the Standard Model Lagrangian in terms of matrices and spinors in one or two lines, but writing all of the components out takes at least four slides in a presentation. Trust me.

    In general, I think the correct way to interpret the comments you posted is by thinking of complexity in terms of free parameters---this is how most physicists interpret the statement, at least. So I should judge "complexity" as something like the ratio of the number of free parameters to the number of observations my theory can explain or predict. The worst theories have ratios close to one, the best, close to zero.
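    As a toy illustration (the numbers here are made up), that ratio is simply:

    Code:
    def complexity_ratio(n_free_parameters, n_observations_explained):
        """Rough figure of merit: free parameters per explained observation."""
        return n_free_parameters / n_observations_explained

    print(complexity_ratio(20, 10_000))  # ~0.002: few knobs, many predictions (good)
    print(complexity_ratio(95, 100))     # 0.95: nearly one knob per data point (bad)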
     
  17. arfa brane call me arf Valued Senior Member

    Messages:
    7,832
    Well, Chaitin also says this:
    "Today the notions of complexity and simplicity are put in precise quantitative terms by a modern branch of mathematics called algorithmic information theory. Ordinary information theory quantifies information by asking how many bits are needed to encode the information.
    ..Algorithmic information, in contrast, is defined by asking what size computer program is necessary to generate the data."

    Complexity of a theory in terms of free parameters probably corresponds to the number of statements--data structures, loops and branches etc--in a program.

    Are real numbers more complex than integers? I would say yes, because it's easier to represent integers than fractional numbers; or rather, the representation of an integer is really just a word of some length, with a sign bit.

    Although you can write a formula in a few lines, you still need a computer that can interpret "code". You can write the Wheeler-DeWitt equation in one line and call it the simplest solution of GR, or something. But you still need that computer. With the computer, there is a shortest program with the least complexity that generates "Wheeler-DeWitt data". . .
     
    Last edited: Mar 20, 2011
  18. AlphaNumeric Fully ionized Registered Senior Member

    Messages:
    6,702
    This issue came up in my work recently. A guy I work with dislikes the definition of information as "the size of the file after encoding". It's a very computer-science point of view and it is extremely dependent upon your method of encoding. Any fundamental measurement of information should be independent of representation. That's why Shannon's definition of entropy is so useful, though it does require the implicit assumption that information and entropy are somehow the same concept.
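    For reference, Shannon's measure depends only on the probability distribution over the symbols, not on any particular encoding of them; a small sketch:

    Code:
    from math import log2

    def shannon_entropy(probs):
        """Entropy H = -sum p*log2(p) in bits, for a discrete distribution."""
        return -sum(p * log2(p) for p in probs if p > 0)

    print(shannon_entropy([0.5, 0.5]))   # 1.0 bit: a fair coin
    print(shannon_entropy([0.9, 0.1]))   # ~0.47 bits: a biased coin is more predictable
    print(shannon_entropy([0.25] * 4))   # 2.0 bits: four equally likely symbols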

    That is basically a Kolmogorov information measure: the information is measured by how long an explanation/description of a system is. It's conceptually nice but can sometimes be hard to implement. For instance, "The house is red" is a very short description, but in order to be useful you must have a concept of a house available to you. I don't think any of us could even begin to quantify all we know about houses and how all the information fits together. The approach can be streamlined a little using the concept of 'frames', where a 'house' is a frame with a property 'red', but it's still quite hard to work with in practice.

    You've touched on two things there, I think. The first is "complexity of a theory in terms of free parameters". There are measures of information which include the number of parameters in your model and try to find a balance between an accurate model with lots of parameters and a less accurate model with fewer parameters. For instance, which is better: a model with 1 parameter accurate to 10% or a model with 50 parameters accurate to 1%? What about 4 parameters and 4%? That sort of thing.
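    One standard example of that kind of trade-off, to pick a concrete one since none is named above, is the Akaike information criterion, AIC = 2k - 2 ln(L), where k is the number of free parameters and L the maximised likelihood; lower is better, so extra parameters must buy enough extra fit to pay for themselves:

    Code:
    def aic(k, log_likelihood):
        """Akaike information criterion: 2k - 2*ln(L)."""
        return 2 * k - 2 * log_likelihood

    # Hypothetical numbers: a 1-parameter model with a worse fit versus
    # a 50-parameter model with a better fit.
    print(aic(1, -120.0))   # 242.0
    print(aic(50, -100.0))  # 300.0 -> worse overall despite the better fit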

    The second thing, "probably corresponds to the number of statements--data structures, loops and branches etc--in a program", I don't think relates to the number of free parameters, and I certainly disagree with it as a measure of complexity. The parameter-dependent measure I just mentioned is defined independently of how it's implemented in code, as any good information measure should be, in my opinion.

    This is because, on a fundamental level, the information of a model shouldn't depend on the competency of the coder implementing it. A model should have some inherent information/complexity value independent of coding. Sure, when you get to implementing it you have issues like the computational complexity of the algorithms you use. For instance, the naive Fourier transform is \(O(n^{2})\) while the fast version is \(O(n \ln(n))\), but that doesn't alter the fact your model calls for a Fourier transform (see the sketch below). Then there are issues like loop unwinding. Given a computer program with a pair of nested For loops, many compilers will unwind the inner For loop because it is faster to have the iterations in sequence than to deal with the various overheads associated with loops.

    Which is less complex? They are equivalent to one another as algorithmic procedures, yet one is smaller on disk while the other is faster to run. That's a prime example of how those measures, speed and size, are not to be trusted completely when it comes to complexity.
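    To pin down the Fourier transform example: both routines below compute exactly the same discrete Fourier transform, so the model's demand is identical and only the cost of the implementation differs (a sketch using numpy):

    Code:
    import numpy as np

    def naive_dft(x):
        """Direct O(n^2) evaluation of X_k = sum_m x_m * exp(-2*pi*i*k*m/n)."""
        n = len(x)
        out = np.zeros(n, dtype=complex)
        for k in range(n):           # one output frequency per pass
            for m in range(n):       # sum over every input sample
                out[k] += x[m] * np.exp(-2j * np.pi * k * m / n)
        return out

    x = np.random.default_rng(0).normal(size=64)
    print(np.allclose(naive_dft(x), np.fft.fft(x)))  # True: same transform, different cost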

    Depends who you ask. A mathematician will tell you they are similar. A computer scientist considering how to encode them into computer memory will say the integers. Someone trying to do something like optimisation or dynamical systems will say the real numbers lead to simpler systems because they can include notions like smoothness, which an integer (or rational numbers) based system cannot. You can't run an optimisation routine based on steepest descent on an integer based system and Diophantine equations are notoriously difficult to deal with. With real numbers you have tools like calculus, so you can make the case they are often less complex to deal with.

    No, you don't. For instance, the Schwarzschild metric is the unique spherically symmetric charge-less solution to \(R_{ab}=0\) in 3+1 dimensional GR. It can be written down in a single line. You could ask a computer to solve the equations numerically and you'd get something close to the SC metric but it wouldn't be concise, you'd have to have data for a huge collection of points in space and it wouldn't tell you anything about uniqueness. Even getting a computer to solve the equations symbolically wouldn't give you uniqueness.

    This brings up another issue: how do you measure the complexity of something like the SC metric? Is it defined by the size of the equation the metric satisfies? Or is it the expression for the metric itself? What happens if a metric can't be written in closed form? Is its complexity then the size of the computer data set which solves the equation numerically to some level of accuracy? Definitely not, in my opinion, since such a thing depends on how accurate an answer you want, yet the metric itself, the underlying mathematical structure, is independent of such a choice. Numerical solutions to the Einstein field equations are notoriously huge in size, yet closed-form expressions are often very short.

    Arfa, I think you're conflating the complexity of a numerical/coding implementation of a concept with the concept itself. Any 'good' information/complexity measure of a model or a mathematical construct should be independent of coding implementations or example data sets; it should depend only on the idealised concept. The only time a complexity measure should consider algorithms and specific implementations is when you're asking about the complexity of those implementations specifically, as in the Fourier transform example I gave. Then you're dealing with the complexity of applying a concept, rather than the complexity of the concept itself.
     
  19. arfa brane call me arf Valued Senior Member

    Messages:
    7,832
    Well, I don't know, but I think I have to ask--how complex is information, and what is it? Is everything really reducible to a string of bits in a computer?
    Why do we simplify it by 'partitioning' it into whole numbers--of sheep, apples, whatever, and why do we believe real numbers exist?

    But information has entropy. Although the information content of a message is the opposite of thermodynamic entropy, it depends on how you see the closed system; a message which isn't expected has more information content than an expected one.
    This goes to the thermodynamic concept of work, and to entropy being the energy content of a system which doesn't do any work--i.e. wasted energy. Likewise, expected messages don't "inform" as much as unexpected ones. In the case of information content things look back to front, but then the "system" is an observer (of messages coming down a channel).
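    In Shannon's terms the information content (surprisal) of a message received with probability p is -log2(p) bits, which makes the "unexpected informs more" point quantitative:

    Code:
    from math import log2

    def surprisal(p):
        """Information content, in bits, of a message that occurs with probability p."""
        return -log2(p)

    print(surprisal(0.99))  # ~0.014 bits: an expected message tells you almost nothing
    print(surprisal(0.01))  # ~6.6 bits: a surprising message tells you a lot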

    I think this actually touches on what Chaitin has to say, in the link, about the halting problem.

    No, I'm saying an algorithm has a different measure of complexity than the data it generates.
     
  20. arfa brane call me arf Valued Senior Member

    Messages:
    7,832
    Just adding to this: my understanding of algorithm is in terms of what is possible, i.e. known.
    When I did an undergrad course on algorithm design, the rule of thumb was: "we know how to ...", as in "we know how to write a program that will print a sequence of numbers", "we know how to order or sort a sequence (of numbers or other symbols)", and so on. These are like statements based on axioms such as "there is a number, x", "there is an order relation r", etc.

    So, ultimately this leads to: "we know how to build a computer that can print a sequence of numbers".
    That modern computers are based on binary logic is because all numbers are reducible to binary form. If we could build the right kind of computer there is no reason in principle that it couldn't run a Wheeler-DeWitt algorithm, or a Schwarzschild algorithm. Ok, nobody knows if such a computer can be built, but "the universal" computer knows the reason for this (see below).

    To paraphrase Leibniz, there is always a reason that something is true, even if God is the only one who knows the reason.
    Chaitin however says there are mathematical truths (equations) which are true for no reason--i.e. are universally beyond reason. That's the title of his SciAm article btw: "The Limits of Reason"--it's worth a look.
     
