Columbia fallout

Discussion in 'Physics & Math' started by stef 730, Feb 1, 2003.

  1. stef 730 Registered Member

    Messages:
    19
    Just wondering what you guys think will happen to the space shuttle program because of the loss of Columbia. Also, how could they let this happen again? Can't they design something that doesn't break apart?!
     
  3. Persol I am the great and mighty Zo. Registered Senior Member

    Messages:
    5,946
    How do you design something that is light enough to take off, flies at over 17,000 mph, and can go into orbit? They take every possible step to design against failure, but this is a hugely complex machine. Humans make errors, and when those errors aren't caught the result can be disaster.
     
  5. amraam Registered Member

    Messages:
    11
    Well, it certainly is a disaster, isn't it?
    Man's most technically advanced machine lets us down... for the second time since operations started in 1981.

    I guess when you expect everything to work to near perfection, and have the capital invested to ensure just that, it shouldn't happen? I think that whatever precautions are taken, there is something in human nature that will have overlooked one possibility or another at some stage of mission preparation.

    The press conference should be on about now... let's just remember that we got too complacent about the dangers of space travel.

    As far as the effects are concerned, I guess whatever happens will be driven more by human emotion than by a scientific approach to the disaster, which would otherwise just be another statistical entry on the NOT SO GOOD MISSIONS checklist.
     
  7. stef 730 Registered Member

    Messages:
    19
    You guys are right. Too long have many people taken for granted the dangers of space travel. And when a hugely complex machine is built, there's always the chance of something going wrong. It's a shame we have to be reminded by such disasters.
     
  8. voltron Registered Senior Member

    Messages:
    42
    On another note:
     
  9. disposable88 My real name is Rick Registered Senior Member

    Messages:
    76
    It's a general rule for computer programmers that for every 1,000 lines of code, you'll have 1 bug. Imagine the lengths NASA has to go to in order to find and fix problems like these.

    Just like a computer program, perfection for a complex machine is impossible. There is no perfect design, unless it was not designed by humans. And the only resources we have right now are either humans or things made by humans.
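
    To put that rule of thumb in rough numbers (a back-of-the-envelope sketch; the defect rate and the line count below are illustrative assumptions, not actual NASA figures):

    Code:
        # Rough estimate of latent defects from code size alone.
        # Both inputs are assumptions for illustration, not real NASA numbers.
        defects_per_kloc = 1        # the "1 bug per 1,000 lines" rule of thumb
        lines_of_code = 400_000     # hypothetical size of a flight software package

        expected_defects = defects_per_kloc * (lines_of_code / 1000)
        print(f"Expected latent defects: {expected_defects:.0f}")   # -> 400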

    Did this post make any sense?
     
  10. adaptiveView Registered Member

    Messages:
    14
    There are a large number of factors affecting hardware and software reliability. Lately I've been thinking a lot about the effects of our common sense misconceptions regarding systems, complexity and information (effects that are evident in more than a few NASA "mishaps"):

    Systems: We have an evolved capacity for (and predisposition towards) seeing nonexistent system boundaries, especially in the West. We like to put everything in "boxes" and treat them as isolated entities. In truth, though, anything that affects and is affected by a system is part of that system. By extension, just about everything we call a system is really a subsystem.

    One way of looking at the art of engineering (hardware or software) is that it involves finding the right boundaries. You're a NASA engineer designing a sensor "system." You start by looking at specs for input signals and ranges and the requirements for output. You come up with a little box (a sensor "system") that produces the right output for all specified inputs. From your perspective, you've designed a good system.

    So now somebody else working on some other "system" that needs a sensor installs one of yours. Their "system" gets installed on a rocket and when they test fire the rocket your sensor gets vibrated to smithereens (or melts, or both). There are a lot of ways to do post-mortems on mishaps like this but what it eventually gets down to is that somebody used the wrong system boundaries (or used something designed with the wrong system boundaries).
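
    A toy sketch of that boundary problem in code (the sensor model, its spec limits, and the rocket environment below are all invented for illustration):

    Code:
        # Hypothetical sensor "system": correct for every input inside its spec,
        # but the spec says nothing about the environment it gets bolted into.
        SPEC_MIN_PSI, SPEC_MAX_PSI = 0.0, 500.0   # input range the designer tested
        SPEC_MAX_VIBRATION_G = 5.0                # assumed qualification limit

        def read_pressure(psi: float, vibration_g: float = 0.0) -> float:
            """Return a calibrated reading; only valid inside the design envelope."""
            if not (SPEC_MIN_PSI <= psi <= SPEC_MAX_PSI):
                raise ValueError("input outside the sensor's specified range")
            if vibration_g > SPEC_MAX_VIBRATION_G:
                raise RuntimeError("sensor destroyed: environment outside design envelope")
            return psi * 1.02   # pretend calibration curve

        # Inside its own boundaries the "system" looks perfect...
        assert abs(read_pressure(250.0) - 255.0) < 1e-6

        # ...until it is installed on a rocket whose test firing shakes it at 12 g.
        read_pressure(250.0, vibration_g=12.0)   # RuntimeError: wrong system boundary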

    Complexity: The jury's still out on questions regarding the nature of complexity; what it is, how it's measured, how it's constructed. In common usage, it's often confused with complicated. There is agreement, though, among those who study complexity that in complex subsystems the whole is greater than the sum of its parts, but there's still no formula c = f(x) where x is a complex subsystem and f(x) returns the difference between the whole and the sum of its parts. At best, we know that complex subsystems have emergent properties and that these are normally productive properties -- i.e., they do something, but we can't reliably predict what that will be.

    Engineers (especially software engineers), in addition to thinking "system" when they should be thinking "subsystem," are usually focused on making something simpler. They don't think in terms of, "this subroutine will increase the complexity in the system." The risk, here, is that complexity is just as capable of being productive when it's unintended as when it's designed, it's just that nobody knows what it's capable of. You try to simplify communication between people by inventing e-mail and, regardless of your intentions, you've added a huge amount of complexity to the system. Part of that complexity connects advertisers to a mass communication channel and, voila! You've got spam!

    Information: People's experience of information is the conscious tip of a huge amount of very complex, subconscious processing in response to a message (stimulus). We do it so well that it takes effort to realize that information is not "out there in the world" but purely in our minds. As you read this you see information, but what are you actually looking at? It's just an arrangement of dark and light pixels on your screen.

    Your perception (intuition, instinct) will lead you to say, "yes, but it's a particular arrangement of pixels and that arrangement makes it information." What I'm saying, though, is that it really isn't information. The arrangement is a particular stimulus that, because of shared knowledge and culture, I (as I'm writing this) can assume will evoke a particular experience of information in you.

    Call this the splitting of semantic hairs, if you'd like. In most day-to-day situations, it's a lot easier to just call it like you see it. But if you're a software engineer, for example, consider how much effort (code) is required to validate and process input. All of that effort is intended to emulate what people do naturally. The problem is that what we do naturally is incredibly complex and barely understood so you can only emulate some of it. Put another way, software is typically very brittle (and stupid) in its handling of input because we don't (yet) know how to respond to it in the way that people do. When you forget this, when you think that information exists in the message, you run the risk of writing code that responds inappropriately (or of sending the wrong message).
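
    A small illustration of that brittleness (hypothetical code, not from any real system): the receiver has to reconstruct, in code, a little of what a human reader does without thinking, and it only covers the cases its author anticipated:

    Code:
        def parse_reading_strict(message: str) -> float:
            """Brittle: accepts only the exact format the author imagined."""
            return float(message)   # "42.0" works; " 42,0 psi " does not

        def parse_reading_tolerant(message: str) -> float:
            """Slightly less brittle: emulates a bit more of what a human does."""
            cleaned = message.strip().lower().replace(",", ".")
            for unit in ("psi", "bar", "kpa"):
                cleaned = cleaned.removesuffix(unit).strip()
            return float(cleaned)

        print(parse_reading_tolerant(" 42,0 psi "))   # 42.0
        try:
            parse_reading_strict(" 42,0 psi ")
        except ValueError:
            print("strict parser choked on input a person reads effortlessly")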

    Take two separate groups of NASA programmers working on two related software subsystems: one that sends "thrust information" and one that receives and responds to it. If these groups think of what is sent and received as "information" (instead of as stimuli the receiver must create information from), then they are leaving themselves open to the possibility that one group will send the thrust data encoded as pounds of thrust and the other will process it as if it were encoded as newtons of thrust. Ask the first group if they've succeeded in sending the information and they'll say, "Yes, we've tested it and it sends the right information." Ask the other if they've succeeded in receiving the information and they, too, will say yes.

    And both of them are wrong.
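
    A minimal sketch of that failure mode (the function names and numbers are hypothetical): if the two subsystems only exchange a bare number, each side can pass its own tests and still be wrong together. Making the unit part of the message, and converting at the boundary, is one way to close the gap.

    Code:
        # Sender's view: "we send the right information" -- but only a bare number goes out.
        def send_thrust() -> float:
            thrust_lbf = 1000.0      # encoded as pounds of thrust
            return thrust_lbf        # the unit never leaves this function

        # Receiver's view: "we receive the right information" -- read as newtons.
        def apply_thrust(thrust_n: float) -> None:
            print(f"commanding {thrust_n:.0f} N of thrust")   # off by a factor of ~4.45

        apply_thrust(send_thrust())

        # One fix: carry the unit with the value and convert at the boundary.
        LBF_TO_N = 4.448222

        def send_thrust_tagged() -> tuple[float, str]:
            return (1000.0, "lbf")

        def apply_thrust_tagged(value: float, unit: str) -> None:
            newtons = value * LBF_TO_N if unit == "lbf" else value
            print(f"commanding {newtons:.0f} N of thrust")

        apply_thrust_tagged(*send_thrust_tagged())   # 4448 N, as intended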
     
  11. Persol I am the great and mighty Zo. Registered Senior Member

    Messages:
    5,946
    Good posts, disposable88 & adaptiveView. Those are good ways of explaining why large systems are extremely difficult to work with.
     
  12. Gifted World Wanderer Registered Senior Member

    Messages:
    2,113
    One of our failings is that we build things big and complex. How many moving parts do the shuttle engines have? Make things simpler, and they are better.
     
  13. Persol I am the great and mighty Zo. Registered Senior Member

    Messages:
    5,946
    Good point, but I think some of the complexity is there for redundancy... and knowing where to draw the line is not an exact science.

    Hopefully new technology down the road will allow for simpler designs.
     
  14. adaptiveView Registered Member

    Messages:
    14
    I understand the perception behind what you're saying, but as complexity is related (proportionally) to the capabilities of a device, simpler isn't always better; it takes what it takes. Failures are often caused by insufficient complexity, especially when the device or software precludes the contributions of people (e.g., when it speeds up some process to the extent that people can no longer intervene when something goes wrong) but fails to replicate the complexity people were providing (e.g., the ability to interpret, diagnose and correct some problem).

    Case in point: In December of 1996, The Bright Field (a 763-foot freighter loaded with 56,380 long tons of corn) was positioning itself to navigate a turn in the Mississippi River when a primary oil pump failed. Automation software detected the failure and attempted to start a secondary oil pump but it wouldn't start so the automation shut down the engine. When viewed from the perspective of the "engine system," the automation behaved in a perfectly reasonable manner (and had this occurred on the open sea, everyone would congratulate the automation designers for a job well done). But if you jump up a level you see that shutting off the engine makes it impossible to steer or stop the "ship system." Jump up another level and on that day in December you see that you now have an extremely heavy ship drifting straight towards a New Orleans wharf. (The crew was able to finally get the engine started but not in time to stop the ship. It destroyed 200 feet of dock, tore the front off of a hotel and shopping mall and injured 116 people.) The captain knew that, if necessary, sacrificing the engine in an attempt to steer or stop the ship was better than running aground, but the automation wouldn't allow the captain to contribute this knowledge and it wasn't complex enough on its own to make this decision (or even know a decision was possible).
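
    A stripped-down sketch of the difference (the names and the decision logic below are invented for illustration, not taken from the Bright Field investigation): automation that treats "protect the engine" as the whole system shuts it down unconditionally; automation drawn with a wider boundary leaves that trade-off to the crew.

    Code:
        # Hypothetical engine-protection logic, reduced to the one decision that matters.
        def protect_engine_only(primary_pump_ok: bool, backup_pump_ok: bool) -> str:
            """Sees only the 'engine system': loss of lube oil means shut down. Always."""
            if not primary_pump_ok and not backup_pump_ok:
                return "ENGINE SHUTDOWN"
            return "RUNNING"

        def protect_ship(primary_pump_ok: bool, backup_pump_ok: bool,
                         confined_waters: bool, crew_override: bool) -> str:
            """Wider boundary: in confined waters, let the crew trade the engine
            for steerage instead of deciding for them."""
            if primary_pump_ok or backup_pump_ok:
                return "RUNNING"
            if confined_waters and crew_override:
                return "RUNNING (degraded, crew accepts engine damage)"
            return "ENGINE SHUTDOWN"

        # Open sea: shutting down to save the engine is reasonable either way.
        print(protect_engine_only(False, False))
        # A bend in the Mississippi: only the wider-boundary policy leaves the captain a choice.
        print(protect_ship(False, False, confined_waters=True, crew_override=True))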


     
  15. Gifted World Wanderer Registered Senior Member

    Messages:
    2,113
    Point taken. There are two ways to do it: include redundancies in case the primary system fails, or design the system not to fail. The idea was that a simpler system is easier to reinforce, thereby preventing failure. Some things can't be made that simple.
     
  16. adaptiveView Registered Member

    Messages:
    14
    Adding redundancies is a "risk management" strategy. As with the Bright Field example in my previous post, redundancies can fail as well. You can add a backup to the backup, but this is no guarantee, either. At some point you determine "acceptable risks and losses" and then add enough backups to (statistically) achieve them.
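
    The arithmetic behind "enough backups to (statistically) achieve them" looks roughly like this, under the strong (and often false) assumption that the backups fail independently; the failure probability is a made-up number:

    Code:
        # Probability that a unit and all of its backups fail on the same mission,
        # assuming independent failures -- the assumption that bites you in practice.
        def combined_failure_probability(p_fail: float, copies: int) -> float:
            return p_fail ** copies

        p = 0.01   # hypothetical 1% chance that any single unit fails
        for copies in (1, 2, 3):
            print(copies, "unit(s):", combined_failure_probability(p, copies))
        # roughly 1 in 100, 1 in 10,000, 1 in 1,000,000 -- if, and only if,
        # the failures really are independent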

    Redundancies are not always an option. How do you put redundant insulation tiles on a space shuttle? Or, to stretch the point a little, a redundant pair of wings? My assumption, based on what you've written, is that in these cases you would suggest designing "the system not to fail." But this is one of those things that is usually a goal or an ideal (at best). For many types of devices and software, good design and testing procedures can yield something that is fail-proof (within some parameters -- your house key will probably never fail to unlock your door, but it makes a lousy screwdriver). The common property defining such devices and software is their simplicity (low complexity).

    For more complex things, what I said in my original post about system boundaries and complexity applies. It's impossible to design a fail-proof shuttle. The system is too vast. Consider everything that potentially affects and is affected by the shuttle in the course of an average mission. You'll never be able to list (or even think of) all of it. You also have to consider that complexity is built from connections. You can look at everything within the physical boundaries of the shuttle (and its booster rockets) and maybe account for all of the connections in it, but you also have to consider the larger system. In terms of the complexity involved, the shuttle isn't some static, isolated thing. It connects with its environment (weather, birds, planes, gravity, solar flares, electronic signals, meteoroids, spacecraft, space junk, ...). All of these connections add to the complexity of the "shuttle system."

    You can't predict how that complexity will always play out (in many cases, you can't predict it at all). You can only accommodate some of it in your design (to fend off even a pea-sized meteoroid, the shuttle's surface would have to be so thick and heavy it could never lift off). You'll never get a fail-proof space shuttle.

    None of this is meant to imply that things can't be done better. Hardware and software engineering is always advancing and there are new ways of looking at things. One "sort of new" and promising advance is making some headway in designing subsystems that assume there's always a possibility of failure and attempt to gracefully deal with it. There's an interesting article about one such approach here.
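
    In code, that "assume failure and deal with it gracefully" mindset tends to look less like preventing errors and more like planning the degraded path in advance. A generic sketch of the pattern (my own illustration, not the approach from the article mentioned above):

    Code:
        import time

        def with_fallback(primary, fallback, retries: int = 2, delay_s: float = 0.5):
            """Try the primary path a few times, then degrade gracefully to a
            fallback instead of failing outright."""
            for _ in range(retries):
                try:
                    return primary()
                except Exception:
                    time.sleep(delay_s)
            return fallback()

        def read_sensor() -> float:
            raise TimeoutError("sensor not responding")   # simulate a hard failure

        last_known_good = 101.3   # hypothetical last good reading
        print(with_fallback(read_sensor, lambda: last_known_good))   # 101.3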
     
  17. SoLiDUS OMGWTFBBQ Registered Senior Member

    Messages:
    1,593
    Simple: don't fuckin' use GLUE.

    Damn EPA. NASA shouldn't have to comply...
     
