Harmony-seeking Computations, or: What we have yet to learn

Discussion in 'General Science & Technology' started by Nick S, Nov 11, 2005.

Thread Status:
Not open for further replies.
  1. Nick S Registered Member

    Messages:
    5
    Hello all, this is my very first post to Sciforums, so forgive me if I've misunderstood slightly the spirit of the community. Anyway... *plunges into the rockpool headfirst*:

    Last night/early this morning I read a shortish paper on "Harmony-seeking Computations" by one Christopher Alexander of UC Berkeley and Cambridge.

    The only copy of the paper I've found online is here (~5.7 MB PDF) and I was wondering if anyone had read it, or would like to read it and discuss it.

    The paper is (somewhat unsurprisingly) a discussion of "Harmony-seeking Computations", which are, in a sense, Alexander's descriptions of how we might learn to compute, amongst other not-quite-understood-but-deceptively-"simple" things, common sense. Science and mathematics can very often describe in algorithmic terms what seem to be hugely complex systems, but we have yet to scientifically pin down concepts such as "common sense".

    Alexander puts forward 15 aspects of "configurations" and their "transformations" - primarily the interaction between the "centers" of a configuration and the other elements entangled with those centers. If you do decide to read the paper, I urge you not to be put off by the seemingly impenetrable descriptions of these 15 aspects at the beginning, as the remainder is far more digestible.

    Having read and by and large agreed with many of Alexander's points, I've discussed the concepts with a few colleagues, and it seems to me/us that many of the ideas raised are fundamental to huge swathes of science that remain unresearched. Just one example: at one point I scribbled "AI" in the margin of my copy. Perhaps building electronic systems with some semblance of "common sense" is the key to AI; all we've achieved so far are technically advanced simulations of human concepts, which remain nothing more than algorithmic pretence, and perhaps that's why they largely don't work. [Disclaimer: I'm not trying to steer us onto AI in any way. There are many far more interesting takes on the paper that I'd like to discuss.]

    Anyway. There's loads more interesting discussion to be had on this topic, so if anyone has read it, or has the time to read it (about 67 pages of not-hugely-dense stuff with plenty of illustrated examples) I'd love to discuss it with y'all.
     