Discussion in 'Intelligence & Machines' started by Jaster Mereel, Apr 25, 2006.
What is Transhumanism? What is the Singularity? In detail, if possible.
Wikipedia: Use it, love it, be assimilated into it.
I've read both Wikipedia articles. I'm not looking for a definition from Wikipedia, I am looking for definitions from people. I already know what Wikipedia has to say about the subject, so I am not looking to be enlightened on it. I am looking to find out what people think of it. Thanks anyway.
What's the Singularity?
The Singularity is characterized as a time in the future when computers and AI (artificial intelligence) become so fast and advanced that they become a truly disruptive force in the fabric of society as we know it. These superintelligent machines use their superior cognitive abilities to improve and re-engineer themselves, making themselves ever more intelligent and ever more powerful. Because ever smarter and more advanced intelligences will be doing the engineering, the rate at which technology advances will speed up dramatically. Therefore, it's extremely difficult to say what the world is going to be like in the coming decades. That is in fact why it's called the Singularity: you can't see or reasonably predict the future beyond that point, just like how you can't see beyond the singularity of a black hole.
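The recursive self-improvement loop described above can be put into a toy model. To be clear, this is purely illustrative: the doubling factor, speedup factor, and generation count are made-up assumptions, not anyone's actual predictions.

```python
# Toy model of recursive self-improvement (illustrative assumptions only):
# each AI generation designs its successor, and we assume every design
# cycle multiplies capability by `gain` while shrinking the cycle time
# by `speedup`.

def recursive_improvement(capability=1.0, cycle_years=10.0,
                          gain=2.0, speedup=2.0, generations=6):
    """Return a list of (elapsed_years, capability) per generation."""
    elapsed = 0.0
    history = [(elapsed, capability)]
    for _ in range(generations):
        elapsed += cycle_years
        capability *= gain        # each generation is `gain` times smarter...
        cycle_years /= speedup    # ...and designs its successor faster
        history.append((elapsed, capability))
    return history

for years, cap in recursive_improvement():
    print(f"after {years:7.3f} years: capability x{cap:g}")
```

With these assumed numbers the elapsed time converges toward a finite limit (20 years) even as capability grows without bound, which is exactly the "can't see past it" intuition behind the term.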
I'm reading a book right now called "The Singularity is Near" by a dude named Ray Kurzweil. Brilliant guy. He's a futurist, inventor and author. Bill Gates says he's the best person to predict the future of AI.
Ray predicts that we're going to see the Singularity happen around 2045. Not very far away.
For 'Transhumanism' please see this article: 'Luciferianism: The Religion of Apotheosis' by Philip D. Collins:
"Transhumanism offers an updated hi-tech variety of Luciferianism. The appellation "Transhumanism" was coined by evolutionary biologist Julian Huxley.... Huxley defined the transhuman condition as "man remaining man but transcending himself, by realizing new possibilities of and for his human nature." However, by 1990, Dr Max More would radically redefine Transhumanism as follows:
Transhumanism is a class of philosophies that seek to guide us towards a posthuman condition. Transhumanism shares many elements of humanism, including a respect for reason and science, a commitment to progress, and a valuing of human (or transhuman) existence in this life; Transhumanism differs from humanism in recognizing and anticipating the radical alterations in the nature of our lives resulting from various sciences and technologies."
Transhumanism advocates the use of nanotechnology, biotechnology, cognitive science, and information technology to propel humanity into a "posthuman" condition. Once one has arrived at this condition, man will cease to be man. He will become a machine, immune to death and all other "weaknesses" intrinsic to his former human condition. The ultimate objective is to become a god."
UNDERSTAND that Luciferianism is the philosophy of the Illuminati, and that Transhumanism is the modified version of it!!!!!
So, I am to understand that this is supposed by some to be a genuine historical event at some time in the future? How did Mr. Kurzweil arrive at the date of 2045? Also, if it is supposed to be a real historical event, how does he connect now with that event? What is leading up to it in the historical sense? Are there any factors other than technological development that he considers?
I would appreciate it greatly if someone engaged in this thread and at least attempted to answer my questions.
Damn it, man. The reason nobody is answering is probably because nobody really knows. The concept of transhumanism is somewhat after my time and still isn't really a subject of serious study for most people. It is mostly confined to science fiction and futurist mental-wankery.
From the little I know, I am guessing that the man charted civilization's advancement on a graph. It is a basic rule that every year, as the number of educated humans in existence and the quality of research technologies goes up, we advance faster and faster. The whole thing is exponentially driven so, at some key point, presumably 2045, the line on the graph would go almost straight up. This is generally seen as being a singularity where human civilization, if not the biological human itself, would advance to a posthuman state.
But what the hell do I know. I'm an old fart who got through with active study when we were still scared of the commies.
You should read his books.
Also, if you're really interested, you should go to this event:
I've just read the utterly fascinating Wikipedia articles along with some of the links, and that's the sum of my knowledge on the subject. But as I understand it, the key event is going to be the point when we develop the first technology that's smarter than us. This will take over the design of newer, smarter technologies, which will design even smarter technologies, and so on, at an ever faster rate, until we reach a point - the Singularity - "where our old models must be discarded and a new reality rules" (Vernor Vinge). The "new reality" will be one designed by a machine with an IQ of, say, 6 trillion but only as much 'compassion' as we can build into it. How do we know it won't 'solve' world food shortages by eliminating 'surplus' humans? How do we know it won't play dumb, only to trick us at a later stage? The answer is: right now, we don't.
Vinge, writing in 1993, predicts that the design of the first 'superhuman' intelligence will happen "within 30 years", and that the singularity will take place very quickly afterwards - but because technology has by this point completely outstripped us, predicting the pace and nature of change from our current co-ordinates is an impossible task (we'll become semi-human cyborgs in order to keep up, but only if the machines don't vaporise us first).
Don't know if this answers any of your questions but that's my understanding.
The whole basis for the idea of a technological singularity is derived from the observation that the number of transistors we can fit into a circuit doubles every 18 months (if I recall correctly). So we are advancing at an exponential rate, doubling every year and a half.
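That doubling rule is easy to turn into back-of-the-envelope arithmetic. A quick sketch (the 18-month figure is the commonly quoted version of Moore's law; the starting count of 1 is an arbitrary baseline):

```python
# Back-of-the-envelope Moore's law arithmetic: doubling every 18 months
# works out to roughly a 100x increase in transistor count per decade.

def transistors_after(years, doubling_months=18, start=1.0):
    """Relative transistor count after `years` of steady doubling."""
    doublings = (years * 12) / doubling_months
    return start * 2 ** doublings

print(transistors_after(1.5))   # one doubling period: 2x
print(transistors_after(10))    # a decade: roughly 100x
```

The striking consequence of exponential growth is that each decade multiplies capability by about the same factor the previous one did, so in absolute terms the curve looks flatter and flatter in hindsight and steeper and steeper looking forward.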
And I would appreciate it if you responded to MY reply. Already posted this and it went walkies.... hmmmmmmmmm
I first found out about the supposed 'Singularity' in the fiction of Vernor Vinge, particularly a book called Marooned in Realtime.
Then I found the science fiction website Orion's Arm, founded by M Alan Kazlev and the transhumanist Anders Sandberg. As I am interested in the possibility of interstellar exploration and colonisation, I joined up and started contributing. The transhuman aspect of the Orion's Arm scenario seems to be a logical consequence of the sort of high technology that might be developed in the medium-to-long term future.
To be honest I am not myself a transhumanist as such. It just seems inevitable that humanity will be changed to a greater or lesser extent by the technologies which will almost certainly be developed in the next few thousand years. Exactly what sort of changes might ensue in reality I wouldn't really like to predict. Exploring these possibilities in fiction is one way of seeing the possible consequences of such changes.
What's the standard unit of 'smart'? Humans might be able to build intelligent machines, but what determines whether they're superior or inferior to us?
I'm going to reiterate baumgarten's point by saying that you can't honestly talk about a machine being more intelligent than us until you define what intelligence is. I absolutely agree that machines will be (and already are) capable of doing things that we as homo sapiens cannot do, but to say that they will be more "intelligent" than us is kind of shortsighted. Most people seem to look at intelligence as if it works on a hierarchical scale, with some things less or more intelligent than other things. I would say that, as we study animal intelligence more and more, we are realizing that intelligence is a matter of the niche in which the animal exists. Machine intelligence is likely to be the same thing (it already is, actually), whereby the machine's niche is in a particular area determined by us, rather than by a drive to survive and reproduce like most life. I think you're looking at intelligence the wrong way.
Again, my point above. Intelligence doesn't work on a hierarchical plane, and machines won't think like human beings. What you just described is how a person (albeit an amoral one) would think, not how a machine would think. Machines are built to perform a specific function which is determined by us. Unless we design these machines to solve the world food shortage problem, or we design them like animals with a need to survive and reproduce, then none of these things will happen. The only way it could happen is if we were building said machines with the purpose of simulating living organisms, but that's not what the vast majority of machines are built to do.
On the first part, about smart technologies building smarter and smarter technologies, I agree. Machines will be designed whose purpose is to design other machines, and as they become more sophisticated, they will refine future technologies further and further until the technology we possess will be difficult to predict, not because of its novelty, or even its complexity, but rather because of its capability, which will be far greater than now.
The part of this that bothers me is the assumption that technological development dictates social change, instead of the converse. A close study of history will show that societies don't incorporate new technology or change substantially unless the society is ready for the change. In other words, a new technology isn't accepted unless a need is found for that technology. This is true in the vast majority of cases throughout history.
The one exception to this is the modern age, where novelty is not only accepted openly but expected. However, the modern age will not last forever. All of this rapid change will not culminate in ever more rapid change; it will slow down because of external factors that demand it slow down, namely environmental difficulties: the scarcity of water in the coming century, massive overpopulation in those same areas along with the emigration that will result from it, weapons proliferation, disease, resistance to globalization, the dilution of the power of nation-states and the growing importance of corporate power structures, and so on.

All of these things we are seeing right now, at this very moment, and yet people who speak of "the Singularity" ignore them completely, treating all human societies as one, and saying that they will all adapt to internal pressures that probably aren't likely to arise without deliberate manipulation. The world is not made up of a single society; it comprises a numerous multitude of social groups all competing with one another.

To be quite honest, I think that those who believe wholeheartedly in the Singularity lack any real understanding of history, and so when someone challenges the idea directly based upon historical precedent (because every idea and societal form has a precedent in the past), they go on about how precedent is meaningless because "technological growth will render it obsolete by forcing us to change ourselves," treating technology as something wholly separate from the society that created it. In my opinion, people need to start making serious attempts at projecting what the future will be like, so that ideas like "the Singularity" cease gaining the kind of popularity among intellectual people that they do, because such an idea is a dangerous one for educated people to base their entire view of history upon.
Well, we were. We hit a plateau a while ago, so PC manufacturers have been pushing 64-bit and dual-core architectures in order to compensate. IBM has been using nanotechnology in research to attempt to squeeze even more transistors onto a chip, but you can only take miniaturization so far. At a certain point (likely within the next ten years) it will be impossible to build significantly smaller and faster electronic microprocessors.
Good question; it deserves some debate, rather than leaving it hanging as a rhetorical question.
Perhaps we could take general human intelligence as a measure of 'smartness'. Psychologists already use the IQ measure, supposedly based on average human ability; there are certain cultural difficulties associated with this measure, but it has a certain amount of validity when applied to humans. But machine intelligence is likely to be very different, and I suspect the IQ scale will be difficult if not impossible to apply to intelligent machines.
Alan Turing suggested a test, as is well known, to test if machines are indistinguishable from humans in their responses. A concerted effort might sooner or later produce a machine that would be able to pass a Turing Test; but this would only result in a machine that imitates a certain aspect of humanity.
Far more intensive tests would be required to determine if a machine has real self-awareness. Alternately a machine might be capable of self-awareness without being able to pass the Turing test. Or even self-awareness might not be a requirement for an artificial intelligence; a machine capable of running a traffic control system, or a market trading system, or a country, might not need self-awareness and may even be inhibited by such a capacity.
But given a machine that can be shown to emulate a human mind in every conceivable way, this could then provide a baseline to measure the 'smartness' of other machines. If a human-level machine is upgraded to be twice as fast, it simply becomes a speeded up human, capable of making mistakes twice as fast as a human. Similarly a series of human level machines connected in parallel could be as indecisive as a committee.
A machine with a vast memory could exceed any human capacity for knowledge and recall; but these aspects too could be no more useful than a human with a competent search engine and access to the internet, and who is fastidious about keeping records.
So will a machine that is equivalent to a human mind be the pinnacle of machine intelligence? I think not. A machine with a hundred times as much processing power and a hundred times as much memory will be more capable in many ways; it seems entirely likely that such a machine will exceed most human ability, even when the difference in operating speed is taken into account.
And it seems to me that machines which mimic human characteristics will be in the minority; there will be very competent machines with very large processing abilities which are entirely different in form and function.
There truly is very little precedent for such a historical development - unless you consider the agricultural revolution of the ninth millennium BCE, or the industrial and scientific revolutions of recent centuries. Both of these had profound effects on most of the competing societies in the world, to a greater or lesser extent (but they were effects which would have been impossible to predict beforehand).
There is little precedent for such a technological development. The point is, no matter how capable a machine is of performing a task, human beings will always be assigning the task. Machines will only become as "smart" as we want them to be. The basics of society don't change with any historical development, just the details of its organization. Human beings have always interacted with each other in roughly the same manner, and hyper-fast machines capable of handling all of our information storage and processing and our industrial capacity will not change the way that human beings interact with one another, only the way in which resources are exchanged and used.
The point I am trying to make is that most of the machines we have are already more capable in their niche than humans are, which is why we use them to begin with. Making them better and better at what they already do will not change human society to a great extent in terms of the interactions between people. What it will change is the economic (i.e., resource exchange and use) state of the world.
The assumption among AI folks is that, somehow, intelligence works as a hierarchy. I already pointed this out and everyone has ignored it completely. Intelligence doesn't work like that, and we are finding this out through our studies of other animals. Intelligence is about specific capabilities that have evolved for specific tasks. Take humans, for example: human beings are not generally more intelligent than other animals, because there is no such thing. The reason we are the dominant species on the planet is that we have a greater ability to manipulate and control our environment than other animals, and abilities such as language and the transfer of knowledge from one generation to the next (our culture, as it were) are not a result of our general superiority over other animals; rather, they complement our ability to control and change our environment. Yes, machines can be, and already are, better at this than we are, but machines are not living things. Machines do not have any motivations beyond those which we give them. They don't have a drive to survive and multiply like every other living thing on the planet. They are sophisticated tools, nothing more, and any machine that is capable of simulating a human being will not be the same as a human being, because it will merely be mimicking one.
Also, the "revolutions" that you mentioned were not periods of great progress, as most people suggest. They were shake-up periods, where the old order and form of society were overthrown and then altered. The basic substance of society remained the same because human beings remained the same. People must change first in order for society to change, and for the Singularity to take place, society must change before people do. It's simply not realistic, when you have any real grounding in history, to believe in this event.
It definitely wasn't a rhetorical question (well, maybe a little). Thanks for the reply.
Such a test could only return a binary result, however. We wouldn't know how self-aware a machine was, only that it had self-awareness. I suppose a sufficiently rigorous test could establish several arbitrary levels of self-awareness under which things can be categorized, but such a test's practical application would be limited.
I believe Roger Penrose has presented an argument against the idea that a machine can completely emulate a human mind. I don't know much more about this, but perhaps someone on this forum does. It is worth researching, anyway.
It seems to me that a machine that thought and behaved exactly like a human wouldn't be much more than a very expensive human. It could be a personal bias, but I favor the idea of more concrete applications of artificial intelligence. Sentient computers might be capable of pondering the nature of their own existence, but such functionality wouldn't be useful in a wide range of applications, which is why I think they are unlikely to become prevalent in our society.
Sure, but that's using silicon. Various other methods are in the pipeline, and quantum computers are slowly becoming a reality. But it will take a few years.
As for intelligence, I accept only the most vague definitions, to do with the ability to solve problems within certain boundaries. IQ tests ultimately only test how good you are at doing IQ tests, which in turn usually relies on you understanding that this is a piece of paper with lines printed on it that actually represent, in 2D, some objects or people, and so on. People forget how much background learning about an environment is necessary before you can even begin to test "intelligence".