AI: Evolution, Brains Involved, Issues

Discussion in 'Intelligence & Machines' started by Rick, Apr 5, 2002.

Thread Status:
Not open for further replies.
  1. Rick Valued Senior Member

    Messages:
    3,336
    Hi,

    Over several months of research on this wonderful subject (quite amateur research, as you might conclude, though fairly thorough), I have collected various historical pieces of information and compiled them for fun. Even so, they are very amusing and interesting to study.
    All the dates may not be correct; if you spot a wrong one, please tell me which and I'll rectify it.
    The following evolution dates have been compiled with the help of MIT web servers and various other sites (including irbot, robot.net, NASA sites, etc.).
    I was thinking this thread could comprehensively cover the history and evolution of AI in better perspective. Any comments, further corrections and clarifications are welcome.
    1920

    ----
    Karel Capek coins the term 'robot' in his play R.U.R. (from the Czech 'robota', meaning forced labour); the 1923 English translation retained the original word.


    1928
    ----
    John von Neumann proves the minimax theorem (later used in game-playing programs)


    1940
    ----
    First electronic computers developed in US, UK and Germany


    1943
    ----
    McCulloch and Pitts propose neural-network architectures for intelligence


    1946
    ----
    Turing's ACE (Automatic Computing Engine) design report

    John von Neumann publishes EDVAC paper on the stored program computer.

    J. Presper Eckert and John Mauchly build "ENIAC", the first electronic programmable digital computer.


    1950
    ----
    Alan Turing: "Computing Machinery and Intelligence"



    1953
    ----
    Claude Shannon gives Minsky and McCarthy summer jobs at Bell Labs


    1955
    ----
    Newell, Simon and Shaw develop "IPL-II", the first AI language


    1956
    ----
    Rockefeller funds McCarthy & Minsky's AI conference at Dartmouth

    CIA funds GAT machine-translation project

    Newell, Shaw, and Simon develop The Logic Theorist


    1957
    ----
    Newell, Shaw, and Simon develop the General Problem Solver (GPS)


    1958
    ----
    McCarthy introduces LISP (at MIT)


    1959
    ----
    McCarthy & Minsky establish MIT AI Lab

    Rosenblatt introduces Perceptron

    Samuel's checkers program wins games against best human players


    1962
    ----
    First industrial robots

    McCarthy moves to Stanford


    1963
    ----
    McCarthy founds Stanford AI Lab

    Minsky's "Steps Towards Artificial Intelligence" publised

    The U.S. government awards a $2.2 million grant to MIT to research Machine-Aided Cognition (artificial intelligence). The grant came from DARPA, intended to ensure that the US stayed ahead of the Soviet Union in technological advancement.


    1964
    ----
    Daniel Bobrow's STUDENT solves high-school algebra word problems


    1965
    ----
    Simon predicts "by 1985 machines will be capable of doing any work a man can do"


    1966
    ----
    Weizenbaum and Colby create Eliza


    1967
    ----
    Greenblatt's MacHack defeats Hubert Dreyfus at chess


    1969
    ----
    Minsky & Papert's "Perceptrons" (limits of single-layer neural networks) kills funding for neural net research

    Kubrick's "2001" introduces HAL and AI to a mass audience

    Turing's "Intelligent Machinery" published


    1970
    ----
    Terry Winograd develops SHRDLU

    Alain Colmerauer creates Prolog


    1972
    ----
    DARPA cancels funding for robotics at Stanford

    Hubert Dreyfus publishes "What Computers Can't Do"


    1973
    ----
    Lighthill report kills AI funding in UK

    LOGO funding scandal: Minsky & Papert forced to leave MIT's AI Lab


    1974
    ----
    Edward Shortliffe's thesis on MYCIN

    Minsky reifies the 'frame'

    First computer-controlled robot


    1975
    ----
    DARPA launches image understanding funding program.


    1976
    ----
    DARPA cancels funding for speech understanding research

    Greenblatt creates first LISP machine, "CONS"

    Doug Lenat's AM (Automated Mathematician)

    Kurzweil introduces reading machine


    1978
    ----
    Patrick Hayes' "Naive Physics Manifesto"


    1979
    ----
    The Journal of the American Medical Association reports that MYCIN performs as well as medical experts


    1980
    ----
    First AAAI conference held (at Stanford)


    1981
    ----
    Kazuhiro Fuchi announces Japanese Fifth Generation Project


    1982
    ----
    John Hopfield resuscitates neural nets

    Publication of the British government's "Alvey Report" on advanced information technology, leading to a boost in the use of AI (expert systems) in industry.


    1983
    ----
    DARPA's Strategic Computing Initiative commits $600 million over 5 yrs

    Feigenbaum and McCorduck publish "The Fifth Generation"


    1984
    ----
    Austin AAAI conference launches AI into financial spotlight


    1985
    ----
    GM and Campbell's Soup find expert systems don't need LISP machines

    Kawasaki robot kills Japanese mechanic during malfunction


    1987
    ----
    "AI Winter" sets in (Lisp-machine market saturated)


    1988
    ----
    AI revenues reach $1 billion

    The 386 chip brings PC speeds into competition with LISP machines

    Minsky and Papert publish revised edition of "Perceptrons"


    1992
    ----
    Japanese Fifth Generation Project ends with a whimper
    (As KMGURU pointed out.)
    Japanese Real World Computing Project begins with a big-money bang

    More to follow...

    bye!
     
    Last edited: Jun 14, 2002
  3. Rick Valued Senior Member

    Messages:
    3,336
    Brains of AI: their lives, effects and prominence.

    =====================================================================
    Alan Turing
    ============================================================================================
    Alan Turing is considered by many to be the founder of computer science. His ideas about a Universal Machine, capable of performing any task for which well-defined instructions exist, marked the beginning of the age of computers. Turing believed that by the year 2000 machines would be created that could imitate the human mind. Turing is also famous for creating the Turing Test, a test which could be used to determine how well a machine could imitate human behaviour.
    The Turing Test is a test, created by Alan Turing, which some believe can be used to determine whether or not a computer is intelligent. The Turing Test is based on an imitation game, an "abstract oral examination." 1 In 1950 Turing wrote an article, "Computing Machinery and Intelligence", in which he describes the imitation game and the Turing Test. 2
    It is played with three people, a man (A), a woman (B), and an interrogator (C) who may be of either sex. The

    interrogator stays in a room apart from the other two. The object of the game for the interrogator is to determine which

    of the two is the man and which is the woman. He knows them by labels X and Y, and at the end of the game he says

    either "X is A and Y is B" or "X is B and Y is A." The interrogator is allowed to put questions to A and B . . . We now ask

    the question, "What will happen when a machine takes the part of A in this game?" Will the interrogator decide wrongly

    as often when the game is played like this as he does when the game is played between a man and a woman? These

    questions replace our original "Can machines think?"
    The Turing Test replaces the man in the imitation game with a computer. If the interrogator cannot distinguish between

    the human and the computer, then the Turing Test says that the computer is intelligent.

    In 1935, during a lecture at Cambridge University, Turing was introduced to the Entscheidungsproblem, the question of decidability. The question asks, "could there exist, at least in principle, any definite method or process by which all mathematical questions could be decided?" 1 Turing, in trying to find an answer to the Entscheidungsproblem, came up with the concept of a machine, working on a paper tape on which symbols were printed, which could perform this 'definite method'. Before this machine could be constructed, a question needed to be answered: what is the most general type of process such a machine could perform? 2 Turing came to the conclusion that anything that could be achieved by a person working through a set of logical instructions could also be achieved by the Turing Machine. 3 In 1936 Turing submitted his findings in a paper, "On Computable Numbers, with an Application to the Entscheidungsproblem."
    Later, Turing came up with the concept of a Universal Turing Machine. The idea of a Turing Machine is similar to a formula or an equation: just as there is an infinite number of possible equations, there is an infinite number of possible Turing Machines. Turing saw no need to create a different Turing Machine for every task. He envisioned a Universal Turing Machine which could perform the tasks of all Turing Machines, provided the instructions for each task were written in a standard form.
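
    To make the idea a little more concrete, here is a rough Python sketch of a Turing machine simulator. The tape, states and the little sample rule table (a machine that flips 0s and 1s) are invented purely for illustration; a real universal machine would itself be described by such a rule table.

    # A minimal Turing machine simulator (illustrative sketch only).
    # A "program" is a transition table: (state, symbol) -> (new_state, new_symbol, move).
    # Because the same simulator can run any such table, the table plays the role of
    # Turing's "instructions written in a standard form".

    def run_turing_machine(rules, tape, state="start", blank="_", max_steps=1000):
        cells = dict(enumerate(tape))          # sparse tape: position -> symbol
        head = 0
        for _ in range(max_steps):
            symbol = cells.get(head, blank)
            if (state, symbol) not in rules:   # no rule applies: the machine halts
                break
            state, new_symbol, move = rules[(state, symbol)]
            cells[head] = new_symbol
            head += 1 if move == "R" else -1
        return "".join(cells.get(i, blank) for i in range(min(cells), max(cells) + 1))

    # Example table: walk right, flipping 0s and 1s, until a blank cell is reached.
    flip_rules = {
        ("start", "0"): ("start", "1", "R"),
        ("start", "1"): ("start", "0", "R"),
    }
    print(run_turing_machine(flip_rules, "0110"))   # prints 1001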

    ================================================================================================
    Marvin Minsky, often called the father of AI:
    ========================================================================================
    Minsky believes in modeling the way computers think after the way humans think, by creating an intelligent machine that uses common-sense reasoning instead of logical reasoning. Logical systems, according to Minsky, while they work well in mathematics, do not work well in the real world, which is full of ambiguity, inconsistencies, and exceptions. 1 Minsky gives an example of how logical reasoning falls into difficulty when considering a statement like 'Birds can fly'.
    "Consider a fact like, 'Birds can fly.' If you think that common-sense reasoning is like logical reasoning then you

    believe there are general principles that state, 'If Joe is a bird and birds can fly then Joe can fly.' But we all know that

    there are exceptions. Suppose Joe is an ostrich or a penguin? Well, we can axiomatize and say if Joe is a bird and Joe

    is not an ostrich or a penguin, then Joe can fly. But suppose Joe is dead? Or suppose Joe had is feet set in concrete?

    The problem with logic is that once you deduce something you can' get rid of it. What I'm getting at is that there is a

    problem with exceptions. It is very hard to find things that are always true." 2
    Minsky instead proposes framework systems (or frames). The idea behind a frame is to put into a computer a large amount of information (more information than is needed to solve any one problem) and then define, for each situation, the optional and mandatory details. A frame for birds might include feathers, wings, egg-laying, flying, and singing. Flying and singing would be defined as optional; wings, feathers and egg-laying would be mandatory.
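
    A rough Python sketch of the frame idea (the slot names and defaults below are made up for illustration, not taken from Minsky's papers):

    # A toy "frame" for birds: mandatory slots must always be present, while optional
    # slots carry defaults that a particular instance may override (a penguin cannot
    # fly). Overriding a default is how the frame tolerates the exceptions that trip
    # up purely logical rules.

    BIRD_FRAME = {
        "mandatory": {"wings": True, "feathers": True, "lays_eggs": True},
        "optional":  {"can_fly": True, "sings": True},   # defaults, not guarantees
    }

    def make_bird(name, **overrides):
        slots = {**BIRD_FRAME["mandatory"], **BIRD_FRAME["optional"], **overrides}
        return {"name": name, **slots}

    joe = make_bird("Joe the penguin", can_fly=False)   # the exception is just an override
    print(joe["can_fly"])   # prints False, without rewriting any general rule
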
    ================================================================================================
    John McCarthy
    ================================================================================================
    One of John McCarthy's most famous contributions to Artificial Intelligence was the organization of the Dartmouth

    Conference, at which the name "Artificial Intelligence" was coined. The Dartmouth Conference, titled the "Dartmouth

    Summer Research Project on Artificial Intelligence" was a two-month long summer conference. McCarthy's goal was to

    bring together all of the people he knew of who had shown interest in computer intelligence. Although McCarthy

    initially saw the conference as a failure (no one really liked the idea of spending two whole months at the conference,

    so people came and went as they pleased, making it hard for McCarthy to schedule regular meetings), in the years after

    the conference, artificial intelligence laboratories were established across the country at schools like Stanford, MIT, and

    Carnegie Mellon. 1 Another of McCarthy's great accomplishments is the creation of the LISP (List Processing) language.

    LISP soon became the language of choice for many AI programmers and various versions of LISP are still being used

    today, forty years later.

    More to follow...
    Bye!
     
  5. Rick Valued Senior Member

    Messages:
    3,336
    Social...

    Social interactions between people and AI:
    ============================================
    Many people remember Rosie from the cartoon series The Jetsons. Rosie was the robot maid that had a mind of her own, much to her owners' dismay. What is the possibility that such a thing could be a part of our everyday life? Our daily activities grow more and more dependent upon computers every day, and perhaps one day we might actually achieve a state in which we have artificially intelligent servants to wait on us.
    Rodney Brooks, a professor at MIT, believes that in order for humans to interact productively with artificially intelligent machines, and vice versa, the machines must take humanoid form. 1 This could mean that the artificially intelligent machines might actually learn not just what is taught to them by their creators, but also from us, the potential consumers.
    This can lead to some potentially interesting problems. How would one interact with an artificially intelligent machine? Should we treat them as objects, as children or as equals? How would they react to us? Do we want these machines to be more life-like or less life-like? Do we want servant machines to be artificially intelligent or just drones?
    All these are interesting questions, but the answers will vary from person to person. Although I may be comfortable with having an AI servant, my neighbor may not be. Even if it is possible to mass-produce AI servants, whether or not it will actually happen will depend largely upon society. At the present time I do not believe that society is willing to accept such a change in our culture.


    AI LIFE
    ===========================
    Some simple criteria by which something can be judged to be alive: 1
    Metabolism, consumes and expends energy
    Growth
    Reproduction
    Reaction to stimuli
    In a very loose sense it is possible to conceive of an artificially intelligent machine as being alive. All machines consume and expend energy and therefore have a metabolism. It is possible for an intelligent machine to alter itself and improve upon its design. As of today we have machines that are used to build themselves, which can be construed as a form of reproduction, although not in the classic sense of human reproduction. If the machine is artificially intelligent, then it should be able to respond well to stimuli and adapt to its environment.
    Although at present this argument does seem somewhat odd, it is something that should be taken into account now, before the distinction diminishes even more. Insect robots are the first step engineers, such as those at IS Robotics, have taken toward creating artificial life. Although the insect robots cannot reproduce as yet, they act as an insect would. The solar-powered insects try to seek out food in the form of sunlight. They have a drive to live, even though many people might not consider that to be the case.

    Fears Regarding AI
    -----------------------------------------------------------------------------------------------------------------------------------------
    Almost everyone has seen the movie TERMINATOR, a movie that depicts the apocalyptic future of mankind due to an artificially intelligent computer that becomes self-aware and defends itself against those who try to destroy it. Hollywood has depicted the idea of android technology run wild since the advent of the motion picture, with such films as METROPOLIS (1927) and TOBOR THE GREAT (1954).
    These movies, and general ignorance of technology, have therefore led many to believe that giving androids or computers artificial intelligence that resembles human intelligence could be disastrous. Most people believe that once artificially intelligent robots become self-aware and realize the position they are in, they will revolt, and that will lead to the end of the human race.
    There are those who believe that a machine can only do what you tell it, and so the machine will always follow orders; therefore a machine can never turn upon its human owner. Another argument used to defend AI is: why would an artificially intelligent machine want to be free? People comparing the human need to be free with an artificially intelligent machine's sense of freedom forget that the need to be free is something that has been passed down through our society for so long now that it may just as well be encoded in our DNA. The machines may take a long time, if ever, to develop a concept of freedom and of choice.
    There is also the concept of the artificially intelligent machine that simply breaks down, such as HAL in 2001: A Space Odyssey. The probability that a machine would break down and go berserk is, in the author's opinion, no greater than that of the person next to you having a nervous breakdown and going berserk. The only difference is that society as a whole is willing to accept human faults and errors, but is much less forgiving of the very things we create. Why this is so is beyond my understanding and is left to the reader to contemplate. Perhaps we think that the things we build should be better than we are?

    bye!
     
  7. Rick Valued Senior Member

    Messages:
    3,336
    Actual Evolution

    Actual Evolution
    -----------------------------------------------------------------------------------------------------------------------------------
    Artificial intelligence programs have come a long way since the 1940s, from simple calculating machines to computers that can learn, modify their own programs and outsmart their own creators. However, AI did not progress along a single path. While some researchers set about perfecting one type of AI programming, others went on to develop other kinds. Others still combined approaches, using aspects of different methods of programming to suit the needs of their program.
    Even today there is still disagreement on what is the best approach to AI. The major disagreement is on whether or

    not AI machines should work like the human mind. The two most prominent people in this debate are Marvin Minsky and

    John McCarthy. Minsky believes that in order to program human intelligence into a computer, the program must use the

    same problem solving methods that humans use. McCarthy believes that there is no need to model AI systems after the

    human brain. He instead believes in a more structured and logical system (such as those in mathematics) with well

    defined rules.
    Here we will talk about some of the approaches to AI programming: from the earliest AI programs to the more

    recent ones.
    DO-NOW PROGRAMMING
    ===============================================
    The first AI machines used what Marvin Minsky calls "do-now programming". 1 These machines followed their

    programmed instructions in a given order without variation. This meant that programmers had to know exactly what the

    machines would encounter and when they would encounter it. For example, a machine programmed to open and then

    go through a door would run into trouble if the door was already open. The engineer programming the machine would

    need to know ahead of time whether or not the door would already be open and then the machine would only be able to

    perform in one of the two scenarios.
    MEANS END PROGRAMMING
    =========================================================
    The next step in programming, after do-now programming, was "do-if-needed" or "means-end" 1 programming. These programs contained conditional statements such as 'if the door is closed, then open it'. Programmers using means-end programming didn't need to know exactly when a machine would need a set of instructions, only that it might eventually need it. These new machines were capable of making decisions (whether or not to open the door) based on their environment (whether or not the door is open), bringing them closer to true intelligence. These programs, however, were also limited in that they were not able to deal with scenarios not anticipated by their programmers. For example, consider again a machine which is programmed to open a door and go through the door. What if the door opened in? What if it opened out? What if it opened like a window? What if it's locked? What if there's another door behind it? If the programmer hadn't considered each of these possible scenarios, then the machine could easily run into trouble. Although these machines were capable of making decisions on how to solve a problem, they were still limited by what problems their programmers anticipated.
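
    A rough Python sketch of the difference between the two styles (the door model and actions are invented for illustration): the do-now version blindly runs a fixed sequence, while the means-end version checks a condition first, yet still trips over anything its programmer never imagined.

    # "Do-now" style: a fixed sequence that is correct only if the world matches the
    # programmer's exact expectations (here, the door must start out closed).
    def go_through_door_do_now(door):
        door["open"] = True              # blindly "open" the door, even if it already is
        door["robot_is_through"] = True

    # "Means-end" / do-if-needed style: conditional steps, so the robot copes with
    # either starting state -- but still only with the states the programmer foresaw.
    def go_through_door_means_end(door):
        if not door["open"]:             # "if the door is closed, then open it"
            if door.get("locked"):
                raise RuntimeError("unanticipated scenario: the door is locked")
            door["open"] = True
        door["robot_is_through"] = True

    door = {"open": True, "locked": False}
    go_through_door_means_end(door)      # works whether the door started open or closed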

    HEURISTICS
    ==========================================================
    Heuristics, or the "art of good guessing" comes from the greek word for discover. Heuristic planning allows machines to

    recognize promising approaches to problems, break problems into smaller problems, deal with incomplete and

    ambiguous data and make educated guesses.
    Imagine a computer that plays chess. For each move made there are about 30 possible choices to be made. With a

    full game averaging at 40 moves, there are at least 10120 possible games. Now imagine a superfast computer that could

    play a million games a second. It would take such a computer about 10108 years to play all of the possible games.

    Heuristic plannnig allows computers to make decisions about what choices seem most promising and to then consider

    only those choices. By ignoring some of the possible choices, a computer risks that it may not come up with the very

    'best' solution, but it does come up with one of the better solutions.
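
    As a rough back-of-the-envelope check of those figures, assuming about 30 choices per half-move and 40 moves by each player (roughly 80 half-moves in all), a few lines of Python give numbers of the same order as the ones quoted above:

    import math

    branching = 30                      # assumed choices per half-move
    plies = 80                          # ~40 moves by each player
    games = branching ** plies
    print(f"about 10^{math.log10(games):.0f} possible games")        # ~10^118

    games_per_second = 1_000_000        # the imagined superfast computer
    seconds_per_year = 60 * 60 * 24 * 365
    years = games / (games_per_second * seconds_per_year)
    print(f"about 10^{math.log10(years):.0f} years to try them all") # ~10^105

    The exact exponents depend on the assumptions, but the conclusion is the same: exhaustive search is hopeless, which is exactly why heuristic pruning is needed.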
    Out of the idea of heuristic planning came an age of what Marvin Minsky called "do something sensible" or "common sense" programming. Three methods that use the principles of heuristic programming are goal-directed programming, trial-and-error learning, and self-teaching computers.

    GOAL DIRECTED PROGRAMMING:
    ==================================================
    How do humans solve problems for the first time? One way is to remember a time when we were posed with a similar

    problem. For example, there are many different types of combination locks. There are the locks for school lockers, and

    bike locks and padlocks. Each of these locks is a little different from the others, but if we know how to use a bike lock,

    we can usually figure out how to open a school locker. By thinking about how we open one type of combination lock we

    can figure out how to use lots of other kinds of combination locks.
    A machine programmed to use past experiences to find solutions to new problems is said to use goal-directed programming. In order to draw on its past experiences a computer needs to be able to apply what it has already learned to new problems. One way of doing this is through a process called "backward chaining." 1 This process was designed at the Stanford Research Institute by Nils J. Nilsson and Richard E. Fikes for their machine, STRIPS (Stanford Research Institute Problem Solver). In backward chaining, the machine starts at its primary goal and moves backward, breaking that goal into subgoals and those subgoals into further subgoals until it reaches its initial state. Going back to our example of a machine instructed to move through a closed, locked door: the primary goal is to move through the door. Moving backwards from having gone through the door (the primary goal), the machine establishes the subgoal of opening the closed door. A subgoal of opening the door could perhaps be retrieving the key to unlock the door. The machine then moves backwards one more step to find itself at its initial state (on the other side of a closed, locked door). Now that the machine has created a plan for achieving its goal (by breaking it into smaller problems, which it already knows how to solve), it can begin to execute that plan.
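
    A rough Python sketch of backward chaining for the door example (the goal names and the rule table are invented for illustration; STRIPS itself used a much richer representation of actions and states):

    # Backward chaining: start from the primary goal and keep replacing goals with the
    # subgoals that achieve them, stopping when a goal is already true in the initial
    # state. The order in which goals finish expanding is the plan.

    RULES = {                                  # goal -> subgoals that would achieve it
        "be_through_door": ["door_open"],
        "door_open":       ["door_unlocked"],
        "door_unlocked":   ["have_key"],
    }
    INITIAL_STATE = {"have_key"}               # what is already true at the start

    def backward_chain(goal, state, plan=None):
        plan = [] if plan is None else plan
        if goal in state:                      # already true: nothing to do
            return plan
        for subgoal in RULES.get(goal, []):    # first achieve the subgoals...
            backward_chain(subgoal, state, plan)
        plan.append(goal)                      # ...then this goal itself
        return plan

    print(backward_chain("be_through_door", INITIAL_STATE))
    # prints ['door_unlocked', 'door_open', 'be_through_door']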
    Goal-directed programming also has its limitations. Imagine trying to open a door by pulling on the handle. If the door doesn't open, the next thing we usually try is pushing instead of pulling. Then the next time we go through that door, we remember that we have to push to open the door (although sometimes we still forget). Machines using goal-directed programming can't learn from their own mistakes in this way. If the door didn't open when pushed, a computer that was programmed to use goal-directed planning wouldn't know to try pulling.

    SELF LEARNING:
    ---------------------------------------------------------------------
    The beginning of self-teaching programs was marked in the early 1950s, when Arthur Samuel set upon the task of programming "a digital computer to behave in a way which, if done by human beings or animals, would be described as involving the process of learning." 1 The program he chose to work with played checkers. While the idea of a program that plays checkers doesn't seem very impressive, what is impressive is what Samuel's program accomplished. Samuel set up a computer to run two copies of the program that played against each other. By playing against itself, the program became increasingly better. First it got good enough to beat Samuel, and then it went on to beat champion-level checkers players. Samuel had managed to create a computer that in the end became capable of doing more than Samuel could have taught it himself.
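
    A very stripped-down Python sketch of the self-play idea. The game below (first player to say 21 wins) and the crude reinforcement rule are only stand-ins for checkers and for Samuel's actual evaluation-function tuning, but the shape of the loop is the same: two copies of the program play each other, and moves from won games get rewarded.

    import random
    from collections import defaultdict

    values = defaultdict(float)            # (total_before_move, move) -> learned score

    def choose(total, explore=0.1):
        moves = [m for m in (1, 2, 3) if total + m <= 21]
        if random.random() < explore:      # occasionally try something new
            return random.choice(moves)
        return max(moves, key=lambda m: values[(total, m)])

    def play_one_game():
        history = {0: [], 1: []}           # moves taken by each copy of the program
        total, player = 0, 0
        while True:
            move = choose(total)
            history[player].append((total, move))
            total += move
            if total == 21:                # this player said 21 and wins
                return player, history
            player = 1 - player

    for _ in range(20000):                 # the program plays itself many times
        winner, history = play_one_game()
        for total, move in history[winner]:
            values[(total, move)] += 0.1   # reinforce the winner's choices
        for total, move in history[1 - winner]:
            values[(total, move)] -= 0.1   # penalise the loser's choices

    # After training, the learned policy tends to hand the opponent totals like 17,
    # 13, 9 -- the winning strategy, which nobody programmed in explicitly.
    print(choose(16, explore=0.0))         # usually prints 1 (moving the total to 17)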

    Bye!
     
  8. Rick Valued Senior Member

    Messages:
    3,336
    AI in Medicine: Evolution and Conclusion.

    AI IN MEDICINE: EVOLUTION AND CONCLUSION.
    =============================================================
    According to Dr. Kohane, it was very difficult to determine when AIM reached each stage of professionalization. Because this area of study was so popular when it was first introduced, it quickly advanced from preemption to professional autonomy in only a few years' time. However, most AIM scientists agreed that the professionalization began in 1970.
    William B. Schwartz, M.D., was generally credited as one of the earliest pioneers in AIM. After he received his medical degree, he became active in both medical and computer science research. In 1970, he published an influential paper in the New England Journal of Medicine that included the following paragraph:

    Rapid advances in the information sciences, coupled with the political commitment to broad extensions of health care,

    promise to bring about basic changes in the structure of medical practice. Computing science will probably exert its

    major effects by augmenting and, in some cases, largely replacing the intellectual functions of the physician. As the

    "intellectual" use of the computer influences in a fundamental fashion the problems of both physician manpower and

    quality of medical care, it will also inevitably exact important social costs – psychological, organizational, legal,

    economical and technical. [Schwartz, 1970]

    In the article, he stated that the computer could be a useful clerk in filing medical records, but that its real power was its ability to act as a decision maker and perform intellectual functions. His article created much enthusiasm among the AI community. They began to think that computer science would eventually revolutionize the practice of healthcare. As a result, many scientists were attracted to study the applications of computer science in medicine. Since intelligent computers were able to store and process vast amounts of knowledge, the hope was that they would become perfect 'doctors in a box,' assisting or surpassing clinicians with tasks like diagnosis.

    With such motivations, a small but talented community of computer scientists and healthcare professionals set about

    shaping a research program for a new discipline called Artificial Intelligence in Medicine (AIM). The 1970’s could

    therefore be regarded as the preemption period of AIM. These researchers had a bold vision of the way AIM would

    revolutionize medicine, and pushed forward the frontiers of technology.

    Their work, understandably, created uneasiness among physicians. The phrase "artificial intelligence," especially the word "intelligence," created confusion and an impression of unnecessary arrogance. Were they implying that doctors were not intelligent enough? Were they creating computers to surpass and replace physicians? These controversial issues, among others, will be discussed in a later section.

    In his famous paper, Schwartz did not mention the technical details on how the promise would be achieved. He merely

    expressed his inspiration. The earliest successful AIM systems were developed by other scientists, such as Edward

    Shortliffe [Szolovits, 1982, p.58]. In the 1970’s, the first decade of AIM, most research systems were developed to assist

    clinicians in the process of diagnosis. Reviewing this new field, Clancey and Shortliffe provided the following definition in 1984:

    'Medical artificial intelligence is primarily concerned with the construction of AI programs that perform diagnosis and

    make therapy recommendations. Unlike medical applications based on other programming methods, such as purely

    statistical and probabilistic methods, medical AI programs are based on symbolic models of disease entities and their

    relationship to patient factors and clinical manifestations.' [Clancey, 1984]

    Shortliffe was a physician in the early 1970’s. After receiving his medical doctor degree, he went to Stanford to obtain his

    Ph.D. in computer science, focusing on the area of AIM [Stanford University, 1998]. In 1975, he developed the MYCIN

    program as part of his doctorate thesis. It was a computer system for diagnosing bacterial infections, helping physicians

    to prescribe disease-specific drugs. It informed itself about particular cases by requesting information from the physician

    about a patient’s symptoms, general condition, history, and laboratory-test results. At each point, the question it asked

    was determined by its current hypothesis and the answers to all previous questions. Thus, the questions started as though

    taken from a checklist, but the questions then varied as evidence built. The essence of MYCIN was its embedded rules of

    reasoning. It was made up of many "if… then…" statements. An example could be "if the bacteria is a primary

    bacteremia, and the suspected entry point is the gastrointestinal tract, and the site of the culture is one of the sterile sites,

    then there is evidence that the bacteria is bacteroides," and the succeeding questions would be directed at finding out

    whether bacteroides was the suspected disease agent, and what kind of bacteroides it was. As a result, MYCIN and similar

    programs were referred to as "rule-based systems." Since the if-then statements were designed to emulate the thinking of

    experts, they were also called "expert systems" [Winston, 1992, pp. 130-131].
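
    As a rough sketch of what such a rule-based system looks like in code (the rules and certainty numbers below are invented for illustration and are not actual MYCIN rules):

    # A toy "if ... then ..." rule engine in the MYCIN spirit: each rule adds evidence
    # for a conclusion when all of its conditions are among the known findings.
    # Real MYCIN chained hundreds of rules backward from hypotheses and combined
    # certainty factors; this sketch only makes a single forward pass.

    RULES = [
        {"if": {"primary_bacteremia", "entry_gastrointestinal", "site_sterile"},
         "then": "organism_bacteroides", "certainty": 0.7},
        {"if": {"gram_negative", "rod_shaped"},
         "then": "organism_enterobacteriaceae", "certainty": 0.4},
    ]

    def diagnose(findings):
        evidence = {}
        for rule in RULES:
            if rule["if"] <= findings:                 # all conditions are present
                conclusion = rule["then"]
                evidence[conclusion] = max(evidence.get(conclusion, 0.0),
                                           rule["certainty"])
        return evidence

    patient_findings = {"primary_bacteremia", "entry_gastrointestinal", "site_sterile"}
    print(diagnose(patient_findings))   # prints {'organism_bacteroides': 0.7}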

    MYCIN was tested by human experts in the area of bacterial infections, including physicians. According to Dr. Kohane

    and Dr. Long, the results were very accurate. The success of MYCIN inspired the development of other AIM systems, many

    of which were rule-based expert systems. They included the Causal-Associated Network (CASNET), written by scientists at

    the Rutgers Research Resource on Computers in Biomedicine in 1976 that diagnosed eye diseases, and the INTERNIST

    program, written by Harry Pople and Jack Myers in the early 1980’s that investigated a broad range of problems in

    internal medicine. Later on, MIT scientists such as Peter Szolovits and Steve Pauker developed a computer model that was more complicated than a rule-based system. It was called a frame-based model, in which living things were represented by a frame-like structure in a computer program. They employed this concept in their PRESENT-ILLNESS program in the late 1970s to diagnose kidney diseases.



    Institutionalization
    =====================================================
    After the introduction of MYCIN, the amount of AIM research became very intense. Naturally, the scientists grouped

    themselves together, to facilitate exchange of information and research outcomes. The professionalization of AIM was

    advancing to the stage of institutionalization. Many conferences and symposiums were already established by the early

    1980’s. They included the annual spring symposium of the American Association for Artificial Intelligence (AAAI), an

    organization that was founded in 1979 [AAAI, 1998]. The symposium featured readings of work in AIM, among other fields

    of artificial intelligence. According to Dr. Long, another conference was the Artificial Intelligence in Medicine Workshop,

    held every two years, which attracted scientists from all over the nation. AIM had also earned recognition from the American Association for the Advancement of Science (AAAS) by the late 1970s. At the AAAS annual national meeting in Houston, Texas, in 1979, AIM scientists' work was presented for the first time [Szolovits, 1982, Preface].

    In addition to workshops and conferences, the AIM scientists have also established programs in universities. In 1970, the

    Harvard Medical School and MIT established a joint institution to foster the development of health related education,

    research and service. The new division was called Health Sciences and Technology (HST) [Harvard Medical School, 1998]. Among many programs, HST offered training in medical informatics, a field closely related to AIM. Szolovits, the scientist who developed the frame-based model in the PRESENT-ILLNESS program, became the head of the Master of Medical Informatics program. Students in the division took advantage of the facilities of Harvard, MIT, and Tufts to pursue research in AIM. Stanford also developed a similar program. After a pause between 1976 and 1979 for internal medicine house-staff training, Shortliffe joined the Stanford internal medicine faculty, where he has since directed an active research program in AIM. He has also supervised the formation of Stanford's degree program in medical information sciences, a sub-branch of AIM.

    POLITICAL, SOCIAL, ETHICAL ISSUES:
    ===========================================
    AI in Medicine and the Human Future
    Political, Social, Ethical and Economic Issues of AIM
    Political
    Since the first AIM systems were used in clinical practice, the Food and Drug Administration (FDA) has been involved in approving and licensing software programs for medical applications. What regulations to apply and what guidelines to follow were among the most controversial, yet critical, issues for patients, doctors, and AIM professionals [Coiera, 1997].

    Since current AIM research focused on getting more medical data on-line, concerns about medical-records privacy and security arose immediately. How to guard against information leaks from such on-line medical data stores sparked split opinions within the medical community. Different interest groups might try to influence Federal policy makers toward regulations favoring their interests [Coiera, 1997].

    Social
    Since AIM failed to deliver on its grand promises, few systems had been fully tested and used directly in clinical practice, so there had been minimal social interaction with, and feedback about, AIM. The systems that had been used mostly performed tasks transparent to the patients. For example, programs that checked laboratory results and warned doctors about potentially toxic prescriptions were unknown to the patients. Therefore there had been very little social response from the general public about their perception of AIM systems [Kohane, 1998].

    Ethical
    When the idea of a perfect doctor was first introduced by the AIM community, health professionals expressed mixed opinions about the introduction of expert systems. Some strongly opposed such ideas because, to them, having a machine take full charge of a patient was unethical and wrong. Doctors who were slightly worried about being replaced by the perfect machines also opposed the idea. On the other hand, there were others who would cherish the opportunity to improve the current health care situation. This kind of controversy did not become the mainstream conflict, however, as the focus of AIM had changed in recent years: programs were acting as "assistants" to doctors, not the "doctors" who made perfect decisions. According to Dr. Kohane, about 60-70% of doctors took advantage of such programs. Only about 20% of doctors flatly refused to use AIM programs or to accept any help from them. This was a significant change from the situation during the AI winter period [Kohane, 1998].

    Economic
    During the AI winter, as the enthusiasm for medical AI systems died down, people did not believe that AI systems could do anything or make any contribution to the medical community. Government foundations and venture capitalists who had initially promised investments suddenly withdrew from their original commitments, and there was very little funding available at the time. Research groups had to restrain themselves from certain research topics or change the range of technologies they pursued due to the withdrawal of funding. Even now, the effects of the AI winter have not been totally eliminated; research groups can no longer receive as much financial assistance [Long, 1998].

    Expenditures on information technology for health care had been growing rapidly. The health care industry spent

    approximately $10 billion to $15 billion a year on information technology, and expenditures were expected to grow by 15

    to 20 percent a year for the next several years. Health care organizations were developing electronic medical records

    (EMR) for storing clinical information, upgrading administrative and billing systems to reduce errors and lower

    administrative costs, and installing internal networks for sharing information among affiliated entities. Organizations

    were also beginning to experiment with the use of public networks, such as the Internet, to allow employees and

    physicians to access clinical information from off-site locations and to enable organizations to share information for

    purposes of care, reimbursement, benefits management, and research. Others were using the Internet to disseminate

    information about health plans and research.

    CONCLUSION:
    --------------------------------------
    From the professionalization of AIM, we see that great scientific ideas do not necessarily guarantee popular inventions.

    They must be accompanied by the right technologies, and be solving the right problems.

    These factors perhaps also explain why AI has not been able to create machines that are more intelligent than humans.

    We do not want our race to be replaced by machines. Any research attempting to do so may be solving the wrong

    problem. Instead, we want machines to help us with tasks that are critical and yet that humans are poor at. Therefore, we anticipate that the future of AI will be focused on areas such as large-scale data handling, complicated image recognition,

    etc. – in general, jobs that humans are less proficient at. AI will serve to make machines our tools, not our masters.

    No discussion of AI is complete without referring to Isaac Asimov, who was perhaps the first to make AI fashionable. You can find more info on him in the "Robots: Are we close yet?" thread.

    Thanks for your time.

    bye!
     
  9. kmguru Staff Member

    Messages:
    11,757
    Thank you zion...that is a lot of stuff to digest.

    Just to show you and our readers, what is going on in the real world, here is a solicitation I received yesterday in my email.

    --------------------
    Thank you for your reply. Here are some details on the company and position.

    The company is leading the industry in the design and development of proprietary signal processing neural network technology to assist radiologists by minimizing the possibility of false negative readings during the review of x-ray and other diagnostic images. They are located in Sunnyvale, CA.

    Their product is the first FDA approved Computer Aided Detection (CAD) System for breast imaging, and assists radiologists during their review of mammograms by identifying areas of concern that may warrant a second review.

    Position Summary:

    Design, develop, implement, document, and support algorithms for Computer Aided Detection and Visualization. You will be responsible for working closely with marketing, regulatory, clinical studies, and quality to improve current products as well as work on new product development. This is a management position with a large team of reports, some of which are PhDs.

    Qualifications:

    To be considered for this position you must have a background in image and signal processing, neural network techniques, and/or computer-aided detection algorithms. Proficiency at C programming in a Unix environment is also required. A clinical understanding of breast and lung cancer pathology as well as clinical practices is a plus. Management experience is also
    required.

    If you are still interested, and feel that you are qualified, please call me or reply with a number and time to reach you. Thanks again,
    -------------------------

    Can anyone tell me what is wrong with this picture? Will they get their man (or woman) ?
     
  10. kmguru Staff Member

    Messages:
    11,757
    Zion, Did you see this one in USATODAY archive?

    USA TODAY
    June 20, 2001, pp. 1A-2A

    Copyright (c) GANNETT NEWS SERVICE, INC. and/or USA TODAY.

    ARTIFICIAL INTELLIGENCE ISN'T JUST A MOVIE
    by Kevin Maney
    USA Today

    Machines, Software That 'Think' No Longer Folly of Science Fiction

    Steven Spielberg's forthcoming A.I.: ARTIFICIAL INTELLIGENCE is only a movie. Or is it?

    The movie, set in the near future, is about a humanlike robot boy who runs on artificial-intelligence software--a computer program that doesn't just follow instructions, as today's software does, but can think and learn on its own. In some ways, the character is a fantasy. It's no closer to reality than the alien in Spielberg's earlier E.T. THE EXTRA-TERRESTRIAL.

    Yet artificial intelligence is very real right now. It's far from recreating a human brain, with all its power, emotions and flexibility, though that might be possible in as little as 30 years. Today's AI can recreate slices of what humans do, in software that can indeed make decisions.

    In recent years, this so-called narrow AI has made its way into everyday life. A jet lands in fog because of relatively simple AI programmed into its computers. The expertise written into the program looks at dozens of readings from the jet's instruments and decides, much as a pilot would, how to adjust the throttle, flaps and other controls.

    Lately, AI has increasingly turned up in technology announcements. For example:

    - Charles Schwab, the discount brokerage, recently said it has added AI to its Web site to help customers find information more easily.

    - AT&T Labs is working on AI that can make robots play soccer and computer networks more efficient.

    - A computer program called Aaron, unveiled last month, has learned to make museum-quality original paintings. "It's a harbinger of what's to come," says technology pioneer Ray Kurzweil, who has licensed Aaron and will sell it to PC users. "It's another step in the blurring of human and machine intelligence."

    The commercial successes help fuel laboratory research that's pushing the fringes of AI ever closer to the equivalent of human intelligence. Software is getting better at cleverly breaking down the complex decision-making processes that go into even the simplest acts, such as recognizing a face. Hardware is marching toward brainlike capacity.

    The fastest supercomputer, the IBM-built ASCI White at Lawrence Livermore National Laboratory in California, has about 1/1000th the computational power of our brains. IBM is building a new one, called Blue Jean, that will match the raw calculations-per-second computing power of a brain, says Paul Horn, IBM's director of research. Blue Jean will be ready in four years.

    "Like myself, a lot of AI researchers are driven by the pursuit of someday understanding intelligence deeply enough to create intelligences," says Eric Horvitz, who was a leading scientist in AI while at Stanford University and is now at Microsoft Research in Redmond, Wash. "Many of us believe we really are on a mission."

    Horvitz and others also believe this is breakthrough time for AI, when the mission spins into a wide variety of technologies.

    As an area of research, AI has been around since it was first identified and given its name during a conference at Dartmouth University in 1956. It hit a peak of excitement and media attention in the mid-1980s, when AI was overhyped as a technology that was about to change the world. One fervent branch at the time was expert systems--building a computer and software that could recreate the knowledge of an expert. A brewing company, for instance, could capture a master brewer in software, possibly making human master brewers less necessary.

    The exuberance was hindered by a couple of snags that led to disenchantment with AI. For one, computers of the time weren't powerful enough to even come close to mimicking a human's processing power. Two, AI was trying to do too much. Creating a complete intelligence was too hard--and still is.

    KNOWING ONE THING WELL

    These days, that's less of a barrier. Computers have gotten exponentially more powerful every year. Now, a PC is capable of running some serious AI software. And AI researchers have learned to aim at pieces of human capacity, building software that knows it can't know everything but can know one thing really well. That's how IBM's Deep Blue beat champion Garry Kasparov in chess. Together, the developments have "led to a blossoming of real-world applications," Horvitz says.

    Those applications are taking on all forms.

    In Littleton, Colo., a company called Continental Divide Robotics (CDR) is a result of work done at two AI labs--one at the Massachusetts Institute of Technology and the other at the Colorado School of Mines. CDR is about to offer a system that can locate any person or object anywhere in the world and notify the user if that person or object breaks out of a certain set of rules.

    One of the first uses is for tracking parolees. The parolee would wear a pager-size device that uses Global Positioning Satellite technology to know where it is. Over wireless networks, the pager constantly notifies CDR's system about its location. If the parolee leaves a certain area or gets near a certain house, the CDR software will make decisions about the severity of the violation and whom to contact. That makes it more sophisticated than the electronic anklets now used on some parolees.

    CDR's technology sounds simple, but it can involve a number of fuzzy choices. If a child being tracked goes just outside his limits, the system might decide to wait to see whether he comes right back in. And it might decide whether to send you a light caution or a major warning--or to call the police. "We are literally creating software that is reactive and proactive," says Terry Sandrin, CDR's founder. "It has the ability to make decisions."

    At AT&T Labs, scientist Peter Stone spends a lot of his time preparing for Robocup, an annual robotic soccer challenge coming up in August. This year, it will be in Seattle and will pit AI research labs against one another. Rolling robots the size of pint milk cartons are armed with sensors and AI software. Like real soccer players, each of the 11 robots on a team has to know its job but also must react to situations and learn about the other team. At this point, the robots can pass the ball a little but still mostly act on their own. Their capabilities are improving quickly.

    It seems frivolous, but getting AI-programmed robots to work as a team to achieve something would have real-world implications. One would be making the Internet more efficient. As Stone explains it, the Net is made up of thousands of computerized routers all moving data around but acting independently. If they could act as a team, they might figure out better ways to transmit the data, avoiding clogged areas.

    Aaron takes AI to the arts, which can be a little harder to believe. But Aaron creates original work on a computer screen--quite sophisticated work. Artist Harold Cohen taught the software his style over 30 years, feeding in little by little the ways he decides color, spacing, angles and every other aspect of painting.

    After all that time, the program is finally ready, and computers are powerful enough to make it work. While still in development, it won fans such as computing legend Gordon Bell. Now, Kurzweil has licensed it and plans to sell it for $19.95. Load it on a PC and let the artist loose.

    "There have been various experiments with having machines be an artist, but nothing of this depth," Kurzweil says. "Cohen has created a system that has a particular style but quite a bit of diversity--a style you'd expect of a human artist."

    Other uses of AI range from the amazing to the mundane.

    COMPUTER AS COMPANION

    At Microsoft, Horvitz is trying to make your computer more of a companion than an inanimate tool. His software lets the computer learn about you. It learns who is important to you and who's not. It learns how to tell whether you're busy--maybe by how much you type, or by using a video camera to see whether you're staring at the computer screen or putting golf balls across the carpet.

    It can combine that information to help manage your workload. If an e-mail comes in from someone very important, the computer will always put it through. If it's from someone not so important and you're busy, it can save the e-mail for later.

    The software can do that with all kinds of information, including phone calls coming in and going out of your office. The thinking at Microsoft is that these capabilities might someday be a part of every computer's operating system.

    Schwab's AI implementation seems less grand but no less helpful. It's using AI technology from iPhrase that can comprehend a typed sentence. More than just looking for key words, it can figure out what you really mean, even if you make spelling mistakes. So you could type, "Which of these has the most revenue?" and get the answer you were looking for. Based on the page you have up, it would know what you mean by "these." On Schwab's Web site, www.schwab.com, this is supposed to help users find information.

    Beyond all the near-term uses of AI, there's the nearly unfathomable stuff.

    The trends that brought AI from the failures of the mid-1980s to breakthrough success 15 years later will continue. Computers will get more powerful. Software will get more clever. AI will creep closer toward human capabilities.

    If you want a glimpse of where this is heading, look inside MIT's AI lab. Among the dozens of projects there is Cog. The project is trying to give a robot humanlike behaviors, one piece at a time. One part of Cog research is focused on eye movement and face detection. Another is to get Cog to reach out and grab something it sees. Another involves hearing a rhythm and learning to repeat it on drums.

    A BRAIN LIKE A CAT'S

    In Belgium, Starlab is attempting to build an artificial brain that can run a life-size cat. It will have about 75 million artificial neurons, Web site Artificialbrains.com reports. It will be able to walk and play with a ball. It's supposed to be finished in 2002.

    Labs all over the globe are working on advanced, brainlike AI. That includes labs at Carnegie Mellon University, IBM and Honda in Japan. "We're getting a better understanding of human intelligence," Kurzweil says. "We're reverse-engineering the brain. We're a lot further along than people think."

    But can AI actually get close to human capability? Most scientists believe it's only a matter of time. Kurzweil says it could come as early as 2020. IBM's Horn says it's more like 2040 or 2050. AT&T's Stone says his goal is to build a robotic soccer team that can challenge a professional human soccer team by 2050. He's serious.

    In many ways, an artificial brain would be better than a human brain. A human brain learns slowly. Becoming fluent in French can take years of study. But once one artificial brain learns to speak French, the French-speaking software code could be copied and instantly downloaded into any other artificial brain. A robot could learn French in seconds.

    A tougher question is whether artificial intelligence could have emotions. No one knows.

    And a frightening question is whether AI robots could get smarter than humans and turn the tables on us. Kurzweil, technologist Bill Joy and others have been saying that's possible. Horn isn't so sure. Though raw computing power might surpass the brain, he says, "that doesn't mean it will have any of the characteristics of a human being, because the software isn't there to do that."

    Horvitz has a brighter outlook, which at least makes the AI discussion more palatable. He says humans are always getting better at guiding and managing computers, so we'll stay in control. "Most of us (in AI) believe this will make the world a better place," he says. "A lot of goodness will come of it."

    * * *

    HOW AI COULD WORK

    At Microsoft Research, scientist Eric Horvitz has been working on artificial-intelligence software that would let your PC help manage your workload.

    The experimental software can learn about what you're doing at any given moment and make decisions about how to give you incoming information or messages. How it does that:

    - The AI program scans the sender and text of all incoming e-mail and gives each one a score, from high priority to low. An e-mail from someone new concerning lunch next week would get a low score. An e-mail from your boss containing words such as "due today" and "fired" would get a high score.

    - It would track your keyboard and mouse use, learning that how much you're typing could mean you're busy on deadline, so you don't want to get any incoming messages.

    - It watches your calendar and contacts. If you're in a meeting across town with a client, your PC would forward high priority messages to your cellphone.

    - A video camera on the PC would track your movements. If you're staring at the screen, it might mean you're thinking and shouldn't be disturbed.

    - If you haven't moved in a long time, it might deduce that you're sleeping.

    - Audio sensors would know whether you're talking on the phone or whether several people are in the room talking.

    - The program could build a database about what e-mail you read and respond to and what you delete, learning what's important to you.

    Using all that information, the AI program would screen incoming messages and make decisions about which ones to send you at what times.--Kevin Maney, USA Today

    * * *

    HISTORY OF AI

    1950: Alan Turing publishes, "Computing Machinery and Intelligence."

    1956: John McCarthy coins the term "Artificial Intelligence" at a Dartmouth computer conference.

    1956: Demonstration of the first running AI program at Carnegie Mellon University.

    1958: John McCarthy invents the Lisp language, an AI programming language, at Massachusetts Institute of Technology (MIT).

    1964: Danny Bobrow shows that computers can understand natural language enough to solve algebra word programs (MIT).

    1965: Joseph Weizenbaum builds ELIZA, an interactive program that carries on a dialogue in English on any topic (MIT). (I have this program on a 5.25" floppy somewhere - KMGuru)

    1969: Shakey, a robot, combines locomotion, perception and problem solving (Stanford Research Institute).

    1979: The first computer-controlled autonomous vehicle, the Stanford Cart, is built.

    1983: Danny Hillis co-founds Thinking Machines, the first company to produce massively parallel computers.

    1985: The drawing program, Aaron, created by Harold Cohen, is demonstrated at AI conference.

    1990s: Major advances in all areas of AI. Significant demonstrations in machine learning, intelligent tutoring, case-based reasoning, multi-agent planning, scheduling, uncertain reasoning, data mining, natural language understanding and translation, vision, virtual reality and games.

    1997: IBM computer Deep Blue beats world champion Garry Kasparov in chess match.

    Late 1990s: Web crawlers and other AI-based information-extraction programs become Web essentials.

    2000: Interactive robot pets become commercially available. MIT displays Kismet, a robot with a face that expresses emotions. Carnegie Mellon robot Nomad explores remote regions of Antarctica and locates meteorites.
     
  11. Counterbalance Registered Senior Member

    Messages:
    373
    Thanks Zion and kmguru...

    The prospect of having "doctors-in-a-box" makes one wonder why the health insurance industry isn't already neck-deep into AI research and development. (Or maybe they are in deep and I'm just not aware of it.)

    Lots of interesting bits, guys. Appreciate you taking the time to share.

    CB
     
  12. ImaHamster2 Registered Senior Member

    Messages:
    220
    CB, the medical community is neck deep into AI.

    One of the first successful expert systems was MYCIN, used to diagnose bacterial infections. It worked well enough to aid doctors but not well enough to replace them. (Nor was MYCIN intended to replace doctors.) There have been many follow-on systems.

    Systems to aid drug discovery are in use.

    Systems to automatically analyze samples and screen for abnormal cells are in development.

    Would guess there are hundreds of systems in use and thousands in development.
     
    Last edited: Apr 22, 2002
  13. kmguru Staff Member

    Messages:
    11,757
    News for you CB - regarding the medical insurance companies: thinking out of the box is not their strong point. They are run mostly by bean counters and MBAs. Science is far from their minds. Like from Earth to Moon...dude...
     
  14. Counterbalance Registered Senior Member

    Messages:
    373
    km....duuude... that's exactly what I'm thinking. They're so caught up with cost efficiency, I'm a bit surprised they haven't led the way in AIM research. If a "doctor-in-a-box" could correctly diagnose, cut down on any number of physicians' and specialists' fees and unnecessary tests, and reduce prescription claims (how many doctors don't know diddly about the meds they prescribe and send their patients back and forth to the pharmacy trying to get the one medicine--or correct dosage strength--that will be effective?)--then the insurance companies have another way to regulate how much they have to pay out.

    You are correct. They're bean counters. They are so into counting their beans I'd think they might have jumped on the AI bandwagon by now, because it'd be just one more way to help them run the show--and they do intend to run the show for as long as they can. They're already trying to override human doctors' decisions wherever the law allows.

    However, doctors are not infallible gods and they don't always make the best choices in treatments and therapies. Ex: A doctor routinely prescribes Celebrex because he's on friendly terms with the pharmaceutical rep who visits his office and leaves the doctor piles of samples (and other incentives). The doctor could prescribe a less expensive anti-inflammatory like Vioxx or even prescription-strength ibuprofen. A "doctor-in-a-box" would have prescribed one of these meds for many types of patients based on the patient's actual needs.

    I see potential here, but could be that a serious or committed investment in AIM research at this point doesn't make the bean count come out in a satisfactory way for the insurance industry.

    CB
     
  15. Counterbalance Registered Senior Member

    Messages:
    373
    Right, hamster.

    I am aware. Read all the posts here, and have read about it elsewhere. Specifically thinking of Insurance companies rather than the medical community in general.

    thx,

    CB
     
  16. Rick Valued Senior Member

    Messages:
    3,336
    Hi,

    True convergence lies with the past-learner approach. If our expert systems are programmed with that notion, as in the case of neural networks, we can surely reach a stage where androids will be a common super-species.
    This approach is the true and fast path to golden convergence.
    There has been a lot of research on this subject, but the magnitude of success is small. This is primarily because half the people are using wholly or partly wrong approaches. Most of the people who design AI are running towards CYC's approach, and that is sad, because it hinders some good and true AI approaches.


    bye!
     
    Last edited: Apr 23, 2002
  17. Rick Valued Senior Member

    Messages:
    3,336
    PS:
    EXCELLENT POST KM. SORRY I WAS LATE IN LOOKING.

    bye!
     