technological telepathy

Discussion in 'Intelligence & Machines' started by zonabi, Mar 20, 2004.

Thread Status:
Not open for further replies.
  1. zonabi free thinker Registered Senior Member

    Messages:
    420
    hi, one quick question to boggle everyone's mind:

    could an advancement in genetics (and genetic engineering) intertwining with technology perhaps result in some form of telepathy?

    let me expand:

    we have the internet on our computers, one very great advancement in communication and data transmission/retrieval.

    we have biological engineering on the rise, another very great advancement in mankind's struggle to get a head start on the probabilities of life and death.

    now, can you guys see a point where the two cross? this is depicted in many films as genetically enhanced humans, even some mechanical robotics to enhance physical abilities.

    i want to stay closely within the field of the internet and bio-engineering.

    what if we were able to build into a human being some form of "internet" by using both nanotechnology and bio-engineering?
    if there were a nanocomputer inside a human's brain, and it had internet access, would telepathic communication be possible?

    i mean, our instant messaging systems on the internet are instantaneous in nature, and require no physical connection between the two people (besides the computers' cables and such...)

    so does wireless technology + nanotechnology + bioengineering = telepathy?

    or could it? thoughts, please?
     
    Last edited: Mar 20, 2004
  2. eburacum45 Valued Senior Member

    Messages:
    1,297
  3. Persol I am the great and mighty Zo. Registered Senior Member

    Messages:
    5,946
    Subvocal communication (I think by MIT) could do most of this. It allows voice recognition software to recognize words that are not quite said aloud... like when some people talk in their heads and slightly move their lips.
     
  4. Fraggle Rocker Staff Member

    Messages:
    24,690
    I'm not sure it will even require subvocalization -- although I agree that will undoubtedly be the first step.

    Consider that most of the time when we think, we actually "hear" language in our head. This is not a fanciful hypothesis. We really do hear specific words, in a specific language, in a specific style, right down to the cadence and intonation. I don't doubt that there are synapses firing in our speech center or something that abuts it that are nearly identical to the ones that fire when we speak and/or hear speech.

    Farscape put genetically engineered bacteria right in there with the brain cells, picking up on the firing of the synapses. We'll probably do it with nanotech machines, but it will be essentially the same thing.

    It's interesting that you brought up the issue of people barely moving their lips. I recently read (and I have no idea as to the validity of the authority) about the history of writing and reading. When writing was new, people simply read it aloud. That was fine, because only a few people could read and everyone around them probably wanted to know what the writing said. But eventually it became reasonable not to share what one was reading with everyone within earshot. It was said that as recently as the Greek and Roman era, people could not read without silently moving their lips to form the words.

    That certainly seems to relate to the speech center, and it probably started out as subvocalization. Reading without moving our lips is a particular skill, developed more recently. In fact, I remember in my youth, half a century ago, when there were still small pockets of semi-literacy in the U.S., people used to joke about slow learners, saying, "I can tell he's not really reading that book we were assigned. His lips aren't moving." Does anybody say that anymore?

    A bit off topic, but related.
     
  5. Persol I am the great and mighty Zo. Registered Senior Member

    Messages:
    5,946
    The question is, how do you reliably get the 'thought' of sound translated into electrical signals? We have a hard time reading simple motor signals. I wonder how far down the road it will be until we can read 'pre-speech'.
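
    To make the difficulty concrete, here is a toy Python sketch of the kind of pipeline such decoding usually implies: reduce a raw recording to band-power features, then match them against stored word templates. Every detail here (sampling rate, bands, labels) is invented for illustration; real neural decoding is far harder than this.

        # Toy decoding pipeline: raw signal -> band-power features -> nearest stored word.
        import numpy as np

        def band_power(signal, fs, lo, hi):
            """Average spectral power of `signal` within the [lo, hi] Hz band."""
            spectrum = np.abs(np.fft.rfft(signal)) ** 2
            freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
            mask = (freqs >= lo) & (freqs <= hi)
            return spectrum[mask].mean()

        def features(signal, fs=1000):
            """Summarize a recording as power in a few conventional bands."""
            bands = [(1, 4), (4, 8), (8, 13), (13, 30), (30, 100)]
            return np.array([band_power(signal, fs, lo, hi) for lo, hi in bands])

        class NearestTemplateDecoder:
            """Match a new recording to the closest stored word pattern."""
            def __init__(self):
                self.templates = {}  # word -> mean feature vector

            def train(self, word, recordings):
                self.templates[word] = np.mean([features(r) for r in recordings], axis=0)

            def decode(self, recording):
                f = features(recording)
                return min(self.templates,
                           key=lambda w: np.linalg.norm(self.templates[w] - f))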
     
  6. zonabi free thinker Registered Senior Member

    Messages:
    420
    well, i can see your points, but my idea is more like an instant messaging program inside your head. say nanotechnology can plant a little nanocomputer (provided it's safe!) inside a human, probably in or near the brain -
    this nanocomputer has some programs on it, but the one i'm touching on is the communications program, similar to email, or AIM, or Yahoo! (eh, but i don't really like yahoo)
    so, somehow the user can open the program and "type" a message to be sent to another user who is logged on? or perhaps he can leave a msg if the contact isn't available? (we have to take these precautions, because what if you try to contact someone while they're in a meeting or driving down the highway....)
    and so i see it would be a bit better to rely on type rather than 'voice' - because sometimes you are in a situation where type is more effective. say the receiver was in a loud bar when someone sent a voice msg; he probably wouldn't be able to hear it, but if it were a typed msg, he could read it in the corner of his eye without a problem.

    this brings me to another idea:
    nano-contact-lenses
    this is where you see the programs in the nanochip, menus, and messages.
    how could users select items and programs, and type, with this user interface system? any ideas?
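
    As a sketch of the presence-aware messaging being described here, in Python, assuming a made-up Messenger service with invented status values:

        # Toy model of head-to-head instant messaging with presence and offline delivery.
        from dataclasses import dataclass, field

        @dataclass
        class User:
            name: str
            status: str = "available"  # or "driving", "in a meeting", ...
            inbox: list = field(default_factory=list)

        class Messenger:
            def __init__(self):
                self.users = {}

            def register(self, user):
                self.users[user.name] = user

            def send(self, sender, recipient, text):
                target = self.users[recipient]
                if target.status == "available":
                    target.inbox.append((sender, text))  # delivered immediately
                    return "delivered"
                target.inbox.append((sender, "[left msg] " + text))  # queued for later
                return "queued: " + recipient + " is " + target.status

        net = Messenger()
        net.register(User("alice"))
        net.register(User("bob", status="driving"))
        print(net.send("alice", "bob", "meet at 8?"))  # -> queued: bob is driving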
     
  7. Fraggle Rocker Staff Member

    Messages:
    24,690
    Since we're assuming that nanomachines will be able to find and sort out the functionality of all the cells in the speech center, there's no reason they couldn't do the same thing with hearing in general. They could simply lower the perceived volume of the ambient noise so that you hear the incoming voice signal as conversational-level speech.
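
    The "turn the world down, mix the message in" idea maps onto ordinary audio processing; a minimal sketch, assuming both signals arrive as floating-point sample arrays:

        # Duck the ambient track and overlay an incoming voice at speech level.
        import numpy as np

        def mix(ambient, incoming, ambient_gain=0.1):
            """Attenuate ambient sound and add the incoming voice signal."""
            n = max(len(ambient), len(incoming))
            out = np.zeros(n)
            out[:len(ambient)] += ambient_gain * ambient  # quieted background
            out[:len(incoming)] += incoming               # message at full level
            return np.clip(out, -1.0, 1.0)                # keep samples in valid range

        bar_noise = np.sin(np.linspace(0, 500, 8000))      # stand-in for the din
        message = 0.5 * np.sin(np.linspace(0, 200, 4000))  # stand-in for the voice
        blended = mix(bar_noise, message)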

    Come to think of it, it would be nice to be able to lower the volume on a lot of ambient stuff whether you're getting incoming t-messages or not. They could lower or completely silence the din in some of these popular hangouts and replace it with your favorite music or the sounds of the beach or forest. Although I guess you younger folks like it and that's why they design the acoustics that way. They could block out traffic and construction and the radio of the guy in the next cubicle and the neighbor's dog but put through sounds you need to hear like your baby crying or your spouse trying to talk to you.
    They could do the same thing with optics. You could have your own VR program running whenever you want, interruptible only by things and people that you need to see.
    In the last sci-fi story I read on this subject, the devices were programmed to respond to verbal commands in your head. It was subvocalization in the story, but it could just as well be words you think once the nanomachines are in your brain cells. You have a personal code word or phrase that tells the technology that the following words are instructions for it. The code can be some phrase that you would never utter in real life, or it could be a nonsense word. This way the machines can be programmed to respond to native-language commands: Suppress the sound of my mother-in-law's voice. Give me a South Seas island beach scene with sound effects until the subway reaches my stop.
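
    The code-word scheme is essentially wake-word parsing; a minimal sketch, with the trigger word and the command table invented for illustration:

        # Words after a private trigger are commands; everything else is ignored.
        TRIGGER = "xanadu"  # a nonsense word you would never think by accident

        def handle(inner_speech, commands):
            """Scan decoded inner speech and run any command after the trigger."""
            words = inner_speech.lower().split()
            if TRIGGER in words:
                command = " ".join(words[words.index(TRIGGER) + 1:])
                action = commands.get(command)
                if action:
                    action()

        commands = {
            "suppress that voice": lambda: print("voice muted"),
            "beach scene": lambda: print("South Seas island, with sound effects"),
        }
        handle("ugh xanadu beach scene", commands)  # -> South Seas island...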

    Eventually there's no limit on how far the nanobots can go to distort or overwrite your perception of your environment. They could do the same thing with your olfactory senses and the nerve endings in your skin. VR vacations, VR participatory movies and videogames. Although I'm sure the same thing will happen to this technology as with videotape: The first and biggest market will be porno, virtual prostitution. It's been quite a while now since the dollar size of the porno market surpassed that of legitimate films.
     
  8. eburacum45 Valued Senior Member

    Messages:
    1,297
    the Jewellery Virtual Interface

    For those people who choose not to have medical implants for neural interfacing, I have imagined an external interface system based on facial piercings: an array of microcomputers in earrings, nose rings, and lip rings detects subtle movements in the face and mouth, allowing subvocal control of a local network. The earrings can project sounds into the ears, while a series of projectors fitted into the eyebrow piercings project images directly into the eye and create a virtual keyboard and monitor system.

    This keyboard is operated by movements of the hands and fingers, movements which are detected by finger rings and bracelets; all these items of electronic jewellery can be available in a variety of different styles, from art deco to ethnic to gothic...
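
    One way the ring-and-bracelet typing could work is gesture classification: snap each noisy motion reading to the nearest stored gesture and emit its keystroke. A toy sketch, with the gesture templates and key assignments invented:

        # Map (flex, roll) readings from a finger ring to virtual keystrokes.
        import numpy as np

        GESTURES = {  # gesture template -> key; values are illustrative
            (0, 1): "a",
            (1, 0): "e",
            (1, 1): "t",
        }

        def classify(motion):
            """Snap a noisy (flex, roll) reading to the nearest gesture template."""
            templates = list(GESTURES)
            dists = [np.linalg.norm(np.array(t) - motion) for t in templates]
            return GESTURES[templates[int(np.argmin(dists))]]

        readings = [(0.1, 0.9), (0.9, 0.1), (0.8, 1.1)]
        typed = "".join(classify(np.array(m)) for m in readings)
        print(typed)  # -> "aet"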

    --------------------
    SF worldbuilding at
    www.orionsarm.com/main.html
     
    Last edited: Mar 22, 2004
  9. Stryder Keeper of "good" ideas. Valued Senior Member

    Messages:
    13,105
    Well, there are two locations on the human brain that would most likely be interfaced with: one being the frontal lobe, for its higher processing capacity, and the other being the temporal lobes, since they deal with both "hearing" and speech synthesis (namely, developing vocal patterns that mimic what is heard).

    It is possible to interface with such regions using electrodes to measure electromagnetic fluctuations, or through the use of other equipment that uses frequency manipulation.

    The problem is then having the right equipment to do pattern matching on the signals that are received, so that when someone says "chair" the pattern is matched with what a "chair" is. There is even the case of showing a "door" and calling it "chair", used as a method to generate a pattern for contradictory data [extreme data] (namely, the subject knows what it isn't and therefore develops a resistance pattern to being told what it is).

    Such equipment has been in development for some time and is usually referred to as a neural network, in the sense that to deal with the dynamic data from those inputs for pattern matching, the system has to develop an artificial neural structure between processing units [nodes].
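
    A minimal sketch of that kind of pattern matching: a one-layer network trained by gradient descent to map (fabricated) signal feature vectors to labels like "chair" and "door". Purely illustrative; nothing here reflects real brain data.

        # Tiny neural network: signal features in, object label out.
        import numpy as np

        rng = np.random.default_rng(0)
        LABELS = ["chair", "door"]

        # Fake training data: 20 noisy 4-feature patterns per label.
        X = np.vstack([rng.normal(loc=i, scale=0.3, size=(20, 4))
                       for i in range(len(LABELS))])
        y = np.repeat(np.eye(len(LABELS)), 20, axis=0)  # one-hot targets

        W = rng.normal(size=(4, len(LABELS)))  # one layer of weights
        for _ in range(500):                   # plain gradient descent
            p = 1 / (1 + np.exp(-X @ W))       # sigmoid activations
            W -= 0.1 * X.T @ (p - y) / len(X)

        def match(signal):
            """Return the label whose learned pattern best fits the signal."""
            return LABELS[int(np.argmax(signal @ W))]

        print(match(rng.normal(loc=1, scale=0.3, size=4)))  # -> "door"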

    In general the human mind is not supposed to synthesise voices inside the head, so if you have a voice that tells you things, be warned: psychiatric doctors will want a word with you about what you hear in your head.

    Put simply, there is a region that can generate an internal synthesised voice that can be your own; however, it is possible for that region to suffer damage and for the synthesis to become impossible. Also, an understanding of how such synthesis could be placed into a person could create a basis on which a person could be controlled to an extent.
     