Creating a "Chase" robot

Discussion in 'Intelligence & Machines' started by laxweasel, Apr 19, 2003.

Thread Status:
Not open for further replies.
  1. laxweasel Registered Senior Member

    Messages:
    70
    A friend of my dad's and I are creating a robot that we plan to use to chase living creatures away from his garden, and I'm trying to think out practical ways to do it.
    One idea I had was two light sensors placed some distance apart: when the beam is broken from right to left, the bot would go right, and vice versa.
    Any input on this would be appreciated. Oh yeah, and comments on my idea are welcome (it stinks, I know, but it's my first robotics project).
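    A minimal sketch in Python of that beam-break logic, with read_left(), read_right() and turn() as hypothetical stand-ins for whatever sensor and motor hardware is actually used (they are not tied to any particular parts):

    import time

    def watch_for_crossing(read_left, read_right, turn):
        """read_left/read_right return True while that sensor's beam is blocked;
        turn("left"/"right") steers the robot. Whichever beam breaks first
        gives the direction of the crossing."""
        while True:
            left, right = read_left(), read_right()
            if right and not left:
                # Right beam broke first: a right-to-left crossing,
                # so (per the idea above) steer the robot to the right.
                turn("right")
            elif left and not right:
                turn("left")
            time.sleep(0.01)   # poll roughly 100 times a second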
     
  3. Capibara GrandfatherOfAllKnowledge Registered Senior Member

    Messages:
    39
    The idea with the two light sensors would probably make your robot run around in a zig-zag - at best ... but this project seems really interesting ...
     
  5. Rick Valued Senior Member

    Messages:
    3,336
    Nice idea, really.
    What is the range (radius) of the light sensors you'll employ?

    You could also use a technique from GPS called trilateration to make finding the position of other objects more effective... do you know it?


    bye!
     
  7. laxweasel Registered Senior Member

    Messages:
    70
    The sensors would be relatively close... less than a foot apart. And I remember the triangulation idea; I just need to figure out how to implement it (cheaply) in this robot.
     
  8. hlreed Registered Senior Member

    Messages:
    245
    You need two light sensors, to see differences in light intensity. You need two motors for propulsion and direction.
    Name the sensors SR and SL. The motors are MR and ML. Arrange them so the sensors and motors are the same distance apart and
    so that MR > ML turns left, MR = ML is straight and MR < ML turns right.
    The sensors turn light intensity differences into meaning:
    SR > SL means SR is brighter than SL
    SR = SL means SR is the same as SL
    SR < SL means SL is brighter than SR
    With this you can seek light or seek dark.
    Make the sensors digital by sending their voltage through an analog-to-digital converter. The motors should be run through an H-bridge controller. With all this, your robot needs a microcontroller to do this:
    MR = turn + go
    ML = -turn + go
    turn = SL - SR
    go = constant (the higher the constant, the higher the speed)

    This is set up as a light seeker. Reverse SL and SR and it is a dark seeker.
    To choose between seeking light and seeking dark you need more brain.
    Your idea of chasing animals away requires a lot more brain. Do this first.
    Harold
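    A minimal sketch in Python of the rule above, with read_adc() and set_motor() as hypothetical stand-ins for the A/D converter readings and the H-bridge outputs (whatever microcontroller is actually used will have its own calls):

    GO = 0.5   # base speed constant; the higher the constant, the higher the speed

    def light_seek_step(read_adc, set_motor):
        SL = read_adc("left")    # digitized left sensor
        SR = read_adc("right")   # digitized right sensor
        turn = SL - SR           # positive when the left side is brighter
        MR = GO + turn           # MR > ML turns left, toward the brighter side
        ML = GO - turn
        set_motor("right", MR)
        set_motor("left", ML)
        # Swap SL and SR (turn = SR - SL) and the same loop seeks dark instead.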
     
  9. laxweasel Registered Senior Member

    Messages:
    70
    Yeah, I figured out how to make it do that... but how to make it go in a specific, correct direction is what's complicated and puzzling me.
     
  10. Blindman Valued Senior Member

    Messages:
    1,425
    Mad as... What makes you think that light sensors can pick out a rat from a snail?

    You must find something that identifies the object to chase without error.

    Create a sensor that reacts to the prey and you're halfway there.
     
  11. hlreed Registered Senior Member

    Messages:
    245
    You are correct that light seeking is not detection. You will have to decide what detection means in terms of what the robot can see. That can replace the constant. This is a very hard problem.
    The problem is determining which is figure and which is ground.
    Visual identification can be based on similarity. To do this you need more than two sensors. With four sensors, you can distinguish figure and ground on a line and begin to be able to see size. With eight sensors you can begin to do shapes.
    By the time you can determine rats from snails you have a very good visual system. At great cost.
    Your robot still can do only what its motors allow. It can turn left or right, go forward or backward no matter how big the brain.
    Here is how to start with this:
    Make a dot for each light sensor. Draw a V from each dot. Note how they intersect. An object (figure) will be seen by more than one sensor.
    If you like I will send you an article on one system of vision. Just send me an email and ask.
     
  12. AntonK Technomage Registered Senior Member

    Messages:
    1,083
    As far as detecting animals goes, here is an interesting thing I remember from a while back. It uses a neural network to detect the animal's silhouette. Not sure how you'd get a proper silhouette as they did, but anyway, it's an interesting idea. Here's the link:

    http://www.quantumpicture.com/Flo_Control/flo_control.htm

    -AntonK
     
  13. hlreed Registered Senior Member

    Messages:
    245
    AntonK
    My system makes neurons. Neural nets are encoders that require weights to be set. The robot is the one that needs to see, not the designer.
    Besides, I have the hardware to do this.
    A CNode is a computer that takes the difference of two inputs and writes it out. These nodes form trees of any size. Put in n light sensors, and s layers of the tree are zeroed if something of equal intensity is there covering s of them. From this the robot can determine length by counting the zeroes. Do this vertically as well and you have height; length and height combine into shape. So you have an input of light intensities and an output of shape codes, which can be remembered to build up a picture in the robot.
    This is one system. I am looking at others.

    I have a complete algebra to do all this, and it is free to anyone.
    Just ask via email and I will send you a copy.
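    A rough sketch in Python of the difference-tree idea as described above - this is only one reading of the description, not Harold's actual hardware: each layer takes differences of adjacent values from the layer below, and a run of zeros in the first layer marks a patch of sensors seeing the same intensity, whose length can be counted.

    def difference_layers(readings):
        """Build successive layers of adjacent differences from a row of sensor readings."""
        layers = [list(readings)]
        while len(layers[-1]) > 1:
            prev = layers[-1]
            layers.append([prev[i + 1] - prev[i] for i in range(len(prev) - 1)])
        return layers

    def figure_length(readings):
        """Length of the longest uniform patch: the most adjacent sensors
        reporting the same intensity (zero differences in the first layer)."""
        diffs = difference_layers(readings)[1]
        best = run = 0
        for d in diffs:
            run = run + 1 if d == 0 else 0
            best = max(best, run)
        return best + 1 if best else 0   # k zero differences span k + 1 sensors

    # Example: eight sensors, one uniform object covering four of them
    print(figure_length([9, 9, 3, 3, 3, 3, 9, 9]))   # -> 4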
     
  14. AntonK Technomage Registered Senior Member

    Messages:
    1,083
    Harold,

    I have followed the posts on your system for a while now, and all I wish I could see is some sort of evidence of its usefulness. The site I listed above has pictures, examples, and paragraphs describing the system. I am in no way saying which system is better; I am simply saying they are putting their cards on the table, so to speak. They are saying, "Here is our system, here's how it works... here's what it can do." You have told us about the inner workings of your "Halgebra," but I have seen no concrete examples of the technology in use.

    Do you have any pictures as that site does? Do you have anything actually built?

    -AntonK
     
  15. one_raven God is a Chinese Whisper Valued Senior Member

    Messages:
    13,433
    Do the eyes have to be attached to the robot?
    What about having a robot that responds to external stimuli from a surveillance system?

    Basically, aim a CCD at the garden from overhead.
    That CCD can act as multiple motion detectors on a mapped grid.
    The robot would be pre-programmed to move along this grid according to the coordinates sent from the surveillance system.
    If motion in a sector is detected, the surveillance system will send a message to the robot to proceed to X,Y, maybe making a subsonic sound (or another sound that will startle and annoy the pest).
    When the rabbit then moves from 12,18 to 13,20, the robot will do the same.
    Couple that with a sensor that causes the robot to turn 12 degrees or so (in the general direction of the intruder) when an object is within 3 inches of it (to avoid trampling the tomato plants).
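    A rough sketch in Python of that grid idea, with grab_frame() and send_to_robot() as hypothetical stand-ins for the camera capture and the radio link to the robot (neither is a real library call here):

    def motion_cell(prev, curr, grid=(20, 20), threshold=15):
        """Compare two frames (2-D brightness arrays) cell by cell; return the
        (x, y) grid cell with the largest change, or None if nothing moved enough."""
        rows, cols = len(curr), len(curr[0])
        ch, cw = rows // grid[1], cols // grid[0]
        best, best_cell = 0, None
        for gy in range(grid[1]):
            for gx in range(grid[0]):
                diff = sum(
                    abs(curr[y][x] - prev[y][x])
                    for y in range(gy * ch, (gy + 1) * ch)
                    for x in range(gx * cw, (gx + 1) * cw)
                ) / (ch * cw)
                if diff > threshold and diff > best:
                    best, best_cell = diff, (gx, gy)
        return best_cell

    # Surveillance loop (sketch): whenever a cell lights up, tell the robot where to go.
    # prev = grab_frame()
    # while True:
    #     curr = grab_frame()
    #     cell = motion_cell(prev, curr)
    #     if cell:
    #         send_to_robot(*cell)   # robot drives to grid coordinate (x, y)
    #     prev = curr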
     
  16. Capibara GrandfatherOfAllKnowledge Registered Senior Member

    Messages:
    39
    one_raven - I must say I like your post ... nice ideas you've got there.

    I also thought of something a little more complex ... take a small webcam, mount it on a vertical pole above the robot (50 cm should be enough), add a small laser to the base of the pole (a red keychain laser will probably do the trick at night and on not-so-sunny days, but you could also use a more powerful green laser), rotate the pole to scan around the robot, and use a little program - like the one used in Flo_control (the site mentioned above) - to interpret the output of the webcam and get the distance and size of the objects around the robot ... this will also stop it from trampling the tomatoes - anything that moves more than a meter or so must be chased.

    - I actually think I'll do this myself this summer; I'm too busy with exams right now.

    P.S. I forgot to mention that you should filter the light that goes to the webcam or CCD using advanced equipment such as coloured glass or plastic (red if you use the red laser, green if you use the green one - or whatever colour your laser is).
     
    Last edited: Apr 22, 2003
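    One common way a webcam-plus-laser rig like that measures distance (not necessarily what was meant above) is triangulation: the laser sits a known distance from the camera axis and points parallel to it, so the farther the surface the dot lands on, the closer the dot appears to the image centre. A rough sketch in Python, with the baseline and the per-pixel angle as made-up example numbers:

    import math

    BASELINE_CM = 5.0        # laser mounted 5 cm from the camera axis (example)
    RAD_PER_PIXEL = 0.0010   # angular size of one pixel; depends on the lens (example)

    def laser_range_cm(dot_pixels_from_center):
        """Distance to whatever the laser dot landed on, by simple triangulation:
        the farther the target, the closer the dot sits to the image centre."""
        angle = dot_pixels_from_center * RAD_PER_PIXEL
        if angle <= 0:
            return float("inf")   # dot at or past the centre: effectively at infinity
        return BASELINE_CM / math.tan(angle)

    print(laser_range_cm(50))   # dot 50 px from centre -> roughly 100 cm in this setup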
  17. hlreed Registered Senior Member

    Messages:
    245
    Have you checked my website?
    http://www.halbrain.com
     
  18. AntonK Technomage Registered Senior Member

    Messages:
    1,083
    I've read your site, and from what I can gather, you are simply taking VERY simple AI (if you even want to call it that... because I have played video games with more complex interactions) and making it into hardware components. This seems to work backwards. We started off using hardware-based instruction and decision structures - for example, the original "Pong" wasn't programmed, it was all hardware-based - and now we are using programmable components. Everything I read on your site could be programmed in code, placed onto some type of reprogrammable microprocessor, and built into an actual machine by any 2nd-year CS or engineering student. I really don't see anything revolutionary. I'd like to, though. Your site is entirely text. You show no proof that you've built ANYTHING... it just seems you're trying to sell stuff.

    Here's a link to a good micro-processor that can EASILY be used to create the "Animachines" that you describe.

    Javelin STAMP

    I also don't like the fact that you continue to refer to "integration" and "differentiation". It seems you are confusing them with "add" and "subtract". THEY ARE VERY DIFFERENT.

    -AntonK
     
  19. Capibara GrandfatherOfAllKnowledge Registered Senior Member

    Messages:
    39
    Well, AntonK, I don't think that even a fast computer can simulate very many neurons, so the general idea is good, but I don't like the fact that these nodes are so hard to assemble... and they're way too big - but that could be fixed when you get to mass-produce them.
     
    Last edited: Apr 23, 2003
  20. laxweasel Registered Senior Member

    Messages:
    70
    I very much like your idea. It seems practical and uses some easily available supplies. Part of my problem is that this robot needs to be cheap (compared to other forms of pest control) and it needs to be made from readily available materials. You have addressed both of these problems. If I go ahead with this project and build it, I will let you know how it goes. Good luck with your own.
     
  21. malkiri Registered Senior Member

    Messages:
    198
    What's the laser for?
    You're going to run into difficulty when you try to decide how far the object is from the camera and what size the object is. Here's a link to help illustrate what I'm saying:
    http://sapdesignguild.org/resources/optical_illusions/intro_constancy.html
    Say your program is processing a frame like the one pictured there. How can it decide if the monster in the upper right is much bigger than the other one, or just floating up in the air a bit? There's some math involved that explains the reason for this - basically, while you can solve for the X and Y position of an object, you can't solve for the Z parameter. If you want more info on this, I'll dig through some notes.
    A solution to this problem is to use a stereo vision system. This is part of how we get our depth information as humans. You know how, if you switch between closing your left and right eye, things seem to move? As you can imagine, this is because your eyes are in different locations, so they're going to see slightly different images. When you focus on something, this disparity (that's the technical term for it, if you want more info) is part of what cues us in on the distance to the object. Another part is context - you know that apples are usually the size of a fist, and you take that into account when looking at an apple. Unfortunately, computers don't have this context available to them.
    If you're thinking of using the light to get a bearing on the size - I can see that as a possibility, but I don't have any ideas off the top of my head about how it might work.
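    For the stereo route, once the two cameras are calibrated the depth relation itself is simple: with focal length f in pixels, baseline B between the cameras, and disparity d (how many pixels a matched feature shifts between the left and right images), depth is roughly Z = f * B / d. A small Python sketch with made-up example numbers:

    def stereo_depth(focal_px, baseline_m, disparity_px):
        """Depth of a matched feature seen by two parallel cameras: Z = f * B / d."""
        if disparity_px <= 0:
            return float("inf")   # zero disparity: too far away to resolve
        return focal_px * baseline_m / disparity_px

    # Example: 700 px focal length, cameras 10 cm apart, feature shifted 35 px
    print(stereo_depth(700, 0.10, 35))   # -> 2.0 metres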
     
  22. laxweasel Registered Senior Member

    Messages:
    70
    Malkiri: Now correct me if I'm wrong, but I think the laser is supposed to act as a sort of rangefinder... watching the position of objects and making sure they don't move too much.

    As for the rotating idea, I'm not sure I need to do that. The problem is the deer, and we can pretty well find out what path the deer are using and just set up a sort of trip wire with the laser idea, but I'm not sure about that yet.
     
  23. Automan Mostly harmless. Registered Senior Member

    Messages:
    65
    Would it not be better to stop the deer from getting into your garden in the first place?
    Robots that chase things break and require maintenance. Deer are big, and will eventually ignore the robot (or jump up and down on it, depending on the family and season). Sudden frights work nicely on deer; they're hard-wired that way.

    Remove items that are inordinately interesting to deer.
    Add a pressure sensor / laser / PIR detector, as suits, at the entry points.
    Place something deer don't like, e.g. a few pop-up targets, or speakers with varied sudden hissing noises (bobcats?), in appropriate spots. Local natural predators are best. If they get used to the noise, they won't be a pest for long anyway! Sorry, Bambi...

    An additional idea for a camera system, just for laughs...

    One low-res mini IR camera with electronic exposure control etc. and a fisheye lens. Mammals should show up nicely, colder (on a damn hot day) or hotter, as colour values. You are out of luck with alligators. Thermal-sensing cameras are also fun to play with.

    As described by others, extracting the bearing through differences from the norm is easy to do (depending on budget). As long as you allow for gradual heat changes and over-values (caused by sunshine), everything in the garden will appear black except the furry one. Or just set it to go for deer temperature. There may be some calibration required... but it will also work at night.
    If on an independent robot, 360 degrees is easily achieved with 2 mirrors, one over the top half of the lens.

    Have fun!!
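    A rough sketch in Python of that "difference from the norm" idea, assuming the IR camera delivers each frame as a 2-D array of temperature-like values (the actual capture call depends on the camera): the background adapts slowly so gradual heat changes are absorbed, sun-baked over-values are capped, and only noticeably warmer pixels are kept.

    def update_background(background, frame, rate=0.01):
        """Let the background drift slowly toward the current frame, so gradual
        heating or cooling of the garden never triggers anything."""
        return [[(1 - rate) * b + rate * f for b, f in zip(brow, frow)]
                for brow, frow in zip(background, frame)]

    def warm_pixels(background, frame, delta=4.0, sun_cap=60.0):
        """Pixels noticeably warmer than the background, ignoring sun-baked
        over-values: roughly 'everything black except the furry one'."""
        return [(x, y)
                for y, (brow, frow) in enumerate(zip(background, frame))
                for x, (b, f) in enumerate(zip(brow, frow))
                if f - b > delta and f < sun_cap]

    # Bearing (sketch): average the x of the warm pixels and map it through the
    # fisheye/mirror geometry of whatever lens is actually fitted.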
     