Om's Robot

Discussion in 'Intelligence & Machines' started by domesticated om, Apr 19, 2010.

Thread Status:
Not open for further replies.
  1. domesticated om Interplanetary homesteader Valued Senior Member

    I'm in the process of building my first robot......although (according to a friend of mine on another forum), it's more of a 'finite state machine' than full-blown AI. I would normally have posted something like this in the computer science forum, but Intelligence and Machines is a closer match IMO.

    Basic synopsis - this is a quasi-robotic car. It will accept directional tasks and perform them, governed by sensor data and pre-programmed criteria.

    The first (and only) problem I'm having trouble wrapping my head around is how to accurately establish position in real space. My machine doesn't come with any sensors that empirically tell it how far it's traveled. For example - I can't really tell it to "travel 10 feet forward, 3 feet left" without this.
    I can improvise by multiplying speed by elapsed time or something like that, but I want something more solid.

    I can add sensors if need be (if you guys have any suggestions on what exactly, let me know) - albeit limited to anything that's inexpensive and small enough to fit on a 1/10 scale R/C car.
    GPS is an option - although I *think* commercial GPS only resolves to within a few meters, and may have issues with lag time or connectivity (useless inside a building).

    The easiest solution I can think of so far is a webcam and ping-pong balls. I can paint them all bright, identifiable colors and associate them with nodes on a pre-defined map. This limits me to pre-defined maps, though (I can't take it to the tennis court down the street without setting up more ping-pong balls and manually writing a new map).

    Let me know if you have any input on this.
    Last edited: Apr 19, 2010
  3. Sarkus Hippomonstrosesquippedalo phobe Valued Senior Member

    Combination of gyroscope, odometer, triangulation to known fixed transmitters etc.
    The gyroscope will enable you to determine direction, odometer for distance and the triangulation will determine position relative to the fixed transmitters, which can then be compared to expected position from calculations based on the gyroscope and odometer etc.
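    The gyro-plus-odometer half of this suggestion is classic dead reckoning: integrate short (heading, distance) segments into a position estimate, then correct the accumulated drift against the fixed transmitters. A minimal sketch in Python (the function name and segment format are made up for illustration):

    ```python
    import math

    def dead_reckon(start, segments):
        """Integrate (heading_deg, distance) segments -- heading from the
        gyroscope, distance from the odometer -- into an (x, y) estimate.
        Convention assumed here: heading 0 is +y ("north"), 90 is +x ("east")."""
        x, y = start
        for heading_deg, dist in segments:
            rad = math.radians(heading_deg)
            x += dist * math.sin(rad)
            y += dist * math.cos(rad)
        return x, y
    ```

    The weakness is that small heading and distance errors compound over time, which is exactly why Sarkus pairs it with triangulation to known fixed points.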
  5. domesticated om Interplanetary homesteader Valued Senior Member

  7. domesticated om Interplanetary homesteader Valued Senior Member

    By the way - I'm still actively working on this project, but I'm in the "save and purchase" phase right now.

    Here are a few more details about my setup -
    As I said earlier, I will be using a 1/10-scale R/C car. I haven't decided on a specific one, but it will probably be one of the "Tamiya" or "Team Associated" models, since I have prior experience building and working with those.

    I will also be using a wireless camera (will be mounted on the car obviously). For the time being, this will be its only sensor. All of the actions the car takes will be based on how the software I'm writing interprets the video.

    The car itself will be controlled using an R/C radio controller that has been connected to a computer. None of the actual "brains" for this machine are going to be located on the car itself, so there's no need for any complicated on-board controllers and whatnot.

    I haven't purchased the radio itself yet, but I HAVE purchased a USB device that allows me to connect the computer to the R/C controller. The only drawback to this device is that it doesn't relay any position data on the servos..... which would be extremely helpful in a scenario where some external condition offsets the position of the servo (like running over a bump). I may just choose to go with servos that have a lot of torque and the least amount of play (wiggle) possible.

    The other two sensors I'm hoping to install are a compass and an accelerometer. I'm hoping to find a way to install these that doesn't require a separate transmitter for each......especially since my R/C radio and wireless camera already put me at two separate transmitters.
  8. Blindman Valued Senior Member

    Very interesting project, Om. Machine vision is a very difficult subject and requires lots of processing power. I'm using a mobile phone connected to my PC and a Mindstorms NXT brick via Bluetooth. The phone is doing the image processing from its camera.

    I am still in the process of getting a good interface to the camera. I'm doing a line/corner recognition system: the phone will return a set of lines and corners to the PC. I am in the middle of battling with the Carbide/S60 SDK to convert my C# code to C++.

    I'm also frustrated that there is no good focus control. Focus is a good distance cue, but I can't find a good focus interface in the SDK.

    Nice to see I'm not the only one with the free time to play with the amazing toys available these days.

    My goal is to create a universal Lego sorting machine: scatter Lego parts, sort them by part and colour, and do it quickly.

    If I pull that off, it will become a sorting and building machine. The end aim is a self-replicating Lego machine. It will be large and complicated, and will have to be self-healing if I build two.
  9. domesticated om Interplanetary homesteader Valued Senior Member

    Yeah - I totally agree. I've heard that edge detection is dicey.....although I've got ideas for possible pixel-based algorithms.
    In your project, I'm thinking the color of the Lego bricks may also be significant, since they're often a color that stands out from the background.

    That's really the whole reason for my ping-pong ball concept (from the beginning). One of the first tasks I want my robot to perform is racecar-type circuits around a pre-defined track. I imagine trying to programmatically define "walls" and other barriers would be extremely complex......but coding it to look for the balls would be easier. I could paint them all unique colors and associate them with nodes on a map (for example, the neon green ball - or RGB range xxx,xxx,xxx to yyy,yyy,yyy - could represent the left edge of turn 3, and so on).
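    The ball-to-node lookup described above could be sketched as a per-pixel RGB range check (the node names and color ranges here are made-up examples, not real calibration values):

    ```python
    def classify_pixel(rgb, nodes):
        """Return the map-node name whose RGB range contains this pixel,
        or None if the pixel matches no ball color."""
        for name, (lo, hi) in nodes.items():
            if all(l <= c <= h for c, l, h in zip(rgb, lo, hi)):
                return name
        return None

    # hypothetical track map: node -> ((r,g,b) low bound, (r,g,b) high bound)
    NODES = {
        "turn3_left": ((0, 200, 0), (80, 255, 80)),   # neon green ball
        "start_line": ((200, 0, 0), (255, 60, 60)),   # bright red ball
    }
    ```

    In practice the ranges would need to be wide enough to survive lighting changes, which is one reason fixed, bright, saturated colors are a good choice for the balls.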
  10. domesticated om Interplanetary homesteader Valued Senior Member

    Oh---as for focus, it seems like that would add complexity and slowdown to a robot. I'd imagine it would be kinda like my Canon PowerShot and its autofocus functionality (which I've disabled LOL). You press the button to take the picture, and it sits there trying to sort out the focus before triggering the shot......all the while, a couple of the people in the frame have stopped smiling.

    Seems like it would be simpler to use a fixed focus with as much hyperfocal clarity as possible.
  11. Blindman Valued Senior Member

    I would use reflective paint to mark the edge of the track. A low-res camera will give you an easy way to find the track center. You could periodically mark barcode-style center lines - as simple as 1 dot, then 2 dots, and so on. This would give you a track position and allow for optimized cornering.
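    The dot-count idea could be prototyped as a run counter over one grayscale scanline from the camera (a toy stand-in for real image data; the threshold value is an assumption):

    ```python
    def count_marker_dots(scanline, threshold=200):
        """Count bright 'dots' (contiguous runs of above-threshold pixels)
        in one grayscale scanline; the dot count indexes position along
        the track, e.g. 1 dot = marker 1, 2 dots = marker 2, and so on."""
        dots, in_dot = 0, False
        for px in scanline:
            if px >= threshold:
                if not in_dot:
                    dots += 1
                in_dot = True
            else:
                in_dot = False
        return dots
    ```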

    I create my vision algorithms via a home-grown neural net solution. I use a simple simulator I created to build a net that responds to pixel colour and brightness. I then train/analyze/rewire the network. In the end, I output C# source code that implements the optimized network. It's a bit of a clunky solution, but it has come up with some very quick and simple solutions to vision problems.
  12. MacGyver1968 Fixin' Shit that Ain't Broke Valued Senior Member

    My final project in electronics school was a similar R/C car. It was a team project: while some worked on the software and others the radio, my job was to design a sensor to detect collisions. I used infrared LEDs on the front of the car and an IR detector. It only had a range of a couple of inches, and the target needed to be very close to 90 degrees from the front of the car...not too practical in application, but we did get an "A".

    Wish we could have had a USB connection to the R/C controller. This was 15 years ago, and none existed. We had to remove the joysticks and put a D-to-A converter in their place, which received data from the parallel port and applied the appropriate voltage where the joysticks had been connected to the board.
  13. domesticated om Interplanetary homesteader Valued Senior Member


    Here's the USB connector I'm using.

    I also found this link a while back. I thought about using an IR sensor to do the exact same thing as you.... to keep it from crashing (or possibly LIDAR-type applications, assuming it returns a distance and not a boolean). The rated distances for these look pretty good.
  14. domesticated om Interplanetary homesteader Valued Senior Member

    It's taken me a ton of time to finally get this thing built and running, but my sensors will finally [hopefully] be on their way next week.

    Here's what I'm going to be mounting on my robot:
    Single board computer (used to transmit my sensor data over an 802.11 type network)
    laser rangefinder w/~150 cm range

    I've decided not to attempt to use computer vision for the time being. The camera will be used for FPV purposes so I can rescue my robot by manual control whenever it gets marooned.
  15. cosmictraveler Be kind to yourself always. Valued Senior Member

    A question comes to mind: if you were to keep all the equipment at a stationary position in your house, then your car would only need a camera on board to show the equipment where the car is going, and the equipment could drive the car around instead of putting everything in the car. Can that be done, or is that what you're doing? :shrug:
  16. domesticated om Interplanetary homesteader Valued Senior Member

    Actually - a stationary, fixed-position third-person camera is easier [for me] to write code for, since I wouldn't have to deal with constant rapid scenery or lighting changes.

    For example - I could set the camera on the ceiling in a face-down perspective and write software that maneuvers the car around the static scene. The car would be the only thing that "changes" in the scene, so it would be relatively easy to track it and send it instructions. The pattern would be something like:

    1. Original bitmap with nothing in it = sample 1.
    2. Current bitmap with car in it = sample 2 (the car is the region of the bitmap that differs from sample 1)... this is the car's start position.
    3. Send the car to a rally point or points (in my case, it's just a steering servo and a speed controller that makes it go forward/backward).
    4. Perform a loop to track changes (compare each new sample to the last sample... make servo corrections if something went wrong).

    Well...... that's the slower way to do it, anyway LOL. I think I saw a YouTube video of someone who did this a while back.
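    The compare-samples step in the pattern above can be sketched as a toy frame difference: diff the current overhead frame against the empty-scene sample and take the centroid of the changed pixels as the car's position. Frames here are plain 2-D lists of grayscale values, a stand-in for real video:

    ```python
    def find_car(background, frame, threshold=40):
        """Return the (x, y) centroid of pixels that differ from the
        empty-scene background by more than `threshold` (assumed to be
        the car), or None if nothing in the frame has changed."""
        changed = [(x, y)
                   for y, row in enumerate(frame)
                   for x, px in enumerate(row)
                   if abs(px - background[y][x]) > threshold]
        if not changed:
            return None
        n = len(changed)
        return (sum(x for x, _ in changed) / n,
                sum(y for _, y in changed) / n)
    ```

    A real implementation would also have to reject shadows and lighting drift, but against a truly static ceiling-camera scene this simple diff gets surprisingly far.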

    I like having a portable system, though. I don't exactly want to limit it to my house the entire time.
    PS - the various boards (laser rangefinder, single board computer, etc.) are pretty tiny. Actually, the gyro/compass/accelerometer I found comes on a single board, measures ~1"x1", and doesn't really weigh anything. Mounting it won't be a problem.
  17. domesticated om Interplanetary homesteader Valued Senior Member

    Ok -- here is my robot

    [photos of the robot]

    Let me explain what you're looking at:
    This is a Tamiya Grasshopper (a bottom-of-the-line R/C car kit) with a single board computer and numerous sensors mounted on it.

    The data from the sensors is broadcast over an ordinary 802.11 wifi network using a single board computer. I decided to go with the Phidgets SBC (version 1) simply because it's modular and I wouldn't have to worry about doing a bunch of soldering down the road. When I start a new project, I can simply yank all the parts off this machine and attach them to the new machine without much customization.
    You can't see the SBC in the 2nd pic because I hot-glued a little cardboard roll cage around it for protection.

    My list of sensors:
    IR rangefinder - detects objects 20 to 150 cm away.
    Spatial sensor - gives me a compass, accelerometer, and gyroscope..... mounted on top of the servo on the tower up front (that big thing that sticks up). The reason I mounted it like this for the time being is that there's a lot of magnetic interference from the rest of the vehicle; it seems to stay most accurate when kept at a distance. I'm not using the gyro or accelerometer data at this time as I script the robot's behavior.... only the compass.
    Hall sensor - mounted on the back wheel (not shown). I've also taped a permanent magnet to the wheel, so the hall sensor simply detects whenever the magnet passes by and counts it as a single rotation. This lets me measure how far the vehicle has traveled........which is extremely helpful because it allows me to set waypoints and whatnot.
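    The hall-sensor odometry above is just rotation count times wheel circumference. A quick sketch (the wheel diameter is a made-up figure; you'd measure the actual tire, and the result assumes no wheel slip):

    ```python
    import math

    WHEEL_DIAMETER_CM = 8.5  # hypothetical figure for a 1/10-scale tire
    CM_PER_ROTATION = math.pi * WHEEL_DIAMETER_CM

    def distance_travelled(rotations):
        """One magnet pass = one wheel rotation, so distance travelled is
        simply the rotation count times the wheel circumference."""
        return rotations * CM_PER_ROTATION
    ```

    With one magnet on the wheel, resolution is limited to one circumference (~27 cm here); adding more magnets around the rim would give finer-grained counts.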

    The servos and ESC are controlled via a normal R/C transmitter. I'm using a run-of-the-mill Futaba 4YF 4-channel controller, which is driven via the PCTx mentioned in a previous post. Channels 1 and 2 handle the steering and throttle for my car. Channel 3 handles the servo for the IR and spatial sensor.

    I haven't yet built the camera system for the car. I experimented with a webcam attached to the SBC, but the frame rate was too low. I decided instead to go with a full-blown FPV system (a video feed broadcast over an FM signal), which has an extremely fast frame rate. This will give me a lot more flexibility as I work on the computer vision experiments.

    The first experiment I'm running right now is a simple obstacle avoidance test. As you may have noticed, I've mounted both the IR rangefinder and the compass on top of the same servo. My car is programmed to apply throttle until it detects an object, stop when an object is detected, move the servo until an object is no longer detected (the new compass heading corresponds directly to the IR rangefinder data since they're both on the same servo), and finally move the servo back to a "forward home" position and steer the car to the new heading.
    I haven't yet run this test or debugged it - I'm in the early learning stages myself. I simply conceptualized the test and put it together.
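    The drive/stop/scan/turn behavior described above is a small state machine, which can be sketched as one function called per control tick (the states and the `scan_servo` callback are hypothetical names, not real PCTx calls; real servo control would go through the transmitter channels):

    ```python
    def avoidance_step(state, ir_detects_object, scan_servo):
        """One tick of the obstacle-avoidance loop.
        DRIVE -> (object seen) -> SCAN -> (clear heading found) -> TURN -> DRIVE."""
        if state == "DRIVE":
            # apply throttle until the rangefinder reports an object
            return "SCAN" if ir_detects_object else "DRIVE"
        if state == "SCAN":
            # sweep the sensor servo until the rangefinder reads clear;
            # the compass on the same servo gives the new clear heading
            scan_servo("sweep")
            return "SCAN" if ir_detects_object else "TURN"
        if state == "TURN":
            # recenter the sensor servo and steer toward the new heading
            scan_servo("home")
            return "DRIVE"
        raise ValueError("unknown state: " + state)
    ```

    Keeping the logic in this shape makes the behavior easy to bench-test before the car ever moves, which seems useful given the test hasn't been run yet.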

    FOOTNOTE - I know I have a long way to go until I can do something badass like this:
    Last edited: Mar 12, 2011
  18. tashja Registered Senior Member

    Cool robot. After you're done, make an appointment with DARPA.
  19. domesticated om Interplanetary homesteader Valued Senior Member

    To be honest, this project is extremely weak in terms of being a "door opener" for gaining entry into one of the various robotics programs (such as the corporations/universities that participate in the DARPA challenges). For the time being, my rover is pretty much a glorified, overly expensive "Big Trak".
  20. cosmictraveler Be kind to yourself always. Valued Senior Member

    Interesting looking - nice work, because I would not have attempted to build it.
