Walter's New Teaching Pendant for Head Moves (Virtual)

Is it wrong to give your robot a little head? Or a big head for that matter?

Gone are the days of my old, clunky teaching pendant for Walter's head. I have coded a new one via Processing. Watch the video and enjoy --this is a pretty good one. Code available upon request.


This is a companion discussion topic for the original entry at https://community.robotshop.com/robots/show/walters-new-teaching-pendant-for-head-moves-virtual

Walter VM

Looks like you’ve really taken to Processing…
I’ve been on sabbatical for a while and have not even seen your “old style” pendant which I think is very impressive too…

So, forging ahead, I’m guessing the point of “recording moves” would be to “play moves” back under appropriate conditions?

Any ideas as to the appropriate conditions or recorded moves?
e.g. hits wall -> triggers -> WTF move (slightly tilted head)

Sorry for not being up to speed… it’s been a while…

Play Move…

What I have done is pre-record various moves for different situations and yes, there is a “WTF was that” move included. Some of the head moves also sync up with phrases and other audio “spoken” by Walter. The EEPROMs simply click by and the X, Y, Z position of the head is recorded sequentially onto them. In addition, there is another EEPROM that simply stores the address numbers that correspond to the start and stop points for each move. This works just like the counter on an old tape deck: move one is from address 0 to address 1527, move two is from 1528 to 3289, etc. When played back, the data is simply read from the EEPROMs and off to the servos it goes.
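
To picture the indexing: plain arrays standing in for the EEPROMs, and a play routine that walks from the start address to the stop address (the layout and sizes here are just for illustration, not Walter’s actual firmware):

```
// Sketch of the tape-deck indexing idea, with arrays standing in for
// the physical EEPROMs. Frame layout and sizes are illustrative.
int FRAME_SIZE = 3;                  // X, Y, Z head position per frame
int[] dataEeprom = new int[8192];    // stand-in for the data EEPROM
int[] moveStart = { 0, 1528 };       // stand-in for the index EEPROM:
int[] moveStop  = { 1527, 3289 };    // start/stop address for each move

void setup() {
  playMove(0);                       // replay move one
}

// play a recorded move back: step through its addresses and send each
// frame off to the servos
void playMove(int move) {
  for (int addr = moveStart[move]; addr + FRAME_SIZE - 1 <= moveStop[move]; addr += FRAME_SIZE) {
    int x = dataEeprom[addr];
    int y = dataEeprom[addr + 1];
    int z = dataEeprom[addr + 2];
    println("frame: " + x + " " + y + " " + z);  // would go out to the servos
  }
}
```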

I also have a nice chunk of code that will calculate what it will take to move the head from point A to point B. I use this to transition from the end of one move to the start of another to avoid the dreaded “fast jerk” to the next starting position. In addition, I use this transition code to do simple moves like “center yourself and look up” (this is the standard “I am in menu mode” position). Both of these head-move systems work great, the difference being that the prerecorded ones have a definite “human” quality to them, in that each small twitch or micro-movement from my human hand gets recorded, whereas the “transition from A to B” method is a very calculated, “robotic”-looking move.
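
The transition code boils down to interpolating between the two positions over many small steps. A stripped-down version of the idea (not Walter’s actual code; the positions and step count are arbitrary):

```
// Stripped-down A-to-B transition: creep toward the target over many
// small steps instead of jerking straight to it.
float startX = 90, startY = 45, startZ = 120;    // where the head is now
float targetX = 150, targetY = 90, targetZ = 90; // where the next move starts
int steps = 60;                                  // more steps = slower, smoother
int step = 0;

void draw() {
  if (step <= steps) {
    float t = (float) step / steps;
    float x = lerp(startX, targetX, t);          // lerp() is built into Processing
    float y = lerp(startY, targetY, t);
    float z = lerp(startZ, targetZ, t);
    println("head at: " + x + ", " + y + ", " + z); // would go to the servos
    step++;
  }
}
```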

Boldly forward

Great - I get it - nice that you are into the details of movement, me too… What’s the next big thing? I take it from your recent pimping that “Mapping” is it? Mapping for the purpose of recharging? Or do you have other objectives?

Mapping and Docking are EVERYTHING!

Here’s the plan, Stan:

Yes, docking is also charging. Walter can already measure his own batteries and knows when they are getting low. I have also perfected the docking connection --the actual contacts that Walter will drive into to send juice to the charging circuits. More importantly, when Walter is docked, or more to the point, when he leaves the dock, he now knows his exact position and heading --great for a starting point when mapping. From there, with a little cruising around, he has drawn a pretty good map of where he has been. It is just a tiny step from there to be able to click on the map (in Processing) and have Processing or Walter calculate the route to that point. --Really, I already have the map, as a scale picture and as data on an EEPROM; how hard could it really be to simply read this back and send it to the motors and encoders?
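
The click-to-goal conversion is just arithmetic. Something like this (the scale factor and dock position are made-up numbers, not my real map):

```
// Turn a click on the scale map into a goal point for Walter.
// The scale factor and dock position here are made-up numbers.
float PIXELS_PER_INCH = 2.0;     // map scale (assumed)
float dockX = 10, dockY = 470;   // where the dock sits on the map, in pixels (assumed)

void setup() {
  size(640, 480);
}

void draw() {
}

void mousePressed() {
  // convert the clicked pixel back into inches from the dock
  float goalX = (mouseX - dockX) / PIXELS_PER_INCH;
  float goalY = (dockY - mouseY) / PIXELS_PER_INCH;  // screen Y grows downward
  println("goal: " + goalX + " in. over, " + goalY + " in. up from the dock");
  // from here, the route gets calculated and sent to the motors/encoders
}
```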

Now we’re talkin’ a robot that can navigate the whole house, dock (and measure from that point to determine any stopping point anywhere in the room), retrieve items and return, go to a location on demand at the click of a mouse, etc. etc. etc. Not to mention, with this compass I’ve been fixated on lately, I have discovered I can simplify my whole IR docking system. Right now, I am using 2 beacons (with different “IDs”) for Walter to triangulate on and thus center himself. From there he can drive forward to the follow line that will direct him, quite precisely, to the final docking position. With a compass heading on hand, we can now simply use one beacon, shining 180 degrees, up against a wall. Walter can find this beacon and then simply position himself so he is A) facing the beacon and B) pointing in a predetermined compass heading. Take a quick sonar distance reading bounced off the back wall and boom, you know where you are. You know where the beacon is on the map (this is pre-programmed), we know we are driving straight into it (90 degrees to the back wall) and we know how far away it is. Guess what? We now know where we are and what room we are in, all based on one simple $5 IR beacon stuck in the corner. Not to mention, at this point we can also update our map to sorta zero out where we are. This will take care of any slight encoder inaccuracies we have picked up during a long mapping run.
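
The math behind that one-beacon fix is only a couple of lines. A sketch of it (the beacon position and heading are placeholders, and I’m assuming the usual compass convention of 0 = north, 90 = east):

```
// One-beacon fix: we know the beacon's spot on the map, we know the
// compass heading we approach it on, and sonar gives the range. The
// numbers below are placeholders, not Walter's real map data.
float beaconX = 120, beaconY = 300;  // beacon position on the map, inches (assumed)
float approachHeading = 90;          // predetermined compass heading, degrees (assumed)

void setup() {
  float[] pos = fixPosition(36);     // e.g. sonar says 36 inches out
  println("Walter is at: " + pos[0] + ", " + pos[1]);
}

// given a sonar range to the beacon/back wall, return {x, y} for Walter
float[] fixPosition(float range) {
  float rad = radians(approachHeading);
  // Walter is 'range' short of the beacon, along his own heading
  // (compass convention: 0 = north = +y, 90 = east = +x)
  float x = beaconX - range * sin(rad);
  float y = beaconY - range * cos(rad);
  return new float[] { x, y };
}
```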

You can quickly see how many dominos I have stacked up here --I have just been waiting to get ahold of the first domino so I can put it in front of the rest and start knocking them all down. I’ve got so many awesome, solid sub-systems done it’s crazy. And every day it seems another peg falls in a hole and Walter can do one more amazing thing. The bottom line is, 2 years ago I was making an LED blink. Walter simply does/has an example of every single thing I have learned in those 2 years. It is finally nice to start putting all these learned pieces together into what I had in my head when I started.

Excellent, Dr. Frankenstein

Wonderful progress…
You mentioned an IR array - or webcam? Would you be trying to do more mapping with those? I have some software you might want to try if you’re interested (regarding webcam stuff)

It all sounds good - I guess I’ll need to go through your other posts to see the details. I saw the Processing/mapping one so far, but I suspect there is more I need to look at.

How is your computer communicating with Walter? Xbee? Or some other RF or IR link?

GroG

Connection and Webcam…

I am using a BlueSMiRF Bluetooth module from SparkFun for transmitting data --I could not be happier with the unit, by the way. The next mapping step is, like I said above, a system for following the map that we just drew. I have, however, been thinking of building a system where Walter not only draws the map as he goes but also does a sonar sweep every few inches or so. I could easily send the position data and sonar data to build a little bitmap picture of his surroundings as he goes.
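
The bitmap drawing itself would be nearly free in Processing --every reading is just a dot. Roughly (scale and function names are placeholders I made up):

```
// Plot each sonar echo as a dot: robot position + range along the
// sweep angle. Scale and names here are placeholders.
float SCALE = 2.0;  // pixels per inch (assumed)

void setup() {
  size(640, 480);
  background(255);
  plotEcho(0, 0, 45, 60);  // example reading
}

void draw() {
}

// call once per reading sent back from the robot
void plotEcho(float robotX, float robotY, float sweepDeg, float range) {
  float rad = radians(sweepDeg);
  float hitX = robotX + range * sin(rad);  // compass-style: 0 = north
  float hitY = robotY + range * cos(rad);
  stroke(0);
  point(width / 2 + hitX * SCALE, height / 2 - hitY * SCALE);
}
```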

In terms of webcams, I would love to start playing with edge detection and/or blob stuff. Unfortunately, I can’t seem to find any libraries that will work with Processing. I have found some, but none of them seem to compile --even with a clean cut and paste. I understand that Processing is based on, or is very similar to, Java, but I don’t know if they are fully interchangeable or if any webcam libraries exist for Java either. I would love to see whatever you have; I am quite interested in learning it --oh, voice recognition too --that would be voice recognition on the PC, not on a PIC. At any rate, gimme what you got on that webcam stuff.

Vulcan mind meld…

Aight - gonna start by givin’ you what I know… It seems you are interested in “coding” so I’ll pontificate ad nauseam

I believe Processing is coded in Java - it has a scripting language which in turn gets compiled into Java classes and executed.  It looks pretty damn nifty. I downloaded it once and messed around with some of the demos (damn cool) … however my interests made me look around further for specifically “robot/machine control” software… Processing could do this to some degree, but it was designed for a different purpose.

OpenCV is the powerhouse of vision software! It was started by Intel, released to the public, and contains an amazing collection of functions and utilities. It has had a long life - software years are like dog years…

 

http://sourceforge.net/projects/opencvlibrary/ - it is written in C and has interfaces for C++ & Python. I have downloaded and played with it… it was designed to be built on all sorts of platforms - I’ve built it on Windows and Linux

So… initially you might be interested in OpenCV + Processing … well you need some Java Glue to do that so they stick together…

These guys created some of that glue: http://ubaa.net/shared/processing/opencv/ …
Really it’s not so much glue as a “specialty fastener”… limited in some ways… It’s wicked cool; I tried it, thought it was cool, and got it to work with Processing…
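
If you do try it, a blob-finding sketch with that glue looks something like this -- I’m typing this from recollection of their examples, so treat the exact calls as approximate and check the library docs:

```
// Blob finding with the ubaa.net OpenCV glue for Processing -- written
// from recollection, so the exact signatures may differ.
import hypermedia.video.*;

OpenCV opencv;

void setup() {
  size(320, 240);
  opencv = new OpenCV(this);
  opencv.capture(width, height);   // grab frames from the default webcam
  noFill();
}

void draw() {
  opencv.read();                   // fetch the next frame
  image(opencv.image(), 0, 0);
  opencv.threshold(80);            // crude segmentation before blob finding
  // min area, max area, max number of blobs, find holes?
  Blob[] blobs = opencv.blobs(100, width * height / 2, 10, false);
  for (Blob b : blobs) {
    rect(b.rectangle.x, b.rectangle.y, b.rectangle.width, b.rectangle.height);
  }
}
```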

What I Got:
I made a service-based, multi-threaded Java framework… A framework is like a lot of glue… and it is currently gluing OpenCV + Arduino + Servos + Motors + Speech Recognition + EZ1 Sonar + IR Module + MP3 Player + Control GUI + Joystick + RecorderPlayer + Text To Speech

 

Here is a screenshot of a recent experiment… - the little rectangles are the services and the little arrows are message routes.
So in this case the “camera” service, which uses OpenCV, sends messages to the “tracker” service, which in turn sends messages to the “pan” and “tilt” servo services, which in turn send messages to the “board” - in this case an Arduino Duemilanove…

Make sense?? A service is like a little piece o' your brain ... visual cortex, cerebellum, hypothalamus, etc.. ;)
Services make messages, relay messages, receive and/or process messages...
Messages messages messages neurons synapses dendrites ..... wheee!
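
To give you the flavor of it in plain Java: a service is basically a thread plus an inbox plus a list of routes. This is a toy, not my actual framework:

```
// Toy version of the service/message idea -- a thread, an inbox, and
// routes to downstream services. Not the real framework, just the flavor.
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.CopyOnWriteArrayList;

class Service implements Runnable {
  final String name;
  final BlockingQueue<Object> inbox = new LinkedBlockingQueue<Object>();
  final List<Service> routes = new CopyOnWriteArrayList<Service>();

  Service(String name) {
    this.name = name;
    new Thread(this, name).start();   // each service runs on its own thread
  }

  void addRoute(Service next) { routes.add(next); }
  void send(Object msg)       { inbox.offer(msg); }

  public void run() {
    try {
      while (true) {
        Object msg = inbox.take();               // wait for a message
        Object result = process(msg);            // do this service's job
        for (Service s : routes) s.send(result); // relay downstream
      }
    } catch (InterruptedException e) { /* shutting down */ }
  }

  Object process(Object msg) { return msg; }     // override per service
}

// wiring it up, camera -> tracker -> pan/tilt:
//   Service camera = new Service("camera"), tracker = new Service("tracker");
//   camera.addRoute(tracker); ...
```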

So if you really want to try it I need to know some of the details of your system :

1. what is your puter OS - it looks Bill Gates-ish (bad boy)
2. what are your "boards" - PICAXE if I recollect - I currently don't have a PICAXE board in my library of services, but I could write one with your help - I'd be interested in adding it

Below is the "uber" pane - which has most of the controls for the "Services" .. e.g. "camera", "pan" .. etc... remember?

(screenshot: gooey.png)

The "board" service is a bit more complicated so it has its own pane...

(screenshot: arduino.png)

This is a work in progress - so things will be bumpy ... still interested?

GroG

Quite interested indeed

I have to say, first off, this is the first time I have downloaded and installed the OpenCV stuff and had it work. I have tried 2 or 3 times in the past (whenever OpenCV crossed my path in posts and comments) and it never took. I now have it installed and have run some of the examples. Blob detection works great and this seems like a good place to start. I would like to replicate what I have seen others do --waving a colored ball in front of the camera and letting the computer track it. Seems pretty easy to get an X and a Y from this and then control some pan and tilt servos to keep the object centered in the screen. Obviously, with a distance sensor also shooting at the object, it seems we can make a pretty simple “follow the leader (or in this case, green ball)” program. I.e. find and track the ball with the blob detection, keep it centered via pan and tilt, turn the whole robot when we are getting close to maxing out the pan/tilt travel and finally, get a distance number from it (via sonar) and stay a given distance away. I would love to see Walter cruising around the room following a green ball on the end of a stick.
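
The pan/tilt half of that seems like it’s just a proportional nudge toward the blob’s center. Here’s how I’m picturing it (the serial letters and servo numbers are made up for the sketch, not my real protocol; the blob x/y would come from the blob detection):

```
// Nudge pan/tilt toward the ball and flag when the base should turn.
// Serial letters and servo limits here are made up for the sketch.
import processing.serial.*;

Serial port;
int panPos = 150, tiltPos = 150;   // PICAXE-style servo values, 75..225

void setup() {
  size(320, 240);
  port = new Serial(this, Serial.list()[0], 9600);
}

void draw() {
}

// call with the blob's centroid each frame, from the vision loop
void trackBall(int ballX, int ballY) {
  // proportional nudge: the further off-center, the bigger the step
  panPos  += (ballX - width / 2) / 20;
  tiltPos += (ballY - height / 2) / 20;
  panPos  = constrain(panPos, 75, 225);
  tiltPos = constrain(tiltPos, 75, 225);
  port.write('P'); port.write(panPos);  port.write(13);  // hypothetical letters
  port.write('T'); port.write(tiltPos); port.write(13);
  if (panPos < 85 || panPos > 215) {
    // close to the end of pan travel -- time to turn the whole robot
  }
}
```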

Of course I would eventually like to get to full navigation via webcam as well, but that seems a bit down the road. I am really liking the idea of multiple services taking care of different tasks all reporting back to who they need to report to. The bottom line is that I am simply interested in/ready to start learning real, real, real big-boy programming now. If this post starts a conversation between us, I welcome it.

I would love to hear just about anything you have to tell me --If nothing else, MORE EXAMPLE CODE, PLEASE. --oh, and a link to your voice recognition stuff too!

–I’m getting excited

BTW

MSI Wind netbook (1.6 GHz Atom --just like all the netbooks), 1 GB RAM

Internal USB webcam --I am still looking for a wireless webcam --don’t want the whole lappy on Walter’s back

Windows XP

Visual Servoing using OpenCV and Haar Classifiers

CtC,

I just released a video the other day testing some visual servo functionality. Seems to me you could leverage some of what I have done. Take a peek: http://www.youtube.com/watch?v=i48DnMCfTBc

I stay away from Processing, intentionally. The application in the video uses EmguCV in a C# environment. However, in the past and in most of my applications I stick to C/C++ and at times build GUIs in Qt.

One other thing worth mentioning is a new open source initiative I am embarking upon: www.OpenRobotVision.org. Perhaps I can rope you into contributing in the future.

Hey thanks kankatee

I dig your facial tracking. I think that here at the beginning of my learning I am going to focus on the whole “track a tennis ball” or “track a certain color” thing. I gotta tell you though, it seems that C is just slightly more popular than Processing for doing this. I am just now going through link after link after link trying to find some sample code. I have gone through all the OpenCV examples and can’t seem to find what I need. All the documentation states that color tracking is a standard feature of OpenCV, but I guess I am not looking hard enough.

Thanks bunches for the suggestions and encouragement

--and don’t forget to stop by the post office today! :)

 

I just got back from the…

I just got back from the post office. It should be there in a few days. 

In my opinion, you’re gonna wanna ditch Processing pretty quick if you want to work with active vision. If you want, I can write you up some quick C code to track a tennis ball. Which version of OpenCV did you install, 2.0 or 2.1?

Do you have MinGW and a C compiler?

In fact, wait, try this…

In fact, wait, try this. I forgot I posted this a few weeks ago:

http://www.davidjbarnes.com/Publicly_Available_Software/OpenCV_Object_Tracking_Based_on_Pixel_Color.aspx

Like I said earlier, you’ll need MinGW and a C compiler. Also, Eclipse CDT is a good idea.

Oh wait, one more thing.

Oh wait, one more thing. Sorry for the stream of consciousness. If you aren’t really interested in learning OpenCV or active vision from the ground up per se, but are more or less interested in solving a specific vision problem as it pertains to Walter, maybe just try RoboRealm. In fact, they have an API into the application, so you may be able to leverage what you already have in Processing. Check them out: www.RoboRealm.com. If you download the 30-day trial and make a tutorial, I think the guys at RoboRealm will give you a license key free of charge.

I accept

Let’s see, first off, I am quickly finding out the limitations of Processing… It’s sorta like BASIC: it’s great because it looks like English and is simple and serves its purpose, but if you really need to crunch numbers…

Probably about time for me to start learning C or C++ etc., and I would love a little chunk of tennis-ball or color-tracking code --great if it ended with a simple int x and int y noting the center of the tracked object in relation to the window --this could then be sent out to the pan/tilt servos.

I am using OpenCV 1.0 --this is the version that the OpenCV website told me to download and install, and I am/was a bit confused because the 2.0 version was the big download button and the 1.0 was tiny and much further down on the page.

In terms of C, you suggest Eclipse CDT as my “editor”, correct? I was looking at RoboRealm as well, and you are right --it is well worth it to get the 30-day trial and see if I like it. I also need to start learning about APIs and how everybody is supposed to be talking to each other.

Like I have said before, my brain is in full-on sponge-mode lately (not to mention having a ton of time on my hands) and I am simply in the groove to be learning new stuff. I love the fact that LMR and guys like you, GroG, rik and the like are around to steer me in the right direction.

 

Hi Kankatee

Nice code post, love your hair … I’ve looked at RoboRealm too; they have a nifty interface. MATLAB has a system where you can layer vision filters one on top of the other - I think they even had an interface to do stuff, e.g. move servos, but I didn’t go that far with it

Quiet Servos

What are they? On my system when it tracks faces I always hear the squeeeeek squeeeeek of the servos…  

You are thinking too much

You know, I am always confused about libraries for Bluetooth, and Arduino and PICAXE. I just don’t know why you need them. Here is what I use: myPort.write(?); --that’s it. My BlueSMiRF appears to Processing to simply be a serial port. I don’t think that Processing even knows, and it certainly does not care, that it is actually sending data through Bluetooth. My Bluetooth/BlueSMiRF set-up is just a serial COM number to Processing. In terms of PICAXE-speak, you can set them up to receive data a lot of different ways. They will just sit there (forever) and wait for serial data to come in, or they can receive in the background and write to a “buffer” to be read later, and they can interrupt on a serial send if you want them to. You can send a qualifier (single byte or string) at the start of your send, or you can include something like a CR to let everyone know it is the end of the data. You can also set the PICAXE up to receive a specific number of bytes and keep going through the code once it gets that number of bytes.

It is just a simple system --the Bluetooth is a serial data-pipe; whatever goes in one end comes out the other --both ways. The communication protocol for the PICAXE is whatever you want it to be. That’s it. Right now, for most of my stuff (like the control panel) I send 3 bytes: a letter, a byte of data and a CR. With 52 letters and 256 choices for the data byte, this covers most information I need to exchange. For example: I would send D,232,CR --this tells Processing that I am sending it the level of my drive battery. D means drive battery, 232 is the level and CR finishes it. When it gets to Processing, Processing checks the first byte, runs it through a switch case which then tells the code what to do with the second byte. In this case, the 232 would be reflected by the battery level indicator on the control panel.
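
On the Processing side, the receive end is just bufferUntil(CR) and a switch. A reconstruction of the shape of it (not my exact code):

```
// Receive side of the letter + data byte + CR scheme -- a
// reconstruction of the shape of it, not the exact code.
import processing.serial.*;

Serial port;

void setup() {
  port = new Serial(this, Serial.list()[0], 9600);
  port.bufferUntil(13);            // fire serialEvent on each CR
}

void draw() {
}

void serialEvent(Serial p) {
  byte[] packet = p.readBytes();   // letter, data byte, CR
  if (packet == null || packet.length < 3) return;
  char letter = (char) packet[0];
  int value = packet[1] & 0xFF;    // data byte, 0..255
  switch (letter) {
    case 'D':                      // drive battery level
      println("drive battery: " + value);  // would update the panel gauge
      break;
    // ... one case per letter, 52 letters to play with
  }
}
```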

Java version: I have whatever version of Java was included in the Processing download. Let me know what the most appropriate version to use is and I will be sure to get it.

For the record: I am very excited to offer up Walter as a guinea pig here. Just from your “flow chart” of systems talking to each other, I can tell our minds think the same way. Actually, the more I look at that flow-chart, the more I think that control-system idea was what was in the back of my head when I was building Walter’s control panel. Not to mention that I adore the thought of getting more thinking power off of Walter and onto the laptop.

I am off to check out your website.

Your website

I know your site is alpha right now, but your links are not links!

I can find the Java --where is the download for your stuff?

Just found it

Java version 6.0.200.2