Question 4 Engineer or Hobbyist


OK, I am going to share my situation in hopes that someone out there can answer some of my questions. I will post this on several boards.

 

Here is what I have: two Propeller 8-core CPUs, three BASIC Stamp IIs with different speeds, 11 small robots, one Arduino, one Arduino Mega, several single-core computers plus one dual-core and one quad-core, one laptop and one netbook, one robot head, and wired and wireless broadband internet through a Linksys WRT54GL router. I have somewhat entry-level experience in BASIC and C++.

 

I have mastered the art of obstacle avoidance. But when I try to do more, everything gets so slow it becomes impractical.

 

I have two webcams that can recognize faces, tell who they are, and track moving objects. My voice is recognized, and I can hear the answer in a human voice. It can recognize colors and flesh tones.

 

There is a database that can interact and learn from me speaking to it. I have not tried OCR yet, but I am sure it will work. I have not tried object recognition, but I am sure it will work. I have used chatterboxes, OpenCV, and lots of other open-source stuff.

 

Now, the problem is, all of this is scattered across many robots and computers. How would one go about tying all of these items together into ONE machine? I am lost as to how to do this. Where do I start? I have a deep desire to experience AI in a machine. Not to give it orders or commands, but to communicate with it and have it have its own free agency. To watch it learn and grow and become more than just the sum of its parts.

 

Oh, BTW, my wife says I have NO MORE money to dump into this stupid project. What would you do if you were in my shoes? Where would you start?

 

I hope there is an engineer out there, or some hobbyist, who can answer that question for me.

 

Thanks!

 

MovieMaker

 

[email protected]

 

 

 

 

FWIW

For what it’s worth, I like the dual-processor approach taken by (e.g.) the Paparazzi autopilot, which uses a main processor for the autopilot itself and a secondary processor to handle the non-critical GPS location.  To extrapolate this model to other robots, the obstacle-avoidance (reactive) systems could run on one processor, while the face/object recognition could run on one of the big CPUs as a peripheral operation.  So conceptually the robot always has something to do, it has a spare “cerebrum” to process objects, and if it recognizes something, it can stop what it’s doing and investigate (have you ever walked past someone before recognizing them and turning around to interact?).  Of course, the first thing the face/voice processor should look for and recognize is the word “STOP!!!”, which can be processed quickly and induce a shutdown.

It sounds like there are a lot of capabilities here.  Researching an interface like serial, “soft serial”, or I2C for communication among the computation nodes might be a first step toward a modular intelligence.
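To make the "communication among computation nodes" idea concrete, here is a minimal sketch of a one-line text protocol the nodes could pass over whatever link they share (serial, I2C, a socket). The field names and message types are made up for illustration, not from any existing library:

```python
# A hypothetical one-line message format for talking between nodes
# (e.g. the reactive Arduino and the vision PC):
#   SENDER:TYPE:PAYLOAD\n

def encode(sender, msg_type, payload):
    """Pack a message into a single newline-terminated line."""
    return f"{sender}:{msg_type}:{payload}\n"

def decode(line):
    """Unpack a line back into (sender, type, payload)."""
    sender, msg_type, payload = line.strip().split(":", 2)
    return sender, msg_type, payload

# Example: the vision node tells the reactive node to stop
line = encode("vision", "CMD", "STOP")
print(decode(line))  # → ('vision', 'CMD', 'STOP')
```

Because every message is plain text ending in a newline, the same framing works unchanged whether the bytes travel over a wired serial cable or a wireless link.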

my 2¢,

-John

 

re: Question 4 Engineers or hobbyists with a passion

Thanks for the reply.  I sort of know what to do, but what I need to know is HOW to connect these things together. For example, I have face recognition on my computer terminal. How do I call that program from my robot, and vice versa? Some of these programs are written in C++, others Java, others Lisp, others other languages. I need to know how to get from one to the other.  I think a wireless connection with terminal software and a serial port may be a start. But I could use some advice.
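One language-agnostic way to "call" a program written in some other language is to launch it as a subprocess and read what it prints. Here is a sketch of that idea; `facerec` is a hypothetical executable name (the example fakes it with an inline script so it runs anywhere):

```python
# Run another program as a subprocess and capture its output.
# This works whether the other side is C++, Java, or Lisp, as long
# as it can print a result to stdout.
import subprocess
import sys

def run_recognizer(image_path):
    # In a real setup this would be something like:
    #   subprocess.run(["facerec", image_path], capture_output=True, text=True)
    # Here we substitute a tiny inline Python script standing in for the
    # hypothetical "facerec" program, so the sketch is self-contained.
    result = subprocess.run(
        [sys.executable, "-c", f"print('FACE:unknown:{image_path}')"],
        capture_output=True, text=True)
    return result.stdout.strip()

print(run_recognizer("webcam0.jpg"))  # → FACE:unknown:webcam0.jpg
```

The nice part of this pattern is that the caller never needs to know what language the other program is written in, only what it prints.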

 

Thanks

Thanks!

I don’t know what an API is. Also, I do not know how to make a .dll file. I see them all the time in Windows. Most of the programs run from GUIs, like the chatterbox program.

:slight_smile:

These guys are nerds, I’m an ol’ Kansas farm welder…

Here’s my two cents…

Don’t go cheap or lazy on your base, and don’t underestimate what it will take to hold all your stuff. I don’t want to seem snobby here, as I have a background in both metal and woodwork, but I have seen some pretty weak-ass frames go by around here. I have also seen some very tall, very wobbly frames as well. I would say to go ahead and triple what you think you will need. Walter is about 2 1/2’ wide and 3’ long, and weighs in at about 80 lbs. I have it powered by two DeWalt drill motors with some good secondary gearing. This setup allowed me to actually ride the bare frame. Now that I have completed the robot (now at about 1/2 my weight), I find that extra power is a godsend when you need slow speed and still enough torque to get over bumps.

Add to this that, with all that extra computer gear, you are looking at probably around a 24 amp-hour SLA battery. There’s 30 pounds of battery right there!

Most of the big bots I have seen around here are using wheelchair motors, and if you have the money, that would be my vote.

In Conclusion:

Over-build your chassis, over-power it with good low-end gearing, and use a big battery.

**Here are some of my objectives.**

Almost all of these have already been done in Project Aiko, but not by me. For me, just some of them, and only individually.

Goals:

 

Voice Recognition

(scores of thousands of sentences)

Speech Generation

Face Recognition (hundreds)

Flesh Recognition

Emotion Recognition

Object Recognition (thousands)

Motion Recognition

OCR

Chatterbox Interface

Net Interface

WolframAlpha Interface

Google Interface

Wiki Interface

(Emotions Interface)

Brow Interface

Jaw Interface

Eye Tracking

Ear Movement

Obstacle Avoidance

Weather, Time, Date

Know if Daytime or Nighttime

Wireless Serial Communications

Math Calculations

Be able to experience Pain, Pleasure

Human looks (much later, maybe NEVER)

Be able to laugh at jokes (maybe NEVER)

 

Learning an API is cheap!

MMC, the good news is that learning to use an API is cheap, and you can put it on your resume.  If you have MSVC (my version is 6.0), I can probably find a tutorial; otherwise there may be ways of writing it in Python or something.  You may want to search for “[Program X] API” or “[Program X] SDK”, or search your software’s “help” file or documentation (some programs tell you how to script functions from your command prompt or the Windows API) – look under “Programmer’s Reference” if that section is there, near the end of the document or as an appendix.  I think you’ll need the API to get Software X and Software Y and Serial Port Z to work together.

If you publish more detail about the software (names at least) after you search for the “API” or “SDK”, then folks like me (or someone better at API-wrangling) could maybe write some pseudocode for coordinating the communications. 

As an alternative to the SDK, if the code is open source, you can add my “message passing via file” kludge into the code.  Also, if the code is open source, one might be able to find the API by looking at the code (a better nerd than I could write one), if Google fails to yield details.
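The “message passing via file” kludge could look something like this: one program appends lines to a shared mailbox file, and the other polls it, remembering how far it has read. This is a sketch under the assumption that both programs can read and write a common file; the file name and message strings are made up:

```python
# Crude but language-agnostic message passing through a shared file.
# Program A calls send(); program B calls receive() in a loop, keeping
# the returned offset between calls so it only sees new lines.
import os
import tempfile

MAILBOX = os.path.join(tempfile.gettempdir(), "robot_mailbox.txt")
if os.path.exists(MAILBOX):          # start the demo with a clean mailbox
    os.remove(MAILBOX)

def send(message):
    """Producer side (e.g. the vision program) appends one line."""
    with open(MAILBOX, "a") as f:
        f.write(message + "\n")

def receive(offset=0):
    """Consumer side; returns (new_messages, new_offset)."""
    if not os.path.exists(MAILBOX):
        return [], offset
    with open(MAILBOX) as f:
        f.seek(offset)
        lines = [ln.strip() for ln in f.readlines()]
        return lines, f.tell()

send("vision: saw a face")
msgs, pos = receive()
print(msgs)  # → ['vision: saw a face']
```

It is slow and inelegant compared to a real API, but any language that can open a file can join the conversation, which fits the C++/Java/Lisp mix described above.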

Looking forward to more details,

-John

GitHub?

MMC – if you can share some of the code or file names, that might help.  GitHub seems set up for code sharing, so maybe you could set up an account and post some of the code.  It’s been a while since I’ve done API work – and only in MS Visual Studio.  I’d like to revisit some robotics-oriented API-wrangling, maybe with open-source tools, and see what I can do.

Happy to help get you started, for now, if I can,

-John

Everything I am using so far is open source.

I guess you are talking about Microsoft Visual C.

 

John,

 

Thank you for all your help today. You have pointed me in a new direction to go.

yup

Glad to help :)  Good luck on the robot  :)  Don’t forget the robot videos if it works!

-John

Might I suggest building a

Might I suggest building a UML diagram? This might help you figure out the communication paths and bring some order to the chaos.  It would also help in working on an API. You’ve got quite a list, so it might be wise to pick a set of features that you want as a base of functionality, and several sets that you will want to add on in the future.

Another thing: for each one of your procs and computers, you might want to build a list of the interfaces it can communicate over, like SPI, serial, USB, network, etc. This way you can figure out what can communicate with what.

OK, let me drop some names on you. Some are trial versions until

until they work.

OpenCV, framecap, Mavis, Qtracker, TTSreader, answerpad, WolframAlpha, the Robocomm Roomba software, and maybe the Alice chatterbot.  I have wireless broadband internet with a wireless router, but no wireless on any robot yet; I just have a wired serial connection to each.

I have 11 robots, but for the sake of simplicity I will tell you about the main ones. I have a turtle-type robot for obstacle avoidance with an Arduino Mega, made out of two Frisbees.  I have a robot head that I am currently constructing. I am waiting on the neck servos to come in (haven’t ordered them yet, until funds are appropriated). I also need to hook up the servos to the ears, brows, and jaw, and to the eyes. Right now I just have an aluminum frame with two webcams that rotate left to right, controlled by an aluminum shaft to go L/R. I know how I am going to do this; this is not an issue.

I also have a Boe-Bot robot and several Parallax chipsets that I am not using. I have a spare Arduino delci that I plan to control the Roomba with; I will be putting the head on the Roomba. I purchased a 200 MHz Linksys router because it had embedded Linux. I flashed it, and I can see the Linux, but I have no idea what to do now. It is a wireless router that I was possibly wanting to use for processing on the robot. Maybe it can transmit, or receive, or both, if I hook it to the Arduino or the Roomba, etc. I was thinking about making a Tod-Bot when I bought it.

So, I should wind up with a nice fast computer running XP that wirelessly controls the Roomba, the router, the Arduino, and other stuff that I hook to the combination robot I intend to build.

Hope I have given you enough info.

 

Thanks John,

MovieMaker

wow! lots of info

MMC – Wow – there’s lots of info on the OpenCV API, and I’m tempted to add computer vision to my hobby project queue.  Some of the other names also have lots of info.  What programming languages do you write, or are you interested in learning?  Since I’m looking to leverage this discussion to learn how to call APIs in other software, I’m leaning toward trying out Python (which I want to learn) or Perl (if I can do API calls in Perl). But I am curious about your programming background and interests, especially since you’ve been doing interesting and apparently successful work with the various subsystems you mentioned.

-John

Update from MovieMaker

Well, like everyone, I guess, I started off programming in BASIC in 1975. Before that, COBOL, RPG, and FORTRAN.  After that I wrote several vertical-market applications in dBASE II (before it was dBASE, it was Vulcan).  But this was 20 years ago. I would rate myself as an entry-level programmer with BASIC and C++ experience.  I have written in PBASIC and RobotBASIC.  I haven’t been too good at ASM and Spin lately.  And Visual BASIC and Visual C are all confusing to me, because I cut my teeth on the command prompt, not a window. So I guess I would have to say C is my favorite language now. I never did Python or Perl, but if they are similar to C, I could probably learn them. I have read several books on HTML, Java, and the .NET Framework, but without an application to apply them to, it was all theory, and boring.

The Arduino C environment is my favorite. I have heard that many people like ARM and the PIC stuff, but I only programmed one once, when I built the BrainMachine from Make magazine. It was hard to understand, but it worked.

Thanks, John for the info.

 

Update

Please define UML, API, and SPI.

 

Thanks.

just pointing this out, but

Just pointing this out, but you have to think about what experiencing pain or pleasure is. It’s more a philosophy question than a robotics question.

Is a line follower happy when it’s on a line? Does it dislike being off the line?

GIYF

SPI - http://en.wikipedia.org/wiki/Serial_Peripheral_Interface_Bus – one way that hardware can communicate

API - http://en.wikipedia.org/wiki/Application_programming_interface – a way for software to communicate

UML - http://en.wikipedia.org/wiki/Unified_Modeling_Language
UML - http://www.uml.org/#Links-Tutorials

From context, rather than using a UML specification (http://www.omg.org/technology/documents/modeling_spec_catalog.htm#UML), I think the suggestion might be to make a flow chart with boxes for the different software parts that are interacting.  Actually learning UML to create the flow chart for the software you are learning to interface seems like a bit of extra work to me, especially if there are no plans to make a commercial product.

voodoobot - does that sound right?

-John

about pain and pleasure.

What I am talking about is having negative points for bad behavior and positive points for good behavior, linked to confidence levels. If it runs into the wall several times, like having a headache, it will lose points. It will eventually decide it is not good to go in that direction at this particular time. It will make decisions based on the number of points and the confidence levels, if you understand what I am talking about.
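The points idea above can be sketched in a few lines. This is a minimal illustration, not anyone's actual implementation; the class and method names, and the scoring amounts, are made up:

```python
# Sketch of the points idea: each direction keeps a score; bumping a
# wall subtracts points ("headache"), clear travel adds them, and the
# robot prefers whichever direction currently has the best score.
class BehaviorScore:
    def __init__(self, choices):
        self.points = {c: 0 for c in choices}

    def reward(self, choice, amount=1):
        self.points[choice] += amount

    def punish(self, choice, amount=2):   # headaches cost extra
        self.points[choice] -= amount

    def best(self):
        """The choice with the highest score, i.e. highest confidence."""
        return max(self.points, key=self.points.get)

nav = BehaviorScore(["left", "right", "forward"])
nav.punish("forward")     # hit the wall
nav.punish("forward")     # hit it again
nav.reward("left")        # clear path
print(nav.best())         # → left
```

The score for each choice acts as the confidence level: the more a direction is rewarded, the more confidently the robot will pick it, and repeated punishment eventually makes it avoid that direction "at this particular time".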

Thanks for all of the help to all of you.