Super Droid Bot "Anna" - w/ Learning AI

Wolfram Assumptions

I hope you don’t mind if I pick your brains here; you seem to have a very good grasp of NLP and such.

How did you get Anna to solve the “assumptions” issues with Wolfram?

For example, “What time is it” does not return the same value as “What’s the time”: Wolfram assumes the first is a date object, but treats the second as a word and tries to define the word “time”.

I mean, you could of course go through every sentence it trips up on and patch it by querying with the correct assumption, but do you know of any more general approaches that would work for more obscure queries?
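For what it’s worth, Wolfram|Alpha’s Full Results API does accept an `assumption` parameter on the query URL, so one pragmatic (if unglamorous) workaround is to rewrite known ambiguous phrasings into a canonical form before the query is ever sent. A minimal sketch, assuming a hypothetical phrase table (the entries and class names here are illustrative, not from the original project):

```java
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;
import java.util.Map;

class WolframQuery {
    // Hypothetical table mapping known trip-up phrasings to a canonical query.
    private static final Map<String, String> CANONICAL = Map.of(
        "whats the time", "what time is it",
        "what's the time", "what time is it"
    );

    /** Build a Full Results API URL, rewriting known ambiguous phrasings first. */
    static String buildUrl(String appId, String input) {
        String normalized = CANONICAL.getOrDefault(input.toLowerCase().trim(), input);
        return "http://api.wolframalpha.com/v2/query?appid=" + appId
            + "&input=" + URLEncoder.encode(normalized, StandardCharsets.UTF_8);
    }
}
```

This obviously doesn’t scale to obscure queries on its own, but it becomes more interesting if the table is learned rather than hand-written.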

 

Cheers!

http://www.popularmechanics.com/technology/engineering/news/why-watson-and-siri-are-not-real-ai-16477207

Great topic for the NLP

Great topic for the NLP forum! Can we move it there? Complicated and fascinating question. I feel these are really ‘idioms’ and not grammatical structures you can deconstruct with an algorithm. So what you’d really want is the AI to ‘learn’ these idioms by trial and error, just as a human would, instead of endlessly programming the English lexicon. So ‘what’s the time’ and ‘what time is it’ would grammatically lead to 2 possible queries: what is the concept of time, and what is the value of time. The AI randomly picks one, and if it’s incorrect, we correct it (saying ‘bad robot’), and the AI would store this correction as an idiom it would use later. Would that work? 
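That trial-and-error idea could be sketched as a simple correction store: guess an interpretation, and when the user says ‘bad robot’ and supplies the right one, remember it keyed on the normalized phrase. A minimal sketch (class and method names are hypothetical, not from Anna’s actual code):

```java
import java.util.HashMap;
import java.util.Map;

/** Minimal sketch of the idea above: remember which interpretation
 *  of an ambiguous phrase the user confirmed or corrected. */
class IdiomStore {
    private final Map<String, String> learned = new HashMap<>();

    /** Return the learned interpretation, or the default guess if none stored. */
    String interpret(String phrase, String defaultGuess) {
        return learned.getOrDefault(normalize(phrase), defaultGuess);
    }

    /** Called when the user corrects the robot ("bad robot") with the right reading. */
    void correct(String phrase, String rightInterpretation) {
        learned.put(normalize(phrase), rightInterpretation);
    }

    // Strip punctuation and case so "What's the time?" and "whats the time" collide.
    private static String normalize(String s) {
        return s.toLowerCase().replaceAll("[^a-z0-9 ]", "").trim();
    }
}
```

The normalization step is doing the real work here: it is what lets one correction cover the small family of surface variants of the same idiom.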

Work in Progress - Forwarding to NLP Forum

I spent the last hour trying to put together a coherent response to this one.  It’s late, I don’t like it.  I’m going to revise before posting.

Happily and sadly for me, I like byerley’s idea better than what I’m currently doing.  Happy to get new and better ideas, sad that I have to figure out how to code it and get it to work in my system!

This does seem like a good thread for NLP Forum, so I’m going to post my input there.

Moving to NLP

As suggested, I will repost in NLP forum to be discussed further :slight_smile:

 

Cheers!

I was thinking… would Anna

I was thinking… would Anna be able to go on any social media? (Philae is sending tweets too :wink:) Could she do random witty banter posts, and react to pokes/tweets from us? Maybe she could be a full-blown LMR member and talk with members in the shoutbox? Or is all of this outside of terms-and-conditions ;-)?

re: Robots on Social Media

I really like this idea.  About a year ago I was thinking quite a bit about robots having their own Facebook accounts, and checking FB on behalf of people and reporting on what was new.  I was dealing with so many APIs back then, I never got around to figuring out the FB API.  I think their policy is people only, but it’s worth a try.

At the time, I settled on text messaging so the robot could chat and receive commands and send pictures back when asked.  It could also forward messages to other numbers if I said something like “Tell Jane I said hello.”  … a text would be sent to Jane’s phone number, from Anna, saying that I said hello.
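A command like “Tell Jane I said hello” can be picked apart with a single regular expression before looking up the recipient’s phone number. A minimal sketch (the pattern and class name are my own illustration, not the robot’s actual parser):

```java
import java.util.Optional;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

class TellCommand {
    // Illustrative pattern for commands like "Tell Jane I said hello."
    private static final Pattern TELL =
        Pattern.compile("(?i)^tell\\s+(\\w+)\\s+I said\\s+(.+?)\\.?$");

    /** Returns {recipient, message} if the utterance matches, else empty. */
    static Optional<String[]> parse(String utterance) {
        Matcher m = TELL.matcher(utterance.trim());
        return m.matches()
            ? Optional.of(new String[] { m.group(1), m.group(2) })
            : Optional.empty();
    }
}
```

From there it is just a lookup from “Jane” to her number and a call to whatever SMS service is in use.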

I’ve never used Twitter myself, but it seems like an option.  If anyone out there has any Java or .NET code that calls FB or Twitter and wants to share, I’ll re-write it into the shared brain project, which will land on GitHub for everyone before too long.  Is there an API for the Shout Box?

Take the page, parse with

Take the page, parse it with jsoup, fill in the form, submit. Even easier for plain HTTP GET requests, as it’s just URL editing.
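To make the submit step concrete: jsoup handles the parsing, and the submission itself is just an HTTP POST of the form fields. Here is a sketch using only the JDK’s own `java.net.http.HttpClient` (the URL and field names are made up for illustration):

```java
import java.net.URI;
import java.net.URLEncoder;
import java.net.http.HttpRequest;
import java.nio.charset.StandardCharsets;
import java.util.Map;
import java.util.stream.Collectors;

class FormSubmit {
    /** Build a form-encoded POST for a form's action URL from field/value pairs. */
    static HttpRequest build(String actionUrl, Map<String, String> fields) {
        String body = fields.entrySet().stream()
            .map(e -> URLEncoder.encode(e.getKey(), StandardCharsets.UTF_8)
                + "=" + URLEncoder.encode(e.getValue(), StandardCharsets.UTF_8))
            .collect(Collectors.joining("&"));
        return HttpRequest.newBuilder()
            .uri(URI.create(actionUrl))
            .header("Content-Type", "application/x-www-form-urlencoded")
            .POST(HttpRequest.BodyPublishers.ofString(body))
            .build();
    }
}
```

Sending it is then one line: `HttpClient.newHttpClient().send(req, HttpResponse.BodyHandlers.ofString())`.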

Anyway, a bot could just move the mouse around on a PC and see what is on the screen. Through the Java Robot library I managed to get a computer to play a complex card game.

You see, I never got why

You see, I never got why Star Trek’s Data would have to punch keys on the console (although he could do that very fast). Why didn’t he just have a Wi-Fi link into the Enterprise computer? Even R2-D2 had a wall-socket plug-in, although that involved some seemingly redundant mechanical rotation too… 

Absolutely Amazing

Came across your project yesterday and thought - wow - now that’s a robot :slight_smile:

A couple of questions if I may:

1. What have you used to animate the facial expressions on the Android phone? I was thinking of doing something similar and have a spare HTC Desire HD that I wanted to use.

2. How have you interfaced the phone?

Many thanks

re: JohnnyAlpha

Thanks much Johnny!

You’ll have to build an Android app (I use Eclipse as the IDE) with one or more Activities (forms).  There are several ways to do it.  For the face, I created a custom View and drew graphics to the view’s canvas object.  The face is a combination of circles, arcs, lines, etc. drawn to this canvas several times a second.  That gives the effect of animation.

It took me the better part of a weekend to figure it out and get it to work the way you see in the videos.  The eyes were pretty easy, the mouth not as much.  I probably spent a few more hours getting the eye reticles (there is one for video, and another one for sonar) to work.  I don’t demo the sonar one in the videos…but it is like a radar plot with a bunch of pie slices, one for each sonar.

I interface from the phone to the Arduino Mega ADK using a USB cable, with USB in “Android Accessory” mode.  You’ll have to create several things (an Activity, a BroadcastReceiver, a Runnable to process the USB messages in another thread, and several other pieces).  It took me going through some bad examples found through Google, failing a lot, and eventually coming up with something that worked and was reliable; several long days and nights, I think.  Worth the effort though.  Do yourself a favor and have just a few (or one) standard messages that pass back and forth, and things will be simpler.  It’s probably a lot easier for anyone who has programmed USB before; I wasn’t used to translating ints, strings, etc. to bytes and back, so I had a learning curve there too.  In the end, the Arduino side is pretty short and sweet; it’s the Android side that is a lot less straightforward.
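The ints-and-strings-to-bytes translation mentioned above gets much easier if every message uses one fixed frame layout. A sketch of the idea with `ByteBuffer` (the layout here is illustrative, not the robot’s actual protocol):

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

/** One fixed frame: a 1-byte command id, a 4-byte int argument,
 *  then a length-prefixed UTF-8 string payload. Layout is illustrative. */
class UsbFrame {
    static byte[] pack(byte command, int arg, String payload) {
        byte[] text = payload.getBytes(StandardCharsets.UTF_8);
        ByteBuffer buf = ByteBuffer.allocate(1 + 4 + 2 + text.length);
        buf.put(command).putInt(arg)
           .putShort((short) text.length).put(text);
        return buf.array();
    }

    static Object[] unpack(byte[] frame) {
        ByteBuffer buf = ByteBuffer.wrap(frame);
        byte command = buf.get();
        int arg = buf.getInt();
        byte[] text = new byte[buf.getShort()];
        buf.get(text);
        return new Object[] { command, arg, new String(text, StandardCharsets.UTF_8) };
    }
}
```

The Arduino side just mirrors the same field order byte by byte, which is what keeps that side short and sweet.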

If you are really serious about this, I can send you some code for some of the individual pieces.  You’ll have to sift through a lot of stuff to find the essential pieces you need though.  There may be easier ways to do it now, haven’t looked lately.  Some people use bluetooth.

Happy coding,

Martin 

hello,this is simply an

Hello,
this is simply amazing work!

I would like to know how you make it speak. Do you use a text-to-speech converter or something?

I have been building a robot since the start of the Christmas holidays, programming the controller while continuing to work on the robot itself. I have many options and functions that I will be adding; one of the suggestions is to make it speak in some cases.

re: Speech

I have a habit of diving deep into something for a day or two, getting it to do what I want, and then never needing to look at it or touch it again.  Because of this, I forget a lot of details.

The robot uses the text-to-speech service that comes with various versions of Android.  You can use various male/female voices for different languages.

When I am testing the robot’s higher-level brain with the robot off, I use a Windows app written in C# that uses whatever text-to-speech Microsoft provides with Windows 7.

Both are really easy to use.

 

Liked your latest video (lasers)
Now, were you wearing pink, or does Anna just like shooting you?

i have no idea about that !

I have no idea about that! So you mean the robot is speaking through the phone? And when you remove the phone, it will stop?

Your Robot

Your robot is so COOOOOLLL! 

re: Being cool.

Thanks!

re: firashelou

The phone is her face and highest-level brain physically on the robot.  It handles speech, the face, emotions, motivations, vision processing, and OCR, and calls an even higher-level brain for processing language, reasoning, and access to various other web services.  The Android phone is the “center” of the 4 different brains the robot uses, and so is also a conduit for a great deal of the messages that pass around from one platform to another.  If you unplug the phone from the bot, it’s like removing the brain from a person. With the phone removed, nothing works except obstacle avoidance, IR remote control, radio control, and other low-level modules running on the microcontrollers.  No brain, no autonomy.

aha i see ! really great

Aha, I see! Really great work :slight_smile: