Verbal Skills and NLP

Re: What a cool place?

Yes sir, what a cool place this is!  Thought-provoking post.  Great to see you back here.

I think you touched on a lot of the big issues… emotions, motivations, and personality, to name only a few.

I don’t really know anything about it either.  Like you said, I’m not going to let a little ignorance hold me back though.  

It seems like all these concepts are important and need some representation in an artificial brain.  I think motivations might be the most important one to study next.  There also seems to be an interplay between motivations and emotions that I don’t understand yet.  I agree with you about the limits of abstract knowledge.  I have a feeling that in the end my own A.I. may well end up “optimizing a happy function” as you describe.

I have some ideas on this and some paraphrased ideas from academia.  I think I will save them for separate topic-specific posts so I don’t write 5-page posts.

API-HW data exchange

I’ve been experimenting a little with Anna’s brain API, and I’m ready to take the next step: trying to run Anna on the CCSR hardware (i.e. a brain transplant). For this to work, we need a way for the brain API to receive sensor information from the robot hardware, and to send actuator instructions from the API back to the robot hardware. Have you guys thought about this at all? I could create a single XML template used to exchange data between the brain API and the HW; here’s a scratch version:

https://github.com/beyerly/robotAPI/blob/master/robotDataExchange.xml

For example, the robot hardware would detect and record voice, convert it to text, and send this (in the nlp section of the XML file) up to the API using curl, together with all other sensory info. The brain API would run NLP on the text, parse the sensory data, form a response, and return it in the same XML format. This file would now contain a text reply to be converted to speech by the robot hardware, as well as actuator data, for example to move the robot forward. Could we develop an XML standard like this?
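Just to make the idea concrete, one exchange might look something like the following (rough sketch only; the element names below are made up on the spot and are not what’s in the linked file):

<!-- illustrative only: element names are invented, not taken from robotDataExchange.xml -->
<robotDataExchange>
  <sensors>
    <sonar id="1">87</sonar>                 <!-- distance in cm -->
    <compass>214</compass>                   <!-- heading in degrees -->
    <nlp>
      <heardText>how far to the wall</heardText>
    </nlp>
  </sensors>
  <actuators>                                <!-- filled in by the brain API on the way back -->
    <drive command="forward" distance="20"/>
    <speech>The wall is about one meter away.</speech>
  </actuators>
</robotDataExchange>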

Of course I am reinventing the wheel here; do you guys know of another sensor/actuator XML format we could use?

 

Good luck with your health

Good luck with your health challenges! I’m looking forward to seeing your machines come together. Finished robots are boring; it’s the little improvements every time that make it fun.

Re: Data Exchange

I will set up a “Robot” atom for CCSR and a “Person” atom for you in the next few minutes.  Use the AtomID of your CCSR robot as your Robot.Key for now…you can look up the AtomID for CCSR on the website.

I finally did the brain transplant for Anna to the new brain.  She is calling from the Android app using a simple HTTP request, getting back a response that has a set of name/value pairs delimited with a “|”.  I take the response, do a String.Split on it, then iterate through the results, using the odd values as names and the even ones as values, and load them into a hashtable (to do lookups by name) to be used by the robot.  I’ll send you the Android Java source code that calls the API in case you are doing something similar.  It is an async task.
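In case it helps on your side, here is the same parsing idea as a quick C++ sketch (the response string and the names in it are just made up for the example; they are not actual output from the brain):

// Minimal sketch: split the "|" delimited response into name/value pairs.
// The response string and the names in it are invented for this example.
#include <iostream>
#include <map>
#include <sstream>
#include <string>

int main() {
    std::string response = "Robot.Speech.Response|Hello there|Robot.Command.1.ServiceID|2";

    std::map<std::string, std::string> values;
    std::stringstream ss(response);
    std::string name, value;

    // Tokens alternate: name, value, name, value...
    while (std::getline(ss, name, '|') && std::getline(ss, value, '|')) {
        values[name] = value;
    }

    // Look up whatever the robot needs by name.
    std::cout << "Say: " << values["Robot.Speech.Response"] << std::endl;
    return 0;
}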

The old brain used a web service.  The new brain could have multiple “gateways” built on top of it to use web services, JSON, XML, whatever output we wish, I suppose.  You can try it by calling SimpleAPI.aspx (on the same site as the other pages) and appending the inputs as parameters.  I’d prefer to stick with the SimpleAPI for now, though, if we can make that work for you as a quick way to get started until we decide on something better.  I’ll probably have to do something different to start sending video frames through.  Right now I’m sending light, compass, GPS, thermal, and voice info through.  As long as there are context data atoms for whatever you want to do, you can send them through.

The important thing to remember is that the number of inputs and the number of outputs are variable…that’s why I’m basically using lists of inputs/outputs.  I’m trying to keep things flexible so that someone could use this for bots that don’t have the same capabilities (like being verbal).

Some progress to report that will be directly relevant for you…

I added the ability to see the internal structure of atoms on the website.  No editing yet, but it is a good way to learn a lot for now.

I added the ability to switch on/off any agent in the system for any robot.  For example, I could see you wanting to turn off the MasterBabbleAgent to stop the bot from chattering crazy stuff so much.  You will see a “context data” atom for each switch.  It will have a “.Switch” in the name.  The valid values are “On” and “Off”.

I added the ability to add a “default value” to any item in the context that will be loaded when a session starts, and the ability to “override” any of these values for any robot.  To see this, check out the new “Context Data Override” atom type.  You can see Anna overriding one if you look.

I’m hoping to get a rules engine going today and tomorrow.  I’m basically going to be working on this from noon to midnight or later the next few days.  This will allow rules to be created (by robot) for all kinds of new behavior.  I hope to convert the babbling features over to the rules engine since they have such a great influence on personality and are “crazy”-prone right now.  This should help us work separately on behaviors without messing each other up as much.  You can simply turn off babbling or say “shut up” if it is annoying you.

We might want to use some kind of instant messenger tool so I can answer questions faster and fix issues for you when I am at the computer.  For now just text me.  My # should be in the instructions I sent you.

Sending sensor data…

Basically, you just need to find the “Context Data” atoms (or create new ones) that represent the sensor data you want to pass into the service.  They all start with “Robot.Sensor.”, like “Robot.Sensor.Sonar.1”.  I created quite a few, so you might have what you need.  Send me a message for any more you want and I’ll set them up (it only takes a few seconds each).  Once I get the page for editing atoms created, you’d be able to create them yourself.

Receiving actuator data back…

This is one area that will perhaps require some thought/redesign for our robots to function in the same environment.  The system (which is still valid for Anna) sends back “Commands”, which Anna then routes to the proper service in the bot.  Each command has a ServiceID, a CommandID, and up to 4 integer data items.  The ServiceID acts as a “ZipCode” for routing the command (think switch statement) to the proper service.  Each service then uses the CommandID (another switch statement) to execute a given command within that service and pass data to it.  For me this “routing” is important because it allows me to send/delegate commands around between Android and Arduino with great universal simplicity.  Having a common “Command” format also allowed me to create “Missions”, which executed and tracked a series of many commands in sequence.  It’s not ideal, but there are some huge benefits in my opinion.

I could certainly evolve the service so that it manipulated actuators more directly, but the command model does have some merits.  It also separates the details of an individual robot’s implementation of commands from the brain service.  There are hundreds of “instructions” (an instruction is a command with specific data) already set up that represent things like “Drive southwest”.  Each of these instructions also has verbal results.  I intend to add emotion to that too but haven’t yet.

The simplest way, in my opinion, would be if you wrote a class or classes on your bot that interpreted the ServiceIDs, CommandIDs, and Data(1-4) and implemented each command you wish to.  Our bots are so similar, and there are only a few drive commands and servo commands.  I could send you an Arduino constants file that has all of them defined in it.  If you want to use the routing/mission stuff too, I can send that source as well.  We can design a new system when a better one becomes clear.  Since our bots are so similar, I think this should work for now.  You can see the full list of services, commands, and instructions (a command with data values) documented as atoms on the website (now that you can see the data in the atoms).

Here are the ones necessary to get started…

//Service Constants
const int DRIVE_SVC = 2;
const int SERVO_SVC = 5;

//Drive Commands
const int STOP_CMD = 1;
const int FORWARD_CMD = 2;
const int REVERSE_CMD = 3;
const int FORWARD_ON_HEADING_CMD = 4;
const int ROTATE_LEFT_CMD = 5;
const int ROTATE_RIGHT_CMD = 6;
const int ROTATE_TO_HEADING_CMD = 7;
const int ROTATE_LEFT_DEGREES_CMD = 8;
const int ROTATE_RIGHT_DEGREES_CMD = 9;
const int DRIVE_LEFT_DEGREES_CMD = 13;
const int DRIVE_RIGHT_DEGREES_CMD = 14;

//Servo Commands
const int POINT_NECK_IN_DIRECTION_CMD = 1;
const int POINT_NECK_ON_HEADING_CMD = 2;
const int MOVE_NECK_LEFT_DEGREES_CMD = 4;
const int MOVE_NECK_RIGHT_DEGREES_CMD = 5;
const int MOVE_NECK_UP_DEGREES_CMD = 6;
const int MOVE_NECK_DOWN_DEGREES_CMD = 7;
const int POINT_HEAD_CMD = 8;
const int MOVE_HEAD_CMD = 9;
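And here is a bare-bones sketch of the kind of routing I mean, using the constants above.  The Command struct and the handler functions are just placeholders for whatever CCSR already has; only the constant values are real.

// Routing sketch only: handler functions are placeholders for CCSR's own code.
// Assumes the constants above are defined in the same file.
#include <stdio.h>

struct Command {
  int serviceId;   // routes to a service, like a zip code (DRIVE_SVC, SERVO_SVC, ...)
  int commandId;   // selects the command within that service
  int data[4];     // up to 4 integer data items
};

// Placeholder handlers; CCSR would call its own motor/servo code here.
void stopDrive()            { printf("stop\n"); }
void driveForward(int d)    { printf("forward %d\n", d); }
void rotateLeftDeg(int deg) { printf("rotate left %d deg\n", deg); }
void moveNeckUpDeg(int deg) { printf("neck up %d deg\n", deg); }

void executeCommand(const Command &cmd) {
  switch (cmd.serviceId) {
    case DRIVE_SVC:
      switch (cmd.commandId) {
        case STOP_CMD:                 stopDrive();                break;
        case FORWARD_CMD:              driveForward(cmd.data[0]);  break;
        case ROTATE_LEFT_DEGREES_CMD:  rotateLeftDeg(cmd.data[0]); break;
        // ...and so on for the rest of the drive commands
      }
      break;
    case SERVO_SVC:
      switch (cmd.commandId) {
        case MOVE_NECK_UP_DEGREES_CMD: moveNeckUpDeg(cmd.data[0]); break;
        // ...and so on for the rest of the servo commands
      }
      break;
  }
}

int main() {
  Command cmd = { DRIVE_SVC, FORWARD_CMD, { 20, 0, 0, 0 } };  // made-up example values
  executeCommand(cmd);
  return 0;
}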

Hope to get the remote control pages up soon so we can control bots through computer/tablet/phone.  That will probably force me to start sending video through again.

Hope this is making sense.

Regards,

Martin