Practical Uses of Artificial Neural Nets?

I have read over the theory behind neural networks and found open source code to help implement them in various ways.

I was wondering if any LMR users out there had put neural nets to any practical uses on their projects?  What kind of neural net did you implement?  For what purposes?  Was it worth the effort?  What problems in robotics do you think are well suited to neural nets?

I have run across people who are avid enthusiasts of neural nets; however, I find too few examples where they have been applied to practical issues on actual bots.  Because of this, I put myself in the "Wow, that sounds interesting, but I am skeptical about successful applications of it" camp.  That means "not worth bothering with" to me at present.  Being open-minded, however, I wondered what other members' experiences have been.

Cheers,

Martin

Vision, Balancing…
The only uses I’ve seen were in vision and balancing bots.

I believe the post office still uses a complex ANN for reading addresses.

One of the problems with

One of the problems with ANNs is that they are primarily pattern matchers. To get anything else requires a fairly complex ANN.

I've read books where people have used them to attempt a bug-for-bug translation of a single brain function. For example, one chapter told of someone who created an ANN to deal with numbers the way humans do; note, I did not say accurately. It had the basic arithmetic functions and seemed about as accurate as a human doing them quickly in their head.

There is a Python library for ANNs, called PyBrain at http://www.pybrain.org

re: problems with ANN

Unfortunately, your assessment is pretty much the same as mine…pattern matchers and such.  A lot of people trying to get a boolean answer from a lot of inputs.  A few decades of work on this and that’s all they have to show for it.  Seems lame unless you are getting paid to work on a very specific problem.

Someone was trying to tell me, "You should build your brain with ANNs."  I was saying, "I'm more of a Marvin Minsky Society of Mind guy."  I'm so glad I didn't waste time trying to create some kind of general-purpose ANN.  What a colossal failure that would have been.

I could see using one to recognize thermal signatures…vision again, and not all that useful.  I would love to try something with ANNs if I could see a useful outcome.

If you are trying to take a

If you are trying to take a bunch of data and from it extrapolate a value for a particular point, that lends itself to a neural network.  For instance, if you have a bunch of data points of where you have been, a neural network can easily extrapolate where you will likely be at time x.  Robot mapping algorithms a la the Google car, pattern matching, any type of machine learning, etc. are all solvable via neural network techniques or patterns.

I don’t pretend to be an expert on the math and usage of neural networks.  I am still trying to grasp these ideas and understand what it brings to the table.  I do know that some really smart people who have been successful solving the problems you are trying to solve with Anna have used these approaches. Does that mean your approach is wrong?  The mountain has many paths to the top; your path might be something completely new that hasn’t been seen before.  You have done some incredible stuff with Anna.  

http://www.letsmakerobots.com/blog/nhbill/artificial-intelligence-framework

This might be something you could incorporate into your Anna.  It would fit in well with your server-based architecture and could be a way forward to teach her to learn from her environment.  This Variable Stochastic Automaton approach is similar to how a baby or a puppy learns about their world.  Based on events that happen, you have a list of possible reactions, each of which starts off equally likely to occur.  Each time a reaction to an event occurs, the robot gets feedback as to whether it was a good reaction (the likelihood of doing that again goes up) or a bad reaction (the likelihood of that reaction occurring goes down).  I was able to teach my robot in ten minutes that backing up and turning away from obstacles was the best reaction to running into something.  Imagine getting Anna to fine-tune what she says based on how people react to her!
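
A minimal C# sketch of that update loop, with hypothetical names (Bill's actual code may look quite different):

```csharp
// Every reaction starts off equally likely; favorable feedback raises a
// reaction's weight, unfavorable feedback lowers it.
using System;
using System.Collections.Generic;
using System.Linq;

public class StochasticAutomaton
{
    private readonly Dictionary<string, double> weights;
    private readonly Random rng = new Random();

    public StochasticAutomaton(IEnumerable<string> reactions)
    {
        // all reactions start off statistically equal
        weights = reactions.ToDictionary(r => r, r => 1.0);
    }

    // Weighted random choice of a reaction to the current event.
    public string ChooseReaction()
    {
        double roll = rng.NextDouble() * weights.Values.Sum();
        foreach (var pair in weights)
        {
            roll -= pair.Value;
            if (roll <= 0) return pair.Key;
        }
        return weights.Keys.Last();
    }

    // Feedback: good reactions become more likely, bad ones less likely.
    public void Reinforce(string reaction, bool favorable)
    {
        weights[reaction] = Math.Max(0.01, weights[reaction] * (favorable ? 1.2 : 0.8));
    }
}
```

Teaching "back up and turn away" as the response to a bump event is then just a loop of ChooseReaction() followed by Reinforce() with the bump sensor's verdict.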

I have many ideas on this if you are interested.  

Regards,

Bill 

re: Bill

Thanks for the post Bill.  …love that quote “the mountain has many paths to the top”.  You said you have many ideas if I am interested.  I am VERY interested, so please post away!

Your post about AI frameworks is towards the top of my “Top things I’d like to try out” list…ahead of neural nets I might add.  I would not describe the solution as a neural net though, would you?  Yes, it does learn, but there are many approaches to learning.

By the way, I don’t really see any of this as a “my approach vs. ANNs”.   My approach has simply been to use whatever concepts work, write code, and piece it together into an overall architecture…but that’s another story.  This is all about ANNs, not Anna.  Having said that, thanks for the compliment on Anna.

I am trying to assess real-world uses of ANNs (separate hype from actual solutions), and determine suitable problems to solve with them.  The solutions seem quite specialized to very specific uses.  I personally have yet to see an ANN solve a more generalized problem.  They do get major credit for solving problems that humans can't figure out how to write code for.  I do, however, find it a bit ironic that ANNs were hyped as being "like our brains", yet our brains are generalized and quickly adaptable to many, many circumstances.  I understand the counter-argument: "You just need an ANN with 100 billion neurons and it will happen."  I hope they are right, but it sounds a bit too much like "trust me", or a religion.

A tangent…In high school, one of my instructors used to give us pages of data and say, "derive a formula to fit these 4 or 5 variables in the next hour".  Most of this was done with simple graph paper and log graph paper, a ruler, and a calculator.  A lot of it was simply graphing two variables, looking at the shape, forming an opinion as to what the relationship was, and deciding which pair to graph next.  A computer could do this too - derive formulas.  Simple linear regression, correlation, and other statistical functions can also be useful for extrapolation; a quick sketch of that is below.
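
A small illustration of that statistical toolbox (the names here are mine, not anything from the thread): an ordinary least-squares fit of y = a + b*x, usable for simple extrapolation.

```csharp
using System;
using System.Linq;

public static class Stats
{
    // Returns (intercept a, slope b) of the best-fit line y = a + b*x.
    public static Tuple<double, double> LinearFit(double[] x, double[] y)
    {
        double mx = x.Average(), my = y.Average();
        double sxy = 0, sxx = 0;
        for (int i = 0; i < x.Length; i++)
        {
            sxy += (x[i] - mx) * (y[i] - my);
            sxx += (x[i] - mx) * (x[i] - mx);
        }
        double b = sxy / sxx;      // slope
        double a = my - b * mx;    // intercept
        // extrapolating is then just: y = a + b * futureX
        return Tuple.Create(a, b);
    }
}
```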

I gotta run.

Regards,

Martin

By the way, I’d like to open up Anna’s brain on the net either this fall or early next year, and get others involved.  I think you expressed some interest.  I’ll keep you posted.

sorry it took so long to get back to you

Martin,

We need a whiteboard and some time to make all of this gel.

Some ideas – I started on the C++ code but just never completed it.  The idea was to build on what I had done but make it more generic and ultimately more flexible.  Doing this in C# will be fairly easy to implement since most of my C++ code was creating list objects (not a problem in Java or C#!) and would work well with your server based model.

Classes:

ActionManager - manages the set of Actions available to the robot.

Action - an action or action sequence that is randomly selected to occur after an Event.

Event - something that happens to the robot, or a series of things that happen - it hits a wall, a person responds to a robot question, etc. Has an overridable CheckIfActionIsFavorable(), which can use the Motivation(s) to decide whether the action taken in response to the event was favorable or not.

EventManager - manages the Events: resets events as they occur and after an action completes, etc.  Can create events or event sequences and assign random action reactions to those events.

EventPattern - a pattern of events or set of steps.  Can specify, for example: if event A occurs, wait 30 s; if event B then occurs within the next 4 minutes, wait 40 seconds; and if B occurs again within the next 10 minutes, activate an Action or Action sequence.

Motivation - generalized goals of a robot - e.g., always try to go toward light, always try to engage people, always try to have more complex interactions with people.  After an event occurs and a random action is chosen, the motivations can be used to decide whether the chosen action was favorable or not.

MotivationManager - manages the Motivations; can add or remove a Motivation if a particular series of events occurs.

Habit - if an event happens a lot and the likelihood of a particular action occurring is near 100%, the event/action pair is promoted to a habit.  Habits are also "failsafe" events - e.g., if the bot goes near an edge, the habit overrides other events and performs the back-up action.  Habits can also be demoted back to events.

HabitManager - manages the habits; checks the events for actions that are near 100% and promotes those events to habits.
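
A minimal C# skeleton of how these classes might hang together - a sketch only, with the details guessed from the descriptions above:

```csharp
using System.Collections.Generic;

public class RobotAction              // "Action" in the list; renamed so it
{                                     // doesn't shadow System.Action
    public string Name;
    public double Weight = 1.0;       // every reaction starts off equal
}

public class Motivation
{
    public string Goal;               // e.g. "always try to go toward light"
}

public class Event
{
    public string Name;
    public List<RobotAction> Reactions = new List<RobotAction>();

    // Overridden per event type: use the Motivations to judge whether the
    // action taken in response to this event was favorable.
    public virtual bool CheckIfActionIsFavorable(RobotAction taken,
                                                 List<Motivation> motivations)
    {
        return true;                  // placeholder
    }
}

// A Habit is an Event whose action is near-certain; it acts as a failsafe
// that overrides ordinary events (e.g. back up at an edge).
public class Habit : Event { }

public class EventManager
{
    public List<Event> Events = new List<Event>();
    // resets events as they occur, builds event sequences, assigns random
    // action reactions, etc.
}

public class HabitManager
{
    // Promote an event whose best reaction is near 100% into a Habit.
    public Habit Promote(Event e)
    {
        return new Habit { Name = e.Name, Reactions = e.Reactions };
    }
}
```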


Ideally, there would be a way to randomly combine events and randomly combine actions or action sequences.  These would then be tried, and over time those random event and action combos could be removed if unused or unsuccessful.

With Anna, an event could be:

she sees an opportunity to interact with a person,
she responds to a statement from a person,
she enters a room and sees several people,
she enters a room with only one person, whom she knows,
she enters a room with only one person, whom she doesn't know,
she is in a room and a person she knows enters,
maybe with a copy of each of these for each person she knows, with topics, to fine-tune how she interacts,
etc.

She randomly selects an action to say something on a particular topic.  As part of her action, she would check her motivations to see if the person's reactions to her are good or bad.  That changes the statistics on the action, making it more or less likely to be chosen in the future.

Anyways, just some ideas.  I love the idea of being involved with you on this project.  My time might be sort of thin over the next year or two since I am looking at doing grad work in the fall, but I would love to contribute as I can.  This might be a huge rewrite of your code, so I am not sure of its viability as a way forward.

Regards,

Bill

re: Your ideas Bill

Awesome post Bill.  I think a lot of this can directly fit into my recent work, as each of these things you have mentioned would be a different “Atom Type” as far as storing state and relating different concepts to one another.  There would be some code to write for each concept, but at least the data structures would be there.  I already have atom types for many of them, Event, Motivation, Action, etc.  In my new system, new Atom Types can be invented and linked together quite easily.

I haven’t worked out the “Pattern Matching” that would be necessary, but this issue is showing itself over and over again with other stuff, so a solution MUST emerge.  Looks like you have a lot of great thoughts on the nature of some of those patterns.  That will be useful for evaluating a design later.

Sadly, I’m going to be out of communication for the next week.  I’m printing your post and will study over it and make paper notes and respond when I get back.  

I am already forced to re-write everything for other reasons, so now is a great time to throw a whole lotta ideas on the whiteboard, so keep the ideas flowing if you have the time.

Thanks very much,

Martin

OpenCog and Bill’s Framework
Thinking about Bill’s framework and your ideas, Martin, I think that OpenCog’s ideas about non-Boolean truth values would work well there.

I had something like this worked out years ago. Think about the “facts” we know. Almost all of them have some sort of hidden certainty value. For example, even though I’ve never seen it, I would put the existence of China at near 100% while the existence of Narnia is very low except in a fictional context. There are things I’d put in the middle, such as campaign promises.

I would love to find a way of tying in the truth values to the context, or at least find a way to deal with real-but-fictional objects such as Narnia or an honest politician.

People can talk about things from Star Wars with an almost religious fervor, even though the movie series is fiction. People can also argue about the virtue of various computing platforms in much the same way.

I create fiction for fun: I am currently running a face-to-face science fantasy game for some friends and I can discuss much of the background of the DJverse (Dangerous Journeys Universe) pretty much from the beginning of this universe until the current date in the game which is the furthest forward I’ve gotten. All of this is factual for me and the players, but we all know it is a shared fiction that very few other people would understand because they don’t have the background.

I’m not sure how this sort of thing would be represented by a robot.

Perhaps by its context in a semantic web. Most humans are very good at switching contexts properly, but each of us puts different weights on different contexts and things. For example, most of the people on this forum put a lot of importance on making things which are of little value to a lot of the people off this forum. Personally, I’m happiest when I’m knee-deep in software, but making bookshelves is also fun. I’ll probably take a break between Mini-Groucho and Groucho to build some types of shelving units that my wife wants. Other people would just get out their credit cards and buy shelves. And I wouldn’t say that either of us is “right.”

I think I’m starting to babble now, so I’ll go away for a bit. :)

Have a good week, Martin!

re: Pattern Matching

Martin,

Pattern Matching - not sure exactly what problem you are trying to solve, but maybe we could talk about that.  I might have some insights that are helpful, although you might have thought of them already!

A few things to think about while you are in the midst of a "major refactoring":

You might want to think about running ROS (ros.org) on Anna, and then connecting to her real I/O through smaller controllers such as an Arduino Mega via I2C or whatever protocol strikes your fancy.  That gives you mapping algorithms similar to the Google car, and all kinds of cool stuff out of the box.  You just have to set it up correctly and do a little bit of coding if you don't have a LIDAR device (it assumes a LIDAR for ranging, but that is easy to work around).  It has a very nice architecture for adding new sensors, and simulation tools right out of the box, but it runs on Linux.  I have decided my next bot is going to be on this; I have my daughter's old laptop ready to be turned to the darkness that is Linux!  You can even write code in C# via Mono, so you could write your code on Windows and then just move it to Linux.

You might want to change your WCF services to REST services via MVC 4 or above.  I just did a small project at work with it and found it ridiculously easy to work with.  This would make your server(s) scale better, since REST is a "thinner" protocol via JSON serialization (or XML, but why do that?); SOAP is pretty verbose by comparison.  It will also be easier to maintain going forward as different bot clients end up current with different server versions.
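
For what it's worth, a minimal sketch of what one such endpoint might look like under ASP.NET Web API (which shipped alongside MVC 4); RobotRequest and RobotResponse are hypothetical stand-ins for whatever the existing WCF contracts carry:

```csharp
using System.Collections.Generic;
using System.Web.Http;

public class RobotRequest
{
    public string RobotId { get; set; }
    public Dictionary<string, string> Inputs { get; set; }
}

public class RobotResponse
{
    public bool Success { get; set; }
    public Dictionary<string, string> Outputs { get; set; }
}

public class RobotController : ApiController
{
    // POST api/robot -- the request body arrives as JSON and is bound here
    public RobotResponse Post(RobotRequest request)
    {
        // ... hand the request to the server-side brain ...
        return new RobotResponse
        {
            Success = true,
            Outputs = new Dictionary<string, string>()
        };
    }
}
```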

I was also going to suggest another class for our hierarchy.

RobotContext - this would hold robot context information: where it is, who it is with, whether it is inside or outside.  To begin with, it might be as simple as a string, "GenericContext\Inside\LivingRoom\Single\Julie", i.e., in the living room talking to only Julie.  There would be a global one, and the RobotContext would define what sorts of Actions might be appropriate.  When an Event occurs, the robot would attempt to find an appropriate context object; from that context, it would get an Action list and randomly choose from that list.  For instance, if the robot is inside, it wouldn't throw a football.  If there are only women in the room, maybe talking about football wouldn't be a good conversation starter, etc.
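
A minimal sketch of that path-style context lookup (all names hypothetical): walk the path from most specific to least specific until a defined context is found.

```csharp
using System.Collections.Generic;

public class RobotContext
{
    public string Path;    // e.g. "GenericContext\Inside\LivingRoom\Single\Julie"
    public List<string> Actions = new List<string>();
}

public class ContextStore
{
    private readonly Dictionary<string, RobotContext> contexts =
        new Dictionary<string, RobotContext>();

    public void Add(RobotContext ctx) { contexts[ctx.Path] = ctx; }

    // Find the most specific context matching the current situation,
    // falling back toward the generic root if nothing closer exists.
    public RobotContext FindClosest(string path)
    {
        while (path.Length > 0)
        {
            RobotContext ctx;
            if (contexts.TryGetValue(path, out ctx)) return ctx;
            int cut = path.LastIndexOf('\\');
            if (cut < 0) break;
            path = path.Substring(0, cut);   // drop the last path segment
        }
        return contexts["GenericContext"];   // assumes the global root exists
    }
}
```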

Have a good vacation.  I am also gone the week after, i.e. the 18th-22nd, but will catch up then.

Regards,

Bill 

Contexts
Bill,

Thinking about context, maybe there should be multiple contexts for a robot, such as LocationContext, PeopleContext, TimeContext, and ConversationContext.

I would break them up rather than putting them all into a single context.

This way the same LocationContext == “/Home/LivingRoom” could have different PeopleContext “Martin” or “Susan+Stranger001”. The TimeContext could be used for that indefinite time that people seem to live in, like DinnerTime, BedTime, or HomeworkTime; maybe something like TimeContext==“2014-08-03/22:00:00:0000/BedTime”. I’m not sure how ConversationContext should be represented, but it would essentially be a pointer into the start of the conversation, its subject, and the current location.
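
A rough sketch of what those split contexts might look like as classes (the names and types are mine):

```csharp
using System;

public class LocationContext { public string Path; }        // "/Home/LivingRoom"
public class PeopleContext   { public string[] People; }    // { "Susan", "Stranger001" }

public class TimeContext
{
    public DateTime Clock;       // 2014-08-03 22:00
    public string Label;         // "BedTime", "DinnerTime", "HomeworkTime"
}

public class ConversationContext
{
    public int StartAtomId;      // pointer to the start of the conversation
    public string Subject;
    public int CurrentAtomId;    // where the conversation currently is
}

// A situation is just whichever of these contexts currently apply, so one
// LocationContext can pair with many different PeopleContexts, and so on.
public class Situation
{
    public LocationContext Where;
    public PeopleContext Who;
    public TimeContext When;
    public ConversationContext Talk;
}
```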

This might help the robot decide whether something is an order to the robot or a suggestion to another human.

re: Contexts

Mr. Dangerous,

After my post, I did a bit more thinking and refinement on this idea.  I was thinking the RobotContext class would be a smart enough object to encapsulate all of the things you mention and more.  There would be a global one for each robot on the server, and as the robot extracts information about its environment, the server robot context would be expanded to reflect what it knows.  When an event occurs, the event object would call a static method:

static RobotContext GetAppropriateRobotContextForEvent(Event evt);

This would look at the global RobotContext and then attempt to find a RobotContext which is closest to the global.  The passed-back context would have an action list, and each action would have a statistical chance to occur.  The trick is that each RobotContext will have its own list of statistics for each action.  So multiple RobotContexts could contain the same Action object, but each RobotContext could give it a different chance to occur.
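
A minimal sketch of those per-context statistics (member names are guesses):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public class ActionStat
{
    public string ActionName;
    public double Chance;   // this context's own statistic for the action
}

public class RobotContext
{
    public string Path;
    public List<ActionStat> ActionStats = new List<ActionStat>();
    private static readonly Random rng = new Random();

    // Weighted random selection using this context's own statistics; the
    // same ActionName can appear in other contexts with different chances.
    public string ChooseAction()
    {
        double roll = rng.NextDouble() * ActionStats.Sum(a => a.Chance);
        foreach (var stat in ActionStats)
        {
            roll -= stat.Chance;
            if (roll <= 0) return stat.ActionName;
        }
        return ActionStats.Last().ActionName;
    }
}
```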

To begin with, this would be just a placeholder and probably just return one context that has a complete list of all possible actions in the system. As the framework evolves, this could become very complex and involve a lot of calculations all completely dependent on what the context contains and how sophisticated the bot becomes in extracting data about its environment. 

We could probably put code in the EventManager to look for patterns and have the robot eventually build its own RobotContext objects with mapped-in Actions.  It would need to work from a huge database of information on events that happen to it, so that might be a way off…we could probably use k-means clustering or some similar algorithm to identify these data trends and from that generate RobotContexts (at night, when navigating a room, the bot's chance of successfully making it through the room without bumping into walls goes up if it turns the lights on).  That kind of stuff is a long, long way off, but just so COOL to think about!

Regards,

Bill

That would be neat
(Sorry Bill, but I meant to respond to your post.)

Perhaps two different kinds of context objects then.

I was thinking of something lightweight enough to be linked to every atom that refers to interaction. I wonder how much using a full environment context would cost in memory.

I also wonder what sort of contexts a robot would come up with if it built them from its own pattern analysis.

Jay

My first reply is accidentally at the first level after this
Bill,

Please call me Jay or DT. The nickname comes from the phrase “A little learning is a dangerous thing.”

I think we might be more in agreement about Context than I thought. I just tend to break it up into smaller bits.

However, I’m not convinced about your random action generator. It’s one way to induce learning, but I’m not sure it’s very efficient unless the robot can create both atomic and combined actions on its own. On the other hand, I’m willing to try it or a close relation unless I find something better.

Have you looked at OpenCog? I don’t fully understand the math behind it yet, but it’s strongly based on probabilistic principles.

re: Jay

Jay,

Ah, good name.  We all live dangerously!

I read a book by Jeff Heaton and had looked at his Encog open-source framework.  AI and machine learning developers have their own shorthand and nomenclature, just like robotics folks.  It made sense when reading the book and its examples, but when moving to a real-world program I found it difficult to apply the concepts.  I only do it as part of my hobby, so it's drips and drabs - not enough time to really solidify what I learn.  I will keep on it though.

As to a bot creating its own events and reactions to those events, you are absolutely correct.  That is the center and the beginning of creating a useful robot - building introspection.  We as people take the chaos of things around us and recognize patterns, to which we respond or act.  When we run into something totally new, we make educated guesses as to the best ways forward, but it is only through trying and then evaluating the success of strategies that we learn.  To me, that seems like consciousness.  Or at the very least a consciousness-like activity.

In what I presented earlier, the EventManager would do that.  It would look at previous data and then divine patterns of events, assigning actions that previous experience in similar events has shown to be useful as possible reactions.  If those events don't recur within a reasonable period, they could be garbage-collected.  This will be, by far, the most difficult piece of this framework to write.  It is the kind of thing that might not ever be completely finished or fine-tuned.  People years and years from now might be writing tweaks to move a robot's personality a slightly different way.  I think we should start simple and just have a base set of events we define and a list of actions.  The bot can put them together any way it pleases and try out its newfound events and reactions on its environment.

Regards,

Bill

Progress on Contexts and Agent Lists

I am back from swimming with Whale Sharks and feeling ready to conquer the world with code.

I agree with you, Jay and Bill, on various points you have made.  Thank you so much for participating.  I also agree about moving to JSON.  I will have to study up on ROS…sounds interesting.

I made a great deal of progress over the weekend at prototyping a brand new project with new AIService, Request, Response, Context, and others.  I like the direction it is going and I believe it will facilitate adding most of the things you guys are talking about.  I got a test app with the first “Hello World” type interactions going on.  I plan on prototyping a JSON wrapper for this soon just to make sure the API can work that way.  

I ended up coming to the conclusion that Request, Response, and Robot objects each needed to have a variable number of name/value pairs (I am subclassing a C# HybridDictionary) that can grow/shrink at runtime.  When I considered all the various use cases and possible robots, and the need to pass in sparse inputs or large sets of sensor or other data, this variable approach made more and more sense.

My current AIContext object is below.  I am not sure whether I want to put a few objects like Person, Conversation, etc. in the context separately, in BusObjects (a list of objects), or throw their values into the Request.  I'll figure it out later.  One odd bit: there is a need to shuffle all robot-related values to/from Request and Robot so that each holds the latest state of the superset of all robot variables.  I think I have a way to do that.

AIContext

  1. Request (HybridDictionary) - list of inputs - additional data is added to this list as agents execute and “summarize” the environment or circumstances.  This may contain none, some, or all sensor/actuator data at a given time.
  2. Robot (HybridDictionary) - loaded from and updated to cache.
  3. BusObjects (HybridDictionary) - list of objects that agents have decided to add to the context - not using yet
  4. Agents (ArrayList) - list of agents to execute.  This is changeable and overridable at runtime.  Also, different agents will execute depending on what data is present in the Request.  Example:  ImageAgent will only get triggered if ImageData is in Request.
  5. ResponseOptions (ArrayList) - list of possible responses w/ ConfidenceRating
  6. Response (HybridDictionary) - list of outputs - this will return a Success flag, the winning ResponseOption, Actuator changes, Exception info if any, and debug and performance metrics if asked.
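
For concreteness, a minimal C# sketch of that AIContext - the member names follow the list above, and the exact types are guesses:

```csharp
using System.Collections;
using System.Collections.Specialized;

public class AIContext
{
    public HybridDictionary Request    = new HybridDictionary(); // inputs; grows as agents run
    public HybridDictionary Robot      = new HybridDictionary(); // loaded from / written back to cache
    public HybridDictionary BusObjects = new HybridDictionary(); // objects agents add to the context
    public ArrayList Agents            = new ArrayList();        // agents to execute, overridable at runtime
    public ArrayList ResponseOptions   = new ArrayList();        // candidate responses w/ ConfidenceRating
    public HybridDictionary Response   = new HybridDictionary(); // outputs: Success, winning option, etc.
}
```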

I think this “List Based” approach to the Request object will better facilitate the learning algos you guys are talking about, fuzzy logic, rules engines, maybe even neural nets at some point, because all relevant “Variables” will be in one list.  I should be able to build generic “Criteria Sets”, “Pattern Matchers”, “Rule Sets”, whatever you call them, to test against the Request objects for doing various things.  There are a lot of other dividends to the “Lists” that I can already see, but I’ll write more on that later.

In order to achieve the level of flexibility I believe is needed, I'm having to take on some unusual coding practices, but I believe it will lay the foundation for some great things to come.  One of these is that almost all the code is now getting embedded into very small agents.  When a service needs to do a process, it asks to execute an "Agent" or a "List of Agents".  These agent lists are created in an app, just like all other atoms.  I could see individual users/robots being able to customize/override these processes to suit their individual needs, or use different lists altogether.  These agents are not coupled to each other in any way; they do not talk to each other.  They only talk to the context and a few services.  As a result, my main service is now down to less than two pages of code, not counting the agents themselves.
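
A minimal sketch of that agent pattern, building on the AIContext sketch above (the interface and class names are hypothetical):

```csharp
// Agents talk only to the context, never to each other, and only fire
// when their trigger data is present in the Request.
public interface IAgent
{
    bool ShouldRun(AIContext ctx);
    void Execute(AIContext ctx);
}

public class ImageAgent : IAgent
{
    // Only triggered if ImageData is in the Request.
    public bool ShouldRun(AIContext ctx) { return ctx.Request.Contains("ImageData"); }

    public void Execute(AIContext ctx)
    {
        // ...summarize the image and add the results back into ctx.Request...
    }
}

public class AgentRunner
{
    // The service just walks the context's agent list in order.
    public void Run(AIContext ctx)
    {
        foreach (IAgent agent in ctx.Agents)
            if (agent.ShouldRun(ctx))
                agent.Execute(ctx);
    }
}
```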

It’s mega weird, mega clean, mega flexible, and seems a lot easier to maintain.  It’s also self-documenting, as the “Lists” are directly viewable in an app.

I really hope some of this makes sense.  My dream is to get a few people to try it out.

Cheers,

Martin

I’d love to try it out.
I like ROS, but it isn’t totally platform independent. Basically there are two main things in ROS: code, which isn’t platform independent, and a standard for passing data.

To give an example, you can put a ROS node on an Arduino that runs a small robot. The code to run the robot could be stored as a ROS node, but it would only run on the Arduino. If the Arduino code wants to communicate with another system, it publishes its data.

That might be something that you would want.

One advantage to ROS under Linux is that most of the code is compatible, not just the message passing standards. I will probably put ROS on mini-Groucho, but I don’t know when I’ll do this. I’m hoping to get this done enough to take to Cleveland with us when we go for Lee’s surgery early next month.

TTFN

I’ve got to run; I’ve just been smacked with the tired stick.

re: Martin

Martin,

Sounds like a great vacation, swimming with Whale Sharks!  My vacation was much more prosaic – a few quiet days by a lake in NH reading sci fi novels, kayaking and taking it easy. 

Not sure I fully understand what you have in mind here or how it differs from what you had before.  I would need to see the code or more documentation to understand.  I am not sure where all of the pieces I was talking about with the Variable Stochastic Learning Automaton would fit either, although that would probably be another agent, enabled when there isn't a specific action to respond to the event.

Again, not really sure I understand what you have in mind, but open to taking some time with it.  I have found there is nothing like talking through a problem.  I have access to my company’s web ex and phone conferencing if you want to chat through some of these design decisions, share what you have etc.  This is the fun part of software so would be more than happy to be a sounding board!

Moving to JSON or REST will be a good move especially as you try to scale this to more than one robot. 


I am starting grad work in September, so I am not sure how much time I will have to offer to this endeavor, as interesting and fun as it might be.  It has been 26 years since I finished my undergrad work, so not sure how much this will consume me!  If you would like, I could convert my VSLA code to C#, which you might be able to fit in with the rest of what you have here.  I think it would help and could push forward what you are doing, or focus your mind in a direction you hadn't thought of before.

Regards,

Bill

Hi Bill

Thanks Bill.  I appreciate your candor.  I can totally understand how what I am writing about would not make sense yet…too much of it is unconventional and still in my head.  Sorry for that.  I will try to publish some doc and code when I am a bit further along.  The web ex/conference thing is a great idea.  I feel like I need to create some decent documentation and a working prototype first, maybe with just a few agents to show the end to end idea.  I am getting close on the prototype.

My code previously had more specific objects with specific properties, which were too directly tied into the specifics of Anna.  What I am trying to do now is build something that could be used by an infinite number of Anna-like or non-Anna-like robots and devices.  To help do this, I am envisioning a lot of the key objects being "Lists".  I would envision some metadata to say "hook this Request data up to this Agent".  I would guess the code for Anna has 100+ agents and 30+ atom types.  An atom type is analogous to a database table, and is effectively a "memory type".  A logical data model could help document this, but it is too early for that since I am doing such a rewrite.  Your robots could likely have some different agents, deal with some key behaviors in a very different way, and maybe even add some new atom types.  Hopefully, most developers would choose to reuse the existing agents that I would provide and share new ones with the community.  How to create, configure, and share new agents, behaviors, atom types, and memories without things getting too tightly coupled to a specific robot is probably the biggest design consideration for me.

As you have said, I do believe that the VSLA code you are talking about can and should be encapsulated into an agent or agents.  I also believe that in time I could design an architecture where an individual robot owner could configure one or more instances of VSLAs through an app for a given robot, to say which inputs and outputs are plugged into a given instance of a VSLA agent, or what conditions must be met first.  Let's say one wants to use a VSLA to control obstacle avoidance and drive behavior when a bot is autonomous.  Sonar data and motors might be plugged into a VSLA agent, while other things like speech might be controlled by other agents.  Someone might use a VSLA to interpret color blobs, Kinect data, or any number of other sensors, but use Anna's agents for retrieving news, weather, or doing mathematics.  Everything I just said about VSLAs might equally be said of ANNs (artificial neural nets) or some other routines.  I hope I am not murdering this through over-simplification to the point of it not having validity.  I think there is something useful to my concept, and that the required data structures (atom types) to make it work would emerge once I worked with your algo a while and could see the problem more clearly.

A side note:  I am not sure how your feedback mechanism “IsActionSuccessful” (you called it something else, I can’t remember) could be implemented system wide for all behaviors, so I guess I would need to figure that out.  A C# port of your code would be useful if you have time, I wouldn’t mind doing it either when the time comes.  

The Force Field Algorithm could also be an agent.  Likewise, I will soon have OpenNLP encapsulated in a few agents.  The point is, I guess I get back to Minsky…“The trick is there is no single trick”…unless of course you want it to be.  You could have a VSLA do everything I suppose, but I doubt it would be effective for a lot of the conversational pieces though, like asking questions, answering questions, initiating statements on topic (what I call babbling), knowing personal details about people, etc.

My thought is that I will make the code available but also allow developers to run their bot against my server or a public one yet to be created.  This will allow hobbyists to learn and use the system first without having to setup a server, db, and all the code.  In the beginning, it will be enough of a task for a developer just to understand the API and online App.

I bet NH was nice; I went to prep school near there back in the day, best days of my life.  I am extremely envious of grad school.  What are you studying?

I plan on starting a new thread as soon as I can think of a good name for whatever this is.  It’s late, I’m rambling, so I will go.  Bill and Jay, if you wouldn’t mind, drop me an email sometime to [email protected] so I can send you stuff when I have it.  For now, I’d rather trade detailed info and docs in a non-public way.  I will also need a way to give you guys API keys and the like.  Talk to you later.  

Regards,

Martin

Martin, I’ll send you email
Martin,

I’ll send you email later today.

You guys got vacations, and all I got to do is work on mini-Groucho a bit. :)

Seriously, things are a bit strange here. We were planning on going to Cleveland for Lee’s surgery, but now she has a staph and E. coli infection in her wounds, so I’m changing the bandages once per day. At least if we’d gone to Cleveland I’d have had time to work on my robots without having to do much else.

I like your ideas and I’ll talk about pretty much anything. There must be some conferencing software, maybe Skype? I’m behind the times there. I do have an Adafruit cellular card so once I get a provider and a SIM card, I can have Groucho do phones or texts for me, as well as giving me internet where there isn’t any otherwise.

I’m very interested in both talking about your programs and testing them. I have a background in odd things. Most of my professional career, most of it spent at Penn State, was as a systems analyst and firefighter. As a developer I did mostly user-interface development. I started working with robots after Lee got sick, but before I quit to become her full-time caregiver, and I gave up robotics because she didn’t want me to work on them.