Denuded Furbot

My first attempt at building an autonomous rover bot:

- Denuded Furby skull with expression and speech control hacked with a small servo
- VMusic MP3 module for speech/attitude
- Atom Pro 128 microcontroller
- A4WD rover base
- IR sensors for obstacle avoidance and speech interaction, bumpers as “fail-safe”

Still on the test bed: wireless link and weapons systems.

In action:

Overview of the electronics (this is the alpha version of the rover; note the many quick ties and errant wiring):

Wow, it did a great job navigating around obstacles! The code seems to perform very well. I like how the head moves about looking for obstacles.

Thanks - the navigation code was (obviously) the hardest part, especially using only the IR sensors (a three-sensor array). It was very helpful to use the head movement during the test phase to give a visual indicator of when and which sensors were being tripped, and at what distance, as well as adding a bit of “life” to the bot as it travels around.
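Roughly, the decision loop works like the sketch below. This is a C-flavored illustration only; the real code runs on the Atom Pro, and the threshold, sensor labels, and helper functions here are all made up:

```c
/* Illustrative sketch only -- the real bot runs on an Atom Pro 128.
 * The hypothetical read_ir()/set_head()/drive() stand in for the
 * actual sensor and servo calls. */
#include <stdio.h>

#define THRESHOLD_CM 30   /* assumed trip distance for the IR sensors */

enum { LEFT, CENTER, RIGHT };

/* Stubbed IR read: returns a fake distance in cm for demonstration. */
static int read_ir(int sensor)
{
    static const int fake[] = { 80, 22, 65 };  /* pretend center is tripped */
    return fake[sensor];
}

/* Stubbed actuators: just print what the bot would do. */
static void set_head(int degrees) { printf("head -> %d deg\n", degrees); }
static void drive(const char *cmd) { printf("drive: %s\n", cmd); }

int main(void)
{
    int l = read_ir(LEFT), c = read_ir(CENTER), r = read_ir(RIGHT);

    /* Swing the head toward whichever sensor tripped -- the visual
     * debugging trick described above (negative = left). */
    if (c < THRESHOLD_CM) {
        set_head(0);
        drive(l > r ? "turn left" : "turn right");
    } else if (l < THRESHOLD_CM) {
        set_head(-45);
        drive("veer right");
    } else if (r < THRESHOLD_CM) {
        set_head(45);
        drive("veer left");
    } else {
        set_head(0);
        drive("forward");
    }
    return 0;
}
```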

Pretty cool bot! :smiley: It’s unique to say the least. 8)

What I like about it is that it looks intelligent running around. You can actually see it thinking and making decisions. Some bots just bang into walls, then back up and go a different direction. Your bot does a great job. :smiley:

Interesting! Just what Valentino Braitenberg was saying in his book, “Vehicles: Experiments in Synthetic Psychology”.

That at a certain point of complexity, users will attribute intelligence to an otherwise inanimate device.

But I agree, it’s interesting to watch.

Alan KM6VV

“That at a certain point of complexity, users will attribute intelligence to an otherwise inanimate device.”

It seems that when programming, electronics, mechanics, and environment combine (as in a robot), unexpected things can happen, since there is always going to be some (variable) amount of ‘slippage’ between the components, which manifests in apparently ‘intelligent’, ‘independent’, or ‘emergent’ behaviors.

Adding animatronics certainly enhances anthropomorphization, though I have to admit that I have spent more than a few minutes just watching a Roomba as it trundles about its business.

I like to think of code as a form of recorded thought process. Basically, you use your human intelligence by reasoning and solving problems in code; then, when the software runs, it uses your pre-thought-out processes and therefore looks intelligent to the observer.

Well… I don’t know if the code is a copy of my thought process; it’s more like I put myself “into the beast” and think how I want it to operate. That, then, is what’s recorded, and if all is successful, the beast gets to “think” his beastly thoughts.

And my current “beast” project (Micromouse) finally got over a hurdle that’s been bugging me for over two weeks (very limited time to work on the project). Seems the comparator and A/D must be turned off for the low-end bits of PORTA on the PIC ('877) in order to use ’em as digital inputs. I think I made that same mistake over two years ago on a project far, far away…
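For anyone hitting the same wall: assuming a '877A and a C compiler that exposes the standard datasheet register names (the actual project may well be a plain '877 and a different toolchain entirely), the fix looks something like this:

```c
/* Sketch only: assumes a PIC16F877A and an XC8-style compiler that
 * exposes the datasheet register names.  The low PORTA pins power up
 * as analog, so the A/D (and the comparators on the 'A part) have to
 * be shut off before they behave as digital inputs. */
#include <xc.h>

void porta_as_digital_inputs(void)
{
    ADCON1 = 0x06;   /* PCFG = 011x: all PORTA/PORTE pins digital */
    CMCON  = 0x07;   /* CM = 111: comparators off ('877A only)    */
    TRISA |= 0x3F;   /* RA0..RA5 set as inputs                    */
}
```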

Alan KM6VV

I don’t think it’s an exact copy of your thought process at all; rather, the code contains basic pre-thought-out instructions that, when run, appear to us humans as intelligent control. The word “intelligent” can mean a lot of different things to many different people, but the more complex a program is, the more intelligent the machine can appear to be. I like to think of it as the number of colors you can mix on a palette: mix red and blue and you get violet, and so on, until you end up with an infinite number of colors. Same with code: the more instructions, the more intelligent the bot can be. However, machines will never have consciousness. Everything will only appear to humans as intelligent control, as we will always relate the machines to ourselves.

I really have a hard time putting into words what I want to say, but in a nutshell that’s my take on it. :laughing:

I like the way you put that. I think the ultimate code for a robot would be code that changes and adapts over time, based on whether past decisions the robot made turned out to be correct or not. That would make a robot look pretty “intelligent”. I believe the way to do this would be to set goals for the bot and let it verify whether it has succeeded or failed at accomplishing each goal. I don’t know how this could be done, but it is my dream robot project.
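A toy version of the idea might look like the sketch below. The actions, scores, and random goal check are invented purely for illustration; a real learning system would need far richer state and sensing:

```c
/* Toy sketch of the "learn from past decisions" idea: the bot keeps a
 * score per action, picks the best-scoring one, then nudges that score
 * up or down depending on whether the action met its goal. */
#include <stdio.h>
#include <stdlib.h>

#define N_ACTIONS 3
static const char *names[N_ACTIONS] = { "turn left", "turn right", "back up" };
static double score[N_ACTIONS]      = { 0.5, 0.5, 0.5 };

/* Pretend goal check: did the chosen action clear the obstacle? */
static int goal_met(int action) { (void)action; return rand() % 2; }

/* Pick the action with the highest score so far. */
static int pick_best(void)
{
    int best = 0;
    for (int i = 1; i < N_ACTIONS; i++)
        if (score[i] > score[best]) best = i;
    return best;
}

int main(void)
{
    for (int trial = 0; trial < 10; trial++) {
        int a  = pick_best();
        int ok = goal_met(a);
        score[a] += ok ? 0.1 : -0.1;          /* reinforce or discourage */
        printf("trial %d: %s -> %s (score %.2f)\n",
               trial, names[a], ok ? "success" : "failure", score[a]);
    }
    return 0;
}
```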

Yes, “intelligent control” could mean many things. My lawn control computer is an “intelligent control” system.

You don’t think we’ll have machines that are self-aware? Where’s your sci-fi, man! And the AI guys are saying it’s just around the corner; all we need is a little more processing power! Of course, they’ve said that for what, 30 years now? Hard to say if it will ever become possible.

But don’t forget all the proponents of RISC! To them, fewer instructions is more! (I like my CISC.)

A conversation like this could get really deep!

Best regards,

Alan KM6VV

A systems biology approach combined with fuzzy logic and self-annealing genetic algorithms is what I’m thinking… (see Chidananda Gowda.pdf)

Truthfully, I’m not really skeptical about AIs being as smart as (or smarter than) man, but I’m not quite sure if it’s possible or not. I do know, however, that machines will become smart enough to think fairly independently, taking care of small tasks on their own using fuzzy logic, while still needing human input for their main objective. For example, a robot in an industrial setting that can avoid obstacles and complete small tasks, but must be given a goal to be able to function.
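A tiny sketch of what that fuzzy layer might look like for one small task. The membership shapes and numbers are invented, and the human-supplied objective here is just a target speed:

```c
/* Toy fuzzy-logic sketch: blend "near" and "far" memberships of an
 * obstacle distance into a single speed command, while the overall
 * goal (the target speed) is still supplied by a human. */
#include <stdio.h>

static double clamp01(double x) { return x < 0 ? 0 : (x > 1 ? 1 : x); }

/* Simple triangular-ish memberships over distance in cm. */
static double mu_near(double d) { return clamp01((50.0 - d) / 50.0); }
static double mu_far(double d)  { return clamp01((d - 20.0) / 80.0); }

int main(void)
{
    double goal_speed = 30.0;            /* cm/s, the human-given objective */
    for (double d = 10.0; d <= 100.0; d += 30.0) {
        double near = mu_near(d), far = mu_far(d);
        /* Weighted (defuzzified) speed: crawl when near, goal speed when far. */
        double speed = (near * 5.0 + far * goal_speed) / (near + far + 1e-9);
        printf("distance %5.1f cm: near=%.2f far=%.2f -> speed %.1f cm/s\n",
               d, near, far, speed);
    }
    return 0;
}
```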

Quite right! These goals are quite realistic, and worth striving for!

There’s no reason why artificial “pets”, with loyalty to their owners and minor abilities, can’t be made. K9 may be closer than we think!

Alan KM6VV

As someone who did AI research in the early ’70s, I can say AI researchers were excited by the promise of early successes on basic tasks. As a teen, I wrote software to recognize basic 3D objects, so it wasn’t rocket science! However, I think the problem of integrating these basic AI building blocks was much more difficult than The Eternal Optimists imagined.

Processing power does help, but IMHO it will be massively parallel processing that will be the next big thing in AI. The small size of processors will help. Once we can fit hundreds of processors on a chip… servos with a processor in their plastic case… Think one processor per servo, and groups of processors talking to a processor that coordinates their moves. For example: one arm, four servos with processors, one arm processor… split the wrist and hand operation onto its own coordination processor, split the shoulder onto its own processor… etc.

IK becomes simple because it is limited to a few servos or servo groups.

It is the coordination, or communication, between processors that will be the tough nut to crack.
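Just to illustrate the shape of that split, here is a toy sketch. The structs and “messages” stand in for real per-servo processors and a real bus (serial, I2C, whatever), and the IK result is faked:

```c
/* Sketch of the "one processor per servo, one coordinator per limb" idea:
 * each servo node only knows its own target and steps toward it; the arm
 * coordinator just broadcasts joint targets (e.g. from an IK solve). */
#include <stdio.h>

#define N_JOINTS 4

typedef struct {           /* stands in for one per-servo processor */
    double angle, target;
} ServoNode;

/* "Message" from the coordinator to a servo node. */
static void send_target(ServoNode *s, double target) { s->target = target; }

/* Each node runs its own little control loop toward its target. */
static void servo_step(ServoNode *s)
{
    double err = s->target - s->angle;
    s->angle += (err > 2.0) ? 2.0 : (err < -2.0) ? -2.0 : err;  /* slew limit */
}

int main(void)
{
    ServoNode arm[N_JOINTS] = { {0}, {0}, {0}, {0} };
    double pose[N_JOINTS] = { 30, -15, 45, 10 };   /* pretend IK result */

    for (int j = 0; j < N_JOINTS; j++)             /* coordinator broadcast */
        send_target(&arm[j], pose[j]);

    for (int tick = 0; tick < 25; tick++)          /* nodes run independently */
        for (int j = 0; j < N_JOINTS; j++)
            servo_step(&arm[j]);

    for (int j = 0; j < N_JOINTS; j++)
        printf("joint %d: %.1f deg (target %.1f)\n", j, arm[j].angle, arm[j].target);
    return 0;
}
```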

dj

Yes! Rather than the servos being merely parts of the robot controlled by one “brain”, treating each servo as an individual worker in a collective would be brilliant. Hmmm, sounds kind of like the Borg.

Bingo, I agree, and not only parallel processing, but a chip modeled after the human brain using neuron processing technology. IMO, the only way to make a machine comparable to human intelligence is to copy the full workings of the human brain. Since we don’t fully understand how the human brain works, we can only expect moderate results from the very best system. The human brain is so complex, I’m not sure it’s possible to build a system to rival it. I do think personal servants in the home are a real possibility though.

Hey, it’s starting… I just heard that a new six-core chip is coming out… next, 100 cores!!!

Apologies if I have hijacked this thread…

Define “comparable to human intelligence”? That is an anthropomorphic view. Perhaps making a machine humanly intelligent should not be the goal. Dogs are useful creatures but are not humanly intelligent. Machine intelligence will be different, and to reach that goal, copying the human brain will lead to many dead ends; possibly that is why researchers have been so optimistic and yet unable to deliver.

Intelligence is in part rule-based and in part intuitive. MPP provides the rule-based decisions; neural networks provide the intuitive/learning capability. In my example, the processors at higher levels will likely be, in part, neural processors, teaching the individual component processors how to work together.

All of the concepts presented in the thread are excellent. It is my point that a multi-headed solution may prove to be the best, and that some of the blindness restricting AI progress, which occurs among strict adherents to any one theory, will be eliminated. It has happened before in research; think of chaos theory, which tied together economics, biology, ecology, the arts…

dj

By the way - anyone care to guess the significance of my signature?