This topic is all about robot sex / mating / intercourse. This is a serious topic, not a joke. Organisms simply evolve a great deal faster when 2 parents can both provide input into the offspring (sexual) than when a single parent does (asexual). I have written simulators of these processes (virtual petri dishes), and the differences are stark: in my simulations the 2-parent model evolved dramatically faster. Why shouldn't robots evolve in this way too?
How could 2 robots mate? How could they exchange DNA? What would result?
The rest of this is just one man's opinion on how this could be done. I am sure there are lots of other ways. My hope is to inspire others to share their own thoughts and ideas and to discuss.
For starters, the software of the robot needs to be written according to a few standards.
1. Every Software Agent (think of it as a Gene or a Behavior) needs to conform to a common interface. Every Agent needs a unique name or ID, a set of pre-conditions, a version number, a sophistication index, and a reliability index that quantifies how often the module fails. Example: a facial recognition agent has a pre-condition of having a camera. The version number and the other indexes would be used to compare two similar modules to see which one is better. (The first sketch below this list shows what such an interface might look like.)
2. Every Agent needs to store any variables (Settings) that drive its behavior in the robot's permanent memory (think DNA).
3. When two robots copulate, they share all their software agents and settings. One of the agents (a sex agent) takes the lead, evaluates the corresponding agents from each parent (pre-conditions, indexes, etc.), and picks a winner for each unique agent name.
4. Then the settings are each evaluated in turn. Each setting could carry metadata like DataType, IsMutationAllowed, IsAveragingAllowed, Min, Max, Mean, etc. A value for each setting would then be determined for the child: mutations could happen, or averaging between parents, and so on. The child would likely be similar to its parents, but different. (The second sketch below this list shows one way the agents and settings could be bred.)
5. If the maker of the robot added new sensors, the maker would need to introduce new software agents into the bot's DNA.
6. A bot should probably keep a few versions of each agent around so that if a new Agent crashes a lot (driving down its reliability index), the robot could automatically fall back to one of the parent versions. Many mutated combinations would not work out very well, but the software could maintain its reliability stats and also find suitable replacements (on the internet) for the offending Agent or settings...a kind of gene therapy, sperm donation, whatever you want to call it.
7. Robots need not produce new physical children...they could simply "practice a lot" just like adults do, except that they would self-evolve in the process. This could happen via wifi or the internet. The robots could be continually having sex and improving without our knowledge. New software agents introduced on the internet would spread rapidly and get tested through robot experiences.
8. Robots could end up carrying along a lot of genetic material (agents, settings) that might not be applicable to the bot. I think this is OK and even desirable, as it could pop up as interesting behavior later in life, in the bot or in an offspring. Our own DNA has a lot of material that we think is unused. I suppose modules that haven't been used within some time frame could be purged.
9. Software over-complexity (one of the biggest problems we face) can be reduced. Developers could concentrate on writing individual agents instead of entire systems. They would release these agents onto the internet (the sperm bank), and the ecosystem would test each module; it would be accepted and spread, be rejected, or find useful niches. This could become an entirely new and dominant software paradigm for bots, and maybe for software in general one day...if deep learning doesn't take over.
10. For this software design to work, Agents need to be written to communicate only with a Context (their DNA and runtime state), and NOT with each other. Agents cannot depend directly on each other or call each other's functions. They can, however, depend on the existence of variables in the Context. Effectively, Agents interact with the context, not with each other. This means the order of execution matters when one agent needs data produced by another. This constraint is necessary so that reproduction can swap out any agent without directly breaking the other agents. (The first sketch below this list illustrates this constraint.)
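To make points 1, 2 and 10 a bit more concrete, here is a minimal sketch in Python of what the common Agent interface and the Context could look like. Every name in it (Context, Agent, can_run, FaceRecognitionAgent) is something I invented for illustration, not an existing library or standard.

```python
# A minimal sketch (invented names) of the common Agent interface from point 1
# and the Context-only communication rule from point 10.
from abc import ABC, abstractmethod
from typing import Any


class Context:
    """Shared blackboard holding the bot's DNA (settings) and runtime state."""

    def __init__(self) -> None:
        self.values: dict[str, Any] = {}

    def has(self, key: str) -> bool:
        return key in self.values

    def get(self, key: str, default: Any = None) -> Any:
        return self.values.get(key, default)

    def set(self, key: str, value: Any) -> None:
        self.values[key] = value


class Agent(ABC):
    """Common interface every software agent ('gene') conforms to."""

    name: str                 # unique name or ID
    version: int              # version number
    sophistication: float     # sophistication index
    reliability: float        # reliability index (how often the module fails)
    preconditions: list[str]  # Context keys that must exist before this agent runs

    def can_run(self, ctx: Context) -> bool:
        # e.g. a facial recognition agent requires a camera frame in the context
        return all(ctx.has(key) for key in self.preconditions)

    @abstractmethod
    def run(self, ctx: Context) -> None:
        """Read inputs from the Context, write outputs back to it. Never call another agent."""


class FaceRecognitionAgent(Agent):
    name = "face_recognition"
    version = 3
    sophistication = 0.7
    reliability = 0.95
    preconditions = ["camera_frame"]

    def run(self, ctx: Context) -> None:
        frame = ctx.get("camera_frame")
        # Real recognition would happen here; the result is published to the Context
        # so other agents can pick it up without ever calling this agent directly.
        ctx.set("faces_detected", [] if frame is None else ["face_0"])
```

Because agents only read and write the Context, the order of execution is the only coupling between them, which is exactly what lets reproduction swap one agent out for another.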
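And here is a rough sketch of the mating step from points 3 and 4, using the reliability index to pick the winner of two same-named agents and the metadata from point 4 to breed the settings. The function names (choose_agent, breed_setting, mate), the tie-breaking order and the mutation rule are all just my assumptions to show the shape of the idea, not a definitive implementation.

```python
# A rough sketch of a 'sex agent': pick a winning agent per name, then breed the settings.
import random
from dataclasses import dataclass


@dataclass
class Setting:
    """One value of the bot's DNA plus the metadata from point 4."""
    value: float
    is_mutation_allowed: bool = True
    is_averaging_allowed: bool = True
    min_value: float = 0.0
    max_value: float = 1.0


def choose_agent(a, b):
    """Pick the 'winner' of two same-named agents: reliability first, then sophistication, then version."""
    score = lambda agent: (agent.reliability, agent.sophistication, agent.version)
    return a if score(a) >= score(b) else b


def breed_setting(mom: Setting, dad: Setting, mutation_rate: float = 0.05) -> Setting:
    """Average or pick a parent's value, then maybe mutate, clamped to [min, max]."""
    child = Setting(mom.value, mom.is_mutation_allowed, mom.is_averaging_allowed,
                    mom.min_value, mom.max_value)
    if mom.is_averaging_allowed:
        child.value = (mom.value + dad.value) / 2.0
    else:
        child.value = random.choice([mom.value, dad.value])
    if mom.is_mutation_allowed and random.random() < mutation_rate:
        child.value += random.gauss(0.0, 0.1 * (mom.max_value - mom.min_value))
    child.value = max(mom.min_value, min(mom.max_value, child.value))
    return child


def mate(agents_a: dict, agents_b: dict, settings_a: dict, settings_b: dict):
    """Produce the child's agents and settings from two parents."""
    child_agents = {}
    for name in set(agents_a) | set(agents_b):
        a, b = agents_a.get(name), agents_b.get(name)
        # Agents present in only one parent are carried along as-is (the 'unused DNA' of point 8).
        child_agents[name] = choose_agent(a, b) if a and b else (a or b)

    child_settings = {}
    for key in set(settings_a) | set(settings_b):
        m, d = settings_a.get(key), settings_b.get(key)
        child_settings[key] = breed_setting(m, d) if m and d else (m or d)
    return child_agents, child_settings
```

A real sex agent would also check pre-conditions and keep the losing versions around so the child can fall back to them if the winner turns out to crash a lot, as in point 6.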
I could say a lot more, but I think you get the idea. I wrote some prototypes of this process, but it's been over 15 yrs now. Perhaps better ideas have evolved since I last worked on this pet idea. I think it is more relevant now than ever.
Thoughts anyone?