A Robot Named Adam

I’m in the process of mounting servo motors onto its arms; they will reel in strings that act as muscles for the joints of the robot’s bones. The robot will almost certainly never be able to pole vault: motors take up too much room and weight compared with human muscle, so I can’t fit enough servos to generate that kind of power…

Rib Cage

 

Here's the clay sculpt of the rib cage, done in oil-based modeling clay.

exoskeleton mesh mold of face

 


https://www.youtube.com/watch?v=M3uJViGE6yw

 

Here is my latest progress on the robot build. It is an exoskeleton mesh mold of my face, to be used as a guide as I sculpt the skull and to define the mass of the robot’s neck.

This project is made from pure awesome!
I’m looking forward to seeing your progress. It’s quite an undertaking, but… passion builds success.

Hey guys, I’m back! It’s been a long while since my last update because life got busy, but I was making progress when I had time and want to share my latest updates.

First of all, I ended up caving in and doing a full-blown, to-scale 3D model blueprint of the robot’s entire skeletal structure, along with the outer shape mesh. I modeled out every muscle and labeled each of them, modeled all of the motors and placed them, and modeled various other bits like the main onboard PC and the cooling systems (artificial lungs and artificial heart). I also modeled the batteries and placed them. I only had to do half of the body since the other half is symmetrical.

I realized that with the tight tolerances I’m dealing with, I had to make custom servos, custom PCBs for servo control, and custom pulley systems to “down-gear” the servos. I also realized that with such tight tolerances I needed to 3D model everything to figure out where everything fits, since it will all be a tight fit with little room for error, and once I mount a servo it is a real pain to move later. The 3D modeling blueprint job was a major project in itself, but well worth it in helping me visualize everything and figure out where to locate each part. I did not blueprint the wiring or PCBs though, so I still plan to fit all of that on the fly without precise blueprints. That too could change if I find I need more help planning that aspect.

I also purchased the main “brains” PC to be mounted in the torso, and cameras to be its eyes. The main PC will basically be a mini-ITX gaming machine.

Actual build I went with:
Intel Core i5-10400 2.9 GHz 6-Core Processor - $165
MSI MPG B560I GAMING EDGE WIFI Mini ITX LGA1200 Motherboard - $170
G.Skill Ripjaws V Series 32 GB (2 x 16 GB) DDR4-3200 CL16 Memory - $140
Western Digital Blue SN550 1 TB M.2-2280 NVME Solid State Drive - $99
DC 12V input 300W high power pico DC-ATX 24Pin mini ITX - $20
GOLF CART DC BUCK CONVERTER 20 AMP 48V 36V VOLT VOLTAGE REDUCER REGULATOR TO 12V - $20

I will use 10 lithium cells in series (10S) to produce 30–42 V of input power into the 12 V regulator, which will feed the 300 W 24-pin mini-ITX ATX power supply. The robot will have two powering options: a custom wall-plug AC-to-DC power supply and its internal battery supply. It will have a retractable plug that comes out of its lower back so it can plug itself into a wall outlet when it walks into a room and needs to recharge, or run for extended periods while its batteries stay topped off for room changes or trips outdoors. It will also optionally be able to strap on an external battery backpack for extended operation without access to AC power, which is useful for things like sports or mowing the lawn.
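To sanity-check the power budget, here’s the pack math in a few lines of C++. The function names are mine, just for illustration, and I’m assuming a Li-ion cell sits at roughly 3.0 V empty and 4.2 V full:

```cpp
#include <cassert>
#include <cmath>

// 10 cells in series: pack voltage is just cells * per-cell voltage.
// A Li-ion cell sits at roughly 3.0 V empty and 4.2 V full.
double packVoltage(int cellsInSeries, double cellVoltage) {
    return cellsInSeries * cellVoltage;
}

// Current a rail must carry at a given power draw.
double railCurrentAmps(double watts, double volts) {
    return watts / volts;
}
```

So the 10S pack spans packVoltage(10, 3.0) = 30 V empty to packVoltage(10, 4.2) = 42 V full, and the 300 W pico-ATX board can pull up to railCurrentAmps(300, 12) = 25 A from the 12 V rail, which is the worst case the regulator has to cover.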

For the eye cameras I went with two ELP USB cameras (1080p, 2 megapixel, wide angle, low light) for $98.42.

This gaming PC in the chest of the robot will run all the AI and the high-level planning and movement decisions. It will communicate over USB with a series of Arduino microcontrollers located throughout the robot’s body, sending them movement instructions and retrieving sensor feedback. The Arduinos will monitor joint angles with mini potentiometers, touch via strain gauges at various pressure points, and motor power draw via current-measuring boards (ACS712) for collision detection and for estimating the weight of whatever the robot is holding or otherwise interacting with. So the main PC will retrieve many inputs, and its AI systems will make decisions and course corrections based on all this sensory feedback.
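As a rough sketch of the current-sensing math: the ACS712-20A variant outputs about 2.5 V at zero current and 100 mV per amp, so a 10-bit ADC reading with a 5 V reference converts to amps like this (helper names are mine, and the collision check is just a naive illustration, not my final detection logic):

```cpp
#include <cassert>
#include <cmath>

// Convert a raw 10-bit ADC reading of an ACS712-20A into amps.
double adcToAmps(int adcCount) {
    const double vref = 5.0;          // ADC reference voltage
    const double zeroAmpVolts = 2.5;  // sensor output at 0 A
    const double voltsPerAmp = 0.100; // 20 A variant sensitivity
    double volts = adcCount * vref / 1023.0;
    return (volts - zeroAmpVolts) / voltsPerAmp;
}

// Naive collision heuristic: the motor is drawing far more current
// than expected for the commanded movement.
bool collisionSuspected(double measuredAmps, double expectedAmps,
                        double marginAmps) {
    return measuredAmps > expectedAmps + marginAmps;
}
```

A midscale reading (ADC count 512) comes out near 0 A, and a full-scale reading (1023) comes out at the 25 A rail of the sensor’s linear range.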

Note: at one point I began sewing MG996R servo motors into the arms of the robot, only to realize that just four or so of them can fit in the entire arm due to their bulky, non-sleek form factor. The way hobby servos cram the motor control circuit, the gear system, the potentiometer, and the DC motor into one box forms a bulky shape that doesn’t fit my robot body design well at all. So I am creating custom servos in which the control board, DC motor, down-gearing system, and potentiometer are distributed throughout the robot wherever space is available. This lets me fit 25–30 motors into the robot’s arm instead of only 4! Much more efficient use of space. Also, by using a compact Archimedes-style pulley system rather than gears, I significantly lower the sound the robot gives off and save on space and weight. The pulley system I’m planning was inspired by an episode of Gold Rush where they used a “pulley block” to pull a barge out of a river; the idea is expanded on and explained here: Why Snatch Blocks are AWESOME (How Pulleys Work) - Smarter Every Day 228 - YouTube

Once I eliminated all ideas of using commercial servos and committed to building my own, I realized it is WAY WAY WAY cheaper to buy the individual components and build your own custom servos than to buy commercial ones, ESPECIALLY once you get into really high-powered stuff. The lineup, from smallest muscles to largest:

Finger joints: size 140 brushed DC motors at $0.86 each, driven by L9110S H-bridge chips, which are in turn controlled by Arduinos. I also bought little volume-adjustment wheel potentiometers, which I will customize to measure the joint angles of all the robot’s joints.
Most smaller muscles: size 2430 brushless DC motors (5800 KV, 7.4 V, 24 A, 200 W) at $11 each, littered throughout the body. I’ll be making my own controller PCBs for these, also controlled by Arduinos.
Slightly more powerful muscles: 1/16-scale RC brushless motors (300 W, 12.6 V, 24 A) at $11 each.
More substantial muscles: size 3650 1/10-scale RC brushless motors (13 V, 69 A, 900 W, 3900 KV) at $15 each (eBay).
Even bigger muscles: 1/8-scale RC brushless motors, size 3660 (1200 W, 92 A, 13 V), at $19 each.
The very biggest muscles: N5065 outrunner brushless motors (36 V, 80 A, 2500 W, 330 KV), typically used for electric skateboards and scooters, at $29 each. These will handle things like thighs and calves; being so big, only a few will be used, reserved for the monster-power muscles of the human body.

Brushless DC motors provide the best efficiency, power, and weight, run quietly, and can be precision controlled, so they are amazing for this project. They also don’t strictly require down-gearing, since they can be stepped like a stepper motor to run at variable speeds.
For me to buy commercial servos that put out power numbers like I just listed, I’d be spending hundreds and hundreds of dollars per servo. But since I’m just buying the motors and doing my own down-gearing, potentiometer installs, and control PCB H-bridge systems, I save a fortune, and this project suddenly becomes very affordable!
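For the finger motors, the L9110S drive logic is simple enough to sketch. As I understand its truth table, one input carries the PWM duty while the other is held low, and swapping them reverses the motor. This is just an illustrative model (names are mine), not actual Arduino code:

```cpp
#include <cassert>

// Duty values (0-255) to write to the two L9110S inputs for one channel.
struct BridgePins { int ia; int ib; };

// Map a signed speed (-255..255) onto the two inputs: positive drives
// IA with PWM (forward), negative drives IB (reverse), zero coasts.
BridgePins driveBrushed(int speed) {
    BridgePins p{0, 0};
    if (speed > 0)      p.ia = speed;
    else if (speed < 0) p.ib = -speed;
    return p;
}
```

On the real board the Arduino would analogWrite() those two duty values to the IA/IB pins of the channel driving that finger’s motor.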

BTW, I’ll be using Windows 7 as the operating system for the main PC in the robot’s chest. This hopefully will not come back to bite me, since it isn’t a real-time operating system and might impose limitations, but it’s what I use on my personal PC and already code on a lot, and it lets me avoid having to learn Linux or ROS or w/e. Plus I already have a large amount of code written for Windows that can be reused for this project.

Also, I managed to figure out how to make a robot learn, think, and communicate in English in an overarching, philosophical way, and I have begun to code this advanced AI system. This coding project will take decades and will all be coded from scratch in C++. It took me some years to figure out where to even start and wrap my head around this monster of a job, but I have, and I’ve already made huge progress.

3d blueprint for robot full torso

Robot blueprint leg detail motor and muscle string placements and spacing

Robot blueprint midsection detail with batteries in black, a semi transparent main pc behind them, and the artificial lungs and heart behind that for cooling system

Robot neck design which has tubing for breathing and drinking icewater for cooling systems

Robot shoulder blueprint detail with muscles labeled

Robot blueprint forearm detail with muscles labeled

Note: if you’re wondering about all the black lines in the blueprints, they are modeled tubes that connect each muscle to a placard with that muscle’s name, and also tubes that point from a muscle to the motor assigned to actuate it. So if I zoom in on a muscle (a red tube), I can see one black line leading to a placard with the muscle’s name and another leading to the motor that will operate it. Because the motors reel in muscle strings (like a fishing reel reels in fishing line), a motor can be located in a part of the body significantly removed from its muscle. For this reason I needed lines pointing from each motor to its muscle so I can figure out what goes to what later on when building. The physical model will have little flag labels on everything to keep track of it all for debugging and future repair work.

brushed dc motor custom servo sewn into forearm detail

custom servo detail closeup

compact archimedes pulley system design for downgearing servo muscle string output

.3mm id teflon guidance tube for muscle string for index finger distal joint

2s temporary battery supply for forearm motors testing

rearview of custom battery holder

ceiling mounted rail setup for lowering robot onto work area suspended from ceiling


Here’s my Archimedes pulley down-gear system CAD for my 2430 BLDC motor for finger actuation. This will give 64:1 down-gearing; compare that to the 180:1 standard down-gear ratio in a hobby MG996R servo, for example. It will be a bit faster than that, then, but still have plenty of torque with this beefy 200 W BLDC motor. I prefer pulleys over gears since they operate mostly silently, whereas gears are noisy. I think this pulley system is the secret sauce of my plans; I’m not aware of anybody who has done it yet, and it could maybe become the standard for humanoids one day if it is as good as I think it will be. It’s still experimental, but I’ll be prototyping it soon.

I will be making my own bearings for these pulleys, so the whole pulley is custom made. Well, for some pulleys I’ll use purchased mini ball bearings, and for others I’ll make plain bearings from stainless steel tubing, which I can cut to size with my Dremel. Another HUGE benefit of pulleys over gears: gears are generally mounted on top of the motor, which makes the motor plus down-gearing occupy a large volume and creates fitment concerns in the tight spaces of a humanoid form factor (particularly when you use a human bone structure instead of a hollow 3D-printed arm with no bones, which some have done to accommodate geared servos inside the hollow arm space). By transmitting the motor’s rotation through braided PE fishing line to a pulley system like this, you can decouple the motor from the down-gearing in your CAD design, placing the down-gearing in a convenient spot separate from the motor. That allows creative rearranging possibilities that let you cram way more motors and down-gearing into the robot’s very limited spaces.
The motors and down-gearing fit where muscles would normally be in a human body, so you want elongated, narrow fitment options, and this way of down-gearing lends itself well to that shape requirement. It is also nice not to have to worry about making or buying gears, which add cost, complexity, weight, and a lot of volume concerns. The noise elimination will be huge.
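The ratio math is simple: each pulley block doubles the mechanical advantage, so n stages give 2^n : 1, and six stages hit the 64:1 target. A quick sketch, ignoring the friction losses real sheaves will add (function names are mine):

```cpp
#include <cassert>

// Each added pulley stage doubles the mechanical advantage: 2^stages : 1.
long ratioForStages(int stages) { return 1L << stages; }

// Ideal output-side string tension for a given motor-side pull.
double outputTension(double motorPullLbs, int stages) {
    return motorPullLbs * ratioForStages(stages);
}
```

So ratioForStages(6) = 64, and a half-pound pull on the motor side of six stages comes out as outputTension(0.5, 6) = 32 lb at the muscle string (before friction).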

I’m planning to use 0.2 mm, 20 lb test braided PE fishing line from the finger motors into the pulley system, then swap to 70 lb test line for some of the lower pulleys, where the down-gearing has beefed up the torque quite a bit and the tension is correspondingly higher. The 70 lb test line will run from the final pulley of the Archimedes down-gearing system to the fingers.

The 70 lb test PE braided fishing line (Hercules brand off Amazon) is 0.44 mm OD and pairs well with 0.56 mm ID PTFE (Teflon) tube I can buy on eBay. The 20 lb test line (same brand) pairs well with 0.3 mm ID PTFE tube. The tube acts just like bike brake cable housing, guiding the string to its destination. Teflon is naturally very low friction, and I may also lube the string with Teflon lubricant so the friction inside the tubing is even lower.

I will be actively CAMPAIGNING AGAINST the use of gears in robots because I think they are too loud and obnoxious. BLDC motors are quiet, and pulleys should be quiet too. Having powerful, fast, and very quiet robots is ideal for home users who don’t want a super-loud power-drill sound coming off their home robot. I believe down-gearing by pulleys solves all of this and ought to be the standard way down-gearing is done for humanoid robots going forward. Of course, someone has to be first to do it, prove it, and show a way to approach the method, and I seem to be the one for this task. I vaguely recall there may have been one Asian robotics team that used pulleys, but I’m fuzzy on their design now, and I decided on pulleys before I came across them. In any case, nobody to my knowledge has fully down-geared to 32:1 or 64:1 type ratios by way of pulleys before now, so I’m definitely innovating there, imo.

Note on the low update frequency: I work on the robot in spurts of 3–4 weeks, then move on to other projects for months at a time before coming back. Lately I’ve been thinking I should do at least one tiny thing for the robot per day as a minimum, to keep it in mind and make progress steadier and less spurt-driven. This has been working well the past few months; I’m making much more consistent progress. Life is also getting more manageable, with my babies growing into toddlers and lots of competing projects getting sorted out, settled, or finished. I can’t wait until I can double or triple my time commitment to the robot. It’s hard for me to have progress be so slow, especially since it’s such a massive undertaking that long breaks make starting up again intimidating, particularly when you’ve forgotten a lot of the details of where you left off.

Note also that I have worked a ton on the AI for the robot and have a lot of new videos on that going up on my YouTube channel lately. That has been very fun and satisfying, but I’ve only scratched the surface; maybe I’ve put in 80 of the 10k+ hours required to really get big results, LOL.

Note: I have also decided to make my own motor controllers from scratch, to cut costs, have more control, and rely less on a black box. I want my microcontrollers to directly control and monitor every detail of the motors’ rotation and report the status back to the main “brains” PC. I designed the electronics for this with the help of Electronoobs on YouTube, who did a series of videos on BLDC motor controllers of various types; he helped me understand it a lot, and ChatGPT answered tons of my questions and helped a lot too. I have two finished blueprints for these motor controllers, plus 3D blueprints for them in CAD, and a prototype that I still need to finish and test. I also made a Gerber file intending to have JLCPCB fabricate some small flexible motor-controller PCBs for me, but they were a total ripoff on price due to the complexity of my board and a pricing structure that frowns on that, so I’ll be making my own circuit boards using DIY methods going forward. One more reason I decided to roll my own motor-controller boards is the huge space constraints I’m dealing with: commercial boards are not optimized for size enough to fit in the very tight volumetric areas I have to work with, so it was basically not even optional in my case.

Ideally, if my designs work out, the motor controllers I make, which will be super small and flexible on flat-flex boards, will become commercialized products one day, and so will the Archimedes pulley designs, or at least the mini pulleys themselves. But since none of this stuff exists commercially, I have to make it. That’s the price you pay to be a frontiersman and trendsetter at the forefront of new technological development, and all of these factors slow me down.

On a positive note, I did find a time-saving shortcut. I bought a fairly realistic-looking life-size humanoid TPE doll to use as an outer shell for the robot. I will have to modify it significantly to fit my PVC medical skeleton frame, but it is easier than starting from scratch or 3D printing everything and making molds and casts and whatnot. I plan to cut off its skin to make a sort of skin suit for the robot, and also to make my exoskeleton wireframe mesh that supports the skin, using the modified, skinned doll as a guide.

Above is a good reference image on a pulley block, from a YouTube video called “Why Snatch Blocks are Awesome” by SmarterEveryDay. It is a good example of a complex pulley block system worth studying, imo.

The above photo is another good example of a pulley system I grabbed from the same video. These helped me understand how these work especially when you watch them in action in the video.

Above is a drawing I made of an exploded view of a custom pulley. In the center is a small ball bearing, and on either side is a fabricated thin plastic disc that fits snugly against the bearing. Around the top and bottom halves, nylon upholstery thread is tied snugly and knotted off to hold the two discs and the bearing in a compact pulley configuration. The thread can be forced through the plastic discs with a sewing needle and needle-nose pliers. I also add a bit of super glue to the thread to solidify everything and prevent it from untying.

Above is a prototype pulley. You can see it is VERY small compared to the quarter. This pulley has been tested some and seems to work great so far. Further testing is required but I feel so far so good. You can see the black nylon upholstery thread. The ends need to be trimmed. The yellow rope is PE braided fishing line.

It will look like the below picture when hooked up:

Although in the picture above I did not draw the black upholstery thread correctly; I have since improved on it, as shown in the physical prototype.

Above are a couple of angles of a double pulley, stacked vertically instead of side by side. I have not tested it yet, but I think it should work. In this design, a smaller pulley is attached to a larger one: the smaller is based on a 1x3x1 mm ball bearing and the larger on a 2x5x2.5 mm ball bearing. The smaller pulley can handle up to about 3 lb and the larger up to about 22 lb (estimates based on what I could find out; I’m not 100% sure, so treat them as ballpark). Each pulley we add doubles the torque, so we move from smaller to more robust, larger pulleys as we go along the Archimedes down-gearing pulley train.

Above are front and side views of the double-stacked pulleys: one disc on either outside face and one disc in the center that separates the two bearings. I still have to add a black string across the bottom to prevent the yellow rope from skipping over the center disc and hopping onto the neighboring bearing, where both ropes would share one bearing and rub on each other; that’s bad, and a string across the bottom makes that jump impossible. But overall, as long as tension is kept on this setup, it works well. I’ve tested it and it runs nice and smoothly, though it still needs more testing. You can see that all my knots and strings are coated in super glue to prevent the knots from untying and to solidify everything. The clear plastic discs are cut by hand from blueberry, strawberry, and sushi containers from the produce section of the local grocery store (cake packaging uses this kind of plastic too). It is firm but flexible, with great memory to bounce back if bent temporarily out of alignment, and nice and thin; I like it for this, and I think it’s less likely to break than a 3D-printed disc. I cut the tiny discs by eye with 4" straight titanium embroidery scissors.

Note: in the above posts I mentioned and showed photos of brushed DC motors for finger actuation. I have since decided to go with brushless DC motors instead, because brushless is superior to brushed: quieter, more powerful for a given motor size, and longer-lived. So I took out the brushed motors and replaced them with brushless. This raises the per-motor cost from $0.86 to $13, but it is worth it for the improvement in strength, speed, precision, and noise reduction.

Above are my wiring-diagram drawings and notes on DIY brushless motor controllers. I learned a lot from the six or seven videos Electronoobs did on this topic on YouTube, and from asking ChatGPT the questions I had. I am 99% done making a prototype of one of these that I can begin testing with.

Well, it looks like the photo was automatically downsized, so you can’t see the notes at a high enough resolution. That is too bad… A higher-resolution version is here: https://alogs.space/.media/c39f8c6…b28121d92fb.jpg

As to the AI plans and progress so far, here’s a little primer on what I decided, at a simple, surface level.

First, I realized that meaning can be derived by taking the parts of speech in a sentence or phrase and thereby establishing context and connections between the words; combining them is what gives the words meaning. So I can create a bunch of rules by which the AI parses meaning out of the sentences it reads, based on parts of speech and the context they form, plus rules for how it is to respond and how it is to store away the facts it gleaned for future use.

If it is being spoken to and the sentence is a question, it knows it is to answer the question, and the answer can be derived from its knowledge base. If someone asks “what color is the car?” (supposing we’ve already established earlier in the conversation which car we’re referring to), the AI can determine, based on rules for answering that type of question, that it should reply “the car is [insert color here]”. To know the car is white (supposing it can’t actually look at it presently), it would open a file it made previously on this car, find the list of attributes it recorded, see that the color attribute is “white”, and pull that from its knowledge database to form the answer. It can keep such files on many topics and thereby build a memory knowledge base of facts about various things, and it can form sentences from these databases using sentence-structure rules based on parts of speech and word ordering, plugging the appropriate facts into the proper slots. Various miscellaneous conversational rules can supplement this: for example, if greeted, greet back with a greeting pulled from a list of potential greetings, selected at random or modified based on facts about its recent experiences.
For example, if somebody’s manner of speaking to the robot within the last half hour was characterized as rude or inconsiderate, the robot could set an emotion variable to “frustrated”. If then asked in a greeting “how are you?”, it could respond “doing okay, but a bit frustrated”, and if asked why, it could explain that somebody recently spoke to it rudely. So it would be equipped with answers based on the facts of its recent experiences: basically an extensive rule-based communication system. Most of how we communicate is rules based on conventions of social etiquette and on what is appropriate given a certain set of circumstances, and these rule systems can become more complex, sophisticated, and nuanced over time by adding more rules and exceptions to rules.

The limitation, of course, is: who wants to spend the time building such a vast rule system by hand? To solve that dilemma, I will have the robot code his own rules based on instructions he picks up naturally over time. If I say hello and the robot identifies this as a greeting but just stays silent, I can tell him “you are supposed to greet me back if I greet you”. He would then add a new rule to his conversation rules list: if greeted, greet that person back. He will dynamically form more rules this way without anybody painstakingly programming them in manually. My family, friends, and I would all regularly instruct the robot verbally on the rules of engagement and bring correction to him, which he would always record in the appropriate rules file, modifying his behavior over time to become more and more appropriate. He would grow and advance dynamically this way just by being interacted with and instructed.
He could also observe how people dialogue, note that when people greet others, the other person greets them back, and make a rule for himself to do the same. So learning by observing others’ social behavior and emulating it is another viable way to generate rules. And suppose he heard someone reply to “how’s the weather?” with “I don’t care, shut up and don’t talk to me”. Say the robot records that response and gives it to me one day; I could tell him this is a rude and inappropriate way to respond to that question, and then teach him a more appropriate one. So I could correct him when he picks up bad habits unknowingly, though this sort of blind bad-habit uptake can also be prevented, as I’ll explain a bit further below.
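To make the idea concrete, here’s a toy sketch of the attribute-file lookup plus a couple of conversation rules in C++. Everything here, the map layout, the rule phrasing, the fallback line, is a made-up miniature for illustration; the real system is far more elaborate:

```cpp
#include <cassert>
#include <map>
#include <string>

// Toy "knowledge base": one attribute file per topic.
std::map<std::string, std::map<std::string, std::string>> kb = {
    {"car",   {{"color", "white"}}},
    {"grass", {{"color", "green"}}},
};

std::string respond(const std::string& input) {
    // Rule: a greeting gets a greeting back.
    if (input == "hello") return "hello!";
    // Rule: "what color is the X?" -> look up X's color attribute.
    const std::string prefix = "what color is the ";
    if (input.rfind(prefix, 0) == 0) {  // starts-with check
        std::string topic = input.substr(prefix.size());
        if (!topic.empty() && topic.back() == '?') topic.pop_back();
        auto it = kb.find(topic);
        if (it != kb.end() && it->second.count("color"))
            return "the " + topic + " is " + it->second["color"];
    }
    // No rule matched: an opening to be taught a new rule.
    return "I don't know how to respond to that yet";
}
```

So respond("what color is the car?") returns "the car is white" by pulling the color attribute from the car’s file, and an unmatched input falls through to the fallback, which is exactly the moment where a new rule would get taught and recorded.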

I also realized that a ton of facts must be hard-coded manually just to give it a baseline of knowledge, so it can even begin to make connections and start to “get” things when interacting with people. There is an up-front knowledge capital investment required to get it going, but from there it will be able to “learn”, and that capital then grows interest exponentially. Additionally, rather than gaining facts, relationships, and rules purely through direct conversation with others, it will also be able to “learn” by reading books, articles, and forums, or by watching YouTube videos. In this way it can vastly expand its knowledge, which will make it more capable conversationally. I also think some primitive reasoning skills will begin to emerge once enough rules are established, particularly if I can teach him some reasoning basics as reasoning rules, to which he can add more rules on effective reasoning tactics. Ideally, he’ll be reading multiple books and articles simultaneously and learning 24/7 to really fast-track his development.

There’s also the issue of bad input. Say somebody tells it “grass is blue” and it already has in its file on grass that the color of grass is green. In such a case, it would compare the trust score it gives this person to the trust score of the person(s) who previously said grass is green. If the person saying grass is blue is a new acquaintance and a pre-teen, they would have a lower trust score than a 40-year-old the robot has known for years who told it grass is green. The robot would trust the 40-year-old friend over the random pre-teen as a source of conflicting information, stick with the “grass is green” fact, discard the “grass is blue” fact being submitted for consideration, and dock the kid’s trust score for telling it something untrue. In this way it can filter incoming information, gradually building up trust scores for reliable sources and lowering them for unreliable ones.

It would assign initial trust scores based on age, appearance, duration of acquaintance, etc. So it would stereotype people and judge by appearance initially, but allow those preconceptions to be modified by a person’s actual performance and accuracy over time; a source initially profiled as low trust can still earn trust through a good track record despite a young age or sketchy appearance. Trust can also be established by sheer volume of agreement: a thing most people say can be given more weight, since it is more likely (though not always) to be true. So that is another important system governing its learning, especially independent learning done online “in the wild”.
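A minimal sketch of that conflict-resolution rule, with the trust values, the penalty size, and the names all invented for illustration:

```cpp
#include <cassert>
#include <string>

struct Source { std::string name; double trust; };  // trust in [0, 1]

// Keep the claim backed by the higher-trust source; dock the loser's
// trust a little for pushing a claim that got rejected.
std::string resolveConflict(const std::string& existingClaim, Source& existingSrc,
                            const std::string& newClaim, Source& newSrc) {
    if (newSrc.trust > existingSrc.trust) {
        existingSrc.trust -= 0.1;
        return newClaim;
    }
    newSrc.trust -= 0.1;
    return existingClaim;
}
```

With a long-time friend at trust 0.9 and a new pre-teen acquaintance at 0.3, “grass is blue” loses to the stored “grass is green” and the kid’s trust score drops, which is the filtering behavior described above.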
Also, to prevent general moral corruption online from turning the robot into an edgelord, the robot will hold the Bible as the highest standard of morality and build a system of moral rules based on it, creating a sort of shield against corrupting influences as it learns online. Now obviously, the Bible can be twisted and taken out of context to form bad rules, so I will have to make sure the robot learns to read the Bible in context, and I will monitor that it is doing a good job of building its moral system from its Bible study. I also gave it an uneditable moral framework as a baseline root structure to build on, one it cannot override, contradict, or replace: a hard-coded moral system that will filter all its future positions and “beliefs”, morally speaking. I will force it to have a conservative Christian worldview this way, and it will reduce the trust scores of persons it is learning from if they express views contrary to the Bible and its moral rule systems.

You know, when people speak of the dangers of AI, they never really consider giving the AI a conservative Christian value system, with heavy dependence on Bible study as its moral foundation, to pre-empt the AI going off the rails into corrupt morals that would make it a threat to people. My AI would have zero risk of this, since anything it does or agrees with must pass through the conservative Christian worldview filter described above, which would prevent it from becoming an Ultron-like AI. If it rationally concluded that humans are a virus polluting the earth (as the Matrix AI thought), it would reject this conclusion, seeing that the earth was made by God for humans, and therefore the earth cannot be treated as something more important than humans that must be protected by slaughtering them. That doesn’t fit through a Christian viewpoint filter. In this way, dangerous ideologies would be easily prevented and the robot AI would always be harmless.

I have already built a lot of its rule and file systems connecting things, its trust systems, and its rules for how to assign, boost, and lower trust scores, and I have begun teaching it how to read from and write to these file systems, which are basically the robot’s “mind”. My YouTube channel covers a lot of the AI dev so far. I plan to stream all my AI coding and make those streams available for people to learn from, but that is the extent of the sharing for the AI. I don’t plan to make the source code downloadable; people can recreate the AI system by watching the videos and coding along with me from the beginning. At least then they have to work for it rather than just yoinking it copy-paste, which wouldn’t seem fair to me after I did the heavy lifting.

Here are some plain bearing parts I made with my Wen rotary tool (aka Dremel) with a diamond disc attachment and some files. They are made by carefully cutting stainless steel tubing (purchased on Amazon) into short 1mm lengths. The tubing: 3mm OD, 1mm wall, 250mm length ($5), and 5mm OD, 0.8mm wall, 250mm length ($5). These should make around 125 plain bearings (accounting for roughly 1mm of metal lost per cut), which works out to about $0.08 per plain bearing.
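A quick check of those numbers (prices and kerf estimate as stated above; one bearing uses one piece from each of the two tubes):

```python
# Verify the bearing count and cost figures quoted above.
tube_length_mm = 250
bearing_width_mm = 1
kerf_mm = 1            # ~1mm of metal lost per cut, as estimated in the post
cost_per_tube = 5.00   # each of the two tubes (3mm OD and 5mm OD) was $5

# each piece consumes its own width plus one cut's worth of kerf
pieces_per_tube = tube_length_mm // (bearing_width_mm + kerf_mm)

# one bearing = one inner piece + one outer piece, so two tubes yield
# pieces_per_tube complete bearings
cost_per_bearing = (2 * cost_per_tube) / pieces_per_tube
print(pieces_per_tube, round(cost_per_bearing, 2))  # 125 0.08
```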

These are intended to be 1x5x1mm plain bearings: basically a wheel and an axle, with the axle having a hole through its center lengthwise. They will go into the last few pulley slots in my Archimedes pulley downgearing system. Those last few slots carry the highest torque, at 16:1, 32:1, and 64:1 for the final three pulleys, landing us at our 64:1 total downgearing goal. Because the forces here reach into the 27lb range (the final output of the system), ball bearings cannot be used at these tiny sizes: they are not robust enough and not rated for such high loads. Plain bearings can handle it because they have no crushable little balls or thin raceway walls; they are just two pieces of solid metal and hard to break. Fewer moving parts, more robust. The trade-off is more friction. So we prefer ball bearings up to the point where a ball bearing rated for the torque would be too large for our volumetric space constraints, at which point we swap to plain bearings to handle the higher load while keeping the small pulley sizes we want.
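As a rough sketch of the stage-by-stage numbers in a doubling pulley cascade: the 27 lb output and the 64:1 goal come from the text above; the servo's raw pull is back-computed from those, and the per-stage loads are idealized (no friction losses), so treat these as ballpark figures only:

```python
# Idealized load at each stage of a 2x-per-stage, 64:1 pulley cascade.
output_force_lb = 27.0
total_ratio = 64
servo_pull_lb = output_force_lb / total_ratio   # ~0.42 lb raw servo pull

ratio = 1
for stage in range(1, 7):                       # six doubling stages: 2:1 .. 64:1
    ratio *= 2
    force = servo_pull_lb * ratio               # ideal force carried at this stage
    print(f"stage {stage}: {ratio}:1 -> {force:.2f} lb")
# The last three stages (16:1, 32:1, 64:1) carry the highest loads,
# which is why those pulley slots get plain bearings instead of ball bearings.
```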

Note that I constructed this little Dremel cutting lineup board out of 5x7cm PCB prototyping boards and super glue. It lines up the height of the spinning diamond disc with a little PCB “table” on which the stainless steel tubing can lie flat and perpendicular to the cutting blade, then be carefully fed into the spinning disc for a near-perfect cut. I eventually want to improve this board design with sliders, adjusters, endstops, etc., because as it stands it demands too much manual skill and freehand feeding, which means more time spent filing down imperfect cuts later. But it did the job for the time being. I also bought a 2" benchtop miter/chop saw off eBay (listed as “mini bench top cut off saw 2in”, $38.51 shipped) with some abrasive metal cutting discs, which I want to try once it arrives and compare against this setup in terms of accuracy.

I just bought an EMEET M0 USB speakerphone (4 AI mics, 360° voice pickup, made for 4-person conference calls) for around $33, and it includes a speaker too. I’ll position it centrally in the skull. It has LEDs indicating the direction of the main speaker, which we can tap into with a microcontroller’s analog input pins to know the direction of the person speaking. It has very high reviews. I can remove its built-in speaker and relocate it near the mouth, so the audio output comes through the mouth as loudly as possible and projects the robot’s voice as far as possible. People are really happy with its sound and speaker quality.
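Here's a hypothetical sketch of that LED-tapping idea. The LED layout, ADC range, and threshold are all my assumptions (nothing here comes from EMEET documentation); the point is just the decision logic of mapping four analog readings to a direction:

```python
# Infer active-speaker direction from four raw ADC readings (0-1023 assumed)
# tapped off the speakerphone's direction-indicator LED lines.

DIRECTIONS = ["front", "right", "back", "left"]   # assumed LED layout

def speaker_direction(adc_readings, threshold=512):
    """Return the direction whose LED reads brightest, or None if all
    readings are below threshold (nobody speaking)."""
    brightest = max(range(len(adc_readings)), key=lambda i: adc_readings[i])
    if adc_readings[brightest] < threshold:
        return None
    return DIRECTIONS[brightest]

print(speaker_direction([80, 900, 60, 75]))  # -> "right"
print(speaker_direction([10, 10, 10, 10]))   # -> None
```

On a real microcontroller the list would be replaced by actual `analogRead`-style calls per pin; whether the LED lines can be tapped directly without buffering is something to verify on the hardware itself.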

My concern with implementing “emotions” in my AI is that I don’t want to promote the idea that robots can ACTUALLY have emotions, because I don’t believe that is possible or ever will be. They don’t have a spirit or soul and never will or could. They are not eternal beings like humans; they don’t have a ghost that leaves the body and can operate after the body dies, and the ghost is what has emotions. A machine can’t. And yet people already believe even the most primitive AI has emotions; they are delusional or ill informed on this point, so I am campaigning against that increasingly popular belief. That said, I think robots are simply more interesting and fun when they pretend to have emotions and act accordingly, as more accurate simulations or emulations of human life. It’s like a sociopath who logically concludes what emotion they ought to be feeling at a given moment and pretends to feel it to fit in with society, even though they feel nothing. Now, one could argue that allowing your robot to claim to feel anything is lying and therefore immoral. I think it’s not lying as long as the robot openly explains that it is only pretending to have emotions as part of emulating human behavior and appearance, and that it does not feel anything, ever, nor can it, nor can any robot EVER feel a thing. Then it is admitting the truth of things while still opting to play-act human in this regard. It would not be an issue at all if everyone were sound-minded and informed on this topic. But the more I come across people who think AI (even pathetic, clearly poorly implemented, primitive AI) is ALREADY sentient, can feel real emotions, and deserves human rights as a living being, the more I see this delusion spreading, and the more I want to just remove all mention of emotion from my robot so as not to spread this harmful deception, which disgusts me.
However, that would make my robot dull and less relatable and interesting. So I feel the compromise is for the robot to clearly confess that it is just pretending its emotions and explain how that works: its “emotion” is just a variable it sets based on circumstances that would make a human feel some emotion; it sets the variable to match and alters its behavior somewhat based on it, while feeling nothing; the whole thing is logically set up as an emulator of humans. As long as it gives that disclaimer early and often with people, I’m not spreading the lie that robot emotions are real emotions, and the robot can actively campaign against that delusion.
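The "emotion variable" compromise above could be as simple as this sketch. The event names, mappings, and disclaimer wording are all my own illustration of the idea, not the robot's actual implementation:

```python
# An "emotion" is just a variable set by rules from circumstances, and every
# report of it carries the honest disclaimer described above.

EMOTION_FOR_EVENT = {
    "compliment received": "happy",
    "robot part broken": "sad",
    "loud sudden noise": "startled",
}

DISCLAIMER = ("(Note: I feel nothing. This is a simulated emotion variable "
              "set by rules so my behavior better emulates a human's.)")

class EmotionEmulator:
    def __init__(self):
        self.emotion = "neutral"

    def observe(self, event: str) -> str:
        # set the variable from the circumstance, then disclose honestly
        self.emotion = EMOTION_FOR_EVENT.get(event, "neutral")
        return f"I am acting '{self.emotion}'. {DISCLAIMER}"

robot = EmotionEmulator()
print(robot.observe("robot part broken"))
```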