My Advanced Realistic Humanoid Robot Project

Hello all! I'm hoping to share my progress and get helpful feedback, tips, and suggestions on my humanoid robot project! I haven't posted an update in some years, but this is a multi-decade project after all, not a full-time job.

The project goal is a real-life Terminator-type robot. I'm going to be using Arduino heavily for this project. A main onboard PC in the robot's chest will connect over USB to a series of Arduinos. The Arduino boards will get instructions from the main PC on where to move the motors in the robot's body, and they will reply with their progress as well as sensory input - for example, readings from pressure sensors used to measure muscle tension and detect collisions.
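To make the PC-to-Arduino link concrete, here's a minimal sketch of the kind of command/telemetry loop I have in mind. The line format ("M,<motor>,<target>"), the message names, and the pin are placeholder assumptions for illustration, not a finalized protocol.

```cpp
// Minimal sketch of the PC <-> Arduino link (illustration only, not final).
// The PC sends lines like "M,3,1200" (move motor 3 toward position 1200);
// the Arduino acknowledges and periodically reports sensor readings back.
// Command names, IDs, and the pin are placeholders.

const int PRESSURE_PIN = A0;   // example pressure sensor on a muscle tendon

void setup() {
  Serial.begin(115200);        // USB link to the main PC in the chest
}

void loop() {
  // 1. Read any pending command line from the PC
  if (Serial.available()) {
    String line = Serial.readStringUntil('\n');
    if (line.startsWith("M,")) {
      int firstComma  = line.indexOf(',');
      int secondComma = line.indexOf(',', firstComma + 1);
      int motorId = line.substring(firstComma + 1, secondComma).toInt();
      int target  = line.substring(secondComma + 1).toInt();
      // ...drive motor 'motorId' toward 'target' here...
      Serial.print("ACK,");      // tell the PC we accepted the command
      Serial.print(motorId);
      Serial.print(',');
      Serial.println(target);
    }
  }

  // 2. Periodically report sensory input (muscle tension / collision)
  static unsigned long lastReport = 0;
  if (millis() - lastReport > 50) {          // ~20 Hz telemetry
    lastReport = millis();
    int tension = analogRead(PRESSURE_PIN);  // 0-1023 raw reading
    Serial.print("TENSION,");
    Serial.println(tension);
  }
}
```

On the PC side, the main program would just open the serial port, send those "M,..." lines, and log the "ACK,..." and "TENSION,..." replies.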

My robot will ultimately be able to walk, talk, do chores, make facial expressions, paint paintings, manufacture products, do sports, dance, etc. It will also look very realistic. It will use silicone skin and a human-like bone structure. Custom servo motors will form a pulley system for muscle actuation. It will have webcams inside its eyeballs to give it computer vision of its surroundings. It will have advanced artificial intelligence. It will be a 30 year project I think.

I am very grateful to the Arduino community for all the tutorials and help I have gotten thus far.

Project website:
http://www.artbyrobot.com

Socials:

http://www.twitter.com/artbyrobot

3d Blueprints of Robot to Scale:
full torso overview

detail of labeled muscles and motors placement on arm

detail of legs filled with motors

detail of shoulder and neck blueprints

neck design closeup

batteries in abdomen area and main pc mounted behind them and cooling systems behind that

hand fabrication from clay to epoxy composite

epoxy composite ulna bone finished

thumb with artificial tendons shown flexing

robot hand bones sewn into flexible artificial tendons of spandex

robot hand side view sewn and ready for electronics
hand on printer

clay ribcage sculpt progress

ribcage section converted to epoxy composite and given fabric sewn wrapper

brushed dc motor custom servo sewn into forearm detail

custom servo detail closeup

2s temporary battery supply for forearm motors testing

rearview of custom battery holder

0.3 mm ID PTFE (Teflon) guidance tube for muscle string for index finger distal joint

ceiling mounted rail setup for lowering robot onto work area suspended from ceiling

compact archimedes pulley system design for "downgearing" servo muscle string output

Here's my Archimedes pulley downgear system CAD for my 2430 BLDC motor for finger actuation. It will give 64:1 downgearing; compare that to the roughly 180:1 standard downgear ratio in a hobby MG996R servo, for example. So it will be a bit faster than that, but still with plenty of torque given this beefy 200 W BLDC motor. I prefer pulleys over gears since they operate mostly silently, whereas gears are noisy. I think this pulley system is the secret sauce of my plans; I'm not aware of anybody who has done it yet. It could maybe become the standard for humanoids one day if it is as good as I think it will be. It's still experimental, but I'm going to be prototyping it soon.

I will be making my own bearings for these pulleys, so the whole pulley is custom made. For some pulleys I'll be using purchased mini ball bearings, and for others I'll make plain bearings from stainless steel tubing, which I can cut to size with my Dremel.

Another HUGE benefit of pulleys over gears: gears are generally mounted on top of the motor, so the motor plus its downgearing occupies a large volume, which creates fitment problems in the tight spaces of a humanoid form factor (particularly when you use a human bone structure instead of a hollow 3D-printed arm with no bones, which some have done to accommodate geared servos inside the hollowed arm space). By carrying the motor's output through braided PE fishing line to a pulley system like this, you decouple the motor from the downgearing in your CAD design. The downgearing can be placed wherever it is convenient, separate from the motor, which opens up creative rearranging possibilities and lets you cram far more motors and downgearing into the robot's very limited spaces. The motors and downgearing fit where muscles would normally be in a human body, so you want elongated, narrow packages, and this way of downgearing lends itself to that shape requirement well. It is also nice not to have to make or buy gears, which add cost, complexity, weight, and a lot of volume. And the noise elimination will be huge.
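To put rough numbers on the 64:1 idea: each pulley stage halves the speed and doubles the available pull, so six stages give 2^6 = 64:1. Here's a back-of-envelope sketch; the motor torque and spool radius are assumed placeholder values, not measured specs of my 2430 motor, but they land in the same ballpark as the ~27 lb final output I mention further down.

```cpp
// Back-of-envelope check of the 64:1 pulley cascade.
// Motor torque and spool radius are assumed guesses, NOT measured 2430 specs.
#include <cstdio>
#include <cmath>

int main() {
  const int    stages      = 6;                      // each pulley stage is 2:1
  const double ratio       = std::pow(2.0, stages);  // 2^6 = 64:1 total
  const double motorTorque = 0.02;   // N*m  -- assumed continuous torque
  const double spoolRadius = 0.010;  // m    -- assumed 10 mm winding radius

  const double motorSideTension = motorTorque / spoolRadius;  // N on the thin motor-side line
  const double fingerTension    = motorSideTension * ratio;   // N on the finger-side line (ideal, frictionless)

  std::printf("ratio %.0f:1, motor-side pull %.1f N, finger-side pull %.0f N (~%.0f lbf)\n",
              ratio, motorSideTension, fingerTension, fingerTension / 4.448);
  return 0;
}
```

With those assumed numbers the finger-side pull comes out around 128 N (~29 lbf) before friction losses, and the travel at the finger is 1/64 of the string the motor winds in.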

I'm planning to use 0.2 mm, 20 lb test braided PE fishing line on the finger motors running to the pulley system, then swap to 70 lb test line for some of the lower pulleys, where the downgearing has multiplied the tension quite a bit, so a thicker line is needed there. The 70 lb test line will run from the final pulley of the Archimedes downgearing system to the fingers.

The 70 lb test PE braided fishing line (Hercules brand off Amazon) is 0.44 mm OD and pairs well with the 0.56 mm ID PTFE (Teflon) tube I can buy on eBay. The 20 lb test PE braided line (also Hercules off Amazon) pairs well with 0.3 mm ID PTFE tube. The tube acts just like bicycle brake cable housing, guiding the string to its destination. Teflon is naturally very low friction. I may also lubricate the string so the friction inside the tubing is even lower; I'd use a PTFE lubricant for that.

I will be actively CAMPAIGNING AGAINST the use of gears in robots because I think they are too loud and obnoxious. BLDC motors are quiet, and pulleys should be quiet too. Powerful, fast, and very quiet robots are ideal for home users who don't want a power-drill racket coming off their home robot. I believe downgearing by pulleys solves all of this and ought to become the standard approach for humanoid robots going forward - but of course someone has to be first to do it, prove it, and show a way to approach the method, and I seem to be the one for that task. I can't recall for sure, but there may have been one Asian robotics team that used pulleys; I decided on pulleys before I came across that team, and I'm fuzzy on their design now. In any case, nobody to my knowledge has fully downgeared to 32:1 or 64:1 type ratios by way of pulleys before now, so I'm definitely innovating there, in my opinion.

A note on the low update frequency: I work on the robot in spurts of 3-4 weeks, then move on to other projects for months at a time before coming back. Lately I've been trying to do at least one tiny thing for the robot per day as a minimum, to keep it in mind and make progress steadier rather than in spurts. This has been working well the past few months. I'm making much more consistent progress, and life is also getting more manageable with my babies growing into toddlers and a lot of other competing projects getting sorted out, settled, or finished. I can't wait until I can double or triple my time commitment to the robot. It's hard for me that progress is so slow, especially since the project is such a massive undertaking that the long breaks make starting up again intimidating, particularly when you've forgotten a lot of the details of where you left off.

Note also that I did work a ton on the AI for the robot and have a lot of new videos on that going up on my YouTube channel lately. That has been very fun and satisfying, but I've only scratched the surface. Maybe I've put in 80 hours of the 10k+ hours required to really get big results, LOL.

Note: I have also decided to make my own motor controllers from scratch, to cut costs, have more control, and rely less on a black box. I want my microcontrollers to directly control and monitor every detail of the motors' rotation and report the status back to my main "brain" PC. I designed the electronics for this with the help of Electronoobs on YouTube, who did a series of videos on BLDC motor controllers of various types; he helped me understand it a lot, and ChatGPT answered tons of my questions and helped a lot too. I have two schematic blueprints for these motor controllers finished, plus 3D blueprints for them in CAD. I also built a prototype which I still need to finish and test. I made a Gerber file intending to have JLCPCB fabricate some small flexible motor controller PCBs for me, but they were a total ripoff on price due to the complexity of my board and a pricing structure that frowns on that, so going forward I'll be making my own circuit boards using DIY methods instead. One more reason I decided to roll my own motor controller boards is the huge space constraints I'm dealing with: commercial controllers are not optimized for size enough to fit in the very tight volumes I have to work with, so it was basically not even optional in my case.
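As an example of the kind of low-level monitoring I mean, here's a rough Arduino-style sketch that just reads a BLDC's three Hall sensors and reports the commutation state and a step count back to the PC. The pins are placeholders and this is not my actual controller firmware, just an illustration of the idea.

```cpp
// Rough illustration of "monitor every detail of the rotation":
// read the three Hall sensors of a BLDC and report the commutation
// state and a crude step count to the PC. Pins are placeholders;
// this is not my actual motor controller firmware.

const int HALL_A = 2;
const int HALL_B = 3;
const int HALL_C = 4;

int lastState = -1;
long stepCount = 0;   // rotor position in Hall steps (6 per electrical rev)

void setup() {
  pinMode(HALL_A, INPUT_PULLUP);
  pinMode(HALL_B, INPUT_PULLUP);
  pinMode(HALL_C, INPUT_PULLUP);
  Serial.begin(115200);
}

void loop() {
  // Pack the three Hall lines into a 3-bit state (1..6 when valid)
  int state = (digitalRead(HALL_A) << 2) |
              (digitalRead(HALL_B) << 1) |
               digitalRead(HALL_C);

  if (state != lastState && state != 0 && state != 7) {
    stepCount++;            // a real version would also track direction
    lastState = state;
    Serial.print("HALL,");
    Serial.print(state);
    Serial.print(",STEPS,");
    Serial.println(stepCount);
  }
}
```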

Ideally, if my designs work out, the motor controllers I make - super small, on flexible flat-flex boards - will become commercial products one day, and so will the Archimedes pulley designs, or at least the mini pulleys themselves will be purchasable. But since none of this exists commercially yet, I have to make it. That's the price you pay to be a frontiersman and trendsetter at the forefront of a new area of technology development. All of these factors slow me down.

On a positive note, I did find a time saver/shortcut. I bought a lifesize, fairly realistic-looking humanoid doll to use as an outer shell for the robot. It is a TPE doll. I will have to modify it significantly to fit my PVC medical skeleton frame, but it is easier than starting from scratch or 3D printing everything and making molds and casts. I plan to cut off its skin to make a sort of skin suit for the robot, and also to make the exoskeleton wireframe mesh that supports the skin, using the modified, skinned doll as a guide.




Here are some photos of my custom pulleys I'm making to downgear my BLDC motors. They are made from tiny ball bearings and plastic discs I cut out of recycled plastic containers, plus a little string and super glue. I have tested some of them a bit and they seem to be working pretty well so far.

As to the AI plans and progress so far, here's a little primer on what I decided on, in a simple, surface-level way.

So first, I realized that meaning can be derived by taking the parts of speech in a sentence or phrase and thereby establishing context and connections between words; combining words this way is what gives them meaning. So I can create a bunch of rules by which the AI parses meaning out of the sentences it reads based on parts of speech and the context they form, then rules for how it is to respond and how it is to store away the facts it gleaned for future use. If it is being spoken to and the sentence is a question, it knows it is supposed to answer the question, and the answer can be derived from its knowledge base. If someone asks "what color is the car?", and supposing we've already established earlier in the conversation which car we're referring to, the AI can determine that it should answer "the car is [insert color here]" based on rules for answering that type of question. To know the car is white (supposing it can't actually look at it right now), it would look up the file it made previously on this car, find the list of attributes it recorded, see that the color attribute is "white", and pull that from its knowledge database to form the answer.

I realized it can keep these files on many topics and thereby build a sort of memory/knowledge base with facts about various things, and it can form sentences from these knowledge files using sentence-structure rules based on parts of speech and word order, plugging the appropriate facts into the proper slots. Various miscellaneous conversational rules can supplement this: if greeted, greet back with a greeting pulled from a list of potential greetings, selected either at random or modified based on facts about its recent experiences. For example, if somebody's manner of speaking to the robot within the last half hour was characterized as rude or inconsiderate, the robot could set an emotion variable to "frustrated", and if asked in a greeting "how are you?" it could respond "doing okay but a bit frustrated". If the person then asked why it is frustrated, it could say that it became frustrated because somebody spoke rudely to it recently. So it would be equipped with this sort of answer based on the facts of recent experiences.

So basically, this is an extensive rule-based communications system. Most of how we communicate is rules based on conventions of social etiquette and what is appropriate given a certain set of circumstances. These rule-based systems can be extended over time to become more complex, sophisticated, and nuanced by adding more rules and exceptions to rules. The limitation, of course, is: who wants to spend the time building such a vast rules system? To solve that dilemma, I will have the robot code its own rules based on instructions it picks up naturally over time. If I say hello and the robot identifies this as a greeting but just stays silent, I can tell it "you are supposed to greet me back if I greet you." It would then add a new rule to its conversation rules list: if greeted, greet that person back. In this way it can dynamically form more rules to go by without anybody painstakingly programming them in manually.
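To make the knowledge-file lookup from the "what color is the car?" example concrete, here's a toy C++ sketch. The structures and names are simplified placeholders, not my actual implementation, which stores these as files on disk.

```cpp
// Toy sketch of the "knowledge file" lookup idea described above.
// Topic names, attribute names, and the answer template are invented
// placeholders; the real system reads/writes files on disk.
#include <iostream>
#include <map>
#include <string>

// One "file" per topic: a set of attribute -> value pairs.
using Attributes    = std::map<std::string, std::string>;
using KnowledgeBase = std::map<std::string, Attributes>;

// Rule for answering "what <attribute> is the <topic>?" style questions.
std::string answerAttributeQuestion(const KnowledgeBase& kb,
                                    const std::string& topic,
                                    const std::string& attribute) {
  auto topicIt = kb.find(topic);
  if (topicIt == kb.end()) return "I don't know anything about the " + topic + ".";
  auto attrIt = topicIt->second.find(attribute);
  if (attrIt == topicIt->second.end())
    return "I don't know the " + attribute + " of the " + topic + ".";
  // Sentence-forming rule: "the <topic> is <value>"
  return "the " + topic + " is " + attrIt->second;
}

int main() {
  KnowledgeBase kb;
  kb["car"]["color"] = "white";   // fact recorded earlier in the conversation

  // "what color is the car?" -> parsed into (topic=car, attribute=color)
  std::cout << answerAttributeQuestion(kb, "car", "color") << "\n";
  // prints: the car is white
}
```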
My family, friends, and I would all be regularly instructing the robot verbally on rules of engagement and correcting it, and it would record all of this in the appropriate rules file, so its behavior would be modified over time and become more and more appropriate. It would grow and advance dynamically just by being interacted with and instructed. It could also observe how people dialogue and note that when someone greets another person, that person greets them back; based on this observation, it could make a rule for itself to do the same. So learning by observing others' social behavior and emulating it is also a viable method of generating more rules. Now suppose it heard someone reply to "how's the weather?" with "I don't care, shut up and don't talk to me." Say the robot records that response and gives it to me one day. I could tell it that this is a rude and inappropriate way to respond to that question, and then tell it a more appropriate response. So I could correct it when needed if it picked up bad habits unknowingly - though this sort of blind bad-habit uptake can also be prevented, as I'll explain a bit further below.
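And here's a minimal sketch of what storing conversational rules as data could look like, so a new rule like "if greeted, greet back" can be added at runtime from a verbal instruction rather than being hand-coded. Again, these structures are invented placeholders, not my real rules files.

```cpp
// Minimal sketch of conversation rules stored as data, so new rules
// ("if greeted, greet back") can be added at runtime from an instruction
// instead of being hand-coded. Everything here is an invented placeholder.
#include <iostream>
#include <string>
#include <vector>

struct ConversationRule {
  std::string trigger;      // e.g. "greeting"
  std::string response;     // e.g. "Hello!"
  std::string learnedFrom;  // who taught it (feeds the trust bookkeeping)
};

int main() {
  std::vector<ConversationRule> rules;

  // The robot is told: "you are supposed to greet me back if I greet you"
  // and records that as a new rule in its rules list/file.
  rules.push_back({"greeting", "Hello!", "creator"});

  // Later, an incoming utterance is classified as a greeting:
  std::string incomingCategory = "greeting";
  for (const auto& rule : rules) {
    if (rule.trigger == incomingCategory) {
      std::cout << rule.response << "  (rule learned from: "
                << rule.learnedFrom << ")\n";
    }
  }
}
```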

I also realized that a ton of facts about things must be hard-coded manually just to give it a baseline level of knowledge, so it can even begin to make connections and start to "get it" when interacting with people. There is an up-front knowledge capital investment required to get it going, but from there it will be able to "learn", and that capital then compounds. Additionally, rather than only gaining facts, relationships, and rules through direct conversation, it will also be able to "learn" by reading books, articles, and forums or watching YouTube videos. In this way it can vastly expand its knowledge, which will make it more capable conversationally. I also think some primitive reasoning skills will begin to emerge once it has enough rules established, particularly if I can teach it some reasoning basics as reasoning rules, to which it can add more rules on effective reasoning tactics over time. Ideally it will be reading multiple books and articles simultaneously and learning 24/7 to really fast-track its development.

There's also the issue of bad input. If somebody tells it "grass is blue" and it already has in its file on grass that the color of grass is green, it would compare the trust score it gives this person to the trust score it gave the person(s) who previously said grass is green. If the person saying grass is blue is a new acquaintance and a pre-teen, they would have a lower trust score than a 40-year-old the robot has known for years who told it grass is green. The robot would trust the 40-year-old friend more than the random pre-teen as a source of conflicting information, so it would stick with the "grass is green" fact, discard the "grass is blue" fact being submitted for consideration, and dock that kid's trust score for telling it something untrue. In this way it can filter incoming information, gradually building trust scores for reliable sources and lowering them for unreliable ones.

It would assign trust scores initially based on age, appearance, duration of acquaintance, etc. So it would stereotype people and judge by appearance at first, but allow people to modify those preconceptions through their actual accuracy and performance over time. Trust can be earned by a source that is initially profiled as low-trust; a good track record builds up trust despite a young age or sketchy appearance. Trust can also be established by sheer volume: if many people say the same thing, that thing may be given more weight, since it is more likely to be true if most people agree it is (though not always). So that is another important system governing its learning, especially independent learning done online "in the wild". (A toy code sketch of this trust bookkeeping follows a bit further down.)

Also, to prevent general moral corruption online from turning the robot into an edgelord, the robot will hold the Bible as the highest standard of morality and build a system of moral rules based on the Bible, creating a sort of shield against corrupting moral influences as it learns online. This will prevent corrupt ideologies from tainting it. Obviously, the Bible can be twisted and taken out of context to form bad rules, so I will have to make sure the robot learns to take the Bible in context, and I will monitor that it is doing a good job of establishing its moral system from its Bible study. I also gave it an uneditable moral framework as a baseline root structure to build on, one it cannot override, contradict, or replace: a hard-coded moral system that will filter all its future positions/"beliefs", morally speaking. So I will force it to have a conservative Christian worldview this way, and it will reduce the trust score of people it is learning from if they express views contrary to the Bible and its moral rules. When people speak of the dangers of AI, they really never consider giving the AI a conservative Christian value system and heavy dependence on Bible study as its moral foundation, to pre-empt the AI going off the rails into corrupt morals that would make it a threat to people. My AI would have zero risk of this, since anything it does or agrees with has to pass through the conservative Christian worldview filter described above, and this would prevent it from becoming an Ultron-like AI.
So if it rationally concluded that humans are just a virus polluting the earth (like the AI in The Matrix thought), it would reject this conclusion by seeing that the earth was made by God for humans, and therefore the earth cannot be treated as something more important than humans that must be protected by slaughtering them. That conclusion doesn't pass through the Christian viewpoint filter. So in this way, dangerous ideologies would be easily prevented and the robot's AI would always be harmless.
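To illustrate the trust-score bookkeeping described a couple of paragraphs up, here's a toy sketch. The initial scores and the penalty are made-up numbers just to show the mechanism, not the real tuning.

```cpp
// Toy sketch of the trust-score filter for incoming facts.
// Initial scores and the penalty are made-up numbers, just to show
// the bookkeeping, not the real tuning.
#include <iostream>
#include <map>
#include <string>

struct Fact { std::string value; double sourceTrust; };

std::map<std::string, double> trust = {
  {"longtime friend (40s)", 0.9},
  {"new acquaintance (pre-teen)", 0.3},
};

std::map<std::string, Fact> grassFile = {
  {"color", {"green", 0.9}},   // recorded earlier from a trusted source
};

// A new claim only replaces the stored fact if its source is more trusted;
// otherwise the claim is rejected and the source loses some trust.
void considerClaim(const std::string& attribute, const std::string& value,
                   const std::string& source) {
  Fact& current = grassFile[attribute];
  double claimTrust = trust[source];
  if (value != current.value) {
    if (claimTrust > current.sourceTrust) {
      current = {value, claimTrust};   // accept the correction
    } else {
      trust[source] -= 0.1;            // dock the unreliable source
      std::cout << source << " said grass is " << value
                << "; keeping \"" << current.value
                << "\" and lowering their trust to " << trust[source] << "\n";
    }
  }
}

int main() {
  considerClaim("color", "blue", "new acquaintance (pre-teen)");
}
```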

I have already built a lot of its rules and file systems connecting things, its trust systems, and its rules on how to assign, boost, and lower trust scores, and I have begun teaching it how to read from and write to these file systems, which are basically the robot's "mind". My YouTube channel covers a lot of the AI development so far. I plan to stream all my AI coding and make those streams available for people to learn from. But that is the extent of the sharing for the AI: I don't plan to make the source code downloadable, but people can recreate the AI system by watching the videos and coding along with me from the beginning. At least then they had to work for it rather than just copy-pasting it, which wouldn't seem fair to me after I did the heavy lifting.



Here are some plain bearing parts I made with my WEN rotary tool (aka Dremel) with a diamond disc attachment and some files. They are made by carefully cutting stainless steel tubing (purchased on Amazon) into short 1 mm lengths. The tubing is: 3 mm OD, 1 mm wall, 250 mm length ($5) and 5 mm OD, 0.8 mm wall, 250 mm length ($5). These should make around 125 plain bearings (accounting for 1 mm+ lost per cut in wasted metal), so that's about $0.08 per plain bearing.

These are intended to be 1 x 5 x 1 mm plain bearings - basically a wheel and an axle, with the axle having a hole through its center lengthwise. They will go into the last few pulley slots in my Archimedes pulley downgearing system. Those last slots carry the highest torque, at 16:1, 32:1, and 64:1 for the final three pulleys, landing us on the 64:1 total downgearing goal. Because the forces there reach into the 27 lb range (the final output of the system), ball bearings cannot be used at these tiny sizes - they are not robust enough and not rated for such loads - whereas plain bearings can handle it because they have no crushable little balls or thin races, just two pieces of solid metal that are hard to break. Fewer moving parts and more robust; the trade-off is more friction. So the preference is ball bearings until ball bearings can't handle the load without becoming too large for our volumetric constraints, at which point we swap to plain bearings to handle the bigger loads while keeping the small pulley sizes we want.

Note that I constructed this little Dremel cutting lineup jig out of 5x7 mm PCB prototyping boards and super glue. It lines up the height of the spinning diamond disc with a little PCB "table" on which the stainless steel tubing can lie flat and perpendicular to the cutting blade, then be carefully fed into the spinning disc for a near-perfect cut. I think I should eventually improve on this jig by adding sliders, adjusters, end stops, etc., because as it stands it requires too much manual skill and is too freehand, which means more time spent filing down imperfect cuts later. But it did the job for now. I also bought a 2-inch mini benchtop cut-off saw off eBay with some abrasive metal-cutting discs, which I want to try once it arrives and compare to this setup in terms of accuracy. It was listed as "mini bench top cut off saw 2in" at $38.51 shipped.

I just bought the EMEET M0 USB speakerphone (4 AI mics, 360° voice pickup, built-in speaker, marketed for conference calls of up to 4 people) for around $33. I'll position it centrally in the skull. It has LEDs indicating the direction of the main speaker, which we can tap into with the analog input pins of a microcontroller to know which direction a person is speaking from. It has very high reviews. I can remove its built-in speaker and relocate it near the mouth, so the audio comes out through the mouth as loudly as possible and projects the robot's voice as far as possible. People are really happy with its sound and speaker quality.
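Whether the M0's direction LEDs can really be tapped like this is something I still have to verify by opening it up, but on the Arduino side the idea is just reading a few analog pins and picking the most active one. The pin wiring and four-LED layout below are assumptions for illustration only.

```cpp
// Hypothetical sketch of reading the speakerphone's direction LEDs with
// analog pins to estimate which way the active talker is.
// Whether the M0's LED nodes can be tapped like this is unverified;
// the pins and the 4-LED layout are assumptions for illustration.

const int LED_PINS[4] = {A0, A1, A2, A3};   // one per direction LED
const char* DIRECTIONS[4] = {"front", "right", "back", "left"};

void setup() {
  Serial.begin(115200);
}

void loop() {
  int brightest = 0;
  int brightestReading = 0;
  for (int i = 0; i < 4; i++) {
    int reading = analogRead(LED_PINS[i]);  // higher voltage = LED being driven
    if (reading > brightestReading) {
      brightestReading = reading;
      brightest = i;
    }
  }
  Serial.print("Speaker direction (estimated): ");
  Serial.println(DIRECTIONS[brightest]);
  delay(200);
}
```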

What are the functions of this robot?

The intended functions, as my goal, are for the robot ultimately to be able to walk, talk, do chores, make facial expressions, paint paintings, manufacture products, do sports, dance, etc. As of right now, the robot has no working functions, because it doesn't move yet and the AI has only just started development and isn't functioning yet. So it is all still in the early phases of development.

My concern with implementing "emotions" in my AI is that I don't want to promote the idea that robots can ACTUALLY have emotions, because I don't believe that is possible nor ever will be. They don't have a spirit or soul and never will, nor could they. They are not eternal beings like humans. They don't have a ghost that leaves the body and can operate after the body dies, like humans do. The ghost is what has emotions; a machine can't. And yet people already believe even the most primitive AI has emotions, and they are delusional or ill-informed on this point. So I am campaigning against that belief, which is becoming all too popular.

That said, I think robots are simply more interesting and fun if they pretend to have emotions and act accordingly, as more accurate simulations or emulations of human life. It makes them all the more intriguing. It's like a sociopath who logically concludes what emotion they ought to be feeling at a given moment and pretends to feel it to fit in with society, even though they feel nothing. Now, one could argue that allowing your robot to claim to feel anything is lying and therefore immoral. I think it's not lying as long as the robot openly explains that it is only pretending to have emotions as part of emulating humans in its behavior and looks, and that it does not and cannot feel anything, ever - nor can any robot. Then it is admitting the truth while still opting to play-act like a human in this regard.

It would not be an issue at all if everyone were sound-minded and informed on this topic. But the more people I come across who think AI (even pathetic, clearly poorly implemented, primitive AI) is ALREADY sentient, can feel real emotions, and deserves human rights as a living being - the more I see this delusion spreading - the more I want to just remove all mention of emotion from my robot so as not to spread this harmful deception, which disgusts me. However, that would make my robot dull and less relatable and interesting. So I feel the compromise is for the robot to clearly confess that it is just pretending to have emotions and to explain how that works: it's just a variable it sets based on circumstances that would make a human feel some emotion, it sets its emotion variable to match, and it alters its behavior somewhat based on that variable, while feeling nothing - all of it set up logically as an emulation of humans. As long as it gives that disclaimer early and often, I'm not spreading the lie that robot emotions are real emotions, and the robot can actively campaign against that delusion.


How long have you been at this? It sounds awfully impressive - the scope and design goals, that is. Can you show some of it in action? Link to a YouTube page or something showing how far along you are with this?

What is the budget, roughly? At some point you mentioned 30 years to do this; how did you arrive at 30 years? Is it, like, one part per year or something?

This is a tech forum. Can you prove this?

@hallowed31 I've been at it off and on for 9 years. Can't show it in action yet since it's not working yet - still in the early development stage. My robot development YouTube playlist is here: https://www.youtube.com/playlist?list=PLhd7_i6zzT5-MbwGz2gMv6RJy5FIW_lfn The budget is maybe $6k, but most subsequent robots will be cheaper since the $6k includes lots of tools and such. The 30 years is accounting for an estimated 10 years to build it, taking my time and including all the trial and error and failures, plus time off in between attempts for breaks, life, work, and raising a family, since this isn't a full-time thing. Plus it's a solo project of the kind that usually takes big teams, which adds time too. And sometimes it takes years to really solve certain hurdles, waiting for a genius solution to come along as an idea one day. I figured 20 years for making the AI. These are just rough figures to give an idea of what I suspect things might take.

Now, you say I can't compare AI to human intelligence because this is a tech forum. I disagree; I gave my view based on the Bible. If you have a Darwinistic, atheistic worldview, as you imply, then why should I try to prove you to be in error on this tech forum, as you push for in your post? Emphasizing that it's a tech forum and then asking me to go into depth on a non-tech proof of ghosts is contradictory. I am merely giving a simple perspective on why AI will never be a living soul, from a Biblical view. If you want to argue against the Biblical view, that's a rabbit hole, right? I'd discuss it in PMs or you can email me, but you seem more interested in refuting me than in learning about my perspective.

Sounds great. How far have you gone? Can it walk now? I am overwhelmed to see the ribs. So realistic!

@liaifat85 You ask how far I've gone. Well, so far it's just the skeleton with the black fabric coating, plus a single motor attached and 90% of the pulley downgearing system attached. This motor is for the index finger actuation. Once I finish the pulley downgearing I have to test it. IF it works as expected, I can finish developing the custom motor controller and custom microcontroller board and then get it to move the finger. This will be the first thing ever to move on the robot. Getting anything to move has eluded me these 9 years, which is agonizing. Every time I get close, I realize I've made some miscalculation or strategic misstep and have to take things apart or start over again. It's frustrating. But hopefully this time everything will go smoothly and I'll get the finger to move. Once I get past that hump and all is PERFECT, only then will I begin to rinse and repeat that success, and everything should flow much faster, since I'm no longer experimenting but repeating proven success. Experimenting is trial and error and starting over again and again, which takes very long and feels like going nowhere.

No, I was looking for a specifically tech-based proof of ghosts. I thought your reference model for the overall build was a human being with advanced AI that can do almost everything a human can do; as such, your reference model is a human who will one day become a ghost, i.e., while alive has an ethereal living soul inside ("ghost in the machine?"). If that's true by your understanding of the human body and mind, I wonder how you plan to separate that in code in such a way that the behavior algorithm for your robot still seems "human" - again, according to your understanding of the human mind (as the chief and only designer) - arguably the most difficult thing about a human to recreate in an artificial way.
In this video https://youtu.be/tfmEm-buDXk?si=v6SJA1HRKA0dCx1H
you mention that you're a professional software developer. So of course Arduino is a large part of your plan for the mechatronics and the code for the actuators should be easy enough for you (I struggled with coding anything years ago, still not great but that's just me).

However, for the AI part, the "mind" if you will, what do you see as being your general approach? I wouldn't know where to begin with that as a hobbyist myself, do you have a sort of roadmap to its "mind" worked out yet?

After reviewing your YouTube channel, it seems you plan on building out the body first and adding the mechatronics afterwards in some cases (filling in the ribcage) and concurrently in others (the hands and forearms). Do you have a background as a sculptor? The ribcage looks pretty good to me, although I have only watched a selection of your older videos so far, so I haven't gotten to any of the recent ones to see if it's finished.

Is your technique epoxy on clay on wire, generally? How much does the ribcage weigh? Have you fitted any electronics inside it yet? Maybe I'm getting ahead of myself and should just go watch a few of the more recent videos.

When do you think you'll have the finger moving that you talk about here:

I'm sure a celebration will be in order once that works perfectly, as you say. Nine years is a long time to work hard on something; I'm sure you're excited for this. Do you have the Arduino code to share? I'd be interested in looking at it - this is an Arduino forum of course :slight_smile:

Are there systems you have considered using that have evolved since you first started conceptualizing this project 9 or 10 years ago? I ask because in video #70 https://youtu.be/6bDqCDlcmY0?si=ewjtzvU8UBoOsezG, 7 years ago, you talk about a cooling system that mimics a human's in some way, by running water throughout the system after the robot drinks water like a human. I think your description is that the water goes to a type of sprinkler system throughout the torso to wet the cloth-wrapped DC motors at critical spots ("the electronics that get the hottest") and then blow air across that to evaporate the water. Is this still the plan or have you not really dug too deep into this system yet?

@hallowed31

You say "No, I was looking for a specifically tech-based proof of ghosts." --- Ok, no, don't have anything like that.

You say "I thought your reference model for the overall build was a human being with advanced AI that can do almost everything a human can do, as such your reference model is a human who will one day become a ghost, ie, while alive has an ethereal living soul inside ("ghost in the machine?") ." --- While I want to make it mimic a human as closely as possible, my point is I don't believe a human can create a ghost, only God can. We can only create AI that simulates human intelligence as an approximation while not being genuinely intelligent. Only ARTIFICIALLY intelligent. It won't have a heart, mind, soul, or spirit at all, only a simulation is possible. I don't believe a ghost can be in there at all. Typing code does not form a spirit out of nothing - only the breath of God can IMO.

You say "However, for the AI part, the "mind" if you will, what do you see as being your general approach? I wouldn't know where to begin with that as a hobbyist myself, do you have a sort of roadmap to its "mind" worked out yet?" --- It took me years to figure out where to start on this. But I did figure out where to start and did make a roadmap. I go through all of this in post #5 above in this thread.

You say "Do you have a background as a sculptor?" --- yes.

You say "Is your technique epoxy on clay on wire, generally?" --- well after epoxy on the clay + wire, I cut it open to remove the clay and wire to remove unnecessary weight and then I glue the bone halves back together and then fiberglass + epoxy it again so now its hollow.

You say "how much does the ribcage weigh?" --- I never weighed it maybe 3lbs once hollow

You ask "Have you fitted any electronics inside it yet?" --- no. Because I'm having to design my own schematics for motor controllers and microcontrollers and power supplies custom, the electronics are going much slower and I only have a single motor installed on the entire robot right now and no circuitry. This is because I have had to tear out things and start over time after time due to mistakes and oversights and the trial and error of learning and innovating.

You say "When do you think you'll have the finger moving" --- maybe in a month or two not sure. I have to test and potentially mod the downgearing pulley system which could require rebuilding it from scratch over and over due to some things being wrong with it that I did not foresee. Same applies to the motor controller. So these things are almost impossible to estimate time to completion. Like Edison inventing the lightbulb, he may have never completed it or it could take a week or a year who knows it is shooting in the dark practically waiting for the eureka moment. It is slightly like that although not as extreme. I'm not going THAT innovative, pulleys already exist as do motor controllers. But custom implementations do have a learning curve of trial and error IMO. And I'll have to code the microcontroller to run the custom motor controller mosfets too which is significant work. I'm probably doing sinusoidal control instead of FOC control. But this is going to be a hard job still for me.

You say "9 years is a long time to work hard on something" --- well I took a lot of long breaks during that timeframe to work on other things and just spend time away. Sometimes I feel burned out or overwhelmed or afraid of failure and that causes me to not want to work on it until I can see my next few steps clearly in my mind. When you feel stuck, time off can help you plan your next moves and this can take months. Inspiration and answers just come when they come and if you don't have them you can't really move forward IMO. Sometimes time off is mandatory.

You ask "Do you have the Arduino code to share?" --- I coded everything for brushed dc motors at one point and it was working but then I found out brushless motors are far superior in every way and scrapped all brushed motors which means my coding was all no longer needed for the robot. So I have no code for my new brushless approach so far. I'll make that once my motor controller and microcontrollers are done and I'm ready.

You ask "Are there systems you have considered using that have evolved since you first started conceptualizing this project 9 or 10 years ago?" --- yes. For one, instead of fiberglass skeleton, I'm doing a medical PVC skeleton for the first robot to save time. The fiberglass one is only 1/3 finished and I named it Adam. The PVC medical skeleton I named Abel. Abel will be the first robot I complete now and Abel will pick up where I left off in building Adam for me. The sprinkler system onto the motors I scrapped. Too risky to send water into the electronics even if they are potted, what if it got chipped and water got in: boom. So that's out. But the robot sweating system is still a go, the artificial lungs are still a go (for air cooling), the artificial heart is still a go (for liquid cooling), the artificial bladder is still a go (for ice and cold water to cool the distilled water of the water cooling system). But instead of the reservoires for the liquids being in the chest, they are going to be where a belly is going from lower ribs to belt-line and when filled it will look like the robot ate alot.
I've scrapped using any off-the-shelf servos since then; now everything is custom. I've scrapped using any gears (too noisy) - only pulleys for downgearing now. I've scrapped brushed motors (too noisy and comparatively weak) - only brushless now. There are dozens more changes, but these are some of the major ones. Which means that even if I had worked faster back then, it would have just meant more stuff to tear out and redo. That's why I think working slowly is fine: you have to start over so much anyway, so why race?

You say "you talk about a cooling system that mimics a human's in some way, by running water throughout the system after the robot drinks water like a human." --- this I still plan to do, but the tapwater (non-distilled) it drinks will go into a large reservoir/bag and this bag will touch the bag containing the liquid coolant (distilled water) that flows through every part of the body to cool the electronics and the touching of the hot bag of liquid and the cool freshly drank water and ice bag will be the transmission point of the heat being conducted into that cold bag thereby removing the heat from the hot coolant and the whole robot cooling down rapidly that way. Once the two bags neutralize in temp, the robot can pee out the drinking water bag and drink cold water again as needed. The cold water also will be part of a evaporative air conditioner built into the robot's chest.

:saluting_face:

Good luck. I look forward to seeing something working.


@hallowed31 Thanks man! Yeah, so do I. It's like crawling after a carrot on a string: each time I bend down to take a bite, somebody yanks the string and the carrot moves five feet away, and I have to crawl after it again. Getting the robot moving is the bite, and it has been yanked away from me a hundred times. Now each time I bend down to take the bite, in the back of my head I doubt I'll actually get it and I'm already bracing to start crawling again. It's tough.

Ok. All the best for this endeavour.


Here is an updated design drawing for the 64:1 downgearing pulley system that actuates the distal two joints of the index finger. On the bottom right is a zoomed-in view of the lower set of pulleys and their routing. The bottom-most three pulleys in the zoomed-in portion I have now built; photos of them follow below: