OK, serious help needed. When building a robot that needs to know where a human is relative to itself, what is the best approach for detection? The end result is for the robot to turn its head to face the closest person. I'm experimenting with either a single or dual ATmega2560 platform; I'm not sure whether I need two MCUs, but there's going to be a lot to process. I'm also using a PCA9685 to get more PWM channels.
Option 1: four fixed sonar sensors on the neck (front, back, left, right) and one on the head, at about $50 per sensor. A detection on a neck sensor would make the head (stepper or encoder motor) turn in that direction, stop when it reaches the object, then center on the lowest reading (shortest distance).
Option 2: thermal detection. Very expensive at ~$100 per sensor, but the same approach, and more accurate because the sensor is tuned to the infrared signature the average human gives off.
Option 3: any variation or idea I haven't thought of yet? Maybe sonar on the neck and thermal on the head? I have even thought of using some microphones to detect timing differences, but that only works if the person is talking to the robot. I also want it to be a little "creepy": if someone is sneaking up from behind, turn, face them, and play a sound file.
As a second objective, I need the robot to have collision avoidance with everything, including possible stairs or curbs. This second objective can use other detection methods; these sensors will be on the bumpers, so I'm looking at PIR or sonar only for it. Two sensors, front and back, possibly tilted down so the beam lands 3-6 ft ahead on the floor, giving the robot time to stop when moving at full speed.
I want to accomplish this at the lowest cost possible. I know this is expensive to do completely, but hopefully someone can help me save a few bucks at least.