Is it possible to connect and control like this?

I would like to build a remote-controlled robot and connect it like this:

Nunchuck --> Arduino(1) --> XBee ................ XBee --> Arduino(2) --> Robot (servo, stepper, PWM motors)

then feed back like this:

Camera --> transmitter .......... receiver --> Laptop (image processing) --> USB --> Arduino(1) --> XBee

I would like to build a robot with automated control using image processing.

Please feel free to advise; thanks in advance ;)

It would probably work, but could prove to be slow.

Have you thought about this more in depth? The software design on both Arduinos will have an impact on the result. (Stating the obvious.) :)

For the image processing, have a look at http://www.roborealm.com/

Thanks, AlphaBeta.

Image processing is for automatic mode only.

and the Nunchuck is for manual control mode. Should it work?

I would like to build a remote-controlled robot and connect it like this:

Nunchuck --> Arduino(1) --> XBee ................ XBee --> Arduino(2) --> Robot (servo, stepper, PWM motors)

If I were you, I would step through these milestones:

  • Determine what kind of actions your robot will need to take
  • Determine what you want to control using Nunchuck

Then, using the results, you would have to consider what kind of sensors you need.

Then there is this:

Camera --> transmitter .......... receiver --> Laptop (image processing) --> USB --> Arduino(1) --> XBee

What is most important to you:

  • Autonomous behaviour
  • Interaction with the Nunchuck
  • PC control

On to my advice:

Implement the robot, i.e. Arduino 2, as a finite state machine. A library for you: http://www.arduino.cc/playground/Code/FiniteStateMachine

Why use an FSM? Because it will ensure that your robot tries to do one thing at a time (i.e. moving to one place, not two). You can make your robot do multiple things at the 'same' time by using two FSMs. Since you are going to control this robot from three 'places' (Nunchuck, laptop, and the autonomous code), this will be the most reliable platform.

Imagine this: the robot wants to travel ahead for two more seconds, the laptop does not spot any obstacles, but you see the end of the table. Using an FSM you could easily make your commands overrule the others'. And the beauty is that as soon as you stop sending your commands, the FSM will again rely on the laptop and the autonomous code.

Break down your vision for this robot into simple 'states' (there is a small sketch after the list below). Typically:

  • Idle
  • MoveAhead
  • MoveBack
  • TurnLeft
  • TurnRight
  • etc
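
One way to make those states and the priority idea concrete is a small hand-rolled fragment like the one below (the linked library wraps the same idea up more neatly; the state and source names are only placeholders):

// States mirror the list above; command sources are ranked so that a Nunchuck
// command overrules the laptop, which in turn overrules the autonomous code.
enum RobotState    { IDLE, MOVE_AHEAD, MOVE_BACK, TURN_LEFT, TURN_RIGHT };
enum CommandSource { SRC_AUTONOMOUS = 0, SRC_LAPTOP = 1, SRC_NUNCHUCK = 2 };

RobotState currentState = IDLE;
CommandSource activeSource = SRC_AUTONOMOUS;

// Accept a requested state only from a source with at least the priority
// of whoever is currently in control.
void requestState(int state, int source) {
  if (source >= activeSource) {
    currentState = (RobotState) state;
    activeSource = (CommandSource) source;
  }
}

// When the higher-priority source goes quiet, hand control back so the
// laptop and the autonomous code take over again.
void releaseControl(int source) {
  if (source == activeSource) {
    activeSource = SRC_AUTONOMOUS;
  }
}

Arduino 2 would then set its servos and motors from currentState on every pass of loop().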

Pseudocode for Arduino 1:

//loop
    //check status on nunchuck
    //calculate differences
        //is the difference enough to cause a status update
    //check for commands given by laptop [from image processing]
    //if a status update is needed (on the Arduino 2)
        //send as little data (bitwise) as possible, that is sufficient for letting the Arduino 2 know what to do
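
Here is a rough sketch of that loop for Arduino 1, assuming the XBee hangs off the hardware serial port; readNunchuckX/readNunchuckY are hypothetical stubs standing in for your real Nunchuck (Wire/I2C) code, and the joystick thresholds are just example values:

// Command codes shared with Arduino 2: one byte is enough.
const byte CMD_IDLE  = 0;
const byte CMD_AHEAD = 1;
const byte CMD_BACK  = 2;
const byte CMD_LEFT  = 3;
const byte CMD_RIGHT = 4;

byte lastCommand = CMD_IDLE;

int readNunchuckX() { return 128; }  // placeholder: joystick centred
int readNunchuckY() { return 128; }  // placeholder: joystick centred

// Map the joystick position to a single command byte.
byte joystickToCommand(int x, int y) {
  if (y > 160) return CMD_AHEAD;
  if (y < 96)  return CMD_BACK;
  if (x < 96)  return CMD_LEFT;
  if (x > 160) return CMD_RIGHT;
  return CMD_IDLE;
}

void setup() {
  Serial.begin(9600);  // serial link to the XBee
}

void loop() {
  byte command = joystickToCommand(readNunchuckX(), readNunchuckY());
  // Only transmit when the command actually changes, to keep the radio
  // traffic down to single bytes.
  if (command != lastCommand) {
    Serial.write(command);
    lastCommand = command;
  }
  delay(20);
}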

Pseudocode for Arduino 2:

//loop
    //check for input from Arduino 1
        //cause changes as appropriate
    //update FSM for the robot
        //make transitions to queued states [exit current, enter next]
        //update current state(s)
            //set values and states to engines/signals/actuators
    //report back to Arduino 1 if needed
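
And a matching rough sketch for Arduino 2, assuming the same one-byte command codes arrive from its XBee on the hardware serial port; the actual servo/stepper/PWM calls depend on your hardware, so they are left as comments:

// Rough sketch for Arduino 2: read a command byte from the XBee and let it
// drive a simple state machine. Uses the same codes as the Arduino 1 sketch.
const byte CMD_IDLE  = 0;
const byte CMD_AHEAD = 1;
const byte CMD_BACK  = 2;
const byte CMD_LEFT  = 3;
const byte CMD_RIGHT = 4;

byte currentState = CMD_IDLE;

void setup() {
  Serial.begin(9600);  // serial link to the XBee
}

void loop() {
  // Check for input from Arduino 1 and queue the state change.
  if (Serial.available() > 0) {
    currentState = Serial.read();
  }

  // Update the current state: set the actuators accordingly.
  switch (currentState) {
    case CMD_AHEAD: /* drive the motors forward */  break;
    case CMD_BACK:  /* drive the motors backward */ break;
    case CMD_LEFT:  /* turn left */                 break;
    case CMD_RIGHT: /* turn right */                break;
    default:        /* CMD_IDLE: stop everything */ break;
  }
}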

Image processing is for automatic mode only.

and the Nunchuck is for manual control mode. Should it work?

This project is absolutely feasible. :)

You could also eliminate Arduino(1) if you wanted to: there are PC applications to talk to a Wii remote, and I'm pretty sure they can read the Nunchuck when it's attached.

Then the laptop could use its own XBee to talk directly to Arduino(2).

Note that, for the setup you proposed, Arduino(1) would either need to be an Arduino Mega (to get the extra UARTs) or use the SoftwareSerial library to simulate a second UART in software.
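
For example, a minimal SoftwareSerial sketch along those lines (pins 2 and 3 are only an assumption; use whichever free digital pins your XBee adapter is wired to):

#include <SoftwareSerial.h>

// The hardware UART stays on USB for the laptop; the XBee gets a software UART.
SoftwareSerial xbee(2, 3);  // RX, TX (assumed pins)

void setup() {
  Serial.begin(9600);  // USB link to the laptop
  xbee.begin(9600);    // software serial link to the XBee
}

void loop() {
  // Relay commands coming in from the laptop straight out to the XBee.
  if (Serial.available() > 0) {
    xbee.write((byte) Serial.read());
  }
}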

Ran