# Little Physics discussion

Hello all. I have a small physics-related robotics/programming question.

I am on my high school FIRST robotics team (I think I've mentioned this before), and next year I will be taking charge of programming. The deal is, my predecessor wrote a traction control system that I was unhappy with because it was too simple and seemed completely ineffective (although our team captain saw otherwise). The code was basically a simple set of if/then statements that compared the rates output by the encoders on the driven wheels to the rates output by the encoders on non-driven omni wheels.

Simple enough, right?

Whenever a load was placed on the motors, say, starting from zero velocity, the code would divide the input from the joystick by a set value (I don't remember the value; we had to play around with it and guess the best one, which is another reason I don't like the old design). So if you tell the robot to go 100% forward and the program detects slippage (a large difference between the omni wheel speed and the driven wheel speed), it drops the input from 100 to, say, 60. Then, as the gap between the omni wheel rate and the driven wheel rate decreases, the input is scaled back up toward what it "should" be: 60 → 70 → 90 → 100.
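For clarity, the old scheme described above can be sketched roughly like this (a minimal sketch; the function and constant names are hypothetical, and `SLIP_THRESHOLD` and `SCALE_DIVISOR` stand in for the hand-tuned values the original team guessed at):

```python
# Sketch of the old if/then traction control: compare driven-wheel and
# omni-wheel encoder rates, and knock the joystick command down by a
# hand-tuned divisor whenever the gap looks like wheel slip.

SLIP_THRESHOLD = 20.0   # encoder-rate gap (ticks/s) treated as slip; hand-tuned
SCALE_DIVISOR = 1.67    # e.g. drops a 100% command to roughly 60%

def traction_scale(joystick_cmd, driven_rate, omni_rate):
    """Scale the joystick command when the driven wheels spin faster
    than the free-rolling omni wheels (i.e. they are slipping)."""
    slip = driven_rate - omni_rate
    if slip > SLIP_THRESHOLD:
        return joystick_cmd / SCALE_DIVISOR
    # No slip detected: pass the command through unchanged,
    # which is what produces the 60 -> 70 -> 90 -> 100 recovery
    # as the rate gap closes on successive loop iterations.
    return joystick_cmd
```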

Sorry for the somewhat long and perhaps confusing intro. Here is the question: is there a more effective way?

I haven't spent too long thinking about it, but my first idea was to use an accelerometer, and that's as far as I got. I was also wondering how much good pulsing the wheels would do, say, if you are trying to push another robot, rather than running the same "slowly increment speed" system described above.

If anyone has any experience with this sort of problem, I would be grateful for your help in finding the optimal solution.

What exactly is your problem / question?

I’ve read your post several times, but I still can’t manage to get the essence of your problem. :-[

Yeah, sorry about that. Basically, I need ideas for an effective traction control system, using only encoders on driven and non-driven wheels, and perhaps a gyro and/or accelerometer.

Also, the reason for the traction control is that the wheels are extremely smooth, and so is the floor; it's like having PVC wheels driving on PVC flooring.

What’s of most importance:

• The difference between driven/non-driven per track
• The difference between the tracks

Use an optical mouse to determine ground covered?

Really it's a means of optimizing torque output without wasting motor power.

Keep in mind this is a tank drive setup, so the two sides of the robot are driven independently, and thus have independent traction control (same algorithm on each side; they just don't affect each other).

Keep in mind this is a tank drive setup

How can we “keep it in mind”?
This is the first time you’ve mentioned it.

Really it's a means of optimizing torque output without wasting motor power

I see!

Well then, I think the driven/non-driven comparison is a good/simple approach.

Sorry, I'm completely confused myself as to what this is or what you are asking. :-/

Sorry, I'm completely confused myself as to what this is or what you are asking. :-/

As I have understood it, he has a tank-drive robot, and he wants to programmatically determine the best instantaneous motor power per distance travelled.

I think the question could be formulated as:
How do you ensure the least amount of slippage while maintaining high torque?

This is just a personal idea, not certain it is the correct/intended question/problem.

He’s pretty much right

Do you have data that “proves” the old system is ineffective? Can you figure out tests that you can perform to compare different algorithms?

I suck at mechanics, but here’s a thought: detecting slippage is ineffective because once you have slippage it is already “too late”; getting back to non-slippage is very “expensive.” Instead, you may want to gradually ramp up power to the desired level at rates that avoid slippage in the first place. These rates may be surface dependent and need calibration for each event; basically, you’d run tests during warmups that would detect how fast power could be ramped up without slippage, and then during actual competition you’d ramp power at rates LOWER than those.
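The ramp-up idea above can be sketched as a simple slew-rate limiter (a sketch only; `MAX_RISE_PER_TICK` is a hypothetical constant that would come from those warmup test runs, backed off with a safety margin):

```python
# Sketch of the preventive approach: instead of reacting to slip after it
# happens, cap how fast the motor command may rise each control-loop tick,
# so power never jumps faster than a calibrated slip-free rate.

MAX_RISE_PER_TICK = 5.0  # max command increase per loop iteration (hand-calibrated)

def ramp(previous_cmd, desired_cmd):
    """Slew-rate-limit the rising edge of the command.
    Decreases are passed through instantly so braking is never delayed."""
    if desired_cmd > previous_cmd + MAX_RISE_PER_TICK:
        return previous_cmd + MAX_RISE_PER_TICK
    return desired_cmd
```

Each loop iteration you would feed the joystick value in as `desired_cmd` and the last output back in as `previous_cmd`, so a 0-to-100 step gets spread over many ticks instead of hitting the motors all at once.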

(oh wait; your school might compete against “our” school. Never mind!)

What school? And thanks, but I have no formal proof, just my experience driving the robot before and after the traction control was implemented.

westfw has a good point.

Maybe you could have an accelerate(motor, speed) function that ramps the speed up to the desired value,
and if it detects slippage, it decrements the ramp rate until the next reset.

This way, you could do a little test run on the target surface and it would configure itself to the surface, resulting in an optimal acceleration rate for that specific surface.
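The self-calibrating ramp suggested here could look something like the following (a sketch under stated assumptions; the class name, constants, and the idea of multiplying the rate down on each slip event are all hypothetical choices, not anything from the original posts):

```python
# Sketch of a self-calibrating ramp: start with an aggressive ramp rate
# and back it off each time slip is detected during a test run, so the
# robot converges on the fastest slip-free acceleration for the surface.

class AdaptiveRamp:
    def __init__(self, initial_rate=10.0, backoff=0.8, floor=1.0):
        self.rate = initial_rate   # command increase allowed per loop tick
        self.backoff = backoff     # multiply the rate by this on each slip event
        self.floor = floor         # never ramp slower than this

    def on_slip(self):
        """Call whenever driven and omni encoder rates diverge (slip detected)."""
        self.rate = max(self.floor, self.rate * self.backoff)

    def step(self, previous_cmd, desired_cmd):
        """One control-loop tick: move toward the desired command,
        rising no faster than the current calibrated rate."""
        return min(desired_cmd, previous_cmd + self.rate)
```

A "reset" as described above would just mean re-creating the object (or restoring `rate` to `initial_rate`) before the next calibration run.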

You could implement a PID algorithm in the current program and tune it to reach the required torque faster and with little overshoot. It's just a more precise way of doing: "it drops the input from 100 to, say, 60… then, as the gap between the real rate and the rate of the driven wheel decreases, the input value is scaled back to what it 'should' be: 60 → 70 → 90 → 100".

If you Google "PID controller" you can find some info on this. However, it is a fairly complex solution.

You might be better off just using a P (proportional gain) controller. That is basically a constant that you would multiply the difference between the two encoder rates by.
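The proportional-only idea could be sketched like this (a minimal sketch; `KP`, the function name, and the 0–100 command range are assumptions standing in for whatever the robot's control code actually uses):

```python
# Sketch of a P controller for slip: reduce the joystick command in
# proportion to the measured slip (driven encoder rate minus omni rate).

KP = 0.5  # proportional gain; tuned experimentally on the robot

def p_control(joystick_cmd, driven_rate, omni_rate):
    """Subtract KP times the slip from the command, clamped to 0..100.
    Only positive slip (driven wheels outrunning the omnis) is corrected."""
    slip = driven_rate - omni_rate
    cmd = joystick_cmd - KP * max(0.0, slip)
    # Clamp to the valid command range.
    return max(0.0, min(100.0, cmd))
```

Unlike the fixed-divisor if/then version, the correction here scales smoothly with how badly the wheels are slipping, so small slips get small corrections instead of an all-or-nothing drop to 60%.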