I am trying to avoid using the camera.
You'd have to go with an on-board camera and fiducial markers on the floor/walls/ceiling (hint: reacTIVision). That would require something like a Raspberry Pi to handle the processing, though. There's also the CMUCam, which does color blob identification; the blobs could serve as fiducials.
You could place some type of beacons in the environment that the robot can detect and use to calculate its position relative to the beacons.
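As a minimal sketch of the beacon idea: if the robot can measure its distance to three or more beacons at known positions (e.g. via ultrasonic or RF time-of-flight), it can trilaterate its own position. The beacon coordinates and distances below are made-up example values, and the linearized least-squares approach is just one common way to solve it:

```python
import numpy as np

def trilaterate(beacons, distances):
    """Estimate a 2D position from >= 3 beacons at known
    positions, given measured distances (least squares)."""
    beacons = np.asarray(beacons, dtype=float)
    d = np.asarray(distances, dtype=float)
    # Linearize by subtracting the first beacon's range
    # equation from the others: A @ [x, y] = b
    A = 2.0 * (beacons[1:] - beacons[0])
    b = (d[0]**2 - d[1:]**2
         + np.sum(beacons[1:]**2, axis=1)
         - np.sum(beacons[0]**2))
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

# Example: beacons at three corners, robot actually at (3, 4)
pos = trilaterate([(0, 0), (10, 0), (0, 10)],
                  [5.0, 65**0.5, 45**0.5])
print(pos)  # ~ [3. 4.]
```

With noisy range measurements you'd typically use more than three beacons and let the least-squares fit average out the error, or feed the ranges into a Kalman filter alongside odometry.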