xArm6 absolute coordinates precision

Hello,

I am working on a bin-picking robotic station equipped with a brand-new xArm6. I have a custom 3D-printed gripper with a Basler camera, and I am struggling to find any solution to the hand-eye calibration problem.

I finally tracked the error down to the accuracy of the robot's absolute coordinates, using the following procedure:

  • the gripper was equipped with a thin, sharp pointer
  • we pinned the pointer to the four corners of a 250 × 150 mm rectangle on cardboard paper at z = 0 (exact XY coordinates: [380, 100], [380, -150], [230, -150], [230, 100], z = 150)

By measuring the resulting rectangle, we found that the actual side lengths are 253, 150, 251, and 148 mm, and the diagonals are 295.5 vs. 290.7 mm.
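For context, the nominal diagonal of a 250 × 150 mm rectangle is √(250² + 150²) ≈ 291.5 mm, so the measured diagonals deviate by roughly +4.0 and −0.8 mm. A quick check in plain Python, using the corner coordinates listed above:

```python
import math

# Nominal corner XY coordinates (mm), in the order they were pinned
corners = [(380, 100), (380, -150), (230, -150), (230, 100)]

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

# Nominal side lengths and diagonal (both diagonals of a rectangle are equal)
sides = [dist(corners[i], corners[(i + 1) % 4]) for i in range(4)]
diag = dist(corners[0], corners[2])

print(sides)           # [250.0, 150.0, 250.0, 150.0]
print(round(diag, 1))  # 291.5

# Deviation of the measured diagonals from nominal
measured_diags = (295.5, 290.7)
errors = [round(d - diag, 1) for d in measured_diags]
print(errors)          # [4.0, -0.8]
```

A +4 mm error over a ~290 mm diagonal is consistent with the 4-6 mm absolute-accuracy figure quoted later in this thread.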

Is this expected for a new robot?
Is this robot capable of dynamic bin picking, or is its intended use rather tasks performed at repeated, taught positions, independent of absolute XYZ values?

This makes me think that general hand-eye camera calibration is not possible at this level of accuracy, because all solutions rely on the robot pose entering the calculation.

Moreover, I need to take images from z = 700; will the error grow the farther the tool is from the base?

With kind regards
Vladimir

Hi Vladimir,

What is the SN of your xArm6? What are your requirements for absolute accuracy?

Best regards,
Minna

Thank you for the fast response.

Serial number is: XI130412C23L47.

Regarding the accuracy: it would be nice to be around 1 mm.

Illustration of the problem: [image attachment]

Hi Vladimir,

This year, we developed a new calibration algorithm that improves the absolute accuracy from 4-6 mm to 1-3 mm. If you want it, you would need to ship the arm back to us for the calibration.

What is your application? Could you calibrate the points one by one? That method does not require high absolute accuracy.

Best regards,
Minna

Using the camera mounted on the gripper, I need to detect more-or-less flat parts on a flat plane and, based on the image coordinates, pick each part with the gripper or suction.

Now I have two levels of detection:

  1. a rough detection at z = 700 mm, used to center the camera above the part at z = 350
  2. a more precise detection of the point in camera coordinates, followed by the pick

Both levels of detection are calibrated using markers: I calculate the marker positions in camera coordinates, then center the tool above the corresponding markers and compute the transformation between camera coordinates and robot coordinates. It seems I can get an accuracy of about ±2 mm, which may be enough.
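As a sketch of the transformation step described above (not Vladimir's actual code, and the marker coordinates are invented for illustration): given marker positions in camera coordinates and the corresponding robot positions, a least-squares rigid transform can be estimated with the Kabsch/SVD method:

```python
import numpy as np

def fit_rigid_transform(cam_pts, rob_pts):
    """Least-squares rigid transform (R, t) mapping camera points to robot points."""
    cam = np.asarray(cam_pts, dtype=float)
    rob = np.asarray(rob_pts, dtype=float)
    cam_c, rob_c = cam.mean(axis=0), rob.mean(axis=0)
    H = (cam - cam_c).T @ (rob - rob_c)  # cross-covariance of centered points
    U, _, Vt = np.linalg.svd(H)
    # Sign correction so R is a proper rotation (no reflection)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = rob_c - R @ cam_c
    return R, t

# Synthetic example: four markers, camera frame rotated 90° about Z and shifted
cam_markers = np.array([[0, 0, 0], [100, 0, 0], [100, 50, 0], [0, 50, 0]], float)
R_true = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]], float)
t_true = np.array([300.0, -50.0, 150.0])
rob_markers = cam_markers @ R_true.T + t_true

R, t = fit_rigid_transform(cam_markers, rob_markers)

# A detected point in camera coordinates maps into robot coordinates as R @ p + t
p_cam = np.array([50.0, 25.0, 0.0])
p_rob = R @ p_cam + t  # → [275.0, 0.0, 150.0]
```

With at least three non-collinear markers this is well determined; note, however, that any absolute-accuracy error in the robot poses used during calibration is baked directly into the estimated transform, which is exactly the concern raised earlier in the thread.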

Thank you for your recalibration offer. We will think about it, or perhaps order a newer robot and keep this one for other applications. Is the new calibration really about twice as accurate?

And one more question: will the robot end up in the same position using

  • a relative move
  • an absolute move whose target coordinates are calculated as current position + relative offset?

Thank you
Vladimir

Hi Vladimir,

Yes, we use expensive calibration equipment and a new algorithm, and the absolute accuracy has been greatly improved; you could buy a new robot to try it.

I don't quite understand 'will the robot end in the same position'; would you please share more details?

Best regards,
Minna

I would like to know whether the physical position of the tool in the real world is the same in both cases:

  1. first case: move the robot relatively with set_position(relative=True)
  2. second case: read the robot position, add the offsets, and call set_position(relative=False) with the calculated position

Hi Vladimir,

They are the same: in the first case, the controller also reads the robot position and then sends the command.
But in both cases, get_position will return the value sent to the robot, not the position calculated from the joint encoders.

You can call get_servo_angle with is_real=True; it will return the real joint angles (encoder data). Then use get_forward_kinematics() to get the corresponding TCP pose.
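To make the distinction concrete, here is a toy model in plain Python (no SDK; the class, the tracking-error value, and the real_tcp_pose helper are invented to mirror the pattern described above). It shows why the two move styles send the same target, and why the commanded pose can differ from the encoder-derived pose:

```python
class ToyArm:
    """Hypothetical stand-in for a robot controller, for illustration only."""

    def __init__(self, pose):
        self._commanded = list(pose)  # last pose sent to the controller
        self._static_error = 0.4      # assumed positioning error (mm)

    def set_position(self, x=0.0, y=0.0, z=0.0, relative=False):
        if relative:
            # case 1: the controller itself adds the offset to the commanded pose
            target = [self._commanded[0] + x,
                      self._commanded[1] + y,
                      self._commanded[2] + z]
        else:
            target = [x, y, z]
        self._commanded = target

    def get_position(self):
        # returns the commanded value, not encoder feedback
        return list(self._commanded)

    def real_tcp_pose(self):
        # stand-in for get_servo_angle(is_real=True) + get_forward_kinematics():
        # the physically reached pose differs from the commanded one
        return [c + self._static_error for c in self._commanded]

# Case 1: relative move
a = ToyArm([300.0, 0.0, 150.0])
a.set_position(x=10.0, relative=True)

# Case 2: read position, add the offset, send an absolute move
b = ToyArm([300.0, 0.0, 150.0])
px, py, pz = b.get_position()
b.set_position(x=px + 10.0, y=py, z=pz, relative=False)

print(a.get_position() == b.get_position())  # True: same commanded target
print(a.get_position(), a.real_tcp_pose())   # commanded vs "real" pose differ
```

So for checking where the tool actually ended up, the encoder-based route (get_servo_angle with is_real=True plus get_forward_kinematics) is the one to use; get_position alone cannot reveal the absolute-accuracy error measured earlier in this thread.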

Best regards,
Minna