I am working on a bin-picking robotic station equipped with a brand-new xArm6. I have a custom 3D-printed gripper with a Basler camera, and I am struggling to find any solution to the hand-eye calibration problem.
I finally tracked the error down to the accuracy of the absolute robot coordinates, using the following procedure:
the gripper was equipped with a thin, sharp pointer
we marked the four corners of a 250×150 mm rectangle with the pointer on cardboard paper at z=0 (exact commanded coordinates: XY = [380,100], [380,-150], [230,-150], [230,100], z=150)
By measuring the resulting rectangle, we found that the actual side lengths are 253, 150, 251, and 148 mm, and the diagonals are 295.5 vs 290.7 mm (nominal diagonal: 291.5 mm).
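For reference, the expected diagonal and the deviations can be checked with a few lines of arithmetic (values taken from the measurement above):

```python
import math

# Nominal diagonal of the commanded 250 x 150 mm rectangle.
nominal_diag = math.hypot(250, 150)
print(round(nominal_diag, 1))  # 291.5 mm

# Measured diagonals from the pointer test.
measured_diags = (295.5, 290.7)
errors = [round(d - nominal_diag, 1) for d in measured_diags]
print(errors)  # deviations of +4.0 and -0.8 mm
```

So one diagonal is off by about 4 mm, which matches the several-millimetre absolute accuracy discussed below.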
Is this expected for a new robot?
Is this robot capable of dynamic bin picking, or is its intended use rather repetitive tasks at taught positions, independent of absolute XYZ values?
This makes me think that general hand-eye camera calibration is not possible with this accuracy, because all solutions take the robot pose into account in the calculation.
Moreover, I need to take images from z=700, which makes me think that the farther the tool is from the base, the larger the error will become?
This year, we developed a new calibration algorithm that improves the absolute accuracy from 4-6 mm to 1-3 mm. If you want it, you would need to ship the arm back to us for the calibration.
What is your application? Can’t you calibrate the points one by one? This method does not require high absolute accuracy.
Using the camera mounted on the gripper, I need to detect more-or-less flat parts lying on a flat plane and, based on the image coordinates, pick each part with the gripper or suction.
Now I have two levels of detection:
a rough detection at z=700 mm for coarse centering of the camera above the part at z=350
a more precise detection of the pick point in camera coordinates, followed by the pick
Both levels of detection are calibrated using markers: I calculate the marker positions in camera coordinates, then center the tool above the corresponding markers and compute the transformation between camera coordinates and robot coordinates. It seems I can get an accuracy of about ±2 mm, which may be enough.
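For readers wanting to reproduce this marker-based step: one common way to compute the camera-to-robot transformation from such in-plane correspondences is a least-squares rigid fit (Kabsch/Procrustes). This is a sketch of that standard technique, not the poster's exact code; the function name and point layout are illustrative.

```python
import numpy as np

def estimate_rigid_2d(cam_pts, rob_pts):
    """Least-squares rigid transform (rotation R, translation t) mapping
    camera-plane points to robot XY points: rob ~ R @ cam + t."""
    cam = np.asarray(cam_pts, dtype=float)
    rob = np.asarray(rob_pts, dtype=float)
    # Center both point sets on their centroids.
    cam_c = cam - cam.mean(axis=0)
    rob_c = rob - rob.mean(axis=0)
    # SVD of the cross-covariance gives the optimal rotation (Kabsch).
    U, _, Vt = np.linalg.svd(cam_c.T @ rob_c)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:   # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = rob.mean(axis=0) - R @ cam.mean(axis=0)
    return R, t
```

With four or more markers this averages out per-point detection noise; the residuals after the fit are a direct estimate of the ±2 mm accuracy mentioned above.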
Thank you for the recalibration offer; we will think about it, or perhaps order a newer robot and keep this one for other applications. Is it really about twice as accurate?
And one more question: will the robot end up in the same position using
a relative move
an absolute move where the target coordinates are calculated as: current position + relative offset?
They are the same: in the first case, the controller also reads the robot position and then sends the command.
But in both cases, get_position will return the value sent to the robot, not the position calculated from the joint encoders.
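The equivalence described above can be sketched as follows. The method names follow the xArm Python SDK (get_position / set_position), but the exact signatures may differ between SDK versions, so treat this as an illustration rather than verified vendor code.

```python
# Performing a "relative" move explicitly: read the current pose,
# add the offset, and send an absolute target. Per the reply above,
# a relative move command does the same internally.

def move_relative_via_absolute(arm, dx=0.0, dy=0.0, dz=0.0):
    """Move the TCP by (dx, dy, dz) via a computed absolute target."""
    code, pose = arm.get_position()        # [x, y, z, roll, pitch, yaw]
    if code != 0:
        raise RuntimeError("get_position failed with code %d" % code)
    x, y, z, roll, pitch, yaw = pose
    # Send the computed absolute target pose.
    return arm.set_position(x=x + dx, y=y + dy, z=z + dz,
                            roll=roll, pitch=pitch, yaw=yaw)
```

Note that get_position here still reports the commanded pose, as the reply points out, so this sketch inherits that limitation.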
You can call get_servo_angle with is_real=True; it will return the real joint angles (encoder data). Then use get_forward_kinematics() to get the corresponding TCP pose.
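Putting those two calls together, an encoder-based pose readout might look like this. The call names (get_servo_angle with is_real=True, get_forward_kinematics) are taken from the reply above; argument names may vary between SDK versions.

```python
# Read the TCP pose from real encoder data instead of the last
# commanded position that get_position() reports.

def get_encoder_tcp_pose(arm):
    """Return the TCP pose computed from the actual (encoder) joint angles."""
    # is_real=True requests encoder joint angles, not commanded targets.
    code, angles = arm.get_servo_angle(is_real=True)
    if code != 0:
        raise RuntimeError("get_servo_angle failed with code %d" % code)
    # Forward kinematics on those angles gives the true TCP pose.
    code, pose = arm.get_forward_kinematics(angles)
    if code != 0:
        raise RuntimeError("get_forward_kinematics failed with code %d" % code)
    return pose
```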