Translation from Camera Coordinates (x,y) to uArm Coordinates (x,y,z)

Hi guys,
I am new to robotics and not quite sure what the best approach is to convert coordinates from the camera system, using OpenCV (x, y) (origin in the top-left of the image), to the uArm coordinate system (x, y, z).

My ‘z’ coordinate is not an issue because it is always fixed: the arm makes specific movements during which ‘z’ does not change, so a 2D conversion is enough.

I would really appreciate any suggestion on the best approach to follow, and in addition I would like to know whether uArm provides any helpful function that can be used in this situation.

I am using the Python bindings (this should not make a difference).

I hope I’ve been clear, thank you in advance for your help!



The typical coordinate translation involves two steps:

  1. Make sure the output image from the camera is linear (undistorted); refer to the “Calibration Results” section at the end of this page: camera_calibration/Tutorials/MonocularCalibration - ROS Wiki

  2. Obtain coordinates in the uArm system by scaling, rotating, and panning the output image. This works only if Z does not change and the lens faces the ground.

Then move the uArm to the computed position.
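A minimal sketch of step 2: once the image is undistorted, the pixel-to-uArm mapping with fixed Z and a downward-facing lens is a similarity transform (uniform scale + rotation + translation), which you can fit from just two reference points. The marker positions below are hypothetical; measure your own by jogging the arm to two markers visible in the image.

```python
def fit_similarity(cam_a, cam_b, arm_a, arm_b):
    """Return a function mapping camera (x, y) pixels to uArm (x, y).

    cam_a/cam_b: pixel coordinates of two reference markers.
    arm_a/arm_b: uArm coordinates (mm) of the same two markers.
    """
    # Represent 2D points as complex numbers: a similarity transform
    # is then  arm = s * cam + t, with complex s encoding scale and
    # rotation, and complex t the translation.
    ca, cb = complex(*cam_a), complex(*cam_b)
    ua, ub = complex(*arm_a), complex(*arm_b)
    s = (ub - ua) / (cb - ca)
    t = ua - s * ca

    def cam_to_arm(x, y):
        p = s * complex(x, y) + t
        return (p.real, p.imag)

    return cam_to_arm

# Hypothetical calibration: marker A at pixel (100, 100) sits at uArm
# (150, -50) mm; marker B at pixel (500, 100) sits at uArm (150, 150) mm.
cam_to_arm = fit_similarity((100, 100), (500, 100), (150, -50), (150, 150))
print(cam_to_arm(300, 100))  # -> (150.0, 50.0), midway between the markers
```

If the camera can be mounted slightly skewed or the pixels are not square, a full affine fit from three reference points (e.g. OpenCV's `cv2.getAffineTransform`) is the more general choice.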

Another way is to compare the target object’s coordinate from the camera against a fixed coordinate representing the uArm’s end effector (e.g. the suction cup), then move the uArm in real time until the two coordinates converge.
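The feedback approach above can be sketched as a simple proportional servo loop. The camera and arm here are simulated so the snippet is self-contained; in a real setup the two `get_*_px` callbacks would come from OpenCV detection and `move_relative` from the uArm Python SDK (these names are placeholders, not real APIs).

```python
def servo_to_target(get_target_px, get_effector_px, move_relative,
                    gain=0.5, tol=1.0, max_steps=100):
    """Nudge the end effector until its image position matches the target's."""
    for _ in range(max_steps):
        tx, ty = get_target_px()
        ex, ey = get_effector_px()
        err_x, err_y = tx - ex, ty - ey
        if abs(err_x) < tol and abs(err_y) < tol:
            return True  # close enough in image space
        # Move a fraction of the pixel error each iteration; the
        # pixel-to-mm scale and axis signs are folded into the gain.
        move_relative(gain * err_x, gain * err_y)
    return False

# Simulated rig: effector starts at pixel (0, 0), target at (80, -40).
state = {"pos": [0.0, 0.0]}

def move_relative(dx, dy):
    state["pos"][0] += dx
    state["pos"][1] += dy

ok = servo_to_target(lambda: (80.0, -40.0),
                     lambda: tuple(state["pos"]),
                     move_relative)
print(ok)  # -> True: the error halves each step and falls below tol
```

This avoids any explicit calibration, at the cost of needing to detect the end effector in the image and to iterate instead of moving in one shot.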