How to implement this?

I watched the video "uArm Robotic Arm: Vision based manipulator control application" on YouTube, and I was wondering how to implement this project, because I'm doing a similar one, except that instead of sorting coins I'm sorting colored objects.

I have a few questions:

How do the robot and the camera communicate with each other?
Is there source code for this project? If so, could I have it? What software was used?

Regards, Tim

Timmy, take a look at the Pixy (CMUcam5) from Charmed Labs. I demonstrate it with the uArm in this video.

You can also look at @apockill's implementation using OpenCV, an open-source computer vision library.
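
If you end up going the OpenCV route for your colored objects, the usual pattern is to threshold in HSV (where hue is much more stable under lighting changes than RGB) and track the largest blob. Here's a minimal sketch of that pattern; this is not Alex's code, and the camera index and color bounds are placeholders you'd tune for your setup:

```cpp
#include <opencv2/opencv.hpp>
#include <iostream>
#include <vector>

int main() {
    cv::VideoCapture cap(0);  // camera index 0 is an assumption; adjust for your rig
    if (!cap.isOpened()) {
        std::cerr << "Cannot open camera\n";
        return 1;
    }

    cv::Mat frame, hsv, mask;
    while (cap.read(frame)) {
        // HSV separates hue from brightness, so thresholds survive lighting changes
        cv::cvtColor(frame, hsv, cv::COLOR_BGR2HSV);

        // Placeholder range for a red-ish object -- tune these bounds per color
        cv::inRange(hsv, cv::Scalar(0, 120, 70), cv::Scalar(10, 255, 255), mask);

        std::vector<std::vector<cv::Point>> contours;
        cv::findContours(mask, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);

        // Pick the biggest blob and report its centroid in pixel coordinates
        double bestArea = 0;
        int bestIdx = -1;
        for (int i = 0; i < (int)contours.size(); ++i) {
            double a = cv::contourArea(contours[i]);
            if (a > bestArea) { bestArea = a; bestIdx = i; }
        }
        if (bestIdx >= 0) {
            cv::Moments m = cv::moments(contours[bestIdx]);
            if (m.m00 > 0)
                std::cout << "centroid: " << m.m10 / m.m00 << ", "
                          << m.m01 / m.m00 << "\n";
        }

        cv::imshow("mask", mask);
        if (cv::waitKey(1) == 27) break;  // Esc to quit
    }
    return 0;
}
```

The pixel centroid still has to be mapped into the arm's coordinate frame before you can pick anything up, which is the calibration step every one of these projects spends most of its time on.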

In my implementation, the camera is now affixed to the head of the uArm in a 3D-printed bracket, and it communicates with the uArm via the serial connector on the back. Mine is still a work in progress, so no pictures of the latest stages yet, but I (and likely Alex) would be happy to answer questions.
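
For anyone driving the arm from a PC over a serial link instead, the sending side boils down to opening the port and writing move commands. Below is a minimal POSIX sketch; the device path, baud rate, and the Gcode-style command string are all assumptions you'd check against your uArm firmware's documentation, since the exact command set varies by firmware version:

```cpp
#include <fcntl.h>
#include <termios.h>
#include <unistd.h>
#include <cstdio>
#include <cstring>

// Open the serial port and configure raw 115200 I/O (the baud rate is an
// assumption -- match it to whatever your uArm firmware expects)
int open_serial(const char* dev) {
    int fd = open(dev, O_RDWR | O_NOCTTY);
    if (fd < 0) return -1;
    termios tio{};
    tcgetattr(fd, &tio);
    cfmakeraw(&tio);
    cfsetispeed(&tio, B115200);
    cfsetospeed(&tio, B115200);
    tcsetattr(fd, TCSANOW, &tio);
    return fd;
}

int main() {
    int fd = open_serial("/dev/ttyUSB0");  // device path is an assumption
    if (fd < 0) { perror("open"); return 1; }

    // Hypothetical Gcode-style move command; the real protocol depends on
    // your uArm firmware version, so check its documentation first.
    const char* cmd = "G0 X150 Y0 Z100 F1000\n";
    write(fd, cmd, strlen(cmd));
    close(fd);
    return 0;
}
```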

Dave

I’d love to help. I’m Alex; I did the checkerbot video that Dave referenced. Could you give me some details about the project you’re looking to do and an idea of how you want to go about it? Also, feel free to check out my checkers code on GitHub (username Apockill).

@timmy2519 You can also find my Pixy processing code on GitHub (username DCorboy). The vision processing is in a library file, uArm-Tic-Tac-Toe/tic_tac_toe/sensor.cpp.
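
That library is specific to my tic-tac-toe project, but if you just want to see what raw Pixy block data looks like, the stock Charmed Labs Arduino library gets you there in a few lines. A minimal sketch, assuming the standard Pixy.h API:

```cpp
// Prints every color blob the Pixy (CMUcam5) currently sees over Serial,
// using the stock Charmed Labs Arduino library.
#include <SPI.h>
#include <Pixy.h>

Pixy pixy;  // talks to the camera over SPI by default

void setup() {
  Serial.begin(9600);
  pixy.init();
}

void loop() {
  // getBlocks() returns how many trained color signatures are in view
  uint16_t blocks = pixy.getBlocks();
  for (uint16_t i = 0; i < blocks; i++) {
    Serial.print("sig ");
    Serial.print(pixy.blocks[i].signature);  // which trained color matched
    Serial.print(" at x=");
    Serial.print(pixy.blocks[i].x);          // blob center in Pixy pixels
    Serial.print(" y=");
    Serial.println(pixy.blocks[i].y);
  }
  delay(50);
}
```

You train the signatures in PixyMon first, so the Arduino side never does any image processing itself; it only receives the blob list, which is what makes Pixy such a light lift compared to running OpenCV on a PC.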