Simple strategies for using vision processor outputs involve using the target’s position in the 2D image to infer range and angle to the target.
Knowledge and Equipment Needed
A Coprocessor running PhotonVision
A Drivetrain with wheels
Angle Alignment

The simplest way to use a vision processing result is to first determine how far left or right in the image the vision target should be for your robot to be “aligned” to the target. Then,
Read the current angle to the target from the vision Coprocessor.
If too far in one direction, command the drivetrain to rotate in the opposite direction to compensate.
See the Aiming at a Target example for more information.
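The steps above amount to a simple proportional controller on the target's yaw. The sketch below shows the idea; the gain, the sign convention (positive yaw meaning the target is to one side), and the `[-1, 1]` command range are assumptions for illustration, not part of the PhotonVision API.

```python
# Proportional gain for turning; an assumed value that must be tuned
# on a real drivetrain.
KP_TURN = 0.02

def turn_command(target_yaw_degrees: float) -> float:
    """Return a rotation command in [-1, 1] that drives the target's
    yaw toward zero.

    The yaw here stands in for the angle reported by the vision
    Coprocessor. A nonzero yaw produces a rotation command in the
    opposite direction, so the robot turns to center the target.
    """
    command = -KP_TURN * target_yaw_degrees
    # Clamp to the drivetrain's valid command range.
    return max(-1.0, min(1.0, command))
```

When the target is centered (yaw of zero) the command is zero; larger errors produce proportionally larger rotation commands, saturating at full output.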
Sometimes, these strategies have also involved incorporating a gyroscope. This can be necessary due to the high latency of vision processing algorithms. However, advancements in the tools available (including PhotonVision) have made that unnecessary for most applications.
Range Alignment

By looking at the position of the target in the “vertical” direction in the image, and applying some trigonometry, the distance between the camera and the target can be deduced. Then,
Read the current distance to the target from the vision Coprocessor.
If too close or too far away, command the drivetrain to drive backward or forward to compensate.
See the Getting in Range of the Target example for more information.
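The trigonometry works out as follows: if the camera is mounted at a known height and pitch, and the target sits at a known height, then the target's vertical position in the image gives its pitch relative to the camera, and the horizontal distance is the height difference divided by the tangent of the total angle. A sketch, with the geometry values (heights and camera pitch) assumed for illustration:

```python
import math

# Assumed robot geometry; real values come from measuring your robot
# and the field. Heights in meters, angles in degrees.
CAMERA_HEIGHT_M = 0.5
TARGET_HEIGHT_M = 2.0
CAMERA_PITCH_DEG = 30.0

def distance_to_target(target_pitch_deg: float) -> float:
    """Estimate the horizontal distance to the target from its
    vertical position (pitch) in the image:

        d = (h_target - h_camera) / tan(camera_pitch + target_pitch)
    """
    total_angle = math.radians(CAMERA_PITCH_DEG + target_pitch_deg)
    return (TARGET_HEIGHT_M - CAMERA_HEIGHT_M) / math.tan(total_angle)
```

For example, if the camera pitch plus the target's pitch in the image totals 45 degrees, the horizontal distance equals the height difference (here, 1.5 meters).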
Angle + Range
Since the previous two alignment strategies work on independent axes of the robot, there’s no reason you can’t do them simultaneously.
See the Aim and Range example for more information.
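Because rotation and forward travel act on independent axes of the drivetrain, the two controllers can simply be computed side by side. A combined sketch, with the same caveat as above that the gains, sign conventions, and `[-1, 1]` command range are assumed values for illustration:

```python
# Assumed proportional gains; tune on a real robot.
KP_FORWARD = 0.5
KP_TURN = 0.02

def _clamp(x: float) -> float:
    """Limit a command to the drivetrain's valid [-1, 1] range."""
    return max(-1.0, min(1.0, x))

def drive_commands(target_yaw_deg: float,
                   current_distance_m: float,
                   desired_distance_m: float) -> tuple[float, float]:
    """Compute (forward, rotation) commands simultaneously.

    The range error drives forward speed while the yaw error drives
    rotation; neither controller interferes with the other.
    """
    forward = _clamp(KP_FORWARD * (current_distance_m - desired_distance_m))
    rotation = _clamp(-KP_TURN * target_yaw_deg)
    return forward, rotation
```

With the target centered and the robot at the desired range, both commands are zero; otherwise each axis corrects its own error independently.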