AprilTags and PhotonPoseEstimator
Note
For more information on the methods used to get AprilTag data, look here.
PhotonLib includes a PhotonPoseEstimator class, which allows you to combine the pose data from all tags in view in order to get a field-relative pose. A separate instance of the PhotonPoseEstimator class should be created for each camera.
Creating an AprilTagFieldLayout
AprilTagFieldLayout is used to represent a layout of AprilTags within a space (a field, a shop at home, a classroom, etc.). WPILib provides a JSON file that describes the layout of AprilTags on the field, which you can then use in the AprilTagFieldLayout constructor. You can also specify a custom layout; a sketch of loading one is shown after the code below.
The API documentation can be found here: Java, C++, and Python.
// Java
public static final AprilTagFieldLayout kTagLayout =
    AprilTagFieldLayout.loadField(AprilTagFields.kDefaultField);

// C++
inline const frc::AprilTagFieldLayout kTagLayout{
    frc::AprilTagFieldLayout::LoadField(frc::AprilTagField::kDefaultField)};

# Python
kTagLayout = AprilTagFieldLayout.loadField(AprilTagField.kDefaultField)
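If you are not on an official field, AprilTagFieldLayout can also be constructed from a custom JSON file. Below is a minimal Java sketch of one way to do this; the file name myLayout.json and its placement in the project's deploy directory are assumptions for illustration.

import java.io.IOException;
import edu.wpi.first.apriltag.AprilTagFieldLayout;
import edu.wpi.first.apriltag.AprilTagFields;
import edu.wpi.first.wpilibj.Filesystem;

AprilTagFieldLayout layout;
try {
  // "myLayout.json" is a hypothetical custom layout placed in src/main/deploy
  layout = new AprilTagFieldLayout(
      Filesystem.getDeployDirectory().toPath().resolve("myLayout.json"));
} catch (IOException e) {
  // Fall back to the bundled field layout if the custom file can't be read
  layout = AprilTagFieldLayout.loadField(AprilTagFields.kDefaultField);
}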
Defining the Robot to Camera Transform3d
Another necessary argument for creating a PhotonPoseEstimator is the Transform3d representing the robot-relative location and orientation of the camera. A Transform3d contains a Translation3d and a Rotation3d. The Translation3d is specified in meters and the Rotation3d in radians. For more information on the coordinate system, please see the Coordinate Systems documentation.
// Java
public static final Transform3d kRobotToCam =
    new Transform3d(new Translation3d(0.5, 0.0, 0.5), new Rotation3d(0, 0, 0));

// C++
inline const frc::Transform3d kRobotToCam{
    frc::Translation3d{0.5_m, 0.0_m, 0.5_m},
    frc::Rotation3d{0_rad, -30_deg, 0_rad}};

# Python
kRobotToCam = wpimath.geometry.Transform3d(
    wpimath.geometry.Translation3d(0.5, 0.0, 0.5),
    wpimath.geometry.Rotation3d.fromDegrees(0.0, -30.0, 0.0),
)
Creating a PhotonPoseEstimator
The PhotonPoseEstimator has a constructor that takes an AprilTagFieldLayout (see above), a PoseStrategy, and a Transform3d (the Python constructor also takes a PhotonCamera; see the note below). PoseStrategy has nine possible values:
MULTI_TAG_PNP_ON_COPROCESSOR
Calculates a new robot position estimate by combining all visible tag corners. Recommended for all teams, as it is the most accurate.
The AprilTagFieldLayout must be configured properly in the UI; please see here for more information.
LOWEST_AMBIGUITY
Choose the Pose with the lowest ambiguity.
CLOSEST_TO_CAMERA_HEIGHT
Choose the Pose which is closest to the camera height.
CLOSEST_TO_REFERENCE_POSE
Choose the Pose which is closest to the pose from setReferencePose().
CLOSEST_TO_LAST_POSE
Choose the Pose which is closest to the last pose calculated.
AVERAGE_BEST_TARGETS
Choose the Pose which is the average of all the poses from each tag.
MULTI_TAG_PNP_ON_RIO
A slower, older version of MULTI_TAG_PNP_ON_COPROCESSOR, not recommended for use.
PNP_DISTANCE_TRIG_SOLVE
Uses distance data from the best visible tag to compute a pose. This runs on the roboRIO in order to access the robot's yaw heading, and MUST have addHeadingData called every frame so heading data is up to date (see the sketch after this list). Based on a reference implementation by FRC Team 6328 Mechanical Advantage.
CONSTRAINED_SOLVEPNP
Solves a constrained version of the Perspective-n-Point problem with the robot's drivebase assumed flat on the floor. This computation takes place on the roboRIO and should not take more than 2 ms. It also requires addHeadingData to be called every frame so heading data is up to date. If multi-tag PNP is enabled on the coprocessor, it will be used to provide an initial seed to the optimization algorithm; otherwise, the multi-tag fallback strategy will be used as the seed.
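The heading-dependent strategies above need the robot's yaw fed in every loop. Below is a minimal Java sketch of doing so, assuming photonEstimator is your PhotonPoseEstimator; gyro and drivetrain are hypothetical placeholders for your own objects, and the reference-pose line only applies if you use CLOSEST_TO_REFERENCE_POSE.

import edu.wpi.first.wpilibj.Timer;

// Call every robot loop (e.g. from robotPeriodic()) so the estimator's
// heading data stays current. `gyro` is a placeholder for your IMU.
public void feedEstimatorData() {
  photonEstimator.addHeadingData(Timer.getFPGATimestamp(), gyro.getRotation2d());
  // Only needed for CLOSEST_TO_REFERENCE_POSE; `drivetrain` is a placeholder
  photonEstimator.setReferencePose(drivetrain.getPose());
}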
// Java
photonEstimator =
    new PhotonPoseEstimator(kTagLayout, PoseStrategy.MULTI_TAG_PNP_ON_COPROCESSOR, kRobotToCam);

// C++
photon::PhotonPoseEstimator photonEstimator{
    constants::Vision::kTagLayout,
    photon::PoseStrategy::MULTI_TAG_PNP_ON_COPROCESSOR,
    constants::Vision::kRobotToCam};

# Python
self.camPoseEst = PhotonPoseEstimator(
    AprilTagFieldLayout.loadField(AprilTagField.kDefaultField),
    PoseStrategy.LOWEST_AMBIGUITY,
    self.cam,
    kRobotToCam,
)
Note
Python still takes a PhotonCamera in the constructor, so you must create the camera as shown in the next section and then come back and use it to create the PhotonPoseEstimator.
Using a PhotonPoseEstimator
The final prerequisite to using your PhotonPoseEstimator is creating a PhotonCamera. To do this, you must set the name of your camera in the Photon Client. From there, you can define the camera in code.
// Java
camera = new PhotonCamera(kCameraName);

// C++
photon::PhotonCamera camera{constants::Vision::kCameraName};

# Python
self.cam = PhotonCamera("YOUR CAMERA NAME")
Calling update() on your PhotonPoseEstimator will return an EstimatedRobotPose (wrapped in an optional, which is empty when no new estimate is available). It includes a Pose3d of the latest estimated pose (using the selected strategy) along with the timestamp, in seconds, of when the robot pose was estimated.
// Java
Optional<EstimatedRobotPose> visionEst = Optional.empty();
for (var change : camera.getAllUnreadResults()) {
  visionEst = photonEstimator.update(change);
  updateEstimationStdDevs(visionEst, change.getTargets());

  if (Robot.isSimulation()) {
    visionEst.ifPresentOrElse(
        est ->
            getSimDebugField()
                .getObject("VisionEstimation")
                .setPose(est.estimatedPose.toPose2d()),
        () -> {
          getSimDebugField().getObject("VisionEstimation").setPoses();
        });
  }

  visionEst.ifPresent(
      est -> {
        // Change our trust in the measurement based on the tags we can see
        var estStdDevs = getEstimationStdDevs();
        estConsumer.accept(est.estimatedPose.toPose2d(), est.timestampSeconds, estStdDevs);
      });
}
// C++
// Run each new pipeline result through our pose estimator
for (const auto& result : camera.GetAllUnreadResults()) {
  // Cache the result and update the pose estimator
  auto visionEst = photonEstimator.Update(result);
  m_latestResult = result;

  // In sim only, add our vision estimate to the sim debug field
  if (frc::RobotBase::IsSimulation()) {
    if (visionEst) {
      GetSimDebugField()
          .GetObject("VisionEstimation")
          ->SetPose(visionEst->estimatedPose.ToPose2d());
    } else {
      GetSimDebugField().GetObject("VisionEstimation")->SetPoses({});
    }
  }

  if (visionEst) {
    estConsumer(visionEst->estimatedPose.ToPose2d(), visionEst->timestamp,
                GetEstimationStdDevs(visionEst->estimatedPose.ToPose2d()));
  }
}
# Python
camEstPose = self.camPoseEst.update()
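The Java version of this loop calls updateEstimationStdDevs and getEstimationStdDevs, which are helpers from the example project rather than PhotonLib API. Below is a minimal Java sketch of one way to write them, assuming the kTagLayout constant from earlier; the scaling constants are illustrative, not authoritative.

import java.util.List;
import java.util.Optional;
import edu.wpi.first.math.Matrix;
import edu.wpi.first.math.VecBuilder;
import edu.wpi.first.math.numbers.N1;
import edu.wpi.first.math.numbers.N3;
import org.photonvision.EstimatedRobotPose;
import org.photonvision.targeting.PhotonTrackedTarget;

private Matrix<N3, N1> curStdDevs = VecBuilder.fill(4, 4, 8);

// Widen the std devs (trust vision less) when tags are few and far away
private void updateEstimationStdDevs(
    Optional<EstimatedRobotPose> est, List<PhotonTrackedTarget> targets) {
  if (est.isEmpty()) {
    curStdDevs = VecBuilder.fill(4, 4, 8); // No estimate: distrust vision
    return;
  }
  int numTags = 0;
  double avgDist = 0.0;
  for (var tgt : targets) {
    var tagPose = kTagLayout.getTagPose(tgt.getFiducialId());
    if (tagPose.isEmpty()) continue;
    numTags++;
    avgDist +=
        tagPose.get().toPose2d().getTranslation()
            .getDistance(est.get().estimatedPose.toPose2d().getTranslation());
  }
  if (numTags == 0) {
    curStdDevs = VecBuilder.fill(4, 4, 8);
    return;
  }
  avgDist /= numTags;
  // One far-away tag is much less trustworthy than several close ones
  var base = (numTags > 1) ? VecBuilder.fill(0.5, 0.5, 1) : VecBuilder.fill(4, 4, 8);
  curStdDevs = base.times(1 + (avgDist * avgDist / 30));
}

private Matrix<N3, N1> getEstimationStdDevs() {
  return curStdDevs;
}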
You should update your drivetrain pose estimator with the result from the PhotonPoseEstimator every loop using addVisionMeasurement().
// Java
vision = new Vision(drivetrain::addVisionMeasurement);

// C++
Vision vision{[=, this](frc::Pose2d pose, units::second_t timestamp,
                        Eigen::Matrix<double, 3, 1> stddevs) {
  drivetrain.AddVisionMeasurement(pose, timestamp, stddevs);
}};

# Python
if camEstPose:
    self.swerve.addVisionPoseEstimate(
        camEstPose.estimatedPose, camEstPose.timestampSeconds
    )
Complete Examples
The complete examples for the PhotonPoseEstimator can be found in the PhotonLib example projects.
Additional PhotonPoseEstimator Methods
For more information on the PhotonPoseEstimator class, please see the API documentation.