Using WPILib Pose Estimation, Simulation, and PhotonVision Together
The following example comes from the PhotonLib example repository (Java). Full code is available at that link.
Knowledge and Equipment Needed
Everything required in Combining Aiming and Getting in Range, plus some familiarity with WPILib pose estimation functionality.
Background
This example builds upon WPILib’s Differential Drive Pose Estimator. It adds a PhotonCamera to gather estimates of the robot’s position on the field. This in turn can be used for aligning with vision targets and for increasing the accuracy of autonomous routines.
To support simulation, a SimVisionSystem is used to drive data into the PhotonCamera. The far high goal target from 2020 is modeled.
Walkthrough
WPILib’s Pose2d class is used to represent robot positions on the field. Three different Pose2d positions are relevant for this example:
Desired Pose: The location some autonomous routine wants the robot to be in.
Estimated Pose: The location the software believes the robot to be in, based on physics models and sensor feedback.
Actual Pose: The location the robot is actually at. The physics simulation generates this in simulation, but it cannot be directly measured on the real robot.
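To make the relationship between these poses concrete, here is a minimal sketch; the numeric values and the m_poseEstimator name are illustrative assumptions, not code from the example:

import edu.wpi.first.math.geometry.Pose2d;
import edu.wpi.first.math.geometry.Rotation2d;
import edu.wpi.first.math.geometry.Transform2d;

// Hypothetical location an autonomous routine wants to reach.
Pose2d desiredPose = new Pose2d(2.0, 1.0, Rotation2d.fromDegrees(0.0));

// The software's best guess at where the robot currently is.
Pose2d estimatedPose = m_poseEstimator.getEstimatedPosition();

// The transform between the two poses is the remaining error the
// autonomous routine still has to drive out.
Transform2d error = desiredPose.minus(estimatedPose);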
Estimating Pose
The DrivetrainPoseEstimator class is responsible for generating an estimated robot pose using sensor readings (including PhotonVision). Please reference the WPILib documentation on using the DifferentialDrivePoseEstimator class.
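As a reference, a minimal construction of the underlying WPILib estimator might look like the sketch below; the track width, gyro object, and starting pose are illustrative assumptions, and some WPILib versions also accept standard-deviation matrices for tuning how much each sensor is trusted:

import edu.wpi.first.math.estimator.DifferentialDrivePoseEstimator;
import edu.wpi.first.math.geometry.Pose2d;
import edu.wpi.first.math.kinematics.DifferentialDriveKinematics;
import edu.wpi.first.math.util.Units;

// Assumed 24 inch track width; use your robot's measured value.
DifferentialDriveKinematics kinematics =
        new DifferentialDriveKinematics(Units.inchesToMeters(24.0));

DifferentialDrivePoseEstimator m_poseEstimator =
        new DifferentialDrivePoseEstimator(
                kinematics,
                gyro.getRotation2d(), // current gyro heading
                0.0, // initial left encoder distance, meters
                0.0, // initial right encoder distance, meters
                new Pose2d()); // starting pose: field origin, facing +X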
For both simulation and on-robot code, we create objects to represent the physical location and size of the vision targets we are calibrated to detect. This example models the down-field high goal vision target from the 2020 and 2021 games.
// See
// https://firstfrc.blob.core.windows.net/frc2020/PlayingField/2020FieldDrawing-SeasonSpecific.pdf
// page 208
public static final double targetWidth =
        Units.inchesToMeters(41.30) - Units.inchesToMeters(6.70); // meters

// See
// https://firstfrc.blob.core.windows.net/frc2020/PlayingField/2020FieldDrawing-SeasonSpecific.pdf
// page 197
public static final double targetHeight =
        Units.inchesToMeters(98.19) - Units.inchesToMeters(81.19); // meters

// See https://firstfrc.blob.core.windows.net/frc2020/PlayingField/LayoutandMarkingDiagram.pdf
// pages 4 and 5
public static final double kFarTgtXPos = Units.feetToMeters(54);
public static final double kFarTgtYPos =
        // 27.0 (not 27) avoids integer division truncating half the field width
        Units.feetToMeters(27.0 / 2) - Units.inchesToMeters(43.75) - Units.inchesToMeters(48.0 / 2.0);
public static final double kFarTgtZPos =
        // Height of the target's center: top of the tape minus half the target height
        Units.inchesToMeters(98.19) - targetHeight / 2;

public static final Pose3d kFarTargetPose =
        new Pose3d(
                new Translation3d(kFarTgtXPos, kFarTgtYPos, kFarTgtZPos),
                new Rotation3d(0.0, 0.0, Units.degreesToRadians(180)));
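These constants can then be bundled into the simulated target that the simulation code below refers to as Constants.kFarTarget. The exact SimVisionTarget constructor arguments, and the target ID of 42, are assumptions here; check the signature in your PhotonLib version:

import org.photonvision.SimVisionTarget;

// Combine the target's pose and physical dimensions into one object
// the simulated vision system can render. The ID is a placeholder.
public static final SimVisionTarget kFarTarget =
        new SimVisionTarget(kFarTargetPose, targetWidth, targetHeight, 42);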
To incorporate PhotonVision, we need to create a PhotonCamera:
private PhotonCamera cam = new PhotonCamera(Constants.kCamName);
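Note that Constants.kCamName must exactly match the camera’s name as configured in the PhotonVision web interface, since PhotonLib uses this name to locate the camera’s NetworkTables entries.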
During periodic execution, we read back camera results. If we see a target in the image, we pass the camera-measured pose of the robot to the DifferentialDrivePoseEstimator.
public void update(double leftDist, double rightDist) {
    // Fold the latest gyro heading and encoder distances into the estimate.
    m_poseEstimator.update(gyro.getRotation2d(), leftDist, rightDist);

    var res = cam.getLatestResult();
    if (res.hasTargets()) {
        var imageCaptureTime = res.getTimestampSeconds();
        var camToTargetTrans = res.getBestTarget().getBestCameraToTarget();
        // Work backward from the known target pose to where the camera must be...
        var camPose = Constants.kFarTargetPose.transformBy(camToTargetTrans.inverse());
        // ...then from the camera to the robot, tagged with the capture time.
        m_poseEstimator.addVisionMeasurement(
                camPose.transformBy(Constants.kCameraToRobot).toPose2d(), imageCaptureTime);
    }
}
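For context, this update method is meant to be called from the drivetrain’s periodic loop. A sketch of a typical call site, where poseEst is an instance of DrivetrainPoseEstimator and leftEncoder/rightEncoder are hypothetical encoder objects reporting distance in meters:

// Called once per loop iteration from the drivetrain code.
poseEst.update(leftEncoder.getDistance(), rightEncoder.getDistance());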
That’s it!
Simulating the Camera
First, we create a new SimVisionSystem to represent our camera and coprocessor running PhotonVision.
// Simulated Vision System.
// Configure these to match your PhotonVision Camera,
// pipeline, and LED setup.
double camDiagFOV = 75.0; // degrees
double camPitch = 15.0; // degrees
double camHeightOffGround = 0.85; // meters
double maxLEDRange = 20; // meters
int camResolutionWidth = 640; // pixels
int camResolutionHeight = 480; // pixels
double minTargetArea = 10; // square pixels

SimVisionSystem simVision =
        new SimVisionSystem(
                Constants.kCamName,
                camDiagFOV,
                Constants.kCameraToRobot,
                maxLEDRange,
                camResolutionWidth,
                camResolutionHeight,
                minTargetArea);
Then, we add our target to the simulated vision system.
public DrivetrainSim() {
    simVision.addSimVisionTarget(Constants.kFarTarget);
}
If you have additional targets you want to detect, you can add them in the same way as the first one.
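For example, assuming a second target constant had been defined alongside kFarTarget (the name kOtherTarget is hypothetical):

// Hypothetical second target, defined in Constants just like kFarTarget.
simVision.addSimVisionTarget(Constants.kOtherTarget);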
Updating the Simulated Vision System
Once we have all the properties of our simulated vision system defined, the remaining work is minimal. Periodically, pass in the robot’s pose to the simulated vision system.
// Update PhotonVision based on our new robot position.
simVision.processFrame(drivetrainSimulator.getPose());
The rest is done behind the scenes.
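For reference, a plausible home for this call is the drivetrain simulation’s update routine, sketched below; the method shape and the 20 ms timestep are assumptions for illustration:

public void update() {
    // Step the drivetrain physics model forward one loop iteration.
    drivetrainSimulator.update(0.02); // assumed 20 ms loop time

    // Feed the resulting ground-truth pose to the simulated vision
    // system, which publishes matching results to the PhotonCamera.
    simVision.processFrame(drivetrainSimulator.getPose());
}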