Using WPILib Pose Estimation, Simulation, and PhotonVision Together

Background

Full code may be found in the PhotonLib repository (Java).

This example builds upon WPILib’s Differential Drive Pose Estimator. It adds a PhotonCamera to gather estimates of the robot’s position on the field. This in turn can be used to align with vision targets and to increase the accuracy of autonomous routines.

To support simulation, a SimVisionSystem is used to drive data into the PhotonCamera. The far high goal target from 2020 is modeled.

Walkthrough

WPILib’s Pose2d class is used to represent robot positions on the field.

Three different Pose2d positions are relevant for this example:

  1. Desired Pose: The location an autonomous routine wants the robot to be in.

  2. Estimated Pose: The location the software believes the robot to be in, based on physics models and sensor feedback.

  3. Actual Pose: The location the robot is actually at. The physics simulation generates this in simulation, but it cannot be directly measured on the real robot.
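
For reference, a Pose2d is just a field-relative translation paired with a heading. A minimal sketch of constructing one (the values here are arbitrary):

    // A hypothetical desired pose: 2.0 m downfield, 1.0 m to the left,
    // facing straight downfield (heading of 0 radians).
    Pose2d desiredPose = new Pose2d(new Translation2d(2.0, 1.0), new Rotation2d(0.0));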

Estimating Pose

The DrivetrainPoseEstimator class is responsible for generating an estimated robot pose using sensor readings (including PhotonVision).

Please reference the WPILib documentation on using the DifferentialDrivePoseEstimator class.
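
For context, the estimator might be constructed along the following lines. This is a sketch rather than the example’s exact code: the gyro object and the standard-deviation values are illustrative assumptions, and the constructor shown is the 2021-era DifferentialDrivePoseEstimator API.

    // Standard deviations express how much each source is trusted;
    // larger values mean less trust.
    //   State:               [x, y, heading, left dist, right dist]
    //   Local measurements:  [left dist, right dist, heading]
    //   Vision measurements: [x, y, heading]
    private final DifferentialDrivePoseEstimator m_poseEstimator =
            new DifferentialDrivePoseEstimator(
                    gyro.getRotation2d(), // placeholder gyro object
                    new Pose2d(),         // assume we start at the field origin
                    VecBuilder.fill(0.02, 0.02, 0.01, 0.02, 0.02),
                    VecBuilder.fill(0.02, 0.02, 0.01),
                    VecBuilder.fill(0.1, 0.1, 0.1));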

Specifically, to incorporate PhotonVision, we need to create a PhotonCamera:

    private PhotonCamera cam = new PhotonCamera(Constants.kCamName);

Then, during periodic execution, we read back results. If we see a target in the image, we work backward from the target’s known field position: applying the inverse of the camera-to-target transform to the target pose gives the camera’s pose on the field, and applying the camera-to-robot transform gives the robot’s pose. That pose is passed to the DifferentialDrivePoseEstimator, along with a timestamp corrected for the pipeline’s latency so the measurement is fused at the moment the image was actually captured.

        var res = cam.getLatestResult();
        if (res.hasTargets()) {
            // getLatencyMillis() is in milliseconds; convert to seconds so the
            // timestamp matches Timer.getFPGATimestamp().
            double imageCaptureTime = Timer.getFPGATimestamp() - res.getLatencyMillis() / 1000.0;
            Transform2d camToTargetTrans = res.getBestTarget().getCameraToTarget();
            // Work backward from the known target pose to where the camera must be,
            // then from the camera to the robot's center.
            Pose2d camPose = Constants.kFarTargetPose.transformBy(camToTargetTrans.inverse());
            m_poseEstimator.addVisionMeasurement(
                    camPose.transformBy(Constants.kCameraToRobot), imageCaptureTime);
        }
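
Keep in mind that vision is only one input to the estimator; it still needs odometry updates every loop. A sketch of that periodic call, assuming a gyro and encoders with the names shown (using the 2021-era update() signature):

        // Fuse the drivetrain's own sensors every loop, alongside any vision input.
        m_poseEstimator.update(
                gyro.getRotation2d(),
                new DifferentialDriveWheelSpeeds(
                        leftEncoder.getRate(), rightEncoder.getRate()), // m/s
                leftEncoder.getDistance(),   // meters
                rightEncoder.getDistance()); // meters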

That’s it!

Simulating the Camera

First, we create a new SimVisionSystem to represent our camera and coprocessor running PhotonVision.

    // Simulated Vision System.
    // Configure these to match your PhotonVision Camera,
    // pipeline, and LED setup.
    double camDiagFOV = 75.0; // degrees
    double camPitch = 15.0; // degrees
    double camHeightOffGround = 0.85; // meters
    double maxLEDRange = 20; // meters
    int camResolutionWidth = 640; // pixels
    int camResolutionHeight = 480; // pixels
    double minTargetArea = 10; // square pixels

    SimVisionSystem simVision =
            new SimVisionSystem(
                    Constants.kCamName,
                    camDiagFOV,
                    camPitch,
                    Constants.kCameraToRobot,
                    camHeightOffGround,
                    maxLEDRange,
                    camResolutionWidth,
                    camResolutionHeight,
                    minTargetArea);

Next, we create objects to represent the physical location and size of the vision targets our pipeline is calibrated to detect. This example models the down-field high goal vision target from the 2020 and 2021 games.

    // Physical location of the camera on the robot, relative to the center of the
    // robot.
    public static final Transform2d kCameraToRobot =
            new Transform2d(
                    new Translation2d(-0.25, 0), // in meters
                    new Rotation2d());

    // See
    // https://firstfrc.blob.core.windows.net/frc2020/PlayingField/2020FieldDrawing-SeasonSpecific.pdf
    // page 208
    public static final double targetWidth =
            Units.inchesToMeters(41.30) - Units.inchesToMeters(6.70); // meters

    // See
    // https://firstfrc.blob.core.windows.net/frc2020/PlayingField/2020FieldDrawing-SeasonSpecific.pdf
    // page 197
    public static final double targetHeight =
            Units.inchesToMeters(98.19) - Units.inchesToMeters(81.19); // meters
    public static final double targetHeightAboveGround = Units.inchesToMeters(81.19); // meters

    // See https://firstfrc.blob.core.windows.net/frc2020/PlayingField/LayoutandMarkingDiagram.pdf
    // pages 4 and 5
    public static final double kFarTgtXPos = Units.feetToMeters(54);
    public static final double kFarTgtYPos =
            Units.feetToMeters(27.0 / 2.0) - Units.inchesToMeters(43.75) - Units.inchesToMeters(48.0 / 2.0);
    public static final Pose2d kFarTargetPose =
            new Pose2d(new Translation2d(kFarTgtXPos, kFarTgtYPos), new Rotation2d(0.0));

    public static final SimVisionTarget kFarTarget =
            new SimVisionTarget(kFarTargetPose, targetHeightAboveGround, targetWidth, targetHeight);

Finally, we add our target to the simulated vision system.

        simVision.addSimVisionTarget(Constants.kFarTarget);

If you have additional targets you want to detect, you can add them in the same way as the first one.
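
For instance, a second target could be registered as follows; kNearTargetPose and the dimension values here are placeholders for whatever describes your additional target.

        // Hypothetical second target; the pose and dimensions are placeholders.
        SimVisionTarget kNearTarget =
                new SimVisionTarget(kNearTargetPose, targetHeightAboveGround, targetWidth, targetHeight);
        simVision.addSimVisionTarget(kNearTarget);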

Updating the Simulated Vision System

Once we have all the properties of our simulated vision system defined, the work left to do at runtime is minimal: simply pass the robot’s pose to the simulated vision system periodically.

        // Update PhotonVision based on our new robot position.
        simVision.processFrame(drivetrainSimulator.getPose());

The rest is done behind the scenes.
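
To place that call in context, a sketch of a simulation-update method is shown below; the 20 ms step and the drivetrainSimulator stepping are assumptions about the surrounding code.

    public void simulationPeriodic() {
        // Step the drivetrain physics model forward one 20 ms loop.
        drivetrainSimulator.update(0.02);

        // Feed the resulting ground-truth pose to the simulated camera, which
        // then publishes target results just as a real coprocessor would.
        simVision.processFrame(drivetrainSimulator.getPose());
    }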