Simulation Support in PhotonLib

What Is Supported?

PhotonLib supports simulation of a PhotonVision camera and coprocessor moving about a field on a robot.

You can use this to help validate your robot code’s behavior in simulation without special wrappers or additional hardware.

Simulation Vision World Model

Sim-specific classes are provided to model sending one frame of a camera image through PhotonVision. Based on what targets are visible, results are published to NetworkTables.

While processing, the given robot Pose2d is used to determine which targets should be in view and where they would appear in the camera image.

Targets are considered in view if:

  1. Their centroid is within the field of view of the camera.

  2. The camera is not in driver mode.

  3. The target’s in-image pixel area is greater than minTargetArea.

  4. The distance from the camera to the target is less than maxLEDRange.
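The combination of these checks can be sketched in plain Java. Note this is an illustrative, hypothetical helper, not PhotonLib API, and it simplifies criterion 1 to a yaw-only comparison against half the diagonal field of view:

```java
// Hypothetical sketch of the visibility criteria above; not part of PhotonLib.
public class VisibilityCheck {
    static boolean isTargetVisible(
            double yawToTargetDeg,       // angle from camera boresight to target centroid
            double camDiagFovDeg,        // camera diagonal field of view
            boolean driverMode,          // true if the camera is in driver mode
            double targetAreaPx,         // target's in-image size, square pixels
            double minTargetAreaPx,      // pipeline's minimum target area
            double distToTargetMeters,   // camera-to-target distance
            double maxLedRangeMeters) {  // maximum illuminated range
        boolean inFov = Math.abs(yawToTargetDeg) < camDiagFovDeg / 2.0; // criterion 1 (yaw only)
        boolean bigEnough = targetAreaPx > minTargetAreaPx;             // criterion 3
        boolean closeEnough = distToTargetMeters < maxLedRangeMeters;   // criterion 4
        return inFov && !driverMode && bigEnough && closeEnough;        // criterion 2
    }

    public static void main(String[] args) {
        // 10 deg off-axis, 25 px^2 target at 5 m: visible.
        System.out.println(isTargetVisible(10.0, 170.0, false, 25.0, 10.0, 5.0, 20.0)); // true
        // Same target, but camera in driver mode: not visible.
        System.out.println(isTargetVisible(10.0, 170.0, true, 25.0, 10.0, 5.0, 20.0));  // false
    }
}
```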

Warning

Not all NetworkTables entries are updated in simulation, but the interaction through PhotonLib remains the same. Actual camera images are also not simulated.

Latency of processing is not yet modeled.

[Figure: Simulation architecture (SimArchitecture.svg)]

Simulated Vision System

A SimVisionSystem represents the camera, coprocessor, and PhotonVision software moving around on the field.

It requires a number of pieces of configuration to accurately simulate your physical setup. Match them to your configuration in PhotonVision, and to your robot’s physical dimensions.

Java:

    // Simulated Vision System.
    // Configure these to match your PhotonVision Camera,
    // pipeline, and LED setup.
    double camDiagFOV = 170.0; // degrees - assume wide-angle camera
    double camPitch = Units.radiansToDegrees(Robot.CAMERA_PITCH_RADIANS); // degrees
    double camHeightOffGround = Robot.CAMERA_HEIGHT_METERS; // meters
    double maxLEDRange = 20; // meters
    int camResolutionWidth = 640; // pixels
    int camResolutionHeight = 480; // pixels
    double minTargetArea = 10; // square pixels

    SimVisionSystem simVision =
            new SimVisionSystem(
                    "photonvision",
                    camDiagFOV,
                    camPitch,
                    new Transform2d(),
                    camHeightOffGround,
                    maxLEDRange,
                    camResolutionWidth,
                    camResolutionHeight,
                    minTargetArea);

C++:

    // Simulated Vision System.
    // Configure these to match your PhotonVision Camera,
    // pipeline, and LED setup.
    units::degree_t camDiagFOV = 170.0_deg; // assume wide-angle camera
    units::degree_t camPitch = 15_deg;
    units::meter_t camHeightOffGround = 24_in;
    units::meter_t maxLEDRange = 20_m;
    int camResolutionWidth = 640; // pixels
    int camResolutionHeight = 480; // pixels
    double minTargetArea = 10; // square pixels

    photonlib::SimVisionSystem simVision {
                    "photonvision",
                    camDiagFOV,
                    camPitch,
                    frc::Transform2d{},
                    camHeightOffGround,
                    maxLEDRange,
                    camResolutionWidth,
                    camResolutionHeight,
                    minTargetArea};

After declaring the system, you should create and add one SimVisionTarget per target on the field you are attempting to detect.

Java:

    // See
    // https://firstfrc.blob.core.windows.net/frc2020/PlayingField/2020FieldDrawing-SeasonSpecific.pdf
    // page 208
    double targetWidth = Units.inchesToMeters(41.30) - Units.inchesToMeters(6.70); // meters
    // See
    // https://firstfrc.blob.core.windows.net/frc2020/PlayingField/2020FieldDrawing-SeasonSpecific.pdf
    // page 197
    double targetHeight = Units.inchesToMeters(98.19) - Units.inchesToMeters(81.19); // meters
    // See https://firstfrc.blob.core.windows.net/frc2020/PlayingField/LayoutandMarkingDiagram.pdf
    // pages 4 and 5
    double tgtXPos = Units.feetToMeters(54);
    double tgtYPos =
            Units.feetToMeters(27 / 2) - Units.inchesToMeters(43.75) - Units.inchesToMeters(48.0 / 2.0);
    Pose2d farTargetPose = new Pose2d(new Translation2d(tgtXPos, tgtYPos), new Rotation2d(0.0));

    Field2d field = new Field2d();

    public DrivetrainSim() {
        simVision.addSimVisionTarget(
                new SimVisionTarget(farTargetPose, Robot.TARGET_HEIGHT_METERS, targetWidth, targetHeight));
    }

C++:

    // See
    // https://firstfrc.blob.core.windows.net/frc2020/PlayingField/2020FieldDrawing-SeasonSpecific.pdf
    // page 208
    const units::meter_t targetWidth = 41.30_in - 6.70_in; // meters
    // See https://firstfrc.blob.core.windows.net/frc2020/PlayingField/2020FieldDrawing-SeasonSpecific.pdf page 197
    const units::meter_t targetHeight = 98.19_in - 81.19_in; // meters
    // See https://firstfrc.blob.core.windows.net/frc2020/PlayingField/LayoutandMarkingDiagram.pdf pages 4 and 5
    const units::meter_t tgtXPos = 54_ft;
    const units::meter_t tgtYPos = (27.0_ft / 2) - 43.75_in - (48.0_in / 2.0);
    const frc::Translation2d targetTrans {tgtXPos, tgtYPos};
    const frc::Rotation2d targetRot {0.0_deg};
    frc::Pose2d farTargetPose {targetTrans, targetRot};
    simVision.AddSimVisionTarget(photonlib::SimVisionTarget(farTargetPose, 81.91_in, targetWidth, targetHeight));

Finally, while running the simulation, process simulated camera frames by providing the robot’s pose to the system.

Java:

    simVision.processFrame(drivetrainSimulator.getPose());

C++:

    simVision.ProcessFrame(m_drivetrainSimulator.GetPose());

This will cause most NetworkTables fields to update properly, representing any targets that are in view of the robot.

Robot software which uses PhotonLib to interact with a camera running PhotonVision should behave the same as if a real camera were hooked up and active.

Raw-Data Approach

Advanced users may wish to directly provide target information based on an existing detailed simulation.

A SimPhotonCamera can be created for this purpose. It provides an interface where the user can supply target data via a list of PhotonTrackedTarget objects.

Java:

@Override
public void simulationInit() {
    //  ...
    cam = new SimPhotonCamera("MyCamera");
    //  ...
}

@Override
public void simulationPeriodic() {
    //  ...
    ArrayList<PhotonTrackedTarget> visibleTgtList = new ArrayList<PhotonTrackedTarget>();
    visibleTgtList.add(new PhotonTrackedTarget(yawDegrees, pitchDegrees, area, skew, camToTargetTrans)); // Repeat for each target that you see
    cam.submitProcessedFrame(0.0, visibleTgtList);
    //  ...
}

C++:

#include "photonlib/SimPhotonCamera.h"

//  ...

void Robot::SimulationInit() {
    //  ...
    cam = photonlib::SimPhotonCamera("MyCamera");
    //  ...
}

void Robot::SimulationPeriodic() {
    //  ...
    std::vector<photonlib::PhotonTrackedTarget> visibleTgtList = {};
    visibleTgtList.push_back(photonlib::PhotonTrackedTarget(yawAngle, pitchAngle, area, 0.0, camToTargetTrans)); // Repeat for each target that you see
    cam.SubmitProcessedFrame(0_s, wpi::MutableArrayRef<photonlib::PhotonTrackedTarget>(visibleTgtList));
    //  ...
}

Note that while this approach requires less code and configuration to get basic data into the simulation, it leaves much more work to the user: you must calculate the relative positions of the robot and targets yourself. If you already have that from an existing detailed simulation, the raw interface may be helpful. If you don’t, you’ll likely want to look at the Simulated Vision System first.
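As a sketch of the kind of math the raw-data approach requires, the yaw and ground distance from the robot to a target can be computed from field-relative 2D positions. These are hypothetical helper names in plain Java; a real implementation would typically use WPILib's Pose2d and Translation2d geometry classes instead:

```java
// Hypothetical helpers for the raw-data approach: given the robot's field-relative
// position/heading and a target's field position, compute values a vision pipeline
// would report. Illustrative only; not PhotonLib API.
public class RawTargetMath {
    /** Yaw (degrees) from the robot's heading to the target; positive to the left. */
    static double yawToTargetDeg(double robotX, double robotY, double robotHeadingDeg,
                                 double tgtX, double tgtY) {
        double angleToTgtDeg = Math.toDegrees(Math.atan2(tgtY - robotY, tgtX - robotX));
        // Normalize the difference into (-180, 180].
        double yaw = angleToTgtDeg - robotHeadingDeg;
        while (yaw <= -180.0) yaw += 360.0;
        while (yaw > 180.0) yaw -= 360.0;
        return yaw;
    }

    /** Straight-line ground distance (meters) from robot to target. */
    static double distToTargetMeters(double robotX, double robotY, double tgtX, double tgtY) {
        return Math.hypot(tgtX - robotX, tgtY - robotY);
    }

    public static void main(String[] args) {
        // Robot at the origin facing +X; target 3 m ahead and 3 m to the left.
        System.out.println(yawToTargetDeg(0, 0, 0, 3, 3));       // 45.0
        System.out.println(distToTargetMeters(0, 0, 3, 4));      // 5.0
    }
}
```

Values like these could then be fed into the PhotonTrackedTarget list submitted via submitProcessedFrame, as shown above.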