Simulating Aiming and Getting in Range

Knowledge and Equipment Needed

Background

Full code may be found in the PhotonLib repository (Java/C++).

The previous examples show how to run PhotonVision on a real robot, with a physical robot drivetrain moving around and interacting with the software.

This example builds upon that, adding support for simulating robot motion and incorporating that motion into a SimVisionSystem. This allows you to test control algorithms on your development computer, without requiring access to a real robot.

Walkthrough

First, in the main Robot source file, we add support for periodically updating a new simulation-specific object. This logic runs only while in simulation:

Java:

    // Simulation support

    DrivetrainSim dtSim;

    @Override
    public void simulationInit() {
        dtSim = new DrivetrainSim();
    }

    @Override
    public void simulationPeriodic() {
        dtSim.update();
    }

C++:

    void Robot::SimulationPeriodic() { dtSim.update(); }

Then, we add the implementation of our new DrivetrainSim class. Please reference the WPILib documentation on physics simulation.
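In a real project this class should wrap WPILib's DifferentialDrivetrainSim, which models motor dynamics and measurement noise. Conceptually, though, each update() call just integrates wheel speeds into a field-relative pose. The following is a simplified, self-contained sketch of that dead-reckoning step (plain Java, no WPILib; the class and method names are illustrative, not PhotonLib or WPILib API):

```java
// Illustrative sketch of the pose integration a drivetrain simulation performs
// each loop. Real robot code should use WPILib's DifferentialDrivetrainSim.
public class PoseIntegrationSketch {
    public double x = 0.0;        // meters, field-relative
    public double y = 0.0;        // meters, field-relative
    public double thetaRad = 0.0; // heading, radians

    private final double trackWidthMeters;

    public PoseIntegrationSketch(double trackWidthMeters) {
        this.trackWidthMeters = trackWidthMeters;
    }

    /** Advance the pose by dtSeconds given left/right wheel speeds in m/s. */
    public void update(double leftMps, double rightMps, double dtSeconds) {
        double v = (leftMps + rightMps) / 2.0;                  // chassis speed
        double omega = (rightMps - leftMps) / trackWidthMeters; // yaw rate
        x += v * Math.cos(thetaRad) * dtSeconds;
        y += v * Math.sin(thetaRad) * dtSeconds;
        thetaRad += omega * dtSeconds;
    }
}
```

Calling update() from simulationPeriodic() at the loop rate (20 ms) keeps the simulated pose current for the vision system below.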

Simulated Vision support is added with the following steps:

Creating the Simulated Vision System

First, we create a new SimVisionSystem to represent our camera and coprocessor running PhotonVision.

Java:

    // Simulated Vision System.
    // Configure these to match your PhotonVision Camera,
    // pipeline, and LED setup.
    double camDiagFOV = 170.0; // degrees - assume wide-angle camera
    double camPitch = Units.radiansToDegrees(Robot.CAMERA_PITCH_RADIANS); // degrees
    double camHeightOffGround = Robot.CAMERA_HEIGHT_METERS; // meters
    double maxLEDRange = 20; // meters
    int camResolutionWidth = 640; // pixels
    int camResolutionHeight = 480; // pixels
    double minTargetArea = 10; // square pixels

    SimVisionSystem simVision =
            new SimVisionSystem(
                    "photonvision",
                    camDiagFOV,
                    camPitch,
                    new Transform2d(),
                    camHeightOffGround,
                    maxLEDRange,
                    camResolutionWidth,
                    camResolutionHeight,
                    minTargetArea);

C++:

    // Simulated Vision System.
    // Configure these to match your PhotonVision Camera,
    // pipeline, and LED setup.
    units::degree_t camDiagFOV = 170.0_deg;  // assume wide-angle camera
    units::degree_t camPitch = 15_deg;
    units::meter_t camHeightOffGround = 24_in;
    units::meter_t maxLEDRange = 20_m;
    int camResolutionWidth = 640;   // pixels
    int camResolutionHeight = 480;  // pixels
    double minTargetArea = 10;      // square pixels

    photonlib::SimVisionSystem simVision{
        "photonvision",     camDiagFOV,          camPitch,
        frc::Transform2d{}, camHeightOffGround,  maxLEDRange,
        camResolutionWidth, camResolutionHeight, minTargetArea};
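To decide whether a target lands on the simulated sensor, the vision system needs per-axis fields of view, not just the diagonal value configured above. One simple way to derive them from the diagonal FOV and resolution is a linear split by pixel count (an approximation for illustration; PhotonLib's internals may differ):

```java
// Approximate per-axis FOV from a diagonal FOV and sensor resolution.
// This linear split is an illustrative approximation, not PhotonLib source.
public class FovSketch {
    /** Approximate horizontal FOV in degrees. */
    public static double horizFovDeg(double diagFovDeg, int widthPx, int heightPx) {
        double hypotPx = Math.hypot(widthPx, heightPx); // diagonal in pixels
        return diagFovDeg * widthPx / hypotPx;
    }

    /** Approximate vertical FOV in degrees. */
    public static double vertFovDeg(double diagFovDeg, int widthPx, int heightPx) {
        double hypotPx = Math.hypot(widthPx, heightPx);
        return diagFovDeg * heightPx / hypotPx;
    }
}
```

With the values above (170 degrees diagonal at 640x480), this yields roughly 136 degrees horizontal and 102 degrees vertical.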

Next, we create objects to represent the physical location and size of the vision targets we are calibrated to detect. This example models the down-field high goal vision target from the 2020 and 2021 games.

Java:

    // See
    // https://firstfrc.blob.core.windows.net/frc2020/PlayingField/2020FieldDrawing-SeasonSpecific.pdf
    // page 208
    double targetWidth = Units.inchesToMeters(41.30) - Units.inchesToMeters(6.70); // meters
    // See
    // https://firstfrc.blob.core.windows.net/frc2020/PlayingField/2020FieldDrawing-SeasonSpecific.pdf
    // page 197
    double targetHeight = Units.inchesToMeters(98.19) - Units.inchesToMeters(81.19); // meters
    // See https://firstfrc.blob.core.windows.net/frc2020/PlayingField/LayoutandMarkingDiagram.pdf
    // pages 4 and 5
    double tgtXPos = Units.feetToMeters(54);
    double tgtYPos =
            Units.feetToMeters(27.0 / 2) - Units.inchesToMeters(43.75) - Units.inchesToMeters(48.0 / 2.0);
    Pose2d farTargetPose = new Pose2d(new Translation2d(tgtXPos, tgtYPos), new Rotation2d(0.0));

C++:

    // See
    // https://firstfrc.blob.core.windows.net/frc2020/PlayingField/2020FieldDrawing-SeasonSpecific.pdf
    // page 208
    const units::meter_t targetWidth = 41.30_in - 6.70_in;
    // See
    // https://firstfrc.blob.core.windows.net/frc2020/PlayingField/2020FieldDrawing-SeasonSpecific.pdf
    // page 197
    const units::meter_t targetHeight = 98.19_in - 81.19_in;
    // See
    // https://firstfrc.blob.core.windows.net/frc2020/PlayingField/LayoutandMarkingDiagram.pdf
    // pages 4 and 5
    const units::meter_t tgtXPos = 54_ft;
    const units::meter_t tgtYPos = (27.0_ft / 2) - 43.75_in - (48.0_in / 2.0);
    const frc::Translation2d targetTrans{tgtXPos, tgtYPos};
    const frc::Rotation2d targetRot{0.0_deg};
    frc::Pose2d farTargetPose{targetTrans, targetRot};
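The unit arithmetic above is easy to sanity-check by hand. Replicating it with plain doubles (1 in = 0.0254 m, 1 ft = 0.3048 m; constant names here are illustrative):

```java
// Plain-double restatement of the field-layout math above, for sanity checking.
public class TargetGeometry {
    static final double IN = 0.0254; // meters per inch
    static final double FT = 0.3048; // meters per foot

    // Outer opening width: 41.30 in - 6.70 in
    public static final double TARGET_WIDTH_M = (41.30 - 6.70) * IN;
    // Opening height: 98.19 in - 81.19 in
    public static final double TARGET_HEIGHT_M = (98.19 - 81.19) * IN;
    // Down-field distance to the far wall: 54 ft
    public static final double TGT_X_POS_M = 54 * FT;
    // Lateral offset: half field width, minus offsets from the layout diagram
    public static final double TGT_Y_POS_M =
            (27.0 / 2) * FT - 43.75 * IN - (48.0 / 2.0) * IN;
}
```

This works out to a target roughly 0.879 m wide and 0.432 m tall, positioned about 16.46 m down-field and 2.39 m off the centerline.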

Finally, we add our target to the simulated vision system.

Java:

    public DrivetrainSim() {
        simVision.addSimVisionTarget(
                new SimVisionTarget(farTargetPose, Robot.TARGET_HEIGHT_METERS, targetWidth, targetHeight));
        SmartDashboard.putData("Field", field);
    }

C++:

    class DrivetrainSim {
     public:
      DrivetrainSim() {
        simVision.AddSimVisionTarget(photonlib::SimVisionTarget(
            farTargetPose, 81.19_in, targetWidth, targetHeight));
        frc::SmartDashboard::PutData("Field", &field);
      }

If you have additional targets you want to detect, you can add them in the same way as the first one.

Updating the Simulated Vision System

Once we have all the properties of our simulated vision system defined, the runtime work is minimal: simply pass the robot's pose to the simulated vision system periodically.

Java:

    // Update PhotonVision based on our new robot position.
    simVision.processFrame(drivetrainSimulator.getPose());

C++:

    // Update PhotonVision based on our new robot position.
    simVision.ProcessFrame(m_drivetrainSimulator.GetPose());

The rest is done behind the scenes.
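On the consuming side, robot code reads the simulated results through the same PhotonCamera interface it would use on a real robot, so the aiming and ranging logic needs no changes. The range estimate that drives the "getting in range" behavior reduces to one trigonometric identity: this sketch mirrors what PhotonLib's distance utility computes from known heights and angles, restated in plain Java (not the library source):

```java
// Plain-Java restatement of the camera-to-target range estimate used for
// "getting in range": project the known height difference through the
// combined camera pitch and observed target pitch.
public class RangeSketch {
    /**
     * Estimate horizontal distance to a target.
     * Heights in meters; angles in radians.
     */
    public static double distanceToTarget(
            double cameraHeightMeters, double targetHeightMeters,
            double cameraPitchRad, double targetPitchRad) {
        return (targetHeightMeters - cameraHeightMeters)
                / Math.tan(cameraPitchRad + targetPitchRad);
    }
}
```

For example, a camera 0.5 m off the ground seeing a 2.0 m target at a combined pitch of 30 degrees is about 2.6 m away.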