Simulating Aiming and Getting in Range

Knowledge and Equipment Needed

Background

Full code may be found in the PhotonLib repository (Java/C++).

The previous examples show how to run PhotonVision on a real robot, with a physical robot drivetrain moving around and interacting with the software.

This example builds on them, adding support for simulating robot motion and feeding that motion into a SimVisionSystem. This allows you to test control algorithms on your development computer, without requiring access to a real robot.

Walkthrough

First, in the main Robot source file, we add code to periodically update a new simulation-specific object. This logic runs only while in simulation:

Java:
    ////////////////////////////////////////////////////
    // Simulation support

    DrivetrainSim dtSim;

    @Override
    public void simulationInit() {
        dtSim = new DrivetrainSim();
    }

    @Override
    public void simulationPeriodic() {
        dtSim.update();
    }
}
C++:
void Robot::SimulationPeriodic() {
    dtSim.update();
}

Then, we implement our new DrivetrainSim class. For background, see the WPILib documentation on physics simulation.
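Inside DrivetrainSim, WPILib's DifferentialDrivetrainSim does the heavy lifting. As a rough, self-contained illustration of the idea (a hypothetical class, with simple unicycle kinematics standing in for WPILib's motor-and-gearbox physics model), a pose-update loop might look like this:

```java
// Hypothetical, self-contained sketch of a drivetrain simulation update.
// A real DrivetrainSim would delegate to WPILib's DifferentialDrivetrainSim;
// here, simple unicycle kinematics stand in for the physics model.
public class DrivetrainSimSketch {
    private double xMeters = 0.0;
    private double yMeters = 0.0;
    private double headingRad = 0.0;

    /** Advance the simulated pose by one timestep. */
    public void update(double linearSpeedMps, double turnRateRadPerSec, double dtSeconds) {
        headingRad += turnRateRadPerSec * dtSeconds;
        xMeters += linearSpeedMps * Math.cos(headingRad) * dtSeconds;
        yMeters += linearSpeedMps * Math.sin(headingRad) * dtSeconds;
    }

    public double getX() { return xMeters; }
    public double getY() { return yMeters; }
    public double getHeading() { return headingRad; }
}
```

The real class additionally reads simulated motor voltages and encoder positions; the essential pattern is the same: integrate the robot's state forward once per simulationPeriodic() call.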

Simulated Vision support is added with the following steps:

Creating the Simulated Vision System

First, we create a new SimVisionSystem to represent our camera and coprocessor running PhotonVision.

Java:
    // Simulated Vision System.
    // Configure these to match your PhotonVision Camera,
    // pipeline, and LED setup.
    double camDiagFOV = 170.0; // degrees - assume wide-angle camera
    double camPitch = Units.radiansToDegrees(Robot.CAMERA_PITCH_RADIANS); // degrees
    double camHeightOffGround = Robot.CAMERA_HEIGHT_METERS; // meters
    double maxLEDRange = 20; // meters
    int camResolutionWidth = 640; // pixels
    int camResolutionHeight = 480; // pixels
    double minTargetArea = 10; // square pixels

    SimVisionSystem simVision =
            new SimVisionSystem(
                    "photonvision",
                    camDiagFOV,
                    camPitch,
                    new Transform2d(),
                    camHeightOffGround,
                    maxLEDRange,
                    camResolutionWidth,
                    camResolutionHeight,
                    minTargetArea);
C++:
    // Simulated Vision System.
    // Configure these to match your PhotonVision Camera,
    // pipeline, and LED setup.
    units::degree_t camDiagFOV = 170.0_deg; // assume wide-angle camera
    units::degree_t camPitch = 15_deg; 
    units::meter_t camHeightOffGround = 24_in;
    units::meter_t maxLEDRange = 20_m; 
    int camResolutionWidth = 640; // pixels
    int camResolutionHeight = 480; // pixels
    double minTargetArea = 10; // square pixels

    photonlib::SimVisionSystem simVision {
                    "photonvision",
                    camDiagFOV,
                    camPitch,
                    frc::Transform2d{},
                    camHeightOffGround,
                    maxLEDRange,
                    camResolutionWidth,
                    camResolutionHeight,
                    minTargetArea};
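The minTargetArea threshold interacts with the camera resolution and field of view: a distant target covers fewer pixels and may fall below the cutoff. The following back-of-the-envelope estimate (our own approximation, not PhotonVision's internal math) shows how apparent target area shrinks with range for the 640x480, 170-degree-diagonal camera configured above:

```java
// Back-of-the-envelope estimate (not PhotonVision's internal math) of how
// large a target appears on screen, to help pick a sensible minTargetArea.
public class TargetAreaEstimate {
    /** Approximate on-screen area, in square pixels, of a w x h meter target at range d. */
    public static double areaSqPx(double wMeters, double hMeters, double dMeters) {
        double diagFovDeg = 170.0;
        int resW = 640, resH = 480;
        double diagPx = Math.hypot(resW, resH); // 800 px for 640x480
        // Split the diagonal FOV across the horizontal and vertical axes.
        double horizFovDeg = diagFovDeg * resW / diagPx;
        double vertFovDeg = diagFovDeg * resH / diagPx;
        // Angular size of the target, then converted to pixels.
        double wDeg = Math.toDegrees(2 * Math.atan(wMeters / (2 * dMeters)));
        double hDeg = Math.toDegrees(2 * Math.atan(hMeters / (2 * dMeters)));
        return (wDeg / horizFovDeg * resW) * (hDeg / vertFovDeg * resH);
    }
}
```

For the roughly 0.88 m x 0.43 m high-goal target, this estimate stays comfortably above the 10 square-pixel cutoff well past 15 meters, so the simulated camera should report the target at any realistic shooting range.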

Next, we create objects to represent the physical location and size of the vision targets we are calibrated to detect. This example models the down-field high goal vision target from the 2020 and 2021 games.

Java:
    // See
    // https://firstfrc.blob.core.windows.net/frc2020/PlayingField/2020FieldDrawing-SeasonSpecific.pdf
    // page 208
    double targetWidth = Units.inchesToMeters(41.30) - Units.inchesToMeters(6.70); // meters
    // See
    // https://firstfrc.blob.core.windows.net/frc2020/PlayingField/2020FieldDrawing-SeasonSpecific.pdf
    // page 197
    double targetHeight = Units.inchesToMeters(98.19) - Units.inchesToMeters(81.19); // meters
    // See https://firstfrc.blob.core.windows.net/frc2020/PlayingField/LayoutandMarkingDiagram.pdf
    // pages 4 and 5
    double tgtXPos = Units.feetToMeters(54);
    double tgtYPos =
            Units.feetToMeters(27.0 / 2) - Units.inchesToMeters(43.75) - Units.inchesToMeters(48.0 / 2.0);
    Pose2d farTargetPose = new Pose2d(new Translation2d(tgtXPos, tgtYPos), new Rotation2d(0.0));
C++:
    // See https://firstfrc.blob.core.windows.net/frc2020/PlayingField/2020FieldDrawing-SeasonSpecific.pdf page 208
    const units::meter_t targetWidth = 41.30_in - 6.70_in; 
    // See https://firstfrc.blob.core.windows.net/frc2020/PlayingField/2020FieldDrawing-SeasonSpecific.pdf page 197
    const units::meter_t targetHeight = 98.19_in - 81.19_in; // meters
    // See https://firstfrc.blob.core.windows.net/frc2020/PlayingField/LayoutandMarkingDiagram.pdf pages 4 and 5
    const units::meter_t tgtXPos = 54_ft;
    const units::meter_t tgtYPos = (27.0_ft / 2) - 43.75_in - (48.0_in / 2.0);
    const frc::Translation2d targetTrans {tgtXPos, tgtYPos};
    const frc::Rotation2d targetRot {0.0_deg};
    frc::Pose2d farTargetPose {targetTrans, targetRot};
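As a sanity check on the geometry arithmetic above, the helpers below (plain stand-ins for WPILib's Units class) reproduce the numbers: the target is roughly 0.879 m wide and 0.432 m tall, and its pose works out to roughly (16.46 m, 2.39 m) on the field:

```java
// Sanity check of the target-geometry arithmetic above. The conversion
// helpers are simple stand-ins for WPILib's Units class.
public class TargetGeometryCheck {
    static double inchesToMeters(double in) { return in * 0.0254; }
    static double feetToMeters(double ft) { return ft * 0.3048; }

    static final double TARGET_WIDTH = inchesToMeters(41.30) - inchesToMeters(6.70);   // ~0.879 m
    static final double TARGET_HEIGHT = inchesToMeters(98.19) - inchesToMeters(81.19); // ~0.432 m
    static final double TGT_X = feetToMeters(54);                                      // ~16.46 m
    static final double TGT_Y = feetToMeters(27.0 / 2)
            - inchesToMeters(43.75) - inchesToMeters(48.0 / 2.0);                      // ~2.39 m
}
```

Note that the half-field-width term must be computed in floating point (27.0 / 2, not 27 / 2); Java's integer division would silently truncate 13.5 ft to 13 ft and shift the target half a foot.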

Finally, we add our target to the simulated vision system.

Java:
        simVision.addSimVisionTarget(
                new SimVisionTarget(farTargetPose, Robot.TARGET_HEIGHT_METERS, targetWidth, targetHeight));
C++:
      simVision.AddSimVisionTarget(photonlib::SimVisionTarget(farTargetPose, 81.91_in, targetWidth, targetHeight));

If you have additional targets you want to detect, you can add them in the same way as the first one.

Updating the Simulated Vision System

Once all the properties of our simulated vision system are defined, little work remains at runtime: simply pass the robot's simulated pose to the vision system periodically.

Java:
        // Update PhotonVision based on our new robot position.
        simVision.processFrame(drivetrainSimulator.getPose());
C++:
    // Update PhotonVision based on our new robot position.
    simVision.ProcessFrame(m_drivetrainSimulator.GetPose());

The rest is done behind the scenes.
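Conceptually, each processFrame call checks every registered target against the camera's current pose: is it within LED range and the field of view, and large enough on screen, and if so, what yaw and range should be reported? A simplified sketch of the underlying range and yaw math (our own illustration, not PhotonVision's actual implementation):

```java
// Simplified illustration (not PhotonVision's actual code) of the quantities a
// simulated vision system computes each frame: range and yaw from robot to target.
public class VisionFrameSketch {
    /** Horizontal distance from the robot to the target, in meters. */
    public static double range(double robotX, double robotY, double tgtX, double tgtY) {
        return Math.hypot(tgtX - robotX, tgtY - robotY);
    }

    /** Yaw to the target relative to the robot's heading, in radians (positive left). */
    public static double yaw(double robotX, double robotY, double robotHeadingRad,
                             double tgtX, double tgtY) {
        double bearing = Math.atan2(tgtY - robotY, tgtX - robotX);
        // Wrap the difference into (-pi, pi].
        double diff = bearing - robotHeadingRad;
        return Math.atan2(Math.sin(diff), Math.cos(diff));
    }
}
```

From these quantities, a simulated PhotonCamera can publish targets exactly as a real coprocessor would, which is why the robot-side aiming code needs no changes to run in simulation.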