Simulating Aiming and Getting in Range
The following example comes from the PhotonLib example repository (Java/C++), where the full source code is available.
Knowledge and Equipment Needed
Everything required in Combining Aiming and Getting in Range.
Background
The previous examples show how to run PhotonVision on a real robot, with a physical robot drivetrain moving around and interacting with the software.
This example builds upon that, adding support for simulating robot motion and feeding that motion into a SimVisionSystem. This allows you to test control algorithms on your development computer, without requiring access to a real robot.
Walkthrough
First, in the main Robot source file, we add support to periodically update a new simulation-specific object. This logic only gets used while running in simulation:
Java:

DrivetrainSim dtSim;

@Override
public void simulationInit() {
    dtSim = new DrivetrainSim();
}

@Override
public void simulationPeriodic() {
    dtSim.update();
}
Then, we add the implementation of our new DrivetrainSim class; see the WPILib documentation on physics simulation for background, and the sketch below for its overall shape.
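As a rough sketch, the class bundles the drivetrain physics model with a Field2d for dashboard visualization. This sketch assumes WPILib's DifferentialDrivetrainSim with its kitbot defaults; the actual example tunes the model to its own drivetrain.

Java:

import edu.wpi.first.wpilibj.simulation.DifferentialDrivetrainSim;
import edu.wpi.first.wpilibj.smartdashboard.Field2d;

class DrivetrainSim {
    // Drivetrain physics model; createKitbotSim supplies reasonable
    // defaults for a kitbot-style chassis (null = no measurement noise).
    DifferentialDrivetrainSim drivetrainSimulator =
        DifferentialDrivetrainSim.createKitbotSim(
            DifferentialDrivetrainSim.KitbotMotor.kDualCIMPerSide,
            DifferentialDrivetrainSim.KitbotGearing.k10p71,
            DifferentialDrivetrainSim.KitbotWheelSize.kSixInch,
            null);

    // Dashboard widget showing the simulated robot on the field.
    Field2d field = new Field2d();

    public void update() {
        // Filled in under "Updating the Simulated Vision System" below.
    }
}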
Simulated Vision support is added with the following steps:
Creating the Simulated Vision System
First, we create a new SimVisionSystem to represent our camera and coprocessor running PhotonVision.
Java:

// Simulated Vision System.
// Configure these to match your PhotonVision Camera,
// pipeline, and LED setup.
double camDiagFOV = 170.0; // degrees - assume wide-angle camera
double camPitch = Units.radiansToDegrees(Robot.CAMERA_PITCH_RADIANS); // degrees
double camHeightOffGround = Robot.CAMERA_HEIGHT_METERS; // meters
double maxLEDRange = 20; // meters
int camResolutionWidth = 640; // pixels
int camResolutionHeight = 480; // pixels
double minTargetArea = 10; // square pixels

SimVisionSystem simVision =
    new SimVisionSystem(
        "photonvision",
        camDiagFOV,
        camPitch,
        new Transform2d(),
        camHeightOffGround,
        maxLEDRange,
        camResolutionWidth,
        camResolutionHeight,
        minTargetArea);
C++:

// Simulated Vision System.
// Configure these to match your PhotonVision Camera,
// pipeline, and LED setup.
units::degree_t camDiagFOV = 170.0_deg; // assume wide-angle camera
units::degree_t camPitch = 15_deg;
units::meter_t camHeightOffGround = 24_in;
units::meter_t maxLEDRange = 20_m;
int camResolutionWidth = 640; // pixels
int camResolutionHeight = 480; // pixels
double minTargetArea = 10; // square pixels

photonlib::SimVisionSystem simVision{
    "photonvision", camDiagFOV, camPitch,
    frc::Transform2d{}, camHeightOffGround, maxLEDRange,
    camResolutionWidth, camResolutionHeight, minTargetArea};
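SimVisionSystem models the camera by its diagonal field of view. If your camera's datasheet lists only horizontal and vertical FOV, a pinhole-lens approximation gives the diagonal value. The helper below is purely illustrative (not part of PhotonLib), and the FOV numbers in it are placeholders.

Java:

import edu.wpi.first.math.util.Units;

double horizFovDeg = 59.6; // placeholder - use your camera's spec
double vertFovDeg = 49.7;  // placeholder - use your camera's spec

// For a pinhole lens: tan(diag/2) = hypot(tan(horiz/2), tan(vert/2)).
double diagFovDeg =
    Units.radiansToDegrees(
        2.0
            * Math.atan(
                Math.hypot(
                    Math.tan(Units.degreesToRadians(horizFovDeg) / 2.0),
                    Math.tan(Units.degreesToRadians(vertFovDeg) / 2.0))));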
Next, we create objects to represent the physical location and size of the vision targets our pipeline is calibrated to detect. This example models the down-field high goal vision target from the 2020 and 2021 games.
Java:

// See
// https://firstfrc.blob.core.windows.net/frc2020/PlayingField/2020FieldDrawing-SeasonSpecific.pdf
// page 208
double targetWidth = Units.inchesToMeters(41.30) - Units.inchesToMeters(6.70); // meters
// See
// https://firstfrc.blob.core.windows.net/frc2020/PlayingField/2020FieldDrawing-SeasonSpecific.pdf
// page 197
double targetHeight = Units.inchesToMeters(98.19) - Units.inchesToMeters(81.19); // meters
// See https://firstfrc.blob.core.windows.net/frc2020/PlayingField/LayoutandMarkingDiagram.pdf
// pages 4 and 5
double tgtXPos = Units.feetToMeters(54);
double tgtYPos =
    Units.feetToMeters(27.0 / 2) - Units.inchesToMeters(43.75) - Units.inchesToMeters(48.0 / 2.0);
Pose2d farTargetPose = new Pose2d(new Translation2d(tgtXPos, tgtYPos), new Rotation2d(0.0));
C++:

// See
// https://firstfrc.blob.core.windows.net/frc2020/PlayingField/2020FieldDrawing-SeasonSpecific.pdf
// page 208
const units::meter_t targetWidth = 41.30_in - 6.70_in;
// See
// https://firstfrc.blob.core.windows.net/frc2020/PlayingField/2020FieldDrawing-SeasonSpecific.pdf
// page 197
const units::meter_t targetHeight = 98.19_in - 81.19_in;
// See
// https://firstfrc.blob.core.windows.net/frc2020/PlayingField/LayoutandMarkingDiagram.pdf
// pages 4 and 5
const units::meter_t tgtXPos = 54_ft;
const units::meter_t tgtYPos = (27.0_ft / 2) - 43.75_in - (48.0_in / 2.0);
const frc::Translation2d targetTrans{tgtXPos, tgtYPos};
const frc::Rotation2d targetRot{0.0_deg};
frc::Pose2d farTargetPose{targetTrans, targetRot};
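As a sanity check on the arithmetic: targetWidth works out to 41.30 − 6.70 = 34.6 in ≈ 0.88 m, targetHeight to 98.19 − 81.19 = 17 in ≈ 0.43 m, and tgtYPos to 13.5 ft − 43.75 in − 24 in ≈ 2.39 m from the field centerline.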
Finally, we add our target to the simulated vision system.
Java:

simVision.addSimVisionTarget(
    new SimVisionTarget(farTargetPose, Robot.TARGET_HEIGHT_METERS, targetWidth, targetHeight));
C++:

DrivetrainSim() {
    simVision.AddSimVisionTarget(photonlib::SimVisionTarget(
        farTargetPose, 81.91_in, targetWidth, targetHeight));
    frc::SmartDashboard::PutData("Field", &field);
}
If you have additional targets you want to detect, you can add them in the same way as the first one.
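For instance, a second, hypothetical target could be registered the same way. Everything in the snippet below (the pose, the height, and the reuse of targetWidth/targetHeight) is a placeholder for your own field measurements.

Java:

// Hypothetical second target; substitute your own field measurements.
Pose2d nearTargetPose =
    new Pose2d(new Translation2d(Units.feetToMeters(10), 0.0), new Rotation2d(Math.PI));
simVision.addSimVisionTarget(
    new SimVisionTarget(
        nearTargetPose, Units.inchesToMeters(45.0), targetWidth, targetHeight));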
Updating the Simulated Vision System
Once all the properties of our simulated vision system are defined, very little work remains at runtime: simply pass the robot's current pose to the simulated vision system each loop.
Java:

public void update() {
    double leftMotorCmd = 0;
    double rightMotorCmd = 0;

    if (DriverStation.isEnabled() && !RobotController.isBrownedOut()) {
        leftMotorCmd = leftLeader.getSpeed();
        rightMotorCmd = rightLeader.getSpeed();
    }

    drivetrainSimulator.setInputs(
        leftMotorCmd * RobotController.getInputVoltage(),
        -rightMotorCmd * RobotController.getInputVoltage());
    drivetrainSimulator.update(0.02);

    // Update PhotonVision based on our new robot position.
    simVision.processFrame(drivetrainSimulator.getPose());

    field.setRobotPose(drivetrainSimulator.getPose());
}
C++:

void DrivetrainSim::update() {
    double leftMotorCmd = 0;
    double rightMotorCmd = 0;

    if (frc::DriverStation::IsEnabled() &&
        !frc::RobotController::IsBrownedOut()) {
        leftMotorCmd = leftLeader.GetSpeed();
        rightMotorCmd = rightLeader.GetSpeed();
    }

    m_drivetrainSimulator.SetInputs(
        units::volt_t(leftMotorCmd * frc::RobotController::GetInputVoltage()),
        units::volt_t(-rightMotorCmd * frc::RobotController::GetInputVoltage()));
    m_drivetrainSimulator.Update(20_ms);

    // Update PhotonVision based on our new robot position.
    simVision.ProcessFrame(m_drivetrainSimulator.GetPose());

    field.SetRobotPose(m_drivetrainSimulator.GetPose());
}
The rest is done behind the scenes.
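In particular, the robot-side vision code does not change: a PhotonCamera constructed with the same name ("photonvision") receives the simulated targets over NetworkTables, so the aiming and ranging logic from the previous examples runs unmodified. A minimal sketch of the consumer side, under that assumption:

Java:

import org.photonvision.PhotonCamera;
import org.photonvision.targeting.PhotonPipelineResult;

PhotonCamera camera = new PhotonCamera("photonvision");

// Same consumer code as on the real robot; in simulation the results
// come from SimVisionSystem.processFrame() instead of real hardware.
PhotonPipelineResult result = camera.getLatestResult();
if (result.hasTargets()) {
    double yawDegrees = result.getBestTarget().getYaw();
    // Feed yawDegrees into the turn controller, as in earlier examples.
}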