Simulation Support in PhotonLib
What Is Supported?
PhotonLib supports simulation of a PhotonVision camera and processor moving about a field on a robot.
You can use this to help validate your robot code’s behavior in simulation without special wrappers or additional hardware.
Simulation Vision World Model
Sim-specific classes are provided to model sending one frame of a camera image through PhotonVision. Based on what targets are visible, results are published to NetworkTables.
While processing, the given robot Pose2d is used to determine which targets should be in view, and where they would appear in the camera image.
Targets are considered in view if all of the following hold (a rough sketch of this check follows the list):
- The target's centroid is within the field of view of the camera.
- The camera is not in driver mode.
- The target's in-image pixel size is greater than minTargetArea.
- The distance from the camera to the target is less than maxLEDRange.
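For intuition, the check behaves roughly like the sketch below. This is illustrative only (PhotonLib performs the real test internally); it uses the camDiagFOV, maxLEDRange, and minTargetArea configuration values introduced in the next section, and the bearing, distance, and area parameters are hypothetical inputs.

// Rough sketch of the visibility test. Illustrative only; PhotonLib
// performs the real check internally.
boolean isTargetVisible(
    double bearingDegrees,  // angle between the camera axis and the target centroid
    double distanceMeters,  // camera-to-target distance
    double areaPixels,      // target's apparent size in the image
    boolean inDriverMode) {
  boolean inFov = Math.abs(bearingDegrees) < camDiagFOV / 2.0; // centroid inside FOV
  boolean inRange = distanceMeters < maxLEDRange;              // LEDs can illuminate it
  boolean bigEnough = areaPixels > minTargetArea;              // resolvable in the image
  return !inDriverMode && inFov && inRange && bigEnough;
}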
Warning
Not all NetworkTables objects are updated in simulation, but the interaction through PhotonLib remains the same. Actual camera images are not simulated, and processing latency is not yet modeled.
Simulated Vision System
A SimVisionSystem represents the camera, coprocessor, and PhotonVision software moving around on the field.
It requires several pieces of configuration to accurately simulate your physical setup. Match them to your configuration in PhotonVision and to your robot's physical dimensions.
// Simulated Vision System.
// Configure these to match your PhotonVision Camera,
// pipeline, and LED setup.
double camDiagFOV = 170.0; // degrees - assume wide-angle camera
double camPitch = Robot.CAMERA_PITCH_RADIANS; // radians
double camHeightOffGround = Robot.CAMERA_HEIGHT_METERS; // meters
double maxLEDRange = 20; // meters
int camResolutionWidth = 640; // pixels
int camResolutionHeight = 480; // pixels
double minTargetArea = 10; // square pixels

SimVisionSystem simVision =
    new SimVisionSystem(
        "photonvision",
        camDiagFOV,
        new Transform3d(
            new Translation3d(0, 0, camHeightOffGround), new Rotation3d(0, camPitch, 0)),
        maxLEDRange,
        camResolutionWidth,
        camResolutionHeight,
        minTargetArea);
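If your camera's datasheet lists only a horizontal field of view, the diagonal value can be estimated from the resolution's aspect ratio. A minimal sketch, assuming a simple pinhole camera model (horizFOVDeg is a hypothetical datasheet value):

// Estimate diagonal FOV from a horizontal FOV, assuming a pinhole camera model.
double horizFOVDeg = 100.0; // hypothetical: your camera's horizontal FOV
double diagPerWidth = Math.hypot(camResolutionWidth, camResolutionHeight) / camResolutionWidth;
double diagFOVDeg =
    Math.toDegrees(2.0 * Math.atan(Math.tan(Math.toRadians(horizFOVDeg) / 2.0) * diagPerWidth));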
After declaring the system, create and add one SimVisionTarget per target on the field you are attempting to detect.
// See
// https://firstfrc.blob.core.windows.net/frc2020/PlayingField/2020FieldDrawing-SeasonSpecific.pdf
// page 208
double targetWidth = Units.inchesToMeters(41.30) - Units.inchesToMeters(6.70); // meters
// See
// https://firstfrc.blob.core.windows.net/frc2020/PlayingField/2020FieldDrawing-SeasonSpecific.pdf
// page 197
double targetHeight = Units.inchesToMeters(98.19) - Units.inchesToMeters(81.19); // meters
// See https://firstfrc.blob.core.windows.net/frc2020/PlayingField/LayoutandMarkingDiagram.pdf
// pages 4 and 5
double tgtXPos = Units.feetToMeters(54);
double tgtYPos =
    // 27.0 avoids Java integer division (27 / 2 would truncate to 13 feet).
    Units.feetToMeters(27.0 / 2) - Units.inchesToMeters(43.75) - Units.inchesToMeters(48.0 / 2.0);
Pose3d farTargetPose =
new Pose3d(
new Translation3d(tgtXPos, tgtYPos, Robot.TARGET_HEIGHT_METERS),
new Rotation3d(0.0, 0.0, 0.0));
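Then register the target with the system. A sketch, assuming a SimVisionTarget constructor that takes the target's pose, width, height, and an ID (the ID value here is arbitrary); check the exact signature against your PhotonLib version:

// Register the target so processFrame() can consider it.
// Assumed signature: SimVisionTarget(Pose3d pose, double widthMeters, double heightMeters, int id)
simVision.addSimVisionTarget(new SimVisionTarget(farTargetPose, targetWidth, targetHeight, 42));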
Finally, while running the simulation, process simulated camera frames by providing the robot’s pose to the system.
// Update PhotonVision based on our new robot position.
simVision.processFrame(drivetrainSimulator.getPose());
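In a typical project this call lives in simulationPeriodic(), after the drivetrain simulation has been stepped. A minimal sketch, assuming drivetrainSimulator is a WPILib DifferentialDrivetrainSim:

@Override
public void simulationPeriodic() {
  // Step the drivetrain physics forward by one 20 ms loop period...
  drivetrainSimulator.update(0.020);
  // ...then analyze target visibility from the resulting robot pose.
  simVision.processFrame(drivetrainSimulator.getPose());
}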
This will cause most NetworkTables fields to update properly, representing any targets that are in view of the robot. Robot software which uses PhotonLib to interact with a camera running PhotonVision should behave the same as if a real camera were connected and active.
Raw-Data Approach
Advanced users may wish to directly provide target information based on an existing, more detailed simulation. A SimPhotonCamera can be created for this purpose. It provides an interface where the user can supply target data via a list of PhotonTrackedTarget objects.
@Override
public void simulationInit() {
  // ...
  cam = new SimPhotonCamera("MyCamera");
  // ...
}

@Override
public void simulationPeriodic() {
  // ...
  ArrayList<PhotonTrackedTarget> visibleTgtList = new ArrayList<PhotonTrackedTarget>();
  // Repeat the add() call below for each target that you see
  visibleTgtList.add(
      new PhotonTrackedTarget(yawDegrees, pitchDegrees, area, skew, camToTargetTrans));
  cam.submitProcessedFrame(0.0, visibleTgtList);
  // ...
}
Note that while this approach requires less code and configuration to get basic data into the simulation, it also requires the user to implement much more of their own code to calculate the relative positions of the robot and target, as illustrated in the sketch below. If you already have this logic, the raw interface may be helpful. If you don't, you'll likely want to look at the Simulated Vision System first.
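For illustration, such a calculation could look like the following. This is a sketch only: it assumes the camera sits at the robot's origin, reuses farTargetPose and the Robot constants from above, and its sign conventions may need flipping to match PhotonVision's coordinate conventions.

// Sketch: derive yaw, pitch, and distance to a known target from the
// simulated robot pose. Assumes the camera is at the robot origin.
Pose2d robotPose = drivetrainSimulator.getPose();
Translation2d robotToTarget =
    farTargetPose.toPose2d().getTranslation().minus(robotPose.getTranslation());
double distanceMeters = robotToTarget.getNorm();
double yawDegrees =
    new Rotation2d(robotToTarget.getX(), robotToTarget.getY())
        .minus(robotPose.getRotation())
        .getDegrees();
double pitchDegrees =
    Math.toDegrees(
            Math.atan2(Robot.TARGET_HEIGHT_METERS - Robot.CAMERA_HEIGHT_METERS, distanceMeters))
        - Math.toDegrees(Robot.CAMERA_PITCH_RADIANS);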