Using Limelight 2+ for Vision Processing
So, you came here for long-distance targeting. I know what you are thinking: “Oh my robot! Using vision to scan a long-distance target is gonna be so hard!” Not to worry. If you have a Limelight, vision should be a piece of cake. If not, buy one! The Limelight 2+ is a smart camera purpose-built for FIRST Robotics. Limelight is easy enough for teams with no vision experience or expert mentors, and powerful enough for experienced teams who need a reliable, competition-ready vision solution.
Not convinced? Look at these stats:
Tracking Speed: 90fps
Tracking Resolution: 320 x 240 pixels
Field-of-View: 59.6 x 49.7 degrees
Dimensions: 3.819 x 2.194 x 0.984 in (97.005 x 53.730 x 25.00 mm)
Weight: .25 lbs
Tracking Interfaces: Network Tables
Total Latency (Photons -> Robot): 21-25 milliseconds
-Pipeline Latency: 3-7 milliseconds
-NetworkTables -> Robot latency: 0.3 milliseconds
-(NT limits bypassed to instantly submit targeting data.)
Luminous Flux: 400 lumens
-60% more light than the standard dual-ring setup
-Illuminance is increased by gloss-finish LED cones
Constant-brightness LEDs down to 7 volts
Ok, so now you know what's in this thing. Let’s talk about how to use it. Remember, you need Windows to run most of the software involved. Do NOT use Windows 7; use Windows 8, 8.1, or 10.
Mounting
Use four 1 1/2” 10-32 screws and nylock nuts to mount your Limelight. Discuss with your CAD team how you will mount it.
Wiring
1. Do not run wires to your VRM.
2. Run two wires from your limelight to a slot on your PDP (NOT your VRM).
3. Add any breaker (5A, 10A, 20A, etc.) to the same slot on your PDP.
4. Run an ethernet cable from your Limelight to your robot radio.
Imaging
1. Remove power from your limelight.
2. Download the Limelight Finder Tool.
3. Install the Balena Etcher flash tool.
4. Run a USB-to-micro USB cable from your laptop to your limelight.
5. Run “Balena Etcher”.
6. It may take up to 20 seconds for your machine to recognize the camera.
7. Select the latest .zip image in your downloads folder.
a. You can download the latest critical update from https://limelightvision.io/pages/downloads
8. Select a “Compute Module” device in the “Drives” menu
9. Click “Flash”
10. Once flashing is complete, remove power from your limelight
Networking
It is best to use a static IP address for matches during the competition.
1. Download Bonjour (Only download it ONCE)
2. Reboot your robot and computer.
3. Power-up your robot, and connect your laptop to your robot’s network.
4. After your Limelight flashes its LED array, open the Limelight Finder Tool and search for your Limelight or navigate to http://limelight.local:5801. This is the configuration panel.
5. Navigate to the “Settings” tab on the left side of the interface.
6. Enter your team number and press the “Update Team Number” button.
7. Change your “IP Assignment” to “Static”.
8. Set your Limelight’s IP address to “10.TE.AM.11”.
9. Set the Netmask to “255.255.255.0”.
10. Set the Gateway to “10.TE.AM.1”.
11. Click the “Update” button.
12. Power-cycle your robot.
13. You can now access your config panel and camera stream by navigating to http://10.TE.AM.11:5801 in your web browser.
Note: the settings panel also has a button to reset the IP address.
With some programming, you can read data from the Limelight. Limelight posts targeting data to NetworkTables. NetworkTables is an implementation of a distributed “dictionary”: named values are created on the robot, the driver station, or an attached coprocessor, and the values are automatically distributed to all of the other participants.
You can learn more about the Network Tables API here: https://docs.wpilib.org/en/latest/docs/software/networktables/networktables-intro.html
For now, here is some important data that you really need:
tv: Whether the limelight has any valid targets (0 or 1)
tx: Horizontal Offset From Crosshair To Target (-27 degrees to 27 degrees)
ty: Vertical Offset From Crosshair To Target (-20.5 degrees to 20.5 degrees)
ta: Target Area (0% of image to 100% of image)
Here is how you declare and instantiate your data variables from NetworkTables (note that getDouble() is called on the entry when you read it, not in the declaration):
NetworkTable table = NetworkTableInstance.getDefault().getTable("limelight");
NetworkTableEntry tx = table.getEntry("tx");
NetworkTableEntry ty = table.getEntry("ty");
NetworkTableEntry ta = table.getEntry("ta");
To post the data to the dashboard, just do the following:
double x = tx.getDouble(0.0);
double y = ty.getDouble(0.0);
double area = ta.getDouble(0.0);
SmartDashboard.putNumber("LimelightX", x);
SmartDashboard.putNumber("LimelightY", y);
SmartDashboard.putNumber("LimelightArea", area);
Great! You can finally track the target/object! Now all that's left is fine-tuning. There are five different tuning pages on the config panel. I won’t go in-depth, but I will summarize what each one does:
Input: hosts controls to change the raw camera image before it is passed through the processing pipeline.
Thresholding: the act of taking an image, and throwing away any pixels that aren’t in a specific color range. (Very critical)
Contour Filtering: After thresholding, Limelight generates a list of contours. After that, each contour is wrapped in a bounding rectangle, an unrotated rectangle, and a “convex hull”. These are passed through a series of filters to determine the “best” contour.
Output: controls what happens during the last stage of the vision pipeline.
3D: PnP point-based pose estimation.
Limelight’s crosshairs turn calibration into a seamless process. Rather than storing offsets in code, teams can line up their robots perfectly by hand (or by joystick) and click the “calibrate” button.
Calibrating a crosshair moves the “zero” of your targeting data. This is very useful if your Limelight isn’t perfectly centered on your robot.
Single Crosshair Mode
Line up your robot at its ideal scoring location and rotation, and click “calibrate”. Now a tx and ty of zero equate to a perfectly aligned robot. If your robot needs to be recalibrated for a new field, simply take a practice match to find the perfect alignment, and click “calibrate” during that practice match.
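To make this concrete, here is a minimal sketch of how a team might turn the calibrated tx value into a steering command for aiming. The gain, deadband, and minimum command below are hypothetical tuning constants, not values Limelight prescribes; you would read tx from NetworkTables as shown earlier and tune these on your own drivetrain.

```java
// Sketch: proportional aiming from the calibrated tx offset.
// KP and MIN_COMMAND are illustrative values only -- tune on your robot.
class AimHelper {
    static final double KP = 0.03;          // proportional gain
    static final double MIN_COMMAND = 0.05; // overcomes drivetrain friction
    static final double DEADBAND = 1.0;     // degrees of acceptable error

    // Returns a steering adjustment; positive turns toward a target
    // that sits to the right of the crosshair (positive tx).
    static double steeringAdjust(double txDegrees) {
        double adjust = KP * txDegrees;
        if (txDegrees > DEADBAND) {
            adjust += MIN_COMMAND;
        } else if (txDegrees < -DEADBAND) {
            adjust -= MIN_COMMAND;
        }
        return adjust;
    }

    public static void main(String[] args) {
        System.out.println(steeringAdjust(10.0)); // target to the right
        System.out.println(steeringAdjust(0.0));  // aligned, no correction
    }
}
```

Inside the deadband the correction drops to the raw proportional term, so the robot settles instead of oscillating around zero.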
Double Crosshair Mode
Imagine a robot with an off-axis camera or shooter that needs to shoot game objects into a goal from many positions on the field. As the robot approaches the goal, its crosshair must adjust in real-time to compensate. Dual-crosshair mode is built for this. Line up your robot at its closest scoring position and rotation, and calibrate crosshair “A”. Line up your robot at its farthest scoring position and rotation, and calibrate crosshair “B”. When you calibrate in dual-crosshair mode, crosshairs also store an area value. As your robot moves between its minimum and maximum scoring distances, the crosshair moves between crosshair “A” and crosshair “B”. This is done by checking the current area of the target and comparing it to the two target areas seen during calibration.
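To illustrate the idea, here is a hedged sketch of the kind of area-keyed interpolation described above. Limelight does this internally; the function name and every number below are purely illustrative, not Limelight's actual implementation.

```java
// Sketch: blend a crosshair coordinate between calibration A and B
// based on where the current target area falls between the two
// calibrated areas. Illustrative only -- Limelight handles this itself.
class CrosshairBlend {
    static double blend(double area, double areaA, double crossA,
                        double areaB, double crossB) {
        double t = (area - areaA) / (areaB - areaA);
        // Clamp so we never extrapolate past either calibrated crosshair.
        t = Math.max(0.0, Math.min(1.0, t));
        return crossA + t * (crossB - crossA);
    }

    public static void main(String[] args) {
        // Hypothetical calibration: crosshair A at 4% area (x = 2.0 deg),
        // crosshair B at 1% area (x = -3.0 deg).
        System.out.println(blend(2.5, 4.0, 2.0, 1.0, -3.0)); // halfway between
    }
}
```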
Using Multiple Pipelines
Limelight can store up to ten unique vision pipelines for different goals, different fields, or different robots. Change pipelines mid-match by changing the “pipeline” value in NetworkTables.
To edit multiple pipelines, first check the “Ignore NetworkTables Index” checkbox in the web interface. This temporarily lets you change the pipeline index through the web interface rather than through NetworkTables.
To download your pipelines for backups and sharing, simply click the “download” button next to your pipeline’s name. To upload a pipeline, click the “upload” button.
The End (or is it?)
Overall, if you went through this entire blog and finished the process, you are set. But setting up vision is just the start. The real question is: how do we use it? Maybe you can use it to estimate the distance from the robot to a target. You could use it to aim your shooter mechanism. You could use it to get your robot in range of a certain target. Perhaps you could build a GRIP pipeline. Or you could just have the Limelight there to make your robot look cooler :)
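For example, a common use of ty is estimating distance to a target of known height with a fixed-mount camera, using d = (h_target − h_camera) / tan(θ_mount + ty). A minimal sketch of that trigonometry follows; every height and angle here is a hypothetical value you would measure on your own robot and field element.

```java
// Sketch: estimate horizontal distance to a target of known height
// from the vertical offset ty. All constants are examples only.
class DistanceEstimator {
    static double distanceMeters(double cameraHeight, double targetHeight,
                                 double mountAngleDeg, double tyDeg) {
        double angleRad = Math.toRadians(mountAngleDeg + tyDeg);
        return (targetHeight - cameraHeight) / Math.tan(angleRad);
    }

    public static void main(String[] args) {
        // Camera lens 0.5 m off the floor, tilted up 30 degrees;
        // target center 2.0 m up; Limelight reports ty = 15 degrees.
        System.out.println(distanceMeters(0.5, 2.0, 30.0, 15.0)); // ~1.5 m
    }
}
```

Note this only works when the camera angle is fixed and the target height is known; for a camera on a moving turret or hood you would have to account for that angle too.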
If you are interested, here are some case studies: