diff --git a/docs/source/conf.py b/docs/source/conf.py index ee4054b710..b7b45f42fc 100644 --- a/docs/source/conf.py +++ b/docs/source/conf.py @@ -120,6 +120,18 @@ def setup(app): "color-api-overall": "#101010", "color-inline-code-background": "#0d0d0d", }, + "footer_icons": [ + { + "name": "GitHub", + "url": "https://github.com/photonvision/photonvision", + "html": """ + + + + """, + "class": "", + }, + ], } suppress_warnings = ["epub.unknown_project_files"] diff --git a/docs/source/docs/additional-resources/config.md b/docs/source/docs/additional-resources/config.md index 76fdfee9cb..e29fb1e87c 100644 --- a/docs/source/docs/additional-resources/config.md +++ b/docs/source/docs/additional-resources/config.md @@ -1,6 +1,6 @@ # Filesystem Directory -PhotonVision stores and loads settings in the {code}`photonvision_config` directory, in the same folder as the PhotonVision JAR is stored. On the Pi image as well as the Gloworm, this is in the {code}`/opt/photonvision` directory. The contents of this directory can be exported as a zip archive from the settings page of the interface, under "export settings". This export will contain everything detailed below. These settings can later be uploaded using "import settings", to restore configurations from previous backups. +PhotonVision stores and loads settings in the {code}`photonvision_config` directory, in the same folder as the PhotonVision JAR is stored. On supported hardware, this is in the {code}`/opt/photonvision` directory. The contents of this directory can be exported as a zip archive from the settings page of the interface, under "export settings". This export will contain everything detailed below. These settings can later be uploaded using "import settings", to restore configurations from previous backups. ## Directory Structure @@ -12,20 +12,20 @@ The directory structure is outlined below. 
``` - calibImgs -  - Images saved from the last run of the calibration routine +  - Images saved from the last run of the calibration routine - cameras -  - Contains a subfolder for each camera. This folder contains the following files: -  - pipelines folder, which contains a {code}`json` file for each user-created pipeline. -  - config.json, which contains all camera-specific configuration. This includes FOV, pitch, current pipeline index, and calibration data -  - drivermode.json, which contains settings for the driver mode pipeline +  - Contains a subfolder for each camera. This folder contains the following files: +  - pipelines folder, which contains a {code}`json` file for each user-created pipeline. +  - config.json, which contains all camera-specific configuration. This includes FOV, pitch, current pipeline index, and calibration data +  - drivermode.json, which contains settings for the driver mode pipeline - imgSaves -  - Contains images saved with the input/output save commands. +  - Contains images saved with the input/output save commands. - logs -  - Contains timestamped logs in the format {code}`photonvision-YYYY-MM-D_HH-MM-SS.log`. Note that on Pi or Gloworm these timestamps will likely be significantly behind the real time. +  - Contains timestamped logs in the format {code}`photonvision-YYYY-MM-D_HH-MM-SS.log`. These timestamps will likely be significantly behind the real time, as coprocessors on the robot have no way to get the current time. - hardwareSettings.json -  - Contains hardware settings. Currently this includes only the LED brightness. +  - Contains hardware settings. Currently this includes only the LED brightness. - networkSettings.json -  - Contains network settings, including team number (or remote network tables address), static/dynamic settings, and hostname. +  - Contains network settings, including team number (or remote network tables address), static/dynamic settings, and hostname. 
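For teams that prefer the command line over the settings page, the same directory can also be snapshotted by hand. This is a hedged sketch, assuming the default {code}`/opt/photonvision/photonvision_config` location on supported images; note that only the zip produced by "export settings" is guaranteed to be re-importable through the UI, so treat this as a raw backup.

```shell
# Hedged sketch: raw snapshot of the PhotonVision settings directory.
# CONFIG_DIR is an assumption -- /opt/photonvision/photonvision_config on supported images.
CONFIG_DIR="${CONFIG_DIR:-/opt/photonvision/photonvision_config}"
if [ -d "$CONFIG_DIR" ]; then
  BACKUP="photonvision_config_$(date +%F).tar.gz"
  # Archive the whole directory (pipelines, camera configs, logs, network settings).
  tar -C "$(dirname "$CONFIG_DIR")" -czf "$BACKUP" "$(basename "$CONFIG_DIR")"
  echo "Snapshot written to $BACKUP"
fi
```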
## Importing and Exporting Settings @@ -41,10 +41,10 @@ The entire settings directory can be exported as a ZIP archive from the settings A variety of files can be imported back into PhotonVision: - ZIP Archive ({code}`.zip`) - - Useful for restoring a full configuration from a different PhotonVision instance. + - Useful for restoring a full configuration from a different PhotonVision instance. - Single Config File - - Currently-supported Files - - {code}`hardwareConfig.json` - - {code}`hardwareSettings.json` - - {code}`networkSettings.json` - - Useful for simple hardware or network configuration tasks without overwriting all settings. + - Currently-supported Files + - {code}`hardwareConfig.json` + - {code}`hardwareSettings.json` + - {code}`networkSettings.json` + - Useful for simple hardware or network configuration tasks without overwriting all settings. diff --git a/docs/source/docs/installation/images/gh_actions_1.png b/docs/source/docs/advanced-installation/images/gh_actions_1.png similarity index 100% rename from docs/source/docs/installation/images/gh_actions_1.png rename to docs/source/docs/advanced-installation/images/gh_actions_1.png diff --git a/docs/source/docs/installation/images/gh_actions_2.png b/docs/source/docs/advanced-installation/images/gh_actions_2.png similarity index 100% rename from docs/source/docs/installation/images/gh_actions_2.png rename to docs/source/docs/advanced-installation/images/gh_actions_2.png diff --git a/docs/source/docs/installation/images/gh_actions_3.png b/docs/source/docs/advanced-installation/images/gh_actions_3.png similarity index 100% rename from docs/source/docs/installation/images/gh_actions_3.png rename to docs/source/docs/advanced-installation/images/gh_actions_3.png diff --git a/docs/source/docs/installation/index.md b/docs/source/docs/advanced-installation/index.md similarity index 55% rename from docs/source/docs/installation/index.md rename to docs/source/docs/advanced-installation/index.md index 
fe42022db1..377414f66f 100644 --- a/docs/source/docs/installation/index.md +++ b/docs/source/docs/advanced-installation/index.md @@ -1,6 +1,6 @@ -# Installation & Setup +# Advanced Installation -This page will help you install PhotonVision on your coprocessor, wire it, and properly setup the networking in order to start tracking targets. +This page will help you install PhotonVision on non-supported coprocessors. ## Step 1: Software Install @@ -14,25 +14,5 @@ You only need to install PhotonVision on the coprocessor/device that is being us :maxdepth: 3 sw_install/index -updating -``` - -## Step 2: Wiring - -This section will walk you through how to wire your coprocessor to get power. - -```{toctree} -:maxdepth: 1 - -wiring -``` - -## Step 3: Networking - -This section will walk you though how to connect your coprocessor to a network. This section is very important (and easy to get wrong), so we recommend you read it thoroughly. - -```{toctree} -:maxdepth: 1 - -networking +prerelease-software ``` diff --git a/docs/source/docs/advanced-installation/prerelease-software.md b/docs/source/docs/advanced-installation/prerelease-software.md new file mode 100644 index 0000000000..1122ff9682 --- /dev/null +++ b/docs/source/docs/advanced-installation/prerelease-software.md @@ -0,0 +1,23 @@ +# Installing Pre-Release Versions + +Pre-release/development versions of PhotonVision can be tested by downloading and installing artifacts from GitHub Actions (see below), which are built automatically on commits to open pull requests and to PhotonVision's `master` branch, or by {ref}`compiling PhotonVision locally `. + +:::{warning} +If testing a pre-release version of PhotonVision with a robot, PhotonLib must be updated to match the version downloaded! If not, packet schema definitions may not match and unexpected things will occur. To update PhotonLib, refer to {ref}`installing specific version of PhotonLib`. 
+::: + +GitHub Actions builds pre-release versions of PhotonVision automatically on PRs and on each commit merged to master. To test a particular commit to master, navigate to the [PhotonVision commit list](https://github.com/PhotonVision/photonvision/commits/master/) and click on the check mark (below). Scroll to "Build / Build fat JAR - PLATFORM", click details, and then summary. From here, JAR and image files can be downloaded to be flashed or uploaded using "Offline Update". + +```{image} images/gh_actions_1.png +:alt: GitHub Actions Badge +``` + +```{image} images/gh_actions_2.png +:alt: GitHub Actions artifact list +``` + +Built JAR files (but not image files) can also be downloaded from PRs before they are merged. Navigate to the PR in GitHub, and select Checks at the top. Click on "Build" to display the same artifact list as above. + +```{image} images/gh_actions_3.png +:alt: GitHub Actions artifacts from PR +``` diff --git a/docs/source/docs/installation/sw_install/advanced-cmd.md b/docs/source/docs/advanced-installation/sw_install/advanced-cmd.md similarity index 100% rename from docs/source/docs/installation/sw_install/advanced-cmd.md rename to docs/source/docs/advanced-installation/sw_install/advanced-cmd.md diff --git a/docs/source/docs/installation/sw_install/files/Limelight2+/hardwareConfig.json b/docs/source/docs/advanced-installation/sw_install/files/Limelight2+/hardwareConfig.json similarity index 100% rename from docs/source/docs/installation/sw_install/files/Limelight2+/hardwareConfig.json rename to docs/source/docs/advanced-installation/sw_install/files/Limelight2+/hardwareConfig.json diff --git a/docs/source/docs/installation/sw_install/files/Limelight2/hardwareConfig.json b/docs/source/docs/advanced-installation/sw_install/files/Limelight2/hardwareConfig.json similarity index 100% rename from docs/source/docs/installation/sw_install/files/Limelight2/hardwareConfig.json rename to 
docs/source/docs/advanced-installation/sw_install/files/Limelight2/hardwareConfig.json diff --git a/docs/source/docs/installation/sw_install/images/angryIP.png b/docs/source/docs/advanced-installation/sw_install/images/angryIP.png similarity index 100% rename from docs/source/docs/installation/sw_install/images/angryIP.png rename to docs/source/docs/advanced-installation/sw_install/images/angryIP.png diff --git a/docs/source/docs/installation/sw_install/images/nano.png b/docs/source/docs/advanced-installation/sw_install/images/nano.png similarity index 100% rename from docs/source/docs/installation/sw_install/images/nano.png rename to docs/source/docs/advanced-installation/sw_install/images/nano.png diff --git a/docs/source/docs/installation/sw_install/index.md b/docs/source/docs/advanced-installation/sw_install/index.md similarity index 63% rename from docs/source/docs/installation/sw_install/index.md rename to docs/source/docs/advanced-installation/sw_install/index.md index 86ad5d88b3..cc15c6f9b8 100644 --- a/docs/source/docs/installation/sw_install/index.md +++ b/docs/source/docs/advanced-installation/sw_install/index.md @@ -1,16 +1,5 @@ # Software Installation -## Supported Coprocessors - -```{toctree} -:maxdepth: 1 - -raspberry-pi -limelight -orange-pi -snakeyes -``` - ## Desktop Environments ```{toctree} @@ -29,5 +18,4 @@ mac-os other-coprocessors advanced-cmd romi -gloworm ``` diff --git a/docs/source/docs/installation/sw_install/linux-pc.md b/docs/source/docs/advanced-installation/sw_install/linux-pc.md similarity index 100% rename from docs/source/docs/installation/sw_install/linux-pc.md rename to docs/source/docs/advanced-installation/sw_install/linux-pc.md diff --git a/docs/source/docs/installation/sw_install/mac-os.md b/docs/source/docs/advanced-installation/sw_install/mac-os.md similarity index 100% rename from docs/source/docs/installation/sw_install/mac-os.md rename to docs/source/docs/advanced-installation/sw_install/mac-os.md diff --git 
a/docs/source/docs/installation/sw_install/other-coprocessors.md b/docs/source/docs/advanced-installation/sw_install/other-coprocessors.md similarity index 88% rename from docs/source/docs/installation/sw_install/other-coprocessors.md rename to docs/source/docs/advanced-installation/sw_install/other-coprocessors.md index c2e0c7ff22..6afb65f67b 100644 --- a/docs/source/docs/installation/sw_install/other-coprocessors.md +++ b/docs/source/docs/advanced-installation/sw_install/other-coprocessors.md @@ -23,13 +23,13 @@ $ sudo reboot now Your co-processor will require an Internet connection for this process to work correctly. ::: -For installation on any other co-processors, we recommend reading the {ref}`advanced command line documentation `. +For installation on any other co-processors, we recommend reading the {ref}`advanced command line documentation `. ## Updating PhotonVision PhotonVision can be updated by downloading the latest jar file, copying it onto the processor, and restarting the service. -For example, from another computer, run the following commands. Substitute the correct username for "\[user\]" (e.g. Raspberry Pi uses "pi", Orange Pi uses "orangepi".) +For example, from another computer, run the following commands. Substitute the correct username for "\[user\]" (provided images use the username "pi".) ```bash $ scp [jar name].jar [user]@photonvision.local:~/ diff --git a/docs/source/docs/advanced-installation/sw_install/romi.md b/docs/source/docs/advanced-installation/sw_install/romi.md new file mode 100644 index 0000000000..d40a518685 --- /dev/null +++ b/docs/source/docs/advanced-installation/sw_install/romi.md @@ -0,0 +1,23 @@ +# Romi Installation + +The [Romi](https://docs.wpilib.org/en/latest/docs/romi-robot/index.html) is a small robot that can be controlled with the WPILib software. The main controller is a Raspberry Pi that must be imaged with [WPILibPi](https://docs.wpilib.org/en/latest/docs/romi-robot/imaging-romi.html). 
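The nano-based edit described in the Installation steps can also be made non-interactively over SSH. This is only a hedged sketch: it assumes the stock WPILibPi {code}`/home/pi/runCamera` script, whose camera-service line begins with {code}`exec`.

```shell
# Hedged sketch: park runCamera with a long sleep and comment out the exec line,
# mirroring the manual nano edit. Assumes the stock WPILibPi script; a backup
# copy is made first.
FILE="${FILE:-/home/pi/runCamera}"
if [ -f "$FILE" ]; then
  sudo cp "$FILE" "$FILE.bak"
  # Insert "sleep 10000" above the exec line, then comment the exec line out.
  sudo sed -i 's|^exec|sleep 10000\n#exec|' "$FILE"
fi
```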
+ +## Installation + +The WPILibPi image includes FRCVision, which reserves USB cameras; to use PhotonVision, we need to edit the `/home/pi/runCamera` script to disable it. First we will need to make the file system writeable; the easiest way to do this is to go to `10.0.0.2` and choose "Writable" at the top. + +SSH into the Raspberry Pi (using Windows command line, or a tool like [PuTTY](https://www.chiark.greenend.org.uk/~sgtatham/putty/)) at the Romi's default address `10.0.0.2`. The default user is `pi`, and the password is `raspberry`. + +Follow the process for installing PhotonVision on {ref}`"Other Debian-Based Co-Processor Installation" `. As mentioned there, this will require an internet connection, so plugging into the Ethernet jack on the Raspberry Pi will be the easiest solution. The Pi must remain writable! + +Next, from the SSH terminal, run `sudo nano /home/pi/runCamera` then arrow down to the start of the exec line and press "Enter" to add a new line. Then add `#` before the exec command to comment it out. Then, arrow up to the new line and type `sleep 10000`. Hit "Ctrl + O" and then "Enter" to save the file. Finally press "Ctrl + X" to exit nano. Now, reboot the Romi by typing `sudo reboot`. + +```{image} images/nano.png + +``` + +After it reboots, you should be able to [locate the PhotonVision UI](https://photonvision.github.io/gloworm-docs/docs/quickstart/#finding-gloworm) at: `http://10.0.0.2:5800/`. + +:::{warning} +In order for settings, logs, etc. to be saved / take effect, ensure that PhotonVision is in writable mode. 
+::: diff --git a/docs/source/docs/installation/sw_install/windows-pc.md b/docs/source/docs/advanced-installation/sw_install/windows-pc.md similarity index 100% rename from docs/source/docs/installation/sw_install/windows-pc.md rename to docs/source/docs/advanced-installation/sw_install/windows-pc.md diff --git a/docs/source/docs/apriltag-pipelines/2D-tracking-tuning.md b/docs/source/docs/apriltag-pipelines/2D-tracking-tuning.md index c9f205eb66..3a00e3d5f2 100644 --- a/docs/source/docs/apriltag-pipelines/2D-tracking-tuning.md +++ b/docs/source/docs/apriltag-pipelines/2D-tracking-tuning.md @@ -1,6 +1,6 @@ # 2D AprilTag Tuning / Tracking -## Tracking Apriltags +## Tracking AprilTags Before you get started tracking AprilTags, ensure that you have followed the previous sections on installation, wiring and networking. Next, open the Web UI, go to the top right card, and switch to the "AprilTag" or "Aruco" type. You should see a screen similar to the one below. diff --git a/docs/source/docs/apriltag-pipelines/about-apriltags.md b/docs/source/docs/apriltag-pipelines/about-apriltags.md index aaee12ff96..0eba914f8e 100644 --- a/docs/source/docs/apriltag-pipelines/about-apriltags.md +++ b/docs/source/docs/apriltag-pipelines/about-apriltags.md @@ -1,4 +1,4 @@ -# About Apriltags +# About AprilTags ```{image} images/pv-apriltag.png :align: center diff --git a/docs/source/docs/contributing/building-docs.md b/docs/source/docs/contributing/building-docs.md index d5a7bcaa14..73e45fed92 100644 --- a/docs/source/docs/contributing/building-docs.md +++ b/docs/source/docs/contributing/building-docs.md @@ -18,7 +18,7 @@ You must install a set of Python dependencies in order to build the documentatio In order to build the documentation, you can run the following command in the docs sub-folder. This will automatically build docs every time a file changes, and serve them locally at `localhost:8000` by default. 
-`~/photonvision/docs$ sphinx-autobuild --open-browser source/_build/html` +`~/photonvision/docs$ sphinx-autobuild --open-browser source source/_build/html` ## Opening the Documentation diff --git a/docs/source/docs/description.md b/docs/source/docs/description.md index 2a962188e8..2d342d2bee 100644 --- a/docs/source/docs/description.md +++ b/docs/source/docs/description.md @@ -2,7 +2,7 @@ ## Description -PhotonVision is a free, fast, and easy-to-use vision processing solution for the *FIRST*Robotics Competition. PhotonVision is designed to get vision working on your robot *quickly*, without the significant cost of other similar solutions. +PhotonVision is a free, fast, and easy-to-use vision processing solution for the _FIRST_ Robotics Competition. PhotonVision is designed to get vision working on your robot _quickly_, without the significant cost of other similar solutions. Using PhotonVision, teams can go from setting up a camera and coprocessor to detecting and tracking AprilTags and other targets by simply tuning sliders. With an easy to use interface, comprehensive documentation, and a feature rich vendor dependency, no experience is necessary to use PhotonVision. No matter your resources, using PhotonVision is easy compared to its alternatives. ## Advantages diff --git a/docs/source/docs/examples/aimingatatarget.md b/docs/source/docs/examples/aimingatatarget.md index 6de0379308..b152af0e24 100644 --- a/docs/source/docs/examples/aimingatatarget.md +++ b/docs/source/docs/examples/aimingatatarget.md @@ -7,15 +7,15 @@ The following example is from the PhotonLib example repository ([Java](https://g - A Robot - A camera mounted rigidly to the robot's frame, centered and pointed forward. - A coprocessor running PhotonVision with an AprilTag or Aruco 2D Pipeline. -- [A printout of Apriltag 7](https://firstfrc.blob.core.windows.net/frc2024/FieldAssets/Apriltag_Images_and_User_Guide.pdf), mounted on a rigid and flat surface. 
+- [A printout of AprilTag 7](https://firstfrc.blob.core.windows.net/frc2024/FieldAssets/Apriltag_Images_and_User_Guide.pdf), mounted on a rigid and flat surface. ## Code -Now that you have properly set up your vision system and have tuned a pipeline, you can now aim your robot at an AprilTag using the data from PhotonVision. The *yaw* of the target is the critical piece of data that will be needed first. +Now that you have properly set up your vision system and have tuned a pipeline, you can aim your robot at an AprilTag using the data from PhotonVision. The _yaw_ of the target is the critical piece of data that will be needed first. Yaw is reported to the roboRIO over Network Tables. PhotonLib, our vendor dependency, is the easiest way to access this data. The documentation for the Network Tables API can be found {ref}`here ` and the documentation for PhotonLib {ref}`here `. -In this example, while the operator holds a button down, the robot will turn towards the AprilTag using the P term of a PID loop. To learn more about how PID loops work, how WPILib implements them, and more, visit [Advanced Controls (PID)](https://docs.wpilib.org/en/stable/docs/software/advanced-control/introduction/index.html) and [PID Control in WPILib](https://docs.wpilib.org/en/stable/docs/software/advanced-controls/controllers/pidcontroller.html#pid-control-in-wpilib). +In this example, while the operator holds a button down, the robot will turn towards the AprilTag using the P term of a PID loop. To learn more about how PID loops work, how WPILib implements them, and more, visit [Advanced Controls (PID)](https://docs.wpilib.org/en/stable/docs/software/advanced-control/introduction/index.html) and [PID Control in WPILib](https://docs.wpilib.org/en/stable/docs/software/advanced-controls/controllers/pidcontroller.html#pid-control-in-wpilib). ```{eval-rst} .. 
tab-set:: diff --git a/docs/source/docs/hardware/images/motionblur.gif b/docs/source/docs/hardware/images/rollingshutter.gif similarity index 100% rename from docs/source/docs/hardware/images/motionblur.gif rename to docs/source/docs/hardware/images/rollingshutter.gif diff --git a/docs/source/docs/hardware/selecting-hardware.md b/docs/source/docs/hardware/selecting-hardware.md index 9d26d0a321..138bd48e18 100644 --- a/docs/source/docs/hardware/selecting-hardware.md +++ b/docs/source/docs/hardware/selecting-hardware.md @@ -1,6 +1,6 @@ # Selecting Hardware -In order to use PhotonVision, you need a coprocessor and a camera. This page will help you select the right hardware for your team depending on your budget, needs, and experience. +In order to use PhotonVision, you need a coprocessor and a camera. Beyond the recommended hardware found in the {ref}`quick start guide`, this page will help you select hardware that should work with PhotonVision even though it is not officially supported or recommended. ## Choosing a Coprocessor @@ -11,27 +11,19 @@ In order to use PhotonVision, you need a coprocessor and a camera. This page wil - CPU: ARM Cortex-A53 (the CPU on Raspberry Pi 3) or better - At least 8GB of storage - 2GB of RAM -  - PhotonVision isn't very RAM intensive, but you'll need at least 2GB to run the OS and PhotonVision. +  - PhotonVision isn't very RAM intensive, but you'll need at least 2GB to run the OS and PhotonVision. - The following IO: -  - At least 1 USB or MIPI-CSI port for the camera -  - Note that we only support using the Raspberry Pi's MIPI-CSI port, other MIPI-CSI ports from other coprocessors may not work. -  - Ethernet port for networking +  - At least 1 USB or MIPI-CSI port for the camera +  - Note that we only support using the Raspberry Pi's MIPI-CSI port; other MIPI-CSI ports from other coprocessors will probably not work. 
+ - Ethernet port for networking ### Coprocessor Recommendations -When selecting a coprocessor, it is important to consider various factors, particularly when it comes to AprilTag detection. Opting for a coprocessor with a more powerful CPU can generally result in higher FPS AprilTag detection, leading to more accurate pose estimation. However, it is important to note that there is a point of diminishing returns, where the benefits of a more powerful CPU may not outweigh the additional cost. Below is a list of supported hardware, along with some notes on each. - -- Orange Pi 5 (\$99) - - This is the recommended coprocessor for most teams. It has a powerful CPU that can handle AprilTag detection at high FPS, and is relatively cheap compared to processors of a similar power. -- Raspberry Pi 4/5 (\$55-\$80) - - This is the recommended coprocessor for teams on a budget. It has a less powerful CPU than the Orange Pi 5, but is still capable of running PhotonVision at a reasonable FPS. -- Mini PCs (such as Beelink N5095) - - This coprocessor will likely have similar performance to the Orange Pi 5 but has a higher performance ceiling (when using more powerful CPUs). Do note that this would require extra effort to wire to the robot / get set up. More information can be found in the set up guide [here.](https://docs.google.com/document/d/1lOSzG8iNE43cK-PgJDDzbwtf6ASyf4vbW8lQuFswxzw/edit?usp=drivesdk) -- Other coprocessors can be used but may require some extra work / command line usage in order to get it working properly. +When selecting a coprocessor, it is important to consider various factors, particularly when it comes to AprilTag detection. Opting for a coprocessor with a more powerful CPU can generally result in higher FPS AprilTag detection, leading to more accurate pose estimation. However, it is important to note that there is a point of diminishing returns, where the benefits of a more powerful CPU may not outweigh the additional cost. 
Other coprocessors can be used but may require some extra work / command line usage in order to get them working properly. ## Choosing a Camera -PhotonVision works with Pi Cameras and most USB Cameras, the recommendations below are known to be working and have been tested. Other cameras such as webcams, virtual cameras, etc. are not officially supported and may not work. It is important to note that fisheye cameras should only be used as a driver camera and not for detecting targets. +PhotonVision works with Pi Cameras and most USB Cameras. Other cameras such as webcams, virtual cameras, etc. are not officially supported and may not work. It is important to note that fisheye cameras should only be used as a driver camera or for gamepiece detection, and not for detecting targets / AprilTags. PhotonVision relies on [CSCore](https://github.com/wpilibsuite/allwpilib/tree/main/cscore) to detect and process cameras, so camera support is determined based off compatibility with CScore along with native support for the camera within your OS (ex. [V4L compatibility](https://en.wikipedia.org/wiki/Video4Linux) if using a Linux machine like a Raspberry Pi). @@ -43,31 +35,17 @@ Logitech Cameras and integrated laptop cameras will not work with PhotonVision d We do not currently support the usage of two of the same camera on the same coprocessor. You can only use two or more cameras if they are of different models or they are from Arducam, which has a [tool that allows for cameras to be renamed](https://docs.arducam.com/UVC-Camera/Serial-Number-Tool-Guide/). ::: -### Recommended Cameras +### Camera Attributes -For colored shape detection, any non-fisheye camera supported by PhotonVision will work. We recommend the Pi Camera V1 or a high fps USB camera. +For colored shape detection, any non-fisheye camera supported by PhotonVision will work. We recommend a high fps USB camera. For a driver camera, we recommend a USB camera with a fisheye lens, so your driver can see more of the field. 
For AprilTag detection, we recommend you use a global shutter camera that has ~100 degree diagonal FOV. This will allow you to see more AprilTags in frame, and will allow for more accurate pose estimation. You also want a camera that supports high FPS, as this will allow you to update your pose estimator at a higher frequency. -- Recommendations For AprilTag Detection -  - Arducam USB OV9281 -  - This is the recommended camera for AprilTag detection as it is a high FPS, global shutter camera USB camera that has a ~70 degree FOV. -  - Innomaker OV9281 -  - Spinel AR0144 -  - Pi Camera Module V1 -  - The V1 is strongly preferred over the V2 due to the V2 having undesirable FOV choices - -### AprilTags and Motion Blur - -When detecting AprilTags, you want to reduce the "motion blur" as much as possible. Motion blur is the visual streaking/smearing on the camera stream as a result of movement of the camera or object of focus. You want to mitigate this as much as possible because your robot is constantly moving and you want to be able to read as many tags as you possibly can. The possible solutions to this include: - -1. Cranking your exposure as low as it goes and increasing your gain/brightness. This will decrease the effects of motion blur and increase FPS. -2. Using a global shutter (as opposed to rolling shutter) camera. This should eliminate most, if not all motion blur. -3. Only rely on tags when not moving. +One cause of image distortion is 'rolling shutter.' This occurs when the camera captures pixels sequentially from top to bottom, which can lead to distortion if the camera or object is moving. 
-```{image} images/motionblur.gif +```{image} images/rollingshutter.gif :align: center ``` diff --git a/docs/source/docs/installation/images/networking-diagram.png b/docs/source/docs/installation/images/networking-diagram.png deleted file mode 100644 index dcc57fe080..0000000000 Binary files a/docs/source/docs/installation/images/networking-diagram.png and /dev/null differ diff --git a/docs/source/docs/installation/images/pololu-diagram.png b/docs/source/docs/installation/images/pololu-diagram.png deleted file mode 100644 index 74b7b6f07a..0000000000 Binary files a/docs/source/docs/installation/images/pololu-diagram.png and /dev/null differ diff --git a/docs/source/docs/installation/images/release-page.png b/docs/source/docs/installation/images/release-page.png deleted file mode 100644 index 478d9a8918..0000000000 Binary files a/docs/source/docs/installation/images/release-page.png and /dev/null differ diff --git a/docs/source/docs/installation/sw_install/gloworm.md b/docs/source/docs/installation/sw_install/gloworm.md deleted file mode 100644 index 2a52497660..0000000000 --- a/docs/source/docs/installation/sw_install/gloworm.md +++ /dev/null @@ -1,60 +0,0 @@ -# Gloworm Installation - -While not currently in production, PhotonVision still supports Gloworm vision processing cameras. - -## Downloading the Gloworm Image - -Download the latest [Gloworm/Limelight release of PhotonVision](https://github.com/photonvision/photonvision/releases); the image will be suffixed with "image_limelight2.xz". You do not need to extract the downloaded archive. - -## Flashing the Gloworm Image - -Plug a USB C cable from your computer into the USB C port on Gloworm labeled with a download icon. - -Use the 1.18.11 version of [Balena Etcher](https://github.com/balena-io/etcher/releases/tag/v1.18.11) to flash an image onto the coprocessor. - -Run BalenaEtcher as an administrator. Select the downloaded `.zip` file. - -Select the compute module. 
If it doesn't show up after 30s try using another USB port, initialization may take a while. If prompted, install the recommended missing drivers. - -Hit flash. Wait for flashing to complete, then disconnect your USB C cable. - -:::{warning} -Using a version of Balena Etcher older than 1.18.11 may cause bootlooping (the system will repeatedly boot and restart) when imaging your Gloworm. Updating to 1.18.11 will fix this issue. -::: - -## Final Steps - -Power your device per its documentation and connect it to a robot network. - -You should be able to locate the camera at `http://photonvision.local:5800/` in your browser on your computer when connected to the robot. - -## Troubleshooting/Setting a Static IP - -A static IP address may be used as an alternative to the mDNS `photonvision.local` address. - -Download and run [Angry IP Scanner](https://angryip.org/download/#windows) to find PhotonVision/your coprocessor on your network. - -```{image} images/angryIP.png -``` - -Once you find it, set the IP to a desired {ref}`static IP in PhotonVision. ` - -## Updating PhotonVision - -Download the latest stable .jar from [the releases page](https://github.com/PhotonVision/photonvision/releases), go to the settings tab, and upload the .jar using the Offline Update button. - -:::{note} -If you are updating PhotonVision on a Gloworm/Limelight, download the LinuxArm64 .jar file. -::: - -As an alternative option - Export your settings, reimage your coprocessor using the instructions above, and import your settings back in. - -## Hardware Troubleshooting - -To turn the LED lights off or on you need to modify the `ledMode` network tables entry or the `camera.setLED` of PhotonLib. 
- -## Support Links - -- [Website/Documentation](https://photonvision.github.io/gloworm-docs/docs/quickstart/#finding-gloworm) (Note: Gloworm is no longer in production) -- [Image](https://github.com/gloworm-vision/pi-img-updator/releases) -- [Discord](https://discord.com/invite/DncQRky) diff --git a/docs/source/docs/installation/sw_install/limelight.md b/docs/source/docs/installation/sw_install/limelight.md deleted file mode 100644 index e819fd6a48..0000000000 --- a/docs/source/docs/installation/sw_install/limelight.md +++ /dev/null @@ -1,24 +0,0 @@ -# Limelight Installation - -## Imaging - -Limelight imaging is a very similar process to Gloworm, but with extra steps. - -### Base Install Steps - -Due to the similarities in hardware, follow the {ref}`Gloworm install instructions `. - -## Hardware-Specific Steps - -Download the hardwareConfig.json file for the version of your Limelight: - -- {download}`Limelight Version 2 `. -- {download}`Limelight Version 2+ `. - -:::{note} -No hardware config is provided for the Limelight 3 as AprilTags do not require the LEDs (meaning nobody has reverse-engineered what I/O pins drive the LEDs) and the camera FOV is determined as part of calibration. -::: - -{ref}`Import the hardwareConfig.json file `. Again, this is **REQUIRED** or target measurements will be incorrect, and LEDs will not work. - -After installation you should be able to [locate the camera](https://photonvision.github.io/gloworm-docs/docs/quickstart/#finding-gloworm) at: `http://photonvision.local:5800/` (not `gloworm.local`, as previously) diff --git a/docs/source/docs/installation/sw_install/orange-pi.md b/docs/source/docs/installation/sw_install/orange-pi.md deleted file mode 100644 index b03c7ffabe..0000000000 --- a/docs/source/docs/installation/sw_install/orange-pi.md +++ /dev/null @@ -1,39 +0,0 @@ -# Orange Pi Installation - -## Downloading Linux Image - -Starting in 2024, PhotonVision provides pre-configured system images for Orange Pi 5 devices. 
Download the latest release of the PhotonVision Orange Pi 5 image (.xz file suffixed with `orangepi5.xz`) from the [releases page](https://github.com/PhotonVision/photonvision/releases). You do not need to extract the downloaded archive file. This image is configured with a `pi` user with password `raspberry`. - -For an Orange Pi 4, download the latest release of the Armbian Bullseye CLI image from [here](https://armbian.tnahosting.net/archive/orangepi4/archive/Armbian_23.02.2_Orangepi4_bullseye_current_5.15.93.img.xz). - -## Flashing the Pi Image - -An 8GB or larger SD card is recommended. - -Use the 1.18.11 version of [Balena Etcher](https://github.com/balena-io/etcher/releases/tag/v1.18.11) to flash an image onto a Orange Pi. Select the downloaded image file, select your microSD card, and flash. - -For more detailed instructions on using Etcher, please see the [Etcher website](https://www.balena.io/etcher/). - -:::{warning} -Using a version of Balena Etcher older than 1.18.11 may cause bootlooping (the system will repeatedly boot and restart) when imaging your Orange Pi. Updating to 1.18.11 will fix this issue. -::: - -Alternatively, you can use the [Raspberry Pi Imager](https://www.raspberrypi.com/software/) to flash the image. - -Select "Choose OS" and then "Use custom" to select the downloaded image file. Select your microSD card and flash. - -:::{note} -If you are working on Linux, "dd" can be used in the command line to flash an image. -::: - -If you're using an Orange Pi 5, that's it! Orange Pi 4 users will need to install PhotonVision (see below). - -### Initial User Setup (Orange Pi 4 Only) - -Insert the flashed microSD card into your Orange Pi and boot it up. The first boot may take a few minutes as the Pi expands the filesystem. Be sure not to unplug during this process. - -Plug your Orange Pi into a display via HDMI and plug in a keyboard via USB once its powered up. 
For an Orange Pi 4, complete the initial set up which involves creating a root password and adding a user, as well as setting localization language. Additionally, choose “bash” when prompted. - -## Installing PhotonVision (Orange Pi 4 Only) - -From here, you can follow {ref}`this guide `. diff --git a/docs/source/docs/installation/sw_install/raspberry-pi.md b/docs/source/docs/installation/sw_install/raspberry-pi.md deleted file mode 100644 index b567dcd12b..0000000000 --- a/docs/source/docs/installation/sw_install/raspberry-pi.md +++ /dev/null @@ -1,50 +0,0 @@ -# Raspberry Pi Installation - -A Pre-Built Raspberry Pi image is available for ease of installation. - -## Downloading the Pi Image - -Download the latest release of the PhotonVision Raspberry image (.xz file) from the [releases page](https://github.com/PhotonVision/photonvision/releases). You do not need to extract the downloaded ZIP file. - -:::{note} -Make sure you download the image that ends in '-RaspberryPi.xz'. -::: - -## Flashing the Pi Image - -An 8GB or larger card is recommended. - -Use the 1.18.11 version of [Balena Etcher](https://github.com/balena-io/etcher/releases/tag/v1.18.11) to flash an image onto a Raspberry Pi. Select the downloaded `.tar.xz` file, select your microSD card, and flash. - -For more detailed instructions on using Etcher, please see the [Etcher website](https://www.balena.io/etcher/). - -:::{warning} -Using a version of Balena Etcher older than 1.18.11 may cause bootlooping (the system will repeatedly boot and restart) when imaging your Raspberry Pi. Updating to 1.18.11 will fix this issue. -::: - -Alternatively, you can use the [Raspberry Pi Imager](https://www.raspberrypi.com/software/) to flash the image. - -Select "Choose OS" and then "Use custom" to select the downloaded image file. Select your microSD card and flash. - -If you are using a non-standard Pi Camera connected to the CSI port, {ref}`additional configuration may be required. 
` - -## Final Steps - -Insert the flashed microSD card into your Raspberry Pi and boot it up. The first boot may take a few minutes as the Pi expands the filesystem. Be sure not to unplug during this process. - -After the initial setup process, your Raspberry Pi should be configured for PhotonVision. You can verify this by making sure your Raspberry Pi and computer are connected to the same network and navigating to `http://photonvision.local:5800` in your browser on your computer. - -## Troubleshooting/Setting a Static IP - -A static IP address may be used as an alternative to the mDNS `photonvision.local` address. - -Download and run [Angry IP Scanner](https://angryip.org/download/#windows) to find PhotonVision/your coprocessor on your network. - -```{image} images/angryIP.png -``` - -Once you find it, set the IP to a desired {ref}`static IP in PhotonVision. ` - -## Updating PhotonVision - -To upgrade a Raspberry Pi device with PhotonVision already installed, follow the {ref}`Raspberry Pi update instructions`. diff --git a/docs/source/docs/installation/sw_install/romi.md b/docs/source/docs/installation/sw_install/romi.md deleted file mode 100644 index a6b327157f..0000000000 --- a/docs/source/docs/installation/sw_install/romi.md +++ /dev/null @@ -1,22 +0,0 @@ -# Romi Installation - -The [Romi](https://docs.wpilib.org/en/latest/docs/romi-robot/index.html) is a small robot that can be controlled with the WPILib software. The main controller is a Raspberry Pi that must be imaged with [WPILibPi](https://docs.wpilib.org/en/latest/docs/romi-robot/imaging-romi.html) . - -## Installation - -The WPILibPi image includes FRCVision, which reserves USB cameras; to use PhotonVision, we need to edit the `/home/pi/runCamera` script to disable it. First we will need to make the file system writeable; the easiest way to do this is to go to `10.0.0.2` and choose "Writable" at the top. 
- -SSH into the Raspberry Pi (using Windows command line, or a tool like [Putty](https://www.chiark.greenend.org.uk/~sgtatham/putty/) ) at the Romi's default address `10.0.0.2`. The default user is `pi`, and the password is `raspberry`. - -Follow the process for installing PhotonVision on {ref}`"Other Debian-Based Co-Processor Installation" `. As it mentions this will require an internet connection so plugging into the ethernet jack on the Raspberry Pi will be the easiest solution. The pi must remain writable! - -Next, from the SSH terminal, run `sudo nano /home/pi/runCamera` then arrow down to the start of the exec line and press "Enter" to add a new line. Then add `#` before the exec command to comment it out. Then, arrow up to the new line and type `sleep 10000`. Hit "Ctrl + O" and then "Enter" to save the file. Finally press "Ctrl + X" to exit nano. Now, reboot the Romi by typing `sudo reboot`. - -```{image} images/nano.png -``` - -After it reboots, you should be able to [locate the PhotonVision UI](https://photonvision.github.io/gloworm-docs/docs/quickstart/#finding-gloworm) at: `http://10.0.0.2:5800/`. - -:::{warning} -In order for settings, logs, etc. to be saved / take effect, ensure that PhotonVision is in writable mode. -::: diff --git a/docs/source/docs/installation/sw_install/snakeyes.md b/docs/source/docs/installation/sw_install/snakeyes.md deleted file mode 100644 index e099fde408..0000000000 --- a/docs/source/docs/installation/sw_install/snakeyes.md +++ /dev/null @@ -1,56 +0,0 @@ -# SnakeEyes Installation - -A Pre-Built Raspberry Pi image with configuration for [the SnakeEyes Raspberry Pi Hat](https://www.playingwithfusion.com/productview.php?pdid=133&catid=1014) is available for ease of setup. - -## Downloading the SnakeEyes Image - -Download the latest release of the SnakeEyes-specific PhotonVision Pi image from the [releases page](https://github.com/PlayingWithFusion/SnakeEyesDocs/releases). You do not need to extract the downloaded ZIP file. 
- -## Flashing the SnakeEyes Image - -An 8GB or larger card is recommended. - -Use the 1.18.11 version of [Balena Etcher](https://github.com/balena-io/etcher/releases/tag/v1.18.11) to flash an image onto a Raspberry Pi. Select the downloaded `.zip` file, select your microSD card, and flash. - -For more detailed instructions on using Etcher, please see the [Etcher website](https://www.balena.io/etcher/). - -:::{warning} -Using a version of Balena Etcher older than 1.18.11 may cause bootlooping (the system will repeatedly boot and restart) when imaging your Raspberry Pi. Updating to 1.18.11 will fix this issue. -::: - -Alternatively, you can use the [Raspberry Pi Imager](https://www.raspberrypi.com/software/) to flash the image. - -Select "Choose OS" and then "Use custom" to select the downloaded image file. Select your microSD card and flash. - -## Final Steps - -Insert the flashed microSD card into your Raspberry Pi and boot it up. The first boot may take a few minutes as the Pi expands the filesystem. Be sure not to unplug during this process. - -After the initial setup process, your Raspberry Pi should be configured for PhotonVision. You can verify this by making sure your Raspberry Pi and computer are connected to the same network and navigating to `http://photonvision.local:5800` in your browser on your computer. - -## Troubleshooting/Setting a Static IP - -A static IP address may be used as an alternative to the mDNS `photonvision.local` address. - -Download and run [Angry IP Scanner](https://angryip.org/download/#windows) to find PhotonVision/your coprocessor on your network. - -```{image} images/angryIP.png -``` - -Once you find it, set the IP to a desired {ref}`static IP in PhotonVision. ` - -## Updating PhotonVision - -Download the latest xxxxx-LinuxArm64.jar from [our releases page](https://github.com/PhotonVision/photonvision/releases), go to the settings tab, and upload the .jar using the Offline Update button. 
- -As an alternative option - Export your settings, reimage your coprocessor using the instructions above, and import your settings back in. - -## Hardware Troubleshooting - -To turn the LED lights off or on you need to modify the `ledMode` network tables entry or the `camera.setLED` of PhotonLib. - -## Support Links - -- [Website](https://www.playingwithfusion.com/productview.php?pdid=133) -- [Image](https://github.com/PlayingWithFusion/SnakeEyesDocs/releases/latest) -- [Documentation](https://github.com/PlayingWithFusion/SnakeEyesDocs/blob/master/PhotonVision/readme.md) diff --git a/docs/source/docs/installation/updating.md b/docs/source/docs/installation/updating.md deleted file mode 100644 index 76ad2ea54f..0000000000 --- a/docs/source/docs/installation/updating.md +++ /dev/null @@ -1,54 +0,0 @@ -# Updating PhotonVision - -PhotonVision provides many different files on a single release page. Each release contains JAR files for performing "offline updates" of a device with PhotonVision already installed, as well as full image files to "flash" to supported coprocessors. - -```{image} images/release-page.png -:alt: Example GitHub release page -``` - -In the example release above, we see: - -- Image files for flashing directly to supported coprocessors. - - - Raspberry Pi 3/4/5/CM4: follow our {ref}`Raspberry Pi flashing instructions`. - - For LimeLight devices: follow our {ref}`LimeLight flashing instructions`. - - For Orange Pi 5 devices: follow our {ref}`Orange Pi flashing instructions`. - -- JAR files for the suite of supported operating systems for use with Offline Update. In general: - - - Raspberry Pi, Limelight, and Orange Pi: use images suffixed with -linuxarm64.jar. For example: {code}`photonvision-v2024.1.1-linuxarm64.jar` - - Beelink and other Intel/AMD-based Mini-PCs: use images suffixed with -linuxx64.jar. 
For example: {code}`photonvision-v2024.1.1-linuxx64.jar` - -## Offline Update - -Unless noted in the release page, an offline update allows you to quickly upgrade the version of PhotonVision running on a coprocessor with PhotonVision already installed on it. - -Unless otherwise noted on the release page, config files should be backward compatible with previous version of PhotonVision, and this offline update process should preserve any pipelines and calibrations previously performed. For paranoia, we suggest exporting settings from the Settings tab prior to performing an offline update. - -:::{note} -Carefully review release notes to ensure that reflashing the device (for supported devices) or other installation steps are not required, as dependencies needed for PhotonVision may change between releases -::: - -## Installing Pre-Release Versions - -Pre-release/development version of PhotonVision can be tested by installing/downloading artifacts from Github Actions (see below), which are built automatically on commits to open pull requests and to PhotonVision's `master` branch, or by {ref}`compiling PhotonVision locally `. - -:::{warning} -If testing a pre-release version of PhotonVision with a robot, PhotonLib must be updated to match the version downloaded! If not, packet schema definitions may not match and unexpected things will occur. To update PhotonLib, refer to {ref}`installing specific version of PhotonLib`. -::: - -GitHub Actions builds pre-release version of PhotonVision automatically on PRs and on each commit merged to master. To test a particular commit to master, navigate to the [PhotonVision commit list](https://github.com/PhotonVision/photonvision/commits/master/) and click on the check mark (below). Scroll to "Build / Build fat JAR - PLATFORM", click details, and then summary. From here, JAR and image files can be downloaded to be flashed or uploaded using "Offline Update". 
- -```{image} images/gh_actions_1.png -:alt: Github Actions Badge -``` - -```{image} images/gh_actions_2.png -:alt: Github Actions artifact list -``` - -Built JAR files (but not image files) can also be downloaded from PRs before they are merged. Navigate to the PR in GitHub, and select Checks at the top. Click on "Build" to display the same artifact list as above. - -```{image} images/gh_actions_3.png -:alt: Github Actions artifacts from PR -``` diff --git a/docs/source/docs/installation/wiring.md b/docs/source/docs/installation/wiring.md deleted file mode 100644 index c20fc9916d..0000000000 --- a/docs/source/docs/installation/wiring.md +++ /dev/null @@ -1,42 +0,0 @@ -# Wiring - -## Off-Robot Wiring - -Plugging your coprocessor into the wall via a power brick will suffice for off robot wiring. - -:::{note} -Please make sure your chosen power supply can provide enough power for your coprocessor. Undervolting (where enough power isn't being supplied) can cause many issues. -::: - -## On-Robot Wiring - -:::{note} -We recommend users use the [SnakeEyes Pi Hat](https://www.playingwithfusion.com/productview.php?pdid=133) as it provides passive power over ethernet (POE) and other useful features to simplify wiring and make your life easier. -::: - -### Recommended: Coprocessor with Passive POE (Gloworm, Pi with SnakeEyes, Limelight) - -1. Plug the [passive POE injector](https://www.revrobotics.com/rev-11-1210/) into the coprocessor and wire it to PDP/PDH (NOT the VRM). -2. Add a breaker to relevant slot in your PDP/PDH -3. Run an ethernet cable from the passive POE injector to your network switch / radio (we *STRONGLY* recommend the usage of a network switch, see the [networking](networking.md) section for more info.) - -### Coprocessor without Passive POE - -1a. Option 1: Get a micro USB (may be USB-C if using a newer Pi) pigtail cable and connect the wire ends to a regulator like [this](https://www.pololu.com/product/4082). 
Then, wire the regulator into your PDP/PDH and the Micro USB / USB C into your coprocessor. - -1b. Option 2: Use a USB power bank to power your coprocessor. Refer to this year's robot rulebook on legal implementations of this. - -2. Run an ethernet cable from your Pi to your network switch / radio (we *STRONGLY* recommend the usage of a network switch, see the [networking](networking.md) section for more info.) - -This diagram shows how to use the recommended regulator to power a coprocessor. - -```{image} images/pololu-diagram.png -:alt: A flowchart-type diagram showing how to connect wires from the PDP or PDH to -: the recommended voltage regulator and then a Coprocessor. -``` - -:::{note} -The regulator comes with optional screw terminals that may be used to connect the PDP/PDH and Coprocessor power wires if you do not wish to solder them. -::: - -Once you have wired your coprocessor, you are now ready to install PhotonVision. diff --git a/docs/source/docs/objectDetection/about-object-detection.md b/docs/source/docs/objectDetection/about-object-detection.md index b40667e645..70b9512796 100644 --- a/docs/source/docs/objectDetection/about-object-detection.md +++ b/docs/source/docs/objectDetection/about-object-detection.md @@ -7,6 +7,7 @@ PhotonVision supports object detection using neural network accelerator hardware For the 2024 season, PhotonVision ships with a **pre-trained NOTE detector** (shown above), as well as a mechanism for swapping in custom models. Future development will focus on enabling lower friction management of multiple custom models. ```{image} images/notes-ui.png + ``` ## Tracking Objects @@ -32,6 +33,10 @@ Compared to other pipelines, object detection exposes very few tuning handles. T The same area, aspect ratio, and target orientation/sort parameters from {ref}`reflective pipelines ` are also exposed in the object detection card. +## Letterboxing + +Photonvision will letterbox your camera frame to 640x640. 
This means that if you select a resolution larger than 640, it will be scaled down to fit inside a 640x640 frame, with black bars filling any leftover space. Smaller resolutions will be scaled up in the same way. + +## Training Custom Models Coming soon! diff --git a/docs/source/docs/quick-start/arducam-cameras.md b/docs/source/docs/quick-start/arducam-cameras.md new file mode 100644 index 0000000000..82eaaaf87e --- /dev/null +++ b/docs/source/docs/quick-start/arducam-cameras.md @@ -0,0 +1,22 @@ +# Arducam Cameras + +Arducam cameras are supported for setups with multiple devices. This is possible because Arducam provides software that allows you to assign a distinct device name to each camera. This feature is particularly useful in complex setups where multiple cameras are used simultaneously. + +## Setting Up Arducam Cameras + +1. **Download Arducam Software**: [Download and install the Arducam software from their official website.](https://docs.arducam.com/UVC-Camera/Serial-Number-Tool-Guide/) + +2. **Assign Device Names**: Use the Arducam software and Arducam [documentation](https://docs.arducam.com/UVC-Camera/Serial-Number-Tool-Guide/) to give each camera a unique device name. This helps distinguish between multiple cameras in your setup. + +## Steps to Configure in PhotonVision + +1. **Open PhotonVision Settings**: Navigate to the cameras page in PhotonVision. + +2. **Select Camera Model**: Select the proper camera, then use the Arducam model selector to specify the model of each Arducam camera connected to your system. + +3. **Save Settings**: Ensure that you save the settings after selecting the appropriate camera model for each device.
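On a Linux coprocessor, one quick way to confirm that each Arducam really did receive a distinct device name is to list the V4L "by-id" links, whose filenames embed the configured name/serial. The helper below is a hypothetical sketch for illustration only — it is not part of PhotonVision or the Arducam tooling, and it assumes a Linux system where udev creates `/dev/v4l/by-id` links:

```python
from pathlib import Path


def list_v4l_cameras(by_id_dir: str = "/dev/v4l/by-id") -> list[str]:
    """Return the V4L device link names, which embed each camera's
    configured name/serial. Returns an empty list on systems without
    such devices (or without the by-id directory at all)."""
    path = Path(by_id_dir)
    if not path.is_dir():
        return []
    return sorted(link.name for link in path.iterdir())


# Two renamed Arducams should show up as two distinct entries here.
for name in list_v4l_cameras():
    print(name)
```

If two entries still share a name, rerun the Arducam serial-number tool before configuring the cameras in PhotonVision.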
+ +```{image} images/setArducamModel.png +:alt: The camera model can be selected from the Arducam model selector in the cameras tab +:align: center +``` diff --git a/docs/source/docs/quick-start/camera-calibration.md b/docs/source/docs/quick-start/camera-calibration.md new file mode 100644 index 0000000000..85847507b9 --- /dev/null +++ b/docs/source/docs/quick-start/camera-calibration.md @@ -0,0 +1,33 @@ +# Camera Calibration + +:::{important} +In order to detect AprilTags and use 3D mode, your camera must be calibrated at the desired resolution! Inaccurate calibration will lead to poor performance. +::: + +If you’re not using cameras in 3D mode, calibration is optional, but it can still offer benefits. Calibrating cameras helps refine the pitch and yaw values, leading to more accurate positional data in every mode. {ref}`For a more in-depth view`. + +## Print the Calibration Target + +- Download it from our [demo site](https://demo.photonvision.org/#/cameras), or directly from your coprocessor's cameras tab. +- Use the Charuco calibration board: + - Board Type: Charuco + - Tag Family: 4x4 + - Pattern Spacing: 1.00in + - Marker Size: 0.75in + - Board Height: 8 + - Board Width: 8 + +## Prepare the Calibration Target + +- Measure Accurately: Use calipers to measure the actual size of the squares and markers. Accurate measurements are crucial for effective calibration. +- Ensure Flatness: The calibration board must be perfectly flat, without any wrinkles or bends, to avoid introducing errors into the calibration process. + +## Calibrate your Camera + +- Take lots of photos: It's recommended to capture more than 50 images to calibrate your camera accurately. 12 is the bare minimum and may not provide good results. +- Other Tips + - Move the board, not the camera. + - Take photos from lots of angles: the more angles, the better (up to 45 degrees). + - A couple of up-close images are good. + - Cover the camera's entire FOV.
+ - Avoid images with the board facing straight towards the camera. diff --git a/docs/source/docs/quick-start/common-setups.md b/docs/source/docs/quick-start/common-setups.md new file mode 100644 index 0000000000..ed8e72b642 --- /dev/null +++ b/docs/source/docs/quick-start/common-setups.md @@ -0,0 +1,44 @@ +# Common Hardware Setups + +## Coprocessors + +:::{note} +The Orange Pi 5 is the only currently supported device for object detection. +::: + +- Orange Pi 5 4GB + - Able to process two object detection streams at once while also processing 1 to 2 AprilTag streams at 1280x800 (30fps). +- Raspberry Pi 5 2GB + - A good, cheaper option. Doesn't support object detection. Able to process 2 AprilTag streams at 1280x800 (30fps). + +## SD Cards + +- 8GB or larger micro SD card + - Many teams have found that industrial micro SD cards are much more stable in competition. One example is the SanDisk Industrial 16GB micro SD card. + +## Cameras + +- AprilTag + + - Innomaker or Arducam OV9281 UVC USB cameras. + +- Object Detection + + - The Arducam OV9782 works well with its global shutter. + - Most other fixed-focus color UVC USB webcams. + +- Driver Camera + - OV9281 + - OV9782 + - Pi Camera Module V1 {ref}`(More setup info)` + - Most other fixed-focus UVC USB webcams + +## Power + +- Pololu S13V30F5 Regulator + + - Wide input power range. Recommended by many teams. + +- Redux Robotics Zinc-V Regulator + + - Recently released for the 2025 season, offering reliable and easy integration.
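The caliper measurement recommended in the calibration section above matters because printers often scale pages slightly. Assuming the scaling is uniform, the true marker size can be inferred from the measured square size. The function below is an illustrative sketch of that arithmetic, not part of PhotonVision; the 1.00in/0.75in defaults are the suggested board values:

```python
def corrected_marker_size(measured_square: float,
                          nominal_square: float = 1.00,
                          nominal_marker: float = 0.75) -> float:
    """Given a caliper measurement of one printed square, infer the true
    marker size under a uniform printer-scaling assumption. Defaults match
    the 1.00in square / 0.75in marker suggested above."""
    scale = measured_square / nominal_square
    return round(nominal_marker * scale, 4)


# If the squares measure 0.98in, the markers are 0.98 * 0.75 = 0.735in.
print(corrected_marker_size(0.98))
```

Enter the measured square size and the corrected marker size into the calibration UI rather than the nominal values.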
diff --git a/docs/source/docs/quick-start/images/OrangePiPololu.png b/docs/source/docs/quick-start/images/OrangePiPololu.png new file mode 100644 index 0000000000..54336f2fce Binary files /dev/null and b/docs/source/docs/quick-start/images/OrangePiPololu.png differ diff --git a/docs/source/docs/quick-start/images/OrangePiPololuPigtail.png b/docs/source/docs/quick-start/images/OrangePiPololuPigtail.png new file mode 100644 index 0000000000..172ba745b0 Binary files /dev/null and b/docs/source/docs/quick-start/images/OrangePiPololuPigtail.png differ diff --git a/docs/source/docs/quick-start/images/OrangePiZinc.png b/docs/source/docs/quick-start/images/OrangePiZinc.png new file mode 100644 index 0000000000..9dcc106645 Binary files /dev/null and b/docs/source/docs/quick-start/images/OrangePiZinc.png differ diff --git a/docs/source/docs/quick-start/images/OrangePiZincUSBC.png b/docs/source/docs/quick-start/images/OrangePiZincUSBC.png new file mode 100644 index 0000000000..0bffbc53e7 Binary files /dev/null and b/docs/source/docs/quick-start/images/OrangePiZincUSBC.png differ diff --git a/docs/source/docs/quick-start/images/RPiPololu.png b/docs/source/docs/quick-start/images/RPiPololu.png new file mode 100644 index 0000000000..dd1ae28015 Binary files /dev/null and b/docs/source/docs/quick-start/images/RPiPololu.png differ diff --git a/docs/source/docs/quick-start/images/RPiPololuPigtail.png b/docs/source/docs/quick-start/images/RPiPololuPigtail.png new file mode 100644 index 0000000000..10cc1151a1 Binary files /dev/null and b/docs/source/docs/quick-start/images/RPiPololuPigtail.png differ diff --git a/docs/source/docs/quick-start/images/RPiZinc.png b/docs/source/docs/quick-start/images/RPiZinc.png new file mode 100644 index 0000000000..f85530ed5e Binary files /dev/null and b/docs/source/docs/quick-start/images/RPiZinc.png differ diff --git a/docs/source/docs/quick-start/images/RPiZincUSBC.png b/docs/source/docs/quick-start/images/RPiZincUSBC.png new file mode 100644 index 
0000000000..ea66a1cb81 Binary files /dev/null and b/docs/source/docs/quick-start/images/RPiZincUSBC.png differ diff --git a/docs/source/docs/quick-start/images/editCameraName.png b/docs/source/docs/quick-start/images/editCameraName.png new file mode 100644 index 0000000000..ccadcb4869 Binary files /dev/null and b/docs/source/docs/quick-start/images/editCameraName.png differ diff --git a/docs/source/docs/quick-start/images/editHostname.png b/docs/source/docs/quick-start/images/editHostname.png new file mode 100644 index 0000000000..eb897d7935 Binary files /dev/null and b/docs/source/docs/quick-start/images/editHostname.png differ diff --git a/docs/source/docs/quick-start/images/motionblur.png b/docs/source/docs/quick-start/images/motionblur.png new file mode 100644 index 0000000000..ed6cb4c49b Binary files /dev/null and b/docs/source/docs/quick-start/images/motionblur.png differ diff --git a/docs/source/docs/quick-start/images/networking-diagram-vividhosting.png b/docs/source/docs/quick-start/images/networking-diagram-vividhosting.png new file mode 100644 index 0000000000..a66d8f4051 Binary files /dev/null and b/docs/source/docs/quick-start/images/networking-diagram-vividhosting.png differ diff --git a/docs/source/docs/quick-start/images/networking-diagram.png b/docs/source/docs/quick-start/images/networking-diagram.png new file mode 100644 index 0000000000..733c301c6e Binary files /dev/null and b/docs/source/docs/quick-start/images/networking-diagram.png differ diff --git a/docs/source/docs/quick-start/images/setArducamModel.png b/docs/source/docs/quick-start/images/setArducamModel.png new file mode 100644 index 0000000000..8f42120042 Binary files /dev/null and b/docs/source/docs/quick-start/images/setArducamModel.png differ diff --git a/docs/source/docs/installation/images/static.png b/docs/source/docs/quick-start/images/static.png similarity index 100% rename from docs/source/docs/installation/images/static.png rename to 
docs/source/docs/quick-start/images/static.png diff --git a/docs/source/docs/quick-start/index.md b/docs/source/docs/quick-start/index.md new file mode 100644 index 0000000000..e9cce5df11 --- /dev/null +++ b/docs/source/docs/quick-start/index.md @@ -0,0 +1,13 @@ +# Quick Start + +```{toctree} +:maxdepth: 2 + +common-setups +quick-install +wiring +networking +arducam-cameras +camera-calibration +quick-configure +``` diff --git a/docs/source/docs/installation/networking.md b/docs/source/docs/quick-start/networking.md similarity index 60% rename from docs/source/docs/installation/networking.md rename to docs/source/docs/quick-start/networking.md index efe8762c1e..1292396be2 100644 --- a/docs/source/docs/installation/networking.md +++ b/docs/source/docs/quick-start/networking.md @@ -2,28 +2,53 @@ ## Physical Networking -:::{note} -When using PhotonVision off robot, you *MUST* plug the coprocessor into a physical router/radio. You can then connect your laptop/device used to view the webdashboard to the same network. Any other networking setup will not work and will not be supported in any capacity. +:::{warning} +When using PhotonVision off robot, you _MUST_ plug the coprocessor into a physical router/radio. You can then connect your laptop/device used to view the webdashboard to the same network. Any other networking setup will not work and will not be supported in any capacity. ::: -After imaging your coprocessor, run an ethernet cable from your coprocessor to a router/radio and power on your coprocessor by plugging it into the wall. Then connect whatever device you're using to view the webdashboard to the same network and navigate to photonvision.local:5800. +::::{tab-set} + +:::{tab-item} New Radio (2025 - present) + +```{danger} +Ensure that DIP switches 1 and 2 are turned off; otherwise, the radio PoE feature will fry your coprocessor. 
[More info.](https://frc-radio.vivid-hosting.net/getting-started/passive-power-over-ethernet-poe-for-downstream-devices) +``` + +```{image} images/networking-diagram-vividhosting.png +:alt: Wiring using a network switch and the new Vivid-Hosting radio +``` + +::: -PhotonVision *STRONGLY* recommends the usage of a network switch on your robot. This is because the second radio port on the current FRC radios is known to be buggy and cause frequent connection issues that are detrimental during competition. An in-depth guide on how to install a network switch can be found [on FRC 900's website](https://team900.org/blog/ZebraSwitch/). +:::{tab-item} Old Radio (pre 2025) + +PhotonVision _STRONGLY_ recommends the usage of a network switch on your robot. This is because the second radio port on the old FRC radios is known to be buggy and cause frequent connection issues that are detrimental during competition. An in-depth guide on how to install a network switch can be found [on FRC 900's website](https://zebracorns.org/blog/ZebraSwitch/). ```{image} images/networking-diagram.png -:alt: Correctly set static IP +:alt: Wiring using a network switch and the old Open Mesh radio +``` + +::: +:::: + +## Network Hostname + +Rename each device from the default "Photonvision" to a unique hostname (e.g., "Photon-OrangePi-Left" or "Photon-RPi5-Back"). This helps differentiate multiple coprocessors on your network, making them easier to manage. Navigate to the settings page and scroll down to the network section. You will find the hostname is set to "photonvision" by default; a hostname can only contain letters (A-Z), numeric characters (0-9), and the minus sign (-). + +```{image} images/editHostname.png +:alt: The hostname can be edited in the settings page under the network section. ``` ## Digital Networking -PhotonVision *STRONGLY* recommends the usage of Static IPs as it increases reliability on the field and when using PhotonVision in general. 
To properly set up your static IP, follow the steps below: +PhotonVision _STRONGLY_ recommends the usage of Static IPs as it increases reliability on the field and when using PhotonVision in general. To properly set up your static IP, follow the steps below: :::{warning} Only use a static IP when connected to the **robot radio**, and never when testing at home, unless you are well versed in networking or have the relevant "know how". ::: 1. Ensure your robot is on and you are connected to the robot network. -2. Navigate to `photonvision.local:5800` (this may be different if you are using a Gloworm / Limelight) in your browser. +2. Navigate to `photonvision.local:5800` in your browser. 3. Open the settings tab on the left pane. 4. Under the Networking section, set your team number. 5. Change your IP to Static. diff --git a/docs/source/docs/quick-start/quick-configure.md b/docs/source/docs/quick-start/quick-configure.md new file mode 100644 index 0000000000..c55b7a0a58 --- /dev/null +++ b/docs/source/docs/quick-start/quick-configure.md @@ -0,0 +1,57 @@ +# Quick Configure + +## Settings to configure + +### Team number + +In order for PhotonVision to connect to the roboRIO, it needs to know your team number. + +### Camera Nickname + +You **must** nickname your cameras in PhotonVision to ensure that every camera has a unique name. This is how we will identify cameras in robot code. The camera can be nicknamed using the edit button next to the camera name in the upper right of the Dashboard tab. + +```{image} images/editCameraName.png +:align: center +``` + +## Pipeline Settings + +### AprilTag + +When using an Orange Pi 5 with an Arducam OV9281, teams will usually change the following settings. For more info on AprilTag settings, please review {ref}`this`. + +- Resolution: + - 1280x800 +- Decimate: + - 2 +- Mode: + - 3D +- Exposure and Gain: + - Adjust these to achieve good brightness without flicker and low motion blur.
This may vary based on lighting conditions in your competition environment. +- Enable MultiTag +- Set the Arducam-specific camera type selector to OV9281 + +#### AprilTags, Motion Blur, and Rolling Shutter + +When detecting AprilTags, it's important to minimize 'motion blur' as much as possible. Motion blur appears as visual streaking or smearing in the camera feed, resulting from the movement of either the camera or the object in focus. Reducing this effect is essential, as the robot is often in motion, and a clearer image allows for detecting as many tags as possible. This is not to be confused with {ref}`rolling shutter`. + +- Fixes + - Lower your exposure as much as possible, using gain and brightness to compensate for the reduced image brightness. +- Other Options: + - Don't use/rely on vision measurements while moving. + +```{image} images/motionblur.png +:align: center +``` + +### Object Detection + +When using an Orange Pi 5 with an OV9782, teams will usually change the following settings. For more info on object detection settings, please review {ref}`this`. + +- Resolution: + - Resolutions higher than 640x640 may not result in any higher detection accuracy and may lower {ref}`performance`. +- Confidence: + - 0.75 - 0.95. Lower values are for detecting worn or less ideal game pieces; higher values for less worn, more ideal game pieces. +- White Balance Temperature: + - Adjust this to achieve better color accuracy. This may be needed to increase confidence. +- Set the Arducam-specific camera type selector to OV9782 diff --git a/docs/source/docs/quick-start/quick-install.md b/docs/source/docs/quick-start/quick-install.md new file mode 100644 index 0000000000..1b8e9cae6b --- /dev/null +++ b/docs/source/docs/quick-start/quick-install.md @@ -0,0 +1,38 @@ +# Quick Install + +## Install the latest image of PhotonVision for your coprocessor + +- For the supported coprocessors + - Raspberry Pi 3, 4, 5 + - Orange Pi 5 + - Limelight + +For installing on non-supported devices {ref}`see.
` +[Download the latest preconfigured image of PhotonVision for your coprocessor](https://github.com/PhotonVision/photonvision/releases/latest) + +| Coprocessor | Image filename | Jar | +| -------------------- | ---------------------------------------------------- | ------------------------------------- | +| OrangePi 5 | photonvision-{version}-linuxarm64_orangepi5.img.xz | photonvision-{version}-linuxarm64.jar | +| Raspberry Pi 3, 4, 5 | photonvision-{version}-linuxarm64_RaspberryPi.img.xz | photonvision-{version}-linuxarm64.jar | +| Limelight 2 | photonvision-{version}-linuxarm64_limelight2.img.xz | photonvision-{version}-linuxarm64.jar | +| Limelight 3 | photonvision-{version}-linuxarm64_limelight3.img.xz | photonvision-{version}-linuxarm64.jar | + +:::{warning} +Balena Etcher 1.18.11 is a known working version. Other versions may cause issues such as bootlooping (the system will repeatedly boot and restart) when imaging your device. +::: + +Use the 1.18.11 version of [Balena Etcher](https://github.com/balena-io/etcher/releases/tag/v1.18.11) to flash the image onto the coprocessor's microSD card. Select the downloaded `.img.xz` file, select your microSD card, and flash. + +Limelights have a different installation process. Simply connect the Limelight to your computer using the proper USB cable. Select the compute module. If it doesn't show up after 30 seconds, try another USB port; initialization may take a while. If prompted, install the recommended missing drivers. Select the image, and flash. + +Unless otherwise noted in the release notes, or unless you are updating from the prior year's version, update PhotonVision after the initial installation by using the offline update option in the settings page with the jar file downloaded from the latest release. + +:::{note} +Limelight 2, 2+, and 3 will need a [custom hardware config file](https://github.com/PhotonVision/photonvision/tree/master/docs/source/docs/advanced-installation/sw_install/files) for lighting to work.
Currently only Limelight 2 and 2+ files are available. +::: + +:::{note} +Raspberry Pi installations may also use the [Raspberry Pi Imager](https://www.raspberrypi.com/software/) to flash the image. + +::: diff --git a/docs/source/docs/quick-start/wiring.md b/docs/source/docs/quick-start/wiring.md new file mode 100644 index 0000000000..001e6bf873 --- /dev/null +++ b/docs/source/docs/quick-start/wiring.md @@ -0,0 +1,93 @@ +# Wiring + +## Coprocessor with regulator + +1. **IT IS STRONGLY RECOMMENDED** to use one of the recommended power regulators to prevent vision from cutting out due to voltage drops while operating the robot. We recommend wiring the regulator directly to the power header pins or using a locking USB C cable. In any case, we recommend hot-gluing the connector. + +2. Run an Ethernet cable from your Pi to your network switch / radio. + +This diagram shows how to use the recommended regulator to power a coprocessor. + +::::{tab-set} + +:::{tab-item} Orange Pi 5 Zinc V USB C + +```{image} images/OrangePiZincUSBC.png +:alt: Wiring the opi5 to the pdp using the Redux Robotics Zinc V and usb c +``` + +::: + +:::{tab-item} Orange Pi 5 Zinc V + +```{image} images/OrangePiZinc.png +:alt: Wiring the opi5 to the pdp using the Redux Robotics Zinc V +``` + +::: + +:::{tab-item} Orange Pi 5 Pololu S13V30F5 + +```{image} images/OrangePiPololu.png +:alt: Wiring the opi5 to the pdp using the Pololu S13V30F5 +``` + +::: + +:::{tab-item} Orange Pi 5 Pololu S13V30F5 Pigtail + +```{image} images/OrangePiPololuPigtail.png +:alt: Wiring the opi5 to the pdp using the Pololu S13V30F5 and a usb c pigtail +``` + +::: + +:::{tab-item} Raspberry Pi 5 Zinc V USB C + +```{image} images/RPiZincUSBC.png +:alt: Wiring the RPI5 to the pdp using the Redux Robotics Zinc V and usb c +``` + +::: + +:::{tab-item} Raspberry Pi 5 Zinc V + +```{image} images/RPiZinc.png +:alt: Wiring the RPI5 to the pdp using the Redux Robotics Zinc V +``` + +::: + +:::{tab-item} Raspberry Pi 5 Pololu S13V30F5 + 
+```{image} images/RPiPololu.png +:alt: Wiring the RPI5 to the pdp using the Pololu S13V30F5 +``` + +::: + +:::{tab-item} Raspberry Pi 5 Pololu S13V30F5 Pigtail + +```{image} images/RPiPololuPigtail.png +:alt: Wiring the RPI5 to the pdp using the Pololu S13V30F5 and a usb c pigtail +``` + +::: + +:::: + +Pigtails can be purchased from many sources; we recommend [(USB C)](https://ctr-electronics.com/products/usb-type-c-wire-breakout?_pos=19&_sid=bf06b6a6b&_ss=r) and [(Micro USB)](https://ctr-electronics.com/products/usb-micro-power-wire-breakout?pr_prod_strat=e5_desc&pr_rec_id=10bf36ce7&pr_rec_pid=7863771070637&pr_ref_pid=7863771103405&pr_seq=uniform) + +## Coprocessor with Passive POE (Pi with SnakeEyes and Limelight) + +1. Plug the [passive POE injector](https://www.revrobotics.com/rev-11-1210/) into the coprocessor and wire it to the PDP/PDH (NOT the VRM). +2. Add a breaker to the relevant slot in your PDP/PDH. +3. Run an Ethernet cable from the passive POE injector to your network switch / radio. + +## Off-Robot Wiring + +Plugging your coprocessor into the wall via a power brick will suffice for off-robot wiring. + +:::{note} +Please make sure your chosen power supply can provide enough power for your coprocessor. Undervolting (where enough power isn't being supplied) can cause many issues. +::: diff --git a/docs/source/docs/reflectiveAndShape/3D.md b/docs/source/docs/reflectiveAndShape/3D.md index db0a745a48..e3b7180a9f 100644 --- a/docs/source/docs/reflectiveAndShape/3D.md +++ b/docs/source/docs/reflectiveAndShape/3D.md @@ -17,6 +17,6 @@ If solvePNP is working correctly, the target should be displayed as a small rect ``` -## Contour Simplification (Non-Apriltag) +## Contour Simplification (Non-AprilTag) 3D mode internally computes a polygon that approximates the target contour being tracked. This polygon is used to detect the extreme corners of the target. The contour simplification slider changes how far from the original contour the approximation is allowed to deviate.
Note that the approximate polygon is drawn on the output image for tuning. diff --git a/docs/source/docs/simulation/hardware-in-the-loop-sim.md b/docs/source/docs/simulation/hardware-in-the-loop-sim.md index 5897d0b910..d3735f452c 100644 --- a/docs/source/docs/simulation/hardware-in-the-loop-sim.md +++ b/docs/source/docs/simulation/hardware-in-the-loop-sim.md @@ -2,7 +2,7 @@ Hardware in the loop simulation involves using a physical device, such as a supported co-processor running PhotonVision, to enhance simulation capabilities. This is useful for developing and validating code before the camera is attached to a robot, as well as reducing the work required to use WPILib simulation with PhotonVision. -Before continuing, ensure PhotonVision is installed on your device. Instructions can be found {ref}`here ` for all devices. +Before continuing, ensure PhotonVision is installed on your device. Instructions can be found {ref}`here ` for all devices. Your coprocessor and computer running simulation will have to be connected to the same network, like a home router. Connecting the coprocessor directly to the computer will not work. @@ -26,9 +26,11 @@ Ethernet adapter Ethernet: Subnet Mask . . . . . . . . . . . : 255.255.255.0 Default Gateway . . . . . . . . . : 192.168.254.254 ``` + ::: ```{image} images/coproc-client-to-desktop-sim.png + ``` No code changes are required; PhotonLib should function similarly to normal operation. @@ -36,4 +38,5 @@ No code changes are required, PhotonLib should function similarly to normal oper Now launch simulation, and you should be able to see the PhotonVision table on your simulation's NetworkTables dashboard.
```{image} images/hardware-in-the-loop-sim.png + ``` diff --git a/docs/source/docs/simulation/simulation-java.md b/docs/source/docs/simulation/simulation-java.md index 7b4f99da97..3e1a612eeb 100644 --- a/docs/source/docs/simulation/simulation-java.md +++ b/docs/source/docs/simulation/simulation-java.md @@ -1,6 +1,5 @@ # Simulation Support in PhotonLib in Java - ## What Is Simulated? Simulation is a powerful tool for validating robot code without access to a physical robot. Read more about [simulation in WPILib](https://docs.wpilib.org/en/stable/docs/software/wpilib-tools/robot-simulation/introduction.html). @@ -8,18 +7,18 @@ Simulation is a powerful tool for validating robot code without access to a phys In Java, PhotonLib can simulate cameras on the field and generate target data approximating what would be seen in reality. This simulation attempts to include the following: - Camera Properties - - Field of Vision - - Lens distortion - - Image noise - - Framerate - - Latency + - Field of Vision + - Lens distortion + - Image noise + - Framerate + - Latency - Target Data - - Detected / minimum-area-rectangle corners - - Center yaw/pitch - - Contour image area percentage - - Fiducial ID - - Fiducial ambiguity - - Fiducial solvePNP transform estimation + - Detected / minimum-area-rectangle corners + - Center yaw/pitch + - Contour image area percentage + - Fiducial ID + - Fiducial ambiguity + - Fiducial solvePNP transform estimation - Camera Raw/Processed Streams (grayscale) :::{note} @@ -29,7 +28,7 @@ Simulation does NOT include the following: - Image Thresholding Process (camera gain, brightness, etc) - Pipeline switching - Snapshots -::: + ::: This scope was chosen to balance fidelity of the simulation with the ease of setup, in a way that would best benefit most teams. 
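The simulated camera properties listed above include framerate and latency. As a rough sketch of why the reported latency matters to consumers of vision results (the helper below is purely illustrative, not part of the PhotonLib API), a capture timestamp can be recovered by subtracting the pipeline latency from the receive time:

```python
def capture_timestamp(receive_time_s: float, latency_ms: float) -> float:
    """Estimate when a frame was captured: receive time minus reported pipeline latency."""
    return receive_time_s - latency_ms / 1000.0


# A result received at t=12.000 s with 35 ms of simulated latency
# describes where targets were at roughly t=11.965 s.
print(capture_timestamp(12.0, 35.0))
```

PhotonLib results expose a similar corrected timestamp; the point here is only the arithmetic, which is why latency-compensated pose estimation needs the latency value the simulation generates.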
@@ -226,7 +225,7 @@ Each `VisionSystemSim` has its own built-in `Field2d` for displaying object pose ``` :::{figure} images/SimExampleField.png -*A* `VisionSystemSim`*'s internal* `Field2d` *customized with target images and colors* +_A_ `VisionSystemSim`_'s internal_ `Field2d` _customized with target images and colors_ ::: A `PhotonCameraSim` can also draw and publish generated camera frames to a MJPEG stream similar to an actual PhotonVision process. @@ -245,8 +244,8 @@ A `PhotonCameraSim` can also draw and publish generated camera frames to a MJPEG cameraSim.enableDrawWireframe(true); ``` -These streams follow the port order mentioned in {ref}`docs/installation/networking:Camera Stream Ports`. For example, a single simulated camera will have its raw stream at `localhost:1181` and processed stream at `localhost:1182`, which can also be found in the CameraServer tab of Shuffleboard like a normal camera stream. +These streams follow the port order mentioned in {ref}`docs/quick-start/networking:Camera Stream Ports`. For example, a single simulated camera will have its raw stream at `localhost:1181` and processed stream at `localhost:1182`, which can also be found in the CameraServer tab of Shuffleboard like a normal camera stream. :::{figure} images/SimExampleFrame.png -*A frame from the processed stream of a simulated camera viewing some 2023 AprilTags with the field wireframe enabled* +_A frame from the processed stream of a simulated camera viewing some 2023 AprilTags with the field wireframe enabled_ ::: diff --git a/docs/source/docs/troubleshooting/common-errors.md b/docs/source/docs/troubleshooting/common-errors.md index 5addeafbc2..217d4a6171 100644 --- a/docs/source/docs/troubleshooting/common-errors.md +++ b/docs/source/docs/troubleshooting/common-errors.md @@ -26,7 +26,7 @@ Please refer to our comprehensive {ref}`networking troubleshooting tips `. 
-If you are using a USB camera, it is possible your USB Camera isn't supported by CSCore and therefore won't work with PhotonVision. See {ref}`supported hardware page for more information `, or the above Camera Troubleshooting page for more information on determining this locally. +If you are using a USB camera, it is possible your USB Camera isn't supported by CSCore and therefore won't work with PhotonVision. ### Camera is consistently returning incorrect values when in 3D mode diff --git a/docs/source/docs/troubleshooting/networking-troubleshooting.md b/docs/source/docs/troubleshooting/networking-troubleshooting.md index d02520aae4..2600015a1f 100644 --- a/docs/source/docs/troubleshooting/networking-troubleshooting.md +++ b/docs/source/docs/troubleshooting/networking-troubleshooting.md @@ -1,24 +1,24 @@ # Networking Troubleshooting -Before reading further, ensure that you follow all the recommendations {ref}`in our networking section `. You should follow these guidelines in order for PhotonVision to work properly; other networking setups are not officially supported. +Before reading further, ensure that you follow all the recommendations {ref}`in our networking section `. You should follow these guidelines in order for PhotonVision to work properly; other networking setups are not officially supported. ## Checklist A few issues make up the majority of support requests. Run through this checklist quickly to catch some common mistakes. -- Is your camera connected to the robot's radio through a {ref}`network switch `? - - Ethernet straight from a laptop to a coprocessor will not work (most likely), due to the unreliability of link-local connections. - - Even if there's a switch between your laptop and coprocessor, you'll still want a radio or router in the loop somehow. - - The FRC radio is the *only* router we will officially support due to the innumerable variations between routers. 
+- Is your camera connected to the robot's radio through a {ref}`network switch `? + - Ethernet straight from a laptop to a coprocessor will not work (most likely), due to the unreliability of link-local connections. + - Even if there's a switch between your laptop and coprocessor, you'll still want a radio or router in the loop somehow. + - The FRC radio is the _only_ router we will officially support due to the innumerable variations between routers. - (Raspberry Pi, Orange Pi & Limelight only) have you flashed the correct image, and is it up to date? - - Limelights 2/2+ and Gloworms should be flashed using the Limelight 2 image (eg, `photonvision-v2024.2.8-linuxarm64_limelight2.img.xz`). - - Limelights 3 should be flashed using the Limelight 3 image (eg, `photonvision-v2024.2.8-linuxarm64_limelight3.img.xz`). - - Raspberry Pi devices (including Pi 3, Pi 4, CM3 and CM4) should be flashed using the Raspberry Pi image (eg, `photonvision-v2024.2.8-linuxarm64_RaspberryPi.img.xz`). - - Orange Pi 5 devices should be flashed using the Orange Pi 5 image (eg, `photonvision-v2024.2.8-linuxarm64_orangepi5.img.xz`). - - Orange Pi 5+ devices should be flashed using the Orange Pi 5+ image (eg, `photonvision-v2024.2.8-linuxarm64_orangepi5plus.img.xz`). + - Limelights 2/2+ should be flashed using the Limelight 2 image (eg, `photonvision-v2024.2.8-linuxarm64_limelight2.img.xz`). + - Limelights 3 should be flashed using the Limelight 3 image (eg, `photonvision-v2024.2.8-linuxarm64_limelight3.img.xz`). + - Raspberry Pi devices (including Pi 3, Pi 4, CM3 and CM4) should be flashed using the Raspberry Pi image (eg, `photonvision-v2024.2.8-linuxarm64_RaspberryPi.img.xz`). + - Orange Pi 5 devices should be flashed using the Orange Pi 5 image (eg, `photonvision-v2024.2.8-linuxarm64_orangepi5.img.xz`). + - Orange Pi 5+ devices should be flashed using the Orange Pi 5+ image (eg, `photonvision-v2024.2.8-linuxarm64_orangepi5plus.img.xz`). 
- Is your robot code using a **2024** version of WPILib, and is your coprocessor using the most up to date **2024** release? - - 2022, 2023 and 2024 versions of either cannot be mix-and-matched! - - Your PhotonVision version can be checked on the {ref}`settings tab`. + - 2022, 2023 and 2024 versions of either cannot be mix-and-matched! + - Your PhotonVision version can be checked on the {ref}`settings tab`. - Is your team number correctly set on the {ref}`settings tab`? ### photonvision.local Not Found diff --git a/docs/source/index.md b/docs/source/index.md index f624ad155e..2db9cc0379 100644 --- a/docs/source/index.md +++ b/docs/source/index.md @@ -2,50 +2,62 @@ :alt: PhotonVision ``` -Welcome to the official documentation of PhotonVision! PhotonVision is the free, fast, and easy-to-use vision processing solution for the *FIRST* Robotics Competition. PhotonVision is designed to get vision working on your robot *quickly*, without the significant cost of other similar solutions. PhotonVision supports a variety of COTS hardware, including the Raspberry Pi 3 and 4, the [Gloworm smart camera](https://photonvision.github.io/gloworm-docs/docs/quickstart/#finding-gloworm), the [SnakeEyes Pi hat](https://www.playingwithfusion.com/productview.php?pdid=133), and the Orange Pi 5. +Welcome to the official documentation of PhotonVision! PhotonVision is the free, fast, and easy-to-use vision processing solution for the _FIRST_ Robotics Competition. PhotonVision is designed to get vision working on your robot _quickly_, without the significant cost of other similar solutions. PhotonVision supports a variety of COTS hardware, including the Raspberry Pi 3, 4, and 5, the [SnakeEyes Pi hat](https://www.playingwithfusion.com/productview.php?pdid=133), and the Orange Pi 5. # Content ```{eval-rst} .. grid:: 2 - .. grid-item-card:: Getting Started - :link: docs/installation/index + .. 
grid-item-card:: Quick Start + :link: docs/quick-start/index :link-type: doc - Get started with installing PhotonVision, creating a pipeline, and tuning it for usage in competitions. + Quick start to using PhotonVision. - .. grid-item-card:: Programming Reference and PhotonLib - :link: docs/programming/index + .. grid-item-card:: Advanced Installation + :link: docs/advanced-installation/index :link-type: doc - Learn more about PhotonLib, our vendor dependency which makes it easier for teams to retrieve vision data, make various calculations, and more. + Get started with installing PhotonVision on non-supported hardware. + ``` ```{eval-rst} .. grid:: 2 + .. grid-item-card:: Programming Reference and PhotonLib + :link: docs/programming/index + :link-type: doc + + Learn more about PhotonLib, our vendor dependency which makes it easier for teams to retrieve vision data, make various calculations, and more. + .. grid-item-card:: Integration :link: docs/integration/index :link-type: doc Pick how to use vision processing results to control a physical robot. +``` + +```{eval-rst} +.. grid:: 2 + .. grid-item-card:: Code Examples :link: docs/examples/index :link-type: doc View various step by step guides on how to use data from PhotonVision in your code, along with game-specific examples. -``` - -```{eval-rst} -.. grid:: 2 .. grid-item-card:: Hardware :link: docs/hardware/index :link-type: doc Select appropriate hardware for high-quality and easy vision target detection. +``` + +```{eval-rst} +.. grid:: 2 .. grid-item-card:: Contributing :link: docs/contributing/index @@ -77,8 +89,9 @@ PhotonVision is licensed under the [GNU GPL v3](https://www.gnu.org/licenses/gpl :maxdepth: 0 docs/description +docs/quick-start/index docs/hardware/index -docs/installation/index +docs/advanced-installation/index docs/settings ```
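One loose end from the networking quick-start above: the hostname rule (letters A-Z, digits 0-9, and the minus sign only) is easy to check before typing a name into the settings page. A minimal Python sketch of that constraint follows; the helper is illustrative and not part of PhotonVision, and the extra no-leading/trailing-hyphen restriction is an assumption borrowed from common RFC 952/1123 hostname practice rather than something the settings page states:

```python
import re

# Letters, digits, and '-' only, per the settings-page rule; additionally
# require that the name not begin or end with a hyphen (RFC-style assumption).
_HOSTNAME_RE = re.compile(r"^[A-Za-z0-9](?:[A-Za-z0-9-]*[A-Za-z0-9])?$")


def is_valid_hostname(name: str) -> bool:
    """Return True if the nickname satisfies the letters/digits/hyphen rule."""
    return bool(_HOSTNAME_RE.fullmatch(name))


print(is_valid_hostname("Photon-OrangePi-Left"))  # True: a valid unique nickname
print(is_valid_hostname("photon_vision"))         # False: underscores are rejected
```

A quick check like this explains why names such as `Photon-RPi5-Back` are accepted while anything containing spaces or underscores is not.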