A production-ready toolkit for real-time YOLO detection, multi-object tracking, and interactive object locking with TensorRT acceleration. Optimized for NVIDIA Jetson devices and discrete GPUs, with first-class support for the Jetson ARGUS camera via GStreamer.
- TensorRT-accelerated YOLO inference (Ultralytics)
- Multi-object tracking with BoT-SORT/ByteTrack
- Interactive object locking (mouse click, class-based, or ID-based)
- Real-time overlays: FPS, status, track trails
- Jetson ARGUS camera pipelines with multiple fallbacks
- Video/file stream support and optional saving of annotated video and tracks
- `yolo_track_lock.py`: Main script for detection, tracking, and interactive object locking across video files, streams, and the Jetson camera.
- `export_tensorrt_engine.py`: Exports a YOLO model to a TensorRT `.engine` with conservative, Jetson-friendly settings.
- `jetson_camera_inference.py`: Minimal example that runs YOLO TensorRT inference on the Jetson ARGUS camera, with performance overlays.
- `jetson_camera_test.py`: Quick camera diagnostics to find a working GStreamer pipeline for the ARGUS camera.
- `requirements.txt`: Pinned dependencies for x86_64 and Jetson (with platform markers).
```bash
# Clone the repository
git clone https://github.com/ammarmalik17/Yolo-TensorRT-Tracking-and-Jetson-Camera-Toolkit.git
cd Yolo-TensorRT-Tracking-and-Jetson-Camera-Toolkit

# Create a virtual environment (recommended)
python -m venv .venv
source .venv/bin/activate

# Install dependencies
pip install --upgrade pip
pip install -r requirements.txt
```

Notes for Jetson:
- Prefer system OpenCV with GStreamer (provided by JetPack). The requirements file avoids pip-OpenCV on aarch64.
- Install TensorRT and CUDA via JetPack. The Python package `nvidia-tensorrt` is generally not needed on Jetson.
- Tested on JetPack 6.2 (L4T r36.x) on Jetson Orin NX.
For faster, more reliable installs on Jetson, prefer prebuilt wheels from the Jetson AI Lab PyPI mirror. It hosts wheels tailored to specific JetPack (jp6/jp7) and CUDA variants (cu126, cu129, cu130), reducing build times and preventing ABI mismatches.
- Why use it:
  - Prebuilt, Jetson-compatible wheels (PyTorch/TorchVision/Ultralytics deps, etc.)
  - Avoids slow or failing source builds on-device
  - Ensures CUDA/TensorRT compatibility aligned with your JetPack
- Mirror: Jetson AI Lab PyPI mirror (index URLs like `https://pypi.jetson-ai-lab.io/jp6/cu130/+simple/`)
- Quick usage (choose the index matching your JetPack/CUDA, e.g., jp6 + cu130):

```bash
# Prefer the mirror but allow fallback to PyPI when a package isn't available
pip install --extra-index-url https://pypi.jetson-ai-lab.io/jp6/cu130/+simple/ -r requirements.txt
```

- Optional pip config (persistent): create `~/.config/pip/pip.conf` with:

```ini
[global]
extra-index-url = https://pypi.jetson-ai-lab.io/jp6/cu130/+simple/
```

Then run the usual install:

```bash
pip install -r requirements.txt
```
If you're unsure which index to use: jp6 corresponds to JetPack 6, jp7 to JetPack 7; pick the cuXXX that matches your CUDA (e.g., 12.6 → cu126, 12.9 → cu129, 13.0 → cu130). If a package isn't present on the mirror, pip will fall back to PyPI when using --extra-index-url.
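If it helps, you can read off both values on-device before choosing an index. This is a small sketch assuming standard JetPack file locations (`/etc/nv_tegra_release` and `/usr/local/cuda/version.json`); verify the output against the mapping above.

```python
# Print the L4T release and CUDA version to pick a jpX/cuXXX index (see mapping above).
# Assumes standard JetPack paths; adjust if your install differs.
import json
from pathlib import Path

print(Path("/etc/nv_tegra_release").read_text().splitlines()[0])   # e.g. "# R36 ..." -> L4T r36 -> jp6
cuda_info = json.loads(Path("/usr/local/cuda/version.json").read_text())
print(cuda_info["cuda"]["version"])                                 # e.g. "12.6.x" -> cu126
```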
Jetson best practices and tips:
- Enable MAX power mode and clocks for consistent benchmarking and maximum performance:
```bash
sudo nvpmodel -m 0
sudo jetson_clocks
```
- Monitor system and confirm JetPack details with jetson-stats:
```bash
sudo apt update
sudo pip install jetson-stats
sudo reboot
jtop
```
- Install Ultralytics, PyTorch, and TorchVision versions compatible with your JetPack release. Prefer NVIDIA/JetPack-provided wheels for PyTorch/TorchVision. TensorRT is bundled with JetPack.
- For best inference performance, convert to TensorRT and use FP16 when possible. INT8 requires calibration but can provide further speedups.
- Ensure OpenCV has GStreamer enabled for ARGUS camera and network streams (RTSP/RTMP/HTTP).
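To verify the last point quickly, you can inspect OpenCV's build info (a generic check, not specific to this repo):

```python
# Confirm OpenCV was built with GStreamer (required for ARGUS and RTSP/RTMP/HTTP streams).
import cv2

gst_lines = [line.strip() for line in cv2.getBuildInformation().splitlines() if "GStreamer" in line]
print(gst_lines)  # expect something like ['GStreamer:  YES (1.20.3)']
```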
You can use either a prebuilt TensorRT engine (.engine) or export one from a YOLO model (.pt).
```bash
python export_tensorrt_engine.py
```

This script loads `yolo11n.pt` and creates `yolo11n.engine` with static shapes and FP16, suitable for Jetson.
Place your engine file, e.g., best.engine or yolo11n.engine, in the repository root (or pass a path via --model).
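If you need different settings, the Ultralytics export API is the underlying mechanism. A minimal sketch (the exact arguments in `export_tensorrt_engine.py` may differ):

```python
# Sketch of a conservative TensorRT export with Ultralytics: static shapes, FP16.
# Settings shown here are illustrative; see export_tensorrt_engine.py for the ones the repo uses.
from ultralytics import YOLO

model = YOLO("yolo11n.pt")
model.export(
    format="engine",  # TensorRT
    imgsz=640,        # static input size
    half=True,        # FP16 precision
    device=0,         # build the engine on the target GPU
)
# Writes yolo11n.engine next to the .pt file.
```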
Multi-object tracking with interactive object locking.
```bash
python yolo_track_lock.py \
  --model yolo11n.engine \
  --source 0 \
  --conf-thres 0.25 \
  --iou-thres 0.7 \
  --imgsz 640 \
  --tracker botsort.yaml \
  --enable-lock \
  --show-tracks \
  --save-vid
```

Key arguments:
- `--model`: Path to the YOLO model (`.engine` preferred, `.pt` also supported)
- `--source`: Video file, directory, camera index (e.g., `0`), or stream URL (`rtsp://`, `http://`, `https://`, `rtmp://`)
- `--conf-thres`, `--iou-thres`, `--max-det`, `--agnostic-nms`, `--classes`
- `--imgsz H W` or a single value `--imgsz 640`; `--half`, `--batch`, `--visualize`, `--augment`, `--retina-masks`
- `--tracker botsort.yaml|bytetrack.yaml`
- `--output-dir`, `--save-vid`, `--save-tracks`
- `--track-history`, `--show-tracks`, `--track-thickness`
- `--device` (e.g., `0`, `cpu`), `--display`, `--verbose`, `--show-fps`
- Locking: `--enable-lock`, `--lock-id <int>`, `--lock-class <name|id>`, `--highlight-color R,G,B`
Source modes:
- Video file: path ending with `.mp4`, `.avi`, `.mov`, `.mkv`
- Jetson ARGUS camera: `--source camera`, `jetson`, or `camsrc=1`
- Generic camera/stream: numeric index (e.g., `0`) or URL starting with `rtsp://`, `http://`, `https://`, or `rtmp://`
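These options map onto Ultralytics' tracking API. As a rough sketch of the equivalent loop (illustrative only; `yolo_track_lock.py` adds locking, overlays, and saving on top):

```python
# Minimal Ultralytics tracking loop (sketch; not the script's full logic).
from ultralytics import YOLO

model = YOLO("yolo11n.engine")
for result in model.track(
    source=0,                # camera index, video file, or stream URL
    tracker="botsort.yaml",  # or "bytetrack.yaml"
    conf=0.25,
    iou=0.7,
    imgsz=640,
    persist=True,            # keep track IDs across frames
    stream=True,             # yield results frame by frame
):
    if result.boxes.id is not None:
        print(result.boxes.id.int().tolist())  # per-object track IDs
```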
Controls during runtime:
- `l`: toggle lock mode
- `c`: clear locked object
- `i`: enter a track ID (press Enter to confirm)
- Mouse click: lock object under cursor
- `p`/`Space`: pause/resume
- `s`: save current frame
- `q`: quit
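Click-to-lock amounts to a hit test between the click point and the current track boxes. An illustrative sketch (names here are hypothetical, not the script's internals):

```python
# Hypothetical click-to-lock hit test with an OpenCV mouse callback.
import cv2

state = {"boxes": {}, "locked_id": None}  # boxes: {track_id: (x1, y1, x2, y2)}, refreshed each frame

def on_mouse(event, x, y, flags, param):
    # Lock onto the first track whose box contains the click point.
    if event == cv2.EVENT_LBUTTONDOWN:
        for track_id, (x1, y1, x2, y2) in param["boxes"].items():
            if x1 <= x <= x2 and y1 <= y <= y2:
                param["locked_id"] = track_id
                break

cv2.namedWindow("tracking")
cv2.setMouseCallback("tracking", on_mouse, state)
```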
Examples:
- From a video file:

```bash
python yolo_track_lock.py --model yolo11n.engine --source sample.mp4 --save-vid --enable-lock
```

- From a USB camera:

```bash
python yolo_track_lock.py --model yolo11n.engine --source 0 --show-tracks --enable-lock
```

- From the Jetson ARGUS camera:

```bash
python yolo_track_lock.py --model yolo11n.engine --source jetson --enable-lock --display
```
Minimal camera inference with FPS overlays (TensorRT engine required):

```bash
python jetson_camera_inference.py
```

If loading fails, ensure `yolo11n.engine` exists or export it via `export_tensorrt_engine.py`.
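In spirit, the script runs a loop like the following (a sketch; the real script opens the ARGUS camera through GStreamer, and its overlay details differ):

```python
# Sketch of an inference loop with an FPS overlay.
import time

import cv2
from ultralytics import YOLO

model = YOLO("yolo11n.engine")
cap = cv2.VideoCapture(0)  # the real script uses a GStreamer ARGUS pipeline instead
prev = time.time()
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    annotated = model(frame, verbose=False)[0].plot()  # draw detections on the frame
    now = time.time()
    fps = 1.0 / max(now - prev, 1e-6)
    prev = now
    cv2.putText(annotated, f"FPS: {fps:.1f}", (10, 30),
                cv2.FONT_HERSHEY_SIMPLEX, 1.0, (0, 255, 0), 2)
    cv2.imshow("inference", annotated)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```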
Probe for a working GStreamer pipeline on Jetson and show a short preview:

```bash
python jetson_camera_test.py
```

On success, it prints a working pipeline and suggests running `jetson_camera_inference.py`.
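The probing boils down to trying candidate `nvarguscamerasrc` pipelines until one opens and yields a frame. A sketch with illustrative candidates (the script's actual list may differ):

```python
# Try candidate ARGUS GStreamer pipelines until one delivers a frame.
import cv2

candidates = [
    "nvarguscamerasrc ! video/x-raw(memory:NVMM), width=1280, height=720, framerate=30/1 ! "
    "nvvidconv ! video/x-raw, format=BGRx ! videoconvert ! video/x-raw, format=BGR ! appsink drop=1",
    "nvarguscamerasrc sensor-id=0 ! nvvidconv ! video/x-raw, format=BGRx ! "
    "videoconvert ! video/x-raw, format=BGR ! appsink",
]
for pipeline in candidates:
    cap = cv2.VideoCapture(pipeline, cv2.CAP_GSTREAMER)
    ok = cap.isOpened() and cap.read()[0]
    cap.release()
    if ok:
        print("Working pipeline:\n" + pipeline)
        break
else:
    print("No candidate pipeline worked; check the camera connection and OpenCV's GStreamer support.")
```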
Export a YOLO model to TensorRT with static shapes and FP16:

```bash
python export_tensorrt_engine.py
```

Outputs `yolo11n.engine` in the project root.
- Annotated video: `<source_name>_locked.mp4` (when `--save-vid`)
- Track data (CSV): `<source_name>_tracks.txt` (when `--save-tracks`)
- Run summary: `<source_name>_tracking_summary.txt`
- If OpenCV fails to open ARGUS pipelines on Jetson, ensure no `nvgstcapture-1.0` process is running and that OpenCV is built with GStreamer.
- If FPS is low, reduce `--imgsz`, increase `--vid-stride`, or use FP16 (`--half`).
- For trackers, ensure dependencies are installed (see `requirements.txt`; includes `lapx`, `scipy`).
- If `.engine` loading fails, regenerate the engine to match your device and TensorRT version.
Additional Jetson resources:
- Installing Ultralytics, PyTorch/TorchVision, and onnxruntime-gpu on specific JetPack versions
- TensorRT usage guidance (conversion and running), optional DLA usage, and best practices
- YOLO11 benchmark charts for multiple Jetson devices (AGX Orin, Orin NX, Orin Nano)
See the Ultralytics NVIDIA Jetson guide for details: [Ultralytics YOLO Docs – NVIDIA Jetson](https://docs.ultralytics.com/guides/nvidia-jetson/).
This repository is provided for research and production use. See source headers for details.
Acknowledgment: This project’s Jetson notes and recommendations were informed by the Ultralytics NVIDIA Jetson guide: https://docs.ultralytics.com/guides/nvidia-jetson/.