
Back | Next | Contents
Object Detection

Running the Live Camera Detection Demo

Up next we have a real-time object detection camera demo available for C++ and Python.

Similar to the previous detectnet-console example, these camera applications use detection networks, except that they process a live video feed from a camera. detectnet-camera accepts 4 optional command-line parameters:

  • --network flag setting the detection model (default is PedNet)
  • --camera flag setting the camera device to use
    • MIPI CSI cameras are used by specifying the sensor index (0 or 1, etc.)
    • V4L2 USB cameras are used by specifying their /dev/video node (/dev/video0, /dev/video1, etc.)
    • The default is to use MIPI CSI sensor 0 (--camera=0)
  • --width and --height flags setting the camera resolution (default is 1280x720)
    • The resolution should be set to a format that the camera supports.
    • Query the available formats with the following commands:
      $ sudo apt-get install v4l-utils
      $ v4l2-ctl --list-formats-ext

You can combine the usage of these flags as needed, and there are additional command-line parameters available for loading custom models. Launch the application with the --help flag to receive more info, or see the Examples readme.
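
For context, the program reduces to a short capture → detect → render loop. Below is a minimal sketch of that loop using the project's jetson.inference and jetson.utils Python bindings; the specific arguments (network name, camera string, resolution) simply mirror the defaults described above and are illustrative rather than exhaustive:

# Minimal sketch of the detectnet-camera.py pipeline, assuming the
# jetson.inference / jetson.utils Python bindings are installed.
import jetson.inference
import jetson.utils

net     = jetson.inference.detectNet("pednet", threshold=0.5)  # detection model (default PedNet)
camera  = jetson.utils.gstCamera(1280, 720, "0")               # MIPI CSI sensor 0; use "/dev/video0" for V4L2
display = jetson.utils.glDisplay()                             # OpenGL window for visualization

while display.IsOpen():
    img, width, height = camera.CaptureRGBA()     # grab the next frame from the camera
    detections = net.Detect(img, width, height)   # run inference and overlay bounding boxes
    display.RenderOnce(img, width, height)        # draw the frame to the window
    display.SetTitle("detectnet-camera | {:.0f} FPS".format(net.GetNetworkFPS()))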

Below are some typical scenarios for launching the program:

C++

$ ./detectnet-camera                          # using PedNet,  default MIPI CSI camera (1280x720)
$ ./detectnet-camera --network=facenet        # using FaceNet, default MIPI CSI camera (1280x720)
$ ./detectnet-camera --camera=/dev/video0     # using PedNet,  V4L2 camera /dev/video0 (1280x720)
$ ./detectnet-camera --width=640 --height=480 # using PedNet,  default MIPI CSI camera (640x480)

Python

$ ./detectnet-camera.py                          # using PedNet,  default MIPI CSI camera (1280x720)
$ ./detectnet-camera.py --network=facenet        # using FaceNet, default MIPI CSI camera (1280x720)
$ ./detectnet-camera.py --camera=/dev/video0     # using PedNet,  V4L2 camera /dev/video0 (1280x720)
$ ./detectnet-camera.py --width=640 --height=480 # using PedNet,  default MIPI CSI camera (640x480)
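
The flags can also be combined in a single invocation; for example, the following (illustrative) command runs FaceNet on a V4L2 camera at 640x480:

$ ./detectnet-camera.py --network=facenet --camera=/dev/video0 --width=640 --height=480   # FaceNet, V4L2 camera (640x480)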

note: for example cameras to use, see these sections of the Jetson Wiki:
             - Nano:    https://eLinux.org/Jetson_Nano#Cameras
             - Xavier:  https://eLinux.org/Jetson_AGX_Xavier#Ecosystem_Products_.26_Cameras
             - TX1/TX2: developer kits include an onboard MIPI CSI sensor module (OV5693)

Visualization

Displayed in the OpenGL window is the live camera stream, overlaid with the bounding boxes of the detected objects. Note that the SSD-based models currently have the highest performance. Here is an example using the coco-dog model:

# C++
$ ./detectnet-camera --network=coco-dog

# Python
$ ./detectnet-camera.py --network=coco-dog
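
If you want to consume the results programmatically in addition to viewing the overlay, Detect() in the Python bindings returns a list of detection objects. The snippet below is a sketch of printing their fields, assuming the net and img variables from the loop sketched earlier and the detection attributes exposed by the detectNet bindings:

# Sketch: inspecting detection results (assumes `net`, `img`, `width`, `height` from the loop above)
detections = net.Detect(img, width, height)
for d in detections:
    # class label, confidence, and bounding-box coordinates per the detectNet Python bindings
    print("{} ({:.1f}%)  box=({:.0f}, {:.0f}, {:.0f}, {:.0f})".format(
        net.GetClassDesc(d.ClassID), d.Confidence * 100,
        d.Left, d.Top, d.Right, d.Bottom))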


Next | Semantic Segmentation with SegNet
Back | Detecting Objects from the Command Line

© 2016-2019 NVIDIA | Table of Contents