
Back | Next | Contents
Object Detection

Running the Live Camera Detection Demo

Up next we have a real-time object detection camera demo, available for both C++ and Python.

Similar to the previous detectnet-console example, these camera applications use detection networks, except that they process a live video feed from a camera. detectnet-camera accepts various optional command-line parameters, including:

  • --network flag which changes the detection model being used (the default is SSD-Mobilenet-v2).
  • --overlay flag which can be comma-separated combinations of box, labels, conf, and none
    • The default is --overlay=box,labels,conf which displays boxes, labels, and confidence values
  • --alpha value which sets the alpha blending value used during overlay (the default is 120).
  • --threshold value which sets the minimum threshold for detection (the default is 0.5).
  • --camera flag setting the camera device to use
    • MIPI CSI cameras are used by specifying the sensor index (0 or 1, etc.)
    • V4L2 USB cameras are used by specifying their /dev/video node (/dev/video0, /dev/video1, etc.)
    • The default is to use MIPI CSI sensor 0 (--camera=0)
  • --width and --height flags setting the camera resolution (default is 1280x720)
    • The resolution should be set to a format that the camera supports.
    • Query the available formats with the following commands:
      $ sudo apt-get install v4l-utils
      $ v4l2-ctl --list-formats-ext

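The --overlay flag's comma-separated values can be validated with a small parser. Below is a minimal sketch of that parsing logic; the parse_overlay helper and VALID_OVERLAYS set are hypothetical names for illustration, not part of the toolkit:

```python
# Valid elements of the --overlay flag, per the list above.
VALID_OVERLAYS = {"box", "labels", "conf", "none"}

def parse_overlay(value="box,labels,conf"):
    """Split a comma-separated --overlay value into the set of
    enabled elements (hypothetical helper, for illustration)."""
    flags = {f.strip() for f in value.split(",")}
    unknown = flags - VALID_OVERLAYS
    if unknown:
        raise ValueError("unknown overlay flags: %s" % sorted(unknown))
    return flags

print(parse_overlay())            # the default: box, labels, and confidence
print(parse_overlay("box,conf"))  # boxes with confidence values, no labels
```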
You can combine these flags as needed, and there are additional command-line parameters available for loading custom models. Launch the application with the --help flag to receive more info, or see the Examples readme.

Below are some typical scenarios for launching the program:

C++

$ ./detectnet-camera                             # using SSD-Mobilenet-v2, default MIPI CSI camera (1280x720)
$ ./detectnet-camera --network=ssd-inception-v2  # using SSD-Inception-v2, default MIPI CSI camera (1280x720)
$ ./detectnet-camera --camera=/dev/video0        # using SSD-Mobilenet-v2, V4L2 camera /dev/video0 (1280x720)
$ ./detectnet-camera --width=640 --height=480    # using SSD-Mobilenet-v2, default MIPI CSI camera (640x480)

Python

$ ./detectnet-camera.py                             # using SSD-Mobilenet-v2, default MIPI CSI camera (1280x720)
$ ./detectnet-camera.py --network=ssd-inception-v2  # using SSD-Inception-v2, default MIPI CSI camera (1280x720)
$ ./detectnet-camera.py --camera=/dev/video0        # using SSD-Mobilenet-v2, V4L2 camera /dev/video0 (1280x720)
$ ./detectnet-camera.py --width=640 --height=480    # using SSD-Mobilenet-v2, default MIPI CSI camera (640x480)
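The two --camera forms used in the scenarios above (a sensor index versus a /dev/video node) can be distinguished mechanically. A minimal sketch of that logic, assuming a hypothetical parse_camera helper (not part of jetson-inference):

```python
def parse_camera(arg="0"):
    """Classify a --camera argument as a MIPI CSI sensor index or a
    V4L2 /dev/video node (hypothetical helper, for illustration)."""
    if arg.startswith("/dev/video"):
        return ("v4l2", arg)           # e.g. /dev/video0, /dev/video1
    return ("csi", int(arg))           # e.g. sensor 0 or 1

print(parse_camera())                  # ('csi', 0), the default MIPI CSI sensor
print(parse_camera("/dev/video0"))     # ('v4l2', '/dev/video0')
```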

note: for example cameras to use, see these sections of the Jetson Wiki:
             - Nano:  https://eLinux.org/Jetson_Nano#Cameras
             - Xavier: https://eLinux.org/Jetson_AGX_Xavier#Ecosystem_Products_.26_Cameras
             - TX1/TX2: developer kits include an onboard MIPI CSI sensor module (OV5693)

Visualization

Displayed in the OpenGL window is the live camera stream, overlaid with the bounding boxes of the detected objects. Note that the SSD-based models currently have the highest performance. Here is one using the coco-dog model:

# C++
$ ./detectnet-camera --network=coco-dog

# Python
$ ./detectnet-camera.py --network=coco-dog

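The --alpha value mentioned earlier (default 120, on a 0-255 scale) controls how strongly the overlay is blended over the camera frame. Below is a sketch of standard per-channel alpha blending under that assumption; it is not the library's actual implementation:

```python
def blend(overlay_color, pixel, alpha=120):
    """Alpha-blend an overlay color over a background pixel.
    alpha is 0-255 as in the --alpha flag (standard formula, a sketch)."""
    a = alpha / 255.0
    return tuple(round(o * a + p * (1.0 - a))
                 for o, p in zip(overlay_color, pixel))

print(blend((0, 255, 0), (100, 100, 100)))       # green box tint at default alpha
print(blend((0, 255, 0), (100, 100, 100), 255))  # fully opaque: (0, 255, 0)
```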
If the desired objects aren't being detected in the video feed, try decreasing the detection threshold with the --threshold parameter; if you're getting spurious detections, try increasing it (the default is 0.5).
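The effect of the threshold can be sketched as a simple confidence filter. This is a conceptual illustration, not the library's code; detections here are assumed to be (label, confidence) pairs:

```python
def filter_detections(detections, threshold=0.5):
    """Keep only detections whose confidence meets the threshold
    (a sketch; detections are (label, confidence) pairs)."""
    return [d for d in detections if d[1] >= threshold]

raw = [("dog", 0.92), ("dog", 0.41), ("person", 0.67)]
print(filter_detections(raw))                  # the default 0.5 drops the 0.41 hit
print(filter_detections(raw, threshold=0.3))   # lowering it admits more detections
```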

Next, we'll cover creating the code for this camera detection app in Python.

Next | Coding Your Own Object Detection Program
Back | Detecting Objects from the Command Line

© 2016-2019 NVIDIA | Table of Contents