
Object detection (Pro)

imgproxy can detect objects in an image and use them for smart cropping, blurring the detections, or drawing the detections. You can also fetch info about the detected objects.

For object detection purposes, imgproxy uses the Darknet YOLO model. We provide Docker images with a model trained for face detection, but you can use any Darknet YOLO model from the model zoo, or you can train your own model by following this guide.

Configuration

You need to define four config variables to enable object detection (see the example below the list):

  • IMGPROXY_OBJECT_DETECTION_CONFIG: a path to the neural network config
  • IMGPROXY_OBJECT_DETECTION_WEIGHTS: a path to the neural network weights
  • IMGPROXY_OBJECT_DETECTION_CLASSES: a path to a text file with the class names, one per line
  • IMGPROXY_OBJECT_DETECTION_NET_SIZE: the size of the neural network input. The width and height of the input should be equal, so this config value is a single number. Default: 416
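
For example, when running the provided face detection image, the configuration might look like this (the file paths below are placeholders; point them at the actual locations of your model files):

IMGPROXY_OBJECT_DETECTION_CONFIG=/opt/imgproxy/models/face.cfg
IMGPROXY_OBJECT_DETECTION_WEIGHTS=/opt/imgproxy/models/face.weights
IMGPROXY_OBJECT_DETECTION_CLASSES=/opt/imgproxy/models/face.names
IMGPROXY_OBJECT_DETECTION_NET_SIZE=416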

Read the configuration options guide for more information about these config values.

Usage examples

Object-oriented crop

You can crop your images and keep objects of the desired classes in the frame:

.../crop:256:256/g:obj:face/...
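
With URL signing disabled, any string (such as insecure) can be used in the signature slot, so a complete URL with a placeholder host and source image might look like this:

https://imgproxy.example.com/insecure/crop:256:256/g:obj:face/plain/https://example.com/group-photo.jpg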

Blurring detections

You can blur objects of the desired classes, making it possible to anonymize images or hide NSFW content:

.../blur_detections:7:face/...
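
Here, the first argument (7) is the blur strength (sigma), and the class names follow. Assuming your model also defines a hypothetical license_plate class, several classes can be listed at once:

.../blur_detections:10:face:license_plate/...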

Drawing detections

You can make imgproxy draw bounding boxes for the detected objects of the desired classes (this is handy for testing your models):

.../draw_detections:1:face/...
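
The first argument enables drawing (1) or disables it (0), and the class names follow. Drawing can be combined with other processing options; for example, a URL that resizes the image and outlines detected faces (placeholder sizes) might look like this:

.../rs:fit:1024:768/draw_detections:1:face/...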

Fetching the detected objects info

You can fetch info about the detected objects using the /info endpoint:

.../info/detect_objects:1/...
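
Assuming the same placeholder host and source image and disabled URL signing, a full info request might look like the one below; the endpoint responds with JSON that includes the detected objects:

https://imgproxy.example.com/info/insecure/detect_objects:1/plain/https://example.com/group-photo.jpg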