SmartFace GPU acceleration in Docker

Some services can benefit from GPU acceleration. It can be enabled in the docker-compose file, but several prerequisites must also be met on the host machine.

To use GPU acceleration, you will need the following on the Docker host machine:

- an NVIDIA GPU with a current NVIDIA driver installed
- the NVIDIA Container Toolkit, so that Docker can run containers with the nvidia runtime

Please note that GPU acceleration is supported only on NVIDIA GPUs.

HW decoding

To use the GPU for HW decoding and face detection on cameras, uncomment runtime: nvidia and GstPipelineTemplate in docker-compose.yml for the camera services sf-cam-*. When using the NVIDIA Docker runtime, SmartFace camera processes need GStreamer pipelines as the camera source.
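As a sketch, a camera service entry might look like the following after uncommenting (the service name is illustrative, and the actual GstPipelineTemplate value comes from your own compose file):

```yaml
sf-cam-1:
  # Uncommented so the container runs with the NVIDIA runtime:
  runtime: nvidia
  environment:
    # Uncommented so the camera process uses a GStreamer pipeline as source;
    # keep the template value provided in your docker-compose.yml:
    - GstPipelineTemplate=...
```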

GPU support for neural network processing

To enable GPU acceleration in the camera, remote detector, extractor, pedestrian detector, or liveness services, uncomment the environment variable Gpu__GpuEnabled=true.
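A minimal sketch of what this looks like in docker-compose.yml (the service name sf-extractor is illustrative; apply the same change to whichever of the listed services you want to accelerate):

```yaml
sf-extractor:
  environment:
    # Uncommented to enable GPU acceleration for this service:
    - Gpu__GpuEnabled=true
```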

To specify which neural network runtime will be used, uncomment the environment variable Gpu__GpuNeuralRuntime and set it to one of the values: Default, Cuda, or Tensor.

Use the Tensor value only if your GPU supports the TensorRT runtime.
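Combined with the previous step, the runtime selection might look like this (service name and the chosen Cuda value are illustrative):

```yaml
sf-extractor:
  environment:
    - Gpu__GpuEnabled=true
    # One of: Default, Cuda, Tensor (Tensor only on GPUs with TensorRT support):
    - Gpu__GpuNeuralRuntime=Cuda
```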

When using Tensor, you can uncomment the mapping "/var/tmp/innovatrics/tensor-rt:/var/tmp/innovatrics/tensor-rt" to retain TensorRT cache files on the host when the container is recreated. This can be helpful because generating the cache files is a lengthy operation that needs to be performed before the first run of the neural network.
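Putting the TensorRT-related settings together, a sketch of the relevant service entry (service name illustrative) might be:

```yaml
sf-extractor:
  environment:
    - Gpu__GpuEnabled=true
    - Gpu__GpuNeuralRuntime=Tensor
  volumes:
    # Persist TensorRT cache files on the host so they survive
    # container recreation and the first-run cache generation
    # does not have to be repeated:
    - "/var/tmp/innovatrics/tensor-rt:/var/tmp/innovatrics/tensor-rt"
```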