Camera connection time is long

Problem

The first time a camera is connected to within a process, establishing the connection to the PC takes a long time.

Cause

OpenCL kernels are built for the specific hardware of the PC the camera is connected to, which is unknown until the camera establishes a connection. Building these kernels accounts for a significant share of the total connection time, and it happens the first time the camera is connected to within a process.

If the process (more specifically, the Zivid application) is kept alive, the kernels do not need to be rebuilt.

Solution

The kernels can be cached to avoid rebuilding them every time you connect to the camera. Note that the very first connection, when the cache is populated, will still be slow, but every connection afterwards will be fast. Follow the instructions for your operating system and GPU below.

Windows (Intel GPU)

Upgrade your Intel drivers to the newest available version using the Intel Driver & Support Assistant to allow caching of the OpenCL kernels. To verify correct caching, check that the folder %LocalAppData%/NEO/neo_compiler_cache exists and is not empty.
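
For example, you can check this from a Command Prompt (the path assumes the default cache location):

:: List the Intel kernel cache; files here mean caching is working
dir "%LocalAppData%\NEO\neo_compiler_cache"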

If upgrading your Intel drivers did not resolve the issue, you may manually enable caching (experimental). See the instructions below.

Manually enabling caching (Experimental)

This method is considered deprecated and experimental by the Intel Graphics Compute Runtime project, but it can serve as a temporary workaround until your drivers are updated. Be aware of the limitations of cl_cache on Windows described in the Intel Graphics Compute Runtime documentation, and consider this only for non-production setups.

  1. Create a folder where you want to store the cached kernels, e.g. in %LocalAppData%/NEO/neo_compiler_cache

  2. Add a new environment variable cl_cache_dir and set its value to the path of the cache folder, e.g. %LocalAppData%/NEO/neo_compiler_cache

You may have to restart your computer for the changes to take effect.
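
As a concrete example, both steps above can be done from a Command Prompt (the folder path is just an example; setx makes the variable persist for your user account):

:: Create the cache folder (md creates parent folders as needed)
md "%LocalAppData%\NEO\neo_compiler_cache"
:: Set cl_cache_dir persistently for the current user
setx cl_cache_dir "%LocalAppData%\NEO\neo_compiler_cache"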

Windows (NVIDIA GPU)

Most NVIDIA GPUs and drivers should have caching enabled by default. If you are still experiencing slow connection times, try updating to the latest drivers for your NVIDIA GPU.

To verify correct caching, check if the folder %AppData%/NVIDIA/ComputeCache exists and is not empty.
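
A quick Command Prompt check:

:: List the NVIDIA compute cache; files here mean caching is working
dir "%AppData%\NVIDIA\ComputeCache"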

Ubuntu (Intel GPU)

Upgrade your OpenCL drivers to the newest available version to allow caching of the OpenCL kernels:

sudo apt update && sudo apt upgrade intel-opencl-icd

If the latest version in the repositories is not sufficient, you can use the latest released package from Intel. To verify correct caching, check if the folder $HOME/.cache/neo_compiler_cache exists and is not empty.
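
For example, you can check this from a terminal (assuming the default cache location):

# List the cache directory; any output means kernels have been cached
ls -A "$HOME/.cache/neo_compiler_cache"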

If upgrading your Intel drivers did not resolve the issue, you may manually enable caching. See the instructions below.

Manually enable caching

  1. Create a folder where you want to store the cached kernels, e.g. in $HOME/.cache/neo_compiler_cache

  2. Add a new environment variable cl_cache_dir and set its value to the path of the cache folder, e.g. $HOME/.cache/neo_compiler_cache

# Create the cache folder (use -p to create parent folders as needed)
mkdir -p "$HOME/.cache/neo_compiler_cache"
# Point the OpenCL runtime at the cache folder (applies to the current shell)
export cl_cache_dir="$HOME/.cache/neo_compiler_cache"

Caching is now enabled for the current shell session.
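
Since export only affects the current shell, one way to make the setting permanent (assuming a bash shell; adapt the startup file for your shell) is to append the export line to your shell startup file:

# Append the export to ~/.bashrc so future shells pick it up
echo 'export cl_cache_dir="$HOME/.cache/neo_compiler_cache"' >> "$HOME/.bashrc"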

Ubuntu (NVIDIA GPU)

Most NVIDIA GPUs and drivers should have caching enabled by default. If you are still experiencing slow connection times, try updating to the latest drivers for your NVIDIA GPU.

To verify correct caching, check if the folder $HOME/.nv/ComputeCache exists and is not empty.
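
As above, a quick terminal check:

# List the NVIDIA compute cache; any output means kernels have been cached
ls -A "$HOME/.nv/ComputeCache"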

Docker

If you are connecting to the camera from a Docker container, you will have to mount the host's cache directory into the root user's home directory in the container to improve the first connection. This is necessary because each Docker container has its own file system, so a newly created container always starts with an empty cache directory.

The instructions below are only intended to improve the first connection to the camera, and therefore only apply when initially running the container. Note that the kernels must already be cached on the host machine.

Windows is currently not supported with Docker, as OpenCL is not fully supported on Windows in a Docker container.

Intel GPU

The kernel cache location is controlled by the NEO_CACHE_DIR environment variable, which defaults to $HOME/.cache/neo_compiler_cache on the host machine. Mount this directory into the root user's home directory in the container when running it to share the cache.

To avoid creating files owned by root in your home directory, you can first copy the cache to root's home directory:

# Copy the cache into root's home directory (--mkpath requires rsync 3.2.3 or newer)
sudo rsync -r --mkpath "$HOME"/.cache/neo_compiler_cache/ /root/.cache/neo_compiler_cache/
# Run the container with the GPU device and the cache directory mounted
sudo docker run --interactive --tty --device=/dev/dri --volume /root/.cache/neo_compiler_cache:/root/.cache/neo_compiler_cache <image>

where <image> is the name of the image you want to run.
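
To confirm that the cache is visible inside the container, you can, for example, list the mounted directory (a sanity check only, not part of the setup):

# Any output means the host's cached kernels are available in the container
sudo docker run --rm --volume /root/.cache/neo_compiler_cache:/root/.cache/neo_compiler_cache <image> ls -A /root/.cache/neo_compiler_cache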

NVIDIA GPU

The kernel cache is stored in $HOME/.nv/ComputeCache on the host machine. Mount this directory into the root user's home directory in the container when running it to share the cache.

To avoid creating files owned by root in your home directory, you can first copy the cache to root's home directory:

# Copy the cache into root's home directory (the trailing slashes keep the directory layout; --mkpath requires rsync 3.2.3 or newer)
sudo rsync -r --mkpath "$HOME"/.nv/ComputeCache/ /root/.nv/ComputeCache/
# Run the container with GPU access and the cache directory mounted
sudo docker run --interactive --tty --device=/dev/dri --gpus=all --volume /root/.nv/ComputeCache:/root/.nv/ComputeCache <image>

where <image> is the name of the image you want to run.