Inference DS executes deep neural networks (“inference”) that were trained with Deep Learning DS, including pre- and post-processing functionalities. Inference can run in real time on images and point clouds captured by industrial cameras and network cameras, or offline on batches of files or on videos. Furthermore, Inference DS can be used during the data acquisition phase of deep learning projects, as it offers output interfaces to record images (and inference results). You can write custom plugins in Python to implement input interfaces for individual camera solutions, provide specific output interfaces, or add custom processing steps.
Inference DS is built on two basic concepts: nodes and routing (between nodes). Each node implements a specific functionality such as “model inference” or “load image files”. A running instance of Inference DS is typically configured with multiple nodes that form a processing pipeline such as “video capture”, “model inference”, and “file output”. Each node can have so-called “consumers” and “producers” that allow communication with other nodes. For example, the node “file input” has one producer that forwards images to connected consumers. The connections between producers and consumers are configured globally via the routing configuration. Input nodes usually provide only one producer, whereas processing nodes use one consumer and provide one producer.
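To make the node and routing concept concrete, the following is a minimal sketch in Python. All class and method names (`Node`, `route_to`, `emit`, `consume`, and the example node classes) are illustrative assumptions, not the actual Inference DS API; the sketch only models the idea that producers forward data to the consumers they are routed to.

```python
# Hypothetical sketch of the node/routing concept.
# Names are illustrative, not the actual Inference DS API.

class Node:
    """Base node: a producer forwards results to its routed consumers."""
    def __init__(self, name):
        self.name = name
        self._consumers = []  # consumers routed to this node's producer

    def route_to(self, consumer):
        self._consumers.append(consumer)

    def emit(self, item):
        for consumer in self._consumers:
            consumer.consume(item)

    def consume(self, item):
        raise NotImplementedError


class FileInput(Node):
    """Input node: one producer, no consumer."""
    def read(self, path):
        self.emit({"image": path})


class ModelInference(Node):
    """Processing node: one consumer and one producer."""
    def consume(self, item):
        item["result"] = f"inference({item['image']})"
        self.emit(item)


class FileOutput(Node):
    """Output node: one consumer, no producer."""
    def __init__(self, name):
        super().__init__(name)
        self.written = []

    def consume(self, item):
        self.written.append(item)


# Routing configuration: file input -> model inference -> file output
src = FileInput("file input")
infer = ModelInference("model inference")
sink = FileOutput("file output")
src.route_to(infer)
infer.route_to(sink)

src.read("frame_0001.png")
print(sink.written[0]["result"])  # → inference(frame_0001.png)
```

In the real product this wiring is not done in code but declared globally in the routing configuration; the sketch only shows the data flow that such a configuration describes.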
Inference DS can be configured either via the user interface or by writing and importing a configuration file. Details on the configuration can be found in section Configuration. Plugins are nodes that extend the functionality of Inference DS; they follow the same routing concept and can be configured in the same manner. Section Plugin Development includes a guide on how to create, integrate, and debug your own plugins.
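Since plugins are regular nodes, a custom processing step reduces to a class with a consumer that transforms incoming data and a producer that forwards the result. The sketch below illustrates that shape only; the class, `connect`, and `consume` names are hypothetical, and the real plugin interface is the one documented in section Plugin Development.

```python
# Hypothetical plugin sketch: a custom processing step with one
# consumer and one producer. Names are illustrative, not the
# actual Inference DS plugin interface.

class GrayscalePlugin:
    """Custom processing node: converts RGB pixels to grayscale."""
    def __init__(self):
        self._downstream = []  # consumers routed to this plugin's producer

    def connect(self, consumer):
        self._downstream.append(consumer)

    def consume(self, image):
        # Example pre-processing: average the RGB channels per pixel.
        gray = [[sum(px) / 3.0 for px in row] for row in image]
        for consumer in self._downstream:
            consumer.consume(gray)


class Collector:
    """Stand-in for a downstream consumer (e.g. a file output node)."""
    def __init__(self):
        self.items = []

    def consume(self, item):
        self.items.append(item)


plugin = GrayscalePlugin()
sink = Collector()
plugin.connect(sink)
plugin.consume([[(3, 6, 9), (0, 0, 0)]])  # one row of two RGB pixels
print(sink.items[0])  # → [[6.0, 0.0]]
```

Because the plugin exposes the same consumer/producer shape as built-in nodes, it can be placed anywhere in a pipeline via the routing configuration.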