Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU): GPU
• DeepStream Version: 5.0
• TensorRT Version: 7
• NVIDIA GPU Driver Version (valid for GPU only): 440
• Issue Type (questions, new requirements, bugs): question

In deepstream-app, I’m using the analytics_done_buf_prob function to access tensor metadata; see this question: (How to get classifier tensor metadata in deepstream-app?)
The model structure is:

pgie: yolov3 (using the implementation provided by NVIDIA)
sgie0: runs on class id 2 (car)
sgie1: runs on class id 0 (person)
sgie2: runs on class id 2 (car)

If I use the tracker, then the person classifier (unique_id 3) is not run on every frame; i.e., if I print meta->unique_id, I get the following:

Here,

const guint sgie1_unique_id = 2;
const guint sgie2_unique_id = 3;
const guint sgie3_unique_id = 4;
meta->unique_id: 3
meta->unique_id: 3
meta->unique_id: 4
meta->unique_id: 2
meta->unique_id: 2
meta->unique_id: 4
meta->unique_id: 2
meta->unique_id: 4
meta->unique_id: 3
meta->unique_id: 4
meta->unique_id: 2
meta->unique_id: 2
meta->unique_id: 4
meta->unique_id: 2
meta->unique_id: 4
meta->unique_id: 4
meta->unique_id: 2
meta->unique_id: 4
meta->unique_id: 2
meta->unique_id: 4
meta->unique_id: 2
meta->unique_id: 4
meta->unique_id: 2
meta->unique_id: 4
meta->unique_id: 2
meta->unique_id: 4
meta->unique_id: 2
meta->unique_id: 3
meta->unique_id: 4
meta->unique_id: 2
meta->unique_id: 4
meta->unique_id: 2
meta->unique_id: 4
meta->unique_id: 2
meta->unique_id: 2
meta->unique_id: 4
meta->unique_id: 4
meta->unique_id: 2
meta->unique_id: 2
meta->unique_id: 4
meta->unique_id: 4
meta->unique_id: 2
meta->unique_id: 4
meta->unique_id: 2
meta->unique_id: 2
meta->unique_id: 4
meta->unique_id: 4
meta->unique_id: 2
meta->unique_id: 4
meta->unique_id: 2
meta->unique_id: 4
meta->unique_id: 2
meta->unique_id: 4
meta->unique_id: 2
meta->unique_id: 4
meta->unique_id: 2
meta->unique_id: 4
meta->unique_id: 2
meta->unique_id: 4
meta->unique_id: 2
meta->unique_id: 4
meta->unique_id: 2

As you can see, the person classifier (unique_id 3) is not printed as frequently as it should be. If I disable the tracker, this is the result:


meta->unique_id: 3
meta->unique_id: 3
meta->unique_id: 2
meta->unique_id: 4
meta->unique_id: 3
meta->unique_id: 3
meta->unique_id: 4
meta->unique_id: 2
meta->unique_id: 3
meta->unique_id: 3
meta->unique_id: 4
meta->unique_id: 2
meta->unique_id: 3
meta->unique_id: 3
meta->unique_id: 4
meta->unique_id: 2
meta->unique_id: 3
meta->unique_id: 3
meta->unique_id: 4
meta->unique_id: 2
meta->unique_id: 3
meta->unique_id: 3
meta->unique_id: 4
meta->unique_id: 2
meta->unique_id: 3
meta->unique_id: 3
meta->unique_id: 4
meta->unique_id: 2
meta->unique_id: 3
meta->unique_id: 3
meta->unique_id: 4
meta->unique_id: 2
meta->unique_id: 3
meta->unique_id: 3
meta->unique_id: 4
meta->unique_id: 2
meta->unique_id: 3
meta->unique_id: 3
meta->unique_id: 4
meta->unique_id: 2
meta->unique_id: 3
meta->unique_id: 3
meta->unique_id: 2
meta->unique_id: 3
meta->unique_id: 3
meta->unique_id: 2
meta->unique_id: 4
meta->unique_id: 3
meta->unique_id: 3
meta->unique_id: 4
meta->unique_id: 2
meta->unique_id: 3
meta->unique_id: 3
meta->unique_id: 4
meta->unique_id: 2
meta->unique_id: 3
meta->unique_id: 3
meta->unique_id: 4
meta->unique_id: 2
meta->unique_id: 3
meta->unique_id: 3
meta->unique_id: 2
meta->unique_id: 4
meta->unique_id: 3
meta->unique_id: 3
meta->unique_id: 4
meta->unique_id: 2
meta->unique_id: 3
meta->unique_id: 3
meta->unique_id: 4
meta->unique_id: 2
meta->unique_id: 3
meta->unique_id: 3
meta->unique_id: 4
meta->unique_id: 2
meta->unique_id: 3
meta->unique_id: 3
meta->unique_id: 3
meta->unique_id: 4
meta->unique_id: 2
meta->unique_id: 3
meta->unique_id: 3
meta->unique_id: 3
meta->unique_id: 2
meta->unique_id: 4

Now the sgie2 model (the person classifier, unique_id 3) is executed as expected. Here, I’m using the official sample video.

Why is this happening? In fact, it wasn’t happening when I was using the deepstream-infer-tensor-meta-test app, where I used the sgie_pad_buffer_probe function to extract the tensor metadata.

For dumping tensor data, deepstream-app has the output-tensor-meta property; set it to 1. You can refer to sources/gst-plugins/gst-nvinfer/gstnvinfer_meta_utils.cpp::attach_tensor_output_meta to see how it works.

Yes, I already have output-tensor-meta enabled. I think I found the reason why the classifier is not running in the documentation:

When the plugin is operating as a secondary classifier along with the tracker, it tries to improve performance by avoiding re-inferencing on the same objects in every frame. It does this by caching the classification output in a map with the object’s unique ID as the key. The object is inferred upon only when it is first seen in a frame (based on its object ID) or when the size (bounding box area) of the object increases by 20% or more. This optimization is possible only when the tracker is added as an upstream element.
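
The caching rule quoted above can be sketched in plain C. This is a hypothetical illustration of the decision logic, not the actual gst-nvinfer source: cache the bounding-box area per tracked object ID, and re-infer only when the object ID is new or its area has grown by 20% or more.

```c
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical sketch of the per-object classification cache described in
 * the documentation quote (names and structure are illustrative only). */

#define MAX_OBJECTS 64
#define GROWTH_THRESHOLD 1.2 /* re-infer when area >= 120% of cached area */

typedef struct {
  unsigned long object_id; /* tracker-assigned unique object ID */
  double area;             /* bbox area at the last inference */
  bool used;
} CacheEntry;

static CacheEntry cache[MAX_OBJECTS];

/* Returns true if the classifier should run on this object in this frame. */
bool should_reinfer(unsigned long object_id, double bbox_area) {
  CacheEntry *free_slot = NULL;
  for (size_t i = 0; i < MAX_OBJECTS; i++) {
    if (cache[i].used && cache[i].object_id == object_id) {
      if (bbox_area >= cache[i].area * GROWTH_THRESHOLD) {
        cache[i].area = bbox_area; /* grew by >= 20%: re-infer and update */
        return true;
      }
      return false; /* reuse the cached classification output */
    }
    if (!cache[i].used && free_slot == NULL)
      free_slot = &cache[i];
  }
  if (free_slot != NULL) { /* object ID seen for the first time */
    free_slot->object_id = object_id;
    free_slot->area = bbox_area;
    free_slot->used = true;
  }
  return true;
}
```

This also shows why the optimization needs the tracker upstream: without stable object IDs from the tracker, every detection looks like a new object and the cache never hits.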

But I’m surprised that it wasn’t happening in the deepstream-infer-tensor-meta-test app; there the classifier was running on every frame even with the tracker.

The deepstream-infer-tensor-meta-test app does not have a tracker.

Actually, I added the tracker myself; the pipeline was like this:

streammux → pgie → queue → nvtracker → sgie1 → queue5 → sgie2 → queue6 →
sgie3 → queue2 → tiler → queue3 → filter1 → nvvidconv → queue4 → filter2 →
nvosd → nvvidconv1 → filter3 → videoconvert → filter4 → x264enc → qtmux

I have one last question: what does this line mean? (Maybe this has something to do with it.)

This optimization is possible only when the tracker is added as an upstream element.

Or maybe it was because the tracker was running in a different thread.

It means after the pgie and before the secondary GIEs. You may try comparing deepstream-app with deepstream-infer-tensor-meta-test first to narrow this down.
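
For reference, "upstream" placement can be sketched as a gst-launch pipeline. This is a hedged sketch based on the standard DeepStream 5.0 sample elements; the config file names are placeholders, and the tracker library path may differ on your install:

```shell
# Sketch only: nvtracker sits after the primary nvinfer (pgie) and before the
# secondary nvinfer instances (sgies), so gst-nvinfer can cache classification
# results per tracked object ID. Config paths below are placeholders.
gst-launch-1.0 filesrc location=sample_720p.h264 ! h264parse ! nvv4l2decoder ! \
  m.sink_0 nvstreammux name=m batch-size=1 width=1280 height=720 ! \
  nvinfer config-file-path=pgie_config.txt ! \
  nvtracker ll-lib-file=/opt/nvidia/deepstream/deepstream-5.0/lib/libnvds_mot_klt.so ! \
  nvinfer config-file-path=sgie1_config.txt ! \
  nvvideoconvert ! nvdsosd ! fakesink
```

If the tracker were placed after the sgies instead, the classifiers would see objects without stable IDs and the caching optimization could not apply.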