I am bringing up cameras on a custom carrier board for the TX2 NX. I can test the video devices using v4l2-ctl, but when I use nvgstcapture or nvarguscamerasrc plugin I get failures in Argus.
v4l2-ctl output:
v4l2-ctl --set-fmt-video=width=3840,height=2160,pixelformat=RG12 --stream-mmap --set-ctrl=sensor_mode=0 --stream-count=100 -d /dev/video0
<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<< 30.39 fps
<<<<<<<<<<<<<<<<<<<<<<<<<<<<<< 30.34 fps
<<<<<<<<<<<<<<<<<<<<<<<<<<<<<< 30.23 fps
<<<<<<<<
Output of the GStreamer plugin:
gst-launch-1.0 nvarguscamerasrc sensor-id=0 ! nvvidconv ! "video/x-raw,format=I420,width=3840,height=2160" ! fakesink
Setting pipeline to PAUSED ...
Pipeline is live and does not need PREROLL ...
Setting pipeline to PLAYING ...
New clock: GstSystemClock
GST_ARGUS: Creating output stream
CONSUMER: Waiting until producer is connected...
GST_ARGUS: Available Sensor modes :
GST_ARGUS: 3864 x 2192 FR = 29.999999 fps Duration = 33333334 ; Analog Gain range min 1.000000, max 3981.070801; Exposure Range min 83000, max 44663000;
GST_ARGUS: Running with following settings:
Camera index = 0
Camera mode = 0
Output Stream W = 3864 H = 2192
seconds to Run = 0
Frame Rate = 29.999999
GST_ARGUS: Setup Complete, Starting captures for 0 seconds
GST_ARGUS: Starting repeat capture requests.
CONSUMER: Producer has connected; continuing.
nvbuf_utils: dmabuf_fd -1 mapped entry NOT found
nvbuf_utils: Can not get HW buffer from FD... Exiting...
CONSUMER: Done Success
(gst-launch-1.0:7698): GStreamer-CRITICAL **: 07:52:09.013: gst_mini_object_set_qdata: assertion 'object != NULL' failed
Got EOS from element "pipeline0".
Execution ended after 0:00:06.011648589
Setting pipeline to PAUSED ...
Setting pipeline to READY ...
GST_ARGUS: Cleaning up
GST_ARGUS: Done Success
(Argus) Error Timeout: (propagating from src/rpc/socket/client/ClientSocketManager.cpp, function send(), line 137)
(Argus) Error Timeout: (propagating from src/rpc/socket/client/SocketClientDispatch.cpp, function dispatch(), line 87)
Setting pipeline to NULL ...
Freeing pipeline ...
argus debug output: argus_debug.txt (26.1 KB)
trace output: tx2_nx_trace.txt (35.3 KB)
Thank you,
Joe
It looks like no valid data is being received from the MIPI bus.
Have you confirmed the device tree configuration for the sensor mode?
mode0 { // OV5693_MODE_2592X1944
mclk_khz = "24000";
num_lanes = "2";
tegra_sinterface = "serial_c";
phy_mode = "DPHY";
discontinuous_clk = "yes";
dpcm_enable = "false";
cil_settletime = "0";
active_w = "2592";
active_h = "1944";
mode_type = "bayer";
pixel_phase = "bggr";
csi_pixel_bit_depth = "10";
readout_orientation = "90";
line_length = "2688";
inherent_gain = "1";
mclk_multiplier = "6.67";
pix_clk_hz = "160000000";
gain_factor = "10";
min_gain_val = "10";/* 1DB*/
max_gain_val = "160";/* 16DB*/
step_gain_val = "1";
default_gain = "10";
min_hdr_ratio = "1";
max_hdr_ratio = "1";
framerate_factor = "1000000";
min_framerate = "1816577";/*1.816577 */
max_framerate = "30000000";/*30*/
step_framerate = "1";
default_framerate = "30000000";
exposure_factor = "1000000";
min_exp_time = "34";/* us */
max_exp_time = "550385";/* us */
step_exp_time = "1";
default_exp_time = "33334";/* us */
embedded_metadata_height = "0";
};
I am going back through settings now to be sure.
I found this in the documentation. Is the information about the clock speed still true? Does this also apply to extperiph2?
Thank you for the response. I went back through my device tree and driver to configure them for a 24 MHz clock. While doing that, I found that the tegra_sinterface values in the mode definitions for my two sensors were set incorrectly: the sensor attached to port A was configured as attached to port C, and vice versa for the other sensor.
That was the solution for this question.
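For anyone hitting the same symptom, a sketch of what the fix looks like (the node names below are illustrative, not from my actual tree): each sensor's mode entry must name the CSI brick that sensor is physically wired to.

```dts
/* Illustrative sketch only: tegra_sinterface must match the CSI port
 * the sensor is physically wired to, e.g. serial_a for CSI port A. */
mode0 { /* sensor wired to CSI port A */
	tegra_sinterface = "serial_a";
};
```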
This led to my next issue now that I can stream using Argus.
While trying to run nvgstcapture I see many lines with the following:
NvxBaseWorkerFunction[2575] comp OMX.Nvidia.std.iv_renderer.overlay.yuv420 Error -2147479552
Also, while trying to stream H.264 video from my sensors to my laptop (I have no display), I am receiving corrupted images.
gst-launch command on TX2 NX:
gst-launch-1.0 nvarguscamerasrc sensor-id=0 ! nvvidconv ! "video/x-raw(memory:NVMM),format=NV12,height=360,width=640" ! omxh264enc insert-vui=true ! rtph264pay config-interval=-1 ! udpsink host=192.168.1.23 port=9002
gst-launch command on laptop:
gst-launch-1.0 udpsrc port=9002 ! "application/x-rtp" ! rtph264depay ! avdec_h264 ! autovideosink
image:
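One thing worth ruling out on the receiver side: the bare application/x-rtp caps carry no encoding-name or clock-rate, which rtph264depay needs when no SDP is exchanged. A sketch of a receiver pipeline with explicit caps (the payload type 96 is an assumption; it must match what rtph264pay actually sends):

```shell
# Sketch: receiver pipeline with explicit RTP caps filled in.
# Assumption: default payload type 96 from rtph264pay on the sender.
PIPELINE='udpsrc port=9002 caps="application/x-rtp,media=video,clock-rate=90000,encoding-name=H264,payload=96" ! rtph264depay ! h264parse ! avdec_h264 ! videoconvert ! autovideosink'
echo "gst-launch-1.0 $PIPELINE"
```

This only affects depayloading; corruption that also shows up in a raw v4l2-ctl dump points at the sensor side instead.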
Could you try the command below?
./test-launch "nvarguscamerasrc ! video/x-raw(memory:NVMM),width=(int)320,height=(int)240,format=(string)NV12 ! omxh264enc name=omxh264enc control-rate=1 ! video/x-h264,profile=baseline,stream-format=(string)byte-stream ! h264parse ! rtph264pay name=pay0"
Where does the test-launch binary come from? It doesn't seem to be on my PATH.
It's the sample code from gst-rtsp-server (the GStreamer RTSP server).
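For anyone else searching: test-launch is the example program in the gst-rtsp-server source tree, built against the installed GStreamer. A build sketch (the URL and version are assumptions; match the release to your GStreamer version, e.g. 1.14.x on L4T 32.x):

```shell
# Build sketch for test-launch (version/URL are assumptions):
# wget https://gstreamer.freedesktop.org/src/gst-rtsp-server/gst-rtsp-server-1.14.5.tar.xz
# tar xf gst-rtsp-server-1.14.5.tar.xz
# cd gst-rtsp-server-1.14.5/examples
# gcc test-launch.c -o test-launch \
#     $(pkg-config --cflags --libs gstreamer-1.0 gstreamer-rtsp-server-1.0)
```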
Thank you for that link. I compiled and ran that test-launch program and streamed the rtsp stream to my laptop. The result appears the same.
I also went ahead and saved a raw frame using v4l2-ctl and the result looks like the same image, but before debayering.
Are there any more advanced ways to confirm that images are being sent appropriately to the nvcsi block?
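The raw-frame dump mentioned above can be scripted with a size sanity check; a sketch (the capture command is commented out because it needs the target hardware; RG12 is stored in a 16-bit container, so a frame should be width * height * 2 bytes, assuming no line padding):

```shell
# Sketch: sanity-check the size of a single dumped RAW12 frame.
# RG12 uses a 16-bit container: 2 bytes per pixel (no padding assumed).
width=3840
height=2160
expected=$(( width * height * 2 ))
echo "expected frame size: ${expected} bytes"
# On the target:
# v4l2-ctl -d /dev/video0 \
#   --set-fmt-video=width=${width},height=${height},pixelformat=RG12 \
#   --stream-mmap --stream-count=1 --stream-to=frame.raw
# then compare against: stat -c %s frame.raw
```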
Are you able to get a preview image from an Argus app directly, instead of via RTSP?
Try the multimedia API sample code.
sudo apt list -a nvidia-l4t-jetson-multimedia-api
sudo apt install nvidia-l4t-jetson-multimedia-api=32.xxxxx
I'm not sure I follow your suggestion. In my last post I noted a frame that I pulled from v4l2-ctl, which should be a frame captured before Argus, and that frame looks corrupt.
I found an application, /opt/nvidia/camera/nvcapture-status-decoder, which I am using to read my trace statements from the capture. Here is one of the results. Do you have any insight?
/opt/nvidia/camera/nvcapture-status-decoder tstamp:65749928422 tag:CHANSEL_PXL_SOF channel:0x00 frame:8 vi_tstamp:65749927958 data:0x00000001
NVIDIA camera capture status decoder utility (Version 2.00)
Copyright (C) 2019-2020, NVIDIA Corporation. All rights reserved.
Stream: 0
Frame: 8
Status: 4 (ChanselFault)
Data 0x0000000000000001
Timestamp: 65749928422
ChanselFault : 0x0000000000000001
-PIXEL_SOF [ 0]: 1
The first pixel in a frame that is not that of an embedded type
-Current line in frame [31:16]: 0
Here is the trace where I got that line: mipi_trace.txt (25.5 KB)
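The decoder's output can also be cross-checked by hand against the raw data word; a sketch of the bit layout it reported (bit 0 = PIXEL_SOF, bits 31:16 = current line in frame):

```shell
# Sketch: decode the data word from the trace entry above by hand,
# mirroring the fields nvcapture-status-decoder printed.
data=$(( 0x00000001 ))
pixel_sof=$(( data & 0x1 ))        # bit 0: PIXEL_SOF
line=$(( (data >> 16) & 0xFFFF ))  # bits 31:16: current line in frame
echo "PIXEL_SOF=${pixel_sof} current_line=${line}"
```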
Is the trace log from v4l2-ctl? Please check the trace log after running nvarguscamerasrc.
Also, it would be better to connect an HDMI display and check the preview with nvgstcapture-1.0.
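For reference, the rtcpu trace that feeds those entries is enabled through debugfs before starting the capture. A commonly used setup sketch (run as root on the target; the writability guard keeps it a no-op anywhere else):

```shell
# Sketch: enable VI/NVCSI rtcpu tracing before starting nvarguscamerasrc.
# Assumes debugfs is mounted at /sys/kernel/debug and root privileges.
T=/sys/kernel/debug/tracing
if [ -w "$T/tracing_on" ]; then
    echo 1     > "$T/tracing_on"
    echo 30720 > "$T/buffer_size_kb"
    echo 1     > "$T/events/tegra_rtcpu/enable"
    echo 1     > "$T/events/freertos/enable"
    echo 1     > "$T/events/camera_common/enable"
    : > "$T/trace"   # clear stale entries before the new capture
else
    echo "debugfs tracing not writable; run as root on the target"
fi
# After the capture runs: cat "$T/trace"
```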
The v4l2-ctl trace looks fine.
Is there any chance you can connect an HDMI display to check the preview with nvgstcapture-1.0?
Thank you, and sorry that I did not see your edit. The problem is solved on my end: I was incorrectly setting a register in my sensor, and that was causing the problematic images. The test pattern confirmed there was no problem with the board or Argus, so I have been focusing on diagnosing the sensor since then. Thank you for the help.