Hi!

I am using a Jetson Nano with a ROS 2 Humble Docker container that includes ros_deep_learning, and I access it over SSH without any display connected to the Jetson. The camera connected to the Jetson Nano is an IMX219-83 stereo camera, which appears as /dev/video0 and /dev/video1.

I am trying to run the video_source node of the ros_deep_learning package with:

ros2 launch ros_deep_learning video_source.ros2.launch input:=csi://0 input_width:=1920 input_height:=1080

It looks like the camera is working and the code is reading it, but it is not publishing the image, and I get this error:

Hi @thiagoromero42, can you check this suggestion from here?

If you are working over SSH, make sure that X11 tunneling/forwarding is disabled. Do you have a physical display attached to your Jetson or not when this error occurs? What command did you use to start the container?

No, I don’t have any display attached to the Jetson, but if I connect a display to the Jetson’s HDMI port and run this package directly, without SSH, it works perfectly.
And this is what I use to start the container over SSH:

sudo docker run --runtime nvidia -it --rm --network=host --device=/dev/video0 --device=/dev/video1 --volume /tmp/argus_socket:/tmp/argus_socket stereo_image

where stereo_image is an image committed from your container “dustynv/ros:humble-pytorch-l4t-r32.7.1”.

I will try your suggestion, but regarding “Otherwise, try building jetson-inference with the cmake -DENABLE_NVMM=off ../ option”: do you mean to build the ros_deep_learning package like that? If not, where is the folder that contains the CMake file for building jetson-inference?
And should I disable X11 on the Jetson as well as on my PC?

It should be found under /jetson-inference inside the container, and you can try this in the container:

cd /jetson-inference/build
cmake -DENABLE_NVMM=off ../
make
make install

ros_deep_learning links to jetson-inference/jetson-utils, so it shouldn’t be necessary to rebuild ros_deep_learning.

On the PC side, when you establish the SSH session, make it so that X11 forwarding isn’t used. You can also try unset DISPLAY on your Jetson and in the container.
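
For example (user and jetson-ip below are placeholders for your own login), connect with X11 forwarding explicitly disabled:

ssh -x user@jetson-ip

The lowercase -x flag disables X11 forwarding. Then on the Jetson, and again inside the container:

unset DISPLAY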


This solved my problem. Now I can run the node.

But now I have a question, though I don’t really know if it falls within your area. When I run the node at 1920x1080 it has a framerate of 30 fps, but the video looks very laggy. When I run the node at 1280x720 at 120 fps instead, it is very fluid.

I have done the same test running the camera directly with gst-launch-1.0 nvarguscamerasrc, and it was very fluid in both cases.
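
It was something along these lines (reconstructed from memory, so the exact caps and sink may differ; sensor-id selects which camera):

gst-launch-1.0 nvarguscamerasrc sensor-id=0 ! 'video/x-raw(memory:NVMM),width=1920,height=1080,framerate=30/1' ! nvoverlaysink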

And when I run the node directly on the Jetson, 1920x1080 is laggy too, but not as much as when I run it via SSH.

Is this a problem with being connected via Ethernet, like a bandwidth bottleneck or something like that? Or can it be solved?

Thank you very much for your help!

@thiagoromero42 if you are viewing it through ROS, there is added overhead from transporting the video over ROS topics, the RMW middleware, etc. (I believe it’s uncompressed). If you are mostly using DNNs for image processing, those use lower-resolution images for input anyway. On the Jetson side, that ENABLE_NVMM setting helps with memory copy overhead as well, but unfortunately it seems not to be working for you.
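
As a rough back-of-envelope figure: one uncompressed 1920x1080 frame at 3 bytes per pixel is about 6.2 MB, so at 30 FPS that is roughly 187 MB/s, or about 1.5 Gbit/s, which already exceeds gigabit Ethernet before any middleware overhead. If you do need the raw topic on the PC, one standard mitigation (plain image_transport, not specific to ros_deep_learning; the topic names below are just an example) is to republish the stream compressed:

ros2 run image_transport republish raw compressed --ros-args -r in:=/video_source/raw -r out/compressed:=/video_source/raw/compressed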


Great, thanks for the explanation :D

Well… I have to ask again. When I run one node for each of the stereo cameras and try to view them on the PC, the viewer program (rqt, rviz2, image_viewer, it doesn’t matter which) crashes. But if I run only one node for one camera, it works perfectly.

Do you have any idea why this could be?

@thiagoromero42 sorry, I don’t. Do the stereo camera nodes run without crashing if you don’t try to view them through rviz/etc? I’ve also used the Foxglove web UI for ROS2, but that was also slow/laggy for viewing video (although I believe they’ve recently implemented WebRTC). I typically just use RTP/RTSP/WebRTC for remotely viewing video (i.e. by using the video_output node).
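
For example, something like this on the Jetson (the topic name and port here are placeholders, substitute your own, and <pc-ip> is your PC’s address):

ros2 launch ros_deep_learning video_output.ros2.launch topic:=/video_source/raw output:=rtp://<pc-ip>:1234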

They work perfectly if I view them separately, but when I want to watch both at the same time, rviz or rqt crashes.

Does RTP/etc. let you see the image as a topic? Because all I want is to have the topic available, so I can run image_pipeline on the PC instead of the Jetson.

By the way, I modified your video_source code to stream both cameras from the same node, and apparently it works well: not perfect, but fluid enough to be usable.

This is how the code ended up; if you are interested in reading it, just let me know about any bug you find:

#include "ros_compat.h"
#include "image_converter.h"

#include <jetson-utils/videoSource.h>



// globals	
videoSource* stream1 = NULL;
videoSource* stream2 = NULL;
imageConverter* image_cvt1 = NULL;
imageConverter* image_cvt2 = NULL;
Publisher<sensor_msgs::Image> image_pub1 = NULL;
Publisher<sensor_msgs::Image> image_pub2 = NULL;


// acquire and publish camera frames from both streams
bool acquireFrame()
{
	imageConverter::PixelType* nextFrame1 = NULL;
	imageConverter::PixelType* nextFrame2 = NULL;

	// get the latest frame from each stream
	if( !stream1->Capture(&nextFrame1, 1000) )
	{
		ROS_ERROR("failed to capture next frame from stream 1");
		return false;
	}

	if( !stream2->Capture(&nextFrame2, 1000) )
	{
		ROS_ERROR("failed to capture next frame from stream 2");
		return false;
	}

	// assure correct image size
	if( !image_cvt1->Resize(stream1->GetWidth(), stream1->GetHeight(), imageConverter::ROSOutputFormat) )
	{
		ROS_ERROR("failed to resize camera image converter 1");
		return false;
	}

	if( !image_cvt2->Resize(stream2->GetWidth(), stream2->GetHeight(), imageConverter::ROSOutputFormat) )
	{
		ROS_ERROR("failed to resize camera image converter 2");
		return false;
	}

	// populate the messages
	sensor_msgs::Image msg1;
	sensor_msgs::Image msg2;

	if( !image_cvt1->Convert(msg1, imageConverter::ROSOutputFormat, nextFrame1) )
	{
		ROS_ERROR("failed to convert video stream1 frame to sensor_msgs::Image");
		return false;
	}

	if( !image_cvt2->Convert(msg2, imageConverter::ROSOutputFormat, nextFrame2) )
	{
		ROS_ERROR("failed to convert video stream2 frame to sensor_msgs::Image");
		return false;
	}

	// populate the headers - both messages share the same timestamp
	// so downstream stereo processing can synchronize the pair
	msg1.header.stamp = ROS_TIME_NOW();
	msg1.header.frame_id = "camera_frame";
	msg2.header.stamp = msg1.header.stamp;
	msg2.header.frame_id = msg1.header.frame_id;

	// publish the messages
	image_pub1->publish(msg1);
	image_pub2->publish(msg2);
	ROS_DEBUG("published %ux%u video frames", stream1->GetWidth(), stream1->GetHeight());

	return true;
}



// node main loop
int main(int argc, char **argv)
{
	/*
	 * create node instance
	 */
	ROS_CREATE_NODE("video_source");

	/*
	 * declare parameters
	 */
	videoOptions video_options;

	std::string resource_left_str;
	std::string resource_right_str;
	std::string codec_str;
	std::string flip_str;
	std::string frame_id;
	std::string topic;

	int video_width = video_options.width;
	int video_height = video_options.height;
	int latency = video_options.latency;
	
	ROS_DECLARE_PARAMETER("resource_left", resource_left_str);
    ROS_DECLARE_PARAMETER("resource_right", resource_right_str);
	ROS_DECLARE_PARAMETER("codec", codec_str);
	ROS_DECLARE_PARAMETER("width", video_width);
	ROS_DECLARE_PARAMETER("height", video_height);
	ROS_DECLARE_PARAMETER("framerate", video_options.frameRate);
	ROS_DECLARE_PARAMETER("loop", video_options.loop);
	ROS_DECLARE_PARAMETER("flip", flip_str);
	ROS_DECLARE_PARAMETER("latency", latency);
    ROS_DECLARE_PARAMETER("frame_id", frame_id);
    ROS_DECLARE_PARAMETER("topic", topic);
	
	/*
	 * retrieve parameters
	 */
	ROS_GET_PARAMETER("resource_left", resource_left_str);
    ROS_GET_PARAMETER("resource_right", resource_right_str);
	ROS_GET_PARAMETER("codec", codec_str);
	ROS_GET_PARAMETER("width", video_width);
	ROS_GET_PARAMETER("height", video_height);
	ROS_GET_PARAMETER("framerate", video_options.frameRate);
	ROS_GET_PARAMETER("loop", video_options.loop);
	ROS_GET_PARAMETER("flip", flip_str);
	ROS_GET_PARAMETER("latency", latency);
    ROS_GET_PARAMETER("frame_id", frame_id);
    ROS_GET_PARAMETER("topic", topic);

	if( resource_left_str.size() == 0 )
	{
		ROS_ERROR("resource_left param wasn't set - please set it to the input device/filename/URL");
		return 0;
	}

	if( resource_right_str.size() == 0 )
	{
		ROS_ERROR("resource_right param wasn't set - please set it to the input device/filename/URL");
		return 0;
	}

	if( codec_str.size() != 0 )
		video_options.codec = videoOptions::CodecFromStr(codec_str.c_str());

	if( flip_str.size() != 0 )
		video_options.flipMethod = videoOptions::FlipMethodFromStr(flip_str.c_str());
	
	video_options.width = video_width;
	video_options.height = video_height;
	video_options.latency = latency;
	
	ROS_INFO("opening video source: %s", resource_left_str.c_str());
    ROS_INFO("opening video source: %s", resource_right_str.c_str());

	/*
	 * open video source
	 */
	stream1 = videoSource::Create(resource_left_str.c_str(), video_options);
    stream2 = videoSource::Create(resource_right_str.c_str(), video_options);

	if( !stream1 )
	{
		ROS_ERROR("failed to open video source 1");
		return 0;
	}
    
	if( !stream2 )
	{
		ROS_ERROR("failed to open video source 2");
		return 0;
	}


	/*
	 * create image converter
	 */
	image_cvt1 = new imageConverter();
	image_cvt2 = new imageConverter();

	if( !image_cvt1 )
	{
		ROS_ERROR("failed to create imageConverter 1");
		return 0;
	}

	if( !image_cvt2 )
	{
		ROS_ERROR("failed to create imageConverter 2");
		return 0;
	}

    
	/*
	 * advertise publisher topics
	 */
	ROS_CREATE_PUBLISHER(sensor_msgs::Image, "/left/image_raw", 2, image_pub1);
	ROS_CREATE_PUBLISHER(sensor_msgs::Image, "/right/image_raw", 2, image_pub2);


	/*
	 * start the camera streams
	 */
	if( !stream1->Open() )
	{
		ROS_ERROR("failed to start streaming video source 1");
		return 0;
	}

	if( !stream2->Open() )
	{
		ROS_ERROR("failed to start streaming video source 2");
		return 0;
	}


	/*
	 * start publishing video frames
	 */
	while( ROS_OK() )
	{
		if( !acquireFrame() )
		{
			if( !stream1->IsStreaming() )
			{
				ROS_INFO("stream 1 is closed or reached EOS, exiting node...");
				break;
			}

			if( !stream2->IsStreaming() )
			{
				ROS_INFO("stream 2 is closed or reached EOS, exiting node...");
				break;
			}
		}

		if( ROS_OK() )
			ROS_SPIN_ONCE();
	}


	/*
	 * free resources
	 */
	delete stream1;
	delete stream2;
	delete image_cvt1;
	delete image_cvt2;

	return 0;
}

They use normal RTP/RTSP protocols outside of ROS, so you would view them through VLC or another network stream viewer. I just use them for visualization and front-end UI.
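
For example, if the node is streaming H.264 RTP to port 1234, a receive pipeline on the PC along these lines should work (the port is a placeholder; the caps are the usual ones for H.264 over RTP):

gst-launch-1.0 udpsrc port=1234 caps="application/x-rtp,media=video,clock-rate=90000,encoding-name=H264" ! rtph264depay ! avdec_h264 ! videoconvert ! autovideosink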

OK cool, glad you got it working! That’s interesting that when both cameras are running in the same node (process), it works okay.

