How to Get a Camera Feed over UDP Using GStreamer

Introduction
To control our MAVLink devices and show the camera feed in our app, we first need to understand how to receive the feed in Qt.
In this article, we will get the camera feed using C++ and Qt, with GStreamer, OpenCV, and Qt Quick as dependencies.
The complete classes are at the end of the article, and the complete project is on GitHub.
First, we have to create a class; I'll name it VideoItem. I want my class to draw directly into the QML scene, so it will inherit from QQuickPaintedItem. This lets us change the displayed image however and whenever we like, through methods such as paint and run.
Next, we need to create the necessary methods: paint, run, a method to start the system, a method to update the frame, and a method to handle a callback whenever data is received.
In the header file, we need a frame member variable: the loop keeps receiving data, rewriting this variable, and the displayed frame changes accordingly.
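To give you the shape up front, here is roughly what the header will contain (the complete version is at the end of the article):
class VideoItem : public QQuickPaintedItem {
    Q_OBJECT
public:
    VideoItem(QQuickItem *parent = nullptr);
    void paint(QPainter *painter) override;  // draws _frame onto the item
    Q_INVOKABLE void start_gst();            // builds and starts the pipeline
public slots:
    void updateFrame(const cv::Mat &frame);  // receives frames from the callback
private:
    int port;
    cv::Mat _frame;                          // the frame the loop keeps rewriting
    GstElement *video_pipe;
    static GstFlowReturn callback(GstElement *sink, gpointer data);
    void run();
};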
Override the Paint Method
First, let's start with the easy methods. To get the frame on screen, we need a way to write our frame into an image and paint it. Remember, the class will be a QML object, so we have to implement paint ourselves.
Let's make a quick check on the frame first, in case it is empty, just as a safety measure. If it isn't empty, we can use this to turn our OpenCV frame into a QImage:
QImage img(_frame.data, _frame.cols, _frame.rows, _frame.step, QImage::Format_RGB888);
The QImage constructor needs the frame's data pointer, its columns and rows (effectively width and height), the step size (bytes per row), and the color format; in this case we use RGB888. (One caveat: the pipeline requests BGR data, so red and blue may come out swapped; on Qt 5.14+ you can pass QImage::Format_BGR888 instead.)
Because we inherit from QQuickPaintedItem, paint receives the painter directly, so we can draw the image with:
painter->drawImage(boundingRect(), img);
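Putting the check, the conversion, and the draw call together, the whole override is just:
void VideoItem::paint(QPainter *painter) {
    if (!_frame.empty()) { // skip painting until the first frame arrives
        QImage img(_frame.data, _frame.cols, _frame.rows, _frame.step, QImage::Format_RGB888);
        painter->drawImage(boundingRect(), img);
    }
}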
Running and Updating the Frame
Now that we have overridden our paint method, we need to actually push updates to the frame so the paint method gets used.
In the updateFrame method, we clone the incoming frame into our member variable _frame. Then we call update(), and this part is finished.
void VideoItem::updateFrame(const cv::Mat &frame) {
    _frame = frame.clone();
    update(); // Trigger repaint in the GUI thread
}
Later, when our QML component completes, we will have to fire a method that starts everything. For that, we add a small run method that simply calls start_gst().
And just like that, our small methods are done, and we can dive into the big ones.
void VideoItem::run() {
    start_gst();
}
Starting the Loop
Now the fun part begins.
First, we need a pipeline from the camera's port into the program, and we will use GStreamer for that. A GStreamer pipeline can pull raw data from almost any kind of connection, but right now we only care about the camera:
g_strdup_printf("udpsrc port=%d ! application/x-rtp, payload=96 ! rtph264depay ! h264parse ! avdec_h264 ! videoconvert ! video/x-raw,format=BGR ! videoconvert ! appsink name=sink", port);
We build this pipeline description so we can launch it. It listens on a custom UDP port for an RTP/H.264 stream, depayloads and decodes it, and converts the frames to raw BGR video for the appsink.
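If you want to sanity-check the receiving end before wiring it into Qt, you can generate a test feed from a terminal. This is only a sketch, assuming GStreamer's command-line tools and the x264 encoder plugin are installed, and that the port matches the one in the code:
gst-launch-1.0 videotestsrc ! videoconvert ! x264enc tune=zerolatency ! rtph264pay ! udpsink host=127.0.0.1 port=5600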
Then, of course, we can error-check it by adding a GError variable.
Now we launch the description using:
video_pipe = gst_parse_launch(descr, &error);
If anything goes wrong, GStreamer writes the details into error, so we can check it if we like.
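Put together, the launch-and-check sequence looks like this:
GError *error = nullptr;
video_pipe = gst_parse_launch(descr, &error);
g_free(descr); // the description string is no longer needed
if (error) {
    g_printerr("Error creating pipeline: %s\n", error->message);
    g_error_free(error);
    return;
}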
Now that we've created the pipeline, we need to get our decoded frames out of it, and appsink is a great fit for that. We can grab the sink directly using:
GstElement *sink = gst_bin_get_by_name(GST_BIN(video_pipe), "sink");
We now have a reference to the sink, so let's use it to set some parameters.
//For enabling signal emission on new samples:
gst_app_sink_set_emit_signals(GST_APP_SINK(sink), true);
//For dropping old buffers when the queue is full (the complete class sets this too):
gst_app_sink_set_drop(GST_APP_SINK(sink), true);
//For limiting how many buffers can queue up:
gst_app_sink_set_max_buffers(GST_APP_SINK(sink), 2);
//For choosing whether the sink synchronizes rendering to the clock:
gst_base_sink_set_sync(GST_BASE_SINK(sink), false);
//When the sink receives a decoded frame it emits "new-sample", so we
//connect that signal to our callback method to handle the data:
g_signal_connect(sink, "new-sample", G_CALLBACK(callback), this);
//After setting all the parameters, we can finally set the pipeline to the playing state:
gst_element_set_state(video_pipe, GST_STATE_PLAYING);
If everything is correct, the pipeline is now playing and ready to receive data.
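One thing the class doesn't cover is teardown. A minimal sketch (not part of the original code; you would also need to declare ~VideoItem() in the header) that stops and releases the pipeline when the item is destroyed:
VideoItem::~VideoItem() {
    if (video_pipe) {
        gst_element_set_state(video_pipe, GST_STATE_NULL); // stop the pipeline
        gst_object_unref(video_pipe);                      // release our reference
    }
}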
Callback & Handling the Raw Data
We have everything in place now; we just need to tie it all together in our callback method.
The callback is static, so in order to fire our update event we first recover a pointer to our instance from the user-data argument (the this we passed to g_signal_connect):
VideoItem *self = static_cast<VideoItem*>(data);
This will help us later.
The sink hands us a sample that carries the buffer, the caps (format description), and so on, so we extract the variables we need:
//Get the sample from the sink:
GstSample *sample = gst_app_sink_pull_sample(GST_APP_SINK(sink));
//Get the buffer of the sample:
GstBuffer *buf = gst_sample_get_buffer(sample);
//Get the caps of the sample:
GstCaps *caps = gst_sample_get_caps(sample);
//Get the format structure from the caps:
GstStructure *structure = gst_caps_get_structure(caps, 0);
Normally we know the dimensions of our camera frames in advance, but if we don't, we can read them out of the caps structure:
int width = 0, height = 0;
gst_structure_get_int(structure, "width", &width);
gst_structure_get_int(structure, "height", &height);
Next, we create a map to access the buffer's raw bytes:
GstMapInfo map;
gst_buffer_map(buf, &map, GST_MAP_READ);
Now we use our self pointer to copy the mapped bytes into the frame variable:
self->_frame = cv::Mat(height, width, CV_8UC3, map.data).clone();
We have our frame now. (Note that this constructor assumes tightly packed rows; GStreamer may pad row strides, so for unusual widths you might need to pass the actual stride.) We can unmap the buffer and unref the sample to avoid any kind of leak:
gst_buffer_unmap(buf, &map);
gst_sample_unref(sample);
Lastly, we fire the update method we created earlier. The callback runs on GStreamer's streaming thread, not the GUI thread, which is why we use Qt::QueuedConnection here:
QMetaObject::invokeMethod(self, "updateFrame", Qt::QueuedConnection, Q_ARG(cv::Mat, self->_frame));
If everything succeeds, we return GST_FLOW_OK as the last line of the method.
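One defensive detail worth knowing: gst_app_sink_pull_sample returns null at end-of-stream or while flushing, so it is safer to guard for that before touching the sample (the complete class below includes this check):
GstSample *sample = gst_app_sink_pull_sample(GST_APP_SINK(sink));
if (!sample) // null at EOS or while flushing
    return GST_FLOW_EOS;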
Constructor & QML
The only thing left is creating the constructor and registering the class with QML.
VideoItem::VideoItem(QQuickItem *parent) : QQuickPaintedItem(parent), port(5600) {
    qRegisterMetaType<cv::Mat>("cv::Mat");
    _frame = cv::Mat(480, 640, CV_8UC3); // Initialize a 480x640 color (3-channel) matrix
    gst_init(nullptr, nullptr);
    run();
}
qRegisterMetaType is necessary for Qt to recognise cv::Mat as a valid type in queued signal/slot connections, which our invokeMethod call relies on. We create a dummy frame just to avoid an error before the first real frame arrives. Don't worry, this will not break the image, as we overwrite this variable each time a frame is received. Then we start the inner loop using run().
Now onto the Qt side. As VideoItem is a custom component, we need to register it accordingly:
qmlRegisterType<VideoItem>("CustomTypes", 1, 0, "VideoItem");
This call registers VideoItem as a QML type inside a module named CustomTypes, version 1.0. (Registering the type is enough; you don't need to construct a VideoItem instance yourself.)
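For reference, a minimal main.cpp that ties this together might look like the sketch below; the QML file path is an assumption, so adjust it to your project:
#include <QGuiApplication>
#include <QQmlApplicationEngine>
#include <QtQml>
#include "videoitem.h"

int main(int argc, char *argv[]) {
    QGuiApplication app(argc, argv);

    // Register the custom type before loading any QML that uses it
    qmlRegisterType<VideoItem>("CustomTypes", 1, 0, "VideoItem");

    QQmlApplicationEngine engine;
    engine.load(QUrl(QStringLiteral("qrc:/main.qml"))); // path is an assumption
    return app.exec();
}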
In the qml file, we can import this using:
import CustomTypes 1.0
After the import, we can use the object directly in QML:
VideoItem {
    id: video
    anchors {
        fill: parent
    }
    Component.onCompleted: {
        video.start_gst()
    }
}
Notice that we called our firing method to start the outer loop when the component is completed. (One caveat: the constructor in the complete class also calls run(), so with both in place you would start two pipelines; in practice, keep only one of the two start calls.)
Conclusion & Last Words
Getting the right parameters is hard, but researching this was way harder. I honestly expected much more in-depth documentation and tutorials for beginners like me 😀
Yes, documentation exists, but for various reasons about 90% of it is extremely specific. Because of that, I had to put in tens of hours of research just to integrate this into Qt. Furthermore, from what I found, there was next to no documentation or tutorials from other people on how to use this pipeline end to end and explain everything along the way.
But I've managed to implement both the QML side and the C++ side in a single class. Now go wild :d
I hope you enjoyed this article and thank you.
Complete Classes
VideoItem.h
#ifndef VIDEOITEM_H
#define VIDEOITEM_H

#include "gst/gstelement.h"
#include <QQuickPaintedItem>
#include <opencv4/opencv2/core/mat.hpp>

class VideoItem : public QQuickPaintedItem {
    Q_OBJECT
public:
    VideoItem(QQuickItem *parent = nullptr);
    void paint(QPainter *painter) override;
    Q_INVOKABLE void start_gst();

public slots:
    void updateFrame(const cv::Mat &frame);

signals:
    void frameUpdated();

private:
    int port;
    cv::Mat _frame;
    GstElement *video_pipe = nullptr;
    static GstFlowReturn callback(GstElement *sink, gpointer data);
    void run();
};

#endif // VIDEOITEM_H
VideoItem.cpp
#include "videoitem.h"
#include <gst/gst.h>
#include <gst/app/gstappsink.h>
#include <QImage>
#include <QPainter>
VideoItem::VideoItem(QQuickItem *parent) : QQuickPaintedItem(parent), port(5600) {
    qRegisterMetaType<cv::Mat>("cv::Mat");
    _frame = cv::Mat(480, 640, CV_8UC3); // Initialize a 480x640 color (3-channel) matrix
    gst_init(nullptr, nullptr);
    run();
}

void VideoItem::paint(QPainter *painter) {
    if (!_frame.empty()) { // Check if the frame is not empty
        // If colors look swapped, try QImage::Format_BGR888 (Qt 5.14+)
        QImage img(_frame.data, _frame.cols, _frame.rows, _frame.step, QImage::Format_RGB888);
        painter->drawImage(boundingRect(), img);
    }
}

GstFlowReturn VideoItem::callback(GstElement *sink, gpointer data) {
    VideoItem *self = static_cast<VideoItem*>(data);
    GstSample *sample = gst_app_sink_pull_sample(GST_APP_SINK(sink));
    if (!sample) // null at EOS or while flushing
        return GST_FLOW_EOS;
    GstBuffer *buf = gst_sample_get_buffer(sample);
    GstCaps *caps = gst_sample_get_caps(sample);
    GstStructure *structure = gst_caps_get_structure(caps, 0);
    int width = 0, height = 0;
    gst_structure_get_int(structure, "width", &width);
    gst_structure_get_int(structure, "height", &height);
    GstMapInfo map;
    gst_buffer_map(buf, &map, GST_MAP_READ);
    self->_frame = cv::Mat(height, width, CV_8UC3, map.data).clone();
    gst_buffer_unmap(buf, &map);
    gst_sample_unref(sample);
    QMetaObject::invokeMethod(self, "updateFrame", Qt::QueuedConnection, Q_ARG(cv::Mat, self->_frame));
    return GST_FLOW_OK;
}

void VideoItem::start_gst() {
    gchar *descr = g_strdup_printf("udpsrc port=%d ! application/x-rtp, payload=96 ! rtph264depay ! h264parse ! avdec_h264 ! videoconvert ! video/x-raw,format=BGR ! videoconvert ! appsink name=sink", port);
    GError *error = nullptr;
    video_pipe = gst_parse_launch(descr, &error);
    g_free(descr);
    if (error) {
        g_printerr("Error creating pipeline: %s\n", error->message);
        g_error_free(error);
        return;
    }
    GstElement *sink = gst_bin_get_by_name(GST_BIN(video_pipe), "sink");
    if (!sink) {
        g_printerr("Error: appsink not found in the pipeline\n");
        return;
    }
    gst_app_sink_set_emit_signals(GST_APP_SINK(sink), true);
    gst_app_sink_set_drop(GST_APP_SINK(sink), true);
    gst_app_sink_set_max_buffers(GST_APP_SINK(sink), 2);
    gst_base_sink_set_sync(GST_BASE_SINK(sink), false);
    g_signal_connect(sink, "new-sample", G_CALLBACK(callback), this);
    gst_object_unref(sink); // gst_bin_get_by_name returned a new reference
    gst_element_set_state(video_pipe, GST_STATE_PLAYING);
}

void VideoItem::updateFrame(const cv::Mat &frame) {
    _frame = frame.clone();
    update(); // Trigger repaint in the GUI thread
}

void VideoItem::run() {
    start_gst();
}
main.cpp (or wherever you start your QML)
qmlRegisterType<VideoItem>("CustomTypes", 1, 0, "VideoItem");
QML code to add to your view
VideoItem {
    id: video
    anchors {
        // you can set the anchors.
    }
    Component.onCompleted: {
        video.start_gst()
    }
}