Introduction
OpenCV's VideoCapture class can decode RTSP streams through a GStreamer pipeline, and its VideoWriter class can likewise push an RTSP stream through a GStreamer pipeline.
To enable this, OpenCV must be compiled with the WITH_GSTREAMER option turned on; the exact build procedure still needs to be tested and will be documented in a follow-up.
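To illustrate the VideoWriter side, here is a minimal sketch of pushing frames over RTSP through a GStreamer pipeline. The pipeline string, the x264enc/rtspclientsink elements, and the URL are illustrative assumptions, not taken from this article's test setup; rtspclientsink comes from the gst-rtsp-server plugins and requires an RTSP server that accepts published streams.
#include <opencv2/opencv.hpp>
#include <string>

int main()
{
    // Hypothetical push pipeline: appsrc receives BGR frames from OpenCV,
    // x264enc encodes them, rtspclientsink publishes to an RTSP server.
    std::string pipeline =
        "appsrc ! videoconvert ! x264enc tune=zerolatency "
        "! rtspclientsink location=rtsp://192.168.170.xxx:8554/test";
    cv::VideoWriter writer(pipeline, cv::CAP_GSTREAMER, 0 /* fourcc is ignored */,
                           15.0 /* fps */, cv::Size(1920, 1080), true /* isColor */);
    if (!writer.isOpened())
        return -1;
    cv::Mat frame(1080, 1920, CV_8UC3, cv::Scalar(0, 255, 0)); // solid green test frame
    for (int i = 0; i < 150; ++i)                              // ~10 s at 15 fps
        writer.write(frame);
    return 0;
}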
Checking whether OpenCV supports GStreamer
In OpenCV you can check the build configuration by calling the getBuildInformation function, as shown below:
#include <opencv2/opencv.hpp>
#include <iostream>

int main(void)
{
    // Print the full build configuration, including the Video I/O section.
    std::cout << cv::getBuildInformation() << std::endl;
}
Running this program prints the build configuration; under the Video I/O section you can check whether the GStreamer option is enabled (a line such as "GStreamer: YES").
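Alternatively, assuming OpenCV 4.2 or newer (where the videoio registry API is available), the backend can be queried programmatically instead of parsing the build string:
#include <opencv2/videoio.hpp>
#include <opencv2/videoio/registry.hpp>
#include <iostream>

int main()
{
    // hasBackend reports whether the GStreamer videoio backend was built in.
    bool has_gst = cv::videoio_registry::hasBackend(cv::CAP_GSTREAMER);
    std::cout << "GStreamer backend available: " << std::boolalpha << has_gst << std::endl;
    return 0;
}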
Pulling RTSP streams
OpenCV's VideoCapture class accepts video files, image sequences, or cameras as input sources. This section focuses on using the VideoCapture class to pull RTSP streaming data.
RTSP stream data can be pulled by calling the following VideoCapture constructor:
cv::VideoCapture::VideoCapture(const String& filename, int apiPreference)
where filename can be:
- the path of a video file;
- an image sequence, e.g. for images named image_00.jpg, image_01.jpg, image_02.jpg, pass image_%02d.jpg;
- the URL of a video stream, e.g. rtsp://admin:password@192.168.170.XXX;
- a GStreamer pipeline string, whose validity can be tested beforehand with gst-launch-1.0;
apiPreference selects which backend the capture uses, such as cv::CAP_FFMPEG, cv::CAP_IMAGES, or cv::CAP_DSHOW. The sketch below illustrates the input forms.
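A minimal sketch of the input forms listed above (the URL, file pattern, and pipeline are placeholders):
#include <opencv2/videoio.hpp>

int main()
{
    // Video stream URL, decoded by the FFmpeg backend.
    cv::VideoCapture from_url("rtsp://admin:password@192.168.170.xxx:554", cv::CAP_FFMPEG);
    // Image sequence image_00.jpg, image_01.jpg, ..., read by the image backend.
    cv::VideoCapture from_images("image_%02d.jpg", cv::CAP_IMAGES);
    // GStreamer pipeline string, handled by the GStreamer backend.
    cv::VideoCapture from_pipeline(
        "rtspsrc location=rtsp://192.168.170.xxx:554 ! decodebin ! videoconvert ! appsink",
        cv::CAP_GSTREAMER);
    return 0;
}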
Software decoding
#include <opencv2/core.hpp>
#include <opencv2/videoio.hpp>
#include <opencv2/highgui.hpp>
#include <iostream>
#include <stdio.h>
#include <string>

using namespace cv;
using namespace std;

int main(int, char**)
{
    Mat frame;
    //--- INITIALIZE VIDEOCAPTURE
    VideoCapture cap;
    string rtsp_url = "rtsp://admin:password@192.168.170.xxx:554";
    int apiID = cv::CAP_ANY; // 0 = autodetect default API
    // open the rtsp stream using the selected API
    cap.open(rtsp_url, apiID);
    // check if we succeeded
    if (!cap.isOpened()) {
        cerr << "ERROR! Unable to open camera\n";
        return -1;
    }
    //--- GRAB AND WRITE LOOP
    cout << "Start grabbing" << endl
         << "Press any key to terminate" << endl;
    for (;;)
    {
        // wait for a new frame from the camera and store it into 'frame'
        cap.read(frame);
        // check if we succeeded
        if (frame.empty()) {
            cerr << "ERROR! blank frame grabbed\n";
            break;
        }
        // show live and wait for a key with a timeout long enough to show images
        imshow("Live", frame);
        if (waitKey(5) >= 0)
            break;
    }
    // the camera will be deinitialized automatically in the VideoCapture destructor
    return 0;
}
The code above uses software decoding by default, i.e. the CPU does the decoding work. On NVIDIA Jetson embedded platforms, however, CPU resources are precious, so the NVDEC hardware decoder can be used to decode the stream instead.
Hardware decoding
As the VideoCapture definition shows, it accepts a GStreamer pipeline string to pull data. The hardware-decode pipeline used here runs rtspsrc → rtph264depay → nvv4l2decoder → nvvideoconvert → videoconvert → appsink.
Unlike other blog posts, this article uses nvv4l2decoder in place of the older omxh264dec and nvvideoconvert in place of nvvidconv, so the hardware-decoding code below works on both NVIDIA dGPU and Jetson platforms.
#include <opencv2/opencv.hpp>
#include <iostream>
#include <sstream>
#include <string>
#include <chrono>

int main()
{
    using std::chrono::steady_clock;
    typedef std::chrono::milliseconds milliseconds_type;

    const int interval = 15;          // report timing every `interval` frames
    std::string rtsp_url = "rtsp://admin:password@192.168.170.xxx:554";
    size_t latency = 200;             // rtspsrc jitter-buffer latency in ms
    size_t frame_width = 1920;
    size_t frame_height = 1080;
    size_t framerate = 15;

    // Build the hardware-decode pipeline: rtspsrc -> rtph264depay -> nvv4l2decoder
    // -> nvvideoconvert -> videoconvert -> appsink (BGR frames for OpenCV).
    std::stringstream ss;
    ss << "rtspsrc location=" << rtsp_url << " latency=" << latency << " ! application/x-rtp, media=video, encoding-name=H264 "
       << "! rtph264depay ! video/x-h264, clock-rate=90000, width=" << frame_width << ", height=" << frame_height << ", framerate="
       << framerate << "/1 ! nvv4l2decoder ! video/x-raw(memory:NVMM), width=" << frame_width << ", height=" << frame_height
       << ", framerate=" << framerate << "/1 ! nvvideoconvert ! video/x-raw, format=BGRx ! videoconvert ! video/x-raw, format=BGR ! appsink";
    std::cout << ss.str() << std::endl;

    cv::VideoCapture cap(ss.str(), cv::CAP_GSTREAMER);
    if (!cap.isOpened())
    {
        std::cerr << "error: failed to open camera." << std::endl;
        return -1;
    }

    cv::Mat frame;
    steady_clock::time_point start = steady_clock::now();
    size_t frame_idx = 0;
    while (true)
    {
        if (!cap.read(frame))
            continue;                 // skip failed reads
        // cv::imwrite("tmp.jpg", frame);
        ++frame_idx;
        if (frame_idx % interval == 0)
        {
            // Average per-frame time over the last `interval` frames.
            steady_clock::time_point end = steady_clock::now();
            milliseconds_type span = std::chrono::duration_cast<milliseconds_type>(end - start);
            std::cout << "it took " << span.count() / interval << " milliseconds per frame." << std::endl;
            start = end;
        }
    }
    return 0;
}
The key parameters above must match the settings of the RTSP source; you can find them on the IP camera's web page. The two important ones are the image resolution and the video frame rate.
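If the camera's web page is not at hand, a hedged alternative is to query the opened stream itself; these properties are reported by the backend and may come back as 0 for some streams:
#include <opencv2/videoio.hpp>
#include <iostream>

int main()
{
    cv::VideoCapture cap("rtsp://admin:password@192.168.170.xxx:554", cv::CAP_FFMPEG);
    if (!cap.isOpened())
        return -1;
    // Resolution and frame rate as reported by the backend.
    std::cout << "width:  " << cap.get(cv::CAP_PROP_FRAME_WIDTH)  << std::endl;
    std::cout << "height: " << cap.get(cv::CAP_PROP_FRAME_HEIGHT) << std::endl;
    std::cout << "fps:    " << cap.get(cv::CAP_PROP_FPS)          << std::endl;
    return 0;
}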
The CMakeLists.txt is as follows:
# requirement of cmake version
cmake_minimum_required(VERSION 3.5)
# project name
PROJECT(opencv_test)
# find required opencv
find_package(OpenCV REQUIRED)
# directory of opencv headers
include_directories(${OpenCV_INCLUDE_DIRS})
# directory of opencv library
link_directories(${OpenCV_LIBRARY_DIRS})
# name of executable file and path of source file
add_executable(${PROJECT_NAME} opencv_test.cpp)
# opencv libraries
target_link_libraries(${PROJECT_NAME} ${OpenCV_LIBS})
Comparison experiment
To compare the cost of software versus hardware decoding of the RTSP stream on a Jetson TX2, the average per-frame time over 50 s was measured for each; the stream runs at 15 FPS with 1080p resolution, and the timing covers only the read call, excluding imwrite.
At 15 FPS a new frame should arrive every 1000/15 ≈ 66.7 ms, so the measured average per-frame times are as expected. However, under hardware decoding the CPU usage is only about half that of software decoding, so on embedded platforms hardware decoding greatly reduces CPU load.