
nvjmi 🌴

1. Introduction

A video encode/decode library wrapping the Jetson Multimedia API. It is adapted from https://github.com/jocover/jetson-ffmpeg and https://github.com/dusty-nv/jetson-utils, is not integrated into FFmpeg, and can be used standalone. Features:

  1. H.264 decoding.
  2. Hardware scaling performed directly after decoding.
  3. Hardware color-space conversion performed directly after decoding.
  4. Supports JetPack 4.3 and 4.4.
  5. For JetPack 4.5, use the matching Multimedia API: update the files under include/ and common/ with those from /usr/src/jetson_multimedia_api in JetPack 4.5.

So far only the decoder has been ported; the encoder has not been done yet.

For decoder API usage, see the interface documentation in nvjmi.h.

2. Usage

  1. Build
    Run make to build the nvjmi shared library.

  2. Example
    Typical use of the nvjmi API:

if(jmi_ctx_ == nullptr) {
  jmi::nvJmiCtxParam jmi_ctx_param{};

  if(rsz_w > 0 && rsz_h > 0){
      jmi_ctx_param.resize_width = rsz_w;
      jmi_ctx_param.resize_height = rsz_h;
  }

  if ("H264" == m_pRtspClient->GetCodeName()) {
      jmi_ctx_param.coding_type = jmi::NV_VIDEO_CodingH264;
  }
  else if ("H265" == m_pRtspClient->GetCodeName()) {
      jmi_ctx_param.coding_type = jmi::NV_VIDEO_CodingHEVC;
  }
  string dec_name = "dec-" + session_id();
  jmi_ctx_ = jmi::nvjmi_create_decoder(dec_name.data(), &jmi_ctx_param);
}

// decode with Jetson NVDEC
jmi::nvPacket nvpacket;

nvpacket.payload_size = dataLen;
nvpacket.payload = data;

int ret{};
ret = jmi::nvjmi_decoder_put_packet(jmi_ctx_, &nvpacket);
if(ret == jmi::NVJMI_ERROR_STOP) {
  LOG_INFO(VDO_RTSP_LOG, "[{}] frameCallback: nvjmi decode error, frame callback EOF!", m_ip);
}

while (ret >= 0) {
  jmi::nvFrameMeta nvframe_meta;
  ret = jmi::nvjmi_decoder_get_frame_meta(jmi_ctx_, &nvframe_meta);
  if (ret < 0) break;

  Buffer buf;
  buf.allocate(nvframe_meta.width, nvframe_meta.height, 3, nvframe_meta.payload_size / nvframe_meta.height);
  jmi::nvjmi_decoder_retrieve_frame_data(jmi_ctx_, &nvframe_meta, (void*)buf.getData());     
}
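The buf.allocate(...) call above sizes the destination buffer from the frame metadata. As a standalone illustration of that arithmetic (FrameMeta, row_pitch, and required_bytes are illustrative names, not part of nvjmi.h), assuming payload_size / height is treated as the bytes-per-row pitch:

```cpp
#include <cstddef>

// Illustrative stand-in for the jmi::nvFrameMeta fields used above.
struct FrameMeta {
    int width;
    int height;
    std::size_t payload_size; // total bytes of the retrieved frame
};

// Bytes per row, as derived in the demo: payload bytes divided by rows.
std::size_t row_pitch(const FrameMeta& m) {
    return m.payload_size / static_cast<std::size_t>(m.height);
}

// Total bytes the destination buffer must hold so a full frame
// can be copied into it without overrunning.
std::size_t required_bytes(const FrameMeta& m) {
    return row_pitch(m) * static_cast<std::size_t>(m.height);
}
```

For a packed RGB frame with no padding the pitch is simply width * 3; deriving it from payload_size instead would also stay correct if the driver padded each row.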

3. FAQ

  1. Q: The error nvbuf_utils: Could not get EGL display connection appears, and eglGetDisplay(EGL_DEFAULT_DISPLAY) returns NULL.
    A: 1> Run unset DISPLAY in the SSH terminal, then run the program again.
    2> Or edit /etc/profile (vim /etc/profile) to add unset DISPLAY, apply it with source /etc/profile, then reboot the machine.
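The fix above as shell commands (the persistent variant is shown as comments because it requires root and a reboot):

```shell
# One-off fix for the current SSH session: clear the forwarded X11
# display variable so EGL stops looking for a display connection.
unset DISPLAY

# Persistent fix (run as root, then reboot):
#   echo 'unset DISPLAY' >> /etc/profile
#   source /etc/profile
#   reboot
```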

nvjmi's People

Contributors

fan-chao

nvjmi's Issues

Question about parsing the stored decoded data

Hello,
In nvjmi_dec.cpp at line 309, the data size is defined as follows:

if (!ctx->frame_size)
{
    ctx->frame_size = ctx->resize_width*ctx->resize_height * 3 * sizeof(unsigned char);
 }

The subsequent memcpy copies frame_size bytes.

So I save the decoded image with the following code, but the image comes out empty (all black).

Mat frame (width, height, CV_8UC3);
ret = jmi::nvjmi_decoder_retrieve_frame_data(jmi_ctx_, &nvframe_meta, (void *) frame.data);
if (!ret)
{
    imwrite(root+ std::to_string(index) + ".jpg", frame);
    av_packet_unref(packet);
    index++;
}
frame.release();

The demo you provided allocates the buffer like this:
buf.allocate(nvframe_meta.width, nvframe_meta.height, 3, nvframe_meta.payload_size / nvframe_meta.height);
What exactly is nvframe_meta.payload_size / nvframe_meta.height used for? What format does a buffer allocated this way hold — is it NV12?

NvEGLImageFromFd: No EGLDisplay to create EGLImage Error while mapping dmabuf fd (1210) to EGLImage

I am running a headless Jetson connected over SSH. After building and running the code below, I get the errors in the title:

NvEGLImageFromFd: No EGLDisplay to create EGLImage
Error while mapping dmabuf fd (1210) to EGLImage
NvEGLImageFromFd: No EGLDisplay to create EGLImage
Error while mapping dmabuf fd (1210) to EGLImage
NvEGLImageFromFd: No EGLDisplay to create EGLImage
Error while mapping dmabuf fd (1210) to EGLImage
NvEGLImageFromFd: No EGLDisplay to create EGLImage
#include <string>
#include "nvjmi.h"

#define __STDC_CONSTANT_MACROS

#ifdef _WIN32
//Windows
extern "C" {
#include "libavcodec/avcodec.h"
#include "libavformat/avformat.h"
#include "libswscale/swscale.h"
#include "libavutil/imgutils.h"
};
#else
//Linux...
#ifdef __cplusplus
extern "C"
{
#endif
#include <libavcodec/avcodec.h>
#include <libavformat/avformat.h>
#include <libswscale/swscale.h>
#include <libavutil/imgutils.h>
#ifdef __cplusplus
};
#endif
#endif

int main() {
    AVFormatContext *pFormatCtx;
    AVPacket *packet;

    av_register_all();
    avformat_network_init();
    pFormatCtx = avformat_alloc_context();
    int height = 512;
    int width = 512;
    const char *filepath = "/tmp/tmp.R2FKolG3xe/data/sample_outdoor_car_1080p_10fps.h264";
    if (avformat_open_input(&pFormatCtx, filepath, NULL, NULL) != 0) {
        printf("Couldn't open input stream.\n");
        return -1;
    }
    if (avformat_find_stream_info(pFormatCtx, NULL) < 0) {
        printf("Couldn't find stream information.\n");
    }

    packet = (AVPacket *) av_malloc(sizeof(AVPacket));
    av_dump_format(pFormatCtx, 0, filepath, 0);

    jmi::nvJmiCtxParam jmi_ctx_param;
    if (width > 0 && height > 0) {
        jmi_ctx_param.resize_width = width;
        jmi_ctx_param.resize_height = height;
    }

    jmi_ctx_param.coding_type = jmi::NV_VIDEO_CodingH264;
    std::string dec_name = "test_dec";
    jmi::nvJmiCtx *jmi_ctx_ = jmi::nvjmi_create_decoder(dec_name.data(), &jmi_ctx_param);
    jmi::nvPacket  nvpacket;
    while (av_read_frame(pFormatCtx, packet) >= 0) {
        nvpacket.payload_size = packet->size;
        nvpacket.payload = packet->data;
        int ret;
        ret = jmi::nvjmi_decoder_put_packet(jmi_ctx_, &nvpacket);
        if(ret == jmi::NVJMI_ERROR_STOP) {
            printf("nvjmi decode error, frame callback EOF!\n");
        }

    }
    if (packet != nullptr) {
        av_free(packet);
    }
    printf("Exit success\n");
    return 0;
}

Hello, could you provide a simple demo of how to call the library?

Hello, I wrote an RTSP client that parses the RTSP stream. If I write the data from the callback to a file and decode it with the official 00_video_decode sample, it works fine. But when I call your wrapped API I always get a segmentation fault. Am I calling it incorrectly? The gdb output is:
写入数据长度=298781
fps=0
video_type=51
height=0
width=0
frame_timestamp_usec=0
channelid=0
payload_size=298781
pts=0
ctx[0]=0x55555979d0

Thread 12 "test" received signal SIGSEGV, Segmentation fault.
[Switching to Thread 0x7fa37fdf30 (LWP 4923)]
0x0000007fb7f73024 in jmi::nvjmi_decoder_put_packet (ctx=0x55555979d0, packet=0x0) at nvjmi_dec.cpp:485
warning: Source file is more recent than executable.
485 if (packet->payload_size == 0){
(gdb) bt
#0 0x0000007fb7f73024 in jmi::nvjmi_decoder_put_packet (ctx=0x55555979d0, packet=0x0) at nvjmi_dec.cpp:485
#1 0x0000005555556594 in CallBackStreamFun(void*, int*, int, int, char*, RTSP_FRAME_INFO*, void*) ()
#2 0x0000007fb7f2a5e4 in CRtspParent::RecvData (this=this@entry=0x5555598fc0, pCRtp=0x5555c30030, pUserData=pUserData@entry=0x5555598fc0)
at /home/pes/mxj/data/nfs/mahxn0/workspace/multimedia_api/librtspclient/src/CRtspParent.cpp:1142
#3 0x0000007fb7f2b500 in CRtspParent::RecvDataThread (pUserData=pUserData@entry=0x5555598fc0)
at /home/pes/mxj/data/nfs/mahxn0/workspace/multimedia_api/librtspclient/src/CRtspParent.h:68
#4 0x0000007fb7f28db8 in rtsp_StartTask (p=) at /home/pes/mxj/data/nfs/mahxn0/workspace/multimedia_api/librtspclient/src/CRtspParent.cpp:96
#5 0x0000007fb79df088 in start_thread (arg=0x7fffffecbf) at pthread_create.c:463
#6 0x0000007fb7cd94ec in thread_start () at ../sysdeps/unix/sysv/linux/aarch64/clone.S:78
(gdb)

The original calling code is as follows:
FILE *fp4 = NULL;
int CallBackStreamFun(Rtsp_Client_Handle handle, int *_channelPtr, int _mediaType, int bufflen, char *pBuf, RTSP_FRAME_INFO *_frameInfo, void *userPtr)
{
int channel_id = *_channelPtr;
printf("写入数据长度=%ld\n", (unsigned long)bufflen);
printf("fps=%d\n",_frameInfo->video_fps);
printf("video_type=%d\n", _frameInfo->video_type);
printf("height=%d\n", _frameInfo->video_height);
printf("width=%d\n", _frameInfo->video_width);
printf("frame_timestamp_usec=%d\n", _frameInfo->frame_timestamp_usec);
printf("channelid=%d\n", channel_id);

nvPacket *packet;
packet->flags = 0;
packet->payload_size = (unsigned long)bufflen;
packet->payload = (unsigned char*)pBuf;
packet->pts = (unsigned long)_frameInfo->frame_timestamp_usec;
printf("payload_size=%ld\n", packet->payload_size);
printf("pts=%ld\n", packet->pts);
printf("ctx[%d]=%p\n", channel_id, ctx[channel_id]);
nvjmi_decoder_put_packet(ctx[channel_id], packet);
printf("put packet success\n");

// nvFrameMeta *frame_data;
// nvjmi_decoder_get_frame_meta(ctx[channel_id], frame_data);

return 0;

}

int CallBackStreamFun1(Rtsp_Client_Handle handle, int *_channelPtr, int _mediaType, int bufflen, char *pBuf, RTSP_FRAME_INFO *_frameInfo, void *userPtr)
{
if (NULL == fp4)
{
fp4 = fopen("../data/out.264", "wb");
int ret = fwrite(pBuf, 1, bufflen, fp4);
}
else
{
int ret = fwrite(pBuf, 1, bufflen, fp4);
}
return 0;
}

// status callback
int StatueCallBack(int iRecvSum, int iRecvLostSum, int iNowStreamFlag, void *pUserData);
int StatueCallBack(int iRecvSum, int iRecvLostSum, int iNowStreamFlag, void *pUserData)
{
printf(">>>>>> pUserData[%s] iRecvSum[%d] iRecvLostSum[%d] iNowStreamFlag[%d] DisConnectStatue: ",
pUserData, iRecvSum, iRecvLostSum, iNowStreamFlag);
printf("\n");
}

int main()
{
int Ret;
nvJmiCtxParam ctx_param;
ctx_param.coding_type = NV_VIDEO_CodingH264;
ctx_param.resize_height = 1080;
ctx_param.resize_width = 1920;
// start parsing the stream
// step 1: initialize
Ret = RtspClient_Init_V3(16000,NULL,0);
if (Ret < 0)
{
printf("initial rtspclient_sdk failed!\n");
}
// step 2: open the stream and create a decoder

for (int i = 0; i < multimedia_num; i++)
{
    // create one handle per channel;
    handle[i] = RtspClient_Create();
    // create one decoder per channel
    ctx[i] = nvjmi_create_decoder("dev0", &ctx_param);
    Ret = RtspClient_OpenStream(handle[i], URL[i], RTP_OVER_UDP, VIDEO_CODEC_H264, CallBackStreamFun, StatueCallBack, streamID[i]);
    if (0 > Ret)
    {
        printf("RtspClient_OpenStream  Failed!!!! mahxn0 \n");
        nvjmi_decoder_close(ctx[i]);
        nvjmi_decoder_free_context(&ctx[i]);
        RtspClient_CloseStream(handle[i]);
        RtspClient_Destory(handle[i]);
        handle[i] = NULL;
        sleep(20);
        continue;
    }
}

while (1)
{
    sleep(10000000);
}
RtspClient_UnInit();
return 0;

}

Segmentation fault on the newer JetPack (5.1.1)

Hello,

I tried building the project under the newer JetPack (5.1.1), using the system-provided common/ and include/ files.
Running it produces a segmentation fault.
Could this be caused by the changes to nvbuf_utils?

I tried this fork of the project: https://github.com/ald2004/nvjmi
(adapted for 5.0.1 on top of yours).
It does run on 5.1.1, but with high-resolution streams old and new frames flicker back and forth.

Do you have plans to support the newer versions?
And have you ever run into this frame-flickering issue?
If so, could you share your debugging approach?
P.S. Compared with NVDEC on PC, the Multimedia API is really obscure. It is hard to understand why two NVIDIA products with similar functionality differ so much.

Thanks

How is this used?

For RTSP transcoding, do I first need to convert the RTSP stream to H.264 with NVIDIA's ffmpeg?

Question about multi-channel decoding time

Hi, it's me again.

The official spec for 1080p is 16 channels of H.264 decoding and 32 channels of H.265.
In my tests, H.264 tops out at 15 channels, and so does H.265; one more channel and the per-frame time shoots up.
Looking at the code, most of the time is spent in the NV12 -> RGBA conversion; at that channel count it takes about 50 ms:

decode one frame cost  0 s+ 52 ms 

t_use=50 ms

t_use=49 ms

t_use=50 ms

t_use=50 ms

t_use=50 ms

gettimeofday(&ctx->t_start,NULL);
// do vic conversion conversion: color map convert (NV12@res#1 --> RGBA packed) and scale
ret = ctx->vic_converter->convert(dec_buffer->planes[0].fd, ctx->dst_dma_fd);        
TEST_ERROR(ret == -1, "Transform failed", ret);

gettimeofday(&ctx->t_end, NULL);
int t_use = (ctx->t_end.tv_sec - ctx->t_start.tv_sec) * 1000 +
            (ctx->t_end.tv_usec - ctx->t_start.tv_usec) / 1000;
printf("t_use=%d ms\n", t_use);
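For reference, the gettimeofday second/microsecond arithmetic above can be wrapped with std::chrono; a self-contained sketch in which the conversion call is replaced by an arbitrary callable:

```cpp
#include <chrono>

// Wall-clock duration of a callable in milliseconds, equivalent to the
// gettimeofday() arithmetic in the snippet above.
template <typename F>
long long elapsed_ms(F&& work) {
    const auto t_start = std::chrono::steady_clock::now();
    work(); // e.g. the vic_converter->convert(...) call
    const auto t_end = std::chrono::steady_clock::now();
    return std::chrono::duration_cast<std::chrono::milliseconds>(t_end - t_start).count();
}
```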

I plan to output NV12 directly and convert with CUDA only where detection/tracking needs it, keeping the whole pipeline in YUV.
Could you give some guidance on how to make the output NV12 or YUV420 directly? I tried the jetson-ffmpeg approach, but it seems to have problems.
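For reference, the NV12 -> RGB mapping being deferred here is a fixed color transform. A per-pixel CPU sketch of the integer BT.601 limited-range conversion (an illustrative reference, not the VIC hardware path nvjmi uses, and assuming BT.601 coefficients, which are a common default):

```cpp
#include <algorithm>

struct Rgb { unsigned char r, g, b; };

static unsigned char clamp_u8(int v) {
    return static_cast<unsigned char>(std::min(255, std::max(0, v)));
}

// One NV12 (limited-range BT.601) pixel to RGB, using the standard
// fixed-point coefficients scaled by 256.
Rgb nv12_pixel_to_rgb(unsigned char y, unsigned char u, unsigned char v) {
    const int c = static_cast<int>(y) - 16;
    const int d = static_cast<int>(u) - 128;
    const int e = static_cast<int>(v) - 128;
    return Rgb{
        clamp_u8((298 * c + 409 * e + 128) >> 8),
        clamp_u8((298 * c - 100 * d - 208 * e + 128) >> 8),
        clamp_u8((298 * c + 516 * d + 128) >> 8),
    };
}
```

In NV12 the interleaved U/V plane is at half resolution, so each 2x2 block of Y samples shares one (u, v) pair.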

Why does multi-channel decoding run into jmi::NVJMI_ERROR_STOP after a while?

jmi::nvPacket nvpacket;

nvpacket.payload_size = dataLen;
nvpacket.payload = data;

int ret{};
ret = jmi::nvjmi_decoder_put_packet(jmi_ctx_, &nvpacket);
if(ret == jmi::NVJMI_ERROR_STOP) {
  LOG_INFO(VDO_RTSP_LOG, "[{}] frameCallback: nvjmi decode error, frame callback EOF!", m_ip);
}

while (ret >= 0) {
  jmi::nvFrameMeta nvframe_meta;
  ret = jmi::nvjmi_decoder_get_frame_meta(jmi_ctx_, &nvframe_meta);
  if (ret < 0) break;

  Buffer buf;
  buf.allocate(nvframe_meta.width, nvframe_meta.height, 3, nvframe_meta.payload_size / nvframe_meta.height);
  jmi::nvjmi_decoder_retrieve_frame_data(jmi_ctx_, &nvframe_meta, (void*)buf.getData());
}
