Building a Simple Player from Scratch, Part 4: Decoding Video to YUV Data with ffmpeg, Using avcodec_send_packet and avcodec_receive_frame
Decoding video to YUV data with ffmpeg
Author: Shi Zheng. Email: shizheng163@126.com. Corrections are welcome, and apologies for any inconvenience caused by errors in the text. For discussion, please send an email.
csdn: https://blog.csdn.net/shizheng163
github: https://github.com/shizheng163
Table of contents
- Decoding video to YUV data with ffmpeg
- Overview
- Error handling
- Implementation
- References
Overview
The steps of video playback, briefly:
- Demuxing: split the input video into encoded, compressed data
- Decoding: turn the compressed data into a color space (YUV, RGB, etc.)
- Rendering: draw the YUV (or other color-space) data on the display device to form the image
The ffmpeg calls used to decode video into YUV data are described below, following this flow.
ffmpeg version: 4.1
The full call flow (recovered from the original flowchart):
- Open the input: avformat_open_input, then avformat_find_stream_info to probe the stream information
- Prepare the decoder: avcodec_find_decoder, avcodec_alloc_context3, avcodec_parameters_to_context (copy the decoder parameters), avcodec_open2
- Prepare color-space conversion: sws_getContext (initialize the converter), av_image_alloc (allocate the image buffer)
- Decode loop: while av_read_frame succeeds, feed each packet with avcodec_send_packet, fetch decoded frames with avcodec_receive_frame, convert them with sws_scale, and dump the image; when reading fails (end of file), release the resources and finish
As mentioned in an earlier article in this series, the decoded image data is YUV420. If YUV420 is all you need, no image conversion is required, and the color-space conversion context does not have to be initialized.
Additionally, to compare playback speed across players, the following command was used to burn a timestamp watermark into the video:
- ffmpeg -i ./Suger.mp4 -vf "drawtext=expansion=strftime: basetime=$(date +%s -d '2019-01-12 00:00:00')000000 :text='%Y-%m-%d %H\\:%M\\:%S':fontsize=30:fontcolor=white:box=1:x=10:y=10:boxcolor=black@0.5:" -strict -2 -y "SugerTime.mp4"
2019-01-12 00:00:00 is the starting timestamp.
Error handling
- "bad dst image pointers" during processing: caused by not calling av_image_alloc.
- "Invalid data found when processing input" during decoding: caused by not calling avcodec_parameters_to_context and avcodec_open2.
Implementation
If the url being opened is a network stream, call avformat_network_init() first to initialize the network environment.
The complete code is available at https://github.com/shizheng163/MyMediaPlayer/tree/v0.3.0
.h file
cpp file
```cpp
/**
 * copyright (c) 2018-2019 shizheng. All Rights Reserved.
 * Please retain author information while you reference code
 * date: 2019-01-13
 * author: shizheng
 * email: shizheng163@126.com
 */
#include "ffdecoder.h"
#include <memory>
extern "C" {
#include <libavformat/avformat.h>
#include <libswscale/swscale.h>
#include <libavutil/imgutils.h>
}
#include "logutil.h"
#include "ffmpegutil.h"

using namespace ffmpegutil;
using namespace std;
using namespace logutil;

typedef std::shared_ptr<AVPacket> AVPacketPtr;
typedef std::shared_ptr<AVFrame> AVFramePtr;

FFDecoder::FFDecoder() :
    m_pInputFormatContext(NULL),
    m_nVideoStreamIndex(-1),
    m_pCodecContext(NULL)
{
}

FFDecoder::~FFDecoder()
{
    if (m_threadForDecode.joinable())
        m_threadForDecode.join();
    if (m_pInputFormatContext)
    {
        // avformat_close_input frees the context and resets the pointer to NULL.
        avformat_close_input(&m_pInputFormatContext);
    }
    if (m_pCodecContext)
    {
        avcodec_free_context(&m_pCodecContext);
    }
}

bool FFDecoder::InitializeDecoder(string url)
{
    m_szUrl = url;
    int ret = 0;
    ret = avformat_open_input(&m_pInputFormatContext, m_szUrl.c_str(), NULL, NULL);
    // GetStrError is a thin wrapper around av_strerror.
    if (ret < 0)
    {
        m_szErrName = MySprintf("FFDecoder open input failed, url = %s, err: %s",
                                m_szUrl.c_str(), GetStrError(ret).c_str());
        return false;
    }
    // Probe the file for stream information first.
    ret = avformat_find_stream_info(m_pInputFormatContext, NULL);
    if (ret < 0)
    {
        m_szErrName = MySprintf("FFDecoder find stream info failed, url = %s, err: %s",
                                m_szUrl.c_str(), GetStrError(ret).c_str());
        return false;
    }
    for (unsigned i = 0; i < m_pInputFormatContext->nb_streams; i++)
    {
        if (m_pInputFormatContext->streams[i]->codecpar->codec_type == AVMEDIA_TYPE_VIDEO)
        {
            m_nVideoStreamIndex = i;
            break;
        }
    }
    if (m_nVideoStreamIndex == -1)
    {
        m_szErrName = MySprintf("FFDecoder could not find video stream, url = %s", m_szUrl.c_str());
        return false;
    }
    // av_dump_format(m_pInputFormatContext, m_nVideoStreamIndex, NULL, 0);

    // Initialize the decoder.
    AVCodec * pCodec = avcodec_find_decoder(m_pInputFormatContext->streams[m_nVideoStreamIndex]->codecpar->codec_id);
    if (!pCodec)
    {
        m_szErrName = MySprintf("FFDecoder find AVCodec failed, url = %s, codec: %s",
                                m_szUrl.c_str(),
                                avcodec_get_name(m_pInputFormatContext->streams[m_nVideoStreamIndex]->codecpar->codec_id));
        return false;
    }
    m_pCodecContext = avcodec_alloc_context3(pCodec);
    // Copy the decoder parameters from the stream into the codec context.
    ret = avcodec_parameters_to_context(m_pCodecContext,
                                        m_pInputFormatContext->streams[m_nVideoStreamIndex]->codecpar);
    if (ret < 0)
    {
        m_szErrName = MySprintf("FFDecoder avcodec_parameters_to_context failed, url = %s, codec: %s, err = %s",
                                m_szUrl.c_str(),
                                avcodec_get_name(m_pInputFormatContext->streams[m_nVideoStreamIndex]->codecpar->codec_id),
                                GetStrError(ret).c_str());
        return false;
    }
    ret = avcodec_open2(m_pCodecContext, pCodec, NULL);
    if (ret < 0)
    {
        m_szErrName = MySprintf("FFDecoder AVCodec Open failed, url = %s, codec: %s, err = %s",
                                m_szUrl.c_str(),
                                avcodec_get_name(m_pInputFormatContext->streams[m_nVideoStreamIndex]->codecpar->codec_id),
                                GetStrError(ret).c_str());
        return false;
    }
    return true;
}

bool FFDecoder::StartDecodeThread()
{
    if (!m_pCodecContext)
    {
        m_szErrName = "decode context not init!";
        return false;
    }
    m_bIsRunDecodeThread = true;
    m_threadForDecode = std::thread(&FFDecoder::decodeInThread, this);
    return true;
}

void FFDecoder::SetProcessDataCallback(FFDecoder::ProcessYuvDataCallback callback)
{
    std::unique_lock<mutex> locker(m_mutexForFnProcessYuvData);
    m_fnProcssYuvData = callback;
}

void FFDecoder::SetDecodeThreadExitCallback(FFDecoder::DecodeThreadExitCallback callback)
{
    std::unique_lock<mutex> locker(m_mutexForFnThreadExit);
    m_fnThreadExit = callback;
}

void FFDecoder::StopDecodeThread()
{
    m_bIsRunDecodeThread = false;
}

float FFDecoder::GetVideoFrameRate()
{
    if (m_pInputFormatContext)
    {
        return (float)m_pInputFormatContext->streams[m_nVideoStreamIndex]->avg_frame_rate.num /
               m_pInputFormatContext->streams[m_nVideoStreamIndex]->avg_frame_rate.den;
    }
    return 0;
}

void FFDecoder::decodeInThread()
{
    bool isEof = false;
    AVFramePtr pFrameScale(av_frame_alloc(), [](AVFrame * ptr) {
        // Free the buffer allocated by av_image_alloc before freeing the frame itself.
        av_freep(&ptr->data[0]);
        av_frame_free(&ptr);
    });
    int ret = 0;
    // Initialize the image converter.
    AVStream * pVideoStream = m_pInputFormatContext->streams[m_nVideoStreamIndex];
    SwsContext * pswsContext = sws_getContext(
        pVideoStream->codecpar->width, pVideoStream->codecpar->height, (AVPixelFormat)pVideoStream->codecpar->format,
        pVideoStream->codecpar->width, pVideoStream->codecpar->height, AV_PIX_FMT_YUV420P,
        SWS_FAST_BILINEAR, 0, 0, 0);
    // Allocate the buffer that holds the converted image.
    av_image_alloc(pFrameScale->data, pFrameScale->linesize,
                   pVideoStream->codecpar->width, pVideoStream->codecpar->height, AV_PIX_FMT_YUV420P, 1);
    while (m_bIsRunDecodeThread)
    {
        AVPacketPtr ptrAVPacket(av_packet_alloc(), [](AVPacket * ptr) { av_packet_free(&ptr); });
        av_init_packet(ptrAVPacket.get());
        ret = av_read_frame(m_pInputFormatContext, ptrAVPacket.get());
        if (ret < 0)
        {
            string szErrName = GetStrError(ret);
            if (szErrName != "End of file")
                m_szErrName = MySprintf("FFMpeg Read Frame Failed, Err = %s", szErrName.c_str());
            else
            {
                MyLog(info, "FFMpeg Read Frame Complete!");
                isEof = true;
            }
            break;
        }
        if (ptrAVPacket->stream_index == m_nVideoStreamIndex)
        {
            ret = avcodec_send_packet(m_pCodecContext, ptrAVPacket.get());
            AVFramePtr pTempFrame(av_frame_alloc(), [](AVFrame * ptr) { av_frame_free(&ptr); });
            // One packet may yield zero or more frames: drain until receive_frame fails.
            while (1)
            {
                ret = avcodec_receive_frame(m_pCodecContext, pTempFrame.get());
                if (ret != 0)
                    break;
                ret = sws_scale(pswsContext, (const uint8_t * const *)pTempFrame->data, pTempFrame->linesize,
                                0, pTempFrame->height,
                                (uint8_t * const *)pFrameScale->data, pFrameScale->linesize);
                if (ret > 0)
                {
                    // Dump the three YUV420P planes: Y is w*h bytes, U and V are w*h/4 each.
                    fileutil::FileRawData data;
                    data.AppendData(pFrameScale->data[0], pVideoStream->codecpar->width * pVideoStream->codecpar->height);
                    data.AppendData(pFrameScale->data[1], pVideoStream->codecpar->width * pVideoStream->codecpar->height / 4);
                    data.AppendData(pFrameScale->data[2], pVideoStream->codecpar->width * pVideoStream->codecpar->height / 4);
                    YuvDataPtr pYuvData(new fileutil::PictureFile(data,
                                                                  pVideoStream->codecpar->width,
                                                                  pVideoStream->codecpar->height,
                                                                  fileutil::PictureFile::kFormatYuv));
                    pYuvData->m_filename = to_string(pVideoStream->codec_info_nb_frames);
                    std::unique_lock<mutex> locker(m_mutexForFnProcessYuvData);
                    if (m_fnProcssYuvData)
                        m_fnProcssYuvData(pYuvData);
                }
            }
        }
    }
    // Release the image converter.
    sws_freeContext(pswsContext);
    MyLog(m_bIsRunDecodeThread && !isEof ? err : info, "FFDecoder %s exit!\n",
          m_bIsRunDecodeThread && !isEof ? "abnormal" : "normal");
    std::unique_lock<mutex> locker(m_mutexForFnThreadExit);
    if (m_fnThreadExit)
        m_fnThreadExit(m_bIsRunDecodeThread && !isEof);
}
```
References
- A simple ffmpeg player, improved version
- Qt + ffmpeg: simple video playback code and issue notes
- FFmpeg: video decoding to YUV output, and an introduction to the av_image functions