FFmpeg odd/even field (interlaced vs. progressive) handling: commands and code
1. Command-line approach
List all supported filters:
ffmpeg -filters
Show the options of the doubleweave filter (and likewise weave):
ffmpeg -h filter=doubleweave
ffmpeg -h filter=weave
The official documentation describes these two filters as follows:
The weave takes a field-based video input and join each two sequential
fields into single frame, producing a new double height clip with half
the frame rate and half the frame count. The doubleweave works same as
weave but without halving frame rate and frame count.
In other words, after the odd and even fields are woven together, doubleweave keeps the original frame rate, while weave halves it.
https://ffmpeg.org/ffmpeg-all.html#Video-Filters
Reference (thanks to a colleague on my team for the image that set me on the trail of this post):
Both of the following commands play correctly; the second one relies on the default option first_field=top.
ffplay -vf "weave=first_field=top" -i westLife.mp4
ffplay -fflags nobuffer -vf weave udp://127.0.0.1:6017
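For comparison, doubleweave accepts the same first_field option and keeps the original frame rate (same sample file as above):
ffplay -vf "doubleweave=first_field=top" -i westLife.mp4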
2. Code implementation
1. Points to note in the implementation
(1) Note that the code below is extracted from a larger project and cannot be used as-is. Filter usage is largely boilerplate: the only weave-specific part is setting the filter's parameters; everything else is the same as for any other filter, since the overall workflow is essentially fixed regardless of which filter you use. For background on that workflow, see Lei Xiaohua's (雷神) blog.
(2) Note the change to avfilter_get_by_name("buffersink"): in FFmpeg 4.3, using the old name "ffbuffersink" makes filter creation fail with return value -12 and the error "Cannot allocate memory".
(3) Two consecutive video frames must be pushed through av_buffersrc_add_frame before av_buffersink_get_frame yields one woven frame; until then, av_buffersink_get_frame returns -11 with the error "Resource temporarily unavailable" (a minimal sketch of this feed/drain pattern follows after point (6)).
(4) Also be careful with av_buffersrc_add_frame: it takes over and releases frame->data, but it does not free the AVFrame struct itself. That is why the sample code at the end calls av_frame_get_buffer after every call to this function to re-allocate frame->data. Conversely, av_buffersink_get_frame allocates frame->data for you, but it does not allocate the AVFrame struct; that you have to allocate yourself.
int av_buffersrc_add_frame(AVFilterContext *ctx, AVFrame *frame);
int av_buffersink_get_frame(AVFilterContext *ctx, AVFrame *frame);
(5) Also note that FFmpeg 4.3 no longer ships the header libavfilter/avfiltergraph.h; its contents have been merged into libavfilter/avfilter.h.
(6) int av_buffersrc_add_frame(AVFilterContext *ctx, AVFrame *frame) also deserves attention: the AVFrame's properties (width, height, pixel format, and so on) must match what was configured on the AVFilterContext (i.e. the args variable in the code at the end); otherwise you get the following error:
Changing video frame properties on the fly is not supported by all filters.
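To make points (3), (4) and (6) concrete, here is a minimal, hedged sketch of the per-field feed/drain pattern that the full sample below implements; field_frame and woven_frame are placeholder AVFrame pointers, not names from the original project:

extern "C" {
#include <libavfilter/buffersrc.h>
#include <libavfilter/buffersink.h>
#include <libavutil/frame.h>
}

// Feed one half-height field and try to fetch a woven full-height frame.
// field_frame's width/height/format are assumed to already match the buffersrc "args" (point (6)).
static int feed_field(AVFilterContext *src_ctx, AVFilterContext *sink_ctx,
                      AVFrame *field_frame, AVFrame *woven_frame,
                      int width, int height)
{
    // (4) av_buffersrc_add_frame() takes the frame's data buffers and resets its fields;
    // the AVFrame struct itself stays allocated.
    int ret = av_buffersrc_add_frame(src_ctx, field_frame);
    if (ret < 0)
        return ret;

    // (4) Re-set the properties and re-allocate data so the caller can fill the frame again.
    field_frame->width  = width;
    field_frame->height = height;
    field_frame->format = AV_PIX_FMT_UYVY422;
    ret = av_frame_get_buffer(field_frame, 32);
    if (ret < 0)
        return ret;

    // (3) Until two fields have been queued, this returns AVERROR(EAGAIN)
    // (-11, "Resource temporarily unavailable"); that just means "feed the next field".
    return av_buffersink_get_frame(sink_ctx, woven_frame);
}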
2. Sample code
const char *filter_descr = "weave=first_field=top";

int FeedStream::init_filters(const char *filters_descr)
{
    char args[512];
    int ret;
    const AVFilter *buffersrc  = avfilter_get_by_name("buffer");
    const AVFilter *buffersink = avfilter_get_by_name("buffersink"); // note: "buffersink", not "ffbuffersink"
    AVFilterInOut *outputs = avfilter_inout_alloc();
    AVFilterInOut *inputs  = avfilter_inout_alloc();
    enum AVPixelFormat pix_fmts[] = { AV_PIX_FMT_UYVY422, AV_PIX_FMT_NONE }; // or AV_PIX_FMT_YUV420P
    AVBufferSinkParams *buffersink_params;

    filter_graph = avfilter_graph_alloc();

    /* buffer video source: the decoded frames from the decoder will be inserted here. */
    snprintf(args, sizeof(args),
             "video_size=%dx%d:pix_fmt=%d:time_base=%d/%d:pixel_aspect=%d/%d",
             decodec_ctx_v->width, decodec_ctx_v->height, AV_PIX_FMT_UYVY422,
             decodec_ctx_v->time_base.num, decodec_ctx_v->time_base.den,
             decodec_ctx_v->sample_aspect_ratio.num, decodec_ctx_v->sample_aspect_ratio.den);
    printf("args: %s\n", args);

    ret = avfilter_graph_create_filter(&buffersrc_ctx, buffersrc, "in", args, NULL, filter_graph);
    if (ret < 0)
    {
        printf("Cannot create buffer source, ret = %d\n", ret);
        return ret;
    }

    /* buffer video sink: to terminate the filter chain. */
    buffersink_params = av_buffersink_params_alloc();
    buffersink_params->pixel_fmts = pix_fmts;
    ret = avfilter_graph_create_filter(&buffersink_ctx, buffersink, "out", NULL, buffersink_params, filter_graph);
    av_free(buffersink_params);
    if (ret < 0)
    {
        printf("Cannot create buffer sink, ret = %d\n", ret);
        ErrorFunc(ret);
    }

    /* Endpoints for the filter graph. */
    outputs->name       = av_strdup("in");
    outputs->filter_ctx = buffersrc_ctx;
    outputs->pad_idx    = 0;
    outputs->next       = NULL;

    inputs->name       = av_strdup("out");
    inputs->filter_ctx = buffersink_ctx;
    inputs->pad_idx    = 0;
    inputs->next       = NULL;

    if ((ret = avfilter_graph_parse_ptr(filter_graph, filters_descr, &inputs, &outputs, NULL)) < 0)
        return ret;
    if ((ret = avfilter_graph_config(filter_graph, NULL)) < 0)
        return ret;
    return 0;
}

int FeedStream::Interlace()
{
    if (av_buffersrc_add_frame(buffersrc_ctx, uvyv_frame) < 0) // this call resets uvyv_frame->data[]
    {
        printf("Error while adding frame.\n");
        ErrorFunc(1);
    }

    /* Re-initialize the frame and re-allocate its data buffers for the next field. */
    uvyv_frame->format = AV_PIX_FMT_UYVY422;
    uvyv_frame->width  = decodec_ctx_v->width;
    uvyv_frame->height = decodec_ctx_v->height;
    uvyv_frame->pts    = 0;
    int temp_re = av_frame_get_buffer(uvyv_frame, 32);
    if (temp_re != 0)
    {
        printf("fail to get buffer for frame\n");
        ErrorFunc(temp_re);
    }

    /* Returns AVERROR(EAGAIN) (-11) until two fields have been fed in. */
    int re = av_buffersink_get_frame(buffersink_ctx, uvyv_frame_weave);
    if (re < 0)
        printf("no woven frame available yet\n");
    return re;
}
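For orientation, here is a rough, hedged sketch of how a caller might drive the two methods above, one call per decoded half-height field frame; decode_next_field() and consume() are hypothetical helpers, not part of the original project:

FeedStream fs;
fs.init_filters(filter_descr);                 // "weave=first_field=top"
while (decode_next_field(fs.uvyv_frame)) {     // hypothetical: fills uvyv_frame with one field
    int ret = fs.Interlace();                  // stays < 0 (EAGAIN) until two fields are queued
    if (ret >= 0)
        consume(fs.uvyv_frame_weave);          // hypothetical: the full-height woven frame
}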
Supplement: Because H.265 has no real support for interlaced (field-based) coding, a 1920x1080i50 source gets squashed into 1920x540p50, which is why the odd/even field weave processing above is needed; after weaving, the odd and even fields are interleaved and the height becomes 1080. The commands and code below, by contrast, do not change the resolution.
Progressive to interlaced:
the alternate_scan option enables interlaced output (see the example commands below)
Interlaced to progressive:
-deinterlace needs no further parameters
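For illustration, hedged command-line sketches for both directions (exact option availability depends on the encoder and build, so treat these as sketches rather than verified recipes):
Progressive to interlaced, assuming the mpeg2video encoder exposes the ildct/ilme flags and the alternate_scan option mentioned above:
ffmpeg -i progressive.mp4 -c:v mpeg2video -flags +ildct+ilme -alternate_scan 1 -top 1 interlaced.ts
Interlaced to progressive, using yadif (the usual modern replacement for the deprecated -deinterlace option):
ffmpeg -i interlaced.ts -vf yadif -c:v libx264 progressive.mp4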
To check whether a file is interlaced (if it is, interlaced_frame is 1):
ffprobe -select_streams v -i yekong.mp4 -read_intervals %+1 -show_entries "frame=pkt_pts_time,pkt_duration_time,interlaced_frame" -pretty -print_format json -of json
In code, the encoder or decoder context has a field_order parameter:
/**
 * Field order
 * - encoding: set by libavcodec
 * - decoding: Set by user.
 */
enum AVFieldOrder field_order;

enum AVFieldOrder {
    AV_FIELD_UNKNOWN,
    AV_FIELD_PROGRESSIVE,
    AV_FIELD_TT,          ///< Top coded_first, top displayed first
    AV_FIELD_BB,          ///< Bottom coded first, bottom displayed first
    AV_FIELD_TB,          ///< Top coded first, bottom displayed first
    AV_FIELD_BT,          ///< Bottom coded first, top displayed first
};

Once this is set on the encoder or decoder, it can work in progressive or interlaced mode, for example:
decodec_ctx_v->field_order = AV_FIELD_TT;
In the AVFrame struct, the interlaced_frame field tells you whether a frame is interlaced:
/**
 * The content of the picture is interlaced.
 */
int interlaced_frame;

In a decoded AVFrame, interlaced_frame is 1 if the frame is interlaced and 0 otherwise.
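A small sketch of checking this after decoding; it assumes a decoded AVFrame *frame and also uses top_field_first, the companion field-order flag on AVFrame:

/* After avcodec_receive_frame() has produced a frame: */
if (frame->interlaced_frame)
    printf("interlaced, %s field first\n", frame->top_field_first ? "top" : "bottom");
else
    printf("progressive\n");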
There is also the tinterlace filter; note that it cannot be used directly with ffplay, only ffmpeg supports it:
flags: when this parameter is specified, a vertical low-pass filter is applied while interlacing; two vertical filters are available, and their purpose is to reduce the moiré patterns introduced by interlacing.
This command worked when tested with the ffmpeg command line; I have not yet managed to get it working in code.
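For illustration, a command of this shape applies tinterlace with its vertical low-pass filter flag (treat it as a sketch, not the exact command that was tested):
ffmpeg -i progressive.mp4 -vf "tinterlace=mode=interleave_top:flags=low_pass_filter" -c:v mpeg2video -flags +ildct+ilme interlaced.ts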
The parameters can be looked up on the official site; this page lists all the filters:
https://ffmpeg.org/ffmpeg-filters.html#tinterlace