FFmpeg资料来源简单分析:libswscale的sws_getContext()
=====================================================
Articles analyzing the source code of FFmpeg's libraries:
[Skeleton]
FFmpeg source code structure diagram - decoding
FFmpeg source code structure diagram - encoding
[Common]
Simple analysis of FFmpeg source code: av_register_all()
Simple analysis of FFmpeg source code: avcodec_register_all()
Simple analysis of FFmpeg source code: memory allocation and deallocation (av_malloc(), av_free(), etc.)
Simple analysis of FFmpeg source code: initialization and destruction of common structures (AVFormatContext, AVFrame, etc.)
Simple analysis of FFmpeg source code: avio_open2()
Simple analysis of FFmpeg source code: av_find_decoder() and av_find_encoder()
Simple analysis of FFmpeg source code: avcodec_open2()
Simple analysis of FFmpeg source code: avcodec_close()
[Decoding]
Illustrated guide to avformat_open_input, FFmpeg's function for opening media
Simple analysis of FFmpeg source code: avformat_open_input()
Simple analysis of FFmpeg source code: avformat_find_stream_info()
Simple analysis of FFmpeg source code: av_read_frame()
Simple analysis of FFmpeg source code: avcodec_decode_video2()
Simple analysis of FFmpeg source code: avformat_close_input()
[Encoding]
Simple analysis of FFmpeg source code: avformat_alloc_output_context2()
Simple analysis of FFmpeg source code: avformat_write_header()
Simple analysis of FFmpeg source code: avcodec_encode_video()
Simple analysis of FFmpeg source code: av_write_frame()
Simple analysis of FFmpeg source code: av_write_trailer()
[Other]
Simple analysis of FFmpeg source code: the logging system (av_log(), etc.)
Simple analysis of FFmpeg source code: the structure member management system - AVClass
Simple analysis of FFmpeg source code: the structure member management system - AVOption
Simple analysis of FFmpeg source code: sws_getContext() of libswscale
Simple analysis of FFmpeg source code: sws_scale() of libswscale
Simple analysis of FFmpeg source code: avdevice_register_all() of libavdevice
Simple analysis of FFmpeg source code: gdigrab of libavdevice
[Scripts]
Simple analysis of FFmpeg source code: makefile
Simple analysis of FFmpeg source code: configure
[H.264]
Simple analysis of FFmpeg's H.264 decoder source code: overview
=====================================================
I plan to write two articles documenting the source code of libswscale, FFmpeg's library for image processing (scaling and YUV/RGB pixel-format conversion). libswscale is a library mainly for processing image pixel data: it can convert between pixel formats and stretch (resize) images.
For the usage of libswscale, see the article:
"The simplest libswscale-based FFmpeg example (YUV to RGB)"
libswscale has very few commonly used functions; in the typical case there are just three:
sws_getContext(): initialize a SwsContext.
sws_scale(): process image data.
sws_freeContext(): free a SwsContext.
sws_getContext() can also be replaced by sws_getCachedContext().
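As a reminder of how the three functions fit together, here is a minimal usage sketch (error handling trimmed; the 640x480 YUV420P-to-RGB24 conversion and the buffer arguments are arbitrary illustrative choices, not taken from the article):

```c
#include <libswscale/swscale.h>
#include <libavutil/pixfmt.h>

/* Convert one 640x480 YUV420P frame to RGB24.
 * src_data/src_linesize and dst_data/dst_linesize are assumed to be
 * buffers set up elsewhere, e.g. with av_image_alloc(). */
int convert_frame(const uint8_t *const src_data[], const int src_linesize[],
                  uint8_t *const dst_data[], const int dst_linesize[])
{
    struct SwsContext *ctx = sws_getContext(640, 480, AV_PIX_FMT_YUV420P,
                                            640, 480, AV_PIX_FMT_RGB24,
                                            SWS_BICUBIC, NULL, NULL, NULL);
    if (!ctx)
        return -1;

    /* Process the whole frame in one slice (rows 0..479). */
    sws_scale(ctx, src_data, src_linesize, 0, 480, dst_data, dst_linesize);

    sws_freeContext(ctx);
    return 0;
}
```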
Although on the surface libswscale's set of commonly used functions is small, there is a large "world" inside it. As an almost "universal" library for processing image pixel data, it contains a great deal of code.
I therefore plan two articles analyzing its source. This article analyzes the initialization function sws_getContext(); the next article will analyze the data-processing function sws_scale().
Function call structure diagram
The function call relationships in libswscale obtained from this analysis are shown in the figure below.
libswscale data-processing flow
The flow with which libswscale processes pixel data can be summarized by the figure below.
As the figure shows, libswscale processes data along two basic paths: unscaled and scaled. The unscaled path handles pixel data that does not need resizing (a relatively special case); the scaled path handles pixel data that does. The unscaled path only converts the pixel format, while the scaled path both converts the pixel format and resizes the image. The scaled path can be divided into the following steps:
- XXX to YUV converter: first convert the input pixel data to 8-bit YUV;
- Horizontal scaler: stretch the image horizontally and convert to 15-bit YUV;
- Vertical scaler: stretch the image vertically;
- Output converter: convert to the output pixel format.
SwsContext
SwsContext is the structure that accompanies every use of libswscale. However, when developing against FFmpeg's libraries we cannot see its internals: libswscale\swscale.h contains only the one-line declaration `struct SwsContext;`. Seeing this single-line declaration, one might guess that the structure must be very simple. A look at the FFmpeg source shows that this guess is completely wrong: the definition of SwsContext is quite complex. It is located in libswscale\swscale_internal.h, as shown below.
The definition is indeed rather involved; it contains all the variables libswscale needs. Analyzing them one by one is impractical, so a few of them are discussed briefly later in this article.
sws_getContext()
sws_getContext() is the function that initializes a SwsContext. Its declaration is in libswscale\swscale.h, as shown below.
The function takes the following parameters:
srcW: width of the source image
srcH: height of the source image
srcFormat: pixel format of the source image
dstW: width of the destination image
dstH: height of the destination image
dstFormat: pixel format of the destination image
flags: the scaling algorithm to use
(The remaining parameters srcFilter, dstFilter and param specify optional source/destination filters and algorithm-specific tuning values.)
On success it returns the created SwsContext; otherwise it returns NULL.
The definition of sws_getContext() is in libswscale\utils.c, as shown below.
From the definition we can see that it first calls sws_alloc_context() to allocate memory for a SwsContext; then it assigns the supplied source and destination widths, heights and pixel formats, as well as the flags, to the corresponding fields of that SwsContext; finally it calls sws_init_context() to complete the initialization. Let us look at sws_alloc_context() and sws_init_context() in turn.
sws_alloc_context()
sws_alloc_context() is an FFmpeg API that allocates memory for a SwsContext. Its declaration is shown below; its definition is in libswscale\utils.c, as follows.

SwsContext *sws_alloc_context(void)
{
    SwsContext *c = av_mallocz(sizeof(SwsContext));

    av_assert0(offsetof(SwsContext, redDither) + DITHER32_INT == offsetof(SwsContext, dither32));

    if (c) {
        c->av_class = &sws_context_class;
        av_opt_set_defaults(c);
    }

    return c;
}

As the code shows, sws_alloc_context() first calls av_mallocz() to allocate a block of memory for the SwsContext structure, then sets the structure's AVClass and fills its fields with their default values.
sws_init_context()
sws_init_context() is an FFmpeg API that initializes a SwsContext. Its declaration:

/**
 * Initialize the swscaler context sws_context.
 *
 * @return zero or positive value on success, a negative value on
 * error
 */
int sws_init_context(struct SwsContext *sws_context,
                     SwsFilter *srcFilter, SwsFilter *dstFilter);

The definition of sws_init_context() is very long; it is located in libswscale\utils.c, as shown below (the excerpt begins partway into the function).
                             SWS_DITHER_ED : SWS_DITHER_BAYER;
        if (!(flags & SWS_FULL_CHR_H_INT)) {
            if (c->dither == SWS_DITHER_ED || c->dither == SWS_DITHER_A_DITHER || c->dither == SWS_DITHER_X_DITHER) {
                av_log(c, AV_LOG_DEBUG,
                       "Desired dithering only supported in full chroma interpolation for destination format '%s'\n",
                       av_get_pix_fmt_name(dstFormat));
                flags |= SWS_FULL_CHR_H_INT;
                c->flags = flags;
            }
        }
        if (flags & SWS_FULL_CHR_H_INT) {
            if (c->dither == SWS_DITHER_BAYER) {
                av_log(c, AV_LOG_DEBUG,
                       "Ordered dither is not supported in full chroma interpolation for destination format '%s'\n",
                       av_get_pix_fmt_name(dstFormat));
                c->dither = SWS_DITHER_ED;
            }
        }
    }
    if (isPlanarRGB(dstFormat)) {
        if (!(flags & SWS_FULL_CHR_H_INT)) {
            av_log(c, AV_LOG_DEBUG,
                   "%s output is not supported with half chroma resolution, switching to full\n",
                   av_get_pix_fmt_name(dstFormat));
            flags   |= SWS_FULL_CHR_H_INT;
            c->flags = flags;
        }
    }

    /* reuse chroma for 2 pixels RGB/BGR unless user wants full
     * chroma interpolation */
    if (flags & SWS_FULL_CHR_H_INT &&
        isAnyRGB(dstFormat)        &&
        !isPlanarRGB(dstFormat)    &&
        dstFormat != AV_PIX_FMT_RGBA      && dstFormat != AV_PIX_FMT_ARGB      &&
        dstFormat != AV_PIX_FMT_BGRA      && dstFormat != AV_PIX_FMT_ABGR      &&
        dstFormat != AV_PIX_FMT_RGB24     && dstFormat != AV_PIX_FMT_BGR24     &&
        dstFormat != AV_PIX_FMT_BGR4_BYTE && dstFormat != AV_PIX_FMT_RGB4_BYTE &&
        dstFormat != AV_PIX_FMT_BGR8      && dstFormat != AV_PIX_FMT_RGB8) {
        av_log(c, AV_LOG_WARNING,
               "full chroma interpolation for destination format '%s' not yet implemented\n",
               av_get_pix_fmt_name(dstFormat));
        flags   &= ~SWS_FULL_CHR_H_INT;
        c->flags = flags;
    }
    if (isAnyRGB(dstFormat) && !(flags & SWS_FULL_CHR_H_INT))
        c->chrDstHSubSample = 1;

    // drop some chroma lines if the user wants it
    c->vChrDrop          = (flags & SWS_SRC_V_CHR_DROP_MASK) >> SWS_SRC_V_CHR_DROP_SHIFT;
    c->chrSrcVSubSample += c->vChrDrop;

    /* drop every other pixel for chroma calculation unless user
     * wants full chroma */
    if (isAnyRGB(srcFormat) && !(flags & SWS_FULL_CHR_H_INP)   &&
        srcFormat != AV_PIX_FMT_RGB8 && srcFormat != AV_PIX_FMT_BGR8 &&
        srcFormat != AV_PIX_FMT_RGB4 && srcFormat != AV_PIX_FMT_BGR4 &&
        srcFormat != AV_PIX_FMT_RGB4_BYTE && srcFormat != AV_PIX_FMT_BGR4_BYTE &&
        srcFormat != AV_PIX_FMT_GBRP9BE   && srcFormat != AV_PIX_FMT_GBRP9LE  &&
        srcFormat != AV_PIX_FMT_GBRP10BE  && srcFormat != AV_PIX_FMT_GBRP10LE &&
        srcFormat != AV_PIX_FMT_GBRP12BE  && srcFormat != AV_PIX_FMT_GBRP12LE &&
        srcFormat != AV_PIX_FMT_GBRP14BE  && srcFormat != AV_PIX_FMT_GBRP14LE &&
        srcFormat != AV_PIX_FMT_GBRP16BE  && srcFormat != AV_PIX_FMT_GBRP16LE &&
        ((dstW >> c->chrDstHSubSample) <= (srcW >> 1) ||
         (flags & SWS_FAST_BILINEAR)))
        c->chrSrcHSubSample = 1;

    // Note the FF_CEIL_RSHIFT is so that we always round toward +inf.
    c->chrSrcW = FF_CEIL_RSHIFT(srcW, c->chrSrcHSubSample);
    c->chrSrcH = FF_CEIL_RSHIFT(srcH, c->chrSrcVSubSample);
    c->chrDstW = FF_CEIL_RSHIFT(dstW, c->chrDstHSubSample);
    c->chrDstH = FF_CEIL_RSHIFT(dstH, c->chrDstVSubSample);

    FF_ALLOC_OR_GOTO(c, c->formatConvBuffer, FFALIGN(srcW*2+78, 16) * 2, fail);

    c->srcBpc = 1 + desc_src->comp[0].depth_minus1;
    if (c->srcBpc < 8)
        c->srcBpc = 8;
    c->dstBpc = 1 + desc_dst->comp[0].depth_minus1;
    if (c->dstBpc < 8)
        c->dstBpc = 8;
    if (isAnyRGB(srcFormat) || srcFormat == AV_PIX_FMT_PAL8)
        c->srcBpc = 16;
    if (c->dstBpc == 16)
        dst_stride <<= 1;

    if (INLINE_MMXEXT(cpu_flags) && c->srcBpc == 8 && c->dstBpc <= 14) {
        c->canMMXEXTBeUsed = dstW >= srcW && (dstW & 31) == 0 &&
                             c->chrDstW >= c->chrSrcW &&
                             (srcW & 15) == 0;
        if (!c->canMMXEXTBeUsed && dstW >= srcW && c->chrDstW >= c->chrSrcW &&
            (srcW & 15) == 0 && (flags & SWS_FAST_BILINEAR)) {
            if (flags & SWS_PRINT_INFO)
                av_log(c, AV_LOG_INFO,
                       "output width is not a multiple of 32 -> no MMXEXT scaler\n");
        }
        if (usesHFilter || isNBPS(c->srcFormat) || is16BPS(c->srcFormat) || isAnyRGB(c->srcFormat))
            c->canMMXEXTBeUsed = 0;
    } else
        c->canMMXEXTBeUsed = 0;

    c->chrXInc = (((int64_t)c->chrSrcW << 16) + (c->chrDstW >> 1)) / c->chrDstW;
    c->chrYInc = (((int64_t)c->chrSrcH << 16) + (c->chrDstH >> 1)) / c->chrDstH;

    /* Match pixel 0 of the src to pixel 0 of dst and match pixel n-2 of src
     * to pixel n-2 of dst, but only for the FAST_BILINEAR mode otherwise do
     * correct scaling.
     * n-2 is the last chrominance sample available.
     * This is not perfect, but no one should notice the difference, the more
     * correct variant would be like the vertical one, but that would require
     * some special code for the first and last pixel */
    if (flags & SWS_FAST_BILINEAR) {
        if (c->canMMXEXTBeUsed) {
            c->lumXInc += 20;
            c->chrXInc += 20;
        }
        // we don't use the x86 asm scaler if MMX is available
        else if (INLINE_MMX(cpu_flags) && c->dstBpc <= 14) {
            c->lumXInc = ((int64_t)(srcW - 2) << 16) / (dstW - 2) - 20;
            c->chrXInc = ((int64_t)(c->chrSrcW - 2) << 16) / (c->chrDstW - 2) - 20;
        }
    }

    if (isBayer(srcFormat)) {
        if (!unscaled ||
            (dstFormat != AV_PIX_FMT_RGB24 && dstFormat != AV_PIX_FMT_YUV420P)) {
            enum AVPixelFormat tmpFormat = AV_PIX_FMT_RGB24;

            ret = av_image_alloc(c->cascaded_tmp, c->cascaded_tmpStride,
                                 srcW, srcH, tmpFormat, 64);
            if (ret < 0)
                return ret;

            c->cascaded_context[0] = sws_getContext(srcW, srcH, srcFormat,
                                                    srcW, srcH, tmpFormat,
                                                    flags, srcFilter, NULL, c->param);
            if (!c->cascaded_context[0])
                return -1;

            c->cascaded_context[1] = sws_getContext(srcW, srcH, tmpFormat,
                                                    dstW, dstH, dstFormat,
                                                    flags, NULL, dstFilter, c->param);
            if (!c->cascaded_context[1])
                return -1;
            return 0;
        }
    }

#define USE_MMAP (HAVE_MMAP && HAVE_MPROTECT && defined MAP_ANONYMOUS)

    /* precalculate horizontal scaler filter coefficients */
    {
#if HAVE_MMXEXT_INLINE
        // can't downscale !!!
        if (c->canMMXEXTBeUsed && (flags & SWS_FAST_BILINEAR)) {
            c->lumMmxextFilterCodeSize = ff_init_hscaler_mmxext(dstW, c->lumXInc, NULL, NULL, NULL, 8);
            c->chrMmxextFilterCodeSize = ff_init_hscaler_mmxext(c->chrDstW, c->chrXInc, NULL, NULL, NULL, 4);

#if USE_MMAP
            c->lumMmxextFilterCode = mmap(NULL, c->lumMmxextFilterCodeSize,
                                          PROT_READ | PROT_WRITE,
                                          MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
            c->chrMmxextFilterCode = mmap(NULL, c->chrMmxextFilterCodeSize,
                                          PROT_READ | PROT_WRITE,
                                          MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
#elif HAVE_VIRTUALALLOC
            c->lumMmxextFilterCode = VirtualAlloc(NULL, c->lumMmxextFilterCodeSize, MEM_COMMIT, PAGE_EXECUTE_READWRITE);
            c->chrMmxextFilterCode = VirtualAlloc(NULL, c->chrMmxextFilterCodeSize, MEM_COMMIT, PAGE_EXECUTE_READWRITE);
#else
            c->lumMmxextFilterCode = av_malloc(c->lumMmxextFilterCodeSize);
            c->chrMmxextFilterCode = av_malloc(c->chrMmxextFilterCodeSize);
#endif

#ifdef MAP_ANONYMOUS
            if (c->lumMmxextFilterCode == MAP_FAILED || c->chrMmxextFilterCode == MAP_FAILED)
#else
            if (!c->lumMmxextFilterCode || !c->chrMmxextFilterCode)
#endif
            {
                av_log(c, AV_LOG_ERROR, "Failed to allocate MMX2FilterCode\n");
                return AVERROR(ENOMEM);
            }

            FF_ALLOCZ_OR_GOTO(c, c->hLumFilter,    (dstW           / 8 + 8) * sizeof(int16_t), fail);
            FF_ALLOCZ_OR_GOTO(c, c->hChrFilter,    (c->chrDstW     / 4 + 8) * sizeof(int16_t), fail);
            FF_ALLOCZ_OR_GOTO(c, c->hLumFilterPos, (dstW       / 2 / 8 + 8) * sizeof(int32_t), fail);
            FF_ALLOCZ_OR_GOTO(c, c->hChrFilterPos, (c->chrDstW / 2 / 4 + 8) * sizeof(int32_t), fail);

            ff_init_hscaler_mmxext(dstW, c->lumXInc, c->lumMmxextFilterCode,
                                   c->hLumFilter, (uint32_t*)c->hLumFilterPos, 8);
            ff_init_hscaler_mmxext(c->chrDstW, c->chrXInc, c->chrMmxextFilterCode,
                                   c->hChrFilter, (uint32_t*)c->hChrFilterPos, 4);

#if USE_MMAP
            if (mprotect(c->lumMmxextFilterCode, c->lumMmxextFilterCodeSize, PROT_EXEC | PROT_READ) == -1 ||
                mprotect(c->chrMmxextFilterCode, c->chrMmxextFilterCodeSize, PROT_EXEC | PROT_READ) == -1) {
                av_log(c, AV_LOG_ERROR, "mprotect failed, cannot use fast bilinear scaler\n");
                goto fail;
            }
#endif
        } else
#endif /* HAVE_MMXEXT_INLINE */
        {
            const int filterAlign = X86_MMX(cpu_flags)     ? 4 :
                                    PPC_ALTIVEC(cpu_flags) ? 8 : 1;

            if ((ret = initFilter(&c->hLumFilter, &c->hLumFilterPos,
                                  &c->hLumFilterSize, c->lumXInc,
                                  srcW, dstW, filterAlign, 1 << 14,
                                  (flags & SWS_BICUBLIN) ? (flags | SWS_BICUBIC) : flags,
                                  cpu_flags, srcFilter->lumH, dstFilter->lumH,
                                  c->param,
                                  get_local_pos(c, 0, 0, 0),
                                  get_local_pos(c, 0, 0, 0))) < 0)
                goto fail;
            if ((ret = initFilter(&c->hChrFilter, &c->hChrFilterPos,
                                  &c->hChrFilterSize, c->chrXInc,
                                  c->chrSrcW, c->chrDstW, filterAlign, 1 << 14,
                                  (flags & SWS_BICUBLIN) ? (flags | SWS_BILINEAR) : flags,
                                  cpu_flags, srcFilter->chrH, dstFilter->chrH,
                                  c->param,
                                  get_local_pos(c, c->chrSrcHSubSample, c->src_h_chr_pos, 0),
                                  get_local_pos(c, c->chrDstHSubSample, c->dst_h_chr_pos, 0))) < 0)
                goto fail;
        }
    } // initialize horizontal stuff

    /* precalculate vertical scaler filter coefficients */
    {
        const int filterAlign = X86_MMX(cpu_flags)     ? 2 :
                                PPC_ALTIVEC(cpu_flags) ? 8 : 1;

        if ((ret = initFilter(&c->vLumFilter, &c->vLumFilterPos, &c->vLumFilterSize,
                              c->lumYInc, srcH, dstH, filterAlign, (1 << 12),
                              (flags & SWS_BICUBLIN) ? (flags | SWS_BICUBIC) : flags,
                              cpu_flags, srcFilter->lumV, dstFilter->lumV,
                              c->param,
                              get_local_pos(c, 0, 0, 1),
                              get_local_pos(c, 0, 0, 1))) < 0)
            goto fail;
        if ((ret = initFilter(&c->vChrFilter, &c->vChrFilterPos, &c->vChrFilterSize,
                              c->chrYInc, c->chrSrcH, c->chrDstH,
                              filterAlign, (1 << 12),
                              (flags & SWS_BICUBLIN) ? (flags | SWS_BILINEAR) : flags,
                              cpu_flags, srcFilter->chrV, dstFilter->chrV,
                              c->param,
                              get_local_pos(c, c->chrSrcVSubSample, c->src_v_chr_pos, 1),
                              get_local_pos(c, c->chrDstVSubSample, c->dst_v_chr_pos, 1))) < 0)
            goto fail;

#if HAVE_ALTIVEC
        FF_ALLOC_OR_GOTO(c, c->vYCoeffsBank, sizeof(vector signed short) * c->vLumFilterSize * c->dstH,    fail);
        FF_ALLOC_OR_GOTO(c, c->vCCoeffsBank, sizeof(vector signed short) * c->vChrFilterSize * c->chrDstH, fail);

        for (i = 0; i < c->vLumFilterSize * c->dstH; i++) {
            int j;
            short *p = (short *)&c->vYCoeffsBank[i];
            for (j = 0; j < 8; j++)
                p[j] = c->vLumFilter[i];
        }

        for (i = 0; i < c->vChrFilterSize * c->chrDstH; i++) {
            int j;
            short *p = (short *)&c->vCCoeffsBank[i];
            for (j = 0; j < 8; j++)
                p[j] = c->vChrFilter[i];
        }
#endif
    }

    // calculate buffer sizes so that they won't run out while handling these damn slices
    c->vLumBufSize = c->vLumFilterSize;
    c->vChrBufSize = c->vChrFilterSize;
    for (i = 0; i < dstH; i++) {
        int chrI      = (int64_t)i * c->chrDstH / dstH;
        int nextSlice = FFMAX(c->vLumFilterPos[i] + c->vLumFilterSize - 1,
                              ((c->vChrFilterPos[chrI] + c->vChrFilterSize - 1)
                               << c->chrSrcVSubSample));

        nextSlice >>= c->chrSrcVSubSample;
        nextSlice <<= c->chrSrcVSubSample;
        if (c->vLumFilterPos[i] + c->vLumBufSize < nextSlice)
            c->vLumBufSize = nextSlice - c->vLumFilterPos[i];
        if (c->vChrFilterPos[chrI] + c->vChrBufSize < (nextSlice >> c->chrSrcVSubSample))
            c->vChrBufSize = (nextSlice >> c->chrSrcVSubSample) - c->vChrFilterPos[chrI];
    }

    for (i = 0; i < 4; i++)
        FF_ALLOCZ_OR_GOTO(c, c->dither_error[i], (c->dstW+2) * sizeof(int), fail);

    /* Allocate pixbufs (we use dynamic allocation because otherwise we would
     * need to allocate several megabytes to handle all possible cases) */
    FF_ALLOC_OR_GOTO(c, c->lumPixBuf,  c->vLumBufSize * 3 * sizeof(int16_t *), fail);
    FF_ALLOC_OR_GOTO(c, c->chrUPixBuf, c->vChrBufSize * 3 * sizeof(int16_t *), fail);
    FF_ALLOC_OR_GOTO(c, c->chrVPixBuf, c->vChrBufSize * 3 * sizeof(int16_t *), fail);
    if (CONFIG_SWSCALE_ALPHA && isALPHA(c->srcFormat) && isALPHA(c->dstFormat))
        FF_ALLOCZ_OR_GOTO(c, c->alpPixBuf, c->vLumBufSize * 3 * sizeof(int16_t *), fail);

    /* Note we need at least one pixel more at the end because of the MMX code
     * (just in case someone wants to replace the 4000/8000). */
    /* align at 16 bytes for AltiVec */
    for (i = 0; i < c->vLumBufSize; i++) {
        FF_ALLOCZ_OR_GOTO(c, c->lumPixBuf[i + c->vLumBufSize], dst_stride + 16, fail);
        c->lumPixBuf[i] = c->lumPixBuf[i + c->vLumBufSize];
    }
    // 64 / c->scalingBpp is the same as 16 / sizeof(scaling_intermediate)
    c->uv_off   = (dst_stride>>1) + 64 / (c->dstBpc &~ 7);
    c->uv_offx2 = dst_stride + 16;
    for (i = 0; i < c->vChrBufSize; i++) {
        FF_ALLOC_OR_GOTO(c, c->chrUPixBuf[i + c->vChrBufSize], dst_stride * 2 + 32, fail);
        c->chrUPixBuf[i] = c->chrUPixBuf[i + c->vChrBufSize];
        c->chrVPixBuf[i] = c->chrVPixBuf[i + c->vChrBufSize] = c->chrUPixBuf[i] + (dst_stride >> 1) + 8;
    }
    if (CONFIG_SWSCALE_ALPHA && c->alpPixBuf)
        for (i = 0; i < c->vLumBufSize; i++) {
            FF_ALLOCZ_OR_GOTO(c, c->alpPixBuf[i + c->vLumBufSize], dst_stride + 16, fail);
            c->alpPixBuf[i] = c->alpPixBuf[i + c->vLumBufSize];
        }

    // try to avoid drawing green stuff between the right end and the stride end
    for (i = 0; i < c->vChrBufSize; i++)
        if (desc_dst->comp[0].depth_minus1 == 15) {
            av_assert0(c->dstBpc > 14);
            for (j = 0; j < dst_stride / 2 + 1; j++)
                ((int32_t*)(c->chrUPixBuf[i]))[j] = 1 << 18;
        } else
            for (j = 0; j < dst_stride + 1; j++)
                ((int16_t*)(c->chrUPixBuf[i]))[j] = 1 << 14;

    av_assert0(c->chrDstH <= dstH);

    // whether to print info
    if (flags & SWS_PRINT_INFO) {
        const char *scaler = NULL, *cpucaps;

        for (i = 0; i < FF_ARRAY_ELEMS(scale_algorithms); i++) {
            if (flags & scale_algorithms[i].flag) {
                scaler = scale_algorithms[i].description;
                break;
            }
        }
        if (!scaler)
            scaler = "ehh flags invalid?!";
        av_log(c, AV_LOG_INFO, "%s scaler, from %s to %s%s ",
               scaler,
               av_get_pix_fmt_name(srcFormat),
#ifdef DITHER1XBPP
               dstFormat == AV_PIX_FMT_BGR555   || dstFormat == AV_PIX_FMT_BGR565   ||
               dstFormat == AV_PIX_FMT_RGB444BE || dstFormat == AV_PIX_FMT_RGB444LE ||
               dstFormat == AV_PIX_FMT_BGR444BE || dstFormat == AV_PIX_FMT_BGR444LE ?
               "dithered " : "",
#else
               "",
#endif
               av_get_pix_fmt_name(dstFormat));

        if (INLINE_MMXEXT(cpu_flags))
            cpucaps = "MMXEXT";
        else if (INLINE_AMD3DNOW(cpu_flags))
            cpucaps = "3DNOW";
        else if (INLINE_MMX(cpu_flags))
            cpucaps = "MMX";
        else if (PPC_ALTIVEC(cpu_flags))
            cpucaps = "AltiVec";
        else
            cpucaps = "C";

        av_log(c, AV_LOG_INFO, "using %s\n", cpucaps);

        av_log(c, AV_LOG_VERBOSE, "%dx%d -> %dx%d\n", srcW, srcH, dstW, dstH);
        av_log(c, AV_LOG_DEBUG,
               "lum srcW=%d srcH=%d dstW=%d dstH=%d xInc=%d yInc=%d\n",
               c->srcW, c->srcH, c->dstW, c->dstH, c->lumXInc, c->lumYInc);
        av_log(c, AV_LOG_DEBUG,
               "chr srcW=%d srcH=%d dstW=%d dstH=%d xInc=%d yInc=%d\n",
               c->chrSrcW, c->chrSrcH, c->chrDstW, c->chrDstH,
               c->chrXInc, c->chrYInc);
    }

    /* unscaled special cases */
    // the no-scaling case
    if (unscaled && !usesHFilter && !usesVFilter &&
        (c->srcRange == c->dstRange || isAnyRGB(dstFormat))) {
        // when no scaling is needed, initialize the corresponding function
        ff_get_unscaled_swscale(c);

        if (c->swscale) {
            if (flags & SWS_PRINT_INFO)
                av_log(c, AV_LOG_INFO,
                       "using unscaled %s -> %s special converter\n",
                       av_get_pix_fmt_name(srcFormat),
                       av_get_pix_fmt_name(dstFormat));
            return 0;
        }
    }

    // key step: set the swscale() pointer inside SwsContext
    c->swscale = ff_getSwsFunc(c);
    return 0;
fail: // FIXME replace things by appropriate error codes
    if (ret == RETCODE_USE_CASCADE) {
        int tmpW = sqrt(srcW * (int64_t)dstW);
        int tmpH = sqrt(srcH * (int64_t)dstH);
        enum AVPixelFormat tmpFormat = AV_PIX_FMT_YUV420P;

        if (srcW*(int64_t)srcH <= 4LL*dstW*dstH)
            return AVERROR(EINVAL);

        ret = av_image_alloc(c->cascaded_tmp, c->cascaded_tmpStride,
                             tmpW, tmpH, tmpFormat, 64);
        if (ret < 0)
            return ret;

        c->cascaded_context[0] = sws_getContext(srcW, srcH, srcFormat,
                                                tmpW, tmpH, tmpFormat,
                                                flags, srcFilter, NULL, c->param);
        if (!c->cascaded_context[0])
            return -1;

        c->cascaded_context[1] = sws_getContext(tmpW, tmpH, tmpFormat,
                                                dstW, dstH, dstFormat,
                                                flags, NULL, dstFilter, c->param);
        if (!c->cascaded_context[1])
            return -1;
        return 0;
    }
    return -1;
}
Besides assigning the various fields of SwsContext, sws_init_context() performs, in order, the following work:
1. Initialize the RGB-to-RGB (or YUV-to-YUV) conversion functions via sws_rgb2rgb_init() (note: this does not include the functions converting between RGB and YUV).
2. Determine whether the image needs scaling by comparing the input and output widths and heights. If the image does not need scaling, the unscaled variable is set to 1.
3. Initialize the color space via sws_setColorspaceDetails().
4. Validate some of the input parameters. For example, if no scaling algorithm is set, SWS_BICUBIC is used by default; if the input or output width/height is less than or equal to 0, an error is returned.
5. Initialize the filters. Depending on the scaling algorithm, different filters are initialized.
6. If the "print information" flag SWS_PRINT_INFO is set, print information.
7. If no scaling is needed, call ff_get_unscaled_swscale() to assign a specialized pixel-conversion function pointer to the swscale pointer inside SwsContext.
8. If scaling is needed, call ff_getSwsFunc() to assign the generic swscale() to the swscale pointer inside SwsContext (this is a little confusing, but it really does work this way).
The implementation of each of these steps is described below.
sws_rgb2rgb_init()
sws_rgb2rgb_init() is defined in libswscale\rgb2rgb.c, as shown below. From its code we can see two initialization functions: rgb2rgb_init_c() initializes the C versions of the RGB-to-RGB (or YUV-to-YUV) conversion functions, while rgb2rgb_init_x86() initializes the x86 assembly versions.
PS: one thing to note in libswscale: many function names contain a suffix such as "_c", indicating that the function is written in C. There are corresponding markers for the other variants, such as "_mmx" and "_sse2".
rgb2rgb_init_c()
First let us look at rgb2rgb_init_c(), which initializes the C versions of the RGB conversion functions. It is defined in libswscale\rgb2rgb_template.c, as shown below.

static av_cold void rgb2rgb_init_c(void)
{
    rgb15to16          = rgb15to16_c;
    rgb15tobgr24       = rgb15tobgr24_c;
    rgb15to32          = rgb15to32_c;
    rgb16tobgr24       = rgb16tobgr24_c;
    rgb16to32          = rgb16to32_c;
    rgb16to15          = rgb16to15_c;
    rgb24tobgr16       = rgb24tobgr16_c;
    rgb24tobgr15       = rgb24tobgr15_c;
    rgb24tobgr32       = rgb24tobgr32_c;
    rgb32to16          = rgb32to16_c;
    rgb32to15          = rgb32to15_c;
    rgb32tobgr24       = rgb32tobgr24_c;
    rgb24to15          = rgb24to15_c;
    rgb24to16          = rgb24to16_c;
    rgb24tobgr24       = rgb24tobgr24_c;
    shuffle_bytes_2103 = shuffle_bytes_2103_c;
    rgb32tobgr16       = rgb32tobgr16_c;
    rgb32tobgr15       = rgb32tobgr15_c;
    yv12toyuy2         = yv12toyuy2_c;
    yv12touyvy         = yv12touyvy_c;
    yuv422ptoyuy2      = yuv422ptoyuy2_c;
    yuv422ptouyvy      = yuv422ptouyvy_c;
    yuy2toyv12         = yuy2toyv12_c;
    planar2x           = planar2x_c;
    ff_rgb24toyv12     = ff_rgb24toyv12_c;
    interleaveBytes    = interleaveBytes_c;
    deinterleaveBytes  = deinterleaveBytes_c;
    vu9_to_vu12        = vu9_to_vu12_c;
    yvu9_to_yuy2       = yvu9_to_yuy2_c;
    uyvytoyuv420       = uyvytoyuv420_c;
    uyvytoyuv422       = uyvytoyuv422_c;
    yuyvtoyuv420       = yuyvtoyuv420_c;
    yuyvtoyuv422       = yuyvtoyuv422_c;
}

As we can see, after rgb2rgb_init_c() runs, the C versions of the image-format conversion functions have been assigned to the corresponding global function pointers.
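The "assign a _c (or _mmx, _sse2, ...) implementation to a global function pointer" pattern used throughout rgb2rgb can be illustrated with a self-contained sketch (hypothetical names, deliberately simplified from FFmpeg's):

```c
#include <assert.h>
#include <stdint.h>

/* Global function pointer, the analogue of rgb2rgb.c's rgb15to16 etc. */
static uint16_t (*rgb15to16)(uint16_t);

/* Plain C fallback, the analogue of a *_c implementation:
 * RGB555 -> RGB565, widening the 5-bit green field to 6 bits. */
static uint16_t rgb15to16_c(uint16_t px)
{
    uint16_t r = (px >> 10) & 0x1F;
    uint16_t g = (px >> 5)  & 0x1F;
    uint16_t b =  px        & 0x1F;
    return (uint16_t)((r << 11) | (g << 6) | b); /* g << 6 == (g << 1) << 5 */
}

/* The analogue of rgb2rgb_init_c(): pick an implementation at startup.
 * A real init (rgb2rgb_init_x86) would test CPU flags and possibly
 * install an assembly version instead. */
static void rgb2rgb_init(void)
{
    rgb15to16 = rgb15to16_c;
}
```

After the init runs, callers invoke `rgb15to16(...)` through the pointer without knowing which implementation was selected; this is exactly how libswscale dispatches between its C and SIMD variants.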
Below we pick a few of these conversion functions and look at their definitions.
rgb24tobgr24_c()
rgb24tobgr24_c() performs the conversion from RGB24 to BGR24. Its definition is shown below. As the code shows, the function swaps the positions of the "R" and "B" components, thereby converting between the two formats.
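The core of that swap can be sketched in a few self-contained lines (a simplified illustration of the idea, not FFmpeg's actual rgb24tobgr24_c, which is more heavily optimized):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Walk the buffer three bytes at a time and swap the R and B channels. */
static void rgb24_to_bgr24(const uint8_t *src, uint8_t *dst, size_t num_pixels)
{
    for (size_t i = 0; i < num_pixels; i++) {
        dst[3 * i + 0] = src[3 * i + 2]; /* B takes R's old position */
        dst[3 * i + 1] = src[3 * i + 1]; /* G is unchanged */
        dst[3 * i + 2] = src[3 * i + 0]; /* R takes B's old position */
    }
}
```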
rgb24to16_c()
rgb24to16_c() performs the conversion from RGB24 to RGB16. Its definition is shown below.
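The essence of a 24-bit-to-16-bit conversion is packing 8-bit R, G, B samples into a 5-6-5 bit layout; a hedged, self-contained sketch of that idea (not FFmpeg's exact code, which processes whole rows):

```c
#include <assert.h>
#include <stdint.h>

/* Pack 8-bit R, G, B into one RGB565 word by keeping the top
 * 5, 6 and 5 bits of each channel respectively. */
static uint16_t rgb888_to_rgb565(uint8_t r, uint8_t g, uint8_t b)
{
    return (uint16_t)(((r >> 3) << 11) | ((g >> 2) << 5) | (b >> 3));
}
```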
yuyvtoyuv422_c()
yuyvtoyuv422_c() performs the conversion from YUYV to YUV422. Its definition is shown below.
The function separates the YUYV pixel data into the three planes Y, U and V. extract_even_c() fetches the even-indexed bytes of a row, which correspond to the "Y" samples of the YUYV format. extract_odd2_c() fetches the odd-indexed bytes and, splitting those again by parity, stores them into two arrays, which correspond to the "U" and "V" samples of the YUYV format.
The definition of extract_even_c() is shown below.
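The even/odd extraction can be demonstrated in isolation. This is a hedged sketch with simplified names and signatures (FFmpeg's extract_even_c/extract_odd2_c take counts and strides differently): in a YUYV row, the even bytes are luma (Y) and the odd bytes alternate between U and V.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Copy the even-indexed bytes: positions 0, 2, 4, ... hold Y. */
static void extract_even(const uint8_t *src, uint8_t *dst, size_t count)
{
    for (size_t i = 0; i < count; i++)
        dst[i] = src[2 * i];
}

/* Split the odd-indexed bytes by parity: positions 1, 5, 9, ... hold U
 * and positions 3, 7, 11, ... hold V. */
static void extract_odd2(const uint8_t *src, uint8_t *dst0, uint8_t *dst1,
                         size_t count)
{
    for (size_t i = 0; i < count; i++) {
        dst0[i] = src[4 * i + 1]; /* U */
        dst1[i] = src[4 * i + 3]; /* V */
    }
}
```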
rgb2rgb_init_x86()
rgb2rgb_init_x86() initializes the x86-assembly versions of the RGB conversion code. Since I am not very familiar with assembly, it is not analyzed in detail; its code is listed here for comparison with rgb2rgb_init_c(). It is located in libswscale\x86\rgb2rgb.c, as shown below.
PS: all assembly-related code lives in the x86 subdirectory of the libswscale directory.

av_cold void rgb2rgb_init_x86(void)
{
#if HAVE_INLINE_ASM
    int cpu_flags = av_get_cpu_flags();

    if (INLINE_MMX(cpu_flags))
        rgb2rgb_init_mmx();
    if (INLINE_AMD3DNOW(cpu_flags))
        rgb2rgb_init_3dnow();
    if (INLINE_MMXEXT(cpu_flags))
        rgb2rgb_init_mmxext();
    if (INLINE_SSE2(cpu_flags))
        rgb2rgb_init_sse2();
    if (INLINE_AVX(cpu_flags))
        rgb2rgb_init_avx();
#endif /* HAVE_INLINE_ASM */
}

As we can see, rgb2rgb_init_x86() first calls av_get_cpu_flags() to query the CPU's capabilities, then calls rgb2rgb_init_mmx(), rgb2rgb_init_3dnow(), rgb2rgb_init_mmxext(), rgb2rgb_init_sse2() and rgb2rgb_init_avx() as appropriate.
2. Determine whether the image needs scaling.
This is done mainly by comparing the input and output widths and heights. An unscaled variable records whether scaling is needed, as shown below. unscaled = (srcW == dstW && srcH == dstH);
3. Initialize the color space.
The color space is initialized by sws_setColorspaceDetails(), an FFmpeg API function. Its declaration is shown below:

/**
 * @param dstRange flag indicating the while-black range of the output (1=jpeg / 0=mpeg)
 * @param srcRange flag indicating the while-black range of the input (1=jpeg / 0=mpeg)
 * @param table the yuv2rgb coefficients describing the output yuv space, normally ff_yuv2rgb_coeffs[x]
 * @param inv_table the yuv2rgb coefficients describing the input yuv space, normally ff_yuv2rgb_coeffs[x]
 * @param brightness 16.16 fixed point brightness correction
 * @param contrast 16.16 fixed point contrast correction
 * @param saturation 16.16 fixed point saturation correction
 * @return -1 if not supported
 */
int sws_setColorspaceDetails(struct SwsContext *c, const int inv_table[4],
                             int srcRange, const int table[4], int dstRange,
                             int brightness, int contrast, int saturation);

A brief explanation of the parameters:
c: the SwsContext to configure.
inv_table: the coefficient table describing the input YUV color space.
srcRange: the value range of the input image ("1" means the JPEG standard, range 0-255; "0" means the MPEG standard, range 16-235).
table: the coefficient table describing the output YUV color space.
dstRange: the value range of the output image.
brightness: 16.16 fixed-point brightness correction (not examined further here).
contrast: 16.16 fixed-point contrast correction (not examined further here).
saturation: 16.16 fixed-point saturation correction (not examined further here).
The coefficient tables describing a color space can be obtained with sws_getCoefficients(); that function is described in detail later.
The definition of sws_setColorspaceDetails() is in libswscale\utils.c, as shown below.
From the definition we can see that the function assigns the input parameters to the corresponding variables and, at the end, calls fill_rgb2yuv_table(). I have not yet worked out what fill_rgb2yuv_table() does, so it is not covered for now.
sws_getCoefficients()
sws_getCoefficients() returns a coefficient table describing a color space. Its declaration is shown below.
The colorspace argument can take the following values.
The default value SWS_CS_DEFAULT is equivalent to SWS_CS_ITU601 or SWS_CS_SMPTE170M.
The definition of sws_getCoefficients() is in libswscale\yuv2rgb.c, as shown below.

const int *sws_getCoefficients(int colorspace)
{
    if (colorspace > 7 || colorspace < 0)
        colorspace = SWS_CS_DEFAULT;
    return ff_yuv2rgb_coeffs[colorspace];
}

As we can see, it returns one element of an array named ff_yuv2rgb_coeffs, whose definition is shown below.
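What such a coefficient table encodes is the linear relationship between YUV and RGB. As a hedged, self-contained illustration (this is the well-known full-range ITU-R BT.601 formula in floating point, not FFmpeg's fixed-point table code):

```c
#include <assert.h>
#include <stdint.h>

static uint8_t clamp8(double x)
{
    return x < 0 ? 0 : x > 255 ? 255 : (uint8_t)(x + 0.5);
}

/* Full-range BT.601 YUV -> RGB:
 *   R = Y + 1.402 (V - 128)
 *   G = Y - 0.344136 (U - 128) - 0.714136 (V - 128)
 *   B = Y + 1.772 (U - 128)                              */
static void yuv_to_rgb_bt601(uint8_t y, uint8_t u, uint8_t v,
                             uint8_t *r, uint8_t *g, uint8_t *b)
{
    *r = clamp8(y + 1.402 * (v - 128));
    *g = clamp8(y - 0.344136 * (u - 128) - 0.714136 * (v - 128));
    *b = clamp8(y + 1.772 * (u - 128));
}
```

libswscale performs the same computation with integer coefficients taken from ff_yuv2rgb_coeffs, which is why selecting a different colorspace index changes the conversion result.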
4. Validate some of the input parameters.
For example, if no scaling algorithm is set, SWS_BICUBIC is used by default; if the input or output width/height is less than or equal to 0, an error is returned. There is quite a lot of code of this kind; a simple example is shown below.
5. Initialize the filters. Depending on the scaling algorithm, different filters are initialized.
This work is done in the function initFilter(); it is not analyzed in detail for now.
6. If the "print information" flag SWS_PRINT_INFO is set, print information.
When initializing a SwsContext, the SWS_PRINT_INFO flag can be set; in that case some configuration information is printed once initialization completes. The printing-related code is shown below.
7. If no scaling is needed, ff_get_unscaled_swscale() is called to assign a specialized pixel-conversion function pointer to the swscale pointer inside SwsContext.
ff_get_unscaled_swscale()
The definition of ff_get_unscaled_swscale() is shown below. Based on the input and output pixel formats, the function selects among different pixel-format conversion functions.
void ff_get_unscaled_swscale(SwsContext *c) {const enum AVPixelFormat srcFormat = c->srcFormat;const enum AVPixelFormat dstFormat = c->dstFormat;const int flags = c->flags;const int dstH = c->dstH;int needsDither;needsDither = isAnyRGB(dstFormat) &&c->dstFormatBpp < 24 &&(c->dstFormatBpp < c->srcFormatBpp || (!isAnyRGB(srcFormat)));/* yv12_to_nv12 */if ((srcFormat == AV_PIX_FMT_YUV420P || srcFormat == AV_PIX_FMT_YUVA420P) &&(dstFormat == AV_PIX_FMT_NV12 || dstFormat == AV_PIX_FMT_NV21)) {c->swscale = planarToNv12Wrapper;}/* nv12_to_yv12 */if (dstFormat == AV_PIX_FMT_YUV420P &&(srcFormat == AV_PIX_FMT_NV12 || srcFormat == AV_PIX_FMT_NV21)) {c->swscale = nv12ToPlanarWrapper;}/* yuv2bgr */if ((srcFormat == AV_PIX_FMT_YUV420P || srcFormat == AV_PIX_FMT_YUV422P ||srcFormat == AV_PIX_FMT_YUVA420P) && isAnyRGB(dstFormat) &&!(flags & SWS_ACCURATE_RND) && (c->dither == SWS_DITHER_BAYER || c->dither == SWS_DITHER_AUTO) && !(dstH & 1)) {c->swscale = ff_yuv2rgb_get_func_ptr(c);}if (srcFormat == AV_PIX_FMT_YUV410P && !(dstH & 3) &&(dstFormat == AV_PIX_FMT_YUV420P || dstFormat == AV_PIX_FMT_YUVA420P) &&!(flags & SWS_BITEXACT)) {c->swscale = yvu9ToYv12Wrapper;}/* bgr24toYV12 */if (srcFormat == AV_PIX_FMT_BGR24 &&(dstFormat == AV_PIX_FMT_YUV420P || dstFormat == AV_PIX_FMT_YUVA420P) &&!(flags & SWS_ACCURATE_RND))c->swscale = bgr24ToYv12Wrapper;/* RGB/BGR -> RGB/BGR (no dither needed forms) */if (isAnyRGB(srcFormat) && isAnyRGB(dstFormat) && findRgbConvFn(c)&& (!needsDither || (c->flags&(SWS_FAST_BILINEAR|SWS_POINT))))c->swscale = rgbToRgbWrapper;if ((srcFormat == AV_PIX_FMT_GBRP && dstFormat == AV_PIX_FMT_GBRAP) ||(srcFormat == AV_PIX_FMT_GBRAP && dstFormat == AV_PIX_FMT_GBRP))c->swscale = planarRgbToplanarRgbWrapper;#define isByteRGB(f) ( \f == AV_PIX_FMT_RGB32 || \f == AV_PIX_FMT_RGB32_1 || \f == AV_PIX_FMT_RGB24 || \f == AV_PIX_FMT_BGR32 || \f == AV_PIX_FMT_BGR32_1 || \f == AV_PIX_FMT_BGR24)if (srcFormat == AV_PIX_FMT_GBRP && isPlanar(srcFormat) && 
isByteRGB(dstFormat))c->swscale = planarRgbToRgbWrapper;if ((srcFormat == AV_PIX_FMT_RGB48LE || srcFormat == AV_PIX_FMT_RGB48BE ||srcFormat == AV_PIX_FMT_BGR48LE || srcFormat == AV_PIX_FMT_BGR48BE ||srcFormat == AV_PIX_FMT_RGBA64LE || srcFormat == AV_PIX_FMT_RGBA64BE ||srcFormat == AV_PIX_FMT_BGRA64LE || srcFormat == AV_PIX_FMT_BGRA64BE) &&(dstFormat == AV_PIX_FMT_GBRP9LE || dstFormat == AV_PIX_FMT_GBRP9BE ||dstFormat == AV_PIX_FMT_GBRP10LE || dstFormat == AV_PIX_FMT_GBRP10BE ||dstFormat == AV_PIX_FMT_GBRP12LE || dstFormat == AV_PIX_FMT_GBRP12BE ||dstFormat == AV_PIX_FMT_GBRP14LE || dstFormat == AV_PIX_FMT_GBRP14BE ||dstFormat == AV_PIX_FMT_GBRP16LE || dstFormat == AV_PIX_FMT_GBRP16BE ||dstFormat == AV_PIX_FMT_GBRAP16LE || dstFormat == AV_PIX_FMT_GBRAP16BE ))c->swscale = Rgb16ToPlanarRgb16Wrapper;if ((srcFormat == AV_PIX_FMT_GBRP9LE || srcFormat == AV_PIX_FMT_GBRP9BE ||srcFormat == AV_PIX_FMT_GBRP16LE || srcFormat == AV_PIX_FMT_GBRP16BE ||srcFormat == AV_PIX_FMT_GBRP10LE || srcFormat == AV_PIX_FMT_GBRP10BE ||srcFormat == AV_PIX_FMT_GBRP12LE || srcFormat == AV_PIX_FMT_GBRP12BE ||srcFormat == AV_PIX_FMT_GBRP14LE || srcFormat == AV_PIX_FMT_GBRP14BE ||srcFormat == AV_PIX_FMT_GBRAP16LE || srcFormat == AV_PIX_FMT_GBRAP16BE) &&(dstFormat == AV_PIX_FMT_RGB48LE || dstFormat == AV_PIX_FMT_RGB48BE ||dstFormat == AV_PIX_FMT_BGR48LE || dstFormat == AV_PIX_FMT_BGR48BE ||dstFormat == AV_PIX_FMT_RGBA64LE || dstFormat == AV_PIX_FMT_RGBA64BE ||dstFormat == AV_PIX_FMT_BGRA64LE || dstFormat == AV_PIX_FMT_BGRA64BE))c->swscale = planarRgb16ToRgb16Wrapper;if (av_pix_fmt_desc_get(srcFormat)->comp[0].depth_minus1 == 7 &&isPackedRGB(srcFormat) && dstFormat == AV_PIX_FMT_GBRP)c->swscale = rgbToPlanarRgbWrapper;if (isBayer(srcFormat)) {if (dstFormat == AV_PIX_FMT_RGB24)c->swscale = bayer_to_rgb24_wrapper;else if (dstFormat == AV_PIX_FMT_YUV420P)c->swscale = bayer_to_yv12_wrapper;else if (!isBayer(dstFormat)) {av_log(c, AV_LOG_ERROR, "unsupported bayer conversion\n");av_assert0(0);}}/* bswap 
16 bits per pixel/component packed formats */
    if (IS_DIFFERENT_ENDIANESS(srcFormat, dstFormat, AV_PIX_FMT_BAYER_BGGR16) ||
        IS_DIFFERENT_ENDIANESS(srcFormat, dstFormat, AV_PIX_FMT_BAYER_RGGB16) ||
        IS_DIFFERENT_ENDIANESS(srcFormat, dstFormat, AV_PIX_FMT_BAYER_GBRG16) ||
        IS_DIFFERENT_ENDIANESS(srcFormat, dstFormat, AV_PIX_FMT_BAYER_GRBG16) ||
        IS_DIFFERENT_ENDIANESS(srcFormat, dstFormat, AV_PIX_FMT_BGR444) ||
        IS_DIFFERENT_ENDIANESS(srcFormat, dstFormat, AV_PIX_FMT_BGR48)  ||
        IS_DIFFERENT_ENDIANESS(srcFormat, dstFormat, AV_PIX_FMT_BGRA64) ||
        IS_DIFFERENT_ENDIANESS(srcFormat, dstFormat, AV_PIX_FMT_BGR555) ||
        IS_DIFFERENT_ENDIANESS(srcFormat, dstFormat, AV_PIX_FMT_BGR565) ||
        IS_DIFFERENT_ENDIANESS(srcFormat, dstFormat, AV_PIX_FMT_BGRA64) ||
        IS_DIFFERENT_ENDIANESS(srcFormat, dstFormat, AV_PIX_FMT_GRAY16) ||
        IS_DIFFERENT_ENDIANESS(srcFormat, dstFormat, AV_PIX_FMT_YA16)   ||
        IS_DIFFERENT_ENDIANESS(srcFormat, dstFormat, AV_PIX_FMT_GBRP9)  ||
        IS_DIFFERENT_ENDIANESS(srcFormat, dstFormat, AV_PIX_FMT_GBRP10) ||
        IS_DIFFERENT_ENDIANESS(srcFormat, dstFormat, AV_PIX_FMT_GBRP12) ||
        IS_DIFFERENT_ENDIANESS(srcFormat, dstFormat, AV_PIX_FMT_GBRP14) ||
        IS_DIFFERENT_ENDIANESS(srcFormat, dstFormat, AV_PIX_FMT_GBRP16) ||
        IS_DIFFERENT_ENDIANESS(srcFormat, dstFormat, AV_PIX_FMT_GBRAP16) ||
        IS_DIFFERENT_ENDIANESS(srcFormat, dstFormat, AV_PIX_FMT_RGB444) ||
        IS_DIFFERENT_ENDIANESS(srcFormat, dstFormat, AV_PIX_FMT_RGB48)  ||
        IS_DIFFERENT_ENDIANESS(srcFormat, dstFormat, AV_PIX_FMT_RGBA64) ||
        IS_DIFFERENT_ENDIANESS(srcFormat, dstFormat, AV_PIX_FMT_RGB555) ||
        IS_DIFFERENT_ENDIANESS(srcFormat, dstFormat, AV_PIX_FMT_RGB565) ||
        IS_DIFFERENT_ENDIANESS(srcFormat, dstFormat, AV_PIX_FMT_RGBA64) ||
        IS_DIFFERENT_ENDIANESS(srcFormat, dstFormat, AV_PIX_FMT_XYZ12)  ||
        IS_DIFFERENT_ENDIANESS(srcFormat, dstFormat, AV_PIX_FMT_YUV420P9)  ||
        IS_DIFFERENT_ENDIANESS(srcFormat, dstFormat, AV_PIX_FMT_YUV420P10) ||
        IS_DIFFERENT_ENDIANESS(srcFormat, dstFormat, AV_PIX_FMT_YUV420P12) ||
        IS_DIFFERENT_ENDIANESS(srcFormat, dstFormat, AV_PIX_FMT_YUV420P14) ||
        IS_DIFFERENT_ENDIANESS(srcFormat, dstFormat, AV_PIX_FMT_YUV420P16) ||
        IS_DIFFERENT_ENDIANESS(srcFormat, dstFormat, AV_PIX_FMT_YUV422P9)  ||
        IS_DIFFERENT_ENDIANESS(srcFormat, dstFormat, AV_PIX_FMT_YUV422P10) ||
        IS_DIFFERENT_ENDIANESS(srcFormat, dstFormat, AV_PIX_FMT_YUV422P12) ||
        IS_DIFFERENT_ENDIANESS(srcFormat, dstFormat, AV_PIX_FMT_YUV422P14) ||
        IS_DIFFERENT_ENDIANESS(srcFormat, dstFormat, AV_PIX_FMT_YUV422P16) ||
        IS_DIFFERENT_ENDIANESS(srcFormat, dstFormat, AV_PIX_FMT_YUV444P9)  ||
        IS_DIFFERENT_ENDIANESS(srcFormat, dstFormat, AV_PIX_FMT_YUV444P10) ||
        IS_DIFFERENT_ENDIANESS(srcFormat, dstFormat, AV_PIX_FMT_YUV444P12) ||
        IS_DIFFERENT_ENDIANESS(srcFormat, dstFormat, AV_PIX_FMT_YUV444P14) ||
        IS_DIFFERENT_ENDIANESS(srcFormat, dstFormat, AV_PIX_FMT_YUV444P16))
        c->swscale = packed_16bpc_bswap;

    if (usePal(srcFormat) && isByteRGB(dstFormat))
        c->swscale = palToRgbWrapper;

    if (srcFormat == AV_PIX_FMT_YUV422P) {
        if (dstFormat == AV_PIX_FMT_YUYV422)
            c->swscale = yuv422pToYuy2Wrapper;
        else if (dstFormat == AV_PIX_FMT_UYVY422)
            c->swscale = yuv422pToUyvyWrapper;
    }

    /* LQ converters if -sws 0 or -sws 4 */
    if (c->flags & (SWS_FAST_BILINEAR | SWS_POINT)) {
        /* yv12_to_yuy2 */
        if (srcFormat == AV_PIX_FMT_YUV420P || srcFormat == AV_PIX_FMT_YUVA420P) {
            if (dstFormat == AV_PIX_FMT_YUYV422)
                c->swscale = planarToYuy2Wrapper;
            else if (dstFormat == AV_PIX_FMT_UYVY422)
                c->swscale = planarToUyvyWrapper;
        }
    }
    if (srcFormat == AV_PIX_FMT_YUYV422 &&
        (dstFormat == AV_PIX_FMT_YUV420P || dstFormat == AV_PIX_FMT_YUVA420P))
        c->swscale = yuyvToYuv420Wrapper;
    if (srcFormat == AV_PIX_FMT_UYVY422 &&
        (dstFormat == AV_PIX_FMT_YUV420P || dstFormat == AV_PIX_FMT_YUVA420P))
        c->swscale = uyvyToYuv420Wrapper;
    if (srcFormat == AV_PIX_FMT_YUYV422 && dstFormat == AV_PIX_FMT_YUV422P)
        c->swscale = yuyvToYuv422Wrapper;
    if (srcFormat == AV_PIX_FMT_UYVY422 && dstFormat == AV_PIX_FMT_YUV422P)
        c->swscale = uyvyToYuv422Wrapper;

#define isPlanarGray(x) (isGray(x) && (x) != AV_PIX_FMT_YA8 && (x) != AV_PIX_FMT_YA16LE && (x) != AV_PIX_FMT_YA16BE)
    /* simple copy */
    if (srcFormat == dstFormat ||
        (srcFormat == AV_PIX_FMT_YUVA420P && dstFormat == AV_PIX_FMT_YUV420P) ||
        (srcFormat == AV_PIX_FMT_YUV420P && dstFormat == AV_PIX_FMT_YUVA420P) ||
        (isPlanarYUV(srcFormat) && isPlanarGray(dstFormat)) ||
        (isPlanarYUV(dstFormat) && isPlanarGray(srcFormat)) ||
        (isPlanarGray(dstFormat) && isPlanarGray(srcFormat)) ||
        (isPlanarYUV(srcFormat) && isPlanarYUV(dstFormat) &&
         c->chrDstHSubSample == c->chrSrcHSubSample &&
         c->chrDstVSubSample == c->chrSrcVSubSample &&
         dstFormat != AV_PIX_FMT_NV12 && dstFormat != AV_PIX_FMT_NV21 &&
         srcFormat != AV_PIX_FMT_NV12 && srcFormat != AV_PIX_FMT_NV21)) {
        if (isPacked(c->srcFormat))
            c->swscale = packedCopyWrapper;
        else /* Planar YUV or gray */
            c->swscale = planarCopyWrapper;
    }

    if (ARCH_PPC)
        ff_get_unscaled_swscale_ppc(c);
//     if (ARCH_ARM)
//         ff_get_unscaled_swscale_arm(c);
}

From the source of ff_get_unscaled_swscale() it can be seen that most of the functions assigned to the swscale pointer of SwsContext are named XXXWrapper(). These functions wrap a set of basic pixel-format conversion routines. For example, yuyvToYuv422Wrapper() is defined as follows.
static int yuyvToYuv422Wrapper(SwsContext *c, const uint8_t *src[],
                               int srcStride[], int srcSliceY, int srcSliceH,
                               uint8_t *dstParam[], int dstStride[])
{
    uint8_t *ydst = dstParam[0] + dstStride[0] * srcSliceY;
    uint8_t *udst = dstParam[1] + dstStride[1] * srcSliceY;
    uint8_t *vdst = dstParam[2] + dstStride[2] * srcSliceY;

    yuyvtoyuv422(ydst, udst, vdst, src[0], c->srcW, srcSliceH, dstStride[0],
                 dstStride[1], srcStride[0]);

    return srcSliceH;
}

As the definition shows, yuyvToYuv422Wrapper() simply calls yuyvtoyuv422(), a function in rgb2rgb.c that converts YUYV to YUV422P (this function was covered in an earlier article).
8.假設(shè)須要拉伸的話,就會調(diào)用ff_getSwsFunc()將通用的swscale()賦值給SwsContext中的swscale指針。然后返回。
The previous step (no scaling) is actually the less common case; most of the time this step is executed instead, calling ff_getSwsFunc() to obtain the image scaling function.
ff_getSwsFunc()
ff_getSwsFunc() obtains the generic swscale() function; its definition is shown below. From the source it can be seen that ff_getSwsFunc() calls sws_init_swscale(), and, when the system supports x86 assembly, ff_sws_init_swscale_x86() as well.
sws_init_swscale()
sws_init_swscale() is defined in libswscale\swscale.c, as shown below.
static av_cold void sws_init_swscale(SwsContext *c)
{
    enum AVPixelFormat srcFormat = c->srcFormat;

    ff_sws_init_output_funcs(c, &c->yuv2plane1, &c->yuv2planeX,
                             &c->yuv2nv12cX, &c->yuv2packed1,
                             &c->yuv2packed2, &c->yuv2packedX, &c->yuv2anyX);
    ff_sws_init_input_funcs(c);

    if (c->srcBpc == 8) {
        if (c->dstBpc <= 14) {
            c->hyScale = c->hcScale = hScale8To15_c;
            if (c->flags & SWS_FAST_BILINEAR) {
                c->hyscale_fast = ff_hyscale_fast_c;
                c->hcscale_fast = ff_hcscale_fast_c;
            }
        } else {
            c->hyScale = c->hcScale = hScale8To19_c;
        }
    } else {
        c->hyScale = c->hcScale = c->dstBpc > 14 ? hScale16To19_c
                                                 : hScale16To15_c;
    }

    ff_sws_init_range_convert(c);

    if (!(isGray(srcFormat) || isGray(c->dstFormat) ||
          srcFormat == AV_PIX_FMT_MONOBLACK || srcFormat == AV_PIX_FMT_MONOWHITE))
        c->needs_hcscale = 1;
}

As the function shows, sws_init_swscale() mainly calls three functions: ff_sws_init_output_funcs(), which initializes the output functions; ff_sws_init_input_funcs(), which initializes the input functions; and ff_sws_init_range_convert(), which initializes the pixel-value range conversion functions.
ff_sws_init_output_funcs()
ff_sws_init_output_funcs() initializes the "output functions", whose job in libswscale is to write out one line of processed pixel data. It is defined in libswscale\output.c, as shown below.
av_cold void ff_sws_init_output_funcs(SwsContext *c,
                                      yuv2planar1_fn *yuv2plane1,
                                      yuv2planarX_fn *yuv2planeX,
                                      yuv2interleavedX_fn *yuv2nv12cX,
                                      yuv2packed1_fn *yuv2packed1,
                                      yuv2packed2_fn *yuv2packed2,
                                      yuv2packedX_fn *yuv2packedX,
                                      yuv2anyX_fn *yuv2anyX)
{
    enum AVPixelFormat dstFormat = c->dstFormat;
    const AVPixFmtDescriptor *desc = av_pix_fmt_desc_get(dstFormat);

    if (is16BPS(dstFormat)) {
        *yuv2planeX = isBE(dstFormat) ? yuv2planeX_16BE_c : yuv2planeX_16LE_c;
        *yuv2plane1 = isBE(dstFormat) ? yuv2plane1_16BE_c : yuv2plane1_16LE_c;
    } else if (is9_OR_10BPS(dstFormat)) {
        if (desc->comp[0].depth_minus1 == 8) {
            *yuv2planeX = isBE(dstFormat) ? yuv2planeX_9BE_c : yuv2planeX_9LE_c;
            *yuv2plane1 = isBE(dstFormat) ? yuv2plane1_9BE_c : yuv2plane1_9LE_c;
        } else if (desc->comp[0].depth_minus1 == 9) {
            *yuv2planeX = isBE(dstFormat) ? yuv2planeX_10BE_c : yuv2planeX_10LE_c;
            *yuv2plane1 = isBE(dstFormat) ? yuv2plane1_10BE_c : yuv2plane1_10LE_c;
        } else if (desc->comp[0].depth_minus1 == 11) {
            *yuv2planeX = isBE(dstFormat) ? yuv2planeX_12BE_c : yuv2planeX_12LE_c;
            *yuv2plane1 = isBE(dstFormat) ? yuv2plane1_12BE_c : yuv2plane1_12LE_c;
        } else if (desc->comp[0].depth_minus1 == 13) {
            *yuv2planeX = isBE(dstFormat) ? yuv2planeX_14BE_c : yuv2planeX_14LE_c;
            *yuv2plane1 = isBE(dstFormat) ? yuv2plane1_14BE_c : yuv2plane1_14LE_c;
        } else
            av_assert0(0);
    } else {
        *yuv2plane1 = yuv2plane1_8_c;
        *yuv2planeX = yuv2planeX_8_c;
        if (dstFormat == AV_PIX_FMT_NV12 || dstFormat == AV_PIX_FMT_NV21)
            *yuv2nv12cX = yuv2nv12cX_c;
    }

    if (c->flags & SWS_FULL_CHR_H_INT) {
        switch (dstFormat) {
        case AV_PIX_FMT_RGBA:
#if CONFIG_SMALL
            *yuv2packedX = yuv2rgba32_full_X_c; *yuv2packed2 = yuv2rgba32_full_2_c; *yuv2packed1 = yuv2rgba32_full_1_c;
#else
#if CONFIG_SWSCALE_ALPHA
            if (c->alpPixBuf) {
                *yuv2packedX = yuv2rgba32_full_X_c; *yuv2packed2 = yuv2rgba32_full_2_c; *yuv2packed1 = yuv2rgba32_full_1_c;
            } else
#endif /* CONFIG_SWSCALE_ALPHA */
            {
                *yuv2packedX = yuv2rgbx32_full_X_c; *yuv2packed2 = yuv2rgbx32_full_2_c; *yuv2packed1 = yuv2rgbx32_full_1_c;
            }
#endif /* !CONFIG_SMALL */
            break;
        case AV_PIX_FMT_ARGB:
#if CONFIG_SMALL
            *yuv2packedX = yuv2argb32_full_X_c; *yuv2packed2 = yuv2argb32_full_2_c; *yuv2packed1 = yuv2argb32_full_1_c;
#else
#if CONFIG_SWSCALE_ALPHA
            if (c->alpPixBuf) {
                *yuv2packedX = yuv2argb32_full_X_c; *yuv2packed2 = yuv2argb32_full_2_c; *yuv2packed1 = yuv2argb32_full_1_c;
            } else
#endif /* CONFIG_SWSCALE_ALPHA */
            {
                *yuv2packedX = yuv2xrgb32_full_X_c; *yuv2packed2 = yuv2xrgb32_full_2_c; *yuv2packed1 = yuv2xrgb32_full_1_c;
            }
#endif /* !CONFIG_SMALL */
            break;
        case AV_PIX_FMT_BGRA:
#if CONFIG_SMALL
            *yuv2packedX = yuv2bgra32_full_X_c; *yuv2packed2 = yuv2bgra32_full_2_c; *yuv2packed1 = yuv2bgra32_full_1_c;
#else
#if CONFIG_SWSCALE_ALPHA
            if (c->alpPixBuf) {
                *yuv2packedX = yuv2bgra32_full_X_c; *yuv2packed2 = yuv2bgra32_full_2_c; *yuv2packed1 = yuv2bgra32_full_1_c;
            } else
#endif /* CONFIG_SWSCALE_ALPHA */
            {
                *yuv2packedX = yuv2bgrx32_full_X_c; *yuv2packed2 = yuv2bgrx32_full_2_c; *yuv2packed1 = yuv2bgrx32_full_1_c;
            }
#endif /* !CONFIG_SMALL */
            break;
        case AV_PIX_FMT_ABGR:
#if CONFIG_SMALL
            *yuv2packedX = yuv2abgr32_full_X_c; *yuv2packed2 = yuv2abgr32_full_2_c; *yuv2packed1 = yuv2abgr32_full_1_c;
#else
#if CONFIG_SWSCALE_ALPHA
            if (c->alpPixBuf) {
                *yuv2packedX = yuv2abgr32_full_X_c; *yuv2packed2 = yuv2abgr32_full_2_c; *yuv2packed1 = yuv2abgr32_full_1_c;
            } else
#endif /* CONFIG_SWSCALE_ALPHA */
            {
                *yuv2packedX = yuv2xbgr32_full_X_c; *yuv2packed2 = yuv2xbgr32_full_2_c; *yuv2packed1 = yuv2xbgr32_full_1_c;
            }
#endif /* !CONFIG_SMALL */
            break;
        case AV_PIX_FMT_RGB24:
            *yuv2packedX = yuv2rgb24_full_X_c; *yuv2packed2 = yuv2rgb24_full_2_c; *yuv2packed1 = yuv2rgb24_full_1_c;
            break;
        case AV_PIX_FMT_BGR24:
            *yuv2packedX = yuv2bgr24_full_X_c; *yuv2packed2 = yuv2bgr24_full_2_c; *yuv2packed1 = yuv2bgr24_full_1_c;
            break;
        case AV_PIX_FMT_BGR4_BYTE:
            *yuv2packedX = yuv2bgr4_byte_full_X_c; *yuv2packed2 = yuv2bgr4_byte_full_2_c; *yuv2packed1 = yuv2bgr4_byte_full_1_c;
            break;
        case AV_PIX_FMT_RGB4_BYTE:
            *yuv2packedX = yuv2rgb4_byte_full_X_c; *yuv2packed2 = yuv2rgb4_byte_full_2_c; *yuv2packed1 = yuv2rgb4_byte_full_1_c;
            break;
        case AV_PIX_FMT_BGR8:
            *yuv2packedX = yuv2bgr8_full_X_c; *yuv2packed2 = yuv2bgr8_full_2_c; *yuv2packed1 = yuv2bgr8_full_1_c;
            break;
        case AV_PIX_FMT_RGB8:
            *yuv2packedX = yuv2rgb8_full_X_c; *yuv2packed2 = yuv2rgb8_full_2_c; *yuv2packed1 = yuv2rgb8_full_1_c;
            break;
        case AV_PIX_FMT_GBRP:
        case AV_PIX_FMT_GBRP9BE:  case AV_PIX_FMT_GBRP9LE:
        case AV_PIX_FMT_GBRP10BE: case AV_PIX_FMT_GBRP10LE:
        case AV_PIX_FMT_GBRP12BE: case AV_PIX_FMT_GBRP12LE:
        case AV_PIX_FMT_GBRP14BE: case AV_PIX_FMT_GBRP14LE:
        case AV_PIX_FMT_GBRP16BE: case AV_PIX_FMT_GBRP16LE:
        case AV_PIX_FMT_GBRAP:
            *yuv2anyX = yuv2gbrp_full_X_c;
            break;
        }
        if (!*yuv2packedX && !*yuv2anyX)
            goto YUV_PACKED;
    } else {
    YUV_PACKED:
        switch (dstFormat) {
        case AV_PIX_FMT_RGBA64LE:
#if CONFIG_SWSCALE_ALPHA
            if (c->alpPixBuf) {
                *yuv2packed1 = yuv2rgba64le_1_c; *yuv2packed2 = yuv2rgba64le_2_c; *yuv2packedX = yuv2rgba64le_X_c;
            } else
#endif /* CONFIG_SWSCALE_ALPHA */
            {
                *yuv2packed1 = yuv2rgbx64le_1_c; *yuv2packed2 = yuv2rgbx64le_2_c; *yuv2packedX = yuv2rgbx64le_X_c;
            }
            break;
        case AV_PIX_FMT_RGBA64BE:
#if CONFIG_SWSCALE_ALPHA
            if (c->alpPixBuf) {
                *yuv2packed1 = yuv2rgba64be_1_c; *yuv2packed2 = yuv2rgba64be_2_c; *yuv2packedX = yuv2rgba64be_X_c;
            } else
#endif /* CONFIG_SWSCALE_ALPHA */
            {
                *yuv2packed1 = yuv2rgbx64be_1_c; *yuv2packed2 = yuv2rgbx64be_2_c; *yuv2packedX = yuv2rgbx64be_X_c;
            }
            break;
        case AV_PIX_FMT_BGRA64LE:
#if CONFIG_SWSCALE_ALPHA
            if (c->alpPixBuf) {
                *yuv2packed1 = yuv2bgra64le_1_c; *yuv2packed2 = yuv2bgra64le_2_c; *yuv2packedX = yuv2bgra64le_X_c;
            } else
#endif /* CONFIG_SWSCALE_ALPHA */
            {
                *yuv2packed1 = yuv2bgrx64le_1_c; *yuv2packed2 = yuv2bgrx64le_2_c; *yuv2packedX = yuv2bgrx64le_X_c;
            }
            break;
        case AV_PIX_FMT_BGRA64BE:
#if CONFIG_SWSCALE_ALPHA
            if (c->alpPixBuf) {
                *yuv2packed1 = yuv2bgra64be_1_c; *yuv2packed2 = yuv2bgra64be_2_c; *yuv2packedX = yuv2bgra64be_X_c;
            } else
#endif /* CONFIG_SWSCALE_ALPHA */
            {
                *yuv2packed1 = yuv2bgrx64be_1_c; *yuv2packed2 = yuv2bgrx64be_2_c; *yuv2packedX = yuv2bgrx64be_X_c;
            }
            break;
        case AV_PIX_FMT_RGB48LE:
            *yuv2packed1 = yuv2rgb48le_1_c; *yuv2packed2 = yuv2rgb48le_2_c; *yuv2packedX = yuv2rgb48le_X_c;
            break;
        case AV_PIX_FMT_RGB48BE:
            *yuv2packed1 = yuv2rgb48be_1_c; *yuv2packed2 = yuv2rgb48be_2_c; *yuv2packedX = yuv2rgb48be_X_c;
            break;
        case AV_PIX_FMT_BGR48LE:
            *yuv2packed1 = yuv2bgr48le_1_c; *yuv2packed2 = yuv2bgr48le_2_c; *yuv2packedX = yuv2bgr48le_X_c;
            break;
        case AV_PIX_FMT_BGR48BE:
            *yuv2packed1 = yuv2bgr48be_1_c; *yuv2packed2 = yuv2bgr48be_2_c; *yuv2packedX = yuv2bgr48be_X_c;
            break;
        case AV_PIX_FMT_RGB32:
        case AV_PIX_FMT_BGR32:
#if CONFIG_SMALL
            *yuv2packed1 = yuv2rgb32_1_c; *yuv2packed2 = yuv2rgb32_2_c; *yuv2packedX = yuv2rgb32_X_c;
#else
#if CONFIG_SWSCALE_ALPHA
            if (c->alpPixBuf) {
                *yuv2packed1 = yuv2rgba32_1_c; *yuv2packed2 = yuv2rgba32_2_c; *yuv2packedX = yuv2rgba32_X_c;
            } else
#endif /* CONFIG_SWSCALE_ALPHA */
            {
                *yuv2packed1 = yuv2rgbx32_1_c; *yuv2packed2 = yuv2rgbx32_2_c; *yuv2packedX = yuv2rgbx32_X_c;
            }
#endif /* !CONFIG_SMALL */
            break;
        case AV_PIX_FMT_RGB32_1:
        case AV_PIX_FMT_BGR32_1:
#if CONFIG_SMALL
            *yuv2packed1 = yuv2rgb32_1_1_c; *yuv2packed2 = yuv2rgb32_1_2_c; *yuv2packedX = yuv2rgb32_1_X_c;
#else
#if CONFIG_SWSCALE_ALPHA
            if (c->alpPixBuf) {
                *yuv2packed1 = yuv2rgba32_1_1_c; *yuv2packed2 = yuv2rgba32_1_2_c; *yuv2packedX = yuv2rgba32_1_X_c;
            } else
#endif /* CONFIG_SWSCALE_ALPHA */
            {
                *yuv2packed1 = yuv2rgbx32_1_1_c; *yuv2packed2 = yuv2rgbx32_1_2_c; *yuv2packedX = yuv2rgbx32_1_X_c;
            }
#endif /* !CONFIG_SMALL */
            break;
        case AV_PIX_FMT_RGB24:
            *yuv2packed1 = yuv2rgb24_1_c; *yuv2packed2 = yuv2rgb24_2_c; *yuv2packedX = yuv2rgb24_X_c;
            break;
        case AV_PIX_FMT_BGR24:
            *yuv2packed1 = yuv2bgr24_1_c; *yuv2packed2 = yuv2bgr24_2_c; *yuv2packedX = yuv2bgr24_X_c;
            break;
        case AV_PIX_FMT_RGB565LE: case AV_PIX_FMT_RGB565BE:
        case AV_PIX_FMT_BGR565LE: case AV_PIX_FMT_BGR565BE:
            *yuv2packed1 = yuv2rgb16_1_c; *yuv2packed2 = yuv2rgb16_2_c; *yuv2packedX = yuv2rgb16_X_c;
            break;
        case AV_PIX_FMT_RGB555LE: case AV_PIX_FMT_RGB555BE:
        case AV_PIX_FMT_BGR555LE: case AV_PIX_FMT_BGR555BE:
            *yuv2packed1 = yuv2rgb15_1_c; *yuv2packed2 = yuv2rgb15_2_c; *yuv2packedX = yuv2rgb15_X_c;
            break;
        case AV_PIX_FMT_RGB444LE: case AV_PIX_FMT_RGB444BE:
        case AV_PIX_FMT_BGR444LE: case AV_PIX_FMT_BGR444BE:
            *yuv2packed1 = yuv2rgb12_1_c; *yuv2packed2 = yuv2rgb12_2_c; *yuv2packedX = yuv2rgb12_X_c;
            break;
        case AV_PIX_FMT_RGB8: case AV_PIX_FMT_BGR8:
            *yuv2packed1 = yuv2rgb8_1_c; *yuv2packed2 = yuv2rgb8_2_c; *yuv2packedX = yuv2rgb8_X_c;
            break;
        case AV_PIX_FMT_RGB4: case AV_PIX_FMT_BGR4:
            *yuv2packed1 = yuv2rgb4_1_c; *yuv2packed2 = yuv2rgb4_2_c; *yuv2packedX = yuv2rgb4_X_c;
            break;
        case AV_PIX_FMT_RGB4_BYTE: case AV_PIX_FMT_BGR4_BYTE:
            *yuv2packed1 = yuv2rgb4b_1_c; *yuv2packed2 = yuv2rgb4b_2_c; *yuv2packedX = yuv2rgb4b_X_c;
            break;
        }
    }

    switch (dstFormat) {
    case AV_PIX_FMT_MONOWHITE:
        *yuv2packed1 = yuv2monowhite_1_c; *yuv2packed2 = yuv2monowhite_2_c; *yuv2packedX = yuv2monowhite_X_c;
        break;
    case AV_PIX_FMT_MONOBLACK:
        *yuv2packed1 = yuv2monoblack_1_c; *yuv2packed2 = yuv2monoblack_2_c; *yuv2packedX = yuv2monoblack_X_c;
        break;
    case AV_PIX_FMT_YUYV422:
        *yuv2packed1 = yuv2yuyv422_1_c; *yuv2packed2 = yuv2yuyv422_2_c; *yuv2packedX = yuv2yuyv422_X_c;
        break;
    case AV_PIX_FMT_YVYU422:
        *yuv2packed1 = yuv2yvyu422_1_c; *yuv2packed2 = yuv2yvyu422_2_c; *yuv2packedX = yuv2yvyu422_X_c;
        break;
    case AV_PIX_FMT_UYVY422:
        *yuv2packed1 = yuv2uyvy422_1_c; *yuv2packed2 = yuv2uyvy422_2_c; *yuv2packedX = yuv2uyvy422_X_c;
        break;
    }
}
ff_sws_init_output_funcs() assigns the following function pointers according to the output pixel format:
yuv2plane1: of type yuv2planar1_fn. Outputs one line of horizontally scaled planar data, without vertical scaling.
yuv2planeX: of type yuv2planarX_fn. Outputs one line of horizontally scaled planar data, with vertical scaling.
yuv2packed1: of type yuv2packed1_fn. Outputs one line of horizontally scaled packed data, without vertical scaling.
yuv2packed2: of type yuv2packed2_fn. Outputs one line of horizontally scaled packed data, vertically scaled from two input lines.
yuv2packedX: of type yuv2packedX_fn. Outputs one line of horizontally scaled packed data, with vertical scaling.
yuv2nv12cX: of type yuv2interleavedX_fn. Not yet studied in this article.
yuv2anyX: of type yuv2anyX_fn. Not yet studied in this article.
ff_sws_init_input_funcs()
ff_sws_init_input_funcs() initializes the "input functions", whose job in libswscale is to convert pixels of an arbitrary input format to YUV for subsequent processing. It is defined in libswscale\input.c, as shown below.

av_cold void ff_sws_init_input_funcs(SwsContext *c)
{
    enum AVPixelFormat srcFormat = c->srcFormat;

    c->chrToYV12 = NULL;
    switch (srcFormat) {
    case AV_PIX_FMT_YUYV422: c->chrToYV12 = yuy2ToUV_c; break;
    case AV_PIX_FMT_YVYU422: c->chrToYV12 = yvy2ToUV_c; break;
    case AV_PIX_FMT_UYVY422: c->chrToYV12 = uyvyToUV_c; break;
    case AV_PIX_FMT_NV12:    c->chrToYV12 = nv12ToUV_c; break;
    case AV_PIX_FMT_NV21:    c->chrToYV12 = nv21ToUV_c; break;
    case AV_PIX_FMT_RGB8:
    case AV_PIX_FMT_BGR8:
    case AV_PIX_FMT_PAL8:
    case AV_PIX_FMT_BGR4_BYTE:
    case AV_PIX_FMT_RGB4_BYTE: c->chrToYV12 = palToUV_c; break;
    case AV_PIX_FMT_GBRP9LE:   c->readChrPlanar = planar_rgb9le_to_uv;  break;
    case AV_PIX_FMT_GBRP10LE:  c->readChrPlanar = planar_rgb10le_to_uv; break;
    case AV_PIX_FMT_GBRP12LE:  c->readChrPlanar = planar_rgb12le_to_uv; break;
    case AV_PIX_FMT_GBRP14LE:  c->readChrPlanar = planar_rgb14le_to_uv; break;
    case AV_PIX_FMT_GBRAP16LE:
    case AV_PIX_FMT_GBRP16LE:  c->readChrPlanar = planar_rgb16le_to_uv; break;
    case AV_PIX_FMT_GBRP9BE:   c->readChrPlanar = planar_rgb9be_to_uv;  break;
    case AV_PIX_FMT_GBRP10BE:  c->readChrPlanar = planar_rgb10be_to_uv; break;
    case AV_PIX_FMT_GBRP12BE:  c->readChrPlanar = planar_rgb12be_to_uv; break;
    case AV_PIX_FMT_GBRP14BE:  c->readChrPlanar = planar_rgb14be_to_uv; break;
    case AV_PIX_FMT_GBRAP16BE:
    case AV_PIX_FMT_GBRP16BE:  c->readChrPlanar = planar_rgb16be_to_uv; break;
    case AV_PIX_FMT_GBRAP:
    case AV_PIX_FMT_GBRP:      c->readChrPlanar = planar_rgb_to_uv; break;
#if HAVE_BIGENDIAN
    case AV_PIX_FMT_YUV444P9LE:  case AV_PIX_FMT_YUV422P9LE:  case AV_PIX_FMT_YUV420P9LE:
    case AV_PIX_FMT_YUV422P10LE: case AV_PIX_FMT_YUV444P10LE: case AV_PIX_FMT_YUV420P10LE:
    case AV_PIX_FMT_YUV422P12LE: case AV_PIX_FMT_YUV444P12LE: case AV_PIX_FMT_YUV420P12LE:
    case AV_PIX_FMT_YUV422P14LE: case AV_PIX_FMT_YUV444P14LE: case AV_PIX_FMT_YUV420P14LE:
    case AV_PIX_FMT_YUV420P16LE: case AV_PIX_FMT_YUV422P16LE: case AV_PIX_FMT_YUV444P16LE:
    case AV_PIX_FMT_YUVA444P9LE: case AV_PIX_FMT_YUVA422P9LE: case AV_PIX_FMT_YUVA420P9LE:
    case AV_PIX_FMT_YUVA444P10LE: case AV_PIX_FMT_YUVA422P10LE: case AV_PIX_FMT_YUVA420P10LE:
    case AV_PIX_FMT_YUVA420P16LE: case AV_PIX_FMT_YUVA422P16LE: case AV_PIX_FMT_YUVA444P16LE:
        c->chrToYV12 = bswap16UV_c;
        break;
#else
    case AV_PIX_FMT_YUV444P9BE:  case AV_PIX_FMT_YUV422P9BE:  case AV_PIX_FMT_YUV420P9BE:
    case AV_PIX_FMT_YUV444P10BE: case AV_PIX_FMT_YUV422P10BE: case AV_PIX_FMT_YUV420P10BE:
    case AV_PIX_FMT_YUV444P12BE: case AV_PIX_FMT_YUV422P12BE: case AV_PIX_FMT_YUV420P12BE:
    case AV_PIX_FMT_YUV444P14BE: case AV_PIX_FMT_YUV422P14BE: case AV_PIX_FMT_YUV420P14BE:
    case AV_PIX_FMT_YUV420P16BE: case AV_PIX_FMT_YUV422P16BE: case AV_PIX_FMT_YUV444P16BE:
    case AV_PIX_FMT_YUVA444P9BE: case AV_PIX_FMT_YUVA422P9BE: case AV_PIX_FMT_YUVA420P9BE:
    case AV_PIX_FMT_YUVA444P10BE: case AV_PIX_FMT_YUVA422P10BE: case AV_PIX_FMT_YUVA420P10BE:
    case AV_PIX_FMT_YUVA420P16BE: case AV_PIX_FMT_YUVA422P16BE: case AV_PIX_FMT_YUVA444P16BE:
        c->chrToYV12 = bswap16UV_c;
        break;
#endif
    }
    if (c->chrSrcHSubSample) {
        switch (srcFormat) {
        case AV_PIX_FMT_RGBA64BE: c->chrToYV12 = rgb64BEToUV_half_c; break;
        case AV_PIX_FMT_RGBA64LE: c->chrToYV12 = rgb64LEToUV_half_c; break;
        case AV_PIX_FMT_BGRA64BE: c->chrToYV12 = bgr64BEToUV_half_c; break;
        case AV_PIX_FMT_BGRA64LE: c->chrToYV12 = bgr64LEToUV_half_c; break;
        case AV_PIX_FMT_RGB48BE:  c->chrToYV12 = rgb48BEToUV_half_c; break;
        case AV_PIX_FMT_RGB48LE:  c->chrToYV12 = rgb48LEToUV_half_c; break;
        case AV_PIX_FMT_BGR48BE:  c->chrToYV12 = bgr48BEToUV_half_c; break;
        case AV_PIX_FMT_BGR48LE:  c->chrToYV12 = bgr48LEToUV_half_c; break;
        case AV_PIX_FMT_RGB32:    c->chrToYV12 = bgr32ToUV_half_c;   break;
        case AV_PIX_FMT_RGB32_1:  c->chrToYV12 = bgr321ToUV_half_c;  break;
        case AV_PIX_FMT_BGR24:    c->chrToYV12 = bgr24ToUV_half_c;   break;
        case AV_PIX_FMT_BGR565LE: c->chrToYV12 = bgr16leToUV_half_c; break;
        case AV_PIX_FMT_BGR565BE: c->chrToYV12 = bgr16beToUV_half_c; break;
        case AV_PIX_FMT_BGR555LE: c->chrToYV12 = bgr15leToUV_half_c; break;
        case AV_PIX_FMT_BGR555BE: c->chrToYV12 = bgr15beToUV_half_c; break;
        case AV_PIX_FMT_GBRAP:
        case AV_PIX_FMT_GBRP:     c->chrToYV12 = gbr24pToUV_half_c;  break;
        case AV_PIX_FMT_BGR444LE: c->chrToYV12 = bgr12leToUV_half_c; break;
        case AV_PIX_FMT_BGR444BE: c->chrToYV12 = bgr12beToUV_half_c; break;
        case AV_PIX_FMT_BGR32:    c->chrToYV12 = rgb32ToUV_half_c;   break;
        case AV_PIX_FMT_BGR32_1:  c->chrToYV12 = rgb321ToUV_half_c;  break;
        case AV_PIX_FMT_RGB24:    c->chrToYV12 = rgb24ToUV_half_c;   break;
        case AV_PIX_FMT_RGB565LE: c->chrToYV12 = rgb16leToUV_half_c; break;
        case AV_PIX_FMT_RGB565BE: c->chrToYV12 = rgb16beToUV_half_c; break;
        case AV_PIX_FMT_RGB555LE: c->chrToYV12 = rgb15leToUV_half_c; break;
        case AV_PIX_FMT_RGB555BE: c->chrToYV12 = rgb15beToUV_half_c; break;
        case AV_PIX_FMT_RGB444LE: c->chrToYV12 = rgb12leToUV_half_c; break;
        case AV_PIX_FMT_RGB444BE: c->chrToYV12 = rgb12beToUV_half_c; break;
        }
    } else {
        switch (srcFormat) {
        case AV_PIX_FMT_RGBA64BE: c->chrToYV12 = rgb64BEToUV_c; break;
        case AV_PIX_FMT_RGBA64LE: c->chrToYV12 = rgb64LEToUV_c; break;
        case AV_PIX_FMT_BGRA64BE: c->chrToYV12 = bgr64BEToUV_c; break;
        case AV_PIX_FMT_BGRA64LE: c->chrToYV12 = bgr64LEToUV_c; break;
        case AV_PIX_FMT_RGB48BE:  c->chrToYV12 = rgb48BEToUV_c; break;
        case AV_PIX_FMT_RGB48LE:  c->chrToYV12 = rgb48LEToUV_c; break;
        case AV_PIX_FMT_BGR48BE:  c->chrToYV12 = bgr48BEToUV_c; break;
        case AV_PIX_FMT_BGR48LE:  c->chrToYV12 = bgr48LEToUV_c; break;
        case AV_PIX_FMT_RGB32:    c->chrToYV12 = bgr32ToUV_c;   break;
        case AV_PIX_FMT_RGB32_1:  c->chrToYV12 = bgr321ToUV_c;  break;
        case AV_PIX_FMT_BGR24:    c->chrToYV12 = bgr24ToUV_c;   break;
        case AV_PIX_FMT_BGR565LE: c->chrToYV12 = bgr16leToUV_c; break;
        case AV_PIX_FMT_BGR565BE: c->chrToYV12 = bgr16beToUV_c; break;
        case AV_PIX_FMT_BGR555LE: c->chrToYV12 = bgr15leToUV_c; break;
        case AV_PIX_FMT_BGR555BE: c->chrToYV12 = bgr15beToUV_c; break;
        case AV_PIX_FMT_BGR444LE: c->chrToYV12 = bgr12leToUV_c; break;
        case AV_PIX_FMT_BGR444BE: c->chrToYV12 = bgr12beToUV_c; break;
        case AV_PIX_FMT_BGR32:    c->chrToYV12 = rgb32ToUV_c;   break;
        case AV_PIX_FMT_BGR32_1:  c->chrToYV12 = rgb321ToUV_c;  break;
        case AV_PIX_FMT_RGB24:    c->chrToYV12 = rgb24ToUV_c;   break;
        case AV_PIX_FMT_RGB565LE: c->chrToYV12 = rgb16leToUV_c; break;
        case AV_PIX_FMT_RGB565BE: c->chrToYV12 = rgb16beToUV_c; break;
        case AV_PIX_FMT_RGB555LE: c->chrToYV12 = rgb15leToUV_c; break;
        case AV_PIX_FMT_RGB555BE: c->chrToYV12 = rgb15beToUV_c; break;
        case AV_PIX_FMT_RGB444LE: c->chrToYV12 = rgb12leToUV_c; break;
        case AV_PIX_FMT_RGB444BE: c->chrToYV12 = rgb12beToUV_c; break;
        }
    }

    c->lumToYV12 = NULL;
    c->alpToYV12 = NULL;
    switch (srcFormat) {
    case AV_PIX_FMT_GBRP9LE:   c->readLumPlanar = planar_rgb9le_to_y;  break;
    case AV_PIX_FMT_GBRP10LE:  c->readLumPlanar = planar_rgb10le_to_y; break;
    case AV_PIX_FMT_GBRP12LE:  c->readLumPlanar = planar_rgb12le_to_y; break;
    case AV_PIX_FMT_GBRP14LE:  c->readLumPlanar = planar_rgb14le_to_y; break;
    case AV_PIX_FMT_GBRAP16LE:
    case AV_PIX_FMT_GBRP16LE:  c->readLumPlanar = planar_rgb16le_to_y; break;
    case AV_PIX_FMT_GBRP9BE:   c->readLumPlanar = planar_rgb9be_to_y;  break;
    case AV_PIX_FMT_GBRP10BE:  c->readLumPlanar = planar_rgb10be_to_y; break;
    case AV_PIX_FMT_GBRP12BE:  c->readLumPlanar = planar_rgb12be_to_y; break;
    case AV_PIX_FMT_GBRP14BE:  c->readLumPlanar = planar_rgb14be_to_y; break;
    case AV_PIX_FMT_GBRAP16BE:
    case AV_PIX_FMT_GBRP16BE:  c->readLumPlanar = planar_rgb16be_to_y; break;
    case AV_PIX_FMT_GBRAP:     c->readAlpPlanar = planar_rgb_to_a;
    case AV_PIX_FMT_GBRP:      c->readLumPlanar = planar_rgb_to_y; break;
#if HAVE_BIGENDIAN
    case AV_PIX_FMT_YUV444P9LE:  case AV_PIX_FMT_YUV422P9LE:  case AV_PIX_FMT_YUV420P9LE:
    case AV_PIX_FMT_YUV444P10LE: case AV_PIX_FMT_YUV422P10LE: case AV_PIX_FMT_YUV420P10LE:
    case AV_PIX_FMT_YUV444P12LE: case AV_PIX_FMT_YUV422P12LE: case AV_PIX_FMT_YUV420P12LE:
    case AV_PIX_FMT_YUV444P14LE: case AV_PIX_FMT_YUV422P14LE: case AV_PIX_FMT_YUV420P14LE:
    case AV_PIX_FMT_YUV420P16LE: case AV_PIX_FMT_YUV422P16LE: case AV_PIX_FMT_YUV444P16LE:
    case AV_PIX_FMT_GRAY16LE:
        c->lumToYV12 = bswap16Y_c;
        break;
    case AV_PIX_FMT_YUVA444P9LE: case AV_PIX_FMT_YUVA422P9LE: case AV_PIX_FMT_YUVA420P9LE:
    case AV_PIX_FMT_YUVA444P10LE: case AV_PIX_FMT_YUVA422P10LE: case AV_PIX_FMT_YUVA420P10LE:
    case AV_PIX_FMT_YUVA420P16LE: case AV_PIX_FMT_YUVA422P16LE: case AV_PIX_FMT_YUVA444P16LE:
        c->lumToYV12 = bswap16Y_c;
        c->alpToYV12 = bswap16Y_c;
        break;
#else
    case AV_PIX_FMT_YUV444P9BE:  case AV_PIX_FMT_YUV422P9BE:  case AV_PIX_FMT_YUV420P9BE:
    case AV_PIX_FMT_YUV444P10BE: case AV_PIX_FMT_YUV422P10BE: case AV_PIX_FMT_YUV420P10BE:
    case AV_PIX_FMT_YUV444P12BE: case AV_PIX_FMT_YUV422P12BE: case AV_PIX_FMT_YUV420P12BE:
    case AV_PIX_FMT_YUV444P14BE: case AV_PIX_FMT_YUV422P14BE: case AV_PIX_FMT_YUV420P14BE:
    case AV_PIX_FMT_YUV420P16BE: case AV_PIX_FMT_YUV422P16BE: case AV_PIX_FMT_YUV444P16BE:
    case AV_PIX_FMT_GRAY16BE:
        c->lumToYV12 = bswap16Y_c;
        break;
    case AV_PIX_FMT_YUVA444P9BE: case AV_PIX_FMT_YUVA422P9BE: case AV_PIX_FMT_YUVA420P9BE:
    case AV_PIX_FMT_YUVA444P10BE: case AV_PIX_FMT_YUVA422P10BE: case AV_PIX_FMT_YUVA420P10BE:
    case AV_PIX_FMT_YUVA420P16BE: case AV_PIX_FMT_YUVA422P16BE: case AV_PIX_FMT_YUVA444P16BE:
        c->lumToYV12 = bswap16Y_c;
        c->alpToYV12 = bswap16Y_c;
        break;
#endif
    case AV_PIX_FMT_YA16LE:
        c->lumToYV12 = read_ya16le_gray_c;
        c->alpToYV12 = read_ya16le_alpha_c;
        break;
    case AV_PIX_FMT_YA16BE:
        c->lumToYV12 = read_ya16be_gray_c;
        c->alpToYV12 = read_ya16be_alpha_c;
        break;
    case AV_PIX_FMT_YUYV422:
    case AV_PIX_FMT_YVYU422:
    case AV_PIX_FMT_YA8:       c->lumToYV12 = yuy2ToY_c;    break;
    case AV_PIX_FMT_UYVY422:   c->lumToYV12 = uyvyToY_c;    break;
    case AV_PIX_FMT_BGR24:     c->lumToYV12 = bgr24ToY_c;   break;
    case AV_PIX_FMT_BGR565LE:  c->lumToYV12 = bgr16leToY_c; break;
    case AV_PIX_FMT_BGR565BE:  c->lumToYV12 = bgr16beToY_c; break;
    case AV_PIX_FMT_BGR555LE:  c->lumToYV12 = bgr15leToY_c; break;
    case AV_PIX_FMT_BGR555BE:  c->lumToYV12 = bgr15beToY_c; break;
    case AV_PIX_FMT_BGR444LE:  c->lumToYV12 = bgr12leToY_c; break;
    case AV_PIX_FMT_BGR444BE:  c->lumToYV12 = bgr12beToY_c; break;
    case AV_PIX_FMT_RGB24:     c->lumToYV12 = rgb24ToY_c;   break;
    case AV_PIX_FMT_RGB565LE:  c->lumToYV12 = rgb16leToY_c; break;
    case AV_PIX_FMT_RGB565BE:  c->lumToYV12 = rgb16beToY_c; break;
    case AV_PIX_FMT_RGB555LE:  c->lumToYV12 = rgb15leToY_c; break;
    case AV_PIX_FMT_RGB555BE:  c->lumToYV12 = rgb15beToY_c; break;
    case AV_PIX_FMT_RGB444LE:  c->lumToYV12 = rgb12leToY_c; break;
    case AV_PIX_FMT_RGB444BE:  c->lumToYV12 = rgb12beToY_c; break;
    case AV_PIX_FMT_RGB8:
    case AV_PIX_FMT_BGR8:
    case AV_PIX_FMT_PAL8:
    case AV_PIX_FMT_BGR4_BYTE:
    case AV_PIX_FMT_RGB4_BYTE: c->lumToYV12 = palToY_c;      break;
    case AV_PIX_FMT_MONOBLACK: c->lumToYV12 = monoblack2Y_c; break;
    case AV_PIX_FMT_MONOWHITE: c->lumToYV12 = monowhite2Y_c; break;
    case AV_PIX_FMT_RGB32:     c->lumToYV12 = bgr32ToY_c;    break;
    case AV_PIX_FMT_RGB32_1:   c->lumToYV12 = bgr321ToY_c;   break;
    case AV_PIX_FMT_BGR32:     c->lumToYV12 = rgb32ToY_c;    break;
    case AV_PIX_FMT_BGR32_1:   c->lumToYV12 = rgb321ToY_c;   break;
    case AV_PIX_FMT_RGB48BE:   c->lumToYV12 = rgb48BEToY_c;  break;
    case AV_PIX_FMT_RGB48LE:   c->lumToYV12 = rgb48LEToY_c;  break;
    case AV_PIX_FMT_BGR48BE:   c->lumToYV12 = bgr48BEToY_c;  break;
    case AV_PIX_FMT_BGR48LE:   c->lumToYV12 = bgr48LEToY_c;  break;
    case AV_PIX_FMT_RGBA64BE:  c->lumToYV12 = rgb64BEToY_c;  break;
    case AV_PIX_FMT_RGBA64LE:  c->lumToYV12 = rgb64LEToY_c;  break;
    case AV_PIX_FMT_BGRA64BE:  c->lumToYV12 = bgr64BEToY_c;  break;
    case AV_PIX_FMT_BGRA64LE:  c->lumToYV12 = bgr64LEToY_c;
    }
    if (c->alpPixBuf) {
        if (is16BPS(srcFormat) || isNBPS(srcFormat)) {
            if (HAVE_BIGENDIAN == !isBE(srcFormat))
                c->alpToYV12 = bswap16Y_c;
        }
        switch (srcFormat) {
        case AV_PIX_FMT_BGRA64LE:
        case AV_PIX_FMT_BGRA64BE:
        case AV_PIX_FMT_RGBA64LE:
        case AV_PIX_FMT_RGBA64BE: c->alpToYV12 = rgba64ToA_c; break;
        case AV_PIX_FMT_BGRA:
        case AV_PIX_FMT_RGBA:     c->alpToYV12 = rgbaToA_c;   break;
        case AV_PIX_FMT_ABGR:
        case AV_PIX_FMT_ARGB:     c->alpToYV12 = abgrToA_c;   break;
        case AV_PIX_FMT_YA8:      c->alpToYV12 = uyvyToY_c;   break;
        case AV_PIX_FMT_PAL8:     c->alpToYV12 = palToA_c;    break;
        }
    }
}
ff_sws_init_input_funcs() assigns the following function pointers according to the input pixel format:
lumToYV12: converts the input to the Y component.
chrToYV12: converts the input to the U and V components.
alpToYV12: converts the input to the alpha component.
readLumPlanar: reads planar data and converts it to Y.
readChrPlanar: reads planar data and converts it to U and V.
A few examples follow.
當(dāng)輸入像素格式為AV_PIX_FMT_RGB24的時候,lumToYV12()指針指向的函數(shù)是rgb24ToY_c(),例如以下所看到的。case AV_PIX_FMT_RGB24:c->lumToYV12 = rgb24ToY_c;break;
rgb24ToY_c()
rgb24ToY_c() is defined as follows.
static void rgb24ToY_c(uint8_t *_dst, const uint8_t *src, const uint8_t *unused1,
                       const uint8_t *unused2, int width, uint32_t *rgb2yuv)
{
    int16_t *dst = (int16_t *)_dst;
    int32_t ry = rgb2yuv[RY_IDX], gy = rgb2yuv[GY_IDX], by = rgb2yuv[BY_IDX];
    int i;
    for (i = 0; i < width; i++) {
        int r = src[i * 3 + 0];
        int g = src[i * 3 + 1];
        int b = src[i * 3 + 2];

        dst[i] = ((ry*r + gy*g + by*b + (32<<(RGB2YUV_SHIFT-1)) +
                   (1<<(RGB2YUV_SHIFT-7))) >> (RGB2YUV_SHIFT-6));
    }
}

From the source it can be seen that the function performs three main steps:
1. Fetch the coefficients: read the weights for the R, G and B components from the rgb2yuv array.
2. Fetch the pixel values: read the R, G and B values of each pixel.
3. Compute the luma: combine the coefficients and pixel values to obtain the luma value Y.
當(dāng)輸入像素格式為AV_PIX_FMT_RGB24的時候。chrToYV12 ()指針指向的函數(shù)是rgb24ToUV_half_c(),例如以下所看到的。
case AV_PIX_FMT_RGB24:c->chrToYV12 = rgb24ToUV_half_c;break;
rgb24ToUV_half_c()
rgb24ToUV_half_c() is defined as follows.

static void rgb24ToUV_half_c(uint8_t *_dstU, uint8_t *_dstV, const uint8_t *unused0,
                             const uint8_t *src1, const uint8_t *src2,
                             int width, uint32_t *rgb2yuv)
{
    int16_t *dstU = (int16_t *)_dstU;
    int16_t *dstV = (int16_t *)_dstV;
    int i;
    int32_t ru = rgb2yuv[RU_IDX], gu = rgb2yuv[GU_IDX], bu = rgb2yuv[BU_IDX];
    int32_t rv = rgb2yuv[RV_IDX], gv = rgb2yuv[GV_IDX], bv = rgb2yuv[BV_IDX];
    av_assert1(src1 == src2);
    for (i = 0; i < width; i++) {
        int r = src1[6 * i + 0] + src1[6 * i + 3];
        int g = src1[6 * i + 1] + src1[6 * i + 4];
        int b = src1[6 * i + 2] + src1[6 * i + 5];

        dstU[i] = (ru*r + gu*g + bu*b + (256<<RGB2YUV_SHIFT) + (1<<(RGB2YUV_SHIFT-6))) >> (RGB2YUV_SHIFT-5);
        dstV[i] = (rv*r + gv*g + bv*b + (256<<RGB2YUV_SHIFT) + (1<<(RGB2YUV_SHIFT-6))) >> (RGB2YUV_SHIFT-5);
    }
}
rgb24ToUV_half_c() is slightly more involved than rgb24ToY_c(), because there are only half as many U and V samples as Y samples: each pair of horizontally adjacent pixels must first be averaged before the conversion is applied.
當(dāng)輸入像素格式為AV_PIX_FMT_GBRP(注意這個是planar格式。三個分量分別為G,B,R)的時候,readLumPlanar指向的函數(shù)是planar_rgb_to_y()。例如以下所看到的。
planar_rgb_to_y()
planar_rgb_to_y() is defined as follows.
static void planar_rgb_to_y(uint8_t *_dst, const uint8_t *src[4], int width, int32_t *rgb2yuv)
{
    uint16_t *dst = (uint16_t *)_dst;
    int32_t ry = rgb2yuv[RY_IDX], gy = rgb2yuv[GY_IDX], by = rgb2yuv[BY_IDX];
    int i;
    for (i = 0; i < width; i++) {
        int g = src[0][i];
        int b = src[1][i];
        int r = src[2][i];

        dst[i] = (ry*r + gy*g + by*b + (0x801<<(RGB2YUV_SHIFT-7))) >> (RGB2YUV_SHIFT-6);
    }
}

As can be seen, processing planar GBR data works essentially the same way as processing packed RGB data, so it is not repeated here.
ff_sws_init_range_convert()
ff_sws_init_range_convert() initializes the pixel-value range conversion functions. It is defined in libswscale\swscale.c and sets up two kinds of range converters:
lumConvertRange: range conversion for the luma component.
chrConvertRange: range conversion for the chroma components.
從JPEG標(biāo)準(zhǔn)轉(zhuǎn)換為MPEG標(biāo)準(zhǔn)的函數(shù)有:lumRangeFromJpeg_c()和chrRangeFromJpeg_c()。
lumRangeFromJpeg_c()
亮度轉(zhuǎn)換(0-255轉(zhuǎn)換為16-235)函數(shù)lumRangeFromJpeg_c()例如以下所看到的。static void lumRangeFromJpeg_c(int16_t *dst, int width) {int i;for (i = 0; i < width; i++)dst[i] = (dst[i] * 14071 + 33561947) >> 14; }
能夠簡單代入一個數(shù)字驗證一下上述函數(shù)的正確性。該函數(shù)將亮度值“0”映射成“16”,“255”映射成“235”,因此我們能夠代入一個“255”看看轉(zhuǎn)換后的數(shù)值是否為“235”。在這里須要注意,dst中存儲的像素數(shù)值是15bit的亮度值。
因此我們須要將8bit的數(shù)值“255”左移7位后帶入。經(jīng)過計算,255左移7位后取值為32640,計算后得到的數(shù)值為30080。右移7位后得到的8bit亮度值即為235。
興許幾個函數(shù)都能夠用上面描寫敘述的方法進行驗證,就不再反復(fù)了。
chrRangeFromJpeg_c()
色度轉(zhuǎn)換(0-255轉(zhuǎn)換為16-240)函數(shù)chrRangeFromJpeg_c()例如以下所看到的。static void chrRangeFromJpeg_c(int16_t *dstU, int16_t *dstV, int width) {int i;for (i = 0; i < width; i++) {dstU[i] = (dstU[i] * 1799 + 4081085) >> 11; // 1469dstV[i] = (dstV[i] * 1799 + 4081085) >> 11; // 1469} }
從MPEG標(biāo)準(zhǔn)轉(zhuǎn)換為JPEG標(biāo)準(zhǔn)的函數(shù)有:lumRangeToJpeg_c()和chrRangeToJpeg_c()。
lumRangeToJpeg_c()
亮度轉(zhuǎn)換(16-235轉(zhuǎn)換為0-255)函數(shù)lumRangeToJpeg_c()定義例如以下所看到的。static void lumRangeToJpeg_c(int16_t *dst, int width) {int i;for (i = 0; i < width; i++)dst[i] = (FFMIN(dst[i], 30189) * 19077 - 39057361) >> 14; }
chrRangeToJpeg_c()
色度轉(zhuǎn)換(16-240轉(zhuǎn)換為0-255)函數(shù)chrRangeToJpeg_c()定義例如以下所看到的。static void chrRangeToJpeg_c(int16_t *dstU, int16_t *dstV, int width) {int i;for (i = 0; i < width; i++) {dstU[i] = (FFMIN(dstU[i], 30775) * 4663 - 9289992) >> 12; // -264dstV[i] = (FFMIN(dstV[i], 30775) * 4663 - 9289992) >> 12; // -264} }
This concludes the source-code analysis of sws_getContext().
雷曉驊
leixiaohua1020@126.com
http://blog.csdn.net/leixiaohua1020