MTK Multi-Frame Algorithm Integration
By reading this article, you will learn about the following:
1. Selecting a feature and configuring the feature table
2. Mounting the algorithm
3. Custom metadata
4. Calling the algorithm from the app
5. Conclusion
1. Selecting a feature and configuring the feature table
1.1 Selecting the feature
Multi-frame noise reduction (MFNR) is a very common multi-frame algorithm, and MTK's predefined features already include MTK_FEATURE_MFNR and TP_FEATURE_MFNR, so we can slot right in without adding a new feature. Since we are integrating a third-party algorithm, we choose TP_FEATURE_MFNR.
1.2 Configuring the feature table
Having settled on TP_FEATURE_MFNR, we still need to add it to the feature table:
diff --git a/vendor/mediatek/proprietary/hardware/mtkcam3/3rdparty/mtk/mtk_scenario_mgr.cpp b/vendor/mediatek/proprietary/hardware/mtkcam3/3rdparty/mtk/mtk_scenario_mgr.cpp
index f14ff8a6e2..38365e0602 100755
--- a/vendor/mediatek/proprietary/hardware/mtkcam3/3rdparty/mtk/mtk_scenario_mgr.cpp
+++ b/vendor/mediatek/proprietary/hardware/mtkcam3/3rdparty/mtk/mtk_scenario_mgr.cpp
@@ -106,6 +106,7 @@ using namespace NSCam::v3::pipeline::policy::scenariomgr;
 #define MTK_FEATURE_COMBINATION_TP_VSDOF_MFNR (MTK_FEATURE_MFNR | MTK_FEATURE_NR| MTK_FEATURE_ABF| MTK_FEATURE_CZ| MTK_FEATURE_DRE| MTK_FEATURE_HFG| MTK_FEATURE_DCE | MTK_FEATURE_FB| TP_FEATURE_VSDOF| TP_FEATURE_WATERMARK)
 #define MTK_FEATURE_COMBINATION_TP_FUSION (NO_FEATURE_NORMAL | MTK_FEATURE_NR| MTK_FEATURE_ABF| MTK_FEATURE_CZ| MTK_FEATURE_DRE| MTK_FEATURE_HFG| MTK_FEATURE_DCE | MTK_FEATURE_FB| TP_FEATURE_FUSION| TP_FEATURE_WATERMARK)
 #define MTK_FEATURE_COMBINATION_TP_PUREBOKEH (NO_FEATURE_NORMAL | MTK_FEATURE_NR| MTK_FEATURE_ABF| MTK_FEATURE_CZ| MTK_FEATURE_DRE| MTK_FEATURE_HFG| MTK_FEATURE_DCE | MTK_FEATURE_FB| TP_FEATURE_PUREBOKEH| TP_FEATURE_WATERMARK)
+#define MTK_FEATURE_COMBINATION_TP_MFNR (TP_FEATURE_MFNR | MTK_FEATURE_NR| MTK_FEATURE_ABF| MTK_FEATURE_CZ| MTK_FEATURE_DRE| MTK_FEATURE_HFG| MTK_FEATURE_DCE | MTK_FEATURE_FB| MTK_FEATURE_MFNR)
 // streaming feature combination (TODO: it should be refined by streaming scenario feature)
 #define MTK_FEATURE_COMBINATION_VIDEO_NORMAL (MTK_FEATURE_FB|TP_FEATURE_FB|TP_FEATURE_WATERMARK)
@@ -136,6 +137,7 @@ const std::vector<std::unordered_map<int32_t, ScenarioFeatures>> gMtkScenarioFe
     ADD_CAMERA_FEATURE_SET(TP_FEATURE_HDR, MTK_FEATURE_COMBINATION_HDR)
     ADD_CAMERA_FEATURE_SET(MTK_FEATURE_AINR, MTK_FEATURE_COMBINATION_AINR)
     ADD_CAMERA_FEATURE_SET(MTK_FEATURE_MFNR, MTK_FEATURE_COMBINATION_MFNR)
+    ADD_CAMERA_FEATURE_SET(TP_FEATURE_MFNR, MTK_FEATURE_COMBINATION_TP_MFNR)
     ADD_CAMERA_FEATURE_SET(MTK_FEATURE_REMOSAIC, MTK_FEATURE_COMBINATION_REMOSAIC)
     ADD_CAMERA_FEATURE_SET(NO_FEATURE_NORMAL, MTK_FEATURE_COMBINATION_SINGLE)
 CAMERA_SCENARIO_END

Note: MTK reworked the customization of the scenario configuration table on Android Q (10.0) and later. On Android Q and later, the feature must be configured in vendor/mediatek/proprietary/custom/[platform]/hal/camera/camera_custom_feature_table.cpp, where [platform] is a platform name such as mt6580 or mt6763.
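For orientation, camera_custom_feature_table.cpp uses the same feature-set macros that appear in the mtk_scenario_mgr.cpp diff above, so the new entry looks much the same there. The sketch below is only an assumption about the shape of that file; the scenario block it sits in and the neighboring entries differ per platform and must follow the existing entries in your platform's table:

// Hypothetical sketch of the camera_custom_feature_table.cpp entry.
// The surrounding scenario block is platform-specific and assumed here;
// only the ADD_CAMERA_FEATURE_SET line for TP_FEATURE_MFNR is the actual addition.
    // ... inside the capture scenario block of the customer feature table ...
    ADD_CAMERA_FEATURE_SET(TP_FEATURE_MFNR, MTK_FEATURE_COMBINATION_TP_MFNR)
    ADD_CAMERA_FEATURE_SET(NO_FEATURE_NORMAL, MTK_FEATURE_COMBINATION_SINGLE)
CAMERA_SCENARIO_END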
2. Mounting the algorithm
2.1 Choosing a plugin for the algorithm
In vendor/mediatek/proprietary/hardware/mtkcam3/include/mtkcam3/3rdparty/plugin/PipelinePluginType.h, MTK HAL3 divides the mount points for third-party algorithms roughly into the following categories:
- BokehPlugin: mount point for bokeh algorithms, i.e. the blurring part of a dual-camera depth-of-field algorithm.
- DepthPlugin: mount point for depth algorithms, i.e. the depth-computation part of a dual-camera depth-of-field algorithm.
- FusionPlugin: mount point for a combined dual-camera depth-of-field algorithm, where depth and bokeh are handled by a single algorithm.
- JoinPlugin: mount point for streaming-related algorithms; preview algorithms are mounted here.
- MultiFramePlugin: mount point for multi-frame algorithms, both YUV and RAW, e.g. MFNR/HDR.
- RawPlugin: mount point for RAW algorithms, e.g. remosaic.
- YuvPlugin: mount point for single-frame YUV algorithms, e.g. beautification or wide-angle lens distortion correction.
Match your algorithm to the corresponding plugin. Ours is a multi-frame algorithm, so MultiFramePlugin is the only fit; also, multi-frame algorithms are normally used only for capture, not for preview. A stripped-down skeleton of the provider interface is sketched right below.
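Before the full listing in section 2.3.3, here is the bare shape of a MultiFramePlugin provider, condensed from that listing (method bodies elided). It is an outline only, not a complete implementation:

// Outline of a MultiFramePlugin provider, condensed from MFNRImpl.cpp in 2.3.3 (bodies elided).
#include <mtkcam3/3rdparty/plugin/PipelinePlugin.h>
#include <mtkcam3/3rdparty/plugin/PipelinePluginType.h>

using namespace NSCam::NSPipelinePlugin;

class MyMfnrProvider : public MultiFramePlugin::IProvider {
    typedef MultiFramePlugin::Property             Property;
    typedef MultiFramePlugin::Selection            Selection;
    typedef MultiFramePlugin::Request::Ptr         RequestPtr;
    typedef MultiFramePlugin::RequestCallback::Ptr RequestCallbackPtr;
public:
    virtual void set(MINT32 iOpenId, MINT32 iOpenId2);      // remember which sensor(s) this provider serves
    virtual const Property& property();                     // declare feature bit, priority, frame requirements
    virtual MERROR negotiate(Selection& sel);               // per-shot: formats/sizes, frame count, enable/disable
    virtual void init();                                    // one-time setup
    virtual MERROR process(RequestPtr pRequest,
                           RequestCallbackPtr pCallback);   // called once per frame of the burst
    virtual void abort(std::vector<RequestPtr>& pRequests); // cancel in-flight requests
    virtual void uninit();                                  // release algorithm resources
    virtual ~MyMfnrProvider();
};

// Registration hooks the provider into the MultiFrame plugin pipeline.
REGISTER_PLUGIN_PROVIDER(MultiFrame, MyMfnrProvider);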
2.2 Adding a global build switch
So that each project can decide whether to integrate this algorithm, add a macro to device/mediateksample/[platform]/ProjectConfig.mk that gates the build of the newly added algorithm:

QXT_MFNR_SUPPORT = yes

When a project does not need the algorithm, simply set QXT_MFNR_SUPPORT to no in device/mediateksample/[platform]/ProjectConfig.mk.
2.3 Writing the integration files
Use vendor/mediatek/proprietary/hardware/mtkcam3/3rdparty/mtk/mfnr/MFNRImpl.cpp, MTK's MFNR capture implementation, as the reference. The directory layout is as follows:
vendor/mediatek/proprietary/hardware/mtkcam3/3rdparty/customer/cp_tp_mfnr/
├── Android.mk
├── include
│ └── mf_processor.h
├── lib
│ ├── arm64-v8a
│ │ └── libmultiframe.so
│ └── armeabi-v7a
│ └── libmultiframe.so
└── MFNRImpl.cpp
File descriptions:
- Android.mk configures the algorithm library, the header file, and the integration source MFNRImpl.cpp, and builds them into the library libmtkcam.plugin.tp_mfnr, which libmtkcam_3rdparty.customer then depends on.
- libmultiframe.so downscales four consecutive frames and tiles them into one image; it stands in for the third-party multi-frame algorithm library to be integrated. mf_processor.h is its header file.
- MFNRImpl.cpp is the integration source file.
2.3.1 mtkcam3/3rdparty/customer/cp_tp_mfnr/Android.mk
ifeq ($(QXT_MFNR_SUPPORT),yes)

LOCAL_PATH := $(call my-dir)

include $(CLEAR_VARS)
LOCAL_MODULE := libmultiframe
LOCAL_SRC_FILES_32 := lib/armeabi-v7a/libmultiframe.so
LOCAL_SRC_FILES_64 := lib/arm64-v8a/libmultiframe.so
LOCAL_MODULE_TAGS := optional
LOCAL_MODULE_CLASS := SHARED_LIBRARIES
LOCAL_MODULE_SUFFIX := .so
LOCAL_PROPRIETARY_MODULE := true
LOCAL_MULTILIB := both
include $(BUILD_PREBUILT)

################################################################################
#
################################################################################
include $(CLEAR_VARS)

#-----------------------------------------------------------
-include $(TOP)/$(MTK_PATH_SOURCE)/hardware/mtkcam/mtkcam.mk

#-----------------------------------------------------------
LOCAL_SRC_FILES += MFNRImpl.cpp

#-----------------------------------------------------------
LOCAL_C_INCLUDES += $(MTKCAM_C_INCLUDES)
LOCAL_C_INCLUDES += $(TOP)/$(MTK_PATH_SOURCE)/hardware/mtkcam3/include $(MTK_PATH_SOURCE)/hardware/mtkcam/include
LOCAL_C_INCLUDES += $(TOP)/$(MTK_PATH_COMMON)/hal/inc
LOCAL_C_INCLUDES += $(TOP)/$(MTK_PATH_CUSTOM_PLATFORM)/hal/inc
LOCAL_C_INCLUDES += $(TOP)/external/libyuv/files/include/
LOCAL_C_INCLUDES += $(TOP)/$(MTK_PATH_SOURCE)/hardware/mtkcam3/3rdparty/customer/cp_tp_mfnr/include
# LOCAL_C_INCLUDES += system/media/camera/include

#-----------------------------------------------------------
LOCAL_CFLAGS += $(MTKCAM_CFLAGS)
#
#-----------------------------------------------------------
LOCAL_STATIC_LIBRARIES +=
# LOCAL_WHOLE_STATIC_LIBRARIES +=

#-----------------------------------------------------------
LOCAL_SHARED_LIBRARIES += liblog
LOCAL_SHARED_LIBRARIES += libutils
LOCAL_SHARED_LIBRARIES += libcutils
LOCAL_SHARED_LIBRARIES += libmtkcam_modulehelper
LOCAL_SHARED_LIBRARIES += libmtkcam_stdutils
LOCAL_SHARED_LIBRARIES += libmtkcam_pipeline
LOCAL_SHARED_LIBRARIES += libmtkcam_metadata
LOCAL_SHARED_LIBRARIES += libmtkcam_metastore
LOCAL_SHARED_LIBRARIES += libmtkcam_streamutils
LOCAL_SHARED_LIBRARIES += libmtkcam_imgbuf
LOCAL_SHARED_LIBRARIES += libmtkcam_exif
#LOCAL_SHARED_LIBRARIES += libmtkcam_3rdparty

#-----------------------------------------------------------
LOCAL_HEADER_LIBRARIES := libutils_headers liblog_headers libhardware_headers

#-----------------------------------------------------------
LOCAL_MODULE := libmtkcam.plugin.tp_mfnr
LOCAL_PROPRIETARY_MODULE := true
LOCAL_MODULE_OWNER := mtk
LOCAL_MODULE_TAGS := optional
include $(MTK_STATIC_LIBRARY)

################################################################################
#
################################################################################
include $(call all-makefiles-under,$(LOCAL_PATH))

endif

2.3.2 mtkcam3/3rdparty/customer/cp_tp_mfnr/include/mf_processor.h
#ifndef QXT_MULTI_FRAME_H
#define QXT_MULTI_FRAME_H

class MFProcessor {
public:
    virtual ~MFProcessor() {}

    virtual void setFrameCount(int num) = 0;

    virtual void setParams() = 0;

    virtual void addFrame(unsigned char *src, int srcWidth, int srcHeight) = 0;

    virtual void addFrame(unsigned char *srcY, unsigned char *srcU, unsigned char *srcV,
                          int srcWidth, int srcHeight) = 0;

    virtual void scale(unsigned char *src, int srcWidth, int srcHeight,
                       unsigned char *dst, int dstWidth, int dstHeight) = 0;

    virtual void process(unsigned char *output, int outputWidth, int outputHeight) = 0;

    virtual void process(unsigned char *outputY, unsigned char *outputU, unsigned char *outputV,
                         int outputWidth, int outputHeight) = 0;

    static MFProcessor* createInstance(int width, int height);
};

#endif //QXT_MULTI_FRAME_H

The interface functions declared in this header:
- setFrameCount: has no real effect; it simulates setting the frame count of a third-party multi-frame algorithm, since some algorithms need different frame counts in different scenes.
- setParams: also has no real effect; it simulates setting the parameters a third-party multi-frame algorithm requires.
- addFrame: adds one frame of image data, simulating how a third-party multi-frame algorithm receives its input frames.
- process: downscales the four frames added earlier and tiles them into one image at the original size.
- createInstance: creates an instance of the interface class. (A short usage sketch of these functions follows this list.)
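Taken together, a typical call sequence for this interface — essentially what the plugin in 2.3.3 does — looks roughly like the following sketch. It assumes contiguous I420 frames of width x height and an output buffer of width*height*3/2 bytes:

// Minimal usage sketch of the MFProcessor interface (I420 buffers assumed).
#include "mf_processor.h"

void runSimulatedMfnr(unsigned char* frames[4], int width, int height,
                      unsigned char* output /* width*height*3/2 bytes */) {
    MFProcessor* mf = MFProcessor::createInstance(width, height);
    mf->setFrameCount(4);   // simulated: a real algorithm may vary the count per scene
    mf->setParams();        // simulated: a real algorithm would take tuning parameters here
    for (int i = 0; i < 4; ++i) {
        // Each call downscales the frame by 2x and copies it into one quadrant of the output.
        mf->addFrame(frames[i], width, height);
    }
    // Copies the tiled result (same size as one input frame) into 'output'.
    mf->process(output, width, height);
    delete mf;
}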
For readers who want to dig in, the implementation, mf_processor_impl.cpp, is included as well:
#include <libyuv/scale.h>
#include <cstring>
#include "mf_processor.h"

using namespace std;
using namespace libyuv;

class MFProcessorImpl : public MFProcessor {
private:
    int frameCount = 4;
    int currentIndex = 0;
    unsigned char *dstBuf = nullptr;
    unsigned char *tmpBuf = nullptr;

public:
    MFProcessorImpl();

    MFProcessorImpl(int width, int height);

    ~MFProcessorImpl() override;

    void setFrameCount(int num) override;

    void setParams() override;

    void addFrame(unsigned char *src, int srcWidth, int srcHeight) override;

    void addFrame(unsigned char *srcY, unsigned char *srcU, unsigned char *srcV,
                  int srcWidth, int srcHeight) override;

    void scale(unsigned char *src, int srcWidth, int srcHeight,
               unsigned char *dst, int dstWidth, int dstHeight) override;

    void process(unsigned char *output, int outputWidth, int outputHeight) override;

    void process(unsigned char *outputY, unsigned char *outputU, unsigned char *outputV,
                 int outputWidth, int outputHeight) override;

    static MFProcessor *createInstance(int width, int height);
};

MFProcessorImpl::MFProcessorImpl() = default;

MFProcessorImpl::MFProcessorImpl(int width, int height) {
    if (dstBuf == nullptr) {
        dstBuf = new unsigned char[width * height * 3 / 2];
    }
    if (tmpBuf == nullptr) {
        tmpBuf = new unsigned char[width / 2 * height / 2 * 3 / 2];
    }
}

MFProcessorImpl::~MFProcessorImpl() {
    if (dstBuf != nullptr) {
        delete[] dstBuf;
    }
    if (tmpBuf != nullptr) {
        delete[] tmpBuf;
    }
}

void MFProcessorImpl::setFrameCount(int num) {
    frameCount = num;
}

void MFProcessorImpl::setParams() {}

void MFProcessorImpl::addFrame(unsigned char *src, int srcWidth, int srcHeight) {
    int srcYCount = srcWidth * srcHeight;
    int srcUVCount = srcWidth * srcHeight / 4;
    int tmpWidth = srcWidth >> 1;
    int tmpHeight = srcHeight >> 1;
    int tmpYCount = tmpWidth * tmpHeight;
    int tmpUVCount = tmpWidth * tmpHeight / 4;
    //scale
    I420Scale(src, srcWidth,
              src + srcYCount, srcWidth >> 1,
              src + srcYCount + srcUVCount, srcWidth >> 1,
              srcWidth, srcHeight,
              tmpBuf, tmpWidth,
              tmpBuf + tmpYCount, tmpWidth >> 1,
              tmpBuf + tmpYCount + tmpUVCount, tmpWidth >> 1,
              tmpWidth, tmpHeight,
              kFilterNone);
    //merge
    unsigned char *pDstY;
    unsigned char *pTmpY;
    for (int i = 0; i < tmpHeight; i++) {
        pTmpY = tmpBuf + i * tmpWidth;
        if (currentIndex == 0) {
            pDstY = dstBuf + i * srcWidth;
        } else if (currentIndex == 1) {
            pDstY = dstBuf + i * srcWidth + tmpWidth;
        } else if (currentIndex == 2) {
            pDstY = dstBuf + (i + tmpHeight) * srcWidth;
        } else {
            pDstY = dstBuf + (i + tmpHeight) * srcWidth + tmpWidth;
        }
        memcpy(pDstY, pTmpY, tmpWidth);
    }
    int uvHeight = tmpHeight / 2;
    int uvWidth = tmpWidth / 2;
    unsigned char *pDstU;
    unsigned char *pDstV;
    unsigned char *pTmpU;
    unsigned char *pTmpV;
    for (int i = 0; i < uvHeight; i++) {
        pTmpU = tmpBuf + tmpYCount + uvWidth * i;
        pTmpV = tmpBuf + tmpYCount + tmpUVCount + uvWidth * i;
        if (currentIndex == 0) {
            pDstU = dstBuf + srcYCount + i * tmpWidth;
            pDstV = dstBuf + srcYCount + srcUVCount + i * tmpWidth;
        } else if (currentIndex == 1) {
            pDstU = dstBuf + srcYCount + i * tmpWidth + uvWidth;
            pDstV = dstBuf + srcYCount + srcUVCount + i * tmpWidth + uvWidth;
        } else if (currentIndex == 2) {
            pDstU = dstBuf + srcYCount + (i + uvHeight) * tmpWidth;
            pDstV = dstBuf + srcYCount + srcUVCount + (i + uvHeight) * tmpWidth;
        } else {
            pDstU = dstBuf + srcYCount + (i + uvHeight) * tmpWidth + uvWidth;
            pDstV = dstBuf + srcYCount + srcUVCount + (i + uvHeight) * tmpWidth + uvWidth;
        }
        memcpy(pDstU, pTmpU, uvWidth);
        memcpy(pDstV, pTmpV, uvWidth);
    }
    if (currentIndex < frameCount) currentIndex++;
}

void MFProcessorImpl::addFrame(unsigned char *srcY, unsigned char *srcU, unsigned char *srcV,
                               int srcWidth, int srcHeight) {
    int srcYCount = srcWidth * srcHeight;
    int srcUVCount = srcWidth * srcHeight / 4;
    int tmpWidth = srcWidth >> 1;
    int tmpHeight = srcHeight >> 1;
    int tmpYCount = tmpWidth * tmpHeight;
    int tmpUVCount = tmpWidth * tmpHeight / 4;
    //scale
    I420Scale(srcY, srcWidth,
              srcU, srcWidth >> 1,
              srcV, srcWidth >> 1,
              srcWidth, srcHeight,
              tmpBuf, tmpWidth,
              tmpBuf + tmpYCount, tmpWidth >> 1,
              tmpBuf + tmpYCount + tmpUVCount, tmpWidth >> 1,
              tmpWidth, tmpHeight,
              kFilterNone);
    //merge
    unsigned char *pDstY;
    unsigned char *pTmpY;
    for (int i = 0; i < tmpHeight; i++) {
        pTmpY = tmpBuf + i * tmpWidth;
        if (currentIndex == 0) {
            pDstY = dstBuf + i * srcWidth;
        } else if (currentIndex == 1) {
            pDstY = dstBuf + i * srcWidth + tmpWidth;
        } else if (currentIndex == 2) {
            pDstY = dstBuf + (i + tmpHeight) * srcWidth;
        } else {
            pDstY = dstBuf + (i + tmpHeight) * srcWidth + tmpWidth;
        }
        memcpy(pDstY, pTmpY, tmpWidth);
    }
    int uvHeight = tmpHeight / 2;
    int uvWidth = tmpWidth / 2;
    unsigned char *pDstU;
    unsigned char *pDstV;
    unsigned char *pTmpU;
    unsigned char *pTmpV;
    for (int i = 0; i < uvHeight; i++) {
        pTmpU = tmpBuf + tmpYCount + uvWidth * i;
        pTmpV = tmpBuf + tmpYCount + tmpUVCount + uvWidth * i;
        if (currentIndex == 0) {
            pDstU = dstBuf + srcYCount + i * tmpWidth;
            pDstV = dstBuf + srcYCount + srcUVCount + i * tmpWidth;
        } else if (currentIndex == 1) {
            pDstU = dstBuf + srcYCount + i * tmpWidth + uvWidth;
            pDstV = dstBuf + srcYCount + srcUVCount + i * tmpWidth + uvWidth;
        } else if (currentIndex == 2) {
            pDstU = dstBuf + srcYCount + (i + uvHeight) * tmpWidth;
            pDstV = dstBuf + srcYCount + srcUVCount + (i + uvHeight) * tmpWidth;
        } else {
            pDstU = dstBuf + srcYCount + (i + uvHeight) * tmpWidth + uvWidth;
            pDstV = dstBuf + srcYCount + srcUVCount + (i + uvHeight) * tmpWidth + uvWidth;
        }
        memcpy(pDstU, pTmpU, uvWidth);
        memcpy(pDstV, pTmpV, uvWidth);
    }
    if (currentIndex < frameCount) currentIndex++;
}

void MFProcessorImpl::scale(unsigned char *src, int srcWidth, int srcHeight,
                            unsigned char *dst, int dstWidth, int dstHeight) {
    I420Scale(src, srcWidth,                                     //Y
              src + srcWidth * srcHeight, srcWidth >> 1,         //U
              src + srcWidth * srcHeight * 5 / 4, srcWidth >> 1, //V
              srcWidth, srcHeight,
              dst, dstWidth,                                     //Y
              dst + dstWidth * dstHeight, dstWidth >> 1,         //U
              dst + dstWidth * dstHeight * 5 / 4, dstWidth >> 1, //V
              dstWidth, dstHeight,
              kFilterNone);
}

void MFProcessorImpl::process(unsigned char *output, int outputWidth, int outputHeight) {
    memcpy(output, dstBuf, outputWidth * outputHeight * 3 / 2);
    currentIndex = 0;
}

void MFProcessorImpl::process(unsigned char *outputY, unsigned char *outputU, unsigned char *outputV,
                              int outputWidth, int outputHeight) {
    int yCount = outputWidth * outputHeight;
    int uvCount = yCount / 4;
    memcpy(outputY, dstBuf, yCount);
    memcpy(outputU, dstBuf + yCount, uvCount);
    memcpy(outputV, dstBuf + yCount + uvCount, uvCount);
    currentIndex = 0;
}

MFProcessor* MFProcessor::createInstance(int width, int height) {
    return new MFProcessorImpl(width, height);
}

2.3.3 mtkcam3/3rdparty/customer/cp_tp_mfnr/MFNRImpl.cpp
#ifdef LOG_TAG
#undef LOG_TAG
#endif // LOG_TAG
#define LOG_TAG "MFNRProvider"

static const char *__CALLERNAME__ = LOG_TAG;

// #include <mtkcam/utils/std/Log.h>
// #include <stdlib.h>
#include <utils/Errors.h>
#include <utils/List.h>
#include <utils/RefBase.h>
#include <sstream>
#include <unordered_map> // std::unordered_map
// #include <mtkcam/utils/metadata/client/mtk_metadata_tag.h>
#include <mtkcam/utils/metadata/hal/mtk_platform_metadata_tag.h>
//zHDR
#include <mtkcam/utils/hw/HwInfoHelper.h> // NSCamHw::HwInfoHelper
#include <mtkcam3/feature/utils/FeatureProfileHelper.h> //ProfileParam
#include <mtkcam/drv/IHalSensor.h>
// #include <mtkcam/utils/imgbuf/IIonImageBufferHeap.h>
// #include <mtkcam/utils/std/Format.h>
#include <mtkcam/utils/std/Time.h>
// #include <mtkcam3/pipeline/hwnode/NodeId.h>
// #include <mtkcam/utils/metastore/IMetadataProvider.h>
#include <mtkcam/utils/metastore/ITemplateRequest.h>
#include <mtkcam/utils/metastore/IMetadataProvider.h>
#include <mtkcam3/3rdparty/plugin/PipelinePlugin.h>
#include <mtkcam3/3rdparty/plugin/PipelinePluginType.h>
// #include <isp_tuning/isp_tuning.h> //EIspProfile_T, EOperMode_*
// #include <custom_metadata/custom_metadata_tag.h>
// #include <libyuv.h>
#include <mf_processor.h>

using namespace NSCam;
using namespace android;
using namespace std;
using namespace NSCam::NSPipelinePlugin;
using namespace NSIspTuning;
/******************************************************************************
 *
 ******************************************************************************/
#define MY_LOGV(fmt, arg...) CAM_LOGV("(%d)[%s] " fmt, ::gettid(), __FUNCTION__, ##arg)
#define MY_LOGD(fmt, arg...) CAM_LOGD("(%d)[%s] " fmt, ::gettid(), __FUNCTION__, ##arg)
#define MY_LOGI(fmt, arg...) CAM_LOGI("(%d)[%s] " fmt, ::gettid(), __FUNCTION__, ##arg)
#define MY_LOGW(fmt, arg...) CAM_LOGW("(%d)[%s] " fmt, ::gettid(), __FUNCTION__, ##arg)
#define MY_LOGE(fmt, arg...) CAM_LOGE("(%d)[%s] " fmt, ::gettid(), __FUNCTION__, ##arg)
//
#define MY_LOGV_IF(cond, ...) do { if ( (cond) ) { MY_LOGV(__VA_ARGS__); } }while(0)
#define MY_LOGD_IF(cond, ...) do { if ( (cond) ) { MY_LOGD(__VA_ARGS__); } }while(0)
#define MY_LOGI_IF(cond, ...) do { if ( (cond) ) { MY_LOGI(__VA_ARGS__); } }while(0)
#define MY_LOGW_IF(cond, ...) do { if ( (cond) ) { MY_LOGW(__VA_ARGS__); } }while(0)
#define MY_LOGE_IF(cond, ...) do { if ( (cond) ) { MY_LOGE(__VA_ARGS__); } }while(0)
// #define ASSERT(cond, msg) do { if (!(cond)) { printf("Failed: %s\n", msg); return; } }while(0)

#define __DEBUG // enable debug

#ifdef __DEBUG
#include <memory>
#define FUNCTION_SCOPE \
auto __scope_logger__ = [](char const* f)->std::shared_ptr<const char>{ \
    CAM_LOGD("(%d)[%s] + ", ::gettid(), f); \
    return std::shared_ptr<const char>(f, [](char const* p){CAM_LOGD("(%d)[%s] -", ::gettid(), p);}); \
}(__FUNCTION__)
#else
#define FUNCTION_SCOPE
#endif

template <typename T>
inline MBOOL tryGetMetadata(IMetadata* pMetadata, MUINT32 const tag, T& rVal)
{
    if (pMetadata == NULL) {
        MY_LOGW("pMetadata == NULL");
        return MFALSE;
    }
    IMetadata::IEntry entry = pMetadata->entryFor(tag);
    if (!entry.isEmpty()) {
        rVal = entry.itemAt(0, Type2Type<T>());
        return MTRUE;
    }
    return MFALSE;
}

#define MFNR_FRAME_COUNT 4
/******************************************************************************
 *
 ******************************************************************************/
class MFNRProviderImpl : public MultiFramePlugin::IProvider {
    typedef MultiFramePlugin::Property Property;
    typedef MultiFramePlugin::Selection Selection;
    typedef MultiFramePlugin::Request::Ptr RequestPtr;
    typedef MultiFramePlugin::RequestCallback::Ptr RequestCallbackPtr;

public:
    virtual void set(MINT32 iOpenId, MINT32 iOpenId2) {
        MY_LOGD("set openId:%d openId2:%d", iOpenId, iOpenId2);
        mOpenId = iOpenId;
    }

    virtual const Property& property() {
        FUNCTION_SCOPE;
        static Property prop;
        static bool inited;
        if (!inited) {
            prop.mName = "TP_MFNR";
            prop.mFeatures = TP_FEATURE_MFNR;
            prop.mThumbnailTiming = eTiming_P2;
            prop.mPriority = ePriority_Highest;
            prop.mZsdBufferMaxNum = 8; // maximum frames requirement
            prop.mNeedRrzoBuffer = MTRUE; // rrzo requirement for BSS
            inited = MTRUE;
        }
        return prop;
    };

    virtual MERROR negotiate(Selection& sel) {
        FUNCTION_SCOPE;
        IMetadata* appInMeta = sel.mIMetadataApp.getControl().get();
        tryGetMetadata<MINT32>(appInMeta, QXT_FEATURE_MFNR, mEnable);
        MY_LOGD("mEnable: %d", mEnable);
        if (!mEnable) {
            MY_LOGD("Force off TP_MFNR shot");
            return BAD_VALUE;
        }
        sel.mRequestCount = MFNR_FRAME_COUNT;
        MY_LOGD("mRequestCount=%d", sel.mRequestCount);
        sel.mIBufferFull.setRequired(MTRUE)
            .addAcceptedFormat(eImgFmt_I420) // I420 first
            .addAcceptedFormat(eImgFmt_YV12)
            .addAcceptedFormat(eImgFmt_NV21)
            .addAcceptedFormat(eImgFmt_NV12)
            .addAcceptedSize(eImgSize_Full);
        //sel.mIBufferSpecified.setRequired(MTRUE).setAlignment(16, 16);
        sel.mIMetadataDynamic.setRequired(MTRUE);
        sel.mIMetadataApp.setRequired(MTRUE);
        sel.mIMetadataHal.setRequired(MTRUE);
        if (sel.mRequestIndex == 0) {
            sel.mOBufferFull.setRequired(MTRUE)
                .addAcceptedFormat(eImgFmt_I420) // I420 first
                .addAcceptedFormat(eImgFmt_YV12)
                .addAcceptedFormat(eImgFmt_NV21)
                .addAcceptedFormat(eImgFmt_NV12)
                .addAcceptedSize(eImgSize_Full);
            sel.mOMetadataApp.setRequired(MTRUE);
            sel.mOMetadataHal.setRequired(MTRUE);
        } else {
            sel.mOBufferFull.setRequired(MFALSE);
            sel.mOMetadataApp.setRequired(MFALSE);
            sel.mOMetadataHal.setRequired(MFALSE);
        }
        return OK;
    };

    virtual void init() {
        FUNCTION_SCOPE;
        mDump = property_get_bool("vendor.debug.camera.mfnr.dump", 0);
        //nothing to do for MFNR
    };

    virtual MERROR process(RequestPtr pRequest, RequestCallbackPtr pCallback) {
        FUNCTION_SCOPE;
        MERROR ret = 0;
        // restore callback function for abort API
        if (pCallback != nullptr) {
            m_callbackprt = pCallback;
        }
        //maybe need to keep a copy in member<sp>
        IMetadata* pAppMeta = pRequest->mIMetadataApp->acquire();
        IMetadata* pHalMeta = pRequest->mIMetadataHal->acquire();
        IMetadata* pHalMetaDynamic = pRequest->mIMetadataDynamic->acquire();
        MINT32 processUniqueKey = 0;
        IImageBuffer* pInImgBuffer = NULL;
        uint32_t width = 0;
        uint32_t height = 0;
        if (!IMetadata::getEntry<MINT32>(pHalMeta, MTK_PIPELINE_UNIQUE_KEY, processUniqueKey)) {
            MY_LOGE("cannot get unique about MFNR capture");
            return BAD_VALUE;
        }
        if (pRequest->mIBufferFull != nullptr) {
            pInImgBuffer = pRequest->mIBufferFull->acquire();
            width = pInImgBuffer->getImgSize().w;
            height = pInImgBuffer->getImgSize().h;
            MY_LOGD("[IN] Full image VA: 0x%p, Size(%dx%d), Format: %s",
                    pInImgBuffer->getBufVA(0), width, height, format2String(pInImgBuffer->getImgFormat()));
            if (mDump) {
                char path[256];
                snprintf(path, sizeof(path), "/data/vendor/camera_dump/mfnr_capture_in_%d_%dx%d.%s",
                         pRequest->mRequestIndex, width, height, format2String(pInImgBuffer->getImgFormat()));
                pInImgBuffer->saveToFile(path);
            }
        }
        if (pRequest->mIBufferSpecified != nullptr) {
            IImageBuffer* pImgBuffer = pRequest->mIBufferSpecified->acquire();
            MY_LOGD("[IN] Specified image VA: 0x%p, Size(%dx%d)", pImgBuffer->getBufVA(0), pImgBuffer->getImgSize().w, pImgBuffer->getImgSize().h);
        }
        if (pRequest->mOBufferFull != nullptr) {
            mOutImgBuffer = pRequest->mOBufferFull->acquire();
            MY_LOGD("[OUT] Full image VA: 0x%p, Size(%dx%d)", mOutImgBuffer->getBufVA(0), mOutImgBuffer->getImgSize().w, mOutImgBuffer->getImgSize().h);
        }
        if (pRequest->mIMetadataDynamic != nullptr) {
            IMetadata *meta = pRequest->mIMetadataDynamic->acquire();
            if (meta != NULL)
                MY_LOGD("[IN] Dynamic metadata count: ", meta->count());
            else
                MY_LOGD("[IN] Dynamic metadata Empty");
        }
        MY_LOGD("frame:%d/%d, width:%d, height:%d", pRequest->mRequestIndex, pRequest->mRequestCount, width, height);
        if (pInImgBuffer != NULL && mOutImgBuffer != NULL) {
            uint32_t yLength = pInImgBuffer->getBufSizeInBytes(0);
            uint32_t uLength = pInImgBuffer->getBufSizeInBytes(1);
            uint32_t vLength = pInImgBuffer->getBufSizeInBytes(2);
            uint32_t yuvLength = yLength + uLength + vLength;
            if (pRequest->mRequestIndex == 0) {
                //First frame
                //When width or height changed, recreate multiFrame
                if (mLatestWidth != width || mLatestHeight != height) {
                    if (mMFProcessor != NULL) {
                        delete mMFProcessor;
                        mMFProcessor = NULL;
                    }
                    mLatestWidth = width;
                    mLatestHeight = height;
                }
                if (mMFProcessor == NULL) {
                    MY_LOGD("create mMFProcessor %dx%d", mLatestWidth, mLatestHeight);
                    mMFProcessor = MFProcessor::createInstance(mLatestWidth, mLatestHeight);
                    mMFProcessor->setFrameCount(pRequest->mRequestCount);
                }
            }
            mMFProcessor->addFrame((uint8_t *)pInImgBuffer->getBufVA(0),
                                   (uint8_t *)pInImgBuffer->getBufVA(1),
                                   (uint8_t *)pInImgBuffer->getBufVA(2),
                                   mLatestWidth, mLatestHeight);
            if (pRequest->mRequestIndex == pRequest->mRequestCount - 1) {
                //Last frame
                if (mMFProcessor != NULL) {
                    mMFProcessor->process((uint8_t *)mOutImgBuffer->getBufVA(0),
                                          (uint8_t *)mOutImgBuffer->getBufVA(1),
                                          (uint8_t *)mOutImgBuffer->getBufVA(2),
                                          mLatestWidth, mLatestHeight);
                    if (mDump) {
                        char path[256];
                        snprintf(path, sizeof(path), "/data/vendor/camera_dump/mfnr_capture_out_%d_%dx%d.%s",
                                 pRequest->mRequestIndex, mOutImgBuffer->getImgSize().w, mOutImgBuffer->getImgSize().h,
                                 format2String(mOutImgBuffer->getImgFormat()));
                        mOutImgBuffer->saveToFile(path);
                    }
                } else {
                    memcpy((uint8_t *)mOutImgBuffer->getBufVA(0),
                           (uint8_t *)pInImgBuffer->getBufVA(0),
                           pInImgBuffer->getBufSizeInBytes(0));
                    memcpy((uint8_t *)mOutImgBuffer->getBufVA(1),
                           (uint8_t *)pInImgBuffer->getBufVA(1),
                           pInImgBuffer->getBufSizeInBytes(1));
                    memcpy((uint8_t *)mOutImgBuffer->getBufVA(2),
                           (uint8_t *)pInImgBuffer->getBufVA(2),
                           pInImgBuffer->getBufSizeInBytes(2));
                }
                mOutImgBuffer = NULL;
            }
        }
        if (pRequest->mIBufferFull != nullptr) {
            pRequest->mIBufferFull->release();
        }
        if (pRequest->mIBufferSpecified != nullptr) {
            pRequest->mIBufferSpecified->release();
        }
        if (pRequest->mOBufferFull != nullptr) {
            pRequest->mOBufferFull->release();
        }
        if (pRequest->mIMetadataDynamic != nullptr) {
            pRequest->mIMetadataDynamic->release();
        }
        mvRequests.push_back(pRequest);
        MY_LOGD("collected request(%d/%d)", pRequest->mRequestIndex, pRequest->mRequestCount);
        if (pRequest->mRequestIndex == pRequest->mRequestCount - 1) {
            for (auto req : mvRequests) {
                MY_LOGD("callback request(%d/%d) %p", req->mRequestIndex, req->mRequestCount, pCallback.get());
                if (pCallback != nullptr) {
                    pCallback->onCompleted(req, 0);
                }
            }
            mvRequests.clear();
        }
        return ret;
    };

    virtual void abort(vector<RequestPtr>& pRequests) {
        FUNCTION_SCOPE;
        bool bAbort = false;
        IMetadata *pHalMeta;
        MINT32 processUniqueKey = 0;
        for (auto req : pRequests) {
            bAbort = false;
            pHalMeta = req->mIMetadataHal->acquire();
            if (!IMetadata::getEntry<MINT32>(pHalMeta, MTK_PIPELINE_UNIQUE_KEY, processUniqueKey)) {
                MY_LOGW("cannot get unique about MFNR capture");
            }
            if (m_callbackprt != nullptr) {
                MY_LOGD("m_callbackprt is %p", m_callbackprt.get());
                /* MFNR plugin callback request to MultiFrameNode */
                for (Vector<RequestPtr>::iterator it = mvRequests.begin(); it != mvRequests.end(); it++) {
                    if ((*it) == req) {
                        mvRequests.erase(it);
                        m_callbackprt->onAborted(req);
                        bAbort = true;
                        break;
                    }
                }
            } else {
                MY_LOGW("callbackptr is null");
            }
            if (!bAbort) {
                MY_LOGW("Desire abort request[%d] is not found", req->mRequestIndex);
            }
        }
    };

    virtual void uninit() {
        FUNCTION_SCOPE;
        if (mMFProcessor != NULL) {
            delete mMFProcessor;
            mMFProcessor = NULL;
        }
        mLatestWidth = 0;
        mLatestHeight = 0;
    };

    virtual ~MFNRProviderImpl() {
        FUNCTION_SCOPE;
    };

    const char * format2String(MINT format) {
        switch (format) {
            case NSCam::eImgFmt_RGBA8888: return "rgba";
            case NSCam::eImgFmt_RGB888: return "rgb";
            case NSCam::eImgFmt_RGB565: return "rgb565";
            case NSCam::eImgFmt_STA_BYTE: return "byte";
            case NSCam::eImgFmt_YVYU: return "yvyu";
            case NSCam::eImgFmt_UYVY: return "uyvy";
            case NSCam::eImgFmt_VYUY: return "vyuy";
            case NSCam::eImgFmt_YUY2: return "yuy2";
            case NSCam::eImgFmt_YV12: return "yv12";
            case NSCam::eImgFmt_YV16: return "yv16";
            case NSCam::eImgFmt_NV16: return "nv16";
            case NSCam::eImgFmt_NV61: return "nv61";
            case NSCam::eImgFmt_NV12: return "nv12";
            case NSCam::eImgFmt_NV21: return "nv21";
            case NSCam::eImgFmt_I420: return "i420";
            case NSCam::eImgFmt_I422: return "i422";
            case NSCam::eImgFmt_Y800: return "y800";
            case NSCam::eImgFmt_BAYER8: return "bayer8";
            case NSCam::eImgFmt_BAYER10: return "bayer10";
            case NSCam::eImgFmt_BAYER12: return "bayer12";
            case NSCam::eImgFmt_BAYER14: return "bayer14";
            case NSCam::eImgFmt_FG_BAYER8: return "fg_bayer8";
            case NSCam::eImgFmt_FG_BAYER10: return "fg_bayer10";
            case NSCam::eImgFmt_FG_BAYER12: return "fg_bayer12";
            case NSCam::eImgFmt_FG_BAYER14: return "fg_bayer14";
            default: return "unknown";
        };
    };

private:
    MINT32 mUniqueKey;
    MINT32 mOpenId;
    MINT32 mRealIso;
    MINT32 mShutterTime;
    MBOOL mZSDMode;
    MBOOL mFlashOn;
    Vector<RequestPtr> mvRequests;
    RequestCallbackPtr m_callbackprt;
    MFProcessor* mMFProcessor = NULL;
    IImageBuffer* mOutImgBuffer = NULL;
    uint32_t mLatestWidth = 0;
    uint32_t mLatestHeight = 0;
    MINT32 mEnable = 0;
    MINT32 mDump = 0;
    // add end
};

REGISTER_PLUGIN_PROVIDER(MultiFrame, MFNRProviderImpl);

Key functions:
- In the property function, set the feature type to TP_FEATURE_MFNR and fill in attributes such as the name, priority, and maximum frame count. Pay particular attention to mNeedRrzoBuffer: for multi-frame algorithms it generally must be set to MTRUE.
- In the negotiate function, configure the formats and sizes of the input and output images the algorithm needs. Note that a multi-frame algorithm takes several input frames but produces only one output frame, which is why mOBufferFull is required only when mRequestIndex == 0; only the first frame has both input and output, while the remaining frames have input only.
 The negotiate function also reads the metadata passed down from the upper layer and uses it to decide whether the algorithm should run.
- The process function is where the algorithm is actually hooked in: create the algorithm interface object on the first frame, call addFrame for every frame, and on the last frame call process to run the algorithm and fetch the output.
2.3.4 mtkcam3/3rdparty/customer/Android.mk
The target shared library that ultimately goes into vendor.img is libmtkcam_3rdparty.customer.so, so we also need to modify Android.mk so that the libmtkcam_3rdparty.customer module depends on libmtkcam.plugin.tp_mfnr.
diff --git a/vendor/mediatek/proprietary/hardware/mtkcam3/3rdparty/customer/Android.mk b/vendor/mediatek/proprietary/hardware/mtkcam3/3rdparty/customer/Android.mk
index ff5763d3c2..5e5dd6524f 100755
--- a/vendor/mediatek/proprietary/hardware/mtkcam3/3rdparty/customer/Android.mk
+++ b/vendor/mediatek/proprietary/hardware/mtkcam3/3rdparty/customer/Android.mk
@@ -77,6 +77,12 @@ LOCAL_SHARED_LIBRARIES += libyuv.vendor
 LOCAL_WHOLE_STATIC_LIBRARIES += libmtkcam.plugin.tp_watermark
 endif
+ifeq ($(QXT_MFNR_SUPPORT), yes)
+LOCAL_SHARED_LIBRARIES += libmultiframe
+LOCAL_SHARED_LIBRARIES += libyuv.vendor
+LOCAL_WHOLE_STATIC_LIBRARIES += libmtkcam.plugin.tp_mfnr
+endif
+
 # for app super night ev decision (experimental for customer only)
 LOCAL_WHOLE_STATIC_LIBRARIES += libmtkcam.control.customersupernightevdecision
################################################################################

2.3.5 Removing the MTK sample MFNR algorithm
Normally only one MFNR algorithm may be active at a time, so the MTK sample MFNR algorithm has to be removed. This could be gated with a macro; here we keep it simple and crude and just comment it out.
diff --git a/vendor/mediatek/proprietary/hardware/mtkcam3/3rdparty/mtk/Android.mk b/vendor/mediatek/proprietary/hardware/mtkcam3/3rdparty/mtk/Android.mk
index 4e2bc68dff..da98ebd0ad 100644
--- a/vendor/mediatek/proprietary/hardware/mtkcam3/3rdparty/mtk/Android.mk
+++ b/vendor/mediatek/proprietary/hardware/mtkcam3/3rdparty/mtk/Android.mk
@@ -118,7 +118,7 @@ LOCAL_SHARED_LIBRARIES += libfeature.stereo.provider
 #-----------------------------------------------------------
 ifneq ($(strip $(MTKCAM_HAVE_MFB_SUPPORT)),0)
-LOCAL_WHOLE_STATIC_LIBRARIES += libmtkcam.plugin.mfnr
+#LOCAL_WHOLE_STATIC_LIBRARIES += libmtkcam.plugin.mfnr
 endif
 #4 Cell
 LOCAL_WHOLE_STATIC_LIBRARIES += libmtkcam.plugin.remosaic

3. Custom metadata
Metadata is added so that the app layer can pass parameters down to the HAL layer and control whether the algorithm runs. The app sets parameters via CaptureRequest.Builder.set(@NonNull Key<T> key, T value). Since the stock MTK camera app has no multi-frame noise-reduction mode, we define our own metadata tag to verify the integration.
vendor/mediatek/proprietary/hardware/mtkcam/include/mtkcam/utils/metadata/client/mtk_metadata_tag.h:
diff --git a/vendor/mediatek/proprietary/hardware/mtkcam/include/mtkcam/utils/metadata/client/mtk_metadata_tag.h b/vendor/mediatek/proprietary/hardware/mtkcam/include/mtkcam/utils/metadata/client/mtk_metadata_tag.h
index b020352092..714d05f350 100755
--- a/vendor/mediatek/proprietary/hardware/mtkcam/include/mtkcam/utils/metadata/client/mtk_metadata_tag.h
+++ b/vendor/mediatek/proprietary/hardware/mtkcam/include/mtkcam/utils/metadata/client/mtk_metadata_tag.h
@@ -602,6 +602,7 @@ typedef enum mtk_camera_metadata_tag {
     MTK_FLASH_FEATURE_END,
     QXT_FEATURE_WATERMARK = QXT_FEATURE_START,
+    QXT_FEATURE_MFNR,
     QXT_FEATURE_END,
 } mtk_camera_metadata_tag_t;

vendor/mediatek/proprietary/hardware/mtkcam/include/mtkcam/utils/metadata/client/mtk_metadata_tag_info.inl:
diff --git a/vendor/mediatek/proprietary/hardware/mtkcam/include/mtkcam/utils/metadata/client/mtk_metadata_tag_info.inl b/vendor/mediatek/proprietary/hardware/mtkcam/include/mtkcam/utils/metadata/client/mtk_metadata_tag_info.inl
index 1b4fc75a0e..cba4511511 100755
--- a/vendor/mediatek/proprietary/hardware/mtkcam/include/mtkcam/utils/metadata/client/mtk_metadata_tag_info.inl
+++ b/vendor/mediatek/proprietary/hardware/mtkcam/include/mtkcam/utils/metadata/client/mtk_metadata_tag_info.inl
@@ -95,6 +95,8 @@ _IMP_SECTION_INFO_(QXT_FEATURE, "com.qxt.camera")
 _IMP_TAG_INFO_( QXT_FEATURE_WATERMARK,
                 MINT32, "watermark")
+_IMP_TAG_INFO_( QXT_FEATURE_MFNR,
+                MINT32, "mfnr")
 /*******************************************************************************

vendor/mediatek/proprietary/hardware/mtkcam/utils/metadata/vendortag/VendorTagTable.h:
diff --git a/vendor/mediatek/proprietary/hardware/mtkcam/utils/metadata/vendortag/VendorTagTable.h b/vendor/mediatek/proprietary/hardware/mtkcam/utils/metadata/vendortag/VendorTagTable.h
index 33e581adfd..4f4772424d 100755
--- a/vendor/mediatek/proprietary/hardware/mtkcam/utils/metadata/vendortag/VendorTagTable.h
+++ b/vendor/mediatek/proprietary/hardware/mtkcam/utils/metadata/vendortag/VendorTagTable.h
@@ -383,6 +383,8 @@ static auto& _QxtFeature_()
     sInst = {
         _TAG_(QXT_FEATURE_WATERMARK,
               "watermark", TYPE_INT32),
+        _TAG_(QXT_FEATURE_MFNR,
+              "mfnr", TYPE_INT32),
     };
     //
     return sInst;

vendor/mediatek/proprietary/hardware/mtkcam/utils/metastore/metadataprovider/constructStaticMetadata.cpp:
diff --git a/vendor/mediatek/proprietary/hardware/mtkcam/utils/metastore/metadataprovider/constructStaticMetadata.cpp b/vendor/mediatek/proprietary/hardware/mtkcam/utils/metastore/metadataprovider/constructStaticMetadata.cpp
index 591b25b162..9c3db8b1d1 100755
--- a/vendor/mediatek/proprietary/hardware/mtkcam/utils/metastore/metadataprovider/constructStaticMetadata.cpp
+++ b/vendor/mediatek/proprietary/hardware/mtkcam/utils/metastore/metadataprovider/constructStaticMetadata.cpp
@@ -583,10 +583,12 @@ updateData(IMetadata &rMetadata)
 {
     IMetadata::IEntry qxtAvailRequestEntry = rMetadata.entryFor(MTK_REQUEST_AVAILABLE_REQUEST_KEYS);
     qxtAvailRequestEntry.push_back(QXT_FEATURE_WATERMARK , Type2Type< MINT32 >());
+    qxtAvailRequestEntry.push_back(QXT_FEATURE_MFNR , Type2Type< MINT32 >());
     rMetadata.update(qxtAvailRequestEntry.tag(), qxtAvailRequestEntry);
     IMetadata::IEntry qxtAvailSessionEntry = rMetadata.entryFor(MTK_REQUEST_AVAILABLE_SESSION_KEYS);
     qxtAvailSessionEntry.push_back(QXT_FEATURE_WATERMARK , Type2Type< MINT32 >());
+    qxtAvailSessionEntry.push_back(QXT_FEATURE_MFNR , Type2Type< MINT32 >());
     rMetadata.update(qxtAvailSessionEntry.tag(), qxtAvailSessionEntry);
 }
 #endif
@@ -605,7 +607,7 @@ updateData(IMetadata &rMetadata)
     // to store manual update metadata for sensor driver.
     IMetadata::IEntry availCharactsEntry = rMetadata.entryFor(MTK_REQUEST_AVAILABLE_CHARACTERISTICS_KEYS);
     availCharactsEntry.push_back(MTK_MULTI_CAM_FEATURE_SENSOR_MANUAL_UPDATED , Type2Type< MINT32 >());
-    rMetadata.update(availCharactsEntry.tag(), availCharactsEntry); 
+    rMetadata.update(availCharactsEntry.tag(), availCharactsEntry);
     }
     if(physicIdsList.size() > 1){

With these steps done, the integration work is essentially complete. Rebuild the system source; to save time you can also build just vendor.img.
4. Calling the algorithm from the app
We do not need to write a new app to verify the algorithm. Keep using the app code from 《MTK HAL算法集成之單幀算法》 (the single-frame algorithm integration article) and simply change the value of KEY_WATERMARK to "com.qxt.camera.mfnr". Flash the full system image or just vendor.img, boot the device, install the demo, and take a photo to check the result:
[Image: capture result — the four consecutive input frames downscaled and tiled into a single output image]
As you can see, after integration this simulated MFNR multi-frame algorithm has indeed downscaled the four consecutive frames and tiled them into one image.
5. Conclusion
A real multi-frame algorithm is more involved. For example, an MFNR algorithm may decide whether to run based on the exposure value, staying off in good light and kicking in when light is poor; an HDR algorithm may require several consecutive frames at different exposures; there may also be intelligent scene detection, and so on. However it varies, though, the overall integration steps for a multi-frame algorithm are much the same; for different requirements you may need to adapt the code accordingly.
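As a rough illustration of the exposure-based gating just mentioned, negotiate() could read the requested ISO from the app metadata and skip TP_MFNR in bright scenes. This is a hedged sketch only; the MTK_SENSOR_SENSITIVITY tag choice and the 800-ISO threshold are assumptions for illustration, not values from this article:

// Hypothetical sketch: decide whether to run TP_MFNR from the requested ISO.
// MTK_SENSOR_SENSITIVITY and the 800-ISO threshold are illustrative assumptions.
static bool shouldRunMfnr(IMetadata* appInMeta) {
    MINT32 iso = 0;
    if (tryGetMetadata<MINT32>(appInMeta, MTK_SENSOR_SENSITIVITY, iso)) {
        return iso >= 800;   // dark scene -> enable multi-frame noise reduction
    }
    return false;            // no ISO available -> leave the feature off
}

negotiate() would call such a helper and return BAD_VALUE when it yields false, in the same way the mEnable check in section 2.3.3 rejects the shot.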
Original article: https://www.jianshu.com/p/f0b57072ea74