Face Swap Series: Whole-Face Replacement
Preface
The previous post covered replacing only the facial features (eyes, nose, and mouth); this one covers replacing the whole face.
As usual, reference posts:
[Graphics Algorithms] The Delaunay Triangulation Algorithm
Voronoi Diagram: Analysis and Implementation
Delaunay Triangulation and Voronoi Diagram using OpenCV ( C++ / Python )
Face Swap using OpenCV ( C++ / Python )
The face-swap source code from learnopencv
Pipeline
The whole-face pipeline differs slightly from the features-only version. The steps are as follows (a rough end-to-end sketch follows the list):
- Detect the facial landmarks
- Triangulate the face based on the convex hull of the landmarks
- Warp the corresponding triangle meshes of the two faces into alignment
- Paste the result using the seamlessClone Poisson blending function
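Here is a minimal sketch of how these four steps fit together, using the helper functions implemented step by step in the rest of this post (detect_facepoint, get_triangle, warp_triangle); the image paths are only placeholders:

```python
import cv2
import numpy as np

img1 = cv2.imread('./images/face1.jpg')  # placeholder: image whose face gets replaced
img2 = cv2.imread('./images/face2.jpg')  # placeholder: image supplying the new face

# 1. detect landmarks on both faces
_, kps1 = detect_facepoint(img1)
_, kps2 = detect_facepoint(img2)
kps1, kps2 = kps1.astype(int), kps2.astype(int)

# 2. triangulate face 1 from the convex hull of its landmarks
triangles1 = get_triangle(img1, kps1)

# 3. warp every triangle of face 2 onto the matching triangle of face 1
warped = warp_triangle(img1.copy(), img2.copy(), triangles1, kps1, kps2)

# 4. Poisson-blend the warped face back into image 1
hull1 = cv2.convexHull(kps1, returnPoints=True)
mask = cv2.fillConvexPoly(np.zeros_like(img1), hull1, (255, 255, 255))
x, y, w, h = cv2.boundingRect(hull1)
result = cv2.seamlessClone(warped, img1, mask, (x + w // 2, y + h // 2), cv2.NORMAL_CLONE)
```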
First, load the necessary libraries:
```python
import cv2
import numpy as np
import matplotlib.pyplot as plt
```

Detecting Facial Landmarks
As in the previous face-swap post, the code is pasted in directly:
```python
cas = cv2.CascadeClassifier('./model/haarcascade_frontalface_alt2.xml')
obj = cv2.face.createFacemarkLBF()
obj.loadModel('./model/lbfmodel.yaml')

# detect facial landmarks with OpenCV
def detect_facepoint(img):
    img_gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    faces = cas.detectMultiScale(img_gray, 2, 3, 0, (30, 30))
    landmarks = obj.fit(img_gray, faces)
    assert landmarks[0], 'no face detected'
    if len(landmarks[1]) > 1:
        print('multiple faces detected, using the first one')
    return faces[0], np.squeeze(landmarks[1][0])

# draw the face box and landmarks
def draw_kps(img, face_box, kps, kpssize=3):
    img_show = img.copy()
    cv2.rectangle(img_show, (face_box[0], face_box[1]),
                  (face_box[0] + face_box[2], face_box[1] + face_box[3]), (0, 255, 0), 3)
    for i in range(kps.shape[0]):
        cv2.circle(img_show, (kps[i, 0], kps[i, 1]), kpssize, (0, 0, 255), -1)
    img_show = cv2.cvtColor(img_show, cv2.COLOR_BGR2RGB)
    return img_show
```

Triangulation
Based on the facial landmarks, extract the triangle mesh of the face. The procedure is to first compute the convex hull of the face region (see the reference posts above), then use the getTriangleList function to extract the mesh:
```python
def get_triangle(img, facekpts):
    # convex hull of the landmarks
    convex_kps = cv2.convexHull(facekpts, returnPoints=True)
    kps = np.squeeze(convex_kps)
    # Delaunay triangulation of the hull points over the image rectangle
    rect = (0, 0, img.shape[1], img.shape[0])
    subdiv = cv2.Subdiv2D(rect)
    for i in range(kps.shape[0]):
        subdiv.insert((kps[i, 0], kps[i, 1]))
    triangleList = subdiv.getTriangleList()
    return triangleList
```

Write a small plotting function to visualize the triangle mesh:
```python
def draw_triangles(img, triangles):
    for t in triangles:
        # getTriangleList returns floats, so cast to int before drawing
        pt1 = (int(t[0]), int(t[1]))
        pt2 = (int(t[2]), int(t[3]))
        pt3 = (int(t[4]), int(t[5]))
        cv2.line(img, pt1, pt2, (0, 255, 0), 2, cv2.LINE_AA)
        cv2.line(img, pt1, pt3, (0, 255, 0), 2, cv2.LINE_AA)
        cv2.line(img, pt2, pt3, (0, 255, 0), 2, cv2.LINE_AA)
    img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
    return img
```

Let's take a look:
```python
# extract the facial landmarks
img1 = cv2.imread('./images/hjh.jpg')
img2 = cv2.imread('./images/zly.jpg')
face_box1, face_kps1 = detect_facepoint(img1)
face_kps1 = face_kps1.astype(int)
face_box2, face_kps2 = detect_facepoint(img2)
face_kps2 = face_kps2.astype(int)
# build the triangle meshes
img_t1 = get_triangle(img1, face_kps1)
img_t2 = get_triangle(img2, face_kps2)
# visualize
plt.figure(figsize=(8, 8))
plt.subplot(121)
plt.imshow(draw_triangles(img1.copy(), img_t1))
plt.axis('off')
plt.subplot(122)
plt.imshow(draw_triangles(img2.copy(), img_t2))
plt.axis('off')
```

Mesh Warping
The goal is to warp each triangle of the second face onto the corresponding triangle region of the first face.
This means the second face's own mesh is not needed; the second face can be partitioned according to the first face's mesh. The only correspondence between the two faces is that their landmarks share the same index order, so once we know which landmark each vertex of the first face's triangles belongs to, those indices let us partition the second face the same way.
First, find the landmark index of each triangle vertex of the first face:
```python
# find the landmark index closest to each triangle vertex
def get_nearest(img_t, face_kps):
    triangle_idx = []
    for t in img_t:
        idx1 = np.argmin(np.sum(abs(face_kps - np.array([[t[0], t[1]]])), axis=1))
        idx2 = np.argmin(np.sum(abs(face_kps - np.array([[t[2], t[3]]])), axis=1))
        idx3 = np.argmin(np.sum(abs(face_kps - np.array([[t[4], t[5]]])), axis=1))
        triangle_idx.append([idx1, idx2, idx3])
    return triangle_idx
```

Next, extract each patch of the second image and warp it. As an example, take triangle number 2. The procedure is to take the bounding rectangle of the triangle, crop it out, and recompute the coordinates of the three keypoints relative to that rectangle:
```python
# landmark indices for every triangle of the first face
wrap_idx = get_nearest(img_t1, face_kps1)

i = 2  # triangle index
# the three vertices of this triangle in each image
t1 = face_kps1[wrap_idx[i]]
t2 = face_kps2[wrap_idx[i]]
# bounding rectangles of the two triangles
patch_rect1 = cv2.boundingRect(t1)
patch_rect2 = cv2.boundingRect(t2)
# keypoint coordinates relative to their bounding rectangles
new_t1 = t1 - np.array([[patch_rect1[0], patch_rect1[1]]])
new_t2 = t2 - np.array([[patch_rect2[0], patch_rect2[1]]])
# crop the corresponding patch from the second image
img_patch2 = img2[patch_rect2[1]:patch_rect2[1] + patch_rect2[3],
                  patch_rect2[0]:patch_rect2[0] + patch_rect2[2]]
```

Visualize the cropped patch and its keypoints to check that they line up, since it is easy to get the coordinate axes swapped in OpenCV:
```python
# verify that the keypoints are correct
plt.figure(figsize=(8, 8))
plt.subplot(131)
plt.imshow(cv2.cvtColor(img_patch2.copy(), cv2.COLOR_BGR2RGB))
plt.axis('off')
plt.subplot(132)
plt.imshow(draw_kps(img_patch2, (0, 0, patch_rect2[2], patch_rect2[3]), new_t2))
plt.axis('off')
plt.subplot(133)
plt.imshow(draw_kps(img2.copy(), face_box2, t2, 4))
plt.axis('off')
```

Next, warp the second image's triangle patch so that it can be pasted onto the corresponding triangle region of the first image. The warp function is simple: use OpenCV's getAffineTransform to compute the affine matrix and warpAffine to apply it, aligning the two regions:
```python
def applyAffineTransform(src, srcTri, dstTri, size):
    # given two triangles, find the affine transform from the first to the second
    warpMat = cv2.getAffineTransform(np.float32(srcTri), np.float32(dstTri))
    # apply the affine transform to the source patch
    dst = cv2.warpAffine(src, warpMat, (size[0], size[1]), None,
                         flags=cv2.INTER_LINEAR, borderMode=cv2.BORDER_REFLECT_101)
    return dst
```
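A quick aside on why getAffineTransform takes exactly three point pairs: a 2D affine map

$$
\begin{bmatrix} x' \\ y' \end{bmatrix}
=
\begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix}
\begin{bmatrix} x \\ y \end{bmatrix}
+
\begin{bmatrix} t_x \\ t_y \end{bmatrix}
$$

has six unknowns, and each point correspondence contributes two equations, so three non-collinear pairs determine the transform uniquely.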
Now call applyAffineTransform to warp the patch from image 2 using the affine transform between the two triangles:

```python
patch_affine2 = applyAffineTransform(img_patch2, new_t2, new_t1, (patch_rect1[2], patch_rect1[3]))
```

Let's take a look:
```python
plt.imshow(cv2.cvtColor(patch_affine2.copy(), cv2.COLOR_BGR2RGB))
plt.axis('off')
```

We only want to paste the triangle itself, not the whole rectangular patch above, so use a mask to paste just the triangular region:
```python
mask = np.zeros((patch_rect1[3], patch_rect1[2], 3), dtype=np.uint8)
mask = cv2.fillConvexPoly(mask, new_t1, (1, 1, 1), 16, 0)
mask_img = patch_affine2 * mask
img1[patch_rect1[1]:patch_rect1[1] + patch_rect1[3], patch_rect1[0]:patch_rect1[0] + patch_rect1[2]] = \
    img1[patch_rect1[1]:patch_rect1[1] + patch_rect1[3], patch_rect1[0]:patch_rect1[0] + patch_rect1[2]] * (1 - mask)
img1[patch_rect1[1]:patch_rect1[1] + patch_rect1[3], patch_rect1[0]:patch_rect1[0] + patch_rect1[2]] = \
    img1[patch_rect1[1]:patch_rect1[1] + patch_rect1[3], patch_rect1[0]:patch_rect1[0] + patch_rect1[2]] + mask_img
```

Visualize the result:
```python
plt.imshow(cv2.cvtColor(img1.copy(), cv2.COLOR_BGR2RGB))
plt.axis('off')
```

Look closely: there is a faint mark just below the lower lip; that is where triangle number 2 was pasted.
Wrapping this whole step into a single function:
```python
def warp_triangle(dst_img, src_img, img_tri1, kps1, kps2):
    # landmark indices for every triangle of the first face
    wrap_idx = get_nearest(img_tri1, kps1)
    # warp each triangle of the second face onto the corresponding position in the first
    for i in range(len(wrap_idx)):
        t1 = kps1[wrap_idx[i]]
        t2 = kps2[wrap_idx[i]]
        patch_rect1 = cv2.boundingRect(t1)
        patch_rect2 = cv2.boundingRect(t2)
        new_t1 = t1 - np.array([[patch_rect1[0], patch_rect1[1]]])
        new_t2 = t2 - np.array([[patch_rect2[0], patch_rect2[1]]])
        # crop the triangle patch from the second image
        img_patch2 = src_img[patch_rect2[1]:patch_rect2[1] + patch_rect2[3],
                             patch_rect2[0]:patch_rect2[0] + patch_rect2[2]]
        # warp it to the shape of the first image's triangle
        patch_affine2 = applyAffineTransform(img_patch2, new_t2, new_t1, (patch_rect1[2], patch_rect1[3]))
        # paste the warped triangle onto the corresponding place in the first image
        mask = np.zeros((patch_rect1[3], patch_rect1[2], 3), dtype=np.uint8)
        mask = cv2.fillConvexPoly(mask, new_t1, (1, 1, 1), 16, 0)
        mask_img = patch_affine2 * mask
        dst_img[patch_rect1[1]:patch_rect1[1] + patch_rect1[3], patch_rect1[0]:patch_rect1[0] + patch_rect1[2]] = \
            dst_img[patch_rect1[1]:patch_rect1[1] + patch_rect1[3], patch_rect1[0]:patch_rect1[0] + patch_rect1[2]] * (1 - mask)
        dst_img[patch_rect1[1]:patch_rect1[1] + patch_rect1[3], patch_rect1[0]:patch_rect1[0] + patch_rect1[2]] = \
            dst_img[patch_rect1[1]:patch_rect1[1] + patch_rect1[3], patch_rect1[0]:patch_rect1[0] + patch_rect1[2]] + mask_img
    return dst_img
```

The result after all the triangles have been warped:
```python
# warp the whole face mesh
wrap_img = warp_triangle(img1.copy(), img2.copy(), img_t1, face_kps1, face_kps2)
```

Visualize the result:
```python
plt.imshow(cv2.cvtColor(wrap_img, cv2.COLOR_BGR2RGB))
plt.axis('off')
```

There is a clear color mismatch, which makes the pasting seams very noticeable.
Color Correction
The previous post used a Gaussian-based color correction; here we use OpenCV's seamlessClone() directly, which applies Poisson blending to fix the overly visible pasting seams.
The procedure is to extract a face mask once more and call the OpenCV function with this mask to paste the face:
```python
# paste the face back with Poisson blending
convex1 = cv2.convexHull(face_kps1, returnPoints=True)
mask = np.zeros_like(img1)
mask = cv2.fillConvexPoly(mask, convex1, (255, 255, 255))
r = cv2.boundingRect(convex1)
center = (r[0] + int(r[2] / 2), r[1] + int(r[3] / 2))
result_img = cv2.seamlessClone(wrap_img, img1, mask, center, cv2.NORMAL_CLONE)
```

Visualize the result:
```python
plt.figure(figsize=(18, 18))
plt.subplot(131)
plt.imshow(mask.astype(np.uint8))
plt.axis('off')
plt.subplot(132)
plt.imshow(cv2.cvtColor(wrap_img, cv2.COLOR_BGR2RGB))
plt.axis('off')
plt.subplot(133)
plt.imshow(cv2.cvtColor(result_img, cv2.COLOR_BGR2RGB))
plt.axis('off')
```
The orientation of this face looks odd, so it is best to pick two source images whose faces point in roughly the same direction. Swapping with images of Yu Feihong and Zhao Liying gives the result below:
Well, I certainly cannot tell that the bottom-left image is Zhao Liying. No need to panic; there are plenty of algorithms, and we will try other approaches later.
Afterword
This post takes the previous one, which replaced only the facial features, a step further and replaces the entire face.
Code for this post:
Link: https://pan.baidu.com/s/11syxp6yM96GVGi09FSGUwQ
Extraction code: e8ae
This article is also published on my WeChat official account, which, together with this blog, will keep posting content on motion capture, machine learning, deep learning, and computer vision. Stay tuned.