Tensorflow_yolov3 + Intel RealSense D435: strange behavior where a multi-camera setup freezes (hangs) as soon as depth is detected
With two cameras connected, the program hangs as soon as a depth reading comes through (under about 30 cm); a single camera is fine. This only started after switching to multithreaded frame transfer, and I have no idea what is going on...
Later I added a line that checks whether the global variable exists before using it. That seemed to help a little, but it still needs more testing.
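The check in question, as it appears later in `ImgProcess.run()`, guards against reading a per-camera global before the capture thread has published it. A minimal standalone sketch of that guard (the serial number and the string value are just illustrative):

```python
def frame_ready(serial):
    """Return True once a capture thread has published a color frame for this camera."""
    return ('color_image' + serial) in globals()

# Before any capture thread has run, nothing has been published yet
print(frame_ready('838212071055'))   # False

# Simulate a capture thread publishing a frame under its per-camera key
globals()['color_image' + '838212071055'] = 'fake frame'
print(frame_ready('838212071055'))   # True
```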
20200401 update: apparently it still hangs.
Here is the code:
```python
# -*- coding: utf-8 -*-
"""
@File    : 20200401_多攝像頭多線程調度_整合到一個文件.py
@Time    : 2020/4/1 8:41
@Author  : Dontla
@Email   : sxana@qq.com
@Software: PyCharm
"""
import sys
import threading
import time
import traceback

import pyrealsense2 as rs
import core.utils as utils
import colorsys
import random
import cv2
import numpy as np
import tensorflow as tf
from core.config import cfg
from core.yolov3 import YOLOV3

# Parameters
# cam_serials = ['836612070298', '838212073806', '827312071616']
cam_serials = ['838212071055', '826212070395']
# cam_serials = ['826212070395']

# Needn't modify
cam_num = len(cam_serials)
ctx = rs.context()

# Freeze checker
globals()['a'] = 0


# Dontla 191106 note: function that reads the class.names file into a dict
def read_class_names(class_file_name):
    """loads class name from a file"""
    names = {}
    with open(class_file_name, 'r') as data:
        for ID, name in enumerate(data):
            names[ID] = name.strip('\n')
    return names


my_classes = read_class_names(cfg.YOLO.CLASSES)


class YoloTest(object):
    def __init__(self):
        # D·C 191111: __C.TEST.INPUT_SIZE = 544
        self.input_size = cfg.TEST.INPUT_SIZE
        self.anchor_per_scale = cfg.YOLO.ANCHOR_PER_SCALE
        # Dontla 191106 note: dict built from the class.names file
        self.classes = utils.read_class_names(cfg.YOLO.CLASSES)
        # D·C 191115: number of classes
        self.num_classes = len(self.classes)
        self.anchors = np.array(utils.get_anchors(cfg.YOLO.ANCHORS))
        # D·C 191111: __C.TEST.SCORE_THRESHOLD = 0.3
        self.score_threshold = cfg.TEST.SCORE_THRESHOLD
        # D·C 191120: __C.TEST.IOU_THRESHOLD = 0.45
        self.iou_threshold = cfg.TEST.IOU_THRESHOLD
        self.moving_ave_decay = cfg.YOLO.MOVING_AVE_DECAY
        # D·C 191120: __C.TEST.ANNOT_PATH = "./data/dataset/Dontla/20191023_Artificial_Flower/test.txt"
        self.annotation_path = cfg.TEST.ANNOT_PATH
        # D·C 191120: __C.TEST.WEIGHT_FILE = "./checkpoint/f_g_c_weights_files/yolov3_test_loss=15.8845.ckpt-47"
        self.weight_file = cfg.TEST.WEIGHT_FILE
        # D·C 191115: write flag (bool)
        self.write_image = cfg.TEST.WRITE_IMAGE
        # D·C 191115: __C.TEST.WRITE_IMAGE_PATH = "./data/detection/" (path where annotated images are written)
        self.write_image_path = cfg.TEST.WRITE_IMAGE_PATH
        # D·C 191116: TEST.SHOW_LABEL is set to True
        self.show_label = cfg.TEST.SHOW_LABEL

        # D·C 191120: name scope "input"
        with tf.name_scope('input'):
            # D·C 191120: placeholders (reserve memory for the inputs)
            self.input_data = tf.placeholder(dtype=tf.float32, name='input_data')
            self.trainable = tf.placeholder(dtype=tf.bool, name='trainable')

        model = YOLOV3(self.input_data, self.trainable)
        self.pred_sbbox, self.pred_mbbox, self.pred_lbbox = model.pred_sbbox, model.pred_mbbox, model.pred_lbbox

        # D·C 191120: name scope "ema" (exponential moving average)
        with tf.name_scope('ema'):
            ema_obj = tf.train.ExponentialMovingAverage(self.moving_ave_decay)

        # D·C 191120: allow_soft_placement=True lets TF pick an available GPU/CPU automatically
        self.sess = tf.Session(config=tf.ConfigProto(allow_soft_placement=True))
        # D·C 191120: variables_to_restore() maps the EMA shadow variables back onto the variables themselves
        self.saver = tf.train.Saver(ema_obj.variables_to_restore())
        # D·C 191120: restore the model
        self.saver.restore(self.sess, self.weight_file)

        # Camera serials
        # self.cam_serials = ['838212073249', '827312070790']
        # self.cam_serials = ['838212073249', '827312070790', '838212071055', '838212074152', '838212073806',
        #                     '827312071616']
        self.cam_serials = ['838212074152', '838212072365', '827312070790', '838212073806', '826212070395']
        self.cam_num = len(self.cam_serials)
        # self.cam_num = 2

    def predict(self, image):
        # D·C 191107: copy the image so we don't mutate the original
        org_image = np.copy(image)
        # D·C 191107: image size
        org_h, org_w, _ = org_image.shape
        # D·C 191108: letterbox the source into a square input (author's default 544×544,
        # with the scaled source in the middle and gray padding above and below)
        image_data = utils.image_preprocess(image, [self.input_size, self.input_size])
        # print(image_data.shape)  # (544, 544, 3)
        # D·C 191108: add a batch axis
        image_data = image_data[np.newaxis, ...]
        # print(image_data.shape)  # (1, 544, 544, 3)

        # D·C 191110: the three boxes hold the raw predictions at three scales
        # (useful, useless and overlapping boxes all mixed together)
        time1 = time.time()
        pred_sbbox, pred_mbbox, pred_lbbox = self.sess.run(
            [self.pred_sbbox, self.pred_mbbox, self.pred_lbbox],
            feed_dict={self.input_data: image_data, self.trainable: False})
        time2 = time.time()
        print(time2 - time1)

        # All three are numpy.ndarray with shapes
        # (1, 68, 68, 3, 6), (1, 34, 34, 3, 6), (1, 17, 17, 3, 6)
        # D·C 191110: reshape each to (-1, 5 + num_classes) and stack them; the
        # trailing num_classes values appear to be per-class probabilities
        pred_bbox = np.concatenate([np.reshape(pred_sbbox, (-1, 5 + self.num_classes)),
                                    np.reshape(pred_mbbox, (-1, 5 + self.num_classes)),
                                    np.reshape(pred_lbbox, (-1, 5 + self.num_classes))], axis=0)
        # print(pred_bbox.shape)  # (18207, 6)

        # D·C 191111: first filter, drop boxes below score_threshold (a lot fewer after this)
        # D·C 191115: bboxes is [n, 6]: four coordinate columns, then score, then class index
        bboxes = utils.postprocess_boxes(pred_bbox, (org_h, org_w), self.input_size, self.score_threshold)
        # D·C 191111: second filter, non-maximum suppression at iou_threshold
        bboxes = utils.nms(bboxes, self.iou_threshold)
        return bboxes

    def draw_bbox(self, image, bboxes, aligned_depth_frame=None, color_intrin_part=None, show_label=True):
        """bboxes: [x_min, y_min, x_max, y_max, probability, cls_id] format coordinates."""
        # D·C 191117: number of classes in class.names (80 for COCO, 1 for my own set)
        num_classes = len(my_classes)
        # D·A 191117: image resolution, e.g. (image_h, image_w): 240, 424
        image_h, image_w, _ = image.shape
        # D·C 191118: spread hues over the classes in HSV and then convert to RGB;
        # assigning distinct colors per class is much easier this way than directly in RGB
        hsv_tuples = [(1.0 * x / num_classes, 1., 1.) for x in range(num_classes)]
        colors = list(map(lambda x: colorsys.hsv_to_rgb(*x), hsv_tuples))
        # D·C 191118: scale to 0-255, e.g. [(255, 0, 0)]
        colors = list(map(lambda x: (int(x[0] * 255), int(x[1] * 255), int(x[2] * 255)), colors))

        # D·C 191119: fixed seed so the shuffled color order is the same on every run,
        # then release the seed again
        random.seed(0)
        random.shuffle(colors)
        random.seed(None)

        # Example bboxes (each element: box top-left and bottom-right coordinates,
        # then score, then class id; e.g. book = 73, laptop = 63):
        # [array([1.57846606e+00, 3.41371887e+02, 8.33116150e+01, 4.69713715e+02, 3.56435806e-01, 7.30000000e+01]),
        #  array([3.03579620e+02, 3.27595886e+02, 6.24216125e+02, 4.67514557e+02, 4.15683120e-01, 6.30000000e+01])]
        # Example output:
        #   Target: book     center pixel: (42, 405)   depth: 0.640 m
        #   Target: laptop   center pixel: (464, 397)  depth: 0.479 m

        # Dontla 20191104, reformed 20200301
        # color_intrin_part was already validated by the caller, no need to re-check here
        if color_intrin_part:
            ppx = color_intrin_part[0]
            ppy = color_intrin_part[1]
            fx = color_intrin_part[2]
            fy = color_intrin_part[3]

        for i, bbox in enumerate(bboxes):
            # First four numbers are the box corners
            coor = np.array(bbox[:4], dtype=np.int32)
            # Label font size (increasing fontScale enlarges the text at the box's top-left)
            fontScale = 0.5
            # Fifth number is the score
            score = bbox[4]
            # Sixth number is the class id
            class_ind = int(bbox[5])
            # Color assigned by class id (see the color setup above)
            bbox_color = colors[class_ind]
            # D·C 191119: box line width; fixed at 1 for 640×480, was 0 at 424×240
            # (why not simply int((image_h + image_w) / 1000)?)
            bbox_thick = int(0.6 * (image_h + image_w) / 600)
            # Top-left and bottom-right corners (origin at the image's top-left,
            # x axis right, y axis down), e.g. (126, 77) (229, 206)
            c1, c2 = (coor[0], coor[1]), (coor[2], coor[3])
            # D·C 191119: draw the rectangle
            cv2.rectangle(image, c1, c2, bbox_color, bbox_thick)

            # D·C 191119: if labels are enabled
            if show_label:
                # D·C 191119: label text, class name + score, e.g. "laptop: 0.48"
                bbox_mess = '%s: %.2f' % (my_classes[class_ind], score)
                # D·A 191119: center pixel of the target; round() keeps one decimal, int() drops it
                target_xy_pixel = [int(round((coor[0] + coor[2]) / 2)), int(round((coor[1] + coor[3]) / 2))]
                # Dontla 20191104, reformed 20200301
                # aligned_depth_frame was already validated by the caller
                if aligned_depth_frame:
                    target_depth = aligned_depth_frame.get_distance(target_xy_pixel[0], target_xy_pixel[1])
                    target_xy_true = [(target_xy_pixel[0] - ppx) * target_depth / fx,
                                      (target_xy_pixel[1] - ppy) * target_depth / fy]
                    # Dontla commented out at 20200301 (no need to print this while testing the disconnects)
                    # print('Target: {} center pixel: ({}, {}) real coords (mm): ({:.0f},{:.0f}) depth (mm): {:.0f}'
                    #       .format(my_classes[class_ind], target_xy_pixel[0], target_xy_pixel[1],
                    #               target_xy_true[0] * 1000, -target_xy_true[1] * 1000, target_depth * 1000))
                    # print('Target: {} center pixel: ({}, {}) depth: {:.3f}m'.format(
                    #     my_classes[class_ind], target_xy_pixel[0], target_xy_pixel[1], target_depth))
                # D·C 191119: pixel size of bbox_mess; getTextSize returns e.g. ((97, 12), 5),
                # where 5 is the baseline offset, which we don't need here
                t_size = cv2.getTextSize(bbox_mess, 0, fontScale, thickness=bbox_thick // 2)[0]
                # D·R 191119: when the box touches the top edge the label gets cut off,
                # so draw the filled label background inside the box instead
                # original: cv2.rectangle(image, c1, (c1[0] + t_size[0], c1[1] - t_size[1] - 3), bbox_color, thickness=-1)
                cv2.rectangle(image, c1, (c1[0] + t_size[0], c1[1] + t_size[1] + 3), bbox_color, thickness=-1)  # filled
                # D·R 191119: same fix for the text (bbox_thick is the line width)
                # original: cv2.putText(image, bbox_mess, (c1[0], c1[1] - 2), ...)
                cv2.putText(image, bbox_mess, (c1[0], c1[1] + t_size[1] + bbox_thick - 2),
                            cv2.FONT_HERSHEY_SIMPLEX, fontScale, (0, 0, 0), bbox_thick // 2, lineType=cv2.LINE_AA)
        return image


# [Class] per-camera frame-capture thread
class CamThread(threading.Thread):
    def __init__(self, cam_serial):
        threading.Thread.__init__(self)
        self.cam_serial = cam_serial

    # [Class method]
    def run(self):
        while True:
            try:
                print('Camera {} thread started:'.format(self.cam_serial))
                # Configure the camera and start streaming
                # self.cam_cfg(self.cam_serial)  # doesn't work from inside a function,
                # no idea why? (because those are local variables, silly, only usable inside the function)
                locals()['pipeline' + self.cam_serial] = rs.pipeline(ctx)
                locals()['config' + self.cam_serial] = rs.config()
                locals()['config' + self.cam_serial].enable_device(self.cam_serial)
                locals()['config' + self.cam_serial].enable_stream(rs.stream.depth, 1280, 720, rs.format.z16, 30)
                locals()['config' + self.cam_serial].enable_stream(rs.stream.color, 1280, 720, rs.format.bgr8, 30)
                locals()['pipeline' + self.cam_serial].start(locals()['config' + self.cam_serial])
                locals()['align' + self.cam_serial] = rs.align(rs.stream.color)

                # Read the camera's frames in a loop
                while True:
                    # globals()['a'] += 1
                    # print(globals()['a'])
                    locals()['frames' + self.cam_serial] = locals()['pipeline' + self.cam_serial].wait_for_frames()
                    locals()['aligned_frames' + self.cam_serial] = locals()['align' + self.cam_serial].process(
                        locals()['frames' + self.cam_serial])
                    globals()['aligned_depth_frame' + self.cam_serial] = locals()[
                        'aligned_frames' + self.cam_serial].get_depth_frame()
                    locals()['color_frame' + self.cam_serial] = locals()[
                        'aligned_frames' + self.cam_serial].get_color_frame()
                    locals()['color_profile' + self.cam_serial] = locals()[
                        'color_frame' + self.cam_serial].get_profile()
                    locals()['cvsprofile' + self.cam_serial] = rs.video_stream_profile(
                        locals()['color_profile' + self.cam_serial])
                    locals()['color_intrin' + self.cam_serial] = locals()[
                        'cvsprofile' + self.cam_serial].get_intrinsics()
                    globals()['color_intrin_part' + self.cam_serial] = [
                        locals()['color_intrin' + self.cam_serial].ppx,
                        locals()['color_intrin' + self.cam_serial].ppy,
                        locals()['color_intrin' + self.cam_serial].fx,
                        locals()['color_intrin' + self.cam_serial].fy]
                    globals()['color_image' + self.cam_serial] = np.asanyarray(
                        locals()['color_frame' + self.cam_serial].get_data())
                    # globals()['depth_image_raw' + self.cam_serial] = np.asanyarray(
                    #     globals()['aligned_depth_frame' + self.cam_serial].get_data())
                    # globals()['depth_image' + self.cam_serial] = cv2.applyColorMap(
                    #     cv2.convertScaleAbs(globals()['depth_image_raw' + self.cam_serial], alpha=0.03),
                    #     cv2.COLORMAP_JET)
            except Exception:
                traceback.print_exc()
                # Dontla 20200326: after a disconnect, the camera may not be back yet by the time we
                # re-configure up to pipeline.start(); calling stop() then raises "pipeline cannot be
                # stopped before start", so wrap it in its own try
                try:
                    locals()['pipeline' + self.cam_serial].stop()
                except Exception:
                    traceback.print_exc()
                    pass
                print('Camera {} thread {} disconnected, reconnecting:'.format(self.cam_serial, self.name))


# [Class] frame processing and display
class ImgProcess(threading.Thread):
    def __init__(self, cam_serial):
        threading.Thread.__init__(self)
        self.cam_serial = cam_serial

    # [Class method]
    def run(self):
        while True:
            try:
                # Took me forever to find why it froze: the lock acquire wasn't wrapped in try,
                # so on an exception the lock was never released... is it true that with the lock
                # unreleased even the owning thread can't acquire it again?
                # TODO(Dontla): if a camera drops and keeps holding the lock? Do we need a queue?
                #   Right now everything freezes
                # Dontla 20200331: it seems to freeze with or without the lock, and only once a
                #   camera sees depth beyond a certain distance (30 cm?); is my CPU too weak??
                # Dontla 20200331: maybe YoloTest should live in this file; all the importing
                #   back and forth may slow things down

                # Must check that globals()['color_image' + self.cam_serial] exists
                # if 'color_image{}'.format(self.cam_serial) not in globals() or 'depth_image{}'.format(
                #         self.cam_serial) not in globals():
                if 'color_image{}'.format(self.cam_serial) not in globals():
                    continue
                threadLock.acquire()
                try:
                    # time1 = time.time()
                    locals()['boxes_pr' + self.cam_serial] = YoloTest.predict(
                        globals()['color_image' + self.cam_serial])
                    # time2 = time.time()
                    # print(time2 - time1)  # measured at roughly 0.22 to 0.32 s per frame
                except Exception:
                    threadLock.release()
                    raise
                threadLock.release()
                # locals()['boxes_image' + self.cam_serial] = YoloTest.draw_bbox(
                #     globals()['color_image' + self.cam_serial], locals()['boxes_pr' + self.cam_serial],
                #     globals()['aligned_depth_frame' + self.cam_serial],
                #     globals()['color_intrin_part' + self.cam_serial])
                locals()['boxes_image' + self.cam_serial] = YoloTest.draw_bbox(
                    globals()['color_image' + self.cam_serial], locals()['boxes_pr' + self.cam_serial])
                cv2.imshow('color{}'.format(self.cam_serial), locals()['boxes_image' + self.cam_serial])
                # cv2.imshow('depth{}'.format(self.cam_serial), globals()['depth_image' + self.cam_serial])
                cv2.waitKey(1)
            except Exception:
                traceback.print_exc()
                pass


# [Function] repeatedly verify that all cameras are connected
def cam_conti_veri(cam_num, ctx):
    # D·C 191120: max_veri_times is the maximum number of checks; continuous_stable_value is how
    # many consecutive successful checks count as "stable" after a device reset
    max_veri_times = 100
    continuous_stable_value = 5
    print('\n', end='')
    print('Starting verification; required consecutive stable checks: {}, max checks: {}:'.format(
        continuous_stable_value, max_veri_times))
    continuous_value = 0
    veri_times = 0
    while True:
        devices = ctx.query_devices()
        # for dev in devices:
        #     print(dev.get_info(rs.camera_info.serial_number), dev.get_info(rs.camera_info.usb_type_descriptor))
        connected_cam_num = len(devices)
        print('Connected cameras: {}'.format(connected_cam_num))
        if connected_cam_num == cam_num:
            continuous_value += 1
            if continuous_value == continuous_stable_value:
                break
        else:
            continuous_value = 0
        veri_times += 1
        if veri_times == max_veri_times:
            print('Verification timed out, please check the camera connections!')
            sys.exit()


# [Function] hardware-reset every camera in a loop
def cam_hardware_reset(ctx, cam_serials):
    # Should we wait a while after hardware_reset()? Without a delay it errors out
    print('\n', end='')
    print('Resetting cameras:')
    for dev in ctx.query_devices():
        # Cache the serial first so we don't query the device repeatedly inside the inner loop
        # (not sure whether each access really re-queries it)
        dev_serial = dev.get_info(rs.camera_info.serial_number)
        # Match serials and reset only our own cameras. The nesting order of the two loops matters:
        # otherwise a freshly reset camera gets accessed again and raises
        for serial in cam_serials:
            if serial == dev_serial:
                dev.hardware_reset()
                # Oddly the print below doesn't raise even though dev was just reset; maybe because
                # it doesn't go through ctx.query_devices() again? Perhaps right after a reset the
                # device is still listed (so len(ctx.query_devices()) works) but its address is
                # stale, which would explain why reading the serial of a fresh query raises
                print('Camera {} reset successfully'.format(dev.get_info(rs.camera_info.serial_number)))
                # With a single camera this sleeps the full 5 seconds, to be safe
                time.sleep(5 / len(cam_serials))


if __name__ == '__main__':
    # Verify connections
    cam_conti_veri(cam_num, ctx)
    # Reset the cameras
    cam_hardware_reset(ctx, cam_serials)
    # Verify connections again
    cam_conti_veri(cam_num, ctx)

    # Create the YoloTest object
    YoloTest = YoloTest()
    # Create the thread lock
    threadLock = threading.Lock()
    globals()['flag'] = True

    # Create the threads
    for serial in cam_serials:
        locals()['CamThread_{}'.format(serial)] = CamThread(serial)
        locals()['ImgProcess_{}'.format(serial)] = ImgProcess(serial)

    # Start the threads
    for serial in cam_serials:
        locals()['CamThread_{}'.format(serial)].start()
        locals()['ImgProcess_{}'.format(serial)].start()

    # Block the main thread
    for serial in cam_serials:
        locals()['CamThread_{}'.format(serial)].join()
    print('Exiting main thread')
```

It hangs somewhere in here, and I can't work out why...
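The TODO in `ImgProcess.run()` asks whether a queue is needed. One common replacement for the `globals()`-plus-lock handoff above is a bounded queue per camera, where the capture thread drops the stale frame instead of ever blocking. This is only a sketch of that pattern, not tested against the RealSense pipeline; `publish`/`consume` are names I made up:

```python
import queue

# One bounded queue per camera: the capture thread keeps only the newest
# frame, so a slow consumer never back-pressures wait_for_frames().
frame_queues = {serial: queue.Queue(maxsize=1) for serial in ['838212071055', '826212070395']}

def publish(serial, frame):
    """Called by the capture thread: replace the stale frame with the newest one."""
    q = frame_queues[serial]
    try:
        q.put_nowait(frame)
    except queue.Full:
        try:
            q.get_nowait()   # drop the frame nobody consumed
        except queue.Empty:
            pass
        q.put_nowait(frame)

def consume(serial, timeout=1.0):
    """Called by the processing thread: wait briefly for a frame instead of spinning."""
    try:
        return frame_queues[serial].get(timeout=timeout)
    except queue.Empty:
        return None

publish('838212071055', 'frame1')
publish('838212071055', 'frame2')     # replaces the unconsumed frame1
print(consume('838212071055'))        # frame2
```

A timeout on `consume` also means a dropped camera only stalls its own processing thread rather than everything behind a shared lock.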
Related question: python Intel Realsense D435 multithreading resource allocation problem (freezing, hanging)
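The related question above and the comment in `CamThread.run()` ("because those are local variables, silly") circle the same point: inside a function, writing into the dict returned by `locals()` does not reliably create variables, while `globals()` does, which is why the script keeps reaching for `globals()`. A plain dict keyed by serial expresses the same per-camera state without either; a hypothetical sketch, with stand-in string values where the real code would store `rs.pipeline(ctx)` and `rs.config()` objects:

```python
# Per-camera state in plain dicts keyed by serial, instead of
# synthesizing variable names through locals()/globals().
cam_serials = ['838212071055', '826212070395']

pipelines = {}
configs = {}
for serial in cam_serials:
    pipelines[serial] = 'pipeline-' + serial   # stand-in for rs.pipeline(ctx)
    configs[serial] = 'config-' + serial       # stand-in for rs.config()

print(pipelines['838212071055'])   # pipeline-838212071055
```

Dicts are ordinary objects, so they can also be passed into the thread constructors explicitly rather than shared as module globals.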
總結
以上是生活随笔為你收集整理的Tensorflow_yolov3 Intel Realsense D435奇怪的现象,多摄像头连接时一旦能检测到深度马上就会卡(卡住)的全部內容,希望文章能夠幫你解決所遇到的問題。