Lane Detection with LaneNet
Segmentation branch
Handling the imbalanced class distribution
Lane pixels are far fewer than background pixels, so the loss function assigns different weights to the two classes, lowering the weight of the background.
The output of this branch is (W, H, 2).
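A minimal sketch of such a weighted cross-entropy, assuming the bounded inverse class weighting w = 1/ln(c + p) described in the ENet paper; the constant c = 1.02 and the TF 1.x phrasing here are assumptions, not the repo's exact code:

import tensorflow as tf

def weighted_binary_seg_loss(logits, labels, c=1.02):
    """Pixel-wise cross-entropy; the rare class (lane) gets a larger weight."""
    onehot = tf.one_hot(labels, depth=2)              # labels: (B, H, W) ints in {0, 1}
    counts = tf.reduce_sum(onehot, axis=[0, 1, 2])    # pixels per class
    freq = counts / tf.reduce_sum(counts)             # class frequency p
    class_weights = 1.0 / tf.log(c + freq)            # bounded inverse weighting
    pixel_weights = tf.reduce_sum(onehot * class_weights, axis=-1)
    ce = tf.nn.softmax_cross_entropy_with_logits_v2(labels=onehot, logits=logits)
    return tf.reduce_mean(ce * pixel_weights)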
Embedding branch
The loss is designed so that pixels belonging to the same lane line end up close together in the embedding space, while pixels belonging to different lane lines end up far apart. This is the discriminative loss.
The output of this branch is (W, H, N), where N is the dimensionality of the per-pixel embedding vector.
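A minimal numpy sketch of the per-image form of this loss (De Brabandere et al.); the margins delta_v and delta_d and the equal term weights are assumptions, and the paper's full loss also has a small regularization term on the cluster means, omitted here:

import numpy as np

def discriminative_loss(embeddings, labels, delta_v=0.5, delta_d=3.0):
    """embeddings: (P, N) pixel vectors; labels: (P,) lane ids, 0 = background."""
    lane_ids = [l for l in np.unique(labels) if l != 0]
    means = [embeddings[labels == l].mean(axis=0) for l in lane_ids]
    # pull term: pixels further than delta_v from their lane mean are pulled in
    l_var = 0.0
    for mu, l in zip(means, lane_ids):
        dist = np.linalg.norm(embeddings[labels == l] - mu, axis=1)
        l_var += np.mean(np.maximum(0.0, dist - delta_v) ** 2)
    l_var /= max(len(lane_ids), 1)
    # push term: lane means closer than delta_d are pushed apart
    l_dist, pairs = 0.0, 0
    for i in range(len(means)):
        for j in range(i + 1, len(means)):
            gap = np.linalg.norm(means[i] - means[j])
            l_dist += np.maximum(0.0, delta_d - gap) ** 2
            pairs += 1
    l_dist /= max(pairs, 1)
    return l_var + l_dist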
Instance segmentation
After the segmentation branch produces the semantic segmentation and the embedding branch produces the per-pixel vector representation, clustering is applied to obtain the instance segmentation.
H-net
Perspective transform
to do
Lane line fitting
LaneNet outputs a set of pixels for each lane line; a curve still has to be regressed from these points. The traditional approach is to project the image into a bird's-eye view and fit a second- or third-order polynomial. In that approach the transformation matrix H is computed only once and the same matrix is applied to every image, which introduces errors when the road slope changes.
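For reference, the conventional fit mentioned above amounts to a few lines of numpy; lane_pts here is a toy (M, 2) array of (x, y) lane pixels assumed to be already warped into the bird's-eye view:

import numpy as np

lane_pts = np.array([[310, 700], [305, 650], [298, 600], [290, 550]], dtype=np.float64)
# fit x = f(y), since lanes run roughly vertically in the bird's-eye view
coeffs = np.polyfit(lane_pts[:, 1], lane_pts[:, 0], deg=2)   # 2nd-order polynomial
y_samples = np.linspace(lane_pts[:, 1].min(), lane_pts[:, 1].max(), 50)
x_fitted = np.polyval(coeffs, y_samples)                     # sampled lane curve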
To solve this problem, the paper trains a neural network, H-Net, that predicts the transformation matrix H: the input is the image and the output is the transformation matrix. I previously ported OpenCV's inverse-perspective-transform source code, where the transformation matrix has 8 free parameters; here only 6 degrees of freedom are given. This puzzled me at first, but on a careful reading of the paper the authors explain that this constrains how the transform can act in the horizontal direction.
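Concretely, the paper parameterizes H with six coefficients and pins the remaining entries to fixed values, which is what enforces that constraint:

H = \begin{bmatrix} a & b & c \\ 0 & d & e \\ 0 & f & 1 \end{bmatrix}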
Code analysis

binary_seg_image, instance_seg_image = sess.run(
    [binary_seg_ret, instance_seg_ret],
    feed_dict={input_tensor: [image]}
)

Input: (1, 256, 512, 3). Output: binary_seg_image (1, 256, 512), instance_seg_image (1, 256, 512, 4).
This performs the pixel-level classification and the per-pixel vector representation.
The inference of class LaneNet proceeds in two steps.
The first step extracts the segmentation features, including the features used for semantic segmentation and those used for instance segmentation.
class LaneNet(cnn_basenet.CNNBaseModel):
    def inference(self, input_tensor, name):
        """
        :param input_tensor:
        :param name:
        :return:
        """
        with tf.variable_scope(name_or_scope=name, reuse=self._reuse):
            # first extract image features
            extract_feats_result = self._frontend.build_model(
                input_tensor=input_tensor,
                name='{:s}_frontend'.format(self._net_flag),
                reuse=self._reuse
            )
            # Returns a dict containing the feature maps for semantic segmentation
            # and for instance segmentation:
            # binary_segment_logits (1, 256, 512, 2): 2 is the number of classes, lane/background.
            # instance_segment_logits (1, 256, 512, 64): convolved later to give each pixel a vector.
            print('features:', extract_feats_result)
            # second apply backend process
            binary_seg_prediction, instance_seg_prediction = self._backend.inference(
                binary_seg_logits=extract_feats_result['binary_segment_logits']['data'],
                instance_seg_logits=extract_feats_result['instance_segment_logits']['data'],
                name='{:s}_backend'.format(self._net_flag),
                reuse=self._reuse
            )
            if not self._reuse:
                self._reuse = True
            return binary_seg_prediction, instance_seg_prediction

The features obtained in the first step are as follows:
features: OrderedDict([
    ('encode_stage_1_share',    {'data': <Tensor>, 'shape': [1, 256, 512, 64]}),
    ('encode_stage_2_share',    {'data': <Tensor>, 'shape': [1, 128, 256, 128]}),
    ('encode_stage_3_share',    {'data': <Tensor>, 'shape': [1, 64, 128, 256]}),
    ('encode_stage_4_share',    {'data': <Tensor>, 'shape': [1, 32, 64, 512]}),
    ('encode_stage_5_binary',   {'data': <Tensor>, 'shape': [1, 16, 32, 512]}),
    ('encode_stage_5_instance', {'data': <Tensor>, 'shape': [1, 16, 32, 512]}),
    ('binary_segment_logits',   {'data': <Tensor>, 'shape': [1, 256, 512, 2]}),
    ('instance_segment_logits', {'data': <Tensor>, 'shape': [1, 256, 512, 64]})
])

Feature extraction is complete; next comes the backend processing.
class LaneNetBackEnd(cnn_basenet.CNNBaseModel):
    def inference(self, binary_seg_logits, instance_seg_logits, name, reuse):
        """
        :param binary_seg_logits:
        :param instance_seg_logits:
        :param name:
        :param reuse:
        :return:
        """
        with tf.variable_scope(name_or_scope=name, reuse=reuse):
            with tf.variable_scope(name_or_scope='binary_seg'):
                binary_seg_score = tf.nn.softmax(logits=binary_seg_logits)
                binary_seg_prediction = tf.argmax(binary_seg_score, axis=-1)
            with tf.variable_scope(name_or_scope='instance_seg'):
                pix_bn = self.layerbn(
                    inputdata=instance_seg_logits,
                    is_training=self._is_training,
                    name='pix_bn')
                pix_relu = self.relu(inputdata=pix_bn, name='pix_relu')
                instance_seg_prediction = self.conv2d(
                    inputdata=pix_relu,
                    out_channel=CFG.TRAIN.EMBEDDING_FEATS_DIMS,
                    kernel_size=1,
                    use_bias=False,
                    name='pix_embedding_conv'
                )
        return binary_seg_prediction, instance_seg_prediction

For the per-pixel classification, softmax turns the logits into probabilities and argmax takes the index of the most probable class. For the per-pixel vector representation, a 1x1 convolution maps the channel dimension to CFG.TRAIN.EMBEDDING_FEATS_DIMS (configured as 4): the (1, 256, 512, 64) tensor is convolved into a (1, 256, 512, 4) tensor, so each pixel is represented by a 4-dimensional vector.
So the whole LaneNet inference returns two tensors: one with shape (1, 256, 512) and one with shape (1, 256, 512, 4).
后處理
class LaneNetPostProcessor(object): def postprocess(self, binary_seg_result, instance_seg_result=None, min_area_threshold=100, source_image=None, data_source='tusimple'):對(duì)binary_seg_result,先通過(guò)形態(tài)學(xué)操作將小的空洞去除.參考 https://www.cnblogs.com/sdu20112013/p/11672634.html
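A minimal sketch of that hole-filling step with OpenCV; the 5x5 elliptical kernel is an assumption, not necessarily the repo's choice:

import cv2
import numpy as np

# binary_seg_result: (H, W) mask with values {0, 1}; scale to {0, 255} for OpenCV
mask = np.array(binary_seg_result * 255, dtype=np.uint8)
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
# closing = dilation followed by erosion, which fills small holes in the lane mask
closed = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)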
然后做聚類.
def _get_lane_embedding_feats(binary_seg_ret, instance_seg_ret):
    """
    get lane embedding features according the binary seg result
    :param binary_seg_ret:
    :param instance_seg_ret:
    :return:
    """
    idx = np.where(binary_seg_ret == 255)  # tuple of (row, col) index arrays
    lane_embedding_feats = instance_seg_ret[idx]
    # idx_scale = np.vstack((idx[0] / 256.0, idx[1] / 512.0)).transpose()
    # lane_embedding_feats = np.hstack((lane_embedding_feats, idx_scale))
    lane_coordinate = np.vstack((idx[1], idx[0])).transpose()
    assert lane_embedding_feats.shape[0] == lane_coordinate.shape[0]
    ret = {
        'lane_embedding_feats': lane_embedding_feats,
        'lane_coordinates': lane_coordinate
    }
    return ret

This retrieves the lane-pixel coordinates and the embedding vector at each of those coordinates.
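These embeddings are what get clustered into individual lanes. A minimal sketch with scikit-learn's DBSCAN; the eps and min_samples values are assumptions, not the repo's config:

from sklearn.cluster import DBSCAN

def cluster_lane_pixels(lane_embedding_feats, lane_coordinates,
                        eps=0.35, min_samples=100):
    """Group lane pixels into instances by clustering their embedding vectors."""
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(lane_embedding_feats)
    lanes = {}
    for label in set(labels):
        if label == -1:          # DBSCAN marks outliers as -1; drop them
            continue
        lanes[label] = lane_coordinates[labels == label]   # (x, y) coords of one lane
    return lanes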
np.where(condition)
With only a condition and no x and y arguments, it outputs the coordinates of the elements that satisfy the condition (i.e. are non-zero), equivalent to numpy.nonzero. The coordinates are given as a tuple: it contains as many arrays as the original array has dimensions, each giving the indices of the matching elements along one dimension.
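A quick illustration:

import numpy as np

a = np.array([[0, 255],
              [255, 0]])
idx = np.where(a == 255)
# idx == (array([0, 1]), array([1, 0])): one array per dimension,
# giving the row and column indices of the matching elements
print(a[idx])  # [255 255]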
Test results
tensorflow-gpu 1.15.2
4x Titan Xp
(4, 256, 512) (4, 256, 512, 4)
I0302 17:04:31.276140 29376 test_lanenet.py:222] imgae inference cost time: 2.58794s
(32, 256, 512) (32, 256, 512, 4)
I0302 17:05:50.322593 29632 test_lanenet.py:222] imgae inference cost time: 4.31036s
This behaves like a high-throughput, high-latency system: processing a single frame takes 1-2 s, while batching many images brings the average down to roughly 0.1 s per image (4.31036 s / 32 ≈ 0.13 s).
The backbone in the paper is ENet, which reaches an inference speed of 52 fps on an NVIDIA 1080 Ti.
The repo author's explanation of this gap:
2. Origin paper use Enet as backbone net but I use vgg16 as backbone net so speed will not get as fast as that.
3. Gpu need a short time to warm up and you can adjust your batch size to test the speed again:)
That is, the feature-extraction backbone differs from the paper's, and the GPU needs a short warm-up time.
In my own tests, most of the time is spent in "extract image features"; switching to a different backbone might improve this.
def inference(self, input_tensor, name):
    """
    :param input_tensor:
    :param name:
    :return:
    """
    print("***************,input_tensor shape:", input_tensor.shape)
    with tf.variable_scope(name_or_scope=name, reuse=self._reuse):
        t_start = time.time()
        # first extract image features
        extract_feats_result = self._frontend.build_model(
            input_tensor=input_tensor,
            name='{:s}_frontend'.format(self._net_flag),
            reuse=self._reuse
        )
        t_cost = time.time() - t_start
        glog.info('extract image features cost time: {:.5f}s'.format(t_cost))

        # second apply backend process
        t_start = time.time()
        binary_seg_prediction, instance_seg_prediction = self._backend.inference(
            binary_seg_logits=extract_feats_result['binary_segment_logits']['data'],
            instance_seg_logits=extract_feats_result['instance_segment_logits']['data'],
            name='{:s}_backend'.format(self._net_flag),
            reuse=self._reuse
        )
        t_cost = time.time() - t_start
        glog.info('backend process cost time: {:.5f}s'.format(t_cost))

        if not self._reuse:
            self._reuse = True
        return binary_seg_prediction, instance_seg_prediction

Summary