Udacity Robotics Software Engineer Course Notes (5) - Sample Search and Return - Rover Simulator - Autonomous Driving
9. Autonomous Driving
In this part we will implement autonomous driving for the rover.
To do this we need four programs. The first is the perception program, which applies image processing and coordinate transforms to the camera input. The second is the decision program, which lets the rover make decisions based on its current conditions and state so that it can drive forward, reverse, and turn. The third is a supporting program that defines things such as the class describing the rover's state. Finally, the main program calls the other three to control the rover.
The starter code provided by Udacity was introduced in Udacity Robotics Software Engineer Course Notes (3).
From the earlier exercises we already understand the perception and decision logic, but so far it was all written as a single program, which made it complicated and hard to read. Here we split it into separate modules by function so that the main program can call them cleanly. All of the starter files below live in the code folder of the RoboND-Rover-Project directory.
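As a rough picture of how the four pieces fit together, here is a minimal sketch of one control cycle (a simplification for illustration only, not the actual loop in drive_rover.py): the shared rover-state object is passed through the perception and decision modules, and the resulting commands are sent back to the simulator.

# Minimal sketch of one control cycle, assuming RoverState, perception_step()
# and decision_step() as defined later in this note
def control_cycle(Rover, camera_image):
    Rover.img = camera_image        # latest camera frame from the simulator
    Rover = perception_step(Rover)  # update vision image, worldmap and nav_angles
    Rover = decision_step(Rover)    # choose throttle, brake and steering
    return Rover.throttle, Rover.brake, Rover.steer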
(1) Perception module: perception.py
First, the perception module. perception.py should provide three capabilities:
- Color thresholding of the image
- Perspective transform
- Coordinate transforms
These have all been used in earlier programs, but now we need to combine them into a single perception function, perception_step(), that processes each image. It has to do the following:
1) Define the source and destination points for the perspective transform
2) Apply the perspective transform
3) Apply color thresholds to identify navigable terrain / obstacles / rock samples
4) Update Rover.vision_image (displayed on the left side of the screen)
5) Convert map image pixel values to rover-centric coordinates
6) Convert rover-centric pixel coordinates to world coordinates
7) Update the rover's worldmap (displayed on the right side of the screen)
8) Convert rover-centric pixel positions to polar coordinates
The perception code is as follows:
import numpy as np
import cv2
import matplotlib.pyplot as plt

## Image processing
# Define a binary thresholding function
def color_thresh(img, rgb_thresh=(160, 160, 160)):
    img_thresh = np.zeros_like(img[:, :, 0])
    above_thresh = (img[:, :, 0] > rgb_thresh[0]) \
                   & (img[:, :, 1] > rgb_thresh[1]) \
                   & (img[:, :, 2] > rgb_thresh[2])
    img_thresh[above_thresh] = 255
    return img_thresh


## Perspective transform
# Define the image mapping function, which maps the camera image onto the ground plane
def perspect_transform(img, src, dst):
    M = cv2.getPerspectiveTransform(src, dst)  # define the transform matrix
    img_perspect = cv2.warpPerspective(img, M, (img.shape[1], img.shape[0]))
    return img_perspect


## Coordinate transforms
# Define a function to convert from image coordinates to rover-centric coordinates
def rover_coords(binary_img):
    ypos, xpos = binary_img.nonzero()
    x_pixel = -(ypos - binary_img.shape[0]).astype(np.float)
    y_pixel = -(xpos - binary_img.shape[1]/2).astype(np.float)
    return x_pixel, y_pixel


# Define the rotation function
def rotate_pix(xpix, ypix, yaw):
    yaw_rad = yaw * np.pi / 180
    xpix_rotated = (xpix * np.cos(yaw_rad)) - (ypix * np.sin(yaw_rad))
    ypix_rotated = (xpix * np.sin(yaw_rad)) + (ypix * np.cos(yaw_rad))
    return xpix_rotated, ypix_rotated


# Define the translation function
def translate_pix(xpix_rot, ypix_rot, xpos, ypos, scale):
    xpix_translated = (xpix_rot / scale) + xpos
    ypix_translated = (ypix_rot / scale) + ypos
    return xpix_translated, ypix_translated


# Define a combined function that applies rotation and translation and clips to the map extent
def pix_to_world(xpix, ypix, xpos, ypos, yaw, world_size, scale):
    xpix_rot, ypix_rot = rotate_pix(xpix, ypix, yaw)
    xpix_tran, ypix_tran = translate_pix(xpix_rot, ypix_rot, xpos, ypos, scale)
    x_pix_world = np.clip(np.int_(xpix_tran), 0, world_size - 1)
    y_pix_world = np.clip(np.int_(ypix_tran), 0, world_size - 1)
    return x_pix_world, y_pix_world


# Define a function to convert to polar coordinates
def to_polar_coords(xpix, ypix):
    dist = np.sqrt(xpix**2 + ypix**2)
    angles = np.arctan2(ypix, xpix)
    return dist, angles


def perception_step(Rover):
    # (1) Define the source and destination points for the perspective transform
    # Reference image
    filename = 'E:/2019/RoboND-Rover-Project-master/calibration_images/example_grid1.jpg'
    image = cv2.imread(filename)
    dst_size = 5
    bottom_offset = 0
    src = np.float32([[14, 140], [301, 140], [200, 96], [118, 96]])
    dst = np.float32([[image.shape[1] / 2 - dst_size, image.shape[0] - bottom_offset],
                      [image.shape[1] / 2 + dst_size, image.shape[0] - bottom_offset],
                      [image.shape[1] / 2 + dst_size, image.shape[0] - 2 * dst_size - bottom_offset],
                      [image.shape[1] / 2 - dst_size, image.shape[0] - 2 * dst_size - bottom_offset],
                      ])
    # (2) Apply the perspective transform
    img_perspect = perspect_transform(Rover.img, src, dst)
    # (3) Apply color thresholds to identify navigable terrain / obstacles / rock samples
    img_thresh_ter = color_thresh(img_perspect, rgb_thresh=(160, 160, 160))
    img_thresh_obs = color_thresh(img_perspect, rgb_thresh=(160, 160, 160))
    img_thresh_roc = color_thresh(img_perspect, rgb_thresh=(160, 130, 0))
    # (4) Update Rover.vision_image
    Rover.vision_image[:, :, 0] = img_thresh_roc
    Rover.vision_image[:, :, 1] = img_thresh_obs
    Rover.vision_image[:, :, 2] = img_thresh_ter
    # (5) Convert map image pixel values to rover-centric coordinates
    xpix, ypix = rover_coords(img_thresh_ter)
    # (6) Convert rover-centric pixel coordinates to world map coordinates
    x_pix_world, y_pix_world = pix_to_world(xpix, ypix, xpos=Rover.pos[0], ypos=Rover.pos[1],
                                            yaw=Rover.yaw, world_size=200, scale=10)
    # (7) Update the rover's worldmap
    Rover.worldmap[x_pix_world, y_pix_world] += 1
    # (8) Convert rover-centric pixel positions to polar coordinates
    rover_distances, rover_angles = to_polar_coords(xpix, ypix)
    # Array of angles of navigable terrain pixels
    Rover.nav_angles = rover_angles
    # Get the steering angle from the mean of the polar angles;
    # this works, but can behave badly in some situations
    avg_angle_degrees = np.mean(rover_angles * 180 / np.pi)
    Rover.steer = np.clip(avg_angle_degrees, -15, 15)
    # Get the throttle from the mean of the polar distances
    Rover.throttle = np.mean(rover_distances)
    print('steer:', Rover.steer, 'Rover.throttle:', Rover.throttle)

    return Rover
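To see how the polar-coordinate step turns navigable pixels into a steering command, here is a small standalone check with made-up rover-centric pixel coordinates (independent of the simulator) that reproduces the mean-angle calculation used above:

import numpy as np

# Hypothetical navigable pixels in rover-centric coordinates, mostly slightly to the left
xpix = np.array([10.0, 20.0, 30.0, 40.0])
ypix = np.array([ 2.0,  5.0,  9.0, 12.0])

dist = np.sqrt(xpix**2 + ypix**2)
angles = np.arctan2(ypix, xpix)      # radians, positive = terrain to the left of the rover

steer = np.clip(np.mean(angles * 180 / np.pi), -15, 15)
print(steer)  # ~14.7 degrees, so the rover steers gently to the left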
(2) Decision module: decision.py
This is a simple decision-tree style controller: based on the rover's current state and surroundings it decides whether to accelerate, steer or stop, and issues the corresponding commands.
import numpy as np

# Commands are based on the output of the perception_step() function
def decision_step(Rover):
    if Rover.nav_angles is not None:
        # Check the Rover.mode status
        if Rover.mode == 'forward':
            # Check the extent of navigable terrain
            if len(Rover.nav_angles) >= Rover.stop_forward:
                # In forward mode with good navigable terrain: accelerate if below max velocity
                if Rover.vel < Rover.max_vel:
                    # Set the throttle value
                    Rover.throttle = Rover.throttle_set
                else:
                    Rover.throttle = 0
                Rover.brake = 0
                # Set steering to the mean angle, clipped to +/- 15 degrees
                Rover.steer = np.clip(np.mean(Rover.nav_angles * 180/np.pi), -15, 15)
            # If there is a lack of navigable terrain ahead, switch to stop mode
            elif len(Rover.nav_angles) < Rover.stop_forward:
                # Set mode to "stop" and hit the brakes
                Rover.throttle = 0
                # Set the brake value
                Rover.brake = Rover.brake_set
                Rover.steer = 0
                Rover.mode = 'stop'
        # If we are already in "stop" mode, make different decisions
        elif Rover.mode == 'stop':
            # If we are in stop mode but still moving, keep braking
            if Rover.vel > 0.2:
                Rover.throttle = 0
                Rover.brake = Rover.brake_set
                Rover.steer = 0
            # If we are not moving (vel < 0.2), do something else
            elif Rover.vel <= 0.2:
                # We are stopped; use the vision data to decide whether there is a path forward
                if len(Rover.nav_angles) < Rover.go_forward:
                    Rover.throttle = 0
                    # Release the brake to allow turning
                    Rover.brake = 0
                    # Turning range is +/- 15 degrees; when stopped, the next line induces 4-wheel turning
                    Rover.steer = -15
                # If we are stopped but see sufficient navigable terrain ahead, go!
                if len(Rover.nav_angles) >= Rover.go_forward:
                    # Set throttle back to the stored value
                    Rover.throttle = Rover.throttle_set
                    Rover.brake = 0
                    # Set steering to the mean angle
                    Rover.steer = np.clip(np.mean(Rover.nav_angles * 180/np.pi), -15, 15)
                    Rover.mode = 'forward'
    else:
        Rover.throttle = Rover.throttle_set
        Rover.steer = 0
        Rover.brake = 0

    # If in a state where a rock can be picked up, send the pickup command
    if Rover.near_sample and Rover.vel == 0 and not Rover.picking_up:
        Rover.send_pickup = True

    return Rover
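Because decision_step() only reads and writes plain attributes, it can be exercised on its own with a hypothetical stand-in for the rover state. This quick sanity check is not part of the project files; the FakeRover class and its values are made up for illustration:

import numpy as np
from decision import decision_step

# Hypothetical minimal stand-in containing only the fields decision_step() touches
class FakeRover:
    def __init__(self):
        self.nav_angles = np.radians(np.linspace(-10, 10, 200))  # plenty of navigable terrain
        self.mode = 'forward'
        self.vel = 0.5
        self.max_vel = 2
        self.throttle = 0
        self.throttle_set = 0.2
        self.brake = 0
        self.brake_set = 10
        self.steer = 0
        self.stop_forward = 50
        self.go_forward = 500
        self.near_sample = 0
        self.picking_up = 0
        self.send_pickup = False

rover = decision_step(FakeRover())
print(rover.throttle, rover.brake, rover.steer)  # expect 0.2, 0, ~0 (the mean angle is zero)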
(3) Supporting functions: supporting_functions.py
These functions support running the project, for example creating the output images; see the code for the details of each one.
The code is as follows:
import numpy as np
import cv2
from PIL import Image
from io import BytesIO, StringIO
import base64
import time

# Define a function to convert telemetry strings to float, independent of the decimal convention
def convert_to_float(string_to_convert):
    if ',' in string_to_convert:
        float_value = np.float(string_to_convert.replace(',', '.'))
    else:
        float_value = np.float(string_to_convert)
    return float_value


def update_rover(Rover, data):
    # Initialize the start time and sample positions
    if Rover.start_time is None:
        Rover.start_time = time.time()
        Rover.total_time = 0
        samples_xpos = np.int_([convert_to_float(pos.strip()) for pos in data["samples_x"].split(';')])
        samples_ypos = np.int_([convert_to_float(pos.strip()) for pos in data["samples_y"].split(';')])
        Rover.samples_pos = (samples_xpos, samples_ypos)
        Rover.samples_to_find = np.int(data["sample_count"])
    # Or just update the elapsed time
    else:
        tot_time = time.time() - Rover.start_time
        if np.isfinite(tot_time):
            Rover.total_time = tot_time

    # Print out the fields in the telemetry data dictionary
    print(data.keys())
    # The rover's current velocity in m/s
    Rover.vel = convert_to_float(data["speed"])
    # The rover's current position
    Rover.pos = [convert_to_float(pos.strip()) for pos in data["position"].split(';')]
    # The current yaw angle
    Rover.yaw = convert_to_float(data["yaw"])
    # The current pitch angle
    Rover.pitch = convert_to_float(data["pitch"])
    # The current roll angle
    Rover.roll = convert_to_float(data["roll"])
    # The current throttle setting
    Rover.throttle = convert_to_float(data["throttle"])
    # The current steering angle
    Rover.steer = convert_to_float(data["steering_angle"])
    # Near sample flag
    Rover.near_sample = np.int(data["near_sample"])
    # Picking up flag
    Rover.picking_up = np.int(data["picking_up"])
    # Update the number of rocks collected
    Rover.samples_collected = Rover.samples_to_find - np.int(data["sample_count"])

    print('speed =', Rover.vel, 'position =', Rover.pos, 'throttle =',
          Rover.throttle, 'steer_angle =', Rover.steer, 'near_sample:', Rover.near_sample,
          'picking_up:', data["picking_up"], 'sending pickup:', Rover.send_pickup,
          'total time:', Rover.total_time, 'samples remaining:', data["sample_count"],
          'samples collected:', Rover.samples_collected)

    # Get the current image from the rover's central camera
    imgString = data["image"]
    image = Image.open(BytesIO(base64.b64decode(imgString)))
    Rover.img = np.asarray(image)

    # Return the updated rover and the image separately for optional saving
    return Rover, image


# Define a function to create display output from the worldmap results
def create_output_images(Rover):
    # Create a scaled map for plotting and clean up the obstacle/navigable pixels a bit
    if np.max(Rover.worldmap[:, :, 2]) > 0:
        nav_pix = Rover.worldmap[:, :, 2] > 0
        navigable = Rover.worldmap[:, :, 2] * (255 / np.mean(Rover.worldmap[nav_pix, 2]))
    else:
        navigable = Rover.worldmap[:, :, 2]
    if np.max(Rover.worldmap[:, :, 0]) > 0:
        obs_pix = Rover.worldmap[:, :, 0] > 0
        obstacle = Rover.worldmap[:, :, 0] * (255 / np.mean(Rover.worldmap[obs_pix, 0]))
    else:
        obstacle = Rover.worldmap[:, :, 0]

    likely_nav = navigable >= obstacle
    obstacle[likely_nav] = 0
    plotmap = np.zeros_like(Rover.worldmap)
    plotmap[:, :, 0] = obstacle
    plotmap[:, :, 2] = navigable
    plotmap = plotmap.clip(0, 255)
    # Overlay the obstacle and navigable terrain map with the ground truth map
    map_add = cv2.addWeighted(plotmap, 1, Rover.ground_truth, 0.5, 0)

    # Check whether any rock detections are present in the worldmap
    rock_world_pos = Rover.worldmap[:, :, 1].nonzero()
    # If there are, we'll step through the known sample positions
    # to confirm whether the detections are real
    samples_located = 0
    if rock_world_pos[0].any():
        rock_size = 2
        for idx in range(len(Rover.samples_pos[0])):
            test_rock_x = Rover.samples_pos[0][idx]
            test_rock_y = Rover.samples_pos[1][idx]
            rock_sample_dists = np.sqrt((test_rock_x - rock_world_pos[1]) ** 2 + \
                                        (test_rock_y - rock_world_pos[0]) ** 2)
            # If rocks were detected within 3 meters of a known sample position,
            # consider it a success and plot the known sample location on the map
            if np.min(rock_sample_dists) < 3:
                samples_located += 1
                map_add[test_rock_y - rock_size:test_rock_y + rock_size,
                        test_rock_x - rock_size:test_rock_x + rock_size, :] = 255

    # Calculate some statistics on the map results
    # First get the total number of pixels in the navigable terrain map
    tot_nav_pix = np.float(len((plotmap[:, :, 2].nonzero()[0])))
    # Next figure out how many of those correspond to ground truth pixels
    good_nav_pix = np.float(len(((plotmap[:, :, 2] > 0) & (Rover.ground_truth[:, :, 1] > 0)).nonzero()[0]))
    # Next find how many do not correspond to ground truth pixels
    bad_nav_pix = np.float(len(((plotmap[:, :, 2] > 0) & (Rover.ground_truth[:, :, 1] == 0)).nonzero()[0]))
    # Grab the total number of map pixels
    tot_map_pix = np.float(len((Rover.ground_truth[:, :, 1].nonzero()[0])))
    # Calculate the percentage of the ground truth map that has been successfully found
    perc_mapped = round(100 * good_nav_pix / tot_map_pix, 1)
    # Calculate the number of good map pixel detections divided by the total pixels
    # found to be navigable terrain
    if tot_nav_pix > 0:
        fidelity = round(100 * good_nav_pix / (tot_nav_pix), 1)
    else:
        fidelity = 0

    # Flip the map for plotting so that the y-axis points upward in the display
    map_add = np.flipud(map_add).astype(np.float32)
    # Add some text about the map and rock sample detection results
    cv2.putText(map_add, "Time: " + str(np.round(Rover.total_time, 1)) + ' s', (0, 10),
                cv2.FONT_HERSHEY_COMPLEX, 0.4, (255, 255, 255), 1)
    cv2.putText(map_add, "Mapped: " + str(perc_mapped) + '%', (0, 25),
                cv2.FONT_HERSHEY_COMPLEX, 0.4, (255, 255, 255), 1)
    cv2.putText(map_add, "Fidelity: " + str(fidelity) + '%', (0, 40),
                cv2.FONT_HERSHEY_COMPLEX, 0.4, (255, 255, 255), 1)
    cv2.putText(map_add, "Rocks", (0, 55),
                cv2.FONT_HERSHEY_COMPLEX, 0.4, (255, 255, 255), 1)
    cv2.putText(map_add, "  Located: " + str(samples_located), (0, 70),
                cv2.FONT_HERSHEY_COMPLEX, 0.4, (255, 255, 255), 1)
    cv2.putText(map_add, "  Collected: " + str(Rover.samples_collected), (0, 85),
                cv2.FONT_HERSHEY_COMPLEX, 0.4, (255, 255, 255), 1)

    # Convert the map and vision images to base64 strings for sending to the server
    pil_img = Image.fromarray(map_add.astype(np.uint8))
    buff = BytesIO()
    pil_img.save(buff, format="JPEG")
    encoded_string1 = base64.b64encode(buff.getvalue()).decode("utf-8")

    pil_img = Image.fromarray(Rover.vision_image.astype(np.uint8))
    buff = BytesIO()
    pil_img.save(buff, format="JPEG")
    encoded_string2 = base64.b64encode(buff.getvalue()).decode("utf-8")

    return encoded_string1, encoded_string2
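As a small aside, convert_to_float() exists because some system locales report telemetry numbers with a decimal comma instead of a decimal point. A quick check with made-up values shows both forms give the same result:

print(convert_to_float("3.14"), convert_to_float("3,14"))  # both print 3.14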
(4) Main program: drive_rover.py
This is the main program that runs the rover: it calls the three modules above to carry out the autonomous driving.
It also contains some statements for the socketio server and the Flask application, which I won't explain in detail here; if possible I'll write a separate article about them later. If you want the details now, see the python-socketio documentation linked in the code below.
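For reference, the basic pattern is small. This is a minimal standalone sketch (not the project code): python-socketio registers event handlers with @sio.on(...), answers with sio.emit(...), and the Flask app is wrapped as WSGI middleware and served by eventlet.

# Minimal sketch of the socketio + Flask pattern used by drive_rover.py
import socketio
import eventlet
import eventlet.wsgi
from flask import Flask

sio = socketio.Server()
app = Flask(__name__)

@sio.on('telemetry')                 # called whenever the simulator sends telemetry
def telemetry(sid, data):
    # reply with (dummy) driving commands; the real handler runs perception and decision first
    sio.emit("data", {'throttle': '0.2', 'brake': '0', 'steering_angle': '0'}, skip_sid=True)

app = socketio.Middleware(sio, app)                      # wrap Flask with the socketio server
eventlet.wsgi.server(eventlet.listen(('', 4567)), app)   # listen on port 4567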
The full drive_rover.py code is as follows:
import argparse
import shutil
import base64
from datetime import datetime
import os
import cv2
import numpy as np
import socketio
import eventlet
import eventlet.wsgi
from PIL import Image
from flask import Flask
from io import BytesIO, StringIO
import json
import pickle
import matplotlib.image as mpimg
import time

# Import functions for perception and decision making
from perception import perception_step
from decision import decision_step
from supporting_functions import update_rover, create_output_images
# Initialize the socketio server and the Flask application
# (learn more at: https://python-socketio.readthedocs.io/en/latest/)
sio = socketio.Server()
app = Flask(__name__)

# Read in the ground truth map and create an image with the map in the green channel
# Note: images are read with the origin (0, 0) at the top left and the y-axis pointing down; a single-channel image is expected here
ground_truth = mpimg.imread('E:/2019/RoboND-Rover-Project-master/calibration_images/map_bw.png')
ground_truth = ground_truth[:,:,0]  # my image's dimensions had an issue, so keep a single channel
# The next line creates arrays of zeros in the red and blue channels and puts the map into the green channel.
# That is why the map output shows up as green in the display image.
ground_truth_3d = np.dstack((ground_truth*0, ground_truth*255, ground_truth*0)).astype(np.float)

# Define the RoverState() class
class RoverState():
    def __init__(self):
        self.start_time = None       # Time when navigation started
        self.total_time = None       # Total duration of navigation
        self.img = None              # Current camera image
        self.pos = None              # Current position (x, y)
        self.yaw = None              # Current yaw angle
        self.pitch = None            # Current pitch angle
        self.roll = None             # Current roll angle
        self.vel = None              # Current velocity
        self.steer = 0               # Current steering angle
        self.throttle = 0            # Current throttle value
        self.brake = 0               # Current brake value
        self.nav_angles = None       # Angles of navigable terrain pixels
        self.nav_dists = None        # Distances of navigable terrain pixels
        self.ground_truth = ground_truth_3d  # Ground truth worldmap
        self.mode = 'forward'        # Current mode (can be 'forward' or 'stop')
        self.throttle_set = 0.2      # Throttle setting when accelerating
        self.brake_set = 10          # Brake setting when braking
        # The stop_forward and go_forward fields below represent total counts
        # of navigable terrain pixels. This is a very crude way of deciding
        # when the rover can keep going and when it should stop.
        # Feel free to add new fields or modify these.
        self.stop_forward = 50       # Threshold to initiate stopping
        self.go_forward = 500        # Threshold to go forward again
        self.max_vel = 2             # Maximum velocity (m/s)
        # Image output from the perception step.
        # Update this image to display intermediate analysis steps
        # on screen in autonomous mode.
        self.vision_image = np.zeros((160, 320, 3), dtype=np.float)
        # Worldmap.
        # Update this image with the positions of navigable terrain,
        # obstacles and rock samples.
        self.worldmap = np.zeros((200, 200, 3), dtype=np.float)
        self.samples_pos = None      # Actual positions of the samples
        self.samples_to_find = 0     # Initial count of samples
        self.samples_located = 0     # Number of samples located on the map
        self.samples_collected = 0   # Number of samples collected
        self.near_sample = 0         # Set the near_sample flag to 0
        self.picking_up = 0          # Set the ["picking_up"] value
        self.send_pickup = False


# Instantiate the class
Rover = RoverState()

# Variables to track frames per second (FPS)
# Initialize the frame counter
frame_counter = 0
# Initialize the one-second timer
second_counter = time.time()
fps = None

# Define the telemetry function
@sio.on('telemetry')
def telemetry(sid, data):
    global frame_counter, second_counter, fps
    frame_counter += 1
    # Do a rough calculation of frames per second (FPS)
    if (time.time() - second_counter) > 1:
        fps = frame_counter
        frame_counter = 0
        second_counter = time.time()
    print("Current FPS: {}".format(fps))

    if data:
        global Rover
        # Initialize / update the rover with the current telemetry
        Rover, image = update_rover(Rover, data)

        if np.isfinite(Rover.vel):
            # Execute the perception and decision steps to update the rover's state
            Rover = perception_step(Rover)
            Rover = decision_step(Rover)

            # Create output images to send to the server
            out_image_string1, out_image_string2 = create_output_images(Rover)

            # The action step! Send commands to the rover.
            # Don't send both of these; they both trigger the simulator to send back
            # new telemetry, so we only send one of them in response to the current
            # telemetry data.

            # If in a state where a rock can be picked up, send the pickup command
            if Rover.send_pickup and not Rover.picking_up:
                send_pickup()
                Rover.send_pickup = False
            else:
                # Send commands to the rover
                commands = (Rover.throttle, Rover.brake, Rover.steer)
                send_control(commands, out_image_string1, out_image_string2)

        # If the telemetry is invalid, send null commands
        else:
            # Send zeros for throttle, brake and steering, and empty images
            send_control((0, 0, 0), '', '')

        # To save camera images from autonomous driving, specify a path
        # Example: $ python drive_rover.py image_folder_path
        # Image frames are saved if a folder was specified
        if args.image_folder != '':
            timestamp = datetime.utcnow().strftime('%Y_%m_%d_%H_%M_%S_%f')[:-3]
            image_filename = os.path.join(args.image_folder, timestamp)
            image.save('{}.jpg'.format(image_filename))

    else:
        sio.emit('manual', data={}, skip_sid=True)


@sio.on('connect')
def connect(sid, environ):
    print("connect ", sid)
    send_control((0, 0, 0), '', '')
    sample_data = {}
    sio.emit(
        "get_samples",
        sample_data,
        skip_sid=True)


def send_control(commands, image_string1, image_string2):
    # Define the commands to be sent to the rover
    data = {
        'throttle': commands[0].__str__(),
        'brake': commands[1].__str__(),
        'steering_angle': commands[2].__str__(),
        'inset_image1': image_string1,
        'inset_image2': image_string2,
    }
    # Send the commands via the socketio server
    sio.emit(
        "data",
        data,
        skip_sid=True)
    eventlet.sleep(0)


# Define a function to send the "pickup" command
def send_pickup():
    print("Picking up")
    pickup = {}
    sio.emit(
        "pickup",
        pickup,
        skip_sid=True)
    eventlet.sleep(0)


if __name__ == '__main__':
    parser = argparse.ArgumentParser(description='Remote Driving')
    parser.add_argument(
        'image_folder',
        type=str,
        nargs='?',
        default='',
        help='Path to image folder. This is where the images from the run will be saved.')
    args = parser.parse_args()

    # os.system('rm -rf IMG_stream/*')
    if args.image_folder != '':
        print("Creating image folder at {}".format(args.image_folder))
        if not os.path.exists(args.image_folder):
            os.makedirs(args.image_folder)
        else:
            shutil.rmtree(args.image_folder)
            os.makedirs(args.image_folder)
        print("Recording this run ...")
    else:
        print("NOT recording this run ...")

    # Wrap the Flask application with socketio's middleware
    app = socketio.Middleware(sio, app)

    # Deploy as an eventlet WSGI server
    eventlet.wsgi.server(eventlet.listen(('', 4567)), app)
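To try it out, the typical workflow (assuming the simulator is installed) is to start this script from the code folder and then launch the Rover simulator in Autonomous Mode, which connects on port 4567:

python drive_rover.py
python drive_rover.py ../saved_frames   # optional: a hypothetical folder name for recording camera frames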
The output looks like this:
GIF demo:
因?yàn)閳D片大小限制的原因,gif的質(zhì)量比較差。
Even so, we can see that the program basically runs. But the autonomous driving still has plenty of problems; for example, when the rover is facing a rock obstacle head-on, it drives straight into it. That behavior comes from the mean-angle algorithm we use for steering, and it is clearly not acceptable.
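A tiny numerical example (with made-up values) shows why the mean fails in exactly that case: when a rock sits dead ahead, the navigable pixels split into a left group and a right group, and their average angle points straight at the rock.

import numpy as np

# Hypothetical navigable-terrain angles (radians): two clusters, left and right of a rock dead ahead
left_path = np.radians(np.linspace(10, 25, 50))      # open terrain to the left
right_path = np.radians(np.linspace(-25, -10, 50))   # open terrain to the right
nav_angles = np.concatenate((left_path, right_path))

steer = np.clip(np.mean(nav_angles * 180 / np.pi), -15, 15)
print(steer)  # ~0 degrees: the average steers between the two clusters, straight into the obstacle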
Moreover, the rover currently drives around essentially at random: it just wanders the map and cannot systematically traverse all of it. That is another place where the program needs to be improved.
In the next note we will make the rover traverse the map reliably and collect all of the rock samples; only then will every part of this Mars rover project be complete.