Turtlebot + ROS: MPC Trajectory Tracking in the Stage Simulation Environment
A previous post in this series, 無人車系統(十一):軌跡跟蹤模型預測控制(MPC)原理與python實現【40行代碼】, introduced MPC for trajectory tracking of autonomous vehicles: using a Udacity example as the starting point, it explained the principle of MPC and the vehicle kinematic model in detail, and gave a Python implementation. The Udacity simulator, however, only offers a few fixed environments and is far less flexible than Stage or Gazebo in ROS. This post takes a Turtlebot trajectory-tracking task as the running example and shows how to implement trajectory-tracking control for a mobile robot in the ROS Stage simulation environment.
1. Simulation Environment Setup
1.1 Changing the Stage map
Environment: Ubuntu 18.04 + ROS Melodic
- First, install turtlebot on ROS Melodic by following "install turtlebot on ubuntu 18.04 + ros melodic".
- Then, follow the ROS wiki tutorial "turtlebot_stage/Tutorials/indigo/Customizing the Stage Simulator" to learn how to use your own map in the Stage simulation and how to add or remove obstacles. Three file types matter when configuring a Stage simulation: ".png", ".world", and ".yaml". The ".png" file is the map image, the ".world" file configures the whole environment (map, robot, obstacles, etc.), and the ".yaml" file configures the map. So, to replace the original turtlebot_stage map with your own, you only need to substitute your own ".png" and ".yaml" files for the originals.
For example:
1.2 Obtaining the trajectory to track
1.2.1 Collecting waypoints
Click the "+" button in the rviz toolbar, select "PublishPoint" in the dialog that appears, and click OK to close it.
A "Publish Point" button then appears in the toolbar. Each time you press it, you can click an arbitrary position in the RVIZ view, and that position (x, y) in the "/map" frame is published on the topic named "/clicked_point".
The selected waypoints can be recorded one by one with a small program and saved as "wps.txt".
The contents of wps.txt are as follows:
```
2.775404453277587891e+00 1.849611759185791016e+00
3.828148126602172852e+00 1.734118461608886719e+00
5.651257514953613281e+00 1.688370704650878906e+00
7.475209712982177734e+00 1.680246791839599609e+00
9.300391197204589844e+00 1.697043418884277344e+00
1.103435707092285156e+01 1.695110073089599609e+00
1.266803169250488281e+01 1.93492584228515625e+00
1.315559005737304688e+01 3.573270320892333984e+00
1.319308471679687500e+01 5.110162258148193359e+00
1.324398708343505859e+01 6.808045387268066406e+00
1.320587062835693359e+01 8.812158584594726562e+00
1.317775154113769531e+01 1.039849281311035156e+01
1.308286705017089844e+01 1.215956592559814453e+01
1.269116973876953125e+01 1.304349098205566406e+01
1.132855510711669922e+01 1.330544090270996094e+01
1.001482582092285156e+01 1.338727951049804688e+01
8.556406974792480469e+00 1.343830871582031250e+01
6.953704357147216797e+00 1.346025466918945312e+01
5.448281288146972656e+00 1.340069961547851562e+01
3.750601768493652344e+00 1.338894081115722656e+01
2.197587871551513672e+00 1.309178924560546875e+01
1.648979549407958984e+00 1.113714027404785156e+01
1.638838768005371094e+00 8.914575576782226562e+00
1.582886219024658203e+00 6.895450115203857422e+00
1.609868049621582031e+00 5.023091793060302734e+00
1.697061538696289062e+00 2.892292022705078125e+00
2.775404453277587891e+00 1.849611759185791016e+00
```
1.2.2 Fitting the waypoints to obtain the path for the robot to track
```python
import numpy as np
from scipy.interpolate import interp1d

wps = np.loadtxt("wps.txt")
x = wps[:,0]
y = wps[:,1]
t = np.linspace(0, 1, num=len(x))
f1 = interp1d(t, x, kind='cubic')
f2 = interp1d(t, y, kind='cubic')
newt = np.linspace(0, 1, 100)
newx = f1(newt)
newy = f2(newt)

import matplotlib.pyplot as plt
%matplotlib inline

plt.scatter(x, y)
plt.plot(newx, newy)
plt.show()
```
2. The MPC Trajectory-Tracking Problem for the Turtlebot
Unlike a car-like vehicle, the Turtlebot turns and moves forward by differential drive of its left and right wheels. The Turtlebot package in ROS exposes two control inputs: the linear velocity $v$ and the angular velocity $w$. The kinematic model of the Turtlebot is:
$$\dot{x}=v\cos(\theta),\qquad \dot{y}=v\sin(\theta),\qquad \dot{\theta}=w$$
First, we discretize this continuous differential model into a difference model (with time step $d_t$):
$$\begin{aligned}
x_{k+1}&=x_k+v_k\cos(\theta_k)\,d_t \\
y_{k+1}&=y_k+v_k\sin(\theta_k)\,d_t \\
\theta_{k+1}&=\theta_k+w_k\,d_t \\
\text{cte}_{k+1}&=\text{cte}_k+v_k\sin(\theta_k)\,d_t \\
\text{epsi}_{k+1}&=\text{epsi}_k+w_k\,d_t
\end{aligned} \tag{2}$$
Here $\text{cte}$ (cross-track error) and $\text{epsi}$ (orientation error) are computed as described in 無人車系統(十一):軌跡跟蹤模型預測控制(MPC)原理與python實現【40行代碼】.
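As a sanity check, one forward step of difference model (2) can be sketched directly in Python (the function name `step` and the $d_t=0.1$ value are illustrative, not from the original program):

```python
import numpy as np

def step(state, v, w, dt=0.1):
    """One Euler step of model (2); state = (x, y, theta, cte, epsi)."""
    x, y, theta, cte, epsi = state
    return (x + v * np.cos(theta) * dt,
            y + v * np.sin(theta) * dt,
            theta + w * dt,
            cte + v * np.sin(theta) * dt,
            epsi + w * dt)

# drive straight along +x at 0.5 m/s for 10 steps of 0.1 s:
# x advances by about 0.5 m while y, theta, cte, epsi stay zero
s = (0.0, 0.0, 0.0, 0.0, 0.0)
for _ in range(10):
    s = step(s, v=0.5, w=0.0)
```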
For an MPC controller with prediction horizon $N$, the following objective function can be designed:
$$\begin{aligned}
\min\;\mathcal{J}=&\sum_{k=1}^{N}\left(\omega_{\text{cte}}\|\text{cte}_k\|^2+\omega_{\text{epsi}}\|\text{epsi}_k\|^2\right)\\
&+\sum_{k=0}^{N-1}\left(\omega_{w}\|w_k\|^2+\omega_{v2}\|v_k\|^2+\omega_{v1}\|v_k-v_{\text{ref}}\|^2\right)\\
&+\sum_{k=0}^{N-2}\left(\omega_{\text{rate}_w}\|w_{k+1}-w_k\|^2+\omega_{\text{rate}_v}\|v_{k+1}-v_k\|^2\right)
\end{aligned} \tag{4}$$
Note: a prediction horizon of $N$ steps means rolling the prediction forward $N$ steps from the initial state $s_0$, producing $N$ predicted states $s_n,\ n=1,2,\dots,N$. That is why the state terms in the first line of (4) are summed from $k=1$ to $N$. In the implementation, however, it is simpler and more uniform to include $s_0$ in the objective as well, summing from $k=0$ to $N$ (see m.sk = RangeSet(0, N) in the Python code). Even so, keep in mind that in MPC the initial state is already fixed and is not a decision variable; the optimizer only shapes the future states, so setting $s_0=\mathbf{0}$ is fine.
subject to the dynamic model constraints:
$$\begin{array}{lll}
\text{s.t.} & x_{k+1}=x_k+v_k\cos(\theta_k)\,d_t, & k=0,1,\dots,N-1\\
& y_{k+1}=y_k+v_k\sin(\theta_k)\,d_t, & k=0,1,\dots,N-1\\
& \theta_{k+1}=\theta_k+w_k\,d_t, & k=0,1,\dots,N-1\\
& \text{cte}_{k+1}=f(x_k)-y_k+v_k\sin(\theta_k)\,d_t, & k=0,1,\dots,N-1\\
& \text{epsi}_{k+1}=\arctan(f'(x_k))-\theta_k+w_k\,d_t, & k=0,1,\dots,N-1
\end{array} \tag{5}$$
and the actuator constraints:
$$\begin{array}{ll}
v_k\in[v_{\text{min}},v_{\text{max}}], & k=0,1,\dots,N-1\\
w_k\in[w_{\text{min}},w_{\text{max}}], & k=0,1,\dots,N-1
\end{array} \tag{6}$$
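Before solving, the initial state $s_0=[0,0,0,\text{cte},\text{epsi}]$ is built in the robot frame by fitting a cubic $f(x)$ to the nearby waypoints, as the full program in the next section does. A minimal sketch of that step (the name `initial_state` is illustrative):

```python
import numpy as np

def initial_state(cwps_robot):
    """Fit a cubic y = f(x) to waypoints given in the robot frame and
    evaluate cte and epsi at the robot origin (x = 0, y = 0, theta = 0)."""
    coeffs = np.polyfit(cwps_robot[:, 0], cwps_robot[:, 1], 3)  # highest power first
    cte = np.polyval(coeffs, 0.0)   # f(0) - y with y = 0 in the robot frame
    epsi = np.arctan(coeffs[2])     # arctan(f'(0)); f'(0) is the linear coefficient
    return np.array([0.0, 0.0, 0.0, cte, epsi])

# waypoints on the line y = x + 1: cte = 1, epsi = arctan(1) = pi/4
pts = np.column_stack([np.linspace(0.0, 2.0, 5),
                       np.linspace(0.0, 2.0, 5) + 1.0])
s0 = initial_state(pts)
```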
3. Program and Results
The parameters used in this simulation are: $\omega_{\text{cte}}=\omega_{\text{epsi}}=1000$, $\omega_{v1}=100$, $\omega_{w}=\omega_{v2}=10$, $\omega_{\text{rate}_v}=\omega_{\text{rate}_w}=1$, $v_{\text{min}}=-0.01$, $v_{\text{max}}=2.0$, $w_{\text{min}}=-1.5$, $w_{\text{max}}=1.5$, and a prediction horizon of $N=19$.
```python
import rospy
import copy
import tf
import numpy as np
from scipy import spatial
from geometry_msgs.msg import PointStamped
from geometry_msgs.msg import PoseStamped
from nav_msgs.msg import Odometry
from nav_msgs.msg import Path
from geometry_msgs.msg import Twist
from pyomo.environ import *
from pyomo.dae import *
from scipy.interpolate import interp1d
import matplotlib
import matplotlib.pyplot as plt
%matplotlib inline

# set up matplotlib
is_ipython = 'inline' in matplotlib.get_backend()
if is_ipython:
    from IPython import display

plt.ion()

# fit the recorded waypoints and build a KD-tree for nearest-waypoint lookup
wps = np.loadtxt("wps.txt")
x = wps[:,0]
y = wps[:,1]
t = np.linspace(0, 1, num=len(x))
f1 = interp1d(t, x, kind='cubic')
f2 = interp1d(t, y, kind='cubic')
newt = np.linspace(0, 1, 100)
nwps = np.zeros((100, 2))
nwps[:,0] = f1(newt)
nwps[:,1] = f2(newt)
wpstree = spatial.KDTree(nwps)

def getcwps(rp):
    """Return the 5 waypoints ahead of the robot position rp (wrapping around)."""
    _, nindex = wpstree.query(rp)
    cwps = np.zeros((5, 2))
    for i in range(5):
        cwps[i] = nwps[(nindex + i) % len(nwps)]
    return cwps

def cubic_fun(coeffs, x):
    return coeffs[0]*x**3 + coeffs[1]*x**2 + coeffs[2]*x + coeffs[3]

def plot_durations(cwps, prex, prey):
    plt.figure(2)
    plt.clf()
    plt.plot(cwps[:,0], cwps[:,1])
    plt.plot(prex, prey)
    plt.scatter(x, y)
    if is_ipython:
        display.clear_output(wait=True)
        display.display(plt.gcf())

N = 19   # forward prediction steps
ns = 5   # number of states: x, y, psi, cte, epsi
na = 2   # number of actuators: linear velocity v, angular velocity w

class MPC(object):
    def __init__(self):
        m = ConcreteModel()
        m.sk = RangeSet(0, N)
        m.uk = RangeSet(0, N-1)
        m.uk1 = RangeSet(0, N-2)
        m.wg = Param(RangeSet(0, 3), initialize={0:1., 1:10., 2:100., 3:1000.}, mutable=True)
        m.dt = Param(initialize=0.1, mutable=True)
        m.ref_v = Param(initialize=0.5, mutable=True)
        m.ref_cte = Param(initialize=0.0, mutable=True)
        m.ref_epsi = Param(initialize=0.0, mutable=True)
        m.s0 = Param(RangeSet(0, ns-1), initialize={0:0., 1:0., 2:0., 3:0., 4:0.}, mutable=True)
        m.coeffs = Param(RangeSet(0, 3),
                         initialize={0:-0.000458316, 1:0.00734257, 2:0.0538795, 3:0.080728},
                         mutable=True)
        m.s = Var(RangeSet(0, ns-1), m.sk)
        m.f = Var(m.sk)
        m.psides = Var(m.sk)
        m.uv = Var(m.uk, bounds=(-0.01, 2.0))
        m.uw = Var(m.uk, bounds=(-1.5, 1.5))
        # state indices -- 0: x, 1: y, 2: psi, 3: cte, 4: epsi
        m.s0_update = Constraint(RangeSet(0, ns-1), rule=lambda m, i: m.s[i,0] == m.s0[i])
        m.x_update = Constraint(m.sk, rule=lambda m, k:
            m.s[0,k+1] == m.s[0,k] + m.uv[k]*cos(m.s[2,k])*m.dt if k < N-1 else Constraint.Skip)
        m.y_update = Constraint(m.sk, rule=lambda m, k:
            m.s[1,k+1] == m.s[1,k] + m.uv[k]*sin(m.s[2,k])*m.dt if k < N-1 else Constraint.Skip)
        m.psi_update = Constraint(m.sk, rule=lambda m, k:
            m.s[2,k+1] == m.s[2,k] + m.uw[k]*m.dt if k < N-1 else Constraint.Skip)
        m.f_update = Constraint(m.sk, rule=lambda m, k:
            m.f[k] == m.coeffs[0]*m.s[0,k]**3 + m.coeffs[1]*m.s[0,k]**2
                      + m.coeffs[2]*m.s[0,k] + m.coeffs[3])
        m.psides_update = Constraint(m.sk, rule=lambda m, k:
            m.psides[k] == atan(3*m.coeffs[0]*m.s[0,k]**2 + 2*m.coeffs[1]*m.s[0,k] + m.coeffs[2]))
        m.cte_update = Constraint(m.sk, rule=lambda m, k:
            m.s[3,k+1] == (m.f[k] - m.s[1,k] + m.uv[k]*sin(m.s[2,k])*m.dt)
            if k < N-1 else Constraint.Skip)
        m.epsi_update = Constraint(m.sk, rule=lambda m, k:
            m.s[4,k+1] == m.psides[k] - m.s[2,k] + m.uw[k]*m.dt
            if k < N-1 else Constraint.Skip)
        # objective terms, weighted as in Section 3
        m.cteobj = m.wg[3]*sum((m.s[3,k] - m.ref_cte)**2 for k in m.sk)
        m.epsiobj = m.wg[3]*sum((m.s[4,k] - m.ref_epsi)**2 for k in m.sk)
        m.vobj = m.wg[2]*sum((m.uv[k] - 0.5)**2 for k in m.uk)
        m.uvobj = m.wg[1]*sum(m.uv[k]**2 for k in m.uk)
        m.uwobj = m.wg[1]*sum(m.uw[k]**2 for k in m.uk)
        m.sudobj = m.wg[0]*sum((m.uv[k+1] - m.uv[k])**2 for k in m.uk1)
        m.suaobj = m.wg[0]*sum((m.uw[k+1] - m.uw[k])**2 for k in m.uk1)
        m.obj = Objective(expr=m.cteobj + m.epsiobj + m.vobj + m.uvobj
                               + m.uwobj + m.sudobj + m.suaobj, sense=minimize)
        self.iN = m  # .create_instance()

    def Solve(self, state, coeffs):
        self.iN.s0.reconstruct({0:state[0], 1:state[1], 2:state[2], 3:state[3], 4:state[4]})
        self.iN.coeffs.reconstruct({0:coeffs[0], 1:coeffs[1], 2:coeffs[2], 3:coeffs[3]})
        self.iN.f_update.reconstruct()
        self.iN.s0_update.reconstruct()
        self.iN.psides_update.reconstruct()
        SolverFactory('ipopt').solve(self.iN)
        x_pred_vals = [self.iN.s[0,k]() for k in self.iN.sk]
        y_pred_vals = [self.iN.s[1,k]() for k in self.iN.sk]
        pre_path = np.zeros((N+1, 2))  # sk has N+1 points (0..N)
        pre_path[:,0] = np.array(x_pred_vals)
        pre_path[:,1] = np.array(y_pred_vals)
        v = self.iN.uv[0]()
        w = self.iN.uw[0]()
        return pre_path, v, w

class Turtlebot_core():
    def __init__(self):
        rospy.init_node("Turtlebot_core", anonymous=True)
        self.listener = tf.TransformListener()
        rospy.Subscriber("/odom", Odometry, self.odomCallback)
        self.pub_refpath = rospy.Publisher("/ref_path", Path, queue_size=1)
        self.pub_prepath = rospy.Publisher("/pre_path", Path, queue_size=1)
        self.pub_cmd = rospy.Publisher("/cmd_vel_mux/input/teleop", Twist, queue_size=1)
        self.rp = np.zeros(3)
        self.crv = 0.0
        self.crw = 0.0
        self.mpc = MPC()
        rate = rospy.Rate(10)  # 10 Hz
        while not rospy.is_shutdown():
            self.getrobotpose()
            cwps = getcwps(self.rp[0:2])
            # predict the pose 0.1 s ahead to compensate control latency
            px = self.rp[0] + self.crv*np.cos(self.rp[2])*0.1
            py = self.rp[1] + self.crv*np.sin(self.rp[2])*0.1  # crv (linear vel), not crw
            psi = self.rp[2] + self.crw*0.1
            self.rp[0] = px
            self.rp[1] = py
            self.rp[2] = psi
            # transform the nearby waypoints into the robot frame
            cwps_robot = np.zeros((len(cwps), 2))
            for i in range(len(cwps)):
                dx = cwps[i,0] - px
                dy = cwps[i,1] - py
                cwps_robot[i,0] = dx*np.cos(psi) + dy*np.sin(psi)
                cwps_robot[i,1] = dy*np.cos(psi) - dx*np.sin(psi)
            coeffs = np.polyfit(cwps_robot[:,0], cwps_robot[:,1], 3)
            cte = cubic_fun(coeffs, 0)
            f_prime_x = coeffs[2]
            epsi = np.arctan(f_prime_x)
            s0 = np.array([0.0, 0.0, 0.0, cte, epsi])
            pre_path, v, w = self.mpc.Solve(s0, coeffs)
            self.pub_ref_path(cwps_robot)
            self.pub_pre_path(pre_path)
            self.pub_Command(v, w)
            print(v, w)
            # plot_durations(cwps, x_pred_vals, y_pred_vals)
            rate.sleep()
        rospy.spin()

    def getrobotpose(self):
        try:
            (trans, rot) = self.listener.lookupTransform('/map', '/base_link', rospy.Time(0))
        except (tf.LookupException, tf.ConnectivityException, tf.ExtrapolationException):
            return
        self.rp[0] = trans[0]
        self.rp[1] = trans[1]
        r, p, y = tf.transformations.euler_from_quaternion(rot)
        self.rp[2] = y

    def odomCallback(self, data):
        self.crv = data.twist.twist.linear.x
        self.crw = data.twist.twist.angular.z

    def pub_ref_path(self, ref_path):
        msg_ref_path = Path()
        msg_ref_path.header.stamp = rospy.Time.now()
        msg_ref_path.header.frame_id = "base_link"
        for i in range(len(ref_path)):
            pose = PoseStamped()
            pose.pose.position.x = ref_path[i,0]
            pose.pose.position.y = ref_path[i,1]
            msg_ref_path.poses.append(copy.deepcopy(pose))
        self.pub_refpath.publish(msg_ref_path)

    def pub_pre_path(self, pre_path):
        msg_pre_path = Path()
        msg_pre_path.header.stamp = rospy.Time.now()
        msg_pre_path.header.frame_id = "base_link"
        for i in range(len(pre_path)):
            pose = PoseStamped()
            pose.pose.position.x = pre_path[i,0]
            pose.pose.position.y = pre_path[i,1]
            msg_pre_path.poses.append(copy.deepcopy(pose))
        self.pub_prepath.publish(msg_pre_path)

    def pub_Command(self, v, w):
        twist = Twist()
        twist.linear.x = v
        twist.angular.z = w
        self.pub_cmd.publish(twist)

if __name__ == "__main__":
    turtlebot_core = Turtlebot_core()
```
The result is shown in the figure below (the green line is the predicted trajectory; the red line is the trajectory the turtlebot is to track):
4. Closing Remarks
The performance of MPC-based trajectory tracking depends heavily on the chosen objective function and its weights. The final parameters in this post were obtained by repeated trial and error; with them, the turtlebot is able to track the reference trajectory in the loop-corridor environment.
by Toby 2020-2-2
May the epidemic end soon!