D455: How to stream depth video and IMU data at the same time (dual-pipeline approach vs. callback approach)
Contents
- Dual-pipeline approach
- Callback approach
【D455】【python】How to get color_stream / depth_stream / accel_stream / gyro_stream at the same time?
MartyG-RealSense commented:
Hi @Dontla When streaming depth, color and IMU at the same time, there is sometimes a problem where the RGB stream becomes No Frames Received or more rarely, the IMU frames stop being received. To deal with this issue, the recommended solutions are to either use two separate pipelines (IMU alone on one pipeline and depth / color on the other pipeline), or to use callbacks.
For the separate pipeline approach, I recommend the Python script in the link below…
#5628 (comment)
For the callback approach in Python, the link below is a good reference.
#5417
Dual-pipeline approach
```python
# -*- coding: utf-8 -*-
"""
@File    : D455_double_pipeline.py
@Time    : 2020/12/19 16:07
@Author  : Dontla
@Email   : sxana@qq.com
@Software: PyCharm
"""
## License: Apache 2.0. See LICENSE file in root directory.
## Parts of this code are
## Copyright(c) 2015-2017 Intel Corporation. All Rights Reserved.

##################################################
##        configurable realsense viewer         ##
##################################################

import pyrealsense2 as rs
import numpy as np
import cv2
import time

# NOTE: it appears that imu, rgb and depth cannot all run on a single pipeline
# simultaneously. Any two of those 3 are fine, but all three cause a timeout
# on wait_for_frames().

device_id = None  # e.g. "923322071108": serial number of device to use, or None for default
enable_imu = True
enable_rgb = True
enable_depth = True
# TODO: enable_pose
# TODO: enable_ir_stereo

# Configure streams: IMU on its own pipeline, depth/color on a second one
if enable_imu:
    imu_pipeline = rs.pipeline()
    imu_config = rs.config()
    if device_id is not None:
        imu_config.enable_device(device_id)
    imu_config.enable_stream(rs.stream.accel, rs.format.motion_xyz32f, 63)   # acceleration
    imu_config.enable_stream(rs.stream.gyro, rs.format.motion_xyz32f, 200)   # gyroscope
    imu_profile = imu_pipeline.start(imu_config)

if enable_depth or enable_rgb:
    pipeline = rs.pipeline()
    config = rs.config()
    # if we are provided with a specific device, then enable it
    if device_id is not None:
        config.enable_device(device_id)
    if enable_depth:
        config.enable_stream(rs.stream.depth, 848, 480, rs.format.z16, 60)   # depth
    if enable_rgb:
        config.enable_stream(rs.stream.color, 424, 240, rs.format.bgr8, 60)  # rgb
    # Start streaming
    profile = pipeline.start(config)

    # Getting the depth sensor's depth scale (see rs-align example for explanation)
    if enable_depth:
        depth_sensor = profile.get_device().first_depth_sensor()
        depth_scale = depth_sensor.get_depth_scale()
        print("Depth Scale is: ", depth_scale)

    if enable_depth:
        # Create an align object.
        # rs.align allows us to perform alignment of depth frames to other frames.
        # The "align_to" is the stream type to which we plan to align depth frames.
        align_to = rs.stream.color
        align = rs.align(align_to)

try:
    frame_count = 0
    start_time = time.time()
    frame_time = start_time
    while True:
        last_time = frame_time
        frame_time = time.time() - start_time
        frame_count += 1

        # get the frames
        if enable_rgb or enable_depth:
            frames = pipeline.wait_for_frames(200 if (frame_count > 1) else 10000)  # wait 10 seconds for first frame
        if enable_imu:
            imu_frames = imu_pipeline.wait_for_frames(200 if (frame_count > 1) else 10000)

        if enable_rgb or enable_depth:
            # Align the depth frame to the color frame
            aligned_frames = align.process(frames) if enable_depth and enable_rgb else None
            depth_frame = aligned_frames.get_depth_frame() if aligned_frames is not None else frames.get_depth_frame()
            color_frame = aligned_frames.get_color_frame() if aligned_frames is not None else frames.get_color_frame()

            # Convert images to numpy arrays
            depth_image = np.asanyarray(depth_frame.get_data()) if enable_depth else None
            color_image = np.asanyarray(color_frame.get_data()) if enable_rgb else None

            # Apply colormap on depth image (image must be converted to 8-bit per pixel first)
            depth_colormap = cv2.applyColorMap(
                cv2.convertScaleAbs(depth_image, alpha=0.03),
                cv2.COLORMAP_JET) if enable_depth else None

            # Stack both images horizontally
            images = None
            if enable_rgb:
                images = np.hstack((color_image, depth_colormap)) if enable_depth else color_image
            elif enable_depth:
                images = depth_colormap

            # Show images
            cv2.namedWindow('RealSense', cv2.WINDOW_AUTOSIZE)
            if images is not None:
                cv2.imshow('RealSense', images)

        if enable_imu:
            accel_frame = imu_frames.first_or_default(rs.stream.accel, rs.format.motion_xyz32f)
            gyro_frame = imu_frames.first_or_default(rs.stream.gyro, rs.format.motion_xyz32f)
            print("imu frame {} in {} seconds: \n\taccel = {}, \n\tgyro = {}".format(
                str(frame_count),
                str(frame_time - last_time),
                str(accel_frame.as_motion_frame().get_motion_data()),
                str(gyro_frame.as_motion_frame().get_motion_data())))

        # Press esc or 'q' to close the image window
        key = cv2.waitKey(1)
        if key & 0xFF == ord('q') or key == 27:
            cv2.destroyAllWindows()
            break
finally:
    # Stop streaming on both pipelines
    if enable_rgb or enable_depth:
        pipeline.stop()
    if enable_imu:
        imu_pipeline.stop()
```

This works well and runs correctly.
Callback approach
This code does not run successfully; I have not yet figured out how to fix it.
```python
# -*- coding: utf-8 -*-
"""
@File    : D455_callback.py
@Time    : 2020/12/25 11:30
@Author  : Dontla
@Email   : sxana@qq.com
@Software: PyCharm
"""
import pyrealsense2 as rs

pipeline = rs.pipeline()

def test_callback(fs):
    global frameset
    frameset = fs
    pipeline.stop()

profile = pipeline.start(test_callback)
# print(profile)  # <pyrealsense2.pyrealsense2.pipeline_profile object at 0x000001FF99DE3490>
profile.get_streams()
# print(stream)  # [<pyrealsense2.video_stream_profile: 1(0) 848x480 @ 30fps 1>, <pyrealsense2.video_stream_profile: 2(0) 1280x720 @ 30fps 5>, <pyrealsense2.stream_profile: 5(0) @ 200fps 16>, <pyrealsense2.stream_profile: 6(0) @ 63fps 16>]
frameset
```

Running it fails:

```
D:\20191031_tensorflow_yolov3\python\python.exe C:/Users/Administrator/Desktop/test_D455/D455_callback.py
Traceback (most recent call last):
  File "C:/Users/Administrator/Desktop/test_D455/D455_callback.py", line 28, in <module>
    frameset
NameError: name 'frameset' is not defined

Process finished with exit code 1
```

Summary

The NameError happens because `frameset` is assigned only inside `test_callback`, which librealsense invokes on its own background thread; the main thread evaluates `frameset` before the first frameset has been delivered. Calling `pipeline.stop()` from inside the callback is also questionable, since it tears down the pipeline from within its own frame-delivery thread.
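One way to avoid this race is to synchronize the main thread with the callback explicitly, so the main thread blocks until the backend thread has actually delivered a frameset. The sketch below is my own fix attempt, not code from the linked issues: the `LatestFrames` class name is made up, and it assumes `pipeline.start()` accepts any Python callable as a frame callback (the threading logic itself is standard library only and can be tested without a camera).

```python
import threading

class LatestFrames:
    """Thread-safe holder for the most recent frameset delivered by the
    librealsense backend thread via pipeline.start(callback)."""

    def __init__(self):
        self._lock = threading.Lock()
        self._event = threading.Event()
        self._frames = None

    def __call__(self, frames):
        # Invoked by librealsense on its own thread for every new frameset.
        with self._lock:
            self._frames = frames
        self._event.set()  # wake up anyone blocked in wait()

    def wait(self, timeout=10.0):
        # Block the calling thread until at least one frameset has arrived,
        # then return the most recent one.
        if not self._event.wait(timeout):
            raise RuntimeError("no frames received within {} s".format(timeout))
        with self._lock:
            return self._frames

# Usage with a RealSense device (requires hardware, so shown as comments):
#   import pyrealsense2 as rs
#   holder = LatestFrames()
#   pipeline = rs.pipeline()
#   pipeline.start(holder)     # librealsense calls holder(frames) per frameset
#   frames = holder.wait()     # safe: blocks until the first frameset arrives
#   ...                        # process frames here, on the main thread
#   pipeline.stop()            # stop from the main thread, not the callback
```

The key design point is that the pipeline is stopped from the main thread after the data has been retrieved, instead of from inside the callback as in the failing script above.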