
[Release] v1.0 : new local image server, README, etc. for g1(29DoF)

main
silencht, 1 year ago
commit ab923c670c
  1. README.md (99)
  2. README_zh-CN.md (96)
  3. hardware/head_stereo_camera_mount.STEP (16829)
  4. hardware/left_wrist_D405_camera_mount.STEP (4169)
  5. hardware/right_wrist_D405_camera_mount.STEP (14824)
  6. hardware/wrist_ring_mount.STEP (14519)
  7. teleop/image_server/image_client.py (48)
  8. teleop/image_server/image_server.py (75)
  9. teleop/robot_control/robot_hand_unitree.py (12)
  10. teleop/teleop_hand_and_arm.py (53)

99
README.md

@@ -9,13 +9,14 @@
# 📺 Video Demo
<p align="center">
<a href="https://www.youtube.com/watch?v=pNjr2f_XHoo">
<a href="https://www.youtube.com/watch?v=pNjr2f_XHoo" target="_blank">
<img src="https://img.youtube.com/vi/pNjr2f_XHoo/maxresdefault.jpg" alt="Watch the video" style="width: 35%;">
</a>
</p>
# 0. 📖 Introduction
This repository implements teleoperation of the **Unitree humanoid robot** using **Apple Vision Pro**.
@@ -25,28 +26,36 @@ Here are the robots that will be supported,
<tr>
<th style="text-align: center;"> &#129302; Robot </th>
<th style="text-align: center;"> &#9898; Status </th>
<th style="text-align: center;"> &#128221; Remarks </th>
</tr>
<tr>
<td style="text-align: center;"> G1 (29DoF) + Dex3-1 </td>
<td style="text-align: center;"> &#9989; Completed </td>
<td style="text-align: center;"> </td>
</tr>
<tr>
<td style="text-align: center;"> G1 (23DoF) </td>
<td style="text-align: center;"> &#9201; In Progress </td>
<td style="text-align: center;"> </td>
</tr>
<tr>
<td style="text-align: center;"> H1 (Arm 4DoF) </td>
<td style="text-align: center;"> &#9201; In Progress </td>
<td style="text-align: center;"> <a href="https://github.com/unitreerobotics/avp_teleoperate/tree/h1" target="_blank">Refer to this branch's ik temporarily</a> </td>
</tr>
<tr>
<td style="text-align: center;"> H1_2 (Arm 7DoF) + Inspire </td>
<td style="text-align: center;"> &#9201; In Progress </td>
<td style="text-align: center;"> &#9989; Completed </td>
<td style="text-align: center;"> <a href="https://github.com/unitreerobotics/avp_teleoperate/tree/master" target="_blank">Refer to this branch temporarily</a> </td>
</tr>
<tr>
<td style="text-align: center;"> ··· </td>
<td style="text-align: center;"> ··· </td>
<td style="text-align: center;"> ··· </td>
</tr>
</table>
Here are the required devices and the wiring diagram,
<p align="center">
@@ -85,7 +94,7 @@ In the Ubuntu system's `~/.bashrc` file, the default configuration is: `PS1='${d
> - `$` shows the current shell is Bash (for non-root users).
> - `pip install meshcat` is the command `unitree` wants to execute on `Host`.
>
> You can refer to [Harley Hahn's Guide to Unix and Linux](https://www.harley.com/unix-book/book/chapters/04.html#H) and the [Conda User Guide](https://docs.conda.io/projects/conda/en/latest/user-guide/getting-started.html) to learn more.
> You can refer to [Harley Hahn's Guide to Unix and Linux](https://www.harley.com/unix-book/book/chapters/04.html#H) and [Conda User Guide](https://docs.conda.io/projects/conda/en/latest/user-guide/getting-started.html) to learn more.
## 1.2 🕹️ unitree_sdk2_python
@@ -98,7 +107,7 @@ In the Ubuntu system's `~/.bashrc` file, the default configuration is: `PS1='${d
# 2. 🛠️ TeleVision and Apple Vision Pro configuration
# 2. ️ TeleVision and Apple Vision Pro configuration
## 2.1 📥 basic
@@ -316,7 +325,87 @@ avp_teleoperate/
# 5. 🙏 Acknowledgement
# 5. 🛠️ Hardware
## 5.1 📋 List
| Item | Quantity | Link | Remarks |
| :--------------------------: | :------: | :----------------------------------------------------------: | :---------------------------------------------------------: |
| **Unitree Robot G1** | 1 | https://www.unitree.com/g1 | With development computing unit |
| **Apple Vision Pro** | 1 | https://www.apple.com/apple-vision-pro/ | |
| **Router** | 1 | | |
| **User PC** | 1 | | Recommended graphics card: RTX 4080 or above |
| **Head Stereo Camera** | 1 | [For reference only] http://e.tb.cn/h.TaZxgkpfWkNCakg?tk=KKz03Kyu04u | For head |
| **Head Camera Mount** | 1 | https://github.com/unitreerobotics/avp_teleoperate/blob/g1/hardware/head_stereo_camera_mount.STEP | For mounting head stereo camera, FOV 130° |
| Intel RealSense D405 | 2 | https://www.intelrealsense.com/depth-camera-d405/ | For wrist |
| Wrist Ring Mount | 2 | https://github.com/unitreerobotics/avp_teleoperate/blob/g1/hardware/wrist_ring_mount.STEP | Used with wrist camera mount |
| Left Wrist Camera Mount | 1 | https://github.com/unitreerobotics/avp_teleoperate/blob/g1/hardware/left_wrist_D405_camera_mount.STEP | For mounting left wrist RealSense D405 camera |
| Right Wrist Camera Mount | 1 | https://github.com/unitreerobotics/avp_teleoperate/blob/g1/hardware/right_wrist_D405_camera_mount.STEP | For mounting right wrist RealSense D405 camera |
| M3 hex nuts | 4 | [For reference only] https://a.co/d/1opqtOr | For wrist fastener |
| M3x12 screws | 4 | [For reference only] https://amzn.asia/d/aU9NHSf | For wrist fastener |
| M3x6 screws | 4 | [For reference only] https://amzn.asia/d/0nEz5dJ | For wrist fastener |
| **M4x14 screws** | 2 | [For reference only] https://amzn.asia/d/cfta55x | For head fastener |
| **M2x4 self-tapping screws** | 4 | [For reference only] https://amzn.asia/d/1msRa5B | For head fastener |
> Note: The bolded items are essential equipment for teleoperation tasks, while the other items are optional equipment for recording [datasets](https://huggingface.co/unitreerobotics).
## 5.2 🔨 Installation diagram
<table>
<tr>
<th style="text-align: center;">Item</th>
<th style="text-align: center;" colspan="2">Simulation</th>
<th style="text-align: center;" colspan="2">Real</th>
</tr>
<tr>
<td style="text-align: center;">Head</td>
<td style="text-align: center;">
<figure>
<img src="./img/head_camera_mount.png" alt="head" style="width: 100%;">
<figcaption>Head Mount</figcaption>
</figure>
</td>
<td style="text-align: center;">
<figure>
<img src="./img/head_camera_mount_install.png" alt="head" style="width: 80%;">
<figcaption>Side View of Assembly</figcaption>
</figure>
</td>
<td style="text-align: center;" colspan="2">
<figure>
<img src="./img/real_head.jpg" alt="head" style="width: 20%;">
<figcaption>Front View of Assembly</figcaption>
</figure>
</td>
</tr>
<tr>
<td style="text-align: center;">Wrist</td>
<td style="text-align: center;" colspan="2">
<figure>
<img src="./img/wrist_and_ring_mount.png" alt="wrist" style="width: 100%;">
<figcaption>Wrist Ring and Camera Mount</figcaption>
</figure>
</td>
<td style="text-align: center;">
<figure>
<img src="./img/real_left_hand.jpg" alt="wrist" style="width: 50%;">
<figcaption>Left Hand Assembly</figcaption>
</figure>
</td>
<td style="text-align: center;">
<figure>
<img src="./img/real_right_hand.jpg" alt="wrist" style="width: 50%;">
<figcaption>Right Hand Assembly</figcaption>
</figure>
</td>
</tr>
</table>
> Note: The wrist ring mount should align with the seam of the robot's wrist, as shown by the red circle in the image.
# 6. 🙏 Acknowledgement
This code builds upon the following open-source codebases. Please visit the URLs to see the respective LICENSES:

96
README_zh-CN.md

@@ -9,11 +9,12 @@
# 📺 视频演示
<p align="center">
<a href="https://www.bilibili.com/video/BV1SW421X7kA">
<a href="https://www.bilibili.com/video/BV1SW421X7kA" target="_blank">
<img src="./img/video_cover.jpg" alt="Watch the video" style="width: 35%;">
</a>
</p>
# 0. 📖 介绍
该仓库实现了使用 **Apple Vision Pro** 对 **宇树(Unitree)人形机器人** 的遥操作控制。
@@ -24,26 +25,32 @@
<tr>
<th style="text-align: center;"> &#129302; 机器人 </th>
<th style="text-align: center;"> &#9898; 状态 </th>
<th style="text-align: center;"> &#128221; 备注 </th>
</tr>
<tr>
<td style="text-align: center;"> G1 (29自由度) + Dex3-1 </td>
<td style="text-align: center;"> &#9989; 完成 </td>
<th style="text-align: center;"> </th>
</tr>
<tr>
<td style="text-align: center;"> G1 (23自由度) </td>
<td style="text-align: center;"> &#9201; 进行中 </td>
<th style="text-align: center;"> </th>
</tr>
<tr>
<td style="text-align: center;"> H1 (手臂4自由度) </td>
<td style="text-align: center;"> &#9201; 进行中 </td>
<th style="text-align: center;"> <a href="https://github.com/unitreerobotics/avp_teleoperate/tree/h1" target="_blank">可参考该临时分支下ik解算</a> </th>
</tr>
<tr>
<td style="text-align: center;"> H1_2 (手臂7自由度) + Inspire </td>
<td style="text-align: center;"> &#9201; 进行中 </td>
<td style="text-align: center;"> &#9989; 完成 </td>
<th style="text-align: center;"> <a href="https://github.com/unitreerobotics/avp_teleoperate/tree/master" target="_blank">可参考该临时分支</a> </th>
</tr>
<tr>
<td style="text-align: center;"> ··· </td>
<td style="text-align: center;"> ··· </td>
<th style="text-align: center;"> ··· </th>
</tr>
</table>
@@ -78,7 +85,6 @@ unitree@Host:~$ conda activate tv
> 提醒:命令前面的所有标识符是为了提示:该命令应该在哪个设备和目录下执行。
>
> p.s. 在 Ubuntu 系统 `~/.bashrc` 文件中,默认配置: `PS1='${debian_chroot:+($debian_chroot)}\u@\h:\w\$ '`
> - 以`(tv) unitree@Host:~$ pip install meshcat` 命令为例:
>
>- `(tv)` 表示 shell 此时位于 conda 创建的 tv 环境中;
@@ -102,7 +108,7 @@ unitree@Host:~$ conda activate tv
# 2. 🛠️ TeleVision 和 Apple Vision Pro 配置
# 2. ️ TeleVision 和 Apple Vision Pro 配置
## 2.1 📥 基础
@@ -316,7 +322,87 @@ avp_teleoperate/
# 5. 🙏 鸣谢
# 5. 🛠️ 硬件
## 5.1 📋 清单
| 项目 | 数量 | 链接 | 备注 |
| :-----------------------: | :--: | :----------------------------------------------------------: | :----------------------------: |
| **宇树通用人形机器人 G1** | 1 | https://www.unitree.com/cn/g1 | 需选配开发计算单元版本 |
| **Apple Vision Pro** | 1 | https://www.apple.com.cn/apple-vision-pro/ | |
| **路由器** | 1 | | |
| **用户电脑** | 1 | | 推荐显卡性能在RTX 4080 以上 |
| **头部双目相机** | 1 | [仅供参考] http://e.tb.cn/h.TaZxgkpfWkNCakg?tk=KKz03Kyu04u | 用于机器人头部视野,视场角130° |
| **头部相机支架** | 1 | https://github.com/unitreerobotics/avp_teleoperate/blob/g1/hardware/head_stereo_camera_mount.STEP | 用于装配头部相机 |
| 英特尔 RealSense D405相机 | 2 | https://www.intelrealsense.com/depth-camera-d405/ | 用于腕部灵巧操作视野 |
| 腕部相机环形支架 | 2 | https://github.com/unitreerobotics/avp_teleoperate/blob/g1/hardware/wrist_ring_mount.STEP | 与腕部相机支架搭配使用 |
| 左腕相机支架 | 1 | https://github.com/unitreerobotics/avp_teleoperate/blob/g1/hardware/left_wrist_D405_camera_mount.STEP | 用于装配左腕D405相机 |
| 右腕相机支架 | 1 | https://github.com/unitreerobotics/avp_teleoperate/blob/g1/hardware/right_wrist_D405_camera_mount.STEP | 用于装配右腕D405相机 |
| M3-1 六角螺母 | 4 | [仅供参考] https://a.co/d/gQaLtHD | 用于腕部紧固件 |
| M3x12 螺钉 | 4 | [仅供参考] https://amzn.asia/d/aU9NHSf | 用于腕部紧固件 |
| M3x6 螺钉 | 4 | [仅供参考] https://amzn.asia/d/0nEz5dJ | 用于腕部紧固件 |
| **M4x14 螺钉** | 2 | [仅供参考] https://amzn.asia/d/cfta55x | 用于头部紧固件 |
| **M2x4 自攻螺钉** | 4 | [仅供参考] https://amzn.asia/d/1msRa5B | 用于头部紧固件 |
> 注意:加粗项目是进行遥操作任务时的必需设备,其余项目是录制[数据集](https://huggingface.co/unitreerobotics)时的可选设备。
## 5.2 🔨 安装示意图
<table>
<tr>
<th style="text-align: center;">项目</th>
<th style="text-align: center;" colspan="2">仿真</th>
<th style="text-align: center;" colspan="2">实物</th>
</tr>
<tr>
<td style="text-align: center;">头部</td>
<td style="text-align: center;">
<figure>
<img src="./img/head_camera_mount.png" alt="head" style="width: 100%;">
<figcaption>头部支架</figcaption>
</figure>
</td>
<td style="text-align: center;">
<figure>
<img src="./img/head_camera_mount_install.png" alt="head" style="width: 80%;">
<figcaption>装配侧视</figcaption>
</figure>
</td>
<td style="text-align: center;" colspan="2">
<figure>
<img src="./img/real_head.jpg" alt="head" style="width: 20%;">
<figcaption>装配正视</figcaption>
</figure>
</td>
</tr>
<tr>
<td style="text-align: center;">腕部</td>
<td style="text-align: center;" colspan="2">
<figure>
<img src="./img/wrist_and_ring_mount.png" alt="wrist" style="width: 100%;">
<figcaption>腕圈及相机支架</figcaption>
</figure>
</td>
<td style="text-align: center;">
<figure>
<img src="./img/real_left_hand.jpg" alt="wrist" style="width: 50%;">
<figcaption>装配左手</figcaption>
</figure>
</td>
<td style="text-align: center;">
<figure>
<img src="./img/real_right_hand.jpg" alt="wrist" style="width: 50%;">
<figcaption>装配右手</figcaption>
</figure>
</td>
</tr>
</table>
> 注意:如图中红圈所示,腕圈支架与机器人手腕接缝对齐。
# 6. 🙏 鸣谢
该代码基于以下开源代码库构建。请访问以下链接查看各自的许可证:

16829
hardware/head_stereo_camera_mount.STEP
File diff suppressed because it is too large

4169
hardware/left_wrist_D405_camera_mount.STEP
File diff suppressed because it is too large

14824
hardware/right_wrist_D405_camera_mount.STEP
File diff suppressed because it is too large

14519
hardware/wrist_ring_mount.STEP
File diff suppressed because it is too large

48
teleop/image_server/image_client.py

@@ -7,11 +7,16 @@ from collections import deque
from multiprocessing import shared_memory

class ImageClient:
    def __init__(self, img_shape = None, img_shm_name = None, image_show = False, server_address = "192.168.123.164", port = 5555, Unit_Test = False):
    def __init__(self, tv_img_shape = None, tv_img_shm_name = None, wrist_img_shape = None, wrist_img_shm_name = None,
                 image_show = False, server_address = "192.168.123.164", port = 5555, Unit_Test = False):
        """
        img_shape: User's expected camera resolution shape (H, W, C). It should match the output of the image service terminal.
        tv_img_shape: User's expected head camera resolution shape (H, W, C). It should match the output of the image service terminal.
        img_shm_name: Shared memory is used to easily transfer images across processes to the Vuer.
        tv_img_shm_name: Shared memory is used to easily transfer images across processes to the Vuer.
        wrist_img_shape: User's expected wrist camera resolution shape (H, W, C). It should maintain the same shape as tv_img_shape.
        wrist_img_shm_name: Shared memory is used to easily transfer images.
        image_show: Whether to display received images in real time.
@@ -27,11 +32,20 @@ class ImageClient:
        self._server_address = server_address
        self._port = port
        self.enable_shm = False
        if img_shape is not None and img_shm_name is not None:
            self._image_shm = shared_memory.SharedMemory(name=img_shm_name)
            self._img_array = np.ndarray(img_shape, dtype = np.uint8, buffer = self._image_shm.buf)
            self.enable_shm = True
        self.tv_img_shape = tv_img_shape
        self.wrist_img_shape = wrist_img_shape
        self.tv_enable_shm = False
        if self.tv_img_shape is not None and tv_img_shm_name is not None:
            self.tv_image_shm = shared_memory.SharedMemory(name=tv_img_shm_name)
            self.tv_img_array = np.ndarray(tv_img_shape, dtype = np.uint8, buffer = self.tv_image_shm.buf)
            self.tv_enable_shm = True
        self.wrist_enable_shm = False
        if self.wrist_img_shape is not None and wrist_img_shm_name is not None:
            self.wrist_image_shm = shared_memory.SharedMemory(name=wrist_img_shm_name)
            self.wrist_img_array = np.ndarray(wrist_img_shape, dtype = np.uint8, buffer = self.wrist_image_shm.buf)
            self.wrist_enable_shm = True
        # Performance evaluation parameters
        self._enable_performance_eval = Unit_Test
@@ -142,11 +156,15 @@ class ImageClient:
                    print("[Image Client] Failed to decode image.")
                    continue
                if self.enable_shm:
                    np.copyto(self._img_array, np.array(current_image))
                if self.tv_enable_shm:
                    np.copyto(self.tv_img_array, np.array(current_image[:, :self.tv_img_shape[1]]))
                if self.wrist_enable_shm:
                    np.copyto(self.wrist_img_array, np.array(current_image[:, self.wrist_img_shape[1]:]))
                if self._image_show:
                    resized_image = cv2.resize(current_image, (current_image.shape[1] // 2, current_image.shape[0] // 2))
                    height, width = current_image.shape[:2]
                    resized_image = cv2.resize(current_image, (width // 2, height // 2))
                    cv2.imshow('Image Client Stream', resized_image)
                    if cv2.waitKey(1) & 0xFF == ord('q'):
                        self.running = False
@@ -164,10 +182,10 @@ class ImageClient:
if __name__ == "__main__":
    # example1
    # img_shape = (720, 1280, 3)
    # img_shm = shared_memory.SharedMemory(create=True, size=np.prod(img_shape) * np.uint8().itemsize)
    # img_array = np.ndarray(img_shape, dtype=np.uint8, buffer=img_shm.buf)
    # img_client = ImageClient(img_shape = img_shape, img_shm_name = img_shm.name)
    # tv_img_shape = (480, 1280, 3)
    # img_shm = shared_memory.SharedMemory(create=True, size=np.prod(tv_img_shape) * np.uint8().itemsize)
    # img_array = np.ndarray(tv_img_shape, dtype=np.uint8, buffer=img_shm.buf)
    # img_client = ImageClient(tv_img_shape = tv_img_shape, tv_img_shm_name = img_shm.name)
    # img_client.receive_process()
    # example2
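The commented examples above hinge on one pattern: a named `SharedMemory` block with a `np.ndarray` view on each side, so one process writes frames that another reads without copying through a socket. A standalone sketch of that handoff (shapes mirror the diff; the producer/consumer roles are illustrative, and both views live in one process here):

```python
import numpy as np
from multiprocessing import shared_memory

tv_img_shape = (480, 1280, 3)  # head ("tv") frame shape used in the diff

# Producer: allocate a named block sized for one uint8 frame.
shm = shared_memory.SharedMemory(create=True, size=int(np.prod(tv_img_shape)))
producer_view = np.ndarray(tv_img_shape, dtype=np.uint8, buffer=shm.buf)

# Consumer (what ImageClient does with tv_img_shm_name): attach by name.
consumer_shm = shared_memory.SharedMemory(name=shm.name)
consumer_view = np.ndarray(tv_img_shape, dtype=np.uint8, buffer=consumer_shm.buf)

producer_view[:] = 7                # writes are visible through the other view
frame_copy = consumer_view.copy()   # snapshot, like tv_img_array.copy()

consumer_shm.close()
shm.close()
shm.unlink()
```

The `np.copyto(self.tv_img_array, ...)` call in `receive_process` is the producer half of exactly this pattern.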

75
teleop/image_server/image_server.py

@@ -3,9 +3,64 @@ import zmq
import time
import struct
from collections import deque
import numpy as np
import pyrealsense2 as rs

class RealSenseCamera(object):
    def __init__(self, img_shape, fps, serial_number = None, enable_depth = False) -> None:
        self.img_shape = img_shape
        self.fps = fps
        self.serial_number = serial_number
        self.enable_depth = enable_depth
        align_to = rs.stream.color
        self.align = rs.align(align_to)
        self.init_realsense()

    def init_realsense(self):
        self.pipeline = rs.pipeline()
        config = rs.config()
        if self.serial_number is not None:
            config.enable_device(self.serial_number)
        config.enable_stream(rs.stream.color, self.img_shape[1] // 2, self.img_shape[0], rs.format.bgr8, self.fps)
        if self.enable_depth:
            config.enable_stream(rs.stream.depth, self.img_shape[1] // 2, self.img_shape[0], rs.format.z16, self.fps)
        profile = self.pipeline.start(config)
        self._device = profile.get_device()
        if self._device is None:
            print('pipe_profile.get_device() is None.')
        if self.enable_depth:
            assert self._device is not None
            depth_sensor = self._device.first_depth_sensor()
            self.g_depth_scale = depth_sensor.get_depth_scale()
        self.intrinsics = profile.get_stream(rs.stream.color).as_video_stream_profile().get_intrinsics()

    def get_frame(self):
        frames = self.pipeline.wait_for_frames()
        aligned_frames = self.align.process(frames)
        color_frame = aligned_frames.get_color_frame()
        if self.enable_depth:
            depth_frame = aligned_frames.get_depth_frame()
        if not color_frame:
            return None, None
        color_image = np.asanyarray(color_frame.get_data())
        # color_image = cv2.cvtColor(color_image, cv2.COLOR_BGR2RGB)
        depth_image = np.asanyarray(depth_frame.get_data()) if self.enable_depth else None
        return color_image, depth_image

class ImageServer:
    def __init__(self, img_shape = (480, 640 * 2, 3), fps = 30, port = 5555, Unit_Test = False):
    def __init__(self, img_shape = (480, 640 * 2, 3), fps = 30, enable_wrist = False, port = 5555, Unit_Test = False):
        """
        img_shape: User's expected camera resolution shape (H, W, C).
@@ -23,6 +78,7 @@ class ImageServer:
        """
        self.img_shape = img_shape
        self.fps = fps
        self.enable_wrist = enable_wrist
        self.port = port
        self.enable_performance_eval = Unit_Test
@@ -33,6 +89,11 @@ class ImageServer:
        self.cap.set(cv2.CAP_PROP_FRAME_HEIGHT, self.img_shape[0])
        self.cap.set(cv2.CAP_PROP_FPS, self.fps)
        if self.enable_wrist:
            # initiate realsense camera
            self.left_cam = RealSenseCamera(img_shape = self.img_shape, fps = self.fps, serial_number = "218622271789") # left wrist camera
            self.right_cam = RealSenseCamera(img_shape = self.img_shape, fps = self.fps, serial_number = "218622278527") # right wrist camera
        # set ZeroMQ context and socket
        self.context = zmq.Context()
        self.socket = self.context.socket(zmq.PUB)
@@ -74,12 +135,18 @@ class ImageServer:
    def send_process(self):
        try:
            while True:
                ret, frame = self.cap.read()
                ret, head_color = self.cap.read()
                if not ret:
                    print("[Image Server] Frame read error.")
                    break
                ret, buffer = cv2.imencode('.jpg', frame)
                if self.enable_wrist:
                    left_wrist_color, left_wrist_depth = self.left_cam.get_frame()
                    right_wrist_color, right_wrist_depth = self.right_cam.get_frame()
                    # Concatenate images along the width
                    head_color = cv2.hconcat([head_color, left_wrist_color, right_wrist_color])
                ret, buffer = cv2.imencode('.jpg', head_color)
                if not ret:
                    print("[Image Server] Frame imencode failed.")
                    continue
@@ -108,5 +175,5 @@ class ImageServer:
if __name__ == "__main__":
    # server = ImageServer(img_shape = (720, 640 * 2, 3), Unit_Test = True) # test
    server = ImageServer(img_shape = (720, 1280 * 2, 3), Unit_Test = False) # deployment
    server = ImageServer(img_shape = (640, 480 * 2, 3), enable_wrist = True, Unit_Test = False) # deployment
    server.send_process()
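With `enable_wrist`, the server packs three frames into one JPEG by concatenating along the width, and `ImageClient` recovers the head and wrist halves by slicing columns. A numpy-only sketch of that pack/split, with `np.concatenate` standing in for `cv2.hconcat` and widths following the diff's 480x1280 head frame plus two 640-px wrist frames:

```python
import numpy as np

H, W, C = 480, 1280, 3  # head ("tv") frame: two 640-px eye views side by side
head = np.full((H, W, C), 1, dtype=np.uint8)
left_wrist = np.full((H, W // 2, C), 2, dtype=np.uint8)
right_wrist = np.full((H, W // 2, C), 3, dtype=np.uint8)

# Server side: cv2.hconcat([...]) is concatenation along the width axis.
packed = np.concatenate([head, left_wrist, right_wrist], axis=1)

# Client side, as in receive_process: the first tv_img_shape[1] columns are
# the head image, the remaining columns are the wrist pair.
tv_part = packed[:, :W]
wrist_part = packed[:, W:]
```

The split only works because `wrist_img_shape` keeps the same width as `tv_img_shape`, which is what the docstring's "should maintain the same shape" requirement guards.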

12
teleop/robot_control/robot_hand_unitree.py

@@ -223,10 +223,10 @@ if __name__ == "__main__":
    print(f"args:{args}\n")
    # image
    img_shape = (720, 1280, 3)
    img_shape = (480, 1280, 3)
    img_shm = shared_memory.SharedMemory(create=True, size=np.prod(img_shape) * np.uint8().itemsize)
    img_array = np.ndarray(img_shape, dtype=np.uint8, buffer=img_shm.buf)
    img_client = ImageClient(img_shape = img_shape, img_shm_name = img_shm.name)
    img_client = ImageClient(tv_img_shape = img_shape, tv_img_shm_name = img_shm.name)
    image_receive_thread = threading.Thread(target=img_client.receive_process, daemon=True)
    image_receive_thread.daemon = True
    image_receive_thread.start()
@@ -245,5 +245,9 @@ if __name__ == "__main__":
    user_input = input("Please enter the start signal (enter 's' to start the subsequent program):\n")
    if user_input.lower() == 's':
        while True:
            print(f"{dual_hand_state_array[1]}")
            time.sleep(0.1)
            head_rmat, left_wrist, right_wrist, left_hand, right_hand = tv_wrapper.get_data()
            # send hand skeleton data to hand_ctrl.control_process
            left_hand_array[:] = left_hand.flatten()
            right_hand_array[:] = right_hand.flatten()
            print(f"Dual hand state array: {list(dual_hand_state_array)}")
            time.sleep(0.01)

53
teleop/teleop_hand_and_arm.py

@@ -26,25 +26,38 @@ if __name__ == '__main__':
    parser.add_argument('--record', action = 'store_true', help = 'Save data or not')
    parser.add_argument('--no-record', dest = 'record', action = 'store_false', help = 'Do not save data')
    parser.set_defaults(record=False)
    parser.set_defaults(record = True)
    parser.add_argument('--binocular', action = 'store_true', help = 'Use binocular camera')
    parser.add_argument('--monocular', dest = 'binocular', action = 'store_false', help = 'Use monocular camera')
    parser.set_defaults(binocular = True)
    parser.add_argument('--wrist', action = 'store_true', help = 'Use wrist camera')
    parser.add_argument('--no-wrist', dest = 'wrist', action = 'store_false', help = 'Do not use wrist camera')
    parser.set_defaults(wrist = False)
    args = parser.parse_args()
    print(f"args:{args}\n")
    # image
    img_shape = (720, 2560, 3)
    img_shm = shared_memory.SharedMemory(create=True, size=np.prod(img_shape) * np.uint8().itemsize)
    img_array = np.ndarray(img_shape, dtype=np.uint8, buffer=img_shm.buf)
    img_client = ImageClient(img_shape = img_shape, img_shm_name = img_shm.name)
    tv_img_shape = (480, 640 * 2, 3)
    tv_img_shm = shared_memory.SharedMemory(create = True, size = np.prod(tv_img_shape) * np.uint8().itemsize)
    tv_img_array = np.ndarray(tv_img_shape, dtype = np.uint8, buffer = tv_img_shm.buf)
    if args.wrist:
        wrist_img_shape = tv_img_shape
        wrist_img_shm = shared_memory.SharedMemory(create = True, size = np.prod(wrist_img_shape) * np.uint8().itemsize)
        wrist_img_array = np.ndarray(wrist_img_shape, dtype = np.uint8, buffer = wrist_img_shm.buf)
        img_client = ImageClient(tv_img_shape = tv_img_shape, tv_img_shm_name = tv_img_shm.name,
                                 wrist_img_shape = wrist_img_shape, wrist_img_shm_name = wrist_img_shm.name)
    else:
        img_client = ImageClient(tv_img_shape = tv_img_shape, tv_img_shm_name = tv_img_shm.name)
    image_receive_thread = threading.Thread(target = img_client.receive_process, daemon = True)
    image_receive_thread.daemon = True
    image_receive_thread.start()
    # television and arm
    tv_wrapper = TeleVisionWrapper(args.binocular, img_shape, img_shm.name)
    tv_wrapper = TeleVisionWrapper(args.binocular, tv_img_shape, tv_img_shm.name)
    arm_ctrl = G1_29_ArmController()
    arm_ik = G1_29_ArmIK()
@@ -87,8 +100,8 @@ if __name__ == '__main__':
            # print(f"ik:\t{round(time_ik_end - time_ik_start, 6)}")
            arm_ctrl.ctrl_dual_arm(sol_q, sol_tauff)
            resized_image = cv2.resize(img_array, (img_shape[1] // 2, img_shape[0] // 2))
            cv2.imshow("record image", resized_image)
            tv_resized_image = cv2.resize(tv_img_array, (tv_img_shape[1] // 2, tv_img_shape[0] // 2))
            cv2.imshow("record image", tv_resized_image)
            key = cv2.waitKey(1) & 0xFF
            if key == ord('q'):
                running = False
@@ -112,8 +125,9 @@ if __name__ == '__main__':
                right_hand_state = dual_hand_state_array[-7:]
                left_hand_action = dual_hand_action_array[:7]
                right_hand_action = dual_hand_action_array[-7:]
                current_image = img_array.copy()
                current_tv_image = tv_img_array.copy()
                if args.wrist:
                    current_wrist_image = wrist_img_array.copy()
                left_arm_state = current_lr_arm_q[:7]
                right_arm_state = current_lr_arm_q[-7:]
                left_arm_action = sol_q[:7]
@@ -123,10 +137,16 @@ if __name__ == '__main__':
                colors = {}
                depths = {}
                if args.binocular:
                    colors[f"color_{0}"] = current_image[:, :img_shape[1]//2]
                    colors[f"color_{1}"] = current_image[:, img_shape[1]//2:]
                    colors[f"color_{0}"] = current_tv_image[:, :tv_img_shape[1]//2]
                    colors[f"color_{1}"] = current_tv_image[:, tv_img_shape[1]//2:]
                    if args.wrist:
                        colors[f"color_{2}"] = current_wrist_image[:, :wrist_img_shape[1]//2]
                        colors[f"color_{3}"] = current_wrist_image[:, wrist_img_shape[1]//2:]
                else:
                    colors[f"color_{0}"] = current_image
                    colors[f"color_{0}"] = current_tv_image
                    if args.wrist:
                        colors[f"color_{1}"] = current_wrist_image[:, :wrist_img_shape[1]//2]
                        colors[f"color_{2}"] = current_wrist_image[:, wrist_img_shape[1]//2:]
                states = {
                    "left_arm": {
                        "qpos": left_arm_state.tolist(),   # numpy.array -> list
@ -184,7 +204,10 @@ if __name__ == '__main__':
    except KeyboardInterrupt:
        print("KeyboardInterrupt, exiting program...")
    finally:
        img_shm.close()
        img_shm.unlink()
        tv_img_shm.unlink()
        tv_img_shm.close()
        if args.wrist:
            wrist_img_shm.unlink()
            wrist_img_shm.close()
        print("Finally, exiting program...")
        exit(0)
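The `finally` block above releases the shared-memory segments. For reference, the `multiprocessing.shared_memory` documentation has every process `close()` its own handle and exactly one owner `unlink()` the block, with `close()` conventionally coming first; a minimal sketch of that lifecycle:

```python
from multiprocessing import shared_memory

shm = shared_memory.SharedMemory(create=True, size=16)
name = shm.name
shm.buf[:5] = b"frame"

# Release this process's mapping, then remove the named block itself.
shm.close()
shm.unlink()

# Once unlinked, attaching by name raises FileNotFoundError.
try:
    shared_memory.SharedMemory(name=name)
    reattach_failed = False
except FileNotFoundError:
    reattach_failed = True
```

`unlink()` only removes the name; existing mappings in other processes stay valid until they call `close()`, which is why the order matters little in practice but close-then-unlink is the documented convention.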