Merge pull request #66 from unitreerobotics/newVuer

New vuer

main · silencht authored 9 months ago · committed by GitHub · commit c43facb043
GPG Key ID: B5690EEEBB952194 (no known key found for this signature in database)
36 changed files (number of changed lines in parentheses):

1. .gitmodules (6)
2. LICENSE (7)
3. README.md (456)
4. README_ja-JP.md (559)
5. README_zh-CN.md (432)
6. assets/inspire_hand/inspire_hand.yml (63)
7. assets/unitree_hand/unitree_dex3.yml (40)
8. requirements.txt (27)
9. teleop/image_server/image_client.py (20)
10. teleop/image_server/image_server.py (42)
11. teleop/open_television/constants.py (53)
12. teleop/open_television/television.py (178)
13. teleop/open_television/tv_wrapper.py (147)
14. teleop/robot_control/dex-retargeting (1)
15. teleop/robot_control/dex_retargeting/CITATION.cff (22)
16. teleop/robot_control/dex_retargeting/LICENSE (21)
17. teleop/robot_control/dex_retargeting/__init__.py (1)
18. teleop/robot_control/dex_retargeting/constants.py (85)
19. teleop/robot_control/dex_retargeting/kinematics_adaptor.py (102)
20. teleop/robot_control/dex_retargeting/optimizer.py (511)
21. teleop/robot_control/dex_retargeting/optimizer_utils.py (17)
22. teleop/robot_control/dex_retargeting/retargeting_config.py (251)
23. teleop/robot_control/dex_retargeting/robot_wrapper.py (93)
24. teleop/robot_control/dex_retargeting/seq_retarget.py (151)
25. teleop/robot_control/dex_retargeting/yourdfpy.py (2237)
26. teleop/robot_control/hand_retargeting.py (16)
27. teleop/robot_control/robot_arm.py (178)
28. teleop/robot_control/robot_arm_ik.py (32)
29. teleop/robot_control/robot_hand_inspire.py (29)
30. teleop/robot_control/robot_hand_unitree.py (124)
31. teleop/teleop_hand_and_arm.py (485)
32. teleop/televuer (1)
33. teleop/utils/episode_writer.py (44)
34. teleop/utils/mat_tool.py (14)
35. teleop/utils/rerun_visualizer.py (32)
36. teleop/utils/sim_state_topic.py (290)

.gitmodules (6)

[submodule "teleop/televuer"]
	path = teleop/televuer
	url = https://github.com/silencht/televuer
[submodule "teleop/robot_control/dex-retargeting"]
	path = teleop/robot_control/dex-retargeting
	url = https://github.com/silencht/dex-retargeting

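As an aside, `.gitmodules` entries are INI-style, so they can be inspected programmatically. A small illustration using Python's stdlib `configparser` (the content is copied from the section above; parsing it this way is an illustration, not something this repo does):

```python
# Parse .gitmodules-style INI content with the stdlib; content from above.
import configparser

GITMODULES = """
[submodule "teleop/televuer"]
  path = teleop/televuer
  url = https://github.com/silencht/televuer
[submodule "teleop/robot_control/dex-retargeting"]
  path = teleop/robot_control/dex-retargeting
  url = https://github.com/silencht/dex-retargeting
"""

parser = configparser.ConfigParser()
parser.read_string(GITMODULES)
# Collect the checkout path of each submodule, in file order.
paths = [parser[s]["path"] for s in parser.sections()]
assert paths == ["teleop/televuer", "teleop/robot_control/dex-retargeting"]
```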
LICENSE (7)

This code builds upon following open-source code-bases. Please visit the URLs to
5) https://github.com/casadi/casadi
6) https://github.com/meshcat-dev/meshcat-python
7) https://github.com/zeromq/pyzmq
8) https://github.com/Dingry/BunnyVisionPro
9) https://github.com/unitreerobotics/unitree_sdk2_python
------------------

README.md (456)

<div align="center">
<h1 align="center">xr_teleoperate</h1>
<a href="https://www.unitree.com/" target="_blank">
<img src="https://www.unitree.com/images/0079f8938336436e955ea3a98c4e1e59.svg" alt="Unitree LOGO" width="15%">
</a>
<p align="center">
<a> English </a> | <a href="README_zh-CN.md">中文</a> | <a href="README_ja-JP.md">日本語</a>
</p>
</div>
# 📺 Video Demo
# 🔖 Release Note
1. **Upgraded the Vuer library** to support more XR device modes. The project has been renamed from **`avp_teleoperate`** to **`xr_teleoperate`** to better reflect its broader scope — now supporting not only Apple Vision Pro but also Meta Quest 3 (with controllers) and PICO 4 Ultra Enterprise (with controllers).
2. **Modularized** parts of the codebase and introduced Git submodules (`git submodule`) for better structure and maintainability.
3. Added **headless**, **motion**, and **simulation** modes. The startup parameter configuration has been improved for ease of use (see Section 2.2). The **simulation** mode enables environment validation and hardware failure diagnostics.
4. Changed the default hand retarget algorithm from Vector to **DexPilot**, improving the precision and intuitiveness of fingertip pinching.
5. Various other improvements and refinements.
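The retargeting change in note 4 can be illustrated conceptually. Below is a toy sketch of a vector-style retargeting cost and a DexPilot-style pinch weighting. This is an assumption-laden simplification, not the `dex-retargeting` submodule's API; function names and the weight values are invented for illustration:

```python
# Conceptual sketch only — NOT the dex-retargeting API. Vector retargeting
# minimizes the error between robot fingertip vectors and scaled human ones;
# DexPilot-style weighting (simplified here) up-weights near-pinch fingertips.

def vec_cost(robot_vecs, human_vecs, scale=1.0):
    """Sum of squared errors between robot and scaled human fingertip vectors."""
    return sum(
        sum((r - scale * h) ** 2 for r, h in zip(rv, hv))
        for rv, hv in zip(robot_vecs, human_vecs)
    )

def dexpilot_weight(dist_m, thresh_m=0.03):
    """Toy weight: fingertip pairs closer than `thresh_m` get a larger weight,
    which is what improves pinch precision relative to plain vector matching."""
    return 4.0 if dist_m < thresh_m else 1.0

# A perfectly matched pose has zero cost:
human = [(0.0, 0.0, 0.10), (0.02, 0.0, 0.09)]
assert vec_cost(human, human) == 0.0
```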
# 0. 📖 Introduction
This repository implements **teleoperation** control of a **Unitree humanoid robot** using **XR (Extended Reality) devices** (such as Apple Vision Pro, PICO 4 Ultra Enterprise, or Meta Quest 3).
Here are the required devices and the wiring diagram:
<p align="center">
<a href="https://oss-global-cdn.unitree.com/static/3f75e91e41694ed28c29bcad22954d1d_5990x4050.png">
<img src="https://oss-global-cdn.unitree.com/static/3f75e91e41694ed28c29bcad22954d1d_5990x4050.png" alt="System Diagram" style="width: 100%;">
</a>
</p>
The currently supported devices in this repository:
<table>
<tr>
<th align="center">🤖 Robot</th>
<th align="center">⚪ Status</th>
</tr>
<tr>
<td align="center"><a href="https://www.unitree.com/cn/g1" target="_blank">G1 (29 DoF)</a></td>
<td align="center">✅ Complete</td>
</tr>
<tr>
<td align="center"><a href="https://www.unitree.com/cn/g1" target="_blank">G1 (23 DoF)</a></td>
<td align="center">✅ Complete</td>
</tr>
<tr>
<td align="center"><a href="https://www.unitree.com/cn/h1" target="_blank">H1 (4‑DoF arm)</a></td>
<td align="center">✅ Complete</td>
</tr>
<tr>
<td align="center"><a href="https://www.unitree.com/cn/h1" target="_blank">H1_2 (7‑DoF arm)</a></td>
<td align="center">✅ Complete</td>
</tr>
<tr>
<td align="center"><a href="https://www.unitree.com/cn/Dex1-1" target="_blank">Dex1‑1 gripper</a></td>
<td align="center">✅ Complete</td>
</tr>
<tr>
<td align="center"><a href="https://www.unitree.com/cn/Dex3-1" target="_blank">Dex3‑1 dexterous hand</a></td>
<td align="center">✅ Complete</td>
</tr>
<tr>
<td align="center"><a href="https://support.unitree.com/home/zh/G1_developer/inspire_dfx_dexterous_hand" target="_blank">Inspire dexterous hand</a></td>
<td align="center">✅ Complete</td>
</tr>
<tr>
<td align="center"> ··· </td>
<td align="center"> ··· </td>
</tr>
</table>
This is a network topology diagram, using the G1 robot as an example:
<p align="center">
<a href="https://oss-global-cdn.unitree.com/static/9871e3bac4c54140b0839c68baf48a4a_1872x929.png">
<img src="https://oss-global-cdn.unitree.com/static/9871e3bac4c54140b0839c68baf48a4a_1872x929.png" alt="Network Topology Diagram" style="width: 100%;">
</a>
</p>
# 1. 📦 Prerequisites
We tested our code on Ubuntu 20.04 and Ubuntu 22.04; other operating systems may require different configuration. This document primarily describes the **default mode**.
For more information, you can refer to the [Official Documentation](https://support.unitree.com/home/zh/Teleoperation) and [OpenTeleVision](https://github.com/OpenTeleVision/TeleVision).
## 1.1 📥 basic
```bash
# Create a conda environment
(base) unitree@Host:~$ conda create -n tv python=3.10 pinocchio=3.1.0 numpy=1.26.4 -c conda-forge
(base) unitree@Host:~$ conda activate tv
# Clone this repo
(tv) unitree@Host:~$ git clone https://github.com/unitreerobotics/xr_teleoperate.git
(tv) unitree@Host:~$ cd xr_teleoperate
# Shallow clone the submodules
(tv) unitree@Host:~/xr_teleoperate$ git submodule update --init --depth 1
# Install televuer submodule
(tv) unitree@Host:~/xr_teleoperate$ cd teleop/televuer
(tv) unitree@Host:~/xr_teleoperate/teleop/televuer$ pip install -e .
# Generate the certificate files required for televuer submodule
(tv) unitree@Host:~/xr_teleoperate/teleop/televuer$ openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout key.pem -out cert.pem
# Install dex-retargeting submodule
(tv) unitree@Host:~/xr_teleoperate/teleop/televuer$ cd ../robot_control/dex-retargeting/
(tv) unitree@Host:~/xr_teleoperate/teleop/robot_control/dex-retargeting$ pip install -e .
# Install additional dependencies required by this repo
(tv) unitree@Host:~/xr_teleoperate/teleop/robot_control/dex-retargeting$ cd ../../../
(tv) unitree@Host:~/xr_teleoperate$ pip install -r requirements.txt
```
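To sanity-check that the installation above succeeded, a quick, hypothetical helper (not part of this repo) can verify that the key packages are importable inside the `tv` environment; the package names below are taken from the install steps:

```python
# Hypothetical sanity check (not part of this repo): confirm the key packages
# from the install steps above can be found in the active environment.
import importlib.util

def missing_packages(names):
    """Return the subset of `names` that cannot be found as importable modules."""
    return [n for n in names if importlib.util.find_spec(n) is None]

if __name__ == "__main__":
    required = ["pinocchio", "numpy", "televuer", "dex_retargeting"]
    gaps = missing_packages(required)
    print("environment looks complete" if not gaps else f"missing: {gaps}")
```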
## 1.2 🕹️ unitree_sdk2_python
```bash
# Install the unitree_sdk2_python library, which handles communication with the robot
(tv) unitree@Host:~$ git clone https://github.com/unitreerobotics/unitree_sdk2_python.git
(tv) unitree@Host:~$ cd unitree_sdk2_python
(tv) unitree@Host:~/unitree_sdk2_python$ pip install -e .
```
> **Note 1**: The [unitree_dds_wrapper](https://github.com/unitreerobotics/unitree_dds_wrapper) in the original h1_2 branch was a temporary version. It has now been fully migrated to the official Python-based control and communication library: [unitree_sdk2_python](https://github.com/unitreerobotics/unitree_sdk2_python).
>
> **Note 2**: All identifiers in front of the command are meant for prompting: **Which device and directory the command should be executed on**.
>
> In the Ubuntu system's `~/.bashrc` file, the default configuration is: `PS1='${debian_chroot:+($debian_chroot)}\u@\h:\w\$ '`
>
> Taking the command `(tv) unitree@Host:~$ pip install meshcat` as an example:
>
> - `(tv)` Indicates the shell is in the conda environment named `tv`.
> - `unitree@Host:~` Shows the user `\u` `unitree` is logged into the device `\h` `Host`, with the current working directory `\w` as `$HOME`.
> - `$` shows the current shell is Bash (for non-root users).
> - `pip install meshcat` is the command `unitree` wants to execute on `Host`.
>
>
> You can refer to [Harley Hahn's Guide to Unix and Linux](https://www.harley.com/unix-book/book/chapters/04.html#H) and [Conda User Guide](https://docs.conda.io/projects/conda/en/latest/user-guide/getting-started.html) to learn more.
# 2. 💻 Simulation Deployment
## 2.1 📥 Environment Setup
First, install [unitree_sim_isaaclab](https://github.com/unitreerobotics/unitree_sim_isaaclab), following that repo's README.
Then launch the simulation with a G1 (29 DoF) + Dex3 hand configuration:
```bash
(base) unitree@Host:~$ conda activate unitree_sim_env
(unitree_sim_env) unitree@Host:~$ cd ~/unitree_sim_isaaclab
(unitree_sim_env) unitree@Host:~/unitree_sim_isaaclab$ python sim_main.py --device cpu --enable_cameras --task Isaac-PickPlace-Cylinder-G129-Dex3-Joint --enable_dex3_dds --robot_type g129
```
After the simulation starts, click once in the window to activate it. The terminal will show: `controller started, start main loop...`
Here is the simulation GUI:
<p align="center"> <a href="https://oss-global-cdn.unitree.com/static/bea51ef618d748368bf59c60f4969a65_1749x1090.png"> <img src="https://oss-global-cdn.unitree.com/static/bea51ef618d748368bf59c60f4969a65_1749x1090.png" alt="Simulation UI" style="width: 75%;"> </a> </p>
## 2.2 🚀 Launch
This program supports controlling either a physical robot or a simulated one from an XR device. Choose the mode with command-line arguments:
- **Basic control parameters**
| ⚙️ Parameter | 📜 Description | 🔘 Options | 📌 Default |
| :---------: | :-------------------------------------------: | :----------------------------------------------------------: | :-------: |
| `--xr-mode` | Choose XR input mode | `hand` (**hand tracking**)<br/>`controller` (**controller tracking**) | `hand` |
| `--arm` | Choose robot arm type (see 0. 📖 Introduction) | `G1_29`<br/>`G1_23`<br/>`H1_2`<br/>`H1` | `G1_29` |
| `--ee` | Choose end-effector (see 0. 📖 Introduction) | `dex1`<br/>`dex3`<br/>`inspire1` | none |
- **Mode flags**
| ⚙️ Flag | 📜 Description |
| :----------: | :----------------------------------------------------------: |
| `--record` | **Enable data recording**: after pressing **r** to start teleoperation, press **s** to start or stop saving an episode. This can be repeated. |
| `--motion` | **Enable motion control**: allows independent robot motion control during teleoperation. In hand mode, use the R3 remote to walk; in controller mode, use the controller joysticks. |
| `--headless` | Run without a GUI (for deployment on a headless PC2) |
| `--sim` | Enable **simulation mode** |
Assuming hand tracking with a G1 (29 DoF) + Dex3 in simulation, with recording enabled:
```bash
(tv) unitree@Host:~$ cd ~/xr_teleoperate/teleop/
(tv) unitree@Host:~/xr_teleoperate/teleop/$ python teleop_hand_and_arm.py --xr-mode=hand --arm=G1_29 --ee=dex3 --sim --record
# Simplified (defaults apply):
(tv) unitree@Host:~/xr_teleoperate/teleop/$ python teleop_hand_and_arm.py --ee=dex3 --sim --record
```
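The flag handling documented above can be sketched with `argparse`. The parser below only mirrors the documented options; the actual definitions in `teleop_hand_and_arm.py` may differ, so treat every name here as an assumption:

```python
# Illustrative parser mirroring the documented flags; the real definitions in
# teleop_hand_and_arm.py may differ (names here are assumptions).
import argparse

def build_parser():
    p = argparse.ArgumentParser(description="xr_teleoperate launcher (sketch)")
    p.add_argument("--xr-mode", choices=["hand", "controller"], default="hand")
    p.add_argument("--arm", choices=["G1_29", "G1_23", "H1_2", "H1"], default="G1_29")
    p.add_argument("--ee", choices=["dex1", "dex3", "inspire1"], default=None)
    p.add_argument("--record", action="store_true")    # enable data recording
    p.add_argument("--motion", action="store_true")    # independent motion control
    p.add_argument("--headless", action="store_true")  # no GUI
    p.add_argument("--sim", action="store_true")       # simulation mode
    return p

# The simplified command works because omitted flags fall back to defaults:
args = build_parser().parse_args(["--ee=dex3", "--sim", "--record"])
assert args.xr_mode == "hand" and args.arm == "G1_29"
```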
After the program starts, the terminal shows:
<p align="center"> <a href="https://oss-global-cdn.unitree.com/static/735464d237214f6c9edf8c7db9847a0a_1874x1275.png"> <img src="https://oss-global-cdn.unitree.com/static/735464d237214f6c9edf8c7db9847a0a_1874x1275.png" alt="Terminal Start Log" style="width: 75%;"> </a> </p>
Next steps:
1. Wear your XR headset (e.g. Apple Vision Pro, PICO 4, etc.)
2. Connect to the corresponding Wi‑Fi network
3. Open a browser (e.g. Safari or PICO Browser) and go to: `https://192.168.123.2:8012?ws=wss://192.168.123.2:8012`
> **Note 1**: This IP must match your **Host** IP (check it with `ifconfig`).
> **Note 2**: You may see a warning page. Click **Advanced**, then **Proceed to IP (unsafe)**.
<p align="center">
<a href="https://oss-global-cdn.unitree.com/static/cef18751ca6643b683bfbea35fed8e7c_1279x1002.png">
<img src="https://oss-global-cdn.unitree.com/static/cef18751ca6643b683bfbea35fed8e7c_1279x1002.png" alt="vuer_unsafe" style="width: 50%;">
</a>
</p>
4. On the Vuer web page, click **Virtual Reality** and allow all prompts to start the VR session.
<p align="center"> <a href="https://oss-global-cdn.unitree.com/static/fdeee4e5197f416290d8fa9ecc0b28e6_2480x1286.png"> <img src="https://oss-global-cdn.unitree.com/static/fdeee4e5197f416290d8fa9ecc0b28e6_2480x1286.png" alt="Vuer UI" style="width: 75%;"> </a> </p>
5. You'll see the robot's first-person view in the headset. The terminal prints connection info:
```bash
websocket is connected. id:dbb8537d-a58c-4c57-b49d-cbb91bd25b90
default socket worker is up, adding clientEvents
Uplink task running. id:dbb8537d-a58c-4c57-b49d-cbb91bd25b90
```
6. Align your arm to the **robot’s initial pose** to avoid sudden movements at start:
<p align="center"> <a href="https://oss-global-cdn.unitree.com/static/2522a83214744e7c8c425cc2679a84ec_670x867.png"> <img src="https://oss-global-cdn.unitree.com/static/2522a83214744e7c8c425cc2679a84ec_670x867.png" alt="Initial Pose" style="width: 25%;"> </a> </p>
7. Press **r** in the terminal to begin teleoperation. You can now control the robot arm and dexterous hand.
8. During teleoperation, press **s** to start recording; press **s** again to stop and save. This can be repeated as needed.
<p align="center"> <a href="https://oss-global-cdn.unitree.com/static/f5b9b03df89e45ed8601b9a91adab37a_2397x1107.png"> <img src="https://oss-global-cdn.unitree.com/static/f5b9b03df89e45ed8601b9a91adab37a_2397x1107.png" alt="Recording Process" style="width: 75%;"> </a> </p>
> **Note 1**: Recorded data is stored in `xr_teleoperate/teleop/utils/data` by default, with usage instructions at this repo: [unitree_IL_lerobot](https://github.com/unitreerobotics/unitree_IL_lerobot/tree/main?tab=readme-ov-file#data-collection-and-conversion).
> **Note 2**: Please pay attention to your disk space size during data recording.
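To make Note 2 concrete, here is a back-of-envelope estimate of how fast recorded episodes consume disk space. All numbers (camera count, FPS, JPEG size) are illustrative assumptions, not measurements from this repo:

```python
# Back-of-envelope storage estimate for recorded episodes.
# Every number here is an assumption for illustration only.
def episode_size_mb(cameras=3, fps=30, seconds=60, kb_per_jpeg=80):
    """Approximate image-data size of one recorded episode, in MB."""
    frames = cameras * fps * seconds
    return frames * kb_per_jpeg / 1024

# Three cameras, one minute, ~80 KB per JPEG ≈ 422 MB per episode,
# so a recording session fills a disk quickly.
size = episode_size_mb()
```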
## 2.3 🔚 Exit
Press **q** in the terminal (or “record image” window) to quit.
# 3. 🤖 Physical Deployment
Physical deployment steps are similar to simulation, with these key differences:
## 3.1 🖼️ Image Server
In the simulation environment, the image service is started automatically. For physical deployment, you need to start the image service manually, based on your specific camera hardware. The steps are as follows:
Copy `image_server.py` from the `xr_teleoperate/teleop/image_server` directory to the **Development Computing Unit PC2** of the Unitree robot (G1/H1/H1_2/etc.):
```bash
# p.s. You can transfer image_server.py to PC2 via scp, then use ssh to log in to PC2 remotely and execute it.
# Assuming the IP address of the Development Computing Unit PC2 is 192.168.123.164, the transfer process is as follows:
# Log in to PC2 via SSH and create the folder for the image server
(tv) unitree@Host:~$ ssh unitree@192.168.123.164 "mkdir -p ~/image_server"
# Copy the local image_server.py to the ~/image_server directory on PC2
(tv) unitree@Host:~$ scp ~/xr_teleoperate/teleop/image_server/image_server.py unitree@192.168.123.164:~/image_server/
```
Then execute the following command **on PC2**:
```bash
# p.s. Currently, this image transmission program supports two methods of reading images: OpenCV and the RealSense SDK. Please refer to the comments in the `ImageServer` class within `image_server.py` to configure your image transmission service according to your camera hardware.
# Now located in the Unitree robot PC2 terminal
unitree@PC2:~/image_server$ python image_server.py
# You can see the terminal output as follows:
```
After the image service has started, you can run `image_client.py` in a terminal **on the Host** to test whether the communication is successful:
```bash
(tv) unitree@Host:~/xr_teleoperate/teleop/image_server$ python image_client.py
```
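Conceptually, the server/client pair streams encoded camera frames over the network. A minimal stdlib sketch of length-prefixed frame framing is shown below; the actual `image_server.py`/`image_client.py` use ZeroMQ (pyzmq is listed in the LICENSE attributions), so this is only an assumption-laden illustration of the idea, not the repo's code:

```python
# Minimal length-prefixed framing, as used by many image streamers.
# The real image_server.py/image_client.py use ZeroMQ; this only shows the idea.
import struct

def pack_frame(jpeg_bytes: bytes) -> bytes:
    """Prefix a frame with its 4-byte big-endian length."""
    return struct.pack(">I", len(jpeg_bytes)) + jpeg_bytes

def unpack_frames(stream: bytes):
    """Split a concatenated byte stream back into individual frames."""
    frames, off = [], 0
    while off + 4 <= len(stream):
        (n,) = struct.unpack_from(">I", stream, off)
        frames.append(stream[off + 4 : off + 4 + n])
        off += 4 + n
    return frames

wire = pack_frame(b"frame-1") + pack_frame(b"frame-2")
assert unpack_frames(wire) == [b"frame-1", b"frame-2"]
```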
## 3.2 ✋ Inspire Hand Service (optional)
> **Note 1**: Skip this section if your configuration does not use the Inspire hand.
> **Note 2**: For the G1 robot with the [Inspire DFX hand](https://support.unitree.com/home/zh/G1_developer/inspire_dfx_dexterous_hand), see [issue #46](https://github.com/unitreerobotics/xr_teleoperate/issues/46).
> **Note 3**: For the [Inspire FTP hand](https://support.unitree.com/home/zh/G1_developer/inspire_ftp_dexterity_hand), see [issue #48](https://github.com/unitreerobotics/xr_teleoperate/issues/48).
You can refer to [Dexterous Hand Development](https://support.unitree.com/home/zh/H1_developer/Dexterous_hand) to configure the related environment and compile the control programs. First, download the dexterous hand control interface program from [this URL](https://oss-global-cdn.unitree.com/static/0a8335f7498548d28412c31ea047d4be.zip), then copy it to **PC2** of the Unitree robot.
unitree@PC2:~/h1_inspire_service/build$ ./h1_hand_example
If both hands open and close continuously, the setup succeeded. Once successful, close the `./h1_hand_example` program in Terminal 2.
## 3.3 🚀 Launch
> ![Warning](https://img.shields.io/badge/Warning-Important-red)
>
> 1. Everyone must keep a safe distance from the robot to prevent any potential danger!
>
> 2. Please make sure to read the [Official Documentation](https://support.unitree.com/home/zh/Teleoperation) at least once before running this program.
>
> 3. Without `--motion`, always make sure that the robot has entered [debug mode (L2+R2)](https://support.unitree.com/home/zh/H1_developer/Remote_control) to stop the motion control program; this avoids potential command conflicts.
> 4. To use motion mode (with `--motion`), ensure the robot is in control mode (via the [R3 remote](https://www.unitree.com/R3)).
> 5. In motion mode:
>    - Right controller **A** = exit teleoperation
>    - Both joysticks pressed = soft emergency stop (switch to damping mode)
>    - Left joystick = drive direction
>    - Right joystick = turning
>    - Maximum speed is limited in the code.
Same as simulation but follow the safety warnings above.
## 3.4 🔚 Exit
Press **q** in the terminal (or the "record image" window) to quit.
> ![Warning](https://img.shields.io/badge/Warning-Important-red)
>
> To avoid damaging the robot, it is recommended to position the robot's arms close to the initial pose before pressing **q** to exit.
>
> - In **Debug Mode**: After pressing the exit key, both arms will return to the robot's **initial pose** within 5 seconds, and then the control will end.
>
> - In **Motion Mode**: After pressing the exit key, both arms will return to the robot's **motion control pose** within 5 seconds, and then the control will end.
Same as simulation but follow the safety warnings above.
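The 5-second return to the initial (or motion control) pose described above amounts to interpolating from the current joint angles to a target pose. A toy joint-space sketch in pure Python; the repo's actual control loop, rate, and smoothing are not shown here and the numbers are assumptions:

```python
# Toy linear joint-space interpolation toward a target pose (illustration only;
# the real controller runs at its own fixed rate with its own smoothing).
def interpolate_pose(q_now, q_target, duration_s=5.0, rate_hz=100):
    """Yield joint vectors moving linearly from q_now to q_target."""
    steps = int(duration_s * rate_hz)
    for i in range(1, steps + 1):
        a = i / steps  # interpolation fraction, reaching 1.0 on the last step
        yield [qn + a * (qt - qn) for qn, qt in zip(q_now, q_target)]

# Two example joints returning to the zero (initial) pose:
trajectory = list(interpolate_pose([0.8, -0.3], [0.0, 0.0]))
assert trajectory[-1] == [0.0, 0.0]  # ends exactly at the target pose
```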
# 4. 🗺️ Codebase Overview
```
xr_teleoperate/
├── assets [Storage of robot URDF-related files]
├── hardware [3D‑printed hardware modules]
├── teleop
│ ├── image_server
│ │ ├── image_client.py [Used to receive image data from the robot image server]
│ │ ├── image_server.py [Capture images from cameras and send via network (Running on robot's Development Computing Unit PC2)]
│ │
│ ├── televuer
│ │ ├── src/televuer
│ │ ├── television.py [captures XR devices's head, wrist, hand/controller data]
│ │ ├── tv_wrapper.py [Post-processing of captured data]
│ │ ├── test
│ │ ├── _test_television.py [test for television.py]
│ │ ├── _test_tv_wrapper.py [test for tv_wrapper.py]
│ │
│ ├── robot_control
│ │ ├── src/dex-retargeting [Dexterous hand retargeting algorithm library]
│ │ ├── robot_arm_ik.py [Inverse kinematics of the arm]
│ │ ├── robot_arm.py [Control dual arm joints and lock the others]
│ │ ├── hand_retargeting.py [Dexterous hand retargeting algorithm library Wrapper]
│ │ ├── robot_hand_inspire.py [Control inspire hand joints]
│ │ ├── robot_hand_unitree.py [Control unitree hand joints]
│ │
│ ├── utils
│ │ ├── episode_writer.py [Used to record data for imitation learning]
│ │ ├── weighted_moving_filter.py [For filtering joint data]
│ │ ├── rerun_visualizer.py [For visualizing data during recording]
│ │
└── teleop_hand_and_arm.py [Startup execution code for teleoperation]
```
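For intuition about the `weighted_moving_filter.py` utility listed above, which filters streams of joint data, here is a generic weighted moving-average filter. This is a minimal sketch under assumed semantics (fixed weights applied newest-first over a sliding window), not the repo's actual implementation.

```python
import numpy as np

class WeightedMovingFilter:
    """Smooth a stream of joint-angle vectors with fixed weights.

    Sketch semantics: weights apply to the most recent samples,
    newest first, and must sum to 1.
    """
    def __init__(self, weights, dim):
        self._weights = np.asarray(weights, dtype=float)
        assert abs(self._weights.sum() - 1.0) < 1e-9
        self._buffer = []          # most recent sample is buffer[0]
        self._dim = dim

    def add(self, sample):
        sample = np.asarray(sample, dtype=float)
        assert sample.shape == (self._dim,)
        self._buffer.insert(0, sample)
        del self._buffer[len(self._weights):]   # keep the window size bounded

    @property
    def value(self):
        # Until the window fills, renormalize over the available samples
        w = self._weights[:len(self._buffer)]
        w = w / w.sum()
        return sum(wi * s for wi, s in zip(w, self._buffer))

f = WeightedMovingFilter(weights=[0.4, 0.3, 0.2, 0.1], dim=2)
for q in ([0.0, 0.0], [1.0, 1.0], [1.0, 1.0]):
    f.add(q)
print(f.value)   # weighted toward the newest samples
```

Heavier weights on recent samples keep latency low while still suppressing the jitter that raw retargeted joint data tends to have.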
# 5. 🛠️ Hardware
## 5.1 📋 Parts List
| Item | Quantity | Link(s) | Remarks |
| :--------------------------: | :------: | :----------------------------------------------------------: | :----------------------------------------------------------: |
| **humanoid robot** | 1 | https://www.unitree.com | With development computing unit |
| **XR device** | 1 | https://www.apple.com/apple-vision-pro/<br/>https://www.meta.com/quest/quest-3<br/>https://www.picoxr.com/products/pico4-ultra-enterprise | |
| **Router**                   |    1     |                                                              | Required in **default mode**; not needed in **wireless mode**. |
| **User PC** | 1 | | In **simulation mode**, please use the [officially recommended](https://docs.isaacsim.omniverse.nvidia.com/4.5.0/installation/requirements.html) hardware resources for deployment. |
| **Head stereo camera** | 1 | [For reference only] http://e.tb.cn/h.TaZxgkpfWkNCakg?tk=KKz03Kyu04u | For robot head view |
| **Head camera mount** | 1 | https://github.com/unitreerobotics/xr_teleoperate/blob/g1/hardware/head_stereo_camera_mount.STEP | For mounting head stereo camera |
| Intel RealSense D405 camera | 2 | https://www.intelrealsense.com/depth-camera-d405/ | For wrist |
| Wrist ring mount | 2 | https://github.com/unitreerobotics/xr_teleoperate/blob/g1/hardware/wrist_ring_mount.STEP | Used with wrist camera mount |
| Left wrist D405 mount | 1 | https://github.com/unitreerobotics/xr_teleoperate/blob/g1/hardware/left_wrist_D405_camera_mount.STEP | For mounting left wrist RealSense D405 camera |
| Right wrist D405 mount | 1 | https://github.com/unitreerobotics/xr_teleoperate/blob/g1/hardware/right_wrist_D405_camera_mount.STEP | For mounting right wrist RealSense D405 camera |
| M3-1 hex nuts | 4 | [For reference only] https://a.co/d/1opqtOr | For wrist fastener |
| M3x12 screws | 4 | [For reference only] https://amzn.asia/d/aU9NHSf | For wrist fastener |
| M3x6 screws | 4 | [For reference only] https://amzn.asia/d/0nEz5dJ | For wrist fastener |
| **M4x14 screws** | 2 | [For reference only] https://amzn.asia/d/cfta55x | For head fastener |
| **M2x4 self-tapping screws** | 4 | [For reference only] https://amzn.asia/d/1msRa5B | For head fastener |
> Note: The bolded items are essential equipment for teleoperation tasks, while the other items are optional equipment for recording [datasets](https://huggingface.co/unitreerobotics).
## 5.2 🔨 Mounting Diagrams
<table>
<tr>
This code builds upon the following open-source codebases. Please visit the URLs to see their respective licenses:
1. https://github.com/OpenTeleVision/TeleVision
2. https://github.com/dexsuite/dex-retargeting
3. https://github.com/vuer-ai/vuer
4. https://github.com/stack-of-tasks/pinocchio
5. https://github.com/casadi/casadi
6. https://github.com/meshcat-dev/meshcat-python
7. https://github.com/zeromq/pyzmq
8. https://github.com/Dingry/BunnyVisionPro
9. https://github.com/unitreerobotics/unitree_sdk2_python

README_ja-JP.md

<div align="center">
<h1 align="center">xr_teleoperate</h1>
<a href="https://www.unitree.com/" target="_blank">
<img src="https://www.unitree.com/images/0079f8938336436e955ea3a98c4e1e59.svg" alt="Unitree LOGO" width="15%">
</a>
<p align="center">
<a href="README.md"> English </a> | <a href="README_zh-CN.md">中文</a> | <a>日本語</a>
</p>
</div>
# 📺 Video Demos
<p align="center">
<table>
@ -15,465 +17,396 @@
<a href="https://www.youtube.com/watch?v=OTWHXTu09wE" target="_blank">
<img src="https://img.youtube.com/vi/OTWHXTu09wE/maxresdefault.jpg" alt="Video 1" width="75%">
</a>
<p><b> G1 (29DoF) + Dex3-1 </b></p>
</td>
<td align="center" width="50%">
<a href="https://www.youtube.com/watch?v=pNjr2f_XHoo" target="_blank">
<img src="https://img.youtube.com/vi/pNjr2f_XHoo/maxresdefault.jpg" alt="Video 2" width="75%">
</a>
<p><b> H1_2 (Arm 7DoF) </b></p>
</td>
</tr>
</table>
</p>
# 🔖 What's New
1. **Upgraded the Vuer library** to support more XR device modes, and renamed the project from **`avp_teleoperate`** to **`xr_teleoperate`** accordingly. In addition to the Apple Vision Pro, **Meta Quest 3 (with controller support)** and **PICO 4 Ultra Enterprise (with controller support)** are now supported.
2. **Modularized** several features, managed and loaded as git submodules, which clarifies the code structure and improves maintainability.
3. Added **headless mode**, **motion mode**, and **simulation mode**, and streamlined the startup parameters (see Section 2.2). **Simulation mode** makes it easier to verify the environment setup and isolate hardware faults.
4. Changed the default finger-mapping algorithm from Vector to **DexPilot**, improving the precision and usability of fingertip-pinch motions.
5. Various other optimizations.
# 0. 📖 Introduction
This repository implements **teleoperation** of **Unitree humanoid robots** using **XR (Extended Reality) devices** such as the Apple Vision Pro, PICO 4 Ultra Enterprise, and Meta Quest 3.
The required devices and wiring diagram are shown below:
<p align="center">
<a href="https://oss-global-cdn.unitree.com/static/3f75e91e41694ed28c29bcad22954d1d_5990x4050.png">
<img src="https://oss-global-cdn.unitree.com/static/3f75e91e41694ed28c29bcad22954d1d_5990x4050.png" alt="System architecture diagram" style="width: 100%;">
</a>
</p>
Devices currently supported by this repository:
<table>
  <tr>
    <th align="center">🤖 Robot</th>
    <th align="center">⚪ Status</th>
  </tr>
  <tr>
    <td align="center"><a href="https://www.unitree.com/cn/g1" target="_blank">G1 (29 DoF)</a></td>
    <td align="center">✅ Supported</td>
  </tr>
  <tr>
    <td align="center"><a href="https://www.unitree.com/cn/g1" target="_blank">G1 (23 DoF)</a></td>
    <td align="center">✅ Supported</td>
  </tr>
  <tr>
    <td align="center"><a href="https://www.unitree.com/cn/h1" target="_blank">H1 (4-DoF arm)</a></td>
    <td align="center">✅ Supported</td>
  </tr>
  <tr>
    <td align="center"><a href="https://www.unitree.com/cn/h1" target="_blank">H1_2 (7-DoF arm)</a></td>
    <td align="center">✅ Supported</td>
  </tr>
  <tr>
    <td align="center"><a href="https://www.unitree.com/cn/Dex1-1" target="_blank">Dex1-1 gripper</a></td>
    <td align="center">✅ Supported</td>
  </tr>
  <tr>
    <td align="center"><a href="https://www.unitree.com/cn/Dex3-1" target="_blank">Dex3-1 dexterous hand</a></td>
    <td align="center">✅ Supported</td>
  </tr>
  <tr>
    <td align="center"><a href="https://support.unitree.com/home/zh/G1_developer/inspire_dfx_dexterous_hand" target="_blank">Inspire dexterous hand</a></td>
    <td align="center">✅ Supported</td>
  </tr>
  <tr>
    <td align="center"> ··· </td>
    <td align="center"> ··· </td>
  </tr>
</table>
The network topology, using the G1 robot as an example:
<p align="center">
<a href="https://oss-global-cdn.unitree.com/static/9871e3bac4c54140b0839c68baf48a4a_1872x929.png">
<img src="https://oss-global-cdn.unitree.com/static/9871e3bac4c54140b0839c68baf48a4a_1872x929.png" alt="Watch the Document" style="width: 100%;">
</a>
</p>
# 1. 📦 Prerequisites
We tested this code on Ubuntu 20.04 and Ubuntu 22.04; other operating systems may require different configurations. This document mainly covers the default mode.
For more information, see the [official documentation](https://support.unitree.com/home/zh/Teleoperation) and [OpenTeleVision](https://github.com/OpenTeleVision/TeleVision).
## 1.1 📥 Basic Setup
```bash
# Create a conda environment
(base) unitree@Host:~$ conda create -n tv python=3.10 pinocchio=3.1.0 numpy=1.26.4 -c conda-forge
(base) unitree@Host:~$ conda activate tv
# Clone this repo
(tv) unitree@Host:~$ git clone https://github.com/unitreerobotics/xr_teleoperate.git
(tv) unitree@Host:~$ cd xr_teleoperate
# Shallow clone submodule
(tv) unitree@Host:~/xr_teleoperate$ git submodule update --init --depth 1
# Install televuer submodule
(tv) unitree@Host:~/xr_teleoperate$ cd teleop/televuer
(tv) unitree@Host:~/xr_teleoperate/teleop/televuer$ pip install -e .
# Generate the certificate files required for televuer submodule
(tv) unitree@Host:~/xr_teleoperate/teleop/televuer$ openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout key.pem -out cert.pem
# Install dex-retargeting submodule
(tv) unitree@Host:~/xr_teleoperate/teleop/televuer$ cd ../robot_control/dex-retargeting/
(tv) unitree@Host:~/xr_teleoperate/teleop/robot_control/dex-retargeting$ pip install -e .
# Install additional dependencies required by this repo
(tv) unitree@Host:~/xr_teleoperate/teleop/robot_control/dex-retargeting$ cd ../../../
(tv) unitree@Host:~/xr_teleoperate$ pip install -r requirements.txt
```
## 1.2 🕹️ unitree_sdk2_python
```bash
# Install unitree_sdk2_python (the robot communication library)
(tv) unitree@Host:~$ git clone https://github.com/unitreerobotics/unitree_sdk2_python.git
(tv) unitree@Host:~$ cd unitree_sdk2_python
(tv) unitree@Host:~/unitree_sdk2_python$ pip install -e .
```
> **Note 1**: The [unitree_dds_wrapper](https://github.com/unitreerobotics/unitree_dds_wrapper) in the original h1_2 branch was a temporary version; it has since been fully migrated to the official Python control and communication library, [unitree_sdk2_python](https://github.com/unitreerobotics/unitree_sdk2_python).
>
> **Note 2**: The identifiers in front of each command indicate **which device and directory the command should be run in**.
>
> In Ubuntu's `~/.bashrc`, the default prompt is: `PS1='${debian_chroot:+($debian_chroot)}\u@\h:\w\$ '`
>
> Taking `(tv) unitree@Host:~$ pip install meshcat` as an example:
>
> - `(tv)` indicates that the shell is in the conda environment `tv`.
> - `unitree@Host:~` indicates that the user `unitree` is logged in to the device `Host`, with the current working directory `$HOME`.
> - `$` indicates a Bash shell (non-root user).
> - `pip install meshcat` is the command to run on `Host`.
# 2. 💻 Simulation Deployment
## 2.1 📥 Environment Setup
First install unitree_sim_isaaclab and configure it by following its README.
Launch the simulation with the G1 (29 DoF) + Dex3 hand configuration:
```bash
(base) unitree@Host:~$ conda activate unitree_sim_env
(unitree_sim_env) unitree@Host:~$ cd ~/unitree_sim_isaaclab
(unitree_sim_env) unitree@Host:~/unitree_sim_isaaclab$ python sim_main.py --device cpu --enable_cameras --task Isaac-PickPlace-Cylinder-G129-Dex3-Joint --enable_dex3_dds --robot_type g129
```
After the simulation starts, click the window once to activate it. The terminal prints `controller started, start main loop...`.
The simulation GUI:
<p align="center"> <a href="https://oss-global-cdn.unitree.com/static/bea51ef618d748368bf59c60f4969a65_1749x1090.png"> <img src="https://oss-global-cdn.unitree.com/static/bea51ef618d748368bf59c60f4969a65_1749x1090.png" alt="Simulation UI" style="width: 75%;"> </a> </p>
## 2.2 🚀 Launch
The program supports XR control of both the physical robot and the simulation. Select the mode via command-line arguments:
- **Basic control parameters**

| ⚙️ Parameter | 📜 Description | 🔘 Options | 📌 Default |
| :----------: | :----------------------------------: | :----------------------------------------------------------: | :----------: |
| `--xr-mode` | Select the XR input mode | `hand` (**hand tracking**), `controller` (**controller tracking**) | `hand` |
| `--arm` | Select the robot arm type (see Section 0 📖) | `G1_29`, `G1_23`, `H1_2`, `H1` | `G1_29` |
| `--ee` | Select the end effector (see Section 0 📖) | `dex1`, `dex3`, `inspire1` | none |

- **Mode flags**

| ⚙️ Flag | 📜 Description |
| :----------: | :----------------------------------------------------------: |
| `--record` | **Enable data recording**: after pressing **r** to start teleoperation, press **s** to start or stop recording an episode; this can be repeated |
| `--motion` | **Enable motion control**: allows independent robot locomotion during teleoperation (walk with the R3 remote in hand mode, or with the joysticks in controller mode) |
| `--headless` | Run without a GUI (for headless deployment on PC2) |
| `--sim` | Enable **simulation mode** |

Launch G1 (29 DoF) + Dex3 with hand tracking, simulation, and recording enabled:
```bash
(tv) unitree@Host:~$ cd ~/xr_teleoperate/teleop/
(tv) unitree@Host:~/xr_teleoperate/teleop/$ python teleop_hand_and_arm.py --xr-mode=hand --arm=G1_29 --ee=dex3 --sim --record
# Shorter form (defaults applied):
(tv) unitree@Host:~/xr_teleoperate/teleop/$ python teleop_hand_and_arm.py --ee=dex3 --sim --record
```
After the program starts, the terminal shows:
<p align="center"> <a href="https://oss-global-cdn.unitree.com/static/735464d237214f6c9edf8c7db9847a0a_1874x1275.png"> <img src="https://oss-global-cdn.unitree.com/static/735464d237214f6c9edf8c7db9847a0a_1874x1275.png" alt="Terminal startup log" style="width: 75%;"> </a> </p>
Next steps:
1. Put on your XR headset (Apple Vision Pro, PICO 4, etc.)
2. Connect to the corresponding Wi-Fi
3. Open a browser (e.g. Safari or PICO Browser) and visit: `https://192.168.123.2:8012?ws=wss://192.168.123.2:8012`
> **Note 1**: This IP must match the **Host** IP (check it with `ifconfig`).
> **Note 2**: A warning page may appear. Choose [Advanced], then [Proceed to IP (unsafe)].
<p align="center"> <a href="https://oss-global-cdn.unitree.com/static/cef18751ca6643b683bfbea35fed8e7c_1279x1002.png"> <img src="https://oss-global-cdn.unitree.com/static/cef18751ca6643b683bfbea35fed8e7c_1279x1002.png" alt="vuer_unsafe" style="width: 50%;"> </a> </p>
4. In the Vuer web page, click [Virtual Reality]. Allow all prompts to start the VR session.
<p align="center"> <a href="https://oss-global-cdn.unitree.com/static/fdeee4e5197f416290d8fa9ecc0b28e6_2480x1286.png"> <img src="https://oss-global-cdn.unitree.com/static/fdeee4e5197f416290d8fa9ecc0b28e6_2480x1286.png" alt="Vuer UI" style="width: 75%;"> </a> </p>
5. The headset now shows the robot's first-person view, and the terminal prints connection info:
```bash
websocket is connected. id:dbb8537d-a58c-4c57-b49d-cbb91bd25b90
default socket worker is up, adding clientEvents
Uplink task running. id:dbb8537d-a58c-4c57-b49d-cbb91bd25b90
```
6. To avoid sudden movements, match your arms to the robot's **initial pose**:
<p align="center"> <a href="https://oss-global-cdn.unitree.com/static/2522a83214744e7c8c425cc2679a84ec_670x867.png"> <img src="https://oss-global-cdn.unitree.com/static/2522a83214744e7c8c425cc2679a84ec_670x867.png" alt="Initial pose" style="width: 25%;"> </a> </p>
7. Press **r** in the terminal to start teleoperation. You can now control the robot's arms and dexterous hands.
8. During teleoperation, press **s** to start recording, and **s** again to stop and save; this can be repeated.
<p align="center"> <a href="https://oss-global-cdn.unitree.com/static/f5b9b03df89e45ed8601b9a91adab37a_2397x1107.png"> <img src="https://oss-global-cdn.unitree.com/static/f5b9b03df89e45ed8601b9a91adab37a_2397x1107.png" alt="Recording" style="width: 75%;"> </a> </p>
> **Note 1**: Recorded data is saved to `xr_teleoperate/teleop/utils/data` by default. See [unitree_IL_lerobot](https://github.com/unitreerobotics/unitree_IL_lerobot/tree/main?tab=readme-ov-file#data-collection-and-conversion) for how to use it.
> **Note 2**: Watch your disk space while recording data.
## 2.3 🔚 Exit
Press **q** in the terminal (or in the "record image" window) to exit.
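The launch options documented in Section 2.2 map naturally onto Python's `argparse`. The sketch below is hypothetical: option names, choices, and defaults are taken from the tables above, but it is not the repo's actual argument parser in `teleop_hand_and_arm.py`.

```python
import argparse

def build_parser() -> argparse.ArgumentParser:
    # Flags mirror the tables in Section 2.2; illustrative only.
    p = argparse.ArgumentParser(description="XR teleoperation launcher (sketch)")
    p.add_argument("--xr-mode", choices=["hand", "controller"], default="hand",
                   help="XR input mode: hand tracking or controller tracking")
    p.add_argument("--arm", choices=["G1_29", "G1_23", "H1_2", "H1"], default="G1_29",
                   help="Robot arm type")
    p.add_argument("--ee", choices=["dex1", "dex3", "inspire1"], default=None,
                   help="End-effector type (optional)")
    p.add_argument("--record", action="store_true", help="Enable data recording")
    p.add_argument("--motion", action="store_true", help="Enable motion control mode")
    p.add_argument("--headless", action="store_true", help="Run without a GUI")
    p.add_argument("--sim", action="store_true", help="Enable simulation mode")
    return p

# Mirrors the shorter launch command from Section 2.2
args = build_parser().parse_args(["--ee=dex3", "--sim", "--record"])
print(args.arm, args.ee, args.sim)
```

Because `--arm` and `--xr-mode` have defaults, the shorter launch command shown earlier is equivalent to spelling out `--xr-mode=hand --arm=G1_29`.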
# 3. 🤖 Physical Deployment
The physical deployment steps are similar to simulation, with the following differences:
## 3.1 🖼️ Image Service
Copy `image_server.py` from the `xr_teleoperate/teleop/image_server` directory to the **Development Computing Unit PC2** of the Unitree robot (G1/H1/H1_2, etc.).
```bash
# Tip: you can transfer image_server.py to PC2 with scp, then log in to PC2 over ssh to run it.
# Assuming the Development Computing Unit PC2 has IP 192.168.123.164, the transfer goes like this:
# Log in to PC2 via ssh and create a folder for the image server
(tv) unitree@Host:~$ ssh unitree@192.168.123.164 "mkdir -p ~/image_server"
# Copy the local image_server.py to PC2's ~/image_server directory
(tv) unitree@Host:~$ scp ~/xr_teleoperate/teleop/image_server/image_server.py unitree@192.168.123.164:~/image_server/
```
Then run on **PC2**:
```bash
# Tip: this image-transfer program currently supports two capture backends, OpenCV and the RealSense SDK.
# Read the comments in the ImageServer class in image_server.py to configure the service for your camera hardware.
# Now on the Unitree robot's PC2 terminal
unitree@PC2:~/image_server$ python image_server.py
# The terminal prints:
# {'fps': 30, 'head_camera_type': 'opencv', 'head_camera_image_shape': [480, 1280], 'head_camera_id_numbers': [0]}
# [Image Server] Head camera 0 resolution: 480.0 x 1280.0
# [Image Server] Image server has started, waiting for client connections...
```
Once the image service is running, you can test the connection from the **Host** with `image_client.py`:
```bash
(tv) unitree@Host:~/xr_teleoperate/teleop/image_server$ python image_client.py
```
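As background on the `head_camera_image_shape: [480, 1280]` printed above: a stereo head camera delivers the two eye views joined side by side in one frame. The sketch below uses synthetic numpy arrays standing in for real camera images; it illustrates the layout only and is not code from `image_server.py`.

```python
import numpy as np

def concat_stereo(left: np.ndarray, right: np.ndarray) -> np.ndarray:
    """Join two eye views side by side, as a stereo head camera does."""
    assert left.shape == right.shape
    return np.concatenate([left, right], axis=1)

def split_stereo(frame: np.ndarray):
    """Recover the left/right halves of a side-by-side stereo frame."""
    half = frame.shape[1] // 2
    return frame[:, :half], frame[:, half:]

# Synthetic 480x640 BGR frames standing in for real camera images
left = np.zeros((480, 640, 3), dtype=np.uint8)
right = np.full((480, 640, 3), 255, dtype=np.uint8)
frame = concat_stereo(left, right)
print(frame.shape)   # a 480x1280 frame, matching head_camera_image_shape [480, 1280]
```

The XR client can then split each received frame back into per-eye views for stereoscopic display.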
## 3.2 ✋ Inspire Hand Service (optional)
> **Note 1**: Skip this section if your robot configuration does not use an Inspire dexterous hand.
> **Note 2**: For the G1 robot with the [Inspire DFX hand](https://support.unitree.com/home/zh/G1_developer/inspire_dfx_dexterous_hand), see [issue #46](https://github.com/unitreerobotics/xr_teleoperate/issues/46).
> **Note 3**: For the [Inspire FTP hand](https://support.unitree.com/home/zh/G1_developer/inspire_ftp_dexterity_hand), see [issue #48](https://github.com/unitreerobotics/xr_teleoperate/issues/48).
Refer to [Dexterous Hand Development](https://support.unitree.com/home/zh/H1_developer/Dexterous_hand) to set up the related environment and build the control program. Download the dexterous hand control interface program from [this link](https://oss-global-cdn.unitree.com/static/0a8335f7498548d28412c31ea047d4be.zip) and copy it to **PC2** of the Unitree robot.
Run the following commands on the Unitree robot's **PC2**:
```bash
unitree@PC2:~$ sudo apt install libboost-all-dev libspdlog-dev
# Build the project
unitree@PC2:~$ cd h1_inspire_service && mkdir build && cd build
unitree@PC2:~/h1_inspire_service/build$ cmake .. -DCMAKE_BUILD_TYPE=Release
unitree@PC2:~/h1_inspire_service/build$ make
# Terminal 1. Run the h1 inspire hand service
unitree@PC2:~/h1_inspire_service/build$ sudo ./inspire_hand -s /dev/ttyUSB0
# Terminal 2. Run an example
unitree@PC2:~/h1_inspire_service/build$ ./h1_hand_example
```
If both hands open and close continuously, the service is working. Once confirmed, close the `./h1_hand_example` program in terminal 2.
## 3.3 🚀 Launch
> ![Warning](https://img.shields.io/badge/Warning-Important-red)
>
> 1. Everyone must keep a safe distance from the robot to prevent potential danger!
> 2. Read the [official documentation](https://support.unitree.com/home/zh/Teleoperation) at least once before running this program.
> 3. When running without `--motion`, make sure the robot has entered [debug mode (L2+R2)](https://support.unitree.com/home/zh/H1_developer/Remote_control) and the motion control program is stopped, to avoid potential command conflicts.
> 4. When running in motion mode (with `--motion`), make sure the robot is in control mode (via the R3 remote).
> 5. In motion mode:
>    - Right controller **A** = exit teleoperation
>    - Pressing both joysticks = soft emergency stop (switches to damping mode)
>    - Left joystick = movement direction
>    - Right joystick = turning
>    - The maximum speed is limited in the code

Same as in simulation, but follow the safety warnings above.
## 3.4 🔚 Exit
> ![Warning](https://img.shields.io/badge/Warning-Important-red)
>
> To avoid damaging the robot, it is best to move its arms to a naturally lowered or otherwise suitable position before exiting.
>
> - In **debug mode**: after the exit key is pressed, both arms return to the robot's **initial pose** within 5 seconds, and control then ends.
> - In **motion mode**: after the exit key is pressed, both arms return to the robot's **motion control pose** within 5 seconds, and control then ends.

Same as in simulation, but follow the safety warnings above.
# 4. 🗺️ Codebase Overview
```
xr_teleoperate/
├── assets                    [Robot URDF-related files]
├── hardware                  [3D-printed hardware modules]
├── teleop
│   ├── image_server
│   │   ├── image_client.py   [Receives image data from the robot's image server]
│   │   ├── image_server.py   [Captures camera images and sends them over the network (runs on the robot's Development Computing Unit PC2)]
│   │
│   ├── televuer
│   │   ├── src/televuer
│   │         ├── television.py        [Captures head, wrist, and hand/controller data from the XR device]
│   │         ├── tv_wrapper.py        [Post-processing of the captured data]
│   │   ├── test
│   │         ├── _test_television.py  [Test for television.py]
│   │         ├── _test_tv_wrapper.py  [Test for tv_wrapper.py]
│   │
│   ├── robot_control
│   │   ├── src/dex-retargeting        [Dexterous hand retargeting algorithm library]
│   │   ├── robot_arm_ik.py            [Inverse kinematics of the arm]
│   │   ├── robot_arm.py               [Controls the dual-arm joints and locks the others]
│   │   ├── hand_retargeting.py        [Wrapper for the dexterous hand retargeting library]
│   │   ├── robot_hand_inspire.py      [Controls the Inspire hand joints]
│   │   ├── robot_hand_unitree.py      [Controls the Unitree hand joints]
│   │
│   ├── utils
│   │   ├── episode_writer.py          [Records data for imitation learning]
│   │   ├── weighted_moving_filter.py  [Filters joint data]
│   │   ├── rerun_visualizer.py        [Visualizes data during recording]
│   │
│   └── teleop_hand_and_arm.py   [Startup/execution code for teleoperation]
```
# 5. 🛠️ Hardware
## 5.1 📋 Parts List
| Item | Quantity | Link(s) | Remarks |
| :--------------------------: | :------: | :----------------------------------------------------------: | :----------------------------------------------------------: |
| **Humanoid robot** | 1 | https://www.unitree.com | With development computing unit |
| **XR device** | 1 | https://www.apple.com/apple-vision-pro/<br/>https://www.meta.com/quest/quest-3<br/>https://www.picoxr.com/products/pico4-ultra-enterprise | |
| **Router** | 1 | | Required in **default mode**; not needed in **wireless mode** |
| **User PC** | 1 | | In **simulation mode**, use the [officially recommended](https://docs.isaacsim.omniverse.nvidia.com/4.5.0/installation/requirements.html) hardware resources |
| **Head stereo camera** | 1 | [For reference only] http://e.tb.cn/h.TaZxgkpfWkNCakg?tk=KKz03Kyu04u | For the robot's head view |
| **Head camera mount** | 1 | https://github.com/unitreerobotics/xr_teleoperate/blob/g1/hardware/head_stereo_camera_mount.STEP | For mounting the head stereo camera |
| Intel RealSense D405 camera | 2 | https://www.intelrealsense.com/depth-camera-d405/ | For the wrists |
| Wrist ring mount | 2 | https://github.com/unitreerobotics/xr_teleoperate/blob/g1/hardware/wrist_ring_mount.STEP | Used with the wrist camera mounts |
| Left wrist D405 mount | 1 | https://github.com/unitreerobotics/xr_teleoperate/blob/g1/hardware/left_wrist_D405_camera_mount.STEP | For mounting the left wrist RealSense D405 camera |
| Right wrist D405 mount | 1 | https://github.com/unitreerobotics/xr_teleoperate/blob/g1/hardware/right_wrist_D405_camera_mount.STEP | For mounting the right wrist RealSense D405 camera |
| M3-1 hex nuts | 4 | [For reference only] https://a.co/d/1opqtOr | For wrist fastener |
| M3x12 screws | 4 | [For reference only] https://amzn.asia/d/aU9NHSf | For wrist fastener |
| M3x6 screws | 4 | [For reference only] https://amzn.asia/d/0nEz5dJ | For wrist fastener |
| **M4x14 screws** | 2 | [For reference only] https://amzn.asia/d/cfta55x | For head fastener |
| **M2x4 self-tapping screws** | 4 | [For reference only] https://amzn.asia/d/1msRa5B | For head fastener |

> Note: The bolded items are essential equipment for teleoperation tasks; the other items are optional equipment for recording [datasets](https://huggingface.co/unitreerobotics).
## 5.2 🔨 Mounting Diagrams
<table>
  <tr>
    <th align="center">Item</th>
    <th align="center" colspan="2">Simulation</th>
    <th align="center" colspan="2">Physical</th>
  </tr>
  <tr>
    <td align="center">Head</td>
    <td align="center">
      <p align="center">
        <img src="./img/head_camera_mount.png" alt="head" width="100%">
        <figcaption>Head mount</figcaption>
      </p>
    </td>
    <td align="center">
      <p align="center">
        <img src="./img/head_camera_mount_install.png" alt="head" width="80%">
        <figcaption>Side view of assembly</figcaption>
      </p>
    </td>
    <td align="center" colspan="2">
      <p align="center">
        <img src="./img/real_head.jpg" alt="head" width="20%">
        <figcaption>Front view of assembly</figcaption>
      </p>
    </td>
  </tr>
  <tr>
    <td align="center">Wrist</td>
    <td align="center" colspan="2">
      <p align="center">
        <img src="./img/wrist_and_ring_mount.png" alt="wrist" width="100%">
        <figcaption>Wrist ring and camera mount</figcaption>
      </p>
    </td>
    <td align="center">
      <p align="center">
        <img src="./img/real_left_hand.jpg" alt="wrist" width="50%">
        <figcaption>Left hand assembly</figcaption>
      </p>
    </td>
    <td align="center">
      <p align="center">
        <img src="./img/real_right_hand.jpg" alt="wrist" width="50%">
        <figcaption>Right hand assembly</figcaption>
      </p>
    </td>
  </tr>
</table>
| Item | Quantity | Link | Note |
| :------------------------: | :--: | :----------------------------------------------------------: | :----------------------------------------------------------: |
| **Humanoid robot** | 1 | https://www.unitree.com | With development computing unit |
| **XR device** | 1 | https://www.apple.com/apple-vision-pro/ https://www.meta.com/quest/quest-3 https://www.picoxr.com/products/pico4-ultra-enterprise | |
| **Router** | 1 | | Required in **default mode**; not needed in **wireless mode** |
| **User PC** | 1 | | In **simulation mode**, use the officially recommended hardware resources. |
| **Head stereo camera** | 1 | [reference] http://e.tb.cn/h.TaZxgkpfWkNCakg?tk=KKz03Kyu04u | For the robot's head view |
| **Head camera mount** | 1 | https://github.com/unitreerobotics/xr_teleoperate/blob/g1/hardware/head_stereo_camera_mount.STEP | For mounting the head stereo camera |
| Intel RealSense D405 camera | 2 | https://www.intelrealsense.com/depth-camera-d405/ | For the wrists |
| Wrist ring mount | 2 | https://github.com/unitreerobotics/xr_teleoperate/blob/g1/hardware/wrist_ring_mount.STEP | Used together with the wrist camera mounts |
| Left wrist D405 mount | 1 | https://github.com/unitreerobotics/xr_teleoperate/blob/g1/hardware/left_wrist_D405_camera_mount.STEP | For mounting the left wrist RealSense D405 camera |
| Right wrist D405 mount | 1 | https://github.com/unitreerobotics/xr_teleoperate/blob/g1/hardware/right_wrist_D405_camera_mount.STEP | For mounting the right wrist RealSense D405 camera |
| M3-1 hex nut | 4 | [reference] https://a.co/d/1opqtOr | For wrist fasteners |
| M3x12 screw | 4 | [reference] https://amzn.asia/d/aU9NHSf | For wrist fasteners |
| M3x6 screw | 4 | [reference] https://amzn.asia/d/0nEz5dJ | For wrist fasteners |
| **M4x14 screw** | 2 | [reference] https://amzn.asia/d/cfta55x | For head fasteners |
| **M2x4 self-tapping screw** | 4 | [reference] https://amzn.asia/d/1msRa5B | For head fasteners |
> Note: Bold items are equipment required for teleoperation tasks; the rest are optional equipment for dataset recording.
> Note: The wrist ring mount must be aligned with the seam of the robot's wrist, as shown by the red circles in the image.
## 5.2 🔨 Installation Diagram
<table> <tr> <th align="center">Item</th> <th align="center" colspan="2">Simulation</th> <th align="center" colspan="2">Real robot</th> </tr> <tr> <td align="center">Head</td> <td align="center"> <p align="center"> <img src="./img/head_camera_mount.png" alt="head" width="100%"> <figcaption>Head mount</figcaption> </p> </td> <td align="center"> <p align="center"> <img src="./img/head_camera_mount_install.png" alt="head" width="80%"> <figcaption>Side view of installation</figcaption> </p> </td> <td align="center" colspan="2"> <p align="center"> <img src="./img/real_head.jpg" alt="head" width="20%"> <figcaption>Front view of installation</figcaption> </p> </td> </tr> <tr> <td align="center">Wrist</td> <td align="center" colspan="2"> <p align="center"> <img src="./img/wrist_and_ring_mount.png" alt="wrist" width="100%"> <figcaption>Wrist ring and camera mount</figcaption> </p> </td> <td align="center"> <p align="center"> <img src="./img/real_left_hand.jpg" alt="wrist" width="50%"> <figcaption>Left hand installation</figcaption> </p> </td> <td align="center"> <p align="center"> <img src="./img/real_right_hand.jpg" alt="wrist" width="50%"> <figcaption>Right hand installation</figcaption> </p> </td> </tr> </table>
> Note: Attach the wrist ring mount aligned with the seam of the robot's wrist (red circles in the image).
# 6. 🙏 Acknowledgements
This code builds upon the following open-source codebases. Please visit the URLs below to review their respective licenses:
1) https://github.com/OpenTeleVision/TeleVision
2) https://github.com/dexsuite/dex-retargeting
3) https://github.com/vuer-ai/vuer
4) https://github.com/stack-of-tasks/pinocchio
5) https://github.com/casadi/casadi
6) https://github.com/meshcat-dev/meshcat-python
7) https://github.com/zeromq/pyzmq
8) https://github.com/unitreerobotics/unitree_dds_wrapper
9) https://github.com/tonyzhaozh/act
10) https://github.com/facebookresearch/detr
11) https://github.com/Dingry/BunnyVisionPro
12) https://github.com/unitreerobotics/unitree_sdk2_python
This code builds on the following open-source projects; see each URL for its LICENSE:
1. https://github.com/OpenTeleVision/TeleVision
2. https://github.com/dexsuite/dex-retargeting
3. https://github.com/vuer-ai/vuer
4. https://github.com/stack-of-tasks/pinocchio
5. https://github.com/casadi/casadi
6. https://github.com/meshcat-dev/meshcat-python
7. https://github.com/zeromq/pyzmq
8. https://github.com/Dingry/BunnyVisionPro
9. https://github.com/unitreerobotics/unitree_sdk2_python

432
README_zh-CN.md

@@ -1,11 +1,14 @@
<div align="center">
<h1 align="center"> avp_teleoperate </h1>
<h3 align="center"> Unitree Robotics </h3>
<h1 align="center"> xr_teleoperate </h1>
<a href="https://www.unitree.com/" target="_blank">
<img src="https://www.unitree.com/images/0079f8938336436e955ea3a98c4e1e59.svg" alt="Unitree LOGO" width="15%">
</a>
<p align="center">
<a href="README.md"> English </a> | <a>中文</a> | <a href="README_ja-JP.md">日本語</a>
</p>
</div>
# 📺 Video Demo
<p align="center">
@@ -27,13 +30,29 @@
</table>
</p>
# 🔖 Release Notes
1. Upgraded the [Vuer](https://github.com/vuer-ai/vuer) library and broadened device support. To better reflect its scope, the project was renamed from **avp_teleoperate** to **xr_teleoperate**: from initially supporting only the Apple Vision Pro, it is now compatible with several XR devices such as the Meta Quest 3 (with controllers) and the PICO 4 Ultra Enterprise (with controllers).
2. Split parts of the functionality into **modules** that are managed and loaded as Git submodules, improving the clarity and maintainability of the code structure.
3. Added **headless**, **motion control**, and **simulation** modes and streamlined the launch arguments (see Section 2.2) for easier use. The simulation mode makes environment validation and hardware troubleshooting more convenient.
4. Switched the default hand-retargeting algorithm from Vector to **DexPilot**, improving fingertip-pinch precision and the interaction experience.
5. Various other optimizations.
# 0. 📖 Introduction
This repository implements teleoperation control of **Unitree humanoid robots** using **XR (Extended Reality) devices** (such as the Apple Vision Pro, PICO 4 Ultra Enterprise, or Meta Quest 3).
The following robot types are currently supported by this repository:
Below is a system diagram:
<p align="center">
<a href="https://oss-global-cdn.unitree.com/static/54cff8fe11ac4158b9b15a73f0842843_5990x4060.png">
<img src="https://oss-global-cdn.unitree.com/static/54cff8fe11ac4158b9b15a73f0842843_5990x4060.png" alt="Watch the Document" style="width: 100%;">
</a>
</p>
The following device types are currently supported by this repository:
<table>
<tr>
@@ -56,6 +75,10 @@
<td style="text-align: center;"> <a href="https://www.unitree.com/cn/h1" target="_blank"> H1_2 (7-DoF arm) </td>
<td style="text-align: center;"> &#9989; Complete </td>
</tr>
<tr>
<td style="text-align: center;"> <a href="https://www.unitree.com/cn/Dex1-1" target="_blank"> Dex1-1 gripper </td>
<td style="text-align: center;"> &#9989; Complete </td>
</tr>
<tr>
<td style="text-align: center;"> <a href="https://www.unitree.com/cn/Dex3-1" target="_blank"> Dex3-1 dexterous hand </td>
<td style="text-align: center;"> &#9989; Complete </td>
@@ -65,179 +88,230 @@
<td style="text-align: center;"> &#9989; Complete </td>
</tr>
<tr>
<td style="text-align: center;"> ... </td>
<td style="text-align: center;"> ... </td>
<td style="text-align: center;"> ··· </td>
<td style="text-align: center;"> ··· </td>
</tr>
</table>
# 1. 📦 Installation
Below is a diagram of the required equipment and wiring:
<p align="center">
<a href="https://oss-global-cdn.unitree.com/static/849e601aedca41e39014ec0f75a86c1e_2950x1445.png">
<img src="https://oss-global-cdn.unitree.com/static/849e601aedca41e39014ec0f75a86c1e_2950x1445.png" alt="Watch the Document" style="width: 100%;">
</a>
</p>
Below is the network topology diagram, using the G1 robot as an example:
We have tested our code on Ubuntu 20.04 and Ubuntu 22.04; other operating systems may require different configurations. This document mainly covers the default mode.
<p align="center">
<a href="https://oss-global-cdn.unitree.com/static/9871e3bac4c54140b0839c68baf48a4a_1872x929.png">
<img src="https://oss-global-cdn.unitree.com/static/9871e3bac4c54140b0839c68baf48a4a_1872x929.png" alt="Watch the Document" style="width: 100%;">
</a>
</p>
# 1. 📦 Prerequisites
For more information, you can refer to the [official documentation](https://support.unitree.com/home/zh/Teleoperation) and [OpenTeleVision](https://github.com/OpenTeleVision/TeleVision).
We have tested our code on Ubuntu 20.04 and Ubuntu 22.04; other operating systems may require different configurations.
## 1.1 📥 Base Environment
For more information, you can refer to the [official documentation](https://support.unitree.com/home/zh/Teleoperation) and [OpenTeleVision](https://github.com/OpenTeleVision/TeleVision).
```bash
# Create the conda base environment
(base) unitree@Host:~$ conda create -n tv python=3.10 pinocchio=3.1.0 numpy=1.26.4 -c conda-forge
(base) unitree@Host:~$ conda activate tv
# Clone this repository
(tv) unitree@Host:~$ git clone https://github.com/unitreerobotics/xr_teleoperate.git
(tv) unitree@Host:~$ cd xr_teleoperate
# Shallow-clone the submodules
(tv) unitree@Host:~/xr_teleoperate$ git submodule update --init --depth 1
# Install the televuer module
(tv) unitree@Host:~/xr_teleoperate$ cd teleop/televuer
(tv) unitree@Host:~/xr_teleoperate/teleop/televuer$ pip install -e .
# Generate the certificate files required by the televuer module
(tv) unitree@Host:~/xr_teleoperate/teleop/televuer$ openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout key.pem -out cert.pem
# Install the dex-retargeting module
(tv) unitree@Host:~/xr_teleoperate/teleop/televuer$ cd ../robot_control/dex-retargeting/
(tv) unitree@Host:~/xr_teleoperate/teleop/robot_control/dex-retargeting$ pip install -e .
# Install the other dependencies required by this repository
(tv) unitree@Host:~/xr_teleoperate/teleop/robot_control/dex-retargeting$ cd ../../../
(tv) unitree@Host:~/xr_teleoperate$ pip install -r requirements.txt
```
## 1.1 🦾 Inverse Kinematics
## 1.2 🕹️ unitree_sdk2_python
```bash
unitree@Host:~$ conda create -n tv python=3.8
unitree@Host:~$ conda activate tv
# If you use `pip install`, make sure the pinocchio version is 3.1.0
(tv) unitree@Host:~$ conda install pinocchio -c conda-forge
(tv) unitree@Host:~$ pip install meshcat
(tv) unitree@Host:~$ pip install casadi
# Install the unitree_sdk2_python library, which handles communication and control between the development device and the robot
(tv) unitree@Host:~$ git clone https://github.com/unitreerobotics/unitree_sdk2_python.git
(tv) unitree@Host:~$ cd unitree_sdk2_python
(tv) unitree@Host:~/unitree_sdk2_python$ pip install -e .
```
> Reminder: the identifiers at the start of each command indicate which device and directory the command should be executed on.
> Note 1: the [unitree_dds_wrapper](https://github.com/unitreerobotics/unitree_dds_wrapper) in the original h1_2 branch was a temporary version; everything has now been migrated to the official Python control and communication library: [unitree_sdk2_python](https://github.com/unitreerobotics/unitree_sdk2_python)
> Note 2: the identifiers at the start of each command indicate which device and directory the command should be executed on.
>
> p.s. In an Ubuntu system's `~/.bashrc` file, the default setting is: `PS1='${debian_chroot:+($debian_chroot)}\u@\h:\w\$ '`
>
> - Taking the command `(tv) unitree@Host:~$ pip install meshcat` as an example:
>
>- `(tv)` indicates that the shell is currently in the conda environment named tv;
>
>- `unitree@Host:~` indicates that the user unitree is logged in on the device Host, with the current working directory `$HOME`;
>
>- $ indicates that the current shell is Bash;
>
>- pip install meshcat is the command that the user unitree wants to execute on the device Host.
>
>You can refer to [Harley Hahn's Guide to Unix and Linux](https://www.harley.com/unix-book/book/chapters/04.html#H) and the [Conda User Guide](https://docs.conda.io/projects/conda/en/latest/user-guide/getting-started.html) to learn more about these topics.
> - `(tv)` indicates that the shell is currently in the conda environment named tv;
>
> - `unitree@Host:~` indicates that the user unitree is logged in on the device Host, with the current working directory `$HOME`;
>
> - $ indicates that the current shell is Bash;
>
> - pip install meshcat is the command that the user unitree wants to execute on the device Host.
>
> You can refer to [Harley Hahn's Guide to Unix and Linux](https://www.harley.com/unix-book/book/chapters/04.html#H) and the [Conda User Guide](https://docs.conda.io/projects/conda/en/latest/user-guide/getting-started.html) to learn more about these topics.
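To make the anatomy of that prompt concrete, here is a small sketch that rebuilds the informative part of the default Ubuntu prompt in plain Python (purely illustrative; the real prompt is produced by Bash from `PS1`):

```python
import getpass
import os
import socket

# Rebuild the pieces the default PS1 displays:
# \u -> login user, \h -> short hostname, \w -> working directory,
# and the trailing '$ ' Bash shows for a normal (non-root) user.
user = getpass.getuser()
host = socket.gethostname().split(".")[0]
cwd = os.getcwd()
prompt = f"{user}@{host}:{cwd}$ "
print(prompt)
```

The conda prefix such as `(tv)` is prepended separately by conda's own shell hook, which is why it sits in front of the user@host portion.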
## 1.2 🕹️ unitree_sdk2_python
# 2. 💻 Simulation Deployment
## 2.1 📥 Environment Setup
First, install [unitree_sim_isaaclab](https://github.com/unitreerobotics/unitree_sim_isaaclab). For the specific installation steps, see that repository's README.
Next, start the unitree_sim_isaaclab simulation environment. Assuming a simulation with the G1 (29 DoF) and Dex3 dexterous hand configuration, an example launch command is:
```bash
# Install the unitree_sdk2_python library
(tv) unitree@Host:~$ git clone https://github.com/unitreerobotics/unitree_sdk2_python.git
(tv) unitree@Host:~$ cd unitree_sdk2_python
(tv) unitree@Host:~$ pip install -e .
(base) unitree@Host:~$ conda activate unitree_sim_env
(unitree_sim_env) unitree@Host:~$ cd ~/unitree_sim_isaaclab
(unitree_sim_env) unitree@Host:~/unitree_sim_isaaclab$ python sim_main.py --device cpu --enable_cameras --task Isaac-PickPlace-Cylinder-G129-Dex3-Joint --enable_dex3_dds --robot_type g129
```
> Reminder: the [unitree_dds_wrapper](https://github.com/unitreerobotics/unitree_dds_wrapper) in the original h1_2 branch was a temporary version; everything has now been migrated to the official Python control and communication library: [unitree_sdk2_python](https://github.com/unitreerobotics/unitree_sdk2_python)
After the simulation environment starts, left-click once inside the window to activate it. The terminal then prints `controller started, start main loop...`
The simulation interface looks like the figure below:
<p align="center">
<a href="https://oss-global-cdn.unitree.com/static/bea51ef618d748368bf59c60f4969a65_1749x1090.png">
<img src="https://oss-global-cdn.unitree.com/static/bea51ef618d748368bf59c60f4969a65_1749x1090.png" alt="Unitree sim isaaclab" style="width: 75%;">
</a>
</p>
# 2. ⚙️ Configuration
## 2.1 📥 Basics
```bash
(tv) unitree@Host:~$ cd ~
(tv) unitree@Host:~$ git clone https://github.com/unitreerobotics/avp_teleoperate.git
(tv) unitree@Host:~$ cd ~/avp_teleoperate
(tv) unitree@Host:~$ pip install -r requirements.txt
```
## 2.2 🚀 Start Teleoperation
## 2.2 🔌 Local Streaming
This program supports controlling real robot motion through an XR device (e.g. hand gestures or controllers), and can also run in virtual simulation. You can configure how it runs via command-line arguments.
**Apple Vision Pro**
The launch arguments of this program are described below:
Apple does not allow WebXR on non-HTTPS connections. To test the application locally, we need to create a self-signed certificate and install it on the client. You need an Ubuntu machine and a router. Connect the Apple Vision Pro and the Ubuntu **host** to the same router.
- Basic control arguments
1. Install mkcert: https://github.com/FiloSottile/mkcert
2. Check the **host**'s local IP address:
| ⚙️ Argument | 📜 Description | 🔘 Current options | 📌 Default |
| :---------: | :----------------------------------------------: | :------------------------------------------------------: | :------: |
| `--xr-mode` | Select the XR input mode (how the robot is controlled) | `hand` (**hand tracking**)<br />`controller` (**controller tracking**) | `hand` |
| `--arm` | Select the robot type (see 0. 📖 Introduction) | `G1_29`<br />`G1_23`<br />`H1_2`<br />`H1` | `G1_29` |
| `--ee` | Select the end-effector type for the arms (see 0. 📖 Introduction) | `dex1`<br />`dex3`<br />`inspire1` | None |
```bash
(tv) unitree@Host:~/avp_teleoperate$ ifconfig | grep inet
```
- Mode switch arguments
| ⚙️ Argument | 📜 Description |
| :----------: | :----------------------------------------------------------: |
| `--record` | [Enable **data recording** mode]<br />After entering teleoperation with the **r** key, press **s** to start recording; press **s** again to stop and save the episode data.<br />Press **s** again to repeat the process. |
| `--motion` | [Enable **motion control** mode]<br />With this mode on, teleoperation can run while the robot's motion-control program is running.<br />In **hand tracking** mode, the [R3 remote](https://www.unitree.com/cn/R3) can be used to walk the robot normally; in **controller tracking** mode, the [controller sticks can also drive walking](https://github.com/unitreerobotics/xr_teleoperate/blob/375cdc27605de377c698e2b89cad0e5885724ca6/teleop/teleop_hand_and_arm.py#L247-L257). |
| `--headless` | [Enable **headless** mode]<br />For deployments without a display, e.g. running this program on the development computing unit (PC2) |
| `--sim` | [Enable [**simulation mode**](https://github.com/unitreerobotics/unitree_sim_isaaclab)] |
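These flags map to ordinary command-line options; a rough argparse sketch of how they could be declared (illustrative only; the authoritative definitions live in `teleop_hand_and_arm.py`):

```python
import argparse

# Illustrative re-creation of the launch flags described above.
# Names match the tables; defaults follow the "Default" column.
parser = argparse.ArgumentParser(description="xr_teleoperate launcher (sketch)")
parser.add_argument("--xr-mode", choices=["hand", "controller"], default="hand")
parser.add_argument("--arm", choices=["G1_29", "G1_23", "H1_2", "H1"], default="G1_29")
parser.add_argument("--ee", choices=["dex1", "dex3", "inspire1"], default=None)
parser.add_argument("--record", action="store_true")    # data recording mode
parser.add_argument("--motion", action="store_true")    # motion control mode
parser.add_argument("--headless", action="store_true")  # no GUI
parser.add_argument("--sim", action="store_true")       # simulation mode

# Equivalent to: python teleop_hand_and_arm.py --ee=dex3 --sim --record
args = parser.parse_args(["--ee", "dex3", "--sim", "--record"])
print(args.xr_mode, args.arm, args.ee, args.sim, args.record)
```

Note that argparse converts `--xr-mode` into the attribute `args.xr_mode`, and the omitted flags fall back to their defaults.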
Assume the **host**'s local IP address is `192.168.123.2`
------
> Reminder: You can use the `ifconfig` command to check your **host** IP address.
Based on the argument descriptions above and the simulation environment setup, assume we choose **hand tracking** to control the G1 (29 DoF) + Dex3 dexterous hand setup, with simulation mode and data recording mode enabled.
3. Create the certificate:
The launch command is then:
```bash
(tv) unitree@Host:~/avp_teleoperate$ mkcert -install && mkcert -cert-file cert.pem -key-file key.pem 192.168.123.2 localhost 127.0.0.1
(tv) unitree@Host:~$ cd ~/xr_teleoperate/teleop/
(tv) unitree@Host:~/xr_teleoperate/teleop/$ python teleop_hand_and_arm.py --xr-mode=hand --arm=G1_29 --ee=dex3 --sim --record
# In fact, since some arguments have default values, this command can also be shortened to:
(tv) unitree@Host:~/xr_teleoperate/teleop/$ python teleop_hand_and_arm.py --ee=dex3 --sim --record
```
Place the generated `cert.pem` and `key.pem` files in the `teleop` directory:
After the program starts normally, the terminal output looks like the figure below:
```bash
(tv) unitree@Host:~/avp_teleoperate$ cp cert.pem key.pem ~/avp_teleoperate/teleop/
```
<p align="center">
<a href="https://oss-global-cdn.unitree.com/static/735464d237214f6c9edf8c7db9847a0a_1874x1275.png">
<img src="https://oss-global-cdn.unitree.com/static/735464d237214f6c9edf8c7db9847a0a_1874x1275.png" alt="start_terminal_log" style="width: 75%;">
</a>
</p>
4. Open the firewall on the server:
Next, perform the following steps:
```bash
(tv) unitree@Host:~/avp_teleoperate$ sudo ufw allow 8012
```
1. Put on your XR headset (e.g. Apple Vision Pro or PICO 4 Ultra Enterprise)
5. Install the CA certificate on the Apple Vision Pro:
2. Connect to the corresponding WiFi hotspot
```bash
(tv) unitree@Host:~/avp_teleoperate$ mkcert -CAROOT
```
3. Open a browser app (such as Safari or PICO Browser) and visit: https://192.168.123.2:8012?ws=wss://192.168.123.2:8012
Copy `rootCA.pem` to the Apple Vision Pro via AirDrop and install it
> Note 1: This IP address should match your **host** IP address, which can be looked up with `ifconfig` or a similar command.
Settings > General > About > Certificate Trust Settings. Under "Enable full trust for root certificates", turn on trust for the certificate
> Note 2: A warning like the one shown below may pop up. Click the `Advanced` button, then click `Proceed to ip (unsafe)` to continue to the server over the insecure connection
Settings > Apps > Safari > Advanced > Feature Flags > enable the WebXR-related features.
<p align="center">
<a href="https://oss-global-cdn.unitree.com/static/cef18751ca6643b683bfbea35fed8e7c_1279x1002.png">
<img src="https://oss-global-cdn.unitree.com/static/cef18751ca6643b683bfbea35fed8e7c_1279x1002.png" alt="vuer_unsafe" style="width: 50%;">
</a>
</p>
> Reminder: In the newer visionOS 2, this step differs: after the certificate is AirDropped to the Apple Vision Pro, a certificate entry appears below the account bar in the upper-left of the Settings app; tap it to enable trust for the certificate.
4. After entering the `Vuer` web page, click the **`Virtual Reality`** button. Allow all subsequent dialogs to start the VR session. The interface looks like the figure below:
------
<p align="center">
<a href="https://oss-global-cdn.unitree.com/static/fdeee4e5197f416290d8fa9ecc0b28e6_2480x1286.png">
<img src="https://oss-global-cdn.unitree.com/static/fdeee4e5197f416290d8fa9ecc0b28e6_2480x1286.png" alt="vuer" style="width: 75%;">
</a>
</p>
5. You will now see the robot's first-person view in the XR headset. Meanwhile, the terminal prints the connection information:
```bash
websocket is connected. id:dbb8537d-a58c-4c57-b49d-cbb91bd25b90
default socket worker is up, adding clientEvents
Uplink task running. id:dbb8537d-a58c-4c57-b49d-cbb91bd25b90
```
**2.2.2 PICO 4 Ultra Enterprise or Meta Quest 3**
6. Then pose your arms close to the **robot's initial pose**. This avoids large robot swings caused by a big initial pose gap during real-robot deployment.
We have tried teleoperation with hand tracking on the PICO 4 Ultra Enterprise and Meta Quest 3.
The robot's initial pose is illustrated below:
The system specifications of the PICO 4 Ultra Enterprise are as follows:
<p align="center">
<a href="https://oss-global-cdn.unitree.com/static/2522a83214744e7c8c425cc2679a84ec_670x867.png">
<img src="https://oss-global-cdn.unitree.com/static/2522a83214744e7c8c425cc2679a84ec_670x867.png" alt="robot_init_pose" style="width: 25%;">
</a>
</p>
> System version: 5.12.6.U; Android version: 14; software version: c000_cf01_bv1.0.1_sv5.12.6_202412121344_sparrow_b4244_user
>
> Browser version: [4.0.28 beta version](https://github.com/vuer-ai/vuer/issues/45#issuecomment-2674918619)
7. Finally, press the **r** key in the terminal to officially start the teleoperation program. You can now remotely control the robot's arms (and dexterous hands)
The system specifications of the Meta Quest 3 are as follows:
8. During teleoperation, press the **s** key to start data recording; press **s** again to stop recording and save the data (this process can be repeated)
> System version: 49829370066100510; version: 62.0.0.273.343.570372087; runtime version: 62.0.0.269.341.570372063; OS version: SQ3A.220605.009.A1
The data recording process is illustrated below:
For more information on configuration steps, see this [issue](https://github.com/unitreerobotics/avp_teleoperate/issues/32).
<p align="center">
<a href="https://oss-global-cdn.unitree.com/static/f5b9b03df89e45ed8601b9a91adab37a_2397x1107.png">
<img src="https://oss-global-cdn.unitree.com/static/f5b9b03df89e45ed8601b9a91adab37a_2397x1107.png" alt="record" style="width: 75%;">
</a>
</p>
## 2.3 🔎 Unit Tests
> Note 1: Recorded data is stored in `xr_teleoperate/teleop/utils/data` by default. For instructions on using the data, see this repository: [unitree_IL_lerobot](https://github.com/unitreerobotics/unitree_IL_lerobot/blob/main/README_zh.md#%E6%95%B0%E6%8D%AE%E9%87%87%E9%9B%86%E4%B8%8E%E8%BD%AC%E6%8D%A2).
>
> Note 2: Mind your available disk space while recording data.
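The on-disk episode layout is managed by `episode_writer.py`; as a rough illustration of the episode-per-file idea (the class name and field names below are hypothetical, not the project's actual schema):

```python
import json
import os
import tempfile

class EpisodeSketch:
    """Toy stand-in for an episode writer: buffer frames, then save them as JSON."""
    def __init__(self, root):
        self.root = root
        self.frames = []

    def add_frame(self, frame):
        # frame: a dict of per-timestep data (poses, joint angles, timestamps, ...)
        self.frames.append(frame)

    def save(self, episode_id):
        path = os.path.join(self.root, f"episode_{episode_id:04d}.json")
        with open(path, "w") as f:
            json.dump({"frames": self.frames}, f)
        return path

# Record two fake frames and save one episode.
root = tempfile.mkdtemp()
writer = EpisodeSketch(root)
writer.add_frame({"t": 0.0, "left_arm_q": [0.0] * 7})
writer.add_frame({"t": 0.033, "left_arm_q": [0.01] * 7})
saved = writer.save(0)
print(saved)
```

The real writer stores images alongside the state data, which is why disk usage grows quickly during long recordings.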
## 2.3 🔚 Exit
This step verifies that the environment is installed correctly.
To exit the program, press the **q** key in the terminal window (or the 'record image' window)
Coming soon.
# 3. 🤖 Real-Robot Deployment
# 3. 🚀 Usage
Real-robot deployment follows largely the same steps as simulation deployment; the differences are highlighted below.
Before starting this program, please read the [official documentation](https://support.unitree.com/home/zh/Teleoperation) at least once.
## 3.1 🖼️ Image Service
## 3.1 🖼️ Image Server
The simulation environment starts the image service automatically. For real-robot deployment, you need to start the image service manually for your own camera hardware, as follows:
Copy `image_server.py` from the `avp_teleoperate/teleop/image_server` directory to the **development computing unit PC2** of the Unitree robot (G1/H1/H1_2, etc.), and run the following command on **PC2**:
Copy `image_server.py` from the `xr_teleoperate/teleop/image_server` directory to the **development computing unit PC2** of the Unitree robot (G1/H1/H1_2, etc.):
```bash
# Reminder 1: You can transfer image_server.py to PC2 with the scp command, then ssh into PC2 and run it.
# Reminder: You can transfer image_server.py to PC2 with the scp command, then ssh into PC2 and run it.
# Assuming the development computing unit PC2's IP address is 192.168.123.164, the transfer goes as follows:
# First ssh into PC2 and create a folder for the image server
(tv) unitree@Host:~$ ssh unitree@192.168.123.164 "mkdir -p ~/image_server"
# Copy the local image_server.py to PC2's ~/image_server directory
(tv) unitree@Host:~$ scp ~/avp_teleoperate/teleop/image_server/image_server.py unitree@192.168.123.164:~/image_server/
(tv) unitree@Host:~$ scp ~/xr_teleoperate/teleop/image_server/image_server.py unitree@192.168.123.164:~/image_server/
```
# Reminder 2: This image transfer program currently supports two ways of reading images, OpenCV and the Realsense SDK. Read the comments of the ImageServer class in image_server.py to configure the image transfer service for your own camera hardware.
Then run the following command on **PC2**:
```bash
# Reminder: This image transfer program currently supports two ways of reading images, OpenCV and the Realsense SDK. Read the comments of the ImageServer class in image_server.py to configure the image transfer service for your own camera hardware.
# Now on the Unitree robot's PC2 terminal
unitree@PC2:~/image_server$ python image_server.py
# You will see terminal output like the following:
@@ -249,14 +323,18 @@ unitree@PC2:~/image_server$ python image_server.py
After the image service starts, you can use `image_client.py` on the **host** terminal to test whether the communication succeeds:
```bash
(tv) unitree@Host:~/avp_teleoperate/teleop/image_server$ python image_client.py
(tv) unitree@Host:~/xr_teleoperate/teleop/image_server$ python image_client.py
```
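To sanity-check the transport idea independent of any camera hardware: the server streams encoded frames over the network (via ZeroMQ in the real code). A dependency-free sketch of length-prefixed framing, showing what a client has to reassemble (the 4-byte header layout here is illustrative, not the project's actual wire format):

```python
import struct

def pack_frame(jpeg_bytes: bytes) -> bytes:
    # 4-byte big-endian length header followed by the encoded image bytes.
    return struct.pack(">I", len(jpeg_bytes)) + jpeg_bytes

def unpack_frame(buf: bytes) -> bytes:
    # Read the length header, then slice out exactly that many payload bytes.
    (n,) = struct.unpack(">I", buf[:4])
    return buf[4:4 + n]

# Round-trip a fake JPEG payload (real frames come from cv2.imencode or similar).
fake_jpeg = b"\xff\xd8...fake jpeg payload...\xff\xd9"
wire = pack_frame(fake_jpeg)
assert unpack_frame(wire) == fake_jpeg
```

A pub/sub socket library such as pyzmq handles message boundaries for you, which is one reason the real server uses it instead of raw TCP.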
## 3.2 ✋ Inspire Hand Service (optional)
## 3.2 ✋ Inspire Hand Service (optional)
> Note: If the selected robot configuration does not use the first-generation Inspire dexterous hands, skip this section.
> Note 1: If the selected robot configuration does not use Inspire-series dexterous hands, skip this section.
>
> Note 2: For a G1 robot configuration that uses the [Inspire DFX dexterous hand](https://support.unitree.com/home/zh/G1_developer/inspire_dfx_dexterous_hand), see [issue #46](https://github.com/unitreerobotics/xr_teleoperate/issues/46).
>
> Note 3: If the robot configuration uses the [Inspire FTP dexterous hand](https://support.unitree.com/home/zh/G1_developer/inspire_ftp_dexterity_hand), see [issue #48](https://github.com/unitreerobotics/xr_teleoperate/issues/48).
You can refer to [Dexterous Hand Development](https://support.unitree.com/home/zh/H1_developer/Dexterous_hand) to set up the environment and build the control program. First, download the dexterous hand control interface program from [this link](https://oss-global-cdn.unitree.com/static/0a8335f7498548d28412c31ea047d4be.zip), then copy it to the Unitree robot's **PC2**.
You can refer to [H1 DFX Dexterous Hand Development](https://support.unitree.com/home/zh/H1_developer/Dexterous_hand) to set up the environment and build the control program. First, download the dexterous hand control interface program from [this link](https://oss-global-cdn.unitree.com/static/0a8335f7498548d28412c31ea047d4be.zip), then copy it to the Unitree robot's **PC2**.
On the Unitree robot's **PC2**, run:
@@ -274,107 +352,68 @@ unitree@PC2:~/h1_inspire_service/build$ ./h1_hand_example
If both hands open and close continuously, it works. Once successful, you can close the `./h1_hand_example` program in terminal 2.
## 3.3 🚀 Launch
## 3.3 🚀 Start Teleoperation
> ![Warning](https://img.shields.io/badge/Warning-Important-red)
>
> 1. Everyone must keep a safe distance from the robot to prevent any potential danger!
> 2. Make sure to read the [official documentation](https://support.unitree.com/home/zh/Teleoperation) at least once before running this program.
> 3. Always make sure the robot has entered [debug mode (L2+R2)](https://support.unitree.com/home/zh/H1_developer/Remote_control) to stop the motion-control program from sending commands, which avoids potential command conflicts.
It is best to have two operators run this program, called **Operator A** and **Operator B**.
> 3. When **motion control** mode (`--motion`) is not enabled, always make sure the robot has entered [debug mode (L2+R2)](https://support.unitree.com/home/zh/G1_developer/remote_control) to stop the motion-control program from sending commands, which avoids potential command conflicts.
> 4. To teleoperate with **motion control** mode enabled, first use the [R3 remote](https://www.unitree.com/cn/R3) to make sure the robot has entered the main motion-control mode.
> 5. When **motion control** mode (`--motion`) is enabled:
> - The `A` button on the right controller is the teleoperation **exit** button;
> - Pressing both stick buttons on the left and right controllers at the same time is a soft emergency stop: the robot exits the motion-control program and enters damping mode. Use this only when necessary;
> - The left controller stick moves the robot forward/backward/left/right (the maximum speed is limited in the program);
> - The right controller stick turns the robot (the maximum speed is limited in the program).
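Point 5 notes that the maximum walking speed is limited in the program. A small sketch of what such stick-to-velocity clamping might look like (the limit values and function names here are made up for illustration; the real limits live in `teleop_hand_and_arm.py`):

```python
def clamp(v, lo, hi):
    return max(lo, min(hi, v))

# Hypothetical speed limits (m/s and rad/s), not the project's actual values.
MAX_VX, MAX_VY, MAX_WZ = 0.4, 0.3, 0.6

def stick_to_cmd(left_stick, right_stick):
    """Map stick deflections in [-1, 1] to a bounded (vx, vy, wz) command."""
    lx, ly = left_stick
    rx, _ = right_stick
    vx = clamp(ly * MAX_VX, -MAX_VX, MAX_VX)   # forward/backward
    vy = clamp(-lx * MAX_VY, -MAX_VY, MAX_VY)  # left/right
    wz = clamp(-rx * MAX_WZ, -MAX_WZ, MAX_WZ)  # turning
    return vx, vy, wz

# Even an out-of-range deflection of 1.5 is capped at the forward limit.
print(stick_to_cmd((0.0, 1.5), (0.5, 0.0)))
```

Clamping at the command level keeps operator input errors from ever reaching the motion controller as unsafe velocities.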
First, **Operator B** performs the following steps:
1. Modify the `img_config` image-client configuration below `if __name__ == '__main__':` in `~/avp_teleoperate/teleop/teleop_hand_and_arm.py`; it should match the image-server parameters you configured on PC2 in Section 3.1.
2. Choose launch arguments according to your robot configuration. Here are some example launch commands:
```bash
# 1. G1 (29 DoF) robot + Dex3-1 dexterous hands (G1_29 is the default for --arm and can be omitted)
(tv) unitree@Host:~/avp_teleoperate/teleop$ python teleop_hand_and_arm.py --arm=G1_29 --hand=dex3
# 2. G1 (29 DoF) robot only
(tv) unitree@Host:~/avp_teleoperate/teleop$ python teleop_hand_and_arm.py
# 3. G1 (23 DoF) robot
(tv) unitree@Host:~/avp_teleoperate/teleop$ python teleop_hand_and_arm.py --arm=G1_23
# 4. H1_2 robot + first-generation Inspire dexterous hands
(tv) unitree@Host:~/avp_teleoperate/teleop$ python teleop_hand_and_arm.py --arm=H1_2 --hand=inspire1
# 5. H1 robot
(tv) unitree@Host:~/avp_teleoperate/teleop$ python teleop_hand_and_arm.py --arm=H1
# 6. To enable data visualization + recording, you can also append the --record option
(tv) unitree@Host:~/avp_teleoperate/teleop$ python teleop_hand_and_arm.py --arm=G1_23 --record
```
3. If the program starts normally, the terminal's last line will stay at the output "Please enter the start signal (enter 'r' to start the subsequent program):".
Then, **Operator A** performs the following steps:
1. Put on your Apple Vision Pro device.
2. Open Safari on the Apple Vision Pro and visit: https://192.168.123.2:8012?ws=wss://192.168.123.2:8012
> Note: This IP address should match your **host** IP address.
3. Click `Enter VR` and choose `Allow` to start the VR session.
4. You will see the robot's first-person view in the Apple Vision Pro.
Next, **Operator B** can press the **r** key in the terminal to start the teleoperation program.
At this point, **Operator A** can remotely control the robot's arms (and dexterous hands).
If the `--record` option was used, **Operator B** can press the **s** key in the open "record image" window to start recording data, and press **s** again to stop. This can be repeated as needed.
> Note 1: Recorded data is stored in `avp_teleoperate/teleop/utils/data` by default; for usage instructions see this repository: [unitree_IL_lerobot](https://github.com/unitreerobotics/unitree_IL_lerobot/blob/main/README_zh.md#%E6%95%B0%E6%8D%AE%E9%87%87%E9%9B%86%E4%B8%8E%E8%BD%AC%E6%8D%A2).
>
> Note 2: Mind your available disk space while recording data.
Largely the same as simulation deployment, but note the warnings above.
## 3.4 🔚 Exit
> ![Warning](https://img.shields.io/badge/Warning-Important-red)
>
> To avoid damaging the robot, it is best for **Operator A** to place the robot's arms in a naturally lowered or other appropriate position before **Operator B** presses **q** to exit.
> To avoid damaging the robot, it is best to place the robot's arms in an appropriate position near the robot's initial pose before pressing **q** to exit.
>
> In debug mode: after the exit key is pressed, the robot's arms will return to the robot's initial pose within 5 seconds, and then control ends.
>
> In motion-control mode: after the exit key is pressed, the robot's arms will return to the motion-control pose within 5 seconds, and then control ends.
To exit the program, **Operator B** can press the **q** key in the 'record image' window.
Largely the same as simulation deployment, but note the warnings above.
# 4. 🗺️ Codebase Tutorial
```
avp_teleoperate/
xr_teleoperate/
├── assets [stores robot URDF-related files]
├── hardware [stores 3D-printed module files]
├── teleop
│ ├── image_server
│ │ ├── image_client.py [receives image data from the robot's image server]
│ │ ├── image_server.py [captures camera images and sends them over the network (runs on the robot's onboard computing unit)]
│ │ ├── image_server.py [captures camera images and sends them over the network (runs on the robot's onboard computing unit PC2)]
│ │
│ ├── open_television
│ │ ├── television.py [captures wrist and hand data from the Apple Vision Pro using Vuer]
│ │ ├── tv_wrapper.py [post-processes the captured data]
│ ├── televuer
│ │ ├── src/televuer
│ │ ├── television.py [captures head, wrist, and hand/controller data from the XR device using Vuer]
│ │ ├── tv_wrapper.py [post-processes the captured data]
│ │ ├── test
│ │ ├── _test_television.py [test program for television.py]
│ │ ├── _test_tv_wrapper.py [test program for tv_wrapper.py]
│ │
│ ├── robot_control
│ │ ├── src/dex-retargeting [dexterous-hand retargeting algorithm library]
│ │ ├── robot_arm_ik.py [inverse kinematics for the arms]
│ │ ├── robot_arm.py [controls the joints of both arms and locks the other joints]
│ │ ├── hand_retargeting.py [wrapper for the dexterous-hand retargeting library]
│ │ ├── robot_hand_inspire.py [controls the Inspire dexterous hands]
│ │ ├── robot_hand_unitree.py [controls the Unitree dexterous hands]
│ │
│ ├── utils
│ │ ├── episode_writer.py [records data for imitation learning]
│ │ ├── mat_tool.py [small math utilities]
│ │ ├── weighted_moving_filter.py [filter for smoothing joint data]
│ │ ├── rerun_visualizer.py [visualizes recorded data]
│ │
@@ -387,23 +426,23 @@ avp_teleoperate/
## 5.1 📋 List
| Item | Quantity | Link | Note |
| :-----------------------: | :--: | :----------------------------------------------------------: | :----------------------------: |
| **Unitree humanoid robot G1** | 1 | https://www.unitree.com/cn/g1 | Requires the development computing unit option |
| **Apple Vision Pro** | 1 | https://www.apple.com.cn/apple-vision-pro/ | |
| **Router** | 1 | | |
| **User PC** | 1 | | Recommended GPU: RTX 4080 or better |
| **Head stereo camera** | 1 | [for reference only] http://e.tb.cn/h.TaZxgkpfWkNCakg?tk=KKz03Kyu04u | For the robot's head view, FOV 130° |
| **Head camera mount** | 1 | https://github.com/unitreerobotics/avp_teleoperate/blob/g1/hardware/head_stereo_camera_mount.STEP | For mounting the head camera |
| Intel RealSense D405 camera | 2 | https://www.intelrealsense.com/depth-camera-d405/ | For the wrist view during dexterous manipulation |
| Wrist camera ring mount | 2 | https://github.com/unitreerobotics/avp_teleoperate/blob/g1/hardware/wrist_ring_mount.STEP | Used together with the wrist camera mounts |
| Left wrist camera mount | 1 | https://github.com/unitreerobotics/avp_teleoperate/blob/g1/hardware/left_wrist_D405_camera_mount.STEP | For mounting the left wrist D405 camera |
| Right wrist camera mount | 1 | https://github.com/unitreerobotics/avp_teleoperate/blob/g1/hardware/right_wrist_D405_camera_mount.STEP | For mounting the right wrist D405 camera |
| M3-1 hex nut | 4 | [for reference only] https://a.co/d/gQaLtHD | For wrist fasteners |
| M3x12 screw | 4 | [for reference only] https://amzn.asia/d/aU9NHSf | For wrist fasteners |
| M3x6 screw | 4 | [for reference only] https://amzn.asia/d/0nEz5dJ | For wrist fasteners |
| **M4x14 screw** | 2 | [for reference only] https://amzn.asia/d/cfta55x | For head fasteners |
| **M2x4 self-tapping screw** | 4 | [for reference only] https://amzn.asia/d/1msRa5B | For head fasteners |
| Item | Quantity | Link | Note |
| :-----------------------: | :--: | :----------------------------------------------------------: | :----------------------------------------------------------: |
| **Unitree humanoid robot G1** | 1 | https://www.unitree.com/cn/g1 | Requires the development computing unit option |
| **XR device** | 1 | https://www.apple.com.cn/apple-vision-pro/<br />https://www.meta.com/quest/quest-3<br />https://www.picoxr.com/products/pico4-ultra-enterprise | |
| Router | 1 | | Required in default mode; not needed in wireless mode |
| **User PC** | 1 | | In simulation mode, please deploy on the [officially recommended](https://docs.isaacsim.omniverse.nvidia.com/4.5.0/installation/requirements.html) hardware resources |
| **Head stereo camera** | 1 | [for reference only] http://e.tb.cn/h.TaZxgkpfWkNCakg?tk=KKz03Kyu04u | For the robot's head view |
| **Head camera mount** | 1 | https://github.com/unitreerobotics/xr_teleoperate/blob/g1/hardware/head_stereo_camera_mount.STEP | For mounting the head camera |
| Intel RealSense D405 camera | 2 | https://www.intelrealsense.com/depth-camera-d405/ | For the wrist view during dexterous manipulation |
| Wrist camera ring mount | 2 | https://github.com/unitreerobotics/xr_teleoperate/blob/g1/hardware/wrist_ring_mount.STEP | Used together with the wrist camera mounts |
| Left wrist camera mount | 1 | https://github.com/unitreerobotics/xr_teleoperate/blob/g1/hardware/left_wrist_D405_camera_mount.STEP | For mounting the left wrist D405 camera |
| Right wrist camera mount | 1 | https://github.com/unitreerobotics/xr_teleoperate/blob/g1/hardware/right_wrist_D405_camera_mount.STEP | For mounting the right wrist D405 camera |
| M3-1 hex nut | 4 | [for reference only] https://a.co/d/gQaLtHD | For wrist fasteners |
| M3x12 screw | 4 | [for reference only] https://amzn.asia/d/aU9NHSf | For wrist fasteners |
| M3x6 screw | 4 | [for reference only] https://amzn.asia/d/0nEz5dJ | For wrist fasteners |
| **M4x14 screw** | 2 | [for reference only] https://amzn.asia/d/cfta55x | For head fasteners |
| **M2x4 self-tapping screw** | 4 | [for reference only] https://amzn.asia/d/1msRa5B | For head fasteners |
> Note: Bold items are required equipment for teleoperation tasks; the remaining items are optional equipment for recording [datasets](https://huggingface.co/unitreerobotics).
@@ -475,8 +514,5 @@ avp_teleoperate/
5. https://github.com/casadi/casadi
6. https://github.com/meshcat-dev/meshcat-python
7. https://github.com/zeromq/pyzmq
8. https://github.com/unitreerobotics/unitree_dds_wrapper
9. https://github.com/tonyzhaozh/act
10. https://github.com/facebookresearch/detr
11. https://github.com/Dingry/BunnyVisionPro
12. https://github.com/unitreerobotics/unitree_sdk2_python
8. https://github.com/Dingry/BunnyVisionPro
9. https://github.com/unitreerobotics/unitree_sdk2_python

63
assets/inspire_hand/inspire_hand.yml

@@ -1,39 +1,62 @@
left:
type: vector
type: DexPilot # or vector
urdf_path: inspire_hand/inspire_hand_left.urdf
wrist_link_name: "L_hand_base_link"
# Target refers to the retargeting target, which is the robot hand
target_joint_names: ['L_thumb_proximal_yaw_joint', 'L_thumb_proximal_pitch_joint',
'L_index_proximal_joint', 'L_middle_proximal_joint',
'L_ring_proximal_joint', 'L_pinky_proximal_joint' ]
target_joint_names:
[
'L_thumb_proximal_yaw_joint',
'L_thumb_proximal_pitch_joint',
'L_index_proximal_joint',
'L_middle_proximal_joint',
'L_ring_proximal_joint',
'L_pinky_proximal_joint'
]
# for DexPilot type
wrist_link_name: "L_hand_base_link"
finger_tip_link_names: [ "L_thumb_tip", "L_index_tip", "L_middle_tip", "L_ring_tip", "L_pinky_tip" ]
# If you do not know exactly how it is used, please leave it to None for default.
target_link_human_indices_dexpilot: [[ 9, 14, 19, 24, 14, 19, 24, 19, 24, 24, 0, 0, 0, 0, 0], [ 4, 4, 4, 4, 9, 9, 9, 14, 14, 19, 4, 9, 14, 19, 24]]
# for vector type
target_origin_link_names: [ "L_hand_base_link", "L_hand_base_link", "L_hand_base_link", "L_hand_base_link", "L_hand_base_link"]
target_task_link_names: [ "L_thumb_tip", "L_index_tip", "L_middle_tip", "L_ring_tip", "L_pinky_tip" ]
scaling_factor: 1.20
# Source refers to the retargeting input, which usually corresponds to the human hand
# The joint indices of human hand joint which corresponds to each link in the target_link_names
target_link_human_indices: [ [ 0, 0, 0, 0, 0 ], [ 4, 9, 14, 19, 24 ] ]
target_link_human_indices_vector: [ [ 0, 0, 0, 0, 0 ], [ 4, 9, 14, 19, 24 ] ]
# Scaling factor for vector retargeting only
# For example, Allegro is 1.6 times larger than normal human hand, then this scaling factor should be 1.6
scaling_factor: 1.20
# A smaller alpha means stronger filtering, i.e. more smooth but also larger latency
low_pass_alpha: 0.2
right:
type: vector
type: DexPilot # or vector
urdf_path: inspire_hand/inspire_hand_right.urdf
wrist_link_name: "R_hand_base_link"
# Target refers to the retargeting target, which is the robot hand
target_joint_names: ['R_thumb_proximal_yaw_joint', 'R_thumb_proximal_pitch_joint',
'R_index_proximal_joint', 'R_middle_proximal_joint',
'R_ring_proximal_joint', 'R_pinky_proximal_joint' ]
target_joint_names:
[
'R_thumb_proximal_yaw_joint',
'R_thumb_proximal_pitch_joint',
'R_index_proximal_joint',
'R_middle_proximal_joint',
'R_ring_proximal_joint',
'R_pinky_proximal_joint'
]
# for DexPilot type
wrist_link_name: "R_hand_base_link"
finger_tip_link_names: [ "R_thumb_tip", "R_index_tip", "R_middle_tip", "R_ring_tip", "R_pinky_tip" ]
# If you do not know exactly how it is used, please leave it to None for default.
target_link_human_indices_dexpilot: [[ 9, 14, 19, 24, 14, 19, 24, 19, 24, 24, 0, 0, 0, 0, 0], [ 4, 4, 4, 4, 9, 9, 9, 14, 14, 19, 4, 9, 14, 19, 24]]
target_origin_link_names: [ "R_hand_base_link", "R_hand_base_link", "R_hand_base_link", "R_hand_base_link", "R_hand_base_link"]
target_task_link_names: [ "R_thumb_tip", "R_index_tip", "R_middle_tip", "R_ring_tip", "R_pinky_tip" ]
target_link_human_indices_vector: [ [ 0, 0, 0, 0, 0 ], [ 4, 9, 14, 19, 24 ] ]
# Scaling factor for vector retargeting only
# For example, Allegro is 1.6 times larger than normal human hand, then this scaling factor should be 1.6
scaling_factor: 1.20
# Source refers to the retargeting input, which usually corresponds to the human hand
# The joint indices of the human hand that correspond to each link in target_link_names
target_link_human_indices: [ [ 0, 0, 0, 0, 0 ], [ 4, 9, 14, 19, 24 ] ]
# A smaller alpha means stronger filtering, i.e. smoother output but larger latency
low_pass_alpha: 0.2
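The `low_pass_alpha` comment above describes a standard exponential moving-average filter. A minimal sketch of that behavior (function name and loop are illustrative, not taken from the repo):

```python
def low_pass(prev, new, alpha=0.2):
    """Exponential moving average: smaller alpha -> stronger smoothing, higher latency."""
    return alpha * new + (1.0 - alpha) * prev

# A step input converges slowly with alpha=0.2 (after n steps, q = 1 - (1 - alpha)^n):
q = 0.0
for _ in range(10):
    q = low_pass(q, 1.0, alpha=0.2)
# q is about 0.89 after 10 steps; a larger alpha would track the step almost immediately
```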

40
assets/unitree_hand/unitree_dex3.yml

@ -1,9 +1,8 @@
left:
type: vector
type: DexPilot # or vector
urdf_path: unitree_hand/unitree_dex3_left.urdf
# Target refers to the retargeting target, which is the robot hand
# target_joint_names:
target_joint_names:
[
"left_hand_thumb_0_joint",
@ -14,27 +13,26 @@ left:
"left_hand_index_0_joint",
"left_hand_index_1_joint",
]
wrist_link_name: None
# for DexPilot type
wrist_link_name: "base_link"
finger_tip_link_names: [ "thumb_tip", "index_tip", "middle_tip"]
# If you do not know exactly how it is used, please leave it to None for default.
target_link_human_indices_dexpilot: [[ 9, 14, 14, 0, 0, 0], [ 4, 4, 9, 4, 9, 14]]
# for vector type
target_origin_link_names: ["base_link_thumb","base_link_index","base_link_middle"]
target_task_link_names: ["thumb_tip","index_tip","middle_tip"]
target_link_human_indices: [[0, 0, 0], [4, 9, 14]]
target_link_human_indices_vector: [[0, 0, 0], [4, 9, 14]]
# Currently, the per-finger scaling factors are applied in the robot_hand_unitree.py file.
# Although the Unitree Dex3's three fingers share the same specifications, the retargeting scaling factor still needs to be tuned separately for each finger.
# The relevant code is as follows:
# ref_left_value[0] = ref_left_value[0] * 1.15
# ref_left_value[1] = ref_left_value[1] * 1.05
# ref_left_value[2] = ref_left_value[2] * 0.95
# ref_right_value[0] = ref_right_value[0] * 1.15
# ref_right_value[1] = ref_right_value[1] * 1.05
# ref_right_value[2] = ref_right_value[2] * 0.95
# Scaling factor for vector retargeting only
# For example, Allegro is 1.6 times larger than normal human hand, then this scaling factor should be 1.6
scaling_factor: 1.0
# A smaller alpha means stronger filtering, i.e. smoother output but larger latency
low_pass_alpha: 0.2
right:
type: vector
type: DexPilot # or vector
urdf_path: unitree_hand/unitree_dex3_right.urdf
# Target refers to the retargeting target, which is the robot hand
@ -48,12 +46,18 @@ right:
"right_hand_middle_0_joint",
"right_hand_middle_1_joint",
]
wrist_link_name: None
# for DexPilot type
wrist_link_name: "base_link"
finger_tip_link_names: [ "thumb_tip", "index_tip", "middle_tip"]
target_link_human_indices_dexpilot: [[ 9, 14, 14, 0, 0, 0], [ 4, 4, 9, 4, 9, 14]]
# for vector type
target_origin_link_names: ["base_link_thumb","base_link_index","base_link_middle"]
target_task_link_names: ["thumb_tip", "index_tip", "middle_tip"]
target_link_human_indices: [[0, 0, 0], [4, 9, 14]]
target_link_human_indices_vector: [[0, 0, 0], [4, 9, 14]]
# Scaling factor for vector retargeting only
# For example, Allegro is 1.6 times larger than normal human hand, then this scaling factor should be 1.6
scaling_factor: 1.0
# A smaller alpha means stronger filtering, i.e. smoother output but larger latency
low_pass_alpha: 0.2

27
requirements.txt

@ -1,27 +1,4 @@
aiohttp==3.9.5
aiohttp_cors==0.7.0
aiortc==1.8.0
av==11.0.0
einops==0.8.0
h5py==3.11.0
ipython==8.12.3
matplotlib==3.7.5
numpy==1.23.0
opencv_contrib_python==4.10.0.82
opencv_python==4.9.0.80
packaging==24.1
pandas==2.0.3
params_proto==2.12.1
pytransform3d==3.5.0
PyYAML==6.0.1
scikit_learn==1.3.2
scipy==1.10.1
seaborn==0.13.2
setuptools==69.5.1
torch==2.3.0
torchvision==0.18.0
vuer==0.0.32rc7
rerun-sdk==0.20.1
nlopt>=2.6.1,<2.8.0
trimesh>=4.4.0
anytree>=2.12.0
meshcat==0.3.2
sshkeyboard==2.3.1

20
teleop/image_server/image_client.py

@ -5,6 +5,8 @@ import time
import struct
from collections import deque
from multiprocessing import shared_memory
import logging_mp
logger_mp = logging_mp.get_logger(__name__)
class ImageClient:
def __init__(self, tv_img_shape = None, tv_img_shm_name = None, wrist_img_shape = None, wrist_img_shm_name = None,
@ -85,10 +87,10 @@ class ImageClient:
if frame_id != expected_frame_id:
lost = frame_id - expected_frame_id
if lost < 0:
print(f"[Image Client] Received out-of-order frame ID: {frame_id}")
logger_mp.info(f"[Image Client] Received out-of-order frame ID: {frame_id}")
else:
self._lost_frames += lost
print(f"[Image Client] Detected lost frames: {lost}, Expected frame ID: {expected_frame_id}, Received frame ID: {frame_id}")
logger_mp.warning(f"[Image Client] Detected lost frames: {lost}, Expected frame ID: {expected_frame_id}, Received frame ID: {frame_id}")
self._last_frame_id = frame_id
self._total_frames = frame_id + 1
@ -111,7 +113,7 @@ class ImageClient:
# Calculate lost frame rate
lost_frame_rate = (self._lost_frames / self._total_frames) * 100 if self._total_frames > 0 else 0
print(f"[Image Client] Real-time FPS: {real_time_fps:.2f}, Avg Latency: {avg_latency*1000:.2f} ms, Max Latency: {max_latency*1000:.2f} ms, \
logger_mp.info(f"[Image Client] Real-time FPS: {real_time_fps:.2f}, Avg Latency: {avg_latency*1000:.2f} ms, Max Latency: {max_latency*1000:.2f} ms, \
Min Latency: {min_latency*1000:.2f} ms, Jitter: {jitter*1000:.2f} ms, Lost Frame Rate: {lost_frame_rate:.2f}%")
def _close(self):
@ -119,7 +121,7 @@ class ImageClient:
self._context.term()
if self._image_show:
cv2.destroyAllWindows()
print("Image client has been closed.")
logger_mp.info("Image client has been closed.")
def receive_process(self):
@ -129,7 +131,7 @@ class ImageClient:
self._socket.connect(f"tcp://{self._server_address}:{self._port}")
self._socket.setsockopt_string(zmq.SUBSCRIBE, "")
print("\nImage client has started, waiting to receive data...")
logger_mp.info("Image client has started, waiting to receive data...")
try:
while self.running:
# Receive message
@ -144,7 +146,7 @@ class ImageClient:
jpg_bytes = message[header_size:]
timestamp, frame_id = struct.unpack('dI', header)
except struct.error as e:
print(f"[Image Client] Error unpacking header: {e}, discarding message.")
logger_mp.warning(f"[Image Client] Error unpacking header: {e}, discarding message.")
continue
else:
# No header, entire message is image data
@ -153,7 +155,7 @@ class ImageClient:
np_img = np.frombuffer(jpg_bytes, dtype=np.uint8)
current_image = cv2.imdecode(np_img, cv2.IMREAD_COLOR)
if current_image is None:
print("[Image Client] Failed to decode image.")
logger_mp.warning("[Image Client] Failed to decode image.")
continue
if self.tv_enable_shm:
@ -174,9 +176,9 @@ class ImageClient:
self._print_performance_metrics(receive_time)
except KeyboardInterrupt:
print("Image client interrupted by user.")
logger_mp.info("Image client interrupted by user.")
except Exception as e:
print(f"[Image Client] An error occurred while receiving data: {e}")
logger_mp.warning(f"[Image Client] An error occurred while receiving data: {e}")
finally:
self._close()
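The lost-frame bookkeeping that the logging calls above report (comparing the received frame ID against the expected next ID) can be sketched in isolation; the helper name is illustrative:

```python
def count_lost(last_frame_id, frame_id):
    """Frames lost between consecutive IDs; a negative result means out-of-order delivery."""
    expected = last_frame_id + 1
    return frame_id - expected

# Receiving ID 7 right after ID 3 means frames 4, 5, and 6 were dropped:
lost = count_lost(3, 7)  # -> 3
```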

42
teleop/image_server/image_server.py

@ -5,6 +5,8 @@ import struct
from collections import deque
import numpy as np
import pyrealsense2 as rs
import logging_mp
logger_mp = logging_mp.get_logger(__name__, level=logging_mp.DEBUG)
class RealSenseCamera(object):
@ -37,7 +39,7 @@ class RealSenseCamera(object):
profile = self.pipeline.start(config)
self._device = profile.get_device()
if self._device is None:
print('[Image Server] pipe_profile.get_device() is None .')
logger_mp.error('[Image Server] pipe_profile.get_device() is None.')
if self.enable_depth:
assert self._device is not None
depth_sensor = self._device.first_depth_sensor()
@ -82,7 +84,7 @@ class OpenCVCamera():
# Test if the camera can read frames
if not self._can_read_frame():
print(f"[Image Server] Camera {self.id} Error: Failed to initialize the camera or read frames. Exiting...")
logger_mp.error(f"[Image Server] Camera {self.id} Error: Failed to initialize the camera or read frames. Exiting...")
self.release()
def _can_read_frame(self):
@ -136,7 +138,7 @@ class ImageServer:
#'wrist_camera_id_numbers': ["218622271789", "241222076627"], # serial number (realsense)
}
"""
print(config)
logger_mp.info(config)
self.fps = config.get('fps', 30)
self.head_camera_type = config.get('head_camera_type', 'opencv')
self.head_image_shape = config.get('head_camera_image_shape', [480, 640]) # (height, width)
@ -161,7 +163,7 @@ class ImageServer:
camera = RealSenseCamera(img_shape=self.head_image_shape, fps=self.fps, serial_number=serial_number)
self.head_cameras.append(camera)
else:
print(f"[Image Server] Unsupported head_camera_type: {self.head_camera_type}")
logger_mp.warning(f"[Image Server] Unsupported head_camera_type: {self.head_camera_type}")
# Initialize wrist cameras if provided
self.wrist_cameras = []
@ -175,7 +177,7 @@ class ImageServer:
camera = RealSenseCamera(img_shape=self.wrist_image_shape, fps=self.fps, serial_number=serial_number)
self.wrist_cameras.append(camera)
else:
print(f"[Image Server] Unsupported wrist_camera_type: {self.wrist_camera_type}")
logger_mp.warning(f"[Image Server] Unsupported wrist_camera_type: {self.wrist_camera_type}")
# Set ZeroMQ context and socket
self.context = zmq.Context()
@ -187,21 +189,21 @@ class ImageServer:
for cam in self.head_cameras:
if isinstance(cam, OpenCVCamera):
print(f"[Image Server] Head camera {cam.id} resolution: {cam.cap.get(cv2.CAP_PROP_FRAME_HEIGHT)} x {cam.cap.get(cv2.CAP_PROP_FRAME_WIDTH)}")
logger_mp.info(f"[Image Server] Head camera {cam.id} resolution: {cam.cap.get(cv2.CAP_PROP_FRAME_HEIGHT)} x {cam.cap.get(cv2.CAP_PROP_FRAME_WIDTH)}")
elif isinstance(cam, RealSenseCamera):
print(f"[Image Server] Head camera {cam.serial_number} resolution: {cam.img_shape[0]} x {cam.img_shape[1]}")
logger_mp.info(f"[Image Server] Head camera {cam.serial_number} resolution: {cam.img_shape[0]} x {cam.img_shape[1]}")
else:
print("[Image Server] Unknown camera type in head_cameras.")
logger_mp.warning("[Image Server] Unknown camera type in head_cameras.")
for cam in self.wrist_cameras:
if isinstance(cam, OpenCVCamera):
print(f"[Image Server] Wrist camera {cam.id} resolution: {cam.cap.get(cv2.CAP_PROP_FRAME_HEIGHT)} x {cam.cap.get(cv2.CAP_PROP_FRAME_WIDTH)}")
logger_mp.info(f"[Image Server] Wrist camera {cam.id} resolution: {cam.cap.get(cv2.CAP_PROP_FRAME_HEIGHT)} x {cam.cap.get(cv2.CAP_PROP_FRAME_WIDTH)}")
elif isinstance(cam, RealSenseCamera):
print(f"[Image Server] Wrist camera {cam.serial_number} resolution: {cam.img_shape[0]} x {cam.img_shape[1]}")
logger_mp.info(f"[Image Server] Wrist camera {cam.serial_number} resolution: {cam.img_shape[0]} x {cam.img_shape[1]}")
else:
print("[Image Server] Unknown camera type in wrist_cameras.")
logger_mp.warning("[Image Server] Unknown camera type in wrist_cameras.")
print("[Image Server] Image server has started, waiting for client connections...")
logger_mp.info("[Image Server] Image server has started, waiting for client connections...")
@ -224,7 +226,7 @@ class ImageServer:
if self.frame_count % 30 == 0:
elapsed_time = current_time - self.start_time
real_time_fps = len(self.frame_times) / self.time_window
print(f"[Image Server] Real-time FPS: {real_time_fps:.2f}, Total frames sent: {self.frame_count}, Elapsed time: {elapsed_time:.2f} sec")
logger_mp.info(f"[Image Server] Real-time FPS: {real_time_fps:.2f}, Total frames sent: {self.frame_count}, Elapsed time: {elapsed_time:.2f} sec")
def _close(self):
for cam in self.head_cameras:
@ -233,7 +235,7 @@ class ImageServer:
cam.release()
self.socket.close()
self.context.term()
print("[Image Server] The server has been closed.")
logger_mp.info("[Image Server] The server has been closed.")
def send_process(self):
try:
@ -243,12 +245,12 @@ class ImageServer:
if self.head_camera_type == 'opencv':
color_image = cam.get_frame()
if color_image is None:
print("[Image Server] Head camera frame read is error.")
logger_mp.error("[Image Server] Head camera frame read error.")
break
elif self.head_camera_type == 'realsense':
color_image, depth_image = cam.get_frame()
if color_image is None:
print("[Image Server] Head camera frame read is error.")
logger_mp.error("[Image Server] Head camera frame read error.")
break
head_frames.append(color_image)
if len(head_frames) != len(self.head_cameras):
@ -261,12 +263,12 @@ class ImageServer:
if self.wrist_camera_type == 'opencv':
color_image = cam.get_frame()
if color_image is None:
print("[Image Server] Wrist camera frame read is error.")
logger_mp.error("[Image Server] Wrist camera frame read error.")
break
elif self.wrist_camera_type == 'realsense':
color_image, depth_image = cam.get_frame()
if color_image is None:
print("[Image Server] Wrist camera frame read is error.")
logger_mp.error("[Image Server] Wrist camera frame read error.")
break
wrist_frames.append(color_image)
wrist_color = cv2.hconcat(wrist_frames)
@ -278,7 +280,7 @@ class ImageServer:
ret, buffer = cv2.imencode('.jpg', full_color)
if not ret:
print("[Image Server] Frame imencode is failed.")
logger_mp.error("[Image Server] Frame imencode failed.")
continue
jpg_bytes = buffer.tobytes()
@ -299,7 +301,7 @@ class ImageServer:
self._print_performance_metrics(current_time)
except KeyboardInterrupt:
print("[Image Server] Interrupted by user.")
logger_mp.warning("[Image Server] Interrupted by user.")
finally:
self._close()
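The `'dI'` format that `image_client.py` unpacks corresponds to the header the server prepends to each JPEG payload: a double timestamp followed by an unsigned-int frame ID (typically 12 bytes with native alignment). A self-contained round-trip sketch:

```python
import struct
import time

# Server side: pack timestamp + frame id in front of the encoded image bytes
timestamp, frame_id = time.time(), 42
header = struct.pack('dI', timestamp, frame_id)
message = header + b'\xff\xd8...jpeg bytes...'

# Client side: split the fixed-size header off, then unpack it
header_size = struct.calcsize('dI')
ts, fid = struct.unpack('dI', message[:header_size])
jpg_bytes = message[header_size:]
```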

53
teleop/open_television/constants.py

@ -1,53 +0,0 @@
import numpy as np
T_to_unitree_left_wrist = np.array([[1, 0, 0, 0],
[0, 0, -1, 0],
[0, 1, 0, 0],
[0, 0, 0, 1]])
T_to_unitree_right_wrist = np.array([[1, 0, 0, 0],
[0, 0, 1, 0],
[0, -1, 0, 0],
[0, 0, 0, 1]])
T_to_unitree_hand = np.array([[0, 0, 1, 0],
[-1,0, 0, 0],
[0, -1,0, 0],
[0, 0, 0, 1]])
T_robot_openxr = np.array([[0, 0, -1, 0],
[-1, 0, 0, 0],
[0, 1, 0, 0],
[0, 0, 0, 1]])
const_head_vuer_mat = np.array([[1, 0, 0, 0],
[0, 1, 0, 1.5],
[0, 0, 1, -0.2],
[0, 0, 0, 1]])
# For G1 initial position
const_right_wrist_vuer_mat = np.array([[1, 0, 0, 0.15],
[0, 1, 0, 1.13],
[0, 0, 1, -0.3],
[0, 0, 0, 1]])
# For G1 initial position
const_left_wrist_vuer_mat = np.array([[1, 0, 0, -0.15],
[0, 1, 0, 1.13],
[0, 0, 1, -0.3],
[0, 0, 0, 1]])
# legacy
# const_right_wrist_vuer_mat = np.array([[1, 0, 0, 0.5],
# [0, 1, 0, 1],
# [0, 0, 1, -0.5],
# [0, 0, 0, 1]])
# const_left_wrist_vuer_mat = np.array([[1, 0, 0, -0.5],
# [0, 1, 0, 1],
# [0, 0, 1, -0.5],
# [0, 0, 0, 1]])

178
teleop/open_television/television.py

@ -1,178 +0,0 @@
import time
from vuer import Vuer
from vuer.schemas import ImageBackground, Hands
from multiprocessing import Array, Process, shared_memory
import numpy as np
import asyncio
import cv2
from multiprocessing import context
Value = context._default_context.Value
class TeleVision:
def __init__(self, binocular, img_shape, img_shm_name, cert_file="./cert.pem", key_file="./key.pem", ngrok=False):
self.binocular = binocular
self.img_height = img_shape[0]
if binocular:
self.img_width = img_shape[1] // 2
else:
self.img_width = img_shape[1]
if ngrok:
self.vuer = Vuer(host='0.0.0.0', queries=dict(grid=False), queue_len=3)
else:
self.vuer = Vuer(host='0.0.0.0', cert=cert_file, key=key_file, queries=dict(grid=False), queue_len=3)
self.vuer.add_handler("HAND_MOVE")(self.on_hand_move)
self.vuer.add_handler("CAMERA_MOVE")(self.on_cam_move)
existing_shm = shared_memory.SharedMemory(name=img_shm_name)
self.img_array = np.ndarray(img_shape, dtype=np.uint8, buffer=existing_shm.buf)
if binocular:
self.vuer.spawn(start=False)(self.main_image_binocular)
else:
self.vuer.spawn(start=False)(self.main_image_monocular)
self.left_hand_shared = Array('d', 16, lock=True)
self.right_hand_shared = Array('d', 16, lock=True)
self.left_landmarks_shared = Array('d', 75, lock=True)
self.right_landmarks_shared = Array('d', 75, lock=True)
self.head_matrix_shared = Array('d', 16, lock=True)
self.aspect_shared = Value('d', 1.0, lock=True)
self.process = Process(target=self.vuer_run)
self.process.daemon = True
self.process.start()
def vuer_run(self):
self.vuer.run()
async def on_cam_move(self, event, session, fps=60):
try:
self.head_matrix_shared[:] = event.value["camera"]["matrix"]
self.aspect_shared.value = event.value['camera']['aspect']
except:
pass
async def on_hand_move(self, event, session, fps=60):
try:
self.left_hand_shared[:] = event.value["leftHand"]
self.right_hand_shared[:] = event.value["rightHand"]
self.left_landmarks_shared[:] = np.array(event.value["leftLandmarks"]).flatten()
self.right_landmarks_shared[:] = np.array(event.value["rightLandmarks"]).flatten()
except:
pass
async def main_image_binocular(self, session, fps=60):
session.upsert @ Hands(fps=fps, stream=True, key="hands", showLeft=False, showRight=False)
while True:
display_image = cv2.cvtColor(self.img_array, cv2.COLOR_BGR2RGB)
# aspect_ratio = self.img_width / self.img_height
session.upsert(
[
ImageBackground(
display_image[:, :self.img_width],
aspect=1.778,
height=1,
distanceToCamera=1,
# The underlying rendering engine supported a layer binary bitmask for both objects and the camera.
# Below we set the two image planes, left and right, to layers=1 and layers=2.
# Note that these two masks are associated with left eye’s camera and the right eye’s camera.
layers=1,
format="jpeg",
quality=50,
key="background-left",
interpolate=True,
),
ImageBackground(
display_image[:, self.img_width:],
aspect=1.778,
height=1,
distanceToCamera=1,
layers=2,
format="jpeg",
quality=50,
key="background-right",
interpolate=True,
),
],
to="bgChildren",
)
# 'jpeg' encoding should give you about 30fps with a 16ms wait in-between.
await asyncio.sleep(0.016 * 2)
async def main_image_monocular(self, session, fps=60):
session.upsert @ Hands(fps=fps, stream=True, key="hands", showLeft=False, showRight=False)
while True:
display_image = cv2.cvtColor(self.img_array, cv2.COLOR_BGR2RGB)
# aspect_ratio = self.img_width / self.img_height
session.upsert(
[
ImageBackground(
display_image,
aspect=1.778,
height=1,
distanceToCamera=1,
format="jpeg",
quality=50,
key="background-mono",
interpolate=True,
),
],
to="bgChildren",
)
await asyncio.sleep(0.016)
@property
def left_hand(self):
return np.array(self.left_hand_shared[:]).reshape(4, 4, order="F")
@property
def right_hand(self):
return np.array(self.right_hand_shared[:]).reshape(4, 4, order="F")
@property
def left_landmarks(self):
return np.array(self.left_landmarks_shared[:]).reshape(25, 3)
@property
def right_landmarks(self):
return np.array(self.right_landmarks_shared[:]).reshape(25, 3)
@property
def head_matrix(self):
return np.array(self.head_matrix_shared[:]).reshape(4, 4, order="F")
@property
def aspect(self):
return float(self.aspect_shared.value)
if __name__ == '__main__':
import os
import sys
current_dir = os.path.dirname(os.path.abspath(__file__))
parent_dir = os.path.dirname(current_dir)
sys.path.append(parent_dir)
import threading
from image_server.image_client import ImageClient
# image
img_shape = (480, 640 * 2, 3)
img_shm = shared_memory.SharedMemory(create=True, size=np.prod(img_shape) * np.uint8().itemsize)
img_array = np.ndarray(img_shape, dtype=np.uint8, buffer=img_shm.buf)
img_client = ImageClient(tv_img_shape = img_shape, tv_img_shm_name = img_shm.name)
image_receive_thread = threading.Thread(target=img_client.receive_process, daemon=True)
image_receive_thread.start()
# television
tv = TeleVision(True, img_shape, img_shm.name)
print("vuer unit test program running...")
print("you can press ^C to interrupt program.")
while True:
time.sleep(0.03)
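The `reshape(4, 4, order="F")` in the properties above matters because the 16 floats streamed by Vuer appear to follow the column-major (WebGL/three.js) matrix layout, so a default C-order reshape would transpose the pose. A quick check of the difference (numpy assumed available):

```python
import numpy as np

flat = np.arange(16, dtype=float)          # stand-in for the 16-element shared Array
col_major = flat.reshape(4, 4, order="F")  # columns filled first (column-major layout)
row_major = flat.reshape(4, 4)             # default C order, rows filled first

# In a column-major 4x4 pose, elements 12..14 form the translation column:
assert list(col_major[:3, 3]) == [12.0, 13.0, 14.0]
assert row_major[0, 3] == 3.0  # C order would misread element 3 as a translation entry
```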

147
teleop/open_television/tv_wrapper.py

@ -1,147 +0,0 @@
import numpy as np
from teleop.open_television.television import TeleVision
from teleop.open_television.constants import *
from teleop.utils.mat_tool import mat_update, fast_mat_inv
"""
(basis) OpenXR Convention : y up, z back, x right.
(basis) Robot Convention : z up, y left, x front.
p.s. Vuer's all raw data follows OpenXR Convention, WORLD coordinate.
under (basis) Robot Convention, wrist's initial pose convention:
# (Left Wrist) XR/AppleVisionPro Convention:
- the x-axis pointing from wrist toward middle.
- the y-axis pointing from index toward pinky.
- the z-axis pointing from palm toward back of the hand.
# (Right Wrist) XR/AppleVisionPro Convention:
- the x-axis pointing from wrist toward middle.
- the y-axis pointing from pinky toward index.
- the z-axis pointing from palm toward back of the hand.
# (Left Wrist URDF) Unitree Convention:
- the x-axis pointing from wrist toward middle.
- the y-axis pointing from palm toward back of the hand.
- the z-axis pointing from pinky toward index.
# (Right Wrist URDF) Unitree Convention:
- the x-axis pointing from wrist toward middle.
- the y-axis pointing from back of the hand toward palm.
- the z-axis pointing from pinky toward index.
under (basis) Robot Convention, hand's initial pose convention:
# (Left Hand) XR/AppleVisionPro Convention:
- the x-axis pointing from wrist toward middle.
- the y-axis pointing from index toward pinky.
- the z-axis pointing from palm toward back of the hand.
# (Right Hand) XR/AppleVisionPro Convention:
- the x-axis pointing from wrist toward middle.
- the y-axis pointing from pinky toward index.
- the z-axis pointing from palm toward back of the hand.
# (Left Hand URDF) Unitree Convention:
- The x-axis pointing from palm toward back of the hand.
- The y-axis pointing from middle toward wrist.
- The z-axis pointing from pinky toward index.
# (Right Hand URDF) Unitree Convention:
- The x-axis pointing from palm toward back of the hand.
- The y-axis pointing from middle toward wrist.
- The z-axis pointing from index toward pinky.
p.s. From website: https://registry.khronos.org/OpenXR/specs/1.1/man/html/openxr.html.
You can find **(Left/Right Wrist) XR/AppleVisionPro Convention** related information like this below:
"The wrist joint is located at the pivot point of the wrist, which is location invariant when twisting the hand without moving the forearm.
The backward (+Z) direction is parallel to the line from wrist joint to middle finger metacarpal joint, and points away from the finger tips.
The up (+Y) direction points out towards back of the hand and perpendicular to the skin at wrist.
The X direction is perpendicular to the Y and Z directions and follows the right hand rule."
Note: The above context is of course under **(basis) OpenXR Convention**.
p.s. **(Wrist/Hand URDF) Unitree Convention** information come from URDF files.
"""
class TeleVisionWrapper:
def __init__(self, binocular, img_shape, img_shm_name):
self.tv = TeleVision(binocular, img_shape, img_shm_name)
def get_data(self):
# --------------------------------wrist-------------------------------------
# TeleVision obtains a basis coordinate that is OpenXR Convention
head_vuer_mat, head_flag = mat_update(const_head_vuer_mat, self.tv.head_matrix.copy())
left_wrist_vuer_mat, left_wrist_flag = mat_update(const_left_wrist_vuer_mat, self.tv.left_hand.copy())
right_wrist_vuer_mat, right_wrist_flag = mat_update(const_right_wrist_vuer_mat, self.tv.right_hand.copy())
# Change basis convention: VuerMat ((basis) OpenXR Convention) to WristMat ((basis) Robot Convention)
# p.s. WristMat = T_{robot}_{openxr} * VuerMat * T_{robot}_{openxr}^T
# Reason for right multiply fast_mat_inv(T_robot_openxr):
# This is similarity transformation: B = PAP^{-1}, that is B ~ A
# For example:
# - For a pose data T_r under the Robot Convention, left-multiplying WristMat means:
# - WristMat * T_r ==> T_{robot}_{openxr} * VuerMat * T_{openxr}_{robot} * T_r
# - First, transform to the OpenXR Convention (The function of T_{openxr}_{robot})
# - then, apply the rotation VuerMat in the OpenXR Convention (The function of VuerMat)
# - finally, transform back to the Robot Convention (The function of T_{robot}_{openxr})
# This results in the same rotation effect under the Robot Convention as in the OpenXR Convention.
head_mat = T_robot_openxr @ head_vuer_mat @ fast_mat_inv(T_robot_openxr)
left_wrist_mat = T_robot_openxr @ left_wrist_vuer_mat @ fast_mat_inv(T_robot_openxr)
right_wrist_mat = T_robot_openxr @ right_wrist_vuer_mat @ fast_mat_inv(T_robot_openxr)
# Change wrist convention: WristMat ((Left Wrist) XR/AppleVisionPro Convention) to UnitreeWristMat((Left Wrist URDF) Unitree Convention)
# Reason for right multiply (T_to_unitree_left_wrist) : Rotate 90 degrees counterclockwise about its own x-axis.
# Reason for right multiply (T_to_unitree_right_wrist): Rotate 90 degrees clockwise about its own x-axis.
unitree_left_wrist = left_wrist_mat @ (T_to_unitree_left_wrist if left_wrist_flag else np.eye(4))
unitree_right_wrist = right_wrist_mat @ (T_to_unitree_right_wrist if right_wrist_flag else np.eye(4))
# Transfer from WORLD to HEAD coordinate (translation only).
unitree_left_wrist[0:3, 3] = unitree_left_wrist[0:3, 3] - head_mat[0:3, 3]
unitree_right_wrist[0:3, 3] = unitree_right_wrist[0:3, 3] - head_mat[0:3, 3]
# --------------------------------hand-------------------------------------
# Homogeneous, [xyz] to [xyz1]
# p.s. np.concatenate([25,3]^T,(1,25)) ==> hand_vuer_mat.shape is (4,25)
# Now under (basis) OpenXR Convention, mat shape like this:
# x0 x1 x2 ··· x23 x24
# y0 y1 y2 ··· y23 y24
# z0 z1 z2 ··· z23 z24
# 1 1 1 ··· 1 1
left_hand_vuer_mat = np.concatenate([self.tv.left_landmarks.copy().T, np.ones((1, self.tv.left_landmarks.shape[0]))])
right_hand_vuer_mat = np.concatenate([self.tv.right_landmarks.copy().T, np.ones((1, self.tv.right_landmarks.shape[0]))])
# Change basis convention: from (basis) OpenXR Convention to (basis) Robot Convention
# Just a change of basis for 3D points. No rotation, only translation. No need to right-multiply fast_mat_inv(T_robot_openxr).
left_hand_mat = T_robot_openxr @ left_hand_vuer_mat
right_hand_mat = T_robot_openxr @ right_hand_vuer_mat
# Transfer from WORLD to WRIST coordinate. (this process under (basis) Robot Convention)
# p.s. HandMat_WristBased = WristMat_{world}_{wrist}^T * HandMat_{world}
# HandMat_WristBased = WristMat_{wrist}_{world} * HandMat_{world}, that is HandMat_{wrist}
left_hand_mat_wb = fast_mat_inv(left_wrist_mat) @ left_hand_mat
right_hand_mat_wb = fast_mat_inv(right_wrist_mat) @ right_hand_mat
# Change hand convention: HandMat ((Left Hand) XR/AppleVisionPro Convention) to UnitreeHandMat((Left Hand URDF) Unitree Convention)
# Reason for left multiply : T_to_unitree_hand @ left_hand_mat_wb ==> (4,4) @ (4,25) ==> (4,25), (4,25)[0:3, :] ==> (3,25), (3,25).T ==> (25,3)
# Now under (Left Hand URDF) Unitree Convention, mat shape like this:
# [x0, y0, z0]
# [x1, y1, z1]
# ···
# [x23,y23,z23]
# [x24,y24,z24]
unitree_left_hand = (T_to_unitree_hand @ left_hand_mat_wb)[0:3, :].T
unitree_right_hand = (T_to_unitree_hand @ right_hand_mat_wb)[0:3, :].T
# --------------------------------offset-------------------------------------
head_rmat = head_mat[:3, :3]
# The origin of the coordinate for IK Solve is the WAIST joint motor. You can use teleop/robot_control/robot_arm_ik.py Unit_Test to check it.
# The origin of the coordinate of unitree_left_wrist is HEAD. So it is necessary to translate the origin of unitree_left_wrist from HEAD to WAIST.
unitree_left_wrist[0, 3] += 0.15
unitree_right_wrist[0, 3] += 0.15
unitree_left_wrist[2, 3] += 0.45
unitree_right_wrist[2, 3] += 0.45
return head_rmat, unitree_left_wrist, unitree_right_wrist, unitree_left_hand, unitree_right_hand
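The change-of-basis comment above (B = P A P^{-1}, a similarity transformation) can be verified numerically. A sketch using the `T_robot_openxr` values from constants.py; the local `fast_mat_inv` here is a stand-in written for this example, assumed to match the behavior of the one in `teleop/utils/mat_tool.py`:

```python
import numpy as np

# Basis change from OpenXR (y up, z back, x right) to robot (z up, y left, x front)
T_robot_openxr = np.array([[0, 0, -1, 0],
                           [-1, 0, 0, 0],
                           [0, 1, 0, 0],
                           [0, 0, 0, 1]], dtype=float)

def fast_mat_inv(T):
    """Inverse of a rigid transform: transpose the rotation, negate the rotated translation."""
    inv = np.eye(4)
    inv[:3, :3] = T[:3, :3].T
    inv[:3, 3] = -T[:3, :3].T @ T[:3, 3]
    return inv

# A pure rotation expressed in the OpenXR basis...
theta = np.pi / 2
A = np.eye(4)
A[:3, :3] = [[np.cos(theta), 0, np.sin(theta)],
             [0, 1, 0],
             [-np.sin(theta), 0, np.cos(theta)]]

# ...maps to a rotation by the same angle in the robot basis:
B = T_robot_openxr @ A @ fast_mat_inv(T_robot_openxr)
# Similar matrices share their trace, and for rotations the trace fixes the angle
assert np.isclose(np.trace(B), np.trace(A))
```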

1
teleop/robot_control/dex-retargeting

@ -0,0 +1 @@
Subproject commit d7753d38c9ff11f80bafea6cd168351fd3db9b0e

22
teleop/robot_control/dex_retargeting/CITATION.cff

@ -1,22 +0,0 @@
cff-version: 1.2.0
authors:
- family-names: "Qin"
given-names: "Yuzhe"
- family-names: "Yang"
given-names: "Wei"
- family-names: "Huang"
given-names: "Binghao"
- family-names: "Van Wyk"
given-names: "Karl"
- family-names: "Su"
given-names: "Hao"
- family-names: "Wang"
given-names: "Xiaolong"
- family-names: "Chao"
given-names: "Yu-Wei"
- family-names: "Fox"
given-names: "Dieter"
date-released: 2023-10-25
title: "AnyTeleop"
message: "Thanks for using dex-retargeting. If you use this software, please cite it as below."
url: "https://github.com/dexsuite/dex-retargeting"

21
teleop/robot_control/dex_retargeting/LICENSE

@ -1,21 +0,0 @@
The MIT License (MIT)
Copyright (c) 2023 Yuzhe Qin
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

1
teleop/robot_control/dex_retargeting/__init__.py

@ -1 +0,0 @@
__version__ = "0.4.6"

85
teleop/robot_control/dex_retargeting/constants.py

@ -1,85 +0,0 @@
import enum
from pathlib import Path
from typing import Optional
import numpy as np
OPERATOR2MANO_RIGHT = np.array(
[
[0, 0, -1],
[-1, 0, 0],
[0, 1, 0],
]
)
OPERATOR2MANO_LEFT = np.array(
[
[0, 0, -1],
[1, 0, 0],
[0, -1, 0],
]
)
class RobotName(enum.Enum):
allegro = enum.auto()
shadow = enum.auto()
svh = enum.auto()
leap = enum.auto()
ability = enum.auto()
inspire = enum.auto()
panda = enum.auto()
class RetargetingType(enum.Enum):
vector = enum.auto() # For teleoperation, no finger closing prior
position = enum.auto() # For offline data processing, especially hand-object interaction data
dexpilot = enum.auto() # For teleoperation, with finger closing prior
class HandType(enum.Enum):
right = enum.auto()
left = enum.auto()
ROBOT_NAME_MAP = {
RobotName.allegro: "allegro_hand",
RobotName.shadow: "shadow_hand",
RobotName.svh: "schunk_svh_hand",
RobotName.leap: "leap_hand",
RobotName.ability: "ability_hand",
RobotName.inspire: "inspire_hand",
RobotName.panda: "panda_gripper",
}
ROBOT_NAMES = list(ROBOT_NAME_MAP.keys())
def get_default_config_path(
robot_name: RobotName, retargeting_type: RetargetingType, hand_type: HandType
) -> Optional[Path]:
config_path = Path(__file__).parent / "configs"
if retargeting_type is RetargetingType.position:
config_path = config_path / "offline"
else:
config_path = config_path / "teleop"
robot_name_str = ROBOT_NAME_MAP[robot_name]
hand_type_str = hand_type.name
if "gripper" in robot_name_str: # For gripper robots, only use gripper config file.
if retargeting_type == RetargetingType.dexpilot:
config_name = f"{robot_name_str}_dexpilot.yml"
else:
config_name = f"{robot_name_str}.yml"
else:
if retargeting_type == RetargetingType.dexpilot:
config_name = f"{robot_name_str}_{hand_type_str}_dexpilot.yml"
else:
config_name = f"{robot_name_str}_{hand_type_str}.yml"
return config_path / config_name
OPERATOR2MANO = {
HandType.right: OPERATOR2MANO_RIGHT,
HandType.left: OPERATOR2MANO_LEFT,
}

102
teleop/robot_control/dex_retargeting/kinematics_adaptor.py

@ -1,102 +0,0 @@
from abc import abstractmethod
from typing import List
import numpy as np
from .robot_wrapper import RobotWrapper
class KinematicAdaptor:
def __init__(self, robot: RobotWrapper, target_joint_names: List[str]):
self.robot = robot
self.target_joint_names = target_joint_names
# Index mapping
self.idx_pin2target = np.array([robot.get_joint_index(n) for n in target_joint_names])
@abstractmethod
def forward_qpos(self, qpos: np.ndarray) -> np.ndarray:
"""
Adapt the joint position for different kinematics constraints.
Note that the joint order of this qpos is consistent with pinocchio
Args:
qpos: the pinocchio qpos
Returns: the adapted qpos with the same shape as input
"""
pass
@abstractmethod
def backward_jacobian(self, jacobian: np.ndarray) -> np.ndarray:
"""
Adapt the jacobian for different kinematics applications.
Note that the joint order of this Jacobian is consistent with pinocchio
Args:
jacobian: the original jacobian
Returns: the adapted jacobian with the same shape as input
"""
pass
class MimicJointKinematicAdaptor(KinematicAdaptor):
def __init__(
self,
robot: RobotWrapper,
target_joint_names: List[str],
source_joint_names: List[str],
mimic_joint_names: List[str],
multipliers: List[float],
offsets: List[float],
):
super().__init__(robot, target_joint_names)
self.multipliers = np.array(multipliers)
self.offsets = np.array(offsets)
# Joint name check
        overlap = set(mimic_joint_names).intersection(set(target_joint_names))
        if len(overlap) > 0:
raise ValueError(
f"Mimic joint should not be one of the target joints.\n"
f"Mimic joints: {mimic_joint_names}.\n"
f"Target joints: {target_joint_names}\n"
f"You need to specify the target joint names explicitly in your retargeting config"
f" for robot with mimic joint constraints: {target_joint_names}"
)
# Indices in the pinocchio
self.idx_pin2source = np.array([robot.get_joint_index(name) for name in source_joint_names])
self.idx_pin2mimic = np.array([robot.get_joint_index(name) for name in mimic_joint_names])
# Indices in the output results
self.idx_target2source = np.array([self.target_joint_names.index(n) for n in source_joint_names])
# Dimension check
len_source, len_mimic = self.idx_target2source.shape[0], self.idx_pin2mimic.shape[0]
len_mul, len_offset = self.multipliers.shape[0], self.offsets.shape[0]
if not (len_mimic == len_source == len_mul == len_offset):
raise ValueError(
f"Mimic joints setting dimension mismatch.\n"
f"Source joints: {len_source}, mimic joints: {len_mimic}, multiplier: {len_mul}, offset: {len_offset}"
)
self.num_active_joints = len(robot.dof_joint_names) - len_mimic
# Uniqueness check
if len(mimic_joint_names) != len(np.unique(mimic_joint_names)):
raise ValueError(f"Redundant mimic joint names: {mimic_joint_names}")
def forward_qpos(self, pin_qpos: np.ndarray) -> np.ndarray:
mimic_qpos = pin_qpos[self.idx_pin2source] * self.multipliers + self.offsets
pin_qpos[self.idx_pin2mimic] = mimic_qpos
return pin_qpos
def backward_jacobian(self, jacobian: np.ndarray) -> np.ndarray:
target_jacobian = jacobian[..., self.idx_pin2target]
mimic_joint_jacobian = jacobian[..., self.idx_pin2mimic] * self.multipliers
for i, index in enumerate(self.idx_target2source):
target_jacobian[..., index] += mimic_joint_jacobian[..., i]
return target_jacobian
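The mimic-joint update in `forward_qpos` is plain index arithmetic: each mimic joint is overwritten with `multiplier * q_source + offset`. A minimal numpy sketch, with a hypothetical 4-DoF joint layout invented for illustration:

```python
import numpy as np

# Hypothetical hand: joint 1 mimics joint 0 with q1 = 2*q0 + 0.1,
# joint 3 mimics joint 2 with q3 = 1*q2 + 0.0.
idx_pin2source = np.array([0, 2])
idx_pin2mimic = np.array([1, 3])
multipliers = np.array([2.0, 1.0])
offsets = np.array([0.1, 0.0])

def forward_qpos(pin_qpos: np.ndarray) -> np.ndarray:
    # Same formula as MimicJointKinematicAdaptor.forward_qpos
    pin_qpos[idx_pin2mimic] = pin_qpos[idx_pin2source] * multipliers + offsets
    return pin_qpos

q = forward_qpos(np.array([0.3, 0.0, 0.5, 0.0]))
print(q)  # mimic joints filled in: [0.3 0.7 0.5 0.5]
```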

teleop/robot_control/dex_retargeting/optimizer.py

@@ -1,511 +0,0 @@
from abc import abstractmethod
from typing import List, Optional
import nlopt
import numpy as np
import torch
from .kinematics_adaptor import KinematicAdaptor, MimicJointKinematicAdaptor
from .robot_wrapper import RobotWrapper
class Optimizer:
retargeting_type = "BASE"
def __init__(
self,
robot: RobotWrapper,
target_joint_names: List[str],
target_link_human_indices: np.ndarray,
):
self.robot = robot
self.num_joints = robot.dof
joint_names = robot.dof_joint_names
idx_pin2target = []
for target_joint_name in target_joint_names:
if target_joint_name not in joint_names:
                raise ValueError(f"Joint {target_joint_name} does not appear in the robot URDF.")
idx_pin2target.append(joint_names.index(target_joint_name))
self.target_joint_names = target_joint_names
self.idx_pin2target = np.array(idx_pin2target)
self.idx_pin2fixed = np.array([i for i in range(robot.dof) if i not in idx_pin2target], dtype=int)
self.opt = nlopt.opt(nlopt.LD_SLSQP, len(idx_pin2target))
self.opt_dof = len(idx_pin2target) # This dof includes the mimic joints
# Target
self.target_link_human_indices = target_link_human_indices
# Free joint
link_names = robot.link_names
self.has_free_joint = len([name for name in link_names if "dummy" in name]) >= 6
# Kinematics adaptor
self.adaptor: Optional[KinematicAdaptor] = None
def set_joint_limit(self, joint_limits: np.ndarray, epsilon=1e-3):
if joint_limits.shape != (self.opt_dof, 2):
raise ValueError(f"Expect joint limits have shape: {(self.opt_dof, 2)}, but get {joint_limits.shape}")
self.opt.set_lower_bounds((joint_limits[:, 0] - epsilon).tolist())
self.opt.set_upper_bounds((joint_limits[:, 1] + epsilon).tolist())
def get_link_indices(self, target_link_names):
return [self.robot.get_link_index(link_name) for link_name in target_link_names]
def set_kinematic_adaptor(self, adaptor: KinematicAdaptor):
self.adaptor = adaptor
# Remove mimic joints from fixed joint list
if isinstance(adaptor, MimicJointKinematicAdaptor):
fixed_idx = self.idx_pin2fixed
mimic_idx = adaptor.idx_pin2mimic
new_fixed_id = np.array([x for x in fixed_idx if x not in mimic_idx], dtype=int)
self.idx_pin2fixed = new_fixed_id
def retarget(self, ref_value, fixed_qpos, last_qpos):
"""
Compute the retargeting results using non-linear optimization
Args:
ref_value: the reference value in cartesian space as input, different optimizer has different reference
fixed_qpos: the fixed value (not optimized) in retargeting, consistent with self.fixed_joint_names
last_qpos: the last retargeting results or initial value, consistent with function return
Returns: joint position of robot, the joint order and dim is consistent with self.target_joint_names
"""
if len(fixed_qpos) != len(self.idx_pin2fixed):
raise ValueError(
                f"Optimizer has {len(self.idx_pin2fixed)} fixed joints but fixed_qpos of length {len(fixed_qpos)} is given"
)
objective_fn = self.get_objective_function(ref_value, fixed_qpos, np.array(last_qpos).astype(np.float32))
self.opt.set_min_objective(objective_fn)
try:
qpos = self.opt.optimize(last_qpos)
return np.array(qpos, dtype=np.float32)
except RuntimeError as e:
print(e)
return np.array(last_qpos, dtype=np.float32)
@abstractmethod
def get_objective_function(self, ref_value: np.ndarray, fixed_qpos: np.ndarray, last_qpos: np.ndarray):
pass
@property
def fixed_joint_names(self):
joint_names = self.robot.dof_joint_names
return [joint_names[i] for i in self.idx_pin2fixed]
class PositionOptimizer(Optimizer):
retargeting_type = "POSITION"
def __init__(
self,
robot: RobotWrapper,
target_joint_names: List[str],
target_link_names: List[str],
target_link_human_indices: np.ndarray,
huber_delta=0.02,
norm_delta=4e-3,
):
super().__init__(robot, target_joint_names, target_link_human_indices)
self.body_names = target_link_names
self.huber_loss = torch.nn.SmoothL1Loss(beta=huber_delta)
self.norm_delta = norm_delta
# Sanity check and cache link indices
self.target_link_indices = self.get_link_indices(target_link_names)
self.opt.set_ftol_abs(1e-5)
def get_objective_function(self, target_pos: np.ndarray, fixed_qpos: np.ndarray, last_qpos: np.ndarray):
qpos = np.zeros(self.num_joints)
qpos[self.idx_pin2fixed] = fixed_qpos
torch_target_pos = torch.as_tensor(target_pos)
torch_target_pos.requires_grad_(False)
def objective(x: np.ndarray, grad: np.ndarray) -> float:
qpos[self.idx_pin2target] = x
# Kinematics forwarding for qpos
if self.adaptor is not None:
qpos[:] = self.adaptor.forward_qpos(qpos)[:]
self.robot.compute_forward_kinematics(qpos)
target_link_poses = [self.robot.get_link_pose(index) for index in self.target_link_indices]
body_pos = np.stack([pose[:3, 3] for pose in target_link_poses], axis=0) # (n ,3)
# Torch computation for accurate loss and grad
torch_body_pos = torch.as_tensor(body_pos)
torch_body_pos.requires_grad_()
# Loss term for kinematics retargeting based on 3D position error
huber_distance = self.huber_loss(torch_body_pos, torch_target_pos)
result = huber_distance.cpu().detach().item()
if grad.size > 0:
jacobians = []
for i, index in enumerate(self.target_link_indices):
link_body_jacobian = self.robot.compute_single_link_local_jacobian(qpos, index)[:3, ...]
link_pose = target_link_poses[i]
link_rot = link_pose[:3, :3]
link_kinematics_jacobian = link_rot @ link_body_jacobian
jacobians.append(link_kinematics_jacobian)
                # Note: the joint order in this jacobian is consistent with pinocchio
jacobians = np.stack(jacobians, axis=0)
huber_distance.backward()
grad_pos = torch_body_pos.grad.cpu().numpy()[:, None, :]
# Convert the jacobian from pinocchio order to target order
if self.adaptor is not None:
jacobians = self.adaptor.backward_jacobian(jacobians)
else:
jacobians = jacobians[..., self.idx_pin2target]
# Compute the gradient to the qpos
grad_qpos = np.matmul(grad_pos, jacobians)
grad_qpos = grad_qpos.mean(1).sum(0)
grad_qpos += 2 * self.norm_delta * (x - last_qpos)
grad[:] = grad_qpos[:]
return result
return objective
class VectorOptimizer(Optimizer):
retargeting_type = "VECTOR"
def __init__(
self,
robot: RobotWrapper,
target_joint_names: List[str],
target_origin_link_names: List[str],
target_task_link_names: List[str],
target_link_human_indices: np.ndarray,
huber_delta=0.02,
norm_delta=4e-3,
scaling=1.0,
):
super().__init__(robot, target_joint_names, target_link_human_indices)
self.origin_link_names = target_origin_link_names
self.task_link_names = target_task_link_names
self.huber_loss = torch.nn.SmoothL1Loss(beta=huber_delta, reduction="mean")
self.norm_delta = norm_delta
self.scaling = scaling
# Computation cache for better performance
# For one link used in multiple vectors, e.g. hand palm, we do not want to compute it multiple times
self.computed_link_names = list(set(target_origin_link_names).union(set(target_task_link_names)))
self.origin_link_indices = torch.tensor(
[self.computed_link_names.index(name) for name in target_origin_link_names]
)
self.task_link_indices = torch.tensor([self.computed_link_names.index(name) for name in target_task_link_names])
# Cache link indices that will involve in kinematics computation
self.computed_link_indices = self.get_link_indices(self.computed_link_names)
self.opt.set_ftol_abs(1e-6)
def get_objective_function(self, target_vector: np.ndarray, fixed_qpos: np.ndarray, last_qpos: np.ndarray):
qpos = np.zeros(self.num_joints)
qpos[self.idx_pin2fixed] = fixed_qpos
torch_target_vec = torch.as_tensor(target_vector) * self.scaling
torch_target_vec.requires_grad_(False)
def objective(x: np.ndarray, grad: np.ndarray) -> float:
qpos[self.idx_pin2target] = x
# Kinematics forwarding for qpos
if self.adaptor is not None:
qpos[:] = self.adaptor.forward_qpos(qpos)[:]
self.robot.compute_forward_kinematics(qpos)
target_link_poses = [self.robot.get_link_pose(index) for index in self.computed_link_indices]
body_pos = np.array([pose[:3, 3] for pose in target_link_poses])
# Torch computation for accurate loss and grad
torch_body_pos = torch.as_tensor(body_pos)
torch_body_pos.requires_grad_()
# Index link for computation
origin_link_pos = torch_body_pos[self.origin_link_indices, :]
task_link_pos = torch_body_pos[self.task_link_indices, :]
robot_vec = task_link_pos - origin_link_pos
# Loss term for kinematics retargeting based on 3D position error
vec_dist = torch.norm(robot_vec - torch_target_vec, dim=1, keepdim=False)
huber_distance = self.huber_loss(vec_dist, torch.zeros_like(vec_dist))
result = huber_distance.cpu().detach().item()
if grad.size > 0:
jacobians = []
for i, index in enumerate(self.computed_link_indices):
link_body_jacobian = self.robot.compute_single_link_local_jacobian(qpos, index)[:3, ...]
link_pose = target_link_poses[i]
link_rot = link_pose[:3, :3]
link_kinematics_jacobian = link_rot @ link_body_jacobian
jacobians.append(link_kinematics_jacobian)
                # Note: the joint order in this jacobian is consistent with pinocchio
jacobians = np.stack(jacobians, axis=0)
huber_distance.backward()
grad_pos = torch_body_pos.grad.cpu().numpy()[:, None, :]
# Convert the jacobian from pinocchio order to target order
if self.adaptor is not None:
jacobians = self.adaptor.backward_jacobian(jacobians)
else:
jacobians = jacobians[..., self.idx_pin2target]
grad_qpos = np.matmul(grad_pos, np.array(jacobians))
grad_qpos = grad_qpos.mean(1).sum(0)
grad_qpos += 2 * self.norm_delta * (x - last_qpos)
grad[:] = grad_qpos[:]
return result
return objective
class DexPilotOptimizer(Optimizer):
"""Retargeting optimizer using the method proposed in DexPilot
    This is a generalization of the optimizer described in the DexPilot paper. While the original DexPilot work
    targeted only the four-fingered Allegro Hand, this version applies the same principles to both four-fingered
    and five-fingered hands. It projects the distances between the thumb and the other fingers to facilitate
    more stable grasping.
Reference: https://arxiv.org/abs/1910.03135
Args:
robot:
target_joint_names:
finger_tip_link_names:
wrist_link_name:
gamma:
project_dist:
escape_dist:
eta1:
eta2:
scaling:
"""
retargeting_type = "DEXPILOT"
def __init__(
self,
robot: RobotWrapper,
target_joint_names: List[str],
finger_tip_link_names: List[str],
wrist_link_name: str,
target_link_human_indices: Optional[np.ndarray] = None,
huber_delta=0.03,
norm_delta=4e-3,
# DexPilot parameters
# gamma=2.5e-3,
project_dist=0.03,
escape_dist=0.05,
eta1=1e-4,
eta2=3e-2,
scaling=1.0,
):
if len(finger_tip_link_names) < 2 or len(finger_tip_link_names) > 5:
raise ValueError(
f"DexPilot optimizer can only be applied to hands with 2 to 5 fingers, but got "
f"{len(finger_tip_link_names)} fingers."
)
self.num_fingers = len(finger_tip_link_names)
origin_link_index, task_link_index = self.generate_link_indices(self.num_fingers)
if target_link_human_indices is None:
target_link_human_indices = (np.stack([origin_link_index, task_link_index], axis=0) * 4).astype(int)
link_names = [wrist_link_name] + finger_tip_link_names
target_origin_link_names = [link_names[index] for index in origin_link_index]
target_task_link_names = [link_names[index] for index in task_link_index]
super().__init__(robot, target_joint_names, target_link_human_indices)
self.origin_link_names = target_origin_link_names
self.task_link_names = target_task_link_names
self.scaling = scaling
self.huber_loss = torch.nn.SmoothL1Loss(beta=huber_delta, reduction="none")
self.norm_delta = norm_delta
# DexPilot parameters
self.project_dist = project_dist
self.escape_dist = escape_dist
self.eta1 = eta1
self.eta2 = eta2
# Computation cache for better performance
# For one link used in multiple vectors, e.g. hand palm, we do not want to compute it multiple times
self.computed_link_names = list(set(target_origin_link_names).union(set(target_task_link_names)))
self.origin_link_indices = torch.tensor(
[self.computed_link_names.index(name) for name in target_origin_link_names]
)
self.task_link_indices = torch.tensor([self.computed_link_names.index(name) for name in target_task_link_names])
# Sanity check and cache link indices
self.computed_link_indices = self.get_link_indices(self.computed_link_names)
self.opt.set_ftol_abs(1e-6)
# DexPilot cache
self.projected, self.s2_project_index_origin, self.s2_project_index_task, self.projected_dist = (
self.set_dexpilot_cache(self.num_fingers, eta1, eta2)
)
@staticmethod
def generate_link_indices(num_fingers):
"""
Example:
>>> generate_link_indices(4)
([2, 3, 4, 3, 4, 4, 0, 0, 0, 0], [1, 1, 1, 2, 2, 3, 1, 2, 3, 4])
"""
origin_link_index = []
task_link_index = []
# Add indices for connections between fingers
for i in range(1, num_fingers):
for j in range(i + 1, num_fingers + 1):
origin_link_index.append(j)
task_link_index.append(i)
# Add indices for connections to the base (0)
for i in range(1, num_fingers + 1):
origin_link_index.append(0)
task_link_index.append(i)
return origin_link_index, task_link_index
@staticmethod
def set_dexpilot_cache(num_fingers, eta1, eta2):
"""
Example:
>>> set_dexpilot_cache(4, 0.1, 0.2)
(array([False, False, False, False, False, False]),
[1, 2, 2],
[0, 0, 1],
array([0.1, 0.1, 0.1, 0.2, 0.2, 0.2]))
"""
projected = np.zeros(num_fingers * (num_fingers - 1) // 2, dtype=bool)
s2_project_index_origin = []
s2_project_index_task = []
for i in range(0, num_fingers - 2):
for j in range(i + 1, num_fingers - 1):
s2_project_index_origin.append(j)
s2_project_index_task.append(i)
projected_dist = np.array([eta1] * (num_fingers - 1) + [eta2] * ((num_fingers - 1) * (num_fingers - 2) // 2))
return projected, s2_project_index_origin, s2_project_index_task, projected_dist
def get_objective_function(self, target_vector: np.ndarray, fixed_qpos: np.ndarray, last_qpos: np.ndarray):
qpos = np.zeros(self.num_joints)
qpos[self.idx_pin2fixed] = fixed_qpos
len_proj = len(self.projected)
len_s2 = len(self.s2_project_index_task)
len_s1 = len_proj - len_s2
# Update projection indicator
target_vec_dist = np.linalg.norm(target_vector[:len_proj], axis=1)
self.projected[:len_s1][target_vec_dist[0:len_s1] < self.project_dist] = True
self.projected[:len_s1][target_vec_dist[0:len_s1] > self.escape_dist] = False
self.projected[len_s1:len_proj] = np.logical_and(
self.projected[:len_s1][self.s2_project_index_origin], self.projected[:len_s1][self.s2_project_index_task]
)
self.projected[len_s1:len_proj] = np.logical_and(
self.projected[len_s1:len_proj], target_vec_dist[len_s1:len_proj] <= 0.03
)
# Update weight vector
normal_weight = np.ones(len_proj, dtype=np.float32) * 1
high_weight = np.array([200] * len_s1 + [400] * len_s2, dtype=np.float32)
weight = np.where(self.projected, high_weight, normal_weight)
        # Give the vectors from wrist to fingertips a weight larger than 1 (len_proj + num_fingers)
        # so the mapping stays intuitive when hand pose detection is noisy
weight = torch.from_numpy(
np.concatenate([weight, np.ones(self.num_fingers, dtype=np.float32) * len_proj + self.num_fingers])
)
# Compute reference distance vector
normal_vec = target_vector * self.scaling # (10, 3)
dir_vec = target_vector[:len_proj] / (target_vec_dist[:, None] + 1e-6) # (6, 3)
projected_vec = dir_vec * self.projected_dist[:, None] # (6, 3)
# Compute final reference vector
reference_vec = np.where(self.projected[:, None], projected_vec, normal_vec[:len_proj]) # (6, 3)
reference_vec = np.concatenate([reference_vec, normal_vec[len_proj:]], axis=0) # (10, 3)
torch_target_vec = torch.as_tensor(reference_vec, dtype=torch.float32)
torch_target_vec.requires_grad_(False)
def objective(x: np.ndarray, grad: np.ndarray) -> float:
qpos[self.idx_pin2target] = x
# Kinematics forwarding for qpos
if self.adaptor is not None:
qpos[:] = self.adaptor.forward_qpos(qpos)[:]
self.robot.compute_forward_kinematics(qpos)
target_link_poses = [self.robot.get_link_pose(index) for index in self.computed_link_indices]
body_pos = np.array([pose[:3, 3] for pose in target_link_poses])
# Torch computation for accurate loss and grad
torch_body_pos = torch.as_tensor(body_pos)
torch_body_pos.requires_grad_()
# Index link for computation
origin_link_pos = torch_body_pos[self.origin_link_indices, :]
task_link_pos = torch_body_pos[self.task_link_indices, :]
robot_vec = task_link_pos - origin_link_pos
# Loss term for kinematics retargeting based on 3D position error
# Different from the original DexPilot, we use huber loss here instead of the squared dist
vec_dist = torch.norm(robot_vec - torch_target_vec, dim=1, keepdim=False)
            huber_distance = (
                self.huber_loss(vec_dist, torch.zeros_like(vec_dist)) * weight / (robot_vec.shape[0])
            ).sum()
result = huber_distance.cpu().detach().item()
if grad.size > 0:
jacobians = []
for i, index in enumerate(self.computed_link_indices):
link_body_jacobian = self.robot.compute_single_link_local_jacobian(qpos, index)[:3, ...]
link_pose = target_link_poses[i]
link_rot = link_pose[:3, :3]
link_kinematics_jacobian = link_rot @ link_body_jacobian
jacobians.append(link_kinematics_jacobian)
                # Note: the joint order in this jacobian is consistent with pinocchio
jacobians = np.stack(jacobians, axis=0)
huber_distance.backward()
grad_pos = torch_body_pos.grad.cpu().numpy()[:, None, :]
# Convert the jacobian from pinocchio order to target order
if self.adaptor is not None:
jacobians = self.adaptor.backward_jacobian(jacobians)
else:
jacobians = jacobians[..., self.idx_pin2target]
grad_qpos = np.matmul(grad_pos, np.array(jacobians))
grad_qpos = grad_qpos.mean(1).sum(0)
                # In the original DexPilot, gamma = 2.5e-3 is a weight regularizing the Allegro joint angles
                # toward zero, which corresponds to a fully opened hand.
                # In our implementation, we regularize the joint angles toward the previous joint angles.
grad_qpos += 2 * self.norm_delta * (x - last_qpos)
grad[:] = grad_qpos[:]
return result
return objective
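The projection indicator updated at the top of `get_objective_function` implements a hysteresis band: a finger pair switches to the projected (pinch) target once its distance drops below `project_dist`, and only releases after it exceeds `escape_dist`; in between, the previous state is kept. A standalone numpy sketch of that first-stage update, using the default thresholds:

```python
import numpy as np

project_dist, escape_dist = 0.03, 0.05  # defaults from DexPilotOptimizer

def update(projected: np.ndarray, dist: np.ndarray) -> np.ndarray:
    # Same hysteresis as DexPilotOptimizer: enter below project_dist,
    # exit above escape_dist, otherwise keep the previous state.
    projected = projected.copy()
    projected[dist < project_dist] = True
    projected[dist > escape_dist] = False
    return projected

state = np.zeros(3, dtype=bool)  # state for 3 thumb-finger pairs
state = update(state, np.array([0.02, 0.04, 0.06]))
print(state)  # only the first pair is close enough to project
state = update(state, np.array([0.04, 0.04, 0.04]))
print(state)  # unchanged: all distances are inside the dead band
```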

teleop/robot_control/dex_retargeting/optimizer_utils.py

@@ -1,17 +0,0 @@
class LPFilter:
def __init__(self, alpha):
self.alpha = alpha
self.y = None
self.is_init = False
def next(self, x):
if not self.is_init:
self.y = x
self.is_init = True
return self.y.copy()
self.y = self.y + self.alpha * (x - self.y)
return self.y.copy()
def reset(self):
self.y = None
self.is_init = False
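`LPFilter` is a one-pole exponential smoother, `y ← y + α(x − y)`: the first sample initializes the state, and each later sample moves the state a fraction `α` toward the input. A condensed, runnable version of the same recurrence for demonstration:

```python
import numpy as np

class LPFilter:
    # Condensed restatement of optimizer_utils.LPFilter for a runnable demo.
    def __init__(self, alpha):
        self.alpha = alpha
        self.y = None  # uninitialized until the first sample arrives

    def next(self, x):
        # First call latches x; later calls blend toward x by alpha.
        self.y = x if self.y is None else self.y + self.alpha * (x - self.y)
        return self.y.copy()

f = LPFilter(alpha=0.5)
f.next(np.array([1.0]))         # first sample initializes the state to [1.0]
print(f.next(np.array([0.0])))  # [0.5]
print(f.next(np.array([0.0])))  # [0.25]
```

Smaller `alpha` means heavier smoothing (and more lag), which is why the retargeting config exposes it as `low_pass_alpha`.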

teleop/robot_control/dex_retargeting/retargeting_config.py

@@ -1,251 +0,0 @@
from dataclasses import dataclass
from pathlib import Path
from typing import List, Optional, Dict, Any, Tuple
from typing import Union
import numpy as np
import yaml
import os
from . import yourdfpy as urdf
from .kinematics_adaptor import MimicJointKinematicAdaptor
from .optimizer_utils import LPFilter
from .robot_wrapper import RobotWrapper
from .seq_retarget import SeqRetargeting
from .yourdfpy import DUMMY_JOINT_NAMES
@dataclass
class RetargetingConfig:
type: str
urdf_path: str
# Whether to add free joint to the root of the robot. Free joint enable the robot hand move freely in the space
add_dummy_free_joint: bool = False
# Source refers to the retargeting input, which usually corresponds to the human hand
# The joint indices of human hand joint which corresponds to each link in the target_link_names
target_link_human_indices: Optional[np.ndarray] = None
# The link on the robot hand which corresponding to the wrist of human hand
wrist_link_name: Optional[str] = None
# Position retargeting link names
target_link_names: Optional[List[str]] = None
# Vector retargeting link names
target_joint_names: Optional[List[str]] = None
target_origin_link_names: Optional[List[str]] = None
target_task_link_names: Optional[List[str]] = None
# DexPilot retargeting link names
finger_tip_link_names: Optional[List[str]] = None
# Scaling factor for vector retargeting only
# For example, Allegro is 1.6 times larger than normal human hand, then this scaling factor should be 1.6
scaling_factor: float = 1.0
# Optimization parameters
normal_delta: float = 4e-3
huber_delta: float = 2e-2
# DexPilot optimizer parameters
project_dist: float = 0.03
escape_dist: float = 0.05
# Joint limit tag
has_joint_limits: bool = True
# Mimic joint tag
ignore_mimic_joint: bool = False
# Low pass filter
low_pass_alpha: float = 0.1
_TYPE = ["vector", "position", "dexpilot"]
_DEFAULT_URDF_DIR = "./"
def __post_init__(self):
# Retargeting type check
self.type = self.type.lower()
if self.type not in self._TYPE:
raise ValueError(f"Retargeting type must be one of {self._TYPE}")
# Vector retargeting requires: target_origin_link_names + target_task_link_names
# Position retargeting requires: target_link_names
        if self.type == "vector":
            if self.target_origin_link_names is None or self.target_task_link_names is None:
                raise ValueError(f"Vector retargeting requires: target_origin_link_names + target_task_link_names")
            if self.target_link_human_indices is None:
                raise ValueError(f"Vector retargeting requires: target_link_human_indices")
            if len(self.target_task_link_names) != len(self.target_origin_link_names):
                raise ValueError(f"Vector retargeting origin and task links dim mismatch")
            if self.target_link_human_indices.shape != (2, len(self.target_origin_link_names)):
                raise ValueError(f"Vector retargeting link names and link indices dim mismatch")
        elif self.type == "position":
            if self.target_link_names is None:
                raise ValueError(f"Position retargeting requires: target_link_names")
            if self.target_link_human_indices is None:
                raise ValueError(f"Position retargeting requires: target_link_human_indices")
            self.target_link_human_indices = self.target_link_human_indices.squeeze()
            if self.target_link_human_indices.shape != (len(self.target_link_names),):
                raise ValueError(f"Position retargeting link names and link indices dim mismatch")
        elif self.type == "dexpilot":
            if self.finger_tip_link_names is None or self.wrist_link_name is None:
                raise ValueError(f"DexPilot retargeting requires: finger_tip_link_names + wrist_link_name")
if self.target_link_human_indices is not None:
print(
"\033[33m",
"Target link human indices is provided in the DexPilot retargeting config, which is uncommon.\n"
"If you do not know exactly how it is used, please leave it to None for default.\n"
"\033[00m",
)
# URDF path check
urdf_path = Path(self.urdf_path)
if not urdf_path.is_absolute():
            urdf_path = Path(self._DEFAULT_URDF_DIR) / urdf_path
urdf_path = urdf_path.absolute()
if not urdf_path.exists():
raise ValueError(f"URDF path {urdf_path} does not exist")
self.urdf_path = str(urdf_path)
@classmethod
def set_default_urdf_dir(cls, urdf_dir: Union[str, Path]):
path = Path(urdf_dir)
if not path.exists():
            raise ValueError(f"URDF dir {urdf_dir} does not exist.")
cls._DEFAULT_URDF_DIR = urdf_dir
@classmethod
def load_from_file(cls, config_path: Union[str, Path], override: Optional[Dict] = None):
path = Path(config_path)
if not path.is_absolute():
path = path.absolute()
with path.open("r") as f:
yaml_config = yaml.load(f, Loader=yaml.FullLoader)
cfg = yaml_config["retargeting"]
return cls.from_dict(cfg, override)
@classmethod
def from_dict(cls, cfg: Dict[str, Any], override: Optional[Dict] = None):
if "target_link_human_indices" in cfg:
cfg["target_link_human_indices"] = np.array(cfg["target_link_human_indices"])
if override is not None:
for key, value in override.items():
cfg[key] = value
config = RetargetingConfig(**cfg)
return config
def build(self) -> SeqRetargeting:
from .optimizer import (
VectorOptimizer,
PositionOptimizer,
DexPilotOptimizer,
)
import tempfile
        # Process the URDF with yourdfpy so that mesh and file paths resolve before loading into pinocchio
robot_urdf = urdf.URDF.load(
self.urdf_path, add_dummy_free_joints=self.add_dummy_free_joint, build_scene_graph=False
)
urdf_name = self.urdf_path.split(os.path.sep)[-1]
temp_dir = tempfile.mkdtemp(prefix="dex_retargeting-")
temp_path = f"{temp_dir}/{urdf_name}"
robot_urdf.write_xml_file(temp_path)
# Load pinocchio model
robot = RobotWrapper(temp_path)
# Add 6D dummy joint to target joint names so that it will also be optimized
if self.add_dummy_free_joint and self.target_joint_names is not None:
self.target_joint_names = DUMMY_JOINT_NAMES + self.target_joint_names
joint_names = self.target_joint_names if self.target_joint_names is not None else robot.dof_joint_names
if self.type == "position":
optimizer = PositionOptimizer(
robot,
joint_names,
target_link_names=self.target_link_names,
target_link_human_indices=self.target_link_human_indices,
norm_delta=self.normal_delta,
huber_delta=self.huber_delta,
)
elif self.type == "vector":
optimizer = VectorOptimizer(
robot,
joint_names,
target_origin_link_names=self.target_origin_link_names,
target_task_link_names=self.target_task_link_names,
target_link_human_indices=self.target_link_human_indices,
scaling=self.scaling_factor,
norm_delta=self.normal_delta,
huber_delta=self.huber_delta,
)
elif self.type == "dexpilot":
optimizer = DexPilotOptimizer(
robot,
joint_names,
finger_tip_link_names=self.finger_tip_link_names,
wrist_link_name=self.wrist_link_name,
target_link_human_indices=self.target_link_human_indices,
scaling=self.scaling_factor,
project_dist=self.project_dist,
escape_dist=self.escape_dist,
)
else:
            raise RuntimeError(f"Unsupported retargeting type: {self.type}")
if 0 <= self.low_pass_alpha <= 1:
lp_filter = LPFilter(self.low_pass_alpha)
else:
lp_filter = None
# Parse mimic joints and set kinematics adaptor for optimizer
has_mimic_joints, source_names, mimic_names, multipliers, offsets = parse_mimic_joint(robot_urdf)
if has_mimic_joints and not self.ignore_mimic_joint:
adaptor = MimicJointKinematicAdaptor(
robot,
target_joint_names=joint_names,
source_joint_names=source_names,
mimic_joint_names=mimic_names,
multipliers=multipliers,
offsets=offsets,
)
optimizer.set_kinematic_adaptor(adaptor)
print(
"\033[34m",
"Mimic joint adaptor enabled. The mimic joint tags in the URDF will be considered during retargeting.\n"
"To disable mimic joint adaptor, consider setting ignore_mimic_joint=True in the configuration.",
"\033[39m",
)
retargeting = SeqRetargeting(
optimizer,
has_joint_limits=self.has_joint_limits,
lp_filter=lp_filter,
)
return retargeting
def get_retargeting_config(config_path: Union[str, Path]) -> RetargetingConfig:
config = RetargetingConfig.load_from_file(config_path)
return config
def parse_mimic_joint(robot_urdf: urdf.URDF) -> Tuple[bool, List[str], List[str], List[float], List[float]]:
mimic_joint_names = []
source_joint_names = []
multipliers = []
offsets = []
for name, joint in robot_urdf.joint_map.items():
if joint.mimic is not None:
mimic_joint_names.append(name)
source_joint_names.append(joint.mimic.joint)
multipliers.append(joint.mimic.multiplier)
offsets.append(joint.mimic.offset)
return len(mimic_joint_names) > 0, source_joint_names, mimic_joint_names, multipliers, offsets
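For reference, `load_from_file` expects the dataclass fields above nested under a top-level `retargeting` key. A minimal illustrative config (link names and the URDF path are hypothetical, not taken from any shipped robot description):

```yaml
retargeting:
  type: dexpilot
  urdf_path: my_hand/my_hand_right.urdf      # resolved against _DEFAULT_URDF_DIR if relative
  wrist_link_name: wrist_link                # hypothetical link names
  finger_tip_link_names: [thumb_tip, index_tip, middle_tip, ring_tip, pinky_tip]
  scaling_factor: 1.2
  low_pass_alpha: 0.2
```

Any of these keys can also be replaced at load time via the `override` dict of `load_from_file` / `from_dict`, which wins key-by-key over the file contents.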

teleop/robot_control/dex_retargeting/robot_wrapper.py

@@ -1,93 +0,0 @@
from typing import List
import numpy as np
import numpy.typing as npt
import pinocchio as pin
class RobotWrapper:
"""
This class does not take mimic joint into consideration
"""
def __init__(self, urdf_path: str, use_collision=False, use_visual=False):
# Create robot model and data
self.model: pin.Model = pin.buildModelFromUrdf(urdf_path)
self.data: pin.Data = self.model.createData()
if use_visual or use_collision:
raise NotImplementedError
self.q0 = pin.neutral(self.model)
if self.model.nv != self.model.nq:
            raise NotImplementedError("Cannot handle robots with special joints (nq != nv).")
# -------------------------------------------------------------------------- #
# Robot property
# -------------------------------------------------------------------------- #
@property
def joint_names(self) -> List[str]:
return list(self.model.names)
@property
def dof_joint_names(self) -> List[str]:
nqs = self.model.nqs
return [name for i, name in enumerate(self.model.names) if nqs[i] > 0]
@property
def dof(self) -> int:
return self.model.nq
@property
def link_names(self) -> List[str]:
link_names = []
for i, frame in enumerate(self.model.frames):
link_names.append(frame.name)
return link_names
@property
def joint_limits(self):
lower = self.model.lowerPositionLimit
upper = self.model.upperPositionLimit
return np.stack([lower, upper], axis=1)
# -------------------------------------------------------------------------- #
# Query function
# -------------------------------------------------------------------------- #
def get_joint_index(self, name: str):
return self.dof_joint_names.index(name)
def get_link_index(self, name: str):
if name not in self.link_names:
raise ValueError(f"{name} is not a link name. Valid link names: \n{self.link_names}")
return self.model.getFrameId(name, pin.BODY)
def get_joint_parent_child_frames(self, joint_name: str):
joint_id = self.model.getFrameId(joint_name)
parent_id = self.model.frames[joint_id].parent
child_id = -1
for idx, frame in enumerate(self.model.frames):
if frame.previousFrame == joint_id:
child_id = idx
if child_id == -1:
raise ValueError(f"Can not find child link of {joint_name}")
return parent_id, child_id
# -------------------------------------------------------------------------- #
# Kinematics function
# -------------------------------------------------------------------------- #
def compute_forward_kinematics(self, qpos: npt.NDArray):
pin.forwardKinematics(self.model, self.data, qpos)
def get_link_pose(self, link_id: int) -> npt.NDArray:
pose: pin.SE3 = pin.updateFramePlacement(self.model, self.data, link_id)
return pose.homogeneous
def get_link_pose_inv(self, link_id: int) -> npt.NDArray:
pose: pin.SE3 = pin.updateFramePlacement(self.model, self.data, link_id)
return pose.inverse().homogeneous
def compute_single_link_local_jacobian(self, qpos, link_id: int) -> npt.NDArray:
J = pin.computeFrameJacobian(self.model, self.data, qpos, link_id)
return J
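The pose helpers above return 4x4 homogeneous transforms, and `get_link_pose_inv` returns the inverse placement. For an SE(3) transform the inverse has a closed form that avoids a general matrix inverse; a minimal numpy sketch, independent of the Pinocchio model used here:

```python
import numpy as np

def se3_inverse(T: np.ndarray) -> np.ndarray:
    """Closed-form inverse of a 4x4 homogeneous transform [R t; 0 1]."""
    R, t = T[:3, :3], T[:3, 3]
    T_inv = np.eye(4)
    T_inv[:3, :3] = R.T          # inverse rotation is the transpose
    T_inv[:3, 3] = -R.T @ t      # translation is rotated back and negated
    return T_inv

# Example: a 90-degree rotation about z plus a translation
T = np.eye(4)
T[:3, :3] = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]])
T[:3, 3] = [1.0, 2.0, 3.0]
assert np.allclose(se3_inverse(T) @ T, np.eye(4))
```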

151
teleop/robot_control/dex_retargeting/seq_retarget.py

@@ -1,151 +0,0 @@
import time
from typing import Optional
import numpy as np
from pytransform3d import rotations
from .constants import OPERATOR2MANO, HandType
from .optimizer import Optimizer
from .optimizer_utils import LPFilter
class SeqRetargeting:
def __init__(
self,
optimizer: Optimizer,
has_joint_limits=True,
lp_filter: Optional[LPFilter] = None,
):
self.optimizer = optimizer
robot = self.optimizer.robot
# Joint limit
self.has_joint_limits = has_joint_limits
joint_limits = np.ones_like(robot.joint_limits)
joint_limits[:, 0] = -1e4 # a large value is equivalent to no limit
joint_limits[:, 1] = 1e4
if has_joint_limits:
joint_limits[:] = robot.joint_limits[:]
self.optimizer.set_joint_limit(joint_limits[self.optimizer.idx_pin2target])
self.joint_limits = joint_limits[self.optimizer.idx_pin2target]
# Temporal information
self.last_qpos = joint_limits.mean(1)[self.optimizer.idx_pin2target].astype(np.float32)
self.accumulated_time = 0
self.num_retargeting = 0
# Filter
self.filter = lp_filter
# Warm started
self.is_warm_started = False
def warm_start(
self,
wrist_pos: np.ndarray,
wrist_quat: np.ndarray,
hand_type: HandType = HandType.right,
is_mano_convention: bool = False,
):
"""
Initialize the wrist joint pose using analytical computation instead of retargeting optimization.
This function is specifically for position retargeting with a flying robot hand, i.e. a robot whose first six joints form a 6-DoF free joint.
You are not expected to use this function for vector retargeting, e.g. when you are working on teleoperation.
Args:
wrist_pos: position of the hand wrist, typically from human hand pose
wrist_quat: quaternion of the hand wrist, the same convention as the operator frame definition if not is_mano_convention
hand_type: hand type, used to determine the operator2mano matrix
is_mano_convention: whether the wrist_quat is in mano convention
"""
# This function can only be used when the first joints of robot are free joints
if len(wrist_pos) != 3:
raise ValueError(f"Wrist pos: {wrist_pos} is not a 3-dim vector.")
if len(wrist_quat) != 4:
raise ValueError(f"Wrist quat: {wrist_quat} is not a 4-dim vector.")
operator2mano = OPERATOR2MANO[hand_type] if is_mano_convention else np.eye(3)
robot = self.optimizer.robot
target_wrist_pose = np.eye(4)
target_wrist_pose[:3, :3] = rotations.matrix_from_quaternion(wrist_quat) @ operator2mano.T
target_wrist_pose[:3, 3] = wrist_pos
name_list = [
"dummy_x_translation_joint",
"dummy_y_translation_joint",
"dummy_z_translation_joint",
"dummy_x_rotation_joint",
"dummy_y_rotation_joint",
"dummy_z_rotation_joint",
]
wrist_link_id = robot.get_joint_parent_child_frames(name_list[5])[1]
# Set the dummy joints angles to zero
old_qpos = robot.q0
new_qpos = old_qpos.copy()
for num, joint_name in enumerate(self.optimizer.target_joint_names):
if joint_name in name_list:
new_qpos[num] = 0
robot.compute_forward_kinematics(new_qpos)
root2wrist = robot.get_link_pose_inv(wrist_link_id)
target_root_pose = target_wrist_pose @ root2wrist
euler = rotations.euler_from_matrix(target_root_pose[:3, :3], 0, 1, 2, extrinsic=False)
pose_vec = np.concatenate([target_root_pose[:3, 3], euler])
# Find the dummy joints
for num, joint_name in enumerate(self.optimizer.target_joint_names):
if joint_name in name_list:
index = name_list.index(joint_name)
self.last_qpos[num] = pose_vec[index]
self.is_warm_started = True
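The pose chain in `warm_start` can be checked with plain numpy: given the desired wrist pose in the world and the wrist placement in the root frame at zero dummy-joint angles, the root pose that realizes the target is obtained by right-multiplying with the inverse placement (`root2wrist` in the code). A small sketch with made-up transforms:

```python
import numpy as np

def make_pose(R, t):
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return T

# Hypothetical fixed wrist placement in the root frame (at zero dummy joints)
Rz = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]])   # 90 degrees about z
T_root_wrist = make_pose(Rz, [0.1, 0.0, 0.3])

# Desired wrist pose in the world frame
T_world_wrist = make_pose(np.eye(3), [0.5, 0.2, 0.8])

# root2wrist is the inverse placement, so the root pose is:
root2wrist = np.linalg.inv(T_root_wrist)
T_world_root = T_world_wrist @ root2wrist

# Placing the root there reproduces the desired wrist pose
assert np.allclose(T_world_root @ T_root_wrist, T_world_wrist)
```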
def retarget(self, ref_value, fixed_qpos=np.array([])):
tic = time.perf_counter()
qpos = self.optimizer.retarget(
ref_value=ref_value.astype(np.float32),
fixed_qpos=fixed_qpos.astype(np.float32),
last_qpos=np.clip(self.last_qpos, self.joint_limits[:, 0], self.joint_limits[:, 1]),
)
self.accumulated_time += time.perf_counter() - tic
self.num_retargeting += 1
self.last_qpos = qpos
robot_qpos = np.zeros(self.optimizer.robot.dof)
robot_qpos[self.optimizer.idx_pin2fixed] = fixed_qpos
robot_qpos[self.optimizer.idx_pin2target] = qpos
if self.optimizer.adaptor is not None:
robot_qpos = self.optimizer.adaptor.forward_qpos(robot_qpos)
if self.filter is not None:
robot_qpos = self.filter.next(robot_qpos)
return robot_qpos
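`retarget` rebuilds the full robot configuration by scattering the optimized and fixed joint values through two index arrays (`idx_pin2target`, `idx_pin2fixed`). The numpy pattern, with illustrative indices:

```python
import numpy as np

dof = 6
idx_pin2target = np.array([0, 2, 4])    # joints the optimizer controls (illustrative)
idx_pin2fixed = np.array([1, 3, 5])     # joints held at user-supplied values

qpos = np.array([0.1, 0.2, 0.3])        # optimizer output
fixed_qpos = np.array([9.0, 9.1, 9.2])  # fixed joint values

robot_qpos = np.zeros(dof)
robot_qpos[idx_pin2fixed] = fixed_qpos  # scatter fixed values into place
robot_qpos[idx_pin2target] = qpos       # scatter optimized values into place
assert np.allclose(robot_qpos, [0.1, 9.0, 0.2, 9.1, 0.3, 9.2])
```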
def set_qpos(self, robot_qpos: np.ndarray):
target_qpos = robot_qpos[self.optimizer.idx_pin2target]
self.last_qpos = target_qpos
def get_qpos(self, fixed_qpos: Optional[np.ndarray] = None):
robot_qpos = np.zeros(self.optimizer.robot.dof)
robot_qpos[self.optimizer.idx_pin2target] = self.last_qpos
if fixed_qpos is not None:
robot_qpos[self.optimizer.idx_pin2fixed] = fixed_qpos
return robot_qpos
def verbose(self):
min_value = self.optimizer.opt.last_optimum_value()
print(f"Retargeting {self.num_retargeting} times takes: {self.accumulated_time}s")
print(f"Last distance: {min_value}")
def reset(self):
self.last_qpos = self.joint_limits.mean(1).astype(np.float32)
self.num_retargeting = 0
self.accumulated_time = 0
@property
def joint_names(self):
return self.optimizer.robot.dof_joint_names
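The optional `LPFilter` smooths consecutive retargeting solutions. Its implementation lives in `optimizer_utils.py`; a first-order low-pass of the usual shape (an assumption about its internals, not a copy) looks like:

```python
import numpy as np

class SimpleLPFilter:
    """First-order exponential smoothing: y_k = (1 - a) * y_{k-1} + a * x_k."""
    def __init__(self, alpha: float):
        self.alpha = alpha
        self.y = None
    def next(self, x: np.ndarray) -> np.ndarray:
        # First sample passes through unfiltered; later samples are blended
        self.y = x.copy() if self.y is None else (1 - self.alpha) * self.y + self.alpha * x
        return self.y

f = SimpleLPFilter(alpha=0.5)
f.next(np.array([0.0, 0.0]))
out = f.next(np.array([1.0, 1.0]))
assert np.allclose(out, [0.5, 0.5])
```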

2237
teleop/robot_control/dex_retargeting/yourdfpy.py
File diff suppressed because it is too large

16
teleop/robot_control/hand_retargeting.py

@@ -1,7 +1,9 @@
from .dex_retargeting.retargeting_config import RetargetingConfig
from dex_retargeting import RetargetingConfig
from pathlib import Path
import yaml
from enum import Enum
import logging_mp
logger_mp = logging_mp.get_logger(__name__)
class HandType(Enum):
INSPIRE_HAND = "../assets/inspire_hand/inspire_hand.yml"
@@ -36,6 +38,8 @@ class HandRetargeting:
self.left_retargeting_joint_names = self.left_retargeting.joint_names
self.right_retargeting_joint_names = self.right_retargeting.joint_names
self.left_indices = self.left_retargeting.optimizer.target_link_human_indices
self.right_indices = self.right_retargeting.optimizer.target_link_human_indices
if hand_type == HandType.UNITREE_DEX3 or hand_type == HandType.UNITREE_DEX3_Unit_Test:
# In section "Sort by message structure" of https://support.unitree.com/home/en/G1_developer/dexterous_hand
@@ -49,11 +53,11 @@ class HandRetargeting:
self.right_dex_retargeting_to_hardware = [ self.right_retargeting_joint_names.index(name) for name in self.right_dex3_api_joint_names]
# Archive: This is the joint order of the dex-retargeting library version 0.1.1.
# print([joint.get_name() for joint in self.left_retargeting.optimizer.robot.get_active_joints()])
# logger_mp.info([joint.get_name() for joint in self.left_retargeting.optimizer.robot.get_active_joints()])
# ['left_hand_thumb_0_joint', 'left_hand_thumb_1_joint', 'left_hand_thumb_2_joint',
# 'left_hand_middle_0_joint', 'left_hand_middle_1_joint',
# 'left_hand_index_0_joint', 'left_hand_index_1_joint']
# print([joint.get_name() for joint in self.right_retargeting.optimizer.robot.get_active_joints()])
# logger_mp.info([joint.get_name() for joint in self.right_retargeting.optimizer.robot.get_active_joints()])
# ['right_hand_thumb_0_joint', 'right_hand_thumb_1_joint', 'right_hand_thumb_2_joint',
# 'right_hand_middle_0_joint', 'right_hand_middle_1_joint',
# 'right_hand_index_0_joint', 'right_hand_index_1_joint']
@@ -66,11 +70,11 @@ class HandRetargeting:
self.right_dex_retargeting_to_hardware = [ self.right_retargeting_joint_names.index(name) for name in self.right_inspire_api_joint_names]
except FileNotFoundError:
print(f"Configuration file not found: {config_file_path}")
logger_mp.warning(f"Configuration file not found: {config_file_path}")
raise
except yaml.YAMLError as e:
print(f"YAML error while reading {config_file_path}: {e}")
logger_mp.warning(f"YAML error while reading {config_file_path}: {e}")
raise
except Exception as e:
print(f"An error occurred: {e}")
logger_mp.error(f"An error occurred: {e}")
raise
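The `*_dex_retargeting_to_hardware` lists built in this file are permutation indices from the retargeting library's joint order into the hardware API's joint order. The pattern, with hypothetical joint names:

```python
# Hypothetical joint orders: retargeting output order vs. hardware command order
retargeting_joint_names = ["thumb_0", "thumb_1", "index_0", "middle_0"]
hardware_joint_names = ["index_0", "middle_0", "thumb_0", "thumb_1"]

# Entry i says where hardware joint i lives in the retargeting output
retargeting_to_hardware = [retargeting_joint_names.index(n) for n in hardware_joint_names]
assert retargeting_to_hardware == [2, 3, 0, 1]

# Reordering a vector of joint targets into hardware order
qpos = [0.1, 0.2, 0.3, 0.4]                   # in retargeting order
hw_cmd = [qpos[i] for i in retargeting_to_hardware]
assert hw_cmd == [0.3, 0.4, 0.1, 0.2]
```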

178
teleop/robot_control/robot_arm.py

@@ -11,8 +11,13 @@ from unitree_sdk2py.utils.crc import CRC
from unitree_sdk2py.idl.unitree_go.msg.dds_ import ( LowCmd_ as go_LowCmd, LowState_ as go_LowState) # idl for h1
from unitree_sdk2py.idl.default import unitree_go_msg_dds__LowCmd_
kTopicLowCommand = "rt/lowcmd"
import logging_mp
logger_mp = logging_mp.get_logger(__name__)
kTopicLowCommand_Debug = "rt/lowcmd"
kTopicLowCommand_Motion = "rt/arm_sdk"
kTopicLowState = "rt/lowstate"
G1_29_Num_Motors = 35
G1_23_Num_Motors = 35
H1_2_Num_Motors = 35
@@ -54,11 +59,12 @@ class DataBuffer:
self.data = data
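`DataBuffer` (only partially visible in this hunk) acts as a latest-value mailbox between the DDS subscriber thread and the control loop. A minimal thread-safe version of that idea, assuming the `GetData`/`SetData` interface used elsewhere in this file:

```python
import threading

class DataBuffer:
    """Keep only the most recent sample, guarded by a lock."""
    def __init__(self):
        self._lock = threading.Lock()
        self._data = None
    def SetData(self, data):
        with self._lock:
            self._data = data
    def GetData(self):
        with self._lock:
            return self._data

buf = DataBuffer()
assert buf.GetData() is None        # nothing received yet
buf.SetData({"q": [0.0] * 14})
assert buf.GetData()["q"][0] == 0.0
```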
class G1_29_ArmController:
def __init__(self):
print("Initialize G1_29_ArmController...")
def __init__(self, motion_mode = False, simulation_mode = False):
logger_mp.info("Initialize G1_29_ArmController...")
self.q_target = np.zeros(14)
self.tauff_target = np.zeros(14)
self.motion_mode = motion_mode
self.simulation_mode = simulation_mode
self.kp_high = 300.0
self.kd_high = 3.0
self.kp_low = 80.0
@@ -76,7 +82,10 @@ class G1_29_ArmController:
# initialize lowcmd publisher and lowstate subscriber
ChannelFactoryInitialize(0)
self.lowcmd_publisher = ChannelPublisher(kTopicLowCommand, hg_LowCmd)
if self.motion_mode:
self.lowcmd_publisher = ChannelPublisher(kTopicLowCommand_Motion, hg_LowCmd)
else:
self.lowcmd_publisher = ChannelPublisher(kTopicLowCommand_Debug, hg_LowCmd)
self.lowcmd_publisher.Init()
self.lowstate_subscriber = ChannelSubscriber(kTopicLowState, hg_LowState)
self.lowstate_subscriber.Init()
@@ -89,7 +98,8 @@ class G1_29_ArmController:
while not self.lowstate_buffer.GetData():
time.sleep(0.01)
print("[G1_29_ArmController] Waiting to subscribe dds...")
logger_mp.warning("[G1_29_ArmController] Waiting to subscribe dds...")
logger_mp.info("[G1_29_ArmController] Subscribe dds ok.")
# initialize hg's lowcmd msg
self.crc = CRC()
@@ -98,9 +108,9 @@ class G1_29_ArmController:
self.msg.mode_machine = self.get_mode_machine()
self.all_motor_q = self.get_current_motor_q()
print(f"Current all body motor state q:\n{self.all_motor_q} \n")
print(f"Current two arms motor state q:\n{self.get_current_dual_arm_q()}\n")
print("Lock all joints except two arms...\n")
logger_mp.info(f"Current all body motor state q:\n{self.all_motor_q} \n")
logger_mp.info(f"Current two arms motor state q:\n{self.get_current_dual_arm_q()}\n")
logger_mp.info("Lock all joints except two arms...\n")
arm_indices = set(member.value for member in G1_29_JointArmIndex)
for id in G1_29_JointIndex:
@@ -120,7 +130,7 @@ class G1_29_ArmController:
self.msg.motor_cmd[id].kp = self.kp_high
self.msg.motor_cmd[id].kd = self.kd_high
self.msg.motor_cmd[id].q = self.all_motor_q[id]
print("Lock OK!\n")
logger_mp.info("Lock OK!\n")
# initialize publish thread
self.publish_thread = threading.Thread(target=self._ctrl_motor_state)
@@ -128,7 +138,7 @@ class G1_29_ArmController:
self.publish_thread.daemon = True
self.publish_thread.start()
print("Initialize G1_29_ArmController OK!\n")
logger_mp.info("Initialize G1_29_ArmController OK!\n")
def _subscribe_motor_state(self):
while True:
@@ -149,6 +159,9 @@ class G1_29_ArmController:
return cliped_arm_q_target
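`clip_arm_q_target` (its body is elided by the hunk) bounds how far the commanded joint angles may move in one control step, so commanded joint speed stays within `arm_velocity_limit`. The usual form of such a limiter, as an assumption about its internals:

```python
import numpy as np

def clip_q_target(q_target, q_current, velocity_limit, control_dt):
    """Limit the per-step change so joint speed stays within velocity_limit."""
    max_step = velocity_limit * control_dt           # max change per control period
    delta = np.clip(q_target - q_current, -max_step, max_step)
    return q_current + delta

q_cur = np.zeros(3)
q_cmd = clip_q_target(np.array([1.0, -1.0, 0.005]), q_cur,
                      velocity_limit=1.0, control_dt=0.01)
assert np.allclose(q_cmd, [0.01, -0.01, 0.005])  # big jumps clipped, small ones pass
```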
def _ctrl_motor_state(self):
if self.motion_mode:
self.msg.motor_cmd[G1_29_JointIndex.kNotUsedJoint0].q = 1.0;
while True:
start_time = time.time()
@@ -156,12 +169,15 @@ class G1_29_ArmController:
arm_q_target = self.q_target
arm_tauff_target = self.tauff_target
cliped_arm_q_target = self.clip_arm_q_target(arm_q_target, velocity_limit = self.arm_velocity_limit)
if self.simulation_mode:
cliped_arm_q_target = arm_q_target
else:
cliped_arm_q_target = self.clip_arm_q_target(arm_q_target, velocity_limit = self.arm_velocity_limit)
for idx, id in enumerate(G1_29_JointArmIndex):
self.msg.motor_cmd[id].q = cliped_arm_q_target[idx]
self.msg.motor_cmd[id].dq = 0
self.msg.motor_cmd[id].tau = arm_tauff_target[idx]
self.msg.crc = self.crc.Crc(self.msg)
self.lowcmd_publisher.Write(self.msg)
@@ -174,8 +190,8 @@ class G1_29_ArmController:
all_t_elapsed = current_time - start_time
sleep_time = max(0, (self.control_dt - all_t_elapsed))
time.sleep(sleep_time)
# print(f"arm_velocity_limit:{self.arm_velocity_limit}")
# print(f"sleep_time:{sleep_time}")
# logger_mp.debug(f"arm_velocity_limit:{self.arm_velocity_limit}")
# logger_mp.debug(f"sleep_time:{sleep_time}")
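The publish loop above holds a fixed rate by sleeping for whatever remains of `control_dt` after each cycle's work. The same timing pattern in isolation:

```python
import time

control_dt = 0.02          # 50 Hz control period
ticks = 0
loop_start = time.time()
for _ in range(5):
    start_time = time.time()
    ticks += 1             # stand-in for building and publishing the command
    elapsed = time.time() - start_time
    time.sleep(max(0.0, control_dt - elapsed))   # never sleep a negative amount
total = time.time() - loop_start
assert ticks == 5
assert total >= 5 * control_dt * 0.9   # ran at roughly the requested rate
```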
def ctrl_dual_arm(self, q_target, tauff_target):
'''Set control target values q & tau of the left and right arm motors.'''
@@ -201,16 +217,23 @@ class G1_29_ArmController:
def ctrl_dual_arm_go_home(self):
'''Move both the left and right arms of the robot to their home position by setting the target joint angles (q) and torques (tau) to zero.'''
print("[G1_29_ArmController] ctrl_dual_arm_go_home start...")
logger_mp.info("[G1_29_ArmController] ctrl_dual_arm_go_home start...")
max_attempts = 100
current_attempts = 0
with self.ctrl_lock:
self.q_target = np.zeros(14)
# self.tauff_target = np.zeros(14)
tolerance = 0.05 # Tolerance threshold for joint angles to determine "close to zero", can be adjusted based on your motor's precision requirements
while True:
while current_attempts < max_attempts:
current_q = self.get_current_dual_arm_q()
if np.all(np.abs(current_q) < tolerance):
print("[G1_29_ArmController] both arms have reached the home position.")
if self.motion_mode:
for weight in np.arange(1, 0, -0.01):
self.msg.motor_cmd[G1_29_JointIndex.kNotUsedJoint0].q = weight;
time.sleep(0.02)
logger_mp.info("[G1_29_ArmController] both arms have reached the home position.")
break
current_attempts += 1
time.sleep(0.05)
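Note the change above from an unbounded `while True` to a bounded attempt loop: the go-home routine now gives up after `max_attempts` polls instead of hanging forever if the arms never settle within tolerance. Stripped to its essentials, the watchdog pattern is:

```python
import time

def wait_until(predicate, max_attempts=100, poll_dt=0.0):
    """Poll predicate until it holds or the attempt budget runs out."""
    for _ in range(max_attempts):
        if predicate():
            return True
        time.sleep(poll_dt)
    return False

state = {"q": 3}
def near_zero():
    state["q"] -= 1          # stand-in for the arm converging toward home
    return abs(state["q"]) < 1

assert wait_until(near_zero) is True                        # converges while polling
assert wait_until(lambda: False, max_attempts=3) is False   # budget exhausted
```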
def speed_gradual_max(self, t = 5.0):
@@ -318,8 +341,10 @@ class G1_29_JointIndex(IntEnum):
kNotUsedJoint5 = 34
class G1_23_ArmController:
def __init__(self):
print("Initialize G1_23_ArmController...")
def __init__(self, simulation_mode = False):
self.simulation_mode = simulation_mode
logger_mp.info("Initialize G1_23_ArmController...")
self.q_target = np.zeros(10)
self.tauff_target = np.zeros(10)
@@ -340,7 +365,7 @@ class G1_23_ArmController:
# initialize lowcmd publisher and lowstate subscriber
ChannelFactoryInitialize(0)
self.lowcmd_publisher = ChannelPublisher(kTopicLowCommand, hg_LowCmd)
self.lowcmd_publisher = ChannelPublisher(kTopicLowCommand_Debug, hg_LowCmd)
self.lowcmd_publisher.Init()
self.lowstate_subscriber = ChannelSubscriber(kTopicLowState, hg_LowState)
self.lowstate_subscriber.Init()
@@ -353,7 +378,8 @@ class G1_23_ArmController:
while not self.lowstate_buffer.GetData():
time.sleep(0.01)
print("[G1_23_ArmController] Waiting to subscribe dds...")
logger_mp.warning("[G1_23_ArmController] Waiting to subscribe dds...")
logger_mp.info("[G1_23_ArmController] Subscribe dds ok.")
# initialize hg's lowcmd msg
self.crc = CRC()
@@ -362,9 +388,9 @@ class G1_23_ArmController:
self.msg.mode_machine = self.get_mode_machine()
self.all_motor_q = self.get_current_motor_q()
print(f"Current all body motor state q:\n{self.all_motor_q} \n")
print(f"Current two arms motor state q:\n{self.get_current_dual_arm_q()}\n")
print("Lock all joints except two arms...\n")
logger_mp.info(f"Current all body motor state q:\n{self.all_motor_q} \n")
logger_mp.info(f"Current two arms motor state q:\n{self.get_current_dual_arm_q()}\n")
logger_mp.info("Lock all joints except two arms...\n")
arm_indices = set(member.value for member in G1_23_JointArmIndex)
for id in G1_23_JointIndex:
@@ -384,7 +410,7 @@ class G1_23_ArmController:
self.msg.motor_cmd[id].kp = self.kp_high
self.msg.motor_cmd[id].kd = self.kd_high
self.msg.motor_cmd[id].q = self.all_motor_q[id]
print("Lock OK!\n")
logger_mp.info("Lock OK!\n")
# initialize publish thread
self.publish_thread = threading.Thread(target=self._ctrl_motor_state)
@@ -392,7 +418,7 @@ class G1_23_ArmController:
self.publish_thread.daemon = True
self.publish_thread.start()
print("Initialize G1_23_ArmController OK!\n")
logger_mp.info("Initialize G1_23_ArmController OK!\n")
def _subscribe_motor_state(self):
while True:
@@ -420,7 +446,10 @@ class G1_23_ArmController:
arm_q_target = self.q_target
arm_tauff_target = self.tauff_target
cliped_arm_q_target = self.clip_arm_q_target(arm_q_target, velocity_limit = self.arm_velocity_limit)
if self.simulation_mode:
cliped_arm_q_target = arm_q_target
else:
cliped_arm_q_target = self.clip_arm_q_target(arm_q_target, velocity_limit = self.arm_velocity_limit)
for idx, id in enumerate(G1_23_JointArmIndex):
self.msg.motor_cmd[id].q = cliped_arm_q_target[idx]
@@ -438,8 +467,8 @@ class G1_23_ArmController:
all_t_elapsed = current_time - start_time
sleep_time = max(0, (self.control_dt - all_t_elapsed))
time.sleep(sleep_time)
# print(f"arm_velocity_limit:{self.arm_velocity_limit}")
# print(f"sleep_time:{sleep_time}")
# logger_mp.debug(f"arm_velocity_limit:{self.arm_velocity_limit}")
# logger_mp.debug(f"sleep_time:{sleep_time}")
def ctrl_dual_arm(self, q_target, tauff_target):
'''Set control target values q & tau of the left and right arm motors.'''
@@ -465,16 +494,19 @@ class G1_23_ArmController:
def ctrl_dual_arm_go_home(self):
'''Move both the left and right arms of the robot to their home position by setting the target joint angles (q) and torques (tau) to zero.'''
print("[G1_23_ArmController] ctrl_dual_arm_go_home start...")
logger_mp.info("[G1_23_ArmController] ctrl_dual_arm_go_home start...")
max_attempts = 100
current_attempts = 0
with self.ctrl_lock:
self.q_target = np.zeros(10)
# self.tauff_target = np.zeros(10)
tolerance = 0.05 # Tolerance threshold for joint angles to determine "close to zero", can be adjusted based on your motor's precision requirements
while True:
while current_attempts < max_attempts:
current_q = self.get_current_dual_arm_q()
if np.all(np.abs(current_q) < tolerance):
print("[G1_23_ArmController] both arms have reached the home position.")
logger_mp.info("[G1_23_ArmController] both arms have reached the home position.")
break
current_attempts += 1
time.sleep(0.05)
def speed_gradual_max(self, t = 5.0):
@@ -574,8 +606,10 @@ class G1_23_JointIndex(IntEnum):
kNotUsedJoint5 = 34
class H1_2_ArmController:
def __init__(self):
print("Initialize H1_2_ArmController...")
def __init__(self, simulation_mode = False):
self.simulation_mode = simulation_mode
logger_mp.info("Initialize H1_2_ArmController...")
self.q_target = np.zeros(14)
self.tauff_target = np.zeros(14)
@@ -596,7 +630,7 @@ class H1_2_ArmController:
# initialize lowcmd publisher and lowstate subscriber
ChannelFactoryInitialize(0)
self.lowcmd_publisher = ChannelPublisher(kTopicLowCommand, hg_LowCmd)
self.lowcmd_publisher = ChannelPublisher(kTopicLowCommand_Debug, hg_LowCmd)
self.lowcmd_publisher.Init()
self.lowstate_subscriber = ChannelSubscriber(kTopicLowState, hg_LowState)
self.lowstate_subscriber.Init()
@@ -609,7 +643,8 @@ class H1_2_ArmController:
while not self.lowstate_buffer.GetData():
time.sleep(0.01)
print("[H1_2_ArmController] Waiting to subscribe dds...")
logger_mp.warning("[H1_2_ArmController] Waiting to subscribe dds...")
logger_mp.info("[H1_2_ArmController] Subscribe dds ok.")
# initialize hg's lowcmd msg
self.crc = CRC()
@@ -618,9 +653,9 @@ class H1_2_ArmController:
self.msg.mode_machine = self.get_mode_machine()
self.all_motor_q = self.get_current_motor_q()
print(f"Current all body motor state q:\n{self.all_motor_q} \n")
print(f"Current two arms motor state q:\n{self.get_current_dual_arm_q()}\n")
print("Lock all joints except two arms...\n")
logger_mp.info(f"Current all body motor state q:\n{self.all_motor_q} \n")
logger_mp.info(f"Current two arms motor state q:\n{self.get_current_dual_arm_q()}\n")
logger_mp.info("Lock all joints except two arms...\n")
arm_indices = set(member.value for member in H1_2_JointArmIndex)
for id in H1_2_JointIndex:
@@ -640,7 +675,7 @@ class H1_2_ArmController:
self.msg.motor_cmd[id].kp = self.kp_high
self.msg.motor_cmd[id].kd = self.kd_high
self.msg.motor_cmd[id].q = self.all_motor_q[id]
print("Lock OK!\n")
logger_mp.info("Lock OK!\n")
# initialize publish thread
self.publish_thread = threading.Thread(target=self._ctrl_motor_state)
@@ -648,7 +683,7 @@ class H1_2_ArmController:
self.publish_thread.daemon = True
self.publish_thread.start()
print("Initialize H1_2_ArmController OK!\n")
logger_mp.info("Initialize H1_2_ArmController OK!\n")
def _subscribe_motor_state(self):
while True:
@@ -676,7 +711,10 @@ class H1_2_ArmController:
arm_q_target = self.q_target
arm_tauff_target = self.tauff_target
cliped_arm_q_target = self.clip_arm_q_target(arm_q_target, velocity_limit = self.arm_velocity_limit)
if self.simulation_mode:
cliped_arm_q_target = arm_q_target
else:
cliped_arm_q_target = self.clip_arm_q_target(arm_q_target, velocity_limit = self.arm_velocity_limit)
for idx, id in enumerate(H1_2_JointArmIndex):
self.msg.motor_cmd[id].q = cliped_arm_q_target[idx]
@@ -694,8 +732,8 @@ class H1_2_ArmController:
all_t_elapsed = current_time - start_time
sleep_time = max(0, (self.control_dt - all_t_elapsed))
time.sleep(sleep_time)
# print(f"arm_velocity_limit:{self.arm_velocity_limit}")
# print(f"sleep_time:{sleep_time}")
# logger_mp.debug(f"arm_velocity_limit:{self.arm_velocity_limit}")
# logger_mp.debug(f"sleep_time:{sleep_time}")
def ctrl_dual_arm(self, q_target, tauff_target):
'''Set control target values q & tau of the left and right arm motors.'''
@@ -721,16 +759,19 @@ class H1_2_ArmController:
def ctrl_dual_arm_go_home(self):
'''Move both the left and right arms of the robot to their home position by setting the target joint angles (q) and torques (tau) to zero.'''
print("[H1_2_ArmController] ctrl_dual_arm_go_home start...")
logger_mp.info("[H1_2_ArmController] ctrl_dual_arm_go_home start...")
max_attempts = 100
current_attempts = 0
with self.ctrl_lock:
self.q_target = np.zeros(14)
# self.tauff_target = np.zeros(14)
tolerance = 0.05 # Tolerance threshold for joint angles to determine "close to zero", can be adjusted based on your motor's precision requirements
while True:
while current_attempts < max_attempts:
current_q = self.get_current_dual_arm_q()
if np.all(np.abs(current_q) < tolerance):
print("[H1_2_ArmController] both arms have reached the home position.")
logger_mp.info("[H1_2_ArmController] both arms have reached the home position.")
break
current_attempts += 1
time.sleep(0.05)
def speed_gradual_max(self, t = 5.0):
@@ -837,8 +878,10 @@ class H1_2_JointIndex(IntEnum):
kNotUsedJoint7 = 34
class H1_ArmController:
def __init__(self):
print("Initialize H1_ArmController...")
def __init__(self, simulation_mode = False):
self.simulation_mode = simulation_mode
logger_mp.info("Initialize H1_ArmController...")
self.q_target = np.zeros(8)
self.tauff_target = np.zeros(8)
@@ -857,7 +900,7 @@ class H1_ArmController:
# initialize lowcmd publisher and lowstate subscriber
ChannelFactoryInitialize(0)
self.lowcmd_publisher = ChannelPublisher(kTopicLowCommand, go_LowCmd)
self.lowcmd_publisher = ChannelPublisher(kTopicLowCommand_Debug, go_LowCmd)
self.lowcmd_publisher.Init()
self.lowstate_subscriber = ChannelSubscriber(kTopicLowState, go_LowState)
self.lowstate_subscriber.Init()
@@ -870,7 +913,8 @@ class H1_ArmController:
while not self.lowstate_buffer.GetData():
time.sleep(0.01)
print("[H1_ArmController] Waiting to subscribe dds...")
logger_mp.warning("[H1_ArmController] Waiting to subscribe dds...")
logger_mp.info("[H1_ArmController] Subscribe dds ok.")
# initialize h1's lowcmd msg
self.crc = CRC()
@@ -881,9 +925,9 @@ class H1_ArmController:
self.msg.gpio = 0
self.all_motor_q = self.get_current_motor_q()
print(f"Current all body motor state q:\n{self.all_motor_q} \n")
print(f"Current two arms motor state q:\n{self.get_current_dual_arm_q()}\n")
print("Lock all joints except two arms...\n")
logger_mp.info(f"Current all body motor state q:\n{self.all_motor_q} \n")
logger_mp.info(f"Current two arms motor state q:\n{self.get_current_dual_arm_q()}\n")
logger_mp.info("Lock all joints except two arms...\n")
for id in H1_JointIndex:
if self._Is_weak_motor(id):
@@ -895,7 +939,7 @@ class H1_ArmController:
self.msg.motor_cmd[id].kd = self.kd_high
self.msg.motor_cmd[id].mode = 0x0A
self.msg.motor_cmd[id].q = self.all_motor_q[id]
print("Lock OK!\n")
logger_mp.info("Lock OK!\n")
# initialize publish thread
self.publish_thread = threading.Thread(target=self._ctrl_motor_state)
@@ -903,7 +947,7 @@ class H1_ArmController:
self.publish_thread.daemon = True
self.publish_thread.start()
print("Initialize H1_ArmController OK!\n")
logger_mp.info("Initialize H1_ArmController OK!\n")
def _subscribe_motor_state(self):
while True:
@@ -931,7 +975,10 @@ class H1_ArmController:
arm_q_target = self.q_target
arm_tauff_target = self.tauff_target
cliped_arm_q_target = self.clip_arm_q_target(arm_q_target, velocity_limit = self.arm_velocity_limit)
if self.simulation_mode:
cliped_arm_q_target = arm_q_target
else:
cliped_arm_q_target = self.clip_arm_q_target(arm_q_target, velocity_limit = self.arm_velocity_limit)
for idx, id in enumerate(H1_JointArmIndex):
self.msg.motor_cmd[id].q = cliped_arm_q_target[idx]
@@ -949,8 +996,8 @@ class H1_ArmController:
all_t_elapsed = current_time - start_time
sleep_time = max(0, (self.control_dt - all_t_elapsed))
time.sleep(sleep_time)
# print(f"arm_velocity_limit:{self.arm_velocity_limit}")
# print(f"sleep_time:{sleep_time}")
# logger_mp.debug(f"arm_velocity_limit:{self.arm_velocity_limit}")
# logger_mp.debug(f"sleep_time:{sleep_time}")
def ctrl_dual_arm(self, q_target, tauff_target):
'''Set control target values q & tau of the left and right arm motors.'''
@@ -972,16 +1019,19 @@ class H1_ArmController:
def ctrl_dual_arm_go_home(self):
'''Move both the left and right arms of the robot to their home position by setting the target joint angles (q) and torques (tau) to zero.'''
print("[H1_ArmController] ctrl_dual_arm_go_home start...")
logger_mp.info("[H1_ArmController] ctrl_dual_arm_go_home start...")
max_attempts = 100
current_attempts = 0
with self.ctrl_lock:
self.q_target = np.zeros(8)
# self.tauff_target = np.zeros(8)
tolerance = 0.05 # Tolerance threshold for joint angles to determine "close to zero", can be adjusted based on your motor's precision requirements
while True:
while current_attempts < max_attempts:
current_q = self.get_current_dual_arm_q()
if np.all(np.abs(current_q) < tolerance):
print("[H1_ArmController] both arms have reached the home position.")
logger_mp.info("[H1_ArmController] both arms have reached the home position.")
break
current_attempts += 1
time.sleep(0.05)
def speed_gradual_max(self, t = 5.0):
@@ -1054,7 +1104,7 @@ if __name__ == "__main__":
import pinocchio as pin
arm_ik = G1_29_ArmIK(Unit_Test = True, Visualization = False)
arm = G1_29_ArmController()
arm = G1_29_ArmController(simulation_mode=True)
# arm_ik = G1_23_ArmIK(Unit_Test = True, Visualization = False)
# arm = G1_23_ArmController()
# arm_ik = H1_2_ArmIK(Unit_Test = True, Visualization = False)

32
teleop/robot_control/robot_arm_ik.py

@@ -7,7 +7,8 @@ from pinocchio import casadi as cpin
from pinocchio.visualize import MeshcatVisualizer
import os
import sys
import logging_mp
logger_mp = logging_mp.get_logger(__name__)
parent2_dir = os.path.dirname(os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
sys.path.append(parent2_dir)
@@ -83,9 +84,10 @@ class G1_29_ArmIK:
# for i in range(self.reduced_robot.model.nframes):
# frame = self.reduced_robot.model.frames[i]
# frame_id = self.reduced_robot.model.getFrameId(frame.name)
# print(f"Frame ID: {frame_id}, Name: {frame.name}")
for idx, name in enumerate(self.reduced_robot.model.names):
print(f"{idx}: {name}")
# logger_mp.debug(f"Frame ID: {frame_id}, Name: {frame.name}")
# for idx, name in enumerate(self.reduced_robot.model.names):
# logger_mp.debug(f"{idx}: {name}")
# Creating Casadi models and data for symbolic computing
self.cmodel = cpin.Model(self.reduced_robot.model)
self.cdata = self.cmodel.createData()
@@ -236,7 +238,7 @@ class G1_29_ArmIK:
return sol_q, sol_tauff
except Exception as e:
print(f"ERROR in convergence, plotting debug info.{e}")
logger_mp.error(f"ERROR in convergence, plotting debug info.{e}")
sol_q = self.opti.debug.value(self.var_q)
self.smooth_filter.add_data(sol_q)
@@ -251,7 +253,7 @@ class G1_29_ArmIK:
sol_tauff = pin.rnea(self.reduced_robot.model, self.reduced_robot.data, sol_q, v, np.zeros(self.reduced_robot.model.nv))
print(f"sol_q:{sol_q} \nmotorstate: \n{current_lr_arm_motor_q} \nleft_pose: \n{left_wrist} \nright_pose: \n{right_wrist}")
logger_mp.error(f"sol_q:{sol_q} \nmotorstate: \n{current_lr_arm_motor_q} \nleft_pose: \n{left_wrist} \nright_pose: \n{right_wrist}")
if self.Visualization:
self.vis.display(sol_q) # for visualization
@@ -310,7 +312,7 @@ class G1_23_ArmIK:
# for i in range(self.reduced_robot.model.nframes):
# frame = self.reduced_robot.model.frames[i]
# frame_id = self.reduced_robot.model.getFrameId(frame.name)
# print(f"Frame ID: {frame_id}, Name: {frame.name}")
# logger_mp.debug(f"Frame ID: {frame_id}, Name: {frame.name}")
# Creating Casadi models and data for symbolic computing
self.cmodel = cpin.Model(self.reduced_robot.model)
@@ -462,7 +464,7 @@ class G1_23_ArmIK:
return sol_q, sol_tauff
except Exception as e:
print(f"ERROR in convergence, plotting debug info.{e}")
logger_mp.error(f"ERROR in convergence, plotting debug info.{e}")
sol_q = self.opti.debug.value(self.var_q)
self.smooth_filter.add_data(sol_q)
@@ -477,7 +479,7 @@ class G1_23_ArmIK:
sol_tauff = pin.rnea(self.reduced_robot.model, self.reduced_robot.data, sol_q, v, np.zeros(self.reduced_robot.model.nv))
print(f"sol_q:{sol_q} \nmotorstate: \n{current_lr_arm_motor_q} \nleft_pose: \n{left_wrist} \nright_pose: \n{right_wrist}")
logger_mp.error(f"sol_q:{sol_q} \nmotorstate: \n{current_lr_arm_motor_q} \nleft_pose: \n{left_wrist} \nright_pose: \n{right_wrist}")
if self.Visualization:
self.vis.display(sol_q) # for visualization
@@ -561,7 +563,7 @@ class H1_2_ArmIK:
# for i in range(self.reduced_robot.model.nframes):
# frame = self.reduced_robot.model.frames[i]
# frame_id = self.reduced_robot.model.getFrameId(frame.name)
# print(f"Frame ID: {frame_id}, Name: {frame.name}")
# logger_mp.debug(f"Frame ID: {frame_id}, Name: {frame.name}")
# Creating Casadi models and data for symbolic computing
self.cmodel = cpin.Model(self.reduced_robot.model)
@@ -713,7 +715,7 @@ class H1_2_ArmIK:
return sol_q, sol_tauff
except Exception as e:
print(f"ERROR in convergence, plotting debug info.{e}")
logger_mp.error(f"ERROR in convergence, plotting debug info.{e}")
sol_q = self.opti.debug.value(self.var_q)
self.smooth_filter.add_data(sol_q)
@@ -728,7 +730,7 @@ class H1_2_ArmIK:
sol_tauff = pin.rnea(self.reduced_robot.model, self.reduced_robot.data, sol_q, v, np.zeros(self.reduced_robot.model.nv))
print(f"sol_q:{sol_q} \nmotorstate: \n{current_lr_arm_motor_q} \nleft_pose: \n{left_wrist} \nright_pose: \n{right_wrist}")
logger_mp.error(f"sol_q:{sol_q} \nmotorstate: \n{current_lr_arm_motor_q} \nleft_pose: \n{left_wrist} \nright_pose: \n{right_wrist}")
if self.Visualization:
self.vis.display(sol_q) # for visualization
@@ -815,7 +817,7 @@ class H1_ArmIK:
# for i in range(self.reduced_robot.model.nframes):
# frame = self.reduced_robot.model.frames[i]
# frame_id = self.reduced_robot.model.getFrameId(frame.name)
# print(f"Frame ID: {frame_id}, Name: {frame.name}")
# logger_mp.debug(f"Frame ID: {frame_id}, Name: {frame.name}")
# Creating Casadi models and data for symbolic computing
self.cmodel = cpin.Model(self.reduced_robot.model)
@@ -967,7 +969,7 @@ class H1_ArmIK:
return sol_q, sol_tauff
except Exception as e:
print(f"ERROR in convergence, plotting debug info.{e}")
logger_mp.error(f"ERROR in convergence, plotting debug info.{e}")
sol_q = self.opti.debug.value(self.var_q)
self.smooth_filter.add_data(sol_q)
@@ -982,7 +984,7 @@ class H1_ArmIK:
sol_tauff = pin.rnea(self.reduced_robot.model, self.reduced_robot.data, sol_q, v, np.zeros(self.reduced_robot.model.nv))
print(f"sol_q:{sol_q} \nmotorstate: \n{current_lr_arm_motor_q} \nleft_pose: \n{left_wrist} \nright_pose: \n{right_wrist}")
logger_mp.error(f"sol_q:{sol_q} \nmotorstate: \n{current_lr_arm_motor_q} \nleft_pose: \n{left_wrist} \nright_pose: \n{right_wrist}")
if self.Visualization:
self.vis.display(sol_q) # for visualization
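The IK classes in this file formulate the reaching problem symbolically with CasADi over a Pinocchio model. As a point of reference (not the formulation used here), a single damped-least-squares IK step for a toy 2-link planar arm in plain numpy shows the underlying idea:

```python
import numpy as np

L1, L2 = 0.3, 0.25   # link lengths of a toy 2-link planar arm (illustrative)

def fk(q):
    """End-effector position of the planar arm."""
    x = L1 * np.cos(q[0]) + L2 * np.cos(q[0] + q[1])
    y = L1 * np.sin(q[0]) + L2 * np.sin(q[0] + q[1])
    return np.array([x, y])

def jacobian(q):
    """Analytic 2x2 position Jacobian."""
    s1, c1 = np.sin(q[0]), np.cos(q[0])
    s12, c12 = np.sin(q[0] + q[1]), np.cos(q[0] + q[1])
    return np.array([[-L1 * s1 - L2 * s12, -L2 * s12],
                     [ L1 * c1 + L2 * c12,  L2 * c12]])

def dls_step(q, target, damping=1e-2):
    """One damped-least-squares update: dq = J^T (J J^T + lambda^2 I)^-1 err."""
    err = target - fk(q)
    J = jacobian(q)
    dq = J.T @ np.linalg.solve(J @ J.T + damping**2 * np.eye(2), err)
    return q + dq

q = np.array([0.1, 0.4])
target = np.array([0.35, 0.20])   # reachable: within the arm's workspace
for _ in range(50):
    q = dls_step(q, target)
assert np.linalg.norm(fk(q) - target) < 1e-3
```

The damping term keeps the update well-conditioned near singular configurations, a role played by the regularization and smoothing costs in the CasADi formulation above.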

29
teleop/robot_control/robot_hand_inspire.py

@@ -8,9 +8,11 @@ import numpy as np
from enum import IntEnum
import threading
import time
from multiprocessing import Process, shared_memory, Array, Lock
from multiprocessing import Process, Array
import logging_mp
logger_mp = logging_mp.get_logger(__name__)
inspire_tip_indices = [4, 9, 14, 19, 24]
Inspire_Num_Motors = 6
kTopicInspireCommand = "rt/inspire/cmd"
kTopicInspireState = "rt/inspire/state"
@@ -18,7 +20,7 @@ kTopicInspireState
class Inspire_Controller:
def __init__(self, left_hand_array, right_hand_array, dual_hand_data_lock = None, dual_hand_state_array = None,
dual_hand_action_array = None, fps = 100.0, Unit_Test = False):
print("Initialize Inspire_Controller...")
logger_mp.info("Initialize Inspire_Controller...")
self.fps = fps
self.Unit_Test = Unit_Test
if not self.Unit_Test:
@@ -47,14 +49,15 @@ class Inspire_Controller:
if any(self.right_hand_state_array): # any(self.left_hand_state_array) and
break
time.sleep(0.01)
print("[Inspire_Controller] Waiting to subscribe dds...")
logger_mp.warning("[Inspire_Controller] Waiting to subscribe dds...")
logger_mp.info("[Inspire_Controller] Subscribe dds ok.")
hand_control_process = Process(target=self.control_process, args=(left_hand_array, right_hand_array, self.left_hand_state_array, self.right_hand_state_array,
dual_hand_data_lock, dual_hand_state_array, dual_hand_action_array))
hand_control_process.daemon = True
hand_control_process.start()
print("Initialize Inspire_Controller OK!\n")
logger_mp.info("Initialize Inspire_Controller OK!\n")
def _subscribe_hand_state(self):
while True:
@@ -76,7 +79,7 @@
self.hand_msg.cmds[id].q = right_q_target[idx]
self.HandCmb_publisher.Write(self.hand_msg)
# print("hand ctrl publish ok.")
# logger_mp.debug("hand ctrl publish ok.")
def control_process(self, left_hand_array, right_hand_array, left_hand_state_array, right_hand_state_array,
dual_hand_data_lock = None, dual_hand_state_array = None, dual_hand_action_array = None):
@@ -98,15 +101,17 @@ class Inspire_Controller:
while self.running:
start_time = time.time()
# get dual hand state
left_hand_mat = np.array(left_hand_array[:]).reshape(25, 3).copy()
right_hand_mat = np.array(right_hand_array[:]).reshape(25, 3).copy()
with left_hand_array.get_lock():
left_hand_data = np.array(left_hand_array[:]).reshape(25, 3).copy()
with right_hand_array.get_lock():
right_hand_data = np.array(right_hand_array[:]).reshape(25, 3).copy()
# Read left and right q_state from shared arrays
state_data = np.concatenate((np.array(left_hand_state_array[:]), np.array(right_hand_state_array[:])))
if not np.all(right_hand_mat == 0.0) and not np.all(left_hand_mat[4] == np.array([-1.13, 0.3, 0.15])): # if hand data has been initialized.
ref_left_value = left_hand_mat[inspire_tip_indices]
ref_right_value = right_hand_mat[inspire_tip_indices]
if not np.all(right_hand_data == 0.0) and not np.all(left_hand_data[4] == np.array([-1.13, 0.3, 0.15])): # if hand data has been initialized.
ref_left_value = left_hand_data[self.hand_retargeting.left_indices[1,:]] - left_hand_data[self.hand_retargeting.left_indices[0,:]]
ref_right_value = right_hand_data[self.hand_retargeting.right_indices[1,:]] - right_hand_data[self.hand_retargeting.right_indices[0,:]]
left_q_target = self.hand_retargeting.left_retargeting.retarget(ref_left_value)[self.hand_retargeting.left_dex_retargeting_to_hardware]
right_q_target = self.hand_retargeting.right_retargeting.retarget(ref_right_value)[self.hand_retargeting.right_dex_retargeting_to_hardware]
@@ -145,7 +150,7 @@ class Inspire_Controller:
sleep_time = max(0, (1 / self.fps) - time_elapsed)
time.sleep(sleep_time)
finally:
print("Dex3_1_Controller has been closed.")
logger_mp.info("Inspire_Controller has been closed.")
# Update hand state, according to the official documentation, https://support.unitree.com/home/en/G1_developer/inspire_dfx_dexterous_hand
# the state sequence is as shown in the table below

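The controllers above snapshot 25x3 hand-skeleton data out of a shared multiprocessing Array under its lock before reshaping, so the child control process never reads a half-written frame. A self-contained sketch of that read pattern (the 75-double layout follows the code above; the demo values are arbitrary):

```python
import numpy as np
from multiprocessing import Array

# 25 OpenXR hand joints x 3 coordinates, shared between processes.
left_hand_array = Array('d', 75, lock=True)

def snapshot(hand_array):
    # Hold the lock only long enough to copy, then work on a private ndarray.
    with hand_array.get_lock():
        return np.array(hand_array[:]).reshape(25, 3).copy()

# Writer side (normally the XR-device process) also takes the lock.
with left_hand_array.get_lock():
    left_hand_array[:] = np.arange(75, dtype=float)

mat = snapshot(left_hand_array)
```

Copying under the lock and releasing it immediately keeps the critical section short, which matters at the 100 Hz control rates used here.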
124
teleop/robot_control/robot_hand_unitree.py

@@ -20,8 +20,10 @@ sys.path.append(parent2_dir)
from teleop.robot_control.hand_retargeting import HandRetargeting, HandType
from teleop.utils.weighted_moving_filter import WeightedMovingFilter
import logging_mp
logger_mp = logging_mp.get_logger(__name__)
unitree_tip_indices = [4, 9, 14] # [thumb, index, middle] in OpenXR
Dex3_Num_Motors = 7
kTopicDex3LeftCommand = "rt/dex3/left/cmd"
kTopicDex3RightCommand = "rt/dex3/right/cmd"
@@ -30,26 +32,26 @@ kTopicDex3RightState = "rt/dex3/right/state"
class Dex3_1_Controller:
def __init__(self, left_hand_array, right_hand_array, dual_hand_data_lock = None, dual_hand_state_array = None,
dual_hand_action_array = None, fps = 100.0, Unit_Test = False):
def __init__(self, left_hand_array_in, right_hand_array_in, dual_hand_data_lock = None, dual_hand_state_array_out = None,
dual_hand_action_array_out = None, fps = 100.0, Unit_Test = False):
"""
[note] A *_array type parameter requires using a multiprocessing Array, because it needs to be passed to the internal child process
left_hand_array: [input] Left hand skeleton data (required from XR device) to hand_ctrl.control_process
left_hand_array_in: [input] Left hand skeleton data (required from XR device) to hand_ctrl.control_process
right_hand_array: [input] Right hand skeleton data (required from XR device) to hand_ctrl.control_process
right_hand_array_in: [input] Right hand skeleton data (required from XR device) to hand_ctrl.control_process
dual_hand_data_lock: Data synchronization lock for dual_hand_state_array and dual_hand_action_array
dual_hand_state_array: [output] Return left(7), right(7) hand motor state
dual_hand_state_array_out: [output] Return left(7), right(7) hand motor state
dual_hand_action_array: [output] Return left(7), right(7) hand motor action
dual_hand_action_array_out: [output] Return left(7), right(7) hand motor action
fps: Control frequency
Unit_Test: Whether to enable unit testing
"""
print("Initialize Dex3_1_Controller...")
logger_mp.info("Initialize Dex3_1_Controller...")
self.fps = fps
self.Unit_Test = Unit_Test
@@ -83,14 +85,15 @@ class Dex3_1_Controller:
if any(self.left_hand_state_array) and any(self.right_hand_state_array):
break
time.sleep(0.01)
print("[Dex3_1_Controller] Waiting to subscribe dds...")
logger_mp.warning("[Dex3_1_Controller] Waiting to subscribe dds...")
logger_mp.info("[Dex3_1_Controller] Subscribe dds ok.")
hand_control_process = Process(target=self.control_process, args=(left_hand_array, right_hand_array, self.left_hand_state_array, self.right_hand_state_array,
dual_hand_data_lock, dual_hand_state_array, dual_hand_action_array))
hand_control_process = Process(target=self.control_process, args=(left_hand_array_in, right_hand_array_in, self.left_hand_state_array, self.right_hand_state_array,
dual_hand_data_lock, dual_hand_state_array_out, dual_hand_action_array_out))
hand_control_process.daemon = True
hand_control_process.start()
print("Initialize Dex3_1_Controller OK!\n")
logger_mp.info("Initialize Dex3_1_Controller OK!\n")
def _subscribe_hand_state(self):
while True:
@@ -127,10 +130,10 @@ class Dex3_1_Controller:
self.LeftHandCmb_publisher.Write(self.left_msg)
self.RightHandCmb_publisher.Write(self.right_msg)
# print("hand ctrl publish ok.")
# logger_mp.debug("hand ctrl publish ok.")
def control_process(self, left_hand_array, right_hand_array, left_hand_state_array, right_hand_state_array,
dual_hand_data_lock = None, dual_hand_state_array = None, dual_hand_action_array = None):
def control_process(self, left_hand_array_in, right_hand_array_in, left_hand_state_array, right_hand_state_array,
dual_hand_data_lock = None, dual_hand_state_array_out = None, dual_hand_action_array_out = None):
self.running = True
left_q_target = np.full(Dex3_Num_Motors, 0)
@@ -170,31 +173,27 @@ class Dex3_1_Controller:
while self.running:
start_time = time.time()
# get dual hand state
left_hand_mat = np.array(left_hand_array[:]).reshape(25, 3).copy()
right_hand_mat = np.array(right_hand_array[:]).reshape(25, 3).copy()
with left_hand_array_in.get_lock():
left_hand_data = np.array(left_hand_array_in[:]).reshape(25, 3).copy()
with right_hand_array_in.get_lock():
right_hand_data = np.array(right_hand_array_in[:]).reshape(25, 3).copy()
# Read left and right q_state from shared arrays
state_data = np.concatenate((np.array(left_hand_state_array[:]), np.array(right_hand_state_array[:])))
if not np.all(right_hand_mat == 0.0) and not np.all(left_hand_mat[4] == np.array([-1.13, 0.3, 0.15])): # if hand data has been initialized.
ref_left_value = left_hand_mat[unitree_tip_indices]
ref_right_value = right_hand_mat[unitree_tip_indices]
ref_left_value[0] = ref_left_value[0] * 1.15
ref_left_value[1] = ref_left_value[1] * 1.05
ref_left_value[2] = ref_left_value[2] * 0.95
ref_right_value[0] = ref_right_value[0] * 1.15
ref_right_value[1] = ref_right_value[1] * 1.05
ref_right_value[2] = ref_right_value[2] * 0.95
if not np.all(right_hand_data == 0.0) and not np.all(left_hand_data[4] == np.array([-1.13, 0.3, 0.15])): # if hand data has been initialized.
ref_left_value = left_hand_data[self.hand_retargeting.left_indices[1,:]] - left_hand_data[self.hand_retargeting.left_indices[0,:]]
ref_right_value = right_hand_data[self.hand_retargeting.right_indices[1,:]] - right_hand_data[self.hand_retargeting.right_indices[0,:]]
left_q_target = self.hand_retargeting.left_retargeting.retarget(ref_left_value)[self.hand_retargeting.left_dex_retargeting_to_hardware]
right_q_target = self.hand_retargeting.right_retargeting.retarget(ref_right_value)[self.hand_retargeting.right_dex_retargeting_to_hardware]
# get dual hand action
action_data = np.concatenate((left_q_target, right_q_target))
if dual_hand_state_array and dual_hand_action_array:
if dual_hand_state_array_out and dual_hand_action_array_out:
with dual_hand_data_lock:
dual_hand_state_array[:] = state_data
dual_hand_action_array[:] = action_data
dual_hand_state_array_out[:] = state_data
dual_hand_action_array_out[:] = action_data
self.ctrl_dual_hand(left_q_target, right_q_target)
current_time = time.time()
@@ -202,7 +201,7 @@ class Dex3_1_Controller:
sleep_time = max(0, (1 / self.fps) - time_elapsed)
time.sleep(sleep_time)
finally:
print("Dex3_1_Controller has been closed.")
logger_mp.info("Dex3_1_Controller has been closed.")
class Dex3_1_Left_JointIndex(IntEnum):
kLeftHandThumb0 = 0
@@ -229,30 +228,32 @@ kTopicGripperCommand = "rt/unitree_actuator/cmd"
kTopicGripperState = "rt/unitree_actuator/state"
class Gripper_Controller:
def __init__(self, left_hand_array, right_hand_array, dual_gripper_data_lock = None, dual_gripper_state_out = None, dual_gripper_action_out = None,
filter = True, fps = 200.0, Unit_Test = False):
def __init__(self, left_gripper_value_in, right_gripper_value_in, dual_gripper_data_lock = None, dual_gripper_state_out = None, dual_gripper_action_out = None,
filter = True, fps = 200.0, Unit_Test = False, simulation_mode = False):
"""
[note] A *_array type parameter requires using a multiprocessing Array, because it needs to be passed to the internal child process
left_hand_array: [input] Left hand skeleton data (required from XR device) to control_thread
left_gripper_value_in: [input] Left ctrl data (required from XR device) to control_thread
right_hand_array: [input] Right hand skeleton data (required from XR device) to control_thread
right_gripper_value_in: [input] Right ctrl data (required from XR device) to control_thread
dual_gripper_data_lock: Data synchronization lock for dual_gripper_state_array and dual_gripper_action_array
dual_gripper_state: [output] Return left(1), right(1) gripper motor state
dual_gripper_state_out: [output] Return left(1), right(1) gripper motor state
dual_gripper_action: [output] Return left(1), right(1) gripper motor action
dual_gripper_action_out: [output] Return left(1), right(1) gripper motor action
fps: Control frequency
Unit_Test: Whether to enable unit testing
"""
print("Initialize Gripper_Controller...")
logger_mp.info("Initialize Gripper_Controller...")
self.fps = fps
self.Unit_Test = Unit_Test
self.simulation_mode = simulation_mode
if filter:
self.smooth_filter = WeightedMovingFilter(np.array([0.5, 0.3, 0.2]), Gripper_Num_Motors)
else:
@@ -279,14 +280,15 @@ class Gripper_Controller:
if any(state != 0.0 for state in self.dual_gripper_state):
break
time.sleep(0.01)
print("[Gripper_Controller] Waiting to subscribe dds...")
logger_mp.warning("[Gripper_Controller] Waiting to subscribe dds...")
logger_mp.info("[Gripper_Controller] Subscribe dds ok.")
self.gripper_control_thread = threading.Thread(target=self.control_thread, args=(left_hand_array, right_hand_array, self.dual_gripper_state,
self.gripper_control_thread = threading.Thread(target=self.control_thread, args=(left_gripper_value_in, right_gripper_value_in, self.dual_gripper_state,
dual_gripper_data_lock, dual_gripper_state_out, dual_gripper_action_out))
self.gripper_control_thread.daemon = True
self.gripper_control_thread.start()
print("Initialize Gripper_Controller OK!\n")
logger_mp.info("Initialize Gripper_Controller OK!\n")
def _subscribe_gripper_state(self):
while True:
@@ -302,15 +304,17 @@ class Gripper_Controller:
self.gripper_msg.cmds[id].q = gripper_q_target[idx]
self.GripperCmb_publisher.Write(self.gripper_msg)
# print("gripper ctrl publish ok.")
# logger_mp.debug("gripper ctrl publish ok.")
def control_thread(self, left_hand_array, right_hand_array, dual_gripper_state_in, dual_hand_data_lock = None,
def control_thread(self, left_gripper_value_in, right_gripper_value_in, dual_gripper_state_in, dual_hand_data_lock = None,
dual_gripper_state_out = None, dual_gripper_action_out = None):
self.running = True
DELTA_GRIPPER_CMD = 0.18 # The motor rotates 5.4 radians, the clamping jaw slides open 9 cm, so 0.6 rad <==> 1 cm, 0.18 rad <==> 3 mm
THUMB_INDEX_DISTANCE_MIN = 0.05 # Assuming a minimum Euclidean distance of 5 cm between thumb and index.
THUMB_INDEX_DISTANCE_MAX = 0.07 # Assuming a maximum Euclidean distance of 7 cm between thumb and index.
if self.simulation_mode:
DELTA_GRIPPER_CMD = 1.0
else:
DELTA_GRIPPER_CMD = 0.18 # The motor rotates 5.4 radians, the clamping jaw slides open 9 cm, so 0.6 rad <==> 1 cm, 0.18 rad <==> 3 mm
THUMB_INDEX_DISTANCE_MIN = 5.0
THUMB_INDEX_DISTANCE_MAX = 7.0
LEFT_MAPPED_MIN = 0.0 # The minimum initial motor position when the gripper closes at startup.
RIGHT_MAPPED_MIN = 0.0 # The minimum initial motor position when the gripper closes at startup.
# The maximum initial motor position when the gripper closes before calibration (with the rail stroke calculated as 0.6 cm/rad * 9 rad = 5.4 cm).
@@ -336,15 +340,15 @@ class Gripper_Controller:
while self.running:
start_time = time.time()
# get dual hand skeletal point state from XR device
left_hand_mat = np.array(left_hand_array[:]).reshape(25, 3).copy()
right_hand_mat = np.array(right_hand_array[:]).reshape(25, 3).copy()
with left_gripper_value_in.get_lock():
left_gripper_value = left_gripper_value_in.value
with right_gripper_value_in.get_lock():
right_gripper_value = right_gripper_value_in.value
if not np.all(right_hand_mat == 0.0) and not np.all(left_hand_mat[4] == np.array([-1.13, 0.3, 0.15])): # if hand data has been initialized.
left_euclidean_distance = np.linalg.norm(left_hand_mat[unitree_gripper_indices[1]] - left_hand_mat[unitree_gripper_indices[0]])
right_euclidean_distance = np.linalg.norm(right_hand_mat[unitree_gripper_indices[1]] - right_hand_mat[unitree_gripper_indices[0]])
if left_gripper_value != 0.0 or right_gripper_value != 0.0: # if hand data has been initialized.
# Linear mapping from [THUMB_INDEX_DISTANCE_MIN, THUMB_INDEX_DISTANCE_MAX] to gripper action range
left_target_action = np.interp(left_euclidean_distance, [THUMB_INDEX_DISTANCE_MIN, THUMB_INDEX_DISTANCE_MAX], [LEFT_MAPPED_MIN, LEFT_MAPPED_MAX])
right_target_action = np.interp(right_euclidean_distance, [THUMB_INDEX_DISTANCE_MIN, THUMB_INDEX_DISTANCE_MAX], [RIGHT_MAPPED_MIN, RIGHT_MAPPED_MAX])
left_target_action = np.interp(left_gripper_value, [THUMB_INDEX_DISTANCE_MIN, THUMB_INDEX_DISTANCE_MAX], [LEFT_MAPPED_MIN, LEFT_MAPPED_MAX])
right_target_action = np.interp(right_gripper_value, [THUMB_INDEX_DISTANCE_MIN, THUMB_INDEX_DISTANCE_MAX], [RIGHT_MAPPED_MIN, RIGHT_MAPPED_MAX])
# else: # TEST WITHOUT XR DEVICE
# current_time = time.time()
# period = 2.5
@@ -372,9 +376,9 @@ class Gripper_Controller:
dual_gripper_state_out[:] = dual_gripper_state - np.array([RIGHT_MAPPED_MIN, LEFT_MAPPED_MIN])
dual_gripper_action_out[:] = dual_gripper_action - np.array([RIGHT_MAPPED_MIN, LEFT_MAPPED_MIN])
# print(f"LEFT: euclidean:{left_euclidean_distance:.4f} \tstate:{dual_gripper_state_out[1]:.4f}\
# logger_mp.debug(f"LEFT: euclidean:{left_euclidean_distance:.4f} \tstate:{dual_gripper_state_out[1]:.4f}\
# \ttarget_action:{right_target_action - RIGHT_MAPPED_MIN:.4f} \tactual_action:{dual_gripper_action_out[1]:.4f}")
# print(f"RIGHT:euclidean:{right_euclidean_distance:.4f} \tstate:{dual_gripper_state_out[0]:.4f}\
# logger_mp.debug(f"RIGHT:euclidean:{right_euclidean_distance:.4f} \tstate:{dual_gripper_state_out[0]:.4f}\
# \ttarget_action:{left_target_action - LEFT_MAPPED_MIN:.4f} \tactual_action:{dual_gripper_action_out[0]:.4f}")
self.ctrl_dual_gripper(dual_gripper_action)
@@ -383,7 +387,7 @@ class Gripper_Controller:
sleep_time = max(0, (1 / self.fps) - time_elapsed)
time.sleep(sleep_time)
finally:
print("Gripper_Controller has been closed.")
logger_mp.info("Gripper_Controller has been closed.")
class Gripper_JointIndex(IntEnum):
kLeftGripper = 0
@@ -392,7 +396,7 @@ class Gripper_JointIndex(IntEnum):
if __name__ == "__main__":
import argparse
from teleop.open_television.tv_wrapper import TeleVisionWrapper
from televuer import TeleVuerWrapper
from teleop.image_server.image_client import ImageClient
parser = argparse.ArgumentParser()
@@ -400,7 +404,7 @@ if __name__ == "__main__":
parser.add_argument('--gripper', dest='dex', action='store_false', help='Use gripper')
parser.set_defaults(dex=True)
args = parser.parse_args()
print(f"args:{args}\n")
logger_mp.info(f"args:{args}\n")
# image
img_config = {
@@ -428,7 +432,7 @@ if __name__ == "__main__":
image_receive_thread.start()
# television and arm
tv_wrapper = TeleVisionWrapper(BINOCULAR, tv_img_shape, img_shm.name)
tv_wrapper = TeleVuerWrapper(BINOCULAR, tv_img_shape, img_shm.name)
if args.dex:
left_hand_array = Array('d', 75, lock=True)
@@ -455,5 +459,5 @@ if __name__ == "__main__":
right_hand_array[:] = right_hand.flatten()
# with dual_hand_data_lock:
# print(f"state : {list(dual_hand_state_array)} \naction: {list(dual_hand_action_array)} \n")
# logger_mp.info(f"state : {list(dual_hand_state_array)} \naction: {list(dual_hand_action_array)} \n")
time.sleep(0.01)

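Gripper_Controller maps the incoming trigger/pinch value onto the motor range with np.interp, which clamps inputs lying outside the source interval. A small sketch using the constants from this file (LEFT_MAPPED_MAX here is a hypothetical calibration value, since the real one is measured when the gripper closes at startup):

```python
import numpy as np

THUMB_INDEX_DISTANCE_MIN = 5.0   # input value treated as fully closed
THUMB_INDEX_DISTANCE_MAX = 7.0   # input value treated as fully open
LEFT_MAPPED_MIN = 0.0            # motor position at closed gripper
LEFT_MAPPED_MAX = 5.4            # hypothetical calibrated open position

def gripper_target(value):
    # np.interp maps linearly inside [MIN, MAX] and clamps outside it, so
    # out-of-range tracking noise cannot command the motor past its stops.
    return np.interp(value,
                     [THUMB_INDEX_DISTANCE_MIN, THUMB_INDEX_DISTANCE_MAX],
                     [LEFT_MAPPED_MIN, LEFT_MAPPED_MAX])
```

The implicit clamping is the reason no explicit min/max guard appears around the target-action computation in control_thread.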
485
teleop/teleop_hand_and_arm.py

@@ -2,8 +2,11 @@ import numpy as np
import time
import argparse
import cv2
from multiprocessing import shared_memory, Array, Lock
from multiprocessing import shared_memory, Value, Array, Lock
import threading
import logging_mp
logging_mp.basic_config(level=logging_mp.INFO)
logger_mp = logging_mp.get_logger(__name__)
import os
import sys
@@ -11,40 +14,83 @@ current_dir = os.path.dirname(os.path.abspath(__file__))
parent_dir = os.path.dirname(current_dir)
sys.path.append(parent_dir)
from teleop.open_television.tv_wrapper import TeleVisionWrapper
from televuer import TeleVuerWrapper
from teleop.robot_control.robot_arm import G1_29_ArmController, G1_23_ArmController, H1_2_ArmController, H1_ArmController
from teleop.robot_control.robot_arm_ik import G1_29_ArmIK, G1_23_ArmIK, H1_2_ArmIK, H1_ArmIK
from teleop.robot_control.robot_hand_unitree import Dex3_1_Controller, Gripper_Controller
from teleop.robot_control.robot_hand_inspire import Inspire_Controller
from teleop.image_server.image_client import ImageClient
from teleop.utils.episode_writer import EpisodeWriter
from sshkeyboard import listen_keyboard, stop_listening
# for simulation
from unitree_sdk2py.core.channel import ChannelPublisher
from unitree_sdk2py.idl.std_msgs.msg.dds_ import String_
def publish_reset_category(category: int, publisher): # Scene Reset signal
msg = String_(data=str(category))
publisher.Write(msg)
logger_mp.info(f"published reset category: {category}")
# state transition
start_signal = False
running = True
should_toggle_recording = False
is_recording = False
def on_press(key):
global running, start_signal, should_toggle_recording
if key == 'r':
start_signal = True
logger_mp.info("Program start signal received.")
elif key == 'q':
stop_listening()
running = False
elif key == 's':
should_toggle_recording = True
else:
logger_mp.info(f"{key} was pressed, but no action is defined for this key.")
listen_keyboard_thread = threading.Thread(target=listen_keyboard, kwargs={"on_press": on_press, "until": None, "sequential": False,}, daemon=True)
listen_keyboard_thread.start()
if __name__ == '__main__':
parser = argparse.ArgumentParser()
parser.add_argument('--task_dir', type = str, default = './utils/data', help = 'path to save data')
parser.add_argument('--frequency', type = int, default = 30.0, help = 'save data\'s frequency')
parser.add_argument('--record', action = 'store_true', help = 'Save data or not')
parser.add_argument('--no-record', dest = 'record', action = 'store_false', help = 'Do not save data')
parser.set_defaults(record = False)
parser.add_argument('--frequency', type = float, default = 90.0, help = 'save data\'s frequency')
parser.add_argument('--xr-mode', type=str, choices=['hand', 'controller'], default='hand', help='Select XR device tracking source')
parser.add_argument('--arm', type=str, choices=['G1_29', 'G1_23', 'H1_2', 'H1'], default='G1_29', help='Select arm controller')
parser.add_argument('--hand', type=str, choices=['dex3', 'gripper', 'inspire1'], help='Select hand controller')
parser.add_argument('--ee', type=str, choices=['dex3', 'gripper', 'inspire1'], help='Select end effector controller')
parser.add_argument('--record', action = 'store_true', help = 'Enable data recording')
parser.add_argument('--motion', action = 'store_true', help = 'Enable motion control mode')
parser.add_argument('--headless', action='store_true', help='Enable headless mode (no display)')
parser.add_argument('--sim', action = 'store_true', help = 'Enable isaac simulation mode')
args = parser.parse_args()
print(f"args:{args}\n")
logger_mp.info(f"args: {args}")
# image client: img_config should be the same as the configuration in image_server.py (of Robot's development computing unit)
img_config = {
'fps': 30,
'head_camera_type': 'opencv',
'head_camera_image_shape': [480, 1280], # Head camera resolution
'head_camera_id_numbers': [0],
'wrist_camera_type': 'opencv',
'wrist_camera_image_shape': [480, 640], # Wrist camera resolution
'wrist_camera_id_numbers': [2, 4],
}
if args.sim:
img_config = {
'fps': 30,
'head_camera_type': 'opencv',
'head_camera_image_shape': [480, 640], # Head camera resolution
'head_camera_id_numbers': [0],
'wrist_camera_type': 'opencv',
'wrist_camera_image_shape': [480, 640], # Wrist camera resolution
'wrist_camera_id_numbers': [2, 4],
}
else:
img_config = {
'fps': 30,
'head_camera_type': 'opencv',
'head_camera_image_shape': [480, 1280], # Head camera resolution
'head_camera_id_numbers': [0],
'wrist_camera_type': 'opencv',
'wrist_camera_image_shape': [480, 640], # Wrist camera resolution
'wrist_camera_id_numbers': [2, 4],
}
ASPECT_RATIO_THRESHOLD = 2.0 # If the aspect ratio exceeds this value, it is considered binocular
if len(img_config['head_camera_id_numbers']) > 1 or (img_config['head_camera_image_shape'][1] / img_config['head_camera_image_shape'][0] > ASPECT_RATIO_THRESHOLD):
BINOCULAR = True
@@ -63,7 +109,13 @@ if __name__ == '__main__':
tv_img_shm = shared_memory.SharedMemory(create = True, size = np.prod(tv_img_shape) * np.uint8().itemsize)
tv_img_array = np.ndarray(tv_img_shape, dtype = np.uint8, buffer = tv_img_shm.buf)
if WRIST:
if WRIST and args.sim:
wrist_img_shape = (img_config['wrist_camera_image_shape'][0], img_config['wrist_camera_image_shape'][1] * 2, 3)
wrist_img_shm = shared_memory.SharedMemory(create = True, size = np.prod(wrist_img_shape) * np.uint8().itemsize)
wrist_img_array = np.ndarray(wrist_img_shape, dtype = np.uint8, buffer = wrist_img_shm.buf)
img_client = ImageClient(tv_img_shape = tv_img_shape, tv_img_shm_name = tv_img_shm.name,
wrist_img_shape = wrist_img_shape, wrist_img_shm_name = wrist_img_shm.name, server_address="127.0.0.1")
elif WRIST and not args.sim:
wrist_img_shape = (img_config['wrist_camera_image_shape'][0], img_config['wrist_camera_image_shape'][1] * 2, 3)
wrist_img_shm = shared_memory.SharedMemory(create = True, size = np.prod(wrist_img_shape) * np.uint8().itemsize)
wrist_img_array = np.ndarray(wrist_img_shape, dtype = np.uint8, buffer = wrist_img_shm.buf)
@@ -77,195 +129,279 @@ if __name__ == '__main__':
image_receive_thread.start()
# television: obtain hand pose data from the XR device and transmit the robot's head camera image to the XR device.
tv_wrapper = TeleVisionWrapper(BINOCULAR, tv_img_shape, tv_img_shm.name)
tv_wrapper = TeleVuerWrapper(binocular=BINOCULAR, use_hand_tracking=args.xr_mode == 'hand', img_shape=tv_img_shape, img_shm_name=tv_img_shm.name,
return_state_data=True, return_hand_rot_data = False)
# arm
if args.arm == 'G1_29':
arm_ctrl = G1_29_ArmController()
arm_ctrl = G1_29_ArmController(motion_mode=args.motion, simulation_mode=args.sim)
arm_ik = G1_29_ArmIK()
elif args.arm == 'G1_23':
arm_ctrl = G1_23_ArmController()
arm_ctrl = G1_23_ArmController(simulation_mode=args.sim)
arm_ik = G1_23_ArmIK()
elif args.arm == 'H1_2':
arm_ctrl = H1_2_ArmController()
arm_ctrl = H1_2_ArmController(simulation_mode=args.sim)
arm_ik = H1_2_ArmIK()
elif args.arm == 'H1':
arm_ctrl = H1_ArmController()
arm_ctrl = H1_ArmController(simulation_mode=args.sim)
arm_ik = H1_ArmIK()
# hand
if args.hand == "dex3":
left_hand_array = Array('d', 75, lock = True) # [input]
right_hand_array = Array('d', 75, lock = True) # [input]
# end-effector
if args.ee == "dex3":
left_hand_pos_array = Array('d', 75, lock = True) # [input]
right_hand_pos_array = Array('d', 75, lock = True) # [input]
dual_hand_data_lock = Lock()
dual_hand_state_array = Array('d', 14, lock = False) # [output] current left, right hand state(14) data.
dual_hand_action_array = Array('d', 14, lock = False) # [output] current left, right hand action(14) data.
hand_ctrl = Dex3_1_Controller(left_hand_array, right_hand_array, dual_hand_data_lock, dual_hand_state_array, dual_hand_action_array)
elif args.hand == "gripper":
left_hand_array = Array('d', 75, lock=True)
right_hand_array = Array('d', 75, lock=True)
dual_hand_state_array = Array('d', 14, lock = False) # [output] current left, right hand state(14) data.
dual_hand_action_array = Array('d', 14, lock = False) # [output] current left, right hand action(14) data.
hand_ctrl = Dex3_1_Controller(left_hand_pos_array, right_hand_pos_array, dual_hand_data_lock, dual_hand_state_array, dual_hand_action_array)
elif args.ee == "gripper":
left_gripper_value = Value('d', 0.0, lock=True) # [input]
right_gripper_value = Value('d', 0.0, lock=True) # [input]
dual_gripper_data_lock = Lock()
dual_gripper_state_array = Array('d', 2, lock=False) # current left, right gripper state(2) data.
dual_gripper_action_array = Array('d', 2, lock=False) # current left, right gripper action(2) data.
gripper_ctrl = Gripper_Controller(left_hand_array, right_hand_array, dual_gripper_data_lock, dual_gripper_state_array, dual_gripper_action_array)
elif args.hand == "inspire1":
left_hand_array = Array('d', 75, lock = True) # [input]
right_hand_array = Array('d', 75, lock = True) # [input]
gripper_ctrl = Gripper_Controller(left_gripper_value, right_gripper_value, dual_gripper_data_lock, dual_gripper_state_array, dual_gripper_action_array, simulation_mode=args.sim)
elif args.ee == "inspire1":
left_hand_pos_array = Array('d', 75, lock = True) # [input]
right_hand_pos_array = Array('d', 75, lock = True) # [input]
dual_hand_data_lock = Lock()
dual_hand_state_array = Array('d', 12, lock = False) # [output] current left, right hand state(12) data.
dual_hand_action_array = Array('d', 12, lock = False) # [output] current left, right hand action(12) data.
hand_ctrl = Inspire_Controller(left_hand_array, right_hand_array, dual_hand_data_lock, dual_hand_state_array, dual_hand_action_array)
hand_ctrl = Inspire_Controller(left_hand_pos_array, right_hand_pos_array, dual_hand_data_lock, dual_hand_state_array, dual_hand_action_array)
else:
pass
# simulation mode
if args.sim:
reset_pose_publisher = ChannelPublisher("rt/reset_pose/cmd", String_)
reset_pose_publisher.Init()
from teleop.utils.sim_state_topic import start_sim_state_subscribe
sim_state_subscriber = start_sim_state_subscribe()
# controller + motion mode
if args.xr_mode == 'controller' and args.motion:
from unitree_sdk2py.g1.loco.g1_loco_client import LocoClient
sport_client = LocoClient()
sport_client.SetTimeout(0.0001)
sport_client.Init()
if args.record:
# record + headless mode
if args.record and args.headless:
recorder = EpisodeWriter(task_dir = args.task_dir, frequency = args.frequency, rerun_log = False)
elif args.record and not args.headless:
recorder = EpisodeWriter(task_dir = args.task_dir, frequency = args.frequency, rerun_log = True)
recording = False
try:
user_input = input("Please enter the start signal (enter 'r' to start the subsequent program):\n")
if user_input.lower() == 'r':
arm_ctrl.speed_gradual_max()
running = True
while running:
start_time = time.time()
head_rmat, left_wrist, right_wrist, left_hand, right_hand = tv_wrapper.get_data()
# send hand skeleton data to hand_ctrl.control_process
if args.hand:
left_hand_array[:] = left_hand.flatten()
right_hand_array[:] = right_hand.flatten()
# get current state data.
current_lr_arm_q = arm_ctrl.get_current_dual_arm_q()
current_lr_arm_dq = arm_ctrl.get_current_dual_arm_dq()
# solve ik using motor data and wrist pose, then use ik results to control arms.
time_ik_start = time.time()
sol_q, sol_tauff = arm_ik.solve_ik(left_wrist, right_wrist, current_lr_arm_q, current_lr_arm_dq)
time_ik_end = time.time()
# print(f"ik:\t{round(time_ik_end - time_ik_start, 6)}")
arm_ctrl.ctrl_dual_arm(sol_q, sol_tauff)
logger_mp.info("Please enter the start signal (enter 'r' to start the subsequent program)")
while not start_signal:
time.sleep(0.01)
arm_ctrl.speed_gradual_max()
while running:
start_time = time.time()
if not args.headless:
tv_resized_image = cv2.resize(tv_img_array, (tv_img_shape[1] // 2, tv_img_shape[0] // 2))
cv2.imshow("record image", tv_resized_image)
key = cv2.waitKey(1) & 0xFF
if key == ord('q'):
stop_listening()
running = False
elif key == ord('s') and args.record:
recording = not recording # state flipping
if recording:
if not recorder.create_episode():
recording = False
else:
recorder.save_episode()
if args.sim:
publish_reset_category(2, reset_pose_publisher)
elif key == ord('s'):
should_toggle_recording = True
elif key == ord('a'):
if args.sim:
publish_reset_category(2, reset_pose_publisher)
# record data
if args.record:
# dex hand or gripper
if args.hand == "dex3":
with dual_hand_data_lock:
left_hand_state = dual_hand_state_array[:7]
right_hand_state = dual_hand_state_array[-7:]
left_hand_action = dual_hand_action_array[:7]
right_hand_action = dual_hand_action_array[-7:]
elif args.hand == "gripper":
with dual_gripper_data_lock:
left_hand_state = [dual_gripper_state_array[1]]
right_hand_state = [dual_gripper_state_array[0]]
left_hand_action = [dual_gripper_action_array[1]]
right_hand_action = [dual_gripper_action_array[0]]
elif args.hand == "inspire1":
with dual_hand_data_lock:
left_hand_state = dual_hand_state_array[:6]
right_hand_state = dual_hand_state_array[-6:]
left_hand_action = dual_hand_action_array[:6]
right_hand_action = dual_hand_action_array[-6:]
if args.record and should_toggle_recording:
should_toggle_recording = False
if not is_recording:
if recorder.create_episode():
is_recording = True
else:
print("No dexterous hand set.")
pass
# head image
current_tv_image = tv_img_array.copy()
# wrist image
if WRIST:
current_wrist_image = wrist_img_array.copy()
# arm state and action
left_arm_state = current_lr_arm_q[:7]
right_arm_state = current_lr_arm_q[-7:]
left_arm_action = sol_q[:7]
right_arm_action = sol_q[-7:]
logger_mp.error("Failed to create episode. Recording not started.")
else:
is_recording = False
recorder.save_episode()
if args.sim:
publish_reset_category(1, reset_pose_publisher)
# get input data
tele_data = tv_wrapper.get_motion_state_data()
if (args.ee == 'dex3' or args.ee == 'inspire1') and args.xr_mode == 'hand':
with left_hand_pos_array.get_lock():
left_hand_pos_array[:] = tele_data.left_hand_pos.flatten()
with right_hand_pos_array.get_lock():
right_hand_pos_array[:] = tele_data.right_hand_pos.flatten()
elif args.ee == 'gripper' and args.xr_mode == 'controller':
with left_gripper_value.get_lock():
left_gripper_value.value = tele_data.left_trigger_value
with right_gripper_value.get_lock():
right_gripper_value.value = tele_data.right_trigger_value
elif args.ee == 'gripper' and args.xr_mode == 'hand':
with left_gripper_value.get_lock():
left_gripper_value.value = tele_data.left_pinch_value
with right_gripper_value.get_lock():
right_gripper_value.value = tele_data.right_pinch_value
else:
pass
# high level control
if args.xr_mode == 'controller' and args.motion:
# quit teleoperate
if tele_data.tele_state.right_aButton:
running = False
# command robot to enter damping mode. soft emergency stop function
if tele_data.tele_state.left_thumbstick_state and tele_data.tele_state.right_thumbstick_state:
sport_client.Damp()
# control, limit velocity to within 0.3
sport_client.Move(-tele_data.tele_state.left_thumbstick_value[1] * 0.3,
-tele_data.tele_state.left_thumbstick_value[0] * 0.3,
-tele_data.tele_state.right_thumbstick_value[0] * 0.3)
# get current robot state data.
current_lr_arm_q = arm_ctrl.get_current_dual_arm_q()
current_lr_arm_dq = arm_ctrl.get_current_dual_arm_dq()
# solve ik using motor data and wrist pose, then use ik results to control arms.
time_ik_start = time.time()
sol_q, sol_tauff = arm_ik.solve_ik(tele_data.left_arm_pose, tele_data.right_arm_pose, current_lr_arm_q, current_lr_arm_dq)
time_ik_end = time.time()
logger_mp.debug(f"ik:\t{round(time_ik_end - time_ik_start, 6)}")
arm_ctrl.ctrl_dual_arm(sol_q, sol_tauff)
if recording:
colors = {}
depths = {}
if BINOCULAR:
colors[f"color_{0}"] = current_tv_image[:, :tv_img_shape[1]//2]
colors[f"color_{1}"] = current_tv_image[:, tv_img_shape[1]//2:]
if WRIST:
colors[f"color_{2}"] = current_wrist_image[:, :wrist_img_shape[1]//2]
colors[f"color_{3}"] = current_wrist_image[:, wrist_img_shape[1]//2:]
else:
colors[f"color_{0}"] = current_tv_image
if WRIST:
colors[f"color_{1}"] = current_wrist_image[:, :wrist_img_shape[1]//2]
colors[f"color_{2}"] = current_wrist_image[:, wrist_img_shape[1]//2:]
states = {
"left_arm": {
"qpos": left_arm_state.tolist(), # numpy.array -> list
"qvel": [],
"torque": [],
},
"right_arm": {
"qpos": right_arm_state.tolist(),
"qvel": [],
"torque": [],
},
"left_hand": {
"qpos": left_hand_state,
"qvel": [],
"torque": [],
},
"right_hand": {
"qpos": right_hand_state,
"qvel": [],
"torque": [],
},
"body": None,
}
actions = {
"left_arm": {
"qpos": left_arm_action.tolist(),
"qvel": [],
"torque": [],
},
"right_arm": {
"qpos": right_arm_action.tolist(),
"qvel": [],
"torque": [],
},
"left_hand": {
"qpos": left_hand_action,
"qvel": [],
"torque": [],
},
"right_hand": {
"qpos": right_hand_action,
"qvel": [],
"torque": [],
},
"body": None,
}
# record data
if args.record:
# dex hand or gripper
if args.ee == "dex3" and args.xr_mode == 'hand':
with dual_hand_data_lock:
left_ee_state = dual_hand_state_array[:7]
right_ee_state = dual_hand_state_array[-7:]
left_hand_action = dual_hand_action_array[:7]
right_hand_action = dual_hand_action_array[-7:]
current_body_state = []
current_body_action = []
elif args.ee == "gripper" and args.xr_mode == 'hand':
with dual_gripper_data_lock:
left_ee_state = [dual_gripper_state_array[1]]
right_ee_state = [dual_gripper_state_array[0]]
left_hand_action = [dual_gripper_action_array[1]]
right_hand_action = [dual_gripper_action_array[0]]
current_body_state = []
current_body_action = []
elif args.ee == "gripper" and args.xr_mode == 'controller':
with dual_gripper_data_lock:
left_ee_state = [dual_gripper_state_array[1]]
right_ee_state = [dual_gripper_state_array[0]]
left_hand_action = [dual_gripper_action_array[1]]
right_hand_action = [dual_gripper_action_array[0]]
current_body_state = arm_ctrl.get_current_motor_q().tolist()
current_body_action = [-tele_data.tele_state.left_thumbstick_value[1] * 0.3,
-tele_data.tele_state.left_thumbstick_value[0] * 0.3,
-tele_data.tele_state.right_thumbstick_value[0] * 0.3]
elif args.ee == "inspire1" and args.xr_mode == 'hand':
with dual_hand_data_lock:
left_ee_state = dual_hand_state_array[:6]
right_ee_state = dual_hand_state_array[-6:]
left_hand_action = dual_hand_action_array[:6]
right_hand_action = dual_hand_action_array[-6:]
current_body_state = []
current_body_action = []
else:
left_ee_state = []
right_ee_state = []
left_hand_action = []
right_hand_action = []
current_body_state = []
current_body_action = []
# head image
current_tv_image = tv_img_array.copy()
# wrist image
if WRIST:
current_wrist_image = wrist_img_array.copy()
# arm state and action
left_arm_state = current_lr_arm_q[:7]
right_arm_state = current_lr_arm_q[-7:]
left_arm_action = sol_q[:7]
right_arm_action = sol_q[-7:]
if is_recording:
colors = {}
depths = {}
if BINOCULAR:
colors[f"color_{0}"] = current_tv_image[:, :tv_img_shape[1]//2]
colors[f"color_{1}"] = current_tv_image[:, tv_img_shape[1]//2:]
if WRIST:
colors[f"color_{2}"] = current_wrist_image[:, :wrist_img_shape[1]//2]
colors[f"color_{3}"] = current_wrist_image[:, wrist_img_shape[1]//2:]
else:
colors[f"color_{0}"] = current_tv_image
if WRIST:
colors[f"color_{1}"] = current_wrist_image[:, :wrist_img_shape[1]//2]
colors[f"color_{2}"] = current_wrist_image[:, wrist_img_shape[1]//2:]
states = {
"left_arm": {
"qpos": left_arm_state.tolist(), # numpy.array -> list
"qvel": [],
"torque": [],
},
"right_arm": {
"qpos": right_arm_state.tolist(),
"qvel": [],
"torque": [],
},
"left_ee": {
"qpos": left_ee_state,
"qvel": [],
"torque": [],
},
"right_ee": {
"qpos": right_ee_state,
"qvel": [],
"torque": [],
},
"body": {
"qpos": current_body_state,
},
}
actions = {
"left_arm": {
"qpos": left_arm_action.tolist(),
"qvel": [],
"torque": [],
},
"right_arm": {
"qpos": right_arm_action.tolist(),
"qvel": [],
"torque": [],
},
"left_ee": {
"qpos": left_hand_action,
"qvel": [],
"torque": [],
},
"right_ee": {
"qpos": right_hand_action,
"qvel": [],
"torque": [],
},
"body": {
"qpos": current_body_action,
},
}
if args.sim:
sim_state = sim_state_subscriber.read_data()
recorder.add_item(colors=colors, depths=depths, states=states, actions=actions, sim_state=sim_state)
else:
recorder.add_item(colors=colors, depths=depths, states=states, actions=actions)
current_time = time.time()
time_elapsed = current_time - start_time
sleep_time = max(0, (1 / float(args.frequency)) - time_elapsed)
time.sleep(sleep_time)
# print(f"main process sleep: {sleep_time}")
current_time = time.time()
time_elapsed = current_time - start_time
sleep_time = max(0, (1 / args.frequency) - time_elapsed)
time.sleep(sleep_time)
logger_mp.debug(f"main process sleep: {sleep_time}")
except KeyboardInterrupt:
print("KeyboardInterrupt, exiting program...")
logger_mp.info("KeyboardInterrupt, exiting program...")
finally:
arm_ctrl.ctrl_dual_arm_go_home()
tv_img_shm.unlink()
@@ -275,5 +411,8 @@ if __name__ == '__main__':
wrist_img_shm.close()
if args.record:
recorder.close()
print("Finally, exiting program...")
exit(0)
if args.sim:
sim_state_subscriber.stop_subscribe()
listen_keyboard_thread.join()
logger_mp.info("Finally, exiting program...")
exit(0)
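The main loop above (in both the old and new versions) paces itself by subtracting the iteration's elapsed time from the target period and clamping the remainder at zero, so a slow iteration never produces a negative sleep. A minimal standalone sketch of that pattern, with hypothetical names:

```python
import time

def run_rate_limited(step, frequency, n_iters):
    """Call step() at roughly `frequency` Hz: sleep away whatever is left
    of each 1/frequency period, clamped at zero (mirrors the loop's
    max(0, (1 / args.frequency) - time_elapsed))."""
    period = 1.0 / float(frequency)
    for _ in range(n_iters):
        start = time.time()
        step()
        elapsed = time.time() - start
        time.sleep(max(0.0, period - elapsed))

ticks = []
run_rate_limited(lambda: ticks.append(time.time()), frequency=50, n_iters=5)
gaps = [b - a for a, b in zip(ticks, ticks[1:])]
print(min(gaps))  # no gap should be much shorter than the 0.02 s period
```

Clamping matters because when the IK solve or image copy overruns the period, the loop simply runs late rather than raising from a negative sleep.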

1
teleop/televuer

@@ -0,0 +1 @@
Subproject commit 34f4475fca12166d2c52f2469385a851f614fd4e

44
teleop/utils/episode_writer.py

@@ -7,22 +7,24 @@ import time
from .rerun_visualizer import RerunLogger
from queue import Queue, Empty
from threading import Thread
import logging_mp
logger_mp = logging_mp.get_logger(__name__)
class EpisodeWriter():
def __init__(self, task_dir, frequency=30, image_size=[640, 480], rerun_log = True):
"""
image_size: [width, height]
"""
print("==> EpisodeWriter initializing...\n")
logger_mp.info("==> EpisodeWriter initializing...\n")
self.task_dir = task_dir
self.frequency = frequency
self.image_size = image_size
self.rerun_log = rerun_log
if self.rerun_log:
print("==> RerunLogger initializing...\n")
logger_mp.info("==> RerunLogger initializing...\n")
self.rerun_logger = RerunLogger(prefix="online/", IdxRangeBoundary = 60, memory_limit = "300MB")
print("==> RerunLogger initializing ok.\n")
logger_mp.info("==> RerunLogger initializing ok.\n")
self.data = {}
self.episode_data = []
@@ -32,22 +34,22 @@ class EpisodeWriter():
episode_dirs = [episode_dir for episode_dir in os.listdir(self.task_dir) if 'episode_' in episode_dir]
episode_last = sorted(episode_dirs)[-1] if len(episode_dirs) > 0 else None
self.episode_id = 0 if episode_last is None else int(episode_last.split('_')[-1])
print(f"==> task_dir directory already exist, now self.episode_id is:{self.episode_id}\n")
logger_mp.info(f"==> task_dir directory already exist, now self.episode_id is:{self.episode_id}\n")
else:
os.makedirs(self.task_dir)
print(f"==> episode directory does not exist, now create one.\n")
logger_mp.info(f"==> episode directory does not exist, now create one.\n")
self.data_info()
self.text_desc()
self.is_available = True # Indicates whether the class is available for new operations
# Initialize the queue and worker thread
self.item_data_queue = Queue(maxsize=100)
self.item_data_queue = Queue(-1)
self.stop_worker = False
self.need_save = False # Flag to indicate when save_episode is triggered
self.worker_thread = Thread(target=self.process_queue)
self.worker_thread.start()
print("==> EpisodeWriter initialized successfully.\n")
logger_mp.info("==> EpisodeWriter initialized successfully.\n")
def data_info(self, version='1.0.0', date=None, author=None):
self.info = {
@@ -59,16 +61,17 @@
"audio": {"sample_rate": 16000, "channels": 1, "format":"PCM", "bits":16}, # PCM_S16
"joint_names":{
"left_arm": ['kLeftShoulderPitch' ,'kLeftShoulderRoll', 'kLeftShoulderYaw', 'kLeftElbow', 'kLeftWristRoll', 'kLeftWristPitch', 'kLeftWristyaw'],
"left_hand": [],
"left_ee": [],
"right_arm": [],
"right_hand": [],
"right_ee": [],
"body": [],
},
"tactile_names": {
"left_hand": [],
"right_hand": [],
"left_ee": [],
"right_ee": [],
},
"sim_state": ""
}
def text_desc(self):
self.text = {
@@ -87,7 +90,7 @@
Once successfully created, this function will only be available again after save_episode complete its save task.
"""
if not self.is_available:
print("==> The class is currently unavailable for new operations. Please wait until ongoing tasks are completed.")
logger_mp.info("==> The class is currently unavailable for new operations. Please wait until ongoing tasks are completed.")
return False # Return False if the class is unavailable
# Reset episode-related data and create necessary directories
@@ -108,10 +111,10 @@
self.online_logger = RerunLogger(prefix="online/", IdxRangeBoundary = 60, memory_limit="300MB")
self.is_available = False # After the episode is created, the class is marked as unavailable until the episode is successfully saved
print(f"==> New episode created: {self.episode_dir}")
logger_mp.info(f"==> New episode created: {self.episode_dir}")
return True # Return True if the episode is successfully created
def add_item(self, colors, depths=None, states=None, actions=None, tactiles=None, audios=None):
def add_item(self, colors, depths=None, states=None, actions=None, tactiles=None, audios=None,sim_state=None):
# Increment the item ID
self.item_id += 1
# Create the item data dictionary
@@ -123,6 +126,7 @@
'actions': actions,
'tactiles': tactiles,
'audios': audios,
'sim_state': sim_state,
}
# Enqueue the item data
self.item_data_queue.put(item_data)
@@ -135,7 +139,7 @@
try:
self._process_item_data(item_data)
except Exception as e:
print(f"Error processing item_data (idx={item_data['idx']}): {e}")
logger_mp.info(f"Error processing item_data (idx={item_data['idx']}): {e}")
self.item_data_queue.task_done()
except Empty:
pass
@@ -155,7 +159,7 @@
for idx_color, (color_key, color) in enumerate(colors.items()):
color_name = f'{str(idx).zfill(6)}_{color_key}.jpg'
if not cv2.imwrite(os.path.join(self.color_dir, color_name), color):
print(f"Failed to save color image.")
logger_mp.info(f"Failed to save color image.")
item_data['colors'][color_key] = os.path.join('colors', color_name)
# Save depths
@@ -163,7 +167,7 @@
for idx_depth, (depth_key, depth) in enumerate(depths.items()):
depth_name = f'{str(idx).zfill(6)}_{depth_key}.jpg'
if not cv2.imwrite(os.path.join(self.depth_dir, depth_name), depth):
print(f"Failed to save depth image.")
logger_mp.info(f"Failed to save depth image.")
item_data['depths'][depth_key] = os.path.join('depths', depth_name)
# Save audios
@@ -179,7 +183,7 @@
# Log data if necessary
if self.rerun_log:
current_record_time = time.time()
print(f"==> episode_id:{self.episode_id} item_id:{self.item_id} current_time:{current_record_time}")
logger_mp.info(f"==> episode_id:{self.episode_id} item_id:{idx} current_time:{current_record_time}")
self.rerun_logger.log_item_data(item_data)
def save_episode(self):
@@ -187,7 +191,7 @@
Trigger the save operation. This sets the save flag, and the process_queue thread will handle it.
"""
self.need_save = True # Set the save flag
print(f"==> Episode saved start...")
logger_mp.info(f"==> Episode saved start...")
def _save_episode(self):
"""
@@ -200,7 +204,7 @@
jsonf.write(json.dumps(self.data, indent=4, ensure_ascii=False))
self.need_save = False # Reset the save flag
self.is_available = True # Mark the class as available after saving
print(f"==> Episode saved successfully to {self.json_path}.")
logger_mp.info(f"==> Episode saved successfully to {self.json_path}.")
def close(self):
"""

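`EpisodeWriter` above decouples the real-time control loop from slow disk writes: `add_item` only enqueues, while a worker thread drains the queue, and this diff switches `Queue(maxsize=100)` to the unbounded `Queue(-1)` so the producer never blocks. A minimal sketch of that enqueue-and-drain pattern (class and field names hypothetical):

```python
from queue import Queue, Empty
from threading import Thread

class AsyncWriter:
    """Caller enqueues items without blocking; a worker thread drains
    the queue and performs the (slow) writes in the background."""
    def __init__(self):
        self.queue = Queue(-1)          # unbounded: add_item never blocks
        self.stop = False
        self.written = []               # stands in for images/JSON on disk
        self.worker = Thread(target=self._drain, daemon=True)
        self.worker.start()

    def add_item(self, item):
        self.queue.put(item)            # returns immediately

    def _drain(self):
        while not self.stop or not self.queue.empty():
            try:
                item = self.queue.get(timeout=0.01)
            except Empty:
                continue
            self.written.append(item)   # real code writes files here
            self.queue.task_done()

    def close(self):
        self.queue.join()               # block until everything is flushed
        self.stop = True
        self.worker.join()

w = AsyncWriter()
for i in range(5):
    w.add_item({"idx": i})
w.close()
print(len(w.written))  # → 5
```

The trade-off of an unbounded queue is memory: if the writer falls behind for a long episode, pending frames accumulate until `close()` drains them.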
14
teleop/utils/mat_tool.py

@ -1,14 +0,0 @@
import numpy as np
def mat_update(prev_mat, mat):
if np.linalg.det(mat) == 0:
return prev_mat, False # Return the previous matrix and a False flag if the new matrix is singular (determinant = 0).
else:
return mat, True
def fast_mat_inv(mat):
ret = np.eye(4)
ret[:3, :3] = mat[:3, :3].T
ret[:3, 3] = -mat[:3, :3].T @ mat[:3, 3]
return ret
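The removed `fast_mat_inv` exploits the structure of a rigid homogeneous transform: for T = [[R, t], [0, 1]] the inverse is [[Rᵀ, -Rᵀt], [0, 1]], so no general 4×4 inversion is needed. A quick numerical check of that identity:

```python
import numpy as np

def fast_mat_inv(mat):
    # Inverse of a rigid (rotation + translation) homogeneous transform:
    # [[R, t], [0, 1]]^-1 = [[R.T, -R.T @ t], [0, 1]]
    ret = np.eye(4)
    ret[:3, :3] = mat[:3, :3].T
    ret[:3, 3] = -mat[:3, :3].T @ mat[:3, 3]
    return ret

# Sample transform: 90 degree rotation about z plus a translation.
T = np.eye(4)
T[:3, :3] = [[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]]
T[:3, 3] = [1.0, 2.0, 3.0]

print(np.allclose(fast_mat_inv(T) @ T, np.eye(4)))  # → True
```

This only holds when the upper-left 3×3 block is a proper rotation (orthonormal); for a general 4×4 matrix one would still need `np.linalg.inv`.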

32
teleop/utils/rerun_visualizer.py

@@ -5,6 +5,7 @@ import time
import rerun as rr
import rerun.blueprint as rrb
from datetime import datetime
os.environ["RUST_LOG"] = "error"
class RerunEpisodeReader:
def __init__(self, task_dir = ".", json_file="data.json"):
@@ -89,8 +90,8 @@ class RerunLogger:
data_plot_paths = [
f"{self.prefix}left_arm",
f"{self.prefix}right_arm",
f"{self.prefix}left_hand",
f"{self.prefix}right_hand"
f"{self.prefix}left_ee",
f"{self.prefix}right_ee"
]
for plot_path in data_plot_paths:
view = rrb.TimeSeriesView(
@@ -141,7 +142,7 @@ class RerunLogger:
# Log states
states = item_data.get('states', {}) or {}
for part, state_info in states.items():
if state_info:
if part != "body" and state_info:
values = state_info.get('qpos', [])
for idx, val in enumerate(values):
rr.log(f"{self.prefix}{part}/states/qpos/{idx}", rr.Scalar(val))
@@ -149,7 +150,7 @@ class RerunLogger:
# Log actions
actions = item_data.get('actions', {}) or {}
for part, action_info in actions.items():
if action_info:
if part != "body" and action_info:
values = action_info.get('qpos', [])
for idx, val in enumerate(values):
rr.log(f"{self.prefix}{part}/actions/qpos/{idx}", rr.Scalar(val))
@@ -188,6 +189,9 @@ if __name__ == "__main__":
import gdown
import zipfile
import os
import logging_mp
logger_mp = logging_mp.get_logger(__name__, level=logging_mp.INFO)
zip_file = "rerun_testdata.zip"
zip_file_download_url = "https://drive.google.com/file/d/1f5UuFl1z_gaByg_7jDRj1_NxfJZh2evD/view?usp=sharing"
unzip_file_output_dir = "./testdata"
@@ -195,16 +199,16 @@ if __name__ == "__main__":
if not os.path.exists(zip_file):
file_id = zip_file_download_url.split('/')[5]
gdown.download(id=file_id, output=zip_file, quiet=False)
print("download ok.")
logger_mp.info("download ok.")
if not os.path.exists(unzip_file_output_dir):
os.makedirs(unzip_file_output_dir)
with zipfile.ZipFile(zip_file, 'r') as zip_ref:
zip_ref.extractall(unzip_file_output_dir)
print("uncompress ok.")
logger_mp.info("uncompress ok.")
os.remove(zip_file)
print("clean file ok.")
logger_mp.info("clean file ok.")
else:
print("rerun_testdata exists.")
logger_mp.info("rerun_testdata exists.")
episode_reader = RerunEpisodeReader(task_dir = unzip_file_output_dir)
@@ -212,20 +216,20 @@ if __name__ == "__main__":
user_input = input("Please enter the start signal (enter 'off' or 'on' to start the subsequent program):\n")
if user_input.lower() == 'off':
episode_data6 = episode_reader.return_episode_data(6)
print("Starting offline visualization...")
logger_mp.info("Starting offline visualization...")
offline_logger = RerunLogger(prefix="offline/")
offline_logger.log_episode_data(episode_data6)
print("Offline visualization completed.")
logger_mp.info("Offline visualization completed.")
# TEST EXAMPLE 2 : ONLINE DATA TEST, SLIDE WINDOW SIZE IS 60, MEMORY LIMIT IS 50MB
if user_input.lower() == 'on':
episode_data8 = episode_reader.return_episode_data(8)
print("Starting online visualization with fixed idx size...")
logger_mp.info("Starting online visualization with fixed idx size...")
online_logger = RerunLogger(prefix="online/", IdxRangeBoundary = 60, memory_limit='50MB')
for item_data in episode_data8:
online_logger.log_item_data(item_data)
time.sleep(0.033) # 30hz
print("Online visualization completed.")
logger_mp.info("Online visualization completed.")
# # TEST DATA OF data_dir
@@ -236,9 +240,9 @@ if __name__ == "__main__":
# episode_data8 = episode_reader2.return_episode_data(episode_data_number)
# if user_input.lower() == 'on':
# # Example 2: Offline Visualization with Fixed Time Window
# print("Starting offline visualization with fixed idx size...")
# logger_mp.info("Starting offline visualization with fixed idx size...")
# online_logger = RerunLogger(prefix="offline/", IdxRangeBoundary = 60)
# for item_data in episode_data8:
# online_logger.log_item_data(item_data)
# time.sleep(0.033) # 30hz
# print("Offline visualization completed.")
# logger_mp.info("Offline visualization completed.")

290
teleop/utils/sim_state_topic.py

@@ -0,0 +1,290 @@
# Copyright (c) 2025, Unitree Robotics Co., Ltd. All Rights Reserved.
# License: Apache License, Version 2.0
"""
Simple sim state subscriber class
Subscribe to rt/sim_state_cmd topic and write to shared memory
"""
import threading
import time
import json
from multiprocessing import shared_memory
from typing import Any, Dict, Optional
from unitree_sdk2py.core.channel import ChannelSubscriber, ChannelFactoryInitialize
from unitree_sdk2py.idl.std_msgs.msg.dds_ import String_
import logging_mp
logger_mp = logging_mp.get_logger(__name__)
class SharedMemoryManager:
"""Shared memory manager"""
def __init__(self, name: str = None, size: int = 512):
"""Initialize shared memory manager
Args:
name: shared memory name, if None, create new one
size: shared memory size (bytes)
"""
self.size = size
self.lock = threading.RLock() # reentrant lock
if name:
try:
self.shm = shared_memory.SharedMemory(name=name)
self.shm_name = name
self.created = False
except FileNotFoundError:
self.shm = shared_memory.SharedMemory(create=True, size=size)
self.shm_name = self.shm.name
self.created = True
else:
self.shm = shared_memory.SharedMemory(create=True, size=size)
self.shm_name = self.shm.name
self.created = True
def write_data(self, data: Dict[str, Any]) -> bool:
"""Write data to shared memory
Args:
data: data to write
Returns:
bool: write success or not
"""
try:
with self.lock:
json_str = json.dumps(data)
json_bytes = json_str.encode('utf-8')
if len(json_bytes) > self.size - 8: # reserve 8 bytes for length and timestamp
logger_mp.warning(f"Data too large for shared memory ({len(json_bytes)} > {self.size - 8})")
return False
# write timestamp (4 bytes) and data length (4 bytes)
timestamp = int(time.time()) & 0xFFFFFFFF # 32-bit timestamp, use bitmask to ensure in range
self.shm.buf[0:4] = timestamp.to_bytes(4, 'little')
self.shm.buf[4:8] = len(json_bytes).to_bytes(4, 'little')
# write data
self.shm.buf[8:8+len(json_bytes)] = json_bytes
return True
except Exception as e:
logger_mp.error(f"Error writing to shared memory: {e}")
return False
def read_data(self) -> Optional[Dict[str, Any]]:
"""Read data from shared memory
Returns:
Dict[str, Any]: read data dictionary, return None if failed
"""
try:
with self.lock:
# read timestamp and data length
timestamp = int.from_bytes(self.shm.buf[0:4], 'little')
data_len = int.from_bytes(self.shm.buf[4:8], 'little')
if data_len == 0:
return None
# read data
json_bytes = bytes(self.shm.buf[8:8+data_len])
data = json.loads(json_bytes.decode('utf-8'))
data['_timestamp'] = timestamp # add timestamp information
return data
except Exception as e:
logger_mp.error(f"Error reading from shared memory: {e}")
return None
def get_name(self) -> str:
"""Get shared memory name"""
return self.shm_name
def cleanup(self):
"""Clean up shared memory"""
if hasattr(self, 'shm') and self.shm:
self.shm.close()
if self.created:
try:
self.shm.unlink()
except:
pass
def __del__(self):
"""Destructor"""
self.cleanup()
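`SharedMemoryManager` frames each message as a fixed 8-byte header (a 4-byte little-endian timestamp masked to 32 bits, then a 4-byte payload length) followed by the UTF-8 JSON payload. A self-contained round trip of that layout over a plain `bytearray` (helper names hypothetical; the real class does the same slicing on `shm.buf`):

```python
import json
import time

def pack_frame(data: dict, size: int = 512) -> bytearray:
    """Lay out one frame: bytes 0-3 timestamp, bytes 4-7 payload
    length (both little-endian), bytes 8.. the UTF-8 JSON payload."""
    buf = bytearray(size)
    payload = json.dumps(data).encode("utf-8")
    assert len(payload) <= size - 8, "payload exceeds buffer (8 bytes reserved)"
    ts = int(time.time()) & 0xFFFFFFFF  # clamp to 32 bits, as write_data does
    buf[0:4] = ts.to_bytes(4, "little")
    buf[4:8] = len(payload).to_bytes(4, "little")
    buf[8:8 + len(payload)] = payload
    return buf

def unpack_frame(buf) -> dict:
    """Reverse of pack_frame; attaches the timestamp as '_timestamp'."""
    ts = int.from_bytes(buf[0:4], "little")
    n = int.from_bytes(buf[4:8], "little")
    data = json.loads(bytes(buf[8:8 + n]).decode("utf-8"))
    data["_timestamp"] = ts
    return data

frame = pack_frame({"joint_q": [0.1, 0.2]})
print(unpack_frame(frame)["joint_q"])  # → [0.1, 0.2]
```

Storing the length explicitly is what lets `read_data` ignore the stale bytes past the payload in a fixed-size, reused buffer.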
class SimStateSubscriber:
"""Simple sim state subscriber class"""
def __init__(self, shm_name: str = "sim_state_cmd_data", shm_size: int = 3096):
"""Initialize the subscriber
Args:
shm_name: shared memory name
shm_size: shared memory size
"""
self.shm_name = shm_name
self.shm_size = shm_size
self.running = False
self.subscriber = None
self.subscribe_thread = None
self.shared_memory = None
# initialize shared memory
self._setup_shared_memory()
logger_mp.info(f"[SimStateSubscriber] Initialized with shared memory: {shm_name}")
def _setup_shared_memory(self):
"""Setup shared memory"""
try:
self.shared_memory = SharedMemoryManager(self.shm_name, self.shm_size)
logger_mp.info(f"[SimStateSubscriber] Shared memory setup successfully")
except Exception as e:
logger_mp.error(f"[SimStateSubscriber] Failed to setup shared memory: {e}")
def _message_handler(self, msg: String_):
"""Handle received messages"""
try:
# parse the received message
data = json.loads(msg.data)
# write to shared memory
if self.shared_memory:
self.shared_memory.write_data(data)
except Exception as e:
logger_mp.error(f"[SimStateSubscriber] Error processing message: {e}")
def _subscribe_loop(self):
"""Subscribe loop thread"""
logger_mp.info(f"[SimStateSubscriber] Subscribe thread started")
while self.running:
try:
time.sleep(0.001) # keep thread alive
except Exception as e:
logger_mp.error(f"[SimStateSubscriber] Error in subscribe loop: {e}")
time.sleep(0.01)
logger_mp.info(f"[SimStateSubscriber] Subscribe thread stopped")
def start_subscribe(self):
"""Start subscribing"""
if self.running:
logger_mp.info(f"[SimStateSubscriber] Already running")
return
try:
# setup subscriber
self.subscriber = ChannelSubscriber("rt/sim_state", String_)
self.subscriber.Init(self._message_handler, 10)
self.running = True
logger_mp.info(f"[SimStateSubscriber] Subscriber initialized")
# start subscribe thread
self.subscribe_thread = threading.Thread(target=self._subscribe_loop)
self.subscribe_thread.daemon = True
self.subscribe_thread.start()
logger_mp.info(f"[SimStateSubscriber] Started subscribing to rt/sim_state")
except Exception as e:
logger_mp.error(f"[SimStateSubscriber] Failed to start subscribing: {e}")
self.running = False
def stop_subscribe(self):
"""Stop subscribing"""
if not self.running:
return
logger_mp.info(f"[SimStateSubscriber] Stopping subscriber...")
self.running = False
# wait for thread to finish
if self.subscribe_thread:
self.subscribe_thread.join(timeout=1.0)
if self.shared_memory:
# cleanup shared memory
self.shared_memory.cleanup()
logger_mp.info(f"[SimStateSubscriber] Subscriber stopped")
def read_data(self) -> Optional[Dict[str, Any]]:
"""Read data from shared memory
Returns:
Dict: received data, None if no data or error
"""
try:
if self.shared_memory:
return self.shared_memory.read_data()
return None
except Exception as e:
logger_mp.error(f"[SimStateSubscriber] Error reading data: {e}")
return None
def is_running(self) -> bool:
"""Check if subscriber is running"""
return self.running
def __del__(self):
"""Destructor"""
self.stop_subscribe()
# convenient functions
def create_sim_state_subscriber(shm_name: str = "sim_state_cmd_data", shm_size: int = 3096) -> SimStateSubscriber:
"""Create a sim state subscriber instance
Args:
shm_name: shared memory name
shm_size: shared memory size
Returns:
SimStateSubscriber: subscriber instance
"""
return SimStateSubscriber(shm_name, shm_size)
def start_sim_state_subscribe(shm_name: str = "sim_state_cmd_data", shm_size: int = 3096) -> SimStateSubscriber:
"""Start sim state subscribing
Args:
shm_name: shared memory name
shm_size: shared memory size
Returns:
SimStateSubscriber: started subscriber instance
"""
subscriber = create_sim_state_subscriber(shm_name, shm_size)
subscriber.start_subscribe()
return subscriber
# if __name__ == "__main__":
# # example usage
# logger_mp.info("Starting sim state subscriber...")
# ChannelFactoryInitialize(0)
# # create and start subscriber
# subscriber = start_sim_state_subscribe()
# try:
# # keep running and check for data
# while True:
# data = subscriber.read_data()
# if data:
# logger_mp.info(f"Read data: {data}")
# time.sleep(1)
# except KeyboardInterrupt:
# logger_mp.warning("\nInterrupted by user")
# finally:
# subscriber.stop_subscribe()
# logger_mp.info("Subscriber stopped")