---
id: open-questions
title: "Open Questions & Known Unknowns"
status: established
source_sections: null
related_topics: [hardware-specs, joint-configuration, sdk-programming, safety-limits]
key_equations: []
key_terms: []
images: []
examples: []
open_questions: []
---

# Open Questions & Known Unknowns

This file catalogs what we know, what we don't know, and what would resolve the gaps. When a question gets answered, move it to the **Resolved** section with the answer and source.

---

## Hardware & Mechanical

### Open

- **Q:** What are the per-joint velocity limits (dq_max) for all joints?
  - _Partial:_ Position limits are known for major joints. Velocity limits not published.
  - _Would resolve:_ SDK configuration files, URDF damping/velocity tags, or actuator datasheets.
- **Q:** What are the exact link masses and inertia tensors for dynamics modeling?
  - _Partial:_ Total mass ~35 kg known. Per-link breakdown should be in URDF.
  - _Would resolve:_ Parse URDF from unitree_ros/robots/g1_description/.
- **Q:** What is the ankle joint range of motion?
  - _Partial:_ Ankle joints exist (pitch + roll) but ranges not in public specs.
  - _Would resolve:_ URDF joint limits, or physical measurement.
- **Q:** What are the exact gear ratios per joint actuator?
  - _Partial:_ Planetary gearboxes used, ratios not published.
  - _Would resolve:_ Actuator datasheets.
- **Q:** What is the IP rating and environmental protection?
  - _Partial:_ Not documented anywhere.
  - _Would resolve:_ Official compliance documentation.

### Open (Dexterous Hands)

- **Q:** What are the Dex3-1 per-finger force limits?
  - _Partial:_ 33 tactile sensors, force-position hybrid control confirmed. Max force not documented.
  - _Would resolve:_ Dex3-1 actuator datasheet or testing.
- **Q:** What is the exact INSPIRE hand DOF configuration per finger?
  - _Partial:_ Known to be 5-finger, "advanced." Per-finger DOF not documented.
  - _Would resolve:_ INSPIRE DFX documentation page (partially available).
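Extracting the per-link mass/inertia breakdown from the URDF (second question above) is a small scripting job. A minimal sketch with the standard library only; the inline URDF snippet uses placeholder values, not real G1 numbers:

```python
import xml.etree.ElementTree as ET

def link_inertials(urdf_xml: str) -> dict:
    """Map link name -> (mass_kg, inertia dict) from URDF <inertial> blocks."""
    root = ET.fromstring(urdf_xml)
    out = {}
    for link in root.iter("link"):
        inertial = link.find("inertial")
        if inertial is None:
            continue  # visual-only links carry no mass data
        mass = float(inertial.find("mass").attrib["value"])
        inertia = {k: float(v) for k, v in inertial.find("inertia").attrib.items()}
        out[link.attrib["name"]] = (mass, inertia)
    return out

# Tiny inline example (placeholder values):
urdf = """
<robot name="g1_demo">
  <link name="pelvis">
    <inertial>
      <mass value="3.5"/>
      <inertia ixx="0.01" ixy="0" ixz="0" iyy="0.02" iyz="0" izz="0.015"/>
    </inertial>
  </link>
</robot>
"""
print(link_inertials(urdf))
```

For the real file, replace `ET.fromstring` with `ET.parse("g1_description/urdf/g1.urdf").getroot()` (path assumed, check the repo layout) and sum the masses as a sanity check against the ~35 kg total.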
---

## Software & SDK

### Open

- **Q:** Does the Python SDK have full API parity with the C++ SDK?
  - _Partial:_ Python SDK uses pybind11 wrapping C++. Unclear if coverage is 100%.
  - _Would resolve:_ Side-by-side API comparison.
- **Q:** What firmware version ships on currently-sold G1 units?
  - _Partial:_ v3.2+ mentioned in documentation. Actual shipping version unclear.
  - _Would resolve:_ Checking a new unit, or a Unitree support inquiry.
- **Q:** What is the LLM integration capability in firmware v3.2+?
  - _Partial:_ "Preliminary LLM integration support" mentioned. Details sparse.
  - _Would resolve:_ Testing on an EDU unit with Jetson, or an official documentation update.

---

## Control & Locomotion

### Open

- **Q:** Can the G1 run (not just walk)? What is the true max speed?
  - _Partial:_ Walking max is 2 m/s. H1-2 can run at 3.3 m/s. G1 running gait not confirmed.
  - _Would resolve:_ Testing or official confirmation.
- **Q:** What is the exact RL policy observation/action space?
  - _Partial:_ Stock `ai_sport` policy obs/action not documented. MuJoCo Playground G1JoystickFlatTerrain: 103-dim state obs, 165-dim privileged obs, 29-dim action. GR00T-WBC: 516-dim obs (86×6 history), 15-dim action (lower body only).
  - _Update (2026-02-15):_ MuJoCo Playground env source inspected. Full obs breakdown documented in `context/learning-and-ai.md` §8.
  - _Would resolve:_ Stock `ai_sport` binary analysis (still unknown).
- **Q:** Can a custom locomotion policy be deployed natively to the RK3588 locomotion computer?
  - _Partial:_ Root access achieved via BLE exploits (UniPwn, FreeBOT). The `ai_sport` binary is the stock policy. Nobody has publicly documented replacing it. Config files use FMX encryption (partially cracked).
  - _Would resolve:_ Full reverse engineering of the `master_service` orchestrator and `ai_sport` binary, or Unitree providing official developer access.

---

## Simulation

### Open

- **Q:** What are the optimal domain randomization parameters for G1 sim-to-real?
  - _Partial:_ MuJoCo Playground defaults: friction U(0.4, 1.0), body mass ±10%, torso mass offset ±1 kg, DOF armature 1.0-1.05x, DOF frictionloss 0.5-2.0x. Whether these are optimal or need tuning for the real G1 is unconfirmed.
  - _Update (2026-02-15):_ Inspected MuJoCo Playground `randomize.py` source. Parameters documented in `context/learning-and-ai.md` §8.
  - _Would resolve:_ Sim-to-real transfer testing on a physical G1.

---

## Safety & Deployment

### Open

- **Q:** What are the official safety certifications (CE, FCC, etc.)?
  - _Partial:_ Not documented in any available source.
  - _Would resolve:_ Official compliance documentation.
- **Q:** What is the recommended operating temperature and humidity range?
  - _Partial:_ Not documented. Indoor operation recommended as best practice.
  - _Would resolve:_ Official operating manual detailed specs.
- **Q:** What is the low-battery cutoff voltage and auto-shutdown behavior?
  - _Partial:_ 13-cell LiPo at ~48 V nominal, but cutoff threshold not documented.
  - _Would resolve:_ Battery management system documentation.
- **Q:** What is the Jetson Orin NX power draw under computational load?
  - _Partial:_ Standard Orin NX specs suggest 10-25 W. G1-specific thermal management unclear.
  - _Would resolve:_ Power measurement on a running unit.

---

## Motion Capture & Balance (Phase 2)

### Open

- **Q:** What is the max recoverable push force for the stock G1 controller?
  - _Partial:_ Light push recovery confirmed. Max impulse not quantified.
  - _Would resolve:_ Physical push testing with force measurement, or Unitree internal documentation.
- **Q:** Can GR00T-WBC run at 500 Hz on the Jetson Orin NX?
  - _Partial:_ On GB10 (much faster than Orin NX), the loop runs at ~3.5 ms/iteration at 50 Hz in sync mode. Orin NX benchmarking still needed.
  - _Update (2026-02-14):_ GR00T-WBC default is 50 Hz control, not 500 Hz. At 50 Hz on GB10 it uses only 17.5% of the time budget. Orin NX likely feasible at 50 Hz, but 500 Hz unconfirmed.
  - _Would resolve:_ Benchmarking GR00T-WBC on actual Orin NX hardware.
- **Q:** What is the end-to-end latency from mocap capture to robot execution?
  - _Partial:_ DDS latency ~2 ms. XR teleoperate latency not documented. Video-based pose estimation adds 30-100 ms.
  - _Update (2026-02-14):_ GR00T-WBC teleop accepts upper-body commands via the `/ControlPolicy/upper_body_pose` ROS topic. Pico VR is the primary tested device. Camera-based mocap not yet integrated.
  - _Would resolve:_ End-to-end timing measurement with each mocap source.
- **Q:** Does a residual policy overlay work with the proprietary locomotion computer, or does it require full replacement?
  - _Partial:_ rt/lowcmd can send joint commands. Unclear if these override or add to the stock controller output.
  - _Would resolve:_ Testing: send small corrections to leg joints while the stock controller is active; observe behavior.
- **Q:** What AMASS motions have been successfully replayed on a physical G1?
  - _Partial:_ An AMASS retarget exists on HuggingFace. Which specific motions have been executed on real hardware is not documented publicly.
  - _Would resolve:_ Testing with the retargeted dataset, or finding lab reports from groups using it.
- **Q:** What is the minimum viable sensor set for push detection (IMU only vs. IMU + F/T)?
  - _Partial:_ G1 has an IMU and joint encoders but no force/torque sensors at the feet. External force must be estimated.
  - _Would resolve:_ Compare push detection accuracy: IMU-only vs. IMU + momentum observer vs. added F/T sensors.

---

## GR00T-WBC Deployment (Phase 3)

### Open

- **Q:** How does the GR00T-WBC Walk policy compare to the stock G1 controller for push recovery?
  - _Partial:_ Balance policy tested on the real robot. The robot can recover from light pushes with IMU offset + teleop gains. Not yet quantified with force measurement. Walk policy not yet tested on the real robot.
  - _Would resolve:_ Side-by-side push testing on the real robot with force measurement.
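Since the G1 lacks foot F/T sensors, the IMU-only baseline for the push-detection comparison above amounts to thresholding filtered horizontal acceleration. A minimal sketch of that baseline; the filter constant and threshold are illustrative values, not tuned ones:

```python
import numpy as np

def detect_push(accel_samples, alpha=0.2, threshold=1.5):
    """Flag sample indices where low-pass-filtered horizontal (x, y)
    acceleration exceeds `threshold` m/s^2 (IMU-only push baseline)."""
    filt = np.zeros(2)  # filtered (x, y) acceleration, starts at rest
    events = []
    for i, a in enumerate(accel_samples):
        # first-order low-pass to reject sensor noise and footstep transients
        filt = (1 - alpha) * filt + alpha * np.asarray(a[:2], dtype=float)
        if np.linalg.norm(filt) > threshold:
            events.append(i)
    return events

# Quiet standing, then a 3 m/s^2 shove along +x for 10 samples:
samples = [(0.0, 0.0, 9.81)] * 20 + [(3.0, 0.0, 9.81)] * 10
print(detect_push(samples))  # → [23, 24, 25, 26, 27, 28, 29]
```

The momentum-observer variant would instead compare commanded joint torques against the rate of change of whole-body momentum; this sketch only covers the IMU-only end of the comparison.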
- **Q:** What is the exact training recipe for the pre-trained ONNX policies (Balance, Walk)?
  - _Partial:_ PPO via RSL-RL in Isaac Lab. MLP [512, 256, 128]. Domain randomization. Zero-shot transfer. Exact reward function and perturbation curriculum not published by NVIDIA. WBC-AGILE (nvidia-isaac/WBC-AGILE) provides a training framework but may differ from the pre-trained models.
  - _Would resolve:_ NVIDIA publishing training code, or reverse-engineering from the ONNX model + observation/action analysis.
- **Q:** What is the optimal IMU pitch offset for the G1?
  - _Partial:_ Approximately -6° (np.deg2rad(-6.0)), calibrated on one G1 EDU Ultimate E (U7). May vary per unit due to manufacturing tolerance. The stock Unitree controller handles this calibration internally.
  - _Would resolve:_ Testing on multiple G1 units to determine whether the offset is consistent or per-unit.
- **Q:** What camera-based mocap solution integrates best with GR00T-WBC's upper body teleop?
  - _Partial:_ GR00T-WBC supports Pico VR, LeapMotion, HTC Vive, and iPhone natively. Camera-based (MediaPipe, OpenPose) is not built in but could publish to the same ROS topic.
  - _Update (2026-02-15):_ **Apple Vision Pro selected as primary telepresence device.** Two integration paths identified:
    - **Path 1 (fastest):** Unitree `xr_teleoperate` — Vision Pro connects via Safari WebXR, no app needed. Uses the TeleVuer/Vuer web server. But bypasses GR00T-WBC (uses the stock controller).
    - **Path 2 (best quality):** VisionProTeleop (MIT, open-source native visionOS app "Tracking Streamer") → `avp_stream` Python lib (gRPC) → bridge to GR00T-WBC's `ControlPolicy/upper_body_pose` ROS2 topic. Enables RL-based balance via GR00T-WBC.
  - _Would resolve:_ Implement one of the paths and measure tracking quality + latency.
- **Q:** Why does MuJoCo's GLFW passive viewer freeze on virtual/remote displays after a few seconds?
  - _Partial:_ Observed on both Xvfb+VNC and NoMachine. The GLFW event loop thread appears to stall.
    X11 framebuffer screenshots confirm rendering IS happening intermittently. ffmpeg x11grab also shows stalling.
  - _Would resolve:_ GLFW debug logging, or switching to the MuJoCo offscreen renderer + a custom display pipeline.

## Unified WBC Training (Phase 4 Research Direction)

### Open

- **Q:** Can a unified policy (locomotion + upper body tracking) maintain balance when mocap drives the arms to extreme positions?
  - _Partial:_ ExBody/ExBody2 validate the approach on other humanoids. No published results on the G1 specifically.
  - _Plan:_ Fork G1JoystickFlatTerrain, add an upper body tracking reward, 4-stage curriculum (400M steps, ~5.5 hrs on GB10). See `plans/eager-shimmying-raccoon.md`.
  - _Would resolve:_ Training the unified policy and evaluating push survival with arbitrary upper body poses.
- **Q:** Are procedural upper body targets sufficient for training, or is AMASS motion data required?
  - _Partial:_ MuJoCo Playground uses only parametric gait generation (no AMASS). ExBody2 uses motion capture. Procedural targets (uniform random + sinusoidal) may cover the config space but miss real-world motion correlations.
  - _Would resolve:_ Compare tracking quality: procedural-trained vs. AMASS-trained on real AVP data.
- **Q:** What is the Apple Vision Pro → G1 retargeting latency and accuracy?
  - _Partial:_ AVP provides full hand/body tracking. Multiple integration paths researched.
  - _Update (2026-02-15):_ xr_teleoperate uses Pinocchio IK for retargeting (WebXR wrist poses → G1 arm joints). VisionProTeleop provides native ARKit with 25 finger joints/hand via gRPC. GR00T-WBC's `InterpolationPolicy` accepts 17-DOF upper body targets and interpolates smoothly.
  - _Would resolve:_ Build the AVP bridge, measure end-to-end latency and joint angle accuracy.

## Vision Pro Telepresence Integration (Phase 5)

### Open

- **Q:** Which Vision Pro integration path works best with GR00T-WBC?
  - _Partial:_ xr_teleoperate WebXR path verified working (arm tracking confirmed 2026-02-18).
    Bypasses GR00T-WBC (uses the stock controller + `rt/arm_sdk`). VisionProTeleop native app path blocked — the `avp_stream` server can't bind port 12345 on the robot (port held by a Unitree process). GR00T-WBC bridge untested.
  - _Would resolve:_ Resolve the port 12345 conflict on the robot, or find an alternate gRPC port for avp_stream.
- **Q:** Can the GR00T-WBC Walk policy maintain balance with Vision Pro-driven arm poses?
  - _Partial:_ The Walk ONNX policy receives upper body joint angles as observation input and can compensate. But it was NOT trained with arbitrary arm configurations — conservative motions are likely fine; extreme poses may destabilize it.
  - _Related:_ The unified WBC training plan (plans/eager-shimmying-raccoon.md) would train specifically for this.
  - _Would resolve:_ Test with a real Vision Pro driving the arms while walking. If unstable, proceed with the unified training plan.
- **Q:** Does the Unitree wireless remote work under GR00T-WBC?
  - **A:** No. GR00T-WBC takes over low-level motor control via rt/lowcmd, bypassing the stock controller that reads the remote. The rt/wirelesscontroller DDS topic is still published by the robot, but nothing in GR00T-WBC subscribes to it on real hardware. Keyboard control (w/s/a/d/q/e) is the built-in alternative. The `wireless_remote` field is present in rt/lowstate and could be bridged to GR00T-WBC's command system. [T1 — Source inspection, 2026-02-15]

---

## Resolved

### Vision Pro xr_teleoperate WebXR (Resolved 2026-02-18)

- **Q:** Can xr_teleoperate's WebXR pipeline (Vuer/TeleVuer) work with Apple Vision Pro for arm teleoperation?
  - **A:** Yes. Requires: (1) vuer v0.0.60 (the v0.0.40 client JS is incompatible with visionOS Safari), (2) a JS port fix (`hostname` → `host` in all chunk files with `wss://`), (3) an aiohttp SSL assertion fix (`assert self._paused` → `if not self._paused: return`), (4) CA-signed certs with the rootCA installed + full trust enabled on the VP, (5) launch from `~/xr_teleoperate/teleop/` for URDF relative paths.
    Arms track hand movements in a 30 Hz IK loop. IK configuration flipping is a known issue near singularities (recoverable by restarting teleop). [T1 — Verified on real robot, 2026-02-18]
- **Q:** Why does Safari on visionOS fail to establish WebSocket connections to self-signed HTTPS servers?
  - **A:** Safari treats HTTPS page trust and WebSocket (`wss://`) trust separately. Clicking "Accept" on the browser cert warning only trusts the page load — WebSocket connections are silently rejected with no error and no prompt. The root CA must be installed as a device profile AND explicitly enabled in Settings → General → About → Certificate Trust Settings. If stale cert state has accumulated from debugging, a factory reset of the Vision Pro may be needed for a clean install. [T1 — Verified 2026-02-18]

### GR00T-WBC Real Robot (Resolved 2026-02-15)

- **Q:** Can GR00T-WBC relay real-time control from GB10 to the G1 over the network?
  - **A:** Yes. The GB10 at 192.168.123.100 sends rt/lowcmd and receives rt/lowstate via DDS. CYCLONEDDS_URI must specify the network interface explicitly. The UFW firewall must allow 192.168.123.0/24. The robot stands and balances autonomously with the Balance ONNX policy. [T1 — Verified on real robot, 2026-02-15]
- **Q:** Why does GR00T-WBC cause a persistent backward lean on the real G1?
  - **A:** **IMU mounting offset.** The G1's pelvis IMU has a physical pitch offset (~6°) relative to simulation. The stock Unitree controller compensates internally, but GR00T-WBC reads raw IMU data via DDS and has no calibration step. Fix: apply a quaternion pitch rotation before gravity computation. This is NOT a sim-to-real gap in the policy — it's a missing sensor calibration. Multiple GitHub users (Issues #21, #22, #23) report the same problem. [T1 — Root-caused and verified on real robot, 2026-02-15]
- **Q:** Should GR00T-WBC ONNX policy actions be clipped to [-1, 1]?
  - **A:** No. NVIDIA's reference code does NOT clip actions. RSL-RL does not clip by default.
    Policy outputs exceeding [-1, 1] are intentional for push recovery and large corrections. Adding np.clip() causes policy saturation — outputs rail at the clip boundaries with no room left for balance corrections. [T1 — Verified via NVIDIA source inspection and real robot testing, 2026-02-15]
- **Q:** What is the G1's mode_machine value?
  - **A:** mode_machine=5 (g1_29dof_rev_1_0) on our G1 EDU Ultimate E (U7), confirmed via 63 DDS samples. GR00T-WBC should read this dynamically from rt/lowstate (PR #11) rather than hardcoding it. [T1 — Verified 2026-02-15]
- **Q:** What PD gains work best for GR00T-WBC on the real G1?
  - **A:** Unitree xr_teleoperate gains (KP: 300/300/300/300/80/80 per leg, 300/300/300 waist; KD: 3/3/3/3/2/2 per leg, 5/5/5 waist) combined with IMU offset calibration give the best results. The sim-trained gains (KP: 150/150/150/200/40/40) give better push recovery but less tracking precision. The policy was trained with the sim gains, so its corrections are calibrated for those — but with the correct IMU reference frame, stiffer gains improve tracking without the policy fighting itself. [T1 — Iterative tuning on real robot, 2026-02-15]

### MuJoCo Playground Training (Resolved)

- **Q:** Can MuJoCo Playground train G1 policies on Blackwell (sm_121)?
  - **A:** Yes. JAX 0.9.0.1 + CUDA 12 works. JIT compilation takes ~37 s per operation the first time, then cached. 8192 parallel envs, ~17K steps/sec. Full 200M-step training in ~3.5 hours. [T1 — Verified on GB10, 2026-02-15]
- **Q:** What is the reward function for G1JoystickFlatTerrain?
  - **A:** 23 reward terms. Key ones: tracking_lin_vel (1.0), tracking_ang_vel (0.75), feet_phase (1.0), feet_air_time (2.0), orientation (-2.0), termination (-100.0). Push perturbations enabled (0.1-2.0 m/s, every 5-10 s). Full breakdown in `context/learning-and-ai.md` §8. [T1 — Source inspection, 2026-02-15]
- **Q:** What are the training time estimates for different policy types on GB10?
  - **A:** Locomotion-only: 5M steps in 7 min (sanity check), 200M in ~3.5 hrs. Unified WBC (estimated): 400M in ~5.5 hrs. [T1 — Measured on GB10, 2026-02-15]

### GR00T-WBC Deployment (Resolved)

- **Q:** Does GR00T-WBC run on aarch64 (ARM64)?
  - **A:** Yes, with patches. A CycloneDDS `` XML section causes a buffer overflow on aarch64 — fix by removing the section. ROS2 Jazzy + Python 3.12 on Ubuntu 24.04 aarch64 works. [T1 — Verified on GB10, 2026-02-14]
- **Q:** Can ROS2 Jazzy and Unitree SDK2 coexist in the same process?
  - **A:** Yes. The CycloneDDS system lib (0.10.4) and ROS2 lib (0.10.5) are ABI-incompatible, but using the default FastRTPS RMW avoids the conflict. A second ChannelFactory init fails gracefully. [T1 — Verified on GB10, 2026-02-14]
- **Q:** What are the ONNX policy observation/action dimensions?
  - **A:** Both Balance and Walk: 516-dim observation (proprioception + history), 15-dim action (lower body joint targets). [T1 — ONNX model inspection, 2026-02-14]

### Hardware & Mechanical (Resolved)

- **Q:** What are the exact DOF configurations for each G1 variant?
  - **A:** Base = 23-DOF (6+6+1+5+5), Mid = 29-DOF (6+6+3+7+7), Full = 43-DOF (29 body + 14 hand). [T0 — Official spec sheet]
- **Q:** What are the per-joint torque limits?
  - **A (partial):** Knee max = 90 Nm (base) / 120 Nm (EDU). Other joints not published. [T0 — Official spec sheet]
- **Q:** What is the maximum payload capacity?
  - **A:** 2 kg per arm (standard), 3 kg per arm (EDU). [T0 — Official spec sheet]

### Sensors & Perception (Resolved)

- **Q:** What is the full sensor suite across G1 variants?
  - **A:** Intel RealSense D435i (RGBD), Livox MID360 (3D LiDAR), IMU, 4-element microphone array, 5W speaker, dual joint encoders, per-motor temperature sensors. Dex3-1 adds 33 tactile sensors per hand. [T0 — Official product page]
- **Q:** What are the camera specifications?
  - **A (partial):** Intel RealSense D435i — standard specs 1280x720@30fps (depth), 1920x1080@30fps (RGB), 87°×58° FOV.
    G1-specific configuration may differ. [T0/T3 — D435i datasheet + inference]

### Software & SDK (Resolved)

- **Q:** What is the current stable SDK2 version and Python API coverage?
  - **A:** unitree_sdk2 v2.0.2 (C++, BSD-3). Python wrapper via unitree_sdk2_python using pybind11, Python ≥3.8. CycloneDDS 0.10.2 required. [T0 — GitHub]
- **Q:** What ROS2 distributions are officially supported?
  - **A:** Ubuntu 20.04 + ROS2 Foxy, Ubuntu 22.04 + ROS2 Humble (recommended). [T0 — unitree_ros2 README]
- **Q:** What control modes are available via SDK?
  - **A:** Position (q), velocity (dq), torque (tau), and force-position hybrid. Via the MotorCmd_ structure on the rt/lowcmd topic. Enable/disable per motor. [T0 — Developer guide]

### Simulation (Resolved)

- **Q:** Are there official URDF/MJCF models available?
  - **A:** Yes. MJCF in MuJoCo Menagerie (g1.xml, g1_with_hands.xml). URDF in unitree_ros/robots/g1_description/. USD in unitree_model (deprecated → HuggingFace). [T0 — GitHub repos]

### Hardware & Architecture (Resolved)

- **Q:** What processor runs on the locomotion computer?
  - **A:** Rockchip RK3588 (8-core ARM Cortex-A76/A55, 8GB LPDDR4X, 32GB eMMC), running Linux 5.10.176-rt86+. Runs 26 daemons including `ai_sport` (the stock locomotion policy). [T1 — arXiv:2509.14096, arXiv:2509.14139]
- **Q:** How to replace the stock locomotion policy with a custom one?
  - **A:** Enter debug mode (L2+R2 on the remote while the robot is suspended in damping state). This shuts down `ai_sport`. Then send motor commands from the Jetson or an external PC via the `rt/lowcmd` DDS topic. All published research groups use this method. Native deployment to the RK3588 has not been publicly achieved. [T1 — Community + research papers]

### Control & Locomotion (Resolved)

- **Q:** What locomotion controller is running on the stock G1?
  - **A:** Gait-conditioned reinforcement learning with a multi-phase curriculum. Biomechanically inspired reward shaping. Trained in simulation, deployed via sim-to-real transfer.
    [T1 — arXiv:2505.20619]
- **Q:** What are the achievable walking speeds and terrain capabilities?
  - **A:** Max 2 m/s walking. Verified on tile, concrete, carpet. Light push recovery. Smooth gait transitions. [T0/T1 — Official specs + testing]
- **Q:** What sim-to-real transfer approaches work best for the G1?
  - **A:** Domain randomization in Isaac Gym/MuJoCo, with Sim2Sim cross-validation. Zero-shot transfer demonstrated for locomotion and fall recovery. The DDS interface is identical in sim and real (switch network config only). [T1 — Multiple papers]
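The domain randomization ranges recorded in the Simulation section (friction U(0.4, 1.0), body mass ±10%, torso mass offset ±1 kg, armature 1.0-1.05x, frictionloss 0.5-2.0x) can be written as a per-episode sampler. A minimal illustration in plain numpy, not the actual MuJoCo Playground `randomize.py` implementation:

```python
import numpy as np

# Ranges noted from MuJoCo Playground defaults (see Simulation section).
DR_RANGES = {
    "friction":          (0.4, 1.0),    # uniform, absolute value
    "mass_scale":        (0.9, 1.1),    # body mass ±10%, multiplicative
    "torso_mass_offset": (-1.0, 1.0),   # kg, additive
    "armature":          (1.0, 1.05),   # DOF armature scale
    "frictionloss":      (0.5, 2.0),    # DOF frictionloss scale
}

def sample_episode_params(rng: np.random.Generator) -> dict:
    """Draw one set of randomized physics parameters for a training episode."""
    return {name: float(rng.uniform(lo, hi)) for name, (lo, hi) in DR_RANGES.items()}

rng = np.random.default_rng(0)
print(sample_episode_params(rng))
```

In the real pipeline these values would be applied to the MJCF model fields (geom friction, body mass, joint armature/frictionloss) before each rollout batch; whether these defaults transfer to the physical G1 is the open question above.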