4 changes: 2 additions & 2 deletions Sim2Real/README.md
@@ -30,7 +30,7 @@ TASK_NAME can be one of the following:

## (Optional) Train Sim2Real RL policy

Train Sim2Real RL policy for the tasks. The trained policy weights for most tasks are already in the [Sim2Real/logs/](https://github.com/GenRobo/DreamControl/tree/main/Sim2Real/logs) folder. You may skip this step if you want to directly use these trained policies for inference.
Train Sim2Real RL policy for the tasks. The trained policy weights for most tasks are already in the [Sim2Real/deploy/policies/](https://github.com/GenRobo/DreamControl/tree/main/Sim2Real/deploy/policies/) folder. You may skip this step if you want to directly use these trained policies for inference.

Severity: medium

For better maintainability and portability of the documentation, it's recommended to use relative links for files within the same repository. Absolute URLs are tied to a specific branch (e.g., main) and might not work correctly in forks or other branches.

Suggested change
Train Sim2Real RL policy for the tasks. The trained policy weights for most tasks are already in the [Sim2Real/deploy/policies/](https://github.com/GenRobo/DreamControl/tree/main/Sim2Real/deploy/policies/) folder. You may skip this step if you want to directly use these trained policies for inference.
Train Sim2Real RL policy for the tasks. The trained policy weights for most tasks are already in the [Sim2Real/deploy/policies/](./deploy/policies/) folder. You may skip this step if you want to directly use these trained policies for inference.


```shell
./isaaclab.sh -p scripts/reinforcement_learning/rsl_rl/train.py --task=Isaac-Motion-Tracking-<TASK-NAME>-Real-v0 --headless --device cuda:1
```

@@ -204,4 +204,4 @@ python deploy_real_Bimanual_Pick.py enp3s0 g1_full_body.yaml

When you run the script, it will first put the robot in `zero torque mode`. In the zero torque state, press the `start` button on the remote control, and the robot will move to the default joint position state. At this point, lower the gantry so that the robot's feet touch the ground. Press the `A` button to enter the still standing mode. The robot should now stand still and balance itself. Try giving it a little push and it should rebalance. If anything goes wrong, press the `select` button on the remote control to exit the policy and enter damping mode.
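The button sequence above amounts to a small mode state machine. The sketch below is purely illustrative: the mode names and transition table are assumptions for clarity, not the deploy script's actual code. The one behavior it does mirror from the text is that `select` always drops to damping mode as the safety exit.

```python
from enum import Enum, auto

class Mode(Enum):
    """Hypothetical controller modes mirroring the stages described above."""
    ZERO_TORQUE = auto()
    MOVE_TO_DEFAULT = auto()
    STILL_STANDING = auto()
    POLICY = auto()
    DAMPING = auto()

# Remote-control transitions as described in the README.
TRANSITIONS = {
    (Mode.ZERO_TORQUE, "start"): Mode.MOVE_TO_DEFAULT,
    (Mode.MOVE_TO_DEFAULT, "A"): Mode.STILL_STANDING,
    (Mode.STILL_STANDING, "B"): Mode.POLICY,
}

def step(mode: Mode, button: str) -> Mode:
    """Return the next mode for a button press; unknown presses are ignored."""
    if button == "select":  # emergency exit from any mode
        return Mode.DAMPING
    return TRANSITIONS.get((mode, button), mode)
```

This kind of table-driven transition function makes the safety path (`select` → damping) impossible to forget, since it is checked before any other transition.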

If `obj` is not set to `"none"` in the script, a window should pop up with the object/goal visible in the camera stream. If you see a bounding box around the correct object/goal, press the `q` key on the keyboard to close the window. If everything looks good, press the `B` button on the remote control to start the policy. Be extremely careful and monitor the robot's behavior closely. If anything looks wrong at any point, press the `select` button on the remote control to exit the policy and enter damping mode. After the policy finishes, the robot will enter damping mode again. You can increase the `POLICY_TIME` variable in the script to keep the robot in the policy for longer (the robot will pause at the end configuration after 10 s).
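Of the names below, only `POLICY_TIME` comes from the deploy script; the timing loop is a hypothetical sketch of how such a duration cap might work, not the script's actual control loop.

```python
import time

# POLICY_TIME is the variable named in the deploy script; everything else
# in this sketch is an illustrative assumption.
POLICY_TIME = 10.0  # seconds to keep running the policy before damping

def run_policy(policy_step, dt: float = 0.02) -> int:
    """Call policy_step() at a fixed rate until POLICY_TIME elapses.

    Returns the number of control steps executed.
    """
    steps = 0
    start = time.monotonic()
    while time.monotonic() - start < POLICY_TIME:
        policy_step()  # one control step (observe, infer action, send command)
        steps += 1
        time.sleep(dt)
    return steps
```

Under this structure, raising `POLICY_TIME` simply extends the wall-clock budget of the control loop, which matches the README's advice for keeping the robot in the policy longer.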