Created documentation
CihatAltiparmak committed Jul 1, 2024
1 parent a682568 commit d5b2d39
Showing 4 changed files with 233 additions and 0 deletions.
25 changes: 25 additions & 0 deletions docs/how_to_install.md
@@ -0,0 +1,25 @@
## How To Install


First, set up the dependencies of the moveit_middleware_benchmark repository. Testing with the rolling version of ROS is recommended.
```sh
# install colcon extensions
source /opt/ros/rolling/setup.bash
sudo apt install python3-colcon-common-extensions
sudo apt install python3-colcon-mixin
colcon mixin add default https://raw.githubusercontent.com/colcon/colcon-mixin-repository/master/index.yaml
colcon mixin update default

# create workspace
mkdir -p ws/src
cd ws/src

# clone this repository
git clone [email protected]:CihatAltiparmak/moveit_middleware_benchmark.git -b fix/refactor_codebase
vcs import --recursive < moveit_middleware_benchmark/moveit_middleware_benchmark.repos

# build the workspace
cd ..
sudo apt update && rosdep install -r --from-paths . --ignore-src --rosdistro $ROS_DISTRO -y
colcon build --mixin release
```
9 changes: 9 additions & 0 deletions docs/how_to_run.md
@@ -0,0 +1,9 @@
## Scenarios

### [Perception Pipeline Benchmark](scenarios/perception_pipeline_benchmark.md)

This benchmark measures the time elapsed while the determined path is sent for the robot to follow. It reports `elapsed_time`, `success_number`, and `failure_number`: `elapsed_time` measures how long the pipeline takes, `success_number` counts successful plannings, and `failure_number` counts failed plannings.

First, the `node` and the `move_group_interface` are created in `SetUp` before each benchmark. The `poses` inside the `nav_msgs/msg/Path` message are sent one by one to plan a trajectory for the robot. If planning fails, only `failure_number` is increased. If planning succeeds, the trajectory planned by `move_group_server` is sent via the `move_group_interface` to start executing the planned trajectory, and then `success_number` is increased.

For instance, suppose the selected test case includes 20 goal poses. These 20 goals are sent one by one to `move_group_server`. If 5 of the 20 goal poses fail, `success_number` equals 15 and `failure_number` equals 5. `success_number` and `failure_number` are important for observing the middlewares' behaviour.
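
The per-pose logic can be sketched roughly as follows. This is a minimal, hand-written C++ sketch assuming MoveIt 2's `MoveGroupInterface` API; the helper name `run_test_case` and the `panda_arm` planning group are illustrative assumptions, not names taken from the package.

```cpp
#include <moveit/move_group_interface/move_group_interface.h>
#include <nav_msgs/msg/path.hpp>
#include <rclcpp/rclcpp.hpp>

// Illustrative sketch of the counting logic: plan to each pose of the
// test case, execute on success, and update the two counters.
void run_test_case(const rclcpp::Node::SharedPtr& node, const nav_msgs::msg::Path& test_case,
                   int& success_number, int& failure_number)
{
  // "panda_arm" is an assumed planning group; the real benchmark may use another.
  moveit::planning_interface::MoveGroupInterface move_group_interface(node, "panda_arm");

  for (const auto& pose_stamped : test_case.poses)
  {
    move_group_interface.setPoseTarget(pose_stamped.pose);

    moveit::planning_interface::MoveGroupInterface::Plan plan;
    if (static_cast<bool>(move_group_interface.plan(plan)))
    {
      // Send the planned trajectory for execution, then count the success.
      move_group_interface.execute(plan);
      ++success_number;
    }
    else
    {
      ++failure_number;
    }
  }
}
```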
6 changes: 6 additions & 0 deletions docs/index.md
@@ -0,0 +1,6 @@
# MoveIt Middleware Benchmark

This package measures middleware effects on various scenarios. It currently contains scenarios such as the perception pipeline benchmark.

* [How To Install](./how_to_install.md)
* [How To Run](./how_to_run.md)
193 changes: 193 additions & 0 deletions docs/scenarios/perception_pipeline_benchmark.md
@@ -0,0 +1,193 @@
## How To Run the Perception Benchmark

First, source your ROS version. Testing with the rolling version of ROS is recommended.

For instance, to test with rmw_zenoh, start the zenoh router using the following commands:
```sh
# go to your workspace
cd ws
# Be sure that the ros2 daemon is killed.
pkill -9 -f ros && ros2 daemon stop
# Then start the zenoh router
ros2 run rmw_zenoh_cpp rmw_zenohd
```

Next, in a separate terminal, select `rmw_zenoh_cpp` as your `RMW_IMPLEMENTATION` and run the perception benchmark launch file.
```sh
# go to your workspace
cd ws
source /opt/ros/rolling/setup.bash
source install/setup.bash
export RMW_IMPLEMENTATION=rmw_zenoh_cpp # select your rmw_implementation to benchmark
ros2 launch moveit_middleware_benchmark moveit_middleware_benchmark_demo.launch.py
```

After the benchmark execution finishes, a JSON file named `middleware_benchmark_results.json` is created with the benchmark results. You can inspect the results in more detail inside this JSON file.
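
Assuming the results are written with Google Benchmark's JSON reporter, the file has roughly the following shape; the benchmark name, date, and numbers below are made up for illustration, and the exact fields may differ:

```json
{
  "context": {
    "date": "2024-07-01T12:00:00+03:00",
    "library_build_type": "release"
  },
  "benchmarks": [
    {
      "name": "PerceptionPipelineBenchmark/test_case_0",
      "iterations": 1,
      "real_time": 1.53e+10,
      "cpu_time": 4.10e+08,
      "time_unit": "ns"
    }
  ]
}
```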

## How to benchmark the perception pipeline

The main idea here is to send some goal poses in `nav_msgs/msg/Path` format to the `move_group_server` via the `move_group_interface` and to measure the elapsed time.

## How to create test cases

You can add your test cases to the `scenario_perception_pipeline_test_cases.yaml` file. For the perception pipeline scenario benchmark, each test case must be given as a `nav_msgs/msg/Path`. The `poses` section corresponds to the `poses` field of the `nav_msgs/msg/Path` type. The following is an example test case file; the `nav_msgs/msg/Path` equivalent of the first test case is sketched after it.

```yaml
test_cases:
  - poses:
      - pose:
          position:
            x: 0.5
            y: 0.5
            z: 0.5
          orientation:
            w: 1.0
      - pose:
          position:
            x: 0.5
            y: -0.5
            z: 0.7
          orientation:
            w: 1.0
  - poses:
      - pose:
          position:
            x: 0.5
            y: 0.5
            z: 0.5
          orientation:
            w: 1.0
      - pose:
          position:
            x: 0.5
            y: -0.5
            z: 0.7
          orientation:
            w: 1.0
      - pose:
          position:
            x: 0.5
            y: 0.5
            z: 0.5
          orientation:
            w: 1.0
      - pose:
          position:
            x: 0.5
            y: -0.5
            z: 0.7
          orientation:
            w: 1.0
      - pose:
          position:
            x: 0.5
            y: 0.5
            z: 0.5
          orientation:
            w: 1.0
      - pose:
          position:
            x: 0.5
            y: -0.5
            z: 0.7
          orientation:
            w: 1.0
      - pose:
          position:
            x: 0.5
            y: 0.5
            z: 0.5
          orientation:
            w: 1.0
      - pose:
          position:
            x: 0.5
            y: -0.5
            z: 0.7
          orientation:
            w: 1.0
      - pose:
          position:
            x: 0.5
            y: 0.5
            z: 0.5
          orientation:
            w: 1.0
      - pose:
          position:
            x: 0.5
            y: -0.5
            z: 0.7
          orientation:
            w: 1.0
      - pose:
          position:
            x: 0.5
            y: 0.5
            z: 0.5
          orientation:
            w: 1.0
      - pose:
          position:
            x: 0.5
            y: -0.5
            z: 0.7
          orientation:
            w: 1.0
      - pose:
          position:
            x: 0.5
            y: 0.5
            z: 0.5
          orientation:
            w: 1.0
      - pose:
          position:
            x: 0.5
            y: -0.5
            z: 0.7
          orientation:
            w: 1.0
      - pose:
          position:
            x: 0.5
            y: 0.5
            z: 0.5
          orientation:
            w: 1.0
      - pose:
          position:
            x: 0.5
            y: -0.5
            z: 0.7
          orientation:
            w: 1.0
      - pose:
          position:
            x: 0.5
            y: 0.5
            z: 0.5
          orientation:
            w: 1.0
      - pose:
          position:
            x: 0.5
            y: -0.5
            z: 0.7
          orientation:
            w: 1.0
      - pose:
          position:
            x: 0.5
            y: 0.5
            z: 0.5
          orientation:
            w: 1.0
      - pose:
          position:
            x: 0.5
            y: -0.5
            z: 0.7
          orientation:
            w: 1.0
```
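
To make the mapping concrete, here is a rough C++ sketch (hand-written for illustration, not code from the package) that builds the `nav_msgs/msg/Path` equivalent of the first test case above:

```cpp
#include <geometry_msgs/msg/pose_stamped.hpp>
#include <nav_msgs/msg/path.hpp>

// Build the nav_msgs/msg/Path message that corresponds to the first
// test case in scenario_perception_pipeline_test_cases.yaml above.
nav_msgs::msg::Path make_first_test_case()
{
  nav_msgs::msg::Path path;

  geometry_msgs::msg::PoseStamped pose;
  pose.pose.position.x = 0.5;
  pose.pose.position.y = 0.5;
  pose.pose.position.z = 0.5;
  pose.pose.orientation.w = 1.0;
  path.poses.push_back(pose);

  // Second goal pose: only y and z differ from the first one.
  pose.pose.position.y = -0.5;
  pose.pose.position.z = 0.7;
  path.poses.push_back(pose);

  return path;
}
```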
