diff --git a/docs/how_to_run.md b/docs/how_to_run.md index db915ad..3767bf8 100644 --- a/docs/how_to_run.md +++ b/docs/how_to_run.md @@ -1,3 +1,82 @@ +## Run All Benchmarks
+Before running the following benchmarks, please read and apply the instructions outlined in the [How To Install](how_to_install.md) section to set up the necessary requirements.
+
+To run all benchmarks, select your middleware implementation and run the `run_all_benchmarks.sh` bash script. The script first runs the configuration script you chose; the example below uses one of the default configuration scripts, which applies the necessary middleware settings. This mechanism lets users freely apply custom configurations of their own.
+
+```shell
+# go to the workspace in which this repository is built
+cd ws
+source /opt/ros/rolling/setup.sh
+source install/setup.sh
+# conduct all benchmarks
+sh src/moveit_middleware_benchmark/scripts/run_all_benchmarks.sh -i ./src/moveit_middleware_benchmark/middleware_configurations/rmw_cyclonedds/config.sh -d /benchmark_results
+```
+
+Let's walk through the operations performed by `run_all_benchmarks.sh`.
+
+#### selection of initial script
+```
+-i ./src/moveit_middleware_benchmark/middleware_configurations/rmw_cyclonedds/config.sh
+```
+
+This argument selects the initial script to be run. Initial scripts configure middleware-specific settings, for example to improve performance. You can use an initial script to configure TCP settings for `rmw_zenoh_cpp` as shown below.
+
+```shell
+echo "The configuration for rmw_zenoh_cpp has started!"
+export RMW_IMPLEMENTATION=rmw_zenoh_cpp
+sudo sysctl -w "net.ipv4.tcp_rmem=4096 4096 4096"
+sudo sysctl -w "net.ipv4.tcp_wmem=4096 4096 4096"
+sudo sysctl -w "net.ipv4.tcp_mem=4096 4096 4096"
+echo "The configuration for rmw_zenoh_cpp has finished!"
+```
+
+#### directory selection to save benchmark results
+```
+-d /benchmark_results
+```
+
+This argument specifies where the benchmark results are saved. For `scenario_perception_pipeline` and `scenario_basic_service_client`, the results are written in the directory tree shown below. Note that the `run_all_benchmarks.sh` script saves the benchmark results in JSON format.
+
+```
+benchmark_results/
+├── scenario_basic_service_client
+│   ├── rmw_cyclonedds_cpp.json
+│   ├── rmw_fastrtps_cpp.json
+│   └── rmw_zenoh_cpp.json
+└── scenario_perception_pipeline
+    ├── rmw_cyclonedds_cpp.json
+    ├── rmw_fastrtps_cpp.json
+    └── rmw_zenoh_cpp.json
+```
+
+## Plot Visualization of Benchmark Results
+
+After running `run_all_benchmarks.sh`, you can also visualize the benchmark results as box plots. Suppose you have benchmark results stored in the directory tree below, with the `benchmark_results` directory located in the `ws` workspace in which this repository is built.
+
+
+```
+benchmark_results/
+├── scenario_basic_service_client
+│   ├── rmw_cyclonedds_cpp.json
+│   ├── rmw_fastrtps_cpp.json
+│   └── rmw_zenoh_cpp.json
+└── scenario_perception_pipeline
+    ├── rmw_cyclonedds_cpp.json
+    ├── rmw_fastrtps_cpp.json
+    └── rmw_zenoh_cpp.json
+```
+
+Pass the directory of benchmark results as an argument to visualize them as plots.
+```shell
+cd ws
+python3 src/moveit_middleware_benchmark/scripts/box_plot_visualizer.py benchmark_results
+
+```
+
+**NOTE THAT THE PICTURE BELOW DOESN'T SHOW REAL RESULTS. 
IT IS ONLY AN ILLUSTRATION.**
+
+![](./pictures/box_plot_example.png)
+
 ## Scenarios
 
 ### [Perception Pipeline Benchmark](scenarios/perception_pipeline_benchmark.md)
diff --git a/docs/pictures/box_plot_example.png b/docs/pictures/box_plot_example.png new file mode 100644 index 0000000..eaa3808 Binary files /dev/null and b/docs/pictures/box_plot_example.png differ
diff --git a/middleware_configurations/rmw_cyclonedds/config.sh b/middleware_configurations/rmw_cyclonedds/config.sh new file mode 100644 index 0000000..f9eba3b --- /dev/null +++ b/middleware_configurations/rmw_cyclonedds/config.sh @@ -0,0 +1,3 @@
+echo "The configuration for rmw_cyclonedds_cpp has started!"
+export RMW_IMPLEMENTATION=rmw_cyclonedds_cpp
+echo "The configuration for rmw_cyclonedds_cpp has finished!"
diff --git a/middleware_configurations/rmw_fastrtps/config.sh b/middleware_configurations/rmw_fastrtps/config.sh new file mode 100644 index 0000000..5cfb878 --- /dev/null +++ b/middleware_configurations/rmw_fastrtps/config.sh @@ -0,0 +1,3 @@
+echo "The configuration for rmw_fastrtps_cpp has started!"
+export RMW_IMPLEMENTATION=rmw_fastrtps_cpp
+echo "The configuration for rmw_fastrtps_cpp has finished!"
diff --git a/middleware_configurations/rmw_zenoh/config.sh b/middleware_configurations/rmw_zenoh/config.sh new file mode 100644 index 0000000..ad969e5 --- /dev/null +++ b/middleware_configurations/rmw_zenoh/config.sh @@ -0,0 +1,6 @@
+echo "The configuration for rmw_zenoh_cpp has started!"
+export RMW_IMPLEMENTATION=rmw_zenoh_cpp
+sudo sysctl -w "net.ipv4.tcp_rmem=4096 4096 4096"
+sudo sysctl -w "net.ipv4.tcp_wmem=4096 4096 4096"
+sudo sysctl -w "net.ipv4.tcp_mem=4096 4096 4096"
+echo "The configuration for rmw_zenoh_cpp has finished!"
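Each results file written by the launch invocations above follows the Google Benchmark JSON layout (they pass `--benchmark_out_format=json`), where per-iteration entries carry `"run_type": "iteration"` and a `real_time` field, which is exactly what `box_plot_visualizer.py` reads. Those timings can also be inspected from the shell; the sketch below assumes `jq` is installed, and the sample file and benchmark names are hypothetical stand-ins for a real result:

```shell
# Create a tiny sample that mirrors the Google Benchmark JSON layout
# (hypothetical names; real files live under benchmark_results/<scenario>/)
cat > /tmp/sample_benchmark.json <<'EOF'
{
  "benchmarks": [
    {"name": "BM_scenario/repeats:6", "run_type": "iteration", "real_time": 123.4},
    {"name": "BM_scenario/repeats:6_mean", "run_type": "aggregate", "real_time": 120.0}
  ]
}
EOF

# Keep only per-iteration timings, skipping aggregate rows (mean, median, ...)
jq -c '[.benchmarks[] | select(.run_type == "iteration") | .real_time]' \
  /tmp/sample_benchmark.json   # → [123.4]
```

A real file such as `benchmark_results/scenario_perception_pipeline/rmw_zenoh_cpp.json` can be queried the same way.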
diff --git a/scripts/box_plot_visualizer.py b/scripts/box_plot_visualizer.py new file mode 100644 index 0000000..1eaa980 --- /dev/null +++ b/scripts/box_plot_visualizer.py @@ -0,0 +1,77 @@
+import matplotlib.pyplot as plt
+import json
+import os
+import sys
+
+if len(sys.argv) != 2:
+    sys.exit(f"usage: {sys.argv[0]} <benchmark_results_directory>")
+
+BENCHMARK_RESULTS_DIR = sys.argv[1]
+
+middleware_colors = {
+    "rmw_zenoh_cpp": "orange",
+    "rmw_cyclonedds_cpp": "peachpuff",
+    "rmw_fastrtps_cpp": "tomato",
+}
+
+middleware_list = ["rmw_zenoh_cpp", "rmw_cyclonedds_cpp", "rmw_fastrtps_cpp"]
+
+
+def read_benchmark_json(file_name):
+    with open(file_name) as f:
+        return json.load(f)
+
+
+def get_real_time_list_from_benchmark_json(benchmark_json_data):
+    real_time_list = []
+    for benchmark_info in benchmark_json_data["benchmarks"]:
+        if benchmark_info["run_type"] == "iteration":
+            real_time_list.append(benchmark_info["real_time"])
+    return real_time_list
+
+
+def get_middleware_dataset_for_scenario(scenario_name):
+    middleware_datasets = []
+    for middleware_name in middleware_list:
+        file_name = os.path.join(
+            BENCHMARK_RESULTS_DIR, scenario_name, f"{middleware_name}.json"
+        )
+        benchmark_json_data = read_benchmark_json(file_name)
+        dataset = get_real_time_list_from_benchmark_json(benchmark_json_data)
+        middleware_datasets.append(
+            {
+                "name": middleware_name,
+                "dataset": dataset,
+                "color": middleware_colors[middleware_name],
+            }
+        )
+    return middleware_datasets
+
+
+def plot_dataset_of_scenario(plt, scenario_name, middleware_datasets):
+    labels = []
+    colors = []
+    datasets = []
+
+    for x in middleware_datasets:
+        labels.append(x["name"])
+        colors.append(x["color"])
+        datasets.append(x["dataset"])
+
+    fig, ax = plt.subplots()
+    ax.set_title(scenario_name)
+    ax.set_ylabel("real time (ns)")
+    ax.set_xlabel("middlewares")
+
+    bplot = ax.boxplot(datasets, patch_artist=True, tick_labels=labels)
+
+    # fill with colors
+    for patch, color in zip(bplot["boxes"], colors):
+        patch.set_facecolor(color)
+
+
+for scenario_name in os.listdir(BENCHMARK_RESULTS_DIR):
+    middleware_datasets = get_middleware_dataset_for_scenario(scenario_name)
+    plot_dataset_of_scenario(plt, scenario_name, middleware_datasets)
+
+plt.show()
diff --git a/scripts/run_all_benchmarks.sh b/scripts/run_all_benchmarks.sh new file mode 100644 index 0000000..800d8b6 --- /dev/null +++ b/scripts/run_all_benchmarks.sh @@ -0,0 +1,40 @@
+# Inspired by https://unix.stackexchange.com/questions/31414/how-can-i-pass-a-command-line-argument-into-a-shell-script
+helpFunction()
+{
+   echo ""
+   echo "Usage: $0 -i initial_script -d benchmark_results_directory"
+   echo -e "\t-i the initial script to run once before all benchmarks are started"
+   echo -e "\t-d the directory where the benchmark results are saved"
+   exit 1 # Exit script after printing help
+}
+
+while getopts "i:d:" opt
+do
+   case "$opt" in
+      i ) initial_script="$OPTARG" ;;
+      d ) benchmark_results_directory="$OPTARG" ;;
+      ? ) helpFunction ;; # Print helpFunction in case of a non-existent parameter
+   esac
+done
+
+# Print helpFunction in case parameters are empty
+if [ -z "$initial_script" ] || [ -z "$benchmark_results_directory" ]
+then
+   echo "Some or all of the parameters are empty";
+   helpFunction
+fi
+
+echo "benchmark results directory is $benchmark_results_directory"
+
+echo "Benchmarking is starting!"
+echo "Starting the initial script before the benchmarks run!"
+. "$initial_script"
+echo "Initial script has finished! Now starting to benchmark the middleware with scenarios!"
+
+mkdir -p "${benchmark_results_directory}/scenario_basic_service_client"
+ros2 daemon stop
+ros2 launch moveit_middleware_benchmark scenario_basic_service_client_benchmark.launch.py benchmark_command_args:="--benchmark_out=${benchmark_results_directory}/scenario_basic_service_client/${RMW_IMPLEMENTATION}.json --benchmark_out_format=json --benchmark_repetitions=6"
+
+mkdir -p "${benchmark_results_directory}/scenario_perception_pipeline"
+ros2 daemon stop
+ros2 launch moveit_middleware_benchmark scenario_perception_pipeline_benchmark.launch.py benchmark_command_args:="--benchmark_out=${benchmark_results_directory}/scenario_perception_pipeline/${RMW_IMPLEMENTATION}.json --benchmark_out_format=json --benchmark_repetitions=6"
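Once `run_all_benchmarks.sh` has been run for each middleware configuration, every scenario directory should contain one JSON file per RMW implementation, matching the layout documented in `how_to_run.md`. The sketch below checks for missing result files before plotting; the temporary sample layout, and the fact that only one of the six files is created, are purely illustrative:

```shell
# Sketch: verify that every middleware produced a JSON result for each
# scenario before running box_plot_visualizer.py. A temporary directory
# stands in for the real benchmark_results directory (assumed layout).
results_dir=$(mktemp -d)
mkdir -p "$results_dir/scenario_basic_service_client" "$results_dir/scenario_perception_pipeline"
# In a real run, run_all_benchmarks.sh fills in all six files; here we
# create just one to demonstrate the check.
touch "$results_dir/scenario_basic_service_client/rmw_cyclonedds_cpp.json"

missing=0
for scenario in scenario_basic_service_client scenario_perception_pipeline; do
  for rmw in rmw_cyclonedds_cpp rmw_fastrtps_cpp rmw_zenoh_cpp; do
    f="$results_dir/$scenario/$rmw.json"
    if [ ! -f "$f" ]; then
      echo "missing: $scenario/$rmw.json"
      missing=$((missing + 1))
    fi
  done
done
echo "$missing result file(s) missing"
```

Running the same loop against the real `benchmark_results` directory before `box_plot_visualizer.py` avoids a `FileNotFoundError` halfway through plotting.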