Code for running MaxEnt IRL for offroad navigation
Dependencies:
- `rosbag_to_dataset`: Note that there is a separate IRL preprocessing branch for now. This dependency is only needed when generating new training data.
- `torch_mpc`: Currently a private repo; ask @striest for access.
All scripts have relatively helpful descriptions of their run args (`python3 <script> -h`).
- `scripts/run_experiment.py`: Main driver script. You need to provide a `--setup_fp` arg that points to a config file (a good example is `configs/training/pointpillars_debug.yaml`).
- `scripts/generate_metrics.py`: After running an experiment, evaluate/visualize the network with this script.
- `scripts/preprocess_dataset_no_pointpillars.py`: Takes a `rosbag_to_dataset` dataset and processes it for IRL.
- `scripts/ros/gridmap_to_cvar_costmap.py`: ROS node that runs the trained CVaR IRL map in ROS.
- `src/maxent_irl_costmaps/algos/mppi_irl_speedmaps.py`: Main IRL training code.
- `src/maxent_irl_costmaps/experiment_management/parse_configs.py`: Registry mapping strings to files for setting up IRL experiments. New network definitions should be added here.
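The registry pattern above can be sketched as follows. This is a hypothetical illustration of a string-to-class registry, not the actual `parse_configs.py` API (all names here are invented for the example):

```python
# Hypothetical sketch of a string -> network-class registry, similar in spirit
# to what parse_configs.py does. Names below are illustrative only.
network_registry = {}

def register_network(name):
    """Decorator that maps a config string to a network class."""
    def wrapper(cls):
        network_registry[name] = cls
        return cls
    return wrapper

@register_network("resnet_costmap")
class ResnetCostmapNet:
    """Stand-in for a real network definition."""
    def __init__(self, **params):
        self.params = params

def setup_network(config):
    """Instantiate the network named in an experiment config dict."""
    cls = network_registry[config["network"]["type"]]
    return cls(**config["network"].get("params", {}))

net = setup_network({"network": {"type": "resnet_costmap",
                                 "params": {"hidden_dim": 64}}})
```

With this shape, adding a new network only requires defining the class and registering it under a new string, which the experiment config can then reference.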
Run via:

```
cd scripts
python3 run_experiment.py --setup_fp <your config here>
```
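The config file passed to `--setup_fp` is a YAML file. The sketch below is purely hypothetical: every key in it is invented for illustration, and the real schema is whatever `parse_configs.py` expects (see `configs/training/pointpillars_debug.yaml` for an actual example):

```yaml
# Hypothetical experiment config -- all keys are illustrative, not the real schema.
experiment:
  name: irl_debug
  epochs: 50
network:
  type: resnet_costmap     # looked up in the parse_configs registry
dataset:
  train_fp: /path/to/preprocessed/train
```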
This code is designed to train a network via inverse RL for a ROS-based autonomy stack similar to TartanDrive. At a high level, its input is rosbags containing the following:
- GridMaps of local terrain
- Odometry of the robot state
- FPV images (visualization only for now)
- Steering angle (as a stamped Float32 in degrees; robot-specific)

Note that the grid maps and odometry must be in the same frame.
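To make the frame requirement concrete, here is a hypothetical sketch of a per-timestep sample and a consistency check. The field names are invented for illustration; the actual keys come from `rosbag_to_dataset`:

```python
# Hypothetical per-timestep sample layout (field names illustrative only;
# the real keys are produced by rosbag_to_dataset).
import numpy as np

sample = {
    "gridmap": {"data": np.zeros((3, 64, 64)), "frame": "odom"},  # terrain layers
    "odom": {"pose": np.zeros(7), "frame": "odom"},               # x,y,z + quaternion
    "image": np.zeros((3, 128, 128)),                             # FPV image (viz only)
    "steer": np.array([0.0]),                                     # steering angle (deg)
}

def frames_consistent(sample):
    """The grid map and odometry must be expressed in the same frame."""
    return sample["gridmap"]["frame"] == sample["odom"]["frame"]
```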
The output is a directory of trained networks that can be run on the robot with the ROS script.