- Clone the repository:
  `git clone https://github.com/Rak-r/ROS1-ROS2-contents.git`
- Navigate inside the repository directory:
  `cd <repo_directory>`
The repo consists of various ROS2 nodes which can be used for quite specific scenarios and have been tested on a real robot platform. I have tried to keep track of the different scenarios encountered while developing a fully custom autonomous vehicle robot based on ROS2.
This repository contains information about the concepts of ROS1 and ROS2. It is also suitable for porting existing ROS1 projects to ROS2, as it documents most of the possible errors and the steps for a beginner to ROS/ROS2.
Launch files may be in XML or Python format, but ROS2 has renamed some arguments and attributes, e.g. `type` -> `exec` and `ns` -> `namespace`. Also, the Navigation stack has been upgraded with more functionality, and some terms and packages have changed; however, the basic concepts are the same as in ROS1. A minimal Python launch sketch is given below.
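For comparison, here is a minimal ROS2 Python launch file sketch. The package and executable names (`my_robot_bringup`, `odom_publisher`) are hypothetical; the `executable` and `namespace` arguments correspond to the renamed XML attributes mentioned above.

```python
# Minimal ROS2 Python launch file sketch; package/executable names are placeholders.
from launch import LaunchDescription
from launch_ros.actions import Node

def generate_launch_description():
    return LaunchDescription([
        Node(
            package='my_robot_bringup',    # hypothetical package name
            executable='odom_publisher',   # ROS1 XML used type=, ROS2 XML uses exec=
            namespace='podcar',            # ROS1 XML used ns=, ROS2 XML uses namespace=
            output='screen',
        ),
    ])
```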
- The Gmapping package for the mapping task in ROS1, along with other SLAM packages like Karto and Cartographer, is superseded by `slam_toolbox`, which Steve Macenski introduced in ROS2; it offers more structured functionality. Some parameters have been renamed, but it does the same task.
- The Adaptive Monte Carlo Localization (AMCL) package of the nav stack has been adopted largely unchanged into Nav2.
- The `map_server` package has been renamed to `nav2_map_server`: this package implements a server to handle map-loading requests and also provides a map topic (`map_server/map`). There is also a map saver server running in the background which saves the generated map on service request.
- `nav2_planner` replaces the global planner in ROS2, and `nav2_controller` replaces the local planner.
- Planner servers are used to compute the global plan, i.e. the shortest path.
- Controller servers are used to follow the global plan generated by the planner. These were called local planners in ROS1.
- For instance, suppose the robot gets stuck at some point due to an obstacle in its way. The behavior servers make it possible for the robot to act in these kinds of scenarios through actions like backing up or spinning, which bring the robot into an obstacle-free zone.
- Sometimes we cannot fully rely on the path computed by the planner, so we use the smoother server, which helps refine the planned path.
- Navigation 2 features `nav2_waypoint_follower`, which has a plugin interface and helps complete a specific task/operation at each waypoint, for example bringing a cup.
- Firstly, to localize the robot, ground-truth odometry data is required, which can be obtained from various sensors such as wheel encoders, IMUs, GPS/GNSS and visual odometry; LiDAR-based odometry approaches are also gaining a lot of popularity.
- After getting the data, transformations are required to make the robot's position and orientation available for navigation and other applications, so the `odom --> base_link` transform needs to be provided. Custom ROS2 nodes can be created to publish the odometry message and the transforms; a sketch is given after this list.
- Following the above, the second task is to generate a map of the environment the robot is in. Various mapping libraries or SLAM algorithms are used to generate the map. ROS2 provides support for `slam_toolbox`, an upgraded version of the Karto package, which works well with 2D LiDAR data.
- It subscribes to `LaserScan` messages and publishes the required `map --> odom` frame transformation. This can be done with other methods as well, namely Cartographer, RTAB-Map and ORB-SLAM (in the case of RGB-D sensors).
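A minimal sketch of such a custom node, assuming the pose estimate (x, y, yaw) already comes from your odometry source; it publishes both the `Odometry` message and the `odom --> base_link` transform:

```python
# Sketch: publish Odometry and the odom -> base_link transform from a pose estimate.
import math
import rclpy
from rclpy.node import Node
from nav_msgs.msg import Odometry
from geometry_msgs.msg import TransformStamped
from tf2_ros import TransformBroadcaster

class OdomPublisher(Node):
    def __init__(self):
        super().__init__('odom_publisher')
        self.odom_pub = self.create_publisher(Odometry, 'odom', 10)
        self.tf_broadcaster = TransformBroadcaster(self)
        self.x = self.y = self.yaw = 0.0   # replace with real estimates
        self.create_timer(0.05, self.publish)  # 20 Hz

    def publish(self):
        now = self.get_clock().now().to_msg()
        t = TransformStamped()
        t.header.stamp = now
        t.header.frame_id = 'odom'
        t.child_frame_id = 'base_link'
        t.transform.translation.x = self.x
        t.transform.translation.y = self.y
        # yaw-only rotation as a quaternion
        t.transform.rotation.z = math.sin(self.yaw / 2.0)
        t.transform.rotation.w = math.cos(self.yaw / 2.0)
        self.tf_broadcaster.sendTransform(t)

        odom = Odometry()
        odom.header.stamp = now
        odom.header.frame_id = 'odom'
        odom.child_frame_id = 'base_link'
        odom.pose.pose.position.x = self.x
        odom.pose.pose.position.y = self.y
        odom.pose.pose.orientation = t.transform.rotation
        self.odom_pub.publish(odom)

def main():
    rclpy.init()
    rclpy.spin(OdomPublisher())
    rclpy.shutdown()

if __name__ == '__main__':
    main()
```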
Costmaps are 2D occupancy grid maps which represent the environment for the robot. Each cell in the grid stores a value between 0 and 254: a cell with value 0 means no occupancy, while a cell with value 254 is lethally occupied. Costmaps have different layers which hold different information about the environment.
- Static layer: Represents the map portion of the costmap.
- Obstacle layer: Represents the objects detected by the sensors that publish either or both `LaserScan` and `PointCloud` messages.
- Voxel layer: Does the same thing as the obstacle layer but handles 3D data.
- Inflation layer: Represents the added cost values around the obstacles.
In order to move the robot for a specific task or operation, it is important to have full, detailed data about every joint of the robot, like the angle of a revolute joint, the displacement of a linear actuator, and velocity. The joint state publisher keeps track of this data and sends it to the robot state publisher.
The robot state publisher receives the data from the joint state publisher and outputs the position and orientation of each coordinate frame; this data is published to the TF2 library/package of ROS2, which keeps a track record of all the coordinate frames over time.
TF2 takes the input from the robot state publisher and is responsible for keeping track of the pose (position and orientation) of all coordinate frames over time.
The data collected from different sensors like LiDARs, cameras, IMUs, etc., needs to be fused in order to use that information robustly in different scenarios. To fuse the multi-sensor data, motion models are used, and one of the most common methods is Kalman filtering. Differential-drive and Ackermann-steered robots mostly use the Extended Kalman Filter (EKF); the particle-filter-based Monte Carlo method is also used. A minimal Kalman filter sketch follows.
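To make the filtering idea concrete, here is a minimal linear Kalman filter predict/update sketch. The EKF used in practice linearizes a nonlinear motion model, but the structure is the same; all matrix values here are illustrative.

```python
# Minimal linear Kalman filter: fuse noisy 1D position readings into [position, velocity].
import numpy as np

x = np.array([0.0, 0.0])          # state: [position, velocity]
P = np.eye(2)                     # state covariance
F = np.array([[1.0, 0.1],         # state transition (dt = 0.1 s)
              [0.0, 1.0]])
Q = 0.01 * np.eye(2)              # process noise
H = np.array([[1.0, 0.0]])        # we only measure position
R = np.array([[0.5]])             # measurement noise

def kf_step(x, P, z):
    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Update with measurement z
    y = z - H @ x                 # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P
    return x, P

for z in [0.11, 0.23, 0.29, 0.42]:   # fake position readings
    x, P = kf_step(x, P, np.array([z]))
    print(x)
```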
If a robot uses a 2D laser sensor, it has a specific message type, `sensor_msgs/LaserScan`, which represents the data from the lidar sensor.
* Each LaserScan message contains a single sweep of the environment.
* The data is an array of floats representing ranges in metres.
* The header tells which TF frame the laser sensor is attached to (see the subscriber sketch after this list).
- When the robot has a 3D lidar sensor, the data is best represented by point clouds.
- A point cloud is just a whole lot of points in 3D space.
- ROS has a specific message type for handling the point cloud data of a 3D lidar: `sensor_msgs/PointCloud`.
- When using RGB-D or stereo cameras for sensor input, a newer message type is provided by the ROS community: `sensor_msgs/msg/PointCloud2`.
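A minimal sketch of a `LaserScan` subscriber, assuming the scan is published on a `scan` topic; it prints the range straight ahead and the frame the scan is attached to:

```python
# Sketch: subscribe to LaserScan and print the frame and the range directly ahead.
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import LaserScan

class ScanListener(Node):
    def __init__(self):
        super().__init__('scan_listener')
        self.create_subscription(LaserScan, 'scan', self.on_scan, 10)

    def on_scan(self, msg):
        mid = len(msg.ranges) // 2   # beam roughly in the middle of the sweep
        self.get_logger().info(
            f"frame={msg.header.frame_id} range_ahead={msg.ranges[mid]:.2f} m")

def main():
    rclpy.init()
    rclpy.spin(ScanListener())
    rclpy.shutdown()

if __name__ == '__main__':
    main()
```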
SLAM Toolbox (https://github.com/SteveMacenski/slam_toolbox)
ROS1 featured different SLAM approaches for building the map and performing localization: Gmapping, Cartographer and Karto SLAM. In ROS2, Gmapping has been replaced with slam_toolbox, which works in the following way:
- ROS node: SLAM Toolbox runs in synchronous mode and creates a ROS node which subscribes to the laser scan and odometry topics, and publishes the map-to-odom transform and a map.
- Getting LiDAR and odometry data: a callback on the laser topic generates a pose, and the laser scan is attached to it. These PosedScan objects form a queue, which is processed by the algorithm (Karto).
- Pose graph: the queue of PosedScan objects is used to form a pose graph, which is utilised to compute the robot pose and find loop closures.
- Mapping: the laser scans associated with each pose in the pose graph are used to construct and publish a map.
- Loop closure: the ability of a SLAM algorithm to recognize previously visited places and correct the drift accumulated during sensor movement.
The basic idea behind slam_toolbox (or Gmapping in ROS1) is that we can create a map of the environment which can then be saved and used for navigation of the robot.
- Firstly, look at the configuration file in the official `slam_toolbox` repository to get some idea of the terms.
- Then create the parameters/config file in your robot workspace's `src/robot_slam` package.
- Initially, copy the config values for testing. Then create a launch file, which can be adapted from the reference launch files in the repository; a sketch is given after this list.
- Build the package: `colcon build --symlink-install`, then `source install/setup.bash` (don't forget to source the underlay as well, i.e. `source /opt/ros/<ros-distro>/setup.bash`).
- Use the `ros2 launch` command to launch slam_toolbox and view the map in RViz.
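A minimal launch sketch for slam_toolbox, assuming a hypothetical `robot_slam` package that installs the copied config file; the executable name matches slam_toolbox's asynchronous online node:

```python
# Sketch: launch slam_toolbox with a params file from a hypothetical robot_slam package.
import os
from ament_index_python.packages import get_package_share_directory
from launch import LaunchDescription
from launch_ros.actions import Node

def generate_launch_description():
    params = os.path.join(
        get_package_share_directory('robot_slam'),   # hypothetical package
        'config', 'mapper_params_online_async.yaml')
    return LaunchDescription([
        Node(
            package='slam_toolbox',
            executable='async_slam_toolbox_node',
            parameters=[params],
            output='screen',
        ),
    ])
```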
Navigation 2 (https://github.com/ros-planning/navigation2)
NAV2 planners are used to compute the shortest path through the environment for the robot.
Different algorithms used:
- NavFn planner: uses either Dijkstra or A*.
- Smac 2D planner: implements a 2D A* algorithm using a 4- or 8-connected neighbourhood, with a smoother query.
The issue with these planners is that they may not produce feasible paths for robots that are Ackermann-steered or legged.
- Smac Hybrid-A* planner: provides support for Ackermann and legged robots, supporting the Dubins and Reeds-Shepp motion models.
Different algorithms used (controller server):
- The DWB controller, based on the Dynamic Window Approach algorithm, has configurable plugins to compute the commands for the robot.
- The other controller server plugin is TEB (Timed Elastic Band), which optimizes the robot's trajectory based on its execution time, distance from obstacles, and feasibility w.r.t. the robot's kinematic constraints. It can be used for Ackermann and legged robots.
Coming to the structure of the URDF file for the robot: the xacro tool can be used if the model is too complex. Below is a list of common things to keep in mind while developing the URDF. URDF files are used to describe the robot in the ROS environment, and SDF files are used to describe the robot in the Gazebo simulation.
- Visual: Describes the robot's visual properties (how it will look).
- Collision: Physical properties that govern the robot's collisions with other objects.
- Inertial: Describes some physics properties of the robot (mass, inertia along each axis, origin, etc.).
- Joints: Used to create the joint between two links. Includes a name tag; a type (revolute for wheels, continuous, etc.); and the parent and child link names.
- Check the XML parsing and tags: correct spacing and tag closing.
- In the URDF file, define a base_footprint for ease, and a joint fixing base_footprint to base_link/chassis in the case of car/4WD robots. Carefully define the link labels, providing the origin, collision and visual tags.
- Joint labels should be defined to show the connection between two links (parent and child).
- Failed to find the root link: 2 root links found.
Set a parent link for one of those two links according to your robot structure.
- RViz2 not showing the mesh (.stl/.dae) file for your robot model.
- Make sure you added the meshes (or their directory of STL and .dae files) to the CMakeLists.txt of the package you are running.
- Secondly, try using `file://$(find package_name)` instead of `file://package` in the mesh file path in your URDF.
- Thirdly, try replacing the path with the absolute path of the STL/DAE files.
Note: RViz2 axis colours and designations: x = red, y = green, z = blue. If the robot model appears in the RViz2 window with cluttered joints, try manipulating the origin xyz and rpy values of those particular joints one by one, and visualize the changes by running the launch file again each time.
- To check that the URDF is correctly built, try this command in the terminal: `check_urdf <name of the urdf file>`
Gazebo converts the URDF to SDF format itself when it is passed through the launch files. Also, if using xacro, the complete URDF is automatically parsed for Gazebo (covered later in this repo). A lot of errors can occur in Gazebo when using launch files to view the ROS URDF model; I have tried to list the ones I faced, with some solutions, below:
Solution: you can ignore the warning, or just use different names for each link and joint.
2. Cannot have two joints with the same name.
A child link can have only one parent, but a parent link can have any number of child links.
Check in the URDF or SDF whether a child link has two or more parent links, and correct it.
3. Correcting the above error will automatically resolve this one.
4. In ROS2, the joint state publisher Gazebo plugin takes joints as `<joint_name>enter_the_name_of_the_joint</joint_name>`.
5. Gazebo server already in use: run `killall gazebo` or `killall gzserver` and restart the terminal, or `sudo pkill -9 gzserver`. Use `ps` for more help.
After successfully building the URDF and viewing it in RViz and Gazebo, there are different ways to control the robot.
- The first method to move the robot in simulation is to use the Gazebo plugins which are already available to users (diff drive, tricycle drive and others can be found at https://github.com/ros-simulation/gazebo_ros_pkgs).
- The second method is newly introduced in ROS2: the ros2_control package. This involves a hardware interface, a controller manager and a controller interface. The controller manager acts as a communication medium between the hardware interface and the controller interface.
- Third, there are robots with different kinematic models, namely the Ackermann-steered robot in our case, and there is no Gazebo plugin support for this right now. There are some sources which use Ackermann-steered robots, however these are based on ROS1. Therefore, to handle this issue, we are exploring different ways to implement and achieve this.
- As per the design of the `OpenPodCar_1`, consider a track rod connected to the base link at the front, to which the right and left pivots are connected; the front right and front left wheels are connected to the pivots.
- The plan is to apply force to the rod so that it can move either right or left.
Ackermann steering demands a velocity message in x for longitudinal movement (forward or reverse) and an angular command in z for the steering-angle movement, so writing small ROS2 nodes can help with this. Start with the joystick package of ROS2 and take the commands of the joystick node, which publishes a topic with message type `Joy`. Convert this `Joy` message to a speed command (x) and a wheel-steering command (z) using the ROS2 nodes `joy2speed.py` and `joy2wheelangle.py`; a sketch of the first is given after the next paragraph. After that, provide these converted messages to another node, `cmd_velhandler`, to get the main messages: joy2speed to `/speedcmd_vel` and joy2wheel to `/wheelAnglecmd`.
Then, a simple ros_gazebo plugin can be created which does not do any calculations but just transfers the ROS2 messages coming from the small nodes to Gazebo for simulation and control.
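A minimal sketch of what `joy2speed.py` could look like; the axis index, the scaling factor, and the `std_msgs/Float64` output type are assumptions for illustration:

```python
# Sketch: map one joystick axis to a longitudinal speed command on /speedcmd_vel.
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import Joy
from std_msgs.msg import Float64

class Joy2Speed(Node):
    def __init__(self):
        super().__init__('joy2speed')
        self.pub = self.create_publisher(Float64, 'speedcmd_vel', 10)
        self.create_subscription(Joy, 'joy', self.on_joy, 10)
        self.max_speed = 2.0  # m/s, assumed value to tune for the platform

    def on_joy(self, msg):
        cmd = Float64()
        cmd.data = self.max_speed * msg.axes[1]  # assumed: left stick, forward/back
        self.pub.publish(cmd)

def main():
    rclpy.init()
    rclpy.spin(Joy2Speed())
    rclpy.shutdown()

if __name__ == '__main__':
    main()
```

A `joy2wheelangle.py` node would follow the same pattern with a different axis and output topic.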
The new Gazebo Garden is used for the simulation of OpenPodCar_V2. The new Gazebo features more functionality with an enhanced interface. Our robot behaves as a car-like robot and features Ackermann-steering kinematics; to maintain this behaviour in simulation, the new Gazebo has an Ackermann system plugin which can be configured according to the robot's configuration. The plugin takes standard Twist messages using the fields `linear.x` and `angular.z`. It also outputs odometry information, which might not be the correct odometry for the whole robot; rather, it is the odometry of the steering. To develop the communication between ROS2 and Gazebo, the new package `ros_gz_bridge` provides a message-transport bridge for the most commonly used messages in ROS2.
Complete the installation of ros_gz_bridge from source by selecting the humble branch of the repo (https://github.com/gazebosim/ros_gz/tree/humble).
- First terminal: `gz sim`
- Second terminal: `gz topic -l` (lists all the available topics); then go to the root of the workspace and source it using `source install/setup.bash`.
- Initialize the bridge: `ros2 run ros_gz_bridge parameter_bridge /<topic>@<ROS2 message type>[<Gazebo message type>` (here `@` bridges both directions, `[` bridges Gazebo to ROS2 only, and `]` bridges ROS2 to Gazebo only). For example: `ros2 run ros_gz_bridge parameter_bridge /model/podcar/odometry@nav_msgs/msg/Odometry[ignition.msgs.Odometry`.
- Third terminal: try `ros2 topic pub` and see the bridge working.
Get the odometry messages from Gazebo and send them to ROS2 using the `odometry_publisher` Gazebo plugin.
Visualise the topic info and values in ROS using `ros2 topic echo <topic_name>` and `ros2 topic info <topic_name>`.
Install NAV2 following the official documentation.
- Points to look for: sending the odom message to NAV2.
- Adding slam_toolbox and looking at its parameters.
- For NAV2 we need at minimum these transforms to be present: `map --> odom` and `odom --> base_link`.
- The `map --> odom` transform is provided by AMCL if you are using a pre-built map (suitable mostly for indoor environments); if instead you want to build the map while moving the robot, SLAM techniques are employed, which create the map on the move and publish the `map --> odom` transform. Then launch the navigation file, set the initial pose using 2D Pose Estimate, and once it is set, hit the Nav2 Goal button in RViz.
- How the `odom --> base_link` transform is set up from Gazebo:
- Create the `GZ --> ROS` bridge for transferring the topic `/model/podcar/tf`, which contains the `odom --> base_link` transform, to the ROS2 side, and remap the topic to `/tf`, since ROS publishes two topics related to transforms: `/tf` and `/tf_static`.
- Check that the remapped tf topic is getting the `odom --> base_link` transform.
- Original error: According to the loaded plugin descriptions the class differential with base class type nav2_amcl::MotionModel does not exist. Declared types are nav2_amcl::DifferentialMotionModel nav2_amcl::OmniMotionModel
- Change the parameter definition of `robot_model_type` from `differential` to `nav2_amcl::DifferentialMotionModel`.
- Transforms are not published (`map --> odom` and `odom --> base_link`).
- The robot is jumping to and fro.
- This is because, when launching AMCL and SLAM_TOOLBOX together, both nodes publish to the map topic and the `map --> odom` transform, which can interfere with each other. To solve this, try disabling AMCL in the NAV2 config/params file.
- Even after changing the NAV2 params (under both the global and local costmap, the parameter `footprint_padding` to double type), it still shows the same issue.
- Future error workarounds using these GitHub issues:
- Changing the RMW to Cyclone DDS. Link: ros-navigation/navigation2#3560
- Another error the costmap might show: ros-navigation/navigation2#2541
- Unable to run the controller and planner servers: ros-navigation/navigation2#3703
- Second issue: unable to use Smac-planner-based NAV2 params: ros-navigation/navigation2#3717. As suggested in the issue above, just try rebuilding the workspace from scratch to see the changes in the NAV2 package.
- `[controller_server-1] 2023-07-21 15:14:58.631 [RTPS_TRANSPORT_SHM Error] Failed init_port fastrtps_port7420: open_and_lock_file failed -> Function open_port_internal`
Similar issue: https://answers.ros.org/question/394404/rtps_transport_shm-error-and-at-the-same-time-no-topics-are-available-in-ros2-topic-list/
- This error is due to different rmw_fastrtps versions. To check the versions on the ROS system, use `ros2 doctor --report | grep fastrtps` [SOLVED].
- Try providing the right path to the XML node in the nav2 params file, or in the launch file as the default nav2 launch file suggests (DISAPPEARED).
- Look at this issue: ros-navigation/navigation2#3521
- How to debug ROS2 code to get the backtrace:
- Debugger backtrace: https://navigation.ros.org/tutorials/docs/get_backtrace.html
- NAV2 BT Navigator node not launching.
ERROR: `[bt_navigator-4] [ERROR] [1690388832.138353661] []: Original error: Could not load library: libnav2_are_error_codes_active_condition_bt_node.so: cannot open shared object file: No such file or directory.` [RESOLVED]
* This might be because of some wrong configuration settings under the `bt_navigator` parameters in the NAV2 params file. Check that the parameters given are correct.
* Also, check which BT nodes are available for the specific ROS2 version. The most probable reason is that a node passed in the bt_navigator parameter file is not available in the installed ROS2 version.
- `[INFO] [1691424608.497790289] [rviz]: Message Filter dropping message: frame 'odom' at time 3503.340 for reason 'the timestamp on the message is earlier than all the data in the transform cache'.`
- `[planner_server-2] [ERROR] [1691424804.953690738] [transformPoseInTargetFrame]: Extrapolation Error looking up target frame: Lookup would require extrapolation into the future. Requested time 1691424773.212782 but the latest data is at time 3636.800000, when looking up transform from frame [map] to frame [odom]`
- To build Python packages without this error, run `pip install setuptools==58.2.0` in the main system (not in a venv).
- ros-navigation/navigation2#3737
- ros-navigation/navigation2#2439
- ros-navigation/navigation2#3586
- ros-navigation/navigation2#3807
- https://robotics.stackexchange.com/questions/105003/nav2-robot-cant-create-valid-plan
- Just set the `min_vel_x` param in the DWB controller to some negative value, say -0.5, and try it. (https://groups.google.com/g/ros-sig-navigation/c/z-9iCtvwGIA)
Looked at VO (visual odometry) approaches, but all of them are a bit complicated and need ROS2 wrappers. Below are some alternatives.
A laser scan matcher is an algorithm to estimate a robot's motion or pose (position and orientation).
- Firstly, the sensor may be a lidar or a depth camera which produces point clouds (these can be converted to laser scans), and the laser scan data is captured.
- Then data acquisition takes place, meaning the laser scan data is represented in 2D space as a collection of 2D points comprising the objects in the robot's surroundings.
- The scan matcher takes consecutive scans and tries to find the best relative transformation (translation and rotation) between them.
- Once the best transformation is found, it is used to compute the robot's odometry, i.e. the changes in position and orientation; the integration step is sketched below.
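The last step, composing the scan-match increments into an odometry estimate, is plain 2D pose composition; the increment values below are made up for illustration:

```python
# Sketch: compose relative scan-match motions (dx, dy, dtheta in the robot frame)
# into an accumulated odometry pose in the odom frame.
import math

x = y = theta = 0.0  # accumulated odometry pose

def integrate_match(dx, dy, dtheta):
    """Rotate the increment into the global frame and accumulate it."""
    global x, y, theta
    x += dx * math.cos(theta) - dy * math.sin(theta)
    y += dx * math.sin(theta) + dy * math.cos(theta)
    theta += dtheta

# e.g. three successive matches reported by the scan matcher
for inc in [(0.10, 0.0, 0.00), (0.10, 0.01, 0.05), (0.09, 0.0, 0.05)]:
    integrate_match(*inc)
    print(f"pose: x={x:.3f} y={y:.3f} theta={theta:.3f}")
```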
The ROS2 navigation stack is working, but some issues are still open.
- Let the Gazebo simulation run on its simulation time and publish the topics `/model/podcar/odometry` and `/lidar_scan`, and the transform `odom --> base_link` on the `/tf` topic.
- The time stamps on these topics will be the Gazebo sim time, which starts from 0 seconds every time Gazebo is launched.
- After running Gazebo, create small ROS2 nodes which will convert the incoming messages from the Gazebo time stamp to Unix time (wall time); a sketch is given after this list.
- Once done, verify that the nodes are working and transferring messages correctly, and rename the topics to `/odom` and `/scan` to keep things simple for NAV2. We can use whatever names we want via remapping.
- After that, try launching the NAV2 stack along with AMCL localization, open RViz, and watch the terminal for errors (if any persist).
- Then set the initial pose using 2D Pose Estimate in RViz2. Once done, hit the Nav2 Goal button and see the robot moving in Gazebo.
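A minimal sketch of such a restamping node for the laser scan; the same pattern applies to the odometry and `/tf` messages, and the topic names follow the list above:

```python
# Sketch: republish a Gazebo-bridged LaserScan with the current ROS clock stamp,
# renaming /lidar_scan to /scan for NAV2.
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import LaserScan

class Restamper(Node):
    def __init__(self):
        super().__init__('scan_restamper')
        self.pub = self.create_publisher(LaserScan, 'scan', 10)
        self.create_subscription(LaserScan, '/lidar_scan', self.on_scan, 10)

    def on_scan(self, msg):
        msg.header.stamp = self.get_clock().now().to_msg()  # replace sim stamp
        self.pub.publish(msg)

def main():
    rclpy.init()
    rclpy.spin(Restamper())
    rclpy.shutdown()

if __name__ == '__main__':
    main()
```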
- To create a custom message in ROS2, make a msg directory in a new package used only for messages.
- Make sure that the name of the directory inside which all the messages are stored is exactly "msg".
For the rest, follow the tutorial: https://docs.ros.org/en/crystal/Tutorials/Custom-ROS2-Interfaces.html
- Follow the ROS2 guide for the Intel wrapper (https://github.com/IntelRealSense/realsense-ros#installation-instructions).
* Choose option 1, click on the Linux Debian installation guide, and run the commands mentioned line by line.
- In the same repo:
- Under installation, step 2: install the latest Intel® RealSense™ SDK 2.0.
- Select option 2, install the librealsense2 package from the ROS servers: `sudo apt install ros-humble-librealsense2*`.
- Then install the ROS2 wrapper.
- Option 1: `sudo apt install ros-humble-realsense2-*`
- Launch: `ros2 launch realsense2_camera rs_launch.py align_depth.enable:=true pointcloud.enable:=true`
REFER TO THIS: IntelRealSense/realsense-ros#2295
- Tried `dmesg | grep tty`; failed at first with "Operation not permitted".
- Try `sudo dmesg | grep tty` or `sudo dmesg | tail`.
- Then `sudo sysctl kernel.dmesg_restrict=0` [run once].
- Try again: `dmesg | grep tty`.
- To check the ports with Python serial, first install python3-serial for Linux using `sudo apt-get install python3-serial`.
- Then in the terminal: `python3 -m serial.tools.list_ports` [this will show whether a port is detected or not].
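The same check can be scripted with pyserial; a small sketch that lists the detected ports and tries to open one (adjust the device name and baud rate to your setup):

```python
# Sketch: list serial ports and attempt to open one using pyserial.
import serial
from serial.tools import list_ports

for port in list_ports.comports():
    print(port.device, '-', port.description)

try:
    with serial.Serial('/dev/ttyACM0', 115200, timeout=1) as ser:
        print('opened:', ser.name)
except serial.SerialException as e:
    print('could not open port:', e)
```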
- Grant permissions to read/write the serial port with this terminal command: `sudo chmod a+rw /dev/ttyACM0`. Replace the tty port with your respective Ubuntu port.
Problem 2: Failed to open /dev/ttyACM0 (port busy). Solution: this problem appears when the serial port is busy or already occupied.
- Kill the busy serial port with the command `fuser -k /dev/ttyACM0`. Replace the tty port with your respective Ubuntu port.
Problem 3: Board at /dev/ttyACM0 is not available. Solution: in this case the serial port in the Tools menu will be greyed out.
- I googled a lot for this, but none of the solutions worked for me. In the end I tried a different Arduino board and USB connector, and it worked. So this problem arises only if you have an old Arduino board (which can be fixed by installing the required drivers) or a defective one.
- Try this as well:
- `groups computing`
- `sudo usermod -a -G tty computing`
- `sudo usermod -a -G dialout computing`
- `groups computing`
Downloading serial in Linux/Ubuntu: https://ubuntu.pkgs.org/22.04/ubuntu-main-amd64/python3-serial_3.5-1_all.deb.html
- Grant read/write permission on the serial port in Linux using `sudo chmod a+rw /dev/ttyACM0` (replace the port name with whatever yours is).
- Add the user to the tty and dialout groups using `sudo usermod -a -G tty <name_of_user>` and `sudo usermod -a -G dialout <name_of_user>`.
Making Teensy communicate with Ubuntu for serial connections: https://www.pjrc.com/teensy/loader_linux.html
- Add the udev rules to `/etc/udev/rules.d/`:
- Copy the udev rules from https://www.pjrc.com/teensy/00-teensy.rules into a file created with the name `00-teensy.rules`.
- Restart the terminal and run the serial code again; it should now read the port.
- https://robotics.stackexchange.com/questions/110374/issues-while-setting-up-ekf-for-ackermann-steered-robot
- https://robotics.stackexchange.com/questions/105045/ukf-position-data-closely-match-slam-absolute-pose-regard-wheel-odometry/108335#108335
If your robot only has an RGB-D or stereo camera as the main sensor for localization and mapping tasks, RTAB-Map is one of the most widely adopted SLAM algorithms.
However, there are numerous ways to set up RTAB-Map for a specific purpose, and it sometimes becomes confusing; that's why I added this guide for the most confusing queries, which I also faced myself.
- In the standard launch file provided by RTAB-Map (ROS2), replace the topic names of the depth image and RGB image.
- Parameters to set to true: `subscribe_rgb` and `subscribe_depth`.
- If you want the `LaserScan` to create the 2D occupancy map instead of building it via the depth image, set `subscribe_scan = 'True'`; it will then ignore the depth image and generate the 2D map for NAV2 via `LaserScan`.
- To convert depth to scan, we can use `depthimage_to_laserscan`, which I don't recommend. The reason is that this package reports `NaN` when the sensor is on holiday (cannot see any data within the desired range), while the rest of the ROS2 stack, including NAV2 and the SLAM algorithms, expects `Inf`. To have this handled correctly at the time of my project, the other package `pointcloud_to_laserscan` was used, which directly converts the `PointCloud2` message to a 1D `LaserScan`.
- The other parameters can be tuned according to further needs; a launch sketch follows below.
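A minimal launch sketch for the `rtabmap` node with the subscription flags discussed above; the camera topic names in the remappings are placeholders for your sensor's topics:

```python
# Sketch: launch the rtabmap node with RGB-D subscriptions enabled.
from launch import LaunchDescription
from launch_ros.actions import Node

def generate_launch_description():
    return LaunchDescription([
        Node(
            package='rtabmap_slam',
            executable='rtabmap',
            parameters=[{
                'subscribe_rgb': True,
                'subscribe_depth': True,
                'subscribe_scan': False,  # True to build the 2D map from LaserScan instead
            }],
            remappings=[
                ('rgb/image', '/camera/color/image_raw'),         # placeholder topic
                ('depth/image', '/camera/depth/image_rect_raw'),  # placeholder topic
            ],
            output='screen',
        ),
    ])
```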
- introlab/rtabmap_ros#398
- http://wiki.ros.org/rtabmap_ros/Tutorials/SetupOnYourRobot
- introlab/rtabmap#391
- introlab/rtabmap_ros#467
- Firstly, check RTAB-Map SLAM with just RTAB-Map's RGB-D odometry and verify that it behaves correctly indoors.
- Now, if fusing data from other sensors with the RGB-D odometry, turn off the transform-publishing parameter in RGB-D odometry: `publish_tf: false` for `odom -> base_link`.
- Make sure that your sensor node / wheel odometry node is not publishing the tf from `odom -> base_link`.
- Add the odom topics (or whatever your nodes output) to the EKF; use absolute poses from the source you are more confident in, and use velocities from the one that is less stable. Example: VO gets lost with white walls, and at that time you just need the other source to handle the case. An EKF launch sketch is given after this list.
- Then, in the RGB-D odometry launch file, set `publish_null_when_lost: false` and `'Odom/ResetCountdown': '1'` to immediately reset and use the other source for measurement.
- Run rviz/rviz2 and verify that, with the global frame set to `odom`, it behaves as in the case with just RGB-D odometry; a small difference is fine.
- Now, in RTAB-Map's SLAM launch file, remap the topic `odom` to the EKF's output topic (usually `odometry/filtered`) and start the SLAM, changing the global frame to `map`. If you want, test your own ways or try the one mentioned in my issue.
- There might be a need to tweak the EKF's process noise covariances.
- Set `frame_id` to `vo` if you want to use RGB-D odometry alongside the EKF, set `guess_frame_id` to `odom`, and keep publishing `odom -> base_link` from your sensor or wheel odometry.
- This will not break the tf tree, and the new setup would be like `vo -> odom -> base_link`.
- When RTAB-Map's SLAM is launched, the tf tree will look like `map -> vo -> odom -> base_link`.
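A minimal `robot_localization` EKF launch sketch following this recipe, fusing velocities from wheel odometry with absolute x/y/yaw from VO. The topic names, update rate and config values are assumptions to adapt to your setup; each `_config` array is the standard 15-element boolean mask [x, y, z, roll, pitch, yaw, vx, vy, vz, vroll, vpitch, vyaw, ax, ay, az]:

```python
# Sketch: fuse wheel-odometry velocities and VO absolute pose with ekf_node.
from launch import LaunchDescription
from launch_ros.actions import Node

def generate_launch_description():
    return LaunchDescription([
        Node(
            package='robot_localization',
            executable='ekf_node',
            name='ekf_filter_node',
            parameters=[{
                'frequency': 30.0,
                'odom_frame': 'odom',
                'base_link_frame': 'base_link',
                'world_frame': 'odom',
                'odom0': '/wheel/odometry',          # assumed topic: velocities only
                'odom0_config': [False, False, False,
                                 False, False, False,
                                 True,  True,  False,   # vx, vy
                                 False, False, True,    # vyaw
                                 False, False, False],
                'odom1': '/vo',                      # assumed topic: absolute x, y, yaw
                'odom1_config': [True,  True,  False,   # x, y
                                 False, False, True,    # yaw
                                 False, False, False,
                                 False, False, False,
                                 False, False, False],
            }],
            output='screen',
        ),
    ])
```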