Hello, I have read your paper "Data-Efficient Decentralized Visual SLAM", and there is one point I cannot understand: how is the relative pose between robots estimated? Does each robot know the initial poses of the other robots, or is there a common reference frame known to all robots? Looking forward to your reply, thanks.
Hi @Gaoee, sorry for my late reply. Once robot A knows that, in its frame X, it observes the same scene that robot B observes in frame Y (determined via decentralized visual place recognition), robot A sends all the data necessary for relative pose estimation between X and Y to robot B. Robot B then establishes the pose of X relative to Y (no common frame of reference is needed) and sends the result back to A.
This data comprises the feature point locations and their descriptors, and the relative pose is estimated using P3P inside RANSAC.
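For illustration only, here is a minimal sketch of what such a P3P + RANSAC relative pose step could look like, using OpenCV. This is not the implementation from the paper; the function name, the assumption of binary descriptors, and the choice of sending 3D landmarks from robot A (rather than 2D keypoints with depth) are all hypothetical.

```python
import numpy as np
import cv2


def estimate_relative_pose(landmarks_A, descriptors_A, keypoints_B, descriptors_B, K):
    """Hypothetical sketch: estimate the pose of frame X (robot A) relative to
    frame Y (robot B) from matched features, using P3P inside a RANSAC loop.

    landmarks_A:   Nx3 array of 3D points expressed in robot A's frame X
    descriptors_A: NxD binary descriptors (e.g. ORB/BRISK) for those landmarks
    keypoints_B:   Mx2 array of 2D keypoint locations in robot B's image (frame Y)
    descriptors_B: MxD binary descriptors for those keypoints
    K:             3x3 camera intrinsic matrix of robot B
    """
    # Match descriptors (brute force, Hamming distance for binary descriptors).
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(descriptors_A, descriptors_B)

    obj_pts = np.float32([landmarks_A[m.queryIdx] for m in matches])
    img_pts = np.float32([keypoints_B[m.trainIdx] for m in matches])

    # P3P as the minimal solver inside RANSAC; returns rotation (Rodrigues) and translation.
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        obj_pts, img_pts, K, None,
        flags=cv2.SOLVEPNP_P3P, reprojectionError=3.0, iterationsCount=1000)
    if not ok:
        return None

    R, _ = cv2.Rodrigues(rvec)
    # T_YX maps points from frame X into frame Y: p_Y = R @ p_X + t
    T_YX = np.eye(4)
    T_YX[:3, :3] = R
    T_YX[:3, 3] = tvec.ravel()
    return T_YX, inliers
```

In this sketch, the returned transform is exactly the quantity described above: the pose of X expressed relative to Y, so no global or shared reference frame is required by either robot.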