Skimap_ros integration with ORBSLAM2 #7
@AmirCognitive If you look at the ROS RGB-D example in the ORB_SLAM2 package (file), on row 112 you will find the actual "track" operation performed by ORB_SLAM2:
This method returns the camera's 6DOF pose (relative, obviously, to the first camera reference frame), so you can retrieve it like this:
This is a 4x4 transformation matrix, so you can easily convert it into a ROS tf::Transform and broadcast it to the system, where SkiMap can use it as the source of the sensor pose. @raulmur may correct me if I'm wrong. The alternative is to call, asynchronously, the method that retrieves all the keyframes currently stored in the ORB_SLAM2 system:
But in that case you only have a list of the keyframes' poses, without any information about the RGB-D images, so you would have to store all your RGB-D data in a separate map, keyed by timestamp (the same key ORB_SLAM2 uses to store keyframes). @raulmur, correct me here as well.
Sorry for the late reply. I followed your first method and added the following to the file:
However, I guess there is a problem with my code that adds the camera 6DOF pose to a tf::Transform, since I receive a "Segmentation fault (core dumped)" error when the camera moves. Regarding "so it can be used by SkiMap as source for sensor pose": should I use the skimap_map_service? Could you explain in a bit more detail? Thanks.
@AmirCognitive The SkiMap testing example explained in the README works as long as there is a TF ready to be used as the sensor's 6DOF pose. The ORBSLAM2 code above produces only that TF, nothing more. The segmentation fault is a problem in your code, probably related to a pointer or to bad array indexing.
Maybe my question is very simple, but as I understand it we created the TF in the previous post. Should I change the following part in the launch file of skimap_live so it connects to the newly created TF, or should I use the skimap_service?
Yes, but this does not resolve the segmentation fault, which is a bug in your code.
Besides the segmentation fault, which happens after adding the code, I also get this error from the skimap_ros launch file: [ERROR] [1497013568.329012342]: "odom" passed to lookupTransform argument target_frame does not exist.
"odom" is the base frame of your odometry system: the frame that is always fixed in space and is the parent of each "camera_pose" TF.
After running the ORB RGB-D node and openni2 for the Asus camera, these are the TFs on my system (output of "rosrun tf tf_monitor"):
and these are the topics I have:
Obviously I have no odom, so which one should I use?
Sorry, but unfortunately this is not a tutorial. Maybe you don't have a strong background in ROS and generic SLAM systems, so I can't help you, sorry. By the way, your odom is the identity in this case: try to publish a static TF that is the 4x4 identity matrix, and it should work.
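One common way to publish such a static identity TF is the `static_transform_publisher` node from the ROS `tf` package. The snippet below is a hedged sketch, not code from this repository; the frame names ("odom" as the fixed parent, "camera_link" as the child) are example assumptions and must match whatever your ORBSLAM2 broadcast and the skimap_live launch file actually use:

```xml
<!-- Hypothetical launch fragment: publish an identity transform so that
     "odom" exists as a fixed frame. Arguments are:
     x y z yaw pitch roll parent_frame child_frame period_ms -->
<node pkg="tf" type="static_transform_publisher" name="odom_identity"
      args="0 0 0 0 0 0 odom camera_link 100" />
```

The same can be done from a terminal with `rosrun tf static_transform_publisher 0 0 0 0 0 0 odom camera_link 100`.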
No need to say sorry so many times; after all, you did a good job providing this package. I was trying to use it, but as you know there is a problem here and another in the second issue I opened in this repository, and all these messages are just attempts to get your package working the way it does in the video clips you uploaded on YouTube. My SLAM background is not good (not everyone is good at SLAM), and yes, you are right, this is not a tutorial, but if you upload a package, shouldn't it work as described in the package documentation and the published videos? I wanted a simple 3D mapping system for model-based machine learning algorithms, and since it is better to test with existing packages I tried yours. Studying SLAM from scratch is not a good option for me; it is better to cooperate with someone who knows it well.
@m4nh In the ORB_SLAM2 discussion you wrote (raulmur/ORB_SLAM2#322):
I would like to know whether applying the changes you suggest in this issue is sufficient to get a working system like the one shown in the video (https://www.youtube.com/watch?v=W3nm2LXmgqE).
@adnion, @zhengguoxian123 The README of SkiMap shows a simple example called "skimap_live.launch" that works with just a moving TF of the camera (with a fixed world TF) and a stream from an RGB-D camera (so two topics: one for RGB and one for DEPTH). It works regardless of who publishes the abovementioned data. So you only need to modify the ORBSLAM2 code to enable TF broadcasting of the tracked camera pose; that way SkiMap can exploit it. If this is not clear, please reply and maybe together we can produce a working example of this. Thanks.
@zhengguoxian123 If you launch "skimap_live.launch", in the launch file you have to set the name of the live camera TF ("camera_frame_name"), the name of the map TF (odom, world, or any fixed frame: "base_frame_name"), and the names of the camera's RGB and DEPTH topics ("camera_rgb_topic"/"camera_depth_topic"). Then it should work: it publishes the visualization marker of the whole map to RViz. You can modify the "skimap_live.cpp" node to manage the map differently.
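Putting those four parameters together, a launch file along these lines should be the result. This is a sketch, not the actual skimap_live.launch from the repository: the node type/name and the topic values are assumptions to be replaced with your own, while the parameter names are the ones listed above:

```xml
<launch>
  <!-- Hypothetical skimap_live configuration; adjust frame and topic
       names to match your ORBSLAM2 broadcast and camera driver. -->
  <node pkg="skimap_ros" type="skimap_live" name="skimap_live" output="screen">
    <param name="base_frame_name"    value="world" />
    <param name="camera_frame_name"  value="tracked_camera" />
    <param name="camera_rgb_topic"   value="/camera/rgb/image_raw" />
    <param name="camera_depth_topic" value="/camera/depth/image_raw" />
  </node>
</launch>
```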
@zhengguoxian123 Did you change the TF for the camera and the TF for the world?
@adnion SkiMap cannot handle a monocular camera as-is, because without depth information it is impossible to build a 3D representation of the RGB points. If the SLAM system (ORBSLAM2 in this case) produces a semi-dense or dense map of points during tracking, SkiMap can use that subset to build the map (an example is LSD-SLAM (https://www.youtube.com/watch?v=GnuQzP3gty4&t=1s), which produces a semi-dense map with a monocular camera).
I thought the output of ORBSLAM2 was camera-type independent. Thanks for your reply!
@zhengguoxian123 Can you send me a small "bag" file of your setup so I can test it on my computer?
@zhengguoxian123 I don't understand very well. If you modified ORBSLAM2 to obtain your custom camera TF, you can give it a custom name, for example "tracked_camera", and broadcast it with a custom parent, for example "world". Now you have a fixed frame, "world", and a moving frame, "tracked_camera". If you launch RViz, set "world" as the fixed frame to obtain a correct visualization. (You also need to use "world" and "tracked_camera" as parameters of the SkiMap launch file!)
[ INFO] [1499238154.252125537]: Time for Integration: 13.172000 ms
I ran into this message: does it affect the mapping? Can the node save the map and reload it?
@zhengguoxian123 The SkiMap class has the methods "saveToFile" and "loadFromFile", but you have to implement this logic in a custom node yourself. In the next release I will provide a full-stack service able to save/load maps.
Hey, I've got the integration almost working. Would you be willing to share how you modified the pose to match correctly? I'm currently doing this (see code below) and it seems it doesn't match the orientation that SkiMap expects. Also, when launching skimap_live it looks like the map is always straight up (see image). In my mind, it would make sense for the map to match the camera view (i.e., in front of the green axis, not in front of the red one).
In skimap_live the test was made with an odometry system, so the ZERO reference frame is the first pose of the robot's mobile base (on the ground), while the first camera pose is on the robot's head. The robot is this: In a SLAM system like ORB_SLAM2, instead, there is no ZERO reference frame on the ground; the ZERO reference frame is the first CAMERA frame. For this reason your image is correct, the map starts this way: the camera reference frame has Z (blue) pointing forward, X (red) pointing right, and Y (green) as the remaining axis by the right-hand cross-product convention.
@kentsommer Hi, did you modify the ORB source in order to get your posted code running? I get a segmentation fault while executing the example with ORB_SLAM2 RGB-D. Could you please share your steps? Thank you!
Hi @kentsommer, did you solve the orientation problem?
When I run ORB_SLAM2 I see the Z value strangely increasing; see the video below.
Hi @AndreaIncertiDelmonte, the fixed frame is wrong in any case, because in that bag the recording was made with the mobile robot starting with its head bent downwards (about 30-40 degrees), not perfectly aligned with the floor. The bag, however, contains all the robot's TFs, including the TF corresponding to the camera (computed from the robot's odometry + kinematics); you can use that as the initial fixed frame.
Hi @m4nh, now when I try to connect ORBSLAM and SkiMap I get this error. Which timestamp is better to use inside br.sendTransform(tf::StampedTransform(...)) on the ORBSLAM side: msgRGB->header.stamp or ros::Time::now()? Andrea. Update: the previous error was caused by not using rosbag time, and it was fixed by adding
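The exact fix is omitted from the comment above, but the standard ROS mechanism for making nodes follow a bag's clock is `use_sim_time`. This sketch shows that mechanism, on the assumption that it matches the fix referred to; with it in place, timestamps such as `msgRGB->header.stamp` stay consistent with the TF buffer during playback:

```xml
<!-- Make all nodes take time from the /clock topic instead of wall time. -->
<param name="/use_sim_time" value="true" />
```

The bag must then be replayed with the clock published: `rosbag play --clock your_recording.bag` (`your_recording.bag` is a placeholder name).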
Hi! @m4nh |
Yes @JeenQ, you need to separate the two steps:
Hi @AndreaIncertiDelmonte,
Hi @m4nh, here is my skimap_live.launch:
If I don't integrate skimap_ros with ORB-SLAM2, I get a correct map only with the tiago_lar.bag that you provided; with other dataset bags I can't get a map. Can you give me some advice?
@wmj1238 Do you see the TF "camera" moving in RViz while the bag is running? Are you sure that the image topic "/camera/rgb/image_raw" and the related depth topic are correct in the ORBSLAM2 part?
Hi @m4nh, the TF "camera" doesn't move in RViz while the bag is running. Why?
@wmj1238 With those rows:
ORBSLAM will publish the TF, so if you don't see the TF "camera" it means ORBSLAM is not working. Try to print the
to debug the RGB/DEPTH part as well.
Thank you @m4nh. In the meantime I tried to debug as you said, but my computer runs slowly and shows
@wmj1238 No, I think skimap_live is working; the "core dumped" is in ORBSLAM, so I think you have a problem in your image callback: if imshow produces "core dumped", maybe the RGB image is empty!
I am very grateful for your help, @m4nh. You are right, and my code is wrong in this line:
@wmj1238 Do you have the VisualizationMarker topic enabled in RViz? However, I see the TF "camera_pose" and not "camera", while you set "camera" as the tf name in the SkiMap launch file.
@m4nh Yes, I have the VisualizationMarker topic enabled in RViz, and I changed the TF to "camera_pose" to prevent confusion. Now
@wmj1238 Sorry, but I can't help you much from here; it's hard for me to debug your code! Try to modify skimap_live.cpp to debug it. Sorry for that. :(
OK, thanks again. I will check the code carefully. Thank you, @m4nh.
@m4nh Which library do you use for the visualization in the YouTube demo? Or is it just ORBSLAM?
@m4nh, @wmj1238 Hello! I basically integrated the ORBSLAM2 algorithm with SkiMap, but the point cloud doesn't seem to display correctly in RViz, as shown in the picture. Do I need to rotate the point cloud? I look forward to your reply and am eager to communicate with you further.
Hi,
I followed the discussion in ORBSLAM 2 and saw the following clip:
https://www.youtube.com/watch?v=W3nm2LXmgqE
I can run ORBSLAM and SkiMap live, but I don't have any idea how to integrate them, partially because there are no topics from ORBSLAM2. I would appreciate it if you could give me some ideas and an explanation.