
Skimap_ros integration with ORBSLAM2 #7

Open
Amir-Ramezani opened this issue Jun 8, 2017 · 43 comments


@Amir-Ramezani

Hi,

I followed the discussion in ORBSLAM 2 and saw the following clip:
https://www.youtube.com/watch?v=W3nm2LXmgqE

I can run ORB-SLAM2 and SkiMap live, but I don't have any idea how to integrate them, partly because ORB-SLAM2 does not publish any topics. I would appreciate it if you could give me some idea and explanation.

@m4nh
Owner

m4nh commented Jun 8, 2017

@AmirCognitive if you look at the ROS RGB-D example file in the ORB_SLAM2 package (file), on row 112 there is the real "track" operation performed by ORB-SLAM2:

mpSLAM->TrackRGBD(cv_ptrRGB->image,cv_ptrD->image,cv_ptrRGB->header.stamp.toSec());

This method returns the camera's 6-DOF pose (relative, obviously, to the first camera reference frame), so you can retrieve it like this:

cv::Mat camera_pose = mpSLAM->TrackRGBD(...)

This is a 4x4 transformation matrix, so you can easily convert it into a ROS tf::Transform and broadcast it to the system, where SkiMap can use it as the source for the sensor pose. @raulmur may correct me if I'm wrong.

The alternative is to call, asynchronously, the method that retrieves all the keyframes currently stored in the ORB-SLAM2 system:

vector<KeyFrame *> keyframes;
mpSLAM->GetAllKeyFrames(keyframes);

But in this case you only have the KeyFrames' poses; you don't have any information about the RGB-D images, so you have to store all your RGB-D data in a separate map, keyed by timestamp, which is the same key used to store KeyFrames in ORB-SLAM2. @raulmur, correct me here as well.
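The separate timestamp-keyed buffering described above could be sketched roughly as follows. This is only an illustration: `Frame` is a hypothetical stand-in for a cv::Mat image, the class name is invented, and the nearest-timestamp lookup assumes the buffer is non-empty.

```cpp
#include <cassert>
#include <iterator>
#include <map>
#include <utility>

// "Frame" is a hypothetical placeholder for a cv::Mat image.
struct Frame { int id; };

// Buffer RGB-D pairs keyed by timestamp -- the same key ORB-SLAM2 uses for
// its KeyFrames -- so a KeyFrame pose can later be matched to its images.
class RgbdBuffer {
public:
    void store(double stamp, Frame rgb, Frame depth) {
        buffer_[stamp] = {rgb, depth};
    }
    // Nearest-timestamp lookup; assumes the buffer is non-empty.
    std::pair<Frame, Frame> closest(double stamp) const {
        auto it = buffer_.lower_bound(stamp);
        if (it == buffer_.begin()) return it->second;
        if (it == buffer_.end()) return std::prev(it)->second;
        auto before = std::prev(it);
        return (stamp - before->first) < (it->first - stamp) ? before->second
                                                             : it->second;
    }
private:
    std::map<double, std::pair<Frame, Frame>> buffer_;
};
```

In practice the callback that feeds TrackRGBD would also call `store()`, and the keyframe pass would call `closest()` with each KeyFrame's timestamp.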

@Amir-Ramezani
Author

Sorry for the late reply.

I followed your first method and added the following to the "file":

// added for integration with skimap_ros
cv::Mat camera_pose = mpSLAM->TrackRGBD(cv_ptrRGB->image,cv_ptrD->image,cv_ptrRGB->header.stamp.toSec());

static tf::TransformBroadcaster br;
tf::Transform transform;

tf::Vector3 origin;
origin.setValue((camera_pose.at<double>(0,3)),(camera_pose.at<double>(1,3)),(camera_pose.at<double>(2,3)));

tf::Matrix3x3 tf3d;
tf3d.setValue((camera_pose.at<double>(0,0)), (camera_pose.at<double>(0,1)), (camera_pose.at<double>(0,2)), 
      (camera_pose.at<double>(1,0)), (camera_pose.at<double>(1,1)), (camera_pose.at<double>(1,2)), 
      (camera_pose.at<double>(2,0)), (camera_pose.at<double>(2,1)), (camera_pose.at<double>(2,2)));

tf::Quaternion tfqt;
tf3d.getRotation(tfqt);

transform.setOrigin(origin);
transform.setRotation(tfqt);

br.sendTransform(tf::StampedTransform(transform, ros::Time::now(), "world", "camera_pose_test"));

However, I guess there is a problem with my code converting the camera's 6-DOF pose into a tf::Transform, since I receive the following error when the camera moves:

"Segmentation fault (core dumped)"

Then, regarding "so it can be used by SkiMap as source for sensor pose": should I use the skimap_map_service? Could you explain in a bit more detail?

Thanks,

@m4nh
Owner

m4nh commented Jun 9, 2017

@AmirCognitive The SkiMap testing example explained in the README works if there is a TF ready to be used as the sensor's 6-DOF pose. This ORB-SLAM2 code produces only the abovementioned TF, nothing more. The segmentation fault is a problem in your code, probably related to some pointer or bad array indexing.
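For reference, a guarded version of that conversion might look like the fragment below. Two details are worth checking in this kind of code: TrackRGBD returns an empty matrix while tracking is lost (indexing it then crashes), and ORB-SLAM2 stores the pose as a CV_32F matrix, so elements should be read with at<float> rather than at<double>. This is only a sketch, not tested against any specific ORB-SLAM2 version:

```cpp
// Sketch of a guarded pose broadcast inside the RGB-D callback (assumptions above).
cv::Mat camera_pose = mpSLAM->TrackRGBD(cv_ptrRGB->image, cv_ptrD->image,
                                        cv_ptrRGB->header.stamp.toSec());
if (camera_pose.empty())   // tracking lost: nothing to broadcast this frame
    return;

// NOTE: TrackRGBD returns Tcw (world-to-camera); invert it if you need the
// camera pose in the world frame, as later replies in this thread do with .inv().
tf::Matrix3x3 R(camera_pose.at<float>(0,0), camera_pose.at<float>(0,1), camera_pose.at<float>(0,2),
                camera_pose.at<float>(1,0), camera_pose.at<float>(1,1), camera_pose.at<float>(1,2),
                camera_pose.at<float>(2,0), camera_pose.at<float>(2,1), camera_pose.at<float>(2,2));
tf::Vector3 t(camera_pose.at<float>(0,3), camera_pose.at<float>(1,3), camera_pose.at<float>(2,3));

static tf::TransformBroadcaster br;   // static: constructed once, reused per callback
br.sendTransform(tf::StampedTransform(tf::Transform(R, t),
                                      ros::Time::now(), "world", "camera_pose_test"));
```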

@Amir-Ramezani
Author

Maybe my question is very simple, but as I understand it, you are saying we created the TF in the previous post. Should I change the following part in the skimap_live launch file to connect to the newly created TF, or should I use the skimap_service?

<!-- Global and Camera TF Names -->
<param name="base_frame_name" value="world"/>
<param name="camera_frame_name" value="camera_pose_test"/>

@m4nh
Owner

m4nh commented Jun 9, 2017

Yes, but this does not resolve the segmentation fault, which is a bug in your code.

@Amir-Ramezani
Author

Besides what you said about the segmentation fault, which happens after adding the code, I also get this error from the skimap_ros launch file:

[ERROR] [1497013568.329012342]: "odom" passed to lookupTransform argument target_frame does not exist.
[ERROR] [1497013568.427245455]: "odom" passed to lookupTransform argument target_frame does not exist.
[ERROR] [1497013568.427980266]: "odom" passed to lookupTransform argument target_frame does not exist.
[ERROR] [1497013568.428439365]: "odom" passed to lookupTransform argument target_frame does not exist.

@m4nh
Owner

m4nh commented Jun 9, 2017

"odom" is the base frame of your odometry system: the one that is always fixed in space and is the parent of each "camera_pose" TF.

@Amir-Ramezani
Author

After running the ORB-SLAM2 RGB-D node and openni2 for the Asus camera, these are the TFs on my system (output of "rosrun tf tf_monitor"):

Frame: /camera_depth_frame published by unknown_publisher Average Delay: -0.0998325 Max Delay: 0
Frame: /camera_depth_optical_frame published by unknown_publisher Average Delay: -0.0997629 Max Delay: 0
Frame: /camera_pose_test published by unknown_publisher Average Delay: 0.000407517 Max Delay: 0.00589606
Frame: /camera_rgb_frame published by unknown_publisher Average Delay: -0.0997913 Max Delay: 0
Frame: /camera_rgb_optical_frame published by unknown_publisher Average Delay: -0.0998036 Max Delay: 0

and these are the topics I have:

/camera/camera_nodelet_manager/bond
/camera/depth/camera_info
/camera/depth/image
/camera/depth/image/compressed
/camera/depth/image/compressed/parameter_descriptions
/camera/depth/image/compressed/parameter_updates
/camera/depth/image/compressedDepth
/camera/depth/image/compressedDepth/parameter_descriptions
/camera/depth/image/compressedDepth/parameter_updates
/camera/depth/image/theora
/camera/depth/image/theora/parameter_descriptions
/camera/depth/image/theora/parameter_updates
/camera/depth/image_raw
/camera/depth/image_raw/compressed
/camera/depth/image_raw/compressed/parameter_descriptions
/camera/depth/image_raw/compressed/parameter_updates
/camera/depth/image_raw/compressedDepth
/camera/depth/image_raw/compressedDepth/parameter_descriptions
/camera/depth/image_raw/compressedDepth/parameter_updates
/camera/depth/image_raw/theora
/camera/depth/image_raw/theora/parameter_descriptions
/camera/depth/image_raw/theora/parameter_updates
/camera/depth/image_rect
/camera/depth/image_rect/compressed
/camera/depth/image_rect/compressed/parameter_descriptions
/camera/depth/image_rect/compressed/parameter_updates
/camera/depth/image_rect/compressedDepth
/camera/depth/image_rect/compressedDepth/parameter_descriptions
/camera/depth/image_rect/compressedDepth/parameter_updates
/camera/depth/image_rect/theora
/camera/depth/image_rect/theora/parameter_descriptions
/camera/depth/image_rect/theora/parameter_updates
/camera/depth/image_rect_raw
/camera/depth/image_rect_raw/compressed
/camera/depth/image_rect_raw/compressed/parameter_descriptions
/camera/depth/image_rect_raw/compressed/parameter_updates
/camera/depth/image_rect_raw/compressedDepth
/camera/depth/image_rect_raw/compressedDepth/parameter_descriptions
/camera/depth/image_rect_raw/compressedDepth/parameter_updates
/camera/depth/image_rect_raw/theora
/camera/depth/image_rect_raw/theora/parameter_descriptions
/camera/depth/image_rect_raw/theora/parameter_updates
/camera/depth/points
/camera/depth_rectify_depth/parameter_descriptions
/camera/depth_rectify_depth/parameter_updates
/camera/depth_registered/camera_info
/camera/depth_registered/image_raw
/camera/depth_registered/image_raw/compressed
/camera/depth_registered/image_raw/compressed/parameter_descriptions
/camera/depth_registered/image_raw/compressed/parameter_updates
/camera/depth_registered/image_raw/compressedDepth
/camera/depth_registered/image_raw/compressedDepth/parameter_descriptions
/camera/depth_registered/image_raw/compressedDepth/parameter_updates
/camera/depth_registered/image_raw/theora
/camera/depth_registered/image_raw/theora/parameter_descriptions
/camera/depth_registered/image_raw/theora/parameter_updates
/camera/depth_registered/points
/camera/depth_registered/sw_registered/camera_info
/camera/depth_registered/sw_registered/image_rect
/camera/depth_registered/sw_registered/image_rect/compressed
/camera/depth_registered/sw_registered/image_rect/compressed/parameter_descriptions
/camera/depth_registered/sw_registered/image_rect/compressed/parameter_updates
/camera/depth_registered/sw_registered/image_rect/compressedDepth
/camera/depth_registered/sw_registered/image_rect/compressedDepth/parameter_descriptions
/camera/depth_registered/sw_registered/image_rect/compressedDepth/parameter_updates
/camera/depth_registered/sw_registered/image_rect/theora
/camera/depth_registered/sw_registered/image_rect/theora/parameter_descriptions
/camera/depth_registered/sw_registered/image_rect/theora/parameter_updates
/camera/depth_registered/sw_registered/image_rect_raw
/camera/depth_registered/sw_registered/image_rect_raw/compressed
/camera/depth_registered/sw_registered/image_rect_raw/compressed/parameter_descriptions
/camera/depth_registered/sw_registered/image_rect_raw/compressed/parameter_updates
/camera/depth_registered/sw_registered/image_rect_raw/compressedDepth
/camera/depth_registered/sw_registered/image_rect_raw/compressedDepth/parameter_descriptions
/camera/depth_registered/sw_registered/image_rect_raw/compressedDepth/parameter_updates
/camera/depth_registered/sw_registered/image_rect_raw/theora
/camera/depth_registered/sw_registered/image_rect_raw/theora/parameter_descriptions
/camera/depth_registered/sw_registered/image_rect_raw/theora/parameter_updates
/camera/driver/parameter_descriptions
/camera/driver/parameter_updates
/camera/ir/camera_info
/camera/ir/image
/camera/ir/image/compressed
/camera/ir/image/compressed/parameter_descriptions
/camera/ir/image/compressed/parameter_updates
/camera/ir/image/compressedDepth
/camera/ir/image/compressedDepth/parameter_descriptions
/camera/ir/image/compressedDepth/parameter_updates
/camera/ir/image/theora
/camera/ir/image/theora/parameter_descriptions
/camera/ir/image/theora/parameter_updates
/camera/rgb/camera_info
/camera/rgb/image_raw
/camera/rgb/image_raw/compressed
/camera/rgb/image_raw/compressed/parameter_descriptions
/camera/rgb/image_raw/compressed/parameter_updates
/camera/rgb/image_raw/compressedDepth
/camera/rgb/image_raw/compressedDepth/parameter_descriptions
/camera/rgb/image_raw/compressedDepth/parameter_updates
/camera/rgb/image_raw/theora
/camera/rgb/image_raw/theora/parameter_descriptions
/camera/rgb/image_raw/theora/parameter_updates
/camera/rgb/image_rect_color
/camera/rgb/image_rect_color/compressed
/camera/rgb/image_rect_color/compressed/parameter_descriptions
/camera/rgb/image_rect_color/compressed/parameter_updates
/camera/rgb/image_rect_color/compressedDepth
/camera/rgb/image_rect_color/compressedDepth/parameter_descriptions
/camera/rgb/image_rect_color/compressedDepth/parameter_updates
/camera/rgb/image_rect_color/theora
/camera/rgb/image_rect_color/theora/parameter_descriptions
/camera/rgb/image_rect_color/theora/parameter_updates
/camera/rgb_rectify_color/parameter_descriptions
/camera/rgb_rectify_color/parameter_updates
/rosout
/rosout_agg
/tf
/tf_static

Obviously I have no odom frame, so which one should I use?

@m4nh
Owner

m4nh commented Jun 9, 2017

Sorry, but unfortunately this is not a tutorial. Maybe you don't have a strong background in ROS and generic SLAM systems, so I can't help you, sorry. By the way, your odom is the identity in this case: try to publish a static TF that is the 4x4 identity matrix, and it should work.
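One way to publish such an identity TF is tf's static_transform_publisher; a sketch for the launch file, with "world" and "odom" as example frame names:

```xml
<!-- Identity transform: makes "odom" coincide with "world".
     Argument order: x y z yaw pitch roll parent_frame child_frame period_ms -->
<node pkg="tf" type="static_transform_publisher" name="world_to_odom"
      args="0 0 0 0 0 0 world odom 100" />
```

With tf2, the equivalent tf2_ros static_transform_publisher takes the same arguments minus the trailing period, since static transforms are latched.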

@Amir-Ramezani
Author

No need to say sorry so many times; after all, you did a good job by providing this package. I was trying to use it, but as you know, there is a problem here and another in the second issue I opened in this repository, and all these messages are just attempts to get your package working the way the video clips you uploaded to YouTube show.

My SLAM background is not strong; not everyone is good at SLAM, and yes, you are right, this is not a tutorial. But if you upload a package, shouldn't it work as described in the package documentation and the published videos? I wanted a simple 3D mapping system for model-based machine learning algorithms. Since it is better to test with packages that already exist, I tried yours, but studying SLAM from scratch is not a good option for me; it is better to cooperate with someone who knows it well.

@bcm0

bcm0 commented Jun 18, 2017

@m4nh In Orb_Slam_2 discussion you wrote (raulmur/ORB_SLAM2#322):

Unfortunately due to license problem i can't provide you the integration code but SkiMap is pretty simple to use...

I would like to know if it is sufficient to apply the changes you suggest in this issue to get a working system (like the one shown in the video: https://www.youtube.com/watch?v=W3nm2LXmgqE).
Do you use RGB-D or Monocular in the video?
@AmirCognitive Did you manage to get it running?

@m4nh
Owner

m4nh commented Jun 19, 2017

@adnion, @zhengguoxian123: the SkiMap README shows a simple example called "skimap_live.launch" which just works with a moving camera TF (relative to a fixed world TF) and a stream from an RGB-D camera (so two topics: one for RGB and one for depth). It works regardless of who publishes the abovementioned data. So you only need to modify the ORB-SLAM2 code to broadcast the tracked camera pose as a TF; SkiMap can then exploit it.

If this is not clear, please answer, and maybe together we can produce a working example. Thanks.

@m4nh
Owner

m4nh commented Jun 19, 2017

@zhengguoxian123 if you launch "skimap_live.launch", in the launch file you have to set the name of the camera's live TF ("camera_frame_name"), the name of the map TF (odom, world, or another fixed frame: "base_frame_name"), and the names of the camera's RGB and depth topics ("camera_rgb_topic"/"camera_depth_topic"). It should then work: it publishes the visualization markers of the whole map for RViz. You can modify the "skimap_live.cpp" node to manage the map in another way.

@m4nh
Owner

m4nh commented Jun 19, 2017

@zhengguoxian123 did you change the TF for the camera and the TF for the world?

@m4nh
Owner

m4nh commented Jun 19, 2017

@adnion SkiMap cannot manage a monocular camera as-is, because without depth information it is impossible to build a 3D representation of the RGB points. If the SLAM system (like ORB-SLAM2 in this case) produces a semi-dense or dense map of points during tracking, SkiMap can use that subset to build the map. An example is LSD-SLAM (https://www.youtube.com/watch?v=GnuQzP3gty4&t=1s), which produces a semi-dense map with a monocular camera.

@bcm0

bcm0 commented Jun 19, 2017

If the Slam System (like ORBSLAM2 in this case) produces a semi-dense or dense map of points during tracking SkiMap can use this subset to build the map

I thought the output of ORB-SLAM2 was camera-type independent. Thanks for your reply!

@m4nh
Owner

m4nh commented Jun 20, 2017

@zhengguoxian123 can you send me a small "bag" file of your setup so I can test it on my computer?

@m4nh
Owner

m4nh commented Jun 22, 2017

@zhengguoxian123 I don't understand very well. If you modified ORB-SLAM2 to obtain your custom camera TF, you can give it a custom name, for example "tracked_camera", and broadcast it with a custom parent, for example "world" as the parent frame id. Now you have the fixed frame "world" and the moving frame "tracked_camera". If you launch RViz, you need to set "world" as the fixed frame to obtain a correct visualization. (You also need to use "world" and "tracked_camera" as parameters of the SkiMap launch file!)

@zhengguoxian123

zhengguoxian123 commented Jul 5, 2017

[ INFO] [1499238154.252125537]: Time for Integration: 13.172000 ms
[ INFO] [1499238154.353921955]: Time for Integration: 17.112000 ms
[ERROR] [1499238154.387779607]: Lookup would require extrapolation into the future. Requested time 1499238154.235619241 but the latest data is at time 1499238154.234521345, when looking up transform from frame [kinect2_rgb_optical_frame] to frame [map]
[ INFO] [1499238154.558159039]: Time for Integration: 22.513000 ms
[ INFO] [1499238154.591196123]: Time for Integration: 16.033001 ms
[ INFO] [1499238154.650967590]: Time for Integration: 13.992000 ms
[ERROR] [1499238154.732112811]: Lookup would require extrapolation into the future. Requested time 1499238154.613694470 but the latest data is at time 1499238154.572049359, when looking up transform from frame [kinect2_rgb_optical_frame] to frame [map]
[ INFO] [1499238154.849911565]: Time for Integration: 13.410000 ms
p transform from frame [kinect2_rgb_optical_frame] to frame [map]
[ERROR] [1499238155.331694962]: Lookup would require extrapolation into the future. Requested time 1499238155.203922035 but the latest data is at time 1499238154.741019368, when looking up
[ERROR] [1499238155.535464419]: Lookup would require extrapolation into the future. Requested time 1499238

I encountered this problem; does it affect the mapping?

Also, can the node save and reload the map?
Thank you for your reply.

@m4nh
Owner

m4nh commented Jul 5, 2017

@zhengguoxian123 the SkiMap class has the methods "saveToFile" and "loadFromFile", but you have to implement this logic in a custom node. In the next release I will provide a full-stack service able to save/load maps.

@kentsommer

kentsommer commented Jul 14, 2017

@m4nh

Hey, I've got the integration almost working. I'm wondering if you would be willing to share how you modified the pose to match correctly? I'm currently doing this (see code below), and it seems it doesn't match the orientation SkiMap is looking for.

Also, when launching skimap_live it looks like the map always points straight up (see image). In my mind, it would make sense for the map to match the camera view (i.e., in front of the green axis, not in front of the red one).

skimap

    cv::Mat pose = mpSLAM->TrackRGBD(cv_ptrRGB->image,cv_ptrD->image,cv_ptrRGB->header.stamp.toSec());
    if (pose.empty())
        return;

    cv::Mat  TWC=mpSLAM->mpTracker->mCurrentFrame.mTcw.inv();  
    cv::Mat RWC= TWC.rowRange(0,3).colRange(0,3);  
    cv::Mat tWC= TWC.rowRange(0,3).col(3);

    tf::Matrix3x3 M(RWC.at<float>(0,0),RWC.at<float>(0,1),RWC.at<float>(0,2),
        RWC.at<float>(1,0),RWC.at<float>(1,1),RWC.at<float>(1,2),
        RWC.at<float>(2,0),RWC.at<float>(2,1),RWC.at<float>(2,2));
    tf::Vector3 V(tWC.at<float>(0), tWC.at<float>(1), tWC.at<float>(2));
    tf::Quaternion q;
    M.getRotation(q);

    static tf::TransformBroadcaster br;
    tf::Transform transform = tf::Transform(M, V);
    br.sendTransform(tf::StampedTransform(transform, ros::Time::now(), "world", "camera_pose"));

    geometry_msgs::PoseStamped _pose;
    _pose.pose.position.x = transform.getOrigin().x();
    _pose.pose.position.y = transform.getOrigin().y();
    _pose.pose.position.z = transform.getOrigin().z();
    _pose.pose.orientation.x = transform.getRotation().x();
    _pose.pose.orientation.y = transform.getRotation().y();
    _pose.pose.orientation.z = transform.getRotation().z();
    _pose.pose.orientation.w = transform.getRotation().w();

    _pose.header.stamp = ros::Time::now();
    _pose.header.frame_id = "world";
    pose_pub.publish(_pose);

@m4nh
Owner

m4nh commented Jul 14, 2017

In the SkiMap live test, the setup uses an odometry system, so the ZERO reference frame is the first pose of the robot's mobile base (hence on the ground), while the first camera pose is on the head of the robot. The robot is this:

image

In a SLAM system instead, like ORB-SLAM2, there is no ZERO reference frame on the ground; the ZERO reference frame is the first CAMERA frame. For this reason your image is correct: the map starts this way, and the camera's reference frame has Z (blue) pointing forward, X (red) pointing right, and Y (green) completing the right-handed cross product.

@SteveJos

SteveJos commented Aug 4, 2017

@kentsommer Hi, did you modify the ORB-SLAM2 source to get your posted code running? I get a segmentation fault while executing the example with ORB-SLAM2 RGB-D. Could you please share your steps? Thank you!

@AndreaIncertiDelmonte

Hi @kentsommer, did you solve the orientation problem?
I tried this way:

  • introduced a new reference frame for the camera, with X right, Y down, and Z forward, called camera_fixed_frame
  • published a static TF from the world frame to camera_fixed_frame, composed of a -PI/2 rotation around the X axis and a PI rotation around the Z axis (see the image below)
    world_camera_fixed_frame
  • used your code to publish camera_fixed_frame to camera_pose inside ORB-SLAM2

When I run ORB-SLAM2, I see the Z value strangely increasing; see the video below:
https://www.youtube.com/watch?v=YP_-Vz07q7k

@m4nh
Owner

m4nh commented Oct 3, 2017

Hi @AndreaIncertiDelmonte, that fixed frame is wrong anyway, because in that bag the recording was made with the mobile robot, which starts with its head bent downwards (about 30-40 degrees), not perfectly aligned with the floor.

The bag, however, contains all the robot's TFs, including the TF corresponding to the camera (computed from the robot's odometry and kinematics); you can use that as the initial fixed frame.

@AndreaIncertiDelmonte

AndreaIncertiDelmonte commented Oct 4, 2017

Hi @m4nh,
thanks for your advice! It works.

Now when I try to connect ORBSLAM and Skimap I get this error
[1507114325.806171253]: Lookup would require extrapolation into the past. Requested time 1472902313.451495878 but the earliest data is at time 1507114315.700946991, when looking up transform from frame [orbslam_camera] to frame [world]

Which timestamp is better to use inside br.sendTransform(tf::StampedTransform(...)) on ORBSLAM side? msgRGB->header.stamp or ros::Time::now()?

Andrea

Update: the previous error was caused by not using rosbag time; it was fixed by adding
<param name="use_sim_time" value="true"/> to the launch file.
But the lookup issue persists, and I created another issue, #9.

@m4nh m4nh mentioned this issue Oct 9, 2017
@JeenQ

JeenQ commented Oct 11, 2017

Hi! @m4nh
I'm struggling to get your amazing work running.
Unfortunately I don't have an RGB-D camera, but I do have a stereo camera and can get a depth image as a topic. Is it also possible to integrate ORB-SLAM2 stereo mode with your work?

@m4nh
Owner

m4nh commented Oct 11, 2017

Yes @JeenQ, you need to separate the two steps:

  • First, transform your depth image into a point cloud. The node "skimap_live.cpp" shows how to do this starting from a depth image and the camera matrix.
  • Second, publish from ORB-SLAM2 the TF of the current camera pose (with a TF broadcaster), then launch a TF listener in the SkiMap node to retrieve the camera pose and use it to transform the points.
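The first step is per-pixel pinhole back-projection; a minimal sketch of the geometry, independent of the actual skimap_live.cpp code, where fx, fy, cx, cy are the camera intrinsics (the same parameters that appear in the skimap_live launch file):

```cpp
#include <cassert>
#include <cmath>

struct Point3 { double x, y, z; };

// Pinhole back-projection: pixel (u, v) with metric depth d -> 3D point in
// the camera frame. This is the per-pixel operation of a depth-to-cloud loop.
Point3 backproject(double u, double v, double d,
                   double fx, double fy, double cx, double cy) {
    return { (u - cx) * d / fx, (v - cy) * d / fy, d };
}
```

Running this over every valid depth pixel, then applying the camera TF to each point, yields the cloud SkiMap integrates; a stereo pipeline only needs to produce the depth image first.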

@rnunziata

rnunziata commented Oct 19, 2017

Hi @AndreaIncertiDelmonte,
Can you post your full solution for ORB-SLAM2 if you have it working, please?
The line:
cv::Mat TWC=mpSLAM->mpTracker->mCurrentFrame.mTcw.inv();
gets a segfault; it looks like mCurrentFrame or mTcw is not set.

@wmj1238

wmj1238 commented Jun 3, 2018

Hi @m4nh,
Thanks for your excellent work, I have learned a lot. But I can't run it correctly; can you help me find the error?
I want to run skimap_ros integrated with ORB-SLAM2, but I can't get the 3D map; I can only see the robot moving in RViz.

skimap_live.launch:

<arg name="camera" default="xtion" />

<node name="skimap_live" output="screen" pkg="skimap_ros" type="skimap_live">

    <!-- Generic Parameters -->
    <param name="hz" value="10" />

    <!-- Topics Parameters -->
    <param name="camera_rgb_topic" value="/camera/rgb/image_raw" />
    <param name="camera_depth_topic" value="/camera/depth/image_raw" />

    <!-- Cloud parameters  -->
    <param name="point_cloud_downscale" value="1" />

    <!-- RGB-D Parameters -->
    <param name="fx" value="540.175979" />
    <param name="fy" value="530.740137" />
    <param name="cx" value="325.399891" />
    <param name="cy" value="239.056878" />
    <param name="rows" value="480" />
    <param name="cols" value="640" />
    <param name="camera_distance_max" value="10" />
    <param name="camera_distance_min" value="0.45" />

    <!-- Mapping parameters -->
    <param name="base_frame_name" value="world" />
    <param name="camera_frame_name" value="camera" />
    <param name="map_resolution" value="0.05" />
    <param name="min_voxel_weight" value="100" />
    <param name="ground_level" value="0.05" />
    <param name="agent_height" value="1.5" />
    <param name="enable_chisel" value="false" />
    <param name="chisel_step" value="30" />
    <param name="height_color" value="true" />




</node>

<node pkg="rviz" type="rviz" output="screen" name="rviz" args="-d $(find skimap_ros)/rviz/skimap_live.rviz" />

ros_rgbd.cc in orb-slam2.

cv::Mat camera_pose;
camera_pose= mpSLAM->TrackRGBD(cv_ptrRGB->image,cv_ptrD->image,cv_ptrRGB->header.stamp.toSec());
cv::Mat TWC = camera_pose.inv();
cv::Mat RWC= TWC.rowRange(0,3).colRange(0,3);
cv::Mat tWC= TWC.rowRange(0,3).col(3);
tf::Matrix3x3 M(RWC.at<float>(0,0),RWC.at<float>(0,1),RWC.at<float>(0,2),
RWC.at<float>(1,0),RWC.at<float>(1,1),RWC.at<float>(1,2),
RWC.at<float>(2,0),RWC.at<float>(2,1),RWC.at<float>(2,2));
tf::Vector3 V(tWC.at<float>(0), tWC.at<float>(1), tWC.at<float>(2));
tf::Quaternion q;
M.getRotation(q);
q.normalize();
tf::Transform transform = tf::Transform(M, V);
tf::TransformBroadcaster br;
br.sendTransform(tf::StampedTransform(transform,ros::Time::now(),"world","camera"));

If I don't integrate skimap_ros with ORB-SLAM2, I can get the correct map only when using the tiago_lar.bag that you provided; with other bags I also can't get the map. Can you give me some advice?

rosbag info rgbd_dataset_freiburg1_room.bag
path: rgbd_dataset_freiburg1_room.bag
version: 2.0
duration: 49.3s
start: May 10 2011 20:51:46.96 (1305031906.96)
end: May 10 2011 20:52:36.24 (1305031956.24)
size: 845.5 MB
messages: 41803
compression: bz2 [2724/2724 chunks; 30.08%]
uncompressed: 2.7 GB @ 56.9 MB/s
compressed: 843.8 MB @ 17.1 MB/s (30.08%)
types: sensor_msgs/CameraInfo [c9a58c1b0b154e0e6da7578cb991d214]
sensor_msgs/Image [060021388200f6f0f447d0fcd9c64743]
sensor_msgs/Imu [6a62c6daae103f4ff57a132d6f95cec2]
tf/tfMessage [94810edda583a504dfda3829e70d7eec]
visualization_msgs/MarkerArray [f10fe193d6fac1bf68fad5d31da421a7]
topics: /camera/depth/camera_info 1361 msgs : sensor_msgs/CameraInfo
/camera/depth/image 1360 msgs : sensor_msgs/Image
/camera/rgb/camera_info 1362 msgs : sensor_msgs/CameraInfo
/camera/rgb/image_color 1362 msgs : sensor_msgs/Image
/cortex_marker_array 4914 msgs : visualization_msgs/MarkerArray
/imu 24569 msgs : sensor_msgs/Imu
/tf 6875 msgs : tf/tfMessage
The code is modified as follows:

static tf::TransformBroadcaster br;
br.sendTransform(tf::StampedTransform(transform, cv_ptrRGB->header.stamp, "world", "camera"));

@m4nh
Owner

m4nh commented Jun 3, 2018

@wmj1238 do you see the TF "camera" moving in RViz while the bag is running? Are you sure the image topic "/camera/rgb/image_raw" and the related depth topic are correct in the ORB-SLAM2 part?
Try also changing the param "min_voxel_weight", which hides voxels with fewer than min_voxel_weight hits inside.

@wmj1238

wmj1238 commented Jun 3, 2018

Hi @m4nh, the TF "camera" doesn't move in RViz while the bag is running. Why?
And I am sure the image topic and related depth topic are correct in the ORB-SLAM2 part.
Thank you.

@m4nh
Owner

m4nh commented Jun 3, 2018

@wmj1238 with those lines:

tf::Transform transform = tf::Transform(M, V);
tf::TransformBroadcaster br;
br.sendTransform(tf::StampedTransform(transform,ros::Time::now(),"world","camera"));

ORB-SLAM2 will publish the TF, so if you don't see the TF "camera" it means ORB-SLAM2 is not working. Try printing the tf::StampedTransform(transform,ros::Time::now(),"world","camera") before broadcasting to see if it changes, then try to also show cv_ptrRGB->image with:

cv::imshow("img",cv_ptrRGB->image);
cv::waitKey(1);

to debug the RGB/depth part as well.

@wmj1238

wmj1238 commented Jun 3, 2018

Thank you @m4nh. In the meantime I tried to debug as you said, but my computer runs slowly and shows "Aborted (core dumped)" if I imshow the image. I think ORB-SLAM2 is working, because I can get the correct track when running it.
I can see the TF "camera" in RViz sometimes, but it doesn't move.
Do I need to change the skimap_live.cpp code?

@m4nh
Owner

m4nh commented Jun 3, 2018

@wmj1238 no, I think skimap_live is working. The "core dumped" is in ORB-SLAM2, so I think you have a problem in your image callback: if imshow produces "core dumped", the RGB image is probably empty!

@wmj1238

wmj1238 commented Jun 4, 2018

I am very grateful for your help @m4nh; you were right, and my code was wrong in this line: tf::TransformBroadcaster br; should be static tf::TransformBroadcaster br;.
Now I can see the TF "camera" moving in RViz while the bag is running. Next I set rosparam set use_sim_time true as said in #9, but I still can't get the 3D map. Can you give me some advice?

screenshot from 2018-06-04 16-24-38
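As an aside, the reason `static` matters in that line is C++ local-static semantics: a function-local static is constructed once, on the first call, and reused by every later callback invocation, instead of a new broadcaster being created and destroyed on every frame. A generic (non-ROS) illustration of that behavior:

```cpp
#include <cassert>

int constructions = 0;

struct Broadcaster {            // stand-in for tf::TransformBroadcaster
    Broadcaster() { ++constructions; }
};

// Mimics a ROS image callback: with `static`, the broadcaster is
// constructed on the first call only and reused by all later calls.
void callback() {
    static Broadcaster br;
    (void)br;                   // in real code: br.sendTransform(...)
}
```

Calling `callback()` any number of times constructs exactly one `Broadcaster`, which is why the static version keeps publishing on a stable object while the non-static one did not.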

@m4nh
Owner

m4nh commented Jun 4, 2018

@wmj1238 do you have the VisualizationMarker topic enabled in RViz? However, I see the TF "camera_pose" and not "camera"; you set "camera" as the tf name in the SkiMap launch file.

@wmj1238

wmj1238 commented Jun 4, 2018

@m4nh Yes, I have the VisualizationMarker topic enabled in RViz. I changed the TF to "camera_pose" to prevent confusion; now <param name="camera_frame_name" value="camera_pose" /> is in skimap_live.launch.
screenshot from 2018-06-04 18-25-05

@m4nh
Owner

m4nh commented Jun 4, 2018

@wmj1238 sorry, but I can't help you much from here :/ ... it's hard for me to debug your code! Try modifying skimap_live.cpp to debug; sorry for that :(

@wmj1238

wmj1238 commented Jun 4, 2018

OK, thanks again, I will check the code carefully. Thank you. @m4nh

@brunoeducsantos

@m4nh which library do you use for the visualization in the YouTube demo? Or just ORB-SLAM2?
Thanks

@fate7788

fate7788 commented Nov 9, 2018

OK, thanks again, I will check the code carefully. Thank you. @m4nh

@wmj1238 What follow-up work did you do? Can the map be displayed normally now? Could you give me some advice? Thanks!

@Nick-0814

Nick-0814 commented Jul 6, 2023

> (quoting @wmj1238's earlier post in full: the same skimap_live.launch, the ros_rgbd.cc modifications, and the rgbd_dataset_freiburg1_room.bag info)

@m4nh, @wmj1238: Hello, I have basically integrated the ORB-SLAM2 algorithm with SkiMap, but the point cloud doesn't seem to display correctly in RViz, as in the picture. Do I need to rotate the point cloud? I look forward to your reply and am eager to communicate with you further.
Thank you very much.

wrong_pointcloud
