
Rotate position in motion model constraint #365

Open · wants to merge 25 commits into devel · Changes from 12 commits
90 changes: 60 additions & 30 deletions fuse_models/include/fuse_models/unicycle_2d_state_cost_function.h
@@ -42,13 +42,11 @@

#include <ceres/sized_cost_function.h>


namespace fuse_models
{

/**
* @brief Create a cost function for a 2D state vector
*
*
* The state vector includes the following quantities, given in this order:
* x position
* y position
@@ -72,7 +70,7 @@ namespace fuse_models
* || [ yaw_vel_t2 - proj(yaw_vel_t1) ] ||
* || [ x_acc_t2 - proj(x_acc_t1) ] ||
* || [ y_acc_t2 - proj(y_acc_t1) ] ||
*
*
* where, the matrix A is fixed, the state variables are provided at two discrete time steps, and proj is a function
* that projects the state variables from time t1 to time t2. In case the user is interested in implementing a cost
* function of the form
@@ -119,26 +117,26 @@ class Unicycle2DStateCostFunction : public ceres::SizedCostFunction<8, 2, 1, 2,
double* residuals,
double** jacobians) const override
{
double position_pred_x;
double position_pred_y;
double delta_position_pred_x;
double delta_position_pred_y;
double yaw_pred;
Contributor review comment: This should be renamed to delta_yaw_pred to be consistent with the other variables.

double vel_linear_pred_x;
double vel_linear_pred_y;
double vel_yaw_pred;
double acc_linear_pred_x;
double acc_linear_pred_y;
predict(
parameters[0][0], // position1_x
parameters[0][1], // position1_y
parameters[1][0], // yaw1
0.0,
0.0,
0.0,
parameters[2][0], // vel_linear1_x
parameters[2][1], // vel_linear1_y
parameters[3][0], // vel_yaw1
parameters[4][0], // acc_linear1_x
parameters[4][1], // acc_linear1_y
dt_,
position_pred_x,
position_pred_y,
delta_position_pred_x,
delta_position_pred_y,
yaw_pred,
vel_linear_pred_x,
vel_linear_pred_y,
@@ -147,20 +145,26 @@ class Unicycle2DStateCostFunction : public ceres::SizedCostFunction<8, 2, 1, 2,
acc_linear_pred_y,
jacobians);

residuals[0] = parameters[5][0] - position_pred_x;
residuals[1] = parameters[5][1] - position_pred_y;
residuals[2] = parameters[6][0] - yaw_pred;
residuals[3] = parameters[7][0] - vel_linear_pred_x;
residuals[4] = parameters[7][1] - vel_linear_pred_y;
residuals[5] = parameters[8][0] - vel_yaw_pred;
residuals[6] = parameters[9][0] - acc_linear_pred_x;
residuals[7] = parameters[9][1] - acc_linear_pred_y;
const double delta_yaw = parameters[6][0] - parameters[1][0];
Contributor review comment: To be consistent with naming.
Suggested change:
- const double delta_yaw = parameters[6][0] - parameters[1][0];
+ const double delta_yaw_est = parameters[6][0] - parameters[1][0];

const Eigen::Vector2d position1(parameters[0][0], parameters[0][1]);
const Eigen::Vector2d position2(parameters[5][0], parameters[5][1]);
const Eigen::Vector2d delta_position_est = fuse_core::rotationMatrix2D(-parameters[1][0]) * (position2 - position1);
const Eigen::Vector2d delta_position_pred(delta_position_pred_x, delta_position_pred_y);
const fuse_core::Matrix2d R_delta_yaw_inv = fuse_core::rotationMatrix2D(-delta_yaw);

Eigen::Map<Eigen::Matrix<double, 8, 1>> residuals_map(residuals);
residuals_map.head<2>() = R_delta_yaw_inv * (delta_position_pred - delta_position_est);
Contributor review comment: The delta_position_est is computed as Pose1^-1 * Pose2. That effectively transforms Pose2 into the frame of Pose1. Similarly, delta_position_pred is computed with Pose1 as the origin, so that quantity is also in the frame of Pose1. If we did a simple subtraction, residual[0] = delta_position_est.x - delta_position_pred.x, then the residual would also be represented in the frame of Pose1, which seems like the natural frame in which to compute the measurement covariance as well. Here you are rotating the error into the ... Pose2 frame? I feel like this should just be:

residuals_map(0) = delta_position_est.x - delta_position_pred.x;
residuals_map(1) = delta_position_est.y - delta_position_pred.y;
residuals_map(2) = delta_yaw_est - delta_yaw_pred;
...

But that is not a firm belief. I could be convinced otherwise.
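Editor's note: the frame question raised above can be checked with a small standalone sketch. This is plain C++ with a hypothetical Vec2 type standing in for the Eigen and fuse_core helpers; residualRotated mirrors the PR's convention and residualSimple the reviewer's suggestion. The two differ only by a rotation, so the error norm (and a whitened cost under an isotropic covariance) is identical; they diverge only when the covariance is not isotropic.

```cpp
#include <cassert>
#include <cmath>

// Hypothetical minimal 2D vector type standing in for Eigen::Vector2d.
struct Vec2 { double x, y; };

// Rotate v by angle (stand-in for applying fuse_core::rotationMatrix2D).
Vec2 rotate(const Vec2& v, double angle)
{
  const double c = std::cos(angle);
  const double s = std::sin(angle);
  return {c * v.x - s * v.y, s * v.x + c * v.y};
}

// Convention in this PR: express the position error by rotating the
// Pose1-frame error through -delta_yaw.
Vec2 residualRotated(const Vec2& delta_pred, const Vec2& delta_est, double delta_yaw)
{
  return rotate({delta_pred.x - delta_est.x, delta_pred.y - delta_est.y}, -delta_yaw);
}

// Convention suggested in the review: plain subtraction, leaving the error
// in the Pose1 frame.
Vec2 residualSimple(const Vec2& delta_pred, const Vec2& delta_est)
{
  return {delta_pred.x - delta_est.x, delta_pred.y - delta_est.y};
}
```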

residuals_map(2) = delta_yaw - yaw_pred;
residuals_map(3) = parameters[7][0] - vel_linear_pred_x;
residuals_map(4) = parameters[7][1] - vel_linear_pred_y;
residuals_map(5) = parameters[8][0] - vel_yaw_pred;
residuals_map(6) = parameters[9][0] - acc_linear_pred_x;
residuals_map(7) = parameters[9][1] - acc_linear_pred_y;

fuse_core::wrapAngle2D(residuals[2]);

// Scale the residuals by the square root information matrix to account for
// the measurement uncertainty.
residuals_map.applyOnTheLeft(A_);

if (jacobians)
@@ -178,25 +182,47 @@ class Unicycle2DStateCostFunction : public ceres::SizedCostFunction<8, 2, 1, 2,
// }
// }

// For the following jacobian computations we consider E to be the full residual, and Ep to be the top 2 rows of
// the residual (wrt the first position)
// We then consider L to be the full "prediction" vector from the predict function, and Lp is the top 2 rows of
// the prediction vector where the prediction vector is defined as:
// [
// delta_position_pred_x
// delta_position_pred_y
// yaw_pred,
// vel_linear_pred_x,
// vel_linear_pred_y,
// vel_yaw_pred,
// acc_linear_pred_x,
// acc_linear_pred_y,
// ]
fuse_core::Matrix<double, 2, 2> d_Ep_d_Lp = R_delta_yaw_inv;

// Update jacobian wrt position1
if (jacobians[0])
{
Eigen::Map<fuse_core::Matrix<double, 8, 2>> jacobian(jacobians[0]);
jacobian.applyOnTheLeft(-A_);
Eigen::Map<fuse_core::Matrix<double, 8, 2>> d_L_d_position1(jacobians[0]);
fuse_core::Matrix<double, 2, 2> d_Lp_d_position1 = d_L_d_position1.block<2, 2>(0, 0);
d_L_d_position1.block<2, 2>(0, 0) = d_Ep_d_Lp * d_Lp_d_position1;
d_L_d_position1.applyOnTheLeft(-A_);
}

// Update jacobian wrt yaw1
if (jacobians[1])
{
Eigen::Map<fuse_core::Vector8d> jacobian(jacobians[1]);
jacobian.applyOnTheLeft(-A_);
Eigen::Map<fuse_core::Vector8d> d_L_d_yaw1(jacobians[1]);
fuse_core::Vector2d d_Lp_d_yaw1 = d_L_d_yaw1.head<2>();
d_L_d_yaw1.head<2>() = d_Ep_d_Lp * d_Lp_d_yaw1;
d_L_d_yaw1.applyOnTheLeft(-A_);
}

// Update jacobian wrt vel_linear1
if (jacobians[2])
{
Eigen::Map<fuse_core::Matrix<double, 8, 2>> jacobian(jacobians[2]);
jacobian.applyOnTheLeft(-A_);
Eigen::Map<fuse_core::Matrix<double, 8, 2>> d_L_d_vel_linear1(jacobians[2]);
fuse_core::Matrix<double, 2, 2> d_Lp_d_vel_linear1 = d_L_d_vel_linear1.block<2, 2>(0, 0);
d_L_d_vel_linear1.block<2, 2>(0, 0) = d_Ep_d_Lp * d_Lp_d_vel_linear1;
d_L_d_vel_linear1.applyOnTheLeft(-A_);
}

// Update jacobian wrt vel_yaw1
@@ -209,8 +235,10 @@ class Unicycle2DStateCostFunction : public ceres::SizedCostFunction<8, 2, 1, 2,
// Update jacobian wrt acc_linear1
if (jacobians[4])
{
Eigen::Map<fuse_core::Matrix<double, 8, 2>> jacobian(jacobians[4]);
jacobian.applyOnTheLeft(-A_);
Eigen::Map<fuse_core::Matrix<double, 8, 2>> d_L_d_acc_linear1(jacobians[4]);
fuse_core::Matrix<double, 2, 2> d_Lp_d_acc_linear1 = d_L_d_acc_linear1.block<2, 2>(0, 0);
d_L_d_acc_linear1.block<2, 2>(0, 0) = d_Ep_d_Lp * d_Lp_d_acc_linear1;
d_L_d_acc_linear1.applyOnTheLeft(-A_);
}

// It might be possible to simplify the code below implementing something like this but using compile-time
@@ -233,8 +261,10 @@ class Unicycle2DStateCostFunction : public ceres::SizedCostFunction<8, 2, 1, 2,
// Jacobian wrt position2
if (jacobians[5])
{
Eigen::Map<fuse_core::Matrix<double, 8, 2>> jacobian(jacobians[5]);
jacobian = A_.block<8, 2>(0, 0);
Eigen::Map<fuse_core::Matrix<double, 8, 2>> d_L_d_position2(jacobians[5]);
d_L_d_position2 = A_.block<8, 2>(0, 0);
fuse_core::Matrix<double, 2, 2> d_Lp_d_position2 = d_L_d_position2.block<2, 2>(0, 0);
d_L_d_position2.block<2, 2>(0, 0) = d_Ep_d_Lp * d_Lp_d_position2;
}

// Jacobian wrt yaw2
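Editor's note: the Jacobian bookkeeping in the hunks above follows one pattern for every parameter block: take the top two rows of the prediction Jacobian, premultiply by d_Ep/d_Lp = R(-delta_yaw), then apply -A_ (or +A_ for the t2 blocks). A minimal sketch of that top-block update, with a hand-rolled 2x2 matrix type standing in for fuse_core::Matrix:

```cpp
#include <array>
#include <cassert>
#include <cmath>

// Hand-rolled 2x2 matrix, standing in for fuse_core::Matrix<double, 2, 2>.
using Mat2 = std::array<std::array<double, 2>, 2>;

// Stand-in for fuse_core::rotationMatrix2D.
Mat2 rotationMatrix2D(double a)
{
  Mat2 rot{};
  rot[0][0] = std::cos(a); rot[0][1] = -std::sin(a);
  rot[1][0] = std::sin(a); rot[1][1] = std::cos(a);
  return rot;
}

// Naive 2x2 matrix product.
Mat2 mul(const Mat2& a, const Mat2& b)
{
  Mat2 r{};
  for (int i = 0; i < 2; ++i)
    for (int j = 0; j < 2; ++j)
      for (int k = 0; k < 2; ++k)
        r[i][j] += a[i][k] * b[k][j];
  return r;
}

// The per-block chain rule from the diff: only the top two rows of the
// prediction Jacobian (d_Lp/d_x) pick up the factor d_Ep/d_Lp = R(-delta_yaw);
// the remaining six rows are untouched before applyOnTheLeft(-A_).
Mat2 chainTopBlock(double delta_yaw, const Mat2& d_Lp_d_x)
{
  return mul(rotationMatrix2D(-delta_yaw), d_Lp_d_x);
}
```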
34 changes: 21 additions & 13 deletions fuse_models/include/fuse_models/unicycle_2d_state_cost_functor.h
@@ -40,13 +40,11 @@
#include <fuse_core/fuse_macros.h>
#include <fuse_core/util.h>


namespace fuse_models
{

/**
* @brief Create a cost function for a 2D state vector
*
*
* The state vector includes the following quantities, given in this order:
* x position
* y position
@@ -70,7 +68,7 @@ namespace fuse_models
* || [ yaw_vel_t2 - proj(yaw_vel_t1) ] ||
* || [ x_acc_t2 - proj(x_acc_t1) ] ||
* || [ y_acc_t2 - proj(y_acc_t1) ] ||
*
*
* where, the matrix A is fixed, the state variables are provided at two discrete time steps, and proj is a function
* that projects the state variables from time t1 to time t2. In case the user is interested in implementing a cost
* function of the form
@@ -147,28 +145,38 @@ bool Unicycle2DStateCostFunctor::operator()(
const T* const acc_linear2,
T* residual) const
{
T position_pred[2];
Collaborator review comment: I believe the comment I made above applies here as well. Indeed, I suspect that with your change the compiler must be complaining about unused arguments, isn't it?

T yaw_pred[1];
T delta_position_pred[2];
T delta_yaw_pred[1];
T vel_linear_pred[2];
T vel_yaw_pred[1];
T acc_linear_pred[2];
T init_position[2];
T init_yaw[1];
init_position[0] = T(0.0);
init_position[1] = T(0.0);
init_yaw[0] = T(0.0);
predict(
position1,
yaw1,
init_position,
init_yaw,
vel_linear1,
vel_yaw1,
acc_linear1,
T(dt_),
position_pred,
yaw_pred,
delta_position_pred,
delta_yaw_pred,
vel_linear_pred,
vel_yaw_pred,
acc_linear_pred);

const T delta_yaw = yaw2[0] - yaw1[0];
const Eigen::Matrix<T, 2, 1> position1_map(position1[0], position1[1]);
const Eigen::Matrix<T, 2, 1> position2_map(position2[0], position2[1]);
const Eigen::Matrix<T, 2, 1> delta_position_est = fuse_core::rotationMatrix2D(-yaw1[0]) * (position2_map - position1_map);
const Eigen::Matrix<T, 2, 1> delta_position_pred_map(delta_position_pred[0], delta_position_pred[1]);

Eigen::Map<Eigen::Matrix<T, 8, 1>> residuals_map(residual);
residuals_map(0) = position2[0] - position_pred[0];
residuals_map(1) = position2[1] - position_pred[1];
residuals_map(2) = yaw2[0] - yaw_pred[0];
residuals_map.template head<2>() = fuse_core::rotationMatrix2D(-delta_yaw) * (delta_position_est - delta_position_pred_map);
residuals_map(2) = delta_yaw - delta_yaw_pred[0];
residuals_map(3) = vel_linear2[0] - vel_linear_pred[0];
residuals_map(4) = vel_linear2[1] - vel_linear_pred[1];
residuals_map(5) = vel_yaw2[0] - vel_yaw_pred[0];
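Editor's note: the functor now calls predict() from a zero initial pose, so the integrated motion comes out directly as a body-frame delta that can be compared against the measured delta R(-yaw1) * (position2 - position1). A minimal sketch under an assumed constant-velocity model (no acceleration, no yaw rate); Vec2 and both helpers are hypothetical, not the real fuse_models predict():

```cpp
#include <cassert>
#include <cmath>

// Hypothetical minimal 2D vector type.
struct Vec2 { double x, y; };

// Assumed constant-velocity motion: predicting from the zero origin (0, 0, 0)
// yields the body-frame delta directly.
Vec2 predictDelta(double vel_x, double vel_y, double dt)
{
  return {vel_x * dt, vel_y * dt};
}

// Measured delta: transform position2 into the pose1 frame, i.e.
// R(-yaw1) * (position2 - position1).
Vec2 measuredDelta(double x1, double y1, double yaw1, double x2, double y2)
{
  const double c = std::cos(-yaw1);
  const double s = std::sin(-yaw1);
  const double dx = x2 - x1;
  const double dy = y2 - y1;
  return {c * dx - s * dy, s * dx + c * dy};
}
```

With a pose2 generated by exactly that motion, the measured and predicted deltas agree, so the position residual vanishes regardless of yaw1.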
2 changes: 1 addition & 1 deletion fuse_models/test/test_unicycle_2d_state_cost_function.cpp
@@ -128,7 +128,7 @@ TEST(CostFunction, evaluateCostFunction)

for (size_t i = 0; i < num_parameter_blocks; ++i)
{
EXPECT_MATRIX_NEAR(J_autodiff[i], J[i], std::numeric_limits<double>::epsilon())
Collaborator review comment: This was probably too strict, but if we change the cost function or functor so that the residuals are computed differently, I'd expect the analytic Jacobians to be computed differently as well. So I'm afraid that even a small error here is actually a consequence of an incorrect analytic Jacobian. Maybe the test values are hiding that because they don't have a strong contribution to the Jacobians that should have been updated to account for the residual computation changes. 🤔

EXPECT_MATRIX_NEAR(J_autodiff[i], J[i], 1e-15)
ayrton04 marked this conversation as resolved.
<< "Autodiff Jacobian[" << i << "] =\n" << J_autodiff[i].format(HeavyFmt)
<< "\nAnalytic Jacobian[" << i << "] =\n" << J[i].format(HeavyFmt);
}
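Editor's note: on the tolerance change in the test above, machine epsilon for double is about 2.22e-16, so the new 1e-15 bound admits discrepancies of a few ULPs that the old bound rejected. A tiny sketch of the comparison; near is a hypothetical stand-in for the per-element check EXPECT_MATRIX_NEAR performs:

```cpp
#include <cassert>
#include <cmath>
#include <limits>

// Hypothetical per-element tolerance check, as applied to each entry of the
// autodiff vs. analytic Jacobian pair.
bool near(double a, double b, double tol)
{
  return std::abs(a - b) <= tol;
}
```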