This MPPI implementation, based on this paper, is for application to the RoboJackets RoboNav rover.
- The algorithm uses a Monte Carlo procedure, producing random noise (Gaussian splatter) over a model prediction horizon
- Cost is calculated for each path in the set of paths based on a model of the robot
- Path integral: the chosen control command is a weighted sum of all the Monte Carlo paths
- A temperature value controls the trade-off between exploration and exploitation
- The control command is applied to the robot and the process is repeated
- All in all, the control is the minimization of the expected cost of a trajectory, weighted and combined with other trajectories: $$ \mathbf{u}^*(\cdot) = \arg\min_{\mathbf{u}(\cdot)} \mathbb{E}_Q \left[ \phi(\mathbf{x}_T, T) + \int_{t_0}^{T} \mathcal{L}(\mathbf{x}_t, \mathbf{u}_t, t) \, dt \right] $$
- Note: the animation may show a slight overlap of the robot and an obstacle without detecting a collision. This is because the drawing function gives an extra 0.5 width of overlap beyond the actual obstacle border
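The sampling, weighting, and averaging steps above can be sketched in Python. This is a minimal illustration, not the actual `mppi.py` API: `mppi_step` and the toy `dynamics`/`cost` callbacks are assumed names.

```python
import numpy as np

def mppi_step(u_nom, x0, dynamics, cost, n_samples=500,
              sigma=0.5, temperature=1.0, rng=None):
    """One MPPI update: sample noisy control sequences, roll them out
    through the model, and return the cost-weighted average sequence.

    Illustrative sketch only; the real implementation lives in mppi.py.
    """
    rng = np.random.default_rng() if rng is None else rng
    horizon, n_ctrl = u_nom.shape

    # Monte Carlo "Gaussian splatter": perturb the nominal controls
    noise = rng.normal(0.0, sigma, size=(n_samples, horizon, n_ctrl))
    u_samples = u_nom[None] + noise

    # Roll out each sampled control sequence through the robot model
    costs = np.zeros(n_samples)
    for k in range(n_samples):
        x = x0.copy()
        for t in range(horizon):
            x = dynamics(x, u_samples[k, t])
            costs[k] += cost(x, u_samples[k, t])

    # Path-integral weighting: low temperature -> exploitation of the
    # best paths, high temperature -> exploration across all paths
    weights = np.exp(-(costs - costs.min()) / temperature)
    weights /= weights.sum()

    # Weighted sum of all sampled paths gives the new control sequence
    return np.einsum('k,ktc->tc', weights, u_samples)
```

In a receding-horizon loop, the first command of the returned sequence would be applied to the robot, the rest shifted forward as the next nominal sequence, and the process repeated.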
C++
: C++ implementation for application to hardware

images
: Result images

src
: Python implementation for logic testing

mppi.py
: Whole MPPI algorithm

annimation.py
: Animation of the process

obstacle_hit.py
: Function to determine if a continuous agent position hits a discrete obstacle
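The continuous-to-discrete collision check could look like the following sketch. The function name, `cell_size` parameter, and the row/column indexing convention are assumptions, not the exact code in `obstacle_hit.py` (the TODO about x and y flipping in the costmap concerns exactly this convention).

```python
def hits_obstacle(x, y, obstacle_map, cell_size=1.0):
    """Check whether a continuous (x, y) agent position falls inside an
    occupied cell of a discrete obstacle grid.

    obstacle_map[row][col] is truthy when the cell is an obstacle.
    Illustrative sketch only; obstacle_hit.py may differ.
    """
    # Map the continuous position onto a grid cell
    col = int(x // cell_size)
    row = int(y // cell_size)
    # Positions outside the map are treated as collision-free here
    if row < 0 or row >= len(obstacle_map) or col < 0 or col >= len(obstacle_map[0]):
        return False
    return bool(obstacle_map[row][col])
```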
- Improve the ability to get unstuck from crevices. The algorithm currently cannot recognize that the short-term higher cost of moving away from the goal yields a lower long-term cost than sitting stuck forever. When the robot gets stuck, I have tried:
  - Increasing the prediction horizon: this technically should fix it, but a variable prediction horizon will require reworking the mapping calculation because the control matrices are currently static
  - Changing the robot's idea of where the goal is so it goes in a different direction for a short period of time. This works, but the robot recognizes the real goal a bit too soon and just gets stuck again
  - Down-weighting the direction cost so the robot cares less about heading toward the goal
  - Up-weighting the obstacle cost so the robot strongly prefers moving away from obstacles
- Find a solution to the x and y flipping in the costmap
- Give the robot a physical size so it is not just an infinitesimal point
- Incorporate actual robot dynamics with a non-holonomic drivetrain
- Implement in C++ with ROS
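The cost re-weighting experiments above amount to tuning the relative weights of a stage cost. A minimal sketch, assuming a simple distance-based form: `stage_cost`, `w_dir`, `w_obs`, and `obs_radius` are hypothetical names, not the actual parameters in `mppi.py`.

```python
import numpy as np

def stage_cost(pos, goal, obstacles, w_dir=1.0, w_obs=10.0, obs_radius=1.0):
    """Illustrative weighted stage cost: a distance-to-goal term plus an
    obstacle proximity penalty. Lowering w_dir (or raising w_obs) shifts
    the trade-off the TODO items describe. Assumed form, not mppi.py's.
    """
    # Direction/goal term: straight-line distance to the goal
    goal_cost = np.linalg.norm(goal - pos)

    # Obstacle term: penalty grows as the agent nears an obstacle center
    obs_cost = 0.0
    for obs in obstacles:
        d = np.linalg.norm(obs - pos)
        if d < obs_radius:
            obs_cost += (obs_radius - d) / obs_radius

    return w_dir * goal_cost + w_obs * obs_cost
```

With a large `w_obs`, a path that detours away from the goal but clears an obstacle can score lower than a shorter path that hugs it, which is the behavior needed to escape crevices.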
*With Gaussian splatter pathing shown*

*Current issue of agent getting stuck*