This repository contains a comprehensive environment for autonomous driving research using the CARLA simulator. It implements various deep learning models, including Bird's Eye View (BEV) perception, world forward models, and kinematic models for autonomous vehicle control and planning.
This codebase focuses on developing and testing autonomous driving algorithms with the following key components:
- World Forward Models: Predictive models for understanding vehicle dynamics and environment evolution
- Bird's Eye View (BEV) Processing: Advanced perception systems using top-down view representations
- Model Predictive Control (MPC): Optimization-based control algorithms for vehicle navigation
- Dynamic and Kinematic Models: Physics-based vehicle modeling for accurate simulation
- Policy Training: Various policy training implementations for autonomous driving
- CARLA Integration: Direct integration with the CARLA simulator for realistic testing
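To make the kinematic-model component concrete, here is a minimal, hypothetical sketch of a kinematic bicycle model of the kind such a component typically implements. The function name, parameters, and the Euler integration scheme are illustrative assumptions, not the repository's actual API:

```python
import math
from dataclasses import dataclass

@dataclass
class VehicleState:
    x: float    # position (m)
    y: float    # position (m)
    yaw: float  # heading (rad)
    v: float    # speed (m/s)

def kinematic_bicycle_step(state, accel, steer, wheelbase=2.7, dt=0.05):
    """One Euler step of the kinematic bicycle model.

    dt=0.05 s corresponds to a 20 Hz update; use dt=0.2 for 5 Hz.
    The wheelbase value is a placeholder for a typical sedan.
    """
    x = state.x + state.v * math.cos(state.yaw) * dt
    y = state.y + state.v * math.sin(state.yaw) * dt
    yaw = state.yaw + (state.v / wheelbase) * math.tan(steer) * dt
    v = state.v + accel * dt
    return VehicleState(x, y, yaw, v)
```

Such a model is differentiable and cheap to evaluate, which is why it pairs well with MPC-style planners.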
- carla_env/: Core environment implementation for the CARLA simulator
- configs/: Configuration files for different experiments and models
- docs/: Documentation and additional resources
- figures/: Generated figures and visualizations
- script/: Utility scripts for various tasks
- utils/: Helper functions and utility modules
- simple_bev/: Bird's Eye View implementation
- leaderboard/: Evaluation framework for autonomous driving agents
- scenario_runner/: Scenario definition and execution tools
- Dynamic Forward Model (DFM) implementation
- Kinematic Model (KM) integration
- Extended BEV perception system
- Multiple training frequencies support (5Hz, 20Hz)
- MPC-based control implementation
- Ground truth BEV model training
- Policy evaluation framework
- Data collection utilities
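As an illustration of the BEV representation the features above rely on, here is a hypothetical sketch that rasterizes agent positions into a top-down occupancy grid. This is not the repository's simple_bev code; the function name, grid size, and resolution are illustrative assumptions:

```python
def rasterize_bev(agent_positions, grid_size=64, resolution=0.5):
    """Rasterize agent (x, y) positions, given in metres relative to
    the ego vehicle, into a grid_size x grid_size occupancy grid.

    resolution is metres per cell; the ego vehicle sits at the
    grid centre. Positions outside the grid are dropped.
    """
    grid = [[0] * grid_size for _ in range(grid_size)]
    half = grid_size // 2
    for x, y in agent_positions:
        col = int(x / resolution) + half
        row = int(y / resolution) + half
        if 0 <= row < grid_size and 0 <= col < grid_size:
            grid[row][col] = 1
    return grid
```

In practice a BEV tensor usually carries several semantic channels (road, lanes, vehicles, pedestrians) rather than a single occupancy channel, but the ego-centred rasterization step is the same.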
- Create a Conda environment using the provided environment.yml: conda env create -f environment.yml
- Activate the environment: conda activate carla
- Install the CARLA simulator (compatible with version 0.9.13)
- Set up additional dependencies:
  - PyTorch with CUDA support
  - OpenCV
  - Other required packages listed in environment.yml
Run the CARLA environment:
python play_carla_env.py --num_episodes 10
- World Forward Model:
python train_world_forward_model_ddp.py
- Policy Training:
python train_dfm_km_cp_extended_bev_gt_bev_encoded_policy_fused.py
- MPC Testing:
python test_mpc_carla.py
- Policy Testing:
python test_policy_carla_dfm_km_cp_extended_bev_5Hz.py
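The MPC test script above runs against CARLA; as a self-contained illustration of the idea behind sampling-based MPC, here is a hypothetical sketch that picks the best constant acceleration for a 1D point mass over a short horizon. The toy dynamics, cost, and all names are illustrative assumptions, not the repository's controller:

```python
import random

def rollout_cost(accel, x0, v0, target, horizon=10, dt=0.1):
    """Simulate a 1D point mass under constant acceleration and
    return the squared distance to the target at the final step."""
    x, v = x0, v0
    for _ in range(horizon):
        v += accel * dt
        x += v * dt
    return (x - target) ** 2

def mpc_choose_accel(x0, v0, target, n_samples=256, seed=0):
    """Random-shooting MPC: sample candidate accelerations, roll each
    one out through the model, and keep the lowest-cost candidate."""
    rng = random.Random(seed)
    candidates = [rng.uniform(-3.0, 3.0) for _ in range(n_samples)]
    return min(candidates, key=lambda a: rollout_cost(a, x0, v0, target))
```

A real MPC loop re-plans at every control step, executes only the first action of the best rollout, and uses a learned or kinematic forward model in place of the toy dynamics.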
Use the provided scripts to collect training data:
python collect_data_dynamic_kinematic_model.py
python collect_data_ground_truth_bev_model.py
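The collection scripts above gather transitions for model training; a minimal hypothetical sketch of that logging pattern follows. The class name and JSON-lines storage format are illustrative assumptions, and the repository's actual format may differ:

```python
import json

class TransitionLogger:
    """Accumulate (state, action, next_state) transitions in memory
    and dump them to a JSON-lines file for offline model training."""

    def __init__(self):
        self.transitions = []

    def log(self, state, action, next_state):
        self.transitions.append(
            {"state": state, "action": action, "next_state": next_state}
        )

    def save(self, path):
        # One JSON object per line keeps the file streamable.
        with open(path, "w") as f:
            for t in self.transitions:
                f.write(json.dumps(t) + "\n")
```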
Evaluate trained models using:
python eval_world_forward_model.py
python eval_ego_forward_model.py
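Forward-model evaluation typically reports displacement error between predicted and ground-truth trajectories; here is a hypothetical sketch of the standard ADE/FDE metrics, which are not necessarily the exact metrics these scripts compute:

```python
import math

def displacement_errors(pred, gt):
    """Average and final displacement error (ADE, FDE) between two
    trajectories given as equal-length lists of (x, y) points."""
    assert len(pred) == len(gt) and pred, "trajectories must match and be non-empty"
    dists = [math.hypot(px - gx, py - gy)
             for (px, py), (gx, gy) in zip(pred, gt)]
    return sum(dists) / len(dists), dists[-1]
```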
- Fork the repository
- Create your feature branch
- Commit your changes
- Push to the branch
- Create a new Pull Request
Please refer to the LICENSE file for details.
This project builds upon the CARLA simulator and various open-source autonomous driving research tools.