Specific Task 3d: Masked Auto-Encoder for Efficient End-to-End Particle Reconstruction and Compression
- Train a lightweight ViT using the Masked Auto-Encoder (MAE) training scheme on the unlabelled dataset.
- Compare the MAE's reconstruction results on both the training and testing datasets.
- Fine-tune the model at a lower learning rate on the provided labelled dataset and compare the results with a model trained from scratch.
- Trained a lightweight ViT with the MAE scheme on the unlabelled dataset (a minimal sketch follows this list)
- Compared reconstruction results on the training and testing datasets
- Fine-tuned the model at a lower learning rate on the labelled dataset
- Compared the results with a model trained from scratch
- Verified that the model does not overfit the test dataset
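For reference, here is a minimal sketch of the MAE pre-training step on a lightweight ViT. The image size, patch size, embedding dimension, masking ratio, and the random batch are illustrative placeholders, not the exact configuration used in the notebooks.

```python
import torch
import torch.nn as nn

class TinyMAE(nn.Module):
    """Simplified MAE: encode visible patches, reconstruct masked ones."""
    def __init__(self, img_size=32, patch=4, in_ch=1, dim=128,
                 depth=4, heads=4, mask_ratio=0.75):
        super().__init__()
        self.patch, self.mask_ratio = patch, mask_ratio
        self.num_patches = (img_size // patch) ** 2
        self.patch_dim = in_ch * patch * patch
        self.patch_embed = nn.Linear(self.patch_dim, dim)
        self.pos_embed = nn.Parameter(torch.zeros(1, self.num_patches, dim))
        self.mask_token = nn.Parameter(torch.zeros(1, 1, dim))
        enc = nn.TransformerEncoderLayer(dim, heads, dim * 4, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc, depth)
        dec = nn.TransformerEncoderLayer(dim, heads, dim * 4, batch_first=True)
        self.decoder = nn.TransformerEncoder(dec, 2)  # lightweight decoder
        self.head = nn.Linear(dim, self.patch_dim)    # predict raw patch pixels

    def patchify(self, x):
        # (B, C, H, W) -> (B, N, patch_dim)
        p = self.patch
        B, C, H, W = x.shape
        x = x.unfold(2, p, p).unfold(3, p, p)
        return x.permute(0, 2, 3, 1, 4, 5).reshape(B, -1, C * p * p)

    def forward(self, imgs):
        patches = self.patchify(imgs)
        B, N, _ = patches.shape
        tokens = self.patch_embed(patches) + self.pos_embed

        # Random masking: keep a random subset of patches per image.
        n_keep = int(N * (1 - self.mask_ratio))
        ids_keep = torch.rand(B, N, device=imgs.device).argsort(1)[:, :n_keep]
        idx = ids_keep.unsqueeze(-1).expand(-1, -1, tokens.size(-1))
        latent = self.encoder(torch.gather(tokens, 1, idx))

        # Decoder sees encoded visible tokens plus mask tokens elsewhere.
        full = self.mask_token.expand(B, N, -1).clone()
        full.scatter_(1, idx, latent)
        recon = self.head(self.decoder(full + self.pos_embed))

        # Loss only on masked patches, as in the MAE paper.
        mask = torch.ones(B, N, device=imgs.device)
        mask.scatter_(1, ids_keep, 0.0)
        loss = (((recon - patches) ** 2).mean(-1) * mask).sum() / mask.sum()
        return loss, recon

# Smoke test on a random batch; real training iterates over the unlabelled set.
model = TinyMAE()
opt = torch.optim.AdamW(model.parameters(), lr=1.5e-4)
loss, _ = model(torch.randn(8, 1, 32, 32))
opt.zero_grad(); loss.backward(); opt.step()
```

As in the MAE recipe, only the visible patches pass through the encoder, and the reconstruction loss is computed on the masked patches alone, which is what makes pre-training efficient.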
Here are the notebooks showing the complete training process:
- MAE_Particle_Reconstruction.ipynb
- linear-probing-Pretraining.ipynb
- linear-probing-without Pretraining.ipynb
- Each notebook includes data loading, model training (pre-training and fine-tuning, sketched below), evaluation, and the trained model weights
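For reference, here is a minimal sketch of the fine-tuning and linear-probing step, reusing the `TinyMAE` sketch above. The class count, learning rates, and random batch are illustrative placeholders; the notebooks load the actual pre-trained weights and the labelled dataset instead.

```python
import torch
import torch.nn as nn

class Classifier(nn.Module):
    """Pre-trained MAE encoder plus a linear classification head."""
    def __init__(self, mae, num_classes=2, linear_probe=False):
        super().__init__()
        self.mae = mae
        if linear_probe:
            # Linear probing: freeze the encoder, train only the head.
            for p in self.mae.parameters():
                p.requires_grad = False
        self.fc = nn.Linear(128, num_classes)  # 128 = encoder dim in the sketch

    def forward(self, imgs):
        # No masking at fine-tuning time: encode all patches.
        tokens = self.mae.patch_embed(self.mae.patchify(imgs)) + self.mae.pos_embed
        feats = self.mae.encoder(tokens)
        return self.fc(feats.mean(dim=1))  # mean-pool the patch tokens

mae = TinyMAE()  # in practice, load the MAE pre-trained weights here
model = Classifier(mae, num_classes=2)
opt = torch.optim.AdamW(model.parameters(), lr=1e-5)  # lower lr than pre-training
loss_fn = nn.CrossEntropyLoss()

# One step on a random batch; real training iterates over the labelled set.
imgs, labels = torch.randn(8, 1, 32, 32), torch.randint(0, 2, (8,))
loss = loss_fn(model(imgs), labels)
opt.zero_grad(); loss.backward(); opt.step()
```

Passing `linear_probe=True` freezes the encoder so only the linear head trains, matching the two linear-probing notebooks (with and without pre-training); the from-scratch baseline simply wraps a randomly initialized encoder instead of the pre-trained one.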
These example notebooks can be used to run inference or to reproduce the results. Running them requires:
- Python 3.x
- Jupyter Notebook
- PyTorch
- NumPy
- Pandas
- Matplotlib
Install these dependencies using pip or conda.
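For example, with pip (package names as published on PyPI; `notebook` provides Jupyter Notebook):

```
pip install torch numpy pandas matplotlib notebook
```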