
This repository contains the completed task for GSoC ML4SCI.


Wodlfvllf/End-to-End-Deep-Learning-Project


Specific Task 3d: Masked Auto-Encoder for Efficient End-to-End Particle Reconstruction and Compression

Tasks

  1. Train a lightweight ViT using the Masked Auto-Encoder (MAE) training scheme on the unlabelled dataset.
  2. Compare the MAE reconstruction results on both the training and testing datasets.
  3. Fine-tune the model at a lower learning rate on the provided labelled dataset and compare the results with a model trained from scratch.

Implementation

  • Trained a lightweight ViT with the MAE scheme on the unlabelled dataset
  • Compared reconstruction results on the training and testing datasets
  • Fine-tuned the model at a lower learning rate on the labelled dataset
  • Compared the fine-tuned model with a model trained from scratch
  • Verified that the model does not overfit the test dataset
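The MAE pretraining step above can be sketched as follows. This is a minimal, self-contained illustration, not the repository's actual model: the patch size, embedding dimension, depth, and 75% mask ratio are illustrative hyperparameters. The idea is the standard MAE recipe: encode only the visible patches, fill the masked slots with a learned mask token, decode, and compute the MSE loss on the masked patches only.

```python
import torch
import torch.nn as nn

class TinyMAE(nn.Module):
    """Minimal sketch of a lightweight ViT trained as a Masked Auto-Encoder.
    All hyperparameters here are illustrative, not the repo's settings."""
    def __init__(self, img_size=32, patch=4, dim=64, mask_ratio=0.75):
        super().__init__()
        self.patch = patch
        self.mask_ratio = mask_ratio
        self.num_patches = (img_size // patch) ** 2
        self.patch_embed = nn.Conv2d(3, dim, kernel_size=patch, stride=patch)
        self.pos = nn.Parameter(torch.zeros(1, self.num_patches, dim))
        self.mask_token = nn.Parameter(torch.zeros(1, 1, dim))
        enc_layer = nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers=2)
        dec_layer = nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True)
        self.decoder = nn.TransformerEncoder(dec_layer, num_layers=1)
        self.head = nn.Linear(dim, patch * patch * 3)  # predict raw pixel patches

    def forward(self, x):
        B = x.size(0)
        tokens = self.patch_embed(x).flatten(2).transpose(1, 2) + self.pos  # (B, N, dim)
        N = tokens.size(1)
        keep = int(N * (1 - self.mask_ratio))
        idx = torch.rand(B, N, device=x.device).argsort(dim=1)  # random patch order per sample
        keep_idx = idx[:, :keep]
        # encode only the visible (unmasked) patches
        visible = torch.gather(tokens, 1, keep_idx.unsqueeze(-1).expand(-1, -1, tokens.size(-1)))
        encoded = self.encoder(visible)
        # scatter encoded tokens back; masked slots get the learned mask token
        full = self.mask_token.expand(B, N, -1).clone()
        full.scatter_(1, keep_idx.unsqueeze(-1).expand(-1, -1, full.size(-1)), encoded)
        recon = self.head(self.decoder(full + self.pos))  # (B, N, patch*patch*3)
        return recon, idx[:, keep:]  # reconstructions and indices of masked patches

def mae_loss(model, imgs):
    """MSE reconstruction loss computed on the masked patches only."""
    recon, masked_idx = model(imgs)
    p = model.patch
    target = imgs.unfold(2, p, p).unfold(3, p, p)                       # (B, 3, H/p, W/p, p, p)
    target = target.permute(0, 2, 3, 1, 4, 5).reshape(imgs.size(0), -1, p * p * 3)
    masked_recon = torch.gather(recon, 1, masked_idx.unsqueeze(-1).expand(-1, -1, recon.size(-1)))
    masked_tgt = torch.gather(target, 1, masked_idx.unsqueeze(-1).expand(-1, -1, target.size(-1)))
    return nn.functional.mse_loss(masked_recon, masked_tgt)
```

Training then reduces to calling `mae_loss` on batches of unlabelled images and stepping an optimizer on the result.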

Image Reconstruction

Original

Reconstructed
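One simple way to compare reconstruction quality on the training and testing datasets, and to check for overfitting, is to track the mean per-image reconstruction MSE on each split. The helper below is a generic sketch (the `reconstruct` callable and the toy loader are placeholders, not the repository's code):

```python
import torch

def mean_recon_mse(reconstruct, loader):
    """Average per-image reconstruction MSE over a data loader.
    `reconstruct` is any callable mapping an image batch to its reconstruction."""
    total, n = 0.0, 0
    with torch.no_grad():
        for imgs in loader:
            recon = reconstruct(imgs)
            total += torch.mean((recon - imgs) ** 2, dim=(1, 2, 3)).sum().item()
            n += imgs.size(0)
    return total / n

# toy usage: compare a "train" and "test" split with a stand-in model
train_loader = [torch.randn(4, 3, 32, 32) for _ in range(3)]
test_loader = [torch.randn(4, 3, 32, 32) for _ in range(3)]
damp = lambda x: 0.9 * x  # stand-in for model reconstruction
print(mean_recon_mse(damp, train_loader), mean_recon_mse(damp, test_loader))
```

Comparable MSE values on the two splits indicate the auto-encoder generalises rather than memorising the training images.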

Comparison With and Without a Pretrained Vision Transformer

Both models were fine-tuned with a learning rate of 1e-5 using the AdamW optimizer.
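A fine-tuning step with those settings might look like the sketch below. The encoder here is a stand-in for the pretrained ViT encoder, and the two-class head and weight-decay value are hypothetical; only the optimizer (AdamW) and learning rate (1e-5) come from the text above.

```python
import torch
import torch.nn as nn

# Stand-ins: a real run would load the MAE-pretrained ViT encoder here.
encoder = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 64))
head = nn.Linear(64, 2)  # hypothetical two-class particle classification head
model = nn.Sequential(encoder, head)

# Low learning rate for fine-tuning, as stated in the README.
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5, weight_decay=0.05)
criterion = nn.CrossEntropyLoss()

imgs = torch.randn(8, 3, 32, 32)        # dummy labelled batch
labels = torch.randint(0, 2, (8,))
logits = model(imgs)
loss = criterion(logits, labels)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```

The from-scratch baseline uses the same loop with randomly initialised weights instead of the pretrained encoder.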

Notebooks:

Here are the notebooks showing the complete training process.

Example Notebooks:

These example notebooks can be used to run inference or to reproduce the results.

Dependencies

  • Python 3.x
  • Jupyter Notebook
  • PyTorch
  • NumPy
  • Pandas
  • Matplotlib

Install these dependencies using pip or conda.
