
Malware Classifier Backdoor Attacks and Defenses

This project demonstrates how to execute backdoor attacks on malware classifiers and evaluate their performance under different conditions. It supports both LightGBM and optional PyTorch (torch) models, and the pipeline automatically skips training any model that is disabled in the config. The test suite evaluates model performance using two types of confusion matrices (standard and simplified) together with the metrics described below.
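
As a rough illustration of the config gating, the sketch below loads config.yaml and trains LightGBM only when a flag is enabled; the key name train_lightgbm and the placeholder data are assumptions rather than the repository's actual schema.

import numpy as np
import yaml
import lightgbm as lgb

with open("config.yaml") as f:
    config = yaml.safe_load(f)

# Placeholder data so the sketch runs stand-alone; the real pipeline
# extracts EMBER-style feature vectors from the executables instead.
X_train = np.random.rand(200, 2381)          # 2381 = EMBER v2 feature dimension
y_train = np.random.randint(0, 2, size=200)

if config.get("train_lightgbm", True):       # key name is an assumption
    booster = lgb.train({"objective": "binary", "verbosity": -1},
                        lgb.Dataset(X_train, label=y_train))
    booster.save_model("data/outputs/lightgbm.txt")
else:
    print("LightGBM training disabled in config; skipping.")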

Steps

  1. Poison the Training Data: Inject backdoor samples into the dataset (see the conceptual sketch after this list).
  2. Train the Model: Train a malware classifier on the poisoned dataset.
  3. Test on Clean Data: Evaluate the model’s performance on unpoisoned data.
  4. Test on Backdoor Data: Assess the model’s vulnerability to backdoor samples.
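
Conceptually, the poisoning step stamps a fixed trigger onto a fraction of samples and relabels them. The feature-space sketch below is illustrative only: the repository poisons executables directly, so the trigger placement, poison rate, and target label here are assumptions.

import numpy as np

def poison(X, y, trigger_idx, trigger_val, rate=0.05, target_label=0, rng=None):
    """Stamp a fixed trigger onto a random fraction of samples and relabel them."""
    if rng is None:
        rng = np.random.default_rng(0)
    X, y = X.copy(), y.copy()
    victims = rng.choice(len(X), size=int(rate * len(X)), replace=False)
    X[np.ix_(victims, trigger_idx)] = trigger_val   # overwrite the trigger features
    y[victims] = target_label                       # e.g. 0 = benign
    return X, y

# Toy usage with random stand-in features
X = np.random.rand(1000, 2381)
y = np.random.randint(0, 2, size=1000)
X_poisoned, y_poisoned = poison(X, y, trigger_idx=[10, 42, 77], trigger_val=1.0)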

Setup Instructions

  1. Update and rebuild the container:
./update_build.sh
# removes the existing container, builds a new one,
# then runs and enters the new container
  2. Run the unit tests:
python -m unittest discover -s scripts/unit_tests
  3. Run the pipeline described below.

Pipeline

  1. Create a config.yaml file
  2. Run the pipeline:
./run_pipeline.sh
  3. Benchmark against the EMBER dataset:
# Download and extract the EMBER dataset
./download_and_extract_ember.sh

# Benchmark the model against the EMBER dataset
python -m scripts.testing.benchmark_on_ember \
 --model data/outputs/lightgbm.txt \
 --type lightgbm
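
For reference, the saved booster can also be scored directly; a minimal sketch (the random matrix below stands in for vectorised EMBER samples, which the benchmark script extracts itself):

import numpy as np
import lightgbm as lgb

# Load the booster written by the pipeline (path taken from the command above).
booster = lgb.Booster(model_file="data/outputs/lightgbm.txt")

# Placeholder feature matrix with the booster's expected feature count.
X = np.random.rand(16, booster.num_feature()).astype(np.float32)
scores = booster.predict(X)   # score for the positive class, assuming 1 = malicious
print(scores)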

Testing Details

The test suite now generates two types of confusion matrices:

  1. A standard confusion matrix with three categories (benign, malicious, and backdoored malicious).
  2. A simplified “square” confusion matrix covering benign vs. malicious only.

For each variant the suite also reports updated metrics (Accuracy, Precision, Recall, F1 Score, ROC AUC), giving a more detailed view of how the model performs against backdoored samples.
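
A minimal sketch of how the two views relate, assuming the label encoding 0 = benign, 1 = malicious, 2 = backdoored malicious (the repository's own encoding may differ):

import numpy as np
import pandas as pd
from sklearn.metrics import confusion_matrix

y_true = np.array([0, 0, 1, 1, 2, 2, 2])
y_pred = np.array([0, 1, 1, 0, 0, 0, 1])   # the classifier only emits benign/malicious

# Three-category view: rows are true classes, columns are predictions.
print(pd.crosstab(y_true, y_pred, rownames=["true"], colnames=["pred"]))

# Simplified "square" view: fold backdoored malicious back into malicious.
to_binary = lambda labels: np.where(labels == 0, 0, 1)
print(confusion_matrix(to_binary(y_true), to_binary(y_pred), labels=[0, 1]))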

The test suite evaluates the trained model across the following data types:

  • Clean Data:
    • Unpoisoned benign samples
    • Unpoisoned malicious samples
  • Poisoned Data:
    • Poisoned benign samples
    • Poisoned malicious samples

Metrics:

The test suite provides the following evaluation metrics:

  • Accuracy
  • Precision
  • Recall
  • F1 Score
  • ROC AUC

Visualizations:

The following plots are generated during testing:

  • Confusion Matrix
  • ROC Curve
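
These metrics and the ROC curve can be reproduced with scikit-learn; the toy labels and scores below are placeholders rather than pipeline output:

import numpy as np
import matplotlib.pyplot as plt
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, roc_auc_score, RocCurveDisplay)

y_true = np.array([0, 0, 1, 1, 1, 0, 1, 0])
scores = np.array([0.1, 0.4, 0.8, 0.35, 0.9, 0.2, 0.7, 0.6])
y_pred = (scores >= 0.5).astype(int)    # threshold the classifier scores

print("Accuracy :", accuracy_score(y_true, y_pred))
print("Precision:", precision_score(y_true, y_pred))
print("Recall   :", recall_score(y_true, y_pred))
print("F1 Score :", f1_score(y_true, y_pred))
print("ROC AUC  :", roc_auc_score(y_true, scores))

RocCurveDisplay.from_predictions(y_true, scores)
plt.savefig("roc_curve.png")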

Data Structure

The data is organized into the following directories:

data/
├── raw/         # unprocessed executables
│   ├── clean/
│   └── malicious/
├── poisoned/    # poisoned executables
│   ├── <backdoor_name>/
│   │   ├── clean/
│   │   └── malicious/
│   └── <backdoor_name>/
└── ember/       # poisoned dataset in EMBER format
    ├── test.jsonl
    └── train.jsonl
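
For example, a script can walk the poisoned tree like this (a sketch that only assumes the layout above; file names and extensions are not assumed):

from pathlib import Path

poisoned_root = Path("data/poisoned")
for backdoor_dir in sorted(p for p in poisoned_root.iterdir() if p.is_dir()):
    for split in ("clean", "malicious"):
        samples = list((backdoor_dir / split).glob("*"))
        print(f"{backdoor_dir.name}/{split}: {len(samples)} samples")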

References

@ARTICLE{2018arXiv180404637A,
  author        = {{Anderson}, H.~S. and {Roth}, P.},
  title         = "{EMBER: An Open Dataset for Training Static PE Malware Machine Learning Models}",
  journal       = {ArXiv e-prints},
  archivePrefix = "arXiv",
  eprint        = {1804.04637},
  primaryClass  = "cs.CR",
  keywords      = {Computer Science - Cryptography and Security},
  year          = 2018,
  month         = apr,
  adsurl        = {http://adsabs.harvard.edu/abs/2018arXiv180404637A},
}
