
This tutorial assumes you have an Amazon Web Services account registered with the NMDP, which grants you access to a public machine image containing all the data, tools, and compute infrastructure you need to proceed. If you do not have these things, go here first.

Get the code

git clone git@puppet.nmdp-br.aws:/parallel_genomic.git

This will create a local clone (working copy) of the Git repository, which contains several shell scripts for parallel execution of the pipeline components.

View the sample data

Public sample data from the Sequence Read Archive (SRA) are provided here:

/mnt/common/data/incoming/nmdp/Proposed_Hackathon_Dataset/DRP000941/

Each of the 73 files contains phased NGS data for 6-locus HLA typing, as published by Hosomichi et al., 2013. The files must be decompressed from SRA format to FASTQ before processing; the SRA Toolkit provides utilities for this purpose. The decompressed data are also provided in the fastq/ directory.
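For example, a minimal conversion loop might look like the following. This is a sketch only: it assumes the SRA Toolkit's fastq-dump is on the PATH, that the .sra files sit directly in the dataset directory, and that you write the FASTQ output to a working directory of your choice.

# Convert each SRA run to FASTQ, splitting paired-end reads into _1/_2 files.
# Paths are illustrative; adjust the glob if the .sra files sit in per-run subdirectories.
mkdir -p ~/fastq
for sra in /mnt/common/data/incoming/nmdp/Proposed_Hackathon_Dataset/DRP000941/*.sra; do
    fastq-dump --split-files --outdir ~/fastq "$sra"
done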

Run the pipeline

Interpret and validate the results

Create an HML message
