Merge pull request #279 from nf-core/dev
Dev > Master
ewels authored Mar 13, 2019
2 parents 62f9a12 + b0ef5ad commit ec9b527
Showing 11 changed files with 185 additions and 99 deletions.
7 changes: 5 additions & 2 deletions CHANGELOG.md
@@ -1,6 +1,6 @@
# nf-core/tools: Changelog

## v1.5dev
## [v1.5](https://github.com/nf-core/tools/releases/tag/1.5) - 2019-03-13 Iron Shark

#### Template pipeline
* Dropped Singularity file
@@ -10,7 +10,10 @@
* Brought the logo to life
* Change the default filenames for the pipeline trace files
* Remote fetch of nf-core/configs profiles fails gracefully if offline
* Remove `process.container` and just directly define `process.container` now.
* Remove `params.container` and just directly define `process.container` now
* Completion email now includes MultiQC report if not too big
* `params.genome` is now checked if set, to ensure that it's a valid iGenomes key
* Together with nf-core/configs, helper function now checks hostname and suggests a valid config profile
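
The last bullet describes the new hostname check. A rough shell sketch of the idea (the hostname patterns and profile names below are illustrative only; the real helper lives in the pipeline template and reads its profiles from nf-core/configs):

```shell
#!/bin/sh
# Sketch: map the machine's hostname to a suggested nf-core config profile.
suggest_profile() {
    case "$1" in
        *binac*) echo "binac" ;;
        *cfc*)   echo "cfc" ;;
        *)       echo "none" ;;
    esac
}
suggest_profile "login01.binac.example.org"   # -> binac
```

On a matching hostname the helper can then hint at running with `-profile <name>` instead of the bare defaults.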

#### Tools helper code
* New `nf-core launch` command to interactively launch nf-core pipelines from command-line

---

@@ -7,8 +7,6 @@

[![install with bioconda](https://img.shields.io/badge/install%20with-bioconda-brightgreen.svg)](http://bioconda.github.io/)
[![Docker](https://img.shields.io/docker/automated/{{ cookiecutter.name_docker }}.svg)](https://hub.docker.com/r/{{ cookiecutter.name_docker }})
![Singularity Container available](
https://img.shields.io/badge/singularity-available-7E4C74.svg)

## Introduction
The pipeline is built using [Nextflow](https://www.nextflow.io), a workflow tool to run tasks across multiple compute infrastructures in a very portable manner. It comes with docker / singularity containers making installation trivial and results highly reproducible.

---

@@ -1,11 +1,36 @@
To: $email
Subject: $subject
Mime-Version: 1.0
Content-Type: multipart/related;boundary="nfmimeboundary"
Content-Type: multipart/related;boundary="nfcoremimeboundary"

--nfmimeboundary
--nfcoremimeboundary
Content-Type: text/html; charset=utf-8

$email_html

--nfmimeboundary--
<%
if (mqcFile) {
    def mqcFileObj = new File("$mqcFile")
    if (mqcFileObj.length() < mqcMaxSize) {
        out << """
--nfcoremimeboundary
Content-Type: text/html; name=\"multiqc_report\"
Content-Transfer-Encoding: base64
Content-ID: <mqcreport>
Content-Disposition: attachment; filename=\"${mqcFileObj.getName()}\"

${mqcFileObj.
    bytes.
    encodeBase64().
    toString().
    tokenize( '\n' )*.
    toList()*.
    collate( 76 )*.
    collect { it.join() }.
    flatten().
    join( '\n' )}
"""
    }
}
%>

--nfcoremimeboundary--
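
The Groovy template above attaches the MultiQC report only when it is under `mqcMaxSize`, base64-encoding the file's bytes and wrapping the output at 76 characters per line (the MIME line-length limit). The same steps can be sketched in shell (`report.html` and the size cap are stand-ins, not values from the template):

```shell
#!/bin/sh
# Stand-in for the MultiQC report file.
printf 'hello world' > report.html

# Attach only if the file is below the size cap (cap value is illustrative).
max_size=25000000
size=$(wc -c < report.html)
if [ "$size" -lt "$max_size" ]; then
    # Base64-encode and wrap at 76 characters per line, as MIME requires.
    base64 < report.html | tr -d '\n' | fold -w 76
fi
```

Oversized reports are simply skipped, so the completion email always sends even when the attachment would be too big.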

---

@@ -1,4 +1,9 @@
//Profile config names for awsbatch profile
/*
* -------------------------------------------------
* Nextflow config file for running on AWS batch
* -------------------------------------------------
* Base config needed for running with -profile awsbatch
*/
params {
config_profile_name = 'AWSBATCH'
config_profile_description = 'AWSBATCH Cloud Profile'

---

@@ -16,7 +16,7 @@ process {
memory = { check_max( 8.GB * task.attempt, 'memory' ) }
time = { check_max( 2.h * task.attempt, 'time' ) }

errorStrategy = { task.exitStatus in [143,137] ? 'retry' : 'finish' }
errorStrategy = { task.exitStatus in [143,137,104,134,139] ? 'retry' : 'finish' }
maxRetries = 1
maxErrors = '-1'
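
The expanded retry list corresponds to exit statuses that usually signal transient or resource-related failures: by the POSIX 128+N convention, 143 is SIGTERM (e.g. a scheduler killing the task), 137 is SIGKILL (commonly the out-of-memory killer), 134 is SIGABRT and 139 is SIGSEGV, while 104 is commonly seen when a network connection is reset. A quick shell demonstration of the 128+N convention:

```shell
#!/bin/sh
# A process terminated by signal N exits with status 128+N.
sh -c 'kill -TERM $$'             # child terminates itself with SIGTERM (15)
echo "SIGTERM exit status: $?"    # 128+15 = 143
sh -c 'kill -KILL $$'             # SIGKILL (9), as sent by the OOM killer
echo "SIGKILL exit status: $?"    # 128+9 = 137
```

Retrying these statuses (with `maxRetries = 1` and escalating resources via `task.attempt`) recovers most scheduler kills without masking genuine pipeline bugs.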


---

@@ -2,18 +2,22 @@

To start using the {{ cookiecutter.name }} pipeline, follow the steps below:

1. [Install Nextflow](#1-install-nextflow)
2. [Install the pipeline](#2-install-the-pipeline)
* [Automatic](#21-automatic)
* [Offline](#22-offline)
* [Development](#23-development)
3. [Pipeline configuration](#3-pipeline-configuration)
* [Software deps: Docker and Singularity](#31-software-deps-docker-and-singularity)
* [Software deps: Bioconda](#32-software-deps-bioconda)
* [Configuration profiles](#33-configuration-profiles)
4. [Reference genomes](#4-reference-genomes)

## 1) Install NextFlow
<!-- Install Atom plugin markdown-toc-auto for this ToC -->
<!-- TOC START min:2 max:3 link:true asterisk:true -->
* [Install Nextflow](#install-nextflow)
* [Install the pipeline](#install-the-pipeline)
* [Automatic](#automatic)
* [Offline](#offline)
* [Development](#development)
* [Pipeline configuration](#pipeline-configuration)
* [Docker](#docker)
* [Singularity](#singularity)
* [Conda](#conda)
* [Configuration profiles](#configuration-profiles)
* [Reference genomes](#reference-genomes)
<!-- TOC END -->

## Install Nextflow
Nextflow runs on most POSIX systems (Linux, macOS, etc.). It can be installed by running the following commands:

@@ -31,12 +35,12 @@ mv nextflow ~/bin/

See [nextflow.io](https://www.nextflow.io/) for further instructions on how to install and configure Nextflow.
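
The collapsed block above holds the standard Nextflow self-install (essentially `curl -s https://get.nextflow.io | bash`, then moving the generated launcher onto your `PATH`). A simulated sketch of the `PATH` step, with a dummy launcher standing in for the real binary so no network is needed:

```shell
#!/bin/sh
# Simulate the install: a dummy 'nextflow' launcher stands in for the real
# script produced by the installer.
bindir=$(mktemp -d)
printf '#!/bin/sh\necho nextflow-sim\n' > "$bindir/nextflow"
chmod +x "$bindir/nextflow"

# Equivalent of 'mv nextflow ~/bin/': put the launcher on your PATH.
PATH="$bindir:$PATH"
export PATH
nextflow   # the launcher now resolves from any directory
```

On a real system you would run the installer itself and move the resulting `nextflow` script into a directory that is already on your `PATH`.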

## 2) Install the pipeline
## Install the pipeline

#### 2.1) Automatic
### Automatic
This pipeline itself needs no installation: Nextflow will automatically fetch it from GitHub if `{{ cookiecutter.name }}` is specified as the pipeline name.

#### 2.2) Offline
### Offline
The above method requires an internet connection so that Nextflow can download the pipeline files. If you're running on a system that has no internet connection, you'll need to download and transfer the pipeline files manually:

@@ -53,12 +57,12 @@

To stop Nextflow from looking for updates online, you can tell it to run in offline mode by specifying the following environment variable:

```bash
export NXF_OFFLINE='TRUE'
```

#### 2.3) Development
### Development

If you would like to make changes to the pipeline, it's best to make a fork on GitHub and then clone the files. Once cloned you can run the pipeline directly as above.


## 3) Pipeline configuration
## Pipeline configuration
By default, the pipeline loads a basic server configuration from [`conf/base.config`](../conf/base.config). This uses sensible defaults for process resource requirements and is suitable for running on a simple (if powerful!) local server.
@@ -71,12 +75,12 @@ Be warned of two important points about this default configuration:
2. Nextflow will expect all software to be installed and available on the `PATH`
  * You're expected to use an additional config profile for Docker, Singularity or Conda support (see below).

#### 3.1) Software deps: Docker
### Docker
First, install docker on your system: [Docker Installation Instructions](https://docs.docker.com/engine/installation/)

Then, running the pipeline with the option `-profile docker` tells Nextflow to enable Docker for this run. An image containing all of the software requirements will be automatically fetched and used from [Docker Hub](https://hub.docker.com/r/{{ cookiecutter.name_docker }}).

#### 3.1) Software deps: Singularity
### Singularity
If you're not able to use Docker then [Singularity](http://singularity.lbl.gov/) is a great alternative.
The process is very similar: running the pipeline with the option `-profile singularity` tells Nextflow to enable Singularity for this run. An image containing all of the software requirements will be automatically fetched and used from Singularity Hub.

@@ -94,17 +98,16 @@ nextflow run /path/to/{{ cookiecutter.name_noslash }} -with-singularity {{ cooki

Remember to pull updated versions of the singularity image if you update the pipeline.


#### 3.2) Software deps: conda
### Conda
If you're not able to use Docker _or_ Singularity, you can instead use conda to manage the software requirements.
This is slower and less reproducible than the above, but is still better than having to install all requirements yourself!
The pipeline ships with a conda environment file and Nextflow has built-in support for this.
To use it, first ensure that you have conda installed (we recommend [Miniconda](https://conda.io/miniconda.html)), then follow the same pattern as above with the flag `-profile conda`.

#### 3.3) Configuration profiles
### Configuration profiles

See [`docs/configuration/adding_your_own.md`](configuration/adding_your_own.md)

## 4) Reference genomes
## Reference genomes

See [`docs/configuration/reference_genomes.md`](configuration/reference_genomes.md)

---

@@ -2,46 +2,45 @@

## Table of contents

* [Introduction](#general-nextflow-info)
<!-- Install Atom plugin markdown-toc-auto for this ToC to auto-update on save -->
<!-- TOC START min:2 max:3 link:true asterisk:true update:true -->
* [Table of contents](#table-of-contents)
* [Introduction](#introduction)
* [Running the pipeline](#running-the-pipeline)
* [Updating the pipeline](#updating-the-pipeline)
* [Reproducibility](#reproducibility)
* [Updating the pipeline](#updating-the-pipeline)
* [Reproducibility](#reproducibility)
* [Main arguments](#main-arguments)
* [`-profile`](#-profile-single-dash)
* [`awsbatch`](#awsbatch)
* [`conda`](#conda)
* [`docker`](#docker)
* [`singularity`](#singularity)
* [`test`](#test)
* [`-profile`](#-profile)
* [`--reads`](#--reads)
* [`--singleEnd`](#--singleend)
* [Reference genomes](#reference-genomes)
* [`--genome`](#--genome)
* [`--genome` (using iGenomes)](#--genome-using-igenomes)
* [`--fasta`](#--fasta)
* [`--igenomesIgnore`](#--igenomesignore)
* [Job resources](#job-resources)
* [Automatic resubmission](#automatic-resubmission)
* [Custom resource requests](#custom-resource-requests)
* [AWS batch specific parameters](#aws-batch-specific-parameters)
* [`-awsbatch`](#-awsbatch)
* [Automatic resubmission](#automatic-resubmission)
* [Custom resource requests](#custom-resource-requests)
* [AWS Batch specific parameters](#aws-batch-specific-parameters)
* [`--awsqueue`](#--awsqueue)
* [`--awsregion`](#--awsregion)
* [Other command line parameters](#other-command-line-parameters)
* [`--outdir`](#--outdir)
* [`--email`](#--email)
* [`-name`](#-name-single-dash)
* [`-resume`](#-resume-single-dash)
* [`-c`](#-c-single-dash)
* [`-name`](#-name)
* [`-resume`](#-resume)
* [`-c`](#-c)
* [`--custom_config_version`](#--custom_config_version)
* [`--custom_config_base`](#--custom_config_base)
* [`--max_memory`](#--max_memory)
* [`--max_time`](#--max_time)
* [`--max_cpus`](#--max_cpus)
* [`--plaintext_email`](#--plaintext_email)
* [`--monochrome_logs`](#--monochrome_logs)
* [`--multiqc_config`](#--multiqc_config)
<!-- TOC END -->


## General Nextflow info
## Introduction
Nextflow handles job submissions on SLURM or other environments, and supervises the running jobs. Thus the Nextflow process must run until the pipeline is finished. We recommend running this process in the background through `screen` / `tmux` or a similar tool. Alternatively you can run Nextflow within a cluster job submitted to your job scheduler.
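
For example, a run can be detached from the terminal with `nohup` when `screen` / `tmux` is unavailable (here `sleep 1; echo done` is a stand-in for the actual long-running `nextflow run …` command):

```shell
#!/bin/sh
# Background a stand-in for the long-running Nextflow driver process.
nohup sh -c 'sleep 1; echo done' > run.log 2>&1 &
wait $!          # in practice you would log out instead of waiting
cat run.log      # the driver's output survives the terminal session
```

The driver keeps running after the terminal closes, and its log can be inspected later.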

It is recommended to limit the Nextflow Java virtual machine's memory. We recommend adding the following line to your environment (typically in `~/.bashrc` or `~/.bash_profile`):
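
A typical line for this uses the `NXF_OPTS` environment variable (the heap sizes here are illustrative; the exact values in the template's collapsed hunk may differ):

```shell
#!/bin/sh
# Cap the Nextflow driver JVM: 1 GB initial, 4 GB max heap (example values).
export NXF_OPTS='-Xms1g -Xmx4g'
echo "$NXF_OPTS"
```

`-Xms` sets the initial heap and `-Xmx` the maximum; keeping the maximum modest stops the driver JVM from competing with pipeline tasks for memory.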
@@ -210,7 +209,7 @@ Please make sure to also set the `-w/--work-dir` and `--outdir` parameters to a
The output directory where the results will be saved.

### `--email`
Set this parameter to your e-mail address to get a summary e-mail with details of the run sent to you when the workflow exits. If set in your user config file (`~/.nextflow/config`) then you don't need to speicfy this on the command line for every run.
Set this parameter to your e-mail address to get a summary e-mail with details of the run sent to you when the workflow exits. If set in your user config file (`~/.nextflow/config`) then you don't need to specify this on the command line for every run.
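
For instance, a minimal user-level config could pre-set the address (a hypothetical snippet, written to a temp file here rather than the real `~/.nextflow/config`):

```shell
#!/bin/sh
# Hypothetical: persist a default email so '--email' isn't needed per run.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
params.email = "you@example.com"
EOF
cat "$cfg"
```

With such a line in `~/.nextflow/config`, every run would send its summary email without the flag on the command line.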

### `-name`
Name for the pipeline run. If not specified, Nextflow will automatically generate a random mnemonic.

---

@@ -1,3 +1,5 @@
# You can use this file to create a conda environment for this pipeline:
# conda env create -f environment.yml
name: {{ cookiecutter.name_noslash }}-{{ cookiecutter.version }}
channels:
- conda-forge