diff --git a/CHANGELOG.md b/CHANGELOG.md index 4a6a92d015..dce9f99dd8 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -14,6 +14,7 @@ * Added `local.md` to cookiecutter template in `docs/configuration/`. This was referenced in `README.md` but not present. * Major overhaul of docs to add/remove parameters, unify linking of files and added description for providing custom configs where necessary * Travis: Pull the `dev` tagged docker image for testing +* Removed UPPMAX-specific documentation from the template. #### Tools helper code * Make Travis CI tests fail on pull requests if the `CHANGELOG.md` file hasn't been updated diff --git a/nf_core/pipeline-template/{{cookiecutter.name_noslash}}/docs/installation.md b/nf_core/pipeline-template/{{cookiecutter.name_noslash}}/docs/installation.md index cfa6651132..e4aea88c25 100644 --- a/nf_core/pipeline-template/{{cookiecutter.name_noslash}}/docs/installation.md +++ b/nf_core/pipeline-template/{{cookiecutter.name_noslash}}/docs/installation.md @@ -12,8 +12,6 @@ To start using the {{ cookiecutter.name }} pipeline, follow the steps below: * [Software deps: Bioconda](#32-software-deps-bioconda) * [Configuration profiles](#33-configuration-profiles) 4. [Reference genomes](#4-reference-genomes) -5. [Appendices](#5-appendices) - * [Running on UPPMAX](#running-on-uppmax) ## 1) Install NextFlow Nextflow runs on most POSIX systems (Linux, Mac OSX etc). It can be installed by running the following commands: @@ -61,7 +59,9 @@ If you would like to make changes to the pipeline, it's best to make a fork on G ## 3) Pipeline configuration -By default, the pipeline runs with the `standard` configuration profile. This uses a number of sensible defaults for process requirements and is suitable for running on a simple (if powerful!) basic server. You can see this configuration in [`conf/base.config`](../conf/base.config). 
+By default, the pipeline loads a basic server configuration [`conf/base.config`](../conf/base.config). +This uses a number of sensible defaults for process requirements and is suitable for running +on a simple (if powerful!) local server. Be warned of two important points about this default configuration: @@ -69,15 +69,16 @@ Be warned of two important points about this default configuration: * All jobs are run in the login session. If you're using a simple server, this may be fine. If you're using a compute cluster, this is bad as all jobs will run on the head node. * See the [nextflow docs](https://www.nextflow.io/docs/latest/executor.html) for information about running with other hardware backends. Most job scheduler systems are natively supported. 2. Nextflow will expect all software to be installed and available on the `PATH` + * You are expected to use an additional config profile for Docker, Singularity or conda support; see below. #### 3.1) Software deps: Docker First, install docker on your system: [Docker Installation Instructions](https://docs.docker.com/engine/installation/) -Then, running the pipeline with the option `-profile standard,docker` tells Nextflow to enable Docker for this run. An image containing all of the software requirements will be automatically fetched and used from dockerhub (https://hub.docker.com/r/{{ cookiecutter.name_docker }}). +Then, running the pipeline with the option `-profile docker` tells Nextflow to enable Docker for this run. An image containing all of the software requirements will be automatically fetched and used from dockerhub (https://hub.docker.com/r/{{ cookiecutter.name_docker }}). #### 3.1) Software deps: Singularity If you're not able to use Docker then [Singularity](http://singularity.lbl.gov/) is a great alternative. -The process is very similar: running the pipeline with the option `-profile standard,singularity` tells Nextflow to enable singularity for this run. 
An image containing all of the software requirements will be automatically fetched and used from singularity hub. +The process is very similar: running the pipeline with the option `-profile singularity` tells Nextflow to enable Singularity for this run. An image containing all of the software requirements will be automatically fetched and used from Singularity Hub. If running offline with Singularity, you'll need to download and transfer the Singularity image first: @@ -98,7 +99,7 @@ Remember to pull updated versions of the singularity image if you update the pipeline. If you're not able to use Docker _or_ Singularity, you can instead use conda to manage the software requirements. This is slower and less reproducible than the above, but is still better than having to install all requirements yourself! The pipeline ships with a conda environment file and nextflow has built-in support for this. -To use it first ensure that you have conda installed (we recommend [miniconda](https://conda.io/miniconda.html)), then follow the same pattern as above and use the flag `-profile standard,conda` +To use it, first ensure that you have conda installed (we recommend [miniconda](https://conda.io/miniconda.html)), then follow the same pattern as above and use the flag `-profile conda`. #### 3.3) Configuration profiles @@ -107,16 +108,3 @@ See [`docs/configuration/adding_your_own.md`](configuration/adding_your_own.md) ## 4) Reference genomes See [`docs/configuration/reference_genomes.md`](configuration/reference_genomes.md) - -## 5) Appendices - -#### Running on UPPMAX -To run the pipeline on the [Swedish UPPMAX](https://www.uppmax.uu.se/) clusters (`rackham`, `irma`, `bianca` etc), use the command line flag `-profile uppmax`. This tells Nextflow to submit jobs using the SLURM job executor with Singularity for software dependencies. - -Note that you will need to specify your UPPMAX project ID when running a pipeline. To do this, use the command line flag `--project `. 
The pipeline will exit with an error message if you try to run it pipeline with the default UPPMAX config profile without a project. - -**Optional Extra:** To avoid having to specify your project every time you run Nextflow, you can add it to your personal Nextflow config file instead. Add this line to `~/.nextflow/config`: - -```nextflow -params.project = 'project_ID' // eg. b2017123 -``` diff --git a/nf_core/pipeline-template/{{cookiecutter.name_noslash}}/docs/usage.md b/nf_core/pipeline-template/{{cookiecutter.name_noslash}}/docs/usage.md index 72d2768902..f387d5c36f 100644 --- a/nf_core/pipeline-template/{{cookiecutter.name_noslash}}/docs/usage.md +++ b/nf_core/pipeline-template/{{cookiecutter.name_noslash}}/docs/usage.md @@ -52,7 +52,7 @@ NXF_OPTS='-Xms1g -Xmx4g' ## Running the pipeline The typical command for running the pipeline is as follows: ```bash -nextflow run {{ cookiecutter.name }} --reads '*_R{1,2}.fastq.gz' -profile standard,docker +nextflow run {{ cookiecutter.name }} --reads '*_R{1,2}.fastq.gz' -profile docker ``` This will launch the pipeline with the `docker` configuration profile. See below for more information about profiles. @@ -84,7 +84,7 @@ This version number will be logged in reports when you run the pipeline, so that ## Main arguments ### `-profile` -Use this parameter to choose a configuration profile. Profiles can give configuration presets for different compute environments. Note that multiple profiles can be loaded, for example: `-profile standard,docker` - the order of arguments is important! +Use this parameter to choose a configuration profile. Profiles can give configuration presets for different compute environments. Note that multiple profiles can be loaded, for example: `-profile test,docker` - the order of arguments is important! If `-profile` is not specified at all the pipeline will be run locally and expects all software to be installed and available on the `PATH`. 
diff --git a/nf_core/pipeline-template/{{cookiecutter.name_noslash}}/main.nf b/nf_core/pipeline-template/{{cookiecutter.name_noslash}}/main.nf index 578cf825bf..18caf75ad2 100644 --- a/nf_core/pipeline-template/{{cookiecutter.name_noslash}}/main.nf +++ b/nf_core/pipeline-template/{{cookiecutter.name_noslash}}/main.nf @@ -27,13 +27,13 @@ def helpMessage() { The typical command for running the pipeline is as follows: - nextflow run {{ cookiecutter.name }} --reads '*_R{1,2}.fastq.gz' -profile standard,docker + nextflow run {{ cookiecutter.name }} --reads '*_R{1,2}.fastq.gz' -profile docker Mandatory arguments: --reads Path to input data (must be surrounded with quotes) --genome Name of iGenomes reference -profile Configuration profile to use. Can use multiple (comma separated) - Available: standard, conda, docker, singularity, awsbatch, test + Available: conda, docker, singularity, awsbatch, test and more. Options: --singleEnd Specifies that the input is single end reads