# nf-core/hic: Usage
## Table of contents
- Introduction
- Running the pipeline
- Updating the pipeline
- Reproducibility
- Main arguments
- Reference genomes
- Hi-C specific options
- Skip options
- Job resources
- Automatic resubmission
- Custom resource requests
- AWS batch specific parameters
- Other command line parameters
## General Nextflow info
Nextflow handles job submissions on SLURM or other environments and supervises the running jobs. The Nextflow process must therefore keep running until the pipeline is finished. We recommend that you run this process in the background through `screen` / `tmux` or a similar tool, as in the sketch below. Alternatively, you can run Nextflow within a cluster job submitted to your job scheduler.
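For example, a minimal sketch of keeping the run alive inside a `screen` session (the session name below is illustrative):

```bash
# Start a named screen session (the name "hic_run" is just an example)
screen -S hic_run

# Inside the session, launch the pipeline as usual
nextflow run nf-core/hic --reads '*_R{1,2}.fastq.gz' --genome GRCh37 -profile docker

# Detach with Ctrl-a d; reattach later with
screen -r hic_run
```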
It is recommended to limit the Nextflow Java virtual machine's memory. We recommend adding the following line to your environment (typically in `~/.bashrc` or `~/.bash_profile`):

```bash
NXF_OPTS='-Xms1g -Xmx4g'
```
## Running the pipeline
The typical command for running the pipeline is as follows:
```bash
nextflow run nf-core/hic --reads '*_R{1,2}.fastq.gz' --genome GRCh37 -profile docker
```
This will launch the pipeline with the `docker` configuration profile. See below for more information about profiles.
Note that the pipeline will create the following files in your working directory:
```bash
work            # Directory containing the Nextflow working files
results         # Finished results (configurable, see below)
.nextflow.log   # Log file from Nextflow
# Other Nextflow hidden files, eg. history of pipeline runs and old logs.
```
## Updating the pipeline
When you run the above command, Nextflow automatically pulls the pipeline code from GitHub and stores it as a cached version. When running the pipeline after this, it will always use the cached version if available - even if the pipeline has been updated since. To make sure that you're running the latest version of the pipeline, make sure that you regularly update the cached version of the pipeline:
```bash
nextflow pull nf-core/hic
```
## Reproducibility
It's a good idea to specify a pipeline version when running the pipeline on your data. This ensures that a specific version of the pipeline code and software are used when you run your pipeline. If you keep using the same tag, you'll be running the same version of the pipeline, even if there have been changes to the code since.
First, go to the nf-core/hic releases page and find the latest version number - numeric only (eg. `1.3.1`). Then specify this when running the pipeline with `-r` (one hyphen) - eg. `-r 1.3.1`.
This version number will be logged in reports when you run the pipeline, so that you'll know what you used when you look back in the future.
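For example, a pinned run might look like this (the version number, input pattern and genome are illustrative):

```bash
nextflow run nf-core/hic -r 1.3.1 --reads '*_R{1,2}.fastq.gz' --genome GRCh37 -profile docker
```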
## Main arguments
### `-profile`
Use this parameter to choose a configuration profile. Profiles can give configuration presets for different compute environments. Note that multiple profiles can be loaded, for example `-profile test,docker` - the order of arguments is important! See the example after the profile list below.

If `-profile` is not specified at all, the pipeline will be run locally and expects all software to be installed and available on the `PATH`.
- `awsbatch`
  - A generic configuration profile to be used with AWS Batch
- `conda`
  - A generic configuration profile to be used with conda
- `docker`
  - A generic configuration profile to be used with Docker
  - Pulls software from DockerHub: `nfcore/hic`
- `singularity`
  - A generic configuration profile to be used with Singularity
  - Pulls software from DockerHub: `nfcore/hic`
- `test`
  - A profile with a complete configuration for automated testing
  - Includes links to test data so needs no other parameters
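For example, since the `test` profile needs no other parameters, a quick sanity check of a local install could combine it with a software profile (a sketch, assuming Docker is available):

```bash
# Run the bundled test data using the Docker software profile
nextflow run nf-core/hic -profile test,docker
```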
### `--reads`
Use this to specify the location of your input FastQ files. For example:
```bash
--reads 'path/to/data/sample_*_{1,2}.fastq'
```
Please note the following requirements:
- The path must be enclosed in quotes
- The path must have at least one `*` wildcard character
- When using the pipeline with paired end data, the path must use `{1,2}` notation to specify read pairs.
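For example, with files laid out as in the comments below (file names are illustrative), the glob groups each sample's two reads into a pair:

```bash
# path/to/data/sample_1_1.fastq  path/to/data/sample_1_2.fastq
# path/to/data/sample_2_1.fastq  path/to/data/sample_2_2.fastq
nextflow run nf-core/hic --reads 'path/to/data/sample_*_{1,2}.fastq' --genome GRCh37 -profile docker
```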