diff --git a/CODE_OF_CONDUCT.md b/CODE_OF_CONDUCT.md
index d316231a6e42ee8643b066a9ebdca7996df761aa..9d68eed2ae8c493a162c2294cdb7e5f229df6283 100644
--- a/CODE_OF_CONDUCT.md
+++ b/CODE_OF_CONDUCT.md
@@ -68,7 +68,9 @@ members of the project's leadership.
 
 ## Attribution
 
-This Code of Conduct is adapted from the [Contributor Covenant][homepage], version 1.4, available at [https://www.contributor-covenant.org/version/1/4/code-of-conduct/][version]
+This Code of Conduct is adapted from the [Contributor Covenant][homepage],
+version 1.4, available at
+[https://www.contributor-covenant.org/version/1/4/code-of-conduct/][version]
 
 [homepage]: https://contributor-covenant.org
 [version]: https://www.contributor-covenant.org/version/1/4/code-of-conduct/
diff --git a/README.md b/README.md
index da5877743ea00743c8e3e50a01275dece0a2b1bd..b80eb63adaf9b0006b7a0824e1dd4bd02e2f0b65 100644
--- a/README.md
+++ b/README.md
@@ -63,10 +63,12 @@ settings for your local compute environment.
 iv. Start running your own analysis!
 
 ```bash
-nextflow run nf-core/hic -profile <docker/singularity/conda/institute> --reads '*_R{1,2}.fastq.gz' --genome GRCh37
+nextflow run nf-core/hic -profile <docker/singularity/conda/institute> \
+                         --input '*_R{1,2}.fastq.gz' --genome GRCh37
 ```
 
-See [usage docs](docs/usage.md) for all of the available options when running the pipeline.
+See [usage docs](docs/usage.md) for all of the available options when running
+the pipeline.
 
 ## Documentation
 
@@ -84,36 +86,8 @@ found in the `docs/` directory:
-=======
 [![Get help on Slack](http://img.shields.io/badge/slack-nf--core%20%23hic-4A154B?logo=slack)](https://nfcore.slack.com/channels/hic)
 
-## Introduction
-
-The pipeline is built using [Nextflow](https://www.nextflow.io), a workflow tool to run tasks across multiple compute infrastructures in a very portable manner. It comes with docker containers making installation trivial and results highly reproducible.
-
-## Quick Start
-
-1. Install [`nextflow`](https://nf-co.re/usage/installation)
-
-2. Install either [`Docker`](https://docs.docker.com/engine/installation/) or [`Singularity`](https://www.sylabs.io/guides/3.0/user-guide/) for full pipeline reproducibility _(please only use [`Conda`](https://conda.io/miniconda.html) as a last resort; see [docs](https://nf-co.re/usage/configuration#basic-configuration-profiles))_
-
-3. Download the pipeline and test it on a minimal dataset with a single command:
-
-    ```bash
-    nextflow run nf-core/hic -profile test,<docker/singularity/conda/institute>
-    ```
-
-    > Please check [nf-core/configs](https://github.com/nf-core/configs#documentation) to see if a custom config file to run nf-core pipelines already exists for your Institute. If so, you can simply use `-profile <institute>` in your command. This will enable either `docker` or `singularity` and set the appropriate execution settings for your local compute environment.
-
-4. Start running your own analysis!
-
-    ```bash
-    nextflow run nf-core/hic -profile <docker/singularity/conda/institute> --input '*_R{1,2}.fastq.gz' --genome GRCh37
-    ```
-
-See [usage docs](docs/usage.md) for all of the available options when running the pipeline.
-
-## Documentation
-
-The nf-core/hic pipeline comes with documentation about the pipeline which you can read at [https://nf-core/hic/docs](https://nf-core/hic/docs) or find in the [`docs/` directory](docs).
->>>>>>> 069f5edceb65e1acf5162edf2b475f72159c08a2
+The nf-core/hic pipeline comes with documentation that you can read at
+[https://nf-co.re/hic/docs](https://nf-co.re/hic/docs) or find in the
+[`docs/` directory](docs).
 
 For further information or help, don't hesitate to get in touch on
 [Slack](https://nfcore.slack.com/channels/hic).
@@ -127,7 +102,9 @@ nf-core/hic was originally written by Nicolas Servant.
 
 If you would like to contribute to this pipeline, please see the [contributing guidelines](.github/CONTRIBUTING.md).
 
-For further information or help, don't hesitate to get in touch on the [Slack `#hic` channel](https://nfcore.slack.com/channels/hic) (you can join with [this invite](https://nf-co.re/join/slack)).
+For further information or help, don't hesitate to get in touch on the
+[Slack `#hic` channel](https://nfcore.slack.com/channels/hic)
+(you can join with [this invite](https://nf-co.re/join/slack)).
 
 ## Citation
 
diff --git a/docs/README.md b/docs/README.md
index a6889549c7f27bda0aed81947685713781fe2d1b..bdbc92abc939ff716f3fcaba1b5069be471c9049 100644
--- a/docs/README.md
+++ b/docs/README.md
@@ -3,8 +3,11 @@
 The nf-core/hic documentation is split into the following pages:
 
 * [Usage](usage.md)
-  * An overview of how the pipeline works, how to run it and a description of all of the different command-line flags.
+  * An overview of how the pipeline works, how to run it and a
+  description of all of the different command-line flags.
 * [Output](output.md)
-  * An overview of the different results produced by the pipeline and how to interpret them.
+  * An overview of the different results produced by the pipeline
+  and how to interpret them.
 
-You can find a lot more documentation about installing, configuring and running nf-core pipelines on the website: [https://nf-co.re](https://nf-co.re)
+You can find a lot more documentation about installing, configuring
+and running nf-core pipelines on the website: [https://nf-co.re](https://nf-co.re)
diff --git a/docs/output.md b/docs/output.md
index c64809342dd09ae7468aed5a203980a39e273aae..95aca423a2f44853e03c8ed443f6eb8ac43b9019 100644
--- a/docs/output.md
+++ b/docs/output.md
@@ -1,8 +1,12 @@
 # nf-core/hic: Output
 
-This document describes the output produced by the pipeline. Most of the plots are taken from the MultiQC report, which summarises results at the end of the pipeline.
+This document describes the output produced by the pipeline.
+Most of the plots are taken from the MultiQC report, which
+summarises results at the end of the pipeline.
 
-The directories listed below will be created in the results directory after the pipeline has finished. All paths are relative to the top-level results directory.
+The directories listed below will be created in the results directory
+after the pipeline has finished. All paths are relative to the top-level
+results directory.
 
 ## Pipeline overview
 
@@ -195,8 +199,9 @@ web browser
 in the pipeline
 
 * `pipeline_info/`
-  * Reports generated by Nextflow: `execution_report.html`, `execution_timeline.html`, `execution_trace.txt` and `pipeline_dag.dot`/`pipeline_dag.svg`.
-  * Reports generated by the pipeline: `pipeline_report.html`, `pipeline_report.txt` and `software_versions.csv`.
-  * Documentation for interpretation of results in HTML format: `results_description.html`.
-
-
+  * Reports generated by Nextflow: `execution_report.html`, `execution_timeline.html`,
+  `execution_trace.txt` and `pipeline_dag.dot`/`pipeline_dag.svg`.
+  * Reports generated by the pipeline: `pipeline_report.html`,
+  `pipeline_report.txt` and `software_versions.csv`.
+  * Documentation for interpretation of results in HTML format:
+  `results_description.html`.
diff --git a/docs/usage.md b/docs/usage.md
index 4934aa9f997a6d587a6fd990e3ace7799a35ec7f..4a057e74ebd2dd6e3464421aeaa92fc56bc0f813 100644
--- a/docs/usage.md
+++ b/docs/usage.md
@@ -66,7 +66,8 @@ fails after three times then the pipeline is stopped.
 
 ## Core Nextflow arguments
 
-> **NB:** These options are part of Nextflow and use a _single_ hyphen (pipeline parameters use a double-hyphen).
+> **NB:** These options are part of Nextflow and use a _single_ hyphen
+> (pipeline parameters use a double-hyphen).
 
 ### `-profile`
 
@@ -81,16 +82,30 @@ the pipeline to use software packaged using different methods
 pipeline reproducibility, however when this is not possible, Conda is also supported.
 
 The pipeline also dynamically loads configurations from
-[https://github.com/nf-core/configs](https://github.com/nf-core/configs) when it runs,
-making multiple config profiles for various institutional clusters available at run time.
-For more information and to see if your system is available in these configs please see
+[https://github.com/nf-core/configs](https://github.com/nf-core/configs)
+when it runs, making multiple config profiles for various institutional
+clusters available at run time.
+For more information and to see if your system is available in these
+configs, please see
 the [nf-core/configs documentation](https://github.com/nf-core/configs#documentation).
 
-Note that multiple profiles can be loaded, for example: `-profile test,docker` - the order
-of arguments is important!
-They are loaded in sequence, so later profiles can overwrite earlier profiles.
+Note that multiple profiles can be loaded, for example: `-profile test,docker`.
+The order of arguments is important!
+They are loaded in sequence, so later profiles can overwrite
+earlier profiles.
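+
+As a sketch of this precedence (assuming the standard `test` and `docker`
+profiles shipped with the pipeline):
+
+```bash
+# `test` supplies the minimal test dataset and resources; `docker` enables the
+# container engine. If both profiles set the same option, the right-most one
+# wins, because profiles are applied in the order given on the command line.
+nextflow run nf-core/hic -profile test,docker
+```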
 
-If `-profile` is not specified, the pipeline will run locally and expect all software to be
+If `-profile` is not specified, the pipeline will run locally and
+expect all software to be
 installed and available on the `PATH`. This is _not_ recommended.
 
 * `docker`
@@ -100,7 +105,8 @@ installed and available on the `PATH`. This is _not_ recommended.
   * A generic configuration profile to be used with [Singularity](https://sylabs.io/docs/)
   * Pulls software from Docker Hub: [`nfcore/hic`](https://hub.docker.com/r/nfcore/hic/)
 * `conda`
-  * Please only use Conda as a last resort i.e. when it's not possible to run the pipeline with Docker or Singularity.
+  * Please only use Conda as a last resort, i.e. when it's not possible to run the
+  pipeline with Docker or Singularity.
   * A generic configuration profile to be used with [Conda](https://conda.io/docs/)
   * Pulls most software from [Bioconda](https://bioconda.github.io/)
 * `test`
@@ -109,18 +115,45 @@ installed and available on the `PATH`. This is _not_ recommended.
 
 ### `-resume`
 
-Specify this when restarting a pipeline. Nextflow will used cached results from any pipeline steps where the inputs are the same, continuing from where it got to previously.
-You can also supply a run name to resume a specific run: `-resume [run-name]`. Use the `nextflow log` command to show previous run names.
+Specify this when restarting a pipeline. Nextflow will use cached results from
+any pipeline steps where the inputs are the same, continuing from where it got
+to previously.
+You can also supply a run name to resume a specific run: `-resume [run-name]`.
+Use the `nextflow log` command to show previous run names.
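+
+For example (the run name `boring_euler` below is a hypothetical name of the
+kind that `nextflow log` reports):
+
+```bash
+nextflow log                                  # show previous run names
+nextflow run nf-core/hic -profile docker -resume                # resume last run
+nextflow run nf-core/hic -profile docker -resume boring_euler   # resume that run
+```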
 
 ### `-c`
 
-Specify the path to a specific config file (this is a core Nextflow command). See the [nf-core website documentation](https://nf-co.re/usage/configuration) for more information.
+Specify the path to a specific config file (this is a core Nextflow option).
+See the [nf-core website documentation](https://nf-co.re/usage/configuration)
+for more information.
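+
+A minimal sketch (the path `/path/to/custom.config` is a placeholder):
+
+```bash
+nextflow run nf-core/hic -profile docker -c /path/to/custom.config
+```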
 
 #### Custom resource requests
 
-Each step in the pipeline has a default set of requirements for number of CPUs, memory and time. For most of the steps in the pipeline, if the job exits with an error code of `143` (exceeded requested resources) it will automatically resubmit with higher requests (2 x original, then 3 x original). If it still fails after three times then the pipeline is stopped.
+Each step in the pipeline has a default set of requirements for number of CPUs,
+memory and time. For most of the steps in the pipeline, if the job exits with
+an error code of `143` (exceeded requested resources) it will automatically resubmit
+with higher requests (2 x original, then 3 x original). If it still fails after three
+times then the pipeline is stopped.
 
-Whilst these default requirements will hopefully work for most people with most data, you may find that you want to customise the compute resources that the pipeline requests. You can do this by creating a custom config file. For example, to give the workflow process `star` 32GB of memory, you could use the following config:
+Whilst these default requirements will hopefully work for most people with most data,
+you may find that you want to customise the compute resources that the pipeline requests.
+You can do this by creating a custom config file. For example, to give the workflow
+process `star` 32GB of memory, you could use the following config:
 
 ```nextflow
 process {
@@ -130,32 +148,64 @@ process {
 }
 ```
 
-See the main [Nextflow documentation](https://www.nextflow.io/docs/latest/config.html) for more information.
+See the main [Nextflow documentation](https://www.nextflow.io/docs/latest/config.html)
+for more information.
 
-If you are likely to be running `nf-core` pipelines regularly it may be a good idea to request that your custom config file is uploaded to the `nf-core/configs` git repository. Before you do this please can you test that the config file works with your pipeline of choice using the `-c` parameter (see definition below). You can then create a pull request to the `nf-core/configs` repository with the addition of your config file, associated documentation file (see examples in [`nf-core/configs/docs`](https://github.com/nf-core/configs/tree/master/docs)), and amending [`nfcore_custom.config`](https://github.com/nf-core/configs/blob/master/nfcore_custom.config) to include your custom profile.
+If you are likely to be running `nf-core` pipelines regularly it may be a
+good idea to request that your custom config file is uploaded to the
+`nf-core/configs` git repository. Before you do this, please test that the
+config file works with your pipeline of choice using the `-c` parameter
+(see definition above). You can then create a pull request to the
+`nf-core/configs` repository, adding your config file and an associated
+documentation file (see examples in [`nf-core/configs/docs`](https://github.com/nf-core/configs/tree/master/docs)),
+and amending [`nfcore_custom.config`](https://github.com/nf-core/configs/blob/master/nfcore_custom.config)
+to include your custom profile.
 
-If you have any questions or issues please send us a message on [Slack](https://nf-co.re/join/slack) on the [`#configs` channel](https://nfcore.slack.com/channels/configs).
+If you have any questions or issues please send us a message on
+[Slack](https://nf-co.re/join/slack) on the
+[`#configs` channel](https://nfcore.slack.com/channels/configs).
 
 ### Running in the background
 
-Nextflow handles job submissions and supervises the running jobs. The Nextflow process must run until the pipeline is finished.
+Nextflow handles job submissions and supervises the running jobs.
+The Nextflow process must run until the pipeline is finished.
 
-The Nextflow `-bg` flag launches Nextflow in the background, detached from your terminal so that the workflow does not stop if you log out of your session. The logs are saved to a file.
+The Nextflow `-bg` flag launches Nextflow in the background, detached from your terminal
+so that the workflow does not stop if you log out of your session. The logs are
+saved to a file.
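+
+For example, a minimal sketch (console output is captured in the usual
+`.nextflow.log` file):
+
+```bash
+nextflow run nf-core/hic -profile docker -bg
+```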
 
-Alternatively, you can use `screen` / `tmux` or similar tool to create a detached session which you can log back into at a later time.
-Some HPC setups also allow you to run nextflow within a cluster job submitted your job scheduler (from where it submits more jobs).
+Alternatively, you can use `screen` / `tmux` or a similar tool to create a detached
+session which you can log back into at a later time.
+Some HPC setups also allow you to run Nextflow within a cluster job submitted
+to your job scheduler (from where it submits more jobs).
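+
+For example, a sketch using `tmux` (the session name `hic` is arbitrary):
+
+```bash
+tmux new -s hic                            # start a named session
+nextflow run nf-core/hic -profile docker   # launch inside the session
+# detach with Ctrl-b d; log back in later and reattach with:
+tmux attach -t hic
+```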
 
 #### Nextflow memory requirements
 
-In some cases, the Nextflow Java virtual machines can start to request a large amount of memory.
-We recommend adding the following line to your environment to limit this (typically in `~/.bashrc` or `~./bash_profile`):
+In some cases, the Nextflow Java virtual machines can start to request a
+large amount of memory.
+We recommend adding the following line to your environment to limit this
+(typically in `~/.bashrc` or `~/.bash_profile`):
 
 ```bash
-NXF_OPTS='-Xms1g -Xmx4g'
+export NXF_OPTS='-Xms1g -Xmx4g'
 ```
 
-# Pipeline Options
-
 ## Inputs
 
 ### `--input`
@@ -177,7 +211,10 @@ If left unspecified, a default pattern is used: `data/*{1,2}.fastq.gz`
 
 ### `--single_end`
 
-By default, the pipeline expects paired-end data. If you have single-end data, you need to specify `--single_end` on the command line when you launch the pipeline. A normal glob pattern, enclosed in quotation marks, can then be used for `--reads`. For example:
+By default, the pipeline expects paired-end data. If you have single-end data,
+you need to specify `--single_end` on the command line when you launch the pipeline.
+A normal glob pattern, enclosed in quotation marks, can then be used for `--input`.
+For example:
 
 ```bash
---single_end --reads '*.fastq'
+--single_end --input '*.fastq'
@@ -187,7 +224,9 @@ It is not possible to run a mixture of single-end and paired-end files in one ru
 
 ## Reference genomes
 
-The pipeline config files come bundled with paths to the illumina iGenomes reference index files. If running with docker or AWS, the configuration is set up to use the [AWS-iGenomes](https://ewels.github.io/AWS-iGenomes/) resource.
+The pipeline config files come bundled with paths to the Illumina iGenomes reference
+index files. If running with Docker or AWS, the configuration is set up to use the
+[AWS-iGenomes](https://ewels.github.io/AWS-iGenomes/) resource.
 
 ### `--genome` (using iGenomes)