diff --git a/README.md b/README.md
index df0a41991f3629fae502e434a2bdfbc46e115e57..70462e0c61d54b65c1b630203ebad2c5d0e9c2ad 100644
--- a/README.md
+++ b/README.md
@@ -1,24 +1,13 @@
-# ![nf-core/hic](docs/images/nf-core-hic_logo_light.png#gh-light-mode-only) ![nf-core/hic](docs/images/nf-core-hic_logo_dark.png#gh-dark-mode-only)
-
-[![AWS CI](https://img.shields.io/badge/CI%20tests-full%20size-FF9900?labelColor=000000&logo=Amazon%20AWS)](https://nf-co.re/hic/results)[![Cite with Zenodo](http://img.shields.io/badge/DOI-10.5281/zenodo.2669512-1073c8?labelColor=000000)](https://doi.org/10.5281/zenodo.2669512)
-
-[![Nextflow](https://img.shields.io/badge/nextflow%20DSL2-%E2%89%A522.10.1-23aa62.svg)](https://www.nextflow.io/)
-[![run with conda](http://img.shields.io/badge/run%20with-conda-3EB049?labelColor=000000&logo=anaconda)](https://docs.conda.io/en/latest/)
-[![run with docker](https://img.shields.io/badge/run%20with-docker-0db7ed?labelColor=000000&logo=docker)](https://www.docker.com/)
-[![run with singularity](https://img.shields.io/badge/run%20with-singularity-1d355c.svg?labelColor=000000)](https://sylabs.io/docs/)
-[![Launch on Nextflow Tower](https://img.shields.io/badge/Launch%20%F0%9F%9A%80-Nextflow%20Tower-%234256e7)](https://tower.nf/launch?pipeline=https://github.com/nf-core/hic)
-
-[![Get help on Slack](http://img.shields.io/badge/slack-nf--core%20%23hic-4A154B?labelColor=000000&logo=slack)](https://nfcore.slack.com/channels/hic)[![Follow on Twitter](http://img.shields.io/badge/twitter-%40nf__core-1DA1F2?labelColor=000000&logo=twitter)](https://twitter.com/nf_core)[![Follow on Mastodon](https://img.shields.io/badge/mastodon-nf__core-6364ff?labelColor=FFFFFF&logo=mastodon)](https://mstdn.science/@nf_core)[![Watch on YouTube](http://img.shields.io/badge/youtube-nf--core-FF0000?labelColor=000000&logo=youtube)](https://www.youtube.com/c/nf-core)
-
 ## Introduction
 
-**nf-core/hic** is a bioinformatics best-practice analysis pipeline for Analysis of Chromosome Conformation Capture data (Hi-C).
+The **meta Hi-C pipeline** brings together several Hi-C pipelines for the analysis of Chromosome Conformation Capture (Hi-C) data.
 
-The pipeline is built using [Nextflow](https://www.nextflow.io), a workflow tool to run tasks across multiple compute infrastructures in a very portable manner. It uses Docker/Singularity containers making installation trivial and results highly reproducible. The [Nextflow DSL2](https://www.nextflow.io/docs/latest/dsl2.html) implementation of this pipeline uses one container per process which makes it much easier to maintain and update software dependencies. Where possible, these processes have been submitted to and installed from [nf-core/modules](https://github.com/nf-core/modules) in order to make them available to all nf-core pipelines, and to everyone within the Nextflow community!
+The pipeline is built using [Nextflow](https://www.nextflow.io), a workflow tool to run tasks across multiple compute infrastructures in a very portable manner. It uses Docker/Singularity containers making installation trivial and results highly reproducible. The [Nextflow DSL2](https://www.nextflow.io/docs/latest/dsl2.html) implementation of this pipeline uses one container per process which makes it much easier to maintain and update software dependencies.
 
-On release, automated continuous integration tests run the pipeline on a full-sized dataset on the AWS cloud infrastructure. This ensures that the pipeline runs on AWS, has sensible resource allocation defaults set to run on real-world datasets, and permits the persistent storage of results to benchmark between pipeline releases and other analysis sources.The results obtained from the full-sized test can be viewed on the [nf-core website](https://nf-co.re/hic/results).
+The pipeline is based on the [nf-core/hic](https://github.com/nf-core/hic) pipeline and the [hicstuff](https://github.com/koszullab/hicstuff) pipeline.
+It is currently split into two workflows, named `hicpro` and `hicstuff`.
 
-## Pipeline summary
+## Workflow summary (hicpro)
 
 1. Read QC ([`FastQC`](https://www.bioinformatics.babraham.ac.uk/projects/fastqc/))
 2. Hi-C data processing
@@ -36,14 +25,25 @@ On release, automated continuous integration tests run the pipeline on a full-si
 8. TADs calling ([`HiCExplorer`](https://github.com/deeptools/HiCExplorer), [`cooltools`](https://cooltools.readthedocs.io/en/latest/))
 9. Quality control report ([`MultiQC`](https://multiqc.info/))
 
-## Usage
+## Workflow summary (hicstuff)
+
+1. Hi-C data preparation
+2. Processing
+    1. Mapping
+    2. Merge and filter
+    3. Fragment attribution
+    4. Matrix generation
 
-> **Note**
-> If you are new to Nextflow and nf-core, please refer to [this page](https://nf-co.re/docs/usage/installation) on how
-> to set-up Nextflow. Make sure to [test your setup](https://nf-co.re/docs/usage/introduction#how-to-run-a-pipeline)
-> with `-profile test` before running the workflow on actual data.
+## Usage
+
+### Prepare the environment
+
+If you want to run the pipeline on the PSMN, first set up your [PSMN environment](https://lbmc.gitbiopages.ens-lyon.fr/biocomp/resources/psmn_cbp/) if you have not already done so.
 
-First, prepare a samplesheet with your input data that looks as follows:
+Then clone this repository into your `scratch/Bio` directory, or locally on your computer:
+
+```bash
+git clone git@gitbio.ens-lyon.fr:LBMC/hub/hic.git
+```
+Then `cd` into the cloned directory.
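+
+For example, assuming the clone kept the default directory name:
+
+```bash
+cd hic
+```
+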
+### Get started
+
+Prepare a samplesheet with your input data that looks as follows:
 
 `samplesheet.csv`:
 
@@ -56,46 +56,31 @@ Each row represents a pair of fastq files (paired end).
 Now, you can run the pipeline using:
 
 ```bash
-nextflow run nf-core/hic \
-   -profile <docker/singularity/.../institute> \
+nextflow run main.nf \
+   -profile psmn \
+   --workflow <hicpro/hicstuff> \
    --input samplesheet.csv \
-   --genome GRCh37 \
-   --outdir <OUTDIR>
+   --fasta <path/to/genome.fasta> \
+   --outdir <OUTDIR> \
+   --digestion <dpnii/hindiii/arima/mboi>
 ```
+If you're not running the pipeline on the PSMN, make sure you have Docker installed and use `-profile docker` instead.
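+
+For instance, a local run with Docker might look like this (a sketch; same parameters as above, with placeholders to fill in):
+
+```bash
+nextflow run main.nf \
+   -profile docker \
+   --workflow hicstuff \
+   --input samplesheet.csv \
+   --fasta <path/to/genome.fasta> \
+   --outdir <OUTDIR> \
+   --digestion dpnii
+```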
 
-> **Warning:**
-> Please provide pipeline parameters via the CLI or Nextflow `-params-file` option. Custom config files including those
-> provided by the `-c` Nextflow option can be used to provide any configuration _**except for parameters**_;
-> see [docs](https://nf-co.re/usage/configuration#custom-configuration-files).
-
-For more details, please refer to the [usage documentation](https://nf-co.re/hic/usage) and the [parameter documentation](https://nf-co.re/hic/parameters).
+For detailed options, please refer to the [parameter documentation](docs/usage.md).
 
 ## Pipeline output
 
-To see the the results of a test run with a full size dataset refer to the [results](https://nf-co.re/hic/results) tab on the nf-core website pipeline page.
+To see the results of a test run with a full size dataset, refer to the [results](https://nf-co.re/hic/2.1.0/results/hic/results-fe4ac656317d24c37e81e7940a526ed9ea812f8e/) tab on the nf-core website pipeline page (hicpro workflow, original nf-core/hic pipeline's results).
 For more details about the output files and reports, please refer to the
-[output documentation](https://nf-co.re/hic/output).
+[output documentation](docs/output.md).
 
 ## Credits
 
 nf-core/hic was originally written by Nicolas Servant.
+hicstuff was originally written by Romain Koszul's lab.
+This pipeline was modified by Mia Croiset.
 
-## Contributions and Support
-
-If you would like to contribute to this pipeline, please see the [contributing guidelines](.github/CONTRIBUTING.md).
-
-For further information or help, don't hesitate to get in touch on the [Slack `#hic` channel](https://nfcore.slack.com/channels/hic) (you can join with [this invite](https://nf-co.re/join/slack)).
-
-## Citations
-
-If you use nf-core/hic for your analysis, please cite it using the following doi: doi: [10.5281/zenodo.2669512](https://doi.org/10.5281/zenodo.2669512)
-
-An extensive list of references for the tools used by the pipeline can be found in the [`CITATIONS.md`](CITATIONS.md) file.
+## Support
 
-You can cite the `nf-core` publication as follows:
+For further information or help, don't hesitate to get in touch on [Element](https://element.io/) or by email.
 
-> **The nf-core framework for community-curated bioinformatics pipelines.**
->
-> Philip Ewels, Alexander Peltzer, Sven Fillinger, Harshil Patel, Johannes Alneberg, Andreas Wilm, Maxime Ulysse Garcia, Paolo Di Tommaso & Sven Nahnsen.
->
-> _Nat Biotechnol._ 2020 Feb 13. doi: [10.1038/s41587-020-0439-x](https://dx.doi.org/10.1038/s41587-020-0439-x).
diff --git a/docs/hicstuff_usage.md b/docs/hicstuff_usage.md
new file mode 100644
index 0000000000000000000000000000000000000000..60aad8d829d6df655a7b1180135da689ac7f0011
--- /dev/null
+++ b/docs/hicstuff_usage.md
@@ -0,0 +1,326 @@
+# Usage
+This document presents the usage and parameters of the **hicstuff workflow**. For the parameters of the **hicpro workflow**, see [here](usage.md); samplesheet input and core arguments are detailed there.
+
+# Parameters
+
+## Inputs
+Inputs are the same as for the hicpro workflow.
+
+### `--input`
+
+Use this to specify the location of your input FastQ files. For example:
+
+```bash
+--input 'path/to/data/sample_*_{1,2}.fastq'
+```
+
+Please note the following requirements:
+
+1. The path must be enclosed in quotes
+2. The path must have at least one `*` wildcard character
+3. When using the pipeline with paired end data, the path must use `{1,2}`
+   notation to specify read pairs.
+
+If left unspecified, a default pattern is used: `data/*{1,2}.fastq.gz`
+
+Note that the Hi-C data analysis workflow requires paired-end data.
+
+## Reference genomes
+
+The pipeline config files come bundled with paths to the Illumina iGenomes reference
+index files. If running with docker or AWS, the configuration is set up to use the
+[AWS-iGenomes](https://ewels.github.io/AWS-iGenomes/) resource.
+
+### `--genome` (using iGenomes)
+
+There are many different species supported in the iGenomes references. To run
+the pipeline, you must specify which to use with the `--genome` flag.
+
+You can find the keys to specify the genomes in the
+[iGenomes config file](https://github.com/nf-core/hic/blob/master/conf/igenomes.config).
+
+### `--fasta`
+
+If you prefer, you can specify the full path to your reference genome when you
+run the pipeline:
+
+```bash
+--fasta '[path to Fasta reference]'
+```
+
+### `--bwt2_index`
+
+The bowtie2 indexes are required to align the data. If
+`--bwt2_index` is not specified, the pipeline will either use the iGenomes
+bowtie2 indexes (see `--genome` option) or build the indexes on-the-fly
+(see `--fasta` option).
+
+```bash
+--bwt2_index '[path to bowtie2 index]'
+```
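+
+If you would rather build the indexes once and reuse them across runs, a minimal sketch (assuming `bowtie2-build` is installed) is:
+
+```bash
+# creates genome.*.bt2 index files next to the FASTA
+bowtie2-build genome.fasta genome
+```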
+
+### `--chromosome_size`
+
+The Hi-C pipeline also requires a two-column text file with the
+chromosome name and the chromosome size (tab-separated).
+If not specified, this file will be automatically created by the pipeline.
+In that case, the `--fasta` reference genome has to be specified.
+
+```bash
+   chr1    249250621
+   chr2    243199373
+   chr3    198022430
+   chr4    191154276
+   chr5    180915260
+   chr6    171115067
+   chr7    159138663
+   chr8    146364022
+   chr9    141213431
+   chr10   135534747
+   (...)
+```
+
+```bash
+--chromosome_size '[path to chromosome size file]'
+```
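+
+If you prefer to generate this file yourself, one common approach (assuming `samtools` is installed) is to derive it from the FASTA index, whose first two columns are the chromosome name and length:
+
+```bash
+samtools faidx genome.fasta
+cut -f1,2 genome.fasta.fai > chrom_sizes.tsv
+```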
+
+### `--restriction_fragments`
+
+Finally, Hi-C experiments based on restriction enzyme digestion require a BED
+file with coordinates of restriction fragments.
+
+```bash
+   chr1   0       16007   HIC_chr1_1    0   +
+   chr1   16007   24571   HIC_chr1_2    0   +
+   chr1   24571   27981   HIC_chr1_3    0   +
+   chr1   27981   30429   HIC_chr1_4    0   +
+   chr1   30429   32153   HIC_chr1_5    0   +
+   chr1   32153   32774   HIC_chr1_6    0   +
+   chr1   32774   37752   HIC_chr1_7    0   +
+   chr1   37752   38369   HIC_chr1_8    0   +
+   chr1   38369   38791   HIC_chr1_9    0   +
+   chr1   38791   39255   HIC_chr1_10   0   +
+   (...)
+```
+
+If not specified, this file will be automatically created by the pipeline.
+In this case, the `--fasta` reference genome will be used.
+Note that the `--digestion` or `--restriction_site` parameter is mandatory to create this file.
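+
+If you want to precompute this BED file instead, the `digest_genome.py` utility shipped with [HiC-Pro](https://github.com/nservant/HiC-Pro) can generate it; a sketch, assuming HiC-Pro is installed and a DpnII digestion:
+
+```bash
+# writes one BED line per restriction fragment
+python digest_genome.py -r dpnii -o restriction_fragments.bed genome.fasta
+```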
+
+## Hicstuff specific parameters
+
+The following options are defined in the `nextflow.config` file, and can be
+updated either using a custom configuration file (see `-c` option) or using
+command line parameters.
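+
+For example, a custom configuration file (hypothetical `custom.config`, passed with `-c custom.config`) overriding a couple of these defaults could look like:
+
+```nextflow
+// custom.config: overrides a subset of hicstuff defaults
+params {
+    hicstuff_bin      = 5000
+    hicstuff_min_qual = 20
+}
+```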
+
+### Mapping
+
+#### `--hicstuff_bwt2_align_opts`
+
+Bowtie2 alignment option for end-to-end mapping.
+Default: '--very-sensitive-local'
+
+```bash
+--hicstuff_bwt2_align_opts '[Options for bowtie2 mapping on full reads]'
+```
+
+### Fragment enzyme
+
+#### `--hicstuff_min_size`
+Minimum size below which contigs are discarded. Default: 0
+
+```bash
+--hicstuff_min_size '[Minimum size value]'
+```
+
+#### `--hicstuff_circular`
+Use if the genome is circular. Default: false
+
+```bash
+--hicstuff_circular
+```
+
+#### `--hicstuff_output_contigs`
+Name of info contigs file. Default: 'info_contigs.txt'
+
+```bash
+--hicstuff_output_contigs '[Name of info contigs file]'
+```
+
+#### `--hicstuff_output_frags`
+Name of fragments list file. Default: 'fragments_list.txt'
+
+```bash
+--hicstuff_output_frags '[Name of fragments file]'
+```
+
+#### `--hicstuff_frags_plot`
+Whether a fragments plot should be generated. Default: false
+
+```bash
+--hicstuff_frags_plot
+```
+
+#### `--hicstuff_frags_plot_path`
+Name of fragments plot file. Default: 'frags_hist.pdf'
+
+```bash
+--hicstuff_frags_plot_path '[Name of fragments plot file]'
+```
+
+### Bam2pairs
+
+#### `--hicstuff_valid_pairs`
+Name of valid pairs file. Default: 'valid.pairs'
+
+```bash
+--hicstuff_valid_pairs '[Name of valid pairs file]'
+```
+
+#### `--hicstuff_valid_idx`
+Name of valid pairs index file. Default: 'valid_idx.pairs'
+
+```bash
+--hicstuff_valid_idx '[Name of valid pairs index file]'
+```
+
+#### `--hicstuff_min_qual`
+Minimum mapping quality required to keep a pair of Hi-C reads. Default: 30
+
+```bash
+--hicstuff_min_qual '[Minimum quality value]'
+```
+
+See also [`--hicstuff_circular`](#--hicstuff_circular).
+
+### Matrix
+
+#### `--hicstuff_matrix`
+Common name of matrix files. Default: 'abs_fragments_contacts_weighted.txt'
+
+```bash
+--hicstuff_matrix '[Name of matrix file]'
+```
+
+#### `--hicstuff_bin`
+Bin size for plotting the matrix. Default: 10000
+
+```bash
+--hicstuff_bin [binsize]
+```
+
+> :warning: **Warning**: Depending on the size of your genome, the default bin size may be unsuitable and cause the pipeline to fail.
+> The default of 10000 is suited to the human genome; for a smaller genome such as yeast, you may want to use a smaller bin size.
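+
+For example, for a small genome such as yeast you might lower the bin size (value illustrative):
+
+```bash
+--hicstuff_bin 2000
+```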
+
+### Hicstuff options
+
+#### `--filter_event`
+Filter spurious or uninformative 3C events. Requires a restriction enzyme. Default: false
+
+```bash
+--filter_event
+```
+
+#### `--hicstuff_valid_idx_filtered`
+Name of filtered valid pairs index file. Default: false
+
+```bash
+--hicstuff_valid_idx_filtered '[Name of filtered valid pairs index file]'
+```
+
+#### `--hicstuff_plot_events`
+Whether plots should be generated at different steps of the pipeline. Default: false
+
+```bash
+--hicstuff_plot_events
+```
+
+#### `--hicstuff_dist_plot`
+Prefix of distance plot file during filter events. Default: 'dist'
+
+```bash
+--hicstuff_dist_plot '[Prefix of distance plot file]'
+```
+
+#### `--hicstuff_pie_plot`
+Prefix of distribution plot file during filter events. Default: 'distrib'
+
+```bash
+--hicstuff_pie_plot '[Prefix of pie plot file]'
+```
+
+#### `--distance_law`
+If true, generates a distance law file containing, for each chromosome (or each chromosome arm, if a file with centromere positions has been given), the probability of contact as a function of genomic distance. The values are neither normalized nor averaged. Default: false
+
+```bash
+--distance_law
+```
+
+#### `--hicstuff_centro_file`
+If not 'None', path to a file with the positions of the centromeres, separated by spaces and in the same order as the chromosomes. Default: 'None'
+
+```bash
+--hicstuff_centro_file '[Path of centromeres file]'
+```
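+
+As an illustration (hypothetical values; check the hicstuff documentation for the exact layout), the file holds space-separated centromere positions, in the same order as the chromosomes:
+
+```bash
+151465 238207 114385 449711
+```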
+
+#### `--hicstuff_base`
+Base used to construct the log-spaced bins for the distance law. Default: 1.1
+
+```bash
+--hicstuff_base '[Base number]'
+```
+
+#### `--hicstuff_distance_out_file`
+Name of distance law table file. Default: 'distance_law.txt'
+
+```bash
+--hicstuff_distance_out_file '[Name of distance law file]'
+```
+
+See also [`--hicstuff_circular`](#--hicstuff_circular).
+
+#### `--hicstuff_rm_centro`
+If the distance law is computed, this is the number of kb that will be removed around each centromere position given in the centromere file. Default: 'None'
+
+```bash
+--hicstuff_rm_centro '[Number of kb to remove]'
+```
+
+#### `--hicstuff_distance_plot`
+Whether the distance law table should be plotted. Default: false
+
+```bash
+--hicstuff_distance_plot
+```
+
+#### `--hicstuff_distance_out_plot`
+Name of the distance law plot file, used if `--hicstuff_distance_plot` is true. Default: 'distance_law.txt'
+
+```bash
+--hicstuff_distance_out_plot '[Name of distance law plot file]'
+```
+
+#### `--filter_pcr`
+If true, PCR duplicates will be filtered based on genomic positions. Pairs where both reads have exactly the same coordinates are considered duplicates, and only one of them is kept. Default: false
+
+```bash
+--filter_pcr
+```
+> :warning: **Warning**: If true, `--filter_pcr_duplicates` **must** be false.
+
+#### `--hicstuff_filter_pcr_out_file`
+Name of pair file after PCR filtering. Default: 'valid_idx_pcrfree.pairs'
+
+```bash
+--hicstuff_filter_pcr_out_file '[Name of pcr free pair file]'
+```
+
+#### `--filter_pcr_duplicates`
+
+If specified, duplicate reads are filtered using the Picard MarkDuplicates method. If true, `--keep_dups` **must** also be true. Default: false
+
+```bash
+--filter_pcr_duplicates
+```
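+
+Since `--keep_dups` must also be set, a typical invocation passes both flags:
+
+```bash
+--filter_pcr_duplicates --keep_dups
+```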
+
diff --git a/docs/output.md b/docs/output.md
index 1086b0371c3be007b813fdd582e5e79dde1000f6..253154334e96f61e1addd09c9c9e47ae4af4963d 100644
--- a/docs/output.md
+++ b/docs/output.md
@@ -1,8 +1,8 @@
-# nf-core/hic: Output
+# Output
 
 ## Introduction
 
-This document describes the output produced by the pipeline. Most of the plots are taken from the MultiQC report, which summarises results at the end of the pipeline.
+This document describes the output produced by the **hicpro workflow** of the pipeline. Most of the plots are taken from the MultiQC report, which summarises results at the end of the pipeline.
 The directories listed below will be created in the results directory after the pipeline has finished. All paths are relative to the top-level results directory.
 
 ## Pipeline overview
@@ -109,7 +109,7 @@ can thus be discarded using the `--min_cis_dist` parameter.
 - `*.FiltPairs` - List of filtered pairs
 - `*RSstat` - Statitics of number of read pairs falling in each category
 
-Of note, these results are saved only if `--save_pairs_intermediates` is used.  
+Of note, these results are saved only if `--save_pairs_intermediates` is used.
 The `validPairs` are stored using a simple tab-delimited text format ;
 
 ```bash
@@ -196,7 +196,7 @@ is specified on the command line.
 - `*_iced.matrix` - genome-wide iced contact maps
 
 The contact maps are generated for all specified resolutions
-(see `--bin_size` argument).  
+(see `--bin_size` argument).
 A contact map is defined by :
 
 - A list of genomic intervals related to the specified resolution (BED format).
@@ -221,7 +221,7 @@ downstream analysis.
 ## Hi-C contact maps
 
 Contact maps are usually stored as simple txt (`HiC-Pro`), .hic (`Juicer/Juicebox`) and .(m)cool (`cooler/Higlass`) formats.
-The .cool and .hic format are compressed and indexed and usually much more efficient than the txt format.  
+The .cool and .hic format are compressed and indexed and usually much more efficient than the txt format.
 In the current workflow, we propose to use the `cooler` format as a standard to build the raw and normalised maps
 after valid pairs detection as it is used by several downstream analysis and visualisation tools.
 
diff --git a/docs/usage.md b/docs/usage.md
index 4ad48da5087a541195d98f097c625aa22b5a6062..a32e4b8364c24c1e805eecad72d2b94ee746ac80 100644
--- a/docs/usage.md
+++ b/docs/usage.md
@@ -1,8 +1,6 @@
-# nf-core/hic: Usage
+# Usage
 
-## :warning: Please read this documentation on the nf-core website: [https://nf-co.re/hic/usage](https://nf-co.re/hic/usage)
-
-> _Documentation of pipeline parameters is generated automatically from the pipeline schema and can no longer be found in markdown files._
+This document presents the usage and [parameters](#parameters) of the **hicpro workflow**. For the parameters of the **hicstuff workflow**, see [here](hicstuff_usage.md).
 
 ## Introduction
 
@@ -27,7 +25,7 @@ CONTROL_REP1,AEG588A1_S1_L004_R1_001.fastq.gz,AEG588A1_S1_L004_R2_001.fastq.gz
 
 ### Full samplesheet
 
-The `nf-core-hic` pipeline is designed to work only with paired-end data. The samplesheet can have as many columns as you desire, however, there is a strict requirement for the first 3 columns to match those defined in the table below.
+This pipeline is designed to work only with paired-end data. The samplesheet can have as many columns as you desire, however, there is a strict requirement for the first 3 columns to match those defined in the table below.
 
 ```console
 sample,fastq_1,fastq_2
@@ -49,10 +47,10 @@ An [example samplesheet](../assets/samplesheet.csv) has been provided with the p
 The typical command for running the pipeline is as follows:
 
 ```bash
-nextflow run nf-core/hic --input samplesheet.csv --outdir <OUTDIR> --genome GRCh37 -profile docker
+nextflow run main.nf -profile psmn --workflow hicpro --input samplesheet.csv --outdir <OUTDIR> --fasta <genome.fasta> --digestion <dpnii/hindiii/arima/mboi>
 ```
 
-This will launch the pipeline with the `docker` configuration profile.
+This will launch the pipeline with the `psmn` configuration profile.
 See below for more information about profiles.
 
 Note that the pipeline will create the following files in your working directory:
@@ -66,26 +64,7 @@ work                # Directory containing the nextflow working files
 
 If you wish to repeatedly use the same parameters for multiple runs, rather than specifying each flag in the command, you can specify these in a params file.
 
-Pipeline settings can be provided in a `yaml` or `json` file via `-params-file <file>`.
-
-> ⚠️ Do not use `-c <file>` to specify parameters as this will result in errors. Custom config files specified with `-c` must only be used for [tuning process resource specifications](https://nf-co.re/docs/usage/configuration#tuning-workflow-resources), other infrastructural tweaks (such as output directories), or module arguments (args).
-> The above pipeline run specified with a params file in yaml format:
-
-```bash
-nextflow run nf-core/hic -profile docker -params-file params.yaml
-```
-
-with `params.yaml` containing:
-
-```yaml
-input: './samplesheet.csv'
-outdir: './results/'
-genome: 'GRCh37'
-input: 'data'
-<...>
-```
-
-You can also generate such `YAML`/`JSON` files via [nf-core/launch](https://nf-co.re/launch).
+You can edit the `nextflow.config` file to change any of the parameters. Remember to keep a record of the default values in case you need to revert.
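+
+For example, the relevant section of `nextflow.config` might look like this (a sketch; parameter names as used in the command above, values illustrative):
+
+```nextflow
+params {
+    workflow  = 'hicpro'
+    input     = './samplesheet.csv'
+    outdir    = './results/'
+    fasta     = './genome.fasta'
+    digestion = 'dpnii'
+}
+```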
 
 ### Updating the pipeline
 
@@ -95,18 +74,6 @@ When you run the above command, Nextflow automatically pulls the pipeline code f
 nextflow pull nf-core/hic
 ```
 
-### Reproducibility
-
-It is a good idea to specify a pipeline version when running the pipeline on your data. This ensures that a specific version of the pipeline code and software are used when you run your pipeline. If you keep using the same tag, you'll be running the same version of the pipeline, even if there have been changes to the code since.
-
-First, go to the [nf-core/hic releases page](https://github.com/nf-core/hic/releases) and find the latest pipeline version - numeric only (eg. `1.3.1`). Then specify this when running the pipeline with `-r` (one hyphen) - eg. `-r 1.3.1`. Of course, you can switch to another version by changing the number after the `-r` flag.
-
-This version number will be logged in reports when you run the pipeline, so that you'll know what you used when you look back in the future. For example, at the bottom of the MultiQC reports.
-
-To further assist in reproducbility, you can use share and re-use [parameter files](#running-the-pipeline) to repeat pipeline runs with the same settings without having to write out a command with every single parameter.
-
-> 💡 If you wish to share such profile (such as upload as supplementary material for academic publications), make sure to NOT include cluster specific paths to files, nor institutional specific profiles.
-
 ## Core Nextflow arguments
 
 > **NB:** These options are part of Nextflow and use a _single_ hyphen
@@ -122,39 +89,8 @@ Several generic profiles are bundled with the pipeline which instruct the pipeli
 > We highly recommend the use of Docker or Singularity containers for full
 > pipeline reproducibility, however when this is not possible, Conda is also supported.
 
-The pipeline also dynamically loads configurations from
-[https://github.com/nf-core/configs](https://github.com/nf-core/configs)
-when it runs, making multiple config profiles for various institutional
-clusters available at run time.
-For more information and to see if your system is available in these
-configs please see
-the [nf-core/configs documentation](https://github.com/nf-core/configs#documentation).
-
-Note that multiple profiles can be loaded, for example: `-profile test,docker` -
-the order of arguments is important!
-They are loaded in sequence, so later profiles can overwrite
-earlier profiles.
-
 If `-profile` is not specified, the pipeline will run locally and expect all software to be installed and available on the `PATH`. This is _not_ recommended, since it can lead to different results on different machines dependent on the computer enviroment.
 
-- `test`
-  - A profile with a complete configuration for automated testing
-  - Includes links to test data so needs no other parameters
-- `docker`
-  - A generic configuration profile to be used with [Docker](https://docker.com/)
-- `singularity`
-  - A generic configuration profile to be used with [Singularity](https://sylabs.io/docs/)
-- `podman`
-  - A generic configuration profile to be used with [Podman](https://podman.io/)
-- `shifter`
-  - A generic configuration profile to be used with [Shifter](https://nersc.gitlab.io/development/shifter/how-to-use/)
-- `charliecloud`
-  - A generic configuration profile to be used with [Charliecloud](https://hpc.github.io/charliecloud/)
-- `apptainer`
-  - A generic configuration profile to be used with [Apptainer](https://apptainer.org/)
-- `conda`
-  - A generic configuration profile to be used with [Conda](https://conda.io/docs/). Please only use Conda as a last resort i.e. when it's not possible to run the pipeline with Docker, Singularity, Podman, Shifter, Charliecloud, or Apptainer.
-
 ### `-resume`
 
 Specify this when restarting a pipeline. Nextflow will use cached results from any pipeline steps where the inputs are the same, continuing from where it got to previously. For input to be considered the same, not only the names must be identical but the files' contents as well. For more info about this parameter, see [this blog post](https://www.nextflow.io/blog/2019/demystifying-nextflow-resume.html).
@@ -181,78 +117,7 @@ In some cases you may wish to change which container or conda environment a step
 
 To use a different container from the default container or conda environment specified in a pipeline, please see the [updating tool versions](https://nf-co.re/docs/usage/configuration#updating-tool-versions) section of the nf-core website.
 
-### Custom Tool Arguments
-
-A pipeline might not always support every possible argument or option of a particular tool used in pipeline. Fortunately, nf-core pipelines provide some freedom to users to insert additional parameters that the pipeline does not include by default.
-
-To learn how to provide additional arguments to a particular tool of the pipeline, please see the [customising tool arguments](https://nf-co.re/docs/usage/configuration#customising-tool-arguments) section of the nf-core website.
-
-### nf-core/configs
-
-In most cases, you will only need to create a custom config as a one-off but if you and others within your organisation are likely to be running nf-core pipelines regularly and need to use the same settings regularly it may be a good idea to request that your custom config file is uploaded to the `nf-core/configs` git repository. Before you do this please can you test that the config file works with your pipeline of choice using the `-c` parameter. You can then create a pull request to the `nf-core/configs` repository with the addition of your config file, associated documentation file (see examples in [`nf-core/configs/docs`](https://github.com/nf-core/configs/tree/master/docs)), and amending [`nfcore_custom.config`](https://github.com/nf-core/configs/blob/master/nfcore_custom.config) to include your custom profile.
-
-See the main [Nextflow documentation](https://www.nextflow.io/docs/latest/config.html) for more information about creating your own configuration files.
-
-If you have any questions or issues please send us a message on
-[Slack](https://nf-co.re/join/slack) on the
-[`#configs` channel](https://nfcore.slack.com/channels/configs).
-
-## Azure Resource Requests
-
-To be used with the `azurebatch` profile by specifying the `-profile azurebatch`.
-We recommend providing a compute `params.vm_type` of `Standard_D16_v3` VMs by default but these options can be changed if required.
-
-Note that the choice of VM size depends on your quota and the overall workload during the analysis.
-For a thorough list, please refer the [Azure Sizes for virtual machines in Azure](https://docs.microsoft.com/en-us/azure/virtual-machines/sizes).
-
-## Running in the background
-
-Nextflow handles job submissions and supervises the running jobs.
-The Nextflow process must run until the pipeline is finished.
-
-The Nextflow `-bg` flag launches Nextflow in the background, detached from your terminal
-so that the workflow does not stop if you log out of your session. The logs are
-saved to a file.
-
-Alternatively, you can use `screen` / `tmux` or similar tool to create a detached
-session which you can log back into at a later time.
-Some HPC setups also allow you to run nextflow within a cluster job submitted
-your job scheduler (from where it submits more jobs).
-
-## Nextflow memory requirements
-
-In some cases, the Nextflow Java virtual machines can start to request a
-large amount of memory.
-We recommend adding the following line to your environment to limit this
-(typically in `~/.bashrc` or `~./bash_profile`):
-
-```bash
-NXF_OPTS='-Xms1g -Xmx4g'
-```
-
-## Use case
-
-### Hi-C digestion protocol
-
-Here is an command line example for standard DpnII digestion protocols.
-Alignment will be performed on the `mm10` genome with default parameters.
-Multi-hits will not be considered and duplicates will be removed.
-Note that by default, no filters are applied on DNA and restriction fragment sizes.
-
-```bash
-nextflow run main.nf --input './*_R{1,2}.fastq.gz' --genome 'mm10' --digestion 'dnpii'
-```
-
-### DNase Hi-C protocol
-
-Here is an command line example for DNase protocol.
-Alignment will be performed on the `mm10` genome with default paramters.
-Multi-hits will not be considered and duplicates will be removed.
-Contacts involving fragments separated by less than 1000bp will be discarded.
-
-```bash
-nextflow run main.nf --input './*_R{1,2}.fastq.gz' --genome 'mm10' --dnase --min_cis 1000
-```
+# Parameters
 
 ## Inputs
 
@@ -516,6 +381,16 @@ If specified, duplicate reads are not discarded before building contact maps.
 --keep_dups
 ```
 
+#### `--filter_pcr_duplicates`
+
+If specified, duplicate reads are filtered using the Picard MarkDuplicates method. If true, `--keep_dups` **must** also be true. Default: false
+
+```bash
+--filter_pcr_duplicates
+```
+
+> :warning: **Warning**: this option is **not** yet functional with the hicpro workflow. Setting it to true will make your run fail.
+
 #### `--keep_multi`
 
 If specified, reads that aligned multiple times on the genome are not discarded.