diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md
index 9f1ab7baf144858c03103531c66575de0f26d13e..2b87ac37aa1cb54f4a56c8c3b91a3ffd4f55f15f 100644
--- a/CONTRIBUTING.md
+++ b/CONTRIBUTING.md
@@ -1,92 +1,287 @@
 # Contributing
 
-When contributing to this repository, please first discuss the change you wish to make via issue,
-email, or any other method with the owners of this repository before making a change. 
+When contributing to this repository, please first discuss the change you wish to make via issues,
+email, or on the [ENS-Bioinfo channel](https://matrix.to/#/#ens-bioinfo:matrix.org) before making a change. 
 
-Please note we have a code of conduct, please follow it in all your interactions with the project.
+## Forking
 
-## Pull Request Process
+In git, the [action of forking](https://git-scm.com/book/en/v2/GitHub-Contributing-to-a-Project) means that you are going to make your own private copy of a repository. You can then write modifications in your project and, if they are of interest to the source repository (here [LBMC/nextflow](https://gitbio.ens-lyon.fr/LBMC/nextflow)), create a merge request. Merge requests are sent to the source repository to ask the maintainers to integrate your modifications.
 
-1. Ensure any install or build dependencies are removed before the end of the layer when doing a 
-   build.
-2. Update the README.md with details of changes to the interface, this includes new environment 
-   variables, exposed ports, useful file locations and container parameters.
-3. Increase the version numbers in any examples files and the README.md to the new version that this
-   Pull Request would represent. The versioning scheme we use is [SemVer](http://semver.org/).
-4. You may merge the Pull Request in once you have the sign-off of two other developers, or if you 
-   do not have permission to do that, you may request the second reviewer to merge it for you.
+![merge request button](./doc/img/merge_request.png)
 
-## Code of Conduct
+## Project organization
 
-### Our Pledge
+The `LBMC/nextflow` project is structured as follows:
+- all the code is in the `src/` folder
+- scripts downloading external tools should download them into the `bin/` folder
+- all the documentation (including this file) can be found in the `doc/` folder
+- the `data` and `results` folders contain the data and results of your pipelines and are ignored by `git`
 
-In the interest of fostering an open and welcoming environment, we as
-contributors and maintainers pledge to making participation in our project and
-our community a harassment-free experience for everyone, regardless of age, body
-size, disability, ethnicity, gender identity and expression, level of experience,
-nationality, personal appearance, race, religion, or sexual identity and
-orientation.
+## Code structure
 
-### Our Standards
+The `src/` folder is where we want to save the pipeline (`.nf`) scripts. This folder also contains:
+- the `src/install_nextflow.sh` script, which installs the nextflow executable at the root of the project
+- some pipeline examples (like the one built during the nf_practical)
+- the `src/nextflow.config` global configuration file, which contains the `docker`, `singularity`, `psmn` and `ccin2p3` profiles
+- the `src/nf_modules` folder, which contains per-tool `main.nf` modules with predefined processes that users can import into their projects with the [DSL2](https://www.nextflow.io/docs/latest/dsl2.html)
 
-Examples of behavior that contributes to creating a positive environment
-include:
+It also contains some hidden folders that users don't need to see when building their pipeline:
+- the `src/.docker_modules` folder contains the recipes for the `docker` containers used in the `src/nf_modules/<tool_names>/main.nf` files
+- the `src/.singularity_in2p3` and `src/.singularity_psmn` folders are symbolic links to the shared folders where the singularity images are downloaded on the PSMN and CCIN2P3
 
-* Using welcoming and inclusive language
-* Being respectful of differing viewpoints and experiences
-* Gracefully accepting constructive criticism
-* Focusing on what is best for the community
-* Showing empathy towards other community members
+# Proposing a new tool
 
-Examples of unacceptable behavior by participants include:
+Each tool named `<tool_name>` must have two dedicated folders:
 
-* The use of sexualized language or imagery and unwelcome sexual attention or
-advances
-* Trolling, insulting/derogatory comments, and personal or political attacks
-* Public or private harassment
-* Publishing others' private information, such as a physical or electronic
-  address, without explicit permission
-* Other conduct which could reasonably be considered inappropriate in a
-  professional setting
+- [`src/nf_modules/<tool_name>`](./src/nf_modules/fastp/) where users can find `.nf` files to include
+- [`src/.docker_modules/<tool_name>/<version_number>`](./src/.docker_modules/fastp/0.20.1/) where we have the [`Dockerfile`](./src/.docker_modules/fastp/0.20.1/Dockerfile) to construct the container used in the `main.nf` file
 
-### Our Responsibilities
+## `src/nf_modules` guidelines
 
-Project maintainers are responsible for clarifying the standards of acceptable
-behavior and are expected to take appropriate and fair corrective action in
-response to any instances of unacceptable behavior.
+We are going to take the [`fastp` `nf_module`](./src/nf_modules/fastp/) as an example.
 
-Project maintainers have the right and responsibility to remove, edit, or
-reject comments, commits, code, wiki edits, issues, and other contributions
-that are not aligned to this Code of Conduct, or to ban temporarily or
-permanently any contributor for other behaviors that they deem inappropriate,
-threatening, offensive, or harmful.
+The [`src/nf_modules/<tool_name>`](./src/nf_modules/fastp/) folder should contain a [`main.nf`](./src/nf_modules/fastp/main.nf) file that describes at least one process using `<tool_name>`.
 
-### Scope
+### container information
 
-This Code of Conduct applies both within project spaces and in public spaces
-when an individual is representing the project or its community. Examples of
-representing a project or community include using an official project e-mail
-address, posting via an official social media account, or acting as an appointed
-representative at an online or offline event. Representation of a project may be
-further defined and clarified by project maintainers.
+The first two lines of [`main.nf`](./src/nf_modules/fastp/main.nf) should define two variables
+```Groovy
+version = "0.20.1"
+container_url = "lbmc/fastp:${version}"
+```
 
-### Enforcement
+We can then use the `container_url` definition in the `container` directive of each `process`.
+In addition to the `container` directive, each `process` should have one of the following `label` directives (defined in the `src/nextflow.config` file):
+- `big_mem_mono_cpus`
+- `big_mem_multi_cpus`
+- `small_mem_mono_cpus`
+- `small_mem_multi_cpus`
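+
+For reference, such a label could be defined in the `src/nextflow.config` file along these lines (a sketch; the resource values here are hypothetical):
+
+```Groovy
+// hypothetical excerpt from src/nextflow.config
+process {
+  withLabel: big_mem_multi_cpus {
+    cpus = 4
+    memory = "16GB"
+  }
+}
+```
+
+In the module itself, each `process` then declares its label as in the following example: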
 
-Instances of abusive, harassing, or otherwise unacceptable behavior may be
-reported by contacting the project team at [INSERT EMAIL ADDRESS]. All
-complaints will be reviewed and investigated and will result in a response that
-is deemed necessary and appropriate to the circumstances. The project team is
-obligated to maintain confidentiality with regard to the reporter of an incident.
-Further details of specific enforcement policies may be posted separately.
+```Groovy
+process fastp {
+  container "${container_url}"
+  label "big_mem_multi_cpus"
+  ...
+}
+```
 
-Project maintainers who do not follow or enforce the Code of Conduct in good
-faith may face temporary or permanent repercussions as determined by other
-members of the project's leadership.
+### process options
 
-### Attribution
+Before each process, you should declare at least two `params.` variables:
+- a `params.<process_name>` defaulting to `""` (empty string), to allow users to add more command-line options to your process without rewriting the process definition
+- a `params.<process_name>_out` defaulting to `""` (empty string), which defines the `results/` subfolder where the process output should be copied if the user wants to save it
 
-This Code of Conduct is adapted from the [Contributor Covenant][homepage], version 1.4,
-available at [http://contributor-covenant.org/version/1/4][version]
+```Groovy
+params.fastp = ""
+params.fastp_out = ""
+process fastp {
+  container "${container_url}"
+  label "big_mem_multi_cpus"
+  if (params.fastp_out != "") {
+    publishDir "results/${params.fastp_out}", mode: 'copy'
+  }
+  ...
+  script:
+"""
+fastp --thread ${task.cpus} \
+${params.fastp} \
+...
+"""
+}
+```
+
+The user can then change the value of these variables:
+- from the command line: `--fastp "--trim_head1=10"`
+- with the `include` command within their pipeline: `include { fastp } from "./nf_modules/fastp/main.nf" addParams(fastp_out: "QC/fastp/")`
+- by defining the variable within their pipeline: `params.fastp_out = "QC/fastp/"`
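+
+Put together in a pipeline, this could look like the following sketch (the module path and the `QC/fastp/` subfolder are assumptions):
+
+```Groovy
+nextflow.enable.dsl=2
+
+// import the fastp process, saving its outputs under results/QC/fastp/
+include { fastp } from "./nf_modules/fastp/main.nf" addParams(fastp_out: "QC/fastp/")
+
+workflow {
+  // params.fastq is the user-provided fastq glob (e.g. --fastq "data/*_R{1,2}.fastq")
+  channel
+    .fromFilePairs( params.fastq )
+    .set { fastq_files }
+  fastp(fastq_files)
+}
+```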
+
+### `input` and `output` format
+
+You should always use a `tuple` for the input and output channel formats, with at least:
+- a `val` containing variable(s) related to the item
+- a `path` for the file(s) that you want to process
+
+for example:
+
+```Groovy
+process fastp {
+  container "${container_url}"
+  label "big_mem_multi_cpus"
+  tag "$file_id"
+  if (params.fastp_out != "") {
+    publishDir "results/${params.fastp_out}", mode: 'copy'
+  }
+
+  input:
+  tuple val(file_id), path(reads)
+
+  output:
+    tuple val(file_id), path("*.fastq.gz"), emit: fastq
+    tuple val(file_id), path("*.html"), emit: html
+    tuple val(file_id), path("*.json"), emit: report
+...
+```
+
+Here `file_id` can be anything from a simple identifier to a list of several variables,
+in which case the first item of the list should be usable as a file prefix.
+Keep that in mind if you want to use it to define output file names (you can test for it with `file_id instanceof List`).
+In some cases, `file_id` may be a Map, which gives cleaner access to its content through explicit keywords.
+
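+Concretely (with hypothetical sample names), items flowing through such a channel could take the following shapes:
+
+```Groovy
+// file_id as a simple identifier
+["sample_1", [read_1, read_2]]
+// file_id as a List: the first item is usable as a file prefix
+[["sample_1", "wild_type"], [read_1, read_2]]
+// file_id as a Map: explicit keywords give cleaner access
+[[id: "sample_1", genotype: "wild_type"], [read_1, read_2]]
+```
+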
+If you want to use information within the `file_id` to name outputs in your `script` section, you can use the following snippet:
+
+```Groovy
+  script:
+  switch(file_id) {
+    case {it instanceof List}:
+      file_prefix = file_id[0]
+      break
+    case {it instanceof Map}:
+      file_prefix = file_id.values()[0]
+      break
+    default:
+      file_prefix = file_id
+      break
+  }
+```
+
+and use the `file_prefix` variable.
+
+This also means that channels emitting `path` items should be transformed with at least the following `map` function:
+
+```Groovy
+.map { it -> [it.simpleName, it]}
+```
+
+for example:
+
+```Groovy
+channel
+  .fromPath( params.fasta )
+  .ifEmpty { error "Cannot find any fasta files matching: ${params.fasta}" }
+  .map { it -> [it.simpleName, it]}
+  .set { fasta_files }
+```
+
+
+The rationale behind taking a `file_id` and emitting the same `file_id` is to facilitate complex channel operations in pipelines without having to rewrite the `process` blocks.
+
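+For example (a sketch with a hypothetical `mapping` process), keeping the same `file_id` lets you pair the outputs of two processes with a simple `join`:
+
+```Groovy
+// fastq_files: a channel of [file_id, reads] tuples
+fastp(fastq_files)
+mapping(fastp.out.fastq)
+
+// join matches items from both channels on their first element, the file_id
+fastp.out.report
+  .join(mapping.out.bam)
+  .set { report_and_bam }
+```
+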
+### dealing with paired-end and single-end data
+
+When opening fastq files with `channel.fromFilePairs( params.fastq )`, items in the channel have the following shape:
+
+```Groovy
+[file_id, [read_1_file, read_2_file]]
+```
+
+To make this call more generic, we can use the `size: -1` option and accept an arbitrary number of associated fastq files:
+
+```Groovy
+channel.fromFilePairs( params.fastq, size: -1 )
+```
+
+will thus give `[file_id, [read_1_file, read_2_file]]` for paired-end data and `[file_id, [read_1_file]]` for single-end data.
+
+
+You can then use tests on `reads.size()` to define conditional `script` blocks:
+
+```Groovy
+...
+  script:
+  if (file_id instanceof List){
+    file_prefix = file_id[0]
+  } else {
+    file_prefix = file_id
+  }
+  if (reads.size() == 2)
+  """
+  fastp --thread ${task.cpus} \
+    ${params.fastp} \
+    --in1 ${reads[0]} \
+    --in2 ${reads[1]} \
+    --out1 ${file_prefix}_R1_trim.fastq.gz \
+    --out2 ${file_prefix}_R2_trim.fastq.gz \
+    --html ${file_prefix}.html \
+    --json ${file_prefix}_fastp.json \
+    --report_title ${file_prefix}
+  """
+  else
+  """
+  fastp --thread ${task.cpus} \
+    ${params.fastp} \
+    --in1 ${reads[0]} \
+    --out1 ${file_prefix}_trim.fastq.gz \
+    --html ${file_prefix}.html \
+    --json ${file_prefix}_fastp.json \
+    --report_title ${file_prefix}
+  """
+...
+```
+
+### Complex processes
+
+Sometimes you want to write complex processes: for example, for `fastp` we want to have predefined `fastp` processes for different protocols, orders of adapter trimming and read clipping.
+We can then use the fact that a `process` and a named `workflow` can be imported interchangeably with the [DSL2](https://www.nextflow.io/docs/latest/dsl2.html#workflow-composition).
+
+With the following example, the user can simply include the `fastp` step without knowing that it's a named `workflow` instead of a `process`.
+By specifying `params.fastp_protocol`, the `fastp` step will transparently switch between the different `fastp` `process`es.
+Here these are `fastp_default` and `fastp_accel_1splus`; other protocols can be added later, and pipelines will be able to handle them simply by updating from the `upstream` repository, without changing their code.
+
+```Groovy
+params.fastp_protocol = ""
+workflow fastp {
+  take:
+    fastq
+
+  main:
+  switch(params.fastp_protocol) {
+    case "accel_1splus":
+      fastp_accel_1splus(fastq)
+      fastp_accel_1splus.out.fastq.set{res_fastq}
+      fastp_accel_1splus.out.report.set{res_report}
+    break;
+    default:
+      fastp_default(fastq)
+      fastp_default.out.fastq.set{res_fastq}
+      fastp_default.out.report.set{res_report}
+    break;
+  }
+  emit:
+    fastq = res_fastq
+    report = res_report
+}
+```
+
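+A pipeline using this step can then stay unchanged while switching protocols from the command line. A sketch (file paths are assumptions):
+
+```Groovy
+include { fastp } from "./nf_modules/fastp/main.nf"
+
+workflow {
+  channel
+    .fromFilePairs( params.fastq, size: -1 )
+    .set { fastq_files }
+  // the named workflow dispatches to fastp_default or fastp_accel_1splus
+  // depending on --fastp_protocol
+  fastp(fastq_files)
+}
+```
+
+```sh
+./nextflow src/my_pipeline.nf --fastq "data/*_R{1,2}.fastq" --fastp_protocol "accel_1splus"
+```
+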
+## `src/.docker_modules` guidelines
+
+We are going to take the [`fastp` `.docker_modules` folder](./src/.docker_modules/fastp/0.20.1/) as an example.
+
+The [`src/.docker_modules/<tool_name>/<version_number>`](./src/.docker_modules/fastp/0.20.1/) folder should contain a [`Dockerfile`](./src/.docker_modules/fastp/0.20.1/Dockerfile) and a [`docker_init.sh`](./src/.docker_modules/fastp/0.20.1/docker_init.sh).
+
+### `Dockerfile`
+
+The [`Dockerfile`](./src/.docker_modules/fastp/0.20.1/Dockerfile) should contain a `docker` recipe to build an image with `<tool_name>` installed in a system-wide binary folder (`/bin`, `/usr/local/bin/`, etc.).
+This way, your scripts are easily accessible from within the container.
+
+This recipe should have:
+
+- an easily changeable `<version_number>` to be able to update the corresponding image to a newer version of the tool
+- the `ps` executable (package `procps` in debian)
+- a default `bash` command (`CMD ["bash"]`)
+
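+A minimal sketch of such a recipe (the static-binary download URL for `fastp` is an assumption; adapt the installation commands to your tool):
+
+```Docker
+FROM ubuntu:20.04
+
+# easily changeable version number
+ENV FASTP_VERSION=0.20.1
+
+# procps provides the ps executable needed by nextflow;
+# the fastp download URL below is an assumption
+RUN apt-get update && \
+    apt-get install -y --no-install-recommends procps wget ca-certificates && \
+    wget -qO /usr/local/bin/fastp http://opengene.org/fastp/fastp.${FASTP_VERSION} && \
+    chmod +x /usr/local/bin/fastp
+
+# default bash command
+CMD ["bash"]
+```
+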
+### `docker_init.sh`
+
+The [`docker_init.sh`](./src/.docker_modules/fastp/0.20.1/docker_init.sh) script is a small sh script with the following content:
+
+```sh
+#!/bin/sh
+docker pull lbmc/fastp:0.20.1
+docker build src/.docker_modules/fastp/0.20.1 -t 'lbmc/fastp:0.20.1'
+docker push lbmc/fastp:0.20.1
+```
+
+We want to be able to execute the `src/.docker_modules/fastp/0.20.1/docker_init.sh` script from the root of the project to:
+
+- try to download the corresponding container if it exists on the [Docker Hub](https://hub.docker.com/repository/docker/lbmc/)
+- if not, build the container from the corresponding [`Dockerfile`](./src/.docker_modules/fastp/0.20.1/Dockerfile), with the same name as the one we would get from the `docker pull` command
+- push the container to the [Docker Hub](https://hub.docker.com/repository/docker/lbmc/) (only [laurent.modolo@ens-lyon.fr](mailto:laurent.modolo@ens-lyon.fr) can do this step for the group **lbmc**)
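+
+For example, from the root of the project:
+
+```sh
+sh src/.docker_modules/fastp/0.20.1/docker_init.sh
+```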
 
-[homepage]: http://contributor-covenant.org
-[version]: http://contributor-covenant.org/version/1/4/
diff --git a/README.md b/README.md
index 199db4809ba27cae7eec980d541dc138b7e8e229..cab03de46f2c5c7a9408511c29ad98385b3c8394 100644
--- a/README.md
+++ b/README.md
@@ -2,35 +2,42 @@
 
 This repository is a template and a library repository to help you build nextflow pipeline.
 You can fork this repository to build your own pipeline.
+
+## Getting the latest updates
+
 To get the last commits from this repository into your fork use the following commands:
 
+The first time, run:
+```sh
+git remote add upstream git@gitbio.ens-lyon.fr:LBMC/nextflow.git
+git pull upstream master
+```
+
+Then, to get the updates:
 ```sh
-git remote add upstream git@gitbio.ens-lyon.fr::pipelines/nextflow.git
 git pull upstream master
+git merge upstream/master
 ```
-**If you created your `.config` file before version `0.4.0` you need to run the script `src/.update_config.sh` to use the latest docker, singularity and conda configuration (don't forget to check your config files afterward for typos).**
 
 ## Getting Started
 
-These instructions will get you a copy of the project up and running on your local machine for development and testing purposes. See deployment for notes on how to deploy the project on a live system.
+These instructions will get you a copy of the project as a template when you want to build your own pipeline.
 
 [you can follow them here.](doc/getting_started.md)
 
-## Available tools
+## Building your pipeline
 
-[The list of available tools.](doc/available_tools.md)
+You can follow the [building your pipeline guide](./doc/building_your_pipeline.md) for a gentle introduction to `nextflow` and to using this template to build your pipelines.
 
-## Projects using nextflow
+## Existing Nextflow pipelines
 
-[A list of projects using nextflow at the LBMC.](doc/nf_projects.md)
+Before starting a new project, you can check whether someone else has already done the work!
+- [on the nextflow project page](./doc/nf_projects.md)
+- [on the nf-core project](https://nf-co.re/pipelines)
 
 ## Contributing
 
-Please read [CONTRIBUTING.md](CONTRIBUTING.md) for details on our code of conduct, and the process for submitting pull requests to us.
-
-## Versioning
-
-We use [SemVer](http://semver.org/) for versioning. For the versions available, see the [tags on this repository](https://gitbio.ens-lyon.fr/pipelines/nextflow/tags).
+If you want to add more tools to this project, please read [CONTRIBUTING.md](CONTRIBUTING.md).
 
 ## Authors
 
diff --git a/doc/Makefile b/doc/Makefile
deleted file mode 100644
index 3b34e9e63f3e6f520041a2b6369af61a19c1efd2..0000000000000000000000000000000000000000
--- a/doc/Makefile
+++ /dev/null
@@ -1,13 +0,0 @@
-all: TP_experimental_biologists.pdf TP_computational_biologists.pdf ../public/TP_experimental_biologists.html ../public/TP_computational_biologists.html
-
-../public/TP_experimental_biologists.html: TP_experimental_biologists.md
-	pandoc -s TP_experimental_biologists.md -o ../public/TP_experimental_biologists.html
-
-../public/TP_computational_biologists.html: TP_computational_biologists.md
-	pandoc -s TP_computational_biologists.md -o ../public/TP_computational_biologists.html
-
-TP_experimental_biologists.pdf: TP_experimental_biologists.md
-	R -e 'require(rmarkdown); rmarkdown::render("TP_experimental_biologists.md")'
-
-TP_computational_biologists.pdf: TP_computational_biologists.md
-	R -e 'require(rmarkdown); rmarkdown::render("TP_computational_biologists.md")'
diff --git a/doc/TP_computational_biologists.md b/doc/TP_computational_biologists.md
deleted file mode 100644
index 1b15a64813a6505cd3623431ab701cd3807f3df3..0000000000000000000000000000000000000000
--- a/doc/TP_computational_biologists.md
+++ /dev/null
@@ -1,241 +0,0 @@
----
-title: "TP for computational biologists"
-author: Laurent Modolo [laurent.modolo@ens-lyon.fr](mailto:laurent.modolo@ens-lyon.fr)
-date: 20 Jun 2018
-output:
-pdf_document:
-toc: true
-toc_depth: 3
-    number_sections: true
-highlight: tango
-    latex_engine: xelatex
----
-
-The goal of this practical is to learn how to *wrap* tools in [Docker](https://www.docker.com/what-docker) or [Environment Module](http://www.ens-lyon.fr/PSMN/doku.php?id=documentation:tools:modules) to make them available to nextflow on a personal computer or at the [PSMN](http://www.ens-lyon.fr/PSMN/doku.php).
-
-Here we assume that you followed the [TP for experimental biologists](./TP_experimental_biologists.md), and that you know the basics of [Docker containers](https://www.docker.com/what-container) and [Environment Module](http://www.ens-lyon.fr/PSMN/doku.php?id=documentation:tools:modules). We are also going to assume that you know how to build and use a nextflow pipeline from the template [pipelines/nextflow](https://gitlab.biologie.ens-lyon.fr/pipelines/nextflow).
-
-For the practical you can either work with the WebIDE of Gitlab, or locally as described in the [git: basis formation](https://gitlab.biologie.ens-lyon.fr/formations/git_basis).
-
-# Docker
-
-To run a tool within a [Docker container](https://www.docker.com/what-container) you need to write a `Dockerfile`.
-
-[`Dockerfile`](./src/docker_modules/kallisto/0.44.0/Dockerfile) are found in the [pipelines/nextflow](https://gitlab.biologie.ens-lyon.fr/pipelines/nextflow) project under `src/docker_modules/`. Each [`Dockerfile`](./src/docker_modules/kallisto/0.44.0/Dockerfile) is paired with a [`docker_init.sh`](./src/docker_modules/kallisto/0.44.0/docker_init.sh) file like following the example for `Kallisto` version `0.43.1`:
-
-```sh
-$ ls -l src/docker_modules/kallisto/0.43.1/
-total 16K
-drwxr-xr-x 2 laurent users 4.0K Jun 5 19:06 ./
-drwxr-xr-x 3 laurent users 4.0K Jun 6 09:49 ../
--rw-r--r-- 1 laurent users  587 Jun  5 19:06 Dockerfile
--rwxr-xr-x 1 laurent users 79 Jun 5 19:06 docker_init.sh*
-```
-
-## [`docker_init.sh`](./src/docker_modules/kallisto/0.44.0/docker_init.sh)
-The [`docker_init.sh`](./src/docker_modules/kallisto/0.44.0/docker_init.sh) is a simple sh script with executable rights (`chmod +x`). By executing this script, the user creates a [Docker container](https://www.docker.com/what-container) with the tool installed a specific version. You can check the [`docker_init.sh`](./src/docker_modules/kallisto/0.44.0/docker_init.sh) file of any implemented tools as a template.
-
-Remember that the name of the [container](https://www.docker.com/what-container) must be in lower case and in the format `<tool_name>:<version>`.
-For tools without a version number you can use a commit hash instead.
-
-## [`Dockerfile`](./src/docker_modules/kallisto/0.44.0/Dockerfile)
-
-The recipe to wrap your tool in a [Docker container](https://www.docker.com/what-container) is written in a [`Dockerfile`](./src/docker_modules/kallisto/0.44.0/Dockerfile) file.
-
-For `Kallisto` version `0.44.0` the header of the `Dockerfile` is :
-
-```Docker
-FROM ubuntu:18.04
-MAINTAINER Laurent Modolo
-
-ENV KALLISTO_VERSION=0.44.0
-```
-
-The `FROM` instruction means that the [container](https://www.docker.com/what-container) is initialized from a bare installation of Ubuntu 18.04. You can check the versions of Ubuntu available [here](https://hub.docker.com/_/ubuntu/) or others operating systems like [debian](https://hub.docker.com/_/debian/) or [worst](https://hub.docker.com/r/microsoft/windowsservercore/).
-
-Then we declare the *maintainer* of the container. Before declaring an environment variable for the container named `KALLISTO_VERSION`, which contains the version of the tool wrapped. This this bash variable will be declared for the user root within the [container](https://www.docker.com/what-container).
-
-You should always declare a variable `TOOLSNAME_VERSION` that contains the version number of commit number of the tools you wrap. In simple cases you just have to modify this line to create a new `Dockerfile` for another version of the tool.
-
-The following lines of the [`Dockerfile`](./src/docker_modules/kallisto/0.44.0/Dockerfile) are a succession of `bash` commands executed as the **root** user within the container.
-Each `RUN` block is run sequentially by `Docker`. If there is an error or modifications in a `RUN` block, only this block and the following `RUN` will be executed.
-
-You can learn more about the building of Docker containers [here](https://docs.docker.com/engine/reference/builder/#usage).
-
-When you build your [`Dockerfile`](./src/docker_modules/kallisto/0.44.0/Dockerfile), instead of launching many times the [`docker_init.sh`](./src/docker_modules/kallisto/0.44.0/docker_init.sh) script to tests your [container](https://www.docker.com/what-container), you can connect to a base container in interactive mode to launch tests your commands.
-
-```sh
-docker run -it ubuntu:18.04 bash
-KALLISTO_VERSION=0.44.0
-```
-
-# SGE / [PSMN](http://www.ens-lyon.fr/PSMN/doku.php)
-
-To run easily tools on the [PSMN](http://www.ens-lyon.fr/PSMN/doku.php), you need to build your own [Environment Module](http://www.ens-lyon.fr/PSMN/doku.php?id=documentation:tools:modules).
-
-You can read the Contributing guide for the [PMSN/modules](https://gitlab.biologie.ens-lyon.fr/PSMN/modules) project [here](https://gitlab.biologie.ens-lyon.fr/PSMN/modules/blob/master/CONTRIBUTING.md)
-
-# Nextflow
-
-The last step to wrap your tool is to make it available in nextflow. For this you need to create at least 4 files, like the following for Kallisto version `0.44.0`:
-
-```sh
-ls -lR src/nf_modules/kallisto
-src/nf_modules/kallisto/:
-total 12
--rw-r--r-- 1 laurent users 551 Jun 18 17:14 index.nf
--rw-r--r-- 1 laurent users 901 Jun 18 17:14 mapping_paired.nf
--rw-r--r-- 1 laurent users 1037 Jun 18 17:14 mapping_single.nf
--rwxr-xr-x 1 laurent users 627 Jun 18 17:14 tests.sh*
-```
-
-The [`.config` files](./src/nf_modules/kallisto/) file contains instructions for two profiles : `psmn` and `docker`.
-The [`.nf` files](./src/nf_modules/kallisto/) file contains nextflow processes to use `Kallisto`.
-
-The [`tests/tests.sh`](./src/nf_modules/kallisto/tests/tests.sh) script (with executable rights), contains a series of nextflow calls on the other `.nf` files of the folder. Those tests correspond to execution of the `*.nf` files present in the [`kallisto folder`](./src/nf_modules/kallisto/) on the [LBMC/tiny_dataset](https://gitlab.biologie.ens-lyon.fr/LBMC/tiny_dataset) dataset with the `docker` profile. You can read the *Running the tests* section of the [README.md](https://gitlab.biologie.ens-lyon.fr/pipelines/nextflow/blob/master/README.md).
-
-## [`kallisto.config`](./src/nf_modules/kallisto/)
-
-The `.config` file defines the configuration to apply to your process conditionally to the value of the `-profile` option. You must define configuration for at least the `psmn` and `docker` profile.
-
-```Groovy
-profiles {
-  docker {
-    docker.temp = 'auto'
-    docker.enabled = true
-    process {
-    }
-  }
-  psmn {
-    process{
-    }
-  }
-```
-
-### `docker` profile
-
-The `docker` profile starts by enabling docker for the whole pipeline. After that you only have to define the container name for each process:
-For example, for `Kallisto` with the version `0.44.0`, we have:
-
-```Groovy
-process {
-  withName: index_fasta {
-    container = "kallisto:0.44.0"
-  }
-  withName: mapping_fastq {
-    container = "kallisto:0.44.0"
-  }
-}
-```
-
-### `psmn` profile
-
-The `psmn` profile defines for each process all the informations necessary to launch your process on a given queue with SGE at the [PSMN](http://www.ens-lyon.fr/PSMN/doku.php).
-For example, for `Kallisto`, we have:
-
-```Groovy
-process{
-  withName: index_fasta {
-    beforeScript = "source /usr/share/lmod/lmod/init/bash; module use ~/privatemodules"
-    module = "Kallisto/0.44.0"
-    executor = "sge"
-    cpus = 16
-    memory = "30GB"
-    time = "24h"
-    queue = 'E5-2670deb128A,E5-2670deb128B,E5-2670deb128C,E5-2670deb128D,E5-2670deb128E,E5-2670deb128F'
-    penv = 'openmp16'
-  }
-  withName: mapping_fastq {
-    beforeScript = "source /usr/share/lmod/lmod/init/bash; module use ~/privatemodules"
-    module = "Kallisto/0.44.0"
-    executor = "sge"
-    cpus = 16
-    memory = "30GB"
-    time = "24h"
-    queue = 'E5-2670deb128A,E5-2670deb128B,E5-2670deb128C,E5-2670deb128D,E5-2670deb128E,E5-2670deb128F'
-    penv = 'openmp16'
-  }
-}
-```
-
-The `beforeScript` variable is executed before the main script for the corresponding process.
-
-## [`kallisto.nf`](./src/nf_modules/kallisto/kallisto.nf)
-
-The [`kallisto.nf`](./src/nf_modules/kallisto/kallisto.nf) file contains examples of nextflow process that execute Kallisto.
-
-- Each example must be usable as it is to be incorporated in a nextflow pipeline.
-- You need to define, default value for the parameters passed to the process. 
-- Input and output must be clearly defined.
-- Your process should be usable as a starting process or a process retrieving the output of another process.
-
-For more informations on processes and channels you can check the [nextflow documentation](https://www.nextflow.io/docs/latest/index.html).
-
-## Making your wrapper available to the LBMC
-
-To make your module available to the LBMC you must have a `tests.sh` script and one or many `docker_init.sh` scripts working without errors.
-All the processes in your `.nf` must be covered by the tests.
-
-After pushing your modifications on your forked repository, you can make a Merge Request to the [PSMN/modules](https://gitlab.biologie.ens-lyon.fr/pipelines/nextflow) **dev** branch. Where it will be tested and integrated to the **master** branch.
-
-You can read more on this process [here](https://guides.github.com/introduction/flow/)
-
-
-### `docker_modules`
-
-The `src/docker_modules` contains the code to wrap tools in [Docker](https://www.docker.com/what-docker). [Docker](https://www.docker.com/what-docker) is a framework that allows you to execute software within [containers](https://www.docker.com/what-container). The `docker_modules` contains directory corresponding to tools and subdirectories corresponding to their version.
-
-```sh
-ls -l src/docker_modules/
-rwxr-xr-x  3 laurent _lpoperator   96 May 25 15:42 bedtools/
-drwxr-xr-x  4 laurent _lpoperator  128 Jun 5 16:14 bowtie2/
-drwxr-xr-x  3 laurent _lpoperator   96 May 25 15:42 fastqc/
-drwxr-xr-x  4 laurent _lpoperator  128 Jun 5 16:14 htseq/
-```
-
-To each `tools/version` corresponds two files:
-
-```sh
-ls -l src/docker_modules/bowtie2/2.3.4.1/
--rw-r--r-- 1 laurent _lpoperator  283 Jun  5 15:07 Dockerfile
--rwxr-xr-x  1 laurent _lpoperator   79 Jun 5 16:18 docker_init.sh*
-```
-
-The `Dockerfile` is the [Docker](https://www.docker.com/what-docker) recipe to create a [container](https://www.docker.com/what-container) containing `Bowtie2` in its `2.3.4.1` version. And the `docker_init.sh` file is a small script to create the [container](https://www.docker.com/what-container) from this recipe.
-
-By running this script you will be able to easily install tools in different versions on your personal computer and use it in your pipeline. Some of the advantages are:
-
-- Whatever the computer, the installation and the results will be the same
-- You can keep [container](https://www.docker.com/what-container) for old version of tools and run it on new systems (science = reproducibility)
-- You don’t have to bother with tedious installation procedures, somebody else already did the job and wrote a `Dockerfile`.
-- You can easily keep [containers](https://www.docker.com/what-container) for different version of the same tools.
-
-### `psmn_modules`
-
-The `src/psmn_modules` folder is not really there. It’s a submodule of the project [PSMN/modules](https://gitlab.biologie.ens-lyon.fr/PSMN/modules). To populate it locally you can use the following command:
-
-```sh
-git submodule init
-```
-
-Like the `src/docker_modules` the [PSMN/modules](https://gitlab.biologie.ens-lyon.fr/PSMN/modules) project describe recipes to install tools and use them. The main difference is that you cannot use [Docker](https://www.docker.com/what-docker) on the PSMN. Instead you have to use another framework [Environment Module](http://www.ens-lyon.fr/PSMN/doku.php?id=documentation:tools:modules) which allows you to load modules for specific tools and version.
-The [README.md](https://gitlab.biologie.ens-lyon.fr/PSMN/modules/blob/master/README.md) file of the [PSMN/modules](https://gitlab.biologie.ens-lyon.fr/PSMN/modules) repository contains all the instruction to be able to load the modules maintained by the LBMC and present in the [PSMN/modules](https://gitlab.biologie.ens-lyon.fr/PSMN/modules) repository.
-
-## Create your Docker containers
-
-For this practical, we are going to need the following tools:
-
-- For Illumina adaptor removal: cutadapt
-- For reads trimming by quality: UrQt
-- For mapping and quantifying reads: BEDtools and Kallisto
-
-To initialize these tools, follow the **Installing** section of the [README.md](https://gitlab.biologie.ens-lyon.fr/pipelines/nextflow/blob/master/README.md) file.
-
-**If you are using a CBP computer don’t forget to clean up your docker containers at the end of the practical with the following commands:**
-
-```sh
-docker rm $(docker stop $(docker ps -aq))
-docker rmi $(docker images -qf "dangling=true")
-```
-
-
diff --git a/doc/TP_experimental_biologists.md b/doc/TP_experimental_biologists.md
deleted file mode 100644
index dd1f4c54e9f16472aa0912b5b364497d5949aa53..0000000000000000000000000000000000000000
--- a/doc/TP_experimental_biologists.md
+++ /dev/null
@@ -1,365 +0,0 @@
----
-title: "TP for experimental biologists"
-author: Laurent Modolo [laurent.modolo@ens-lyon.fr](mailto:laurent.modolo@ens-lyon.fr)
-date: 6 Jun 2018
-output:
-pdf_document:
-toc: true
-toc_depth: 3
-    number_sections: true
-highlight: tango
-    latex_engine: xelatex
----
-
-The Goal of this practical is to learn how to build your own pipeline with nextflow and using the tools already *wrapped*.
-For this we are going to build a small RNASeq analysis pipeline that should run the following steps:
-
-- remove Illumina adaptors
-- trim reads by quality
-- build the index of a reference genome
-- estimate the amount of RNA fragments mapping to the transcripts of this genome
-
-**To do this practical you will need to have [Docker](https://www.docker.com/) installed and running on your computer**
-
-# Initialize your own project
-
-You are going to build a pipeline for you or your team. So the first step is to create your own project.
-
-## Forking
-
-Instead of reinventing the wheel, you can use the [LBMC/nextflow](https://gitbio.ens-lyon.fr/LBMC/nextflow) as a template.
-To easily do so, go to the [LBMC/nextflow](https://gitbio.ens-lyon.fr/LBMC/nextflow) repository and click on the [**fork**](https://gitbio.ens-lyon.fr/LBMC/nextflow/forks/new) button (you need to log-in).
-
-![fork button](img/fork.png)
-
-In git, the [action of forking](https://git-scm.com/book/en/v2/GitHub-Contributing-to-a-Project) means that you are going to make your own private copy of a repository. You can then write modifications in your project, and if they are of interest for the source repository create a merge request (here [LBMC/nextflow](https://gitbio.ens-lyon.fr/LBMC/nextflow)). Merge requests are sent to the source repository to ask the maintainers to integrate modifications.
-
-![merge request button](img/merge_request.png)
-
-## Project organisation
-
-This project (and yours) follows the [guide of good practices for the LBMC](http://www.ens-lyon.fr/LBMC/intranet/services-communs/pole-bioinformatique/ressources/good_practice_LBMC)
-
-You are now on the main page of your fork of the [LBMC/nextflow](https://gitbio.ens-lyon.fr/LBMC/nextflow). You can explore this project, all the code in it is under the CeCILL licence (in the [LICENCE](https://gitbio.ens-lyon.fr/LBMC/nextflow/blob/master/LICENSE) file).
-
-The [README.md](https://gitbio.ens-lyon.fr/LBMC/nextflow/blob/master/README.md) file contains instructions to run your pipeline and test its installation.
-
-The [CONTRIBUTING.md](https://gitbio.ens-lyon.fr/LBMC/nextflow/blob/master/CONTRIBUTING.md) file contains guidelines if you want to contribute to the [LBMC/nextflow](https://gitbio.ens-lyon.fr/LBMC/nextflow) (making a merge request for example).
-
-The [data](https://gitbio.ens-lyon.fr/LBMC/nextflow/tree/master/data) folder will be the place where you store the raw data for your analysis.
-The [results](https://gitbio.ens-lyon.fr/LBMC/nextflow/tree/master/results) folder will be the place where you store the results of your analysis.
-
-> **The content of `data` and `results` folders should never be saved on git.**
-
-The [doc](https://gitbio.ens-lyon.fr/LBMC/nextflow/tree/master/doc) folder contains the documentation of this practical course.
-
-And most interestingly for you, the [src](https://gitbio.ens-lyon.fr/LBMC/nextflow/tree/master/src) contains code to wrap tools. This folder contains one visible subdirectories `nf_modules` some pipeline examples and other hidden files.
-
-### `nf_modules`
-
-The `src/nf_modules` folder contains templates of [nextflow](https://www.nextflow.io/) wrappers for the tools available in [Docker](https://www.docker.com/what-docker). The details of the [nextflow](https://www.nextflow.io/) wrapper will be presented in the next section. Alongside the `.nf` and `.config` files, there is a `tests.sh` script to run test on the tool.
-
-# Nextflow pipeline
-
-A pipeline is a succession of **process**. Each process has data input(s) and optional data output(s). Data flows are modeled as **channels**.
-
-## Processes
-
-Here is an example of **process**:
-
-```Groovy
-process sample_fasta {
-  input:
-file fasta from fasta_file
-
-  output:
-file "sample.fasta" into fasta_sample
-
-  script:
-"""
-head ${fasta} > sample.fasta
-"""
-}
-```
-
-We have the process `sample_fasta` that takes a `fasta_file` **channel** as input and as output a `fasta_sample` **channel**. The process itself is defined in the `script:` block and within `"""`.
-
-```Groovy
-input:
-file fasta from fasta_file
-```
-
-When we zoom on the `input:` block we see that we define a variable `fasta` of type `file` from the `fasta_file` **channel**. This mean that groovy is going to write a file named as the content of the variable `fasta` in the root of the folder where `script:` is executed.
-
-
-```Groovy
-output:
-file "sample.fasta" into fasta_sample
-```
-
-At the end of the script, a file named `sample.fasta` is found in the root the folder where `script:` is executed and send into the **channel** `fasta_sample`.
-
-Using the WebIDE of Gitlab, create a file `src/fasta_sampler.nf` with this process and commit it to your repository.
-
-![webide](img/webide.png)
-
-## Channels
-
-Why bother with channels? In the above example, the advantages of channels are not really clear. We could have just given the `fasta` file to the process. But what if we have many fasta files to process? What if we have sub processes to run on each of the sampled fasta files? Nextflow can easily deal with these problems with the help of channels.
-
-> **Channels** are streams of items that are emitted by a source and consumed by a process. A process with a channel as input will be run on every item send through the channel.
-
-```Groovy
-Channel
-  .fromPath( "data/tiny_dataset/fasta/*.fasta" )
-  .set { fasta_file }
-```
-
-Here we defined the channel `fasta_file` that is going to send every fasta file from the folder `data/tiny_dataset/fasta/` into the process that take it as input.
-
-Add the definition of the channel to the `src/fasta_sampler.nf` file and commit it to your repository.
-
-
-## Run your pipeline locally
-
-After writing this first pipeline, you may want to test it. To do that, first clone your repository. To easily do that set the visibility level to *public* in the settings/General/Permissions page of your project.
-
-You can then run the following commands to download your project on your computer:
-
-and then :
-
-```sh
-git clone git@gitbio.ens-lyon.fr:<usr_name>/nextflow.git
-cd nextflow
-src/install_nextflow.sh
-```
-
-We also need data to run our pipeline:
-
-```
-cd data
-git clone git@gitbio.ens-lyon.fr:LBMC/hub/tiny_dataset.git
-cd ..
-```
-
-We can run our pipeline with the following command:
-
-```sh
-./nextflow src/fasta_sampler.nf
-```
-
-
-
-## Getting your results
-
-Our pipeline seems to work but we don’t know where is the `sample.fasta`. To get results out of a process, we need to tell nextflow to write it somewhere (we may don’t need to get every intermediate file in our results).
-
-To do that we need to add the following line before the `input:` section:
-
-```Groovy
-publishDir "results/sampling/", mode: 'copy'
-```
-
-Every file described in the `output:` section will be copied from nextflow to the folder `results/sampling/`.
-
-Add this to your `src/fasta_sampler.nf` file with the WebIDE and commit to your repository.
-Pull your modifications locally with the command:
-
-```sh
-git pull origin master
-```
-
-You can run your pipeline again and check the content of the folder `results/sampling`.
-
-
-
-## Fasta everywhere
-
-We ran our pipeline on one fasta file. How would nextflow handle 100 of them? To test that we need to duplicate the `tiny_v2.fasta` file: 
-
-```sh
-for i in {1..100}
-do
-cp data/tiny_dataset/fasta/tiny_v2.fasta data/tiny_dataset/fasta/tiny_v2_${i}.fasta
-done
-```
-
-You can run your pipeline again and check the content of the folder `results/sampling`.
-
-Every `fasta_sampler` process write a `sample.fasta` file. We need to make the name of the output file dependent of the name of the input file.
-
-```Groovy
-output:
-file "*_sample.fasta" into fasta_sample
-
-  script:
-"""
-head ${fasta} > ${fasta.baseName}_sample.fasta
-"""
-```
-
-Add this to your `src/fasta_sampler.nf` file with the WebIDE and commit it to your repository before pulling your modifications locally.
-You can run your pipeline again and check the content of the folder `results/sampling`.
-
-# Build your own RNASeq pipeline
-
-In this section you are going to build your own pipeline for RNASeq analysis from the code available in the `src/nf_modules` folder.
-
-## Cutadapt
-
-The first step of the pipeline is to remove any Illumina adaptors left in your read files.
-
-Open the WebIDE and create a `src/RNASeq.nf` file. Browse for [src/nf_modules/cutadapt/adaptor_removal_paired.nf](https://gitbio.ens-lyon.fr/LBMC/nextflow/blob/master/src/nf_modules/cutadapt/adaptor_removal_paired.nf), this file contains examples for cutadapt. We are interested in the *Illumina adaptor removal*, *for paired-end data* section of the code. Copy this code in your pipeline and commit it.
-
-Compared to before, we have few new lines:
-
-```Groovy
-params.fastq = "$baseDir/data/fastq/*_{1,2}.fastq"
-```
-
-We declare a variable that contains the path of the fastq file to look for. The advantage of using `params.fastq` is that the option `--fastq` is now a parameter of your pipeline.
-Thus, you can call your pipeline with the `--fastq` option:
-
-```sh
-./nextflow src/RNASeq.nf --fastq "data/tiny_dataset/fastq/*_R{1,2}.fastq"
-```
-
-```Groovy
-log.info "fastq files: ${params.fastq}"
-```
-
-This line simply displays the value of the variable
-
-```Groovy
-Channel
-  .fromFilePairs( params.fastq )
-```
-
-As we are working with paired-end RNASeq data, we tell nextflow to send pairs of fastq in the `fastq_file` channel.
-
-
-### cutadapt.config
-
-For the `fastq_sampler.nf` pipeline we used the command `head` present in most base UNIX systems. Here we want to use `cutadapt` which is not. Therefore, we have three main options:
-
-- install cutadapt locally so nextflow can use it
-- launch the process in a [Docker](https://www.docker.com/) container that has cutadapt installed
-- launch the process in a [Singularity](https://singularity.lbl.gov/) container (what we do on the PSMN and CCIN2P3)
-
-We are not going to use the first option which requires no configuration for nextflow but tedious tools installations. Instead, we are going to use existing *wrappers* and tell nextflow about it. This is what the [src/nf_modules/cutadapt/adaptor_removal_paired.config](https://gitbio.ens-lyon.fr/LBMC/nextflow/blob/master/src/nf_modules/cutadapt/adaptor_removal_paired.config) is used for.
-
-Copy the content of this config file to an `src/RNASeq.config` file. This file is structured in process blocks. Here we are only interested in configuring `adaptor_removal` process not `trimming` process. So you can remove the `trimming` block and commit it.
-
-You can test your pipeline with the following command:
-
-```sh
-./nextflow src/RNASeq.nf -c src/RNASeq.config -profile docker --fastq "data/tiny_dataset/fastq/*_R{1,2}.fastq"
-```
-
-
-## UrQt
-
-The second step of the pipeline is to trim reads by quality.
-
-Browse for [src/nf_modules/urqt/trimming_paired.nf](https://gitbio.ens-lyon.fr/LBMC/nextflow/blob/master/src/nf_modules/urqt/trimming_paired.nf), this file contains examples for UrQt. We are interested in the *for paired-end data* section of the code. Copy the process section code in your pipeline and commit it.
-
-This code won’t work if you try to run it: the `fastq_file` channel is already consumed by the `adaptor_removal` process. In nextflow once a channel is used by a process, it ceases to exist. Moreover, we don’t want to trim the input fastq, we want to trim the fastq that comes from the `adaptor_removal` process.
-
-Therefore, you need to change the line:
-
-```Groovy
-set pair_id, file(reads) from fastq_files
-```
-
-In the `trimming` process to:
-
-```Groovy
-set pair_id, file(reads) from fastq_files_cut
-```
-
-The two processes are now connected by the channel `fastq_files_cut`.
-
-Add the content of the [src/nf_modules/urqt/trimming_paired.config](https://gitbio.ens-lyon.fr/LBMC/nextflow/blob/master/src/nf_modules/urqt/trimming_paired.config) file to your `src/RNASeq.config` file and commit it.
-
-You can test your pipeline.
-
-## BEDtools
-
-Kallisto need the sequences of the transcripts that need to be quantified. We are going to extract these sequences from the reference `data/tiny_dataset/fasta/tiny_v2.fasta` with the `bed` annotation `data/tiny_dataset/annot/tiny.bed`.
-
-You can copy to your `src/RNASeq.nf` file the content of [src/nf_modules/bedtools/fasta_from_bed.nf](https://gitbio.ens-lyon.fr/LBMC/nextflow/blob/master/src/nf_modules/bedtools/fasta_from_bed.nf) and to your `src/RNASeq.config` file the content of [src/nf_modules/bedtools/fasta_from_bed.config](https://gitbio.ens-lyon.fr/LBMC/nextflow/blob/master/src/nf_modules/bedtools/fasta_from_bed.config).
-
-Commit your work and test your pipeline with the following command:
-
-```sh
-./nextflow src/RNASeq.nf -c src/RNASeq.config -profile docker --fastq "data/tiny_dataset/fastq/*_R{1,2}.fastq" --fasta "data/tiny_dataset/fasta/tiny_v2.fasta" --bed "data/tiny_dataset/annot/tiny.bed"
-```
-
-## Kallisto
-
-Kallisto run in two steps: the indexation of the reference and the quantification on this index.
-
-You can copy to your `src/RNASeq.nf` file the content of the files [src/nf_modules/kallisto/indexing.nf](https://gitbio.ens-lyon.fr/LBMC/nextflow/blob/master/src/nf_modules/kallisto/indexing.nf) and [src/nf_modules/kallisto/mapping_paired.nf](https://gitbio.ens-lyon.fr/LBMC/nextflow/blob/master/src/nf_modules/kallisto/mapping_paired.nf). You can add to your file `src/RNASeq.config` file the content of the files [src/nf_modules/kallisto/indexing.config](https://gitbio.ens-lyon.fr/LBMC/nextflow/blob/master/src/nf_modules/kallisto/indexing.config) and [src/nf_modules/kallisto/mapping_paired.config](https://gitbio.ens-lyon.fr/LBMC/nextflow/blob/master/src/nf_modules/kallisto/mapping_paired.config).
-
-We are going to work with paired-end so only copy the relevant processes. The `index_fasta` process needs to take as input the output of your `fasta_from_bed` process. The `fastq` input of your `mapping_fastq` process needs to take as input the output of your `index_fasta` process and the `trimming` process.
-
-Commit your work and test your pipeline.
-You now have a RNASeq analysis pipeline that can run locally with Docker!
-
-
-## Additional nextflow option
-
-With nextflow you can restart the computation of a pipeline and get a trace of the process with the following options:
-
-```sh
- -resume -with-dag results/RNASeq_dag.pdf -with-timeline results/RNASeq_timeline
-```
-
-# Run your RNASeq pipeline on the PSMN
-
-First you need to connect to the PSMN:
-
-```sh
-login@allo-psmn
-```
-Then once connected to `allo-psmn`, you can connect to `e5-2667v4comp1`:
-
-```sh
-login@e5-2667v4comp1
-```
-
-## Set your environment
-
-Create and go to your `scratch` folder:
-
-```sh
-mkdir -p /scratch/Bio/<login>
-cd /scratch/Bio/<login>
-```
-
-Then you need to clone your pipeline and get the data:
-
-```sh
-git clone https://gitbio.ens-lyon.fr/<usr_name>/nextflow.git
-cd nextflow/data
-git clone https://gitbio.ens-lyon.fr/LBMC/hub/tiny_dataset.git
-cd ..
-```
-
-## Run nextflow
-
-As we don’t want nextflow to be killed in case of disconnection, we start by launching `tmux`. In case of deconnection, you can restore your session with the command `tmux a` and close one with `ctr + b + d`
-
-```sh
-tmux
-src/install_nextflow.sh
-./nextflow src/RNASeq.nf -c src/RNASeq.config -profile psmn --fastq "data/tiny_dataset/fastq/*_R{1,2}.fastq" --fasta "data/tiny_dataset/fasta/tiny_v2.fasta" --bed "data/tiny_dataset/annot/tiny.bed" -w /scratch/Bio/<login>
-```
-
-To use the scratch for nextflow computations add the option :
-
-```sh
--w /scratch/<login>
-```
-
-You just ran your pipeline on the PSMN!
diff --git a/doc/available_tools.md b/doc/available_tools.md
deleted file mode 100644
index 297cc27e9211f15023d415a8e20b13b54c8aac18..0000000000000000000000000000000000000000
--- a/doc/available_tools.md
+++ /dev/null
@@ -1,42 +0,0 @@
-## Available tools
-
-- **nf module**: a working example of nextflow process is available in `src/nf_modules/<tools>/<tool>.nf` and `src/nf_modules/<tools>/<tool>.config`
-- **docker module**: you can create a docker with the `src/docker_modules/<tool>/<version>/docker_init.sh`
-- **psmn module**: you can use the tool in the PSMN
-- **IN2P3 module**: you can use the tool in the CCIN2P3
-
-| tool | nf module | docker module | psmn module | in2p3 module |
-|------|:---------:|:-------------:|:-----------:|:-------------:|
-BEDtools | ok | ok | ok | ok 
-BFCtools |**no**  | ok | ok | ok
-bioawk |**no**  | ok | ok | ok
-Bowtie | ok | ok | **no** | ok
-Bowtie2 | ok | ok | ok | ok
-BWA | ok | ok | ok | ok
-canu | ok | ok | ok | ok
-cutadapt | ok | ok | ok | ok
-deepTools | ok | ok | ok | ok
-fastp | ok | ok | ok | ok
-FastQC | ok | ok | ok | ok
-file_handle | **no** | ok | ok | ok
-GATK | **no** | ok | ok | ok
-HISAT2 | ok | ok | ok | ok
-HTSeq | ok | ok | ok | ok
-Kallisto | ok | ok | ok | ok
-MACS2 | ok | ok | ok | ok
-MultiQC | ok | ok | ok | ok
-MUSIC | ok | ok | ok | ok
-picard | **no** | ok | ok | ok
-pigz | **no** | ok | ok | ok
-RSEM | ok | ok | ok | ok
-Salmon | **no** | ok | ok | ok
-sambamba | ok | ok | ok | ok
-samblaster | ok | ok | ok | ok
-SAMtools | ok | ok | ok | ok
-SRAtoolkit | ok | ok | ok | ok
-STAR | ok | ok | ok | ok
-subread | **no** | ok | ok | ok
-TopHat | **no** | ok | ok | ok
-Trimmomatic | **no** | ok | ok | ok
-UMItools  | **no** | ok | ok | ok
-UrQt | ok | ok | ok | ok
diff --git a/doc/building_your_pipeline.md b/doc/building_your_pipeline.md
new file mode 100644
index 0000000000000000000000000000000000000000..90fba56d210c3bc229a6beb173e608214ddcc30c
--- /dev/null
+++ b/doc/building_your_pipeline.md
@@ -0,0 +1,471 @@
+# Building your own pipeline
+
+The goal of this guide is to walk you through the Nextflow pipeline-building process. You will learn:
+
+1. How to use this [git repository (LBMC/nextflow)](https://gitbio.ens-lyon.fr/LBMC/nextflow) as a template for your project.
+2. The basics of [Nextflow](https://www.nextflow.io/), the pipeline manager that we use at the lab.
+3. How to build a simple pipeline for the transcript-level quantification of RNASeq data.
+4. How to run the exact same pipeline on a computing center ([PSMN](http://www.ens-lyon.fr/PSMN/doku.php)).
+
+This guide assumes that you followed the [Git basis training course](https://gitbio.ens-lyon.fr/LBMC/hub/formations/git_basis).
+
+# Initialize your own project
+
+You are going to build a pipeline for you or your team. So the first step is to create your own project.
+
+## Forking
+
+Instead of reinventing the wheel, you can use the [LBMC/nextflow](https://gitbio.ens-lyon.fr/LBMC/nextflow) as a template.
+To easily do so, go to the [LBMC/nextflow](https://gitbio.ens-lyon.fr/LBMC/nextflow) repository and click on the [**fork**](https://gitbio.ens-lyon.fr/LBMC/nextflow/forks/new) button (you need to log-in).
+
+![fork button](./img/fork.png)
+
+In git, the [action of forking](https://git-scm.com/book/en/v2/GitHub-Contributing-to-a-Project) means that you are going to make your own private copy of a repository.
+This repository will keep a link with the original [LBMC/nextflow](https://gitbio.ens-lyon.fr/LBMC/nextflow) project, from which you will be able to:
+
+- [get updates](https://gitbio.ens-lyon.fr/LBMC/nextflow#getting-the-latest-updates) from the `LBMC/nextflow` repository
+- propose updates (see the [contributing guide](https://gitbio.ens-lyon.fr/LBMC/nextflow/-/blob/master/CONTRIBUTING.md#forking))
+
+
+## Project organization
+
+This project (and yours) follows the [guide of good practices for the LBMC](http://www.ens-lyon.fr/LBMC/intranet/services-communs/pole-bioinformatique/ressources/good_practice_LBMC).
+
+You are now on the main page of your fork of the [LBMC/nextflow](https://gitbio.ens-lyon.fr/LBMC/nextflow) repository. You can explore this project; all the code in it is under the CeCILL licence (see the [LICENCE](https://gitbio.ens-lyon.fr/LBMC/nextflow/blob/master/LICENSE) file).
+
+The [README.md](https://gitbio.ens-lyon.fr/LBMC/nextflow/blob/master/README.md) file contains instructions to run your pipeline and test its installation.
+
+The [CONTRIBUTING.md](https://gitbio.ens-lyon.fr/LBMC/nextflow/blob/master/CONTRIBUTING.md) file contains guidelines if you want to contribute to the [LBMC/nextflow](https://gitbio.ens-lyon.fr/LBMC/nextflow).
+
+The [data](https://gitbio.ens-lyon.fr/LBMC/nextflow/tree/master/data) folder will be the place where you store the raw data for your analysis.
+The [results](https://gitbio.ens-lyon.fr/LBMC/nextflow/tree/master/results) folder will be the place where you store the results of your analysis.
+
+**The content of `data` and `results` folders should never be saved on git.**
+
+The [doc](https://gitbio.ens-lyon.fr/LBMC/nextflow/tree/master/doc) folder contains the documentation and this guide.
+
+And most interestingly for you, the [src](https://gitbio.ens-lyon.fr/LBMC/nextflow/tree/master/src) folder contains the code to wrap tools. This folder contains one visible subdirectory, `nf_modules`, some pipeline examples, and other hidden folders and files.
+
+# Nextflow pipeline
+
+A pipeline is a succession of [**processes**](https://www.nextflow.io/docs/latest/process.html#process-page). Each `process` has data input(s) and optional data output(s). Data flows are modeled as [**channels**](https://www.nextflow.io/docs/latest/channel.html).
+
+## Processes
+
+Here is an example of **process**:
+
+```Groovy
+process sample_fasta {
+  input:
+  path fasta
+
+  output:
+  path "sample.fasta", emit: fasta_sample
+
+  script:
+"""
+head ${fasta} > sample.fasta
+"""
+}
+```
+
+We have the process `sample_fasta` that takes a fasta `path` as input and produces a fasta `path` as output. The `process` task itself is defined in the `script:` block and within `"""`.
+
+```Groovy
+input:
+path fasta
+```
+
+When we zoom in on the `input:` block, we see that we define a variable `fasta` of type `path`.
+This means that the `sample_fasta` `process` is going to receive a flux of fasta file(s).
+Nextflow is going to write a file, named after the content of the variable `fasta`, in the root of the folder where `script:` is executed.
+
+```Groovy
+output:
+path "sample.fasta", emit: fasta_sample
+```
+
+At the end of the script, a file named `sample.fasta` is found in the root of the folder where `script:` is executed and will be emitted as `fasta_sample`.
+
+Using the WebIDE of Gitlab, create a file `src/fasta_sampler.nf`:
+![webide](./img/webide.png)
+
+The first line that you need to add is:
+
+```Groovy
+nextflow.enable.dsl=2
+```
+
+Then add the `sample_fasta` process and commit it to your repository.
+
+
+## Workflow
+
+In Nextflow, `process` blocks are chained together within a `workflow` block.
+For the time being, we only have one `process`, so `workflow` may look like an unnecessary complication, but keep in mind that we want to be able to write complex bioinformatic pipelines.
+
+```Groovy
+workflow {
+  sample_fasta(fasta_file)
+}
+```
+
+Like `process` blocks, a `workflow` can take inputs (here `fasta_file`)
+and transmit them to `process`es:
+
+```Groovy
+  sample_fasta(fasta_file)
+```
+
+Add the definition of the `workflow` to the `src/fasta_sampler.nf` file and commit it to your repository.
+
+## Channels
+
+Why bother with `channel`s? In the above example, the advantages of `channel`s are not really clear. We could have just given the `fasta` file to the `workflow`. But what if we have many fasta files to process? What if we have sub processes to run on each of the sampled fasta files? Nextflow can easily deal with these problems with the help of `channel`s.
+
+> **Channels** are streams of items that are emitted by a source and consumed by a process. A process with a `channel` as input will be run on every item sent through the `channel`.
+
+```Groovy
+channel
+  .fromPath( "data/tiny_dataset/fasta/*.fasta" )
+  .set { fasta_file }
+```
+
+Here we define a `channel`, `fasta_file`, that is going to send every fasta file from the folder `data/tiny_dataset/fasta/` to the process that takes it as input.
+
+Add the definition of the `channel`, above the `workflow` block, to the `src/fasta_sampler.nf` file and commit it to your repository.
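+
+Once the three parts are in place, your `src/fasta_sampler.nf` should look like this sketch (simply the snippets above, assembled in order):
+
+```Groovy
+nextflow.enable.dsl=2
+
+process sample_fasta {
+  input:
+  path fasta
+
+  output:
+  path "sample.fasta", emit: fasta_sample
+
+  script:
+  """
+  head ${fasta} > sample.fasta
+  """
+}
+
+channel
+  .fromPath( "data/tiny_dataset/fasta/*.fasta" )
+  .set { fasta_file }
+
+workflow {
+  sample_fasta(fasta_file)
+}
+```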
+
+## Run your pipeline locally
+
+After writing this first pipeline, you may want to test it. To do that, first clone your repository.
+After following the [Git basis training course](https://gitbio.ens-lyon.fr/LBMC/hub/formations/git_basis), you should have an up-to-date `ssh` configuration to connect to the `gitbio.ens-lyon.fr` git server.
+
+You can run the following commands to download your project on your computer:
+
+```sh
+git clone git@gitbio.ens-lyon.fr:<usr_name>/nextflow.git
+cd nextflow
+src/install_nextflow.sh
+```
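+
+You can check that the installation worked by asking the `nextflow` executable (downloaded at the root of the project) for its version:
+
+```sh
+./nextflow -v
+```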
+
+We also need data to test our pipeline:
+
+```sh
+cd data
+git clone git@gitbio.ens-lyon.fr:LBMC/hub/tiny_dataset.git
+cd ..
+```
+
+We can run our pipeline with the following command:
+
+```sh
+./nextflow src/fasta_sampler.nf
+```
+
+
+## Getting your results
+
+Our pipeline seems to work, but we don’t know where the `sample.fasta` file is. To get results out of a `process`, we need to tell nextflow to write them somewhere (we may not want every intermediate file in our results).
+
+To do that we need to add the following line before the `input:` section:
+
+```Groovy
+publishDir "results/sampling/", mode: 'copy'
+```
+
+Every file described in the `output:` section will be copied by nextflow to the folder `results/sampling/`.
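+
+For example, the `sample_fasta` `process` would now start like this (a sketch; only the `publishDir` line is new):
+
+```Groovy
+process sample_fasta {
+  publishDir "results/sampling/", mode: 'copy'
+
+  input:
+  path fasta
+
+  // the output: and script: blocks are unchanged
+}
+```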
+
+Add this to your `src/fasta_sampler.nf` file with the WebIDE and commit to your repository.
+Pull your modifications locally with the command:
+
+```sh
+git pull origin master
+```
+
+You can run your pipeline again and check the content of the folder `results/sampling`.
+
+## Fasta everywhere
+
+We ran our pipeline on one fasta file. How would nextflow handle 100 of them? To test that, we need to duplicate the `tiny_v2.fasta` file:
+
+```sh
+for i in {1..100}
+do
+  cp data/tiny_dataset/fasta/tiny_v2.fasta data/tiny_dataset/fasta/tiny_v2_${i}.fasta
+done
+```
+
+You can run your pipeline again and check the content of the folder `results/sampling`.
+
+Every `sample_fasta` process writes a `sample.fasta` file, so every copy in `results/sampling/` overwrites the previous one. We need to make the name of the output file depend on the name of the input file.
+
+```Groovy
+  output:
+  path "*_sample.fasta", emit: fasta_sample
+
+  script:
+  """
+  head ${fasta} > ${fasta.simpleName}_sample.fasta
+  """
+```
+
+Add this to your `src/fasta_sampler.nf` file with the WebIDE and commit it to your repository before pulling your modifications locally.
+You can run your pipeline again and check the content of the folder `results/sampling`.
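+
+If everything went well, the folder should now contain one sampled file per input fasta, with names like these (a sketch based on the files duplicated above):
+
+```sh
+ls results/sampling/
+# tiny_v2_sample.fasta  tiny_v2_1_sample.fasta  ...  tiny_v2_100_sample.fasta
+```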
+
+Congratulations, you built your first one-step nextflow pipeline!
+
+
+# Build your own RNASeq pipeline
+
+In this section you are going to build your own pipeline for RNASeq analysis from the code available in the `src/nf_modules` folder.
+
+Open the WebIDE and create a `src/RNASeq.nf` file.
+
+The first line that we are going to add is:
+
+```Groovy
+nextflow.enable.dsl=2
+```
+
+## fastp 
+
+The first step of the pipeline is to remove any Illumina adaptors left in your read files and to trim your reads by quality.
+
+The [LBMC/nextflow](https://gitbio.ens-lyon.fr/LBMC/nextflow) template provides you with many tools, for which you can find a predefined `process` block.
+You can find a list of these tools in the [`src/nf_modules`](./src/nf_modules) folder.
+You can also ask for a new tool by creating a [new issue for it](https://gitbio.ens-lyon.fr/LBMC/nextflow/-/issues/new) in the [LBMC/nextflow](https://gitbio.ens-lyon.fr/LBMC/nextflow) project.
+
+We are going to include the [`src/nf_modules/fastp/main.nf`](./src/nf_modules/fastp/main.nf) file in our `src/RNASeq.nf` file:
+
+```Groovy
+include { fastp } from "./nf_modules/fastp/main.nf"
+```
+The path `./nf_modules/fastp/main.nf` is relative to the `src/RNASeq.nf` file, which is why we don’t include the `src/` part of the path.
+
+With this line we can call the `fastp` block in our future `workflow` without having to write it!
+If we check the content of the file [`src/nf_modules/fastp/main.nf`](./src/nf_modules/fastp/main.nf), we can see that by including `fastp`, we are including a sub-`workflow` (we will come back to this object later). Sub-`workflow`s can be used like `process`es.
+
+This sub-`workflow` takes a `fastq` `channel`. We need to create one:
+
+```Groovy
+channel
+  .fromFilePairs( "data/tiny_dataset/fastq/*_R{1,2}.fastq", size: -1)
+  .set { fastq_files }
+```
+
+The `.fromFilePairs()` function creates a `channel` of pairs of fastq files. Therefore, the items emitted by the `fastq_files` channel are going to be pairs of fastq files for paired-end data.
+
+The option `size: -1` allows for arbitrary numbers of associated files. Therefore, we can use the same `channel` creation for single-end data.
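+
+To make this concrete, here is a sketch of the items such a `channel` emits, assuming files named `sample1_R1.fastq` and `sample1_R2.fastq`:
+
+```Groovy
+// paired-end data: one item per pair of files
+// [ "sample1", [ sample1_R1.fastq, sample1_R2.fastq ] ]
+// single-end data (only sample1_R1.fastq present): one item per file
+// [ "sample1", [ sample1_R1.fastq ] ]
+```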
+
+We can now add the `workflow` definition, which passes the `fastq_files` `channel` to `fastp`, to our `src/RNASeq.nf` file:
+
+```Groovy
+workflow {
+  fastp(fastq_files)
+}
+```
+
+You can commit your `src/RNASeq.nf` file, `pull` your modifications locally, and run your pipeline with the command:
+
+```sh
+./nextflow src/RNASeq.nf
+```
+
+What is happening?
+
+## Nextflow `-profile`
+
+Nextflow tells you the following error: `fastp: command not found`. You don’t have `fastp` installed on your computer.
+
+Tool installation can be a tedious process, and reinstalling old versions of those tools to reproduce old analyses can be very difficult.
+Container technologies like [Docker](https://www.docker.com/) or [Singularity](https://sylabs.io/singularity/) create small virtual environments where we can install software in a given version with all its dependencies. This environment can be saved and shared, giving access to this exact working version of the software.
+
+> Why two different systems?
+
+> Docker is easy to use and can be installed on Windows / MacOS / GNU/Linux, but needs admin rights.
+> Singularity can only be used on GNU/Linux, but doesn’t need admin rights and can be used in shared environments.
+
+The [LBMC/nextflow](https://gitbio.ens-lyon.fr/LBMC/nextflow) template provides you with [4 different `-profile`s to run your pipeline](https://gitbio.ens-lyon.fr/LBMC/nextflow/-/blob/master/doc/getting_started.md#nextflow-profile).
+
+Profiles are defined in the [`src/nextflow.config`](./src/nextflow.config) file, which is the default configuration file for your pipeline (you don’t have to edit this file).
+
+To run the pipeline locally, you can use the `singularity` or `docker` profile:
+
+```sh
+./nextflow src/RNASeq.nf -profile singularity
+```
+
+The `fastp` image (`singularity` or `docker`) is downloaded automatically, and the fastq files are processed.
+
+## Pipeline `--` arguments
+
+We have defined the fastq files path within our `src/RNASeq.nf` file.
+But what if we want to share our pipeline with someone who doesn’t want to analyze the `tiny_dataset`, but other fastq files?
+We can define a variable instead of hard-coding the path:
+
+```Groovy
+params.fastq = "data/fastq/*_{1,2}.fastq"
+channel
+  .fromFilePairs( params.fastq, size: -1)
+  .set { fastq_files }
+```
+
+We declare a variable that contains the path of the fastq files to look for. The advantage of using `params.fastq` is that the option `--fastq` is now a parameter of your pipeline.
+
+Thus, you can call your pipeline with the `--fastq` option.
+
+You can commit your `src/RNASeq.nf` file, `pull` your modifications locally, and run your pipeline with the command:
+
+```sh
+./nextflow src/RNASeq.nf -profile singularity --fastq "data/tiny_dataset/fastq/*_R{1,2}.fastq"
+```
+
+We can also add the following line:
+
+```Groovy
+log.info "fastq files: ${params.fastq}"
+```
+
+This line simply displays the value of the variable when the pipeline starts.
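+
+For example, with the `--fastq` option used above, the start of the run should display a line like:
+
+```
+fastq files: data/tiny_dataset/fastq/*_R{1,2}.fastq
+```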
+
+## BEDtools
+
+We need the sequences of the transcripts to quantify. We are going to extract these sequences from the reference `data/tiny_dataset/fasta/tiny_v2.fasta` with the `bed` annotation file `data/tiny_dataset/annot/tiny.bed`.
+
+You can include the `fasta_from_bed` `process` from the [src/nf_modules/bedtools/main.nf](https://gitbio.ens-lyon.fr/LBMC/nextflow/blob/master/src/nf_modules/bedtools/main.nf) file into your `src/RNASeq.nf` file.
+
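+Based on the `include` syntax we used for `fastp`, the line to add should look like this:
+
+```Groovy
+include { fasta_from_bed } from "./nf_modules/bedtools/main.nf"
+```
+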
+You also need to create a `fasta_files` `channel` and a `bed_files` `channel` to use as inputs:
+
+```Groovy
+log.info "fasta file : ${params.fasta}"
+log.info "bed file : ${params.bed}"
+
+channel
+  .fromPath( params.fasta )
+  .ifEmpty { error "Cannot find any fasta files matching: ${params.fasta}" }
+  .map { it -> [it.simpleName, it]}
+  .set { fasta_files }
+channel
+  .fromPath( params.bed )
+  .ifEmpty { error "Cannot find any bed files matching: ${params.bed}" }
+  .map { it -> [it.simpleName, it]}
+  .set { bed_files }
+```
+
+We introduce 2 new operators:
+- `.ifEmpty { error "Cannot find any fasta files matching: ${params.fasta}" }` throws an error if the path doesn’t match any file
+- `.map { it -> [it.simpleName, it]}` transforms our `channel` into a format compatible with the [`CONTRIBUTING`](../CONTRIBUTING.md) rules. Items in the `channel` have the following shape `[file_id, [file]]`, like the ones emitted by the `.fromFilePairs(..., size: -1)` function.
+
+We can add the `fasta_from_bed` step to our `workflow`:
+
+```Groovy
+workflow {
+  fastp(fastq_files)
+  fasta_from_bed(fasta_files, bed_files)
+}
+```
+
+Commit your work and test your pipeline with the following command:
+
+```sh
+./nextflow src/RNASeq.nf -profile singularity --fastq "data/tiny_dataset/fastq/*_R{1,2}.fastq" --fasta "data/tiny_dataset/fasta/tiny_v2.fasta" --bed "data/tiny_dataset/annot/tiny.bed"
+```
+
+## Kallisto
+
+Kallisto runs in two steps: the indexing of the reference and the quantification of the transcripts against this index.
+
+You can include two `process`es with the following syntax:
+
+```Groovy
+include { index_fasta; mapping_fastq } from './nf_modules/kallisto/main.nf'
+```
+
+The `index_fasta` process needs to take as input the output of your `fasta_from_bed` `process`, which has the shape `[fasta_id, [fasta_file]]`.
+
+Your `mapping_fastq` `process` needs to take as input both the output of your `index_fasta` `process` and the output of the `fastp` `process`, of shape `[index_id, [index_file]]` and `[fastq_id, [fastq_r1_file, fastq_r2_file]]`, respectively.
+
+The output of a `process` is accessible through `<process_name>.out`.
+In cases where we have an `emit: <channel_name>`, we can access the corresponding channel with `<process_name>.out.<channel_name>`:
+
+```Groovy
+workflow {
+  fastp(fastq_files)
+  fasta_from_bed(fasta_files, bed_files)
+  index_fasta(fasta_from_bed.out.fasta)
+  mapping_fastq(index_fasta.out.index.collect(), fastp.out.fastq)
+}
+```
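+
+Note the `.collect()` operator: it gathers the index into a single value `channel` that can be reused for every fastq pair emitted by `fastp` (without it, the index `channel` would be consumed after the first pair and only one quantification would run).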
+
+Commit your work and test your pipeline.
+
+## Returning results
+
+By default, none of the `process`es defined in `src/nf_modules` use the `publishDir` directive.
+You can set their `publishDir` directory by specifying the following parameter:
+
+```Groovy
+params.<process_name>_out = "path"
+```
+
+Where "path" will describe a path within the `results` folder
+
+Therefore you can either:
+
+- call your pipeline with the following parameter `--mapping_fastq_out "quantification/"`
+- add the following line to your `src/RNASeq.nf` file to get the output of the `mapping_fastq` process:
+
+```Groovy
+include { index_fasta; mapping_fastq } from './nf_modules/kallisto/main.nf' addParams(mapping_fastq_out: "quantification/")
+```
+
+Commit your work and test your pipeline.
+You now have an RNASeq analysis pipeline that can run locally with Docker or Singularity!
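+
+For reference, here is a sketch of what the assembled `src/RNASeq.nf` may look like at this point (the default `params.fasta` and `params.bed` paths are assumptions to override on the command line, and we use the `addParams` variant of the kallisto include):
+
+```Groovy
+nextflow.enable.dsl=2
+
+include { fastp } from "./nf_modules/fastp/main.nf"
+include { fasta_from_bed } from "./nf_modules/bedtools/main.nf"
+include { index_fasta; mapping_fastq } from './nf_modules/kallisto/main.nf' addParams(mapping_fastq_out: "quantification/")
+
+params.fastq = "data/fastq/*_{1,2}.fastq"
+params.fasta = "data/fasta/*.fasta" // assumed default, override with --fasta
+params.bed = "data/annot/*.bed"     // assumed default, override with --bed
+
+log.info "fastq files: ${params.fastq}"
+log.info "fasta file : ${params.fasta}"
+log.info "bed file : ${params.bed}"
+
+channel
+  .fromFilePairs( params.fastq, size: -1)
+  .set { fastq_files }
+channel
+  .fromPath( params.fasta )
+  .ifEmpty { error "Cannot find any fasta files matching: ${params.fasta}" }
+  .map { it -> [it.simpleName, it]}
+  .set { fasta_files }
+channel
+  .fromPath( params.bed )
+  .ifEmpty { error "Cannot find any bed files matching: ${params.bed}" }
+  .map { it -> [it.simpleName, it]}
+  .set { bed_files }
+
+workflow {
+  fastp(fastq_files)
+  fasta_from_bed(fasta_files, bed_files)
+  index_fasta(fasta_from_bed.out.fasta)
+  mapping_fastq(index_fasta.out.index.collect(), fastp.out.fastq)
+}
+```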
+
+## Bonus
+
+A `report.html` file is created for each run with the details of your pipeline execution.
+You can use the `-resume` option to reuse cached process results (stored in the `work/` folder) instead of recomputing them.
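+
+For example:
+
+```sh
+./nextflow src/RNASeq.nf -profile singularity --fastq "data/tiny_dataset/fastq/*_R{1,2}.fastq" --fasta "data/tiny_dataset/fasta/tiny_v2.fasta" --bed "data/tiny_dataset/annot/tiny.bed" -resume
+```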
+
+# Run your RNASeq pipeline on the PSMN
+
+First you need to connect to the PSMN:
+
+```sh
+ssh login@allo-psmn
+```
+Then, once connected to `allo-psmn`, you can connect to a compute node, here `m6142comp2`:
+
+```sh
+ssh login@m6142comp2
+```
+
+## Set your environment
+
+Create and go to your `scratch` folder:
+
+```sh
+mkdir -p /scratch/Bio/<login>
+cd /scratch/Bio/<login>
+```
+
+Then you need to clone your pipeline and get the data:
+
+```sh
+git clone https://gitbio.ens-lyon.fr/<usr_name>/nextflow.git
+cd nextflow/data
+git clone https://gitbio.ens-lyon.fr/LBMC/hub/tiny_dataset.git
+cd ..
+```
+
+## Run nextflow
+
+As we don’t want nextflow to be killed in case of disconnection, we start by launching `tmux`. If you get disconnected, you can restore your session with the command `tmux a`, and you can detach a session with `ctrl` + `b` then `d`.
+
+```sh
+tmux
+src/install_nextflow.sh
+./nextflow src/RNASeq.nf -profile psmn --fastq "data/tiny_dataset/fastq/*_R{1,2}.fastq" --fasta "data/tiny_dataset/fasta/tiny_v2.fasta" --bed "data/tiny_dataset/annot/tiny.bed"
+```
+
+You just ran your pipeline on the PSMN!
diff --git a/doc/getting_started.md b/doc/getting_started.md
index b9ef5cee8438dcf7fe59b8931cecbcfb6830cba3..b1a01f13775b6e9c49726c713e53878189758040 100644
--- a/doc/getting_started.md
+++ b/doc/getting_started.md
@@ -1,47 +1,45 @@
-## Getting Started
+# Getting Started
 
-These instructions will get you a copy of the project up and running on your local machine for development and testing purposes. See deployment for notes on how to deploy the project on a live system.
+These instructions will get you a copy of the project up and running on your local machine for development and testing purposes.
+You can follow the [building your pipeline guide](./building_your_pipeline.md) to learn how to build your own pipelines.
 
-### Prerequisites
+## Prerequisites
 
-To run nextflow on you computer you need to have java (>= 1.8) installed.
+To run nextflow on your computer, you need to have `java` (>= 1.8) installed.
 
 ```sh
 java --version
 ```
 
-To be able to easily test tools already implemented for nextflow on your computer (`src/nf_modules/` to see their list). You need to have docker installed.
+and `git`
 
 ```sh
-docker run hello-world
+git --version
 ```
 
-### Installing
-
-To install nextflow on you computer simply run the following command:
+To be able to run existing tools in nextflow on your computer (see `src/nf_modules/` for the list), you need to have `docker` installed.
 
 ```sh
-src/install_nextflow.sh
+docker run hello-world
 ```
 
-Then to initialize a given tools run the following command:
+Alternatively, if you are on Linux, you can use `singularity`:
 
 ```sh
-src/docker_modules/<tool_name>/<tool_version>/docker_init.sh
+singularity run docker://hello-world
 ```
 
-For example to initialize `file_handle` version `0.1.1`, run:
+## Installing
 
-```sh
-src/docker_modules/file_handle/0.1.1/docker_init.sh
-```
+To install nextflow on your computer, simply run the following commands:
 
-To initialize all the tools:
 ```sh
-find src/docker_modules/ -name "docker_init.sh" | awk '{system($0)}'
+git clone git@gitbio.ens-lyon.fr:LBMC/nextflow.git
+cd nextflow/
+src/install_nextflow.sh
 ```
 
-## Running the tests
+## Running a toy RNASeq quantification pipeline
 
 To run tests we first need to get a training set
 ```sh
@@ -56,11 +54,39 @@ cd ..
-Then to run the tests for a given tools run the following command:
+Then run the toy RNASeq quantification pipeline with the following command:
 
 ```sh
-src/nf_modules/<tool_name>/<tool_version>/tests.sh
+./nextflow src/solution_RNASeq.nf --fastq "data/tiny_dataset/fastq/tiny2_R{1,2}.fastq.gz" --fasta "data/tiny_dataset/fasta/tiny_v2_10.fasta" --bed "data/tiny_dataset/annot/tiny.bed" -profile docker
+```
+
+## Nextflow profile
+
+By default, the `src/nextflow.config` file defines 4 different profiles:
+
+- `-profile docker` each process of the pipeline will be executed within a `docker` container locally
+- `-profile singularity` each process of the pipeline will be executed within a `singularity` container locally
+- `-profile psmn` each process will be sent as a separate job within a `singularity` container on the PSMN
+- `-profile ccin2p3` each process will be sent as a separate job within a `singularity` container on the CCIN2P3
+
+If the containers are not found locally, they are automatically downloaded before running the process. For the PSMN and the CCIN2P3, the `singularity` images are downloaded into a shared folder (`/scratch/Bio/singularity` for the PSMN, and `/sps/lbmc/common/singularity/` for the CCIN2P3).
+
+
+### PSMN
+
+When running `nextflow` on the PSMN, we recommend using `tmux` before launching the pipeline:
+
+```sh
+tmux
+./nextflow src/solution_RNASeq.nf --fastq "data/tiny_dataset/fastq/tiny2_R{1,2}.fastq.gz" --fasta "data/tiny_dataset/fasta/tiny_v2_10.fasta" --bed "data/tiny_dataset/annot/tiny.bed" -profile psmn
 ```
 
-For example to run the tests on `Bowtie2` run:
+This way, the `nextflow` process will continue to run even if you are disconnected.
+You can re-attach the `tmux` session with the command `tmux a` (and press `ctrl` + `b` then `d` to detach the attached session).
+
+### CCIN2P3
+
+When running `nextflow` on the CCIN2P3, you cannot use `tmux`; instead, you should submit a *daemon* job which will launch the `nextflow` command.
+You can edit the `src/ccin2p3.pbs` file to personalize your `nextflow` command and submit it with:
 
 ```sh
-src/nf_modules/bowtie2/tests.sh
+qsub src/ccin2p3.pbs
 ```
+
diff --git a/doc/nf_projects.md b/doc/nf_projects.md
index 2307ef1cd0df4552620476e510d420cde3e955ec..9ecab839f9df596e58269bd1ed249ca0e4e9bff6 100644
--- a/doc/nf_projects.md
+++ b/doc/nf_projects.md
@@ -1,5 +1,15 @@
 ## Projects using nextflow
 
+This page lists existing Nextflow-based pipeline projects in different categories:
+
+- [RNASeq](./nf_projects.md#rnaseq)
+- [scRNASeq](./nf_projects.md#scrnaseq)
+- [DNASeq](./nf_projects.md#dnaseq)
+- [ChipSeq](./nf_projects.md#chip-seq)
+
+To add your project to this list, please fork this project, modify this file and make a merge request.
+![merge request button](./img/merge_request.png)
+
 ### RNASeq
 
 - [https://_https://gitlab.biologie.ens-lyon.fr/gylab/salmoninyeast](https://_https://gitlab.biologie.ens-lyon.fr/gylab/salmoninyeast)
@@ -7,15 +17,18 @@
 - [https://_https://gitlab.biologie.ens-lyon.fr/vvanoost/nextflow](https://_https://gitlab.biologie.ens-lyon.fr/vvanoost/nextflow)
 - [https://gitlab.biologie.ens-lyon.fr/elabaron/HIV_project](https://gitlab.biologie.ens-lyon.fr/elabaron/HIV_project)
 
-### single-cell RNA_-Seq
+### scRNASeq
 
 - [https://gitlab.com/LBMC_UMR5239/sbdm/mars-seq](https://gitlab.com/LBMC_UMR5239/sbdm/mars-seq)
 
 ### DNASeq
 
 - [https://github.com/LBMC/ChrSexebelari](https://github.com/LBMC/ChrSexebelari)
+- [https://gitbio.ens-lyon.fr/LBMC/gylab/MappingNoise](https://gitbio.ens-lyon.fr/LBMC/gylab/MappingNoise)
+- [https://gitbio.ens-lyon.fr/LBMC/qrg/droso_hic_group/droso_haplo_rna_seq](https://gitbio.ens-lyon.fr/LBMC/qrg/droso_hic_group/droso_haplo_rna_seq)
 
 ### Chip-Seq
 
 - [https://gitlab.biologie.ens-lyon.fr/Auboeuf/ChIP-seq](https://gitlab.biologie.ens-lyon.fr/Auboeuf/ChIP-seq)
+- [https://gitbio.ens-lyon.fr/LBMC/Bernard/quantitative-nucleosome-analysis](https://gitbio.ens-lyon.fr/LBMC/Bernard/quantitative-nucleosome-analysis)
 
diff --git a/src/.conda_envs/.conda_envs_dir_test b/src/.conda_envs/.conda_envs_dir_test
deleted file mode 120000
index 79b89f16062f6b1cfa27a8ed0cd7d1805593f5d8..0000000000000000000000000000000000000000
--- a/src/.conda_envs/.conda_envs_dir_test
+++ /dev/null
@@ -1 +0,0 @@
-/Xnfs/lbmcdb/common/conda/envs/.conda_envs_dir_test
\ No newline at end of file
diff --git a/src/.conda_envs/Python_2.7.13 b/src/.conda_envs/Python_2.7.13
deleted file mode 120000
index 5c91793f9b75f93299586c455302e0e3e209384f..0000000000000000000000000000000000000000
--- a/src/.conda_envs/Python_2.7.13
+++ /dev/null
@@ -1 +0,0 @@
-/Xnfs/lbmcdb/common/conda/envs/Python_2.7.13
\ No newline at end of file
diff --git a/src/.conda_envs/Python_3.6.1 b/src/.conda_envs/Python_3.6.1
deleted file mode 120000
index 7a6e74cff5631cc204ff1683f7966e57bd47fc80..0000000000000000000000000000000000000000
--- a/src/.conda_envs/Python_3.6.1
+++ /dev/null
@@ -1 +0,0 @@
-/Xnfs/lbmcdb/common/conda/envs/Python_3.6.1
\ No newline at end of file
diff --git a/src/.conda_envs/R_3.3.1 b/src/.conda_envs/R_3.3.1
deleted file mode 120000
index 6544e0903216a580761735313b6504740ad9d3ef..0000000000000000000000000000000000000000
--- a/src/.conda_envs/R_3.3.1
+++ /dev/null
@@ -1 +0,0 @@
-/Xnfs/lbmcdb/common/conda/envs/R_3.3.1
\ No newline at end of file
diff --git a/src/.conda_envs/R_3.4.3 b/src/.conda_envs/R_3.4.3
deleted file mode 120000
index 2f4558021803fab2cb8c969911b8fc9cda94d985..0000000000000000000000000000000000000000
--- a/src/.conda_envs/R_3.4.3
+++ /dev/null
@@ -1 +0,0 @@
-/Xnfs/lbmcdb/common/conda/envs/R_3.4.3
\ No newline at end of file
diff --git a/src/.conda_envs/axtchain_377 b/src/.conda_envs/axtchain_377
deleted file mode 120000
index 845b1a9312cb20de7a8c177ff004354efd50250c..0000000000000000000000000000000000000000
--- a/src/.conda_envs/axtchain_377
+++ /dev/null
@@ -1 +0,0 @@
-/Xnfs/lbmcdb/common/conda/envs/axtchain_377
\ No newline at end of file
diff --git a/src/.conda_envs/bcftools_1.7 b/src/.conda_envs/bcftools_1.7
deleted file mode 120000
index c77cd622e68df50873142b109c36ef7a797a002e..0000000000000000000000000000000000000000
--- a/src/.conda_envs/bcftools_1.7
+++ /dev/null
@@ -1 +0,0 @@
-/Xnfs/lbmcdb/common/conda/envs/bcftools_1.7
\ No newline at end of file
diff --git a/src/.conda_envs/bedtools_2.25.0 b/src/.conda_envs/bedtools_2.25.0
deleted file mode 120000
index da8bc7748ceb28af6aaf93153faef36e4235d78b..0000000000000000000000000000000000000000
--- a/src/.conda_envs/bedtools_2.25.0
+++ /dev/null
@@ -1 +0,0 @@
-/Xnfs/lbmcdb/common/conda/envs/bedtools_2.25.0
\ No newline at end of file
diff --git a/src/.conda_envs/bioawk_1.0 b/src/.conda_envs/bioawk_1.0
deleted file mode 120000
index 1907adb042aa5c7667a0f4c68676b94ada4c0217..0000000000000000000000000000000000000000
--- a/src/.conda_envs/bioawk_1.0
+++ /dev/null
@@ -1 +0,0 @@
-/Xnfs/lbmcdb/common/conda/envs/bioawk_1.0
\ No newline at end of file
diff --git a/src/.conda_envs/biopython_1.74 b/src/.conda_envs/biopython_1.74
deleted file mode 120000
index a32165cab5f8aa5cee3f6324e9c2ceacd86e7387..0000000000000000000000000000000000000000
--- a/src/.conda_envs/biopython_1.74
+++ /dev/null
@@ -1 +0,0 @@
-/Xnfs/lbmcdb/common/conda/envs/biopython_1.74
\ No newline at end of file
diff --git a/src/.conda_envs/bowtie2_2.3.2 b/src/.conda_envs/bowtie2_2.3.2
deleted file mode 120000
index 6f58283d4e80af7e5168f906e070b7c75c8d70bf..0000000000000000000000000000000000000000
--- a/src/.conda_envs/bowtie2_2.3.2
+++ /dev/null
@@ -1 +0,0 @@
-/Xnfs/lbmcdb/common/conda/envs/bowtie2_2.3.2
\ No newline at end of file
diff --git a/src/.conda_envs/bowtie2_2.3.4.1 b/src/.conda_envs/bowtie2_2.3.4.1
deleted file mode 120000
index ee79b3966ee7a73c0b51ebe9a1a2808c656dd440..0000000000000000000000000000000000000000
--- a/src/.conda_envs/bowtie2_2.3.4.1
+++ /dev/null
@@ -1 +0,0 @@
-/Xnfs/lbmcdb/common/conda/envs/bowtie2_2.3.4.1
\ No newline at end of file
diff --git a/src/.conda_envs/bwa_0.7.17 b/src/.conda_envs/bwa_0.7.17
deleted file mode 120000
index 8ee9986b6d7c9423cf8ca4254d5e70296813955f..0000000000000000000000000000000000000000
--- a/src/.conda_envs/bwa_0.7.17
+++ /dev/null
@@ -1 +0,0 @@
-/Xnfs/lbmcdb/common/conda/envs/bwa_0.7.17
\ No newline at end of file
diff --git a/src/.conda_envs/canu_1.7 b/src/.conda_envs/canu_1.7
deleted file mode 120000
index 3b3b783c64fc065862d45ea73a29d605cc667895..0000000000000000000000000000000000000000
--- a/src/.conda_envs/canu_1.7
+++ /dev/null
@@ -1 +0,0 @@
-/Xnfs/lbmcdb/common/conda/envs/canu_1.7
\ No newline at end of file
diff --git a/src/.conda_envs/cdhit_4.6.8 b/src/.conda_envs/cdhit_4.6.8
deleted file mode 120000
index 0250dfbb10bcccaa03973e9c01413c723da4b817..0000000000000000000000000000000000000000
--- a/src/.conda_envs/cdhit_4.6.8
+++ /dev/null
@@ -1 +0,0 @@
-/Xnfs/lbmcdb/common/conda/envs/cdhit_4.6.8
\ No newline at end of file
diff --git a/src/.conda_envs/cutadapt_1.14 b/src/.conda_envs/cutadapt_1.14
deleted file mode 120000
index 33552e9c4735d6cfa2475b9fe29863cf1cba350d..0000000000000000000000000000000000000000
--- a/src/.conda_envs/cutadapt_1.14
+++ /dev/null
@@ -1 +0,0 @@
-/Xnfs/lbmcdb/common/conda/envs/cutadapt_1.14
\ No newline at end of file
diff --git a/src/.conda_envs/cutadapt_2.4 b/src/.conda_envs/cutadapt_2.4
deleted file mode 120000
index c2ab68ca481d52b8ace959546e87c07ef9b197cc..0000000000000000000000000000000000000000
--- a/src/.conda_envs/cutadapt_2.4
+++ /dev/null
@@ -1 +0,0 @@
-/Xnfs/lbmcdb/common/conda/envs/cutadapt_2.4
\ No newline at end of file
diff --git a/src/.conda_envs/deeptools_3.0.2 b/src/.conda_envs/deeptools_3.0.2
deleted file mode 120000
index 514f9cd80f15a903e5af0f370eaea45c3d38ab3c..0000000000000000000000000000000000000000
--- a/src/.conda_envs/deeptools_3.0.2
+++ /dev/null
@@ -1 +0,0 @@
-/Xnfs/lbmcdb/common/conda/envs/deeptools_3.0.2
\ No newline at end of file
diff --git a/src/.conda_envs/envs b/src/.conda_envs/envs
deleted file mode 120000
index cf88bffb3ae0500e1b2341fd4ee37bc68cbccfbd..0000000000000000000000000000000000000000
--- a/src/.conda_envs/envs
+++ /dev/null
@@ -1 +0,0 @@
-/Xnfs/lbmcdb/common/conda/envs/
\ No newline at end of file
diff --git a/src/.conda_envs/fastp_0.19.7 b/src/.conda_envs/fastp_0.19.7
deleted file mode 120000
index 325a0da0ef6d62c88eba7a4477d6df84525eeee3..0000000000000000000000000000000000000000
--- a/src/.conda_envs/fastp_0.19.7
+++ /dev/null
@@ -1 +0,0 @@
-/Xnfs/lbmcdb/common/conda/envs/fastp_0.19.7
\ No newline at end of file
diff --git a/src/.conda_envs/fastqc_0.11.5 b/src/.conda_envs/fastqc_0.11.5
deleted file mode 120000
index 73298f6fc3616a6de5777b317b93e8b628986b26..0000000000000000000000000000000000000000
--- a/src/.conda_envs/fastqc_0.11.5
+++ /dev/null
@@ -1 +0,0 @@
-/Xnfs/lbmcdb/common/conda/envs/fastqc_0.11.5
\ No newline at end of file
diff --git a/src/.conda_envs/flexi-splitter_1.0.0 b/src/.conda_envs/flexi-splitter_1.0.0
deleted file mode 120000
index 48e582352dbbb002a4c2cbe4b12218d26eb5c671..0000000000000000000000000000000000000000
--- a/src/.conda_envs/flexi-splitter_1.0.0
+++ /dev/null
@@ -1 +0,0 @@
-/Xnfs/lbmcdb/common/conda/envs/flexi-splitter_1.0.0
\ No newline at end of file
diff --git a/src/.conda_envs/gatk_3.8 b/src/.conda_envs/gatk_3.8
deleted file mode 120000
index 63462f4446854d83b270fb509620e081c1cdc00e..0000000000000000000000000000000000000000
--- a/src/.conda_envs/gatk_3.8
+++ /dev/null
@@ -1 +0,0 @@
-/Xnfs/lbmcdb/common/conda/envs/gatk_3.8
\ No newline at end of file
diff --git a/src/.conda_envs/gatk_3.8.0 b/src/.conda_envs/gatk_3.8.0
deleted file mode 120000
index 0463b9ccef8d7cc0caf0455d9ff9d550e6c31be6..0000000000000000000000000000000000000000
--- a/src/.conda_envs/gatk_3.8.0
+++ /dev/null
@@ -1 +0,0 @@
-/Xnfs/lbmcdb/common/conda/envs/gatk_3.8.0
\ No newline at end of file
diff --git a/src/.conda_envs/gatk_4.0.8.1 b/src/.conda_envs/gatk_4.0.8.1
deleted file mode 120000
index 96fb8371cfcc3d758cfcf4fe77e3c355b8579841..0000000000000000000000000000000000000000
--- a/src/.conda_envs/gatk_4.0.8.1
+++ /dev/null
@@ -1 +0,0 @@
-/Xnfs/lbmcdb/common/conda/envs/gatk_4.0.8.1
\ No newline at end of file
diff --git a/src/.conda_envs/hisat2_2.0.0 b/src/.conda_envs/hisat2_2.0.0
deleted file mode 120000
index 31d4300201094daba0c2be4ed9b9bffbc3ff8569..0000000000000000000000000000000000000000
--- a/src/.conda_envs/hisat2_2.0.0
+++ /dev/null
@@ -1 +0,0 @@
-/Xnfs/lbmcdb/common/conda/envs/hisat2_2.0.0
\ No newline at end of file
diff --git a/src/.conda_envs/hisat2_2.1.0 b/src/.conda_envs/hisat2_2.1.0
deleted file mode 120000
index 2a26ea5b4a994a456e3ba7228a502b30a0e4a914..0000000000000000000000000000000000000000
--- a/src/.conda_envs/hisat2_2.1.0
+++ /dev/null
@@ -1 +0,0 @@
-/Xnfs/lbmcdb/common/conda/envs/hisat2_2.1.0
\ No newline at end of file
diff --git a/src/.conda_envs/htseq_0.11.2 b/src/.conda_envs/htseq_0.11.2
deleted file mode 120000
index 512c5ab495a12bb9b5503de7d9371773f9cd5858..0000000000000000000000000000000000000000
--- a/src/.conda_envs/htseq_0.11.2
+++ /dev/null
@@ -1 +0,0 @@
-/Xnfs/lbmcdb/common/conda/envs/htseq_0.11.2
\ No newline at end of file
diff --git a/src/.conda_envs/htseq_0.9.1 b/src/.conda_envs/htseq_0.9.1
deleted file mode 120000
index a0b11c0b6ed34bfe3e67105abb9361d7671b2cef..0000000000000000000000000000000000000000
--- a/src/.conda_envs/htseq_0.9.1
+++ /dev/null
@@ -1 +0,0 @@
-/Xnfs/lbmcdb/common/conda/envs/htseq_0.9.1
\ No newline at end of file
diff --git a/src/.conda_envs/kallisto_0.43.1 b/src/.conda_envs/kallisto_0.43.1
deleted file mode 120000
index f97de91a46e446fd1d2c7c0cdc76d94a666aec3d..0000000000000000000000000000000000000000
--- a/src/.conda_envs/kallisto_0.43.1
+++ /dev/null
@@ -1 +0,0 @@
-/Xnfs/lbmcdb/common/conda/envs/kallisto_0.43.1
\ No newline at end of file
diff --git a/src/.conda_envs/kallisto_0.44.0 b/src/.conda_envs/kallisto_0.44.0
deleted file mode 120000
index 63b3d7d5411d998edae2703da5d7d131131a3a09..0000000000000000000000000000000000000000
--- a/src/.conda_envs/kallisto_0.44.0
+++ /dev/null
@@ -1 +0,0 @@
-/Xnfs/lbmcdb/common/conda/envs/kallisto_0.44.0
\ No newline at end of file
diff --git a/src/.conda_envs/last_1060 b/src/.conda_envs/last_1060
deleted file mode 120000
index 2015c9856db5613f43e7287649b2508366e7a260..0000000000000000000000000000000000000000
--- a/src/.conda_envs/last_1060
+++ /dev/null
@@ -1 +0,0 @@
-/Xnfs/lbmcdb/common/conda/envs/last_1060
\ No newline at end of file
diff --git a/src/.conda_envs/liftover_357 b/src/.conda_envs/liftover_357
deleted file mode 120000
index ba72fec91f4d2f96683839a0ccea6b50f561ebc2..0000000000000000000000000000000000000000
--- a/src/.conda_envs/liftover_357
+++ /dev/null
@@ -1 +0,0 @@
-/Xnfs/lbmcdb/common/conda/envs/liftover_357
\ No newline at end of file
diff --git a/src/.conda_envs/macs2_2.1.2 b/src/.conda_envs/macs2_2.1.2
deleted file mode 120000
index d14a7fd7801656eb78fa2b73da9ec33e9806c8b7..0000000000000000000000000000000000000000
--- a/src/.conda_envs/macs2_2.1.2
+++ /dev/null
@@ -1 +0,0 @@
-/Xnfs/lbmcdb/common/conda/envs/macs2_2.1.2
\ No newline at end of file
diff --git a/src/.conda_envs/multiqc_0.9 b/src/.conda_envs/multiqc_0.9
deleted file mode 120000
index 57372d0c5426850192bb9ec31deb8cea849f3d4f..0000000000000000000000000000000000000000
--- a/src/.conda_envs/multiqc_0.9
+++ /dev/null
@@ -1 +0,0 @@
-/Xnfs/lbmcdb/common/conda/envs/multiqc_0.9
\ No newline at end of file
diff --git a/src/.conda_envs/multiqc_1.0 b/src/.conda_envs/multiqc_1.0
deleted file mode 120000
index db90e68674b716ce181518d16577d3a7b05b7236..0000000000000000000000000000000000000000
--- a/src/.conda_envs/multiqc_1.0
+++ /dev/null
@@ -1 +0,0 @@
-/Xnfs/lbmcdb/common/conda/envs/multiqc_1.0
\ No newline at end of file
diff --git a/src/.conda_envs/multiqc_1.7 b/src/.conda_envs/multiqc_1.7
deleted file mode 120000
index 46bbaff86bf20a3737a2473ece096b40df8ca2c6..0000000000000000000000000000000000000000
--- a/src/.conda_envs/multiqc_1.7
+++ /dev/null
@@ -1 +0,0 @@
-/Xnfs/lbmcdb/common/conda/envs/multiqc_1.7
\ No newline at end of file
diff --git a/src/.conda_envs/music_1.0.0 b/src/.conda_envs/music_1.0.0
deleted file mode 120000
index c451a47d3d114e0314fee9a8b0e0260b1fbe6ef2..0000000000000000000000000000000000000000
--- a/src/.conda_envs/music_1.0.0
+++ /dev/null
@@ -1 +0,0 @@
-/Xnfs/lbmcdb/common/conda/envs/music_1.0.0
\ No newline at end of file
diff --git a/src/.conda_envs/ncdu b/src/.conda_envs/ncdu
deleted file mode 120000
index f46dbc261cbd16dd4221e4533b203dbdeeb46293..0000000000000000000000000000000000000000
--- a/src/.conda_envs/ncdu
+++ /dev/null
@@ -1 +0,0 @@
-/Xnfs/lbmcdb/common/conda/envs/ncdu
\ No newline at end of file
diff --git a/src/.conda_envs/nextflow_0.25.1 b/src/.conda_envs/nextflow_0.25.1
deleted file mode 120000
index b419d0badce280a3fa26c02c7bc7d3fb8879d22d..0000000000000000000000000000000000000000
--- a/src/.conda_envs/nextflow_0.25.1
+++ /dev/null
@@ -1 +0,0 @@
-/Xnfs/lbmcdb/common/conda/envs/nextflow_0.25.1
\ No newline at end of file
diff --git a/src/.conda_envs/nextflow_0.28.2 b/src/.conda_envs/nextflow_0.28.2
deleted file mode 120000
index d6f5f3dcc36db582dfb4f9453e5289a1fe9b7aeb..0000000000000000000000000000000000000000
--- a/src/.conda_envs/nextflow_0.28.2
+++ /dev/null
@@ -1 +0,0 @@
-/Xnfs/lbmcdb/common/conda/envs/nextflow_0.28.2
\ No newline at end of file
diff --git a/src/.conda_envs/nextflow_0.32.0 b/src/.conda_envs/nextflow_0.32.0
deleted file mode 120000
index 439a15215267bb1ffff044846cf6ff8b38a89499..0000000000000000000000000000000000000000
--- a/src/.conda_envs/nextflow_0.32.0
+++ /dev/null
@@ -1 +0,0 @@
-/Xnfs/lbmcdb/common/conda/envs/nextflow_0.32.0
\ No newline at end of file
diff --git a/src/.conda_envs/nextflow_19.01.0 b/src/.conda_envs/nextflow_19.01.0
deleted file mode 120000
index 18a20221ccfbaf8e6cf869f1e009b710ac29e6ab..0000000000000000000000000000000000000000
--- a/src/.conda_envs/nextflow_19.01.0
+++ /dev/null
@@ -1 +0,0 @@
-/Xnfs/lbmcdb/common/conda/envs/nextflow_19.01.0
\ No newline at end of file
diff --git a/src/.conda_envs/picard_2.18.11 b/src/.conda_envs/picard_2.18.11
deleted file mode 120000
index cbf205fe4c8c13d94b55b6cf3fc0437e25577891..0000000000000000000000000000000000000000
--- a/src/.conda_envs/picard_2.18.11
+++ /dev/null
@@ -1 +0,0 @@
-/Xnfs/lbmcdb/common/conda/envs/picard_2.18.11
\ No newline at end of file
diff --git a/src/.conda_envs/pigz_2.3.4 b/src/.conda_envs/pigz_2.3.4
deleted file mode 120000
index 33455f842e2168c2076dd89f26baab97deb0a9f6..0000000000000000000000000000000000000000
--- a/src/.conda_envs/pigz_2.3.4
+++ /dev/null
@@ -1 +0,0 @@
-/Xnfs/lbmcdb/common/conda/envs/pigz_2.3.4
\ No newline at end of file
diff --git a/src/.conda_envs/prinseq_0.20.4 b/src/.conda_envs/prinseq_0.20.4
deleted file mode 120000
index dca206100190bef5b2c300cda8fcc1852c15cffd..0000000000000000000000000000000000000000
--- a/src/.conda_envs/prinseq_0.20.4
+++ /dev/null
@@ -1 +0,0 @@
-/Xnfs/lbmcdb/common/conda/envs/prinseq_0.20.4
\ No newline at end of file
diff --git a/src/.conda_envs/python_mars_seq_modules b/src/.conda_envs/python_mars_seq_modules
deleted file mode 120000
index cd3910ea6920943ac21858b312917b05220ea5d7..0000000000000000000000000000000000000000
--- a/src/.conda_envs/python_mars_seq_modules
+++ /dev/null
@@ -1 +0,0 @@
-/Xnfs/lbmcdb/common/conda/envs/python_mars_seq_modules
\ No newline at end of file
diff --git a/src/.conda_envs/r_mars_seq_modules_r b/src/.conda_envs/r_mars_seq_modules_r
deleted file mode 120000
index 00ebf61d64ebc9cfad5066797f23d1f460ff7be4..0000000000000000000000000000000000000000
--- a/src/.conda_envs/r_mars_seq_modules_r
+++ /dev/null
@@ -1 +0,0 @@
-/Xnfs/lbmcdb/common/conda/envs/r_mars_seq_modules_r
\ No newline at end of file
diff --git a/src/.conda_envs/ribotish_0.2.4 b/src/.conda_envs/ribotish_0.2.4
deleted file mode 120000
index 173ef74842b0f029f772b7bbea49e102bafbbfdd..0000000000000000000000000000000000000000
--- a/src/.conda_envs/ribotish_0.2.4
+++ /dev/null
@@ -1 +0,0 @@
-/Xnfs/lbmcdb/common/conda/envs/ribotish_0.2.4
\ No newline at end of file
diff --git a/src/.conda_envs/rsem_1.3.0 b/src/.conda_envs/rsem_1.3.0
deleted file mode 120000
index cea23372fb11c6a72ca65e7a0fc66ceeedae6102..0000000000000000000000000000000000000000
--- a/src/.conda_envs/rsem_1.3.0
+++ /dev/null
@@ -1 +0,0 @@
-/Xnfs/lbmcdb/common/conda/envs/rsem_1.3.0
\ No newline at end of file
diff --git a/src/.conda_envs/rsem_1.3.1 b/src/.conda_envs/rsem_1.3.1
deleted file mode 120000
index 325fb2388538b17cebb896f3abd97c171a4dc742..0000000000000000000000000000000000000000
--- a/src/.conda_envs/rsem_1.3.1
+++ /dev/null
@@ -1 +0,0 @@
-/Xnfs/lbmcdb/common/conda/envs/rsem_1.3.1
\ No newline at end of file
diff --git a/src/.conda_envs/salmon_0.8.2 b/src/.conda_envs/salmon_0.8.2
deleted file mode 120000
index 65b04e4bcc71ec13f66b472b344daa659eeffe68..0000000000000000000000000000000000000000
--- a/src/.conda_envs/salmon_0.8.2
+++ /dev/null
@@ -1 +0,0 @@
-/Xnfs/lbmcdb/common/conda/envs/salmon_0.8.2
\ No newline at end of file
diff --git a/src/.conda_envs/samblaster_0.1.24 b/src/.conda_envs/samblaster_0.1.24
deleted file mode 120000
index d00932670059582660783647bf4c3e4377142f68..0000000000000000000000000000000000000000
--- a/src/.conda_envs/samblaster_0.1.24
+++ /dev/null
@@ -1 +0,0 @@
-/Xnfs/lbmcdb/common/conda/envs/samblaster_0.1.24
\ No newline at end of file
diff --git a/src/.conda_envs/samtools_1.5 b/src/.conda_envs/samtools_1.5
deleted file mode 120000
index cab31c466a7df1cd827de5fdd090a53b52f79910..0000000000000000000000000000000000000000
--- a/src/.conda_envs/samtools_1.5
+++ /dev/null
@@ -1 +0,0 @@
-/Xnfs/lbmcdb/common/conda/envs/samtools_1.5
\ No newline at end of file
diff --git a/src/.conda_envs/samtools_1.7 b/src/.conda_envs/samtools_1.7
deleted file mode 120000
index 29a7fb1e11bd2d24212085320cdd22cf19044e9a..0000000000000000000000000000000000000000
--- a/src/.conda_envs/samtools_1.7
+++ /dev/null
@@ -1 +0,0 @@
-/Xnfs/lbmcdb/common/conda/envs/samtools_1.7
\ No newline at end of file
diff --git a/src/.conda_envs/seaborn_0.9.0 b/src/.conda_envs/seaborn_0.9.0
deleted file mode 120000
index dedf18f456745051b6ab5a3b858b013e2c54e890..0000000000000000000000000000000000000000
--- a/src/.conda_envs/seaborn_0.9.0
+++ /dev/null
@@ -1 +0,0 @@
-/Xnfs/lbmcdb/common/conda/envs/seaborn_0.9.0
\ No newline at end of file
diff --git a/src/.conda_envs/splicing_lore_env b/src/.conda_envs/splicing_lore_env
deleted file mode 120000
index f5570a069c24c769372e28cf7282cb8b4662f215..0000000000000000000000000000000000000000
--- a/src/.conda_envs/splicing_lore_env
+++ /dev/null
@@ -1 +0,0 @@
-/Xnfs/lbmcdb/common/conda/envs/splicing_lore_env
\ No newline at end of file
diff --git a/src/.conda_envs/sra-tools_2.8.2 b/src/.conda_envs/sra-tools_2.8.2
deleted file mode 120000
index a01645bc705e22aa2ebac2b9e4194208d67a8ee6..0000000000000000000000000000000000000000
--- a/src/.conda_envs/sra-tools_2.8.2
+++ /dev/null
@@ -1 +0,0 @@
-/Xnfs/lbmcdb/common/conda/envs/sra-tools_2.8.2
\ No newline at end of file
diff --git a/src/.conda_envs/star_2.5.3a b/src/.conda_envs/star_2.5.3a
deleted file mode 120000
index 6030edb0ec8b6cffd59e9b4fdcd0fa7e710f5d59..0000000000000000000000000000000000000000
--- a/src/.conda_envs/star_2.5.3a
+++ /dev/null
@@ -1 +0,0 @@
-/Xnfs/lbmcdb/common/conda/envs/star_2.5.3a
\ No newline at end of file
diff --git a/src/.conda_envs/star_2.7.0e b/src/.conda_envs/star_2.7.0e
deleted file mode 120000
index b8cafd08ff5bb9dc50b0095550681cbb36e842cb..0000000000000000000000000000000000000000
--- a/src/.conda_envs/star_2.7.0e
+++ /dev/null
@@ -1 +0,0 @@
-/Xnfs/lbmcdb/common/conda/envs/star_2.7.0e
\ No newline at end of file
diff --git a/src/.conda_envs/star_2.7.3a b/src/.conda_envs/star_2.7.3a
deleted file mode 120000
index c25c3a3c25f8f3029211084d267995564e8fd04b..0000000000000000000000000000000000000000
--- a/src/.conda_envs/star_2.7.3a
+++ /dev/null
@@ -1 +0,0 @@
-/Xnfs/lbmcdb/common/conda/envs/star_2.7.3a
\ No newline at end of file
diff --git a/src/.conda_envs/subread_1.6.4 b/src/.conda_envs/subread_1.6.4
deleted file mode 120000
index e7bba79df0c763176ec87a05b9989a440e30a813..0000000000000000000000000000000000000000
--- a/src/.conda_envs/subread_1.6.4
+++ /dev/null
@@ -1 +0,0 @@
-/Xnfs/lbmcdb/common/conda/envs/subread_1.6.4
\ No newline at end of file
diff --git a/src/.conda_envs/tophat_2.1.1 b/src/.conda_envs/tophat_2.1.1
deleted file mode 120000
index 6bb53f91843bb3b19046e9984c7b12a537a08033..0000000000000000000000000000000000000000
--- a/src/.conda_envs/tophat_2.1.1
+++ /dev/null
@@ -1 +0,0 @@
-/Xnfs/lbmcdb/common/conda/envs/tophat_2.1.1
\ No newline at end of file
diff --git a/src/.conda_envs/trimmomatic_0.36 b/src/.conda_envs/trimmomatic_0.36
deleted file mode 120000
index 77753a1c7991af4eff3bbdec4fa2fc2035740fa8..0000000000000000000000000000000000000000
--- a/src/.conda_envs/trimmomatic_0.36
+++ /dev/null
@@ -1 +0,0 @@
-/Xnfs/lbmcdb/common/conda/envs/trimmomatic_0.36
\ No newline at end of file
diff --git a/src/.conda_envs/trimmomatic_0.39 b/src/.conda_envs/trimmomatic_0.39
deleted file mode 120000
index 81502468997ba00e66763289fc80877c37c17088..0000000000000000000000000000000000000000
--- a/src/.conda_envs/trimmomatic_0.39
+++ /dev/null
@@ -1 +0,0 @@
-/Xnfs/lbmcdb/common/conda/envs/trimmomatic_0.39
\ No newline at end of file
diff --git a/src/.conda_envs/ucsc_375 b/src/.conda_envs/ucsc_375
deleted file mode 120000
index 406934cb95e7bccdffb15a8f9f2ccd9b93b42de7..0000000000000000000000000000000000000000
--- a/src/.conda_envs/ucsc_375
+++ /dev/null
@@ -1 +0,0 @@
-/Xnfs/lbmcdb/common/conda/envs/ucsc_375
\ No newline at end of file
diff --git a/src/.conda_envs/umitools_0.3.4 b/src/.conda_envs/umitools_0.3.4
deleted file mode 120000
index 1fbf3fc65e7e2b16d4d5a8b6b6a4f86d862d487f..0000000000000000000000000000000000000000
--- a/src/.conda_envs/umitools_0.3.4
+++ /dev/null
@@ -1 +0,0 @@
-/Xnfs/lbmcdb/common/conda/envs/umitools_0.3.4
\ No newline at end of file
diff --git a/src/.conda_envs/urqt_d62c1f8 b/src/.conda_envs/urqt_d62c1f8
deleted file mode 120000
index bb665c60c65c4633ea01ecb6aac9e687af8e9571..0000000000000000000000000000000000000000
--- a/src/.conda_envs/urqt_d62c1f8
+++ /dev/null
@@ -1 +0,0 @@
-/Xnfs/lbmcdb/common/conda/envs/urqt_d62c1f8
\ No newline at end of file
diff --git a/src/.conda_packages.sh b/src/.conda_packages.sh
deleted file mode 100644
index 86b2575c836ccb90a8d554cdbb6819280865ed8e..0000000000000000000000000000000000000000
--- a/src/.conda_packages.sh
+++ /dev/null
@@ -1,164 +0,0 @@
-source src/.conda_psmn.sh
-CONDA_ENVS=src/.conda_envs/
-if [ ! -d ${CONDA_ENVS}pigz_2.3.4 ]; then
-  conda create --yes --name pigz_2.3.4 pigz=2.3.4
-fi
-if [ ! -d ${CONDA_ENVS}tophat_2.1.1 ]; then
-  conda create --yes --name tophat_2.1.1 tophat=2.1.1
-fi
-if [ ! -d ${CONDA_ENVS}hisat2_2.0.0 ]; then
-  conda create --yes --name hisat2_2.0.0 hisat2=2.0.0 samtools=1.7
-fi
-if [ ! -d ${CONDA_ENVS}hisat2_2.1.0 ]; then
-  conda create --yes --name hisat2_2.1.0 hisat2=2.1.0 samtools=1.7
-fi
-if [ ! -d ${CONDA_ENVS}rsem_1.3.1 ]; then
-  conda create --yes --name rsem_1.3.1 rsem=1.3.1 samtools=1.3
-fi
-if [ ! -d ${CONDA_ENVS}rsem_1.3.0 ]; then
-  conda create --yes --name rsem_1.3.0 rsem=1.3.0 samtools=1.3
-fi
-if [ ! -d ${CONDA_ENVS}samblaster_0.1.24 ]; then
-  conda create --yes --name samblaster_0.1.24 samblaster=0.1.24
-fi
-if [ ! -d ${CONDA_ENVS}nextflow_0.25.1 ]; then
-  conda create --yes --name nextflow_0.25.1 nextflow=0.25.1
-fi
-if [ ! -d ${CONDA_ENVS}nextflow_19.01.0 ]; then
-  conda create --yes --name nextflow_19.01.0 nextflow=19.01.0
-fi
-if [ ! -d ${CONDA_ENVS}nextflow_0.32.0 ]; then
-  conda create --yes --name nextflow_0.32.0 nextflow=0.32.0
-fi
-if [ ! -d ${CONDA_ENVS}nextflow_0.28.2 ]; then
-  conda create --yes --name nextflow_0.28.2 nextflow=0.28.2
-fi
-if [ ! -d ${CONDA_ENVS}samtools_1.7 ]; then
-  conda create --yes --name samtools_1.7 samtools=1.7
-fi
-if [ ! -d ${CONDA_ENVS}samtools_1.5 ]; then
-  conda create --yes --name samtools_1.5 samtools=1.5
-fi
-if [ ! -d ${CONDA_ENVS}bowtie2_2.3.2 ]; then
-  conda create --yes --name bowtie2_2.3.2 bowtie2=2.3.2 samtools=1.7
-fi
-if [ ! -d ${CONDA_ENVS}bowtie2_2.3.4.1 ]; then
-  conda create --yes --name bowtie2_2.3.4.1 bowtie2=2.3.4.1 samtools=1.7 #&& \
-fi
-if [ ! -d ${CONDA_ENVS}bwa_0.7.17 ]; then
-  conda create --yes --name bwa_0.7.17 -c bioconda bwa=0.7.17
-fi
-if [ ! -d ${CONDA_ENVS}sra-tools_2.8.2 ]; then
-  conda create --yes --name sra-tools_2.8.2 sra-tools=2.8.2
-fi
-if [ ! -d ${CONDA_ENVS}trimmomatic_0.36 ]; then
-  conda create --yes --name trimmomatic_0.36 trimmomatic=0.36
-fi
-if [ ! -d ${CONDA_ENVS}trimmomatic_0.39 ]; then
-  conda create --yes --name trimmomatic_0.39 trimmomatic=0.39
-fi
-if [ ! -d ${CONDA_ENVS}Python_3.6.1 ]; then
-  conda create --yes --name Python_3.6.1 Python=3.6.1
-fi
-if [ ! -d ${CONDA_ENVS}Python_2.7.13 ]; then
-  conda create --yes --name Python_2.7.13 Python=2.7.13
-fi
-if [ ! -d ${CONDA_ENVS}kallisto_0.44.0 ]; then
-  conda create --yes --name kallisto_0.44.0 kallisto=0.44.0
-fi
-if [ ! -d ${CONDA_ENVS}kallisto_0.43.1 ]; then
-  conda create --yes --name kallisto_0.43.1 kallisto=0.43.1
-fi
-if [ ! -d ${CONDA_ENVS}music_1.0.0 ]; then
-  conda create --yes --name music_1.0.0 music=1.0.0
-fi
-if [ ! -d ${CONDA_ENVS}umitools_0.3.4 ]; then
-  conda create --yes --name umitools_0.3.4 umitools=0.3.4
-fi
-if [ ! -d ${CONDA_ENVS}fastp_0.19.7 ]; then
-  conda create --yes --name fastp_0.19.7 fastp=0.19.7
-fi
-if [ ! -d ${CONDA_ENVS}gatk_3.8.0 ]; then
-  conda create --yes --name gatk_3.8.0 gatk=3.8
-fi
-if [ ! -d ${CONDA_ENVS}gatk_4.0.8.1 ]; then
-  conda create --yes --name gatk_4.0.8.1 gatk4=4.0.8.1-0
-fi
-if [ ! -d ${CONDA_ENVS}cutadapt_1.14 ]; then
-  conda create --yes --name cutadapt_1.14 cutadapt=1.14
-fi
-if [ ! -d ${CONDA_ENVS}bioawk_1.0 ]; then
-  conda create --yes --name bioawk_1.0 bioawk=1.0
-fi
-if [ ! -d ${CONDA_ENVS}canu_1.7 ]; then
-  conda create --yes --name canu_1.7 canu=1.7
-fi
-if [ ! -d ${CONDA_ENVS}fastqc_0.11.5 ]; then
-  conda create --yes --name fastqc_0.11.5 fastqc=0.11.5
-fi
-if [ ! -d ${CONDA_ENVS}bedtools_2.25.0 ]; then
-  conda create --yes --name bedtools_2.25.0 bedtools=2.25.0
-fi
-if [ ! -d ${CONDA_ENVS}macs2_2.1.2 ]; then
-  conda create --yes --name macs2_2.1.2 macs2=2.1.2
-fi
-if [ ! -d ${CONDA_ENVS}bcftools_1.7 ]; then
-  conda create --yes --name bcftools_1.7 bcftools=1.7
-fi
-if [ ! -d ${CONDA_ENVS}salmon_0.8.2 ]; then
-  conda create --yes --name salmon_0.8.2 salmon=0.8.2
-fi
-if [ ! -d ${CONDA_ENVS}urqt_d62c1f8 ]; then
-  conda create --yes --name urqt_d62c1f8 urqt=d62c1f8
-fi
-if [ ! -d ${CONDA_ENVS}multiqc_0.9 ]; then
-  conda create --yes --name multiqc_0.9 multiqc=0.9
-fi
-if [ ! -d ${CONDA_ENVS}multiqc_1.7 ]; then
-  conda create --yes --name multiqc_1.7 multiqc=1.7
-fi
-if [ ! -d ${CONDA_ENVS}multiqc_1.0 ]; then
-  conda create --yes --name multiqc_1.0 multiqc=1.0
-fi
-if [ ! -d ${CONDA_ENVS}cdhit_4.6.8 ]; then
-  conda create --yes --name cdhit_4.6.8 cdhit=4.6.8
-fi
-if [ ! -d ${CONDA_ENVS}deeptools_3.0.2 ]; then
-  conda create --yes --name deeptools_3.0.2 deeptools=3.0.2
-fi
-if [ ! -d ${CONDA_ENVS}htseq_0.9.1 ]; then
-  conda create --yes --name htseq_0.9.1 htseq=0.9.1
-fi
-if [ ! -d ${CONDA_ENVS}htseq_0.11.2 ]; then
-  conda create --yes --name htseq_0.11.2 htseq=0.11.2
-fi
-if [ ! -d ${CONDA_ENVS}R_3.4.3 ]; then
-  conda create --yes --name R_3.4.3 R=3.4.3
-fi
-if [ ! -d ${CONDA_ENVS}R_3.3.1 ]; then
-  conda create --yes --name R_3.3.1 R=3.3.1
-fi
-if [ ! -d ${CONDA_ENVS}file_handle_0.1.1 ]; then
-  conda create --yes --name file_handle_0.1.1 file_handle=0.1.1
-fi
-if [ ! -d ${CONDA_ENVS}ncdu_1.13 ]; then
-  conda create --yes --name ncdu_1.13 ncdu=1.13
-fi
-if [ ! -d ${CONDA_ENVS}picard_2.18.11 ]; then
-  conda create --yes --name picard_2.18.11 picard=2.18.11
-fi
-if [ ! -d ${CONDA_ENVS}sambamba_0.6.7 ]; then
-  conda create --yes --name sambamba_0.6.7 sambamba=0.6.7
-fi
-if [ ! -d ${CONDA_ENVS}star_2.7.3a ]; then
-  conda create --yes --name star_2.7.3a star=2.7.3a
-fi
-if [ ! -d ${CONDA_ENVS}liftover_357 ]; then
-  conda create --yes --name liftover_357 ucsc-liftover==357
-fi
-if [ ! -d ${CONDA_ENVS}axtchain_377 ]; then
-  conda create --yes --name axtchain_377 ucsc-axtchain==377
-fi
-if [ ! -d ${CONDA_ENVS}ribotish_0.2.4 ]; then
-  conda create --name ribotish_0.2.4 ribotish=0.2.4
-fi
diff --git a/src/.conda_psmn.sh b/src/.conda_psmn.sh
deleted file mode 120000
index cbb3d9b56bffb23950376e4f50dea1a7a88c80c8..0000000000000000000000000000000000000000
--- a/src/.conda_psmn.sh
+++ /dev/null
@@ -1 +0,0 @@
-/Xnfs/lbmcdb/common/conda/init.sh
\ No newline at end of file
diff --git a/src/.docker_modules/agat/0.8.0/Dockerfile b/src/.docker_modules/agat/0.8.0/Dockerfile
new file mode 100644
index 0000000000000000000000000000000000000000..4325fc36b153587a048a205ad247eafc5dac2d94
--- /dev/null
+++ b/src/.docker_modules/agat/0.8.0/Dockerfile
@@ -0,0 +1 @@
+FROM quay.io/biocontainers/agat:0.8.0--pl5262hdfd78af_0 
\ No newline at end of file
diff --git a/src/.docker_modules/agat/0.8.0/docker_init.sh b/src/.docker_modules/agat/0.8.0/docker_init.sh
new file mode 100755
index 0000000000000000000000000000000000000000..ef8ca5cf34c9f47b0defa7874e66d48e41a09f26
--- /dev/null
+++ b/src/.docker_modules/agat/0.8.0/docker_init.sh
@@ -0,0 +1,5 @@
+#!/bin/sh
+docker pull lbmc/agat:0.8.0
+# docker build src/.docker_modules/agat/0.8.0 -t 'lbmc/agat:0.8.0'
+# docker push lbmc/agat:0.8.0
+docker buildx build --platform linux/amd64,linux/arm64 -t "lbmc/agat:0.8.0" --push src/.docker_modules/agat/0.8.0
diff --git a/src/.docker_modules/alntools/dd96682/Dockerfile b/src/.docker_modules/alntools/dd96682/Dockerfile
index 01fdd8995a3501ba9e3e6ceb727055dc26aa5656..8bd4d05acee40478d3d0f9a79bc6dbd1c0789bd5 100644
--- a/src/.docker_modules/alntools/dd96682/Dockerfile
+++ b/src/.docker_modules/alntools/dd96682/Dockerfile
@@ -4,6 +4,7 @@ MAINTAINER Laurent Modolo
 ENV ALNTOOLS_VERSION=dd96682
 ENV PACKAGES git \
     ca-certificates \
+    gawk \
     procps
 
 RUN apt-get update \
@@ -15,6 +16,7 @@ RUN apt-get update \
     && python setup.py install \
     && cd .. \
     && rm -R alntools \
+    && pip install six \
     && apt-get autoremove --purge -y git ca-certificates
 
 CMD ["bash"] 
\ No newline at end of file
diff --git a/src/.docker_modules/alntools/dd96682/docker_init.sh b/src/.docker_modules/alntools/dd96682/docker_init.sh
index 48190c462975649ace8430a6e0769cf742611b42..da1f1c97c94d6238ad520ebd7c97f5aa1e35f81f 100755
--- a/src/.docker_modules/alntools/dd96682/docker_init.sh
+++ b/src/.docker_modules/alntools/dd96682/docker_init.sh
@@ -1,4 +1,5 @@
 #!/bin/sh
 docker pull lbmc/alntools:dd96682
-docker build src/.docker_modules/alntools/dd96682 -t 'lbmc/alntools:dd96682'
-docker push lbmc/alntools:dd96682
+# docker build src/.docker_modules/alntools/dd96682 -t 'lbmc/alntools:dd96682'
+# docker push lbmc/alntools:dd96682
+docker buildx build --platform linux/amd64,linux/arm64 -t "lbmc/alntools:dd96682" --push src/.docker_modules/alntools/dd96682
diff --git a/src/.docker_modules/bamutils/1.0.14/docker_init.sh b/src/.docker_modules/bamutils/1.0.14/docker_init.sh
index 2d89e9ccd542b1a848d74231068cff94bfe92239..d89d2e5fb631378ef05003d270b44eedc5d67249 100755
--- a/src/.docker_modules/bamutils/1.0.14/docker_init.sh
+++ b/src/.docker_modules/bamutils/1.0.14/docker_init.sh
@@ -1,4 +1,5 @@
 #!/bin/sh
 docker pull lbmc/bamutils:1.0.14
-docker build src/.docker_modules/bamutils/1.0.14 -t 'lbmc/bamutils:1.0.14'
-docker push lbmc/bamutils:1.0.14
+# docker build src/.docker_modules/bamutils/1.0.14 -t 'lbmc/bamutils:1.0.14'
+# docker push lbmc/bamutils:1.0.14
+docker buildx build --platform linux/amd64,linux/arm64 -t "lbmc/bamutils:1.0.14" --push src/.docker_modules/bamutils/1.0.14
diff --git a/src/.docker_modules/bcftools/1.7/docker_init.sh b/src/.docker_modules/bcftools/1.7/docker_init.sh
index c2bf925159aeb2708d742ff891ff96b5d40bf05a..55cab9902c35693f4876b09a650665ebd57bcbb8 100755
--- a/src/.docker_modules/bcftools/1.7/docker_init.sh
+++ b/src/.docker_modules/bcftools/1.7/docker_init.sh
@@ -1,4 +1,5 @@
 #!/bin/sh
 docker pull lbmc/bcftools:1.7
-docker build src/.docker_modules/bcftools/1.7 -t 'lbmc/bcftools:1.7'
-docker push lbmc/bcftools:1.7
+# docker build src/.docker_modules/bcftools/1.7 -t 'lbmc/bcftools:1.7'
+# docker push lbmc/bcftools:1.7
+docker buildx build --platform linux/amd64,linux/arm64 -t "lbmc/bcftools:1.7" --push src/.docker_modules/bcftools/1.7
diff --git a/src/.docker_modules/bedops/2.4.39/docker_init.sh b/src/.docker_modules/bedops/2.4.39/docker_init.sh
index a50d06b132b1d69bf8e7d18c55fe05b4a5a15b7b..0c1ff388e2f37895999e0a868b450607e17598f1 100755
--- a/src/.docker_modules/bedops/2.4.39/docker_init.sh
+++ b/src/.docker_modules/bedops/2.4.39/docker_init.sh
@@ -1,4 +1,5 @@
 #!/bin/sh
 docker pull lbmc/bedops:2.4.39
-docker build src/.docker_modules/bedops/2.4.39 -t 'lbmc/bedops:2.4.39'
-docker push lbmc/bedops:2.4.39
+# docker build src/.docker_modules/bedops/2.4.39 -t 'lbmc/bedops:2.4.39'
+# docker push lbmc/bedops:2.4.39
+docker buildx build --platform linux/amd64,linux/arm64 -t "lbmc/bedops:2.4.39" --push src/.docker_modules/bedops/2.4.39
diff --git a/src/.docker_modules/bedtools/2.25.0/docker_init.sh b/src/.docker_modules/bedtools/2.25.0/docker_init.sh
index e35c4d6aa13c4fd78c4797d68150471af81ef94a..750ceb54c4f7bf601f24f86ad4203d87cdf84b2a 100755
--- a/src/.docker_modules/bedtools/2.25.0/docker_init.sh
+++ b/src/.docker_modules/bedtools/2.25.0/docker_init.sh
@@ -1,4 +1,5 @@
 #!/bin/sh
 docker pull lbmc/bedtools:2.25.0
-docker build src/.docker_modules/bedtools/2.25.0 -t 'lbmc/bedtools:2.25.0'
-docker push lbmc/bedtools:2.25.0
+# docker build src/.docker_modules/bedtools/2.25.0 -t 'lbmc/bedtools:2.25.0'
+# docker push lbmc/bedtools:2.25.0
+docker buildx build --platform linux/amd64,linux/arm64 -t "lbmc/bedtools:2.25.0" --push src/.docker_modules/bedtools/2.25.0
diff --git a/src/.docker_modules/bedtools/2.30.0/Dockerfile b/src/.docker_modules/bedtools/2.30.0/Dockerfile
new file mode 100644
index 0000000000000000000000000000000000000000..e41c1f0f50e6e6c561bbdbfc059bd535bc8ad172
--- /dev/null
+++ b/src/.docker_modules/bedtools/2.30.0/Dockerfile
@@ -0,0 +1 @@
+FROM quay.io/biocontainers/bedtools:2.30.0--h7d7f7ad_1
\ No newline at end of file
diff --git a/src/.docker_modules/bedtools/2.30.0/docker_init.sh b/src/.docker_modules/bedtools/2.30.0/docker_init.sh
new file mode 100755
index 0000000000000000000000000000000000000000..fa7f1fde2544c47ff928fefbd549f3b786feca8c
--- /dev/null
+++ b/src/.docker_modules/bedtools/2.30.0/docker_init.sh
@@ -0,0 +1,5 @@
+#!/bin/sh
+docker pull lbmc/bedtools:2.30.0
+# docker build src/.docker_modules/bedtools/2.30.0 -t 'lbmc/bedtools:2.30.0'
+# docker push lbmc/bedtools:2.30.0
+docker buildx build --platform linux/amd64,linux/arm64 -t "lbmc/bedtools:2.30.0" --push src/.docker_modules/bedtools/2.30.0
diff --git a/src/.docker_modules/bioawk/1.0/docker_init.sh b/src/.docker_modules/bioawk/1.0/docker_init.sh
index 8e6d7444062e7368f074f0b80386060e0f0b1a07..830fcf6bc977e70746fbfcc5e8f15e948cbf2773 100755
--- a/src/.docker_modules/bioawk/1.0/docker_init.sh
+++ b/src/.docker_modules/bioawk/1.0/docker_init.sh
@@ -1,4 +1,5 @@
 #!/bin/sh
 docker pull lbmc/bioawk:1.0
-docker build src/.docker_modules/bioawk/1.0 -t 'lbmc/bioawk:1.0'
-docker push lbmc/bioawk:1.0
+# docker build src/.docker_modules/bioawk/1.0 -t 'lbmc/bioawk:1.0'
+# docker push lbmc/bioawk:1.0
+docker buildx build --platform linux/amd64,linux/arm64 -t "lbmc/bioawk:1.0" --push src/.docker_modules/bioawk/1.0
diff --git a/src/.docker_modules/bioconvert/0.4.0/Dockerfile b/src/.docker_modules/bioconvert/0.4.0/Dockerfile
new file mode 100644
index 0000000000000000000000000000000000000000..5c05ff6f54610d2df6839e6e4a2a21c2e68e0845
--- /dev/null
+++ b/src/.docker_modules/bioconvert/0.4.0/Dockerfile
@@ -0,0 +1,4 @@
+FROM bioconvert/bioconvert:test
+MAINTAINER Laurent Modolo
+
+ENV BIOCONVERT_VERSION="0.4.0"
\ No newline at end of file
diff --git a/src/.docker_modules/bioconvert/0.4.0/docker_init.sh b/src/.docker_modules/bioconvert/0.4.0/docker_init.sh
new file mode 100755
index 0000000000000000000000000000000000000000..0323c1bcf3645907796a93f5318849675d821977
--- /dev/null
+++ b/src/.docker_modules/bioconvert/0.4.0/docker_init.sh
@@ -0,0 +1,5 @@
+#!/bin/sh
+docker pull lbmc/bioconvert:0.4.0
+# docker build src/.docker_modules/bioconvert/0.4.0 -t 'lbmc/bioconvert:0.4.0'
+# docker push lbmc/bioconvert:0.4.0
+docker buildx build --platform linux/amd64,linux/arm64 -t "lbmc/bioconvert:0.4.0" --push src/.docker_modules/bioconvert/0.4.0
diff --git a/src/.docker_modules/bioconvert/0.4.3/Dockerfile b/src/.docker_modules/bioconvert/0.4.3/Dockerfile
new file mode 100644
index 0000000000000000000000000000000000000000..eb4dc3d0b625a93eb5b0ce5506956e8ad39aabbc
--- /dev/null
+++ b/src/.docker_modules/bioconvert/0.4.3/Dockerfile
@@ -0,0 +1,17 @@
+FROM conda/miniconda3
+MAINTAINER Laurent Modolo
+
+ENV BIOCONVERT_VERSION="0.4.3"
+RUN conda init \
+&& conda config --add channels r \
+&& conda config --add channels defaults \
+&& conda config --add channels conda-forge \
+&& conda config --add channels bioconda \
+&& conda create -y -n bioconvert
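+# make the following RUN instructions execute inside the bioconvert conda environment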
+SHELL ["conda", "run", "-n", "bioconvert", "/bin/bash", "-c"]
+RUN conda install -y bioconvert \
+&& echo "conda activate bioconvert" >> /root/.bashrc
+RUN apt update && apt install -y procps
+
+ENV PATH /usr/local/envs/bioconvert/bin:/usr/local/condabin:$PATH
diff --git a/src/.docker_modules/bioconvert/0.4.3/docker_init.sh b/src/.docker_modules/bioconvert/0.4.3/docker_init.sh
new file mode 100755
index 0000000000000000000000000000000000000000..cbc76d9b9306c06fb1d19f460a989979f974a59a
--- /dev/null
+++ b/src/.docker_modules/bioconvert/0.4.3/docker_init.sh
@@ -0,0 +1,5 @@
+#!/bin/sh
+docker pull lbmc/bioconvert:0.4.3
+# docker build src/.docker_modules/bioconvert/0.4.3 -t 'lbmc/bioconvert:0.4.3'
+# docker push lbmc/bioconvert:0.4.3
+docker buildx build --platform linux/amd64,linux/arm64 -t "lbmc/bioconvert:0.4.3" --push src/.docker_modules/bioconvert/0.4.3
diff --git a/src/.docker_modules/bowtie/1.2.2/docker_init.sh b/src/.docker_modules/bowtie/1.2.2/docker_init.sh
index 814a311d967eaf943fd1fa864c8c137ce724c812..5bf220b3fc4fc30eb4e4f12f6f9f49426fc23273 100755
--- a/src/.docker_modules/bowtie/1.2.2/docker_init.sh
+++ b/src/.docker_modules/bowtie/1.2.2/docker_init.sh
@@ -1,4 +1,5 @@
 #!/bin/sh
 docker pull lbmc/bowtie:1.2.2
-docker build src/.docker_modules/bowtie/1.2.2 -t 'lbmc/bowtie:1.2.2'
-docker push lbmc/bowtie:1.2.2
+# docker build src/.docker_modules/bowtie/1.2.2 -t 'lbmc/bowtie:1.2.2'
+# docker push lbmc/bowtie:1.2.2
+docker buildx build --platform linux/amd64,linux/arm64 -t "lbmc/bowtie:1.2.2" --push src/.docker_modules/bowtie/1.2.2
diff --git a/src/.docker_modules/bowtie2/2.3.4.1/docker_init.sh b/src/.docker_modules/bowtie2/2.3.4.1/docker_init.sh
index bdb93e1663ee77e81a65020a0f25a8df182f9245..266039364df3288b7d69499946e7c5bb2f5958b3 100755
--- a/src/.docker_modules/bowtie2/2.3.4.1/docker_init.sh
+++ b/src/.docker_modules/bowtie2/2.3.4.1/docker_init.sh
@@ -1,4 +1,5 @@
 #!/bin/sh
 docker pull lbmc/bowtie2:2.3.4.1
-docker build src/.docker_modules/bowtie2/2.3.4.1 -t 'lbmc/bowtie2:2.3.4.1'
-docker push lbmc/bowtie2:2.3.4.1
+# docker build src/.docker_modules/bowtie2/2.3.4.1 -t 'lbmc/bowtie2:2.3.4.1'
+# docker push lbmc/bowtie2:2.3.4.1
+docker buildx build --platform linux/amd64,linux/arm64 -t "lbmc/bowtie2:2.3.4.1" --push src/.docker_modules/bowtie2/2.3.4.1
diff --git a/src/.docker_modules/bwa/0.7.17/docker_init.sh b/src/.docker_modules/bwa/0.7.17/docker_init.sh
index 3cabcbd9adcfdc35d6746a3f56534170a2d1d63a..d50fafd812b26362b73b9e66672eadb65820e172 100755
--- a/src/.docker_modules/bwa/0.7.17/docker_init.sh
+++ b/src/.docker_modules/bwa/0.7.17/docker_init.sh
@@ -1,4 +1,5 @@
 #!/bin/sh
 docker pull lbmc/bwa:0.7.17
-docker build src/.docker_modules/bwa/0.7.17 -t 'lbmc/bwa:0.7.17'
-docker push lbmc/bwa:0.7.17
+# docker build src/.docker_modules/bwa/0.7.17 -t 'lbmc/bwa:0.7.17'
+# docker push lbmc/bwa:0.7.17
+docker buildx build --platform linux/amd64,linux/arm64 -t "lbmc/bwa:0.7.17" --push src/.docker_modules/bwa/0.7.17
diff --git a/src/.docker_modules/canu/1.6/docker_init.sh b/src/.docker_modules/canu/1.6/docker_init.sh
index b1afabb6dedba67dc9a9537ea570a9c5c62da28f..58df472854b9345495433f14edc17504c918ead8 100755
--- a/src/.docker_modules/canu/1.6/docker_init.sh
+++ b/src/.docker_modules/canu/1.6/docker_init.sh
@@ -1,4 +1,5 @@
 #!/bin/sh
 docker pull lbmc/canu:1.6
-docker build src/.docker_modules/canu/1.6 -t 'lbmc/canu:1.6'
-docker push lbmc/canu:1.6
+# docker build src/.docker_modules/canu/1.6 -t 'lbmc/canu:1.6'
+# docker push lbmc/canu:1.6
+docker buildx build --platform linux/amd64,linux/arm64 -t "lbmc/canu:1.6" --push src/.docker_modules/canu/1.6
diff --git a/src/.docker_modules/crossmap/0.4.1/docker_init.sh b/src/.docker_modules/crossmap/0.4.1/docker_init.sh
index 8bf250e04c90965f0be29ff62eb3bd54c08e81cc..a85ae11e491655126a71ad0ea4ac214d7d451209 100755
--- a/src/.docker_modules/crossmap/0.4.1/docker_init.sh
+++ b/src/.docker_modules/crossmap/0.4.1/docker_init.sh
@@ -1,4 +1,5 @@
 #!/bin/sh
 docker pull lbmc/crossmap:0.4.1
-docker build src/.docker_modules/crossmap/0.4.1/ -t 'lbmc/crossmap:0.4.1'
-docker push lbmc/crossmap:0.4.1
+# docker build src/.docker_modules/crossmap/0.4.1/ -t 'lbmc/crossmap:0.4.1'
+# docker push lbmc/crossmap:0.4.1
+docker buildx build --platform linux/amd64,linux/arm64 -t "lbmc/crossmap:0.4.1" --push src/.docker_modules/crossmap/0.4.1
diff --git a/src/.docker_modules/cutadapt/1.14/docker_init.sh b/src/.docker_modules/cutadapt/1.14/docker_init.sh
index 1ba18cb47af7cf8a8c9d4d0fee001f8e2d5747b1..8febfd80bdfc6df62a59bdb4d1a1a136e3ac646e 100755
--- a/src/.docker_modules/cutadapt/1.14/docker_init.sh
+++ b/src/.docker_modules/cutadapt/1.14/docker_init.sh
@@ -1,4 +1,5 @@
 #!/bin/sh
 docker pull lbmc/cutadapt:1.14
-docker build src/.docker_modules/cutadapt/1.14 -t 'lbmc/cutadapt:1.14'
-docker push lbmc/cutadapt:1.14
+# docker build src/.docker_modules/cutadapt/1.14 -t 'lbmc/cutadapt:1.14'
+# docker push lbmc/cutadapt:1.14
+docker buildx build --platform linux/amd64,linux/arm64 -t "lbmc/cutadapt:1.14" --push src/.docker_modules/cutadapt/1.14
diff --git a/src/.docker_modules/cutadapt/1.15/docker_init.sh b/src/.docker_modules/cutadapt/1.15/docker_init.sh
index 49303006414d8a1ab61bda8da49b850824dde551..5f5b3c7779dfd637d93c3045d933767b81836707 100755
--- a/src/.docker_modules/cutadapt/1.15/docker_init.sh
+++ b/src/.docker_modules/cutadapt/1.15/docker_init.sh
@@ -1,4 +1,5 @@
 #!/bin/sh
 docker pull lbmc/cutadapt:1.15
-docker build src/.docker_modules/cutadapt/1.15 -t 'lbmc/cutadapt:1.15'
-docker push lbmc/cutadapt:1.15
+# docker build src/.docker_modules/cutadapt/1.15 -t 'lbmc/cutadapt:1.15'
+# docker push lbmc/cutadapt:1.15
+docker buildx build --platform linux/amd64,linux/arm64 -t "lbmc/cutadapt:1.15" --push src/.docker_modules/cutadapt/1.15
diff --git a/src/.docker_modules/cutadapt/2.1/docker_init.sh b/src/.docker_modules/cutadapt/2.1/docker_init.sh
index cda255f0f22841d3f9cdf61480a053d47c948071..2b631f0f4c2799971f9490dc3df0f264019c1296 100755
--- a/src/.docker_modules/cutadapt/2.1/docker_init.sh
+++ b/src/.docker_modules/cutadapt/2.1/docker_init.sh
@@ -1,4 +1,5 @@
 #!/bin/sh
 docker pull lbmc/cutadapt:2.1
-docker build src/.docker_modules/cutadapt/2.1 -t 'lbmc/cutadapt:2.1'
-docker push lbmc/cutadapt:2.1
+# docker build src/.docker_modules/cutadapt/2.1 -t 'lbmc/cutadapt:2.1'
+# docker push lbmc/cutadapt:2.1
+docker buildx build --platform linux/amd64,linux/arm64 -t "lbmc/cutadapt:2.1" --push src/.docker_modules/cutadapt/2.1
diff --git a/src/.docker_modules/danpos3/2f7f223/Dockerfile b/src/.docker_modules/danpos3/2f7f223/Dockerfile
new file mode 100644
index 0000000000000000000000000000000000000000..f9b4f63308d3839f335c8ca128e920c669660432
--- /dev/null
+++ b/src/.docker_modules/danpos3/2f7f223/Dockerfile
@@ -0,0 +1,20 @@
+FROM ubuntu:20.04 
+
+ENV DANPOS_VERSION="2f7f223"
+ENV TZ=Europe/Paris
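+# preset the timezone so tzdata can be installed non-interactively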
+RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone \
+&& apt update -qq \
+&& apt install -y --no-install-recommends software-properties-common dirmngr git wget lsb-release python3 python3-pip \
+&& wget -qO- https://cloud.r-project.org/bin/linux/ubuntu/marutter_pubkey.asc | tee -a /etc/apt/trusted.gpg.d/cran_ubuntu_key.asc \
+&& add-apt-repository "deb https://cloud.r-project.org/bin/linux/ubuntu $(lsb_release -cs)-cran40/"
+RUN apt-get update \
+&& apt-get install -y r-base samtools \
+&& git clone https://github.com/sklasfeld/DANPOS3.git \
+&& cd DANPOS3 \
+&& git checkout $DANPOS_VERSION \
+&& pip3 install -r requirements.txt \
+&& chmod +x ./*.py
+
+ENV PATH="/DANPOS3:${PATH}"
+CMD [ "bash" ]
diff --git a/src/.docker_modules/danpos3/2f7f223/docker_init.sh b/src/.docker_modules/danpos3/2f7f223/docker_init.sh
new file mode 100755
index 0000000000000000000000000000000000000000..9c59d3e259e4e1ef28466794b0ef5210190de652
--- /dev/null
+++ b/src/.docker_modules/danpos3/2f7f223/docker_init.sh
@@ -0,0 +1,5 @@
+#!/bin/sh
+docker pull lbmc/danpos3:2f7f223
+# docker build src/.docker_modules/danpos3/2f7f223 -t 'lbmc/danpos3:2f7f223'
+# docker push lbmc/danpos3:2f7f223
+docker buildx build --platform linux/amd64,linux/arm64 -t "lbmc/danpos3:2f7f223" --push src/.docker_modules/danpos3/2f7f223
diff --git a/src/.docker_modules/deeptools/3.0.2/docker_init.sh b/src/.docker_modules/deeptools/3.0.2/docker_init.sh
index 33959edcd7627e94d34d890d875d6cbe0fced74f..73ce8857ba40e1fe5b6c0b287e652a644738f987 100755
--- a/src/.docker_modules/deeptools/3.0.2/docker_init.sh
+++ b/src/.docker_modules/deeptools/3.0.2/docker_init.sh
@@ -1,4 +1,5 @@
 #!/bin/sh
 docker pull lbmc/deeptools:3.0.2
-docker build src/.docker_modules/deeptools/3.0.2 -t 'lbmc/deeptools:3.0.2'
-docker push lbmc/deeptools:3.0.2
+# docker build src/.docker_modules/deeptools/3.0.2 -t 'lbmc/deeptools:3.0.2'
+# docker push lbmc/deeptools:3.0.2
+docker buildx build --platform linux/amd64,linux/arm64 -t "lbmc/deeptools:3.0.2" --push src/.docker_modules/deeptools/3.0.2
diff --git a/src/.docker_modules/deeptools/3.1.1/docker_init.sh b/src/.docker_modules/deeptools/3.1.1/docker_init.sh
index 06e63a90199385965a012175fe3f448f75539ba4..a367cac40bcf40270e90755150c541c109ee5ddf 100755
--- a/src/.docker_modules/deeptools/3.1.1/docker_init.sh
+++ b/src/.docker_modules/deeptools/3.1.1/docker_init.sh
@@ -1,4 +1,5 @@
 #!/bin/sh
 docker pull lbmc/deeptools:3.1.1
-docker build src/.docker_modules/deeptools/3.1.1 -t 'lbmc/deeptools:3.1.1'
-docker push lbmc/deeptools:3.1.1
+# docker build src/.docker_modules/deeptools/3.1.1 -t 'lbmc/deeptools:3.1.1'
+# docker push lbmc/deeptools:3.1.1
+docker buildx build --platform linux/amd64,linux/arm64 -t "lbmc/deeptools:3.1.1" --push src/.docker_modules/deeptools/3.1.1
diff --git a/src/.docker_modules/deeptools/3.5.0/Dockerfile b/src/.docker_modules/deeptools/3.5.0/Dockerfile
index 3940e3905f4ecc776f9a8158823c42599d49b829..4680f9494d6c3fca32b27b49f9ee85838f6bb161 100644
--- a/src/.docker_modules/deeptools/3.5.0/Dockerfile
+++ b/src/.docker_modules/deeptools/3.5.0/Dockerfile
@@ -10,5 +10,7 @@
         liblzma-dev \
         libcurl4-gnutls-dev \
         libssl-dev \
-        libncurses5-dev
+        libncurses5-dev \
+        procps
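+# procps provides ps, which Nextflow uses to monitor task resource usage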
 RUN pip3 install deeptools==${DEEPTOOLS_VERSION}
diff --git a/src/.docker_modules/deeptools/3.5.0/docker_init.sh b/src/.docker_modules/deeptools/3.5.0/docker_init.sh
index 47b9e608149fac2739fc1200170d98981e6c4f78..d31641fd0a0298df31d1b95cf42c1369c7a11a77 100755
--- a/src/.docker_modules/deeptools/3.5.0/docker_init.sh
+++ b/src/.docker_modules/deeptools/3.5.0/docker_init.sh
@@ -1,4 +1,5 @@
 #!/bin/sh
 docker pull lbmc/deeptools:3.5.0
-docker build src/.docker_modules/deeptools/3.5.0 -t 'lbmc/deeptools:3.5.0'
-docker push lbmc/deeptools:3.5.0
+# docker build src/.docker_modules/deeptools/3.5.0 -t 'lbmc/deeptools:3.5.0'
+# docker push lbmc/deeptools:3.5.0
+docker buildx build --platform linux/amd64,linux/arm64 -t "lbmc/deeptools:3.5.0" --push src/.docker_modules/deeptools/3.5.0
diff --git a/src/.docker_modules/deeptools/3.5.1/Dockerfile b/src/.docker_modules/deeptools/3.5.1/Dockerfile
new file mode 100644
index 0000000000000000000000000000000000000000..389e85418fe5edda166a1cc976f83d04601c8113
--- /dev/null
+++ b/src/.docker_modules/deeptools/3.5.1/Dockerfile
@@ -0,0 +1,17 @@
+FROM python:3.8-slim
+MAINTAINER Laurent Modolo
+
+ENV DEEPTOOLS_VERSION=3.5.1
+RUN apt-get update -qq \
+    && apt-get install --no-install-recommends --yes \
+        build-essential \
+        zlib1g-dev \
+        libbz2-dev \
+        liblzma-dev \
+        libcurl4-gnutls-dev \
+        libssl-dev \
+        libncurses5-dev \
+        libcurl4 \
+        libc6 \
+        procps
+RUN pip3 install pysam deeptools==${DEEPTOOLS_VERSION}
diff --git a/src/.docker_modules/deeptools/3.5.1/docker_init.sh b/src/.docker_modules/deeptools/3.5.1/docker_init.sh
new file mode 100755
index 0000000000000000000000000000000000000000..304e988817c7ed38b52806f41c4f2bd333083f71
--- /dev/null
+++ b/src/.docker_modules/deeptools/3.5.1/docker_init.sh
@@ -0,0 +1,5 @@
+#!/bin/sh
+docker pull lbmc/deeptools:3.5.1
+# docker build src/.docker_modules/deeptools/3.5.1 -t 'lbmc/deeptools:3.5.1'
+# docker push lbmc/deeptools:3.5.1
+docker buildx build --platform linux/amd64,linux/arm64 -t "lbmc/deeptools:3.5.1" --push src/.docker_modules/deeptools/3.5.1
diff --git a/src/.docker_modules/emase-zero/0.3.1/docker_init.sh b/src/.docker_modules/emase-zero/0.3.1/docker_init.sh
index cb295bd192aca5d48e4dae5c44729b13a43650cc..3c92928521995235d49f93fa08579fa0062edc0c 100755
--- a/src/.docker_modules/emase-zero/0.3.1/docker_init.sh
+++ b/src/.docker_modules/emase-zero/0.3.1/docker_init.sh
@@ -1,4 +1,5 @@
 #!/bin/sh
 docker pull lbmc/emase-zero:0.3.1
-docker build src/.docker_modules/emase-zero/0.3.1 -t 'lbmc/emase-zero:0.3.1'
-docker push lbmc/emase-zero:0.3.1
+# docker build src/.docker_modules/emase-zero/0.3.1 -t 'lbmc/emase-zero:0.3.1'
+# docker push lbmc/emase-zero:0.3.1
+docker buildx build --platform linux/amd64,linux/arm64 -t "lbmc/emase-zero:0.3.1" --push src/.docker_modules/emase-zero/0.3.1
diff --git a/src/.docker_modules/emase/0.10.16/Dockerfile b/src/.docker_modules/emase/0.10.16/Dockerfile
new file mode 100644
index 0000000000000000000000000000000000000000..187d2fb699addc5cf659414fb136439569ceefde
--- /dev/null
+++ b/src/.docker_modules/emase/0.10.16/Dockerfile
@@ -0,0 +1,14 @@
+FROM conda/miniconda2
+MAINTAINER Laurent Modolo
+
+ENV EMASE_VERSION=0.10.16
+RUN conda init \
+&& conda config --add channels r \
+&& conda config --add channels bioconda \
+&& conda create -y -n emase jupyter
+SHELL ["conda", "run", "-n", "emase", "/bin/bash", "-c"]
+RUN conda install -y -c kbchoi emase \
+&& echo "conda activate emase" >> /root/.bashrc
+RUN apt update && apt install -y procps
+
+ENV PATH /usr/local/envs/emase/bin:/usr/local/condabin:$PATH
diff --git a/src/.docker_modules/emase/0.10.16/docker_init.sh b/src/.docker_modules/emase/0.10.16/docker_init.sh
new file mode 100755
index 0000000000000000000000000000000000000000..9fae4ac78667557b35fada3bc51088fde2d16b9e
--- /dev/null
+++ b/src/.docker_modules/emase/0.10.16/docker_init.sh
@@ -0,0 +1,5 @@
+#!/bin/sh
+docker pull lbmc/emase:0.10.16
+# docker build src/.docker_modules/emase/0.10.16 -t 'lbmc/emase:0.10.16'
+# docker push lbmc/emase:0.10.16
+docker buildx build --platform linux/amd64,linux/arm64 -t "lbmc/emase:0.10.16" --push src/.docker_modules/emase/0.10.16
diff --git a/src/.docker_modules/fastp/0.19.7/docker_init.sh b/src/.docker_modules/fastp/0.19.7/docker_init.sh
index 1085915c2cfd5caf2599275d8ad50a909704d728..310e5aef0e5e01c20200da089b4fe614aeeac103 100755
--- a/src/.docker_modules/fastp/0.19.7/docker_init.sh
+++ b/src/.docker_modules/fastp/0.19.7/docker_init.sh
@@ -1,4 +1,5 @@
 #!/bin/sh
 docker pull lbmc/fastp:0.19.7
-docker build src/.docker_modules/fastp/0.19.7 -t 'lbmc/fastp:0.19.7'
-docker push lbmc/fastp:0.19.7
+# docker build src/.docker_modules/fastp/0.19.7 -t 'lbmc/fastp:0.19.7'
+# docker push lbmc/fastp:0.19.7
+docker buildx build --platform linux/amd64,linux/arm64 -t "lbmc/fastp:0.19.7" --push src/.docker_modules/fastp/0.19.7
diff --git a/src/.docker_modules/fastp/0.20.1/docker_init.sh b/src/.docker_modules/fastp/0.20.1/docker_init.sh
index 2b1f3bee40fb05504488fe026ff39811f9fef47d..7310a935194e3a15ec32ed784098fc4c06b4fb31 100755
--- a/src/.docker_modules/fastp/0.20.1/docker_init.sh
+++ b/src/.docker_modules/fastp/0.20.1/docker_init.sh
@@ -1,4 +1,5 @@
 #!/bin/sh
 docker pull lbmc/fastp:0.20.1
-docker build src/.docker_modules/fastp/0.20.1 -t 'lbmc/fastp:0.20.1'
-docker push lbmc/fastp:0.20.1
+# docker build src/.docker_modules/fastp/0.20.1 -t 'lbmc/fastp:0.20.1'
+# docker push lbmc/fastp:0.20.1
+docker buildx build --platform linux/amd64,linux/arm64 -t "lbmc/fastp:0.20.1" --push src/.docker_modules/fastp/0.20.1
diff --git a/src/.docker_modules/fastqc/0.11.5/docker_init.sh b/src/.docker_modules/fastqc/0.11.5/docker_init.sh
index 6b82ff40580dc34b3594278ef2f9c46d36f73560..502bbd62228163b18a2d1369d201ae9ac1c8f47a 100755
--- a/src/.docker_modules/fastqc/0.11.5/docker_init.sh
+++ b/src/.docker_modules/fastqc/0.11.5/docker_init.sh
@@ -1,4 +1,5 @@
 #!/bin/sh
 docker pull lbmc/fastqc:0.11.5
-docker build src/.docker_modules/fastqc/0.11.5 -t 'lbmc/fastqc:0.11.5'
-docker push lbmc/fastqc:0.11.5
+# docker build src/.docker_modules/fastqc/0.11.5 -t 'lbmc/fastqc:0.11.5'
+# docker push lbmc/fastqc:0.11.5
+docker buildx build --platform linux/amd64,linux/arm64 -t "lbmc/fastqc:0.11.5" --push src/.docker_modules/fastqc/0.11.5
diff --git a/src/.docker_modules/file_handle/0.1.1/docker_init.sh b/src/.docker_modules/file_handle/0.1.1/docker_init.sh
index 0f1cf512532dc8d72490b9ebb174d90418f4c640..21c51ea0d52e2eb4863acc82760492de1f47d8c8 100755
--- a/src/.docker_modules/file_handle/0.1.1/docker_init.sh
+++ b/src/.docker_modules/file_handle/0.1.1/docker_init.sh
@@ -1,4 +1,5 @@
 #!/bin/sh
 docker pull lbmc/file_handle:0.1.1
-docker build src/.docker_modules/file_handle/0.1.1 -t 'lbmc/file_handle:0.1.1'
-docker push lbmc/file_handle:0.1.1
+# docker build src/.docker_modules/file_handle/0.1.1 -t 'lbmc/file_handle:0.1.1'
+# docker push lbmc/file_handle:0.1.1
+docker buildx build --platform linux/amd64,linux/arm64 -t "lbmc/file_handle:0.1.1" --push src/.docker_modules/file_handle/0.1.1
diff --git a/src/.docker_modules/flexi_splitter/1.0.2/Dockerfile b/src/.docker_modules/flexi_splitter/1.0.2/Dockerfile
new file mode 100644
index 0000000000000000000000000000000000000000..49f5f7091902703740fbaac084bdebfa1e596fb9
--- /dev/null
+++ b/src/.docker_modules/flexi_splitter/1.0.2/Dockerfile
@@ -0,0 +1,14 @@
+FROM python:3.9-slim
+MAINTAINER Laurent Modolo
+
+ENV FLEXI_SPLITTER_VERSION=1.0.2
+RUN apt-get update -qq \
+    && apt-get install --no-install-recommends --yes \
+        build-essential \
+        procps
+RUN pip3 install flexi-splitter==${FLEXI_SPLITTER_VERSION}
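+# drop the compiler toolchain after installation to keep the image small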
+RUN apt-get remove --yes \
+        build-essential
+
+CMD [ "bash" ]
\ No newline at end of file
diff --git a/src/.docker_modules/flexi_splitter/1.0.2/docker_init.sh b/src/.docker_modules/flexi_splitter/1.0.2/docker_init.sh
new file mode 100755
index 0000000000000000000000000000000000000000..7f7c0e77699f32a95df6e452abe2fa658df605c8
--- /dev/null
+++ b/src/.docker_modules/flexi_splitter/1.0.2/docker_init.sh
@@ -0,0 +1,5 @@
+#!/bin/sh
+docker pull lbmc/flexi_splitter:1.0.2
+# docker build src/.docker_modules/flexi_splitter/1.0.2 -t 'lbmc/flexi_splitter:1.0.2'
+# docker push lbmc/flexi_splitter:1.0.2
+docker buildx build --platform linux/amd64,linux/arm64 -t "lbmc/flexi_splitter:1.0.2" --push src/.docker_modules/flexi_splitter/1.0.2
diff --git a/src/.docker_modules/freebayes/1.3.2/docker_init.sh b/src/.docker_modules/freebayes/1.3.2/docker_init.sh
index 3dee0714d779f3df172fb007962dcf5219f7cc3f..2e754d0d92af4b7b43902b023ab6ff9d9cc6d510 100755
--- a/src/.docker_modules/freebayes/1.3.2/docker_init.sh
+++ b/src/.docker_modules/freebayes/1.3.2/docker_init.sh
@@ -1,4 +1,5 @@
 #!/bin/sh
 docker pull lbmc/freebayes:1.3.2
-docker build src/.docker_modules/freebayes/1.3.2/ -t 'lbmc/freebayes:1.3.2'
-docker push lbmc/freebayes:1.3.2
+# docker build src/.docker_modules/freebayes/1.3.2/ -t 'lbmc/freebayes:1.3.2'
+# docker push lbmc/freebayes:1.3.2
+docker buildx build --platform linux/amd64,linux/arm64 -t "lbmc/freebayes:1.3.2" --push src/.docker_modules/freebayes/1.3.2
diff --git a/src/.docker_modules/g2gtools/0.2.7/docker_init.sh b/src/.docker_modules/g2gtools/0.2.7/docker_init.sh
index 2da2d65ca6ecc154956f98dd2bc980f70311c41c..633690bc685f11ba9a4431ba1903bfc6db654cdb 100755
--- a/src/.docker_modules/g2gtools/0.2.7/docker_init.sh
+++ b/src/.docker_modules/g2gtools/0.2.7/docker_init.sh
@@ -1,4 +1,5 @@
 #!/bin/sh
 docker pull lbmc/g2gtools:0.2.7
-docker build src/.docker_modules/g2gtools/0.2.7 -t 'lbmc/g2gtools:0.2.7'
-docker push lbmc/g2gtools:0.2.7
+# docker build src/.docker_modules/g2gtools/0.2.7 -t 'lbmc/g2gtools:0.2.7'
+# docker push lbmc/g2gtools:0.2.7
+docker buildx build --platform linux/amd64,linux/arm64 -t "lbmc/g2gtools:0.2.7" --push src/.docker_modules/g2gtools/0.2.7
diff --git a/src/.docker_modules/g2gtools/0.2.8/docker_init.sh b/src/.docker_modules/g2gtools/0.2.8/docker_init.sh
index 99cbd49ff63c77fb957b0193c4596029257b2de7..d09df0f7759d4e9f69f3d1fdbca5b1c538a3fdfd 100755
--- a/src/.docker_modules/g2gtools/0.2.8/docker_init.sh
+++ b/src/.docker_modules/g2gtools/0.2.8/docker_init.sh
@@ -1,4 +1,5 @@
 #!/bin/sh
 docker pull lbmc/g2gtools:0.2.8
-docker build src/.docker_modules/g2gtools/0.2.8 -t 'lbmc/g2gtools:0.2.8'
-docker push lbmc/g2gtools:0.2.8
+# docker build src/.docker_modules/g2gtools/0.2.8 -t 'lbmc/g2gtools:0.2.8'
+# docker push lbmc/g2gtools:0.2.8
+docker buildx build --platform linux/amd64,linux/arm64 -t "lbmc/g2gtools:0.2.8" --push src/.docker_modules/g2gtools/0.2.8
diff --git a/src/.docker_modules/gatk/3.8.0/docker_init.sh b/src/.docker_modules/gatk/3.8.0/docker_init.sh
index 1188be23f86622450582486167064c6829913394..6378db27d03f6d39311c6478dc3efec975915f75 100755
--- a/src/.docker_modules/gatk/3.8.0/docker_init.sh
+++ b/src/.docker_modules/gatk/3.8.0/docker_init.sh
@@ -1,4 +1,5 @@
 #!/bin/sh
 docker pull lbmc/gatk:3.8.0
-docker build src/.docker_modules/gatk/3.8.0 -t 'lbmc/gatk:3.8.0'
-docker push lbmc/gatk:3.8.0
+# docker build src/.docker_modules/gatk/3.8.0 -t 'lbmc/gatk:3.8.0'
+# docker push lbmc/gatk:3.8.0
+docker buildx build --platform linux/amd64,linux/arm64 -t "lbmc/gatk:3.8.0" --push src/.docker_modules/gatk/3.8.0
diff --git a/src/.docker_modules/gatk/4.0.8.1/docker_init.sh b/src/.docker_modules/gatk/4.0.8.1/docker_init.sh
index ddfd8ee0205fa9e9af20878ec561821fc4173057..38d06e39cc7030efb6e01a4fcec9d82619556e05 100755
--- a/src/.docker_modules/gatk/4.0.8.1/docker_init.sh
+++ b/src/.docker_modules/gatk/4.0.8.1/docker_init.sh
@@ -1,4 +1,5 @@
 #!/bin/sh
 docker pull lbmc/gatk:4.0.8.1
-docker build src/.docker_modules/gatk/4.0.8.1 -t 'lbmc/gatk:4.0.8.1'
-docker push lbmc/gatk:4.0.8.1
+# docker build src/.docker_modules/gatk/4.0.8.1 -t 'lbmc/gatk:4.0.8.1'
+# docker push lbmc/gatk:4.0.8.1
+docker buildx build --platform linux/amd64,linux/arm64 -t "lbmc/gatk:4.0.8.1" --push src/.docker_modules/gatk/4.0.8.1
diff --git a/src/.docker_modules/gffread/0.11.8/docker_init.sh b/src/.docker_modules/gffread/0.11.8/docker_init.sh
index 44c18612cbc9b8d5c093d980848bfc03d1b2f1e6..e03cbd883bfa8bcf130e45c0f07ceabd7895bb90 100755
--- a/src/.docker_modules/gffread/0.11.8/docker_init.sh
+++ b/src/.docker_modules/gffread/0.11.8/docker_init.sh
@@ -1,4 +1,5 @@
 #!/bin/sh
 docker pull lbmc/gffread:0.11.8
-docker build src/.docker_modules/gffread/0.11.8 -t 'lbmc/gffread:0.11.8'
-docker push lbmc/gffread:0.11.8
+# docker build src/.docker_modules/gffread/0.11.8 -t 'lbmc/gffread:0.11.8'
+# docker push lbmc/gffread:0.11.8
+docker buildx build --platform linux/amd64,linux/arm64 -t "lbmc/gffread:0.11.8" --push src/.docker_modules/gffread/0.11.8
diff --git a/src/.docker_modules/gffread/0.12.2/Dockerfile b/src/.docker_modules/gffread/0.12.2/Dockerfile
new file mode 100644
index 0000000000000000000000000000000000000000..a60a75facd1d5a641231b946181a12ca9a6172c8
--- /dev/null
+++ b/src/.docker_modules/gffread/0.12.2/Dockerfile
@@ -0,0 +1,17 @@
+FROM alpine:3.12
+MAINTAINER Laurent Modolo
+
+ENV GFFREAD_VERSION=0.12.2
+ENV PACKAGES make \
+             g++ \
+             bash \
+             perl
+
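+# build gffread from source (the tarball is distributed from the stringtie download area)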
+RUN apk update && \
+    apk add ${PACKAGES} && \
+    wget http://ccb.jhu.edu/software/stringtie/dl/gffread-${GFFREAD_VERSION}.tar.gz && \
+    tar -xvf gffread-${GFFREAD_VERSION}.tar.gz && \
+    cd gffread-${GFFREAD_VERSION}/ && \
+    make && \
+    cp gffread /usr/bin/
diff --git a/src/.docker_modules/gffread/0.12.2/docker_init.sh b/src/.docker_modules/gffread/0.12.2/docker_init.sh
new file mode 100755
index 0000000000000000000000000000000000000000..174089499432aa9e38c842b8f04df249bab63059
--- /dev/null
+++ b/src/.docker_modules/gffread/0.12.2/docker_init.sh
@@ -0,0 +1,5 @@
+#!/bin/sh
+docker pull lbmc/gffread:0.12.2
+# docker build src/.docker_modules/gffread/0.12.2 -t 'lbmc/gffread:0.12.2'
+# docker push lbmc/gffread:0.12.2
+docker buildx build --platform linux/amd64,linux/arm64 -t "lbmc/gffread:0.12.2" --push src/.docker_modules/gffread/0.12.2
diff --git a/src/.docker_modules/hisat2/2.0.0/docker_init.sh b/src/.docker_modules/hisat2/2.0.0/docker_init.sh
index 8bfb16363342039e3fff7057259a8e835c2a8c6d..a09fc9726e4cb19ffb9f80a29a8b22289230e6a2 100755
--- a/src/.docker_modules/hisat2/2.0.0/docker_init.sh
+++ b/src/.docker_modules/hisat2/2.0.0/docker_init.sh
@@ -1,4 +1,5 @@
 #!/bin/sh
 docker pull lbmc/hisat2:2.0.0
-docker build src/.docker_modules/hisat2/2.0.0 -t 'lbmc/hisat2:2.0.0'
-docker push lbmc/hisat2:2.0.0
+# docker build src/.docker_modules/hisat2/2.0.0 -t 'lbmc/hisat2:2.0.0'
+# docker push lbmc/hisat2:2.0.0
+docker buildx build --platform linux/amd64,linux/arm64 -t "lbmc/hisat2:2.0.0" --push src/.docker_modules/hisat2/2.0.0
diff --git a/src/.docker_modules/hisat2/2.1.0/docker_init.sh b/src/.docker_modules/hisat2/2.1.0/docker_init.sh
index 55fb191ab23cbe7615f70ba5488a227b0b69580a..4d541d32ffda61bc7c5c159051a2da3d9b8a54da 100755
--- a/src/.docker_modules/hisat2/2.1.0/docker_init.sh
+++ b/src/.docker_modules/hisat2/2.1.0/docker_init.sh
@@ -1,4 +1,5 @@
 #!/bin/sh
 docker pull lbmc/hisat2:2.1.0
-docker build src/.docker_modules/hisat2/2.1.0 -t 'lbmc/hisat2:2.1.0'
-docker push lbmc/hisat2:2.1.0
+# docker build src/.docker_modules/hisat2/2.1.0 -t 'lbmc/hisat2:2.1.0'
+# docker push lbmc/hisat2:2.1.0
+docker buildx build --platform linux/amd64,linux/arm64 -t "lbmc/hisat2:2.1.0" --push src/.docker_modules/hisat2/2.1.0
diff --git a/src/.docker_modules/htseq/0.11.2/docker_init.sh b/src/.docker_modules/htseq/0.11.2/docker_init.sh
index b46f01de71ede2fa99923f609cc98d96994a6bf6..9e2c73ea967838353653327b57efc23f464a75cc 100644
--- a/src/.docker_modules/htseq/0.11.2/docker_init.sh
+++ b/src/.docker_modules/htseq/0.11.2/docker_init.sh
@@ -1,2 +1,2 @@
 #!/bin/sh
-docker build src/docker_modules/HTSeq/0.11.2 -t 'htseq:0.11.2'
+# docker build src/docker_modules/HTSeq/0.11.2 -t 'htseq:0.11.2'
diff --git a/src/.docker_modules/htseq/0.13.5/Dockerfile b/src/.docker_modules/htseq/0.13.5/Dockerfile
new file mode 100644
index 0000000000000000000000000000000000000000..68347cf4a63723c11b3a7e4d01817c6ef8d18f79
--- /dev/null
+++ b/src/.docker_modules/htseq/0.13.5/Dockerfile
@@ -0,0 +1,2 @@
+FROM quay.io/biocontainers/htseq:0.13.5--py39h70b41aa_1
+MAINTAINER Laurent Modolo
diff --git a/src/.docker_modules/htseq/0.13.5/docker_init.sh b/src/.docker_modules/htseq/0.13.5/docker_init.sh
new file mode 100755
index 0000000000000000000000000000000000000000..00ad38fecb0bf776961f550a87610abb40741211
--- /dev/null
+++ b/src/.docker_modules/htseq/0.13.5/docker_init.sh
@@ -0,0 +1,5 @@
+#!/bin/sh
+docker pull lbmc/htseq:0.13.5
+# docker build src/.docker_modules/htseq/0.13.5 -t 'lbmc/htseq:0.13.5'
+# docker push lbmc/htseq:0.13.5
+docker buildx build --platform linux/amd64,linux/arm64 -t "lbmc/htseq:0.13.5" --push src/.docker_modules/htseq/0.13.5
diff --git a/src/.docker_modules/htseq/0.8.0/docker_init.sh b/src/.docker_modules/htseq/0.8.0/docker_init.sh
index e322517cf457f8a8a9041da975a7851caf2ab4ef..7535289438cdc687a1e5739deca28b14b7426c9f 100755
--- a/src/.docker_modules/htseq/0.8.0/docker_init.sh
+++ b/src/.docker_modules/htseq/0.8.0/docker_init.sh
@@ -1,4 +1,5 @@
 #!/bin/sh
 docker pull lbmc/htseq:0.8.0
-docker build src/.docker_modules/htseq/0.8.0 -t 'lbmc/htseq:0.8.0'
-docker push lbmc/htseq:0.8.0
+# docker build src/.docker_modules/htseq/0.8.0 -t 'lbmc/htseq:0.8.0'
+# docker push lbmc/htseq:0.8.0
+docker buildx build --platform linux/amd64,linux/arm64 -t "lbmc/htseq:0.8.0" --push src/.docker_modules/htseq/0.8.0
diff --git a/src/.docker_modules/kallisto/0.43.1/docker_init.sh b/src/.docker_modules/kallisto/0.43.1/docker_init.sh
index b93c004d24d291b3c92bab0ca7d9ae7c7131cf7a..b4043c7102a70c8e8a4eab4f11e9a4caa9c285e0 100755
--- a/src/.docker_modules/kallisto/0.43.1/docker_init.sh
+++ b/src/.docker_modules/kallisto/0.43.1/docker_init.sh
@@ -1,4 +1,5 @@
 #!/bin/sh
 docker pull lbmc/kallisto:0.43.1
-docker build src/.docker_modules/kallisto/0.43.1 -t 'lbmc/kallisto:0.43.1'
-docker push lbmc/kallisto:0.43.1
+# docker build src/.docker_modules/kallisto/0.43.1 -t 'lbmc/kallisto:0.43.1'
+# docker push lbmc/kallisto:0.43.1
+docker buildx build --platform linux/amd64,linux/arm64 -t "lbmc/kallisto:0.43.1" --push src/.docker_modules/kallisto/0.43.1
diff --git a/src/.docker_modules/kallisto/0.44.0/docker_init.sh b/src/.docker_modules/kallisto/0.44.0/docker_init.sh
index 4fa79008a07f4e9a4afe6c8bb20fb8ca60b98858..e3670b68417db2189b04ac00af3139400b4b3fd3 100755
--- a/src/.docker_modules/kallisto/0.44.0/docker_init.sh
+++ b/src/.docker_modules/kallisto/0.44.0/docker_init.sh
@@ -1,4 +1,5 @@
 #!/bin/sh
 docker pull lbmc/kallisto:0.44.0
-docker build src/.docker_modules/kallisto/0.44.0 -t 'lbmc/kallisto:0.44.0'
-docker push lbmc/kallisto:0.44.0
+# docker build src/.docker_modules/kallisto/0.44.0 -t 'lbmc/kallisto:0.44.0'
+# docker push lbmc/kallisto:0.44.0
+docker buildx build --platform linux/amd64,linux/arm64 -t "lbmc/kallisto:0.44.0" --push src/.docker_modules/kallisto/0.44.0
diff --git a/src/.docker_modules/kallistobustools/0.24.4/docker_init.sh b/src/.docker_modules/kallistobustools/0.24.4/docker_init.sh
index 216b302fea20598e923a7ab21d492235042f1738..bb0edc976b6d56be250ab54b266c24a9672bccf4 100755
--- a/src/.docker_modules/kallistobustools/0.24.4/docker_init.sh
+++ b/src/.docker_modules/kallistobustools/0.24.4/docker_init.sh
@@ -1,4 +1,5 @@
 #!/bin/sh
 docker pull lbmc/kallistobustools:0.24.4
-docker build src/.docker_modules/kallistobustools/0.24.4 -t 'lbmc/kallistobustools:0.24.4'
-docker push lbmc/kallistobustools:0.24.4
+# docker build src/.docker_modules/kallistobustools/0.24.4 -t 'lbmc/kallistobustools:0.24.4'
+# docker push lbmc/kallistobustools:0.24.4
+docker buildx build --platform linux/amd64,linux/arm64 -t "lbmc/kallistobustools:0.24.4" --push src/.docker_modules/kallistobustools/0.24.4
diff --git a/src/.docker_modules/kallistobustools/0.39.3/docker_init.sh b/src/.docker_modules/kallistobustools/0.39.3/docker_init.sh
index 5cdd8c44773d7eb1f21bf5f500e3556f978ecdf9..9ba2835a5f2287d334900e7b4ce17e89cde95049 100755
--- a/src/.docker_modules/kallistobustools/0.39.3/docker_init.sh
+++ b/src/.docker_modules/kallistobustools/0.39.3/docker_init.sh
@@ -1,4 +1,5 @@
 #!/bin/sh
 docker pull lbmc/kallistobustools:0.39.3
-docker build src/.docker_modules/kallistobustools/0.39.3 -t 'lbmc/kallistobustools:0.39.3'
-docker push lbmc/kallistobustools:0.39.3
+# docker build src/.docker_modules/kallistobustools/0.39.3 -t 'lbmc/kallistobustools:0.39.3'
+# docker push lbmc/kallistobustools:0.39.3
+docker buildx build --platform linux/amd64,linux/arm64 -t "lbmc/kallistobustools:0.39.3" --push src/.docker_modules/kallistobustools/0.39.3
diff --git a/src/.docker_modules/kb/0.26.0/Dockerfile b/src/.docker_modules/kb/0.26.0/Dockerfile
new file mode 100644
index 0000000000000000000000000000000000000000..374646355b3f3f3895e8450c70d49eec1688c280
--- /dev/null
+++ b/src/.docker_modules/kb/0.26.0/Dockerfile
@@ -0,0 +1,13 @@
+FROM python:3.9-slim 
+
+ENV KB_VERSION="0.26.0"
+
+RUN apt update && apt install -y procps && pip3 install kb-python==${KB_VERSION}
+
+COPY t2g.py /usr/bin/
+COPY fix_t2g.py /usr/bin/
+
+RUN chmod +x /usr/bin/t2g.py
+RUN chmod +x /usr/bin/fix_t2g.py
+
+CMD [ "bash" ]
diff --git a/src/.docker_modules/kb/0.26.0/docker_init.sh b/src/.docker_modules/kb/0.26.0/docker_init.sh
new file mode 100755
index 0000000000000000000000000000000000000000..4531914965e15aa627ba65938d1cbb0ef673483a
--- /dev/null
+++ b/src/.docker_modules/kb/0.26.0/docker_init.sh
@@ -0,0 +1,5 @@
+#!/bin/sh
+docker pull lbmc/kb:0.26.0
+# docker build src/.docker_modules/kb/0.26.0 -t 'lbmc/kb:0.26.0'
+# docker push lbmc/kb:0.26.0
+docker buildx build --platform linux/amd64,linux/arm64 -t "lbmc/kb:0.26.0" --push src/.docker_modules/kb/0.26.0
diff --git a/src/.docker_modules/kb/0.26.0/fix_t2g.py b/src/.docker_modules/kb/0.26.0/fix_t2g.py
new file mode 100644
index 0000000000000000000000000000000000000000..a6b4619b5a17ad3fd4351918eb77c13b8c106f94
--- /dev/null
+++ b/src/.docker_modules/kb/0.26.0/fix_t2g.py
@@ -0,0 +1,65 @@
+#!/usr/local/bin/python
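+# rewrite a transcript-to-gene (t2g) file, dropping version suffixes from transcript and gene ids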
+import os
+import re
+import gzip
+import argparse
+
+
+def validate_file(f):
+    if not os.path.exists(f):
+        # Argparse uses the ArgumentTypeError to give a rejection message like:
+        # error: argument input: x does not exist
+        raise argparse.ArgumentTypeError("{0} does not exist".format(f))
+    return f
+
+
+def t2g_line(transcript, gene):
+    return str(transcript) + "\t" + str(gene) + "\n"
+
+
+def build_t2g_re():
+    return re.compile("([A-Z]+[0-9]+)\.\S+\s([A-Z]+[0-9]+)\.\S+")
+
+
+def get_t2g(line, t2g_re):
+    return t2g_re.match(line)
+
+
+def get_t2g_line(line, t2g_re):
+    t2g_id = get_t2g(line, t2g_re)
+    return {'transcript_id': t2g_id, 'gene_id': t2g_id}
+
+
+def write_t2g_line(t2g, line, t2g_re):
+    results = get_t2g_line(line, t2g_re)
+    if results['transcript_id']:
+        t2g.write(
+            t2g_line(
+                results['transcript_id'].group(1),
+                results['gene_id'].group(2)
+            )
+        )
+
+
+if __name__ == "__main__":
+    parser = argparse.ArgumentParser(
+        description="create transcript to genes file from a gtf file."
+    )
+    parser.add_argument(
+        "-f", "--t2g", dest="t2g", required=True, type=validate_file,
+        help="t2g file", metavar="FILE"
+    )
+    args = parser.parse_args()
+    t2g_re = build_t2g_re()
+
+    try:
+        with gzip.open(args.t2g, "rb") as gtf:
+            with open("fix_t2g.txt", "w") as t2g:
+                for line in gtf:
+                    write_t2g_line(t2g, line.decode(), t2g_re)
+    except gzip.BadGzipFile:
+        with open(args.t2g, "r") as gtf:
+            with open("fix_t2g.txt", "w") as t2g:
+                for line in gtf:
+                    write_t2g_line(t2g, str(line), t2g_re)
diff --git a/src/.docker_modules/kb/0.26.0/t2g.py b/src/.docker_modules/kb/0.26.0/t2g.py
new file mode 100755
index 0000000000000000000000000000000000000000..b99e74e6c2c3d9574ce54008bc58143c14b229a5
--- /dev/null
+++ b/src/.docker_modules/kb/0.26.0/t2g.py
@@ -0,0 +1,76 @@
+#!/usr/local/bin/python
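+# build a transcript-to-gene (t2g) table from the transcript_id and gene_id attributes of a GTF file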
+import os
+import re
+import gzip
+import argparse
+
+
+def validate_file(f):
+    if not os.path.exists(f):
+        # Argparse uses the ArgumentTypeError to give a rejection message like:
+        # error: argument input: x does not exist
+        raise argparse.ArgumentTypeError("{0} does not exist".format(f))
+    return f
+
+
+def t2g_line(transcript, gene):
+    return str(transcript) + "\t" + str(gene) + "\n"
+
+
+def build_gene_re():
+    return re.compile(".*gene_id\s+\"(\S+)\";.*")
+
+
+def build_transcript_re():
+    return re.compile(".*transcript_id\s+\"(\S+)\";.*")
+
+
+def get_gene(line, gene_re):
+    return gene_re.match(line)
+
+
+def get_transcript(line, transcript_re):
+    return transcript_re.match(line)
+
+
+def gtf_line(line, transcript_re, gene_re):
+    transcript_id = get_transcript(line, transcript_re)
+    gene_id = get_gene(line, gene_re)
+    return {'transcript_id': transcript_id, 'gene_id': gene_id}
+
+
+def write_t2g_line(t2g, line, transcript_re, gene_re):
+    results = gtf_line(line, transcript_re, gene_re)
+    if results['transcript_id']:
+        t2g.write(
+            t2g_line(
+                results['transcript_id'].group(1),
+                results['gene_id'].group(1)
+            )
+        )
+
+
+if __name__ == "__main__":
+    parser = argparse.ArgumentParser(
+        description="create transcript to genes file from a gtf file."
+    )
+    parser.add_argument(
+        "-g", "--gtf", dest="gtf", required=True, type=validate_file,
+        help="gtf file", metavar="FILE"
+    )
+    args = parser.parse_args()
+    gene_re = build_gene_re()
+    transcript_re = build_transcript_re()
+
+    try:
+        with gzip.open(args.gtf, "rb") as gtf:
+            with open("t2g_dup.txt", "w") as t2g:
+                for line in gtf:
+                    write_t2g_line(t2g, line.decode(), transcript_re, gene_re)
+    except gzip.BadGzipFile:
+        with open(args.gtf, "r") as gtf:
+            with open("t2g_dup.txt", "w") as t2g:
+                for line in gtf:
+                    write_t2g_line(t2g, str(line), transcript_re, gene_re)
+
diff --git a/src/.docker_modules/kb/0.26.3/Dockerfile b/src/.docker_modules/kb/0.26.3/Dockerfile
new file mode 100644
index 0000000000000000000000000000000000000000..5cd35dfae7d474b25223ebeca80fe6adada91db3
--- /dev/null
+++ b/src/.docker_modules/kb/0.26.3/Dockerfile
@@ -0,0 +1,13 @@
+FROM python:3.9-slim 
+
+ENV KB_VERSION="0.26.3"
+
+RUN apt update && apt install -y procps make gcc zlib1g-dev libbz2-dev libcurl4 liblzma-dev \
+        && pip3 install pysam anndata h5py Jinja2 loompy nbconvert nbformat ngs-tools numpy pandas plotly scanpy scikit-learn tqdm \
+        && pip3 install kb-python==${KB_VERSION} gffutils
+
+COPY t2g.py /usr/bin/
+
+RUN chmod +x /usr/bin/t2g.py
+
+CMD [ "bash" ]
diff --git a/src/.docker_modules/kb/0.26.3/docker_init.sh b/src/.docker_modules/kb/0.26.3/docker_init.sh
new file mode 100755
index 0000000000000000000000000000000000000000..8104fc75e2461863c98782e9c115a42501f624c9
--- /dev/null
+++ b/src/.docker_modules/kb/0.26.3/docker_init.sh
@@ -0,0 +1,5 @@
+#!/bin/sh
+docker pull lbmc/kb:0.26.3
+# docker build src/.docker_modules/kb/0.26.3 -t 'lbmc/kb:0.26.3'
+# docker push lbmc/kb:0.26.3
+docker buildx build --platform linux/amd64,linux/arm64 -t "lbmc/kb:0.26.3" --push src/.docker_modules/kb/0.26.3
diff --git a/src/.docker_modules/kb/0.26.3/t2g.py b/src/.docker_modules/kb/0.26.3/t2g.py
new file mode 100755
index 0000000000000000000000000000000000000000..f9f0b45dc89b385c3ed52dc252f8f09eb3bc8c74
--- /dev/null
+++ b/src/.docker_modules/kb/0.26.3/t2g.py
@@ -0,0 +1,48 @@
+#!/usr/local/bin/python
+import os
+import gffutils
+import argparse
+
+
+def validate_file(f):
+    if not os.path.exists(f):
+        # Argparse uses the ArgumentTypeError to give a rejection message like:
+        # error: argument input: x does not exist
+        raise argparse.ArgumentTypeError("{0} does not exist".format(f))
+    return f
+
+
+if __name__ == "__main__":
+    parser = argparse.ArgumentParser(
+        description="create transcript to genes file from a gtf file."
+    )
+    parser.add_argument(
+        "-g", "--gtf", dest="gtf", required=True, type=validate_file,
+        help="gtf file", metavar="FILE"
+    )
+    args = parser.parse_args()
+
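+    # load the GTF into an in-memory gffutils database, inferring gene and transcript features when they are missing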
+    db = gffutils.create_db(
+        args.gtf,
+        dbfn=":memory:",
+        force=True,
+        merge_strategy="merge",
+        disable_infer_transcripts=False,
+        disable_infer_genes=False
+    )
+    with open("t2g.txt", "w") as t2g:
+        for gene in db.all_features():
+            for transcript in db.children(
+              gene, featuretype='transcript', order_by='start'
+            ):
+                t2g_line = str(transcript["transcript_id"][0]) + \
+                    "\t" + \
+                    str(gene["gene_id"][0])
+                t2g_line = t2g_line.split("\t")
+                t2g.write(
+                    str(t2g_line[0].split(".")[0]) +
+                    "\t" +
+                    str(t2g_line[1].split(".")[0]) +
+                    "\n"
+                )
diff --git a/src/.docker_modules/last/1060/docker_init.sh b/src/.docker_modules/last/1060/docker_init.sh
index 0e8393fb88f803e9648cc1185f10c3198daf5976..e5682dfdf46a796352dbfa5068cd5e0e6001adf3 100755
--- a/src/.docker_modules/last/1060/docker_init.sh
+++ b/src/.docker_modules/last/1060/docker_init.sh
@@ -1,4 +1,5 @@
 #!/bin/sh
 docker pull lbmc/last:1060
-docker build src/.docker_modules/last/1060/ -t 'lbmc/last:1060'
-docker push lbmc/last:1060
+# docker build src/.docker_modules/last/1060/ -t 'lbmc/last:1060'
+# docker push lbmc/last:1060
+docker buildx build --platform linux/amd64,linux/arm64 -t "lbmc/last:1060" --push src/.docker_modules/last/1060
diff --git a/src/.docker_modules/liftover/357/docker_init.sh b/src/.docker_modules/liftover/357/docker_init.sh
index 68bd90585292fb30242e3d2cdd94e3538d277f6f..1f4da570565e97ea69275452ca68b5fb4004973c 100755
--- a/src/.docker_modules/liftover/357/docker_init.sh
+++ b/src/.docker_modules/liftover/357/docker_init.sh
@@ -1,4 +1,5 @@
 #!/bin/sh
 docker pull lbmc/liftover:357
-docker build src/.docker_modules/liftover/357/ -t 'lbmc/liftover:357'
-docker push lbmc/liftover:357
+# docker build src/.docker_modules/liftover/357/ -t 'lbmc/liftover:357'
+# docker push lbmc/liftover:357
+docker buildx build --platform linux/amd64,linux/arm64 -t "lbmc/liftover:357" --push src/.docker_modules/liftover/357
diff --git a/src/.docker_modules/macs2/2.1.2/docker_init.sh b/src/.docker_modules/macs2/2.1.2/docker_init.sh
index 8dc7b2483a1aa91f1f637e26812469f861b68f0e..70f61fcd41109c398c4db60f4fcea2f69b9bbbcb 100755
--- a/src/.docker_modules/macs2/2.1.2/docker_init.sh
+++ b/src/.docker_modules/macs2/2.1.2/docker_init.sh
@@ -1,4 +1,5 @@
 #!/bin/sh
 docker pull lbmc/macs2:2.1.2
-docker build src/.docker_modules/macs2/2.1.2 -t 'lbmc/macs2:2.1.2'
-docker push lbmc/macs2:2.1.2
+# docker build src/.docker_modules/macs2/2.1.2 -t 'lbmc/macs2:2.1.2'
+# docker push lbmc/macs2:2.1.2
+docker buildx build --platform linux/amd64,linux/arm64 -t "lbmc/macs2:2.1.2" --push src/.docker_modules/macs2/2.1.2
diff --git a/src/.docker_modules/macs3/3.0.0a6/docker_init.sh b/src/.docker_modules/macs3/3.0.0a6/docker_init.sh
index 3c830318a39076b8f2ca4dc2e7442c8c046a320a..ab969f74738ed154e031f85315fa1a79d27e6703 100755
--- a/src/.docker_modules/macs3/3.0.0a6/docker_init.sh
+++ b/src/.docker_modules/macs3/3.0.0a6/docker_init.sh
@@ -1,4 +1,5 @@
 #!/bin/sh
 docker pull lbmc/macs3:3.0.0a6
-docker build src/.docker_modules/macs3/3.0.0a6 -t 'lbmc/macs3:3.0.0a6'
-docker push lbmc/macs3:3.0.0a6
+# docker build src/.docker_modules/macs3/3.0.0a6 -t 'lbmc/macs3:3.0.0a6'
+# docker push lbmc/macs3:3.0.0a6
+docker buildx build --platform linux/amd64,linux/arm64 -t "lbmc/macs3:3.0.0a6" --push src/.docker_modules/macs3/3.0.0a6
diff --git a/src/.docker_modules/minimap2/2.17/docker_init.sh b/src/.docker_modules/minimap2/2.17/docker_init.sh
index 773f0cf6d1ec3f29c3e60e4b1fa359d28223e601..c77f5b7879ced88a2533b81f263b80b771a39043 100755
--- a/src/.docker_modules/minimap2/2.17/docker_init.sh
+++ b/src/.docker_modules/minimap2/2.17/docker_init.sh
@@ -1,4 +1,5 @@
 #!/bin/sh
 docker pull lbmc/minimap2:2.17
-docker build src/.docker_modules/minimap2/2.17 -t 'lbmc/minimap2:2.17'
-docker push lbmc/minimap2:2.17
+# docker build src/.docker_modules/minimap2/2.17 -t 'lbmc/minimap2:2.17'
+# docker push lbmc/minimap2:2.17
+docker buildx build --platform linux/amd64,linux/arm64 -t "lbmc/minimap2:2.17" --push src/.docker_modules/minimap2/2.17
diff --git a/src/.docker_modules/multiqc/1.0/docker_init.sh b/src/.docker_modules/multiqc/1.0/docker_init.sh
index 1b45ce3e7d6a58c98cf34f7614603dec9ad525fc..a0098e2fdf2fbe5091543f0a872c47c2669a18d0 100755
--- a/src/.docker_modules/multiqc/1.0/docker_init.sh
+++ b/src/.docker_modules/multiqc/1.0/docker_init.sh
@@ -1,4 +1,5 @@
 #!/bin/sh
 docker pull lbmc/multiqc:1.0
-docker build src/.docker_modules/multiqc/1.0 -t 'lbmc/multiqc:1.0'
-docker push lbmc/multiqc:1.0
+# docker build src/.docker_modules/multiqc/1.0 -t 'lbmc/multiqc:1.0'
+# docker push lbmc/multiqc:1.0
+docker buildx build --platform linux/amd64,linux/arm64 -t "lbmc/multiqc:1.0" --push src/.docker_modules/multiqc/1.0
diff --git a/src/.docker_modules/multiqc/1.11/Dockerfile b/src/.docker_modules/multiqc/1.11/Dockerfile
new file mode 100644
index 0000000000000000000000000000000000000000..cd6e68e662bdd660e3c58f2ad51ff72ec1e7bfa5
--- /dev/null
+++ b/src/.docker_modules/multiqc/1.11/Dockerfile
@@ -0,0 +1,17 @@
+FROM python:3.9-slim
+MAINTAINER Laurent Modolo
+
+ENV MULTIQC_VERSION=1.11
+ENV PACKAGES procps locales
+
+RUN apt-get update && \
+    apt-get install -y --no-install-recommends ${PACKAGES} && \
+    apt-get clean
+
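+# MultiQC (via click) requires a UTF-8 locale at runtime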
+RUN locale-gen en_US.UTF-8
+ENV LC_ALL=C.UTF-8
+ENV LANG=C.UTF-8
+
+RUN pip3 install multiqc==${MULTIQC_VERSION}
+
diff --git a/src/.docker_modules/multiqc/1.11/docker_init.sh b/src/.docker_modules/multiqc/1.11/docker_init.sh
new file mode 100755
index 0000000000000000000000000000000000000000..5831c6eccde183ac2c129757436bba1823bb45c3
--- /dev/null
+++ b/src/.docker_modules/multiqc/1.11/docker_init.sh
@@ -0,0 +1,5 @@
+#!/bin/sh
+docker pull lbmc/multiqc:1.11
+# docker build src/.docker_modules/multiqc/1.11 -t 'lbmc/multiqc:1.11'
+# docker push lbmc/multiqc:1.11
+docker buildx build --platform linux/amd64,linux/arm64 -t "lbmc/multiqc:1.11" --push src/.docker_modules/multiqc/1.11
diff --git a/src/.docker_modules/multiqc/1.7/docker_init.sh b/src/.docker_modules/multiqc/1.7/docker_init.sh
index e091f04a2752d2fcf2901f580ff8f001c8589df6..375f5a3b9c30070a09098456a93afc46408a5a46 100755
--- a/src/.docker_modules/multiqc/1.7/docker_init.sh
+++ b/src/.docker_modules/multiqc/1.7/docker_init.sh
@@ -1,4 +1,5 @@
 #!/bin/sh
 docker pull lbmc/multiqc:1.7
-docker build src/.docker_modules/multiqc/1.7 -t 'lbmc/multiqc:1.7'
-docker push lbmc/multiqc:1.7
+# docker build src/.docker_modules/multiqc/1.7 -t 'lbmc/multiqc:1.7'
+# docker push lbmc/multiqc:1.7
+docker buildx build --platform linux/amd64,linux/arm64 -t "lbmc/multiqc:1.7" --push src/.docker_modules/multiqc/1.7
diff --git a/src/.docker_modules/multiqc/1.9/docker_init.sh b/src/.docker_modules/multiqc/1.9/docker_init.sh
index dcb2897242cb084d9bc851e2274c21aca99f00c2..b47d3cde6b23b65ffb2b9ca0062ffcf9541d8700 100755
--- a/src/.docker_modules/multiqc/1.9/docker_init.sh
+++ b/src/.docker_modules/multiqc/1.9/docker_init.sh
@@ -1,4 +1,5 @@
 #!/bin/sh
 docker pull lbmc/multiqc:1.9
-docker build src/.docker_modules/multiqc/1.9 -t 'lbmc/multiqc:1.9'
-docker push lbmc/multiqc:1.9
+# docker build src/.docker_modules/multiqc/1.9 -t 'lbmc/multiqc:1.9'
+# docker push lbmc/multiqc:1.9
+docker buildx build --platform linux/amd64,linux/arm64 -t "lbmc/multiqc:1.9" --push src/.docker_modules/multiqc/1.9
diff --git a/src/.docker_modules/music/6613c53/docker_init.sh b/src/.docker_modules/music/6613c53/docker_init.sh
index 20e327a97a09ced0c55b16d6a780f35a09e1c881..9ef317a11822b5abc4760053458a4aba700c8a41 100755
--- a/src/.docker_modules/music/6613c53/docker_init.sh
+++ b/src/.docker_modules/music/6613c53/docker_init.sh
@@ -1,4 +1,5 @@
 #!/bin/sh
 docker pull lbmc/music:6613c53
-docker build src/.docker_modules/music/6613c53 -t 'lbmc/music:6613c53'
-docker push lbmc/music:6613c53
+# docker build src/.docker_modules/music/6613c53 -t 'lbmc/music:6613c53'
+# docker push lbmc/music:6613c53
+docker buildx build --platform linux/amd64,linux/arm64 -t "lbmc/music:6613c53" --push src/.docker_modules/music/6613c53
diff --git a/src/.docker_modules/pandoc/2.11/docker_init.sh b/src/.docker_modules/pandoc/2.11/docker_init.sh
index 3bbc7b6adc58f5f9fb2f8c554bbdfea9724cb4b2..309f146d22c7784c5e035c649f142104bdb5a714 100755
--- a/src/.docker_modules/pandoc/2.11/docker_init.sh
+++ b/src/.docker_modules/pandoc/2.11/docker_init.sh
@@ -1,4 +1,5 @@
 #!/bin/sh
 docker pull lbmc/pandoc:2.11
-docker build src/.docker_modules/pandoc/2.11 -t 'lbmc/pandoc:2.11'
-docker push lbmc/pandoc:2.11
+# docker build src/.docker_modules/pandoc/2.11 -t 'lbmc/pandoc:2.11'
+# docker push lbmc/pandoc:2.11
+docker buildx build --platform linux/amd64,linux/arm64 -t "lbmc/pandoc:2.11" --push src/.docker_modules/pandoc/2.11
diff --git a/src/.docker_modules/picard/2.18.11/docker_init.sh b/src/.docker_modules/picard/2.18.11/docker_init.sh
index 82c4cf7d3bdf581587fa7a3345cff9cb465158ae..6e77b810f498da965b7181d1b243e321281218f9 100755
--- a/src/.docker_modules/picard/2.18.11/docker_init.sh
+++ b/src/.docker_modules/picard/2.18.11/docker_init.sh
@@ -1,4 +1,5 @@
 #!/bin/sh
 docker pull lbmc/picard:2.18.11
-docker build src/.docker_modules/picard/2.18.11 -t 'lbmc/picard:2.18.11'
-docker push lbmc/picard:2.18.11
+# docker build src/.docker_modules/picard/2.18.11 -t 'lbmc/picard:2.18.11'
+# docker push lbmc/picard:2.18.11
+docker buildx build --platform linux/amd64,linux/arm64 -t "lbmc/picard:2.18.11" --push src/.docker_modules/picard/2.18.11
diff --git a/src/.docker_modules/pigz/2.4/docker_init.sh b/src/.docker_modules/pigz/2.4/docker_init.sh
index 38d7347d72e9345ca69f54c9d8ea2ac3ec0ebbb8..2925b0b6f692d1ded62fb54a93f304186d212d3c 100755
--- a/src/.docker_modules/pigz/2.4/docker_init.sh
+++ b/src/.docker_modules/pigz/2.4/docker_init.sh
@@ -1,4 +1,5 @@
 #!/bin/sh
 docker pull lbmc/pigz:2.4
-docker build src/.docker_modules/pigz/2.4 -t 'lbmc/pigz:2.4'
-docker push lbmc/pigz:2.4
+# docker build src/.docker_modules/pigz/2.4 -t 'lbmc/pigz:2.4'
+# docker push lbmc/pigz:2.4
+docker buildx build --platform linux/amd64,linux/arm64 -t "lbmc/pigz:2.4" --push src/.docker_modules/pigz/2.4
diff --git a/src/.docker_modules/python/3.8/docker_init.sh b/src/.docker_modules/python/3.8/docker_init.sh
index 9a1c9b8b04f56586c6fadda16add7af4f66c3454..29a251fcaca93765eaa1d843b8d8486db2342d8f 100755
--- a/src/.docker_modules/python/3.8/docker_init.sh
+++ b/src/.docker_modules/python/3.8/docker_init.sh
@@ -1,4 +1,5 @@
 #!/bin/sh
 docker pull lbmc/python:3.8
-docker build src/.docker_modules/python/3.8 -t 'lbmc/python:3.8'
-docker push lbmc/python:3.8
+# docker build src/.docker_modules/python/3.8 -t 'lbmc/python:3.8'
+# docker push lbmc/python:3.8
+docker buildx build --platform linux/amd64,linux/arm64 -t "lbmc/python:3.8" --push src/.docker_modules/python/3.8
diff --git a/src/.docker_modules/r-base/3.5.3/docker_init.sh b/src/.docker_modules/r-base/3.5.3/docker_init.sh
index f62473c54ae30f85f36293fb74b42fc255062a46..7e0a931bc9affe03242282d21acea352f1ba8926 100755
--- a/src/.docker_modules/r-base/3.5.3/docker_init.sh
+++ b/src/.docker_modules/r-base/3.5.3/docker_init.sh
@@ -1,4 +1,5 @@
 #!/bin/sh
 docker pull lbmc/r:3.5.3
-docker build src/.docker_modules/r-base/3.5.3 -t 'lbmc/r:3.5.3'
-docker push lbmc/r:3.5.3
+# docker build src/.docker_modules/r-base/3.5.3 -t 'lbmc/r:3.5.3'
+# docker push lbmc/r:3.5.3
+docker buildx build --platform linux/amd64,linux/arm64 -t "lbmc/r:3.5.3" --push src/.docker_modules/r-base/3.5.3
diff --git a/src/.docker_modules/r-base/3.6.2/docker_init.sh b/src/.docker_modules/r-base/3.6.2/docker_init.sh
index d1e6e8183e95ba787d244ae17f13d0f6e97eefba..29a3ea6716c078c79437f428a8a70226351e2123 100755
--- a/src/.docker_modules/r-base/3.6.2/docker_init.sh
+++ b/src/.docker_modules/r-base/3.6.2/docker_init.sh
@@ -1,4 +1,5 @@
 #!/bin/sh
 docker pull lbmc/r-base:3.6.2
-docker build src/.docker_modules/r-base/3.6.2 -t 'lbmc/r-base:3.6.2'
-docker push lbmc/r-base:3.6.2
+# docker build src/.docker_modules/r-base/3.6.2 -t 'lbmc/r-base:3.6.2'
+# docker push lbmc/r-base:3.6.2
+docker buildx build --platform linux/amd64,linux/arm64 -t "lbmc/r-base:3.6.2" --push src/.docker_modules/r-base/3.6.2
diff --git a/src/.docker_modules/r-base/4.0.0/docker_init.sh b/src/.docker_modules/r-base/4.0.0/docker_init.sh
index fe24f44d1733d3a3cce32eb516ddcd7d8ae50930..d5010649517234b79e63ad3fc6e32c85b2207f59 100755
--- a/src/.docker_modules/r-base/4.0.0/docker_init.sh
+++ b/src/.docker_modules/r-base/4.0.0/docker_init.sh
@@ -1,4 +1,5 @@
 #!/bin/sh
 docker pull lbmc/r-base:4.0.0
-docker build src/.docker_modules/r-base/4.0.0 -t 'lbmc/r-base:4.0.0'
-docker push lbmc/r-base:4.0.0
+# docker build src/.docker_modules/r-base/4.0.0 -t 'lbmc/r-base:4.0.0'
+# docker push lbmc/r-base:4.0.0
+docker buildx build --platform linux/amd64,linux/arm64 -t "lbmc/r-base:4.0.0" --push src/.docker_modules/r-base/4.0.0
diff --git a/src/.docker_modules/r-base/4.0.2/docker_init.sh b/src/.docker_modules/r-base/4.0.2/docker_init.sh
index d07371190e4360bb9ebca95c0cb16eef8b88e32d..7a5e7304c4d77f4210c0ca1c110d122910cddf4a 100755
--- a/src/.docker_modules/r-base/4.0.2/docker_init.sh
+++ b/src/.docker_modules/r-base/4.0.2/docker_init.sh
@@ -1,4 +1,5 @@
 #!/bin/sh
 docker pull lbmc/r-base:4.0.2
-docker build src/.docker_modules/r-base/4.0.2 -t 'lbmc/r-base:4.0.2'
-docker push lbmc/r-base:4.0.2
+# docker build src/.docker_modules/r-base/4.0.2 -t 'lbmc/r-base:4.0.2'
+# docker push lbmc/r-base:4.0.2
+docker buildx build --platform linux/amd64,linux/arm64 -t "lbmc/r-base:4.0.2" --push src/.docker_modules/r-base/4.0.2
diff --git a/src/.docker_modules/r-base/4.0.3/docker_init.sh b/src/.docker_modules/r-base/4.0.3/docker_init.sh
index 2b4e97048e502f00ec3447bbced8d9f53d529c6c..4679de3a4de54dbb7c8ea3c18082e0a666fa1f14 100755
--- a/src/.docker_modules/r-base/4.0.3/docker_init.sh
+++ b/src/.docker_modules/r-base/4.0.3/docker_init.sh
@@ -1,4 +1,5 @@
 #!/bin/sh
 docker pull lbmc/r-base:4.0.3
-docker build src/.docker_modules/r-base/4.0.3 -t 'lbmc/r-base:4.0.3'
-docker push lbmc/r-base:4.0.3
+# docker build src/.docker_modules/r-base/4.0.3 -t 'lbmc/r-base:4.0.3'
+# docker push lbmc/r-base:4.0.3
+docker buildx build --platform linux/amd64,linux/arm64 -t "lbmc/r-base:4.0.3" --push src/.docker_modules/r-base/4.0.3
diff --git a/src/.docker_modules/rasusa/0.6.0/Dockerfile b/src/.docker_modules/rasusa/0.6.0/Dockerfile
new file mode 100644
index 0000000000000000000000000000000000000000..00ee62f6ab38676c8e97ae103bd0932498238e45
--- /dev/null
+++ b/src/.docker_modules/rasusa/0.6.0/Dockerfile
@@ -0,0 +1,8 @@
+FROM quay.io/mbhall88/rasusa:0.6.0 AS quay_source
+
+FROM alpine:3.13
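+# copy the prebuilt rasusa binary from the upstream image into a minimal Alpine base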
+COPY --from=quay_source /bin/rasusa /bin/
+RUN apk add --update --no-cache bash procps
+
+CMD [ "bash" ]
diff --git a/src/.docker_modules/rasusa/0.6.0/docker_init.sh b/src/.docker_modules/rasusa/0.6.0/docker_init.sh
new file mode 100755
index 0000000000000000000000000000000000000000..4f920288e1281d8c05a9fcd7c86aceff9b0c4d4a
--- /dev/null
+++ b/src/.docker_modules/rasusa/0.6.0/docker_init.sh
@@ -0,0 +1,5 @@
+#!/bin/sh
+docker pull lbmc/rasusa:0.6.0
+# docker build src/.docker_modules/rasusa/0.6.0 -t 'lbmc/rasusa:0.6.0'
+# docker push lbmc/rasusa:0.6.0
+docker buildx build --platform linux/amd64,linux/arm64 -t "lbmc/rasusa:0.6.0" --push src/.docker_modules/rasusa/0.6.0
diff --git a/src/.docker_modules/ribotricer/1.3.2/Dockerfile b/src/.docker_modules/ribotricer/1.3.2/Dockerfile
new file mode 100644
index 0000000000000000000000000000000000000000..451a68a1fdf14d8e517f6aad5504f516993cfa1f
--- /dev/null
+++ b/src/.docker_modules/ribotricer/1.3.2/Dockerfile
@@ -0,0 +1,13 @@
+FROM ubuntu:20.04
+MAINTAINER Emmanuel Labaronne
+
+ENV RIBOTRICER_VERSION=1.3.2
+ENV PACKAGES python3\
+             python3-dev\
+             python3-pip
+
+RUN apt-get update && \
+    apt-get install -y ${PACKAGES} && \
+    apt-get clean
+
+RUN pip3 install ribotricer==${RIBOTRICER_VERSION}
diff --git a/src/.docker_modules/ribotricer/1.3.2/docker_init.sh b/src/.docker_modules/ribotricer/1.3.2/docker_init.sh
new file mode 100644
index 0000000000000000000000000000000000000000..b28b00c5db0df0dba25c789275c99393f6356279
--- /dev/null
+++ b/src/.docker_modules/ribotricer/1.3.2/docker_init.sh
@@ -0,0 +1,5 @@
+#!/bin/sh
+docker pull lbmc/ribotricer:1.3.2
+# docker build src/.docker_modules/ribotricer/1.3.2 -t 'lbmc/ribotricer:1.3.2'
+# docker push lbmc/ribotricer:1.3.2
+docker buildx build --platform linux/amd64,linux/arm64 -t "lbmc/ribotricer:1.3.2" --push src/.docker_modules/ribotricer/1.3.2
diff --git a/src/.docker_modules/rsem/1.3.0/docker_init.sh b/src/.docker_modules/rsem/1.3.0/docker_init.sh
index aadcb4d8ce01353c3510a6a649d640121865bf8d..56cdc1d97084f3c1623bfb80debbe199fdd3bbd6 100755
--- a/src/.docker_modules/rsem/1.3.0/docker_init.sh
+++ b/src/.docker_modules/rsem/1.3.0/docker_init.sh
@@ -1,4 +1,5 @@
 #!/bin/sh
 docker pull lbmc/rsem:1.3.0
-docker build src/.docker_modules/rsem/1.3.0 -t 'lbmc/rsem:1.3.0'
-docker push lbmc/rsem:1.3.0
+# docker build src/.docker_modules/rsem/1.3.0 -t 'lbmc/rsem:1.3.0'
+# docker push lbmc/rsem:1.3.0
+docker buildx build --platform linux/amd64,linux/arm64 -t "lbmc/rsem:1.3.0" --push src/.docker_modules/rsem/1.3.0
diff --git a/src/.docker_modules/sabre/039a55e/docker_init.sh b/src/.docker_modules/sabre/039a55e/docker_init.sh
index fc0f318f612a582b7a56691d27cd5454b2b3370b..b5aefa6b8e8b5f34d668c5cd27ef78d763dc1d1d 100755
--- a/src/.docker_modules/sabre/039a55e/docker_init.sh
+++ b/src/.docker_modules/sabre/039a55e/docker_init.sh
@@ -1,4 +1,5 @@
 #!/bin/sh
 docker pull lbmc/sabre:039a55e
-docker build src/.docker_modules/sabre/039a55e -t 'lbmc/sabre:039a55e'
-docker push lbmc/sabre:039a55e
+# docker build src/.docker_modules/sabre/039a55e -t 'lbmc/sabre:039a55e'
+# docker push lbmc/sabre:039a55e
+docker buildx build --platform linux/amd64,linux/arm64 -t "lbmc/sabre:039a55e" --push src/.docker_modules/sabre/039a55e
diff --git a/src/.docker_modules/salmon/0.8.2/docker_init.sh b/src/.docker_modules/salmon/0.8.2/docker_init.sh
index f44850b49c43ae852f1ef93b88a09f301169f780..924c7d7b719b4ee22c719519bcb71fcb41091ee4 100755
--- a/src/.docker_modules/salmon/0.8.2/docker_init.sh
+++ b/src/.docker_modules/salmon/0.8.2/docker_init.sh
@@ -1,4 +1,5 @@
 #!/bin/sh
 docker pull lbmc/salmon:0.8.2
-docker build src/.docker_modules/salmon/0.8.2 -t 'lbmc/salmon:0.8.2'
-docker push lbmc/salmon:0.8.2
+# docker build src/.docker_modules/salmon/0.8.2 -t 'lbmc/salmon:0.8.2'
+# docker push lbmc/salmon:0.8.2
+docker buildx build --platform linux/amd64,linux/arm64 -t "lbmc/salmon:0.8.2" --push src/.docker_modules/salmon/0.8.2
diff --git a/src/.docker_modules/sambamba/0.6.7/docker_init.sh b/src/.docker_modules/sambamba/0.6.7/docker_init.sh
index ccedf316633c21653bde1312e1ccd5376b95fafe..07c8d05a978daf9704dd1896adff0306f18cd5ee 100755
--- a/src/.docker_modules/sambamba/0.6.7/docker_init.sh
+++ b/src/.docker_modules/sambamba/0.6.7/docker_init.sh
@@ -1,4 +1,5 @@
 #!/bin/sh
 docker pull lbmc/sambamba:0.6.7
-docker build src/.docker_modules/sambamba/0.6.7 -t 'lbmc/sambamba:0.6.7'
-docker push lbmc/sambamba:0.6.7
+# docker build src/.docker_modules/sambamba/0.6.7 -t 'lbmc/sambamba:0.6.7'
+# docker push lbmc/sambamba:0.6.7
+docker buildx build --platform linux/amd64,linux/arm64 -t "lbmc/sambamba:0.6.7" --push src/.docker_modules/sambamba/0.6.7
diff --git a/src/.docker_modules/sambamba/0.6.9/docker_init.sh b/src/.docker_modules/sambamba/0.6.9/docker_init.sh
index 9525b17e688d739198a1421f641c2281b45ade9a..4de76d6617ef8aad1a5bca7748d04b2a747650f1 100755
--- a/src/.docker_modules/sambamba/0.6.9/docker_init.sh
+++ b/src/.docker_modules/sambamba/0.6.9/docker_init.sh
@@ -1,4 +1,5 @@
 #!/bin/sh
 docker pull lbmc/sambamba:0.6.9
-docker build src/.docker_modules/sambamba/0.6.9 -t 'lbmc/sambamba:0.6.9'
-docker push lbmc/sambamba:0.6.9
+# docker build src/.docker_modules/sambamba/0.6.9 -t 'lbmc/sambamba:0.6.9'
+# docker push lbmc/sambamba:0.6.9
+docker buildx build --platform linux/amd64,linux/arm64 -t "lbmc/sambamba:0.6.9" --push src/.docker_modules/sambamba/0.6.9
diff --git a/src/.docker_modules/samblaster/0.1.24/docker_init.sh b/src/.docker_modules/samblaster/0.1.24/docker_init.sh
index 0fec5a0782d348935647212a430f9c1efe7d4367..319de6666bfb794cf9576204889ec25bc2a885ea 100755
--- a/src/.docker_modules/samblaster/0.1.24/docker_init.sh
+++ b/src/.docker_modules/samblaster/0.1.24/docker_init.sh
@@ -1,4 +1,5 @@
 #!/bin/sh
 docker pull lbmc/samblaster:0.1.24
-docker build src/.docker_modules/samblaster/0.1.24 -t 'lbmc/samblaster:0.1.24'
-docker push lbmc/samblaster:0.1.24
+# docker build src/.docker_modules/samblaster/0.1.24 -t 'lbmc/samblaster:0.1.24'
+# docker push lbmc/samblaster:0.1.24
+docker buildx build --platform linux/amd64,linux/arm64 -t "lbmc/samblaster:0.1.24" --push src/.docker_modules/samblaster/0.1.24
diff --git a/src/.docker_modules/samtools/1.11/docker_init.sh b/src/.docker_modules/samtools/1.11/docker_init.sh
index e5cf9c2896e0679b9124bdb4e38f852184f993f6..273b8472ccb75b1bf1f308fd79f468b72a210797 100755
--- a/src/.docker_modules/samtools/1.11/docker_init.sh
+++ b/src/.docker_modules/samtools/1.11/docker_init.sh
@@ -1,4 +1,5 @@
 #!/bin/sh
 docker pull lbmc/samtools:1.11
-docker build src/.docker_modules/samtools/1.11 -t 'lbmc/samtools:1.11'
-docker push lbmc/samtools:1.11
+# docker build src/.docker_modules/samtools/1.11 -t 'lbmc/samtools:1.11'
+# docker push lbmc/samtools:1.11
+docker buildx build --platform linux/amd64,linux/arm64 -t "lbmc/samtools:1.11" --push src/.docker_modules/samtools/1.11
diff --git a/src/.docker_modules/samtools/1.7/docker_init.sh b/src/.docker_modules/samtools/1.7/docker_init.sh
index 83c510a9e6fe22e1c28eac9bed5e44d1c707da15..5839d04f8ce5da781ec4f0772b712e78ca47eea7 100755
--- a/src/.docker_modules/samtools/1.7/docker_init.sh
+++ b/src/.docker_modules/samtools/1.7/docker_init.sh
@@ -1,4 +1,5 @@
 #!/bin/sh
 docker pull lbmc/samtools:1.7
-docker build src/.docker_modules/samtools/1.7 -t 'lbmc/samtools:1.7'
-docker push lbmc/samtools:1.7
+# docker build src/.docker_modules/samtools/1.7 -t 'lbmc/samtools:1.7'
+# docker push lbmc/samtools:1.7
+docker buildx build --platform linux/amd64,linux/arm64 -t "lbmc/samtools:1.7" --push src/.docker_modules/samtools/1.7
diff --git a/src/.docker_modules/sratoolkit/2.8.2/docker_init.sh b/src/.docker_modules/sratoolkit/2.8.2/docker_init.sh
index ce040fcc1b3ed4f7041d01421e7a2031d983ef6f..d6207d7eca02771e2d9569e7ce43829908cc473f 100755
--- a/src/.docker_modules/sratoolkit/2.8.2/docker_init.sh
+++ b/src/.docker_modules/sratoolkit/2.8.2/docker_init.sh
@@ -1,4 +1,5 @@
 #!/bin/sh
 docker pull lbmc/sratoolkit:2.8.2
-docker build src/.docker_modules/sratoolkit/2.8.2 -t 'lbmc/sratoolkit:2.8.2'
-docker push lbmc/sratoolkit:2.8.2
+# docker build src/.docker_modules/sratoolkit/2.8.2 -t 'lbmc/sratoolkit:2.8.2'
+# docker push lbmc/sratoolkit:2.8.2
+docker buildx build --platform linux/amd64,linux/arm64 -t "lbmc/sratoolkit:2.8.2" --push src/.docker_modules/sratoolkit/2.8.2
diff --git a/src/.docker_modules/star/2.5.3/docker_init.sh b/src/.docker_modules/star/2.5.3/docker_init.sh
index 50beecfcc7fcb7a9b1943a418651cafb55851495..791f2b21442a6deefb0538a26b3f23f43b841a40 100755
--- a/src/.docker_modules/star/2.5.3/docker_init.sh
+++ b/src/.docker_modules/star/2.5.3/docker_init.sh
@@ -1,4 +1,5 @@
 #!/bin/sh
-docker pull lbmc/star:2.7.3a
-docker build src/.docker_modules/star/2.7.3a/ -t 'lbmc/star:2.7.3a'
-docker push lbmc/star:2.7.3a
+docker pull lbmc/star:2.5.3
+# docker build src/.docker_modules/star/2.5.3/ -t 'lbmc/star:2.5.3'
+# docker push lbmc/star:2.5.3
+docker buildx build --platform linux/amd64,linux/arm64 -t "lbmc/star:2.5.3" --push src/.docker_modules/star/2.5.3
diff --git a/src/.docker_modules/star/2.7.3a/docker_init.sh b/src/.docker_modules/star/2.7.3a/docker_init.sh
index 50beecfcc7fcb7a9b1943a418651cafb55851495..791f2b21442a6deefb0538a26b3f23f43b841a40 100755
--- a/src/.docker_modules/star/2.7.3a/docker_init.sh
+++ b/src/.docker_modules/star/2.7.3a/docker_init.sh
@@ -1,4 +1,5 @@
 #!/bin/sh
 docker pull lbmc/star:2.7.3a
-docker build src/.docker_modules/star/2.7.3a/ -t 'lbmc/star:2.7.3a'
-docker push lbmc/star:2.7.3a
+# docker build src/.docker_modules/star/2.7.3a/ -t 'lbmc/star:2.7.3a'
+# docker push lbmc/star:2.7.3a
+docker buildx build --platform linux/amd64,linux/arm64 -t "lbmc/star:2.7.3a" --push src/.docker_modules/star/2.7.3a
diff --git a/src/.docker_modules/subread/1.6.4/docker_init.sh b/src/.docker_modules/subread/1.6.4/docker_init.sh
index 0dd51ca0dbc45ab1b2c237c1a43c670f14dd184a..5d7740e939fb1aaae30de4bf836d9422f52430d0 100755
--- a/src/.docker_modules/subread/1.6.4/docker_init.sh
+++ b/src/.docker_modules/subread/1.6.4/docker_init.sh
@@ -1,4 +1,5 @@
 #!/bin/sh
 docker pull lbmc/subread:1.6.4
-docker build src/.docker_modules/subread/1.6.4 -t 'lbmc/subread:1.6.4'
-docker push lbmc/subread:1.6.4
+# docker build src/.docker_modules/subread/1.6.4 -t 'lbmc/subread:1.6.4'
+# docker push lbmc/subread:1.6.4
+docker buildx build --platform linux/amd64,linux/arm64 -t "lbmc/subread:1.6.4" --push src/.docker_modules/subread/1.6.4
diff --git a/src/.docker_modules/tophat/2.1.1/Dockerfile b/src/.docker_modules/tophat/2.1.1/Dockerfile
index 82d3c5d634b59ede9281f56b00f7cc6be1f316e6..3c34d4641239ebfa7cbb65a716e436f2e0c099ee 100644
--- a/src/.docker_modules/tophat/2.1.1/Dockerfile
+++ b/src/.docker_modules/tophat/2.1.1/Dockerfile
@@ -2,8 +2,10 @@ FROM ubuntu:18.04
 MAINTAINER Laurent Modolo
 
 ENV TOPHAT_VERSION=2.1.1
-ENV PACKAGES tophat=${BOWTIE2_VERSION}*
+ENV PACKAGES tophat=${TOPHAT_VERSION}*\
+             bowtie=1.2.2*\
+             libsys-hostname-long-perl
 
 RUN apt-get update && \
-    apt-get install -y --no-install-recommends ${PACKAGES} && \
+    apt-get install -y --no-install-recommends ${PACKAGES} && \
     apt-get clean
diff --git a/src/.docker_modules/tophat/2.1.1/docker_init.sh b/src/.docker_modules/tophat/2.1.1/docker_init.sh
index 67151131596b2c2dda5e5cc7beadc69dcd64aa6c..f14f6b0bc911c97eedf3af71393df55126403fbe 100755
--- a/src/.docker_modules/tophat/2.1.1/docker_init.sh
+++ b/src/.docker_modules/tophat/2.1.1/docker_init.sh
@@ -1,4 +1,5 @@
 #!/bin/sh
 docker pull lbmc/tophat:2.1.1
-docker build src/.docker_modules/tophat/2.1.1 -t 'lbmc/tophat:2.1.1'
-docker push lbmc/tophat:2.1.1
+# docker build src/.docker_modules/tophat/2.1.1 -t 'lbmc/tophat:2.1.1'
+# docker push lbmc/tophat:2.1.1
+docker buildx build --platform linux/amd64,linux/arm64 -t "lbmc/tophat:2.1.1" --push src/.docker_modules/tophat/2.1.1
diff --git a/src/.docker_modules/trimmomatic/0.36/docker_init.sh b/src/.docker_modules/trimmomatic/0.36/docker_init.sh
index f054581bde67aff212a04284c5f463f8a6e4ab75..34477452f757308c8cc65862b10310ae2076c5fc 100755
--- a/src/.docker_modules/trimmomatic/0.36/docker_init.sh
+++ b/src/.docker_modules/trimmomatic/0.36/docker_init.sh
@@ -1,4 +1,5 @@
 #!/bin/sh
 docker pull lbmc/trimmomatic:0.36
-docker build src/.docker_modules/trimmomatic/0.36 -t 'lbmc/trimmomatic:0.36'
-docker push lbmc/trimmomatic:0.36
+# docker build src/.docker_modules/trimmomatic/0.36 -t 'lbmc/trimmomatic:0.36'
+# docker push lbmc/trimmomatic:0.36
+docker buildx build --platform linux/amd64,linux/arm64 -t "lbmc/trimmomatic:0.36" --push src/.docker_modules/trimmomatic/0.36
diff --git a/src/.docker_modules/ucsc/375/docker_init.sh b/src/.docker_modules/ucsc/375/docker_init.sh
index f0cc90565cc1f5583eb0c4303976300f695500e0..5f3b3913b2a62e5151f1260d9662749268c3c16f 100755
--- a/src/.docker_modules/ucsc/375/docker_init.sh
+++ b/src/.docker_modules/ucsc/375/docker_init.sh
@@ -1,4 +1,5 @@
 #!/bin/sh
 docker pull lbmc/ucsc:375
-docker build src/.docker_modules/ucsc/375/ -t 'lbmc/ucsc:375'
-docker push lbmc/ucsc:375
+# docker build src/.docker_modules/ucsc/375/ -t 'lbmc/ucsc:375'
+# docker push lbmc/ucsc:375
+docker buildx build --platform linux/amd64,linux/arm64 -t "lbmc/ucsc:375" --push src/.docker_modules/ucsc/375
diff --git a/src/.docker_modules/ucsc/400/docker_init.sh b/src/.docker_modules/ucsc/400/docker_init.sh
index 83c2161652164d7ccecaf82b4ca25babde445599..df1e3e58fbb36aa4cce0c1ecec4e550827a200ed 100755
--- a/src/.docker_modules/ucsc/400/docker_init.sh
+++ b/src/.docker_modules/ucsc/400/docker_init.sh
@@ -1,4 +1,5 @@
 #!/bin/sh
 docker pull lbmc/ucsc:400
-docker build src/.docker_modules/ucsc/400/ -t 'lbmc/ucsc:400'
-docker push lbmc/ucsc:400
+# docker build src/.docker_modules/ucsc/400/ -t 'lbmc/ucsc:400'
+# docker push lbmc/ucsc:400
+docker buildx build --platform linux/amd64,linux/arm64 -t "lbmc/ucsc:400" --push src/.docker_modules/ucsc/400
diff --git a/src/.docker_modules/ucsc/407/Dockerfile b/src/.docker_modules/ucsc/407/Dockerfile
index 1499bdb1d58e48a64ee1a7dee550444527b7c82e..76eb05edcb7ac5fc84ef82251b037f9882e188d8 100644
--- a/src/.docker_modules/ucsc/407/Dockerfile
+++ b/src/.docker_modules/ucsc/407/Dockerfile
@@ -25,3 +25,7 @@ cd .. &&\
 mv userApps/bin/* /usr/bin/ &&\
 rm -R userApps.v${UCSC_VERSION}.src.tgz &&\
 rm -R userApps
+
+COPY bedgraph_to_wig.pl /usr/bin/
+COPY gtf2bed.pl /usr/bin/
+RUN chmod +x /usr/bin/*.pl
\ No newline at end of file
diff --git a/src/.docker_modules/ucsc/407/bedgraph_to_wig.pl b/src/.docker_modules/ucsc/407/bedgraph_to_wig.pl
new file mode 100644
index 0000000000000000000000000000000000000000..19fe222598db78fcf6a2e0c4fdb1ad270f0549da
--- /dev/null
+++ b/src/.docker_modules/ucsc/407/bedgraph_to_wig.pl
@@ -0,0 +1,134 @@
+#!/usr/bin/perl
+
+# Description: This script converts bedGraph to fixedStep wig format with defined step size. Input file may be compressed as .gz.
+# Coordinates in bedGraph input are assumed to be 0-based (http://genome.ucsc.edu/goldenPath/help/bedgraph.html).
+# Coordinates in wig output are 1-based (http://genome.ucsc.edu/goldenPath/help/wiggle.html). 
+
+# Usage: bedgraph_to_wig.pl --bedgraph input.bedgraph --wig output.wig --step step_size [--compact]
+# --bedgraph : specify input file in bedGraph format.
+# --wig : specify output file in fixedStep format.
+# --step : specify step size. Note that span is set to be identical to step.
+# --compact : if selected, steps with a value of 0 are not printed. This saves space but is not allowed in the original wig format, so some scripts that take wig files as input may not accept it.
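+# Example (hypothetical file names): bedgraph_to_wig.pl --bedgraph sample.bedgraph.gz --wig sample.wig --step 10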
+
+# Credits: This script was written by Sebastien Vigneau (sebastien.vigneau@gmail.com) in Alexander Gimelbrant lab (Dana-Farber Cancer Institute). The inspiration for this script comes from Dave Tang's own version (http://davetang.org/wiki/tiki-index.php?page=wig).
+
+
+use strict;
+use warnings;
+use Getopt::Long;
+use List::Util qw[min max];
+
+my $usage = "Usage: $0 --bedgraph <infile.bedgraph> --wig <outfile.wig> --step <step_size> [--compact]\n";
+
+# Parse command line arguments
+
+my $infile; # bedGraph input file name
+my $outfile; # wig output file name
+my $step; # wig step size
+my $compact; # if selected, steps with value equal to 0 will not be printed
+
+GetOptions (
+  "bedgraph=s" => \$infile,
+  "wig=s" => \$outfile,
+  "step=i" => \$step,
+  "compact" => \$compact,
+) or die ("Error in command line arguments!\n$usage\n");
+
+# Open input file. If it is gzip-compressed, decompress it on the fly.
+
+if ($infile =~ /\.gz$/){
+  open(IN,'-|',"gunzip -c $infile") || die "Could not open $infile: $!\n";
+} else {
+  open(IN,'<',$infile) || die "Could not open $infile: $!\n";
+}
+
+# Open output file.
+
+open(OUT,'>',$outfile) || die "Could not open $outfile: $!\n";
+
+
+# bedGraph to wig conversion starts here.
+
+# Print main header.
+
+print OUT "track type=wiggle_0 name=\"$infile\" description=\"$infile\" visibility=full\n";
+
+# Initialize variables.
+
+my $cur_chr = 0; # chromosome being processed
+my $cur_pos = 0; # position of current step
+my $next_pos = $cur_pos + $step; # position of next step
+my $exp_pos = 0; # expected position if no step was skipped; used with --compact option
+my $cur_val = 0; # value for current step
+
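+# Each step accumulates value * overlap length; when a step is flushed,
+# print_wig_line writes this sum divided by the step size (the average value).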
+while (<IN>) {
+
+  chomp;
+
+  # Skip comment lines
+  next if (/^track/);
+  next if (/^#/);
+
+  # Parse relevant information in current line 
+  # e.g: chr1 3000400 3000500 2
+  my ($chr, $start, $end, $val) = split(/\t/);
+  
+  # Print header for new chromosome and initialize variables.
+  if ($chr ne $cur_chr) {
+    $cur_chr = $chr;
+    $cur_pos = 0;
+    $next_pos = $cur_pos + $step;
+    $cur_val = 0;
+    if (!$compact) { # If --compact option selected, header will be printed immediately before non-null value.
+      print OUT "fixedStep chrom=$chr start=", $cur_pos + 1, " step=$step span=$step\n";
+      # +1 was added to convert from 0-based to 1-based coordinates.
+    }
+  }
+  
+  # Print values when gap in bedGraph file is greater than step.
+  while ($start >= $next_pos) {
+    print_wig_line($cur_chr, \$cur_pos, \$next_pos, \$exp_pos, \$cur_val, $chr, $start, $end, $val, $step);
+  }
+  
+  # Print values when step overlaps with bedGraph interval and bedGraph interval is longer than step.
+  while ($end >= $next_pos) {
+    $cur_val += $val * ($next_pos - max($cur_pos, $start));
+    print_wig_line($cur_chr, \$cur_pos, \$next_pos, \$exp_pos, \$cur_val, $chr, $start, $end, $val, $step);
+  }
+
+  # Update value when end of bedGraph interval is contained within step.
+  if ($end < $next_pos) {
+    $cur_val += $val * ($end - max($cur_pos, $start));
+  }
+
+}
+
+close(IN);
+close(OUT);
+
+exit(0);
+
+
+# Print or skip line in wig file depending on --compact option and value; update variables for next step.
+sub print_wig_line {
+  my ($cur_chr, $cur_pos, $next_pos, $exp_pos, $cur_val, $chr, $start, $end, $val, $step) = @_;
+  if (!$compact) { # Always print if --compact option was not selected.
+    my $cur_ave_val = $$cur_val / $step;
+    print OUT "$cur_ave_val\n";
+  } elsif ($$cur_val != 0) { # Skips printing if --compact option selected and value is null.
+    if ($$cur_pos == 0 || $$cur_pos != $$exp_pos) {
+      # Adds header if first step in chromosome, or if previous step had null value and was skipped in print out.
+      print OUT "fixedStep chrom=$chr start=", $$cur_pos + 1, " step=$step span=$step\n";
+      # +1 was added to convert from 0-based to 1-based coordinates.
+    }
+    my $cur_ave_val = $$cur_val / $step;
+    print OUT "$cur_ave_val\n";
+    $$exp_pos = $$next_pos;
+  }
+  $$cur_pos = $$next_pos;
+  $$next_pos = $$cur_pos + $step;
+  $$cur_val = 0;
+}
diff --git a/src/.docker_modules/ucsc/407/docker_init.sh b/src/.docker_modules/ucsc/407/docker_init.sh
index 1f092a8f48aa56e22b30716949337871950795a2..a68b313b07b5a55ed904be8de364493f09e26a36 100755
--- a/src/.docker_modules/ucsc/407/docker_init.sh
+++ b/src/.docker_modules/ucsc/407/docker_init.sh
@@ -1,4 +1,5 @@
 #!/bin/sh
 docker pull lbmc/ucsc:407
-docker build src/.docker_modules/ucsc/407/ -t 'lbmc/ucsc:407'
-docker push lbmc/ucsc:407
+# docker build src/.docker_modules/ucsc/407/ -t 'lbmc/ucsc:407'
+# docker push lbmc/ucsc:407
+docker buildx build --platform linux/amd64,linux/arm64 -t "lbmc/ucsc:407" --push src/.docker_modules/ucsc/407
diff --git a/src/.docker_modules/ucsc/407/gtf2bed.pl b/src/.docker_modules/ucsc/407/gtf2bed.pl
new file mode 100644
index 0000000000000000000000000000000000000000..c6ab16407e0ac5ac47d5de6a9fe5dcfd6104a360
--- /dev/null
+++ b/src/.docker_modules/ucsc/407/gtf2bed.pl
@@ -0,0 +1,124 @@
+#!/usr/bin/perl
+
+# Copyright (c) 2011 Erik Aronesty (erik@q32.com)
+#
+# Permission is hereby granted, free of charge, to any person obtaining a copy
+# of this software and associated documentation files (the "Software"), to deal
+# in the Software without restriction, including without limitation the rights
+# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+# copies of the Software, and to permit persons to whom the Software is
+# furnished to do so, subject to the following conditions:
+#
+# The above copyright notice and this permission notice shall be included in
+# all copies or substantial portions of the Software.
+#
+# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
+# THE SOFTWARE.
+#
+# ALSO, IT WOULD BE NICE IF YOU LET ME KNOW YOU USED IT.
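+# Usage (example file names): gtf2bed.pl [-x] annotation.gtf > annotation.bed  ("-x" appends the gene name as an extra column)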
+
+use Getopt::Long;
+
+my $extended;
+GetOptions("x"=>\$extended);
+
+$in = shift @ARGV;
+
+my $in_cmd =($in =~ /\.gz$/ ? "gunzip -c $in|" : $in =~ /\.zip$/ ? "unzip -p $in|" : "$in") || die "Can't open $in: $!\n";
+open IN, $in_cmd;
+
+while (<IN>) {
+	$gff = 2 if /^##gff-version 2/;
+	$gff = 3 if /^##gff-version 3/;
+	next if /^#/ && $gff;
+
+	s/\s+$//;
+	# 0-chr 1-src 2-feat 3-beg 4-end 5-scor 6-dir 7-fram 8-attr
+	my @f = split /\t/;
+	if ($gff) {
+        # most ver 2's stick gene names in the id field
+		($id) = $f[8]=~ /\bID="([^"]+)"/;
+        # most ver 3's stick unquoted names in the name field
+		($id) = $f[8]=~ /\bName=([^";]+)/ if !$id && $gff == 3;
+	} else {
+		($id) = $f[8]=~ /transcript_id "([^"]+)"/;
+	}
+
+	next unless $id && $f[0];
+
+	if ($f[2] eq 'exon') {
+		die "no position at exon on line $." if ! $f[3];
+        # gff3 puts :\d in exons sometimes
+        $id =~ s/:\d+$// if $gff == 3;
+		push @{$exons{$id}}, \@f;
+		# save lowest start
+		$trans{$id} = \@f if !$trans{$id};
+	} elsif ($f[2] eq 'start_codon') {
+		#optional, output codon start/stop as "thick" region in bed
+		$sc{$id}->[0] = $f[3];
+	} elsif ($f[2] eq 'stop_codon') {
+		$sc{$id}->[1] = $f[4];
+	} elsif ($f[2] eq 'miRNA' ) {
+		$trans{$id} = \@f if !$trans{$id};
+		push @{$exons{$id}}, \@f;
+	}
+}
+
+for $id (
+	# sort by chr then pos
+	sort {
+		$trans{$a}->[0] eq $trans{$b}->[0] ?
+		$trans{$a}->[3] <=> $trans{$b}->[3] :
+		$trans{$a}->[0] cmp $trans{$b}->[0]
+	} (keys(%trans)) ) {
+		my ($chr, undef, undef, undef, undef, undef, $dir, undef, $attr, undef, $cds, $cde) = @{$trans{$id}};
+        ($cds, $cde) = @{$sc{$id}} if $sc{$id};
+
+		# sort by pos
+		my @ex = sort {
+			$a->[3] <=> $b->[3]
+		} @{$exons{$id}};
+
+		my $beg = $ex[0][3];
+		my $end = $ex[-1][4];
+		
+		if ($dir eq '-') {
+			# swap
+			$tmp=$cds;
+			$cds=$cde;
+			$cde=$tmp;
+			$cds -= 2 if $cds;
+			$cde += 2 if $cde;
+		}
+
+		# not specified, just use exons
+		$cds = $beg if !$cds;
+		$cde = $end if !$cde;
+
+		# adjust start for bed
+		--$beg; --$cds;
+	
+		my $exn = @ex;												# exon count
+		my $exst = join ",", map {$_->[3]-$beg-1} @ex;				# exon start
+		my $exsz = join ",", map {$_->[4]-$_->[3]+1} @ex;			# exon size
+
+        my $gene_id;
+        my $extend = "";
+        if ($extended) {
+    	    ($gene_id) = $attr =~ /gene_name "([^"]+)"/;
+    	    ($gene_id) = $attr =~ /gene_id "([^"]+)"/ unless $gene_id;
+            $extend="\t$gene_id";
+        }
+		# added an extra comma to make it look exactly like ucsc's beds
+		print "$chr\t$beg\t$end\t$id\t0\t$dir\t$cds\t$cde\t0\t$exn\t$exsz,\t$exst,$extend\n";
+}
+
+
+close IN;
+
diff --git a/src/.docker_modules/umi_tools/0.5.4/docker_init.sh b/src/.docker_modules/umi_tools/0.5.4/docker_init.sh
index 200e9c066fe98de8262a48eea0f615b064ff90a4..4c669f326f8092d92f372b65f43df77d8e14fc4e 100755
--- a/src/.docker_modules/umi_tools/0.5.4/docker_init.sh
+++ b/src/.docker_modules/umi_tools/0.5.4/docker_init.sh
@@ -1,4 +1,5 @@
 #!/bin/sh
-docker pull lbmc/umi_tools:1.0.0
-docker build src/.docker_modules/umi_tools/1.0.0/ -t 'lbmc/umi_tools:1.0.0'
-docker push lbmc/umi_tools:1.0.0
+docker pull lbmc/umi_tools:0.5.4
+# docker build src/.docker_modules/umi_tools/0.5.4/ -t 'lbmc/umi_tools:0.5.4'
+# docker push lbmc/umi_tools:0.5.4
+docker buildx build --platform linux/amd64,linux/arm64 -t "lbmc/umi_tools:0.5.4" --push src/.docker_modules/umi_tools/0.5.4
diff --git a/src/.docker_modules/umi_tools/1.0.0/docker_init.sh b/src/.docker_modules/umi_tools/1.0.0/docker_init.sh
index 200e9c066fe98de8262a48eea0f615b064ff90a4..4c669f326f8092d92f372b65f43df77d8e14fc4e 100755
--- a/src/.docker_modules/umi_tools/1.0.0/docker_init.sh
+++ b/src/.docker_modules/umi_tools/1.0.0/docker_init.sh
@@ -1,4 +1,5 @@
 #!/bin/sh
 docker pull lbmc/umi_tools:1.0.0
-docker build src/.docker_modules/umi_tools/1.0.0/ -t 'lbmc/umi_tools:1.0.0'
-docker push lbmc/umi_tools:1.0.0
+# docker build src/.docker_modules/umi_tools/1.0.0/ -t 'lbmc/umi_tools:1.0.0'
+# docker push lbmc/umi_tools:1.0.0
+docker buildx build --platform linux/amd64,linux/arm64 -t "lbmc/umi_tools:1.0.0" --push src/.docker_modules/umi_tools/1.0.0
diff --git a/src/.docker_modules/urqt/d62c1f8/docker_init.sh b/src/.docker_modules/urqt/d62c1f8/docker_init.sh
index bb3fb4f882ec4f93e4cec643e035fb7d2d7a4963..5bc22f778169ec31e6b391d1d6cdfdc6f2d99f04 100755
--- a/src/.docker_modules/urqt/d62c1f8/docker_init.sh
+++ b/src/.docker_modules/urqt/d62c1f8/docker_init.sh
@@ -1,4 +1,5 @@
 #!/bin/sh
 docker pull lbmc/urqt:d62c1f8
-docker build src/.docker_modules/urqt/d62c1f8 -t 'lbmc/urqt:d62c1f8'
-docker push lbmc/urqt:d62c1f8
+# docker build src/.docker_modules/urqt/d62c1f8 -t 'lbmc/urqt:d62c1f8'
+# docker push lbmc/urqt:d62c1f8
+docker buildx build --platform linux/amd64,linux/arm64 -t "lbmc/urqt:d62c1f8" --push src/.docker_modules/urqt/d62c1f8
diff --git a/src/.singularity_in2p3 b/src/.singularity_in2p3
new file mode 120000
index 0000000000000000000000000000000000000000..7f86c0543c5eb7792fe4d87ecbf92a66d0ed03e2
--- /dev/null
+++ b/src/.singularity_in2p3
@@ -0,0 +1 @@
+/sps/lbmc/common/singularity/
\ No newline at end of file
diff --git a/src/.singularity_in2p3/lbmc-bcftools-1.7.img b/src/.singularity_in2p3/lbmc-bcftools-1.7.img
deleted file mode 120000
index 3f3b72a6df7f1dd3d5fc18d4b62c804d0acfc2e4..0000000000000000000000000000000000000000
--- a/src/.singularity_in2p3/lbmc-bcftools-1.7.img
+++ /dev/null
@@ -1 +0,0 @@
-/sps/lbmc/common/singularity/lbmc-bcftools-1.7.img
\ No newline at end of file
diff --git a/src/.singularity_in2p3/lbmc-bedtools-2.25.0.img b/src/.singularity_in2p3/lbmc-bedtools-2.25.0.img
deleted file mode 120000
index 2003d1149ce5a96b075e382e8130c9ff2b1cc20a..0000000000000000000000000000000000000000
--- a/src/.singularity_in2p3/lbmc-bedtools-2.25.0.img
+++ /dev/null
@@ -1 +0,0 @@
-/sps/lbmc/common/singularity/lbmc-bedtools-2.25.0.img
\ No newline at end of file
diff --git a/src/.singularity_in2p3/lbmc-bioawk-1.0.img b/src/.singularity_in2p3/lbmc-bioawk-1.0.img
deleted file mode 120000
index 56d68b2822fc0a378460cacda068ef7a3a9ebbcd..0000000000000000000000000000000000000000
--- a/src/.singularity_in2p3/lbmc-bioawk-1.0.img
+++ /dev/null
@@ -1 +0,0 @@
-/sps/lbmc/common/singularity/lbmc-bioawk-1.0.img
\ No newline at end of file
diff --git a/src/.singularity_in2p3/lbmc-bowtie-1.2.2.img b/src/.singularity_in2p3/lbmc-bowtie-1.2.2.img
deleted file mode 120000
index 0977cfa43f0457f910bffef7e80fecf7c2cd3b57..0000000000000000000000000000000000000000
--- a/src/.singularity_in2p3/lbmc-bowtie-1.2.2.img
+++ /dev/null
@@ -1 +0,0 @@
-/sps/lbmc/common/singularity/lbmc-bowtie-1.2.2.img
\ No newline at end of file
diff --git a/src/.singularity_in2p3/lbmc-bowtie2-2.3.4.1.img b/src/.singularity_in2p3/lbmc-bowtie2-2.3.4.1.img
deleted file mode 120000
index da3b13a066c217995d5bf467fcd4f4be9b01ea2f..0000000000000000000000000000000000000000
--- a/src/.singularity_in2p3/lbmc-bowtie2-2.3.4.1.img
+++ /dev/null
@@ -1 +0,0 @@
-/sps/lbmc/common/singularity/lbmc-bowtie2-2.3.4.1.img
\ No newline at end of file
diff --git a/src/.singularity_in2p3/lbmc-bwa-0.7.17.img b/src/.singularity_in2p3/lbmc-bwa-0.7.17.img
deleted file mode 120000
index d6c0b9d74aa087e13a6977eba593f6260a7be96e..0000000000000000000000000000000000000000
--- a/src/.singularity_in2p3/lbmc-bwa-0.7.17.img
+++ /dev/null
@@ -1 +0,0 @@
-/sps/lbmc/common/singularity/lbmc-bwa-0.7.17.img
\ No newline at end of file
diff --git a/src/.singularity_in2p3/lbmc-canu-1.6.img b/src/.singularity_in2p3/lbmc-canu-1.6.img
deleted file mode 120000
index 202c4a121b3feae32fac6291a6f1557b9c60cf4a..0000000000000000000000000000000000000000
--- a/src/.singularity_in2p3/lbmc-canu-1.6.img
+++ /dev/null
@@ -1 +0,0 @@
-/sps/lbmc/common/singularity/lbmc-canu-1.6.img
\ No newline at end of file
diff --git a/src/.singularity_in2p3/lbmc-cutadapt-1.14.img b/src/.singularity_in2p3/lbmc-cutadapt-1.14.img
deleted file mode 120000
index 5a40d3d01cfdcd1d788863ae2cc5a6d6d8dddd3b..0000000000000000000000000000000000000000
--- a/src/.singularity_in2p3/lbmc-cutadapt-1.14.img
+++ /dev/null
@@ -1 +0,0 @@
-/sps/lbmc/common/singularity/lbmc-cutadapt-1.14.img
\ No newline at end of file
diff --git a/src/.singularity_in2p3/lbmc-cutadapt-1.15.img b/src/.singularity_in2p3/lbmc-cutadapt-1.15.img
deleted file mode 120000
index b7fe369985b74675b03f4cd593655e333b2c2e65..0000000000000000000000000000000000000000
--- a/src/.singularity_in2p3/lbmc-cutadapt-1.15.img
+++ /dev/null
@@ -1 +0,0 @@
-/sps/lbmc/common/singularity/lbmc-cutadapt-1.15.img
\ No newline at end of file
diff --git a/src/.singularity_in2p3/lbmc-cutadapt-2.1.img b/src/.singularity_in2p3/lbmc-cutadapt-2.1.img
deleted file mode 120000
index bfe8944319c632a29654f2e1177d6e6f4b31c1d6..0000000000000000000000000000000000000000
--- a/src/.singularity_in2p3/lbmc-cutadapt-2.1.img
+++ /dev/null
@@ -1 +0,0 @@
-/sps/lbmc/common/singularity/lbmc-cutadapt-2.1.img
\ No newline at end of file
diff --git a/src/.singularity_in2p3/lbmc-deeptools-3.0.2.img b/src/.singularity_in2p3/lbmc-deeptools-3.0.2.img
deleted file mode 120000
index 20fac04f7d79a4e30dcae9320901e3afca92fe3f..0000000000000000000000000000000000000000
--- a/src/.singularity_in2p3/lbmc-deeptools-3.0.2.img
+++ /dev/null
@@ -1 +0,0 @@
-/sps/lbmc/common/singularity/lbmc-deeptools-3.0.2.img
\ No newline at end of file
diff --git a/src/.singularity_in2p3/lbmc-deeptools-3.1.1.img b/src/.singularity_in2p3/lbmc-deeptools-3.1.1.img
deleted file mode 120000
index cd9b1e8d5b711bb2c8166888b689c99595845bc9..0000000000000000000000000000000000000000
--- a/src/.singularity_in2p3/lbmc-deeptools-3.1.1.img
+++ /dev/null
@@ -1 +0,0 @@
-/sps/lbmc/common/singularity/lbmc-deeptools-3.1.1.img
\ No newline at end of file
diff --git a/src/.singularity_in2p3/lbmc-fastp-0.19.7.img b/src/.singularity_in2p3/lbmc-fastp-0.19.7.img
deleted file mode 120000
index aec2258a1404d0e0bb35d93335386e45a9f65a9b..0000000000000000000000000000000000000000
--- a/src/.singularity_in2p3/lbmc-fastp-0.19.7.img
+++ /dev/null
@@ -1 +0,0 @@
-/sps/lbmc/common/singularity/lbmc-fastp-0.19.7.img
\ No newline at end of file
diff --git a/src/.singularity_in2p3/lbmc-fastqc-0.11.5.img b/src/.singularity_in2p3/lbmc-fastqc-0.11.5.img
deleted file mode 120000
index 939ac3cf2c1ff31e1d7900b5035c13f41e24c39a..0000000000000000000000000000000000000000
--- a/src/.singularity_in2p3/lbmc-fastqc-0.11.5.img
+++ /dev/null
@@ -1 +0,0 @@
-/sps/lbmc/common/singularity/lbmc-fastqc-0.11.5.img
\ No newline at end of file
diff --git a/src/.singularity_in2p3/lbmc-file_handle-0.1.1.img b/src/.singularity_in2p3/lbmc-file_handle-0.1.1.img
deleted file mode 120000
index 50396d25d4c38864c7df09053aa34bc75c1c2167..0000000000000000000000000000000000000000
--- a/src/.singularity_in2p3/lbmc-file_handle-0.1.1.img
+++ /dev/null
@@ -1 +0,0 @@
-/sps/lbmc/common/singularity/lbmc-file_handle-0.1.1.img
\ No newline at end of file
diff --git a/src/.singularity_in2p3/lbmc-gatk-4.0.8.1.img b/src/.singularity_in2p3/lbmc-gatk-4.0.8.1.img
deleted file mode 120000
index f398e91fe0edfc88f10a30aaa8d3de8d0be7621c..0000000000000000000000000000000000000000
--- a/src/.singularity_in2p3/lbmc-gatk-4.0.8.1.img
+++ /dev/null
@@ -1 +0,0 @@
-/sps/lbmc/common/singularity/lbmc-gatk-4.0.8.1.img
\ No newline at end of file
diff --git a/src/.singularity_in2p3/lbmc-hisat2-2.0.0.img b/src/.singularity_in2p3/lbmc-hisat2-2.0.0.img
deleted file mode 120000
index 24c7eb7300fd7533b288c68743885d68ccad068c..0000000000000000000000000000000000000000
--- a/src/.singularity_in2p3/lbmc-hisat2-2.0.0.img
+++ /dev/null
@@ -1 +0,0 @@
-/sps/lbmc/common/singularity/lbmc-hisat2-2.0.0.img
\ No newline at end of file
diff --git a/src/.singularity_in2p3/lbmc-hisat2-2.1.0.img b/src/.singularity_in2p3/lbmc-hisat2-2.1.0.img
deleted file mode 120000
index b210e78b748a940e0e6682be033fb114c38b3862..0000000000000000000000000000000000000000
--- a/src/.singularity_in2p3/lbmc-hisat2-2.1.0.img
+++ /dev/null
@@ -1 +0,0 @@
-/sps/lbmc/common/singularity/lbmc-hisat2-2.1.0.img
\ No newline at end of file
diff --git a/src/.singularity_in2p3/lbmc-htseq-0.11.2.img b/src/.singularity_in2p3/lbmc-htseq-0.11.2.img
deleted file mode 120000
index c2e222e828a3646efa9c2997316943e42a39fb60..0000000000000000000000000000000000000000
--- a/src/.singularity_in2p3/lbmc-htseq-0.11.2.img
+++ /dev/null
@@ -1 +0,0 @@
-/sps/lbmc/common/singularity/lbmc-htseq-0.11.2.img
\ No newline at end of file
diff --git a/src/.singularity_in2p3/lbmc-htseq-0.8.0.img b/src/.singularity_in2p3/lbmc-htseq-0.8.0.img
deleted file mode 120000
index d9761928187a7538046256904be6f7db47b3e2b5..0000000000000000000000000000000000000000
--- a/src/.singularity_in2p3/lbmc-htseq-0.8.0.img
+++ /dev/null
@@ -1 +0,0 @@
-/sps/lbmc/common/singularity/lbmc-htseq-0.8.0.img
\ No newline at end of file
diff --git a/src/.singularity_in2p3/lbmc-kallisto-0.43.1.img b/src/.singularity_in2p3/lbmc-kallisto-0.43.1.img
deleted file mode 120000
index f0bc3cb4c224cebf21809e1d7b79f9e844c16a2b..0000000000000000000000000000000000000000
--- a/src/.singularity_in2p3/lbmc-kallisto-0.43.1.img
+++ /dev/null
@@ -1 +0,0 @@
-/sps/lbmc/common/singularity/lbmc-kallisto-0.43.1.img
\ No newline at end of file
diff --git a/src/.singularity_in2p3/lbmc-kallisto-0.44.0.img b/src/.singularity_in2p3/lbmc-kallisto-0.44.0.img
deleted file mode 120000
index f5a1df9011779058335fdb0377901338434d2808..0000000000000000000000000000000000000000
--- a/src/.singularity_in2p3/lbmc-kallisto-0.44.0.img
+++ /dev/null
@@ -1 +0,0 @@
-/sps/lbmc/common/singularity/lbmc-kallisto-0.44.0.img
\ No newline at end of file
diff --git a/src/.singularity_in2p3/lbmc-macs2-2.1.2.img b/src/.singularity_in2p3/lbmc-macs2-2.1.2.img
deleted file mode 120000
index 3507efe473fdc01252b874f810b618fc0fd18660..0000000000000000000000000000000000000000
--- a/src/.singularity_in2p3/lbmc-macs2-2.1.2.img
+++ /dev/null
@@ -1 +0,0 @@
-/sps/lbmc/common/singularity/lbmc-macs2-2.1.2.img
\ No newline at end of file
diff --git a/src/.singularity_in2p3/lbmc-multiqc-1.0.img b/src/.singularity_in2p3/lbmc-multiqc-1.0.img
deleted file mode 120000
index 2c95d54baf3c3c0354fa66f1f3a85c36b4636bb0..0000000000000000000000000000000000000000
--- a/src/.singularity_in2p3/lbmc-multiqc-1.0.img
+++ /dev/null
@@ -1 +0,0 @@
-/sps/lbmc/common/singularity/lbmc-multiqc-1.0.img
\ No newline at end of file
diff --git a/src/.singularity_in2p3/lbmc-multiqc-1.7.img b/src/.singularity_in2p3/lbmc-multiqc-1.7.img
deleted file mode 120000
index df7f68b57cbbf6646c97f84557183a58d08c3959..0000000000000000000000000000000000000000
--- a/src/.singularity_in2p3/lbmc-multiqc-1.7.img
+++ /dev/null
@@ -1 +0,0 @@
-/sps/lbmc/common/singularity/lbmc-multiqc-1.7.img
\ No newline at end of file
diff --git a/src/.singularity_in2p3/lbmc-music-6613c53.img b/src/.singularity_in2p3/lbmc-music-6613c53.img
deleted file mode 120000
index a0f66ea144a85e37d52d56c6f0ce6d4b5eb29088..0000000000000000000000000000000000000000
--- a/src/.singularity_in2p3/lbmc-music-6613c53.img
+++ /dev/null
@@ -1 +0,0 @@
-/sps/lbmc/common/singularity/lbmc-music-6613c53.img
\ No newline at end of file
diff --git a/src/.singularity_in2p3/lbmc-picard-2.18.11.img b/src/.singularity_in2p3/lbmc-picard-2.18.11.img
deleted file mode 120000
index d2373a62d5848c8bada4da7f08b1ad31c983b48f..0000000000000000000000000000000000000000
--- a/src/.singularity_in2p3/lbmc-picard-2.18.11.img
+++ /dev/null
@@ -1 +0,0 @@
-/sps/lbmc/common/singularity/lbmc-picard-2.18.11.img
\ No newline at end of file
diff --git a/src/.singularity_in2p3/lbmc-pigz-2.4.img b/src/.singularity_in2p3/lbmc-pigz-2.4.img
deleted file mode 120000
index 25f5929082f08108919410d1e62c675ffb193f2f..0000000000000000000000000000000000000000
--- a/src/.singularity_in2p3/lbmc-pigz-2.4.img
+++ /dev/null
@@ -1 +0,0 @@
-/sps/lbmc/common/singularity/lbmc-pigz-2.4.img
\ No newline at end of file
diff --git a/src/.singularity_in2p3/lbmc-r-3.5.3.img b/src/.singularity_in2p3/lbmc-r-3.5.3.img
deleted file mode 120000
index 447fc14610fccde46ff6bfdc866728b0a4008cc7..0000000000000000000000000000000000000000
--- a/src/.singularity_in2p3/lbmc-r-3.5.3.img
+++ /dev/null
@@ -1 +0,0 @@
-/sps/lbmc/common/singularity/lbmc-r-3.5.3.img
\ No newline at end of file
diff --git a/src/.singularity_in2p3/lbmc-rsem-1.3.0.img b/src/.singularity_in2p3/lbmc-rsem-1.3.0.img
deleted file mode 120000
index 8230e003bb1ef02394ae5a954423d41311260aab..0000000000000000000000000000000000000000
--- a/src/.singularity_in2p3/lbmc-rsem-1.3.0.img
+++ /dev/null
@@ -1 +0,0 @@
-/sps/lbmc/common/singularity/lbmc-rsem-1.3.0.img
\ No newline at end of file
diff --git a/src/.singularity_in2p3/lbmc-salmon-0.8.2.img b/src/.singularity_in2p3/lbmc-salmon-0.8.2.img
deleted file mode 120000
index ebe4177379aa6fb25ee467868b0972f08f91dc66..0000000000000000000000000000000000000000
--- a/src/.singularity_in2p3/lbmc-salmon-0.8.2.img
+++ /dev/null
@@ -1 +0,0 @@
-/sps/lbmc/common/singularity/lbmc-salmon-0.8.2.img
\ No newline at end of file
diff --git a/src/.singularity_in2p3/lbmc-sambamba-0.6.7.img b/src/.singularity_in2p3/lbmc-sambamba-0.6.7.img
deleted file mode 120000
index e001f82d9f8b71094f14e0e0ec3a49026f39981c..0000000000000000000000000000000000000000
--- a/src/.singularity_in2p3/lbmc-sambamba-0.6.7.img
+++ /dev/null
@@ -1 +0,0 @@
-/sps/lbmc/common/singularity/lbmc-sambamba-0.6.7.img
\ No newline at end of file
diff --git a/src/.singularity_in2p3/lbmc-samblaster-0.1.24.img b/src/.singularity_in2p3/lbmc-samblaster-0.1.24.img
deleted file mode 120000
index 242ae80e152d626bd9dcba519a7bcc77fcf0fbc4..0000000000000000000000000000000000000000
--- a/src/.singularity_in2p3/lbmc-samblaster-0.1.24.img
+++ /dev/null
@@ -1 +0,0 @@
-/sps/lbmc/common/singularity/lbmc-samblaster-0.1.24.img
\ No newline at end of file
diff --git a/src/.singularity_in2p3/lbmc-samtools-1.7.img b/src/.singularity_in2p3/lbmc-samtools-1.7.img
deleted file mode 120000
index 8f513f7638cbf6f95c6c4e69bc382295b5653cc7..0000000000000000000000000000000000000000
--- a/src/.singularity_in2p3/lbmc-samtools-1.7.img
+++ /dev/null
@@ -1 +0,0 @@
-/sps/lbmc/common/singularity/lbmc-samtools-1.7.img
\ No newline at end of file
diff --git a/src/.singularity_in2p3/lbmc-sratoolkit-2.8.2.img b/src/.singularity_in2p3/lbmc-sratoolkit-2.8.2.img
deleted file mode 120000
index fc196a885058ffd2f016f8fd91db4019d703486f..0000000000000000000000000000000000000000
--- a/src/.singularity_in2p3/lbmc-sratoolkit-2.8.2.img
+++ /dev/null
@@ -1 +0,0 @@
-/sps/lbmc/common/singularity/lbmc-sratoolkit-2.8.2.img
\ No newline at end of file
diff --git a/src/.singularity_in2p3/lbmc-star-2.7.3a.img b/src/.singularity_in2p3/lbmc-star-2.7.3a.img
deleted file mode 120000
index ea55bf4bb7206764b8efaa48bc9c5ef888606cde..0000000000000000000000000000000000000000
--- a/src/.singularity_in2p3/lbmc-star-2.7.3a.img
+++ /dev/null
@@ -1 +0,0 @@
-/sps/lbmc/common/singularity/lbmc-star-2.7.3a.img
\ No newline at end of file
diff --git a/src/.singularity_in2p3/lbmc-subread-1.6.4.img b/src/.singularity_in2p3/lbmc-subread-1.6.4.img
deleted file mode 120000
index 8a782172465289b9cacc105619396a30bc9ef761..0000000000000000000000000000000000000000
--- a/src/.singularity_in2p3/lbmc-subread-1.6.4.img
+++ /dev/null
@@ -1 +0,0 @@
-/sps/lbmc/common/singularity/lbmc-subread-1.6.4.img
\ No newline at end of file
diff --git a/src/.singularity_in2p3/lbmc-tophat-2.1.1.img b/src/.singularity_in2p3/lbmc-tophat-2.1.1.img
deleted file mode 120000
index 5cbb7f17d4f08fc19f72762411c2417eb1c77b3f..0000000000000000000000000000000000000000
--- a/src/.singularity_in2p3/lbmc-tophat-2.1.1.img
+++ /dev/null
@@ -1 +0,0 @@
-/sps/lbmc/common/singularity/lbmc-tophat-2.1.1.img
\ No newline at end of file
diff --git a/src/.singularity_in2p3/lbmc-trimmomatic-0.36.img b/src/.singularity_in2p3/lbmc-trimmomatic-0.36.img
deleted file mode 120000
index 8baf3c581996bed38e11de7072f12e3c9d074567..0000000000000000000000000000000000000000
--- a/src/.singularity_in2p3/lbmc-trimmomatic-0.36.img
+++ /dev/null
@@ -1 +0,0 @@
-/sps/lbmc/common/singularity/lbmc-trimmomatic-0.36.img
\ No newline at end of file
diff --git a/src/.singularity_in2p3/lbmc-ucsc-375.img b/src/.singularity_in2p3/lbmc-ucsc-375.img
deleted file mode 120000
index 54ba6659f0ad9a0ac6483d435ca2a6c9f87a4bf8..0000000000000000000000000000000000000000
--- a/src/.singularity_in2p3/lbmc-ucsc-375.img
+++ /dev/null
@@ -1 +0,0 @@
-/sps/lbmc/common/singularity/lbmc-ucsc-375.img
\ No newline at end of file
diff --git a/src/.singularity_in2p3/lbmc-umi_tools-1.0.0.img b/src/.singularity_in2p3/lbmc-umi_tools-1.0.0.img
deleted file mode 120000
index 240e59a0438697ce4e1f41fad639d4bfb6599e4f..0000000000000000000000000000000000000000
--- a/src/.singularity_in2p3/lbmc-umi_tools-1.0.0.img
+++ /dev/null
@@ -1 +0,0 @@
-/sps/lbmc/common/singularity/lbmc-umi_tools-1.0.0.img
\ No newline at end of file
diff --git a/src/.singularity_in2p3/lbmc-urqt-d62c1f8.img b/src/.singularity_in2p3/lbmc-urqt-d62c1f8.img
deleted file mode 120000
index a7f364f55d65c79dd11c49ecf459b6ca9163a9d9..0000000000000000000000000000000000000000
--- a/src/.singularity_in2p3/lbmc-urqt-d62c1f8.img
+++ /dev/null
@@ -1 +0,0 @@
-/sps/lbmc/common/singularity/lbmc-urqt-d62c1f8.img
\ No newline at end of file
diff --git a/src/.singularity_psmn b/src/.singularity_psmn
index a5b71da334132bfa76cd674fa170744c49a129c7..58b1c91295c39308c18df5467f0b04713ba0bd96 120000
--- a/src/.singularity_psmn
+++ b/src/.singularity_psmn
@@ -1 +1 @@
-/scratch/Bio/singularity
\ No newline at end of file
+/Xnfs/abc/singularity/
\ No newline at end of file
diff --git a/src/.update_config.sh b/src/.update_config.sh
deleted file mode 100644
index 2ea29fcf25727fdddaa062fbc7c047f89f9e4af5..0000000000000000000000000000000000000000
--- a/src/.update_config.sh
+++ /dev/null
@@ -1,32 +0,0 @@
-# update docker url
-fd ".*config" -E "nf_modules" src/ -x perl -0777pe 's|container = "|container = "lbmc/|g' -i {}
-
-# update singularity url
-fd ".*config" -E "nf_modules" src/ -x perl -pe 's|container = "lbmc/file://bin/(.*).img"|container = "lbmc/\1"|g' -i {}
-
-# update singularity config
-fd ".*config" -E "nf_modules" src/ -x perl -0777pe 's|\n\s*singularity {\n\s*singularity.enabled = true|\n  singularity {\n    singularity.enabled = true\n    singularity.cacheDir = "./bin/"|mg' -i {}
-
-# update in2p3 config
-fd ".*config" -E "nf_modules" src/ -x perl -0777pe 's|\n\s*ccin2p3 {\n\s*singularity.enabled = true|\n  ccin2p3 {\n    singularity.enabled = true\n    singularity.cacheDir = "/sps/lbmc/common/singularity/"|mg' -i {}
-fd ".*config" src/ -x perl -pe 's|container = "lbmc//sps/lbmc/common/singularity/(.*).img"|container = "lbmc/\1"|g' -i {}
-fd ".*config" -E "nf_modules" src/ -x perl -0777pe 's|singularity.cacheDir = "/sps/lbmc/common/singularity/"|singularity.cacheDir = "\$baseDir/.singularity_in2p3/"|mg' -i {}
-
-# we remove the ccin2p3_conda section
-fd ".*config" -E "nf_modules" src/ -x perl -0777pe "s|\s*ccin2p3_conda {.*ccin2p3 {\n|\n  ccin2p3 {\n|msg" -i {}
-
-# we update the psmn module to conda
-fd ".*config" -E "nf_modules" src/ -x perl -0777pe 's|beforeScript = "source /usr/share/lmod/lmod/init/bash; module use ~/privatemodules"\n\s*module = "(.*)/(.*)"|beforeScript = "source \$baseDir/.conda_psmn.sh"\n        conda = "\$baseDir/.conda_envs/\L\1_\2"|mg' -i {}
-
-# we update the psmn queue to new cluster
-fd ".*config" src/ -x perl -0777pe 's|E5-2670deb128A,E5-2670deb128B,E5-2670deb128C,E5-2670deb128D,E5-2670deb128E,E5-2670deb128F|CLG6242deb384A,CLG6242deb384C,CLG5218deb192A,CLG5218deb192B,CLG5218deb192C,CLG5218deb192D,SLG5118deb96,SLG6142deb384A,SLG6142deb384B,SLG6142deb384C,SLG6142deb384D|mg' -i {}
-fd ".*config" src/ -x perl -0777pe 's|monointeldeb128,monointeldeb48,h48-E5-2670deb128,h6-E5-2667v4deb128|monointeldeb128|mg' -i {}
-fd ".*config" src/ -x perl -0777pe 's|openmp16|openmp32|mg' -i {}
-fd ".*config" src/ -x perl -0777pe 's|cpus = 16|cpus = 32|mg' -i {}
-fd ".*config" src/ -x perl -0777pe "s|'|\"|mg" -i {}
-
-# we update the psmn config to singularity
-fd ".*config" src/ -x perl -0777pe 's|psmn{|psmn{\n    singularity.enabled = true\n    singularity.cacheDir = "$baseDir/.singularity_psmn/"\n    singularity.runOptions = "--bind /Xnfs,/scratch"|mg' -i {}
-fd ".*config" src/ -x perl -0777pe 's|beforeScript.*conda.*(\n\s*clusterOptions = "-cwd -V".*)(container .*executor = "sge")|\2\1\2|gs' -i {}
-fd ".*config" src/nf_modules/ -x perl -0777pe 's|\s*scratch = true(\n.*clusterOptions = "-cwd -V")|\1|gs' -i {}
-fd ".*config" src/nf_modules/ -x perl -0777pe 's|\s*stageInMode = "copy"\n\s*stageOutMode = "rsync"(\n.*clusterOptions = "-cwd -V")|\1|gs' -i {}
diff --git a/src/.update_tools.sh b/src/.update_tools.sh
deleted file mode 100755
index 57af5db8b119176f94fe0ac4b28043b3cd94dd91..0000000000000000000000000000000000000000
--- a/src/.update_tools.sh
+++ /dev/null
@@ -1,92 +0,0 @@
-#/bin/sh
-
-# A POSIX variable
-OPTIND=1
-
-# Initialize our own variables:
-tool=""
-version=""
-
-while getopts "h?v:t:p:" opt; do
-  case "${opt}" in
-  h|\?)
-    echo "update_tools.sh -t toolname -v tool_version -p tool_previous_version"
-    exit 0
-    ;;
-  v)
-    version=${OPTARG}
-    ;;
-  p)
-    prev_version=${OPTARG}
-    ;;
-  t)
-    tool=${OPTARG}
-    ;;
-  esac
-done
-
-echo "tool=${tool}, version='${version}', previous version='${version}'"
-
-docker_tool_dir="src/docker_modules/"${tool}"/"
-echo ${docker_tool_dir}
-if [ -d ${docker_tool_dir} ]; then
-  echo "docker module found for ${tool}."
-  if [ -d ${docker_tool_dir}${version} ]; then
-    echo "version already existing, skipping."
-  else
-    cp -R ${docker_tool_dir}${prev_version} ${docker_tool_dir}${version}
-    sed -i "s|${prev_version}|${version}|g" "${docker_tool_dir}${version}/Dockerfile"
-    sed -i "s|${prev_version}|${version}|g" "${docker_tool_dir}${version}/docker_init.sh"
-    echo "docker_module for ${tool}:${version}, done."
-  fi
-else
-  echo "docker module not found for '${tool}', skipping."
-fi
-
-singularity_tool_dir="src/singularity_modules/"${tool}"/"
-echo ${singularity_tool_dir}
-if [ -d ${singularity_tool_dir} ]; then
-  echo "singularity module found for $tool."
-  if [ -d ${singularity_tool_dir}${version} ]; then
-    echo "version already existing, skipping."
-  else
-    cp -R ${singularity_tool_dir}${prev_version} ${singularity_tool_dir}${version}
-    sed -i "s|${prev_version}|${version}|g" "${singularity_tool_dir}${version}/${tool}.def"
-    sed -i "s|${prev_version}|${version}|g" "${singularity_tool_dir}${version}/build.sh"
-    echo "singularity_module for ${tool}:${version}, done."
-  fi
-else
-  echo "singularity module not found for '${tool}', skipping."
-fi
-
-nf_tool_dir="src/nf_modules/"$tool"/"
-echo $nf_tool_dir
-if [ -d ${nf_tool_dir} ]; then
-  echo "nf module found for ${tool}."
-  find ${nf_tool_dir} -maxdepth 1 -mindepth 1 -type f -name "*.config" |
-    awk "{system(\"sed -i \\\"s|${prev_version}|${version}|g\\\" \"\$0)}"
-  echo "nf_module for ${tool}:${version}, done."
-else
-  echo "nf module not found for '${tool}', skipping."
-fi
-
-psmn_modules_dir="src/psmn_modules/.git/"
-if [ ! -d ${nf_tool_dir} ]; then
-  git submodule init && \
-  git submodule update
-fi
-psmn_tool_app_dir="src/psmn_modules/apps/"${tool}"/"
-psmn_tool_module_dir="src/psmn_modules/modulefiles/"${tool}"/"
-echo ${psmn_tool_app_dir}
-if [ -d ${psmn_tool_app_dir} ]; then
-  echo "psmn module found for ${tool}."
-  cp ${psmn_tool_app_dir}/install_${prev_version}.sh \
-    ${psmn_tool_app_dir}/install_${version}.sh
-  sed -i "s|$prev_version|$version|g" ${psmn_tool_app_dir}/install_${version}.sh
-  cp ${psmn_tool_module_dir}/${prev_version}.lua \
-    ${psmn_tool_module_dir}/${version}.lua
-  sed -i "s|${prev_version}|${version}|g" ${psmn_tool_module_dir}/${version}.lua
-  echo "psmn_module for ${tool}:${version}, done."
-else
-  echo "psmn module not found for '${tool}', skipping."
-fi
diff --git a/src/example_chipseq.nf b/src/example_chipseq.nf
new file mode 100644
index 0000000000000000000000000000000000000000..454617c9c6fa21b9cdd2c8fd378bbcd7c60ea42e
--- /dev/null
+++ b/src/example_chipseq.nf
@@ -0,0 +1,123 @@
+
+nextflow.enable.dsl=2
+
+include {
+  fastp
+} from './nf_modules/fastp/main'
+
+workflow csv_parsing {
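+  // Each CSV row yields one IP entry and one WCE entry; the md5 of the
+  // concatenated file names gives both entries a shared index for regrouping.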
+  if (params.csv_path.size() > 0) {
+    log.info "loading local csv files"
+    Channel
+      .fromPath(params.csv_path, checkIfExists: true)
+      .ifEmpty { error """
+    =============================================================
+      WARNING! No csv input file specified.
+      Use '--csv_path <file.csv>'
+      Or '--help' for more information
+    =============================================================
+    """
+      }
+      .splitCsv(header: true, sep: ";", strip: true)
+      .flatMap{
+        it -> [
+          [(it.IP + it.WCE).md5(), "IP", "w", file(it.IP)],
+          [(it.IP + it.WCE).md5(), "WCE", "w", file(it.WCE)]
+        ]
+      }
+      .map{ it ->
+        if (it[1] instanceof List){
+          it
+        } else {
+          [it[0], [it[1]], it[2], it[3], [it[4]]]
+        }
+      }
+      .map{
+        it ->
+        if (it[1].size() == 2){ // if data are paired_end
+          [
+            "index": it[0],
+            "group": ref_order(it),
+            "ip": it[2],
+            "type": it[3],
+            "id": read_order(it)[0],
+            "file": read_order(it)
+          ]
+        } else {
+          [
+            "index": it[0],
+            "group": it[1].simpleName,
+            "ip": it[2],
+            "type": it[3],
+            "id": it[4].simpleName,
+            "file": [it[4].simpleName, it[4]]
+          ]
+        }
+      }
+      .set{input_csv}
+  } else {
+    log.info "loading remotes SRA csv files"
+    Channel
+      .fromPath(params.csv_sra, checkIfExists: true)
+      .ifEmpty { error """
+    =============================================================
+      WARNING! No csv input file specified.
+      Use '--csv_path <file.csv>' or
+      Use '--csv_sra <file.csv>'
+      Or '--help' for more information
+    =============================================================
+    """
+      }
+      .splitCsv(header: true, sep: ";", strip: true)
+      .flatMap{
+        it -> [
+          [[it.IP_w + it.WCE_w + it.IP_m + it.WCE_m], it.IP_w, "IP", "w", it.IP_w],
+          [[it.IP_w + it.WCE_w + it.IP_m + it.WCE_m], it.IP_w, "WCE", "w", it.WCE_w],
+          [[it.IP_w + it.WCE_w + it.IP_m + it.WCE_m], it.IP_w, "IP", "m", it.IP_m],
+          [[it.IP_w + it.WCE_w + it.IP_m + it.WCE_m], it.IP_w, "WCE", "m", it.WCE_m]
+        ]
+      }
+      .map{
+        it ->
+        if (it[1].size() == 2){ // if data are paired_end
+          [
+            "index": (
+              it[0][0][0].simpleName +
+              it[0][0][1].simpleName +
+              it[0][0][2].simpleName +
+              it[0][0][3].simpleName
+            ).md5(),
+            "group": it[1][0].simpleName,
+            "ip": it[2],
+            "type": it[3],
+            "id": it[4][0].simpleName[0..-4],
+            "file": [it[4][0].simpleName[0..-4], it[4]]
+          ]
+        } else {
+          [
+            "index": (
+              it[0][0].simpleName +
+              it[0][1].simpleName +
+              it[0][2].simpleName +
+              it[0][3].simpleName
+            ).md5(),
+            "group": it[1].simpleName,
+            "ip": it[2],
+            "type": it[3],
+            "id": it[4].simpleName,
+            "file": [it[4].simpleName, it[4]]
+          ]
+        }
+      }
+      .set{input_csv}
+  }
+  emit:
+  input_csv
+}
+
+
+workflow {
+
+}
\ No newline at end of file
diff --git a/src/example_marseq.nf b/src/example_marseq.nf
new file mode 100644
index 0000000000000000000000000000000000000000..821ebd99d45e9810b2a5fc3546e720d417666750
--- /dev/null
+++ b/src/example_marseq.nf
@@ -0,0 +1,88 @@
+nextflow.enable.dsl=2
+
+/*
+Testing pipeline for marseq scRNASeq analysis
+*/
+
+include { adaptor_removal} from "./nf_modules/cutadapt/main.nf"
+include {
+  index_fasta;
+  count;
+  index_fasta_velocity;
+  count_velocity
+} from "./nf_modules/kb/main.nf" addParams(
+  kb_protocol: "marsseq",
+  count_out: "quantification/",
+  count_velocity_out: "quantification_velocity/"
+)
+
+params.fasta = "http://ftp.ensembl.org/pub/release-94/fasta/gallus_gallus/dna/Gallus_gallus.Gallus_gallus-5.0.dna.toplevel.fa.gz"
+params.fastq = "data/CF42_45/*/*R{1,2}.fastq.gz"
+params.gtf = "http://ftp.ensembl.org/pub/release-94/gtf/gallus_gallus/Gallus_gallus.Gallus_gallus-5.0.94.gtf.gz"
+params.transcript_to_gene = ""
+params.whitelist = "data/expected_whitelist.txt"
+params.config = "data/marseq_flexi_splitter.yaml"
+params.workflow_type = "classic"
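+// "classic" runs the standard quantification; any other value runs the RNA velocity branch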
+
+log.info "fastq files (--fastq): ${params.fastq}"
+log.info "fasta file (--fasta): ${params.fasta}"
+log.info "gtf file (--gtf): ${params.gtf}"
+log.info "transcript_to_gene file (--transcript_to_gene): ${params.transcript_to_gene}"
+log.info "whitelist file (--whitelist): ${params.whitelist}"
+log.info "config file (--config): ${params.config}"
+
+channel
+  .fromFilePairs( params.fastq, size: -1)
+  .set { fastq_files }
+channel
+  .fromPath( params.fasta )
+  .ifEmpty { error "Cannot find any fasta files matching: ${params.fasta}" }
+  .map { it -> [it.simpleName, it]}
+  .set { fasta_files }
+channel
+  .fromPath( params.gtf )
+  .ifEmpty { error "Cannot find any gtf files matching: ${params.gtf}" }
+  .map { it -> [it.simpleName, it]}
+  .set { gtf_files }
+if (params.whitelist == "") {
+  channel.empty()
+    .set { whitelist_files }
+} else {
+  channel
+    .fromPath( params.whitelist )
+    .map { it -> [it.simpleName, it]}
+    .set { whitelist_files }
+}
+channel
+  .fromPath( params.config )
+  .ifEmpty { error "Cannot find any config files matching: ${params.config}" }
+  .map { it -> [it.simpleName, it]}
+  .set { config_files }
+
+workflow {
+  adaptor_removal(fastq_files)
+  if (params.workflow_type == "classic") {
+    index_fasta(
+      fasta_files,
+      gtf_files
+    )
+    count(
+      index_fasta.out.index,
+      adaptor_removal.out.fastq,
+      index_fasta.out.t2g, whitelist_files, config_files
+    )
+  } else {
+    index_fasta_velocity(
+      fasta_files,
+      gtf_files
+    )
+    count_velocity(
+      index_fasta_velocity.out.index,
+      adaptor_removal.out.fastq,
+      index_fasta_velocity.out.t2g,
+      whitelist_files,
+      config_files
+    )
+  }
+}
diff --git a/src/example_variant_calling.nf b/src/example_variant_calling.nf
new file mode 100644
index 0000000000000000000000000000000000000000..5d793ed4898ac89753b2188f252403f903182043
--- /dev/null
+++ b/src/example_variant_calling.nf
@@ -0,0 +1,38 @@
+nextflow.enable.dsl=2
+
+/*
+Testing pipeline for germline variant calling
+*/
+
+include {
+  mapping;
+} from "./nf_modules/bwa/main.nf"
+
+include {
+  sort_bam;
+} from "./nf_modules/samtools/main.nf"
+
+include {
+  germline_cohort_data_variant_calling;
+} from "./nf_modules/gatk4/main.nf" addParams(
+  variant_calling_out: "vcf/",
+)
+
+params.fastq = ""
+params.fasta = ""
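+// Example launch (hypothetical paths):
+// ./nextflow src/example_variant_calling.nf -c src/nextflow.config -profile docker --fastq "data/*_R{1,2}.fastq.gz" --fasta "data/genome.fasta"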
+
+channel
+  .fromFilePairs( params.fastq, size: -1)
+  .set { fastq_files }
+channel
+  .fromPath( params.fasta )
+  .map { it -> [it.simpleName, it]}
+  .set { fasta_files }
+
+workflow {
+  mapping(fasta_files, fastq_files)
+  sort_bam(mapping.out.bam)
+  germline_cohort_data_variant_calling(sort_bam.out.bam, fasta_files)
+}
diff --git a/src/nextflow.config b/src/nextflow.config
index a30fd44fa68eec707aafbb03aaabb0749af83429..859be7adc4e3fd49257bb1d9ed05e85bf9681adc 100644
--- a/src/nextflow.config
+++ b/src/nextflow.config
@@ -25,6 +25,35 @@ profiles {
       withLabel: big_mem_multi_cpus {
         cpus = 4
       }
+      withLabel: small_mem_mono_cpus {
+        cpus = 1
+        memory = '2GB'
+      }
+      withLabel: small_mem_multi_cpus {
+        cpus = 4
+        memory = '2GB'
+      }
+    }
+  }
+  podman {
+    podman.enabled = true
+    process {
+      errorStrategy = 'finish'
+      memory = '16GB'
+      withLabel: big_mem_mono_cpus {
+        cpus = 1
+      }
+      withLabel: big_mem_multi_cpus {
+        cpus = 4
+      }
+      withLabel: small_mem_mono_cpus {
+        cpus = 1
+        memory = '2GB'
+      }
+      withLabel: small_mem_multi_cpus {
+        cpus = 4
+        memory = '2GB'
+      }
     }
   }
   singularity {
@@ -39,11 +68,19 @@ profiles {
       withLabel: big_mem_multi_cpus {
         cpus = 4
       }
+      withLabel: small_mem_mono_cpus {
+        cpus = 1
+        memory = '2GB'
+      }
+      withLabel: small_mem_multi_cpus {
+        cpus = 4
+        memory = '2GB'
+      }
     }
   }
   psmn {
     singularity.enabled = true
-    singularity.cacheDir = "$baseDir/.singularity_psmn/"
+    singularity.cacheDir = "/Xnfs/abc/singularity/"
     singularity.runOptions = "--bind /Xnfs,/scratch"
     process{
       errorStrategy = { sleep(Math.pow(2, task.attempt) * 200 as long); return 'retry' }
@@ -66,6 +103,24 @@ profiles {
         penv = "openmp32"
 
       }
+      withLabel: small_mem_mono_cpus {
+        executor = "sge"
+        clusterOptions = "-cwd -V"
+        cpus = 1
+        memory = "16GB"
+        time = "12h"
+        queue = "monointeldeb128,monointeldeb192"
+      }
+      withLabel: small_mem_multi_cpus {
+        executor = "sge"
+        clusterOptions = "-cwd -V"
+        cpus = 32
+        memory = "16GB"
+        time = "24h"
+        queue = "CLG*,SLG*,Epyc*"
+        penv = "openmp32"
+
+      }
     }
   }
   ccin2p3 {
@@ -96,6 +151,26 @@
         memory = "8GB"
         queue = "huge"
       }
+      withLabel: small_mem_mono_cpus {
+        scratch = true
+        stageInMode = "copy"
+        stageOutMode = "rsync"
+        executor = "sge"
+        clusterOptions = "-P P_lbmc -l os=cl7 -l sps=1 -r n"
+        cpus = 1
+        memory = "8GB"
+        queue = "huge"
+      }
+      withLabel: small_mem_multi_cpus {
+        container = "lbmc/urqt:d62c1f8"
+        scratch = true
+        stageInMode = "copy"
+        stageOutMode = "rsync"
+        executor = "sge"
+        clusterOptions = "-P P_lbmc -l os=cl7 -l sps=1 -r n"
+        cpus = 1
+        memory = "8GB"
+        queue = "huge"
+      }
     }
   }
 }
diff --git a/src/nf_modules/agat/main.nf b/src/nf_modules/agat/main.nf
new file mode 100644
index 0000000000000000000000000000000000000000..e2d832e72b97340f1a0811dde6358bd73cb2e606
--- /dev/null
+++ b/src/nf_modules/agat/main.nf
@@ -0,0 +1,46 @@
+version = "0.8.0"
+container_url = "lbmc/agat:${version}"
+
+params.gff_to_bed = ""
+params.gff_to_bed_out = ""
+process gff_to_bed {
+  container = "${container_url}"
+  label "big_mem_mono_cpus"
+  tag "$file_id"
+  if (params.gff_to_bed_out != "") {
+    publishDir "results/${params.gff_to_bed_out}", mode: 'copy'
+  }
+
+  input:
+    tuple val(file_id), path(gff)
+  output:
+    tuple val(file_id), path("*.bed"), emit: bed
+
+  script:
+"""
+zcat ${gff} > ${gff.baseName}.gff
+agat_convert_sp_gff2bed.pl ${params.gff_to_bed} --gff ${gff.baseName}.gff -o ${gff.simpleName}.bed
+"""
+}
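+// usage sketch (the channel shape is illustrative); the gff input is expected
+// to be gzipped, since the script decompresses it with zcat:
+//   gff_to_bed(Channel.fromPath("annot.gff.gz").map{ [it.simpleName, it] })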
+
+params.gff_to_gtf = ""
+params.gff_to_gtf_out = ""
+process gff_to_gtf {
+  container = "${container_url}"
+  label "big_mem_mono_cpus"
+  tag "$file_id"
+  if (params.gff_to_gtf_out != "") {
+    publishDir "results/${params.gff_to_gtf_out}", mode: 'copy'
+  }
+
+  input:
+    tuple val(file_id), path(gff)
+  output:
+    tuple val(file_id), path("*.gtf"), emit: gtf
+
+  script:
+"""
+zcat ${gff} > ${gff.baseName}.gff
+agat_convert_sp_gff2gtf.pl ${params.gff_to_gtf} --gff ${gff.baseName}.gff -o ${gff.simpleName}.gtf
+"""
+}
\ No newline at end of file
diff --git a/src/nf_modules/alntools/main.nf b/src/nf_modules/alntools/main.nf
new file mode 100644
index 0000000000000000000000000000000000000000..19ee7b096f8f8aa40f7c57602caee4060f17227a
--- /dev/null
+++ b/src/nf_modules/alntools/main.nf
@@ -0,0 +1,65 @@
+version = "dd96682"
+container_url = "lbmc/alntools:${version}"
+
+params.bam2ec = ""
+params.bam2ec_out = ""
+process bam2ec {
+  container = "${container_url}"
+  label "big_mem_mono_cpus"
+  tag "$file_id"
+  if (params.bam2ec_out != "") {
+    publishDir "results/${params.bam2ec_out}", mode: 'copy'
+  }
+
+  input:
+    tuple val(file_id), path(bam), path(bam_idx)
+    tuple val(transcripts_lengths_id), path(transcripts_lengths)
+
+  output:
+    tuple val(file_id), path("${bam.simpleName}.bin"), emit: bin
+    tuple val(transcripts_lengths_id), path("${transcripts_lengths}"), emit: tsv
+    tuple val(file_id), path("${bam.simpleName}_bam2ec_report.txt"), emit: report
+
+  script:
+"""
+mkdir tmp
+alntools bam2ec \
+  -c 1 ${params.bam2ec} \
+  -d ./tmp \
+  -t ${transcripts_lengths} \
+  -v \
+  ${bam} ${bam.simpleName}.bin &> \
+  ${bam.simpleName}_bam2ec_report.txt
+"""
+}
+
+params.gtf_to_transcripts_lengths = ""
+params.gtf_to_transcripts_lengths_out = ""
+process gtf_to_transcripts_lengths {
+  container = "${container_url}"
+  label "big_mem_mono_cpus"
+  tag "$file_id"
+  if (params.gtf_to_transcripts_lengths_out != "") {
+    publishDir "results/${params.gtf_to_transcripts_lengths_out}", mode: 'copy'
+  }
+
+  input:
+    tuple val(file_id), path(gtf)
+
+  output:
+    tuple val(file_id), path("${gtf.simpleName}_transcripts_lengths.tsv"), emit: tsv
+
+  script:
+"""
+awk -F"[\\t;]" '
+\$3=="exon" {
+        ID=gensub(/transcript_id \\"(.*)\\"/, "\\\\1", "g", \$11); 
+        LEN[ID]+=\$5-\$4+1;
+    } 
+END{
+    for(i in LEN)
+        {print i"\\t"LEN[i]}
+    }
+' ${gtf} > ${gtf.simpleName}_transcripts_lengths.tsv
+"""
+}
diff --git a/src/nf_modules/beagle/main.nf b/src/nf_modules/beagle/main.nf
new file mode 100644
index 0000000000000000000000000000000000000000..bc0a54b21d941d358d047334c29d67f4b55be16b
--- /dev/null
+++ b/src/nf_modules/beagle/main.nf
@@ -0,0 +1,23 @@
+version = "5.1_24Aug19.3e8--hdfd78af_1"
+container_url = "quay.io/biocontainers/beagle::${version}"
+
+params.phasing = ""
+process phasing {
+  container = "${container_url}"
+  label "big_mem_multi_cpus"
+  tag "$file_id"
+
+  input:
+    tuple val(file_id), path(vcf)
+    tuple val(ref_id), path(ref_vcf)
+
+  output:
+    tuple val(file_id), path("*.bam*"), emit: bam
+
+  script:
+"""
+beagle nthread=${task.cpus} \
+  gtgl=${vcf} \
+  ref=${ref_vcf} \
+  out=${vcf.simpleName}_phased
+"""
+}
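+// usage sketch: phasing takes [id, vcf] and [ref_id, ref_vcf] tuples, and
+// beagle writes the phased genotypes to <out>.vcf.gz; note that gtgl=
+// (genotype likelihoods) is a Beagle 4.x parameter, so double-check it against
+// the beagle version shipped in the container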
diff --git a/src/nf_modules/bedtools/fasta_from_bed.config b/src/nf_modules/bedtools/fasta_from_bed.config
deleted file mode 100644
index 077270d718912d8430da8a86380ebafe4fb02ea4..0000000000000000000000000000000000000000
--- a/src/nf_modules/bedtools/fasta_from_bed.config
+++ /dev/null
@@ -1,55 +0,0 @@
-profiles {
-  docker {
-    docker.temp = "auto"
-    docker.enabled = true
-    process {
-      withLabel: bedtools {
-        container = "lbmc/bedtools:2.25.0"
-        cpus = 1
-      }
-    }
-  }
-  singularity {
-    singularity.enabled = true
-    singularity.cacheDir = "./bin/"
-    process {
-      withLabel: bedtools {
-        container = "lbmc/bedtools:2.25.0"
-        cpus = 1
-      }
-    }
-  }
-  psmn{
-    singularity.enabled = true
-    singularity.cacheDir = "/.singularity_psmn/"
-    singularity.runOptions = "--bind /Xnfs,/scratch"
-    process{
-      withLabel: bedtools {
-        container = "lbmc/bedtools:2.25.0"
-        executor = "sge"
-        clusterOptions = "-cwd -V"
-        cpus = 1
-        memory = "20GB"
-        time = "12h"
-        queue = "monointeldeb128"
-      }
-    }
-  }
-  ccin2p3 {
-    singularity.enabled = true
-    singularity.cacheDir = "$baseDir/.singularity_in2p3/"
-    singularity.runOptions = "--bind /pbs,/sps,/scratch"
-    process{
-      withLabel: bedtools {
-        container = "lbmc/bedtools:2.25.0"
-        scratch = true
-        stageInMode = "copy"
-        stageOutMode = "rsync"
-        executor = "sge"
-        clusterOptions = "-P P_lbmc -l os=cl7 -l sps=1 -r n"
-        cpus = 1
-        queue = "huge"
-      }
-    }
-  }
-}
diff --git a/src/nf_modules/bedtools/fasta_from_bed.nf b/src/nf_modules/bedtools/fasta_from_bed.nf
deleted file mode 100644
index 85dce5556bb9a583d0dd70dadcdef635ce842074..0000000000000000000000000000000000000000
--- a/src/nf_modules/bedtools/fasta_from_bed.nf
+++ /dev/null
@@ -1,41 +0,0 @@
-/*
-* bedtools :
-* Imputs : fasta files
-* Imputs : bed files
-* Output : fasta files
-*/
-/*                      fasta extraction                                     */
-
-params.fasta = "$baseDir/data/fasta/*.fasta"
-params.bed = "$baseDir/data/annot/*.bed"
-
-log.info "fasta file : ${params.fasta}"
-log.info "bed file : ${params.bed}"
-
-Channel
-  .fromPath( params.fasta )
-  .ifEmpty { error "Cannot find any fasta files matching: ${params.fasta}" }
-  .set { fasta_files }
-Channel
-  .fromPath( params.bed )
-  .ifEmpty { error "Cannot find any bed files matching: ${params.bed}" }
-  .set { bed_files }
-
-process fasta_from_bed {
-  tag "${bed.baseName}"
-  publishDir "results/fasta/", mode: 'copy'
-  label "bedtools"
-
-  input:
-  file fasta from fasta_files
-  file bed from bed_files
-
-  output:
-  file "*_extracted.fasta" into fasta_files_extracted
-
-  script:
-"""
-bedtools getfasta -name \
--fi ${fasta} -bed ${bed} -fo ${bed.baseName}_extracted.fasta
-"""
-}
diff --git a/src/nf_modules/bedtools/main.nf b/src/nf_modules/bedtools/main.nf
index 50a848e76e1c41f1217c7c47cc7d3f9e025d99b7..9400abf4e55bcf57f52e896b5f1d41e1a8fe8bfa 100644
--- a/src/nf_modules/bedtools/main.nf
+++ b/src/nf_modules/bedtools/main.nf
@@ -1,46 +1,61 @@
 version = "2.25.0"
 container_url = "lbmc/bedtools:${version}"
 
+params.fasta_from_bed = "-name"
+params.fasta_from_bed_out = ""
 process fasta_from_bed {
   container = "${container_url}"
   label "big_mem_mono_cpus"
-  tag "${bed.baseName}"
+  tag "${file_id}"
+  if (params.fasta_from_bed_out != "") {
+    publishDir "results/${params.fasta_from_bed_out}", mode: 'copy'
+  }
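+  // results are copied out of the work dir only when the corresponding *_out
+  // param is set; the same opt-in publishDir idiom is used across these modules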
 
   input:
-  path fasta
-  path bed
+  tuple val(fasta_id), path(fasta)
+  tuple val(file_id), path(bed)
 
   output:
-  tuple val(bed.baseName), path("*_extracted.fasta"), emit: fasta
+  tuple val(file_id), path("*_extracted.fasta"), emit: fasta
 
   script:
 """
-bedtools getfasta -name \
+bedtools getfasta ${params.fasta_from_bed} \
 -fi ${fasta} -bed ${bed} -fo ${bed.baseName}_extracted.fasta
 """
 }
 
+params.merge_bed = ""
+params.merge_bed_out = ""
 process merge_bed {
   container = "${container_url}"
   label "big_mem_mono_cpus"
-  tag "${bed.baseName}"
+  tag "${file_id}"
+  if (params.merge_bed_out != "") {
+    publishDir "results/${params.merge_bed_out}", mode: 'copy'
+  }
 
   input:
-  path bed
+  tuple val(file_id), path(bed)
 
   output:
-  tuple val(bed[0].simpleName), path("*_merged.fasta"), emit: bed
+  tuple val(file_id), path("*_merged.fasta"), emit: bed
 
   script:
 """
-bedtools merge -i ${bed} > ${bed[0].simpleName}_merged.bed
+bedtools merge ${params.merge_bed} -i ${bed} > ${bed[0].simpleName}_merged.bed
 """
 }
 
+params.bam_to_fastq_singleend = ""
+params.bam_to_fastq_singleend_out = ""
 process bam_to_fastq_singleend {
   container = "${container_url}"
   label "big_mem_mono_cpus"
   tag "${bam_id}"
+  if (params.bam_to_fastq_singleend_out != "") {
+    publishDir "results/${params.bam_to_fastq_singleend_out}", mode: 'copy'
+  }
 
   input:
   tuple val(bam_id), path(bam)
@@ -51,14 +66,20 @@ process bam_to_fastq_singleend {
   script:
 """
 bedtools bamtofastq \
--i ${bam} -fq ${bam.baseName}.fastq
+  ${params.bam_to_fastq_singleend} \
+  -i ${bam} -fq ${bam.baseName}.fastq
 """
 }
 
+params.bam_to_fastq_pairedend = ""
+params.bam_to_fastq_pairedend_out = ""
 process bam_to_fastq_pairedend {
   container = "${container_url}"
   label "big_mem_mono_cpus"
   tag "${bam_id}"
+  if (params.bam_to_fastq_pairedend_out != "") {
+    publishDir "results/${params.bam_to_fastq_pairedend_out}", mode: 'copy'
+  }
 
   input:
   tuple val(bam_id), path(bam)
@@ -69,14 +90,20 @@ process bam_to_fastq_pairedend {
   script:
 """
 bedtools bamtofastq \
--i ${bam} -fq ${bam.baseName}_R1.fastq -fq2 ${bam.baseName}_R2.fastq
+  ${params.bam_to_fastq_pairedend} \
+  -i ${bam} -fq ${bam.baseName}_R1.fastq -fq2 ${bam.baseName}_R2.fastq
 """
 }
 
+params.bam_to_bedgraph = ""
+params.bam_to_bedgraph_out = ""
 process bam_to_bedgraph {
   container = "${container_url}"
   label "big_mem_mono_cpus"
   tag "${bam_id}"
+  if (params.bam_to_bedgraph_out != "") {
+    publishDir "results/${params.bam_to_bedgraph_out}", mode: 'copy'
+  }
 
   input:
   tuple val(bam_id), path(bam)
@@ -87,6 +114,7 @@ process bam_to_bedgraph {
   script:
 """
 bedtools genomecov \
+  ${params.bam_to_bedgraph} \
   -ibam ${bam} \
   -bg > ${bam.simpleName}.bg
 """
diff --git a/src/nf_modules/bedtools/tests.sh b/src/nf_modules/bedtools/tests.sh
deleted file mode 100755
index 61b3cc9c44bad7f171e87f180f2a2a156009d48f..0000000000000000000000000000000000000000
--- a/src/nf_modules/bedtools/tests.sh
+++ /dev/null
@@ -1,15 +0,0 @@
-./nextflow src/nf_modules/bedtools/fasta_from_bed.nf \
-  -c src/nf_modules/bedtools/fasta_from_bed.config \
-  -profile docker \
-  --fasta "data/tiny_dataset/fasta/tiny_v2.fasta" \
-  --bed "data/tiny_dataset/annot/tiny.bed" \
-  -resume
-
-if [ -x "$(command -v singularity)" ]; then
-./nextflow src/nf_modules/bedtools/fasta_from_bed.nf \
-  -c src/nf_modules/bedtools/fasta_from_bed.config \
-  -profile singularity \
-  --fasta "data/tiny_dataset/fasta/tiny_v2.fasta" \
-  --bed "data/tiny_dataset/annot/tiny.bed" \
-  -resume
-fi
diff --git a/src/nf_modules/bioawk/main.nf b/src/nf_modules/bioawk/main.nf
new file mode 100644
index 0000000000000000000000000000000000000000..eaa5a4a2e50a1e9244f99ae6cc9fb10401bfb5a9
--- /dev/null
+++ b/src/nf_modules/bioawk/main.nf
@@ -0,0 +1,24 @@
+version = "1.0"
+container_url = "lbmc/bioawk:${version}"
+
+params.fasta_to_transcripts_lengths = ""
+params.fasta_to_transcripts_lengths_out = ""
+process fasta_to_transcripts_lengths {
+  container = "${container_url}"
+  label "big_mem_mono_cpus"
+  tag "$file_id"
+  if (params.fasta_to_transcripts_lengths_out != "") {
+    publishDir "results/${params.fasta_to_transcripts_lengths_out}", mode: 'copy'
+  }
+
+  input:
+    tuple val(file_id), path(fasta)
+
+  output:
+    tuple val(file_id), path("${fasta.simpleName}_transcripts_lengths.tsv"), emit: tsv
+
+  script:
+"""
+bioawk -c fastx '{print(\$name" "length(\$seq))}' ${fasta} > ${fasta.simpleName}_transcripts_lengths.tsv
+"""
+}
\ No newline at end of file
diff --git a/src/nf_modules/bioconvert/main.nf b/src/nf_modules/bioconvert/main.nf
new file mode 100644
index 0000000000000000000000000000000000000000..884ebb9ee693b27fca4cda7724ca3145476ec60f
--- /dev/null
+++ b/src/nf_modules/bioconvert/main.nf
@@ -0,0 +1,46 @@
+version = "0.4.0"
+container_url = "lbmc/bioconvert:${version}"
+params.bigwig_to_wig = ""
+params.bigwig_to_wig_out = ""
+process bigwig_to_wig {
+  container = "${container_url}"
+  label "big_mem_mono_cpus"
+  tag "${file_id}"
+  if (params.bigwig_to_wig_out != "") {
+    publishDir "results/${params.bigwig_to_wig_out}", mode: 'copy'
+  }
+
+  input:
+  tuple val(file_id), path(bw)
+
+  output:
+  tuple val(file_id), path("*.wig"), emit: wig
+
+  script:
+"""
+bioconvert bigwig2wiggle ${bw} ${bw.simpleName}.wig
+"""
+}
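+// usage sketch (the channel shape is illustrative):
+//   bigwig_to_wig(Channel.fromPath("*.bw").map{ [it.simpleName, it] })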
+
+params.bigwig2_to_wig2 = ""
+params.bigwig2_to_wig2_out = ""
+process bigwig2_to_wig2 {
+  container = "${container_url}"
+  label "big_mem_mono_cpus"
+  tag "${file_id}"
+  if (params.bigwig2_to_wig2_out != "") {
+    publishDir "results/${params.bigwig2_to_wig2_out}", mode: 'copy'
+  }
+
+  input:
+  tuple val(file_id), path(bw_a), path(bw_b)
+
+  output:
+  tuple val(file_id), path("${bw_a.simpleName}.wig"), path("${bw_b.simpleName}.wig"), emit: wig
+
+  script:
+"""
+bioconvert bigwig2wiggle ${bw_a} ${bw_a.simpleName}.wig
+bioconvert bigwig2wiggle ${bw_b} ${bw_b.simpleName}.wig
+"""
+}
\ No newline at end of file
diff --git a/src/nf_modules/bowtie/indexing.config b/src/nf_modules/bowtie/indexing.config
deleted file mode 100644
index 50dff96596a72ad4f568698ef4a80df9e0b85d1b..0000000000000000000000000000000000000000
--- a/src/nf_modules/bowtie/indexing.config
+++ /dev/null
@@ -1,56 +0,0 @@
-profiles {
-  docker {
-    docker.temp = "auto"
-    docker.enabled = true
-    process {
-      withLabel: bowtie {
-        cpus = 4
-        container = "lbmc/bowtie:1.2.2"
-      }
-    }
-  }
-  singularity {
-    singularity.enabled = true
-    singularity.cacheDir = "./bin/"
-    process {
-      withLabel: bowtie {
-        cpus = 4
-        container = "lbmc/bowtie:1.2.2"
-      }
-    }
-  }
-  psmn{
-    singularity.enabled = true
-    singularity.cacheDir = "/.singularity_psmn/"
-    singularity.runOptions = "--bind /Xnfs,/scratch"
-    process{
-      withLabel: bowtie {
-        container = "lbmc/bowtie:1.2.2"
-        executor = "sge"
-        clusterOptions = "-cwd -V"
-        memory = "20GB"
-        cpus = 32
-        time = "12h"
-        queue = "CLG6242deb384A,CLG6242deb384C,CLG5218deb192A,CLG5218deb192B,CLG5218deb192C,CLG5218deb192D,SLG5118deb96,SLG6142deb384A,SLG6142deb384B,SLG6142deb384C,SLG6142deb384D"
-        penv = "openmp32"
-      }
-    }
-  }
-  ccin2p3 {
-    singularity.enabled = true
-    singularity.cacheDir = "$baseDir/.singularity_in2p3/"
-    singularity.runOptions = "--bind /pbs,/sps,/scratch"
-    process{
-      withLabel: bowtie {
-        container = "lbmc/bowtie:1.2.2"
-        scratch = true
-        stageInMode = "copy"
-        stageOutMode = "rsync"
-        executor = "sge"
-        clusterOptions = "-P P_lbmc -l os=cl7 -l sps=1 -r n"
-        cpus = 1
-        queue = "huge"
-      }
-    }
-  }
-}
diff --git a/src/nf_modules/bowtie/indexing.nf b/src/nf_modules/bowtie/indexing.nf
deleted file mode 100644
index d09a5de8676bc85e43a0ac76a7dbcdbc5df50f9e..0000000000000000000000000000000000000000
--- a/src/nf_modules/bowtie/indexing.nf
+++ /dev/null
@@ -1,33 +0,0 @@
-/*                      fasta indexing                                     */
-
-params.fasta = "$baseDir/data/bam/*.fasta"
-
-log.info "fasta files : ${params.fasta}"
-
-Channel
-  .fromPath( params.fasta )
-  .ifEmpty { error "Cannot find any bam files matching: ${params.fasta}" }
-  .set { fasta_file }
-
-process index_fasta {
-  tag "$fasta.baseName"
-  publishDir "results/mapping/index/", mode: 'copy'
-  label "bowtie"
-
-  input:
-    file fasta from fasta_file
-
-  output:
-    file "*.index*" into index_files
-    file "*_report.txt" into indexing_report
-
-  script:
-"""
-bowtie-build --threads ${task.cpus} -f ${fasta} ${fasta.baseName}.index &> ${fasta.baseName}_bowtie_report.txt
-
-if grep -q "Error" ${fasta.baseName}_bowtie_report.txt; then
-  exit 1
-fi
-"""
-}
-
diff --git a/src/nf_modules/bowtie/main.nf b/src/nf_modules/bowtie/main.nf
index d250e21f754a9ecb9c9c1bb84d174feb8b528fdd..a841fc36195dabcf2bb238431680f9e0677aa701 100644
--- a/src/nf_modules/bowtie/main.nf
+++ b/src/nf_modules/bowtie/main.nf
@@ -1,21 +1,27 @@
 version = "1.2.2"
 container_url = "lbmc/bowtie:${version}"
 
+params.index_fasta = ""
+params.index_fasta_out = ""
 process index_fasta {
   container = "${container_url}"
   label "big_mem_multi_cpus"
-  tag "$fasta.baseName"
+  tag "$file_id"
+  if (params.index_fasta_out != "") {
+    publishDir "results/${params.index_fasta_out}", mode: 'copy'
+  }
 
   input:
-    path fasta
+    tuple val(file_id), path(fasta)
 
   output:
-    path "*.index*", emit: index
-    path "*_report.txt", emit: report
+    tuple val(file_id), path("*.index*"), emit: index
+    tuple val(file_id), path("*_report.txt"), emit: report
 
   script:
 """
 bowtie-build --threads ${task.cpus} \
+  ${params.index_fasta} \
   -f ${fasta} ${fasta.baseName}.index &> \
   ${fasta.baseName}_bowtie_index_report.txt
 
@@ -25,56 +31,69 @@ fi
 """
 }
 
+params.mapping_fastq = "--very-sensitive"
+params.mapping_fastq_out = ""
 process mapping_fastq {
   container = "${container_url}"
   label "big_mem_multi_cpus"
   tag "$pair_id"
+  if (params.mapping_fastq_out != "") {
+    publishDir "results/${params.mapping_fastq_out}", mode: 'copy'
+  }
 
   input:
-  path index
-  tuple val(pair_id), path(reads)
+  tuple val(index_id), path(index)
+  tuple val(file_id), path(reads)
 
   output:
-  tuple val(pair_id), path("*.bam"), emit: bam
+  tuple val(file_id), path("*.bam"), emit: bam
   path "*_report.txt", emit: report
 
   script:
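+  // file_id may be a list (e.g. [pair_id, ...] from fromFilePairs); use its
+  // first element as the output file prefix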
+  if (file_id instanceof List){
+    file_prefix = file_id[0]
+  } else {
+    file_prefix = file_id
+  }
   index_id = index[0]
   for (index_file in index) {
     if (index_file =~ /.*\.1\.bt2/ && !(index_file =~ /.*\.rev\.1\.bt2/)) {
         index_id = ( index_file =~ /(.*)\.1\.bt2/)[0][1]
     }
   }
-if (reads instanceof List)
-"""
-# -v specify the max number of missmatch, -k the number of match reported per
-# reads
-bowtie --best -v 3 -k 1 --sam -p ${task.cpus} ${index_id} \
-  -1 ${reads[0]} -2 ${reads[1]} 2> \
-  ${pair_id}_bowtie_report_tmp.txt | \
-  samtools view -Sb - > ${pair_id}.bam
-
-if grep -q "Error" ${pair_id}_bowtie_report_tmp.txt; then
-  exit 1
-fi
-tail -n 19 ${pair_id}_bowtie_report_tmp.txt > \
-  ${pair_id}_bowtie_mapping_report.txt
-"""
-else
-"""
-bowtie --best -v 3 -k 1 --sam -p ${task.cpus} ${index_id} \
-  -q ${reads} 2> \
-  ${file_id}_bowtie_report_tmp.txt | \
-  samtools view -Sb - > ${file_id}.bam
-
-if grep -q "Error" ${file_id}_bowtie_report_tmp.txt; then
-  exit 1
-fi
-tail -n 19 ${file_id}_bowtie_report_tmp.txt > \
-  ${file_id}_bowtie_mapping_report.txt
-"""
+  if (reads.size() == 2)
+  """
+  # -v sets the maximum number of mismatches, -k the number of matches
+  # reported per read
+  bowtie --best -v 3 -k 1 --sam -p ${task.cpus} ${index_id} \
+    ${params.mapping_fastq} \
+    -1 ${reads[0]} -2 ${reads[1]} 2> \
+    ${file_prefix}_bowtie_report_tmp.txt | \
+    samtools view -Sb - > ${file_prefix}.bam
+
+  if grep -q "Error" ${file_prefix}_bowtie_report_tmp.txt; then
+    exit 1
+  fi
+  tail -n 19 ${file_prefix}_bowtie_report_tmp.txt > \
+    ${file_prefix}_bowtie_mapping_report.txt
+  """
+  else
+  """
+  bowtie --best -v 3 -k 1 --sam -p ${task.cpus} ${index_id} \
+    ${params.mapping_fastq} \
+    -q ${reads} 2> \
+    ${file_prefix}_bowtie_report_tmp.txt | \
+    samtools view -Sb - > ${file_prefix}.bam
+
+  if grep -q "Error" ${file_prefix}_bowtie_report_tmp.txt; then
+    exit 1
+  fi
+  tail -n 19 ${file_prefix}_bowtie_report_tmp.txt > \
+    ${file_prefix}_bowtie_mapping_report.txt
+  """
 }
 
+params.mapping_fastq_pairedend = ""
 process mapping_fastq_pairedend {
   container = "${container_url}"
   label "big_mem_multi_cpus"
@@ -99,6 +118,7 @@ process mapping_fastq_pairedend {
 # -v specify the max number of missmatch, -k the number of match reported per
 # reads
 bowtie --best -v 3 -k 1 --sam -p ${task.cpus} ${index_id} \
+  ${params.mapping_fastq_pairedend} \
   -1 ${reads[0]} -2 ${reads[1]} 2> \
   ${pair_id}_bowtie_report_tmp.txt | \
   samtools view -Sb - > ${pair_id}.bam
@@ -111,7 +131,7 @@ tail -n 19 ${pair_id}_bowtie_report_tmp.txt > \
 """
 }
 
-
+params.mapping_fastq_singleend = ""
 process mapping_fastq_singleend {
   container = "${container_url}"
   label "big_mem_multi_cpus"
@@ -134,6 +154,7 @@ process mapping_fastq_singleend {
   }
 """
 bowtie --best -v 3 -k 1 --sam -p ${task.cpus} ${index_id} \
+  ${params.mapping_fastq_singleend} \
   -q ${reads} 2> \
   ${file_id}_bowtie_report_tmp.txt | \
   samtools view -Sb - > ${file_id}.bam
diff --git a/src/nf_modules/bowtie/mapping_paired.config b/src/nf_modules/bowtie/mapping_paired.config
deleted file mode 100644
index 1d924e9b23430cc81c0f1576ad9d6bf78ebf755e..0000000000000000000000000000000000000000
--- a/src/nf_modules/bowtie/mapping_paired.config
+++ /dev/null
@@ -1,56 +0,0 @@
-profiles {
-  docker {
-    docker.temp = "auto"
-    docker.enabled = true
-    process {
-      withLabel: bowtie {
-        container = "lbmc/bowtie:1.2.2"
-        cpus = 4
-      }
-    }
-  }
-  singularity {
-    singularity.enabled = true
-    singularity.cacheDir = "./bin/"
-    process {
-      withLabel: bowtie {
-        cpus = 4
-        container = "lbmc/bowtie:1.2.2"
-      }
-    }
-  }
-  psmn{
-    singularity.enabled = true
-    singularity.cacheDir = "/.singularity_psmn/"
-    singularity.runOptions = "--bind /Xnfs,/scratch"
-    process{
-      withLabel: bowtie {
-        container = "lbmc/bowtie:1.2.2"
-        executor = "sge"
-        clusterOptions = "-cwd -V"
-        cpus = 32
-        memory = "30GB"
-        time = "24h"
-        queue = "CLG6242deb384A,CLG6242deb384C,CLG5218deb192A,CLG5218deb192B,CLG5218deb192C,CLG5218deb192D,SLG5118deb96,SLG6142deb384A,SLG6142deb384B,SLG6142deb384C,SLG6142deb384D"
-        penv = "openmp32"
-      }
-    }
-  }
-  ccin2p3 {
-    singularity.enabled = true
-    singularity.cacheDir = "$baseDir/.singularity_in2p3/"
-    singularity.runOptions = "--bind /pbs,/sps,/scratch"
-    process{
-      withLabel: bowtie {
-        container = "lbmc/bowtie:1.2.2"
-        scratch = true
-        stageInMode = "copy"
-        stageOutMode = "rsync"
-        executor = "sge"
-        clusterOptions = "-P P_lbmc -l os=cl7 -l sps=1 -r n"
-        cpus = 1
-        queue = "huge"
-      }
-    }
-  }
-}
diff --git a/src/nf_modules/bowtie/mapping_paired.nf b/src/nf_modules/bowtie/mapping_paired.nf
deleted file mode 100644
index 6357a8b69de5cc083dba8451964c1848a939e0c2..0000000000000000000000000000000000000000
--- a/src/nf_modules/bowtie/mapping_paired.nf
+++ /dev/null
@@ -1,55 +0,0 @@
-/*
-* mapping paired fastq
-*/
-
-params.fastq = "$baseDir/data/fastq/*_{1,2}.fastq"
-params.index = "$baseDir/data/index/*.index.*"
-
-log.info "fastq files : ${params.fastq}"
-log.info "index files : ${params.index}"
-
-Channel
-  .fromFilePairs( params.fastq )
-  .ifEmpty { error "Cannot find any fastq files matching: ${params.fastq}" }
-  .set { fastq_files }
-Channel
-  .fromPath( params.index )
-  .ifEmpty { error "Cannot find any index files matching: ${params.index}" }
-  .set { index_files }
-
-process mapping_fastq {
-  tag "$pair_id"
-  publishDir "results/mapping/bams/", mode: 'copy'
-  label "bowtie"
-
-  input:
-  set pair_id, file(reads) from fastq_files
-  file index from index_files.collect()
-
-  output:
-  file "*.bam" into bam_files
-  file "*_report.txt" into mapping_report
-
-  script:
-  index_id = index[0]
-  for (index_file in index) {
-  if (index_file =~ /.*\.1\.ebwt/ && !(index_file =~ /.*\.rev\.1\.ebwt/)) {
-        index_id = ( index_file =~ /(.*)\.1\.ebwt/)[0][1]
-    }
-  }
-"""
-# -v specify the max number of missmatch, -k the number of match reported per
-# reads
-bowtie --best -v 3 -k 1 --sam -p ${task.cpus} ${index_id} \
--1 ${reads[0]} -2 ${reads[1]} 2> \
-${pair_id}_bowtie_report_tmp.txt | \
-samtools view -Sb - > ${pair_id}.bam
-
-if grep -q "Error" ${pair_id}_bowtie_report_tmp.txt; then
-  exit 1
-fi
-tail -n 19 ${pair_id}_bowtie_report_tmp.txt > ${pair_id}_bowtie_report.txt
-"""
-}
-
-
diff --git a/src/nf_modules/bowtie/mapping_single.config b/src/nf_modules/bowtie/mapping_single.config
deleted file mode 100644
index 1d924e9b23430cc81c0f1576ad9d6bf78ebf755e..0000000000000000000000000000000000000000
--- a/src/nf_modules/bowtie/mapping_single.config
+++ /dev/null
@@ -1,56 +0,0 @@
-profiles {
-  docker {
-    docker.temp = "auto"
-    docker.enabled = true
-    process {
-      withLabel: bowtie {
-        container = "lbmc/bowtie:1.2.2"
-        cpus = 4
-      }
-    }
-  }
-  singularity {
-    singularity.enabled = true
-    singularity.cacheDir = "./bin/"
-    process {
-      withLabel: bowtie {
-        cpus = 4
-        container = "lbmc/bowtie:1.2.2"
-      }
-    }
-  }
-  psmn{
-    singularity.enabled = true
-    singularity.cacheDir = "/.singularity_psmn/"
-    singularity.runOptions = "--bind /Xnfs,/scratch"
-    process{
-      withLabel: bowtie {
-        container = "lbmc/bowtie:1.2.2"
-        executor = "sge"
-        clusterOptions = "-cwd -V"
-        cpus = 32
-        memory = "30GB"
-        time = "24h"
-        queue = "CLG6242deb384A,CLG6242deb384C,CLG5218deb192A,CLG5218deb192B,CLG5218deb192C,CLG5218deb192D,SLG5118deb96,SLG6142deb384A,SLG6142deb384B,SLG6142deb384C,SLG6142deb384D"
-        penv = "openmp32"
-      }
-    }
-  }
-  ccin2p3 {
-    singularity.enabled = true
-    singularity.cacheDir = "$baseDir/.singularity_in2p3/"
-    singularity.runOptions = "--bind /pbs,/sps,/scratch"
-    process{
-      withLabel: bowtie {
-        container = "lbmc/bowtie:1.2.2"
-        scratch = true
-        stageInMode = "copy"
-        stageOutMode = "rsync"
-        executor = "sge"
-        clusterOptions = "-P P_lbmc -l os=cl7 -l sps=1 -r n"
-        cpus = 1
-        queue = "huge"
-      }
-    }
-  }
-}
diff --git a/src/nf_modules/bowtie/mapping_single.nf b/src/nf_modules/bowtie/mapping_single.nf
deleted file mode 100644
index ac28719368de4e9055c98bf138ae33df81708004..0000000000000000000000000000000000000000
--- a/src/nf_modules/bowtie/mapping_single.nf
+++ /dev/null
@@ -1,51 +0,0 @@
-/*
-* mapping single end fastq
-*/
-
-params.fastq = "$baseDir/data/fastq/*.fastq"
-
-log.info "fastq files : ${params.fastq}"
-log.info "index files : ${params.index}"
-
-Channel
-  .fromPath( params.fastq )
-  .ifEmpty { error "Cannot find any fastq files matching: ${params.fastq}" }
-  .map { it -> [(it.baseName =~ /([^\.]*)/)[0][1], it]}
-  .set { fastq_files }
-Channel
-  .fromPath( params.index )
-  .ifEmpty { error "Cannot find any index files matching: ${params.index}" }
-  .set { index_files }
-
-process mapping_fastq {
-  tag "$file_id"
-  publishDir "results/mapping/bams/", mode: 'copy'
-  label "bowtie"
-
-  input:
-  set file_id, file(reads) from fastq_files
-  file index from index_files.collect()
-
-  output:
-  set file_id, "*.bam" into bam_files
-  file "*_report.txt" into mapping_report
-
-  script:
-index_id = index[0]
-for (index_file in index) {
-  if (index_file =~ /.*\.1\.ebwt/ && !(index_file =~ /.*\.rev\.1\.ebwt/)) {
-      index_id = ( index_file =~ /(.*)\.1\.ebwt/)[0][1]
-  }
-}
-"""
-bowtie --best -v 3 -k 1 --sam -p ${task.cpus} ${index_id} \
--q ${reads} 2> \
-${file_id}_bowtie_report_tmp.txt | \
-samtools view -Sb - > ${file_id}.bam
-
-if grep -q "Error" ${file_id}_bowtie_report_tmp.txt; then
-  exit 1
-fi
-tail -n 19 ${file_id}_bowtie_report_tmp.txt > ${file_id}_bowtie_report.txt
-"""
-}
diff --git a/src/nf_modules/bowtie/tests.sh b/src/nf_modules/bowtie/tests.sh
deleted file mode 100755
index a30529182addf06c272ca86f9d438d0647b0f49e..0000000000000000000000000000000000000000
--- a/src/nf_modules/bowtie/tests.sh
+++ /dev/null
@@ -1,41 +0,0 @@
-./nextflow src/nf_modules/bowtie/indexing.nf \
-  -c src/nf_modules/bowtie/indexing.config \
-  -profile docker \
-  --fasta "data/tiny_dataset/fasta/tiny_v2.fasta" \
-  -resume
-
-./nextflow src/nf_modules/bowtie/mapping_single.nf \
-  -c src/nf_modules/bowtie/mapping_single.config \
-  -profile docker \
-  --index "results/mapping/index/*.ebwt" \
-  --fastq "data/tiny_dataset/fastq/tiny*_S.fastq" \
-  -resume
-
-./nextflow src/nf_modules/bowtie/mapping_paired.nf \
-  -c src/nf_modules/bowtie/mapping_paired.config \
-  -profile docker \
-  --index "results/mapping/index/*.ebwt" \
-  --fastq "data/tiny_dataset/fastq/tiny*_R{1,2}.fastq" \
-  -resume
-
-if [ -x "$(command -v singularity)" ]; then
-./nextflow src/nf_modules/bowtie/indexing.nf \
-  -c src/nf_modules/bowtie/indexing.config \
-  -profile singularity \
-  --fasta "data/tiny_dataset/fasta/tiny_v2.fasta" \
-  -resume
-
-./nextflow src/nf_modules/bowtie/mapping_single.nf \
-  -c src/nf_modules/bowtie/mapping_single.config \
-  -profile singularity \
-  --index "results/mapping/index/*.ebwt" \
-  --fastq "data/tiny_dataset/fastq/tiny*_S.fastq" \
-  -resume
-
-./nextflow src/nf_modules/bowtie/mapping_paired.nf \
-  -c src/nf_modules/bowtie/mapping_paired.config \
-  -profile singularity \
-  --index "results/mapping/index/*.ebwt" \
-  --fastq "data/tiny_dataset/fastq/tiny*_R{1,2}.fastq" \
-  -resume
-fi
diff --git a/src/nf_modules/bowtie2/indexing.config b/src/nf_modules/bowtie2/indexing.config
deleted file mode 100644
index 5a487f8e02c0d48dc586f68c1de1616e87669dd6..0000000000000000000000000000000000000000
--- a/src/nf_modules/bowtie2/indexing.config
+++ /dev/null
@@ -1,56 +0,0 @@
-profiles {
-  docker {
-    docker.temp = "auto"
-    docker.enabled = true
-    process {
-      withLabel: bowtie2 {
-        container = "lbmc/bowtie2:2.3.4.1"
-        cpus = 4
-      }
-    }
-  }
-  singularity {
-    singularity.enabled = true
-    singularity.cacheDir = "./bin/"
-    process {
-      withLabel: bowtie2 {
-        container = "lbmc/bowtie2:2.3.4.1"
-        cpus = 4
-      }
-    }
-  }
-  psmn{
-    singularity.enabled = true
-    singularity.cacheDir = "/.singularity_psmn/"
-    singularity.runOptions = "--bind /Xnfs,/scratch"
-    process{
-      withLabel: bowtie2 {
-        container = "lbmc/bowtie2:2.3.4.1"
-        executor = "sge"
-        clusterOptions = "-cwd -V"
-        cpus = 32
-        memory = "20GB"
-        time = "12h"
-        queue = "CLG6242deb384A,CLG6242deb384C,CLG5218deb192A,CLG5218deb192B,CLG5218deb192C,CLG5218deb192D,SLG5118deb96,SLG6142deb384A,SLG6142deb384B,SLG6142deb384C,SLG6142deb384D"
-        penv = "openmp32"
-      }
-    }
-  }
-  ccin2p3 {
-    singularity.enabled = true
-    singularity.cacheDir = "$baseDir/.singularity_in2p3/"
-    singularity.runOptions = "--bind /pbs,/sps,/scratch"
-    process{
-      withLabel: bowtie2 {
-        container = "lbmc/bowtie2:2.3.4.1"
-        scratch = true
-        stageInMode = "copy"
-        stageOutMode = "rsync"
-        executor = "sge"
-        clusterOptions = "-P P_lbmc -l os=cl7 -l sps=1 -r n"
-        cpus = 1
-        queue = "huge"
-      }
-    }
-  }
-}
diff --git a/src/nf_modules/bowtie2/indexing.nf b/src/nf_modules/bowtie2/indexing.nf
deleted file mode 100644
index e1ec9ae97a0e26495e674b9319960eb363fd33fa..0000000000000000000000000000000000000000
--- a/src/nf_modules/bowtie2/indexing.nf
+++ /dev/null
@@ -1,32 +0,0 @@
-params.fasta = "$baseDir/data/bam/*.fasta"
-
-log.info "fasta files : ${params.fasta}"
-
-Channel
-  .fromPath( params.fasta )
-  .ifEmpty { error "Cannot find any bam files matching: ${params.fasta}" }
-  .set { fasta_file }
-
-process index_fasta {
-  tag "$fasta.baseName"
-  publishDir "results/mapping/index/", mode: 'copy'
-  label "bowtie2"
-
-  input:
-    file fasta from fasta_file
-
-  output:
-    file "*.index*" into index_files
-    file "*_report.txt" into indexing_report
-
-  script:
-"""
-bowtie2-build --threads ${task.cpus} ${fasta} ${fasta.baseName}.index &> ${fasta.baseName}_bowtie2_report.txt
-
-if grep -q "Error" ${fasta.baseName}_bowtie2_report.txt; then
-  exit 1
-fi
-"""
-}
-
-
diff --git a/src/nf_modules/bowtie2/main.nf b/src/nf_modules/bowtie2/main.nf
index 02d2663540b393f17b935bb4f4f0623bb5e2b2f3..3a0fc967f381ee84f40cd7a4ac3887bcb32d70ed 100644
--- a/src/nf_modules/bowtie2/main.nf
+++ b/src/nf_modules/bowtie2/main.nf
@@ -1,130 +1,48 @@
 version = "2.3.4.1"
 container_url = "lbmc/bowtie2:${version}"
 
+params.index_fasta = ""
+params.index_fasta_out = ""
 process index_fasta {
   container = "${container_url}"
   label "big_mem_multi_cpus"
-  tag "$fasta.baseName"
+  tag "$file_id"
+  if (params.index_fasta_out != "") {
+    publishDir "results/${params.index_fasta_out}", mode: 'copy'
+  }
 
   input:
-    path fasta
+    tuple val(file_id), path(fasta)
 
   output:
-    path "*.index*", emit: index
-    path "*_report.txt", emit: report
+    tuple val(file_id), path("*.bt2"), emit: index
+    tuple val(file_id), path("*_report.txt"), emit: report
 
   script:
 """
 bowtie2-build --threads ${task.cpus} \
   ${fasta} \
-  ${fasta.baseName}.index &> \
-  ${fasta.baseName}_bowtie2_index_report.txt
+  ${fasta.simpleName} &> \
+  ${fasta.simpleName}_bowtie2_index_report.txt
 
-if grep -q "Error" ${fasta.baseName}_bowtie2_index_report.txt; then
+if grep -q "Error" ${fasta.simpleName}_bowtie2_index_report.txt; then
   exit 1
 fi
 """
 }
 
-
+params.mapping_fastq = "--very-sensitive"
+params.mapping_fastq_out = ""
 process mapping_fastq {
-  container = "${container_url}"
-  label "big_mem_multi_cpus"
-  tag "$pair_id"
-
-  input:
-  path index
-  tuple val(pair_id), path(reads)
-
-  output:
-  tuple val(pair_id), path("*.bam"), emit: bam
-  path "*_report.txt", emit: report
-
-  script:
-  index_id = index[0]
-  for (index_file in index) {
-    if (index_file =~ /.*\.1\.bt2/ && !(index_file =~ /.*\.rev\.1\.bt2/)) {
-        index_id = ( index_file =~ /(.*)\.1\.bt2/)[0][1]
-    }
-  }
-if (reads instanceof List)
-"""
-bowtie2 --very-sensitive \
-  -p ${task.cpus} \
-  -x ${index_id} \
-  -1 ${reads[0]} \
-  -2 ${reads[1]} 2> \
-  ${pair_id}_bowtie2_mapping_report_tmp.txt | \
-  samtools view -Sb - > ${pair_id}.bam
-
-if grep -q "Error" ${pair_id}_bowtie2_mapping_report_tmp.txt; then
-  exit 1
-fi
-tail -n 19 ${pair_id}_bowtie2_mapping_report_tmp.txt > \
-  ${pair_id}_bowtie2_mapping_report.txt
-"""
-else
-"""
-bowtie2 --very-sensitive \
-  -p ${task.cpus} \
-  -x ${index_id} \
-  -U ${reads} 2> \
-  ${reads.baseName}_bowtie2_mapping_report_tmp.txt | \
-  samtools view -Sb - > ${reads.baseName}.bam
-
-if grep -q "Error" ${reads.baseName}_bowtie2_mapping_report_tmp.txt; then
-  exit 1
-fi
-tail -n 19 ${reads.baseName}_bowtie2_mapping_report_tmp.txt > \
-  ${reads.baseName}_bowtie2_mapping_report.txt
-"""
-}
-
-process mapping_fastq_pairedend {
-  container = "${container_url}"
-  label "big_mem_multi_cpus"
-  tag "$pair_id"
-
-  input:
-  path index
-  tuple val(pair_id), path(reads)
-
-  output:
-  tuple val(pair_id), path("*.bam"), emit: bam
-  path "*_report.txt", emit: report
-
-  script:
-  index_id = index[0]
-  for (index_file in index) {
-    if (index_file =~ /.*\.1\.bt2/ && !(index_file =~ /.*\.rev\.1\.bt2/)) {
-        index_id = ( index_file =~ /(.*)\.1\.bt2/)[0][1]
-    }
-  }
-"""
-bowtie2 --very-sensitive \
-  -p ${task.cpus} \
-  -x ${index_id} \
-  -1 ${reads[0]} \
-  -2 ${reads[1]} 2> \
-  ${pair_id}_bowtie2_mapping_report_tmp.txt | \
-  samtools view -Sb - > ${pair_id}.bam
-
-if grep -q "Error" ${pair_id}_bowtie2_mapping_report_tmp.txt; then
-  exit 1
-fi
-tail -n 19 ${pair_id}_bowtie2_mapping_report_tmp.txt > \
-  ${pair_id}_bowtie2_mapping_report.txt
-"""
-}
-
-
-process mapping_fastq_singleend {
   container = "${container_url}"
   label "big_mem_multi_cpus"
   tag "$file_id"
+  if (params.mapping_fastq_out != "") {
+    publishDir "results/${params.mapping_fastq_out}", mode: 'copy'
+  }
 
   input:
-  path index
+  tuple val(index_id), path(index)
   tuple val(file_id), path(reads)
 
   output:
@@ -138,18 +56,47 @@ process mapping_fastq_singleend {
         index_id = ( index_file =~ /(.*)\.1\.bt2/)[0][1]
     }
   }
-"""
-bowtie2 --very-sensitive \
-  -p ${task.cpus} \
-  -x ${index_id} \
-  -U ${reads} 2> \
-  ${reads.baseName}_bowtie2_mapping_report_tmp.txt | \
-  samtools view -Sb - > ${reads.baseName}.bam
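+  // derive a printable file prefix from file_id, which may be a plain value,
+  // a list or a map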
+  switch(file_id) {
+    case {it instanceof List}:
+      file_prefix = file_id[0]
+    break
+    case {it instanceof Map}:
+      file_prefix = file_id.values()[0]
+    break
+    default:
+      file_prefix = file_id
+    break
+  }
 
-if grep -q "Error" ${reads.baseName}_bowtie2_mapping_report_tmp.txt; then
-  exit 1
-fi
-tail -n 19 ${reads.baseName}_bowtie2_mapping_report_tmp.txt > \
-  ${reads.baseName}_bowtie2_mapping_report.txt
-"""
+  if (reads.size() == 2)
+  """
+  bowtie2 ${params.mapping_fastq} \
+    -p ${task.cpus} \
+    -x ${index_id} \
+    -1 ${reads[0]} \
+    -2 ${reads[1]} 2> \
+    ${file_prefix}_bowtie2_mapping_report_tmp.txt | \
+    samtools view -Sb - > ${file_prefix}.bam
+
+  if grep -q "Error" ${file_prefix}_bowtie2_mapping_report_tmp.txt; then
+    exit 1
+  fi
+  tail -n 19 ${file_prefix}_bowtie2_mapping_report_tmp.txt > \
+    ${file_prefix}_bowtie2_mapping_report.txt
+  """
+  else
+  """
+  bowtie2 ${params.mapping_fastq} \
+    -p ${task.cpus} \
+    -x ${index_id} \
+    -U ${reads} 2> \
+    ${file_prefix}_bowtie2_mapping_report_tmp.txt | \
+    samtools view -Sb - > ${file_prefix}.bam
+
+  if grep -q "Error" ${file_prefix}_bowtie2_mapping_report_tmp.txt; then
+    exit 1
+  fi
+  tail -n 19 ${file_prefix}_bowtie2_mapping_report_tmp.txt > \
+    ${file_prefix}_bowtie2_mapping_report.txt
+  """
 }
diff --git a/src/nf_modules/bowtie2/mapping_paired.config b/src/nf_modules/bowtie2/mapping_paired.config
deleted file mode 100644
index addc6b2e3722f35ceffda826c6121f89dc4761ee..0000000000000000000000000000000000000000
--- a/src/nf_modules/bowtie2/mapping_paired.config
+++ /dev/null
@@ -1,56 +0,0 @@
-profiles {
-  docker {
-    docker.temp = "auto"
-    docker.enabled = true
-    process {
-      withLabel: bowtie2 {
-        container = "lbmc/bowtie2:2.3.4.1"
-        cpus = 4
-      }
-    }
-  }
-  singularity {
-    singularity.enabled = true
-    singularity.cacheDir = "./bin/"
-    process {
-      withLabel: bowtie2 {
-        container = "lbmc/bowtie2:2.3.4.1"
-        cpus = 4
-      }
-    }
-  }
-  psmn{
-    singularity.enabled = true
-    singularity.cacheDir = "/.singularity_psmn/"
-    singularity.runOptions = "--bind /Xnfs,/scratch"
-    process{
-      withLabel: bowtie2 {
-        container = "lbmc/bowtie2:2.3.4.1"
-        executor = "sge"
-        clusterOptions = "-cwd -V"
-        cpus = 32
-        memory = "30GB"
-        time = "24h"
-        queue = "CLG6242deb384A,CLG6242deb384C,CLG5218deb192A,CLG5218deb192B,CLG5218deb192C,CLG5218deb192D,SLG5118deb96,SLG6142deb384A,SLG6142deb384B,SLG6142deb384C,SLG6142deb384D"
-        penv = "openmp32"
-      }
-    }
-  }
-  ccin2p3 {
-    singularity.enabled = true
-    singularity.cacheDir = "$baseDir/.singularity_in2p3/"
-    singularity.runOptions = "--bind /pbs,/sps,/scratch"
-    process{
-      withLabel: bowtie2 {
-        container = "lbmc/bowtie2:2.3.4.1"
-        scratch = true
-        stageInMode = "copy"
-        stageOutMode = "rsync"
-        executor = "sge"
-        clusterOptions = "-P P_lbmc -l os=cl7 -l sps=1 -r n"
-        cpus = 1
-        queue = "huge"
-      }
-    }
-  }
-}
diff --git a/src/nf_modules/bowtie2/mapping_paired.nf b/src/nf_modules/bowtie2/mapping_paired.nf
deleted file mode 100644
index 96c209fdb0f8aca424adf6a5ef8cd7a2a23a3912..0000000000000000000000000000000000000000
--- a/src/nf_modules/bowtie2/mapping_paired.nf
+++ /dev/null
@@ -1,48 +0,0 @@
-params.fastq = "$baseDir/data/fastq/*_{1,2}.fastq"
-params.index = "$baseDir/data/index/*.index.*"
-
-log.info "fastq files : ${params.fastq}"
-log.info "index files : ${params.index}"
-
-Channel
-  .fromFilePairs( params.fastq )
-  .ifEmpty { error "Cannot find any fastq files matching: ${params.fastq}" }
-  .set { fastq_files }
-Channel
-  .fromPath( params.index )
-  .ifEmpty { error "Cannot find any index files matching: ${params.index}" }
-  .set { index_files }
-
-process mapping_fastq {
-  tag "$pair_id"
-  publishDir "results/mapping/bams/", mode: 'copy'
-  label "bowtie2"
-
-  input:
-  set pair_id, file(reads) from fastq_files
-  file index from index_files.collect()
-
-  output:
-  set pair_id, "*.bam" into bam_files
-  file "*_report.txt" into mapping_report
-
-  script:
-  index_id = index[0]
-  for (index_file in index) {
-    if (index_file =~ /.*\.1\.bt2/ && !(index_file =~ /.*\.rev\.1\.bt2/)) {
-        index_id = ( index_file =~ /(.*)\.1\.bt2/)[0][1]
-    }
-  }
-"""
-bowtie2 --very-sensitive -p ${task.cpus} -x ${index_id} \
--1 ${reads[0]} -2 ${reads[1]} 2> \
-${pair_id}_bowtie2_report_tmp.txt | \
-samtools view -Sb - > ${pair_id}.bam
-
-if grep -q "Error" ${pair_id}_bowtie2_report_tmp.txt; then
-  exit 1
-fi
-tail -n 19 ${pair_id}_bowtie2_report_tmp.txt > ${pair_id}_bowtie2_report.txt
-"""
-}
-
diff --git a/src/nf_modules/bowtie2/mapping_single.config b/src/nf_modules/bowtie2/mapping_single.config
deleted file mode 100644
index addc6b2e3722f35ceffda826c6121f89dc4761ee..0000000000000000000000000000000000000000
--- a/src/nf_modules/bowtie2/mapping_single.config
+++ /dev/null
@@ -1,56 +0,0 @@
-profiles {
-  docker {
-    docker.temp = "auto"
-    docker.enabled = true
-    process {
-      withLabel: bowtie2 {
-        container = "lbmc/bowtie2:2.3.4.1"
-        cpus = 4
-      }
-    }
-  }
-  singularity {
-    singularity.enabled = true
-    singularity.cacheDir = "./bin/"
-    process {
-      withLabel: bowtie2 {
-        container = "lbmc/bowtie2:2.3.4.1"
-        cpus = 4
-      }
-    }
-  }
-  psmn{
-    singularity.enabled = true
-    singularity.cacheDir = "/.singularity_psmn/"
-    singularity.runOptions = "--bind /Xnfs,/scratch"
-    process{
-      withLabel: bowtie2 {
-        container = "lbmc/bowtie2:2.3.4.1"
-        executor = "sge"
-        clusterOptions = "-cwd -V"
-        cpus = 32
-        memory = "30GB"
-        time = "24h"
-        queue = "CLG6242deb384A,CLG6242deb384C,CLG5218deb192A,CLG5218deb192B,CLG5218deb192C,CLG5218deb192D,SLG5118deb96,SLG6142deb384A,SLG6142deb384B,SLG6142deb384C,SLG6142deb384D"
-        penv = "openmp32"
-      }
-    }
-  }
-  ccin2p3 {
-    singularity.enabled = true
-    singularity.cacheDir = "$baseDir/.singularity_in2p3/"
-    singularity.runOptions = "--bind /pbs,/sps,/scratch"
-    process{
-      withLabel: bowtie2 {
-        container = "lbmc/bowtie2:2.3.4.1"
-        scratch = true
-        stageInMode = "copy"
-        stageOutMode = "rsync"
-        executor = "sge"
-        clusterOptions = "-P P_lbmc -l os=cl7 -l sps=1 -r n"
-        cpus = 1
-        queue = "huge"
-      }
-    }
-  }
-}
diff --git a/src/nf_modules/bowtie2/mapping_single.nf b/src/nf_modules/bowtie2/mapping_single.nf
deleted file mode 100644
index 69db77e2f7fb40c2bc00ba663619ef6640b34df3..0000000000000000000000000000000000000000
--- a/src/nf_modules/bowtie2/mapping_single.nf
+++ /dev/null
@@ -1,47 +0,0 @@
-params.fastq = "$baseDir/data/fastq/*.fastq"
-
-log.info "fastq files : ${params.fastq}"
-log.info "index files : ${params.index}"
-
-Channel
-  .fromPath( params.fastq )
-  .ifEmpty { error "Cannot find any fastq files matching: ${params.fastq}" }
-  .map { it -> [(it.baseName =~ /([^\.]*)/)[0][1], it]}
-  .set { fastq_files }
-Channel
-  .fromPath( params.index )
-  .ifEmpty { error "Cannot find any index files matching: ${params.index}" }
-  .set { index_files }
-
-process mapping_fastq {
-  tag "$file_id"
-  publishDir "results/mapping/bams/", mode: 'copy'
-  label "bowtie2"
-
-  input:
-  set file_id, file(reads) from fastq_files
-  file index from index_files.collect()
-
-  output:
-  set file_id, "*.bam" into bam_files
-  file "*_report.txt" into mapping_report
-
-  script:
-  index_id = index[0]
-  for (index_file in index) {
-    if (index_file =~ /.*\.1\.bt2/ && !(index_file =~ /.*\.rev\.1\.bt2/)) {
-        index_id = ( index_file =~ /(.*)\.1\.bt2/)[0][1]
-    }
-  }
-"""
-bowtie2 --very-sensitive -p ${task.cpus} -x ${index_id} \
--U ${reads} 2> \
-${file_id}_bowtie2_report_tmp.txt | \
-samtools view -Sb - > ${file_id}.bam
-
-if grep -q "Error" ${file_id}_bowtie2_report_tmp.txt; then
-  exit 1
-fi
-tail -n 19 ${file_id}_bowtie2_report_tmp.txt > ${file_id}_bowtie2_report.txt
-"""
-}
diff --git a/src/nf_modules/bowtie2/tests.sh b/src/nf_modules/bowtie2/tests.sh
deleted file mode 100755
index bfec0d533a6430fb6cac928fe7350f3fa97a01ae..0000000000000000000000000000000000000000
--- a/src/nf_modules/bowtie2/tests.sh
+++ /dev/null
@@ -1,41 +0,0 @@
-./nextflow src/nf_modules/bowtie2/indexing.nf \
-  -c src/nf_modules/bowtie2/indexing.config \
-  -profile docker \
-  --fasta "data/tiny_dataset/fasta/tiny_v2.fasta" \
-  -resume
-
-./nextflow src/nf_modules/bowtie2/mapping_single.nf \
-  -c src/nf_modules/bowtie2/mapping_single.config \
-  -profile docker \
-  --index "data/tiny_dataset/fasta/*.bt2" \
-  --fastq "data/tiny_dataset/fastq/tiny*_S.fastq" \
-  -resume
-
-./nextflow src/nf_modules/bowtie2/mapping_paired.nf \
-  -c src/nf_modules/bowtie2/mapping_paired.config \
-  -profile docker \
-  --index "data/tiny_dataset/fasta/*.bt2" \
-  --fastq "data/tiny_dataset/fastq/tiny*_R{1,2}.fastq" \
-  -resume
-
-if [ -x "$(command -v singularity)" ]; then
-./nextflow src/nf_modules/bowtie2/indexing.nf \
-  -c src/nf_modules/bowtie2/indexing.config \
-  -profile singularity \
-  --fasta "data/tiny_dataset/fasta/tiny_v2.fasta" \
-  -resume
-
-./nextflow src/nf_modules/bowtie2/mapping_single.nf \
-  -c src/nf_modules/bowtie2/mapping_single.config \
-  -profile singularity \
-  --index "data/tiny_dataset/fasta/*.bt2" \
-  --fastq "data/tiny_dataset/fastq/tiny*_S.fastq" \
-  -resume
-
-./nextflow src/nf_modules/bowtie2/mapping_paired.nf \
-  -c src/nf_modules/bowtie2/mapping_paired.config \
-  -profile singularity \
-  --index "data/tiny_dataset/fasta/*.bt2" \
-  --fastq "data/tiny_dataset/fastq/tiny*_R{1,2}.fastq" \
-  -resume
-fi
diff --git a/src/nf_modules/bwa/indexing.config b/src/nf_modules/bwa/indexing.config
deleted file mode 100644
index b3b90059b1cb324f5b792f811079002aabd8acd2..0000000000000000000000000000000000000000
--- a/src/nf_modules/bwa/indexing.config
+++ /dev/null
@@ -1,56 +0,0 @@
-profiles {
-  docker {
-    docker.temp = "auto"
-    docker.enabled = true
-    process {
-      withName: index_fasta {
-        container = "lbmc/bwa:0.7.17"
-        cpus = 4
-      }
-    }
-  }
-  singularity {
-    singularity.enabled = true
-    singularity.cacheDir = "./bin/"
-    process {
-      withName: index_fasta {
-        container = "lbmc/bwa:0.7.17"
-        cpus = 4
-      }
-    }
-  }
-  psmn{
-    singularity.enabled = true
-    singularity.cacheDir = "/.singularity_psmn/"
-    singularity.runOptions = "--bind /Xnfs,/scratch"
-    process{
-      withName: index_fasta {
-        container = "lbmc/bwa:0.7.17"
-        executor = "sge"
-        clusterOptions = "-cwd -V"
-        cpus = 32
-        memory = "30GB"
-        time = "24h"
-        queue = "CLG6242deb384A,CLG6242deb384C,CLG5218deb192A,CLG5218deb192B,CLG5218deb192C,CLG5218deb192D,SLG5118deb96,SLG6142deb384A,SLG6142deb384B,SLG6142deb384C,SLG6142deb384D"
-        penv = "openmp32"
-      }
-    }
-  }
-  ccin2p3 {
-    singularity.enabled = true
-    singularity.cacheDir = "$baseDir/.singularity_in2p3/"
-    singularity.runOptions = "--bind /pbs,/sps,/scratch"
-    process{
-      withName: index_fasta {
-        container = "lbmc/bwa:0.7.17"
-        scratch = true
-        stageInMode = "copy"
-        stageOutMode = "rsync"
-        executor = "sge"
-        clusterOptions = "-P P_lbmc -l os=cl7 -l sps=1 -r n"
-        cpus = 1
-        queue = "huge"
-      }
-    }
-  }
-}
diff --git a/src/nf_modules/bwa/indexing.nf b/src/nf_modules/bwa/indexing.nf
deleted file mode 100644
index 09096eeaa0b7e9b77d7078c4edd68734a7d68dbf..0000000000000000000000000000000000000000
--- a/src/nf_modules/bwa/indexing.nf
+++ /dev/null
@@ -1,28 +0,0 @@
-params.fasta = "$baseDir/data/bam/*.fasta"
-
-log.info "fasta files : ${params.fasta}"
-
-Channel
-  .fromPath( params.fasta )
-  .ifEmpty { error "Cannot find any bam files matching: ${params.fasta}" }
-  .map { it -> [(it.baseName =~ /([^\.]*)/)[0][1], it]}
-  .set { fasta_file }
-
-process index_fasta {
-  tag "$fasta_id"
-  publishDir "results/mapping/index/", mode: 'copy'
-
-  input:
-    set fasta_id, file(fasta) from fasta_file
-
-  output:
-    set fasta_id, "${fasta.baseName}.*" into index_files
-    file "*_bwa_report.txt" into index_files_report
-
-  script:
-"""
-bwa index -p ${fasta.baseName} ${fasta} \
-&> ${fasta.baseName}_bwa_report.txt
-"""
-}
-
diff --git a/src/nf_modules/bwa/main.nf b/src/nf_modules/bwa/main.nf
index c6551dce588281d973d8b14a7bf10a1e446fad2b..0490b082a4d71d8cef7aa68d44ccf96d2ea41d89 100644
--- a/src/nf_modules/bwa/main.nf
+++ b/src/nf_modules/bwa/main.nf
@@ -1,10 +1,29 @@
 version = "0.7.17"
 container_url = "lbmc/bwa:${version}"
 
+
+workflow mapping {
+  take:
+    fasta
+    fastq
+  main:
+    index_fasta(fasta)
+    mapping_fastq(index_fasta.out.index.collect(), fastq)
+  emit:
+    bam = mapping_fastq.out.bam
+    report = mapping_fastq.out.report
+}
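+// usage sketch: fasta is an [id, fasta] tuple channel and fastq a
+// [pair_id, reads] tuple channel, e.g. mapping(fasta_files, fastq_files) as in
+// src/example_variant_calling.nf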
+
+
+params.index_fasta = ""
+params.index_fasta_out = ""
 process index_fasta {
   container = "${container_url}"
   label "big_mem_mono_cpus"
   tag "$file_id"
+  if (params.index_fasta_out != "") {
+    publishDir "results/${params.index_fasta_out}", mode: 'copy'
+  }
 
   input:
     tuple val(file_id), path(fasta)
@@ -15,50 +34,63 @@ process index_fasta {
 
   script:
 """
-bwa index -p ${fasta.simpleName} ${fasta} \
+bwa index ${params.index_fasta} -p ${fasta.simpleName} ${fasta} \
 &> ${fasta.simpleName}_bwa_report.txt
 """
 }
 
 
+params.mapping_fastq = ""
+params.mapping_fastq_out = ""
 process mapping_fastq {
   container = "${container_url}"
   label "big_mem_multi_cpus"
   tag "$file_id"
+  if (params.mapping_fastq_out != "") {
+    publishDir "results/${params.mapping_fastq_out}", mode: 'copy'
+  }
 
   input:
-  tuple val(file_id), path(reads)
   tuple val(index_id), path(index)
+  tuple val(file_id), path(reads)
 
   output:
   tuple val(file_id), path("*.bam"), emit: bam
-  tuple val(file_id), path("${id}_bwa_report.txt"), emit: report
+  tuple val(file_id), path("${file_prefix}_bwa_report.txt"), emit: report
 
   script:
-if (file_id.containsKey('library')) {
-  library = file_id.library
-  id = file_id.id
-} else {
-  library = file_id
-  id = file_id
-}
+  if (file_id instanceof List){
+    library = file_id[0]
+    file_prefix = file_id[0]
+  } else if (file_id instanceof Map) {
+      library = file_id[0]
+      file_prefix = file_id[0]
+      if (file_id.containsKey('library')) {
+        library = file_id.library
+        file_prefix = file_id.id
+      }
+  } else {
+    library = file_id
+    file_prefix = file_id
+  }
 bwa_mem_R = "@RG\\tID:${library}\\tSM:${library}\\tLB:lib_${library}\\tPL:illumina"
-if (reads instanceof List)
+  if (reads.size() == 2)
 """
 bwa mem -t ${task.cpus} \
+${params.mapping_fastq} \
 -R '${bwa_mem_R}' \
-${index_id} ${reads[0]} ${reads[1]} 2> \
-  ${id}_bwa_report.txt | \
-  samtools view -@ ${task.cpus} -Sb - > ${id}.bam
+${index[0].baseName} ${reads[0]} ${reads[1]} 2> \
+  ${file_prefix}_bwa_report.txt | \
+  samtools view -@ ${task.cpus} -Sb - > ${file_prefix}.bam
 """
-else
-
+  else
 """
 bwa mem -t ${task.cpus} \
+${params.mapping_fastq} \
 -R '${bwa_mem_R}' \
-${index_id} ${reads} 2> \
-  ${id}_bwa_report.txt | \
-  samtools view -@ ${task.cpus} -Sb - > ${id}.bam
+${index[0].baseName} ${reads} 2> \
+  ${file_prefix}_bwa_report.txt | \
+  samtools view -@ ${task.cpus} -Sb - > ${file_prefix}.bam
 """
 }
 
diff --git a/src/nf_modules/bwa/mapping_paired.config b/src/nf_modules/bwa/mapping_paired.config
deleted file mode 100644
index b28e5b9cf21573fc04a6fcf103fff13fee1b8ffa..0000000000000000000000000000000000000000
--- a/src/nf_modules/bwa/mapping_paired.config
+++ /dev/null
@@ -1,56 +0,0 @@
-profiles {
-  docker {
-    docker.temp = "auto"
-    docker.enabled = true
-    process {
-      withName: mapping_fastq {
-        container = "lbmc/bwa:0.7.17"
-        cpus = 4
-      }
-    }
-  }
-  singularity {
-    singularity.enabled = true
-    singularity.cacheDir = "./bin/"
-    process {
-      withName: mapping_fastq {
-        container = "lbmc/bwa:0.7.17"
-        cpus = 4
-      }
-    }
-  }
-  psmn{
-    singularity.enabled = true
-    singularity.cacheDir = "/.singularity_psmn/"
-    singularity.runOptions = "--bind /Xnfs,/scratch"
-    process{
-      withName: mapping_fastq {
-        container = "lbmc/bwa:0.7.17"
-        executor = "sge"
-        clusterOptions = "-cwd -V"
-        cpus = 32
-        memory = "30GB"
-        time = "24h"
-        queue = "CLG6242deb384A,CLG6242deb384C,CLG5218deb192A,CLG5218deb192B,CLG5218deb192C,CLG5218deb192D,SLG5118deb96,SLG6142deb384A,SLG6142deb384B,SLG6142deb384C,SLG6142deb384D"
-        penv = "openmp32"
-      }
-    }
-  }
-  ccin2p3 {
-    singularity.enabled = true
-    singularity.cacheDir = "$baseDir/.singularity_in2p3/"
-    singularity.runOptions = "--bind /pbs,/sps,/scratch"
-    process{
-      withName: mapping_fastq {
-        container = "lbmc/bwa:0.7.17"
-        scratch = true
-        stageInMode = "copy"
-        stageOutMode = "rsync"
-        executor = "sge"
-        clusterOptions = "-P P_lbmc -l os=cl7 -l sps=1 -r n"
-        cpus = 1
-        queue = "huge"
-      }
-    }
-  }
-}
diff --git a/src/nf_modules/bwa/mapping_paired.nf b/src/nf_modules/bwa/mapping_paired.nf
deleted file mode 100644
index 7309c6052fb2e670d29ef51b7b547c59ce092c4a..0000000000000000000000000000000000000000
--- a/src/nf_modules/bwa/mapping_paired.nf
+++ /dev/null
@@ -1,37 +0,0 @@
-params.fastq = "$baseDir/data/fastq/*_{1,2}.fastq"
-params.index = "$baseDir/data/index/*.index.*"
-
-log.info "fastq files : ${params.fastq}"
-log.info "index files : ${params.index}"
-
-Channel
-  .fromFilePairs( params.fastq )
-  .ifEmpty { error "Cannot find any fastq files matching: ${params.fastq}" }
-  .set { fastq_files }
-Channel
-  .fromPath( params.index )
-  .ifEmpty { error "Cannot find any index files matching: ${params.index}" }
-  .map { it -> [(it.baseName =~ /([^\.]*)/)[0][1], it]}
-  .groupTuple()
-  .set { index_files }
-
-process mapping_fastq {
-  tag "$reads"
-  publishDir "results/mapping/sam/", mode: 'copy'
-
-  input:
-  set pair_id, file(reads) from fastq_files
-  set index_id, file(index) from index_files.collect()
-
-  output:
-  file "${pair_id}.sam" into sam_files
-  file "${pair_id}_bwa_report.txt" into mapping_repport_files
-
-  script:
-"""
-bwa mem -t ${task.cpus} \
-${index_id} ${reads[0]} ${reads[1]} \
--o ${pair_id}.sam &> ${pair_id}_bwa_report.txt
-"""
-}
-
diff --git a/src/nf_modules/bwa/tests.sh b/src/nf_modules/bwa/tests.sh
deleted file mode 100755
index e601d200cd9b642ad7d96ff05be666938e2c6178..0000000000000000000000000000000000000000
--- a/src/nf_modules/bwa/tests.sh
+++ /dev/null
@@ -1,42 +0,0 @@
-./nextflow src/nf_modules/bwa/indexing.nf \
-  -c src/nf_modules/bwa/indexing.config \
-  -profile docker \
-  --fasta "data/tiny_dataset/fasta/tiny_v2.fasta" \
-  -resume
-
-# ./nextflow src/nf_modules/bwa/mapping_single.nf \
-#   -c src/nf_modules/bwa/mapping_single.config \
-#   -profile docker \
-#   --index "results/mapping/index/tiny_v2.index" \
-#   --fastq "data/tiny_dataset/fastq/tiny*_S.fastq"
-
-./nextflow src/nf_modules/bwa/mapping_paired.nf \
-  -c src/nf_modules/bwa/mapping_paired.config \
-  -profile docker \
-  --index "results/mapping/index/tiny_v2.*" \
-  --fastq "data/tiny_dataset/fastq/tiny*_R{1,2}.fastq" \
-  -resume
-
-
-if [ -x "$(command -v singularity)" ]; then
-./nextflow src/nf_modules/bwa/indexing.nf \
-  -c src/nf_modules/bwa/indexing.config \
-  -profile singularity \
-  --fasta "data/tiny_dataset/fasta/tiny_v2.fasta" \
-  -resume
-
-
-# ./nextflow src/nf_modules/bwa/mapping_single.nf \
-#   -c src/nf_modules/bwa/mapping_single.config \
-#   -profile singularity \
-#   --index "results/mapping/index/tiny_v2.index" \
-#   --fastq "data/tiny_dataset/fastq/tiny*_S.fastq"
-
-./nextflow src/nf_modules/bwa/mapping_paired.nf \
-  -c src/nf_modules/bwa/mapping_paired.config \
-  -profile singularity \
-  --index "results/mapping/index/tiny_v2.*" \
-  --fastq "data/tiny_dataset/fastq/tiny*_R{1,2}.fastq" \
-  -resume
-
-fi
diff --git a/src/nf_modules/cutadapt/adaptor_removal_paired.config b/src/nf_modules/cutadapt/adaptor_removal_paired.config
deleted file mode 100644
index 33b2179fd41d2a43e1001d6b75df343b879f94be..0000000000000000000000000000000000000000
--- a/src/nf_modules/cutadapt/adaptor_removal_paired.config
+++ /dev/null
@@ -1,55 +0,0 @@
-profiles {
-  docker {
-    docker.temp = "auto"
-    docker.enabled = true
-    process {
-      withLabel: cutadapt {
-        container = "lbmc/cutadapt:2.1"
-        cpus = 1
-      }
-    }
-  }
-  singularity {
-    singularity.enabled = true
-    singularity.cacheDir = "./bin/"
-    process {
-      withLabel: cutadapt {
-        container = "lbmc/cutadapt:2.1"
-        cpus = 1
-      }
-    }
-  }
-  psmn{
-    singularity.enabled = true
-    singularity.cacheDir = "/.singularity_psmn/"
-    singularity.runOptions = "--bind /Xnfs,/scratch"
-    process{
-      withLabel: cutadapt {
-        container = "lbmc/cutadapt:2.1"
-        executor = "sge"
-        clusterOptions = "-cwd -V"
-        cpus = 1
-        memory = "20GB"
-        time = "12h"
-        queue = "monointeldeb128"
-      }
-    }
-  }
-  ccin2p3 {
-    singularity.enabled = true
-    singularity.cacheDir = "$baseDir/.singularity_in2p3/"
-    singularity.runOptions = "--bind /pbs,/sps,/scratch"
-    process{
-      withLabel: cutadapt {
-        container = "lbmc/cutadapt:2.1"
-        scratch = true
-        stageInMode = "copy"
-        stageOutMode = "rsync"
-        executor = "sge"
-        clusterOptions = "-P P_lbmc -l os=cl7 -l sps=1 -r n"
-        cpus = 1
-        queue = "huge"
-      }
-    }
-  }
-}
diff --git a/src/nf_modules/cutadapt/adaptor_removal_paired.nf b/src/nf_modules/cutadapt/adaptor_removal_paired.nf
deleted file mode 100644
index 0d20373335c963833d71f1d9d12fd0145b004db1..0000000000000000000000000000000000000000
--- a/src/nf_modules/cutadapt/adaptor_removal_paired.nf
+++ /dev/null
@@ -1,25 +0,0 @@
-log.info "fastq files : ${params.fastq}"
-
-Channel
-  .fromFilePairs( params.fastq )
-  .ifEmpty { error "Cannot find any fastq files matching: ${params.fastq}" }
-  .set { fastq_files }
-
-process adaptor_removal {
-  tag "$pair_id"
-  publishDir "results/fastq/adaptor_removal/", mode: 'copy'
-  label "cutadapt"
-
-  input:
-  set pair_id, file(reads) from fastq_files
-
-  output:
-  set pair_id, "*_cut_R{1,2}.fastq.gz" into fastq_files_cut
-
-  script:
-  """
-  cutadapt -a AGATCGGAAGAG -g CTCTTCCGATCT -A AGATCGGAAGAG -G CTCTTCCGATCT \
-  -o ${pair_id}_cut_R1.fastq.gz -p ${pair_id}_cut_R2.fastq.gz \
-  ${reads[0]} ${reads[1]} > ${pair_id}_report.txt
-  """
-}
diff --git a/src/nf_modules/cutadapt/adaptor_removal_single.config b/src/nf_modules/cutadapt/adaptor_removal_single.config
deleted file mode 100644
index 33b2179fd41d2a43e1001d6b75df343b879f94be..0000000000000000000000000000000000000000
--- a/src/nf_modules/cutadapt/adaptor_removal_single.config
+++ /dev/null
@@ -1,55 +0,0 @@
-profiles {
-  docker {
-    docker.temp = "auto"
-    docker.enabled = true
-    process {
-      withLabel: cutadapt {
-        container = "lbmc/cutadapt:2.1"
-        cpus = 1
-      }
-    }
-  }
-  singularity {
-    singularity.enabled = true
-    singularity.cacheDir = "./bin/"
-    process {
-      withLabel: cutadapt {
-        container = "lbmc/cutadapt:2.1"
-        cpus = 1
-      }
-    }
-  }
-  psmn{
-    singularity.enabled = true
-    singularity.cacheDir = "/.singularity_psmn/"
-    singularity.runOptions = "--bind /Xnfs,/scratch"
-    process{
-      withLabel: cutadapt {
-        container = "lbmc/cutadapt:2.1"
-        executor = "sge"
-        clusterOptions = "-cwd -V"
-        cpus = 1
-        memory = "20GB"
-        time = "12h"
-        queue = "monointeldeb128"
-      }
-    }
-  }
-  ccin2p3 {
-    singularity.enabled = true
-    singularity.cacheDir = "$baseDir/.singularity_in2p3/"
-    singularity.runOptions = "--bind /pbs,/sps,/scratch"
-    process{
-      withLabel: cutadapt {
-        container = "lbmc/cutadapt:2.1"
-        scratch = true
-        stageInMode = "copy"
-        stageOutMode = "rsync"
-        executor = "sge"
-        clusterOptions = "-P P_lbmc -l os=cl7 -l sps=1 -r n"
-        cpus = 1
-        queue = "huge"
-      }
-    }
-  }
-}
diff --git a/src/nf_modules/cutadapt/adaptor_removal_single.nf b/src/nf_modules/cutadapt/adaptor_removal_single.nf
deleted file mode 100644
index ac2b631ef4029ea635174014b15fe0665113f595..0000000000000000000000000000000000000000
--- a/src/nf_modules/cutadapt/adaptor_removal_single.nf
+++ /dev/null
@@ -1,26 +0,0 @@
-log.info "fastq files : ${params.fastq}"
-
-Channel
-  .fromPath( params.fastq )
-  .ifEmpty { error "Cannot find any fastq files matching: ${params.fastq}" }
-  .map { it -> [(it.baseName =~ /([^\.]*)/)[0][1], it]}
-  .set { fastq_files }
-
-process adaptor_removal {
-  tag "$file_id"
-  label "cutadapt"
-
-  input:
-  set file_id, file(reads) from fastq_files
-
-  output:
-  set file_id, "*_cut.fastq.gz" into fastq_files_cut
-
-  script:
-  """
-  cutadapt -a AGATCGGAAGAG -g CTCTTCCGATCT\
-  -o ${file_id}_cut.fastq.gz \
-  ${reads} > ${file_id}_report.txt
-  """
-}
-
diff --git a/src/nf_modules/cutadapt/main.nf b/src/nf_modules/cutadapt/main.nf
index 6464935194f9cb9ce88f7175669a15d770fbd74f..7cac589e23669969ea100ecff00e842d6ae5c910 100644
--- a/src/nf_modules/cutadapt/main.nf
+++ b/src/nf_modules/cutadapt/main.nf
@@ -1,142 +1,79 @@
 version = "2.1"
 container_url = "lbmc/cutadapt:${version}"
 
-adapter_3_prim = "AGATCGGAAGAG"
-adapter_5_prim = "CTCTTCCGATCT"
-trim_quality = "20"
-
-
+params.adapter_3_prim = "AGATCGGAAGAG"
+params.adapter_5_prim = "CTCTTCCGATCT"
+params.adaptor_removal = "-a ${params.adapter_3_prim} -g ${params.adapter_5_prim} -A ${params.adapter_3_prim} -G ${params.adapter_5_prim}"
+params.adaptor_removal_out = ""
 process adaptor_removal {
-  container = "${container_url}"
-  label "big_mem_mono_cpus"
-  tag "$pair_id"
-
-  input:
-  tuple val(pair_id), path(reads)
-
-  output:
-  tuple val(pair_id), path("*_cut_R{1,2}.fastq.gz"), emit: fastq
-  path "*_report.txt", emit: report
-
-  script:
-if (reads instanceof List)
-  """
-  cutadapt -a ${adapter_3_prim} -g ${adapter_5_prim} -A ${adapter_3_prim} -G ${adapter_5_prim} \
-  -o ${pair_id}_cut_R1.fastq.gz -p ${pair_id}_cut_R2.fastq.gz \
-  ${reads[0]} ${reads[1]} > ${pair_id}_report.txt
-  """
-else:
-  """
-  cutadapt -a ${adapter_3_prim} -g ${adapter_5_prim} \
-  -o ${file_id}_cut.fastq.gz \
-  ${reads} > ${file_id}_report.txt
-  """
-}
-
-process adaptor_removal_pairedend {
-  container = "${container_url}"
-  label "big_mem_mono_cpus"
-  tag "$pair_id"
-
-  input:
-  tuple val(pair_id), path(reads)
-
-  output:
-  tuple val(pair_id), path("*_cut_R{1,2}.fastq.gz"), emit: fastq
-  path "*_report.txt", emit: report
-
-  script:
-  """
-  cutadapt -a ${adapter_3_prim} -g ${adapter_5_prim} -A ${adapter_3_prim} -G ${adapter_5_prim} \
-  -o ${pair_id}_cut_R1.fastq.gz -p ${pair_id}_cut_R2.fastq.gz \
-  ${reads[0]} ${reads[1]} > ${pair_id}_report.txt
-  """
-}
-
-process adaptor_removal_singleend {
   container = "${container_url}"
   label "big_mem_mono_cpus"
   tag "$file_id"
+  if (params.adaptor_removal_out != "") {
+    publishDir "results/${params.adaptor_removal_out}", mode: 'copy'
+  }
 
   input:
   tuple val(file_id), path(reads)
 
   output:
-  tuple val(file_id), path("*_cut.fastq.gz"), emit: fastq
+  tuple val(file_id), path("*_cut_*"), emit: fastq
   path "*_report.txt", emit: report
 
   script:
+  if (file_id instanceof List){
+    file_prefix = file_id[0]
+  } else {
+    file_prefix = file_id
+  }
+  if (reads.size() == 2)
   """
-  cutadapt -a ${adapter_3_prim} -g ${adapter_5_prim} \
-  -o ${file_id}_cut.fastq.gz \
-  ${reads} > ${file_id}_report.txt
+  cutadapt ${params.adaptor_removal} \
+  -o ${file_prefix}_cut_R1.fastq.gz -p ${file_prefix}_cut_R2.fastq.gz \
+  ${reads[0]} ${reads[1]} > ${file_prefix}_report.txt
   """
-}
-
-process trimming_pairedend {
-  container = "${container_url}"
-  label "big_mem_mono_cpus"
-  tag "$pair_id"
-
-  input:
-  tuple val(pair_id), path(reads)
-
-  output:
-  tuple val(pair_id), path("*_trim_R{1,2}.fastq.gz"), emit:fastq
-  path "*_report.txt", emit: report
-
-  script:
-if (reads instanceof List)
-  """
-  cutadapt -q ${trim_quality},${trim_quality} \
-  -o ${pair_id}_trim_R1.fastq.gz -p ${pair_id}_trim_R2.fastq.gz \
-  ${reads[0]} ${reads[1]} > ${pair_id}_report.txt
+  else
   """
-else
-  """
-  cutadapt -q ${trim_quality},${trim_quality} \
-  -o ${file_id}_trim.fastq.gz \
-  ${reads} > ${file_id}_report.txt
+  cutadapt ${params.adaptor_removal} \
+  -o ${file_prefix}_cut.fastq.gz \
+  ${reads} > ${file_prefix}_report.txt
   """
 }
 
-process trimming_pairedend {
-  container = "${container_url}"
-  label "big_mem_mono_cpus"
-  tag "$pair_id"
-
-  input:
-  tuple val(pair_id), path(reads)
-
-  output:
-  tuple val(pair_id), path("*_trim_R{1,2}.fastq.gz"), emit:fastq
-  path "*_report.txt", emit: report
-
-  script:
-  """
-  cutadapt -q ${trim_quality},${trim_quality} \
-  -o ${pair_id}_trim_R1.fastq.gz -p ${pair_id}_trim_R2.fastq.gz \
-  ${reads[0]} ${reads[1]} > ${pair_id}_report.txt
-  """
-}
-
-process trimming_singleend {
+params.trim_quality = "20"
+params.trimming = "-q ${params.trim_quality},${params.trim_quality}"
+params.trimming_out = ""
+process trimming {
   container = "${container_url}"
   label "big_mem_mono_cpus"
   tag "$file_id"
+  if (params.trimming_out != "") {
+    publishDir "results/${params.trimming_out}", mode: 'copy'
+  }
 
   input:
   tuple val(file_id), path(reads)
 
   output:
-  tuple val(file_id), path("*_trim.fastq.gz"), emit: fastq
+  tuple val(file_id), path("*_trim_*"), emit:fastq
   path "*_report.txt", emit: report
 
   script:
+  if (file_id instanceof List){
+    file_prefix = file_id[0]
+  } else {
+    file_prefix = file_id
+  }
+  if (reads.size() == 2)
   """
-  cutadapt -q ${trim_quality},${trim_quality} \
-  -o ${file_id}_trim.fastq.gz \
-  ${reads} > ${file_id}_report.txt
+  cutadapt ${params.trimming} \
+  -o ${file_prefix}_trim_R1.fastq.gz -p ${file_prefix}_trim_R2.fastq.gz \
+  ${reads[0]} ${reads[1]} > ${file_prefix}_report.txt
+  """
+  else
+  """
+  cutadapt ${params.trimming} \
+  -o ${file_prefix}_trim.fastq.gz \
+  ${reads} > ${file_prefix}_report.txt
   """
 }
-
diff --git a/src/nf_modules/cutadapt/tests.sh b/src/nf_modules/cutadapt/tests.sh
deleted file mode 100755
index 77712b2083e0ba335a3f0c1944c2e8045b66f4c7..0000000000000000000000000000000000000000
--- a/src/nf_modules/cutadapt/tests.sh
+++ /dev/null
@@ -1,49 +0,0 @@
-./nextflow src/nf_modules/cutadapt/adaptor_removal_paired.nf \
-  -c src/nf_modules/cutadapt/adaptor_removal_paired.config \
-  -profile docker \
-  --fastq "data/tiny_dataset/fastq/tiny_R{1,2}.fastq" \
-  -resume
-
-./nextflow src/nf_modules/cutadapt/adaptor_removal_single.nf \
-  -c src/nf_modules/cutadapt/adaptor_removal_single.config \
-  -profile docker \
-  --fastq "data/tiny_dataset/fastq/tiny*_S.fastq" \
-  -resume
-
-./nextflow src/nf_modules/cutadapt/trimming_paired.nf \
-  -c src/nf_modules/cutadapt/trimming_paired.config \
-  -profile docker \
-  --fastq "data/tiny_dataset/fastq/tiny_R{1,2}.fastq" \
-  -resume
-
-./nextflow src/nf_modules/cutadapt/trimming_single.nf \
-  -c src/nf_modules/cutadapt/trimming_single.config \
-  -profile docker \
-  --fastq "data/tiny_dataset/fastq/tiny*_S.fastq" \
-  -resume
-
-if [ -x "$(command -v singularity)" ]; then
-./nextflow src/nf_modules/cutadapt/adaptor_removal_paired.nf \
-  -c src/nf_modules/cutadapt/adaptor_removal_paired.config \
-  -profile docker \
-  --fastq "data/tiny_dataset/fastq/tiny_R{1,2}.fastq" \
-  -resume
-
-./nextflow src/nf_modules/cutadapt/adaptor_removal_single.nf \
-  -c src/nf_modules/cutadapt/adaptor_removal_single.config \
-  -profile docker \
-  --fastq "data/tiny_dataset/fastq/tiny*_S.fastq" \
-  -resume
-
-./nextflow src/nf_modules/cutadapt/trimming_paired.nf \
-  -c src/nf_modules/cutadapt/trimming_paired.config \
-  -profile docker \
-  --fastq "data/tiny_dataset/fastq/tiny_R{1,2}.fastq" \
-  -resume
-
-./nextflow src/nf_modules/cutadapt/trimming_single.nf \
-  -c src/nf_modules/cutadapt/trimming_single.config \
-  -profile docker \
-  --fastq "data/tiny_dataset/fastq/tiny*_S.fastq" \
-  -resume
-fi
diff --git a/src/nf_modules/cutadapt/trimming_paired.config b/src/nf_modules/cutadapt/trimming_paired.config
deleted file mode 100644
index 33b2179fd41d2a43e1001d6b75df343b879f94be..0000000000000000000000000000000000000000
--- a/src/nf_modules/cutadapt/trimming_paired.config
+++ /dev/null
@@ -1,55 +0,0 @@
-profiles {
-  docker {
-    docker.temp = "auto"
-    docker.enabled = true
-    process {
-      withLabel: cutadapt {
-        container = "lbmc/cutadapt:2.1"
-        cpus = 1
-      }
-    }
-  }
-  singularity {
-    singularity.enabled = true
-    singularity.cacheDir = "./bin/"
-    process {
-      withLabel: cutadapt {
-        container = "lbmc/cutadapt:2.1"
-        cpus = 1
-      }
-    }
-  }
-  psmn{
-    singularity.enabled = true
-    singularity.cacheDir = "/.singularity_psmn/"
-    singularity.runOptions = "--bind /Xnfs,/scratch"
-    process{
-      withLabel: cutadapt {
-        container = "lbmc/cutadapt:2.1"
-        executor = "sge"
-        clusterOptions = "-cwd -V"
-        cpus = 1
-        memory = "20GB"
-        time = "12h"
-        queue = "monointeldeb128"
-      }
-    }
-  }
-  ccin2p3 {
-    singularity.enabled = true
-    singularity.cacheDir = "$baseDir/.singularity_in2p3/"
-    singularity.runOptions = "--bind /pbs,/sps,/scratch"
-    process{
-      withLabel: cutadapt {
-        container = "lbmc/cutadapt:2.1"
-        scratch = true
-        stageInMode = "copy"
-        stageOutMode = "rsync"
-        executor = "sge"
-        clusterOptions = "-P P_lbmc -l os=cl7 -l sps=1 -r n"
-        cpus = 1
-        queue = "huge"
-      }
-    }
-  }
-}
diff --git a/src/nf_modules/cutadapt/trimming_paired.nf b/src/nf_modules/cutadapt/trimming_paired.nf
deleted file mode 100644
index 9fb317ffbddec30204fb940b011a8abbade98e73..0000000000000000000000000000000000000000
--- a/src/nf_modules/cutadapt/trimming_paired.nf
+++ /dev/null
@@ -1,25 +0,0 @@
-log.info "fastq files : ${params.fastq}"
-
-Channel
-  .fromFilePairs( params.fastq )
-  .ifEmpty { error "Cannot find any fastq files matching: ${params.fastq}" }
-  .set { fastq_files }
-
-process trimming {
-  tag "$pair_id"
-  publishDir "results/fastq/trimming/", mode: 'copy'
-  label "cutadapt"
-
-  input:
-  set pair_id, file(reads) from fastq_files
-
-  output:
-  set pair_id, "*_trim_R{1,2}.fastq.gz" into fastq_files_trim
-
-  script:
-  """
-  cutadapt -q 20,20 \
-  -o ${pair_id}_trim_R1.fastq.gz -p ${pair_id}_trim_R2.fastq.gz \
-  ${reads[0]} ${reads[1]} > ${pair_id}_report.txt
-  """
-}
diff --git a/src/nf_modules/cutadapt/trimming_single.config b/src/nf_modules/cutadapt/trimming_single.config
deleted file mode 100644
index 33b2179fd41d2a43e1001d6b75df343b879f94be..0000000000000000000000000000000000000000
--- a/src/nf_modules/cutadapt/trimming_single.config
+++ /dev/null
@@ -1,55 +0,0 @@
-profiles {
-  docker {
-    docker.temp = "auto"
-    docker.enabled = true
-    process {
-      withLabel: cutadapt {
-        container = "lbmc/cutadapt:2.1"
-        cpus = 1
-      }
-    }
-  }
-  singularity {
-    singularity.enabled = true
-    singularity.cacheDir = "./bin/"
-    process {
-      withLabel: cutadapt {
-        container = "lbmc/cutadapt:2.1"
-        cpus = 1
-      }
-    }
-  }
-  psmn{
-    singularity.enabled = true
-    singularity.cacheDir = "/.singularity_psmn/"
-    singularity.runOptions = "--bind /Xnfs,/scratch"
-    process{
-      withLabel: cutadapt {
-        container = "lbmc/cutadapt:2.1"
-        executor = "sge"
-        clusterOptions = "-cwd -V"
-        cpus = 1
-        memory = "20GB"
-        time = "12h"
-        queue = "monointeldeb128"
-      }
-    }
-  }
-  ccin2p3 {
-    singularity.enabled = true
-    singularity.cacheDir = "$baseDir/.singularity_in2p3/"
-    singularity.runOptions = "--bind /pbs,/sps,/scratch"
-    process{
-      withLabel: cutadapt {
-        container = "lbmc/cutadapt:2.1"
-        scratch = true
-        stageInMode = "copy"
-        stageOutMode = "rsync"
-        executor = "sge"
-        clusterOptions = "-P P_lbmc -l os=cl7 -l sps=1 -r n"
-        cpus = 1
-        queue = "huge"
-      }
-    }
-  }
-}
diff --git a/src/nf_modules/cutadapt/trimming_single.nf b/src/nf_modules/cutadapt/trimming_single.nf
deleted file mode 100644
index 0d125e7e987c0a1ffaca54dfef63a497205d7d33..0000000000000000000000000000000000000000
--- a/src/nf_modules/cutadapt/trimming_single.nf
+++ /dev/null
@@ -1,26 +0,0 @@
-log.info "fastq files : ${params.fastq}"
-
-Channel
-  .fromPath( params.fastq )
-  .ifEmpty { error "Cannot find any fastq files matching: ${params.fastq}" }
-  .map { it -> [(it.baseName =~ /([^\.]*)/)[0][1], it]}
-  .set { fastq_files }
-
-process trimming {
-  tag "$file_id"
-  label "cutadapt"
-
-  input:
-  set file_id, file(reads) from fastq_files
-
-  output:
-  set file_id, "*_trim.fastq.gz" into fastq_files_cut
-
-  script:
-  """
-  cutadapt -q 20,20 \
-  -o ${file_id}_trim.fastq.gz \
-  ${reads} > ${file_id}_report.txt
-  """
-}
-
diff --git a/src/nf_modules/danpos/main.nf b/src/nf_modules/danpos/main.nf
new file mode 100644
index 0000000000000000000000000000000000000000..81d8b836f015ef21991d217e2c214099bd925ba7
--- /dev/null
+++ b/src/nf_modules/danpos/main.nf
@@ -0,0 +1,446 @@
+version = "v2.2.2_cv3"
+container_url = "biocontainers/danpos:${version}"
+
+include {
+  bigwig2_to_wig2;
+  bigwig_to_wig;
+  wig_to_bedgraph;
+  wig2_to_bedgraph2
+} from "./../ucsc/main.nf"
+
+params.dpos = "--smooth_width 0 -n N "
+params.dpos_out = ""
+
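+// DANPOS dpos computes occupancy positions; the workflows below wrap the
+// dpos processes with bigwig <-> wig <-> bedgraph conversions so callers can
+// start from bam or bigwig files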
+workflow dpos_bam_bg {
+  take:
+    fasta
+    fastq
+    bam
+
+  main:
+    dpos_bam(fastq, bam)
+    wig2_to_bedgraph2(fasta, dpos_bam.out.wig)
+
+  emit:
+    bg = wig2_to_bedgraph2.out.bg
+    wig = dpos_bam.out.wig
+    bed = dpos_bam.out.bed
+}
+
+process dpos_bam {
+  container = "${container_url}"
+  label "big_mem_mono_cpus"
+  tag "$file_id"
+  if (params.dpos_out != "") {
+    publishDir "results/${params.dpos_out}", mode: 'copy', overwrite: true
+  }
+
+  input:
+    val fastq 
+    tuple val(file_id), path(bam_ip), path(bam_wce)
+
+  output:
+    tuple val(file_id), path("${file_prefix}/${bam_ip.simpleName}*.wig"), path("${file_prefix}/${bam_wce.simpleName}*.wig"), emit: wig
+  tuple val(file_id), path("${file_prefix}/*.positions.bed"), emit: bed
+
+  script:
+
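+  // file_id can be a plain value, a list (e.g. grouped replicates) or a map
+  // of metadata; in the last two cases its first element is used as prefix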
+  switch(file_id) {
+    case {it instanceof List}:
+      file_prefix = file_id[0]
+    break
+    case {it instanceof Map}:
+      file_prefix = file_id.values()[0]
+    break
+    default:
+      file_prefix = file_id
+    break
+  }
+
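+  // fastq is the (id, reads) tuple of the raw input: two files in the reads
+  // slot are taken to mean paired-end data (-m 1), otherwise single-end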
+  m = 0
+  if (fastq[1].size() == 2){
+    m = 1
+  }
+"""
+danpos.py dpos -m ${m} \
+  ${params.dpos} \
+  -b ${bam_wce} \
+  -o ${file_prefix} \
+  ${bam_ip}
+mv ${file_prefix}/pooled/* ${file_prefix}/
+rm -R ${file_prefix}/pooled
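+# convert the 1-based danpos .xls positions table into a 0-based BED6 file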
+awk -v FS='\t' -v OFS='\t' 'FNR > 1 { print \$1, \$2-1, \$3, "Interval_"NR-1, \$6, "+" }' ${file_prefix}/${bam_ip.simpleName}.bgsub.positions.xls > ${file_prefix}/${bam_ip.simpleName}.bgsub.positions.bed
+"""
+}
+
+workflow dpos_bw {
+  take:
+    fasta
+    fastq
+    bw
+  main:
+    bigwig2_to_wig2(bw)
+    dpos_wig(fastq, bigwig2_to_wig2.out.wig)
+    wig_to_bedgraph(fasta, bigwig2_to_wig2.out.wig)
+
+  emit:
+  bg = wig_to_bedgraph.out.bg
+  wig = bigwig2_to_wig2.out.wig
+  bed = dpos_wig.out.bed
+}
+
+process dpos_wig {
+  container = "${container_url}"
+  label "big_mem_mono_cpus"
+  tag "$file_id"
+  if (params.dpos_out != "") {
+    publishDir "results/${params.dpos_out}", mode: 'copy', overwrite: true
+  }
+
+  input:
+    val fastq 
+    tuple val(file_id), path(wig_ip), path(wig_wce)
+
+  output:
+    tuple val(file_id), path("${file_prefix}/*.positions.bed"), emit: bed
+    tuple val(file_id), path("${file_prefix}/${bam_ip.simpleName}*.wig"), path("${file_prefix}/${bam_wce.simpleName}*.wig"), emit: wig
+
+  script:
+
+  switch(file_id) {
+    case {it instanceof List}:
+      file_prefix = file_id[0]
+    break
+    case {it instanceof Map}:
+      file_prefix = file_id.values()[0]
+    break
+    default:
+      file_prefix = file_id
+    break
+  }
+
+  m = 0
+  if (fastq[1].size() == 2){
+    m = 1
+  }
+"""
+danpos.py dpos -m ${m} \
+  ${params.dpos} \
+  -b ${wig_wce} \
+  -o ${file_prefix} \
+  ${wig_ip}
+mv ${file_prefix}/pooled/* ${file_prefix}/
+rm -R ${file_prefix}/pooled
+awk -v FS='\t' -v OFS='\t' 'FNR > 1 { print \$1, \$2-1, \$3, "Interval_"NR-1, \$6, "+" }' ${file_prefix}/${wig_ip.simpleName}.positions.xls > ${file_prefix}/${wig_ip.simpleName}.positions.bed
+"""
+}
+
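+// the *_no_b variants run dpos without the -b (background subtraction)
+// option, for samples that have no WCE/input control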
+workflow dpos_bw_no_b {
+  take:
+    fasta
+    fastq
+    bw
+  main:
+    bigwig_to_wig(bw)
+    dpos_wig_no_b(fastq, bigwig_to_wig.out.wig)
+    wig_to_bedgraph(fasta, bigwig_to_wig.out.wig)
+
+  emit:
+  bg = wig_to_bedgraph.out.bg
+  wig = bigwig_to_wig.out.wig
+  bed = dpos_wig_no_b.out.bed
+}
+
+process dpos_wig_no_b {
+  container = "${container_url}"
+  label "big_mem_mono_cpus"
+  tag "$file_id"
+  if (params.dpos_out != "") {
+    publishDir "results/${params.dpos_out}", mode: 'copy', overwrite: true
+  }
+
+  input:
+    val fastq 
+    tuple val(file_id), path(wig_ip)
+
+  output:
+    tuple val(file_id), path("${file_prefix}/*.positions.bed"), emit: bed
+
+  script:
+
+  switch(file_id) {
+    case {it instanceof List}:
+      file_prefix = file_id[0]
+    break
+    case {it instanceof Map}:
+      file_prefix = file_id.values()[0]
+    break
+    default:
+      file_prefix = file_id
+    break
+  }
+
+  m = 0
+  if (fastq[1].size() == 2){
+    m = 1
+  }
+"""
+danpos.py dpos -m ${m} \
+  ${params.dpos} \
+  -o ${file_prefix} \
+  ${wig_ip}
+mv ${file_prefix}/pooled/* ${file_prefix}/
+rm -R ${file_prefix}/pooled
+awk -v FS='\t' -v OFS='\t' 'FNR > 1 { print \$1, \$2-1, \$3, "Interval_"NR-1, \$6, "+" }' ${file_prefix}/${wig_ip.simpleName}.positions.xls > ${file_prefix}/${wig_ip.simpleName}.positions.bed
+"""
+}
+
+workflow dwig_bwvsbw {
+  take:
+    fasta
+    fastq
+    bw_a
+    bw_b
+  main:
+    dpos_wigvswig(
+      fastq,
+      bigwig2_to_wig2(bw_a),
+      bigwig2_to_wig2(bw_b),
+    )
+    wig_to_bedgraph(fasta, dpos_wigvswig.out.wig)
+
+  emit:
+  bg = wig_to_bedgraph.out.bg
+  wig = dpos_wigvswig.out.wig
+  bed = dpos_wigvswig.out.bed
+}
+
+process dpos_wigvswig {
+  container = "${container_url}"
+  label "big_mem_mono_cpus"
+  tag "$file_id"
+  if (params.dpos_out != "") {
+    publishDir "results/${params.dpos_out}", mode: 'copy', overwrite: true
+  }
+
+  input:
+    val fastq 
+    tuple val(file_id_a), path(wig_ip_a)
+    tuple val(file_id_b), path(wig_ip_b)
+
+  output:
+    tuple val(file_id), path("${file_prefix}/${wig_ip_a.simpleName}*.wig"), emit: wig
+  tuple val(file_id), path("${file_prefix}/*.positions.bed"), emit: bed
+
+  script:
+
+  switch(file_id_a) {
+    case {it instanceof List}:
+      file_prefix = file_id_a[0]
+    break
+    case {it instanceof Map}:
+      file_prefix = file_id_a.values()[0]
+    break
+    default:
+      file_prefix = file_id_a
+    break
+  }
+
+  m = 0
+  if (fastq[1].size() == 2){
+    m = 1
+  }
+"""
+danpos.py dpos -m ${m} \
+  ${params.dpos} \
+  -b ${wig_ip_a},${wig_ip_b} \
+  -o ${file_prefix} \
+  ${wig_ip_a}:${wig_ip_b}
+mv ${file_prefix}/pooled/* ${file_prefix}/
+rm -R ${file_prefix}/pooled
+awk -v FS='\t' -v OFS='\t' 'FNR > 1 { print \$1, \$2-1, \$3, "Interval_"NR-1, \$6, "+" }' ${file_prefix}/${wig_ip_a.simpleName}.positions.xls > ${file_prefix}/${wig_ip_a.simpleName}.positions.bed
+"""
+}
+
+params.dpeak = "--smooth_width 0 -n N "
+params.dpeak_out = ""
+
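+// DANPOS dpeak: same input layout as dpos, but peaks are reported together
+// with their summit positions (*.summit.bed next to *.positions.bed)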
+process dpeak_bam {
+  container = "${container_url}"
+  label "big_mem_mono_cpus"
+  tag "$file_id"
+  if (params.dpeak_out != "") {
+    publishDir "results/${params.dpeak_out}", mode: 'copy', overwrite: true
+  }
+
+  input:
+    val fastq 
+    tuple val(file_id), path(bam_ip), path(bam_wce)
+
+  output:
+    tuple val(file_id), path("${file_prefix}/${bam_ip.simpleName}*.wig"), path("${file_prefix}/${bam_wce.simpleName}*.wig"), emit: wig
+  tuple val(file_id), path("${file_prefix}/*.positions.bed"), path("${file_prefix}/*.summit.bed"), emit: bed
+    tuple val(file_id), path("${file_prefix}/*.bed"), emit: bed
+
+  script:
+
+  switch(file_id) {
+    case {it instanceof List}:
+      file_prefix = file_id[0]
+    break
+    case {it instanceof Map}:
+      file_prefix = file_id.values()[0]
+    break
+    default:
+      file_prefix = file_id
+    break
+  }
+
+  m = 0
+  if (fastq[1].size() == 2){
+    m = 1
+  }
+"""
+danpos.py dpeak -m ${m} \
+  ${params.dpeak} \
+  -b ${bam_wce} \
+  -o ${file_prefix} \
+  ${bam_ip}
+mv ${file_prefix}/pooled/* ${file_prefix}/
+rm -R ${file_prefix}/pooled
+awk -v FS='\t' -v OFS='\t' 'FNR > 1 { print \$1, \$2-1, \$3, "Interval_"NR-1, \$6, "+" }' ${file_prefix}/${bam_ip.simpleName}.bgsub.peaks.xls > ${file_prefix}/${bam_ip.simpleName}.bgsub.positions.bed
+awk -v FS='\t' -v OFS='\t' 'FNR > 1 { print \$1, \$4-1, \$4, "Interval_"NR-1, \$6, "+" }' ${file_prefix}/${bam_ip.simpleName}.bgsub.peaks.xls > ${file_prefix}/${bam_ip.simpleName}.bgsub.positions.summit.bed
+"""
+}
+
+workflow dpeak_bw {
+  take:
+    fasta
+    fastq
+    bw
+  main:
+    dpeak_wig(fastq, bigwig2_to_wig2(bw))
+    wig2_to_bedgraph2(fasta, dpeak_wig.out.wig)
+
+  emit:
+  bg = wig2_to_bedgraph2.out.bg
+  wig = dpeak_wig.out.wig
+  bed = dpeak_wig.out.bed
+}
+
+
+process dpeak_wig {
+  container = "${container_url}"
+  label "big_mem_mono_cpus"
+  tag "$file_id"
+  if (params.dpeak_out != "") {
+    publishDir "results/${params.dpeak_out}", mode: 'copy', overwrite: true
+  }
+
+  input:
+    val fastq 
+    tuple val(file_id), path(wig_ip), path(wig_wce)
+
+  output:
+  tuple val(file_id), path("${file_prefix}/${wig_ip.simpleName}.bgsub.wig"), path("${file_prefix}/${wig_wce.simpleName}.wig"), emit: wig
+  tuple val(file_id), path("${file_prefix}/*.positions.bed"), path("${file_prefix}/*.summit.bed"), emit: bed
+
+  script:
+
+  switch(file_id) {
+    case {it instanceof List}:
+      file_prefix = file_id[0]
+    break
+    case {it instanceof Map}:
+      file_prefix = file_id.values()[0]
+    break
+    default:
+      file_prefix = file_id
+    break
+  }
+
+  m = 0
+  if (fastq[1].size() == 2){
+    m = 1
+  }
+"""
+danpos.py dpeak -m ${m} \
+  ${params.dpeak} \
+  -b ${wig_wce} \
+  -o ${file_prefix} \
+  ${wig_ip}
+mv ${file_prefix}/pooled/* ${file_prefix}/
+rm -R ${file_prefix}/pooled
+awk -v FS='\t' -v OFS='\t' 'FNR > 1 { print \$1, \$2-1, \$3, "Interval_"NR-1, \$6, "+" }' ${file_prefix}/${wig_ip.simpleName}.bgsub.peaks.xls > ${file_prefix}/${wig_ip.simpleName}.bgsub.positions.bed
+awk -v FS='\t' -v OFS='\t' 'FNR > 1 { print \$1, \$4-1, \$4, "Interval_"NR-1, \$6, "+" }' ${file_prefix}/${wig_ip.simpleName}.bgsub.peaks.xls > ${file_prefix}/${wig_ip.simpleName}.bgsub.positions.summit.bed
+"""
+}
+
+workflow dpeak_bwvsbw {
+  take:
+    fasta
+    fastq
+    bw_a
+    bw_b
+  main:
+    dpeak_wigvswig(
+      fastq,
+      bigwig2_to_wig2(bw_a),
+      bigwig2_to_wig2(bw_b),
+    )
+    wig2_to_bedgraph2(fasta, dpeak_wigvswig.out.wig)
+
+  emit:
+  bg = wig2_to_bedgraph2.out.bg
+  wig = dpeak_wigvswig.out.wig
+  bed = dpeak_wigvswig.out.bed
+}
+
+
+process dpeak_wigvswig {
+  container = "${container_url}"
+  label "big_mem_mono_cpus"
+  tag "$file_id"
+  if (params.dpeak_out != "") {
+    publishDir "results/${params.dpeak_out}", mode: 'copy', overwrite: true
+  }
+
+  input:
+    val fastq 
+    tuple val(file_id_a), path(wig_ip_a), path(wig_wce_a)
+    tuple val(file_id_b), path(wig_ip_b), path(wig_wce_b)
+
+  output:
+  tuple val(file_id), path("${file_prefix}/${wig_ip_a.simpleName}.bgsub.wig"), path("${file_prefix}/${wig_wce_a.simpleName}.wig"), emit: wig
+  tuple val(file_id), path("${file_prefix}/*.positions.bed"), path("${file_prefix}/*.summit.bed"), emit: bed
+
+  script:
+
+  switch(file_id_a) {
+    case {it instanceof List}:
+      file_prefix = file_id_a[0]
+    break
+    case {it instanceof Map}:
+      file_prefix = file_id_a.values()[0]
+    break
+    default:
+      file_prefix = file_id_a
+    break
+  }
+
+  m = 0
+  if (fastq[1].size() == 2){
+    m = 1
+  }
+"""
+danpos.py dpeak -m ${m} \
+  ${params.dpeak} \
+  -b ${wig_ip_a}:${wig_wce_a},${wig_ip_b}:${wig_wce_b} \
+  -o ${file_prefix} \
+  ${wig_ip_a}:${wig_ip_b}
+mv ${file_prefix}/pooled/* ${file_prefix}/
+rm -R ${file_prefix}/pooled
+awk -v FS='\t' -v OFS='\t' 'FNR > 1 { print \$1, \$2-1, \$3, "Interval_"NR-1, \$6, "+" }' ${file_prefix}/${wig_ip_a.simpleName}.bgsub.peaks.xls > ${file_prefix}/${wig_ip_a.simpleName}.bgsub.positions.bed
+awk -v FS='\t' -v OFS='\t' 'FNR > 1 { print \$1, \$4-1, \$4, "Interval_"NR-1, \$6, "+" }' ${file_prefix}/${wig_ip_a.simpleName}.bgsub.peaks.xls > ${file_prefix}/${wig_ip_a.simpleName}.bgsub.positions.summit.bed
+"""
+}
\ No newline at end of file
diff --git a/src/nf_modules/deeptools/bam_to_bigwig.config b/src/nf_modules/deeptools/bam_to_bigwig.config
deleted file mode 100644
index 9dd57e691ea71e368e4835e5b37d6a8c8c04a175..0000000000000000000000000000000000000000
--- a/src/nf_modules/deeptools/bam_to_bigwig.config
+++ /dev/null
@@ -1,76 +0,0 @@
-profiles {
-  docker {
-    docker.temp = "auto"
-    docker.enabled = true
-    process {
-      withName: index_bam {
-        container = "lbmc/sambamba:0.6.7"
-        cpus = 4
-      }
-      withName: bam_to_bigwig {
-        container = "lbmc/deeptools:3.0.2"
-        cpus = 4
-      }
-    }
-  }
-  singularity {
-    singularity.enabled = true
-    singularity.cacheDir = "./bin/"
-    process {
-      withName: index_bam {
-        container = "lbmc/sambamba:0.6.7"
-        cpus = 4
-      }
-      withName: bam_to_bigwig {
-        container = "lbmc/deeptools:3.0.2"
-        cpus = 4
-      }
-    }
-  }
-  psmn{
-    singularity.enabled = true
-    singularity.cacheDir = "/.singularity_psmn/"
-    singularity.runOptions = "--bind /Xnfs,/scratch"
-    process{
-      withName: index_bam {
-        container = "lbmc/deeptools:3.0.2"
-        executor = "sge"
-        clusterOptions = "-cwd -V"
-        cpus = 32
-        memory = "30GB"
-        time = "24h"
-        queue = "CLG6242deb384A,CLG6242deb384C,CLG5218deb192A,CLG5218deb192B,CLG5218deb192C,CLG5218deb192D,SLG5118deb96,SLG6142deb384A,SLG6142deb384B,SLG6142deb384C,SLG6142deb384D"
-        penv = "openmp32"
-      }
-    }
-  }
-  ccin2p3 {
-    singularity.enabled = true
-    singularity.cacheDir = "$baseDir/.singularity_in2p3/"
-    singularity.runOptions = "--bind /pbs,/sps,/scratch"
-    process{
-      withName: index_bam {
-        container = "lbmc/sambamba:0.6.7"
-        scratch = true
-        stageInMode = "copy"
-        stageOutMode = "rsync"
-        executor = "sge"
-        clusterOptions = "-P P_lbmc -l os=cl7 -l sps=1 -r n\
-        "
-        cpus = 1
-        queue = "huge"
-      }
-      withName: bam_to_bigwig {
-        container = "lbmc/deeptools:3.0.2"
-        scratch = true
-        stageInMode = "copy"
-        stageOutMode = "rsync"
-        executor = "sge"
-        clusterOptions = "-P P_lbmc -l os=cl7 -l sps=1 -r n\
-        "
-        cpus = 1
-        queue = "huge"
-      }
-    }
-  }
-}
diff --git a/src/nf_modules/deeptools/bam_to_bigwig.nf b/src/nf_modules/deeptools/bam_to_bigwig.nf
deleted file mode 100644
index 4c30ee0eed193fba70ea3b236a8bfde800993697..0000000000000000000000000000000000000000
--- a/src/nf_modules/deeptools/bam_to_bigwig.nf
+++ /dev/null
@@ -1,49 +0,0 @@
-params.bam = "$baseDir/data/bam/*.bam"
-
-log.info "bams files : ${params.bam}"
-
-Channel
-  .fromPath( params.bam )
-  .ifEmpty { error "Cannot find any bam files matching: ${params.bam}" }
-  .map { it -> [(it.baseName =~ /([^\.]*)/)[0][1], it]}
-  .set { bam_files }
-
-bam_files.into{
-  bam_files_index;
-  bam_files_bigwig
-  }
-
-process index_bam {
-  tag "$file_id"
-
-  input:
-    set file_id, file(bam) from bam_files_index
-
-  output:
-    set file_id, "*.bam*" into indexed_bam_file
-
-  script:
-"""
-sambamba index -t ${task.cpus} ${bam}
-"""
-}
-
-bam_files_indexed = bam_files_bigwig.join(indexed_bam_file, by: 0)
-
-process bam_to_bigwig {
-  tag "$file_id"
-
-  publishDir "results/mapping/bigwig/", mode: 'copy'
-
-  input:
-    set file_id, file(bam), file(idx) from bam_files_indexed
-
-  output:
-    set file_id, "*.bw" into bw_files
-
-  script:
-"""
-bamCoverage -p ${task.cpus} --ignoreDuplicates -b ${bam} -o ${file_id}.bw
-"""
-}
-
diff --git a/src/nf_modules/deeptools/compute_matrix.config b/src/nf_modules/deeptools/compute_matrix.config
deleted file mode 100644
index 8c9c1b7a6cf50cb012569ffabc605d666af11fac..0000000000000000000000000000000000000000
--- a/src/nf_modules/deeptools/compute_matrix.config
+++ /dev/null
@@ -1,57 +0,0 @@
-profiles {
-  docker {
-    docker.temp = "auto"
-    docker.enabled = true
-    process {
-      withName: compute_matrix {
-        container = "lbmc/deeptools:3.0.2"
-        cpus = 4
-      }
-    }
-  }
-  singularity {
-    singularity.enabled = true
-    singularity.cacheDir = "./bin/"
-    process {
-      withName: compute_matrix {
-        container = "lbmc/deeptools:3.0.2"
-        cpus = 4
-      }
-    }
-  }
-  psmn{
-    singularity.enabled = true
-    singularity.cacheDir = "/.singularity_psmn/"
-    singularity.runOptions = "--bind /Xnfs,/scratch"
-    process{
-      withName: compute_matrix {
-        container = "lbmc/deeptools:3.0.2"
-        executor = "sge"
-        clusterOptions = "-cwd -V"
-        cpus = 32
-        memory = "30GB"
-        time = "24h"
-        queue = "CLG6242deb384A,CLG6242deb384C,CLG5218deb192A,CLG5218deb192B,CLG5218deb192C,CLG5218deb192D,SLG5118deb96,SLG6142deb384A,SLG6142deb384B,SLG6142deb384C,SLG6142deb384D"
-        penv = "openmp32"
-      }
-    }
-  }
-  ccin2p3 {
-    singularity.enabled = true
-    singularity.cacheDir = "$baseDir/.singularity_in2p3/"
-    singularity.runOptions = "--bind /pbs,/sps,/scratch"
-    process{
-      withName: compute_matrix {
-        container = "lbmc/deeptools:3.0.2"
-        scratch = true
-        stageInMode = "copy"
-        stageOutMode = "rsync"
-        executor = "sge"
-        clusterOptions = "-P P_lbmc -l os=cl7 -l sps=1 -r n\
-        "
-        cpus = 1
-        queue = "huge"
-      }
-    }
-  }
-}
diff --git a/src/nf_modules/deeptools/compute_matrix.nf b/src/nf_modules/deeptools/compute_matrix.nf
deleted file mode 100644
index 2b6e0e915b00fd1f475818352e11e4ce97225ee0..0000000000000000000000000000000000000000
--- a/src/nf_modules/deeptools/compute_matrix.nf
+++ /dev/null
@@ -1,38 +0,0 @@
-params.bw = "$baseDir/data/bigwig/*.bw"
-params.bed = "$baseDir/data/annot/*.bed"
-
-log.info "bigwig files : ${params.bw}"
-log.info "bed files : ${params.bed}"
-
-Channel
-  .fromPath( params.bw )
-  .ifEmpty { error "Cannot find any bigwig files matching: ${params.bw}" }
-  .set { bw_files }
-
-Channel
-  .fromPath( params.bed )
-  .ifEmpty { error "Cannot find any bed files matching: ${params.bed}" }
-  .map { it -> [(it.baseName =~ /([^\.]*)/)[0][1], it]}
-  .set { bed_files }
-
-process compute_matrix {
-  tag "$bed_file_id"
-  publishDir "results/mapping/region_matrix/", mode: 'copy'
-
-  input:
-    file bw from bw_files.collect()
-    set bed_file_id, file(bed) from bed_files.collect()
-
-  output:
-    set bed_file_id, "*.mat.gz" into region_matrix
-
-  script:
-"""
-computeMatrix scale-regions -S ${bw} \
-  -p ${task.cpus} \
-  -R ${bed} \
-  --beforeRegionStartLength 100 \
-  --afterRegionStartLength 100 \
-  -o ${bed_file_id}.mat.gz
-"""
-}
diff --git a/src/nf_modules/deeptools/main.nf b/src/nf_modules/deeptools/main.nf
index ccf3657a3752ffff487bf81135d672e18ef63b84..97e4027de2f91930c5fd227c29b0b563b9f3c027 100644
--- a/src/nf_modules/deeptools/main.nf
+++ b/src/nf_modules/deeptools/main.nf
@@ -1,16 +1,21 @@
-version = "3.1.1"
+version = "3.5.1"
 container_url = "lbmc/deeptools:${version}"
 
+params.index_bam = ""
+params.index_bam_out = ""
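+// index the bam and emit it together with its index, so downstream
+// processes receive both in a single (id, bam, bai) tuple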
 process index_bam {
   container = "${container_url}"
-  label "big_mem__cpus"
+  label "big_mem_multi_cpus"
   tag "$file_id"
+  if (params.index_bam_out != "") {
+    publishDir "results/${params.index_bam_out}", mode: 'copy'
+  }
 
   input:
     tuple val(file_id), path(bam)
 
   output:
-    tuple val(file_id), path("*.bam*"), emit: bam
+    tuple val(file_id), path("${bam}"), path("*.bam*"), emit: bam_idx
 
   script:
 """
@@ -18,11 +23,15 @@ sambamba index -t ${task.cpus} ${bam}
 """
 }
 
+params.bam_to_bigwig = ""
+params.bam_to_bigwig_out = ""
 process bam_to_bigwig {
   container = "${container_url}"
   label "big_mem_multi_cpus"
   tag "$file_id"
-
+  if (params.bam_to_bigwig_out != "") {
+    publishDir "results/${params.bam_to_bigwig_out}", mode: 'copy'
+  }
 
   input:
     tuple val(file_id), path(bam), path(idx)
@@ -37,10 +46,15 @@ bamCoverage -p ${task.cpus} --ignoreDuplicates -b ${bam} \
 """
 }
 
+params.compute_matrix = ""
+params.compute_matrix_out = ""
 process compute_matrix {
   container = "${container_url}"
   label "big_mem_multi_cpus"
   tag "${bed_file_id}"
+  if (params.compute_matrix_out != "") {
+    publishDir "results/${params.compute_matrix_out}", mode: 'copy'
+  }
 
   input:
     tuple val(file_id), path(bw)
@@ -60,10 +74,15 @@ computeMatrix scale-regions -S ${bw} \
 """
 }
 
+params.plot_profile = ""
+params.plot_profile_out = ""
 process plot_profile {
   container = "${container_url}"
   label "big_mem_mono_cpus"
   tag "$file_id"
+  if (params.compute_matrix_out != "") {
+    publishDir "results/${params.compute_matrix_out}", mode: 'copy'
+  }
 
   input:
     tuple val(file_id), path(matrix)
diff --git a/src/nf_modules/deeptools/plot_profile.config b/src/nf_modules/deeptools/plot_profile.config
deleted file mode 100644
index 9320784680c101ee6bf0b8fa77aff11afcd0f2a4..0000000000000000000000000000000000000000
--- a/src/nf_modules/deeptools/plot_profile.config
+++ /dev/null
@@ -1,57 +0,0 @@
-profiles {
-  docker {
-    docker.temp = "auto"
-    docker.enabled = true
-    process {
-      withName: plot_profile {
-        container = "lbmc/deeptools:3.0.2"
-        cpus = 4
-      }
-    }
-  }
-  singularity {
-    singularity.enabled = true
-    singularity.cacheDir = "./bin/"
-    process {
-      withName: compute_matrix {
-        container = "lbmc/deeptools:3.0.2"
-        cpus = 4
-      }
-    }
-  }
-  psmn{
-    singularity.enabled = true
-    singularity.cacheDir = "/.singularity_psmn/"
-    singularity.runOptions = "--bind /Xnfs,/scratch"
-    process{
-      withName: plot_profile {
-        container = "lbmc/deeptools:3.0.2"
-        executor = "sge"
-        clusterOptions = "-cwd -V"
-        cpus = 32
-        memory = "30GB"
-        time = "24h"
-        queue = "CLG6242deb384A,CLG6242deb384C,CLG5218deb192A,CLG5218deb192B,CLG5218deb192C,CLG5218deb192D,SLG5118deb96,SLG6142deb384A,SLG6142deb384B,SLG6142deb384C,SLG6142deb384D"
-        penv = "openmp32"
-      }
-    }
-  }
-  ccin2p3 {
-    singularity.enabled = true
-    singularity.cacheDir = "$baseDir/.singularity_in2p3/"
-    singularity.runOptions = "--bind /pbs,/sps,/scratch"
-    process{
-      withName: plot_profile {
-        container = "lbmc/deeptools:3.0.2"
-        scratch = true
-        stageInMode = "copy"
-        stageOutMode = "rsync"
-        executor = "sge"
-        clusterOptions = "-P P_lbmc -l os=cl7 -l sps=1 -r n\
-        "
-        cpus = 1
-        queue = "huge"
-      }
-    }
-  }
-}
diff --git a/src/nf_modules/deeptools/plot_profile.nf b/src/nf_modules/deeptools/plot_profile.nf
deleted file mode 100644
index dfce4e5504bdd4cede56fa156ffa4fa268a7fede..0000000000000000000000000000000000000000
--- a/src/nf_modules/deeptools/plot_profile.nf
+++ /dev/null
@@ -1,36 +0,0 @@
-params.matrix = "$baseDir/data/region_matrix/*.mat.gz"
-params.title = "plot title"
-
-log.info "matrix files : ${params.matrix}"
-log.info "plot title : ${params.title}"
-
-Channel
-  .fromPath( params.matrix )
-  .ifEmpty { error "Cannot find any matrix files matching: ${params.matrix}" }
-  .map { it -> [(it.baseName =~ /([^\.]*)/)[0][1], it]}
-  .set { matrix_files }
-
-process plot_profile {
-  tag "$file_id"
-  publishDir "results/mapping/region_matrix/", mode: 'copy'
-
-  input:
-    set file_id, file(matrix) from matrix_files
-
-  output:
-    set file_id, "*.pdf" into region_matrix
-
-  script:
-/*
-see more option at
-https://deeptools.readthedocs.io/en/develop/content/tools/plotProfile.html
-*/
-"""
-plotProfile -m ${matrix} \
-  --plotFileFormat=pdf \
-  -out ${file_id}.pdf \
-  --plotType=fill \
-  --perGroup \
-  --plotTitle "${params.title}"
-"""
-}
diff --git a/src/nf_modules/deeptools/tests.sh b/src/nf_modules/deeptools/tests.sh
deleted file mode 100755
index 4253689a7c94a62feec6be1536f11d394fec909b..0000000000000000000000000000000000000000
--- a/src/nf_modules/deeptools/tests.sh
+++ /dev/null
@@ -1,46 +0,0 @@
-#!/bin/sh
-
-cp data/tiny_dataset/map/tiny_v2.sort.bam \
-  data/tiny_dataset/map/tiny_v2_bis.sort.bam
-
-./nextflow src/nf_modules/deeptools/bam_to_bigwig.nf \
-  -c src/nf_modules/deeptools/bam_to_bigwig.config \
-  -profile docker \
-  --bam "data/tiny_dataset/map/tiny_v2*.sort.bam" \
-  -resume
-
-./nextflow src/nf_modules/deeptools/compute_matrix.nf \
-  -c src/nf_modules/deeptools/compute_matrix.config \
-  -profile docker \
-  --bw "results/mapping/bigwig/*.bw" \
-  --bed "data/tiny_dataset/annot/tiny.bed" \
-  -resume
-
-./nextflow src/nf_modules/deeptools/plot_profile.nf \
-  -c src/nf_modules/deeptools/plot_profile.config \
-  -profile docker \
-  --matrix "results/mapping/region_matrix/*.mat.gz" \
-  --title "plot title" \
-  -resume
-
-if [ -x "$(command -v singularity)" ]; then
-./nextflow src/nf_modules/deeptools/bam_to_bigwig.nf \
-  -c src/nf_modules/deeptools/bam_to_bigwig.config \
-  -profile docker \
-  --bam "data/tiny_dataset/map/tiny_v2*.sort.bam" \
-  -resume
-
-./nextflow src/nf_modules/deeptools/compute_matrix.nf \
-  -c src/nf_modules/deeptools/compute_matrix.config \
-  -profile docker \
-  --bw "results/mapping/bigwig/*.bw" \
-  --bed "data/tiny_dataset/annot/tiny.bed" \
-  -resume
-
-./nextflow src/nf_modules/deeptools/plot_profile.nf \
-  -c src/nf_modules/deeptools/plot_profile.config \
-  -profile docker \
-  --matrix "results/mapping/region_matrix/*.mat.gz" \
-  --title "plot title" \
-  -resume
-fi
diff --git a/src/nf_modules/emase-zero/main.nf b/src/nf_modules/emase-zero/main.nf
new file mode 100644
index 0000000000000000000000000000000000000000..87c5020d38950cbfe8618aa2fc9ed839492e6aac
--- /dev/null
+++ b/src/nf_modules/emase-zero/main.nf
@@ -0,0 +1,54 @@
+version = "0.3.1"
+container_url = "lbmc/emase-zero:${version}"
+
+include { g2tr } from "./../kb/main.nf"
+include { bam2ec } from "./../alntools/main.nf"
+include { fasta_to_transcripts_lengths } from "./../bioawk/main.nf"
+
+
+params.count = "-m 2"
+params.count_out = ""
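+// quantify transcript expression from a (bam, index) channel:
+// gtf -> gene-to-transcript map (g2tr), fasta -> transcript lengths,
+// bam -> equivalence classes (bam2ec), then EM quantification with emase-zero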
+workflow count {
+  take:
+    bam_idx
+    fasta
+    gtf
+
+  main:
+    g2tr(gtf)
+    fasta_to_transcripts_lengths(fasta)
+    bam2ec(bam_idx, fasta_to_transcripts_lengths.out.tsv.collect())
+    emase(bam2ec.out.bin, fasta.collect(), bam2ec.out.tsv, g2tr.out.g2t.collect())
+
+  emit:
+    count = emase.out.count
+}
+
+process emase {
+  container = "${container_url}"
+  label "big_mem_mono_cpus"
+  tag "$file_id"
+  if (params.count_out != "") {
+    publishDir "results/${params.count_out}", mode: 'copy'
+  }
+
+  input:
+    tuple val(file_id), path(bin)
+    tuple val(fasta_id), path(fasta)
+    tuple val(transcript_length_id), path(transcript_length)
+    tuple val(gene_to_transcript_id), path(gene_to_transcript)
+
+  output:
+    tuple val(file_id), path("${bin.simpleName}.quantified*"), emit: count
+    path "*_report.txt", emit: report
+
+  script:
+"""
+grep ">" ${fasta} | sed 's/>//' > tr_list.txt
+emase-zero ${params.count} \
+  -o ${bin.simpleName}.quantified \
+  -l ${transcript_length} \
+  -g ${gene_to_transcript} \
+  ${bin} &> ${file_id}_emase-zero_report.txt
+"""
+}
\ No newline at end of file
diff --git a/src/nf_modules/emase/main.nf b/src/nf_modules/emase/main.nf
new file mode 100644
index 0000000000000000000000000000000000000000..4388aad7656b1d648ac53398b3f99de1791b387a
--- /dev/null
+++ b/src/nf_modules/emase/main.nf
@@ -0,0 +1,24 @@
+version = "0.10.16"
+container_url = "lbmc/emase:${version}"
+
+params.diploid_genome = "-x"
+params.diploid_genome_out = ""
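+// pool two haploid fasta files into one diploid genome, with chromosome
+// names suffixed by the genome names given in -s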
+process diploid_genome {
+  container = "${container_url}"
+  label "big_mem_mono_cpus"
+  tag "${genome_a}-${genome_b}"
+  if (params.diploid_genome_out != "") {
+    publishDir "results/${params.diploid_genome_out}", mode: 'copy'
+  }
+
+  input:
+    tuple val(genome_a), path(fasta_a), val(genome_b), path(fasta_b)
+
+  output:
+    tuple val("${genome_a}_${genome_b}"), path(".fa"), emit: fasta
+
+  script:
+"""
+prepare-emase -G ${fasta_a},${fasta_b} -s ${genome_a},${genome_b} ${params.diploid_genome} 
+"""
+}
\ No newline at end of file
diff --git a/src/nf_modules/fastp/fastp_paired.config b/src/nf_modules/fastp/fastp_paired.config
deleted file mode 100644
index dc9bf8cde93bdc0e51dc318d2e4d9f9fb321ae82..0000000000000000000000000000000000000000
--- a/src/nf_modules/fastp/fastp_paired.config
+++ /dev/null
@@ -1,56 +0,0 @@
-profiles {
-  docker {
-    docker.temp = "auto"
-    docker.enabled = true
-    process {
-      withName: fastp_fastq {
-        container = "lbmc/fastp:0.19.7"
-        cpus = 1
-      }
-    }
-  }
-  singularity {
-    singularity.enabled = true
-    singularity.cacheDir = "./bin/"
-    process {
-      withName: fastp_fastq {
-        cpus = 1
-        container = "lbmc/fastp:0.19.7"
-      }
-    }
-  }
-  psmn{
-    singularity.enabled = true
-    singularity.cacheDir = "/.singularity_psmn/"
-    singularity.runOptions = "--bind /Xnfs,/scratch"
-    process{
-      withName: fastp_fastq {
-        container = "lbmc/fastp:0.19.7"
-        executor = "sge"
-        clusterOptions = "-cwd -V"
-        cpus = 1
-        memory = "20GB"
-        time = "12h"
-        queue = "monointeldeb128"
-      }
-    }
-  }
-  ccin2p3 {
-    singularity.enabled = true
-    singularity.cacheDir = "$baseDir/.singularity_in2p3/"
-    singularity.runOptions = "--bind /pbs,/sps,/scratch"
-    process{
-      withName: fastp_fastq {
-        container = "lbmc/fastp:0.19.7"
-        scratch = true
-        stageInMode = "copy"
-        stageOutMode = "rsync"
-        executor = "sge"
-        clusterOptions = "-P P_lbmc -l os=cl7 -l sps=1 -r n\
-        "
-        cpus = 1
-        queue = "huge"
-      }
-    }
-  }
-}
diff --git a/src/nf_modules/fastp/fastp_paired.nf b/src/nf_modules/fastp/fastp_paired.nf
deleted file mode 100644
index 88d6710bc93061804681bcceea1bce7ce10cd669..0000000000000000000000000000000000000000
--- a/src/nf_modules/fastp/fastp_paired.nf
+++ /dev/null
@@ -1,34 +0,0 @@
-params.fastq = "$baseDir/data/fastq/*_{1,2}.fastq"
-
-log.info "fastq files : ${params.fastq}"
-
-Channel
-  .fromFilePairs( params.fastq )
-  .ifEmpty { error "Cannot find any fastq files matching: ${params.fastq}" }
-  .set { fastq_files }
-
-process fastp_fastq {
-  tag "$pair_id"
-  publishDir "results/fastq/fastp/", mode: 'copy'
-
-  input:
-  set pair_id, file(reads) from fastq_files
-
-  output:
-    file "*.{zip,html}" into fastp_report
-    set pair_id, file "*.fastq.gz" fastq_trim_files
-
-  script:
-"""
-fastp --thread ${task.cpus} \
---qualified_quality_phred 20 \
---disable_length_filtering \
---detect_adapter_for_pe \
---in1 ${reads[0]} \
---in2 ${reads[1]} \
---out1 ${pair_id}_R1_trim.fastq.gz \
---out2 ${pair_id}_R2_trim.fastq.gz \
---html ${pair_id}.html \
---report_title ${pair_id}
-"""
-}
diff --git a/src/nf_modules/fastp/fastp_single.config b/src/nf_modules/fastp/fastp_single.config
deleted file mode 100644
index dc9bf8cde93bdc0e51dc318d2e4d9f9fb321ae82..0000000000000000000000000000000000000000
--- a/src/nf_modules/fastp/fastp_single.config
+++ /dev/null
@@ -1,56 +0,0 @@
-profiles {
-  docker {
-    docker.temp = "auto"
-    docker.enabled = true
-    process {
-      withName: fastp_fastq {
-        container = "lbmc/fastp:0.19.7"
-        cpus = 1
-      }
-    }
-  }
-  singularity {
-    singularity.enabled = true
-    singularity.cacheDir = "./bin/"
-    process {
-      withName: fastp_fastq {
-        cpus = 1
-        container = "lbmc/fastp:0.19.7"
-      }
-    }
-  }
-  psmn{
-    singularity.enabled = true
-    singularity.cacheDir = "/.singularity_psmn/"
-    singularity.runOptions = "--bind /Xnfs,/scratch"
-    process{
-      withName: fastp_fastq {
-        container = "lbmc/fastp:0.19.7"
-        executor = "sge"
-        clusterOptions = "-cwd -V"
-        cpus = 1
-        memory = "20GB"
-        time = "12h"
-        queue = "monointeldeb128"
-      }
-    }
-  }
-  ccin2p3 {
-    singularity.enabled = true
-    singularity.cacheDir = "$baseDir/.singularity_in2p3/"
-    singularity.runOptions = "--bind /pbs,/sps,/scratch"
-    process{
-      withName: fastp_fastq {
-        container = "lbmc/fastp:0.19.7"
-        scratch = true
-        stageInMode = "copy"
-        stageOutMode = "rsync"
-        executor = "sge"
-        clusterOptions = "-P P_lbmc -l os=cl7 -l sps=1 -r n\
-        "
-        cpus = 1
-        queue = "huge"
-      }
-    }
-  }
-}
diff --git a/src/nf_modules/fastp/fastp_single.nf b/src/nf_modules/fastp/fastp_single.nf
deleted file mode 100644
index 31262172f8e45376b09293b98bdd4fa5403c9b0c..0000000000000000000000000000000000000000
--- a/src/nf_modules/fastp/fastp_single.nf
+++ /dev/null
@@ -1,32 +0,0 @@
-params.fastq = "$baseDir/data/fastq/*.fastq"
-
-log.info "fastq files : ${params.fastq}"
-
-Channel
-  .fromPath( params.fastq )
-  .ifEmpty { error "Cannot find any fastq files matching: ${params.fastq}" }
-  .map { it -> [(it.baseName =~ /([^\.]*)/)[0][1], it]}
-  .set { fastq_files }
-
-process fastp_fastq {
-  tag "$file_id"
-  publishDir "results/fastq/fastp/", mode: 'copy'
-
-  input:
-  set file_id, file(reads) from fastq_files
-
-  output:
-    file "*.{zip,html}" into fastp_report
-    set file_id, file "*.fastq.gz" fastq_trim_files
-
-  script:
-"""
-fastp --thread ${task.cpus} \
---qualified_quality_phred 20 \
---disable_length_filtering \
---in1 ${reads} \
---out1 ${file_id}_R1_trim.fastq.gz \
---html ${file_id}.html \
---report_title ${file_id}
-"""
-}
diff --git a/src/nf_modules/fastp/main.nf b/src/nf_modules/fastp/main.nf
index 82f8d88062e982cd784f8673bc0267507dce3e69..2593eefecab24037db7d80f213b83db29e0092b4 100644
--- a/src/nf_modules/fastp/main.nf
+++ b/src/nf_modules/fastp/main.nf
@@ -1,103 +1,153 @@
 version = "0.20.1"
 container_url = "lbmc/fastp:${version}"
 
-process fastp {
-  container = "${container_url}"
-  label "big_mem_multi_cpus"
-  tag "$pair_id"
-  publishDir "results/QC/fastp/", mode: 'copy', pattern: "*.html"
+params.fastp_protocol = ""
 
-  input:
-  tuple val(pair_id), path(reads)
+params.fastp = ""
+params.fastp_out = ""
+workflow fastp {
+  take:
+    fastq
 
-  output:
-    tuple val(pair_id), path("*.fastq.gz"), emit: fastq
-    tuple val(pair_id), path("*.html"), emit: html
-    tuple val(pair_id), path("*.json"), emit: report
-
-  script:
-if (reads instanceof List)
-"""
-fastp --thread ${task.cpus} \
---qualified_quality_phred 20 \
---disable_length_filtering \
---detect_adapter_for_pe \
---in1 ${reads[0]} \
---in2 ${reads[1]} \
---out1 ${pair_id}_R1_trim.fastq.gz \
---out2 ${pair_id}_R2_trim.fastq.gz \
---html ${pair_id}.html \
---json ${pair_id}_fastp.json \
---report_title ${pair_id}
-"""
-else
-"""
-fastp --thread ${task.cpus} \
---qualified_quality_phred 20 \
---disable_length_filtering \
---detect_adapter_for_pe \
---in1 ${reads} \
---out1 ${pair_id}_trim.fastq.gz \
---html ${pair_id}.html \
---json ${pair_id}_fastp.json \
---report_title ${pair_id}
-"""
+  main:
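+  // dispatch on params.fastp_protocol: "accel_1splus" selects the trimming
+  // recipe for Accel-NGS 1S Plus libraries, anything else uses the default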
+  switch(params.fastp_protocol) {
+    case "accel_1splus":
+      fastp_accel_1splus(fastq)
+      fastp_accel_1splus.out.fastq.set{res_fastq}
+      fastp_accel_1splus.out.report.set{res_report}
+    break;
+    default:
+      fastp_default(fastq)
+      fastp_default.out.fastq.set{res_fastq}
+      fastp_default.out.report.set{res_report}
+    break;
+  }
+  emit:
+    fastq = res_fastq
+    report = res_report
 }
 
-process fastp_pairedend {
+process fastp_default {
   container = "${container_url}"
   label "big_mem_multi_cpus"
-  tag "$pair_id"
-  publishDir "results/QC/fastp/", mode: 'copy', pattern: "*.html"
+  tag "$file_prefix"
+  if (params.fastp_out != "") {
+    publishDir "results/${params.fastp_out}", mode: 'copy'
+  }
 
   input:
-  tuple val(pair_id), path(reads)
+  tuple val(file_id), path(reads)
 
   output:
-    tuple val(pair_id), path("*.fastq.gz"), emit: fastq
-    tuple val(pair_id), path("*.html"), emit: html
-    tuple val(pair_id), path("*.json"), emit: report
+    tuple val(file_id), path("*_trim.fastq.gz"), emit: fastq
+    tuple val(file_id), path("${file_prefix}.html"), emit: html
+    tuple val(file_id), path("${file_prefix}_fastp.json"), emit: report
 
   script:
-"""
-fastp --thread ${task.cpus} \
---qualified_quality_phred 20 \
---disable_length_filtering \
---detect_adapter_for_pe \
---in1 ${reads[0]} \
---in2 ${reads[1]} \
---out1 ${pair_id}_R1_trim.fastq.gz \
---out2 ${pair_id}_R2_trim.fastq.gz \
---html ${pair_id}.html \
---json ${pair_id}_fastp.json \
---report_title ${pair_id}
-"""
+  if (file_id instanceof List){
+    file_prefix = file_id[0]
+  } else {
+    file_prefix = file_id
+  }
+  if (reads.size() == 2)
+  """
+  fastp --thread ${task.cpus} \
+    --qualified_quality_phred 20 \
+    --disable_length_filtering \
+    --detect_adapter_for_pe \
+    ${params.fastp} \
+    --in1 ${reads[0]} \
+    --in2 ${reads[1]} \
+    --out1 ${file_prefix}_R1_trim.fastq.gz \
+    --out2 ${file_prefix}_R2_trim.fastq.gz \
+    --html ${file_prefix}.html \
+    --json ${file_prefix}_fastp.json \
+    --report_title ${file_prefix}
+  """
+  else
+  """
+  fastp --thread ${task.cpus} \
+    --qualified_quality_phred 20 \
+    --disable_length_filtering \
+    --detect_adapter_for_pe \
+    ${params.fastp} \
+    --in1 ${reads[0]} \
+    --out1 ${file_prefix}_trim.fastq.gz \
+    --html ${file_prefix}.html \
+    --json ${file_prefix}_fastp.json \
+    --report_title ${file_prefix}
+  """
 }
 
-process fastp_singleend {
+process fastp_accel_1splus {
   container = "${container_url}"
   label "big_mem_multi_cpus"
-  tag "$pair_id"
-  publishDir "results/QC/fastp/", mode: 'copy', pattern: "*.html"
+  tag "$file_prefix"
+  if (params.fastp_out != "") {
+    publishDir "results/${params.fastp_out}", mode: 'copy'
+  }
 
   input:
-  tuple val(pair_id), path(reads)
+  tuple val(file_id), path(reads)
 
   output:
-    tuple val(pair_id), path("*.fastq.gz"), emit: fastq
-    tuple val(pair_id), path("*.html"), emit: html
-    tuple val(pair_id), path("*.json"), emit: report
+    tuple val(file_id), path("*_trim.fastq.gz"), emit: fastq
+    tuple val(file_id), path("${file_prefix}.html"), emit: html
+    tuple val(file_id), path("${file_prefix}_fastp.json"), emit: report
 
   script:
-"""
-fastp --thread ${task.cpus} \
---qualified_quality_phred 20 \
---disable_length_filtering \
---detect_adapter_for_pe \
---in1 ${reads} \
---out1 ${pair_id}_trim.fastq.gz \
---html ${pair_id}.html \
---json ${pair_id}_fastp.json \
---report_title ${pair_id}
-"""
+  if (file_id instanceof List){
+    file_prefix = file_id[0]
+  } else {
+    file_prefix = file_id
+  }
+
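+  // two-pass trimming (assumed rationale): the first fastp pass only removes
+  // adapters and streams interleaved reads to stdout; the second pass trims
+  // the 10 bases added to each mate by the Accel-NGS 1S Plus chemistry and
+  // applies the usual quality filtering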
+  if (reads.size() == 2)
+  """
+  fastp --thread ${task.cpus} \
+    --disable_quality_filtering \
+    --disable_length_filtering \
+    --disable_trim_poly_g \
+    --detect_adapter_for_pe \
+    --stdout \
+    --in1 ${reads[0]} \
+    --in2 ${reads[1]} 2> /dev/null | \
+    fastp --thread ${task.cpus} \
+      --stdin \
+      --interleaved_in \
+      --trim_front1=10 \
+      --trim_front2=10 \
+      --disable_adapter_trimming \
+      --qualified_quality_phred 20 \
+      --disable_length_filtering \
+      --detect_adapter_for_pe \
+      ${params.fastp} \
+      --out1 ${file_prefix}_R1_trim.fastq.gz \
+      --out2 ${file_prefix}_R2_trim.fastq.gz \
+      --html ${file_prefix}.html \
+      --json ${file_prefix}_fastp.json \
+      --report_title ${file_prefix}
+  """
+  else
+  """
+  fastp --thread ${task.cpus} \
+    --disable_quality_filtering \
+    --disable_length_filtering \
+    --disable_trim_poly_g \
+    --detect_adapter_for_pe \
+    --stdout \
+    --in1 ${reads[0]} 2> /dev/null  | \
+    fastp --thread ${task.cpus} \
+      --disable_adapter_trimming \
+      --stdin \
+      --trim_front1=10 \
+      --qualified_quality_phred 20 \
+      --disable_length_filtering \
+      --detect_adapter_for_pe \
+      ${params.fastp} \
+      --out1 ${file_prefix}_trim.fastq.gz \
+      --html ${file_prefix}.html \
+      --json ${file_prefix}_fastp.json \
+      --report_title ${file_prefix}
+  """
 }
diff --git a/src/nf_modules/fastp/tests.sh b/src/nf_modules/fastp/tests.sh
deleted file mode 100755
index fae4f08a92c56752b4d467b5c31aca629eb2aac3..0000000000000000000000000000000000000000
--- a/src/nf_modules/fastp/tests.sh
+++ /dev/null
@@ -1,25 +0,0 @@
-./nextflow src/nf_modules/fastp/fastp_paired.nf \
-  -c src/nf_modules/fastp/fastp_paired.config \
-  -profile docker \
-  --fastq "data/tiny_dataset/fastq/tiny_R{1,2}.fastq" \
-  -resume
-
-./nextflow src/nf_modules/fastp/fastp_single.nf \
-  -c src/nf_modules/fastp/fastp_single.config \
-  -profile docker \
-  --fastq "data/tiny_dataset/fastq/tiny_S.fastq" \
-  -resume
-
-if [ -x "$(command -v singularity)" ]; then
-./nextflow src/nf_modules/fastp/fastp_paired.nf \
-  -c src/nf_modules/fastp/fastp_paired.config \
-  -profile singularity \
-  --fastq "data/tiny_dataset/fastq/tiny_R{1,2}.fastq" \
-  -resume
-
-./nextflow src/nf_modules/fastp/fastp_single.nf \
-  -c src/nf_modules/fastp/fastp_single.config \
-  -profile singularity \
-  --fastq "data/tiny_dataset/fastq/tiny_S.fastq" \
-  -resume
-fi
diff --git a/src/nf_modules/fastqc/fastqc_paired.config b/src/nf_modules/fastqc/fastqc_paired.config
deleted file mode 100644
index 5f12694b8f34eea4b873cd8ddc4cccdc680002a9..0000000000000000000000000000000000000000
--- a/src/nf_modules/fastqc/fastqc_paired.config
+++ /dev/null
@@ -1,56 +0,0 @@
-profiles {
-  docker {
-    docker.temp = "auto"
-    docker.enabled = true
-    process {
-      withName: fastqc_fastq {
-        container = "lbmc/fastqc:0.11.5"
-        cpus = 1
-      }
-    }
-  }
-  singularity {
-    singularity.enabled = true
-    singularity.cacheDir = "./bin/"
-    process {
-      withName: fastqc_fastq {
-        cpus = 1
-        container = "lbmc/fastqc:0.11.5"
-      }
-    }
-  }
-  psmn{
-    singularity.enabled = true
-    singularity.cacheDir = "/.singularity_psmn/"
-    singularity.runOptions = "--bind /Xnfs,/scratch"
-    process{
-      withName: fastqc_fastq {
-        container = "lbmc/fastqc:0.11.5"
-        executor = "sge"
-        clusterOptions = "-cwd -V"
-        cpus = 1
-        memory = "20GB"
-        time = "12h"
-        queue = "monointeldeb128"
-      }
-    }
-  }
-  ccin2p3 {
-    singularity.enabled = true
-    singularity.cacheDir = "$baseDir/.singularity_in2p3/"
-    singularity.runOptions = "--bind /pbs,/sps,/scratch"
-    process{
-      withName: fastqc_fastq {
-        container = "lbmc/fastqc:0.11.5"
-        scratch = true
-        stageInMode = "copy"
-        stageOutMode = "rsync"
-        executor = "sge"
-        clusterOptions = "-P P_lbmc -l os=cl7 -l sps=1 -r n\
-        "
-        cpus = 1
-        queue = "huge"
-      }
-    }
-  }
-}
diff --git a/src/nf_modules/fastqc/fastqc_paired.nf b/src/nf_modules/fastqc/fastqc_paired.nf
deleted file mode 100644
index 6755edec7dca244b1c1581dc6459cd2b8afcc996..0000000000000000000000000000000000000000
--- a/src/nf_modules/fastqc/fastqc_paired.nf
+++ /dev/null
@@ -1,26 +0,0 @@
-params.fastq = "$baseDir/data/fastq/*_{1,2}.fastq"
-
-log.info "fastq files : ${params.fastq}"
-
-Channel
-  .fromFilePairs( params.fastq )
-  .ifEmpty { error "Cannot find any fastq files matching: ${params.fastq}" }
-  .set { fastq_files }
-
-process fastqc_fastq {
-  tag "$pair_id"
-  publishDir "results/fastq/fastqc/", mode: 'copy'
-
-  input:
-  set pair_id, file(reads) from fastq_files
-
-  output:
-    file "*.{zip,html}" into fastqc_report
-
-  script:
-"""
-fastqc --quiet --threads ${task.cpus} --format fastq --outdir ./ \
-${reads[0]} ${reads[1]}
-"""
-}
-
diff --git a/src/nf_modules/fastqc/fastqc_single.config b/src/nf_modules/fastqc/fastqc_single.config
deleted file mode 100644
index 5f12694b8f34eea4b873cd8ddc4cccdc680002a9..0000000000000000000000000000000000000000
--- a/src/nf_modules/fastqc/fastqc_single.config
+++ /dev/null
@@ -1,56 +0,0 @@
-profiles {
-  docker {
-    docker.temp = "auto"
-    docker.enabled = true
-    process {
-      withName: fastqc_fastq {
-        container = "lbmc/fastqc:0.11.5"
-        cpus = 1
-      }
-    }
-  }
-  singularity {
-    singularity.enabled = true
-    singularity.cacheDir = "./bin/"
-    process {
-      withName: fastqc_fastq {
-        cpus = 1
-        container = "lbmc/fastqc:0.11.5"
-      }
-    }
-  }
-  psmn{
-    singularity.enabled = true
-    singularity.cacheDir = "/.singularity_psmn/"
-    singularity.runOptions = "--bind /Xnfs,/scratch"
-    process{
-      withName: fastqc_fastq {
-        container = "lbmc/fastqc:0.11.5"
-        executor = "sge"
-        clusterOptions = "-cwd -V"
-        cpus = 1
-        memory = "20GB"
-        time = "12h"
-        queue = "monointeldeb128"
-      }
-    }
-  }
-  ccin2p3 {
-    singularity.enabled = true
-    singularity.cacheDir = "$baseDir/.singularity_in2p3/"
-    singularity.runOptions = "--bind /pbs,/sps,/scratch"
-    process{
-      withName: fastqc_fastq {
-        container = "lbmc/fastqc:0.11.5"
-        scratch = true
-        stageInMode = "copy"
-        stageOutMode = "rsync"
-        executor = "sge"
-        clusterOptions = "-P P_lbmc -l os=cl7 -l sps=1 -r n\
-        "
-        cpus = 1
-        queue = "huge"
-      }
-    }
-  }
-}
diff --git a/src/nf_modules/fastqc/fastqc_single.nf b/src/nf_modules/fastqc/fastqc_single.nf
deleted file mode 100644
index ab7e22aade6df50ad6a87f574c9d1eb57cd35c02..0000000000000000000000000000000000000000
--- a/src/nf_modules/fastqc/fastqc_single.nf
+++ /dev/null
@@ -1,26 +0,0 @@
-params.fastq = "$baseDir/data/fastq/*.fastq"
-
-log.info "fastq files : ${params.fastq}"
-
-Channel
-  .fromPath( params.fastq )
-  .ifEmpty { error "Cannot find any fastq files matching: ${params.fastq}" }
-  .map { it -> [(it.baseName =~ /([^\.]*)/)[0][1], it]}
-  .set { fastq_files }
-
-process fastqc_fastq {
-  tag "$file_id"
-  publishDir "results/fastq/fastqc/", mode: 'copy'
-
-  input:
-  set file_id, file(reads) from fastq_files
-
-  output:
-    file "*.{zip,html}" into fastqc_report
-
-  script:
-"""
-fastqc --quiet --threads ${task.cpus} --format fastq --outdir ./ ${reads}
-"""
-}
-
diff --git a/src/nf_modules/fastqc/main.nf b/src/nf_modules/fastqc/main.nf
index 5e770297d16b3a7dd3591d73091698d01738c4a3..da0c7bc7c952a7c4161751eef34fcfcf1882d2bd 100644
--- a/src/nf_modules/fastqc/main.nf
+++ b/src/nf_modules/fastqc/main.nf
@@ -1,61 +1,31 @@
 version = "0.11.5"
 container_url = "lbmc/fastqc:${version}"
 
+params.fastqc_fastq = ""
+params.fastqc_fastq_out = ""
 process fastqc_fastq {
-  container = "${container_url}"
-  label "big_mem_mono_cpus"
-  tag "$pair_id"
-
-  input:
-  tuple val(pair_id), path(reads)
-
-  output:
-  path "*.{zip,html}", emit: report
-
-  script:
-if (reads instanceof List)
-"""
-fastqc --quiet --threads ${task.cpus} --format fastq --outdir ./ \
-  ${reads[0]} ${reads[1]}
-"""
-else
-"""
-  fastqc --quiet --threads ${task.cpus} --format fastq --outdir ./ ${reads}
-"""
-}
-
-process fastqc_fastq_pairedend {
-  container = "${container_url}"
-  label "big_mem_mono_cpus"
-  tag "$pair_id"
-
-  input:
-  tuple val(pair_id), path(reads)
-
-  output:
-  path "*.{zip,html}", emit: report
-
-  script:
-"""
-fastqc --quiet --threads ${task.cpus} --format fastq --outdir ./ \
-  ${reads[0]} ${reads[1]}
-"""
-}
-
-process fastqc_fastq_singleend {
   container = "${container_url}"
   label "big_mem_mono_cpus"
   tag "$file_id"
+  if (params.fastqc_fastq_out != "") {
+    publishDir "results/${params.fastqc_fastq_out}", mode: 'copy'
+  }
 
   input:
   tuple val(file_id), path(reads)
 
   output:
-    path "*.{zip,html}", emit: report
+  tuple val(file_id), path("*.{zip,html}"), emit: report
 
   script:
-"""
-  fastqc --quiet --threads ${task.cpus} --format fastq --outdir ./ ${reads}
-"""
-}
-
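+  // reads is a two-element list for paired-end input, a single file otherwise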
+  if (reads.size() == 2)
+  """
+  fastqc --quiet --threads ${task.cpus} --format fastq --outdir ./ \
+    ${params.fastqc_fastq} \
+    ${reads[0]} ${reads[1]}
+  """
+  else
+  """
+  fastqc --quiet --threads ${task.cpus} --format fastq --outdir ./ \
+    ${params.fastqc_fastq} ${reads[0]}
+  """
+}
\ No newline at end of file
diff --git a/src/nf_modules/fastqc/tests.sh b/src/nf_modules/fastqc/tests.sh
deleted file mode 100755
index 7002e71d6396d407f5ba9f37864ccdbf661597f3..0000000000000000000000000000000000000000
--- a/src/nf_modules/fastqc/tests.sh
+++ /dev/null
@@ -1,25 +0,0 @@
-./nextflow src/nf_modules/fastqc/fastqc_paired.nf \
-  -c src/nf_modules/fastqc/fastqc_paired.config \
-  -profile docker \
-  --fastq "data/tiny_dataset/fastq/tiny_R{1,2}.fastq" \
-  -resume
-
-./nextflow src/nf_modules/fastqc/fastqc_single.nf \
-  -c src/nf_modules/fastqc/fastqc_single.config \
-  -profile docker \
-  --fastq "data/tiny_dataset/fastq/tiny_S.fastq" \
-  -resume
-
-if [ -x "$(command -v singularity)" ]; then
-./nextflow src/nf_modules/fastqc/fastqc_paired.nf \
-  -c src/nf_modules/fastqc/fastqc_paired.config \
-  -profile singularity \
-  --fastq "data/tiny_dataset/fastq/tiny_R{1,2}.fastq" \
-  -resume
-
-./nextflow src/nf_modules/fastqc/fastqc_single.nf \
-  -c src/nf_modules/fastqc/fastqc_single.config \
-  -profile singularity \
-  --fastq "data/tiny_dataset/fastq/tiny_S.fastq" \
-  -resume
-fi
diff --git a/src/nf_modules/flexi_splitter/main.nf b/src/nf_modules/flexi_splitter/main.nf
new file mode 100644
index 0000000000000000000000000000000000000000..753ef1be2fd8586645c09ddde2d29342b7e5c103
--- /dev/null
+++ b/src/nf_modules/flexi_splitter/main.nf
@@ -0,0 +1,82 @@
+version = "1.0.2"
+container_url = "lbmc/flexi_splitter:${version}"
+
+params.split = ""
+params.split_out = ""
+
+
+workflow split {
+  take:
+    reads
+    config
+  main:
+    split_fastq(reads, config)
+    group_fastq(split_fastq.out.fastq_folder)
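+    // regroup the demultiplexed files into [sample_id, [R1, R2]] tuples,
+    // deriving sample_id by stripping the _R1/_R2 suffix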
+    group_fastq.out.fastq
+      .map{ it -> it[1] }
+      .flatten()
+      .collate(2)
+      .map{ it -> [it[0].simpleName - ~/_{0,1}R[12]/, it]}
+      .set{ splited_fastq }
+
+  emit:
+    fastq = splited_fastq
+}
+
+process split_fastq {
+  // You can get an example of config file here:
+  // src/nf_modules/flexi_splitter/marseq_flexi_splitter.yaml
+  container = "${container_url}"
+  label "big_mem_mono_cpus"
+  tag "$file_id"
+  if (params.split_out != "") {
+    publishDir "results/${params.split_out}", mode: 'copy'
+  }
+
+  input:
+  tuple val(file_id), path(reads)
+  tuple val(config_id), path(config)
+
+  output:
+  tuple val(file_id), path("split"), emit: fastq_folder
+
+  script:
+  if (file_id instanceof List){
+    file_prefix = file_id[0]
+  } else {
+    file_prefix = file_id
+  }
+
+  if (reads.size() == 2)
+  """
+  flexi_splitter ${params.split} -n 2 -f ${reads[0]},${reads[1]} -o split -c ${config}
+  """
+  else
+  """
+  flexi_splitter ${params.split} -n 1 -f ${reads[0]} -o split -c ${config}
+  """
+}
+
+process group_fastq {
+  container = "${container_url}"
+  label "big_mem_mono_cpus"
+  tag "$file_id"
+  if (params.split_out != "") {
+    publishDir "results/${params.split_out}", mode: 'copy'
+  }
+
+  input:
+  tuple val(file_id), path(reads_folder)
+
+  output:
+  tuple val(file_id), path("results/*"), emit: fastq
+
+  script:
+"""
+mkdir -p results/
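+# move each assigned fastq from split/<condition>/<file> to results/<condition>_<file>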
+find split/ -type "f" | \
+  grep -v "unassigned" | \
+  sed -E "s|(split/(.*)/(.*))|\\1 \\2_\\3|g" | \
+  awk '{system("mv "\$1" results/"\$2)}'
+"""
+}
\ No newline at end of file
diff --git a/src/nf_modules/flexi_splitter/marseq_flexi_splitter.yaml b/src/nf_modules/flexi_splitter/marseq_flexi_splitter.yaml
new file mode 100644
index 0000000000000000000000000000000000000000..72f46d2dc1833e1fab00b734aefadd55a70b44c1
--- /dev/null
+++ b/src/nf_modules/flexi_splitter/marseq_flexi_splitter.yaml
@@ -0,0 +1,41 @@
+PLATE:
+  coords:
+    reads: 0
+    start: 1
+    stop: 4
+    header: False
+  samples:
+    - name : Plate1
+      seq: GACT
+    - name : Plate2
+      seq: CATG
+    - name : Plate3
+      seq: CCAA
+    - name : Plate4
+      seq: CTGT
+    - name : Plate5
+      seq: GTAG
+    - name : Plate6
+      seq: TGAT
+    - name : Plate7
+      seq: ATCA
+    - name : Plate8
+      seq: TAGA
+
+conditions:
+    - Plate1 :
+      Plate1
+    - Plate2 :
+      Plate2
+    - Plate3 :
+      Plate3
+    - Plate4 :
+      Plate4
+    - Plate5 :
+      Plate5
+    - Plate6 :
+      Plate6
+    - Plate7 :
+      Plate7
+    - Plate8 :
+      Plate8
diff --git a/src/nf_modules/flexi_splitter/toy_file_paired.yaml b/src/nf_modules/flexi_splitter/toy_file_paired.yaml
new file mode 100644
index 0000000000000000000000000000000000000000..dec6e8c4df34121693f591fd68f49dc8f787e7d4
--- /dev/null
+++ b/src/nf_modules/flexi_splitter/toy_file_paired.yaml
@@ -0,0 +1,50 @@
+PCR:
+  coords:
+    reads: 3
+    start: 1
+    stop: 6
+    header: False
+  samples:
+    - name : PCR1
+      seq: NCAGTG
+    - name : PCR2
+      seq : CGATGT
+    - name : PCR3
+      seq: TTAGGC
+    - name : PCR4
+      seq : TGACCA
+    - name: PCR5
+      seq: NGAACG
+    - name: PCR6
+      seq: NCAACA
+RT:
+  coords:
+    reads: 1
+    start: 6
+    stop: 13
+    header: False
+  samples:
+    - name : RT1
+      seq: TAGTGCC
+    - name : RT2
+      seq: GCTACCC
+    - name: RT3
+      seq: ATCGACC
+    - name: RT4
+      seq: CGACTCC
+UMI:
+  coords:
+    reads: 1
+    start: 1
+    stop: 6
+    header: False
+conditions:
+  wt:
+    - RT1
+    - PCR1
+  ko:
+    - RT2
+    - PCR2
+  sample_paired:
+    - RT2
+    - PCR6
diff --git a/src/nf_modules/g2gtools/main.nf b/src/nf_modules/g2gtools/main.nf
index 18a05b640b79a30c862caab6538316da5d3031e7..15af58850fac947d903eec5b3459a4eb90b8b09d 100644
--- a/src/nf_modules/g2gtools/main.nf
+++ b/src/nf_modules/g2gtools/main.nf
@@ -1,10 +1,15 @@
 version = "0.2.8"
 container_url = "lbmc/g2gtools:${version}"
 
+params.vci_build = ""
+params.vci_build_out = ""
 process vci_build {
   container = "${container_url}"
   label "big_mem_multi_cpus"
   tag "$file_id"
+  if (params.vci_build_out != "") {
+    publishDir "results/${params.vci_build_out}", mode: 'copy'
+  }
 
   input:
     tuple val(file_id), path(vcf)
@@ -13,85 +18,130 @@ process vci_build {
     tuple val(file_id), path("*.vci.gz"), path("*.vci.gz.tbi"), emit: vci
     tuple val(file_id), path("*_report.txt"), emit: report
   script:
+  if (file_id instanceof List){
+    file_prefix = file_id[0]
+  } else {
+    file_prefix = file_id
+  }
+
   input_vcf = ""
   for (vcf_file in vcf) {
     input_vcf += " -i ${vcf_file}"
   }
 """
 g2gtools vcf2vci \
+  ${params.vci_build} \
   -p ${task.cpus} \
   -f ${fasta} \
   ${input_vcf} \
-  -s ${file_id} \
-  -o ${file_id}.vci 2> ${file_id}_g2gtools_vcf2vci_report.txt
+  -s ${file_prefix} \
+  -o ${file_prefix}.vci 2> ${file_prefix}_g2gtools_vcf2vci_report.txt
 """
 }
 
+params.incorporate_snp = ""
+params.incorporate_snp_out = ""
 process incorporate_snp {
   container = "${container_url}"
   label "big_mem_multi_cpus"
   tag "$file_id"
+  if (params.incorporate_snp_out != "") {
+    publishDir "results/${params.incorporate_snp_out}", mode: 'copy'
+  }
 
   input:
     tuple val(file_id), path(vci), path(tbi)
     tuple val(ref_id), path(fasta)
   output:
-    tuple val(file_id), path("${file_id}_snp.fa"), path("${vci}"), path("${tbi}"), emit: fasta
+    tuple val(file_id), path("${file_prefix}_snp.fa"), path("${vci}"), path("${tbi}"), emit: fasta
     tuple val(file_id), path("*_report.txt"), emit: report
   script:
+  if (file_id instanceof List){
+    file_prefix = file_id[0]
+  } else {
+    file_prefix = file_id
+  }
 """
 g2gtools patch \
+  ${params.incorporate_snp} \
   -p ${task.cpus} \
   -i ${fasta} \
   -c ${vci} \
-  -o ${file_id}_snp.fa 2> ${file_id}_g2gtools_path_report.txt
+  -o ${file_prefix}_snp.fa 2> ${file_prefix}_g2gtools_path_report.txt
 """
 }
 
+params.incorporate_indel = ""
+params.incorporate_indel_out = ""
 process incorporate_indel {
   container = "${container_url}"
   label "big_mem_multi_cpus"
   tag "$file_id"
+  if (params.incorporate_indel_out != "") {
+    publishDir "results/${params.incorporate_indel_out}", mode: 'copy'
+  }
 
   input:
     tuple val(file_id), path(fasta), path(vci), path(tbi)
   output:
-    tuple val(file_id), path("${file_id}_snp_indel.fa"), path("${vci}"), path("${tbi}"), emit: fasta
+    tuple val(file_id), path("${file_prefix}_snp_indel.fa"), path("${vci}"), path("${tbi}"), emit: fasta
     tuple val(file_id), path("*_report.txt"), emit: report
   script:
+  if (file_id instanceof List){
+    file_prefix = file_id[0]
+  } else {
+    file_prefix = file_id
+  }
 """
 g2gtools transform \
+  ${params.incorporate_indel} \
   -p ${task.cpus} \
   -i ${fasta} \
   -c ${vci} \
-  -o ${file_id}_snp_indel.fa 2> ${file_id}_g2gtools_transform_report.txt
+  -o ${file_prefix}_snp_indel.fa 2> ${file_prefix}_g2gtools_transform_report.txt
 """
 }
 
+params.convert_gtf = ""
+params.convert_gtf_out = ""
 process convert_gtf {
   container = "${container_url}"
   label "big_mem_mono_cpus"
   tag "$file_id"
+  if (params.convert_gtf_out != "") {
+    publishDir "results/${params.convert_gtf_out}", mode: 'copy'
+  }
 
   input:
     tuple val(file_id), path(vci), path(tbi)
     tuple val(annot_id), path(gtf)
   output:
-    tuple val(file_id), path("${file_id}.gtf"), emit: gtf
+    tuple val(file_id), path("${file_prefix}.gtf"), emit: gtf
     tuple val(file_id), path("*_report.txt"), emit: report
   script:
+  if (file_id instanceof List){
+    file_prefix = file_id[0]
+  } else {
+    file_prefix = file_id
+  }
 """
 g2gtools convert \
+  ${params.convert_gtf} \
   -i ${gtf} \
   -c ${vci} \
-  -o ${file_id}.gtf 2> ${file_id}_g2gtools_convert_report.txt
+  -o ${file_prefix}.gtf 2> ${file_prefix}_g2gtools_convert_report.txt
 """
 }
 
+params.convert_bed = ""
+params.convert_bed_out = ""
 process convert_bed {
   container = "${container_url}"
   label "big_mem_mono_cpus"
   tag "$file_id"
+  if (params.convert_bed_out != "") {
+    publishDir "results/${params.convert_bed_out}", mode: 'copy'
+  }
 
   input:
     tuple val(file_id), path(vci), path(tbi)
@@ -100,18 +150,29 @@ process convert_bed {
     tuple val(file_id), path("${file_id}.bed"), emit: bed
     tuple val(file_id), path("*_report.txt"), emit: report
   script:
+  if (file_id instanceof List){
+    file_prefix = file_id[0]
+  } else {
+    file_prefix = file_id
+  }
 """
 g2gtools convert \
+  ${params.convert_bed} \
   -i ${bed} \
   -c ${vci} \
   -o ${file_id}.bed 2> ${file_id}_g2gtools_convert_report.txt
 """
 }
 
+params.convert_bam = ""
+params.convert_bam_out = ""
 process convert_bam {
   container = "${container_url}"
   label "big_mem_mono_cpus"
   tag "${bam_id} ${file_id}"
+  if (params.convert_bam_out != "") {
+    publishDir "results/${params.convert_bam_out}", mode: 'copy'
+  }
 
   input:
     tuple val(file_id), path(vci), path(tbi)
@@ -120,8 +181,14 @@ process convert_bam {
     tuple val(file_id), path("${file_id}_${bam_id.baseName}.bam"), emit: bam
     tuple val(file_id), path("*_report.txt"), emit: report
   script:
+  if (file_id instanceof List){
+    file_prefix = file_id[0]
+  } else {
+    file_prefix = file_id
+  }
 """
 g2gtools convert \
+  ${params.convert_bam} \
   -i ${bam} \
   -c ${vci} \
   -o ${file_id}_${bam.baseName}.bam 2> ${file_id}_g2gtools_convert_report.txt
diff --git a/src/nf_modules/gatk3/main.nf b/src/nf_modules/gatk3/main.nf
index cb3656f4191dba556bcff54a7c6a675c49a5e93e..35a1ab7a479b41da6480535364c5caeca1c12e34 100644
--- a/src/nf_modules/gatk3/main.nf
+++ b/src/nf_modules/gatk3/main.nf
@@ -1,10 +1,15 @@
 version = "3.8.0"
 container_url = "lbmc/gatk:${version}"
 
+params.variant_calling = ""
+params.variant_calling_out = ""
 process variant_calling {
   container = "${container_url}"
   label "big_mem_multi_cpus"
   tag "$file_id"
+  if (params.variant_calling_out != "") {
+    publishDir "results/${params.variant_calling_out}", mode: 'copy'
+  }
 
   input:
     tuple val(file_id), path(bam), path(bai)
@@ -13,19 +18,30 @@ process variant_calling {
     tuple val(file_id), path("*.vcf"), emit: vcf
 
   script:
+  if (file_id instanceof List){
+    file_prefix = file_id[0]
+  } else {
+    file_prefix = file_id
+  }
 """
 gatk3 -T HaplotypeCaller \
   -nct ${task.cpus} \
+  ${params.variant_calling} \
   -R ${fasta} \
   -I ${bam} \
-  -o ${file_id}.vcf
+  -o ${file_prefix}.vcf
 """
 }
 
+params.filter_snp = ""
+params.filter_snp_out = ""
 process filter_snp {
   container = "${container_url}"
   label "big_mem_multi_cpus"
   tag "$file_id"
+  if (params.filter_snp_out != "") {
+    publishDir "results/${params.filter_snp_out}", mode: 'copy'
+  }
 
   input:
     tuple val(file_id), path(vcf)
@@ -33,20 +49,31 @@ process filter_snp {
   output:
     tuple val(file_id), path("*_snp.vcf"), emit: vcf
   script:
+  if (file_id instanceof List){
+    file_prefix = file_id[0]
+  } else {
+    file_prefix = file_id
+  }
 """
 gatk3 -T SelectVariants \
   -nct ${task.cpus} \
+  ${params.filter_snp} \
   -R ${fasta} \
   -V ${vcf} \
   -selectType SNP \
-  -o ${file_id}_snp.vcf
+  -o ${file_prefix}_snp.vcf
 """
 }
 
+params.filter_indels = ""
+params.filter_indels_out = ""
 process filter_indels {
   container = "${container_url}"
   label "big_mem_multi_cpus"
   tag "$file_id"
+  if (params.filter_indels_out != "") {
+    publishDir "results/${params.filter_indels_out}", mode: 'copy'
+  }
 
   input:
     tuple val(file_id), path(vcf)
@@ -54,22 +81,32 @@ process filter_indels {
   output:
     tuple val(file_id), path("*_indel.vcf"), emit: vcf
   script:
+  if (file_id instanceof List){
+    file_prefix = file_id[0]
+  } else {
+    file_prefix = file_id
+  }
 """
 gatk3 -T SelectVariants \
   -nct ${task.cpus} \
+  ${params.filter_indels} \
   -R ${fasta} \
   -V ${vcf} \
   -selectType INDEL \
-  -o ${file_id}_indel.vcf
+  -o ${file_prefix}_indel.vcf
 """
 }
 
-high_confidence_snp_filter = "(QD < 2.0) || (FS > 60.0) || (MQ < 40.0) || (MQRankSum < -12.5) || (ReadPosRankSum < -8.0) || (SOR > 4.0)"
-
+params.high_confidence_snp_filter = "(QD < 2.0) || (FS > 60.0) || (MQ < 40.0) || (MQRankSum < -12.5) || (ReadPosRankSum < -8.0) || (SOR > 4.0)"
+params.high_confidence_snp = "--filterExpression \"${params.high_confidence_snp_filter}\" --filterName \"basic_snp_filter\""
+params.high_confidence_snp_out = ""
 process high_confidence_snp {
   container = "${container_url}"
   label "big_mem_multi_cpus"
   tag "$file_id"
+  if (params.high_confidence_snp_out != "") {
+    publishDir "results/${params.high_confidence_snp_out}", mode: 'copy'
+  }
 
   input:
     tuple val(file_id), path(vcf)
@@ -77,23 +114,31 @@ process high_confidence_snp {
   output:
     tuple val(file_id), path("*_snp.vcf"), emit: vcf
   script:
+  if (file_id instanceof List){
+    file_prefix = file_id[0]
+  } else {
+    file_prefix = file_id
+  }
 """
 gatk3 -T VariantFiltration \
   -nct ${task.cpus} \
   -R ${fasta} \
   -V ${vcf} \
-  --filterExpression "${high_confidence_snp_filter}" \
-  --filterName "basic_snp_filter" \
-  -o ${file_id}_filtered_snp.vcf
+  ${params.high_confidence_snp} \
+  -o ${file_prefix}_filtered_snp.vcf
 """
 }
 
-high_confidence_indel_filter = "QD < 3.0 || FS > 200.0 || ReadPosRankSum < -20.0 || SOR > 10.0"
-
+params.high_confidence_indel_filter = "QD < 3.0 || FS > 200.0 || ReadPosRankSum < -20.0 || SOR > 10.0"
+params.high_confidence_indels = "--filterExpression \"${params.high_confidence_indel_filter}\" --filterName \"basic_indel_filter\""
+params.high_confidence_indels_out = ""
 process high_confidence_indels {
   container = "${container_url}"
   label "big_mem_multi_cpus"
   tag "$file_id"
+  if (params.high_confidence_indels_out != "") {
+    publishDir "results/${params.high_confidence_indels_out}", mode: 'copy'
+  }
 
   input:
     tuple val(file_id), path(vcf)
@@ -101,21 +146,30 @@ process high_confidence_indels {
   output:
     tuple val(file_id), path("*_indel.vcf"), emit: vcf
   script:
+  if (file_id instanceof List){
+    file_prefix = file_id[0]
+  } else {
+    file_prefix = file_id
+  }
 """
 gatk3 -T VariantFiltration \
   -nct ${task.cpus} \
   -R ${fasta} \
   -V ${vcf} \
-  --filterExpression "${high_confidence_indel_filter}" \
-  --filterName "basic_indel_filter" \
-  -o ${file_id}_filtered_indel.vcf
+  ${params.high_confidence_indels} \
+  -o ${file_prefix}_filtered_indel.vcf
 """
 }
 
+params.recalibrate_snp_table = ""
+params.recalibrate_snp_table_out = ""
 process recalibrate_snp_table {
   container = "${container_url}"
   label "big_mem_multi_cpus"
   tag "$file_id"
+  if (params.recalibrate_snp_table_out != "") {
+    publishDir "results/${params.recalibrate_snp_table_out}", mode: 'copy'
+  }
 
   input:
     tuple val(file_id), path(snp_file), path(indel_file), path(bam), path(bam_idx)
@@ -126,6 +180,7 @@ process recalibrate_snp_table {
 """
 gatk3 -T BaseRecalibrator \
   -nct ${task.cpus} \
+  ${params.recalibrate_snp_table} \
   -R ${fasta} \
   -I ${bam} \
   -knownSites ${snp_file} \
@@ -134,10 +189,15 @@ gatk3 -T BaseRecalibrator \
 """
 }
 
+params.recalibrate_snp = ""
+params.recalibrate_snp_out = ""
 process recalibrate_snp {
   container = "${container_url}"
   label "big_mem_multi_cpus"
   tag "$file_id"
+  if (params.recalibrate_snp_out != "") {
+    publishDir "results/${params.recalibrate_snp_out}", mode: 'copy'
+  }
 
   input:
     tuple val(file_id), path(snp_file), path(indel_file), path(bam), path(bam_idx)
@@ -146,22 +206,33 @@ process recalibrate_snp {
   output:
     tuple val(file_id), path("*.bam"), emit: bam
   script:
+  if (file_id instanceof List){
+    file_prefix = file_id[0]
+  } else {
+    file_prefix = file_id
+  }
 """
 gatk3 -T PrintReads \
   --use_jdk_deflater \
   --use_jdk_inflater \
+  ${params.recalibrate_snp} \
   -nct ${task.cpus} \
   -R ${fasta} \
   -I ${bam} \
   -BQSR recal_data_table \
-  -o ${file_id}_recal.bam
+  -o ${file_prefix}_recal.bam
 """
 }
 
+params.haplotype_caller = ""
+params.haplotype_caller_out = ""
 process haplotype_caller {
   container = "${container_url}"
   label "big_mem_multi_cpus"
   tag "$file_id"
+  if (params.haplotype_caller_out != "") {
+    publishDir "results/${params.haplotype_caller_out}", mode: 'copy'
+  }
 
   input:
     tuple val(file_id), path(bam)
@@ -169,21 +240,32 @@ process haplotype_caller {
   output:
     tuple val(file_id), path("*.gvcf"), emit: gvcf
   script:
+  if (file_id instanceof List){
+    file_prefix = file_id[0]
+  } else {
+    file_prefix = file_id
+  }
 """
 gatk3 -T HaplotypeCaller \
   -nct ${task.cpus} \
+  ${params.haplotype_caller} \
   -R ${fasta} \
   -I ${bam} \
   -ERC GVCF \
   -variant_index_type LINEAR -variant_index_parameter 128000 \
-  -o ${file_id}.gvcf
+  -o ${file_prefix}.gvcf
 """
 }
 
+params.gvcf_genotyping = ""
+params.gvcf_genotyping_out = ""
 process gvcf_genotyping {
   container = "${container_url}"
   label "big_mem_multi_cpus"
   tag "$file_id"
+  if (params.gvcf_genotyping_out != "") {
+    publishDir "results/${params.gvcf_genotyping_out}", mode: 'copy'
+  }
 
   input:
     tuple val(file_id), path(gvcf)
@@ -191,19 +273,30 @@ process gvcf_genotyping {
   output:
     tuple val(file_id), path("*.vcf"), emit: vcf
   script:
+  if (file_id instanceof List){
+    file_prefix = file_id[0]
+  } else {
+    file_prefix = file_id
+  }
 """
 gatk3 -T GenotypeGVCFs \
   -nct ${task.cpus} \
+  ${params.gvcf_genotyping} \
   -R ${fasta} \
   -V ${gvcf} \
-  -o ${file_id}_joint.vcf
+  -o ${file_prefix}_joint.vcf
 """
 }
 
+params.select_variants_snp = ""
+params.select_variants_snp_out = ""
 process select_variants_snp {
   container = "${container_url}"
   label "big_mem_multi_cpus"
   tag "$file_id"
+  if (params.select_variants_snp_out != "") {
+    publishDir "results/${params.select_variants_snp_out}", mode: 'copy'
+  }
 
   input:
     tuple val(file_id), path(vcf)
@@ -211,20 +304,31 @@ process select_variants_snp {
   output:
     tuple val(file_id), path("*_joint_snp.vcf"), emit: vcf
   script:
+  if (file_id instanceof List){
+    file_prefix = file_id[0]
+  } else {
+    file_prefix = file_id
+  }
 """
 gatk3 -T SelectVariants \
   -nct ${task.cpus} \
+  ${params.select_variants_snp} \
   -R ${fasta} \
   -V ${vcf} \
   -selectType SNP \
-  -o ${file_id}_joint_snp.vcf
+  -o ${file_prefix}_joint_snp.vcf
 """
 }
 
+params.select_variants_indels = ""
+params.select_variants_indels_out = ""
 process select_variants_indels {
   container = "${container_url}"
   label "big_mem_multi_cpus"
   tag "$file_id"
+  if (params.select_variants_indels_out != "") {
+    publishDir "results/${params.select_variants_indels_out}", mode: 'copy'
+  }
 
   input:
     tuple val(file_id), path(vcf)
@@ -232,20 +336,31 @@ process select_variants_indels {
   output:
     tuple val(file_id), path("*_joint_indel.vcf"), emit: vcf
   script:
+  if (file_id instanceof List){
+    file_prefix = file_id[0]
+  } else {
+    file_prefix = file_id
+  }
 """
 gatk3 -T SelectVariants \
   -nct ${task.cpus} \
+  ${params.select_variants_indels} \
   -R ${fasta} \
   -V ${vcf} \
   -selectType INDEL \
-  -o ${file_id}_joint_indel.vcf
+  -o ${file_prefix}_joint_indel.vcf
 """
 }
 
+params.personalized_genome = ""
+params.personalized_genome_out = ""
 process personalized_genome {
   container = "${container_url}"
   label "big_mem_mono_cpus"
   tag "$file_id"
+  if (params.personalized_genome_out != "") {
+    publishDir "results/${params.personalized_genome_out}", mode: 'copy'
+  }
 
   input:
     tuple val(file_id), path(vcf)
@@ -254,12 +369,17 @@ process personalized_genome {
     tuple val(file_id), path("*_genome.fasta"), emit: fasta
 
   script:
-  library = pick_library(file_id, library_list)
+  if (file_id instanceof List){
+    file_prefix = file_id[0]
+  } else {
+    file_prefix = file_id
+  }
 """
 gatk3 -T FastaAlternateReferenceMaker\
+  ${params.personalized_genome} \
   -R ${reference} \
   -V ${vcf} \
-  -o ${library}_genome.fasta
+  -o ${file_prefix}_genome.fasta
 """
 }
 
diff --git a/src/nf_modules/gatk4/main.nf b/src/nf_modules/gatk4/main.nf
index 22efa0e0c0253f9f6a57db529d018c7978a93568..885b3211f0586cdb7fa52a307b090bc3328b739f 100644
--- a/src/nf_modules/gatk4/main.nf
+++ b/src/nf_modules/gatk4/main.nf
@@ -1,10 +1,329 @@
 version = "4.2.0.0"
 container_url = "broadinstitute/gatk:${version}"
 
+def get_file_prefix(file_id) {
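+  // normalise the different file_id shapes (list, map with optional
+  // 'library'/'id' keys, plain value) into a single file prefix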
+  if (file_id instanceof List){
+    file_prefix = file_id[0]
+  } else if (file_id instanceof Map) {
+      library = file_id[0]
+      file_prefix = file_id[0]
+      if (file_id.containsKey('library')) {
+        library = file_id.library
+        file_prefix = file_id.id
+      }
+  } else {
+    file_prefix = file_id
+  }
+  return file_prefix
+}
+
+include {
+  index_fasta as samtools_index_fasta;
+  index_bam;
+} from './../samtools/main.nf'
+include {
+  index_fasta as picard_index_fasta;
+  index_bam as picard_index_bam;
+  mark_duplicate;
+} from './../picard/main.nf'
+
+params.variant_calling_out = ""
+workflow germline_cohort_data_variant_calling {
+  take:
+    bam
+    fasta
+  main:
+    // data preparation
+    mark_duplicate(bam)
+    index_bam(mark_duplicate.out.bam)
+    picard_index_bam(mark_duplicate.out.bam)
+    index_bam.out.bam_idx
+      .join(picard_index_bam.out.index)
+      .set{ bam_idx }
+    picard_index_fasta(fasta)
+    samtools_index_fasta(fasta)
+    fasta
+      .join(picard_index_fasta.out.index)
+      .join(samtools_index_fasta.out.index)
+      .set{ fasta_idx }
+    
+    // variant calling
+    call_variants_per_sample(
+      bam_idx,
+      fasta_idx.collect()
+    )
+    call_variants_all_sample(
+      call_variants_per_sample.out.gvcf,
+      fasta_idx
+    )
+  emit:
+    vcf = call_variants_all_sample.out.vcf
+}
+
+/*******************************************************************/
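+// base quality score recalibration: index the known-sites vcf, compute the
+// recalibration table from it, then apply the table to the bam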
+workflow base_quality_recalibrator {
+  take:
+    bam_idx
+    fasta_idx
+    vcf
+
+  main:
+    index_vcf(vcf)
+    compute_base_recalibration(
+      bam_idx,
+      fasta_idx,
+      index_vcf.out.vcf_idx
+    ) 
+    apply_base_recalibration(
+      bam_idx,
+      fasta_idx,
+      compute_base_recalibration.out.table
+    )
+  emit:
+    bam = apply_base_recalibration.out.bam
+}
+
+process index_vcf {
+  container = "${container_url}"
+  label "big_mem_mono_cpus"
+  tag "$file_id"
+  input:
+    tuple val(file_id), path(vcf)
+  output:
+    tuple val(file_id), path("${vcf}"), path("*"), emit: vcf_idx
+
+  script:
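+  // task.memory prints as e.g. "8 GB"; strip the unit to build the -Xmx value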
+  xmx_memory = "${task.memory}" - ~/\s*GB/
+  file_prefix = get_file_prefix(file_id)
+"""
+gatk --java-options "-Xmx${xmx_memory}G" IndexFeatureFile \
+  -I ${vcf}
+"""
+}
+
+process compute_base_recalibration {
+  container = "${container_url}"
+  label "big_mem_mono_cpus"
+  tag "$file_id"
+  input:
+    tuple val(file_id), path(bam), path(bam_idx), path(bam_idx_bis)
+    tuple val(ref_id), path(fasta), path(fai), path(dict)
+    tuple val(vcf_id), path(vcf), path(vcf_idx)
+  output:
+    tuple val(file_id), path("${bam.simpleName}.table"), emit: table
+
+  script:
+  xmx_memory = "${task.memory}" - ~/\s*GB/
+  file_prefix = get_file_prefix(file_id)
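+  // accept either a single known-sites vcf or a list of them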
+  def vcf_cmd = ""
+  if (vcf instanceof List){
+    for (vcf_file in vcf){
+      vcf_cmd += "--known-sites ${vcf_file} "
+    }
+  } else {
+    vcf_cmd = "--known-sites ${vcf} "
+  }
+"""
+ gatk --java-options "-Xmx${xmx_memory}G" BaseRecalibrator \
+   -I ${bam} \
+   -R ${fasta} \
+   ${vcf_cmd} \
+   -O ${bam.simpleName}.table
+"""
+}
+
+process apply_base_recalibration {
+  container = "${container_url}"
+  label "big_mem_mono_cpus"
+  tag "$file_id"
+  input:
+    tuple val(file_id), path(bam), path(bam_idx), path(bam_idx_bis)
+    tuple val(ref_id), path(fasta), path(fai), path(dict)
+    tuple val(table_id), path(table)
+  output:
+    tuple val(file_id), path("${bam.simpleName}_recalibrate.bam"), emit: bam
+
+  script:
+  xmx_memory = "${task.memory}" - ~/\s*GB/
+  file_prefix = get_file_prefix(file_id)
+"""
+ gatk --java-options "-Xmx${xmx_memory}G" ApplyBQSR \
+   -R ${fasta} \
+   -I ${bam} \
+   --bqsr-recal-file ${table} \
+   -O ${bam.simpleName}_recalibrate.bam
+"""
+}
+
+/*******************************************************************/
+params.variant_calling_gvcf_out = ""
+process call_variants_per_sample {
+  container = "${container_url}"
+  label "big_mem_mono_cpus"
+  tag "$file_id"
+  if (params.variant_calling_gvcf_out != "") {
+    publishDir "results/${params.variant_calling_gvcf_out}", mode: 'copy'
+  }
+
+  input:
+    tuple val(file_id), path(bam), path(bam_idx), path(bam_idx_bis)
+    tuple val(ref_id), path(fasta), path(fai), path(dict)
+  output:
+    tuple val(file_id), path("${bam.simpleName}.gvcf.gz"), emit: gvcf
+
+  script:
+  xmx_memory = "${task.memory}" - ~/\s*GB/
+  file_prefix = get_file_prefix(file_id)
+"""
+ gatk --java-options "-Xmx${xmx_memory}G" HaplotypeCaller  \
+   -R ${fasta} \
+   -I ${bam} \
+   -O ${bam.simpleName}.gvcf.gz \
+   -ERC GVCF
+"""
+}
+
+/*******************************************************************/
+
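+// joint genotyping: index and validate each per-sample gvcf, combine them
+// into one cohort gvcf, then genotype the combined file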
+workflow call_variants_all_sample {
+  take:
+    gvcf
+    fasta_idx
+
+  main:
+    index_gvcf(gvcf)
+    validate_gvcf(
+      index_gvcf.out.gvcf_idx,
+      fasta_idx.collect()
+    )
+    consolidate_gvcf(
+      validate_gvcf.out.gvcf
+      .groupTuple(),
+      fasta_idx.collect()
+    )
+    genomic_db_call(
+      consolidate_gvcf.out.gvcf_idx,
+      fasta_idx.collect()
+    )
+  emit:
+    vcf = genomic_db_call.out.vcf
+}
+
+process index_gvcf {
+  container = "${container_url}"
+  label "big_mem_mono_cpus"
+  tag "$file_id"
+  input:
+    tuple val(file_id), path(gvcf)
+  output:
+    tuple val(file_id), path("${gvcf}"), path("${gvcf}.tbi"), emit: gvcf_idx
+    tuple val(file_id), path("${gvcf.simpleName}_IndexFeatureFile_report.txt"), emit: report
+
+  script:
+  xmx_memory = "${task.memory}" - ~/\s*GB/
+  file_prefix = get_file_prefix(file_id)
+"""
+gatk --java-options "-Xmx${xmx_memory}G" IndexFeatureFile \
+      -I ${gvcf} 2> ${gvcf.simpleName}_IndexFeatureFile_report.txt
+"""
+}
+
+process validate_gvcf {
+  container = "${container_url}"
+  label "big_mem_mono_cpus"
+  tag "$file_id"
+  input:
+    tuple val(file_id), path(gvcf), path(gvcf_idx)
+    tuple val(ref_id), path(fasta), path(fai), path(dict)
+  output:
+    tuple val(file_id), path("${gvcf}"), path("${gvcf_idx}"), emit: gvcf
+
+  script:
+  xmx_memory = "${task.memory}" - ~/\s*GB/
+  file_prefix = get_file_prefix(file_id)
+"""
+gatk --java-options "-Xmx${xmx_memory}G" ValidateVariants \
+   -V ${gvcf} \
+   -R ${fasta} -gvcf
+"""
+}
+
+process consolidate_gvcf {
+  container = "${container_url}"
+  label "big_mem_mono_cpus"
+  tag "$file_id"
+  input:
+    tuple val(file_id), path(gvcf), path(gvcf_idx)
+    tuple val(ref_id), path(fasta), path(fai), path(dict)
+  output:
+    tuple val(file_id), path("${file_prefix}.gvcf"), path("${file_prefix}.gvcf.idx"), emit: gvcf_idx
+    tuple val(file_id), path("${file_prefix}_CombineGVCFs_report.txt"), emit: report
+
+  script:
+  xmx_memory = "${task.memory}" - ~/\s*GB/
+  file_prefix = get_file_prefix(file_id)
+  def gvcf_cmd = ""
+  if (gvcf instanceof List){
+    for (gvcf_file in gvcf){
+      gvcf_cmd += "-V ${gvcf_file} "
+    }
+  } else {
+    gvcf_cmd = "-V ${gvcf} "
+  }
+"""
+mkdir tmp
+gatk --java-options "-Xmx${xmx_memory}G" CombineGVCFs \
+    ${gvcf_cmd} \
+    -R ${fasta} \
+    -O ${file_prefix}.gvcf 2> ${file_prefix}_CombineGVCFs_report.txt
+gatk --java-options "-Xmx${xmx_memory}G" IndexFeatureFile \
+      -I ${file_prefix}.gvcf 2> ${file_prefix}_IndexFeatureFile_report.txt
+"""
+}
+
+process genomic_db_call {
+  container = "${container_url}"
+  label "big_mem_mono_cpus"
+  tag "$file_id"
+  if (params.variant_calling_out != "") {
+    publishDir "results/${params.variant_calling_out}", mode: 'copy'
+  }
+  input:
+    tuple val(file_id), path(gvcf), path(gvcf_idx)
+    tuple val(ref_id), path(fasta), path(fai), path(dict)
+  output:
+    tuple val(file_id), path("${gvcf.simpleName}.vcf.gz"), emit: vcf
+
+  script:
+  xmx_memory = "${task.memory}" - ~/\s*GB/
+  file_prefix = get_file_prefix(file_id)
+  def gvcf_cmd = ""
+  if (gvcf instanceof List){
+    for (gvcf_file in gvcf){
+      gvcf_cmd += "--V ${gvcf_file} "
+    }
+  } else {
+    gvcf_cmd = "--V ${gvcf} "
+  }
+"""
+mkdir tmp
+gatk --java-options "-Xmx${xmx_memory}G" GenotypeGVCFs \
+   -R ${fasta} \
+   ${gvcf_cmd} \
+   -O ${gvcf.simpleName}.vcf.gz \
+   --tmp-dir ./tmp
+"""
+}
+
+/*******************************************************************/
+params.variant_calling = ""
 process variant_calling {
   container = "${container_url}"
   label "big_mem_mono_cpus"
   tag "$file_id"
+  if (params.variant_calling_out != "") {
+    publishDir "results/${params.variant_calling_out}", mode: 'copy'
+  }
 
   input:
     tuple val(file_id), path(bam), path(bai)
@@ -14,18 +333,25 @@ process variant_calling {
 
   script:
   xmx_memory = "${task.memory}" - ~/\s*GB/
+  file_prefix = get_file_prefix(file_id)
 """
 gatk --java-options "-Xmx${xmx_memory}G" HaplotypeCaller \
+  ${params.variant_calling} \
   -R ${fasta} \
   -I ${bam} \
   -O ${bam.simpleName}.vcf
 """
 }
 
+params.filter_snp = ""
+params.filter_snp_out = ""
 process filter_snp {
   container = "${container_url}"
   label "big_mem_mono_cpus"
   tag "$file_id"
+  if (params.filter_snp_out != "") {
+    publishDir "results/${params.filter_snp_out}", mode: 'copy'
+  }
 
   input:
     tuple val(file_id), path(vcf)
@@ -34,8 +360,10 @@ process filter_snp {
     tuple val(file_id), path("*_snp.vcf"), emit: vcf
   script:
   xmx_memory = "${task.memory}" - ~/\s*GB/
+  file_prefix = get_file_prefix(file_id)
 """
 gatk --java-options "-Xmx${xmx_memory}G" SelectVariants \
+  ${params.filter_snp} \
   -R ${fasta} \
   -V ${vcf} \
   -select-type SNP \
@@ -43,10 +371,15 @@ gatk --java-options "-Xmx${xmx_memory}G" SelectVariants \
 """
 }
 
+params.filter_indels = ""
+params.filter_indels_out = ""
 process filter_indels {
   container = "${container_url}"
   label "big_mem_mono_cpus"
   tag "$file_id"
+  if (params.filter_indels_out != "") {
+    publishDir "results/${params.filter_indels_out}", mode: 'copy'
+  }
 
   input:
     tuple val(file_id), path(vcf)
@@ -55,8 +388,10 @@ process filter_indels {
     tuple val(file_id), path("*_indel.vcf"), emit: vcf
   script:
   xmx_memory = "${task.memory}" - ~/\s*GB/
+  file_prefix = get_file_prefix(file_id)
 """
 gatk --java-options "-Xmx${xmx_memory}G" SelectVariants \
+  ${params.filter_indels} \
   -R ${fasta} \
   -V ${vcf} \
   -select-type INDEL \
@@ -64,12 +399,16 @@ gatk --java-options "-Xmx${xmx_memory}G" SelectVariants \
 """
 }
 
-high_confidence_snp_filter = "(QD < 2.0) || (FS > 60.0) || (MQ < 40.0) || (MQRankSum < -12.5) || (ReadPosRankSum < -8.0) || (SOR > 4.0)"
-
+params.high_confidence_snp_filter = "(QD < 2.0) || (FS > 60.0) || (MQ < 40.0) || (MQRankSum < -12.5) || (ReadPosRankSum < -8.0) || (SOR > 4.0)"
+params.high_confidence_snp = "--filter-expression \"${params.high_confidence_snp_filter}\" --filter-name \"basic_snp_filter\""
+params.high_confidence_snp_out = ""
 process high_confidence_snp {
   container = "${container_url}"
   label "big_mem_mono_cpus"
   tag "$file_id"
+  if (params.high_confidence_snp_out != "") {
+    publishDir "results/${params.high_confidence_snp_out}", mode: 'copy'
+  }
 
   input:
     tuple val(file_id), path(vcf)
@@ -78,22 +417,26 @@ process high_confidence_snp {
     tuple val(file_id), path("*_snp.vcf"), emit: vcf
   script:
   xmx_memory = "${task.memory}" - ~/\s*GB/
+  file_prefix = get_file_prefix(file_id)
 """
 gatk --java-options "-Xmx${xmx_memory}G" VariantFiltration \
   -R ${fasta} \
   -V ${vcf} \
-  --filter-expression "${high_confidence_snp_filter}" \
-  --filter-name "basic_snp_filter" \
+  ${params.high_confidence_snp} \
   -O ${vcf.simpleName}_filtered_snp.vcf
 """
 }
 
-high_confidence_indel_filter = "QD < 3.0 || FS > 200.0 || ReadPosRankSum < -20.0 || SOR > 10.0"
-
+params.high_confidence_indel_filter = "QD < 3.0 || FS > 200.0 || ReadPosRankSum < -20.0 || SOR > 10.0"
+params.high_confidence_indels = "--filter-expression \"${params.high_confidence_indel_filter}\" --filter-name \"basic_indel_filter\""
+params.high_confidence_indels_out = ""
 process high_confidence_indels {
   container = "${container_url}"
   label "big_mem_mono_cpus"
   tag "$file_id"
+  if (params.high_confidence_indels_out != "") {
+    publishDir "results/${params.high_confidence_indels_out}", mode: 'copy'
+  }
 
   input:
     tuple val(file_id), path(vcf)
@@ -102,34 +445,41 @@ process high_confidence_indels {
     tuple val(file_id), path("*_indel.vcf"), emit: vcf
   script:
   xmx_memory = "${task.memory}" - ~/\s*GB/
+  file_prefix = get_file_prefix(file_id)
 """
 gatk --java-options "-Xmx${xmx_memory}G" VariantFiltration \
   -R ${fasta} \
   -V ${vcf} \
-  --filter-expression "${high_confidence_indel_filter}" \
-  --filter-name "basic_indel_filter" \
+  ${params.high_confidence_indels} \
   -O ${vcf.simpleName}_filtered_indel.vcf
 """
 }
 
+params.recalibrate_snp_table = ""
+params.recalibrate_snp_table_out = ""
 process recalibrate_snp_table {
   container = "${container_url}"
   label "big_mem_mono_cpus"
   tag "$file_id"
+  if (params.recalibrate_snp_table_out != "") {
+    publishDir "results/${params.recalibrate_snp_table_out}", mode: 'copy'
+  }
 
   input:
-    tuple val(file_id), path(snp_file), path(indel_file), path(bam), path(bam_idx)
+    tuple val(file_id), path(snp_file), path(indel_file), path(bam), path(bam_idx), path(bam_idx_bis)
     tuple val(ref_id), path(fasta), path(fai), path(dict)
   output:
     tuple val(file_id), path("recal_data_table"), emit: recal_table
   script:
   xmx_memory = "${task.memory}" - ~/\s*GB/
+  file_prefix = get_file_prefix(file_id)
 """
 gatk --java-options "-Xmx${xmx_memory}G" IndexFeatureFile \
   -I ${snp_file}
 gatk --java-options "-Xmx${xmx_memory}G" IndexFeatureFile \
   -I ${indel_file}
 gatk --java-options "-Xmx${xmx_memory}G" BaseRecalibrator \
+  ${params.recalibrate_snp_table} \
   -R ${fasta} \
   -I ${bam} \
   -known-sites ${snp_file} \
@@ -138,10 +488,15 @@ gatk --java-options "-Xmx${xmx_memory}G" BaseRecalibrator \
 """
 }
 
+params.recalibrate_snp = ""
+params.recalibrate_snp_out = ""
 process recalibrate_snp {
   container = "${container_url}"
   label "big_mem_mono_cpus"
   tag "$file_id"
+  if (params.recalibrate_snp_out != "") {
+    publishDir "results/${params.recalibrate_snp_out}", mode: 'copy'
+  }
 
   input:
     tuple val(file_id), path(snp_file), path(indel_file), path(bam), path(bam_idx), path(recal_table)
@@ -150,8 +505,10 @@ process recalibrate_snp {
     tuple val(file_id), path("*.bam"), emit: bam
   script:
   xmx_memory = "${task.memory}" - ~/\s*GB/
+  file_prefix = get_file_prefix(file_id)
 """
 gatk --java-options "-Xmx${xmx_memory}G" ApplyBQSR \
+  ${params.recalibrate_snp} \
   -R ${fasta} \
   -I ${bam} \
   --bqsr-recal-file recal_data_table \
@@ -159,10 +516,15 @@ gatk --java-options "-Xmx${xmx_memory}G" ApplyBQSR \
 """
 }
 
+params.haplotype_caller = ""
+params.haplotype_caller_out = ""
 process haplotype_caller {
   container = "${container_url}"
   label "big_mem_mono_cpus"
   tag "$file_id"
+  if (params.haplotype_caller_out != "") {
+    publishDir "results/${params.haplotype_caller_out}", mode: 'copy'
+  }
 
   input:
     tuple val(file_id), path(bam)
@@ -171,8 +533,10 @@ process haplotype_caller {
     tuple val(file_id), path("*.gvcf"), emit: gvcf
   script:
   xmx_memory = "${task.memory}" - ~/\s*GB/
+  file_prefix = get_file_prefix(file_id)
 """
 gatk --java-options "-Xmx${xmx_memory}G" HaplotypeCaller \
+  ${params.haplotype_caller} \
   -R ${fasta} \
   -I ${bam} \
   -ERC GVCF \
@@ -180,10 +544,15 @@ gatk --java-options "-Xmx${xmx_memory}G" HaplotypeCaller \
 """
 }
 
+params.gvcf_genotyping = ""
+params.gvcf_genotyping_out = ""
 process gvcf_genotyping {
   container = "${container_url}"
   label "big_mem_mono_cpus"
   tag "$file_id"
+  if (params.gvcf_genotyping_out != "") {
+    publishDir "results/${params.gvcf_genotyping_out}", mode: 'copy'
+  }
 
   input:
     tuple val(file_id), path(gvcf)
@@ -192,18 +561,25 @@ process gvcf_genotyping {
     tuple val(file_id), path("*.vcf.gz"), emit: vcf
   script:
   xmx_memory = "${task.memory}" - ~/\s*GB/
+  file_prefix = get_file_prefix(file_id)
 """
 gatk --java-options "-Xmx${xmx_memory}G" GenotypeGVCFs \
+  ${params.gvcf_genotyping} \
   -R ${fasta} \
   -V ${gvcf} \
   -O ${gvcf.simpleName}_joint.vcf.gz
 """
 }
 
+params.select_variants_snp = ""
+params.select_variants_snp_out = ""
 process select_variants_snp {
   container = "${container_url}"
   label "big_mem_mono_cpus"
   tag "$file_id"
+  if (params.select_variants_snp_out != "") {
+    publishDir "results/${params.select_variants_snp_out}", mode: 'copy'
+  }
 
   input:
     tuple val(file_id), path(vcf)
@@ -212,8 +588,10 @@ process select_variants_snp {
     tuple val(file_id), path("*_joint_snp.vcf"), emit: vcf
   script:
   xmx_memory = "${task.memory}" - ~/\s*GB/
+  file_prefix = get_file_prefix(file_id)
 """
 gatk --java-options "-Xmx${xmx_memory}GG" SelectVariants \
+  ${params.select_variants_snp} \
   -R ${fasta} \
   -V ${vcf} \
   -select-type SNP \
@@ -221,10 +599,15 @@ gatk --java-options "-Xmx${xmx_memory}G" SelectVariants \
 """
 }
 
+params.select_variants_indels = ""
+params.select_variants_indels_out = ""
 process select_variants_indels {
   container = "${container_url}"
   label "big_mem_mono_cpus"
   tag "$file_id"
+  if (params.select_variants_indels_out != "") {
+    publishDir "results/${params.select_variants_indels_out}", mode: 'copy'
+  }
 
   input:
     tuple val(file_id), path(vcf)
@@ -233,19 +616,26 @@ process select_variants_indels {
     tuple val(file_id), path("*_joint_indel.vcf"), emit: vcf
   script:
   xmx_memory = "${task.memory}" - ~/\s*GB/
+  file_prefix = get_file_prefix(file_id)
 """
 gatk --java-options "-Xmx${xmx_memory}G" SelectVariants \
+  ${params.select_variants_indels} \
   -R ${fasta} \
   -V ${vcf} \
   -select-type INDEL \
-  -O ${file_id}_joint_indel.vcf
+  -O ${file_prefix}_joint_indel.vcf
 """
 }
 
+params.personalized_genome = ""
+params.personalized_genome_out = ""
 process personalized_genome {
   container = "${container_url}"
   label "big_mem_mono_cpus"
   tag "$file_id"
+  if (params.personalized_genome_out != "") {
+    publishDir "results/${params.personalized_genome_out}", mode: 'copy'
+  }
 
   input:
     tuple val(file_id), path(vcf)
@@ -255,11 +645,15 @@ process personalized_genome {
 
   script:
   xmx_memory = "${task.memory}" - ~/\s*GB/
+  file_prefix = get_file_prefix(file_id)
 """
 gatk --java-options "-Xmx${xmx_memory}G" FastaAlternateReferenceMaker\
+  ${params.personalized_genome} \
   -R ${reference} \
   -V ${vcf} \
   -O ${vcf.simpleName}_genome.fasta
 """
 }
 
+
+
diff --git a/src/nf_modules/gffread/main.nf b/src/nf_modules/gffread/main.nf
new file mode 100644
index 0000000000000000000000000000000000000000..3f7a4db4bf7118b61b90c6d23bcf147c62436a99
--- /dev/null
+++ b/src/nf_modules/gffread/main.nf
@@ -0,0 +1,31 @@
+version = "0.12.2"
+container_url = "lbmc/gffread:${version}"
+
+params.gffread = ""
+params.gffread_out = ""
+process gffread {
+  container = "${container_url}"
+  label "big_mem_mono_cpus"
+  tag "$file_prefix"
+  if (params.gffread_out != "") {
+    publishDir "results/${params.gffread_out}", mode: 'copy'
+  }
+
+  input:
+  tuple val(file_id), path(gtf)
+  tuple val(fasta_id), path(fasta)
+
+  output:
+    tuple val(fasta_id), path("${file_prefix}.fasta"), emit: fasta
+
+  script:
+  if (file_id instanceof List){
+    file_prefix = file_id[0]
+  } else {
+    file_prefix = file_id
+  }
+  """
+  gffread ${gtf} -g ${fasta} -M -x dup_${file_prefix}.fasta
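+  # gffread -M merges identical transcripts; the awk pass then keeps a single
+  # header per unique sequence and drops the duplicated records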
+  awk 'BEGIN {i = 1;} { if (\$1 ~ /^>/) { tmp = h[i]; h[i] = \$1; } else if (!a[\$1]) { s[i] = \$1; a[\$1] = "1"; i++; } else { h[i] = tmp; } } END { for (j = 1; j < i; j++) { print h[j]; print s[j]; } }' < dup_${file_prefix}.fasta | grep -v -e "^\$" > ${file_prefix}.fasta
+  """
+}
diff --git a/src/nf_modules/hisat2/indexing.config b/src/nf_modules/hisat2/indexing.config
deleted file mode 100644
index 5d154dd49c0a40d51046a5134e71621a0f6db6f6..0000000000000000000000000000000000000000
--- a/src/nf_modules/hisat2/indexing.config
+++ /dev/null
@@ -1,57 +0,0 @@
-profiles {
-  docker {
-    docker.temp = "auto"
-    docker.enabled = true
-    process {
-      withName: index_fasta {
-        container = "lbmc/hisat2:2.1.0"
-        cpus = 4
-      }
-    }
-  }
-  singularity {
-    singularity.enabled = true
-    singularity.cacheDir = "./bin/"
-    process {
-      withName: index_fasta {
-        cpus = 4
-        container = "lbmc/hisat2:2.1.0"
-      }
-    }
-  }
-  psmn{
-    singularity.enabled = true
-    singularity.cacheDir = "/.singularity_psmn/"
-    singularity.runOptions = "--bind /Xnfs,/scratch"
-    process{
-      withName: index_fasta {
-        container = "lbmc/hisat2:2.1.0"
-        executor = "sge"
-        clusterOptions = "-cwd -V"
-        memory = "20GB"
-        cpus = 32
-        time = "12h"
-        queue = "CLG6242deb384A,CLG6242deb384C,CLG5218deb192A,CLG5218deb192B,CLG5218deb192C,CLG5218deb192D,SLG5118deb96,SLG6142deb384A,SLG6142deb384B,SLG6142deb384C,SLG6142deb384D"
-        penv = "openmp32"
-      }
-    }
-  }
-  ccin2p3 {
-    singularity.enabled = true
-    singularity.cacheDir = "$baseDir/.singularity_in2p3/"
-    singularity.runOptions = "--bind /pbs,/sps,/scratch"
-    process{
-      withName: index_fasta {
-        container = "lbmc/hisat2:2.1.0"
-        scratch = true
-        stageInMode = "copy"
-        stageOutMode = "rsync"
-        executor = "sge"
-        clusterOptions = "-P P_lbmc -l os=cl7 -l sps=1 -r n\
-        "
-        cpus = 1
-        queue = "huge"
-      }
-    }
-  }
-}
diff --git a/src/nf_modules/hisat2/indexing.nf b/src/nf_modules/hisat2/indexing.nf
deleted file mode 100644
index 1b11b3ef7ec09a21e84e2dfd33cf6bea129bfa52..0000000000000000000000000000000000000000
--- a/src/nf_modules/hisat2/indexing.nf
+++ /dev/null
@@ -1,32 +0,0 @@
-/*
-* Hisat2 :
-* Imputs : fastq files
-* Imputs : fasta files
-* Output : bam files
-*/
-
-/*                      fasta indexing                                     */
-params.fasta = "$baseDir/data/bam/*.fasta"
-
-log.info "fasta files : ${params.fasta}"
-
-Channel
-  .fromPath( params.fasta )
-  .ifEmpty { error "Cannot find any fasta files matching: ${params.fasta}" }
-  .set { fasta_file }
-
-process index_fasta {
-  tag "$fasta.baseName"
-  publishDir "results/mapping/index/", mode: 'copy'
-
-  input:
-    file fasta from fasta_file
-
-  output:
-    file "*.index*" into index_files
-
-  script:
-"""
-hisat2-build -p ${task.cpus} ${fasta} ${fasta.baseName}.index
-"""
-}
diff --git a/src/nf_modules/hisat2/mapping_paired.config b/src/nf_modules/hisat2/mapping_paired.config
deleted file mode 100644
index 8333a5853b2cf633222f7cf655f2f76753eb13f0..0000000000000000000000000000000000000000
--- a/src/nf_modules/hisat2/mapping_paired.config
+++ /dev/null
@@ -1,55 +0,0 @@
-profiles {
-  docker {
-    docker.temp = "auto"
-    docker.enabled = true
-    process {
-      withName: mapping_fastq {
-        cpus = 4
-        container = "lbmc/hisat2:2.1.0"
-      }
-    }
-  }
-  singularity {
-    singularity.enabled = true
-    singularity.cacheDir = "./bin/"
-    process {
-      withName: mapping_fastq {
-        cpus = 4
-        container = "lbmc/hisat2:2.1.0"
-      }
-    }
-  }
-  sge {
-    process{
-      withName: mapping_fastq {
-        beforeScript = "source /usr/share/lmod/lmod/init/bash; module use ~/privatemodules"
-        module = "hisat2/2.1.0:samtools/1.7"
-        executor = "sge"
-        clusterOptions = "-cwd -V"
-        memory = "20GB"
-        cpus = 32
-        time = "12h"
-        queue = "CLG6242deb384A,CLG6242deb384C,CLG5218deb192A,CLG5218deb192B,CLG5218deb192C,CLG5218deb192D,SLG6142deb384A,SLG6142deb384B,SLG6142deb384C"
-        penv = "openmp32"
-      }
-    }
-  }
-  ccin2p3 {
-    singularity.enabled = true
-    singularity.cacheDir = "$baseDir/.singularity_in2p3/"
-    singularity.runOptions = "--bind /pbs,/sps,/scratch"
-    process{
-      withName: mapping_fastq {
-        container = "lbmc/hisat2:2.1.0"
-        scratch = true
-        stageInMode = "copy"
-        stageOutMode = "rsync"
-        executor = "sge"
-        clusterOptions = "-P P_lbmc -l os=cl7 -l sps=1 -r n\
-        "
-        cpus = 1
-        queue = "huge"
-      }
-    }
-  }
-}
diff --git a/src/nf_modules/hisat2/mapping_paired.nf b/src/nf_modules/hisat2/mapping_paired.nf
deleted file mode 100644
index 323faf7cf47f280c65843871ddf4734fbeed1949..0000000000000000000000000000000000000000
--- a/src/nf_modules/hisat2/mapping_paired.nf
+++ /dev/null
@@ -1,58 +0,0 @@
-params.fastq = "$baseDir/data/fastq/*_{1,2}.fastq"
-params.index = "$baseDir/data/index/*.index.*"
-params.output = "results/"
-
-log.info "fastq files : ${params.fastq}"
-log.info "index files : ${params.index}"
-
-Channel
-  .fromFilePairs( params.fastq )
-  .ifEmpty { error "Cannot find any fastq files matching: ${params.fastq}" }
-  .set { fastq_files }
-Channel
-  .fromPath( params.index )
-  .ifEmpty { error "Cannot find any index files matching: ${params.index}" }
-  .set { index_files }
-
-process mapping_fastq {
-  tag "$pair_id"
-  publishDir "${params.output}", mode: 'copy'
-
-  input:
-  set pair_id, file(fastq_filtred) from fastq_files
-  file index from index_files.collect()
-
-  output:
-  file "*" into counts_files
-  set pair_id, "*.{bam, bai}" into bam_files
-  file "*_report.txt" into mapping_report
-
-  script:
-  index_id = index[0]
-  for (index_file in index) {
-    if (index_file =~ /.*\.1\.ht2/ && !(index_file =~ /.*\.rev\.1\.ht2/)) {
-        index_id = ( index_file =~ /(.*)\.1\.ht2/)[0][1]
-    }
-  }
-"""
-hisat2 -x ${index_id} \
-       -p ${task.cpus} \
-       -1 ${fastq_filtred[0]} \
-       -2 ${fastq_filtred[1]} \
-       --un-conc-gz ${pair_id}_notaligned_R%.fastq.gz \
-       --rna-strandness 'FR' \
-       --dta \
-       --no-softclip\
-       --trim3 1\
-       --trim5 1\
-       2> ${pair_id}_report.txt \
-| samtools view -bS -F 4 - \
-| samtools sort -@ ${task.cpus} -o ${pair_id}.bam \
-&& samtools index ${pair_id}.bam
-
-if grep -q "ERR" ${pair_id}.txt; then
-  exit 1
-fi
-
-"""
-}
diff --git a/src/nf_modules/hisat2/mapping_single.config b/src/nf_modules/hisat2/mapping_single.config
deleted file mode 100644
index 964db5213282012246a0fbfe63332abd37f53e5b..0000000000000000000000000000000000000000
--- a/src/nf_modules/hisat2/mapping_single.config
+++ /dev/null
@@ -1,55 +0,0 @@
-profiles {
-  docker {
-    docker.temp = "auto"
-    docker.enabled = true
-    process {
-      withName: mapping_fastq {
-        cpus = 4
-        container = "lbmc/hisat2:2.1.0"
-      }
-    }
-  }
-  singularity {
-    singularity.enabled = true
-    singularity.cacheDir = "./bin/"
-    process {
-      withName: mapping_fastq {
-        cpus = 4
-        container = "lbmc/hisat2:2.1.0"
-      }
-    }
-  }
-  sge {
-    process{
-      withName: mapping_fastq {
-        beforeScript = "source /usr/share/lmod/lmod/init/bash; module use ~/privatemodules"
-        module = "hisat2/2.1.0:samtools/1.7"
-        executor = "sge"
-        clusterOptions = "-cwd -V"
-        memory = "20GB"
-        cpus = 32
-        time = "12h"
-        queue = "CLG6242deb384A,CLG6242deb384C,CLG5218deb192A,CLG5218deb192B,CLG5218deb192C,CLG5218deb192D,SLG5118deb96,SLG6142deb384A,SLG6142deb384B,SLG6142deb384C,SLG6142deb384D"
-        penv = "openmp32"
-      }
-    }
-  }
-  ccin2p3 {
-    singularity.enabled = true
-    singularity.cacheDir = "$baseDir/.singularity_in2p3/"
-    singularity.runOptions = "--bind /pbs,/sps,/scratch"
-    process{
-      withName: mapping_fastq {
-        container = "lbmc/hisat2:2.1.0"
-        scratch = true
-        stageInMode = "copy"
-        stageOutMode = "rsync"
-        executor = "sge"
-        clusterOptions = "-P P_lbmc -l os=cl7 -l sps=1 -r n\
-        "
-        cpus = 1
-        queue = "huge"
-      }
-    }
-  }
-}
diff --git a/src/nf_modules/hisat2/mapping_single.nf b/src/nf_modules/hisat2/mapping_single.nf
deleted file mode 100644
index 0fdb729e90f9bdc097209846e20af08a6a0ce41c..0000000000000000000000000000000000000000
--- a/src/nf_modules/hisat2/mapping_single.nf
+++ /dev/null
@@ -1,53 +0,0 @@
-/*
-* for single-end data
-*/
-
-params.fastq = "$baseDir/data/fastq/*.fastq"
-params.index = "$baseDir/data/index/*.index*"
-
-log.info "fastq files : ${params.fastq}"
-log.info "index files : ${params.index}"
-
-Channel
-  .fromPath( params.fastq )
-  .ifEmpty { error "Cannot find any fastq files matching: ${params.fastq}" }
-  .map { it -> [(it.baseName =~ /([^\.]*)/)[0][1], it]}
-  .set { fastq_files }
-Channel
-  .fromPath( params.index )
-  .ifEmpty { error "Cannot find any index files matching: ${params.index}" }
-  .set { index_files }
-
-process mapping_fastq {
-  tag "$file_id"
-  publishDir "results/mapping/", mode: 'copy'
-
-  input:
-  set file_id, file(reads) from fastq_files
-  file index from index_files.collect()
-
-  output:
-  file "*" into count_files
-  set file_id, "*.bam" into bam_files
-  file "*_report.txt" into mapping_report
-
-  script:
-  index_id = index[0]
-  for (index_file in index) {
-    if (index_file =~ /.*\.1\.ht2/ && !(index_file =~ /.*\.rev\.1\.ht2/)) {
-        index_id = ( index_file =~ /(.*)\.1\.ht2/)[0][1]
-    }
-  }
-"""
-hisat2 -p ${task.cpus} \
- -x ${index_id} \
- -U ${reads} 2> \
-${file_id}_hisat2_report.txt | \
-samtools view -Sb - > ${file_id}.bam
-
-if grep -q "Error" ${file_id}_hisat2_report.txt; then
-  exit 1
-fi
-
-"""
-}
diff --git a/src/nf_modules/hisat2/tests.sh b/src/nf_modules/hisat2/tests.sh
deleted file mode 100755
index 50e4396652e867a3bc114db2ba79b4af4dc7de9a..0000000000000000000000000000000000000000
--- a/src/nf_modules/hisat2/tests.sh
+++ /dev/null
@@ -1,39 +0,0 @@
-./nextflow src/nf_modules/hisat2/indexing.nf \
-  -c src/nf_modules/hisat2/indexing.config \
-  -profile docker \
-  --fasta "data/tiny_dataset/fasta/tiny_v2.fasta" \
-  -resume
-
-./nextflow src/nf_modules/hisat2/mapping_paired.nf \
-  -c src/nf_modules/hisat2/mapping_paired.config \
-  -profile docker \
-  --index "results/mapping/index/tiny_v2.index*" \
-  --fastq "data/tiny_dataset/fastq/tiny*_R{1,2}.fastq" \
-  -resume
-
-./nextflow src/nf_modules/hisat2/mapping_single.nf \
-  -c src/nf_modules/hisat2/mapping_single.config \
-  -profile docker \
-  --index "results/mapping/index/tiny_v2.index*" \
-  --fastq "data/tiny_dataset/fastq/tiny*_S.fastq" \
-  -resume
-
-if [ -x "$(command -v singularity)" ]; then
-./nextflow src/nf_modules/hisat2/indexing.nf \
-  -c src/nf_modules/hisat2/indexing.config \
-  -profile singularity \
-  --fasta "data/tiny_dataset/fasta/tiny_v2.fasta" \
-  -resume
-
-./nextflow src/nf_modules/hisat2/mapping_paired.nf \
-  -c src/nf_modules/hisat2/mapping_paired.config \
-  -profile singularity \
-  --index "results/mapping/index/tiny_v2.index*" \
-  --fastq "data/tiny_dataset/fastq/tiny*_R{1,2}.fastq"
-
-./nextflow src/nf_modules/hisat2/mapping_single.nf \
-  -c src/nf_modules/hisat2/mapping_single.config \
-  -profile singularity \
-  --index "results/mapping/index/tiny_v2.index*" \
-  --fastq "data/tiny_dataset/fastq/tiny*_S.fastq"
-fi
diff --git a/src/nf_modules/htseq/htseq.config b/src/nf_modules/htseq/htseq.config
deleted file mode 100644
index 931ca7848e8c382befe60caf281b46dd723d6f19..0000000000000000000000000000000000000000
--- a/src/nf_modules/htseq/htseq.config
+++ /dev/null
@@ -1,75 +0,0 @@
-profiles {
-  docker {
-    docker.temp = "auto"
-    docker.enabled = true
-    process {
-      withName: sort_bam {
-        container = "lbmc/samtools:1.7"
-        cpus = 1
-      }
-      withName: counting {
-        container = "lbmc/htseq:0.11.2"
-        cpus = 1
-      }
-    }
-  }
-  singularity {
-    singularity.enabled = true
-    singularity.cacheDir = "./bin/"
-    process {
-      withName: sort_bam {
-        container = "lbmc/samtools:1.7"
-        cpus = 1
-      }
-      withName: counting {
-        container = "lbmc/htseq:0.11.2"
-        cpus = 1
-      }
-    }
-  }
-  psmn{
-    singularity.enabled = true
-    singularity.cacheDir = "/.singularity_psmn/"
-    singularity.runOptions = "--bind /Xnfs,/scratch"
-    process{
-      withName: sort_bam {
-        container = "lbmc/htseq:0.11.2"
-        executor = "sge"
-        clusterOptions = "-cwd -V"
-        cpus = 1
-        memory = "20GB"
-        time = "12h"
-        queue = "monointeldeb128"
-      }
-    }
-  }
-  ccin2p3 {
-    singularity.enabled = true
-    singularity.cacheDir = "$baseDir/.singularity_in2p3/"
-    singularity.runOptions = "--bind /pbs,/sps,/scratch"
-    process{
-      withName: sort_bam {
-        container = "lbmc/samtools:1.7"
-        scratch = true
-        stageInMode = "copy"
-        stageOutMode = "rsync"
-        executor = "sge"
-        clusterOptions = "-P P_lbmc -l os=cl7 -l sps=1 -r n\
-        "
-        cpus = 1
-        queue = "huge"
-      }
-      withName: counting {
-        container = "lbmc/htseq:0.11.2"
-        scratch = true
-        stageInMode = "copy"
-        stageOutMode = "rsync"
-        executor = "sge"
-        clusterOptions = "-P P_lbmc -l os=cl7 -l sps=1 -r n\
-        "
-        cpus = 1
-        queue = "huge"
-      }
-    }
-  }
-}
diff --git a/src/nf_modules/htseq/htseq.nf b/src/nf_modules/htseq/htseq.nf
deleted file mode 100644
index 7cade9a55b17ced135f32a36ffd90dc5354b72af..0000000000000000000000000000000000000000
--- a/src/nf_modules/htseq/htseq.nf
+++ /dev/null
@@ -1,52 +0,0 @@
-params.bam = "$baseDir/data/bam/*.bam"
-params.gtf = "$baseDir/data/annotation/*.gtf"
-
-log.info "bam files : ${params.bam}"
-log.info "gtf files : ${params.gtf}"
-
-Channel
-  .fromPath( params.bam )
-  .ifEmpty { error "Cannot find any fastq files matching: ${params.bam}" }
-  .map { it -> [(it.baseName =~ /([^\.]*)/)[0][1], it]}
-  .set { bam_files }
-Channel
-  .fromPath( params.gtf )
-  .ifEmpty { error "Cannot find any gtf file matching: ${params.gtf}" }
-  .set { gtf_file }
-
-process sort_bam {
-  tag "$file_id"
-  cpus 4
-
-  input:
-    set file_id, file(bam) from bam_files
-
-  output:
-    set file_id, "*_sorted.sam" into sorted_bam_files
-
-  script:
-"""
-# sort bam by name
-samtools sort -@ ${task.cpus} -n -O SAM -o ${file_id}_sorted.sam ${bam}
-"""
-}
-
-process counting {
-  tag "$file_id"
-  publishDir "results/quantification/", mode: 'copy'
-
-  input:
-  set file_id, file(bam) from sorted_bam_files
-  file gtf from gtf_file
-
-  output:
-  file "*.count" into count_files
-
-  script:
-"""
-htseq-count ${bam} ${gtf} \
--r pos --mode=intersection-nonempty -a 10 -s no -t exon -i gene_id \
-> ${file_id}.count
-"""
-}
-
diff --git a/src/nf_modules/htseq/tests.sh b/src/nf_modules/htseq/tests.sh
deleted file mode 100755
index eada26b6d280b80f6ebb7db3d3dc5bf652013b27..0000000000000000000000000000000000000000
--- a/src/nf_modules/htseq/tests.sh
+++ /dev/null
@@ -1,15 +0,0 @@
-./nextflow src/nf_modules/htseq/htseq.nf \
-  -c src/nf_modules/htseq/htseq.config \
-  -profile docker \
-  --gtf "data/tiny_dataset/annot/tiny.gff" \
-  --bam "data/tiny_dataset/map/tiny_v2.bam" \
-  -resume
-
-if [ -x "$(command -v singularity)" ]; then
-./nextflow src/nf_modules/htseq/htseq.nf \
-  -c src/nf_modules/htseq/htseq.config \
-  -profile singularity \
-  --gtf "data/tiny_dataset/annot/tiny.gff" \
-  --bam "data/tiny_dataset/map/tiny_v2.bam" \
-  -resume
-fi
diff --git a/src/nf_modules/kallisto/indexing.config b/src/nf_modules/kallisto/indexing.config
deleted file mode 100644
index 0a17f88d40b83db0846e34ba66006436d9a6ecbe..0000000000000000000000000000000000000000
--- a/src/nf_modules/kallisto/indexing.config
+++ /dev/null
@@ -1,57 +0,0 @@
-profiles {
-  docker {
-    docker.temp = "auto"
-    docker.enabled = true
-    process {
-      withLabel: kallisto {
-        container = "lbmc/kallisto:0.44.0"
-        cpus = 4
-      }
-    }
-  }
-  singularity {
-    singularity.enabled = true
-    singularity.cacheDir = "./bin/"
-    process {
-      withLabel: kallisto {
-        container = "lbmc/kallisto:0.44.0"
-        cpus = 4
-      }
-    }
-  }
-  psmn{
-    singularity.enabled = true
-    singularity.cacheDir = "/.singularity_psmn/"
-    singularity.runOptions = "--bind /Xnfs,/scratch"
-    process{
-      withLabel: kallisto {
-        container = "lbmc/kallisto:0.44.0"
-        executor = "sge"
-        clusterOptions = "-cwd -V"
-        cpus = 32
-        memory = "30GB"
-        time = "24h"
-        queue = "CLG6242deb384A,CLG6242deb384C,CLG5218deb192A,CLG5218deb192B,CLG5218deb192C,CLG5218deb192D,SLG5118deb96,SLG6142deb384A,SLG6142deb384B,SLG6142deb384C,SLG6142deb384D"
-        penv = "openmp32"
-      }
-    }
-  }
-  ccin2p3 {
-    singularity.enabled = true
-    singularity.cacheDir = "$baseDir/.singularity_in2p3/"
-    singularity.runOptions = "--bind /pbs,/sps,/scratch"
-    process{
-      withLabel: kallisto {
-        container = "lbmc/kallisto:0.44.0"
-        scratch = true
-        stageInMode = "copy"
-        stageOutMode = "rsync"
-        executor = "sge"
-        clusterOptions = "-P P_lbmc -l os=cl7 -l sps=1 -r n\
-        "
-        cpus = 1
-        queue = "huge"
-      }
-    }
-  }
-}
diff --git a/src/nf_modules/kallisto/indexing.nf b/src/nf_modules/kallisto/indexing.nf
deleted file mode 100644
index 5b0fba5e5502b5ee3c3d10b3697233a594cec0ac..0000000000000000000000000000000000000000
--- a/src/nf_modules/kallisto/indexing.nf
+++ /dev/null
@@ -1,28 +0,0 @@
-params.fasta = "$baseDir/data/bam/*.fasta"
-
-log.info "fasta files : ${params.fasta}"
-
-Channel
-  .fromPath( params.fasta )
-  .ifEmpty { error "Cannot find any bam files matching: ${params.fasta}" }
-  .set { fasta_file }
-
-process index_fasta {
-  tag "$fasta.baseName"
-  publishDir "results/mapping/index/", mode: 'copy'
-  label "kallisto"
-
-  input:
-    file fasta from fasta_file
-
-  output:
-    file "*.index*" into index_files
-    file "*_kallisto_report.txt" into index_files_report
-
-  script:
-"""
-kallisto index -k 31 --make-unique -i ${fasta.baseName}.index ${fasta} \
-2> ${fasta.baseName}_kallisto_report.txt
-"""
-}
-
diff --git a/src/nf_modules/kallisto/main.nf b/src/nf_modules/kallisto/main.nf
index bb80e4b361d08b3a34598ee1857268005545a030..8d3fe1d3a9bbdd1435863f93ab0b74c1d7bcd994 100644
--- a/src/nf_modules/kallisto/main.nf
+++ b/src/nf_modules/kallisto/main.nf
@@ -1,68 +1,67 @@
 version = "0.44.0"
 container_url = "lbmc/kallisto:${version}"
 
+params.index_fasta = "-k 31 --make-unique"
+params.index_fasta_out = ""
 process index_fasta {
   container = "${container_url}"
   label "big_mem_multi_cpus"
-  tag "$fasta.baseName"
+  tag "$file_id"
+  if (params.index_fasta_out != "") {
+    publishDir "results/${params.index_fasta_out}", mode: 'copy'
+  }
 
   input:
-    path fasta
+    tuple val(file_id), path(fasta)
 
   output:
-    path "*.index*", emit: index
-    path "*_report.txt", emit: report
+    tuple val(file_id), path("*.index*"), emit: index
+    tuple val(file_id), path("*_report.txt"), emit: report
 
   script:
 """
-kallisto index -k 31 --make-unique -i ${fasta.baseName}.index ${fasta} \
+kallisto index ${params.index_fasta} -i ${fasta.baseName}.index ${fasta} \
 2> ${fasta.baseName}_kallisto_index_report.txt
 """
 }
 
-
-process mapping_fastq_pairedend {
+params.mapping_fastq = "--bias --bootstrap-samples 100"
+params.mapping_fastq_out = ""
+process mapping_fastq {
   container = "${container_url}"
   label "big_mem_multi_cpus"
   tag "$pair_id"
+  if (params.mapping_fastq_out != "") {
+    publishDir "results/${params.mapping_fastq_out}", mode: 'copy'
+  }
 
   input:
-  path index
-  tuple val(pair_id), path(reads)
-
-  output:
-  path "${pair_id}", emit: counts
-  path "*_report.txt", emit: report
-
-  script:
-"""
-mkdir ${pair_id}
-kallisto quant -i ${index} -t ${task.cpus} \
---bias --bootstrap-samples 100 -o ${pair_id} \
-${reads[0]} ${reads[1]} &> ${pair_id}_kallisto_mapping_report.txt
-"""
-}
-
-
-process mapping_fastq_singleend {
-  container = "${container_url}"
-  label "big_mem_multi_cpus"
-  tag "$file_id"
-
-  input:
-  path index
+  tuple val(index_id), path(index)
   tuple val(file_id), path(reads)
 
   output:
-  tuple val(file_id), path("${pair_id}"), emit: counts
-  path "*_report.txt", emit: report
+  tuple val(file_id), path("${file_prefix}"), emit: counts
+  tuple val(file_id), path("*_report.txt"), emit: report
 
   script:
-"""
-mkdir ${file_id}
-kallisto quant -i ${index} -t ${task.cpus} --single \
---bias --bootstrap-samples 100 -o ${file_id} \
--l ${params.mean} -s ${params.sd} \
-${reads} &> ${reads.simpleName}_kallisto_mapping_report.txt
-"""
+  if (file_id instanceof List){
+    file_prefix = file_id[0]
+  } else {
+    file_prefix = file_id
+  }
+
+  if (reads.size() == 2)
+  """
+  mkdir ${file_prefix}
+  kallisto quant -i ${index} -t ${task.cpus} \
+  ${params.mapping_fastq} -o ${file_prefix} \
+  ${reads[0]} ${reads[1]} &> ${file_prefix}_kallisto_mapping_report.txt
+  """
+  else
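+  // NOTE: in single-end mode kallisto quant also needs the fragment length
+  // mean and sd (-l, -s); supply them through params.mapping_fastq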
+  """
+  mkdir ${file_prefix}
+  kallisto quant -i ${index} -t ${task.cpus} --single \
+  ${params.mapping_fastq} -o ${file_prefix} \
+  ${reads[0]} &> ${file_prefix}_kallisto_mapping_report.txt
+  """
 }
diff --git a/src/nf_modules/kallisto/mapping_paired.config b/src/nf_modules/kallisto/mapping_paired.config
deleted file mode 100644
index 962e1fdc923692b4395d72dfee42ec80001861b1..0000000000000000000000000000000000000000
--- a/src/nf_modules/kallisto/mapping_paired.config
+++ /dev/null
@@ -1,59 +0,0 @@
-profiles {
-  docker {
-    docker.temp = "auto"
-    docker.enabled = true
-    process {
-      withLabel: kallisto {
-        container = "lbmc/kallisto:0.44.0"
-        cpus = 4
-      }
-    }
-  }
-  singularity {
-    singularity.enabled = true
-    singularity.cacheDir = "./bin/"
-    process {
-      withLabel: kallisto {
-        container = "lbmc/kallisto:0.44.0"
-        cpus = 4
-      }
-    }
-  }
-  psmn{
-    singularity.enabled = true
-    singularity.cacheDir = "/.singularity_psmn/"
-    singularity.runOptions = "--bind /Xnfs,/scratch"
-    process{
-      withLabel: kallisto {
-        container = "lbmc/kallisto:0.44.0"
-        executor = "sge"
-        clusterOptions = "-cwd -V"
-        cpus = 32
-        memory = "30GB"
-        time = "24h"
-        queue = "CLG6242deb384A,CLG6242deb384C,CLG5218deb192A,CLG5218deb192B,CLG5218deb192C,CLG5218deb192D,SLG5118deb96,SLG6142deb384A,SLG6142deb384B,SLG6142deb384C,SLG6142deb384D"
-        penv = "openmp32"
-      }
-    }
-  }
-  ccin2p3 {
-    singularity.enabled = true
-    singularity.cacheDir = "$baseDir/.singularity_in2p3/"
-    singularity.runOptions = "--bind /pbs,/sps,/scratch"
-    process{
-      withLabel: kallisto {
-        container = "lbmc/kallisto:0.44.0"
-        scratch = true
-        stageInMode = "copy"
-        stageOutMode = "rsync"
-        executor = "sge"
-        clusterOptions = "-P P_lbmc -l os=cl7 -l sps=1 -r n\
-        "
-        cpus = 1
-        queue = "huge"
-      }
-    }
-  }
-}
-
-
diff --git a/src/nf_modules/kallisto/mapping_paired.nf b/src/nf_modules/kallisto/mapping_paired.nf
deleted file mode 100644
index 9b45d01218ff87bb2491bfe4ec21f0f241bf6665..0000000000000000000000000000000000000000
--- a/src/nf_modules/kallisto/mapping_paired.nf
+++ /dev/null
@@ -1,38 +0,0 @@
-params.fastq = "$baseDir/data/fastq/*_{1,2}.fastq"
-params.index = "$baseDir/data/index/*.index.*"
-params.output = "results/mapping/quantification/"
-
-log.info "fastq files : ${params.fastq}"
-log.info "index files : ${params.index}"
-log.info "output folder : ${params.output}"
-
-Channel
-  .fromFilePairs( params.fastq )
-  .ifEmpty { error "Cannot find any fastq files matching: ${params.fastq}" }
-  .set { fastq_files }
-Channel
-  .fromPath( params.index )
-  .ifEmpty { error "Cannot find any index files matching: ${params.index}" }
-  .set { index_files }
-
-process mapping_fastq {
-  tag "$reads"
-  publishDir "${params.output}", mode: 'copy'
-  label "kallisto"
-
-  input:
-  set pair_id, file(reads) from fastq_files
-  file index from index_files.collect()
-
-  output:
-  file "*" into counts_files
-
-  script:
-"""
-mkdir ${pair_id}
-kallisto quant -i ${index} -t ${task.cpus} \
---bias --bootstrap-samples 100 -o ${pair_id} \
-${reads[0]} ${reads[1]} &> ${pair_id}/kallisto_report.txt
-"""
-}
-
diff --git a/src/nf_modules/kallisto/mapping_single.config b/src/nf_modules/kallisto/mapping_single.config
deleted file mode 100644
index 962e1fdc923692b4395d72dfee42ec80001861b1..0000000000000000000000000000000000000000
--- a/src/nf_modules/kallisto/mapping_single.config
+++ /dev/null
@@ -1,59 +0,0 @@
-profiles {
-  docker {
-    docker.temp = "auto"
-    docker.enabled = true
-    process {
-      withLabel: kallisto {
-        container = "lbmc/kallisto:0.44.0"
-        cpus = 4
-      }
-    }
-  }
-  singularity {
-    singularity.enabled = true
-    singularity.cacheDir = "./bin/"
-    process {
-      withLabel: kallisto {
-        container = "lbmc/kallisto:0.44.0"
-        cpus = 4
-      }
-    }
-  }
-  psmn{
-    singularity.enabled = true
-    singularity.cacheDir = "/.singularity_psmn/"
-    singularity.runOptions = "--bind /Xnfs,/scratch"
-    process{
-      withLabel: kallisto {
-        container = "lbmc/kallisto:0.44.0"
-        executor = "sge"
-        clusterOptions = "-cwd -V"
-        cpus = 32
-        memory = "30GB"
-        time = "24h"
-        queue = "CLG6242deb384A,CLG6242deb384C,CLG5218deb192A,CLG5218deb192B,CLG5218deb192C,CLG5218deb192D,SLG5118deb96,SLG6142deb384A,SLG6142deb384B,SLG6142deb384C,SLG6142deb384D"
-        penv = "openmp32"
-      }
-    }
-  }
-  ccin2p3 {
-    singularity.enabled = true
-    singularity.cacheDir = "$baseDir/.singularity_in2p3/"
-    singularity.runOptions = "--bind /pbs,/sps,/scratch"
-    process{
-      withLabel: kallisto {
-        container = "lbmc/kallisto:0.44.0"
-        scratch = true
-        stageInMode = "copy"
-        stageOutMode = "rsync"
-        executor = "sge"
-        clusterOptions = "-P P_lbmc -l os=cl7 -l sps=1 -r n\
-        "
-        cpus = 1
-        queue = "huge"
-      }
-    }
-  }
-}
-
-
diff --git a/src/nf_modules/kallisto/mapping_single.nf b/src/nf_modules/kallisto/mapping_single.nf
deleted file mode 100644
index 7ae2128012fccab58732ca8b8781833b325da586..0000000000000000000000000000000000000000
--- a/src/nf_modules/kallisto/mapping_single.nf
+++ /dev/null
@@ -1,42 +0,0 @@
-params.fastq = "$baseDir/data/fastq/*.fastq"
-params.index = "$baseDir/data/index/*.index*"
-params.mean = 200
-params.sd = 100
-
-log.info "fastq files : ${params.fastq}"
-log.info "index files : ${params.index}"
-log.info "mean read size: ${params.mean}"
-log.info "sd read size: ${params.sd}"
-
-Channel
-  .fromPath( params.fastq )
-  .ifEmpty { error "Cannot find any fastq files matching: ${params.fastq}" }
-  .map { it -> [(it.baseName =~ /([^\.]*)/)[0][1], it]}
-  .set { fastq_files }
-Channel
-  .fromPath( params.index )
-  .ifEmpty { error "Cannot find any index files matching: ${params.index}" }
-  .set { index_files }
-
-process mapping_fastq {
-  tag "$file_id"
-  publishDir "results/mapping/quantification/", mode: 'copy'
-  label "kallisto"
-
-  input:
-  set file_id, file(reads) from fastq_files
-  file index from index_files.collect()
-
-  output:
-  file "*" into count_files
-
-  script:
-"""
-mkdir ${file_id}
-kallisto quant -i ${index} -t ${task.cpus} --single \
---bias --bootstrap-samples 100 -o ${file_id} \
--l ${params.mean} -s ${params.sd} \
-${reads} &> ${file_id}/kallisto_report.txt
-"""
-}
-
diff --git a/src/nf_modules/kallisto/tests.sh b/src/nf_modules/kallisto/tests.sh
deleted file mode 100755
index 412a29754d7948ba562b31e7b54038db80f60241..0000000000000000000000000000000000000000
--- a/src/nf_modules/kallisto/tests.sh
+++ /dev/null
@@ -1,41 +0,0 @@
-./nextflow src/nf_modules/kallisto/indexing.nf \
-  -c src/nf_modules/kallisto/indexing.config \
-  -profile docker \
-  --fasta "data/tiny_dataset/fasta/tiny_v2.fasta" \
-  -resume
-
-./nextflow src/nf_modules/kallisto/mapping_single.nf \
-  -c src/nf_modules/kallisto/mapping_single.config \
-  -profile docker \
-  --index "results/mapping/index/tiny_v2.index" \
-  --fastq "data/tiny_dataset/fastq/tiny*_S.fastq" \
-  -resume
-
-./nextflow src/nf_modules/kallisto/mapping_paired.nf \
-  -c src/nf_modules/kallisto/mapping_paired.config \
-  -profile docker \
-  --index "results/mapping/index/tiny_v2.index" \
-  --fastq "data/tiny_dataset/fastq/tiny*_R{1,2}.fastq" \
-  -resume
-
-if [ -x "$(command -v singularity)" ]; then
-./nextflow src/nf_modules/kallisto/indexing.nf \
-  -c src/nf_modules/kallisto/indexing.config \
-  -profile singularity \
-  --fasta "data/tiny_dataset/fasta/tiny_v2.fasta" \
-  -resume
-
-./nextflow src/nf_modules/kallisto/mapping_single.nf \
-  -c src/nf_modules/kallisto/mapping_single.config \
-  -profile singularity \
-  --index "results/mapping/index/tiny_v2.index" \
-  --fastq "data/tiny_dataset/fastq/tiny*_S.fastq" \
-  -resume
-
-./nextflow src/nf_modules/kallisto/mapping_paired.nf \
-  -c src/nf_modules/kallisto/mapping_paired.config \
-  -profile singularity \
-  --index "results/mapping/index/tiny_v2.index" \
-  --fastq "data/tiny_dataset/fastq/tiny*_R{1,2}.fastq" \
-  -resume
-fi
diff --git a/src/nf_modules/kb/main.nf b/src/nf_modules/kb/main.nf
new file mode 100644
index 0000000000000000000000000000000000000000..ca4e7552c58fe8ee6118b843d5b0228f2eada860
--- /dev/null
+++ b/src/nf_modules/kb/main.nf
@@ -0,0 +1,463 @@
+version = "0.26.0"
+container_url = "lbmc/kb:${version}"
+
+params.index_fasta = ""
+params.index_fasta_out = ""
+
+workflow index_fasta {
+  take:
+    fasta
+    gtf
+
+  main:
+    tr2g(gtf)
+    index_default(fasta, gtf, tr2g.out.t2g)
+
+  emit:
+    index = index_default.out.index
+    t2g = index_default.out.t2g
+    report = index_default.out.report
+}
+
+process tr2g {
+  // create a transcript-to-gene table from the gtf when no transcript-to-gene file is provided
+  container = "${container_url}"
+  label "big_mem_mono_cpus"
+  tag "$file_id"
+  if (params.index_fasta_out != "") {
+    publishDir "results/${params.index_fasta_out}", mode: 'copy'
+  }
+
+  input:
+    tuple val(file_id), path(gtf)
+
+  output:
+    tuple val(file_id), path("t2g.txt"), emit: t2g
+
+  script:
+  """
+  t2g.py --gtf ${gtf}
+  sort -k1 -u t2g_dup.txt > t2g.txt
+  """
+}
+
+process g2tr {
+  // create a gene-to-transcript table from the gtf when no transcript-to-gene file is provided
+  container = "${container_url}"
+  label "big_mem_mono_cpus"
+  tag "$file_id"
+  if (params.index_fasta_out != "") {
+    publishDir "results/${params.index_fasta_out}", mode: 'copy'
+  }
+
+  input:
+    tuple val(file_id), path(gtf)
+
+  output:
+    tuple val(file_id), path("g2t.txt"), emit: g2t
+
+  script:
+  """
+  t2g.py --gtf ${gtf}
+  sort -k1 -u t2g_dup.txt > t2g.txt
+  awk 'BEGIN{OFS="\\t"}{print \$2, \$1}' t2g.txt > g2t.txt
+  """
+}
+
+process index_default {
+  container = "${container_url}"
+  label "big_mem_mono_cpus"
+  tag "$file_id"
+  if (params.index_fasta_out != "") {
+    publishDir "results/${params.index_fasta_out}", mode: 'copy'
+  }
+
+  input:
+    tuple val(file_id), path(fasta)
+    tuple val(gtf_id), path(gtf)
+    tuple val(t2g_id), path(transcript_to_gene)
+
+  output:
+    tuple val(file_id), path("*.idx"), emit: index
+    tuple val(t2g_id), path("${transcript_to_gene}"), emit: t2g
+    tuple val(file_id), path("*_report.txt"), emit: report
+
+  script:
+"""
+kb ref \
+  -i ${fasta.simpleName}.idx \
+  -g ${transcript_to_gene} \
+  ${params.index_fasta} \
+  -f1 cdna.fa ${fasta} ${gtf} > ${fasta.simpleName}_kb_index_report.txt
+"""
+}
+
+
+include { split } from "./../flexi_splitter/main.nf"
+
+params.kb_protocol = "10x_v3"
+params.count = ""
+params.count_out = ""
+workflow count {
+  take:
+    index
+    fastq
+    transcript_to_gene
+    whitelist
+    config
+
+  main:
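+  // substitute a placeholder tuple when no whitelist is provided; downstream
+  // processes drop the -w option when they see "NO WHITELIST"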
+  whitelist
+    .ifEmpty(["NO WHITELIST", 0])
+    .set{ whitelist_optional }
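+  // dispatch on the protocol: MARS-Seq fastq files are first demultiplexed
+  // with flexi_splitter's split, other protocols go straight to kb count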
+  switch(params.kb_protocol) {
+    case "marsseq":
+      split(fastq, config.collect())
+      kb_marseq(index.collect(), split.out.fastq, transcript_to_gene.collect(), whitelist_optional.collect())
+      kb_marseq.out.counts.set{res_counts}
+      kb_marseq.out.report.set{res_report}
+    break;
+    default:
+      kb_default(index.collect(), fastq, transcript_to_gene.collect(), whitelist_optional.collect())
+      kb_default.out.counts.set{res_counts}
+      kb_default.out.report.set{res_report}
+    break;
+  }
+
+  emit:
+    counts = res_counts
+    report = res_report
+}
+
+process kb_default {
+  container = "${container_url}"
+  label "big_mem_multi_cpus"
+  tag "$file_prefix"
+  if (params.count_out != "") {
+    publishDir "results/${params.count_out}", mode: 'copy'
+  }
+
+  input:
+  tuple val(index_id), path(index)
+  tuple val(file_id), path(reads)
+  tuple val(t2g_id), path(transcript_to_gene)
+  tuple val(whitelist_id), path(whitelist)
+
+  output:
+  tuple val(file_id), path("${file_prefix}"), emit: counts
+  tuple val(file_id), path("*_report.txt"), emit: report
+
+  script:
+  def kb_memory = "${task.memory}" - ~/GB/
+  if (file_id instanceof List){
+    file_prefix = file_id[0]
+  } else {
+    file_prefix = file_id
+  }
+  def whitelist_param = ""
+  if (whitelist_id != "NO WHITELIST"){
+    whitelist_param = "-w ${whitelist}"
+  }
+
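+  // no single-end branch here, presumably because 10x libraries are always
+  // paired (barcode and UMI on R1, cDNA on R2)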
+  if (reads.size() == 2)
+  """
+  mkdir ${file_prefix}
+  kb count  -t ${task.cpus} \
+    -m ${kb_memory} \
+    -i ${index} \
+    -g ${transcript_to_gene} \
+    -o ${file_prefix} \
+    ${whitelist_param} \
+    -x 10XV3 \
+    --h5ad \
+    ${params.count} \
+    ${reads[0]} ${reads[1]} > ${file_prefix}_kb_mapping_report.txt
+  
+  fix_t2g.py --t2g ${transcript_to_gene}
+  cp fix_t2g.txt ${file_prefix}/
+  cp ${transcript_to_gene} ${file_prefix}/
+  """
+}
+
+process kb_marseq {
+  // With the MARS-Seq protocol, we have:
+  // on read 1: 4 nt of plate barcode
+  // on read 2: 6 nt of cell barcode and 8 nt of UMI
+  // this process expects the plate barcode to have already been removed from read 1
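+  // this layout matches the custom technology string used below,
+  // -x 1,0,6:1,6,14:0,0,0 (barcode: R2 pos 0-6, UMI: R2 pos 6-14, cDNA: all of R1)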
+  container = "${container_url}"
+  label "big_mem_multi_cpus"
+  tag "$file_prefix"
+  if (params.count_out != "") {
+    publishDir "results/${params.count_out}", mode: 'copy'
+  }
+
+  input:
+  tuple val(index_id), path(index)
+  tuple val(file_id), path(reads)
+  tuple val(t2g_id), path(transcript_to_gene)
+  tuple val(whitelist_id), path(whitelist)
+
+  output:
+  tuple val(file_id), path("${file_prefix}"), emit: counts
+  tuple val(file_id), path("*_report.txt"), emit: report
+
+  script:
+  def kb_memory = "${task.memory}" - ~/GB/
+  if (file_id instanceof List){
+    file_prefix = file_id[0]
+  } else {
+    file_prefix = file_id
+  }
+  def whitelist_param = ""
+  if (whitelist_id != "NO WHITELIST"){
+    whitelist_param = "-w ${whitelist}"
+  }
+
+  if (reads.size() == 2)
+  """
+  mkdir ${file_prefix}
+  kb count  -t ${task.cpus} \
+    -m ${kb_memory} \
+    -i ${index} \
+    -g ${transcript_to_gene} \
+    -o ${file_prefix} \
+    ${whitelist_param} \
+    ${params.count} \
+    --h5ad \
+    -x 1,0,6:1,6,14:0,0,0 \
+    ${reads[0]} ${reads[1]} > ${file_prefix}_kb_mapping_report.txt
+  fix_t2g.py --t2g ${transcript_to_gene}
+  cp fix_t2g.txt ${file_prefix}/
+  cp ${transcript_to_gene} ${file_prefix}/
+  """
+  else
+  """
+  mkdir ${file_prefix}
+  kb count  -t ${task.cpus} \
+    -m ${kb_memory} \
+    -i ${index} \
+    -g ${transcript_to_gene} \
+    -o ${file_prefix} \
+    ${whitelist_param} \
+    ${params.count} \
+    -x 1,0,6:1,6,14:0,0,0 \
+    --h5ad \
+    ${reads} > ${file_prefix}_kb_mapping_report.txt
+  fix_t2g.py --t2g ${transcript_to_gene}
+  cp fix_t2g.txt ${file_prefix}/
+  cp ${transcript_to_gene} ${file_prefix}/
+  """
+}
+
+// ************************** velocity workflow **************************
+
+workflow index_fasta_velocity {
+  take:
+    fasta
+    gtf
+
+  main:
+    tr2g(gtf)
+    index_fasta_velocity_default(fasta, gtf, tr2g.out.t2g)
+
+  emit:
+    index = index_fasta_velocity_default.out.index
+    t2g = index_fasta_velocity_default.out.t2g
+    report = index_fasta_velocity_default.out.report
+}
+
+process index_fasta_velocity_default {
+  container = "${container_url}"
+  label "big_mem_mono_cpus"
+  tag "$file_id"
+  if (params.index_fasta_out != "") {
+    publishDir "results/${params.index_fasta_out}", mode: 'copy'
+  }
+
+  input:
+    tuple val(file_id), path(fasta)
+    tuple val(gtf_id), path(gtf)
+    tuple val(t2g_id), path(transcript_to_gene)
+
+  output:
+    tuple val(file_id), path("*.idx"), emit: index
+    tuple val(t2g_id), path("${transcript_to_gene}"), path("cdna_t2c.txt"), path("intron_t2c.txt"), emit: t2g
+    tuple val(file_id), path("*_report.txt"), emit: report
+
+  script:
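+  // the --workflow lamanno option builds a joint cDNA + intron index, as
+  // needed for RNA velocity quantification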
+"""
+kb ref \
+  -i ${fasta.simpleName}.idx \
+  -g ${transcript_to_gene} \
+  ${params.index_fasta} \
+  -f1 cdna.fa -f2 intron.fa -c1 cdna_t2c.txt -c2 intron_t2c.txt --workflow lamanno \
+  ${fasta} ${gtf} > ${fasta.simpleName}_kb_index_report.txt
+"""
+}
+
+params.count_velocity = ""
+params.count_velocity_out = ""
+workflow count_velocity {
+  take:
+    index
+    fastq
+    transcript_to_gene
+    whitelist
+    config
+
+  main:
+  whitelist
+    .ifEmpty(["NO WHITELIST", 0])
+    .set{ whitelist_optional }
+  switch(params.kb_protocol) {
+    case "marsseq":
+      split(fastq, config.collect())
+      velocity_marseq(index.collect(), split.out.fastq, transcript_to_gene.collect(), whitelist_optional.collect())
+      velocity_marseq.out.counts.set{res_counts}
+      velocity_marseq.out.report.set{res_report}
+    break;
+    default:
+      velocity_default(index.collect(), fastq, transcript_to_gene.collect(), whitelist_optional.collect())
+      velocity_default.out.counts.set{res_counts}
+      velocity_default.out.report.set{res_report}
+    break;
+  }
+
+  emit:
+    counts = res_counts
+    report = res_report
+}
+
+process velocity_default {
+  container = "${container_url}"
+  label "big_mem_multi_cpus"
+  tag "$file_prefix"
+  if (params.count_velocity_out != "") {
+    publishDir "results/${params.count_velocity_out}", mode: 'copy'
+  }
+
+  input:
+  tuple val(index_id), path(index)
+  tuple val(file_id), path(reads)
+  tuple val(t2g_id), path(transcript_to_gene), path(cdna_t2g), path(intron_t2g)
+  tuple val(whitelist_id), path(whitelist)
+
+  output:
+  tuple val(file_id), path("${file_prefix}"), emit: counts
+  tuple val(file_id), path("*_report.txt"), emit: report
+
+  script:
+  def kb_memory = "${task.memory}" - ~/GB/
+  if (file_id instanceof List){
+    file_prefix = file_id[0]
+  } else {
+    file_prefix = file_id
+  }
+  def whitelist_param = ""
+  if (whitelist_id != "NO WHITELIST"){
+    whitelist_param = "-w ${whitelist}"
+  }
+
+  if (reads.size() == 2)
+  """
+  mkdir ${file_prefix}
+  kb count  -t ${task.cpus} \
+    -m ${kb_memory} \
+    -i ${index} \
+    -g ${transcript_to_gene} \
+    -o ${file_prefix} \
+    -c1 ${cdna_t2g} \
+    -c2 ${intron_t2g} \
+    --workflow lamanno \
+    ${whitelist_param} \
+    -x 10XV3 \
+    --h5ad \
+    ${params.count} \
+    ${reads[0]} ${reads[1]} > ${file_prefix}_kb_mapping_report.txt
+  fix_t2g.py --t2g ${transcript_to_gene}
+  cp fix_t2g.txt ${file_prefix}/
+  cp ${transcript_to_gene} ${file_prefix}/
+  cp ${cdna_t2g} ${file_prefix}/
+  cp ${intron_t2g} ${file_prefix}/
+  """
+}
+
+process velocity_marseq {
+  // With the MARS-Seq protocol, we have:
+  // on read 1: 4 nt of plate barcode
+  // on read 2: 6 nt of cell barcode and 8 nt of UMI
+  // this process expects the plate barcode to have already been removed from read 1
+  container = "${container_url}"
+  label "big_mem_multi_cpus"
+  tag "$file_prefix"
+  if (params.count_velocity_out != "") {
+    publishDir "results/${params.count_velocity_out}", mode: 'copy'
+  }
+
+  input:
+  tuple val(index_id), path(index)
+  tuple val(file_id), path(reads)
+  tuple val(t2g_id), path(transcript_to_gene), path(cdna_t2g), path(intron_t2g)
+  tuple val(whitelist_id), path(whitelist)
+
+  output:
+  tuple val(file_id), path("${file_prefix}"), emit: counts
+  tuple val(file_id), path("*_report.txt"), emit: report
+
+  script:
+  def kb_memory = "${task.memory}" - ~/GB/
+  if (file_id instanceof List){
+    file_prefix = file_id[0]
+  } else {
+    file_prefix = file_id
+  }
+  def whitelist_param = ""
+  if (whitelist_id != "NO WHITELIST"){
+    whitelist_param = "-w ${whitelist}"
+  }
+
+  if (reads.size() == 2)
+  """
+  mkdir ${file_prefix}
+  kb count  -t ${task.cpus} \
+    -m ${kb_memory} \
+    -i ${index} \
+    -g ${transcript_to_gene} \
+    -o ${file_prefix} \
+    -c1 ${cdna_t2g} \
+    -c2 ${intron_t2g} \
+    --workflow lamanno \
+     --h5ad \
+    ${whitelist_param} \
+    ${params.count} \
+    -x 1,0,6:1,6,14:0,0,0 \
+    ${reads[0]} ${reads[1]} > ${file_prefix}_kb_mapping_report.txt
+  fix_t2g.py --t2g ${transcript_to_gene}
+  cp fix_t2g.txt ${file_prefix}/
+  cp ${transcript_to_gene} ${file_prefix}/
+  cp ${cdna_t2g} ${file_prefix}/
+  cp ${intron_t2g} ${file_prefix}/
+  """
+  else
+  """
+  mkdir ${file_prefix}
+  kb count  -t ${task.cpus} \
+    -m ${kb_memory} \
+    -i ${index} \
+    -g ${transcript_to_gene} \
+    -o ${file_prefix} \
+    -c1 ${cdna_t2g} \
+    -c2 ${intron_t2g} \
+    --workflow lamanno \
+    ${whitelist_param} \
+    ${params.count} \
+    -x 1,0,6:1,6,14:0,0,0 \
+    ${reads} > ${file_prefix}_kb_mapping_report.txt
+  fix_t2g.py --t2g ${transcript_to_gene}
+  cp fix_t2g.txt ${file_prefix}/
+  cp ${transcript_to_gene} ${file_prefix}/
+  cp ${cdna_t2g} ${file_prefix}/
+  cp ${intron_t2g} ${file_prefix}/
+  """
+}
+
diff --git a/src/nf_modules/macs2/main.nf b/src/nf_modules/macs2/main.nf
index a350fb438e71b9bfa7b5a3f42fe77e84e4fec651..51bed4bebf4060e71f5532fb9ac83fdd89b24231 100644
--- a/src/nf_modules/macs2/main.nf
+++ b/src/nf_modules/macs2/main.nf
@@ -3,11 +3,15 @@ container_url = "lbmc/macs2:${version}"
 
 params.macs_gsize=3e9
 params.macs_mfold="5 50"
-
+params.peak_calling = "--mfold ${params.macs_mfold} --gsize ${params.macs_gsize}"
+params.peak_calling_out = ""
 process peak_calling {
   container = "${container_url}"
   label "big_mem_mono_cpus"
   tag "${file_id}"
+  if (params.peak_calling_out != "") {
+    publishDir "results/${params.peak_calling_out}", mode: 'copy'
+  }
 
   input:
     tuple val(file_id), path(bam_ip), path(bam_control)
@@ -21,13 +25,13 @@ process peak_calling {
 /* remove --nomodel option for real dataset */
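+/* --qvalue 0.99 below is very permissive; presumably kept for test datasets */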
 """
 macs2 callpeak \
+  ${params.peak_calling} \
   --treatment ${bam_ip} \
   --call-summits \
   --control ${bam_control} \
   --keep-dup all \
-  --name ${bam_ip.simpleName} \
-  --mfold ${params.macs_mfold} \
-  --gsize ${params.macs_gsize} 2> \
+  --qvalue 0.99 \
+  --name ${bam_ip.simpleName} 2> \
   ${bam_ip.simpleName}_macs2_report.txt
 
 if grep -q "ERROR" ${bam_ip.simpleName}_macs2_report.txt; then
@@ -37,10 +41,15 @@ fi
 """
 }
 
+params.peak_calling_bg = "--mfold ${params.macs_mfold} --gsize ${params.macs_gsize}"
+params.peak_calling_bg_out = ""
 process peak_calling_bg {
   container = "${container_url}"
   label "big_mem_mono_cpus"
   tag "${file_id}"
+  if (params.peak_calling_bg_out != "") {
+    publishDir "results/${params.peak_calling_bg_out}", mode: 'copy'
+  }
 
   input:
     tuple val(file_id), path(bg_ip), path(bg_control)
@@ -58,13 +67,13 @@ awk '{print \$1"\t"\$2"\t"\$3"\t.\t+\t"\$4}' ${bg_ip} > \
 awk '{print \$1"\t"\$2"\t"\$3"\t.\t+\t"\$4}' ${bg_control} > \
   ${bg_control.simpleName}.bed
 macs2 callpeak \
+  ${params.peak_calling_bg} \
   --treatment ${bg_ip.simpleName}.bed \
+  --qvalue 0.99 \
   --call-summits \
   --control ${bg_control.simpleName}.bed \
   --keep-dup all \
-  --name ${bg_ip.simpleName} \
-  --mfold ${params.macs_mfold} \
-  --gsize ${params.macs_gsize} 2> \
+  --name ${bg_ip.simpleName} 2> \
   ${bg_ip.simpleName}_macs2_report.txt
 
 if grep -q "ERROR" ${bg_ip.simpleName}_macs2_report.txt; then
diff --git a/src/nf_modules/macs2/peak_calling.config b/src/nf_modules/macs2/peak_calling.config
deleted file mode 100644
index d3cbbdb98ced18dd2756bc4f96b18f0e6d28647a..0000000000000000000000000000000000000000
--- a/src/nf_modules/macs2/peak_calling.config
+++ /dev/null
@@ -1,57 +0,0 @@
-profiles {
-  docker {
-    docker.temp = "auto"
-    docker.enabled = true
-    process {
-      withName: peak_calling {
-        container = "lbmc/macs2:2.1.2"
-        cpus = 4
-      }
-    }
-  }
-  singularity {
-    singularity.enabled = true
-    singularity.cacheDir = "./bin/"
-    process {
-      withName: peak_calling {
-        container = "lbmc/macs2:2.1.2"
-        cpus = 4
-      }
-    }
-  }
-  psmn{
-    singularity.enabled = true
-    singularity.cacheDir = "/.singularity_psmn/"
-    singularity.runOptions = "--bind /Xnfs,/scratch"
-    process{
-      withName: peak_calling {
-        container = "lbmc/macs2:2.1.2"
-        executor = "sge"
-        clusterOptions = "-cwd -V"
-        cpus = 32
-        memory = "30GB"
-        time = "24h"
-        queue = "CLG6242deb384A,CLG6242deb384C,CLG5218deb192A,CLG5218deb192B,CLG5218deb192C,CLG5218deb192D,SLG5118deb96,SLG6142deb384A,SLG6142deb384B,SLG6142deb384C,SLG6142deb384D"
-        penv = "openmp32"
-      }
-    }
-  }
-  ccin2p3 {
-    singularity.enabled = true
-    singularity.cacheDir = "$baseDir/.singularity_in2p3/"
-    singularity.runOptions = "--bind /pbs,/sps,/scratch"
-    process{
-      withName: peak_calling {
-        container = "lbmc/macs2:2.1.2"
-        scratch = true
-        stageInMode = "copy"
-        stageOutMode = "rsync"
-        executor = "sge"
-        clusterOptions = "-P P_lbmc -l os=cl7 -l sps=1 -r n\
-        "
-        cpus = 1
-        queue = "huge"
-      }
-    }
-  }
-}
diff --git a/src/nf_modules/macs2/peak_calling.nf b/src/nf_modules/macs2/peak_calling.nf
deleted file mode 100644
index 12398f2e92e48f8df7936efe62d4778c8112cdfd..0000000000000000000000000000000000000000
--- a/src/nf_modules/macs2/peak_calling.nf
+++ /dev/null
@@ -1,52 +0,0 @@
-params.genome_size = "hs"
-params.control_tag = "control"
-log.info "bam files : ${params.bam}"
-log.info "genome size : ${params.genome_size}"
-log.info "control tag : ${params.control_tag}"
-
-Channel
-  .fromPath( params.bam )
-  .ifEmpty { error "Cannot find any bam files matching: ${params.bam}" }
-  .map { it -> [(it.baseName =~ /([^\.]*)/)[0][1], it]}
-  .set { bam_files }
-
-/* split bam Channel into control and ip if "control" keyword is detected*/
-bam_files_control = Channel.create()
-bam_files_ip = Channel.create()
-bam_files.choice(
-  bam_files_control,
-  bam_files_ip ) { a -> a[0] =~ /.*${params.control_tag}.*/ ? 0 : 1 }
-
-process peak_calling {
-  tag "${file_id}"
-  publishDir "results/peak_calling/${file_id}", mode: 'copy'
-
-  input:
-    set file_id, file(file_ip) from bam_files_ip
-    set file_id_control, file(file_control) from bam_files_control
-      .ifEmpty {
-        error "Cannot find any bam files matching: ${params.control_tag}"
-      }
-      .collect()
-
-  output:
-    file "*" into peak_output
-    file "*_report.txt" into peak_calling_report
-
-  script:
-/* remove --nomodel option for real dataset */
-"""
-macs2 callpeak \
-  --nomodel \
-  --treatment ${file_ip} \
-  --control ${file_control} \
-  --name ${file_id} \
-  --gsize ${params.genome_size} 2> \
-${file_ip}_macs2_report.txt
-
-if grep -q "ERROR" ${file_ip}_macs2_report.txt; then
-  echo "MACS2 error"
-  exit 1
-fi
-"""
-}
diff --git a/src/nf_modules/macs2/tests.sh b/src/nf_modules/macs2/tests.sh
deleted file mode 100755
index fc73ce29a9e9aed5ee83ba0e171e49059185bc5c..0000000000000000000000000000000000000000
--- a/src/nf_modules/macs2/tests.sh
+++ /dev/null
@@ -1,18 +0,0 @@
-cp data/tiny_dataset/map/tiny_v2.bam data/tiny_dataset/map/tiny_v2_control.bam
-./nextflow src/nf_modules/macs2/peak_calling.nf \
-  -c src/nf_modules/macs2/peak_calling.config \
-  -profile docker \
-  -resume \
-  --bam "data/tiny_dataset/map/tiny_v2*.bam" \
-  --genome_size 129984 \
-  --control_tag "control"
-
-if [ -x "$(command -v singularity)" ]; then
-  ./nextflow src/nf_modules/macs2/peak_calling.nf \
-    -c src/nf_modules/macs2/peak_calling.config \
-    -profile singularity \
-    -resume \
-    --bam "data/tiny_dataset/map/tiny_v2*.bam" \
-    --genome_size 129984 \
-    --control_tag "control"
-fi
diff --git a/src/nf_modules/macs3/main.nf b/src/nf_modules/macs3/main.nf
index c36140aa236ad220213d8bc61ad726dcda170145..b8c2dbcebac257a7ff63f85d3737b48a97f648c2 100644
--- a/src/nf_modules/macs3/main.nf
+++ b/src/nf_modules/macs3/main.nf
@@ -2,12 +2,16 @@ version = "3.0.0a6"
 container_url = "lbmc/macs3:${version}"
 
 params.macs_gsize=3e9
-params.macs_mfold=[5, 50]
-
+params.macs_mfold="5 50"
+params.peak_calling = "--mfold ${params.macs_mfold} --gsize ${params.macs_gsize}"
+params.peak_calling_out = ""
 process peak_calling {
   container = "${container_url}"
   label "big_mem_mono_cpus"
   tag "${file_id}"
+  if (params.peak_calling_out != "") {
+    publishDir "results/${params.peak_calling_out}", mode: 'copy'
+  }
 
   input:
     tuple val(file_id), path(bam_ip), path(bam_control)
@@ -24,7 +28,7 @@ macs3 callpeak \
   --call-summits \
   --control ${bam_control} \
   --keep-dup all \
-  --mfold params.macs_mfold[0] params.macs_mfold[1]
+  ${params.peak_calling} \
-  --name ${bam_ip.simpleName} \
-  --gsize ${params.macs_gsize} 2> \
+  --name ${bam_ip.simpleName} 2> \
   ${bam_ip.simpleName}_macs3_report.txt
@@ -36,10 +40,15 @@ fi
 """
 }
 
+params.peak_calling_bg = "--mfold ${params.macs_mfold} --gsize ${params.macs_gsize}"
+params.peak_calling_bg_out = ""
 process peak_calling_bg {
   container = "${container_url}"
   label "big_mem_mono_cpus"
   tag "${file_id}"
+  if (params.peak_calling_bg_out != "") {
+    publishDir "results/${params.peak_calling_bg_out}", mode: 'copy'
+  }
 
   input:
     tuple val(file_id), path(bg_ip), path(bg_control)
@@ -56,6 +65,7 @@ awk '{print \$1"\t"\$2"\t"\$3"\t.\t+\t"\$4}' ${bg_ip} > \
 awk '{print \$1"\t"\$2"\t"\$3"\t.\t+\t"\$4}' ${bg_control} > \
   ${bg_control.simpleName}.bed
 macs3 callpeak \
+  ${params.peak_calling_bg} \
   --treatment ${bg_ip.simpleName}.bed \
   --call-summits \
   --control ${bg_control.simpleName}.bed \
diff --git a/src/nf_modules/minimap2/main.nf b/src/nf_modules/minimap2/main.nf
index 73dc0e34560110c010da21bd6b3154beac131535..9dbe5a575f840d9b911f778b7adf039747a6024f 100644
--- a/src/nf_modules/minimap2/main.nf
+++ b/src/nf_modules/minimap2/main.nf
@@ -1,50 +1,62 @@
 version = "2.17"
 container_url = "lbmc/minimap2:${version}"
 
+params.index_fasta = ""
+params.index_fasta_out = ""
 process index_fasta {
   container = "${container_url}"
   label "big_mem_multi_cpus"
-  tag "$fasta.baseName"
+  tag "$file_id"
+  if (params.index_fasta_out != "") {
+    publishDir "results/${params.index_fasta_out}", mode: 'copy'
+  }
 
   input:
-    path fasta
+    tuple val(file_id), path(fasta)
 
   output:
-    tuple path("${fasta}"), path("*.mmi*"), emit: index
-    path "*_report.txt", emit: report
+    tuple val(file_id), path("${fasta}"), path("*.mmi*"), emit: index
 
   script:
   memory = "${task.memory}" - ~/\s*GB/
 """
-minimap2 -t ${task.cpus} -I ${memory}G -d ${fasta.baseName}.mmi ${fasta}
+minimap2 ${params.index_fasta} -t ${task.cpus} -I ${memory}G -d ${fasta.baseName}.mmi ${fasta}
 """
 }
 
-
+params.mapping_fastq = "-ax sr"
+params.mapping_fastq_out = ""
 process mapping_fastq {
   container = "${container_url}"
   label "big_mem_multi_cpus"
-  tag "$pair_id"
+  tag "$file_id"
+  if (params.mapping_fastq_out != "") {
+    publishDir "results/${params.mapping_fastq_out}", mode: 'copy'
+  }
 
   input:
-  tuple path(fasta), path(index)
-  tuple val(pair_id), path(reads)
+  tuple val(fasta_id), path(fasta), path(index)
+  tuple val(file_id), path(reads)
 
   output:
-  tuple val(pair_id), path("*.bam"), emit: bam
-  path "*_report.txt", emit: report
+  tuple val(file_id), path("*.bam"), emit: bam
 
   script:
+  if (file_id instanceof List){
+    file_prefix = file_id[0]
+  } else {
+    file_prefix = file_id
+  }
   memory = "${task.memory}" - ~/\s*GB/
-  memory = memory / (task.cpus + 1.0)
-if (reads instanceof List)
-"""
-minimap2 -ax sr -t ${task.cpus} -K ${memory} ${fasta} ${reads[0]} ${reads[1]} |
-  samtools view -Sb - > ${pair_id}.bam
-"""
-else
-"""
-minimap2 -ax sr -t ${task.cpus} -K ${memory} ${fasta} ${reads} |
-  samtools view -Sb - > ${reads.baseName}.bam
-"""
-}
\ No newline at end of file
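+  // divide the stripped memory value across the mapping threads for the
+  // -K minibatch-size option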
+  memory = memory.toInteger() / (task.cpus + 1.0)
+  if (reads.size() == 2)
+  """
+  minimap2 ${params.mapping_fastq} -t ${task.cpus} -K ${memory} ${fasta} ${reads[0]} ${reads[1]} |
+    samtools view -Sb - > ${file_prefix}.bam
+  """
+  else
+  """
+  minimap2 ${params.mapping_fastq} -t ${task.cpus} -K ${memory} ${fasta} ${reads} |
+    samtools view -Sb - > ${file_prefix}.bam
+  """
+}
diff --git a/src/nf_modules/multiqc/main.nf b/src/nf_modules/multiqc/main.nf
index 64cecaace1fbce77b5fecd99f47e253b33ded071..755ae0212662f241acd1202593ba383541d74551 100644
--- a/src/nf_modules/multiqc/main.nf
+++ b/src/nf_modules/multiqc/main.nf
@@ -1,19 +1,70 @@
-version = "1.9"
+// multiqc generates a single html report that combines the reports of many
+// different bioinformatics tools.
+// 
+// EXAMPLE:
+
+/*
+include { multiqc } 
+  from './nf_modules/multiqc/main'
+  addParams(
+    multiqc_out: "QC/"
+  )
+
+multiqc(
+  report_a
+  .mix(
+    report_b,
+    report_c,
+    report_d
+  )
+)
+*/
+
+version = "1.11"
 container_url = "lbmc/multiqc:${version}"
 
-process multiqc {
+params.multiqc = ""
+params.multiqc_out = "QC/"
+workflow multiqc {
+  take:
+    report
+  main:
+    report
+    .map{it ->
+      if (it instanceof List){
+        if(it.size() > 1) {
+          it[1]
+        } else {
+          it[0]
+        }
+      } else {
+        it
+      }
+    }
+    .unique()
+    .flatten()
+    .set { report_cleaned }
+    multiqc_default(report_cleaned.collect())
+
+  emit:
+  report = multiqc_default.out.report
+}
+
+process multiqc_default {
   container = "${container_url}"
   label "big_mem_mono_cpus"
-  publishDir "results/QC/", mode: 'copy'
+  if (params.multiqc_out != "") {
+    publishDir "results/${params.multiqc_out}", mode: 'copy'
+  }
 
   input:
-    path report
+    path report 
 
   output:
     path "*multiqc_*", emit: report
 
   script:
 """
-multiqc -f .
+multiqc ${params.multiqc} -f .
 """
 }
diff --git a/src/nf_modules/multiqc/multiqc_paired.config b/src/nf_modules/multiqc/multiqc_paired.config
deleted file mode 100644
index 2d786124c88e7f59cb39fdb1d9dfabb82fa14ba2..0000000000000000000000000000000000000000
--- a/src/nf_modules/multiqc/multiqc_paired.config
+++ /dev/null
@@ -1,84 +0,0 @@
-
-profiles {
-  docker {
-    docker.temp = "auto"
-    docker.enabled = true
-    process {
-      withName: fastqc_fastq {
-        container = "lbmc/fastqc:0.11.5"
-        cpus = 1
-      }
-      withName: multiqc {
-        container = "lbmc/multiqc:1.7"
-        cpus = 1
-      }
-    }
-  }
-  singularity {
-    singularity.enabled = true
-    singularity.cacheDir = "./bin/"
-    process {
-      withName: fastqc_fastq {
-        container = "lbmc/fastqc:0.11.5"
-        cpus = 1
-      }
-      withName: multiqc {
-        container = "lbmc/multiqc:1.7"
-        cpus = 1
-      }
-    }
-  }
-  psmn{
-    singularity.enabled = true
-    singularity.cacheDir = "/.singularity_psmn/"
-    singularity.runOptions = "--bind /Xnfs,/scratch"
-    process{
-      withName: fastqc_fastq {
-        container = "lbmc/fastqc:0.11.5"
-        executor = "sge"
-        clusterOptions = "-cwd -V"
-        cpus = 1
-        memory = "5GB"
-        time = "6h"
-        queue = "monointeldeb128"
-      }
-      withName: multiqc {
-        container = "lbmc/multiqc:1.7"
-        executor = "sge"
-        clusterOptions = "-cwd -V"
-        cpus = 1
-        memory = "5GB"
-        time = "6h"
-        queue = "monointeldeb128"
-      }
-    }
-  }
-  ccin2p3 {
-    singularity.enabled = true
-    singularity.cacheDir = "$baseDir/.singularity_in2p3/"
-    singularity.runOptions = "--bind /pbs,/sps,/scratch"
-    process{
-      withName: fastq_fastqc {
-        container = "lbmc/fastqc:0.11.5"
-        scratch = true
-        stageInMode = "copy"
-        stageOutMode = "rsync"
-        executor = "sge"
-        clusterOptions = "-P P_lbmc -l os=cl7 -l sps=1 -r"
-        cpus = 1
-        queue = "huge"
-      }
-      withName: multiqc {
-        container = "lbmc/multiqc:1.7"
-        scratch = true
-        stageInMode = "copy"
-        stageOutMode = "rsync"
-        executor = "sge"
-        clusterOptions = "-P P_lbmc -l os=cl7 -l sps=1 -r"
-        cpus = 1
-        queue = "huge"
-      }
-    }
-  }
-}
-
diff --git a/src/nf_modules/multiqc/multiqc_paired.nf b/src/nf_modules/multiqc/multiqc_paired.nf
deleted file mode 100644
index b459a9bbc8ddd4c89cd51f164d3ef3a14c814841..0000000000000000000000000000000000000000
--- a/src/nf_modules/multiqc/multiqc_paired.nf
+++ /dev/null
@@ -1,43 +0,0 @@
-params.fastq = "$baseDir/data/fastq/*_{1,2}.fastq"
-
-log.info "fastq files : ${params.fastq}"
-
-Channel
-  .fromFilePairs( params.fastq )
-  .ifEmpty { error "Cannot find any fastq files matching: ${params.fastq}" }
-  .set { fastq_files }
-
-process fastqc_fastq {
-  tag "$pair_id"
-  publishDir "results/fastq/fastqc/", mode: 'copy'
-
-  input:
-  set pair_id, file(reads) from fastq_files
-
-  output:
-    file "*.{zip,html}" into fastqc_report
-
-  script:
-"""
-fastqc --quiet --threads ${task.cpus} --format fastq --outdir ./ \
-${reads[0]} ${reads[1]}
-"""
-}
-
-process multiqc {
-  tag "$report[0].baseName"
-  publishDir "results/fastq/multiqc/", mode: 'copy'
-  cpus = 1
-
-  input:
-    file report from fastqc_report.collect()
-
-  output:
-    file "*multiqc_*" into multiqc_report
-
-  script:
-"""
-multiqc -f .
-"""
-}
-
diff --git a/src/nf_modules/multiqc/multiqc_single.config b/src/nf_modules/multiqc/multiqc_single.config
deleted file mode 100644
index 75f2f9aa8579b4fbf9ce7d99dceba0359e467db5..0000000000000000000000000000000000000000
--- a/src/nf_modules/multiqc/multiqc_single.config
+++ /dev/null
@@ -1,82 +0,0 @@
-profiles {
-  docker {
-    docker.temp = "auto"
-    docker.enabled = true
-    process {
-      withName: fastqc_fastq {
-        container = "lbmc/fastqc:0.11.5"
-        cpus = 1
-      }
-      withName: multiqc {
-        container = "lbmc/multiqc:1.7"
-        cpus = 1
-      }
-    }
-  }
-  singularity {
-    singularity.enabled = true
-    singularity.cacheDir = "./bin/"
-    process {
-      withName: fastqc_fastq {
-        container = "lbmc/fastqc:0.11.5"
-        cpus = 1
-      }
-      withName: multiqc {
-        container = "lbmc/multiqc:1.7"
-        cpus = 1
-      }
-    }
-  }
-  psmn{
-    singularity.enabled = true
-    singularity.cacheDir = "/.singularity_psmn/"
-    singularity.runOptions = "--bind /Xnfs,/scratch"
-    process{
-      withName: fastqc_fastq {
-        container = "lbmc/fastqc:0.11.5"
-        executor = "sge"
-        clusterOptions = "-cwd -V"
-        cpus = 1
-        memory = "5GB"
-        time = "6h"
-        queue = "monointeldeb128"
-      }
-      withName: multiqc {
-        container = "lbmc/multiqc:1.7"
-        executor = "sge"
-        clusterOptions = "-cwd -V"
-        cpus = 1
-        memory = "5GB"
-        time = "6h"
-        queue = "monointeldeb128"
-      }
-    }
-  }
-  ccin2p3 {
-    singularity.enabled = true
-    singularity.cacheDir = "$baseDir/.singularity_in2p3/"
-    singularity.runOptions = "--bind /pbs,/sps,/scratch"
-    process{
-      withName: fastq_fastqc {
-        container = "lbmc/fastqc:0.11.5"
-        scratch = true
-        stageInMode = "copy"
-        stageOutMode = "rsync"
-        executor = "sge"
-        clusterOptions = "-P P_lbmc -l os=cl7 -l sps=1 -r"
-        cpus = 1
-        queue = "huge"
-      }
-      withName: multiqc {
-        container = "lbmc/multiqc:1.7"
-        scratch = true
-        stageInMode = "copy"
-        stageOutMode = "rsync"
-        executor = "sge"
-        clusterOptions = "-P P_lbmc -l os=cl7 -l sps=1 -r"
-        cpus = 1
-        queue = "huge"
-      }
-    }
-  }
-}
diff --git a/src/nf_modules/multiqc/multiqc_single.nf b/src/nf_modules/multiqc/multiqc_single.nf
deleted file mode 100644
index ea1115b546f0776a4970e4a56fefcce5e3b90de9..0000000000000000000000000000000000000000
--- a/src/nf_modules/multiqc/multiqc_single.nf
+++ /dev/null
@@ -1,44 +0,0 @@
-params.fastq = "$baseDir/data/fastq/*.fastq"
-
-log.info "fastq files : ${params.fastq}"
-
-Channel
-  .fromPath( params.fastq )
-  .ifEmpty { error "Cannot find any fastq files matching: ${params.fastq}" }
-  .map { it -> [(it.baseName =~ /([^\.]*)/)[0][1], it]}
-  .set { fastq_files }
-
-process fastqc_fastq {
-  tag "$file_id"
-  publishDir "results/fastq/fastqc/", mode: 'copy'
-  cpus = 1
-
-  input:
-    set file_id, file(reads) from fastq_files
-
-  output:
-    file "*.{zip,html}" into fastqc_report
-
-  script:
-"""
-fastqc --quiet --threads ${task.cpus} --format fastq --outdir ./ ${reads}
-"""
-}
-
-process multiqc {
-  tag "$report[0].baseName"
-  publishDir "results/fastq/multiqc/", mode: 'copy'
-  cpus = 1
-
-  input:
-    file report from fastqc_report.collect()
-
-  output:
-    file "*multiqc_*" into multiqc_report
-
-  script:
-"""
-multiqc -f .
-"""
-}
-
diff --git a/src/nf_modules/multiqc/tests.sh b/src/nf_modules/multiqc/tests.sh
deleted file mode 100755
index ff23852be9e17167a860752f869a5cde154e4d87..0000000000000000000000000000000000000000
--- a/src/nf_modules/multiqc/tests.sh
+++ /dev/null
@@ -1,25 +0,0 @@
-./nextflow src/nf_modules/multiqc/multiqc_paired.nf \
-  -c src/nf_modules/multiqc/multiqc_paired.config \
-  -profile docker \
-  --fastq "data/tiny_dataset/fastq/tiny_R{1,2}.fastq" \
-  -resume
-
-./nextflow src/nf_modules/multiqc/multiqc_single.nf \
-  -c src/nf_modules/multiqc/multiqc_single.config \
-  -profile docker \
-  --fastq "data/tiny_dataset/fastq/tiny_S.fastq" \
-  -resume
-
-if [ -x "$(command -v singularity)" ]; then
-./nextflow src/nf_modules/multiqc/multiqc_paired.nf \
-  -c src/nf_modules/multiqc/multiqc_paired.config \
-  -profile singularity \
-  --fastq "data/tiny_dataset/fastq/tiny_R{1,2}.fastq" \
-  -resume
-
-./nextflow src/nf_modules/multiqc/multiqc_single.nf \
-  -c src/nf_modules/multiqc/multiqc_single.config \
-  -profile singularity \
-  --fastq "data/tiny_dataset/fastq/tiny_S.fastq" \
-  -resume
-fi
diff --git a/src/nf_modules/music/peak_calling_single.config b/src/nf_modules/music/peak_calling_single.config
deleted file mode 100644
index dea6fa7b77851ace3272ac460a9db888acb2bb31..0000000000000000000000000000000000000000
--- a/src/nf_modules/music/peak_calling_single.config
+++ /dev/null
@@ -1,94 +0,0 @@
-profiles {
-  docker {
-    docker.temp = "auto"
-    docker.enabled = true
-    process {
-      withName: compute_mappability {
-        container = "lbmc/music:6613c53"
-        cpus = 1
-      }
-      withName: music_preprocessing {
-        container = "lbmc/music:6613c53"
-        cpus = 1
-      }
-      withName: music_computation{
-        container = "lbmc/music:6613c53"
-        cpus = 1
-      }
-    }
-  }
-  singularity {
-    singularity.enabled = true
-    singularity.cacheDir = "./bin/"
-    process {
-      withName: compute_mappability {
-        container = "lbmc/music:6613c53"
-        cpus = 1
-      }
-      withName: music_preprocessing {
-        container = "lbmc/music:6613c53"
-        cpus = 1
-      }
-      withName: music_computation{
-        container = "lbmc/music:6613c53"
-        cpus = 1
-      }
-    }
-  }
-  psmn{
-    singularity.enabled = true
-    singularity.cacheDir = "/.singularity_psmn/"
-    singularity.runOptions = "--bind /Xnfs,/scratch"
-    process{
-      withName: compute_mappability {
-        container = "lbmc/music:6613c53"
-        executor = "sge"
-        clusterOptions = "-cwd -V"
-        cpus = 1
-        memory = "20GB"
-        time = "12h"
-        queue = "monointeldeb128"
-      }
-    }
-  }
-  ccin2p3 {
-    singularity.enabled = true
-    singularity.cacheDir = "$baseDir/.singularity_in2p3/"
-    singularity.runOptions = "--bind /pbs,/sps,/scratch"
-    process{
-      withName: compute_mappability {
-        container = "lbmc/music:6613c53"
-        scratch = true
-        stageInMode = "copy"
-        stageOutMode = "rsync"
-        executor = "sge"
-        clusterOptions = "-P P_lbmc -l os=cl7 -l sps=1 -r n\
-        "
-        cpus = 1
-        queue = "huge"
-      }
-      withName: music_preprocessing {
-        container = "lbmc/music:6613c53"
-        scratch = true
-        stageInMode = "copy"
-        stageOutMode = "rsync"
-        executor = "sge"
-        clusterOptions = "-P P_lbmc -l os=cl7 -l sps=1 -r n\
-        "
-        cpus = 1
-        queue = "huge"
-      }
-      withName: music_computation{
-        container = "lbmc/music:6613c53"
-        scratch = true
-        stageInMode = "copy"
-        stageOutMode = "rsync"
-        executor = "sge"
-        clusterOptions = "-P P_lbmc -l os=cl7 -l sps=1 -r n\
-        "
-        cpus = 1
-        queue = "huge"
-      }
-    }
-  }
-}
diff --git a/src/nf_modules/music/peak_calling_single.nf b/src/nf_modules/music/peak_calling_single.nf
deleted file mode 100644
index be280394b80e6ddd9644da85f62e8d2be5d843ed..0000000000000000000000000000000000000000
--- a/src/nf_modules/music/peak_calling_single.nf
+++ /dev/null
@@ -1,104 +0,0 @@
-params.read_size = 100
-params.frag_size = 200
-params.step_l = 50
-params.min_l = 200
-params.max_l = 5000
-log.info "bam files : ${params.bam}"
-log.info "index files : ${params.index}"
-log.info "fasta files : ${params.fasta}"
-
-Channel
-  .fromPath( params.fasta )
-  .ifEmpty { error "Cannot find any bam files matching: ${params.fasta}" }
-  .set { fasta_files }
-Channel
-  .fromPath( params.bam )
-  .ifEmpty { error "Cannot find any bam files matching: ${params.bam}" }
-  .map { it -> [(it.baseName =~ /([^\.]*)/)[0][1], it]}
-  .set { bam_files }
-Channel
-  .fromPath( params.index )
-  .ifEmpty { error "Cannot find any index files matching: ${params.index}" }
-  .set { index_files }
-
-process compute_mappability {
-  tag "${fasta.baseName}"
-
-  input:
-    file index from index_files.collect()
-    file fasta from fasta_files
-
-  output:
-    file "*.bin" into mappability
-    file "temp/chr_ids.txt" into chr_ids
-
-  script:
-
-"""
-generate_multimappability_signal.csh ${fasta} ${params.read_size} ./
-bash temp_map_reads.csh
-bash temp_process_mapping.csh
-"""
-}
-
-process music_preprocessing {
-  tag "${file_id}"
-
-  input:
-    set file_id, file(bam) from bam_files
-    file chr_ids from chr_ids.collect()
-
-  output:
-    set file_id, "preprocessed/*.tar" into preprocessed_bam_files
-
-  script:
-
-"""
-mkdir preprocessed
-samtools view *.bam | \
-MUSIC -preprocess SAM stdin preprocessed/
-mkdir preprocessed/sorted
-MUSIC -sort_reads preprocessed/ preprocessed/sorted/
-mkdir preprocessed/dedup
-MUSIC -remove_duplicates ./preprocessed/sorted 2 preprocessed/dedup/
-cd preprocessed
-tar -c -f ${file_id}.tar *
-"""
-}
-
-preprocessed_bam_files_control = Channel.create()
-preprocessed_bam_files_chip = Channel.create()
-preprocessed_bam_files.choice(
-  preprocessed_bam_files_control,
-  preprocessed_bam_files_chip ) { a -> a[0] =~ /.*control.*/ ? 0 : 1 }
-
-process music_computation {
-  tag "${file_id}"
-  publishDir "results/peak_calling/${file_id}", mode: 'copy'
-
-  input:
-    set file_id, file(control) from preprocessed_bam_files_chip
-    set file_id_control, file(chip) from preprocessed_bam_files_control.collect()
-    file mapp from mappability.collect()
-
-  output:
-    file "*" into music_output_forward
-    file "*.bed" into peaks_forward
-
-  script:
-
-"""
-mkdir mappability control chip
-mv ${mapp} mappability/
-tar -xf ${control} -C control/
-tar -xf ${chip} -C chip/
-
-MUSIC -get_per_win_p_vals_vs_FC -chip chip/ -control control/ \
-  -l_win_step ${params.step_l} \
-  -l_win_min ${params.min_l} -l_win_max ${params.max_l}
-MUSIC -get_multiscale_punctate_ERs \
-  -chip chip/ -control control/ -mapp mappability/ \
-  -l_mapp ${params.read_size} -l_frag ${params.frag_size} -q_val 1 -l_p 0
-ls -l
-"""
-}
diff --git a/src/nf_modules/music/tests.sh b/src/nf_modules/music/tests.sh
deleted file mode 100755
index 4169c449d5706e01a28c443247bf04c2a39fe40d..0000000000000000000000000000000000000000
--- a/src/nf_modules/music/tests.sh
+++ /dev/null
@@ -1,20 +0,0 @@
-cp data/tiny_dataset/map/tiny_v2.sort.bam data/tiny_dataset/map/tiny_v2_control.sort.bam
-./nextflow src/nf_modules/music/peak_calling_single.nf \
-  -c src/nf_modules/music/peak_calling_single.config \
-  -profile docker \
-  --fasta "data/tiny_dataset/fasta/tiny_v2.fasta" \
-  --bam "data/tiny_dataset/map/*.sort.bam" \
-  --index "data/tiny_dataset/map/*.sort.bam.bai*" \
-  --read_size 50 --frag_size 300 \
-  -resume
-
-if [ -x "$(command -v singularity)" ]; then
-./nextflow src/nf_modules/music/peak_calling_single.nf \
-  -c src/nf_modules/music/peak_calling_single.config \
-  -profile singularity \
-  --fasta "data/tiny_dataset/fasta/tiny_v2.fasta" \
-  --bam "data/tiny_dataset/map/*.sort.bam" \
-  --index "data/tiny_dataset/map/*.sort.bam.bai*" \
-  --read_size 50 --frag_size 300 \
-  -resume
-fi
diff --git a/src/nf_modules/picard/main.nf b/src/nf_modules/picard/main.nf
index aa24096cef2ee44322e8648532b2a7ebe092411a..449d5fd014fa2d786a42196e41960ddbfcb806dd 100644
--- a/src/nf_modules/picard/main.nf
+++ b/src/nf_modules/picard/main.nf
@@ -1,34 +1,43 @@
 version = "2.18.11"
 container_url = "lbmc/picard:${version}"
 
+params.mark_duplicate = "VALIDATION_STRINGENCY=LENIENT REMOVE_DUPLICATES=true"
+params.mark_duplicate_out = ""
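+// a non-empty *_out param publishes results under results/<path>; override these
+// params from an importing pipeline with addParams() (see the rasusa test.nf for an example)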
 process mark_duplicate {
   container = "${container_url}"
   label "big_mem_mono_cpus"
   tag "$file_id"
+  if (params.mark_duplicate_out != "") {
+    publishDir "results/${params.mark_duplicate_out}", mode: 'copy'
+  }
 
   input:
     tuple val(file_id), path(bam)
   output:
     tuple val(file_id) , path("*.bam"), emit: bam
-    path "*_report.txt", emit: report
+    path "*_report.dupinfo.txt", emit: report
 
 
   script:
 """
 PicardCommandLine MarkDuplicates \
-  VALIDATION_STRINGENCY=LENIENT \
-  REMOVE_DUPLICATES=true \
+  ${params.mark_duplicate} \
   INPUT=${bam} \
   OUTPUT=${bam.baseName}_dedup.bam \
-  METRICS_FILE=${bam.baseName}_picard_dedup_report.txt &> \
+  METRICS_FILE=${bam.baseName}_picard_dedup_report.dupinfo.txt &> \
   picard_${bam.baseName}.log
 """
 }
 
+params.index_fasta = ""
+params.index_fasta_out = ""
 process index_fasta {
   container = "${container_url}"
   label "big_mem_mono_cpus"
   tag "$file_id"
+  if (params.index_fasta_out != "") {
+    publishDir "results/${params.index_fasta_out}", mode: 'copy'
+  }
 
   input:
     tuple val(file_id), path(fasta)
@@ -38,15 +47,21 @@ process index_fasta {
   script:
 """
 PicardCommandLine CreateSequenceDictionary \
-REFERENCE=${fasta} \
-OUTPUT=${fasta.baseName}.dict
+  ${params.index_fasta} \
+  REFERENCE=${fasta} \
+  OUTPUT=${fasta.baseName}.dict
 """
 }
 
+params.index_bam = ""
+params.index_bam_out = ""
 process index_bam {
   container = "${container_url}"
   label "big_mem_mono_cpus"
   tag "$file_id"
+  if (params.index_bam_out != "") {
+    publishDir "results/${params.index_bam_out}", mode: 'copy'
+  }
 
   input:
     tuple val(file_id), path(bam)
@@ -56,6 +71,7 @@ process index_bam {
   script:
 """
 PicardCommandLine BuildBamIndex \
-INPUT=${bam}
+  ${params.index_bam} \
+  INPUT=${bam}
 """
 }
diff --git a/src/nf_modules/rasusa/main.nf b/src/nf_modules/rasusa/main.nf
new file mode 100644
index 0000000000000000000000000000000000000000..4a671d0c34d8bff59dbe776e4d7896148f711b71
--- /dev/null
+++ b/src/nf_modules/rasusa/main.nf
@@ -0,0 +1,81 @@
+version = "0.6.0"
+container_url = "lbmc/rasusa:${version}"
+
+include { index_fasta } from "./../samtools/main.nf"
+
+params.sample_fastq = ""
+params.sample_fastq_coverage = ""
+params.sample_fastq_size = ""
+params.sample_fastq_out = ""
+workflow sample_fastq {
+  take:
+  fastq
+  fasta
+
+  main:
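+  // with no requested coverage and no requested size, skip sub-sampling
+  // and forward the reads unchanged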
+  if (params.sample_fastq_coverage == "" && params.sample_fastq_size == ""){
+    fastq
+      .set{ final_fastq }
+  } else {
+    index_fasta(fasta)
+    sub_sample_fastq(fastq, index_fasta.out.index)
+    sub_sample_fastq.out.fastq
+      .set{ final_fastq }
+  }
+
+  emit:
+  fastq = final_fastq
+
+}
+
+process sub_sample_fastq {
+  container = "${container_url}"
+  label "small_mem_mono_cpus"
+  tag "$file_id"
+  if (params.sample_fastq_out != "") {
+    publishDir "results/${params.sample_fastq_out}", mode: 'copy'
+  }
+
+  input:
+    tuple val(file_id), path(fastq)
+    tuple val(index_id), path(idx)
+
+  output:
+    tuple val(file_id), path("sub_*.fastq.gz"), emit: fastq
+
+  script:
+
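+  // normalise file_id (bare value, list, or map) into a single file_prefix string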
+  switch(file_id) {
+    case {it instanceof List}:
+      file_prefix = file_id[0]
+    break
+    case {it instanceof Map}:
+      file_prefix = file_id.values()[0]
+    break
+    default:
+      file_prefix = file_id
+    break
+  }
+
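+  // default to coverage-based sampling (-c); a non-empty size switches to
+  // base-count sampling (-b) and takes precedence when both are set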
+  sample_option = "-c " + params.sample_fastq_coverage
+  if (params.sample_fastq_size != ""){
+    sample_option = "-b " + params.sample_fastq_size
+  }
+
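+  // two staged fastq files are treated as paired-end input, anything else as single-end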
+  if (fastq.size() == 2)
+"""
+rasusa \
+  -i ${fastq[0]} ${fastq[1]} \
+  -g ${idx} \
+  ${sample_option} \
+  -o sub_${fastq[0].simpleName}.fastq.gz sub_${fastq[1].simpleName}.fastq.gz
+"""
+  else
+"""
+rasusa \
+  -i ${fastq} \
+  -g ${idx} \
+  ${sample_option} \
+  -o sub_${fastq.simpleName}.fastq.gz
+"""
+}
\ No newline at end of file
diff --git a/src/nf_modules/rasusa/test.nf b/src/nf_modules/rasusa/test.nf
new file mode 100644
index 0000000000000000000000000000000000000000..261e374bbbcd934c1992f844448884f915bc29ab
--- /dev/null
+++ b/src/nf_modules/rasusa/test.nf
@@ -0,0 +1,27 @@
+nextflow.enable.dsl=2
+
+/*
+./nextflow src/nf_modules/rasusa/test.nf -c src/nextflow.config -profile docker --fasta "data/tiny_dataset/fasta/tiny_v2.fasta" --fastq "data/tiny_dataset/fastq/tiny_R1.fastq"
+./nextflow src/nf_modules/rasusa/test.nf -c src/nextflow.config -profile docker --fasta "data/tiny_dataset/fasta/tiny_v2.fasta" --fastq "data/tiny_dataset/fastq/tiny_R{1,2}.fastq" --coverage 1.0
+./nextflow src/nf_modules/rasusa/test.nf -c src/nextflow.config -profile docker --fasta "data/tiny_dataset/fasta/tiny_v2.fasta" --fastq "data/tiny_dataset/fastq/tiny_R1.fastq" --size "1Mb"
+*/
+
+params.fastq = "data/fastq/*R{1,2}*"
+params.fasta = "data/fasta/*.fasta"
+params.coverage = ""
+params.size = ""
+
+include { sample_fastq } from "./main.nf" addParams(sample_fastq_coverage: params.coverage, sample_fastq_size: params.size, sample_fastq_out: "sample/")
+
+channel
+  .fromFilePairs( params.fastq, size: -1)
+  .set { fastq_files }
+
+channel
+  .fromPath( params.fasta )
+  .map { it -> [it.simpleName, it]}
+  .set { fasta_files }
+
+workflow {
+  sample_fastq(fastq_files, fasta_files.collect())
+}
\ No newline at end of file
diff --git a/src/nf_modules/rasusa/test.sh b/src/nf_modules/rasusa/test.sh
new file mode 100644
index 0000000000000000000000000000000000000000..d66e26f2a334fe43a43b3228b270046d0dbed66c
--- /dev/null
+++ b/src/nf_modules/rasusa/test.sh
@@ -0,0 +1,4 @@
+#! /bin/sh
+./nextflow src/nf_modules/rasusa/test.nf -c src/nextflow.config -profile docker --fasta "data/tiny_dataset/fasta/tiny_v2.fasta" --fastq "data/tiny_dataset/fastq/tiny_R1.fastq"
+./nextflow src/nf_modules/rasusa/test.nf -c src/nextflow.config -profile docker --fasta "data/tiny_dataset/fasta/tiny_v2.fasta" --fastq "data/tiny_dataset/fastq/tiny_R{1,2}.fastq" --coverage 1.0
+./nextflow src/nf_modules/rasusa/test.nf -c src/nextflow.config -profile docker --fasta "data/tiny_dataset/fasta/tiny_v2.fasta" --fastq "data/tiny_dataset/fastq/tiny_R1.fastq" --size "1Mb"
\ No newline at end of file
diff --git a/src/nf_modules/rsem/indexing.config b/src/nf_modules/rsem/indexing.config
deleted file mode 100644
index 86f57dde734750c4e55857b645cd60f231dd72ed..0000000000000000000000000000000000000000
--- a/src/nf_modules/rsem/indexing.config
+++ /dev/null
@@ -1,57 +0,0 @@
-profiles {
-  docker {
-    docker.temp = "auto"
-    docker.enabled = true
-    process {
-      withName: index_fasta {
-        container = "lbmc/rsem:1.3.0"
-        cpus = 4
-      }
-    }
-  }
-  singularity {
-    singularity.enabled = true
-    singularity.cacheDir = "./bin/"
-    process {
-      withName: index_fasta {
-        container = "lbmc/rsem:1.3.0"
-        cpus = 4
-      }
-    }
-  }
-  psmn{
-    singularity.enabled = true
-    singularity.cacheDir = "/.singularity_psmn/"
-    singularity.runOptions = "--bind /Xnfs,/scratch"
-    process{
-      withName: index_fasta {
-        container = "lbmc/rsem:1.3.0"
-        executor = "sge"
-        clusterOptions = "-cwd -V"
-        cpus = 32
-        memory = "30GB"
-        time = "24h"
-        queue = "CLG6242deb384A,CLG6242deb384C,CLG5218deb192A,CLG5218deb192B,CLG5218deb192C,CLG5218deb192D,SLG5118deb96,SLG6142deb384A,SLG6142deb384B,SLG6142deb384C,SLG6142deb384D"
-        penv = "openmp32"
-      }
-    }
-  }
-  ccin2p3 {
-    singularity.enabled = true
-    singularity.cacheDir = "$baseDir/.singularity_in2p3/"
-    singularity.runOptions = "--bind /pbs,/sps,/scratch"
-    process{
-      withName: index_fasta {
-        container = "lbmc/rsem:1.3.0"
-        scratch = true
-        stageInMode = "copy"
-        stageOutMode = "rsync"
-        executor = "sge"
-        clusterOptions = "-P P_lbmc -l os=cl7 -l sps=1 -r n\
-        "
-        cpus = 1
-        queue = "huge"
-      }
-    }
-  }
-}
diff --git a/src/nf_modules/rsem/indexing.nf b/src/nf_modules/rsem/indexing.nf
deleted file mode 100644
index 4b95622aaa31f5eece4390f3f09418750e44570c..0000000000000000000000000000000000000000
--- a/src/nf_modules/rsem/indexing.nf
+++ /dev/null
@@ -1,39 +0,0 @@
-params.fasta = "$baseDir/data/bam/*.fasta"
-params.annotation = "$baseDir/data/bam/*.gff3"
-
-log.info "fasta files : ${params.fasta}"
-
-Channel
-  .fromPath( params.fasta )
-  .ifEmpty { error "Cannot find any fasta files matching: ${params.fasta}" }
-  .set { fasta_file }
-Channel
-  .fromPath( params.annotation )
-  .ifEmpty { error "Cannot find any annotation files matching: ${params.annotation}" }
-  .set { annotation_file }
-
-process index_fasta {
-  tag "$fasta.baseName"
-  publishDir "results/mapping/index/", mode: 'copy'
-
-  input:
-    file fasta from fasta_file
-    file annotation from annotation_file
-
-  output:
-    file "*.index*" into index_files
-
-  script:
-  def cmd_annotation = "--gff3 ${annotation}"
-  if(annotation ==~ /.*\.gtf$/){
-    cmd_annotation = "--gtf ${annotation}"
-  }
-"""
-rsem-prepare-reference -p ${task.cpus} --bowtie2 \
---bowtie2-path \$(which bowtie2 | sed 's/bowtie2\$//g') \
-${cmd_annotation} ${fasta} ${fasta.baseName}.index > \
-${fasta.baseName}_rsem_bowtie2_report.txt
-"""
-}
-
-
diff --git a/src/nf_modules/rsem/quantification_paired.config b/src/nf_modules/rsem/quantification_paired.config
deleted file mode 100644
index a959b00faf62fd63965be66d1efbab61c70b1772..0000000000000000000000000000000000000000
--- a/src/nf_modules/rsem/quantification_paired.config
+++ /dev/null
@@ -1,57 +0,0 @@
-profiles {
-  docker {
-    docker.temp = "auto"
-    docker.enabled = true
-    process {
-      withName: mapping_fastq {
-        container = "lbmc/rsem:1.3.0"
-        cpus = 4
-      }
-    }
-  }
-  singularity {
-    singularity.enabled = true
-    singularity.cacheDir = "./bin/"
-    process {
-      withName: mapping_fastq {
-        container = "lbmc/rsem:1.3.0"
-        cpus = 4
-      }
-    }
-  }
-  psmn{
-    singularity.enabled = true
-    singularity.cacheDir = "/.singularity_psmn/"
-    singularity.runOptions = "--bind /Xnfs,/scratch"
-    process{
-      withName: mapping_fastq {
-        container = "lbmc/rsem:1.3.0"
-        executor = "sge"
-        clusterOptions = "-cwd -V"
-        cpus = 32
-        memory = "30GB"
-        time = "24h"
-        queue = "CLG6242deb384A,CLG6242deb384C,CLG5218deb192A,CLG5218deb192B,CLG5218deb192C,CLG5218deb192D,SLG5118deb96,SLG6142deb384A,SLG6142deb384B,SLG6142deb384C,SLG6142deb384D"
-        penv = "openmp32"
-      }
-    }
-  }
-  ccin2p3 {
-    singularity.enabled = true
-    singularity.cacheDir = "$baseDir/.singularity_in2p3/"
-    singularity.runOptions = "--bind /pbs,/sps,/scratch"
-    process{
-      withName: mapping_fastq {
-        container = "lbmc/rsem:1.3.0"
-        scratch = true
-        stageInMode = "copy"
-        stageOutMode = "rsync"
-        executor = "sge"
-        clusterOptions = "-P P_lbmc -l os=cl7 -l sps=1 -r n\
-        "
-        cpus = 1
-        queue = "huge"
-      }
-    }
-  }
-}
diff --git a/src/nf_modules/rsem/quantification_paired.nf b/src/nf_modules/rsem/quantification_paired.nf
deleted file mode 100644
index c9e23ac8d19e510eee63823069dcb5bc86a38c9b..0000000000000000000000000000000000000000
--- a/src/nf_modules/rsem/quantification_paired.nf
+++ /dev/null
@@ -1,48 +0,0 @@
-params.fastq = "$baseDir/data/fastq/*_{1,2}.fastq"
-params.index = "$baseDir/data/index/*.index.*"
-
-log.info "fastq files : ${params.fastq}"
-log.info "index files : ${params.index}"
-
-Channel
-  .fromFilePairs( params.fastq )
-  .ifEmpty { error "Cannot find any fastq files matching: ${params.fastq}" }
-  .set { fastq_files }
-Channel
-  .fromPath( params.index )
-  .ifEmpty { error "Cannot find any index files matching: ${params.index}" }
-  .set { index_files }
-
-process mapping_fastq {
-  tag "$pair_id"
-  publishDir "results/mapping/quantification/", mode: 'copy'
-
-  input:
-  set pair_id, file(reads) from fastq_files
-  file index from index_files.toList()
-
-  output:
-  file "*" into counts_files
-
-  script:
-  index_id = index[0]
-  for (index_file in index) {
-    if (index_file =~ /.*\.1\.bt2/ && !(index_file =~ /.*\.rev\.1\.bt2/)) {
-        index_id = ( index_file =~ /(.*)\.1\.bt2/)[0][1]
-    }
-  }
-"""
-rsem-calculate-expression --bowtie2 \
---bowtie2-path \$(which bowtie2 | sed 's/bowtie2\$//g') \
---bowtie2-sensitivity-level "very_sensitive" \
--output-genome-bam -p ${task.cpus} \
---paired-end ${reads[0]} ${reads[1]} ${index_id} ${pair_id} \
-2> ${pair_id}_rsem_bowtie2_report.txt
-
-if grep -q "Error" ${pair_id}_rsem_bowtie2_report.txt; then
-  exit 1
-fi
-"""
-}
-
-
diff --git a/src/nf_modules/rsem/quantification_single.config b/src/nf_modules/rsem/quantification_single.config
deleted file mode 100644
index a959b00faf62fd63965be66d1efbab61c70b1772..0000000000000000000000000000000000000000
--- a/src/nf_modules/rsem/quantification_single.config
+++ /dev/null
@@ -1,57 +0,0 @@
-profiles {
-  docker {
-    docker.temp = "auto"
-    docker.enabled = true
-    process {
-      withName: mapping_fastq {
-        container = "lbmc/rsem:1.3.0"
-        cpus = 4
-      }
-    }
-  }
-  singularity {
-    singularity.enabled = true
-    singularity.cacheDir = "./bin/"
-    process {
-      withName: mapping_fastq {
-        container = "lbmc/rsem:1.3.0"
-        cpus = 4
-      }
-    }
-  }
-  psmn{
-    singularity.enabled = true
-    singularity.cacheDir = "/.singularity_psmn/"
-    singularity.runOptions = "--bind /Xnfs,/scratch"
-    process{
-      withName: mapping_fastq {
-        container = "lbmc/rsem:1.3.0"
-        executor = "sge"
-        clusterOptions = "-cwd -V"
-        cpus = 32
-        memory = "30GB"
-        time = "24h"
-        queue = "CLG6242deb384A,CLG6242deb384C,CLG5218deb192A,CLG5218deb192B,CLG5218deb192C,CLG5218deb192D,SLG5118deb96,SLG6142deb384A,SLG6142deb384B,SLG6142deb384C,SLG6142deb384D"
-        penv = "openmp32"
-      }
-    }
-  }
-  ccin2p3 {
-    singularity.enabled = true
-    singularity.cacheDir = "$baseDir/.singularity_in2p3/"
-    singularity.runOptions = "--bind /pbs,/sps,/scratch"
-    process{
-      withName: mapping_fastq {
-        container = "lbmc/rsem:1.3.0"
-        scratch = true
-        stageInMode = "copy"
-        stageOutMode = "rsync"
-        executor = "sge"
-        clusterOptions = "-P P_lbmc -l os=cl7 -l sps=1 -r n\
-        "
-        cpus = 1
-        queue = "huge"
-      }
-    }
-  }
-}
diff --git a/src/nf_modules/rsem/quantification_single.nf b/src/nf_modules/rsem/quantification_single.nf
deleted file mode 100644
index bacba6b33ab974f4390cdcff141ad4222be5b1db..0000000000000000000000000000000000000000
--- a/src/nf_modules/rsem/quantification_single.nf
+++ /dev/null
@@ -1,53 +0,0 @@
-params.fastq = "$baseDir/data/fastq/*.fastq"
-params.index = "$baseDir/data/index/*.index*"
-params.mean = 200
-params.sd = 100
-
-log.info "fastq files : ${params.fastq}"
-log.info "index files : ${params.index}"
-log.info "mean read size: ${params.mean}"
-log.info "sd read size: ${params.sd}"
-
-Channel
-  .fromPath( params.fastq )
-  .ifEmpty { error "Cannot find any fastq files matching: ${params.fastq}" }
-  .map { it -> [(it.baseName =~ /([^\.]*)/)[0][1], it]}
-  .set { fastq_files }
-Channel
-  .fromPath( params.index )
-  .ifEmpty { error "Cannot find any index files matching: ${params.index}" }
-  .set { index_files }
-
-process mapping_fastq {
-  tag "$file_id"
-  publishDir "results/mapping/quantification/", mode: 'copy'
-
-  input:
-  set file_id, file(reads) from fastq_files
-  file index from index_files.collect()
-
-  output:
-  file "*" into count_files
-
-  script:
-  index_id = index[0]
-  for (index_file in index) {
-    if (index_file =~ /.*\.1\.bt2/ && !(index_file =~ /.*\.rev\.1\.bt2/)) {
-        index_id = ( index_file =~ /(.*)\.1\.bt2/)[0][1]
-    }
-  }
-"""
-rsem-calculate-expression --bowtie2 \
---bowtie2-path \$(which bowtie2 | sed 's/bowtie2\$//g') \
---bowtie2-sensitivity-level "very_sensitive" \
---fragment-length-mean ${params.mean} --fragment-length-sd ${params.sd} \
---output-genome-bam -p ${task.cpus} \
-${reads} ${index_id} ${file_id} \
-2> ${file_id}_rsem_bowtie2_report.txt
-
-if grep -q "Error" ${file_id}_rsem_bowtie2_report.txt; then
-  exit 1
-fi
-"""
-}
-
diff --git a/src/nf_modules/rsem/tests.sh b/src/nf_modules/rsem/tests.sh
deleted file mode 100755
index e4cd27267371e378d4c47454f9344298cdde4566..0000000000000000000000000000000000000000
--- a/src/nf_modules/rsem/tests.sh
+++ /dev/null
@@ -1,44 +0,0 @@
-./nextflow src/nf_modules/rsem/indexing.nf \
-  -c src/nf_modules/rsem/indexing.config \
-  -profile docker \
-  --fasta "data/tiny_dataset/fasta/tiny_v2.fasta" \
-  --annotation "data/tiny_dataset/annot/tiny.gff" \
-  -resume
-
-./nextflow src/nf_modules/rsem/quantification_single.nf \
-  -c src/nf_modules/rsem/quantification_single.config \
-  -profile docker \
-  --index "results/mapping/index/tiny_v2.index*" \
-  --fastq "data/tiny_dataset/fastq/tiny*_R1.fastq" \
-  -resume
-
-./nextflow src/nf_modules/rsem/quantification_paired.nf \
-  -c src/nf_modules/rsem/quantification_paired.config \
-  -profile docker \
-  --index "results/mapping/index/tiny_v2.index*" \
-  --fastq "data/tiny_dataset/fastq/tiny*_R{1,2}.fastq" \
-  -resume
-
-
-if [ -x "$(command -v singularity)" ]; then
-./nextflow src/nf_modules/rsem/indexing.nf \
-  -c src/nf_modules/rsem/indexing.config \
-  -profile docker \
-  --fasta "data/tiny_dataset/fasta/tiny_v2.fasta" \
-  --annotation "data/tiny_dataset/annot/tiny.gff" \
-  -resume
-
-./nextflow src/nf_modules/rsem/quantification_single.nf \
-  -c src/nf_modules/rsem/quantification_single.config \
-  -profile docker \
-  --index "results/mapping/index/tiny_v2.index*" \
-  --fastq "data/tiny_dataset/fastq/tiny*_R1.fastq" \
-  -resume
-
-./nextflow src/nf_modules/rsem/quantification_paired.nf \
-  -c src/nf_modules/rsem/quantification_paired.config \
-  -profile docker \
-  --index "results/mapping/index/tiny_v2.index*" \
-  --fastq "data/tiny_dataset/fastq/tiny*_R{1,2}.fastq" \
-  -resume
-fi
diff --git a/src/nf_modules/sambamba/index_bams.config b/src/nf_modules/sambamba/index_bams.config
deleted file mode 100644
index da8f84fc81c1eb17176d16fd04af21c886661477..0000000000000000000000000000000000000000
--- a/src/nf_modules/sambamba/index_bams.config
+++ /dev/null
@@ -1,57 +0,0 @@
-profiles {
-  docker {
-    docker.temp = "auto"
-    docker.enabled = true
-    process {
-      withName: index_bam {
-        container = "lbmc/sambamba:0.6.9"
-        cpus = 4
-      }
-    }
-  }
-  singularity {
-    singularity.enabled = true
-    singularity.cacheDir = "./bin/"
-    process {
-      withName: index_bam {
-        container = "lbmc/sambamba:0.6.9"
-        cpus = 4
-      }
-    }
-  }
-  psmn{
-    singularity.enabled = true
-    singularity.cacheDir = "/.singularity_psmn/"
-    singularity.runOptions = "--bind /Xnfs,/scratch"
-    process{
-      withName: index_bam {
-        container = "lbmc/sambamba:0.6.9"
-        executor = "sge"
-        clusterOptions = "-cwd -V"
-        cpus = 32
-        memory = "30GB"
-        time = "24h"
-        queue = "CLG6242deb384A,CLG6242deb384C,CLG5218deb192A,CLG5218deb192B,CLG5218deb192C,CLG5218deb192D,SLG5118deb96,SLG6142deb384A,SLG6142deb384B,SLG6142deb384C,SLG6142deb384D"
-        penv = "openmp32"
-      }
-    }
-  }
-  ccin2p3 {
-    singularity.enabled = true
-    singularity.cacheDir = "$baseDir/.singularity_in2p3/"
-    singularity.runOptions = "--bind /pbs,/sps,/scratch"
-    process{
-      withName: index_bam {
-        container = "lbmc/sambamba:0.6.9"
-        scratch = true
-        stageInMode = "copy"
-        stageOutMode = "rsync"
-        executor = "sge"
-        clusterOptions = "-P P_lbmc -l os=cl7 -l sps=1 -r n\
-        "
-        cpus = 1
-        queue = "huge"
-      }
-    }
-  }
-}
diff --git a/src/nf_modules/sambamba/index_bams.nf b/src/nf_modules/sambamba/index_bams.nf
deleted file mode 100644
index 3ea36df4a3e512fba7b577b4b2c6916a4e0bb940..0000000000000000000000000000000000000000
--- a/src/nf_modules/sambamba/index_bams.nf
+++ /dev/null
@@ -1,25 +0,0 @@
-params.bam = "$baseDir/data/bam/*.bam"
-
-log.info "bams files : ${params.bam}"
-
-Channel
-  .fromPath( params.bam )
-  .ifEmpty { error "Cannot find any bam files matching: ${params.bam}" }
-  .map { it -> [(it.baseName =~ /([^\.]*)/)[0][1], it]}
-  .set { bam_files }
-
-process index_bam {
-  tag "$file_id"
-
-  input:
-    set file_id, file(bam) from bam_files
-
-  output:
-    set file_id, "*.bam*" into indexed_bam_file
-
-  script:
-"""
-sambamba index -t ${task.cpus} ${bam}
-"""
-}
-
diff --git a/src/nf_modules/sambamba/main.nf b/src/nf_modules/sambamba/main.nf
index e07210bb98312e6ac52db4d8a59462df7bf5b738..ea6c6e972bf3e0360df35c2b6eeb9ad227d42450 100644
--- a/src/nf_modules/sambamba/main.nf
+++ b/src/nf_modules/sambamba/main.nf
@@ -1,6 +1,7 @@
 version = "0.6.7"
 container_url = "lbmc/sambamba:${version}"
 
+params.index_bam = ""
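+// extra sambamba command-line options can be passed through these per-process params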
 process index_bam {
   container = "${container_url}"
   label "big_mem_multi_cpus"
@@ -14,10 +15,11 @@ process index_bam {
 
   script:
 """
-sambamba index -t ${task.cpus} ${bam}
+sambamba index ${params.index_bam} -t ${task.cpus} ${bam}
 """
 }
 
+params.sort_bam = ""
 process sort_bam {
   container = "${container_url}"
   label "big_mem_multi_cpus"
@@ -31,11 +33,11 @@ process sort_bam {
 
   script:
 """
-sambamba sort -t ${task.cpus} -o ${bam.baseName}_sorted.bam ${bam}
+sambamba sort -t ${task.cpus} ${params.sort_bam} -o ${bam.baseName}_sorted.bam ${bam}
 """
 }
 
-
+params.split_bam = ""
 process split_bam {
   container = "${container_url}"
   label "big_mem_multi_cpus"
@@ -49,9 +51,9 @@ process split_bam {
     tuple val(file_id), path("*_reverse.bam*"), emit: bam_reverse
   script:
 """
-sambamba view -t ${task.cpus} -h -F "strand == '+'" ${bam} > \
+sambamba view -t ${task.cpus} ${params.split_bam} -h -F "strand == '+'" ${bam} > \
   ${bam.baseName}_forward.bam
-sambamba view -t ${task.cpus} -h -F "strand == '-'" ${bam} > \
+sambamba view -t ${task.cpus} ${params.split_bam} -h -F "strand == '-'" ${bam} > \
   ${bam.baseName}_reverse.bam
 """
 }
diff --git a/src/nf_modules/sambamba/sort_bams.config b/src/nf_modules/sambamba/sort_bams.config
deleted file mode 100644
index 610573272475384df4ad0d0ee46f7dd053f1aa3c..0000000000000000000000000000000000000000
--- a/src/nf_modules/sambamba/sort_bams.config
+++ /dev/null
@@ -1,58 +0,0 @@
-profiles {
-  docker {
-    docker.temp = "auto"
-    docker.enabled = true
-    process {
-      withName: sort_bam {
-        container = "lbmc/sambamba:0.6.9"
-        cpus = 4
-      }
-    }
-  }
-  singularity {
-    singularity.enabled = true
-    singularity.cacheDir = "./bin/"
-    process {
-      withName: sort_bam {
-        container = "lbmc/sambamba:0.6.9"
-        cpus = 4
-      }
-    }
-  }
-  psmn{
-    singularity.enabled = true
-    singularity.cacheDir = "/.singularity_psmn/"
-    singularity.runOptions = "--bind /Xnfs,/scratch"
-    process{
-      withName: sort_bam {
-        container = "lbmc/sambamba:0.6.9"
-        executor = "sge"
-        clusterOptions = "-cwd -V"
-        cpus = 4
-        memory = "30GB"
-        time = "24h"
-        queue = "CLG6242deb384A,CLG6242deb384C,CLG5218deb192A,CLG5218deb192B,CLG5218deb192C,CLG5218deb192D,SLG5118deb96,SLG6142deb384A,SLG6142deb384B,SLG6142deb384C,SLG6142deb384D"
-        penv = "openmp32"
-      }
-    }
-  }
-  ccin2p3 {
-    singularity.enabled = true
-    singularity.cacheDir = "$baseDir/.singularity_in2p3/"
-    singularity.runOptions = "--bind /pbs,/sps,/scratch"
-    process{
-      withName: sort_bam {
-        container = "lbmc/sambamba:0.6.9"
-        scratch = true
-        stageInMode = "copy"
-        stageOutMode = "rsync"
-        executor = "sge"
-        clusterOptions = "-P P_lbmc -l os=cl7 -l sps=1 -r n\
-        "
-        cpus = 1
-        queue = "huge"
-      }
-
-    }
-  }
-}
diff --git a/src/nf_modules/sambamba/sort_bams.nf b/src/nf_modules/sambamba/sort_bams.nf
deleted file mode 100644
index ac610cdca146693df69d8b765928d406b36652b6..0000000000000000000000000000000000000000
--- a/src/nf_modules/sambamba/sort_bams.nf
+++ /dev/null
@@ -1,26 +0,0 @@
-params.bam = "$baseDir/data/bam/*.bam"
-
-log.info "bams files : ${params.bam}"
-
-Channel
-  .fromPath( params.bam )
-  .ifEmpty { error "Cannot find any bam files matching: ${params.bam}" }
-  .map { it -> [(it.baseName =~ /([^\.]*)/)[0][1], it]}
-  .set { bam_files }
-
-process sort_bam {
-  tag "$file_id"
-  cpus 4
-
-  input:
-    set file_id, file(bam) from bam_files
-
-  output:
-    set file_id, "*_sorted.bam" into sorted_bam_files
-
-  script:
-"""
-sambamba sort -t ${task.cpus} -o ${file_id}_sorted.bam ${bam}
-"""
-}
-
diff --git a/src/nf_modules/sambamba/split_bams.config b/src/nf_modules/sambamba/split_bams.config
deleted file mode 100644
index e9df7d6965eb7a88c9f8d196888c05a8ec6d8c03..0000000000000000000000000000000000000000
--- a/src/nf_modules/sambamba/split_bams.config
+++ /dev/null
@@ -1,57 +0,0 @@
-profiles {
-  docker {
-    docker.temp = "auto"
-    docker.enabled = true
-    process {
-      withName: split_bam {
-        container = "lbmc/sambamba:0.6.9"
-        cpus = 4
-      }
-    }
-  }
-  singularity {
-    singularity.enabled = true
-    singularity.cacheDir = "./bin/"
-    process {
-      withName: split_bam {
-        container = "lbmc/sambamba:0.6.9"
-        cpus = 4
-      }
-    }
-  }
-  psmn{
-    singularity.enabled = true
-    singularity.cacheDir = "/.singularity_psmn/"
-    singularity.runOptions = "--bind /Xnfs,/scratch"
-    process{
-      withName: split_bam {
-        container = "lbmc/sambamba:0.6.9"
-        executor = "sge"
-        clusterOptions = "-cwd -V"
-        cpus = 32
-        memory = "30GB"
-        time = "24h"
-        queue = "CLG6242deb384A,CLG6242deb384C,CLG5218deb192A,CLG5218deb192B,CLG5218deb192C,CLG5218deb192D,SLG5118deb96,SLG6142deb384A,SLG6142deb384B,SLG6142deb384C,SLG6142deb384D"
-        penv = "openmp32"
-      }
-    }
-  }
-  ccin2p3 {
-    singularity.enabled = true
-    singularity.cacheDir = "$baseDir/.singularity_in2p3/"
-    singularity.runOptions = "--bind /pbs,/sps,/scratch"
-    process{
-      withName: split_bam {
-        container = "lbmc/sambamba:0.6.9"
-        scratch = true
-        stageInMode = "copy"
-        stageOutMode = "rsync"
-        executor = "sge"
-        clusterOptions = "-P P_lbmc -l os=cl7 -l sps=1 -r n\
-        "
-        cpus = 1
-        queue = "huge"
-      }
-    }
-  }
-}
diff --git a/src/nf_modules/sambamba/split_bams.nf b/src/nf_modules/sambamba/split_bams.nf
deleted file mode 100644
index ba64d2e2b77eb3a9b24ec1355c2b3a68b95c7a4d..0000000000000000000000000000000000000000
--- a/src/nf_modules/sambamba/split_bams.nf
+++ /dev/null
@@ -1,27 +0,0 @@
-params.bam = "$baseDir/data/bam/*.bam"
-
-log.info "bams files : ${params.bam}"
-
-Channel
-  .fromPath( params.bam )
-  .ifEmpty { error "Cannot find any bam files matching: ${params.bam}" }
-  .map { it -> [(it.baseName =~ /([^\.]*)/)[0][1], it]}
-  .set { bam_files }
-
-process split_bam {
-  tag "$file_id"
-  cpus 4
-
-  input:
-    set file_id, file(bam) from bam_files
-
-  output:
-    set file_id, "*_forward.bam*" into forward_bam_files
-    set file_id, "*_reverse.bam*" into reverse_bam_files
-  script:
-"""
-sambamba view -t ${task.cpus} -h -F "strand == '+'" ${bam} > ${file_id}_forward.bam
-sambamba view -t ${task.cpus} -h -F "strand == '-'" ${bam} > ${file_id}_reverse.bam
-"""
-}
-
diff --git a/src/nf_modules/sambamba/tests.sh b/src/nf_modules/sambamba/tests.sh
deleted file mode 100755
index 256df65f9ed23efac640aea6548f15397479ca60..0000000000000000000000000000000000000000
--- a/src/nf_modules/sambamba/tests.sh
+++ /dev/null
@@ -1,37 +0,0 @@
-./nextflow src/nf_modules/sambamba/sort_bams.nf \
-  -c src/nf_modules/sambamba/sort_bams.config \
-  -profile docker \
-  --bam "data/tiny_dataset/map/tiny_v2.bam" \
-  -resume
-
-./nextflow src/nf_modules/sambamba/index_bams.nf \
-  -c src/nf_modules/sambamba/index_bams.config \
-  -profile docker \
-  --bam "data/tiny_dataset/map/tiny_v2.sort.bam" \
-  -resume
-
-./nextflow src/nf_modules/sambamba/split_bams.nf \
-  -c src/nf_modules/sambamba/split_bams.config \
-  -profile docker \
-  --bam "data/tiny_dataset/map/tiny_v2.bam" \
-  -resume
-
-if [ -x "$(command -v singularity)" ]; then
-./nextflow src/nf_modules/sambamba/sort_bams.nf \
-  -c src/nf_modules/sambamba/sort_bams.config \
-  -profile singularity \
-  --bam "data/tiny_dataset/map/tiny_v2.bam" \
-  -resume
-
-./nextflow src/nf_modules/sambamba/index_bams.nf \
-  -c src/nf_modules/sambamba/index_bams.config \
-  -profile singularity \
-  --bam "data/tiny_dataset/map/tiny_v2.sort.bam" \
-  -resume
-
-./nextflow src/nf_modules/sambamba/split_bams.nf \
-  -c src/nf_modules/sambamba/split_bams.config \
-  -profile singularity \
-  --bam "data/tiny_dataset/map/tiny_v2.bam" \
-  -resume
-fi
diff --git a/src/nf_modules/samblaster/dedup_sams.config b/src/nf_modules/samblaster/dedup_sams.config
deleted file mode 100644
index e35185c15749ec3a4c91d4497e29391cb6180018..0000000000000000000000000000000000000000
--- a/src/nf_modules/samblaster/dedup_sams.config
+++ /dev/null
@@ -1,57 +0,0 @@
-profiles {
-  docker {
-    docker.temp = "auto"
-    docker.enabled = true
-    process {
-      withName: dedup_sam {
-        container = "lbmc/samblaster:0.1.24"
-        cpus = 4
-      }
-    }
-  }
-  singularity {
-    singularity.enabled = true
-    singularity.cacheDir = "./bin/"
-    process {
-      withName: split_bam {
-        container = "lbmc/sambamba:0.6.7"
-        cpus = 4
-      }
-    }
-  }
-  psmn{
-    singularity.enabled = true
-    singularity.cacheDir = "/.singularity_psmn/"
-    singularity.runOptions = "--bind /Xnfs,/scratch"
-    process{
-      withName: dedup_sam {
-        container = "lbmc/sambamba:0.6.7"
-        executor = "sge"
-        clusterOptions = "-cwd -V"
-        cpus = 32
-        memory = "30GB"
-        time = "24h"
-        queue = "CLG6242deb384A,CLG6242deb384C,CLG5218deb192A,CLG5218deb192B,CLG5218deb192C,CLG5218deb192D,SLG5118deb96,SLG6142deb384A,SLG6142deb384B,SLG6142deb384C,SLG6142deb384D"
-        penv = "openmp32"
-      }
-    }
-  }
-  ccin2p3 {
-    singularity.enabled = true
-    singularity.cacheDir = "$baseDir/.singularity_in2p3/"
-    singularity.runOptions = "--bind /pbs,/sps,/scratch"
-    process{
-      withName: dedup_sam {
-        container = "lbmc/sambamba:0.6.7"
-        scratch = true
-        stageInMode = "copy"
-        stageOutMode = "rsync"
-        executor = "sge"
-        clusterOptions = "-P P_lbmc -l os=cl7 -l sps=1 -r n\
-        "
-        cpus = 1
-        queue = "huge"
-      }
-    }
-  }
-}
diff --git a/src/nf_modules/samblaster/dedup_sams.nf b/src/nf_modules/samblaster/dedup_sams.nf
deleted file mode 100644
index bcfd829c6bba1c119ea71e34ef5d148c693e4fcf..0000000000000000000000000000000000000000
--- a/src/nf_modules/samblaster/dedup_sams.nf
+++ /dev/null
@@ -1,27 +0,0 @@
-params.sam = "$baseDir/data/sam/*.bam"
-
-log.info "bam files : ${params.bam}"
-
-Channel
-  .fromPath( params.bam )
-  .ifEmpty { error "Cannot find any bam files matching: ${params.bam}" }
-  .map { it -> [(it.baseName =~ /([^\.]*)/)[0][1], it]}
-  .set { bam_files }
-
-process dedup_sam {
-  tag "$file_id"
-
-  input:
-    set file_id, file(bam) from bam_files
-
-  output:
-    set file_id, "*_dedup.bam*" into dedup_bam_files
-  script:
-"""
-samtools view -h ${bam} | \
-samblaster --addMateTags 2> /dev/null | \
-samtools view -Sb - > ${file_id}_dedup.bam
-"""
-}
-
-
diff --git a/src/nf_modules/samblaster/tests.sh b/src/nf_modules/samblaster/tests.sh
deleted file mode 100755
index 10ad93d7841ba4b708664803c0ca1299cc34ee19..0000000000000000000000000000000000000000
--- a/src/nf_modules/samblaster/tests.sh
+++ /dev/null
@@ -1,13 +0,0 @@
-./nextflow src/nf_modules/samblaster/dedup_sams.nf \
-  -c src/nf_modules/samblaster/dedup_sams.config \
-  -profile docker \
-  --bam "data/tiny_dataset/map/tiny_v2.bam" \
-  -resume
-
-if [ -x "$(command -v singularity)" ]; then
-./nextflow src/nf_modules/samblaster/dedup_sams.nf \
-  -c src/nf_modules/samblaster/dedup_sams.config \
-  -profile docker \
-  --bam "data/tiny_dataset/map/tiny_v2.bam" \
-  -resume
-fi
diff --git a/src/nf_modules/samtools/filter_bams.config b/src/nf_modules/samtools/filter_bams.config
deleted file mode 100644
index dfd7f8bf183336f1168e3c93e755f153ca2f410d..0000000000000000000000000000000000000000
--- a/src/nf_modules/samtools/filter_bams.config
+++ /dev/null
@@ -1,57 +0,0 @@
-profiles {
-  docker {
-    docker.temp = "auto"
-    docker.enabled = true
-    process {
-      withName: filter_bam {
-        container = "lbmc/samtools:1.7"
-        cpus = 4
-      }
-    }
-  }
-  singularity {
-    singularity.enabled = true
-    singularity.cacheDir = "./bin/"
-    process {
-      withName: filter_bam {
-        container = "lbmc/samtools:1.7"
-        cpus = 4
-      }
-    }
-  }
-  psmn{
-    singularity.enabled = true
-    singularity.cacheDir = "/.singularity_psmn/"
-    singularity.runOptions = "--bind /Xnfs,/scratch"
-    process{
-      withName: filter_bam {
-        container = "lbmc/samtools:1.7"
-        executor = "sge"
-        clusterOptions = "-cwd -V"
-        cpus = 32
-        memory = "30GB"
-        time = "24h"
-        queue = "CLG6242deb384A,CLG6242deb384C,CLG5218deb192A,CLG5218deb192B,CLG5218deb192C,CLG5218deb192D,SLG5118deb96,SLG6142deb384A,SLG6142deb384B,SLG6142deb384C,SLG6142deb384D"
-        penv = "openmp32"
-      }
-    }
-  }
-  ccin2p3 {
-    singularity.enabled = true
-    singularity.cacheDir = "$baseDir/.singularity_in2p3/"
-    singularity.runOptions = "--bind /pbs,/sps,/scratch"
-    process{
-      withName: filter_bam {
-        container = "lbmc/samtools:1.7"
-        scratch = true
-        stageInMode = "copy"
-        stageOutMode = "rsync"
-        executor = "sge"
-        clusterOptions = "-P P_lbmc -l os=cl7 -l sps=1 -r n\
-        "
-        cpus = 1
-        queue = "huge"
-      }
-    }
-  }
-}
diff --git a/src/nf_modules/samtools/filter_bams.nf b/src/nf_modules/samtools/filter_bams.nf
deleted file mode 100644
index 6e81b758dede1f317d981409f7c85571ede55b06..0000000000000000000000000000000000000000
--- a/src/nf_modules/samtools/filter_bams.nf
+++ /dev/null
@@ -1,32 +0,0 @@
-params.bam = "$baseDir/data/bam/*.bam"
-params.bed = "$baseDir/data/bam/*.bed"
-
-log.info "bams files : ${params.bam}"
-log.info "bed file : ${params.bed}"
-
-Channel
-  .fromPath( params.bam )
-  .ifEmpty { error "Cannot find any bam files matching: ${params.bam}" }
-  .map { it -> [(it.baseName =~ /([^\.]*)/)[0][1], it]}
-  .set { bam_files }
-Channel
-  .fromPath( params.bed )
-  .ifEmpty { error "Cannot find any bed file matching: ${params.bed}" }
-  .set { bed_files }
-
-process filter_bam {
-  tag "$file_id"
-
-  input:
-    set file_id, file(bam) from bam_files
-    file bed from bed_files
-
-  output:
-    set file_id, "*_filtered.bam*" into filtered_bam_files
-  script:
-"""
-samtools view -@ ${task.cpus} -hb ${bam} -L ${bed} > ${file_id}_filtered.bam
-"""
-}
-
-
diff --git a/src/nf_modules/samtools/index_bams.config b/src/nf_modules/samtools/index_bams.config
deleted file mode 100644
index 173af8fe0940a96d9fe03c0179eb62aaca76dace..0000000000000000000000000000000000000000
--- a/src/nf_modules/samtools/index_bams.config
+++ /dev/null
@@ -1,56 +0,0 @@
-profiles {
-  docker {
-    docker.temp = "auto"
-    docker.enabled = true
-    process {
-      withName: index_bam {
-        container = "lbmc/samtools:1.7"
-        cpus = 1
-      }
-    }
-  }
-  singularity {
-    singularity.enabled = true
-    singularity.cacheDir = "./bin/"
-    process {
-      withName: index_bam {
-        container = "lbmc/samtools:1.7"
-        cpus = 1
-      }
-    }
-  }
-  psmn{
-    singularity.enabled = true
-    singularity.cacheDir = "/.singularity_psmn/"
-    singularity.runOptions = "--bind /Xnfs,/scratch"
-    process{
-      withName: index_bam {
-        container = "lbmc/samtools:1.7"
-        executor = "sge"
-        clusterOptions = "-cwd -V"
-        cpus = 1
-        memory = "20GB"
-        time = "12h"
-        queue = "monointeldeb128"
-      }
-    }
-  }
-  ccin2p3 {
-    singularity.enabled = true
-    singularity.cacheDir = "$baseDir/.singularity_in2p3/"
-    singularity.runOptions = "--bind /pbs,/sps,/scratch"
-    process{
-      withName: index_bam {
-        container = "lbmc/samtools:1.7"
-        scratch = true
-        stageInMode = "copy"
-        stageOutMode = "rsync"
-        executor = "sge"
-        clusterOptions = "-P P_lbmc -l os=cl7 -l sps=1 -r n\
-        "
-        cpus = 1
-        queue = "huge"
-      }
-    }
-  }
-}
diff --git a/src/nf_modules/samtools/index_bams.nf b/src/nf_modules/samtools/index_bams.nf
deleted file mode 100644
index 489b0f4f71f39d1fdc5b7870547e9fd18a29f9af..0000000000000000000000000000000000000000
--- a/src/nf_modules/samtools/index_bams.nf
+++ /dev/null
@@ -1,25 +0,0 @@
-params.bam = "$baseDir/data/bam/*.bam"
-
-log.info "bams files : ${params.bam}"
-
-Channel
-  .fromPath( params.bam )
-  .ifEmpty { error "Cannot find any bam files matching: ${params.bam}" }
-  .map { it -> [(it.baseName =~ /([^\.]*)/)[0][1], it]}
-  .set { bam_files }
-
-process index_bam {
-  tag "$file_id"
-
-  input:
-    set file_id, file(bam) from bam_files
-
-  output:
-    set file_id, "*.bam*" into indexed_bam_file
-
-  script:
-"""
-samtools index ${bam}
-"""
-}
-
diff --git a/src/nf_modules/samtools/main.nf b/src/nf_modules/samtools/main.nf
index 8143a81468fbd210c2c2bdc3b3eff1bd0d3f55b3..ed88dc56dfeaa239845c7f1f8467bfc1d81624eb 100644
--- a/src/nf_modules/samtools/main.nf
+++ b/src/nf_modules/samtools/main.nf
@@ -1,10 +1,15 @@
 version = "1.11"
 container_url = "lbmc/samtools:${version}"
 
+params.index_fasta = ""
+params.index_fasta_out = ""
 process index_fasta {
   container = "${container_url}"
   label "big_mem_mono_cpus"
   tag "$file_id"
+  if (params.index_fasta_out != "") {
+    publishDir "results/${params.index_fasta_out}", mode: 'copy'
+  }
 
   input:
     tuple val(file_id), path(fasta)
@@ -13,16 +18,26 @@ process index_fasta {
 
   script:
 """
-samtools faidx ${fasta}
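+# if the fasta is gzip-compressed, decompress it before indexing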
+if gzip -t ${fasta}; then
+  zcat ${fasta} > ${fasta.simpleName}.fasta
+  samtools faidx ${params.index_fasta} ${fasta.simpleName}.fasta
+else
+  samtools faidx ${params.index_fasta} ${fasta}
+fi
+
 """
 }
 
-filter_bam_quality_threshold = 30
-
+params.filter_bam_quality_threshold = 30
+params.filter_bam_quality = "-q ${params.filter_bam_quality_threshold}"
+params.filter_bam_quality_out = ""
 process filter_bam_quality {
   container = "${container_url}"
   label "big_mem_multi_cpus"
   tag "$file_id"
+  if (params.filter_bam_quality_out != "") {
+    publishDir "results/${params.filter_bam_quality_out}", mode: 'copy'
+  }
 
   input:
     tuple val(file_id), path(bam)
@@ -31,34 +46,65 @@ process filter_bam_quality {
     tuple val(file_id), path("*_filtered.bam"), emit: bam
   script:
 """
-samtools view -@ ${task.cpus} -hb ${bam} -q ${filter_bam_quality_threshold} > \
+samtools view -@ ${task.cpus} -hb ${bam} ${params.filter_bam_quality} > \
   ${bam.simpleName}_filtered.bam
 """
 }
 
-
+params.filter_bam = ""
+params.filter_bam_out = ""
 process filter_bam {
   container = "${container_url}"
   label "big_mem_multi_cpus"
   tag "$file_id"
+  if (params.filter_bam_out != "") {
+    publishDir "results/${params.filter_bam_out}", mode: 'copy'
+  }
 
   input:
     tuple val(file_id), path(bam)
-    path bed
+    tuple val(bed_id), path(bed)
 
   output:
     tuple val(file_id), path("*_filtered.bam"), emit: bam
   script:
 """
-samtools view -@ ${task.cpus} -hb ${bam} -L ${bed} > \
+samtools view -@ ${task.cpus} -hb ${bam} -L ${bed} ${params.filter_bam} > \
   ${bam.simpleName}_filtered.bam
 """
 }
 
+params.rm_from_bam = ""
+params.rm_from_bam_out = ""
+process rm_from_bam {
+  container = "${container_url}"
+  label "big_mem_multi_cpus"
+  tag "$file_id"
+  if (params.rm_from_bam_out != "") {
+    publishDir "results/${params.rm_from_bam_out}", mode: 'copy'
+  }
+
+  input:
+    tuple val(file_id), path(bam)
+    tuple val(bed_id), path(bed)
+
+  output:
+    tuple val(file_id), path("*_filtered.bam"), emit: bam
+  script:
+"""
+samtools view -@ ${task.cpus} ${params.rm_from_bam} -hb -L ${bed} -U ${bam.simpleName}_filtered.bam ${bam} > /dev/null
+"""
+}
+
+params.filter_bam_mapped = "-F 4"
+params.filter_bam_mapped_out = ""
 process filter_bam_mapped {
   container = "${container_url}"
   label "big_mem_multi_cpus"
   tag "$file_id"
+  if (params.filter_bam_mapped_out != "") {
+    publishDir "results/${params.filter_bam_mapped_out}", mode: 'copy'
+  }
 
   input:
     tuple val(file_id), path(bam)
@@ -67,15 +113,20 @@ process filter_bam_mapped {
     tuple val(file_id), path("*_mapped.bam"), emit: bam
   script:
 """
-samtools view -@ ${task.cpus} -F 4 -hb ${bam} > \
+samtools view -@ ${task.cpus} ${params.filter_bam_mapped} -hb ${bam} > \
   ${bam.simpleName}_mapped.bam
 """
 }
 
+params.filter_bam_unmapped = "-f 4"
+params.filter_bam_unmapped_out = ""
 process filter_bam_unmapped {
   container = "${container_url}"
   label "big_mem_multi_cpus"
   tag "$file_id"
+  if (params.filter_bam_unmapped_out != "") {
+    publishDir "results/${params.filter_bam_unmapped_out}", mode: 'copy'
+  }
 
   input:
     tuple val(file_id), path(bam)
@@ -84,33 +135,41 @@ process filter_bam_unmapped {
     tuple val(file_id), path("*_unmapped.bam"), emit: bam
   script:
 """
-samtools view -@ ${task.cpus} -f 4 -hb ${bam} > ${bam.simpleName}_unmapped.bam
+samtools view -@ ${task.cpus} ${params.filter_bam_unmapped} -hb ${bam} > ${bam.simpleName}_unmapped.bam
 """
 }
 
-
+params.index_bam = ""
+params.index_bam_out = ""
 process index_bam {
   container = "${container_url}"
   label "big_mem_mono_cpus"
   tag "$file_id"
+  if (params.index_bam_out != "") {
+    publishDir "results/${params.index_bam_out}", mode: 'copy'
+  }
 
   input:
     tuple val(file_id), path(bam)
 
   output:
-    tuple val(file_id), path(bam), emit: bam
-    tuple val(file_id), path("*.bam.bai"), emit: bam_idx
+    tuple val(file_id), path("${bam}"), path("*.bam.bai"), emit: bam_idx
 
   script:
 """
-samtools index ${bam}
+samtools index ${params.index_bam} ${bam}
 """
 }
 
+params.sort_bam = ""
+params.sort_bam_out = ""
 process sort_bam {
   container = "${container_url}"
   label "big_mem_multi_cpus"
   tag "$file_id"
+  if (params.sort_bam_out != "") {
+    publishDir "results/${params.sort_bam_out}", mode: 'copy'
+  }
 
   input:
     tuple val(file_id), path(bam)
@@ -120,15 +179,19 @@ process sort_bam {
 
   script:
 """
-samtools sort -@ ${task.cpus} -O BAM -o ${bam.simpleName}_sorted.bam ${bam}
+samtools sort -@ ${task.cpus} ${params.sort_bam} -O BAM -o ${bam.simpleName}_sorted.bam ${bam}
 """
 }
 
-
+params.split_bam = ""
+params.split_bam_out = ""
 process split_bam {
   container = "${container_url}"
   label "big_mem_multi_cpus"
   tag "$file_id"
+  if (params.split_bam_out != "") {
+    publishDir "results/${params.split_bam_out}", mode: 'copy'
+  }
 
   input:
     tuple val(file_id), path(bam)
@@ -138,19 +201,22 @@ process split_bam {
     tuple val(file_id), path("*_reverse.bam*"), emit: bam_reverse
   script:
 """
-samtools view --@ ${Math.round(task.cpus/2)} \
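+# SAM flag 0x10 marks reverse-strand reads: -F 0x10 keeps forward reads, -f 0x10 keeps reverse reads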
+samtools view -@ ${Math.round(task.cpus/2)} ${params.split_bam} \
   -hb -F 0x10 ${bam} > ${bam.simpleName}_forward.bam &
-samtools view --@ ${Math.round(task.cpus/2)} \
+samtools view -@ ${Math.round(task.cpus/2)} ${params.split_bam} \
   -hb -f 0x10 ${bam} > ${bam.simpleName}_reverse.bam
 """
 }
 
-
+params.merge_bam = ""
+params.merge_bam_out = ""
 process merge_bam {
   container = "${container_url}"
   label "big_mem_multi_cpus"
   tag "$file_id"
-  cpus = 2
+  if (params.merge_bam_out != "") {
+    publishDir "results/${params.merge_bam_out}", mode: 'copy'
+  }
 
   input:
     tuple val(first_file_id), path(first_bam)
@@ -160,16 +226,20 @@ process merge_bam {
     tuple val(first_file_id), path("*.bam*"), emit: bam
   script:
 """
-samtools merge ${first_bam} ${second_bam} \
+samtools merge -@ ${task.cpus} ${params.merge_bam} ${first_bam} ${second_bam} \
   ${first_bam.simpleName}_${second_bam.simpleName}.bam
 """
 }
 
+params.merge_multi_bam = ""
+params.merge_multi_bam_out = ""
 process merge_multi_bam {
   container = "${container_url}"
   label "big_mem_multi_cpus"
   tag "$file_id"
-  cpus = 2
+  if (params.merge_multi_bam_out != "") {
+    publishDir "results/${params.merge_multi_bam_out}", mode: 'copy'
+  }
 
   input:
     tuple val(file_id), path(bams)
@@ -179,49 +249,66 @@ process merge_multi_bam {
   script:
 """
 samtools merge -@ ${task.cpus} \
+  ${params.merge_multi_bam} \
   ${bams[0].simpleName}_merged.bam \
   ${bams}
 """
 }
 
+params.stats_bam = ""
+params.stats_bam_out = ""
 process stats_bam {
   container = "${container_url}"
   label "big_mem_multi_cpus"
   tag "$file_id"
-  cpus = 2
+  if (params.stats_bam_out != "") {
+    publishDir "results/${params.stats_bam_out}", mode: 'copy'
+  }
 
   input:
     tuple val(file_id), path(bam)
 
   output:
     tuple val(file_id), path("*.tsv"), emit: tsv
+    path "*.flagstat.txt", emit: report 
   script:
 """
-samtools flagstat -@ ${task.cpus} -O tsv ${bam} > ${bam.simpleName}_stats.tsv
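+# keep a *.flagstat.txt copy for the MultiQC report channel and a .tsv copy for the tsv output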
+samtools flagstat -@ ${task.cpus} ${params.stats_bam} -O tsv ${bam} > ${bam.simpleName}.flagstat.txt
+cp ${bam.simpleName}.flagstat.txt ${bam.simpleName}.tsv
 """
 }
 
+params.flagstat_2_multiqc = ""
+params.flagstat_2_multiqc_out = ""
 process flagstat_2_multiqc {
   tag "$file_id"
+  if (params.flagstat_2_multiqc_out != "") {
+    publishDir "results/${params.flagstat_2_multiqc_out}", mode: 'copy'
+  }
 
   input:
     tuple val(file_id), path(tsv)
 
   output:
-    path "*.txt" , emit: report
+    tuple val(file_id), path("*.txt"), emit: report
 """
 mv ${tsv} ${tsv.simpleName}.flagstat.txt
 """
 }
 
+params.idxstat_2_multiqc = ""
+params.idxstat_2_multiqc_out = ""
 process idxstat_2_multiqc {
   tag "$file_id"
+  if (params.idxstat_2_multiqc_out != "") {
+    publishDir "results/${params.idxstat_2_multiqc_out}", mode: 'copy'
+  }
 
   input:
     tuple val(file_id), path(tsv)
 
   output:
-    path "*.txt", emit: report
+    tuple val(file_id), path("*.txt"), emit: report
 """
 mv ${tsv} ${tsv.simpleName}.idxstats.txt
 """
diff --git a/src/nf_modules/samtools/sort_bams.config b/src/nf_modules/samtools/sort_bams.config
deleted file mode 100644
index 48f18580754cd776a7c409206963dbd5a55a50f8..0000000000000000000000000000000000000000
--- a/src/nf_modules/samtools/sort_bams.config
+++ /dev/null
@@ -1,57 +0,0 @@
-profiles {
-  docker {
-    docker.temp = "auto"
-    docker.enabled = true
-    process {
-      withName: sort_bam {
-        container = "lbmc/samtools:1.7"
-        cpus = 4
-      }
-    }
-  }
-  singularity {
-    singularity.enabled = true
-    singularity.cacheDir = "./bin/"
-    process {
-      withName: sort_bam {
-        container = "lbmc/samtools:1.7"
-        cpus = 4
-      }
-    }
-  }
-  psmn{
-    singularity.enabled = true
-    singularity.cacheDir = "/.singularity_psmn/"
-    singularity.runOptions = "--bind /Xnfs,/scratch"
-    process{
-      withName: sort_bam {
-        container = "lbmc/samtools:1.7"
-        executor = "sge"
-        clusterOptions = "-cwd -V"
-        cpus = 32
-        memory = "30GB"
-        time = "24h"
-        queue = "CLG6242deb384A,CLG6242deb384C,CLG5218deb192A,CLG5218deb192B,CLG5218deb192C,CLG5218deb192D,SLG5118deb96,SLG6142deb384A,SLG6142deb384B,SLG6142deb384C,SLG6142deb384D"
-        penv = "openmp32"
-      }
-    }
-  }
-  ccin2p3 {
-    singularity.enabled = true
-    singularity.cacheDir = "$baseDir/.singularity_in2p3/"
-    singularity.runOptions = "--bind /pbs,/sps,/scratch"
-    process{
-      withName: sort_bam {
-        container = "lbmc/samtools:1.7"
-        scratch = true
-        stageInMode = "copy"
-        stageOutMode = "rsync"
-        executor = "sge"
-        clusterOptions = "-P P_lbmc -l os=cl7 -l sps=1 -r n\
-        "
-        cpus = 1
-        queue = "huge"
-      }
-    }
-  }
-}
diff --git a/src/nf_modules/samtools/sort_bams.nf b/src/nf_modules/samtools/sort_bams.nf
deleted file mode 100644
index db1b30d1d6d59ae74535267815f6f887e2b267dd..0000000000000000000000000000000000000000
--- a/src/nf_modules/samtools/sort_bams.nf
+++ /dev/null
@@ -1,25 +0,0 @@
-params.bam = "$baseDir/data/bam/*.bam"
-
-log.info "bams files : ${params.bam}"
-
-Channel
-  .fromPath( params.bam )
-  .ifEmpty { error "Cannot find any bam files matching: ${params.bam}" }
-  .map { it -> [(it.baseName =~ /([^\.]*)/)[0][1], it]}
-  .set { bam_files }
-
-process sort_bam {
-  tag "$file_id"
-
-  input:
-    set file_id, file(bam) from bam_files
-
-  output:
-    set file_id, "*_sorted.bam" into sorted_bam_files
-
-  script:
-"""
-samtools sort -@ ${task.cpus} -O BAM -o ${file_id}_sorted.bam ${bam}
-"""
-}
-
diff --git a/src/nf_modules/samtools/split_bams.config b/src/nf_modules/samtools/split_bams.config
deleted file mode 100644
index 757a946a3888bc801aeef5a68eb90c3695197192..0000000000000000000000000000000000000000
--- a/src/nf_modules/samtools/split_bams.config
+++ /dev/null
@@ -1,56 +0,0 @@
-profiles {
-  docker {
-    docker.temp = "auto"
-    docker.enabled = true
-    process {
-      withName: split_bam {
-        container = "lbmc/samtools:1.7"
-        cpus = 2
-      }
-    }
-  }
-  singularity {
-    singularity.enabled = true
-    singularity.cacheDir = "./bin/"
-    process {
-      withName: split_bam {
-        container = "lbmc/samtools:1.7"
-        cpus = 2
-      }
-    }
-  }
-  psmn{
-    singularity.enabled = true
-    singularity.cacheDir = "/.singularity_psmn/"
-    singularity.runOptions = "--bind /Xnfs,/scratch"
-    process{
-      withName: split_bam {
-        container = "lbmc/samtools:1.7"
-        executor = "sge"
-        clusterOptions = "-cwd -V"
-        cpus = 1
-        memory = "20GB"
-        time = "12h"
-        queue = "monointeldeb128"
-      }
-    }
-  }
-  ccin2p3 {
-    singularity.enabled = true
-    singularity.cacheDir = "$baseDir/.singularity_in2p3/"
-    singularity.runOptions = "--bind /pbs,/sps,/scratch"
-    process{
-      withName: split_bam {
-        container = "lbmc/samtools:1.7"
-        scratch = true
-        stageInMode = "copy"
-        stageOutMode = "rsync"
-        executor = "sge"
-        clusterOptions = "-P P_lbmc -l os=cl7 -l sps=1 -r n\
-        "
-        cpus = 1
-        queue = "huge"
-      }
-    }
-  }
-}
diff --git a/src/nf_modules/samtools/split_bams.nf b/src/nf_modules/samtools/split_bams.nf
deleted file mode 100644
index 0d02a5d170893a88625a0885ffed16e1adbef35c..0000000000000000000000000000000000000000
--- a/src/nf_modules/samtools/split_bams.nf
+++ /dev/null
@@ -1,26 +0,0 @@
-params.bam = "$baseDir/data/bam/*.bam"
-
-log.info "bams files : ${params.bam}"
-
-Channel
-  .fromPath( params.bam )
-  .ifEmpty { error "Cannot find any bam files matching: ${params.bam}" }
-  .map { it -> [(it.baseName =~ /([^\.]*)/)[0][1], it]}
-  .set { bam_files }
-
-process split_bam {
-  tag "$file_id"
-
-  input:
-    set file_id, file(bam) from bam_files
-
-  output:
-    set file_id, "*_forward.bam*" into forward_bam_files
-    set file_id, "*_reverse.bam*" into reverse_bam_files
-  script:
-"""
-samtools view -hb -F 0x10 ${bam} > ${file_id}_forward.bam &
-samtools view -hb -f 0x10 ${bam} > ${file_id}_reverse.bam
-"""
-}
-
diff --git a/src/nf_modules/samtools/tests.sh b/src/nf_modules/samtools/tests.sh
deleted file mode 100755
index 256fa058beeaf3464391f4e974a2b4a2ef499b0c..0000000000000000000000000000000000000000
--- a/src/nf_modules/samtools/tests.sh
+++ /dev/null
@@ -1,51 +0,0 @@
-./nextflow src/nf_modules/samtools/sort_bams.nf \
-  -c src/nf_modules/samtools/sort_bams.config \
-  -profile docker \
-  --bam "data/tiny_dataset/map/tiny_v2.bam" \
-  -resume
-
-./nextflow src/nf_modules/samtools/index_bams.nf \
-  -c src/nf_modules/samtools/index_bams.config \
-  -profile docker \
-  --bam "data/tiny_dataset/map/tiny_v2.sort.bam" \
-  -resume
-
-./nextflow src/nf_modules/samtools/split_bams.nf \
-  -c src/nf_modules/samtools/split_bams.config \
-  -profile docker \
-  --bam "data/tiny_dataset/map/tiny_v2.bam" \
-  -resume
-
-./nextflow src/nf_modules/samtools/filter_bams.nf \
-  -c src/nf_modules/samtools/filter_bams.config \
-  -profile docker \
-  --bam "data/tiny_dataset/map/tiny_v2.bam" \
-  --bed "data/tiny_dataset/OLD/2genes.bed" \
-  -resume
-
-if [ -x "$(command -v singularity)" ]; then
-./nextflow src/nf_modules/samtools/sort_bams.nf \
-  -c src/nf_modules/samtools/sort_bams.config \
-  -profile docker \
-  --bam "data/tiny_dataset/map/tiny_v2.bam" \
-  -resume
-
-./nextflow src/nf_modules/samtools/index_bams.nf \
-  -c src/nf_modules/samtools/index_bams.config \
-  -profile docker \
-  --bam "data/tiny_dataset/map/tiny_v2.sort.bam" \
-  -resume
-
-./nextflow src/nf_modules/samtools/split_bams.nf \
-  -c src/nf_modules/samtools/split_bams.config \
-  -profile docker \
-  --bam "data/tiny_dataset/map/tiny_v2.bam" \
-  -resume
-
-./nextflow src/nf_modules/samtools/filter_bams.nf \
-  -c src/nf_modules/samtools/filter_bams.config \
-  -profile docker \
-  --bam "data/tiny_dataset/map/tiny_v2.bam" \
-  --bed "data/tiny_dataset/OLD/2genes.bed" \
-  -resume
-fi
diff --git a/src/nf_modules/sratoolkit/fastqdump.config b/src/nf_modules/sratoolkit/fastqdump.config
deleted file mode 100644
index 4287e92f36e6b2774f39ce3d38191c76881f312f..0000000000000000000000000000000000000000
--- a/src/nf_modules/sratoolkit/fastqdump.config
+++ /dev/null
@@ -1,56 +0,0 @@
-profiles {
-  docker {
-    docker.temp = "auto"
-    docker.enabled = true
-    process {
-      withName: fastq_dump {
-        container = "lbmc/sratoolkit:2.8.2"
-        cpus = 1
-      }
-    }
-  }
-  singularity {
-    singularity.enabled = true
-    singularity.cacheDir = "./bin/"
-    process {
-      withName: split_bam {
-        container = "lbmc/sratoolkit:2.8.2"
-        cpus = 1
-      }
-    }
-  }
-  psmn{
-    singularity.enabled = true
-    singularity.cacheDir = "/.singularity_psmn/"
-    singularity.runOptions = "--bind /Xnfs,/scratch"
-    process{
-      withName: fastq_dump {
-        container = "lbmc/sratoolkit:2.8.2"
-        executor = "sge"
-        clusterOptions = "-cwd -V"
-        cpus = 1
-        memory = "20GB"
-        time = "12h"
-        queue = "monointeldeb128"
-      }
-    }
-  }
-  ccin2p3 {
-    singularity.enabled = true
-    singularity.cacheDir = "$baseDir/.singularity_in2p3/"
-    singularity.runOptions = "--bind /pbs,/sps,/scratch"
-    process{
-      withName: fastq_dump {
-        container = "lbmc/sratoolkit:2.8.2"
-        scratch = true
-        stageInMode = "copy"
-        stageOutMode = "rsync"
-        executor = "sge"
-        clusterOptions = "-P P_lbmc -l os=cl7 -l sps=1 -r n\
-        "
-        cpus = 1
-        queue = "huge"
-      }
-    }
-  }
-}
diff --git a/src/nf_modules/sratoolkit/fastqdump.nf b/src/nf_modules/sratoolkit/fastqdump.nf
deleted file mode 100644
index 6ba77271405a0f00eef0283d85f7813d3e3a6c9d..0000000000000000000000000000000000000000
--- a/src/nf_modules/sratoolkit/fastqdump.nf
+++ /dev/null
@@ -1,48 +0,0 @@
-/*
-* sra-tools :
-
-*/
-
-/*                      fastq-dump
-* Imputs : srr list
-* Outputs : fastq files
-*/
-
-params.list_srr = "$baseDir/data/SRR/*.txt"
-
-log.info "downloading list srr : ${params.list_srr}"
-
-Channel
-  .fromPath( params.list_srr )
-  .ifEmpty { error "Cannot find any bam files matching: ${params.list_srr}" }
-  .splitCsv()
-  .map { it -> it[0]}
-  .set { SRR }
-
-//run is the column name containing SRR ids
-
-process fastq_dump {
-  tag "$file_id"
-  publishDir "results/download/fastq/${file_id}/", mode: 'copy'
-
-  input:
-    val file_id from SRR
-
-  output:
-    set file_id, "*.fastq" into fastq
-
-  script:
-"""
-HOME=\$(pwd) # ensure that fastq dump tmp file are in the right place
-#for test only 10000  reads are downloading with the option -N 10000 -X 20000
-fastq-dump --split-files --defline-seq '@\$ac_\$si/\$ri' --defline-qual "+" -N 10000 -X 20000 ${file_id}
-if [ -f ${file_id}_1.fastq ]
-then
-  mv ${file_id}_1.fastq ${file_id}_R1.fastq
-fi
-if [ -f ${file_id}_2.fastq ]
-then
-  mv ${file_id}_2.fastq ${file_id}_R2.fastq
-fi
-"""
-}
diff --git a/src/nf_modules/sratoolkit/main.nf b/src/nf_modules/sratoolkit/main.nf
index 3396d6e3e749eca10e6d69b3aa72e47ef81217f1..158d4058764560e9771133952db53b70391182c9 100644
--- a/src/nf_modules/sratoolkit/main.nf
+++ b/src/nf_modules/sratoolkit/main.nf
@@ -1,10 +1,15 @@
 version = "2.8.2"
 container_url = "lbmc/sratoolkit:${version}"
 
+params.fastq_dump = ""
+params.fastq_dump_out = ""
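+// params.fastq_dump: extra command-line options appended to the fastq-dump call below
+// params.fastq_dump_out: when set, fastq files are published under results/<value>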
 process fastq_dump {
   container = "${container_url}"
   label "big_mem_mono_cpus"
   tag "$sra"
+  if (params.fastq_dump_out != "") {
+    publishDir "results/${params.fastq_dump_out}", mode: 'copy'
+  }
 
   input:
     val sra
@@ -14,7 +19,7 @@ process fastq_dump {
 
   script:
 """
-fastq-dump --split-files --gzip ${sra}
+fastq-dump ${params.fastq_dump} --split-files --gzip ${sra}
 if [ -f ${sra}_1.fastq ]
 then
   mv ${sra}_1.fastq ${sra}_R1.fastq
diff --git a/src/nf_modules/sratoolkit/tests.sh b/src/nf_modules/sratoolkit/tests.sh
deleted file mode 100755
index 1747137d454630bffa8538efc3009469f271b50c..0000000000000000000000000000000000000000
--- a/src/nf_modules/sratoolkit/tests.sh
+++ /dev/null
@@ -1,13 +0,0 @@
-./nextflow src/nf_modules/sratoolkit/fastqdump.nf \
-  -c src/nf_modules/sratoolkit/fastqdump.config \
-  -profile docker \
-  --list_srr "src/nf_modules/sratoolkit/list-srr.txt" \
-  -resume
-
-if [ -x "$(command -v singularity)" ]; then
-./nextflow src/nf_modules/sratoolkit/fastqdump.nf \
-  -c src/nf_modules/sratoolkit/fastqdump.config \
-  -profile docker \
-  --list_srr "src/nf_modules/sratoolkit/list-srr.txt" \
-  -resume
-fi
diff --git a/src/nf_modules/star/indexing.config b/src/nf_modules/star/indexing.config
deleted file mode 100644
index 503c1b8c9fa6c21261530c8d3f9c78a5a30c207e..0000000000000000000000000000000000000000
--- a/src/nf_modules/star/indexing.config
+++ /dev/null
@@ -1,56 +0,0 @@
-profiles {
-  docker {
-    docker.temp = "auto"
-    docker.enabled = true
-    process {
-      withName: index_fasta {
-        container = "lbmc/star:2.7.3a"
-        cpus = 4
-      }
-    }
-  }
-  singularity {
-    singularity.enabled = true
-    singularity.cacheDir = "./bin/"
-    process {
-      withName: index_fasta {
-        container = "lbmc/star:2.7.3a"
-        cpus = 4
-      }
-    }
-  }
-  psmn{
-    singularity.enabled = true
-    singularity.cacheDir = "/.singularity_psmn/"
-    singularity.runOptions = "--bind /Xnfs,/scratch"
-    process{
-      withName: index_fasta {
-        container = "lbmc/star:2.7.3a"
-        executor = "sge"
-        clusterOptions = "-cwd -V"
-        cpus = 32
-        memory = "20GB"
-        time = "12h"
-        queue = "CLG6242deb384A,CLG6242deb384C,CLG5218deb192A,CLG5218deb192B,CLG5218deb192C,CLG5218deb192D,SLG5118deb96,SLG6142deb384A,SLG6142deb384B,SLG6142deb384C,SLG6142deb384D"
-        penv = "openmp32"
-      }
-    }
-  }
-  ccin2p3 {
-    singularity.enabled = true
-    singularity.cacheDir = "$baseDir/.singularity_in2p3/"
-    singularity.runOptions = "--bind /pbs,/sps,/scratch"
-    process{
-      withName: index_fasta {
-        container = "lbmc/star:2.7.3a"
-        scratch = true
-        stageInMode = "copy"
-        stageOutMode = "rsync"
-        executor = "sge"
-        clusterOptions = "-P P_lbmc -l os=cl7 -l sps=1 -r n"
-        cpus = 1
-        queue = "huge"
-      }
-    }
-  }
-}
diff --git a/src/nf_modules/star/indexing.nf b/src/nf_modules/star/indexing.nf
deleted file mode 100644
index 0f340b2d3d11ff5fd79d6b1e5e3f8c56d35f2154..0000000000000000000000000000000000000000
--- a/src/nf_modules/star/indexing.nf
+++ /dev/null
@@ -1,36 +0,0 @@
-params.fasta = "$baseDir/data/bam/*.fasta"
-params.annotation = "$baseDir/data/bam/*.gtf"
-
-log.info "fasta files : ${params.fasta}"
-
-Channel
-  .fromPath( params.fasta )
-  .ifEmpty { error "Cannot find any fasta files matching: ${params.fasta}" }
-  .set { fasta_file }
-Channel
-  .fromPath( params.annotation )
-  .ifEmpty { error "Cannot find any annotation files matching: ${params.annotation}" }
-  .set { annotation_file }
-
-process index_fasta {
-  tag "$fasta.baseName"
-  publishDir "results/mapping/index/", mode: 'copy'
-
-  input:
-    file fasta from fasta_file
-    file annotation from annotation_file
-
-  output:
-    file "*" into index_files
-
-  script:
-"""
-STAR --runThreadN ${task.cpus} --runMode genomeGenerate \
---genomeDir ./ \
---genomeFastaFiles ${fasta} \
---sjdbGTFfile ${annotation} \
---genomeSAindexNbases 3 # min(14, log2(GenomeLength)/2 - 1)
-"""
-}
-
-
diff --git a/src/nf_modules/star/mapping_paired.config b/src/nf_modules/star/mapping_paired.config
deleted file mode 100644
index d9d43b943740d4f714e05075827b843a606e02e3..0000000000000000000000000000000000000000
--- a/src/nf_modules/star/mapping_paired.config
+++ /dev/null
@@ -1,56 +0,0 @@
-profiles {
-  docker {
-    docker.temp = "auto"
-    docker.enabled = true
-    process {
-      withName: mapping_fastq {
-        container = "lbmc/star:2.7.3a"
-        cpus = 4
-      }
-    }
-  }
-  singularity {
-    singularity.enabled = true
-    singularity.cacheDir = "./bin/"
-    process {
-      withName: mapping_fastq {
-        container = "lbmc/star:2.7.3a"
-        cpus = 4
-      }
-    }
-  }
-  psmn{
-    singularity.enabled = true
-    singularity.cacheDir = "/.singularity_psmn/"
-    singularity.runOptions = "--bind /Xnfs,/scratch"
-    process{
-      withName: mapping_fastq {
-        container = "lbmc/star:2.7.3a"
-        executor = "sge"
-        clusterOptions = "-cwd -V"
-        cpus = 32
-        memory = "30GB"
-        time = "24h"
-        queue = "CLG6242deb384A,CLG6242deb384C,CLG5218deb192A,CLG5218deb192B,CLG5218deb192C,CLG5218deb192D,SLG5118deb96,SLG6142deb384A,SLG6142deb384B,SLG6142deb384C,SLG6142deb384D"
-        penv = "openmp32"
-      }
-    }
-  }
-  ccin2p3 {
-    singularity.enabled = true
-    singularity.cacheDir = "$baseDir/.singularity_in2p3/"
-    singularity.runOptions = "--bind /pbs,/sps,/scratch"
-    process{
-      withName: mapping_fastq {
-        container = "lbmc/star:2.7.3a"
-        scratch = true
-        stageInMode = "copy"
-        stageOutMode = "rsync"
-        executor = "sge"
-        clusterOptions = "-P P_lbmc -l os=cl7 -l sps=1 -r n"
-        cpus = 1
-        queue = "huge"
-      }
-    }
-  }
-}
diff --git a/src/nf_modules/star/mapping_paired.nf b/src/nf_modules/star/mapping_paired.nf
deleted file mode 100644
index 9ea901751da50944d23327f47bc3813ff3a49859..0000000000000000000000000000000000000000
--- a/src/nf_modules/star/mapping_paired.nf
+++ /dev/null
@@ -1,40 +0,0 @@
-params.fastq = "$baseDir/data/fastq/*_{1,2}.fastq"
-params.index = "$baseDir/data/index/*.index.*"
-
-log.info "fastq files : ${params.fastq}"
-log.info "index files : ${params.index}"
-
-Channel
-  .fromFilePairs( params.fastq )
-  .ifEmpty { error "Cannot find any fastq files matching: ${params.fastq}" }
-  .set { fastq_files }
-Channel
-  .fromPath( params.index )
-  .ifEmpty { error "Cannot find any index files matching: ${params.index}" }
-  .set { index_files }
-
-process mapping_fastq {
-  tag "$pair_id"
-  publishDir "results/mapping/bams/", mode: 'copy'
-
-  input:
-  set pair_id, file(reads) from fastq_files
-  file index from index_files.collect()
-
-  output:
-  set pair_id, "*.bam" into bam_files
-  file "*.out" into mapping_report
-
-  script:
-"""
-mkdir -p index
-mv ${index} index/
-STAR --runThreadN ${task.cpus} \
---genomeDir index/ \
---readFilesIn ${reads[0]} ${reads[1]} \
---outFileNamePrefix ${pair_id} \
---outSAMmapqUnique 0 \
---outSAMtype BAM SortedByCoordinate
-"""
-}
-
diff --git a/src/nf_modules/star/mapping_single.config b/src/nf_modules/star/mapping_single.config
deleted file mode 100644
index d9d43b943740d4f714e05075827b843a606e02e3..0000000000000000000000000000000000000000
--- a/src/nf_modules/star/mapping_single.config
+++ /dev/null
@@ -1,56 +0,0 @@
-profiles {
-  docker {
-    docker.temp = "auto"
-    docker.enabled = true
-    process {
-      withName: mapping_fastq {
-        container = "lbmc/star:2.7.3a"
-        cpus = 4
-      }
-    }
-  }
-  singularity {
-    singularity.enabled = true
-    singularity.cacheDir = "./bin/"
-    process {
-      withName: mapping_fastq {
-        container = "lbmc/star:2.7.3a"
-        cpus = 4
-      }
-    }
-  }
-  psmn{
-    singularity.enabled = true
-    singularity.cacheDir = "/.singularity_psmn/"
-    singularity.runOptions = "--bind /Xnfs,/scratch"
-    process{
-      withName: mapping_fastq {
-        container = "lbmc/star:2.7.3a"
-        executor = "sge"
-        clusterOptions = "-cwd -V"
-        cpus = 32
-        memory = "30GB"
-        time = "24h"
-        queue = "CLG6242deb384A,CLG6242deb384C,CLG5218deb192A,CLG5218deb192B,CLG5218deb192C,CLG5218deb192D,SLG5118deb96,SLG6142deb384A,SLG6142deb384B,SLG6142deb384C,SLG6142deb384D"
-        penv = "openmp32"
-      }
-    }
-  }
-  ccin2p3 {
-    singularity.enabled = true
-    singularity.cacheDir = "$baseDir/.singularity_in2p3/"
-    singularity.runOptions = "--bind /pbs,/sps,/scratch"
-    process{
-      withName: mapping_fastq {
-        container = "lbmc/star:2.7.3a"
-        scratch = true
-        stageInMode = "copy"
-        stageOutMode = "rsync"
-        executor = "sge"
-        clusterOptions = "-P P_lbmc -l os=cl7 -l sps=1 -r n"
-        cpus = 1
-        queue = "huge"
-      }
-    }
-  }
-}
diff --git a/src/nf_modules/star/mapping_single.nf b/src/nf_modules/star/mapping_single.nf
deleted file mode 100644
index 9d3d51b38da6cc7cf7dff985e5c0052b2924168a..0000000000000000000000000000000000000000
--- a/src/nf_modules/star/mapping_single.nf
+++ /dev/null
@@ -1,39 +0,0 @@
-params.fastq = "$baseDir/data/fastq/*.fastq"
-
-log.info "fastq files : ${params.fastq}"
-log.info "index files : ${params.index}"
-
-Channel
-  .fromPath( params.fastq )
-  .ifEmpty { error "Cannot find any fastq files matching: ${params.fastq}" }
-  .map { it -> [(it.baseName =~ /([^\.]*)/)[0][1], it]}
-  .set { fastq_files }
-Channel
-  .fromPath( params.index )
-  .ifEmpty { error "Cannot find any index files matching: ${params.index}" }
-  .set { index_files }
-
-process mapping_fastq {
-  tag "$file_id"
-  publishDir "results/mapping/bams/", mode: 'copy'
-
-  input:
-  set file_id, file(reads) from fastq_files
-  file index from index_files.collect()
-
-  output:
-  set file_id, "*.bam" into bam_files
-  file "*.out" into mapping_report
-
-  script:
-"""
-mkdir -p index
-mv ${index} index/
-STAR --runThreadN ${task.cpus} \
---genomeDir index/ \
---readFilesIn ${reads} \
---outFileNamePrefix ${file_id} \
---outSAMmapqUnique 0 \
---outSAMtype BAM SortedByCoordinate
-"""
-}
diff --git a/src/nf_modules/star/tests.sh b/src/nf_modules/star/tests.sh
deleted file mode 100755
index 046ffe4f32bc78467a61440774553335f888b288..0000000000000000000000000000000000000000
--- a/src/nf_modules/star/tests.sh
+++ /dev/null
@@ -1,43 +0,0 @@
-./nextflow src/nf_modules/star/indexing.nf \
-  -c src/nf_modules/star/indexing.config \
-  -profile docker \
-  --fasta "data/tiny_dataset/fasta/tiny_v2.fasta" \
-  --annotation "data/tiny_dataset/annot/tiny.gtf" \
-  -resume
-
-./nextflow src/nf_modules/star/mapping_single.nf \
-  -c src/nf_modules/star/mapping_single.config \
-  -profile docker \
-  --index "results/mapping/index/*" \
-  --fastq "data/tiny_dataset/fastq/tiny*_S.fastq" \
-  -resume
-
-./nextflow src/nf_modules/star/mapping_paired.nf \
-  -c src/nf_modules/star/mapping_paired.config \
-  -profile docker \
-  --index "results/mapping/index/*" \
-  --fastq "data/tiny_dataset/fastq/tiny*_R{1,2}.fastq" \
-  -resume
-
-if [ -x "$(command -v singularity)" ]; then
-./nextflow src/nf_modules/star/indexing.nf \
-  -c src/nf_modules/star/indexing.config \
-  -profile singularity \
-  --fasta "data/tiny_dataset/fasta/tiny_v2.fasta" \
-  --annotation "data/tiny_dataset/annot/tiny.gtf" \
-  -resume
-
-./nextflow src/nf_modules/star/mapping_single.nf \
-  -c src/nf_modules/star/mapping_single.config \
-  -profile singularity \
-  --index "results/mapping/index/*" \
-  --fastq "data/tiny_dataset/fastq/tiny*_S.fastq" \
-  -resume
-
-./nextflow src/nf_modules/star/mapping_paired.nf \
-  -c src/nf_modules/star/mapping_paired.config \
-  -profile singularity \
-  --index "results/mapping/index/*" \
-  --fastq "data/tiny_dataset/fastq/tiny*_R{1,2}.fastq" \
-  -resume
-fi
diff --git a/src/nf_modules/subread/subread.config b/src/nf_modules/subread/subread.config
deleted file mode 100644
index 2d09601a5bc48e9ff0f65edde7fb56d978ffd90b..0000000000000000000000000000000000000000
--- a/src/nf_modules/subread/subread.config
+++ /dev/null
@@ -1,75 +0,0 @@
-profiles {
-  docker {
-    docker.temp = "auto"
-    docker.enabled = true
-    process {
-      withName: sort_bam {
-        container = "lbmc/samtools:1.7"
-        cpus = 1
-      }
-      withName: counting {
-        container = "lbmc/subread:1.6.4"
-        cpus = 1
-      }
-    }
-  }
-  singularity {
-    singularity.enabled = true
-    singularity.cacheDir = "./bin/"
-    process {
-      withName: sort_bam {
-        container = "lbmc/samtools:1.7"
-        cpus = 1
-      }
-      withName: counting {
-        container = "lbmc/subread:1.6.4"
-        cpus = 1
-      }
-    }
-  }
-  psmn{
-    singularity.enabled = true
-    singularity.cacheDir = "/.singularity_psmn/"
-    singularity.runOptions = "--bind /Xnfs,/scratch"
-    process{
-      withName: sort_bam {
-        container = "lbmc/subread:1.6.4"
-        executor = "sge"
-        clusterOptions = "-cwd -V"
-        cpus = 1
-        memory = "20GB"
-        time = "12h"
-        queue = "monointeldeb128"
-      }
-    }
-  }
-  ccin2p3 {
-    singularity.enabled = true
-    singularity.cacheDir = "$baseDir/.singularity_in2p3/"
-    singularity.runOptions = "--bind /pbs,/sps,/scratch"
-    process{
-      withName: sort_bam {
-        container = "lbmc/samtools:1.7"
-        scratch = true
-        stageInMode = "copy"
-        stageOutMode = "rsync"
-        executor = "sge"
-        clusterOptions = "-P P_lbmc -l os=cl7 -l sps=1 -r n\
-        "
-        cpus = 1
-        queue = "huge"
-      }
-      withName: counting {
-        container = "lbmc/subread:1.6.4"
-        scratch = true
-        stageInMode = "copy"
-        stageOutMode = "rsync"
-        executor = "sge"
-        clusterOptions = "-P P_lbmc -l os=cl7 -l sps=1 -r n\
-        "
-        cpus = 1
-        queue = "huge"
-      }
-    }
-  }
-}
diff --git a/src/nf_modules/subread/subread.nf b/src/nf_modules/subread/subread.nf
deleted file mode 100644
index f0b53e396a27b871f9091ecd5cff4858550fa98f..0000000000000000000000000000000000000000
--- a/src/nf_modules/subread/subread.nf
+++ /dev/null
@@ -1,52 +0,0 @@
-params.bam = "$baseDir/data/bam/*.bam"
-params.gtf = "$baseDir/data/annotation/*.gtf"
-
-log.info "bam files : ${params.bam}"
-log.info "gtf files : ${params.gtf}"
-
-Channel
-  .fromPath( params.bam )
-  .ifEmpty { error "Cannot find any fastq files matching: ${params.bam}" }
-  .map { it -> [(it.baseName =~ /([^\.]*)/)[0][1], it]}
-  .set { bam_files }
-Channel
-  .fromPath( params.gtf )
-  .ifEmpty { error "Cannot find any gtf file matching: ${params.gtf}" }
-  .set { gtf_file }
-
-process sort_bam {
-  tag "$file_id"
-  cpus 4
-
-  input:
-    set file_id, file(bam) from bam_files
-
-  output:
-    set file_id, "*_sorted.sam" into sorted_bam_files
-
-  script:
-"""
-# sort bam by name
-samtools sort -@ ${task.cpus} -n -O SAM -o ${file_id}_sorted.sam ${bam}
-"""
-}
-
-process counting {
-  tag "$file_id"
-  publishDir "results/quantification/", mode: 'copy'
-
-  input:
-  set file_id, file(bam) from sorted_bam_files
-  file gtf from gtf_file
-
-  output:
-  file "*.count" into count_files
-
-  script:
-"""
-featureCounts ${bam} -a ${gtf} -p \
-  -o ${file_id}.count \
-  -R BAM
-"""
-}
-
diff --git a/src/nf_modules/subread/tests.sh b/src/nf_modules/subread/tests.sh
deleted file mode 100755
index c50b20e3601efeaa8a0d3721ad6ab20739c3db48..0000000000000000000000000000000000000000
--- a/src/nf_modules/subread/tests.sh
+++ /dev/null
@@ -1,15 +0,0 @@
-./nextflow src/nf_modules/subread/subread.nf \
-  -c src/nf_modules/subread/subread.config \
-  -profile docker \
-  --gtf "data/tiny_dataset/annot/tiny.gff" \
-  --bam "data/tiny_dataset/map/tiny_v2.bam" \
-  -resume
-
-if [ -x "$(command -v singularity)" ]; then
-./nextflow src/nf_modules/subread/subread.nf \
-  -c src/nf_modules/subread/subread.config \
-  -profile singularity \
-  --gtf "data/tiny_dataset/annot/tiny.gff" \
-  --bam "data/tiny_dataset/map/tiny_v2.bam" \
-  -resume
-fi
diff --git a/src/nf_modules/ucsc/main.nf b/src/nf_modules/ucsc/main.nf
index 1e288e835c03a7564587e7862d28944683ce4780..e661e8a559075a163612c21d9d3c8e5777da102e 100644
--- a/src/nf_modules/ucsc/main.nf
+++ b/src/nf_modules/ucsc/main.nf
@@ -1,14 +1,23 @@
 version = "407"
 container_url = "lbmc/ucsc:${version}"
 
+include {
+  index_fasta
+} from './../samtools/main'
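+// index_fasta (from the samtools module) is reused by the wig_to_bigwig
+// workflows below to derive chromosome sizes from the fasta index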
+
+params.bedgraph_to_bigwig = ""
+params.bedgraph_to_bigwig_out = ""
 process bedgraph_to_bigwig {
   container = "${container_url}"
   label "big_mem_mono_cpus"
   tag "${file_id}"
+  if (params.bedgraph_to_bigwig_out != "") {
+    publishDir "results/${params.bedgraph_to_bigwig_out}", mode: 'copy'
+  }
 
   input:
-  tuple val(file_id) path(bg)
-  tuple val(file_id) path(bed)
+  tuple val(file_id), path(bg)
+  tuple val(file_id), path(bed)
 
   output:
   tuple val(file_id), path("*.bw"), emit: bw
@@ -20,9 +29,214 @@ LC_COLLATE=C
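+# build chromsize.txt (chromosome name and length) from the bed file for bedGraphToBigWig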
 awk -v OFS="\\t" '{print \$1, \$3}' ${bed} > chromsize.txt
 
-sort -T ./ -k1,1 -k2,2n ${bg} > \
+sort -T ./ -k1,1 -k2,2n ${bg} | \
-  bedGraphToBigWig - \
+  bedGraphToBigWig ${params.bedgraph_to_bigwig} - \
     chromsize.txt \
     ${bg.simpleName}_norm.bw
 """
 }
 
+params.wig_to_bedgraph = ""
+params.wig_to_bedgraph_out = ""
+workflow wig_to_bedgraph {
+  take:
+    fasta
+    wig
+  main:
+    wig_to_bigwig(
+      fasta,
+      wig
+    )
+    bigwig_to_bedgraph(
+      wig_to_bigwig.out.bw
+    )
+  emit:
+  bg = bigwig_to_bedgraph.out.bg
+}
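+// usage sketch (channel names are illustrative):
+//   wig_to_bedgraph(fasta_files, wig_files)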
+
+workflow wig2_to_bedgraph2 {
+  take:
+    fasta
+    wig
+  main:
+    wig2_to_bigwig2(
+      fasta,
+      wig
+    )
+    bigwig2_to_bedgraph2(
+      wig2_to_bigwig2.out.bw
+    )
+  emit:
+  bg = bigwig2_to_bedgraph2.out.bg
+}
+
+params.bigwig_to_bedgraph = ""
+params.bigwig_to_bedgraph_out = ""
+process bigwig_to_bedgraph {
+  container = "${container_url}"
+  label "big_mem_mono_cpus"
+  tag "${file_id}"
+  if (params.bigwig_to_bedgraph_out != "") {
+    publishDir "results/${params.bigwig_to_bedgraph_out}", mode: 'copy'
+  }
+
+  input:
+  tuple val(file_id), path(bw)
+
+  output:
+  tuple val(file_id), path("*.bg"), emit: bg
+
+  script:
+"""
+bigWigToBedGraph ${bw} ${bw.simpleName}.bg
+"""
+}
+
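+// the *2 variants below apply the same conversion to a pair of files
+// (e.g. forward and reverse strand tracks) sharing a single file_id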
+params.bigwig2_to_bedgraph2 = ""
+params.bigwig2_to_bedgraph2_out = ""
+process bigwig2_to_bedgraph2 {
+  container = "${container_url}"
+  label "big_mem_mono_cpus"
+  tag "${file_id}"
+  if (params.bigwig2_to_bedgraph2_out != "") {
+    publishDir "results/${params.bigwig2_to_bedgraph2_out}", mode: 'copy'
+  }
+
+  input:
+  tuple val(file_id), path(bw_a), path(bw_b)
+
+  output:
+  tuple val(file_id), path("${bw_a.simpleName}.bg"), path("${bw_b.simpleName}.bg"), emit: bg
+
+  script:
+"""
+bigWigToBedGraph ${bw_a} ${bw_a.simpleName}.bg
+bigWigToBedGraph ${bw_b} ${bw_b.simpleName}.bg
+"""
+}
+
+params.bigwig_to_wig = ""
+params.bigwig_to_wig_out = ""
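+// converts a bigwig to a fixed-step (10 bp) wig through a bedgraph intermediate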
+process bigwig_to_wig {
+  container = "${container_url}"
+  label "big_mem_mono_cpus"
+  tag "${file_id}"
+  if (params.bigwig_to_wig_out != "") {
+    publishDir "results/${params.bigwig_to_wig_out}", mode: 'copy'
+  }
+
+  input:
+  tuple val(file_id), path(bw)
+
+  output:
+  tuple val(file_id), path("*.wig"), emit: wig
+
+  script:
+"""
+bigWigToBedGraph ${bw} ${bw.simpleName}.bg
+bedgraph_to_wig.pl --bedgraph ${bw.simpleName}.bg --wig ${bw.simpleName}.wig --step 10
+"""
+}
+
+params.bigwig2_to_wig2 = ""
+params.bigwig2_to_wig2_out = ""
+process bigwig2_to_wig2 {
+  container = "${container_url}"
+  label "big_mem_mono_cpus"
+  tag "${file_id}"
+  if (params.bigwig2_to_wig2_out != "") {
+    publishDir "results/${params.bigwig2_to_wig2_out}", mode: 'copy'
+  }
+
+  input:
+  tuple val(file_id), path(bw_a), path(bw_b)
+
+  output:
+  tuple val(file_id), path("${bw_a.simpleName}.wig"), path("${bw_b.simpleName}.wig"), emit: wig
+
+  script:
+"""
+bigWigToBedGraph ${bw_a} ${bw_a.simpleName}.bg
+bedgraph_to_wig.pl --bedgraph ${bw_a.simpleName}.bg --wig ${bw_a.simpleName}.wig --step 10
+bigWigToBedGraph ${bw_b} ${bw_b.simpleName}.bg
+bedgraph_to_wig.pl --bedgraph ${bw_b.simpleName}.bg --wig ${bw_b.simpleName}.wig --step 10
+"""
+}
+
+params.wig_to_bigwig = ""
+params.wig_to_bigwig_out = ""
+
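+// indexes the fasta with samtools, then converts the wig using chromosome
+// sizes taken from the .fai index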
+workflow wig_to_bigwig {
+  take:
+    fasta
+    wig
+  main:
+    index_fasta(fasta)
+    wig_to_bigwig_sub(
+      wig,
+      index_fasta.out.index
+    )
+  emit:
+  bw = wig_to_bigwig_sub.out.bw
+}
+
+process wig_to_bigwig_sub {
+  container = "${container_url}"
+  label "big_mem_mono_cpus"
+  tag "${file_id}"
+  if (params.wig_to_bigwig_out != "") {
+    publishDir "results/${params.wig_to_bigwig_out}", mode: 'copy'
+  }
+
+  input:
+  tuple val(file_id), path(w)
+  tuple val(idx_id), path(fasta_idx)
+
+  output:
+  tuple val(file_id), path("${w.simpleName}.bw"), emit: bw
+
+  script:
+"""
+cut -f 1,2 ${fasta_idx} > ${fasta_idx.simpleName}.sizes
+wigToBigWig -clip ${w} ${fasta_idx.simpleName}.sizes ${w.simpleName}.bw
+"""
+}
+
+params.wig2_to_bigwig2 = ""
+params.wig2_to_bigwig2_out = ""
+
+workflow wig2_to_bigwig2 {
+  take:
+    fasta
+    wigs
+  main:
+    index_fasta(fasta)
+    wig2_to_bigwig2_sub(
+      wigs,
+      index_fasta.out.index
+    )
+  emit:
+  bw = wig2_to_bigwig2_sub.out.bw
+}
+
+process wig2_to_bigwig2_sub {
+  container = "${container_url}"
+  label "big_mem_mono_cpus"
+  tag "${file_id}"
+  if (params.wig2_to_bigwig2_out != "") {
+    publishDir "results/${params.wig2_to_bigwig2_out}", mode: 'copy'
+  }
+
+  input:
+  tuple val(file_id), path(w_a), path(w_b)
+  tuple val(idx_id), path(fasta_idx)
+
+  output:
+  tuple val(file_id), path("${w_a.simpleName}.bw"), path("${w_b.simpleName}.bw"), emit: bw
+
+  script:
+"""
+cut -f 1,2 ${fasta_idx} > ${fasta_idx.simpleName}.sizes
+wigToBigWig -clip ${w_a} ${fasta_idx.simpleName}.sizes ${w_a.simpleName}.bw
+wigToBigWig -clip ${w_b} ${fasta_idx.simpleName}.sizes ${w_b.simpleName}.bw
+"""
+}
\ No newline at end of file
diff --git a/src/nf_modules/urqt/main.nf b/src/nf_modules/urqt/main.nf
index 48200cc0487938db3c8295e12c5077d8e721b39d..b91afb74ccad107e7accd76ead32ea9945dda8b1 100644
--- a/src/nf_modules/urqt/main.nf
+++ b/src/nf_modules/urqt/main.nf
@@ -3,74 +3,37 @@ container_url = "lbmc/urqt:${version}"
 
 trim_quality = "20"
 
+params.trimming = "--t 20"
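+// params.trimming: UrQt trimming options (default: quality threshold of 20);
+// override from the command line with e.g. --trimming "--t 30"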
 process trimming {
   container = "${container_url}"
   label "big_mem_multi_cpus"
-  tag "${reads}"
+  tag "${file_id}"
 
   input:
-  tuple val(pair_id), path(reads)
-
-  output:
-  tuple val(pair_id), path("*_trim_R{1,2}.fastq.gz"), emit: fastq
-  path "*_report.txt", emit: report
-
-  script:
-if (reads instanceof List)
-"""
-UrQt --t 20 --m ${task.cpus} --gz \
-  --in ${reads[0]} --inpair ${reads[1]} \
-  --out ${pair_id}_trim_R1.fastq.gz --outpair ${pair_id}_trim_R2.fastq.gz \
-  > ${pair_id}_trimming_report.txt
-"""
-else
-"""
-UrQt --t 20 --m ${task.cpus} --gz \
-  --in ${reads} \
-  --out ${file_id}_trim.fastq.gz \
-  > ${file_id}_trimming_report.txt
-"""
-}
-
-process trimming_pairedend {
-  container = "${container_url}"
-  label "big_mem_multi_cpus"
-  tag "${reads}"
-
-  input:
-  tuple val(pair_id), path(reads)
+  tuple val(file_id), path(reads)
 
   output:
-  tuple val(pair_id), path("*_trim_R{1,2}.fastq.gz"), emit: fastq
+  tuple val(file_id), path("*_trim*.fastq.gz"), emit: fastq
   path "*_report.txt", emit: report
 
   script:
-"""
-UrQt --t 20 --m ${task.cpus} --gz \
+  if (file_id instanceof List){
+    file_prefix = file_id[0]
+  } else {
+    file_prefix = file_id
+  }
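+  // paired-end when two fastq files were grouped under the same file_id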
+  if (reads.size() == 2)
+"""
+UrQt ${params.trimming} --m ${task.cpus} --gz \
   --in ${reads[0]} --inpair ${reads[1]} \
-  --out ${pair_id}_trim_R1.fastq.gz --outpair ${pair_id}_trim_R2.fastq.gz \
+  --out ${file_prefix}_trim_R1.fastq.gz --outpair ${file_prefix}_trim_R2.fastq.gz \
-  > ${pair_id}_trimming_report.txt
+  > ${file_prefix}_trimming_report.txt
 """
-}
-
-process trimming_singleend {
-  container = "${container_url}"
-  label "big_mem_multi_cpus"
-  tag "$file_id"
-
-  input:
-  tuple val(file_id), path(reads)
-
-  output:
-  tuple val(file_id), path("*_trim.fastq.gz"), emit: fastq
-  path "*_report.txt", emit: report
-
-  script:
+  else
 """
-UrQt --t 20 --m ${task.cpus} --gz \
-  --in ${reads} \
-  --out ${file_id}_trim.fastq.gz \
-  > ${file_id}_trimming_report.txt
+UrQt ${params.trimming} --m ${task.cpus} --gz \
+  --in ${reads[0]} \
+  --out ${file_prefix}_trim.fastq.gz \
+  > ${file_prefix}_trimming_report.txt
 """
-}
-
+}
\ No newline at end of file
diff --git a/src/nf_modules/urqt/tests.sh b/src/nf_modules/urqt/tests.sh
deleted file mode 100755
index 9992c67490ce65df42471a73d5c36bb0b708763e..0000000000000000000000000000000000000000
--- a/src/nf_modules/urqt/tests.sh
+++ /dev/null
@@ -1,25 +0,0 @@
-./nextflow src/nf_modules/urqt/trimming_paired.nf \
-  -c src/nf_modules/urqt/trimming_paired.config \
-  -profile docker \
-  --fastq "data/tiny_dataset/fastq/tiny_R{1,2}.fastq" \
-  -resume
-
-./nextflow src/nf_modules/urqt/trimming_single.nf \
-  -c src/nf_modules/urqt/trimming_single.config \
-  -profile docker \
-  --fastq "data/tiny_dataset/fastq/tiny_R{1,2}.fastq" \
-  -resume
-
-if [ -x "$(command -v singularity)" ]; then
-./nextflow src/nf_modules/urqt/trimming_single.nf \
-  -c src/nf_modules/urqt/trimming_single.config \
-  -profile singularity \
-  --fastq "data/tiny_dataset/fastq/tiny_R{1,2}.fastq" \
-  -resume
-
-./nextflow src/nf_modules/urqt/trimming_single.nf \
-  -c src/nf_modules/urqt/trimming_single.config \
-  -profile singularity \
-  --fastq "data/tiny_dataset/fastq/tiny_R{1,2}.fastq" \
-  -resume
-fi
diff --git a/src/nf_modules/urqt/trimming_paired.config b/src/nf_modules/urqt/trimming_paired.config
deleted file mode 100644
index e6eb56ae6e0673581495ab0bcc600e991b5064bb..0000000000000000000000000000000000000000
--- a/src/nf_modules/urqt/trimming_paired.config
+++ /dev/null
@@ -1,58 +0,0 @@
-profiles {
-  docker {
-    docker.temp = "auto"
-    docker.enabled = true
-    process {
-      withLabel: urqt {
-        cpus = 4
-        container = "lbmc/urqt:d62c1f8"
-      }
-    }
-  }
-  singularity {
-    singularity.enabled = true
-    singularity.cacheDir = "./bin/"
-    process {
-      withLabel: urqt {
-        cpus = 4
-        container = "lbmc/urqt:d62c1f8"
-      }
-    }
-  }
-  psmn{
-    singularity.enabled = true
-    singularity.cacheDir = "$baseDir/.singularity_psmn/"
-    singularity.runOptions = "--bind /Xnfs,/scratch"
-    process{
-      withLabel: urqt {
-        container = "lbmc/urqt:d62c1f8"
-        executor = "sge"
-        clusterOptions = "-cwd -V"
-        memory = "5GB"
-        cpus = 32
-        time = "12h"
-        queue = "CLG6242deb384A,CLG6242deb384C,CLG5218deb192A,CLG5218deb192B,CLG5218deb192C,CLG5218deb192D,SLG6142deb384A,SLG6142deb384B,SLG6142deb384C,SLG6142deb384D"
-        penv = "openmp32"
-      }
-    }
-  }
-  ccin2p3 {
-    singularity.enabled = true
-    singularity.cacheDir = "$baseDir/.singularity_in2p3/"
-    singularity.runOptions = "--bind /pbs,/sps,/scratch"
-    process{
-      withLabel: urqt {
-        container = "lbmc/urqt:d62c1f8"
-        scratch = true
-        stageInMode = "copy"
-        stageOutMode = "rsync"
-        executor = "sge"
-        clusterOptions = "-P P_lbmc -l os=cl7 -l sps=1 -r n\
-        "
-        cpus = 1
-        queue = "huge"
-      }
-    }
-  }
-}
-
diff --git a/src/nf_modules/urqt/trimming_paired.nf b/src/nf_modules/urqt/trimming_paired.nf
deleted file mode 100644
index 0b28219c1db247aba868beea8e472abe14629d39..0000000000000000000000000000000000000000
--- a/src/nf_modules/urqt/trimming_paired.nf
+++ /dev/null
@@ -1,27 +0,0 @@
-log.info "fastq files : ${params.fastq}"
-
-Channel
-  .fromFilePairs( params.fastq )
-  .ifEmpty { error "Cannot find any fastq files matching: ${params.fastq}" }
-  .set { fastq_files }
-
-process trimming {
-  tag "${reads}"
-  publishDir "results/fastq/trimming/", mode: 'copy'
-  label "urqt"
-
-  input:
-  set pair_id, file(reads) from fastq_files
-
-  output:
-  set pair_id, "*_trim_R{1,2}.fastq.gz" into fastq_files_trim
-
-  script:
-"""
-UrQt --t 20 --m ${task.cpus} --gz \
---in ${reads[0]} --inpair ${reads[1]} \
---out ${pair_id}_trim_R1.fastq.gz --outpair ${pair_id}_trim_R2.fastq.gz \
-> ${pair_id}_trimming_report.txt
-"""
-}
-
diff --git a/src/nf_modules/urqt/trimming_single.config b/src/nf_modules/urqt/trimming_single.config
deleted file mode 100644
index e1843f172a39b7bd0013eb2c0dc7a6cc831d90f7..0000000000000000000000000000000000000000
--- a/src/nf_modules/urqt/trimming_single.config
+++ /dev/null
@@ -1,58 +0,0 @@
-profiles {
-  docker {
-    docker.temp = "auto"
-    docker.enabled = true
-    process {
-      withLabel: urqt {
-        container = "lbmc/urqt:d62c1f8"
-        cpus = 4
-      }
-    }
-  }
-  singularity {
-    singularity.enabled = true
-    singularity.cacheDir = "./bin/"
-    process {
-      withLabel: urqt {
-        container = "lbmc/urqt:d62c1f8"
-        cpus = 4
-      }
-    }
-  }
-  psmn{
-    singularity.enabled = true
-    singularity.cacheDir = "/.singularity_psmn/"
-    singularity.runOptions = "--bind /Xnfs,/scratch"
-    process{
-      withLabel: urqt {
-        container = "lbmc/urqt:d62c1f8"
-        executor = "sge"
-        clusterOptions = "-cwd -V"
-        cpus = 32
-        memory = "5GB"
-        time = "12h"
-        queue = "CLG6242deb384A,CLG6242deb384C,CLG5218deb192A,CLG5218deb192B,CLG5218deb192C,CLG5218deb192D,SLG5118deb96,SLG6142deb384A,SLG6142deb384B,SLG6142deb384C,SLG6142deb384D"
-        penv = "openmp32"
-      }
-    }
-  }
-  ccin2p3 {
-    singularity.enabled = true
-    singularity.cacheDir = "$baseDir/.singularity_in2p3/"
-    singularity.runOptions = "--bind /pbs,/sps,/scratch"
-    process{
-      withLabel: urqt {
-        container = "lbmc/urqt:d62c1f8"
-        scratch = true
-        stageInMode = "copy"
-        stageOutMode = "rsync"
-        executor = "sge"
-        clusterOptions = "-P P_lbmc -l os=cl7 -l sps=1 -r n\
-        "
-        cpus = 1
-        queue = "huge"
-      }
-    }
-  }
-}
-
diff --git a/src/nf_modules/urqt/trimming_single.nf b/src/nf_modules/urqt/trimming_single.nf
deleted file mode 100644
index 42bc7c2c5dc6cc4444ba519e9b94a397bcf6af22..0000000000000000000000000000000000000000
--- a/src/nf_modules/urqt/trimming_single.nf
+++ /dev/null
@@ -1,30 +0,0 @@
-params.fastq = "$baseDir/data/fastq/*.fastq"
-
-log.info "fastq files : ${params.fastq}"
-
-Channel
-  .fromPath( params.fastq )
-  .ifEmpty { error "Cannot find any fastq files matching: ${params.fastq}" }
-  .map { it -> [(it.baseName =~ /([^\.]*)/)[0][1], it]}
-  .set { fastq_files }
-
-process trimming {
-  tag "$file_id"
-  echo true
-  label "urqt"
-
-  input:
-  set file_id, file(reads) from fastq_files
-
-  output:
-  set file_id, "*_trim.fastq.gz" into fastq_files_trim
-
-  script:
-  """
-  UrQt --t 20 --m ${task.cpus} --gz \
-  --in ${reads} \
-  --out ${file_id}_trim.fastq.gz \
-  > ${file_id}_trimming_report.txt
-  """
-}
-
diff --git a/src/solution_RNASeq.config b/src/solution_RNASeq.config
deleted file mode 100644
index fadfceb4bef905feafb1df7fbf1f2d3101ebb822..0000000000000000000000000000000000000000
--- a/src/solution_RNASeq.config
+++ /dev/null
@@ -1,171 +0,0 @@
-profiles {
-  docker {
-    docker.temp = "auto"
-    docker.enabled = true
-    process {
-      withName: adaptor_removal {
-        container = "lbmc/cutadapt:2.1"
-        cpus = 1
-      }
-      withName: trimming {
-        cpus = 4
-        container = "lbmc/urqt:d62c1f8"
-      }
-      withName: fasta_from_bed {
-        container = "lbmc/bedtools:2.25.0"
-        cpus = 1
-      }
-      withName: index_fasta {
-        container = "lbmc/kallisto:0.44.0"
-        cpus = 4
-      }
-      withName: mapping_fastq {
-        container = "lbmc/kallisto:0.44.0"
-        cpus = 4
-      }
-    }
-  }
-  singularity {
-    singularity.enabled = true
-    singularity.cacheDir = "./bin/"
-    process {
-      withName: adaptor_removal {
-        container = "lbmc/cutadapt:2.1"
-        cpus = 1
-      }
-      withName: trimming {
-        cpus = 4
-        container = "lbmc/urqt:d62c1f8"
-      }
-      withName: fasta_from_bed {
-        container = "lbmc/bedtools:2.25.0"
-        cpus = 1
-      }
-      withName: index_fasta {
-        container = "lbmc/kallisto:0.44.0"
-        cpus = 4
-      }
-      withName: mapping_fastq {
-        container = "lbmc/kallisto:0.44.0"
-        cpus = 4
-      }
-    }
-  }
-  psmn{
-    singularity.enabled = true
-    singularity.cacheDir = "$baseDir/.singularity_psmn/"
-    singularity.runOptions = "--bind /Xnfs,/scratch"
-    process{
-      withName: adaptor_removal {
-        container = "lbmc/cutadapt:2.1"
-        executor = "sge"
-        clusterOptions = "-cwd -V"
-        cpus = 1
-        memory = "20GB"
-        time = "12h"
-        queue = "monointeldeb128"
-      }
-      withName: trimming {
-        container = "lbmc/urqt:d62c1f8"
-        scratch = true
-        executor = "sge"
-        clusterOptions = "-cwd -V"
-        cpus = 32
-        memory = "30GB"
-        time = "24h"
-        queue = "CLG6242deb384A,CLG6242deb384C,CLG5218deb192A,CLG5218deb192B,CLG5218deb192C,CLG5218deb192D,SLG5118deb96,SLG6142deb384A,SLG6142deb384B,SLG6142deb384C,SLG6142deb384D"
-        penv = "openmp32"
-
-      }
-      withName: fasta_from_bed {
-        container = "lbmc/bedtools:2.25.0"
-        executor = "sge"
-        clusterOptions = "-cwd -V"
-        cpus = 1
-        memory = "20GB"
-        time = "12h"
-        queue = "monointeldeb128"
-      }
-      withName: index_fasta {
-        container = "lbmc/kallisto:0.44.0"
-        executor = "sge"
-        clusterOptions = "-cwd -V"
-        cpus = 32
-        memory = "30GB"
-        time = "24h"
-        queue = "CLG6242deb384A,CLG6242deb384C,CLG5218deb192A,CLG5218deb192B,CLG5218deb192C,CLG5218deb192D,SLG5118deb96,SLG6142deb384A,SLG6142deb384B,SLG6142deb384C,SLG6142deb384D"
-        penv = "openmp32"
-      }
-      withName: mapping_fastq {
-        container = "lbmc/kallisto:0.44.0"
-        executor = "sge"
-        clusterOptions = "-cwd -V"
-        cpus = 32
-        memory = "30GB"
-        time = "24h"
-        queue = "CLG6242deb384A,CLG6242deb384C,CLG5218deb192A,CLG5218deb192B,CLG5218deb192C,CLG5218deb192D,SLG5118deb96,SLG6142deb384A,SLG6142deb384B,SLG6142deb384C,SLG6142deb384D"
-        penv = "openmp32"
-      }
-    }
-  }
-  ccin2p3 {
-    singularity.enabled = true
-    singularity.cacheDir = "$baseDir/.singularity_in2p3/"
-    singularity.runOptions = "--bind /pbs,/sps,/scratch"
-    process{
-      withName: adaptor_removal {
-        container = "lbmc/cutadapt:2.1"
-        scratch = true
-        stageInMode = "copy"
-        stageOutMode = "rsync"
-        executor = "sge"
-        clusterOptions = "-P P_lbmc -l os=cl7 -l sps=1 -r n"
-        cpus = 1
-        queue = "huge"
-      }
-      withName: trimming {
-        container = "lbmc/urqt:d62c1f8"
-        scratch = true
-        stageInMode = "copy"
-        stageOutMode = "rsync"
-        executor = "sge"
-        clusterOptions = "-P P_lbmc -l os=cl7 -l sps=1 -r n\
-        "
-        cpus = 1
-        queue = "huge"
-      }
-      withName: fasta_from_bed {
-        container = "lbmc/bedtools:2.25.0"
-        scratch = true
-        stageInMode = "copy"
-        stageOutMode = "rsync"
-        executor = "sge"
-        clusterOptions = "-P P_lbmc -l os=cl7 -l sps=1 -r n"
-        cpus = 1
-        queue = "huge"
-      }
-      withName: index_fasta {
-        container = "lbmc/kallisto:0.44.0"
-        scratch = true
-        stageInMode = "copy"
-        stageOutMode = "rsync"
-        executor = "sge"
-        clusterOptions = "-P P_lbmc -l os=cl7 -l sps=1 -r n\
-        "
-        cpus = 1
-        queue = "huge"
-      }
-      withName: mapping_fastq {
-        container = "lbmc/kallisto:0.44.0"
-        scratch = true
-        stageInMode = "copy"
-        stageOutMode = "rsync"
-        executor = "sge"
-        clusterOptions = "-P P_lbmc -l os=cl7 -l sps=1 -r n\
-        "
-        cpus = 1
-        queue = "huge"
-      }
-    }
-  }
-}
diff --git a/src/solution_RNASeq.nf b/src/solution_RNASeq.nf
index 1f43fd1ff6be5c42453c07b581f722ad07feafa6..23adf0998d2c6d2dfc60697ab140b8b0ebb16ba0 100644
--- a/src/solution_RNASeq.nf
+++ b/src/solution_RNASeq.nf
@@ -1,36 +1,34 @@
 nextflow.enable.dsl=2
 
-/*
-./nextflow src/solution_RNASeq.nf --fastq "data/tiny_dataset/fastq/tiny2_R{1,2}.fastq.gz" --fasta "data/tiny_dataset/fasta/tiny_v2_10.fasta" --bed "data/tiny_dataset/annot/tiny.bed" -profile docker
-*/
+include { fastp } from "./nf_modules/fastp/main.nf"
+include { fasta_from_bed } from "./nf_modules/bedtools/main.nf"
+include { index_fasta; mapping_fastq } from './nf_modules/kallisto/main.nf' addParams(mapping_fastq_out: "quantification/")
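+
+/*
+example run on the tiny test dataset:
+./nextflow src/solution_RNASeq.nf --fastq "data/tiny_dataset/fastq/tiny2_R{1,2}.fastq.gz" --fasta "data/tiny_dataset/fasta/tiny_v2_10.fasta" --bed "data/tiny_dataset/annot/tiny.bed" -profile docker
+*/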
 
-log.info "fastq files : ${params.fastq}"
+
+params.fastq = "data/fastq/*_{1,2}.fastq"
+
+log.info "fastq files: ${params.fastq}"
 log.info "fasta file : ${params.fasta}"
 log.info "bed file : ${params.bed}"
 
-Channel
+channel
+  .fromFilePairs( params.fastq, size: -1)
+  .set { fastq_files }
+
+channel
   .fromPath( params.fasta )
   .ifEmpty { error "Cannot find any fasta files matching: ${params.fasta}" }
+  .map { it -> [it.simpleName, it]}
   .set { fasta_files }
-Channel
+channel
   .fromPath( params.bed )
   .ifEmpty { error "Cannot find any bed files matching: ${params.bed}" }
+  .map { it -> [it.simpleName, it]}
   .set { bed_files }
-Channel
-  .fromFilePairs( params.fastq )
-  .ifEmpty { error "Cannot find any fastq files matching: ${params.fastq}" }
-  .set { fastq_files }
-
-include { adaptor_removal_pairedend } from './nf_modules/cutadapt/main'
-include { trimming_pairedend } from './nf_modules/urqt/main'
-include { fasta_from_bed } from './nf_modules/bedtools/main'
-include { index_fasta; mapping_fastq_pairedend } from './nf_modules/kallisto/main'
 
 workflow {
-    adaptor_removal_pairedend(fastq_files)
-    trimming_pairedend(adaptor_removal_pairedend.out.fastq)
-    fasta_from_bed(fasta_files, bed_files)
-    index_fasta(fasta_from_bed.out.fasta)
-    mapping_fastq_pairedend(index_fasta.out.index.collect(), trimming_pairedend.out.fastq)
+  fastp(fastq_files)
+  fasta_from_bed(fasta_files, bed_files)
+  index_fasta(fasta_from_bed.out.fasta)
+  mapping_fastq(index_fasta.out.index.collect(), fastp.out.fastq)
 }
-