# SPDX-FileCopyrightText: 2022 Laurent Modolo <laurent.modolo@ens-lyon.fr>
#
# SPDX-License-Identifier: AGPL-3.0-or-later
nextflow
.nextflow.log*
.nextflow/
work/
results
workspace.code-workspace
[submodule "src/sge_modules"]
path = src/sge_modules
url = gitlab_lbmc:PSMN/modules.git
# SPDX-FileCopyrightText: 2022 Laurent Modolo <laurent.modolo@ens-lyon.fr>
#
# SPDX-License-Identifier: AGPL-3.0-or-later
[submodule "src/.docker_modules/hicstuff/3.1.3/hicstuff"]
path = src/.docker_modules/hicstuff/3.1.3/hicstuff
url = git@github.com:koszullab/hicstuff.git
Format: https://www.debian.org/doc/packaging-manuals/copyright-format/1.0/
Upstream-Name: nextflow
Upstream-Contact: Laurent Modolo <laurent.modolo@ens-lyon.fr>
Source: https://gitbio.ens-lyon.fr/LBMC/nextflow
# Sample paragraph, commented out:
#
# Files: src/*
# Copyright: $YEAR $NAME <$CONTACT>
# License: ...
<!--
SPDX-FileCopyrightText: 2022 Laurent Modolo <laurent.modolo@ens-lyon.fr>
SPDX-License-Identifier: CC-BY-SA-4.0
-->
# Changelog
All notable changes to this project will be documented in this file.
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
## [0.4.0] - 2019-11-18
### Added
- Add new tools (star,...)
- conda support at the psmn
### Changed
- configuration simplification
- docker and singularity image download instead of local build
- hidden directories in `src` for project clarity (only `nf_modules` is visible)
### Removed
- conda support at in2p3 with `-profile in2p3_conda`
## [0.3.0] - 2019-05-23
### Added
- Add new tools (umi_tools, fastp,...)
- singularity support at in2p3 with `-profile in2p3`
- conda support at in2p3 with `-profile in2p3_conda`
## [0.2.9] - 2019-03-26
### Added
- Add new tools (fastq, macs2, umitools, ...)
- singularity support
### Changed
- every tool name is now in lowercase in each module section
## [0.2.7] - 2018-10-23
### Added
- Add new tools (BWA, GATK, sambamba, ...)
### Changed
- `sge` profile is now called `psmn` profile to prepare tests in the CCIN2P3
- every `psmn` config file has an update configuration for mono or 16 cpus queues
- update process naming to follow new nextflow format
## [0.2.6] - 2018-08-23
### Added
- Added `src/training_dataset.nf` to build a small training dataset from NGS data
### Changed
- the structure of `src/nf_modules`: the `tests` folder was removed
## [0.2.5] - 2018-08-22
### Added
- This fine changelog
### Changed
- the structure of `src/nf_modules`: the `tests` folder was removed
## [0.2.4] - 2018-08-02
### Changed
- add a `paired_id` variable in the output of every single-end data process to match the paired output
## [0.2.3] - 2018-07-25
### Added
- List of tools available as nextflow, docker or sge module to the `README.md`
## [0.2.2] - 2018-07-23
### Added
- SRA module from cigogne/nextflow-master 52b510e48daa1fb7
## [0.2.1] - 2018-07-23
### Added
- List of tools available as nextflow, docker or sge module
## [0.2.0] - 2018-06-18
### Added
- `doc/TP_computational_biologists.md`
- Kallisto/0.44.0
### Changed
- add a `paired_id` variable in the output of every paired data process
- BEDtools: fixes for fasta handling
- UrQt: fix git version in Docker
## [0.1.2] - 2018-06-18
### Added
- `doc/tp_experimental_biologist.md` and Makefile to build the pdf
- tests files for BEDtools
### Changed
- Kallisto: various fixes
- UrQt: improve output and various fixes
### Removed
- `src/nf_test.config` modules have their own `.config`
## [0.1.0] - 2018-05-06
This is the first working version of the repository as a nextflow module repository
<!--
SPDX-FileCopyrightText: 2022 Laurent Modolo <laurent.modolo@ens-lyon.fr>
SPDX-License-Identifier: CC-BY-SA-4.0
-->
# Contributing
When contributing to this repository, please first discuss the change you wish to make via issues,
email, or on the [ENS-Bioinfo channel](https://matrix.to/#/#ens-bioinfo:matrix.org) before making a change.
## Forking
In git, the [action of forking](https://git-scm.com/book/en/v2/GitHub-Contributing-to-a-Project) means that you are going to make your own private copy of a repository. You can then write modifications in your project and, if they are of interest for the source repository (here [LBMC/nextflow](https://gitbio.ens-lyon.fr/LBMC/nextflow)), create a merge request. Merge requests are sent to the source repository to ask the maintainers to integrate your modifications.
![merge request button](./doc/img/merge_request.png)
## Project organization
The `LBMC/nextflow` project is structured as follows:
- all the code is in the `src/` folder
- scripts downloading external tools should download them in the `bin/` folder
- all the documentation (including this file) can be found in the `doc/` folder
- the `data` and `results` folders contain the data and results of your pipelines and are ignored by `git`
## Code structure
The `src/` folder is where we want to save the pipeline (`.nf`) scripts. This folder also contains:
- the `src/install_nextflow.sh` script to install the nextflow executable at the root of the project
- some pipeline examples (like the one built during the nf_practical)
- the `src/nextflow.config` global configuration file, which contains the `docker`, `singularity`, `psmn` and `ccin2p3` profiles
- the `src/nf_modules` folder, which contains per-tool `main.nf` modules with predefined processes that users can import into their projects with the [DSL2](https://www.nextflow.io/docs/latest/dsl2.html)
It also contains some hidden folders that users don't need to see when building their pipeline:
- the `src/.docker_modules` contains the recipes for the `docker` containers used in the `src/nf_modules/<tool_names>/main.nf` files
- the `src/.singularity_in2p3` and `src/.singularity_psmn` are symbolic links to the shared folder where the singularity images are downloaded on the PSMN and CCIN2P3
# Proposing a new tool
Each tool named `<tool_name>` must have two dedicated folders:
- [`src/nf_modules/<tool_name>`](./src/nf_modules/fastp/) where users can find `.nf` files to include
- [`src/.docker_modules/<tool_name>/<version_number>`](./src/.docker_modules/fastp/0.20.1/) where we have the [`Dockerfile`](./src/.docker_modules/fastp/0.20.1/Dockerfile) to construct the container used in the `main.nf` file
## `src/nf_modules` guidelines
We are going to take the [`fastp` `nf_module`](./src/nf_modules/fastp/) as an example.
The [`src/nf_modules/<tool_name>`](./src/nf_modules/fastp/) folder should contain a [`main.nf`](./src/nf_modules/fastp/main.nf) file that describes at least one process using `<tool_name>`.
### container information
The first two lines of [`main.nf`](./src/nf_modules/fastp/main.nf) should define two variables
```Groovy
version = "0.20.1"
container_url = "lbmc/fastp:${version}"
```
We can then use the `container_url` definition in each `process` via the `container` attribute.
In addition to the `container` directive, each `process` should have one of the following `label` attributes (defined in the `src/nextflow.config` file):
- `big_mem_mono_cpus`
- `big_mem_multi_cpus`
- `small_mem_mono_cpus`
- `small_mem_multi_cpus`
```Groovy
process fastp {
container = "${container_url}"
label = "big_mem_multi_cpus"
...
}
```
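The resources behind each label are defined in `src/nextflow.config`. As a purely illustrative sketch (the actual values in the repository may differ), a label definition could look like:
```Groovy
// Hypothetical excerpt of src/nextflow.config: resources attached to a label.
// The real cpus/memory values used by the LBMC profiles may differ.
process {
  withLabel: big_mem_multi_cpus {
    cpus = 16
    memory = "128GB"
  }
}
```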
### process options
Before each process, you should declare at least two `params.` variables:
- a `params.<process_name>` defaulting to `""` (empty string), to allow users to add more command-line options to your process without rewriting the process definition
- a `params.<process_name>_out` defaulting to `""` (empty string), which defines the `results/` subfolder where the process output should be copied if the user wants to save it
```Groovy
params.fastp = ""
params.fastp_out = ""
process fastp {
container = "${container_url}"
label "big_mem_multi_cpus"
if (params.fastp_out != "") {
publishDir "results/${params.fastp_out}", mode: 'copy'
}
...
script:
"""
fastp --thread ${task.cpus} \
${params.fastp} \
...
"""
}
```
The user can then change the value of these variables:
- from the command line: `--fastp "--trim_head1=10"`
- with the `include` command within their pipeline: `include { fastp } from "nf_modules/fastp/main" addParams(fastp_out: "QC/fastp/")`
- by defining the variable within their pipeline: `params.fastp_out = "QC/fastp/"`
### `input` and `output` format
You should always use `tuple` for input and output channel format with at least:
- a `val` containing variable(s) related to the item
- a `path` for the file(s) that you want to process
For example:
```Groovy
process fastp {
container = "${container_url}"
label "big_mem_multi_cpus"
tag "$file_id"
if (params.fastp_out != "") {
publishDir "results/${params.fastp_out}", mode: 'copy'
}
input:
tuple val(file_id), path(reads)
output:
tuple val(file_id), path("*.fastq.gz"), emit: fastq
tuple val(file_id), path("*.html"), emit: html
tuple val(file_id), path("*.json"), emit: report
...
```
Here `file_id` can be anything from a simple identifier to a list of several variables,
in which case the first item of the list should be usable as a file prefix.
Keep that in mind if you want to use it to define output file names (you can test for this with `file_id instanceof List`).
In some cases, `file_id` may be a Map, giving cleaner access to its content through explicit keywords.
If you want to use information within `file_id` to name outputs in your `script` section, you can use the following snippet:
```Groovy
script:
switch(file_id) {
case {it instanceof List}:
file_prefix = file_id[0]
break
case {it instanceof Map}:
file_prefix = file_id.values()[0]
break
default:
file_prefix = file_id
break
}
```
and use the `file_prefix` variable.
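For instance, a Map-shaped `file_id` could be produced upstream as in the following sketch (the `lane` key is purely illustrative):
```Groovy
// Sketch: building a Map-shaped file_id. The id comes first, so
// file_id.values()[0] in the snippet above still yields a usable file prefix.
channel
  .fromFilePairs( params.fastq, size: -1 )
  .map { id, reads -> [[id: id, lane: "L001"], reads] }
  .set { fastq_files }
```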
This also means that channels emitting `path` items should be transformed with at least the following map function:
```Groovy
.map { it -> [it.simpleName, it]}
```
For example:
```Groovy
channel
.fromPath( params.fasta )
.ifEmpty { error "Cannot find any fasta files matching: ${params.fasta}" }
.map { it -> [it.simpleName, it]}
.set { fasta_files }
```
The rationale behind taking a `file_id` and emitting the same `file_id` is to facilitate complex channel operations in pipelines without having to rewrite the `process` blocks.
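For example, because two processes built this way both emit `[file_id, files]` tuples, their outputs can be joined by key downstream without touching either `process` definition (`other_process` is purely illustrative):
```Groovy
// Sketch: combining two outputs that share the same file_id key
fastp.out.fastq
  .join( other_process.out.result )
  .set { fastp_and_other }
```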
### dealing with paired-end and single-end data
When opening fastq files with `channel.fromFilePairs( params.fastq )`, items in the channel have the following shape:
```Groovy
[file_id, [read_1_file, read_2_file]]
```
To make this call more generic, we can use the `size: -1` option to accept an arbitrary number of associated fastq files:
```Groovy
channel.fromFilePairs( params.fastq, size: -1 )
```
This will thus give `[file_id, [read_1_file, read_2_file]]` for paired-end data and `[file_id, [read_1_file]]` for single-end data.
## Pull Request Process
1. Ensure any install or build dependencies are removed before the end of the layer when doing a
build.
2. Update the README.md with details of changes to the interface, this includes new environment
variables, exposed ports, useful file locations and container parameters.
3. Increase the version numbers in any examples files and the README.md to the new version that this
Pull Request would represent. The versioning scheme we use is [SemVer](http://semver.org/).
4. You may merge the Pull Request in once you have the sign-off of two other developers, or if you
do not have permission to do that, you may request the second reviewer to merge it for you.
You can then use tests on `reads.size()` to define conditional `script` blocks:
```Groovy
...
script:
if (file_id instanceof List){
file_prefix = file_id[0]
} else {
file_prefix = file_id
}
if (reads.size() == 2)
"""
fastp --thread ${task.cpus} \
${params.fastp} \
--in1 ${reads[0]} \
--in2 ${reads[1]} \
--out1 ${file_prefix}_R1_trim.fastq.gz \
--out2 ${file_prefix}_R2_trim.fastq.gz \
--html ${file_prefix}.html \
--json ${file_prefix}_fastp.json \
--report_title ${file_prefix}
"""
else
"""
fastp --thread ${task.cpus} \
${params.fastp} \
--in1 ${reads[0]} \
--out1 ${file_prefix}_trim.fastq.gz \
--html ${file_prefix}.html \
--json ${file_prefix}_fastp.json \
--report_title ${file_prefix}
"""
...
```
### Complex processes
Sometimes you want to write complex processes, for example for `fastp` we want to have predefined `fastp` processes for different protocols, orders of adapter trimming, and read clipping.
We can then use the fact that `process`es and named `workflow`s can be imported interchangeably with the [DSL2](https://www.nextflow.io/docs/latest/dsl2.html#workflow-composition).
With the following example, the user can simply include the `fastp` step without knowing that it's a named `workflow` instead of a `process`.
By specifying `params.fastp_protocol`, the `fastp` step will transparently switch between the different `fastp` `process`es.
Here the protocols are `fastp_default` and `fastp_accel_1splus`; other protocols can be added later, and pipelines will handle them by simply updating from the `upstream` repository, without changing their code.
```Groovy
params.fastp_protocol = ""
workflow fastp {
take:
fastq
main:
switch(params.fastp_protocol) {
case "accel_1splus":
fastp_accel_1splus(fastq)
fastp_accel_1splus.out.fastq.set{res_fastq}
fastp_accel_1splus.out.report.set{res_report}
break;
default:
fastp_default(fastq)
fastp_default.out.fastq.set{res_fastq}
fastp_default.out.report.set{res_report}
break;
}
emit:
fastq = res_fastq
report = res_report
}
```
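A pipeline using this module could then look like the following sketch (the include path and the `params.fastq` pattern are assumptions):
```Groovy
// Hypothetical pipeline using the fastp named workflow defined above.
// Switching protocols only requires e.g. --fastp_protocol "accel_1splus"
include { fastp } from "./nf_modules/fastp/main.nf"

workflow {
  channel
    .fromFilePairs( params.fastq, size: -1 )
    .set { fastq_files }
  fastp(fastq_files)
}
```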
## `src/.docker_modules` guidelines
We are going to take the [`fastp` `.docker_modules`](./src/.docker_modules/fastp/0.20.1/) as an example.
The [`src/.docker_modules/<tool_name>/<version_number>`](./src/.docker_modules/fastp/0.20.1/) folder should contain a [`Dockerfile`](./src/.docker_modules/fastp/0.20.1/Dockerfile) and a [`docker_init.sh`](./src/.docker_modules/fastp/0.20.1/docker_init.sh).
### `Dockerfile`
The [`Dockerfile`](./src/.docker_modules/fastp/0.20.1/Dockerfile) should contain a `docker` recipe to build an image with `<tool_name>` installed in a system-wide binary folder (`/bin`, `/usr/local/bin/`, etc.).
This way, the tool and its scripts are easily accessible from within the container.
This recipe should have:
- an easily changeable `<version_number>` to be able to update the corresponding image to a newer version of the tool
- the `ps` executable (package `procps` in debian)
- a default `bash` command (`CMD ["bash"]`)
### `docker_init.sh`
The [`docker_init.sh`](./src/.docker_modules/fastp/0.20.1/docker_init.sh) script is a small sh script with the following content:
```sh
#!/bin/sh
docker pull lbmc/fastp:0.20.1
docker build src/.docker_modules/fastp/0.20.1 -t 'lbmc/fastp:0.20.1'
docker push lbmc/fastp:0.20.1
```
We want to be able to execute `src/.docker_modules/fastp/0.20.1/docker_init.sh` from the root of the project to:
- try to download the corresponding container if it exists on the [Docker Hub](https://hub.docker.com/repository/docker/lbmc/)
- if it does not exist, build the container from the corresponding [`Dockerfile`](./src/.docker_modules/fastp/0.20.1/Dockerfile), with the same name as the one we would get from the `docker pull` command
- push the container on the [Docker Hub](https://hub.docker.com/repository/docker/lbmc/) (only [laurent.modolo@ens-lyon.fr](mailto:laurent.modolo@ens-lyon.fr) can do this step for the group **lbmc**)
Creative Commons Attribution-ShareAlike 4.0 International
Creative Commons Corporation (“Creative Commons”) is not a law firm and does not provide legal services or legal advice. Distribution of Creative Commons public licenses does not create a lawyer-client or other relationship. Creative Commons makes its licenses and related information available on an “as-is” basis. Creative Commons gives no warranties regarding its licenses, any material licensed under their terms and conditions, or any related information. Creative Commons disclaims all liability for damages resulting from their use to the fullest extent possible.
Using Creative Commons Public Licenses
Creative Commons public licenses provide a standard set of terms and conditions that creators and other rights holders may use to share original works of authorship and other material subject to copyright and certain other rights specified in the public license below. The following considerations are for informational purposes only, are not exhaustive, and do not form part of our licenses.
Considerations for licensors: Our public licenses are intended for use by those authorized to give the public permission to use material in ways otherwise restricted by copyright and certain other rights. Our licenses are irrevocable. Licensors should read and understand the terms and conditions of the license they choose before applying it. Licensors should also secure all rights necessary before applying our licenses so that the public can reuse the material as expected. Licensors should clearly mark any material not subject to the license. This includes other CC-licensed material, or material used under an exception or limitation to copyright. More considerations for licensors.
Considerations for the public: By using one of our public licenses, a licensor grants the public permission to use the licensed material under specified terms and conditions. If the licensor’s permission is not necessary for any reason–for example, because of any applicable exception or limitation to copyright–then that use is not regulated by the license. Our licenses grant only permissions under copyright and certain other rights that a licensor has authority to grant. Use of the licensed material may still be restricted for other reasons, including because others have copyright or other rights in the material. A licensor may make special requests, such as asking that all changes be marked or described.
Although not required by our licenses, you are encouraged to respect those requests where reasonable. More considerations for the public.
Creative Commons Attribution-ShareAlike 4.0 International Public License
By exercising the Licensed Rights (defined below), You accept and agree to be bound by the terms and conditions of this Creative Commons Attribution-ShareAlike 4.0 International Public License ("Public License"). To the extent this Public License may be interpreted as a contract, You are granted the Licensed Rights in consideration of Your acceptance of these terms and conditions, and the Licensor grants You such rights in consideration of benefits the Licensor receives from making the Licensed Material available under these terms and conditions.
Section 1 – Definitions.
a. Adapted Material means material subject to Copyright and Similar Rights that is derived from or based upon the Licensed Material and in which the Licensed Material is translated, altered, arranged, transformed, or otherwise modified in a manner requiring permission under the Copyright and Similar Rights held by the Licensor. For purposes of this Public License, where the Licensed Material is a musical work, performance, or sound recording, Adapted Material is always produced where the Licensed Material is synched in timed relation with a moving image.
b. Adapter's License means the license You apply to Your Copyright and Similar Rights in Your contributions to Adapted Material in accordance with the terms and conditions of this Public License.
c. BY-SA Compatible License means a license listed at creativecommons.org/compatiblelicenses, approved by Creative Commons as essentially the equivalent of this Public License.
d. Copyright and Similar Rights means copyright and/or similar rights closely related to copyright including, without limitation, performance, broadcast, sound recording, and Sui Generis Database Rights, without regard to how the rights are labeled or categorized. For purposes of this Public License, the rights specified in Section 2(b)(1)-(2) are not Copyright and Similar Rights.
e. Effective Technological Measures means those measures that, in the absence of proper authority, may not be circumvented under laws fulfilling obligations under Article 11 of the WIPO Copyright Treaty adopted on December 20, 1996, and/or similar international agreements.
f. Exceptions and Limitations means fair use, fair dealing, and/or any other exception or limitation to Copyright and Similar Rights that applies to Your use of the Licensed Material.
g. License Elements means the license attributes listed in the name of a Creative Commons Public License. The License Elements of this Public License are Attribution and ShareAlike.
h. Licensed Material means the artistic or literary work, database, or other material to which the Licensor applied this Public License.
i. Licensed Rights means the rights granted to You subject to the terms and conditions of this Public License, which are limited to all Copyright and Similar Rights that apply to Your use of the Licensed Material and that the Licensor has authority to license.
j. Licensor means the individual(s) or entity(ies) granting rights under this Public License.
k. Share means to provide material to the public by any means or process that requires permission under the Licensed Rights, such as reproduction, public display, public performance, distribution, dissemination, communication, or importation, and to make material available to the public including in ways that members of the public may access the material from a place and at a time individually chosen by them.
l. Sui Generis Database Rights means rights other than copyright resulting from Directive 96/9/EC of the European Parliament and of the Council of 11 March 1996 on the legal protection of databases, as amended and/or succeeded, as well as other essentially equivalent rights anywhere in the world.
m. You means the individual or entity exercising the Licensed Rights under this Public License. Your has a corresponding meaning.
Section 2 – Scope.
a. License grant.
1. Subject to the terms and conditions of this Public License, the Licensor hereby grants You a worldwide, royalty-free, non-sublicensable, non-exclusive, irrevocable license to exercise the Licensed Rights in the Licensed Material to:
A. reproduce and Share the Licensed Material, in whole or in part; and
B. produce, reproduce, and Share Adapted Material.
2. Exceptions and Limitations. For the avoidance of doubt, where Exceptions and Limitations apply to Your use, this Public License does not apply, and You do not need to comply with its terms and conditions.
3. Term. The term of this Public License is specified in Section 6(a).
4. Media and formats; technical modifications allowed. The Licensor authorizes You to exercise the Licensed Rights in all media and formats whether now known or hereafter created, and to make technical modifications necessary to do so. The Licensor waives and/or agrees not to assert any right or authority to forbid You from making technical modifications necessary to exercise the Licensed Rights, including technical modifications necessary to circumvent Effective Technological Measures. For purposes of this Public License, simply making modifications authorized by this Section 2(a)(4) never produces Adapted Material.
5. Downstream recipients.
A. Offer from the Licensor – Licensed Material. Every recipient of the Licensed Material automatically receives an offer from the Licensor to exercise the Licensed Rights under the terms and conditions of this Public License.
B. Additional offer from the Licensor – Adapted Material. Every recipient of Adapted Material from You automatically receives an offer from the Licensor to exercise the Licensed Rights in the Adapted Material under the conditions of the Adapter’s License You apply.
C. No downstream restrictions. You may not offer or impose any additional or different terms or conditions on, or apply any Effective Technological Measures to, the Licensed Material if doing so restricts exercise of the Licensed Rights by any recipient of the Licensed Material.
6. No endorsement. Nothing in this Public License constitutes or may be construed as permission to assert or imply that You are, or that Your use of the Licensed Material is, connected with, or sponsored, endorsed, or granted official status by, the Licensor or others designated to receive attribution as provided in Section 3(a)(1)(A)(i).
b. Other rights.
1. Moral rights, such as the right of integrity, are not licensed under this Public License, nor are publicity, privacy, and/or other similar personality rights; however, to the extent possible, the Licensor waives and/or agrees not to assert any such rights held by the Licensor to the limited extent necessary to allow You to exercise the Licensed Rights, but not otherwise.
2. Patent and trademark rights are not licensed under this Public License.
3. To the extent possible, the Licensor waives any right to collect royalties from You for the exercise of the Licensed Rights, whether directly or through a collecting society under any voluntary or waivable statutory or compulsory licensing scheme. In all other cases the Licensor expressly reserves any right to collect such royalties.
Section 3 – License Conditions.
Your exercise of the Licensed Rights is expressly made subject to the following conditions.
a. Attribution.
1. If You Share the Licensed Material (including in modified form), You must:
A. retain the following if it is supplied by the Licensor with the Licensed Material:
i. identification of the creator(s) of the Licensed Material and any others designated to receive attribution, in any reasonable manner requested by the Licensor (including by pseudonym if designated);
ii. a copyright notice;
iii. a notice that refers to this Public License;
iv. a notice that refers to the disclaimer of warranties;
v. a URI or hyperlink to the Licensed Material to the extent reasonably practicable;
B. indicate if You modified the Licensed Material and retain an indication of any previous modifications; and
C. indicate the Licensed Material is licensed under this Public License, and include the text of, or the URI or hyperlink to, this Public License.
2. You may satisfy the conditions in Section 3(a)(1) in any reasonable manner based on the medium, means, and context in which You Share the Licensed Material. For example, it may be reasonable to satisfy the conditions by providing a URI or hyperlink to a resource that includes the required information.
3. If requested by the Licensor, You must remove any of the information required by Section 3(a)(1)(A) to the extent reasonably practicable.
b. ShareAlike.In addition to the conditions in Section 3(a), if You Share Adapted Material You produce, the following conditions also apply.
1. The Adapter’s License You apply must be a Creative Commons license with the same License Elements, this version or later, or a BY-SA Compatible License.
2. You must include the text of, or the URI or hyperlink to, the Adapter's License You apply. You may satisfy this condition in any reasonable manner based on the medium, means, and context in which You Share Adapted Material.
3. You may not offer or impose any additional or different terms or conditions on, or apply any Effective Technological Measures to, Adapted Material that restrict exercise of the rights granted under the Adapter's License You apply.
Section 4 – Sui Generis Database Rights.
Where the Licensed Rights include Sui Generis Database Rights that apply to Your use of the Licensed Material:
a. for the avoidance of doubt, Section 2(a)(1) grants You the right to extract, reuse, reproduce, and Share all or a substantial portion of the contents of the database;
b. if You include all or a substantial portion of the database contents in a database in which You have Sui Generis Database Rights, then the database in which You have Sui Generis Database Rights (but not its individual contents) is Adapted Material, including for purposes of Section 3(b); and
c. You must comply with the conditions in Section 3(a) if You Share all or a substantial portion of the contents of the database.
For the avoidance of doubt, this Section 4 supplements and does not replace Your obligations under this Public License where the Licensed Rights include other Copyright and Similar Rights.
Section 5 – Disclaimer of Warranties and Limitation of Liability.
a. Unless otherwise separately undertaken by the Licensor, to the extent possible, the Licensor offers the Licensed Material as-is and as-available, and makes no representations or warranties of any kind concerning the Licensed Material, whether express, implied, statutory, or other. This includes, without limitation, warranties of title, merchantability, fitness for a particular purpose, non-infringement, absence of latent or other defects, accuracy, or the presence or absence of errors, whether or not known or discoverable. Where disclaimers of warranties are not allowed in full or in part, this disclaimer may not apply to You.
b. To the extent possible, in no event will the Licensor be liable to You on any legal theory (including, without limitation, negligence) or otherwise for any direct, special, indirect, incidental, consequential, punitive, exemplary, or other losses, costs, expenses, or damages arising out of this Public License or use of the Licensed Material, even if the Licensor has been advised of the possibility of such losses, costs, expenses, or damages. Where a limitation of liability is not allowed in full or in part, this limitation may not apply to You.
c. The disclaimer of warranties and limitation of liability provided above shall be interpreted in a manner that, to the extent possible, most closely approximates an absolute disclaimer and waiver of all liability.
Section 6 – Term and Termination.
a. This Public License applies for the term of the Copyright and Similar Rights licensed here. However, if You fail to comply with this Public License, then Your rights under this Public License terminate automatically.
b. Where Your right to use the Licensed Material has terminated under Section 6(a), it reinstates:
1. automatically as of the date the violation is cured, provided it is cured within 30 days of Your discovery of the violation; or
2. upon express reinstatement by the Licensor.
c. For the avoidance of doubt, this Section 6(b) does not affect any right the Licensor may have to seek remedies for Your violations of this Public License.
d. For the avoidance of doubt, the Licensor may also offer the Licensed Material under separate terms or conditions or stop distributing the Licensed Material at any time; however, doing so will not terminate this Public License.
e. Sections 1, 5, 6, 7, and 8 survive termination of this Public License.
Section 7 – Other Terms and Conditions.
a. The Licensor shall not be bound by any additional or different terms or conditions communicated by You unless expressly agreed.
b. Any arrangements, understandings, or agreements regarding the Licensed Material not stated herein are separate from and independent of the terms and conditions of this Public License.
Section 8 – Interpretation.
a. For the avoidance of doubt, this Public License does not, and shall not be interpreted to, reduce, limit, restrict, or impose conditions on any use of the Licensed Material that could lawfully be made without permission under this Public License.
b. To the extent possible, if any provision of this Public License is deemed unenforceable, it shall be automatically reformed to the minimum extent necessary to make it enforceable. If the provision cannot be reformed, it shall be severed from this Public License without affecting the enforceability of the remaining terms and conditions.
c. No term or condition of this Public License will be waived and no failure to comply consented to unless expressly agreed to by the Licensor.
d. Nothing in this Public License constitutes or may be interpreted as a limitation upon, or waiver of, any privileges and immunities that apply to the Licensor or You, including from the legal processes of any jurisdiction or authority.
Creative Commons is not a party to its public licenses. Notwithstanding, Creative Commons may elect to apply one of its public licenses to material it publishes and in those instances will be considered the “Licensor.” Except for the limited purpose of indicating that material is shared under a Creative Commons public license or as otherwise permitted by the Creative Commons policies published at creativecommons.org/policies, Creative Commons does not authorize the use of the trademark “Creative Commons” or any other trademark or logo of Creative Commons without its prior written consent including, without limitation, in connection with any unauthorized modifications to any of its public licenses or any other arrangements, understandings, or agreements concerning use of licensed material. For the avoidance of doubt, this paragraph does not form part of the public licenses.
Creative Commons may be contacted at creativecommons.org.
<!--
SPDX-FileCopyrightText: 2022 Laurent Modolo <laurent.modolo@ens-lyon.fr>
SPDX-License-Identifier: CC-BY-SA-4.0
-->
# nextflow pipeline
This repository is a template and a library repository to help you build nextflow pipelines.
You can fork this repository to build your own pipeline.
## Getting the last updates
To get the last commits from this repository into your fork, use the following commands.
For the first time:
```sh
git remote add upstream git@gitbio.ens-lyon.fr:LBMC/nextflow.git
git pull upstream master
```
Then to make an update:
```sh
git pull upstream master
git merge upstream/master
```
## Getting Started
These instructions will get you a copy of the project up and running on your local machine for development and testing purposes, or as a template when you want to build your own pipeline ([you can follow them here](doc/getting_started.md)).
### Prerequisites
To run nextflow on your computer you need to have java (>= 1.8) installed:
```sh
java --version
```
To easily test the tools already implemented for nextflow on your computer (see `src/nf_modules/` for their list), you need to have docker installed:
```sh
docker run hello-world
```
### Installing
To install nextflow on your computer, simply run the following command:
```sh
src/install_nextflow.sh
```
Then to initialise a given tool, run the following command:
```sh
src/docker_modules/<tool_name>/<tool_version>/docker_init.sh
```
For example, to initialise `file_handle` version `0.1.1`, run:
```sh
src/docker_modules/file_handle/0.1.1/docker_init.sh
```
To initialise all the tools:
```sh
find src/docker_modules/ -name "docker_init.sh" | awk '{system($0)}'
```
## Running the tests
To run the tests, we first need to get a training set:
```sh
cd data
git clone -c http.sslVerify=false https://gitlab.biologie.ens-lyon.fr/LBMC/tiny_dataset.git
cp tiny_dataset/fastq/tiny_R1.fastq tiny_dataset/fastq/tiny2_R1.fastq
cp tiny_dataset/fastq/tiny_R2.fastq tiny_dataset/fastq/tiny2_R2.fastq
cp tiny_dataset/fastq/tiny_S.fastq tiny_dataset/fastq/tiny2_S.fastq
cd ..
```
Then to run the tests for a given tool, run the following command:
```sh
src/nf_modules/<tool_name>/<tool_version>/tests/tests.sh
```
For example, to run the tests on `Bowtie2`, run:
```sh
src/nf_modules/Bowtie2/tests/tests.sh
```
## Building your pipeline
You can follow the [building your pipeline guide](./doc/building_your_pipeline.md) for a gentle introduction to `nextflow` and to taking advantage of this template to build your pipelines.
## Existing Nextflow pipeline
Before starting a new project, you can check whether someone else has already done the work!
- [on the nextflow project page](./doc/nf_projects.md)
- [on the nf-core project](https://nf-co.re/pipelines)
## Contributing
Please read [CONTRIBUTING.md](CONTRIBUTING.md) for details on our code of conduct, and the process for submitting pull requests to us.
If you want to add more tools to this project, please read the [CONTRIBUTING.md](CONTRIBUTING.md).
## Versioning
We use [SemVer](http://semver.org/) for versioning. For the versions available, see the [tags on this repository](https://gitlab.biologie.ens-lyon.fr/pipelines/nextflow/tags).
## Authors
* **Laurent Modolo** - *Initial work*
See also the list of [contributors](https://gitbio.ens-lyon.fr/pipelines/nextflow/graphs/master) who participated in this project.
## License
This project is licensed under the CeCiLL License - see the [LICENSE](LICENSE) file for details.
# SPDX-FileCopyrightText: 2022 Laurent Modolo <laurent.modolo@ens-lyon.fr>
#
# SPDX-License-Identifier: AGPL-3.0-or-later
*
# SPDX-FileCopyrightText: 2022 Laurent Modolo <laurent.modolo@ens-lyon.fr>
#
# SPDX-License-Identifier: AGPL-3.0-or-later
*
# SPDX-FileCopyrightText: 2022 Laurent Modolo <laurent.modolo@ens-lyon.fr>
#
# SPDX-License-Identifier: AGPL-3.0-or-later
*.pdf
all: TP_experimental_biologists.pdf TP_computational_biologists.pdf
TP_experimental_biologists.pdf: TP_experimental_biologists.md
R -e 'require(rmarkdown); rmarkdown::render("TP_experimental_biologists.md")'
TP_computational_biologists.pdf: TP_computational_biologists.md
R -e 'require(rmarkdown); rmarkdown::render("TP_computational_biologists.md")'
---
title: "TP for computational biologists"
author: Laurent Modolo [laurent.modolo@ens-lyon.fr](mailto:laurent.modolo@ens-lyon.fr)
date: 20 Jun 2018
output:
  pdf_document:
    toc: true
    toc_depth: 3
    number_sections: true
    highlight: tango
    latex_engine: xelatex
---
The goal of this practical is to learn how to *wrap* tools in [Docker](https://www.docker.com/what-docker) or [Environment Module](http://www.ens-lyon.fr/PSMN/doku.php?id=documentation:tools:modules) to make them available to nextflow on a personal computer or at the [PSMN](http://www.ens-lyon.fr/PSMN/doku.php).
Here we assume that you followed the [TP for experimental biologists](./TP_experimental_biologists.md), and that you know the basics of [Docker containers](https://www.docker.com/what-container) and [Environment Module](http://www.ens-lyon.fr/PSMN/doku.php?id=documentation:tools:modules). We are also going to assume that you know how to build and use a nextflow pipeline from the template [pipelines/nextflow](https://gitlab.biologie.ens-lyon.fr/pipelines/nextflow).
For the practical you can either work with the WebIDE of Gitlab, or locally as described in the [git: basis formation](https://gitlab.biologie.ens-lyon.fr/formations/git_basis).
# Docker
To run a tool within a [Docker container](https://www.docker.com/what-container) you need to write a `Dockerfile`.
[`Dockerfile`](./src/docker_modules/Kallisto/0.44.0/Dockerfile) files are found in the [pipelines/nextflow](https://gitlab.biologie.ens-lyon.fr/pipelines/nextflow) project under `src/docker_modules/`. Each [`Dockerfile`](./src/docker_modules/Kallisto/0.44.0/Dockerfile) is paired with a [`docker_init.sh`](./src/docker_modules/Kallisto/0.44.0/docker_init.sh) file, as in the following example for `Kallisto` version `0.43.1`:
```sh
$ ls -l src/docker_modules/Kallisto/0.43.1/
total 16K
drwxr-xr-x 2 laurent users 4.0K Jun 5 19:06 ./
drwxr-xr-x 3 laurent users 4.0K Jun 6 09:49 ../
-rw-r--r-- 1 laurent users 587 Jun 5 19:06 Dockerfile
-rwxr-xr-x 1 laurent users 79 Jun 5 19:06 docker_init.sh*
```
## [`docker_init.sh`](./src/docker_modules/Kallisto/0.44.0/docker_init.sh)
The [`docker_init.sh`](./src/docker_modules/Kallisto/0.44.0/docker_init.sh) is a simple sh script with executable rights (`chmod +x`). By executing this script, the user creates a [Docker container](https://www.docker.com/what-container) with a specific version of the tool installed. You can check the [`docker_init.sh`](./src/docker_modules/Kallisto/0.44.0/docker_init.sh) file of any implemented tool as a template.
Remember that the name of the [container](https://www.docker.com/what-container) must be in lower case and in the format `<tool_name>:<version>`.
For tools without a version number you can use a commit hash instead.
## [`Dockerfile`](./src/docker_modules/Kallisto/0.44.0/Dockerfile)
The recipe to wrap your tool in a [Docker container](https://www.docker.com/what-container) is written in a [`Dockerfile`](./src/docker_modules/Kallisto/0.44.0/Dockerfile) file.
For `Kallisto` version `0.44.0`, the header of the `Dockerfile` is:
```Docker
FROM ubuntu:18.04
MAINTAINER Laurent Modolo
ENV KALLISTO_VERSION=0.44.0
```
The `FROM` instruction means that the [container](https://www.docker.com/what-container) is initialized from a bare installation of Ubuntu 18.04. You can check the versions of Ubuntu available [here](https://hub.docker.com/_/ubuntu/), or other operating systems like [debian](https://hub.docker.com/_/debian/) or [worse](https://hub.docker.com/r/microsoft/windowsservercore/).
Then we declare the *maintainer* of the container, before declaring an environment variable for the container named `KALLISTO_VERSION`, which contains the version of the wrapped tool. This bash variable will be declared for the root user within the [container](https://www.docker.com/what-container).
You should always declare a variable `TOOLSNAME_VERSION` that contains the version number or commit number of the tool you wrap. In simple cases you just have to modify this line to create a new `Dockerfile` for another version of the tool.
The following lines of the [`Dockerfile`](./src/docker_modules/Kallisto/0.44.0/Dockerfile) are a succession of `bash` commands executed as the **root** user within the container.
Each `RUN` block is run sequentially by `Docker`. If there is an error or a modification in a `RUN` block, only this block and the following `RUN` blocks will be re-executed.
You can learn more about the building of Docker containers [here](https://docs.docker.com/engine/reference/builder/#usage).
When you build your [`Dockerfile`](./src/docker_modules/Kallisto/0.44.0/Dockerfile), instead of launching the [`docker_init.sh`](./src/docker_modules/Kallisto/0.44.0/docker_init.sh) script many times to test your [container](https://www.docker.com/what-container), you can connect to a base container in interactive mode and test your commands there:
```sh
docker run -it ubuntu:18.04 bash
KALLISTO_VERSION=0.44.0
```
# SGE / [PSMN](http://www.ens-lyon.fr/PSMN/doku.php)
To easily run tools on the [PSMN](http://www.ens-lyon.fr/PSMN/doku.php), you need to build your own [Environment Module](http://www.ens-lyon.fr/PSMN/doku.php?id=documentation:tools:modules).
You can read the Contributing guide for the [PSMN/modules](https://gitlab.biologie.ens-lyon.fr/PSMN/modules) project [here](https://gitlab.biologie.ens-lyon.fr/PSMN/modules/blob/master/CONTRIBUTING.md).
# Nextflow
The last step to wrap your tool is to make it available in nextflow. For this you need to create at least 4 files, like the following for Kallisto version `0.44.0`:
```sh
ls -lR src/nf_modules/Kallisto
src/nf_modules/Kallisto/:
total 12
-rw-r--r-- 1 laurent users 866 Jun 18 17:13 kallisto.config
-rw-r--r-- 1 laurent users 2711 Jun 18 17:13 kallisto.nf
drwxr-xr-x 2 laurent users 4096 Jun 18 17:14 tests/
src/nf_modules/Kallisto/tests:
total 16
-rw-r--r-- 1 laurent users 551 Jun 18 17:14 index.nf
-rw-r--r-- 1 laurent users 901 Jun 18 17:14 mapping_paired.nf
-rw-r--r-- 1 laurent users 1037 Jun 18 17:14 mapping_single.nf
-rwxr-xr-x 1 laurent users 627 Jun 18 17:14 tests.sh*
```
The [`kallisto.config`](./src/nf_modules/Kallisto/kallisto.config) file contains instructions for two profiles: `sge` and `docker`.
The [`kallisto.nf`](./src/nf_modules/Kallisto/kallisto.nf) file contains nextflow processes to use `Kallisto`.
The [`tests/tests.sh`](./src/nf_modules/Kallisto/tests/tests.sh) script (with executable rights) contains a series of nextflow calls on the other `.nf` files of the [`tests/`](./src/nf_modules/Kallisto/tests/) folder. Those tests correspond to the execution of the processes present in the [`kallisto.nf`](./src/nf_modules/Kallisto/kallisto.nf) file on the [LBMC/tiny_dataset](https://gitlab.biologie.ens-lyon.fr/LBMC/tiny_dataset) dataset with the `docker` profile. You can read the *Running the tests* section of the [README.md](https://gitlab.biologie.ens-lyon.fr/pipelines/nextflow/blob/master/README.md).
## [`kallisto.config`](./src/nf_modules/Kallisto/kallisto.config)
The `.config` file defines the configuration to apply to your process, conditionally on the value of the `-profile` option. You must define a configuration for at least the `sge` and `docker` profiles.
```Groovy
profiles {
  docker {
    docker.temp = 'auto'
    docker.enabled = true
    process {
    }
  }
  sge {
    process {
    }
  }
}
```
### `docker` profile
The `docker` profile starts by enabling docker for the whole pipeline. After that you only have to define the container name for each process.
For example, for `Kallisto` version `0.44.0`, we have:
```Groovy
process {
$index_fasta {
container = "kallisto:0.44.0"
}
$mapping_fastq {
container = "kallisto:0.44.0"
}
}
```
### `sge` profile
The `sge` profile defines, for each process, all the information necessary to launch it on a given queue with SGE at the [PSMN](http://www.ens-lyon.fr/PSMN/doku.php).
For example, for `Kallisto`, we have:
```Groovy
process{
$index_fasta {
beforeScript = "module purge; module load Kallisto/0.44.0"
executor = "sge"
cpus = 1
memory = "5GB"
time = "6h"
queueSize = 1000
pollInterval = '60sec'
queue = 'h6-E5-2667v4deb128'
penv = 'openmp8'
}
$mapping_fastq {
beforeScript = "module purge; module load Kallisto/0.44.0"
executor = "sge"
cpus = 4
memory = "5GB"
time = "6h"
queueSize = 1000
pollInterval = '60sec'
queue = 'h6-E5-2667v4deb128'
penv = 'openmp8'
}
}
```
The `beforeScript` directive is executed before the main script of the corresponding process.
## [`kallisto.nf`](./src/nf_modules/Kallisto/kallisto.nf)
The [`kallisto.nf`](./src/nf_modules/Kallisto/kallisto.nf) file contains examples of nextflow processes that execute Kallisto.
- Each example must be usable as it is to be incorporated in a nextflow pipeline.
- You need to define default values for the parameters passed to the process.
- Input and output must be clearly defined.
- Your process should be usable as a starting process or a process retrieving the output of another process.
For more information on processes and channels, you can check the [nextflow documentation](https://www.nextflow.io/docs/latest/index.html).
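Putting these guidelines together, a minimal indexing process could look like the following sketch (illustrative only, with pre-DSL2 syntax as in the rest of this practical; the actual [`kallisto.nf`](./src/nf_modules/Kallisto/kallisto.nf) file remains the reference):
```Groovy
// Illustrative sketch: default parameter value, explicit input/output,
// and a starting channel, so the process is usable as-is in a pipeline.
params.fasta = "data/*.fasta"

Channel
  .fromPath( params.fasta )
  .ifEmpty { error "Cannot find any fasta files matching: ${params.fasta}" }
  .set { fasta_file }

process index_fasta {
  input:
  file fasta from fasta_file
  output:
  file "*.index" into index_files
  script:
  """
  kallisto index -k 31 -i ${fasta.baseName}.index ${fasta}
  """
}
```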
## Making your wrapper available to the LBMC
To make your module available to the LBMC you must have a `tests.sh` script and one or many `docker_init.sh` scripts working without errors.
All the processes in your `.nf` must be covered by the tests.
After pushing your modifications to your forked repository, you can make a Merge Request to the [pipelines/nextflow](https://gitlab.biologie.ens-lyon.fr/pipelines/nextflow) **dev** branch, where it will be tested and integrated into the **master** branch.
You can read more on this process [here](https://guides.github.com/introduction/flow/)
---
title: "TP for experimental biologists"
author: Laurent Modolo [laurent.modolo@ens-lyon.fr](mailto:laurent.modolo@ens-lyon.fr)
date: 6 Jun 2018
output:
  pdf_document:
    toc: true
    toc_depth: 3
    number_sections: true
    highlight: tango
    latex_engine: xelatex
---
The goal of this practical is to learn how to build your own pipeline with nextflow, using the tools already *wrapped*.
For this we are going to build a small RNASeq analysis pipeline that should run the following steps:
- remove Illumina adaptors
- trim reads by quality
- build the index of a reference genome
- estimate the amount of RNA fragments mapping to the transcripts of this genome
# Initialize your own project
You are going to build a pipeline for yourself or your team. So the first step is to create your own project.
## Forking
Instead of reinventing the wheel, you can use the [pipelines/nextflow](https://gitlab.biologie.ens-lyon.fr/pipelines/nextflow) as a template.
To easily do so, go to the [pipelines/nextflow](https://gitlab.biologie.ens-lyon.fr/pipelines/nextflow) repository and click on the [**fork**](https://gitlab.biologie.ens-lyon.fr/pipelines/nextflow/forks/new) button.
![fork button](img/fork.png)
In git, the [action of forking](https://git-scm.com/book/en/v2/GitHub-Contributing-to-a-Project) means that you are going to make your own private copy of a repository. You can then write modifications in your project and, if they are of interest for the source repository (here [pipelines/nextflow](https://gitlab.biologie.ens-lyon.fr/pipelines/nextflow)), create a merge request. Merge requests are sent to the source repository to ask the maintainers to integrate modifications.
![merge request button](img/merge_request.png)
## Project organisation
This project (and yours) follows the [guide of good practices for the LBMC](http://www.ens-lyon.fr/LBMC/intranet/services-communs/pole-bioinformatique/ressources/good_practice_LBMC)
You are now on the main page of your fork of the [pipelines/nextflow](https://gitlab.biologie.ens-lyon.fr/pipelines/nextflow). You can explore this project, all the code in it is under the CeCILL licence (in the [LICENCE](https://gitlab.biologie.ens-lyon.fr/pipelines/nextflow/blob/master/LICENSE) file).
The [README.md](https://gitlab.biologie.ens-lyon.fr/pipelines/nextflow/blob/master/README.md) file contains instructions to run your pipeline and test its installation.
The [CONTRIBUTING.md](https://gitlab.biologie.ens-lyon.fr/pipelines/nextflow/blob/master/CONTRIBUTING.md) file contains guidelines to follow if you want to contribute to the [pipelines/nextflow](https://gitlab.biologie.ens-lyon.fr/pipelines/nextflow) (making a merge request for example).
The [data](https://gitlab.biologie.ens-lyon.fr/pipelines/nextflow/tree/master/data) folder will be the place where you store the raw data for your analysis.
The [results](https://gitlab.biologie.ens-lyon.fr/pipelines/nextflow/tree/master/results) folder will be the place where you store the results of your analysis.
Note that the content of these two folders should never be saved on git.
The [doc](https://gitlab.biologie.ens-lyon.fr/pipelines/nextflow/tree/master/doc) folder contains the documentation of this practical course.
And most interestingly for you, the [src](https://gitlab.biologie.ens-lyon.fr/pipelines/nextflow/tree/master/src) folder contains the code to wrap tools. It contains three subdirectories: a `docker_modules`, a `nf_modules` and a `sge_modules` folder.
### `docker_modules`
The `src/docker_modules` folder contains the code to wrap tools in [Docker](https://www.docker.com/what-docker). [Docker](https://www.docker.com/what-docker) is a framework that allows you to execute software within [containers](https://www.docker.com/what-container). The `docker_modules` folder contains directories corresponding to tools, with subdirectories corresponding to their versions.
```sh
ls -l src/docker_modules/
rwxr-xr-x 3 laurent _lpoperator 96 May 25 15:42 BEDtools/
drwxr-xr-x 4 laurent _lpoperator 128 Jun 5 16:14 Bowtie2/
drwxr-xr-x 3 laurent _lpoperator 96 May 25 15:42 FastQC/
drwxr-xr-x 4 laurent _lpoperator 128 Jun 5 16:14 HTSeq/
```
To each `tool/version` correspond two files:
```sh
ls -l src/docker_modules/Bowtie2/2.3.4.1/
-rw-r--r-- 1 laurent _lpoperator 283 Jun 5 15:07 Dockerfile
-rwxr-xr-x 1 laurent _lpoperator 79 Jun 5 16:18 docker_init.sh*
```
The `Dockerfile` is the [Docker](https://www.docker.com/what-docker) recipe to create a [container](https://www.docker.com/what-container) containing `Bowtie2` version `2.3.4.1`. The `docker_init.sh` file is a small script to create the [container](https://www.docker.com/what-container) from this recipe.
By running this script you will be able to easily install tools in different versions on your personal computer and use them in your pipeline. Some of the advantages are:
- Whatever the computer, the installation and the results will be the same
- You can keep [containers](https://www.docker.com/what-container) for old versions of tools and run them on new systems (science = reproducibility)
- You don’t have to bother with tedious installation procedures, somebody else already did the job and wrote a `Dockerfile`.
- You can easily keep [containers](https://www.docker.com/what-container) for different versions of the same tool.
### `sge_modules`
The `src/sge_modules` folder is not really part of this repository: it’s a git submodule pointing to the project [PSMN/modules](https://gitlab.biologie.ens-lyon.fr/PSMN/modules). To populate it locally you can use the following commands:
```sh
git submodule init
git submodule update
```
Like the `src/docker_modules`, the [PSMN/modules](https://gitlab.biologie.ens-lyon.fr/PSMN/modules) project describes recipes to install tools and use them. The main difference is that you cannot use [Docker](https://www.docker.com/what-docker) on the PSMN. Instead, you have to use another framework, [Environment Modules](http://www.ens-lyon.fr/PSMN/doku.php?id=documentation:tools:modules), which allows you to load modules for specific tools and versions.
The [README.md](https://gitlab.biologie.ens-lyon.fr/PSMN/modules/blob/master/README.md) file of the [PSMN/modules](https://gitlab.biologie.ens-lyon.fr/PSMN/modules) repository contains all the instructions needed to load the modules maintained by the LBMC and available in the [PSMN/modules](https://gitlab.biologie.ens-lyon.fr/PSMN/modules) repository.
### `nf_modules`
The `src/nf_modules` folder contains templates of [nextflow](https://www.nextflow.io/) wrappers for the tools available in [Docker](https://www.docker.com/what-docker) and [SGE](http://www.ens-lyon.fr/PSMN/doku.php?id=documentation:tools:sge). The details of the [nextflow](https://www.nextflow.io/) wrapper will be presented in the next section. Alongside the `.nf` and `.config` files, there is a `tests` folder that contains a `tests.sh` script to run tests on the tool.
# Nextflow pipeline
A pipeline is a succession of **process**es. Each process has data input(s) and optional data output(s). Data flows are modeled as **channels**.
## Processes
Here is an example of **process**:
```Groovy
process sample_fasta {
input:
file fasta from fasta_file
output:
file "sample.fasta" into fasta_sample
script:
"""
head ${fasta} > sample.fasta
"""
}
```
We have the process `sample_fasta` that takes the `fasta_file` channel as input and outputs a `fasta_sample` channel. The process itself is defined in the `script:` block and within `"""`.
```Groovy
input:
file fasta from fasta_file
```
When we zoom in on the `input:` block, we see that we define a variable `fasta` of type `file` from the `fasta_file` channel. This means that nextflow is going to write a file, named after the content of the variable `fasta`, in the root of the folder where the `script:` block is executed.
```Groovy
output:
file "sample.fasta" into fasta_sample
```
At the end of the script, a file named `sample.fasta` is found in the root of the folder where the `script:` block is executed and sent into the channel `fasta_sample`.
Using the WebIDE of Gitlab, create a file `src/fasta_sampler.nf` with this process and commit to your repository.
![webide](img/webide.png)
## Channels
Why bother with channels? In the above example, the advantages of channels are not really clear. We could have just given the `fasta` file to the process. But what if we have many fasta files to process? What if we have sub processes to run on each of the sampled fasta files? Nextflow can easily deal with these problems with the help of channels.
Channels are streams of items that are emitted by a source and consumed by a process. A process with a channel as input will be run on every item sent through the channel.
```Groovy
Channel
.fromPath( "data/tiny_dataset/fasta/*.fasta" )
.set { fasta_file }
```
Here we defined the channel `fasta_file` that is going to send every fasta file from the folder `data/tiny_dataset/fasta/` into the process that takes it as input.
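To check what this channel emits, you can temporarily print its items with the `println` operator (a quick debugging sketch; remove it once you are satisfied):
```Groovy
// standalone debugging sketch: print every item emitted by the channel
Channel
  .fromPath( "data/tiny_dataset/fasta/*.fasta" )
  .println()
// prints each emitted item, e.g. data/tiny_dataset/fasta/tiny_v2.fasta
```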
Add the definition of the channel to the `src/fasta_sampler.nf` file and commit to your repository.
## Run your pipeline locally
After writing this first pipeline, you may want to test it. To do that, first clone your repository. To make cloning easier, set the visibility level to *public* in the settings/General/Permissions page of your project.
You can then run the following commands to download your project on your computer:
```sh
git clone -c http.sslVerify=false https://gitlab.biologie.ens-lyon.fr/<usr_name>/nextflow.git
cd nextflow
src/install_nextflow.sh
```
We also need data to run our pipeline:
```sh
cd data
git clone -c http.sslVerify=false https://gitlab.biologie.ens-lyon.fr/LBMC/tiny_dataset.git
cd ..
```
We can run our pipeline with the following command:
```sh
./nextflow src/fasta_sampler.nf
```
## Getting your results
Our pipeline seems to work, but we don’t know where the `sample.fasta` file is. To get results out of a process, we need to tell nextflow to write them somewhere (we may not need every intermediate file in our results).
To do that we need to add the following line before the `input:` section:
```Groovy
publishDir "results/sampling/", mode: 'copy'
```
Every file described in the `output:` section will be copied from nextflow to the folder `results/sampling/`.
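Putting it together, your `sample_fasta` process should now look like this:
```Groovy
process sample_fasta {
publishDir "results/sampling/", mode: 'copy'
input:
file fasta from fasta_file
output:
file "sample.fasta" into fasta_sample
script:
"""
head ${fasta} > sample.fasta
"""
}
```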
Add this to your `src/fasta_sampler.nf` file with the WebIDE and commit to your repository.
Pull your modifications locally with the command:
```sh
git pull origin master
```
You can run your pipeline again and check the content of the folder `results/sampling`.
## Fasta everywhere
We ran our pipeline on one fasta file. How would nextflow handle 100 of them? To test that we need to duplicate the `tiny_v2.fasta` file:
```sh
for i in {1..100}
do
cp data/tiny_dataset/fasta/tiny_v2.fasta data/tiny_dataset/fasta/tiny_v2_${i}.fasta
done
```
You can run your pipeline again and check the content of the folder `results/sampling`.
Every `sample_fasta` process writes a `sample.fasta` file. We need to make the name of the output file depend on the name of the input file.
```Groovy
output:
file "*_sample.fasta" into fasta_sample
script:
"""
head ${fasta} > ${fasta.baseName}_sample.fasta
"""
```
Add this to your `src/fasta_sampler.nf` file with the WebIDE and commit to your repository before pulling your modifications locally.
You can run your pipeline again and check the content of the folder `results/sampling`.
# Build your own RNASeq pipeline
In this section you are going to build your own pipeline for RNASeq analysis from the code available in the `src/nf_modules` folder.
## Create your Docker containers
For this practical, we are going to need the following tools:
- For Illumina adaptor removal: cutadapt
- For reads trimming by quality: UrQt
- For mapping and quantifying reads: BEDtools and Kallisto
To initialize these tools, follow the **Installing** section of the [README.md](https://gitlab.biologie.ens-lyon.fr/pipelines/nextflow/blob/master/README.md) file.
If you are using a CBP computer, don’t forget to clean up your docker containers at the end of the practical with the following commands:
```sh
docker rm $(docker stop $(docker ps -aq))
docker rmi $(docker images -qf "dangling=true")
```
## Cutadapt
The first step of the pipeline is to remove any Illumina adaptor left in your read files.
Open the WebIDE and create a `src/RNASeq.nf` file. Browse to [src/nf_modules/cutadapt/cutadapt.nf](https://gitlab.biologie.ens-lyon.fr/pipelines/nextflow/blob/master/src/nf_modules/cutadapt/cutadapt.nf); this file contains examples for cutadapt. We are interested in the *Illumina adaptor removal*, *for paired-end data* section of the code. Copy this code into your pipeline and commit.
Compared to before, we have a few new lines:
```Groovy
params.fastq = "$baseDir/data/fastq/*_{1,2}.fastq"
```
We declare a variable that contains the path of the fastq files to look for. The advantage of using `params.fastq` is that now the option `--fastq` in our call to the pipeline allows us to define this variable:
```sh
./nextflow src/RNASeq.nf --fastq "data/tiny_dataset/fastq/*_R{1,2}.fastq"
```
```Groovy
log.info "fastq files: ${params.fastq}"
```
This line simply displays the value of the variable.
```Groovy
Channel
.fromFilePairs( params.fastq )
```
As we are working with paired-end RNASeq data, we tell nextflow to send pairs of fastq files into the `fastq_files` channel.
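Each item emitted by `.fromFilePairs()` is a tuple made of a pair identifier and the list of matched files. As a quick sketch (the exact paths depend on your working directory), you can print them:
```Groovy
// debugging sketch: inspect the pairs built from the fastq files
Channel
  .fromFilePairs( "data/tiny_dataset/fastq/*_R{1,2}.fastq" )
  .println()
// emits items such as:
// [tiny, [data/tiny_dataset/fastq/tiny_R1.fastq, data/tiny_dataset/fastq/tiny_R2.fastq]]
```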
### cutadapt.config
For the `fasta_sampler.nf` pipeline we used the command `head`, present on most base UNIX systems. Here we want to use `cutadapt`, which is not. Therefore, we have three main options:
- install cutadapt locally so nextflow can use it
- launch the process in a Docker container that has cutadapt installed
- launch the process with SGE while loading the correct module to have cutadapt available
We are not going to use the first option, which requires no configuration for nextflow but tedious tool installations. Instead, we are going to use existing *wrappers* and tell nextflow about them. This is what the [src/nf_modules/cutadapt/cutadapt.config](https://gitlab.biologie.ens-lyon.fr/pipelines/nextflow/blob/master/src/nf_modules/cutadapt/cutadapt.config) file is used for.
Copy the content of this config file to an `src/RNASeq.config` file. This file is structured in process blocks. Here we are only interested in configuring the `adaptor_removal` process, not the `trimming` process. So you can remove the `trimming` block and commit.
You can test your pipeline with the following command:
```sh
./nextflow src/RNASeq.nf -c src/RNASeq.config -profile docker --fastq "data/tiny_dataset/fastq/*_R{1,2}.fastq"
```
## UrQt
The second step of the pipeline is to trim reads by quality.
Browse to [src/nf_modules/UrQt/urqt.nf](https://gitlab.biologie.ens-lyon.fr/pipelines/nextflow/blob/master/src/nf_modules/UrQt/urqt.nf); this file contains examples for UrQt. We are interested in the *for paired-end data* section of the code. Copy the process section code into your pipeline and commit.
This code won’t work if you try to run it: the `fastq_files` channel is already consumed by the `adaptor_removal` process. In nextflow, once a channel is consumed by a process, it ceases to exist. Moreover, we don’t want to trim the input fastq files; we want to trim the fastq files that come from the `adaptor_removal` process.
Therefore, you need to change the line:
```Groovy
set pair_id, file(reads) from fastq_files
```
In the `trimming` process to:
```Groovy
set pair_id, file(reads) from fastq_files_cut
```
The two processes are now connected by the channel `fastq_files_cut`.
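To make this chaining explicit, here is a minimal sketch of an `adaptor_removal` process feeding `fastq_files_cut` (the adaptor sequence and output names are illustrative assumptions; the repository’s cutadapt wrapper is the reference):
```Groovy
process adaptor_removal {
input:
set pair_id, file(reads) from fastq_files
output:
set pair_id, file("*_cut_R{1,2}.fastq") into fastq_files_cut
script:
"""
# illustrative cutadapt call: trim a standard Illumina adaptor from both mates
cutadapt -a AGATCGGAAGAGC -A AGATCGGAAGAGC \
-o ${pair_id}_cut_R1.fastq -p ${pair_id}_cut_R2.fastq \
${reads[0]} ${reads[1]}
"""
}
```
The `trimming` process then consumes the items sent into `fastq_files_cut`.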
Add the content of the [src/nf_modules/UrQt/urqt.config](https://gitlab.biologie.ens-lyon.fr/pipelines/nextflow/blob/master/src/nf_modules/UrQt/urqt.config) file to your `src/RNASeq.config` file and commit.
You can test your pipeline.
## BEDtools
Kallisto needs the sequences of the transcripts to be quantified. We are going to extract these sequences from the reference `data/tiny_dataset/fasta/tiny_v2.fasta` with the `bed` annotation `data/tiny_dataset/annot/tiny.bed`.
You can copy to your `src/RNASeq.nf` file the content of [src/nf_modules/BEDtools/bedtools.nf](https://gitlab.biologie.ens-lyon.fr/pipelines/nextflow/blob/master/src/nf_modules/BEDtools/bedtools.nf) and to your `src/RNASeq.config` file the content of [src/nf_modules/BEDtools/bedtools.config](https://gitlab.biologie.ens-lyon.fr/pipelines/nextflow/blob/master/src/nf_modules/BEDtools/bedtools.config).
Commit your work and test your pipeline with the following command:
```sh
./nextflow src/RNASeq.nf -c src/RNASeq.config -profile docker --fastq "data/tiny_dataset/fastq/*_R{1,2}.fastq" --fasta "data/tiny_dataset/fasta/tiny_v2.fasta" --bed "data/tiny_dataset/annot/tiny.bed"
```
## Kallisto
Kallisto runs in two steps: the indexing of the reference and the quantification of the reads on this index.
You can copy to your `src/RNASeq.nf` file the relevant content of [src/nf_modules/Kallisto/kallisto.nf](https://gitlab.biologie.ens-lyon.fr/pipelines/nextflow/blob/master/src/nf_modules/Kallisto/kallisto.nf) and to your `src/RNASeq.config` file the content of [src/nf_modules/Kallisto/kallisto.config](https://gitlab.biologie.ens-lyon.fr/pipelines/nextflow/blob/master/src/nf_modules/Kallisto/kallisto.config).
We are going to work with paired-end data, so only copy the relevant processes. The `index_fasta` process needs to take as input the output of your `fasta_from_bed` process. Your `mapping_fastq` process needs to take as input the output of your `index_fasta` process and the output of your `trimming` process.
Commit your work and test your pipeline.
You now have an RNASeq analysis pipeline that can run locally with Docker!
## Additional nextflow options
With nextflow you can restart the computation of a pipeline and get a trace of the process with the following options:
```sh
-resume -with-dag results/RNASeq_dag.pdf -with-timeline results/RNASeq_timeline
```
# Run your RNASeq pipeline on the PSMN
First you need to connect to the PSMN:
```sh
ssh login@allo-psmn
```
Then once connected to `allo-psmn`, you can connect to `e5-2667v4comp1`:
```sh
ssh login@e5-2667v4comp1
```
## Set your environment
Make the LBMC modules available to you:
```sh
ln -s /Xnfs/lbmcdb/common/modules/modulefiles ~/privatemodules
echo "module use ~/privatemodules" >> .bashrc
```
Then you need to clone your pipeline and get the data:
```sh
git clone -c http.sslVerify=false https://gitlab.biologie.ens-lyon.fr/lmodolo/nextflow.git
cd nextflow/data
git clone -c http.sslVerify=false https://gitlab.biologie.ens-lyon.fr/LBMC/tiny_dataset.git
cd ..
```
## Run nextflow
As we don’t want nextflow to be killed in case of disconnection, we start by launching `tmux`. In case of disconnection, you can restore your session with the command `tmux a`.
```sh
tmux
module load nextflow/0.28.2
nextflow src/RNASeq.nf -c src/RNASeq.config -profile sge --fastq "data/tiny_dataset/fastq/*_R{1,2}.fastq" --fasta "data/tiny_dataset/fasta/tiny_v2.fasta" --bed "data/tiny_dataset/annot/tiny.bed"
```
To use the scratch space for nextflow computations, add the option:
```sh
-w /scratch/login
```
You just ran your pipeline on the PSMN!
<!--
SPDX-FileCopyrightText: 2022 Laurent Modolo <laurent.modolo@ens-lyon.fr>
SPDX-License-Identifier: CC-BY-SA-4.0
-->
# Building your own pipeline
The goal of this guide is to walk you through the Nextflow pipeline building process. You will learn:
1. How to use this [git repository (LBMC/nextflow)](https://gitbio.ens-lyon.fr/LBMC/nextflow) as a template for your project.
2. The basics of [Nextflow](https://www.nextflow.io/), the pipeline manager that we use at the lab.
3. How to build a simple pipeline for the transcript-level quantification of RNASeq data.
4. How to run the exact same pipeline on a computing center ([PSMN](http://www.ens-lyon.fr/PSMN/doku.php)).
This guide assumes that you followed the [Git basis, training course](https://gitbio.ens-lyon.fr/LBMC/hub/formations/git_basis).
# Initialize your own project
You are going to build a pipeline for yourself or your team. So the first step is to create your own project.
## Forking
Instead of reinventing the wheel, you can use the [LBMC/nextflow](https://gitbio.ens-lyon.fr/LBMC/nextflow) repository as a template.
To easily do so, go to the [LBMC/nextflow](https://gitbio.ens-lyon.fr/LBMC/nextflow) repository and click on the [**fork**](https://gitbio.ens-lyon.fr/LBMC/nextflow/forks/new) button (you need to log in).
![fork button](./img/fork.png)
In git, the [action of forking](https://git-scm.com/book/en/v2/GitHub-Contributing-to-a-Project) means that you are going to make your own private copy of a repository.
This repository will keep a link with the original [LBMC/nextflow](https://gitbio.ens-lyon.fr/LBMC/nextflow) project, from which you will be able to:
- [get the latest updates](https://gitbio.ens-lyon.fr/LBMC/nextflow#getting-the-last-updates) from the `LBMC/nextflow` repository
- propose updates (see the [contributing guide](https://gitbio.ens-lyon.fr/LBMC/nextflow/-/blob/master/CONTRIBUTING.md#forking))
## Project organization
This project (and yours) follows the [guide of good practices for the LBMC](http://www.ens-lyon.fr/LBMC/intranet/services-communs/pole-bioinformatique/ressources/good_practice_LBMC).
You are now on the main page of your fork of the [LBMC/nextflow](https://gitbio.ens-lyon.fr/LBMC/nextflow) repository. You can explore this project; all the code in it is under the CeCILL licence (in the [LICENCE](https://gitbio.ens-lyon.fr/LBMC/nextflow/blob/master/LICENSE) file).
The [README.md](https://gitbio.ens-lyon.fr/LBMC/nextflow/blob/master/README.md) file contains instructions to run your pipeline and test its installation.
The [CONTRIBUTING.md](https://gitbio.ens-lyon.fr/LBMC/nextflow/blob/master/CONTRIBUTING.md) file contains guidelines if you want to contribute to the [LBMC/nextflow](https://gitbio.ens-lyon.fr/LBMC/nextflow).
The [data](https://gitbio.ens-lyon.fr/LBMC/nextflow/tree/master/data) folder will be the place where you store the raw data for your analysis.
The [results](https://gitbio.ens-lyon.fr/LBMC/nextflow/tree/master/results) folder will be the place where you store the results of your analysis.
**The content of `data` and `results` folders should never be saved on git.**
The [doc](https://gitbio.ens-lyon.fr/LBMC/nextflow/tree/master/doc) folder contains the documentation and this guide.
And most interestingly for you, the [src](https://gitbio.ens-lyon.fr/LBMC/nextflow/tree/master/src) folder contains code to wrap tools. It contains one visible subdirectory, `nf_modules`, some pipeline examples, and other hidden folders and files.
# Nextflow pipeline
A pipeline is a succession of [**process**](https://www.nextflow.io/docs/latest/process.html#process-page). Each `process` has data input(s) and optional data output(s). Data flows are modeled as [**channels**](https://www.nextflow.io/docs/latest/channel.html).
## Processes
Here is an example of **process**:
```Groovy
process sample_fasta {
input:
path fasta
output:
path "sample.fasta", emit: fasta_sample
script:
"""
head ${fasta} > sample.fasta
"""
}
```
We have the process `sample_fasta` that takes a fasta `path` as input and outputs a fasta `path`. The `process` task itself is defined in the `script:` block and within `"""`.
```Groovy
input:
path fasta
```
When we zoom in on the `input:` block, we see that we define a variable `fasta` of type `path`.
This means that the `sample_fasta` `process` is going to receive a flow of fasta file(s).
Nextflow is going to write a file, named after the content of the variable `fasta`, in the root of the folder where the `script:` block is executed.
```Groovy
output:
path "sample.fasta", emit: fasta_sample
```
At the end of the script, a file named `sample.fasta` is found in the root of the folder where the `script:` block is executed and will be emitted as `fasta_sample`.
Using the WebIDE of Gitlab, create a file `src/fasta_sampler.nf`
![webide](./img/webide.png)
The first line that you need to add is
```Groovy
nextflow.enable.dsl=2
```
Then add the `sample_fasta` process and commit it to your repository.
## Workflow
In Nextflow, `process` blocks are chained together within a `workflow` block.
For the time being, we only have one `process`, so `workflow` may look like an unnecessary complication, but keep in mind that we want to be able to write complex bioinformatics pipelines.
```Groovy
workflow {
sample_fasta(fasta_file)
}
```
Like `process` blocks, a `workflow` can take inputs (here `fasta_file`)
and transmit them to `process`es:
```Groovy
sample_fasta(fasta_file)
```
Add the definition of the `workflow` to the `src/fasta_sampler.nf` file and commit it to your repository.
## Channels
Why bother with `channel`s? In the above example, the advantages of `channel`s are not really clear. We could have just given the `fasta` file to the `workflow`. But what if we have many fasta files to process? What if we have sub processes to run on each of the sampled fasta files? Nextflow can easily deal with these problems with the help of `channel`s.
> **Channels** are streams of items that are emitted by a source and consumed by a process. A process with a `channel` as input will be run on every item sent through the `channel`.
```Groovy
channel
.fromPath( "data/tiny_dataset/fasta/*.fasta" )
.set { fasta_file }
```
Here we defined the `channel`, `fasta_file`, that is going to send every fasta file from the folder `data/tiny_dataset/fasta/` into the process that takes it as input.
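To check what the `channel` emits, you can temporarily add the `.view()` operator (a debugging sketch, to be removed once you are satisfied):
```Groovy
// debugging sketch: print every item emitted by the channel
channel
  .fromPath( "data/tiny_dataset/fasta/*.fasta" )
  .view()
// prints each emitted item, e.g. data/tiny_dataset/fasta/tiny_v2.fasta
```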
Add the definition of the `channel`, above the `workflow` block, to the `src/fasta_sampler.nf` file and commit it to your repository.
## Run your pipeline locally
After writing this first pipeline, you may want to test it. To do that, first clone your repository.
After following the [Git basis, training course](https://gitbio.ens-lyon.fr/LBMC/hub/formations/git_basis), you should have an up-to-date `ssh` configuration to connect to the `gitbio.ens-lyon.fr` git server.
You can run the following commands to download your project on your computer:
```sh
git clone git@gitbio.ens-lyon.fr:<usr_name>/nextflow.git
cd nextflow
src/install_nextflow.sh
```
We also need data to test our pipeline:
```sh
cd data
git clone git@gitbio.ens-lyon.fr:LBMC/hub/tiny_dataset.git
cd ..
```
We can run our pipeline with the following command:
```sh
./nextflow src/fasta_sampler.nf
```
## Getting your results
Our pipeline seems to work, but we don’t know where the `sample.fasta` file is. To get results out of a `process`, we need to tell nextflow to write them somewhere (we may not need every intermediate file in our results).
To do that we need to add the following line before the `input:` section:
```Groovy
publishDir "results/sampling/", mode: 'copy'
```
Every file described in the `output:` section will be copied from nextflow to the folder `results/sampling/`.
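Your `sample_fasta` process should now look like this:
```Groovy
process sample_fasta {
publishDir "results/sampling/", mode: 'copy'
input:
path fasta
output:
path "sample.fasta", emit: fasta_sample
script:
"""
head ${fasta} > sample.fasta
"""
}
```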
Add this to your `src/fasta_sampler.nf` file with the WebIDE and commit to your repository.
Pull your modifications locally with the command:
```sh
git pull origin master
```
You can run your pipeline again and check the content of the folder `results/sampling`.
## Fasta everywhere
We ran our pipeline on one fasta file. How would nextflow handle 100 of them? To test that we need to duplicate the `tiny_v2.fasta` file:
```sh
for i in {1..100}
do
cp data/tiny_dataset/fasta/tiny_v2.fasta data/tiny_dataset/fasta/tiny_v2_${i}.fasta
done
```
You can run your pipeline again and check the content of the folder `results/sampling`.
Every `sample_fasta` process writes a `sample.fasta` file. We need to make the name of the output file depend on the name of the input file.
```Groovy
output:
path "*_sample.fasta", emit: fasta_sample
script:
"""
head ${fasta} > ${fasta.simpleName}_sample.fasta
"""
```
Add this to your `src/fasta_sampler.nf` file with the WebIDE and commit it to your repository before pulling your modifications locally.
You can run your pipeline again and check the content of the folder `results/sampling`.
Congratulations, you built your first one-step nextflow pipeline!
# Build your own RNASeq pipeline
In this section you are going to build your own pipeline for RNASeq analysis from the code available in the `src/nf_modules` folder.
Open the WebIDE and create a `src/RNASeq.nf` file.
The first line that we are going to add is
```Groovy
nextflow.enable.dsl=2
```
## fastp
The first step of the pipeline is to remove any Illumina adaptors left in your read files and to trim your reads by quality.
The [LBMC/nextflow](https://gitbio.ens-lyon.fr/LBMC/nextflow) template provides you with many tools, for which you can find a predefined `process` block.
You can find a list of these tools in the [`src/nf_modules`](./src/nf_modules) folder.
You can also ask for a new tool by creating a [new issue for it](https://gitbio.ens-lyon.fr/LBMC/nextflow/-/issues/new) in the [LBMC/nextflow](https://gitbio.ens-lyon.fr/LBMC/nextflow) project.
We are going to include the [`src/nf_modules/fastp/main.nf`](./src/nf_modules/fastp/main.nf) in our `src/RNASeq.nf` file
```Groovy
include { fastp } from "./nf_modules/fastp/main.nf"
```
The `./nf_modules/fastp/main.nf` path is relative to the `src/RNASeq.nf` file; this is why we don’t include the `src/` part of the path.
With this line we can call the `fastp` block in our future `workflow` without having to write it!
If we check the content of the file [`src/nf_modules/fastp/main.nf`](./src/nf_modules/fastp/main.nf), we can see that by including `fastp`, we are including a sub-`workflow` (we will come back to this object later). Sub-`workflow`s can be used like `process`es.
This `sub-workflow` takes a `fastq` `channel`. We need to make one:
```Groovy
channel
.fromFilePairs( "data/tiny_dataset/fastq/*_R{1,2}.fastq", size: -1)
.set { fastq_files }
```
The `.fromFilePairs()` function creates a `channel` of pairs of fastq files. Therefore, the items emitted by the `fastq_files` channel are going to be pairs of fastq for paired-end data.
The option `size: -1` allows for arbitrary numbers of associated files. Therefore, we can use the same `channel` creation for single-end data.
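As a quick check, you can `.view()` this `channel`; items are tuples of a pair identifier and the list of matched files (a sketch; the exact paths depend on your working directory):
```Groovy
// debugging sketch: inspect the items built by .fromFilePairs()
channel
  .fromFilePairs( "data/tiny_dataset/fastq/*_R{1,2}.fastq", size: -1 )
  .view()
// paired-end item: [tiny, [tiny_R1.fastq, tiny_R2.fastq]]
// with a single-end pattern such as "*_S.fastq": [tiny, [tiny_S.fastq]]
```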
We can now add to our `src/RNASeq.nf` file the `workflow` definition, passing the `fastq_files` `channel` to `fastp`.
```Groovy
workflow {
fastp(fastq_files)
}
```
You can commit your `src/RNASeq.nf` file, `pull` your modification locally and run your pipeline with the command:
```sh
./nextflow src/RNASeq.nf
```
What is happening?
## Nextflow `-profile`
Nextflow tells you the following error: `fastp: command not found`. You don’t have `fastp` installed on your computer.
Tools installation can be a tedious process, and reinstalling old versions of those tools to reproduce old analyses can be very difficult.
Container technologies like [Docker](https://www.docker.com/) or [Singularity](https://sylabs.io/singularity/) create small virtual environments where we can install software in a given version with all its dependencies. This environment can be saved and shared, giving access to this exact working version of the software.
> Why two different systems?
> Docker is easy to use and can be installed on Windows / MacOS / GNU/Linux, but needs admin rights.
> Singularity can only be used on GNU/Linux, but doesn’t need admin rights and can be used on shared environments.
The [LBMC/nextflow](https://gitbio.ens-lyon.fr/LBMC/nextflow) template provides you with [4 different `-profile`s to run your pipeline](https://gitbio.ens-lyon.fr/LBMC/nextflow/-/blob/master/doc/getting_started.md#nextflow-profile).
Profiles are defined in the [`src/nextflow.config`](./src/nextflow.config), which is the default configuration file for your pipeline (you don’t have to edit this file).
To run the pipeline locally you can use the `singularity` or `docker` profile:
```sh
./nextflow src/RNASeq.nf -profile singularity
```
The `fastp` image (`singularity` or `docker`) is downloaded automatically and the fastq files are processed.
## Pipeline `--` arguments
We have defined the fastq files path within our `src/RNASeq.nf` file.
But what if we want to share our pipeline with someone who doesn’t want to analyze the `tiny_dataset` but other fastq files?
We can define a variable instead of fixing the path.
```Groovy
params.fastq = "data/fastq/*_{1,2}.fastq"
channel
.fromFilePairs( params.fastq, size: -1)
.set { fastq_files }
```
We declare a variable that contains the path of the fastq files to look for. The advantage of using `params.fastq` is that the option `--fastq` is now a parameter of your pipeline.
Thus, you can call your pipeline with the `--fastq` option.
You can commit your `src/RNASeq.nf` file, `pull` your modification locally and run your pipeline with the command:
```sh
./nextflow src/RNASeq.nf -profile singularity --fastq "data/tiny_dataset/fastq/*_R{1,2}.fastq"
```
We can also add the following line:
```Groovy
log.info "fastq files: ${params.fastq}"
```
This line simply displays the value of the variable.
## BEDtools
We need the sequences of the transcripts to be quantified. We are going to extract these sequences from the reference `data/tiny_dataset/fasta/tiny_v2.fasta` with the `bed` annotation file `data/tiny_dataset/annot/tiny.bed`.
You can include the `fasta_from_bed` `process` from the [src/nf_modules/bedtools/main.nf](https://gitbio.ens-lyon.fr/LBMC/nextflow/blob/master/src/nf_modules/bedtools/main.nf) file to your `src/RNASeq.nf` file.
You need to be able to input a `fasta_files` `channel` and a `bed_files` `channel`.
```Groovy
log.info "fasta file : ${params.fasta}"
log.info "bed file : ${params.bed}"
channel
.fromPath( params.fasta )
.ifEmpty { error "Cannot find any fasta files matching: ${params.fasta}" }
.map { it -> [it.simpleName, it]}
.set { fasta_files }
channel
.fromPath( params.bed )
.ifEmpty { error "Cannot find any bed files matching: ${params.bed}" }
.map { it -> [it.simpleName, it]}
.set { bed_files }
```
We introduce 2 new operators:
- `.ifEmpty { error "Cannot find any fasta files matching: ${params.fasta}" }` to throw an error if the path of the file is not right
- `.map { it -> [it.simpleName, it]}` to transform our `channel` to a format compatible with the [`CONTRIBUTING`](../CONTRIBUTING.md) rules. Items in the `channel` have the following shape `[file_id, [file]]`, like the ones emitted by the `.fromFilePairs(..., size: -1)` function (see the sketch below).
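For example, applied to the tiny dataset reference, the `.map` step turns a bare path into an `[id, file]` item (a sketch; the printed path depends on where you cloned the data):
```Groovy
// debugging sketch: see what the .map operator produces
channel
  .fromPath( "data/tiny_dataset/fasta/tiny_v2.fasta" )
  .map { it -> [it.simpleName, it] }
  .view()
// prints: [tiny_v2, data/tiny_dataset/fasta/tiny_v2.fasta]
```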
We can now add the `fasta_from_bed` step to our `workflow`:
```Groovy
workflow {
fastp(fastq_files)
fasta_from_bed(fasta_files, bed_files)
}
```
Commit your work and test your pipeline with the following command:
```sh
./nextflow src/RNASeq.nf -profile singularity --fastq "data/tiny_dataset/fastq/*_R{1,2}.fastq" --fasta "data/tiny_dataset/fasta/tiny_v2.fasta" --bed "data/tiny_dataset/annot/tiny.bed"
```
## Kallisto
Kallisto runs in two steps: the indexing of the reference and the quantification of the transcripts on this index.
You can include two `process`es with the following syntax:
```Groovy
include { index_fasta; mapping_fastq } from './nf_modules/kallisto/main.nf'
```
The `index_fasta` process needs to take as input the output of your `fasta_from_bed` `process`, which has the shape `[fasta_id, [fasta_file]]`.
Your `mapping_fastq` `process` needs to take as input the output of your `index_fasta` `process` and of the `fastp` `process`, of shapes `[index_id, [index_file]]` and `[fastq_id, [fastq_r1_file, fastq_r2_file]]`, respectively.
The output of a `process` is accessible through `<process_name>.out`.
In the cases where we have an `emit: <channel_name>`, we can access the corresponding channel with `<process_name>.out.<channel_name>`.
Note the `.collect()` operator used below: it gathers the index into a single-item value channel, so the same index can be reused for every fastq pair.
```Groovy
workflow {
fastp(fastq_files)
fasta_from_bed(fasta_files, bed_files)
index_fasta(fasta_from_bed.out.fasta)
mapping_fastq(index_fasta.out.index.collect(), fastp.out.fastq)
}
```
Commit your work and test your pipeline.
## Returning results
By default, none of the `process`es defined in `src/nf_modules` use the `publishDir` instruction.
You can set their `publishDir` directory with the following parameter:
```Groovy
params.<process_name>_out = "path"
```
Where "path" will describe a path within the `results` folder
Therefore you can either:
- call your pipeline with the following parameter `--mapping_fastq_out "quantification/"`
- add the following line to your `src/RNASeq.nf` file to get the output of the `mapping_fastq` process:
```Groovy
include { index_fasta; mapping_fastq } from './nf_modules/kallisto/main.nf' addParams(mapping_fastq_out: "quantification/")
```
Commit your work and test your pipeline.
You now have an RNASeq analysis pipeline that can run locally with Docker or Singularity!
## Bonus
A file `report.html` is created for each run with the details of your pipeline execution.
You can use the `-resume` option to reuse cached process results (stored in the `work/` folder) when restarting a pipeline.
# Run your RNASeq pipeline on the PSMN
First you need to connect to the PSMN:
```sh
ssh login@allo-psmn
```
Then once connected to `allo-psmn`, you can connect to `m6142comp2`:
```sh
ssh login@m6142comp2
```
## Set your environment
Create and go to your `scratch` folder:
```sh
mkdir -p /scratch/Bio/<login>
cd /scratch/Bio/<login>
```
Then you need to clone your pipeline and get the data:
```sh
git clone https://gitbio.ens-lyon.fr/<usr_name>/nextflow.git
cd nextflow/data
git clone https://gitbio.ens-lyon.fr/LBMC/hub/tiny_dataset.git
cd ..
```
## Run nextflow
As we don’t want nextflow to be killed in case of disconnection, we start by launching `tmux`. In case of disconnection, you can restore your session with the command `tmux a` and detach it by pressing `ctrl`+`b` then `d`.
```sh
tmux
src/install_nextflow.sh
./nextflow src/RNASeq.nf -profile psmn --fastq "data/tiny_dataset/fastq/*_R{1,2}.fastq" --fasta "data/tiny_dataset/fasta/tiny_v2.fasta" --bed "data/tiny_dataset/annot/tiny.bed"
```
You just ran your pipeline on the PSMN!
<!--
SPDX-FileCopyrightText: 2022 Laurent Modolo <laurent.modolo@ens-lyon.fr>
SPDX-License-Identifier: CC-BY-SA-4.0
-->
# Getting Started
These instructions will get you a copy of the project up and running on your local machine for development and testing purposes.
You can follow the [building your pipeline guide](./doc/building_your_pipeline.md) to learn how to build your own pipelines.
## Prerequisites
To run nextflow on your computer you need to have `java` (>= 1.8) installed.
```sh
java --version
```
and `git`
```sh
git --version
```
To be able to run the existing tools wrapped in nextflow on your computer (see `src/nf_modules/` for the list), you need to have `docker` installed.
```sh
docker run hello-world
```
Alternatively if you are on Linux, you can use `singularity`:
```sh
singularity run docker://hello-world
```
## Installing
To install nextflow on your computer simply run the following command:
```sh
git clone git@gitbio.ens-lyon.fr:LBMC/nextflow.git
cd nextflow/
src/install_nextflow.sh
```
## Running a toy RNASeq quantification pipeline
To run tests, we first need to get a training set:
```sh
cd data
git clone https://gitbio.ens-lyon.fr/LBMC/Hub/tiny_dataset.git
cp tiny_dataset/fastq/tiny_R1.fastq tiny_dataset/fastq/tiny2_R1.fastq
cp tiny_dataset/fastq/tiny_R2.fastq tiny_dataset/fastq/tiny2_R2.fastq
cp tiny_dataset/fastq/tiny_S.fastq tiny_dataset/fastq/tiny2_S.fastq
cd ..
```
Then run the toy RNASeq quantification pipeline with the following command:
```sh
./nextflow src/solution_RNASeq.nf --fastq "data/tiny_dataset/fastq/tiny2_R{1,2}.fastq.gz" --fasta "data/tiny_dataset/fasta/tiny_v2_10.fasta" --bed "data/tiny_dataset/annot/tiny.bed" -profile docker
```
## Nextflow profile
By default, the `src/nextflow.config` file defines 4 different profiles:
- `-profile docker` each process of the pipeline will be executed within a `docker` container locally
- `-profile singularity` each process of the pipeline will be executed within a `singularity` container locally
- `-profile psmn` each process will be sent as a separate job within a `charliecloud` container on the PSMN
- `-profile ccin2p3` each process will be sent as a separate job within a `singularity` container on the CCIN2P3
If the containers are not found locally, they are automatically downloaded before running the process. For the PSMN and the CCIN2P3, the container images are downloaded into a shared folder (`/scratch/Bio/charliecloud` for the PSMN, and `/sps/lbmc/common/singularity/` for the CCIN2P3).
### PSMN
To have access to `charliecloud` on the PSMN, you need to add the following path to your `PATH` variable:
```sh
PATH=/Xnfs/abc/charliecloud_bin/:$PATH
```
You can add this line to your `~/.bashrc` or `~/.zshrc` file.
When running `nextflow` on the PSMN, we recommend launching `tmux` before launching the pipeline:
```sh
tmux
./nextflow src/solution_RNASeq.nf --fastq "data/tiny_dataset/fastq/tiny2_R{1,2}.fastq.gz" --fasta "data/tiny_dataset/fasta/tiny_v2_10.fasta" --bed "data/tiny_dataset/annot/tiny.bed" -profile psmn
```
This way, the `nextflow` process will continue to run even if you are disconnected.
You can re-attach the `tmux` session with the command `tmux a` (and press `ctrl`+`b` then `d` to detach it).
### CCIN2P3
When running `nextflow` on the CCIN2P3, you cannot use `tmux`; instead, you should submit a _daemon_ job which will launch the `nextflow` command.
You can edit the `src/ccin2p3.pbs` file to personalize your `nextflow` command and send it with the command:
```sh
qsub src/ccin2p3.pbs
```