diff --git a/doc/TP_experimental_biologists.md b/doc/TP_experimental_biologists.md
index 2b622187617db7db00872b9d60327294a3dd75e1..b9d0c42b49b13d0e73488ba74f0282eebb96010b 100644
--- a/doc/TP_experimental_biologists.md
+++ b/doc/TP_experimental_biologists.md
@@ -3,11 +3,11 @@ title: "TP for experimental biologists"
 author: Laurent Modolo [laurent.modolo@ens-lyon.fr](mailto:laurent.modolo@ens-lyon.fr)
 date: 6 Jun 2018
 output:
   pdf_document:
     toc: true
     toc_depth: 3
     number_sections: true
     highlight: tango
     latex_engine: xelatex
 ---
 
@@ -17,7 +17,7 @@ For this we are going to build a small RNASeq analysis pipeline that should run:
 - remove Illumina adaptors
 - trim reads by quality
 - build the index of a reference genome
-- estimate the number of RNA fragments mapping to the transcript of this genome
+- estimate the number of RNA fragments mapping to the transcripts of this genome
 
 # Initialize your own project
 
@@ -30,67 +30,67 @@ To easily do so, go to the [pipelines/nextflow](https://gitlab.biologie.ens-lyon.fr
 
-In git, the [action of forking](https://git-scm.com/book/en/v2/GitHub-Contributing-to-a-Project) means that you are going to make your own private copy of a repository. You can then write modifications in your project, and if they are of interest for the source repository (here [pipelines/nextflow](https://gitlab.biologie.ens-lyon.fr/pipelines/nextflow)) create a merge request. Merge request are send to the source repository to ask the maintainers to integrate modifications.
+In git, the [action of forking](https://git-scm.com/book/en/v2/GitHub-Contributing-to-a-Project) means that you are going to make your own private copy of a repository. You can then write modifications in your project and, if they are of interest for the source repository (here [pipelines/nextflow](https://gitlab.biologie.ens-lyon.fr/pipelines/nextflow)), create a merge request. Merge requests are sent to the source repository to ask the maintainers to integrate modifications.
 
 ## Project organisation
 
-This project (and yours) follow the [guide of good practices for the LBMC](http://www.ens-lyon.fr/LBMC/intranet/services-communs/pole-bioinformatique/ressources/good_practice_LBMC)
+This project (and yours) follows the [guide of good practices for the LBMC](http://www.ens-lyon.fr/LBMC/intranet/services-communs/pole-bioinformatique/ressources/good_practice_LBMC).
 
-You are now on the main page of your fork of the [pipelines/nextflow](https://gitlab.biologie.ens-lyon.fr/pipelines/nextflow). You can explore this project, all the code in it is under the CeCILL lience (in the [LICENCE](https://gitlab.biologie.ens-lyon.fr/pipelines/nextflow/blob/master/LICENSE) file).
+You are now on the main page of your fork of the [pipelines/nextflow](https://gitlab.biologie.ens-lyon.fr/pipelines/nextflow) repository. You can explore this project; all the code in it is under the CeCILL licence (see the [LICENSE](https://gitlab.biologie.ens-lyon.fr/pipelines/nextflow/blob/master/LICENSE) file).
 
-The [README.md](https://gitlab.biologie.ens-lyon.fr/pipelines/nextflow/blob/master/README.md) file contains instructions to run your pipeline and test it's installation.
+The [README.md](https://gitlab.biologie.ens-lyon.fr/pipelines/nextflow/blob/master/README.md) file contains instructions to run your pipeline and test its installation.
 The [CONTRIBUTING.md](https://gitlab.biologie.ens-lyon.fr/pipelines/nextflow/blob/master/CONTRIBUTING.md) file contains guidelines to follow if you want to contribute to the [pipelines/nextflow](https://gitlab.biologie.ens-lyon.fr/pipelines/nextflow) repository (making a merge request, for example).
 
-The [data](https://gitlab.biologie.ens-lyon.fr/pipelines/nextflow/tree/master/data) folder will be the place were you store the raw data for your analysis.
-The [results](https://gitlab.biologie.ens-lyon.fr/pipelines/nextflow/tree/master/results) folder will be the place were you store the results of your analysis.
+The [data](https://gitlab.biologie.ens-lyon.fr/pipelines/nextflow/tree/master/data) folder will be the place where you store the raw data for your analysis.
+The [results](https://gitlab.biologie.ens-lyon.fr/pipelines/nextflow/tree/master/results) folder will be the place where you store the results of your analysis.
 Note that the content of these two folders should never be saved on git.
 The [doc](https://gitlab.biologie.ens-lyon.fr/pipelines/nextflow/tree/master/doc) folder contains the documentation of this practical course.
 
-And most interestingly for you, the [src](https://gitlab.biologie.ens-lyon.fr/pipelines/nextflow/tree/master/src) contains code to wrapp tools. This folder contains two subdirectory. A `docker_modules`, an `nf_modules` and an `sge_modules` folder.
+And most interestingly for you, the [src](https://gitlab.biologie.ens-lyon.fr/pipelines/nextflow/tree/master/src) folder contains code to wrap tools. It contains three subdirectories: a `docker_modules`, an `nf_modules` and an `sge_modules` folder.
 
 ### `docker_modules`
 
-The `src/docker_modules` contains the code to wrapp tools in [Docker](https://www.docker.com/what-docker). [Docker](https://www.docker.com/what-docker) is a framework that allow you to execute software withing [containers](https://www.docker.com/what-container). The `docker_modules` contains directory corresponding to tools and subdirectories corresponding to their version.
+The `src/docker_modules` folder contains the code to wrap tools in [Docker](https://www.docker.com/what-docker). [Docker](https://www.docker.com/what-docker) is a framework that allows you to execute software within [containers](https://www.docker.com/what-container). The `docker_modules` folder contains directories corresponding to tools, with subdirectories corresponding to their versions.
 
 ```sh
 ls -l src/docker_modules/
 drwxr-xr-x 3 laurent _lpoperator  96 May 25 15:42 BEDtools/
 drwxr-xr-x 4 laurent _lpoperator 128 Jun  5 16:14 Bowtie2/
 drwxr-xr-x 3 laurent _lpoperator  96 May 25 15:42 FastQC/
 drwxr-xr-x 4 laurent _lpoperator 128 Jun  5 16:14 HTSeq/
 ```
 
-To each `tools/version` corresponds two files:
+Each `tool/version` directory contains two files:
 
 ```sh
 ls -l src/docker_modules/Bowtie2/2.3.4.1/
 -rw-r--r-- 1 laurent _lpoperator 283 Jun  5 15:07 Dockerfile
 -rwxr-xr-x 1 laurent _lpoperator  79 Jun  5 16:18 docker_init.sh*
 ```
 
-The `Dockerfile` is the [Docker](https://www.docker.com/what-docker) recipe to create a [container](https://www.docker.com/what-container) containing `Bowtie2` in it's `2.3.4.1` version.
-And the `docker_init.sh` file is a small script to create the [container](https://www.docker.com/what-container) from this recipe.
+The `Dockerfile` is the [Docker](https://www.docker.com/what-docker) recipe to create a [container](https://www.docker.com/what-container) containing `Bowtie2` in its `2.3.4.1` version. And the `docker_init.sh` file is a small script to create the [container](https://www.docker.com/what-container) from this recipe.
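+
+Such an init script essentially wraps a `docker build` call on the recipe. A minimal sketch of what it might contain (the image name and tag are assumptions; check the actual `docker_init.sh` in the repository):
+
+```sh
+# build an image for Bowtie2 2.3.4.1 from the Dockerfile in this folder
+docker build -t bowtie2:2.3.4.1 src/docker_modules/Bowtie2/2.3.4.1/
+```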
 
-By running this script you will be able to easily install tools in different version on your personal computer and use it in your pipeline. Some of the advantages are:
+By running this script you will be able to easily install tools in different versions on your personal computer and use them in your pipeline. Some of the advantages are:
 
 - Whatever the computer, the installation and the results will be the same
 - You can keep [containers](https://www.docker.com/what-container) for old versions of tools and run them on new systems (science = reproducibility)
-- You don't have to bother with tedious installation procedure, somebody else already did the job and wrote a `Dockerfile`.
+- You don't have to bother with tedious installation procedures; somebody else already did the job and wrote a `Dockerfile`.
 - You can easily keep [containers](https://www.docker.com/what-container) for different versions of the same tool.
 
 ### `sge_modules`
 
-The `src/sge_modules` folder is not really there. It's a submodule of the project [PSMN/modules](https://gitlab.biologie.ens-lyon.fr/PSMN/modules). To populate it locally you can use the following command:
+The `src/sge_modules` folder is not really there: it's a submodule of the [PSMN/modules](https://gitlab.biologie.ens-lyon.fr/PSMN/modules) project. To populate it locally you can use the following commands:
 
 ```sh
 git submodule init
 git submodule update
 ```
 
-Like for the `src/docker_modules` the [PSMN/modules](https://gitlab.biologie.ens-lyon.fr/PSMN/modules) project describe recipes to install tools and use them. The main difference is that you cannot use [Docker](https://www.docker.com/what-docker) on the PSMN. Instead you have to use another framework [Environment Module](http://www.ens-lyon.fr/PSMN/doku.php?id=documentation:tools:modules) which allows you to load modules for specific tools and version.
-The [README.md](https://gitlab.biologie.ens-lyon.fr/PSMN/modules/blob/master/README.md) file of the [PSMN/modules](https://gitlab.biologie.ens-lyon.fr/PSMN/modules) respository contains all the instruction to be able to load the modules maintained by the LBMC en present in the [PSMN/modules](https://gitlab.biologie.ens-lyon.fr/PSMN/modules) respository.
+Like `src/docker_modules`, the [PSMN/modules](https://gitlab.biologie.ens-lyon.fr/PSMN/modules) project describes recipes to install tools and use them. The main difference is that you cannot use [Docker](https://www.docker.com/what-docker) on the PSMN. Instead you have to use another framework, [Environment Module](http://www.ens-lyon.fr/PSMN/doku.php?id=documentation:tools:modules), which allows you to load modules for specific tools and versions.
+The [README.md](https://gitlab.biologie.ens-lyon.fr/PSMN/modules/blob/master/README.md) file of the [PSMN/modules](https://gitlab.biologie.ens-lyon.fr/PSMN/modules) repository contains all the instructions needed to load the modules maintained by the LBMC and present in this repository.
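+
+With Environment Module, using a tool boils down to loading it before calling it. For example (`nextflow/0.28.2` is the module used later in this practical; the other module names available depend on what the LBMC maintains):
+
+```sh
+module avail                  # list the modules you can load
+module load nextflow/0.28.2   # make this nextflow version available in your session
+```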
 
 ### `nf_modules`
 
@@ -98,19 +98,19 @@ The `src/nf_modules` folder contains templates of [nextflow](https://www.nextflo
 
 # Nextflow pipeline
 
-A pipeline is a succession of **process**. Each process has data input(s) and optional data output(s). Data flow are modeled as **channels**.
+A pipeline is a succession of **processes**. Each process has data input(s) and optional data output(s). Data flows are modeled as **channels**.
 
 ## Processes
 
-Here are an example of **process**:
+Here is an example of a **process**:
 
 ```Groovy
 process sample_fasta {
   input:
   file fasta from fasta_file
   output:
   file "sample.fasta" into fasta_sample
   script:
 """
@@ -119,19 +119,19 @@ head ${fasta} > sample.fasta
 }
 ```
 
-We have the process `sample_fasta` that take as `fasta_file` channel as imput and output a `fasta_sample` channel. The process itself is deffined in the `script:` block and within `"""`.
+We have the process `sample_fasta`, which takes the `fasta_file` channel as input and outputs into the `fasta_sample` channel. The command run by the process is defined in the `script:` block, between the `"""`.
 
 ```Groovy
   input:
   file fasta from fasta_file
 ```
 
 When we zoom in on the `input:` block, we see that we define a variable `fasta` of type `file` from the `fasta_file` channel. This means that nextflow is going to place a file, named after the content of the variable `fasta`, in the root of the folder where `script:` is executed.
 
 ```Groovy
   output:
   file "sample.fasta" into fasta_sample
 ```
 
 At the end of the script, a file named `sample.fasta` is expected in the root of the folder where `script:` was executed, and is sent into the channel `fasta_sample`.
 
@@ -142,9 +142,9 @@ Using the WebIDE of Gitlab create a file `src/fasta_sampler.nf` with this proces
 ## Channels
 
-Why bother with channels ? In the above example, the advantages of channels are not really clear. We could have just given the `fasta` file to the process. But what if we have many fasta file to process ? What if we have sub processes to run on each of the sampled fasta files ? Nextflow can easily deal with these problems with the help of channels.
+Why bother with channels? In the above example, the advantages of channels are not really clear. We could have just given the `fasta` file to the process. But what if we have many fasta files to process? What if we have sub-processes to run on each of the sampled fasta files? Nextflow can easily deal with these problems with the help of channels.
 
-Channels are streams of items that are emitted by a source and consumed by a process. A process with a channel as input will be run on every items send through the channel.
+Channels are streams of items that are emitted by a source and consumed by a process. A process with a channel as input will be run on every item sent through the channel.
 
 ```Groovy
 Channel
@@ -152,7 +152,7 @@ Channel
   .set { fasta_file }
 ```
 
-Here we defined a channel `fasta_file` that is going to send every fasta file from the folder `data/fasta/` into the process that take it as input.
+Here we defined the channel `fasta_file` that is going to send every fasta file from the folder `data/fasta/` into the process that takes it as input.
 
 Add the definition of the channel to the `src/fasta_sampler.nf` file and commit to your repository.
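+
+If you want to check your work, the assembled `src/fasta_sampler.nf` should at this point look something like the sketch below (the `.fromPath` glob is an assumption; point it to wherever your fasta files live):
+
+```Groovy
+// stream every fasta file of the folder into the fasta_file channel
+Channel
+  .fromPath( "data/fasta/*.fasta" )
+  .set { fasta_file }
+
+// keep the first lines of each fasta file
+process sample_fasta {
+  input:
+  file fasta from fasta_file
+  output:
+  file "sample.fasta" into fasta_sample
+  script:
+"""
+head ${fasta} > sample.fasta
+"""
+}
+```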
 
@@ -161,7 +161,7 @@
 After writing this first pipeline, you may want to test it. To do that, first clone your repository. To easily do that, set the visibility level to *public* in the settings/General/Permissions page of your project.
 
-You can then run the following commands to download your project on your computer :
+You can then run the following commands to download your project on your computer:
 
 ```sh
 git clone -c http.sslVerify=false https://gitlab.biologie.ens-lyon.fr/<usr_name>/nextflow.git
@@ -169,7 +169,7 @@ cd nextflow
 src/install_nextflow.sh
 ```
 
-We also need data to run our pipeline :
+We also need data to run our pipeline:
 
 ```sh
 cd data
@@ -185,43 +185,43 @@ We can run our pipeline with the following command:
 
 ## Getting your results
 
-Our pipeline seems to work but we don't know where is the `sample.fasta`. To get results out of a process, we need to tell nextflow to write it somewhere (we may don't need to get every intermediate files in our results).
+Our pipeline seems to work, but we don't know where the `sample.fasta` file is. To get results out of a process, we need to tell nextflow to write them somewhere (we don't need to keep every intermediate file in our results).
 
 To do that, we need to add the following line before the `input:` section:
 
 ```Groovy
   publishDir "results/sampling/", mode: 'copy'
 ```
 
 Every file described in the `output:` section will be copied by nextflow to the folder `results/sampling/`.
 
-Add this to you `src/fasta_sampler.nf` file with the WebIDE and commit to your repository.
+Add this to your `src/fasta_sampler.nf` file with the WebIDE and commit to your repository.
 
 Pull your modifications locally with the command:
 
 ```sh
 git pull origin master
 ```
 
-You can run you pipeline again and check the content of the folder `results/sampling`.
+You can run your pipeline again and check the content of the folder `results/sampling`.
 
 ## Fasta everywhere
 
-We ran our pipeline on one fasta file. How nextflow would handle 100 of them ? To test that we need to duplicate the `tiny_v2.fasta` file:
+We ran our pipeline on one fasta file. How would nextflow handle 100 of them? To test that, we need to duplicate the `tiny_v2.fasta` file:
 
 ```sh
 for i in {1..100}
 do
   cp data/tiny_dataset/fasta/tiny_v2.fasta data/tiny_dataset/fasta/tiny_v2_${i}.fasta
 done
 ```
 
-You can run you pipeline again and check the content of the folder `results/sampling`.
+You can run your pipeline again and check the content of the folder `results/sampling`.
 Every `fasta_sampler` process writes a `sample.fasta` file, so each copy overwrites the previous one. We need to make the name of the output file depend on the name of the input file.
 
 ```Groovy
   output:
-  file "sample.fasta" into fasta_sample
+  file "*_sample.fasta" into fasta_sample
   script:
 """
@@ -229,8 +229,8 @@ head ${fasta} > ${fasta.baseName}_sample.fasta
 """
 ```
 
-Add this to you `src/fasta_sampler.nf` file with the WebIDE and commit to your repository before pulling your modifications locally.
-You can run you pipeline again and check the content of the folder `results/sampling`.
+Add this to your `src/fasta_sampler.nf` file with the WebIDE and commit to your repository before pulling your modifications locally.
+You can run your pipeline again and check the content of the folder `results/sampling`.
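+
+Putting all the pieces together, your `src/fasta_sampler.nf` should now look something like this sketch (the `.fromPath` glob is an assumption matching where the tiny dataset was cloned):
+
+```Groovy
+Channel
+  .fromPath( "data/tiny_dataset/fasta/*.fasta" )
+  .set { fasta_file }
+
+process sample_fasta {
+  publishDir "results/sampling/", mode: 'copy'
+  input:
+  file fasta from fasta_file
+  output:
+  file "*_sample.fasta" into fasta_sample
+  script:
+"""
+head ${fasta} > ${fasta.baseName}_sample.fasta
+"""
+}
+```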
 
 # Build your own RNASeq pipeline
 
@@ -238,15 +238,15 @@ In this section you are going to build your own pipeline for RNASeq analysis fro
 
 ## Create your Docker containers
 
-For this practical, we are going to need the following tools :
+For this practical, we are going to need the following tools:
 
-- For Illumina adaptor removal : cutadapt
-- For reads trimming by quality : UrQt
-- For mapping and quantifying reads : BEDtools and Kallisto
+- For Illumina adaptor removal: cutadapt
+- For read trimming by quality: UrQt
+- For mapping and quantifying reads: BEDtools and Kallisto
 
 To initialize these tools, follow the **Installing** section of the [README.md](https://gitlab.biologie.ens-lyon.fr/pipelines/nextflow/blob/master/README.md) file.
 
-If you are using a CBP computer don't forget to cleanup your docker containers at the end of the practical with the following command:
+If you are using a CBP computer, don't forget to clean up your docker containers at the end of the practical with the following commands:
 
 ```sh
 docker rm $(docker stop $(docker ps -aq))
@@ -255,9 +255,9 @@
 docker rmi $(docker images -qf "dangling=true")
 ```
 
 ## Cutadapt
 
-The first step of the pipeline is to remove any Illumina adaptor left in your reads files.
+The first step of the pipeline is to remove any Illumina adaptor left in your read files.
 
-Open the WebIDE and create a `src/RNASeq.nf` file. Browse for [src/nf_modules/cutadapt/cutadapt.nf](https://gitlab.biologie.ens-lyon.fr/pipelines/nextflow/blob/master/src/nf_modules/cutadapt/cutadapt.nf), this file contains example for cutadapt. We are interested in the *Illumina adaptor removal*,*for paired-end data* section of the code. Copy this code in your pipeline and commit.
+Open the WebIDE and create a `src/RNASeq.nf` file. Browse for [src/nf_modules/cutadapt/cutadapt.nf](https://gitlab.biologie.ens-lyon.fr/pipelines/nextflow/blob/master/src/nf_modules/cutadapt/cutadapt.nf); this file contains examples for cutadapt. We are interested in the *Illumina adaptor removal*, *for paired-end data* section of the code. Copy this code into your pipeline and commit.
 
 Compared to before, we have a few new lines:
 
@@ -265,24 +265,24 @@ params.fastq = "$baseDir/data/fastq/*_{1,2}.fastq"
 ```
 
-We declare a variable that contain the path of the fastq file to look for. The advantage of using `params.fastq` is that now the option `--fastq` in our call to the pipeline allow us to define this variable:
+We declare a variable that contains the path of the fastq files to look for. The advantage of using `params.fastq` is that the option `--fastq` in our call to the pipeline now allows us to define this variable:
 
 ```sh
 ./nextflow src/RNASeq.nf --fastq "data/tiny_dataset/fastq/*_R{1,2}.fastq"
 ```
 
 ```Groovy
-log.info "fastq files : ${params.fastq}"
+log.info "fastq files: ${params.fastq}"
 ```
 
-This line simply display the value of the variable
+This line simply displays the value of the variable.
 
 ```Groovy
 Channel
   .fromFilePairs( params.fastq )
 ```
 
-As we are working with paired-end RNASeq data we tell nextflow to send pairs of fastq in the `fastq_file` channel.
+As we are working with paired-end RNASeq data, we tell nextflow to send pairs of fastq files into the `fastq_files` channel.
 
 ### cutadapt.config
 
@@ -290,10 +290,10 @@ As we are working with paired-end RNASeq data we tell nextflow to send pairs of
 For the `fastq_sampler.nf` pipeline we used the command `head`, present in most base UNIX systems. Here we want to use `cutadapt`, which is not.
 
 Therefore, we have three main options:
 
 - install cutadapt locally so nextflow can use it
-- launch the process in a Docker container that have cutadapt installed
+- launch the process in a Docker container that has cutadapt installed
 - launch the process with SGE while loading the correct module to have cutadapt available
 
-We are not going to use the first option which requiere no configuration for nextflow but tedious tools installation. Instead, we are going to use existing *wrappers* and tell nextflow about it. This is what the [src/nf_modules/cutadapt/cutadapt.config](https://gitlab.biologie.ens-lyon.fr/pipelines/nextflow/blob/master/src/nf_modules/cutadapt/cutadapt.config) is used for.
+We are not going to use the first option, which requires no configuration for nextflow but a tedious tool installation. Instead, we are going to use existing *wrappers* and tell nextflow about them. This is what the [src/nf_modules/cutadapt/cutadapt.config](https://gitlab.biologie.ens-lyon.fr/pipelines/nextflow/blob/master/src/nf_modules/cutadapt/cutadapt.config) file is used for.
 
 Copy the content of this config file to a `src/RNASeq.config` file. This file is structured in process blocks. Here we are only interested in configuring the `adaptor_removal` process, not the `trimming` process. So you can remove the `trimming` block and commit.
 
@@ -308,20 +308,20 @@ You can test your pipeline with the following command:
 The second step of the pipeline is to trim reads by quality.
 
-Browse for [src/nf_modules/UrQt/urqt.nf](https://gitlab.biologie.ens-lyon.fr/pipelines/nextflow/blob/master/src/nf_modules/UrQt/urqt.nf), this file contains example for UrQt. We are interested in the *for paired-end data* section of the code. Copy the process section code in your pipeline and commit.
+Browse for [src/nf_modules/UrQt/urqt.nf](https://gitlab.biologie.ens-lyon.fr/pipelines/nextflow/blob/master/src/nf_modules/UrQt/urqt.nf); this file contains examples for UrQt. We are interested in the *for paired-end data* section of the code. Copy the process section code into your pipeline and commit.
 
-This code won't work if you try to run it: the `fastq_file` channel is already consumed by the `adaptor_removal` process. In nextflow once a channel is used by a process, it cease to exist. Moreover, we don't want to trim the input fastq, we want to trim the fastq that come from the `adaptor_removal` process.
+This code won't work if you try to run it: the `fastq_files` channel is already consumed by the `adaptor_removal` process. In nextflow, once a channel is used by a process, it ceases to exist. Moreover, we don't want to trim the input fastq files; we want to trim the fastq files that come from the `adaptor_removal` process.
 
-Therefore, you need to change the line :
+Therefore, you need to change the line:
 
 ```Groovy
   set pair_id, file(reads) from fastq_files
 ```
 
-In the `trimming` process to:
+in the `trimming` process to:
 
 ```Groovy
   set pair_id, file(reads) from fastq_files_cut
 ```
 
 The two processes are now connected by the channel `fastq_files_cut`.
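+
+Channel plumbing like this is easier to see on a toy example. The sketch below (hypothetical processes with placeholder `head` commands) chains two processes exactly the way `adaptor_removal` and `trimming` are now chained: the output channel of the first is the input channel of the second:
+
+```Groovy
+Channel
+  .fromPath( "data/tiny_dataset/fasta/*.fasta" )
+  .set { demo_files }
+
+process step_one {
+  input:
+  file f from demo_files
+  output:
+  file "*_one.txt" into demo_files_one
+  script:
+"""
+head -n 2 ${f} > ${f.baseName}_one.txt
+"""
+}
+
+process step_two {
+  input:
+  // step_two consumes the channel produced by step_one
+  file f from demo_files_one
+  output:
+  file "*_two.txt" into demo_files_two
+  script:
+"""
+head -n 1 ${f} > ${f.baseName}_two.txt
+"""
+}
+```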
 
@@ -336,7 +336,7 @@ Kallisto need the sequences of the transcript that need to be quantified. We are
 You can copy to your `src/RNASeq.nf` file the content of [src/nf_modules/BEDtools/bedtools.nf](https://gitlab.biologie.ens-lyon.fr/pipelines/nextflow/blob/master/src/nf_modules/BEDtools/bedtools.nf) and to your `src/RNASeq.config` file the content of [src/nf_modules/BEDtools/bedtools.config](https://gitlab.biologie.ens-lyon.fr/pipelines/nextflow/blob/master/src/nf_modules/BEDtools/bedtools.config).
 
-Commit your work and test your pipeline with the following command :
+Commit your work and test your pipeline with the following command:
 
 ```sh
 ./nextflow src/RNASeq.nf -c src/RNASeq.config -profile docker --fastq "data/tiny_dataset/fastq/*_R{1,2}.fastq" --fasta "data/tiny_dataset/fasta/tiny_v2.fasta" --bed "data/tiny_dataset/annot/tiny.bed"
 ```
 
@@ -344,14 +344,14 @@
 
 ## Kallisto
 
-Kallisto run in two step: the indexation of the reference and the quantification on this index.
+Kallisto runs in two steps: the indexation of the reference and the quantification of the reads on this index.
 
 You can copy to your `src/RNASeq.nf` file the relevant content of [src/nf_modules/Kallisto/kallisto.nf](https://gitlab.biologie.ens-lyon.fr/pipelines/nextflow/blob/master/src/nf_modules/Kallisto/kallisto.nf) and to your `src/RNASeq.config` file the content of [src/nf_modules/Kallisto/kallisto.config](https://gitlab.biologie.ens-lyon.fr/pipelines/nextflow/blob/master/src/nf_modules/Kallisto/kallisto.config).
 
-We are going to work with paired-end so only copy the relevant processes. The `index_fasta` process need to take as input the output of your `fasta_from_bed` process. The `fastq` input of your `mapping_fastq` process need to take as input the output of your `index_fasta` process and the `trimming` process.
+We are going to work with paired-end data, so only copy the relevant processes. The `index_fasta` process needs to take as input the output of your `fasta_from_bed` process. Your `mapping_fastq` process needs two inputs: the index produced by `index_fasta` and the trimmed fastq files produced by `trimming`.
 
 Commit your work and test your pipeline.
 
-You now have a RNASeq analysis pipeline that can run locally with Docker !
+You now have an RNASeq analysis pipeline that can run locally with Docker!
 
 # Run your RNASeq pipeline on the PSMN
 
@@ -362,7 +362,7 @@ login@allo-psmn
 login@e5-2667v4comp1
 ```
 
-## Set your environement
+## Set your environment
 
 Make the LBMC modules available to you:
 
@@ -371,7 +371,7 @@ ln -s /Xnfs/lbmcdb/common/modules/modulefiles ~/privatemodules
 echo "module use ~/privatemodules" >> .bashrc
 ```
 
-Then you need to clone your pipeline and get the data :
+Then you need to clone your pipeline and get the data:
 
 ```sh
 git clone -c http.sslVerify=false https://gitlab.biologie.ens-lyon.fr/lmodolo/nextflow.git
@@ -382,7 +382,7 @@ cd ..
 
 ## Run nextflow
 
-As we don't want nextflow to be killed in case of deconnection we start by launching `tmux`. In case of deconnection, you can restore your session with the command `tmux a`.
+As we don't want nextflow to be killed in case of disconnection, we start by launching `tmux`. In case of disconnection, you can restore your session with the command `tmux a`.
 
 ```sh
 tmux
@@ -390,4 +390,4 @@ module load nextflow/0.28.2
 nextflow src/RNASeq.nf -c src/RNASeq.config -profile sge --fastq "data/tiny_dataset/fastq/*_R{1,2}.fastq" --fasta "data/tiny_dataset/fasta/tiny_v2.fasta" --bed "data/tiny_dataset/annot/tiny.bed"
 ```
 
-You just ran your pipeline on the PSMN !
+You just ran your pipeline on the PSMN!