Ignored files:

```
*.html
.DS_Store
.Rproj.user
/.quarto/
/_book/
*_cache/
```

GitLab CI configuration (renders the book and publishes it to GitLab Pages):

```yaml
pages:
  stage: deploy
  image: rocker/tidyverse
  script:
    - apt update && apt install -y libxt6
    - quarto -v
    - |
      quarto render
      mkdir public
      cp -r _book/* public/
  interruptible: true
  artifacts:
    paths:
      - public
  rules:
    - if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH
```
---
title: Install system programs
author: "Laurent Modolo"
---
```{r include = FALSE}
if (!require("fontawesome")) {
install.packages("fontawesome")
}
library(fontawesome)
knitr::opts_chunk$set(echo = TRUE)
knitr::opts_chunk$set(comment = NA)
```
<a rel="license" href="http://creativecommons.org/licenses/by-sa/4.0/">
<img alt="Creative Commons License" style="border-width:0" src="https://i.creativecommons.org/l/by-sa/4.0/88x31.png" />
</a>
Objective: Learn how to install programs in GNU/Linux
As we have seen in the [4 Unix file system](http://perso.ens-lyon.fr/laurent.modolo/unix/4_unix_file_system.html#lib-and-usrlib) session, programs are files that contain instructions for the computer. Those files can be in binary or text format (with a [shebang](http://perso.ens-lyon.fr/laurent.modolo/unix/9_batch_processing.html#shebang)). Any of those files, present in a folder listed in the [**PATH**](http://perso.ens-lyon.fr/laurent.modolo/unix/9_batch_processing.html#path) variable, can be executed from anywhere by the user. For system-wide installation, the program files are copied into a shared folder contained in the [**PATH**](http://perso.ens-lyon.fr/laurent.modolo/unix/9_batch_processing.html#path) variable.
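You can see this lookup in action: `command -v` prints where in the **PATH** a program was found (a generic check, not specific to this course's VM):

```sh
# Ask the shell where it finds a program: it searches each folder
# of the PATH variable, in order, and prints the first match.
command -v ls    # e.g. /usr/bin/ls or /bin/ls
echo "$PATH"     # the colon-separated list of folders searched
```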
Developers don’t want to reinvent the wheel each time they write complex instructions in their programs, which is why they use shared libraries of pre-written code. This allows for quicker development, fewer bugs (the library is debugged once and used many times), and [better memory management](http://perso.ens-lyon.fr/laurent.modolo/unix/6_unix_processes.html#processes-tree) (the library is loaded once and can be used by different programs).
## Package Manager
However, interdependencies between programs and libraries can be a nightmare to handle manually, which is why most of the time you will install a program through a [package manager](https://en.wikipedia.org/wiki/Package_manager). [Package managers](https://en.wikipedia.org/wiki/Package_manager) are system tools that automatically handle all the dependencies of a program. They rely on **repositories** of programs and libraries, which contain all the information about the dependency trees and the corresponding files (**packages**).
System-wide installation steps:
- The user asks the package manager to install a program
- The **package manager** queries its repository lists to find the most recent **package** version of the program (or a specific version)
- The **package manager** constructs the dependency tree of the program
- The **package manager** checks that the new dependency tree is compatible with every other installed program
- The **package manager** installs the program **package** and all its dependency **packages** in their correct versions
The main difference between GNU/Linux distributions is the package manager they use:
- Debian / Ubuntu: [apt](https://en.wikipedia.org/wiki/APT_(Debian))
- CentOS / RedHat: [yum](https://en.wikipedia.org/wiki/YUM_(software))
- ArchLinux: [pacman](https://en.wikipedia.org/wiki/Pacman_(Arch_Linux))
- SUSE / OpenSUSE: [zypper](https://en.wikipedia.org/wiki/Zypper)
- Gentoo: [portage](https://en.wikipedia.org/wiki/Portage_(software))
- Alpine: [apk](https://wiki.alpinelinux.org/wiki/Alpine_newbie_apk_packages)
Package managers install packages in **root**-owned folders, so you need **root** access to use them.
<details><summary>Solution</summary>
<p>
```sh
docker run -it --volume /:/root/chroot alpine sh -c "chroot /root/chroot /bin/bash -c 'usermod -a -G sudo etudiant'" && su etudiant
```
</p>
</details>
### Installing R
**R** is a complex program that relies on lots of dependencies. Your current VM runs on Ubuntu, so we are going to use the `apt` tool (`apt-get` is the older version of the `apt` command; `synaptic` is a graphical interface for `apt-get`).
You can check the **r-base** package dependencies on the website [packages.ubuntu.com](https://packages.ubuntu.com/focal/r-base). Not too many dependencies? Check the sub-package **r-base-core**.
You can check the **man**ual of the `apt` command to install **r-base-core**.
<details><summary>Solution</summary>
<p>
```sh
sudo apt install r-base-core
```
</p>
</details>
What is the **R** version that you installed? Is there a newer version of **R**?
### Adding a new repository
You can check the list of repositories that `apt` checks in the file `/etc/apt/sources.list`.
You can add the official cran repository to your repositories list:
```sh
sudo add-apt-repository 'deb https://cloud.r-project.org/bin/linux/ubuntu <release_name>-cran40/'
```
You can use the command `lsb_release -sc` to get your **release name**.
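The two steps can be combined with command substitution; the `focal` fallback below is only there so the line also runs on systems without `lsb_release`, it is not part of the original command:

```sh
# Build the repository line with the release name filled in automatically
release=$(lsb_release -sc 2>/dev/null || echo focal)
echo "deb https://cloud.r-project.org/bin/linux/ubuntu ${release}-cran40/"
```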
Then you must add the public key of this repository:
```sh
sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys E298A3A825C0D65DFD57CBB651716619E084DAB9
```
### Updating the repository list
You can now use `apt` to update your repository list and try to reinstall **r-base-core**.
<details><summary>Solution</summary>
<p>
```sh
sudo apt update
```
</p>
</details>
The command gives you a way to list all the upgradable **packages**. Which version of **R** can you install now?
You can upgrade all the upgradable **packages**.
<details><summary>Solution</summary>
<p>
```sh
sudo apt upgrade
```
</p>
</details>
With the combination of `update` and `upgrade` you can keep your whole system up to date: even the kernel is just another package. You can use `apt` to search for the various versions of the `linux-image` package.
<details><summary>Solution</summary>
<p>
```sh
sudo apt search linux-image
```
</p>
</details>
### Language-specific package managers
It’s not a good idea to have different **package managers** on the same system (they don’t know how dependencies are handled by the other managers). Still, you will also encounter language-specific package managers:
- `ppm` for Perl
- `pip` for Python
- `npm` for JavaScript
- `cargo` for Rust
- `install.packages` for R
- ...
These **package managers** allow you to make installations local to the user, which is advisable to avoid any conflict with the system **package manager**.
For example, you can use the following command to install `glances` system-wide with `pip`:
```sh
sudo pip3 install glances
```
You can now try to install `glances` with `apt`.
What is the `glances` version installed with `apt`, and what is the one installed with `pip`? Which version of `glances` is in your **PATH**?
Next time, use `pip` with the `--user` switch.
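Why does `--user` matter? Because when two copies of a program exist, the first **PATH** folder containing it wins. A self-contained sketch with throwaway scripts (the `/tmp/demo_*` folders and the `hello` script are made up for the demonstration):

```sh
# Two scripts with the same name in two different folders
mkdir -p /tmp/demo_a /tmp/demo_b
printf '#!/bin/sh\necho from_a\n' > /tmp/demo_a/hello
printf '#!/bin/sh\necho from_b\n' > /tmp/demo_b/hello
chmod +x /tmp/demo_a/hello /tmp/demo_b/hello
# The folder listed first in PATH wins the lookup
PATH="/tmp/demo_a:/tmp/demo_b:$PATH" hello   # prints: from_a
```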
## Manual installation
Sometimes, a specific tool that you want to use will not be available through a **package manager**. If you are lucky, you will find a **package** for your distribution. For `apt`, the **packages** are `.deb` files.
For example, you can download `simplenote` version 2.7.0 for your architecture [here](https://github.com/Automattic/simplenote-electron/releases/tag/v2.7.0).
<details><summary>Solution</summary>
<p>
```sh
wget https://github.com/Automattic/simplenote-electron/releases/download/v2.7.0/Simplenote-linux-2.7.0-amd64.deb
```
</p>
</details>
You can then use `apt` to install this file.
## From sources
If the program is open source, you can also [download the sources](https://github.com/Automattic/simplenote-electron/archive/v2.7.0.tar.gz) and build them.
<details><summary>Solution</summary>
<p>
```sh
wget https://github.com/Automattic/simplenote-electron/archive/v2.7.0.tar.gz
```
</p>
</details>
You can use the command `tar -xvf` to extract this archive.
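If you want to see how `tar` behaves before touching the real archive, you can round-trip a throwaway archive (everything below stays in `/tmp`):

```sh
# Create a small file, pack it, then extract it elsewhere
mkdir -p /tmp/tar_demo/src /tmp/tar_demo/out
echo "hello" > /tmp/tar_demo/src/note.txt
tar -czf /tmp/tar_demo/src.tar.gz -C /tmp/tar_demo src   # c: create, z: gzip, f: file
tar -xvf /tmp/tar_demo/src.tar.gz -C /tmp/tar_demo/out   # x: extract (compression auto-detected)
cat /tmp/tar_demo/out/src/note.txt                       # prints: hello
```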
When you go into the `simplenote-electron-2.7.0` folder, you can see a `Makefile`. This means that you can use the `make` command to build Simplenote from those files. `make` is a tool that reads recipes (`Makefiles`) to build programs.
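For reference, a `Makefile` is a list of targets, each followed by tab-indented recipe lines; `make <target>` runs the recipe. The sketch below is hypothetical (the target names and the `dist/` path are made up, this is not Simplenote's actual recipe):

```makefile
# Hypothetical recipe: `make build` builds, `make install` builds then copies
build:
	npm install
	npm run build

install: build
	cp -r dist/ /usr/local/lib/simplenote/
```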
You can try to install `node` and `npx` with `apt`. What happened?
<details><summary>Solution</summary>
<p>
```sh
sudo apt install nodejs
```
</p>
</details>
You can use [https://packages.ubuntu.com/](https://packages.ubuntu.com/) to search for the name of the package containing the `libnss3.so` file.
<details><summary>Solution</summary>
<p>
```sh
sudo apt install libnss3
```
</p>
</details>
What now? Installing dependencies manually is an iterative process...
<details><summary>Solution</summary>
<p>
```sh
sudo apt install libnss3 libatk1.0-dev libatk-bridge2.0-0 libgdk-pixbuf2.0-0 libgtk-3-0 libgbm1
```
</p>
</details>
Yay, we should have every lib!
What now? A nodejs dependency is missing... After some searching on the internet, we can find the solution...
<details><summary>Solution</summary>
<p>
```sh
sudo apt install libnss3 libatk1.0-dev libatk-bridge2.0-0 libgdk-pixbuf2.0-0 libgtk-3-0 libgbm1
npm install --save-dev electron-window-state
```
</p>
</details>
And now you understand why program packaging takes time in a project, and why it’s important!
You can finalize the installation with the command `make install`. Usually the command to build a tool is available in the `README.md` file of the project.
Read the `README` file of the [fastp](https://github.com/OpenGene/fastp) program to see which methods of installation are available.
> We have used the following commands:
>
> - `apt` to install packages on Ubuntu
> - `pip3` to install Python packages
> - `npm` to install Nodejs packages
> - `make` to build programs from sources

Installing programs and maintaining different versions of a program on the same system is a difficult task. In the next session, we will learn how to use [virtualization](./12_virtualization.html) to make this easier.
---
title: Virtualization
author: "Laurent Modolo"
---
```{r include = FALSE}
if (!require("fontawesome")) {
install.packages("fontawesome")
}
library(fontawesome)
knitr::opts_chunk$set(echo = TRUE)
knitr::opts_chunk$set(comment = NA)
```
<a rel="license" href="http://creativecommons.org/licenses/by-sa/4.0/">
<img alt="Creative Commons License" style="border-width:0" src="https://i.creativecommons.org/l/by-sa/4.0/88x31.png" />
</a>
Objective: Learn how to build virtual images or containers of a system
If a computer can run any program, it can also run a program simulating another computer. This is the core concept of virtualization. The software that creates the **guest** system (the simulated computer) is called a **hypervisor** or **virtual machine monitor**.
You can save the state of the whole **guest** system using a **snapshot**. The **snapshots** can then be executed on any other **hypervisor**. This has several benefits:
- If the **host** has a hardware failure, the **snapshots** can be executed on another **host** to avoid service interruption
- For scalable systems, as many **guest** systems as necessary can be launched adaptively on many **host** systems to handle peak consumption. When the peak is over, the additional **guest** systems can easily be stopped.
- For computing science, a **snapshot** of a suite of tools allows you to run the same computation again, as it also captures the whole software (and simulated hardware) environment.
Simulating every component of the **guest** system has an overhead: the **hypervisor** must run code that simulates a given piece of hardware, plus code that simulates the **guest** programs running on this hardware. To avoid this, some parts of the **host** system can be shared (under control) with the **guest** system.
There are different levels of virtualization, which correspond to different levels of isolation between the virtual machine (**guest**) and the real computer (**host**).
## Full virtualization
A key challenge for full virtualization is the interception and simulation of privileged operations, such as I/O instructions. The effects of every operation performed within a given virtual machine must be kept within that virtual machine – virtual operation cannot be allowed to alter the state of any other virtual machine, the control program, or the hardware. Some machine instructions can be executed directly by the hardware, since their effects are entirely contained within the elements managed by the control program, such as memory locations and arithmetic registers. But other instructions that would "pierce the virtual machine" cannot be allowed to execute directly; they must instead be trapped and simulated. Such instructions either access or affect state information that is outside the virtual machine.
## Paravirtualization
In paravirtualization, the virtual hardware of the **guest** system is similar to the hardware of the **host**. The goal is to reduce the portion of the **guest** execution time spent simulating hardware that is identical to the **host** hardware. Paravirtualization provides specially defined **hooks** that allow the **guest** and **host** to request and acknowledge these tasks, which would otherwise be executed in the virtual domain (where execution performance is worse).
A hypervisor provides the virtualization of the underlying computer system. In [full virtualization](https://en.wikipedia.org/wiki/Full_virtualization), a guest operating system runs unmodified on a hypervisor. However, improved performance and efficiency are achieved by having the guest operating system communicate with the hypervisor. By allowing the guest operating system to indicate its intent to the hypervisor, each can cooperate to obtain better performance when running in a virtual machine. This type of communication is referred to as paravirtualization.
## OS-level virtualization
**OS-level virtualization** is an [operating system](https://en.wikipedia.org/wiki/Operating_system) paradigm in which the [kernel](https://en.wikipedia.org/wiki/Kernel_(computer_science)) allows the existence of multiple isolated [user space](https://en.wikipedia.org/wiki/User_space) instances. Such instances, called **containers**, may look like real computers from the point of view of programs running in them. Programs running inside a container can only see the container's contents and the devices assigned to the container.
## VirtualBox
VirtualBox is owned by Oracle; you can add the following repository to get the latest version:
<details><summary>Solution</summary>
<p>
```sh
docker run -it --volume /:/root/chroot alpine sh -c "chroot /root/chroot /bin/bash -c 'usermod -a -G sudo etudiant'" && su etudiant
```
</p>
</details>
```sh
wget -q -O- http://download.virtualbox.org/virtualbox/debian/oracle_vbox_2016.asc | sudo apt-key add -
sudo apt update
sudo apt install virtualbox
sudo usermod -G vboxusers -a $USER
```
The first thing that we need to do with VirtualBox is to create a new virtual machine. We want to install Ubuntu 20.04 on it.
```sh
VBoxManage createvm --name Ubuntu20.04 --register
```
We create a virtual hard disk for this VM:
```sh
VBoxManage createhd --filename Ubuntu20.04 --size 14242
```
We can then configure the VM; we use the Ubuntu presets.
```sh
VBoxManage modifyvm Ubuntu20.04 --ostype Ubuntu
```
We set the virtual RAM
```sh
VBoxManage modifyvm Ubuntu20.04 --memory 1024
```
We add a virtual IDE storage device from which we can boot.
```sh
VBoxManage storagectl Ubuntu20.04 --name IDE --add ide --controller PIIX4 --bootable on
```
And add an Ubuntu image to this IDE device:
```sh
wget https://releases.ubuntu.com/20.10/ubuntu-20.10-live-server-amd64.iso
VBoxManage storageattach Ubuntu20.04 --storagectl IDE --port 0 --device 0 --type dvddrive --medium "/home/etudiant/ubuntu-20.10-live-server-amd64.iso"
```
Add a network interface
```sh
VBoxManage modifyvm Ubuntu20.04 --nic1 nat --nictype1 82540EM --cableconnected1 on
```
And then start the VM to launch the `ubuntu-20.10-live-server-amd64.iso` installation
```sh
VBoxManage startvm Ubuntu20.04
```
Why did this last command fail? Which kind of virtualization is VirtualBox using?
## Docker
Docker is an **OS-level virtualization** system where the virtualization is managed by the `docker` daemon.
You can use the `systemctl` command and the `/` key to search for this daemon.
Like VirtualBox, you can install system programs within a container.
Prebuilt containers can be found on different sources like [the docker hub](https://hub.docker.com/) or [the biocontainers registry](https://biocontainers.pro/registry).
Launching a container
```sh
docker run -it alpine:latest
```
You can check your user name
<details><summary>Solution</summary>
<p>
```sh
echo $USER
id
```
</p>
</details>
Launching a background container
```sh
docker run -d -p 8787:8787 -e PASSWORD=yourpasswordhere rocker/rstudio:3.2.0
```
You can check the running containers with:
```sh
docker ps
```
Run a command within a running container:
```sh
docker exec <CONTAINER ID> id
```
Stopping a container:
```sh
docker stop <CONTAINER ID>
```
Deleting a container:
```sh
docker rm <CONTAINER ID>
```
Deleting a container image:
```sh
docker rmi rocker/rstudio:3.2.0
```
Try to run the `mcr.microsoft.com/windows/servercore:ltsc2019` container. What is happening?
### Building your own container
You can also create your own container by writing a container recipe. For Docker, this file is named `Dockerfile`.
The first line of such a recipe is a `FROM` statement. You don't start from scratch like in VirtualBox, but from a bare distribution:
```dockerfile
FROM ubuntu:20.04
```
From this point you can add instructions.
`COPY` will copy files from the `Dockerfile` directory to a path inside the container:
```dockerfile
COPY .bashrc /
```
`RUN` will execute a command inside the container:
```dockerfile
RUN apt update && apt install -y htop
```
You can then build your container:
```sh
docker build ./ -t 'ubuntu_with_htop'
```
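Putting the three instructions together, a complete minimal `Dockerfile` could look like this (the `htop` package is just the example used above):

```dockerfile
# Start from a bare Ubuntu 20.04 image
FROM ubuntu:20.04

# Copy a file from the build directory into the container's root
COPY .bashrc /

# Install a program inside the container
RUN apt update && apt install -y htop
```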
## Singularity
Like Docker, Singularity is an **OS-level virtualization** system. The main difference with Docker is that the user is the same inside and outside a container. Singularity is available on the [neuro.debian.net](http://neuro.debian.net/install_pkg.html?p=singularity-container) repository; you can add this source with the following commands:
```sh
wget -O- http://neuro.debian.net/lists/focal.de-md.full | sudo tee /etc/apt/sources.list.d/neurodebian.sources.list
sudo apt-key adv --recv-keys --keyserver hkp://pool.sks-keyservers.net:80 A5D32F012649A5A9
sudo apt-get update
sudo apt-get install singularity-container
```
Launching a container
```sh
singularity run docker://alpine:latest
```
You can check your user name
<details><summary>Solution</summary>
<p>
```sh
echo $USER
id
```
</p>
</details>
Executing a command within a container
```sh
singularity exec docker://alpine:latest apk
```
---
title: Understanding a computer
author: "Laurent Modolo"
---
```{r include = FALSE}
if (!require("fontawesome")) {
install.packages("fontawesome")
}
library(fontawesome)
knitr::opts_chunk$set(echo = TRUE)
knitr::opts_chunk$set(comment = NA)
```
<a rel="license" href="http://creativecommons.org/licenses/by-sa/4.0/">
<img alt="Creative Commons License" style="border-width:0" src="https://i.creativecommons.org/l/by-sa/4.0/88x31.png" />
</a>
Objective: understand the relations between the different components of a computer
## Which parts are necessary to define a computer?
## Computer components
### CPU (Central Processing Unit)
![CPU](./img/amd-ryzen-7-1700-cpu-inhand1-2-1500x1000.jpg){width=100%}
### Memory
#### RAM (Random Access Memory)
![RAM](./img/ram.png){width=100%}
#### HDD (Hard Disk Drive) / SSD (Solid-State Drive)
![SSD](./img/SSD.jpeg){width=100%}
![HDD](./img/hdd.png){width=100%}
### Motherboard
![motherboard](./img/motherboard.jpg){width=100%}
### GPU (Graphical Processing Unit)
![GPU](./img/foundation-100046736-orig.jpg){width=100%}
### Power supply
![Power supply](./img/LD0003357907_2.jpg){width=100%}
---
## Computer model: universal Turing machine
![Lego Turing machine](./img/lego_turing_machine.jpg){width=100%}
---
## As simple as a Turing machine?
![Universal Turing machine](./img/universal_truing_machine.png){width=100%}
- A tape divided into cells, one next to the other. Each cell contains a symbol from some finite alphabet.
- A head that can read and write symbols on the tape and move the tape left and right one (and only one) cell at a time.
- A state register that stores the state of the Turing machine, one of finitely many. Among these is the special start state with which the state register is initialized.
- A finite table of instructions that, given the state the machine is currently in and the symbol it is reading on the tape, tells the machine what to do.
---
## Basic Input Output System (BIOS)
> Used to perform hardware initialization during the booting process (power-on startup), and to provide runtime services for operating systems and programs.
- comes pre-installed on a personal computer's system board
- the first software to run when powered on
- in modern PCs initializes and tests the system hardware components, and loads a boot loader from a mass memory device
---
## Unified Extensible Firmware Interface (UEFI)
Advantages:
- Ability to use large-disk partitions (over 2 TB) with a GUID Partition Table (GPT)
- CPU-independent architecture
- CPU-independent drivers
- Flexible pre-OS environment, including network capability
- Modular design
- Backward and forward compatibility
Disadvantages:
- More complex
---
## Operating System (OS)
> A system software that manages computer hardware, software resources, and provides common services for computer programs.
- The first thing loaded by the BIOS/UEFI
- The first thing on the tape of a Turing machine
### Kernel
> The kernel provides the most basic level of control over all of the computer's hardware devices. It manages memory access for programs in the RAM, it determines which programs get access to which hardware resources, it sets up or resets the CPU's operating states for optimal operation at all times, and it organizes the data for long-term non-volatile storage with file systems on such media as disks, tapes, flash memory, etc.
![Kernel](./img/220px-Kernel_Layout.svg.png){width=100%}
---
## UNIX
> Unix is a family of multitasking, multiuser computer operating systems that derive from the original AT&T Unix.
[![Unix history](./img/1920px-Unix_timeline.en.svg.png){width=100%}](https://upload.wikimedia.org/wikipedia/commons/b/b5/Linux_Distribution_Timeline_21_10_2021.svg)
The ones you are likely to encounter:
- [macOS](https://en.wikipedia.org/wiki/MacOS)
- [BSD (Berkeley Software Distribution) variant](https://www.freebsd.org/)
- [GNU/Linux](https://www.kernel.org/)
The philosophy of UNIX is to have a large number of small programs which do few things but do them well.
## GNU/Linux
Linux is only the name of the kernel; to get a full OS, Linux is combined with the software of the [GNU Project](https://www.gnu.org/).
The GNU Project, with Richard Stallman, introduced the notion of Free Software:
1. The freedom to run the program as you wish, for any purpose.
2. The freedom to study how the program works, and change it so it does your computing as you wish. Access to the source code is a precondition for this.
3. The freedom to redistribute copies so you can help others.
4. The freedom to distribute copies of your modified versions to others. By doing this you can give the whole community a chance to benefit from your changes. Access to the source code is a precondition for this.
You can find a [list of software licenses](https://www.gnu.org/licenses/license-list.html) on the GNU website.
<video width="100%" controls>
<source src="https://audio-video.gnu.org/video/TEDxGE2014_Stallman05_LQ.webm" type="video/webm">
Your browser does not support the video tag.
</video>
See this [presentation](https://plmlab.math.cnrs.fr/gdurif/presentation_foss/-/blob/main/presentation/presentation_DURIF_foss.pdf) (in French) for a quick introduction to **software licenses** and **free/open source software**.
[Instead of installing GNU/Linux on your computer, you are going to learn to use the IFB Cloud.](./2_using_the_ifb_cloud.html)
---
theme: default
paginate: true
---
# Understanding a computer
[![cc_by_sa](./img/cc_by_sa.png)](http://creativecommons.org/licenses/by-sa/4.0/)
Objective: understand the relations between the different components of a computer
---
# Which parts are necessary to define a computer?
---
# Computer model: a universal Turing machine
![width:20% height:20%](./img/lego_turing_machine.jpg)
---
# Computer components
---
# As simple as a Turing machine?
![Universal Turing machine](./img/universal_truing_machine.png)
- A tape divided into cells, one next to the other. Each cell contains a symbol from some finite alphabet.
- A head that can read and write symbols on the tape and move the tape left and right one (and only one) cell at a time.
- A state register that stores the state of the Turing machine, one of finitely many. Among these is the special start state with which the state register is initialized.
- A finite table of instructions that, given the state the machine is currently in and the symbol it is reading on the tape, tells the machine what to do.
---
# Basic Input Output System (BIOS)
> Used to perform hardware initialization during the booting process (power-on startup), and to provide runtime services for operating systems and programs.
- comes pre-installed on a personal computer's system board
- the first software to run when powered on
- in modern PCs initializes and tests the system hardware components, and loads a boot loader from a mass memory device
---
# Unified Extensible Firmware Interface (UEFI)
Advantages:
- Ability to use large-disk partitions (over 2 TB) with a GUID Partition Table (GPT)
- CPU-independent architecture
- CPU-independent drivers
- Flexible pre-OS environment, including network capability
- Modular design
- Backward and forward compatibility
Disadvantages:
- More complex
---
# Operating System (OS)
> A system software that manages computer hardware, software resources, and provides common services for computer programs.
---
# UNIX
---
# UNIX-like OS
---
---
title: IFB (Institut Français de bio-informatique) Cloud
author: "Laurent Modolo"
---
```{r include = FALSE}
if (!require("fontawesome")) {
install.packages("fontawesome")
}
library(fontawesome)
knitr::opts_chunk$set(echo = TRUE)
knitr::opts_chunk$set(comment = NA)
```
<a rel="license" href="http://creativecommons.org/licenses/by-sa/4.0/">
<img alt="Creative Commons License" style="border-width:0" src="https://i.creativecommons.org/l/by-sa/4.0/88x31.png" />
</a>
Objective: Start and connect to an appliance on the IFB cloud
Instead of working on your computer, where you don't have a Unix-like OS or have limited rights, we are going to use the [IFB (Institut Français de bio-informatique) Cloud](https://biosphere.france-bioinformatique.fr/).
## Creating an IFB account
1. Access the [**https://biosphere.france-bioinformatique.fr/**](https://biosphere.france-bioinformatique.fr/) website
2. On the top right of the screen click on <img src="./img/signin_ifb.png" alt="sign in" style="zoom:150%;" />
3. Then click on ![login](./img/login_ifb.png)
4. Use the **Incremental search field** to select your identity provider (CNRS / ENS de Lyon / etc.)
5. Login
6. Complete the form with your **Name**, **First Name**, **Town** and **Zip Code**. You can ignore the other fields and click on **accept**.
7. Go to your **Groups** parameters on the top right ![group_selection_ifb](./img/group_selection_ifb.png)
8. Click on ![join_a_group](./img/join_a_group.png) and type **CAN UNIX 2023**
9. You can click on the **+** sign to register and wait to be accepted in the group
## Starting the LBMC Unix 2022 appliance
To follow this practical, you will need to start the **[LBMC Unix 2022](https://biosphere.france-bioinformatique.fr/catalogue/appliance/177/)** appliance from the [IFB Cloud](https://biosphere.france-bioinformatique.fr/) and click on the ![start](./img/start_VM.png) button after logging in with your account.
In the IFB jargon, appliance means **virtual machine** (VM). Remember how a universal Turing machine can run any program? A virtual machine is a simulation program simulating a physical computer. VMs have the following advantages:
- Copies of the VM are identical (there will be no differences between your running *LBMC Unix 2022 appliance* and mine)
- Upon starting, the VM is reset to the *LBMC Unix 2022 appliance* state
- You can break everything in your VM, terminate it, and start a new one
To access your appliance, you can go to the [**myVM** tab](https://biosphere.france-bioinformatique.fr/cloud/).
![myVM tab](./img/my_VM_ifb.png)
You will see the list of your running or starting appliances.
![my appliances](./img/my_appliances_ifb.png)
**Don't forget to terminate your appliances at the end of the session by clicking on** ![rm my appliances ifb.](./img/rm_my_appliances_ifb.png)
You will need to start this appliance at the start of each session of this course and terminate it afterward.
The ![hourglass](./img/wait_my_appliances_ifb.png) symbol indicates that your appliance is starting.
## Accessing the LBMC Unix 2022 appliance
You can open the **https** link next to the termination button of your appliance in a new tab. You will see the following message:
![ssl warning](./img/ssl_warning.png)
This means that the https connection is encrypted with a certificate unknown to your browser. As this certificate is going to be destroyed when you terminate your appliance, we don't want to pay a certification authority to validate it. Therefore you can safely add an exception for this certificate.
![ssl exception](./img/ssl_exception.png)
The webpage will display only the following line:
```sh
achinea3e18205-1f8c-4ee1-995e-d33ef57afa3c login:
```
To get your identification information, you can click on **Params** next to the **https** link.
**The password and the https link are among the only things that are going to change when you start a new appliance.**
```sh
Password:
```
To copy/paste your password, you will need to perform a right-click and select **Paste from browser**.
![paste from browser](./img/shellinabox_past_from_browser.png)
Then paste your password in the dialog box.
Don't worry, the password will not be displayed (not even in the form of `*****`, so someone looking at your screen will not be able to guess its length). You can press **enter** to log on to your VM.
[First steps in a terminal.](./3_first_steps_in_a_terminal.html)
---
title: First steps in a terminal
author: "Laurent Modolo"
---
```{r include = FALSE}
if (!require("fontawesome")) {
install.packages("fontawesome")
}
library(fontawesome)
knitr::opts_chunk$set(echo = TRUE)
knitr::opts_chunk$set(comment = NA)
```
<a rel="license" href="http://creativecommons.org/licenses/by-sa/4.0/">
<img alt="Creative Commons License" style="border-width:0" src="https://i.creativecommons.org/l/by-sa/4.0/88x31.png" />
</a>
Objective: learn to use basic terminal commands
Congratulations, you are now connected to your VM!
The first thing that you can see is a welcome message (yes, GNU/Linux users are polite and friendly) and information about your distribution.
> A **Linux distribution** (often abbreviated as **distro**) is an [operating system](https://en.wikipedia.org/wiki/Operating_system) made from a software collection that is based upon the [Linux kernel](https://en.wikipedia.org/wiki/Linux_kernel)
What is the distribution installed on your VM?
You can go to this distribution's website and have a look at the list of firms using it.
## Shell
A command-line interpreter (or shell) is a program designed to read lines of text entered by a user to interact with an OS.
To simplify, the shell executes the following infinite loop:
1. read a line
2. translate this line as a program execution with its parameters
3. launch the corresponding program with the parameters
4. wait for the program to finish
5. go back to 1.
When you open a terminal on a Unix-like OS, a **prompt** is displayed: it can end with a `$` or a `%` character depending on your configuration. As long as you see your prompt, you are at step **1.**; if no prompt is visible, you are at step **4.** (or you have set up a very minimalist configuration for your shell).
<img src="./img/prompt.png" alt="prompt" style="zoom:150%;" />
The blinking square or vertical bar represents your **cursor**. Shell predates graphical interfaces, so most of the time you won’t be able to move this cursor with your mouse, but with the **directional arrows** (left and right).
On the IFB, your prompt is a `$`:
```sh
etudiant@VM:~$
```
You can identify the following information from your prompt: **etudiant** is your login and **VM** is the name of your VM (`~` is where you are on the computer, i.e. the current working directory, but we will come back to that later).
On Ubuntu 20.04, the default shell is [Bash](https://en.wikipedia.org/wiki/Bash_(Unix_shell)) while on recent versions of macOS it's [zsh](https://en.wikipedia.org/wiki/Z_shell). There are [many different shells](https://en.wikipedia.org/wiki/List_of_command-line_interpreters); for example, Ubuntu 20.04 also has [sh](https://en.wikipedia.org/wiki/Bourne_shell) installed.
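The read / translate / launch / wait loop described at the beginning of this section can be sketched as a toy shell (an illustration only: real shells do much more, and the `read` built-in is assumed to be available):

```sh
# A toy read-eval loop: the skeleton of what a shell does
while read -r line; do   # 1. read a line
  $line                  # 2. and 3. interpret the line and launch the program
done                     # 4. wait for it to finish, then loop back to 1.
```

You can paste it in a terminal and type a command such as `cal`; press **ctrl** + **d** to leave the loop.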
## Launching Programs
You can launch every program present on your computer from the shell. The syntax will always be the following:
```sh
etudiant@VM:~$ program_name option_a option_b option_c [...] option_n
```
Then press **enter** to execute your command.
For example, we can launch the `cal` software by typing the following command and pressing **enter**:
```sh
cal
```
When you launch a command, various things can happen:
- Information can be displayed in the shell
- Computation can be made
- Files can be read or written
- etc.
We can pass arguments to the `cal` program in the following way.
```sh
cal -3
```
What is the effect of the `-3` parameter ?
You can add as many parameters as you want to your command, try `-3 -1` what is the meaning of the `-1` parameter ?
The `-d` option displays the month of a given date in a `yyyy-mm` format. Try to display your month of birth.
Traditionally, parameters are *named* which means that they are in the form of:
* `-X` for an on/off option (like `cal -3`)
* `-X something` for an input option (like `cal -d yyyy-mm`)
Here the name of the parameter is `X`, but software can also accept lists of unnamed parameters. Try the following:
```sh
cal 2
cal 1999
cal 2 1999
```
What is the difference for the parameter value `2` in the first and third command ?
## Moving around
For the `cal` program, the position in the file system is not important (it's not going to change the calendar). However, for most tools that read or write files, it's important to know where you are. This is the first real difficulty with a command-line interface: you need to remember where you are.
If you are lost, you can **p**rint your **w**orking **d**irectory (i.e., where you are currently working) with the command:
```sh
pwd
```
Like `cal`, the `pwd` command returns textual information.
By default when you log on a Unix system, you are in your **HOME** directory. Every user (except one) should have its home directory in the `/home/` folder.
To **c**hange **d**irectory you can type the command `cd`. `cd` takes one argument: the path of the directory where you want to go. Go to the `/home` directory.
```sh
cd /home
```
The `cd` command doesn't return any textual information, but changes the environment of the shell (you can confirm it with `pwd`) ! You can also see this change in your prompt:
```sh
etudiant@VM:/home$
```
What happens when you type `cd` without any argument ?
What is the location shown in your prompt ? Is it coherent with the `pwd` information ? Can you `cd` to the `pwd` path ?
When we move around a file system, we often want to see what is in a given folder: we want to **l**i**s**t the directory content. Go back to the `/home` directory and use the `ls` command to see how many people have a home directory there.
We will see various options for the `ls` command throughout this course. Try the `-a` option.
```sh
ls -a
```
What changed compared to the `ls` command without this option ?
Go to your home folder with the bare `cd` command and run the `ls -a` command again. The `-a` option makes the `ls` command list hidden files and folders. On Unix systems, hidden files and folders are all files and folders whose name starts with a `.`.
Can you `cd` to `.` ?
```sh
cd .
```
What happened ?
Can you cd to `..` ?
```sh
cd ..
```
What happened ?
Repeat the previous command 3 times (you can use the up directional arrow to recall the last command).
What happened ?
You can use the `-l` option in combination with the `-a` option to know more about those folders.
> We have seen the commands:
>
> - `cal` for calendar
> - `cd` for change directory
> - `ls` for list directory
> - `pwd` for print working directory
[You can now go to the Unix file system.](./4_unix_file_system.html)
---
title: GNU/Linux file system
author: "Laurent Modolo"
---
```{r include = FALSE}
if (!require("fontawesome")) {
install.packages("fontawesome")
}
library(fontawesome)
knitr::opts_chunk$set(echo = TRUE)
knitr::opts_chunk$set(comment = NA)
```
<a rel="license" href="http://creativecommons.org/licenses/by-sa/4.0/">
<img alt="Creative Commons License" style="border-width:0" src="https://i.creativecommons.org/l/by-sa/4.0/88x31.png" />
</a>
Objective: Understand how files are organized in Unix
> On a UNIX system, everything is a file; if something is not a file, it is a process.
>
> Machtelt Garrels
The followings are files:
- a text file
- an executable file
- a folder
- a keyboard
- a disk
- a USB key
- ...
This means that your keyboard is represented as a file within the OS.
This file system is organized as a tree. As you have seen, every folder has a parent folder except the `/` folder whose parent is itself.
Every file can be accessed by an **absolute path** starting at the root. Your user home folder can be accessed with the path `/home/etudiant/`. Go to your user home folder.
We can also access files with a **relative path**, using the special folder `..`. From your home folder, go to the *ubuntu* user home folder without going through the root (we will see the use of the `.` folder later).
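As a sketch with scratch folders (the names are made up; `mkdir`, a command we will see later, creates folders):

```sh
mkdir -p /tmp/demo/etudiant /tmp/demo/ubuntu  # two sibling scratch folders
cd /tmp/demo/etudiant                         # absolute path, starting from /
cd ../ubuntu                                  # relative path, .. being the parent folder
pwd                                           # displays /tmp/demo/ubuntu
```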
## File Types
As you may have guessed, not all files are of the same type. We have already seen that common files and folders are different. Here is the list of file types:
- `-` common files
- `d` folders
- `l` links
- `b` block devices (like disks)
- `c` character devices (special files)
- `s` socket
- `p` named pipes
To see the file type you can type the command
```sh
ls -la
```
The first column tells you the type of the file (here we only have the types `-` and `d`). We will come back to the other information later. Another, less used, command to get fine technical information on a file is `stat [file_name]`. Can you get the same information as `ls -la` with `stat` ?
## Common Structure
From the root of the system (`/`), most Unix-like distributions share the same folder tree structure. On macOS, the names will be different, because when you sell the most advanced system in the world you need to rename things with more advanced names.
### `/home`
You already know this one. You will find all your files and your configuration files here. Which configuration files can you identify in your home ?
### `/boot`
You can find the Linux kernel and the boot manager there. What is the name of your boot manager (proceed by elimination) ?
You can see a new type of file here, the type `l`. What is the version of the **vmlinuz** kernel ?
### `/root`
The home directory of the super user, also called root (we will come back to this user later). Can you check its configuration files ?
### `/bin`, `/sbin`, `/usr/bin` and `/opt`
The folders containing the programs used by the system and its users. Programs are simply files readable by a computer; these files are often in **bin**ary format, which means that it's extremely difficult for a human to read them.
What is the difference between `/bin` and `/usr/bin` ?
`/sbin` stands for **s**ystem **bin**ary. What are the names of the programs to power off and restart your system ?
`/opt` is where you will find the installation of non-conventional programs (if you don't follow [the guide of good practice of the LBMC](http://www.ens-lyon.fr/LBMC/intranet/services-communs/pole-bioinformatique/ressources/good_practice_LBMC), you can put your bioinformatics tools with crappy installation procedures there).
### `/lib` and `/usr/lib`
These folders contain system libraries. Libraries are collections of pieces of code usable by programs.
What is the difference between `/lib` and `/usr/lib` ?
Search information on the `/lib/gnupg` library on the net.
### `/etc`
The place where system configuration files and default configuration files are. What is the name of the default configuration file for `bash` ?
### `/dev`
Contains every peripheral device.
What is the type of the file `stdout` (you will have to follow the links)?
With the command `ls -l` can you identify files of type `b` ?
Using `less` can you visualize the content of the file `urandom` ? What about the file `random` ?
What is the content of `/dev/null`?
### `/var`
Storage space for variable and temporary files, like system logs, locks, or files waiting to be printed...
In the file `auth.log` you can see the creation of the `ubuntu` and `etudiant` accounts. To visualize a file you can use the command:
```sh
less [file_path]
```
You can navigate the file with the navigation arrows. Which group does the user `ubuntu` belong to that the user `etudiant` doesn't ?
To close `less` you can press `Q`. Try the opposite of `less`; what are the differences ?
What is the type of the file `autofs.fifo-var-autofs-ifb` in the `run` folder ? From the **fifo** in its name, can you guess the function of a `p` file ?
There are a few examples of the last type of file in the `run` folder; in which color does the command `ls -l` display them ?
### `/tmp`
Temporary space. **Erased at each shutdown of the system !**
### `/proc`
Information on the system resources. This file system is virtual. What do we mean by that ?
One of the columns of the command `ls -l` shows the size of the files. Try it on the `/etc` folder. You can add the `-h` option to have human-readable file sizes.
What are the sizes of the files in the `/proc` folder ?
From the `cpuinfo` file, get the brand of the CPU simulated by your VM.
From the `meminfo` file, retrieve the total size of the RAM.
## Links
With the command `ls -l` we have seen some links; the command `stat` can give us more information on them:
```sh
stat /var/run
```
What kind of link is `/var/run` ?
Most of the time, when you work with links, you will work with this kind of link. You can create a **l**i**n**k with the command `ln`; the option `-s` makes it a **s**ymbolic link.
The first argument after the option of the `ln` command is the target of the link, the second argument is the link itself:
```sh
cd
ln -s .bash_history bash_history_slink
ls -la
```
What are the differences between the two following commands ?
```sh
stat bash_history_slink
stat .bash_history
```
Symbolic links can bridge across file systems; if the target of the link disappears, the link will be broken.
You can delete a file with the command `rm`.
**There is no trash with the command `rm`: double-check your command before pressing enter !**
Delete your `.bash_history` file; what happened to `bash_history_slink` ?
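The breakage can be sketched with throwaway files (the names are just examples; `cat` prints the content of a file):

```sh
echo "data" > target.txt       # create a small file
ln -s target.txt my_slink      # a symbolic link pointing to it
cat my_slink                   # the link resolves and prints: data
rm target.txt                  # delete the target
cat my_slink || echo "broken"  # the link is now dangling, cat fails
rm my_slink                    # clean up: delete the link itself
```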
The command `ln` without the `-s` option creates hard links. Try the following commands:
```sh
stat .bashrc
ln .bashrc bashrc_linka
stat .bashrc
ln .bashrc bashrc_linkb
```
Use `stat` to also study `bashrc_linka` and `bashrc_linkb`.
What happens when you delete `bashrc_linka` ?
To understand the notion of **Inode** we need to know more about storage systems.
## Disk and partition
On a computer, the data are physically stored on a medium (HDD, SSD, USB key, punch card...)
![IBM_card_storage_NARA](./img/IBM_card_storage_NARA.jpg)
(Punched cards in storage at a U.S. Federal records center in 1959. All the data visible here would fit on a 4 GB flash drive.)
You cannot dump data directly into the disk, you need to organize things to be able to find them back.
![disk](./img/disk.png)
Each medium is divided into partitions:
![partitions](./img/partition.png)
The medium is divided into one or many partitions, each of which has a file system type. Examples of file system types are:
- fat32, exFAT
- ext3, ext4
- HFS+
- NTFS
- ...
The file system handles the physical position of each file on the medium. The position of a file in the index of files is called its **inode**.
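You can display inode numbers with `stat`; a sketch with throwaway files (the names are just examples) shows why hard links behave the way they do:

```sh
echo "hello" > original.txt    # a small file
ln original.txt hard_link.txt  # a hard link: a second name for the same inode
stat -c "%n inode=%i links=%h" original.txt hard_link.txt
# both names display the same inode number and a link count of 2
rm original.txt hard_link.txt  # clean up
```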
The action of attaching a given medium to the Unix file system tree is called mounting a partition or medium. To have a complete list of information on what is mounted where, you can use the `mount` command without arguments.
```sh
mount
```
Find which disk is mounted at the root of the file tree.
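One way to narrow down the long output of `mount` is to filter it (a sketch using `grep`, a filtering command we will meet later in this course):

```sh
mount | grep ' on / '  # keep only the line describing the root mount point
```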
> We have seen the commands:
>
> - `stat` to display information on a file
> - `less` to visualize the content of a file
> - `ln` to create links
> - `mount` to list mount points
[That’s all for the Unix file system, we will come back to it from time to time but for now you can head to the next section.](./5_users_and_rights.html)
---
title: Users and rights
author: "Laurent Modolo"
---
```{r include = FALSE}
if (!require("fontawesome")) {
install.packages("fontawesome")
}
library(fontawesome)
knitr::opts_chunk$set(echo = TRUE)
knitr::opts_chunk$set(comment = NA)
```
<a rel="license" href="http://creativecommons.org/licenses/by-sa/4.0/">
<img alt="Creative Commons License" style="border-width:0" src="https://i.creativecommons.org/l/by-sa/4.0/88x31.png" />
</a>
Objective: Understand how rights works in GNU/Linux
GNU/Linux and other Unix-like OSes are multiuser, which means that they are designed to work with multiple users connected simultaneously to the same computer.
There is always at least one user: the **root** user
- It’s the super user
- it has every right (we can say that it ignores the rights system)
- this account should only be used to administer the system.
There can also be other users who
- have rights
- belong to groups
- the groups also have rights
## File rights
Each file is associated with a set of rights:
- `-` nothing
- `r` **r**eading right
- `w` **w**riting right
- `x` e**x**ecution right
Check your set of rights on your `.bashrc` file
```sh
ls -l ~/.bashrc
```
The first column of the `ls -l` output shows the status of the rights on the file
![user_rights](./img/user_right.png)
```
rwxr-xr--
\ /\ /\ /
 v  v  v
 |  |  others (o)
 |  group (g)
 user (u)
```
- the 1st character is the type of the file (we already know this one)
- the 3 following characters (2 to 4) are the **user** rights on the file
- the characters 5 to 7 are the **group** rights on the file
- the characters 8 to 10 are the **others’** rights on the file (anyone not the **user** nor in the **group**)
To change the file rights you can use the command `chmod`
Use the command `ls -l` to check the effect of the following options for `chmod`
```sh
chmod u+x .bashrc
```
```sh
chmod g=rw .bashrc
```
```sh
chmod o+r .bashrc
```
```sh
chmod u-x,g-w,o= .bashrc
```
What can you conclude on the symbols `+` , `=`, `-` and `,` with the `chmod` command ?
> ### Numeric notation
>
> Another method for representing Unix permissions is an [octal](https://en.wikipedia.org/wiki/Octal) (base-8) notation as shown by `stat -c %a`.
>
> | Symbolic notation | Numeric notation | English |
> | ------------------ | ----------------- | ------------------------------------------------------------ |
> | `----------` | 0000 | no permissions |
> | `-rwx------` | 0700 | **read, write, & execute only for owner** |
> | `-rwxrwx---` | 0770 | read, write, & execute for owner and group |
> | `-rwxrwxrwx` | 0777 | read, write, & execute for owner, group and others |
> | `---x--x--x` | 0111 | execute |
> | `--w--w--w-` | 0222 | write |
> | `--wx-wx-wx` | 0333 | write & execute |
> | `-r--r--r--` | 0444 | read |
> | `-r-xr-xr-x` | 0555 | read & execute |
> | `-rw-rw-rw-` | 0666 | read & write |
> | `-rwxr-----` | 0740 | owner can read, write, & execute; group can only read; others have no permission |
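As a sketch, you can watch the numeric notation at work on a throwaway file (the file name is just an example; `touch`, seen just below, creates an empty file):

```sh
touch perm_demo.txt
chmod 740 perm_demo.txt    # owner: rwx, group: r, others: nothing
stat -c %a perm_demo.txt   # displays 740
ls -l perm_demo.txt        # the first column starts with -rwxr-----
rm perm_demo.txt           # clean up
```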
The default group of your user is the first in the list of the groups you belong to. You can use the command `groups` to display this list. What is your default group ?
The command `id` shows the same information, but with some differences: what are they ?
Can you cross-reference this additional information with the content of the files `/etc/passwd` and `/etc/group` ?
What is the user *id* of **root** ?
When you create an empty file, the system default rights and your default group are used. You can use the command `touch` to create a file.
```sh
touch my_first_file.txt
```
What are the default rights when you create a file ?
You can create folders with the command `mkdir` (**m**a**k**e **dir**ectories).
```sh
mkdir my_first_dir
```
What are the default rights when you create a directory ? Try to remove the execution rights; what happens then ?
You can see the **/root** home directory. Can you see its content ? Why ?
Create a symbolic link (`ln -s`) to your **.bashrc** file; what are the default rights of symbolic links ?
Can you remove the writing right of this link ? What happened ?
## Users and Groups
We have seen how to change the rights associated with the group, but what about changing the group itself ? The command `chgrp` allows you to do just that:
```sh
chgrp audio .bashrc
```
Now the next step is to change the owner of a file; you can use the command `chown` for that.
```sh
chown ubuntu my_first_file.txt
```
You can change the user and the group with this command:
```sh
chown ubuntu:audio my_first_file.txt
```
What are the rights on the program `mkdir` (the command `which` can help you find where program files are) ?
Can you remove the execution rights for the others ?
The command `cp` allows you to **c**o**p**y a file from one location to another.
```sh
man cp
```
Copy the `mkdir` tool to your home directory. Can you remove execution rights for the others on your copy of `mkdir` ? Can you read the content of the `mkdir` file ?
You cannot change the owner of a file, but you can always allow another user to copy it and change the rights on their copy.
## Getting admin access
Currently you don't have administrative access to your VM; this means that you don't have the password of the *root* account. Another way to get administrative access in Linux is to use the `sudo` command.
You can read the documentation (manual) of the `sudo` command with the command `man`
```sh
man sudo
```
Like for the command `less`, you can close `man` by pressing **Q**.
![sandwich](https://imgs.xkcd.com/comics/sandwich.png)
On Ubuntu, only members of the group **sudo** can use the `sudo` command. Are you in this group ?
**The root user can do everything on your VM; for example, it can delete everything from the `/` directory, but it's not a good idea (see the [Peter Parker principle](https://en.wikipedia.org/wiki/With_great_power_comes_great_responsibility))**
One advantage of using a command-line interface is that you can easily reuse commands written by others. Copy and paste the following command in your terminal to add yourself to the **sudo** group.
```sh
docker run -it --volume /:/root/chroot alpine sh -c "chroot /root/chroot /bin/bash -c 'usermod -a -G sudo etudiant'"
```
We will come back to this command later in this course when we talk about virtualisation.
You have to logout and login to update your list of groups. To logout from a terminal, you can type `exit` or press **ctrl** + **d**.
Check your user information with the `sudo` command
```sh
sudo id
```
You can try again the `chown` command with the `sudo` command.
Check the content of the file `/etc/shadow`. What is the purpose of this file (you can get help from the `man` command) ?
## Creating Users
You can add a new user to your system with the command `useradd`
```sh
useradd -m -s /bin/bash -g users -G adm,docker student
```
- `-m` create a home directory
- `-s` specify the shell to use
- `-g` the default group
- `-G` the additional groups
To log into another account you can use the command `su`
What is the difference between the two following commands ?
```sh
su student
```
```sh
sudo su student
```
What happens when you don't specify a login with the `su` command ?
## Creating groups
You can add new groups to your system with the command `groupadd`
```sh
sudo groupadd dummy
```
Then you can add users to this group with the command `usermod`
```sh
sudo usermod -a -G dummy student
```
And check the result:
```sh
groups student
```
To remove a user from a group, you can rewrite their list of groups with the command `usermod`:
```sh
sudo usermod -G student student
```
Check the results.
## Security-Enhanced Linux
While what you have seen in this section holds true for every Unix system, additional rules can be applied to control rights in Linux. This is what is called [SE Linux](https://en.wikipedia.org/wiki/Security-Enhanced_Linux) (**s**ecurity-**e**nhanced **Linux**).
When SE Linux is enabled on a system, every **process** can be assigned a set of rights. This is how, on Android for example, some programs can access your GPS while others cannot, etc. In this case it's not the user's rights that prevail, but those of the **process** launched by the user.
> We have seen the commands:
>
> - `chmod` to change rights
> - `touch` to create an empty file
> - `mkdir` to create a directory
> - `chgrp` to change associated group
> - `chown` to change owner
> - `man` to display the manual
> - `cp` to copy files
> - `sudo` to borrow **root** rights
> - `groupadd` to create groups
> - `groups` to list groups
> - `usermod` to manipulate users' groups
[To understand more about processes you can head to the next section.](./6_unix_processes.html)
---
title: Unix Processes
author: "Laurent Modolo"
---
```{r include = FALSE}
if (!require("fontawesome")) {
install.packages("fontawesome")
}
library(fontawesome)
knitr::opts_chunk$set(echo = TRUE)
knitr::opts_chunk$set(comment = NA)
```
<a rel="license" href="http://creativecommons.org/licenses/by-sa/4.0/">
<img alt="Creative Commons License" style="border-width:0" src="https://i.creativecommons.org/l/by-sa/4.0/88x31.png" />
</a>
Objective: Understand how process works in GNU/Linux
A program is a list of instructions to be executed on a computer. These instructions are written in one or many files in a format readable by the computer (binary files), or interpretable by the computer (text files). The interpretable format needs to be processed by an interpreter, which is itself in binary format.
The execution of a program on the system is represented as one or many processes: the program is the file of instructions, while the process is the instructions being read.
`mkdir` is the program, when you type `mkdir my_folder`, you launch a `mkdir` process.
Your shell is a process to manipulate other processes.
> In multitasking operating systems, processes (running programs) need a way to create new processes, e.g., to run other programs. [Fork](https://en.wikipedia.org/wiki/Fork_(system_call)) and its variants are typically the only way of doing so in Unix-like systems. For a process to start the execution of a different program, it first forks to create a copy of itself. Then, the copy, called the "[child process](https://en.wikipedia.org/wiki/Child_process)", calls the [exec](https://en.wikipedia.org/wiki/Exec_(system_call)) system call to overlay itself with the other program: it ceases execution of its former program in favor of the other.
Some commands in your shell don't have an associated process; for example, there is no `cd` program, it's a functionality of your shell. The `cd` command tells your `bash` process to do something, not to fork another process.
## Process attributes
- **PID** : the **p**rocess **id**entifier is an integer, at a given time each **PID** is unique to a process
- **PPID** : the **p**arent **p**rocess **id**entifier is the **PID** of the process that started the current process
- **UID** : the **u**ser **id**entifier is the identifier of the user that started the process; except under SE Linux, the process has the same rights as the user launching it.
- **PGID** : the **p**rocess **g**roup **id**entifier (like users, processes have groups)
You can use the command `ps` to see the processes launched by your user.
```sh
ps
```
Like for the command `ls`, you can use the switch `-l` to have more details.
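To display exactly the attributes listed above, you can sketch a `ps` call with the `-o` option (the column keywords here are those of the Linux procps `ps`; `$$` expands to the **PID** of your current shell):

```sh
# Show the PID, PPID, UID and PGID of the current shell
ps -o pid,ppid,uid,pgid,comm -p $$
```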
Open another tab in your browser to log again on your VM. Keep these tabs open, we are going to use both of them in this session.
In this new tab, you are going to launch a `less` process.
```sh
less .bashrc
```
Come back to your first tab; can you see your `less` process with the command `ps` ?
The `ps` option `-u [login]` lists all the processes whose **UID** is the **UID** associated with `[login]`.
```sh
ps -l -u etudiant
```
Is the number of `bash` processes consistent with the number of tabs you opened ?
What is the **PPID** of the `less` process ? Can you identify in which `bash` process `less` is running ?
Did you launch the `systemd` and `(sd-pam)` processes ?
**pam** stands for [**p**luggable **a**uthentication **m**odules](https://www.linux.com/news/understanding-pam/), a collection of tools that handles identification and resource access restriction. From the **PPID** of the `(sd-pam)` process, can you find which process launched `(sd-pam)` ? What is the **PPID** of this process ?
The option `-C` allows you to filter process by name
```sh
ps -l -C systemd
```
Who launched the first `systemd` process ?
## Processes tree
From **PPID** to **PPID**, you can guess that like the file system, processes are organized in a tree. The command `pstree` can give you a nice representation of this tree.
The following `ps` command shows information on the process with **PID 1**
```sh
ps -l -p 1
```
Is this output coherent with what you know on **PID** and the previous `ps` command ? Can you look at the corresponding program (with the command `which`) ?
Can you look for information on **PID 0** ?
The root of the processes tree is the **PID 1**.
What is the **UID** of the `dockerd` process ? Can you guess why we were able to gain `sudo` access in the previous section by using a `docker` command ?
`ps` gives you a static snapshot of the processes, but processes are dynamic. To see them running you can use the command `top`. While `top` is functional, most systems have `htop`, which has a more accessible interface. You can test `top` and `htop`.
Like with `ps`, you can use `-u etudiant` with `htop` to only display your user's processes.
With the `F6` key, you can change the column on which to sort your process.
- Which process is consuming the most of CPU ?
- Which process is consuming the most of memory ?
What is the difference between `M_SIZE` (`VIRT` column), `M_RESIDENT` (`RES` column) and `M_SHARE` (`SHR` column) ? To which value does the column `MEM%` correspond ?
- `M_SIZE` : The total amount of virtual memory used by the task. It includes all code, data and shared libraries plus pages that have been swapped out and pages that have been mapped but not used (if an application requests 1 GB of memory but uses only 1 MB, then `VIRT` will report 1 GB).
- `M_RESIDENT` : what’s currently in the physical memory. This does not include the swapped out memory and some of the memory may be shared with other processes (If a process uses 1 GB of memory and it calls `fork()`, the result of forking will be two processes whose `RES` is both 1 GB but only 1 GB will actually be used since Linux uses copy-on-write).
- `M_SHARE` : The amount of shared memory used by a task. It simply reflects memory that could be potentially shared with other processes.
Wait what is swapped out memory ?
> Linux divides its physical RAM (random access memory) into chunks of memory called pages. Swapping is the process whereby a page of memory is copied to the preconfigured space on the hard disk, called swap space, to free up that page of memory. The combined sizes of the physical memory and the swap space is the amount of virtual memory available.
And as your HDD (even your fast SSD) is way slower than your RAM, when you run out of RAM and the system starts to swap out memory, things will start to go really slowly on your computer. Generally, you want to avoid swapping. The swap space is often a dedicated partition in the *Linux_swap* format.
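As a sketch, on Linux you can also read the swap size directly from the virtual `/proc` file system seen earlier (using `grep`, a filtering command, to keep only the relevant line):

```sh
grep SwapTotal /proc/meminfo  # total swap space, in kB
```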
From the `htop` command, what is the size of the swap space on your VM ?
You have control over all the processes launched with your UID. To test this control we are going to use the well-named command `stress`. Check the **man**ual of the `stress` command.
Launch `stress` for 1 CPU and 3600 seconds.
You don't have a prompt; it means that the last command (`stress`) is running.
## Terminate
Instead of taking a nap and coming back at the end of this session, we may want to interrupt this command. The first way to do that is to ask the system to terminate the `stress` process.
From your terminal you can press `ctrl` + `c`. This shortcut terminates the current process; it works everywhere except for programs like `man` or `less`, which can be closed with the key `q`.
Launch another long `stress` process and switch to your other terminal tab, then list your active processes.
```sh
ps -l -u etudiant
```
You asked `stress` to launch a worker using 100% of one CPU (you can also see that with `htop`). You can see that the `stress` process you launched (with the **PPID** of your `bash`) forked another `stress` process.
Another way to terminate a process is with the command `kill`. `kill` is used to send a signal to a process with the command:
```sh
kill -15 PID
```
The `-15` option is the default option for `kill` so you can also write `kill PID`.
We can do the same thing as with the shortcut `ctrl` + `c`: nicely ask the process to terminate itself. The `-15` signal is called **SIGTERM**.
> On rare occasions a buggy process will not be able to listen to signals anymore. The signal `-9` will kill a process (not nicely at all). The `-9` signal is called the **SIGKILL** signal. There are 64 different signals that you can send with the `kill` command.
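A minimal sketch of sending **SIGTERM** by **PID**, using `sleep` as a stand-in for a long-running process (`$!` is a shell variable holding the **PID** of the last background process):

```sh
sleep 300 &      # launch a long process in the background
pid=$!           # remember its PID
kill -15 "$pid"  # send the SIGTERM signal (the default for kill)
```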
Use the `kill` command to terminate the worker process of your stress command. Go to the other tab where stress was running. What is the difference with your previous `ctrl` + `c` ?
In your current terminal, type the `bash` command; nothing seems to happen. You have a shell within a shell. Launch a long `stress` command and switch to the other tab.
You can use the `ps` command to check that `stress` is running within a `bash` within a `bash`:
```sh
ps -l --forest -u etudiant
```
Nicely terminate the intermediate `bash`. What happened ?
Try not nicely. What happened ?
A process with a **PPID** of **1** is called a **daemon**, daemons are processes that run in the background. Congratulations you created your first daemon.
Kill the remaining `stress` processes with the command `pkill`. You can check the **man**ual on how to do that.
## Suspend
Launch `htop` then press `ctrl` + `z`. What happened ?
```sh
ps -l -u etudiant
```
The manual of the `ps` command says the following about process states:
```
D uninterruptible sleep (usually IO)
I Idle kernel thread
R running or runnable (on run queue)
S interruptible sleep (waiting for an event to complete)
T stopped by job control signal
t stopped by debugger during the tracing
W paging (not valid since the 2.6.xx kernel)
X dead (should never be seen)
Z defunct ("zombie") process, terminated but not reaped by its parent
```
You can use the command `fg` to put `htop` in the **f**ore**g**round.
Close `htop` and type the following command twice (`-i 1` simulates **i**nput/**o**utput operations):
```sh
stress -i 1 -t 3600 &
```
Type the command `jobs`. What do you see ? You can specify which `stress` process you want to bring to the foreground with the command `fg %N` with `N` the number of the job.
```sh
fg %2
```
Bring the 2nd `stress` to the foreground. Put it back to the background with `ctrl` + `z`. What is now the difference between your two `stress` processes ?
The command `bg` allows you to resume a job stopped in the background. You can restart your stopped `stress` process with this command. You can use the `kill %N` syntax to kill your two `stress` processes.
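A minimal sketch of job control, again with disposable `sleep` processes (in an interactive shell you would use `fg %1` and `ctrl` + `z` the same way):

```sh
sleep 3600 &
sleep 3600 &
jobs       # lists [1] and [2] with their commands
kill %1    # signal a job by its number instead of its PID
kill %2
```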
## Priority
We have seen that we can launch a `stress` process to use 100% of a CPU. Launch two such `stress` processes in the background.
What happened ? What can you see with the `htop` command ?
> In Linux the [Scheduler](https://en.wikipedia.org/wiki/Completely_Fair_Scheduler) is a system process that manages the order of execution of the tasks by the CPU(s). Linux and most Unix systems are also multitasking OSes, which means that your OS is constantly switching which process has access to the CPU(s). From a universal Turing machine point of view, the head of the machine would be constantly switching back and forth on the tape.
You are working on a computer with a graphical interface: think about the processes drawing your windows, the processes reading and rendering your mouse, checking for your mail, loading and rendering your web pages, reading your keystrokes to send them back over the network to the NSA. The scheduler of your OS has to juggle everything without losing anything (don’t be too hard on the Windows OS).
The **nice** value of a process indicates its priority for the scheduler. **nice** values range from **-20** (the highest priority) to **19** (the lowest priority). The default **nice** value is **0**. The command `renice` allows you to change the **nice** value of a process:
```sh
renice -n N -p PID
```
With `N` the **nice** value.
Use `renice` to set the **nice** value of the first `stress` process worker to **19**. Use the command `htop` to check the result.
Can we increase the difference between the two processes ? Use the `renice` command to set the **nice** value of the second `stress` process worker to **-20**. What happened ?
Only the *root* user can lower the **nice** value of a process. You can also start a new process with a given **nice** value with the command `nice`:
```sh
nice -n 10 stress -c 1 -t 3600 &
```
Without root access you can only set values greater than 0.
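`nice` run without a command prints the niceness of the current shell, which makes the offset easy to check (the `sleep` worker is just a stand-in for a real computation):

```sh
nice                       # niceness of the current shell (usually 0)
nice -n 10 nice            # the child runs 10 nice levels higher
nice -n 10 sleep 3600 &
ps -o pid,ni,comm -p $!    # the NI column shows the worker's niceness
kill $!
```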
> We have seen the commands:
>
> - `ps` to display processes
> - `pstree` to display a tree of processes
> - `which` to display the PATH of a program
> - `top`/`htop` for a dynamic view of processes
> - `stress` to stress your system
> - `kill`/`pkill` to stop a process
> - `fg` to bring a background process to the foreground
> - `jobs` to display background processes
> - `bg` to resume a stopped process in the background
> - `nice`/`renice` to change the nice value of a process
[To learn how to articulate processes you can head to the next section.](./7_streams_and_pipes.html)
---
title: Unix Streams and pipes
author: "Laurent Modolo"
---
```{r include = FALSE}
if (!require("fontawesome")) {
install.packages("fontawesome")
}
library(fontawesome)
knitr::opts_chunk$set(echo = TRUE)
knitr::opts_chunk$set(comment = NA)
```
<a rel="license" href="http://creativecommons.org/licenses/by-sa/4.0/">
<img alt="Creative Commons License" style="border-width:0" src="https://i.creativecommons.org/l/by-sa/4.0/88x31.png" />
</a>
Objective: Understand function of streams and pipes in Unix systems
When you read a file you start at the top, from left to right: you read a flux of information which stops at the end of the file.
Unix streams are much the same thing: instead of opening a file as one whole block of data, a process can treat it as a flux. There are 3 standard Unix streams:
0. **stdin** the **st**an**d**ard **in**put
1. **stdout** the **st**an**d**ard **out**put
2. **stderr** the **st**an**d**ard **err**or
Historically, **stdin** has been the card reader or the keyboard, while the two others were the card puncher or the display.
The command `cat` simply reads from **stdin** and displays the result on **stdout**
```sh
cat
I can talk with
myself
```
It can also read files and display the results on **stdout**
```sh
cat .bashrc
```
## Streams manipulation
You can use the `>` character to redirect a flux toward a file. The following command makes a copy of your `.bashrc` file.
```sh
cat .bashrc > my_bashrc
```
Check the results of your command with `less`.
Following the same principle create a `my_cal` file containing the **cal**endar of this month. Check the results with the command `less`
Reuse the same command with the unnamed option `1999`. Check the results with the command `less`. What happened ?
Try the following command
```sh
cal -N 2 > my_cal
```
What is the content of `my_cal` ? What happened ?
The `>` operator can be prefixed with a stream number: the syntax to redirect **stdout** to a file is `1>`, which is the default (equivalent to `>`). Here the `-N` option doesn't exist, so `cal` throws an error. Errors are sent to **stderr**, which has the number 2.
Save the error message in `my_cal` and check the results with `less`.
We have seen that `>` overwrite the content of the file. Try the following commands:
```sh
cal 2020 > my_cal
cal >> my_cal
cal -N 2 2>> my_cal
```
Check the results with the command `less`.
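The same mechanics can be checked on a small made-up file (`notes.txt` is an arbitrary name):

```sh
echo "first"  >  notes.txt              # > creates or truncates the file
echo "second" >> notes.txt              # >> appends to it
ls missing_file 2>> notes.txt || true   # 2>> appends stderr; || true ignores the failure
wc -l notes.txt                         # the file now holds 3 lines
```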
The command `>` sends the stream from the left to the file on the right. Try the following:
```sh
cat < my_cal
```
What is the function of the command `<`?
You can use different redirection on the same process. Try the following command:
```sh
cat <<EOF > my_notes
```
Type some text and type `EOF` on a new line. `EOF` stands for **e**nd **o**f **f**ile; it's a conventional marker used to delimit a file within a stream.
What happened ? Can you check the content of `my_notes` ? How would you modify this command to add new notes?
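For instance, switching `>` to `>>` lets the heredoc append instead of overwrite, so each run adds a new note (a sketch with made-up content):

```sh
cat <<EOF >> my_notes
remember to sort the GTF file
EOF
cat my_notes   # each run of the block above adds one more line
```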
Finally, you can redirect a stream toward another stream with the `2>&1` syntax, which sends **stderr** (2) to wherever **stdout** (1) currently points:
```sh
cal -N 2 > my_redirection 2>&1
cal >> my_redirection 2>&1
```
## Pipes
The last stream manipulation that we are going to see is the pipe, which turns the **stdout** of a process into the **stdin** of the next. Pipes are useful to chain multiple simple operations. The pipe operator is `|`
```sh
cal 2020 | less
```
What is the difference with this command ?
```sh
cal 2020 | cat | cat | less
```
The command `zcat` has the same function as the command `cat` but for compressed files in [`gzip` format](https://en.wikipedia.org/wiki/Gzip).
The command `wget` downloads the file at a given URL. Don't run the following command, which would download the human genome:
```sh
wget http://hgdownload.soe.ucsc.edu/goldenPath/hg38/bigZips/hg38.fa.gz
```
We are going to use the `-q` switch, which silences `wget` (no download progress bar and such), and the `-O` option, which allows us to set the name of the output file. In Unix, setting the output file to `-` allows you to write the output on the **stdout** stream.
Analyze the following command, what would it do ?
```sh
wget -q -O - http://hgdownload.soe.ucsc.edu/goldenPath/hg38/bigZips/hg38.fa.gz | gzip -dc | less
```
Remember that most Unix commands process input and output line by line. This means that you can process huge datasets without intermediate files or huge RAM capacity.
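You can see this streaming behaviour directly: below, `head` exits after 3 lines, the pipe closes, and `seq` is stopped early, so the command returns instantly even though a billion numbers were requested:

```sh
seq 1000000000 | head -n 3
# → 1
# → 2
# → 3
```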
> We have used the following commands:
>
> - `cat`/ `zcat` to display information in **stdout**
> - `>` / `>>` / `<` / `<<` to redirect a flux
> - `|` the pipe operator to connect processes
> - `wget` to download files
[You can head to the next session to apply pipe and stream manipulation.](./8_text_manipulation.html)
---
title: Text manipulation
author: "Laurent Modolo"
---
```{r include = FALSE}
if (!require("fontawesome")) {
install.packages("fontawesome")
}
library(fontawesome)
knitr::opts_chunk$set(echo = TRUE)
knitr::opts_chunk$set(comment = NA)
```
<a rel="license" href="http://creativecommons.org/licenses/by-sa/4.0/">
<img alt="Creative Commons License" style="border-width:0" src="https://i.creativecommons.org/l/by-sa/4.0/88x31.png" />
</a>
Objective: Learn simple ways to work with text files in Unix
One of the great things about command line tools is that they are simple and fast, which means that they are great for handling large files. As a bioinformatician you have to handle large files, so command line tools are a natural fit.
## Text search
The file [hg38.ncbiRefSeq.gtf.gz](http://hgdownload.soe.ucsc.edu/goldenPath/hg38/bigZips/genes/hg38.ncbiRefSeq.gtf.gz) contains the RefSeq annotation for hg38 in [GTF format](http://www.genome.ucsc.edu/FAQ/FAQformat.html#format4)
We can download files with the `wget` command. Here the annotation is in **gz** format, which is a compressed format; you can use the `gzip` tool to handle **gz** files.
One useful command to inspect large text files is `head`.
```sh
wget -q -O - http://hgdownload.soe.ucsc.edu/goldenPath/hg38/bigZips/genes/hg38.ncbiRefSeq.gtf.gz | gzip -dc | head
```
You can change the number of lines displayed with the option `-n number_of_line`. The command `tail` has the same function as `head` but starting from the end of the file.
Try `tail` with the same number of lines displayed; does the computation take the same time ?
Download the `hg38.ncbiRefSeq.gtf.gz` file in your `~/`.
The program `grep string` allows you to search for *string* through a file or stream line by line.
```sh
gzip -dc hg38.ncbiRefSeq.gtf.gz | grep "chr2" | head
```
What is the last annotation on the chromosome 1 (to write a tabulation character you can type `\t`) ?
You can count things in text files with the command `wc`. Read the `wc` **man**ual to see how you can count lines in a file.
Does the number of *3UTR* match the number of *5UTR* ?
How many transcripts does the gene *CCR7* have ?
## Regular expression
When you do a lot of text searching, you will encounter regular expressions (regexp), which allow you to perform fuzzy searches. To run `grep` in extended regexp mode you can use the `-E` switch.
The most basic form of regexp is the exact match:
```sh
gzip -dc hg38.ncbiRefSeq.gtf.gz | head | grep -E "gene_id"
```
You can use the `.` wildcard character to match anything
```sh
gzip -dc hg38.ncbiRefSeq.gtf.gz | head | grep -E "...._id"
```
There are different special characters in regexp, but you can use `\` to escape them. For example, if you search for **.** you can use the following regexp
```sh
gzip -dc hg38.ncbiRefSeq.gtf.gz | head | grep -E "\."
```
### Character classes and alternatives
There are a number of special patterns that match more than one character. You’ve already seen `.`, which matches any character apart from a newline. There are other useful tools:
- `\d`: matches any digit.
- `\s`: matches any whitespace (e.g. space, tab, newline).
- `\S`: matches any non-whitespace.
- `[abc]`: matches a, b, or c.
- `[^abc]`: matches anything except a, b, or c.
- `[a-z]`: Match any letter of the alphabet
Search for two digits followed by an uppercase letter and one digit.
<details><summary>Solution</summary>
<p>
```sh
gzip -dc hg38.ncbiRefSeq.gtf.gz | head | grep -E "[0-9][0-9][A-Z][0-9]"
```
</p>
</details>
### Anchors
By default, regular expressions will match any part of a string. It’s often useful to *anchor* the regular expression so that it matches from the start or end of the string. You can use
- `^` to match the start of the string.
- `$` to match the end of the string.
```sh
gzip -dc hg38.ncbiRefSeq.gtf.gz | head | grep -E ";"
gzip -dc hg38.ncbiRefSeq.gtf.gz | head | grep -E ";$"
```
```sh
gzip -dc hg38.ncbiRefSeq.gtf.gz | head | grep -E "c"
gzip -dc hg38.ncbiRefSeq.gtf.gz | head | grep -E "^c"
```
### Repetition
The next step up in power involves controlling how many times a pattern matches
- `?`: 0 or 1
- `+`: 1 or more
- `*`: 0 or more
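A quick way to see the difference between the three quantifiers, on three made-up test lines:

```sh
printf 'color\ncolour\ncolouur\n' | grep -E "colou?r"   # color, colour
printf 'color\ncolour\ncolouur\n' | grep -E "colou+r"   # colour, colouur
printf 'color\ncolour\ncolouur\n' | grep -E "colou*r"   # all three lines
```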
What is the following regexp going to match ?
```sh
gzip -dc hg38.ncbiRefSeq.gtf.gz | head | grep -E "[a-z]*_[a-z]*\s\"[1-3]\""
```
You can also specify the number of matches precisely:
- `{n}`: exactly n
- `{n,}`: n or more
- `{,m}`: at most m
- `{n,m}`: between n and m
What is the following regexp going to match ?
```sh
gzip -dc hg38.ncbiRefSeq.gtf.gz | grep -E "^[a-z]{3}[2-3]\s.*exon\s\d{4,5}\s\d{4,5}.*"
```
How many gene names of more than 16 characters does the annotation contain ?
<details><summary>Solution</summary>
<p>
```sh
gzip -dc hg38.ncbiRefSeq.gtf.gz | grep -E "transcript\s.*gene_id\s\"\S{16,}\";" | wc -l
```
</p>
</details>
### Grouping and back references
You can group matches using `()`; for example, the following regexp matches doublets of *12*.
```sh
gzip -dc hg38.ncbiRefSeq.gtf.gz | grep -E "(12){2}"
```
Grouping is also used for back references in the case of text replacement. You can use the command `sed` for text replacement. The syntax of `sed` for replacement is the following: `sed -E 's|regexp|\n|g'` where `n` is the grouping number, `s` stands for substitute and `g` stands for global (which means that if there are several substitutions per line, `sed` won't stop at the first one).
Try the following replacement regexp
```sh
gzip -dc hg38.ncbiRefSeq.gtf.gz | head | sed -E 's|(transcript_).{2}|\1number|g'
```
Try to write a `sed` command to replace *ncbiRefSeq* with *transcript_id* .
<details><summary>Solution</summary>
<p>
```sh
gzip -dc hg38.ncbiRefSeq.gtf.gz | head | sed -E 's|ncbiRefSeq(.*)(transcript_id "([A-Z_0-9.]*))|\3\1\2|g'
```
</p>
</details>
Regexp can be very complex, see for example [a regex to validate an email on Stack Overflow](https://stackoverflow.com/questions/201323/how-to-validate-an-email-address-using-a-regular-expression/201378#201378). When you start, you can always ask a more experienced user for a given regexp (just give them the kind of text you want to match and not match). You can test your regexps easily on the [regex101 website](https://regex101.com/).
## Sorting
GTF files should be sorted by chromosome, starting position and end position, but you can change that with the command `sort`. To select the column to sort on you can use the option `-k n,n` where `n` is the column number.
You need to specify where sort keys start *and where they end*, otherwise (as in when you use `-k 3` instead of `-k 3,3`) they end at the end of the line.
For example, the following command sorts on the 4th column and then on the 5th when the values of the 4th column are equal.
```sh
gzip -dc hg38.ncbiRefSeq.gtf.gz | head -n 10000 | sort -k 4,4 -k 5,5 | head
```
> Sorting operations are complex and can take a long time
You can add more options to the sorting of each column, for example `r` for reverse order, `d` for dictionary order or `n` for numeric order.
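The difference between `d` and `n` is easy to see on three made-up lines:

```sh
printf '10\n9\n2\n' | sort -k 1,1d   # dictionary order: 10, 2, 9
printf '10\n9\n2\n' | sort -k 1,1n   # numeric order:    2, 9, 10
```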
What will the following command do ?
```sh
gzip -dc hg38.ncbiRefSeq.gtf.gz | head -n 10000 | sort -k 3,3d -k 4,4n | head
```
Use the `-u` option to count the number of different annotation types based on the 3rd column
<details><summary>Solution</summary>
<p>
```sh
gzip -dc hg38.ncbiRefSeq.gtf.gz | head -n 10000 | sort -k 3,3d -u | wc -l
```
</p>
</details>
You can check if a file is already sorted with the `-c` switch. Check if the gtf file is sorted by chromosome, start and end position.
<details><summary>Solution</summary>
<p>
```sh
gzip -dc hg38.ncbiRefSeq.gtf.gz | head -n 10000 | sort -k 1,1 -k 4,4n -k 5,5n -c
```
</p>
</details>
## Field extractor
Sometimes, rather than using a complex regexp, we want to extract a particular column from a file. You can use the command `cut` to do that.
The following command extracts the 3rd column of the annotation
```sh
gzip -dc hg38.ncbiRefSeq.gtf.gz | head | cut -f 3
```
You can change the field separator with the option `-d`, set it to `";"` to extract the *transcript_id* and *gene_name* from the information column.
<details><summary>Solution</summary>
<p>
```sh
gzip -dc hg38.ncbiRefSeq.gtf.gz | head | cut -d ";" -f 2,5
```
</p>
</details>
## Concatenation
There are different tools to concatenate files from the command line: `cat` for vertical concatenation and `paste` for horizontal concatenation.
```sh
cat .bashrc .bashrc | wc -l
```
What will be the results of the following command ?
```sh
gzip -dc hg38.ncbiRefSeq.gtf.gz | head | paste - -
```
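You can check how `paste - -` consumes its input on a stream you control:

```sh
# paste reads two lines at a time, one per `-`, and joins them with a tab
seq 6 | paste - -
```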
## Text editor
You often have access to different text editors from the command line; two of the most popular ones are `vim` and `nano`.
`nano` is more friendly to use than `vim` but also very limited.
To open a text file you can type `editor file_path`.
In `nano` everything is written at the bottom, so you only have to remember that `^` is the symbol for the `Ctrl` key.
Open your `.bashrc` file and delete any comment line (starting with the `#` character).
`vim` is a child of the project `vi` (which should also be available on your system), to which it brings more functionality. The workings of `vim` can be a little strange at first, but you have to understand that on a US keyboard the distance that your fingers have to travel while using `vim` is minimal.
You have 3 modes in `vim`:
- The **normal** mode, where you can navigate the file and enter command with the `:` key. You can come back to this mode by pressing `Esc`
- The **insert** mode, where you can write things. You enter this mode with the `i` key or any other key insertion key (for example `a` to insert after the cursor or `A` to insert at the end of the line)
- The **visual** mode where you can select text for copy/paste action. You can enter this mode with the `v` key
If you want to learn more about `vim`, you can start with the <https://vim-adventures.com/> website. Once you master `vim` everything is faster but you will have to practice a lot.
> We have used the following commands:
>
> - `head` / `tail` to display head or tail of a file
> - `wget` to download files
> - `gzip` to compress and extract `.gz` files
> - `grep` to search text files
> - `wc` to count things
> - `sed` to search and replace string of text
> - `sort` to sort files on specific field
> - `cut` to extract a specific field
> - `cat` / `paste` for concatenation
> - `nano` / `vim` for text edition
In the next session, we are going to apply the logic of pipes and text manipulation to [batch processing.](./9_batch_processing.html)
---
title: Batch processing
author: "Laurent Modolo"
---
```{r include = FALSE}
if (!require("fontawesome")) {
install.packages("fontawesome")
}
library(fontawesome)
knitr::opts_chunk$set(echo = TRUE)
knitr::opts_chunk$set(comment = NA)
```
<a rel="license" href="http://creativecommons.org/licenses/by-sa/4.0/">
<img alt="Creative Commons License" style="border-width:0" src="https://i.creativecommons.org/l/by-sa/4.0/88x31.png" />
</a>
Objective: Learn basics of batch processing in GNU/Linux
In the previous section, we saw how to handle streams and text. We can use this knowledge to generate lists of commands instead of text: this is called batch processing.
In everyday life, you may want to run commands sequentially without using pipes.
To run `CMD1` and then run `CMD2` you can use the `;` operator
```
CMD1 ; CMD2
```
To run `CMD2` only if `CMD1` didn’t throw an error, you can use the `&&` operator, which is safer than the `;` operator.
```sh
CMD1 && CMD2
```
You can also use the `||` operator to manage errors and run `CMD2` only if `CMD1` failed.
```sh
CMD1 || CMD2
```
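The three operators can be checked with `true` and `false`, two commands whose only job is to succeed and to fail:

```sh
echo "first" ; echo "second"    # `;` runs both, regardless of status
true  && echo "after success"   # `&&` runs the 2nd only on success
false && echo "never printed"
false || echo "after failure"   # `||` runs the 2nd only on failure
```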
## Executing list of commands
The easiest option to execute lists of commands is to use `xargs`. `xargs` reads arguments from **stdin** and uses them as arguments for a command. In UNIX systems the command `echo` sends a string of characters to **stdout**. We are going to use this command to learn more about `xargs`.
```sh
echo "hello world"
```
In general, a string of characters is distinguished from a command by being placed between quotes.
The two following commands are equivalent, why ?
```sh
echo "file1 file2 file3" | xargs touch
touch file1 file2 file3
```
You can display the command executed by `xargs` with the switch `-t`.
By default, the number of arguments sent by `xargs` is defined by the system. You can change it with the option `-n N`, where `N` is the number of arguments sent. Use the options `-t` and `-n` to run the previous command as 3 separate `touch` commands.
<details><summary>Solution</summary>
<p>
```sh
echo "file1 file2 file3" | xargs -t -n 1 touch
```
</p>
</details>
Sometimes the arguments are not separated by spaces but by other characters. You can use the `-d` option to specify them. Execute `touch` one time from the following command:
```sh
echo "file1;file2;file3"
```
<details><summary>Solution</summary>
<p>
```sh
echo "file1;file2;file3" | xargs -t -d \; touch
```
</p>
</details>
To reuse the arguments sent to `xargs` you can use the `-I` option, which defines a string standing for the argument. Try the following command; what does the **man**ual say about the `-c` option of the command `cut` ?
```sh
ls -l file* | cut -c 44- | xargs -t -I % ln -s % link_%
```
Instead of `ls`, the command `xargs` is often used with the command `find`, a powerful command to search for files.
Modify the following command to make a non-hidden copy of all the files with a name starting with *.bash* in your home folder
```sh
find . -name ".bash*" | sed 's|./.||g'
```
<details><summary>Solution</summary>
<p>
```sh
find . -name ".bash*" | sed 's|./.||g' | xargs -t -I % cp .% %
```
</p>
</details>
You can try to remove all the files in the `/tmp` folder with the following command:
```sh
find /tmp/ -type f | xargs -t rm
```
Modify this command to remove every folder in the `/tmp` folder.
<details><summary>Solution</summary>
<p>
```sh
find /tmp/ -type d | xargs -t rm -R
```
</p>
</details>
## Writing `awk` commands
`xargs` is a simple solution for writing batch commands, but if you want to write more complex commands you are going to need to learn `awk`. `awk` is a programming language by itself, but you don’t need to know everything about `awk` to use it.
You can think of `awk` as a `xargs -I $N` command where `$1` corresponds to the first column, `$2` to the second column, etc.
There are also some predefined variables that you can use:
- `$0` corresponds to all the columns.
- `FS` the field separator used
- `NF` the number of fields separated by `FS`
- `NR` the number of records already read
An `awk` program is a chain of commands of the form `motif { action }`
- the `motif` defines where the `action` is executed
- the `action` is what you want to do
The `motif` can be
- a regexp
- the keyword `BEGIN` or `END` (before reading the first line, and after reading the last line)
- a comparison like `<`, `<=`, `==`, `>=`, `>` or `!=`
- a combination of the three separated by `&&` (AND), `||` (OR) and `!` (negation)
- a range of lines `motif_1,motif_2`
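For instance, a range of motifs prints every record from the first matching line to the next one (input faked with `seq`):

```sh
# print only the records between the line matching 3 and the line matching 5
seq 8 | awk '/^3$/,/^5$/ { print $0 }'
# → 3
# → 4
# → 5
```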
With `awk` you can
Count the number of lines in a file
```sh
awk '{ print NR " : " $0 }' file
```
Modify this command to only display the total number of lines with `awk` (like `wc -l`)
<details><summary>Solution</summary>
<p>
```sh
awk 'END{ print NR }' file
```
</p>
</details>
Convert a tabulated sequences file into fasta format
```sh
awk -vOFS='' '{print ">",$1,"\n",$2,"\n";}' two_column_sample_tab.txt > sample1.fa
```
Modify this command to only get the list of sequence names from the tabulated file
<details><summary>Solution</summary>
<p>
```sh
awk -vOFS='' '{print $1 "\n";}' two_column_sample_tab.txt > seq_name.txt
```
</p>
</details>
Convert a multiline fasta file into a single line fasta file
```sh
awk '!/^>/ { printf "%s", $0; n = "\n" } /^>/ { print n $0; n = "" } END { printf "%s", n }' sample.fa > sample1_singleline.fa
```
Convert fasta sequences to uppercase
```sh
awk '/^>/ {print($0)}; /^[^>]/ {print(toupper($0))}' file.fasta > file_upper.fasta
```
Modify this command to only get the sequences of a fasta file in lowercase
<details><summary>Solution</summary>
<p>
```sh
awk '/^[^>]/ {print(tolower($0))}' file.fasta > sequences_lower.txt
```
</p>
</details>
Return a list of sequence_id sequence_length from a fasta file
```sh
awk 'BEGIN {OFS = "\n"}; /^>/ {print(substr(sequence_id, 2)" "sequence_length); sequence_length = 0; sequence_id = $0}; /^[^>]/ {sequence_length += length($0)}; END {print(substr(sequence_id, 2)" "sequence_length)}' file.fasta
```
Count the number of bases in a fastq.gz file
```sh
gzip -dc file.fastq.gz | awk 'NR%4 == 2 {basenumber += length($0)} END {print basenumber}'
```
Keep only the reads with at least 20 bp from a fastq
```sh
awk 'BEGIN {OFS = "\n"} {header = $0 ; getline seq ; getline qheader ; getline qseq ; if (length(seq) >= 20){print header, seq, qheader, qseq}}' < input.fastq > output.fastq
```
## Writing a bash script
When you start writing complicated commands, you may want to save them to reuse later.
You can find everything that you type in your `bash` in the `~/.bash_history` file, but working with this file can be tedious as it also contains all the commands that you mistyped. A good solution, for reproducibility, is to write `bash` scripts. A bash script is simply a text file that contains a sequence of `bash` commands.
As you use `bash` in your terminal, you can execute a `bash` script with the following command:
```bash
source myscript.sh
```
It’s usual to write the `.sh` extension for `shell` scripts.
Write a bash script named `download_hg38.sh` that downloads the [hg38.ncbiRefSeq.gtf.gz](http://hgdownload.soe.ucsc.edu/goldenPath/hg38/bigZips/genes/hg38.ncbiRefSeq.gtf.gz) file, extracts it, and then says that it's done.
The `\` character, like in regexps, cancels the meaning of what follows; you can use it to split a one-liner over many lines while chaining the commands with the `&&` operator.
<details><summary>Solution</summary>
<p>
```sh
wget http://hgdownload.soe.ucsc.edu/goldenPath/hg38/bigZips/genes/hg38.ncbiRefSeq.gtf.gz && \
gzip -d hg38.ncbiRefSeq.gtf.gz && \
echo "download and extraction complete"
```
</p>
</details>
### shebang
In your first bash script, the only thing saying that your script is a bash script is its extension. But most of the time UNIX systems don’t care about file extensions: a text file is a text file.
To tell the system that your text file is a bash script you need to add a **shebang**. A **shebang** is a special first line that starts with a `#!` followed by the path of the interpreter for your script.
For example, for a bash script in a system where `bash` is installed in `/bin/bash` the **shebang** is:
```bash
#!/bin/bash
```
When you are not sure `which` is the path of the tool available to interpret your script, you can use the following shebang:
```bash
#!/usr/bin/env bash
```
You can add a **shebang** to your script and give it the e**x**ecutable right.
<details><summary>Solution</summary>
<p>
```sh
chmod u+x download_hg38.sh
```
</p>
</details>
Now you can execute your script with the command:
```bash
./download_hg38.sh
```
Congratulations, you wrote your first program !
### PATH
Where did `/usr/bin/env` find the information about your bash ? Why did we have to write a `./` before our script if we are in the same folder ?
This is all linked to the **PATH** bash variable. Like many programming languages, `bash` has what we call *variables*. *Variables* are named storage for temporary information. You can print a list of all your environment variables (variables loaded in your `bash` memory) with the command `printenv`.
To create a new variable you can use the following syntax:
```sh
VAR_NAME="text"
VAR_NAME2=2
```
Create an `IDENTITY` variable with your first and last names.
<details><summary>Solution</summary>
<p>
```sh
IDENTITY="First name Last Name"
```
</p>
</details>
It’s good practice to write your `bash` variables in uppercase with `_` in place of spaces.
You can access the value of an existing `bash` variable with the `$VAR_NAME` syntax.
To display the value of your `IDENTITY` variable with `echo` you can write:
```sh
echo $IDENTITY
```
When you want to mix a variable value and text you can use the two following syntaxes:
```sh
echo "my name is "$IDENTITY
echo "my name is ${IDENTITY}"
```
Going back to `printenv`, you can see a **PWD** variable that stores your current path, a **SHELL** variable that stores your current shell, and a **PATH** variable that stores a lot of file paths separated by `:`.
The **PATH** variable contains every folder where the system looks for executable programs. Executable programs can be binary files or text files with a **shebang**.
Display the content of `PATH` with `echo`
<details><summary>Solution</summary>
<p>
```sh
echo $PATH
```
</p>
</details>
You can create a `scripts` folder and move your `download_hg38.sh` script into it. Then we can modify the `PATH` variable to include the `scripts` folder.
> Don’t erase your `PATH` variable !
<details><summary>Solution</summary>
<p>
```sh
mkdir ~/scripts
mv download_hg38.sh ~/scripts/
PATH=$PATH:~/scripts/
```
</p>
</details>
You can check the result of your command with `echo $PATH`
Try to call `download_hg38.sh` from anywhere in the file tree. Congratulations, you installed your first UNIX program !
### Arguments
You can pass arguments to your bash scripts; writing the following command:
```sh
my_script.sh arg1 arg2 arg3
```
Means that from within the script:
- `$0` will give you the name of the script (`my_script.sh`)
- `$1`, `$2`, `$3`, `$n` will give you the value of the arguments (`arg1`, `arg2`, `arg3`, `argn`)
- `$$` the process id of the current shell
- `$#` the total number of arguments passed to the script
- `$@` the value of all the arguments passed to the script
- `$?` the exit status of the last executed command
- `$!` the process id of the last background command
You can write the following `variables.sh` script in your `scripts` folder:
```sh
#!/bin/bash
echo "Name of the script: $0"
echo "Total number of arguments: $#"
echo "Values of all the arguments: $@"
```
And you can try to call it with some arguments !
> We have used the following commands:
>
> - `echo` to display text
> - `xargs` to execute a chain of commands
> - `awk` to execute complex chain of commands
> - `;` `&&` and `||` to chain commands
> - `source` to load a script
> - `shebang` to specify the language of a script
> - `PATH` to install script
In the next session, we are going to learn how to execute command on other computers with [ssh.](./10_network_and_ssh.html)
```make
all: public/index.html \
	public/github-pandoc.css \
	public/1_understanding_a_computer.html \
	public/2_using_the_ifb_cloud.html \
	public/3_first_steps_in_a_terminal.html \
	public/4_unix_file_system.html \
	public/5_users_and_rights.html \
	public/6_unix_processes.html \
	public/7_streams_and_pipes.html \
	public/8_text_manipulation.html \
	public/9_batch_processing.html \
	public/10_network_and_ssh.html \
	public/11_install_system_programs.html \
	public/12_virtualization.html

public/github-pandoc.css: github-pandoc.css
	cp github-pandoc.css public/github-pandoc.css
	cp -R img public/
	cp *.Rmd public/
	cp -R www public/

public/index.html: index.md github-pandoc.css
	pandoc -s -c github-pandoc.css index.md -o public/index.html

public/1_understanding_a_computer.html: 1_understanding_a_computer.Rmd public/github-pandoc.css
	cd public && Rscript -e 'rmarkdown::render("1_understanding_a_computer.Rmd")'

public/2_using_the_ifb_cloud.html: 2_using_the_ifb_cloud.Rmd public/github-pandoc.css
	cd public && Rscript -e 'rmarkdown::render("2_using_the_ifb_cloud.Rmd")'

public/3_first_steps_in_a_terminal.html: 3_first_steps_in_a_terminal.Rmd public/github-pandoc.css
	cd public && Rscript -e 'rmarkdown::render("3_first_steps_in_a_terminal.Rmd")'

public/4_unix_file_system.html: 4_unix_file_system.Rmd public/github-pandoc.css
	cd public && Rscript -e 'rmarkdown::render("4_unix_file_system.Rmd")'

public/5_users_and_rights.html: 5_users_and_rights.Rmd public/github-pandoc.css
	cd public && Rscript -e 'rmarkdown::render("5_users_and_rights.Rmd")'

public/6_unix_processes.html: 6_unix_processes.Rmd public/github-pandoc.css
	cd public && Rscript -e 'rmarkdown::render("6_unix_processes.Rmd")'

public/7_streams_and_pipes.html: 7_streams_and_pipes.Rmd public/github-pandoc.css
	cd public && Rscript -e 'rmarkdown::render("7_streams_and_pipes.Rmd")'

public/8_text_manipulation.html: 8_text_manipulation.Rmd public/github-pandoc.css
	cd public && Rscript -e 'rmarkdown::render("8_text_manipulation.Rmd")'

public/9_batch_processing.html: 9_batch_processing.Rmd public/github-pandoc.css
	cd public && Rscript -e 'rmarkdown::render("9_batch_processing.Rmd")'

public/10_network_and_ssh.html: 10_network_and_ssh.Rmd public/github-pandoc.css
	cd public && Rscript -e 'rmarkdown::render("10_network_and_ssh.Rmd")'

public/11_install_system_programs.html: 11_install_system_programs.Rmd public/github-pandoc.css
	cd public && Rscript -e 'rmarkdown::render("11_install_system_programs.Rmd")'

public/12_virtualization.html: 12_virtualization.Rmd public/github-pandoc.css
	cd public && Rscript -e 'rmarkdown::render("12_virtualization.Rmd")'
```
[![cc_by_sa](./img/cc_by_sa.png)](http://creativecommons.org/licenses/by-sa/4.0/)
## Understanding a computer
## First steps with a terminal
## What is a process
## The network
## Files manipulation
## Processes manipulation

All the materials are accessible at the following url:

[https://can.gitbiopages.ens-lyon.fr/unix-command-line/](https://can.gitbiopages.ens-lyon.fr/unix-command-line/)

You can join us on the dedicated matrix channel (ask [laurent.modolo@ens-lyon.fr](mailto:laurent.modolo@ens-lyon.fr))
```yaml
project:
  type: book

book:
  title: "UNIX command line"
  author:
    - "Laurent Modolo"
  date: "2023-10-09"
  chapters:
    - index.md
    - 1_understanding_a_computer.Rmd
    - 2_using_the_ifb_cloud.Rmd
    - 3_first_steps_in_a_terminal.Rmd
    - 4_unix_file_system.Rmd
    - 5_users_and_rights.Rmd
    - 6_unix_processes.Rmd
    - 7_streams_and_pipes.Rmd
    - 8_text_manipulation.Rmd
    - 9_batch_processing.Rmd
    - 10_network_and_ssh.Rmd
    - 11_install_system_programs.Rmd
    - 12_virtualization.Rmd
  body-footer: "License: Creative Commons [CC-BY-SA-4.0](http://creativecommons.org/licenses/by-sa/4.0/).<br>Made with [Quarto](https://quarto.org/)."
  navbar:
    search: true
    right:
      - icon: git
        href: https://gitbio.ens-lyon.fr/can/unix-command-line
        text: Sources

# bibliography: references.bib

format:
  html:
    theme:
      light: flatly
      dark: darkly

execute:
  cache: true
```
```sh
#!/bin/bash
# This script is executed on the virtual machine during the *Deployment* phase.
# It is used to apply parameters specific to the current deployment.
# It is the second script executed during a cloud deployment in IFB-Biosphere,
# after the *Installation* phase.

USER_LOGIN=etudiant
USER_PASSWORD=$( openssl rand -hex 12 )

useradd -m -s /bin/bash -g users -G adm,docker,dialout,cdrom,floppy,audio,dip,video,plugdev,netdev ${USER_LOGIN}
cp /etc/skel/.* /home/${USER_LOGIN}/
passwd ${USER_LOGIN} << EOF
${USER_PASSWORD}
${USER_PASSWORD}
EOF

HOST_NAME=$( ss-get --timeout=3 hostname )
HTTP_ENDP="https://$HOST_NAME"
ss-set url.service "${HTTP_ENDP}"
ss-set ss:url.service "[HTTPS]$HTTP_ENDP,[LOGIN]$USER_LOGIN,[PASSWORD]$USER_PASSWORD"
```
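The `passwd … << EOF` block above uses a here-document to answer the two password prompts on standard input. A minimal, unprivileged sketch of the same pattern (`cat` stands in for `passwd` here, so no root access is needed):

```sh
# 12 random bytes rendered as 24 hexadecimal characters
USER_PASSWORD=$( openssl rand -hex 12 )

# The here-document feeds both lines to the command's standard input,
# exactly as the deployment script feeds the password and its
# confirmation to passwd.
cat << EOF
${USER_PASSWORD}
${USER_PASSWORD}
EOF
```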