The first session of the R basics training will take place in the CBP TP room on:
- 14/09 at 11h for the Tuesday session
- 17/09 at 11h for the Friday session
- 20/09 at 11h for the Monday session
# Introduction
For this first session, some instructors will wait for you at the reception of the ENS Monod site 15 minutes before the start of the session to guide you to the room.
The goal of this practical is to practice combining data transformation operations with the `tidyverse`.
The objectives of this session are to:

- combine multiple operations with the pipe `%>%`
- work on subgroups of the data with `group_by()`

You will have access to a computer to do all the practicals with your ENS email account (same login and password).
There are no prerequisites for this training, as we will start from scratch.
If you want to work on your own laptop, you will need:

- a recent browser
- access to the eduroam wifi network

In case of problems, we won't provide any IT support; we will simply advise you to switch to one of the computers available in the TP room.

If you are unable to attend a session, please give us a heads-up so we don't wait for you. All the course materials will be available online, so you can catch up before the next session.
<div class="pencadre">
For this session we are going to work with a new dataset included in the `nycflights13` package.
Install this package and load it.
As usual you will also need the `tidyverse` library.
</div>
<details><summary>Solution</summary>
<p>
```{r packageloaded, include=TRUE, message=FALSE}
library("tidyverse")
library("nycflights13")
```
</p>
</details>
# Combining multiple operations with the pipe
<div class="pencadre">
Find the 10 most delayed flights using a ranking function (`min_rank()`).
</div>
<details><summary>Solution</summary>
<p>
```{r pipe_example_a, include=TRUE}
flights_md <- mutate(flights,
                     most_delay = min_rank(desc(dep_delay)))
flights_md <- filter(flights_md, most_delay <= 10)
flights_md <- arrange(flights_md, most_delay)
```
</p>
</details>
We don't want to create useless intermediate variables, so we can use the pipe operator `%>%` (keyboard shortcut `Ctrl + Shift + M` in RStudio).
Behind the scenes, `x %>% f(y)` turns into `f(x, y)`, and `x %>% f(y) %>% g(z)` turns into `g(f(x, y), z)` and so on. You can use the pipe to rewrite multiple operations in a way that you can read left-to-right, top-to-bottom.
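For example, the two calls below are equivalent (a minimal illustration):

```{r pipe_mini, eval=FALSE}
# without the pipe
filter(flights, month == 1)
# with the pipe: flights becomes the first argument of filter()
flights %>% filter(month == 1)
```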
<div class="pencadre">
Try to use the pipe operator to rewrite your previous code with only **one** variable assignment.
</div>
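<details><summary>Solution</summary>
<p>
One possible piped version of the previous code (other orderings of the verbs also work):

```{r pipe_example_b, include=TRUE}
flights_md <- flights %>%
  mutate(most_delay = min_rank(desc(dep_delay))) %>%
  filter(most_delay <= 10) %>%
  arrange(most_delay)
```
</p>
</details>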
Working with the pipe is one of the key criteria for belonging to the `tidyverse`. The only exception is `ggplot2`: it was written before the pipe was discovered and uses `+` instead of `%>%`. Unfortunately, the next iteration of `ggplot2`, `ggvis`, which does use the pipe, isn't quite ready for prime time yet.

## When not to use the pipe

The pipe is a powerful tool, but it's not the only tool at your disposal, and it doesn't solve every problem! Pipes are most useful for rewriting a fairly short linear sequence of operations. I think you should reach for another tool when:
- Your pipes are longer than (say) ten steps. In that case, create intermediate functions with meaningful names. That will make debugging easier, because you can more easily check the intermediate results, and it makes it easier to understand your code, because the variable names can help communicate intent.
- You have multiple inputs or outputs. If there isn't one primary object being transformed, but two or more objects being combined together, don't use the pipe. You can create a function that combines or splits the results.
# Grouping variables
The `summarise()` function collapses a data frame to a single row.
Check the difference between `summarise()` and `mutate()` with the following commands:
```{r summarise_vs_mutate, eval=FALSE}
flights %>%
mutate(delay = mean(dep_delay, na.rm = TRUE))
flights %>%
summarise(delay = mean(dep_delay, na.rm = TRUE))
```
Where `mutate()` adds a new column containing the mean of `dep_delay` repeated on every row (which is not very useful here), `summarise()` collapses the whole data frame to a single row containing the mean of the `dep_delay` column.
## The power of `summarise()` with `group_by()`
The `group_by()` function changes the unit of analysis from the complete dataset to individual groups.
Individual groups are defined by categorical variables or **factors**.
Then, when you use the functions you already know on a grouped data frame, they will automatically be applied *by group*.
You can use the following code to compute the average delay per month across years.
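A sketch of the kind of code meant here (assuming the usual `flights` columns):

```{r group_by_month, eval=FALSE}
flights %>%
  group_by(year, month) %>%
  summarise(delay = mean(dep_delay, na.rm = TRUE))
```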
<div class="pencadre">
Why did we `group_by()` `year` and `month` and not only `year`?
</div>
## Missing values
<div class="pencadre">
You may have wondered about the `na.rm` argument we used above. What happens if we don’t set it?
</div>
<details><summary>Solution</summary>
<p>
```{r summarise_group_by_NA, include=TRUE}
flights %>%
group_by(dest) %>%
summarise(
dist = mean(distance),
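    # arr_delay contains NAs (e.g. cancelled flights), so without na.rm this mean is NA for some destinations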
delay = mean(arr_delay)
)
```
</p>
</details>
Aggregation functions obey the usual rule of missing values: **if there’s any missing value in the input, the output will be a missing value**.
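One way around this (a sketch) is to drop missing values before aggregating with `na.rm = TRUE`:

```{r na_rm_example, eval=FALSE}
flights %>%
  group_by(dest) %>%
  summarise(
    dist = mean(distance, na.rm = TRUE),
    delay = mean(arr_delay, na.rm = TRUE)
  )
```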
## Counts
Whenever you do any aggregation, it's always a good idea to include a count (`n()`). That way you can check that you're not drawing conclusions based on very small amounts of data.
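For instance, a count column can be added to a grouped summary like this (a sketch):

```{r delays_with_counts, eval=FALSE}
flights %>%
  group_by(dest) %>%
  summarise(
    n = n(),
    delay = mean(arr_delay, na.rm = TRUE)
  )
```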
```{r summarise_group_by_count, include = T, echo=F, warning=F, message=F, fig.width=8, fig.height=3.5}
# The per-weekday summary plotted below is reconstructed here (assumption: the
# original document computed it in an earlier step with these column names).
per_wday <- flights %>%
  group_by(wday = lubridate::wday(time_hour, label = TRUE),
           date = lubridate::as_date(time_hour)) %>%
  summarise(av_delay = mean(dep_delay, na.rm = TRUE),       # mean daily departure delay
            cancel_day = mean(is.na(dep_delay)),            # proportion of cancelled flights
            .groups = "drop_last") %>%
  summarise(mean_av_delay = mean(av_delay), sd_av_delay = sd(av_delay),
            mean_cancel_day = mean(cancel_day), sd_cancel_day = sd(cancel_day))
ggplot(per_wday, mapping = aes(x = mean_av_delay, y = mean_cancel_day, color = wday)) +
  geom_point() +
  geom_errorbarh(mapping = aes(xmin = mean_av_delay - sd_av_delay,
                               xmax = mean_av_delay + sd_av_delay)) +
  geom_errorbar(mapping = aes(ymin = mean_cancel_day - sd_cancel_day,
                              ymax = mean_cancel_day + sd_cancel_day))
```
<div class="pencadre">
Now that you are aware of the interest of using `geom_errorbar`, what `hour` of the day should you fly if you want to avoid delays as much as possible?
</div>
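<details><summary>Solution</summary>
<p>
One possible way to answer (a sketch; `hour` is the scheduled departure hour in `flights`):

```{r delay_by_hour, eval=FALSE}
flights %>%
  group_by(hour) %>%
  summarise(mean_delay = mean(dep_delay, na.rm = TRUE),
            sd_delay = sd(dep_delay, na.rm = TRUE)) %>%
  ggplot(mapping = aes(x = hour, y = mean_delay)) +
  geom_point() +
  geom_errorbar(mapping = aes(ymin = mean_delay - sd_delay,
                              ymax = mean_delay + sd_delay))
```

Average delays tend to be lowest for flights scheduled early in the morning and to increase over the course of the day.
</p>
</details>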
Until now we have worked with data already formatted in a *nice way*.
In the `tidyverse`, data formatted in a *nice way* are called **tidy**.
The goal of this practical is to understand how to transform an ugly blob of information into a **tidy** dataset.

## Tidy data
...
...
There are three interrelated rules which make a dataset tidy:

- Each variable must have its own column.
- Each observation must have its own row.
- Each value must have its own cell.
Doing this kind of transformation is often called **data wrangling**, due to the feeling that we have to *wrangle* with the data to force it into a **tidy** format.
But once this step is finished, most of the subsequent analyses will be really fast to do!
<div class="pencadre">
As usual we will need the `tidyverse` library.
</div>
<details><summary>Solution</summary>
<p>
```{r load_data, eval=T, message=F}
library(tidyverse)
```
</p>
</details>
For this practical we are going to use the `table1` to `table5` datasets, which demonstrate multiple ways to lay out the same tabular data.
<div class="pencadre">
Use the help to learn more about these datasets.
</div>
<details><summary>Solution</summary>
<p>
`table1`, `table2`, `table3`, `table4a`, `table4b`, and `table5` all display the number of TB (Tuberculosis) cases documented by the World Health Organization in Afghanistan, Brazil, and China between 1999 and 2000. The data contains values associated with four variables (country, year, cases, and population), but each table organizes the values in a different layout.
The data is a subset of the data contained in the World Health Organization Global Tuberculosis Report.
</p>
</details>
# Pivoting data
## Pivot longer

```{r table4a, eval=T, message=T}
table4a # number of TB cases
```

<div class="pencadre">
Visualize the `table4a` dataset (you can use the `View()` function).

```{r table4a_view, eval=F, message=T}
View(table4a)
```

Is the data **tidy**? How would you transform this dataset to make it **tidy**?
</div>
<details><summary>Solution</summary>
<p>
We have information about 3 variables in `table4a`: `country`, `year` and the number of `cases`.
However, the values of the `year` variable are stored as column names.
We want to pivot the year columns vertically to make the table longer.
You can use the `pivot_longer()` function to make your table longer and have one observation per row and one variable per column.
For this we need to:

- specify which columns to select (all except `country`)
- give the name of the new variable (`year`)
- give the name of the variable stored in the cells of the year columns (`case`)
```{r pivot_longer, eval=T, message=T}
table4a %>%
  pivot_longer(cols = -country,
               names_to = "year",
               values_to = "case")
```
</p>
</details>
## Pivot wider
```{r table2, eval=T, message=T}
table2
```
<div class="pencadre">
Visualize the `table2` dataset.
Is the data **tidy**? How would you transform this dataset to make it **tidy**? (You can also make a guess from the name of the subsection.)
</div>
<details><summary>Solution</summary>
<p>
The column `count` stores two types of information: the `population` size of the country and the number of `cases` in the country.
You can use the `pivot_wider()` function to make your table wider and have one observation per row and one variable per column.
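A sketch of what this could look like (using the `type` and `count` columns of `table2`):

```{r pivot_wider_sketch, eval=T, message=T}
table2 %>%
  pivot_wider(names_from = type,
              values_from = count)
```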