---
title: "About evaluation metrics"
output: rmarkdown::html_vignette
vignette: >
  %\VignetteIndexEntry{About evaluation metrics}
  %\VignetteEngine{knitr::rmarkdown}
  %\VignetteEncoding{UTF-8}
---
```{r, include = FALSE}
knitr::opts_chunk$set(
  collapse = TRUE,
  comment = "#>"
)
```
## Root Mean Square Error (RMSE)
RMSE measures the typical magnitude of the error between predicted and actual values: it is the square root of the mean of the squared deviations. It is calculated as follows:
\[ \text{RMSE} = \sqrt{\frac{1}{n}\sum_{i=1}^{n}(y_i - \hat{y}_i)^2} \]
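A minimal R sketch with made-up observed and predicted vectors (`y` and `y_hat` are placeholder names, not objects from this package):
```{r}
y     <- c(3.0, 5.5, 2.1, 7.8)  # observed values (toy data)
y_hat <- c(2.8, 6.0, 2.5, 7.1)  # predicted values (toy data)
sqrt(mean((y - y_hat)^2))       # RMSE
```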
## R-squared (R²)
R² measures the proportion of the variance in the dependent variable that is predictable from the independent variable(s). It is calculated as follows:
\[ R^2 = 1 - \frac{\sum_{i=1}^{n}(y_i - \hat{y}_i)^2}{\sum_{i=1}^{n}(y_i - \bar{y})^2} \]
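Using the same kind of toy vectors, R² can be sketched directly from the formula:
```{r}
y     <- c(3.0, 5.5, 2.1, 7.8)  # observed values (toy data)
y_hat <- c(2.8, 6.0, 2.5, 7.1)  # predicted values (toy data)
1 - sum((y - y_hat)^2) / sum((y - mean(y))^2)  # R-squared
```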
## Precision-Recall Area Under Curve (PR-AUC)
PR-AUC measures the area under the precision-recall curve. It is often used in binary classification tasks where the class distribution is imbalanced.
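Below is a hand-rolled sketch that approximates PR-AUC by summing precision over increments in recall (an average-precision-style approximation). The `pr_auc()` helper and the `labels`/`scores` vectors are illustrative and not part of this package.
```{r}
pr_auc <- function(labels, scores) {
  ord    <- order(scores, decreasing = TRUE)  # rank predictions by score
  labels <- labels[ord]
  tp <- cumsum(labels)       # true positives accumulated down the ranking
  fp <- cumsum(1 - labels)   # false positives accumulated down the ranking
  precision <- tp / (tp + fp)
  recall    <- tp / sum(labels)
  sum(diff(c(0, recall)) * precision)  # rectangular approximation of the area
}

labels <- c(1, 0, 1, 1, 0, 0, 1, 0)                   # toy binary labels
scores <- c(0.9, 0.8, 0.7, 0.6, 0.55, 0.5, 0.4, 0.3)  # toy predicted scores
pr_auc(labels, scores)
```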
## Performance Ratio
The performance ratio is calculated as the ratio of the model's PR-AUC to the PR-AUC of a random classifier (`pr_randm_AUC`).
It provides a measure of how well the model performs compared to a random baseline.
\[ \text{Performance Ratio} = \frac{\text{PR-AUC}}{\text{PR-AUC}_{\text{random}}} \]
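As a sketch, reusing the `pr_auc()` helper and toy data from the chunk above: a common random baseline for PR-AUC is the prevalence of the positive class, although the exact baseline behind `pr_randm_AUC` may be computed differently.
```{r}
pr_random_auc <- mean(labels)           # positive-class prevalence as the random baseline
pr_auc(labels, scores) / pr_random_auc  # performance ratio
```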
## Receiver Operating Characteristic Area Under Curve (ROC AUC)
ROC AUC measures the area under the receiver operating characteristic curve. It evaluates the classifier's ability to distinguish between classes.
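A compact sketch using the rank-sum (Mann-Whitney) formulation of ROC AUC, reusing the toy `labels` and `scores` from the earlier chunk:
```{r}
roc_auc <- function(labels, scores) {
  r     <- rank(scores)  # average ranks handle tied scores
  n_pos <- sum(labels == 1)
  n_neg <- sum(labels == 0)
  (sum(r[labels == 1]) - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)
}
roc_auc(labels, scores)
```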
## Confusion Matrix
The confusion matrix tabulates predicted classes against actual classes. Its four cells, true negatives (TN), false positives (FP), false negatives (FN), and true positives (TP), are the building blocks of the metrics that follow.
| | Predicted Negative | Predicted Positive |
|:---------------:|:------------------:|:------------------:|
| Actual Negative | TN | FP |
| Actual Positive | FN | TP |
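A base-R sketch that builds this table and the four counts used by the metrics below; the `actual` and `predicted` vectors are made-up examples.
```{r}
actual    <- c(0, 0, 1, 1, 1, 0, 1, 0)  # toy ground-truth labels
predicted <- c(0, 1, 1, 1, 0, 0, 1, 0)  # toy predicted labels
TP <- sum(predicted == 1 & actual == 1)
TN <- sum(predicted == 0 & actual == 0)
FP <- sum(predicted == 1 & actual == 0)
FN <- sum(predicted == 0 & actual == 1)
table(Actual = actual, Predicted = predicted)
```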
## Accuracy
Accuracy measures the proportion of correct predictions out of the total predictions made by the model. It is calculated as follows:
\[
\text{Accuracy} = \frac{\text{TP} + \text{TN}}{\text{TP} + \text{TN} + \text{FP} + \text{FN}}
\]
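Using the counts from the confusion-matrix chunk above:
```{r}
(TP + TN) / (TP + TN + FP + FN)  # accuracy
```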
## Specificity
Specificity measures the proportion of true negatives out of all actual negatives. It is calculated as follows:
\[
\text{Specificity} = \frac{\text{TN}}{\text{TN} + \text{FP}}
\]
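With the same counts:
```{r}
TN / (TN + FP)  # specificity
```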
## Recall (Sensitivity)
Recall, also known as sensitivity, measures the proportion of true positives out of all actual positives. It indicates the model's ability to correctly identify positive instances.
\[ \text{Recall} = \text{Sensitivity} = \frac{\text{TP}}{\text{TP} + \text{FN}} \]
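Again from the confusion-matrix counts:
```{r}
TP / (TP + FN)  # recall / sensitivity
```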
## Precision
Precision measures the proportion of true positives out of all predicted positives. It indicates the model's ability to avoid false positives.
\[ \text{Precision} = \frac{\text{TP}}{\text{TP} + \text{FP}} \]
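And likewise:
```{r}
TP / (TP + FP)  # precision
```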