diff --git a/vignettes/08-evaluationMetrics.Rmd b/vignettes/08-evaluationMetrics.Rmd
new file mode 100644
index 0000000000000000000000000000000000000000..28891f40755b3dd9a983b51838f74b16f02cb515
--- /dev/null
+++ b/vignettes/08-evaluationMetrics.Rmd
@@ -0,0 +1,81 @@
+---
+title: "About evaluation metrics"
+output: rmarkdown::html_vignette
+vignette: >
+  %\VignetteIndexEntry{About evaluation metrics}
+  %\VignetteEngine{knitr::rmarkdown}
+  %\VignetteEncoding{UTF-8}
+---
+
+```{r, include = FALSE}
+knitr::opts_chunk$set(
+  collapse = TRUE,
+  comment = "#>"
+)
+```
+
+
+## Root Mean Square Error (RMSE)
+
+RMSE measures the typical magnitude of the prediction error. It is the square root of the mean of the squared differences between predicted and observed values, so large errors are penalized more heavily than small ones. It is calculated as follows:
+
+\[ \text{RMSE} = \sqrt{\frac{1}{n}\sum_{i=1}^{n}(y_i - \hat{y}_i)^2} \]
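+
+As a quick illustration (a minimal base-R sketch; the vectors `y` and `y_hat` are hypothetical, not objects created by this package), RMSE can be computed directly from the definition:
+
+```{r}
+# Hypothetical observed and predicted values
+y     <- c(3.2, 1.8, 4.5, 2.0, 3.9)
+y_hat <- c(3.0, 2.1, 4.2, 2.4, 3.5)
+
+# Square root of the mean squared difference
+rmse <- sqrt(mean((y - y_hat)^2))
+rmse
+```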
+
+## R-squared (R²)
+
+R² measures the proportion of the variance in the dependent variable that is predictable from the independent variable(s). It is calculated as follows:
+
+\[ R^2 = 1 - \frac{\sum_{i=1}^{n}(y_i - \hat{y}_i)^2}{\sum_{i=1}^{n}(y_i - \bar{y})^2} \]
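+
+Continuing the sketch above (same hypothetical `y` and `y_hat`), R² follows directly from the residual and total sums of squares:
+
+```{r}
+# 1 minus the ratio of residual to total sum of squares
+r2 <- 1 - sum((y - y_hat)^2) / sum((y - mean(y))^2)
+r2
+```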
+
+## Precision-Recall Area Under Curve (PR-AUC)
+
+PR-AUC measures the area under the precision-recall curve. It is particularly informative for binary classification tasks with an imbalanced class distribution, because the precision-recall curve focuses on performance on the positive class.
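+
+As a hedged sketch (hypothetical `labels` and `scores`; in practice a dedicated package such as `PRROC` would typically be used), PR-AUC can be approximated in base R with the average-precision formulation, which sums precision at each recall increment:
+
+```{r}
+# Hypothetical binary labels (1 = positive) and predicted scores
+labels <- c(1, 0, 1, 1, 0, 0, 1, 0)
+scores <- c(0.90, 0.60, 0.80, 0.40, 0.30, 0.70, 0.55, 0.20)
+
+# Sort by decreasing score and accumulate true/false positives
+ord <- order(scores, decreasing = TRUE)
+tp  <- cumsum(labels[ord] == 1)
+fp  <- cumsum(labels[ord] == 0)
+
+precision <- tp / (tp + fp)
+recall    <- tp / sum(labels == 1)
+
+# Average-precision approximation of the area under the PR curve
+pr_auc <- sum(diff(c(0, recall)) * precision)
+pr_auc
+```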
+
+## Performance Ratio
+
+The performance ratio is the ratio of the model's PR-AUC to the PR-AUC of a random classifier (`pr_randm_AUC`). For a random classifier, the expected PR-AUC equals the prevalence of the positive class, so a ratio above 1 indicates that the model outperforms the random baseline.
+
+\[ \text{Performance Ratio} = \frac{\text{PR-AUC}}{\text{PR-AUC}_{\text{random}}} \]
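+
+Continuing the PR-AUC sketch above (variable names are illustrative, not the package's own), the baseline and the ratio are one line each:
+
+```{r}
+# Expected PR-AUC of a random classifier: positive-class prevalence
+pr_random_auc <- mean(labels == 1)
+
+performance_ratio <- pr_auc / pr_random_auc
+performance_ratio
+```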
+
+
+## Receiver Operating Characteristic Area Under Curve (ROC AUC)
+
+ROC AUC measures the area under the receiver operating characteristic curve. It evaluates the classifier's ability to distinguish between classes: it equals the probability that a randomly chosen positive instance receives a higher score than a randomly chosen negative one.
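+
+As an illustration (hypothetical `labels` and `scores` again; packages such as `pROC` are the usual tool in practice), ROC AUC can be computed in base R through the rank-sum (Mann-Whitney) identity:
+
+```{r}
+labels <- c(1, 0, 1, 1, 0, 0, 1, 0)
+scores <- c(0.90, 0.60, 0.80, 0.40, 0.30, 0.70, 0.55, 0.20)
+
+n_pos <- sum(labels == 1)
+n_neg <- sum(labels == 0)
+
+# AUC = P(score of a random positive > score of a random negative)
+r <- rank(scores)
+roc_auc <- (sum(r[labels == 1]) - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)
+roc_auc
+```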
+
+## Confusion Matrix
+
+The confusion matrix cross-tabulates actual classes against predicted classes: TN = true negatives, FP = false positives, FN = false negatives, TP = true positives.
+
+|                 | Predicted Negative | Predicted Positive |
+|:---------------:|:------------------:|:------------------:|
+| Actual Negative | TN                 | FP                 |
+| Actual Positive | FN                 | TP                 |
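+
+A minimal base-R sketch (hypothetical `actual` and `predicted` vectors, e.g. obtained by thresholding predicted scores) builds this table with `table()` and extracts the four counts reused by the metrics below:
+
+```{r}
+actual    <- factor(c(0, 0, 1, 1, 1, 0, 1, 0), levels = c(0, 1))
+predicted <- factor(c(0, 1, 1, 0, 1, 0, 1, 0), levels = c(0, 1))
+
+# Rows = actual, columns = predicted
+cm <- table(Actual = actual, Predicted = predicted)
+cm
+
+# Cells used by accuracy, specificity, recall and precision
+TN <- cm["0", "0"]; FP <- cm["0", "1"]
+FN <- cm["1", "0"]; TP <- cm["1", "1"]
+```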
+
+## Accuracy
+
+Accuracy measures the proportion of correct predictions out of the total predictions made by the model. It is calculated as follows:
+
+\[
+\text{Accuracy} = \frac{\text{TP} + \text{TN}}{\text{TP} + \text{TN} + \text{FP} + \text{FN}}
+\]
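+
+Using the `TP`, `TN`, `FP`, `FN` counts from the confusion-matrix sketch above:
+
+```{r}
+accuracy <- (TP + TN) / (TP + TN + FP + FN)
+accuracy
+```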
+
+## Specificity
+
+Specificity measures the proportion of true negatives out of all actual negatives. It is calculated as follows:
+
+\[
+\text{Specificity} = \frac{\text{TN}}{\text{TN} + \text{FP}}
+\]
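+
+Again from the confusion-matrix counts above:
+
+```{r}
+specificity <- TN / (TN + FP)
+specificity
+```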
+
+## Recall (Sensitivity)
+
+Recall, also known as sensitivity, measures the proportion of true positives out of all actual positives. It indicates the model's ability to correctly identify positive instances.
+
+\[ \text{Recall} = \text{Sensitivity} = \frac{\text{TP}}{\text{TP} + \text{FN}} \]
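+
+From the same confusion-matrix counts:
+
+```{r}
+recall <- TP / (TP + FN)
+recall
+```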
+
+## Precision
+
+Precision measures the proportion of true positives out of all predicted positives. It indicates the model's ability to avoid false positives.
+
+\[ \text{Precision} = \frac{\text{TP}}{\text{TP} + \text{FP}} \]
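+
+And, completing the set from the confusion-matrix counts:
+
+```{r}
+precision <- TP / (TP + FP)
+precision
+```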
+
+
diff --git a/vignettes/09-FAQ.Rmd b/vignettes/09-FAQ.Rmd
new file mode 100644
index 0000000000000000000000000000000000000000..d1575b1c67a747645f8ee41974a2c4917efd9bad
--- /dev/null
+++ b/vignettes/09-FAQ.Rmd
@@ -0,0 +1,34 @@
+---
+title: "Frequently Asked Questions"
+output: rmarkdown::html_vignette
+vignette: >
+  %\VignetteIndexEntry{Frequently Asked Questions}
+  %\VignetteEngine{knitr::rmarkdown}
+  %\VignetteEncoding{UTF-8}
+---
+
+```{r, include = FALSE}
+knitr::opts_chunk$set(
+  collapse = TRUE,
+  comment = "#>"
+)
+```
+
+
+## Why `mu` has no effect within the simulation
+
+some element of response
+
+## Difference between `norm_distrib = 'univariate'` and `norm_distrib = 'multivariate'`
+
+some element of response
+
+
+## Why use `transform = 'x+1'` with `prepareData2Fit()`
+
+some element of response. Zero Nightmare
+
+
+## Why use `row_threshold = 10` with `prepareData2Fit()`
+
+some element of response