diff --git a/README.md b/README.md
index f89e5ef4a8845f59039504777616fe1f0da204fc..c2215e3edaa000413a42827642443444549ff97c 100644
--- a/README.md
+++ b/README.md
@@ -109,139 +109,7 @@ In the realm of RNAseq analysis, various key experimental parameters play a cruc
 ## Getting started
 
 
-[Getting started](articles/htrfit.html)
-[test_link2](articles/HTRfit_simulation_pipeline.html)
-
-### Init a design and simulate RNAseq data
-
-```r
-library('HTRfit')
-## -- init a design 
-input_var_list <- init_variable( name = "varA", mu = 0, sd = 0.29, level = 2000) %>%
-                  init_variable( name = "varB", mu = 0.27, sd = 0.6, level = 2) %>%
-                    add_interaction( between_var = c("varA", "varB"), mu = 0.44, sd = 0.89)
-## -- simulate RNAseq data 
-mock_data <- mock_rnaseq(input_var_list, 
-                         n_genes = 6000,
-                         min_replicates  = 4,
-                         max_replicates = 4 )
-```
-
-
-
-### Fit your model
-
-```r
-## -- prepare data & fit a model with mixed effect
-data2fit = prepareData2fit(countMatrix = mock_data$counts, 
-                           metadata =  mock_data$metadata, 
-                           normalization = F,   
-                           response_name = "kij")
-l_tmb <- fitModelParallel(formula = kij ~ varB + (varB | varA),
-                          data = data2fit, 
-                          group_by = "geneID",
-                          family = glmmTMB::nbinom2(link = "log"), 
-                          n.cores = 8)
-```
-
-The `fitModelParallel()` function in **HTRfit** provides a powerful way to fit models independently for each gene. This allows for efficient parallelization of the modeling process by specifying the `n.cores` option. However, it's essential to note that as more cores are utilized, there is a corresponding increase in the required RAM. This is because the data necessary for fitting the model needs to be loaded into memory. Our simulations have demonstrated significant time savings when employing more cores. For instance, using 25 cores was nearly three times faster for processing 6,000 genes and 2,000 experimental conditions (2000 levels in varA, 2 levels in varB - 8000 samples). However, using 50 cores yielded minimal time savings but had a noticeable impact on RAM consumption. Therefore, users must carefully balance computation speed and memory usage when selecting the number of cores. 
-
-
-### Diagnostic metrics
-
-The `diagnostic_plot()` function allows to plot a diagnostic plot of AIC (Akaike Information Criterion), BIC (Bayesian Information Criterion), logLik (log-likelihood), deviance, df.resid (residual degrees of freedom), and dispersion. These metrics provide insights into how well the model fits the data and help in comparing different models. By examining these metrics, users can quickly identify any anomalies or potential issues in the fitting process. 
-
-```r
-## -- plot all metrics
-p <- diagnostic_plot(list_tmb = l_tmb)
-```
-
-<div id="bg"  align="center">
-  <img src="./man/figures/diagnostic_plot.png" width="600" height="360">
-</div> 
-
-The diagnostic metrics show that the fit is not as good for all genes. 
-
-### Extracts a tidy result table from a list tmb object
-
-The tidy_results function extracts a dataframe containing estimates of ln(fold changes), standard errors, test statistics, p-values, and adjusted p-values for fixed effects. Additionally, it provides access to correlation terms and standard deviations for random effects, offering a detailed view of HTRfit modeling results.
-
-```r
-## -- get tidy results
-tidy_res <- tidy_results(l_tmb, coeff_threshold = 0.1, alternative_hypothesis = "greater")
-```
-
-### Evaluation 
-
-```r
-## -- evaluation
-resSimu <- evaluation_report(list_tmb = l_tmb,
-                            mock_obj = mock_data,
-                            coeff_threshold = 0.27, 
-                            alt_hypothesis = "greater")
-
-```
-
-The identity plot, generated by the `evaluation_report()` function, provides a visual means to compare the effects used in the simulation (actual effects) with those inferred by the model. This graphical representation facilitates the assessment of the correspondence between the values of the simulated effects and those estimated by the model, allowing for a visual analysis of the model’s goodness of fit to the simulated data. For a quantitative evaluation of inference quality, the R-squared (R2) metric can be employed.
-
-The evaluation of model performance in distinguishing between differentially expressed and non-differentially expressed genes relies on the area under the ROC curve (AUC), a comprehensive metric offering a singular summary of the model's overall effectiveness. A higher AUC is indicative of superior model performance. It is noteworthy that we not only calculate the AUC for the ROC curve but also extend this assessment to the Precision-Recall (PR) curve. In the case of the ROC curve, this AUC should be compared to the value of 0.5, representing a random classifier. On the other hand, for the PR curve, we compute the pr_AUC_random, serving as a baseline for comparison. This dual evaluation approach ensures a thorough understanding of the model's discrimination capabilities under different scenarios, providing valuable insights into its robustness and reliability.
-
-In addition to evaluating model performance through the AUC for both ROC and PR curves, we provide access to key classification metrics, including Accuracy, Precision, Recall (or Sensitivity), and Specificity. These metrics offer a comprehensive view of the model's classification capabilities.
-
-All metrics are computed for each parameter (excluding the intercept when skip_eval_intercept = TRUE), providing detailed insights into individual parameter contributions. Furthermore, an aggregated assessment is performed, considering all parameters (except the intercept by default), offering a perspective on the model's overall classification effectiveness.
-
-### Comparing performances of HTRfit & DESeq2
-
-**HTRfit** offers a wrapper for **DESeq2** outputs. This functionality allows users to seamlessly integrate the results obtained from **DESeq2** into the **HTRfit** analysis pipeline. By doing so, you can readily compare the performance of **HTRfit** with **DESeq2** on your RNAseq data. This comparative analysis aids in determining which tool performs better for your specific research goals and dataset.
-
-
-
-```r
-## -- init a design 
-input_var_list <- init_variable( name = "varA", mu = 0, sd = 0.29, level = 2) %>%
-                  init_variable( name = "varB", mu = 0.27, sd = 0.6, level = 2) %>%
-                    add_interaction( between_var = c("varA", "varB"), mu = 0.44, sd = 0.89)
-N_GENES = 1000
-MIN_REPLICATES = 4
-MAX_REPLICATES = 4
-BASAL_EXPR = 2
-SEQ_DEPTH = 1e+7
-
-## -- simulate RNAseq data 
-mock_data <- mock_rnaseq(input_var_list, 
-                         n_genes = N_GENES,
-                         min_replicates  = MIN_REPLICATES,
-                         max_replicates = MAX_REPLICATES,
-                         basal_expression = BASAL_EXPR,
-                         sequencing_depth = SEQ_DEPTH )
-
-## -- prepare data & fit a model with HTRfit
-data2fit = prepareData2fit(countMatrix = mock_data$counts, 
-                           metadata =  mock_data$metadata, 
-                           normalization = F)
-l_tmb <- fitModelParallel(formula = kij ~ varA + varB  + varA:varB,
-                          data = data2fit, 
-                          group_by = "geneID",
-                          family = glmmTMB::nbinom2(link = "log"), 
-                          n.cores = 8)
-
-
-## -- DESeq2
-library(DESeq2)
-dds <- DESeq2::DESeqDataSetFromMatrix(
-          countData = round(mock_data$counts),
-          colData =  mock_data$metadata,
-          design = ~ varA + varB  + varA:varB )
-dds <- DESeq2::DESeq(dds, quiet = TRUE)
-
-
-## -- get simulation/fit evaluation
-resSimu <- evaluation_report(list_tmb = l_tmb,
-                            dds = dds,
-                            mock_obj = mock_data, 
-                            coeff_threshold = 0.4, 
-                            alt_hypothesis = "greaterAbs")
-```
-
-
-*** ADD plot ***
\ No newline at end of file
+* [Theory behind HTRfit](https://htrfit-lbmc-yvertlab-vortex-plasticity-mutation-477701eb488dfd9.gitbiopages.ens-lyon.fr/articles/theoryBehindHtrfit.html)
+* [Simulation tutorial](https://htrfit-lbmc-yvertlab-vortex-plasticity-mutation-477701eb488dfd9.gitbiopages.ens-lyon.fr/articles/tutorial.html)
+* [Benchmarking HTRfit/DESeq2](https://htrfit-lbmc-yvertlab-vortex-plasticity-mutation-477701eb488dfd9.gitbiopages.ens-lyon.fr/articles/htrfit_vs_deseq2.html)
+* [RNAseq analysis with HTRfit](https://htrfit-lbmc-yvertlab-vortex-plasticity-mutation-477701eb488dfd9.gitbiopages.ens-lyon.fr/articles/rnaseq_analysis.html)
diff --git a/pkgdown/_pkgdown.yml b/pkgdown/_pkgdown.yml
index 364b42953243ccec0a60b43224c5ee690e10c54f..227281308326b965095270b51e8cc7e82d46164a 100644
--- a/pkgdown/_pkgdown.yml
+++ b/pkgdown/_pkgdown.yml
@@ -18,3 +18,13 @@ template:
   includes:
     in_header:
       <script defer data-domain="pkgdown.r-lib.org,all.tidyverse.org" src="https://plausible.io/js/plausible.js"></script>
+news:
+  releases:
+  - text: "Version 2.0.0"
+    href: https://gitbio.ens-lyon.fr/LBMC/yvertlab/vortex/plasticity_mutation/HTRfit/-/tags/v2.0.0
+  - text: "Version 1.0.2"
+    href: https://gitbio.ens-lyon.fr/LBMC/yvertlab/vortex/plasticity_mutation/HTRfit/-/tags/v1.0.2
+  - text: "Version 1.0.1"
+    href: https://gitbio.ens-lyon.fr/LBMC/yvertlab/vortex/plasticity_mutation/HTRfit/-/tags/v1.0.1
+  - text: "Version 1.0.0"
+    href: https://gitbio.ens-lyon.fr/LBMC/yvertlab/vortex/plasticity_mutation/HTRfit/-/tags/v1.0.0
\ No newline at end of file
diff --git a/vignettes/htrfit_vs_deseq2.Rmd b/vignettes/htrfit_vs_deseq2.Rmd
index c028226350b52c55d52c1d92836bc0e12979bfae..b5365802ba999f4904d04ff6252308c458a4c499 100644
--- a/vignettes/htrfit_vs_deseq2.Rmd
+++ b/vignettes/htrfit_vs_deseq2.Rmd
@@ -1,8 +1,8 @@
 ---
-title: "HTRfit vs DESeq2"
+title: "Benchmarking HTRfit and DESeq2"
 output: rmarkdown::html_vignette
 vignette: >
-  %\VignetteIndexEntry{HTRfit vs DESeq2}
+  %\VignetteIndexEntry{Benchmarking HTRfit and DESeq2}
   %\VignetteEngine{knitr::rmarkdown}
   %\VignetteEncoding{UTF-8}
 ---
@@ -17,7 +17,6 @@ knitr::opts_chunk$set(
 
 
 ```{r setup, warning = FALSE, message = FALSE, results='hide'}
-devtools::load_all()
 library(HTRfit)
 library(DESeq2)
 ```
diff --git a/vignettes/theoryBehindHtrfit.Rmd b/vignettes/theoryBehindHtrfit.Rmd
index cc635ecb82f28966d6c9377e674ebdd69651a164..e661cedd14955c4a75818e8e98d7a1e7a11d27e3 100644
--- a/vignettes/theoryBehindHtrfit.Rmd
+++ b/vignettes/theoryBehindHtrfit.Rmd
@@ -26,7 +26,6 @@ To navigate the selection of optimal values for these experimental parameters, w
   <img src="man/figures/htrfit_workflow.png" width="500" height="300">
 </div> 
 
-
 In this modeling framework, counts denoted as $K_{ij}$ for gene i and sample j are generated using a negative binomial distribution. The negative binomial distribution considers a fitted mean $\mu_{ij}$ and a gene-specific dispersion parameter $dispersion_i$.
 The fitted mean $\mu_{ij}$ is determined by a parameter, $q_{ij}$, which is proportionally related to the sum of all effects specified using `init_variable()` or `add_interaction()`. If basal gene expressions are provided, the $\mu_{ij}$ values are scaled accordingly using the gene-specific basal expression value ($bexpr_i$).
 Furthermore, the coefficients $\beta_i$ represent the natural logarithm fold changes for gene i across each column of the model matrix X. The dispersion parameter $dispersion_i$ plays a crucial role in defining the relationship between the variance of observed counts and their mean value. In simpler terms, it quantifies how far we expect observed counts to deviate from the mean value for each genes.