1 Introduction

This script performs candidate detection using Gradient Forest (GF). Gradient Forest is a machine learning method introduced by Ellis, Smith, and Pitcher (2012). Fitzpatrick & Keller (2015) describe how GF can be used to:
- Analyse and map spatial variation in allele frequencies as a function of environmental gradients (outlier detection via GEA).
- Project patterns of genomic variation under different climates (genomic offset).

More specifically, GF uses Random Forest to fit an ensemble of regression trees modeling changes in allele frequencies across sites and derives monotonic, nonlinear functions of environmental predictors. The empirical, nonlinear turnover functions are constructed by distributing the R² values of all SNPs across predictor gradients in proportion to their importance and along each gradient according to the density of the raw split importance values. The split importance values for all modeled SNPs are aggregated into an overall genome-wide turnover function for each variable, weighted by predictor importance and goodness-of-fit for each SNP model (Fitzpatrick et al., 2021).
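To make the mechanics concrete, here is a minimal, self-contained sketch of a GF fit on simulated data (the names bio1, bio12, snp1 and snp2 are placeholders, not variables from this study); importance() returns the overall predictor importance and cumimp() the cumulative importance (turnover) function along one gradient:

library(gradientForest)

set.seed(1)
n_pop <- 29                                              #populations in rows
env   <- data.frame(bio1 = rnorm(n_pop), bio12 = rnorm(n_pop))   #two toy climatic predictors
snps  <- data.frame(snp1 = plogis(env$bio1 + rnorm(n_pop, sd = 0.5)),  #one climate-driven "allele frequency"
                    snp2 = runif(n_pop))                               #one random "allele frequency"

gf_toy <- gradientForest(cbind(env, snps),
                         predictor.vars = colnames(env),
                         response.vars  = colnames(snps),
                         ntree = 500, corr.threshold = 0.5)

importance(gf_toy)            #overall predictor importance
head(cumimp(gf_toy, "bio1"))  #cumulative importance (turnover function) along bio1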

GF can be viewed as both univariate and multivariate: each SNP (response) is modeled separately, with regression trees splitting on one predictor at a time, and the resulting split importances are then summarized into cumulative importance turnover functions. This yields results for multiple climate variables simultaneously for a single SNP. GF can also aggregate the same process across response variables to summarize the effect of the explanatory variables on multiple responses (Ellis et al., 2012).

(For comparison: RDA is multivariate, analysing multiple response variables simultaneously. LFMM can be univariate or multivariate depending on the configuration, and BAYPASS is univariate.)

In this script, we apply the GF algorithm for outlier detection following Fitzpatrick et al. (2021) and Archambeau et al. (2024). GF is run on genomic datasets both corrected and uncorrected for population structure. GF is fitted to each SNP separately, producing one turnover function per SNP using all predictors.

To assess the association of each locus with the climatic variables, we calculated empirical p-values. These are obtained by comparing the R² value of each locus with a null distribution of R² values: the further a locus's R² lies in the upper tail of the null distribution, the lower its p-value.
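As a minimal numeric illustration of the idea (the values below are invented, not study results), the empirical p-value of a locus can be thought of as the proportion of null R² values at least as large as the locus's R²; the function further below uses a rank-based formulation in the same spirit:

#toy example: empirical p-value of one locus against a small null set of total R^2 values
null_R2  <- c(0.02, 0.03, 0.05, 0.08, 0.10) #total R^2 of the randomly drawn "neutral" SNPs
locus_R2 <- 0.09                            #total R^2 of the focal locus

p_emp <- mean(null_R2 >= locus_R2) #proportion of null values at least as extreme: here 1/5 = 0.2
p_emp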

The first step in this process is selecting the set of SNPs used to generate the null distribution. If available, SNPs from intergenic or non-coding regions identified via genetic load are preferred. In this study, such SNPs are not available; thus, following Archambeau et al. (2024), we use a random subset of SNPs from the dataset to generate the null distribution.

To ensure robustness, we run the model 5 times for each dataset. Candidate SNPs for each dataset are defined as those that overlap across all 5 runs, as SNPs identified in a single run may vary slightly (Archambeau et al., 2024).

1.1 Formatting the genomic and the climatic data

To perform the Gradient Forest analysis following Fitzpatrick et al. (2021), the data must be formatted with populations in rows and SNPs in columns. It is crucial that the order of populations is consistent between the genomic and climate data files.

1.1.1 Climatic data

#climatic data
Past_climatic <- read.csv("C:/Users/tfrancisco/Documents/Thesis/Data/Species/Taxus_baccata/Climatic_data/new_selection/Past_new_6_Climatic_data_scale_df.csv",sep=";",dec=",")
vars <- colnames(Past_climatic[,-c(1:2)]) #names of the climatic predictors (all columns except the first two non-climatic columns)
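Before moving on, it can be useful to confirm the structure of the climatic table. A minimal check, assuming (as the GF helper further below does) that the table contains a Population column followed by the six scaled climatic variables:

#quick structural checks on the climatic data
str(Past_climatic)                                    #populations in rows, climatic variables in columns
vars                                                  #should list the six retained climatic predictors
stopifnot("Population" %in% colnames(Past_climatic))  #the GF helper compares envTab$Population with the genomic row names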

1.1.2 Genomic data

1.1.2.1 GF raw

The genomic file used is filtered with a minimum allele frequency (MAF) cut-off of 0.20, as low-frequency alleles could bias the genotype–environment association (GEA) analyses. The same MAF threshold was applied consistently across all GEA methods.
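For reference, the MAF criterion can be illustrated as follows; `geno_012` is a hypothetical 0/1/2 genotype matrix (individuals in rows, SNPs in columns), since the actual filtering was performed upstream of this script:

#illustrative sketch of the MAF >= 0.20 criterion on a hypothetical 0/1/2 genotype matrix
alt_freq   <- colMeans(geno_012, na.rm = TRUE) / 2   #frequency of the alternate allele
maf        <- pmin(alt_freq, 1 - alt_freq)           #minor allele frequency
geno_maf20 <- geno_012[, maf >= 0.20]                #keep SNPs with MAF >= 0.20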

#genomic data
load("C:/Users/tfrancisco/Documents/Thesis/Data/Species/Taxus_baccata/genetic_new/data_allelic_frequencies_29pop_adapcon_gentree_475_8616.Rdata")

genomic_matrix <- data_allelic_frequencies_29pop_adapcon_gentree_475_8616

1.1.2.2 GF corrected

Gradient Forest (GF) does not have an option to directly correct for population structure. To account for population structure in GF, we must use a genomic dataset that has already been corrected. The scaled population structure matrix from BAYPASS cannot be used, as GF requires a data frame rather than a matrix. Instead, we can use the genotypic dataset corrected for population structure via latent factors in LFMM.

We load the LFMM-corrected genomic data frame. The data are at the individual level: 475 individuals and 8,616 imputed SNPs, with the MAF filter already applied.

load("C:/Users/tfrancisco/Documents/Thesis/Data/Species/Taxus_baccata/genetic_new/GEA/Genomic_matrix_corrected_from_LFMM_T_adapcon_gentree.Rdata")
corrected_geno_data <- Genomic_matrix_corrected_from_LFMM_T_adapcon_gentree
#we need to transform the individual-level dataframe into a population-level dataframe

#add the individual IDs (row names) as a column; rownames_to_column() comes from the tibble package
corrected_geno_data_ind <- rownames_to_column(corrected_geno_data, "VCF_ID")

We need to transform the genomic data at the individual level to the population level:

#add the population info
#meta_data
meta_data_pop <- read.csv("C:/Users/tfrancisco/Documents/Thesis/Data/Species/Taxus_baccata/Populations/taxus_sample_29pop.csv",h=T,sep=";",dec=",")

meta_data_vcf <- read.csv("C:/Users/tfrancisco/Documents/Thesis/Data/Species/Taxus_baccata/Samples/samples_taxus_baccata_adapcon_gentree.csv",h=T,sep=";",dec=",")

geno_pop_info <- merge(meta_data_vcf,corrected_geno_data_ind, "VCF_ID" )

#formatting the genomic data
data_frequencies_num <- geno_pop_info[,-c(1:3)] %>% #keep only the SNP columns
  apply(2,as.numeric) /2 #genotypes are coded 0/1/2; dividing by 2 gives individual allelic frequencies of 0, 0.5, 1

#dataset with all information and genomic data in the right format
data_frequencies_num_tot <- data.frame(geno_pop_info[,c(1:3)],data_frequencies_num)

#calculation of allelic frequencies
allelic_frequencies <- data_frequencies_num_tot %>%
  dplyr::select(-c("VCF_ID","Country","Population")) %>% #remove the non-genomic columns
  group_by(data_frequencies_num_tot$Population) %>% #group by population to obtain population-level allelic frequencies
  summarise_at(vars(everything()), mean, na.rm = TRUE) %>% #mean of each SNP per population
  ungroup() %>%
  as.data.frame()

#set the population names as row names
allelic_frequencies_f <- allelic_frequencies %>% column_to_rownames('data_frequencies_num_tot$Population')
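Since GF requires the populations to be in the same order in the genomic and climatic tables (and the helper function below stops otherwise), a quick check at this stage avoids a failed run. This assumes both genomic objects carry the population names as row names:

#both should return TRUE before running GF
identical(rownames(genomic_matrix), Past_climatic$Population)
identical(rownames(allelic_frequencies_f), Past_climatic$Population)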

2 Run the models

In this step, we will run the GF algorithm five times because, due to the stochastic nature of the machine learning process, the set of candidate SNPs may vary between runs. Additionally, the random selection of SNPs to generate the null distribution can lead to different candidates being identified in each run. To ensure robustness, we will only consider SNPs that are consistently identified as candidates across all five runs. This approach ensures that the selected candidates reflect true signals rather than random variation.

Run_GF_and_select_outliers <- function(genomic_matrix, climatic_data, ntree, cores, nbr_loci_distrib, vars, x, path){
  #GF function (adapted from the scripts of Fitzpatrick et al. 2021)
  runGF <- function(alFreq, envTab, vars, ntree,
                    cores, indLoci){
  require(data.table)
  require(gradientForest)
  require(parallel)
  require(foreach)
  require(doParallel)
  
  if(identical(envTab$Population,rownames(alFreq))==F){
    stop("Populations are not in the same order in the genomic and climatic tables.")
  }
  
  # create custom object to hold output 
  gfOutObj <- setClass("gfOutObj", slots = c(alFreq="data.frame", imp="list"))

  # run in parallel if fitting SNPs individually
  if(indLoci==T & !is.na(cores)){
    # fit gf model to each SNP individually
    cl <- makeCluster(cores, setup_strategy = "sequential")
    registerDoParallel(cl)

    gfMods <- foreach(k=1:ncol(alFreq), .verbose=F, .packages=c("gradientForest"), .errorhandling = c("pass")) %dopar%{
      locus <- data.frame(alFreq[,k])
      names(locus) <- colnames(alFreq)[k]
      gf.mod <- gradientForest(data.frame(envTab[, vars], locus), 
                               predictor.vars=vars, response.vars=colnames(alFreq)[k], 
                               corr.threshold=0.5, ntree=ntree, trace=T)
    if(!is.null(gf.mod)){
        imps <- importance(gf.mod)
        imps <- imps[order(names(imps))]
        data.frame(imps, SNP = colnames(alFreq)[k])
      }
    }
    
    stopCluster(cl)
    return(gfOutObj(alFreq = data.frame(alFreq), imp = gfMods))
  } else {
    # run all SNPs at once if not fitting individually
    gf.mod <- gradientForest(data.frame(envTab[, vars], alFreq), 
                             predictor.vars=vars, response.vars=colnames(alFreq), 
                             corr.threshold=0.5, ntree=ntree, trace=T)
    
    return(gfOutObj(alFreq = data.frame(alFreq), imp = list(gf.mod))) #return the single model fitted to all SNPs
  }
}
  #run GF
  GF_test <- runGF(genomic_matrix,climatic_data,vars,ntree=ntree, 
                  cores=cores, indLoci=T)
  
  #extract the per-SNP importance tables and drop SNPs whose GF model returned an error
  Extract_correlation_loci_climate <- GF_test@imp
  loci_correlated_climate <- Filter(function(x) !inherits(x, "error"), Extract_correlation_loci_climate)

#extract the per-predictor R^2 (importance) values for each SNP and compute the total R^2
gfR2tab <- function(gfMods.list){
  i <- 1
  while(is.null(gfMods.list[[i]])){i <- i+1} #find the first non-NULL element to recover the predictor names
  tab <- do.call(rbind, gfMods.list)
  vrNm <- rep(row.names(tab)[1:nrow(gfMods.list[[i]])],
              nrow(tab)/nrow(gfMods.list[[i]]))
  tab <- data.frame(variable=vrNm, tab)
  tab <- reshape2::dcast(tab, SNP~variable, value.var="imps")
  totalR2 <- rowSums(tab[,-1])
  return(data.frame(tab, totalR2=totalR2))
}

dataset_R2_loci_climate <- gfR2tab(loci_correlated_climate)

  #randomly select nbr_loci_distrib SNPs (here 20% of all SNPs) to build the null distribution
  name_neutral_snps <- sample(dataset_R2_loci_climate$SNP, nbr_loci_distrib, replace = FALSE)

  neutral_snps_set <- dataset_R2_loci_climate %>% 
    filter(SNP %in% name_neutral_snps)

  #histogram of the null distribution of total R^2
  neutral_R2_distrib <- hist(neutral_snps_set$totalR2)

  #save the histogram
  png(filename=paste0(path, x, "neutral_R2_distrib", ".png"))
  hist(neutral_snps_set$totalR2)
  dev.off()
 
  #empirical p-values
  empirical_pvalues <- sapply(1:nrow(dataset_R2_loci_climate), function(x, dataset_R2_loci_climate, name_neutral_snps, neutral_snps_set){
    snps2Rank <- rbind(dataset_R2_loci_climate[x,], neutral_snps_set) %>% 
      distinct() %>% 
      dplyr::select(-SNP)
    #for each column (per-predictor R^2 and totalR2), convert the rank of the focal locus (first row) into an empirical p-value
    P <- apply(snps2Rank, 2, function(y){
      rankSNP <- frank(y) #frank() comes from data.table
      return(1-rankSNP[1]/length(rankSNP))
    })
    return(P)
  }, dataset_R2_loci_climate, name_neutral_snps, neutral_snps_set)
  

  # format output as data.frame
  empirical_pvalues_df <- t(empirical_pvalues)
  colnames(empirical_pvalues_df) <- paste("pval_", colnames(empirical_pvalues_df), sep="")
  empirical_pvalues_df <- data.frame(dataset_R2_loci_climate, empirical_pvalues_df)

  #visualise the p-value distribution
  pvalues_distribution <- hist(empirical_pvalues_df$pval_totalR2)

  #save the histogram
  png(filename=paste0(path, "pvalues_distribution", x, ".png"))
  hist(empirical_pvalues_df$pval_totalR2)
  dev.off()

  #return the p-values
  return(empirical_pvalues_df)
}

The saved runs can be loaded to skip the previous steps (a sketch of how the runs could be launched and saved is given below).
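The chunk that launches the runs is not shown here; the sketch below illustrates how the five runs of one dataset could be produced, saved and reloaded. The output path, ntree, the number of cores and the Run*_GF_raw object names are placeholders rather than the exact values used in this study:

#hypothetical launcher for the five GF runs on the uncorrected dataset
path_out <- "path/to/GF/outputs/"   #placeholder output directory

for(x in 1:5){
  set.seed(x)   #affects the random draw of the null SNP set in the main session
  run <- Run_GF_and_select_outliers(genomic_matrix, Past_climatic,
                                    ntree = 500, cores = 4,
                                    nbr_loci_distrib = round(0.20 * ncol(genomic_matrix)),
                                    vars = vars, x = x, path = path_out)
  assign(paste0("Run", x, "_GF_raw"), run)
  save(list = paste0("Run", x, "_GF_raw"),
       file = paste0(path_out, "Run", x, "_GF_raw.Rdata"))
}
#load(paste0(path_out, "Run1_GF_raw.Rdata"))   #reload a saved run to skip the computation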

3 Identification of candidate loci

3.1 Thresholds

Now we want to identify candidate SNPs. To do so, we applied two types of thresholds:

Rank-based thresholds:
- top 1% of SNPs
- top 5% of SNPs

Empirical p-value thresholds:
- p-value < 0.05
- p-value < 0.01

#GF_data and GF_models are character vectors holding, respectively, the names of the genomic datasets and the corresponding model labels (defined upstream)
for(i in 1:length(GF_data)){
  genomic_data <- get(GF_data[i])
  model_name <- GF_models[i]

  for(x in 1:5){
    data_name <- paste0("Run", x, "_", model_name)
    data <- get(data_name)

    #top 1% (columns 1 and 15 hold the SNP name and pval_totalR2)
    outliers_top1perc_GF <- data[,c(1,15)] %>% 
      arrange(pval_totalR2) %>%
      slice(1:(0.01*8616)) %>% #1% of the 8616 SNPs
      as.data.frame()

    assign(paste0("Run", x, "_", model_name, "_top1SNP"), outliers_top1perc_GF)

    #top 5%
    outliers_top5perc_GF <- data[,c(1,15)] %>% 
      arrange(pval_totalR2) %>%
      slice(1:(0.05*nrow(data))) %>%
      as.data.frame()

    assign(paste0("Run", x, "_", model_name, "_top5SNP"), outliers_top5perc_GF)

    #p-values < 0.05
    outliers_pv05 <- data[,c(1,15)] %>% filter(pval_totalR2 < 0.05) %>% pull(SNP)

    assign(paste0("Run", x, "_", model_name, "_outliers_pv0.05"), outliers_pv05)

    #p-values < 0.01
    outliers_pv0.01 <- data[,c(1,15)] %>% filter(pval_totalR2 < 0.01) %>% pull(SNP)

    assign(paste0("Run", x, "_", model_name, "_outliers_pv0.01"), outliers_pv0.01)
  }
}

3.2 Graphical visualisation

An important step is to compare the results across different runs for each threshold. We created Venn diagrams to visualize the number of candidate SNPs shared between runs.

for(i in 1: length(GF_data)){
  genomic_data <- get(GF_data[i])
  model_name <- GF_models[i]

#candidates 0.01
grid.newpage()
ven <- venn.diagram(
  x = list(
    get(paste0("Run1_",model_name,"_outliers_pv0.01")), 
    get(paste0("Run2_",model_name,"_outliers_pv0.01")), 
    get(paste0("Run3_",model_name,"_outliers_pv0.01")), 
    get(paste0("Run4_",model_name,"_outliers_pv0.01")), 
    get(paste0("Run5_",model_name,"_outliers_pv0.01"))
  ),
  category.names = c("RUN1", "RUN2", "RUN3", "RUN4", "RUN5"),
  filename = NULL,
  fill = c("#E69F00", "#56B4E9", "#009E73", "#D55E00","#CC79A7"),
  alpha = 0.30,
  print.mode = c("raw","percent"),  # Display raw counts + percentages
  sigdigs = 0,  # Round percentages to whole numbers
  margin = 0.1,  # Adjust margins
  cat.fontface = "italic",  # Italic category labels
  cat.fontsize = 14,  # Increase category label size
  main = paste0("Venn Diagram: pv 0.01 candidate SNPs across ",model_name," Runs"),
  main.fontface = "bold",
  main.cex = 1.5,  # Increase main title size
  cex = 1.5  # Increase text size inside the Venn diagram
)
grid.draw(ven)

#top 1%
grid.newpage()
ven <- venn.diagram(
  x = list(
    get(paste0("Run1_", model_name, "_top1SNP"))[, 1], 
    get(paste0("Run2_", model_name, "_top1SNP"))[, 1], 
    get(paste0("Run3_", model_name, "_top1SNP"))[, 1], 
    get(paste0("Run4_", model_name, "_top1SNP"))[, 1], 
    get(paste0("Run5_", model_name, "_top1SNP"))[, 1]
  ),
  category.names = c("RUN1", "RUN2", "RUN3", "RUN4", "RUN5"),
  filename = NULL,
  fill = c("#E69F00", "#56B4E9", "#009E73", "#D55E00","#CC79A7"),
  alpha = 0.30,
  print.mode = c("raw","percent"),  # Display raw counts + percentages
  sigdigs = 0,  # Round percentages to whole numbers
  margin = 0.1,  # Adjust margins
  cat.fontface = "italic",  # Italic category labels
  cat.fontsize = 14,  # Increase category label size
  main = paste0("Venn Diagram: Top 1% Candidate SNPs Across ",model_name," Runs"),
  main.fontface = "bold",
  main.cex = 1.5,  # Increase main title size
  cex = 1.5  # Increase text size inside the Venn diagram
)
grid.draw(ven)

#top 5% 
grid.newpage()
ven <- venn.diagram(
  x = list(
    get(paste0("Run1_", model_name, "_top5SNP"))[, 1], 
    get(paste0("Run2_", model_name, "_top5SNP"))[, 1], 
    get(paste0("Run3_", model_name, "_top5SNP"))[, 1], 
    get(paste0("Run4_", model_name, "_top5SNP"))[, 1], 
    get(paste0("Run5_", model_name, "_top5SNP"))[, 1]
  ),
  category.names = c("RUN1", "RUN2", "RUN3", "RUN4", "RUN5"),
  filename = NULL,
  fill = c("#E69F00", "#56B4E9", "#009E73", "#D55E00","#CC79A7"),
  alpha = 0.30,
  print.mode = c("raw","percent"),  # Display raw counts + percentages
  sigdigs = 0,  # Round percentages to whole numbers
  margin = 0.1,  # Adjust margins
  cat.fontface = "italic",  # Italic category labels
  cat.fontsize = 14,  # Increase category label size
  main = paste0("Venn Diagram: Top 5% Candidate SNPs Across ",model_name," Runs"),
  main.fontface = "bold",
  main.cex = 1.5,  # Increase main title size
  cex = 1.5  # Increase text size inside the Venn diagram
)

# Draw the Venn diagram
grid.draw(ven)
}

Overall, we observe that only a subset of the SNPs identified as candidates is consistently shared across runs. Furthermore, the top-candidate (rank-based) approach appears more robust than the empirical p-value thresholds, which show greater variability across runs. This suggests that relying on top candidates is the more appropriate way to identify robust signals.

4 Save candidates

We selected and saved the top candidates identified in all five runs:

4.1 Top 1% of SNPs for downstream analysis

for(i in 1: length(GF_data)){

  genomic_data <- get(GF_data[i])
  model_name <- GF_models[i]

#Select only the candidates identified in all 5 runs
outliers_rank_based_top1perc <- Reduce(intersect, list(
  get(paste0("Run1_", model_name, "_top1SNP"))[,1],
  get(paste0("Run2_", model_name, "_top1SNP"))[,1],
  get(paste0("Run3_", model_name, "_top1SNP"))[,1],
  get(paste0("Run4_", model_name, "_top1SNP"))[,1],
  get(paste0("Run5_", model_name, "_top1SNP"))[,1]))

#number of outliers
length(outliers_rank_based_top1perc)

assign(paste0("outliers_rank_based_top1perc_",model_name),outliers_rank_based_top1perc)
#save
   save(list = paste0("outliers_rank_based_top1perc_",model_name), file = paste0("C:/Users/tfrancisco/Documents/Thesis/Results/species/taxus/GEA_new_var/outliers/outliers_rank_based_top1perc_", model_name, ".Rdata"))
}

4.2 Top 5% of SNPs for downstream analysis as a relaxed-threshold candidate set

for(i in 1: length(GF_data)){

  genomic_data <- get(GF_data[i])
  model_name <- GF_models[i]
  
#Select only the outliers identified in all 5 runs
outliers_rank_based_top5perc <- Reduce(intersect, list(
  get(paste0("Run1_", model_name, "_top5SNP"))[,1],
  get(paste0("Run2_", model_name, "_top5SNP"))[,1],
  get(paste0("Run3_", model_name, "_top5SNP"))[,1],
  get(paste0("Run4_", model_name, "_top5SNP"))[,1],
  get(paste0("Run5_", model_name, "_top5SNP"))[,1]))

#number of outliers
length(outliers_rank_based_top5perc)

assign(paste0("outliers_rank_based_top5perc_",model_name),outliers_rank_based_top5perc)

#save
   save(list = paste0("outliers_rank_based_top5perc_",model_name), file = paste0("C:/Users/tfrancisco/Documents/Thesis/Results/species/taxus/GEA_new_var/outliers/outliers_rank_based_top5perc_", model_name, ".Rdata"))
}
devtools::session_info()
## ─ Session info ───────────────────────────────────────────────────────────────
##  setting  value
##  version  R version 4.3.2 (2023-10-31 ucrt)
##  os       Windows 11 x64 (build 26100)
##  system   x86_64, mingw32
##  ui       RTerm
##  language (EN)
##  collate  French_France.utf8
##  ctype    French_France.utf8
##  tz       Europe/Paris
##  date     2025-09-22
##  pandoc   3.1.1 @ C:/Program Files/RStudio/resources/app/bin/quarto/bin/tools/ (via rmarkdown)
## 
## ─ Packages ───────────────────────────────────────────────────────────────────
##  package        * version    date (UTC) lib source
##  abind            1.4-8      2024-09-12 [1] CRAN (R 4.3.3)
##  backports        1.4.1      2021-12-13 [1] CRAN (R 4.3.1)
##  base64enc        0.1-3      2015-07-28 [1] CRAN (R 4.3.1)
##  broom            1.0.8      2025-03-28 [1] CRAN (R 4.3.3)
##  bslib            0.6.1      2023-11-28 [1] CRAN (R 4.3.2)
##  cachem           1.0.8      2023-05-01 [1] CRAN (R 4.3.1)
##  car              3.1-2      2023-03-30 [1] CRAN (R 4.3.2)
##  carData          3.0-5      2022-01-06 [1] CRAN (R 4.3.2)
##  cellranger       1.1.0      2016-07-27 [1] CRAN (R 4.3.2)
##  cli              3.6.1      2023-03-23 [1] CRAN (R 4.3.1)
##  colorspace       2.1-0      2023-01-23 [1] CRAN (R 4.3.1)
##  curl             5.2.0      2023-12-08 [1] CRAN (R 4.3.2)
##  data.table       1.15.0     2024-01-30 [1] CRAN (R 4.3.2)
##  devtools         2.4.5      2022-10-11 [1] CRAN (R 4.3.3)
##  digest           0.6.33     2023-07-07 [1] CRAN (R 4.3.1)
##  dplyr          * 1.1.4      2023-11-17 [1] CRAN (R 4.3.2)
##  ellipsis         0.3.2      2021-04-29 [1] CRAN (R 4.3.1)
##  evaluate         0.23       2023-11-01 [1] CRAN (R 4.3.2)
##  extendedForest   1.6.2      2023-12-12 [1] R-Forge (R 4.3.2)
##  fastmap          1.1.1      2023-02-24 [1] CRAN (R 4.3.1)
##  formatR          1.14       2023-01-17 [1] CRAN (R 4.3.2)
##  fs               1.6.3      2023-07-20 [1] CRAN (R 4.3.1)
##  futile.logger  * 1.4.3      2016-07-10 [1] CRAN (R 4.3.2)
##  futile.options   1.0.1      2018-04-20 [1] CRAN (R 4.3.1)
##  generics         0.1.4      2025-05-09 [1] CRAN (R 4.3.2)
##  ggplot2        * 3.5.2      2025-04-09 [1] CRAN (R 4.3.2)
##  glue             1.7.0      2024-01-09 [1] CRAN (R 4.3.2)
##  gradientForest * 0.1-37     2023-08-19 [1] R-Forge (R 4.3.1)
##  gtable           0.3.6      2024-10-25 [1] CRAN (R 4.3.3)
##  highr            0.10       2022-12-22 [1] CRAN (R 4.3.1)
##  hms              1.1.3      2023-03-21 [1] CRAN (R 4.3.2)
##  htmltools        0.5.7      2023-11-03 [1] CRAN (R 4.3.2)
##  htmlwidgets      1.6.4      2023-12-06 [1] CRAN (R 4.3.2)
##  httpuv           1.6.13     2023-12-06 [1] CRAN (R 4.3.2)
##  httr             1.4.7      2023-08-15 [1] CRAN (R 4.3.2)
##  import           1.3.2      2024-01-21 [1] CRAN (R 4.3.3)
##  jquerylib        0.1.4      2021-04-26 [1] CRAN (R 4.3.1)
##  jsonlite         1.8.8      2023-12-04 [1] CRAN (R 4.3.2)
##  knitr            1.45       2023-10-30 [1] CRAN (R 4.3.2)
##  lambda.r         1.2.4      2019-09-18 [1] CRAN (R 4.3.2)
##  later            1.3.2      2023-12-06 [1] CRAN (R 4.3.2)
##  lattice          0.22-5     2023-10-24 [2] CRAN (R 4.3.2)
##  lazyeval         0.2.2      2019-03-15 [1] CRAN (R 4.3.2)
##  lifecycle        1.0.4      2023-11-07 [1] CRAN (R 4.3.2)
##  lubridate      * 1.9.3      2023-09-27 [1] CRAN (R 4.3.2)
##  magrittr       * 2.0.3      2022-03-30 [1] CRAN (R 4.3.1)
##  markdown         1.12       2023-12-06 [1] CRAN (R 4.3.2)
##  MASS             7.3-60.0.1 2024-01-13 [2] CRAN (R 4.3.2)
##  memoise          2.0.1      2021-11-26 [1] CRAN (R 4.3.1)
##  mime             0.12       2021-09-28 [1] CRAN (R 4.3.1)
##  miniUI           0.1.1.1    2018-05-18 [1] CRAN (R 4.3.2)
##  mnormt           2.1.1      2022-09-26 [1] CRAN (R 4.3.1)
##  munsell          0.5.1      2024-04-01 [1] CRAN (R 4.3.3)
##  nlme             3.1-164    2023-11-27 [2] CRAN (R 4.3.2)
##  patchwork        1.2.0      2024-01-08 [1] CRAN (R 4.3.2)
##  pillar           1.10.2     2025-04-05 [1] CRAN (R 4.3.3)
##  pkgbuild         1.4.3      2023-12-10 [1] CRAN (R 4.3.2)
##  pkgconfig        2.0.3      2019-09-22 [1] CRAN (R 4.3.2)
##  pkgload          1.3.4      2024-01-16 [1] CRAN (R 4.3.2)
##  plotly           4.10.4     2024-01-13 [1] CRAN (R 4.3.2)
##  png              0.1-8      2022-11-29 [1] CRAN (R 4.3.1)
##  profvis          0.3.8      2023-05-02 [1] CRAN (R 4.3.2)
##  promises         1.2.1      2023-08-10 [1] CRAN (R 4.3.2)
##  psych            2.4.1      2024-01-18 [1] CRAN (R 4.3.2)
##  purrr            1.0.2      2023-08-10 [1] CRAN (R 4.3.2)
##  R6               2.6.1      2025-02-15 [1] CRAN (R 4.3.3)
##  radiant.data   * 1.6.3      2023-12-17 [1] CRAN (R 4.3.3)
##  randomizr        1.0.0      2023-08-10 [1] CRAN (R 4.3.3)
##  Rcpp             1.0.11     2023-07-06 [1] CRAN (R 4.3.1)
##  readr            2.1.5      2024-01-10 [1] CRAN (R 4.3.2)
##  readxl           1.4.3      2023-07-06 [1] CRAN (R 4.3.2)
##  remotes          2.5.0      2024-03-17 [1] CRAN (R 4.3.3)
##  rlang            1.1.3      2024-01-10 [1] CRAN (R 4.3.2)
##  rmarkdown        2.25       2023-09-18 [1] CRAN (R 4.3.1)
##  rstudioapi       0.15.0     2023-07-07 [1] CRAN (R 4.3.2)
##  sass             0.4.8      2023-12-06 [1] CRAN (R 4.3.2)
##  scales           1.3.0      2023-11-28 [1] CRAN (R 4.3.2)
##  sessioninfo      1.2.2      2021-12-06 [1] CRAN (R 4.3.2)
##  shiny            1.8.0      2023-11-17 [1] CRAN (R 4.3.2)
##  shinyAce         0.4.2      2022-05-06 [1] CRAN (R 4.3.3)
##  shinyFiles       0.9.3      2022-08-19 [1] CRAN (R 4.3.3)
##  stringi          1.8.3      2023-12-11 [1] CRAN (R 4.3.2)
##  stringr          1.5.1      2023-11-14 [1] CRAN (R 4.3.2)
##  textshape      * 1.7.5      2024-04-01 [1] CRAN (R 4.3.3)
##  tibble           3.2.1      2023-03-20 [1] CRAN (R 4.3.2)
##  tidyr          * 1.3.1      2024-01-24 [1] CRAN (R 4.3.2)
##  tidyselect       1.2.1      2024-03-11 [1] CRAN (R 4.3.3)
##  timechange       0.3.0      2024-01-18 [1] CRAN (R 4.3.2)
##  tzdb             0.4.0      2023-05-12 [1] CRAN (R 4.3.2)
##  urlchecker       1.0.1      2021-11-30 [1] CRAN (R 4.3.2)
##  usethis          2.2.3      2024-02-19 [1] CRAN (R 4.3.2)
##  vctrs            0.6.5      2023-12-01 [1] CRAN (R 4.3.2)
##  VennDiagram    * 1.7.3      2022-04-12 [1] CRAN (R 4.3.3)
##  viridisLite      0.4.2      2023-05-02 [1] CRAN (R 4.3.2)
##  withr            3.0.2      2024-10-28 [1] CRAN (R 4.3.3)
##  writexl        * 1.5.0      2024-02-09 [1] CRAN (R 4.3.2)
##  xfun             0.41       2023-11-01 [1] CRAN (R 4.3.2)
##  xtable           1.8-4      2019-04-21 [1] CRAN (R 4.3.2)
##  yaml             2.3.8      2023-12-11 [1] CRAN (R 4.3.2)
## 
##  [1] C:/Users/tfrancisco/AppData/Local/R/win-library/4.3
##  [2] C:/Users/tfrancisco/AppData/Local/Programs/R/R-4.3.2/library
## 
## ──────────────────────────────────────────────────────────────────────────────