--- title: "How to use gWQS package" author: "Stefano Renzetti, Paul Curtin, Allan C Just, Ghalib Bello, Chris Gennings" date: "`r Sys.Date()`" output: bookdown::html_document2: toc: TRUE toc_float: TRUE header-includes: - \usepackage{float} vignette: > %\VignetteIndexEntry{How to use gWQS package} %\VignetteEngine{knitr::rmarkdown} \usepackage[utf8]{inputenc} --- ```{r setup, include=FALSE} # knitr::opts_chunk$set(cache = TRUE) library(cowplot) ``` # Introduction Weighted Quantile Sum (WQS) regression is a statistical approach for multivariate regression in high-dimensional data with complex correlation patterns commonly encountered in environmental exposures, epi/genomics, and metabolomic studies, among others. The model constructs an empirically weighted index estimating the mixture effect of predictor (e.g., exposure) variables on an outcome, which may then be used in a regression model with relevant covariates to test the association of the index with a dependent variable or outcome. The contribution of each individual predictor to the overall index effect may is assessed by the relative strength of the estimated weights since the components are quantiled and are therefore on the same scale. The gWQS package extends WQS regression to applications with continuous and categorical outcomes and implements the random subset WQS and the repeated holdout WQS. In practical terms, the primary outputs of an analysis are the parameter estimates and significance tests for the overall index effect of predictor variables, and the estimated weights assigned to each predictor, which identify the relevant contribution of each variable to the relationship between the WQS index and the outcome variable. For additional theoretical background on WQS regression and its extensions, see the references provided below. # How to use the `gWQS` package The main functions of the `gWQS` package are `gwqs` and `gwqs_multinom`. The first extends WQS regression to applications with continuous, categorical and count outcomes; the second extends WQS regression to applications with categorical outcomes with more than 2 non-ordinal categories while both functions include the option `rs` that allows to apply a random subset implementation of WQS and the argument `rh` that if set greater than 1 implements a repeated holdout validation procedure. In this vignette we will only show the application of WQS to a continuous outcome. We created the `wqs_data` dataset (available once the package is installed and loaded) to show how to use this function. These data reflect 59 exposure concentrations simulated from a distribution of 34 PCB exposures and 25 phthalate biomarkers measured in subjects participating in the NHANES study (2001-2002). Additionally, 8 outcome measures were simulated applying different distributions and fixed beta coefficients to the predictors. In particular `y` and `yLBX` were simulated from a normal distribution, `ybin` and `ybinLBX` from a binomial distribution, `ymultinom` and `ymultinomLBX` from a multinomial distribution and `ycount` and `ycountLBX` from a Poisson distribution. The `sex` variable was also simulated to allow to adjust for a covariate in the model (see the `wqs_data` help page for more details). This dataset can thus be used to test the `gWQS` package by analyzing the mixture effect of the simulated chemicals on the different outcomes, with adjustments for covariates. ## Example ### Step 1 Following the algorithm proposed in Renzetti et al. 
## Example

### Step 1

Following the algorithm proposed in Renzetti et al. (2023), we start by fitting a WQS regression with two indices, one exploring the positive and one the negative direction, specified through the terms `pwqs` and `nwqs`, respectively. We also opted for the repeated holdout validation procedure to obtain more stable results. The following script calls a WQS model for a continuous outcome using the function `gwqs`, which returns an object of class `gwqs`; the functions `gwqs_barplot`, `gwqs_scatterplot`, `gwqs_fitted_vs_resid` and `gwqs_boxplot` allow plotting the figures shown in Figure \@ref(fig:model1):

```{r, echo=TRUE, eval=TRUE}
library(gWQS)
library(ggplot2)
library(knitr)
library(kableExtra)
library(reshape2)

# we save the names of the mixture variables in the variable "PCBs"
PCBs <- names(wqs_data)[1:34]

# we run the model and save the results in the variable "results2i"
results2i <- gwqs(yLBX ~ pwqs + nwqs, mix_name = PCBs, data = wqs_data, q = 10, 
                  validation = 0.6, b = 5, b1_pos = TRUE, rh = 5, family = "gaussian", 
                  seed = 2016)
```

This WQS model tests the relationship between our dependent variable, `yLBX`, and a WQS index estimated by ranking the exposure concentrations in deciles (`q = 10`); in the `gwqs` formula the `wqs` term (when we want to build a single index) or the `pwqs` and `nwqs` terms (when we want to build a double index) must be included as if they were present in the dataset, and any covariates are added to the same formula (see the sketch after the summary output below). The data were split into 40% for training and 60% for validation (`validation = 0.6`), this split was repeated 5 times (`rh = 5`), and 5 bootstrap samples (`b = 5`) were used for parameter estimation (in practical applications we suggest using at least 100 repeated holdouts and 100 bootstrap samples). We first examined a bidirectional association including both the positive (`pwqs`) and negative (`nwqs`) indices. We linked our model to a gaussian distribution to test for relationships between the continuous outcome and the exposures (`family = "gaussian"`; other families available within the `gwqs` function are `"binomial"`, `"poisson"`, `"quasipoisson"` and `"negbin"`, while the function `gwqs_multinom` allows fitting a WQS regression for multinomial outcomes), and fixed the seed to 2016 for reproducible results (`seed = 2016`).

To test the statistical significance of the associations between the variables in the model, the following code has to be run, as for a classical `R` regression function:

```{r, echo=TRUE, eval=TRUE}
summary(results2i)
```

From this output we can observe a statistically significant association in the positive direction ($\beta$ = 1.108, 95% CI 0.903, 1.314) but not in the negative direction ($\beta$ = -0.069, 95% CI -0.170, 0.031).
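As noted above, covariates enter the model through the same formula interface; a minimal sketch (not run) adjusting for the simulated `sex` variable described earlier could look like this:

```{r, echo=TRUE, eval=FALSE}
# sketch of a covariate-adjusted model (not run): `sex` is added to the formula,
# while the mixture components are still passed through mix_name
results2i_adj <- gwqs(yLBX ~ pwqs + nwqs + sex, mix_name = PCBs, data = wqs_data, 
                      q = 10, validation = 0.6, b = 5, b1_pos = TRUE, rh = 5, 
                      family = "gaussian", seed = 2016)
```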
We can also draw some useful plots using the following secondary functions built into the `gWQS` package:

```{r, echo=TRUE, eval=FALSE}
# bar plot
gwqs_barplot(results2i)
# scatter plot y vs wqs
gwqs_scatterplot(results2i)
# scatter plot residuals vs fitted values
gwqs_fitted_vs_resid(results2i)
# boxplot of the weights estimated at each repeated holdout step
gwqs_boxplot(results2i)
```

```{r model1, results='asis', fig.show='hold', fig.height=8, fig.width=8, echo=FALSE, fig.cap="Plots displayed for linear outcomes.", message=FALSE}
PCBs <- names(wqs_data)[1:34]
results2i <- gwqs(yLBX ~ pwqs + nwqs, mix_name = PCBs, data = wqs_data, q = 10, 
                  validation = 0.6, b = 5, b1_pos = TRUE, rh = 5, family = "gaussian", 
                  seed = 2016)

w_ord <- order(results2i$final_weights$`Estimate pos`)
mean_weight_pos <- results2i$final_weights$`Estimate pos`[w_ord]
mean_weight_neg <- results2i$final_weights$`Estimate neg`[w_ord]
mix_name <- factor(results2i$final_weights$mix_name[w_ord], 
                   levels = results2i$final_weights$mix_name[w_ord])
data_plot <- data.frame(mean_weight = c(mean_weight_pos, mean_weight_neg), 
                        mix_name = rep(mix_name, 2), 
                        index = factor(rep(c("pwqs", "nwqs"), each = length(w_ord)), 
                                       levels = c("pwqs", "nwqs")))
bar_plot_h <- ggplot(data_plot, aes(x = mix_name, y = mean_weight)) + 
  geom_bar(stat = "identity", color = "black") + theme_bw() + 
  theme(axis.ticks = element_blank(), axis.title = element_blank(), 
        axis.text.x = element_text(color = "black"), legend.position = "none") + 
  coord_flip() + ggtitle("A") + 
  geom_hline(yintercept = 1/length(PCBs), linetype = "dashed", color = "red") + 
  facet_wrap(~ index)

yadj_vs_wqs <- ggplot(melt(results2i$y_wqs_df, measure.vars = c("pwqs", "nwqs")), 
                      aes(value, y_adj)) + 
  geom_point() + facet_wrap(~ variable) + xlab("wqs") + 
  stat_smooth(method = "loess", se = FALSE, linewidth = 1.5) + theme_bw() + ggtitle("B")

fit_df <- data.frame(fitted = fitted(results2i), 
                     resid = residuals(results2i, type = "response"))
res_vs_fitted <- ggplot(fit_df, aes(x = fitted, y = resid)) + geom_point() + 
  theme_bw() + xlab("Fitted values") + ylab("Residuals") + ggtitle("C")

wboxplot <- melt(results2i$wmat, varnames = c("rh", "mix_name"))
wboxplot$mix_name <- factor(wboxplot$mix_name, levels = results2i$final_weights$mix_name)
wboxplot$L1 <- factor(wboxplot$L1, levels = c("wmatpos", "wmatneg"), 
                      labels = c("pwqs", "nwqs"))
box_plot <- ggplot(wboxplot, aes(x = mix_name, y = value)) + 
  geom_boxplot(outlier.shape = " ") + theme_bw() + facet_wrap(~ L1, ncol = 1) + 
  theme(axis.text.x = element_text(angle = 45, hjust = 1)) + ylab("Weight") + 
  stat_summary(fun = mean, geom = "point", shape = 18, size = 3) + 
  geom_hline(yintercept = 1/length(PCBs), linetype = "dashed", color = "red") + 
  geom_jitter(alpha = 0.3) + ggtitle("D")

plot_grid(bar_plot_h, yadj_vs_wqs, res_vs_fitted, box_plot, nrow = 2, ncol = 2)
```

Figure \@ref(fig:model1)A is a barplot showing the weights assigned to each variable, ordered from the highest to the lowest weight of the positive index (by default). These results indicate that the variables `LBXF07LA`, `LBX138LA` and `LBXD02LA` are the largest contributors to the mixture effect in the positive direction, while different elements contribute in the negative direction, where no significant association was found. The dashed red line represents the cutoff $\tau$ (by default equal to the inverse of the number of elements in the mixture, as suggested in Carrico et al. 2014) used to discriminate which elements are most important.
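Since there are 34 elements in the `PCBs` mixture, this default cutoff is 1/34 ≈ 0.029; a one-line check (not evaluated here):

```{r, echo=TRUE, eval=FALSE}
# default cutoff tau: the inverse of the number of elements in the mixture
1/length(PCBs)
```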
In plot B of Figure \@ref(fig:model1) we have a representation of the WQS indices vs the outcome (adjusted for the covariates when these are included in the model), which shows the direction and the shape of the association between the exposure and the outcome. For example, in this case we observe a linear and positive relationship between the mixture and the `yLBX` variable for the positive index and an almost horizontal line for the negative index (meaning there is no negative effect of the mixture on the outcome). In plot C a diagnostic graph of the residuals vs the fitted values is shown to check whether they are randomly spread around zero or whether there is a trend. All these plots are built using the `ggplot2` package. In plot D the boxplots of the distribution of the weights estimated at each repeated holdout are displayed.

To obtain the exact values of the estimated weights we can use the command `results2i$final_weights`. The following code shows the six highest weights; the full list of weights can be obtained by omitting the `head` function:

```{r, echo=TRUE, eval=TRUE}
head(results2i$final_weights)
```

The model summary and the weights table are also shown in the Viewer window through the functions `gwqs_summary_tab` and `gwqs_weights_tab`, respectively. Both functions use the `kableExtra` package to produce the output. The output (tables \@ref(tab:sum1) and \@ref(tab:w1)) and the respective code are shown below:

```{r, echo=TRUE, eval=FALSE}
gwqs_summary_tab(results2i)
```

```{r sum1, results='asis', echo=FALSE}
gwqs_summary_tab(results2i, caption = "Summary results of the WQS regression for linear outcomes.")
```

The same summary table can also be built manually with the following code:

```{r, echo=TRUE, eval=FALSE}
mf_df <- as.data.frame(signif(coef(summary(results2i)), 3))
kable_styling(kable(mf_df, row.names = TRUE))
```

```{r, echo=TRUE, eval=FALSE}
gwqs_weights_tab(results2i)
```

```{r w1, echo=FALSE, eval=TRUE}
final_weight <- results2i$final_weights
final_weight[, -1] <- signif(final_weight[, -1], 3)
scroll_box(kable_styling(kable(final_weight, row.names = FALSE, caption = "Weights table of the WQS regression for linear outcomes.")), 
           height = "400px")
```
The code to reproduce the weights table manually is the following:

```{r, echo=TRUE, eval=FALSE}
final_weight <- results2i$final_weights
final_weight[, -1] <- signif(final_weight[, -1], 3)
kable_styling(kable(final_weight, row.names = FALSE))
```

The `gwqs` function returns several other outputs. If a repeated holdout was performed, the object contains:

* `y_wqs_df`: a dataset containing the dependent variable (adjusted for other covariates if present in the model) and the `wqs` index or the `pwqs` and `nwqs` indices, depending on how the model was specified;
* `data`: the dataset used in the analysis, including the WQS indices specified in the model (`results2i$data`);
* `qi`: the list of vectors containing the cutoffs used to determine the quantiles of each variable in the mixture (`results2i$qi`);
* `wmat`: a matrix, or a list of two matrices if two indices were included in the model, with all the weights estimated at each repeated holdout;
* `gwqslist`: the list of the outputs from the repeated WQS regressions (`results2i$gwqslist`).

Each element of `gwqslist` contains the information that we would obtain from a WQS regression with a single split:

* `conv`: the vector of values indicating whether the solver converged (0) or not (1) (`results2i$gwqslist[[1]]$conv`; the index `[[1]]` accesses the information of the first model);
* `bres`: the matrix, or the list of two matrices if two indices were included in the model, with all the estimated weights and the associated $\beta_1$, standard errors, statistics and p-values for each bootstrap sample (`results2i$gwqslist[[1]]$bres`);
* `bindex`: the list of vectors containing the rows of the subjects included in each bootstrap dataset (`results2i$gwqslist[[1]]$bindex`);
* `vindex`: a logical vector identifying the rows used to estimate the parameters of the final model (`results2i$gwqslist[[1]]$vindex`);
* `objfn_values`: the vector of values of the objective function at the optimal parameter estimates obtained at each bootstrap step (`results2i$gwqslist[[1]]$objfn_values`);
* `optim_messages`: any messages returned by the `optim` function (`results2i$gwqslist[[1]]$optim_messages`).
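A brief sketch of how these components can be inspected (not run; the element names follow the description above):

```{r, echo=TRUE, eval=FALSE}
str(results2i$y_wqs_df)             # adjusted outcome and the pwqs/nwqs indices
head(results2i$data)                # analysis dataset including the WQS indices
results2i$qi[[1]]                   # quantile cutoffs of the first mixture component
str(results2i$wmat)                 # weights estimated at each repeated holdout
results2i$gwqslist[[1]]$conv        # convergence indicators for the first holdout
head(results2i$gwqslist[[1]]$bres)  # bootstrap weights and coefficient estimates
```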
The following script reproduces the figures that are automatically generated by the plotting functions:

```{r, echo=TRUE, eval=FALSE}
# bar plot
w_ord <- order(results2i$final_weights$`Estimate pos`)
mean_weight_pos <- results2i$final_weights$`Estimate pos`[w_ord]
mean_weight_neg <- results2i$final_weights$`Estimate neg`[w_ord]
mix_name <- factor(results2i$final_weights$mix_name[w_ord], 
                   levels = results2i$final_weights$mix_name[w_ord])
data_plot <- data.frame(mean_weight = c(mean_weight_pos, mean_weight_neg), 
                        mix_name = rep(mix_name, 2), 
                        index = factor(rep(c("pwqs", "nwqs"), each = length(w_ord)), 
                                       levels = c("pwqs", "nwqs")))
ggplot(data_plot, aes(x = mix_name, y = mean_weight)) + 
  geom_bar(stat = "identity", color = "black") + theme_bw() + 
  theme(axis.ticks = element_blank(), axis.title = element_blank(), 
        axis.text.x = element_text(color = "black"), legend.position = "none") + 
  coord_flip() + 
  geom_hline(yintercept = 1/length(PCBs), linetype = "dashed", color = "red") + 
  facet_wrap(~ index)

# scatter plot y vs wqs
ggplot(melt(results2i$y_wqs_df, measure.vars = c("pwqs", "nwqs")), aes(value, y_adj)) + 
  geom_point() + facet_wrap(~ variable) + xlab("wqs") + 
  stat_smooth(method = "loess", se = FALSE, linewidth = 1.5) + theme_bw()

# scatter plot residuals vs fitted values
fit_df <- data.frame(fitted = fitted(results2i), 
                     resid = residuals(results2i, type = "response"))
ggplot(fit_df, aes(x = fitted, y = resid)) + geom_point() + 
  theme_bw() + xlab("Fitted values") + ylab("Residuals")
```

### Step 2

We then run three WQS regressions setting three different shrinkage parameter values:

1. one equal to the magnitude of the AIC of the regression fitted at Step 1;
2. one equal to a lower order of magnitude;
3. one equal to a greater order of magnitude.

Additional models can be fitted following a bisection algorithm, setting `lambda` between the two values with the smallest AICs (a sketch of one such refinement step is shown after the code below).

```{r, echo=TRUE, eval=TRUE}
# we run the model setting the penalization term equal to 90
results2i_l90 <- gwqs(yLBX ~ pwqs + nwqs, mix_name = PCBs, data = wqs_data, q = 10, 
                      validation = 0.6, b = 5, b1_pos = TRUE, rh = 5, family = "gaussian", 
                      seed = 2016, lambda = 90)

# we run the model setting the penalization term equal to 900
results2i_l900 <- gwqs(yLBX ~ pwqs + nwqs, mix_name = PCBs, data = wqs_data, q = 10, 
                       validation = 0.6, b = 5, b1_pos = TRUE, rh = 5, family = "gaussian", 
                       seed = 2016, lambda = 900)

# we run the model setting the penalization term equal to 9000
results2i_l9000 <- gwqs(yLBX ~ pwqs + nwqs, mix_name = PCBs, data = wqs_data, q = 10, 
                        validation = 0.6, b = 5, b1_pos = TRUE, rh = 5, family = "gaussian", 
                        seed = 2016, lambda = 9000)
```

Based on the results obtained from this analysis we can choose the value of `lambda` that minimizes the AIC.
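If a finer search is desired, one refinement step of the bisection approach mentioned above could look like the following sketch (not run), assuming for illustration that the two smallest AICs were obtained at `lambda = 90` and `lambda = 900`:

```{r, echo=TRUE, eval=FALSE}
# fit an additional model with lambda halfway between the two best values and
# compare its AIC with the previous fits; iterate on the better sub-interval
results2i_l495 <- gwqs(yLBX ~ pwqs + nwqs, mix_name = PCBs, data = wqs_data, q = 10, 
                       validation = 0.6, b = 5, b1_pos = TRUE, rh = 5, family = "gaussian", 
                       seed = 2016, lambda = (90 + 900)/2)
results2i_l495$fit$aic
```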
In our case, `lambda = 90` is the optimal value, as shown by the table below:

```{r, echo=TRUE, eval=TRUE}
lambda_AIC_2i <- data.frame(lambda = c(0, 90, 900, 9000), 
                            AIC = c(results2i$fit$aic, results2i_l90$fit$aic, 
                                    results2i_l900$fit$aic, results2i_l9000$fit$aic))
kable(lambda_AIC_2i) %>% kable_styling()
```

The results still show a significant positive association and a non-significant negative effect of the mixture on the outcome:

```{r, echo=TRUE, eval=TRUE}
summary(results2i_l90)
```

The identification and the ranking of the significant weights did not change compared to the non-penalized WQS, but we can appreciate how the non-relevant weights have shrunk towards zero:

```{r, results='asis', fig.show='hold', fig.height=8, fig.width=8, echo=TRUE, message=FALSE}
gwqs_barplot(results2i_l90)
```

### Step 3

Since the mixture did not show a negative association with the outcome, we can fit a final model with a single positive index and follow the same procedure:

```{r, echo=TRUE, eval=TRUE}
results1i <- gwqs(yLBX ~ wqs, mix_name = PCBs, data = wqs_data, q = 10, 
                  validation = 0.6, b = 5, b1_pos = TRUE, rh = 5, family = "gaussian", 
                  seed = 2016)

# we run the model setting the penalization term equal to 90
results1i_l90 <- gwqs(yLBX ~ wqs, mix_name = PCBs, data = wqs_data, q = 10, 
                      validation = 0.6, b = 5, b1_pos = TRUE, rh = 5, family = "gaussian", 
                      seed = 2016, lambda = 90)

# we run the model setting the penalization term equal to 900
results1i_l900 <- gwqs(yLBX ~ wqs, mix_name = PCBs, data = wqs_data, q = 10, 
                       validation = 0.6, b = 5, b1_pos = TRUE, rh = 5, family = "gaussian", 
                       seed = 2016, lambda = 900)

# we run the model setting the penalization term equal to 9000
results1i_l9000 <- gwqs(yLBX ~ wqs, mix_name = PCBs, data = wqs_data, q = 10, 
                        validation = 0.6, b = 5, b1_pos = TRUE, rh = 5, family = "gaussian", 
                        seed = 2016, lambda = 9000)
```

Also in this case we can observe that `lambda = 90` gives the lowest AIC:

```{r, echo=TRUE, eval=TRUE}
lambda_AIC_1i <- data.frame(lambda = c(0, 90, 900, 9000), 
                            AIC = c(results1i$fit$aic, results1i_l90$fit$aic, 
                                    results1i_l900$fit$aic, results1i_l9000$fit$aic))
kable(lambda_AIC_1i) %>% kable_styling()
```

We can then choose the single-index WQS regression with the penalization term set to `lambda = 90`. The results displayed below confirm a significant positive association of the mixture with the outcome, with `LBXF07LA`, `LBX138LA` and `LBXD02LA` as the elements contributing the most to this association:

```{r, echo=TRUE, eval=TRUE}
summary(results1i_l90)
```

```{r, results='asis', fig.show='hold', fig.height=8, fig.width=8, echo=TRUE}
gwqs_barplot(results1i_l90)
```

# References

Carrico C, Gennings C, Wheeler D, Factor-Litvak P. Characterization of a weighted quantile sum regression for highly correlated data in a risk analysis setting. J Agric Biol Environ Stat. 2014:1-21. ISSN: 1085-7117. doi: 10.1007/s13253-014-0180-3.

Czarnota J, Gennings C, Colt JS, De Roos AJ, Cerhan JR, Severson RK, Hartge P, Ward MH, Wheeler D. Analysis of environmental chemical mixtures and non-Hodgkin lymphoma risk in the NCI-SEER NHL study. Environmental Health Perspectives. 2015.

Czarnota J, Gennings C, Wheeler D. Assessment of weighted quantile sum regression for modeling chemical mixtures and cancer risk. Cancer Informatics. 2015;14(S2):159-171.

Curtin P, Kellogg J, Cech N, Gennings C.
A random subset implementation of weighted quantile sum (wqsrs) regression for analysis of high-dimensional mixtures. Communications in Statistics - Simulation and Computation. 2019:1-16. doi: 10.1080/03610918.2019.1577971.

Tanner EM, Bornehag CG, Gennings C. Repeated holdout validation for weighted quantile sum regression. MethodsX. 2019;6:2855-2860. doi: 10.1016/j.mex.2019.11.008.

Renzetti S, Gennings C, Calza S. A weighted quantile sum regression with penalized weights and two indices. Front Public Health. 2023;11:1151821. doi: 10.3389/fpubh.2023.1151821.

# Acknowledgements

This package was developed at the CHEAR Data Center (Dept. of Environmental Medicine and Public Health, Icahn School of Medicine at Mount Sinai) with funding and support from NIEHS (U2C ES026555-01), with additional support from the Empire State Development Corporation.