The R package BayesianMediationA performs linear or nonlinear mediation analysis with binary, continuous, or time-to-event outcomes in a Bayesian setting (Yu and Li 2022). The vignette is composed of three parts. Part I describes the data sets used in the examples, Part II explains how to transform variables and prepare data for the mediation analysis, and Part III walks through the Bayesian mediation analysis function and explains how to make inferences on the mediation effects of interest.
To use the R package BayesianMediationA, we first install the package in R (install.packages("BayesianMediationA")) and load it.
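That is:
install.packages("BayesianMediationA") #install the package (needed only once)
library(BayesianMediationA)            #load the package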
We use the data set weight_behavior, which is included in the package, as an example for mediation analysis with binary or continuous outcomes (Yu and Li 2017). In addition, a data set with a time-to-event outcome is generated as follows:
#simulate a data set with a time-to-event outcome
set.seed(1)
N=100
alpha=0.5
x=rnorm(N,0,1)
x=ifelse(x>0,1,0) #the binary exposure; remove this line to keep a continuous exposure
e1=rnorm(N,0,1)
M=alpha*x+e1 #the mediator
lambda=0.01
rho=1
beta=1.2
c=-1
rateC=0.001
v=runif(n=N)
Tlat =(- log(v) / (lambda * exp(c*x+M*beta)))^(1 / rho) #the event time
C=rexp(n=N, rate=rateC) #the censoring time
time=pmin(Tlat, C)
status <- as.numeric(Tlat <= C)
example2 <- cbind(x, M, time, status) #the simulated data set
The data_org function transforms and organizes the data before the mediation analysis. The exposure variable(s) (pred), the mediator(s) (m), and the response variable (y) are required inputs. If y is binary or categorical, its reference level is given in the argument levely. Similarly, the reference levels for the exposure variables and the mediators are given in the arguments predref and mref, respectively.
Other input data include cova, the covariates used to explain y, and mcov, the covariates for the mediators. Covariates for y are predictors of y that are not explained by the exposure variables pred. Covariates for mediators are explanatory variables for the mediators other than the exposure variable(s). Accompanying mcov, mclist specifies different covariates for different mediators. If mclist is NULL (the default), all covariates in mcov are used for all mediators in m. Otherwise, the first item of mclist lists the column numbers/names of the mediators in m that are to be explained by covariates in mcov, and each following item gives the covariates in mcov for the corresponding mediator, in the order of the first item. NA is used when no covariates are used for the corresponding mediator. For example, mclist=list(c(2,4),NA,2:3) means that all mediators use all covariates in mcov, except the second mediator, which uses none of the covariates, and the fourth mediator, which uses only columns 2 to 3 of mcov as its covariates.
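As a sketch of the example above (my.pred, my.m, my.y, and my.mcov are hypothetical placeholder objects, not columns of weight_behavior):
#my.m is assumed to have four mediators and my.mcov three mediator covariates
mclist.ex <- list(c(2,4), #mediators 2 and 4 of my.m get their own covariate specifications
                  NA,     #mediator 2 uses none of the covariates in my.mcov
                  2:3)    #mediator 4 uses only columns 2 and 3 of my.mcov
#the list is then supplied through the main analysis function, e.g.,
#bma.bx.cy(pred=my.pred, m=my.m, y=my.y, mcov=my.mcov, mclist=mclist.ex)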
Variables can be transformed to denote potential nonlinear relationships. The transformation functions are given in the arguments fpy, fmy, and fpm. In the argument names, p stands for the predictors, m for the mediators, and y for the outcome. Namely, fpy defines the transformation functions of the predictors in explaining the outcome y. Its first item lists the column numbers/variable names of the exposure variables in pred that are to be transformed to explain y. In the order of the first item, each of the remaining items of fpy lists the transformation functional expressions for the corresponding predictor. Exposures/predictors not specified in the list are not transformed in explaining y. For example, list(1,c("x^2","log(x)")) means that the first column of pred is transformed to its square and log forms to explain y. fmy is defined in the same way for the transformed mediators in explaining the outcome variable y.
fpm gives the list of transformation-function expressions for the exposure variable(s) (pred) in explaining the mediators (m). The definition of fpm is similar to those of fpy and fmy, except that its first item is a matrix with two columns: the first column gives the column numbers of the mediators in m that are to be explained by the transformed predictor(s) indicated by the second column, and the second column gives the column number of the exposure in pred that is transformed to explain the mediator identified by the first column of the same row. In the order of the rows of the first item, each of the remaining items of fpm lists the transformation functional expressions for the exposure (identified by column 2) in explaining the corresponding mediator (identified by column 1). Mediators not specified in the list are explained by the original form of the exposures in pred. For example, fpm=list(matrix(c(1,2,1,1),2,2), "x^2", c("x","x^2")) means that pred[,1]^2 is used to explain m[,1], and both pred[,1] and pred[,1]^2 are used to explain m[,2].
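As a sketch, the simpler specification below uses the original and squared forms of the (single) exposure to explain the first mediator only; the call mirrors the examples later in this vignette but is illustrative rather than verified package output:
#m[,1] is explained by pred[,1] and pred[,1]^2; m[,2] and m[,3] by pred[,1] only
fpm.ex <- list(matrix(c(1,1),1,2), c("x","x^2"))
test.fpm <- bma.bx.cy(pred=weight_behavior[,2], m=weight_behavior[,12:14],
                      y=weight_behavior[,1], fpm=fpm.ex,
                      n.iter=500, n.burnin=100)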
Finally, deltap and deltam define the change in the predictors or mediators, respectively, that is used in calculating the mediation effects (Yu et al. 2022).
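For instance, assuming deltap accepts a numeric value giving the size of the change in a continuous exposure, a call like the following sketch would evaluate the effects for a 0.5-unit change in weight_behavior[,2]:
#illustrative only: mediation effects evaluated at a 0.5-unit change in the exposure
test.delta <- bma.bx.cy(pred=weight_behavior[,2], m=weight_behavior[,12:14],
                        y=weight_behavior[,1], deltap=0.5,
                        n.iter=500, n.burnin=100)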
Users do not run the data_org function by itself. All of its arguments are included in the main Bayesian mediation analysis function bma.bx.cy, which calls data_org internally to organize the data for mediation analysis.
The function bma.bx.cy is used to perform the Bayesian mediation analysis. The function first calls data_org, which takes all the arguments described above. In addition, the prior distributions for the generalized linear models can be set up. By default, all coefficients in the Bayesian generalized linear models are independently normally distributed with mean 0 and the precision specified by speci, which defaults to 10^-6.
The prior means and variance-covariance matrices can be altered. mu defines the prior mean vector for the coefficients of the mediators, muc the prior mean vector (of length p2) for the coefficients of the exposure(s), and mucv the prior mean vector for the coefficients of the covariate(s) in the final model for y. Correspondingly, Omega, Omegac, and Omegacv define the prior variance-covariance matrices for the coefficients of the mediators, exposure(s), and covariates, respectively.
The prior distributions for the coefficients of the intercept/covariates and of the exposures in explaining the mediators are also assumed to be normal. The default prior means and variance-covariance matrices are as above; they can be changed through mu0.1 and mu1.1 for the means, and Omega0.1 and Omega1.1 for the variance-covariance matrices, for the intercept/covariates and the exposures respectively. Separately, the means and variance-covariance matrices of the prior distributions for the coefficients used in estimating the mediators can be specified by mu0, mu1, Omega0, and Omega1, followed by .a, .b, or .c for continuous, binary, or categorical mediators respectively.
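As a sketch of altering these priors (the dimensions used below are assumptions; mu and Omega must match the number of mediator coefficients in the fitted model for y):
#illustrative priors for the coefficients of the three mediators in the model for y
my.mu    <- rep(0.5, 3) #prior means shifted away from the default of 0
my.Omega <- diag(3)     #prior variance-covariance matrix set to the identity
test.prior <- bma.bx.cy(pred=weight_behavior[,3], m=weight_behavior[,12:14],
                        y=weight_behavior[,1], mu=my.mu, Omega=my.Omega,
                        n.iter=500, n.burnin=100)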
The function calls jags to perform the Bayesian analysis. Default models are fitted for the mediators and the outcome: a continuous response is fitted with a linear regression model, a binary response with a logistic regression, a categorical response with a multinomial logistic regression, and a time-to-event response with a proportional hazards model.
### Binary predictor and continuous outcome

In the following example, the exposure variable is sex and the outcome is bmi. The variables exercises (in hours), sports (on a sports team or not), and sweat (sweating activity or not) are used to explain the sex difference in bmi. The summary function returns inferences on the estimated effects, together with a graph of the estimated effects and their 95% credible sets.
data("weight_behavior")
#n.iter and n.burnin are set very small here to save time; they should be increased in practice
#binary predictor
test.b.c<- bma.bx.cy(pred=weight_behavior[,3], m=weight_behavior[,c(14,12,13)],
y=weight_behavior[,1],n.iter=500,n.burnin = 100)
#> module glm loaded
#> Compiling model graph
#> Resolving undeclared variables
#> Allocating nodes
#> Graph information:
#> Observed stochastic nodes: 3936
#> Unobserved stochastic nodes: 21
#> Total graph size: 7677
#>
#> Initializing model
summary(test.b.c)
#>
#> For Predictor 1 :
#> Estimated Relative Effects for Method 3:
#> DE sweat sports exercises
#> mean 0.9101 0.0269 0.0840 -0.0210
#> sd 0.0434 0.0237 0.0391 0.0149
#> q_0.025 0.8051 -0.0165 0.0230 -0.0535
#> q_0.25 0.8883 0.0108 0.0568 -0.0294
#> q_0.5 0.9141 0.0255 0.0797 -0.0204
#> q_0.75 0.9409 0.0412 0.1041 -0.0116
#> q_0.975 0.9770 0.0752 0.1779 0.0046
The summary function returns inference results for all four methods. By default, the relative effects estimated with method 3 are shown. To show the results of other methods, set the argument method. Method 4 is calculated for binary/categorical exposures only. To see the effect estimates rather than the relative effects, set RE = FALSE.
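For example, assuming RE is passed directly to summary as described above, the effect estimates from method 1 for the fit above can be requested with:
summary(test.b.c, method=1, RE=FALSE) #effect estimates (not relative effects) from method 1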
The following example uses a categorical exposure: race. In the data set, race takes six categories: empty (not reported), other, mixed, Caucasian, Indian, and African. CAUCASIAN is used as the reference group, so each other race group is compared with Caucasians in bmi, and the relative effects through the mediators are reported by the summary function.
test.ca.c<- bma.bx.cy(pred=weight_behavior[,3], m=weight_behavior[,c(5:11,12:14)],
y=weight_behavior[,1],cova=weight_behavior[,2],mcov=weight_behavior[,c(2,5)],
mclist = list(1,2),n.iter=500,n.burnin = 100)
#> Compiling model graph
#> Resolving undeclared variables
#> Allocating nodes
#> Graph information:
#> Observed stochastic nodes: 10206
#> Unobserved stochastic nodes: 72
#> Total graph size: 31416
#>
#> Initializing model
#> Error in update.jags(model, n.iter, ...): Error in node M2[1,1]
#> Failure to calculate log density
summary(test.ca.c)
#> Error: object 'test.ca.c' not found
The jags model fitted for the above analysis is as follows:
#the jags models for the outcomes and mediators
model {temp <- 1:nmc
for(i in 1:N){
mu_y[i] <- beta0 + inprod(c, x[i,]) + inprod(beta,M1[i,]) + inprod(eta,cova[i,])
y[i] ~ dnorm(mu_y[i],prec4) #the final model since y is continuous. The model is changed for
#different format of the outcome, as is shown in the following
#sections
for (j in 1:p1){ #the model for p1 continuous mediators
mu_M1[i,contm[j]] <- inprod(alpha0.a[j,mind[contm[j],]],mcov[i,temp[mind[contm[j],]]])+inprod(alpha1.a[j,1:c1],x1[i,])
M2[i,contm[j]] ~ dnorm(mu_M1[i,contm[j]],prec1[j])
for (k in contm1[j,1]:contm1[j,2]){
mu_M1_c[i,k] <- inprod(alpha0[k,mind[contm[j],]],mcov[i,mind[contm[j],]])+inprod(alpha1[k,1:c1],x1[i,])
M1[i,k] ~ dnorm(mu_M1_c[i,k],prec2[k])
}
}
for (k in 1:p2){ #the model for p2 binary mediators
logit(mu_M1[i,binm[k]]) <- inprod(alpha0.b[k,mind[binm[k],]],mcov[i,mind[binm[k],]])+inprod(alpha1.b[k,1:c1],x1[i,])
M2[i,binm[k]] ~ dbern(mu_M1[i,binm[k]])
}
for (j in 1:p3){ #the model for p3 categorical mediators
mu_Mc[i,j,1] <- 1 #baseline is the 1st category
for (k in 2:cat2[j]){
mu_Mc[i,j,k] <- exp(inprod(alpha0.c[j,k-1,mind[catm[j],]],mcov[i,mind[catm[j],]])+inprod(alpha1.c[j,k-1,1:c1],x1[i,]))
}
sum_Mc[i,j] <- sum(mu_Mc[i,j,1:cat2[j]])
for (l in 1:cat2[j])
{mu_Mc0[i,j,l] <- mu_Mc[i,j,l]/sum_Mc[i,j]}
M2[i,catm[j]] ~ dcat(mu_Mc0[i,j,1:cat2[j]])
}
}
beta[1:P] ~ dmnorm(mu[1:P], Omega[1:P, 1:P]) #prior distributions for coefficients of model for y
beta0 ~ dnorm(0, 1.0E-6)
c[1:c2] ~ dmnorm(muc[1:c2], Omegac[1:c2,1:c2])
eta[1:cv1] ~ dmnorm(mucv[1:cv1], Omegacv[1:cv1,1:cv1])
for(j in 1:P) #prior distributions for coefficients of model for not transformed mediators
{alpha1[j,1:c1] ~ dmnorm(mu1.1[j,1:c1], Omega1.1[1:c1, 1:c1])
alpha0[j,1:nmc] ~ dmnorm(mu0.1[j,1:nmc], Omega0.1[1:nmc, 1:nmc])}
for (i in 1:P){
var2[i] ~ dgamma(1,0.1)
prec2[i] <- 1/var2[i]
}
for(j in 1:p1) #prior distributions for coefficients of model for p1 continuous mediators
{alpha1.a[j,1:c1] ~ dmnorm(mu1.a[j,1:c1], Omega1.a[1:c1, 1:c1])
alpha0.a[j,1:nmc] ~ dmnorm(mu0.a[j,1:nmc], Omega0.a[1:nmc, 1:nmc])}
for (i in 1:p1){
var1[i] ~ dgamma(1,0.1)
prec1[i] <- 1/var1[i]
}
for(j in 1:p2) #prior distributions for coefficients of model for p2 binary mediators
{alpha1.b[j,1:c1] ~ dmnorm(mu1.b[j,1:c1], Omega1.b[1:c1, 1:c1])
alpha0.b[j,1:nmc] ~ dmnorm(mu0.b[j,1:nmc], Omega0.b[1:nmc, 1:nmc])
}
for (i in 1:p3){ #prior distributions for coefficients of model for p3 categorical mediators
for(j in 1:cat1)
{alpha1.c[i,j,1:c1] ~ dmnorm(mu1.c[j,1:c1], Omega1.c[1:c1, 1:c1])
alpha0.c[i,j,1:nmc] ~ dmnorm(mu0.c[j,1:nmc], Omega0.c[1:nmc, 1:nmc])}
}
var4 ~ dgamma(1,0.1) #the prior for the variance of y when it is continuous
prec4 <-1/var4
}
The rjags model can be revised when different priors or models are to be used. To do so, write the model to a file and pass the file path to the argument filename.
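A sketch of this workflow, assuming filename accepts the path to a text file containing the full jags model (the file name and the model content below are placeholders):
#write a revised jags model (e.g., the model above with different priors) to a file
my.model <- "model{
  # ... revised model code goes here ...
}"
writeLines(my.model, "my_model.txt")
#refit using the revised model instead of the default
test.custom <- bma.bx.cy(pred=weight_behavior[,3], m=weight_behavior[,c(14,12,13)],
                         y=weight_behavior[,1], filename="my_model.txt",
                         n.iter=500, n.burnin=100)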
We can use transformed predictors for the outcome. The following is an example:
test.c.c.2<- bma.bx.cy(pred=weight_behavior[,2], m=weight_behavior[,12:14],
y=weight_behavior[,1],fpy=list(1,c("x","x^2")),n.iter=5,n.burnin = 1)
#> Compiling model graph
#> Resolving undeclared variables
#> Allocating nodes
#> Graph information:
#> Observed stochastic nodes: 3894
#> Unobserved stochastic nodes: 21
#> Total graph size: 9876
#>
#> Initializing model
summary(test.c.c.2,method=1)
#>
#> For Predictor 1 :
#> Estimated Relative Effects for Method 1:
#> DE sports exercises sweat
#> mean 0.6867 0.0468 0.2774 -0.0108
#> sd 0.6571 0.1145 0.6603 0.0694
#> q_0.025 -0.1083 -0.1083 -0.2414 -0.1006
#> q_0.25 0.3310 0.0413 -0.2119 -0.0286
#> q_0.5 0.8162 0.1020 0.0926 0.0055
#> q_0.75 1.1718 0.1074 0.5818 0.0233
#> q_0.975 1.2615 0.1079 1.1102 0.0513
We can also use multiple predictors:
test.m.c<- bma.bx.cy(pred=weight_behavior[,2:4], m=weight_behavior[,12:14],
y=weight_behavior[,1],n.iter=10,n.burnin = 1)
#> Compiling model graph
#> Resolving undeclared variables
#> Allocating nodes
#> Graph information:
#> Observed stochastic nodes: 3894
#> Unobserved stochastic nodes: 21
#> Total graph size: 19194
#>
#> Initializing model
summary(test.m.c,method=3)
#>
#> For Predictor 1 :
#> Estimated Relative Effects for Method 3:
#> DE sports exercises sweat
#> mean 6.9289 0.0334 0.0739 -0.1515
#> sd 16.8559 0.1516 0.8193 0.7486
#> q_0.025 -23.6426 -0.0362 -0.3381 -1.7113
#> q_0.25 -1.1137 -0.0255 -0.2151 0.0137
#> q_0.5 10.0312 -0.0088 -0.1956 0.0435
#> q_0.75 14.4742 -0.0083 -0.1312 0.1027
#> q_0.975 26.5130 0.3488 1.7928 0.2983
#>
#> For Predictor 2 :
#> Estimated Relative Effects for Method 3:
#> DE sports exercises sweat
#> mean -0.7195 0.0680 -0.0258 0.0284
#> sd 0.6195 0.0463 0.0136 0.0278
#> q_0.025 -1.4154 0.0198 -0.0441 -0.0076
#> q_0.25 -1.1402 0.0362 -0.0336 0.0075
#> q_0.5 -1.0249 0.0673 -0.0228 0.0302
#> q_0.75 -0.2019 0.0813 -0.0190 0.0453
#> q_0.975 0.1504 0.1557 -0.0044 0.0713
#>
#> For Predictor 3 :
#> Estimated Relative Effects for Method 3:
#> DE sports exercises sweat
#> mean 0.9630 -0.0191 0.0424 -0.1126
#> sd 2.2158 0.1350 0.1023 0.3410
#> q_0.025 -0.8471 -0.1800 -0.0424 -0.8152
#> q_0.25 -0.2750 -0.0897 -0.0013 -0.1375
#> q_0.5 0.4642 -0.0295 0.0217 -0.0058
#> q_0.75 1.1598 0.0416 0.0355 0.0251
#> q_0.975 5.4945 0.2311 0.2530 0.1579
#>
#> For Predictor 4 :
#> Estimated Relative Effects for Method 3:
#> DE sports exercises sweat
#> mean 0.1375 -0.0075 0.0146 -0.0223
#> sd 0.1371 0.0046 0.0084 0.0204
#> q_0.025 -0.0498 -0.0148 0.0023 -0.0523
#> q_0.25 0.0206 -0.0105 0.0085 -0.0392
#> q_0.5 0.2019 -0.0060 0.0145 -0.0240
#> q_0.75 0.2419 -0.0033 0.0206 -0.0091
#> q_0.975 0.3005 -0.0021 0.0259 0.0057
#>
#> For Predictor 5 :
#> Estimated Relative Effects for Method 3:
#> DE sports exercises sweat
#> mean -0.7768 -0.2355 0.0274 -0.1235
#> sd 4.5994 0.5593 0.0789 0.1570
#> q_0.025 -9.9570 -1.3284 -0.1007 -0.3808
#> q_0.25 -0.7842 -0.1480 -0.0025 -0.1879
#> q_0.5 0.0797 -0.1184 0.0226 -0.0645
#> q_0.75 1.6598 0.0554 0.0616 -0.0299
#> q_0.975 3.9870 0.3482 0.1512 0.0528
#>
#> For Predictor 6 :
#> Estimated Relative Effects for Method 3:
#> DE sports exercises sweat
#> mean 1.0455 -0.2049 0.0012 -0.0490
#> sd 0.7577 0.4949 0.0046 0.1548
#> q_0.025 -0.4474 -1.0917 -0.0028 -0.3137
#> q_0.25 1.0943 -0.4152 -0.0019 -0.1336
#> q_0.5 1.3056 0.0446 -0.0012 0.0074
#> q_0.75 1.4263 0.1071 0.0051 0.0216
#> q_0.975 1.8060 0.1783 0.0093 0.1240
#>
#> For Predictor 7 :
#> Estimated Relative Effects for Method 3:
#> DE sports exercises sweat
#> mean 1.8936 -0.8338 0.1238 -0.1836
#> sd 2.5220 2.3519 0.3594 0.5285
#> q_0.025 0.5737 -5.8344 -0.0255 -1.3063
#> q_0.25 0.9369 0.0161 -0.0147 -0.0045
#> q_0.5 0.9602 0.0361 -0.0059 0.0037
#> q_0.75 0.9907 0.0652 -0.0004 0.0358
#> q_0.975 7.2568 0.3765 0.8839 0.0753
The following is an example with the binary outcome overweight (yes or no):
test.m.b<- bma.bx.cy(pred=weight_behavior[,3], m=weight_behavior[,12:14],
y=weight_behavior[,15],cova=weight_behavior[,5],n.iter=500,n.burnin = 100)
#> Compiling model graph
#> Resolving undeclared variables
#> Allocating nodes
#> Graph information:
#> Observed stochastic nodes: 3930
#> Unobserved stochastic nodes: 22
#> Total graph size: 8973
#>
#> Initializing model
summary(test.m.b,method=2)
#>
#> For Predictor 1 :
#> Estimated Relative Effects for Method 2:
#> DE sports exercises sweat
#> mean 0.2462 0.6748 -0.0696 0.1485
#> sd 8.2216 7.0225 0.8763 2.0993
#> q_0.025 -0.1702 0.0487 -0.1887 -0.0627
#> q_0.25 0.6469 0.1442 -0.0338 0.0022
#> q_0.5 0.7748 0.2120 -0.0138 0.0274
#> q_0.75 0.8424 0.3189 0.0002 0.0644
#> q_0.975 0.9824 1.0683 0.0481 0.2592
In such a case, the default jags model for y is set as follows; it can be revised:
logit(mu_y[i]) <- beta0 + inprod(c, x[i,]) + inprod(beta,M1[i,]) + inprod(eta,cova[i,])
y[i] ~ dbern(mu_y[i])
The following is an example with a time-to-event (survival) outcome:
test.m.t.1<- bma.bx.cy(pred=example2[,"x"], m=example2[,"M"],
                       y=Surv(example2[,"time"],example2[,"status"]),
                       inits=function(){list(r=1,lambda=0.01)},n.iter=10,n.burnin = 1)
#> Compiling model graph
#> Resolving undeclared variables
#> Allocating nodes
#> Graph information:
#> Observed stochastic nodes: 300
#> Unobserved stochastic nodes: 12
#> Total graph size: 2861
#>
#> Initializing model
temp1=summary(test.m.t.1)
print(temp1,method=1,RE=FALSE)
#>
#> For Predictor 1 :
#> Estimated Effects for Method 1:
#> TE DE m1
#> mean 0.0117 -0.7676 0.7792
#> sd 0.2672 0.1450 0.2272
#> q_0.025 -0.2553 -0.8964 0.4671
#> q_0.25 -0.1708 -0.8964 0.6038
#> q_0.5 -0.0539 -0.8650 0.7753
#> q_0.75 0.0565 -0.6576 0.9148
#> q_0.975 0.5055 -0.5445 1.1543
In such a case, the default jags model for y is set as follows; it can be revised:
elinpred[i] <- exp(inprod(c, x[i,]) + inprod(beta,M1[i,]))
base[i] <- lambda*r*pow(y[i,1], r-1)
loghaz[i] <- log(base[i]*elinpred[i])
phi[i] <- 100000-y[i,2]*loghaz[i]-log(exp(-lambda*pow(y[i,1],r)*elinpred[i])-exp(-lambda*pow(tmax,r)*elinpred[i])) +log(1-exp(-lambda*pow(tmax,r)*elinpred[i]))
zero[i] ~ dpois(phi[i])
The following is an example of setting the priors for r and lambda:
r ~ dunif(0,10) # dunif(1,1.2)
lambda ~ dgamma(1,0.01)
Finally, the following is an example with a categorical outcome:
test.m.c<- bma.bx.cy(pred=weight_behavior[,2:4], m=weight_behavior[,12:13],
y=as.factor(weight_behavior[,14]),cova=weight_behavior[,5],n.iter=5,n.burnin = 1)
#> Compiling model graph
#> Resolving undeclared variables
#> Allocating nodes
#> Graph information:
#> Observed stochastic nodes: 2592
#> Unobserved stochastic nodes: 20
#> Total graph size: 23800
#>
#> Initializing model
summary(test.m.c,method=3)
#>
#> For Predictor 1 outcome 1 :
#> Estimated Relative Effects for Method 3:
#> DE sports exercises
#> mean -4.2167 -0.0043 0.7099
#> sd 5.0547 0.0098 0.0099
#> q_0.025 -10.6202 -0.0115 0.6968
#> q_0.25 -5.8365 -0.0108 0.7071
#> q_0.5 -3.2761 -0.0075 0.7130
#> q_0.75 -1.6563 -0.0010 0.7157
#> q_0.975 0.5878 0.0086 0.7176
#>
#> For Predictor 1 outcome 2 :
#> Estimated Relative Effects for Method 3:
#> DE sports exercises
#> mean -5.6655 -0.0002 0.6910
#> sd 3.2993 0.0068 0.0079
#> q_0.025 -9.8006 -0.0055 0.6804
#> q_0.25 -6.7988 -0.0046 0.6893
#> q_0.5 -5.1244 -0.0022 0.6942
#> q_0.75 -3.9911 0.0022 0.6960
#> q_0.975 -2.4503 0.0087 0.6962
#>
#> For Predictor 2 outcome 1 :
#> Estimated Relative Effects for Method 3:
#> DE sports exercises
#> mean 12.0956 0.2155 3.3290
#> sd 4.7696 0.0205 0.1059
#> q_0.025 7.3261 0.1978 3.1924
#> q_0.25 8.5362 0.2005 3.2865
#> q_0.5 11.9964 0.2112 3.3620
#> q_0.75 15.5558 0.2261 3.4044
#> q_0.975 17.0337 0.2405 3.4093
#>
#> For Predictor 2 outcome 2 :
#> Estimated Relative Effects for Method 3:
#> DE sports exercises
#> mean 23.0677 0.7487 5.7277
#> sd 5.6998 0.0171 0.1584
#> q_0.025 17.2334 0.7257 5.5607
#> q_0.25 19.0993 0.7462 5.6618
#> q_0.5 22.8365 0.7554 5.7135
#> q_0.75 26.8049 0.7579 5.7794
#> q_0.975 29.2948 0.7603 5.9189
#>
#> For Predictor 3 outcome 1 :
#> Estimated Relative Effects for Method 3:
#> DE sports exercises
#> mean -2.1253 -0.2332 0.4842
#> sd 1.8844 0.0120 0.0349
#> q_0.025 -4.4897 -0.2406 0.4472
#> q_0.25 -2.5410 -0.2404 0.4668
#> q_0.5 -1.8084 -0.2383 0.4815
#> q_0.75 -1.3926 -0.2310 0.4989
#> q_0.975 -0.2996 -0.2171 0.5259
#>
#> For Predictor 3 outcome 2 :
#> Estimated Relative Effects for Method 3:
#> DE sports exercises
#> mean -2.4938 -0.2313 0.3784
#> sd 1.1664 0.0058 0.0229
#> q_0.025 -3.9305 -0.2353 0.3580
#> q_0.25 -2.8123 -0.2347 0.3655
#> q_0.5 -2.3447 -0.2335 0.3729
#> q_0.75 -2.0262 -0.2302 0.3858
#> q_0.975 -1.3105 -0.2236 0.4080
#>
#> For Predictor 4 outcome 1 :
#> Estimated Relative Effects for Method 3:
#> DE sports exercises
#> mean -181.6685 2.7459 22.5675
#> sd 224.5572 2.9620 22.1762
#> q_0.025 -471.8530 0.5170 6.9277
#> q_0.25 -267.7575 0.6069 7.7812
#> q_0.5 -117.2815 1.8151 14.5005
#> q_0.75 -31.1925 3.9541 29.2869
#> q_0.975 -0.9419 6.5573 51.9213
#>
#> For Predictor 4 outcome 2 :
#> Estimated Relative Effects for Method 3:
#> DE sports exercises
#> mean -3.8235 0.0314 0.5043
#> sd 1.7930 0.0113 0.0126
#> q_0.025 -5.0224 0.0219 0.4945
#> q_0.25 -4.9462 0.0220 0.4986
#> q_0.5 -4.5319 0.0297 0.5002
#> q_0.75 -3.4092 0.0390 0.5059
#> q_0.975 -1.4204 0.0439 0.5209
#>
#> For Predictor 5 outcome 1 :
#> Estimated Relative Effects for Method 3:
#> DE sports exercises
#> mean -0.7367 0.0754 0.4999
#> sd 0.4342 0.0109 0.0044
#> q_0.025 -1.1502 0.0616 0.4955
#> q_0.25 -1.0849 0.0718 0.4980
#> q_0.5 -0.7399 0.0776 0.4993
#> q_0.75 -0.3916 0.0813 0.5012
#> q_0.975 -0.3177 0.0856 0.5054
#>
#> For Predictor 5 outcome 2 :
#> Estimated Relative Effects for Method 3:
#> DE sports exercises
#> mean -1.0196 0.1048 0.5412
#> sd 0.3607 0.0093 0.0041
#> q_0.025 -1.3495 0.0928 0.5358
#> q_0.25 -1.3197 0.1020 0.5404
#> q_0.5 -1.0250 0.1071 0.5422
#> q_0.75 -0.7249 0.1100 0.5430
#> q_0.975 -0.6806 0.1129 0.5449
#>
#> For Predictor 6 outcome 1 :
#> Estimated Relative Effects for Method 3:
#> DE sports exercises
#> mean -0.8002 0.0873 0.2468
#> sd 0.6029 0.0057 0.0149
#> q_0.025 -1.2810 0.0831 0.2289
#> q_0.25 -1.1077 0.0841 0.2383
#> q_0.5 -0.9872 0.0854 0.2498
#> q_0.75 -0.6797 0.0886 0.2583
#> q_0.975 -0.0015 0.0949 0.2595
#>
#> For Predictor 6 outcome 2 :
#> Estimated Relative Effects for Method 3:
#> DE sports exercises
#> mean -1.2981 0.1380 0.3060
#> sd 0.5951 0.0052 0.0159
#> q_0.025 -1.7672 0.1336 0.2867
#> q_0.25 -1.6061 0.1344 0.2973
#> q_0.5 -1.4867 0.1367 0.3091
#> q_0.75 -1.1787 0.1403 0.3178
#> q_0.975 -0.5083 0.1445 0.3199
#>
#> For Predictor 7 outcome 1 :
#> Estimated Relative Effects for Method 3:
#> DE sports exercises
#> mean 1.3287 -0.1025 -0.2922
#> sd 0.7597 0.0083 0.0427
#> q_0.025 0.3658 -0.1076 -0.3487
#> q_0.25 1.0020 -0.1067 -0.3048
#> q_0.5 1.5115 -0.1062 -0.2776
#> q_0.75 1.8382 -0.1019 -0.2650
#> q_0.975 1.9810 -0.0913 -0.2607
#>
#> For Predictor 7 outcome 2 :
#> Estimated Relative Effects for Method 3:
#> DE sports exercises
#> mean 1.1661 -0.0727 -0.1361
#> sd 0.3409 0.0023 0.0159
#> q_0.025 0.7292 -0.0742 -0.1568
#> q_0.25 1.0287 -0.0740 -0.1410
#> q_0.5 1.2557 -0.0736 -0.1316
#> q_0.75 1.3931 -0.0723 -0.1267
#> q_0.975 1.4506 -0.0696 -0.1230
In such a case, the default jags model for y is set as follows; it can be revised:
mu_y1[i,1] <- 1
for (k in 2:caty) #caty is the number of categories of y
{mu_y1[i,k] <- exp(beta0[k-1] + inprod(c[k-1,], x[i,]) + inprod(beta[k-1,],M1[i,]) + inprod(eta[k-1,],cova[i,]))}
sum_y[i] <- sum(mu_y1[i,1:caty])
for (l in 1:caty)
{mu_y[i,l] <- mu_y1[i,l]/sum_y[i]}
y[i] ~ dcat(mu_y[i,])