We can readily generalize the fixed binomial model to allow for a random effect. We do this by generating the p's as random effects, so that the p[i] are now thought of as random realizations from some distribution whose parameters we specify. A common way to do this is to switch to the logit scale, which allows us to use a familiar distribution like the normal. The parameters (or hyperparameters) of the random effects distribution are mu, the mean, and tau, the precision. We proceed by first putting priors on these parameters:
mu ~ dnorm(0, 0.001)
# put a uniform prior on the standard deviation
sigma ~ dunif(0, 10)
# BUGS parameterizes the normal by the precision, not the sd
# note: in JAGS you can equivalently write tau <- 1/sigma^2
tau <- pow(sigma, -2)
Once we've set the priors we can build the likelihood, where the random effects now enter at the sample level:
# likelihood
for (i in 1:n.samples)
{
  # the random effect, on the logit scale
  y[i] ~ dnorm(mu, tau)
  # back-transform to the probability scale
  p[i] <- 1/(1 + exp(-y[i]))
  # the binomial likelihood is nearly the same as before
  x[i] ~ dbin(p[i], n[i])
}
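Putting the priors and likelihood together, the complete model can be saved as a text file for JAGS or BUGS to read. The file name binom.re.txt used below is my choice for illustration, not from the original materials:

# contents of binom.re.txt (name is an assumption)
model
{
  # hyperpriors on the random-effects distribution
  mu ~ dnorm(0, 0.001)
  sigma ~ dunif(0, 10)
  tau <- pow(sigma, -2)
  # likelihood with sample-level random effects
  for (i in 1:n.samples)
  {
    y[i] ~ dnorm(mu, tau)
    p[i] <- 1/(1 + exp(-y[i]))
    x[i] ~ dbin(p[i], n[i])
  }
}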
We then proceed as before, sampling from the posterior distribution using MCMC. Below is code to simulate some random effects and data and to perform the MCMC.
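The original attached script is not reproduced here; the following R sketch (using the rjags package and the binom.re.txt model file above, with illustrative values for the sample sizes and hyperparameters that are my assumptions) shows one way to do this:

# load rjags; assumes JAGS is installed on the system
library(rjags)

# --- simulate random effects and data (values are illustrative) ---
set.seed(123)
n.samples <- 20              # number of binomial samples
n <- rep(30, n.samples)      # trials per sample
mu.true <- 0                 # true mean on the logit scale
sigma.true <- 1              # true sd of the random effects
y.true <- rnorm(n.samples, mu.true, sigma.true)  # logit-scale random effects
p.true <- 1/(1 + exp(-y.true))                   # probabilities
x <- rbinom(n.samples, n, p.true)                # binomial counts

# --- fit the model with MCMC ---
jags.data <- list(x = x, n = n, n.samples = n.samples)
model <- jags.model("binom.re.txt", data = jags.data,
                    n.chains = 3, n.adapt = 1000)
update(model, 1000)          # burn-in
samples <- coda.samples(model, variable.names = c("mu", "sigma"),
                        n.iter = 10000)
summary(samples)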
Finally, in general we will not know the "true" model (as we did here, since we simulated the data from that model), which means we may have two or more competing models for the same data structure. Just as AIC serves for model comparison with MLE methods, in Bayesian approaches we have DIC (Deviance Information Criterion), which is used in much the same way: DIC = Dbar + pD, where Dbar is the posterior mean deviance and pD is the effective number of parameters, and smaller values are preferred. The attached code computes DIC for the simulated data example above (the simulated data must still be in memory and the two model .txt files in the working directory), and produces a table comparing the two models (the constant-p fixed model, and the random-effects model that was used to generate the data). In this example you may see very little difference in DIC, which can easily happen when there is not much data (as here).
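Again, the attached script is not shown; here is a minimal R sketch using rjags::dic.samples, reusing jags.data from the previous sketch and assuming the fixed model is saved as binom.fixed.txt (both file names and the fixed model's contents are my assumptions, not from the original):

library(rjags)

# a plausible constant-p fixed model (binom.fixed.txt) might contain:
# model {
#   p ~ dbeta(1, 1)
#   for (i in 1:n.samples) { x[i] ~ dbin(p, n[i]) }
# }

# dic.samples() needs at least 2 parallel chains to estimate the penalty
m.fixed <- jags.model("binom.fixed.txt", data = jags.data, n.chains = 3)
m.re    <- jags.model("binom.re.txt",    data = jags.data, n.chains = 3)
update(m.fixed, 1000)
update(m.re, 1000)

# mean deviance plus penalty (pD) gives the penalized deviance, i.e. DIC
dic.fixed <- dic.samples(m.fixed, n.iter = 10000)
dic.re    <- dic.samples(m.re,    n.iter = 10000)
dic.fixed
dic.re
diffdic(dic.fixed, dic.re)   # difference in penalized deviance between models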