It is important to maintain a distinction between the statistical model -- that is, the distributional and other assumptions we are making about how the data arose -- and the estimation (inference) procedure -- that is, the method we use to obtain estimates or make inferences from the data. This distinction is lost when people refer to a procedure as a "Bayesian model", a phrase that conflates the two. In fact, we can and often will apply the same statistical model (or likelihood) using frequentist approaches such as maximum likelihood, or alternatively using Bayesian approaches such as MCMC.
To take a simple example, suppose that we conduct a study with n = 100 marked animals and observe (e.g., by radiotelemetry) that x = 56 of them have survived some interval of interest. The statistical model or likelihood for these data is a binomial distribution, and we want to make inferences about the survival parameter p. There are (at least) two approaches, and both involve the binomial likelihood:
1. Maximize the likelihood function given the data to estimate the parameter p and construct confidence intervals.
2. Sample from the posterior distribution of p to obtain Bayesian inference. This requires that we also specify a prior distribution for p, which may be very simple, e.g., uniform on (0, 1).
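To make the comparison concrete, here is a minimal sketch of both approaches for the survival example, written in Python rather than R for illustration. The MLE of a binomial proportion has the closed form x/n with a standard Wald interval, and with a Uniform(0,1) prior (i.e., Beta(1,1)) the posterior is conjugate, Beta(1 + x, 1 + n - x), so no MCMC is actually needed for this simple case.

```python
import numpy as np
from scipy import stats

n, x = 100, 56  # marked animals and observed survivors (from the text)

# --- Frequentist: maximum likelihood ---
p_hat = x / n                                # binomial MLE is the sample proportion
se = np.sqrt(p_hat * (1 - p_hat) / n)        # asymptotic (Wald) standard error
ci = (p_hat - 1.96 * se, p_hat + 1.96 * se)  # 95% Wald confidence interval

# --- Bayesian: uniform prior on (0, 1) ---
# Uniform(0,1) is Beta(1,1), so the posterior is Beta(1 + x, 1 + n - x).
post = stats.beta(1 + x, 1 + n - x)
post_mean = post.mean()
cri = post.interval(0.95)                    # 95% equal-tailed credible interval

print(f"MLE {p_hat:.3f}, 95% CI ({ci[0]:.3f}, {ci[1]:.3f})")
print(f"Posterior mean {post_mean:.3f}, 95% CrI ({cri[0]:.3f}, {cri[1]:.3f})")
```

With this much data and a flat prior, the point estimates and the two intervals nearly coincide, which previews the close agreement discussed below.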
Note that both approaches involve identical assumptions about the sampling distribution, which is at the core of either method. That is, for both we are assuming that we have n independent Bernoulli trials, each with common success parameter p. The only differences are (1) the prior distribution we assume for p may affect Bayesian inference but is irrelevant to ML; (2) the numerical methods used to obtain inference (likelihood maximization vs. MCMC) differ; and (3) confidence intervals and credible intervals (and other measures of uncertainty) are interpreted somewhat differently.
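In symbols, the shared piece is the binomial likelihood for x survivors out of n trials,

```latex
L(p \mid x) \;=\; \binom{n}{x}\, p^{x} (1 - p)^{n - x},
```

which ML maximizes directly, while the Bayesian posterior is proportional to the same likelihood times the prior, $\pi(p \mid x) \propto \pi(p)\, L(p \mid x)$. With a flat prior, the posterior mode and the MLE coincide.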
Kéry and Schaub (2011) drive these points home in their examples in Chapter 3, where they use simulated data that they then analyse by both frequentist approaches (generalized linear models) and Bayesian approaches (MCMC using OpenBUGS). Here I illustrate these approaches using counts of insects, first for a fixed time effect model (counts of insects varying according to a linear model, assuming a Poisson error distribution). I then create a random time effects model. For each model, I conduct the analysis first using glm or lme4 (the MLE procedure) and then using OpenBUGS (Bayesian). You can run the analyses both ways on the same data and see that the MLE and Bayesian results line up fairly closely.
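The frequentist half of the fixed time effect analysis can be sketched without R. Below is a hypothetical Python/numpy version (the simulation settings are my own, not those of Kéry and Schaub): insect counts are simulated from a Poisson distribution with a linear trend on the log scale, and the MLE is obtained by iteratively reweighted least squares (IRLS), the algorithm R's glm uses under the hood.

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulate insect counts over 10 occasions with a linear trend on the
# log scale (hypothetical values for illustration).
t = np.linspace(-1, 1, 10)
X = np.column_stack([np.ones_like(t), t])  # design matrix: intercept + time
beta_true = np.array([2.0, 0.5])           # log-scale intercept and slope
y = rng.poisson(np.exp(X @ beta_true))     # Poisson counts

# Poisson GLM fit by iteratively reweighted least squares (IRLS).
beta = np.zeros(2)
for _ in range(25):
    mu = np.exp(X @ beta)             # current fitted means
    W = np.diag(mu)                   # IRLS weights under the log link
    z = X @ beta + (y - mu) / mu      # working response
    beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ z)

print("MLE of (intercept, slope):", beta)  # close to beta_true
```

A Bayesian fit of the same model (e.g., in OpenBUGS, or via a Metropolis sampler with vague priors on the coefficients) uses exactly this Poisson likelihood, which is why the two sets of estimates line up closely.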