
FAQ

1. How do I estimate occasion-specific detection probabilities?

Here is an example using occu(). The trick is to create an observation-covariate for occasion/visit.

# Simulate data and fit model
R <- 50 
J <- 3 
z <- rbinom(R, 1, 0.5) 
y <- matrix(NA, R, J) 
y[] <- rbinom(R*J, z, 0.3) 
visit <- matrix(as.character(1:J), R, J, byrow=TRUE) 
umf <- unmarkedFrameOccu(y=y, obsCovs=list(visit=visit)) 
(fm <- occu(~visit ~1, umf))  # or ~visit-1

# Back-transform using predict
newdat <- data.frame(visit=as.character(1:J))
predict(fm, newdata=newdat, type="det", appendData=TRUE)


2. Why does backTransform not work for models with covariates?

It does, but you need to use linearComb() first. Here's an example:

data(mallard)
mallardUMF <- unmarkedFramePCount(mallard.y, siteCovs = mallard.site, obsCovs = mallard.obs)

(fm <- pcount(~ 1 ~ forest, mallardUMF))    # Fit a model
backTransform(fm, type="det")               # This works because there are no detection covariates
backTransform(fm, type="state")             # This doesn't work because covariates are present
lc <- linearComb(fm, c(1, 0), type="state") # Estimate abundance on the log scale when forest=0
backTransform(lc)                           # Abundance on the original scale

3. Why does unmarked report AIC instead of AICc?

Because it is not clear what the effective sample size is for these models. See the MacKenzie et al. (2006) book and this thread for some discussion. The R package AICcmodavg lets you choose what you think the sample size is.


4. Does unmarked have methods for modeling spatial autocorrelation?

No, because it is hard to do, but we are working on it. Note also that if your model fits the data well (as determined, for example, by parboot), spatial correlation is unlikely to be a major problem.


5. Can I make species distribution maps or abundance maps?

Yes! Create a data.frame that contains a row for each pixel in your raster(s) and columns for the covariate values associated with each pixel. Then use predict to get estimates of occupancy probability or abundance. You can then map the results using levelplot in the lattice package, or one of the many other plotting functions; the raster package can also be used. See this thread for more details.
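Here is a minimal sketch of that workflow using base R and lattice. The pixel grid and the covariate surface are made up for illustration, and the plogis() line stands in for a real predict() call on a fitted model (a hypothetical fm), so substitute your own covariates and model:

```r
# Sketch: one row per pixel, one column per covariate, then map it.
# The covariate surface and coefficients below are purely illustrative.
library(lattice)

grid <- expand.grid(x = seq(0, 10, by = 0.5), y = seq(0, 10, by = 0.5))
grid$elev_s <- sin(grid$x / 3) + cos(grid$y / 4)   # stand-in covariate values

# In practice, with a fitted model 'fm' you would do:
#   grid$psi <- predict(fm, newdata = grid, type = "state")$Predicted
grid$psi <- plogis(-0.5 + 1.2 * grid$elev_s)       # illustrative occupancy surface

p <- levelplot(psi ~ x + y, data = grid, aspect = "iso",
               main = "Predicted occupancy probability")
```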

Two examples of mapping species distribution and abundance using hierarchical models fitted in unmarked were given in the Jan 2012 Webinar R scripts. One example maps the distribution of a bird species in Switzerland using data from the Swiss Breeding Bird Survey (moduleABUN_MHB.R). The other maps the abundance distribution of the Island Scrub-Jay using distance sampling data (moduleHDS_ISSJ.R). You can download both of those scripts and explore those analyses.

6. What does this error message mean: Error in optim(starts, nll, method = method, hessian = se, control = control) : initial value in 'vmmin' is not finite?

It means that the likelihood of the data evaluated at the starting values is 0, so the log-likelihood is -Inf. In other words, you need to provide better starting values. The default is 0 for all parameters (on the log or logit scale). You can often guess better starting values by inspecting the data and guessing what the estimates should be. For example, if you have counts that are close to 20, you could try log(20) or log(30) as a starting value for 'lambda'.
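As a sketch, starting values are passed via the starts argument, ordered as the model's coefficient vector (state parameters first, then detection). The data here are simulated, and the unmarked call is guarded so the snippet runs even where the package is not installed:

```r
# Sketch: supply starting values when the defaults give a -Inf log-likelihood.
set.seed(1)
y <- matrix(rpois(150, 20), 50, 3)   # simulated counts near 20
start_lam <- log(20)                 # starting value for log(lambda)

if (requireNamespace("unmarked", quietly = TRUE)) {
  library(unmarked)
  umf <- unmarkedFramePCount(y = y)
  # starts is ordered like coef(fm): abundance parameters first, then detection
  fm <- pcount(~1 ~1, umf, K = 100, starts = c(start_lam, 0))
}
```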

7. What is the K argument in pcount, gmultmix, gdistsamp, and pcountOpen?

For these abundance models, the conditional likelihood must be evaluated at every "possible" value of N. K is the maximum possible value of N at a site. K should be set to some value such that increasing K any more does not affect the parameter estimates. Computation time increases with K, so start low and then increase it until parameter estimates stabilize.
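The "increase K until the estimates stabilize" advice can be sketched as a simple loop. The data are simulated, and the unmarked calls are guarded so the snippet runs even where the package is not installed:

```r
# Sketch: refit with increasing K and watch whether lambda-hat changes.
set.seed(123)
R <- 50; J <- 3
N <- rpois(R, 4)                                  # true site abundances
y <- matrix(rbinom(R * J, N, 0.4), R, J)          # counts; N recycles by column

if (requireNamespace("unmarked", quietly = TRUE)) {
  library(unmarked)
  umf <- unmarkedFramePCount(y = y)
  for (K in c(20, 50, 100)) {
    fm <- pcount(~1 ~1, umf, K = K)
    cat("K =", K, " lambda-hat =", exp(coef(fm)[1]), "\n")
  }
  # When the printed estimates stop changing, K is large enough.
}
```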

8. Do I have to standardize my continuous covariates?

No. If everything runs smoothly without standardizing, there is nothing to worry about. If you get errors during model fitting, you might want to standardize your covariates, because doing so stabilizes the optimization process; that is, it makes it easier for optim to find the maximum likelihood estimates.

9. If I standardize my covariates, how can I make predictions on the unstandardized scale?

Make predictions on the standardized scale, and then convert the standardized covariate values back to the original scale by multiplying by the original SD and adding back the mean. There are examples of this in the webinar R scripts here. One example is shown at the end of the moduleOCC.R script.
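The back-conversion is one line of arithmetic; a base-R sketch with hypothetical mean and SD values saved from the standardization step:

```r
# Sketch: convert standardized covariate values back to original units.
elev_mean <- 425; elev_sd <- 237               # hypothetical: saved when standardizing
elev_s_grid <- seq(-2, 2, length.out = 5)      # standardized values used in newdata
elev_orig <- elev_s_grid * elev_sd + elev_mean # multiply by SD, add back the mean

# Plot predictions against elev_orig so the x-axis is in original units.
```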

10. How do I compute standard errors or confidence intervals for the estimates of derived parameters?

The two most general options are the delta method or the bootstrap. unmarked does not currently have a function to compute delta approximations for arbitrary functions, so it is easiest to use parboot or nonparboot.
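For a derived parameter that is a smooth function of one estimate, the delta method can be done by hand in base R. The logit-scale estimate and SE below are made-up numbers, and the derived parameter is a back-transformed probability:

```r
# Sketch: delta-method SE for a derived parameter p = plogis(beta).
beta <- 0.5; se_beta <- 0.2            # illustrative logit-scale estimate and SE
p <- plogis(beta)                      # derived estimate on the probability scale
grad <- p * (1 - p)                    # derivative of plogis at beta
se_p <- grad * se_beta                 # delta-method standard error

# A CI transformed from the logit scale usually behaves better than p +/- 1.96*se_p:
ci <- plogis(beta + c(-1.96, 1.96) * se_beta)
```

With parboot, the equivalent idea is to pass a statistic function that computes the derived quantity from the fitted model, then take the SD of the bootstrap replicates as the SE.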

11. I fit a model with a factor, but when I try to make model predictions, I get the following error: Error in `contrasts<-`(`*tmp*`, value = contr.funs[1 + isOF[nn]]) : contrasts can be applied only to factors with 2 or more levels

This is a bug, which will eventually be fixed. In the meantime, the workaround is to ensure that the factor in newdata has exactly the same levels as the factor used to fit the model. For example, if you want predictions for year "2003", but the original factor also included levels for "2004" and "2005", you would need to do this:

nd <- data.frame(year=factor("2003", levels=c("2003", "2004", "2005")))
predict(fm, type="state", newdata=nd)


