Post date: Oct 14, 2019 7:41:14 PM
Our selection coefficients from the 2019 Chumash experiment are standardized partial regression coefficients. If we had used multiple regression with normal errors, these would be equal to the standardized selection gradients for linear and correlational selection, but would need to be doubled for quadratic selection (https://onlinelibrary.wiley.com/doi/full/10.1111/j.1558-5646.2008.00449.x). But we fit generalized linear models with binomial errors instead. This makes more sense given the binary survival data, but it does complicate things a bit (https://onlinelibrary.wiley.com/doi/epdf/10.1111/j.1558-5646.1998.tb02237.x). I think folks often mistakenly equate these with selection gradients, but they are a tad different in that they denote the effect of a unit change in traits on the log odds of survival, not on survival itself (or some other metric of fitness). This means that a one-unit change in the trait affects raw survival differently depending on the current/reference value of the trait/survival (i.e., as the probability of survival approaches 0 or 1, a change in the trait matters less and less). You can numerically approximate selection gradients by multiplying the selection coefficients by mean(w(z) * (1 - w(z))). I did this. The code is in selectionGradBayes.R (a quick simulated sanity check of the conversion follows the script):
## https://onlinelibrary.wiley.com/doi/epdf/10.1111/j.1558-5646.1998.tb02237.x
## convert the logistic selection coefficients to approximate selection
## gradients: scale everything by the block mean of w(z) * (1 - w(z)), and
## double the quadratic terms (rows 3 and 4) per Stinchcombe et al. 2008
scaleGrad <- function(cf, wbar){
	cf[3:4, ] <- 2 * cf[3:4, ] ## quadratic coefficients get doubled
	cf * wbar
}

## AC: rows 25:251 of est_hcl_ac hold the fitted survival probabilities
## (column 3 = point estimate)
wz <- est_hcl_ac[25:251, 3] * (1 - est_hcl_ac[25:251, 3])
wzBk <- tapply(X = wz, INDEX = Dac$bk, mean) ## mean w(1 - w) per block
## rows 7:11, 13:17, and 19:23 hold the five selection coefficients
## (linear x2, quadratic x2, correlational) for blocks 1-3; keep the
## point estimate and interval bounds (columns 3, 1, 5)
sg_AC_bk1 <- scaleGrad(est_hcl_ac[7:11, c(3, 1, 5)], wzBk[1])
sg_AC_bk2 <- scaleGrad(est_hcl_ac[13:17, c(3, 1, 5)], wzBk[2])
sg_AC_bk3 <- scaleGrad(est_hcl_ac[19:23, c(3, 1, 5)], wzBk[3])

## MM: same conversion
wz <- est_hcl_mm[25:251, 3] * (1 - est_hcl_mm[25:251, 3])
wzBk <- tapply(X = wz, INDEX = Dmm$bk, mean)
sg_MM_bk1 <- scaleGrad(est_hcl_mm[7:11, c(3, 1, 5)], wzBk[1])
sg_MM_bk2 <- scaleGrad(est_hcl_mm[13:17, c(3, 1, 5)], wzBk[2])
sg_MM_bk3 <- scaleGrad(est_hcl_mm[19:23, c(3, 1, 5)], wzBk[3])
## combine the gradients with the raw logistic coefficients and write out
sg <- rbind(sg_AC_bk1, sg_AC_bk2, sg_AC_bk3, sg_MM_bk1, sg_MM_bk2, sg_MM_bk3)
cf <- rbind(est_hcl_ac[c(7:11, 13:17, 19:23), c(3, 1, 5)],
	est_hcl_mm[c(7:11, 13:17, 19:23), c(3, 1, 5)])
params <- round(cbind(sg, cf), 3)
write.table(params, file = "selectionAnalysisParams.csv", quote = FALSE)
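For intuition, here is a quick sanity check of the conversion on simulated data. This is just a toy with one trait and made-up numbers, not the Chumash data: under the logistic model the slope of survival on z at a given point is a * w * (1 - w), so for a standard normal trait a plain linear regression of survival on z should land close to the logistic coefficient times mean(w * (1 - w)):

## toy check (simulated data): coefficient * mean(w(1-w)) approximates
## the average effect of a unit change in z on survival itself
set.seed(21)
N <- 1000
z <- rnorm(N) ## standardized trait
w_true <- plogis(0.8 * z) ## survival probabilities, true log-odds slope = 0.8
surv <- rbinom(N, 1, w_true) ## binary survival

o <- glm(surv ~ z, family = binomial)
a <- coef(o)["z"] ## selection coefficient on the log-odds scale
w <- fitted(o)
a * mean(w * (1 - w)) ## approximate selection gradient
coef(lm(surv ~ z))["z"] ## linear-regression slope; should be close

## and the saturation point: a unit change in z matters less near w = 0 or 1
a * 0.50 * (1 - 0.50) ## effect on survival at w = 0.50
a * 0.95 * (1 - 0.95) ## much smaller effect at w = 0.95

The first two numbers should agree to within sampling error, and the last two lines show why a given coefficient translates into weaker selection on survival when survival is near 0 or 1.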
Overall, the selection gradients seem pretty reasonable (range = -0.145 to 0.178): moderately strong selection, but nothing crazy.