Contributed talk

Speaker: Danielle Van Boxel (University of Arizona)

Location and Time: SAS 221, Saturday, 11:15–11:45 AM

Title: Bayesian Additive Regression Networks

Abstract: We apply Bayesian Additive Regression Trees (BART) principles to training an ensemble of small neural networks for regression tasks. Using Markov chain Monte Carlo, we sample from the space of single-hidden-layer neural networks conditioned on how well they fit the data. To form an ensemble, we apply Gibbs sampling to update each network against the residual target value (i.e., the target after subtracting the contributions of the other networks). We demonstrate the effectiveness of this technique on several benchmark regression problems, comparing it to equivalent single neural networks, BART, and ordinary least squares. Our Bayesian Additive Regression Networks (BARN) provide more consistent and often more accurate results, at the cost of greater computation time. Finally, we make BARN available as a Python scikit-learn-compatible model, with documentation at https://dvbuntu.github.io/barmpy/ for general machine learning practitioners.
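
The residual-update loop described in the abstract can be sketched as follows. This is a hypothetical illustration of the Gibbs-style backfitting structure only, not barmpy's actual API: where BARN draws each network from a posterior via MCMC, this sketch substitutes a point refit (`MLPRegressor.fit`) for the MCMC draw, and all data and hyperparameters here are invented for the example.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Synthetic regression data (invented for this illustration).
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 3))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2 + 0.1 * rng.normal(size=200)

# Ensemble of small single-hidden-layer networks, as in the abstract.
n_nets = 5
nets = [MLPRegressor(hidden_layer_sizes=(4,), max_iter=500, random_state=k)
        for k in range(n_nets)]
preds = np.zeros((n_nets, len(y)))  # current contribution of each network

for sweep in range(3):                 # outer Gibbs-style sweeps
    for k, net in enumerate(nets):     # update one network at a time
        # Residual target: subtract the effect of the other networks.
        residual = y - (preds.sum(axis=0) - preds[k])
        net.fit(X, residual)           # stand-in for an MCMC draw of net k
        preds[k] = net.predict(X)

ensemble_pred = preds.sum(axis=0)
rmse = np.sqrt(np.mean((ensemble_pred - y) ** 2))
print(f"training RMSE after backfitting: {rmse:.3f}")
```

The key design point is that each network only ever sees the residual left by its peers, so the ensemble members specialize rather than all fitting the full signal.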