It is actually the Homarus gammarus. And I am, thus far, possibly, the only person ever to utter "HoobaGaBoot" (though maybe not the last, particularly if you are reading this post aloud at the moment!). H-bGBT is the (well, my) acronym for Histogram-based Gradient Boosting Tree. Which is a machine-learning model.
Enough tomfoolery. Let's break this down: Histogram-based Gradient Boosting Tree. A histogram is a bar-chart representation of data that shows how the values of a variable are distributed across bins. Gradient boosting trees are a type of ensemble learning model, built on the decision-tree paradigm but specifically focused on large numbers of weak learners, each improving upon the results of the one before it and reducing the residual error in the process. A more detailed technical description of the approach and algorithm can be found in an excellent article by Alexey Natekin and Alois Knoll on this topic:
"In gradient boosting machines, or simply, GBMs, the learning procedure consecutively fits new models to provide a more accurate estimate of the response variable. The principle idea behind this algorithm is to construct the new base-learners to be maximally correlated with the negative gradient of the loss function, associated with the whole ensemble. The loss functions applied can be arbitrary, but to give a better intuition, if the error function is the classic squared-error loss, the learning procedure would result in consecutive error-fitting." 
Furthermore, you can find a great explanation of the math and science behind gradient boosting trees here. TL;DR version: it's very fast. And as anyone who has spent any time doing hyperparameter tuning on a random forest or neural network knows, fast can sometimes be hard to come by. And this particular variant is even faster: according to the scikit-learn documentation, the histogram-based gradient boosting tree is *much* faster than the standard variant.
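If you want a feel for that speed gap, here is a rough sketch of a timing comparison on synthetic data (my own toy setup, not an official benchmark; the exact numbers will vary with your machine and scikit-learn version):

# Rough timing comparison: standard vs. histogram-based gradient boosting
from time import perf_counter
from sklearn.datasets import make_classification
from sklearn.experimental import enable_hist_gradient_boosting  # noqa: needed on scikit-learn < 1.0
from sklearn.ensemble import GradientBoostingClassifier, HistGradientBoostingClassifier

# A moderately sized synthetic classification problem
X, y = make_classification(n_samples=50_000, n_features=20, random_state=1)

for name, model in [('GradientBoosting', GradientBoostingClassifier(random_state=1)),
                    ('HistGradientBoosting', HistGradientBoostingClassifier(random_state=1))]:
    start = perf_counter()
    model.fit(X, y)
    print(f'{name}: {perf_counter() - start:.1f} seconds to fit')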
But I saved the best part for last. H-bGBT will not fail or crap out when passed data that includes null values. I hear you cheering (or that may be the voices in my head). Here's how the scikit-learn team describes it:
"This estimator has native support for missing values (NaNs). During training, the tree grower learns at each split point whether samples with missing values should go to the left or right child, based on the potential gain. When predicting, samples with missing values are assigned to the left or right child consequently. If no missing values were encountered for a given feature during training, then samples with missing values are mapped to whichever child has the most samples."
I recently wrote about the perils of imputing null values; it's great to see a model-based alternative. Want to see it in action on a real problem? Here is a simple coding example, adapted from a classification problem I recently worked on:
# The "experimental" enabler import below is required on scikit-learn versions before 1.0
from sklearn.experimental import enable_hist_gradient_boosting  # noqa
from sklearn.ensemble import HistGradientBoostingClassifier

# Instantiate a classifier and fit it to the training data only;
# the held-out test set is reserved for evaluation further down
HbGBT = HistGradientBoostingClassifier(max_iter=400, max_depth=4, max_leaf_nodes=8, verbose=3, random_state=1)
HbGBT.fit(X_train, y_train)
Above, I instantiate an HbGB classifier and fit it to the training data. I set it to be verbose because I don't like being kept in the dark, especially where ensemble models are concerned.
Making predictions and evaluating the model are pretty straightforward:
y_predictions_HbGBT = HbGBT.predict(X_test)
y_predictions_HbGBT
array([1, 1, 0, 1, 1, 0, 1, 1, 1, 0, 0, 1, 1, 0, 0, 0, 1, 0, 0, 1, 1, 0,
       1, 1, 0, 1, 0, 0, 0, 0, 1, 0, 1, 1, 0, 1, 0, 1, 1, 1, 1, 1, 0, 0,
       1, 1, 0, 0, 0, 0, 0, 1, 1, 0, 1, 1, 1, 0, 1, 1, 0, 0, 0, 1, 1, 0,
       1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 0], dtype=int64)
# Build the confusion matrix (rows are actual classes, columns are predicted classes)
from sklearn.metrics import confusion_matrix

cm_HbGBT = confusion_matrix(y_test, y_predictions_HbGBT)
cm_HbGBT
array([[24, 11],
       [ 9, 35]], dtype=int64)
How'd it do? So-so, tbh: 59 of the 79 test samples are classified correctly (24 true negatives and 35 true positives), with 11 false positives and 9 false negatives.
# Compare training and testing accuracy scores
from pprint import pprint

pprint(f'Score on training set: {HbGBT.score(X_train, y_train)}')
pprint(f'Score on testing set: {HbGBT.score(X_test, y_test)}')
'Score on training set: 0.9788135593220338'
'Score on testing set: 0.7468354430379747'
The model is fairly overfit (roughly 98% accuracy on the training set versus 75% on the test set), which is typical for an untuned ensemble model. In my next blog post, we will walk through hyperparameter tuning. See you then!