The hinge loss is a loss function used for training classifiers. It is used for "maximum-margin" classification, most notably by support vector machines (SVMs).
For a two-class classification problem with target label t = +1 or -1 and a model prediction score y, the hinge loss of the prediction y is defined as
ℓ(y) = max(0, 1 − t · y)
Note that y should be the "raw" output of the classifier's decision function, not the predicted class label.
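As a concrete illustration, the definition can be written as a small Python function; the name hinge_loss and its signature are illustrative rather than taken from any particular library:

    def hinge_loss(y, t):
        """Binary hinge loss; y is the raw decision score, t is the true label in {+1, -1}."""
        return max(0.0, 1.0 - t * y)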
When t and y have the same sign and |y| >= 1 (i.e. t = +1 and y >= 1, or t = -1 and y <= -1), the example is classified correctly with a sufficient margin, and the hinge loss is 0.
When y and t have opposite signs, the prediction is wrong, and the hinge loss grows linearly with |y|: a more confident wrong prediction is penalized more heavily.
When y and t have the same sign (a correct prediction) but |y| < 1, the hinge loss lies between 0 and 1, because the prediction does not achieve the desired margin of 1, i.e. it is not confident enough.
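The three cases can be checked numerically with the hinge_loss sketch above; the scores used here are arbitrary examples:

    print(hinge_loss(y=2.5, t=+1))   # 0.0: correct prediction with enough margin
    print(hinge_loss(y=-2.0, t=+1))  # 3.0: wrong sign, loss grows linearly with |y|
    print(hinge_loss(y=0.3, t=+1))   # 0.7: correct sign but inside the margin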
The hinge loss has also been extended to multi-class classification, for example in the Weston–Watkins and Crammer–Singer formulations.
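As a rough sketch of one such extension, the Crammer–Singer formulation penalizes the gap between the score of the correct class and the highest score among the other classes; the names multiclass_hinge_loss, scores, and correct_class below are illustrative:

    def multiclass_hinge_loss(scores, correct_class):
        """Crammer-Singer multi-class hinge loss; scores is a list of per-class decision scores."""
        correct_score = scores[correct_class]
        # Highest score among the incorrect classes.
        best_wrong = max(s for i, s in enumerate(scores) if i != correct_class)
        return max(0.0, 1.0 - (correct_score - best_wrong))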