Trees are used not only as classifiers, but also for regression. For simplicity, we explain the idea in a two-dimensional feature space.
A regression tree can be formally introduced in many different ways. The main idea is to split the feature space into regions and associate a number with each region so that this number is a good estimate of the labels of the samples in that region. To do so, we can assign to each region the average of the labels of the samples that fall in it, or take the most frequent label value (voting).
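As a minimal sketch of this idea, assume the two-dimensional feature space has already been split into two regions along the first feature (the data, the threshold of 4.0, and the function name `predict` are all hypothetical):

```python
import numpy as np

# Hypothetical 2D samples (feature1, feature2) with numeric labels.
X = np.array([[1.0, 1.0], [2.0, 1.5], [6.0, 2.0], [7.0, 2.5]])
y = np.array([10.0, 12.0, 40.0, 44.0])

# Suppose the space has been split into two regions by feature1 < 4.
in_left = X[:, 0] < 4.0

# Associate each region with the average label of its samples.
mean_left = y[in_left].mean()    # average of 10 and 12 -> 11.0
mean_right = y[~in_left].mean()  # average of 40 and 44 -> 42.0

def predict(point):
    """Predict by the mean label of the region the point falls in."""
    return mean_left if point[0] < 4.0 else mean_right
```

A new sample is then mapped to its region and receives that region's mean, e.g. `predict([3.0, 2.0])` returns `11.0`.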
Let's consider that we want to find a relationship that explains how income and age predict the loan that can be given to a client. First of all, people below age 25 usually cannot get high credit, since there is little credit history for them, unless they have a good income above 50k. People between 25 and 60 form a different group, since they are the active working population. Within this active working group, people with income above 30k have higher credit, so we split them further into above-30k and below-30k income. This categorization partitions the sample into five groups. We take the average of the loans given to the members of each group as the number representing that group. Now, if we have a new individual, we first observe to which group the sample belongs and then present the mean of that group as the prediction. There are standard methods that can efficiently find splits like the ones above. One standard method is to minimize the sum of squared errors (least squares).
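The least-squares criterion can be sketched for a single feature: scan every candidate threshold and pick the one for which the two resulting regions, each predicted by its mean, have the smallest total sum of squared errors. The data below are hypothetical, as is the function name `best_split`:

```python
import numpy as np

def best_split(x, y):
    """Scan thresholds on one feature; return the threshold that minimizes
    the total sum of squared errors when each side is predicted by its mean."""
    order = np.argsort(x)
    x, y = x[order], y[order]
    best_t, best_sse = None, np.inf
    for i in range(1, len(x)):
        if x[i] == x[i - 1]:
            continue  # identical feature values cannot be separated
        t = (x[i] + x[i - 1]) / 2  # midpoint between consecutive values
        left, right = y[:i], y[i:]
        sse = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
        if sse < best_sse:
            best_t, best_sse = t, sse
    return best_t, best_sse

# Toy data: loan amounts (in thousands) jump sharply around age 25.
age  = np.array([20, 22, 24, 30, 40, 50, 58, 65])
loan = np.array([5., 6., 5., 30., 32., 31., 29., 20.])
threshold, sse = best_split(age, loan)  # threshold falls between 24 and 30
```

A full tree-growing procedure would apply this search recursively inside each region, over all features, until a stopping rule is met.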