
Applied Nonparametric Regression (Härdle)

 
 Author(s):  Wolfgang Härdle
 Title:      Applied Nonparametric Regression
 Edition:    
 Year:       1990
 Publisher:  Cambridge University Press
 ISBN:       0-521-42950-1
 Website:    www.cambridge.org/americas/
             www.cambridge.org/gb/knowledge/isbn/item1142597/?site_locale=en_GB
 

A PDF version of the book was available at the following URL as of 28 Apr 2011:  http://www.globalsepri.org/UploadPhotos/2008912164043348.pdf


Related tutorial:
http://faculty.washington.edu/heagerty/Courses/b571/homework/spline-tutorial.q


This volume focuses on the applications and practical problems of two central aspects of curve smoothing: the choice of smoothing parameters and the construction of confidence bounds. Härdle argues that all smoothing methods are based on a local averaging mechanism and can be seen as essentially equivalent to kernel smoothing. To simplify the exposition, kernel smoothers are introduced and discussed in great detail. Building on this exposition, various other smoothing methods (among them splines and orthogonal polynomials) are presented and their merits discussed.
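The local-averaging mechanism that Härdle treats as the common core of all smoothers can be sketched with a Nadaraya-Watson kernel regression estimate. This is a minimal illustration, not code from the book; the Gaussian kernel, the bandwidth h = 0.4, and the simulated sine data are arbitrary choices for demonstration:

```python
import numpy as np

def nadaraya_watson(x_grid, x, y, h):
    """Nadaraya-Watson kernel regression estimate on x_grid.

    Each fitted value is a locally weighted average of the responses y,
    with Gaussian kernel weights K((x_grid - x_i) / h); h is the bandwidth.
    """
    # Pairwise scaled distances: one row per evaluation point.
    u = (x_grid[:, None] - x[None, :]) / h
    w = np.exp(-0.5 * u ** 2)          # Gaussian kernel weights
    return (w @ y) / w.sum(axis=1)     # weighted local average

# Simulated noisy sine data (illustrative only).
rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0, 2 * np.pi, 200))
y = np.sin(x) + rng.normal(0, 0.3, size=x.size)

grid = np.linspace(0, 2 * np.pi, 50)
m_hat = nadaraya_watson(grid, x, y, h=0.4)
```

Shrinking h makes the average more local (less bias, more variance), which is exactly the smoothing-parameter trade-off the book's Part II analyzes.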




Table of Contents

Preface

Part I. Regression Smoothing

1. Introduction

2. Basic idea of smoothing

3. Smoothing techniques
3.1  Kernel smoothing
3.2  k-nearest neighbor estimates
3.3  Orthogonal series estimators
3.4  Spline smoothing
3.5  An overview of various smoothers
Recursive techniques
The regressogram
Convolution smoothing
Delta function sequence estimators
Median smoothing
Split linear fits
Empirical regression
3.6  A comparison of kernel, k-NN and spline smoothers

Part II. The Kernel Method

4. How close is the smooth to the true curve?
4.1  The speed at which the smooth curve converges
4.2  Pointwise confidence intervals
4.3  Variability bands for functions
Connected error bars
Smooth confidence bands
Bootstrap bands
4.4  Behavior at the boundary
4.5  The accuracy as a function of the kernel
4.6  Bias reduction techniques

5. Choosing the smoothing parameter
5.1  Cross-validation, penalizing functions and the plug-in method
Leave-one-out method, cross-validation
Penalizing functions
The plug-in method
Bandwidth choice for derivative estimation
The dependence of the smoothing parameter on the weight function
5.2  Which selector should be used?
5.3  Local adaptation of the smoothing parameter
Improving the smooth locally by bootstrapping
The supersmoother
5.4  Comparing bandwidths between laboratories
Canonical kernels

6. Data sets with outliers
6.1  Resistant smoothing techniques
LOcally WEighted Scatter plot Smoothing (LOWESS)
L-smoothing
R-smoothing
M-smoothing

7. Smoothing with correlated data
7.1  Nonparametric prediction of time series
7.2  Smoothing with dependent errors

8. Looking for special features (qualitative smoothing)
8.1  Monotonic and unimodal smoothing
Pool-adjacent-violators (PAV) Algorithm
8.2  Estimation of zeros and extrema

9. Incorporating parametric components and alternatives
9.1  Partial linear models
9.2  Shape-invariant modeling
9.3  Comparing nonparametric and parametric curves

Part III. Smoothing in High Dimensions

10. Investigating multiple regression by additive models
10.1  Regression trees
10.2  Projection pursuit regression
10.3  Alternating conditional expectations
10.4  Average derivative estimation
10.5  Generalized additive models (GAM)


Appendices

References

List of symbols and notation.



