The Lower Bound for the Nearest Neighbor Estimators with (p,C)-Smooth Regression Functions

Takanori AYANO  

IEICE TRANSACTIONS on Information and Systems   Vol.E94-D    No.11    pp.2244-2249
Publication Date: 2011/11/01
Online ISSN: 1745-1361
DOI: 10.1587/transinf.E94.D.2244
Print ISSN: 0916-8532
Type of Manuscript: PAPER
Category: Artificial Intelligence, Data Mining
Keyword: regression, nonparametric estimation, nearest neighbor, rate of convergence


Let (X,Y) be an R^d × R-valued random vector. In regression analysis one wants to estimate the regression function m(x) := E(Y|X=x) from a data set. In this paper we consider the rate of convergence of the error of the k nearest neighbor estimators in the case that m is (p,C)-smooth. It is known that for p > 1.5 and d = 1 the minimax rate is unachievable by any k nearest neighbor estimator. We generalize this result to arbitrary d ≥ 1. Throughout this paper we assume that the data are independent and identically distributed, and as an error criterion we use the expected L2 error.
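For readers unfamiliar with the estimator class under study, the following is a minimal sketch of a k nearest neighbor regression estimate: it predicts m(x) by averaging the responses Y_i of the k sample points X_i closest to x. The function name, the toy data, and the tie-breaking rule (data order) are illustrative assumptions, not taken from the paper.

```python
import math

def knn_regression_estimate(x, data, k):
    """k nearest neighbor regression estimate at query point x.

    data: list of (x_i, y_i) pairs, where each x_i is a tuple in R^d.
    Returns the average of the responses y_i over the k points x_i
    nearest to x in Euclidean distance (ties broken by data order).
    """
    neighbors = sorted(data, key=lambda pair: math.dist(x, pair[0]))[:k]
    return sum(y for _, y in neighbors) / k

# Toy 1-d check on noiseless data from m(x) = x on a grid:
data = [((i / 10,), i / 10) for i in range(11)]
est = knn_regression_estimate((0.47,), data, k=3)
# the 3 nearest grid points are 0.5, 0.4, 0.6, so est is close to 0.5
```

Under the paper's setting, the quantity of interest would be the expected L2 error E ∫ |m_n(x) - m(x)|^2 μ(dx) of such an estimator m_n as a function of the sample size n and the choice of k.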