Optimal number of features as a function of sample size for various classification rules

Cited by: 311
Authors
Hua, JP
Xiong, ZX
Lowey, J
Suh, E
Dougherty, ER [1]
Affiliations
[1] Texas A&M Univ, Dept Elect Engn, College Stn, TX 77843 USA
[2] Translat Genom Res Inst, Phoenix, AZ 85004 USA
[3] Univ Texas, MD Anderson Canc Ctr, Dept Pathol, Houston, TX 77030 USA
DOI
10.1093/bioinformatics/bti171
Chinese Library Classification
Q5 [Biochemistry]
Discipline codes
071010; 081704
Abstract
Motivation: Given the joint feature-label distribution, increasing the number of features always results in decreased classification error; however, this is not the case when a classifier is designed via a classification rule from sample data. Typically (but not always), for a fixed sample size, the error of a designed classifier first decreases and then increases as the number of features grows. The potential downside of using too many features is most critical for small samples, which are commonplace for gene-expression-based classifiers for phenotype discrimination. For a fixed sample size and feature-label distribution, the issue is to find an optimal number of features.

Results: Since only in rare cases is the distribution of the error known as a function of the number of features and sample size, this study employs simulation for various feature-label distributions and classification rules, across a wide range of sample and feature-set sizes. To achieve the desired end, finding the optimal number of features as a function of sample size, it employs massively parallel computation. Seven classification rules are treated: 3-nearest-neighbor, Gaussian kernel, linear support vector machine, polynomial support vector machine, perceptron, regular histogram and linear discriminant analysis. Three Gaussian-based models are considered: linear, nonlinear and bimodal. In addition, real patient data from a large breast-cancer study are considered. To mitigate the combinatorial search for optimal feature sets, and to model the situation in which subsets of genes are co-regulated and correlation is internal to those subsets, we assume that the covariance matrix of the features is blocked, with each block corresponding to a group of correlated features. Altogether, the many cases yield a large number of error surfaces. These are provided in full on a companion website, which is meant to serve as a resource for those working with small-sample classification.
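The peaking behavior and the blocked-covariance design are easy to illustrate in simulation. Below is a minimal Python sketch, not the authors' code: it draws two Gaussian classes sharing a block-diagonal covariance (groups of correlated features, in the spirit of the paper's linear model), designs a linear discriminant analysis classifier, one of the seven rules studied, on a small sample, and reports test error as features are added. All parameters here (30 training samples per class, blocks of 5 features with within-block correlation 0.8, a 0.4 mean shift per feature) are illustrative assumptions, not values from the paper; whether and where the error peaks depends on them.

import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)

def blocked_cov(n_features, block_size=5, rho=0.8):
    # Block-diagonal covariance: correlation rho within each block,
    # independence across blocks (illustrative values, not the paper's).
    cov = np.zeros((n_features, n_features))
    for start in range(0, n_features, block_size):
        stop = min(start + block_size, n_features)
        block = np.full((stop - start, stop - start), rho)
        np.fill_diagonal(block, 1.0)
        cov[start:stop, start:stop] = block
    return cov

n_train, n_test, max_features = 30, 2000, 30   # samples per class
cov = blocked_cov(max_features)
mu0 = np.zeros(max_features)           # class-0 mean
mu1 = 0.4 * np.ones(max_features)      # class-1 mean (assumed shift)

for d in range(2, max_features + 1, 4):
    # Design the classifier on a small sample using the first d features.
    X = np.vstack([rng.multivariate_normal(mu0[:d], cov[:d, :d], n_train),
                   rng.multivariate_normal(mu1[:d], cov[:d, :d], n_train)])
    y = np.r_[np.zeros(n_train), np.ones(n_train)]
    # Estimate its true error on a large independent test set.
    Xt = np.vstack([rng.multivariate_normal(mu0[:d], cov[:d, :d], n_test),
                    rng.multivariate_normal(mu1[:d], cov[:d, :d], n_test)])
    yt = np.r_[np.zeros(n_test), np.ones(n_test)]
    err = 1.0 - LinearDiscriminantAnalysis().fit(X, y).score(Xt, yt)
    print(f"features={d:2d}  estimated error={err:.3f}")

Averaging such runs over many independent samples gives one point on an error surface; repeating across sample sizes and feature counts, as the study does at scale, traces out the surfaces reported on the companion website.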
Pages: 1509-1515 (7 pages)