1. Title: Feature and Sample Reduction for Classification Problems
2. Speaker: Daniel S. Yeung (Professor, Hong Kong Polytechnic University)
3. Date & Time: Thursday, August 28, 2008, 11:00-13:00
4. Venue: Room 318, Building 10, College of Engineering, Kyungpook National University
5. Host: Professor Minho Lee
6. Speaker Biography:
He received the Ph.D. degree in applied mathematics from Case Western Reserve University. He has previously worked as an Assistant Professor of Mathematics and Computer Science at the Rochester Institute of Technology, as a Research Scientist at the General Electric Corporate Research Center, and as a System Integration Engineer at TRW, all in the United States. He served as Head of the Department of Computing at The Hong Kong Polytechnic University, Hong Kong, where he is now a Chair Professor.
7. Abstract:
A classification system such as a neural network maps input data characterized by a number of features onto output classes. Successfully deleting "irrelevant or unimportant" features and samples from the training set, without sacrificing classification accuracy, can reduce network complexity and learning effort. Such a reduction technique is highly desirable for many application problems. This talk presents a comparison of several well-known techniques based on principal component analysis, mutual information, support vector machines, and neural network sensitivity analysis. We shall present a proposal to develop a feature and sample selection method for supervised multi-classification problems using Sensitivity Measures for an ensemble of multilayer feedforward neural networks (Multilayer Perceptrons or Radial Basis Function Neural Networks). The proposed technique is based on recent results on the generalization error locally near the training points. Experimental results on datasets including UCI benchmarks, the KDD Cup 99 dataset, and text categorization corpora will be presented.
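The abstract names mutual information as one criterion for identifying irrelevant features. As a rough illustration only (not the speaker's proposed sensitivity-based method), each discrete feature can be scored by the empirical mutual information between its values and the class labels, and low-scoring features dropped. The function names below are hypothetical:

```python
import math
from collections import Counter

def mutual_information(xs, ys):
    """Empirical mutual information I(X; Y) in bits for two discrete sequences."""
    n = len(xs)
    px = Counter(xs)          # marginal counts of X
    py = Counter(ys)          # marginal counts of Y
    pxy = Counter(zip(xs, ys))  # joint counts of (X, Y)
    mi = 0.0
    for (x, y), c in pxy.items():
        p_joint = c / n
        # Sum p(x,y) * log2( p(x,y) / (p(x) p(y)) ) over observed pairs
        mi += p_joint * math.log2(p_joint / ((px[x] / n) * (py[y] / n)))
    return mi

def rank_features(samples, labels):
    """Score each feature column by I(feature; label), highest first.

    samples: list of equal-length tuples (one tuple per training sample).
    Returns a list of (score, feature_index) pairs, best feature first.
    """
    n_features = len(samples[0])
    scores = [(mutual_information([s[j] for s in samples], labels), j)
              for j in range(n_features)]
    return sorted(scores, reverse=True)
```

For example, on a toy set where feature 0 copies the label and feature 1 is constant, `rank_features([(0, 1), (0, 1), (1, 1), (1, 1)], [0, 0, 1, 1])` ranks feature 0 first with 1 bit of mutual information and feature 1 last with 0 bits, so the constant feature would be a candidate for deletion.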