Time: 8:30 a.m., Friday, May 22, 2015
Venue: 永利yl23411 Microteaching Classroom, Room 306, Teaching Building 7
Speaker: Qiang Wu (吴强)
Abstract:
Information theoretic learning (ITL) is an important research area in signal processing and machine learning. It uses concepts of entropies and divergences from information theory to replace the conventional statistical descriptors of variance and covariance. The empirical minimum error entropy (MEE) algorithm is a typical approach within this framework and has been used successfully in both regression and classification problems. In this talk, I will discuss the consistency analysis of the MEE algorithm. For this purpose, we introduce two types of consistency. The error entropy consistency, which requires the error entropy of the learned function to approximate the minimum error entropy, is proven when the bandwidth parameter tends to 0 at an appropriate rate. The regression consistency, which requires the learned function to approximate the regression function, is however a more complicated issue. We prove that error entropy consistency implies regression consistency for homoskedastic models, where the noise is independent of the input variable. For heteroskedastic models, however, a counterexample is constructed to show that the two types of consistency do not necessarily coincide. A surprising result is that regression consistency holds when the bandwidth parameter is sufficiently large. Regression consistency for two classes of special models is shown to hold with a fixed bandwidth parameter. These results illustrate the complexity of the MEE algorithm.
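For readers unfamiliar with the algorithm, the sketch below illustrates the empirical MEE objective under its standard formulation: Renyi's quadratic entropy of the residuals, estimated by a Parzen window with a Gaussian kernel of bandwidth h. The function name, the toy homoskedastic data, and the bandwidth value are illustrative assumptions, not material from the talk.

import numpy as np

def empirical_error_entropy(errors, h):
    # Renyi's quadratic entropy of the residuals, estimated by a
    # Parzen window with Gaussian kernel of bandwidth h:
    #   H_h(e) = -log( (1/n^2) * sum_{i,j} G_h(e_i - e_j) )
    e = np.asarray(errors, dtype=float).reshape(-1, 1)
    diffs = e - e.T                                    # pairwise residual differences
    g = np.exp(-diffs**2 / (2.0 * h**2)) / (np.sqrt(2.0 * np.pi) * h)
    return -np.log(g.mean())                           # mean over all n^2 pairs

# Toy homoskedastic model: y = 2x + noise, with noise independent of x.
rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, 200)
y = 2.0 * x + rng.normal(scale=0.1, size=200)

# MEE fits f(x) = w*x by minimizing the entropy of the residuals y - w*x.
h = 0.5                                                # bandwidth parameter (hypothetical value)
ws = np.linspace(0.0, 4.0, 81)
w_mee = min(ws, key=lambda w: empirical_error_entropy(y - w * x, h))
print(w_mee)                                           # close to the true slope 2

Note that the error entropy is invariant to shifting all residuals by a constant, so in practice the learned function is re-centered after training; h here is the bandwidth parameter whose small- and large-value regimes the consistency results above concern.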
About the speaker:
Qiang Wu received his Ph.D. from the Department of Mathematics at City University of Hong Kong in 2005 and completed postdoctoral research at Duke University in 2008. He has held positions in the Department of Mathematics at the University of Michigan and at the University of Liverpool, and currently serves in the Department of Mathematics at Middle Tennessee State University.
Dr. Wu's current research interests include statistical modeling and computation, machine learning, high-dimensional data mining and its applications, and computational harmonic analysis. He has published more than 40 papers in leading international journals such as the Journal of Machine Learning Research and Applied and Computational Harmonic Analysis, and is the author of the monograph "Classification and Regularization in Learning Theory". He has made landmark contributions to classification and regularized regression in learning theory, which have attracted wide attention from scholars at home and abroad.
All faculty and students are welcome to attend!