Talk title: Learning to be Global Optimizer
Speaker: Prof. Sun Jianyong, Xi'an Jiaotong University
Time: 10:00-11:00, Wednesday, July 8, 2020
Venue: Conference Room 6-402
Organizer: 新葡萄8883官網AMG
Abstract:
The advancement of artificial intelligence has cast new light on the development of optimization algorithms. This paper proposes to learn a two-phase global optimization algorithm (consisting of a minimization phase and an escaping phase) for smooth non-convex functions. For the minimization phase, a model-driven deep learning method is developed to learn the update rule of the descent direction, which is formalized as a nonlinear combination of historical information, for convex functions. We prove that the resulting algorithm with the proposed adaptive direction guarantees convergence for convex functions. An empirical study shows that the learned algorithm significantly outperforms some well-known classical optimization algorithms, such as gradient descent, conjugate descent, and BFGS, and performs well on ill-posed functions. The escaping phase, which moves away from a local optimum, is modeled as a Markov decision process with a fixed escaping policy. We prove that the fixed escaping policy is able to escape from a local optimum with higher probability than random sampling. We further propose to learn an optimal escaping policy by reinforcement learning. The effectiveness of the escaping policies is verified by optimizing synthesized functions and by training a deep neural network for CIFAR image classification. The learned two-phase global optimization algorithm demonstrates a promising global search capability on several benchmark functions and machine learning tasks.
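For readers unfamiliar with the two-phase setup, the following is a minimal sketch of the loop structure described in the abstract. The function and parameter names are illustrative, and plain gradient descent and random perturbation sampling stand in for the learned descent-direction rule and the learned escaping policy; the talk's contribution is precisely to replace these hand-crafted components with learned ones.

```python
# Minimal sketch of a two-phase (minimize, then escape) global optimization loop.
# Plain gradient descent and random perturbation sampling are used only as
# stand-ins for the learned update rule and the learned escaping policy.
import numpy as np

def two_phase_minimize(f, grad, x0, n_restarts=5, n_steps=200,
                       step_size=1e-2, escape_radius=1.0, rng=None):
    """Alternate a local minimization phase with an escaping phase."""
    rng = np.random.default_rng(rng)
    x, best_x, best_f = x0.copy(), x0.copy(), f(x0)
    for _ in range(n_restarts):
        # Minimization phase (stand-in for the learned descent direction).
        for _ in range(n_steps):
            x = x - step_size * grad(x)
        if f(x) < best_f:
            best_x, best_f = x.copy(), f(x)
        # Escaping phase (stand-in for the learned escaping policy):
        # sample candidates around the local optimum and move to a better one if found.
        candidates = x + escape_radius * rng.standard_normal((20, x.size))
        better = [c for c in candidates if f(c) < f(x)]
        x = better[0] if better else candidates[0]
    return best_x, best_f

if __name__ == "__main__":
    # Toy multimodal objective: 2-D Rastrigin function.
    f = lambda x: np.sum(x**2 - 10 * np.cos(2 * np.pi * x) + 10)
    grad = lambda x: 2 * x + 20 * np.pi * np.sin(2 * np.pi * x)
    x_best, f_best = two_phase_minimize(f, grad, np.array([3.2, -2.7]), rng=0)
    print("best point:", x_best, "value:", float(f_best))
```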
About the speaker:
Sun Jianyong is a professor and doctoral supervisor at the School of Mathematics and Statistics, Xi'an Jiaotong University, where he serves as chair of the Department of Information and Computational Science, assistant dean, executive deputy director of the Shaanxi National Center for Applied Mathematics, and executive vice president of the Shaanxi Mathematical Society. His main research interests include statistical machine learning, evolutionary intelligent optimization, and the theory, algorithms, and applications of big data. He has published more than 60 papers in top venues such as the Proceedings of the National Academy of Sciences (PNAS) and IEEE Transactions journals, with over 2,000 Google Scholar citations and more than 330 citations for his most cited paper. He serves as a reviewer for the UK EPSRC/BBSRC, is a Fellow of the UK HEA, a Senior Member of IEEE, and a corresponding member of the Big Data Task Force of the China Computer Federation. In 2016 he was selected into the 12th batch of the national Young Talents program of the Organization Department of the CPC Central Committee. He has repeatedly been invited to serve as a PC member and senior program committee member for IEEE conferences in the evolutionary computation field.
All interested faculty and students are welcome to attend!