Affiliation: Key Laboratory of Intelligent Information Processing, Institute of Computing Technology, Chinese Academy of Sciences
Source: 《计算机研究与发展》 (Journal of Computer Research and Development), 2004, Issue 2, pp. 317-324 (8 pages)
Abstract: This paper discusses the optimization problem in which the objective function can be decomposed into the difference of a convex function and a generalized differentiable function. At the current approximate solution, the differentiable function is approximated locally by a linear function, which yields a convex approximation of the objective function. Solving this convex subproblem produces the next (often better) approximation of the optimal solution, and the process is repeated until a specified convergence criterion is met. Using generalized gradients and the properties of convex functions, the resulting algorithm is proven to be a globally convergent descent algorithm. It can solve optimization problems with smooth or non-smooth objective functions, and is also useful for analysing the stability of Hopfield networks.
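The iterative scheme described in the abstract can be illustrated with a minimal one-dimensional sketch. This is not the paper's algorithm, only an assumed toy instance: minimize f(x) = g(x) - h(x) with g(x) = x² (convex) and h(x) = √(1 + x²) (differentiable), where each step linearizes h at the current iterate and solves the resulting convex subproblem in closed form.

```python
# Sketch of the linearize-then-solve iteration from the abstract,
# under assumed example functions (g, h chosen for illustration only):
#   f(x) = g(x) - h(x),  g(x) = x^2 (convex),  h(x) = sqrt(1 + x^2).
import math

def g(x):          # convex part
    return x * x

def h(x):          # differentiable part being subtracted
    return math.sqrt(1.0 + x * x)

def h_prime(x):    # derivative of h, used for the local linearization
    return x / math.sqrt(1.0 + x * x)

def f(x):          # objective f = g - h
    return g(x) - h(x)

def dc_descent(x0, tol=1e-10, max_iter=100):
    """At iterate x_k, replace h by its linearization h(x_k) + h'(x_k)(x - x_k);
    the convex surrogate x^2 - h'(x_k) * x + const is minimized in closed form
    at x = h'(x_k) / 2. Repeat until the iterates stop moving."""
    x = x0
    for _ in range(max_iter):
        x_next = h_prime(x) / 2.0   # argmin of the convex subproblem
        if abs(x_next - x) < tol:
            break
        x = x_next
    return x

x_star = dc_descent(2.0)
print(x_star, f(x_star))   # iterates contract toward x = 0, where f(0) = -1
```

Because h here is convex, its linearization under-estimates h, so the surrogate majorizes f and touches it at the current iterate; each subproblem solution therefore cannot increase f, matching the descent property the abstract claims in greater generality.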
Field: [Automation and Computer Technology]