Institution: School of Business Administration, South China University of Technology
Source: Systems Engineering (《系统工程》), 2009, No. 3, pp. 84–88 (5 pages)
Abstract: Training a support vector machine (SVM) for classification amounts to solving a linearly constrained quadratic programming problem in which the number of variables equals the number of training samples, and the kernel matrix that must be computed and stored grows with the square of the number of training samples. As the sample size increases, classical quadratic programming algorithms become impractical, so the design and analysis of training algorithms for large-scale classification problems is an active topic in SVM research. For large-scale binary classification, this paper proposes a fast SVM training algorithm based on data partitioning and ensemble learning. The main idea is as follows. First, the dataset is preprocessed with k-means clustering, which automatically divides the positive and negative classes into several sub-clusters each. Then, each pairing of a positive sub-cluster with a negative sub-cluster forms a small binary classification problem that is solved by SMO, yielding a set of base classifiers. Finally, these base classifiers are combined by ensemble learning. Experiments on five UCI datasets show that, compared with SMO alone, this partition-based training strategy significantly speeds up training with almost no loss of accuracy.
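The abstract outlines a three-step pipeline (cluster each class, train a base classifier per cluster pair, combine by ensemble). Below is a minimal sketch of that strategy, assuming k-means for the pre-clustering step and scikit-learn's SVC (whose underlying solver is an SMO variant) as the base learner. The function names, the cluster counts, and the majority-vote combination rule are illustrative assumptions, not the authors' exact settings.

```python
# Sketch of the partition-and-ensemble SVM training strategy from the abstract.
# Assumptions (not from the paper): train_partitioned_svm / predict_majority
# are hypothetical names; SVC stands in for an SMO solver; cluster counts and
# majority voting are illustrative choices.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

def train_partitioned_svm(X, y, n_pos_clusters=3, n_neg_clusters=3):
    """Cluster each class, then train one SVC per (positive, negative) cluster pair."""
    X_pos, X_neg = X[y == 1], X[y == -1]
    pos_labels = KMeans(n_clusters=n_pos_clusters, n_init=10).fit_predict(X_pos)
    neg_labels = KMeans(n_clusters=n_neg_clusters, n_init=10).fit_predict(X_neg)
    classifiers = []
    for i in range(n_pos_clusters):
        for j in range(n_neg_clusters):
            Xi, Xj = X_pos[pos_labels == i], X_neg[neg_labels == j]
            Xc = np.vstack([Xi, Xj])
            yc = np.hstack([np.ones(len(Xi)), -np.ones(len(Xj))])
            # Each subproblem is much smaller than the full dataset,
            # so its kernel matrix fits in memory and SMO converges quickly.
            classifiers.append(SVC(kernel="rbf").fit(Xc, yc))
    return classifiers

def predict_majority(classifiers, X):
    """Combine the base classifiers by simple majority vote over +/-1 labels."""
    votes = np.sum([clf.predict(X) for clf in classifiers], axis=0)
    return np.where(votes >= 0, 1, -1)
```

Because every base problem only contains two sub-clusters, the quadratic-programming cost and kernel-matrix storage are bounded by the sub-cluster sizes rather than the full training-set size, which is where the reported speedup would come from under this reading.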
Field: Automation and Computer Technology