
## PMML 3.0 - Support Vector Machine

Support Vector Machine Models

The description of Support Vector Machine (SVM) models assumes some familiarity with SVM theory. This specification covers Support Vector Machine models for classification and regression. A Support Vector Machine is a function f defined in the space spanned by the kernel basis functions K(x,xi) of the support vectors xi:

f(x) = Sum_{i=1..n} αi*K(x,xi) + b.

Here n is the number of support vectors, αi are the basis coefficients, and b is the absolute coefficient. In an equivalent interpretation, n can also be taken as the total number of training vectors xi; the support vectors are then the subset of those vectors xi whose coefficients αi are nonzero. The term Support Vector (SV) also has a geometric interpretation: in a mechanical analogy, these vectors literally support the discrimination function f(x) = 0.
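As a minimal sketch of this decision function (the function and variable names are illustrative, not part of PMML):

```python
def svm_decision(x, support_vectors, alphas, b, kernel):
    """Evaluate f(x) = Sum_{i=1..n} alpha_i * K(x, x_i) + b."""
    return sum(a * kernel(x, sv) for a, sv in zip(alphas, support_vectors)) + b

def linear_kernel(x, y):
    # K(x, y) = <x, y>
    return sum(xi * yi for xi, yi in zip(x, y))

# Two support vectors with coefficients +1 and -1 and absolute coefficient b = 0.5:
f = svm_decision((1.0, 1.0), [(1.0, 0.0), (0.0, 1.0)], [1.0, -1.0], 0.5, linear_kernel)
# f = 1*<(1,1),(1,0)> - 1*<(1,1),(0,1)> + 0.5 = 1 - 1 + 0.5 = 0.5
```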

At present, a large number of SVM variants exist. The most important are SVMs for classification (SVC) and for regression (SVR). Furthermore, for SVCs it is useful to distinguish between two-class and multi-class models, because the case of multiple classes can be handled in several different ways. Currently, only two-class classification scenarios can be expressed in PMML. A popular extension of SVMs are nu-SVMs, which also exist for both classification and regression.

Since a PMML document may contain several SVM models, for instance for multi-class problems or for trees with SVM nodes, which often share common support vectors, it is useful to store the SVs in only one place of the PMML document. The specification supports this by introducing a common VectorDictionary.


The attribute modelName specifies the name of the SVM model.

The attribute functionName must be either classification or regression, depending on the SVM type.

The attribute svmRepresentation defines whether the SVM function is given via support vectors or, in the case of a linear kernel function, via the coefficients of the hyperplane.

Since SVMs require numeric attributes, which may also need to be normalized, transformations are often applied; these can be performed in the LocalTransformations element.

For each active MiningField, an element of type UnivariateStats (see ModelStats) holds information about the overall (background) population. This includes the required DiscrStats or ContStats elements, which list possible field values and interval boundaries. Optionally, statistical information about the background data is included.

The KERNEL_TYPE defines the function space of the SVM solution through the choice of the basis functions.

The VectorDictionary element holds all support vectors from all support vector machines. Due to the current limitation to 2 classes in the case of classification, this will be identical to the content of SupportVectors. In future versions, it may hold the superset of the support vectors of all SVMs necessary for multi-class classification.

SupportVectors holds the support vectors used by the respective SVM instance as references into the VectorDictionary. The element Coefficients stores the SVM coefficients. Both are combined in the element SupportVectorMachine, which holds a single SVM instance.

### SVM Representation

Usually an SVM model uses support vectors to define the model function. In the case of a linear function (linear kernel type), however, the function is a linear hyperplane that can be expressed more efficiently using one coefficient per mining field. In this case no support vectors are required at all; SupportVectors is absent and only the Coefficients element is necessary.
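For a linear kernel the two representations are interchangeable: the weight vector w = Sum_{i} αi*xi reproduces the support-vector sum exactly. A small sketch (all names and values illustrative):

```python
# For a linear kernel, f(x) = Sum_i alpha_i*<x, x_i> + b collapses to <w, x> + b
# with w = Sum_i alpha_i * x_i, so the model can be stored as plain coefficients.
def hyperplane_weights(support_vectors, alphas):
    dim = len(support_vectors[0])
    return [sum(a * sv[d] for a, sv in zip(alphas, support_vectors))
            for d in range(dim)]

svs = [(1.0, 0.0), (0.0, 1.0), (2.0, 1.0)]
alphas = [1.0, -1.0, 0.5]
b = 0.25
w = hyperplane_weights(svs, alphas)           # w = (2.0, -0.5)

x = (3.0, -2.0)
via_svs = sum(a * sum(xi * yi for xi, yi in zip(x, sv))
              for a, sv in zip(alphas, svs)) + b
via_w = sum(wi * xi for wi, xi in zip(w, x)) + b
# both evaluate to 7.25
```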

The attribute svmRepresentation specifies which of the two representations is used.


### Kernel Types

The kernel defines the type of the basis functions of the SVM model. A huge number of kernel types exist; the most popular ones are:

LinearKernelType: linear basis functions which lead to a hyperplane as classifier

K(x,y) = <x,y>

PolynomialKernelType: polynomial basis functions which lead to a polynomial classifier

K(x,y) = (gamma*<x,y>+coef0)^degree

RadialBasisKernelType: radial (Gaussian) basis functions, the most widely used kernel type

K(x,y) = exp(-gamma*||x - y||^2)

SigmoidKernelType: sigmoid kernel functions for some models of Neural Network type

K(x,y) = tanh(gamma*<x,y>+coef0)

Additional information about the kernel can be entered in the free-text attribute description.
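The four kernel types above can be sketched as plain functions; gamma, coef0 and degree correspond to the attributes of the respective kernel elements (the default values shown here are illustrative, not mandated by the specification):

```python
import math

def linear(x, y):
    # LinearKernelType: K(x, y) = <x, y>
    return sum(a * b for a, b in zip(x, y))

def polynomial(x, y, gamma=1.0, coef0=0.0, degree=3):
    # PolynomialKernelType: K(x, y) = (gamma*<x, y> + coef0)^degree
    return (gamma * linear(x, y) + coef0) ** degree

def radial_basis(x, y, gamma=1.0):
    # RadialBasisKernelType: K(x, y) = exp(-gamma*||x - y||^2)
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, y)))

def sigmoid(x, y, gamma=1.0, coef0=0.0):
    # SigmoidKernelType: K(x, y) = tanh(gamma*<x, y> + coef0)
    return math.tanh(gamma * linear(x, y) + coef0)
```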

### Support Vectors

As already mentioned, a vector dictionary was introduced to store all support vectors. The VectorDictionary is a general container of vectors and could, in principle, also be used for models other than Support Vector Machine.


The VectorDictionary contains the set of support vectors which are of the type VectorInstance. If present, the attribute numberOfVectors must be equal to the number of vectors contained in the dictionary.

The VectorInstance elements represent support vectors and are referenced via their id attribute. They do not contain the value of the predicted mining field.

The VectorInstance is a data vector given in sparse array format. The order of the values corresponds to that of the MiningFields of the MiningSchema; the value of the predicted mining field is omitted.

Notice that the sparse representation is important because SVMs are usually able to handle very high-dimensional data, whereas the number of support vectors tends to be small.
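Assuming the 1-based indices used by PMML sparse arrays, decoding a sparse vector into its dense form can be sketched as follows (function name illustrative):

```python
def dense_from_sparse(n, indices, entries):
    """Expand a PMML-style sparse array (1-based indices, zero default)
    into a dense list of length n."""
    v = [0.0] * n
    for i, e in zip(indices, entries):
        v[i - 1] = e
    return v

# A support vector (1.0, 1.0) stored sparsely as indices "1 2", entries "1.0 1.0":
mv3 = dense_from_sparse(2, [1, 2], [1.0, 1.0])
# mv3 == [1.0, 1.0]
```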

The element SupportVectors contains all support vectors required for the respective SVM instance.


The support vectors are represented by the element SupportVector which only has the attribute vectorId - the reference to the support vector in VectorDictionary. If numberOfSupportVectors is specified, then it must match the number of SupportVector elements. If numberOfAttributes is specified, then it must match the number of attributes in the support vectors (which all must have the same length). If one of these requirements is not fulfilled, then the PMML is not valid.
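These consistency rules can be sketched as a validation check (all names hypothetical; the dictionary argument is an in-memory stand-in for the VectorDictionary):

```python
def check_support_vectors(vector_ids, dictionary,
                          number_of_support_vectors=None,
                          number_of_attributes=None):
    """Apply the consistency rules for SupportVectors:
    resolvable vectorId references, equal vector lengths, matching counts."""
    vectors = [dictionary[vid] for vid in vector_ids]  # KeyError = dangling vectorId
    lengths = {len(v) for v in vectors}
    if len(lengths) > 1:
        raise ValueError("support vectors must all have the same length")
    if number_of_support_vectors is not None and number_of_support_vectors != len(vectors):
        raise ValueError("numberOfSupportVectors does not match")
    if number_of_attributes is not None and lengths != {number_of_attributes}:
        raise ValueError("numberOfAttributes does not match")
    return True

ok = check_support_vectors(["mv0", "mv3"],
                           {"mv0": [0.0, 0.0], "mv3": [1.0, 1.0]},
                           number_of_support_vectors=2,
                           number_of_attributes=2)
```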

### Support Vector Coefficients

The element Coefficients is used to store the support vector coefficients αi and b.


Each coefficient αi is described by a Coefficient element, and the number of coefficients corresponds to the number of support vectors. Hence the attribute numberOfCoefficients equals the number of support vectors. The attribute absoluteValue contains the value of the absolute coefficient b.

### Example Model

This example shows a classification SVM for the simple XOR data set. All vectors are support vectors.

The model uses a radial basis kernel with gamma = 1.0 and four support vectors, stored as sparse arrays in the VectorDictionary:

mv0 = (0,0), mv1 = (0,1), mv2 = (1,0), mv3 = (1,1)

with coefficients α = (-1.0, 1.0, 1.0, -1.0) and absolute coefficient b = 0.5.

### Scoring Procedure Example

Consider the same example as above in order to illustrate the scoring procedure of the Support Vector Machine. Given the first support vector as the input vector

x = mv0 = (x1=0.0, x2=0.0)

we calculate as follows:

f(x) = Sum_{i=1..n} αi*K(x,xi) + b

= -1.0*K(x,mv0) + 1.0*K(x,mv1) + 1.0*K(x,mv2) - 1.0*K(x,mv3) + 0.5

= -1.0*exp(-1.0*||x - mv0||^2) + 1.0*exp(-1.0*||x - mv1||^2) + 1.0*exp(-1.0*||x - mv2||^2) - 1.0*exp(-1.0*||x - mv3||^2) + 0.5

= -1.0*exp(-1.0*||(0,0)^T - (0,0)^T||^2) + 1.0*exp(-1.0*||(0,0)^T - (0,1)^T||^2) + 1.0*exp(-1.0*||(0,0)^T - (1,0)^T||^2) - 1.0*exp(-1.0*||(0,0)^T - (1,1)^T||^2) + 0.5

= -1.0*exp(-1.0*||(0,0)^T||^2) + 1.0*exp(-1.0*||(0,-1)^T||^2) + 1.0*exp(-1.0*||(-1,0)^T||^2) - 1.0*exp(-1.0*||(-1,-1)^T||^2) + 0.5

= -1.0*exp(0.0) + 1.0*exp(-1.0) + 1.0*exp(-1.0) - 1.0*exp(-2.0) + 0.5

f(x) = 0.100424.

In the same way, scoring the other support vectors yields f(x = mv1) = 0.899576, f(x = mv2) = 0.899576, and f(x = mv3) = 0.100424, thus reasonably approximating the training data.

A classification with a threshold of 0.5 would assign the vectors mv0 and mv3 to class 0 and the vectors mv1 and mv2 to class 1, yielding an exact classification of the training data.
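The full scoring procedure for all four support vectors can be reproduced with a short sketch (names illustrative; the model parameters are those of the XOR example above):

```python
import math

def rbf(x, y, gamma=1.0):
    # RadialBasisKernelType: K(x, y) = exp(-gamma*||x - y||^2)
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, y)))

svs = [(0.0, 0.0), (0.0, 1.0), (1.0, 0.0), (1.0, 1.0)]  # mv0 .. mv3
alphas = [-1.0, 1.0, 1.0, -1.0]
b = 0.5

def f(x):
    return sum(a * rbf(x, sv) for a, sv in zip(alphas, svs)) + b

scores = [round(f(sv), 6) for sv in svs]
# scores → [0.100424, 0.899576, 0.899576, 0.100424]
classes = [0 if s < 0.5 else 1 for s in scores]
# classes → [0, 1, 1, 0], the XOR pattern
```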
