PMML Neural Network

PMML 1.1 -- DTD for Neural Network Models

Neural Network Models for Backpropagation

The description of neural network models assumes that the reader has a general knowledge of artificial neural network technology.

A neural network has one or more input nodes and one or more neurons. The outputs of some neurons are the outputs of the network. The network is defined by the neurons and their connections, also known as weights. All neurons are organized into layers; the sequence of layers defines the order in which the activations are computed. All output activations for neurons in some layer L are evaluated before computation proceeds to the next layer L+1. Note that this allows for recurrent networks in which outputs of neurons in a later layer L+i (with i > 0) are used as inputs in layer L. The model does not define a specific evaluation order for neurons within a layer.

Each neuron receives one or more input values, each coming via a network connection, and sends only one output value. All incoming connections for a given neuron are contained in the corresponding Neuron element. Each connection Con stores the ID of the node it comes from and its weight. A bias weight coefficient may be stored as an attribute of Neuron.

All neurons in the network are assumed to have the same (default) activation function, although each individual neuron may have its own activation function and threshold that override the default. For a given neuron j, where Wi represents the weight on the connection from neuron i into neuron j, the activation for neuron j is computed as follows:

    Z = Sum( Wi * output(i) ) + bias

    output(j) = activation( Z )

Activation functions are:

    threshold:

      activation(Z) = if Z > threshold then 1 else 0

    logistic:

      activation(Z) = 1 / (1 + exp(-Z))

    tanh:

      activation(Z) = (1-exp(-2Z))/(1+exp(-2Z))
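
Purely as an illustration (not part of the PMML specification), the computation above can be sketched in Python; the helper name neuron_output, the argument names, and the representation of connections as (from_id, weight) pairs are assumptions made for this sketch:

	import math

	# Sketch: compute one neuron's output following
	# Z = Sum( Wi * output(i) ) + bias and output(j) = activation( Z ).
	def neuron_output(connections, outputs, bias=0.0,
	                  activation="logistic", threshold=0.0):
	    # connections: list of (from_id, weight) pairs taken from Con elements
	    # outputs: dict mapping NN-NEURON-ID -> already computed activation
	    z = sum(weight * outputs[from_id] for from_id, weight in connections) + bias
	    if activation == "threshold":
	        return 1.0 if z > threshold else 0.0
	    if activation == "logistic":
	        return 1.0 / (1.0 + math.exp(-z))
	    if activation == "tanh":
	        return (1.0 - math.exp(-2.0 * z)) / (1.0 + math.exp(-2.0 * z))
	    raise ValueError("unknown activation function: " + activation)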

					
	<!ELEMENT NeuralNetwork (Extension*, MiningSchema, ModelStats?, NeuralInputs, ( NeuralLayer+), NeuralOutputs? )>
				
	<!ATTLIST NeuralNetwork
	     modelName              CDATA                           #IMPLIED
	     activationFunction     %ACTIVATION-FUNCTION;           #REQUIRED
	     threshold              %REAL-NUMBER;                   #IMPLIED
	>
				
	<!ELEMENT NeuralInputs ( NeuralInput+ ) >
				
	<!ELEMENT NeuralLayer ( Neuron+ ) >
				
	<!ELEMENT NeuralOutputs ( NeuralOutput+ ) >
				

NeuralInput defines how input fields are normalized such that the values can be processed in the neural network. For example, string values must be encoded as numeric values.

NeuralOutput defines how the output of the neural network must be interpreted.

					
	<!ENTITY % ACTIVATION-FUNCTION "(threshold | logistic | tanh)">
				
	<!ENTITY % NN-NEURON-ID "CDATA">
				
	<!ENTITY % NN-NEURON-IDREF "CDATA" >
				

NN-NEURON-ID is simply a string which identifies a neuron. The string is not necessarily an XML ID because a PMML document may contain multiple network models, and neurons in different models can have the same identifier. Within a model, though, all neurons must have a unique identifier.


Neural Network Input Neurons

An input neuron represents the normalized value for an input field. A numeric input field is usually mapped to a single input neuron, while a categorical input field is usually mapped to a set of input neurons using some fan-out function. The elements NormContinuous and NormDiscrete are defined in a separate DTD subset for normalization.

					
	<!ELEMENT NeuralInput (Extension*, ( NormContinuous | NormDiscrete )) >
				
	<!ATTLIST NeuralInput
	     id                     %NN-NEURON-ID;                  #REQUIRED
	>

Restrictions: A numeric input field must not appear more than once in the input layer. Similarly, a combination of a categorical input field and an input value must not appear more than once in the input layer.
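
The normalization elements themselves are defined in the separate normalization DTD subset. Purely for illustration, here is a Python sketch of the two mappings an input neuron may use: a piecewise-linear interpolation over the LinearNorm breakpoints of NormContinuous, and an indicator encoding for NormDiscrete. The function names and the (orig, norm) pair representation are assumptions of this sketch, and values outside the outermost breakpoints are simply clamped here because this DTD subset does not specify their treatment:

	# Sketch of NormContinuous: (orig, norm) breakpoints, e.g. taken from
	# <LinearNorm orig="0.01" norm="0"/> ..., interpolated linearly.
	def norm_continuous(value, breakpoints):
	    # breakpoints: list of (orig, norm) pairs sorted by orig
	    if value <= breakpoints[0][0]:
	        return breakpoints[0][1]      # clamp below the first breakpoint
	    if value >= breakpoints[-1][0]:
	        return breakpoints[-1][1]     # clamp above the last breakpoint
	    for (o1, n1), (o2, n2) in zip(breakpoints, breakpoints[1:]):
	        if o1 <= value <= o2:
	            return n1 + (value - o1) / (o2 - o1) * (n2 - n1)

	# Sketch of NormDiscrete: one input neuron per (field, value) pair,
	# active (1.0) exactly when the field takes that value.
	def norm_discrete(field_value, reference_value):
	    return 1.0 if field_value == reference_value else 0.0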


Neural Network Neurons

				
	<!ELEMENT Neuron (Extension*, Con+) >
				
	<!ATTLIST Neuron
	     id                     %NN-NEURON-ID;                  #REQUIRED
	     bias                   %REAL-NUMBER;                   #IMPLIED
	     activationFunction     %ACTIVATION-FUNCTION;           #IMPLIED
	     threshold              %REAL-NUMBER;                   #IMPLIED
	>

Neuron contains an identifier which must be unique across all layers. The attribute threshold has a default value of 0. If no activationFunction is given, then the default activationFunction of the NeuralNetwork element applies. The attribute 'bias' implicitly defines a connection to a bias unit whose value is 1.0 and whose weight is the value of 'bias'.

Weighted connections between neural net nodes are represented by Con elements.

					
	<!ELEMENT Con (Extension*) >
				
	<!ATTLIST Con
	     from                   %NN-NEURON-IDREF;               #REQUIRED
	     weight                 %REAL-NUMBER;                   #REQUIRED
	>

Con elements are always part of a Neuron. They define the connections coming into that parent element. The neuron identified by 'from' may be part of any layer.

NN-NEURON-IDs of all nodes must be unique across the combined set of NeuralInput and Neuron nodes. The 'from' attributes of connections and NeuralOutputs refer to these identifiers.
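
To make the layer-by-layer evaluation order concrete, here is a hedged Python sketch of a forward pass over a non-recurrent (core-conformant) network. It assumes the model has already been read into plain lists and dictionaries with the hypothetical structure shown in the comments, and it reuses the neuron_output helper sketched earlier:

	# Sketch: evaluate all layers in order; 'input_activations' maps the
	# NN-NEURON-IDs of NeuralInput nodes to their normalized values.
	def evaluate_layers(input_activations, layers, default_activation="logistic"):
	    # layers: list of layers in document order; each neuron is a dict like
	    # {"id": "10", "bias": 0.0, "activation": None, "threshold": 0.0,
	    #  "connections": [("0", -2.08148), ("1", 3.69657), ...]}
	    # For simplicity a missing bias or threshold defaults to 0 here; a
	    # NeuralNetwork-level threshold attribute is not consulted in this sketch.
	    outputs = dict(input_activations)
	    for layer in layers:
	        # All activations from earlier layers are available here; the order
	        # of neurons within one layer does not matter because new values are
	        # published only after the whole layer has been evaluated.
	        new_values = {}
	        for neuron in layer:
	            new_values[neuron["id"]] = neuron_output(
	                neuron["connections"], outputs,
	                bias=neuron.get("bias", 0.0),
	                activation=neuron.get("activation") or default_activation,
	                threshold=neuron.get("threshold", 0.0))
	        outputs.update(new_values)
	    return outputs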


Neural Network Output Neurons

In parallel to input neurons, there are output neurons which are connected to output fields via some normalization. While the activation of an input neuron is defined by the value of the corresponding input field, the activation of an output neuron is computed by the activation function. Therefore, an output neuron is defined by a Neuron element. In networks with supervised learning, the computed activation of the output neurons is compared with the normalized values of the corresponding output fields; these values are often called 'teach values'. The difference between the neuron's activation and the normalized output field determines the prediction error. In application mode, the normalization of the output field is used to denormalize the predicted value in the output neuron. Therefore, each instance of Neuron which represents an output neuron is additionally connected to a normalized field.

A NeuralOutput element connects a neuron's output to the output of the network.

					
	<!ELEMENT NeuralOutput ( Extension*, ( NormContinuous | NormDiscrete) ) >
				
	<!ATTLIST NeuralOutput
	     outputNeuron           %NN-NEURON-IDREF;               #REQUIRED
	>

For neural value prediction with backpropagation, the output layer contains a single neuron; its activation is denormalized to give the predicted value. For neural classification with backpropagation, the output layer contains one or more neurons. The neuron with maximal activation determines the predicted class label. If there is no unique neuron with maximal activation, then the predicted value is undefined.
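
For illustration only, a Python sketch of these two ways of interpreting the output layer. denorm_continuous inverts the piecewise-linear mapping by swapping the (orig, norm) pairs and reusing the norm_continuous sketch above (assuming the norm values increase monotonically, as in the example below), and predicted_class returns None when the maximum is not unique:

	# Value prediction: map the single output neuron's activation back to the
	# original scale of the predicted field (inverse of NormContinuous).
	def denorm_continuous(norm_value, breakpoints):
	    inverted = [(norm, orig) for orig, norm in breakpoints]
	    return norm_continuous(norm_value, inverted)

	# Classification: the output neuron with maximal activation determines the
	# predicted class label; the result is undefined (None) if it is not unique.
	def predicted_class(activation_by_label):
	    best = max(activation_by_label.values())
	    winners = [label for label, act in activation_by_label.items() if act == best]
	    return winners[0] if len(winners) == 1 else None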


Conformance

  • backward connections from level N to level M with M <= N, or connections between non-adjacent layers, are not in core.
  • variable values for activationFunction per Neuron are not in core.

Example model

					
	<?xml version="1.0" ?>
	<PMML version="1.1">
	<Header copyright="Copyright (c) 2000, DMG.org"/>
				
	<DataDictionary numberOfFields="5">
				
	<DataField name="gender" optype="categorical">
		<Value value="  female"/>
		<Value value="   male"/>
	</DataField>
				
	<DataField name="no of claims" optype="categorical">
		<Value value="    0"/>
		<Value value="    1"/>
		<Value value="    3"/>   <Value value="    > 3"/>
		<Value value="    2"/>
	</DataField>
				
	<DataField name="domicile" optype="categorical">
		<Value value="suburban"/>
		<Value value="  urban"/>
		<Value value="  rural"/>
	</DataField>
				
	<DataField name="age of car" optype="continuous"/>
	<DataField name="amount of claims" optype="continuous"/>
				
	</DataDictionary>
				
	<NeuralNetwork modelName="Neural Insurance" activationFunction="logistic">
				
	<MiningSchema>
		<MiningField name="gender"/>
		<MiningField name="no of claims"/>
		<MiningField name="domicile"/>
		<MiningField name="age of car"/>
		<MiningField name="amount of claims" usageType="predicted"/>
	</MiningSchema>
				
	<NeuralInputs>
		<NeuralInput id="0">
			<NormContinuous field="age of car">
				<LinearNorm orig="0.01" norm="0"/>
				<LinearNorm orig="3.07897" norm="0.5"/>
				<LinearNorm orig="11.44" norm="1"/>
			</NormContinuous>
		</NeuralInput>
				
		<NeuralInput id="1">
			<NormDiscrete field="gender"value="male"/>
		</NeuralInput>
				
		<NeuralInput id="2">
			<NormDiscrete field="no of claims"value="0"/>
			</NeuralInput>
				
		<NeuralInput id="3">
			<NormDiscrete field="no of claims"value="1"/>
		</NeuralInput>
				
		<NeuralInput id="4">
			<NormDiscrete field="no of claims"value="3"/>
		</NeuralInput>
				
		<NeuralInput id="5">
			<NormDiscrete field="no of claims"value=" >3"/>
		</NeuralInput>
				
		<NeuralInput id="6">
			<NormDiscrete field="no of claims"value="2"/>
		</NeuralInput>
				
		<NeuralInput id="7">
			<NormDiscrete field="domicile"value="suburban"/>
		</NeuralInput>
				
		<NeuralInput id="8">
			<NormDiscrete field="domicile"value="urban"/>
		</NeuralInput>
				
		<NeuralInput id="9">
			<NormDiscrete field="domicile"value="rural"/>
		</NeuralInput>
				
	</NeuralInputs>
				
				
	<NeuralLayer>
		<Neuron id="10">
			<Con from="0"weight="-2.08148"/>
			<Con from="1"weight="3.69657"/>
			<Con from="2"weight="-1.89986"/>
			<Con from="3"weight="5.61779"/>
			<Con from="4"weight="0.427558"/>
			<Con from="5"weight="-1.25971"/>
			<Con from="6"weight="-6.55549"/>
			<Con from="7"weight="-4.62773"/>
			<Con from="8"weight="1.97525"/>
			<Con from="9"weight="-1.0962"/>
		</Neuron>
				
		<Neuron id="11">
			<Con from="0"weight="-0.698997"/>
			<Con from="1"weight="-3.54943"/>
			<Con from="2"weight="-3.29632"/>
			<Con from="3"weight="-1.20931"/>
			<Con from="4"weight="1.00497"/>
			<Con from="5"weight="0.033502"/>
			<Con from="6"weight="1.12016"/>
			<Con from="7"weight="0.523197"/>
			<Con from="8"weight="-2.96135"/>
			<Con from="9"weight="-0.398626"/>
		</Neuron>
	
		<Neuron id="12">
			<Con from="0" weight="0.904057"/>
			<Con from="1" weight="1.75084"/>
			<Con from="2" weight="2.51658"/>    
			<Con from="3" weight="-0.151895"/>
			<Con from="4" weight="-2.88008"/>    
			<Con from="5" weight="0.920063"/>
			<Con from="6" weight="-3.30742"/>    
			<Con from="7" weight="-1.72251"/>
			<Con from="8" weight="-1.13156"/>    
			<Con from="9" weight="-0.758563"/>   
		</Neuron>
	</NeuralLayer>  
				
	<NeuralLayer>   
		<Neuron id="13">    
			<Con from="10" weight="0.76617"/>
			<Con from="11" weight="-1.5065"/>    
			<Con from="12" weight="0.999797"/>   
		</Neuron>
	</NeuralLayer>  
				
	<NeuralOutputs>
		<NeuralOutput outputNeuron="13">
		<NormContinuous field="amount of claims">     
			<LinearNorm orig="0" norm="0.1"/>     
			<LinearNorm orig="1291.68" norm="0.5"/>      
			<LinearNorm orig="5327.26" norm="0.9"/>    
		</NormContinuous>   
	</NeuralOutput>
				
	</NeuralOutputs> 
			
	</NeuralNetwork>
	
	</PMML>
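
As a usage sketch only, the example model can be read with Python's standard xml.etree.ElementTree module and scored with the helpers sketched earlier in this document. The file name neural_insurance.xml and the sample record are made up for this illustration:

	import xml.etree.ElementTree as ET

	def score(pmml_path, record):
	    net = ET.parse(pmml_path).getroot().find("NeuralNetwork")
	    default_act = net.get("activationFunction")

	    # Input layer: one normalized activation per NeuralInput, keyed by id.
	    activations = {}
	    for ni in net.find("NeuralInputs"):
	        cont = ni.find("NormContinuous")
	        if cont is not None:
	            points = [(float(p.get("orig")), float(p.get("norm")))
	                      for p in cont.findall("LinearNorm")]
	            activations[ni.get("id")] = norm_continuous(record[cont.get("field")], points)
	        else:
	            disc = ni.find("NormDiscrete")
	            activations[ni.get("id")] = norm_discrete(record[disc.get("field")], disc.get("value"))

	    # Hidden and output layers in document order.
	    layers = []
	    for layer in net.findall("NeuralLayer"):
	        layers.append([{"id": n.get("id"),
	                        "bias": float(n.get("bias", "0")),
	                        "activation": n.get("activationFunction"),
	                        "threshold": float(n.get("threshold", "0")),
	                        "connections": [(c.get("from"), float(c.get("weight")))
	                                        for c in n.findall("Con")]}
	                       for n in layer.findall("Neuron")])
	    activations = evaluate_layers(activations, layers, default_act)

	    # Single output neuron: denormalize via the NeuralOutput's NormContinuous.
	    out = net.find("NeuralOutputs").find("NeuralOutput")
	    points = [(float(p.get("orig")), float(p.get("norm")))
	              for p in out.find("NormContinuous").findall("LinearNorm")]
	    return denorm_continuous(activations[out.get("outputNeuron")], points)

	# Hypothetical usage; the record keys are the active MiningField names:
	# score("neural_insurance.xml", {"age of car": 2.5, "gender": "male",
	#                                "no of claims": "1", "domicile": "urban"})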
