PMML 4.4 - PMML Interoperability

One of the main objectives for the Predictive Model Markup Language (PMML) is to facilitate the exchange of models from one environment to another. For example, a model developed with one tool can be transferred via PMML to another tool for scoring. Or, a model can be documented in PMML and given to others for review, inspection or archival purposes. Exchanging predictive models between different products or environments requires a common understanding of the PMML specification. This understanding can be less than perfect, especially since PMML contains over 700 language elements, along with the ability to add product specific extensions. The result is that, even though there is a detailed PMML specification, models defined in PMML can vary in subtle ways from vendor to vendor. Any lack of interoperability reduces the usefulness of PMML and hampers the growth of its use by the data mining community. It is the goal of the PMML Interoperability Guidelines to increase the reliability of PMML as a seamless, multi-vendor model exchange medium.

Note: Prior to PMML 3.2, PMML Interoperability was discussed in the context of Conformance of PMML Producers and Consumers, as well as in the context of Core and Non-core features. It turned out that PMML Conformance was difficult to define and enforce. Over time, as new features were added to the PMML specification, it became difficult to define which features should be defined as core and which should not. As a result, a new approach for interoperability was introduced in PMML version 3.2.

PMML Interoperability Overview

The purpose of PMML Interoperability is simple: Producers of PMML models would like some way to ensure their models are deployed properly. Similarly, consumers of PMML models would like to make sure the PMML they interpret is well formed. Put another way, producers and consumers need to know they can rely on the PMML. The PMML produced has to be "valid PMML" so that any properly implemented consumer can score the model faithfully. For PMML to work, both parties need to hold up their end of the PMML contract:

  • Producers need to generate PMML that is valid.
  • Given valid PMML, consumers need to deploy models accurately.

In this way, consumers can rely on producers and producers can rely on consumers. It is this mutual reliance that forms the basis of the PMML Interoperability standard proposed here.

First, building models is the job of model producers, and the PMML that describes those models needs to be "valid". Hence, the interoperability standard includes the means for PMML Validation.

Second, those models are deployed to model consumers and, unless that deployment is strictly for archival, reference, or visualization purposes, scores will be generated from the PMML and those scores must be consistent with the scores the producers would get. Hence, the interoperability standard includes the means to verify that models are scored consistently.

Both situations require PMML and data. Producers take data and generate PMML models. Consumers take models and produce data or, more specifically, scores.

PMML Validation

PMML Validation requires two steps. The first step is XSD Validation. The purpose of XSD Validation is to ensure that the PMML is properly formed XML and adheres to the appropriate version of PMML's XML Schema Definition (XSD). But even if the XML adheres to the schema, the model may still not be usable as PMML. A good analogy is using a spell-check tool to make sure the spelling in a document is correct: a document free of spelling errors is not necessarily grammatically correct. Therefore, a second step is needed to make sure the PMML truly represents a model.

This second step is more of a challenge. It requires that the PMML elements in combination are not only syntactically correct but are also understandable to a properly implemented model consumer. To accomplish this, a different XML technology is used: XSLT (Extensible Stylesheet Language Transformations). This technology is typically used to translate XML from one form to another. More importantly, it can look across more than one XML element at a time. In this case, we use it to make sure key features of PMML are implemented properly.

Neither of these steps is perfect. XSD Validation can be tricky and XSLT validation might not cover every circumstance. But used in combination, they can yield a high degree of confidence in the validity of a particular PMML model.

XSD Validation

With XSD Validation, a document is analyzed by an XML Parser to see if it adheres to a particular XSD. Since PMML 2.0, the Data Mining Group (DMG) has provided XML Schema Definitions for each version released. And since tools exist that will validate that an XML document adheres to the specified XSD, one would think that this would be a fool-proof way to determine if a given PMML is valid. And, in many cases, this is true. But unfortunately, in certain situations this process breaks down:

  • The way NUM-ARRAY elements are defined in the XSD can cause problems with certain XML parsers, most notably Microsoft's DOM (MSXML4). By altering the XSD references to NUM-ARRAY in the ContStats and PartitionFieldStats elements, the problem can be avoided. But this is a workaround to the published XSD.
  • Extensions are an important part of the PMML standard. They allow model producers to include additional information about a model that can be useful or even required by certain tools. Even though these extensions are not supposed to affect the scoring of models, the way they are implemented can affect XSD validation. In particular, the PMML specification had allowed the use of x- attributes to add additional information to an XML element. For example:
    <DataField name="X1" optype="continuous" dataType="integer" x-storageType="num"/>

    Although this approach has been deprecated in PMML 3.1, it is still used occasionally. The issue is that XSD Validation can fail when these x- attributes are encountered since they are not explicitly defined in the XSD. The alternative is to use the more verbose Extension element instead:

    <DataField name="X1" optype="continuous" dataType="integer">
      <Extension name="x-storageType" value="num" extender="Acme, Inc."/>
    </DataField>
  • PMML allows for a data transformation operation called MapValues. Any discrete value can be mapped to a possibly different discrete value by listing the pairs of values. This list is implemented by a table. In this example, the short form gender labels m and f are mapped to the long form words male and female. Some XSD Validation tools will flag an error on the elements shortForm and longForm since they are not explicitly defined in the XSD:
    <MapValues outputColumn="longForm">
      <FieldColumnPair field="gender" column="shortForm"/>
      <InlineTable>
        <row>
          <shortForm>m</shortForm>
          <longForm>male</longForm>
        </row>
        <row>
          <shortForm>f</shortForm>
          <longForm>female</longForm>
        </row>
      </InlineTable>
    </MapValues>
  • Finally, as mentioned earlier, XSD Validation only determines if the PMML adheres to the XSD. It doesn't verify requirements that are not enforced by the XSD.

So, while XSD Validation is a necessary part of the interoperability process, it is not sufficient by itself for determining if a PMML model is valid.
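As a concrete illustration of the first hurdle in this step, a standard XML parser can at least confirm that a document is well-formed. This is only a sketch: full XSD Validation additionally requires a schema-aware validator (such as xmllint or a validating parser library) and the official PMML XSD, neither of which is shown here.

```python
import xml.etree.ElementTree as ET

def is_well_formed(xml_text):
    # First hurdle: the document must parse as XML at all.
    # (This is not full XSD Validation; checking against the PMML XSD
    # requires a schema-aware tool, which the standard library lacks.)
    try:
        ET.fromstring(xml_text)
        return True
    except ET.ParseError:
        return False

good = '<PMML version="4.4"><Header/></PMML>'
bad = '<PMML version="4.4"><Header></PMML>'  # Header is never closed
print(is_well_formed(good), is_well_formed(bad))  # True False
```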

XSLT Validation

To complement XSD Validation, a set of rules is created to cover particular requirements of the PMML specification. These rules are embodied in XSL transformations and are applied to a particular PMML document using an XSLT processor. The result is another document listing any rules that were violated.
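As a hypothetical sketch (not one of the official DMG rules), an XSLT rule of this kind might check that every MiningField refers to a field declared in the DataDictionary; the element names assume the PMML 4.4 namespace:

```xml
<?xml version="1.0"?>
<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
    xmlns:p="http://www.dmg.org/PMML-4_4">
  <xsl:output method="text"/>
  <!-- Suppress default copying of text nodes -->
  <xsl:template match="text()"/>
  <!-- Report any MiningField whose name matches no DataField -->
  <xsl:template match="p:MiningField">
    <xsl:if test="not(@name = /p:PMML/p:DataDictionary/p:DataField/@name)">
      <xsl:text>Rule violated: MiningField "</xsl:text>
      <xsl:value-of select="@name"/>
      <xsl:text>" has no matching DataField&#10;</xsl:text>
    </xsl:if>
  </xsl:template>
</xsl:stylesheet>
```

This is the kind of cross-element check an XSD alone cannot express, since it relates attribute values in one part of the document to declarations in another.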

The obvious challenge with this process is defining a comprehensive set of rules. These rules need to have sufficient coverage of the PMML specification so that any PMML model that passes the process with no errors is truly a valid model. These rules break down into three areas:

  • Rules that validate adherence to the XSD. This may be considered redundant with the XSD Validation step but it is not as exhaustive as XSD Validation and helps handle situations where XSD Validation fails erroneously.
  • Rules that verify PMML features that are common to all model types are implemented properly.
  • Rules that verify PMML features unique to a particular type of model.

Realistically, XSLT Validation is only as good as the rules embodied in the XSLT. And since these rules are created by inspection of the PMML specification, it should be expected that not every conceivable situation is covered. Over time, however, as cases not previously caught by XSLT Validation are found, the XSLT will be updated and improved.


While neither XSD Validation nor XSLT Validation is perfect, in combination they result in a high degree of confidence that a particular model is valid. The analogy of verifying a document using a spell-checker (XSD Validation) and a grammar-checker (XSLT Validation) is appropriate. Spell checkers can miss certain words or abbreviations. Grammar checkers enforce many common rules of the written language but they aren't perfect. And yet, both tools are commonly used by writers to ensure their documents are syntactically correct and make sense. Since PMML models follow a much more rigid structure, using both XSD Validation and XSLT Validation will yield a strong confidence that a PMML model is valid.

Model Verification

For the purposes of PMML Interoperability, consumers come in two types:

  • Those that display the PMML in some form, but do not actually generate scores from data. These applications can include an archive or repository that stores and retrieves the PMML in its original format. Or, they can be visualization tools that convert the PMML into another representation of the model (for instance, a neural network, a decision tree or a regression equation).
  • Those that use the PMML to generate scores from input records.

While model storage and visualization are important applications, the ability to determine if they faithfully represent PMML is either trivial or very complex. At the trivial end of the spectrum are model repositories. Any repository that cannot store and retrieve an XML file reliably must have some fundamental flaw independent of PMML since the XML is not being parsed. At the other end of the spectrum, visualization can take many forms and it is beyond the scope of PMML to define what forms that visualization should take.

The consumers that generate scores have a greater responsibility, since those scores can be used to make important decisions, such as denying someone credit or diagnosing a patient. Fortunately, it is a straightforward process for a consumer to demonstrate its interoperability: it simply needs to generate scores accurately. More precisely, for a given set of records, the consumer needs to generate the same scores that the model producer would generate.

Note: There is an assumption here that the model producer is generating scores accurately. If that assumption is wrong, then the consumer would have to get the same wrong answers for the problem to go undetected. This situation is considered to be highly unlikely, especially if the number of model verification records is large and varied (e.g., edge and corner conditions, normal and extreme ranges, expected and odd combinations).

For a consumer to generate scores, it needs data: specifically records for input and their expected results. The PMML includes a specification for Model Verification records. These records form a table of data, each column representing an input or an expected result. The inputs are PMML DataFields and the results are PMML OutputFields. DataFields are always specified as part of the PMML's DataDictionary element. OutputFields are not required, but allow one model to have multiple output definitions, which can include predicted values and probabilities. For continuous results, there is a precision specification for each field that shows how close those scores are expected to be.
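As an illustration of this structure (the field names, values, and precision are hypothetical), the verification records in a PMML document might look like this:

```xml
<ModelVerification recordCount="2" fieldCount="3">
  <VerificationFields>
    <VerificationField field="age" column="age"/>
    <VerificationField field="income" column="income"/>
    <VerificationField field="predictedValue" column="predictedValue"
                       precision="0.001"/>
  </VerificationFields>
  <InlineTable>
    <row>
      <age>30</age><income>9000</income><predictedValue>0.82</predictedValue>
    </row>
    <row>
      <age>55</age><income>3200</income><predictedValue>0.17</predictedValue>
    </row>
  </InlineTable>
</ModelVerification>
```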

A consumer simply needs to score each record, generate results and compare them to the expected results. All records need to be scored correctly for a consumer to demonstrate it can faithfully deploy the model.
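The comparison a consumer performs can be sketched as follows. This is an illustrative sketch only: it treats each field's precision as an absolute tolerance, which is an assumption of this sketch rather than the specification's definition, and the field names are hypothetical.

```python
def record_matches(expected, actual, precisions=None):
    # Compare one scored record against its expected results.
    # Continuous fields are compared within the field's stated precision
    # (treated here as an absolute tolerance -- an assumption for this
    # sketch); all other fields must match exactly.
    precisions = precisions or {}
    for field, exp in expected.items():
        act = actual[field]
        if isinstance(exp, float):
            if abs(exp - act) > precisions.get(field, 1e-6):
                return False
        elif exp != act:
            return False
    return True

expected = {"predictedValue": 0.8231, "predictedClass": "yes"}
close    = {"predictedValue": 0.8230, "predictedClass": "yes"}
wrong    = {"predictedValue": 0.9000, "predictedClass": "yes"}
print(record_matches(expected, close, {"predictedValue": 0.001}))  # True
print(record_matches(expected, wrong, {"predictedValue": 0.001}))  # False
```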

For more information, see Model Verification.

Data and Models

With PMML Validation and Model Verification in place, we simply need data and PMML models to show interoperability. Now, for specific applications based on proprietary data, the above process will work fine. Individual models can be checked against the XSD and XSLT and model verification can be used to confirm the results where the model is deployed. But it would be more valuable to the industry and the data mining community if this interoperability evaluation could be done ahead of time so that users would know which tools produce valid PMML and which tools can score PMML properly. Fortunately, the DMG has on its website a PMML Examples page that includes dozens of models based on a handful of public domain datasets. These samples are intended to help developers learn and understand how a variety of different model types are implemented in PMML.

Process for Demonstrating Interoperability

For producers to demonstrate their interoperability, they simply need to furnish the DMG with PMML and the associated dataset (including inputs and scores). The DMG will conduct the XSD and XSLT Validation tests and provide results. Initially, the logistics for this can be simple: model producers simply email their PMML and datasets to the DMG mailbox. The DMG will conduct the PMML Validation tests and return the results to the producer. The results can be reviewed by the producer and, with the producer's approval, the results will be posted on the DMG web site, along with the PMML and the dataset.

Since vendors may not have implemented support for PMML's Model Verification Records, the DMG will automatically generate these from the dataset and insert them into the PMML prior to inclusion on the web site. In this way, any consumers supporting Model Verification records can simply use the PMML from the DMG site for their interoperability testing, which is the subject of the next section. In the future, this process can be automated so that producers can upload their PMML and dataset directly to the DMG web site, which will automatically run the PMML Validation tests and post the results.
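The record-generation step could be sketched as follows, using Python's standard XML library. The field names and values are hypothetical, and the sketch omits the precision attributes a real process would set on continuous result fields.

```python
import xml.etree.ElementTree as ET

def build_model_verification(fields, rows):
    # Assemble a ModelVerification element from a table of inputs and
    # expected results (one dict per record; names are illustrative).
    mv = ET.Element("ModelVerification",
                    recordCount=str(len(rows)), fieldCount=str(len(fields)))
    vfs = ET.SubElement(mv, "VerificationFields")
    for name in fields:
        ET.SubElement(vfs, "VerificationField", field=name, column=name)
    table = ET.SubElement(mv, "InlineTable")
    for row in rows:
        r = ET.SubElement(table, "row")
        for name in fields:
            ET.SubElement(r, name).text = str(row[name])
    return mv

mv = build_model_verification(
    ["age", "predictedValue"],
    [{"age": 30, "predictedValue": 0.82},
     {"age": 55, "predictedValue": 0.17}])
print(ET.tostring(mv, encoding="unicode"))
```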

For consumers to demonstrate their interoperability, they need to submit the results from scoring the PMML posted on the DMG web site (per the process above). The DMG will review the scores submitted and post the results to the DMG web site. There is the possibility that a vendor could simply "fake" the results and submit them to the DMG. For this reason, consumers will be asked to attest to the authenticity of the results they provide, and the DMG may, from time to time, audit consumers to ensure the results submitted are genuine. This audit could take the form of an internet demonstration or an in-person demonstration, depending on the situation.

The PMML Interoperability process outlined here will yield some interesting dividends. Instead of just having a page of sample PMML models, the DMG web site will contain information about PMML producers and consumers that have been certified and tested with each model. Along with each model can be a list of consumers that have been certified to score it faithfully. Over time, as more models are validated and verified, the extent of PMML acceptance and interoperability will become visible. It should be possible to see the models supported by a particular vendor, or the vendors who support a particular model type. In this way, the DMG site will become a resource not only for developers trying to implement the PMML specification, but also for users looking to understand the diversity of PMML models and vendors. The result would be an interoperability matrix depicting which model producers and consumers have demonstrated interoperability with particular model types.

Core vs. Non-Core PMML

Another dividend comes from the fact that, within a collection of PMML models, it is possible to determine which PMML elements are used in those models and report on them. This can be a solution to a difficult problem faced by the DMG. Early in its history, the PMML specification had a concept of core and non-core PMML: "For a given class of model, the corresponding XML Schema and specifications identify core features which all conforming producers and consumers must support and non-core features which may optionally be supported." Over time, however, as new features were added to the PMML specification, these core and non-core definitions were not maintained. Today, it is difficult to define which features should be considered core and which should not. The DMG has debated this issue over time, including considering a "two-vendor rule" (any PMML feature supported by two or more vendors is core). But it seems an almost intractable problem to decide which of the over 700 PMML elements are core and which are not.

However, with PMML Interoperability, there is another approach to solving this problem. The models submitted to the DMG represent the state of the art in PMML, particularly which features are being used. The DMG can provide an analysis showing which PMML elements are in use, based on the models submitted. This reduces the need to explicitly define core and non-core features up front and, more importantly, avoids time-consuming debates and sub-optimal decisions. The DMG can simply provide a listing of PMML features and the models that use them. Users can then see which features are in use and to what extent.

This approach has a lot to offer. PMML producers have the freedom to use the entire specification without worrying about core and non-core distinctions. PMML consumers can see which features are in use and which are not. And the DMG can focus on improving the specification rather than debating the status of each and every PMML element.

See the Model Coverage page for the latest PMML Coverage information.

DMG Requirements

In order to realize this interoperability standard, the DMG is committed to meeting the following requirements:

  1. Provide a page on the DMG web site for Model Coverage.
  2. Provide an email location where producers can submit their PMML for Validation testing at
  3. Provide an email location on the DMG web site for submitting Model Verification results at A representative on the DMG will review the results and update the interoperability information on the DMG website.
  4. On the Model Coverage page, summarize the submitted PMML, including producer and consumer results and an interoperability matrix.
  5. On the Model Coverage page, provide information on PMML element usage.
  6. Feedback on PMML Interoperability can be sent to
e-mail info at