A simple way to gauge your real headway in creating a decision model is to measure the number of business-valuable test cases it satisfies. But which test cases do you need, and how can you generate them efficiently?
What is the most meaningful way of measuring progress when developing an automated solution to a problem? The number, size and complexity of the artefacts created (be they machines, software or even DMN¹ decision models) is certainly no safe measure of achievement. The only safe way to measure progress is to demonstrate increasing business value: to show that our solution has made incremental progress in solving the problem. How can this be achieved?
Decision models can be conceptual—used to communicate ideas about human decision-making—or executable—used to create automated decisions. For conceptual decision models, progress might be determined by the coverage of the model, its reach (how many reviewers it’s had) and its demonstrated ability to inform and influence stakeholders. In short, progress is measured in terms of business value. Value can be hard to measure, but generally, the more people a model has helped and the more understanding and innovation it has fostered, the better. These models are meant to provoke insightful dialogue between knowledgeable and authoritative people.
For executable decision models, there is only one achievement of any worth: the ability to pass tests specified by the subject matter experts over a broad scope of business endeavours. Tests can be used to define the coverage and requirements of a decision model. The number of test passes shows your development progress. Any other metric (e.g., number and size of decisions, inputs or models) is hollow.
As a result, creating test cases and automating their use is crucial to projects in which automated models are produced. Provided the test scenarios have business value, the number, coverage and pass rate of tests can meaningfully measure development progress. In addition, the test pass rate measures how close a model is to achieving its goal. Finally, test failures give an early warning of a backward step (in which previously passed tests now fail due to a newly introduced mistake). This article discusses a viable process for measuring the progress of executable DMN decision models using tests.
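The two measures mentioned here, pass rate as a progress metric and failed-but-previously-passing tests as a regression alarm, are simple to automate. The sketch below illustrates both; the record shape and test identifiers are hypothetical, not part of any particular tool.

```python
# Illustrative sketch: measure progress as a test pass rate, and flag
# regressions (tests that passed in an earlier run but fail now).
# All names and data here are hypothetical.

def pass_rate(results):
    """results: dict mapping test id -> True (pass) / False (fail)."""
    if not results:
        return 0.0
    return sum(results.values()) / len(results)

def regressions(previous, current):
    """Tests that passed in the previous run but fail in the current one."""
    return sorted(t for t, ok in previous.items()
                  if ok and current.get(t) is False)

previous = {"t1": True, "t2": True, "t3": False}
current  = {"t1": True, "t2": False, "t3": True}

print(f"pass rate: {pass_rate(current):.0%}")          # 67%
print("regressions:", regressions(previous, current))  # ['t2']
```

Note that a rising pass rate alongside an empty regression list is the signal of genuine forward progress; a rising pass rate with regressions is not.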
The number of possible tests for a non-trivial decision model is vast (often in the hundreds of thousands or more). Usually, it is impractical to create test sets of this size, let alone apply them repeatedly. Nevertheless, it is possible to handcraft a few business-critical tests and supplement these with automatically generated ‘bulk’ test data. The former, handcrafted tests can be used to create decision outcomes that can be compared with correct results. They are designed to explore the behaviour of decision-making at key boundaries. Bulk tests can be used to test the robustness of the decision—does it fail when exposed to nonsensical data? They can also determine its runtime performance and visualize its behaviour over various possible inputs.
The business case for testing goes beyond ensuring that a decision is made correctly. Testing is more than a process that follows and supports development. Indeed, a defined test is the only way to express a requirement fully, and a passed test is the only accurate measure of progress towards a business goal. Because of this, testing cannot wait until after development. Instead, it must start before development and be concurrent with the requirements definition. The development of an automated decision model should be test driven.
Testing must be prioritized by business value. The more valuable a decision, the more it should be tested. Consequently, determining a decision’s business value (the key business performance indicators) must be an early priority. Fortunately, decision modelling using DMN has tools (KPIs, Knowledge Sources) for this.
Comprehensive Testing Approach
Testing should be:
- Methodical – test data should be designed to demonstrate specific business scenarios. These ‘spot’ tests illustrate the correctness of the model, often around significant boundaries. The greater the business value of the requirement, the earlier and more frequently it should be tested.
- Continuous – tests should be applied early, regularly and automatically. By early, we mean that creating a test should be done before implementing a decision, not after. Designing test cases will test our understanding of the requirements, avoiding the waste of developing decision models from faulty requirements. Tests should be applied frequently (at least daily) and automatically, not at the whim of human modellers. We must accept that many test cases will fail in the early versions of the decision model.
- Comprehensive – ‘spot’ tests should be complemented with high-volume tests over a broad range of possible scenarios to determine the model’s robustness, coverage, performance and resource requirements.
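A methodical 'spot' test pairs handcrafted inputs, placed either side of a significant boundary, with the expected correct result. The sketch below shows the idea; the eligibility decision, its thresholds and its field names are hypothetical stand-ins for an invoked DMN decision service.

```python
# Minimal 'spot' tests at business-critical boundaries.
# The decision logic and thresholds are hypothetical placeholders
# for a real executable decision model.

def loan_eligibility(age, annual_income):
    """Stand-in for invoking a decision service."""
    if age < 18:
        return "INELIGIBLE"
    return "ELIGIBLE" if annual_income >= 20_000 else "REFER"

# Handcrafted cases either side of the age and income boundaries,
# each paired with its expected (correct) outcome.
spot_tests = [
    ({"age": 17, "annual_income": 50_000}, "INELIGIBLE"),
    ({"age": 18, "annual_income": 50_000}, "ELIGIBLE"),
    ({"age": 30, "annual_income": 19_999}, "REFER"),
    ({"age": 30, "annual_income": 20_000}, "ELIGIBLE"),
]

for inputs, expected in spot_tests:
    assert loan_eligibility(**inputs) == expected, (inputs, expected)
print("all spot tests passed")
```

Testing both sides of each boundary (17 vs 18, 19,999 vs 20,000) is what makes these few cases disproportionately valuable.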
To support comprehensive testing, large libraries of test data (test sets) should be software generated across a business-defined data distribution. These comprehensive test sets complement a small number of hand-made tests and support testing an extensive range of business scenarios with minimal effort. Four test sets should be created:
- The handcrafted test set. This set includes test cases and expected (correct) results. This set is designed to exercise specific business scenarios of value.
- The robustness test set. This automatically generated test set has values sampled randomly from the business-defined distribution. Because values are sampled independently, some of the records will be nonsensical. Our goal in creating these is to see how gracefully the model reacts to inconsistent input.
- The coverage test set. For this automatically created set, the generation procedure considers the business constraints between fields in test records. For example, the cover start date of an insurance policy should be before the end date. These constraints ensure that all test records are valid. This set is designed to test the decision model’s coverage and correctness. It can also be used to visualize decision-making behaviour to ensure that all the consequences of our requirements are desirable.
- The performance test set. This large, automatically generated test set aims to test runtime speed, memory usage, latency and other performance characteristics.
Note that interesting tests from the coverage set can be migrated to the handcrafted set if labelled with the ‘correct answer’.
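The difference between the robustness and coverage sets comes down to whether inter-field constraints are honoured during generation. The sketch below generates a coverage set in which the constraint from the insurance example (cover start before cover end) holds by construction; the field names and value ranges are hypothetical.

```python
import random
from datetime import date, timedelta

# Sketch of constrained generation for a coverage test set: fields are
# sampled so that business constraints between them (here, cover start
# strictly before cover end) always hold, keeping every record valid.
# Field names and ranges are hypothetical.

def coverage_record(rng):
    start = date(2024, 1, 1) + timedelta(days=rng.randrange(365))
    # Enforce the constraint by construction: end is always after start.
    end = start + timedelta(days=rng.randrange(1, 366))
    return {"cover_start": start,
            "cover_end": end,
            "sum_insured": rng.choice([10_000, 50_000, 100_000])}

rng = random.Random(42)   # seeded so the set is reproducible
coverage_set = [coverage_record(rng) for _ in range(1_000)]
assert all(r["cover_start"] < r["cover_end"] for r in coverage_set)
```

Dropping the constraint (sampling both dates independently) would turn this into a robustness-set generator, since some records would then be deliberately inconsistent.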
All test sets are retained and submitted to the model frequently and automatically. The automation should include comparing the test output against a known correct result (or a previous unreviewed result) where this is available. The latter allows for regression testing and change impact analysis.
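The comparison step that enables regression testing and change impact analysis can be as simple as diffing the model's current outputs against a stored baseline of known-correct (or previously reviewed) results, as in this sketch; the case identifiers and outcomes are hypothetical.

```python
# Sketch of automated baseline comparison for regression testing and
# change impact analysis. Case ids and outcome values are hypothetical.

def diff_against_baseline(baseline, current):
    """Return test ids whose output differs from the baseline run."""
    return sorted(t for t in baseline if current.get(t) != baseline[t])

baseline = {"case-001": "APPROVE", "case-002": "REFER"}
current  = {"case-001": "APPROVE", "case-002": "DECLINE"}

changed = diff_against_baseline(baseline, current)
print("changed outcomes:", changed)   # ['case-002']
```

Each changed outcome is then either reviewed and promoted into the new baseline (an intended change) or flagged as a regression.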
Comprehensive testing typically involves creating test sets with many thousands of tests. It is not realistic to do this manually; software support is essential. Specifically, you require software, a Test Data Generator, that can:
- understand the data schema of a DMN decision model and create test data consistent with it;
- generate an arbitrary number of test cases;
- where the storage of large volumes of test cases is not feasible, allow for the reproducibility of test set generation (i.e., using a pseudo-random generation initiated with a specified seed);
- constrain the distribution of the test cases to a subset that is useful to the business when required; and
- guarantee the coverage of the test set.
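The reproducibility requirement deserves a brief illustration: with a seeded pseudo-random generator, only the seed (not the test data itself) needs to be stored, because the identical test set can be regenerated on demand. The generator logic below is a hypothetical placeholder.

```python
import random

# Sketch of reproducible test-set generation: recording the seed is
# enough to regenerate the identical set later, so large test sets
# need not be stored. Field names and ranges are hypothetical.

def generate_test_set(seed, n):
    rng = random.Random(seed)
    return [{"applicant_age": rng.randint(16, 90),
             "loan_amount": rng.randrange(1_000, 500_000)}
            for _ in range(n)]

# The same seed yields the same test set, run after run.
assert generate_test_set(seed=2024, n=10_000) == generate_test_set(seed=2024, n=10_000)
```

A practical convention is to log the seed alongside each test run so that any failure found in generated data can be reproduced exactly.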
Using this software, business subject matter experts can create a decision model and quickly furnish it with tests that specify its behaviour. Then, they can regularly and accurately monitor their progress towards completion using test pass rates.
RapidGen Software’s Test Data Generator (TDG)
RapidGen’s TDG software fulfils all of the above demands and is fully compatible with their DMN model execution engine, Genius. Please get in touch with us if we can assist you in any way with decision modelling and testing.
¹ Decision Model and Notation, the most widely used standard for representing decisions.
About the Author
Jan Purchase has worked in investment banking for 20 years, during which he has served nine of the world’s top 40 banks by market capitalization. For the last 13 years he has focused exclusively on helping clients with automated business decisions, decision modelling (in DMN) and machine learning. Dr Purchase specializes in delivering these capabilities to financial organizations, training and mentoring their teams, and improving the integration of predictive analytics and machine learning within compliance-based operational decisions.
Dr Purchase has published a book, Real World Decision Modelling with DMN, with James Taylor, which covers their experiences of using decision management and analytics in finance. He also runs a Decision Management blog (www.luxmagi.com/blog), contributes regularly to industry conferences and is currently working on ways to improve the explainability of predictive analytics, machine learning and artificial intelligence using decision modelling.