MBD Quality Suite

Model Quality Tester (MQTester)



MQTester 2024 has been released, supporting MATLAB R2010a–R2024b and TargetLink 3.0–4.2. The new version delivers performance improvements across the board. For details, please download the MQTester Highlights document or the Release Notes.


Features

Overview of Basic Functions

Purpose of the Model Testing Tool

MQTester is a model-based testing tool for embedded systems, supporting the entire testing process, including test modeling, test execution, test assessment, and test report generation. Generating test environments, writing test cases, and evaluating test results are common challenges in model testing. MQTester provides optimized solutions in these three areas, making the testing process simple and fast.

Creating a Test Environment

As long as the object under test is located within a subsystem and the model can be simulated, MQTester can detect the relevant model parameters, including the parameters, library paths, and data dictionaries the model uses. By simply specifying the subsystem under test, MQTester can automatically generate the test environment, for both open-loop and closed-loop systems. In closed-loop testing, users can focus solely on the results and coverage metrics of the object under test, and for SIL testing, MQTester can create SIL modules specifically for the object under test.

Test Case Generation

MQTester provides two methods for generating test cases: one is to generate test cases using a custom scripting language based on requirements. In this case, the system automatically analyzes the interfaces of the object under test, presenting all signal parameter information to the user. The other method is to automatically generate test cases that satisfy maximum coverage based on the model’s architecture. In MQTester, test cases can be automatically generated with a single command.

Test Result Assessment

MQTester provides multiple assessment methods suitable for testing projects of various scales:
Manual Assessment: Determine whether the model meets requirements by manually reviewing input and output signals.
Signal Comparison: Automatically configure comparisons between RTL, SIL, PIL, and reference signals, with automatic comparison results.
Expected Values: Define expected values for output signals together with the input signals, or write functions that calculate expected output values from the requirements and input-signal changes; the system automatically compares them with the model outputs and reports the assessment result.
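The expected-value approach above can be sketched in a few lines. This is an illustrative Python sketch, not MQTester's actual API: the `assess` function, the `expected_fn` callback, and the tolerance parameter are hypothetical names standing in for the tool's assessment mechanism.

```python
def assess(inputs, outputs, expected_fn, tol=1e-6):
    """Compare each logged output sample against the value computed by a
    user-supplied expected-value function (hypothetical interface)."""
    verdicts = []
    for u, y in zip(inputs, outputs):
        expected = expected_fn(u)          # expected output for this input
        verdicts.append(abs(y - expected) <= tol)
    return verdicts

# Example: the requirement says output = 2 * input
print(assess([1, 2, 3], [2.0, 4.0, 6.5], lambda u: 2 * u))  # → [True, True, False]
```

The third sample fails because the logged output 6.5 deviates from the expected 6.0 by more than the tolerance.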

Fault Injection & Calibration Parameter Modification

The calibration parameter modification function enables efficient and rapid testing of various changes to calibration parameters within a model. Fault injection allows for simple testing of how a model handles illegal signals and is also an essential function for interface testing. MQTester implements fault injection and parameter modification at the model level and also supports modifications within the generated code.

Full Combination Test Cases

Generating test cases and evaluating test results often consume significant time during the testing process. The full combination test case generation feature can substantially reduce this workload. The system reads the data types and ranges of the model's input signals, automatically generates a configuration file, and selects key points for each input signal; the user can then adjust which signals and key values enter the combination and attach assessment information, after which full combination test cases are generated automatically. During this process, expected values for the corresponding output signals can also be specified for each combination, thereby automating result evaluation.
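The combination step itself is a cross product over the key points of each input signal. A minimal Python sketch of the idea, assuming a hypothetical data model where `key_points` maps a signal name to its list of representative values (this is not MQTester's configuration-file format):

```python
from itertools import product

def full_combinations(key_points, expected_fn=None):
    """Cross all key points of all input signals; optionally attach an
    expected output computed per combination (hypothetical interface)."""
    names = list(key_points)
    cases = []
    for values in product(*(key_points[n] for n in names)):
        case = dict(zip(names, values))
        if expected_fn is not None:
            case["expected"] = expected_fn(case)  # per-combination expected value
        cases.append(case)
    return cases

cases = full_combinations({"speed": [0, 50, 120], "brake": [0, 1]},
                          expected_fn=lambda c: c["speed"] > 0 and c["brake"] == 1)
print(len(cases))  # → 6, i.e. 3 speed values x 2 brake values
```

Attaching the expected value at generation time is what lets result evaluation run automatically afterwards.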

Automatic Generation of Assessment Functions

When defining input signal values, using the expectation value functions provided by MQTester allows the system to automatically generate assessment functions.
MQTester offers a variety of expectation value functions, covering most needs in testing work.

Signal Viewing

The signal viewer allows for the viewing, comparison, and arbitrary zooming in/out of various input and output signals during testing. Multiple signals can be selected and displayed in a single view for convenient detailed comparison.

Test Quality Monitoring

A complete test requires all testing tasks to be performed according to standard requirements, such as ensuring test cases cover all data, conducting corresponding reviews for all tasks, fully evaluating all test results, and achieving required structural coverage levels. MQTester provides a complete set of monitoring metrics to evaluate testing work.

Requirements Traceability

Requirements are the input for testers when executing tests; the purpose of testing is to ensure that the designed system ultimately meets those requirements. Therefore, the ability to link to and trace against a requirements management system is indispensable for any capable testing tool. Many current standards, such as IEC 61508, DO-178B, EN 50128, and ISO 26262, impose strict requirements on requirements management.

Continuous Integration Support

MQTester supports continuous integration in Jenkins. Through configuration, MQTester execution can be triggered in Jenkins in various ways, achieving automated model testing execution, saving time, and improving efficiency.

FAQs

Common Testing Difficulties And Solutions

Providing solutions to multiple testing challenges in the development of high-safety, high-complexity model-based systems.

Data Store Memory (DSM) variables in Simulink are system-wide global variables that facilitate data exchange between components within a model. Recording the changes in these global variables provides critical data support for rapid issue diagnosis, especially for those used within Stateflow.

Solution: Our tool automatically identifies all DSMs used in Simulink models and Stateflow, including instances defined via DSM blocks and Simulink.Signal objects. It comprehensively analyzes the scope of usage for each DSM and applies the appropriate method to handle them automatically, thereby achieving the capability to auto-record Data Store Memory variables utilized by the object under test.

In practical applications, not all DSMs need to have their contents recorded. Therefore, a user selection interface is provided to choose which variables require recording.

After user selection, the tool automatically configures logging properties or generates corresponding variables in the base workspace based on how each variable is used. This eliminates the tedious manual process of locating and setting up signals, enabling easy signal logging and parameterization. Crucially, for the recorded values to be meaningful, the test result assessment must be capable of evaluating these variables.
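The branching logic described above, where the handling of each selected DSM depends on how it was defined, can be sketched as follows. This is a hypothetical Python illustration of the dispatch, not MQTester code; the dictionary fields and action names are invented for the example:

```python
def configure_dsm_logging(dsms, selected):
    """For each user-selected DSM, emit a logging action: DSM-block
    instances get signal logging enabled, while Simulink.Signal-backed
    DSMs get a mirror variable created in the base workspace
    (hypothetical action names)."""
    actions = []
    for d in dsms:
        if d["name"] not in selected:
            continue                      # user chose not to record this one
        if d["defined_by"] == "DSM block":
            actions.append((d["name"], "enable_logging"))
        else:                             # Simulink.Signal object
            actions.append((d["name"], "mirror_in_base_workspace"))
    return actions

dsms = [{"name": "VehSpeed", "defined_by": "DSM block"},
        {"name": "DiagFlag", "defined_by": "Simulink.Signal"}]
print(configure_dsm_logging(dsms, {"VehSpeed", "DiagFlag"}))
```

The per-definition dispatch is the key point: a single recording strategy would miss one of the two DSM flavors.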

With the increasingly widespread adoption of AUTOSAR AP (Adaptive Platform) models, many traditional testing tools support only AUTOSAR CP (Classic Platform) models, making the generation of test environments for AP models difficult. Even when some software manages to produce a test environment, the generated environment often cannot be executed in simulation. In such cases, the core modules must be extracted from the model and modified before testing can proceed. This approach is not only inefficient but also lacks guaranteed accuracy and validity, since the tested object is no longer equivalent to the original model.

Solution: By deeply parsing the AP model and targeting port-based function definitions, our tool automatically constructs a complete test environment: it generates corresponding function calls for port-based function prototypes in separate models, adds input and output signals for these functions, and then connects them to the correct function call ports in the main model. If the function prototype called in the original model does not exist, the tool automatically generates the prototype in another model based on the call information and connects it to the appropriate port in the main model—all without manual intervention, improving efficiency and ensuring test completeness. Throughout this process, it handles the data type definitions used by the functions accordingly, guaranteeing that the generated test environment can be correctly executed in simulation.

During testing, it is necessary not only to test normal operating conditions but also to evaluate various abnormal scenarios such as transient faults, permanent faults, and other exception types to verify the system’s ability to handle illegal signals. This requires fault injection testing functionality, which is especially essential in interface testing.

Solution: Fault injection is implemented by recording intermediate signals directly in the model and allowing test cases to inject abnormal values into the signals at designated fault injection points. After determining the injection points, the signals available for input in the test cases are expanded, enabling arbitrary abnormal values to be injected at any moment. Once the fault injection ends, the original signals are immediately restored, while ensuring that the intermediate values at the injection points can also be evaluated. By using Simulink signal line properties, markers can be added to points requiring fault injection. The system automatically checks attributes such as the signal data type to ensure correctness of the injected signal properties. Compared to traditional manual model modification, this fault injection approach significantly improves the efficiency of abnormal testing, supplements edge cases and fault conditions not covered by normal testing, and ensures system stability when encountering real-world faults.
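The inject-then-restore behavior at a fault injection point can be sketched on a sampled signal. This is an illustrative Python stand-in for the model-level mechanism, assuming discrete time steps; the function name and window convention are invented for the example:

```python
def inject_fault(signal, start, end, fault_value):
    """Return a copy of the sampled signal with fault_value injected on
    the time window [start, end); outside the window the original
    samples are kept, mirroring the 'restore once injection ends'
    behavior described above."""
    return [fault_value if start <= t < end else x
            for t, x in enumerate(signal)]

print(inject_fault([0, 1, 2, 3, 4], 1, 3, 99))  # → [0, 99, 99, 3, 4]
```

Because the original samples outside the window are untouched, the same test case can assess both the fault response and the recovery behavior.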

When testing different values of calibration parameters, their values need to be modified within the domain defined by the parameters. Current testing tools typically only allow modification before the simulation of a single test case runs. This means each test case can only validate one specific value, necessitating the creation of multiple test cases for multiple values, which increases the number of similar test cases and extends testing time. It also prevents results from different calibration values from being displayed in a single plot.

Solution: During the simulation process, the execution can be paused to modify the value of a calibration parameter, and then resumed to continue the current simulation. This enables testing all variations within a single test case, significantly improving testing efficiency.

Example: Take a Gain block in a model whose value is assigned by a variable as an example. Within a test case, its value can be adjusted directly and dynamically, with results obtained in real time. This allows rapid simulation of multiple scenarios without modifying the model or interrupting the simulation, greatly enhancing testing efficiency and coverage.

In Simulink models, elements such as Simulink Function and Function Caller, Goto and From, and Data Store Write/Memory and Data Store Read are used in pairs. However, they may not reside at the same hierarchical level. During testing, if the object under test contains only “isolated elements” such as Function Caller, From, or Data Store Read, the simulation may fail because the counterpart block is missing.

Solution: Depending on the scenario—whether the counterpart block exists elsewhere in the model but not in the test object, or is entirely absent from the model—different measures are taken to automatically complement the corresponding blocks or add the necessary input signals, ensuring smooth simulation execution.

Example (using Goto and From blocks): If a From block is present in the test object but its corresponding Goto block is missing, the tool automatically identifies this isolated signal, complements it with a Goto block in the test environment, and treats the variable as an input signal, allowing its value to be modified at any time when writing test cases. This ensures both simulation continuity and seamless testing progression.
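The detection step for this example can be sketched over a flattened block list. This is a hypothetical Python illustration, with the block representation (dicts with `type` and `tag` fields) invented for the example rather than taken from any real Simulink API:

```python
def find_isolated_froms(blocks):
    """blocks: hypothetical flat list of {'type': ..., 'tag': ...} dicts
    standing in for a parsed block diagram. Return every From block
    whose matching Goto is absent; each one needs a complementary Goto
    (fed by a new input signal) in the generated test harness."""
    goto_tags = {b["tag"] for b in blocks if b["type"] == "Goto"}
    return [b for b in blocks
            if b["type"] == "From" and b["tag"] not in goto_tags]

blocks = [{"type": "From", "tag": "A"},
          {"type": "Goto", "tag": "B"},
          {"type": "From", "tag": "B"}]
print(find_isolated_froms(blocks))  # → [{'type': 'From', 'tag': 'A'}]
```

Only the From with tag "A" is isolated; the From with tag "B" has a matching Goto and needs no completion.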

The traditional manual method of writing test cases is not only time-consuming but also heavily reliant on individual expertise. Furthermore, manual exhaustive enumeration becomes impractical as the number of combinations grows, often leading to incomplete combinations and, as a direct result, insufficient test coverage.

Solution: We provide a multi-modal, high-efficiency test case generation solution that meets all requirements:

Script/Table-based Case Generation: An intuitive table-filling interface intelligently inherits signal initial values, requiring only the definition of signal values over time. It offers auxiliary functions and MATLAB expressions, automatically generates assessment functions, and significantly improves authoring efficiency.

Full Combination Test Cases (Equivalence Class Combination Cases): Automatically selects signal values (user-modifiable), intelligently constructs test combinations, and supports the embedding of assessment or expectation value functions, achieving comprehensive testing for various scenarios.

High-Coverage Case Auto-generation: Automatically generates test cases with the highest coverage based on the model. Assessment functions can be used for automatic result evaluation, fundamentally ensuring test sufficiency and objectivity.

Coverage Gap Filling: Automatically generates test cases to supplement missing coverage based on the current coverage status of existing test cases.

Example: When creating a test group in the MQTester interface, users can freely choose between script-based or table-based methods. These diverse authoring methods make testing more flexible.

Under the test case menu, users can see a series of options related to combination testing. After selecting “New,” the combination configuration file is automatically generated and opened. This file lists all configuration information required for permutations and combinations of the current object under test. Users can configure expected values or assessment functions within the file as needed.

In MQTester, using the “Auto Test” command automatically generates test cases with the highest coverage. By selecting Simulation Configuration -> Fill Coverage Gaps from the dropdown menu, new test cases are generated based on the current test coverage.

When there are many input signals, the number of combinations can easily become extremely large. For example, with 13 input signals and 3 possible values for each signal, the number of combinations is 3¹³ = 1,594,323. This already exceeds the maximum number of rows in an Excel worksheet (1,048,576), and MATLAB’s processing of such non-matrix data is also quite slow.

Solution: Based on the number of elements and possible values in the combination, the system automatically selects a suitable orthogonal array using orthogonal design theory. It then picks representative test case combinations according to the orthogonal array, eliminating redundant ones. When the number of combinations is enormous, this significantly reduces the total number of combinations, improves test case efficiency, and lowers time costs. When the number of elements exceeds 4, orthogonal arrays can reduce the test volume by over 90%. Using an orthogonal array, the example above would require only 27 representative combinations.
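The reduction from 3¹³ full combinations to 27 rows comes from a classical orthogonal array: L27(3¹³), which can be built over GF(3). A minimal Python sketch of the standard construction (this is textbook orthogonal design, not MQTester code):

```python
from itertools import product

def l27_orthogonal_array():
    """Construct the L27(3^13) orthogonal array: one column per 1-D
    subspace of GF(3)^3 (the 13 canonical vectors whose first nonzero
    entry is 1), one row per vector r in GF(3)^3, entry = r . c mod 3.
    Any two columns contain each ordered symbol pair exactly 3 times,
    i.e. the array has strength 2."""
    cols = [v for v in product(range(3), repeat=3)
            if any(v) and next(x for x in v if x) == 1]
    return [[sum(a * b for a, b in zip(r, c)) % 3 for c in cols]
            for r in product(range(3), repeat=3)]

oa = l27_orthogonal_array()
print(len(oa), len(oa[0]))  # → 27 13
```

Every pair of factors is still exercised in all 9 value combinations, which is why the 27 rows remain representative of pairwise interactions despite covering under 0.002% of the full cross product.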

Example: MQTester provides functionality to automatically select an orthogonal array and then generate orthogonal combinations based on the selected array.

HiL Testing: MQTester currently supports HiL (Hardware-in-the-Loop) simulation on Speedgoat hardware. It can generate the target environment and code directly within the MATLAB environment and execute HiL simulations on the target machine using test cases managed by MQTester. The simulation results can be viewed with the MQTester Signal Viewer or included in test reports. For details, please contact our technical support.

Need A One-On-One Expert Demonstration?

Book an exclusive in-depth demo where a consultant provides real-time answers to all your technical and implementation questions.