Validation of calibration software, as required by ISO 17025, for example, is a topic that people do not like to talk about. Often there is uncertainty about the following questions: Which software actually needs to be validated? If so, who is responsible for it? Which requirements must the validation satisfy? How can it be carried out efficiently, and how is it documented? The following post explains the background and gives a recommendation for implementation in five steps.
In a calibration laboratory, software is used for anything from supporting the evaluation of results up to fully automated calibration. Regardless of the degree of automation, validation always refers to the entire process into which the program is integrated. Behind validation, therefore, lies the fundamental question of whether the calibration procedure fulfils its purpose and achieves its intended goals; in other words, does it provide the required functionality with sufficient accuracy?
Anyone who has to carry out validation tests should be aware of two basic principles of software testing:
Full testing is not possible.
Testing is always influenced by the environment.
The first principle states that testing all possible inputs and configurations of an application is not feasible, due to the sheer number of possible combinations. Depending on the application, the user must therefore decide which functions, configurations and quality characteristics have priority and which are not relevant.
Which decision is made often depends on the second point: the operating environment of the software. In practice, each application brings different requirements and priorities for the use of the software. There are also customer-specific adjustments to the program, for example concerning the contents of the certificate. The specific conditions in the laboratory environment, with a wide range of instruments, also create variance. This variety of requirement perspectives and the almost endless number of possible program configurations in customer-specific application areas make it impossible for a manufacturer to test for all the needs of a specific customer.
For these reasons, validation is the responsibility of the user. To make this process as efficient as possible, a procedure along the following five points is recommended; a brief code sketch of the comparison step follows the list:
The data for typical calibration configurations should be defined as "test sets".
At regular intervals, typically once a year, but at least after every software update, these test sets should be entered into the software.
The resulting certificates should be compared with those from the previous version.
For an initial validation, a cross-check, e.g. using MS Excel, can be carried out.
The validation evidence should be documented and archived.
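To illustrate steps 1 to 3, the following Python sketch compares the certificate values calculated from a test set against the reference values of the previously validated version. It is only an outline under assumed conventions: the CSV file names, the column names "point_id" and "error", and the tolerance are hypothetical placeholders and do not refer to any specific calibration software.

# Minimal sketch of the comparison in step 3: results calculated by the
# calibration software for a defined test set are checked against the
# reference results of the previous, already validated version.
# File names, column names and the tolerance are hypothetical examples.

import csv

TOLERANCE = 1e-6  # acceptable numerical deviation, to be defined per quantity


def load_results(path):
    """Read certificate values as {test point id: measured error} from a CSV export."""
    with open(path, newline="") as f:
        return {row["point_id"]: float(row["error"]) for row in csv.DictReader(f)}


def compare(reference_path, candidate_path):
    """Return a list of test points whose deviation exceeds the tolerance."""
    reference = load_results(reference_path)
    candidate = load_results(candidate_path)
    findings = []
    for point_id, ref_value in reference.items():
        new_value = candidate.get(point_id)
        if new_value is None:
            findings.append(f"{point_id}: missing in new results")
        elif abs(new_value - ref_value) > TOLERANCE:
            findings.append(f"{point_id}: {ref_value} -> {new_value}")
    return findings


if __name__ == "__main__":
    issues = compare("testset_v1_reference.csv", "testset_v2_results.csv")
    if issues:
        print("Deviations found for the following test points:")
        print("\n".join(issues))
    else:
        print("All test points within tolerance - document and archive this evidence.")

The cross-check of a first validation (step 4) can be done in the same spirit by recalculating the reference values independently, for example in MS Excel, before they are used as the baseline for later comparisons.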
WIKA provides PDF documentation of the calculations performed in the software.
Note
For further information on our calibration software and calibration laboratories, visit the WIKA website.
