Ideally, when executing test scripts, all tests are performed as written and all expected results are met as described. In “real life” this is not always the case. Generally, there will be a mix: some things go as expected, something is not right with the protocol, and some expected results are not met. The following sections discuss what to record, how to record it, and how to manage attachments (such as screenshots).

The Basics

As has been alluded to, the individual executing the test should annotate each page where data was recorded, indicating that he or she did, in fact, record that data. In addition, all the test equipment, the unit under test, and any setup should be recorded. Whatever is written down, record it using Good Documentation Practices (GDP).

Annotations

Minimize annotations but don’t hesitate to make them if it helps clarify results or test efforts. If, for example, a test is stopped at the end of one day and resumed the next day, an annotation should be made to show where and when (day) testing stopped and where and when testing resumed. Similarly, if a test is performed by multiple testers, annotations should be made to show which test steps were performed by which tester.

Annotations should only be used to provide necessary explanations. Annotations should not be used for reminders, grocery lists, doodles while waiting for a test to complete, etc. This is especially relevant in regulated industries but pertinent to any industry to ensure a professional result.

Deviant Behavior

“Uh oh, that didn’t work!” Deviations are test failures. The actual results deviated from the expected result. Deviations could cause testing to stop cold while the deviation is analyzed and a disposition determined.

All deviations should be immediately raised to appropriate personnel (e.g., test coordinator, Quality Assurance, management).

Deviation handling is defined by company policy; however, handling generally allows for one of the following dispositions (see the sketch after this list):

  • Stop testing until a correction is made
  • Allow testing to continue, but the deviation must be corrected prior to launch/release
  • Allow testing to continue; the deviation is acceptable for release (it will be corrected in a later release)
  • Reject – the deviation is a protocol error; redline (or update and re-release) the protocol and continue testing

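To make the record keeping concrete, here is a minimal Python sketch of how a deviation log entry might be modeled. The class and field names are illustrative assumptions, not part of any standard; the Disposition values mirror the list above.

    from dataclasses import dataclass, field
    from datetime import date
    from enum import Enum
    from typing import Optional


    class Disposition(Enum):
        """Possible dispositions for a deviation, mirroring the list above."""
        STOP_TESTING = "stop testing until a correction is made"
        FIX_BEFORE_RELEASE = "continue; correct prior to launch/release"
        ACCEPT_FOR_RELEASE = "continue; correct in a later release"
        PROTOCOL_ERROR = "reject; redline the protocol and continue"


    @dataclass
    class Deviation:
        """One deviation record; kept as original data even if superseded."""
        protocol_id: str    # protocol in which the deviation occurred
        step: str           # test step whose actual result deviated
        expected: str       # expected result as written
        actual: str         # actual result observed
        found_on: date      # date the deviation was observed
        tester: str         # initials of the tester who found it
        disposition: Optional[Disposition] = None  # set by review, not the tester
        notes: list[str] = field(default_factory=list)  # annotations/explanations

Raising a deviation then amounts to filling in such a record and routing it to the appropriate personnel for disposition.
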
Deviations are considered original data, so they should not be “covered up” even if the protocol is updated and re-released.

Variances

“Hey, this protocol step is not right.” It’s often the case that, despite the best intentions and careful reviews, protocol errors slip through the cracks. There are several varieties of protocol errors, and all are considered variances. Handling each type is discussed below.

Obvious Typographical Errors

The easiest type of variance to deal with is the obvious typographical error. Generally, these can be marked up during execution by the tester, with a GDP-compliant note indicating the error. Obvious typographical errors need not be detailed in the test report as individual variances.

The report can make a general statement that typographical errors were identified and the changes redlined. Note that once testing is completed, the protocols should be updated to incorporate the changes!

Procedural Errors in the Test Step

These generally require a bit more effort to correct. The tester should make the necessary changes using markups and annotations, include explanations, and execute the protocol in accordance with the changes.

Once the test is completed, the reviewer should review the change and record an annotation indicating concurrence. Depending on the extent of the variance, the tester may wish to involve the reviewer early and get concurrence that the change is appropriate prior to executing the changes. Procedural errors are individually summarized in the test report.

Procedural Errors in the Expected Results

These are the most challenging errors to deal with, and such changes generally raise red flags for auditors. Put yourself in an auditor’s shoes: the tester was executing a protocol with pre-defined and approved expected results, yet changed an expected result during execution.

Was this done just to pass the test (make the expected results match the actual results), or was it an appropriate change? Ideally, changes to expected results are pre-approved prior to execution, either by the reviewer or by a QA representative.

The tester should not make a unilateral decision to change the expected results and move on. The tester should red-line the changes, and approval can then be shown as additional annotations.

Once execution begins, the software (and environment) will, ideally, not change. Sometimes, though, that is not the case. For example, if a fatal flaw is exposed, especially one that has additional consequences (i.e., one that will cause other tests to fail), it’s likely better to suspend testing, fix the problem, and then resume testing.

Clearly, it would not be proper just to pick up where testing was suspended. An analysis will need to be made to determine if test results captured prior to the point of the change are still valid. Such analysis is recorded in the Test Report.

If analysis shows that results gathered prior to the point of suspension are no longer valid, the results are still kept as “original data,” and the Test Report should note that the analysis indicated re-execution of the tests was necessary.

Handling Attachments

When a screenshot or other data is captured to provide evidence of compliance, that capture becomes part of the test record. It is thus important to properly annotate the screenshot. Annotations should include the following:

  • a unique “attachment” reference (e.g., “attachment 1”)
  • the tester’s initials (or signature if required by company
    procedures)
  • the date the screenshot was taken
  • a reference to the test protocol and test step
  • a “page x of y” note (even if a single page)

For example, annotate the step(s) with “See Attachment 1 for results.” Each page of the attachment needs to refer to the protocol and step and should be numbered “Page x of y.”

An example annotation on an attachment might then look like “Attachment 1 for [protocol number], step 15. Page 1 of 5” (where the protocol number and step clearly establish the document to which the attachment is being made). This way, if a page or pages become separated, it’s easy to locate the results package to which the pages belong.
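
Where screenshots are captured electronically, building the annotation text can be scripted so the format stays consistent. A minimal Python sketch follows; the function name, parameters, and example values are illustrative assumptions, not a prescribed format.

    from datetime import date

    def attachment_annotation(attachment_no: int, protocol_id: str, step: int,
                              page: int, total_pages: int, tester: str,
                              taken_on: date) -> str:
        """Build the annotation text for one page of an attachment.

        Covers the elements listed above: unique attachment reference,
        protocol/step reference, page x of y, tester initials, and date.
        """
        return (f"Attachment {attachment_no} for {protocol_id}, step {step}. "
                f"Page {page} of {total_pages}. "
                f"{tester} {taken_on.isoformat()}")

    # Example with placeholder values:
    print(attachment_annotation(1, "VAL-PROT-042", 15, 1, 5, "JD", date(2024, 5, 1)))
    # -> Attachment 1 for VAL-PROT-042, step 15. Page 1 of 5. JD 2024-05-01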

Closing Thoughts: Improving

It’s always a balance between testing enough and releasing the product. You can never test all of the possible scenarios in a system. Even with 100% path coverage, there are too many external influences that may affect software operation.

Once testing completes, monitor the types of bugs discovered after release. These would be considered “escapes” from the testing phase. Very often, you’ll be able to identify patterns in the types of bugs that escaped. Use this information to understand where the testing fell short and improve current and future protocols.
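
A lightweight way to spot those patterns is to tally escapes by category. Here is a minimal Python sketch, assuming a simple list of (bug ID, category) records; the IDs and categories shown are placeholders.

    from collections import Counter

    # Hypothetical post-release bug log: (bug ID, category) pairs.
    escapes = [
        ("BUG-101", "boundary conditions"),
        ("BUG-102", "error handling"),
        ("BUG-103", "boundary conditions"),
        ("BUG-104", "configuration"),
        ("BUG-105", "boundary conditions"),
    ]

    # Count escapes per category; the most common categories point at
    # the areas where the test protocols fell short.
    by_category = Counter(category for _, category in escapes)
    for category, count in by_category.most_common():
        print(f"{category}: {count}")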

Extract from “Writing & Executing a Software Validation Protocol: Plain and Simple”

This quick and easy guide describes methods and approaches for writing a validation protocol that can help ensure a thorough validation effort. It also provides some tips and tricks on executing the protocol and documenting the results. While this book was written primarily for those new to validation projects and the effort required to gather sufficient evidence to support validation claims, experienced folks will likely find nuggets to help improve their efforts.
