So now we’ve decided that we’re no longer in a validated state.

Do we just re-run the original protocol from start to finish?

Not necessarily.

Risk-Based Approach

We can take a (uh-oh, here it comes again) risk-based approach to re-validation.

If an aspect of the system wasn't impacted by the new release, we may be able to make a case for not re-running the tests that cover it.

With the change management process described previously, we already know what the impacts are to requirements, validation protocols, etc.

The first thing to do is consider the requirements that were impacted. Since our trace matrix shows the relationship between requirements and tests, we'll start by re-running the tests linked to those requirements.
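To make that concrete, here's a minimal sketch in Python of pulling the re-run scope out of a trace matrix. The requirement and test IDs are invented for illustration; in practice the matrix usually lives in a spreadsheet or ALM tool rather than code.

```python
# Hypothetical trace matrix: each requirement maps to the tests that verify it.
trace_matrix = {
    "REQ-001": ["TC-010", "TC-011"],
    "REQ-002": ["TC-020"],
    "REQ-003": ["TC-011", "TC-030"],
}

# Requirements flagged as impacted by change management (illustrative IDs).
impacted_requirements = {"REQ-001", "REQ-003"}

# Start the re-validation scope with every test tracing to an impacted requirement.
tests_to_rerun = sorted(
    {test
     for req in impacted_requirements
     for test in trace_matrix.get(req, [])}
)

print(tests_to_rerun)  # ['TC-010', 'TC-011', 'TC-030']
```

However the matrix is actually stored, the principle is the same: every test tracing to an impacted requirement goes into the re-validation scope.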

Indirectly Affected

Next, we want to consider things that may be indirectly affected by the change.

This is commonly called regression testing.

One example: you have a change to an input form, say a change to a field's length or format.

Obviously, the input function itself would be assessed in re-validation, but to ensure nothing downstream has regressed, anything using data from that field should also be addressed in regression testing.
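As a rough sketch of that idea (the field and function names below are invented, and real dependency information would come from your design documentation rather than a hard-coded dictionary), you can treat "what consumes this field?" as a graph walk:

```python
from collections import deque

# Hypothetical map of which downstream functions/reports consume each data element.
consumers = {
    "order_form.qty":   ["inventory_update", "invoice_calc"],
    "inventory_update": ["reorder_report"],
    "invoice_calc":     [],
}

def regression_scope(changed_field):
    """Walk the dependency graph to find everything affected, directly or not."""
    scope, queue = set(), deque([changed_field])
    while queue:
        item = queue.popleft()
        for downstream in consumers.get(item, []):
            if downstream not in scope:
                scope.add(downstream)
                queue.append(downstream)
    return sorted(scope)

print(regression_scope("order_form.qty"))
# ['inventory_update', 'invoice_calc', 'reorder_report']
```

The transitive walk matters: a report that never reads the form directly can still regress if it consumes a value computed from the changed field.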

Changes That Impact the Entire System

With changes that potentially impact the entire system, e.g., an update to the underlying database engine or a new “major release” from the software supplier, it will likely be difficult to justify not testing the entire system again, especially if the system is considered high-risk.

When in doubt, it’s best to include more in re-validation testing.

Document Your Decisions

But if you can provide a risk-based justification for omitting certain test sections, it's generally considered acceptable to do so.

Use your application validation plan to document your decisions and rationale.
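As an illustration of what such a documented decision might look like (the structure and field names here are invented for this sketch, not a regulatory template):

```python
from dataclasses import dataclass

@dataclass
class RevalidationDecision:
    change_ref: str   # link back to the change record
    item: str         # test section or protocol step
    decision: str     # e.g., "re-run", "regression", or "omit"
    rationale: str    # the risk-based justification

decisions = [
    RevalidationDecision(
        change_ref="CR-042",
        item="TC-050 Report Export",
        decision="omit",
        rationale="Export module untouched by change; no shared data elements.",
    ),
]
```

Whatever the format, whether a spreadsheet, a table in the plan, or records in an ALM tool, the point is that each omission traces back to an explicit, reviewable rationale.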

Once you have defined the scope of the re-validation effort, execution proceeds ‘normally’ as described earlier.