What validation problems are you likely to see over and over? When tackling complex validation challenges, you’ll save time, money and headaches when you know the most common problems and where to find them.

The following analysis is based on validation work EduQuest performed for a large FDA-regulated company over the past year. The goal was to bring the company’s software validation evidence up to the level of the U.S. FDA’s current expectations as well as those of the client’s own independent auditor.

Our efforts yielded 1,720 observations. As part of a “lessons learned” review, we categorized the observations and identified which documents contained them. The results – in our experience – are typical of the problems most companies face.

80/20 Rule Applies to Common Validation Problems

Through Pareto analysis of the categories of problems, we discovered that about 80% of our observations were clustered around nine types of deficiencies.
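If you want to run the same kind of Pareto analysis on your own findings, a few lines of Python are enough: sort the deficiency categories by count and find the smallest set that covers about 80% of observations. The sketch below uses hypothetical category names and counts, not our actual data.

```python
# Minimal Pareto analysis sketch: find the smallest set of deficiency
# categories that accounts for ~80% of all observations.
# Category names and counts are hypothetical examples, not actual data.

observations = {
    "Missing Information": 410,
    "Inconsistency": 280,
    "Traceability": 240,
    "Vague Wording": 190,
    "Unverifiable Test Results": 120,
    # ... remaining categories ...
}

total = sum(observations.values())
cumulative = 0
for category, count in sorted(observations.items(),
                              key=lambda item: item[1], reverse=True):
    cumulative += count
    print(f"{category:28s} {count:5d}  {cumulative / total:6.1%}")
    if cumulative / total >= 0.80:
        break  # the categories printed so far cover ~80% of observations
```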
The most frequent deficiencies we found were:

Missing Information

Documents or records omitted fundamental information or content that should have been included.

Inconsistency

Documents contained statements inconsistent with other statements about the same topic in the same document or in the same validation package. What’s more, no explanation or reason was given for the difference. We found that jargon, varying terminology, and contradictions in logic frequently caused these kinds of inconsistencies.

Lack of Needed Detail

This deficiency applied mostly to requirements documents. The requirements in the validation package did not adequately describe the characteristics of (see the sketch after this list):

  • Data
  • User interactions with business processes
  • Key processes internal to the software
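
As a hypothetical illustration of the first item, the characteristics of data: a requirement becomes verifiable when type, unit, valid range, and precision are spelled out rather than left implied. The field names and values below are illustrative only.

```python
# Hypothetical example of spelling out the characteristics of a single
# data element in a requirement, captured here as a structured record.
# Field names and values are illustrative only.
batch_weight_spec = {
    "field": "batch_weight",
    "type": "decimal",
    "unit": "kg",
    "valid_range": (0.0, 500.0),   # acceptable values
    "precision": 0.1,              # recorded to one decimal place
    "required": True,              # the field may not be left blank
}
```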

Traceability

We found three frequent traceability problems (a detection sketch follows the list):

  • The traceability matrix did not account for a traceable specification or an observation step in a test script
  • The trace was broken. Either a requirement was barren (lacked descendants or a test) or one of the detailed requirements or test results was an orphan (lacked a parent somewhere in the requirement tree)
  • The traceability matrix was incomplete. Requirement details were not explicitly numbered and traced to associated test steps. Requirements were not traced at a detailed level, so the reviewer needed to infer the detailed links between specifications and steps in a test script
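
Checks like these lend themselves to automation. Below is a minimal sketch in Python, assuming the trace matrix can be exported as parent-to-child mappings; the identifiers and layout are hypothetical, not taken from any particular tool.

```python
# Sketch of automated broken-trace checks on a traceability matrix.
# The matrix layout (requirement -> child specs, spec -> test steps)
# and the identifiers are hypothetical.

# parent requirement ID -> detailed specifications derived from it
req_to_specs = {
    "REQ-001": ["SPEC-001", "SPEC-002"],
    "REQ-002": [],                      # barren: no descendants
}

# detailed specification ID -> test-script steps that verify it
spec_to_tests = {
    "SPEC-001": ["TS-01 step 4"],
    "SPEC-002": [],                     # barren: no test coverage
    "SPEC-999": ["TS-02 step 7"],       # orphan: no parent requirement
}

all_children = {s for specs in req_to_specs.values() for s in specs}

barren_reqs = [r for r, specs in req_to_specs.items() if not specs]
barren_specs = [s for s, tests in spec_to_tests.items()
                if s in all_children and not tests]
orphan_specs = [s for s in spec_to_tests if s not in all_children]

print("Barren requirements (no descendants):", barren_reqs)
print("Barren specifications (no test):", barren_specs)
print("Orphan specifications (no parent):", orphan_specs)
```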

Vague Wording

Documents used generalities such as “in accordance to an approved procedure”, “applicable regulatory requirements”, or “all associated GxP and business processes”. In addition, documents used vague words such as “may”, “possibly”, “more or less”, and “approximately”.
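
A simple text scan can flag this kind of wording for reviewer attention. Below is a minimal sketch in Python; the watch list and the sample requirement are illustrative only, and any real list would need tuning to your own documents.

```python
import re

# Sketch of a vague-wording scan over requirement text.
# The watch list is illustrative; extend and tune it for your documents.
VAGUE_TERMS = ["may", "possibly", "more or less", "approximately",
               "as appropriate", "applicable"]

def flag_vague_wording(requirement: str) -> list[str]:
    """Return the vague terms found in a single requirement statement."""
    return [term for term in VAGUE_TERMS
            if re.search(rf"\b{re.escape(term)}\b", requirement,
                         re.IGNORECASE)]

req = "The system may archive records per applicable requirements."
print(flag_vague_wording(req))   # ['may', 'applicable']
```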

Unverifiable Test Results

Expected results were not described in enough detail for an independent reviewer to compare and verify actual results. The IEEE Standard for Software Test Documentation, Std. 829-1988, Clause 6.2.4 says you should “…provide the exact value (with tolerances where appropriate) for each required output or feature”. For executed scripts, actual results were not recorded or captured in a way that allowed an independent reviewer to compare them to expected results. For example, “OK” was noted in the actual-result column with no reference to a screen shot.
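
The same principle applies when test evidence is generated by software: record the exact expected value, the stated tolerance, and the comparison itself, not just a pass/fail note. A minimal sketch follows; the measurement name, values, and tolerance are all hypothetical.

```python
# Sketch of a verifiable expected-result check: an exact expected value
# with a stated tolerance, per the IEEE 829 guidance quoted above.
# The measurement name, values, and tolerance are hypothetical.

EXPECTED_TEMP_C = 37.0   # exact expected output
TOLERANCE_C = 0.5        # stated tolerance

actual_temp_c = 37.2     # value recorded during test execution

deviation = abs(actual_temp_c - EXPECTED_TEMP_C)
result = "PASS" if deviation <= TOLERANCE_C else "FAIL"

# Record the full comparison, not just "OK", so an independent reviewer
# can verify the outcome from the captured evidence.
print(f"expected {EXPECTED_TEMP_C} ±{TOLERANCE_C}, "
      f"actual {actual_temp_c}, deviation {deviation:.1f}: {result}")
```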

GDP

We found three frequent Good Documentation Practice problems:

  • Hand-recorded data and testing evidence, such as test results, were presented in a way that could cause doubts about their authenticity (for example, cross-outs without initials, date, and reason)
  • Data that confirmed a specific requirement was hard to find in the evidence provided (for example, a busy screen shot crammed with data)
  • Handwritten corrections were made that changed the sense of a requirement or an expected test result, but no discrepancy report or change request was filed (for example, changing an expected result from indicator “Off” to “On”). In GDP, hand corrections are allowed without additional documentation only for obvious typographical errors, such as dropped or transposed letters (for example, correcting “th” or “teh” to “the”)

Incomplete Testing

Test scripts did not fully or adequately test the associated requirement.

Ambiguity

Text could be interpreted in more than one way, so it did not establish a single, unique requirement. The words “either” and “or” in a requirement are strong clues that the text is ambiguous.

Identifying the Most Vulnerable Documents and Records

Taking the next step, we categorized the documents and records where we found the most frequent deficiencies. We discovered that about 85% of the findings were concentrated in six key documentation areas.

The top types of flawed documentation were:

  • Specifications (including User Requirements)
  • Test Scripts
  • Validation Plans
  • Test Plans
  • Trace Matrices
  • Test Results

Although the exact order of problem areas may differ in your organization, it’s likely these same six documentation areas will float to the top. From our experience, specification documents are usually the biggest pitfall for most companies.

Fewer Validation Problems and Inspection Success Go Hand-in-Hand

After auditing many companies, large and small, and participating in countless remediation projects, EduQuest has found these results typical of companies worldwide.

More importantly, we have also seen first-hand that companies that reduce the frequency of these problems through focused remediation efforts are much more likely to weather future FDA inspections. You can reasonably assume the same would be true if the frequency of such problems were low in the first place.

I recommend you use these results and definitions to assess your own validation projects, or devise your own categories and charts to pinpoint your company’s most common problems. Either way, you’ll have a major head start in better allocating validation resources and making needed improvements quickly.