MS Access Validation – Top Tips

This article will present a simple method for validating MS Access databases for GXP use. The goal of our validation strategy is to provide better testing and documentation of individual databases in less time. Once we establish this methodology, we can rapidly apply it to many databases to reach compliance faster.

This article will clearly explain how to effectively validate any MS Access Database used in a regulated environment.

Scope

This article will cover the most common examples of database validation activities. We will address special or very complicated cases where possible, but anything not explicitly covered in this article can be added as needed to fit your existing validation requirements.

Assumptions

We are not going to make any assumptions about the existing SDLC (Software Development Life Cycle) that may currently be in use; any material presented here can be adapted to meet your existing standards and practices. We will assume that the database and the associated front-end program that need validation have already been written, as this is what we find in the overwhelming majority of the cases presented to us. We will also assume that the database already has technological tools for compliance, such as audit trails and user-level security, that users have no direct access to the tables or queries (read-only queries are OK), and that all information is entered or presented via forms and reports.

Overall Approach

The basics of our approach are as follows:

We define our basic documentation practices and test methodology in a single document, the Database Validation Master Plan. This document can be referred to for all individual database validation projects, and will not need to be reviewed or approved for each validation effort.

Define the requirements for each form and report that is visible to the user. For each form or report, we will use the Requirements Specification to define in simple terms the purpose or primary function of that object. This is similar to black-box testing, but the individual functions and code will be documented in the Design Specifications.

While we certainly do document the critical parts of the program as part of a risk assessment, we find that it is almost as easy to test everything properly as it is to define what needs more or less testing. In either case, we always include a complete Requirements Trace Matrix to ensure that all requirements are tested in the test protocols.

Requirements Specification

The requirements specification should include all the requirements that your database must accomplish, with the primary focus on defining what information must be collected in the database and how it is collected, processed and presented to the users of the database. This document should be kept simple and relatively non-technical so that anyone who reads the document will understand the requirements.

Starting with a common validation template:

List all forms and reports in the database.

For each form, define the purpose and individual requirements for the form.
If the form is bound or allows data to be input, define what information should be entered. The level of detail about what data can be entered is entirely up to you. Some organizations list individual field names, field types and other properties in the Functional Requirements, while others simply indicate in loose, generic terms what type of information is to be collected (e.g., patient data, machine settings).

We are more concerned with the ultimate purpose of the form, especially since the field types and properties will be thoroughly documented in the Design Specification.
If there are particular validation rules, these should be defined. For example, the value of a field called pH might require a number between 0 and 14.

If there are particular workflows a form is involved with, these should be defined. For example, sample data might need to be entered, then the sample data reviewed, and then the sample data can be released.
If a form performs specific calculations, these calculations should be defined. For example, the form might perform a statistical analysis of data entered.

If a form produces a report or creates an external output, this should be defined. For example, the form might export data to an external text file or create a new Word document.

For each report, define the title of the report, the fields that should appear, any grouping or sorting requirements, and any calculations or formatting requirements. You should also define if specific user groups can or cannot open and view specific reports.

For defining the security settings, create a list of all the groups in the security workgroup file and define what each group should and should not be allowed to do within the program. If additional security settings are used, they should be documented here as well. For example, you may have a button labeled “Release Batch Now” that is only enabled for members of a group called “QA Release Supervisors”. When dealing with highly sensitive information such as clinical patient information, you may also want to include code that hides or disables critical fields for certain groups.

Design Specification

Now that you have defined your requirements for each form and report, you can create the detailed software Design Specification. The purpose of this document is to describe how the requirements have been implemented. This document should include enough information so that a developer could create the entire software project based on the information contained within this document and from reading the Requirements Specification.

Inputs, Processing, Outputs and Security

We expand each form into four sections (Inputs, Processing, Outputs and Security) and each report into two sections (Fields and Grouping, Sorting & Calculations). Subforms and subreports can be documented as part of the parent form.

Inputs

Document all controls that allow data to be entered or updated. In an automated system, you can also define the source of the input data or instructions. If any validation rules are used to enforce proper data entry, these should be documented here as well.
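
As a minimal sketch, the field-level rule for the pH example given earlier could be recorded in the Design Specification directly from the DAO field properties. The table and field names below (tblSampleResults, pH) are illustrative only.

    ' Hypothetical sketch: read the field-level validation properties of a field
    ' named "pH" in a table named "tblSampleResults" so they can be recorded in
    ' the Design Specification (table and field names are illustrative only).
    Sub DocumentpHValidation()
        Dim fld As DAO.Field
        Set fld = CurrentDb.TableDefs("tblSampleResults").Fields("pH")
        Debug.Print "ValidationRule: "; fld.ValidationRule   ' e.g. Between 0 And 14
        Debug.Print "ValidationText: "; fld.ValidationText
        Debug.Print "Required: "; fld.Required
    End Sub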

Processing

The processing section is for documenting functions and code attached to command buttons and event procedures that are automatically executed upon opening forms, updating records, etc. If there is code that enforces workflows or a certain sequence of steps or events within the program, this should be documented as well and tested in the OQ.

The other part of processing is the validation of the macros and code used in the database. Here, the design specification is a good place to document the code and annotate it as needed to describe the purpose of each function or subroutine.

Note: If you have good coding standards and use proper headers and comments in your code, this step may already be done for you.
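
As an illustration of the level of annotation we have in mind, here is a hypothetical event procedure with a simple header block; it enforces a review-before-release workflow step. The control and field names (cmdRelease, txtReviewedBy, txtStatus) are examples only.

    ' ---------------------------------------------------------------
    ' Procedure : cmdRelease_Click
    ' Purpose   : Enforce the review-before-release workflow step.
    ' Inputs    : txtReviewedBy (control on the current form)
    ' Outputs   : Sets txtStatus to "Released" when the rule is met.
    ' ---------------------------------------------------------------
    Private Sub cmdRelease_Click()
        ' Control names are illustrative only.
        If IsNull(Me.txtReviewedBy) Then
            MsgBox "The record must be reviewed before it can be released.", vbExclamation
            Exit Sub
        End If
        Me.txtStatus = "Released"
    End Sub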

Outputs

We define the outputs of a form as either

  • The opening or printing of a report (which we document separately), or
  • The creation of an external file, like an exported CSV text file

If the database is part of an automated system where data is sent to another software application, we would usually cover this type of output under the processing section.
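
A minimal sketch of this type of output, using the standard DoCmd.TransferText method, might look like the following; the query name and destination path are assumptions for illustration only.

    ' Hypothetical sketch of a CSV export documented as a form output.
    ' The query name and destination path are examples only.
    Private Sub cmdExportResults_Click()
        DoCmd.TransferText acExportDelim, , "qryApprovedResults", _
            "\\server\share\ApprovedResults.csv", True
    End Sub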

Security

For each form, we define what we call “The Big Six” security settings, which we define as the group or groups that are allowed to:

  • View or open the form
  • Edit existing data
  • Add new records
  • Delete records
  • Sign records by applying electronic signatures
  • Un-Sign or remove electronic signatures

If any additional security settings are used, they should be carefully documented here as well. For example, you may have a button labeled “Release Batch Now” that is only enabled for members of a group called “QA Release Supervisors”.

For highly sensitive information such as clinical patient information, you also have the option of limiting which fields can be viewed for read-only use. You may want to include code that hides or disables critical fields for certain groups, but the majority of the databases that we see do not need this level of security.
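
A minimal sketch of this kind of group-based check, assuming Access user-level security (a workgroup file) and DAO, is shown below; the group, control, and function names are examples only.

    ' Hypothetical sketch: enable the "Release Batch Now" button only for members
    ' of the "QA Release Supervisors" group defined in the workgroup file.
    Private Function IsMemberOf(strGroup As String) As Boolean
        Dim grp As DAO.Group
        For Each grp In DBEngine.Workspaces(0).Users(CurrentUser()).Groups
            If grp.Name = strGroup Then
                IsMemberOf = True
                Exit Function
            End If
        Next grp
    End Function

    Private Sub Form_Open(Cancel As Integer)
        ' Control and group names are illustrative only.
        Me.cmdReleaseBatch.Enabled = IsMemberOf("QA Release Supervisors")
    End Sub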

Test Protocols

The testing of any database should demonstrate that the requirements were properly implemented according to the design specifications.

Installation Qualification (IQ) testing is usually limited to making sure the file is in a location where users can access it, either by installing the front end on each user's C: drive or by using a common copy on the local network (the data files are always stored in a shared network folder for multi-user applications). We recommend running the IQ from at least two separate workstations to catch the dreaded but common refrain, “it works on my computer but not on hers…” At a minimum, we suggest that you verify the version of the Access database files and the version of MS Access used during execution of the protocol, and also verify that linked tables are properly connected. If you have the ability to view code, you should also verify that all references required by the application are present and that the code compiles without errors.
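
A minimal sketch of these IQ checks, assuming you can run code from the Immediate window, is shown below; it records the Access version, confirms that each linked table can be opened, and flags broken references. The table and reference names come from the database itself.

    ' Hypothetical IQ helper: record the Access version, verify that linked
    ' tables can be opened, and flag any broken code references.
    Sub RunIQChecks()
        Dim tdf As DAO.TableDef
        Dim ref As Reference
        Dim rs As DAO.Recordset

        Debug.Print "MS Access version: "; SysCmd(acSysCmdAccessVer)

        For Each tdf In CurrentDb.TableDefs
            If Len(tdf.Connect) > 0 Then                     ' linked table
                Set rs = CurrentDb.OpenRecordset(tdf.Name)   ' fails if the link is broken
                rs.Close
                Debug.Print "Linked table OK: "; tdf.Name
            End If
        Next tdf

        For Each ref In Application.References
            If ref.IsBroken Then Debug.Print "Broken reference detected (GUID: " & ref.Guid & ")"
        Next ref
    End Sub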

Operational Qualification (OQ) testing is used to verify that the Functional Requirements have been implemented according to the settings defined in the Design Specification. All settings that were documented in the Design Specification (security settings, default values, tab order, etc.) should be tested. The IQ and OQ can be combined into a single IOQ protocol as needed.

We organize test cases by form. For each form, thoroughly test the inputs, workflows, processing and outputs generated by that form.

The one exception is that we typically perform security tests separately. It is easier to test security in separate test cases for each individual group, assuming that security settings are based on group membership and not assigned to individual users.

Input Testing

Here are some examples of the types of test cases to write for input testing:

What data can be entered into each control on a form?
Are validation rules being enforced? Validation rules can be created at either the field level or at the control level. If these rules are implemented at the field level, then they will be enforced throughout the entire program. However, validation rules applied to controls can be bypassed via functions that write data directly to the fields.

Demonstrate that a record cannot be updated without required information.
Some fields may need additional “challenge testing” to verify that they cannot be left empty, or can only accept certain data from a drop-down list. Some examples of these types of fields include status fields, results fields, assigned to fields and due date fields.
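
To support this kind of input testing, a short helper like the hypothetical sketch below can list every field-level Required and ValidationRule setting in the database, so the tester knows which rules apply everywhere and which exist only on a particular form.

    ' Hypothetical helper for input testing: list the field-level Required and
    ' ValidationRule settings in every local table.
    Sub ListFieldLevelRules()
        Dim tdf As DAO.TableDef
        Dim fld As DAO.Field
        For Each tdf In CurrentDb.TableDefs
            If Left(tdf.Name, 4) <> "MSys" Then              ' skip system tables
                For Each fld In tdf.Fields
                    If fld.Required Or Len(fld.ValidationRule) > 0 Then
                        Debug.Print tdf.Name; "."; fld.Name; _
                            "  Required="; fld.Required; _
                            "  Rule="; fld.ValidationRule
                    End If
                Next fld
            End If
        Next tdf
    End Sub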

Process & Output Testing

Processing and output tests are usually easier to combine when writing test cases.

Are formulas correct? There are several methods for verifying and testing formulas.

Some examples of test cases for processing and outputs:

  • Demonstrate that clicking a button performs the desired function (the appropriate screen opens, the appropriate output is created, etc.)
  • Demonstrate that a particular workflow is enforced (a lot cannot be released before it is reviewed, etc.)
  • Demonstrate that a formula produces the expected result.
  • Demonstrate that outputs accurately reproduce data entered into the system.
  • Demonstrate that outputs have the correct formatting.
  • Test reports by a combination of visual inspection and verification that the data matches the input data.

A word of caution about testing reports: recently, we have noticed FDA 483 observations about validation projects that do not verify that the input data matches outputs on reports. Make sure that you reference test input data from earlier testing and verify that it is being correctly reported on all related reports.

Security Testing

The best way we have found to perform comprehensive security testing is to create a separate test case for each group in the security workgroup file, then go through the entire program to test what a user who is a member of that group can or cannot do within the program. The type and amount of security testing that you can do for databases is largely based on how security has been implemented.

Conclusion

This methodology will result in a User/Functional Requirements Specification, a Software Design Specification, and an IOQ Protocol ready for approval and execution. Any deviations found during testing can be handled according to existing validation practices, and a summary report can be generated showing that all the activities specified in the Validation Master Plan or SOP have been completed.

The benefits we have realized from using the methodology described here include reduced time to generate documents and better, more comprehensive testing of the software application, leading to faster implementation into the production environment.

Author

Tyson Mew

President, Ofni Systems