Analytical methods are developed in-house or adopted from the pharmacopoeias to analyze raw materials, intermediates, finished products, stability samples, process validation samples, cleaning validation samples and bioanalytical samples, using analytical techniques such as titrimetry, spectrometry, chromatography, polarography and electrophoresis.

In the case of non-compendial methods, full validation is generally required, while for compendial or pharmacopoeial methods verification on at least three lots is performed to check the suitability of these methods under the actual conditions of use. FDA regulations such as GMP, GLP and GCP, and quality standards such as ISO/IEC 17025, require analytical methods to be validated before and during routine use.

What is Test Method Validation?

Analytical method validation is the process used to confirm that the analytical procedure employed for a specific test is suitable for its intended use.

As per the FDA, method validation is defined as “the process of proving (through scientific studies) that an analytical method is acceptable for its intended use.”

According to ISO, method validation is the confirmation, by examination and the provision of objective evidence, that the particular requirements for a specified intended use are fulfilled.

The available literature shows great diversity in how validation studies are performed. The ICH has reached a consensus and developed the tripartite guideline Q2(R1), “Validation of Analytical Procedures: Text and Methodology”, with detailed methodology. The AOAC, the EPA and other scientific organizations provide methods that are validated through multi-laboratory studies.

In general terms, then, the requirements and performance parameters must first be defined for every analytical method and purpose of analysis; second, the values for these parameters must be estimated and checked to see whether they really meet the criteria. This is an essential condition if the results provided are to be used.

Method Validation Parameters and Acceptance Criteria

The parameters, as defined by the ICH and by other organizations and authors, are described in brief in the following paragraphs.

Selectivity/Specificity

The term specificity usually refers to a method that produces a response for a single analyte only, while the term selectivity refers to a method that provides responses for a number of chemical entities that may or may not be distinguished from each other. Since there are very few methods that respond to only one analyte, the term selectivity is usually more appropriate. The USP defines the selectivity of an analytical method as its ability to determine an analyte accurately in the presence of interference, such as synthetic precursors, excipients, enantiomers and known (or likely) degradation products that may be expected to be present in the sample matrix.

In the case of bioanalytical method development, selectivity studies should also assess interference that may be caused by the matrix, e.g., urine, blood and water. Optimized sample preparation, involving extraction, filtration, percolation, etc., can eliminate most of the matrix components.

Precision and reproducibility

As per the ICH, precision should be investigated using homogeneous, authentic samples. However, if it is not possible to obtain a homogeneous sample, it may be investigated using artificially prepared samples or a sample solution. The ICH requires precision to be measured from at least six replicates at 100 percent of the test target concentration or from at least nine replicates covering the complete specified range.
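As a minimal sketch of the repeatability calculation, the following Python snippet computes the percent relative standard deviation (%RSD) from six replicate assay results; the values are hypothetical and purely illustrative.

```python
import numpy as np

# Hypothetical repeatability data: six replicate assays at 100% of the
# test target concentration (illustrative values, % label claim).
replicates = np.array([99.6, 100.2, 99.8, 100.5, 99.9, 100.1])

mean = replicates.mean()
sd = replicates.std(ddof=1)      # sample standard deviation
rsd_percent = sd / mean * 100    # relative standard deviation (%RSD)
print(f"mean = {mean:.2f}%, SD = {sd:.3f}, RSD = {rsd_percent:.2f}%")
```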

The precision of a method can be subdivided into three categories: repeatability, intermediate precision and reproducibility. Repeatability is obtained when the analysis is carried out in one laboratory by one operator using one piece of equipment over a relatively short time span. Intermediate precision is determined by comparing the results of the method run within a single laboratory under deliberately varied conditions: different days, different analysts, different equipment, etc.

The objective of reproducibility is to verify that the method will provide the same results in different laboratories (collaborative studies, usually applied to the standardization of methodology) with different analysts, using operational and environmental conditions that may differ from, but are still within, the specified parameters of the method (interlaboratory tests).

The acceptance criteria for precision depend very much on the type of analysis. In pharmaceutical quality control, a precision of better than 1 percent RSD is easily achieved for compound analysis, but the precision for biological samples is more like 15 percent at the concentration limits and 10 percent at other concentration levels. Acceptable precision depends on the sample matrix, the concentration of the analyte, the performance of the equipment and the analysis technique, and can vary between 2 percent and more than 20 percent.

Accuracy and recovery

The accuracy of an analytical method is the closeness of its test results to the true or accepted reference value. The ICH document on validation methodology recommends that accuracy be assessed using a minimum of nine determinations over a minimum of three concentration levels covering the specified range (e.g., three concentrations, three replicates each). Accuracy should be reported as the percent recovery of a known, added amount of analyte in the sample, or as the difference between the mean and the accepted true value, together with the confidence intervals.
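To illustrate the reporting of a mean recovery with its confidence interval, the following Python sketch uses nine hypothetical determinations (three levels, three replicates each); the recovery values are assumed for illustration only.

```python
import numpy as np
from scipy import stats

# Hypothetical accuracy data: nine determinations (three levels x three
# replicates) expressed as percent recovery (illustrative values).
recovery = np.array([98.5, 99.2, 100.1, 99.8, 100.4, 99.0, 100.9, 99.5, 100.2])

mean = recovery.mean()
sem = stats.sem(recovery)  # standard error of the mean
ci_low, ci_high = stats.t.interval(0.95, df=len(recovery) - 1,
                                   loc=mean, scale=sem)
print(f"mean recovery = {mean:.2f}%, 95% CI = [{ci_low:.2f}, {ci_high:.2f}]%")
```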

The true value for accuracy assessment can be obtained in several ways. One alternative is to compare the results of the method with results from an established reference method; this approach assumes that the uncertainty of the reference method is known, but for pharmaceutical studies such an alternative method is usually not available. Secondly, accuracy can be assessed by analyzing a sample of known concentration (e.g., a National Institute of Standards and Technology (NIST) reference standard, control sample or other certified reference material) and comparing the measured value with the true value supplied with the material; again, such a well-characterized sample is usually not available for new drug-related analytes. The third and most widely used approach, the recovery study, is performed by spiking the analyte into blank matrices.

For assay methods, spiked samples are prepared in triplicate at three levels over a range of 50-150% of the target concentration. After extraction of the analyte from the matrix, its recovery can be determined by comparing the response of the extract with the response of the reference material dissolved in a pure solvent. Because this accuracy assessment measures the effectiveness of sample preparation, care should be taken to mimic the actual sample preparation as closely as possible. If validated correctly, the recovery factor determined for different concentrations can be used to correct the final results.
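A minimal sketch of such a recovery study follows, assuming hypothetical spiked amounts and found values at the 50, 100 and 150 percent levels; the final lines show how a validated recovery factor could be applied to correct a result.

```python
import numpy as np

# Hypothetical spiked-placebo recovery at three levels (50/100/150% of
# target); amounts in mg (illustrative values).
added = {50: 5.0, 100: 10.0, 150: 15.0}
found = {50: [4.90, 5.05, 4.95],
         100: [9.80, 10.10, 9.90],
         150: [14.70, 15.15, 14.85]}

factors = {}
for level, amount in added.items():
    mean_found = np.mean(found[level])
    factors[level] = mean_found / amount  # recovery factor at this level
    print(f"{level}% level: mean recovery = {factors[level] * 100:.1f}%")

# If validated, a level-appropriate recovery factor corrects the final result.
measured = 9.85                      # mg found in a real sample (assumed)
corrected = measured / factors[100]  # correct with the 100%-level factor
print(f"corrected result = {corrected:.2f} mg")
```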

Linearity and Calibration Curve

The linearity of an analytical method is its ability to elicit test results that are directly proportional to the concentration of analyte in samples within a given range, or proportional by means of well-defined mathematical transformations. Linearity may be demonstrated directly on the test substance (by dilution of a standard stock solution) and/or by using separate weighings of synthetic mixtures of the test product components, using the proposed procedure.

Linearity is determined by a series of 3 to 6 injections of 5 standards whose concentrations span 80–120 percent of the expected concentration range. The response should be directly proportional to the concentrations of the analytes or proportional by means of a well-defined mathematical calculation. A linear regression equation applied to the results should have an intercept not significantly different from 0. If a significant nonzero intercept is obtained, it should be demonstrated that this has no effect on the accuracy of the method.
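The regression and intercept check might be sketched as follows in Python, using five hypothetical standards spanning 80 to 120 percent of the target concentration; peak areas are illustrative values only.

```python
import numpy as np
from scipy import stats

# Hypothetical linearity data: five standards spanning 80-120% of the
# expected concentration; each area is the mean of replicate injections.
conc = np.array([80.0, 90.0, 100.0, 110.0, 120.0])       # % of target
area = np.array([801.0, 902.5, 1000.8, 1101.2, 1199.0])  # peak areas

result = stats.linregress(conc, area)
print(f"slope = {result.slope:.3f}, intercept = {result.intercept:.2f}")
print(f"correlation coefficient r = {result.rvalue:.5f}")

# Intercept expressed as a percentage of the response at the target level.
target_response = result.slope * 100.0 + result.intercept
print(f"intercept = {result.intercept / target_response * 100:.2f}% "
      "of the target-level response")

# Residuals should scatter evenly around zero over a linear range.
residuals = area - (result.slope * conc + result.intercept)
print("residuals:", np.round(residuals, 2))
```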

Frequently, linearity is evaluated graphically, in addition to or as an alternative to mathematical evaluation. The evaluation is made by visually inspecting a plot of signal height or peak area as a function of analyte concentration.

Because deviations from linearity are sometimes difficult to detect, additional graphical procedures can be used, such as plotting the deviations from the regression line versus the concentration, or versus the logarithm of the concentration if the concentration range covers several decades. For linear ranges, the deviations should be equally distributed between positive and negative values.

Acceptability of linearity data is often judged by examining the correlation coefficient, the y-intercept, the slope of the regression line and the residual sum of squares. A correlation coefficient of >0.999 is generally considered evidence of an acceptable fit of the data to the regression line. The y-intercept should be less than a few percent of the response obtained for the analyte at the target level, and a plot of the data should be included. In addition, an analysis of the deviation of the actual data points from the regression line may be helpful for evaluating linearity.

Range

The range of an analytical method is the interval between the upper and lower levels (including these levels) that have been demonstrated to be determined with precision, accuracy and linearity using the method as written. The range is normally expressed in the same units as the test results (e.g., percentage, parts per million) obtained by the analytical method. For assay tests, the ICH requires the minimum specified range to be 80 to 120 percent of the test concentration, and for the determination of an impurity, the range to extend from the limit of quantitation, or from 50 percent of the specification of each impurity, whichever is greater, to 120 percent of the specification.

Limit of Detection

The limit of detection is the point at which a measured value is larger than the uncertainty associated with it; it is the lowest concentration of analyte in a sample that can be detected, but not necessarily quantified. The limit of detection is frequently confused with the sensitivity of the method. The sensitivity of an analytical method is its capability to discriminate small differences in the concentration or mass of the test analyte; in practical terms, it is the slope of the calibration curve obtained by plotting the response against the analyte concentration or mass. In chromatography, the detection limit is the injected amount that results in a peak with a height at least two or three times as high as the baseline noise level. Besides this signal-to-noise method, the ICH describes three more methods:

The detection limit is determined by the analysis of samples with known concentrations of analyte and by establishing the minimum level at which the analyte can be reliably detected.

Standard deviation of the response based on the standard deviation of the blank: the magnitude of the analytical background response is measured by analyzing an appropriate number of blank samples and calculating the standard deviation of these responses.

Standard deviation of the response based on the slope of the calibration curve: a specific calibration curve is studied using samples containing the analyte in the range of the limit of detection. The residual standard deviation of the regression line, or the standard deviation of the y-intercepts of regression lines, may be used as the standard deviation.
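As a minimal sketch of the standard-deviation approach, the following Python snippet applies the ICH Q2(R1) formulas LOD = 3.3σ/S and LOQ = 10σ/S, where σ is here taken as the standard deviation of the blank responses and S is the calibration slope; all numerical values are hypothetical.

```python
import numpy as np

# Hypothetical blank responses and calibration data (illustrative values).
blank_responses = np.array([0.52, 0.48, 0.55, 0.50, 0.47, 0.53])  # detector units
conc = np.array([0.1, 0.2, 0.4, 0.8, 1.6])                        # µg/mL
response = np.array([10.4, 20.9, 41.6, 83.1, 166.0])              # detector units

sigma = blank_responses.std(ddof=1)               # SD of the blank
slope, intercept = np.polyfit(conc, response, 1)  # calibration slope S

lod = 3.3 * sigma / slope   # ICH Q2(R1): LOD = 3.3 sigma / S
loq = 10.0 * sigma / slope  # ICH Q2(R1): LOQ = 10 sigma / S
print(f"LOD ≈ {lod:.4f} µg/mL, LOQ ≈ {loq:.4f} µg/mL")
```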

Limit of quantitation: in chromatography, the limit of quantitation is the minimum injected amount that produces quantitative measurements in the target matrix with acceptable precision, typically requiring a peak height 10 to 20 times that of the baseline noise. If the required precision of the method at the limit of quantitation has been specified, the EURACHEM approach can be used: a number of samples with decreasing amounts of the analyte are injected six times, and the calculated percent RSD is plotted against the analyte amount. The amount that corresponds to the previously defined required precision is equal to the limit of quantitation. It is important to use not only pure standards for this test but also spiked matrices that closely represent the unknown samples.
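The EURACHEM-style interpolation might look as follows in Python, assuming hypothetical %RSD values from six replicate injections at each amount and an assumed required precision of 10 percent RSD.

```python
import numpy as np

# Hypothetical EURACHEM-style data: each amount injected six times;
# precision worsens toward the low end (illustrative values).
amounts = np.array([0.05, 0.1, 0.2, 0.5, 1.0])       # µg/mL
rsd_percent = np.array([22.0, 12.5, 6.8, 3.1, 1.4])  # %RSD of six injections

required_rsd = 10.0  # previously defined required precision (assumed)

# Interpolate the amount at which the measured %RSD equals the required
# precision; np.interp needs increasing x, so use the reversed arrays.
loq = np.interp(required_rsd, rsd_percent[::-1], amounts[::-1])
print(f"Limit of quantitation ≈ {loq:.3f} µg/mL at {required_rsd:.0f}% RSD")
```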

For the limit of detection, the ICH thus recommends, in addition to the signal-to-noise procedure, visual evaluation and the standard-deviation approaches based on the blank response and on the slope of the calibration curve. Any results of limit of detection and limit of quantitation measurements must be verified by experimental tests with samples containing the analytes at levels across the two regions. It is equally important to assess other method validation parameters, such as precision, reproducibility and accuracy, close to the limits of detection and quantitation.

Ruggedness

Ruggedness is not addressed in the ICH guideline; there its role is covered by reproducibility, which has the same meaning. The USP defines ruggedness as the degree of reproducibility of results obtained under a variety of conditions, such as different laboratories, analysts, instruments, environmental conditions, operators and materials. Ruggedness is thus a measure of the reproducibility of test results under normal, expected operational conditions from laboratory to laboratory and from analyst to analyst, and is determined by the analysis of aliquots from homogeneous lots in different laboratories.

Robustness

Robustness tests examine the effect that operational parameters have on the analysis results. For the determination of a method’s robustness, a number of method parameters, for example, pH, flow rate, column temperature, injection volume, detection wavelength or mobile phase composition, are varied within a realistic range, and the quantitative influence of the variables is determined. If the influence of the parameter is within a previously specified tolerance, the parameter is said to be within the method’s robustness range.
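The tolerance comparison could be sketched as below, with hypothetical one-factor-at-a-time parameter variations and an assumed tolerance of 2 percent around the nominal assay result; the parameter ranges and values are illustrative only.

```python
# Hypothetical one-factor-at-a-time robustness check: assay results
# (% of nominal) obtained while each parameter is varied within a
# realistic range around the written method conditions.
nominal_assay = 100.0  # % label claim under the written method
tolerance = 2.0        # previously specified tolerance, in % (assumed)

varied_results = {
    "pH 2.8 / 3.2":           [99.4, 100.9],
    "flow 0.9 / 1.1 mL/min":  [100.3, 99.1],
    "column temp 28 / 32 °C": [98.6, 101.2],
}

for parameter, results in varied_results.items():
    worst = max(abs(r - nominal_assay) for r in results)
    status = "within" if worst <= tolerance else "OUTSIDE"
    print(f"{parameter}: max deviation {worst:.1f}% -> {status} "
          "the robustness range")
```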

Obtaining data on these effects helps to assess whether a method needs to be revalidated when one or more parameters are changed, for example, to compensate for column performance over time. The ICH document recommends evaluating a method’s robustness during the development phase, and any results that are critical for the method should be documented. This is not, however, required as part of a registration.

Stability

Many solutes readily decompose prior to chromatographic investigation, for example, during the preparation of the sample solutions, extraction, cleanup and phase transfer, or during storage of prepared vials (in refrigerators or in an automatic sampler). Under these circumstances, method development should investigate the stability of the analytes and standards. The term system stability has been defined as the stability of the samples being analyzed in a sample solution. It is a measure of the bias in assay results generated during a preselected time interval, for example, every hour up to 46 hours, using a single solution, and should be determined by replicate analysis of the sample solution.

System stability is considered appropriate when the RSD, calculated on the assay results obtained at the different time intervals, does not exceed the corresponding value of the system precision by more than 20 percent. If, on plotting the assay results as a function of time, the value is higher, the maximum duration of usability of the sample solution can be determined.
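A minimal sketch of this check follows, assuming hypothetical assay results at successive time points, an assumed system-precision RSD of 0.8 percent, and the interpretation that the stability RSD may exceed the system-precision RSD by at most 20 percent.

```python
import numpy as np

# Hypothetical system-stability data: replicate assays of one sample
# solution at successive time points (illustrative values, % label claim).
assay_over_time = np.array([100.1, 99.8, 100.3, 99.5, 99.0, 98.7])

system_precision_rsd = 0.8          # %RSD from system precision (assumed)
limit = 1.2 * system_precision_rsd  # may exceed system precision by <= 20%

rsd = assay_over_time.std(ddof=1) / assay_over_time.mean() * 100
print(f"Stability RSD = {rsd:.2f}% (limit {limit:.2f}%):",
      "solution stable" if rsd <= limit
      else "determine maximum usable duration")
```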