Assay Validation Protocols

Validation studies are performed to ensure that an analytical method is accurate, specific, reproducible, and robust over the specified range in which a target analyte will be analyzed. Assay validation provides assurance of reliability during normal use, and is sometimes referred to as "the process of providing documented evidence that the method does what it is intended to do."

Assay Validation Levels and Steps

1. Assay optimization (pre-validation): Assay optimization and pre-validation experiments determine how a variety of matrix and sample elements, as well as assay conditions, affect assay performance and assay parameters. These data, together with scientific judgment, establish the acceptance criteria for validation of the assay. It is important to establish acceptance criteria before running the validation protocol.

2. Assay qualification: An experimental protocol demonstrating that a method will provide meaningful data for the specific conditions, matrix, and samples for which the procedure is intended. Assay qualification may not require validation of the precision and sensitivity of the method, but only verification that the protocol is suitable under real-world conditions (usually specificity).

3. Assay validation: Comprehensive experiments that evaluate and document the quantitative performance of an assay, including sensitivity, specificity, accuracy, precision, detection limit, range, and limits of quantification. Full assay validation includes interassay and interlaboratory evaluation of assay repeatability and robustness.

Test Parameter Definitions

Specificity is the ability to unequivocally assess the target pathogen or analyte in the presence of components that can be expected to be present (3). The specificity of an assay is its ability to distinguish the target from similar organisms or analytes, and from other matrix components whose interference could bias the test value positively or negatively.

Accuracy is the agreement between the value found and an accepted reference value (3). This requires a "gold" standard or method; in the absence of one, comparison with reference laboratories can substitute.

Precision is the variability in the data from replicate determinations of the same homogeneous sample under normal test conditions (3). For enzyme assays, the precision is usually <10%; 20 to 50% for in vivo and cell-based assays; and >300% for virus titer assays. Precision includes within-assay variability, repeatability (intraday variability), and reproducibility (day-to-day variability). Precision can be established without the availability of a "gold" standard, since it reflects the dispersion of the data rather than the accuracy (correctness) of the reported result.
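
To make these measures concrete, here is a minimal sketch (with purely illustrative numbers) of how within-run and inter-assay coefficients of variation might be computed from replicate determinations of one homogeneous sample:

```python
import numpy as np

# Hypothetical replicate readings from three independent runs of the same
# homogeneous sample (illustrative numbers, not real assay data).
runs = np.array([
    [98.2, 101.5, 99.8, 100.4],   # run 1
    [97.6, 100.9, 102.1, 99.3],   # run 2
    [101.0, 98.8, 100.2, 99.9],   # run 3
])

def cv_percent(values):
    """Coefficient of variation: sample SD as a percentage of the mean."""
    return 100.0 * values.std(ddof=1) / values.mean()

# Within-run (repeatability) CV for each run, then inter-assay CV of run means.
within_run_cv = [cv_percent(run) for run in runs]
inter_assay_cv = cv_percent(runs.mean(axis=1))

print("within-run CV%:", [round(cv, 2) for cv in within_run_cv])
print("inter-assay CV%:", round(inter_assay_cv, 2))
# An enzyme assay might be held to CV < 10%, per the text above.
```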

The detection limit is the lowest amount of analyte that can be detected, but not necessarily quantified as an exact value (3). The detection limit is a low concentration that is statistically distinguishable from background or negative control, but not precise or accurate enough to be quantified.

The limits of quantification are the lowest and highest concentrations of an analyte in a sample that can be quantitatively determined with adequate precision and accuracy (3). The lower limit of quantification is often defined by an arbitrary cutoff, such as a signal-to-noise ratio of 10:1, or a value equal to the mean of the negative control plus 5 times the standard deviation of the negative control values.
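
As an illustration, the sketch below applies the cutoff described above (mean of the negative control plus 5 standard deviations) to hypothetical blank readings, alongside the common mean-plus-3-SD convention for the detection limit; all values are invented for the example:

```python
import numpy as np

# Hypothetical negative-control (blank) signals; illustrative values only.
blanks = np.array([0.042, 0.038, 0.051, 0.045, 0.040, 0.047])

mean_blank = blanks.mean()
sd_blank = blanks.std(ddof=1)

# Common statistical cutoffs (conventions vary by field and guideline):
lod_signal  = mean_blank + 3 * sd_blank   # detectable, not quantifiable
lloq_signal = mean_blank + 5 * sd_blank   # cutoff used in the text above

print(f"LOD signal cutoff:  {lod_signal:.4f}")
print(f"LLOQ signal cutoff: {lloq_signal:.4f}")
# These are signal thresholds; convert to concentration via the calibration curve.
```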

Linearity is the ability of the assay to return values that are directly proportional to the concentration of the target pathogen or analyte in the sample. Mathematical transformations of the data to promote linearity may be allowed if there is scientific evidence that the transformation is appropriate for the method.

The range is the analyte concentrations or assay values between the lower and upper limits of quantification. Within the range of the test, linearity, accuracy, and precision are acceptable.

Ruggedness is the reproducibility of the assay under a variety of standard but variable test conditions. Variable conditions can include different machines, operators, and reagent lots. Ruggedness provides an estimate of experimental reproducibility with unavoidable error.

Robustness is a measure of the assay's capacity to remain unaffected by small but deliberate changes in test conditions. Robustness provides an indication of the assay's ability to perform under normal conditions of use.

1. Accuracy

The closeness of agreement between the value found and a value that is accepted either as a conventional true value or as an accepted reference value.

Note: When measuring accuracy, it is important to augment placebo preparations with varying amounts of the active ingredient. If a placebo cannot be obtained, the analyte must be added (spiked) to the sample at different levels. In both cases, acceptable recovery must be demonstrated.
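
A minimal sketch of such a recovery check, using invented spike levels and measured values (the 98 to 102% window in the comment is only an example of an acceptance criterion, not a prescribed one):

```python
# Hypothetical spike-recovery check: placebo augmented with known amounts of
# analyte, then measured. All values are illustrative only.
spiked_levels = [50.0, 100.0, 150.0]   # amount added (e.g., % of target)
measured      = [49.1, 101.8, 147.6]   # amount found by the assay

for added, found in zip(spiked_levels, measured):
    recovery = 100.0 * found / added
    print(f"added {added:6.1f} -> found {found:6.1f} ({recovery:5.1f}% recovery)")
# A typical (assay-dependent) acceptance window might be 98 to 102% recovery.
```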

2. Precision

The precision of an analytical procedure expresses the closeness of agreement (degree of dispersion) between a series of measurements obtained from multiple samplings of the same homogeneous sample under the prescribed conditions. Precision can be considered at three levels: repeatability, intermediate precision, and reproducibility.

  • Repeatability expresses precision under the same operating conditions over a short interval of time. Repeatability is also called within-run precision.
  • Intermediate precision expresses within-laboratory variations: different days, different analysts, different equipment, etc.
  • Reproducibility expresses the precision between laboratories (collaborative studies are usually applied to the standardization of the methodology).

Precision should be investigated using homogeneous and authentic samples (full scale). However, if it is not possible to obtain a full-scale sample, it can be investigated using a pilot or bench-scale sample or sample solution. The precision of an analytical procedure is generally expressed as the variance, standard deviation, or coefficient of variation of a series of measurements.
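
As an illustration of these levels, the sketch below uses a one-way analysis-of-variance decomposition (a common approach, though not the only one) to separate repeatability (within-day) from between-day variability in hypothetical data, combining them into an intermediate-precision estimate:

```python
import numpy as np

# Hypothetical measurements of one homogeneous sample on three different days
# (illustrative values). Rows = days, columns = replicates within a day.
data = np.array([
    [100.1,  99.6, 100.4],
    [101.2, 100.8, 101.5],
    [ 99.0,  99.5,  98.8],
])

n_days, n_reps = data.shape
grand_mean = data.mean()

# One-way ANOVA mean squares: within-day (repeatability) vs. between-day.
ms_within = sum(((row - row.mean()) ** 2).sum() for row in data) / (n_days * (n_reps - 1))
ms_between = n_reps * ((data.mean(axis=1) - grand_mean) ** 2).sum() / (n_days - 1)

# Variance components; the between-day component is clipped at zero.
var_repeat = ms_within
var_between_day = max(0.0, (ms_between - ms_within) / n_reps)
var_intermediate = var_repeat + var_between_day  # intermediate precision

print(f"repeatability SD:          {np.sqrt(var_repeat):.3f}")
print(f"intermediate precision SD: {np.sqrt(var_intermediate):.3f}")
```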

3. Detection limit

The limit of detection for an individual analytical procedure is the lowest amount of analyte in a sample that can be detected but not necessarily quantified as an exact value.

4. Limit of quantification

The limit of quantification for an individual analytical procedure is the lowest amount of analyte in a sample that can be quantitatively determined with adequate precision and accuracy. The limit of quantification is a parameter of quantitative assays for low levels of compounds in sample matrices and is used particularly for the determination of impurities and/or degradation products.

5. Linearity

The linearity of an analytical procedure is its ability (within a given range) to obtain test results that are directly proportional to the concentration (amount) of analyte in the sample.

Note: Measurements must be made using clean standard preparations to demonstrate detector linearity, while the linearity of the method must be determined simultaneously during the precision study. The classical linearity acceptance criteria require:

  • The correlation coefficient of the linear regression line must be not less than a value close to 1, and
  • The y-intercept must not differ significantly from zero.

When performing linear regression analysis, it is important not to force the line through the origin (0,0) in the calculation. Doing so can significantly skew the best-fit slope across the physical range of use.
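
The sketch below illustrates the point with invented calibration data: an ordinary least-squares fit with a free intercept is compared against a slope forced through the origin:

```python
import numpy as np

# Hypothetical calibration data: concentrations vs. detector response
# (illustrative values with a small positive intercept).
conc   = np.array([10.0, 25.0, 50.0, 75.0, 100.0])
signal = np.array([ 5.3, 12.6, 24.9, 37.4,  49.6])

# Ordinary least squares with a free intercept (the recommended fit).
slope, intercept = np.polyfit(conc, signal, 1)
r = np.corrcoef(conc, signal)[0, 1]

# Forcing the line through the origin: slope = sum(x*y) / sum(x*x).
slope_forced = (conc * signal).sum() / (conc * conc).sum()

print(f"free fit:   slope={slope:.4f}, intercept={intercept:.4f}, r={r:.5f}")
print(f"forced fit: slope={slope_forced:.4f} (intercept fixed at 0)")
# If the intercept is genuinely non-zero, the forced slope is biased across
# the working range, which is why (0,0) should not be imposed.
```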

6. Range

The range of an analytical procedure is the interval between the upper and lower concentrations (amounts) of the analyte in the sample (including these concentrations) for which it has been demonstrated that the procedure has an appropriate level of precision, accuracy, and linearity.

7. Robustness

The robustness of an analytical procedure is a measure of its ability not to be affected by small, but deliberate variations in method parameters and provides an indication of its reliability during normal use.

Note: Ideally, robustness should be explored during test method development. By far the most efficient way to do this is with a designed experiment. Such experimental designs could include:

  • A Plackett-Burman matrix approach to investigate first-order effects, or
  • A 2^k factorial design that will provide information about first-order (main) and higher-order (interaction) effects.

In carrying out such a design, one must first identify the variables in the method that can be expected to influence the result. For example, consider an HPLC assay that uses an ion-pairing reagent. One could investigate: sample sonication or mixing time; mobile phase organic solvent constituents; mobile phase pH; column temperature; injection volume; flow rate; modifier concentration; ion-pairing reagent concentration; etc. Through this type of development study, the variables with the greatest effects on the results can be determined in a minimum number of experiments, as sketched below. Validation of the actual method will then ensure that the final ranges chosen are robust.
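
As a sketch of the second design above, the following code builds a small 2^3 full factorial for three hypothetical HPLC factors and estimates each main effect from invented responses:

```python
import itertools
import numpy as np

# Sketch of a 2^3 full factorial robustness screen for a hypothetical HPLC
# method: each factor is varied between a low (-1) and high (+1) setting.
factors = ["mobile_phase_pH", "column_temp", "flow_rate"]

# All 2^k combinations of coded levels (8 runs for k = 3).
design = np.array(list(itertools.product([-1, 1], repeat=len(factors))))

# Hypothetical measured responses (e.g., % recovery) for the 8 runs,
# in the same order as `design`; illustrative numbers only.
response = np.array([99.1, 99.4, 98.7, 99.0, 100.2, 100.6, 99.8, 100.1])

# Main effect of each factor = mean response at +1 minus mean at -1.
for j, name in enumerate(factors):
    effect = response[design[:, j] == 1].mean() - response[design[:, j] == -1].mean()
    print(f"{name:16s} main effect: {effect:+.2f}")
# Small effects suggest the method is robust to that factor over the range tested.
```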
