The GAMP 5 guidance on a risk-based approach to GxP computerized systems laid out a framework for validating laboratory informatics systems, among others. This framework has been the gold standard for risk-based assessment since its publication in 2008. The International Society for Pharmaceutical Engineering (ISPE) followed with an updated Good Practice Guide: A Risk-Based Approach to GxP Compliant Laboratory Computerized Systems in 2012, but since then, no significant developments have occurred. That doesn’t mean the field of validation has stood still, however. In fact, computer software assurance (CSA) is becoming the preferred approach, in part because it makes better use of risk-based assessments.
In this blog post, we’ll put the pieces together to see how risk-based assessments assure laboratories that instruments and systems are validated for their intended use.
Perhaps the most important aspect of the risk-based approach is properly assessing the risk at the outset. Organizations know their tolerance for, and exposure to, risk better than any overarching regulation could convey. The industry setting is often the first node in a risk assessment decision process: regulated industries, where unintended consequences can affect people, tend to be more risk-averse. Risk assessment is a highly individualized process, and validation teams commonly use a risk matrix to measure risk objectively.
As the risk matrix shows, using a rubric to assign a level of risk to various tasks lets you assess both the need for validation and the appropriate level of effort. Organizations can use risk matrices to assess purchases and physical or environmental conditions, too. The matrix gauges the impact of an event based on its likelihood and its severity, and each organization can define the parameters of the assigned risk classes according to its own needs. For example, an instrument in an R&D lab that verifies raw ingredients from a supplier would carry a much lower risk rating than a similar instrument used in a QC lab to check a final product before it is sent to consumers.
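To make the rubric idea concrete, here is a minimal sketch of a likelihood-by-severity risk matrix in Python. The scales, thresholds, and class names are illustrative assumptions, not values from any guidance; as noted above, each organization defines its own parameters.

```python
# Illustrative risk-matrix sketch. The scales and thresholds below are
# hypothetical -- each organization defines its own risk-class parameters.

LIKELIHOOD = {"rare": 1, "possible": 2, "likely": 3}
SEVERITY = {"minor": 1, "moderate": 2, "critical": 3}

def risk_class(likelihood: str, severity: str) -> str:
    """Combine likelihood and severity scores into a risk class."""
    score = LIKELIHOOD[likelihood] * SEVERITY[severity]
    if score >= 6:
        return "high"      # e.g., full validation effort required
    if score >= 3:
        return "medium"    # e.g., targeted risk-based testing
    return "low"           # e.g., vendor documentation may suffice

# A QC instrument releasing final product vs. an R&D instrument:
print(risk_class("likely", "critical"))   # high
print(risk_class("rare", "moderate"))     # low
```

The same function could score purchases or environmental conditions; only the rubric entries change.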
The other important aspect of understanding the risk landscape is the role of testing, and giving appropriate weight to test activities beyond the usual IQ/OQ/PQ testing. These activities can include, but are not limited to, supplier testing before sale or installation, or debugging specific pieces of code. An initial installation in a pharmaceutical QC lab, for instance, carries a higher risk level than debugging an interface you wrote to let your CDS talk to your ELN after it stops working following a software update.
This expanded scope of testing is at the heart of the move toward computer software assurance. Each kind of validation testing aligns with specific authors or testers. The vendor or manufacturer often provides the IQ test scripts, and all you need to do is verify that they are complete. OQ testing from the vendor or manufacturer may also be sufficient, unless customization of the out-of-the-box functionality is required. Because PQ testing depends on your intended use, it is the responsibility of the end user.
Once you’ve accounted for all the testing and filled out the risk matrix, you can put together a comprehensive risk assessment by looking at the instrument or system itself. Common sense says that a single chromatograph needs less robust testing than a heavily customized LIMS integrated with SAP and an MES across several sites, and the validation world is starting to agree.
In GAMP 5, instrument and system categories are less important than they were in previous versions. The original thinking behind the GAMP categories was that the more complex an instrument or system was, in terms of hardware or software, the greater its associated risk.
Since the publication of GAMP 5, however, how an instrument or system fits into the end user’s overall business model matters more than how complex it is, and an instrument’s risk can span more than one category. Suppose your QC lab tests finished pharmaceutical tablets. In that case, validating your analytical equipment will demand more scrutiny and care than validating the same instrument in a paint manufacturer’s QC lab. The skilled staff in your labs are your best resource for understanding the risk in each setting.
Once you have appropriately assessed the risk, you can develop your validation process. The V model is the accepted framework for the steps that follow the risk assessment. It can be applied in small pieces for a simple instrument validation, or as the full model for a complex system, with the result of validating that instrument or system for its intended use. It is an iterative model in which each item on the left side of the V traces to a corresponding item on the right side, so that every specification has an associated test.
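The left-to-right tracing of the V model can be expressed as a simple traceability check: every specification must map to at least one test. The sketch below uses hypothetical specification and test identifiers to illustrate the idea; it is not from any guidance document.

```python
# Minimal V-model traceability sketch: every specification on the left side
# of the V must be covered by at least one test on the right side.
# Specification and test IDs below are hypothetical examples.

specs = {
    "URS-01": "System shall record sample IDs",
    "FS-02": "Sample ID field accepts barcode input",
    "DS-03": "Barcode parser validates the checksum",
}

tests = {
    "PQ-01": ["URS-01"],   # performance qualification covers the user requirement
    "OQ-02": ["FS-02"],    # operational qualification covers the functional spec
    "UT-03": ["DS-03"],    # unit test covers the design spec
}

# Collect every specification referenced by some test, then flag gaps.
covered = {spec for refs in tests.values() for spec in refs}
untested = set(specs) - covered
assert not untested, f"Specifications without an associated test: {untested}"
print("All specifications trace to a test")
```

In practice this mapping lives in a requirements traceability matrix rather than code, but the invariant being checked is the same.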
The risk-based assessment process necessarily carries more weight in regulated industries, in which the outcomes can have real consequences for people. Regardless of your industry, CSols has put together several resources to help you properly assess risk. Some are found in the links throughout this blog post, and the rest are listed below.
Have you run into a particular challenge with a risk-based assessment that you could use an expert’s help with? Comment below and tell us about it.