Frequently, when a process development team are putting together the upstream and downstream components of their process's control strategy, the last thing on their minds is the performance of the tests or assays they'll be measuring it by. Even if they do think about it, they may make the unhealthy assumption that they can simply rely on the data. Sometimes they'll be right; often they'll be wrong. When measuring the performance of their process, they need to think about the process variability combined with the variability of the method measuring it. If you're able to quantitate a method's noise, then it should be no more than a simple subtraction from the total variability measured to leave the process variance:
Process variance = Total variance – Assay variance
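To make that subtraction concrete, here is a minimal sketch in Python with entirely hypothetical numbers. The one point it demonstrates is that the subtraction happens on the variance scale, not the standard-deviation scale:

```python
import math

# Hypothetical estimates, expressed as standard deviations:
total_sd = 4.2   # SD of results measured across process batches (assumed)
assay_sd = 2.5   # SD of the assay alone, e.g. from a precision study (assumed)

# Variances add; standard deviations do not. So subtract on the
# variance scale, then convert back to an SD if one is wanted.
process_var = total_sd**2 - assay_sd**2   # 17.64 - 6.25 = 11.39
process_sd = math.sqrt(process_var)       # ~3.37

print(f"Process variance: {process_var:.2f} (process SD = {process_sd:.2f})")
```

Note that subtracting the standard deviations directly (4.2 - 2.5 = 1.7) would have understated the process variability by half.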
Earlier this year I was asked to speak at the Centre for Process Innovation (CPI) in the U.K. on the efficient use of data when building a process, and it was then that the above thoughts came to mind. Increasingly, process development scientists are using DoE approaches to bring better control over their processes, and consequently they will have a better understanding of the process's design and control spaces. However, a large part of that knowledge will have been built upon assay readouts. Within each analytical result will be 'noise', possibly noise that the process team haven't factored into their models. The danger is that the size of the design space may be overestimated, because the model will appear to show clear edges of failure (EoF); in reality, the EoF is likely to be blurred. This can result in the process's normal operating ranges (NOR) being set too close to the EoF while the team remains blissfully unaware. If process control can be kept tight, this might not be a problem; if the process isn't that easy to control, though, they could be teetering on the edge of failure.
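To see why the edge blurs, consider the hypothetical simulation below. The specification limit (100), the true batch values and the assay SD (2.0) are all assumptions for illustration; the sketch simply counts how often a noisy readout of a truly in-specification batch lands out of specification:

```python
import numpy as np

rng = np.random.default_rng(42)

spec_limit = 100.0   # assumed upper specification limit (the edge of failure)
assay_sd = 2.0       # assumed assay noise (SD of a single measurement)

# True batch values approaching the edge of failure from below.
for true_value in [90.0, 94.0, 96.0, 98.0, 99.0]:
    # Simulate many assay readouts of the same true value.
    measured = true_value + rng.normal(0.0, assay_sd, size=100_000)
    p_fail = np.mean(measured > spec_limit)
    print(f"true = {true_value:5.1f}  ->  P(measured OOS) = {p_fail:.1%}")
```

A batch whose true value is 98, apparently well inside the design space, still returns an out-of-specification result roughly 16% of the time. The edge of failure is statistical, not sharp, and an NOR drawn against the model's clean edge ignores that.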
When qualifying an assay in early development, a precision analysis should be performed. With an efficient, well-structured experimental design, the precision estimates can take in sources of variability such as operator, occasion and sample. This intermediate precision analysis not only provides a more realistic handle on the assay noise that combines with the process noise, but also gives insight into how best to run the assay in future. By generating variance component information, specific replication schemes can be constructed, each with a predicted precision output, i.e. a given combination of operators, occasions, samples and repeats will likely deliver a given %CV (a sketch of the idea follows below). When developing your process control strategy, this analytical knowledge can prove extremely valuable, so the process development team should be working closely with the analytical development team to get a grip on it. Some companies do, many don't…
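Here is a hedged sketch of that prediction. The variance components and the nested model (occasions within operators, repeats within occasions) are assumptions for illustration; in practice the components would come from a designed intermediate-precision study, e.g. a nested ANOVA or REML fit:

```python
import math

# Hypothetical variance components from a precision study (assumed values):
var_operator = 4.0    # between-operator variance
var_occasion = 2.25   # between-occasion variance, nested within operator
var_repeat   = 1.0    # repeatability (within-occasion) variance

grand_mean = 50.0     # assumed mean response, used to express SD as %CV

def predicted_cv(n_op: int, n_occ: int, n_rep: int) -> float:
    """Predicted %CV of the reportable result: the mean over n_op operators,
    n_occ occasions per operator and n_rep repeats per occasion, under a
    nested random-effects model."""
    var_mean = (var_operator / n_op
                + var_occasion / (n_op * n_occ)
                + var_repeat / (n_op * n_occ * n_rep))
    return 100.0 * math.sqrt(var_mean) / grand_mean

# Compare a few candidate replication schemes.
for n_op, n_occ, n_rep in [(1, 1, 1), (1, 1, 3), (2, 1, 2), (2, 2, 2)]:
    print(f"{n_op} operator(s), {n_occ} occasion(s), {n_rep} repeat(s)"
          f" -> predicted %CV = {predicted_cv(n_op, n_occ, n_rep):.1f}")
```

With these made-up components, piling on repeats barely moves the %CV because operator variance dominates; the scheme has to replicate across operators to buy precision. That is exactly the kind of insight the process team should be getting from their analytical colleagues. Please view the presentation I gave earlier this year.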