S Mullins

Why we need sampling theory

Without representative sampling, measurement uncertainty is compromised. Here, we present the current debate between Sampling Theory and Measurement Uncertainty.


The purpose of sampling is to extract a representative amount of material from a "lot" (the sampling target). Clearly, sampling can, and must, only be optimized before analysis. A non-representative sampling process will always deliver an invalid aliquot for analytical uncertainty characterization.


A specific sampling process may or may not be representative. If the sampling is not representative, we are left only with undefined pieces of material, of reduced mass and without provenance (called "specimens" in sampling theory), that are not really worth analyzing. Only representative aliquots reduce the uncertainty of the entire sampling-and-analysis process to the desired minimum, and only the corresponding estimates of analytical uncertainty are valid. The "correctness" of sampling (which we define below) and representativeness are essential properties of the sampling process.


We summarize Sampling Theory versus Measurement Uncertainty in brief.


Current measurement uncertainty (MU) approaches do not sufficiently take into account all sources that affect the measurement process, in particular the impact of sampling errors. All sampling steps prior to analysis (from primary sample extraction, through laboratory mass reduction and handling, that is, subsampling, splitting and sample preparation, to the final extraction of the analytical test portion) play an important, often dominant, role in the total uncertainty budget; if they are not included, the validity of measurement uncertainty estimates is critically compromised.


Most sampling errors are not included in the current MU framework, among them the incorrect sampling errors, which are defined only in sampling theory. If sampling errors are not adequately reduced or eliminated entirely, all measurement uncertainty estimates are subject to an uncontrollable and inestimable sampling bias. Unlike statistical bias, sampling bias is not constant, so it cannot be removed by conventional bias correction. Sampling theory explains why all sources of sampling bias must be competently eliminated, or sufficiently reduced (in a fully documentable manner), for measurement uncertainty estimates to be valid, and it provides all the theoretical and practical countermeasures needed for the task.


Sampling theory must be brought in before traditional measurement uncertainty estimation, to provide a representative analytical aliquot; otherwise, a given measurement uncertainty estimate does not meet its own metrological intentions.


Sampling Theory (TOS) has been established over the last 60 years as the only theoretical framework that:

  1. deals comprehensively with sampling

  2. defines representativeness

  3. defines material heterogeneity

  4. promotes all practical approaches necessary to achieve the required representative test portion.


The starting point of any measurement process is the primary lot. All lots are characterized by significant material heterogeneity, a concept that only sampling theory fully recognizes and defines, crucially subdividing it into constitutional heterogeneity and distributional (spatial) heterogeneity. The concept of heterogeneity (and its multiple manifestations) is introduced and discussed in detail in the relevant literature. The entire pathway from "lot to analytical aliquot" is complex (and in some respects counterintuitive, owing to specific manifestations of heterogeneity) and is subject to many types of uncertainty contributions beyond the analysis itself.


The main thrust of our argument is that if the test portion is not representative, in other words, if all sampling error effects have not been reduced or, where possible, eliminated, then all estimates of measurement uncertainty are compromised. It does not have to be this way.


The impact of heterogeneity


Properly grasping the phenomenon of heterogeneity, its influence on sampling correctness and, most importantly, how heterogeneity can be counteracted in the sampling process requires some background. Here, we present a minimum of sampling theory principles to give an appreciation of the shortcomings inherent in current measurement uncertainty approaches.


It should be obvious that modeling every manifestation of heterogeneity with the fixed concepts of systematic and random variability is too simplistic to cover the almost infinite variety of real-world material and lot heterogeneity. This point is argued and illustrated in great detail in the sampling theory literature, where it has also been argued that random (grab) sampling will always fail in this context; composite sampling is the only way forward.


For well-mixed materials (those that appear "homogeneous" to the naked eye), notions of simple random sampling have often been taken to support the statistical assumption of systematic and random variability components. However, such materials constitute only a very small proportion of cases, with special characteristics, so they cannot be used to justify the same approach for significantly heterogeneous materials.


The theoretical analysis of the phenomenon of heterogeneity shows that the total heterogeneity of any lot material must be discriminated into two complementary parts: constitutional heterogeneity and distributional heterogeneity. Constitutional heterogeneity arises from the chemical and/or physical differences between the individual "constituent units" of the lot (particles, grains, nuggets), generically called "fragments"; each fragment can exhibit any analyte concentration between 0 and 100 percent. When a lot is sampled using single-increment procedures (grab sampling), the lot's constitutional heterogeneity (CHL) manifests itself as a fundamental lack of representativeness of the sampling. The effect of this fundamental sampling error (FSE), which is inevitable in grab sampling, is the most fundamental principle of sampling theory. CHL increases as the compositional differences between fragments increase; it can only be reduced by comminution, usually crushing.
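To make the effect of constitutional heterogeneity concrete, here is a small Monte Carlo sketch (our own illustrative toy model, not part of the original argument; the particle counts and grades are invented). Even with perfectly unbiased random selection of particles, repeated small grab samples from a "nuggety" lot scatter widely around the true grade, and that scatter, the fundamental sampling error effect, shrinks only as the sampled mass grows:

```python
import random

random.seed(1)

# Toy lot: 100,000 particles; ~2% are high-grade "nuggets" (80% analyte),
# the rest are near-barren matrix (0.1% analyte). Illustrative numbers only.
LOT = [0.80 if random.random() < 0.02 else 0.001 for _ in range(100_000)]
TRUE_GRADE = sum(LOT) / len(LOT)

def grab_sample(n_particles: int) -> float:
    """Estimate the lot grade from a single random grab of n particles."""
    picked = random.sample(LOT, n_particles)
    return sum(picked) / n_particles

def spread(n_particles: int, trials: int = 500) -> float:
    """Standard deviation of repeated grab-sample estimates (the FSE effect)."""
    estimates = [grab_sample(n_particles) for _ in range(trials)]
    mean = sum(estimates) / trials
    return (sum((e - mean) ** 2 for e in estimates) / trials) ** 0.5

small = spread(50)     # tiny grab sample
large = spread(5_000)  # 100x more particles in the sample

print(f"true grade ~ {TRUE_GRADE:.4f}")
print(f"sd of 50-particle grabs:   {small:.4f}")
print(f"sd of 5000-particle grabs: {large:.4f}")
```

Note that the selection here is already perfectly random and unbiased; the residual scatter is due purely to the compositional differences between fragments, which is exactly why CHL can only be reduced by comminution (more, smaller fragments per unit mass).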


Batch distributional heterogeneity, on the other hand, reflects the irregular spatial distribution of constituents at all scales between the volume of the sampling tool and the entire batch. It is caused by the inherent tendency of particles to clump locally (grouping), as well as more generally throughout the batch (segregation, stratification), or a combination of the two, as exemplified by the disconcerting diversity of material heterogeneity in nature, science, technology and industry. Batch distributional heterogeneity can only be reduced by mixing and/or by the informed use of composite sampling with a tool that allows a large number of increments (4, 8, 11-13).
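The benefit of composite sampling on a segregated lot can be sketched numerically (again an invented toy model: a one-dimensional lot whose concentration trends from one end to the other). A single contiguous grab is hostage to wherever it happens to land, whereas many small increments spread over the whole lot, with the same total mass, average the spatial trend out:

```python
import random

random.seed(7)

# Toy segregated lot: concentration trends linearly from 1% at one end
# to 5% at the other (distributional heterogeneity). Illustrative only.
N = 10_000
LOT = [0.01 + 0.04 * i / (N - 1) for i in range(N)]
TRUE_MEAN = sum(LOT) / N

def single_grab(size: int) -> float:
    """One contiguous grab at a random position: hostage to segregation."""
    start = random.randrange(N - size)
    chunk = LOT[start:start + size]
    return sum(chunk) / size

def composite(n_increments: int, inc_size: int) -> float:
    """Many small increments spread evenly over the lot (same total mass)."""
    stride = N // n_increments
    total, count = 0.0, 0
    for k in range(n_increments):
        start = k * stride + random.randrange(stride - inc_size)
        total += sum(LOT[start:start + inc_size])
        count += inc_size
    return total / count

def sd(fn, trials=400):
    xs = [fn() for _ in range(trials)]
    m = sum(xs) / trials
    return (sum((x - m) ** 2 for x in xs) / trials) ** 0.5

sd_grab = sd(lambda: single_grab(400))   # one 400-unit grab
sd_comp = sd(lambda: composite(40, 10))  # 40 x 10-unit increments, same mass

print(f"true mean:            {TRUE_MEAN:.4f}")
print(f"sd, single grab:      {sd_grab:.4f}")
print(f"sd, 40-increment mix: {sd_comp:.4f}")
```

The contrast is deliberately extreme, but the mechanism is general: increments distributed over the whole lot sample the spatial heterogeneity itself, which no amount of extra mass in one location can do.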


It is rarely possible to force-mix an entire primary lot. Therefore, if sampled lots have high distributional heterogeneity, there is a very high probability of significant primary sampling errors: the fundamental sampling error plus a grouping and segregation error. That is, of course, unless close attention is paid to the full complement of sampling theory principles.


The good news is that, once recognized, the principles of sampling theory are applicable to all stages and operations from lot scale to laboratory: representative sampling is scale invariant. Lots come in all shapes, forms and sizes, spanning at least eight orders of magnitude, from microgram aliquots to million-ton natural or industrial system lots.


It is essential to keep in mind that batch distributional heterogeneity is not a fixed, permanent property of the batch. The effects of the grouping and segregation error cannot be estimated reliably, because spatial heterogeneity varies in both space and time as batches are handled, transported, loaded and unloaded, and so on. Batch distributional heterogeneity can be deliberately reduced by forced mixing, but it can also be altered unintentionally, for example by transportation, materials handling, or even agitation on the laboratory bench.


An essential idea of sampling theory is that it is futile to estimate batch distributional heterogeneity based on assumptions of constancy. Instead, sampling theory focuses on practical countermeasures that reduce the grouping and segregation error as much as possible (the goal is complete elimination) as an integral part of the sampling and subsampling process. In practice, it is rarely possible to eliminate the effects of this error completely, but they can always be brought under sufficient quantitative control.


It can be shown that most materials in science, technology and industry are not composed of many identical units; on the contrary, the irregularity of batch distributional heterogeneity is overwhelming. Total batch heterogeneity, especially distributional heterogeneity, is simply too irregular and erratic to be captured by traditional statistical approaches that rely on random/systematic variability alone. In fact, this problem constitutes the main difference between sampling theory and measurement uncertainty.


Representative sampling in practice


The focus of sampling theory is not on "the sample", but exclusively on the sampling process that produces the sample, a subtle distinction with very important consequences. Without a specific qualification of the sampling process, it is impossible to determine whether a particular sample is representative. Vague references to "representative samples" without lot provenance and a fully described, understood and documented sampling process are an exercise in futility. We repeat: a sampling process is either representative or it is not; the adjective admits no degrees.


The main requirement in this context is sampling correctness, meaning the elimination of all bias-generating errors, called "incorrect sampling errors" (ISE) in sampling theory. Once this requirement is met (using only correct sampling procedures and equipment), the primary goal of sampling theory is to ensure that all parts of the lot have an equal probability of being selected and extracted as part of an increment. This is the "fundamental sampling principle", without which all possibility of representativeness is lost; it underlies all other issues in sampling theory.
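One standard textbook device that honours this equal-probability requirement (our illustration, not a procedure prescribed by the article) is systematic increment selection with a random start: every unit in the lot then has exactly the same chance of ending up in the sample, which a fixed, convenience-chosen starting point would not guarantee:

```python
import random
from collections import Counter

def systematic_select(n_units: int, n_increments: int) -> list:
    """Pick n_increments unit indices with a random start and a fixed stride.
    Every unit has identical selection probability n_increments / n_units."""
    stride = n_units // n_increments  # assume it divides evenly for this sketch
    start = random.randrange(stride)  # the random start is the key step
    return [start + k * stride for k in range(n_increments)]

# Empirical check: over many repetitions, each of 100 units is selected
# about equally often (10 increments -> selection probability 0.1 each).
random.seed(3)
counts = Counter()
TRIALS = 20_000
for _ in range(TRIALS):
    counts.update(systematic_select(100, 10))

freqs = [counts[i] / TRIALS for i in range(100)]
print(f"min/max selection frequency: {min(freqs):.3f} / {max(freqs):.3f}")
```

Equal selection probability is only the selection half of the principle; correct physical extraction of each increment (no preferential loss of fines, no tool-induced exclusion) is the other half, and both are needed for sampling correctness.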


A unified approach for the valid estimation of the total sampling error (TSE) plus the total analytical error (TAE) was recently presented in the form of a new international standard, DS 3077 "Representative sampling - horizontal standard". This standard treats the overall sampling scenario comprehensively, especially how heterogeneity interacts with the sampling process and what to do about it.


A call for integration


We have shown that measurement uncertainty is not a complete, universal or guaranteed approach to estimating a valid total measurement uncertainty if it does not include all relevant sampling effects. Some 60 years of theoretical development and practical application of sampling theory have shown that sampling, sample handling and sample preparation contribute significantly larger uncertainty components (the total sampling error) than the analysis (measurement) itself (the total analytical error), typically by a factor of 10 to 50, depending on the heterogeneity of the specific lot and the sampling process in question. The specific, very special deviations from this scenario cannot be generalized.
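A quick quadrature calculation (our own arithmetic, using the low end of the 10 to 50 factor quoted above) shows why this ratio matters so much: when independent uncertainty components combine as the root sum of squares, a sampling contribution ten times the analytical one leaves analysis responsible for about 1% of the variance budget, so refining the analytical step alone barely moves the total:

```python
# Independent uncertainty components combine in quadrature:
#   u_total = sqrt(u_samp**2 + u_anal**2)
# Illustrative numbers: analytical sd of 1 unit, sampling sd 10x larger
# (the low end of the 10-50x range quoted in the text).
u_anal = 1.0
u_samp = 10.0 * u_anal

u_total = (u_samp ** 2 + u_anal ** 2) ** 0.5

# Fraction of the total variance budget due to each source:
samp_share = u_samp ** 2 / u_total ** 2
anal_share = u_anal ** 2 / u_total ** 2

print(f"u_total  = {u_total:.3f}")  # barely larger than u_samp alone
print(f"sampling = {samp_share:.1%} of the variance budget")
print(f"analysis = {anal_share:.1%} of the variance budget")
```

Halving u_anal in this scenario would reduce u_total by well under 1%, while halving u_samp would cut it almost in half, which is the quantitative core of the argument for addressing sampling first.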


While the Guide to the Expression of Uncertainty in Measurement (GUM) focuses only on analytical errors, the Eurachem guide points out some of the possible sources of sampling uncertainty but does not give sampling operators the means needed to take informed, appropriate measures. Only sampling theory specifies which types of error can and should be eliminated (the incorrect sampling errors) and which cannot, but should instead be minimized (the "correct sampling errors").


Surprisingly, the measurement uncertainty literature neither conveys a sufficient understanding of the concept of heterogeneity nor recognizes the necessary practical sampling competence, both of which are essential for representative sampling. Incorrect sampling errors do not exist in the measurement uncertainty framework, and the grouping and segregation error is considered only incompletely, leaving the analytical errors and the fundamental sampling error as the only major sources of measurement uncertainty recognized there.


Furthermore, the critically important sampling bias is considered only to a limited extent, and only under assumptions of statistical constancy. The defining characteristic of sampling bias, however, is precisely its violation of constancy, which follows directly from a realistic understanding of heterogeneity. The only scientifically acceptable way to deal with sampling bias is to eliminate it, as has been a main tenet of sampling theory since its creation in 1950 by its founder, Pierre Gy. Here lies the main distinction between sampling theory and measurement uncertainty: sampling theory treats sampling bias as a reflection of incorrect sampling errors interacting with the specific heterogeneity encountered, while measurement uncertainty recognizes only a (constant) statistical bias resulting from systematic effects attributable to protocols or personnel.


Measurement uncertainty is a top-down approach that depends on an assumed framework of constant random and systematic effects (incorrectly so), meaning that individual sources of uncertainty, such as the grouping and segregation error and the incorrect sampling errors, are not subject to separate identification, estimation or appropriate action (elimination/reduction). In fact, the full complement of sampling-related sources of uncertainty, the total sampling error, is almost completely disregarded in the measurement uncertainty approach. It is simply assumed that the analytical sample, which ends up as the test portion, has been extracted and mass-reduced in a representative manner. If this assumption does not hold, the estimated uncertainty of the analyte concentration is not valid; it will inevitably be too small, to an unknown but significant (and variable) degree. The very different perspectives offered by measurement uncertainty and sampling theory urgently need clarification and reconciliation.


To that end, we call for an integration of sampling theory into the measurement uncertainty approach. To avoid underestimating active sources of uncertainty, the effects of all three stages (sample extraction, sample preparation and analysis) must be integrated, which can in fact be done uniformly; there is no need to change the current measurement uncertainty framework other than to recognize the fundamental role played by sampling theory. The sampling and sample preparation branches should be implemented in every measurement uncertainty framework as contributions to sampling uncertainty and, logically, should be treated before the traditional measurement uncertainty components. In this scheme, sampling theory provides a valid, representative analytical aliquot as the basis for a now-valid measurement uncertainty estimate.
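The three-branch scheme described above can be sketched as a simple combined budget (all component values are invented for illustration; in practice each branch would be estimated empirically, for example by replication experiments at each stage):

```python
import math

# Hypothetical integrated budget: sampling, preparation and analysis
# treated as separate branches and combined in quadrature.
# The standard-uncertainty values below are invented for illustration.
components = {
    "primary sampling":   5.0,   # u, in concentration units
    "sample preparation": 2.0,
    "analysis":           0.5,
}

u_total = math.sqrt(sum(u ** 2 for u in components.values()))

for name, u in components.items():
    share = u ** 2 / u_total ** 2  # fraction of the variance budget
    print(f"{name:18s} u = {u:4.1f}  ({share:.1%} of variance)")
print(f"combined u_total = {u_total:.2f}")
```

Making the branches explicit like this is exactly what forces the sampling and preparation stages to be identified, estimated and acted upon, rather than silently absorbed into an assumed-representative test portion.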


End of the debate?


Hopefully, we have explained the critical deficiencies in measurement uncertainty and shown that sampling theory must be introduced as an essential first part of the complete measurement process framework, taking charge of, and responsibility for, all sampling issues at all scales, throughout the entire lot-to-aliquot process. We want to see a much-needed reconciliation between two frameworks that, for too long, have been the subject of considerable antagonism. Indeed, the "debate" between the sampling theory and measurement uncertainty communities has at times been unnecessarily harsh, though we would add that such hostilities have been one-sided (always directed at sampling theory).


Disputes aside, we hope that our efforts will be seen as a call for the constructive integration of sampling theory and measurement uncertainty, and we look forward to feedback from the broader community.
