UNCERTAINTY INTERVALS

An uncertainty interval (UI), or plausible range, is a range of values that likely contains the true value of something unknown. It is used when data is limited or unavailable. Rather than relying only on statistics, a UI is built using expert judgment, logical reasoning, existing knowledge, and where available, empirical values from related conditions or species that serve as reference points for reasoning. The UI helps express uncertainty clearly while allowing for adjustments as more information becomes available.

A 90% UI provides a lower and upper bound within which the true value is expected to fall 90% of the time. Construction typically begins with the widest defensible range, which is then narrowed by eliminating values that available knowledge renders implausible. For example, a 90% UI for the prevalence of type 2 diabetes in adults over 40 might initially be bounded at 1% and 90%. Values above 70% can be ruled out because such a prevalence would constitute a widely recognized public health crisis; values below 5% can be ruled out because diabetes is known to be common. General knowledge and published literature might reasonably narrow the interval to 10-35%, a range that still reflects genuine uncertainty but excludes clearly implausible values.
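The narrowing process above can be sketched in code. This is an illustrative toy, not a statistical procedure: the helper name `narrow_interval` and the specific constraint numbers are assumptions taken from the diabetes example, where each constraint encodes one piece of knowledge that rules out part of the range.

```python
def narrow_interval(lower, upper, constraints):
    """Narrow an interval by intersecting it with (min, max) constraints,
    each representing one piece of knowledge that rules out values."""
    for lo, hi in constraints:
        lower, upper = max(lower, lo), min(upper, hi)
    if lower > upper:
        raise ValueError("constraints are mutually inconsistent")
    return lower, upper

# Start with the widest defensible range: 1% to 90% (as fractions).
constraints = [
    (0.0, 0.70),   # >70% would be a widely recognized crisis
    (0.05, 1.0),   # <5% conflicts with diabetes being common
    (0.10, 0.35),  # general knowledge and literature narrow further
]
lower, upper = narrow_interval(0.01, 0.90, constraints)
print(f"90% UI: {lower:.0%}-{upper:.0%}")  # prints "90% UI: 10%-35%"
```

The order of constraints does not matter here, since each one only intersects the running interval; the inconsistency check catches the case where two pieces of "knowledge" contradict each other outright.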

A range also carries an implicit assumption about where within its bounds the true value is most likely to fall. Stating this assumption explicitly, for example that the true value is treated as equally likely to fall anywhere within the bounds, is itself informative. It signals honestly that the interval reflects the limits of current knowledge rather than a well-characterized distribution, and it invites others to challenge or refine it.

When prevalence or other parameters must be constructed from reasoning rather than direct measurement, which is the normal situation for most welfare conditions in most farmed species, wide intervals are not a failure. They are an honest representation of the current state of knowledge, and they serve two important functions. First, they make assumptions visible and open to scrutiny, which is always preferable to qualitative labels like “common” or “of concern” that rest on the same uncertain evidence but cannot be examined, compared across analysts, or updated systematically. Second, they provide the foundation for sensitivity analysis: by testing whether welfare conclusions hold across the full range of plausible values, including the most and least favorable scenarios, it becomes possible to determine whether the missing data actually matters for the decision at hand. When conclusions are robust across the full range, the uncertainty in that parameter is not decision-relevant. When they are not, the analysis identifies precisely what needs to be measured, at what resolution, and why.
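The sensitivity-analysis function described above can be sketched as follows. Everything here is a labeled assumption: the toy burden score, the population size, the severity weight, and the decision threshold are invented for illustration and carry no real welfare data.

```python
def condition_burden(prevalence, population, severity_weight):
    """Toy burden score: number of affected animals weighted by severity.
    A hypothetical stand-in for whatever metric drives the decision."""
    return prevalence * population * severity_weight

POPULATION = 1_000_000   # assumed population size
SEVERITY = 0.4           # assumed severity weight on a 0-1 scale
THRESHOLD = 50_000       # assumed threshold above which the condition is a priority

ui = (0.10, 0.35)        # 90% UI for prevalence, from the earlier example

# Evaluate the conclusion at the least and most favorable ends of the UI.
verdicts = {p: condition_burden(p, POPULATION, SEVERITY) > THRESHOLD for p in ui}

if len(set(verdicts.values())) == 1:
    print("Conclusion holds across the UI; this uncertainty is not decision-relevant.")
else:
    print("Verdict flips within the UI; prevalence needs to be measured.")
```

With these particular numbers the verdict flips (a burden of about 40,000 at the lower bound versus 140,000 at the upper bound), which is precisely the outcome that tells an analyst the missing prevalence data matters for this decision.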

(To familiarize yourself with estimating 90% uncertainty intervals for unknown parameters, you can use online calibration tools, such as the one on this website.)