Uncertainties in Environmental Modelling and Consequences for Policy Making


Data is a descriptive word again, and uncertainty is not among the top fifty words, although it is expected to be an important concept in scenario studies. The other three topics again refer to the main areas of sustainability science. Scenario approaches are more clearly associated with the Hydrology and Climate Change studies, while the Ecosystems studies emphasise prediction.

This analysis of a large body of academic literature shows a prediction-orientated modelling and a strong emphasis on empirical data, aligning with the representativeness viewpoint. Such approaches are very common even in the studies that involve scenario analyses, while uncertainty is scarcely mentioned. Still, in the scenario-oriented studies, the emphasis on data and prediction is not as strong as it is in the general modelling studies.

We complement the general information derived from text mining of academic papers with the current validation viewpoints and approaches among modelling practitioners. We employ a short online survey circulated among researchers and policy analysts in academia, policy organisations and industry. Following a clarification about the modelling context, the survey contained a series of Likert scale questions about validation in general, and in the context of scenario generation in particular. Below we discuss the responses to the Likert scale questions, and their relation to the background factors, if there is a statistically significant dependence.
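One common way to test whether Likert responses depend on a background factor is a chi-square test of independence; the sketch below illustrates the idea on an invented table of response counts (the paper does not specify which test it used, so the factor, categories, and counts here are purely hypothetical).

```python
# Sketch: testing whether Likert responses depend on a background factor
# (e.g. modelling role), using a chi-square test of independence.
# The counts below are hypothetical, for illustration only.
from scipy.stats import chi2_contingency

# Rows: developer, user; columns: disagree, neutral, agree
observed = [[10, 15, 35],
            [20, 18, 12]]

chi2, p, dof, expected = chi2_contingency(observed)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4f}")
if p < 0.05:
    print("Response distribution depends on the background factor.")
```

With these invented counts the dependence is significant; in the survey analysis, the same test would be repeated for each background factor and question.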

Validation in the general modelling context: The survey questions on model validation primarily address the representativeness and usefulness views on validation. The representativeness view is based on positivism, yet a purely positivist validation based on observational data is argued to be impossible 17,18. Reasons given to support this argument include, first, that multiple models can generate the same output, as the equifinality principle implies; therefore, there is no uniquely true model that fits the empirical data 20. Secondly, there is no guarantee that a model can successfully project the future just because it can replicate the past: modelling assumptions such as scaling up, averaging, and reducing the resolution level can cause deviations in future projections, even when the model replicates the empirical data at a given spatial and temporal scale. Furthermore, an objective validation cannot be expected when both the assessment of a fit between the model and empirical data, and the measurement of the data itself, are inference-laden. Alternative formal validation approaches have been developed to deal with such problems, for instance the generalised likelihood uncertainty estimation (GLUE) method 46,47, which addresses equifinality and accepts multiple models as valid based on statistical inference.
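A minimal sketch of the equifinality problem that GLUE responds to: a toy model whose output depends only on the product of two parameters, so several distinct parameter sets reproduce the observations equally well and would all be retained as "behavioural" under a GLUE-style error threshold. The model, data, parameter grid, and threshold are invented for illustration, not taken from the GLUE literature.

```python
# GLUE-style sketch of equifinality: a toy model whose output depends only
# on the product a*b, so many (a, b) pairs fit the same data equally well.
import itertools

def model(a, b, x):
    return a * b * x          # toy input-output relation

observations = [(1.0, 6.0), (2.0, 12.0), (3.0, 18.0)]  # (x, y) pairs with a*b = 6

def rmse(a, b):
    errs = [(model(a, b, x) - y) ** 2 for x, y in observations]
    return (sum(errs) / len(errs)) ** 0.5

# GLUE keeps every "behavioural" parameter set whose error is below a threshold
candidates = itertools.product([1.0, 2.0, 3.0, 6.0], repeat=2)
behavioural = [(a, b) for a, b in candidates if rmse(a, b) < 1e-9]
print(behavioural)   # (1,6), (2,3), (3,2), (6,1) all fit the data perfectly
```

No amount of calibration data can distinguish these four parameter sets, which is why GLUE treats them all as valid rather than selecting a single "true" model.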

However, the use of such approaches has remained limited to producing uncertainty intervals around the average model output. The impossibility of an accurate representation and projection leads to the usefulness view of validation, where usefulness can be defined as how well a model fits a given purpose. Many scholars object to using models for prediction in the first place, and argue that they should rather be used as heuristics to enhance understanding and guide decision-making 44,49. Policy problems require a comprehensive critique of the scientific enquiry rather than a purely rationalist one, for instance to include practical and ethical concerns. Therefore, using models as heuristics can provide a broader, multidimensional view on policy problems.

It can assist the formulation of alternative policies that can deal with various and often unexpected situations. It also allows experimenting with different value systems of stakeholders, especially in participatory settings with citizens, where models are used as metaphors to identify implicit norms affecting a policy problem. Therefore, this way of using models has potential implications for consensus building.

A model can be used in several other ways, from condensing data to training users for a particular behaviour. Therefore, a model that provides benefits for any such purpose would be considered useful. These fundamental issues have been discussed for decades, yet they remain the main topics of debate, especially for models that cannot be limited to physical systems and a natural-science perspective due to the involvement of human and decision-making factors. Therefore, the dichotomy between representativeness and usefulness, the role of empirical data in validation, and the view of decision-makers on validity are the key dimensions we consider while investigating the existing viewpoints on validation.

The responses indicate that a strict dichotomy does not exist among practitioners. This tendency to agree with both statements does not differ across experience levels or organisational backgrounds. As for the modelling roles, model developers, and respondents who identified themselves as both developers and users, tend to agree that usefulness is the most important validity criterion.

This finding is counterintuitive: model users, whether in research or decision-making contexts, would be expected to value how well the model serves its purpose and thus to favour usefulness over representation accuracy, unless their purpose is an accurate representation.


This asymmetry between the expectations of modellers and model users has been noted earlier, and ascribed to the lack of information non-modellers have about the limitations of models, hence their higher demands for representation accuracy.

Survey responses to the key issues in model validation: the length of the bars refers to the fraction of responses given to each question on the Likert scale, from "Strongly disagree" to "Strongly agree".

Regarding the role of historical data in validation, there is no consensus among the respondents. About one third of the respondents agree, and another one third disagree, that validity cannot be linked to the replication of the past since multiple models can achieve this (Question 4), indicating an absence of consensus about the equifinality principle in validation. These viewpoints about the role of data do not depend on the background of the respondents. Overall, these findings indicate that objections to relying on data-oriented validation approaches, due to the impossibility of a purely positivist validation, have not been widely reflected in practice.

Concerning the view of decision-makers, practitioners think that data-driven validation is demanded by decision-makers, and they acknowledge the call for clarifying uncertainties and assumptions, which can be considered best practice in contemporary modelling. The acknowledgement of the communication of uncertainties and assumptions depends on experience level. This finding can be interpreted as follows: a longer engagement in modelling and a longer interaction with decision-makers help to acknowledge the necessity of communicating uncertainties and assumptions, regardless of the frustrations this may cause.

As for the high support among less experienced respondents, it can be attributed to fresh training in modelling best practice. Employment conditions, which are beyond the scope of this paper, may play a role, too. Validation in the scenario generation context: when models are used for scenario generation, the focus shifts from the model to the broader analytical context. Validation in exploratory modelling is therefore suggested to consider the reasonability of modelling assumptions, the strategy of sampling to generate the scenarios, and the logic of connecting experimental results to policy recommendations. Yet, when new models are developed for exploring multiple plausible futures, the reported validation techniques are similar to those used in a general modelling context.

Comparison of model output for a single baseline scenario to historical data remains the most commonly used technique, while extreme-conditions tests, cross-validation and reality checks are also employed 54,55. Sensitivity analysis 57 is a commonly used validation technique in general, even in participatory settings 7.


It investigates how robust the model output is against the uncertainty in inputs, and identifies the factors to which the model is most sensitive. In the modelling cycle, these factors are suggested to be recalibrated for higher accuracy if such sensitivity is not expected in real life 15. Using models to explore multiple plausible futures raises the question of whether they should be validated differently from models used for prediction or projection. This is the first survey question asked to respondents in the scenario generation context.
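The simplest form of such an analysis is the one-at-a-time approach: perturb each input while holding the others fixed and record the relative change in output. The sketch below illustrates this on an invented Kaya-style toy relation; the model, inputs, and perturbation size are assumptions for illustration, not a method reported in the surveyed studies.

```python
# One-at-a-time sensitivity sketch: perturb each input by +10% and
# record the relative change in output.
def emissions(population, intensity, activity):
    return population * activity * intensity   # toy Kaya-style relation

base = {"population": 8.0, "intensity": 0.5, "activity": 10.0}
base_out = emissions(**base)

sensitivity = {}
for name in base:
    perturbed = dict(base)
    perturbed[name] *= 1.10                    # +10% perturbation
    sensitivity[name] = (emissions(**perturbed) - base_out) / base_out

print(sensitivity)  # each factor: ~0.10, since the model is linear in each input
```

Variance-based methods (e.g. Sobol indices) generalise this idea to interacting inputs, but the one-at-a-time ranking is often the first screening step.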

Responses to this question (Question 1 in the figure) show no strong statistical evidence of dependence on respondent characteristics. Still, experience level plays a potentially important role: respondents with medium experience tend to disagree with this statement more.

Survey responses to validation in the scenario context.

The figure shows the responses given to the survey questions about the key issues in model validation in the scenario generation context. These questions cover whether validation should be different in this context than in the general modelling context (Question 1), whether validation should be based on a baseline scenario or the scenario ensemble (Questions 2 and 4), and whether the model output or structure is more important in validation (Questions 3 and 5). The length of the bars refers to the fraction of responses given to each question on the Likert scale from "strongly disagree" to "strongly agree".

There is no consensus among the respondents on these questions. Still, a large majority of the respondents disagree or strongly disagree that model output is more important than structure in the validation of scenario-oriented models.


Still, most respondents are neutral about the use of scenario ensembles in validation, which may be due to unfamiliarity with, or ambiguity of, the concept. The last two questions compare the importance of model output and structure for validation (Questions 3 and 5). Seventy-nine percent of respondents disagree with the relative importance of model output; it can therefore be said that the respondents do not favour an output-focused validation over a structure-focused one when the model purpose is scenario generation. Most respondents thus prefer to focus on the structure rather than the output in the validation of scenario-oriented models, while a smaller yet considerable group of respondents do not report such a strong preference.

This study investigated the viewpoints on validation in the general modelling context and in the particular scenario exploration context. Three key dimensions were considered in the general modelling context: the historical dichotomy between representativeness and usefulness, the role of empirical data in validation, and the view of decision-makers on validity. In the scenario exploration context, whether validation should be performed differently, the relative importance of model structure and output, and whether validation should be based on a baseline scenario or a scenario ensemble were the three aspects investigated.

Many scholars have argued that data-oriented validation is not sufficient. Conceptual aspects and participatory approaches should be integrated into validation to enhance the public reliability of models. Our text-mining results do not indicate a wide implementation of this view, since the prevalent concepts in academic publications centre around data. Data plays a prominent role in validation practice in all main areas of sustainability science, including hydrology, ecosystems, emissions and energy.

This emphasis on data is not specific to academic publications. Practitioners report that data comparison is one of the most commonly used techniques, and a match between the model output and data is a reliable indicator of its predictive power.


Quantifying models with reliable data and checking the plausibility of the output with respect to past data is surely an indispensable component of validation. However, seeing such a data match as a reliable indicator of predictive power can be interpreted as a low acceptance of an integrated validation viewpoint. Data-oriented validation is linked with the representativeness view on validity. Still, practitioners value the usefulness of a model as much as its representation of reality. Even so, the usefulness view does not enjoy as much support as one might expect in validation practice.
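Such data-oriented checks are typically reported as goodness-of-fit statistics. The sketch below computes two of the most common ones, root-mean-square error and Nash-Sutcliffe efficiency (widely used in hydrology); the observed and simulated series are invented for illustration.

```python
# Sketch of data-oriented validation metrics: RMSE and Nash-Sutcliffe
# efficiency (NSE). NSE = 1 is a perfect fit; NSE < 0 means the model
# performs worse than simply predicting the observed mean.
def rmse(sim, obs):
    return (sum((s - o) ** 2 for s, o in zip(sim, obs)) / len(obs)) ** 0.5

def nse(sim, obs):
    mean_obs = sum(obs) / len(obs)
    num = sum((s - o) ** 2 for s, o in zip(sim, obs))
    den = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - num / den

observed  = [2.0, 3.0, 5.0, 4.0, 6.0]   # e.g. observed streamflow
simulated = [2.2, 2.8, 4.9, 4.3, 5.8]   # model output for the same period
print(f"RMSE = {rmse(simulated, observed):.3f}, NSE = {nse(simulated, observed):.3f}")
```

Note that a high NSE on historical data says nothing by itself about structural adequacy or fitness for purpose, which is exactly the limitation the integrated validation viewpoint emphasises.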

The reason could be the relative difficulty of defining and measuring usefulness compared to representativeness, the prevailing perception of models as descriptions of reality rather than representations, or the absence of resources to engage experts, stakeholders and decision-makers in validation practice. Still, we echo the calls for integrated validation approaches 27 that evaluate the conceptualisation, structure and behaviour of a model with respect to its predefined purpose, based not only on in-house testing but also on peer reviews by experts and stakeholders.

We also stress the importance of publishing such validation practices explicitly to enhance visibility. According to the perceptions of the survey respondents, who are mostly scientists, decision-makers expect a model to replicate historical data, to be comprehensive and detailed, and its assumptions and uncertainties to be communicated clearly. Therefore, the emphasis on replicating the past and representing reality accurately is attributed also to the demands of decision-makers.

This finding is intriguing, since it is the scientists who are often claimed to pursue rigour through data-intensive, comprehensive and detailed models while ignoring the social and institutional complexities of decision problems. A closer look at the science-policy interface in future research can therefore illuminate whether these findings stem from a perception gap between modellers and decision-makers.

Our survey findings showed that the views on many aspects of model validity and validation are diverse, despite some convergence, for instance, on usefulness as a validity criterion. The team members of a modelling project, whether from the science or policy side, cannot be expected to have a default mutual understanding about validity and mutual expectations from the model.

Therefore, we reiterate the importance of establishing a common understanding about what is expected from the model and how it is to be validated. In the scenario generation context, using models to explore multiple plausible futures instead of predicting a best-estimate future can still be considered a niche, since the academic literature vastly emphasises prediction, and most practitioners think that models can be used for prediction purposes.

This view might be reflected in the survey results, since more respondents remained neutral, possibly due to unfamiliarity or ambiguity, about the questions in the scenario context than about those in the general context. Furthermore, the data-oriented validation approaches used for prediction-focused models are also commonly used in the scenario generation context. Still, model output is not considered more important than model structure in this context, as the survey responses indicate.

The relative importance of model structure received more agreement. Therefore, model structure can be a point of departure for future validation studies in the scenario generation context. Future research can also focus on the development of validation frameworks for scenario-oriented models. Tests like historical data comparisons or reality checks ensure that the model successfully generates one plausible future, or that the model is structurally reasonable, so they should surely remain in such a validation framework. However, as discussed earlier, the strategy of sampling to generate the scenarios and the logic of the analytical framework that links experimental results to policy recommendations are important in validating exploratory models, too.


Whether they are generated by a model or not, scenarios are evaluated on attributes like plausibility 59, consistency 60,61 and diversity. Therefore, model validation should include an appreciation of these attributes when the scenario ensemble is generated by an exploratory model. In other words, evaluating an exploratory model with respect to its purpose, i.e. scenario generation, implies appraising the generated ensemble in terms of these attributes. In practice, the relation of these attributes to the sampling strategy and the analytical framework can be elaborately defined, and formal techniques can then be developed to evaluate a model against these criteria.
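To make the idea of such formal techniques concrete, the sketch below shows one possible formalization of ensemble diversity: the mean pairwise distance between scenarios described by normalized indicator values. This metric and the indicator set are assumptions for illustration, not a standard from the scenario literature.

```python
# One possible formalization of scenario-ensemble "diversity": the mean
# pairwise Euclidean distance between scenarios in normalized indicator space.
from itertools import combinations

def distance(s1, s2):
    return sum((a - b) ** 2 for a, b in zip(s1, s2)) ** 0.5

def diversity(ensemble):
    pairs = list(combinations(ensemble, 2))
    return sum(distance(a, b) for a, b in pairs) / len(pairs)

# Each scenario: (normalized GDP growth, emissions, temperature rise)
clustered = [(0.5, 0.5, 0.5), (0.52, 0.5, 0.48), (0.5, 0.49, 0.5)]
spread    = [(0.1, 0.9, 0.2), (0.9, 0.1, 0.8), (0.5, 0.5, 0.5)]
print(diversity(clustered), diversity(spread))  # the spread ensemble is more diverse
```

A validation framework could then require, for instance, that the sampling strategy yields an ensemble diversity above some justified threshold.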

We examine a broad academic literature with text-mining tools to understand the concepts and relationships governing model validation practice. This text-mining approach is based on the author-specified keywords and the frequency of words in the abstracts of the publications, as explained in detail below. In other words, we aim to understand validation practice based on the words used in the publications.

This text-mining technique allows covering a large number and a wide variety of publications in our analysis, providing an impression of the prevalent concepts and major clusters of work. Yet, being based on the co-occurrence of words, it is not expected to reveal the exact validation approaches and methods used in the literature. Our analysis focuses on two datasets retrieved from the Scopus database, with the motivation to investigate whether there are any differences in the validation approaches when models are used specifically for scenario analysis.

Dataset I contains 10, publications mainly from the environmental and decision sciences that address the subject of model validation. Dataset II is a subset of the first, containing the publications that include scenario as a keyword besides model validation. To identify the main concepts and themes in the literature, we employed a text-mining technique called topic modelling, specifically latent Dirichlet allocation (LDA). In an LDA implementation, the user specifies the number of topics (bags), and the algorithm then probabilistically allocates each document across these bags to a certain extent.

In other words, the topics are not necessarily mutually exclusive in terms of the documents and words they include. Resulting from this process, LDA forms document-topic and topic-word pairs based on the words included in each document. The results in this paper discuss the topic contents based on the topic-word pairs.

The topics are named based on their most defining, i.e. most frequent, words. For instance, if a topic has water, soil and crop among the 50 most frequent words visualised in a word cloud, we name it the Agriculture and Hydrology topic. The topics have many words in common, yet such common words occur in them with different probabilities and with different sets of neighbouring words.

These tables underlie Figs. For the preparation of the data, we removed all general stopwords from the abstracts prior to the text-mining analysis, as well as words that do not have any significant meaning in this particular case, such as model, validation, research, analysis, etc. We also stemmed all words, meaning that words with the same root, for instance calibrate and calibration, are treated as the same. To disclose individual views on the topics widely discussed in the validation literature, we employed a short online survey circulated among researchers and policy analysts in academia, policy organisations and industry.
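This preprocessing step can be sketched as below. The stopword list and the naive suffix stripper are simplified stand-ins for the real resources used in such studies (a full stopword lexicon and a proper stemmer such as Porter's); they are illustrative assumptions, not the study's actual configuration.

```python
# Simplified preprocessing sketch: stopword removal plus a naive suffix
# stripper standing in for a real stemmer (e.g. Porter).
STOPWORDS = {"the", "a", "of", "and", "model", "validation", "research", "analysis"}
SUFFIXES = ("ation", "ate", "ing", "ed", "s")   # checked longest-first

def stem(word):
    for suffix in SUFFIXES:
        if word.endswith(suffix) and len(word) > len(suffix) + 2:
            return word[: -len(suffix)]
    return word

def preprocess(text):
    tokens = [w for w in text.lower().split() if w not in STOPWORDS]
    return [stem(w) for w in tokens]

print(preprocess("Calibration and validation of the model"))  # → ['calibr']
```

With this scheme, "calibrate" and "calibration" both reduce to the same stem, so they are counted as one word in the topic model, as described above.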

The survey contained three groups of questions about validation in general and about scenario generation in particular. The first group of questions was about the organisational background, modelling experience, role and area of the respondents. For the organisational background and the modelling area, respondents could choose multiple options. The second group of questions contained a set of statements (Likert scale questions) reflecting the published viewpoints and issues, and asked to what extent the respondents agree or disagree with these statements. This group also involved a question about the validation techniques used in the studies the respondents have been involved in.

The last group of questions was about the validation of models specifically used for scenario generation. Modelling experience asked for how many years the respondents have been involved in modelling or model-based studies.



For the modelling role, the respondents were asked whether they consider themselves model developers (with hands-on model building activities), model users (who use pre-existing models in research and analysis), both of these roles, or neither. For the modelling area, the respondents could select multiple options from a pool of research areas, from energy to population dynamics. The survey questions were prepared by the authors based on the literature discussed in the first two sections of this paper.

The factors important in survey design, such as a common understanding, recalling the questions, and specifying what is to be rated and the continuum of rating 65, were taken into account.

Modelers may make these judgments informally, based upon their own views, or they can formally elicit the opinions of experts or stakeholders; for example, through surveys 90,91 or participatory modeling exercises.

By definition, models are meant to be approximate representations of reality that help render a complex situation workable. These additional ingredients make models partly independent from theory. The language of mathematics coherently links the ingredients but also introduces new constraints into the modeling process, namely analytical tractability and numerical solvability. Various values may inform expert judgment in modeling decisions. However, recent literature in the philosophy of science calls this distinction into question.

The ethical dimension of uncertainty in IAMs refers to the ethically relevant judgments that modelers apply explicitly or implicitly in their modeling choices when dealing with scientific and ethical uncertainty. Procedural ethics describe what is commonly understood by research ethics, that is, the widely accepted norms of good scientific conduct. In the context of integrated assessment modeling, procedural ethical values are relevant simply because IAMs are tools of scientific research.

By contrast, intrinsic ethics refer to ethical values and assumptions that become embedded and hardwired into analytical methods and results, whether explicitly and deliberately or implicitly. Some modelers advocate the use of observed market interest rates, while others suggest a discussion about intergenerational justice as an appropriate starting point (see the section Setting of Parameter Values).

More often than not, the ethical value judgments manifested in modeling choices remain implicit; they are subtly implied by judgments that are often perceived by modelers to be of a purely epistemic nature, that is, concerned with only the factual content of the model representation. When model results inform policymaking, intrinsic ethical issues in the model may have important ethical consequences for the wider society, outside of academia.

Such extrinsic ethics in scientific research refer to the societal effects of research outputs. Returning to the example of the social discount rate: the rate has a great influence on SCC estimates. The epistemic consequence of a decision that turns out to be wrong is that an incorrect piece of information is incorporated into the body of scientific knowledge, which ultimately creates a positive learning opportunity.

By contrast, the extrinsic ethical consequence of the decision may be the implementation of suboptimal policies with possibly significant societal consequences. Such pressure can be exercised, for example, through funding arrangements.

The systemic perspective on power highlights the influence of wider social forces on the modeling process. In this review, due to limited space, we focus on two interlinked systems of power relations that IAMs are embedded in: the systemic power relations within the scientific community and the systemic power relations that exist between the two social spheres of science and politics. Scholars in STS have developed theoretical frameworks that explain the role of politics and social relations both within and outside the scientific community in shaping the standards of good science and subsequent practice.


Regarding the relationship between science and politics, a common view in the contemporary STS literature is that the generation of scientific knowledge, policymaking, and social order all exist in a coproductive relationship with each other. These issues then become subject to the implicit and socially contingent judgments of experts rather than a wider political discussion among the numerous and heterogenous stakeholders of climate change.

Ultimately, modeling decisions may narrow prematurely the content of policy deliberations. This exclusion raises questions regarding the democratic legitimacy of expert assessments in policymaking. In this section, we discuss examples of coupled epistemic—ethical issues from the IAM literature. Krueger et al.

Although presented in sequential order, in practice the choices made at each point are understood to be interdependent and iterative. The perceptual model underlying IAMs is the structured and qualitative understanding of the climate change problem, including its causes, processes, and consequences (adapted from Beven). This understanding includes a particular framing of the problem that the model is intended to address. The market failure framing also makes it plausible to ask very focused research questions about the optimal carbon price that is expected to fix the market distortion.

This price is set by the intersection of marginal climate damages with the marginal cost of abatement. From an ethical perspective, the market failure framing, which effectively turns the atmosphere into a commodity, assumes high substitutability between human goods, such as technology and capital, and nonhuman goods, including biodiversity, ecosystems, and landscapes. Furthermore, pricing carbon emissions directly attributes causal responsibility to the emitter. Hence, the logic of this approach does not justify a distinction between subsistence emissions, caused by the poor to achieve a decent standard of living, and luxury emissions caused by the rich, as suggested by some climate ethicists.
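The intersection logic can be sketched numerically: find the emissions level at which marginal damages equal marginal abatement costs, and read off the price there. Both curves below are invented linear toys, not estimates from any actual IAM.

```python
# Illustrative sketch of the optimal-carbon-price logic: the price sits where
# marginal damages equal marginal abatement costs (toy linear curves).
def marginal_damage(q):           # damages rise with remaining emissions q
    return 10.0 + 2.0 * q

def marginal_abatement_cost(q):   # abatement is cheaper when emissions are high
    return 100.0 - 4.0 * q

# Bisection on the gap between the two curves
lo, hi = 0.0, 25.0
for _ in range(60):
    mid = (lo + hi) / 2
    if marginal_damage(mid) < marginal_abatement_cost(mid):
        lo = mid
    else:
        hi = mid

q_star = (lo + hi) / 2
price = marginal_damage(q_star)
print(f"optimal emissions = {q_star:.2f}, carbon price = {price:.2f}")
```

Everything ethically contentious is hidden inside the two curves: the damage function already embeds aggregation, discounting, and substitutability assumptions before this "purely technical" intersection is computed.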

At this stage in the modeling process, the perceptual model is translated into a formal model structure, that is, a set of mathematical equations that formalize the relationships between the elements and processes of the modeled system. Specifying a model structure for IAMs involves choices about which elements and processes of the climate and socioeconomic systems should be included.

In particular, modelers may need to weigh model completeness against model reliability when deciding whether to include poorly understood aspects of climate change that are nevertheless expected to have significant impacts on model results. For example, it is generally considered likely that climate change will cause or worsen violent conflicts, although the empirical basis for quantifying the causal mechanism between climate change and conflicts is still small.

Excluding climate change effects on violent conflicts from the model prevents any future victims of such conflicts who will likely not have contributed to causing climate change from being recognized in the policy process and, ceteris paribus , likely results in an undervaluation of climate change damages and possibly misleading policy recommendations.

Schienke et al. compare two IAMs. One is the model used by Nordhaus; the other is used in a more recent study by McInerney and Keller. Both are optimizing models, meaning that they maximize social welfare over time. While the Nordhaus model runs without additional constraints, McInerney and Keller introduce an additional constraint on the optimization.

Their model requires that the probability of a particular event, namely the irreversible collapse of the North Atlantic meridional overturning circulation, must never exceed a predetermined limit. Both models adopt a utilitarian objective function, which generally ignores disparities in welfare distribution between the rich and the poor. The threshold constraint that is introduced in McInerney and Keller's model implies ethical judgments about 1 a specific outcome that should be avoided partly independent of economic costs and 2 an acceptable probability of this outcome occurring anyway.
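The structure of such a threshold constraint can be sketched with a toy optimization: choose an abatement level that maximizes welfare subject to keeping the collapse probability below a limit. All functions and numbers below are invented for illustration; this is the general shape of a chance constraint, not McInerney and Keller's actual model.

```python
# Toy sketch of a threshold ("chance") constraint: maximize welfare subject
# to keeping the probability of a collapse below a predetermined limit.
def welfare(abatement):
    return 100.0 - abatement ** 2 / 10.0         # abatement is costly

def collapse_probability(abatement):
    return max(0.0, 0.5 - 0.02 * abatement)      # more abatement, less risk

LIMIT = 0.10                                     # acceptable collapse probability

candidates = [a / 10.0 for a in range(0, 501)]   # abatement levels in [0, 50]
feasible = [a for a in candidates if collapse_probability(a) <= LIMIT]
best = max(feasible, key=welfare)
print(best, welfare(best), collapse_probability(best))
```

Both `LIMIT` and the identification of the event to be avoided are ethical inputs to the optimization, which is exactly the point made in the text: they shape the answer even before any economics is computed.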

These decisions would be ethically controversial even if the science of irreversible system thresholds were settled. Specifying a model structure also involves decisions about the level of regional disaggregation, that is, the number of geographical regions in the model, what Morgan and Henrion 48 call a domain parameter. Other domain parameters include model time horizon and time increment.

While domain parameters are often neglected in model uncertainty analysis, their impacts on model results may be considerable. In particular, some IAMs incorporate equity weights to make the cost of climate change more comparable across poor and rich regions. Therefore, the degree of regional disaggregation in the model and the methodology that is used to define model regions affect the fairness of the resulting weighting scheme.
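A common form of equity weighting in this literature scales regional damages by the ratio of average to regional consumption raised to the elasticity of marginal utility, so that a dollar of damage counts for more in poorer regions. The sketch below uses that form with invented numbers; the specific functional form and parameter value are assumptions for illustration.

```python
# Sketch of equity weighting: damages in region r are weighted by
# (average consumption / regional consumption) ** eta.
eta = 1.0   # consumption elasticity of marginal utility (assumed value)

regions = {          # per-capita consumption and damages, same monetary units
    "rich": {"consumption": 40000.0, "damages": 100.0},
    "poor": {"consumption": 4000.0,  "damages": 100.0},
}

avg_c = sum(r["consumption"] for r in regions.values()) / len(regions)

weighted = {
    name: (avg_c / r["consumption"]) ** eta * r["damages"]
    for name, r in regions.items()
}
print(weighted)   # equal dollar damages, but the poor region's count ~10x more
```

Because the weights depend on regional average consumption, merging or splitting model regions changes the weights themselves, which is why the degree of disaggregation affects the fairness of the scheme.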

Dennig et al. The estimation of parameter values involves choices, too. In the IAM literature to date, the most explicit discussion of coupled epistemic-ethical choices concerns one specific model parameter: the social discount rate used to calculate the present value of future consumption losses due to climate change. Economists treat climate change mitigation efforts as investments in future consumption, and discounting acknowledges that consumption may be valued differently depending on when it occurs. In the context of climate change, discounting has ethical importance; it reflects the weight that the current generation assigns to the welfare of future generations in relation to its own, considering that current economic activity is causing future climate impacts.

A common technique for determining the social discount rate in IAMs is the Ramsey optimal growth model, which defines the rate as a function of the pure rate of time preference, the consumption elasticity of marginal utility, and the projected growth rate of consumption. The pure rate of time preference is essentially a measure of impatience: it measures the loss of utility experienced simply because consumption occurs in the future rather than today.

The consumption elasticity describes how quickly the marginal utility of consumption declines with increasing consumption. It essentially measures the value of a dollar's worth of consumption to the poor versus the rich. Economists commonly pursue either a descriptive or a prescriptive approach to define the social discount rate.
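The Ramsey rule just described is r = rho + eta * g (discount rate = pure time preference + elasticity times consumption growth). The sketch below shows how strongly the resulting present value of a distant damage depends on this choice; the parameter values are illustrative, loosely echoing "prescriptive" (very low rho) versus "descriptive" (higher rho) positions, and are not taken from any specific study.

```python
# Ramsey-rule sketch: r = rho + eta * g, and the present value of a damage
# occurring T years ahead under two illustrative parameterizations.
def ramsey_rate(rho, eta, g):
    return rho + eta * g

def present_value(damage, rate, years):
    return damage / (1.0 + rate) ** years

damage, years = 1_000_000.0, 100                          # $1M damage, 100 years out
prescriptive = ramsey_rate(rho=0.001, eta=1.0, g=0.013)   # ~1.4%
descriptive  = ramsey_rate(rho=0.015, eta=2.0, g=0.013)   # ~4.1%

print(f"PV at {prescriptive:.1%}: {present_value(damage, prescriptive, years):,.0f}")
print(f"PV at {descriptive:.1%}: {present_value(damage, descriptive, years):,.0f}")
```

Over a century, the seemingly small difference in rates changes the present value by an order of magnitude, which is why the discount rate dominates SCC estimates.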


The former, as promoted for example by Arrow et al., derives the discount rate from observed market behavior; the intention is to treat the rate of time preference as an uncertain empirical quantity rather than a subject of value diversity. There is no consensus in the economics literature on which approach is preferable. However, several philosophers argue that, independent of the method chosen, the choice is never ethically neutral. Baum illustrates the inevitable ethical judgments involved in observing, measuring, and aggregating society's discounting behavior.

Fischhoff argues that any use of market prices in policy assessments always implies the ethically relevant assumption that the observed market is functioning properly, that is, that all relevant externalities are priced in. IAMs can theoretically deliver various types of model output, but as mentioned earlier (see Box 1), the most commonly presented results are optimal global abatement targets, carbon tax rates, and SCC estimates.

In the face of scientific and ethical uncertainty, these output choices have both epistemic and ethical importance because of the information they conceal. The SCC, for example, is a single monetary impact metric that is aggregated from already highly aggregated estimates of climate damages across time and regions. Aggregation obscures the boundaries between groups and complicates the determination of who exactly is bearing the costs of climate change.

This also complicates the direct comparison of outputs from different studies because the variances in internal model processes and assumptions remain invisible. Having identified examples of concrete coupled epistemic—ethical choices in the development of IAMs, we now turn to identifying concrete examples of the political dimension of uncertainty management in IAMs as reported in the even more dispersed literature.

These have also been identified by scholars in STS as possible points for politics to enter the scientific process. However, the scholarly discussion of the politicization of IAMs, and of uncertainty in IAMs in particular, is small, and the evidence mostly anecdotal. In the following, we review some empirical studies from the somewhat more systematic research on the politics surrounding the development of climate models.

Although we are not suggesting that the two types of models are equivalent, these studies may provide entry points for future research on IAMs. Krueck and Borchers perform an institutional comparison between two climate modeling centers in Europe to investigate how modelers deal with the challenge of generating policy-relevant knowledge about a politically charged issue in the face of scientific uncertainty and value diversity.