Welcome to the Multimodelling documentation¶
Introduction¶
Note
This documentation is a work in progress and far from finished. New sections will be added in the near future. Whenever we receive questions or feedback from end users, we try to update this documentation on the fly, so don't hesitate to contact us whenever you run into problems.
Multi-models (MMs), formed by coupling individual independent models, can serve as powerful instruments for analyzing complex real-world problems. Well-designed MMs can harness the capabilities of participating models, and save significant time by adapting and reusing existing models. However, creating MMs can be quite challenging. These challenges include technical aspects like managing data exchange and coordinating model runs, as well as conceptual aspects such as aligning different model scales and resolutions. Additionally, there are legal, ethical, and institutional challenges to consider, such as software licensing, intellectual property (IP) rights related to data and models, privacy and confidentiality concerns, and process coordination.
The MMviB project, an acronym of the Dutch "Naar een Nationale Multi-Model infrastructuur voor integrale Besluitvorming in de energietransitie" ("Towards a National Multi-Model infrastructure for integrated Decision-making in the energy transition"; see multi-model.nl), seeks to address these challenges within the context of the Dutch energy transition. Its primary goal is to create an MM infrastructure (MMI), currently in its Minimum Viable Product stage, that facilitates repeatable and verifiable interactions among existing models to derive valuable insights for decision-making in integrated energy systems. This MMI is being collaboratively designed and developed by a diverse and extensive community of practice including modellers, energy experts, decision-makers, and researchers. This collective effort involves eleven consortium partners who share a common commitment to advancing the field of integrated energy system decision-making.
This document is intended to communicate the vision, methodology, and preliminary results of our ongoing initiative, with the intention of fostering increased interest, research, and discourse on the subject of multi-modelling (referred to as MMing). Our aim is to cultivate a more robust and inclusive interdisciplinary community of multi-modellers (referred to as MMers), encompassing researchers and practitioners alike.
Scientific background¶
Multi-model ecosystem¶
The general idea of a multi-model ecosystem was introduced in the whitepaper “Principles, challenges and guidelines for a multi-model ecology” [1].
State of the art in multi-modelling¶
A Literature Review of Multi-modelling in Sustainability Transition
29 November 2023
Introduction¶
The urgency of today’s societal challenges, such as climate change and unsustainable resource consumption, calls for large-scale sustainability transitions. Sustainability transitions are interdisciplinary, often emerging from processes within complex socio-technical systems characterised by continually evolving relationships and interactions between technological and ecological factors, institutions and infrastructure. A successful energy transition, for example, necessitates public buy-in, careful planning across various levels of government, and cross-cutting coordination across the energy, buildings, transportation, and industrial sectors.
A complex socio-technical system is also multi-scalar in nature, as its elements have properties and processes that change quantitatively and qualitatively with scale. These scales may be spatial, temporal, administrative (e.g., institutional) or object related. A holistic understanding of how the behaviour of these characteristics change and how impacts of proposed policies cascade across domains and scales is essential for decision-making.
The multi-domain and multi-scalar nature of sustainability transitions results in complexity that makes human comprehension of the issues at hand an endeavour that pushes the limits of human cognition. This complexity demands the use of modelling and simulation (M&S) in supporting the analysis of such systems to aid effective decision-making. We take B. P. Zeigler, Muzy, and Kofman (2018)’s definition of a model as a set of instructions, rules, equations, or constraints that represent a real-world system, such that given an initial state setting, a model accepts input trajectories and generates corresponding output trajectories. M&S methods allow one to abstract a real-world system and approximate its behaviour in a controlled environment and within an experimental frame, facilitating a more holistic understanding of the system and thus what is required to answer a question or tackle a challenge.
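Zeigler's definition can be made concrete with a small sketch (our own illustration, not drawn from the cited text): a model is a rule that, from an initial state, maps an input trajectory to an output trajectory. The "storage" dynamics below are invented purely for illustration.

```python
def run_model(initial_state, input_trajectory, rule):
    """Run a model in Zeigler's sense: starting from an initial state,
    consume an input trajectory and generate an output trajectory."""
    state = initial_state
    outputs = []
    for u in input_trajectory:
        # The rule encodes the model's instructions/equations:
        # it maps (state, input) to (next state, output).
        state, y = rule(state, u)
        outputs.append(y)
    return outputs

# A toy rule: a storage that accumulates inflows and reports its level.
def storage_rule(level, inflow):
    new_level = level + inflow
    return new_level, new_level

trajectory = run_model(0.0, [1.0, 2.0, 0.5], storage_rule)  # [1.0, 3.0, 3.5]
```

The experimental frame mentioned later in this review then corresponds to the conditions under which such a run is set up and observed.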
With that said, single models are often insufficient to represent the full complexity of designing and implementing sustainability transitions. The diverse nature of the interacting components within socio-technical systems can be so unlike each other that they are better abstracted and modelled using different modelling methodologies and at different scales. However, it is challenging to encapsulate this complexity cost-effectively and credibly within a single model. Attempts have been made to develop unified, monolithic models that capture all such aspects in a single model. These often inadvertently result in inefficient and potentially incomprehensible models (Voinov & Shugart, 2013).
One way to avoid the challenges of solely relying on single models is to use multi-models. In this review, we define a multi-model as a composition of multiple (stand-alone) models, each of which may use the modelling methodology and scale best suited to capture relevant aspects within the system of interest. These models or sub-models are configured to interact with one another, exchanging information that influences each sub-model's behaviour and the overall multi-model outputs.
The benefits of multi-models are well-established in literature. They can provide the users with deeper insight into the system being studied (Yilmaz & Oren, 2005) while increasing the productivity and quality of the sub-models (Mosterman & Vangheluwe, 2004). Multi-models, complex in their nature, can adequately reflect the corresponding level of real-world complexity (DeRosa, Grisogono, Ryan, & Norman, 2008) and are key to achieving the requisite holistic quality of socio-technical systems modelling (Wu, Fookes, Pitchforth, & Mengersen, 2015).
The urgency of the societal challenges at hand often requires models that can be used as soon as possible. This need is often incongruent with the resource- and time-intensive nature of developing new models. Producing fit-for-purpose models from scratch often takes months or even years. This prohibitively costly nature of the model development process is one barrier that limits the mainstream use of M&S for decision-making. This limitation can be addressed with the reuse of existing models. Aside from the fact that such models can be “ready to go” with potentially minor modifications or fine-tuning, these models are also already embedded with valuable domain knowledge that can immensely benefit problem owners (Kasputis & Ng, 2000).
Reuse of previously established and validated models can additionally increase the authority of multi-model simulations. An example of reusing models in a multi-model configuration can be found within the Dutch energy transition context: to gather better insights into future energy infrastructure needs across sectors, decarbonisation objectives are input into the Energy Transition Model (ETM), which generates energy supply and cost scenarios across different Dutch sectors. These scenarios then inform the constraints of the OPERA model (van Stralen, Dalla Longa, Daniëls, Smekens, & van der Zwaan, 2021), an energy system optimisation model that then calculates future infrastructure requirements. That these models were previously used by government and industry stakeholders further lends credence to the final multi-model and its outputs. Various similar examples exist, all of which support our motivation to focus this research on reusing existing models, typically independently developed and intended for use as stand-alone models, in multi-model configurations.
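The ETM-to-OPERA chain described above can be sketched as a simple pipeline. Everything below is a hypothetical stand-in: the function names, inputs, outputs, and numbers are invented for illustration and do not reflect the real interfaces of either model.

```python
def run_etm_stub(objectives):
    # Stand-in for the Energy Transition Model: turns decarbonisation
    # objectives into (invented) supply and cost scenario figures.
    return {
        "renewable_supply_pj": 900 * objectives["co2_reduction"],
        "cost_billion_eur": 40 * objectives["co2_reduction"],
    }

def scenario_to_constraints(scenario):
    # Adapter step: reshape one model's outputs into the other's inputs.
    return {"min_renewable_supply_pj": scenario["renewable_supply_pj"]}

def run_opera_stub(constraints):
    # Stand-in for the OPERA optimisation model: derives (invented)
    # infrastructure requirements from the scenario constraints.
    return {"grid_capacity_gw": constraints["min_renewable_supply_pj"] / 30}

scenario = run_etm_stub({"co2_reduction": 0.55})
requirements = run_opera_stub(scenario_to_constraints(scenario))
```

The point of the sketch is the shape of the coupling, not the numbers: one model's outputs pass through an explicit adapter before becoming the other model's constraints.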
Unfortunately, many barriers hinder the mainstream practice of model reuse. A multi-modeller may search for and find multiple models that together appear to represent their system of interest, only to find that the combination of models embodies various modelling methods, scales, and scopes, all resulting in a lack of interoperability between the models. A reaction from the modeller may be to resort to heuristics in a bid to achieve multi-model interoperability. After cobbling together technical, ad-hoc solutions to make the models communicate with one another, they find that their extensive efforts have resulted in a multi-model that is misaligned in terms of semantics, concepts or contexts (Yilmaz, 2004). As Diallo, Padilla, and Tolk (2010) discuss, “the issue with the consistent application of heuristics to solve interoperability (is that) the resulting process might not be interoperability”.
After multi-model interoperability is established, users are tasked with appropriately managing model uncertainties. Model uncertainties can significantly impact the dynamics of transitions within complex system models. This is because achieving successful transitions within complex systems is often characterised as a wicked problem (Rittel & Webber, 1973), given the numerous ways to approach the problem and the involvement of many actors with multiple, often competing, perspectives. Furthermore, complex systems are often characterised by open and uncertain processes (Köhler et al., 2018) and unanticipated exogenous events, all of which impact the dynamics of change represented in the model. These aspects show that rigorous uncertainty analysis approaches are required when using multi-models for decision support.
There are various steps that a modeller must take in order to ensure that meaningful output can be extracted from multi-scale and multi-domain models. It is necessary to understand the functions required to make the inputs and outputs of different models sufficiently consistent for information exchange. Subsequently, the modeller must understand how the manufacturing of this interoperability interacts with model uncertainties and impacts model outputs and interpretation. We posit that such solutions will emerge from evaluating multi-model case studies that are purposefully designed to span multiple domains and scales. These case studies, composed of interacting socio-technical system models, will build upon foundational M&S theory and form the backbone of this research.
The societal need for effective, scientific and practicable methods for reusing existing models in multi-model configurations is clear. The isolated manner in which singular models have until now been developed and used to aid decision-making demonstrates not just an inefficient use of resources but also missed opportunities to bridge multi-scale perspectives and multi-disciplinary expertise. Furthermore, the transition challenges of today surpass the scope of existing individual models, precluding the ability for more holistic problem-solving. The need to leverage the potential of multi-modelling as a decision-support tool to stimulate successful sustainability transitions motivates this research proposal. Thus, the following sections of this document present an effort to establish a clear understanding of past work on sustainability transitions, multi-modelling, and model reuse.
Sustainability Transitions in Socio-technical Systems¶
In recent decades, it has become apparent that unsustainable resource consumption and production threaten the balance of our existing ecological, social and technological systems. This has prompted increasing calls for substantive transitions that bring about profound structural shifts towards sustainability in society (Berkhout, Smith, & Stirling, 2004). However, it is understood that relevant strategies cannot be achieved solely through the incremental development of innovative technologies, nor can solutions be purely technical or purely social (Savaget, Geissdoerfer, Kharrazi, & Evans, 2019). There is a need for sustainability transitions within the socio-technical contexts that we live in.
de Haan et al. (2014) describe socio-technical systems as consisting of technologies entrenched within social, political and economic contexts. Socio-technical systems are complex systems made distinct by the non-linear processes, feedback loops, hierarchies, and self-organising characteristics they represent. Transitions within socio-technical systems are affected by path dependencies, multi-scale emergent effects, and pressures by actors and processes within the system to remain bound to the status quo. Therefore, ‘socio-technical’ refers to the characteristics of and interactions between social and technological elements, while ‘transition’ refers to the processes and interactions that stimulate fundamental change in and between these elements.
In our review, we found that a substantial volume of transitions research is based on qualitative frameworks which aim to capture the complexity of sustainability transitions (Köhler et al., 2019). Theoretical frameworks such as the Multi-Level Perspective (MLP) (Geels, 2002; Rip, Kemp, et al., 1998) and the Technological Innovation System (TIS) approach (Hekkert, Suurs, Negro, Kuhlmann, & Smits, 2007) take a systemic perspective to better understand the tensions between change and stability in society. Beyond these conceptual frameworks, Köhler et al. (2018)’s literature review showed that transitions research hosts a growing number of studies that employ computational modelling methods as an analytical tool. For example, the study by Walrave and Raven (2016) presents an integration of the MLP and TIS frameworks into a system dynamics model for analysing transition pathways under various resourcing conditions.
Köhler et al. (2018) define ‘transition models’ as the application of existing formal modelling methodologies to explain the dynamics of transitions. The same authors identify the following types of models used in transitions modelling: complex systems models (e.g., complex network models), evolutionary economics models, energy-economy and integrated assessment models, and socio-ecological systems modelling. Though approached and implemented in different ways, these strands of models demonstrate a common requirement, which is the ability to represent characteristics of complex systems (e.g., non-linear processes, heterogeneity of model elements and processes), normative aspects of change, path dependencies, and the potential effects of open, uncertain processes within a single model.
The need to represent multi-scale dimensions in transition models is also mentioned by Köhler et al. (2018). In a separate publication, Savaget et al. (2019) found agreement in the literature that sustainability initiatives should take place at local levels, given the differentiation of requirements and opportunities across regions. Nevertheless, Geels (2004) situates the appropriate analysis at the intermediate ‘meso’ level, bridging between ‘macro’ (e.g., social-ecological-economic interactions) and ‘micro’ (e.g., individual choices and perspectives) contexts. The need for transition models to be able to represent multiple scales thus becomes evident.
From this review, we found that the use of computational models to study transitions in socio-technical systems can be improved to better capture the characteristics of complex systems (e.g., non-linearities, uncertainties, and multi-scale aspects). This substantiates our understanding that multi-modelling is an appropriate approach to studying transitions in socio-technical systems and can benefit the field of transitions research.
Types of Multi-modelling¶
As demonstrated above, transition models are intended to reflect complex objects, processes, and interactions across multiple domains and scales in the real world. This requirement makes multi-modelling a promising approach for developing transition models. In earlier decades, research on multi-modelling was advanced significantly in operational research, primarily for military applications. However, our review showed that in recent years, multi-modelling studies have extended to many other fields, such as supply chain management and industrial ecology.
Although Bollinger, Nikolić, Davis, and Dijkema (2015)’s publication is situated in the field of industrial ecology, we find the concept of a multi-model ecology put forth by the authors to be generalisable. A multi-model ecology is defined as an interacting group of models co-evolving with one another in a dynamic socio-technical environment. This ecology can transform over time as knowledge and practices evolve, and it may contain mental, conceptual, and computational models of multiple scales, scopes and perspectives. These exist alongside and interact with actors, data, information, and knowledge. As noted by Bollinger et al. (2015), the resources in a multi-model ecology can be configured and reconfigured to interact with one another in different ways to form a more multi-dimensional representation of the relevant system. However, as will be explained in Section 2.4, the lack of a set of practicable methods for developing multi-models from elements within such an ecology inhibits its further development.
We found that multi-models can be broadly categorised as tightly-coupled and loosely-coupled models. Tightly-coupled multi-models can be characterised by the parallel operation of two or more sub-models, with dynamic process interactions between the sub-models during the simulation run that impact the intermediate states of the sub-models and the overall multi-model outputs (Antle et al., 2001). This interaction is similar to the Class II hybrid model described by Shanthikumar and Sargent (1983), whereby the sub-models cannot be independently solved (Figure 1). As described by the original authors, a solution procedure is “an analytical equation or numerical algorithm that has been developed for the set of model equations to obtain the desired results”.

Figure 1: Classes of hybrid models, adapted from Shanthikumar and Sargent (1983).
A substantial volume of publications on multi-models is based on the United States Department of Defense’s High-level Architecture (HLA) standards, a widely adopted framework for tightly-coupled models. HLA is a well-known and accepted standard (IEEE 1516-2010) to enable interoperability and model component reuse in distributed simulations through a comprehensive specification of attributes and relations between model components (IEEE Std 1516-2010, 2010). It is intended that compliance with HLA standards at the start of the model development process can ensure the interoperability of multiple model components within an integrated simulation environment. However, current practices in M&S reflect that models are typically not developed with consideration for potential incorporation into a multi-model, which precludes many existing models from being considered for reuse within an HLA framework. Furthermore, the complexity and involvedness of HLA methods limit their accessibility to a broader range of practitioners (Falcone, Garro, Anagnostou, & Taylor, 2017).
On the other hand, in loosely-coupled multi-models, outputs from one sub-model are channelled as inputs into other sub-models (Antle et al., 2001). Such a system comprises two or more stand-alone sub-models that can be run independently without the presence of the other sub-models. This type of multi-model can allow (but does not require) dynamic process interactions between the sub-models. The variables in such models are distinct and separate, and infrequently interact or overlap across sub-models (Orton & Weick, 1990). These characteristics suggest that any existing model can (theoretically) be considered for loose-coupling, thereby reaping the benefits of model reuse described by Kasputis and Ng (2000) and Davis and Anderson (2003). In the classification introduced by Shanthikumar and Sargent (1983), this corresponds to Class I and III/IV hybrid models (Figure 1). The focus of this research will be centred on loosely-coupled models.
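The defining property of loose coupling, namely that each sub-model runs standalone and interaction is an explicit data-exchange step, can be sketched as follows. This is a minimal illustration with invented toy sub-models and made-up coefficients, not a reference implementation:

```python
# Two stand-alone sub-models: each can be run entirely on its own.
def demand_model(households):
    # Invented assumption: 3.5 MWh of annual demand per household.
    return {"annual_demand_gwh": households * 3.5 / 1000}

def capacity_model(annual_demand_gwh):
    # Invented rule of thumb: average load (MW) plus a 20% margin.
    return {"required_capacity_mw": annual_demand_gwh * 1000 / 8760 * 1.2}

# Run the first sub-model independently ...
demand_out = demand_model(households=100_000)

# ... then couple loosely: its output is channelled as the other's input.
capacity_out = capacity_model(demand_out["annual_demand_gwh"])
```

A tightly-coupled (Class II) variant would instead have the two models exchange intermediate states within a single simulation loop, so that neither could be solved without the other.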
In our review, we found many studies on the topic of loosely coupling models: for example, Viana, Brailsford, Harindra, and Harper (2014) and Morgan, Howick, and Belton (2011) present methods for combining Discrete Event Simulation (DES) and System Dynamics (SD) models; Swinerd and McNaught (2012) present three classes of SD/Agent-based modelling (ABM) hybrid models; and Borshchev (2013) discusses six common architectures for combining SD, DES, and ABM models. There is an abundance of piecemeal studies in various domains that demonstrate methods and theories for coupling models of multiple modelling methodologies. However, we identified a lack of a systematic framework or generalised set of methods to guide the process of loosely coupling models.
Reusing Models¶
The availability of composable, reusable and interoperable models is an important factor in mainstreaming the practice of multi-modelling. In theory, coupling such models to create multi-models is potentially more feasible, more economical, and easier to validate. In our review of these concepts, we observed that many publications on reusing models are also related to model composability and interoperability. We draw definitions of the stated terms from reviewed literature:
Model composability refers to the degree to which model components can be selected and assembled in various combinations into simulation systems to satisfy specific user requirements (Petty & Weisel, 2019);
Model reusability refers to the degree to which a model is capable of being used again or repeatedly (Balci, Arthur, & Ormsby, 2011);
Model interoperability refers to the ability of two or more sub-models to exchange information and meaningfully use the information exchanged (Diallo et al., 2010).
Composability refers to a property of a model made up of a combination of multiple component parts. These components are designed and developed to be part of a whole model, rather than used as stand-alone models. This differs from the anticipated scope of this research, which focuses on reusing stand-alone, complete models in a multi-model configuration. However, composable models host qualities which make them conducive to reuse (Kasputis & Ng, 2000). One such quality is related to consistency: the development of composable model parts requires complete descriptors, which eases the understanding of a model’s underlying workings, and thus the selection of models that are consistent with one another.
The model development practices implemented by the original developers significantly impact the reusability of a model. Yilmaz (2004) notes that the original context of the model must be explicated and made clear for successful model reuse. Furthermore, there must be a clear separation of factors that influence simulation outcomes, distinguishing contextual factors from other factors and explicating distinct experimental frames. The term experimental frame was first coined by B. Zeigler (1976) to formally describe a model’s context and provide reproducible experiment descriptions. It specifies the conditions under which the modelled system is observed and experimented on, and represents an operational formulation of the objectives that motivate an M&S project. A model’s composability and reusability can be improved by clearly characterising and clarifying the difference between the model context and the experimental frame (Yilmaz, 2004).
Unfortunately, the practice of building highly composable (and therefore potentially reusable) models is challenging to implement. When practitioners develop models, they typically do not set out with composability as an objective, as it is a costly endeavour that scarcely rewards the model developers (Davis & Anderson, 2003). Furthermore, the fitness for purpose or validity of the selected model is challenging to assess when a model built for one purpose is reused for another, or when it is linked to models developed under a misaligned or conflicting set of assumptions (Pidd, 2002). The resulting consequence for the prospects of model composability is aptly noted by Kasputis and Ng (2000): “Unless models are designed to work together, they don’t (at least not easily and cost-effectively).”
A model’s reusability depends not just on its composability but also on the technical ability and knowledge of future model users and the reuse mechanisms available. Table 1 expands upon these reuse strategies, with the left column summarising the technical aspects that must be addressed in effective model reuse strategies as outlined by Pidd (2002), while the right column establishes how these aspects contribute to model reuse.
Table 1: Technical aspects in model reuse strategies (Pidd, 2002)

| Technical aspect | Objective |
| --- | --- |
| Abstraction, for the efficient and adequate conveyance of the model’s purpose, nature and behaviour. | To assess the substantive interoperability of different model components. |
| Selection, as in directory and search services for locating, comparing, and selecting models. | To support model search and selection. |
| Specialisation, as in features for specialising model components into useable entities. | To support modification of the model components such that they fit within the multi-model configuration. |
| Integration, referring to a framework (or an agreed architecture) to combine and connect model components. | To support the linking of model components and facilitate model interoperability. |
The abstraction and selection strategies are expanded upon by Isasi, Noguerón, and Wijnands (2015), who explain that ontologies and hierarchies rich in syntax, semantics and structure are required to capture model documentation for automation of model search and selection. This documentation should be stored and searchable within a model reference library alongside the models. Furthermore, the model reusers should be skilled in valid and credible methods to facilitate interoperability between the selected models within a coherent workflow and assess the impacts of those methods on model outputs.
Furthermore, we observed that the reuse of models is also rooted in social processes and considerations. Social factors can influence the perception of validity and, hence, the reusability of a model. As an example, the Dynamic Integrated Climate-Economy (DICE) and Regional Integrated Climate-Economy (RICE) models quantified the impacts of climate policies on the economy, which was considered a breakthrough at the time of development (Nordhaus, 1992; Nordhaus & Yang, 1996). The author, William Nordhaus, was awarded a Nobel Prize for his work. The simplicity of the models can be considered a factor that supports their wide-ranging use but also fuels contention amongst climate economists. Despite heavy criticisms of such models, and of integrated assessment models in general (Storm, 2017), these models remain widely used in research on climate economics and policies, as well as by authoritative governmental actors such as the United States Environmental Protection Agency.

Figure 2: Relations between the composability, interoperability, and reusability of a model.
Our review found that the distinction between composability, reusability, and interoperability is nuanced. Figure 2 summarises our understanding of the relations between these three properties based on this literature review. In essence, a model’s reusability depends on how easily it can be made interoperable with other models, on the availability of verifiable methods for meaningfully using and linking the models, and on the available infrastructure (such as model reference libraries). The reusability of a model also depends on its composability, as a more composable model is more easily made interoperable with other models and is, therefore, more reusable. However, a reused model may not be composable, and a composable model may never be reused.
As demonstrated in this section, we found that the most relevant literature dates from roughly 10 to 20 years ago. These foundational publications addressed conceptual requirements for developing methodologies and standards to mitigate the intricacies of developing reusable models. However, in surveying more recent literature, we did not find a concrete realisation of these methodologies or standards. Our review revealed a lack of practical guidelines or methods for systematically approaching the reuse of models, whether as a stand-alone model or within a multi-model configuration.
Challenges in Multi-modelling¶
Guidelines for systematically approaching model reuse must address the challenges of multi-modelling. These challenges are fundamentally rooted in the varied nature of the modelling methodologies used, which directly influence (individual) model characteristics. The taxonomy by Lynch and Diallo (2015) suggests that there are six key simulation model characteristics: time representation, the basis of value, behaviour, expression, resolution, and execution (Figure 3). These characteristics are described as mutually exclusive, and the presence of multiple such competing characteristics within one multi-model triggers interoperability challenges.

Figure 3: Taxonomy of model characteristics (Lynch & Diallo, 2015), as adapted by the authors from Sulistio et al. (2004)
Furthermore, uncertainty analysis for multi-models is an essential dimension of this research. While there is a rich repository of knowledge on managing and understanding uncertainties in singular models, it is still unclear how sub-model uncertainties influence overall multi-model outputs. As Davis and Anderson (2003) hinted, these uncertainties may “propagate in troublesome and non-intuitive ways”. This behaviour is further influenced by the various techniques used to make the sub-models interoperable. Understanding this topic is essential for the interpretability and credibility of the multi-model as a decision-support tool. Thus, we also reviewed and summarised the literature on uncertainty analysis for multi-models.
Interoperability¶
Multi-models consist of sub-models that are (typically) conceived with different modelling methods and experimental frames, giving rise to interoperability concerns. The operational principles that distinguish these modelling methods relate to the mathematical compatibility of the model components and must be treated accordingly. There are practical issues that impact interoperability when connecting models with different mathematical representations.
There are various frameworks that structure model interoperability in literature. We find the earlier categorisation by Dahmann, Salisbury, Barry, Turrell, and Blemberg (1999) to be most helpful: they identify two categories of simulation interoperability, which are the technical (syntactic) and the substantive (semantic). This categorisation can be seen as a coarser version of Wang, Tolk, and Wang (2009)’s Levels of Conceptual Interoperability Model (LCIM) (Figure 4), whereby technical interoperability corresponds to LCIM levels 1 and 2, and substantive interoperability corresponds to LCIM levels 3 through 7.
The different characteristics of the chosen modelling approaches have immediate consequences for the technical interoperability of the model. The different time representations and bases of value in the models result in different forms of model inputs and outputs. These differences must be reconciled for the sub-models to communicate. For example, a dynamic simulation model may produce time-series outputs that must be transformed into static representations before being communicated to an optimisation model.
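Reconciling a dynamic model's time-series output with an optimisation model's static inputs typically means reducing the trajectory to summary quantities. A minimal sketch of such a transformation; the choice of summaries (total, peak, mean) is ours, for illustration only:

```python
def to_static(time_series):
    """Collapse a simulated trajectory into static figures that an
    optimisation model could take as parameters or bounds."""
    return {
        "total": sum(time_series),                    # e.g. energy over the horizon
        "peak": max(time_series),                     # e.g. a capacity bound
        "mean": sum(time_series) / len(time_series),  # e.g. an average load
    }

# An invented hourly load trajectory from a dynamic sub-model, in MW.
hourly_load_mw = [40.0, 55.0, 70.0, 35.0]
static_inputs = to_static(hourly_load_mw)
```

Which summaries are appropriate depends on the receiving model's experimental frame; the information discarded in this step is itself a source of the uncertainty discussed above.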
Figure 4: The Levels of Conceptual Interoperability Model (Wang et al.,2009)
The technological and social phenomena pertinent to socio-technical systems exhibit behaviours relevant at different scales and resolutions. Naturally, then, different sub-models are conceived at different scales. Various studies often ascribe different definitions to the word ‘scale’ (Bar-Yam, 2004; Febres, 2018). In this review, we define scale as the extent (or dimension) of the aspects of the original system represented in the model. For example, a wind farm model may simulate the wind energy generation from all wind farms in the Netherlands for the next ten years. In this case, we say that the geographical scale of the model is the Netherlands, and the time scale of the model is ten years. Scale is often temporal or spatial, but it is not limited to those. For example, a biological system model may be at a scale of cell, tissue, organ or beyond.
Current literature demonstrates that scale and resolution are important aspects of M&S that affect technical and substantive interoperability. This has been addressed not just in Lynch and Diallo (2015)’s taxonomy of multi-modelling but also in the sheer volume of publications on the meaning, challenges, and solutions related to multi-resolution studies. For elements of different scales and resolutions to communicate, aggregation and disaggregation functions are needed to make the communicated information consistent with one another. Aggregation has been described as a bottom-up approach where elements of a model are grouped and described on a higher level of abstraction (Iwasaki & Simon, 1994), while disaggregation refers to a top-down approach where system elements are broken into a set of smaller elements of subsystems (Alfaris, Siddiqi, Rizk, Weck, & Svetinovic, 2010).
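A minimal sketch of such aggregation and disaggregation functions between two spatial scales follows; the city names and load values are hypothetical:

```python
# Hypothetical loads at the lower (municipal) scale, in MW.
municipal_load_mw = {"Delft": 120.0, "Leiden": 150.0, "Gouda": 80.0}

# Aggregation (bottom-up): group lower-scale elements into one
# higher-level description.
regional_load_mw = sum(municipal_load_mw.values())

# Disaggregation (top-down): break a higher-level quantity into smaller
# elements, here proportionally to each municipality's current share.
regional_target_mw = 300.0
disaggregated = {
    city: regional_target_mw * load / regional_load_mw
    for city, load in municipal_load_mw.items()
}
```

A proportional split is only one possible disaggregation rule; in practice the rule must be justified against the assumptions of both sub-models.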
Multi-resolution modelling (MRM), sometimes called variable-resolution modelling, is the practice of building a single model or a family of models to describe the same phenomena at different levels of resolution (Davis & Bigelow, 1998). While this research is not focused on multi-resolution modelling, the concepts driving MRM research apply to multi-modelling research. Namely, a motivation for MRM is that both high- and low-resolution models play important roles in using M&S for decision-support. As discussed by Davis and Bigelow (1998), high-resolution models may be well-suited to understand and demonstrate bottom-up, emergent phenomena and are often perceived to exhibit higher (better) fidelity. They are also increasingly feasible to implement, given the increasing proliferation of detailed and open data. However, high-resolution models are computationally expensive and time-consuming to execute. Such models also typically leave important determinants of higher-level behaviours as implicit (rather than explicit) qualities. On the other hand, low-resolution models provide higher interpretability, require lower computation cost, and explicate important higher-level behaviours. These qualities make low-resolution models important tools for exploratory analysis. Jointly, these models may be used for cross-validation and to extract findings that cannot be provided by a single model alone.
Past research has put forth a set of tools and techniques that can systematically transform a model across multiple levels of resolution. Paul and Hillestad (1993) propose a set of tools for transforming a model across multiple resolutions, namely via Selected Viewing, the use of alternative sub-models (e.g., surrogate models or meta-models), and Integrated Hierarchical Variable Resolution (IHVR) modelling. Davis and Bigelow (1998) proposed using array formalism or vectors, a method to simplify the model structure and rewrite the model in terms of array operations, to reveal differing sets of object classes that potentially ease the mapping of objects across scales.
Resolving technical interoperability issues related to diverse modelling methods and scales is but the first challenge of achieving adequate multi-model interoperability. The LCIM model demonstrates four other levels of interoperability (i.e., semantic, pragmatic, dynamic, and conceptual) that are necessary for a multi-model to be entirely correct. However, establishing these types of interoperability between models is a challenge that has been discussed by many authors such as Yilmaz (2004), Davis and Tolk (2007) and Balci et al. (2017). The model development process is such that a sub-model can contain many ‘hidden’ assumptions that will impact the behaviour of other interacting sub-models. Unfortunately, these assumptions are often not explicated and can result in misalignments between sub-models that obstruct full substantive interoperability. We note that the methods found and discussed in existing literature do not adequately guide a user in systematically approaching these interoperability concerns related to model reuse in multi-models.
Uncertainty Analysis¶
Complex systems models often incorporate relatively high levels of uncertainty (relative to engineering models of physical systems, for example). This is because complex systems models often incorporate non-linear simulation methods and allow for contingencies and uncertainties. While this flexibility may reflect increased realism, it results in high levels of uncertainty in the generated outputs. It is important to understand and adequately manage these model uncertainties as part of the model verification and validation procedure. Model verification entails determining if an implemented model is consistent with its conceptual specification. It answers the question, “did we build the model right?” On the other hand, model validation entails establishing that the behaviors of the model and the real system are sufficiently aligned within the experimental frame. It answers the question, “did we build the right model?”
Uncertainties can originate from data inputs, model structure, or model parameters and affect model behaviour and outputs in unanticipated ways. The dynamics of these uncertainties can affect the interpretation and validity of model outputs, leaving room for misuse of the model (Saltelli et al., 2020). Misuse occurs when, for example, modellers project an undue amount of certainty to model outputs or when politicians make strategic use of uncertainties in model outputs to back a preferred policy. One way to mitigate such misuse is to increase transparency by adequately analysing and communicating the impacts of these uncertainties.
The importance of appropriately managing model uncertainties is heightened when the models are used to support decisions for large-scale socio-technical transitions. This is because such decisions are likely to have far-reaching impacts that cascade into the future. Although many studies linking models to socio-technical transition theories aim to provide decision support, they often fall short of doing so (Hirt, Schell, Sahakian, & Trutnevyte, 2020). Furthermore, transition models attempt to reflect the character of socio-technical transitions, which is that they are affected by open, path-dependent processes that lead to uncertain outcomes (Köhler et al., 2018). It is therefore important to account for dynamics of change that can be triggered by uncertain, unknown, or unanticipated endogenous processes and exogenous events.
Numerous studies have attempted to structure or typify these uncertainties in model-based decision-making (Bevan, 2022; Kwakkel, Walker, & Marchau, 2010; Petersen, 2006). In essence, many uncertainties arise when we abstract a real-world system into a model (structural uncertainties) and parameterise this model of the system (parametric uncertainties). The uncertainties may be epistemic (due to diverging perspectives or lack of knowledge) or ontic (as some phenomena simply cannot be neatly captured with numbers or equations) in nature. Pace (2015) further identified three sources of uncertainty in M&S: stochastic variables and processes, a lack of accuracy and precision, and errors. Adequate analysis and management of these uncertainties are important for understanding the dynamics of the system and informing meaningful interpretation of model outputs.
Two ways to analyse uncertainties in M&S models are uncertainty quantification and uncertainty characterisation. Uncertainty quantification refers to the representation of model output uncertainty using probability distributions (Cooke, 1991; Reed et al., 2022), while uncertainty characterisation refers to model evaluation under alternative factor hypotheses to explore their implications for model output uncertainty (Moallemi, Kwakkel, de Haan, & Bryan, 2020; Reed et al., 2022; W. E. Walker et al., 2003). A comprehensive uncertainty analysis endeavour is often computationally expensive as it requires many runs of the model to observe the effects of variations in model inputs and parameters on model outputs. Such an endeavour becomes infeasible when a single run of the model is in itself computationally costly.
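The reason such analyses require many model runs can be sketched with a Monte Carlo example; the toy model and the input distribution are hypothetical stand-ins for an actual sub-model and its uncertain parameter:

```python
import random

def toy_model(growth_rate, base_demand=100.0, years=10):
    """Hypothetical stand-in for a computationally cheap sub-model."""
    return base_demand * (1.0 + growth_rate) ** years

random.seed(42)
# Propagate an assumed input distribution through the model and summarise
# the resulting output distribution. The need for many runs is exactly
# what makes this infeasible when a single run is itself expensive.
outputs = sorted(toy_model(random.uniform(0.0, 0.04)) for _ in range(1000))
interval_95 = (outputs[25], outputs[975])  # empirical 95% interval
```

For expensive models, surrogate (meta-)models are often substituted for the full model in such sampling loops.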
The methods used to manage model uncertainties can depend on the level of uncertainty in the system. Pruyt and Kwakkel (2014) describe a range of levels of uncertainty ranging from no uncertainty to total ignorance (Figure 5). Sensitivity analysis can be an effective way to understand the impacts of uncertainties on model outcomes. It is defined by Saltelli, Tarantola, Campolongo, and Ratto (2004) as the study of how uncertainty in the output of a model can be apportioned to different sources of uncertainty in the model input. Uncertainties can further be understood via structured experimental designs that represent a systematic exploration of the uncertainty space and subsequently analysing the results using statistical or data mining methods to understand typical system trajectories and the conditions that facilitate them (Bryant & Lempert, 2010; Halbe et al., 2015). Another method to manage unresolvable uncertainties is exploratory modelling, a framework to explore the implications of varying assumptions and hypotheses by means of a series of computation experiments (Bankes, 1993).
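A simple, one-at-a-time form of such apportioning can be sketched as follows; the model and perturbation size are hypothetical, and variance-based designs such as those described by Saltelli et al. are more rigorous:

```python
def model(a, b):
    # Hypothetical model whose output depends more strongly on input a.
    return 3.0 * a + 0.5 * b

base = {"a": 1.0, "b": 1.0}
sensitivity = {}
for name in base:
    hi, lo = dict(base), dict(base)
    hi[name] += 0.1
    lo[name] -= 0.1
    # Central-difference estimate of the local effect of each input
    # on the model output, holding the other input at its base value.
    sensitivity[name] = (model(**hi) - model(**lo)) / 0.2
```

Ranking the inputs by these effects indicates where uncertainty in the inputs matters most for the output.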
The presence of interactions between the sub-models complicates uncertainty analysis in a multi-model. These interactions occur at the interface of the sub-models, originating in the methods employed to achieve interoperability between the sub-models (Drent, 2020; Nikolic et al., 2019). Furthermore, repeated interactions between the sub-models can result in a cascade of uncertainty resulting from the accumulation of individual sub-model uncertainties and uncertainties resulting from the sub-model interactions; this process is described in further detail by Wilby and Dessai (2010). Drent (2020) further found that the multi-model configuration (whether undirected, with feedbacks across the models, or directed, with no feedbacks) determines whether the uncertainties should be analysed for both the whole multi-model and the individual sub-models, or for the whole multi-model only.
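A minimal sketch of such a cascade in a directed (no-feedback) configuration follows, with two hypothetical sub-models whose parametric uncertainties accumulate through their interface:

```python
import random

def supply_model(capacity_gw):
    # Sub-model A (hypothetical): annual generation in TWh with an
    # uncertain capacity factor (8.76 converts GW-years to TWh).
    return capacity_gw * 8.76 * random.gauss(0.35, 0.05)

def cost_model(generation_twh):
    # Sub-model B (hypothetical): consumes A's output and adds its own
    # parametric uncertainty, an uncertain unit cost in EUR/MWh.
    return generation_twh * random.gauss(50.0, 5.0)  # million EUR

random.seed(0)
# Sampling end-to-end lets sub-model A's uncertainty cascade into
# sub-model B via the interface, rather than analysing each in isolation.
costs = [cost_model(supply_model(10.0)) for _ in range(500)]
mean_cost = sum(costs) / len(costs)
```

The spread of `costs` reflects both sub-models' uncertainties jointly; in an undirected configuration with feedbacks, the sub-models would also have to be analysed individually.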
Figure 5: Levels of uncertainty as structured by W. Walker, Lempert, and Kwakkel (2013)
Our literature review revealed that previous research on uncertainty analysis in loosely-coupled multi-models is limited. Some studies discussed and applied uncertainty management concepts. For example, DeVolder et al. (2002) and Ye et al. (2021) studied uncertainty quantification for multi-scale models in the discipline of physical sciences. However, these studies do not directly assess how sub-model interactions or multi-model configuration influence the dynamics of uncertainty propagation through a multi-model, nor do they discuss methods for analysing and interpreting such uncertainties.
Final remarks¶
Sustainability transitions represent complex challenges that span multiple domains and multiple scales. A promising approach for studies on such complex systems is to use multi-models. The urgency of the sustainability challenges at hand often requires multi-models to be used expeditiously. The model development process is, however, resource- and time-consuming and must be informed by sufficient domain expertise. These factors make the reuse of existing models an appealing option for multi-modelling. This review found that a model’s reusability depends on the following elements:
Composability of the model: the model development process dictates how composable (and therefore how reusable) a model is.
Model reuse mechanisms available: mechanisms that contribute to model reuse include those that enable uniform model abstraction (e.g., for model comparison and selection), model selection (e.g., from a model repository), model specialisation (e.g., to adapt selected models into reusable entities), and model integration (e.g., for combining and connecting model components).
Technical ability and knowledge of future model users: as related to the previously stated model specialisation, facilitating interoperability between two stand-alone models requires technical expertise and domain knowledge from the model users.
Social processes: the perceived authority of the model and the model owners influences whether and how the model is reused.
This review was scoped to focus on the first two points. We found that the practice of reusing models in multi-models can be broadly summarised into two types of challenges. The first concerns technical interoperability: ensuring that information can be exchanged between the components of a multi-model, including reconciling different time representations, bases of value, and scales across multiple models. The second is achieving substantive interoperability: ensuring that the semantics, assumptions, and contexts of the models do not conflict with one another. Facilitating interoperability between multiple models calls for scientific methods to identify the key model and data components that should communicate with one another, as well as to modify and combine those components to answer a modelling question.
Once the interoperability challenges of multi-modelling are addressed, the task of interpreting multi-model outputs follows. Decisions on large-scale sustainability transitions that result from such models are likely to have far-reaching impacts that cascade into the future. This increases the importance of understanding and adequately managing how uncertainties in model inputs and model structure influence model outputs. Comprehensive uncertainty analysis methods for the multi-model can help meet this need. Uncertainties in a multi-model may emerge from individual sub-model uncertainties as well as from interactions between sub-models. Model uncertainties can originate from structural or parametric uncertainties, which may be epistemic or ontic. An in-depth understanding of how to manage uncertainties in the model is an integral part of the model verification and validation procedure and shapes the interpretation of model outputs. While there are many studies on uncertainty analysis for individual models, uncertainty propagation in multi-models warrants further comprehensive research.
This document presented the reviewed literature surrounding model reuse as related to multi-modelling, including motivations and challenges. In summary, we found that the field of transitions research can benefit from methodical guidelines for reusing existing models in multi-model configurations. The practice of reusing existing models is inhibited by the lack of practical and scientifically grounded methods for approaching the challenges embedded in the multi-model development process. We conclude that developing tried-and-tested methods to treat interoperability issues and implement uncertainty analysis in multi-models can advance the practice of multi-modelling and stimulate the growth of multi-model ecologies in various domains. This outcome is beneficial as multi-models can better encapsulate socio-technical challenges’ multi-domain and multi-scale nature, leading to strengthened decision support for socio-technical transitions.
References¶
Alfaris, A., Siddiqi, A., Rizk, C., Weck, O. D., & Svetinovic, D. (2010). Hierarchical decomposition and multidomain formulation for the design of complex sustainable systems. Journal of Mechanical Design, Transactions of the ASME, 132, 0910031-09100313. doi: https://doi.org/10.1115/1.4002239
Antle, J. M., Capalbo, S. M., Elliott, E. T., Hunt, H. W., Mooney, S., & Paustian, K. H. (2001). Research needs for understanding and predicting the behavior of managed ecosystems: Lessons from the study of agroecosystems. In (Vol. 4, p. 723-735). doi: https://doi.org/10.1007/s10021-001-0041-0
Balci, O., Arthur, J. D., & Ormsby, W. F. (2011). Achieving reusability and composability with a simulation conceptual model. Journal of Simulation, 5, 157-165. doi: https://doi.org/10.1057/jos.2011.7
Balci, O., Ball, G. L., Morse, K. L., Page, E., Petty, M. D., Tolk, A., & Veautour, S. N. (2017). Model reuse, composition, and adaptation. doi: https://doi.org/10.1007/978-3-319-58544-4_6
Bankes, S. (1993, 6). Exploratory modeling for policy analysis. Operations Research, 41 , 435-449. doi: https://doi.org/10.1287/opre.41.3.435
Bar-Yam, Y. (2004). Multiscale complexity / entropy. Advances in Complex Systems, 7, 47-63.
Berkhout, F., Smith, A., & Stirling, A. (2004). Socio-technological regimes and transition contexts. System innovation and the transition to sustainability: Theory, evidence and policy, 44 (106), 48–75. doi: https://doi.org/10.4337/9781845423421.00013
Bevan, L. D. (2022). The ambiguities of uncertainty: A review of uncertainty frameworks relevant to the assessment of environmental change. Futures, 137. doi: https://doi.org/10.1016/j.futures.2022.102919
Bollinger, L. A., Nikolić, I., Davis, C. B., & Dijkema, G. P. (2015). Multimodel ecologies: cultivating model ecosystems in industrial ecology. Journal of Industrial Ecology, 19 (2), 252–263. doi: https://doi.org/10.1111/jiec.12253
Borshchev, A. (2013). The big book of simulation modeling: multimethod modeling. AnyLogic North America.
Bryant, B. P., & Lempert, R. J. (2010). Thinking inside the box: A participatory, computer-assisted approach to scenario discovery. Technological Forecasting and Social Change, 77 (1), 34–49. doi: https://doi.org/10.1016/j.techfore.2009.08.002
Cooke, R. (1991). Experts in uncertainty: opinion and subjective probability in science. Oxford University Press on Demand.
Dahmann, J., Salisbury, M., Barry, P., Turrell, C., & Blemberg, P. (1999). Hla and beyond: Interoperability challenges. In Simulation interoperability workshop.
Davis, P. K., & Anderson, R. H. (2003). Improving the composability of Department of Defense models and simulations. RAND.
Davis, P. K., & Bigelow, J. H. (1998). Experiments in multiresolution modeling (mrm). RAND.
Davis, P. K., & Tolk, A. (2007). Observations on new developments in composability and multi-resolution modeling.. doi: https://doi.org/10.1109/WSC.2007.4419682
de Haan, F. J., Ferguson, B. C., Adamowicz, R. C., Johnstone, P., Brown, R. R., & Wong, T. H. (2014). The needs of society: A new understanding of transitions, sustainability and liveability. Technological Forecasting and Social Change, 85, 121–132. doi: https://doi.org/10.1016/j.techfore.2013.09.005
DeRosa, J. K., Grisogono, A.-M., Ryan, A. J., & Norman, D. O. (2008). A research agenda for the engineering of complex systems. In 2008 2nd annual ieee systems conference (pp. 1–8). doi: https://doi.org/10.1109/SYSTEMS.2008.4518982
DeVolder, B., Glimm, J., Grove, J. W., Kang, Y., Lee, Y., Pao, K., … Ye, K. (2002). Uncertainty quantification for multiscale simulations. Journal of Fluids Engineering, Transactions of the ASME, 124, 29-41. doi: https://doi.org/10.1115/1.1445139
Diallo, S. Y., Padilla, J. J., & Tolk, A. (2010). Why is interoperability bad: Towards a paradigm shift in simulation composition.. Retrieved from https://www.researchgate.net/publication/290613784
Drent, A. (2020). Uncertainty analysis on multi-model ecologies.
Falcone, A., Garro, A., Anagnostou, A., & Taylor, S. J. (2017). An introduction to developing federations with the high level architecture. IEEE. doi: https://doi.org/10.1109/WSC.2017.8247820
Febres, G. L. (2018). A proposal about the meaning of scale, scope and resolution in the context of the information interpretation process. Axioms, 7. Retrieved from https://www.mdpi.com/journal/axioms
Geels, F. W. (2002). Technological transitions as evolutionary reconfiguration processes: a multi-level perspective and a case-study. Research policy, 31 (8-9), 1257–1274. doi: https://doi.org/10.1016/S0048-7333(02)00062-8
Geels, F. W. (2004). From sectoral systems of innovation to socio-technical systems: Insights about dynamics and change from sociology and institutional theory. Research policy, 33 (6-7), 897–920. doi: https://doi.org/10.1016/j.respol.2004.01.015
Halbe, J., Reusser, D. E., Holtz, G., Haasnoot, M., Stosius, A., Avenhaus, W., & Kwakkel, J. H. (2015). Lessons for model use in transition research: A survey and comparison with other research areas. Environmental Innovation and Societal Transitions, 15, 194–210. doi: https://doi.org/10.1016/j.eist.2014.10.001
Hekkert, M. P., Suurs, R. A., Negro, S. O., Kuhlmann, S., & Smits, R. E. (2007). Functions of innovation systems: A new approach for analysing technological change. Technological forecasting and social change, 74 (4), 413–432. doi: https://doi.org/10.1016/j.techfore.2006.03.002
Hirt, L. F., Schell, G., Sahakian, M., & Trutnevyte, E. (2020). A review of linking models and socio-technical transitions theories for energy and climate solutions. Environmental Innovation and Societal Transitions, 35 , 162–179. doi: https://doi.org/10.1016/j.eist.2020.03.002
IEEE Std 1516-2010. (2010). IEEE standard for modeling and simulation (M&S) high level architecture (HLA): 1516-2010 (framework and rules); 1516.1-2010 (federate interface specification); 1516.2-2010 (object model template (OMT) specification). IEEE Std 1516-2010 (Revision of IEEE Std 1516-2000), 1-38. doi: https://doi.org/10.1109/IEEESTD.2010.5553440
Isasi, Y., Noguerón, R., & Wijnands, Q. (2015). Simulation model reference library: A new tool to promote simulation models reusability.
Iwasaki, Y., & Simon, H. A. (1994). Causality and model abstraction. Artificial Intelligence, 67, 143-194.
Kasputis, S., & Ng, H. C. (2000). Composable simulations..
Köhler, J., De Haan, F., Holtz, G., Kubeczko, K., Moallemi, E., Papachristos, G., & Chappin, E. (2018). Modelling sustainability transitions: An assessment of approaches and challenges. Journal of Artificial Societies and Social Simulation, 21 (1). doi: https://doi.org/10.18564/jasss.3629
Köhler, J., Geels, F. W., Kern, F., Markard, J., Onsongo, E., Wieczorek, A., … others (2019). An agenda for sustainability transitions research: State of the art and future directions. Environmental innovation and societal transitions, 31, 1–32. doi: https://doi.org/10.1016/j.eist.2019.01.004
Kwakkel, J. H., Walker, W. E., & Marchau, V. A. W. J. (2010). Classifying and communicating uncertainties in model-based policy analysis. Int. J. Technology, Policy and Management, 10, 299-315.
Lynch, C., & Diallo, S. (2015). A taxonomy for classifying terminologies that describe simula- tions with multiple models.. doi: https://doi.org/10.1109/WSC.2015.7408282
Moallemi, E. A., Kwakkel, J., de Haan, F. J., & Bryan, B. A. (2020, 11). Exploratory modeling for analyzing coupled human-natural systems under uncertainty. Global Environmental Change, 65 . doi: https://doi.org/10.1016/j.gloenvcha.2020.102186
Morgan, J., Howick, S., & Belton, V. (2011). Designs for the complementary use of system dynamics and discrete-event simulation. IEEE.
Mosterman, P. J., & Vangheluwe, H. (2004, 9). Computer automated multi-paradigm modeling: An introduction. Simulation, 80, 433-450. doi: https://doi.org/10.1177/0037549704050532
Nikolic, I., Warnier, M., Kwakkel, J., Chappin, E., Lukszo, Z., Brazier, F., … Palensky, P. (2019). Principles, challenges and guidelines for a multi-model ecology. doi: https://doi.org/10.4233/uuid:1aa3d16c-2acd-40ce-b6b8-0712fd947840
Nordhaus, W. D. (1992). The ‘DICE’ model: background and structure of a dynamic integrated climate-economy model of the economics of global warming.
Nordhaus, W. D., & Yang, Z. (1996). A regional dynamic general-equilibrium model of alter- native climate-change strategies. The American Economic Review , 741–765.
Orton, J. D., & Weick, K. E. (1990). Loosely coupled systems: A reconceptualization. The Academy of Management Review, 15, 203-223. Retrieved from http://www.jstor.org/stable/258154 doi: https://doi.org/10.2307/258154
Pace, D. K. (2015). Fidelity, resolution, accuracy, and uncertainty. doi: https://doi.org/10.1007/978-1-4471-5634-5_3
Paul, K. D., & Hillestad, R. (1993). Families of models that cross levels of resolution: issues for design, calibration and management. In (p. 1003-1012). doi: https://doi.org/10.1145/256563.256913
Petersen, A. (2006). Simulating nature.
Petty, M. D., & Weisel, E. W. (2019, 3). Model composition and reuse. Elsevier. doi: https:// doi.org/10.1016/B978-0-12-813543-3.00004-4
Pidd, M. (2002). Simulation software and model reuse: a polemic.
Pruyt, E., & Kwakkel, J. H. (2014). Radicalization under deep uncertainty: A multi-model exploration of activism, extremism, and terrorism. System Dynamics Review , 30 , 1-28. doi: https://doi.org/10.1002/sdr.1510
Reed, P. M., Hadjimichael, A., Malek, K., Karimi, T., Vernon, C. R., Srikrishnan, V., … Thurber, T. (2022). Addressing uncertainty in multisector dynamics research. Zenodo. doi: https://doi.org/10.5281/zenodo.6110623
Rip, A., Kemp, R., et al. (1998). Technological change. Human choice and climate change, 2 (2), 327–399.
Rittel, H. W., & Webber, M. M. (1973). Dilemmas in a general theory of planning. Policy sciences, 4 (2), 155–169. doi: https://doi.org/10.1007/BF01405730
Saltelli, A., Bammer, G., Bruno, I., Charters, E., Di Fiore, M., Didier, E., … others (2020). Five ways to ensure that models serve society: a manifesto. Nature Publishing Group. doi: https://doi.org/10.1038/d41586-020-01812-9
Saltelli, A., Tarantola, S., Campolongo, F., & Ratto, M. (2004). Sensitivity analysis in practice: a guide to assessing scientific models. Wiley Online Library. Retrieved from www.andreasaltelli.eu
Savaget, P., Geissdoerfer, M., Kharrazi, A., & Evans, S. (2019). The theoretical foundations of sociotechnical systems change for sustainability: A systematic literature review. Journal of cleaner production, 206 , 878–892. doi: https://doi.org/10.1016/j.jclepro.2018.09.208
Shanthikumar, J. G., & Sargent, R. G. (1983). A unifying view of hybrid simulation/analytic models and modeling. Operations research, 31 (6), 1030–1052. doi: https://doi.org/10.1287/opre.31.6.1030
Storm, S. (2017). How the invisible hand is supposed to adjust the natural thermostat: A guide for the perplexed. Science and engineering ethics, 23 (5), 1307–1331. doi: https://doi.org/10.1007/s11948-016-9780-3
Sulistio, A., Yeo, C. S., & Buyya, R. (2004, 6). A taxonomy of computer-based simulations and its mapping to parallel and distributed systems simulation tools. Software - Practice and Experience, 34 , 653-673. doi: https://doi.org/10.1002/spe.585
Swinerd, C., & McNaught, K. R. (2012). Design classes for hybrid simulations involving agent-based and system dynamics models. Simulation Modelling Practice and Theory , 25 , 118-133. doi: https://doi.org/10.1016/j.simpat.2011.09.002
van Stralen, J. N., Dalla Longa, F., Daniëls, B. W., Smekens, K. E., & van der Zwaan, B. (2021). Opera: a new high-resolution energy system model for sector integration research. Environmental Modeling & Assessment, 26 (6), 873–889. doi: https://doi.org/10.1007/s10666-020-09741-7
Viana, J., Brailsford, S. C., Harindra, V., & Harper, P. R. (2014, 8). Combining discrete-event simulation and system dynamics in a healthcare setting: A composite model for chlamydia infection. European Journal of Operational Research, 237, 196-206. doi: https://doi.org/10.1016/j.ejor.2014.02.052
Voinov, A., & Shugart, H. H. (2013). ’Integronsters’, integral and integrated modeling. Environmental Modelling and Software, 39, 149-158. doi: https://doi.org/10.1016/j.envsoft.2012.05.014
Walker, W., Lempert, R., & Kwakkel, J. (2013). Deep uncertainty (3rd ed.). Springer. doi: https://doi.org/10.1007/978-1-4419-1153-7
Walker, W. E., Harremoes, P., Rotmans, J., Sluijs, J. P. V. D., Asselt, M. B. A. V., Janssen, P., … Krauss, V. (2003). Defining uncertainty a conceptual basis for uncertainty management in model-based decision support. Integrated Assessment , 4 , 5-17. doi: https://doi.org/10.1076/iaij.4.1.5.16466
Walrave, B., & Raven, R. (2016). Modelling the dynamics of technological innovation systems. Research policy, 45 (9), 1833–1844. doi: https://doi.org/10.1016/j.respol.2016.05.011
Wang, W., Tolk, A., & Wang, W. (2009). The levels of conceptual interoperability model: Applying systems engineering principles to M&S.
Wilby, R. L., & Dessai, S. (2010, 7). Robust adaptation to climate change. Weather , 65 , 176-180. doi: https://doi.org/10.1002/wea.504
Wu, P. P. Y., Fookes, C., Pitchforth, J., & Mengersen, K. (2015). A framework for model integration and holistic modelling of socio-technical systems. Decision Support Systems, 71 , 14-27. doi: https://doi.org/10.1016/j.dss.2015.01.006
Ye, D., Veen, L., Nikishova, A., Lakhlili, J., Edeling, W., Luk, O. O., … Hoekstra, A. G. (2021, 5). Uncertainty quantification patterns for multiscale models. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 379. doi: https://doi.org/10.1098/rsta.2020.0072
Yilmaz, L. (2004). On the need for contextualized introspective models to improve reuse and composability of defense simulations. The Journal of Defense Modeling and Simulation, 1 , 141-151. doi: https://doi.org/10.1177/875647930400100302
Yilmaz, L., & Oren, T. (2005). Discrete-event multimodels and their agent-supported activation and update. In (p. 63-72).
Zeigler, B. (1976). Theory of modelling and simulation. Wiley. Retrieved from https:// books.google.hr/books?id=M-ZQAAAAMAAJ
Zeigler, B. P., Muzy, A., & Kofman, E. (2018). Theory of modeling and simulation: discrete event & iterative system computational foundations. Academic press.
Requirements for the Multi-Model Infrastructure (MMI)¶
28 November 2023
At the start of the MMviB project, two requirement workshops were organised with the consortium. The goals were threefold: first, to generate ideas for desired features of the MMI as a long-term vision; second, to reach a consensus on the objective of this project within that vision; and third, to promote trust and enhance understanding among partners through a collaborative and co-created effort.
To gain a solid understanding of the overarching issues, namely the WHY (needs) and WHAT (features) of MMI, the following key questions guided our discussion in the workshops:
What kind of use cases do we envisage?
What cannot be done now by individual models, and what is MMI’s added value?
What kinds of decisions are to be made with the multi-models?
What features or services are required in the MMI?
What will the MMI need to connect multi-models?
What kinds of models are to be coupled? What is their operational principle (e.g., ABM, optimisation)?
In what manner should the models interoperate?
What information do the models exchange? How often?
We used a five-step brainstorming process (i.e., idea generation, pruning, grouping, defining features, and prioritisation) to elicit, organise and summarise the needs and features. They are summarised and reviewed in five categories and explained in this MMI requirement document.
1. Infrastructure deployment¶
This category addresses how to avoid or reduce vendor and other types of lock-in.
| | Features | Priority | Comments related to Needs |
|---|---|---|---|
| 1.1 | The infrastructure should be deployable on a single computer or within an organisation. | Critical | |
| 1.2 | The infrastructure shall not depend on a single party. | Critical | |
2. Model description and alignment¶
This category addresses prerequisites for multi-model connection: what the models need to adhere to and what the MMI needs to provide to allow model participation.
| | Features | Priority | Comments related to Needs |
|---|---|---|---|
| 2.1 | Provide a generic format for model description. | Critical | The model description shall include, e.g., objectives, data requirements, and time scales, to help general understanding of a model. It shall be human readable, and ideally also machine readable. This feature is needed to support model selection. |
| 2.2 | Provide a common data model/format for input/output data. | Critical | Consistent synthetic representation of data exchanged in multi-models. |
| 2.3 | Allow model connection across multiple geographic, time and other scales. | Critical | Alignment of multiple levels of model granularity. Examples of applications: evaluation of macro-level decisions on a micro-level; coupling high-level energy scenario models to detailed dynamic simulation models; coupling economic market models to technical models. |
| 2.4 | Provide a common energy system database/dataset for model input and configuration. | Important | Data sharing, alignment and consistency. |
| 2.5 | Explicit description of model assumptions. | Important | Understanding and potential alignment of model assumptions. |
| 2.6 | Queryable model assumptions. | Nice to have | Supports transparency and understandability, among others. |
| 2.7 | Allow inclusion of external ontologies in addition to the standard model description provided by the MMI. | Nice to have | Alignment of conceptual representations. |
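The common data model/format called for in requirement 2.2 can be illustrated with a minimal sketch. The record fields and the `validate` helper below are invented for illustration; they are not the MMI's actual schema.

```python
# Hypothetical sketch of a common exchange record for multi-model data.
# All field names are illustrative assumptions, not the MMI's actual format.
from dataclasses import dataclass, asdict

@dataclass
class ExchangeRecord:
    carrier: str         # e.g. "electricity", "hydrogen"
    value: float         # quantity exchanged
    unit: str            # e.g. "MWh"
    timestamp: str       # ISO 8601 time of validity
    producer_model: str  # model that produced the value

def validate(record: ExchangeRecord) -> bool:
    """Basic consistency checks before handing data to another model."""
    return record.value >= 0 and record.unit != "" and record.carrier != ""

rec = ExchangeRecord("electricity", 120.5, "MWh", "2030-01-01T00:00:00", "ESSIM")
assert validate(rec)
print(asdict(rec)["carrier"])  # -> electricity
```

A shared record of this kind is what makes the "consistent synthetic representation of data exchanged in multi-models" checkable at each hand-over.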
3. Model connection and multi-model set-up¶
This category addresses how to connect individual models and set up a multi-model.
| | Features | Priority | Comments related to Needs |
|---|---|---|---|
| 3.1 | Provide interfaces for model connection to allow multi-model creation and connection. | Critical | Reusable interfaces between models that allow for easy multi-model creation in a future API and/or UI; to easily add and remove models. |
| 3.2 | The interface shall support different types of model interaction. | Critical | Support multiple "interaction schemes" (for different types of models, e.g., agent-based models, Excel models, optimization models) and minimize required adaptations to the individual models. |
| 3.3 | Provide a method to configure the models that use the infrastructure. | Critical | Static or dynamic configuration of models. |
| 3.4 | Provide a method to communicate uncertainties and their sources (model inputs and outputs). | Critical | Communicate the uncertainty of model results. |
| 3.5 | Provide a model repository. | Important | For model search and selection capabilities. |
| 3.6 | Secure and authorized connection and communication when needed. | Important | Manage access and communication rights, possibly also for paid use. |
| 3.7 | Identification or flagging of potential multi-model interaction problems. | Important | Support model selection capabilities and model interoperation. |
| 3.8 | Model repository/catalogue with "app store". | Nice to have | |
| 3.9 | Model discovery and selection based on requirements. | Nice to have | Find the right model(s) that fit the purpose. |
| 3.10 | Dashboard/GUI for multi-model selection, connection and configuration. | Nice to have | Model selection capabilities by humans. |
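The reusable interfaces of 3.1 and 3.2 could look roughly like the sketch below. The `ModelAdapter` class and its method names are assumptions for illustration, not the MMI's actual API; the point is that each model, whatever its paradigm, is wrapped behind the same small surface so models can be added or removed easily.

```python
# Illustrative sketch of a reusable model-connection interface.
# Class and method names are assumptions, not the MMI's actual API.
from abc import ABC, abstractmethod

class ModelAdapter(ABC):
    """Uniform wrapper so heterogeneous models (spreadsheet, agent-based,
    optimisation) can be plugged into a multi-model interchangeably."""

    @abstractmethod
    def configure(self, config: dict) -> None: ...

    @abstractmethod
    def step(self, inputs: dict) -> dict: ...

class DoublingModel(ModelAdapter):
    """Trivial stand-in model: scales every numeric input by a factor."""

    def configure(self, config: dict) -> None:
        self.factor = config.get("factor", 2)

    def step(self, inputs: dict) -> dict:
        return {k: v * self.factor for k, v in inputs.items()}

m = DoublingModel()
m.configure({})
print(m.step({"demand": 3.0}))  # {'demand': 6.0}
```

An orchestrator would then only ever call `configure` and `step`, never a model's internal functions, which is what keeps adaptations to the individual models minimal.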
4. Model interoperation¶
This category addresses what is needed for model interoperation (i.e. interaction) after a multi-model is set up
| | Features | Priority | Comments related to Needs |
|---|---|---|---|
| 4.1 | Allow for human-in-the-loop control of model interaction. | Critical | |
| 4.2 | Allow for fully automated model interaction. | Critical | |
| 4.3 | Standardized communication protocol. | Critical | Informing, e.g., assumptions of one model with outputs from another model. |
| 4.4 | Provide an orchestration mechanism that allows for control of models. | Critical | This includes, e.g., start, stop, pause, continue, reset, error reporting and handling, and keep-alive pings. |
| 4.5 | The orchestration mechanism shall be decentralized. | Critical | |
| 4.6 | Provide logging and tracing. | Critical | |
| 4.7 | Provide debugging capabilities. | Critical | |
| 4.8 | Provide a backward-compatible communication protocol. | Nice to have | |
| 4.9 | Support dynamic real-time model interaction. | Nice to have | Fit for real-time applications. |
| 4.10 | Support hardware-in-the-loop. | Nice to have | Fit for digital twin applications. |
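The orchestration controls named in 4.4 (start, stop, pause, continue, keep-alive pings) can be sketched as a small state machine. The state names and transitions below are illustrative assumptions, not the MMI's actual protocol.

```python
# Sketch of the per-model lifecycle an orchestrator might control.
# State names and transitions are illustrative assumptions.
class ModelRunner:
    def __init__(self):
        self.state = "idle"
        self.pings = 0

    def start(self):
        if self.state in ("idle", "stopped"):
            self.state = "running"

    def pause(self):
        if self.state == "running":
            self.state = "paused"

    def resume(self):
        if self.state == "paused":
            self.state = "running"

    def stop(self):
        self.state = "stopped"

    def keep_alive(self) -> bool:
        # The orchestrator pings a model to detect crashes or hangs.
        self.pings += 1
        return self.state != "stopped"

r = ModelRunner()
r.start(); r.pause(); r.resume()
print(r.state)  # running
```

In a decentralized set-up (requirement 4.5), each participating model would host its own runner and the orchestrator would only exchange these lifecycle messages with it.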
5. Model experimentation and output¶
This category addresses what is needed to set up, run and document experiments with a multi-model and to manage its outputs.
| | Features | Priority | Comments related to Needs |
|---|---|---|---|
| 5.1 | Provide experiment management. | Critical | For documenting model set-up, versions, scenarios, parameters, runs, etc. |
| 5.2 | Provide multi-model output result management. | Critical | Link results to experimental set-ups; who saved the result and where. |
| 5.3 | GUI for MM output analysis. | Nice to have | Output analysis with respect to the MM experimental set-up. |
| 5.4 | Provide a set of experiment scenarios for a given energy system configuration. | Nice to have | To assist experimental set-up; case study repository. |
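The bookkeeping behind 5.1 and 5.2 amounts to linking every run's output back to the set-up that produced it. A minimal sketch, with invented field names (not the MMI's actual record format):

```python
# Sketch of experiment bookkeeping: tie each multi-model run to its
# scenario, model versions and parameters. Field names are illustrative.
from dataclasses import dataclass, field

@dataclass
class Experiment:
    name: str
    scenario: str
    model_versions: dict               # model name -> version used
    parameters: dict = field(default_factory=dict)
    results: list = field(default_factory=list)

    def record(self, run_id: str, output: dict) -> None:
        """Attach a run's output so results stay traceable to the set-up."""
        self.results.append({"run": run_id, "output": output})

exp = Experiment("test-study", "national-2030",
                 {"CTM": "2022-04-05", "ESSIM": "dev"})
exp.record("run-001", {"co2_kt": 95.0})
print(len(exp.results))  # 1
```

Because every result carries its run identifier and the experiment carries the model versions, a reviewer can reproduce or audit any multi-model outcome, which is the repeatability goal stated in the introduction.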
Model description template¶
Introduction¶
Somadutta Sahoo, Last update: 11 November 2023
The model description template was created to compare models across different categories. For this project, the focus is energy system models. The intention was to support model filtering and selection when setting up analyses of diverse energy system configurations. The template provides a standard format for model description, including capabilities, important assumptions, data requirements, definitions, and scales.
The template is organised into columns: first level, second level, questions to ask, answers, and additional comments. The first-level categories are general model information, model content, and references.
Within general information, the second-level subcategories are basic information, model version, type or token model, intended purpose, level of decision that the model aims to support, questions that the model can address, strengths, limitations, past usage of the model, model documentation, model accessibility, model type, model paradigms/formalisms, model implementation environment, and model license.
The model content targets the model structure. The categories are energy system integration, scope, scale, granularity/resolution, model assumptions, model inputs, parameters and output, data sources, verification, validation, test, and uncertainty description. The last category is the reference section.
The questions-to-ask and explanation-of-the-question sections were formulated to pose questions to model owners and to explain them further where needed. The examples-of-answers section was created to help model owners answer some of the questions. Together, these sections provided clarity to model owners when answering. Some question explanations are based on the Terminology document, which ensured that model owners interpreted the same terms consistently. The answers section records the model owners' responses to the questions asked, and some answers can be further explained in the additional comments section. The following paragraphs describe the model template in detail.
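The two-level structure described above can be sketched as a nested mapping. The key names below simply mirror the text; this is not the project's actual template file format.

```python
# Illustrative sketch of the template's first- and second-level categories.
# Key names mirror the surrounding text; not the project's actual format.
template = {
    "general_information": [
        "basic_information", "model_version", "type_or_token_model",
        "intended_purpose", "decision_level", "questions_to_address",
        "strengths", "limitations", "past_usage", "model_documentation",
        "model_accessibility", "model_type", "paradigms_formalisms",
        "implementation_environment", "model_license",
    ],
    "model_content": [
        "energy_system_integration", "scope", "scale",
        "granularity_resolution", "model_assumptions",
        "inputs_parameters_outputs", "data_sources",
        "verification_validation_test", "uncertainty_description",
    ],
    "references": [],
}

# Each second-level entry then holds the question, its explanation,
# example answers, the owner's answer, and additional comments.
print(len(template["general_information"]))  # 15
```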
General model information¶
To start with general model information, basic questions were asked about the model name, owner, and developer. Similarly, questions were asked about the model’s latest versions, versions used for this project, and point of contact for posing any further questions.
| Second level | Questions to ask |
|---|---|
| Basic information | Model name |
| | Model owner |
| | Model Developer |
| Model version | Latest model version/date |
| | The model version used in this project |
| Point of contact for questions | Organization |
| | Individual |
Within general model information, questions were also asked regarding whether the model is a token or a type model, as well as the model's intended purpose and the level of decision support it aims to provide. Some of the questions were further explained, and suitable references were provided wherever needed.
| Second level | Questions to ask |
|---|---|
| Type model or token model | Is the model a token model? If so, give illustrations. Explanation of the question: these models capture elements or individual properties of the system as opposed to universal properties (ref: https://link.springer.com/article/10.1007/s10270-006-0017-9). |
| | Is the model a type model? If so, give illustrations. Explanation of the question: these models capture the universal properties of the system rather than emphasizing a particular property (ref: https://link.springer.com/article/10.1007/s10270-006-0017-9). |
| Intended purpose | Briefly describe the intended purpose of the model. |
| The level of decision that the model aims to support | Strategic - long-term planning; what do we want? |
| | Tactical - medium-term; how do we approach this? |
| | Operational - short-term; regular/day-to-day operations? |
Continuing with general model information, questions were asked about the typical questions the model can answer and about the model's strengths and limitations. Similarly, regarding past usage, questions were asked about cases/examples where the model was or was not used for its intended purpose.
| Second level | Questions to ask |
|---|---|
| Questions to address | What are typical types of questions that can be asked to the model? Provide examples of such questions. Explanation of the question: the questions may relate to categories such as technologies, techno-economics, society, environment (also emission-related), and policies. |
| Strengths | What are the strengths of this model? What is unique? |
| Limitations | What are the important limitations of the model? |
| Past usage of the model | Cases/examples where the model was used for its intended purpose. |
| | Cases/examples where the model was not used for its intended purpose; are there any examples of model abuse or misuse? |
Second-level categories of general model information included model documentation, accessibility, and types. Questions within the model documentation included whether the model documentation is complete, whether the documentation is accessible, and whether the documentation is in English. Model accessibility-related questions included asking the modeler whether the model has a Graphical User Interface (GUI) and the possibility to access the same. A similar question was whether the model has an application programming interface (API) and how one can access it. To understand the model type, questions were asked whether the model is static or dynamic, continuous or discrete, stochastic or deterministic, and optimization-based. To understand the type of optimization, a question was asked regarding what algorithm the model uses.
| Second level | Questions to ask |
|---|---|
| Model documentation | Is the model documentation complete? |
| | Is the documentation accessible? If so, how? |
| | Is the documentation in English? |
| Model accessibility | Does the model have a GUI? If so, how to access it? |
| | Does the model have an API? If so, how to access it? |
| Model type | Is the model static or dynamic? |
| | Is the model continuous or discrete? |
| | Is the model stochastic or deterministic? |
| | Is it an optimization model? If so, what type of algorithms does it use? Examples of answers: linear programming (LP), mixed integer (linear) programming (MIP), non-linear programming (NLP), or a combination of some of these. |
Continuing with the general model information, the second-level categories that followed were modeling paradigms/formalisms, model implementation environment, and model license. A question was asked regarding what modeling paradigm or formalism the model uses; examples of answers included discrete event, system dynamics, agent-based, etc. Questions related to the model implementation environment included whether the model was implemented in a general-purpose programming language, such as Python or JAVA; what modeling package the model used, for example, off-the-shelf packages such as AIMMS or MATLAB; and whether the model is implemented in a spreadsheet. The model licensing question was whether any license is required to run the model.
| Second level | Questions to ask |
|---|---|
| Modeling paradigms/formalisms | What modeling paradigm or formalism does the model use? Examples of answers: discrete event, system dynamics, agent-based, regression, network model, math equations, etc. |
| Model implementation environment | Is it implemented in a general-purpose programming language? Examples of answers: Python, JAVA, C++, etc. |
| | Does it use a modeling/simulation environment/package? Examples of answers: off-the-shelf packages such as AIMMS, GAMS, MATLAB; or modeling packages such as Mesa, PyDEVS. |
| | Is it implemented in a spreadsheet? Examples of answers: Excel, Google Sheets, etc. |
| Model license | Is any license required for running the model? |
Model content¶
The next set of questions was related to the model content (first level). The first second-level categories within this are energy system integration and model scope. The integration question was whether the model represents an integrated energy system. Scope-related questions asked what important elements and concepts are included in the model and which are not. The explanation given was that the scope could include energy carriers, infrastructure, supply options, demanding sectors, etc.; examples of energy carriers include heat, electricity, and hydrogen. Since flexibility is gaining attention within the context of energy system modeling, an explicit scope-related question asked what flexibility options are included in the model.
| Second level | Questions to ask |
|---|---|
| Energy System Integration | Does the model represent an integrated energy system? |
| Scope | What important elements and concepts are included in the model? Explanation of the question: this can include energy carriers, infrastructure, supply options, demanding sectors, etc. Examples of answers: heat, electricity, hydrogen, etc. - for energy carriers. |
| | What elements and concepts are currently not included in the model but, in your opinion, should be included? |
| | Specific attention to flexibility options: what type of flexibility options are included in the model? Examples of answers: seasonal storage, demand response, etc. |
Continuing with the model content, the next second-level categories were scale and granularity/resolution. Within the scale category, questions were asked about the model's spatial (or geospatial) and temporal (or time) scales. Answers could include neighborhood, city, province, etc., for the spatial scale and a year or multiple years for the temporal scale. Granularity also included spatial and temporal categorization with similar possible answers.
| Second level | Questions to ask |
|---|---|
| Scale | What spatial (or geospatial) scale does the model have? Examples of answers: neighborhood, district, town/city, province, country, continent, global, etc. |
| | What temporal (or time) scale does the model have? Examples of answers: annual, multiple years, etc. |
| Granularity/resolution | Spatial. Explanation of the question: this can be further classified into structural or information granularity. Structural granularity represents the level of disaggregation between model elements and the relationships between them. Information granularity represents the information content of the model elements and output. Examples of answers: individual buildings, neighborhood, district, town/city, province, country. |
| | Temporal. Examples of answers: seconds, minutes, hours, annual, time slices within a year, time slices over a time period, etc. |
Within the model-content context, the next set of second-level categories are model assumptions; model inputs, parameters, and outputs; and data sources of the model. Model assumption questions are what important assumptions the model has and what assumptions are likely to be contested by others. Questions related to model input, parameters, and output are: what is/are the model format for input and output, and what important inputs, parameters, and outputs does the model include? Data sources-related questions included the model’s data sources and whether any data can be shared.
| Second level | Questions to ask |
|---|---|
| Model assumptions | What important assumptions does the model have? |
| | Which ones are likely to be contested by others? Why? |
| Model input, parameters, and output | What is/are the model input format(s)? |
| | What is/are the model output format(s)? |
| | What are the important model inputs? |
| | What important parameters does the model have? |
| | What are the important model outputs? |
| Data sources | What are the data sources used by the model? |
| | Any data that can be shared? If so, what and how to access them? |
The next second-level categories within the model content are verification, validation, and test, and uncertainty descriptions. Within the first category, questions included what the test coverage of the model is, what is verified, validated, and tested within the model, and what methods are deployed for model verification, validation, and testing. Examples of answers related to test coverage are direct structure tests, parameter confirmation, structural boundary adequacy, etc. Examples of testing and validation methods include Monte Carlo simulations. Questions related to uncertainty descriptions were kept simple: can modelers comment on the uncertainties associated with model parameters, inputs, and structure? The template ends with a reference section covering the model's description and applications.
| Second level | Questions to ask |
|---|---|
| Verification, validation, and test | Can you comment on the test coverage of the model? Explanation of the question: the test could be on structure, behavior, policy implications, etc. Examples of answers: direct structure tests, parameter confirmation, extreme conditions, structural boundary adequacy, unit checks, sensitivity tests, reproduction/prediction tests, etc. |
| | What is being verified, validated, or tested in the model, if any? Explanation of the question: what types of methods are employed? They could be qualitative, quantitative, etc. Examples of answers: expert opinion, contemporary literature review, running the same model under different scenarios, etc. |
| | What methods are used for model verification, validation, and testing, if any? Explanation of the question: are there any inbuilt tools, such as Monte Carlo, or ways to perform sensitivity analyses on model inputs? |
| Uncertainty descriptions | Can you comment on the uncertainty in model parameters? |
| | Can you comment on the uncertainty in model input? |
| | Can you comment on the uncertainty in the model structure? |
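The Monte Carlo style of input-uncertainty check mentioned above can be sketched in a few lines: sample an uncertain input, run the model for each sample, and report the spread of the output. The toy model and price range below are invented purely for illustration.

```python
# Sketch of a Monte Carlo sensitivity check on one uncertain model input.
# The toy model and the sampled range are invented for illustration.
import random

random.seed(42)  # fixed seed so the sketch is reproducible

def toy_model(gas_price: float) -> float:
    """Stand-in model: system cost rises linearly with the gas price."""
    return 100.0 + 2.5 * gas_price

# Sample the uncertain input (gas price between 20 and 40) many times.
samples = [toy_model(random.uniform(20.0, 40.0)) for _ in range(1000)]

mean = sum(samples) / len(samples)
spread = max(samples) - min(samples)
print(f"mean cost ~ {mean:.1f}, spread ~ {spread:.1f}")
```

The spread of the output, relative to its mean, is one concrete way a model owner could answer the uncertainty questions in the table above.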
In the following section, each of the models used in the project is described in detail.
CTM¶
CTM is a bottom-up model of the feedstock and energy balance of the Dutch energy-intensive industry on a per-site basis. Refer to https://multimodelling.readthedocs.io/en/latest/energy_models/CTM/index.html for more overall information regarding the model.
General model information¶
General model information questions were asked regarding basic information, model versions, and point of contact for questions. The CTM model is developed and maintained by Kalavasta.
| Questions to ask | Answers/Explanation |
|---|---|
| Model name | CTM (Carbon Transition Model) |
| Model owner | Kalavasta |
| Model Developer | Kalavasta |
| The latest model version/date | |
| The model version used in this project | 5th April 2022 |
| Organization | Kalavasta |
| Individual | Karel Zwetsloot |
A second set of questions was asked regarding whether the model is type or token, the intended purpose of the model, and the level of decision that the model aims to support. We understand that the model can be categorized as a token model, as it analyzes the feedstock and energy balance of the Dutch energy-intensive industries. The model focuses on large industrial sites and performs bottom-up calculations, considering location-specific demand and supply values under different future scenario conditions. At the strategic level, the model supports discussions between industries, grid operators, and the government regarding what type of energy infrastructure would be needed for climate neutrality.
| Questions to ask | Answers/Explanation |
|---|---|
| Is the model a token model? If so, give illustration(s). | Yes. The model analyzes the feedstock and energy balance of the energy-intensive Dutch industries per site. The starting point is the major energy and mass consumers in the Netherlands. |
| Is the model a type model? If so, give illustration(s). | |
| Briefly describe the intended purpose of the model | 1. CTM takes large industrial sites and does the calculation bottom-up. 2. Material and energy balance for Dutch energy-intensive industry on a per-site basis. 3. Location-specific demand and supply values under different change scenarios. 4. It could be used for forecasting infrastructure needs, but this is not the primary goal. It can also be used to explore other options given potential transition goals. |
| Strategic - long-term planning; what do we want? | The long-term strategy of the Dutch energy-intensive industry is included by considering the 2025-2050 time frame. For strategic discussion between industry and industry, industry and grid operators, industry and government, and industry and NGOs. For example, from the perspective of TSOs/DSOs, what kind of energy infrastructure would be needed depending on plans to go carbon neutral? The model provides a localized, location-specific, scenario-building tool for large energy flows. |
| Tactical - medium-term; how do we approach this? | |
| Operational - short-term; regular/day-to-day operations? | |
One of the model’s strengths is the ability to perform energy and mass balance analyses of specific industrial sites. The model has been used to create the II3050 infrastructure outlook for the Dutch industry to provide demand profiles to industrial sites for specific future years, such as 2025 and 2030, based on pre-existing scenarios.
| Questions to ask | Answers/Explanation |
|---|---|
| What are typical types of questions that can be asked to the model? Provide examples of such questions. | |
| What are the strengths of this model? What is unique? | Specific analysis of mass and energy balance of industrial sites. |
| What are the important limitations of the model? | |
| Cases/examples where the model was used for its intended purpose | 1. Used for the II3050 infrastructure outlook for the Dutch industry to provide demand profiles in industrial sites in 2025, 2030, and 2050 based on four pre-existing scenarios. 2. First application for II3050 edition 2 together with the 14 largest emitters in the industry, the Ministry of Economic Affairs and Climate, the grid operators, and VNO-NCW. Additional comments/remarks: it could be used for forecasting infrastructure needs, but this is not the primary goal. It can also be used to explore other options given potential transition goals. |
| Cases/examples where the model was not used for its intended purpose; are there any examples of model abuse or misuse? | |
The next set of questions is related to model documentation, accessibility, and type. The model documentation is not complete. The graphical user interface (GUI) is available online. The model is static, continuous, and deterministic. The model is not an optimization model.
| Questions to ask | Answers/Explanation |
|---|---|
| Is the model documentation complete? | No |
| Is the documentation accessible? If so, how? | Some documentation is available online. |
| Is the documentation in English? | Not available |
| Does the model have a GUI? If so, how to access it? | Yes, online |
| Does the model have an Application Programming Interface (API)? If so, how to access it? | |
| Is the model static or dynamic? | Static |
| Is the model continuous or discrete? | Continuous |
| Is the model stochastic or deterministic? | Deterministic |
| Is it an optimization model? If so, what type of algorithms does it use? | No |
The next set of questions concerns the modeling paradigm, implementation environment, and license. The model applies multiple formalisms, such as one-shot calculations, mass and energy flow-based calculations, mathematical expressions and equations, and graph- or network-based calculations. The model uses a JAVA-based web application for online spreadsheets. The model is implemented in Excel spreadsheets; therefore, no specific license is required to run the model.
| Questions to ask | Answers/Explanation |
|---|---|
| What modeling paradigm or formalism does the model use? | 1. One-shot calculation and mass and energy flow-based calculations 2. Mathematical expressions and equations 3. Graph/network-based calculations |
| Is it implemented in a General purpose programming language? | 1. Python orchestrator and API application (connects the frontend and the web access) 2. JAVA-based web application for online spreadsheets (Keikai - API) - AWS input/output for the spreadsheet model 3. The user interface in HTML and JS (front end) Additional comments/remarks: operating system: Windows for now, AWS container, Mac soon available |
| Does it use a modeling/simulation environment/package? | |
| Is it implemented in a spreadsheet? | Excel model |
| Is any license required to run the model? | No |
Model content¶
A preliminary set of model content questions were related to energy system integration and scope. The model does not represent an integrated energy system. The model fully covers industries; however, heat integration is still in development.
Essential elements and concepts included in the model are most energy carriers used by Dutch industries, such as electricity, natural gas, hydrogen, heat, and others (including different types of methane, biowaste/non-biowaste, biomass, waste, carbon monoxide, etc.). No specific attention is paid to including flexibility options.
| Questions to ask | Answers/Explanation |
|---|---|
| Does the model represent an integrated energy system? | No. Additional comments/remarks: fully integrated for industries; heat integration is still in development. |
| What important elements and concepts are included in the model? | 1. Energy carriers: electricity, natural gas, hydrogen, heat, and others (including different types of methane, biowaste/non-biowaste, biomass, waste, carbon monoxide, etc.) 2. A carbon pricing system is in place. 3. The model considers spatial data regarding grid connections (H2, electricity, CO2, and gas). Distance is considered for heat transportation. |
| What elements and concepts are currently not included in the model but, in your opinion, should be included? | |
| Specific attention to flexibility options: what type of flexibility options are included in the model? | No |
The next set of content-related questions included scale and resolution. The spatial scale of the model is the national level, and the temporal scale is one future target year, approximately 30 years ahead. The spatial resolution is industry site or cluster level.
| Questions to ask | Answers/Explanation |
|---|---|
| What spatial (or geospatial) scale does the model have? | National |
| What temporal (or time) scale does the model have? | One target year, approx. 30 years ahead, though the year is arbitrary |
| Spatial resolution | Industry site level or industry cluster level |
| Temporal resolution | |
The next set of questions is related to model assumptions, model inputs, parameters, and outputs, and data sources related to the model. One of the critical assumptions is that material and energy balance need to add up for every industrial site. Industries emitting <100 kT CO2 are not included in the analysis. Electricity production is not included, but facilities having their own power production are included in the analysis. One may contest the level of detail in describing a site. Some mass/energy streams might be missed. Some important model inputs are the costs of energy carriers, investments in technologies, annualized investment costs, costs of carbon emissions (the carbon pricing system is in place), and the infrastructure cost at the national level. Similarly, some important model outputs are emissions, demand, cost, etc., at the cluster, sector, and national levels.
| Questions to ask | Answers/Explanation |
|---|---|
| What critical assumptions does the model have? | 1. For every industrial site, a material and energy balance needs to add up. Sites are networked together, so a network of mass-balanced elements. 2. There is a cut-off of the size of facilities included, namely <100 kT CO2 is considered too small. Electricity production is usually not included; some facilities have power generation of their own, which is generally included. Project consortium and expert estimate driven. To illustrate, some sites smaller than the cut-off are included as they are project partners. |
| Which ones are likely to be contested by others? Why? | Base year assumptions for site activity are approximations made using the best available data. 10-15 main activities are considered. One may contest specific data on the operation of the elements, not their presence. The level of detail in describing the site can be contested. In addition, some mass/energy streams are missed. |
| What is/are the model input format(s)? | |
| What is/are the model output format(s)? | |
| What are the important model inputs? | 1. Site setting, national setting, economic interaction within a pool, and technological inputs for industries 2. Costs of carriers and investments in technologies, annualized investment costs, costs of carbon emissions (a carbon pricing system is in place), infrastructure cost (national) |
| What important parameters does the model have? | Technological, energetic, and financial parameters related to industries |
| What are the important model outputs? | Emissions, demand, cost, etc., at cluster, sector, and national levels |
| What are the data sources used by the model? | |
| Any data that can be shared? If so, what and how to access them? | |
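CTM's central assumption, that material and energy balances must add up for every industrial site, can be sketched as a simple closure check. The stream names and numbers below are invented for illustration; they are not CTM's actual data.

```python
# Sketch of a per-site mass/energy balance closure check (CTM's key
# assumption). Stream names and values are invented for illustration.
def balance_closes(inflows: dict, outflows: dict, tol: float = 1e-6) -> bool:
    """True if total inflow equals total outflow within a tolerance."""
    return abs(sum(inflows.values()) - sum(outflows.values())) <= tol

site = {
    "in":  {"natural_gas_PJ": 10.0, "electricity_PJ": 2.0},
    "out": {"product_energy_PJ": 7.5, "losses_PJ": 4.5},
}
print(balance_closes(site["in"], site["out"]))  # True
```

Running such a check on every site is also the kind of "mass balance checking in Excel" mentioned as a possible verification step in the next table.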
Continuing with the model content, there were questions regarding verification, validation, and test, and uncertainty descriptions. There is limited testing on verification, validation, and testing within the model. Mass balance checking could be implemented in Excel. One of the validation methods is feedback from industrial partners, i.e., the qualitative method. Similarly, base year data is matched with publicly available data.
| Questions to ask | Answers/Explanation |
|---|---|
| Can you comment on the test coverage of the model? | |
| What is being verified, validated, or tested in the model? | Limited testing. Additional comments/remarks: mass balance checking could be implemented in Excel. |
| What methods are used for the model verification, validation, and testing, if any? | 1. Qualitative method: feedback from industrial partners 2. Quantitative method: face validation, i.e., checking with industrial sites and matching base year data to public data |
| Can you comment on the uncertainty in model parameters? | |
| Can you comment on the uncertainty in model input? | |
| Can you comment on the uncertainty in the model structure? | |
ESSIM¶
ESSIM is an energy system model simulating energy balances over time and across scales. Refer to https://multimodelling.readthedocs.io/en/latest/energy_models/ESSIM/index.html for more overall information regarding the model.
General model information¶
General model information questions were asked regarding basic information, model versions, and point of contact for questions. The ESSIM model is developed and maintained by TNO.
| Questions to ask | Answers/Explanation |
|---|---|
| Model name | ESSIM |
| Model owner | TNO |
| Model Developer | TNO |
| The latest model version/date | |
| The model version used in this project | |
| Organization | TNO |
| Individual | |
A second set of questions was asked regarding whether the model is a type or token model, the intended purpose of the model, and the level of decision that the model aims to support. We understand that the model can be categorized as a token model, as it represents a small part of the energy system. The model’s intended purpose is to simulate network balancing and its effects in an interconnected hybrid system. Long-term planning aspects include future scenario investigations or studies. In the medium term, the model calculates the optimal schedule of flexible producers and the effects of this schedule on emissions, costs, load on the network, etc.
| Questions to ask | Answers/Explanation |
|---|---|
| Is the model a token model? If so, give illustration(s). | A small part of the energy system is described. |
| Is the model a type model? If so, give illustration(s). | |
| Briefly describe the intended purpose of the model | 1. Energy balances over time and across scales. 2. Simulates network balancing and the effects thereof, in an interconnected hybrid system. |
| Strategic - long-term planning; what do we want? | Future scenario investigation/studies |
| Tactical - medium-term; how do we approach this? | The model calculates the optimal schedule of flexible producers and the effects of this schedule on emissions, costs, load on the network, etc. |
| Operational - short-term; regular/day-to-day operations? | |
Typical questions for the model concern the dimensioning and balance of a hybrid system over a whole year. Similarly, questions can address shortages or excesses of a particular energy carrier, interactions between carriers, the effects of adding storage, or emissions from different producers.
The model can be used to model the behavior of an aggregator (the role in the energy system of aggregating flex, dealing on markets, etc.). One important limitation of the model is its lack of representation of full-scale interactions within an energy system. The model has been used as a model orchestrator between multiple lower-level infrastructure models.
| Questions to ask | Answers/Explanation |
|---|---|
| What are typical types of questions that can be asked to the model? Provide examples of such questions | 1. Is my energy system well-dimensioned and in balance during the whole year? 2. During what periods of the year do I have excess or shortage of energy, and for what energy carrier? 3. How do the different energy carriers interact with each other? 4. What is the load on the transport infrastructure over the year, how often does overloading happen, and to what extent? 5. What is the total CO2 emission for the simulated system, and how is CO2 emission distributed over the different producers? 6. What are the effects of adding storage? |
| What are the strengths of this model? What is unique? | It can be used to model the behavior of an aggregator (the role in the energy system, aggregating flex, dealing on markets). |
| What are the important limitations of the model? | It does not represent full-scale energy system interactions. |
| Cases/examples where the model was used for its intended purpose | It has been used as a model orchestrator between multiple lower-level infrastructure models. Additional comments/remarks: 1. ESSIM acts as the connection between the energy carrier models (basically responsible for all ‘conversion assets’). 2. It orchestrates and simulates the behavior of the conversion. |
| Cases/examples where the model was not used for its intended purpose; are there any examples of model abuse or misuse? | |
The next set of questions is related to model documentation, accessibility, and type. The model documentation is not complete, but it is accessible online and written in English. The Application Programming Interfaces (APIs) are documented online. The model is static, deterministic, and discrete.
| Questions to ask | Answers/Explanation |
|---|---|
| Is the model documentation complete? | No |
| Is the documentation accessible? If so, how? | Yes, online |
| Is the documentation in English? | Yes |
| Does the model have a GUI? If so, how to access it? | |
| Does the model have an Application Programming Interface (API)? If so, how to access it? | Yes, APIs are also online. https://essim-documentation.readthedocs.io/en/latest/essim_api/index.html |
| Is the model static or dynamic? | Static. Additional comments/remarks: ESSIM simulates a certain period of time with a specific resolution (so a simulation of a year on an hourly basis). Most of the time, the system description doesn’t change (the hourly values in the profiles are the things that change). |
| Is the model continuous or discrete? | Discrete |
| Is the model stochastic or deterministic? | Deterministic |
| Is it an optimization model? If so, what type of algorithms does it use? | No |
The next set of questions concerns the modeling paradigm, implementation environment, and license. The model applies multiple formalisms, such as graph/network-based representations and non-linear functions. Multiple general-purpose programming languages, such as Python and JAVA, are used. No license is required to run the model; however, permission is required from the model owner.
| Questions to ask | Answers/Explanation |
|---|---|
| What modeling paradigm or formalism does the model use? | Graph/network-based; code is object-oriented and heavily data-driven; asset dynamics are non-linear functions; etc. |
| Is it implemented in a general-purpose programming language? | 1. ESSIM is implemented in JAVA. Some extensions are written in Python (KPI modules) 2. The internal component uses the NATS message bus interface; other projects use MQTT, and others use RabbitMQ. |
| Does it use a modeling/simulation environment/package? | No |
| Is it implemented in a spreadsheet? | |
| Is any license required to run the model? | No license is required. Permission is required from the model owner, however. |
Model content¶
A preliminary set of model content questions was related to energy system integration and scope. The model represents an integrated energy system, though the user has to define and scope it. Essential elements and concepts included in the model are energy carriers, production, consumption, conversion, transport, and storage. The model focuses on flexibility in energy and time for different technology options.
| Questions to ask | Answers/Explanation |
|---|---|
| Does the model represent an integrated energy system? | Yes. The user has to define and scope it, though. |
| What important elements and concepts are included in the model? | Describes energy carriers, energy production, consumption, conversion, transport, and storage. |
| What elements and concepts are currently not included in the model, but in your opinion should be included? | |
| Specific attention to flexibility options: What type of flexibility options are included in the model? | It focuses on flexibility in energy and time for different technology options, for example, gas heaters (energy flexible, time inflexible), batteries (energy and time flexible), etc. |
The next set of content-related questions covered scale and resolution. The model has no specific spatial scale, and its temporal scale is annual. Spatial resolution is likewise not specific, although the input file can include spatial information. The temporal resolution is one hour.
| Questions to ask | Answers/Explanation |
|---|---|
| What spatial (or geospatial) scale does the model have? | Not specific |
| What temporal (or time) scale does the model have? | Annual |
| Spatial resolution | Not specific. Additional comments/remarks: ESSIM can be used to model the energy system of a single house or the world’s energy balance. The ESDL that goes into ESSIM contains geographical information 99% of the time, but ESSIM doesn’t do anything with this information. |
| Temporal resolution | Hourly |
The next set of questions is related to model assumptions, inputs, parameters, outputs, and the data sources used by the model. The model follows an internal algorithm to determine the order in which various commodity networks are solved. It applies a flexibility-based demand-supply matching algorithm that uses the costs of energy production to grade the desirability of producers. The model does not fully enforce energy or mass conservation, which might be contested by others. The input and output file format is the Energy System Description Language (ESDL). Important model inputs are household demand and supply, related technology options, energy network infrastructure, large-scale energy supply options, etc. Important model outputs are production/consumption time series at each node, total production, total costs, imports/exports, full-load hours, etc.
| Questions to ask | Answers/Explanation |
|---|---|
| What critical assumptions does the model have? | 1. The model follows an algorithm to determine the order of solving various commodity networks. 2. A flexibility-based demand-supply matching algorithm that uses costs of energy production as a means to grade the desirability of producers. 3. A tree-based transport network solver that calculates the load on various transport elements based on the demand-supply solution determined above. |
| Which ones are likely to be contested by others? Why? | 1. Infrastructure cycles/loops are “randomly” cut to make a directed tree. 2. Energy conservation is not fully enforced (conversion losses can be ignored or made explicit) |
| What is/are the model input format(s)? | ESDL |
| What is/are the model output format(s)? | ESDL |
| What are the important model inputs? | Topological city household demand and supply, related technology options, energy network infrastructure, large-scale energy supply options, etc. |
| What important parameters does the model have? | Parameters related to the inputs mentioned above. Additional comments/remarks: There are no internal parameters in the model. All necessary data is in the input data files. |
| What are the important model outputs? | 1. Mainly time-series (hourly profiles for consumption/production) at each node 2. CO2 output profiles (for each producer or each energy carrier) 3. KPI modules (metrics: energy neutrality, total (local) production/consumption, total import/export, full load hours, etc.) |
| What are the data sources used by the model? | |
| Any data that can be shared? If so, what and how to access them? | |
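The tree-based transport solver listed under the critical assumptions can be illustrated with a small sketch. This is not ESSIM code but a hypothetical minimal example: after loops are cut to form a directed tree, each node carries a net injection (production minus consumption), and the load on every edge follows from accumulating subtree injections toward the root.

```python
def edge_loads(children, injection, root):
    """Compute the load on each (parent, child) edge of a directed tree.

    children: dict mapping node -> list of child nodes
    injection: dict mapping node -> net injection (production - consumption)
    Returns (loads, slack): loads maps (parent, child) to the power flowing
    from parent to child (negative means the subtree exports toward the
    root); slack is the imbalance left at the root.
    """
    loads = {}

    def subtree_net(node):
        # Net injection of the subtree rooted at `node`.
        total = injection[node]
        for child in children.get(node, []):
            net = subtree_net(child)
            # A subtree with a deficit draws power over the edge;
            # a subtree with a surplus pushes power back toward the root.
            loads[(node, child)] = -net
            total += net
        return total

    slack = subtree_net(root)  # covered at the root (e.g., a grid connection)
    return loads, slack


# Toy radial network: the root feeds two nodes; one has local production.
children = {"root": ["a", "b"], "a": [], "b": []}
injection = {"root": 0.0, "a": -3.0, "b": 1.0}  # a consumes 3, b produces 1
loads, slack = edge_loads(children, injection, "root")
print(loads)   # {('root', 'a'): 3.0, ('root', 'b'): -1.0}
print(slack)   # -2.0 -> 2.0 must be imported at the root
```

Edge loads fall out of a single post-order traversal, which is why cutting loops to obtain a tree (the model's first contested assumption) makes the transport calculation so cheap.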
Continuing with the model content, there were questions regarding verification, validation, testing, and uncertainty descriptions. There is no specific test coverage of the model. Unit and data consistency checks are performed manually. The results and inputs are validated by experts. Checking for over-production and system failure is another way of validating and verifying the model.
| Questions to ask | Answers/Explanation |
|---|---|
| Can you comment on the test coverage of the model? | There is no specific comment on the test coverage of the model. |
| What is being verified, validated, or tested in the model? | 1. Unit and data consistency checks, including conversion units (manually) |
| What methods are used for the model verification, validation, and testing, if any? | |
| Can you comment on the uncertainty in model parameters? | |
| Can you comment on the uncertainty in model input? | |
| Can you comment on the uncertainty in the model structure? | |
ETM¶
ETM is a Dutch national energy system model focused on the energy transition. Refer to https://multimodelling.readthedocs.io/en/latest/energy_models/ETM/index.html for more information about the model.
General model information¶
General model information questions were asked regarding basic information, model versions, and point of contact for questions. The ETM model is developed and maintained by Quintel.
| Questions to ask | Answers/Explanation |
|---|---|
| Model name | ETM (Energy Transition Model) |
| Model owner | Quintel |
| Model Developer | Quintel |
| The latest model version/date | |
| The model version used in this project | Latest version |
| Organization | Quintel |
| Individual | Chael Kruip |
A second set of questions was asked regarding whether the model is a type or token model, the intended purpose of the model, and the level of decision that the model aims to support. We understand that the model can be categorized as both a type and a token model. The model’s intended purpose is to analyze the energy system of the Netherlands. Long-term national policy targets are emission reductions, efficiency, and renewable energy production. The medium-term focus is on flexibility analysis and annual demand requirements under different ‘if’ conditions.
| Questions to ask | Answers/Explanation |
|---|---|
| Is the model a token model? If so, give illustration(s). | Users/modelers can define specific parts of the energy system in detail |
| Is the model a type model? If so, give illustration(s). | It is an energy domain-specific model that can be configured to represent a specific target system (ETM is a framework: model structure predefined; can be initialized for different energy systems; can be applied to sub-systems) |
| Briefly describe the intended purpose of the model | Energy System Analysis of the Netherlands |
| Strategic - long-term planning; what do we want? | 1. Long-term national policy targets related to emissions reductions, efficiency, renewable energy production 2. Long-term targets of, for example, production for industries, including subsectors, sectoral demands, energy infrastructure capacity |
| Tactical - medium-term; how do we approach this? | 1. Flexibility analysis 2. Annual demand requirement under different ‘if’ conditions |
| Operational - short-term; regular/day-to-day operations? | |
Typical questions for the model include the flexibility needed in a future national energy system or the electrification volume needed to replace heat demand in the built environment. The model is handy for quickly exploring and quantifying potential future energy systems in detail. It is free to use, open-source, and applicable to national and regional contexts within the EU. One major limitation of the model is the lack of complex interactions between the different components of an integrated energy system. Governments at different levels within the Netherlands have used the model for regional and national energy transition analysis. The model has also been pushed to achieve a given political agenda, which is a case of the model not being used for its intended purpose.
| Questions to ask | Answers/Explanation |
|---|---|
| What are typical types of questions that can be asked to the model? Provide examples of such questions | 1. What flexibility is needed in a future national energy system? 2. What is the heat demand for electrifying the built environment? 3. How many petrol-fueled vehicles need to be replaced to reduce mobility emissions by x%? Etc. |
| What are the strengths of this model? What is unique? | 1. A handy calculation tool allowing for quick exploration and quantification of potential future energy systems in detail. 2. The model can perform analysis at different geographical levels with ease. 3. The model covers various aspects of the energy system, such as demand, supply, and emissions. 4. The model is free to use, open-source, and available for EU countries, municipalities, and many other regions. |
| What are the important limitations of the model? | 1. The model does not consider the complex interactions within the energy system 2. Social interaction and impacts missing |
| Cases/examples where the model was used for its intended purpose | 1. The governments at different levels within the Netherlands have used the model for regional and national energy transition analysis. 2. The model has been applied/used in collaboration with industries and universities to understand sectoral energy demands, energy supplies from different technology options, and energy balances. |
| Cases/examples where the model was not used for its intended purpose; are there any examples of model abuse or misuse? | The model has been pushed to achieve a given political agenda/goal (the assumptions/input is the user’s responsibility) and interpretation of the result. |
The next set of questions is related to model documentation, accessibility, and type. The model documentation is not complete but adequate. The documentation is available online and is in English. The graphical user interface (GUI) and Application Programming Interface (API) are online. The model is static, deterministic, and continuous.
| Questions to ask | Answers/Explanation |
|---|---|
| Is the model documentation complete? | The documentation is not complete but adequate. |
| Is the documentation accessible? If so, how? | Yes |
| Is the documentation in English? | Yes |
| Does the model have a GUI? If so, how to access it? | Yes, the GUI is online. |
| Does the model have an Application Programming Interface (API)? If so, how to access it? | Yes, APIs are also online. |
| Is the model static or dynamic? | Static. Additional comments/remarks: The model has a static start-to-end date calculation. Energy storage and market principles use dynamic time steps. |
| Is the model continuous or discrete? | Continuous |
| Is the model stochastic or deterministic? | Deterministic |
| Is it an optimization model? If so, what type of algorithms does it use? | No |
The next set of questions concerns the modeling paradigm, implementation environment, and license. The model applies multiple formalisms, such as mathematical equations and object-oriented programming. Multiple general-purpose programming languages are used, such as Python, JAVA, Ruby, and C++, along with an SQL database. Parts of the model are implemented in an Excel spreadsheet, and no license is required to run the model.
| Questions to ask | Answers/Explanation |
|---|---|
| What modeling paradigm or formalism does the model use? | Mathematical equations (translation of UI input to model input; graph query), procedural (mostly) and functional (some), object-oriented, etc. |
| Is it implemented in a general-purpose programming language? | Python, JAVA, Ruby (mostly), an SQL database, and C++ for optimized/memory-intensive activity |
| Does it use a modeling/simulation environment/package? | No |
| Is it implemented in a spreadsheet? | Excel |
| Is any license required to run the model? | No |
Model content¶
A preliminary set of model content questions was related to energy system integration and scope. The model represents an integrated energy system.
Essential elements and concepts included in the model are energy-demanding sectors, energy supply options, energy infrastructure, and fuel feedstock. The model covers a wide range of flexibility options, for example, technologies accommodating large fluctuations in volume, such as power-to-gas (P2G) and gas storage, and large or sudden fluctuations in capacity, such as dispatchable heat and power plants.
| Questions to ask | Answers/Explanation |
|---|---|
| Does the model represent an integrated energy system? | Yes |
| What important elements and concepts are included in the model? | 1. Covers the entire energy system of the Netherlands 2. Content-wise coverage: energy-demanding sectors (built environment, industries, agriculture, and mobility), energy supply options (for example, wind, solar, biomass, geothermal, and non-renewable sources), energy infrastructure (electricity, heat, gas, hydrogen, and CO2), and fuel feedstock |
| What elements and concepts are currently not included in the model, but in your opinion should be included? | |
| Specific attention to flexibility options: What type of flexibility options are included in the model? | A wide range of flexibility options are included: a. large fluctuations in volume (P2G, import/export or storage of gas/hydrogen, and seasonal storage of heat) b. large or sudden fluctuations in capacity (storage in batteries, dispatchable heat and power plants, and demand-side response) c. volume and capacity fluctuations (import/export of electricity, P2H, curtailment of renewable electricity production, and large-scale electricity storage) |
The next set of content-related questions covered scale and resolution. The spatial scale of the model is national. The model has a long-term temporal scale till 2070; however, the emphasis is till 2050. The spatial resolution is at the municipality level. The temporal resolution is one hour.
| Questions to ask | Answers/Explanation |
|---|---|
| What spatial (or geospatial) scale does the model have? | National |
| What temporal (or time) scale does the model have? | Long-term (till 2070); however, the emphasis is till 2050. |
| Spatial resolution | Municipality |
| Temporal resolution | Hourly |
The next set of questions is related to model assumptions, inputs, parameters, outputs, and the data sources used by the model. Most energy balances happen annually, allowing the model to provide quick results for different scenarios. The model does not differentiate between temperature levels, which others might contest, as industries require high-temperature heat while the built environment uses low-temperature heat. Input is entered through sliders in the GUI, and the output results are graphs visualized through the GUI. Some important model inputs are sectoral energy and services demand, supply options, and profiles. Important model outputs are final energy demands and supplies, investments in technology options, yearly cost of energy production, etc. Data can be shared, and links to some of the data sources are provided.
| Questions to ask | Answers/Explanation |
|---|---|
| What critical assumptions does the model have? | 1. Most energy balances happen annually, which allows the model to provide quick results for different scenarios 2. Multiple versions of the II3050 scenario are considered in the model. |
| Which ones are likely to be contested by others? Why? | 1. No differentiation between temperature levels; only one type of heat, which is not realistic. Industry uses high-temperature heat, and the built environment uses low-temperature heat 2. In dispatchable power plants, there is no ramping speed |
| What is/are the model input format(s)? | Input is through sliders at the GUI. |
| What is/are the model output format(s)? | Output results are graphs visualized at the GUI. |
| What are the important model inputs? | 674 input variables. Examples: sectoral energy and services demand (households, buildings, transportation, industry, agriculture, etc.), supply (electricity, district heating, hydrogen, transport fuels, etc.), profiles (demand, supply, prices, etc.), etc. |
| What important parameters does the model have? | Technology- and process-related parameters (for example, efficiency), emission factors, etc. |
| What are the important model outputs? | Final energy demands and supply, investment in technology options, hourly electricity prices, yearly energy system cost, production, etc. |
| What are the data sources used by the model? | Some links to data sources: |
| Any data that can be shared? If so, what and how to access them? | Yes |
Continuing with the model content, there were questions regarding verification, validation, testing, and uncertainty descriptions. The model is developed in a test-driven environment, with unit testing for low-level functions. Model inputs, model structure, and data consistency are verified, tested, and validated, and the effect of policies on the inputs is tested. Qualitative validation is done through expert consultation; quantitative methods include comparison with other, more detailed models and pilot runs. No systematic uncertainty verification methods exist, though sensitivity analyses are performed on various input parameters.
| Questions to ask | Answers/Explanation |
|---|---|
| Can you comment on the test coverage of the model? | Test-driven development, unit testing for low-level functions, integration tests |
| What is being verified, validated, or tested in the model? | 1. Input, model structure, data consistency, etc. 2. The possible effect of policies is given as input to the model |
| What methods are used for the model verification, validation, and testing, if any? | 1. Qualitative method: expert validation 2. Quantitative method: comparison with other models with more significant details, pilot runs, etc. |
| Can you comment on the uncertainty in model parameters? | Sensitivity analyses; no systematic uncertainty verification method |
| Can you comment on the uncertainty in model input? | |
| Can you comment on the uncertainty in the model structure? | |
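The one-at-a-time sensitivity analyses mentioned above can be sketched as follows. This is not ETM code: the model function, input names, and the ±10% perturbation are illustrative assumptions, chosen only to show how each input's influence on an output can be quantified.

```python
def sensitivity(model, base_inputs, delta=0.10):
    """One-at-a-time sensitivity analysis: perturb each input by +/- delta
    and report the relative change in the model output for each direction."""
    base = model(base_inputs)
    results = {}
    for name, value in base_inputs.items():
        changes = []
        for factor in (1 - delta, 1 + delta):
            perturbed = dict(base_inputs, **{name: value * factor})
            changes.append((model(perturbed) - base) / base)
        results[name] = changes  # [relative change at -delta, at +delta]
    return results


# Hypothetical mini-model: annual heat demand of the built environment.
def heat_demand(p):
    return p["dwellings"] * p["demand_per_dwelling"] * (1 - p["insulation"])

base = {"dwellings": 8e6, "demand_per_dwelling": 10.0, "insulation": 0.2}
result = sensitivity(heat_demand, base)
# dwellings and demand_per_dwelling scale the output linearly (+/-10%),
# while the insulation share has a smaller, opposite effect (+/-2.5%).
```

Ranking inputs by the magnitude of these relative changes is a simple, systematic way to decide where uncertainty in the inputs matters most, even without a formal uncertainty quantification method.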
MOTER¶
Somadutta Sahoo, Last update: 14 December 2023
MOTER is an optimization model for dispatching multi-commodity energy systems in an interconnected network of multiple energy carriers. Refer to https://multimodelling.readthedocs.io/en/latest/energy_models/MOTER/index.html for more information about the model.
General model information¶
General model information questions were asked regarding basic information, model versions, and point of contact for questions. The MOTER model is developed and maintained by DNV.
| Questions to ask | Answers/Explanation |
|---|---|
| Model name | MOTER (Modeler of Three Energy Regimes) |
| Model owner | DNV |
| Model Developer | DNV |
| The latest model version/date | |
| The model version used in this project | |
| Organization | DNV |
| Individual | Jan Willem Turkstra |
A second set of questions was asked regarding whether the model is a type or token model, the intended purpose of the model, and the level of decision that the model aims to support. We understand that the model can be categorized as both a type and a token model. The model focuses on long-term investment in energy infrastructure and technology options related to storage and conversions. In the medium term, the model dispatches assets (for example, generators and heat pumps) within the physical limitations of the system at the minimum overall costs. In the short term, the model balances energy demand and supply and addresses mismatches related to energy flows.
| Questions to ask | Answers/Explanation |
|---|---|
| Is the model a token model? If so, give illustration(s). | Yes. For example, the model analyses energy infrastructure in detail, i.e., voltage and current levels of different electricity networks, rather than the universal property of energy flows. |
| Is the model a type model? If so, give illustration(s). | Yes. For example, one of the major components of an integrated energy system, i.e., energy infrastructure, is modeled in detail. |
| Briefly describe the intended purpose of the model | A global optimization model for dispatching flexible assets in a multi-commodity energy system |
| Strategic - long-term planning; what do we want? | Investment in capacity of energy infrastructure and technology options related to storage and conversions. |
| Tactical - medium-term; how do we approach this? | MOTER dispatches the assets (i.e., generators, heat pumps, boilers, compressors, gas blending, storage, etc.) within the physical limitations of the system at the minimum overall cost. Also included in the dispatch plan are supply/demand curtailment, intraday load shifting, transport and storage losses, and any limitations on maximal annual volumes (like for biogas) |
| Operational - short-term; regular/day-to-day operations? | Demand-supply energy balances, energy flows to address mismatches, dispatch of flexible assets, etc. |
Typical questions asked of the model include future capacities of, and energy supplied by, different technology options. The model has many strengths, including its detailed representation of energy infrastructure, for example, electricity, heat, and gas. One critical limitation of the model is that it has only been used in the context of the DNV Energy Transition Simulator. The model has been used to provide near real-time feedback on the techno-economic performance of the investment choices made by stakeholders. Refer to the table below for further discussion of these aspects.
| Questions to ask | Answers/Explanation |
|---|---|
| What are typical types of questions that can be asked to the model? Provide examples of such questions | 1. What are the capacity and energy supply from different technology options? 2. What type of system integration can be achieved with the model? 3. What flexibility options are included in the analysis? Etc. |
| What are the strengths of this model? What is unique? | All energy carriers can be configured within three classes: electric (HV/MV voltage ranges), gaseous (pressure, composition ranges), and heat (temperature ranges). |
| What are the important limitations of the model? | 1. As a global optimizer, the number of assets combined with the time resolution is a limiting factor 2. Only used in the context of the DNV Energy Transition Simulator |
| Cases/examples where the model was used for its intended purpose | 1. The primary purpose of the MOTER tool is to provide stakeholders with near real-time feedback on the techno-economic performance of the investment choices made. 2. Users can co-create any future multi-commodity energy system |
| Cases/examples where the model was not used for its intended purpose; are there any examples of model abuse or misuse? | |
The next set of questions is related to model documentation, accessibility, and type. The model documentation is not complete and not accessible. The model has no GUI and, in general, no API. The model is static, deterministic, and based on linear programming (LP).
| Questions to ask | Answers/Explanation |
|---|---|
| Is the model documentation complete? | No |
| Is the documentation accessible? If so, how? | Not accessible |
| Is the documentation in English? | Not available |
| Does the model have a GUI? If so, how to access it? | No |
| Does the model have an Application Programming Interface (API)? If so, how to access it? | In general, the model does not have an API. |
| Is the model static or dynamic? | Static |
| Is the model continuous or discrete? | Continuous |
| Is the model stochastic or deterministic? | Deterministic |
| Is it an optimization model? If so, what type of algorithms does it use? | Yes, LP |
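As an illustration of what an LP-based dispatch optimization computes, consider minimizing total generation cost subject to a demand balance and per-asset capacity bounds. This sketch is not MOTER code: the asset names and numbers are invented, and for this simple single-period case the LP optimum coincides with merit-order dispatch, which the function computes directly instead of calling an LP solver.

```python
def lp_dispatch(producers, demand):
    """Minimize sum(cost_i * q_i) s.t. sum(q_i) = demand, 0 <= q_i <= cap_i.

    For this single-period problem with independent capacity bounds, the
    LP optimum is the merit order: fill the cheapest producers first.
    producers: list of (name, marginal_cost, capacity) tuples.
    Returns (dispatch dict, total cost); raises if demand cannot be met.
    """
    remaining = demand
    dispatch, total_cost = {}, 0.0
    for name, cost, cap in sorted(producers, key=lambda p: p[1]):
        q = min(cap, remaining)  # dispatch up to capacity or remaining demand
        dispatch[name] = q
        total_cost += cost * q
        remaining -= q
    if remaining > 1e-9:
        raise ValueError(f"demand exceeds total capacity by {remaining}")
    return dispatch, total_cost


# Hypothetical assets: cheap but limited wind, a CHP plant, a gas peaker.
producers = [("gas_peaker", 80.0, 100.0), ("wind", 5.0, 40.0), ("chp", 40.0, 50.0)]
dispatch, cost = lp_dispatch(producers, demand=70.0)
print(dispatch)  # {'wind': 40.0, 'chp': 30.0, 'gas_peaker': 0.0}
print(cost)      # 1400.0
```

A full LP formulation, as in MOTER, adds coupling constraints (network limits, storage, conversion between carriers) that break this simple merit-order structure, which is why a dedicated solver is then required.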
The next set of questions concerns the modeling paradigm, implementation environment, and license. The model applies multiple formalisms, such as mathematical equations and logical expressions. The model is implemented in the AIMMS modeling package. An AIMMS license is needed, and the owner can share the model.
| Questions to ask | Answers/Explanation |
|---|---|
| What modeling paradigm or formalism does the model use? | Mathematical equations, logical expressions, energy balances, etc. |
| Is it implemented in a general-purpose programming language? | No |
| Does it use a modeling/simulation environment/package? | AIMMS |
| Is it implemented in a spreadsheet? | |
| Is any license required to run the model? | An AIMMS license is needed, except for educational and research purposes |
Model content¶
A preliminary set of model content questions were related to energy system integration and scope. The model does not represent an integrated energy system. Essential elements and concepts included in the model are production, transport, storage, conversion, and end-use of resources. Some flexibility options included in the model are combined heat and power plants and heat pumps.
| Questions to ask | Answers/Explanation |
|---|---|
| Does the model represent an integrated energy system? | No |
| What important elements and concepts are included in the model? | 1. Production, transport, storage, conversion, and end-use are in scope. Networks may have ring topologies with multiple interconnections 2. MOTER can be configured to include all classes of supply and demand |
| What elements and concepts are currently not included in the model, but in your opinion should be included? | |
| Specific attention to flexibility options: What type of flexibility options are included in the model? | Some examples of flexibility options are combined heat and power plants, heat pumps, storage, gas blending, and other similar options. |
The next set of content-related questions covered scale and resolution. There is no spatial representation; the model has a topological representation of a fictive world, ‘Enerland.’ Similarly, there is no specified time scale. Users can define the topological resolution of regions. The temporal resolution consists of time slices representing a year, varying from 16 to 800 slices.
Questions to ask | Answers/Explanation
---|---
What spatial (or geospatial) scale does the model have? | There is no spatial representation. The model has a topological representation of a fictive world of ‘Enerland.’ The modeling framework can represent energy systems ranging from local to national scale.
What temporal (or time) scale does the model have? | There is no specified time scale. Modelers can determine the scale based on applications/projects.
Spatial resolution | Users can define the topological resolution of regions. No fixed preexisting category is present in the model.
Temporal resolution | A yearly dispatch plan is created with hourly resolution using “time slices” (i.e., a limited number of hours (16-800) representing the total 8760 hours of a year).
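To make the time-slice idea concrete, the sketch below aggregates a year of hourly demand values into a small number of representative slices. This is an illustrative simplification, not MOTER’s actual implementation: real time-slice schemes typically group similar hours (e.g., by season and load level) rather than contiguous blocks, but the bookkeeping is the same.

```python
# Illustrative time-slice aggregation (assumed scheme, not MOTER's own):
# average equal-width blocks of hours into representative slices, keeping
# hour-count weights so annual totals are preserved.

def to_time_slices(hourly, n_slices):
    """Average `hourly` values (length 8760) into `n_slices` slices.

    Returns (slice_values, slice_weights); the weights are hour counts,
    so sum(value * weight) reproduces the annual total.
    """
    hours = len(hourly)
    base, extra = divmod(hours, n_slices)
    values, weights = [], []
    start = 0
    for i in range(n_slices):
        width = base + (1 if i < extra else 0)
        block = hourly[start:start + width]
        values.append(sum(block) / len(block))
        weights.append(len(block))
        start += width
    return values, weights

demand = [100.0] * 8760            # flat 100 MW demand, for illustration
vals, wts = to_time_slices(demand, 16)
annual = sum(v * w for v, w in zip(vals, wts))   # 876000.0 MWh, as expected
```

Because each slice carries a weight, a dispatch computed on 16-800 slices can still be scaled back to annual energy quantities.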
The next set of questions is related to model assumptions, inputs, parameters, outputs, and data sources. The model’s standard input and output format is MS Access. Some important model inputs are technology options (supply options) and costs (annualized investments, fixed, variable, and operation and maintenance costs). Similarly, some important model outputs are production, transport, conversion, and storage. Data can be shared with permission from the model owners. Most of the data are from open sources.
Questions to ask | Answers/Explanation
---|---
What critical assumptions does the model have? |
Which ones are likely to be contested by others? Why? |
What is/are the model input format(s)? | MS Access
What is/are the model output format(s)? | MS Access
What are the important model inputs? | Technology inputs (supply, transformation, transport, and storage options), costs (investments, fixed, variable, and operation and maintenance costs)
What important parameters does the model have? | Technology- and process-related parameters (such as efficiency), demand and supply profiles, limits and ranges on output, etc.
What are the important model outputs? | 1. The model outputs include an envisaged operation of the production, transport, conversion, storage, and (intelligent) end-use assets on an hourly basis during a year. 2. System KPIs on the renewable share, CO2 emissions, energy cost levels, and security of supply
What are the data sources used by the model? |
Any data that can be shared? If so, what and how to access them? | Databases can be accessed with permission from model owners.
Continuing with the model content, there were questions regarding verification, validation, and testing, and uncertainty descriptions. Regarding test coverage, there is no formal testing possibility within the modeling framework. Verification, validation, and testing can be done on boundary conditions and input limits/ranges.
Questions to ask | Answers/Explanation
---|---
Can you comment on the test coverage of the model? | There is not much formal testing possibility within the modeling framework. Input parameters can be tested by sensitivity analyses, for example. Non-optimality or non-converging model conditions validate modeling outputs/results.
What is being verified, validated, or tested in the model? | Verification, validation, and testing can be done on the boundary conditions, inputs, limits/ranges, etc.
What methods are used for model verification, validation, and testing, if any? | 1. Qualitative methods: stakeholder and expert opinions and perspectives, literature, government reports, etc. 2. Quantitative methods: comparison with other contemporary national models, scenario comparisons; result ranges are also indicative, based on the experience of modelers, etc.
Can you comment on the uncertainty in model parameters? | Important model parameters operate within ranges, depending upon scenarios, to handle uncertainty.
Can you comment on the uncertainty in model input? |
Can you comment on the uncertainty in the model structure? |
References:
Model Description:
OPERA¶
OPERA is a Dutch national energy system model focusing on total system cost minimization. Refer to https://multimodelling.readthedocs.io/en/latest/energy_models/OPERA/index.html for a general overview of the model.
General model information¶
General model information questions were asked regarding basic information, model versions, and the point of contact for questions. The OPERA model is developed and maintained by the TNO-ETS group in Amsterdam.
Questions to ask | Answers/Explanation
---|---
Model name | OPERA (Options Portfolio for Emission Reduction Assessment)
Model owner | TNO-ETS
Model developer | TNO-ETS
The latest model version/date |
The model version used in this project | 2022_3
Organization | TNO-ETS
Individual | Joost van Stralen
A second set of questions was asked regarding whether the model is a type or a token model, the intended purpose of the model, and the level of decision that the model aims to support. We understand that the model can be categorized as both a type and a token model. The model focuses on long-term decision-making with different policies, targets, and measures. Even though some policy measures are incorporated, the model does not place significant emphasis on the mid-term (2025-2045), although it allows focusing on the mid-term if needed. Energy balances are maintained between demand and supply on a regular, i.e., short-term, basis.
Questions to ask | Answers/Explanation
---|---
Is the model a token model? If so, give illustration(s). | Yes. For example, the model analyzes the techno-economic aspects of the energy system.
Is the model a type model? If so, give illustration(s). | Yes. For example, the model reflects universal characteristics of network infrastructure, i.e., energy flows.
Briefly describe the intended purpose of the model | Total system cost minimization at the national level (the Netherlands)
Strategic - long-term planning; what do we want? | 1. Long-term national and regional policy targets related to emissions reductions, efficiency, and renewable energy production 2. Long-term targets of, for example, production for industries (including subsectors), sectoral demands, and energy infrastructure capacity, if any
Tactical - medium-term; how do we approach this? | 1. Not much emphasis on the medium term, except in the dynamic run mode of the model 2. The model structure allows for the inclusion of medium-term policies; most of them are already in the model. 3. Certain input parameters are adjusted based on upcoming policies, for example, energy labels of offices
Operational - short-term; regular/day-to-day operations? | Demand-supply energy balances, energy flows to address mismatches, short-term flexibility options
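The operational core of a cost-minimization model is the demand-supply balance. The toy sketch below illustrates this with a merit-order dispatch (cheapest supply options first); it is an invented single-node example, not OPERA’s actual formulation, which solves a full linear program in AIMMS. In this unconstrained case, the LP’s cost-minimizing solution reduces to exactly this merit order.

```python
# Toy merit-order dispatch (illustrative only): meet demand at minimum
# variable cost by using supply options in order of increasing cost.

def dispatch(options, demand):
    """options: list of (name, capacity_MW, cost_per_MWh). Returns a dict
    mapping option name to dispatched MW; raises if demand cannot be met."""
    plan, remaining = {}, demand
    for name, cap, cost in sorted(options, key=lambda o: o[2]):
        used = min(cap, remaining)
        plan[name] = used
        remaining -= used
        if remaining <= 0:
            break
    if remaining > 0:
        raise ValueError("demand exceeds total capacity")
    return plan

options = [("gas", 500, 60.0), ("wind", 300, 5.0), ("coal", 400, 40.0)]
plan = dispatch(options, 600)
# Wind (cheapest) is fully used, coal covers the rest: {"wind": 300, "coal": 300}
```

Adding network constraints, storage, or ramping limits breaks the simple merit order, which is why models like OPERA formulate the balance as an optimization problem instead.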
Typical questions asked of the model include future capacities of different renewable resources and energy supply options. The model has many strengths, one of them being replicability: it can readily be applied to other European national energy system modeling contexts. An additional remark is that the model can simultaneously analyze greenhouse gases (GHG), non-energy-related emissions, and air pollutants at the regional and national levels. One of the critical limitations of the model is the assumption of perfect foresight. The model has been used to formulate strategic policy advice for the Dutch government on energy decarbonization and climate change mitigation. Refer to the table below for further discussion on these aspects.
Questions to ask | Answers/Explanation
---|---
What are typical types of questions that can be asked of the model? Provide examples of such questions | 1. What are the future capacities and energy supplies of different renewable sources? 2. What is the energy flow between regions, and is the network constrained to achieve that? Etc.
What are the strengths of this model? What is unique? | 1. Replicability: the structure can be readily applied to other countries, particularly in Europe 2. System integration: an ideal tool for assessing the implementation of the energy transition and the establishment of a low-carbon economy 3. Linkage to the NEOMS model: this model uses data from the NEOMS model, which is used for preparing the annual Dutch national energy outlook. Additional comments/remarks: 1. The model analyzes greenhouse gases (GHG), non-energy-related emissions, and air pollutants. 2. The capacity limits (of at least essential technology options or processes) are set by expert consultation.
What are the important limitations of the model? | 1. Assumption of perfect foresight 2. Social interaction and impacts missing
Cases/examples where the model was used for its intended purpose | 1. Used for formulating strategic policy advice on energy decarbonization and climate change mitigation for the Dutch government 2. Performed exploratory studies on the role of specific low-carbon energy technologies in the energy transition of the Netherlands
Cases/examples where the model was not used for its intended purpose; are there any examples of model abuse or misuse? |
The next set of questions is related to model documentation, accessibility, and type. The model content is documented in a journal paper that is open source. The graphical user interface (GUI) can be accessed with the owner’s permission. The model is static, deterministic, and linear programming (LP)-based.
Questions to ask | Answers/Explanation
---|---
Is the model documentation complete? | Content documentation is a journal paper (see reference below). There is no public documentation on the details of the model (for example, GUI, API, etc.). In addition, not every update is documented.
Is the documentation accessible? If so, how? | The journal paper is open source.
Is the documentation in English? | Yes
Does the model have a GUI? If so, how to access it? | Yes, the GUI can be accessed with the whole model with the owner’s permission.
Does the model have an Application Programming Interface (API)? If so, how to access it? | In general, the model does not have an API.
Is the model static or dynamic? | Static. Additional comments/remarks: OPERA can consider 5/10-year time steps, projecting until 2050, i.e., years are optimized individually. Previous year-cycle data are not automatically fed to future years. Dynamic modeling is in progress and will not be a part of this project.
Is the model continuous or discrete? | Continuous
Is the model stochastic or deterministic? | Deterministic
Is it an optimization model? If so, what type of algorithms does it use? | Yes, LP. Additional comments/remarks: Due to the linear structure, discrete values (say, integers) are not considered. However, limits (lower and upper) can be set as discrete values.
The next set of questions concerns the modeling paradigm, implementation environment, and license. The model applies multiple formalisms, such as mathematical equations and logical expressions. It is implemented in the AIMMS modeling package. An AIMMS license is needed, and the owner can share the model.
Questions to ask | Answers/Explanation
---|---
What modeling paradigm or formalism does the model use? | Mathematical equations, logical expressions, energy balances, etc.
Is it implemented in a general-purpose programming language? | No
Does it use a modeling/simulation environment/package? | AIMMS
Is it implemented in a spreadsheet? |
Is any license required to run the model? | An AIMMS license is needed, except for educational and research purposes.
Model content¶
A preliminary set of model content questions was related to energy system integration and scope. The model represents an integrated energy system and covers all greenhouse gas emissions in the Netherlands. Content-wise, the model contains important energy infrastructure, such as electricity, heat, and hydrogen. Some flexibility options included in the model are salt caverns (spatially dependent) and batteries or hydrogen storage (spatially independent).
Questions to ask | Answers/Explanation
---|---
Does the model represent an integrated energy system? | Yes
What important elements and concepts are included in the model? | 1. Covers the entire energy system and all greenhouse gas emissions of the Netherlands 2. Content-wise coverage: energy-demanding sectors (built environment, industries, agriculture, and mobility), energy supply options (for example, wind, solar, biomass, geothermal, and non-renewable sources), and energy infrastructure (electricity, heat, gas, hydrogen, and CO2)
What elements and concepts are currently not included in the model but, in your opinion, should be included? |
Specific attention to flexibility options: What type of flexibility options are included in the model? | A few examples of flexibility options are salt caverns (space-specific), batteries, hydrogen storage, and a significant range of conversion techniques. Additional comments/remarks: Storage, in general, has zero costs. Only electricity and hydrogen have storage costs.
The next set of content-related questions covered scale and resolution. The spatial scale of the model is the national level, and the temporal scale is long-term (till 2050). The spatial resolution is at the town or city level, which has so far only been applied to Groningen province in the northern Netherlands. The temporal resolution uses time slices, with a maximum of 80 slices per year.
Questions to ask | Answers/Explanation
---|---
What spatial (or geospatial) scale does the model have? | National
What temporal (or time) scale does the model have? | Long-term (till 2050). Per run, the calculations are done on an annual basis in the model.
Spatial resolution | Town/city. Additional comments/remarks: This has been done only for Groningen province. The structure allows us to perform similar analyses in other regions within the Netherlands.
Temporal resolution | Time slices. Currently, the maximum possible is 80 slices/year.
The next set of questions is related to model assumptions, inputs, parameters, outputs, and data sources. One critical assumption concerns the state in which the energy infrastructure enters the model: for some infrastructure, the current state is the base; for others, every investment starts from zero. The model’s standard input format is MS Access, and the output format is MS Excel. Some important model inputs are technology inputs (supply options), costs (annualized investments, fixed, variable, and operation and maintenance costs), and industrial processes. Similarly, some important model outputs are primary energy supply, secondary energy demand-supply balances, energy flows, and system costs. Data can be shared with permission from the model owners. Most of the data are from open sources.
Questions to ask | Answers/Explanation
---|---
What critical assumptions does the model have? | 1. For some infrastructure, the current state of investment is the base (or lower limit), for example, the high-voltage electricity network; for others, all the investments start from scratch, for instance, the medium-voltage electricity network 2. Cost or capacity ranges are primarily based on literature or expert suggestions.
Which ones are likely to be contested by others? Why? | 1. Price includes material costs and does not include social or environmental costs 2. Every stakeholder has complete knowledge of the market behavior; only the system operator perspective is considered.
What is/are the model input format(s)? | MS Access. Additional comments/remarks: Inputs are preprocessed within OPERA to reduce the number of activities (solving variables) that go into the optimization process.
What is/are the model output format(s)? | MS Excel. Additional comments/remarks: Outputs are postprocessed both in OPERA and in Excel.
What are the important model inputs? | Technology inputs (supply options), costs (annualized investments, fixed, variable, and operation and maintenance costs), industrial processes, emissions from industries and other activities, future targets (for example, renewable energy production, emission reduction, and efficiency improvement)
What important parameters does the model have? | Technology- and process-related parameters (such as efficiency), demand and supply profiles, limits and ranges on output, demand service units (for example, MT_steel)
What are the important model outputs? | Primary energy supply, secondary energy demand-supply balances, energy flows, system costs
What are the data sources used by the model? | Open sources, such as CBS; mostly linked to other models for specific inputs, etc.
Any data that can be shared? If so, what and how to access them? | Databases (MS Access format) can be accessed with permission from model owners. Databases contain most input-related data. The remaining data can be accessed by accessing the model with permission from the model owners.
Continuing with the model content, there were questions regarding verification, validation, and testing, and uncertainty descriptions. Regarding test coverage, there is no formal testing possibility within the modeling framework. Verification, validation, and testing can be done on boundary conditions and input limits/ranges, generally by sensitivity analyses, expert opinions, and comparisons with other models. Inputs related to the long term are more uncertain compared to the mid-term.
Questions to ask | Answers/Explanation
---|---
Can you comment on the test coverage of the model? | There is not much formal testing possibility within the modeling framework. Input parameters can be tested by sensitivity analyses, for example. Non-optimality or non-converging model conditions validate modeling outputs/results.
What is being verified, validated, or tested in the model? | Verification, validation, and testing can be done on the boundary conditions, inputs, limits/ranges, etc.
What methods are used for model verification, validation, and testing, if any? | 1. Qualitative methods: stakeholder and expert opinions and perspectives, literature, government reports, etc. 2. Quantitative methods: comparison with other contemporary national models, scenario comparisons, etc.
Can you comment on the uncertainty in model parameters? | Important model parameters operate within ranges, depending upon scenarios, to handle uncertainty.
Can you comment on the uncertainty in model input? | Input is more uncertain for long-term scenarios compared to the mid-term.
Can you comment on the uncertainty in the model structure? |
References:
Model Description:
Model application:
TEACOS¶
Somadutta Sahoo, Last update: 14 December 2023
TEACOS is a Dutch mathematical optimization model focusing on mid- to long-term investment analysis. Refer to https://multimodelling.readthedocs.io/en/latest/energy_models/TEACOS/index.html for a general overview of the model.
General model information¶
General model information questions were asked regarding basic information, model versions, and the point of contact for questions. The TEACOS model is developed and maintained by QUOMARE.
Questions to ask | Answers/Explanation
---|---
Model name | TEACOS
Model owner | QUOMARE
Model developer | QUOMARE
The latest model version/date |
The model version used in this project |
Organization | TNO-ETS
Individual | Gregor Brandt
A second set of questions was asked regarding whether the model is a type or a token model, the intended purpose of the model, and the level of decision that the model aims to support. We understand that the model can be categorized as both a type and a token model. The model focuses on techno-economic optimization and is capable of long-term planning by considering investment decisions in technology options.
Questions to ask | Answers/Explanation
---|---
Is the model a token model? If so, give illustration(s). | Yes, it captures investment decisions related to specific innovative technology options, for example, a hydrogen electrolyzer.
Is the model a type model? If so, give illustration(s). | Yes, it captures the core elements of an open, targeted system.
Briefly describe the intended purpose of the model | Techno-economic optimization
Strategic - long-term planning; what do we want? | Investment decisions in technology options
Tactical - medium-term; how do we approach this? |
Operational - short-term; regular/day-to-day operations? | Operational decisions of technology options
Typical questions asked of the model include future capacities and energy supplies for different technology options. One of the significant strengths of the model is its openness, i.e., a user-defined system. TEACOS finds the best combination of technology options while considering optimal investment and operational decisions. One of the critical limitations of the model is the assumption of perfect foresight. The model has been used to identify the presence/absence of different technologies in a targeted energy system analysis. Refer to the table below for further discussion on these aspects.
Questions to ask | Answers/Explanation
---|---
What are typical types of questions that can be asked of the model? Provide examples of such questions | 1. What are the capacity and energy supply from different technology options? 2. Will a given technology option be selected or not?
What are the strengths of this model? What is unique? | 1. Openness: a user-defined system 2. TEACOS finds the best combination of technology options, considering optimal investment and operational decisions.
What are the important limitations of the model? | 1. Assumption of perfect foresight 2. Social interaction and impacts missing
Cases/examples where the model was used for its intended purpose | 1. Used for identifying the presence/absence of different technologies in a targeted energy system analysis 2. Identifying the capacity of those technologies and investments in them
Cases/examples where the model was not used for its intended purpose; are there any examples of model abuse or misuse? | Sometimes, for operational questions, beyond its core applications and in edge cases.
The next set of questions is related to model documentation, accessibility, and type. The model documentation is not complete. The graphical user interface (GUI) can be accessed with the owner’s permission. The model is static, deterministic, and linear programming (LP)-based.
Questions to ask | Answers/Explanation
---|---
Is the model documentation complete? | No
Is the documentation accessible? If so, how? | Some parts of it are accessible through the QUOMARE website and this project.
Is the documentation in English? | Partly English
Does the model have a GUI? If so, how to access it? | Yes, the GUI can be accessed with the whole model with the owner’s permission.
Does the model have an Application Programming Interface (API)? If so, how to access it? | In general, the model does not have an API.
Is the model static or dynamic? | Static. Additional comments/remarks: TEACOS is a multi-period model (i.e., time steps). The model uses information from the previous time step. However, this is not a prerequisite.
Is the model continuous or discrete? | Continuous. Additional comments/remarks: The model has discrete system elements, but flows are continuous.
Is the model stochastic or deterministic? | Deterministic
Is it an optimization model? If so, what type of algorithms does it use? | Yes, LP
The next set of questions concerns the modeling paradigm, implementation environment, and license. The model applies multiple formalisms, such as mathematical equations and logical expressions. It is implemented in the AIMMS modeling package. An AIMMS license is needed, and the owner can share the model.
Questions to ask | Answers/Explanation
---|---
What modeling paradigm or formalism does the model use? | Mathematical equations, logical expressions, energy balances, etc.
Is it implemented in a general-purpose programming language? | No
Does it use a modeling/simulation environment/package? | AIMMS
Is it implemented in a spreadsheet? |
Is any license required to run the model? | An AIMMS license is needed, except for educational and research purposes.
Model content¶
A preliminary set of model content questions was related to energy system integration and scope. The model does not represent an integrated energy system. The model’s essential elements and concepts include detailed cost/price information, such as investment profiles and return on investment. Content-wise, the model contains important supply-related technology options and their interactions.
Questions to ask | Answers/Explanation
---|---
Does the model represent an integrated energy system? | No
What important elements and concepts are included in the model? | 1. Economics: CAPEX/full NPV, investment profiles, return on investment, and other standard economic KPIs 2. Content-wise coverage: supply-related technology options and their interactions
What elements and concepts are currently not included in the model but, in your opinion, should be included? |
Specific attention to flexibility options: What type of flexibility options are included in the model? |
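Since the model's economic concepts center on CAPEX, NPV, and return on investment, a minimal net-present-value sketch may help readers unfamiliar with these KPIs. The numbers are invented for illustration; TEACOS's actual formulation is a multi-period LP, not this standalone helper.

```python
# Minimal NPV sketch for a single investment option (hypothetical numbers;
# illustrative only, not TEACOS's internal formulation).

def npv(capex, annual_cashflow, years, rate):
    """Discount a constant annual cash flow at `rate` and subtract upfront CAPEX."""
    discounted = sum(annual_cashflow / (1 + rate) ** t for t in range(1, years + 1))
    return discounted - capex

# A 1000 kEUR investment returning 150 kEUR/year for 15 years at a 6% discount rate:
value = npv(1000.0, 150.0, 15, 0.06)   # ≈ 457 kEUR, so the option is attractive
```

An optimization model evaluates many such options jointly, under shared capacity and balance constraints, rather than one at a time.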
The next set of content-related questions covered scale and resolution. There is no spatial representation, but rather a topology and a visual representation on a map for communication. The temporal scale is long-term (2020-2050).
Questions to ask | Answers/Explanation
---|---
What spatial (or geospatial) scale does the model have? | There is no spatial representation, but rather a topology and a visual representation on a map for communication. Additional comments/remarks: The transport sector could have cost and distance considerations.
What temporal (or time) scale does the model have? | 30-year period (2020-2050)
Spatial resolution |
Temporal resolution | Time slices of 5 years in a 30-year investment trajectory. Arbitrary applicable time periods, given a strategic focus: a month, a quarter, or a year.
The next set of questions is related to model assumptions, inputs, parameters, outputs, and data sources. One assumption likely to be contested by others is the time-slice approach, used for faster processing speed, which is problematic for analyzing peak loads for electricity. The model’s standard input and output format is MS Access. Some important model inputs are technology inputs (supply options) and costs (annualized investments, fixed, variable, and operation and maintenance costs). Similarly, some important model outputs are secondary energy demand-supply balances and system costs. Data can be shared with permission from the model owners.
Questions to ask | Answers/Explanation
---|---
What critical assumptions does the model have? |
Which ones are likely to be contested by others? Why? | The model considers the time-slice approach for faster processing speed. This is problematic for analyzing peak loads for electricity.
What is/are the model input format(s)? | MS Access
What is/are the model output format(s)? | MS Access
What are the important model inputs? | Technology inputs (supply options), costs (annualized investments, fixed, variable, and operation and maintenance costs)
What important parameters does the model have? | Technology-related parameters (such as efficiency)
What are the important model outputs? | Secondary energy demand-supply balances, system costs, etc.
What are the data sources used by the model? |
Any data that can be shared? If so, what and how to access them? | Databases (MS Access format) can be accessed with permission from model owners. Databases contain most input-related data. The remaining data can be accessed by accessing the model with permission from the model owners.
Continuing with the model content, there were questions regarding verification, validation, and testing, and uncertainty descriptions. Regarding test coverage, TEACOS is continuously developed using internal test sets against reference models, with internal review before branches are merged into the master branch on GitHub. Verification, validation, and testing can be done on boundary conditions and input limits/ranges.
Questions to ask | Answers/Explanation
---|---
Can you comment on the test coverage of the model? | TEACOS is continuously developed using internal test sets against reference models, with internal review before branches are merged into the master branch on GitHub.
What is being verified, validated, or tested in the model? | Verification, validation, and testing can be done on the boundary conditions, inputs, limits/ranges, etc. Options are forced in order to look at extreme edges of the solution space.
What methods are used for model verification, validation, and testing, if any? | 1. Qualitative methods: base case review through the customer, etc. 2. Quantitative methods: comparison with reference cases, modeling practice, etc.
Can you comment on the uncertainty in model parameters? | Sensitivity testing, Monte Carlo on parameter values, multivariate Monte Carlo
Can you comment on the uncertainty in model input? | The model is deterministic and, therefore, does not propagate uncertainty.
Can you comment on the uncertainty in the model structure? | No structural uncertainty
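The table above mentions Monte Carlo sampling over parameter values as a way to handle parameter uncertainty in a deterministic model. The sketch below illustrates the idea with an invented stand-in "model"; in practice, each sample would trigger a full optimization run rather than a one-line formula.

```python
# Illustrative Monte Carlo sweep over one uncertain parameter (hypothetical
# numbers; a real study would re-run the optimizer for each sample).
import random

def annual_output(capacity_mw, capacity_factor):
    """Stand-in 'model': annual energy in MWh for a given capacity factor."""
    return capacity_mw * capacity_factor * 8760

random.seed(42)
samples = [random.uniform(0.20, 0.40) for _ in range(1000)]  # uncertain capacity factor
results = [annual_output(100.0, cf) for cf in samples]

mean = sum(results) / len(results)
lo, hi = min(results), max(results)
# The spread (lo, hi) around the mean shows how sensitive the output is
# to this single parameter; multivariate Monte Carlo samples several at once.
```

Comparing the output spread across parameters identifies which uncertainties matter most, which is the essence of the sensitivity testing mentioned above.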
Multi-modeling methods¶
In this section, the focus is on summarizing the multi-modeling methods created within this project.
Operating Principles¶
This is a summary of previous work on combining models with different operating principles, i.e., optimization and simulation, for performing energy system transition analysis.
Optimization and Simulation model coupling¶
5 December 2023
Master thesis summary – Menghua Prisse
This work has been further categorized into the following:
Introduction¶
Multi-models handle complex issues better than singular models because they combine the strengths of individual models (Duboz et al. 2003; Quesnel, Duboz, and Ramat 2008). However, combining models is challenging on both technical and non-technical levels. Technical challenges involve differences in the system design and alignment of each model. Alignment involves several topics, the most common being formalism, resolution, and scale. There are predominantly two non-technical challenges. First, it becomes increasingly difficult to understand and interpret the meaning of the numbers and outcomes within a multi-model as it becomes more complex.
One of the critical challenges for multi-models is the coupling of individual models and studying how the coupled models might affect the outcome. This thesis focused on understanding which interaction structures exist for coupling optimization and simulation models and how the choices might affect the workings of the multi-model. Accordingly, the overall research question was formulated as follows:
What is the effect of coupling an agent-based model to an existing optimization model?
This thesis was conducted within the boundaries of the micro case of the multi-modeling project. The case study area was the industrial area of Tholen in Zeeland province. This thesis coupled two models: an optimization and an agent-based model (ABM).
State-of-the-art¶
An ABM can be constructed by following a ten-step method (`van Dam, Nikolic, and Lukszo n.d.`_). One of the main issues associated with coupling is interoperability (`Bollinger et al. 2017`_; `Nikolic et al. 2019`_; `Rezaeiahari and Khasawneh 2020`_). Coupling tightness refers to the degree of interdependence between models: how they are connected and how their variables are intertwined. Five levels of coupling tightness have been identified based on coupling methodologies (`Brandmeyer and Karimi 2000`_). Four levels of interoperability have been determined: technical, syntactical, semantic, and organizational (`van der Veer and Wiles 2008`_). Four model configurations for coupling a simulation model and an optimization model have been provided (`Figueira and Almada-Lobo 2014`_).
Methods, results, and findings¶
An existing optimization model named Techno-Economic Analysis Of Complex Option Spaces (TEACOS), developed by our project partner QUOMARE, was used. It is a long-term optimization tool designed to facilitate the transition towards low-carbon energy systems, identifying the most profitable investments while adhering to supply-demand balances and environmental constraints set by the modeler. Optional investment choices given as input to TEACOS are converted into fixed investment decisions based on the objective of minimizing total system cost. Since TEACOS is not openly available, an adapter was created that communicates with the model through API calls (this adapter was created outside the scope of the master's thesis). The standard communication occurred via Energy System Description Language (ESDL) files.
The industrial area of Tholen was selected because of its strong commitment to a low-carbon energy transition. For this study, the optional assets were limited to solar panels, as the purpose was to test the feasibility of the multi-model within the micro case.
An ABM was created to simulate the buying behavior of the companies in Tholen. The Mesa modeling environment was chosen because of the relative ease of linking it with ESDL files and its sufficient documentation. The ABM aimed to simulate investment decisions in the optional assets selected by TEACOS. One key outcome of the model was the number and distribution of solar panels purchased by agents in a single simulation run. The conceptual model consisted of the environment, agents, and time.
A short overview of the model characteristics that are important for coupling was compiled to create a meaningful interface between the models. The modeling steps were conceptualization, implementation, verification and validation, and data implementation. Conceptualization hypothesized what the multi-model output might look like and how this would answer the overall research question. The coupling is loose, i.e., the modeler interfaces with each model using automated data transfer. The coupling was executed using a Python script that called TEACOS, followed by the ABM, over several iterations. This allowed for investigating the ABM's effects on the outcome of TEACOS. The implementation was done as a Python script in the modeler's integrated development environment. Verification and validation ensured that the developed multi-model was created correctly and performed as intended. Data implementation involved preparing the ESDL files to be exchanged between the coupled models.
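The iteration loop described above can be sketched minimally as follows. Note that `run_teacos` and `run_abm` are hypothetical placeholders for the real adapter calls (which exchange ESDL files with TEACOS via its API and with the Mesa ABM), and the toy investment rule and 50% adoption fraction are assumptions for illustration only.

```python
# Minimal sketch of the loose TEACOS-ABM coupling loop.
# run_teacos() and run_abm() are hypothetical stand-ins for the real
# adapter calls, which exchange ESDL files rather than Python dicts.

def run_teacos(state):
    """Placeholder optimization step: suggest further PV investment
    up to an assumed 400 kW system optimum."""
    suggested = max(0.0, 400.0 - state["installed_pv_kw"])
    return {**state, "suggested_pv_kw": suggested}

def run_abm(state):
    """Placeholder ABM step: agents adopt an assumed 50% of the
    investments suggested by the optimizer."""
    adopted = 0.5 * state["suggested_pv_kw"]
    return {**state, "installed_pv_kw": state["installed_pv_kw"] + adopted}

state = {"installed_pv_kw": 0.0, "suggested_pv_kw": 0.0}
trajectory = []
for iteration in range(8):      # the thesis saw KPIs stabilize in 6-8 iterations
    state = run_teacos(state)   # optimization suggests investments
    state = run_abm(state)      # ABM simulates buying behavior
    trajectory.append(state["installed_pv_kw"])

print(trajectory)  # installed PV capacity per iteration, approaching 400 kW
```

The script-level loop makes the loose coupling explicit: each model only sees the data handed over between iterations, never the other model's internals.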
Two key performance indicators (KPIs) were devised to facilitate the multi-model investigation: the investment trajectory returned by the multi-model, and the inflection-point KPI (the iteration at which the suggested optimum investment trajectory alters). A dynamic experimental setup allowed for a cyclical, adaptive approach to conducting experiments. Observations indicated that the KPIs stabilized after 6-8 iterations. When the general tipping point was reached, at a little over 400 kW of PV array power output, TEACOS did not recommend further investments. Process analyses demonstrated the challenges faced during the different phases of the project: a two-dimensional array was created with axes representing interoperability categories and research phases. Organizational and technical interoperability proved to be the most cumbersome, with six and five challenges, respectively.
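One possible reading of the inflection-point KPI is the iteration at which the investment trajectory settles; the helper below and the toy trajectory are illustrative assumptions, not the thesis implementation.

```python
def inflection_point(trajectory, tol=1e-6):
    """Return the first iteration at which the investment trajectory
    stops changing (within tol), or None if it never settles.
    A minimal interpretation of the inflection-point KPI."""
    for i in range(1, len(trajectory)):
        if abs(trajectory[i] - trajectory[i - 1]) <= tol:
            return i
    return None

# Toy trajectory (kW of installed PV): it settles once TEACOS stops
# recommending further investments at the tipping point.
trajectory = [200.0, 300.0, 350.0, 375.0, 400.0, 400.0, 400.0]
print(inflection_point(trajectory))  # -> 5
```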
Conclusion and future work¶
The iterative process of conceptualizing the ABM and the multi-model resulted in a well-fitted ABM. When coupling models, high cohesion and low coupling are desired (`Hellhake et al. 2022`_), which this combination achieved. Overfitting of coupled models should be avoided (`Shahumyan and Moeckel 2015`_); instead, a generic ABM should be created and connected via a wrapper or an adapter. This study highlights the need for more focus on the broader organizational and practical context within which the models operate, particularly when different operating principles are involved. A limitation is the simplicity and abstraction of the ABM, which might not capture the intricacies of human behavior. Nevertheless, the ABM successfully fulfilled its core functions of making decisions, working with ESDL files, and interacting with the TEACOS model. This study shows that coupling an optimization model and an agent-based model is possible.
This research serves as a basis for integrating optimization and simulation models in more sophisticated ways in the future. This can involve richer interaction between the models than PV array capacity alone, an ABM that incorporates more of the intricacies of human behavior, and multi-period optimization performed in a single iteration.
A link to Menghua Prisse’s master thesis work follows: https://repository.tudelft.nl/islandora/object/uuid:53acc329-7990-4fe0-8374-3418d10c3f85
Bibliography
Bollinger, L. A., C. B. Davis, R. Evins, E. J. L. Chappin, and I. Nikolic. 2017. “Multi-Model Ecologies for Shaping Future Energy Systems: Design Patterns and Development Paths.” https://doi.org/10.1016/j.rser.2017.10.047.
Brandmeyer, Jo Ellen, and Hassan A. Karimi. 2000. “Coupling Methodologies for Environmental Models.” Environmental Modelling & Software 15(5):479–88. https://doi.org/10.1016/S1364-8152(00)00027-X.
van Dam, Koen H., Igor Nikolic, and Zofia Lukszo. n.d. Agent-Based Modelling of Socio-Technical Systems. https://link.springer.com/book/10.1007/978-94-007-4933-7.
Duboz, Raphaël, Éric Ramat, and Philippe Preux. 2003. “Scale Transfer Modeling: Using Emergent Computation for Coupling an Ordinary Differential Equation System with a Reactive Agent Model.” Systems Analysis Modelling Simulation 43(6):793–814. https://doi.org/10.1080/0232929031000150355.
Figueira, Gonçalo, and Bernardo Almada-Lobo. 2014. “Hybrid Simulation-Optimization Methods: A Taxonomy and Discussion.” Simulation Modelling Practice and Theory 46:118–34. https://doi.org/10.1016/j.simpat.2014.03.007.
Hellhake, Dominik, Justus Bogner, Tobias Schmid, and Stefan Wagner. 2022. “Towards Using Coupling Measures to Guide Black-Box Integration Testing in Component-Based Systems.” Software Testing Verification and Reliability 32(4). https://doi.org/10.1002/STVR.1811.
Nikolic, I., Martijn Warnier, J. H. Kwakkel, E. J. L. Chappin, Z. Lukszo, F. M. Brazier, A. Verbraeck, M. Cvetkovic, and P. Palensky. 2019. “Principles, Challenges and Guidelines for a Multi-Model Ecology.” https://doi.org/10.4233/UUID:1AA3D16C-2ACD-40CE-B6B8-0712FD947840.
Quesnel, Gauthier, Raphaël Duboz, and Éric Ramat. 2008. “The Virtual Laboratory Environment – An Operational Framework for Multi-Modelling, Simulation and Analysis of Complex Dynamical Systems.” Simulation Modelling Practice and Theory 17:641–53. https://doi.org/10.1016/j.simpat.2008.11.003.
Rezaeiahari, Mandana, and Mohammad T. Khasawneh. 2020. “Simulation Optimization Approach for Patient Scheduling at Destination Medical Centers.” Expert Systems With Applications 140:112881. https://doi.org/10.1016/j.eswa.2019.112881.
Shahumyan, Harutyun, and Rolf Moeckel. 2015. “Integrating Models for Complex Planning Policy Analysis: Challenges and a Solution in Coupling Dissimilar Models.” Computers in Urban Planning and Urban Management. http://web.mit.edu/cron/project/CUPUM2015/proceedings/Content/modeling/208_shahumyan_h.pdf
van der Veer, Hans, and Anthony Wiles. 2008. Achieving Technical Interoperability: The ETSI Approach.
Scaling¶
Resolution difference-based coupling¶
20 November 2023
Introduction¶
Energy infrastructure is expected to be an essential component of future energy systems in the Netherlands. Models form the basis for understanding the complex interactions of energy systems, and many models that analyze energy infrastructure are available; however, their capabilities and scopes are scattered. Policymakers are interested in a comprehensive overview of information to support informed decision-making on integrated energy systems. This is where a multi-modeling concept with a scaling scope comes into the picture. In addition, models used for energy-related decision support either operate at a high abstraction level or do not cover the entire energy system. Modeling subsystems separately is insufficient when studying complex socio-technical systems, as the behavior of the whole system is more than the sum of the behavior of its individual parts due to possible interactions between system components (`Vangheluwe et al., 2002`_). One of the main challenges of creating a coupling infrastructure is bridging the different resolution levels between models (`Nikolic et al., 2019`_). ‘Resolution’ in this context corresponds to the level of detail at which the models operate, related to space, time, or the modeled object (`Rabelo et al., 2016`_). The aim was to bridge these resolution gaps between models by applying a scaling method. Accordingly, the research question was formulated as follows:
How can issues arising when coupling multiple energy models with different resolutions be resolved effectively?
This thesis is part of a larger project aiming to create a multi-model infrastructure that couples existing models to better understand complex energy systems at different geographical scopes within the Netherlands (`Nikolic, 2023`_). The thesis, by Bram Boereboom, focused on detecting and alleviating issues that arise when coupling models with different resolutions.
State-of-the-art¶
Brandmeyer and Karimi (`Brandmeyer & Karimi, 2000`_) suggested five levels of coupling: one-way data transfer, loose coupling, shared coupling, joined coupling, and tool coupling. Aggregation and Disaggregation (A/D) of objects is complicated because this requires changing functions and writing an adjacent model at a different resolution (`Salome, 2021`_). The proper choice of resolution is decided by the purpose of a model (`Jie Chen & Xiaoyu Li, 2021`_) and data availability (`Degbelo & Kuhn, 2018`_). It is sometimes advantageous to use multiple low-resolution models instead of a singular high-resolution model, such as for military simulation (`Xuefei et al., 2017`_). Within a multi-model structure, all models can contribute meaningfully to each other to create a more integral view of a complex system (`Nikolic, 2023`_; `Seck & Honig, 2012`_).
Method, results, and findings¶
The method involved creating a model audit for each model. During this process, the auditing method was standardized to facilitate its reuse. The audit enabled better model understanding and identified parameters or variables to be considered when determining A/D functions and consistency checks.
Criteria for shortlisting models
The models must vary in resolution sufficiently to provide a challenging gap to bridge
The models must be energy modeling-related
There must be a feasible case for why one would want to couple these models
The models must be readily available for research
Models shortlisted:
Energy Transition model (ETM) (`Quintel, n.d.`_)
Hydrogen-buffered Wind Power Model (HWP) (`Boereboom, n.d.`_)
Electric vehicle power demand Model (EVM) (`Boereboom, n.d.`_)
Two coupling cases were finalized:
ETM-HWP coupling: ETM provided electricity price, power production, and wind speed. HWP model derived the profitability of a wind farm. This is a one-way connection from ETM to HWP. The coupling is static. The resolution difference is related to wind farm modeling details.
ETM-EVM coupling: ETM provided electricity price and population information (along with growth) to EVM. EVM calculated electric vehicle (EV) electricity demand and storage supply curves. ETM has a national approach, while EVM distributes agents across space at a municipality level. This coupling effort tried to address spatial resolution difference-based challenges in multi-modeling.
The method involved three steps:
Model auditing: a set of questions was prepared to guide the investigation of the critical resolution-related elements of each model, resulting in a complete model audit.
Coupling auditing: each model coupling underwent an audit to identify variable-related issues and to suggest mitigation strategies.
Coupling implementation: with the knowledge of the coupling issues and the steps to mitigate them, an actual coupling was implemented.
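A core part of such a coupling implementation is the aggregation/disaggregation (A/D) step that bridges the spatial resolution gap between a national model like ETM and a municipal model like EVM. A minimal sketch follows; the municipality names and population figures are hypothetical illustration values, not data from the thesis.

```python
# Sketch of a disaggregation (D) step bridging the spatial resolution gap
# between a national model (ETM) and a municipal model (EVM), plus the
# inverse aggregation (A) step and the consistency check an audit suggests.

def disaggregate(national_value, population_shares):
    """Distribute a national quantity over municipalities by population share."""
    total = sum(population_shares.values())
    return {m: national_value * p / total for m, p in population_shares.items()}

def aggregate(municipal_values):
    """Inverse step: sum municipal results back to a national figure."""
    return sum(municipal_values.values())

# Hypothetical population figures, for illustration only.
shares = {"Tholen": 26_000, "Goes": 38_000, "Middelburg": 49_000}
national_ev_demand_mwh = 1_000.0

municipal = disaggregate(national_ev_demand_mwh, shares)

# Consistency check: aggregating the disaggregated values must recover
# the national total, i.e. A(D(x)) == x (up to floating-point error).
assert abs(aggregate(municipal) - national_ev_demand_mwh) < 1e-9
print(municipal)
```

The round-trip check at the end is one concrete form of the consistency checks identified during model auditing.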
Results showed that increasing the electricity price and power supply increases the cash flow from wind farms (ETM-HWP coupling). Changing the electricity price does not impact the constraints of the HWP model, whereas changing the power output has minimal impact. With an increase in the hourly electricity price input to the EVM (ETM-EVM coupling) and an increase in the national population (translated into an increase in municipality populations), the mean national power demand and the mean national vehicle-to-grid capacity increased. Changing only the electricity price did not impact these aspects.
The results provided a clear overview of the effectiveness and limits of the described method to alleviate resolution-based coupling challenges.
Conclusions and future work¶
This study was an exploratory modeling effort conducted to reflect the feasible consistency limits of various techniques back to the stakeholders to whom the models belong. Some of the essential conclusions from the coupling audit activity are:
The main research question can be answered by using an audit-based coupling process, comprised of questions aimed at detecting issues and checking the effectiveness of the means to solve them.
System expertise is highly advantageous for identifying and tackling problems encountered during the auditing and coupling process.
Coupling models with different resolutions, which requires aggregation and disaggregation efforts, needs an intermediate data model to translate and (possibly) store data to facilitate the coupling.
The coupling of two models will likely require constructing an A/D effort specifically tailored to the needs of the models in question.
It is challenging to determine what constitutes good consistency or sufficient overlap, as there are no benchmarks to compare with.
Future work could carry on this research in the following ways:
Use the macro case, which performs scaling activity, to analyze the demand distribution of different sectors at a regional level.
Identify the differences in the impact of energy infrastructure between the national and regional levels regarding interregional energy flows, investment, and technical characteristics.
A link to Bram Boereboom’s master thesis work follows:
https://repository.tudelft.nl/islandora/object/uuid%3A6b5867d3-e6bb-46f8-bf2a-aea9399cae17
Bibliography
Boereboom, B. (2022). EVM and HWP model. Retrieved November 16, 2023, from https://github.com/bramboereboom/MSc-thesis
Brandmeyer, J. E., & Karimi, H. A. (2000). Coupling methodologies for environmental models. Environmental Modelling & Software, 15(5), 479–488. https://doi.org/10.1016/S1364-8152(00)00027-X
Degbelo, A., & Kuhn, W. (2018). Spatial and temporal resolution of geographic information: an observation-based theory. Open Geospatial Data, Software and Standards 2018 3:1, 3(1), 1–22. https://doi.org/10.1186/S40965-018-0053-8
Jie Chen, & Xiaoyu Li. (2021). Research on Key Technologies of Multi-resolution Modeling Simulation. 687–693.
Nikolic, I. (2023). Towards integrated decision-making in the energy transition. https://multi-model.nl/
Nikolic, I., Warnier, M., Kwakkel, J. H., Chappin, E. J. L., Lukszo, Z., Brazier, F. M., Verbraeck, A., Cvetkovic, M., & Palensky, P. (2019). Principles, challenges and guidelines for a multi-model ecology. https://doi.org/10.4233/UUID:1AA3D16C-2ACD-40CE-B6B8-0712FD947840
Quintel. (n.d.). Energy Transition Model. Retrieved December 10, 2019, from https://energytransitionmodel.com/?locale=en
Rabelo, L., Kim, K., Park, T. W., Pastrana, J., Marin, M., Lee, G., Nagadi, K., Ibrahim, B., & Gutierrez, E. (2016). Multi resolution modeling. Proceedings - Winter Simulation Conference, 2016-February, 2523–2534. https://doi.org/10.1109/WSC.2015.7408362
Salome, S. (2021). On the challenge of designing a robust military force: a multi-resolution modelling approach to improve the performance of a naval force support system. https://repository.tudelft.nl/islandora/object/uuid%3Abaa50bd3-e32a-44ee-9fe4-f9593d3e0829
Seck, M. D., & Honig, H. J. (2012). Multi-perspective modelling of complex phenomena. Computational and Mathematical Organization Theory, 18(1), 128–144. https://doi.org/10.1007/S10588-012-9119-9/TABLES/2
Vangheluwe, H., de Lara, J., & Mosterman, P. J. (2002). (PDF) An introduction to multi-paradigm modelling and simulation. https://www.researchgate.net/publication/243776266_An_introduction_to_multi-paradigm_modelling_and_simulation
Xuefei, Y., Qiang, L., Xiaolong, W., Dong, L., & Shoubiao, W. (2017). Non-consistence aggregation-disaggregation technology for battle simulation study of SoS. https://doi.org/10.18178/wcse.2017.06.066
Uncertainty analysis¶
This section focuses on two previous works on uncertainty analysis employing an exploratory tool available at TU Delft.
Exploratory modeling and analysis tool-based coupling¶
29 November 2023
Master thesis summary – Alexander Drent
This work has been further categorized into the following:
Introduction¶
Large-scale complex systems are increasingly becoming essential components of an evolving society; they combine social and technical (socio-technical) aspects with a network of different actors. These systems are unpredictable, highly uncertain, and evolve dynamically. To analyze them, multi-models need to be developed, with each constituent model focusing on a specific part of the (sub)system. Multi-models allow the investigation of different uncertainty paths across the entire range of the system. Developing multi-models faces the following challenges: interoperability, composability, and fidelity. In modeling socio-technical systems, there are different sources of uncertainty, such as stochastic variables and processes, a lack of accuracy and precision, and errors (`Pace, 2015`_). Uncertainties within multi-models impact the ability to make decisions. Three types of uncertainty are pointed out: aleatory uncertainty (impossible to reduce by measurements), epistemic uncertainty (reducible by new measurements), and errors (`Pennock & Gaffney, 2018`_).
Uncertainty might propagate when coupling socio-technical models in a multi-model ecology (`Cuppen et al., 2021`_; `Nikolic et al., 2019`_). Accordingly, the research question was formulated as ‘To what extent can we apply existing uncertainty analysis methods to multi-models?’. This study identified additional sources of uncertainties in multi-model ecologies compared to constituent single models. Existing methods are applied to analyze uncertainty propagation in single models. This study explored a variety of uncertainty analysis tools and methods for performing sensitivity analyses of single models within a multi-model, along with the whole multi-model ecology.
State-of-the-art¶
Based on the literature (`Kwakkel et al., 2010`_; `Petersen, 2012`_; `W.E. Walker P. Harremoës & von Krauss, 2003`_), five locations of uncertainty have been identified: the conceptual model; the computer model (including model structure and parameters); the input data; the implemented technical model; and the processed output data. The level of uncertainty indicates the degree or severity of uncertainty. Five levels of uncertainty have been identified between deterministic knowledge and total ignorance (`Pruyt & Kwakkel, 2014`_); these levels vary in context, system model, system outcomes, and weights on outcomes (`W.E. Walker P. Harremoës & von Krauss, 2003`_). The nature of uncertainty concerns whether the uncertainty is caused by variability in the real-world system (ontic) or by a lack of knowledge (epistemic) (`Kwakkel et al., 2010`_; `Petersen, 2012`_; `van den Hoek et al., 2014`_; `W.E. Walker P. Harremoës & von Krauss, 2003`_).
Multiple methodologies for uncertainty analysis were studied, and some were applied. They were categorized into sensitivity analyses, calibration, and comparison of methods. Two main approaches to sensitivity analysis were identified: local and global. Sensitivity analyses are sometimes called independent sampling because they use model parameters whose values are specified beforehand. A distinction is made between one-at-a-time (OAT) and all-at-a-time (AAT) sampling. The benefit of AAT sampling is that interaction effects between input parameters can be evaluated, which is impossible using OAT sampling. Three types of AAT sampling are commonly distinguished: Monte Carlo (MC), Latin Hypercube Sampling (LHS), and Sobol sequences (`Pianosi et al., 2016`_). Sensitivity analyses serve different applications depending on the purpose of the analysis: factor prioritization (FP, identifying the parameters that have a significant influence on the model output), factor fixing (FF, identifying the parameters that have a minimal effect on the variance of the output), variance cutting (VC, bringing the output uncertainty below a determined threshold while fixing as few input values as possible), and factor mapping (FM, determining which region of the output space is associated with which part of the input space) (`Saltelli, A., Ratto, M., Andres, T. et al., 2007`_). Other sensitivity analysis methods identified in this study are Morris, RBD-FAST, PAWN, Random Decision Forests, and the Patient Rule Induction Method.
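The OAT/AAT distinction can be illustrated with a toy model whose output is a pure interaction term between its two inputs: OAT sampling around a zero baseline never sees the interaction, while AAT sampling does. This sketch is illustrative only and is not one of the specific methods applied in the thesis.

```python
import random

random.seed(42)

def model(x1, x2):
    # Toy model whose output is a pure interaction effect.
    return x1 * x2

# One-at-a-time (OAT): vary each input alone around an assumed baseline of 0.
baseline = 0.0
oat_outputs = ([model(x, baseline) for x in (-1, 1)]
               + [model(baseline, x) for x in (-1, 1)])

# All-at-a-time (AAT): Monte Carlo sampling of both inputs at once.
aat_outputs = [model(random.uniform(-1, 1), random.uniform(-1, 1))
               for _ in range(1000)]

print(max(abs(y) for y in oat_outputs))  # 0.0: OAT misses the interaction
print(max(abs(y) for y in aat_outputs))  # clearly non-zero: AAT reveals it
```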
Calibration methods stem from the understanding that multiple parameter sets may result in comparable outcomes and can match calibration data or a statistical model; this is called equifinality (Beven & Freer, 2001). Different calibration methods have been suggested in the literature, such as the Generalized Likelihood Uncertainty Estimation (GLUE) method (`Beven & Binley, 1992`_) and the Markov Chain Monte Carlo (MCMC or MC2) method (`Vrugt, 2016`_).
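A GLUE-style behavioural-threshold step can be sketched as follows: sample many parameter sets and keep every "behavioural" set whose error against the calibration data falls below a threshold, rather than a single best fit. The linear toy model, observations, and threshold below are assumptions for illustration, not the calibration setup used in the study.

```python
import random

random.seed(0)

# Assumed calibration data, roughly generated by y = 2*x + noise.
xs = [0, 1, 2, 3, 4]
obs = [0.1, 1.9, 4.2, 5.8, 8.1]

def sse(a, b):
    """Sum of squared errors of candidate parameters (a, b) vs observations."""
    return sum((a * x + b - y) ** 2 for x, y in zip(xs, obs))

# GLUE-style step: sample parameter sets uniformly and retain the
# 'behavioural' ones below an assumed error threshold.
threshold = 1.0
behavioural = [(a, b)
               for a, b in ((random.uniform(0, 4), random.uniform(-2, 2))
                            for _ in range(20_000))
               if sse(a, b) < threshold]

# Equifinality: many distinct parameter sets match the data comparably well.
print(len(behavioural))
```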
Two archetypes of multi-model interaction can be considered: a directed graph and an undirected graph with feedback mechanisms. A directed graph has no run-time interaction between the separate computer models, whereas an undirected graph does have run-time interaction between the models.
Method, results, and findings¶
A framework is presented to assess uncertainty in the used simulation models and in the interface, based on the uncertainty matrices proposed in the literature (`Kwakkel et al., 2010`_; `Petersen, 2012`_; `W.E. Walker P. Harremoës & von Krauss, 2003`_). This matrix has the location of uncertainty on one axis and the level and nature of uncertainty on the other. The XPIROV framework (`Agusdinata, 2006`_) captured the impact of uncertainties and policies on the model output.
Global sensitivity analysis methods were focused on, as local methods do not adequately explore uncertainty in models with non-linearities (`Saltelli et al., 2019`_). A matrix was created to compare the methods mentioned in the previous section. The methods differ in intended purpose, assumed output distribution, type of uncertainties, sampling methods, and sample size.
An experimental setup was created for multi-models with undirected graphs and applied to the Windmaster model (described below). First, the uncertainties in the multi-model were assessed using the XPIROV framework and the uncertainty matrix. Then, a selection of the methods mentioned above was applied: sensitivity analyses with independent sampling techniques, and calibration with dependent sampling. The EMA workbench (`Kwakkel, 2017`_) was used because the multi-model was already implemented in it. The MC sampling technique was chosen because it allows adding samples afterward and investigating the convergence of feature scores over an increasing number of samples.
The Windmaster model was developed to discover robust policies for infrastructure investments and to explore uncertainties in the energy demand and supply of the industrial cluster of the port of Rotterdam. The multi-model consists of three connected single models with different modeling paradigms: an exploratory modeling scenario model, a technical-economic infrastructure model, and an investment behavior model. The transition pathways developed in EMA were defined as a series of discrete events affecting peak energy demand, required feedstock, and energy production or conversion. Uncertainty lies in the timing of the availability of new technology options and in their implementation lead time.
Policies are included in the model through four defined investment decision-making strategies of different network operators: reactive, current, proactive, and collaborative. These are influenced by the time horizon, investment goals, investment budget, propensity to save, and lead time per investment.
Feature scores showed that the decision-making strategies strongly influence electricity transmission network capacity and total capital expenditure. Extra-trees feature scoring performed per time step (i.e., per year) showed that decision-making strategies strongly influenced transmission capacity after 2030 and capital expenditure after 2020. The extra-trees feature scores also showed that decision-making strategies have high uncertainty scores for the used capacity of the transmission network. In general, the uncertainties strongly influence aspects such as boiler paths (technologies to produce steam), cogeneration paths (combined production of heat and electricity), decision-making strategies, and furnace paths (production of heat). Sobol analysis was used to understand which uncertainties strongly impact the different investment categories; the uncertainties related to the interface had a limited impact.
Conclusions and future work¶
The results showed that the EMA workbench can be used for uncertainty analysis of a multi-model structure. Sobol analysis showed that interaction effects between uncertainties played a role in the Windmaster model. Different assets influenced the uncertainty to different degrees, some significantly more than others, for example in capital investments, network capacity, or the impact of policies. Future research will focus on using this tool to perform uncertainty analysis of an existing case study within the multi-modeling project, using the tools and methods described in this research.
A link to Alexander Drent’s master thesis work follows:
https://repository.tudelft.nl/islandora/object/uuid%3Adebfcd39-38fc-493d-8948-012bb8e02f6b
Bibliography
Agusdinata, D. B. (2006). Specification of System of Systems for Policymaking in The Energy Sector. 2006 IEEE/SMC International Conference on System of Systems Engineering, 197–203. https://doi.org/10.1109/SYSOSE.2006.1652298
Beven, K., & Binley, A. (1992). The future of distributed models: Model calibration and uncertainty prediction. Hydrological Processes, 6(3), 279–298. https://doi.org/10.1002/hyp.3360060305
Beven, K., & Freer, J. (2001). Equifinality, data assimilation, and uncertainty estimation in mechanistic modelling of complex environmental systems using the GLUE methodology. Journal of Hydrology, 249(1), 11–29. https://doi.org/10.1016/S0022-1694(01)00421-8
Cuppen, E., Nikolic, I., Kwakkel, J., & Quist, J. (2021). Participatory multi-modelling as the creation of a boundary object ecology: the case of future energy infrastructures in the Rotterdam Port Industrial Cluster. Sustainability Science, 16, 901–918. https://doi.org/10.1007/s11625-020-00873-z
Kwakkel, J. H. (2017). The Exploratory Modeling Workbench: An open source toolkit for exploratory modeling, scenario discovery, and (multi-objective) robust decision making. Environmental Modelling and Software, 96, 239–250. https://doi.org/10.1016/j.envsoft.2017.06.054
Kwakkel, J. H., Walker, W. E., & Marchau, V. A. W. J. (2010). Classifying and communicating uncertainties in model-based policy analysis. International Journal of Technology, Policy and Management, 10(4), 299–315. https://doi.org/10.1504/IJTPM.2010.036918
Nikolic, I., Warnier, M., Kwakkel, J. H., Chappin, E. J. L., Lukszo, Z., Brazier, F. M., Verbraeck, A., Cvetkovic, M., & Palensky, P. (2019). Principles, challenges and guidelines for a multi-model ecology. https://doi.org/10.4233/UUID:1AA3D16C-2ACD-40CE-B6B8-0712FD947840
Pace, D. K. (2015). Fidelity, Resolution, Accuracy, and Uncertainty. In Modeling and Simulation in the Systems Engineering Life Cycle. http://www.springer.com/series/10128
Pennock, M. J., & Gaffney, C. (2018). Managing Epistemic Uncertainty for Multimodels of Sociotechnical Systems for Decision Support. IEEE Systems Journal, 12(1), 184–195. https://doi.org/10.1109/JSYST.2016.2598062
Petersen, A. C. (2012). Simulating nature: a philosophical study of computer-simulation uncertainties and their role in climate science and policy advice. https://doi.org/10.1201/b11914
Pianosi, F., Beven, K., Freer, J., Hall, J. W., Rougier, J., Stephenson, D. B., & Wagener, T. (2016). Sensitivity analysis of environmental models: A systematic review with practical workflow. Environmental Modelling & Software, 79, 214–232. https://doi.org/10.1016/j.envsoft.2016.02.008
Pruyt, E., & Kwakkel, J. H. (2014). Radicalization under deep uncertainty: a multi-model exploration of activism, extremism, and terrorism. System Dynamics Review, 30(1–2), 1–28. https://doi.org/10.1002/sdr.1510
Saltelli, A., Aleksankina, K., Becker, W., Fennell, P., Ferretti, F., Holst, N., Li, S., & Wu, Q. (2019). Why so many published sensitivity analyses are false: A systematic review of sensitivity analysis practices. Environmental Modelling & Software, 114, 29–39. https://doi.org/10.1016/j.envsoft.2019.01.012
Saltelli, A., Ratto, M., Andres, T., Campolongo, F., Cariboni, J., Gatelli, D., Saisana, M. and Tarantola, S. (2007). Sensitivity Analysis: From Theory to Practice. In Global Sensitivity Analysis. The Primer (eds A. Saltelli, M. Ratto, T. Andres, F. Campolongo, J. Cariboni, D. Gatelli, M. Saisana and S. Tarantola). https://doi.org/10.1002/9780470725184.ch6
van den Hoek, R. E., Brugnach, M., Mulder, J. P. M., & Hoekstra, A. Y. (2014). Analysing the cascades of uncertainty in flood defence projects: How “not knowing enough” is related to “knowing differently.” Global Environmental Change, 24, 373–388. https://doi.org/10.1016/j.gloenvcha.2013.11.008
Vrugt, J. A. (2016). Markov chain Monte Carlo simulation using the DREAM software package: Theory, concepts, and MATLAB implementation. Environmental Modelling & Software, 75, 273–316. https://doi.org/10.1016/j.envsoft.2015.08.013
Walker, W. E., Harremoës, P., Rotmans, J., van der Sluijs, J. P., van Asselt, M. B. A., Janssen, P., & Krayer von Krauss, M. P. (2003). Defining Uncertainty: A Conceptual Basis for Uncertainty Management in Model-Based Decision Support. Integrated Assessment, 4(1), 5–17. https://doi.org/10.1076/iaij.4.1.5.16466
Multi-model uncertainty analysis tool¶
The multi-model uncertainty analysis tool connects the Orchestrator (Apache Airflow) with the EMA (Exploratory Modeling and Analysis) workbench. It is a proof of concept that such a tool can assist with uncertainty analyses of multi-models. Figure 1 shows the structure of the uncertainty analysis tool.

Figure 1: Uncertainty analysis tool structure
A user provides information about the system in a settings file, including which uncertainties to include when generating scenarios and which outcomes to consider. This file is then used to create scenarios with the EMA workbench, which are subsequently transferred to the model input as ESDL files. With one ESDL file per scenario, the model is run once for every scenario. The outcomes (as specified in the settings file) are finally collected from the different model runs and can be analyzed using the tools from the EMA workbench. A more detailed explanation of the workflow is provided in the readme document in the repository.
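The settings-driven loop described above can be sketched with a mock model. This is a minimal sketch: the settings keys ("uncertainties", "outcomes"), the value range, and the cost formula are illustrative assumptions, not the tool's actual schema.

```python
import json
import random

# Hypothetical settings file: which uncertainties to sample and which
# outcomes to collect. The key names are illustrative assumptions.
settings = json.loads("""
{
  "uncertainties": {"pv_investment_cost": [400, 900]},
  "outcomes": ["total_costs"]
}
""")

def mock_model(scenario):
    # Stand-in for a model run dispatched by the Orchestrator.
    return {"total_costs": 1000 + 2.5 * scenario["pv_investment_cost"]}

random.seed(42)  # reproducible scenario sampling
scenarios = [
    {name: random.uniform(low, high)
     for name, (low, high) in settings["uncertainties"].items()}
    for _ in range(4)
]

# Collect only the outcomes requested in the settings file.
results = [{name: mock_model(s)[name] for name in settings["outcomes"]}
           for s in scenarios]
```

In the actual tool, scenario generation is delegated to the EMA workbench and each model run goes through the Orchestrator, with one ESDL file per scenario.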
A mock-up example was created to demonstrate the functionality of the tool. To this end, the meso use case ESDL file was used, and three input and output parameters were chosen for test runs. A mock-up model function generates the output data. This resulted in the pair plot below, which shows the variation of inputs and outputs across the different scenario runs.

Figure 2: Pair plot based on the PV investment (solar panels investment costs), total costs of the system, and the total energy production of the municipality of Tholen. The data in this figure is mock-up data.
With the mock-up example, this work shows that uncertainty analysis of a multi-model is possible with a supporting tool that links the EMA workbench with the Orchestrator. Future work can expand the functionality of this proof-of-concept tool and integrate it with the multi-model use cases.
Link to the tool repository: https://github.com/MultiModelling/Multi-Model-Uncertainty-Analysis-tool (accessed November 20, 2023).
End user documentation¶
Running a use-case¶
Go to Airflow in your browser. If you installed according to the Installation instructions, you should be able to access it at http://localhost:8080.
Log in using the credentials you set up. If you did not set any, the default username and password are both airflow.
After logging in, you will see a page similar to the one shown below:
Here, you are seeing a list of DAGs and their status. DAGs (Directed Acyclic Graphs) define workflows by collecting tasks that are performed in steps. In MMviB, the workflow of a use-case is defined by a DAG; that is, each DAG in this list defines a use-case.
Click on the DAG of the use case you would like to run. You will see scheduling information related to the selected DAG on your screen.
Click on the Graph tab. You will see the workflow steps of this DAG. At the top right of the workflow, click the button with the ⏵ symbol (a play button, i.e. a triangle pointing right).
You will see a pop-up. Select Trigger DAG w/ config.
You will see the following screen.
Paste your configuration and click Trigger. You can find more information about the configuration in Creating a use-case. You can follow the status of the use-case on this screen.
The color of each box's frame changes according to the task's status. The meaning of each color is shown in the legend at the top right of the workflow.
If you click on a step (task) in the workflow, you will see the following pop-up window.
Here, you can select Log to inspect the output of a task that communicates with a Model Adapter. You can also manually mark the task as failed or succeeded.
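Besides the UI, a DAG run with a configuration can also be triggered programmatically through Airflow's stable REST API (POST to the dagRuns endpoint, with the configuration under the "conf" key). The DAG id and configuration content below are placeholder values for illustration.

```python
import json

# Build the request for triggering a DAG run with a configuration.
# "example_use_case" is a placeholder DAG id.
dag_id = "example_use_case"
url = f"http://localhost:8080/api/v1/dags/{dag_id}/dagRuns"
payload = {"conf": {"metadata": {"experiment": "Trial_1"}}}
body = json.dumps(payload)

# With the requests library (not executed here), this would be:
# requests.post(url, data=body, auth=("airflow", "airflow"),
#               headers={"Content-Type": "application/json"})
```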
Creating a use-case¶
To create a new use-case, you need to prepare a DAG file and a configuration file.
DAG file¶
Note
This section will not explain how to create a DAG file from scratch. You can review already created examples in Multi Modelling Model Repository. Apache Airflow Documentation provides more generic information on DAGs.
Airflow DAGs are defined in Python. These are regular .py files and they have to be placed under a folder in the dags directory in Model-Orchestrator.
These DAGs must contain the definitions of the tasks that will run consecutively.
A task can be created with the Operator classes of Airflow. The example use-cases use PythonOperator for most of their tasks, with three Python callables that are generic for this project: subroutine_initialize, subroutine_computation and subroutine_finalize.
A task that uses subroutine_initialize has to be created; in the examples, it is named Initialize. For subroutine_finalize, the task in the examples is named Finalize. For each step that requires an interaction with a Model Adapter, a separate task utilising subroutine_computation is needed.
This is an example task called My_model, which utilises subroutine_computation:
My_model = PythonOperator(dag=dag,
                          task_id='My_model',
                          python_callable=subroutine_computation)
Note that there is no information here on configuration or on how to use the model. These are provided in a configuration file and are referenced by the name of the task, in this case My_model.
Finally, the task order can be defined with the >> operator. The task on the right side of the operator will run after the one on the left side.
Example:
Initialize >> My_model >> Finalize
This will trigger Initialize first, then My_model, and finally Finalize.
Repeating tasks¶
If a task, or a group of multiple tasks, needs to be repeated in the DAG, a function that returns one repetition as a task group can be created as follows:
def group(number, **kwargs):
    with TaskGroup(group_id=f'Iteration_{number}') as tg1:
        t1 = PythonOperator(dag=dag,
                            task_id=f'First_repeated_task_{number}',
                            python_callable=subroutine_computation)
        t2 = PythonOperator(dag=dag,
                            task_id=f'Second_repeated_task_{number}',
                            python_callable=subroutine_computation)
        t1 >> t2
    return tg1
Then repetition can be achieved as in the following example:
prev = Initialize >> Task_A
iters = 2
for i in range(1, iters + 1):
    item = group(i)
    if prev is not None:
        prev >> item
    prev = item
prev >> Task_B >> Finalize
The running order will be: Initialize, Task_A, First_repeated_task_1, Second_repeated_task_1, First_repeated_task_2, Second_repeated_task_2, Task_B, Finalize.
Configuration file¶
This file includes configuration information about a specific run of a use-case. It is written in JSON format and consists of 4 sections:
metadata includes information to identify a task: experiment, project, run, scenario and user. The directory used in Minio for input/output files is selected according to the values under metadata.
modules includes the address of the Model Registry.
databases holds a dictionary of connection parameters for each of the databases used by the models.
tasks holds a dictionary with the configuration of each task in the DAG that this configuration will be used with. The key of each dictionary item has to match the task name in the DAG. The contents of the model_config key are specific to the model that is going to be used.
Example configuration:
{
  "metadata": {
    "experiment": "Trial_1",
    "project": "tholen",
    "run": "MM_workflow_run_1",
    "scenario": "v05-26kw",
    "user": "mmvib"
  },
  "modules": {
    "model_registry": "http://mmvib-registry:9200/registry/"
  },
  "databases": {
    "Influx": {
      "api_addr": "influxdb:8086",
      "db_config": {
        "db_name": "energy_profiles",
        "use_ssl": false
      }
    },
    "Minio": {
      "api_addr": "minio:9000",
      "db_config": {
        "access_key": "admin",
        "secret_key": "password",
        "secure": false
      }
    }
  },
  "tasks": {
    "Task_A": {
      "api_id": "Model_A",
      "model_config": {
        "input_esdl_file_path": "test/input.esdl",
        "output_esdl_file_path": "test/1/output.esdl"
      },
      "type": "computation"
    },
    "Task_B": {
      "api_id": "Model_B",
      "model_config": {
        "action": "some_action",
        "action_config": {
          "some_action": {
            "input_esdl_file_path": "test/1/output.esdl",
            "output_esdl_file_path": "test/2/output.esdl"
          }
        },
        "some_config": {
          "some_val": "sample"
        }
      },
      "type": "computation"
    }
  }
}
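Since each key under tasks has to match a task name in the DAG, a quick consistency check can catch mismatches before a run is triggered. This is a minimal sketch; the task ids mirror the examples in this section.

```python
import json

# Verify that every task in the configuration exists in the DAG.
config = json.loads("""
{
  "tasks": {
    "Task_A": {"api_id": "Model_A", "type": "computation"},
    "Task_B": {"api_id": "Model_B", "type": "computation"}
  }
}
""")

# Task ids as defined in the example DAG.
dag_task_ids = {"Initialize", "Task_A", "Task_B", "Finalize"}

# Empty set means the configuration is consistent with the DAG.
unknown_tasks = set(config["tasks"]) - dag_task_ids
```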
Energy Models¶
The following energy models are used in this project (in alphabetical order):
CTM (Carbon Transition Model)¶
The Carbon Transition Model (CTM) is a tool to explore pathways to zero emissions for the Dutch industry as well as future industries that produce synthetic molecules (from carbon, water and electricity). Industrial activity is modelled according to historic public data and has been validated. The user can explore a future year by making changes to a reference ‘base year’ scenario. The model then provides information on changes to emissions, costs, energy and feedstock, technology choices, infrastructure and much more.
The model covers the entire Dutch industry. The largest energy intensive industrial sites are modelled using a bottom-up approach. This includes steel, refineries, fertilizer plants, large base chemical plants including steam crackers, industrial gases and methanol production, some inorganic chemical plants for salt, chlorine and petrochemical catalyst production as well as waste incineration plants. The remainder of Dutch industry has been modelled using a top-down approach based on national energy statistics and site specific emission data.
The model gives information about these industries at the national level (The Netherlands), industry sector level, cluster level (Rotterdam, Zeeland, Groningen, Noordzee Kanaalgebied, Chemelot and Cluster 6) and site level.
For more information, see the CTM documentation
EPS (Energy Potential Scan)¶
The Energy Potential Scan for Business Parks (EPS) gives a first order estimate for the business case for sustainable energy measures on business parks, for individual companies and for the business park as a whole. It has been developed in practice and successfully applied for more than 100 business parks in the Netherlands, also as a commercial product. It does not require company-specific data for a first estimate, as it uses geographical and other data which is open or commercially available on a national scale. Therefore, it is a relatively quick and cost-effective scan. The results are presented at the building-level and can be visualized using GIS. The results are also available in ESDL.
For more information, see the BE+ website about the EPS
ESSIM (Energy System Simulator)¶
The Energy System Simulator (ESSIM) is a discrete-time simulation tool and collection of models that calculates energy flows in assets, and the effects thereof, in an interconnected hybrid energy system over a period of time. With the energy flows that ESSIM calculates, one can gain insight into how well the assets in a network are dimensioned, whether any transport asset (like pipes, cables, etc.) is overloaded, and what the effect of storage is in any part of the network.
For more information see: ESSIM Documentation
ETM (Energy Transition Model)¶
The Energy Transition Model (ETM) is an online model which enables users to explore possible futures for a specific energy system. The model is open-access, open-source, web-based and interactive in its use. Through sliders, users can make explicit assumptions and choices about the future of their energy system based on its current situation. Currently the ETM models EU countries and most Dutch provinces, municipalities and RES regions. Open data is used to model these different energy systems.
The ETM is a bottom-up simulation model. All relevant processes and energy flows are captured in a graph structure which describes all possible routes for exchanging energy between sectors and processes. All relevant sectors and energy carriers of the energy system are also included. The ETM calculates the yearly energy balance for all energy carriers, and the hourly energy balance for electricity, heat and hydrogen. The model is run both for the start year and for every hour of the selected future year. Based on (new) slider settings, the model is rerun and supply and demand are automatically balanced on an hourly basis using a merit module. The results include system KPIs such as the total costs and CO2 emission reduction of the modelled energy system.
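The hourly balancing by a merit module can be illustrated with a stylised merit-order dispatch, where the cheapest producers are dispatched first until demand is met. This is only a sketch of the principle; all capacities and costs below are invented.

```python
# Producers as (name, capacity in MW, marginal cost in EUR/MWh).
# Figures are invented for illustration.
producers = [
    ("solar", 30, 0.0),
    ("gas", 100, 60.0),
    ("coal", 80, 45.0),
]

def dispatch(demand_mw):
    """Dispatch producers in merit order (cheapest first) to meet demand."""
    dispatched = {}
    remaining = demand_mw
    for name, capacity, _cost in sorted(producers, key=lambda p: p[2]):
        take = min(capacity, remaining)
        if take > 0:
            dispatched[name] = take
        remaining -= take
        if remaining <= 0:
            break
    return dispatched

dispatch(100)  # solar covers 30 MW, the rest comes from the next-cheapest source
```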
For more information see: ETM Documentation
MOTER (Modeler of Three Energy Regimes)¶
MOTER is an optimization tool for the dispatch of 'multi-commodity' energy systems consisting of interconnected electricity, natural gas, hydrogen and heat networks. MOTER was developed in the period 2015-2020 by DNV as the calculation engine for the DNV 'Energy Transition Simulator' (ETS). The ETS allows 10-15 participants in a workshop setting to explore decarbonization pathways, from 2020 to 2050, for a simple fictive world ('Enerland'), but with real-world techno-economic physics and price models. As the physics engine, MOTER dispatches all production, end-use, transport, conversion and storage assets of the Enerland energy system at the lowest overall cost. The objective of MOTER within the multi-model macro case is to add network dispatch, in particular network congestion management, to the national 'II3050-Mobility' case.

The original Enerland case in the Energy Transition Simulator.¶

The Netherlands-II3050-Mobility network template in the MultiModel¶
MOTER scope¶
The objective of MOTER (Modeler of Three Energy Regimes) is to find the optimal techno-economic performance of an externally provided multi-commodity energy system, consisting of the topology and capacities of the following assets:
Primary energy generation via wind turbines, solar PV, geothermal, coal power plants (+CCS), natural gas production and energy import/ export.
Residential, industrial heat &power demand centers via ‘smart’ end use applications.
Energy conversion via gas-to-power, power-to-gas, power-to-heat, gas-to-heat, natural gas to hydrogen technologies.
Energy transport networks consisting of power cables, gas pipelines, heat networks (+ transformers, compressors)
Energy storages (battery, pumped hydro, underground gas storage, insulated hot water tanks)
The output of MOTER is the cost-optimal 'dispatch' of the flex in the energy system. The term 'flex' refers to any measure the market can take to reduce supply-demand imbalances, such as:
Bridging energy supply-demand imbalances in space using passive assets (pipes and cables) in combination with dispatchable assets (compressors and transformers)
Bridging energy supply-demand imbalances in time using storage.
Bridging energy supply-demand imbalances in energy type using conversion.
Additional ‘flex’ options:
Production flex: ramp-up of flexible sources (natural gas, oil import, etc.), curtailment of fixed sources (solar PV, wind turbines, geothermal, etc.) according to relative merit orders.
End user flex: curtailment and time shifting of energy end use according to relative merit orders (industrial/residential vs. electric power/space heating).
Conversion flex: multi-fuel operations (heat pump + natural gas backup)

Illustration of a multi-commodity energy system in MOTER. Energy carriers enter the system via network 'entries' (producers) and leave via network 'exits' (consumers). To match supply and demand, energy carriers can be transported via cables and pipes and stored in storages. Energy carriers can be converted from carrier A to B via (energy) converters (boilers, electrolyzers, steam methane reformers). Energy 'states' (voltage, pressure) can be altered using (state) converters like compressors and transformers. The 'quality' of the energy (mainly gas calorific value, heat network water temperature) can be changed via (quality) converters like natural gas to hydrogen converters, gas mixing stations and back-up heaters. MOTER does not yet model AC power 'cos phi' or reactive power.¶
Energy carriers in scope are:
| Energy carriers | Subtypes | Modeled properties |
|---|---|---|
| Electricity | HV, MV, LV | Voltage, current (DC eq.), power |
| Gaseous | Natural gas, biogas, hydrogen | Pressure, flow, calorific value, power |
| Heated water | Heat network, local demand | Pressure, flow, temperature, power |
| External fuels | Oil, coal, nuclear | Flow, calorific value, power |
The assets in scope are:
| Type | Examples | Input parameters | Output |
|---|---|---|---|
| Production/entries | Wind turbine, gas field, geothermal, oil import | Capacity, curtailment/import cost, requested profile | Operational hours, realized profile |
| End use/exits | Industrial/residential, heat/power, mobility | Capacity, curtailment/export cost, requested profile | Operational hours, realized profile |
| Converter (carrier) | Gas-to-power, power-to-gas, gas-to-heat, power-to-heat, natural gas to hydrogen | Capacity, efficiency | Operational hours, realized profile |
| Converter (transport) | Transformer, compressor | Capacity, efficiency | Operational hours, realized profile |
| Converter (quality) | Gas blending, heat booster | Capacity, efficiency | Operational hours, realized profile |
| Transport | Cable, pipe | Length, conductivity | Operational hours, realized profile |
| Storage | Battery, salt cavern, hot water tank | Volume, send-in/out capacity | Operational hours, realized profile |
Special feature: time slices¶
A special challenge when working with optimization modeling is the maximum number of assets that can be modelled in combination with their properties and the time resolution; in other words, the size of the 'objects(variables, T) matrix' that still fits into computer memory and can be solved in acceptable calculation time. Macro energy modelling requires that both the seasonal and the intraday dynamics are captured by the model. However, modeling 8760 hours/year 'blows up' memory usage and so severely restricts the number of active objects (distributed and connected in space, including subcategories) that the model becomes unsuitable for real-world applications. This is especially true when Monte Carlo methods ('perform a great number of runs with stochastically varied inputs') are being considered and model runs need to be (very) fast. The solution in MOTER, as is also used for OPERA, is to reduce the 8760 hours to a defined subset of 'time slices' during the year, because a series of 8760 hours contains a significant amount of redundant information. In one case study it was established that with only 16 snapshots (night/morning/afternoon/evening x winter/spring/summer/autumn) sufficient accuracy (>90%) may already be achieved, in only a fraction (<1%) of the calculation time. To create an 8760-hour profile from the snapshots, a simple 'sample & hold' reconstruction algorithm is used.
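The 'sample & hold' reconstruction can be sketched as follows. The slice layout (four day moments x four seasons) follows the 16-snapshot example above, while the slice assignment rule and the slice values are placeholders, not MOTER's actual definitions.

```python
HOURS = 8760  # hours in a non-leap year

def slice_index(hour_of_year):
    """Map an hour of the year to one of 16 time slices (season x moment)."""
    day = hour_of_year // 24
    season = min(day // 91, 3)           # four ~91-day seasons
    moment = (hour_of_year % 24) // 6    # night/morning/afternoon/evening
    return season * 4 + moment

# One placeholder value per time slice (would come from the optimizer).
slice_values = [float(i) for i in range(16)]

# Sample & hold: every hour takes the value of its time slice.
profile = [slice_values[slice_index(h)] for h in range(HOURS)]
```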
Note that when using 'global optimization', special care has been taken that the relative order of the snapshots, i.e. causality, is respected. This is because global optimizers calculate all time steps at once, unlike simulators, which run through the time steps consecutively. MOTER carries extra time-hierarchy information: first the intraday snapshot order (night, morning, afternoon, evening) and then the ordering of the days during the year (January 1st to December 31st). This 'proper time ordering' is important when optimizing energy storage systems that perform both intraday and seasonal balancing functions.

Example of defining the time slices that serve as ‘proxies’ for the reconstruction of full year dynamics.¶
For more information see Netbeheer Nederland datasheet about MOTER
Additional documentation¶
Note that more documentation about MOTER can be found on the TU Delft server under AIMMS_models
OPERA (Option Portfolio for Emissions Reduction Assessment)¶
OPERA is an optimization model based on linear programming. It represents the entire energy system of the Netherlands, including bunker fuels, feedstocks, and all domestic greenhouse gas (GHG) emissions. It is possible to optimize for individual years and across years (a dynamic optimization). The user can define several policy-relevant targets, such as a GHG reduction target, a final energy consumption target, etc. Furthermore, individual technologies, groups of technologies, and resources can be restricted by maxima or minima. For example, a maximum capacity potential for offshore wind can be set. The model will decide if this maximum is needed.
OPERA uses time slices, in which hours with a similar character are grouped in the same time slice. Input data with an hourly resolution are aggregated in these time slices. Examples of hourly input data are hourly wind speeds and the hourly electricity demand profile for the household sector. The time-slice settings can also be set such that an n-hourly resolution is achieved.
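The aggregation of hourly input data into time slices can be sketched as grouping hours by slice and averaging. The slice assignment rule and the demand profile below are simplified stand-ins for OPERA's actual grouping, for illustration only.

```python
from collections import defaultdict

# Two days of invented hourly demand data (repeating daily pattern).
hourly_demand = [100 + (h % 24) for h in range(48)]

def slice_of(hour):
    """Assign an hour to one of four 6-hour slices per day (simplified)."""
    return (hour % 24) // 6

# Aggregate: hours in the same slice are averaged.
totals = defaultdict(float)
counts = defaultdict(int)
for h, value in enumerate(hourly_demand):
    s = slice_of(h)
    totals[s] += value
    counts[s] += 1

slice_averages = {s: totals[s] / counts[s] for s in totals}
```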
The driver of the model is the energy demand. The most significant amount of demand is determined via service demands. Examples are a predefined amount of tons of steel that need to be produced, passenger kilometers that need to be driven by cars, etc. In all these cases, the model determines what technologies are used to fulfill these service demands. Therefore, the final demand for energy carriers is not predefined in advance. For the remaining part of the energy system, which is too small to represent individual types of service demand, a remaining final electricity and heat demand per sector needs to be fulfilled.
The user can subdivide the Netherlands into regions. These regions can be connected via transmission infrastructure. Import and export of electricity with neighboring countries is not determined endogenously, but is covered by coupling the model to a European electricity market model.
For more information, see Netbeheer Nederland datasheet about OPERA
TEACOS (Techno-Economic Analysis Of Complex Option Spaces)¶
TEACOS is a mathematical optimization tool for mid- to long-term strategic investment analysis. The tool is designed to assist in the investment decision making process. It aims to answer the following questions:
In which (decarbonization) opportunities to invest?
What is the optimal investment timing?
How much to invest?
By answering these questions, TEACOS provides credible, affordable and competitive transition pathways towards a low carbon energy system. TEACOS is completely data driven. Because of this, it can be applied in any industrial sector and on any scale.
TEACOS models the supply chain as a network. In the network, nodes represent locations or (production) units, and the connections between the nodes (arcs) represent transport of commodities between the nodes. Additionally, possible adaptations to the network infrastructure can be modelled as investments. The model selects the best combination of investments and calculates the corresponding product flow such that either the Net Present Value is as high as possible, or the costs are minimized.
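The investment selection described above can be illustrated with a brute-force sketch that picks the combination of investments maximizing net value under a budget. TEACOS itself uses mathematical optimization over a full supply-chain network; all names and figures below are invented.

```python
from itertools import combinations

# Candidate investments as name: (capex, value contribution).
# Figures are invented for illustration.
investments = {
    "electrolyser": (50, 70),
    "heat_pump": (30, 45),
    "co2_capture": (60, 65),
}
budget = 90

# Enumerate all combinations and keep the best feasible one.
best, best_value = (), 0.0
for r in range(len(investments) + 1):
    for combo in combinations(investments, r):
        capex = sum(investments[n][0] for n in combo)
        net_value = sum(investments[n][1] for n in combo) - capex
        if capex <= budget and net_value > best_value:
            best, best_value = combo, net_value
```

Real instances use a solver rather than enumeration, since the number of combinations grows exponentially with the number of candidate investments.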
One of the major strengths of TEACOS lies in answering ‘what-if’ questions: i.e. ‘what if CO2 emission costs rise?’, by defining several scenarios in which certain assumptions are altered: i.e. a scenario with fixed CO2 emission costs and one where CO2 emission costs change over time.
TEACOS needs input on five different aspects:
Supply: resource availability and cost, utility availability and cost.
Conversion Infrastructure: yields and capacities, CAPEX and OPEX.
Transport Infrastructure: capacities, CAPEX and OPEX.
Demand: product/service demand and sales prices.
Strategic input: investment opportunities and their impact, outlook on prices and costs, environmental constraints, learning curves, supply and demand scenarios, other constraints, other scenarios.
The input is currently read from an Excel file.
For more information, see the TEACOS website
The data exchange between the models is defined by ESDL (Energy System Description Language), with which the information about an energy system can be formally defined in XML format.
Example use cases¶
Within the MMviB project (Multi-Modelling for Integral Decision Making) we're considering three use cases:
Macro use case - National infrastructure¶
1. Introduction¶
1.1. Use case description¶
Currently, models from TSOs/DSOs in combination with the ETM are used to determine the impact of energy transition scenarios on the energy infrastructure.
However, these scenarios/models have their limitations with regards to:
Incorporating changing (economic) conditions (economic consistency vs. technical consistency)
Possibilities for optimization (what is optimal with regards to …?)
Optimization of (existing) assets taking into account spatial distribution and network limitations
Besides, all models have their shortcomings / blind spots. Within a multi-model structure they can compensate for each other’s shortcomings (if done right).
Through the application of a multi-model structure, existing scenarios can be improved with respect to economic consistency (OPERA), scenario optimization with respect to (a set of) KPIs (OPERA), and asset optimization (MOTER). TSO/DSO models will still provide the necessary asset information for infrastructure planning, while the ETM will provide the scenario description and function as a communication tool to visualize or adjust scenarios and optimization outcomes.
This gives stakeholders, such as TSOs and DSOs, more insight into the future energy system and enables them to make better, more cost-effective decisions benefitting society.
1.2. Models used¶
By making the following additions to the existing model structure (TSO/DSO models & ETM) we can:
Extract information from scenarios created by grid operators or policy makers (hourly curves, costs, KPIs, power, ...) (ETM)
Do a cost optimization to create economically consistent scenarios (OPERA)
Regionalise information and attach it to a grid topology (regionalisation model & connect infra)
Optimize asset dispatch and dimensions (incl. network calculations) (MOTER)
1.2.1. Opera¶
OPERA is a technology-rich energy system optimisation model for the Netherlands. Two features that make OPERA especially useful for developing sustainable energy scenarios for the Netherlands are: (1) it covers the complete energy system of the Netherlands and reflects all domestic emissions and types of greenhouse gases; (2) it simulates energy supply and demand, distinguishing different hour series with comparable supply and demand. These features permit the investigation of how to optimally deploy large capacities of intermittent renewable energy, among other things.
OPERA allows its users to examine the implications of technology diffusion, efficiency improvement and policy interventions that reduce emissions of greenhouse gases. In many studies, OPERA calculates the configuration of the Dutch energy system and the associated emissions, given specific goals and preconditions, at the lowest system costs for specific years (e.g. 2030, 2035, 2040, 2045 and 2050). Although at present OPERA is not a dynamic model, it does consider existing assets by taking into account investments made in previous years and their technical lifetime. In the year for which the optimization is performed, new investments are added to the existing assets if needed. For energy production and use, the model can choose from more than 600 technology options covering the whole technology chain from production to end-use demand services, including technologies that convert primary into secondary sources. The techno-economic data for these options are retrieved from a database containing current data and projections for parameter values in 2030 and 2050, derived from an extensive literature assessment. This techno-economic data has been reviewed by TNO experts for a large number of technologies and summarized in fact sheets (see https://energy.nl/). The fact sheets contain performance and cost parameters for 2030 and 2050 based on learning percentages. For technologies with learning potential for which the learning rate is unknown, an investment cost reduction of 20% is assumed between 2030 and 2050.
The energy system OPERA computes has to meet the annual demand for:
energy services (heat and electricity) of built environment, industry, service sector and agriculture,
domestic transport of people and goods,
fuels for international transport (bunker fuels),
production of industrial products (including steel, aluminium, ammonia, ethylene, methanol, chlorine, salt, ceramics and glass).
OPERA calculates the primary energy mix and an energy mix for each end-use sector. Fossil primary fuels (oil, coal and natural gas) are assumed to be available at a certain exogenous market price. For domestic renewable energy (solar, onshore and offshore wind, biomass, geothermal energy), a maximum potential applies. In OPERA, captured CO2 can be stored or used in industrial processes. A maximum capacity applies for the storage of CO2. OPERA can import refined oil products, biomass, biofuels, hydrogen and electricity at a certain price and within assumed supply limits. Electricity trade with neighbouring countries has been determined using the European electricity market model COMPETES (Lise, Sijm, & Hobbs, 2010). To calculate system costs, OPERA uses a national cost-benefit approach with a discount rate of 2.25% (Werkgroep Disconteringsvoet, 2020).
Taxes, levies (e.g. CO2 price) and subsidies are not taken into account. Total system costs are the sum of the annualised investment costs, annual operation and maintenance costs, cost for energy transport and costs for imported energy minus revenues from exported energy. OPERA only takes into account policy preconditions arising from the scenarios, such as closing coal-fired power stations before 2030 or a limited use of CO2 storage in the TRANSFORM scenario.
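Annualised investment costs of the kind summed above can be computed with the capital recovery factor, here using the 2.25% discount rate mentioned in the text; the investment size and lifetime are illustrative assumptions.

```python
def annualised_cost(capex, rate, lifetime_years):
    """Spread an investment over its lifetime with the capital recovery factor."""
    crf = rate / (1 - (1 + rate) ** -lifetime_years)
    return capex * crf

# Example: 1 MEUR investment, 2.25% discount rate, 25-year lifetime.
annualised_cost(1_000_000, 0.0225, 25)
```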

1.2.2. Energy transition model (ETM)¶
The Energy Transition Model (ETM) is an online model which enables users to explore possible futures for a specific energy system. The model is open-access, open-source, web-based and interactive in its use. Through sliders, users can make explicit assumptions and choices about the future of their energy system based on its current situation. Currently the ETM models EU countries and most Dutch provinces, municipalities and RES regions. Open data is used to model these different energy systems.
The ETM is a bottom-up simulation model. All relevant processes and energy flows are captured in a graph structure which describes all possible routes for exchanging energy between sectors and processes. All relevant sectors and energy carriers of the energy system are also included. The ETM calculates the yearly energy balance for all energy carriers, and the hourly energy balance for electricity, heat and hydrogen. The model is run both for the start year and for every hour of the selected future year. Based on (new) slider settings, the model is rerun and supply and demand are automatically balanced on an hourly basis using a merit module. The results include system KPIs such as the total costs and CO2 emission reduction of the modelled energy system.
1.2.3. Moter¶
MOTER is an optimization tool for the dispatch of "multi-commodity" energy systems consisting of interconnected electricity, natural gas, hydrogen and heat networks. MOTER was developed in the period 2015-2020 by DNV as the calculation engine for the DNV "Energy Transition Simulator" (ETS). The ETS allows 10-15 participants in a workshop setting to explore decarbonization pathways, from 2020 to 2050, for a simple fictive world ("Enerland"), but with real-world techno-economic physics and price models. As the physics engine, MOTER dispatches all production, end-use, transport, conversion and storage assets of the Enerland energy system at the lowest overall cost. The objective of MOTER within the multi-model macro case is to add network dispatch, in particular network congestion management, to the national "II3050-Mobility" case.

MOTER scope
The objective of MOTER (Modeler of Three Energy Regimes) is to find the optimal techno-economic performance of an externally provided multi-commodity energy system, consisting of the topology and capacities of the following assets:
Primary energy generation via wind turbines, solar PV, geothermal, coal power plants (+CCS), natural gas production and energy import/ export.
Residential, industrial heat & power demand centers via “smart” end use applications.
Energy conversion via gas-to-power, power-to-gas, power-to-heat, gas-to-heat, natural gas to hydrogen technologies.
Energy transport networks consisting of power cables, gas pipelines, heat networks (+ transformers, compressors)
Energy storage (batteries, pumped hydro, underground gas storage, insulated hot water tanks)
The output of MOTER is the cost-optimal “dispatch” of the flex in the energy system. The term “flex” refers to any measure the market can take to reduce supply-demand imbalances, such as:
Bridging energy supply-demand imbalances in space using passive assets (pipes and cables) in combination with dispatchable assets (compressors and transformers)
Bridging energy supply-demand imbalances in time using storage.
Bridging energy supply-demand imbalances in energy type using conversion.
Additional “flex” options:
Production flex: ramp-up of flexible sources (natural gas, oil import, etc.) and curtailment of fixed sources (solar PV, wind turbines, geothermal, etc.) according to relative merit orders.
End-user flex: curtailment and time shifting of energy end use according to relative merit orders (industrial/residential vs. electric power/space heating).
Conversion flex: multi-fuel operations (e.g. heat pump + natural gas backup)

Energy carriers in scope are:

Assets in scope are:

Scalable time granularity: time slices A special challenge when working with optimization modelling is the maximum number of assets that can be modelled in combination with their properties and the time resolution; in other words, the size of the “objects(variables, T) matrix” that will still fit into computer memory and can be solved in acceptable calculation time. Macro energy modelling requires that both the seasonal and the intraday dynamics are captured by the model. However, modelling all 8760 hours/year “blows up” memory usage and restricts the number of active objects (distributed and connected in space, and including subcategories) so severely that the model becomes unsuitable for real-world applications. This holds especially when Monte Carlo methods (“perform a great number of runs with stochastically varied inputs”) are being considered and model runs need to be (very) fast. The solution in MOTER, as is also used for Opera, is to reduce the 8760 hours to a defined subset of “time slices” during the year, because a series of 8760 hours contains a significant amount of redundant information. In one case study it was established that with only 16 snapshots (night/morning/afternoon/evening × winter/spring/summer/autumn) sufficient accuracy (~>90%) may already be achieved, in only a fraction (<1%) of the calculation time. To create an 8760-hour profile from the snapshots, a simple “sample & hold” reconstruction algorithm is used.
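The “sample & hold” reconstruction described above can be sketched as follows: each time-slice value is held constant until the next slice starts, expanding a small set of snapshots back into a full hourly profile. The slice layout and values below are purely illustrative, not taken from MOTER itself.

```python
def sample_and_hold(slice_values, slice_starts, n_hours=8760):
    """Expand sparse time-slice values into a full hourly profile.

    slice_values: value of each time slice, in chronological order
    slice_starts: starting hour of each slice (ascending, first must be 0)
    """
    profile = []
    for i, value in enumerate(slice_values):
        # Hold this slice's value until the next slice starts (or the year ends).
        end = slice_starts[i + 1] if i + 1 < len(slice_starts) else n_hours
        profile.extend([value] * (end - slice_starts[i]))
    return profile

# Four slices over a 24-hour "year" for illustration:
profile = sample_and_hold([0.2, 0.8, 1.0, 0.4], [0, 6, 12, 18], n_hours=24)
assert len(profile) == 24
assert profile[0] == 0.2 and profile[7] == 0.8 and profile[23] == 0.4
```

With 16 slices per year the same routine reconstructs an 8760-hour profile in a single pass.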
Note that when using “global optimization”, special care must be taken that the relative order of the snapshots, i.e. causality, is respected. Global optimizers calculate all time steps at once, unlike simulators, which step through time sequentially. MOTER therefore carries extra time-hierarchy information: first the intraday snapshot order (night, morning, afternoon, evening), and then the ordering of the days during the year (January 1st → December 31st). This “proper time ordering” is important when optimizing energy storage systems that perform both intraday and seasonal balancing functions.

1.3. Conceptual framework¶
Introduction Macro Energy Modelling Transforming a centralized, fossil-based energy system into a decentralized renewable energy system is one of the greatest challenges for our modern society. Essential to the success of this process is the availability of energy models that can show stakeholders what the impact of their investment/divestment decisions will be on the future energy system.
General modelling approach Macro-scale energy models tend to follow the structure illustrated below:

The first step is for the user to construct a “baseline” energy model by configuring the (predefined) supply, demand, storage, transport and conversion assets with data from the information sources. One usually starts with the configuration and validation of the current situation, a baseline, and then modifies the configuration into a set of future situations (scenarios). One of the main challenges for macro energy models, however, is that the complexity of the real world greatly exceeds the number of objects and interactions a computer model can handle. The detail level thus needs to be (severely) reduced, and asset parameters and interactions need to be generalized. When using the ETM in this process, a set of preconfigured objects is presented to the user, who only has to provide key parameters, usually the “relative share of a specific category of the total”. A calculation engine validates the user's model configuration and determines the model KPIs based on generalized interactions between the aggregated assets.
The next step in the modelling process is to introduce changes, i.e. investments/divestments, to the baseline configuration in order to better meet the user objectives, i.e. to be more sustainable, resilient and/or affordable at a future moment in time. This step can be performed by human users using an intuitive GUI, by stakeholder inputs from workshops, or via optimization models like Opera or TEACOS. Usually scenarios are used to explore the range of possible futures.
The third step is to perform validations and/or corrections for the proposed future scenarios on detail levels below the scope of the main simulation and optimization models. This can be a geographical distribution of the assets in combination with the energy network topologies and capacities. To assess the physical impact of the assets on the energy infrastructure, dedicated models like PowerFactory, ESSIM or, in this project, MOTER can be used. Using the insights gained from these detailed models, the proposed investment/divestment plan can be validated, or the timeline towards achieving the future scenario can be adjusted.
Challenges
1. Challenge one: model coupling related issues Even though the process outlined here for macro energy system modelling may appear straightforward, in reality it has many challenges. A first issue is the wide range of model scopes and functions, such as library functions, intuitive GUIs for model configuration, KPI simulation, asset investment optimization, and detailed physical system validation. These model functions not only require different modelling approaches (database queries, web interfaces, simulation engines, optimization using CPLEX, etc.), but may also be assigned to different users, with different experience levels and backgrounds, possibly from different legal entities separated by firewalls for sharing commercially sensitive data. The overall macro modelling process can become a highly challenging exchange of data versions between users (usually via Excel and email), introducing unknown amounts of communication, interpretation and translation errors throughout the modelling process. Thus the first solution proposed by MultiModel is to introduce ESDL to streamline the communication and the orchestrator to replace the back-and-forth communication process.
2. Challenge two: model resolution related issues The second challenge is that models with different scopes (library, global optimization, detailed simulation, …) may also need to work together on different granularity/detail levels. The overall system configuration and optimization models require assets and interactions to be generalized on three main levels: 1) space , 2) time and 3) category (see illustration below).

Working with models based on aggregated parameters and variables introduces possible issues that are easily overlooked. As a simple example: “avg(A × B) ≠ avg(A) × avg(B)” when A and B are aggregate (averaged) values. Why this may be so is illustrated in the example below.

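The aggregation pitfall can be demonstrated numerically. Think of A as a (normalized) solar PV output and B as the electricity price: when A is high, B tends to be low, so averaging first hides the correlation. The numbers are invented for illustration.

```python
# Two anti-correlated series over four time steps:
A = [0.0, 0.0, 1.0, 1.0]   # e.g. normalized PV output per time step
B = [0.9, 0.9, 0.1, 0.1]   # e.g. electricity price, low when PV output is high

def avg(xs):
    return sum(xs) / len(xs)

# Averaging the product keeps the correlation ...
avg_of_product = avg([a * b for a, b in zip(A, B)])   # 0.05
# ... averaging first and then multiplying loses it.
product_of_avgs = avg(A) * avg(B)                     # 0.5 * 0.5 = 0.25

assert avg_of_product != product_of_avgs
```

A model working only with avg(A) and avg(B) would overestimate the PV revenue in this toy case by a factor of five.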
3. Challenge three: scale Another particular challenge in macro energy modelling is that crucial “real world interactions” may take place on detail levels below that of the main models. For example:
“Space & Topology”: the distribution of assets in geographical space and the network topology must be included in sufficient detail in order to properly take real world network congestion issues into account.
“Time & Uncertainty”: for storage to be properly modelled, the effects of a full year of supply-demand dynamics need to be included, i.e. winter/summer, week/weekend, day/night. Moreover, a range of possible years (cold/warm winters, “Dunkelflautes”, etc.) should be included to represent the impact of real-world uncertainties, as the storage strategy cannot know beforehand which scenario will be selected.
Categorization & compatibility: real-world assets can vary greatly in individual properties and applications, but need to be lumped together in “generic containers” in the energy models. This can be a real challenge when models differ significantly in their respective granularity and resolution. See the “electric mobility” example on how a simple and a complex model can become “incompatible” when a minor asset category becomes a major energy player.
Summary macro energy modelling challenges
- Macro energy models are crucial to the success of the energy transition but the quality of the output or even overall validity is compromised, in uncertain amounts, by the following issues:
The coupling of a wide range of model scopes and functions, i.e. information library, asset configuration, performance simulation and investment optimization, which require not only dedicated models but also a wide range of specialist users and possibly information firewalls (illustrated with the generic macro model process diagram). Errors are introduced whenever information is exchanged.
Models may differ in space, time and category detail levels. Uncertainties and errors are introduced when exchanging information back and forth (illustrated with the electric mobility example).
Aggregated parameters and variables may have (hidden) correlations on deeper levels, as illustrated with the “avg(A×B) ≠ avg(A)×avg(B)” example, resulting in unknown amounts of numerical uncertainty.
Asset parameters may not be constant inputs but may be sensitive to the output value of variables. This effect is illustrated with the solar PV profile example. The real world is full of non-linear physics and non-linear scaling effects, but for macro energy system modelling it is assumed that linear relations can be used throughout. This introduces unknown levels of uncertainty.
A real challenge for macro models is that only endpoints in the future are modelled in extensive detail (2050, …), but not the pathway towards this future. Ideally the future scenario should be “built up” using an incremental investment strategy (i.e. 2025 → 2026 → 2027 → … → 2050) instead of a “2050 big bang”.
Macro energy models use a vast range of input parameters with various levels of uncertainty and cross-correlation. In addition to a small set of main scenarios, “Monte Carlo methods” should also be used. Ideally not ~4 but ~10000 model variants should be run to determine the robustness and standard deviations of the output KPIs. Especially when non-linear interactions are involved, the model may return non-trivial results and give guidance on investment strategy (“do's & don'ts”).
- Multi-Model aims to address the macro energy modelling issues as follows:
Model coupling Individual models, owned by different legal entities and running on private servers, can now communicate with each other over the internet via “adapters”.
Model compatibility ESDL is used as the common communication language, strongly reducing the potential for data translation/ interpretation errors between models.
Enhancing scope & resolution Specialized sub-models can check/correct the main scenario models on deeper space/time/category resolution levels, or add simulation/optimization functionalities too challenging for the main model.
Successive approximation Automation of the control of and communication between the models via the orchestrator, allowing non-linear dynamics to be addressed with successive approximation or incremental techniques.
Pathways & Monte Carlo. Automation of control allows large numbers of runs with stochastically varied input parameters (“Monte Carlo”) or model road maps (2025, 2026, …, 2050) to test the robustness of model results.
In the next section we will go into more detail on how the ETM, Opera, MOTER, Regionalization & Connect Infra modules aim to work together within the MultiModel framework to achieve the outlined goals.
2. Approach¶
2.1. Model chain¶
The model chain represents the flow of data from one model to another. In this case ESDL was mainly used to exchange information between models. Most of the data exchange is performed automatically by the orchestrator; however, the initialization still requires manual work. Information is exchanged as follows:
1. Creating a representation of an energy system in ESDL using the map-editor (manual)
In the map-editor an energy system is constructed on a national level using the following assets and accompanying infrastructure:
Wind turbines
Solar PV
Nuclear power plants
Electricity import
Hydrogen import
Electrolysis
Batteries
Electricity demand transport (car, van, truck)
Hydrogen demand transport (car, van, truck)
These assets merely construct an energy system but do not add any information about it.
2. Adding information using existing scenarios in the ETM (automated)
Based on the created energy system, the ETM can set an installed capacity (rated output power) range for every production asset. This range is based on two existing scenarios with different assumptions on the total installed capacity, e.g. for wind or solar power. This is done to allow optimization of the installed capacity at a later stage. The electricity and hydrogen demand do not have a range, as demand is used as a fixed variable during the optimization. Therefore, the demand is based on only one scenario. To test this use case, the II3050 scenarios were used.
3. Cost optimization in Opera (automated)
The power ranges and demands are used by Opera to optimize the installed capacity of every asset based on the most cost-optimal scenario. Opera adds the result, the optimal installed capacity, to every asset.
4. Changing the power in the ETM (automated) The optimized power set by Opera for every asset is imported into the ETM. With this new information, the ETM calculates and adds the marginal costs, full load hours and hourly production and demand curves to every asset.
5. Regionalization (automated) The energy system now consists of assets with a certain installed capacity, demand, full load hours and marginal costs. This energy system is based on national demands and total installed capacities. For more detailed infrastructural calculations the energy system needs to be regionalized. This process divides all assets into smaller units and attaches a location (e.g. a municipality) to every asset.
6. Coupling to infrastructure (automated) The regionalized energy system still only consists of a ‘list’ of assets with a location attached to them; there is no infrastructure yet which connects them. Using the ‘Connect-infra’ model, the assets are attached to an existing infrastructure (a mock-up of the future national electricity and hydrogen infrastructure) based on their nearest ‘coupling node’. These coupling nodes represent the transition from a regional electricity/hydrogen grid to the national grid. When all assets are connected to a coupling node, similar assets connected to the same coupling node are aggregated again to simplify the energy system.
7. Infrastructure optimization in Moter (automated) Using all information added to the energy system in previous steps, Moter can now perform calculations to optimize the infrastructure and assets attached. Based on the optimization, Moter can give feedback e.g. on the amount of full load hours or max-capacity of assets.
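The “nearest coupling node” assignment from step 6 can be sketched as follows. The node names and coordinates are invented for illustration; the real Connect-infra model operates on ESDL geometries rather than plain coordinate pairs.

```python
import math

def nearest_node(asset_xy, nodes):
    """Return the name of the coupling node closest to the asset.

    asset_xy: (x, y) position of the asset
    nodes:    mapping of node name -> (x, y) position
    """
    # Pick the node with the smallest Euclidean distance to the asset.
    return min(nodes, key=lambda name: math.dist(asset_xy, nodes[name]))

# Hypothetical coupling nodes of the national grid:
nodes = {"node_A": (0.0, 0.0), "node_B": (10.0, 0.0)}
assert nearest_node((2.0, 1.0), nodes) == "node_A"
assert nearest_node((9.0, 3.0), nodes) == "node_B"
```

After this assignment, assets of the same type on the same node can be summed into one aggregate asset, as described in step 6.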
2.2. Individual model developments¶
2.2.1. Orchestrating AIMMS based models¶
In this multi-modelling project, three models are used that use AIMMS as modelling and optimization environment: Opera, Moter (both in the macro use case) and Teacos (micro and meso use cases). While Teacos has already moved to AIMMS' newer cloud environment, Opera and Moter have been developed as Windows-based AIMMS applications using older versions of AIMMS. This led to the challenge of how to orchestrate these models and exchange information with the AIMMS environment. The chosen approach was to wrap the AIMMS executable in a Python application that calls AIMMS via the command line, issuing specific command-line arguments to load the correct model and start the right AIMMS procedure. Before running the model, the model input is configured based on the input ESDL; after running the model, information is extracted from the model output and converted back into ESDL.
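A minimal sketch of this wrapping approach is shown below: a Python function launches the AIMMS executable with the project and procedure to run, and waits for it to finish. The executable path, command-line flag and procedure name are placeholders; the real adapters use the model-specific AIMMS invocation.

```python
import subprocess

def run_aimms_model(aimms_exe, project_path, procedure):
    """Run one AIMMS procedure via the command line and wait for completion.

    aimms_exe, project_path and the --run-only flag are illustrative
    placeholders, not the actual AIMMS command-line interface.
    """
    result = subprocess.run(
        [aimms_exe, project_path, f"--run-only={procedure}"],
        capture_output=True, text=True,
    )
    if result.returncode != 0:
        raise RuntimeError(f"AIMMS run failed: {result.stderr}")
    return result.stdout
```

In the adapters, a pre-processing step fills the model's database from the input ESDL before this call, and a post-processing step converts the model output back into ESDL afterwards.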
For those conversions two approaches were developed:
UniversalLink – this Python module converts an arbitrary input ESDL into MySQL tables. AIMMS can read these tables and convert the data into the model's internal representation. Afterwards, AIMMS updates the MySQL database with the output of the optimization, which is picked up by the Python module to convert the changes back into ESDL. This approach is used for Teacos and Moter.
OperaLink – this Python module writes the input ESDL directly into Opera-specific tables in its Access database. This approach was chosen because the impact of the UniversalLink on the Opera model was too high, as specific AIMMS knowledge for this integration into Opera was lacking. The OperaLink approach is therefore less generic, but was needed to make Opera part of the multi-model. Similar to the UniversalLink, the Python module processes the (specific) output of Opera and converts this back into ESDL.
2.2.2. Opera¶
As discussed in the ‘Orchestrating AIMMS based models’ section, Opera uses the OperaLink approach to integrate with ESDL and uses a Python wrapper to start the Opera AIMMS model. All this functionality is added to the Opera Adapter that contains a webservice that is used by the Orchestrator to operate models in a multi model.
In the figure below the process to run a scenario in Opera is depicted:

The following steps are performed to run Opera in a multi-modelling environment:
The input ESDL is sent to the OperaAdapter by the multi-model orchestrator.
The OperaAdapter uses the OperaLink to parse the ESDL file and extract the relevant information for Opera. A specific MMvIB scenario is created in the Opera database. This allows Opera to ignore other scenarios and configurations that are also available in the Opera database.
Each asset is converted to an Opera option (a representation of technology option) in the Opera database, including relevant data for that asset, such as its minimum and maximum capacity for production and conversion assets, yearly demand for consumer assets and costs of energy carriers and assets. Based on the available information assets are mapped to an existing Opera technology option or to a generic option.
This information is subsequently written to the different tables in the Opera database (an MS Access database).
After the pre-processing phase is done, the Orchestrator instructs the OperaAdapter to run the model. This uses the AIMMS command line to run the model with the right parameters and waits for it to finish its optimization.
After Opera has finished optimizing, the CSV output of the Opera model run is used to update the input ESDL, which serves as the output of this optimization.
The OperaAdapter is informed that the results are ready.
The Orchestrator is informed of the results and can take this result to the next model in the multi-model.
The example Opera output below shows the optimization of the configured ranges from two ETM scenarios to a specific value that is optimal for this use case:
Found updated capacity for Electrolyzer_b243: 42.0 GW in range [42.00-51.00]
Found updated capacity for Import_a3ac: 128.60021409 GW
Found updated capacity for WindTurbine_6411: 20.0 GW in range [20.00-20.00]
Found updated capacity for PVPark_37e4: 57.60000001 GW in range [57.60-66.92]
Found updated capacity for NuclearPowerPlant_f521: 4.56731593 GW
This output (in ESDL) is subsequently fed to the regionalization and connect-infra models as a first step to add network infrastructure as input for Moter.
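The log lines above follow a simple pattern that can be parsed into structured records (asset, capacity in GW, optional optimization range). The line format is taken directly from the example output; the parser itself is an illustrative sketch, not part of the OperaAdapter.

```python
import re

# Matches: "Found updated capacity for <asset>: <value> GW [in range [lo-hi]]"
LINE = re.compile(
    r"Found updated capacity for (?P<asset>\S+): (?P<gw>[\d.]+) GW"
    r"(?: in range \[(?P<lo>[\d.]+)-(?P<hi>[\d.]+)\])?"
)

def parse_capacity(line):
    """Return (asset, capacity_gw, (lo, hi) or None), or None if no match."""
    m = LINE.match(line)
    if not m:
        return None
    lo, hi = m.group("lo"), m.group("hi")
    return (m.group("asset"), float(m.group("gw")),
            (float(lo), float(hi)) if lo else None)

assert parse_capacity(
    "Found updated capacity for Electrolyzer_b243: 42.0 GW in range [42.00-51.00]"
) == ("Electrolyzer_b243", 42.0, (42.0, 51.0))
assert parse_capacity(
    "Found updated capacity for Import_a3ac: 128.60021409 GW"
) == ("Import_a3ac", 128.60021409, None)
```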
2.2.3. Moter¶
For the MultiModel project a special version of MOTER was created that is ESDL-compatible and can be controlled via an adapter. The MultiModel MOTER operates as follows:

An input.esdl XML file is received
The “Uniform_ESDL_AIMMS_link.py” script unpacks the .esdl file into the MySQL database
The ESDL configuration data is imported into the AIMMS environment
An ESDL → MOTER parser creates a validated MOTER configuration from the ESDL data and writes the MOTER case to the MOTER database (local MS Access)
MOTER loads and runs the case (via the procedure MMviB_read_run_write) and writes results back to the MOTER database.
The MOTER → ESDL module loads the MOTER results and writes them to the ESDL database.
The “Write_to_esdl.py” script creates the output.esdl file.
All steps can be performed automatically, or manually for testing purposes, and the results can be inspected via information pages on supply-demand, network, storage and converters.

2.2.4. ETM¶
The ETM translates scenario results into ESDL using the ETM-ESDL app. This app is accessible through an online interface (https://esdl.energytransitionmodel.com/api/v1/ or https://beta-esdl.energytransitionmodel.com/api/v1/). The app can currently perform four actions: 1. Create a scenario: generate an ETM scenario based on an ESDL file 2. Create a context scenario: generate an ETM scenario based on two separate ESDL files (current energy system vs. future energy system) 3. Export a scenario: change an ESDL file based on one or more ETM scenarios 4. Add KPIs: add KPIs to an ESDL file based on an ETM scenario
In the macro use case the ETM-ESDL app uses the ‘create a scenario’ function and the ‘export a scenario’ function. Both functions existed before the start of this project; however, it was previously not possible to add the amount of information that was necessary in this use case, nor to determine a range based on two scenarios. Furthermore, the app was not yet connected to the orchestrator to enable automated multi-model communication. This required building an extensive adapter which could, in a flexible and sustainable manner, direct the ETM-ESDL app to perform multiple actions. For more information, you can find the app here: https://github.com/quintel/etm-esdl#readme.
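Calling the ETM-ESDL app over HTTP can be sketched as follows. The base URL is the one given above, but the endpoint path and payload field are assumptions made purely for illustration; consult the linked README for the real API.

```python
import json
import urllib.request

BASE_URL = "https://esdl.energytransitionmodel.com/api/v1"

def build_create_scenario_request(esdl_string):
    """Build the URL and JSON body for the (hypothetical) scenario-creation call.

    The '/create_scenario' path and 'energy_system' field are placeholders,
    not the documented ETM-ESDL endpoints.
    """
    body = json.dumps({"energy_system": esdl_string}).encode()
    return f"{BASE_URL}/create_scenario", body

def create_scenario(esdl_string):
    """POST an ESDL document and return the decoded JSON response."""
    url, body = build_create_scenario_request(esdl_string)
    req = urllib.request.Request(
        url, data=body, headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req, timeout=60) as resp:
        return json.load(resp)
```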

2.3. Multi-model infrastructure and configuration (orchestrator)¶
The figure below shows the workflow of this use case in the orchestrator (Apache AirFlow):

Each step in the workflow requires configuration (e.g. what input to use and where to write output). This configuration is done in a JSON file:


For each step or task a configuration is defined. The ‘app_id’ refers to the ID of the model adapter that is used in that step. This ID is looked up in the Adapter Registry to retrieve information about where to find the adapter of that specific model, so that it can be run by the orchestrator. The configuration of each adapter is described in more detail in the source code repository of the adapter. In Airflow you can use this configuration to start a workflow:
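The app_id lookup can be sketched as follows. The registry contents and field names are invented for illustration; the real registry and JSON configuration format are described in the adapter repositories.

```python
# Hypothetical registry contents; the real Adapter Registry is a service,
# not an in-memory dictionary.
ADAPTER_REGISTRY = {
    "opera-adapter": {"url": "http://opera-adapter:9200"},
    "moter-adapter": {"url": "http://moter-adapter:9300"},
}

def resolve_step(step_config):
    """Look up the adapter endpoint for one workflow step via its 'app_id'."""
    app_id = step_config["app_id"]
    try:
        return ADAPTER_REGISTRY[app_id]["url"]
    except KeyError:
        raise KeyError(f"Unknown adapter '{app_id}' - not in the registry")

assert resolve_step({"app_id": "opera-adapter"}) == "http://opera-adapter:9200"
```

The orchestrator then calls the resolved adapter endpoint with the step's input ESDL.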

When the Trigger button is pressed, the workflow will be started. The border around each step shows the status of the task, e.g. dark green means a successful model execution.

During workflow execution the operator can look at the logs to see the progress of each task. Below is a screenshot of the Opera model log output, showing that it is configured and running.

Apache Airflow also allows you to see how long each task takes when executing a workflow, using a Gantt chart:

3. Results¶
3.1. Overview results orchestrator¶
As the starting point of the macro use case, a simplified national model is created in the ESDL MapEditor. The visualisation is shown below:

The ESDL contains several important characteristics:
Connectivity information: how are the different assets connected and which carrier is used in each connection
Which type of assets are used (PowerPlant, WindTurbine, PVPark, Battery, Electrolyser, MobilityDemand, Import)
Costs for utilizing production. E.g. the costs for deploying wind are defined as follows in the ESDL MapEditor:

The ETM adds ranges to the ESDL that is input for the Opera optimization. These ranges are defined based on the configuration of two ETM scenarios and are added to the ESDL as a constraint for the optimization, e.g. the wind park should be optimized between 10 GW and 15 GW.

Opera optimizes based on costs, removes the ranges and updates the power attribute of the assets. E.g. in the excerpt of the Opera output for the WindTurbine, the optimal power is updated to 15 GW:

The regionalization module subsequently takes the national model and regionalizes it to municipalities. How it is regionalized is fully configurable; for this use case the power and energy values are (automatically) regionalized by the number of inhabitants, based on CBS data. This gives the following visualisation in the ESDL MapEditor:
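The population-based split used here can be sketched as a proportional allocation: each municipality receives a share of the national capacity equal to its share of the total population. The municipality names and inhabitant counts below are made up; the real module uses CBS data and a configurable mapping.

```python
def regionalize(national_capacity_gw, inhabitants):
    """Split a national capacity over municipalities by population share.

    inhabitants: mapping of municipality name -> number of inhabitants
    """
    total = sum(inhabitants.values())
    return {name: national_capacity_gw * pop / total
            for name, pop in inhabitants.items()}

shares = regionalize(15.0, {"A": 100_000, "B": 50_000, "C": 50_000})
assert shares == {"A": 7.5, "B": 3.75, "C": 3.75}
# The regional shares always sum back to the national total.
assert abs(sum(shares.values()) - 15.0) < 1e-9
```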

If you zoom in, you can see that every asset in the national model is regionalized for each municipality, but without any connections or infrastructure, as that is the next step.

Connect Infra
For connecting the assets to the infrastructure used by Moter, an infra-ESDL is needed that describes this infrastructure. This was provided by DNV and is shown below:

It shows two carriers: electricity (green) and hydrogen (orange). When running the ConnectInfra model, a (large) configuration is required to map the assets of a municipality to a node in the infrastructure. Additionally, a mapping is required from the carriers of the national model to the carriers of the infrastructure model (e.g. Moter distinguishes between transport infrastructure (high voltage) and distribution infrastructure (medium voltage)). The ConnectInfra model also aggregates multiple municipalities where possible, when assets of the same type are connected to the same node in the infrastructure. This reduces the number of assets in the ESDL and makes the optimization in Moter faster.


After the infrastructure is connected to the regionalized assets, Moter can be run as the next step in the workflow. The output Moter ESDL is similar to the figure on the left, but with optimized infrastructure, which is not visualized in the MapEditor. For the actual results of the run, see the next chapter about Moter.
3.2. Results Moter¶
In order for MOTER to be able to process the optimized, regionalized ESDL, the following modifications were made:
Commodity X: in ESDL a nuclear or coal-fired power plant is a converter of an external (X) commodity to an internal commodity (hydrogen, electricity, etc.). In MOTER, however, coal-fired power plants were considered production assets. The discrepancy was fixed by adding “XtoPower” converters, an “X” production site and an X transport network to MOTER.
Transport power rating MOTER has the added option to ignore the power rating of the transport lines, because in the network template the power rating may be unknown or accidentally set to zero. A cable with an (unintentional) maximum power rating of zero will seriously disrupt the network performance. Therefore it is best to first run a scenario without cable and pipe power ratings, to check the validity of the scenario, before adding network congestion.
Transport conductivity issue MOTER has the added option to ignore the conductivity of the transport lines. The electric conductivity of a power line is determined by the cable gauge, the number of conductors and the length. The key issue here is that although ESDL can communicate pipe diameters, it is not yet able to communicate cable gauges, conductor counts and/or cable conductivity. MOTER now uses default conductivity ratings for all power cables, but also has the added option to skip this aspect of the network simulation and just focus on maximum line ratings.
Allowed topologies As it turns out, there is room for network topology interpretation conflicts between MOTER and ESDL. In MOTER the network nodes are considered fundamental building blocks, and production and consumption are added to the network nodes as attributes. In ESDL the assets are fundamental, and the network connections are considered attributes of the assets. In MOTER it is possible to add a producer, a battery and a consumer to the same node, or to couple a producer directly to a consumer. This is not allowed or desirable within ESDL. However, in ESDL it is possible to provide a “virtual pipe/cable” or “logical connection” between an asset and a network node (just assign an asset with a network node x km away as the inport or outport to use). MOTER, however, does not understand “virtual cables or pipes”. See the illustration on restrictions on allowed network topologies.

MOTER Performance
- The following MOTER performance was observed for the regionalized ETM output (“macro 16”):
Macro case: 425 assets in total
46 producers, 29 consumers
17 storages, 293 transports
Mode=224 time slices (28 days, 8 hours/ day)
3 solver iterations to manage non-linear physics
Total solve time: 45 seconds
Peak memory use: ~2 GB
- Hardware and software specifications were:
intel i5 1600 MHz
AIMMS 4.10 (old version due to licensing issues)
MOTER model type: LP (Linear Programming)
CPLEX 12.6.2 , mode “concurrent”
The main concern for LP models like MOTER is the large amount of internal memory they can require, not so much the CPU intensity. LP models hardly benefit from multiple CPU cores, as an LP model cannot be split into branches or subtasks. CPLEX does offer a “concurrent mode” which adds a little extra performance (in “concurrent mode” CPLEX starts multiple independent solves on a model, using a different strategy for each; optimization terminates when the first one completes). Solver times of around 1 minute are considered reasonable as a MMviB starting point, and a little more performance may be gained by increasing server specifications or reducing the number of time slices. Ideally the number of transport assets should be scaled back in the future, provided that MOTER can be modified in such a way that it understands “virtual connections”.
Scenario analysis
The chosen macro scenario was taken from an ETM II3050 scenario and contained renewable production (solar, wind), mobility consumers (electric and hydrogen mobility: CAR, VAN, BUS, TRUCK), batteries and conversion (nuclear power, electrolyzer). The stripped-down ETM macro case, coupled to a fictional infrastructure, is thus not a realistic and/or balanced national scenario; its only value is to validate the flow of data through the various models. MOTER was nevertheless able to solve the macro case (with some production assets set to zero to create more interesting MOTER performance), with the following results:
Network modelling
The figure below illustrates the macro network as reconstructed and dispatched by MOTER. In this example artificial congestion has been created along a HV power line (red).

Note that care must be taken not to overload MOTER with congestion (i.e. a fictional case where all cables and pipes are too small), as this leads to an energy system without any clear solution, and CPLEX may need to be timed out or might take an indefinite amount of time (hours) to solve.
Supply-demand total
The overall macro scenario is characterized by the challenge of supplying the very high peak demand from the electric and hydrogen charging infrastructure, which is met by the batteries:

The relatively oversized battery storage is used both as intraday storage, delivering the weekly peak demand, and as seasonal storage, absorbing the massive wind overproduction in November.
Supply-demand specific

Storage
Because of the use of time slices, special care needs to be taken to model the causal relations (ordering and hierarchy) between the time slices accurately. MOTER must first decide, for a specific modelled day Di, how to use storage for the modelled intraday hours, and then repeat the net daily charge/discharge choice for all following days between Di and Di+1. The result is a fill rate following a “step ladder”, as shown below:

The net daily charging/discharging choice for modelled day “Di” must be repeated until day “Di+1” arrives, resulting in fill level jumps between the days.

The batteries charge all day to supply the peak in mobility demand. Moreover, the batteries accumulate or discharge on a net basis to balance seasonal imbalances (the batteries in the macro case are oversized because they only serve mobility demand).
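The step-ladder behaviour described above can be made concrete with a small sketch: for each modelled day the intraday profile is applied once, after which the net daily change is repeated for every unmodelled day until the next modelled day. The numbers are purely illustrative, not model output.

```python
# Sketch of the storage "step ladder": for each modelled day Di the
# intraday charge/discharge pattern is applied once, and the day's net
# change is then repeated for every day between Di and Di+1, producing
# fill-level "jumps" between the days. Values are illustrative only.

def fill_levels(start_level, modelled_days):
    """modelled_days: list of (intraday_deltas, days_until_next)."""
    levels = [start_level]
    level = start_level
    for intraday_deltas, gap in modelled_days:
        for delta in intraday_deltas:       # modelled hours of day Di
            level += delta
            levels.append(level)
        net = sum(intraday_deltas)          # net daily charge/discharge
        for _ in range(gap - 1):            # repeat until day Di+1 arrives
            level += net
            levels.append(level)
    return levels
```

For example, a modelled day with intraday deltas +2 and -1 followed by a two-day gap yields one net +1 jump per unmodelled day.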
Conversion
The missing links for establishing balance between supply and demand are the hydrogen imports (producers), the nuclear power plants, and the electrolyzer commodity converters. In MOTER, the default behaviour for imports, coal and nuclear power plants is to be idle and to ramp up according to the needs of the rest of the energy system.


4. Conclusions and recommendations¶
4.1. Lessons learned¶
Arguably the biggest benefit of the Multi-Model approach is that it enables model developers to work together at all. Models like ETM, OPERA and MOTER have a long development history, are highly complex, and can now only be worked on by their original developers, provided those are still available or their successors are comfortable changing the original code. Any attempt to integrate two mature models would require developers from both sides to spend significant amounts of time, which they do not have, to understand how their own model and the other model work, before even considering an approach to add functionality to the models without breaking them. This also presumes that the models involved are fully open source and do not contain special approaches or proprietary information that developers may be reluctant to share with competitors. So basically, any conventional form of model integration will be very challenging under current real-world commercial conditions. In the Multi-Model approach, the main effort is to adapt models to read/write ESDL and to equip them with an adapter; developers can then focus purely on their own model and concentrate all integration efforts on resolving ESDL input/output issues. This greatly facilitates the creative process and open communication, and gives all parties involved a way forward in taking their models to the next level.
MOTER was able to optimize a ~400-asset system with 200 time slices in ~45 seconds using 2 GB of RAM. As the number of assets increases in the future, the challenge for MOTER lies in memory management, since LP optimization benefits little from multiple CPU cores.
The focus of the ESDL communication is on the list of assets and their key attributes (capacity, categorization) and the “FullLoadHour” KPI. However, static context information also needs to be communicated, and ESDL was only partly used for this. The issues and workarounds were:
MOTER requires extensive detail on production and consumption increase/decrease merit orders. For example: which customer should be curtailed first to relieve congestion: car, van or truck? And should electricity be prioritised over hydrogen? ETM and OPERA do not yet have all this information, or cannot yet use ESDL to communicate it. The current solution is a MOTER GUI in which users manually input this information.
ESDL allows for “logical connections” (i.e. assigning an asset to a network connection node that is many km away), which is currently an issue for MOTER. The solution is to avoid logical connections in the network template.
There is no clear approach yet for models to communicate profile information. The current approach taken by MOTER is to retrieve all ETM profiles from repositories and manually assign the appropriate profiles to the production/consumption categories.
The map editor is crucial to ‘ignite’ a multi-model run by creating the first ESDL; working with more complex energy systems therefore requires a large amount of time.
There are many IT challenges along the way, since all models work completely differently: they are open source or closed source, run on different platforms, and so on. Coupling and communicating between such different models is therefore first and foremost an IT/development challenge, and much time and expertise need to be dedicated to getting it to work.
This project has been a challenge, as the goals were quite ambitious for the budget:
Three use cases that are very different in nature are supported by the infrastructure.
There is a lot of IT involved in getting a multi-model working and that knowledge was not always available or lost when people left the project. It is therefore important that multiple people work together and share the knowledge they gained.
4.2. Recommendations¶
Arguably the biggest benefit of the Multi-Model approach is that it enables model developers to work together at all. Models like ETM, OPERA and MOTER have a long development history, are highly complex, and can now only be worked on by their original developers, provided those are still available or their successors are comfortable enough changing the original code. Any attempt to integrate two mature models would require experienced developers from both sides to spend significant amounts of time understanding how their own model and the other model work, all before even considering an approach to add functionality to the models without breaking them. And all of this presumes that the models involved are fully open source and do not contain proprietary information that developers may be reluctant to share with competitors. Given all these preconditions, any conventional form of model integration will be very challenging, and it is not a stretch to claim that established models will hit dead ends in their development.
In the Multi-Model approach, however, the main effort is in adapting models so that they can read/write ESDL and equipping them with an adapter. Once that hurdle is taken, developers can focus purely on resolving any ESDL input/output issues flagged by the other model users, using the ESDL reference documentation and various toolkits. This approach greatly facilitates communication between developers and gives all parties a way forward in addressing future energy system challenges.
With the MultiModel MACRO case we have demonstrated a way forward on how a scenario model like ETM can be enhanced with investment decision support, geographical information and network dynamics. The next issues to work on:
Expand the list of assets
Add heat networks, natural gas, oil, E-fuels, …
Create a more realistic set of network templates
Create tooling for inspecting results (like the MapEditor)
Add roadmap, scenario batch processing and/or Monte Carlo functionality
Make the MMviB platform “monkey proof”
Service delivery models (open source, premium customers)
ESDL provides a good base for multi-model communications; however, we need to develop:
A standardized way of working and communicating with ESDL (e.g. which units / descriptions / process we use while communicating)
An easy way to generate an ESDL
(More) IT expertise is essential in future projects.
Meso use case report¶
1. Introduction¶
1.1. Use case description¶
The basic idea of the meso use case was to model part of the Netherlands' energy system at the provincial level. More specifically, the south-western province of Zeeland was selected, since there is major industry in Zeeland but the complexity is rather limited, in the sense that the number of large industrial parties is low.
After an initial exploration it turned out that the involved models (see paragraph 1.2) had each modelled industry in Zeeland in one way or another. Although it seemed promising to see whether these model variants of Zeeland could be coupled with each other, it became clear that this was not really an option, since the models had a large disconnect in terms of scope, level of detail, inputs and outputs. Had this route been continued, the remainder of the project would have been filled with mapping information between the models and making assumptions about the disconnects. Instead, it was decided to focus on a simpler, stylised model.

Figure 1. Schematic view on meso case
In figure 1 the schematic setup of the meso case can be seen. Although we do not explicitly model the whole industry in Zeeland, we assume that the major business challenge in Zeeland is the production of hydrogen (H2) and whether it will be made by electrolysis or by an SMR (Steam Methane Reforming) process. The first can be considered green, with no (or limited, depending on the generation of the electricity) CO2 emissions. The latter is the traditional process, in which a significant amount of CO2 is created alongside the hydrogen. In the meso use case, the models are coupled into a system that decides whether electrolysis or the SMR process is the preferred approach based on the costs involved in the processes. The most relevant variable costs are the gas cost and a CO2 emission penalty for the SMR process, and the electricity cost for the electrolysis process. Besides the variable costs, the CAPEX investments for installing the units are also taken into account.
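The cost comparison driving this decision can be sketched as follows. All prices, conversion factors and CAPEX figures below are invented placeholders for illustration, not project data or model parameters.

```python
# Hedged sketch of the meso-case decision: compare the cost of hydrogen
# from SMR (gas plus CO2 penalty) against electrolysis (electricity),
# with annualized CAPEX spread over yearly production. All default
# factors are hypothetical placeholders, expressed per kg of H2.

def h2_cost_smr(gas_price, co2_price, capex_annual, h2_per_year,
                gas_per_kg=0.18, co2_per_kg=9.0):
    # variable cost: gas input plus CO2 emission penalty per kg H2
    variable = gas_price * gas_per_kg + co2_price * co2_per_kg
    return variable + capex_annual / h2_per_year

def h2_cost_electrolysis(elec_price, capex_annual, h2_per_year,
                         kwh_per_kg=52.0):
    # variable cost: electricity input per kg H2
    return elec_price * kwh_per_kg + capex_annual / h2_per_year

def preferred_process(smr_cost, electrolysis_cost):
    return "electrolysis" if electrolysis_cost < smr_cost else "SMR"
```

In the actual use case this trade-off is solved by TEACOS as an optimization rather than a direct comparison, but the cost components are the same.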

Figure 2 The meso use case as plotted on the Yara company in Zeeland
1.2. Models used¶
The following models were involved in the meso use case:
CTM
The Carbon Transition Model (CTM) is a tool to explore pathways to zero emissions for the Dutch industry as well as future industries that produce synthetic molecules (from carbon, water and electricity). Industrial activity is modelled according to historic public data and has been validated. The user can explore a future year by making changes to a reference ‘base year’ scenario. The model then provides information on changes to emissions, costs, energy and feedstock, technology choices, infrastructure and much more.
The model covers the entire Dutch industry. The largest energy intensive industrial sites are modelled using a bottom-up approach. This includes steel, refineries, fertilizer plants, large base chemical plants including steam crackers, industrial gases and methanol production, some inorganic chemical plants for salt, chlorine and petrochemical catalyst production as well as waste incineration plants. The remainder of Dutch industry has been modelled using a top-down approach based on national energy statistics and site specific emission data.
The model gives information about these industries at the national level (The Netherlands), industry sector level, cluster level (Rotterdam, Zeeland, Groningen, Noordzee Kanaalgebied, Chemelot and Cluster 6) and site level.
ETM
The Energy Transition Model (ETM) is an online model which enables users to explore possible futures for a specific energy system. The model is open-access, open source, web-based and interactive in its use. Through the use of sliders, users can make explicit assumptions and choices about the future of their energy system based on its current situation. Currently the ETM models EU countries and most Dutch provinces, municipalities and RES regions. Open data is used to model these different energy systems.
The ETM is a bottom-up simulation model. All relevant processes and energy flows are captured in a graph structure which describes all possible routes for exchanging energy between sectors and processes. All relevant sectors and energy carriers of the energy system are also included. The ETM calculates the yearly energy balance for all energy carriers, and the hourly energy balance for electricity, heat, and hydrogen. The model is run twice: once for a start year and once for every hour of the selected future year. Based on (new) slider settings the model is rerun, and supply and demand are automatically balanced on an hourly basis using a merit module. The results include system KPIs such as the total costs and CO2 emission reduction of the modelled energy system.
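The idea behind merit-order balancing can be illustrated with a toy dispatch function: producers are dispatched in order of ascending marginal cost until hourly demand is met. This greedy formulation and the plant data are illustrative only, not the ETM's actual merit module.

```python
# Toy merit-order dispatch for one hour: cheapest producers first,
# until demand is met. Plants and costs are invented for illustration.

def dispatch(demand, plants):
    """plants: list of (name, capacity, marginal_cost) tuples."""
    dispatched = {}
    remaining = demand
    for name, capacity, _cost in sorted(plants, key=lambda p: p[2]):
        output = min(capacity, remaining)
        if output > 0:
            dispatched[name] = output
        remaining -= output
    return dispatched, remaining  # remaining > 0 means unmet demand
```

For example, with 6 MW of zero-cost wind and an 8 MW gas plant, a 10 MW demand is met by all the wind plus 4 MW of gas.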
TEACOS
TEACOS is a mathematical optimization tool for mid- to long-term strategic investment analysis. The tool is designed to assist in the investment decision making process. It aims to answer the following questions:
In which (decarbonization) opportunities to invest?
What is the optimal investment timing?
How much to invest?
By answering these questions, TEACOS provides credible, affordable and competitive transition pathways towards a low carbon energy system. TEACOS is completely data driven. Because of this, it can be applied in any industrial sector and on any scale.
TEACOS models the supply chain as a network. In the network, nodes represent locations or (production) units, and the connections between the nodes (arcs) represent transport of commodities between the nodes. Additionally, possible adaptations to the network infrastructure can be modelled as investments. The model selects the best combination of investments and calculates the corresponding product flow such that either the Net Present Value is as high as possible, or the costs are minimized.
One of the major strengths of TEACOS lies in answering ‘what-if’ questions: i.e. ‘what if CO2 emission costs rise?’, by defining several scenarios in which certain assumptions are altered: i.e. a scenario with fixed CO2 emission costs and one where CO2 emission costs change over time.
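The investment selection TEACOS performs can be sketched in miniature: choose the combination of candidate investments that maximizes NPV within a budget. TEACOS solves this as a mathematical optimization over a network model; the brute-force enumeration and the numbers below are a hypothetical illustration of the decision only.

```python
# Toy version of investment selection: pick the subset of candidate
# investments that maximizes total NPV within a CAPEX budget, by brute
# force over all subsets. TEACOS solves the real problem as an
# optimization over a supply-chain network; this data is invented.
from itertools import combinations

def best_portfolio(investments, budget):
    """investments: list of (name, capex, npv_contribution)."""
    best, best_npv = (), 0.0
    for r in range(1, len(investments) + 1):
        for combo in combinations(investments, r):
            capex = sum(i[1] for i in combo)
            npv = sum(i[2] for i in combo)
            if capex <= budget and npv > best_npv:
                best, best_npv = combo, npv
    return [i[0] for i in best], best_npv
```

Defining several scenarios then amounts to re-running the selection with altered assumptions, e.g. a higher CO2 cost changing each investment's NPV contribution.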
1.3. Multi-model aspects showcased¶
1.3.1. Conceptual¶
There are different conceptual aspects that are challenging in the meso use case:
Communication between the models
Convergence of choice between either the electrolysis or SMR process based on the electricity price
Multi-period
Communication between models
There are three models involved in the meso use case: CTM, ETM and TEACOS. CTM and ETM had been coupled before in a previous project. Since that coupling already worked, it was decided to reuse this approach rather than create a completely new interface. Since the CTM/ETM combination needed to communicate with TEACOS as well, it was decided that there would be an ESDL file connection between ETM and TEACOS. This implied that both ETM and TEACOS should be able to read from and write to the ESDL file format. Since a similar approach was taken in the other use cases, this seemed a logical way forward.
Convergence
In the meso use case there is an iterative loop between the three involved models, where each model uses input from the previous model to calculate a result. TEACOS is the model that calculates the optimum processing configuration based on the electricity price. It is possible that the choice of processing configuration (depending on its size) has an impact on the electricity price, and that therefore a different processing configuration should be chosen. As soon as the same processing configuration is chosen twice in a row, the model can be considered converged. Since under normal circumstances the size of an electrolysis unit in Zeeland would only very marginally influence the general electricity price in the Netherlands, this convergence is not expected to be a problem. In theoretical cases where the demand for hydrogen, and therefore the processing capacity, is hugely increased, this could lead to alternating processing configurations.
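The convergence criterion can be written down directly: stop as soon as the same configuration is chosen twice in a row. The `choose_configuration` and `update_price` callables below are hypothetical stand-ins for a TEACOS run and an ETM price recalculation.

```python
# Sketch of the convergence loop: iterate until the same processing
# configuration is chosen twice in a row. The two callables stand in
# for TEACOS (configuration choice) and ETM (price update).

def iterate_until_converged(choose_configuration, initial_price,
                            update_price, max_iter=20):
    previous = None
    price = initial_price
    for _ in range(max_iter):
        config = choose_configuration(price)
        if config == previous:        # same choice twice in a row
            return config
        previous = config
        price = update_price(config)  # recalculate the electricity price
    raise RuntimeError("alternating configurations, no convergence")
```

The `max_iter` guard covers the theoretical alternating case mentioned above, where a hugely increased hydrogen demand keeps flipping the configuration.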
Multi-period
Thirdly, the multi-period aspect. There are several aspects with regard to the handling of time. ETM uses historic hourly profiles for solar and wind in its calculations. With these hourly profiles, insight can be gained into the expected balancing in the system over an entire year. Although TEACOS can handle hourly time periods as well, in the meso case it is set up in a configuration where a time bucket is the size of a year, meaning that there is no insight into what happens within that year. This already leads to an interesting situation within MMvIB regarding how to handle differences in time. ETM calculates results for a given situation over an entire year. In the beginning, the project team started out with a scope of a single year: TEACOS would have one time bucket, and ETM could do a regular time slice of one year. However, in real life, decisions are not made based on the data or expected results of a single year. Investments are often spread out in time and result in a transition path towards the future in order to achieve future goals. It made sense to see how this notion of multiple years would fit in the meso case multi-model environment. The idea was that the optimisation would run not over a single time bucket of a year, but over multiple yearly time buckets. Initially this proved to be a challenge, because up till then all information was based on a single-year ESDL file. TEACOS would now need information for multiple years, and a choice had to be made: either extend the ESDL to contain multiple years, or go for multiple single-year ESDLs. It turned out that a single ESDL for multiple years would require too much effort on various sides to get running, so the decision was made to go for multiple ESDL files that each contain the information of a single year.
The interface with TEACOS needed to be adapted, because TEACOS would now use multiple ESDLs instead of one to get a complete multi-period model run going, and would output multiple ESDLs as a result. The runs for ETM would still be based on single ESDL files, but would be repeated for each of the individual years.
1.3.2. Technical problem description¶
The MMvIB platform seeks to automate complex multi-model workflows in order to support decision making. However, models used in the meso case such as CTM, TEACOS and ETM do not inherently work together. In addition, the location these models are hosted on may vary across experiments and deployments. In order to provide a robust platform, a large range of circumstances must be supported, and models must fit together like building bricks.
To address these challenges, the platform employs a modular architecture that facilitates seamless integration of a wide range of models. By using a standardized interface and data format, the platform enables smooth communication between the individual models. Models are treated as modular components that can be easily assembled and reconfigured as needed. Furthermore, the platform incorporates a flexible hosting infrastructure, allowing infrastructure and models to be deployed across various locations and environments.
This versatility ensures that the platform can adapt to a wide array of circumstances, providing decision-makers with a reliable toolset to navigate complex scenarios efficiently.
2. Approach¶
2.1. Model chain¶
The meso use case model chain is depicted in a flow diagram in figure 3. The steps are as follows:
In the ESDL MapEditor, the initial setup of the energy system is modeled. The assets for electrolysis and SMR are added as optional assets. This information is stored in an initial ESDL file.
The combined CTM/ETM model provides context in terms of electricity prices of a reference year. Optional assets are considered as not operational. This information is added to the ESDL file.
TEACOS loads the ESDL with the optional assets and the energy price, and calculates the optimal process configuration using economic parameters as input. The result is an ESDL file where the status of the assets is changed from optional to either enabled or disabled.
CTM/ETM calculates the impact of the new process configuration on the electricity price and updates this information in the ESDL file.

Figure 3 Schematic view on the process in the meso use case
The results of this model loop are the following:
Optimal processing configuration based on the initial gas price, CAPEX numbers, CO2 penalty and calculated/updated electricity price.
Latest calculation of the electricity price given the chosen processing configuration
Investments (in EUR)
CO2 emissions (in kton CO2)
Please note that step 1, building the system in the MapEditor, is a manual step. The others are done by the MMvIB orchestrator.
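The automated steps above can be sketched as a single loop over the ESDL payload. The function names are placeholders for the real adapter calls made by the orchestrator, and the dictionary stands in for an ESDL file.

```python
# Sketch of the meso model chain: CTM/ETM adds the reference electricity
# price, TEACOS decides on the optional assets, and CTM/ETM recomputes
# the price for the chosen configuration. The callables are hypothetical
# stand-ins for the real adapters; the dict stands in for an ESDL file.

def run_meso_chain(initial_esdl, ctm_etm, teacos, rounds=1):
    esdl = dict(initial_esdl)
    esdl = ctm_etm(esdl)      # step 2: add reference electricity price
    for _ in range(rounds):
        esdl = teacos(esdl)   # step 3: enable/disable optional assets
        esdl = ctm_etm(esdl)  # step 4: update the electricity price
    return esdl
```

Running more than one round corresponds to the convergence loop discussed in paragraph 1.3.1.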
2.2. Individual model developments¶
CTM
CTM REST Adapter
The CTM REST Adapter is written as a Python class whose methods can be called to automatically make the CTM API calls that the project needs. The results of these calls are then automatically used in the CTM ESDL adapter to write the CTM data into the ESDL file, which is then directly uploaded to MinIO through the TNO machine's localhost.
The CTM REST Adapter can be found under the /Kalvasta/MMvIB directory in GitHub. The Adapter's single most important piece is the ctm.py file (under /MMvIB/tno/etm_price_profile_adapter/model). This file contains a CTM class with the following methods:
Request()
Initialize()
Run()
Results()
CTM ESDL Adapter
The CTM ESDL adapter attempts to create a fully adaptive ESDL reading and writing applet. The specific adapter functions can be found in /MMvIB/specific_adapter/f.py. The file f.py (also called the specific adapter in MMvIB contexts) reads/writes assets from/into an ESDL file based on a CSV file. Each asset has its own CSV file specifying, for each attribute, which API interface name it should be read from (in the CTMref_r column) and which name it should be written to (in the CTMref_w column).
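A minimal sketch of such a per-asset mapping file is shown below. The column names follow the text; the attribute names and CTM API names are invented for illustration.

```python
# Sketch of the per-asset CSV mapping used by the specific adapter:
# each row maps an ESDL attribute to the CTM API name it is read from
# (CTMref_r) and written to (CTMref_w). The rows are invented examples.
import csv
import io

MAPPING_CSV = """attribute,CTMref_r,CTMref_w
power,ctm_capacity_in,ctm_capacity_out
fullLoadHours,ctm_flh_in,ctm_flh_out
"""

def load_mapping(text):
    reader = csv.DictReader(io.StringIO(text))
    return {row["attribute"]: (row["CTMref_r"], row["CTMref_w"])
            for row in reader}
```

The resulting dictionary can then drive a generic read/write routine, so adding an attribute only requires a new CSV row rather than new code.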
ETM
The Energy Transition Model (ETM) has a separate app which translates ESDL files into slider settings and vice versa. Separate from the ETM-ESDL app, an adapter was created to incorporate the ETM into the orchestrator and the meso-case multi-model. In this multi-model, the ETM provides an average electricity price based on slider settings provided by the existing model connection with the Carbon Transition Model (CTM). The app is available online at: https://esdl.energytransitionmodel.com/api/v1/. For the meso case, the export_esdl function was used as well as the kpis function.
TEACOS
Model developments
There are two sides to the developments made with regard to TEACOS. The adapter and the model logic interpreter.
Adapter
The TEACOS-adapter reads and translates ESDL files to the AIMMS cloud-based TEACOS and writes the results back into an ESDL file. The adapter is built using Flask REST API. The API is available at http://localhost:9300/openapi. TEACOS uses its own API that can only be accessed by requesting an account at the Quo Mare office, where IP whitelisting is necessary and an environment (.env) file is provided with a username to both the TEACOS cloud and the TEACOS SQL database.
The TEACOS adapter is built around its main function. The adapter API contains the following functions:
Request() –> Request the local host for an instance of TEACOS to assign a run to
Initialize() –> Initializes the run Requested
Run() –> Runs the ESDL Translation, the TEACOS API, and then the translation back to ESDL.
Status() –> Returns the state of the API.
Results() –> Contains the API Success or Error code.
Remove() –> Deletes the requested instance of TEACOS.
More detailed documentation of the TEACOS adapter is added in Appendix B.
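The adapter life cycle listed above can be summarized in a small stand-in class. The real adapter exposes these operations as REST endpoints via Flask; here they are plain methods with stubbed behaviour, so only the state transitions are shown.

```python
# Minimal stand-in for the TEACOS adapter life cycle. The real adapter
# exposes these operations as REST endpoints; all behaviour here is
# stubbed so the state transitions are easy to follow.

class TeacosAdapterSketch:
    def __init__(self):
        self.state = "idle"
        self.result = None

    def request(self):             # reserve a TEACOS instance for a run
        self.state = "requested"

    def initialize(self, esdl):    # load the input ESDL for the run
        self.input = esdl
        self.state = "initialized"

    def run(self):                 # ESDL -> TEACOS -> ESDL translation
        self.result = {"status": "SUCCESS", "esdl": self.input}
        self.state = "done"

    def status(self):              # state of the run
        return self.state

    def remove(self):              # delete the requested instance
        self.state = "idle"
        self.result = None
```

The orchestrator drives exactly this sequence: request, initialize, run, poll status, fetch results, remove.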
Model logic interpreter
When the relevant information is passed on through the adapter, TEACOS has the relevant data in memory. However, what is not passed on is an explicit topology of what needs to be modelled. Logic is needed that couples all that information into a network structure. In ESDL it is more or less assumed that the energy system is a network as well, but it works in some places with implicit references. For example (only illustrative): the electricity grid is always present, and the electricity price is a general setting for electricity pulled from the grid. In TEACOS the grid is as much an asset as an electrolysis unit, and the price for electricity coming from the grid needs to be specified explicitly. You could possibly define multiple “grids”, all with different price structures.
TEACOS also expects capacities and costs in a certain unit of measurement, in order to have a mathematically stable problem to solve and to prevent scaling issues. In ESDL, some units of measurement were omitted or differed from what was expected, so some interpretation had to be applied. It was challenging to create a complete logic that delivered a consistent TEACOS model.
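The kind of normalization the interpreter needs can be sketched as a lookup table mapping the unit variants encountered in ESDL files onto the base units TEACOS expects. The table of multipliers and chosen base units (EUR, MW) are assumptions for illustration.

```python
# Sketch of unit normalization for the model logic interpreter: map
# unit strings found in ESDL onto assumed base units (EUR for costs,
# MW for capacities). The multiplier table is illustrative only.

MULTIPLIERS = {
    "EUR": 1.0, "kEUR": 1e3, "MEUR": 1e6,   # costs -> EUR
    "W": 1e-6, "kW": 1e-3, "MW": 1.0,        # capacities -> MW
}

def to_base(value, unit):
    if unit not in MULTIPLIERS:
        # refuse to guess: an unknown or missing unit must be resolved
        # explicitly rather than assumed to be a default
        raise ValueError(f"unknown unit: {unit!r}")
    return value * MULTIPLIERS[unit]
```

Raising on unknown units, instead of assuming a default, surfaces exactly the ambiguity described above (is “100” Euro, kEuro or MEuro?) at read time instead of in the optimization results.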
2.3. Multi-model infrastructure¶
In order to achieve this, first and foremost models need a common way to exchange and parse data. For this, ESDL was used as a common language for the models, which was a good fit as ESDL supports the inclusion of custom KPIs with relevant metadata.
Next, a common communication methodology is required so that models can communicate results with each other. For this the Handler – Adapter protocol was designed. Each task is linked with a handler that specifies a generic protocol such as REST or MQTT, and each model-specific Adapter is able to interpret such requests and communicate these to the model in a standardised way.
In order to configure such workflows, the researcher executing the experiment needs to provide a configuration for the experiment. This configuration includes what types (and versions) of models each step requires and their configuration. The system dynamically allocates requested models via the model registry, to which model adapters are registered. This method allows for registration of secure external models, local models and even models running on different clusters or operating systems within VMs. This results in a very wide range of support for model applications across operating systems and networks.
Finally, intermediate and final results are stored in an inter-model storage solution. For this the standardised S3 protocol was used, which allows for storing large amounts of varied and unstructured data. This allows models to not only retrieve and store ESDL files, but also store any other files such as separate KPIs, logs, and more.

The multi-model infrastructure used for the meso case consists of the following components:
Core Infrastructure
Airflow
Airflow Webserver
Airflow Infrastructure
Kubernetes/Celery Cluster
Model Registry
Inter-Model Storage
MinIO
Model Infrastructure
TEACOS
TEACOS REST Adapter
TEACOS Infrastructure
TEACOS Model (Proprietary Cloud-Native)
CTM
CTM REST Adapter
CTM ESDL Adapter
CTM Model
Existing CTM-ETM coupling
ETM
ETM Adapter – Integrates the ESDL app into the orchestrator.
ETM-ESDL app – translates ESDL into ETM slider settings and vice versa.
ETM Model (consisting of several separate repositories/models) – Calculates or communicates slider settings.
2.4. Orchestrator configuration¶
Experiments within the MMvIB platform require two components:
Workflow Specification
Experiment Configuration
The workflow is a static definition of what the experiment is about. For the meso use case, this means that it specifies the looping behaviour between TEACOS and the CTM/ETM combination, as well as calculating the KPIs in the final step.
The configuration on the other hand defines how the experiment should be conducted. For example, which exact model version or end-point to use, how that model should be configured and where the experimental results should be stored.
This division allows for large scale and parallel experimentation by running the same workflow horizontally or vertically over different configurations. Using the Airflow API, parameter spaces can be searched to find optimal solutions to complex multi-model problems by providing robust configurations for the workflow that is being studied.
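The split described above can be captured in two small documents: a static workflow specification and a per-experiment configuration. The keys and values below are illustrative, not the platform's actual schema.

```python
# Illustrative split between a static workflow specification and a
# per-experiment configuration. Step names, keys and values are
# hypothetical, not the MMvIB platform's actual schema.

WORKFLOW = {
    "name": "meso-loop",
    "steps": ["ctm_etm_prices", "teacos_optimize", "ctm_etm_update"],
}

def make_experiment(workflow, model_versions, storage_bucket):
    """Bind a workflow to concrete model versions and a results store."""
    return {
        "workflow": workflow["name"],
        "models": model_versions,             # exact version per step
        "results": {"bucket": storage_bucket},
    }
```

Running the same `WORKFLOW` with many generated configurations is what enables the parallel parameter-space searches mentioned above.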
3. Results¶
On a functional level, the following results have been achieved:
We were able to create a scenario in the MapEditor that represented a very stylised version of the Zeeland hydrogen production problem.
This scenario was exported from MapEditor to an ESDL file format
CTM/ETM added information to the ESDL file on electricity pricing
The resulting ESDL file could be read and optimised by TEACOS and a resulting file could be written back to ESDL format including the decision to be either “ENABLED” or “DISABLED” for all the optional assets
This ESDL file could be picked up by ETM and an updated electricity price was calculated.
This all worked in an automated sequence via the Apache Airflow orchestration software in February 2023. All parties involved participated in providing adapters that made the communication possible. TNO performed tests in a TNO-controlled environment and reported that the sequence worked.
On the positive side, the starting of the individual models and the communication between the models via ESDL was proven and working. This in itself is a major result!
On the individual models the following results were achieved:
The ETM-ESDL app can provide the average electricity price based on all slider settings through the ESDL language.
The ETM-ESDL app can provide KPIs, such as:
Source of electricity production
Total costs
Total CO2 emissions
CO2 reduction (compared to 1990)
The general ETM adapter makes sure the ETM-ESDL app can be called from the orchestrator to provide the average electricity price and add the KPIs.
An ESDL file representing the hydrogen production of fertilizer producer Yara, which can both be read by TEACOS and interpreted by the CTM
The ability for the CTM to read and write in ESDL
The ability for the CTM to communicate with the orchestrator
TEACOS was able to construct a full model topology based on an ESDL file.
Both on input and output the conversions from and to the ESDL file were handled by the TEACOS adapter.
4. Lessons learned¶
The following lessons were learned:
At this point in the development, coupling models into a multi-model framework requires case-dependent work and forms of communication. This means that adapting the multi-model framework to other projects or topics in the future will still require a considerable amount of time.
Development skills are key for a successful project, especially in these early stages.
Communication between models can partly be solved by ESDL, but also requires standardization (definitions and standard naming) of carriers, assets, levels of detail and profiles.
Building a complete ESDL representation of a complex model like the CTM is not feasible. ESDL files containing industrial sites should be simplified and generalized, such that multiple sites can make use of the same ESDL structure. This way, sites become more of a black box, as complex internal structures are left out.
Most problems that were encountered with TEACOS (and the time lost fixing them) had to do with units of measurement (UoM). There are multiple ways in ESDL to specify a UoM with a flow, but all of them depend on text interpretation, and often these UoMs are not specified because some sort of default is assumed. For example, if an investment cost for a PV panel is specified, it may simply say “100” with a certain maximum size, say 15 MW. For the interpretation in TEACOS it is not immediately clear whether this 100 is 100 Euro, 100 kEuro, 100 MEuro, or even 100 Euro/MW, 100 kEuro/MW or 100 MEuro/MW. Even if the UoMs are specified, there are still different ways to interpret the numbers; e.g. 100 MWh can be 100 MWh per day but also 100 MWh per year. A common set of rules of behaviour around the use of UoMs would be valuable.
Although we were able to make the multi-model work, it is almost certain that with a slightly different energy system with a different topology, we would run into problems fairly soon. Nothing that is not fixable, but up to now that is the case.
When working on a multi-model there is often interaction with other parties, and other parties have their own priorities and availability. What is important for us at a given moment may not be important for them at that time, if they are available at all. As a result, there was often considerable delay over the total scope of work. Given the exploratory nature of this project this is understandable, but in an operational project it would put pressure on the timeline if this is not aligned and formalised upfront.
A lot of time was lost working in the TU Delft environment. All access rights needed to be arranged from there, while at the same time all the technical knowledge was within TNO. Even people from TNO often did not have the correct access rights to get something working.
On the Quo Mare side we wanted to gain some experience working with Apache Airflow, but it proved almost impossible to:
Get access to Apache Airflow
Get rights to see input files
Get rights to see the correct DAGs
Change the DAGs
Upload the changed DAGs to the correct folder
Run the DAGs
See the correct output files
The individual steps seem small and could eventually be done, but up to the end it was not possible to go through the total sequence without external help, because we did not have the required access ourselves.
5. Conclusions & recommendations¶
From the work done in the meso case we can draw the following conclusions:
Multi-models can provide interesting insights into energy system dynamics, but they also require in-depth knowledge of each separate model and its dynamics to understand the entire multi-model.
Coupling models like TEACOS, CTM and ETM is possible, but requires a lot of manual fine-tuning to work in a meaningful way. ESDL provides a means of communication; however, it does not provide a strict format in which this information is structured.
Scenario models like ETM and CTM can be used for optimization when coupled with an optimization model like TEACOS
In general, the work involved requires not only people with modelling knowledge but also people with in-depth technical IT knowledge. This was not clear at the beginning of the project.
Positive energy in the group helps a lot to move things forward.
The parties combined have the following recommendations:
Further standardization of communication between models (on top of ESDL) can reduce the amount of (future) work significantly and make multi-models more flexible and adaptable for users.
Adding the option for the orchestrator to read/write assets other than the SMR and electrolyser: for MMvIB’s aim of creating a complete multi-modelling framework to be truly successful, the orchestrator should direct all of the models’ actions. As such, it should be the orchestrator that specifies which kinds of assets need to be read and written. This would mean adding an ESDL config as input to the specific read_inputs and write_inputs functions. As of now, the CTM adapter simply tells read_inputs and write_inputs to read/write only the Yara SMR and electrolyser.
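As a sketch of this recommendation, asset selection would be passed in as configuration rather than hard-coded in the adapter. The config shape and function signature below are assumptions, not the actual MMvIB interface.

```python
# Illustrative sketch only: the asset names to read/write are passed in as
# configuration instead of being hard-coded in the adapter. The config shape
# is an assumption, not the actual MMvIB interface.

def read_inputs(esdl_assets, config):
    """Return only the assets the orchestrator asked for."""
    wanted = set(config["assets_to_read"])
    return {name: asset for name, asset in esdl_assets.items() if name in wanted}

# Today the CTM adapter always reads the Yara SMR and electrolyser; with a
# config, the orchestrator decides:
config = {"assets_to_read": ["yara_smr", "yara_electrolyser"]}
assets = {
    "yara_smr": {"type": "SMR"},
    "yara_electrolyser": {"type": "Electrolyzer"},
    "pv_field": {"type": "PVInstallation"},
}
print(sorted(read_inputs(assets, config)))  # ['yara_electrolyser', 'yara_smr']
```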
Shared ESDL standards for structuring information can help different models communicate with reduced ‘manual fine-tuning’.
For far-reaching integrated coupling of models, a standardized communication method does not suffice. Certain definitions and assumptions among models should be aligned during development of the models.
Make sure the project starts with actual work sooner than it did this time; during the first half-year most people were waiting for something to happen.
Split the project into a conceptual phase for showing the possibilities, and an operational phase where this is expanded to an actual real-life case.
Create awareness and common ground for UoM definitions.
When getting a multi-model going, properly document what is needed to get access to ALL the relevant systems and to get it running.
Create sessions where people work physically together on something. The time spent waiting on other parties was enormous, even with the best intentions from all parties involved.
The personal aspect played a part in delaying the process. It is advisable to involve at least two people with a similar knowledge level from each side, so that one person changing roles, being on holiday or falling ill has a less significant impact on the other parties’ ability to continue.
Improve payment terms for commercial parties. The current rate really makes this a low-priority project, and that has an effect on the timeline and the results that are achieved.
Appendix A: CTM and ETM scenario and session ID’s¶
Initiating the MESO case model run from the orchestrator requires either ETM + CTM scenario IDs or session IDs to be sent to the CTM adapter.
A scenario ID corresponds to a fixed scenario, which needs to be made and saved by a user in the ETM and CTM respectively. From the perspective of the orchestrator, a scenario is unchangeable (read-only). The only function of a scenario ID is to create a session ID, which is an exact copy of the corresponding scenario but changeable (read and write).
If a model run is initiated using scenario IDs, the CTM generates both ETM and CTM session IDs which are copies of the scenarios. These sessions then undergo modifications resulting from model interactions in the CTM-ETM-TEACOS model chain. For example, the CTM synchronizes with the ETM by changing the ETM session so that it corresponds with the CTM session. Then, after the ETM-TEACOS interaction has passed, TEACOS modifies the CTM session, and so on.
After the MESO model loop has completed, the CTM returns both ETM and CTM session IDs to the orchestrator. These session IDs are now in the ‘end state’ of the model loop, while the scenario IDs have not changed.
Therefore, an experimenter who wants to do multiple iterations of a MESO case model run should use the ETM/CTM session IDs returned by the CTM for the next iteration, and not the scenario IDs used for the initial model run.
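The iteration pattern can be sketched as follows. The function passed in is hypothetical; real runs go through the CTM adapter's REST API.

```python
# Sketch of the iteration pattern described above (the run_model_loop function
# passed in is hypothetical; real runs go through the CTM adapter's REST API).

def run_meso_iterations(start_scenario_ids, n_iterations, run_model_loop):
    """The first run starts from read-only scenario IDs; every later run
    reuses the mutable session IDs returned by the previous loop."""
    ids = dict(start_scenario_ids)        # e.g. {"etm": 13578, "ctm": "SC-..."}
    for _ in range(n_iterations):
        ids = run_model_loop(ids)         # returns the end-state session IDs
    return ids
```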
The following CTM and ETM scenarios could be used for a MESO case experiment test run:
ETM scenario IDs:
13578: a scenario of an energy system in which Yara is expected to opt for an electrolyser
13579: a scenario of an energy system in which Yara is expected to opt for an SMR
13580: a scenario of an energy system in which the choice between an SMR and an electrolyser is expected to alternate when multiple iterations of the model run are performed
CTM scenario IDs:
SC-a1035b76fb350515: a scenario that should be used in tandem with ETM scenario ID 13578 or 13579
SC-0c70f3d36d8e68c3: a scenario that should be used together with ETM scenario 13580
However, you can also create your own scenarios by visiting https://energytransitionmodel.com and https://carbontransitionmodel.com/ (you need to make an account first).
Appendix B: TEACOS adapter documentation¶
Introduction¶
This appendix describes the technical inner workings of the TEACOS adapter built to connect Quo Mare’s Techno-Economic Analysis Of Complex Option Spaces (TEACOS) tool to the infrastructure set up in the MMvIB project (https://multi-model.nl/). The adapter is based on the AIMMS adapter created by TNO. The input and output of the adapter are Energy System Description Language (ESDL) files, documented at https://energytransition.github.io/. The model-specific TEACOS adapter is available on GitHub: https://github.com/MultiModelling/teacos-adapter.
Brief description¶
TEACOS is a long-term optimization tool. It optimizes transition pathways to obtain the highest margin or lowest cost over a given time horizon, based on the Net Present Value. It detects the most profitable investments over time and locations, given a predefined supply/demand scenario and potential environmental constraints.
The TEACOS adapter reads and translates ESDL files for the AIMMS cloud-based TEACOS and writes the results back into an ESDL file. The adapter is built using a Flask REST API, available at http://localhost:9300/openapi. TEACOS uses its own API that can only be accessed by requesting an account at the Quo Mare office; IP whitelisting is necessary, and an environment (.env) file is provided with credentials for both the TEACOS cloud and the TEACOS SQL database.
Adapter functions¶
The TEACOS adapter is driven from its main function. The adapter API contains the following functions:
Request()
Request the local host for an instance of TEACOS to assign a run to
Initialize()
Initializes the requested run.
Run()
Runs the ESDL Translation, the TEACOS API, and then the translation back to ESDL.
Status()
Returns the state of the API.
Results()
Contains the API Success or Error code.
Remove()
Deletes the requested instance of TEACOS.
All functions are standard except the run function, which is explained in depth below.
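From a client's point of view, the lifecycle of these functions can be sketched as follows. The endpoint paths are assumptions; consult http://localhost:9300/openapi for the real routes.

```python
# Sketch of the adapter lifecycle from a client's point of view. The endpoint
# paths are assumptions; consult http://localhost:9300/openapi for the real
# routes.

def run_teacos_job(call):
    """Drive one TEACOS run through the adapter's standard functions.
    `call(verb, path)` is any transport, e.g. a thin wrapper around requests."""
    instance = call("POST", "/request")          # Request(): claim an instance
    call("POST", f"/initialize/{instance}")      # Initialize(): set up the run
    call("POST", f"/run/{instance}")             # Run(): ESDL -> TEACOS -> ESDL
    while call("GET", f"/status/{instance}") == "RUNNING":
        pass                                     # Status(): poll until done
    result = call("GET", f"/results/{instance}") # Results(): success/error code
    call("DELETE", f"/remove/{instance}")        # Remove(): free the instance
    return result
```

Passing the transport in as a function keeps the sketch independent of any particular HTTP library.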
Config¶
A .env file, in the form of .env.template, is provided by Quo Mare with a TEACOS account and password included.
The input of the adapter can be delivered in two distinct manners:
Via MinIO, the Inter-Model Storage (IMS) of the MMvIB infrastructure; this requires additional information in the .env file.
Locally; in this case the inputname and outputname configuration must be adjusted in the ‘start_aimms_model’ function in the teacos.py file.
Input Requirements¶
ESDL describes a full energy system as a node-arc structure, where Producer nodes supply energy to the Consumer nodes, possibly through Transport and Conversion. Assets with an OPTIONAL state may be selected by TEACOS to obtain the lowest cost for the energy system.
The following sets with parameters are required by the TEACOS adapter:
Producers:
Id: Unique
Display Name: Technology
State: OPTIONAL or ENABLED
Power: in W
Profile: attached to OutPort
Year: Parameters can be time dependent
Consumer:
Id: Unique
Display Name
State: OPTIONAL or ENABLED
Profile: attached to InPort
Carriers:
Id: Unique
Display Name
Year: Parameters can be time dependent
Import Producers (for all carriers):
Id: Unique
Display Name
Export Consumers (for all carriers):
Id: Unique
Display Name
Arcs:
Id: Unique
Outport To InPort of Assets
Specified Carrier
Costs:
Investment costs: Euro/Watt for OPTIONAL Producers (peak capacity)
Marginal cost: Euro/MJ per Carrier for Import
Year: Parameters can be time dependent
Additional Supported Sets:
Conversions
Id: Unique
Display Name
State: OPTIONAL or ENABLED
Efficiency or InputOutputRelation
Transport
Marginal cost: Euro/MJ for Consumers
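The required attributes above lend themselves to a simple pre-flight check before an ESDL file is handed to the adapter. The sketch below is illustrative only; the dict shape is an assumption, not the real ESDL object model.

```python
# Illustrative sketch: checking an asset against the required attributes listed
# above before handing it to the adapter. The dict shape is an assumption,
# not the real ESDL object model.

REQUIRED = {
    "Producer": {"id", "name", "state", "power"},
    "Consumer": {"id", "name", "state"},
    "Carrier": {"id", "name"},
}

def missing_attributes(asset_type, asset):
    """Return the required attributes the asset does not provide."""
    return sorted(REQUIRED[asset_type] - asset.keys())

pv = {"id": "pv1", "name": "PV field", "state": "OPTIONAL"}
print(missing_attributes("Producer", pv))  # ['power']
```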
Run function adapter¶
The run procedure does the following steps in order:
The Translator class ‘Universal Link’ is created and the ESDL is parsed into the TEACOS MySQL server.
The input ESDL is picked up from MinIO, or from the local path specified as inputname in ‘start_aimms_model’.
The translation is done in the function ‘parse_esdl’ by converting all sets with parameters into three sets:
SetOfTables: per table a single string with the name of the table.
SetOfAttributes: A tuple of all the attributes included in the specific table.
SetOfValues: A tuple of value sets of every instance included in the input.
The specified SQL database is refreshed and filled with the previously listed sets (See Figure 2).
The TEACOS API is called with the TEACOS credentials included.
This creates additional tables in the MySQL database with the TEACOS results.
The Translator class ‘Universal Link Back’ is created; the data is retrieved from the server and written into an ESDL file in ‘_generate_esdl’.
The new file is saved under the outputname specified in ‘start_aimms_model’.

Figure 5: The Translator functionality

Figure 6: The Generated Tables
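The three-set flattening performed by ‘parse_esdl’ can be sketched as follows; the exact shapes in the real adapter may differ.

```python
# Sketch of the three-set flattening performed by 'parse_esdl' (the exact
# shapes in the real adapter may differ).

def to_three_sets(tables):
    """tables: {table_name: [ {attribute: value, ...}, ... ]}"""
    set_of_tables, set_of_attributes, set_of_values = [], [], []
    for name, rows in tables.items():
        attrs = tuple(rows[0].keys())
        set_of_tables.append(name)               # one string per table
        set_of_attributes.append((name, attrs))  # the table's attributes
        for row in rows:                         # one value tuple per instance
            set_of_values.append((name,) + tuple(row[a] for a in attrs))
    return set_of_tables, set_of_attributes, set_of_values

t, a, v = to_three_sets({"Producers": [{"id": "pv1", "power": 15.0}]})
print(v)  # [('Producers', 'pv1', 15.0)]
```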
Results¶
The run produces the following results in the output ESDL:
All OPTIONAL assets are transformed into either ENABLED or DISABLED.
The ENABLED producers are scaled to the TEACOS optimum.
KPIs are added to the system.
Micro use case report¶
1. Introduction¶
1.1. Use case description¶
In the micro use case, the MMvIB approach has been applied on a local scale, for a business park.
In recent years, the energy transition on business parks has accelerated. Increased energy prices and changing markets stimulate entrepreneurs to invest in energy efficiency and renewable energy, and policy makers act as they realise business parks play a key role in reaching climate targets. A collective (business park) approach is beneficial in this transition.
The micro use case is intended to support decision making on business parks by using a multi-model approach to calculate long-term optimized investment paths towards a sustainable business park.
As a case study, the business parks Welgelegen and Slabbecoornpolder in the municipality of Tholen have been used. Businesses on these business parks are united in the Regional Energy Community (REC) Tholen, which has the mission to collectively invest in renewable energy measures towards a CO2-neutral or energy-positive business park. As with many other business parks in the Netherlands, current grid congestion is a barrier to electrification and renewable electricity measures for Welgelegen and Slabbecoornpolder.
The energy system for the case study is simplified to provide a technical proof of concept for a business park MMvIB multi-model chain. The multi-model chain has not been tested in a decision-making process, as the scope of the model chain is not yet detailed enough.
1.2. Models used¶
To cover all aspects of the problem scope given by the Tholen case study, six models/tools have been coupled in this use case:
The ESDL MapEditor is a map-based scenario editor, using the Energy System Description Language (ESDL) to describe the energy system.
The Energy Potential Scan for business parks (EPS) is a calculation tool which estimates the energy use of each business on a business park, and the potential for energy-saving and PV measures.
The Energy Transition Model (ETM) is an interactive energy scenario tool, which can be used for countries, regions and municipalities.
A new agent-based model (ABM) has been developed in this use case to simulate human investment behaviour.
Techno-Economic Analysis Of Complex Option Spaces (TEACOS) is a long-term optimisation tool that calculates the optimal investment paths for an energy system.
The Energy System Simulator (ESSIM) is a tool that simulates network balancing and the effects thereof, in an interconnected hybrid energy system (described in ESDL) over a period of time.
1.3. Multi-model aspects showcased¶
A number of multi-model challenges are addressed in the micro use case, both on a conceptual and a technical level.
1.3.1. Conceptual¶
There are three key conceptual aspects that are challenging in the micro use case:
Convergence of energy flows between models (ESSIM – TEACOS)
Agent-based versus global investment optimum
Multi-period
First of all, convergence. When using multiple models there is always a chance that several models calculate similar results in different ways. This occurred between ESSIM and TEACOS: in the configured setup, TEACOS determines how many PV panels need to be installed and calculates, on a yearly basis, an electricity flow from a group of PV panels to users/companies that have a demand for electricity. ESSIM calculates the electricity flow for a given PV panel capacity on an hourly basis, using a sun intensity profile for a historic year. With a provided demand profile of the user/company, ESSIM can take into account the disconnect in timing between supply and demand, which TEACOS, working on a yearly basis, cannot. On the other hand, TEACOS can determine the best investment size given the properties of the rest of the energy system and set the PV panel capacity for ESSIM to work with. In this way there is added value in combining the two models. It is, however, not trivial that the two models end up with the same electricity flow between PV panels and users/companies.
The way this has been solved is the following:
TEACOS runs and determines the optimal investment in PV panels. This is based on 100% of the capacity going to users, nothing to the grid.
ESSIM uses the TEACOS result as the given PV panel capacity and calculates how much of this electricity (in %) can actually be used by the users and how much will be absorbed by the grid. This percentage is then passed back to TEACOS.
TEACOS recalculates the optimum investment, now taking into account that only the useful percentage of the electricity reaches the users.
Steps 2 and 3 are repeated until either the percentage no longer changes or the optimum investment remains the same. Convergence has then been reached.
Theoretically there is no guarantee that the loop will converge, but in realistic cases it is expected to.
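The convergence loop can be sketched as a fixed-point iteration. The two model stand-ins below are illustrative formulas, not the real TEACOS and ESSIM.

```python
# Toy sketch of the TEACOS/ESSIM convergence loop described above. The two
# model stand-ins are illustrative formulas, not the real models.

def teacos_invest(useful_fraction):
    """Stand-in for TEACOS: optimal PV capacity (MW) given the fraction of
    generation that reaches users instead of being absorbed by the grid."""
    return 20.0 * useful_fraction

def essim_useful_fraction(capacity_mw):
    """Stand-in for ESSIM: larger installations push more to the grid."""
    return 1.0 / (1.0 + 0.05 * capacity_mw)

def converge(tol=1e-6, max_iter=100):
    fraction = 1.0                                     # step 1: 100% to users
    for i in range(max_iter):
        capacity = teacos_invest(fraction)             # TEACOS optimum
        new_fraction = essim_useful_fraction(capacity) # ESSIM check
        if abs(new_fraction - fraction) < tol:         # stop when % settles
            return capacity, new_fraction, i
        fraction = new_fraction
    raise RuntimeError("no convergence within max_iter")
```

With these stand-ins the iteration settles on a fixed point; as noted above, there is no general guarantee of this for the real models.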
Secondly, the agent-based versus global investment optimum. The thesis work of Menghua Prisse 1 covers this topic extensively, so a short note suffices here. TEACOS determines the mathematical optimal solution for the energy system that is presented. In the micro use case this means determining the optimum investment size and resulting electricity flows for the entire business park. In this case the objective value for TEACOS to optimise on is the total cost for the entire business park lumped together. This means that the result may not be optimal, or even beneficial, for some of the individual businesses in the business park. The agent-based approach tries to mimic real life, where there is no full transparency between the businesses in the business park and each business takes the decisions that are most beneficial for its own situation. The struggle for the agent-based approach arises when there are general constraints for the entire business park, e.g. a grid limitation for all businesses combined. TEACOS can deal with that when optimising the total business park; an agent-based model has to find a way around it. It is possible to run TEACOS for individual businesses to make the optimal decision for that business, and as such use it as part of the agent-based model.
Thirdly, the multi-period aspect. There are several aspects with regard to the handling of time. Both ESSIM and ETM use historic hourly profiles for solar and wind in their calculations; with these hourly profiles, insight can be gained into the expected balancing of the system over an entire year. Although TEACOS can handle hourly time periods as well, in the micro use case setup it is configured with a time bucket the size of a year, meaning that there is no insight into what happens within that year. This already leads to an interesting situation within MMvIB on how to handle differences in time. ETM calculates results for a given situation over an entire year, and ESSIM can theoretically handle a longer time period. The project team started out with a scope of a single year: TEACOS would have one time bucket, ESSIM an hourly profile for one year, and ETM a regular time slice of one year. However, in real life decisions are not made based on data or expected results of a single year. Investments are often spread out in time and result in a transition path towards future goals. It therefore made sense to see how this notion of multiple years would fit into the micro use case multi-model environment: the optimisation would still be an optimisation, but over multiple time buckets of a year instead of a single one. Initially this proved to be a challenge, because up to then all information was based on a single-year ESDL file. TEACOS would now need information for multiple years, and a choice had to be made: either extend the ESDL to contain multiple years, or go for multiple ESDLs of a single year each. A single ESDL for multiple years turned out to require too much effort on various sides, so the decision was made to go for multiple ESDL files, each containing the information of a single year.
The interface with TEACOS needed to be adapted: to run a complete multi-period model, TEACOS now uses multiple ESDLs instead of one, and outputs multiple ESDLs as a result. The runs for ESSIM and ETM are still based on single ESDL files, but are repeated for each of the individual years.
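The resulting multi-period driving loop can be sketched as follows; the file naming and function signatures are illustrative, not the actual adapter interfaces.

```python
# Sketch of the multi-period handling described above: TEACOS consumes the
# whole set of per-year ESDL files at once, while ESSIM and ETM run once per
# year. File naming and function signatures are illustrative.

def run_multi_period(years, load_esdl, run_teacos, run_essim, run_etm):
    esdls = {y: load_esdl(f"system_{y}.esdl") for y in years}  # one ESDL per year
    plans = run_teacos(esdls)   # single multi-period optimisation over all years
    results = {}
    for y in years:             # per-year simulation (ESSIM) and projection (ETM)
        results[y] = (run_essim(plans[y]), run_etm(plans[y]))
    return results
```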
1.3.2. Technical problem description¶
The MMvIB platform seeks to automate complex multi-model workflows in order to support decision making. However, models used in the micro use case such as ESSIM, TEACOS and ETM do not inherently work together. In addition, the location these models are hosted at may vary across experiments and deployments. In order to provide a robust platform, a large range of circumstances must be supported, and models must fit together like building bricks.
To address these challenges, the platform employs a modular architecture that facilitates seamless integration of a wide range of models. By using a standardized interface and data format, the platform enables smooth communication between the individual models. Models are treated as modular components that can be easily assembled and reconfigured as needed. Furthermore, the platform incorporates a flexible hosting infrastructure, allowing infrastructure and models to be deployed across various locations and environments.
This versatility ensures that the platform can adapt to a wide array of circumstances, providing decision-makers with a reliable toolset to navigate complex scenarios efficiently.
2. Approach¶
In this chapter, the model chain workflow, the individual model developments, the infrastructure aspects and the orchestrator configuration for the micro use case multi-model are described.
2.1. Model chain¶
The micro use case model chain is depicted in a flow diagram in figure 1. The steps are as follows:
The EPS calculates an ESDL representation of the energy system of the business park, based on available data for all buildings and businesses, and standard energy demand profiles.
In the ESDL MapEditor, potential energy measures can be added as optional assets.
The ETM provides energy prices to the ESDL, based on energy scenario data.
TEACOS loads the ESDL with optional assets and energy price scenarios, and calculates the optimal investments (in time) from a business park perspective, using economic parameters as an input.
As TEACOS bases its decisions on yearly energy demand, and energy production and demand profiles vary over time, ESSIM is used to simulate the resulting (ESDL) energy system with an hourly resolution, optimizing dispatch and including flexibility.
The import and export electricity flows (between the business park and its connection with the grid) and potential grid congestion are sent back to TEACOS for an adjusted optimization run.
The TEACOS optimization and ESSIM simulation are iterative. When the energy flows between TEACOS and ESSIM have converged, the results are sent to ETM and ABM.
ETM calculates the impact of the investments on the system KPIs at municipality level.
ABM uses the TEACOS (business park) investment optimum as an input for agent-based decision making. The agent-based decisions of the businesses are compared with the TEACOS business park optimum.
Figure 1. Micro use case multi-model chain.
The results for the optimal investment path(s) for the business park are:
Local energy production (in MWh)
Investments (in EUR)
Energy costs (in EUR/year)
Direct and indirect CO2 emissions (in kton CO2)
Steps 3-8 are part of the MMvIB orchestrator, the other steps are still manual.
2.2. Individual model developments¶
ABM
The Agent-Based Model is a relatively simple simulation model developed in Python using the Mesa and Mesa Geo packages to simulate investment behaviour in the optional assets from the ESDL file. The key outcome this model aims to represent is the number and distribution of solar panels purchased by agents in a single simulation run. The Mesa model aims to replicate, in an abstract manner, the real-life decision-making processes that influence the acquisition of optional assets, considering financial (i.e., costs and ROI) and social factors (i.e., how agents influence each other). The decisions made by the agents are written back into the ESDL file. The results of the ABM are presented in the thesis Coupling for multi-models 1.
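A decision rule of this kind can be sketched without any dependencies. The real model is built with Mesa and Mesa Geo; the thresholds and the social term below are purely illustrative.

```python
# Dependency-free toy of the agents' decision rule (the real model is built
# with Mesa and Mesa Geo; thresholds and the social term here are illustrative).
import random

def simulate(n_agents, roi, peer_weight, threshold, seed=0):
    """Each round an agent invests when its ROI plus peer pressure from the
    fraction of earlier adopters crosses its personal threshold."""
    rng = random.Random(seed)
    thresholds = [threshold + rng.uniform(-0.1, 0.1) for _ in range(n_agents)]
    adopted = [False] * n_agents
    for _ in range(10):                                # simulation rounds
        peer = sum(adopted) / n_agents                 # social factor
        for i in range(n_agents):
            if not adopted[i] and roi + peer_weight * peer >= thresholds[i]:
                adopted[i] = True                      # financial + social choice
    return sum(adopted)                                # number of PV adopters
```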
ESSIM
During the development of the MMvIB project, it was identified that some models needed access to energy profiles, but there was no standardised way of providing this. Instead, ESDL was updated by the team at TNO to be able to embed and link to energy profiles directly in the ESDL file. This ensured that models within the multi-model chain could have access to the same set of energy profiles.
The primary adaptations for ESSIM during the MMvIB project were the development of the ESSIM adapter based on a REST API interface, the ability to read and utilise profiles embedded within an ESDL file and the inclusion of calculated KPIs directly into the ESDL file during the operation of multi-model workflows.
ETM
The Energy Transition Model (‘ETM’) works with a separate app specifically built to enable the translation of ESDL files to ETM scenarios and vice versa. For this use case three features have been added to the app:
Electricity price (curve) additions to the ESDL based on an ETM scenario
Creating a context ETM scenario based on two ESDL files with more local information
Adding KPIs to an ESDL file based on ETM scenario results
The first feature enables the addition of a future average electricity price based on the hourly electricity price in a given scenario with a given end-year. For example, for 2030 the ‘Klimaat en Energieverkenning’ (KEV) can be used to provide an average electricity price. The second feature enables users to understand and project the impact of certain choices made by the business park owners on a larger scale, such as the municipality. If the business park makes certain choices with regard to energy production or heating this can be aggregated and projected onto the amount of local energy production or mix of heating technologies in the municipality (or province, country, etc.). This enables efficient and fast communication between stakeholders on multiple levels of scale. Lastly, the KPI feature can quickly showcase the differences and results of energy plans in the business park.
TEACOS
The TEACOS developments that were specifically done within the MMvIB project are the following:
Creating code for reading in ESDL files and converting the information to an SQL database
Creating code for reading in the SQL database information and storing the information in local TEACOS parameters in memory
Creating and implementing logic for interpreting the data and turning it into a consistent TEACOS model that could be optimised
Optimisation procedures were already in place so those did not need to be created
Creating code for writing results of the optimisation back to the SQL database format
Creating code for writing the combination of input/output information back to a new ESDL file
Wrapping the whole sequence in an API that can be called externally
The work involved in the previous bullets was initially done for the single time period scenario. Later on, all steps were revisited and extended to handle both single and multiple time period scenarios.
General TEACOS information
TEACOS is a mathematical optimization tool for mid- to long-term strategic investment analysis. The tool is designed to assist in the investment decision making process. It aims to answer the following questions:
In which (decarbonization) opportunities to invest?
What is the optimal investment timing?
How much to invest?
By answering these questions, TEACOS provides credible, affordable and competitive transition pathways towards a low carbon energy system. TEACOS is completely data driven. Because of this, it can be applied in any industrial sector and at any scale.
TEACOS models the supply chain as a network. In the network, nodes represent locations or (production) units, and the connections between the nodes (arcs) represent transport of commodities between the nodes. Additionally, possible adaptations to the network infrastructure can be modelled as investments.
The model selects the best combination of investments and calculates the corresponding product flow such that either the Net Present Value is as high as possible, or the costs are minimized.
One of the major strengths of TEACOS lies in answering ‘what-if’ questions, e.g. ‘what if CO2 emission costs rise?’, by defining several scenarios in which certain assumptions are altered, e.g. a scenario with fixed CO2 emission costs and one where CO2 emission costs change over time.
TEACOS needs input on five different aspects:
Supply: resource availability and cost, utility availability and cost.
Conversion Infrastructure: yields and capacities, CAPEX and OPEX.
Transport Infrastructure: capacities, CAPEX and OPEX.
Demand: product/service demand and sales prices.
Strategic input: investment opportunities and their impact, outlook on prices and costs, environmental constraints, learning curves, supply and demand scenarios, and other constraints and scenarios.
The input is usually read from an Excel file or from a database. Specifically for MMvIB, the data is obtained by reading and interpreting ESDL files.
2.3. Multi-model infrastructure¶
In order to couple the models, first and foremost the models need a common way to exchange and parse data. For this, ESDL was used as a common language; it is a good fit as ESDL supports the inclusion of custom KPIs with relevant metadata.
Next, a common communication methodology is required so that models can communicate results with each other. For this the Handler – Adapter protocol was designed. Each task is linked with a handler that specifies a generic protocol such as REST or MQTT, and each model-specific Adapter is able to interpret such requests and communicate these to the model in a standardised way.
In order to configure such workflows, the researcher executing the experiment needs to provide a configuration for the experiment. This configuration includes what types (and versions) of models each step requires and their configuration. The system dynamically allocates requested models via the model registry, to which model adapters are registered. This method allows for registration of secure external models, local models and even models running on different clusters or operating systems within VMs. This results in a very wide range of support for model applications across operating systems and networks.
Finally, intermediate and final results are stored in an inter-model storage solution. For this the standardised S3 protocol was used, which allows for storing large amounts of varied and unstructured data. This allows models to not only retrieve and store ESDL files, but also store any other files such as separate KPIs, logs, and more.
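As models only see put/get of named objects, the storage contract can be sketched with an in-memory stand-in. A real deployment talks S3/MinIO; the object-key layout shown is an assumption, not an MMvIB standard.

```python
# Sketch of the inter-model storage contract as the models see it. A real
# deployment talks S3/MinIO; this in-memory stand-in only illustrates the
# interface, and the object-key layout is an assumption, not an MMvIB standard.

class InterModelStorage:
    def __init__(self):
        self._objects = {}

    def key(self, experiment, step, filename):
        return f"{experiment}/{step}/{filename}"   # e.g. micro-case/teacos/out.esdl

    def put(self, experiment, step, filename, data):
        self._objects[self.key(experiment, step, filename)] = data

    def get(self, experiment, step, filename):
        return self._objects[self.key(experiment, step, filename)]

ims = InterModelStorage()
ims.put("micro-case", "teacos", "result.esdl", b"<esdl/>")     # ESDL result
ims.put("micro-case", "essim", "kpis.json", b'{"grid": 0.2}')  # any other file
```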
The multi-model infrastructure used for the micro use case consists of the following components:
Core Infrastructure
  Airflow
    Airflow Webserver
    Airflow Infrastructure
      Kubernetes/Celery Cluster
  Model Registry
  Inter-Model Storage
    MinIO
Model Infrastructure
  TEACOS
    TEACOS REST Adapter
    TEACOS Infrastructure
      TEACOS Model (Proprietary Cloud-Native)
  ESSIM
    ESSIM REST Adapter
    ESSIM KPI Modules
    ESSIM Infrastructure
      ESSIM Model (Open-Source)
  ETM
    ETM REST Adapter
    ETM Model (Open-Source Cloud-Native)
2.4. Orchestrator configuration¶
Experiments within the MMvIB platform require two components:
Workflow Specification
Experiment Configuration
The workflow is a static definition of what the experiment is about. For the micro use case, this means that it specifies the looping behaviour between TEACOS and ESSIM, as well as calculating the KPIs in the final step.
The configuration on the other hand defines how the experiment should be conducted. For example, which exact model version or end-point to use, how that model should be configured and where the experimental results should be stored.
This division allows for large scale and parallel experimentation by running the same workflow horizontally or vertically over different configurations. Using the Airflow API, parameter spaces can be searched to find optimal solutions to complex multi-model problems by providing robust configurations for the workflow that is being studied.
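To illustrate this division of workflow and configuration, the sketch below expands a hypothetical parameter grid into one dag-run payload per configuration, which could then be posted against the same static workflow. The DAG id, host, credentials and parameter names are invented for illustration.

```python
import itertools

def make_dag_run_payloads(param_grid: dict) -> list[dict]:
    """Expand a parameter grid into one Airflow dagRun payload per combination."""
    keys = sorted(param_grid)
    payloads = []
    for values in itertools.product(*(param_grid[k] for k in keys)):
        payloads.append({"conf": dict(zip(keys, values))})
    return payloads

# Hypothetical parameter space for the same workflow:
grid = {"teacos_version": ["1.2", "1.3"], "essim_iterations": [1, 2]}
payloads = make_dag_run_payloads(grid)  # one payload per combination

# Each payload could then be posted to the Airflow stable REST API, e.g.:
#
# import requests
# for p in payloads:
#     requests.post("http://localhost:8080/api/v1/dags/micro_use_case/dagRuns",
#                   json=p, auth=("airflow", "airflow"))
```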
A graphic representation of the micro use case orchestrator configuration is depicted in Figure 2.
Figure 2. Directed Acyclic Graph in Airflow for a two-iteration micro use case configuration.
3. Results¶
The micro use case multi-model workflow works on a functional level without iterations between TEACOS and ESSIM, but with multi-period aspects. The corresponding multi-model Apache Airflow sequence worked in a TNO-controlled environment, but unfortunately still has issues in the TU Delft environment. Therefore, the ESSIM-TEACOS convergence could not be studied further within the scope of this project. These key results are detailed further in this chapter.
Successful workflow on a functional level
The micro use case multi-model workflow works on a functional level – for a stylised representation of the Tholen business park and without iterating between TEACOS and ESSIM:
We were able to create a scenario in the MapEditor that (albeit stylised, see Figure 3) represented the Tholen business park case that we were trying to model, including optional assets.
This scenario was exported from the MapEditor in the ESDL file format.
ETM added information on electricity pricing to the ESDL file.
The resulting ESDL file could be read and optimised by TEACOS, and a resulting file could be written back in ESDL format, including the decision to set each optional asset to either “ENABLED” or “DISABLED”.
This ESDL file could be picked up by ESSIM and assessed for realistic generation and usage of electricity; the result was written back to an ESDL file, including a KPI parameter on the percentage of electricity that was effectively used and the percentage that would flow back to the grid.
The results could be integrated in an ETM scenario for the municipality of Tholen.
Figure 3. Stylised representation of the Tholen business parks for the micro use case technical proof of concept.
Successful Apache Airflow sequence in a TNO-controlled environment
In principle, this all worked in an automated sequence via the Apache Airflow orchestration software in February 2023. All parties involved contributed the adapters that made the communication possible. TNO performed tests in a TNO-controlled environment and reported that the sequence worked.
On the positive side, the starting of the individual models and the communication between the models via ESDL was proven and working. This in itself is a major result!
No convergence between TEACOS and ESSIM
On the negative side, after the initial tests, and due to limited access to the orchestrator software and output files, it took a long time to check the results for completeness and correctness. Eventually it became clear that the recursive loop between TEACOS and ESSIM, although run multiple times, did not result in the expected behaviour and therefore did not deliver the expected result.
No reproduction in the TU Delft environment
In the period from August to November 2023, significant effort was put into reproducing the initial runs, identifying the problem(s), fixing them, and completing a correct full sequence. Due to several issues this work could unfortunately not be completed before the agreed deadline at the beginning of November 2023. Below is a list of issues that came up in this process, to give an idea of what happened in those three months.
Problems that were encountered by Quo Mare when trying to get the full sequence running in Apache Airflow on the TU Delft environment:
When calling the Directed Acyclic Graph (DAG) in Airflow, this immediately resulted in an error. After getting access, the logfile indicated that the TEACOS adapter was not present in the model registry on the TU Delft environment. TNO added the needed information to the environment.
The needed input files in the TU Delft environment were not in the correct Minio directory. TNO added the correct files at the correct location.
It turned out that TNO did not have sufficient access rights in their SQL credentials. TNO switched to a different account with more access rights.
When all the information was finally there, Quo Mare could investigate the problem, and it turned out that a function was missing in the TEACOS adapter for writing back to ESDL when connected to Minio. This function was added.
It still did not work, and the next issue was found: the ‘Configuration JSON’ was incorrect, with the result that the DAG was called with the wrong configuration.
Then a new issue arose: TEACOS constraint information was not correctly read in from the ESDL file. A predefined maximum capacity of the solar panels was not taken into account, and the resulting optimised capacity exceeded the maximum. This issue was fixed, and as a result it worked locally in the Quo Mare environment, but errors still occurred when called via Airflow.
In general, errors were generated by the adapters of both ETM and ESSIM due to configuration errors in the adapters. TNO eventually managed to solve these problems.
It turned out that ESSIM treats 0 as a default value, meaning that if something has the value 0, no value is written back to the ESDL file. For TEACOS the interpretation is different: 0 means that there is a value (e.g. for a minimum or maximum capacity) and that value is 0; if there is no value, then there is no limit. TEACOS also needed the 0 values to interpret the scenario correctly. This was fixed by adding very specific logic to the TEACOS code.
A general issue was that access to all the different systems and environments was difficult to obtain and took a long time. It was also not clear to which systems access was needed in order to get something running in the orchestrator. Looking at results again required different access rights; it was hurdle after hurdle. Arranging access to Minio was ultimately the problem that took the longest, which left insufficient time to solve all the remaining practical issues.
A change in personnel on both the Quo Mare and TNO sides in August made it extra challenging.
TEACOS-ESSIM results integrated in ETM scenarios
The goal of the Energy Transition Model (‘ETM’) in the micro use case was to simulate the context, in this case the municipality of Tholen and the Netherlands. Two results were produced using the ETM:
The yearly average electricity price based on the (future) installed capacities in the Netherlands
The effect of energy plans on the municipal energy plans (or province or RES-region)
For the first result, the ETM calculates the yearly average electricity price based on the hourly electricity prices of a given scenario. This can be done based on the KEV (‘Klimaat- en Energieverkenning’) for 2030 or the II3050 for 2030 or 2050. However, using the ETM transition path tool these scenarios can be backcast to any given year. These results are used by TEACOS to calculate the optimal energy system configuration.
The second set of results is based on the technology decisions made by TEACOS and ESSIM. These technology decisions (such as the amount of solar PV per building) are aggregated and projected onto the municipal energy system. For solar and wind this means they are simply added to the current solar and wind capacity in the municipality, which is set in MW in the ETM. Other technologies, such as heat pumps in buildings, are set with a percentage slider based on the energy demand served in buildings relative to the total energy demand of buildings in the municipality (see image below for an example). This enables users of the multi-model chain to understand how plans and decisions made by business parks such as Tholen have an impact on municipal plans. It is also possible to use this function for regional or national plans if necessary.
The amount of heat pumps increased based on the EPS results. If we translate this to the municipality, we can see the use of ambient heat increase and the use of natural gas decrease in the future. In this way it is possible to see what effect plans in business parks have on the energy transition plans within a municipality.
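As a minimal illustration of this kind of aggregation (with made-up numbers and a simplified formula, not the actual ETM calculation):

```python
def heat_pump_slider_pct(hp_demand_mwh: float, total_building_demand_mwh: float) -> float:
    """Share of municipal building energy demand served by heat pumps,
    expressed as an ETM-style percentage slider value."""
    return round(100.0 * hp_demand_mwh / total_building_demand_mwh, 1)

def municipal_solar_mw(current_mw: float, added_by_business_park_mw: float) -> float:
    """Solar PV is simply added to the existing municipal capacity (in MW)."""
    return current_mw + added_by_business_park_mw

# Made-up example: 12 GWh of building demand shifted to heat pumps out of
# 80 GWh total building demand; 5 MW of new PV on top of 30 MW existing.
print(heat_pump_slider_pct(12_000, 80_000))  # 15.0
print(municipal_solar_mw(30.0, 5.0))         # 35.0
```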
Multi-period integrated in a local TEACOS setting
The initial setup for the micro use case covered a single time period. It was recognised that there would be value in multi-period aspects, as described in an earlier paragraph. The general setup was to create multiple ESDL files, each reflecting a certain time slice, which combined would deliver a multi-period approach. The ETM and ESSIM adapters could basically still run from a single-period perspective, reading in a single file and doing the calculation. On the TEACOS side, these multiple time periods would have to be taken into account in a single optimisation run. Although TEACOS is multi-period in itself, the reading of the multiple ESDL files and the conversion into a multi-period model needed to be created.
This was all implemented in the TEACOS adapter and TEACOS code, and it worked in a local setting. Unfortunately, we were not able to test it in the orchestrator environment, due to the earlier mentioned problems with getting the single time period model running there.
4. Conclusions and lessons learned¶
Conclusions¶
The micro use case multi model works!
We can conclude that even though multi-modelling is complex, a major step forward towards a multi-model ecosystem was taken in the micro use case:
The micro use case multi-model workflow works on a functional level with 6 (!) different energy models.
Multi-period functionality was implemented on TEACOS side of the multi-model.
The multi-model orchestrator worked in a TNO controlled environment.
Unfortunately, the convergence between TEACOS and ESSIM in the micro use case multi-model could not be further studied within the scope of this project, due to several issues in getting the multi-model to run in the TU Delft IT environment.
Next step: supporting a decision making process
The micro use case multi-model works as a technical proof of concept for a stylized representation of the Tholen business park energy system and scenarios.
After fixing the current IT infrastructure issues, the energy model representation in the multi-model can be extended step by step towards a full representation of the business park energy system and all relevant scenario/technology options.
When this multi-model orchestrator works correctly, its results can be validated in an integral decision process on a long-term investment path towards a sustainable business park. In this way, the end-user value of the multi-model approach is tested in practice.
In the longer term, if the multi-model approach provides end-user value, management and maintenance for the orchestrator should be set up, and its usability should be matched with (potential) user requirements.
Lessons Learned¶
Pioneering work on building a (micro use case) multi-model provided us with valuable lessons learned, which can help follow-up projects. These lessons, coming from the different partners working on the micro use case multi-model, are summed up in this chapter, from both a technical and an organisational perspective.
Technical
Understanding and harmonising different model languages takes time It is vital to know and understand which information a model needs to operate, in order to exchange information in a coherent manner. For example, at first the EPS provided results that could not be interpreted by the ETM, as it regards the energy system differently. This was the case in the built environment: the EPS views a building simply as a building, whilst the ETM needs to know whether a building is a household, utility or industrial building in order to allocate energy demand and technologies correctly according to the EPS results. It takes time to learn to understand each other. Different modellers use a slightly different language and are sometimes not aware of that.
ESDL is a key enabler in multi-modelling ESDL is a good medium for transporting information between different models when talking about energy systems.
However, ESDL is not the solution for all challenges, so additional agreements are required. Understanding which additional agreements are needed also takes time (you need to understand the core ESDL concepts and the reasoning behind ESDL).
Generic multi-modelling is complex Although we were able to make the multi-model work, it is almost certain that with a slightly different energy system with a different topology we would run into problems pretty soon. Nothing unfixable, but up to now that is the case. Making things really generic is very complex.
Multi-modelling is IT-complex The project is quite “IT-complex” and “IT-intense”, maybe more than we realized in advance.
IT environment barriers can cause major delays A lot of time was lost working in the TU Delft environment. All the access rights needed to be arranged from there, while at the same time all the technical knowledge was within TNO. Even people from TNO often did not have the correct access rights to get something working.
QuoMare wanted to get some experience on working with Apache AirFlow but it seemed almost impossible to:
Get access to Apache AirFlow
Get rights to see input files
Get rights to see the correct DAGs
Change the DAGs
Upload the changed DAGs to the correct folder
Run the DAGs
See the correct output files
The individual steps seem small and could eventually be done, but up to the end it was not possible to go through the total sequence without external help, because we did not have the access ourselves.
When getting a multi-model going, it needs to be properly documented what you need to do in order to get access to ALL the relevant systems and to get it running.
Define rules for UoM and default values Most of the problems we encountered with TEACOS (and the time we lost fixing them) had to do with units of measurement (UoM). There are multiple ways in ESDL to specify a UoM with a flow, but all of them depend on text interpretation, and often these UoMs are not specified because some sort of default is assumed. For example, if an investment cost for a PV panel is specified, it may just say “100” with a certain maximum size, say 15 MW. For the interpretation in TEACOS it is not immediately clear whether this 100 is 100 Euro, 100 kEuro, 100 MEuro, or even 100 Euro/MW, 100 kEuro/MW, or 100 MEuro/MW. Even if the UoMs are specified, there are still different ways to interpret the numbers, e.g. 100 MWh can be 100 MWh per day but also 100 MWh per year. A common set of rules of behaviour around the use of UoMs would be valuable.
Similar to the UoM issues, there were problems with default values. If a value is 0, ESSIM will conclude that it is a default and omit it. A value of 0 that is not included has a different interpretation in TEACOS than an included value of 0.
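A minimal sketch of what such rules could look like in adapter code. The conversion table, function names and error handling are illustrative, not part of ESDL or of the actual adapters: the point is to refuse to guess units, and to keep "no value" distinct from an explicit zero.

```python
from typing import Optional

# Conversion factors to a canonical unit (EUR); illustrative subset only.
TO_EUR = {"EUR": 1.0, "kEUR": 1_000.0, "MEUR": 1_000_000.0}

def cost_in_eur(value: float, unit: str) -> float:
    """Refuse to guess: a missing or unknown unit is an error, never a default."""
    if unit not in TO_EUR:
        raise ValueError(f"unknown or missing unit of measure: {unit!r}")
    return value * TO_EUR[unit]

def max_capacity(value: Optional[float]) -> str:
    """Distinguish 'no value' (no limit) from an explicit 0 (a limit of zero)."""
    if value is None:
        return "no limit"
    return f"limit is {value}"

print(cost_in_eur(100, "kEUR"))  # 100000.0
print(max_capacity(None))        # no limit
print(max_capacity(0))           # limit is 0
```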
Organisational
Align expectations Even within the case team there were different expectations of what the result of this exercise would be, ranging from “Can we get this multi-model to work?” to “What are the results that I can show to my customer?”. Part of this is caused by the desire to start with “real” use cases, which raises the expectation level for the people who provide the use case. Quo Mare would be in favour of getting the principle working first and then expanding to real-life cases.
Split the project into a conceptual phase for showing the possibility, and an operational phase where this is expanded to an actual real-life case.
Align available capacities When you are working on a multi-model, there is often interaction with other parties. Other parties have other priorities and availabilities. What is important for us at this moment might not be important for them at this time, if they are available in the first place. As a result, there is often quite some delay over the total scope of work to get it working. Because of the exploratory nature of this project this is understandable. In an operational project, it would put pressure on the timeline if this is not aligned and formalised upfront.
Creating sessions where people are physically together helps. The time that was spent waiting on other parties was enormous. Even with the best intentions from all parties involved.
Experienced developers required Since coupling models is tech-heavy, you need (relatively) experienced developers at the table in order to create sustainable ESDL conversion models and adapters. The process has a significant technical footprint; for companies that are less “IT-minded” this can be a hurdle.
Having the right expertise at the table (and thinking about this beforehand) is vital for a successful and efficient project.
Start doing “Just start doing it, by trial and error (‘met vallen en opstaan’)” seems to be a good approach. Once we stopped talking and started doing, the relevant questions started popping up. “Don’t wait for something to happen.”
Arrange back-up for key project members Personnel changes played a part in delaying the process. It is advisable to have at least two people with a similar knowledge level involved on all sides, so that one person changing roles, being on holiday, or being ill has less impact on the other parties’ ability to continue.
Positive energy helps to stay motivated Having a positive energy in the group helps a lot to move things forward.
Technical documentation¶
IT architecture¶
General concepts¶
A model doesn’t know, and doesn’t need to know, that it’s part of a multi model
A model doesn’t need to be open (source) to become part of a multi model
An external software component will take care of the right order of model execution
Data exchange between models is standardized
High level architecture¶
The following IT architecture has been developed in the project:

The architecture consists of the following components:
the orchestrator: The orchestrator takes care of orchestrating a workflow in which multiple models are executed in a particular order
models: The models perform the real calculations that are required to answer a certain question
model adapters: The model adapters make sure the orchestrator can interact with the models
model registry: The model registry keeps track of which model (adapters) are there and enables the orchestrator to find a model
intermediate model storage: The intermediate model storage is used to store data that goes into a model or comes out of a model. This can be an intermediate or final result.
Component: Orchestrator¶
The orchestrator is a generic software component that can be configured to execute tasks in a certain order.
Component: Model registry¶
The model registry is a small database that is used to keep track of which model adapters are present and what capabilities they have. When a model adapter is started, it registers itself with the model registry, specifying how the adapter can be reached and what its capabilities are (for example whether the model allows concurrent execution). The model registry is used by the orchestrator to locate the relevant models for the current multi model workflow execution.
Component: Model adapters¶
Model adapters provide a generic interface for the orchestrator to communicate with a model. The adapter is responsible for collecting the right input data (by loading it from the intermediate model storage), orchestrating the model execution and collecting the output data from the model and making it available for the next model (by storing it in the intermediate model storage).
Component: Intermediate model storage¶
The intermediate model storage provides storage for input and output data of the models. The orchestrator configuration determines how and where the input data and output data are stored. The adapters are responsible for loading and saving data to and from the intermediate model storage.
Selected software¶
The following software has been selected for the implementation of the multi modeling platform:
orchestrator: Apache Airflow, a platform to programmatically author, schedule and monitor workflows
model registry: specifically developed for this purpose
model adapters: specifically developed for this purpose
intermediate model storage: Minio, an open source S3 compatible object store
See the Installation instructions for more details.
Model adapters¶
General principles¶
Model adapters act as the interface between the orchestrator and the models
Model adapters make every model accessible in the same way
Model adapters all have the same (generic) API which makes it a lot easier at the orchestrator side
Model adapters are responsible for preprocessing of input data and postprocessing of output data
Model adapter lifecycle¶
One of the first things a model adapter needs to do in its initialisation process is register itself with the model registry. This is done by sending an HTTP POST message to the registry API. By registering itself, the model adapter announces its presence and tells where it can be found. The orchestrator uses this information during the setup phase of a new workflow run.
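As an illustration, a registration payload could look like the sketch below. The field names are hypothetical, since the registry's actual schema is not reproduced in this documentation; the registry URL matches the default port used in the installation instructions.

```python
import json

# Hypothetical registration payload; field names are illustrative only.
registration = {
    "name": "teacos-adapter",
    "endpoint": "http://teacos-adapter:9300",
    "capabilities": {"concurrent_execution": False},
    "version": "0.1.0",
}

body = json.dumps(registration)  # serialised body of the HTTP POST

# On startup the adapter would POST this to the registry API, e.g.:
#
# import requests
# requests.post("http://localhost:9200/registry/", json=registration)
```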
…
Model adapter interaction¶
The following picture shows the interaction between the handler (inside Airflow), the adapter and the model.

Model adapter REST API¶
Every adapter should implement the following endpoints:
GET /request
Description: The lifecycle is started by the model handler with the /request message, which the adapter can use to set things up and reserve resources for this specific model run.
Returns:
model_run_id: a unique identifier generated by the adapter which the handler can later use to refer to this model run
state: ACCEPTED, PENDING, QUEUED, ERROR
reason: description in case of state ERROR
POST /initialize/<model_run_id>
Description: Call to initialize the model.
Body contents: JSON containing model specific configuration parameters, settings, input values, references to file locations, …
Returns:
model_run_id
state: READY, ERROR
reason: description in case of state ERROR
GET /run/<model_run_id>
Description: Call to start the model run.
Returns:
model_run_id
state: RUNNING, …, ERROR
reason: description in case of state ERROR
GET /status/<model_run_id>
Description: Call to retrieve the status of the model.
Returns:
model_run_id
state: ACCEPTED, PENDING, QUEUED, RUNNING, SUCCEEDED, ERROR
reason: description in case of state ERROR
GET /results/<model_run_id>
Description: Call to retrieve the results from the model run.
Returns:
model_run_id
result: JSON with results from the model
state: READY, ERROR
reason: description in case of state ERROR
GET /remove/<model_run_id>
Description: Call to free all reserved resources and clean up memory. After this call, the orchestrator will not ask for information about this run again.
Returns:
model_run_id
state: UNKNOWN (initial state), ERROR
reason: description in case of state ERROR
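The endpoint lifecycle above can be sketched as a small in-memory state machine. A real adapter would expose these methods as the HTTP endpoints listed here and would actually launch the model (usually asynchronously); in this illustrative version the run simply succeeds immediately, and error states other than "unknown run" are omitted.

```python
import uuid

class AdapterLifecycle:
    """In-memory sketch of the state machine behind the adapter REST API."""

    def __init__(self):
        self._runs = {}     # model_run_id -> state
        self._configs = {}  # model_run_id -> configuration JSON

    def request(self):
        # /request: reserve resources and hand out a unique run id.
        run_id = str(uuid.uuid4())
        self._runs[run_id] = "ACCEPTED"
        return {"model_run_id": run_id, "state": "ACCEPTED"}

    def initialize(self, run_id, config):
        # /initialize: store model-specific parameters, settings, file references.
        if run_id not in self._runs:
            return {"model_run_id": run_id, "state": "ERROR", "reason": "unknown run"}
        self._configs[run_id] = config
        self._runs[run_id] = "READY"
        return {"model_run_id": run_id, "state": "READY"}

    def run(self, run_id):
        # /run: a real adapter would start the model asynchronously here.
        self._runs[run_id] = "SUCCEEDED"
        return {"model_run_id": run_id, "state": "RUNNING"}

    def status(self, run_id):
        # /status: report the current state of the run.
        return {"model_run_id": run_id, "state": self._runs.get(run_id, "ERROR")}

    def results(self, run_id):
        # /results: a real adapter would return model output (or storage references).
        return {"model_run_id": run_id, "result": {}, "state": "READY"}

    def remove(self, run_id):
        # /remove: free all reserved resources; the run id is forgotten.
        self._runs.pop(run_id, None)
        self._configs.pop(run_id, None)
        return {"model_run_id": run_id, "state": "UNKNOWN"}
```

The handler in Airflow would drive this sequence: request, initialize, run, then poll status until SUCCEEDED (or ERROR), fetch results, and finally remove.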
Orchestrator configuration¶
Installation instructions¶
Prerequisites¶
This software stack is designed to run on docker, using docker-compose on Linux. The minimum versions required are listed in the table below.
Software | Minimum version
---|---
Docker Engine | 20.10.22
Docker Compose | 2.14.1
Python | 3.11
Apache Airflow | 2.6.3
It is known to be working on Ubuntu 22.10 and RHEL 8.7.
For the installation of Docker Engine and Compose, you can follow: https://docs.docker.com/engine/install/
Development with Python 3.11¶
Components of MMviB run in containers. These containers are configured to use Python 3.11. You do not need to install Python on your system if you are going to run all components with Docker Compose.
If you need to run components that require Python on your bare metal host for development purposes, you need to use Python 3.11. Older versions might work, but it is not guaranteed.
To install Python 3.11 on Ubuntu systems that do not have 3.11 in the main repository (22.04 and older):
sudo add-apt-repository ppa:deadsnakes/ppa
sudo apt-get update
sudo apt install python3.11
Using a different venv for each component is recommended.
Github repositories¶
All software for this project is hosted on Github.
This page lists the available repositories, at the moment of writing of this documentation.
Generic repositories¶
Model-Orchestrator: For setting up Apache Airflow as the orchestrator
Database-MinIO: For setting up Minio as the intermediate model storage
Model-Registry: a simple model registry implementation
Model-Deployment: Some scripts to automatically deploy multi-model stacks
Model-Repository: contains example workflow configurations for the three use cases considered in the MMvIB project
Documentation: the source of this documentation page
Model adapter repositories¶
Adapter-CTM: The CTM adapter for the MMvIB project
Adapter-ESSIM: The ESSIM adapter for the MMvIB project
Adapter-ETM-KPIs: The ETM adapter for the MMvIB project
Adapter-Regionalization: Adapter for the regionalization module
moter-adapter: The MOTER adapter for the MMvIB project
teacos-adapter: The TEACOS adapter for the MMvIB project
AIMMS based model adapter template repository¶
Repositories for simple adapters used for initial testing¶
Adapter-ESDL-Add-Price-Profile: Demo adapter that adds a price profile to an ESDL energy system. This is a ‘special’ adapter as it doesn’t call any external model, all logic is implemented by the adapter itself.
Adapter-ETM-Price-Profile: Demo adapter that retrieves and stores an electricity price profile from the ETM.
AIMMS-ESDL: Generic repository with code to create a SQL database out of an ESDL file that can be used by an AIMMS based model. This code is used by the MOTER, Opera and TEACOS adapter
Installing Apache Airflow¶
Clone the repo and change directory to cloned repo:
git clone https://github.com/MultiModelling/Model-Orchestrator.git
cd Model-Orchestrator
Create a .env file for overriding the default username and password, with the following contents:
_AIRFLOW_WWW_USER_USERNAME=airflow
_AIRFLOW_WWW_USER_PASSWORD=airflow
Run Airflow in the background with the following command:
AIRFLOW_UID=$(id -u) docker compose up -d
Now it should be accessible at http://localhost:8080. You can log in using the credentials defined in the .env file.
After login, you will see a screen with example DAGs created for MMviB. To be able to successfully trigger them, Minio and Model Adapters are required.
Installing Minio¶
Clone the repo and change directory to cloned repo:
git clone https://github.com/MultiModelling/Database-MinIO.git
cd Database-MinIO
Create a .env file for overriding the default username and password:
MINIO_ROOT_USER=admin
MINIO_ROOT_PASSWORD=password
Run Minio in the background with the following command:
docker compose up -d
Now it should be accessible at http://localhost:9090. You can access Minio using the credentials defined in the .env file.
Now you can create a bucket called test by following the information in the next section.
Buckets¶
Later on, you can create, view or delete buckets via the Buckets page, which can be accessed via the left side of Minio’s dashboard, or at http://localhost:9090/buckets.
Buckets are used by your DAGs and adapters to store input/output files used/created by models in a pipeline.
The inside of a bucket is organised like a folder structure.
You can alter the contents of a bucket via the Object Browser (in the menu), or at http://localhost:9090/browser.
Installing Model Registry¶
Clone the repo and change directory to cloned repo:
git clone https://github.com/MultiModelling/Model-Registry.git
cd Model-Registry
To use the memorydb implemented within the registry instead of an external PostgreSQL instance, set the DB_TYPE option to memorydb in the .env.docker file.
Alternatively, you can set up a PostgreSQL instance and edit the .env.docker file accordingly.
After preparing your .env.docker file, you can use the following command to run the Model Registry in the background:
docker compose up -d
To list registered model adapters, you can use:
curl localhost:9200/registry/ | jq '.[]'
Initially this will return nothing, as a newly launched registry is empty. You can run this again after you deploy new adapters in the following sections.
Installing Adapters¶
This section lists the installation steps for the adapters used in MMviB. Instructions for the first adapter (TEACOS) are given in more detail. Since the instructions are similar, the installation instructions for the subsequent adapters only include brief descriptions followed by commands and configs.
TEACOS¶
Clone the repo and change directory to cloned repo:
git clone https://github.com/MultiModelling/teacos-adapter.git
cd teacos-adapter
Edit the .env.docker file to add the Minio username and password you used while installing Minio. Alternatively, you can create an access key for this adapter and use it.
#username
MINIO_ACCESS_KEY=fill_in
#password
MINIO_SECRET_KEY=fill_in
To let your adapter establish a connection to TEACOS, you should provide the following values in the .env.docker file:
TEACOS_API_URL=fill_in
TEACOS_USER=fill_in
TEACOS_ENV=fill_in
TEACOS_PASSWORD=fill_in
And the following values are for the database instance that is going to be accessed by TEACOS:
DATABASE_HOST=fill_in
DATABASE_NAME=fill_in
DATABASE_USER=fill_in
DATABASE_PASSWORD=fill_in
Finally, execute the following to run the adapter in the background:
docker-compose up -d
To check if the adapter is registered in the Model registry, use the following command:
curl localhost:9200/registry/ | jq '.[]'
ESSIM¶
Clone:
git clone https://github.com/MultiModelling/Adapter-ESSIM.git
cd Adapter-ESSIM
Edit .env.docker:
MINIO_ACCESS_KEY=admin
MINIO_SECRET_KEY=password
Run:
docker-compose up -d
ETM-KPIs¶
Clone:
git clone https://github.com/MultiModelling/Adapter-ETM-KPIs.git
cd Adapter-ETM-KPIs
Edit .env.docker:
MINIO_ACCESS_KEY=admin
MINIO_SECRET_KEY=password
Run:
docker-compose up -d
Adapter-ConnectInfra¶
Clone:
git clone https://github.com/MultiModelling/Adapter-ConnectInfra.git
cd Adapter-ConnectInfra
Edit .env.docker:
MINIO_ACCESS_KEY=admin
MINIO_SECRET_KEY=password
Run:
docker-compose up -d
Adapter-Regionalization¶
Clone:
git clone https://github.com/MultiModelling/Adapter-Regionalization.git
cd Adapter-Regionalization
Edit .env.docker:
MINIO_ACCESS_KEY=admin
MINIO_SECRET_KEY=password
Run:
docker-compose up -d
Opera¶
This adapter requires Windows to run.
MOTER¶
This adapter requires Windows to run.
Terminology¶
Last updated: October 18, 2023, Yilin Huang, Delft University of Technology
This document provides an overview of the terminology used in the MMviB project. The listed terms primarily consist of those commonly used and defined in simulation modelling literature. Some terms are new and specific to MMviB. These are derived from literature and based on common modelling practices, if applicable, and are the outcomes of collaborative design and development within the project.
Strategic, Tactical, and Operational goals
Model, Simulation model, and Calculation model
Model parameters and Model inputs
Multi-model and Multi-modelling
Multi-model experimental setup
Workflow task and Model adapter
Model orchestrator and Model orchestration
Model verification and Model validation
Experiment, Experimentation and Experimental frame
List of Terms in Alphabetical order
List of Terms in Thematic Order¶
Strategic, Tactical, and Operational goals¶
 | Main Question | Planning Horizon | Scope
---|---|---|---
Strategic goals | What do we want | Long-term | Broadest
Tactical goals | How do we approach this | Medium-term | Medium
Operational goals | How do we plan day-to-day operations | Short-term | Least broad
Strategic goals have a long-term planning horizon. They deal with the main question of “what do we want”. These goals have the broadest scope compared to tactical and operational goals. In terms of cascading effect, strategic plans are cascaded to tactical plans, and subsequently to operational plans.
Tactical goals have a medium-term planning horizon. They deal with the main question of “how do we approach this” where this refers to a given strategic goal.
Operational goals have a short-term planning horizon. They deal with the main question of “how do we plan day-to-day operations”. In terms of scope, these goals are the least broad. The achievement of operational goals leads to the achievement of tactical goals, which leads to the achievement of strategic goals.
Model, Simulation model, and Calculation model¶
A model is an abstraction of a system intended to replicate some properties of that system. This means that a model needs to possess three features. (1) Mapping feature. A model is based on an original system, existing or non-existing. We may call the original system a source system or a referent. (2) Reduction feature. A model only reflects a relevant selection of an original system’s properties. (3) Pragmatic feature. A model needs to be usable in place of an original system with respect to some purpose.
A (computational) simulation model is a piece of software that has a set of instructions that defines rules and constraints, among others, for generating input-to-output (I/O) behaviour of the model.
Simulation is often used to imitate the operation of a real system by executing a model of that system over time. In this case, the model is executed (a.k.a. simulated) iteratively with changing model states by advancing a time axis (a.k.a. time-stepping). Simulation models of this kind are dynamic, as opposed to static models, which do not simulate how a system changes over time. A (computational) static model is not computed with time-stepping.
A calculation model refers to a computational model that makes numerical calculations. A calculation model is a static model.
Distributed simulation¶
Distributed simulation is executed on distributed computer systems, namely systems composed of multiple interconnected computers.
Source system¶
A source system is also known as an original system, a target system, a system referent, a system of interest, etc. It refers to any system that is under modelling interest. This can be, for example, a natural system, an engineered system, a social system, etc., and a combination of any of these systems.
Abstraction¶
Abstraction is a process of modelling that focuses on a source system and simplifies it by selecting a set of quantities and relationships that represent that source system given a modelling purpose. The validity of an abstraction is considered in relation to the modelling purpose and the experimental frame of a model.
Scope and Scale¶
Scope refers to the fact that a modelled set of diverse elements or concepts is a (selected) subset of those in a source system. This modelled set of elements or concepts can represent various aspects, phenomena, ideas, or any subject matters in a source system that are deemed relevant given a modelling purpose.
For example, a wind farm model or a solar farm model can have a scope that includes energy, heat, electricity, economics, finances, and weather conditions. Furthermore, the model may or may not include the change of weather conditions. In this case, we say that the change of weather conditions is within or out of the scope of the model.
Scale is the range (or sometimes extent or dimension) of the elements or concepts of a model representing a source system.
Scale, in general, implies a mapping relation from a model to its source system. This relation characterizes the range, extent, or dimension captured by the model given a modelling purpose.
For example, a wind farm model may simulate the wind energy generation from all wind farms in the Netherlands for the next 10 years. In this case, we say that the geographical (or spatial) scale of the model is the Netherlands, and the time scale of the model is 10 years.
Scale is often deemed as being temporal or spatial, but it is not limited to these two types. It can also be defined with respect to objects, processes, or any other subject matters in a source system. For example, a model of a biological system may be at a scale of cell, tissue, organ or beyond.
Granularity and Resolution¶
Granularity refers to the level of detail at which a model represents a source system. It is a property belonging to a model, and is often reflected by the number of variables and the complexity of the relations among variables in the model.
Example 1: a wind farm model that simulates wind energy generation of all wind farms in the Netherlands may represent each wind farm individually with different characteristics. In this case, the granularity of this model is higher or finer than a model that would represent all Dutch wind farms in an aggregated manner.
Example 2: a wind farm model that simulates wind energy generation for the next 10 years may calculate energy generation at yearly, monthly, weekly, daily, or hourly intervals. These are different temporal granularities on a time scale of 10 years.
Granularity can be structural (a.k.a. compositional) or atomic. Structural (or compositional) granularity is characterized by the number of model components and their relations within a composite model. Atomic granularity is characterized by the information details, i.e., the number of variables and their relations, within a non-compositional model.
Resolution typically refers to atomic (non-compositional) granularity, a.k.a. data granularity or data resolution.
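As a minimal sketch of Example 2 above (illustrative Python, not MMviB code), the same generation data on a one-year scale can be represented at an hourly or a daily temporal granularity:

```python
# Illustrative only: one year of hourly generation values (assumed
# constant at 1.0 MWh for simplicity), aggregated to daily granularity.
hours_per_year = 365 * 24
hourly = [1.0] * hours_per_year  # fine (hourly) temporal granularity

# Coarser (daily) granularity: aggregate each block of 24 hourly values.
daily = [sum(hourly[i:i + 24]) for i in range(0, hours_per_year, 24)]

# Same scale (one year), two granularities: 8760 values vs 365 values.
```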
Model parameters and Model inputs¶
Model parameters are constants that define the relationships among the variables in a model. Once set, the value of a model parameter does not change during one simulation run.
“The distinction between these [variables and parameters] is not always clear cut, and it frequently depends on the context in which the variables appear. Usually a model is designed to explain the relationships that exist among quantities which can be measured independently in an experiment; these are the variables of the model. To formulate these relationships, however, one frequently introduces ‘constants’ which stand for inherent properties of nature (or of the materials and equipment used in a given experiment). These are the parameters.” Bard, Yonathan (1974). Nonlinear Parameter Estimation. New York: Academic Press. p.11.
For example, consider a simple model y=f(x) where f(x)=ax+b. Here, x is the model input variable and y is the model output variable. The function f(x) defines the input-to-output relation, in which a and b are the (constant) model parameters.
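This distinction can be made concrete with a small Python sketch (illustrative, not MMviB code): the parameters a and b are fixed when the model is configured, while the input x varies per run:

```python
# Hypothetical illustration of model parameters (a, b) vs the model
# input variable (x) for the simple model y = a*x + b.
def make_model(a, b):
    """Fix the parameters once; they do not change during a run."""
    def f(x):  # x is the input variable, y the output variable
        return a * x + b
    return f

model = make_model(a=2.0, b=1.0)  # configure the parameters
y = model(x=3.0)                  # supply an input, obtain the output
# y == 2.0 * 3.0 + 1.0 == 7.0
```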
The term model inputs is often used loosely. It may refer to model input variables, model input data, or both. Model input variables refer to a model’s independent variables. Model input data are used to configure a model’s independent variables and sometimes also model parameters.
In MMviB, model inputs can consist of static data and dynamic data.
Static data and Dynamic data¶
Static data are used to configure the independent variables, sometimes also parameters, in a model. They typically determine the boundary conditions and other initial conditions of a model. For instance, the placement of buildings, cables, and pipelines.
Static data are used for model configuration before the start of a simulation run. They are not used for model configuration during a simulation run.
Dynamic data are generated by the single (stand-alone) models in a multi-model workflow. In the MMviB project, both the (intermediate) outputs of the single models and the (final) outputs of a multi-model are deemed dynamic data.
Note that the dynamic output data of a single model often become the dynamic input data of another (coupled) single model in a multi-model workflow. Dynamic data do not exist before a simulation run.
Multi-model and Multi-modelling¶
In MMviB, a multi-model is an (ensemble) model that consists of two or more single (independent) models that can interoperate to produce meaningful experimental outputs given a predefined modelling purpose.
In MMviB, multi-modelling refers to multi-model constitution as well as multi-model experimentation.
Multi-model infrastructure¶
In MMviB, multi-model infrastructure refers to all facilitating services (including software and methods) that enable multi-modelling. The multi-model infrastructure does not include the individual independent models themselves.
Multi-model constitution¶
In MMviB, multi-model constitution refers to design-time processes (and activities) of multi-model composition (including the workflow design) prior to multi-model experimentation.
This includes, e.g., the selection of plausible single models, the definition of data exchange methods and sequences, the adaptation required thereof, among others, with respect to a given modelling purpose.
Multi-model experimental setup¶
A multi-model experimental setup describes what is required to conduct a multi-model experiment. It consists of (1) a multi-model workflow (and workflow parameters), and (2) a multi-model configuration.
Multi-model workflow¶
A multi-model workflow defines a sequence of tasks (and thereby the sequence of individual model runs and the corresponding dynamic data flow) through which a multi-model experiment can be conducted from initialization to completion.
Multi-model configuration¶
A multi-model configuration defines a set of data (via static data) to set up a multi-model experiment, with respect to an experimental goal. A multi-model configuration is associated with a given multi-model workflow.
Workflow task and Model adapter¶
In MMviB, a workflow task calls a model (run), via a model adapter, and (if applicable) passes on references to model inputs. An orchestrator calls a workflow task, waits for the model run to complete, and collects a reference to the corresponding model output (i.e., dynamic data).
In MMviB, a model adapter is designed for a specific model with respect to model orchestration. A model adapter is responsible for the configuration and execution of a model run, and for collecting the corresponding model output.
A multi-model workflow task calls a model adapter, providing references to model inputs.
Model orchestrator and Model orchestration¶
In MMviB, a model orchestrator is responsible for model orchestration. An orchestrator controls a multi-model workflow that runs defined workflow tasks.
In MMviB, model orchestration refers to the overall management and automation of a multi-model experiment.
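The relation between orchestrator, workflow tasks, and model adapters described above can be sketched as follows. This is a hypothetical Python illustration; the names (Adapter, orchestrate) are ours, not the MMviB API:

```python
# Hypothetical sketch (not the MMviB implementation) of an orchestrator
# running workflow tasks, each of which calls a model via an adapter.
class Adapter:
    """Configures and executes one model run; returns an output reference."""
    def __init__(self, model_fn):
        self.model_fn = model_fn

    def run(self, input_ref, store):
        output = self.model_fn(store[input_ref])  # execute the model
        out_ref = f"out:{id(output)}"
        store[out_ref] = output                   # collect the model output
        return out_ref                            # reference to dynamic data

def orchestrate(tasks, initial_ref, store):
    """Run workflow tasks in sequence; each output feeds the next task."""
    ref = initial_ref
    for adapter in tasks:
        ref = adapter.run(ref, store)             # wait for completion
    return ref                                    # reference to final output

# Usage: two coupled single models forming a tiny multi-model workflow.
store = {"in:demand": 10.0}                       # static input data
tasks = [Adapter(lambda d: d * 2), Adapter(lambda d: d + 1)]
final_ref = orchestrate(tasks, "in:demand", store)
# store[final_ref] == 21.0
```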
Model verification and Model validation¶
Model verification addresses the main question of “Did we build the model right?” It is the process of determining if an implemented model is consistent with the model specification.
Model validation addresses the main question of “Did we build the right model?” It is the process of establishing that the behaviours of the model and the source system agree in the frame in question, corresponding to the modelling purposes and the experimental frame.
Scenario and Scenario space¶
In general, a scenario is the description of one (possible) situation (including actions, events, etc.) that exists or could exist (in the past, at present, or in the future). In modelling and simulation, we refer to a single (configured) model setting as a modelling scenario. Ideally, a simulation scenario (definition) is platform- and model-independent. This means one scenario may be simulated by different models, each of which may have a platform- and model-specific setting that is necessary to run the experiments specific to that model.
For example, the four scenarios in the II3050 scenario space are the Europese, Internationale, Nationale, and Regionale sturing (in Dutch), each of which specifies a projection for future gas and electricity price profiles. An individual scenario goal might therefore be to identify the influence of the different price profiles on energy usage.
A scenario space consists of a (often large) set of scenarios that are guided by a modelling goal. An individual scenario goal is informed by a distinct set of (past, current, and/or future) ideals, conditions, and/or constraints, among others.
For example, the II3050 scenario space contains a set of four scenarios that provide a range of projections for future energy prices.
Experiment, Experimentation and Experimental frame¶
In general, a (scientific) experiment is a procedure that is driven by an experimental goal, to make a discovery, test a hypothesis, or demonstrate a known fact. A simulation experiment serves the same purpose, with a model in place of the real system.
In MMviB, a (simulation) experimental goal guides one multi-model experimental setup as well as the selection of (multi-model) output metrics and KPIs.
An experimental goal can be, e.g., to calculate the gas and electricity usage given the price profile specified by a scenario. One scenario can form a basis for multiple experiments, e.g., with different (multi-) model configurations. This means one simulation scenario can have multiple simulation experiments.
In modelling and simulation, one experiment refers to one (multi-) model run (a.k.a. one simulation run) of a deterministic model, or replication runs (a.k.a. replications, i.e., repeated runs with different random seeds) in the case of a stochastic model, where the model has a fixed configuration of parameter and input settings. This means an experiment is scenario- and model-specific.
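A minimal Python sketch of this (illustrative, not MMviB code): one experiment on a stochastic model consists of several replications, each with its own random seed, under one fixed configuration:

```python
import random

# Hypothetical stochastic model: output varies randomly around the input.
def stochastic_model(demand, noise_scale, rng):
    return demand * (1.0 + rng.uniform(-noise_scale, noise_scale))

def run_experiment(demand, noise_scale, n_replications):
    """One experiment: replication runs under a fixed configuration."""
    outputs = []
    for seed in range(n_replications):       # one replication per seed
        rng = random.Random(seed)
        outputs.append(stochastic_model(demand, noise_scale, rng))
    return sum(outputs) / len(outputs)       # report the mean output

mean_output = run_experiment(demand=10.0, noise_scale=0.1, n_replications=100)
```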
Experimentation is a general term that refers to conducting experiments in a collective sense. It is the activity of conducting different experiments driven by different experimental goals.
An experimental frame is a term used initially by Zeigler (1976) to formally describe a model’s context with the goal of providing reproducible experiment descriptions. It specifies the conditions under which the modelled system is observed and experimented with.
Uncertainty analysis¶
Uncertainty analysis in modelling and simulation refers to the process of understanding how uncertainty in model parameters, model inputs, and model structure affects the model output.
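A common approach is Monte Carlo sampling. The sketch below (illustrative Python, not MMviB code) propagates uncertainty in a single parameter a through the simple model y = a·x:

```python
import random

# Monte Carlo uncertainty analysis (hypothetical example): sample the
# uncertain parameter a and observe the induced spread in the output.
def model(x, a):
    return a * x                              # simple calculation model

rng = random.Random(42)
x = 3.0                                       # fixed model input
# Assume a is normally distributed with mean 2.0 and std. dev. 0.1.
samples = [model(x, rng.gauss(2.0, 0.1)) for _ in range(1000)]

mean = sum(samples) / len(samples)            # output mean, near 6.0
spread = max(samples) - min(samples)          # output uncertainty range
```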
List of Terms in Alphabetical order¶
Abstraction |
Abstraction is a process of modelling that focuses on a source system and simplifies it by selecting a set of quantities and relationships that represent that source system given a modelling purpose. The validity of an abstraction is considered in relation to the modelling purpose and the experimental frame of a model. |
Calculation model |
A calculation model refers to a computational model that makes numerical calculations. A calculation model is a static model. |
Distributed simulation |
Distributed simulation is executed on distributed computer systems, namely systems composed of multiple interconnected computers. |
Dynamic data |
Dynamic data are generated by the single (stand-alone) models in a multi-model workflow. In the MMviB project, both the (intermediate) outputs of the single models and the (final) outputs of a multi-model are deemed dynamic data. Note that the dynamic output data of a single model often become the dynamic input data of another (coupled) single model in a multi-model workflow. Dynamic data do not exist before a simulation run. |
Experiment |
In general, a (scientific) experiment is a procedure that is driven by an experimental goal, to make a discovery, test a hypothesis, or demonstrate a known fact. A simulation experiment serves the same purpose, with a model in place of the real system. In MMviB, a (simulation) experimental goal guides one multi-model experimental setup as well as the selection of (multi-model) output metrics and KPIs. An experimental goal can be, e.g., to calculate the gas and electricity usage given the price profile specified by a scenario. One scenario can form a basis for multiple experiments, e.g., with different (multi-) model configurations. This means one simulation scenario can have multiple simulation experiments. In modelling and simulation, one experiment refers to one (multi-) model run (a.k.a. one simulation run) of a deterministic model, or replication runs (a.k.a. replications, i.e., repeated runs with different random seeds) in the case of a stochastic model, where the model has a fixed configuration of parameter and input settings. This means an experiment is scenario- and model-specific. |
Experimental frame |
An experimental frame is a term used initially by Zeigler (1976) to formally describe a model’s context with the goal of providing reproducible experiment descriptions. It specifies the conditions under which the modelled system is observed and experimented with. |
Experimentation |
Experimentation is a general term that refers to conducting experiments in a collective sense. It is the activity of conducting different experiments driven by different experimental goals. |
Granularity |
Granularity refers to the level of detail at which a model represents a source system. It is a property belonging to a model, and is often reflected by the number of variables and the complexity of the relations among variables in the model. Example 1: a wind farm model that simulates wind energy generation of all wind farms in the Netherlands may represent each wind farm individually with different characteristics. In this case, the granularity of this model is higher or finer than a model that would represent all Dutch wind farms in an aggregated manner. Example 2: a wind farm model that simulates wind energy generation for the next 10 years may calculate energy generation at yearly, monthly, weekly, daily, or hourly intervals. These are different temporal granularities on a time scale of 10 years. Granularity can be structural (a.k.a. compositional) or atomic. Structural (or compositional) granularity is characterized by the number of model components and their relations within a composite model. Atomic granularity is characterized by the information details, i.e., the number of variables and their relations, within a non-compositional model. |
Model adapter |
In MMviB, a model adapter is designed for a specific model with respect to model orchestration. A model adapter is responsible for the configuration and execution of a model run, and for collecting the corresponding model output. A multi-model workflow task calls a model adapter, providing references to model inputs. |
Model inputs |
The term model inputs is used loosely by modelling practitioners. It may refer to model input variables, model input data, or both. Model input variables refer to a model’s independent variables. Model input data are used to configure a model’s independent variables and sometimes also model parameters. In MMviB, model inputs can consist of static data and dynamic data. |
Model orchestration |
In MMviB, model orchestration refers to the overall management and automation of a multi-model experiment. |
Model orchestrator |
In MMviB, a model orchestrator is responsible for model orchestration. An orchestrator controls a multi-model workflow that runs defined workflow tasks. |
Model parameters |
Model parameters are constants that define the relationships among the variables in a model. Once set, the value of a model parameter does not change during one simulation run. “The distinction between these [variables and parameters] is not always clear cut, and it frequently depends on the context in which the variables appear. Usually a model is designed to explain the relationships that exist among quantities which can be measured independently in an experiment; these are the variables of the model. To formulate these relationships, however, one frequently introduces ‘constants’ which stand for inherent properties of nature (or of the materials and equipment used in a given experiment). These are the parameters.” Bard, Yonathan (1974). Nonlinear Parameter Estimation. New York: Academic Press. p.11. For example, consider a simple model y=f(x) where f(x)=ax+b. Here, x is the model input variable and y is the model output variable. The function f(x) defines the input-to-output relation, in which a and b are the (constant) model parameters. |
Model validation |
Model validation addresses the main question of “Did we build the right model?” It is the process of establishing that the behaviours of the model and the source system agree in the frame in question, corresponding to the modelling purposes and the experimental frame. |
Model verification |
Model verification addresses the main question of “Did we build the model right?” It is the process of determining if an implemented model is consistent with the model specification. |
Model |
A model is an abstraction of a system intended to replicate some properties of that system. This means that a model needs to possess three features. (1) Mapping feature. A model is based on an original system, existing or non-existing. We may call the original system a source system or a referent. (2) Reduction feature. A model only reflects a relevant selection of an original system’s properties. (3) Pragmatic feature. A model needs to be usable in place of an original system with respect to some purpose. |
Multi-model configuration |
A multi-model configuration defines a set of data (via static data) to set up a multi-model experiment, with respect to an experimental goal. A multi-model configuration is associated with a given multi-model workflow. |
Multi-model constitution |
In MMviB, multi-model constitution refers to design-time processes (and activities) of multi-model composition (including the workflow design) prior to multi-model experimentation. This includes, e.g., the selection of plausible single models, the definition of data exchange methods and sequences, the adaptation required thereof, among others, with respect to a given modelling purpose. |
Multi-model experimental setup |
A multi-model experimental setup describes what is required to conduct a multi-model experiment. It consists of (1) a multi-model workflow (and workflow parameters), and (2) a multi-model configuration. |
Multi-model infrastructure |
In MMviB, multi-model infrastructure refers to all facilitating services (including software and methods) that enable multi-modelling. The multi-model infrastructure does not include the individual independent models themselves. |
Multi-model workflow |
A multi-model workflow defines a sequence of tasks (and thereby the sequence of individual model runs and the corresponding dynamic data flow) through which a multi-model experiment can be conducted from initialization to completion. |
Multi-model |
In MMviB, a multi-model is an (ensemble) model that consists of two or more single (independent) models that can interoperate to produce meaningful experimental outputs given a predefined modelling purpose. |
Multi-modelling |
In MMviB, multi-modelling refers to multi-model constitution as well as multi-model experimentation. |
Operational goals |
Operational goals have a short-term planning horizon. They deal with the main question of “how do we plan day-to-day operations”. In terms of scope, these goals are the least broad. The achievement of operational goals leads to the achievement of tactical goals, which leads to the achievement of strategic goals. |
Resolution |
Resolution typically refers to atomic (non-compositional) granularity, a.k.a. data granularity or data resolution. |
Scale |
Scale is the range (or sometimes extent or dimension) of the elements or concepts of a model representing a source system. Scale, in general, implies a mapping relation from a model to its source system. This relation characterizes the range, extent, or dimension captured by the model given a modelling purpose. For example, a wind farm model may simulate the wind energy generation from all wind farms in the Netherlands for the next 10 years. In this case, we say that the geographical (or spatial) scale of the model is the Netherlands, and the time scale of the model is 10 years. Scale is often deemed as being temporal or spatial, but it is not limited to these two types. It can also be defined with respect to objects, processes, or any other subject matters in a source system. For example, a model of a biological system may be at a scale of cell, tissue, organ or beyond. |
Scenario space |
A scenario space consists of a (often large) set of scenarios that are guided by a modelling goal. An individual scenario goal is informed by a distinct set of (past, current, and/or future) ideals, conditions, and/or constraints, among others. For example, the II3050 scenario space contains a set of four scenarios that provide a range of projections for future energy prices. |
Scenario |
In general, a scenario is the description of one (possible) situation (including actions, events, etc.) that exists or could exist (in the past, at present, or in the future). In modelling and simulation, we refer to a single (configured) model setting as a modelling scenario. Ideally, a simulation scenario (definition) is platform- and model-independent. This means one scenario may be simulated by different models, each of which may have a platform- and model-specific setting that is necessary to run the experiments specific to that model. For example, the four scenarios in the II3050 scenario space are the Europese, Internationale, Nationale, and Regionale sturing (in Dutch), each of which specifies a projection for future gas and electricity price profiles. An individual scenario goal might therefore be to identify the influence of the different price profiles on energy usage. |
Scope |
Scope refers to the fact that a modelled set of diverse elements or concepts is a (selected) subset of those in a source system. This modelled set of elements or concepts can represent various aspects, phenomena, ideas, or any subject matters in a source system that are deemed relevant given a modelling purpose. For example, a wind farm model or a solar farm model can have a scope that includes energy, heat, electricity, economics, finances, and weather conditions. Furthermore, the model may or may not include the change of weather conditions. In this case, we say that the change of weather conditions is within or out of the scope of the model. |
Simulation model |
A (computational) simulation model is a piece of software that has a set of instructions that defines rules and constraints, among others, for generating input-to-output (I/O) behaviour of the model. Simulation is often used to imitate the operation of a real system by executing a model of that system over time. In this case, the model is executed (a.k.a. simulated) iteratively with changing model states by advancing a time axis (a.k.a. time-stepping). Simulation models of this kind are dynamic, as opposed to static models, which do not simulate how a system changes over time. A (computational) static model is not computed with time-stepping. |
Source system |
A source system is also known as an original system, a target system, a system referent, a system of interest, etc. It refers to any system that is under modelling interest. This can be, for example, a natural system, an engineered system, a social system, etc., and a combination of any of these systems. |
Static data |
Static data are used to configure the independent variables, sometimes also parameters, in a model. They typically determine the boundary conditions and other initial conditions of a model. For instance, the placement of buildings, cables, and pipelines. Static data are used for model configuration before the start of a simulation run. They are not used for model configuration during a simulation run. |
Strategic goals |
Strategic goals have a long-term planning horizon. They deal with the main question of “what do we want”. These goals have the broadest scope compared to tactical and operational goals. In terms of cascading effect, strategic plans are cascaded to tactical plans, and subsequently to operational plans. |
Tactical goals |
Tactical goals have a medium-term planning horizon. They deal with the main question of “how do we approach this” where this refers to a given strategic goal. |
Uncertainty analysis |
The process of understanding how uncertainty in model parameters, model inputs, and model structure affects the model output. |
Workflow task |
In MMviB, a workflow task calls a model (run), via a model adapter, and (if applicable) passes on references to model inputs. An orchestrator calls a workflow task, waits for the model run to complete, and collects a reference to the corresponding model output (i.e., dynamic data). |
Zeigler, B. P., Muzy, A., & Kofman, E. (2018). Theory of modeling and simulation: discrete event & iterative system computational foundations. Academic press.
Richard M. Fujimoto (2000), Parallel and distributed simulation systems. Wiley Series on Parallel and Distributed Computing, John Wiley & Sons.
Social process¶
Due to changes to the project funding after acceptance, most of the work analysing the social processes could not be completed. However, we have managed to do several useful things.
`Social learning in participatory multi-modelling: Crossing boundaries for multi-party collaboration <https://github.com/MultiModelling/Documentation/blob/main/docs/source/scientific/social_process/Social%20learning%20in%20participatory%20multi-modelling%3A%20Crossing%20boundaries%20for%20multi-party%20collaboration.pdf>`_ Paper by Sander ten Caat & Annemiek de Looze.
* Manual for Qualtrics survey site.
* R script for data analysis.