Abstract
Sensitivity analysis (SA) can be applied to building energy models (BEM) to identify which input parameters drive the majority of the model output variation. The screening-based Morris method is often applied for this purpose; however, considerations of how the user-defined number of levels (p) and trajectories (r) affect the obtained results are rare. This paper investigates how the choice of p and r affects the outcome of an SA using the Morris method on a high-fidelity BEM. The results indicate that the Morris method was not able to replicate the ranking from the variance-based Sobol’ method regardless of the choice of r and p. It was, however, able to identify the group of input parameters (the parameter cluster) to which the model output variability is most sensitive, but it required significantly more trajectories than usually applied in studies featuring the Morris method. The reason is that marginal differences in the absolute values of the elementary effects (the sensitivity indices of the Morris method) for some input parameters may cause their ranking positions to change several times as r increases. Users of the Morris method should therefore not predetermine the size of the parameter cluster; instead, they should make a visual assessment of the convergence of the parameter ranking to qualitatively determine an appropriate cluster size. The final recommendation for future studies deploying the Morris method for SA of a high-fidelity BEM is to choose p ≥ 4, as this seems to lead the analysis towards a more truthful ranking, and then to run simulations in steps of r = 100 when making the visual assessment of convergence and cluster size. The identified need for more trajectories questions the general notion that the Morris method is a computationally efficient screening method in terms of absolute time use. However, the Morris method is still much more computationally efficient than a Sobol’-based analysis if the purpose of the SA is to identify a cluster of input parameters to which the model output variability is most sensitive.
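As a purely illustrative sketch of the recommended workflow, the snippet below assumes the Python library SALib and a toy analytic function as a stand-in for the high-fidelity BEM (both are assumptions, not the setup used in this study; parameter names, bounds, and coefficients are hypothetical). It runs the Morris method with p = 4 levels, increases r in steps of 100, and prints the ranking by the mean of absolute elementary effects (µ*) so that its convergence can be inspected visually.

```python
# Purely illustrative sketch: SALib is assumed as the SA library, and a toy
# analytic function stands in for the high-fidelity BEM. Parameter names,
# bounds and coefficients are hypothetical placeholders.
import numpy as np
from SALib.sample import morris as morris_sample
from SALib.analyze import morris as morris_analyze

problem = {
    "num_vars": 4,
    "names": ["wall_U", "window_g", "infiltration", "heating_setpoint"],
    "bounds": [[0.1, 0.4], [0.3, 0.7], [0.5, 2.0], [20.0, 24.0]],
}

def toy_bem(X):
    # Stand-in for an annual energy-use simulation (arbitrary functional form).
    return (80 * X[:, 0] + 15 * X[:, 1] ** 2 + 25 * X[:, 2]
            + 5 * X[:, 3] + 10 * X[:, 0] * X[:, 2])

p = 4  # number of levels, p >= 4 as recommended above
for r in range(100, 501, 100):  # increase trajectories in steps of r = 100
    X = morris_sample.sample(problem, N=r, num_levels=p, seed=1)  # r*(k+1) runs
    Y = toy_bem(X)
    res = morris_analyze.analyze(problem, X, Y, num_levels=p, seed=1)
    ranking = [problem["names"][i] for i in np.argsort(res["mu_star"])[::-1]]
    print(f"r = {r:3d}: ranking by mu* = {ranking}")
```

In this kind of sketch, the analysis is considered converged once the printed ranking (or at least the membership of the top cluster) stops changing between successive steps of r.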
Introduction
Building designers may find it informative to apply a sensitivity analysis (SA) to a building energy model (BEM) to identify which design variables drive the majority of the model output variation in terms of indoor climate and energy use. SA methods for this purpose can in general be categorised as either local sensitivity analysis (LSA) or global sensitivity analysis (GSA) [1]. LSA methods rely on a one-parameter-at-a-time (OAT) technique where all parameter values have equal probability of occurrence. The OAT technique means that LSA methods do not account for any effects from correlated input parameters. However, LSA methods are easy to implement and fast to conduct as they require only a few model evaluations. The GSA category covers a range of methods applying different techniques. Common to the methods in the GSA category is that they evaluate the effect of an input parameter on the output by varying not only the parameter in question but also all other input parameters chosen for the analysis. GSA methods are therefore able to include effects from correlated input parameters as well as non-linear and non-additive model behaviour. The outcome of a GSA may therefore be more reliable than the outcome of an LSA, but GSA methods are more complicated to implement and significantly slower to conduct as they require many model evaluations.

A specific group of GSA methods is the so-called screening methods [2]. Screening-based SA methods are often considered useful for a qualitative identification of the design variables to which the model output variability is most sensitive, whereas more advanced GSA methods, such as variance-based methods, must be applied if a quantitative ranking of parameters is desired. The screening method initially described by Morris [3], and since refined and expanded by different authors [4-5], seems to be widely used for BEM-based analyses; see Table 1 for an overview of BEM-based studies featuring the Morris method. A compelling argument for applying the Morris method, instead of a more comprehensive variance-based GSA method, is that it is a computationally efficient alternative if only a rough ranking of the parameters is desired.
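To put the computational argument in perspective, the following hypothetical back-of-the-envelope sketch compares model-evaluation counts using the standard cost formulas: r(k + 1) runs for a Morris design and N(k + 2) runs for a Saltelli design estimating first- and total-order Sobol’ indices. The values of k, r, and N are illustrative only and are not taken from this study.

```python
# Hypothetical back-of-the-envelope comparison of the number of model
# evaluations, using the standard cost formulas: r*(k+1) runs for a Morris
# design and N*(k+2) runs for a Saltelli design estimating first- and
# total-order Sobol' indices. k, r and N are illustrative values only.
k = 30        # number of input parameters in the BEM
r = 500       # Morris trajectories (large, reflecting the convergence finding)
N = 1000      # Sobol'/Saltelli base sample size

morris_runs = r * (k + 1)   # 15,500 evaluations
sobol_runs = N * (k + 2)    # 32,000 evaluations

print(f"Morris screening:  {morris_runs:,} model evaluations")
print(f"Sobol' (Saltelli): {sobol_runs:,} model evaluations")
```

Even with a large number of trajectories, the Morris design in this illustration requires roughly half the evaluations of the Sobol’ design, which is the basis of the efficiency argument examined in this paper.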