All are welcome to our virtual conference series on Epistemic Uncertainty in Engineering. This event encourages discussion and collaboration on modelling and decision making in the face of epistemic and aleatory uncertainty in practical engineering contexts. Building on the recent REC meeting in Liverpool and the RUC meetings in Cambridge and Amsterdam, we hope that this event will feature even more discussion and debate, serving as a workshop to address, and perhaps reach concrete shared conclusions about, how to handle uncertainty in engineering.

The event will feature a series of 2-3 hour sessions during February and March 2021 on different ways of handling uncertainty. Each session will focus on a different theme, anchored by talks from thought leaders across the globe sharing their perspectives on dealing with epistemic and aleatory uncertainty in engineering contexts.

Uncertainty can be confusing, especially with the plethora of techniques and approaches to choose from. Which should engineers choose and why? How should we communicate uncertainty and can we agree on the methods and inferences to use? The themes are:

  • Decision making: must decisions and designs distinguish kinds of uncertainties?
  • Uncertainty arithmetic: do epistemic and aleatory uncertainty need different calculi?
  • Uncertainty in engineering: how can we build a model with what we don’t know?
  • Computing with uncertainty: what would a unified uncertainty calculator look like?
  • Admitting you don’t know: how should epistemic uncertainty be communicated?
  • Manifesto on epistemic uncertainty: can we agree on anything?
  • Managing uncertainty: how are management, compliance and monitoring affected by uncertainties?
  • Communicating uncertainty: how should we explain and talk about uncertainty? (discussion)

Decision making: Must decisions and designs distinguish kinds of uncertainties?

We Already Have a Unified Uncertainty Theory

Professor Anthony O'Hagan, The University of Sheffield

Probability is the quantification of uncertainty. It handles all kinds of uncertainties. Terms like 'epistemic' and 'aleatory' can be convenient in discussion but they are only loosely defined - probability handles them both seamlessly. Uncertainty in engineering is no different from uncertainty in any other field. There is no need for a conference when it is based on imaginary problems.


Anthony O'Hagan is a statistician specialising in the methodology and applications of Bayesian statistics. Within that broad field he has made methodological contributions in heavy-tailed distributions and conflicting information, Gaussian process modelling, emulation and uncertainty quantification for mechanistic models, elicitation of expert knowledge and probabilistic numerics. He has applied publications in many areas including health economics, nuclear engineering, accounting, environmental monitoring, drug development and asset management. Tony is Professor Emeritus of Statistics at the University of Sheffield, UK. He has served on the Council and the Research Section Committee of the Royal Statistical Society, the Board of Directors and the Programme Council of the International Society for Bayesian Analysis, the peer review colleges for the Engineering and Physical Sciences Research Council and the Medical Research Council, the Methodology panel of the National Health Service and as president of the Mathematics section of the British Association for the Advancement of Science.

Simulation-Informed Decision Making

William L. Oberkampf, Sandia National Laboratories

Simulation is becoming the primary tool for predicting the performance, reliability, and safety of engineered systems. Terms such as “virtual prototyping,” “virtual testing,” and “full physics simulation” are extremely appealing when budgets and schedules are highly constrained, or when competitive pressures convince project managers to move forward with little testing of new systems or manufacturing processes. Many contend that higher-fidelity physics modelling, combined with faster computers, is the path forward for improved decision making informed by simulation. I argue that these factors are important, but business managers and policy makers who make consequential decisions also need information on the uncertainty of simulation results. Many of these decision makers understand that some uncertainties are well characterized, whereas others are very poorly understood and potentially not even included in the simulation. To capture a wide range of uncertainty sources and characterizations, the term predictive capability, or total predictive capability, has been used in certain communities. In contrast to traditional uncertainty estimation, which concentrates on random variables, predictive capability attempts to capture all potential sources of uncertainty. These include numerical solution error, model-form uncertainty, and uncertainty in the environments and scenarios to which the system could be exposed, either intentionally or unintentionally. This talk will contrast traditional uncertainty quantification approaches with approaches based on imprecise probabilities, which can simultaneously include both random variability and lack-of-knowledge uncertainty. It is argued that imprecise probability approaches are more informative and revealing for decision makers.
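
As a concrete illustration of the imprecise-probability idea (our sketch, not material from the talk, and with hypothetical numbers): when a distribution parameter is known only to an interval, every probability statement becomes an interval, and the family of CDFs forms a so-called probability box.

```python
# Minimal sketch (hypothetical numbers): a "p-box" arises when a CDF's
# parameter is only known to an interval. Here the mean of a normal
# distribution is known only to lie in [9, 11]; the standard deviation is 1.
from scipy.stats import norm
import numpy as np

x = np.linspace(5, 15, 201)
mus = np.linspace(9.0, 11.0, 21)          # epistemic interval for the mean
cdfs = np.array([norm.cdf(x, mu, 1.0) for mu in mus])

lower_cdf = cdfs.min(axis=0)              # lower envelope of the CDF family
upper_cdf = cdfs.max(axis=0)              # upper envelope

# Any probability statement becomes an interval, e.g. P(X <= 10):
i = np.searchsorted(x, 10.0)
print(f"P(X <= 10) is in [{lower_cdf[i]:.3f}, {upper_cdf[i]:.3f}]")
```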


Dr. Oberkampf has 50 years of experience in research and development in fluid dynamics, heat transfer, flight dynamics, and solid mechanics. After completing his graduate studies at the University of Notre Dame in 1970, he was on the faculty of the University of Texas at Austin until 1979. From 1979 until 2007 he worked at Sandia National Laboratories in both staff and management positions. For the last 25 years, Dr. Oberkampf has emphasized research and development in methodologies and procedures for verification, validation, and uncertainty quantification for a wide variety of applications. He has written over 185 journal articles, book chapters, conference papers, and reports, and has taught 60 short courses in the field of verification, validation, and uncertainty quantification. He and Prof. Chris Roy co-authored the book Verification and Validation in Scientific Computing, published by Cambridge University Press. He is a fellow of the American Institute of Aeronautics and Astronautics and of NAFEMS.

View the slides

Computing with Uncertainty: What would a unified uncertainty calculator look like?

Bayesian/Fiducial/Frequentist Uncertainty Quantification by Artificial Samples

Professor Min-ge Xie, Rutgers University

Bayesian, frequentist and fiducial (BFF) inferences are much more congruous than they have historically been perceived to be in the scientific community. Most practitioners are probably more familiar with the competing narratives of the two dominant statistical inferential paradigms, Bayesian inference and frequentist inference. The third, lesser-known fiducial inference paradigm was pioneered by R.A. Fisher in an attempt to define an inversion procedure for inference as an alternative to Bayes' theorem. Although each paradigm has its own strengths and limitations, owing to their different philosophical underpinnings, this talk intends to bridge these three inferential methodologies through the lenses of confidence distribution theory and artificial sampling procedures. The talk attempts to show how uncertainty quantifications in these three distinct paradigms, Bayesian, frequentist, and fiducial inference, can be unified and compared on a foundational level, thereby increasing the range of techniques available to both statistical theorists and practitioners across all fields.
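
One way to make the phrase "artificial samples" concrete (our illustrative sketch, not the speaker's method; the data are hypothetical) is the bootstrap approximation to a confidence distribution for a mean: the distribution of resampled means acts as a distribution estimator for the parameter, from which intervals of any level can be read off.

```python
# Minimal sketch (not the speaker's algorithm): approximating a confidence
# distribution for a population mean with artificial (bootstrap) samples.
import numpy as np

rng = np.random.default_rng(1)
data = rng.normal(10.0, 2.0, size=30)     # hypothetical sample

boot_means = np.array([
    rng.choice(data, size=data.size, replace=True).mean()
    for _ in range(10_000)
])

# The sorted artificial means trace out an approximate confidence curve:
for level in (0.50, 0.90, 0.95):
    lo, hi = np.quantile(boot_means, [(1 - level) / 2, (1 + level) / 2])
    print(f"{level:.0%} interval for the mean: [{lo:.2f}, {hi:.2f}]")
```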


Dr. Min-ge Xie is a Distinguished Professor and Director of the Office of Statistical Consulting, Department of Statistics, Rutgers, the State University of New Jersey. His main research interest is in the foundations of statistical inference and fusion learning. His other expertise includes estimating equations, robust statistics, hierarchical models, and applications in biostatistics and industry. Dr. Xie received his BS degree in mathematics, with high honours, from the University of Science and Technology of China (USTC) and his PhD degree in statistics from the University of Illinois at Urbana-Champaign (UIUC). He is a fellow of the American Statistical Association and an elected member of the International Statistical Institute. He has served on numerous scientific review panels and editorial boards. He is a co-founder of the BFF research community. His research has been supported in part by grants from the NSF, NIH, DHS, NSA and FAA.

View the slides

Epistemic vs Aleatory: Granular Computing and Ideas Beyond That

Professor Vladik Kreinovich, University of Texas at El Paso

Many practical problems are naturally reduced to solving systems of equations. There are many efficient techniques for solving well-defined systems of equations, i.e., systems in which we know the exact values of all the parameters and coefficients. In practice, we usually know these parameters and coefficients with some uncertainty, usually described by an appropriate granule: interval, fuzzy set, rough set, etc. Many techniques have been developed for solving systems of equations under such granular uncertainty. Sometimes, however, practitioners use previously successful techniques and get inadequate results. In this talk, we explain that to obtain an adequate solution, we need to take into account not only the system of equations and the granules describing uncertainty: we also need to take into account the original practical problem, and for different practical problems, we get different solutions to the same system of equations with the same granules.
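
The talk's central point can be made concrete with the simplest interval equation. The sketch below (our illustration, with hypothetical intervals) contrasts two standard solution sets of a·x = b with a in [2, 3] and b in [6, 12]: the united set answers "which x are possible?", while the tolerable set answers "which x are guaranteed acceptable?". Same equation, same granules, different practical questions, different answers.

```python
# Minimal sketch (illustrative, not from the talk): one interval equation
# a*x = b with a in [2,3] and b in [6,12] has *different* solution sets
# depending on the practical question being asked.
import numpy as np

A = (2.0, 3.0)
B = (6.0, 12.0)

def united(x):     # "possible": some a and some b satisfy a*x = b
    lo, hi = min(A[0]*x, A[1]*x), max(A[0]*x, A[1]*x)
    return not (hi < B[0] or lo > B[1])   # [a]*x intersects [b]

def tolerable(x):  # "guaranteed": every a keeps a*x inside [b]
    lo, hi = min(A[0]*x, A[1]*x), max(A[0]*x, A[1]*x)
    return B[0] <= lo and hi <= B[1]      # [a]*x contained in [b]

xs = np.linspace(0, 8, 8001)
u_set = xs[[united(x) for x in xs]]
t_set = xs[[tolerable(x) for x in xs]]
print(f"united solution set   : [{u_set.min():.3f}, {u_set.max():.3f}]")  # [2, 6]
print(f"tolerable solution set: [{t_set.min():.3f}, {t_set.max():.3f}]")  # [3, 4]
```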


Vladik Kreinovich is a professor of computer science at the University of Texas at El Paso. He was educated at Leningrad State University (currently known as St. Petersburg University) and received a doctorate in mathematics from the Sobolev Institute of Mathematics, affiliated with Novosibirsk State University in Novosibirsk. His research spans several areas of computer science, computational statistics and computational mathematics generally, including interval arithmetic, fuzzy mathematics, probability theory, and probability bounds analysis. His research addresses computability issues, algorithm development, verification, and validated numerics for applications in uncertainty processing, data processing, intelligent control, geophysics and other engineering fields. He has been extremely productive with over 2,400 scholarly publications. In 2015, the Society For Design and Process Science gave him its Zadeh Award.

View the slides

Manifesto on Uncertainty: Can we agree on anything about handling epistemic uncertainty in engineering?

Random Fuzzy Sets: A General Model of Epistemic Uncertainty

Professor Thierry Denoeux, University of Compiègne, France

We will start with a critical discussion about the concepts of “aleatory” and “epistemic” uncertainty. We will then review the two classical mathematical models of uncertain information: probabilities and sets (as exemplified by interval analysis), and proceed with two important extensions: fuzzy sets and possibility theory on the one hand, random sets and belief functions on the other hand. We will then introduce an even more general model based on random fuzzy sets, and argue for its relevance as a general approach to reasoning with epistemic uncertainty. As an illustration, we will demonstrate the application of this model to statistical prediction. Related publication: Thierry Denoeux, Belief functions induced by random fuzzy sets: a general framework for representing uncertain and fuzzy evidence, Fuzzy Sets and Systems, 2020.
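
For readers new to belief functions, here is a minimal sketch (our illustration, not taken from the cited paper) of Dempster's rule, the basic mechanism for combining two independent pieces of evidence represented as mass functions over a frame of discernment.

```python
# Minimal sketch: Dempster's rule of combination for two mass functions on
# the frame {a, b, c}. Focal sets are frozensets; masses of intersecting
# pairs are multiplied, and conflict (empty intersections) is renormalized.
from itertools import product

def dempster(m1, m2):
    combined, conflict = {}, 0.0
    for (s1, w1), (s2, w2) in product(m1.items(), m2.items()):
        inter = s1 & s2
        if inter:
            combined[inter] = combined.get(inter, 0.0) + w1 * w2
        else:
            conflict += w1 * w2
    return {s: w / (1.0 - conflict) for s, w in combined.items()}

m1 = {frozenset("ab"): 0.7, frozenset("abc"): 0.3}   # hypothetical evidence 1
m2 = {frozenset("bc"): 0.6, frozenset("abc"): 0.4}   # hypothetical evidence 2
for s, w in dempster(m1, m2).items():
    print(set(s), round(w, 3))
```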


Thierry Denoeux is a Full Professor (Exceptional Class) with the Department of Information Processing Engineering at the University of Compiègne, France, and a senior member of the French Academic Institute (Institut Universitaire de France). His research interests concern reasoning and decision-making under uncertainty and, more generally, the management of uncertainty in intelligent systems. His main contributions are in the theory of belief functions with applications to statistical inference, pattern recognition, machine learning and information fusion. He has published more than 300 papers in this area. He is the editor-in-chief of the International Journal of Approximate Reasoning and the journal Array, and an associate editor of several journals including Fuzzy Sets and Systems and International Journal on Uncertainty, Fuzziness and Knowledge-Based Systems.

The Basic Principles of Reasoning About Uncertainty: A Subjectivist Approach

Professor Michael Goldstein, University of Durham

The basic questions that we should ask of any real-world uncertainty analysis are as follows: what do we mean by the uncertainty statements that we make? What have we done to ensure that our uncertainty statements do have the stated meaning? And for what purposes are these uncertainty statements useful? We will discuss these questions from a subjectivist Bayesian viewpoint, looking at the strengths and weaknesses of this framework for carrying out such an uncertainty analysis and suggesting ways in which the usual theory may be augmented to sharpen the value and meaning of the statements made. While our discussion will be quite general, it will be largely motivated by problems of uncertainty quantification for complex physical systems modelled by computer simulators.


Michael Goldstein is a statistician at the University of Durham, who has worked for many years on the foundations, methodology and applications of Bayesian statistics. In particular, he has developed the general approach termed Bayes linear statistics which is similar in spirit to conventional Bayes but takes expectation, rather than probability, as the fundamental primitive for the theory. For the last thirty years, his main area of application for this theory has been to problems of uncertainty quantification and decision making for complex physical systems modelled by computer simulators.
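
The Bayes linear adjustment mentioned above has a compact closed form. The following minimal sketch (our illustration, with hypothetical numbers) adjusts beliefs about a quantity B by data D using only means, variances and covariances, with no full probability distributions specified anywhere.

```python
# Minimal sketch (hypothetical numbers): Bayes linear adjustment of beliefs
# about a quantity B by observed data D, using expectations as primitives:
#   E_D(B)   = E(B) + Cov(B,D) Var(D)^-1 (d - E(D))
#   Var_D(B) = Var(B) - Cov(B,D) Var(D)^-1 Cov(D,B)
import numpy as np

E_B, Var_B = 5.0, 4.0                 # prior mean and variance of B
E_D = np.array([5.0, 5.0])            # prior means of two noisy gauges of B
Var_D = np.array([[5.0, 4.0],         # Var(D): signal variance 4 plus
                  [4.0, 5.0]])        # independent noise variance 1
Cov_BD = np.array([4.0, 4.0])         # Cov(B, D_i) = Var(B)

d = np.array([6.2, 5.8])              # hypothetical observations
K = Cov_BD @ np.linalg.inv(Var_D)
print(f"adjusted expectation: {E_B + K @ (d - E_D):.3f}")   # ~5.889
print(f"adjusted variance   : {Var_B - K @ Cov_BD:.3f}")    # ~0.444
```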

Kinds of Uncertainty: Are there other kinds of uncertainty?

Innovation Dilemmas: An Info-Gap Perspective

Professor Yakov Ben-Haim, Israel Institute of Technology

The search for ever better outcomes should guide the decision maker in engineering design, public policy, economics, medical decisions and many other areas of human endeavor. However, uncertainty, ignorance, and surprise may jeopardize the achievement of optimal outcomes.
The concept of an innovation dilemma assists in understanding and resolving the planner's challenge. An innovative and highly promising new design or policy is less well understood than a standard approach whose implications are more familiar. The innovation, while purportedly better than the standard approach, may turn out to be much worse because of uncertainty about the innovation. The resolution of the dilemma (never unambiguous) results from an analysis of robustness to surprise (related to resilience, redundancy, flexibility, etc.) and is based on info-gap decision theory.
Info-gap theory provides decision-support tools for managing the challenges of planning and decision-making under deep uncertainty. We discuss the method of robustly satisfying critical requirements as a tool for protecting against pernicious uncertainty. These ideas will be illustrated with a range of brief examples, and a closer look at design for seismic safety.
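
A toy version of the robustness function (our sketch, with hypothetical numbers, not an example from the talk): robustness is the largest horizon of uncertainty a design can tolerate while still meeting its critical requirement, and an innovation dilemma appears when the nominally better design is the less robust one.

```python
# Minimal sketch (hypothetical numbers): an info-gap robustness curve.
# A design with capacity q must carry an uncertain load u; the info-gap
# model is U(h) = {u : |u - u_nom| <= h}. The robustness of design q is
# the largest horizon h at which the worst-case load is still carried:
#   h_hat(q) = max{h : u_nom + h <= q} = q - u_nom.
u_nom = 10.0                       # nominal load estimate

def robustness(q):
    return max(q - u_nom, 0.0)     # horizon of uncertainty tolerated

# Innovation dilemma in miniature: the "innovative" design promises more
# capacity (12.5 vs 11.5), but its nominal load estimate is shakier, which
# we model here as an extra error margin of 2.0 on u_nom.
standard = robustness(11.5)                   # 1.5
innovative = max(12.5 - (u_nom + 2.0), 0.0)   # 0.5: better on paper, less robust
print(f"standard h_hat = {standard}, innovative h_hat = {innovative}")
```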


Prof. Yakov Ben-Haim initiated and developed info-gap decision theory for modeling and managing deep uncertainty. Info-gap theory is a decision-support tool, providing a methodology for assisting in the assessment and selection of policy, strategy, action, or decision in a wide range of disciplines. Info-gap theory has impacted the fundamental understanding of uncertainty in human affairs, and is applied in decision making by scholars and practitioners around the world in engineering, biological conservation, economics, project management, climate change, natural hazard response, national security, medicine, and other areas (see info-gap.com). He has been a visiting scholar in many countries and has lectured at universities, technological and medical research institutions, public utilities and central banks. He has published more than 100 articles and 6 books. He is a professor of mechanical engineering and holds the Yitzhak Moda'i Chair in Technology and Economics at the Technion - Israel Institute of Technology.

Uncertainty in Numerical Simulations: Does it Matter? What to Do About It?

Dr. François Hemez, Lawrence Livermore National Laboratory

High-consequence decision-making is increasingly being supported by numerical simulations. Examples include predicting the trajectories of hurricanes; anticipating the consequences of terrorist threats; and managing complex, inter-dependent infrastructure in urban environments. Attempting to forecast the future, or to predict conditions that cannot be observed experimentally, calls into question the veracity of numerical models and the quantification of prediction uncertainty. We provide a high-level overview of the three classes of uncertainty (randomness, numerical uncertainty, assumption-making) in modeling and simulation. Because of their different natures, quantifying these uncertainties and aggregating them are daunting tasks (still today!). Communicating to decision-makers what the uncertainty represents is another serious roadblock. We contend that these challenges can be addressed by focusing on answering “what if” questions for scenarios that the decision-makers care about and establishing confidence in these answers. Confidence comes from assessing the extent to which our predictions, and the decisions they support, are not adversely affected by our information gaps (what we do not know or cannot control). The discussion is illustrated with examples that range from simple engineering design to training emergency personnel to respond to large-scale incidents in urban environments.


François Hemez is a scientist at Lawrence Livermore National Laboratory (LLNL) where he contributes to assessments of the U.S. nuclear deterrent and supports non-proliferation efforts of the Intelligence Community. Before joining LLNL, he spent twenty-one years at Los Alamos National Laboratory with responsibilities in various programs for computational engineering and physics, and was adjunct professor at the University of California San Diego (Structural Engineering Department). François graduated from Ecole Centrale Paris, France in 1989 and earned a doctoral degree in aerospace engineering from the University of Colorado Boulder in 1993. François is recognized for his expertise in model verification and validation, uncertainty quantification and decision-making. Since 2001, he has authored 430+ technical reports, peer-reviewed manuscripts and book chapters; given 162 invited lectures, including 9 international keynotes; and taught short-courses to 938 graduate students and practicing engineers around the world.

View the slides

Uncertainty Engineering: How can we build a model given what we don't know?

Uncertainty in Engineering Decision Making

Daniel Straub, Technical University of Munich

Within the idealized world of Bayesian decision analysis, optimal decisions are well defined. Unfortunately – or maybe fortunately – finding optimal decisions in real life is complicated by the fact that multiple decision makers are involved, preferences are often difficult to specify, and probabilistic quantification of uncertainties is challenging and can be misleading. This talk will focus on the uncertainty quantification challenge. I will examine common arguments for and against probabilistic quantification based on specific examples. The talk will not provide answers to all the open questions, but it should provide some food for thought for the discussion of if, when and how uncertainty should be quantified.
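
For concreteness, the "idealized world" can be written down in a few lines: once probabilities and utilities are specified, the optimal act simply maximizes expected utility. The sketch below (our illustration, with hypothetical numbers) also hints at the talk's caveat, since a modest change in the probability quantification flips the decision.

```python
# Minimal sketch (hypothetical numbers): in an idealized Bayesian decision
# analysis the optimal act maximizes expected utility over the states.
p = {"no_flood": 0.95, "flood": 0.05}          # probabilities of the states

utility = {                                     # consequences of each act
    "build_levee": {"no_flood": -10, "flood": -12},
    "do_nothing":  {"no_flood":   0, "flood": -100},
}

def expected_utility(act):
    return sum(p[s] * utility[act][s] for s in p)

for act in utility:
    print(f"{act}: EU = {expected_utility(act):.2f}")
print("optimal act:", max(utility, key=expected_utility))
# do_nothing (EU = -5.0) beats build_levee (EU = -10.1); but note that if
# p["flood"] were quantified above ~0.10 instead, the optimum would flip.
```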


Daniel Straub is currently Associate Professor for engineering risk and reliability analysis at the Technical University of Munich (TUM), Germany. His interest is in developing physics-based stochastic models and methods for decision support in infrastructure, environmental and general engineering systems, with a particular focus on Bayesian techniques and decision analysis for risk and reliability analysis.
Daniel is particularly interested in linking fundamental research to application-specific challenges. He is developing novel models and algorithms for reliability assessment, data analysis, decision, risk and sensitivity analysis. Concurrently, he works successfully with partners in multiple industries, including infrastructure engineering, offshore and marine engineering, geotechnical engineering, natural hazards, automotive as well as aero- and astronautical engineering.
Daniel is past president of IFIP WG 7.5 and Geosnet. He is active in multiple professional organizations and code committees in Germany, and in the editorial boards of the leading journals in engineering reliability and risk. His awards include the ETH Silbermedaille and the Early Achievement Research Award of IASSAR. He is also an Honorary Professor at the University of Aberdeen, UK.

Uncertainty Modelling and Optimization Under Uncertainty

Dr. Luis G. Crespo, NASA Langley Research Center

This talk presents strategies for modeling uncertainty as well as the implications of using such models when performing uncertainty quantification and robust optimization.


Dr. Luis G. Crespo is a research scientist in the Dynamic Systems and Controls Branch of NASA Langley Research Center. His research interests are control theory, uncertainty quantification, learning theory and robust optimization. He is the author of 100+ papers in these areas.

Manifesto on Uncertainty (revisited): Can we agree on anything about handling epistemic uncertainty in engineering?

Conceptualising, Representing and Describing Epistemic Uncertainties in Risk Analysis

Terje Aven, Ph.D., University of Stavanger, Norway

In this talk, Professor Terje Aven will reflect on how to conceptualise, represent and describe epistemic uncertainties in a risk analysis context. He will focus his discussion on the probability concept, both precise and imprecise, and on the knowledge supporting the probability judgments. Related management issues, such as ensuring that decision makers and other stakeholders are adequately informed, will also be addressed.


Terje Aven, Ph.D., is Professor of Risk Analysis and Risk Science at the University of Stavanger, Norway. He has many years of experience as an industry risk analyst and is the author of many books and papers in the field. His research covers a broad range of topics within risk analysis and risk science, in particular foundational issues related to risk and uncertainty assessment and characterization. He is past President of the Society for Risk Analysis (SRA) and of the European Safety and Reliability Association (ESRA). He is Editor-in-Chief of the Journal of Risk and Reliability and an area editor of Risk Analysis.

View the slides

Say What?

Keith Worden and Scott Ferson

We are at a crossroads in our scientific appreciation of uncertainty. The traditional view is that there is only one kind of uncertainty and that probability theory with Bayes' rule is its calculus. But some engineers hold that, in practice, the quantitative results of traditional probabilistic models are often misconstrued and sometimes demonstrably misleading.
By relaxing a single axiom of traditional (von Neumann–Morgenstern) utility theory, the one that assumes the decision maker can always decide which of any two nonidentical decision choices would be preferred, traditional decision theory broadens into a version entailing a richer concept of uncertainty and a wider framework for uncertainty analysis. The resulting theory admits a kind of uncertainty that is not handled by traditional Laplacian probability measures and might therefore be called non-Laplacian uncertainty. This non-Laplacian view argues that different kinds of uncertainty must be propagated differently through simulations, reliability and risk analyses, calculations for robust design, and other computations.
We suggest that these two views can be unified into a modern pragmatic approach to uncertainty quantification. Many, and perhaps most, practical calculations involving uncertainty may be well handled with traditional probability theory, implemented as standard applications of Bayes' rule and Monte Carlo simulations. But there are special cases involving epistemic uncertainty where it is difficult or impossible to fully specify probabilities or other measured quantities precisely, where a non-Laplacian approach can be useful. Such a unified approach would make practical solutions easier for engineering and physics-based models, and the inferences drawn from such models under this view would be more defensible.
Remarkably, this emerging consensus parallels the historical resolution of the initially extreme controversy in mathematics that produced the present balance between Euclidean and non-Euclidean approaches that forms modern geometry.
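
One concrete instance of the non-Laplacian view (our sketch, not from the talk): if the dependence between two events is unknown, the probability of their conjunction is only bounded by the classical Fréchet inequalities, and assuming independence hides that epistemic gap behind a single number.

```python
# Minimal sketch: when the dependence between events A and B is unknown
# (an epistemic gap), P(A and B) cannot be computed, only bounded by the
# Fréchet inequalities; assuming independence would conceal this.
def frechet_and(pa, pb):
    return (max(0.0, pa + pb - 1.0), min(pa, pb))

pa, pb = 0.8, 0.7                     # hypothetical component reliabilities
lo, hi = frechet_and(pa, pb)
print(f"independence assumed: {pa * pb:.2f}")          # 0.56, a point value
print(f"dependence unknown:   [{lo:.2f}, {hi:.2f}]")   # [0.50, 0.70]
```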


Keith Worden began academic life as a theoretical physicist, with a degree from York University; a PhD in Mechanical Engineering from Heriot-Watt University eventually followed. A period of research at Manchester University led to a professorship at the University of Sheffield in 1995, where he has happily remained since. Keith's research is concerned with applications of advanced signal processing and machine learning methods to structural dynamics. The primary application is in the aerospace industry, although there has also been interaction with the ground transport and offshore industries. One research theme concerns non-linear systems: the work here is concerned with assessing the importance of non-linear modelling within a given context and formulating appropriate methods of analysis. The analysis of non-linear systems can range from the fairly pragmatic to the extremes of mathematical complexity; the emphasis within the research group is on the pragmatic, and every attempt is made to maintain contact with engineering necessity. Another major activity within the research group concerns structural health monitoring for aerospace systems and structures. This research is concerned with developing automated systems for inspection and diagnosis, with a view to reducing the cost of ownership of these high-integrity structures. The methods used are largely adapted from pattern recognition and machine learning; often the algorithms make use of biological concepts, e.g. neural networks, genetic algorithms and ant-colony metaphors. The experimental approaches developed range from global inspection using vibration analysis to local monitoring using ultrasound.

View the slides

Scott Ferson is director of the Institute for Risk and Uncertainty at the University of Liverpool in the UK. For many years he was at Applied Biomathematics in the US. He holds a Ph.D. in Ecology and Evolution from Stony Brook University and an A.B. in biology from Wabash College. He has published five books, ten commercially distributed software packages, and over a hundred scholarly publications, mostly in environmental risk analysis, uncertainty propagation, and conservation biology. He is a fellow of the Society for Risk Analysis and was named Distinguished Educator by the Society. He has been a central figure in the development of probability bounds analysis, an approach to reliably computing with imprecisely specified probabilistic models. His research, funded primarily by the Engineering and Physical Sciences Research Council, National Institutes of Health, NASA, and Sandia National Laboratories, has focused on developing reliable statistical tools for uncertainty analysis when empirical information is very sparse in engineering, environmental and medical risk analyses.

View the slides

Uncertainty Arithmetic: Do epistemic and aleatory uncertainty need different calculi?

Unified View of Uncertainty Theories

Didier Dubois, IRIT-CNRS, Université Paul Sabatier

The variability of physical phenomena and partial ignorance about them motivated the development of probability theory over the last two centuries. However, the mathematical framework of probability theory, together with the Bayesian credo claiming the inevitability of unique probability measures for representing agents' beliefs, has blurred the distinction between variability and ignorance. Modern theories of uncertainty, by putting together probabilistic and set-valued representations of information, provide a better account of the various facets of uncertainty.
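
A minimal sketch of the set-valued side (our illustration, with hypothetical numbers): a possibility distribution grades an event from two sides, by its possibility and its necessity, rather than with a single probability.

```python
# Minimal sketch: a possibility distribution pi over a finite set grades
# how plausible each value is; an event A is then bracketed by
#   N(A) = 1 - max(pi outside A)  <=  Pi(A) = max(pi inside A).
pi = {1: 0.2, 2: 0.7, 3: 1.0, 4: 0.6, 5: 0.1}   # hypothetical distribution

def possibility(A):
    return max(pi[x] for x in A)

def necessity(A):
    outside = [pi[x] for x in pi if x not in A]
    return 1.0 - (max(outside) if outside else 0.0)

A = {2, 3}
print(f"Pi(A) = {possibility(A):.2f}, N(A) = {necessity(A):.2f}")
# Pi(A) = 1.00 (fully possible), N(A) = 0.40 (only weakly certain)
```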


Didier Dubois is professor emeritus at Université Paul Sabatier in Toulouse, France, and a research director (directeur de recherche) of the Centre national de la recherche scientifique (CNRS) at the Institut de Recherche en Informatique de Toulouse (IRIT). He was educated as a civil engineer of aeronautics at the French National School of Aeronautics and Space, from which he also earned a doctorate in engineering. His second doctorate (docteur d'état), from the Scientific and Medical University of Grenoble, was on mathematical models of the imprecise and the uncertain. Dubois served as president of the International Fuzzy Systems Association and is the longtime editor-in-chief of Fuzzy Sets and Systems. He has worked at the IMAG Institute in Grenoble, Purdue University in the United States, the Centre d'Études et de Recherche de Toulouse, Département d'Études et de Recherche en Automatique, and the Languages and Computer Systems Laboratory at Paul Sabatier with the artificial intelligence and robotics team. He received the Pioneer Award from the IEEE Neural Network Society and a doctorate honoris causa from the Faculté Polytechnique de Mons (Belgium). He is a fellow of the International Fuzzy Systems Association, and was named one of the 300 most cited French scientific authors by the Institute for Scientific Information.

Epistemic Uncertainty: Computation and Usage

Dr. Laura Swiler, Sandia National Laboratories

This talk will present three approaches used in the propagation of epistemic uncertainty: interval analysis, Dempster-Shafer evidence theory, and probability theory. The talk will present both sampling methods and optimization methods that can be used in these calculations, as well as surrogate models. Additionally, “mixed” epistemic-aleatory uncertainty will be discussed, with an emphasis on efficient methods that go beyond nested sampling. Results will be presented for various test problems. The last section of the talk will cover some history, including the treatment of epistemic uncertainty in the large uncertainty analyses supporting risk assessment for nuclear power plants and waste repositories.
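
As background, a brute-force version of the nested sampling that the talk's efficient methods aim to go beyond might look like the following (our sketch, not the speaker's code; numbers are hypothetical): an outer loop over epistemic parameter values and an inner aleatory Monte Carlo loop, yielding an interval of probabilities rather than one number.

```python
# Minimal sketch: brute-force nested (double-loop) propagation of mixed
# uncertainty. Epistemic: a mean known only to [9, 11]. Aleatory: normal
# scatter about that mean. Output: an envelope of exceedance probabilities.
import numpy as np

rng = np.random.default_rng(0)
threshold = 12.0
p_exceed = []
for mu in np.linspace(9.0, 11.0, 11):        # outer loop: epistemic values
    samples = rng.normal(mu, 1.0, 20_000)    # inner loop: aleatory sampling
    p_exceed.append((samples > threshold).mean())

print(f"P(X > {threshold}) between {min(p_exceed):.4f} and {max(p_exceed):.4f}")
# A nested analysis like this costs (outer x inner) model runs, which is
# exactly the expense that more efficient methods try to avoid.
```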


Dr. Laura P. Swiler is a computational scientist whose research focuses on quantifying the uncertainty associated with predictions from computational models. Her research addresses the question “how much can we infer from as few model runs as possible” given the high cost of running advanced science and engineering models. Particular research areas include experimental design, adaptive sampling algorithms, Bayesian inference, model calibration, and Gaussian process surrogate models. Dr. Swiler has been a staff member at Sandia National Laboratories for 26 years.

Admitting what you don't know: How should we explain and talk about uncertainty?

Risk Literacy and Health

Gerd Gigerenzer, Harding Center for Risk Literacy, University of Potsdam

Efficient and affordable health care requires both informed patients and doctors. Yet studies show that most doctors do not understand health statistics. Health organizations and industries exploit this innumeracy to make small benefits of treatments or screenings appear big and their harms appear small. I will talk about techniques for helping doctors and patients make sense of medical evidence. Promoting risk literacy in health could save more lives than expensive screening programs and Big Data.
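
A standard example of the kind of technique meant here, restating screening statistics as natural frequencies (our illustration, with hypothetical round numbers): a test that is "90% accurate" for a rare condition still produces mostly false positives, which becomes obvious once the numbers are counts of people rather than conditional probabilities.

```python
# Minimal sketch (hypothetical round numbers): natural frequencies make
# conditional probabilities transparent. Out of 1000 people screened:
prevalence = 0.01        # 10 of 1000 have the disease
sensitivity = 0.90       # 9 of those 10 test positive
false_positive = 0.09    # ~89 of the 990 healthy also test positive

n = 1000
sick_pos = n * prevalence * sensitivity               # 9
healthy_pos = n * (1 - prevalence) * false_positive   # 89.1
ppv = sick_pos / (sick_pos + healthy_pos)
print(f"Of {sick_pos + healthy_pos:.0f} positives, only {sick_pos:.0f} are "
      f"true positives: PPV = {ppv:.0%}")   # roughly 9%, not 90%
```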


Gerd Gigerenzer is Director of the Harding Center for Risk Literacy at the University of Potsdam. He is the former director of the Center for Adaptive Behavior and Cognition at the Max Planck Institute for Human Development and at the Max Planck Institute for Psychological Research in Munich, a former professor of psychology at the University of Chicago, and a former John M. Olin Distinguished Visiting Professor at the School of Law of the University of Virginia. Gigerenzer was educated at LMU Munich, where he earned his doctoral degree in psychology. He is a fellow of the German Academy of Sciences, the Berlin-Brandenburg Academy of Sciences, the Cognitive Science Society and the Association for Psychological Science, and a member of the Scientific Council of the European Research Council. He is a recipient of the Allais Memorial Prize in Behavioral Sciences, the Henry Walton Prize of the Association for the Study of Medical Education, the AAAS Prize for the best article in the behavioral sciences, the Association of American Publishers Prize for the best book in the social and behavioral sciences, and the Hufeland Award from the German Foundation for General Medicine (Stiftung Allgemeinmedizin) for improving knowledge and transparency in medicine, and he was named one of the top 100 Global Thought Leaders by the Gottfried Duttweiler Institute. He is an honorary professor at the University of Potsdam and at Humboldt-Universität Berlin.

Ways People Talk About Risk and Uncertainty

Ann Bostrom, University of Washington


Ann Bostrom is the Weyerhaeuser endowed Professor of Environmental Policy at the Daniel J. Evans School of Public Policy and Governance at the University of Washington. Dr. Bostrom holds a Ph.D. in policy analysis from Carnegie Mellon University. She completed postdoctoral studies in Engineering and Public Policy at Carnegie Mellon University and in cognitive aspects of survey methodology at the Bureau of Labor Statistics. From 1999 to 2001 she took leave to co-direct the Decision Risk and Management Science Program at the National Science Foundation (NSF). Bostrom studies mental models of hazardous processes (how people understand and make decisions about risks), and collaborates with interdisciplinary teams to investigate risk perceptions and communication strategies for environmental and health risks, for example, with regard to earthquake early warning, earthquake and tsunami risks, climate change and extreme weather. She is currently working with the United States Geological Survey Social Sciences Working Group on perceptions and preparedness for earthquake early warning, and with the NSF AI Institute for Research on Trustworthy AI in Weather, Climate, and Coastal Oceanography (AI2ES) on risk perception and communication of AI in environmental forecasting. Bostrom is the recipient of the Distinguished Educator Award and of the Chauncy Starr Award from the Society for Risk Analysis, of which she is a Fellow and Past President.

Management of Uncertainty: How are management, compliance and monitoring affected by uncertainties?

Counting on Uncertain Models for Structural Health Monitoring

Professor Eleni Chatzi and Dr. Vasilis Dertimanis, ETH Zürich

The necessity of undertaking preventive measures and developing methodologies for assessing structural performance and safety has rendered Structural Health Monitoring (SHM) a critical paradigm for condition assessment and life-cycle management of engineered systems. SHM harvests information from sensors suitably deployed on structural systems. In recent years, technological advances have provided an abundance of low-cost and easily deployable sensors, delivering diverse information including strains, dynamic response quantities, loads and environmental condition data. When coupled with appropriate models, this information may guide engineers and operators in the effective management of these systems. However, the task of inferring adequate system models and indicators of performance is hindered by so-called polymorphic uncertainties, stemming from a mixture of modeling and measurement imprecisions. Owing to lack of a priori knowledge, damage and deterioration processes, variability of environmental and operational influences, measurement errors, and simplified simulation assumptions, almost every structural system is characterized by uncertainty. The propagation of uncertainty through such a system is a non-trivial task, particularly when the system at hand is described by nonlinear or time-varying dynamics, which adds to the complexity of the governing laws involved. For a number of tasks, however, such as increasing the safety, robustness, resilience and capacity of engineered systems, it is necessary to develop models that are able to encompass the aforementioned uncertainties.
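
As a small illustration of why this propagation is non-trivial (our sketch, with a hypothetical model and numbers): even a single uncertain stiffness parameter in a nonlinear oscillator must be pushed through many simulations before the parameter scatter can be mapped into response scatter.

```python
# Minimal sketch (hypothetical model and numbers): Monte Carlo propagation
# of parameter uncertainty through a nonlinear (Duffing-type) oscillator,
# the kind of model-plus-uncertainty task that SHM inference has to handle.
import numpy as np

rng = np.random.default_rng(0)

def peak_response(k, k3, steps=4000, dt=0.005):
    """Peak displacement under harmonic forcing, semi-implicit Euler."""
    x, v, peak = 0.0, 0.0, 0.0
    for i in range(steps):
        a = np.sin(5.0 * i * dt) - 0.1 * v - k * x - k3 * x**3
        v += a * dt                 # update velocity first ...
        x += v * dt                 # ... then displacement (symplectic)
        peak = max(peak, abs(x))
    return peak

# Stiffness k is uncertain (modeling and measurement imprecision); the
# cubic stiffness k3 is taken as known. Propagate by plain Monte Carlo:
peaks = [peak_response(k=rng.normal(25.0, 2.0), k3=100.0) for _ in range(500)]
print(f"peak displacement: mean {np.mean(peaks):.3f}, "
      f"95th percentile {np.quantile(peaks, 0.95):.3f}")
```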


Eleni Chatzi received her PhD (2010) from the Department of Civil Engineering and Engineering Mechanics at Columbia University, New York. She is currently an Associate Professor and Chair of Structural Mechanics and Monitoring at the Institute of Structural Engineering of the Department of Civil, Environmental and Geomatic Engineering of ETH Zürich. Her research interests include the fields of Structural Health Monitoring (SHM) and structural dynamics, nonlinear system identification, and intelligent life-cycle assessment for engineered systems. She is an author of over 200 papers in peer-reviewed journals and conference proceedings, and further serves as an editor for international journals in the domains of Dynamics and SHM, including the Journal of Sound and Vibration, Structure & Infrastructure Engineering, the Journal of Structural Engineering, Mechanical Systems and Signal Processing, the Journal of Engineering Mechanics, as well as the Sections on Structural Sensing and Computational Methods in Structural Engineering of Frontiers in Built Environment. She is currently leading the ERC Starting Grant WINDMIL on the topic of "Smart Monitoring, Inspection and Life-Cycle Assessment of Wind Turbines". Her work in the domain of self-aware infrastructure was recognized with the 2020 Walter L. Huber Research prize, awarded by the American Society of Civil Engineers (ASCE). She is further recipient of the 2020 EASD Junior Research Prize in the area of Computational Structural Dynamics.


Vasilis Dertimanis was born in Greece. He received a Diploma in Mechanical Engineering from the University of Patras, Greece, and a Ph.D. degree from the National Technical University of Athens (NTUA), Greece, in the area of modeling and identification of faults in mechanical and structural systems. His research interests lie in the areas of structural identification and health monitoring, linear and nonlinear state estimation, active and passive structural control, hybrid testing and optimization. Vasilis has served as a senior researcher in the NTUA Vehicles Laboratory, Machine Design Laboratory and Laboratory for Earthquake Engineering. He has also participated as a Marie Curie experienced researcher in the EU-funded SmartEN ITN project. For more than a decade, he was in parallel self-employed as a freelance engineer and inspector, as an instructor in training seminars on the transportation of dangerous goods by road and rail, and as a measurement engineer and structural vibration analyst. Since January 2014, Vasilis has been a member of the Chair of Structural Mechanics at ETH Zürich, and as of May 2017 he is a Senior Assistant (Oberassistent), actively supporting the Chair in research and teaching.

Counting on Uncertain Models for Structural Health Monitoring

Professor Michael Beer, Leibniz Universität Hannover

Epistemic uncertainties appear across all engineering fields, often to quite a significant extent. Although they can often be described phenomenologically and qualitatively, they resist a rigorous quantitative description, which is needed as a basis for a realistic risk assessment. In the presence of epistemic uncertainties, the specification of a probabilistic model and the associated risk analysis lead to hypothetical results resting on some intuitive guess to capture the influence of the epistemic uncertainty. That is, we quantify risk based on conditions that represent assumptions rather than facts. Such results can be significantly misleading. It is thus of paramount importance to quantify epistemic uncertainties as realistically as possible. This quantification should neither introduce unwarranted information nor neglect information. On this basis there is a clear consensus that epistemic uncertainties need to be taken into account for a realistic assessment of risk and reliability. However, there is no clearly defined procedure to master this challenge. There is rather a variety of concepts and approaches available for dealing with epistemic uncertainties, from which the engineer can choose. This choice is made difficult by the perception that the available concepts are competing and opposed to one another rather than complementary and compatible. Clearly, the first consideration should be devoted to probabilistic modelling, naturally through subjective probabilities, which express a belief of the expert and can be integrated into a fully probabilistic framework in a coherent manner via a Bayesian approach. While this pathway is widely accepted and recognized as being very powerful, the potential of set-theoretical approaches and imprecise probabilities has only been utilized to a minor extent. Those approaches, however, attract increasing attention in cases where the available information is not rich enough to meaningfully specify subjective probability distributions. The presentation will feature models for epistemic uncertainties and highlight their capabilities and added value when used for engineering analysis and design. Illustrative examples are used to explain the respective features. The discussion of the models is complemented by a powerful numerical technology for processing epistemic uncertainties even in very complex and nonlinear engineering analyses. This technology can be used not only for reliability analysis, but also for sensitivity analysis, design, model updating and more.
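
A minimal sketch of the added value referred to (our illustration, with hypothetical numbers): when a reliability model's parameter is an interval rather than a point guess, the failure probability comes out as an interval whose width shows exactly how much of the risk estimate rests on the epistemic assumption.

```python
# Minimal sketch (hypothetical numbers): reliability with an epistemically
# uncertain parameter. Capacity R ~ N(mu_R, 1) with mu_R known only to lie
# in [10, 11]; the load S = 8 is fixed. Optimizing over the interval gives
# bounds on the failure probability instead of one hypothetical value.
from scipy.stats import norm
import numpy as np

load = 8.0
mus = np.linspace(10.0, 11.0, 101)               # epistemic interval for mu_R
pf = norm.cdf(load, loc=mus, scale=1.0)          # P(R < load) for each mu_R

print(f"failure probability in [{pf.min():.4f}, {pf.max():.4f}]")
# A single guessed mu_R would report one number inside this interval and
# hide how much of the risk estimate rests on that assumption.
```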


Michael Beer has been Professor and Head of the Institute for Risk and Reliability at Leibniz Universität Hannover, Germany, since 2015. He is also a part-time Professor at the Institute for Risk and Uncertainty, University of Liverpool, and at the Shanghai Institute of Disaster Prevention and Relief, Tongji University, China. He obtained a doctoral degree from the Technische Universität Dresden and pursued research at Rice University, supported by a Feodor Lynen Fellowship from the Alexander von Humboldt Foundation. From 2007 to 2011 Michael worked as an Assistant Professor at the National University of Singapore. In 2011 he joined the University of Liverpool as Chair in Uncertainty in Engineering and Founding Director of the Institute for Risk and Uncertainty. He is serving on the Board of Directors (2020-2028) of the International Association for Probabilistic Safety Assessment and Management, and he is a Co-Chair (2020–2023) of the Risk and Resilience Measurements Committee (RRMC) of the ASCE Infrastructure Resilience Division. Michael’s research is focused on non-traditional uncertainty models in engineering with emphasis on reliability and risk analysis.