Bayes Days

Bayes Days is a convocation of mutual support for Bayesians and would-be Bayesians who have been stymied by intractable problems, wonky software, problematic data, or just their own inexperience.

Welcome to Bayes Days, organized by the Institute for Risk and Uncertainty at the University of Liverpool and co-sponsored by the Society for Risk Analysis. The Institute and its Centre for Doctoral Training are supported by the Engineering and Physical Sciences Research Council and the Economic and Social Research Council of the United Kingdom.

Presentations with flawed or preliminary Bayesian analyses are welcomed; presenters self-label their problems as "n00b needs help", "technical issues", "philosophical puzzle", "my analysis is perfect", or some other descriptive phrase to indicate the nature of their experience and difficulty. Presenters get audience feedback, off-the-cuff suggestions, and the chance to meet possible collaborators. Several longer tutorials are also offered on the second day.

Bayes Days 2019


Monday 21st October

09:00   Introduction to Bayes Days
09:30 Ullrika Sahlin Tutorial: To Bayes or Bootstrap - that is the question
11:00 Alex Sutton & Rhiannon Owen Introduction to Bayesian Meta-Analysis
12:00   Lunch Break
13:00 Alex Sutton Tutorial: Bayesian Meta-Analysis
14:30 Diego Estrada-Lugo Modelling Logical Statements with Credal Networks
15:30 David Hughes Modelling Multiple Longitudinal Markers in Large Datasets using Variational Bayes

Tuesday 22nd October

10:00 Yu Chen TBA
11:00 Anna Auer-Fowler A Bayesian mixture model for inferring B cell specificity for vaccines
12:00   Lunch Break
13:00 Peter Hristov Probabilistic Numerics
14:00 Leonel Hernandez TBA
15:00 Brendan McCabe Approximate Bayesian Forecasting

Tutorials featured in Bayes Days 2019

How important is the organization of the data?

Leonel Hernandez

Any data analysis starts with how the data are organized, which can determine whether the data collected are useful at all: the input data of any probabilistic or machine learning technique is key to obtaining the expected results. This work presents a machine learning approach to the reliability and safety of electronic systems, focusing on accurate prognosis of remaining useful lifetime (RUL). In the tests performed, the method used is support vector regression (SVR), which learns from electronic signals and was chosen over alternatives such as neural networks. The RUL prognoses achieved an accuracy of practically 100%, which raises the question of whether a Bayesian approach could offer better predictions, or whether the answer depends on the case at hand.
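As a rough sketch of the kind of pipeline described above (the degradation data, the "health indicator" feature, and all model settings here are invented for illustration, not taken from this work), an SVR learning remaining useful lifetime from a noisy signal might look like:

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)

# Invented degradation data: remaining useful life (RUL) decays with a
# noisy scalar "health indicator" in [0, 1]
health = rng.uniform(0.0, 1.0, 300)
rul = 100.0 * (1.0 - health) + rng.normal(0.0, 2.0, 300)

# SVR with an RBF kernel learns the RUL from the signal
model = SVR(kernel="rbf", C=100.0, epsilon=0.5)
model.fit(health.reshape(-1, 1), rul)

# Predicted RUL at mid-life (true value about 50 by construction)
pred_mid = model.predict(np.array([[0.5]]))[0]
```

A Bayesian alternative would return a distribution over the RUL rather than this single point estimate, which is exactly the comparison the abstract invites.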

To Bayes or Bootstrap - that is the question

Ullrika Sahlin

This tutorial compares Bayesian inference with the bootstrap. The bootstrap is a method frequentists often use to obtain estimation errors or probability intervals for their estimates when these do not follow automatically from the analysis; in other words, it supplies the things you can always get from Bayesian inference. I will discuss in what ways it may be easier or more difficult to embrace the bootstrap compared to a Bayesian analysis, using general mixed models for illustration. My conclusion is that Bayesian inference is a coherent principle for inference and is more straightforward to implement than a bootstrap added to a frequentist analysis.
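A minimal illustration of the two routes to an interval (a toy example, not from the tutorial): a bootstrap percentile interval for a mean versus a Bayesian credible interval from a conjugate normal model with the noise scale assumed known.

```python
import numpy as np

rng = np.random.default_rng(1)
data = rng.normal(loc=5.0, scale=2.0, size=50)

# Bootstrap: resample the data with replacement and recompute the mean
boot_means = np.array([rng.choice(data, size=data.size, replace=True).mean()
                       for _ in range(5000)])
boot_lo, boot_hi = np.percentile(boot_means, [2.5, 97.5])

# Bayesian: with a flat prior and known sigma, the posterior of the mean
# is Normal(xbar, sigma^2 / n)
sigma = 2.0
post_sd = sigma / np.sqrt(data.size)
bayes_lo, bayes_hi = data.mean() + np.array([-1.96, 1.96]) * post_sd
```

In this idealised case the two intervals nearly coincide; the differences the tutorial discusses show up once the model is less convenient than a known-variance normal.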

Probabilistic Numerics

Peter Hristov

Numerical algorithms, such as methods for the numerical solution of integrals and ordinary differential equations, can be interpreted as estimation rules. They estimate the value of an unobservable quantity given the result of tractable computations. Thus, these methods perform inference and are consequently accessible to the formal frameworks of probability theory. A probabilistic numerical method is a numerical algorithm that takes in a probability distribution over its inputs and returns a probability distribution over its outputs. This talk introduces the field of probabilistic numerics and some of its applications.
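A toy version of the "integration as inference" idea (my own illustration, not from the talk): a Monte Carlo estimate of an integral reported not as a bare number but with a spread describing the belief about the true value.

```python
import numpy as np

rng = np.random.default_rng(0)

# Estimate I = integral of exp(x) over [0, 1], whose true value is e - 1
n = 10_000
xs = rng.uniform(0.0, 1.0, n)
fs = np.exp(xs)

estimate = fs.mean()                       # point estimate of the integral
uncertainty = fs.std(ddof=1) / np.sqrt(n)  # spread of a posterior-style belief
```

A full probabilistic numerical method (e.g. Bayesian quadrature) would also encode smoothness assumptions about the integrand, but the output has the same shape: a distribution over the answer, not just the answer.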

Modelling logical statements with credal networks

Diego Estrada-Lugo

Fault Tree Analysis (FTA) is a deductive method that traces an undesired event by means of Boolean logic. It has been used for decades in reliability analyses of complex engineering systems, in the nuclear and oil industries among others. Its popularity rests on the logical approach with which the failure event is studied and on the relatively small amount of data required to model the events and components of the system. In this way, the failure probability of the so-called Top Event can be predicted and studied with relative ease.
Despite FTA's success, several limitations restrict the scope of the analysis, and modellers have had to explore other options. One of them is the Bayesian network (BN), or its imprecise-probability version, the credal network (CN): a probabilistic graphical model for studying and understanding the dependencies of events carrying epistemic uncertainty. The method allows one to perform not only prognostic analysis (like FTA) but also diagnostic analysis of specific components in the system. Furthermore, CNs can incorporate new information, in the form of evidence, to update the probability distributions of the model. Yet despite the significant advantages of CNs over FTA, the latter is still very widely used.
In this presentation, the principles of both methods are presented and compared by means of didactic examples and analogies. To facilitate the discussion, we will ask questions such as: why is FTA still widely used when options like CNs can produce more general analyses? How much complexity makes a tool too hard to implement in industry?
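The prognostic/diagnostic distinction can be made concrete with a tiny invented example (a two-component system under an OR gate; all probabilities here are made up): the fault-tree direction computes the Top Event probability, the Bayesian-network direction reverses it with Bayes' rule, and the credal extension replaces point probabilities with intervals.

```python
from itertools import product

# Hypothetical two-component system: Top Event T = A OR B
p_a, p_b = 0.01, 0.02  # assumed basic-event failure probabilities

# Prognostic (fault-tree style): probability of the Top Event
p_top = 1 - (1 - p_a) * (1 - p_b)

# Diagnostic (Bayesian-network style): P(A failed | Top Event occurred).
# Under an OR gate, A failing always implies T, so P(A and T) = P(A).
p_a_given_top = p_a / p_top

# Credal extension: interval-valued inputs give bounds on the answer.
# The OR gate is monotone in each input, so checking the endpoints suffices.
bounds_a, bounds_b = (0.005, 0.02), (0.01, 0.03)
tops = [1 - (1 - a) * (1 - b) for a, b in product(bounds_a, bounds_b)]
p_top_lo, p_top_hi = min(tops), max(tops)
```

The diagnostic query is the one FTA cannot express directly, and the interval output is what distinguishes a credal network from an ordinary Bayesian network.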

Modelling Multiple Longitudinal Markers in Large Datasets using Variational Bayes

David Hughes

Data on multiple variables are now collected routinely at repeated visits in many clinical research areas. Mixed models are a popular way of analysing trends in longitudinal data. There is now increasing access to very large datasets, with both large numbers of patients, and large numbers of predictors. In these settings the current modelling approaches for mixed models can be very computationally intensive.
Mean Field Variational Bayes (MFVB) is a technique commonly used in computer science which aims to find approximations to the posterior distribution of Bayesian models. In many settings these approximations have been shown to be very accurate and can be calculated in a fraction of the time required for equivalent MCMC analyses.
In this talk we describe how MFVB approaches can be used to estimate Bayesian multivariate mixed models. I'll discuss some of the computational obstacles I've faced and my current attempts to overcome them, and hopefully give the audience the opportunity to suggest better ones! We compare the time taken to estimate the models, and the accuracy of the parameter estimates, through a variety of simulation studies and through application to a large diabetes dataset aiming to model variables linked to the risk of developing Sight Threatening Diabetic Retinopathy.
The MFVB approach for Bayesian Mixed Models allows fast and approximate inference to be made in a variety of clinical models.
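To show the flavour of MFVB on something far simpler than the multivariate mixed models of the talk: the classic coordinate-ascent updates for a normal model with unknown mean and precision (Gaussian likelihood, Normal-Gamma prior; all prior values here are arbitrary choices for illustration).

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(2.0, 1.5, 500)          # synthetic data, mean 2, sd 1.5
n, xbar, sx2 = x.size, x.mean(), (x ** 2).sum()

# Priors: mu ~ N(mu0, 1/(lam0*tau)), tau ~ Gamma(a0, b0)  (values assumed)
mu0, lam0, a0, b0 = 0.0, 1.0, 1.0, 1.0

# Coordinate ascent: alternately update q(mu) = N(mu_n, 1/lam_n)
# and q(tau) = Gamma(a_n, b_n), each holding the other factor fixed
E_tau = 1.0
for _ in range(50):
    mu_n = (lam0 * mu0 + n * xbar) / (lam0 + n)
    lam_n = (lam0 + n) * E_tau
    # E_q[sum_i (x_i - mu)^2 + lam0*(mu - mu0)^2] under the current q(mu)
    exp_quad = (sx2 - 2 * mu_n * x.sum() + n * (mu_n ** 2 + 1 / lam_n)
                + lam0 * ((mu_n - mu0) ** 2 + 1 / lam_n))
    a_n = a0 + (n + 1) / 2
    b_n = b0 + 0.5 * exp_quad
    E_tau = a_n / b_n
```

Each update is a closed-form expression, which is why MFVB can be orders of magnitude faster than MCMC when the factorised approximation is adequate.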

Bayes Days 2017


Friday 22nd September 2017

09:00 Sara Owczarczak-Garstecka What can YouTube tell us about dog bites? Analysis of dog bites videos
09:30 Uchenna Oparaji Software problems in Python
10:00 Alfredo Garbuno Setting up a Bayesian machine
10:30   Break
11:00 Roberto Rocchetta Bayesian model updating
11:30 Hector Diego Estrada-Lugo Extreme climate events studied with a Bayesian network approach: application to a general hydropower station
12:00 Andrea Verzobio Mechanical equivalent of logical inference
12:30 Scott Ferson Bayes' rule in medical counselling: implications for kindergarteners' cooties
13:00   Lunch
14:00   General discussion
14:30 Diego Andres Perez Ruiz Bayes classifiers for functional data
15:00 Pete Green Estimating the parameters of dynamical systems from large data sets using sequential Monte Carlo
15:30   Discussion
16:00 Diego Andres Perez Ruiz Tutorial: Bayesian linear regression and hierarchical models

Saturday 23rd September 2017

10:00 Alejandro Diaz de la O Tutorial: Prior knowledge, proportions and probabilities of probabilities
11:30   Break
12:00 Ullrika Sahlin Tutorial: Quantifying uncertainty using data and experts
13:30   Lunch
14:00 Brendan McCabe Tutorial: Approximate Bayesian computation (ABC)
15:30 Peter Green Tutorial: Generating samples from strange probability distributions

Tutorials featured in Bayes Days 2017

Prior knowledge, proportions and probabilities of probabilities

Alejandro Diaz, Institute for Risk and Uncertainty, University of Liverpool, UK

Which proportion is higher: 5 out of 10, or 400 out of 1000? The answer seems obvious. But if these proportions were numbers of successes divided by numbers of trials, would you still think the same? Is a baseball player who achieved 5 hits out of 10 chances throughout his career better than one who achieved 400 hits out of 1000 chances? In this introductory tutorial, we will see how Bayesian inference helps us add context in order to make decisions. The key will reside in representing prior knowledge using a probability distribution for probabilities: the famous and elegant beta distribution.
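The batting example can be worked out in a few lines of beta-binomial conjugacy (the prior values below are a hypothetical choice encoding "typical batting averages are around 0.27", not anything prescribed by the tutorial):

```python
# Shared prior: Beta(81, 219), mean 81/300 = 0.27 (values hypothetical)
a, b = 81.0, 219.0

def posterior_mean(hits, trials):
    # Conjugacy: Beta(a, b) prior + binomial data -> Beta(a+hits, b+misses),
    # whose mean is (a + hits) / (a + b + trials)
    return (a + hits) / (a + b + trials)

p_short = posterior_mean(5, 10)      # career of 5 hits in 10 chances
p_long = posterior_mean(400, 1000)   # career of 400 hits in 1000 chances
```

With the prior in place, 5/10 barely moves the estimate off 0.27, while 400/1000 pulls it most of the way to 0.40: the longer career wins even though its raw proportion is lower.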

Bayesian linear regression and hierarchical models

Alfredo Garbuno, Institute for Risk and Uncertainty, University of Liverpool, UK

Bayesian data analysis allows researchers to conduct probabilistic inference about non-observable quantities in a statistical model. This introductory workshop is aimed at those interested in applying the Bayesian paradigm in their data analysis tasks.
The tutorial will start with Bayesian linear regression models and will provide guidelines for probabilistic enhancement of the model's complexity. This improvement will lead to the hierarchical regression model, in which the Bayesian paradigm allows for a more flexible model whilst providing a natural mechanism to prevent over-fitting. The session will present a classical Bayesian regression problem which can be followed through Python notebooks.
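The starting point of the tutorial can be sketched as follows (a minimal conjugate example with the noise level assumed known; the data and prior scale are invented): with a Gaussian prior on the weights, the posterior is Gaussian with closed-form mean and covariance.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: y = 1 + 2x + noise
n = 200
X = np.column_stack([np.ones(n), rng.uniform(-1.0, 1.0, n)])
true_w = np.array([1.0, 2.0])
sigma = 0.5
y = X @ true_w + rng.normal(0.0, sigma, n)

# Prior w ~ N(0, tau^2 I); with known sigma the posterior is Gaussian:
#   precision A = X'X/sigma^2 + I/tau^2,  mean = A^{-1} X'y / sigma^2
tau = 10.0
A = X.T @ X / sigma**2 + np.eye(2) / tau**2
w_cov = np.linalg.inv(A)
w_mean = w_cov @ X.T @ y / sigma**2
```

The hierarchical extension of the tutorial replaces the fixed `tau` with its own prior, letting the data decide how strongly to shrink the weights.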

Quantifying uncertainty using data and experts

Ullrika Sahlin, Centre for Environmental and Climate Research, Lund University, SE

This tutorial introduces some basic principles of quantifying uncertainty with Bayesian probability. I will demonstrate a way to quantify uncertainty by integrating experts' knowledge and data. Participants can follow practical examples in R, using existing R packages for expert elicitation (SHELF) and for sampling from the posterior (rjags, which requires JAGS). The first example is a simple risk classification problem with sparse information and several experts with differing judgements. The second example is the familiar task of quantifying uncertainty in the input parameters of an assessment model using different sources of information, where uncertainty in the assessment output matters.
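One simple way to combine differing expert judgements with data, sketched here in Python rather than the R/SHELF/rjags stack used in the tutorial (the expert priors, weights, and data are all invented): pool the experts as a mixture of beta distributions, then let the data reweight the experts via their marginal likelihoods.

```python
import math

def log_beta(a, b):
    # log of the Beta function, via log-gamma for numerical stability
    return math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b)

# Two hypothetical experts, encoded as Beta priors on a proportion:
# one expects it to be low, the other high; equal prior weights
experts = [(2.0, 8.0), (8.0, 2.0)]
weights = [0.5, 0.5]

s, n = 3, 20  # observed successes and trials (invented data)

# Posterior is again a beta mixture; each expert's weight is multiplied
# by the marginal likelihood of the data under that expert's prior
log_ml = [log_beta(a + s, b + n - s) - log_beta(a, b) for a, b in experts]
raw = [w * math.exp(l) for w, l in zip(weights, log_ml)]
post_w = [r / sum(raw) for r in raw]
post_mean = sum(w * (a + s) / (a + b + n)
                for w, (a, b) in zip(post_w, experts))
```

With 3 successes in 20 trials, the data favour the "low proportion" expert, so that expert's weight grows; SHELF-style elicitation is about obtaining the prior shapes themselves from the experts in a structured way.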

Approximate Bayesian computation (ABC)

Brendan McCabe, Economics, Finance and Accounting, University of Liverpool, UK

This tutorial looks at how to do Bayesian inference when it is too difficult to calculate the true likelihood, and hence the exact posterior. (This is really a Bayesian version of frequentist 'indirect inference'.) We use model-based summary statistics to match simulations from the assumed (difficult) model with the actual data at hand. Conventional approaches to ABC emphasise the role of parameter estimation but, in time series problems, forecasting is often the focus of attention, and so it is to this dimension we direct our efforts. The role of Bayesian consistency is highlighted.
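The simplest form of the idea is rejection ABC, sketched below on a toy model whose likelihood is actually tractable (so the example is purely illustrative): draw parameters from the prior, simulate data, and keep the draws whose summary statistic lands close to the observed one.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Observed" data from a normal with unknown mean; pretend the likelihood
# is intractable and match on the sample mean as the summary statistic
obs = rng.normal(3.0, 1.0, 100)
s_obs = obs.mean()

accepted = []
for _ in range(20_000):
    theta = rng.uniform(-10.0, 10.0)       # draw from the prior
    sim = rng.normal(theta, 1.0, 100)      # simulate from the model
    if abs(sim.mean() - s_obs) < 0.05:     # accept if summaries are close
        accepted.append(theta)

post = np.array(accepted)  # approximate posterior sample for theta
```

The accepted draws approximate the posterior; shrinking the tolerance improves the approximation at the cost of fewer acceptances, which is the central trade-off the tutorial builds on.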

Generating samples from strange probability distributions

Peter Green, Institute for Risk and Uncertainty, University of Liverpool, UK

When conducting a probabilistic analysis, we often end up having to generate samples from a probability distribution. This, for example, is a crucial part of Monte Carlo simulations. For better-known probability distributions (Gaussian, uniform etc.), some simple tricks allow us to generate samples without too much difficulty. For the more ‘strange-looking’ distributions – which commonly arise in a Bayesian analysis – the problem becomes more difficult. This tutorial describes methods which can be used to generate samples from generic probability distributions. They often form an essential part of a Bayesian analysis. The tutorial is aimed at beginners, and will cover basic sampling algorithms before describing Markov chain Monte Carlo (MCMC) and importance sampling algorithms. Sample Matlab code will also be provided.
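The workhorse for such 'strange-looking' targets is the Metropolis random-walk sampler covered in the tutorial; here is a Python analogue of the kind of sample code described (the bimodal target below is an invented example), needing only the unnormalised density:

```python
import numpy as np

rng = np.random.default_rng(0)

def log_target(x):
    # Unnormalised log-density: an equal mixture of N(-2, 1) and N(2, 1),
    # a "strange" bimodal shape with no off-the-shelf sampler
    return np.logaddexp(-0.5 * (x - 2.0) ** 2, -0.5 * (x + 2.0) ** 2)

x, chain = 0.0, []
for _ in range(50_000):
    prop = x + rng.normal(0.0, 2.0)      # random-walk proposal
    # Accept with probability min(1, target(prop)/target(x))
    if np.log(rng.uniform()) < log_target(prop) - log_target(x):
        x = prop
    chain.append(x)

samples = np.array(chain[5_000:])        # discard burn-in
```

Only density ratios appear in the acceptance step, so the unknown normalising constant of a Bayesian posterior never needs to be computed, which is exactly why MCMC is so useful there.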


Bayes Days is held in the Risk Institute Seminar Room in the Chadwick Building at the end of Peach Street in the heart of the University of Liverpool campus. Use the south entrance to the Chadwick Building; other entrances have no access to the Risk Institute. When you enter the building, you'll see the Muspratt Lecture Theatre. Turn left, go through the brown door, and follow the signs to the Risk Institute Seminar Room.

Chadwick Building
University of Liverpool
Latitude: 53.404110 / 53°24'14.8"N,
Longitude: -2.964600 / 2°57'52.6"W
What3words: curiosity.memory.daring

Postal Address:
Institute for Risk and Uncertainty, University of Liverpool
Peach Street, Chadwick Building
L7 7BD