Machine Learning

Risk Institute Winter School

29th - 31st January 2020

The Institute for Risk and Uncertainty at the University of Liverpool is to host a multidisciplinary symposium and workshop on uncertainty in image processing. Over three days, the symposium will introduce and explore issues from multiple perspectives, from uses in biomedical research to remote sensing and computer vision. Envisioned as a collaborative event, the symposium will feature a series of talks aimed at fostering collaboration, detailing common problems, and sharing solutions.

The conference is open to everyone free of charge and will include training and tutorial workshops aimed at both introductory and intermediate-level audiences.

If you would like to make an oral or poster presentation, send an abstract (300 words or less) along with your name, affiliation, email address, and phone number.

If you would like to offer a training workshop, send a description (less than a page) explaining the content, intended audience, required resources, planned duration (2 to 4 hours), and maximum number of participants.

Schedule (tentative)

Wednesday 29th January 2020

13:00 - 15:00 Dr Emma Robinson Muspratt Lecture Theatre
15:15 - 15:45 Antonis Alexiadis Muspratt Lecture Theatre

Thursday 30th January 2020

09:30 - 11:30 Dr Alexey Melnikov Foresight Centre (Thornton Room)
14:00 - 16:00 Dr Yalin Zheng Foresight Centre (Thornton Room)

Friday 31st January 2020

09:30 - 11:30 Dr Lauro Snidaro Foresight Centre (Thornton Room)
13:00 - 13:30 Prof Scott Ferson Foresight Centre (Thornton Room)
13:30 - 14:00 Dr Ardern Hulme-Beaman Foresight Centre (Thornton Room)


Dr Emma Robinson

King's College, London

Dr Robinson's research focuses on the development of computational methods for brain imaging analysis, and covers a wide range of image processing and machine learning topics. Most notably, her software for cortical surface registration (Multimodal Surface Matching, MSM) has been central to the development of the Human Connectome Project’s “Multi-modal parcellation of the Human Cortex“, and has featured as a central tenet in the HCP’s paradigm for neuroimage analysis.

Multi-modal surface matching frameworks
Machine Learning is starting to have a significant impact on medical imaging and radiology (for example in cancer detection). However, methods for diagnosis and modelling of cognitive disorders still face significant challenges. Human brains display considerable natural variation in shape and functional organisation, which is difficult to model and drowns out clinical signals of interest. In this session I will discuss techniques for addressing this. First, I will present MSM, a discrete optimisation tool for learning spatial correspondences between surface-mesh models of the brain, which has led to advances in modelling cortical organisation (Glasser, Nature 2016). Second, I will touch on the potential of Deep Learning to further increase the sensitivity and specificity of models of neurological development and impairment. I will finish with a tutorial on how to assess and use cortical surface imaging Big Data and processing pipelines.

Dr Alexey Melnikov

University of Basel, Switzerland

Dr Melnikov is a Postdoctoral Research Scientist in Theoretical Physics, working in Basel, Switzerland, on quantum machine learning. His current interests are mainly in applying reinforcement learning techniques to physics problems, and in studying the advantages of quantum-enhanced reinforcement learning agents.

Interplay between machine learning and quantum physics
What is the role of quantum physics in machine learning and, vice versa, the role of machine learning in quantum physics? In this lecture, both questions will be addressed by connecting quantum physics and machine learning in different ways.

I will talk about reinforcement learning agents and present a physics-inspired model called projective simulation, demonstrating how the deliberation of agents can be sped up via a quantum walk process. I will show how the same reinforcement learning agent is now used to design quantum optics experiments and quantum networks, and to test the non-locality of correlations. I will finish by showing how we improve our understanding of quantum walk advantages with computer vision.
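To give a flavour of the quantum-walk speed-up mentioned above, the sketch below simulates a standard discrete-time quantum walk on a line with a Hadamard coin and compares its spread with that of a classical random walk. This is a generic textbook construction, not Dr Melnikov's projective simulation model itself; the step count and all names are illustrative.

```python
import numpy as np

def hadamard_walk(steps):
    """Discrete-time quantum walk on a line with a Hadamard coin.
    amp[p, c] holds the amplitude at position index p with coin state c."""
    n = 2 * steps + 1
    amp = np.zeros((n, 2), dtype=complex)
    amp[steps, 0] = 1.0                      # walker starts at the origin, coin "up"
    h = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    for _ in range(steps):
        amp = amp @ h.T                      # apply the coin operator at every site
        shifted = np.zeros_like(amp)
        shifted[1:, 0] = amp[:-1, 0]         # coin-up amplitude steps right
        shifted[:-1, 1] = amp[1:, 1]         # coin-down amplitude steps left
        amp = shifted
    return (np.abs(amp) ** 2).sum(axis=1)    # probability at each position

steps = 50
probs = hadamard_walk(steps)
positions = np.arange(-steps, steps + 1)
mean = (probs * positions).sum()
quantum_std = np.sqrt((probs * positions**2).sum() - mean**2)
classical_std = np.sqrt(steps)  # std of an unbiased classical random walk
print(quantum_std, classical_std)
```

The quantum walk's standard deviation grows linearly with the number of steps (ballistic spreading), whereas the classical walk's grows only as the square root; this quadratic gap is the kind of advantage exploited when speeding up an agent's deliberation.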

Dr Lauro Snidaro

University of Udine, Italy

Dr Lauro Snidaro is currently an Associate Professor in Computer Science at the University of Udine. His research interests include data fusion, computer vision, and artificial neural networks.

Computer Vision: the past and the “uncertain” future
The talk will take the audience from “old-style” Computer Vision, based on manual feature engineering, to present-day workflows based on Deep Learning.
The role of uncertainty in past and current Computer Vision techniques will be discussed, along with future perspectives.
Examples of a typical processing pipeline for Visual Surveillance will be presented and compared to current approaches. The key role of uncertainty for data fusion will also be highlighted in a real example from autonomous driving.

Dr Ardern Hulme-Beaman

University of Liverpool

Dr Ardern Hulme-Beaman studies the evolution of animals in association with humans, identifying whether changes in their feeding behaviour are related to human activity or to natural changes in the environment. This involves working with photogrammetry, a family of methods that reconstruct 3D virtual models from multiple 2D photographs, building models for both research and archival purposes.

Imaging and geometric morphometric analyses in archaeology for the quantification of shape
2D and 3D shape analyses (geometric morphometrics) are becoming an increasingly important part of archaeological research. These analyses range from tracking the shape change of skeletal elements of domestic species to regional differences in the shape of man-made objects such as stone tools. 2D and 3D approaches each have pros and cons, and the approach taken is often determined by the subject material and the scope of the project. 2D approaches face the most complex issues, such as inter-observer and inter-equipment error, whereas 3D approaches are usually constrained by logistics: data capture time and processing time. 2D image capture therefore needs to be highly controlled. Central to both analyses is that, once photographed, Cartesian coordinates (a.k.a. landmarks) must be recorded from the 2D image or the 3D model, usually manually, to capture shape in an analysable form. This is a lengthy process and subject to inter-observer variation. Automation of various aspects of the process is desirable, though difficult.
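To illustrate how recorded landmarks capture shape in an analysable form, the sketch below performs an ordinary Procrustes alignment, the standard first step in geometric morphometrics that removes differences of position, size, and orientation before shapes are compared. This is a generic textbook method, not Dr Hulme-Beaman's specific pipeline; the square of landmarks is a toy example.

```python
import numpy as np

def procrustes_align(ref, target):
    """Ordinary Procrustes alignment: translate, scale, and rotate the
    `target` landmark configuration (n x 2) onto `ref`, returning the copy."""
    # Centre both configurations at the origin (removes translation)
    ref_c = ref - ref.mean(axis=0)
    tgt_c = target - target.mean(axis=0)
    # Scale both to unit centroid size (removes size)
    ref_c = ref_c / np.linalg.norm(ref_c)
    tgt_c = tgt_c / np.linalg.norm(tgt_c)
    # Optimal rotation from the SVD of the cross-covariance matrix
    u, _, vt = np.linalg.svd(tgt_c.T @ ref_c)
    rotation = u @ vt
    return tgt_c @ rotation

# Toy example: a square of landmarks, re-digitised after a 30-degree rotation,
# aligns back onto the reference with essentially zero shape difference.
square = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
theta = np.deg2rad(30)
rot = np.array([[np.cos(theta), -np.sin(theta)],
                [np.sin(theta),  np.cos(theta)]])
rotated = square @ rot.T
aligned = procrustes_align(square, rotated)
ref_norm = square - square.mean(axis=0)
ref_norm = ref_norm / np.linalg.norm(ref_norm)
print(np.allclose(aligned, ref_norm))  # prints True
```

After alignment, the residual coordinate differences between configurations are pure shape variation, which is what downstream statistical analyses operate on.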

Prof Scott Ferson

Liverpool Institute for Risk and Uncertainty

Scott Ferson is director of the Institute for Risk and Uncertainty at the University of Liverpool in the UK. For many years he was senior scientist at Applied Biomathematics in New York and taught risk analysis at Stony Brook University. He has over a hundred publications, mostly in risk analysis and uncertainty propagation, and is a fellow of the Society for Risk Analysis. His recent research, funded mostly by NIH and NASA, focuses on reliable statistical tools when empirical information is very sparse, and distribution-free methods for risk analysis.

Error propagation in shape analysis
Shape analysis is a special problem in feature extraction from images often used to recognise outlines, tracks, silhouettes, signatures, symbols, ordered points, etc. Quantitative methods that can characterise shapes do not comprehensively address measurement uncertainties arising from blurred or missing imagery, registration errors, pixel resolution, ambiguity about landmarks, and deformation of non-rigid objects by stretching, twisting or drooping during image capture. Such imprecision can be characterised using interval methods and projected through intervalized elliptic Fourier analysis and statistical methods such as t-tests and discriminations. The approach also allows us to assess quantitatively how close one shape is to another and to display an array of shapes that are as close to a given shape as a test shape is.
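The idea of characterising such imprecision with intervals can be sketched with a toy example: below, a hypothetical half-pixel imprecision `eps` on each landmark coordinate is propagated through a Euclidean distance using elementary interval arithmetic. This illustrates the general interval approach only, not the intervalized elliptic Fourier analysis itself; all names and the value of `eps` are illustrative.

```python
import math

# Each measured coordinate is represented as an interval (lo, hi).
def interval_sub(a, b):
    """[a] - [b] = [a.lo - b.hi, a.hi - b.lo]"""
    return (a[0] - b[1], a[1] - b[0])

def interval_add(a, b):
    return (a[0] + b[0], a[1] + b[1])

def interval_sq(a):
    """Square of an interval, handling intervals that straddle zero."""
    lo, hi = a
    ends = (lo * lo, hi * hi)
    low = 0.0 if lo <= 0.0 <= hi else min(ends)
    return (low, max(ends))

def interval_sqrt(a):
    return (math.sqrt(a[0]), math.sqrt(a[1]))

def interval_distance(p, q):
    """Euclidean distance between 2-D points whose coordinates are intervals."""
    dx2 = interval_sq(interval_sub(p[0], q[0]))
    dy2 = interval_sq(interval_sub(p[1], q[1]))
    return interval_sqrt(interval_add(dx2, dy2))

eps = 0.5  # hypothetical half-pixel imprecision on every coordinate
p = ((0 - eps, 0 + eps), (0 - eps, 0 + eps))
q = ((3 - eps, 3 + eps), (4 - eps, 4 + eps))
lo, hi = interval_distance(p, q)
print(lo, hi)  # the crisp distance 5.0 is guaranteed to lie inside [lo, hi]
```

The resulting interval rigorously encloses every distance consistent with the measurement imprecision, which is the guarantee that interval methods carry through more elaborate analyses such as intervalized Fourier descriptors.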

Dr Yalin Zheng

University of Liverpool

Dr Yalin Zheng is a Reader in Ophthalmic Imaging at the University of Liverpool with extensive experience in artificial intelligence, machine learning, computer vision, image processing, and medical image analysis. His team is working on intelligent imaging solutions to address healthcare challenges, with a focus on eye disease.

Deep learning for image analysis and their biomedical applications
In this tutorial, I will first provide a brief overview of artificial intelligence and recent advances in deep learning. I will then focus on convolutional neural networks (CNNs), which have been widely used for computer vision and image analysis tasks; standard network architectures and advanced building blocks of CNNs will be introduced. Finally, I will present some of our recent progress in biomedical applications of CNNs, followed by a summary of the tutorial.
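As background for the tutorial, the sketch below implements the core CNN building block, a 2D convolution (strictly, cross-correlation) followed by a ReLU activation, in plain NumPy, and applies a hand-written vertical-edge kernel to a toy image. In a trained CNN the kernel weights would be learned from data rather than fixed; everything here is illustrative.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D cross-correlation: slide the kernel over the image and
    take the element-wise product-sum at each location."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    """Rectified linear activation, applied element-wise after convolution."""
    return np.maximum(x, 0.0)

# Toy image: dark left half, bright right half.
image = np.zeros((5, 5))
image[:, 3:] = 1.0
# A hand-crafted vertical-edge kernel responds where intensity rises left to right.
edge_kernel = np.array([[-1.0, 1.0]])
feature_map = relu(conv2d(image, edge_kernel))
print(feature_map)  # a single bright column marks the edge location
```

Stacking many such learned filters, interleaved with nonlinearities and pooling, is what gives standard CNN architectures their ability to build up from edges to complex image features.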