Nick Gray:
Why Risk and Uncertainty are Key for Humane Algorithms

  Watch on YouTube

Algorithms have no idea of the significance of the calculations they perform. They mindlessly output the results of complex mathematical operations, often requiring untenable assumptions, irrespective of the risk posed by even simple errors. Humane algorithms need to provide meaningful information about why a particular decision has been made and, when an error is encountered, fail in a way that lets a human overseer handle the uncertainties without increasing the risk. I argue that such an algorithm must be able to balance the uncertainties and risks that govern even simple problems. This can be achieved by carefully considering the different types of uncertainty present and by utilising the framework of imprecise probabilities when making calculations and analysing results.
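To make the idea concrete, here is a minimal sketch (my own illustration, not code from the talk) of one building block of imprecise probabilities: representing a probability as an interval [lo, hi] rather than a single number, and combining two events whose dependence is unknown via the classical Fréchet bounds. The class name `IntervalProb` and the sensor scenario are invented for this example.

```python
from dataclasses import dataclass

@dataclass
class IntervalProb:
    """A probability known only to lie within [lo, hi]."""
    lo: float  # lower bound on the probability
    hi: float  # upper bound on the probability

    def __and__(self, other):
        # Frechet bounds for P(A and B) with NO assumption about dependence:
        #   max(0, P(A) + P(B) - 1) <= P(A and B) <= min(P(A), P(B))
        return IntervalProb(max(0.0, self.lo + other.lo - 1.0),
                            min(self.hi, other.hi))

    def __or__(self, other):
        # Frechet bounds for P(A or B):
        #   max(P(A), P(B)) <= P(A or B) <= min(1, P(A) + P(B))
        return IntervalProb(max(self.lo, other.lo),
                            min(1.0, self.hi + other.hi))

# Hypothetical example: two fault detectors with imprecisely known rates.
a = IntervalProb(0.6, 0.7)  # detector A fires with probability in [0.6, 0.7]
b = IntervalProb(0.5, 0.5)  # detector B fires with probability exactly 0.5
both = a & b                # both fire: lo ~ 0.1, hi = 0.5
```

The point of the interval output is exactly the one made above: instead of a falsely precise single number, the algorithm reports honest bounds, and a wide interval signals to the human overseer that the dependence between the events, not the arithmetic, is the dominant source of risk.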

Nick Gray is a PhD student at the Institute for Risk and Uncertainty at the University of Liverpool, under the supervision of Scott Ferson. His PhD studies are multidisciplinary, with research interests including machine ethics, uncertainty in machine learning, and risk communication. He recently spent six months as a research associate at Imperial College London, researching human-in-the-loop machine learning.