Empower planners and individual travellers in the developing world to make smarter (i.e., safer, greener, cheaper) mobility choices, reducing alarming historical trends in injuries, environmental impacts and economic costs.
Prof Scott Ferson
Machine learning tools work well when data are abundant, and many statistical methods were invented for situations in which sample size is the limiting factor. But not all uncertainty is a matter of small sample size. Poor or variable precision, missing values, non-numerical information, dubious provenance, and contamination by outliers, errors, and lies are just a few of the causes of bad data.
Some basic questions about bad data seem not to have clear answers:
- When investing in empirical effort, should we get more or better data?
- Is it always smart to combine good data with bad data?
- What can we do if it is clear that our data were not collected randomly?
- What can be done with ludicrously small samples like n=2 or even n=1?
- If data aren't missing "at random", can we still draw any conclusions?
- Is it prudent to ignore, as statisticians so often do, the reported precision statements associated with measurements?
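To illustrate the n=2 question above: a classical Student-t confidence interval can technically be computed from just two observations, but with one degree of freedom it is usually enormous. The sketch below uses hypothetical measurements; the data values and the 95% level are illustrative assumptions, not results from the project.

```python
import math

# Two hypothetical measurements (illustrative values only)
data = [4.1, 5.3]
n = len(data)

mean = sum(data) / n
# Sample standard deviation (n - 1 degrees of freedom)
s = math.sqrt(sum((x - mean) ** 2 for x in data) / (n - 1))

# Critical value of Student's t with 1 degree of freedom
# for a two-sided 95% interval (standard tabulated value)
t_crit = 12.706

half_width = t_crit * s / math.sqrt(n)
ci = (mean - half_width, mean + half_width)

print(f"mean = {mean:.2f}, 95% CI = ({ci[0]:.2f}, {ci[1]:.2f})")
# The interval is roughly (-2.9, 12.3): fifteen units wide for
# two observations that differ by barely more than one unit.
```

The interval is valid under the usual normality assumption, but its width shows why n=2 gives so little practical information, and why the question is not as trivial as it might first appear.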