“If you can’t describe what you are doing as a process,
you don’t know what you’re doing”
W. Edwards Deming

“Aren’t such basic errors inevitable?”

Writing in the Financial Times recently about the spreadsheet “issues” that have caused the West Coast Main Line franchise to be retendered, the “Undercover Economist” Tim Harford wrote:

There’s a more worrying question: given how complicated the modern world is, aren’t such basic errors inevitable? This seems to have been a howler, but large spreadsheets are ubiquitous and their size makes it almost impossible to eliminate errors. The Office for National Statistics misreported gross domestic product last year, thanks to such an error.

Tim is right on both counts. Spreadsheets are ubiquitous and it is almost impossible to eliminate errors. However, it is not correct to say that error elimination is a function of size. According to research on spreadsheet errors, which we discussed in one of our first financial modelling podcasts with Ray Panko, spreadsheets are no different from other areas of human activity when it comes to making errors. Ray states:

Broadly speaking, when humans do simple mechanical tasks, such as typing, they make undetected errors in about 0.5% of all actions. When they do more complex logical activities, such as writing programs, the error rate rises to about 5%.

Therefore large spreadsheets will have more errors simply because they contain more code. However, their size is not what makes it “almost impossible to eliminate errors”. The possibility of identifying and eliminating errors is a function of the discipline of the modeller and of the consistency and structure they have employed in creating the model. In my view it is easier to identify and eliminate errors in a large model built with a standard methodology such as FAST than in a small but chaotic model.
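Panko’s figures make the arithmetic easy to see. Here is a minimal sketch (the 5% rate, and the assumption that errors occur independently per formula, are illustrative simplifications, not claims from Panko’s research) of how quickly the chance of at least one error grows with formula count:

```python
def prob_at_least_one_error(n_formulas: int, per_formula_rate: float = 0.05) -> float:
    """Probability a workbook of n_formulas contains at least one error,
    assuming errors occur independently at per_formula_rate per formula."""
    return 1 - (1 - per_formula_rate) ** n_formulas

for n in (10, 100, 1000):
    print(f"{n:>5} formulas -> P(at least one error) = {prob_at_least_one_error(n):.3f}")
# 10 formulas  -> ~0.40
# 100 formulas -> ~0.99
# 1000 formulas -> ~1.00
```

On these assumptions, even a ten-formula workbook has roughly even odds of containing an error, and at a thousand formulas an error somewhere is a near certainty. That is precisely why the question is not whether errors exist, but whether the model’s structure makes them findable.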

It’s not clear to me whether the errors in the West Coast models were conceptual errors or structural errors. With two separate enquiries announced into the affair, we shall soon find out.

Where humans are involved there is always a risk of error (no software is ever completely bug-free), but that doesn’t mean we should throw our hands up in the air and do nothing. The case for standards in modelling is clear.

Comments

  1. Charles Broadbent says:

    The Panko stuff on errors makes sobering reading – thanks for the link
