Algorithms

IMA/BCS, The Debate about Algorithms, Mathematics Today, Vol. 53, No. 4, August 2017, pp. 162-165

‘Mathematical’ unscientific reasoning

The IMA/BCS recommend that:

The Government … needs to review its use of automatic processing to ensure it: … Is capable of being explained in line with GDPR 15.1h. In particular, buzzphrases like Artificial Intelligence, deep learning, algorithm, data-based algorithm should act as a warning sign that the algorithm is in fact probably no more than unscientific reasoning by generalisation.

where GDPR says:

The data subject shall have the right to obtain from the controller confirmation as to whether or not personal data concerning him or her are being processed, and, where that is the case, access to the personal data and the following information: … the existence of automated decision-making, including profiling, referred to in Article 22(1) and (4) and, at least in those cases, meaningful information about the logic involved, as well as the significance and the envisaged consequences of such processing for the data subject.

where Article 22 says:

(1) The data subject shall have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her.

(4) [unless].

My interpretation of this is that while the scope of the recommendation is quite narrow, the view that the above buzzphrases may be taken as a warning sign that the methods used cannot be meaningfully explained (as in 15.1(h)) is a warning of unscience that might be applied much more generally.

This might seem obvious. Except that my own experience is that even those with science degrees often assume that, as a mathematician with a strong interest in the use of computational methods, I would be a strong advocate of their widespread use, downplaying their dangers, as in finance prior to 2008. Here is a very clear statement that this is NOT the view of real mathematicians!
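To make the point concrete, here is a minimal sketch of my own (not from the article, and with invented data) of what the 'logic involved' in a typical data-based algorithm amounts to: a nearest-neighbour rule that decides a new case purely by generalising from whichever past cases look most similar, with no scientific model behind it to explain.

```python
# Minimal sketch (not from the IMA/BCS article): the 'logic' of a typical
# data-based algorithm. A nearest-neighbour rule decides a new case purely
# by generalising from whichever past cases look most similar; there is no
# scientific model behind it to explain. All data are invented.
import numpy as np

rng = np.random.default_rng(1)

# Past cases: two made-up features per person, plus a recorded outcome.
past_features = rng.normal(size=(100, 2))
past_outcomes = (past_features[:, 0] + past_features[:, 1] > 0).astype(int)

def decide(new_case, k=5):
    """'Decide' a new case by majority vote of the k most similar past cases."""
    distances = np.linalg.norm(past_features - new_case, axis=1)
    nearest = past_outcomes[np.argsort(distances)[:k]]
    return int(nearest.mean() > 0.5)

# The only explanation available is: "people whose data looked like yours
# were mostly treated this way" -- generalisation, not science.
print(decide(np.array([0.3, -0.1])))
```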

The importance of context

The IMA/BCS further recommend that:

In machine learning algorithms, the background data contribute to the decision, so every such algorithm should be prominently labelled with the data that created it, both as a statement the lay person can understand (e.g. ‘based on London traffic data 1982-2002’) and question (‘but that was before the congestion charge’), and ultimately such that an expert can analyse it. This is an algorithmic consequence of the 15.1(h) right.

Again, this would seem to apply much more widely: not just to machine learning, and not just to Governments and their agents.
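As an illustration only (the article gives no code, and the model, data description and dates below are invented), here is one way such a provenance label might be carried alongside a fitted rule, so that a lay reader can see, and question, the data behind it:

```python
# Minimal sketch, assuming a Python setting: attaching the recommended
# provenance label to a fitted rule. The rule and label are invented examples.
from dataclasses import dataclass
from typing import Callable

@dataclass
class LabelledModel:
    predict: Callable[[float], float]  # the fitted rule itself
    data_label: str                    # lay-readable statement of the source data

    def explain(self) -> str:
        return f"Decision rule based on: {self.data_label}"

# Example: a traffic model labelled in the spirit of the recommendation,
# inviting the lay question "but that was before the congestion charge".
model = LabelledModel(
    predict=lambda traffic: 5 + 2.0 * traffic,
    data_label="London traffic data 1982-2002",
)
print(model.explain())
print(model.predict(4.0))
```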

Considerations in usage

The article has a useful list of factors to consider in connection with all types of algorithm. I would add the availability of alternative methods. For example, in the selection of advertising for web pages there is no alternative to using some kind of algorithm. On the other hand, British medical consultants are currently allowed some discretion, and this may well be a good thing. Possibly the main problem with the use of algorithms, as with the more widespread use of statistics, is how to ensure that the lay person (or even an expert) understands their implications. We need some good examples.

David Marsay
