On June 3, 2019, the UK Information Commissioner’s Office (“ICO”) released an Interim Report on a collaboration project with The Alan Turing Institute (“Institute”) called “Project ExplAIn.” The purpose of this project, according to the ICO, is to develop “practical guidance” for organizations on complying with UK data protection law when using artificial intelligence (“AI”) decision-making systems; in particular, on explaining the impact AI decisions may have on individuals. This Interim Report may be of particular relevance to organizations considering how to meet transparency obligations when deploying AI systems that make automated decisions falling within the scope of Article 22 of the GDPR.
Based on the results of this research, the Interim Report presents three key findings:
In terms of next steps, the ICO plans to publish a first draft of its guidance over the summer, which will be subject to public consultation. Following the consultation, the ICO plans to publish the final guidance in the autumn. The Interim Report also identified three possible implications for the development of the guidance:
This report is potentially very important, and probably deserves more attention than one quick, reactionary post in reply. However, at first glance:
I am somewhat heartened by the realization, and the emphasis right up front, that one size definitely does not fit all where artificial intelligence is concerned, even for generic guidance and policy. But that does call into question the value of a 30-page report.
Under the subheading of "Why is The Alan Turing Institute working on this?", the issue of "explainability" is raised. Explainability is fairly easy in programs using expert-system approaches: the decision logic is a set of rules a human wrote down and can read back. However, as one moves into areas such as genetic programming and neural networks, explainability becomes much more difficult to assess with any certainty. These are areas where we essentially expect the machines to surprise us, with programs and decisions that we couldn't come up with on our own. (A later mention of this, in regard to the "citizens' juries," seems to amount to an opinion survey. In addition, the choice of "accuracy" over explainability seems to indicate a misunderstanding: explainability is one of the few measures we have for assessing the reliability of that accuracy. Still later in the report this dichotomy is raised, but dismissed.)
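To make that dichotomy concrete, here is a minimal, hypothetical sketch (the loan scenario, feature names, and thresholds are mine, not the report's) contrasting the two regimes: a rule-based decision carries its explanation with it, while even a toy neural network's decision is just arithmetic over learned weights.

```python
import numpy as np

# Expert-system style: the decision *is* its explanation.
def loan_decision_rules(income, debt):
    """Rule-based decision with a built-in, human-readable rationale."""
    if income <= 0:
        return "deny", "rule 1: no verifiable income"
    if debt / income > 0.5:
        return "deny", "rule 2: debt-to-income ratio above 50%"
    return "approve", "rule 3: affordability checks passed"

print(loan_decision_rules(income=40_000, debt=30_000))
# ('deny', 'rule 2: debt-to-income ratio above 50%')

# Neural-network style: the same kind of decision emerges from weights.
# Even for this toy two-layer net, the "reason" is a matrix product, not a
# rule anyone wrote down -- and production systems have millions of weights.
rng = np.random.default_rng(0)
W1, W2 = rng.normal(size=(2, 8)), rng.normal(size=(8, 1))

def loan_decision_net(income, debt):
    x = np.array([income / 100_000, debt / 100_000])
    score = (np.tanh(x @ W1) @ W2).item()
    return ("approve" if score > 0 else "deny"), f"score={score:+.3f} from 24 learned weights"

print(loan_decision_net(income=40_000, debt=30_000))
```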
Under the subheading of "What is an AI decision?", there is an acknowledgment that AI is a catch-all term for a range of technologies. However, the section then goes on to emphasize machine learning, which may limit the overall scope and outcome. The document then discusses the GDPR, seemingly without directly raising the issue of privacy. Nor, at this point, does it address the technical danger of using unedited masses of "real" data for the development and testing of AI systems, particularly those built on machine learning. The lack of this consideration is concerning, in regard to the overall value of the report's final outcomes.
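As a minimal, hypothetical sketch of that danger (the records, field names, and scrub() helper here are illustrative, not anything the report proposes): direct identifiers in raw "real" data ride straight along into training and test sets unless they are deliberately stripped or pseudonymized first.

```python
import hashlib

# Hypothetical "real" records, as they might arrive unedited.
raw_records = [
    {"name": "A. Smith", "email": "a.smith@example.com", "age": 42, "defaulted": 0},
    {"name": "B. Jones", "email": "b.jones@example.com", "age": 29, "defaulted": 1},
]

def scrub(record):
    """Drop direct identifiers, replacing them with a one-way pseudonym."""
    pseudonym = hashlib.sha256(record["email"].encode()).hexdigest()[:12]
    return {"id": pseudonym, "age": record["age"], "defaulted": record["defaulted"]}

training_set = [scrub(r) for r in raw_records]
print(training_set)

# Training on raw_records directly would let a model -- and anyone able to
# probe it -- memorize names and email addresses; training on training_set
# removes the direct identifiers, though indirect re-identification risks
# (age plus other quasi-identifiers) remain and need their own treatment.
```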
Under the subheading of "What is an AI decision?", there is an acknowledgment that AI is a catch-all term for a range of technologies. However, the section then goes on to emphasize machine learning, which may limit the overall scope and outcome. The document then goes on to discuss GDPR, seemingly without directly raising the issue of privacy. However, at this point it does not address the technical issue of the danger of using unedited masses of "real" data for the development and testing of AI systems, specifically those using machine learning technologies. The lack of this consideration is concerning, in regard to the overall value of the final outcomes of the report.