This month, the futurists podcast team were joined by Paul Coulton (@ProfTriviality) and Franziska Pilling from Lancaster University (@ImaginationLancs). To give you a sense of place: while the rest of us sat in home offices, Paul was swinging gently in a hammock, serenaded by birds. The birdsong you hear in the background is therefore 100% organic; no birds were harmed in the making of this podcast.
Paul and Franziska have been working in the emerging field of AI legibility. Their research explores how the complex ideas behind AI can be translated into simpler terms that give the average person a better sense of what a particular AI is doing behind the scenes. As Paul points out, when most people think of AI they immediately picture killer robots, but the reality is far more mundane: the AI in question is probably just calculating your credit score. For now, at least…
Understanding what various AIs do is an essential component of informed consent. The pair argue that since most people are unaware of the AIs and algorithms they encounter on a typical day, they lack the information they need to make meaningful decisions about them. As Franziska points out, there is real danger in the mundane: when you aren't really paying attention, it is easy to give away more data than you intended. Presenting people with icons that designate particular functions or uses of data could become a shorthand, helping people understand what an AI does and therefore make informed decisions about using it.
And yet, the difficulty with this project is that simplification often obscures depth and nuance of meaning. A single icon might give someone a rough idea of what an AI is doing, but that is not the same transparency as showing them the code at work. Herein lies the issue: where is the balance between offering a simple explanation and fully explaining an idea?
To find out more about Paul and Franziska’s project, visit their website here: http://imagination.lancaster.ac.uk/project/uncanny-ai/