This past week the Scottish Government published its playbook and overall strategy document for the use, growth, and control of AI (Artificial Intelligence) in the country. My thanks go to academic James Johnston, and to Gisele Simoes, who is undertaking doctoral research into the ethical challenges of AI at the University of the West of Scotland, for bringing this to my attention.
It’s not the first, and it won’t be the last, such initiative; at Deep Analysis we have reviewed a number of similar national government documents over the past few years. But in reading through its 44 pages of guidance I was struck by a few things that I thought worth sharing with the broader tech community. Like any strategy, it is at times a bit vague on details, not for lack of effort but because AI means lots of things to lots of people. It’s a technology used in everything from space exploration to healthcare, through gaming and law enforcement. Building out a strategy and playbook for your country’s use of AI is like fighting fog: you know it’s there, you can see it, but you can’t really put your hands on it. What this initiative does well, beyond its plans to harness AI for growth in Scotland, is to accept that vague nature and detail the human and ethical criteria. As I have said, I have read a number of similar initiatives, but none, in my experience at least, states the boundaries so clearly.
To quote, the principles for AI in Scotland are as follows (my bolding):
AI should benefit people and the planet by driving inclusive growth, sustainable development, and well-being.

AI systems should be designed in a way that respects the rule of law, human rights, democratic values, and diversity, and they should include appropriate safeguards – for example, enabling human intervention where necessary – to ensure a fair and just society.

There should be transparency and responsible disclosure around AI systems to ensure that people understand AI-based outcomes and can challenge them.

AI systems must function in a robust, secure, and safe way throughout their life cycles, and potential risks should be continually assessed and managed.

Organizations and individuals developing, deploying, or operating AI systems should be held accountable for their proper functioning in line with the above principles.
On the surface, these all seem fine and logical, and I for one am 100% in agreement with them. In practice, however, following – and indeed enforcing – these principles is going to be a major challenge. The trend in AI today is to build black boxes and embrace Deep Learning; though that approach brings performance and practical advantages, transparency is not part of the equation. Though I am no lawyer, businesses and government departments in Scotland are going to have to be very wary of using Deep Learning if there is any possibility that the decisions made by the system could later be challenged. But maybe even more importantly, holding organizations and individuals accountable for the AI they use and work with has profound implications. In practice, it means one cannot claim ignorance; you cannot blame the computer (AI), it’s you that carries the can. That closes a loophole that I feel many in the tech field have hidden behind for too long: you can no longer claim that the system is too complex, or unexplainable. Or if you do, then you are accepting responsibility for creating such a complex and unexplainable situation in the first place.
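To make the transparency point concrete, here is a minimal sketch of what a challengeable decision can look like. Everything in it – the loan-approval setting, the feature names, the weights – is invented for illustration; the point is simply that an interpretable model (here, a hand-rolled logistic regression) can show exactly which factors drove its outcome, an audit trail a deep neural network cannot readily produce.

```python
import math

# Hypothetical loan-approval model, purely illustrative.
# Feature names and weights are made up for this sketch.
WEIGHTS = {"income_k": 0.04, "years_employed": 0.3, "missed_payments": -0.9}
BIAS = -1.5

def score(applicant: dict) -> float:
    """Approval probability via a simple logistic function."""
    z = BIAS + sum(WEIGHTS[f] * applicant[f] for f in WEIGHTS)
    return 1 / (1 + math.exp(-z))

def explain(applicant: dict) -> dict:
    """Per-feature contribution to the score: the kind of record
    a challenged applicant (or a regulator) could be shown."""
    return {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}

applicant = {"income_k": 45, "years_employed": 4, "missed_payments": 2}
print(round(score(applicant), 3))
print(explain(applicant))
```

With a black-box model, the first number exists but the second dictionary does not – and it is precisely that second output that the transparency and accountability principles above would seem to require.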