I was recently interviewed by TechTarget journalist Mark Labbe about the need to regulate AI (Artificial Intelligence). The interview was prompted by the release of a new draft US government directive called Guidance for Regulation of Artificial Intelligence Applications.
The regulation of AI is something we have thought about a lot over the past couple of years. As we designed and built the AI training course and wrote the Practical AI book, the subject of ethics and controls kept raising its head. In workshops with enterprises, it can be difficult, if not impossible, to steer the discussion away from deep-seated concerns regarding the use of AI. In short, the dark side of AI is real and concerns us all. Yet to date, despite the massive growth of AI, with citizens and employees interacting with AI daily, if not hourly, regulators have been slow to respond.
If you were to ask us whether we believe there should be strict regulations governing the use of AI, we would tell you in a heartbeat that yes, this is needed. But as of today, such regulation is absent. Here is the twist: at Deep Analysis, we are big proponents of the use of AI in the enterprise, and we teach and advise widely on the topic. We are, as folks like to say, “AI evangelists.” Yet, even so, we believe strongly that its use should be tightly regulated.
Without delving too deeply into the murky world of politics, regulations are seen by many as designed to stifle innovation and growth. Though that can be true at times, so can the reverse. Consider, for example, a pharmaceutical and healthcare industry void of regulation: that is a scary thought indeed. Yet few could argue that these regulated sectors struggle to make money or to innovate. They have thrived so well (in the US particularly) that many would argue for even more regulation.
By analogy, AI is in its wild west phase, with traveling salespeople turning up uninvited and hawking snake oil to cure all ills. For some, it is a lucrative business, but it is one that leaves a trail of disappointment and, at times, destruction in its wake. It also muddies the water, confuses the buyer, and creates a toxic and long-lasting atmosphere of distrust. Imposing some regulated ethical guidelines would shake out at least some of the frauds and, in turn, create an atmosphere of trust and clarity for positive growth.
Black box AI systems that make life-changing decisions, data bias that unfairly discriminates, and the corporate ownership and leverage of private information should all concern us. As should the impact of intelligent automation on jobs and the workplace. We would argue that without regulation, there is a dark future ahead. Even so, we are pragmatists and doubt regulations will ever appear; if they do, in some form or other, they will likely be too weak to be effective.

The US guidance document is at least a start; in and of itself, it is toothless, but it helps to trigger a much needed and critical discussion. It at least clarifies one thing: it is not AI technology itself that needs regulating, it is the use of AI. We already have many plans in place to explore the ethics of AI in 2020; if this is something that concerns you or your company, please reach out to us. We would love to learn more and, if possible, lend a hand in guiding you forward.
Get trusted advice and technology insights for your business from the experts at Deep Analysis. [email protected]