Enterprises beware of Deep Learning


As AI works its way into the enterprise, we have noticed one term gaining particular traction: 'Deep Learning.' In conversations with both buyers and vendors of AI technology, Deep Learning appears to have caught the imagination. That is worrisome, as Deep Learning is a branch of AI that promises a lot but should be approached with extreme caution.

There are many different models and approaches to AI, everything from Random Forests to Support Vector Machines and Hidden Markov Models. Each method is optimized for a particular set of tasks, and these models and approaches are often used in conjunction with one another to achieve a specific goal. In other words, there is no single approach to AI that is inherently better than another. Instead, options are carefully chosen, configured, and combined to meet your specific business goals. The branch of AI we refer to as Deep Learning, however, is increasingly positioned as an 'uber' approach to AI, a superior, advanced approach. That is only partly true.

Deep Learning, in simple terms, is an advanced machine learning approach that uses multiple hidden layers of artificial neural networks. It sounds cool, it typically requires a lot of data and computing power, and it can deliver outstanding results in the most complex of situations. However, the kicker here is the word 'hidden.' The layers in a Deep Learning system will analyze data and provide you with an output decision, but not an explanation of how it came to that decision. Deep Learning systems are mostly black boxes: what happens inside stays inside. The black box is a controversial topic in AI; some data scientists play it down, others do the opposite, and some systems are a little more transparent than others. Either way, if a Deep Learning system makes a mistake, there is no guarantee, and indeed a likelihood, that you will not know why it made the mistake. And let there be no doubt about it: mistakes and erroneous decisions will happen. When they do, you cannot ask a Deep Learning system why. Both enterprises and technology vendors need to be aware that Deep Learning systems are, for all intents and purposes, unaccountable.
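
To make the 'hidden' point concrete, here is a minimal sketch of a toy network with two hidden layers. The sizes and weights are invented purely for illustration (a real Deep Learning system has millions or billions of such numbers); the point is that the hidden activations are just vectors of floats with no human-readable meaning:

```python
import numpy as np

# Toy "deep" network: two hidden layers. All shapes and weights here
# are invented for this sketch, not taken from any real system.
rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 8))   # input (4 features) -> hidden layer 1
W2 = rng.normal(size=(8, 8))   # hidden layer 1 -> hidden layer 2
W3 = rng.normal(size=(8, 1))   # hidden layer 2 -> output decision

def forward(x):
    h1 = np.tanh(x @ W1)                          # hidden layer 1: 8 floats
    h2 = np.tanh(h1 @ W2)                         # hidden layer 2: 8 floats
    decision = 1 / (1 + np.exp(-(h2 @ W3)))      # a single score in (0, 1)
    return h1, h2, decision

h1, h2, decision = forward(np.array([0.2, -1.0, 0.5, 0.3]))
# h1 and h2 are opaque intermediate numbers; the network reports a
# decision, not a reason.
```

Even in this four-input toy, nothing in `h1` or `h2` maps back to a business rule you could audit; scaled up to real networks, that opacity is the black box problem.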

Furthermore, as more people use Deep Learning, its limitations are becoming more apparent. Data scientists have recently told us how easy it is to run an impressive example on a Deep Learning system, but also how difficult it is to scale that example with accuracy and consistency. Others have pointed out how easily fooled Deep Learning systems can be; a 'fun' example from MIT can be seen here: https://www.labsix.org/physical-objects-that-fool-neural-nets/
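
The 'easily fooled' point can be illustrated with a deliberately oversimplified toy, not a real deep network: a tiny, targeted nudge to the input flips the classifier's decision even though the input barely changes. All numbers here are invented for the sketch; real attacks like the MIT example above target genuine deep networks:

```python
import numpy as np

# Hypothetical linear classifier, standing in for a far more complex
# model. Weights and inputs are invented for illustration only.
w = np.array([1.0, -2.0, 0.5])

def classify(x):
    return 1 if x @ w > 0 else 0

x = np.array([0.3, 0.1, 0.2])   # original input, classified as 1

# Adversarial nudge: move each feature slightly against the weights.
eps = 0.1
x_adv = x - eps * np.sign(w)    # differs from x by at most 0.1 per feature

# classify(x) == 1, classify(x_adv) == 0: a near-identical input,
# the opposite decision.
```

Real adversarial perturbations against deep networks work on the same principle but are far harder to spot, which is precisely why they worry practitioners.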

So take a step back for a minute and consider the implications of using Deep Learning systems for your customers or employees. Is your business regulated? Is it accountable for its actions? The answer, of course, is yes. Every organization, every business, is responsible for its actions, be that the corner shop or a major hospital. The specific requirements of regulations will differ widely, but you will always be responsible.

A short analyst note is not the forum to explore the technical ins and outs of machine learning models, or the complexity of implementing and managing AI systems. But it may be the right place to give some high-level advice, so here goes:

1: Enterprises should understand that any critical AI system you use will likely need to be ‘supervised.’ You will need to monitor and manage its performance over time.

2: Systems that run without such supervision, as Deep Learning systems often do, have their place, but you should be wary of using them in critical decision making.

3: If a Deep Learning system starts to give you misleading, incorrect, or divisive outputs, you may have no option but to shut it down permanently. There is no going back and unraveling what went wrong.
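
The 'supervision' mentioned in point 1 can start as simply as comparing a model's recent measured accuracy against its accepted baseline and flagging any degradation for human review. A minimal sketch, in which the function name and tolerance are illustrative assumptions rather than any standard:

```python
# Minimal sketch of supervising a deployed model over time. The
# 5-point tolerance is an invented default; set it to match your
# own regulatory and business risk appetite.
def needs_review(baseline_accuracy: float, recent_accuracy: float,
                 tolerance: float = 0.05) -> bool:
    """Return True if recent accuracy has dropped enough to warrant
    pulling a human into the loop."""
    return recent_accuracy < baseline_accuracy - tolerance

# Example: a model validated at 92% accuracy that now measures 84%
# on recent data would be flagged; 90% would not.
```

A real deployment would also track input drift and output distributions, but even a check this simple institutionalizes the ongoing monitoring that point 1 calls for.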

To repeat: at the end of the day, every organization is accountable for its actions. AI systems can be of enormous benefit in increasing efficiencies, reducing costs, and providing new and actionable insights. But the manner in which the AI makes its decisions will often need to be explainable; such systems will often require a 'human in the loop.' At times that may mean choosing a more straightforward option over a more complex one. You may need to sacrifice some minor perceived gains to remain compliant.

The enterprise AI world currently resembles the wild west, one in which everyone is going to get rich and the possibilities are endless. Many will get rich, and the possibilities are indeed nearly unlimited. But some of those possible outcomes will mean failure and a negative impact on businesses, customers, and employees. In such frontier situations, charlatans abound, selling too-good-to-be-true solutions. We are seeing that in the hyping of Deep Learning. Deep Learning does not represent unlimited intelligence. Deep Learning is complex, and it does go wrong, at times spectacularly so. Deep Learning has its uses and its strengths; for some tasks it is ideal, for others it is not a fit at all, as the risks are too high. So if you are going down the route of Deep Learning, proceed with caution.

Work with us today to ensure you are a disruptor, not one of the disrupted!

Get trusted advice and technology insights for your business from the expert analysts at Deep Analysis. [email protected]

Read our bestselling book ‘Practical Artificial Intelligence – An Enterprise Playbook’ by clicking here.
