Lately, I’ve been speaking at length with digital process automation vendors about the state of the process automation software market. Whenever we discuss trends, the conversations inevitably turn to the future of artificial intelligence and machine learning (AI/ML) in process automation and robotic process automation (RPA). Frankly, I’ve found that most vendors latch on to the idea but have very little substance behind their words in terms of strategy and product roadmaps. Many of them aren’t sure when customers will want to buy AI/ML, or how and where to add it to their process automation platforms.
There’s a good reason why the vendors are so unsure. For AI/ML to work in applications, the tools must be applied in very specific, targeted ways. The AI/ML platforms – IBM Watson, Salesforce Einstein, Google TensorFlow, Microsoft Azure AI, and Amazon AWS Machine Learning – are just that: platforms. They are pre-trained for generic purposes, but software vendors looking to use them must build their own applications on top to meet their own needs. As my colleague Alan Pelz-Sharpe writes in Practical Artificial Intelligence (the book he co-authored with Kashyap Kompella), “every time the AI/ML platforms are used, they are used differently with some elements of the platform leveraged and other elements not, depending on the circumstances.”
Each time Einstein or Watson, for example, is used in a software product, it requires a new AI/ML instance with new data. “In reality, platforms such as Einstein and Watson encompass armies of large and small AI products,” Alan writes. “Some leverage this or that model, others a variant of some other model,” and so on.
If it sounds complicated – it is.
That reality puts cracks in the process automation AI/ML holy grail, which is something along these lines:
An event from a dynamic case management system automatically triggers human actions and automated systems based on machine advice, and creates new processes on the fly. For example, a system detects an intruder and dispatches a guard to investigate while simultaneously updating other people, systems, and even IoT devices.
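A bare-bones sketch of that kind of event-driven orchestration might look like the following; every name, event type, and routing rule here is hypothetical, and the machine-advice step is a stub standing in for an actual model:

```python
# Hypothetical sketch: an incoming case event is routed to human,
# system, and IoT actions based on a (stubbed) machine-advice call.

def advise(event):
    """Stand-in for an ML model scoring the event and recommending actions."""
    if event["type"] == "intruder_detected":
        return ["dispatch_guard", "notify_operators", "lock_doors"]
    return []

def handle(event):
    """Fan the recommended actions out; a real platform would call
    workflows, notification services, or IoT device APIs here."""
    dispatched = []
    for action in advise(event):
        dispatched.append(f"{action}:{event['location']}")
    return dispatched

print(handle({"type": "intruder_detected", "location": "gate-3"}))
# → ['dispatch_guard:gate-3', 'notify_operators:gate-3', 'lock_doors:gate-3']
```

The point of the sketch is the shape, not the logic: the machine advice decides *what* happens, and the orchestration layer decides *who and what* gets told.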
Most intelligent applications leverage the common AI/ML platforms, and for good reason: most AI systems comprise multiple parts, and few software vendors, if any, would build all of those parts themselves. But the building blocks only get them so far. Vendors must add their own secret sauce and, most importantly, their own data for their applications to work, and then train their AI/ML product using that data. The bottom line is that no two AI software applications are alike; each has unique data and unique learning experiences. The same or similar building blocks could be used to build several related AI products, but the data used to train them might be very different. For example, one might become an application that analyzes marine claims from Florida, while another reviews car claims from the Midwest. Same building blocks, very different output.
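As a toy illustration of identical building blocks diverging with the data, the snippet below trains the same trivial word-frequency classifier on two invented claim datasets and gets two different applications out of it. The data, labels, and "model" are all made up for illustration; real platforms obviously use far more sophisticated components:

```python
from collections import Counter

def train(examples):
    """Toy 'model': per-label word frequencies learned from labeled text."""
    model = {}
    for text, label in examples:
        model.setdefault(label, Counter()).update(text.lower().split())
    return model

def predict(model, text):
    """Pick the label whose training vocabulary best overlaps the input."""
    words = text.lower().split()
    return max(model, key=lambda label: sum(model[label][w] for w in words))

# Same training code (the "building block"), different data...
marine = train([("hull damage from storm surge", "marine"),
                ("engine flooded at the marina", "marine"),
                ("routine policy renewal inquiry", "other")])
auto = train([("rear bumper collision on highway", "auto"),
              ("windshield cracked by hail", "auto"),
              ("routine policy renewal inquiry", "other")])

# ...yields two different classifiers.
print(predict(marine, "storm surge hull claim"))   # → marine
print(predict(auto, "hail damage to windshield"))  # → auto
```

Both classifiers share every line of code; only the training examples differ, which is exactly why no two deployed AI applications behave alike.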
Despite the steep ramp for adding AI/ML to digital process automation, there are some use cases already deployed, such as:
- Fraud detection in financial services and government agencies
- Natural language processing in contact centers and process modeling
- Facial and pattern recognition in retail websites
- Voice recognition in service industries
These examples show how AI/ML applications are being used for digital process automation:
- Nintex has developed a process modeling application that uses natural language processing, for customers that run process mapping and modeling (Nintex Promapp) alongside Nintex Workflow Cloud. The customer describes processes in plain English; the application then uses natural language processing to match the text to standard workflow actions and automatically generate the skeleton of a workflow process. These descriptions are stored as parent/child process snippets that can be reused in multiple processes.
- ABBYY has developed a neural network for analyzing dynamic case management instances, including forecasting process outcomes and event timing. For example, in a healthcare process, the system will predict whether the patient will be sent home, to the hospital emergency room, or to the operating room, and make recommendations based on those predictions. Once deadlines are defined for a process instance, the software can also predict the probability of missing a deadline, along with the financial impact. Following this analysis, the software can then advise the businessperson of the next best event for the case.
- S&P Global uses Appian’s natural language generation and RPA to create stories (e.g., about an earnings report). This is done by tapping into datapoints and earnings information from public websites and producing stories that would previously have been written by people.
- Manufacturing and engineering companies use process mining data to drive AI-based business process modeling. Note that process mining should precede the use of AI-driven process modeling, not the other way around: AI/ML is not effective at performing the mining itself, but once the data sets are produced from the mining effort, AI is a powerful tool for analyzing highly deviant processes.
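To make the Nintex-style text-to-workflow idea concrete, here is a rough sketch of that kind of matching. The action catalog, the token-overlap scoring, and all names are my own illustration, not Nintex's actual implementation, which would use far richer NLP:

```python
# Hypothetical sketch: match plain-English process steps to a small
# catalog of standard workflow actions by token overlap, then emit a
# workflow skeleton. Catalog and matching logic are illustrative only.

ACTIONS = {
    "send_email":   {"send", "email", "notify", "message"},
    "create_task":  {"create", "assign", "task", "todo"},
    "get_approval": {"approve", "approval", "sign", "review"},
}

def match_action(step):
    """Return the catalog action sharing the most words with the step,
    or a generic manual step when nothing matches at all."""
    words = set(step.lower().replace(",", "").split())
    best = max(ACTIONS, key=lambda a: len(ACTIONS[a] & words))
    return best if ACTIONS[best] & words else "manual_step"

def skeleton(description):
    """Turn a described process into an ordered list of actions."""
    return [match_action(s) for s in description.split(".") if s.strip()]

print(skeleton("Send an email to the customer. "
               "Create a task for the rep. Get manager approval."))
# → ['send_email', 'create_task', 'get_approval']
```

The generated skeleton is only a starting point; a person would still review and flesh out the workflow, which matches how such features are positioned.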
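The deadline-risk prediction described for ABBYY can likewise be sketched in miniature. The approach below (a simple survival-style estimate over historical case durations) and all the numbers are my own illustration, not ABBYY's model:

```python
# Toy sketch of deadline-risk scoring: among past cases that ran at
# least as long as this one has so far, what share ultimately blew
# the deadline, and what is the expected penalty?

def miss_probability(durations, elapsed, deadline):
    """Share of historical cases lasting >= `elapsed` days that
    ultimately exceeded `deadline` days."""
    survivors = [d for d in durations if d >= elapsed]
    if not survivors:
        return 0.0
    return sum(1 for d in survivors if d > deadline) / len(survivors)

def expected_cost(durations, elapsed, deadline, penalty):
    """Deadline-miss probability weighted by the late-case penalty."""
    return miss_probability(durations, elapsed, deadline) * penalty

history = [3, 5, 6, 8, 9, 12, 14, 20]  # days to resolve past cases
p = miss_probability(history, elapsed=7, deadline=10)
print(round(p, 2), expected_cost(history, 7, 10, penalty=5000))
# → 0.6 3000.0
```

Even this crude estimate shows the shape of the feature: as a case ages without resolving, its predicted miss probability and financial exposure climb, which is what lets the software escalate before the deadline actually passes.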
Still, software companies face two main challenges to moving forward:
The first challenge is to maintain a broad and holistic view of customer needs, which is hard for any vendor to sustain. The second is to understand the limitations of AI and work within them. Artificial intelligence and machine learning are powerful tools, but without good-quality data and clear business ownership, they can be blunt instruments at best and destructive at worst.
Keeping that reality in mind, digital process automation vendors might be able to add AI/ML functionality such as the following going forward:
- Analyze workstreams to determine the value of different workflows
- Combine case data and business data to improve decision-making
- Recommend design options in modeling and RPA scripting
- Eliminate duplicated/conflicting processes
- Improve process models based on process mining outputs
- Identify duplicated global processes and recommend the best version
- Analyze process performance data to correct inefficiencies
- Use process mining for predictions based on process models
- Use AI/ML for intelligent capture
It isn’t a matter of whether we will use AI/ML in digital process automation and RPA; it’s a matter of when, and of how much effort it will take to get there. Buyers can help by letting their vendors know which of the many AI/ML uses within business processes they value, and which take priority.
For an interesting discussion of how to use AI in existing processes, see https://atos.net/en/blog/applying-artificial-intelligence-in-existing-business-processes
See our ABBYY vendor vignette: https://www.deep-analysis.net/vendor-vignette/abbyy-timeline/
See our Appian vendor vignette: https://www.deep-analysis.net/vendor-vignette/appian/
See our blog post, “Process Mining & the Lost Art of Continuous Improvement” at https://www.deep-analysis.net/2020/03/process-mining-the-lost-art-of-continuous-improvement/
The two challenges are outlined based on an interview with Alan Pelz-Sharpe on April 2, 2020.
Work with us to ensure you are a disruptor, not one of the disrupted!
Get trusted advice and technology insights for your business from the experts at Deep Analysis. [email protected]