More IDP vendors announce GPT features

by:
last updated:

We’re at the very beginning of the announcement cycle for GPT integration with IDP. Some vendors are all-in on GPT, while others take a more cautious approach. From a trickling stream in April, it's now a raging torrent of Class IV rapids as the huge snow mass melts. Time to “buckle up, buttercup”.

June 27, 2023

This week I’m highlighting three 4th Wave IDP companies that were working with LLMs long before the ChatGPT bandwagon rolled into town.

Instabase: I met with CEO Anant Bhardwaj and VP-Intl Luke Rogers in London last week, lucky to get a few minutes at the a16z conference, where they recounted being overwhelmed by the response to their new AI Hub, where discriminative meets generative LLMs. After test-driving AI Hub myself, I wasn’t surprised to hear they were so popular with attendees (Deep Analysis June 7 blog). Thanks to Head of Products Clemens Mewald & marketing chief Nikita Jagadeesh, I also received a full briefing on the entire platform and roadmap. Then I built my first AI app on the Hub for analyzing stock purchase agreements. This thing actually works. I was sufficiently impressed to begin work on our deeper-dive report (we call it a Vendor Vignette). Stay tuned.

Eigen Technologies: on the same London trip, I met with Comms Director Thomas Cahn for a briefing on Eigen’s GenAI announcement and other news about the platform. Tom’s been with Eigen since the early days and talks more like a product manager than a comms person (which I loved). Eigen is known for building best-of-breed LLMs and fine-tuned AI models for the most complex financial services instruments, such as derivative contracts. They’ve branched out into adjacent vertical markets and expanded the platform on both the input and output ends. Eigen has also married its discriminative LLMs to the power of generative AI. They say this practically eliminates the hallucination issue. Time to update our Eigen vignette!

Indico Data: I talked to CEO Tom Wilde today about the launch of Indico’s integration with Microsoft Azure OpenAI Service, which connects Indico’s discriminative LLM capabilities with GPT. Indico co-founder Alec Radford published a seminal generative AI paper way back in 2015, still one of the most cited; so they really know what they’re doing with GPT. Indico has a competitive unstructured data processing platform for any document or content type. But what stands out to me is their launch focus on the insurance industry, which surely can use the help. Good strategic move, Indico. And another vignette for me to update.

Still wary about introducing GenAI into your IDP or automation stack? No wonder, given the dire warnings. However, we think these vendors (and others – see below) have built secure and scalable systems around GPT.

  • Hallucinations are not an issue; they’ve bounded GPT answers with their own fine-tuned, domain-specific discriminative LLMs.
  • Each offers an on-prem version for ultra-secure and compliant ops.

So remember: move slowly and don’t break anything (yet)!

 

May 4, 2023

Doesn’t it already seem like “hamster years” ago that ChatGPT launched? Since November 2022, Deep Analysis has occupied a front-row seat to the shock and awe that OpenAI unleashed on the enterprise software industry. We’ve watched and listened as the Work Intelligence and Intelligent Document Processing vendor communities have responded to the challenge of answering the one question on the mind of every board member, every investor, every client, and most likely every employee: what the heck are you doing about this?

Now we’re at the very beginning of the announcement cycle for GPT integration with IDP. Some vendors are all-in on GPT, while others take a more cautious approach. From a trickling stream in April, we expect the PR will soon become a raging torrent of Class IV rapids as the huge snow mass melts. Time to “buckle up, buttercup”.

Smooth as butter?

It was great to see the generative AI joy coming out of Seattle this week. (No, not the ebullient Jeff Teper @ Microsoft again: this time it’s from the BotMinds AI team). They are “over the moon” to announce the integration of AutoGPT into the BotMinds AI platform, because now their customers can access the “sheer awesomeness” for their intelligent automation journey. Apparently, AutoGPT can do “smooth as butter” data extraction from “mind-boggling” documents. Clearly BotMinds AI are “all in”. 

That certainly got my attention. With the exception of those merry marketers at Veryfi, the vast majority of IDP marketing is, well, a bit less enthusiastic (which is classic English understatement for incredibly dull and dreary). While BotMinds AI marketing may be entertaining, the problem, as always, will be in the actual delivery of those mind-blowing improvements into everyday process automation.

Can GPT really elevate IDP to this next level of bliss? Will anything in IDP ever be as smooth as butter? We don’t know yet, but the BotMinds AI team have already demonstrated their AI bona fides and seem as likely as any IDP vendor to bring us closer.

GPT is like a calculator for words

That quote comes from Dan Rotelli, CEO of long-time IDP and document management provider BIS, a specialist in the oil & gas industry with its esoteric documents such as wellhead leases dating back over 100 years. Dan was comparing the advent of GPT to when calculators first appeared and replaced slide rules. Back then, suddenly anyone could calculate equations without experience or training in maths or engineering. The power of the calculator was transformative but also very, very DANGEROUS. Any idiot with a calculator… now, replace “calculator” with “ChatGPT”.

The quote came out of our briefing with Tim McMullin, VP Sales for the BIS Grooper IDP product line. Tim is a very hands-on sales manager who has spent a lot of time testing GPT-3 and GPT-3.5 alongside the Grooper engineers. What follows is a summary of what the Grooper team has learned so far.

  • What is GPT actually good at doing for IDP?
    • GPT is very good at reading and interpreting OCR data; Tim stressed how important it is to feed GPT with the most accurate OCR data possible.
    • It is exceptional out of the box at Named Entity Recognition (NER), including address extraction and date fields.
    • GPT’s keyword summarization capability works very well for document classification and will assist in unstructured document use cases such as contract lifecycle management without the need to develop a separate NLP-driven summarization tool.
    • For the average knowledge worker, GPT will make document processing far easier than today. Any worker familiar with their domain can learn to ask the system the right questions (prompts), without the need for any data analysis skills or understanding of Boolean logic, regular expressions, etc.
  • What are GPT’s current limitations for IDP?
    • It can become very expensive very quickly at current pricing. This is expected to decline over time, but no one knows the timeline of that arc.
    • GPT is a black box; there’s no way to peer inside and see exactly how it decided to output a string of words. However, Tim suggested this is not dissimilar to the long-standing problems around IDP validation, and we need to deploy the same remedies, such as nth or random sampling by a human in the loop (HITL).
    • Currently GPT can only handle a limited amount of text per query: an 8K or 32K token context window, depending on the model. This is expected to grow in the future.
    • GPT is not yet good enough for reliable table data extraction and table understanding.
  • What about ChatGPT’s hallucination problem?
    • In the GPT 3.5 API, hallucination is less of a problem because the operator can tune settings such as “temperature” to reduce the possibility of hallucinating. When GPT is fed domain-specific data such as an invoice or a contract, hallucination problems should be further reduced. When in doubt, HITL.
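To make the Grooper observations concrete, here is a minimal sketch of what low-temperature NER extraction over OCR text might look like. The prompt wording, field names, and helper functions are my own illustration, not anything from the BIS briefing; the commented-out call reflects the 2023-era OpenAI Python client.

```python
import json

def build_ner_messages(ocr_text: str) -> list:
    """Build a chat prompt asking the model to pull named entities
    (names, addresses, dates) out of OCR output as strict JSON.
    The field list here is illustrative."""
    system = (
        "You extract named entities from OCR text. "
        "Respond with JSON only, using keys: names, addresses, dates. "
        "Use an empty list for any key with no matches."
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": ocr_text},
    ]

def parse_entities(raw_reply: str) -> dict:
    """Parse the model's JSON reply, tolerating missing keys."""
    data = json.loads(raw_reply)
    return {k: data.get(k, []) for k in ("names", "addresses", "dates")}

# The actual call (with the openai 0.27.x client of mid-2023) would look
# roughly like this -- temperature=0 is the knob for curbing hallucination:
#
#   resp = openai.ChatCompletion.create(
#       model="gpt-3.5-turbo",
#       messages=build_ner_messages(ocr_text),
#       temperature=0,
#   )
#   entities = parse_entities(resp.choices[0].message.content)
```

Note how the accuracy ceiling here is still the OCR text you feed in, which is exactly Tim’s point about OCR quality.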

Move slowly and don’t break anything (yet)

Hyperscience is among the IDP vendors taking a more cautious approach. Yesterday they previewed a Know Your Customer (KYC) demo with GPT integrated in their Flow Studio low-code development environment. GPT 3.5 is called during data validation to compare a new banking application against the bank statements and KYC docs submitted by the applicant. As noted above, GPT is very good at NER, and Hyperscience uses that to spot inconsistencies or errors in the data. GPT then helps the developer easily create and apply a new rule to the workflow for data validation.

Fig. 1 Hyperscience Flow GPT 3.5 validation demo (edited for clarity)

Hyperscience is also careful to emphasize the urgent need to keep a human in the loop at all times with GPT and, like Grooper, highlights the primacy of data accuracy from the OCR engine. It also prefaces the demo with warning statements about known GPT inaccuracy, bias, and data privacy concerns.
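As a hypothetical illustration of the kind of validation rule the demo produces: once GPT’s NER has extracted fields from the application and the KYC documents, a simple downstream check can flag the disagreements for a human reviewer. The function and field names below are invented for this sketch, not taken from Hyperscience.

```python
def find_inconsistencies(application: dict, kyc_doc: dict,
                         fields=("name", "address", "date_of_birth")) -> list:
    """Compare fields extracted (e.g. by GPT's NER) from a banking
    application against the same fields from a submitted KYC document,
    returning the fields that disagree."""
    mismatches = []
    for f in fields:
        a, k = application.get(f), kyc_doc.get(f)
        # Only compare fields present on both sides; normalize casing
        # and whitespace so trivial OCR differences don't trip the rule.
        if a is not None and k is not None and a.strip().lower() != k.strip().lower():
            mismatches.append(f)
    return mismatches
```

Anything this rule flags goes to the human in the loop, not to an automatic rejection, which is the cautious posture Hyperscience is arguing for.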

While Deep Analysis is well-known for emphasizing innovation and showcasing early-stage disruptions from fast movers such as BotMinds AI or Grooper, we have to agree with the Hyperscience “adult in the room” approach. Because GPT and other competitive generative AI tools are still early-stage and much is not known about them in everyday business use, we advise our enterprise clients to flip Zuck’s script: move slowly and don’t break anything (yet). 

Keeping up with Microsoft Syntex

Meanwhile, who could have predicted that Microsoft, the ultimate B2B software adult in the room, would become the new disruptor in IDP (and all content services, for that matter)? Microsoft also seems to be all-in for GPT and is on a roll with weekly announcements about new uses across its many product lines. Syntex is the Microsoft product that includes IDP functionality (and much more).

Microsoft announced that GPT-4 APIs are now available through the Azure OpenAI Service, so any developer can build IDP-type applications and include GPT functions such as summarization, keyword extraction, and classification. Over on GitHub, this has unleashed a slew of innovative new document automation projects. We expect Azure OpenAI to be integrated alongside Syntex and Power Automate to amp up document automation across Microsoft-centric enterprises.
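For readers who want to try this, here is a rough sketch of assembling such a call against the Azure OpenAI REST surface. The endpoint, deployment name, and prompt are placeholders from your own Azure resource, and the API version shown was current in mid-2023; check Microsoft’s documentation for what is current now.

```python
API_VERSION = "2023-05-15"  # an Azure OpenAI API version current in mid-2023

def summarize_request(endpoint: str, deployment: str,
                      api_key: str, document_text: str):
    """Assemble (url, headers, payload) for an Azure OpenAI
    chat-completions call that summarizes a document."""
    url = (f"{endpoint}/openai/deployments/{deployment}"
           f"/chat/completions?api-version={API_VERSION}")
    headers = {"api-key": api_key, "Content-Type": "application/json"}
    payload = {
        "messages": [
            {"role": "system",
             "content": "Summarize the document in three bullet points."},
            {"role": "user", "content": document_text},
        ],
        "temperature": 0.2,  # keep it low for factual summarization
    }
    return url, headers, payload

# To actually send it (a network call, using the requests library):
#   url, headers, payload = summarize_request(
#       "https://my-resource.openai.azure.com", "my-gpt4-deployment",
#       "YOUR_KEY", text)
#   reply = requests.post(url, headers=headers, json=payload, timeout=30)
#   summary = reply.json()["choices"][0]["message"]["content"]
```

Swapping the system message turns the same scaffold into keyword extraction or classification, which is why a single Azure OpenAI deployment can serve several IDP-style functions.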

Microsoft is certainly delivering on its AI promises, and because of its two-year head start with OpenAI and GPT, Microsoft should have the benefit of experience at already breaking things before unleashing GPT features on the market. But by dropping so many GenAI choices so fast across its entire B2B product ecosystem, does Microsoft risk overwhelming its customers? If you’re dazed and confused, follow our Deep Analysis coverage on all things Syntex.
