As I mentioned last time, my January was spent primarily updating our Work Intelligence Market Analysis: both refreshing and adding to the underlying data – we’ve added formal geographic and company size data to the model for this edition – and extending the analysis of that data that we published in 2023.
With the launch of the debut version last year, we published several posts pulling out some of the themes – some of the geographic disparities we’d noticed and a commentary on the notion of the market itself – which proved popular. So, with the new version now firmly in the hands of our editorial and design teams making it read properly and look pretty, it’s time to repeat that process.
This first one is as much a public service announcement as it is an analytical insight. I don’t know who is out there telling people this is a legitimate way to handle the introduction of new technology, but [deep breath Matt, you can do this] trust in magic alone won’t make generative AI work for you and your organization.
In the last couple of months I have heard a number of anecdotes about the ways in which some organizations are choosing to use trust in magic as the primary sponsor for generative AI… well, we can’t call them “projects” for fear of angering professionals in that field, and “experiments”, too, suggests a formal process with a testable hypothesis, method and conclusion. Perhaps faith healing? Activities predicated on a belief that by simply plugging generative AI in, it will find good solutions by itself.
Worse, this is being rationalized by pulling an out-of-the-box feature of a chosen platform and referring to it as a generative AI “use case”. Summarization seems to be the most popular one right now, but who is summarizing what, for whom, for what purpose and to what benefit? Is this a common requirement? How often does it occur, and in which existing processes is the current lack of automatic summarization an issue? How many instances of this process operate daily or weekly, and in which locations?
Intelligent progress vs an imagined future
It’s become clear in the last year that generative AI will become a foundational element within big business application platforms within a pretty short timeframe, and as such will become a common set of functionalities available to their customers within 18–24 months from now. Not an optional bolt-on as it is in 2024, but fundamental to the deep structure of the next generation of these products. As we pointed out last year, this has financial implications over time, but from a Work Intelligence perspective, where discovering, understanding and augmenting processes is key to building healthy organizations, the implications are just as wide.
In the short term, for organizations that are keen to reap the purported benefits of generative AI but sensible enough not to trust the faith healing approach, understanding the range of nodal points within their processes – the tasks that people and machines perform where decisions are made – is critical to finding points of leverage. As I pondered a while back, the question we need to consistently ask is “is it useful?”, and Work Intelligence provides the answer in a quantifiable form which can then be operationalized to ensure it continues to be true over time.
We’ve written in the past about the need to use Work Intelligence as a way of testing hunches: taking ideas from the workforce and using technology to find out whether the existing “desire paths” (or “Trampelpfade”, if you prefer the German alternative) through processes favored by workers are actually preferable (and if not, why they are taken).
There’s a danger that, with generative AI as it stands, the specific desires of a top-down selection of management define organizational usefulness in a less than scientific way. Indeed, summarization is a good example of this at work, as it tends toward the “I have a lot of reports in my inbox and I’d like a summary of what’s important” almost-use-case, which, while legitimate, is of relevance only to a specific tier of workers. That’s not to say that it isn’t useful, but it’s also not new: it has been technologically deliverable and commercially available in various forms for close to two decades without finding a repeatable point of value.
There’s a point at which devotion to the faith-healing-based introduction of generative AI becomes notionally attached to it being the spear tip of an AGI, when it is absolutely nothing of the sort, however hard some really want it to be.
A solid path to follow
The positive in all this is that there’s been some real investment in Work Intelligence, specifically in the use of task and process mining technology to help uncover suitable processes for automation. All the heat generated around RPA (part of “task execution” in our Work Intelligence world) created a lot of pent-up desire to use the technology, without a good pipeline of operational candidates to execute upon.
The result was RPA vendors building and/or acquiring task and process mining technology to provide a good on-ramp for suitable automation candidates. In the process, what was also being developed was the right basic methodology for determining whether generative AI is a good fit too. Those questions above about summarization? A decent process analysis would provide the answers and give you much of the data needed to determine value too. As ever, the answers lie within the exhaust fumes already being generated by the work your organization is completing, should you choose to sample and analyze.
Work Intelligence as a practice is designed to provide the flexibility to adapt to whatever change is proposed to an organization. It might be that generative AI provides the pressure to see the practice become perfect.